Foreign Literature: Original Text and Translation
What Makes a Salary Seem Reasonable? [Foreign Literature Translation]
I. Original Text

What Makes a Salary Seem Reasonable?
Highhouse Scott, Brooks-Laber Margaret E.

Although considerable research attention has been directed at understanding perceptions of salary fairness, very little attention has been given to how salary expectations are formed or how trivial elements of the job-search context may influence these expectations. Two experiments demonstrated how the simple manipulation of response options for a multiple-choice item may influence subsequent salary expectations and salary satisfaction. Results are discussed in light of Parducci's (1995) contextual theory.

It has been repeatedly shown that the way in which a question is asked can influence perceptions of what is normal. For example, Harris found that people who were asked 'How short was the basketball player?' estimated lower heights than people who were asked 'How tall was the basketball player?' Similarly, Loftus found that people who were asked 'Do you get headaches frequently?' reported more headaches than people asked 'Do you get headaches occasionally?' More recently, Norbert Schwarz and his colleagues identified numerous examples of how the wording of survey items can strongly impact self-reports. For example, Schwarz reported research showing that people claimed higher life success when the numeric values of a life-success item ranged from -5 to +5 than when they ranged from 0 to 10. He described another study showing that psychosomatic patients reported symptom frequencies of more than twice a month when the item's response scale ranged from 'twice a month or less' to 'several times per day', but did not do so when the scale ranged from 'never' to 'more than twice a month'. Schwarz suggested that respondents to surveys assume that the values in the middle range of a scale reflect the 'typical' value in the 'real world', whereas the extremes of the scale correspond to the extremes of the distribution. More provocative is the finding that, in addition to affecting respondents' behavioural reports on surveys, these simple context effects may also affect subsequent judgments. For example, patients in Schwarz and Scheuring's study reported higher health satisfaction when the response scale suggested that their symptom frequency was below average. Similar to the Schwarz and Scheuring study, our research examined the effects of response scales on subsequent judgments. However, the response-scale paradigm was used to examine broader theoretical issues regarding the impact of the job-seeking context on expectations about pay. Despite the importance of starting salary in the job-choice process, very little research has focused on factors that influence job seekers' salary expectations. A review of research that has focused on incumbent satisfaction with organizational pay noted that the existing meagre research on job-seeker expectations for starting salary is more fragmented than programmatic. The authors suggested that researchers draw from the vast literature on decision-making to understand how individual and situational factors influence salary perceptions. Our failure to find effects consistent with adaptation-level theory for our sample of job seekers was consistent with Parducci, Calfee, Marshall, and Davidson's (1960) failure to support adaptation-level theory predictions in the lab. Parducci and his colleagues found that, holding all else constant, variation in the mean of a distribution of numbers had no effect on student perceptions of the typical number. Similarly, Ordóñez et al.
(2000), in a study of distributive fairness perceptions, found that MBAs presented with two reference salaries (i.e., the salary of a peer paid higher and the salary of a peer paid lower) did not average these reference points in the manner predicted by adaptation-level theory. Ordóñez and her colleagues recommended that future research examine what happens when people are presented with more than two reference salaries. Parducci's (1995) contextual theory generally proposes that attribute judgments reflect a compromise between a range principle and a frequency principle. Most relevant to the present concern is the frequency principle, which is a tendency for people to assign the same number of contextual representations to equal segments of the scale of judgment. For example, if fewer than one half of the salaries available in the immediate context (e.g., salaries in classified advertisements) are below a particular salary, that salary is perceived to be in the bottom half of the scale of judgment. As another simple example, consider a summer job seeker whose three friends have accepted jobs with hourly rates of $5, $10, and $11. An offer of $8.50 may be perceived to be near the bottom of salaries available in the market, because two of the three available salaries are above that offer, even though it is objectively above the midpoint of the salaries of his friends. A related phenomenon, called the alternative-outcomes effect (Windschitl & Wells, 1998), has been observed for people judging the likelihood of winning a raffle. In one study, participants were presented with two different raffles involving 10 tickets. In the first situation, they were told 'You hold 3 tickets and seven other people each hold 1.' In the second situation, they were told 'You hold 3 tickets and one other person holds 7.' People felt much more certain of winning in the situation where they held more tickets than any individual competitor (3-1-1-1-1-1-1-1) than when they held fewer tickets (3-7), despite the fact that the probability of winning either raffle is the same. In both this example and the previous hypothetical salary example, people are influenced by the context in which information is presented, ignoring the absolute value of the current situation. Contextual theory posits that when events that elicit hedonic judgments are concentrated at the upper endpoints of their contexts, they elicit greater happiness, regardless of the absolute levels of the events. This means that an important factor influencing satisfaction with any particular outcome is the placement of that outcome relative to other possible outcomes, or the proportion of contextual representations below that outcome. Other researchers have also suggested that reference points are not combined into a single comparison point (Kahneman, 1992) and that satisfaction is determined instead by the relative frequencies of positive and negative events (Diener, Sandvik, & Pavot, 1990). When applied to starting-salary expectations, these models seem to suggest that the frequency of salary options above and below a target salary, not the lower and upper bounds of the salary distribution, will influence starting-salary expectations. The experiments were designed to examine this proposition within the context of a simple multiple-choice item on a career-expectations survey.

Participants were business students (N = 204) enrolled at a medium-sized public university in the Midwestern United States.
Participation occurred during class time in seven marketing classes, ranging in size from 21 to 37 students. The majority (i.e., 90%) of the students were juniors or seniors, and male (52%). The average participant was 21 years of age. A 'career attitudes survey' was designed that contained seven typical items inquiring about students' plans after graduation, along with demographic questions. Items were in multiple-choice and open-ended formats, and addressed issues such as how many jobs the students planned to apply for, what methods they planned to use to find jobs, and the nature of their expected first job. Embedded within the survey was the starting-salary item that was the focus of the endpoint-level and option-frequency manipulations. The item read 'What do you expect your starting salary to be?' Participants received one of four response scales differing on the two factors. Table 1 presents the response options by endpoints of the range (low = $15,000-50,000; high = $30,000-65,000) and frequency of multiple-choice options above a target salary (low frequency; high frequency). The overall range roughly corresponded with the range of annual starting salaries of recent graduates from the business school (i.e., $21,000-65,000). Note that the manipulation of range endpoints (see Fig. 1) is distinct from a manipulation of range width, which has appeared in earlier research by Rynes et al. (1983) and Highhouse et al. (1999). The width of the salary range (i.e., $35,000) remained constant across experimental conditions in our study. The option-frequency manipulation was designed using $40,000 as the target salary (low frequency = 1 response option above $40,000; high frequency = 4 response options above $40,000). The target salary is the midpoint of the entire salary range (i.e., half the distance between the lowest salary in the low endpoint-level condition and the highest salary in the high endpoint-level condition). Each participant received one endpoint level and one frequency condition in a 2 x 2 between-subjects factorial design.

Although previous research has shown that the social environment can have an impact on perceptions of a fair starting salary, far fewer studies have investigated the impact of contextual features of the decision-making environment that may influence salary expectations. The research that does exist has focused on the width of the range of salaries available in the market. Our research builds on this work by showing that factors other than range width may influence salary expectations. Drawing from Parducci's contextual theory, we expected that, holding the salary range constant, the frequency of salary options above and below a target salary would influence salary expectations independently of the level of the endpoints of salaries in the market. Our first experiment, using the manipulated response-option paradigm, showed that the frequency of response options above a target salary in the response categories for an item on a typical career survey influenced later reports of expected starting salary for a group of business majors. Contrary to Helson's adaptation-level theory, the salary endpoint level had no effect on expectations for this group. Thus, consistent with Parducci's proposition, our findings showed that one must consider not only the level of the endpoints of salaries in the immediate context but also the perceived frequency of salaries.
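To make the frequency principle concrete, here is a minimal Python sketch (our illustration, not the authors' materials): it scores an offer by the proportion of context salaries falling below it, reproducing the summer-job example from the text.

```python
def frequency_rank(offer, context):
    """Judged position of an offer on a 0-1 scale, following Parducci's
    frequency principle: the proportion of contextual salaries below it."""
    below = sum(1 for s in context if s < offer)
    return below / len(context)

# Summer-job example from the text: three friends earn $5, $10, and $11/hour.
context = [5.0, 10.0, 11.0]
offer = 8.50

midpoint = (min(context) + max(context)) / 2   # objective midpoint: $8.00
print(frequency_rank(offer, context))          # 0.33 -> bottom third of the context
print(offer > midpoint)                        # True -> objectively above the midpoint
```

The sketch reproduces the paradox in the text: the $8.50 offer sits above the objective midpoint of the friends' salaries, yet only one of the three context salaries falls below it, so the frequency principle places it near the bottom of the scale of judgment.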
The second experiment showed that these effects can extend beyond salary expectations to influence satisfaction with job offers. The frequency of response options above the midpoint salary in a multiple-choice item had a main effect on salary satisfaction and job attractiveness for psychology students presented 20 min later with a hypothetical job advertisement. These research results suggest that salary expectations, at least for naive job seekers, can be influenced by simple features of the contextual environment. Unfortunately, longitudinal investigations of these highly dynamic expectations are lacking; future research is needed that employs more moment-to-moment assessments (see Stone, Shiffman, & DeVries, 1999) of job seekers' salary expectations. We do suspect, however, that our results are not limited to the simple numerical anchors set up in our experiments. Considerable research has shown that simple numerical anchors can strongly influence judgments of experts as well as novices (e.g., Northcraft & Neale, 1987). Similarly, studies using such varied experts as agricultural judges (Gaeth & Shanteau, 1984), parole officers (Carroll & Payne, 1976), and court judges (Ebbesen & Konecni, 1975) have concluded that the experience of these judges does not make them less susceptible to simple context effects. Barber and Bretz (2000) noted that understanding how different contexts can evoke differences in how a given salary offer will be evaluated is important for organizations, 'as organizations cannot predict employee reactions to pay practices without knowledge of the standards against which those practices will be evaluated'. We believe that, in addition to the importance of this knowledge for organizations, such knowledge is important for job seekers. Job seekers need to be aware that their salary expectations can be inadvertently affected by their exposure to salaries that may or may not be meaningful to their situation. People are constantly faced with skewed distributions of salaries, because they tend to hear more about fellow job seekers who were paid high salaries than about fellow job seekers who were paid the industry average. This creates a cognitive context in which offered salaries are likely to be perceived as being in the bottom half of the scale of judgment, even when they are objectively near the middle of the distribution of salaries. This could be positive if it leads employees to hold high expectations of pay, as these expectations may be associated with higher negotiated salaries. Generally, though, it is important for job seekers to realize when they are being affected by context. Job seekers need to be aware of the danger of making inferences from small samples, as small samples of salaries may not represent the salary distribution in the population of relevant jobs. Demonstrating context dependence may be as simple as showing people how a multiple-choice option on a typical survey can influence their standards for appropriate pay. Future research might consider whether such basic training techniques can inoculate job seekers against simple context effects.

Source: Frequency context effects on starting-salary expectations. Journal of Occupational & Organizational Psychology, Volume 76, 2003(1): p. 69.

II. Translation

What Makes a Salary Seem Reasonable?

Although considerable research attention has been directed at understanding perceptions of salary fairness, very little attention has been given to how salary expectations are formed, or to how trivial elements of the job-search context may influence these expectations.
Graduation Project: Foreign Literature Original and Translation
Beijing Union University Graduation Design (Thesis) Assignment
Title: Design and Simulation of OFDM Modulation and Demodulation Technology
Major: Communication Engineering
Supervisor: Zhang Xuefen
College: College of Information
Student ID: 2011080331132
Class: 1101B
Name: Xu Jiaming

I. Original Text

Evolution Towards 5G Multi-tier Cellular Wireless Networks: An Interference Management Perspective
Ekram Hossain, Mehdi Rasti, Hina Tabassum, and Amr Abdelnasser

Abstract—The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, e.g., higher data rates, excellent end-to-end performance and user-coverage in hot-spots and crowded areas with lower latency, energy consumption and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum- and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g., power control, cell association) in these networks with shared spectrum access (i.e., when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context, a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems.

Index Terms—5G cellular wireless, multi-tier networks, interference management, cell association, power control.

I. INTRODUCTION

To satisfy the ever-increasing demand for mobile broadband communications, the IMT-Advanced (IMT-A) standards were ratified by the International Telecommunications Union (ITU) in November 2010, and fourth generation (4G) wireless communication systems are currently being deployed worldwide. The standardization for LTE Rel-12, also known as LTE-B, is also ongoing and expected to be finalized in 2014. Nonetheless, existing wireless systems will not be able to deal with the thousand-fold increase in total mobile broadband data [1] contributed by new applications and services such as pervasive 3D multimedia, HDTV, VoIP, gaming, e-Health, and Car2x communication. In this context, fifth generation (5G) wireless communication technologies are expected to attain 1000 times higher mobile data volume per unit area, 10-100 times higher numbers of connected devices and user data rates, 10 times longer battery life, and 5 times reduced latency [2]. While for 4G networks the single-user average data rate is expected to be 1 Gbps, it is postulated that a cell data rate of the order of 10 Gbps will be a key attribute of 5G networks. 5G wireless networks are expected to be a mixture of network tiers of different sizes, transmit powers, backhaul connections, and different radio access technologies (RATs) that are accessed by unprecedented numbers of smart and heterogeneous wireless devices.
This architectural enhancement, along with advanced physical communications technology such as high-order spatial multiplexing multiple-input multiple-output (MIMO) communications, will provide higher aggregate capacity for more simultaneous users, or higher spectral efficiency, when compared to 4G networks. Radio resource and interference management will be a key research challenge in multi-tier and heterogeneous 5G cellular networks. The traditional methods for radio resource and interference management (e.g., channel allocation, power control, cell association or load balancing) in single-tier networks (even some of those developed for two-tier networks) may not be efficient in this environment, and a new look into the interference management problem will be required. First, the article outlines the visions and requirements of 5G cellular wireless systems. Major research challenges are then highlighted from the perspective of interference management when the different network tiers share the same radio spectrum. A comparative analysis of the existing approaches for distributed cell association and power control (CAPC) is then provided, followed by a discussion on their limitations for 5G multi-tier cellular networks. Finally, a number of suggestions are provided to modify the existing CAPC schemes to overcome these limitations.

II. VISIONS AND REQUIREMENTS FOR 5G MULTI-TIER CELLULAR NETWORKS

5G mobile and wireless communication systems will require a mix of new system concepts to boost the spectral and energy efficiency. The visions and requirements for 5G wireless systems are outlined below.
·Data rate and latency: For dense urban areas, 5G networks are envisioned to enable an experienced data rate of 300 Mbps and 60 Mbps in downlink and uplink, respectively, in 95% of locations and time [2]. The end-to-end latencies are expected to be of the order of 2 to 5 milliseconds. The detailed requirements for different scenarios are listed in [2].
·Machine-type communication (MTC) devices: The number of traditional human-centric wireless devices with Internet connectivity (e.g., smart phones, super-phones, tablets) may be outnumbered by MTC devices, which can be used in vehicles, home appliances, surveillance devices, and sensors.
·Millimeter-wave communication: To satisfy the exponential increase in traffic and the addition of different devices and services, additional spectrum beyond what was previously allocated to the 4G standard is sought. The use of millimeter-wave frequency bands (e.g., the 28 GHz and 38 GHz bands) is a potential candidate to overcome the problem of scarce spectrum resources, since it allows transmission at wider bandwidths than the conventional 20 MHz channels of 4G systems.
·Multiple RATs: 5G is not about replacing the existing technologies, but about enhancing and supporting them with new technologies [1]. In 5G systems, the existing RATs, including GSM (Global System for Mobile Communications), HSPA+ (Evolved High-Speed Packet Access), and LTE, will continue to evolve to provide a superior system performance. They will also be accompanied by some new technologies (e.g., beyond LTE-Advanced).
·Base station (BS) densification: BS densification is an effective methodology to meet the requirements of 5G wireless networks. Specifically, in 5G networks, there will be deployments of a large number of low-power nodes, relays, and device-to-device (D2D) communication links with much higher density than today's macrocell networks.
Fig. 1 shows such a multi-tier network with a macrocell overlaid by relays, picocells, femtocells, and D2D links. The adoption of multiple tiers in the cellular network architecture will result in better performance in terms of capacity, coverage, spectral efficiency, and total power consumption, provided that the inter-tier and intra-tier interferences are well managed.
·Prioritized spectrum access: The notions of both traffic-based and tier-based priorities will exist in 5G networks. Traffic-based priority arises from the different requirements of the users (e.g., reliability and latency requirements, energy constraints), whereas tier-based priority is for users belonging to different network tiers. For example, with shared spectrum access among macrocells and femtocells in a two-tier network, femtocells create "dead zones" around them in the downlink for macro users. Protection should, thus, be guaranteed for the macro users. Consequently, the macro and femto users play the roles of high-priority users (HPUEs) and low-priority users (LPUEs), respectively. In the uplink direction, the macrocell users at the cell edge typically transmit with high powers, which generates high uplink interference to nearby femtocells. Therefore, in this case, the user priorities should be reversed. Another example is a D2D transmission, where different devices may opportunistically access the spectrum to establish a communication link between them, provided that the interference introduced to the cellular users remains below a given threshold. In this case, the D2D users play the role of LPUEs, whereas the cellular users play the role of HPUEs.
·Network-assisted D2D communication: In LTE Rel-12 and beyond, the focus will be on network-controlled D2D communications, where the macrocell BS performs control signaling in terms of synchronization, beacon signal configuration, and providing identity and security management [3]. This feature will extend in 5G networks to allow nodes other than the macrocell BS to have the control. For example, if a D2D link is at the cell edge and the direct link between the D2D transmitter UE and the macrocell is in deep fade, then a relay node can be responsible for the control signaling of the D2D link (i.e., relay-aided D2D communication).
·Energy harvesting for energy-efficient communication: One of the main challenges in 5G wireless networks is to improve the energy efficiency of battery-constrained wireless devices. To prolong the battery lifetime as well as to improve the energy efficiency, an appealing solution is to harvest energy from environmental energy sources (e.g., solar and wind energy). Also, energy can be harvested from ambient radio signals (i.e., RF energy harvesting) with reasonable efficiency over small distances. The harvested energy could be used for D2D communication or communication within a small cell. In this context, simultaneous wireless information and power transfer (SWIPT) is a promising technology for 5G wireless networks. However, practical circuits for harvesting energy are not yet available, since the conventional receiver architecture is designed for information transfer only and, thus, may not be optimal for SWIPT. This is due to the fact that information and power transfer operate with different power sensitivities at the receiver (e.g., -10 dBm and -60 dBm for energy and information receivers, respectively) [4].
Also, due to the potentially low efficiency of energy harvesting from ambient radio signals, a combination of different energy harvesting technologies may be required for macrocell communication.

III. INTERFERENCE MANAGEMENT CHALLENGES IN 5G MULTI-TIER NETWORKS

The key challenges for interference management in 5G multi-tier networks will arise for the following reasons, which affect the interference dynamics in the uplink and downlink of the network: (i) heterogeneity and dense deployment of wireless devices, (ii) coverage and traffic load imbalance due to varying transmit powers of different BSs in the downlink, (iii) public or private access restrictions in different tiers that lead to diverse interference levels, and (iv) the priorities in accessing channels of different frequencies and resource allocation strategies. Moreover, the introduction of carrier aggregation, cooperation among BSs (e.g., by using coordinated multi-point transmission (CoMP)), as well as direct communication among users (e.g., D2D communication) may further complicate the dynamics of the interference. The above factors translate into the following key challenges.
·Designing optimized cell association and power control (CAPC) methods for multi-tier networks: Optimizing the cell associations and transmit powers of users in the uplink, or the transmit powers of BSs in the downlink, are classical techniques to simultaneously enhance the system performance in various aspects such as interference mitigation, throughput maximization, and reduction in power consumption. Typically, the former is needed to maximize spectral efficiency, whereas the latter is required to minimize the power (and hence minimize the interference to other links) while keeping the desired link quality.
(Fig. 1. A multi-tier network composed of macrocells, picocells, femtocells, relays, and D2D links. Arrows indicate wireless links, whereas the dashed lines denote the backhaul connections.)
Since it is not efficient to connect to a congested BS despite its high achieved signal-to-interference ratio (SIR), cell association should also consider the status of each BS (load) and the channel state of each UE. The increase in the number of available BSs, along with multi-point transmissions and carrier aggregation, provides multiple degrees of freedom for resource allocation and cell-selection strategies. For power control, the priority of different tiers also needs to be maintained by incorporating the quality constraints of HPUEs. Unlike the downlink, the transmission power in the uplink depends on the user's battery power, irrespective of the type of BS with which users are connected. The battery power does not vary significantly from user to user; therefore, the problems of coverage and traffic load imbalance may not exist in the uplink. This leads to considerable asymmetries between the uplink and downlink user association policies. Consequently, the optimal solutions for downlink CAPC problems may not be optimal for the uplink. It is therefore necessary to develop joint optimization frameworks that can provide near-optimal, if not optimal, solutions for both uplink and downlink.
Moreover, to deal with this issue of asymmetry, separate uplink and downlink optimal solutions are also useful insofar as mobile users can connect to two different BSs for uplink and downlink transmissions, which is expected to be the case in 5G multi-tier cellular networks [3].
·Designing efficient methods to support simultaneous association to multiple BSs: Compared to existing CAPC schemes, in which each user can associate to a single BS, simultaneous connectivity to several BSs could be possible in a 5G multi-tier network. This would enhance the system throughput and reduce the outage ratio by effectively utilizing the available resources, particularly for cell-edge users. Thus the existing CAPC schemes should be extended to efficiently support simultaneous association of a user to multiple BSs and to determine under which conditions a given UE is associated to which BSs in the uplink and/or downlink.
·Designing efficient methods for cooperation and coordination among multiple tiers: Cooperation and coordination among different tiers will be a key requirement to mitigate interference in 5G networks. Cooperation between the macrocell and small cells was proposed for LTE Rel-12 in the context of the soft cell, where the UEs are allowed to have dual connectivity by simultaneously connecting to the macrocell and the small cell for uplink and downlink communications, or vice versa [3]. As has been mentioned before in the context of the asymmetry of transmission power in uplink and downlink, a UE may experience the highest downlink power transmission from the macrocell, whereas the highest uplink path gain may be towards a nearby small cell. In this case, the UE can associate to the macrocell in the downlink and to the small cell in the uplink. CoMP schemes based on cooperation among BSs in different tiers (e.g., cooperation between macrocells and small cells) can be developed to mitigate interference in the network. Such schemes need to be adaptive and consider user locations as well as channel conditions to maximize the spectral and energy efficiency of the network. This cooperation, however, requires tight integration of low-power nodes into the network through the use of reliable, fast, and low-latency backhaul connections, which will be a major technical issue for upcoming multi-tier 5G networks.
In the remainder of this article, we will focus on the review of existing power control and cell association strategies to demonstrate their limitations for interference management in 5G multi-tier prioritized cellular networks (i.e., where users in different tiers have different priorities depending on the location, application requirements, and so on). Design guidelines will then be provided to overcome these limitations. Note that issues such as channel scheduling in the frequency domain, time-domain interference coordination techniques (e.g., based on almost blank subframes), coordinated multi-point transmission, and spatial-domain techniques (e.g., based on smart antenna techniques) are not considered in this article.

IV. DISTRIBUTED CELL ASSOCIATION AND POWER CONTROL SCHEMES: CURRENT STATE OF THE ART

A. Distributed Cell Association Schemes

The state-of-the-art cell association schemes that are currently under investigation for multi-tier cellular networks are reviewed and their limitations are explained below.
·Reference Signal Received Power (RSRP)-based scheme [5]: A user is associated with the BS whose signal is received with the largest average strength.
A variant of RSRP, i.e., Reference Signal Received Quality (RSRQ), is also used for cell selection in LTE single-tier networks; it is similar to signal-to-interference ratio (SIR)-based cell selection, where a user selects the BS that gives the highest SIR. In single-tier networks with uniform traffic, such a criterion may maximize the network throughput. However, due to the varying transmit powers of different BSs in the downlink of multi-tier networks, such cell association policies can create a huge traffic load imbalance. This phenomenon leads to overloading of high-power tiers while leaving low-power tiers underutilized.
·Bias-based Cell Range Expansion (CRE) [6]: The idea of CRE has emerged as a remedy to the problem of load imbalance in the downlink. It aims to increase the downlink coverage footprint of low-power BSs by adding a positive bias to their signal strengths (i.e., RSRP or RSRQ). Such BSs are referred to as biased BSs. This biasing allows more users to associate with low-power or biased BSs and thereby achieve a better cell load balancing. Nevertheless, such off-loaded users may experience an unfavorable channel from the biased BSs and strong interference from the unbiased high-power BSs. The trade-off between cell load balancing and system throughput therefore strictly depends on the selected bias values, which need to be optimized in order to maximize the system utility. In this context, a baseline approach in LTE-Advanced is to "orthogonalize" the transmissions of the biased and unbiased BSs in the time/frequency domain such that an interference-free zone is created.
·Association based on Almost Blank Sub-frame (ABS) ratio [7]: The ABS technique uses time-domain orthogonalization, in which specific sub-frames are left blank by the unbiased BS and off-loaded users are scheduled within these sub-frames to avoid inter-tier interference. This improves the overall throughput of the off-loaded users by sacrificing the time sub-frames and throughput of the unbiased BS. Larger bias values result in a higher degree of offloading and thus require more blank sub-frames to protect the off-loaded users. Given a specific number of ABSs, or the ratio of blank to total sub-frames (i.e., the ABS ratio) that ensures the minimum throughput of the unbiased BSs, this criterion allows a user to select the cell with the maximum ABS ratio, and the user may even associate with the unbiased BS if the ABS ratio decreases significantly.
A qualitative comparison among these cell association schemes is given in Table I. The specific key terms used in Table I are defined as follows: channel-aware schemes depend on the knowledge of the instantaneous channel and transmit power at the receiver; interference-aware schemes depend on the knowledge of the instantaneous interference at the receiver; load-aware schemes depend on the traffic load information (e.g., number of users); resource-aware schemes require the resource allocation information (i.e., the chance of getting a channel or the proportion of resources available in a cell); priority-aware schemes require the information regarding the priority of different tiers and allow a protection of HPUEs. All of the above-mentioned schemes are independent, distributed, and can be incorporated with any type of power control scheme.
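As an illustration of the RSRP and CRE rules just described, the following Python sketch (a toy model with invented transmit powers and a generic log-distance path loss, not the 3GPP procedure) shows how a positive bias shifts a user from a high-power macrocell to a picocell:

```python
import math

def rsrp_dbm(tx_power_dbm, distance_m, path_loss_exp=3.5):
    """Toy received power: transmit power minus a log-distance path loss."""
    return tx_power_dbm - 10 * path_loss_exp * math.log10(max(distance_m, 1.0))

def associate(user_pos, base_stations, bias_db):
    """Biased-RSRP (CRE) association: pick the BS maximizing RSRP + its tier bias."""
    def score(bs):
        d = math.dist(user_pos, bs["pos"])
        return rsrp_dbm(bs["tx_dbm"], d) + bias_db.get(bs["tier"], 0.0)
    return max(base_stations, key=score)

# Invented example: one macrocell and one picocell on a line, user in between.
bss = [
    {"name": "macro", "tier": "macro", "pos": (0.0, 0.0),   "tx_dbm": 46.0},
    {"name": "pico",  "tier": "pico",  "pos": (180.0, 0.0), "tx_dbm": 24.0},
]
user = (120.0, 0.0)

print(associate(user, bss, bias_db={})["name"])              # plain RSRP -> "macro"
print(associate(user, bss, bias_db={"pico": 12.0})["name"])  # 12 dB CRE bias -> "pico"
```

With no bias, the macrocell's high transmit power wins despite the user being closer to the picocell; a 12 dB bias on the pico tier expands the picocell's footprint and off-loads the user, exactly the trade-off the bullet above describes.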
Although simple and tractable, the standard cell association schemes, i.e., RSRP, RSRQ, and CRE, are unable to guarantee optimum performance in multi-tier networks unless critical parameters, such as bias values, the transmit powers of the users in the uplink and of the BSs in the downlink, resource partitioning, etc., are optimized.
(Table I: Qualitative comparison of existing cell association schemes for multi-tier networks.)

B. Distributed Power Control Schemes

From a user's point of view, the objective of power control is to support a user with its minimum acceptable throughput, whereas from a system's point of view it is to maximize the aggregate throughput. In the former case, it is required to compensate for the near-far effect by allocating higher power levels to users with poor channels, as compared to UEs with good channels. In the latter case, high power levels are allocated to users with the best channels and very low (even zero) power levels are allocated to the others. The aggregate transmit power, the outage ratio, and the aggregate throughput (i.e., the sum of the achievable rates of the UEs) are the most important measures for comparing the performance of different power control schemes. The outage ratio of a particular tier can be expressed as the ratio of the number of UEs in that tier not supported with their minimum target SIRs to the total number of UEs in that tier. Numerous power control schemes have been proposed in the literature for single-tier cellular wireless networks. According to the corresponding objective functions and assumptions, the schemes can be classified into the following four types.
·Target-SIR-tracking power control (TPC) [8]: In the TPC, each UE tracks its own predefined fixed target-SIR. The TPC enables the UEs to achieve their fixed target-SIRs at minimal aggregate transmit power, assuming that the target-SIRs are feasible. However, when the system is infeasible, all non-supported UEs (those that cannot obtain their target-SIRs) transmit at their maximum power, which causes unnecessary power consumption and interference to other users and, therefore, increases the number of non-supported UEs.
·TPC with gradual removal (TPC-GR) [9], [10], [11]: To decrease the outage ratio of the TPC in an infeasible system, a number of TPC-GR algorithms were proposed in which non-supported users reduce their transmit power [10] or are gradually removed [9], [11].
·Opportunistic power control (OPC) [12]: From the system's point of view, OPC allocates high power levels to users with good channels (experiencing high path-gains and low interference levels) and very low power to users with poor channels. In this algorithm, a small difference in path-gains between two users may lead to a large difference in their actual throughputs [12]. OPC improves the system performance at the cost of reduced fairness among users.
·Dynamic-SIR-tracking power control (DTPC) [13]: When the target-SIR requirements of users are feasible, TPC causes users to exactly hit their fixed target-SIRs even if additional resources are still available that could otherwise be used to achieve higher SIRs (and thus better throughputs). Besides, the fixed-target-SIR assignment is suitable only for voice service, for which reaching a SIR value higher than the given target does not significantly affect the service quality. In contrast, for data services, a higher SIR results in a better throughput, which is desirable.
The DTPC algorithm was proposed in [13] to address the problem of system throughput maximization subject to a given feasible lower bound on the achieved SIRs of all users in cellular networks. In DTPC, each user dynamically sets its target-SIR by using TPC and OPC in a selective manner. It was shown that, when the minimum acceptable target-SIRs are feasible, the actual SIRs received by some users can be dynamically increased (to a value higher than their minimum acceptable target-SIRs) in a distributed manner, so long as the required resources are available and the system remains feasible (meaning that reaching the minimum target-SIRs of the remaining users is guaranteed). This enhances the system throughput (at the cost of higher power consumption) as compared to TPC.
The aforementioned state-of-the-art distributed power control schemes for satisfying various objectives in single-tier wireless cellular networks are unable to address the interference management problem in prioritized 5G multi-tier networks. This is due to the fact that they do not guarantee that the total interference caused by the LPUEs to the HPUEs remains within tolerable limits, which can lead to SIR outage of some HPUEs. Thus there is a need to modify the existing schemes such that the LPUEs track their objectives while limiting their transmit power to maintain a given interference threshold at the HPUEs. A qualitative comparison among various state-of-the-art power control problems with different objectives and constraints, and their corresponding existing distributed solutions, is shown in Table II. This table also shows how these schemes can be modified and generalized for designing CAPC schemes for prioritized 5G multi-tier networks.

C. Joint Cell Association and Power Control Schemes

Very few works in the literature have considered the problem of distributed CAPC jointly (e.g., [14]) with guaranteed convergence. For single-tier networks, a distributed framework for the uplink was developed in [14], which performs cell selection based on the effective interference (the ratio of instantaneous interference to channel gain) at the BSs and minimizes the aggregate uplink transmit power while attaining the users' desired SIR targets. Following this approach, a unified distributed algorithm was designed in [15] for two-tier networks. The cell association is based on the effective-interference metric and is integrated with a hybrid power control (HPC) scheme, which is a combination of the TPC and OPC power control algorithms. Although the above frameworks are distributed and optimal/suboptimal with guaranteed convergence in conventional networks, they may not be directly applicable to 5G multi-tier networks. The interference dynamics in multi-tier networks depend significantly on the channel access protocols (or scheduling), QoS requirements, and priorities at different tiers. Thus, the existing CAPC optimization problems should be modified to include various types of cell selection methods (some examples are provided in Table I) and power control methods with different objectives and interference constraints (e.g., interference constraints for macrocell UEs, picocell UEs, or D2D receiver UEs). A qualitative comparison among the existing CAPC schemes, along with the open research areas, is given in Table II. A discussion on how these open problems can be addressed is provided in the next section.
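To make the reviewed power control schemes concrete, here is a minimal Python sketch of the classical target-SIR-tracking (TPC) iteration described above, in which each user scales its power by the ratio of its target SIR to its current SIR, capped at a maximum power. The two-user link gains, targets, and noise power are invented for illustration:

```python
def sir(i, p, G, noise):
    """SIR of user i: own received power over interference plus noise."""
    interference = sum(G[i][j] * p[j] for j in range(len(p)) if j != i) + noise
    return G[i][i] * p[i] / interference

def tpc_step(p, targets, G, noise, p_max):
    """One synchronous TPC iteration: scale power by target-SIR / current SIR,
    capped at the maximum transmit power."""
    return [min(p_max, targets[i] / sir(i, p, G, noise) * p[i])
            for i in range(len(p))]

# Two-user toy example (link gains, targets, and noise are invented numbers).
G = [[1.0, 0.1],
     [0.2, 1.0]]        # G[i][j]: gain from transmitter j to receiver i
targets = [4.0, 4.0]    # target SIRs (linear scale)
p, noise, p_max = [0.1, 0.1], 0.01, 10.0

for _ in range(50):
    p = tpc_step(p, targets, G, noise, p_max)

print([round(x, 4) for x in p])                          # converged powers
print([round(sir(i, p, G, noise), 2) for i in range(2)]) # both hit SIR = 4.0
```

Because the chosen targets are feasible here, the iteration converges to the minimal-power fixed point where both users exactly meet their target SIRs, which is the behaviour the TPC bullet describes; with infeasible targets, the same update would drive non-supported users to p_max.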
V. DESIGN GUIDELINES FOR DISTRIBUTED CAPC SCHEMES IN 5G MULTI-TIER NETWORKS

Interference management in 5G networks requires efficient distributed CAPC schemes such that each user can possibly connect simultaneously to multiple BSs (which can be different for uplink and downlink), while achieving load balancing in different cells and guaranteeing interference protection for the HPUEs. In what follows, we provide a number of suggestions to modify the existing schemes.

A. Prioritized Power Control

To guarantee interference protection for HPUEs, a possible strategy is to modify the existing power control schemes listed in the first column of Table II such that the LPUEs limit their transmit power to keep the interference caused to the HPUEs below a predefined threshold, while tracking their own objectives. In other words, as long as the HPUEs are protected against the existence of LPUEs, the LPUEs could employ an existing distributed power control algorithm to satisfy a predefined goal. This offers some fruitful directions for future research and investigation, as stated in Table II. To address these open problems in a distributed manner, the existing schemes should be modified so that the LPUEs, in addition to setting their transmit power for tracking their objectives, limit their transmit power to keep their interference on the receivers of HPUEs below a given threshold. This could be implemented by sending a command from an HPUE to its nearby LPUEs (like the closed-loop power control command used to address the near-far problem) when the interference caused by the LPUEs to the HPUE exceeds a given threshold. We refer to this type of power control as prioritized power control. Note that the notion of priority, and thus the need for prioritized power control, exists implicitly in different scenarios of 5G networks, as briefly discussed in Section II. Along this line, some modified power control optimization problems are formulated for 5G multi-tier networks in the second column of Table II.
To compare the performance of existing distributed power control algorithms, let us consider a prioritized multi-tier cellular wireless network where a high-priority tier consisting of 3×3 macrocells, each of which covers an area of 1000 m × 1000 m, coexists with a low-priority tier consisting of n small cells per high-priority macrocell, each ...
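Picking up the prioritized power control idea described above, the following minimal sketch (our illustration, not an algorithm from the article) shows how an LPUE's TPC-derived power could be capped by an interference budget at a protected HPUE receiver:

```python
def prioritized_power(p_tpc, g_to_hpue, interference_budget):
    """Cap an LPUE's TPC-derived power so that the interference it creates at
    a protected HPUE receiver (cross gain g_to_hpue) stays within budget."""
    p_cap = interference_budget / g_to_hpue
    return min(p_tpc, p_cap)

# Invented example: TPC asks for 0.5 W, but a cross gain of 0.02 and a 5 mW
# interference budget at the HPUE receiver limit the LPUE to 0.25 W.
print(prioritized_power(p_tpc=0.5, g_to_hpue=0.02, interference_budget=0.005))
```

The cap plays the role of the closed-loop command mentioned in the text: whenever tracking the LPUE's own objective would exceed the HPUE's interference threshold, the power is clipped, and otherwise the underlying distributed algorithm runs unchanged.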
Graduation Thesis Foreign Literature Translation (Chinese and English)
Translation: Traffic Congestion and the Sustainable Development of Urban Transport Systems

Abstract: The rapid growth of urbanization and motorization generally promotes the development of urban transport systems, which should embody economic, environmental, and social sustainability; its result, however, is a relentless increase in traffic volume, leading to traffic congestion.
Road congestion pricing has been proposed many times as an economic measure to relieve urban traffic congestion, but it has not yet been widely used in practice, because some of the potential effects of road pricing remain unclear.
This paper first reviews the concept of a sustainable transport system, which should jointly satisfy the goals of economic development, environmental protection, and social justice. Then, based on the characteristics of a sustainable transport system, it examines how congestion pricing can promote economic growth, environmental protection, and social justice.
The results show that congestion pricing is a practical and effective way to promote the sustainable development of urban transport systems.
I. Introduction

Urban transport is a topic of urgent concern in large cities around the world.
With the rapid development of urbanization and motorization in China, traffic congestion has become an increasingly serious problem, causing large time delays, increasing energy consumption and air pollution, and reducing the reliability of the road network. In many cities, traffic congestion is seen as an obstacle to economic development. A variety of methods can be used to address congestion, including building new infrastructure, improving the maintenance and operation of existing infrastructure, and using existing infrastructure more efficiently through demand-management strategies, including pricing mechanisms, to reduce transport density. Congestion pricing was proposed long ago as an effective measure to relieve traffic congestion.
The principle and goal of congestion pricing are to relieve congestion by imposing an additional charge on the use of facilities during peak congested periods. By shifting some trips to off-peak periods, away from congested facilities, or to high-occupancy vehicles, or by suppressing some trips altogether, a congestion pricing scheme will, beyond saving time and lowering operating costs, improve air quality, reduce energy consumption, and improve transit productivity.
Such schemes have been successfully applied in many countries and places around the world.
Following the toll rings implemented in Singapore and Norway in the early 1970s and the mid-1980s, respectively, the City of London introduced area charging in February 2003; to this day it remains one of the best-known examples of a metropolitan area that has implemented congestion pricing.
However, congestion pricing has not been widely used in practice, for both theoretical and political reasons.
Some of the potential effects of road pricing are still unclear, and the sustainability of congestion pricing for urban development needs further study.
Foreign Literature: Original or Photocopy and Translation
1. Original Text: Cyclone Introduction

The Cyclone™ field-programmable gate array family is based on a 1.5-V, 0.13-μm, all-layer copper SRAM process, with densities up to 20,060 logic elements (LEs) and up to 288 Kbits of RAM. With features like phase-locked loops (PLLs) for clocking and a dedicated double data rate (DDR) interface to meet DDR SDRAM and fast cycle RAM (FCRAM) memory requirements, Cyclone devices are a cost-effective solution for data-path applications. Cyclone devices support various I/O standards, including LVDS at data rates up to 640 megabits per second (Mbps), and 66- and 33-MHz, 64- and 32-bit peripheral component interconnect (PCI), for interfacing with and supporting ASSP and ASIC devices. Altera also offers new low-cost serial configuration devices to configure Cyclone devices.

Features

The Cyclone device family offers the following features:
■ 2,910 to 20,060 LEs, see Table 1–1
■ Up to 294,912 RAM bits (36,864 bytes)
■ Supports configuration through low-cost serial configuration device
■ Support for LVTTL, LVCMOS, SSTL-2, and SSTL-3 I/O standards
■ Support for 66- and 33-MHz, 64- and 32-bit PCI standard
■ High-speed (640 Mbps) LVDS I/O support
■ Low-speed (311 Mbps) LVDS I/O support
■ 311-Mbps RSDS I/O support
■ Up to eight global clock lines with six clock resources available per logic array block (LAB) row
■ Support for external memory, including DDR SDRAM (133 MHz), FCRAM, and single data rate (SDR) SDRAM

Description

Cyclone devices contain a two-dimensional row- and column-based architecture to implement custom logic. Column and row interconnects of varying speeds provide signal interconnects between LABs and embedded memory blocks. The logic array consists of LABs, with 10 LEs in each LAB. An LE is a small unit of logic providing efficient implementation of user logic functions. LABs are grouped into rows and columns across the device. Cyclone devices range between 2,910 and 20,060 LEs. M4K RAM blocks are true dual-port memory blocks with 4K bits of memory plus parity (4,608 bits). These blocks provide dedicated true dual-port, simple dual-port, or single-port memory up to 36 bits wide at up to 250 MHz. These blocks are grouped into columns across the device in between certain LABs. Cyclone devices offer between 60 and 288 Kbits of embedded RAM. Each Cyclone device I/O pin is fed by an I/O element (IOE) located at the ends of LAB rows and columns around the periphery of the device. I/O pins support various single-ended and differential I/O standards, such as the 66- and 33-MHz, 64- and 32-bit PCI standard and the LVDS I/O standard at up to 640 Mbps. Each IOE contains a bidirectional I/O buffer and three registers for registering input, output, and output-enable signals. Dual-purpose DQS, DQ, and DM pins, along with delay chains (used to phase-align DDR signals), provide interface support for external memory devices such as DDR SDRAM and FCRAM devices at up to 133 MHz (266 Mbps). Cyclone devices provide a global clock network and up to two PLLs. The global clock network consists of eight global clock lines that drive throughout the entire device. The global clock network can provide clocks for all resources within the device, such as IOEs, LEs, and memory blocks. The global clock lines can also be used for control signals. Cyclone PLLs provide general-purpose clocking with clock multiplication and phase shifting, as well as external outputs for high-speed differential I/O support.

2. Translation: Introduction to the Cyclone Device Family

The Cyclone family of programmable devices has a 1.5-V core voltage and is built on a 0.13-μm all-layer copper process, integrating up to 20,060 logic elements and up to 288 Kbits of on-chip RAM.
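A quick arithmetic check of the memory figures quoted above (a back-of-the-envelope sketch using only the numbers in the text; the block count is inferred, not taken from the datasheet):

```python
# Each M4K block stores 4K (4,096) data bits plus 512 parity bits = 4,608 bits.
m4k_bits = 4096 + 512

# The largest Cyclone device quotes "up to 294,912 RAM bits (36,864 bytes)".
total_bits = 294_912

print(total_bits / m4k_bits)  # 64.0 -> consistent with 64 M4K blocks
print(total_bits / 8)         # 36864 bytes, matching the figure in parentheses
print(total_bits / 1024)      # 288.0 Kbits, the "up to 288 Kbits" headline number
```

All three quoted figures are mutually consistent: 64 blocks of 4,608 bits give exactly 294,912 bits, i.e., 36,864 bytes or 288 Kbits.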
Foreign Reference: Translation and Original Text
Contents

1 Introduction
2 Installation
   Installing an All-In-One NS2 Suite on Unix-Based Systems
   Installing an All-In-One NS2 Suite on Windows-Based Systems
3 Directories and Convention
   Directories
   Convention
4 Running NS2 Simulation
   NS2 Program Invocation
   Main NS2 Simulation Steps
5 A Simulation Example
6 Summary

In this chapter, an introduction to NS2 is provided. In particular, information on installing NS2 is given in Chapter 2. Chapter 3 introduces the directories and conventions of NS2. Chapter 4 describes the main steps in an NS2 simulation. A simple simulation example is given in Chapter 5. Finally, Chapter 6 gives a summary.

On installation: the idea of the component suite is to obtain the above pieces and install them individually. This option saves downloading time and a great deal of memory space. However, it can be troublesome for beginners, and is therefore recommended only for experienced users.

1 Introduction

The Network Simulator, Version 2 (commonly called NS2), is an event-driven simulation tool that has proved useful in studying the dynamic nature of communication networks.
Simulation of wired as well as wireless network functions and protocols (e.g., routing algorithms, TCP, UDP) can be done using NS2.
In general, NS2 provides users with a way of specifying such network protocols and simulating their corresponding behaviours.
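NS2 simulations themselves are written as OTcl scripts, but the "event-driven" idea described above can be illustrated in a few lines of Python (a language-neutral sketch, not NS2 code): events are queued with timestamps and fired in simulated-time order, with the clock jumping from event to event rather than ticking continuously.

```python
import heapq

class Scheduler:
    """Minimal event-driven scheduler: events fire in simulated-time order."""
    def __init__(self):
        self.now, self._queue, self._seq = 0.0, [], 0

    def at(self, time, action):
        heapq.heappush(self._queue, (time, self._seq, action))
        self._seq += 1                       # tie-breaker for equal times

    def run(self):
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action()

sim = Scheduler()
sim.at(1.0, lambda: print(f"{sim.now:.1f}s: source starts sending"))
sim.at(1.2, lambda: print(f"{sim.now:.1f}s: packet arrives at sink"))
sim.at(2.0, lambda: print(f"{sim.now:.1f}s: simulation ends"))
sim.run()
```

An NS2 script plays the same roles at a higher level: the OTcl commands schedule traffic start/stop times and protocol events on a simulator object, which then dispatches them in timestamp order exactly as the toy loop above does.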
Graduation Design (Thesis): Foreign Literature Original and Translation
I. Original Text

MCU

A microcontroller (or MCU) is a computer-on-a-chip. It is a type of microprocessor emphasizing self-sufficiency and cost-effectiveness, in contrast to a general-purpose microprocessor (the kind used in a PC). With the development of technology and of control systems in a wide range of applications, and as equipment develops toward miniaturization and intelligence, the single-chip microcontroller, with advantages such as small size, powerful functions, low cost, and flexibility of use, shows strong vitality. It generally has better anti-interference ability than comparable integrated circuits, adapts better to environmental temperature and humidity, and can operate stably under industrial conditions. Single-chip microcontrollers are widely used in a variety of instruments and meters, making instrumentation intelligent and improving measurement speed and accuracy while strengthening control functions. In short, with the advent of the information age, the inherent structural weaknesses of the traditional single-chip microcontroller have exposed many drawbacks: its speed, scale, and performance indicators increasingly fail to meet users' needs, and the development and upgrading of single-chip chipsets face new challenges.

The Description of the AT89S52

The AT89S52 is a low-power, high-performance CMOS 8-bit microcontroller with 8K bytes of In-System Programmable Flash memory. The device is manufactured using Atmel's high-density nonvolatile memory technology and is compatible with the industry-standard 80C51 instruction set and pinout. The on-chip Flash allows the program memory to be reprogrammed in-system or by a conventional nonvolatile memory programmer. By combining a versatile 8-bit CPU with In-System Programmable Flash on a monolithic chip, the Atmel AT89S52 is a powerful microcontroller which provides a highly flexible and cost-effective solution to many embedded control applications. The AT89S52 provides the following standard features: 8K bytes of Flash, 256 bytes of RAM, 32 I/O lines, Watchdog timer, two data pointers, three 16-bit timer/counters, a six-vector two-level interrupt architecture, a full-duplex serial port, on-chip oscillator, and clock circuitry. In addition, the AT89S52 is designed with static logic for operation down to zero frequency and supports two software-selectable power-saving modes. The Idle Mode stops the CPU while allowing the RAM, timer/counters, serial port, and interrupt system to continue functioning. The Power-down mode saves the RAM contents but freezes the oscillator, disabling all other chip functions until the next interrupt or hardware reset.

Features
• Compatible with MCS-51® Products
• 8K Bytes of In-System Programmable (ISP) Flash Memory
– Endurance: 1000 Write/Erase Cycles
• 4.0V to 5.5V Operating Range
• Fully Static Operation: 0 Hz to 33 MHz
• Three-level Program Memory Lock
• 256 x 8-bit Internal RAM
• 32 Programmable I/O Lines
• Three 16-bit Timer/Counters
• Eight Interrupt Sources
• Full Duplex UART Serial Channel
• Low-power Idle and Power-down Modes
• Interrupt Recovery from Power-down Mode
• Watchdog Timer
• Dual Data Pointer
• Power-off Flag

Pin Description

VCC: Supply voltage.

GND: Ground.

Port 0: Port 0 is an 8-bit open-drain bidirectional I/O port. As an output port, each pin can sink eight TTL inputs. When 1s are written to Port 0 pins, the pins can be used as high-impedance inputs. Port 0 can also be configured to be the multiplexed low-order address/data bus during accesses to external program and data memory.
In this mode, P0 has internal pullups. Port 0 also receives the code bytes during Flash programming and outputs the code bytes during program verification. External pullups are required during program verification.

Port 1: Port 1 is an 8-bit bidirectional I/O port with internal pullups. The Port 1 output buffers can sink/source four TTL inputs. When 1s are written to Port 1 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, Port 1 pins that are externally being pulled low will source current (IIL) because of the internal pullups. In addition, P1.0 and P1.1 can be configured to be the timer/counter 2 external count input (P1.0/T2) and the timer/counter 2 trigger input (P1.1/T2EX), respectively. Port 1 also receives the low-order address bytes during Flash programming and verification.

Port 2: Port 2 is an 8-bit bidirectional I/O port with internal pullups. The Port 2 output buffers can sink/source four TTL inputs. When 1s are written to Port 2 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, Port 2 pins that are externally being pulled low will source current (IIL) because of the internal pullups. Port 2 emits the high-order address byte during fetches from external program memory and during accesses to external data memory that use 16-bit addresses (MOVX @DPTR). In this application, Port 2 uses strong internal pullups when emitting 1s. During accesses to external data memory that use 8-bit addresses (MOVX @Ri), Port 2 emits the contents of the P2 Special Function Register. Port 2 also receives the high-order address bits and some control signals during Flash programming and verification.

Port 3: Port 3 is an 8-bit bidirectional I/O port with internal pullups. The Port 3 output buffers can sink/source four TTL inputs. When 1s are written to Port 3 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, Port 3 pins that are externally being pulled low will source current (IIL) because of the pullups. Port 3 also serves the functions of various special features of the AT89S52, as shown in the following table. Port 3 also receives some control signals for Flash programming and verification.

RST: Reset input. A high on this pin for two machine cycles while the oscillator is running resets the device. This pin drives high for 96 oscillator periods after the Watchdog times out. The DISRTO bit in SFR AUXR (address 8EH) can be used to disable this feature. In the default state of bit DISRTO, the RESET HIGH out feature is enabled.

ALE/PROG: Address Latch Enable (ALE) is an output pulse for latching the low byte of the address during accesses to external memory. This pin is also the program pulse input (PROG) during Flash programming. In normal operation, ALE is emitted at a constant rate of 1/6 the oscillator frequency and may be used for external timing or clocking purposes. Note, however, that one ALE pulse is skipped during each access to external data memory. If desired, ALE operation can be disabled by setting bit 0 of SFR location 8EH. With the bit set, ALE is active only during a MOVX or MOVC instruction. Otherwise, the pin is weakly pulled high. Setting the ALE-disable bit has no effect if the microcontroller is in external execution mode.

PSEN: Program Store Enable (PSEN) is the read strobe to external program memory.
When the AT89S52 is executing code from external program memory, PSEN is activated twice each machine cycle, except that two PSEN activations are skipped during each access to external data memory.

EA/VPP: External Access Enable. EA must be strapped to GND in order to enable the device to fetch code from external program memory locations starting at 0000H up to FFFFH. Note, however, that if lock bit 1 is programmed, EA will be internally latched on reset. EA should be strapped to VCC for internal program executions. This pin also receives the 12-volt programming enable voltage (VPP) during Flash programming.

XTAL1: Input to the inverting oscillator amplifier and input to the internal clock operating circuit.

XTAL2: Output from the inverting oscillator amplifier.

Special Function Registers

Note that not all of the addresses are occupied, and unoccupied addresses may not be implemented on the chip. Read accesses to these addresses will in general return random data, and write accesses will have an indeterminate effect. User software should not write 1s to these unlisted locations, since they may be used in future products to invoke new features. In that case, the reset or inactive values of the new bits will always be 0.

Timer 2 Registers: Control and status bits are contained in registers T2CON and T2MOD for Timer 2. The register pair (RCAP2H, RCAP2L) are the Capture/Reload registers for Timer 2 in 16-bit capture mode or 16-bit auto-reload mode.

Interrupt Registers: The individual interrupt enable bits are in the IE register. Two priorities can be set for each of the six interrupt sources in the IP register.

Dual Data Pointer Registers: To facilitate accessing both internal and external data memory, two banks of 16-bit Data Pointer Registers are provided: DP0 at SFR address locations 82H-83H and DP1 at 84H-85H. Bit DPS = 0 in SFR AUXR1 selects DP0, and DPS = 1 selects DP1. The user should always initialize the DPS bit to the appropriate value before accessing the respective Data Pointer Register.

Power Off Flag: The Power Off Flag (POF) is located at bit 4 (PCON.4) in the PCON SFR. POF is set to "1" during power up. It can be set and reset under software control and is not affected by reset.

Memory Organization

MCS-51 devices have a separate address space for Program and Data Memory. Up to 64K bytes each of external Program and Data Memory can be addressed.

Program Memory: If the EA pin is connected to GND, all program fetches are directed to external memory. On the AT89S52, if EA is connected to VCC, program fetches to addresses 0000H through 1FFFH are directed to internal memory, and fetches to addresses 2000H through FFFFH are directed to external memory.

Data Memory: The AT89S52 implements 256 bytes of on-chip RAM. The upper 128 bytes occupy a parallel address space to the Special Function Registers. This means that the upper 128 bytes have the same addresses as the SFR space but are physically separate from SFR space. When an instruction accesses an internal location above address 7FH, the addressing mode used in the instruction specifies whether the CPU accesses the upper 128 bytes of RAM or the SFR space. Instructions which use direct addressing access the SFR space. For example, the following direct addressing instruction accesses the SFR at location 0A0H (which is P2).

MOV 0A0H, #data

Instructions that use indirect addressing access the upper 128 bytes of RAM.
Timer 0 and 1
Timer 0 and Timer 1 in the AT89S52 operate the same way as Timer 0 and Timer 1 in the AT89C51 and AT89C52.

Timer 2
Timer 2 is a 16-bit Timer/Counter that can operate as either a timer or an event counter. The type of operation is selected by bit C/T2 in the SFR T2CON (shown in Table 2). Timer 2 has three operating modes: capture, auto-reload (up or down counting), and baud rate generator. The modes are selected by bits in T2CON. Timer 2 consists of two 8-bit registers, TH2 and TL2. In the Timer function, the TL2 register is incremented every machine cycle. Since a machine cycle consists of 12 oscillator periods, the count rate is 1/12 of the oscillator frequency. In the Counter function, the register is incremented in response to a 1-to-0 transition at its corresponding external input pin, T2. In this function, the external input is sampled during S5P2 of every machine cycle. When the samples show a high in one cycle and a low in the next cycle, the count is incremented. The new count value appears in the register during S3P1 of the cycle following the one in which the transition was detected. Since two machine cycles (24 oscillator periods) are required to recognize a 1-to-0 transition, the maximum count rate is 1/24 of the oscillator frequency. To ensure that a given level is sampled at least once before it changes, the level should be held for at least one full machine cycle.

Interrupts
The AT89S52 has a total of six interrupt vectors: two external interrupts (INT0 and INT1), three timer interrupts (Timers 0, 1, and 2), and the serial port interrupt. These interrupts are all shown in Figure 10. Each of these interrupt sources can be individually enabled or disabled by setting or clearing a bit in Special Function Register IE. IE also contains a global disable bit, EA, which disables all interrupts at once. Note that Table 5 shows that bit position IE.6 is unimplemented. In the AT89S52, bit position IE.5 is also unimplemented. User software should not write 1s to these bit positions, since they may be used in future AT89 products. The Timer 2 interrupt is generated by the logical OR of bits TF2 and EXF2 in register T2CON. Neither of these flags is cleared by hardware when the service routine is vectored to. In fact, the service routine may have to determine whether it was TF2 or EXF2 that generated the interrupt, and that bit will have to be cleared in software (a minimal service-routine sketch follows this section). The Timer 0 and Timer 1 flags, TF0 and TF1, are set at S5P2 of the cycle in which the timers overflow. The values are then polled by the circuitry in the next cycle. However, the Timer 2 flag, TF2, is set at S2P2 and is polled in the same cycle in which the timer overflows.

II. Translation
The Single-Chip Microcomputer: A single-chip microcomputer, or microcontroller, is a microcomputer that integrates the central processing unit, memory, timer/counters, and input/output interfaces on a single integrated-circuit chip.
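To make the flag-clearing requirement concrete, here is the minimal Timer 2 service routine promised above, in the same Keil C51 dialect; vector number 5 is the standard Timer 2 interrupt on 8052-family parts, and TF2/EXF2 are declared in the stock reg52.h header.

    #include <reg52.h>          /* declares T2CON and its bits TF2, EXF2 */

    volatile unsigned int t2_overflows;

    /* Timer 2 ISR (vector 5, service address 002BH). Hardware does not
       clear TF2 or EXF2 on vectoring, so the routine must identify the
       source and clear the flag itself, exactly as the text requires. */
    void timer2_isr(void) interrupt 5
    {
        if (TF2) {              /* overflow of TH2:TL2 caused the interrupt */
            TF2 = 0;            /* cleared in software */
            t2_overflows++;
        }
        if (EXF2) {             /* 1-to-0 transition on T2EX caused it */
            EXF2 = 0;           /* likewise cleared in software */
        }
    }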
Foreign Original and Translated Text for a Paper on Fujian Tea Exports
Beijing Union University Graduation Thesis: Foreign Original and Translated Text
Title: Research on the Current Status of and Countermeasures for Fujian Tea Exports
Major: International Economics and Trade
Supervisor:
College:
Student ID:
Class:
Name:

I. Foreign Original
Current status and future development of global tea production and tea products
Alastair Hicks
FAO Regional Office for Asia and the Pacific

Tea is globally one of the most popular and lowest-cost beverages, next only to water. Tea is consumed by a wide range of age groups at all levels of society. More than three billion cups of tea are consumed daily worldwide. Tea is considered to be part of the huge beverage market, not to be seen in isolation just as a 'commodity'. Tea's active ingredients are of interest to functional-foods markets. Africa, South America, the Near East and especially the Asian region produce a varied range of teas; this, together with a reputation in the international markets for high quality, has resulted in Asia enjoying a share of every importing market in the world. Huge populations in Asia, the Middle East, Africa, the UK, the EU, and countries of the CIS consume tea regularly and throughout the day. The main tea-producing countries globally are, in Africa: Burundi, Kenya, Malawi, Rwanda, Tanzania, Uganda, Zimbabwe and others; in South America: Argentina, Brazil and others; in the Near East: Iran and Turkey; and in Asia: Bangladesh, China, India, Indonesia, Sri Lanka, Viet Nam and others. In addition, the Russian Federation and CIS countries produce quantities of tea. Numerous types of tea are produced in the countries listed above. In China, for example, the country with the largest area planted to tea and second in output, green tea accounts for around half of total exports, black tea around one third, and other teas one fifth. Depending on the manufacturing technique, tea may be described as green, black, oolong, white, yellow and even compressed tea. The Intergovernmental Group on Tea monitors market conditions and provides an update of potential market prospects for tea over the medium term, examining the current situation and medium-term prospects for production, consumption and trade of tea, and their impact on the world tea market.

In summary, tea is considered as having a share of the global beverage market, a highly competitive field. A wide range of tea products continues to be developed, through product and process development for added value, as market shares become more sophisticated and competitive. The tea industry must rise to these challenges, facing the future with confidence.

Introduction
The Asian region produces a varied range of teas, and this, together with a reputation in the international markets for high quality, has resulted in Asia enjoying a share of every importing market in the world. Africa, South America and the Near East also produce quantities of tea. Huge populations in Asia, the UK, the EU, the Middle East, Africa and countries of the CIS consume tea regularly and throughout the day. The common tea plant is the evergreen shrub Camellia sinensis. There are several varieties of this species, a well-known one being the Indian Assam tea (C. sinensis var. assamica Kitamura). Traditionally, tea is prepared from the plant's dried young leaves and leaf buds, made into a beverage by steeping the leaves in boiling water. China is credited with introducing tea to the world, though the evergreen tea plant is in fact native to Southern China, North India, Myanmar and Cambodia. Although a growing number of countries produce teas in a multiplicity of blends, there are essentially three main types of Camellia tea: Green, 'Oolong' and Black.
The difference lies in the 'fermentation', which actually refers to oxidative and enzymatic changes within the tea leaves during processing. Green tea is essentially unfermented, Oolong tea is partially fermented, and Black tea is fully fermented. Black tea, which represents the majority of international trade, yields an amber-coloured, full-flavoured liquor without bitterness. For example, both Orange Pekoe and Pekoe are black teas; the name refers to the silver-tipped Assam teas. Orange Pekoe is made from the very young top leaves and traditionally comes from India or Sri Lanka. Pekoe tea comes from India, Indonesia or Sri Lanka and is made from leaves even smaller than those characteristically used for Orange Pekoe.

In addition to these conventional teas, many countries of Asia have a number of herbal teas, made by brewing plant leaves or other plant parts, including flowers. For example, Gymnema sylvestre, a member of the botanical family Asclepiadaceae found mainly in India, has been used as a healthy and nutritive herbal tea claimed to have a number of medicinal properties. Numerous other herbal teas have been gaining popularity recently.

Current Situation
Global tea production grew by more than 3% in 2006, to reach an estimated 3.6 million t. The expansion was mainly due to record crops in China, Viet Nam and India. Production in China increased 9.5% over the 2005 record, to 1.05 million t in 2006, helped by government policies to increase rural household incomes. An expansion of 28 percent in Viet Nam gave an output of 133,000 t as tea bushes reached optimum yields. India had a 3% increase in harvest output, to 945,000 t for the year. This growth offset declines in the other major producing countries, Kenya and Sri Lanka, where output fell by 6% and 1.6%, respectively.

Exports
Exports in 2006 reached 1.55 million t, compared to 1.53 million t in 2005 (Table 2). Increased shipments from Sri Lanka, India and Viet Nam offset major declines in Kenya and Indonesia, down by 12.4% and 7%. Tea exports from Sri Lanka reached 314,900 t in 2006, a gain of 5.4%, while exports from Viet Nam and India expanded by 24% and 14%. The increase was due to expansion in trade with the Near East, reflecting the growth and strength of the economies in that region. Significant growth was also achieved by Rwanda and Tanzania, while shipments from China were relatively unchanged. The decline in exports from Kenya reflected political uncertainty in Pakistan, its major market. Pakistan's uncertainty also affected shipments from Indonesia and Bangladesh, where exports declined and structural problems plague the industry (FAO 2008).

Imports
World net imports of tea declined by 1.7% to 1.57 million t in 2006 (Table 3), reflecting reduced tea imports by Pakistan, the Russian Federation, and the Netherlands. Increased imports by traditional markets such as the United Kingdom, the United States, Egypt and Germany did not offset these declines. Imports declined by 3% in Pakistan, by 2% in the Russian Federation, and by 25% in the Netherlands, while imports increased by 7% in the United Kingdom, the United States, and Egypt. In Germany a 9 percent increase was recorded.

Consumption
World tea consumption grew by 1% in 2006, reaching 3.64 million t, less than the annual average of 2.7% recorded over the previous decade (Table 4). The biggest influence has been the growth in consumption of agricultural products, tea included, in China and India as their economies expanded dramatically.
In 2006, China recorded a spectacular annual increase of 13.6% in total consumption, which reached 776,900 t; whilst annual growth in tea consumption in India was smaller, it was still higher than in the previous decade. Income gains in India, China and other developing countries translate into more demand for higher value-added items.

Tea Added-Value Product and Process Development
Traditional loose tea has largely been replaced by bagged tea in many forms, for convenience. There is a range of preferences for tea styles and drinking habits among different consumers in various countries. Green and black tea will remain the major forms of tea; however, instant tea, flavoured tea, decaffeinated tea, organically grown tea, 'foamy' tea, roasted tea, herbal tea, and ready-to-drink tea (canned and bottled) are entering the market. Food products being developed include tea-rice, tea-noodles, tea-cake, tea-biscuits, tea-wine, tea-candy and tea-ice cream. In particular, new types of herbal, fruit-flavoured and decaffeinated teas, as well as ready-to-drink teas, are becoming popular. The organically grown and healthful image of tea can be exploited, as can the utilization of tea's active ingredients as their functional properties and nature become better known. Ready-to-drink tea is cheaper than cola-type products, which are perceived as its main competitors. There is a risk that tea consumption may drop as other drinks come on the market, made from, e.g., rice, potatoes or mulberry leaves. Diversified products such as tea chewing gum have been developed (Hicks 2001).

Some Conclusions
The review of the world tea market indicates some improvement in the fundamental oversupply situation that has persisted in the world market in recent years. However, in the medium term, projections suggest that although supply will continue to outstrip demand, the gap could move closer to equilibrium if consumption improves in traditional markets. Strategies must be devised to continue the improvement in demand. Opportunities for an expansion in consumption and an improvement in prices exist in the producing countries themselves, where per capita consumption levels are relatively low. For example, per capita consumption in major importing countries is 1.26 kg in the Russian Federation and 2.20 kg in the UK, whilst per capita consumption is 0.65 kg in India and 0.40 kg in Kenya. The results of research into the health benefits of tea consumption should also be used more extensively in promoting consumption in both producing and importing countries. In addition, strategies to exploit demand in value-added market segments, including specialty and organic teas, should be more aggressively pursued. In targeting potential growth markets, recognition of and compliance with food safety and quality standards is essential. Even the imposition of a minimum quality standard, as a means of improving the quality of tea traded internationally, would by default reduce the quantity of tea on the world market and improve prices, at least in the short to medium term (FAO 2008).

In summary, tea can be considered as having a share of the soft drink/beverages market, as well as having functional-food potential. A wide range of tea products will continue to be developed through product and process development for added value as market shares become more sophisticated and competitive. The industry must rise to these challenges and face the future with confidence (Hicks 2001).

Article ID:/ducument/d11.pdf

II. Translation
Current Status and Future Development of the World Tea Industry: Tea is one of the world's most popular and lowest-cost beverages, second only to water.
Translated Text and Original of Foreign References
Guangdong University of Technology, Huali College
Undergraduate Graduation Project (Thesis): Translated Text and Original of Foreign References
Department: Urban Construction
Major: Civil Engineering
Year of Admission: 2011
Class: Civil Engineering Class 9, 2011 intake
Student ID: 23031109000
Student Name: Liu Lin
Supervisor: Lu Jifu
May 2015

Contents
I. Project Budget Management and Control
II. Project Budget Monitor and Control
III. The Contractor's Role in Controlling Construction Costs During the Construction Phase
IV. The Contractor's Role in Building Cost Reduction After Design

I. Translated Foreign Literature
(1) Project Budget Management and Control
As market competition grows ever fiercer, cost control becomes increasingly important in every project.
This paper discusses how a project manager can successfully control the budgeted cost of a project during the construction phase. It discusses a number of methods, and it shows that, to succeed, the project manager must pay close attention to these methods.
1. Introduction
Surveys show that most projects run into the problem of exceeding the budget … successfully control budgeted costs.

2. The Concept and Purpose of Project Control and Monitoring
Erel and Raz (2000) point out that the project control cycle includes measuring … causes, and deciding on and taking corrective action. The purpose of monitoring is … corrective action … within the target range.

3. Establishing an Effective Control System
To achieve the budgeted-cost objective, the project manager needs to establish a … being monitored and controlled is very helpful. Project success is closely tied to good communication … (Diallo and Thuillier, 2005).
4. Monitoring and Controlling Costs
4.1 Prioritizing What to Monitor
During the construction phase, many construction activities are based on the original plan … used up. Fourth, the project manager should monitor high-risk activities; high-risk activities are the most … important (Cotterell and Hughes, 1995).

4.2 Methods of Cost Control
The main costs of a project comprise labor costs, material costs, and the cost of schedule delays. To control these costs, the project manager should first establish a cost control system: a) assign personnel responsible for managing and analyzing financial data; b) ensure that all … are allocated reasonably according to the project structure … its changes -- accurately record on the cost control baseline all appropriate … combined with (scope, changes, schedule, quality). Because a construction project … the result after considering the time value of money.
Foreign Translation -- The Growth Enterprise Market (ChiNext)
Foreign Literature Translation
I. Foreign Original
Original: China's Second Board

I. Significance of and events leading to the establishment of a Second Board
On 31 March 2009 the China Securities Regulatory Commission (CSRC) issued the Interim Measures on the Administration of Initial Public Offerings and Listings of Shares on the ChiNext [i.e., the Second Board, also called the Growth Enterprise Market] ("Interim Measures"), which came into force on 1 May 2009. This marked the creation by the Shenzhen Stock Exchange of the long-awaited market for venture businesses. As the original plan to establish such a market in 2001 had come to nothing when the dotcom bubble burst, the market's final opening came after a delay of nearly 10 years.

Ever since the 1980s, when the Chinese government began to foster the development of science and technology, venture capital has been seen in China as a means of supporting the development of high-tech companies financially. The aim, as can be seen from the name of the 1996 Law of the People's Republic of China on Promoting the Conversion of Scientific and Technological Findings into Productivity, was to support the commercialization of scientific and technological developments. Venture capital funds developed gradually in the late 1990s, and between then and 2000 it looked increasingly likely that a Second Board would be established. When the CSRC published a draft plan for this in September 2000, the stage was set. However, when the dotcom bubble (and especially the NASDAQ bubble) burst, this plan was shelved. Also, Chinese investors and venture capitalists were probably not quite ready for such a move. As a result, Chinese venture businesses sought to list on overseas markets (a so-called "red chip listing") from the late 1990s. However, as these listings increased, so did the criticism that valuable Chinese assets were being siphoned overseas.

On the policy front, in 2004 the State Council published Some Opinions on Reform, Opening and Steady Growth of Capital Markets ("the Nine Opinions"), in which the concept of a "multi-tier capital market" was presented for the first time. A first step in this direction was made in the same year, when an SME Board was established as part of the Main Board. Although there appear to have been plans eventually to relax the SME Board's listing requirements, which were the same as those for companies listed on the Main Board, and to make it a market especially for venture businesses, it was decided to establish a separate market (the Second Board) for this purpose and to learn from the experience of the SME Board. As well as being part of the process of creating a multi-tier capital market, the establishment of the Second Board was one of the measures included in the policy document Several Opinions of the General Office of the State Council on Providing Financing Support for Economic Development ("the 30 Financial Measures"), published in December 2008 in response to the global financial crisis and intended as a way of making it easier for SMEs to raise capital.

It goes without saying that the creation of the Second Board was also an important development in that it gives private equity funds the opportunity to exit their investments. The absence of such an exit had been a disincentive to such investment, with most funds looking for a red chip listing as a way of exiting their investments. However, with surplus savings at home, the Chinese authorities began to encourage companies to raise capital on the domestic market rather than overseas.
This led, in September 2006, to a rule making it more difficult for Chinese venture businesses to list their shares on overseas markets. The corollary was an increased need for a means whereby Chinese private equity funds could exit their investments at an early opportunity and on their own market. The creation of the Second Board was therefore a belated response to this need.

II. Rules and regulations governing the establishment of the Second Board
We now take a closer look at some of the rules and regulations governing the establishment of the Second Board. First, the Interim Measures on the Administration of Initial Public Offerings and Listings of Shares on the ChiNext, issued by the CSRC on 31 March 2009 with effect from 1 May 2009. The Interim Measures consist of six chapters and 58 articles, stipulating issue terms and procedures, disclosure requirements, regulatory procedures, and legal responsibilities.

First, the General Provisions chapter. The first thing this says (Article 1) is: "These Measures are formulated for the purposes of promoting the development of innovative enterprises and other growing start-ups." This shows that one of the main listing criteria is a company's technological innovativeness and growth potential. The Chinese authorities have actually made it clear that, although the Second Board and the SME Board are both intended for SMEs of similar sizes, the Second Board is specifically intended for SMEs at the initial (rather than the growth or mature) stage of their development, with a high degree of technological innovativeness and an innovative business model, while the SME Board is specifically intended for companies with relatively stable earnings at the mature stage of their development. They have also made it clear that the Second Board is not simply a "small SME Board." This suggests to us that the authorities want to see technologically innovative companies listing on the Second Board and SMEs in traditional sectors listing on the SME Board.

Next, Article 7 says: "A market access system that is commensurate with the risk tolerance of investors shall be established for investors on the ChiNext and investment risk shall be fully disclosed to investors." One noteworthy feature is the adoption of the concept of the "qualified investor" in an attempt to improve risk control. Furthermore, Article 8 says: "China Securities Regulatory Commission (hereinafter, CSRC) shall, in accordance with law, examine and approve the issuer's IPO application and supervise the issuer's IPO activities. The stock exchange shall formulate rules in accordance with law, provide an open, fair and equitable market environment and ensure the normal operation of the ChiNext." Until the Second Board was established, it was thought by some that the stock exchange had the right to approve new issues. Under the Interim Measures, however, it is the CSRC that examines and approves applications.

First, offering conditions. Article 10 stipulates four numerical conditions for companies applying for IPOs. Second, offering procedures. The Interim Measures seek to make sponsoring securities companies more responsible by requiring them to conduct due diligence investigations, make prudential judgments on the issuer's growth, and render special opinions thereon. Third, information disclosure.
Article 39 of the Interim Measures stipulates that the issuer shall make a statement in its prospectus pointing out the risks of investing in Second Board companies: namely, inconsistent performance, high operational risk, and the risk of delisting. Similarly, … Fourth, supervision. Articles 51 and 52 stipulate that the stock exchange (namely, the Shenzhen Stock Exchange) shall establish systems for listing, trading and delisting Second Board stocks, urge sponsors to fulfill their ongoing supervisory obligations, and establish a market risk warning system and an investor education system.

1. Amendments to the Interim Measures on Securities Issuance and Listing Sponsor System and the Provisional Measures of the Public Offering Review Committee of the China Securities Regulatory Commission.
2. Rules Governing the Listing of Shares on the ChiNext of Shenzhen Stock Exchange. Next, the Shenzhen Stock Exchange published the Rules Governing the Listing of Shares on the ChiNext of Shenzhen Stock Exchange on 6 June (with effect from 1 July).
3. Checking investor eligibility. As the companies listed on the Second Board are more risky than those listed on the Main Board and are subject to more rigorous delisting rules (see above), investor protection requires that checks be made on whether Second Board shares are suitable for all those wishing to invest in them.
4. Rules governing (1) application documents for listings on the ChiNext and (2) prospectuses of ChiNext companies. On 20 July the CSRC published rules governing Application Documents for Initial Public Offerings and Listings of Shares on the ChiNext and Prospectuses of ChiNext Companies, and announced that it would begin processing listing applications on 26 July.

III. Future developments
As its purpose is to "promote the development of innovative enterprises and other growing start-ups", the Second Board enables such companies to raise capital by issuing shares. That is why its listing requirements are less demanding than those of the Main Board, but also why it has various provisions to mitigate risk. For one thing, the Second Board has its own public offering review committee to check how technologically specialized applicant companies are, reflecting the importance attached to this. For another, issuers and their controlling shareholders, de facto controllers, and sponsoring securities companies are subject to more demanding accountability requirements. The key factor here is, not surprisingly, disclosure. Also, the qualified investor system is designed to mitigate the risks to retail investors.

Once the rules and regulations governing the Second Board were published, the CSRC began to process listing applications from 26 July 2009. It has been reported that 108 companies initially applied. As of mid-October, 28 of these had been approved, and on 30 October they were listed on the Second Board. As of 15 December, there were 46 companies whose listing applications had been approved by the CSRC (including the above-mentioned 28 companies). They come from a wide range of sectors, especially information technology, services, and biopharmacy. Thus far, few companies in which foreign private equity funds have a stake have applied, because these funds have tended to go for red-chip listings.

Another point is movement between the various tiers of China's multi-tier capital market. As of early September, four companies traded on the new Third Board had successfully applied to list on the Second Board.
As 22 new Third Board companies meet the listing requirements of the Second Board on the basis of their interim reports for the first half of fiscal 2009, a growing number of companies may transfer their listing from the new Third Board to the Second Board. We think this is likely to make the new Third Board a more attractive market for private equity investors. The applicants include companies that were in the process of applying for a listing on the SME Board. The CSRC has also made it clear that it does not see the Second Board simply as a "small SME Board" and attaches great importance to companies' innovativeness and growth potential.

Ultimately, whether or not such risks can be mitigated will depend on whether the quality of the companies that list on the Second Board improves and disclosure requirements are strictly complied with. For example, according to the rules governing Prospectuses of ChiNext Companies, companies are required to disclose the above-mentioned supplementary agreements as a control-right risk. The point is whether such requirements will be complied with. Since there is a potentially large number of high-tech companies in China in the long term, whether or not the Second Board becomes one of the world's few successful venture capital markets will depend on whether all these rules and regulations succeed in shaping its development and the way in which it is run.

The authorities clearly want to avoid a situation where the Second Board attracts a large number of second-rate companies and becomes a vehicle for market abuse, as it would then run the risk of becoming an illiquid market shunned by investors who have lost trust in it. Indeed, such has been the number of companies applying to list on the Second Board that some observers have expressed concern about their quality. There has also been some concern about investor protection. For example, supplementary agreements between private equity funds and issuers pose a risk to retail investors in that they may suddenly be faced with a change in the controlling shareholder. This is because such agreements can result in a transfer of shares from the founder or controlling shareholder to a private equity fund if the company fails to meet certain agreed targets, or in a shareholding structure that is different from the apparent one. The problem of low liquidity, which has long faced the new Third Board market, where small-cap high-tech stocks are also traded, also needs to be addressed.

Meanwhile, the Second Board's Public Offering Review Committee was officially established on 14 August. It has 35 members. A breakdown reveals that the number of representatives of the CSRC and the Shenzhen Stock Exchange has been limited to three and two, respectively, to ensure that the committee has the necessary number of technology specialists. Of the remainder, 14 are accountants, six are lawyers, three are from the Ministry of Science and Technology, three are from the Chinese Academy of Sciences, two are from investment trust companies, one is from an asset evaluation agency, and one is from the National Development and Reform Commission (NDRC). It has been reported that the members include specialists in the six industry fields the CSRC considers particularly important for Second Board companies (namely, new energy, new materials, biotechnology and pharmaceuticals, energy conservation and environmental protection, services and IT).

Source: Takeshi Jingu. 2009. "China's Second Board".
Nomura Journal of Capital Markets, Winter 2009, Vol. 1, No. 4, pp. 1-15.

II. Translation
China's Growth Enterprise Market (ChiNext)
I. The establishment of the Growth Enterprise Market and its significance
On 31 March 2009, the China Securities Regulatory Commission (hereinafter "CSRC") issued the Interim Measures on the Administration of Initial Public Offerings and Listings of Shares on the ChiNext [i.e., the Second Board, also called the Growth Enterprise Market] (the "Interim Measures"), with effect from 1 May 2009, marking the imminent birth of the Shenzhen Stock Exchange's long-awaited market for venture businesses.
Original and Translated Text of Foreign Materials
Original and Translated Text of Foreign Materials
Nantong University, School of Law, Politics and Management
June 2009

HOW DO THE CHINESE PERCEIVE HARMONIOUS CORPORATE CULTURE: An Empirical Study on Dimensions of Harmonious Corporate Culture
Lianke SONG, Hao YANG, Lan YANG

ABSTRACT The Sixth Plenary Session of the 16th Central Committee of the Communist Party of China points out that creating harmonious culture is an important task for building a socialist harmonious society. Building harmonious culture needs all companies to create harmonious culture, because a company is a basic social unit. Hence, many Chinese companies advocate building harmonious corporate culture, and scholars must study the basic theories of harmonious corporate culture. This study tried to answer two questions: What is harmonious corporate culture in the Chinese mind, and how do different Chinese perceive harmonious corporate culture? Firstly, this paper analyzed the background of harmonious corporate culture in terms of Chinese traditional culture and the needs of the era. Secondly, the authors designed an open-ended questionnaire and sent it to employees in Jiangsu and Shanghai; 329 questionnaires were collected and 291 were valid, representing a response rate of 88.45%. Thirdly, this study explored the dimensions of harmonious corporate culture and identified the different viewpoints of different groups. Finally, this paper discussed the results and pointed out the limitations of this study and directions for future research. The results of this paper provide a basis for defining, measuring, analyzing, and creating harmonious corporate culture.

1. THEORETICAL BACKGROUND AND QUESTIONS
The Fourth Plenary Session of the 16th Central Committee of the Communist Party of China put forward building a socialist harmonious society, and the Sixth Plenary Session of the 16th Central Committee points out that creating harmonious culture is an important task for building a socialist harmonious society. Building harmonious culture needs all companies to create harmonious culture, because a company is a basic social unit [1]. Why do Chinese corporations advocate harmonious corporate culture? Chinese traditional culture and the needs of the era are probably responsible.

Chinese philosophy has a history of several thousand years. Its origins are often traced back to the Book of Changes (yi jing), which introduced some of the most fundamental terms of Chinese philosophy. Its first flowering is generally considered to have been in about the 6th century BC, but it draws on an oral tradition that goes back to Neolithic times. The Tao Te Ching (dao de jing) of Lao Tzu (lao zi) and the Analects (lun yu) of Confucius (kong zi) both appeared around the 6th century BC, around the time of early Buddhist philosophy. Confucianism focuses on the fields of ethics and politics, emphasizing personal and governmental morality, correctness of social relationships, justice, traditionalism, and sincerity. Confucianism and legalism are responsible for creating the world's first meritocracy. Confucianism was and continues to be a major influence on Chinese culture. Harmonious culture is meant to respect the tradition of established virtue under Confucius's "harmony with differences" while exploring extensively our cultural resources and cultural ideas or beliefs. The Chinese schools of philosophy, except during the Qin Dynasty, could be both critical and tolerant of one another. Despite the debates and competition, they generally cooperated and shared ideas, which they would usually incorporate into their own. Harmony was a central concept in ancient Chinese philosophy.
The Confucian, Taoist, Buddhist and Legalist schools, the major Chinese traditions, all prize "harmony" as an ultimate value, but they disagree on how to achieve it. Confucians in particular emphasize the single-character term for "harmony" (he), which appears in all of Confucianism's "Four Books and Five Classics" (si shu wu jing). The most forceful articulation of the identification of personal and communal harmony comes from the Doctrine of the Mean (zhong yong), which defines harmony as a state of equilibrium where pleasure, anger, sorrow and joy are moderated and restrained, claiming that it allows "all things in the universe to attain the way". During the Industrial and Modern Ages, Chinese philosophy began to integrate concepts of Western philosophy, attempting to incorporate democracy, republicanism and industrialism. Mao Zedong added Marxism, Stalinism and other communist thought. The government of the People's Republic of China initiated Socialism with Chinese Characteristics. The theoretical bases of the harmonious socialist society are Marxism-Leninism, Mao Zedong Thought, Deng Xiaoping Theory, and the important thought of the "Three Represents" (that is, the CPC must always represent the development trend of China's advanced productive forces, the orientation of China's advanced culture, and the fundamental interests of the overwhelming majority of the people in China). The six main characteristics of a harmonious society are democracy and the rule of law, fairness and justice, integrity and fraternity, vitality, stability and order, and harmony between man and nature. The principles observed in building a harmonious socialist society are the following: people oriented; development in a scientific way; in-depth reform and opening up; democracy and the rule of law; properly handling the relationships between reform, development and stability; and the participation of the whole society under the leadership of the Party.

The authors offer a definition: harmonious corporate culture is the corporate culture that adheres to the people-oriented principle and takes harmony as a core concept, managing in good faith and with scientific administration to achieve harmony among the enterprise, society and nature, and eventually to make the enterprise develop harmoniously and healthily. Chinese traditional culture is the basis of harmonious corporate culture; the needs of the era give it its direction. "Harmonious corporate culture" is a new term, distinct from any existing conception. What is harmonious corporate culture? This study answers this question by analyzing Chinese viewpoints collected through open-ended questionnaires.

Question 1: What is harmonious corporate culture in the Chinese mind?
Harmonious corporate culture is a new and particular conception for the Chinese. The general views of the Chinese can be found by searching for the dimensions of harmonious corporate culture. In fact, different people have different ideas; there may be differences among groups classified by sex, age, education and position. This study will identify and explain those differences.

Question 2: How do different Chinese perceive harmonious corporate culture?
Today, many Chinese companies advocate building harmonious corporate culture, so understanding the conception and character of harmonious corporate culture is very important. This paper answers these two questions, which are the basis of this field.
2. METHODS
2.1 Sample and Procedure
The empirical analysis was carried out in Jiangsu and Shanghai. Jiangsu's economic and social development has always been taking the lead in China. Shanghai is China's chief industrial and commercial centre and one of its leading centres of higher education and scientific research. Both lie at the center of China's east coast. We can learn what modern Chinese are thinking and hoping by studying employees in Jiangsu and Shanghai. The number of questionnaires distributed could not be counted exactly because both a paper version and a computer version were used. From January 2007 to January 2008, the authors sent questionnaires to employees working in Jiangsu and Shanghai; 329 questionnaires were returned and 291 were valid, representing a response rate of 88.45%. Table 1 summarizes the key statistics for the sample used in the study.

Table 1: Characteristics of the sample

2.2 Measures
The authors designed an open-ended questionnaire based on the purpose of the study. The scale used only one question to collect information for answering Question 1 of this study: "Please use ten words or ten sentences to describe harmonious corporate culture."

3. RESULTS
This research found a number of similar viewpoints about harmonious corporate culture in the collected questionnaires. The authors classified these viewpoints into 15 dimensions after holding 10 study-group meetings. Some dimensions were identified on the basis of China's traditional culture and present policies. Table 2 lists the 15 dimensions in English and Chinese, because some dimensions have Chinese characteristics.

Table 2: Dimension and frequency of harmonious corporate culture

This study calculated the dimensions' frequencies for different groups to learn different people's ideal harmonious corporate culture. Table 3 shows statistics for males' and females' viewpoints on harmonious corporate culture.

Table 3: Frequency and order of harmonious corporate culture dimensions for females and males

4. DISCUSSION AND CONCLUSION
4.1 Results
Some companies advocate building harmonious corporate culture, and some boast that they possess harmonious corporate culture, now that the central government has called on all of society to create harmonious culture. But what is harmonious corporate culture? Some scholars have tried to explain it, but nobody had answered this question through empirical study. The authors answered Question 1 of this study by analyzing the collected data. Many standpoints were found, but some could be merged because they carry the same meaning in different words. The study group held 10 meetings to discuss the harmonious corporate culture dimensions emerging from the questionnaires. Finally, 15 dimensions were identified: people oriented, steady development, scientific administration, vitality, stability and order, fraternity and concord, unity and cooperation, fairness and impartiality, democratic participation, managing in good faith, pursuing excellence, social responsibility, energy conservation and environmental protection, incorporating things of diverse nature, and common development and win-win situation. This result answered Question 1: What is harmonious corporate culture in the Chinese mind?

The dimensions were ranked by frequency, and people oriented ranked first. People oriented in China has three sources: Marx's study of humanity; "people first", descending from Chinese history; and the new anthropocentrism [2]. The Chinese like speaking of "people oriented" in connection with Chinese traditional culture.
The genesis of people-oriented thought is traceable to the Western Zhou Dynasty, and people oriented became the core thought of Confucianism, which influenced the Chinese deeply. Many classical sayings concern people-oriented thought, such as "The people are the most important element in a state; next are the gods of land and grain; least is the ruler himself [3]" (min wei gui, she ji ci zhi, jun wei qing). Many scholars also consider people oriented to be the core and basis of harmonious corporate culture [4][5]. This paper compared different groups' viewpoints to answer Question 2 -- how do different Chinese perceive harmonious corporate culture?

People oriented, unity and cooperation, vitality, and fraternity and concord were ranked 1 to 4 by both females and males. The identical results surprised the authors, but the two groups differ in the fifth dimension: for females it is democratic participation, and for males it is stability and order. Female status was lower than male status in ancient China. Women once had to comply with the three obediences and the four virtues (san cong si de): the three obediences (obey her father before marriage, her husband when married, and her sons in widowhood) and the four virtues (morality, proper speech, modest manner and diligent work) of women in ancient China were spiritual fetters of wifely submission and virtue imposed on women in feudal society. Female status has been improving since female deputies attended the first National Congress of the Communist Party of China. Today, Chinese women think much of the rights of women, so democratic participation is their fifth dimension. The ancient belief that "men's work centers around the outside, women's work centers around the home [6]" (nü zheng wei hu nei, nan zheng wei hu wai) comes from The Book of Changes (yi jing). A man had to work hard in society to earn money and win honour for his family. Today, both men and women work in government, companies, schools, hospitals and so on, but by traditional culture men still play the major role and assume primary responsibility in society and at home. Change is fast and competition is fierce in modern society, so men face great pressure. This is why men hope to live and work in a more stable environment, and why stability and order is their fifth dimension.

People oriented, unity and cooperation, and vitality were ranked 1 to 3 by both managerial and nonmanagerial employees. Scientific administration and democratic participation were ranked fourth by managerial employees. Managerial employees look deeper and think further than nonmanagerial employees, because they are at a higher level and hold more responsibility in the organization; they care about management questions. Fraternity and concord was ranked fourth by nonmanagerial employees. Nonmanagerial employees are less concerned with the enterprise's overall operation and management than managerial employees are; they understand harmonious corporate culture through their own specific work and life. Nonmanagerial employees do specific tasks and need direct cooperation. They believe that the staff's civilized language and behaviour, mutual understanding, and a warm atmosphere of interpersonal relationships in the enterprise are very important aspects of harmonious corporate culture; they care about good relationships.
Generally speaking, the differences in the understanding of harmonious corporate culture dimensions between managerial and nonmanagerial employees are closely related to their location in the organizational structure and their working content in the enterprise. People oriented was ranked first and unity and cooperation second by all persons, whatever their educational background. Vitality was ranked third by all respondents except those with a master's or doctoral degree. Respondents whose highest qualification is a master's degree or above also ranked scientific administration as a second dimension. People holding advanced academic degrees have more opportunity to be promoted to managerial positions, so they think scientific administration is very important in a harmonious environment. Compared with other groups, the relatively highly educated group holding undergraduate degrees is more interested in the stability and order and fairness and impartiality dimensions. People in this group are the middle and high-level managers in the enterprise: they are not only familiar with the overall state of the enterprise but also understand deeply the living conditions of the internal staff. Therefore, they pay more attention to the stability and order and fairness and impartiality dimensions. All groups ranked people oriented, unity and cooperation, and vitality as the three most important dimensions. These common results show what the core contents of harmonious corporate culture are.

4.2 Limitations and Future Research
This study was just an exploratory study. The authors searched for the dimensions of harmonious corporate culture with an open-ended questionnaire, but the validity of these results needs to be proved by further studies. The authors will design a close-ended questionnaire based on this study and collect new data; the dimensions of harmonious corporate culture will then be confirmed by exploratory factor analysis and confirmatory factor analysis. This paper only discussed what harmonious corporate culture is; in the future, how to create harmonious corporate culture should be studied. The authors compared viewpoints across sex, position and education, but age, birthplace, nationality and work experience influence individual thought too; different opinions from different groups should be identified in future studies. China should act not only as the defender of Chinese culture but as an explorer and promoter of the new harmonious culture. Harmony is the social theme of present-day China, and studying the basic theory of harmonious corporate culture will contribute to our society.

REFERENCES
[1] Lianke SONG, Dongtao YANG, Hao YANG. Why do companies create harmonious cultures? Comparing the influence of different corporate cultures on employees. Enterprise Management and Change in a Transitional Economy. 2008. pp. 595-603.
[2] LU Wanglin. On the theoretical source of "human oriented" -- analyzing the scientific factor of the "scientific development view" from one point of view. Hebei Academic Journal, 26(5), 2006, pp. 228-230.
[3] Mencius. The Mencius. Warring States period.
[4] Liangbo CHENG, Lincheng JING. A search on creating harmonious corporate culture. Group Economy, (17), 2007, pp. 294-295.
[5] Xiangkui GENG. Extracting the kernel of Confucianism to create harmonious corporate culture. Theoretical Research, (3), 2007, pp. 47-48.
[6] The Book of Changes.

How Do the Chinese Perceive Harmonious Corporate Culture? An Empirical Study on the Dimensions of Harmonious Corporate Culture
Song Lianke, Yang Hao, Yang Lan
Abstract: The Sixth Plenary Session of the 16th CPC Central Committee points out that building harmonious culture is an important task in constructing a socialist harmonious society.
Liu Yinghui: Foreign Translation Original and Translated Text
Dalian Nationalities University, International Business College
English Translation: Foreign Translation Materials for the Class of 2007 Graduation Thesis
Microfinance's Latest Growing Pains
Knowledge@Wharton, February 2nd, 2011
Translator: Liu Yinghui, International Economics and Trade Class 072, International Business College, Dalian Nationalities University, June 2011

The Growing Pains of the Microfinance Industry
The recent microfinance crisis originated in the southern Indian state of Andhra Pradesh, where over-indebtedness, violent debt collection, and borrower suicides provoked widespread public criticism of the microfinance industry and strong calls for tighter government regulation.
In October, the Indian government imposed controls on the microfinance institutions; the measures impaired lending, forcibly restricted collection periods, and dragged down the share price of SKS, India's largest for-profit microfinance company. On January 19, the Reserve Bank of India released the Malegam Committee report, recommending a series of new regulatory measures for Indian microfinance institutions, including interest-rate caps, loan-size limits, and rules on borrowers' incomes. Some observers welcomed the report, while pessimists argued that a credit crunch and an industry collapse would be hard to avoid. Although it is still too early to assess the industry's prospects, the Andhra Pradesh crisis has certainly triggered intense discussion of, and deep reflection on, the global microfinance industry.
At a recent microfinance management training session at Wharton's Aresty Institute of Executive Education, discussion focused on over-lending, rapid industry growth, and how to better fulfill microfinance's founding mission while pursuing profit. The microfinance industry has gone through a "painful awakening" triggered by a "great earthquake" of bad debts, as Kamran Azim, one of 26 participants from around the world in the Wharton Societal Wealth Program, put it during a discussion on growth and sustainability in microfinance. Azim is director of operations of the Kashf Foundation, a microfinance institution in Lahore, Pakistan, founded in 1996. He pointed out that over the past 20 to 30 years the methods of microfinance have hardly changed; but now, all of a sudden, the industry has been through an earthquake. As the introduction to one course in the program put it: "In the face of ever-accelerating change, people tend to rely on traditional ways of doing business. Yet it is precisely at such moments that innovation matters most." In addition, several participants noted that the microfinance industry must consolidate its development through innovation while remaining attentive to client needs.
The NIH Budget [Literature Translation]
Foreign Literature Translation
I. Foreign Original
Original: The NIH Budget

Introduction
Federal funding for biomedical research in the United States has fueled discoveries that have advanced our understanding of human disease, led to novel and effective diagnostic tools and therapies, and made our research enterprise an international paragon. Although it was not the original intent, this investment, through the National Institutes of Health (NIH), has also become an essential source of support for academic medical centers, providing funds for faculty and staff salaries, operational expenses, and even capital improvements related to research that can no longer be supported by clinical income. Until approximately 20 years ago, clinical income often subsidized research, but managed care, increased scrutiny and efficiency in the management of clinical expenses, and reductions in federal support for teaching hospitals have rendered clinical margins insufficient to support the research mission. Although some may see institution building as an inappropriate use of NIH funds, a consistent, productive biomedical research enterprise requires a solid infrastructure.

Ensuring durable federal support for such research has not, however, been without tribulations. As with all line items in the federal budget, NIH funding is subject to the vicissitudes of the political process, and intermittent periods of growth have been followed by periods of decline. Some argue that funding cycles refresh the research enterprise, eliminating through competition investigators whose work is not of the highest quality. Though not as sanguine about their purposes or consequences, the academic medical community has accepted these cycles and works to find ways to dampen the effects of downturns on research programs and institutional stability.

Redefining the Concept of Comprehensive Budget Management
Budgeting originated in 18th-century England, where it was first implemented in government departments, mainly to control the king's power to tax and thereby limit government spending. Budgeting was then developed further in the United States, where the public budgets of small towns played an important role in the establishment of the national budget system. Inspired by government budgeting, the concept of budget management was subsequently taken up by large American companies for business management. Later, the concept of comprehensive budget management emerged: using the budget as the main line along which the financial and non-financial resources of internal departments are controlled, together with a series of evaluation activities, in order to improve the level and efficiency of management. In the 20th century, comprehensive budget management was successfully applied by many large U.S. enterprises, such as General Electric, DuPont and General Motors, with good results. This method soon became a standard operating procedure in large modern industrial and commercial enterprises.
From its initial functions of planning and coordinating production, it has become an integrated strategic mechanism for implementing enterprise management that combines control, motivation and evaluation, and it now lies at the heart of the internal control system. Comprehensive budget management reflects three features. First, it enhances the organization's governance capacity and strengthens the contractual character of organization and management. Second, it supplies the constraints and incentives an organization needs, playing within the firm the roles that prices, the separation of powers and incentives play in the market. Finally, it links enterprise strategy with daily operations. To establish and perfect the modern enterprise system, a scientific budget management system must be set up: the comprehensive budget is not just a budgeting form for the modern enterprise, but a set of control mechanisms integrating targets, coordination, control and assessment. Comprehensive budget management strengthens the enterprise's ability to adapt to market changes and resist risks, and helps streamline the business management system and operating mechanism, providing the most effective support system for business strategy.

A comprehensive budget management system includes budget preparation, budget execution and monitoring, and budget assessment and performance evaluation. Budget preparation starts from strategy, shareholders' requirements and market conditions, and transforms strategy into daily operating performance indicators, with a series of quantitative and specific forms and documents as its carrier. Budget execution and monitoring is the process of turning budget goals into reality: the progress and results of budget execution are tracked, and variances are identified and decomposed. Budget assessment and performance evaluation uses regular and ad hoc evaluations to analyze and decompose variances, correct deviations in time, and apply appropriate incentives.

The Future of Biomedical Research
We have recently entered another period of stagnant funding for the NIH. Having doubled between 1998 and 2003, the NIH budget is expected to be $28.6 billion for fiscal year 2007, a 0.1 percent decrease from last year [1], or a 3.8 percent decrease after adjustment for inflation, the first true budgeted reduction in NIH support since 1970. Whereas national defense spending has reached approximately $1,600 per capita, federal spending for biomedical research now amounts to about $97 per capita, a rather modest investment in "advancing the health, safety, and well-being of our people" [1]. This downturn is more severe than any we have faced previously, since it comes on the heels of the doubling of the budget and threatens to erode the benefits of that investment. It takes many years for institutions to develop investigators skilled in modern research techniques and to build the costly, complicated infrastructure necessary for biomedical research. Rebuilding the investigator pool and the infrastructure after a downturn is expensive and time-consuming and weakens the benefits of prior funding.
This situation is unlikely to improve anytime soon: the resources required for the war in Iraq and for hurricane relief, along with the erosion of the tax base by the current administration's fiscal policies, are expected to have long-term, far-reaching effects.

Most institutes within the NIH have quickly adopted policy changes to minimize the adverse consequences, including reducing the maximum grant term from five years to four years, eliminating cost-of-living increases, and capping the amounts of awards. These changes have important effects on currently funded research and the infrastructure that it requires. Moreover, the future of biomedical research is also affected: NIH training grants represent a major source of support for postdoctoral and clinical fellows during their research experiences, and budget limitations affect not only available training slots but also the training climate. As it becomes increasingly difficult for established investigators to renew their grants, their frustration is transmitted to trainees, who increasingly opt for alternative career paths, shrinking the pipeline of future investigators.

Meanwhile, for more than 10 years, the pharmaceutical industry has been investing larger amounts in research and development than the federal government: $51.3 billion in fiscal year 2005 [2], for instance, or 78 percent more than NIH funding that year. Fiscal conservatives may view this industry investment as an appropriate, market-driven solution that should suffice and that does not justify additional government funding for biomedical research. However, the lion's share of industry funds is applied to drug development, especially clinical trials, rather than to fundamental research, and is targeted at applications that are first and foremost of value to the industry. Federal funding has traditionally targeted a broad range of investigator-initiated research, from studies of molecular mechanisms of disease to population-based studies of disease prevalence, promoting an unrestricted environment of biomedical discovery that serves as the basis for industry-driven development. These approaches are complementary, and both have served society well.

How, then, can we ensure that funding for biomedical research is maintained at adequate levels for the foreseeable future? Korn and colleagues have argued that stability and quality can be ensured by maintaining overall funding at an annual growth rate of 8 to 9 percent (unadjusted for inflation) [3]. They base their conclusion on the costs associated with six basic goals, which I endorse: preserving the integrity of the merit and peer-review process, which requires that funding levels not fall below the 30th-percentile success rate; maintaining a stable pool of new investigators; sustaining commitments to continuing awards; preserving the capacity of institutions that receive grants by minimizing cost-sharing with the federal government (e.g., for lease costs or animal care); recognizing the continuous growth of new research technologies; and maintaining a robust intramural NIH research program. I would, however, modify the required annual growth rate to 5 to 6 percent real growth plus inflation: the annual growth rate over the past 30 years has been approximately 10 percent, which reflects an annual average real growth rate of 5.2 percent and an average inflation rate of 4.8 percent (ranging from 1.1 to 13.3 percent). Unfortunately, the federal government probably cannot accommodate this growth rate under its current fiscal constraints.
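As a quick arithmetic check (ours, not the author's) that the quoted components are mutually consistent, note that real growth and inflation compound multiplicatively rather than add:

\[ (1 + 0.052)(1 + 0.048) - 1 = 0.1025 \approx 10\% , \]

which matches the approximately 10 percent nominal annual growth rate cited for the past 30 years.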
So maintaining, by statute, a stable base level of funding equivalent to the fiscal year 2006 budget, with annual inflationary adjustments, seems to me a reasonable starting point. Congress may then choose to allocate additional resources annually, subject to availability, aiming for an annual real growth rate of 5 to 6 percent. Alternatively, to avoid politicization of the flow of funds and their targets, a dedicated tax could be imposed on consumer products that threaten human health (such as fast foods, tobacco, and alcohol) and used to maintain the biomedical research infrastructure by a formulaic allocation, much as the gasoline tax is used to maintain the federal highway infrastructure.

The NIH can optimize the use of these funds by limiting the size and duration of awards as well as the number of awards per investigator. It might also consider shifting the target of grants. Whereas other countries often provide funding as an award for work accomplished before the application, the NIH theoretically funds proposed work, though in reality the peer-review process effectively requires that a hypothesis virtually be proved correct before funding is approved. Within the NIH intramural research program, funding levels for individual laboratories are often decided on the basis of accomplishments during the previous cycle, so there is already a precedent that can be applied to the extramural program. Of course, new investigators would need to be reviewed differently to ensure appropriate allocation of funds to these promising members of the research community who have no or limited previous research accomplishments.

Even with such changes, however, it would be preferable for academic medical centers to cease relying so heavily on the NIH for research funding. In addition to having investigators seek funding from not-for-profit organizations and from industry, I believe that centers should encourage major nongovernmental funding organizations to consolidate their resources into a durable pool of support for the best research proposals in the life sciences. In addition, individual centers should encourage generous donors to support unrestricted research endowments designed to fund translational and clinical research programs within the medical center, or to contribute to a national pool linked with support from industry to establish a national endowment for funding translational research and drug or device development within academic medical centers. Such promotion of later-phase research within academic medical centers could enhance the value of the intellectual property derived from it, financial benefits from which could, in turn, be used to establish research endowments within the medical centers.

The federal government might also consider alternative ways to fund the NIH budget that are independent of allocations from the tax base. One approach might include seeking support from industries whose products contribute to the burden of disease, providing tax credits as an incentive for their contribution. These resources could be used to establish an independently managed national fund, which could be used to ensure adequate support for biomedical research without the funding gaps or oscillations that currently plague the process.
In this scenario, unused money from any fiscal year would be retained in the fund, with the goal of achieving self-sustained growth. Whatever mechanisms are ultimately chosen, it seems clear that new methods of support must be developed if biomedical research is to continue to thrive in the United States. The goal of a durable, steady stream of support for research in the life sciences has never been more pressing, since the research derived from that support has never promised greater benefits. The fate of life-sciences research should not be consigned to the political winds of Washington.

Source: Joseph Loscalzo. The NIH Budget. The New England Journal of Medicine, April 20, 2006, Vol. 354(16), pp. 1665-1667.

II. Translated text: The NIH Budget. Introduction: United States federal funding of biomedical research has advanced our understanding of human disease and driven discovery, pointed the way to new and effective diagnostic tools and treatments, and made our research enterprise an international model.
5. Foreign-literature translation (original attached): Industrial cluster, regional brand
Foreign-literature translation (original attached). Translated text I: The Competitive Advantage of Industrial Clusters — the Case of the Dalian Software Park, China. Weilin Zhao, Chihiro Watanabe, Charla Griffy-Brown [J]. Marketing Science, 2009(2): 123-125. Abstract: With the aim of promoting industrial development, this paper examines the competitive advantage of a Chinese software park.
Industrial clusters are deeply embedded in local institutional systems and therefore possess distinctive competitive advantages.
The case of the Dalian Software Park in China is analyzed qualitatively using Porter's "diamond" model and the results of a SWOT analysis.
An industrial cluster is a set of geographically concentrated companies rooted in a local institutional system of government, industry, and academia; from this system it draws substantial resources and thereby gains a competitive advantage for industrial economic development.
To successfully navigate the shift in China's economic paradigm from mass production to new-product development, it is essential to continuously strengthen the competitive advantage of industrial clusters and to promote industrial and regional economic development.
Keywords: competitive advantage; industrial cluster; local institutional system; Dalian Software Park; China; science park; innovation; regional development. Industrial clusters: The industrial cluster is a leading-edge concept in economic development that Porter [1] did much to popularize.
As a recognized expert on global economic strategy, he pointed out the role of industrial clusters in promoting regional economic development.
He wrote that the concept of clusters — companies, suppliers, and institutions associated with an industry and concentrated in a particular geographic location — has become a new element in how companies and governments think about and assess local competitive advantage and make public policy.
To this day, however, he has not given a precise definition of the industrial cluster.
More recently, progress has been made in the literature examined by Doeringer and Terkla [2] and Levy [3], which studies industrial clusters and identifies them as geographic concentrations of industries that gain performance advantages.
"Geographic concentration" defines a key and distinctive fundamental property of industrial clusters.
A cluster is formed by many companies concentrated in a specific locality; they usually share markets, suppliers, trading partners, educational institutions, and intangibles such as knowledge and information, and they likewise face similar opportunities and threats.
Industrial clusters around the world follow many different development models.
Silicon Valley in California and Route 128 in Massachusetts, for example, are well-known industrial clusters in the United States.
The former is famous for microelectronics, biotechnology, and its venture capital market, while the latter is renowned worldwide for software, computers, and communications hardware [4].
Graduation thesis — Cost Control: foreign original and translation [sample template]
Undergraduate graduation design (thesis): foreign original and translation. Department: Management. Student name: —. Major: Financial Management. Class: —. Student ID: —. Supervisor: —. June 2014. Foreign original and translation: Cost Control. Roger J. AbiNader. Reference for Business, Encyclopedia of Business, 2nd ed.
Cost control, also known as cost management or cost containment, is a broad set of cost accounting methods and management techniques with the common goal of improving business cost-efficiency by reducing costs, or at least restricting their rate of growth. Businesses use cost control methods to monitor, evaluate, and ultimately enhance the efficiency of specific areas, such as departments, divisions, or product lines, within their operations. In 1987, Cooper and Kaplan, in an article entitled "How Cost Accounting Systematically Distorts Product Costs", first put forward the theory of "cost drivers": cost is, in essence, a function of a variety of independent or interacting factors (independent variables) that work together to drive the result. What, then, are the factors that drive cost? Traditionally, business volume (such as output) was treated as the only cost driver (independent variable), or at least as playing a decisive role in cost allocation, leaving other factors (drivers) aside. In accordance with this single cost driver, full cost is divided into two categories: variable costs and fixed costs.
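A minimal sketch of the traditional single-driver cost model just described may help; the dollar figures are invented for illustration, and only the fixed/variable split comes from the text:

```python
def total_cost(volume, fixed_cost, unit_variable_cost):
    """Full cost under the traditional single-driver model:
    volume (the lone cost driver) splits cost into fixed + variable."""
    return fixed_cost + unit_variable_cost * volume

# Invented figures: $50,000 of fixed cost, $12 of variable cost per unit.
for q in (1_000, 5_000, 10_000):
    c = total_cost(q, 50_000, 12.0)
    print(f"volume {q:>6}: total {c:>10,.0f}, average per unit {c / q:6.2f}")
```

As volume rises, the fixed component is spread over more units and the average cost per unit falls — the behaviour the single-driver model is built to capture.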
Foreign-literature translation: original + translated text
Foreign-literature translation original: Analysis of Continuous Prestressed Concrete Beams. Chris Burgoyne. March 26, 2005.

1. Introduction

This conference is devoted to the development of structural analysis rather than the strength of materials, but the effective use of prestressed concrete relies on an appropriate combination of structural analysis techniques with knowledge of the material behaviour. Design of prestressed concrete structures is usually left to specialists; the unwary will either make mistakes or spend inordinate time trying to extract a solution from the various equations.

There are a number of fundamental differences between the behaviour of prestressed concrete and that of other materials. Structures are not unstressed when unloaded; the design space of feasible solutions is totally bounded; in hyperstatic structures, various states of self-stress can be induced by altering the cable profile, and all of these factors are influenced by creep and thermal effects. How were these problems recognised and how have they been tackled?

Ever since the development of reinforced concrete by Hennebique at the end of the 19th century (Cusack 1984), it was recognised that steel and concrete could be more effectively combined if the steel was pretensioned, putting the concrete into compression. Cracking could be reduced, if not prevented altogether, which would increase stiffness and improve durability. Early attempts all failed because the initial prestress soon vanished, leaving the structure to behave as though it was reinforced; good descriptions of these attempts are given by Leonhardt (1964) and Abeles (1964).

It was Freyssinet's observations of the sagging of the shallow arches on three bridges that he had just completed in 1927 over the River Allier near Vichy which led directly to prestressed concrete (Freyssinet 1956). Only the bridge at Boutiron survived WWII (Fig. 1). Hitherto, it had been assumed that concrete had a Young's modulus which remained fixed, but he recognised that the deferred strains due to creep explained why the prestress had been lost in the early trials. Freyssinet (Fig. 2) also correctly reasoned that high tensile steel had to be used, so that some prestress would remain after the creep had occurred, and also that high quality concrete should be used, since this minimised the total amount of creep. The history of Freyssinet's early prestressed concrete work is written elsewhere.

Figure 1: Boutiron Bridge, Vichy. Figure 2: Eugen Freyssinet.

At about the same time work was underway on creep at the BRE laboratory in England ((Glanville 1930) and (1933)). It is debatable which man should be given credit for the discovery of creep, but Freyssinet clearly gets the credit for successfully using the knowledge to prestress concrete.

There are still problems associated with understanding how prestressed concrete works, partly because there is more than one way of thinking about it. These different philosophies are to some extent contradictory, and certainly confusing to the young engineer. It is also reflected, to a certain extent, in the various codes of practice.

Permissible stress design philosophy sees prestressed concrete as a way of avoiding cracking by eliminating tensile stresses; the objective is for sufficient compression to remain after creep losses. Untensioned reinforcement, which attracts prestress due to creep, is anathema.
This philosophy derives directly from Freyssinet's logic and is primarily a working stress concept.

Ultimate strength philosophy sees prestressing as a way of utilising high tensile steel as reinforcement. High strength steels have high elastic strain capacity, which could not be utilised when used as reinforcement; if the steel is pretensioned, much of that strain capacity is taken out before bonding the steel to the concrete. Structures designed this way are normally designed to be in compression everywhere under permanent loads, but allowed to crack under high live load. The idea derives directly from the work of Dischinger (1936) and his work on the bridge at Aue in 1939 (Schonberg and Fichter 1939), as well as that of Finsterwalder (1939). It is primarily an ultimate load concept. The idea of partial prestressing derives from these ideas.

The load-balancing philosophy, introduced by T.Y. Lin, uses prestressing to counter the effect of the permanent loads (Lin 1963). The sag of the cables causes an upward force on the beam, which counteracts the load on the beam. Clearly, only one load can be balanced, but if this is taken as the total dead weight, then under that load the beam will perceive only the net axial prestress and will have no tendency to creep up or down (a short numerical sketch of this idea appears at the end of this extract).

These three philosophies all have their champions, and heated debates take place between them as to which is the most fundamental.

2. Section design

From the outset it was recognised that prestressed concrete has to be checked at both the working load and the ultimate load. For steel structures, and those made from reinforced concrete, there is a fairly direct relationship between the load capacity under an allowable stress design, and that at the ultimate load under an ultimate strength design. Older codes were based on permissible stresses at the working load; new codes use moment capacities at the ultimate load. Different load factors are used in the two codes, but a structure which passes one code is likely to be acceptable under the other.

For prestressed concrete, those ideas do not hold, since the structure is highly stressed, even when unloaded. A small increase of load can cause some stress limits to be breached, while a large increase in load might be needed to cross other limits. The designer has considerable freedom to vary both the working load and ultimate load capacities independently; both need to be checked.

A designer normally has to check the tensile and compressive stresses, in both the top and bottom fibre of the section, for every load case.
The critical sections are normally, but not always, the mid-span and the sections over piers, but other sections may become critical when the cable profile has to be determined. The stresses at any position are made up of three components, one of which normally has a different sign from the other two; consistency of sign convention is essential. If P is the prestressing force and e its eccentricity, A and Z are the area of the cross-section and its elastic section modulus, while M is the applied moment, then

f_t ≤ P/A + P·e/Z − M/Z ≤ f_c    (1)

where f_t and f_c are the permissible stresses in tension and compression. Thus, for any combination of P and M, the designer already has four inequalities to deal with.

The prestressing force differs over time, due to creep losses, and a designer is usually faced with at least three combinations of prestressing force and moment:

• the applied moment at the time the prestress is first applied, before creep losses occur,
• the maximum applied moment after creep losses, and
• the minimum applied moment after creep losses.

Figure 4: Gustave Magnel.

Other combinations may be needed in more complex cases. There are at least twelve inequalities that have to be satisfied at any cross-section, but since an I-section can be defined by six variables, and two are needed to define the prestress, the problem is over-specified and it is not immediately obvious which conditions are superfluous. In the hands of inexperienced engineers, the design process can be very long-winded. However, it is possible to separate out the design of the cross-section from the design of the prestress. By considering pairs of stress limits on the same fibre, but for different load cases, the effects of the prestress can be eliminated, leaving expressions of the form:

Z ≥ (moment range) / (permissible stress range)    (2)

These inequalities, which can be evaluated exhaustively with little difficulty, allow the minimum size of the cross-section to be determined. Once a suitable cross-section has been found, the prestress can be designed using a construction due to Magnel (Fig. 4). The stress limits can all be rearranged into the form:

e ≤ −Z/A + (f·Z + M) · (1/P)    (3)

By plotting these on a diagram of eccentricity versus the reciprocal of the prestressing force, a series of bound lines will be formed. Provided the inequalities (2) are satisfied, these bound lines will always leave a zone showing all feasible combinations of P and e. The most economical design, using the minimum prestress, usually lies on the right hand side of the diagram, where the design is limited by the permissible tensile stresses.

Plotting the eccentricity on the vertical axis allows direct comparison with the cross-section, as shown in Fig. 5. Inequalities (3) make no reference to the physical dimensions of the structure, but these practical cover limits can be shown as well. A good designer knows how changes to the design and the loadings alter the Magnel diagram. Changing both the maximum and minimum bending moments, but keeping the range the same, raises and lowers the feasible region. If the moments become more sagging the feasible region gets lower in the beam. In general, as spans increase, the dead load moments increase in proportion to the live load. A stage will be reached where the economic point (A on Fig. 5) moves outside the physical limits of the beam; Guyon (1951a) denoted the limiting condition as the critical span.
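The four inequalities in (1), checked across the load combinations listed above, lend themselves to a direct feasibility test of a (P, e) pair. The sketch below is illustrative only: the section properties, moments, and stress limits are assumed numbers, not values from the paper, compression is taken as positive, and e is measured positive below the centroid.

```python
def fibre_stresses(P, e, M, A, Z_t, Z_b):
    """Top and bottom fibre stresses for prestress P at eccentricity e
    (positive below the centroid) under sagging moment M.
    Compression positive; units must be consistent (here N and mm)."""
    top = P / A - P * e / Z_t + M / Z_t
    bottom = P / A + P * e / Z_b - M / Z_b
    return top, bottom

def feasible(P, e, moments, A, Z_t, Z_b, f_t, f_c):
    """True if every load case keeps both fibres between the permissible
    tensile stress f_t (negative) and compressive stress f_c."""
    for M in moments:  # e.g. transfer, max and min service moments
        for s in fibre_stresses(P, e, M, A, Z_t, Z_b):
            if not (f_t <= s <= f_c):
                return False
    return True

# Assumed numbers: a 300 x 600 mm rectangle, so Z = b*h**2/6.
A, Z = 300 * 600, 300 * 600**2 / 6
print(feasible(P=1.2e6, e=150, moments=[80e6, 250e6],
               A=A, Z_t=Z, Z_b=Z, f_t=-2.0, f_c=20.0))  # -> True
```

Sweeping e against 1/P with this check traces out exactly the feasible zone of the Magnel diagram described in the text.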
Shorter spans will be governed by tensile stresses in the two extreme fibres, while longer spans will be governed by the limiting eccentricity and tensile stresses in the bottom fibre. However, it does not take a large increase in moment before compressive stresses govern in the bottom fibre under maximum moment. Only when much longer spans are required, and the feasible region moves as far down as possible, does the structure become governed by compressive stresses in both fibres.

3. Continuous beams

The design of statically determinate beams is relatively straightforward; the engineer can work on the basis of the design of individual cross-sections, as outlined above. A number of complications arise when the structure is indeterminate, which means that the designer has to consider not only a critical section but also the behaviour of the beam as a whole. These are due to the interaction of a number of factors, such as creep, temperature effects and construction sequence effects. It is the development of these ideas which forms the core of this paper. The problems of continuity were addressed at a conference in London (Andrew and Witt 1951). The basic principles, and nomenclature, were already in use, but to modern eyes the concentration on hand analysis techniques was unusual, and one of the principal concerns seems to have been the difficulty of estimating losses of prestressing force.

3.1 Secondary moments

A prestressing cable in a beam causes the structure to deflect. Unlike the statically determinate beam, where this motion is unrestrained, the movement causes a redistribution of the support reactions which in turn induces additional moments. These are often termed secondary moments (but they are not always small) or parasitic moments (but they are not always bad).

Freyssinet's bridge across the Marne at Luzancy, started in 1941 but not completed until 1946, is often thought of as a simply supported beam, but it was actually built as a two-hinged arch (Harris 1986), with support reactions adjusted by means of flat jacks and wedges which were later grouted in (Fig. 6). The same principles were applied in the later and larger beams built over the same river.

Magnel built the first indeterminate beam bridge at Sclayn, in Belgium (Fig. 7) in 1946. The cables are virtually straight, but he adjusted the deck profile so that the cables were close to the soffit near mid-span. Even with straight cables the sagging secondary moments are large; about 50% of the hogging moment at the central support caused by dead and live load.

The secondary moments cannot be found until the profile is known, but the cable cannot be designed until the secondary moments are known. Guyon (1951b) introduced the concept of the concordant profile, which is a profile that causes no secondary moments; e_s and e_p thus coincide. Any line of thrust is itself a concordant profile. The designer is then faced with a slightly simpler problem; a cable profile has to be chosen which not only satisfies the eccentricity limits (3) but is also concordant. That in itself is not a trivial operation, but is helped by the fact that the bending moment diagram that results from any load applied to a beam will itself be a concordant profile for a cable of constant force. Such loads are termed notional loads to distinguish them from the real loads on the structure.
Superposition can be used to progressively build up a set of notional loads whose bending moment diagram gives the desired concordant profile.

3.2 Temperature effects

Temperature variations apply to all structures, but the effect on prestressed concrete beams can be more pronounced than in other structures. The temperature profile through the depth of a beam (Emerson 1973) can be split into three components for the purposes of calculation (Hambly 1991). The first causes a longitudinal expansion, which is normally released by the articulation of the structure; the second causes curvature which leads to deflection in all beams and reactant moments in continuous beams, while the third causes a self-equilibrating set of stresses across the cross-section.

The reactant moments can be calculated and allowed for, but it is the self-equilibrating stresses that cause the main problems for prestressed concrete beams. These beams normally have high thermal mass, which means that daily temperature variations do not penetrate to the core of the structure. The result is a very non-uniform temperature distribution across the depth, which in turn leads to significant self-equilibrating stresses. If the core of the structure is warm while the surface is cool, such as at night, then quite large tensile stresses can be developed on the top and bottom surfaces. However, they only penetrate a very short distance into the concrete and the potential crack width is very small. It can be very expensive to overcome the tensile stress by changing the section or the prestress.
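As an aside on the load-balancing philosophy mentioned in section 1, the standard textbook result (not derived in this paper) is that a parabolic tendon with force P and drape f over span L exerts an upward distributed load w_p = 8Pf/L² on the beam. A minimal sketch with assumed numbers:

```python
def balanced_udl(P, drape, span):
    """Upward equivalent load (kN/m) from a parabolic tendon:
    w_p = 8 * P * drape / span**2   (P in kN, drape and span in m)."""
    return 8 * P * drape / span ** 2

# Invert the formula: what force balances 20 kN/m of dead load
# on a 20 m span with a 0.4 m drape?  (Illustrative numbers.)
w_target, span, drape = 20.0, 20.0, 0.4
P = w_target * span ** 2 / (8 * drape)
print(f"P = {P:.0f} kN balances {balanced_udl(P, drape, span):.1f} kN/m")  # 2500 kN
```

Under the balanced load the beam carries pure axial compression, which is exactly the "no tendency to creep up or down" condition described earlier.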
Foreign-literature translation — Determinants of Incentive Intensity in Group-Based Rewards
Foreign-literature translation. I. Foreign original:

DETERMINANTS OF INCENTIVE INTENSITY IN GROUP-BASED REWARDS

THEORY AND HYPOTHESES

Agency Theory and Incentive Intensity

A fundamental argument in the agency theory literature and in much of the compensation literature is that the incentive intensity of rewards — often measured as the variable portion of pay — enhances employee contributions to performance. Incentive-intensive pay increases effort and may increase the talent level of those attracted to a compensation plan. Higher incentive intensity increases the marginal gains in income that employees receive from increased effort. If increased effort has physical or psychological costs, agents will choose levels of effort whereby the marginal gain from effort equals its marginal cost. Therefore, when pay plans are more incentive-intensive, employees reach higher levels of effort before deciding that these increases fail to compensate for their personal costs. Research in a variety of fields confirms this relationship between incentive intensity and effort (e.g., Ehrenberg & Bognanno, 1990; Landau & Leventhal, 1976; Zenger, 1992).

Higher incentive intensity may also help companies lure and keep talented workers (Lazear, 1986; Rynes, 1987; Zenger, 1994). Given the randomness of measured performance, as incentive intensity rises, so does the uncertainty of an individual's pay. The higher the incentive intensity, the more likely it is that only the very best performers (those who have the highest probability of generating strong measured performance) will find it efficient to assume the risk of an incentive-intensive contract. As suggested in empirical studies, employees with lower ability — those unlikely to generate high performance — will prefer contracts that place less emphasis on performance (Cable & Judge, 1994; U.S. Office of Personnel Management, 1988; Zenger, 1994).

Incentive intensity in group rewards should function much like incentive intensity in individual rewards: higher levels should motivate effort, lure talent, and thereby enhance performance. As Kruse argued in regard to profit-sharing plans, "The size of the profit share in relation to other employee compensation should clearly be an important factor in the impact of profit sharing upon workplace relations and performance. A profit share that, for example, averages less than 1 percent of employee compensation is unlikely to be taken seriously by employees as an incentive for increased effort, monitoring, and cooperation with workers" (1993: 81). By escalating the incentive intensity of group rewards (the incentive portion of pay), managers enhance the individual benefit from increased group effort and promote desirable self-selection. Although group incentive pay is less attractive to top talent than individual incentive pay (see Cable & Judge, 1994; Weiss, 1987), top talent should prefer highly incentive-intensive group pay to weakly incentive-intensive group pay. Kruse (1993) provided some empirical evidence of a relationship between incentive intensity and performance in group rewards. Thus, our motivation for exploring the determinants of incentive intensity stemmed from the underlying assumption that higher incentive intensity triggers higher effort, lures superior talent, and generally yields higher performance levels.

Costs of Increasing Incentive Intensity

The rather low incentive intensity characteristic of rewards in many firms suggests significant impediments to raising incentive intensity. Agency theorists point to four impediments.
First, incentive intensity is constrained by agents' inability to control performance measures (Lai & Srinivasan, 1993; Milgrom & Roberts, 1992; Weitzman, 1980). If agents cannot control performance measures, then imposing high levels of incentive intensity imposes substantial uncertainty on employees and provides rather modest motivational benefits.

Second, incentive intensity is constrained by the inaccuracy of performance measures, or the weakness of the link between true and measured performance (Holmstrom & Milgrom, 1991; Milgrom & Roberts, 1992). If measured performance is only weakly correlated with true performance, aggressively rewarding measured performance may encourage agents to neglect unmeasured performance dimensions, thereby lowering true performance.

Third, some agency theorists and scholars outside economics have argued that processes of pay comparison constrain incentive intensity within organizations (Lazear, 1989; Milgrom & Roberts, 1988; Pfeffer & Langton, 1993; Zenger, 1992). Higher incentive intensity generates greater variance in pay and magnifies the negative effects of comparison processes (Lazear, 1989; Pfeffer & Langton, 1993). Employees reduce their effort, leave a firm, or even sabotage its activities when they perceive pay differences as inequitable (Adams, 1965; Deutsch, 1985). Lowering incentive intensity reduces pay variance and thus diminishes the effects of these comparisons.

Fourth, incentive intensity is constrained by "intertemporal" problems of incentive ratcheting and output restriction. Managers have an incentive to strategically alter incentive structures, adjusting payouts downward (or performance hurdles upward) once employees reveal their capacity to perform (Gibbons, 1987; Miller, 1992). Recognizing this managerial incentive, employees have an incentive to restrict output in anticipation of downward ratcheting of payouts should they reveal their capacity for hard work (Mathewson, 1931; Whyte, 1955). Such concerns may prompt the reduction or elimination of incentive intensity in rewards.

Determinants of Incentive Intensity in Group-Based Rewards

Although group rewards partially circumvent the impediments to incentive intensity detailed above, designers of group pay plans nonetheless confront similar impediments.

Control of performance measures. A primary advantage of group rewards is the capacity to link participants' pay to a performance measure over which they have rather complete control. However, this control is collective, with each individual having only a limited capacity to control the outcome. Consistent with agency theory, this inability to individually control observable performance measures encourages lower levels of incentive intensity (Lai & Srinivasan, 1993; Milgrom & Roberts, 1992; Weitzman, 1980).

A group's size strongly influences its members' capacity to individually control their group's performance and subsequent individual pay. Clearly, a group member has more direct control over the performance of a small group than over that of a large group. Thus, to make a large portion of individual pay contingent on a large group's performance attaches pay to a measure over which any given employee has little control. Consequently, when groups are large, incentive intensity will be lower, reflecting low ability to control measured performance. The direct incentives triggered by group rewards may dissipate rather rapidly as groups increase in size.
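Both the marginal-gain-equals-marginal-cost argument at the start of this section and the group-size dilution just described can be made concrete with a standard agency sketch. The functional forms below (a linear share of output, a quadratic effort cost) are illustrative assumptions, not the authors' model:

```python
def optimal_effort(beta, c, n=1):
    """Effort chosen where marginal pay equals marginal cost.

    An agent keeps beta/n of marginal group output (n = 1 is an
    individual plan) and bears effort cost c * e**2 / 2, so the
    first-order condition beta/n = c*e gives e* = beta / (c*n).
    """
    return beta / (c * n)

c = 2.0  # assumed curvature of the effort-cost function

# Rising incentive intensity raises chosen effort (individual plan):
for beta in (0.1, 0.5, 1.0):
    print(f"intensity {beta:.1f} -> effort {optimal_effort(beta, c):.2f}")

# Group size dilutes the incentive, but at a diminishing rate
# (the pattern the first hypothesis below formalizes):
for n1, n2 in ((10, 20), (1000, 1010)):
    drop = optimal_effort(1.0, c, n1) - optimal_effort(1.0, c, n2)
    print(f"{n1} -> {n2} members: effort falls by {drop:.6f}")
```

Growing from 10 to 20 members cuts chosen effort by 0.025 in this toy model, while growing from 1,000 to 1,010 cuts it by only about 0.000005 — negative but sharply diminishing.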
However, group rewards trigger mutual monitoring and "concertive" control (Barker, 1993); employees monitor and encourage their peers' performance and tightly screen new applicants (Welbourne, Balkin, & Gomez-Mejia, 1993). Such mutual monitoring or concertive control may extend the effectiveness of group rewards to size levels at which direct financial performance incentives are quite minimal. The effectiveness of mutual monitoring should, nonetheless, remain closely linked to group size and the level of performance measurement. Mutual monitoring should be most prevalent within small group plans, where the contributions of specific colleagues more powerfully affect individual pay. Therefore, a greater capacity to control performance both directly and indirectly enables plan designers to escalate incentive intensity in small groups while imposing more modest levels of uncertainty.

The effects of group size on incentive intensity, however, may diminish with size. Thus, although increasing a group from 10 to 20 members has a large effect on members' direct and indirect control of performance measures, increasing from 1,000 to 1,010 members has rather limited bearing on such control. Hence,

Hypothesis 1. Increases in group size have a negative, but diminishing, effect on the incentive intensity of group-based pay plans.

The capacity to control group performance measures may also be influenced by the organizational level at which performance is measured, in part because organizational level is closely related to unit size. Our focus was on employee incentive plans, rather than managerial incentive plans. With the latter excluded, plans with lower-level measures, such as work team performance, are clearly more easily controlled by individual group members than plans that measure performance at higher organizational levels (such as divisions). Thus,

Hypothesis 2. Group-based pay plans linked to performance measures at a lower organizational level will have greater incentive intensity than plans linked to performance measures at a higher organizational level.

The composition of a group may also influence group members' capacity to control performance measures and thereby the optimal level of incentive intensity. Arguably, employees in managerial positions have greater influence over individual performance than employees in lower-level positions (Baiman, Larker, & Rajan, 1993; Gerhart & Milkovich, 1990). Managers and professionals have the authority to influence a broader set of performance-determining decisions than do lower-level employees. Similar arguments can be made for professional employees having greater control over organizational outcomes than nonprofessional, nonmanagerial employees. Consistent with agency theory reasoning, such increased control over outcomes should enable an increase in incentive intensity (Baiman et al., 1993). Empirical studies of managerial pay plans by Gerhart and Milkovich (1990) and Bushman, Indjejikian, and Smith (1994) have confirmed a positive relationship between incentive intensity and hierarchical level. Our focus was on employee-level pay plans, but many such plans also encompass management personnel. Hence, where managers and professionals comprise a significant percentage of plan participants, the enhanced ability of those groups to control performance should trigger the use of more incentive-intensive rewards. Thus,

Hypothesis 3.
Group incentive plans attached to groups in which a high proportion of participants are managers and professionals will have higher levels of incentive intensity than group incentive plans attached to groups in which a low proportion of participants are managers and professionals.

Measurement accuracy and measurement complexity. The incentive intensity of group rewards should also depend on the complexity and accuracy of group performance measurement. If critical dimensions of performance are neglected, then aggressively rewarding measured performance yields dysfunctional outcomes. As discussed in writing on both agency theory and organizational behavior, performance measurement is particularly problematic when employees are assigned multiple tasks or when a single task has multiple performance dimensions (Holmstrom & Milgrom, 1991; Kerr, 1975). Individuals respond to what is measured and rewarded and neglect other dimensions of performance (Kerr, 1975). As Holmstrom and Milgrom (1991) noted, simply adding measures to address the full spectrum of performance dimensions does not ensure optimal attention to all dimensions. Variability in the accuracy with which differing performance dimensions are evaluated, or variability in their ability to be controlled, creates incentives for employees to attend selectively to those dimensions more easily measured or controlled. Incentive intensity should, thus, be lower when jobs have dimensions or tasks that are difficult to measure. Doing otherwise only encourages allocations of effort that are less than optimal for a firm.

Productivity and output volume are primary performance measures in many organizations. However, as has been argued in the total quality management literature, a focus on these primary performance indicators often leads to neglect of quality (Deming, 1993; Ishikawa, 1985). Although occasionally quality is quite easily measured, typically it is a performance dimension that is more difficult to accurately measure and control than other performance dimensions such as cost, output volume, or profitability. Consequently, when employees confront incentives that compensate attention both to quality and to other more accurately measured and more easily controlled performance attributes, they rationally neglect quality. Therefore, when quality is an important performance dimension, firms limit incentive intensity to increase attention to quality (Holmstrom & Milgrom, 1991; Laffont & Tirole, 1989). Not surprisingly, leaders in the quality movement have recommended the avoidance of performance-based rewards (Deming, 1993; Ishikawa, 1985: 26-27). Thus,

Hypothesis 4. Group incentive plans that reward quality will have lower incentive intensity than plans that do not measure and reward quality.

Numerous performance measures in a group pay plan may further indicate complex and difficult performance measurement. Having numerous performance measures implies a broad range of important performance dimensions, some of which are potentially problematic to measure. Adding performance measures to a group incentive plan may focus attention on dimensions that would otherwise be neglected, but such additions cannot trigger optimal allocations of effort, as previously discussed. Escalating incentive intensity in work settings with such complex measurement may heighten neglect of those dimensions that are difficult or impossible to measure. Thus, in response to measurement complexity, plan designers may restrict incentive intensity to limit misallocation of effort.
Hence,

Hypothesis 5. Group incentive plans with a large number of performance measures will have lower incentive intensity than group incentive plans with few performance measures.

Comparison processes and firm size. Group-based pay plans are implemented within a broad organizational setting — a setting in which employees actively engage in processes of social comparison around the topic of pay. Agency theorists, such as Lazear (1989) and Milgrom and Roberts (1988), and psychologists and sociologists, such as Deutsch (1985) and Pfeffer and Langton (1993), have noted the potential desirability of pay equality as a means of promoting harmony and avoiding costly pay comparisons. Highly exaggerated self-perceptions (Meyer, 1975) ensure that pay differences are viewed as inequitable. Given individuals' costly responses to perceived inequity — such as departure and reduced effort (Adams, 1965; Deutsch, 1985) — firms choose to reduce performance-based variance in individual pay (Lazear, 1989; Zenger, 1992, 1994).

Source: Todd R. Zenger, C. R. Marshall. Determinants of incentive intensity in group-based rewards [J]. Academy of Management Journal, 2000, Vol. 43, No. 2, 149-163.

II. Translated text: Determinants of Incentive Intensity in Group-Based Rewards. Theory and Hypotheses. Agency theory and incentive intensity: A fundamental argument in the agency theory and compensation literatures is that the incentive intensity of rewards — often measured as the variable portion of pay — enhances employees' contributions to performance.
20. Foreign-literature translation, original and translated text — sample format
Appendix to graduation design (thesis), North China Electric Power University, Science and Technology College: foreign-literature translation. Student ID: 0819********. Name: Zong Pengcheng. Department: Mechanical Engineering and Automation. Class: Mechanical 08K1. Supervisor: Zhang Chao. Original title: Development of a High-Performance Magnetic Gear. Date: (year/month/day).

Development of a High-Performance Magnetic Gear

Abstract: This paper presents calculated and measured results for a high-performance permanent-magnet gear.
The permanent-magnet gear analyzed here has a gear ratio of 5.5 and can deliver a torque of 27 Nm.
The analysis shows that, because its torsional spring constant is small, special attention must be paid to any system in which such a high-performance permanent-magnet gear is installed.
The analyzed gear has also been applied in practice in order to verify its predicted efficiency.
As measured, the torque of the magnetic gear, driven through the larger gear end, was only 16 Nm.
A systematic study of the magnetic gear's efficiency losses also shows why the actual operating efficiency is only 81%.
A large part of the losses originates in the bearings; because of a mechanical fault, back-up bearings were necessary in this case.
Without the small magnetic leakage originating from the shaft, we estimate that an efficiency as high as 96% could be achieved.
Comparison with conventional mechanical gears shows that magnetic gears offer better efficiency and greater torque per unit volume.
Finally, it can be concluded that the findings of this paper may help promote the evolution from conventional mechanical gears toward magnetic gears.
Keywords: finite element analysis (FEA), gearbox, high torque density, magnetic gear.
I. Introduction. Permanent magnets produce magnetic flux and magnetic force, and even after centuries many people remain fascinated by them.
During the revival of the past 20 years, it is precisely these advantages that have brought permanent magnets into wide practical use, including in cranes, loudspeakers, and couplings, and above all in permanent-magnet motors.
This revival is most evident in the field of small machines, where the use of permanent magnets markedly improves efficiency and torque density.
One field in which permanent magnets have not received much attention is that of transmissions; that is, magnetic couplings are not widely used in gearing.
A magnetic coupling can essentially be regarded as a magnetic gear with a 1:1 gear ratio.
Compared with standard electrical machines, which develop roughly 10 kN·m/m³ of torque per unit volume, magnetic couplings fitted with high-energy permanent magnets have a very high torque density, in the range of about 300–400 kN·m/m³.
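The measured figures quoted in this translation (gear ratio 5.5, output torque 16 Nm, efficiency 81%) can be tied together with elementary gear bookkeeping. The sketch below assumes an ideal speed ratio and is not taken from the original paper:

```python
ratio = 5.5        # speed-reduction ratio (from the abstract)
torque_out = 16.0  # Nm, measured at the low-speed side
efficiency = 0.81  # measured operating efficiency

# For a lossless gear, input torque = output torque / ratio;
# losses raise the torque actually demanded at the input shaft,
# since efficiency = (T_out * w_out) / (T_in * w_in) with w_in = ratio * w_out.
torque_in_ideal = torque_out / ratio
torque_in_actual = torque_in_ideal / efficiency
print(f"ideal input torque:  {torque_in_ideal:.2f} Nm")   # ~2.91 Nm
print(f"actual input torque: {torque_in_actual:.2f} Nm")  # ~3.59 Nm
```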
Foreign original and translation. I. Foreign original. Subject: Financial Analysis with the DuPont Ratio: A Useful Compass. Source: Steven C. Isberg, Ph.D.

Financial Analysis and the Changing Role of Credit Professionals

In today's dynamic business environment, it is important for credit professionals to be prepared to apply their skills both within and outside the specific credit management function. Credit executives may be called upon to provide insights regarding issues such as strategic financial planning, measuring the success of a business strategy or determining the viability of an acquisition candidate. Even so, the normal duties involved in credit assessment and management call for the credit manager to be equipped to conduct financial analysis in a rapid and meaningful way.

Financial statement analysis is employed for a variety of reasons. Outside investors are seeking information as to the long run viability of a business and its prospects for providing an adequate return in consideration of the risks being taken. Creditors desire to know whether a potential borrower or customer can service loans being made. Internal analysts and management utilize financial statement analysis as a means to monitor the outcome of policy decisions, predict future performance targets, develop investment strategies, and assess capital needs. As the role of the credit manager is expanded cross-functionally, he or she may be required to answer the call to conduct financial statement analysis under any of these circumstances. The DuPont ratio is a useful tool in providing both an overview and a focus for such analysis.

A comprehensive financial statement analysis will provide insights as to a firm's performance and/or standing in the areas of liquidity, leverage, operating efficiency and profitability. A complete analysis will involve both time series and cross-sectional perspectives. Time series analysis will examine trends using the firm's own performance as a benchmark. Cross sectional analysis will augment the process by using external performance benchmarks for comparison purposes. Every meaningful analysis will begin with a qualitative inquiry as to the strategy and policies of the subject company, creating a context for the investigation. Next, goals and objectives of the analysis will be established, providing a basis for interpreting the results. The DuPont ratio can be used as a compass in this process by directing the analyst toward significant areas of strength and weakness evident in the financial statements.

The DuPont ratio is calculated as follows:

ROE = (Net Income / Sales) × (Sales / Average Assets) × (Average Assets / Average Equity)

The ratio provides measures in three of the four key areas of analysis, each representing a compass bearing, pointing the way to the next stage of the investigation.

The DuPont Ratio Decomposition

The DuPont ratio is a good place to begin a financial statement analysis because it measures the return on equity (ROE). A for-profit business exists to create wealth for its owner(s). ROE is, therefore, arguably the most important of the key ratios, since it indicates the rate at which owner wealth is increasing. While the DuPont analysis is not an adequate replacement for detailed financial analysis, it provides an excellent snapshot and starting point, as will be seen below. The three components of the DuPont ratio, as represented in the equation, cover the areas of profitability, operating efficiency and leverage.
In the following paragraphs, we examine the meaning of each of these components by calculating and comparing the DuPont ratio using the financial statements and industry standards for Atlantic Aquatic Equipment, Inc. (Exhibits 1, 2, and 3), a retailer of water sporting goods.

Profitability: Net Profit Margin (NPM: Net Income / Sales)

Profitability ratios measure the rate at which either sales or capital is converted into profits at different levels of the operation. The most common are gross, operating and net profitability, which describe performance at different activity levels. Of the three, net profitability is the most comprehensive since it uses the bottom line net income in its measure. A proper analysis of this ratio would include at least three to five years of trend and cross-sectional comparison data. The cross sectional comparison can be drawn from a variety of sources. Most common are the Dun & Bradstreet Index of Key Financial Ratios and the Robert Morris Associates (RMA) Annual Statement Studies. Each of these volumes provides key ratios estimated for business establishments grouped according to industry (i.e., SIC codes). More will be discussed in regard to comparisons as our example is continued below. As is, over the two years, Whitbread has become less profitable.

Leverage: The Leverage Multiplier (Average Assets / Average Equity)

Leverage ratios measure the extent to which a company relies on debt financing in its capital structure. Debt is both beneficial and costly to a firm. The cost of debt is lower than the cost of equity, an effect which is enhanced by the tax deductibility of interest payments in contrast to taxable dividend payments and stock repurchases. If debt proceeds are invested in projects which return more than the cost of debt, owners keep the residual, and hence, the return on equity is "leveraged up." The debt sword, however, cuts both ways. Adding debt creates a fixed payment required of the firm whether or not it is earning an operating profit, and therefore, payments may cut into the equity base. Further, the risk of the equity position is increased by the presence of debt holders having a superior claim to the assets of the firm.

II. Translated text. Title: The DuPont Analysis System. Source: Steven C. Isberg, master's thesis, Transportation Research Institute. The DuPont Analysis System: Financial Analysis and the Changing Role of Credit Professionals. In today's dynamic business environment, it is very important for credit professionals to be able to apply their skills both within and outside the specific credit management function.
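A minimal sketch of the DuPont decomposition defined above; the input figures are invented for illustration and are not taken from the article's exhibits:

```python
def dupont(net_income, sales, avg_assets, avg_equity):
    """DuPont decomposition exactly as defined in the article:
    ROE = (NI / Sales) * (Sales / Avg Assets) * (Avg Assets / Avg Equity)."""
    npm = net_income / sales          # net profit margin (profitability)
    turnover = sales / avg_assets     # asset turnover (operating efficiency)
    leverage = avg_assets / avg_equity  # leverage multiplier
    roe = npm * turnover * leverage   # algebraically equal to NI / avg equity
    return npm, turnover, leverage, roe

# Invented figures for illustration:
npm, turnover, leverage, roe = dupont(120_000, 2_000_000, 1_500_000, 600_000)
print(f"NPM {npm:.1%} x turnover {turnover:.2f} x leverage {leverage:.2f} "
      f"= ROE {roe:.1%}")  # 6.0% x 1.33 x 2.50 = 20.0%
```

Because the sales and asset terms cancel algebraically, the product always equals net income divided by average equity; the value of the decomposition lies in seeing which of the three factors is moving ROE, which is exactly the "compass bearing" role the article describes.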