Foreign Literature Translations: Original Texts and Translations

Graduation Thesis (Design): Foreign Literature Translation and Original Text

Financial Systems, Financing Constraints and Investment: Empirical Evidence from OECD Countries. R. Semenov, Department of Economics, University of Nijmegen, Nijmegen, the Netherlands. This paper examines the effect of cash flow on firm investment in 11 OECD countries. We find that the sensitivity of investment to internally available funds differs significantly across countries, and that it is lower in countries characterized by close bank-firm relationships than in countries where bank-firm relationships are at arm's length. We also find that financing constraints are unrelated to aggregate indicators of financial development. Our results are consistent with the view that information and incentive problems in capital markets have an important effect on firm investment, and that close bank-firm relationships reduce these problems and thereby improve firms' access to external finance.

I. Introduction

Firms in different countries operate under markedly different financial systems.

Differences in the level of financial development (for example, credit relative to GDP and stock market capitalization relative to GDP), in the patterns of owner-manager and firm-creditor relationships, and in the level of activity in the market for corporate control are well documented. In a perfect capital market, a firm with positive net present value investment opportunities would always obtain funding.

Economic theory suggests, however, that market frictions such as information asymmetries and incentive problems make external capital more expensive, so that firms with profitable investment opportunities cannot necessarily raise the capital they need. This implies that financing factors, such as the amount of internally generated funds and the availability of new debt and equity, help determine firms' investment decisions. There is now a large body of empirical work examining the effect of the availability of external funds on investment decisions (see, for example, Fazzari et al. (1988), Hoshi et al. (1991), Chapman et al. (1996), Samuel (1998)). Most of these studies find that financial variables such as cash flow help explain firms' investment levels.
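The cash-flow sensitivity these studies estimate typically comes from a panel regression of investment on cash flow together with a proxy for investment opportunities such as Tobin's q. As a rough illustration of the approach (an assumed textbook-style specification, not the exact model of this paper or of the studies cited above), the following Python sketch estimates such a regression on synthetic firm-level data:

```python
# A minimal sketch (not the paper's exact specification): estimating the
# sensitivity of investment to cash flow in a firm-level panel.
# Variable names and the synthetic data below are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_firms, n_years = 100, 10
df = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), n_years),
    "year": np.tile(np.arange(n_years), n_firms),
})
df["q"] = rng.normal(1.0, 0.3, len(df))            # proxy for investment opportunities
df["cash_flow"] = rng.normal(0.15, 0.05, len(df))  # cash flow scaled by capital stock
# A true sensitivity of 0.4 is built into the synthetic data:
df["investment"] = (0.05 + 0.1 * df["q"] + 0.4 * df["cash_flow"]
                    + rng.normal(0, 0.02, len(df)))

# Investment regressed on q and cash flow with firm and year fixed effects.
model = smf.ols("investment ~ q + cash_flow + C(firm) + C(year)", data=df).fit()
print(model.params["cash_flow"])  # estimated investment-cash flow sensitivity
```

A positive and significant coefficient on cash_flow is what this literature reads as evidence of financing constraints; the cross-country comparison in the paper then asks how that coefficient varies with the financial system.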

These findings have been interpreted as evidence that firms' investment is constrained by the availability of external funds.

Many models emphasize that well-functioning financial intermediaries and financial markets help reduce information asymmetries and transaction costs, channel savings into long-term, high-return projects, and improve the efficiency of resource allocation (see Levine (1997) for a survey).

We therefore expect firms in countries with more developed financial systems to have easier access to external finance. Several authors have noted that established relationships between firms and financial intermediaries can further mitigate financial market frictions.

Literature Translation: Translation and Original

Class of 2009 Undergraduate Graduation Design (Thesis) Foreign Literature Translation. School: School of Physics and Electronic Engineering. Major: Optoelectronic Information Engineering. Name: Xu Chi. Student ID: Y05209222. Source: Surface & Coatings Technology 214 (2013) 131-137. Attachments: 1. Translation of the foreign material; 2. The original foreign text.

Attachment 1: Translation of the foreign material. Effect of gas temperature on the structural and optoelectronic properties of a-Si:H thin films deposited by PECVD. Abstract: The effect of gas temperature (Tg) on the structural and optoelectronic properties of a-Si:H thin films grown by plasma-enhanced chemical vapor deposition (PECVD) has been investigated using a variety of characterization techniques.

The gas temperature was identified as an important parameter for optimizing the preparation process and improving the structural and optoelectronic properties of the films.

The structural properties of the films were studied using atomic force microscopy (AFM), Fourier transform infrared spectroscopy (FTIR), Raman spectroscopy, and electron spin resonance (ESR).

In addition, spectroscopic ellipsometry (SE), optical transmission measurements in the ultraviolet-visible region, and electrical measurements were used to study the optical and electrical properties of the films.

It was found that varying Tg can modify the surface roughness, the order of the amorphous network, the hydrogen bonding configurations and the density of the films, and ultimately improve their optical and electrical properties.
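Where the optical properties are extracted from UV-visible transmission, a common analysis route (an assumption here: this excerpt does not spell out the procedure) is to convert transmittance into an absorption coefficient and estimate the optical band gap from a Tauc plot. A minimal Python sketch with placeholder data, assuming a known film thickness:

```python
# Minimal sketch of a standard UV-vis analysis for a thin film (an assumed
# method, not taken from this paper): absorption coefficient from
# transmittance, then a Tauc plot to estimate the optical band gap.
import numpy as np

d = 500e-7                                          # film thickness in cm (assumed 500 nm)
wavelength_nm = np.linspace(400, 800, 200)          # measurement grid
transmittance = np.clip(np.linspace(0.05, 0.9, 200), 1e-6, 1)  # placeholder data

energy_eV = 1239.84 / wavelength_nm                 # photon energy E = hc/lambda
alpha = -np.log(transmittance) / d                  # absorption coefficient (cm^-1)

# Tauc relation for an amorphous semiconductor: (alpha*E)^(1/2) is linear
# in E above the gap; extrapolating the linear region to zero gives Eg.
tauc_y = np.sqrt(alpha * energy_eV)
fit_region = energy_eV > 2.0                        # assumed linear region
slope, intercept = np.polyfit(energy_eV[fit_region], tauc_y[fit_region], 1)
print("Estimated Tauc gap (eV):", -intercept / slope)
```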

1. Introduction. Plasma-enhanced chemical vapor deposition (PECVD) is a technique for preparing hydrogenated amorphous silicon (a-Si:H) thin films, an important material with a wide range of practical applications.

It is used in solar cell production, in infrared detectors for night vision systems, and in thin-film transistors for flat-panel display devices.

All of these applications rely on its good electrical and optical properties as well as its compatibility with semiconductor technology.

However, owing to the nature of a-Si:H, PECVD preparation of these films is sensitive to deposition conditions such as substrate temperature, power density, gas flow rate, and pressure.

Much effort has been devoted to preparing high-quality a-Si:H films with low defect density and high structural stability.

It is well known that the substrate temperature strongly influences the diffusion of radicals on the growth surface, making it easier for these radicals to settle at optimal growth sites.

The substrate temperature has therefore been the most studied deposition parameter.

As far as temperature parameters in the PECVD process are concerned, besides the substrate temperature, the temperature (Tg) of the gas fed into the PECVD reaction chamber during the glow discharge is a new process parameter for tailoring the properties of a-Si:H films.

Foreign Literature Translation: Draft Translation and Original

Draft translation 1. A typical application of the Kalman filter is to predict the position coordinates and velocity of an object from a finite sequence of noisy (and possibly biased) observations of its position.

It can be found in many engineering applications, such as radar and computer vision.

Kalman filtering is also an important topic in control theory and control systems engineering.

For example, with radar, one is interested in tracking a target.

But measurements of the target's position, velocity, and acceleration are noisy at every moment.

The Kalman filter uses the target's dynamics to remove the effects of noise and obtain a good estimate of the target's position.

This estimate can be of the current position (filtering), of a future position (prediction), or of a past position (interpolation or smoothing).

Naming. This filtering method is named after its inventor, Rudolph E. Kalman, although the literature shows that Peter Swerling had proposed a similar algorithm even earlier.

Stanley Schmidt first implemented the Kalman filter.

While visiting the NASA Ames Research Center, Kalman found that his method was useful for the trajectory estimation problem of the Apollo program, and the filter was later used in the Apollo spacecraft's navigation computer.

Papers on the filter were published by Swerling (1958), Kalman (1960), and Kalman and Bucy (1961).

Many different implementations of the Kalman filter now exist.

Kalman's original formulation is now generally called the simple Kalman filter.

In addition, there are the Schmidt extended filter, the information filter, and many variants of the square-root filters developed by Bierman and Thornton.

Perhaps the most commonly encountered Kalman filter is the phase-locked loop, which is ubiquitous in radios, computers, and almost any video or communications equipment.

The following discussion requires a general knowledge of linear algebra and probability theory.

Kalman filtering builds on linear algebra and the hidden Markov model.

Its underlying dynamic system can be represented as a Markov chain built on a linear operator perturbed by Gaussian noise (i.e., normally distributed noise).

The state of the system can be represented by a vector of real numbers.
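Concretely, the hidden state evolves as x_k = F x_{k-1} + w_k and is observed through z_k = H x_k + v_k, where w_k ~ N(0, Q) and v_k ~ N(0, R). The sketch below implements the standard predict/update recursion in Python for the radar-style example above, tracking position and velocity from noisy position measurements; the constant-velocity matrices and noise levels are illustrative assumptions, not values from the source text.

```python
# Minimal Kalman filter sketch for 1-D position/velocity tracking.
# Model: x_k = F x_{k-1} + w_k,  z_k = H x_k + v_k  (standard recursion;
# the matrices and noise levels below are illustrative assumptions).
import numpy as np

dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])      # constant-velocity state transition
H = np.array([[1.0, 0.0]])      # we observe position only
Q = 0.01 * np.eye(2)            # process noise covariance
R = np.array([[1.0]])           # measurement noise covariance

x = np.zeros(2)                 # initial state estimate [position, velocity]
P = np.eye(2)                   # initial estimate covariance

def kalman_step(x, P, z):
    # Predict: propagate the state and its uncertainty through the dynamics.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with the new measurement.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(1)
true_pos = np.cumsum(np.full(50, 0.5))           # target moving at 0.5 per step
for z in true_pos + rng.normal(0, 1.0, 50):      # noisy position readings
    x, P = kalman_step(x, P, np.array([z]))
print("final estimate [pos, vel]:", x)
```

Running the loop shows the velocity estimate converging toward the true 0.5 units per step even though velocity is never measured directly; that is the "dynamic information" the surrounding text refers to.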

Nanomaterials and Micro-machines: Foreign Literature Translation (Chinese and English)

Foreign material: Nanotechnology and Micro-machines

Original (1): Nanomaterials

Nanomaterials and nanotechnology have become magic words in modern society. Nanomaterials represent today's cutting edge in the development of novel advanced materials which promise tailor-made functionality and unheard-of applications in all key technologies. Nanomaterials are therefore considered to have great potential in the 21st century because of their special properties in many fields such as optics, electronics, magnetics, mechanics, and chemistry. These unique properties are attractive for various high-performance applications. Examples include wear-resistant surfaces, low-temperature sinterable high-strength ceramics, and magnetic nanocomposites. Nanostructured materials present great promise and opportunities for a new generation of materials with improved and marvelous properties.

It is appropriate to begin with a brief introduction to the history of the subject. Nanomaterials are found in both biological systems and man-made structures. Nature has been using nanomaterials for millions of years; as Dickson has noted, "Life itself could be regarded as a nanophase system". Examples in which nanostructured elements play a vital role are magnetotactic bacteria, ferritin, and molluscan teeth. Several species of aquatic bacteria use the earth's magnetic field to orient themselves. They are able to do this because they contain chains of nanosized, single-domain magnetite particles. Having established their orientation, they are able to swim down to nutriments and away from what is lethal to them: oxygen. Another example of nanomaterials in nature is that herbivorous mollusks use teeth attached to a tonguelike organ, the radula, to scrape their food. These teeth have a complex structure containing nanocrystalline needles. We can utilize biological templates for making nanomaterials. Apoferritin has been used as a confined reaction environment for the synthesis of nanosized magnetite particles. Some scholars consider biological nanomaterials as model systems for developing technologically useful nanomaterials.

Scientific work on this subject can be traced back over 100 years. In 1861 the British chemist Thomas Graham coined the term colloid to describe a solution containing 1 to 100 nm diameter particles in suspension. Around the turn of the century, such famous scientists as Rayleigh, Maxwell, and Einstein studied colloids. In 1930 the Langmuir-Blodgett method for developing monolayer films was developed. By 1960 Uyeda had used electron microscopy and diffraction to study individual particles. At about the same time, arc, plasma, and chemical flame furnaces were employed to produce submicron particles. Magnetic alloy particles for use in magnetic tapes were produced in 1970. By 1980, studies were made on clusters containing fewer than 100 atoms. In 1985, a team led by Smalley and Kroto found spectroscopic evidence that C60 clusters were unusually stable. In 1991, Iijima reported studies of graphitic carbon tube filaments.

Research on nanomaterials has been stimulated by their technological applications. The first technological uses of these materials were as catalysts and pigments. The large surface area to volume ratio increases chemical activity. Because of this increased activity, there are significant cost advantages in fabricating catalysts from nanomaterials. The properties of some single-phase materials can be improved by preparing them as nanostructures. For example, the sintering temperature can be decreased and the plasticity increased in single-phase structural ceramics by reducing the grain size to several nanometers. Multiphase nanostructured materials have displayed novel behavior resulting from the small size of the individual phases.

Technologically useful properties of nanomaterials are not limited to their structural, chemical, or mechanical behavior. Multilayers represent examples of materials in which one can modify or tune a property for a specific application by sensitively controlling the individual layer thickness. It was discovered that the resistance of Fe-Cr multilayered thin films exhibited large changes in an applied magnetic field of several tens of kOe. This effect was given the name giant magnetoresistance (GMR). More recently, suitably annealed magnetic multilayers have been developed that exhibit significant magnetoresistance effects even in fields as low as 5 to 10 Oe (oersted). This effect may prove to be of great technological importance for use in magnetic recording read heads.

In microelectronics, the need for faster switching times and ever larger integration has motivated considerable effort to reduce the size of electronic components. Increasing the component density increases the difficulty of satisfying cooling requirements and reduces the allowable amount of energy released on switching between states. It would be ideal if the switching occurred with the motion of a single electron. One kind of single-electron device is based on the change in the Coulombic energy when an electron is added to or removed from a particle. For a nanoparticle this energy change can be large enough that adding a single electron will effectively block the flow of other electrons. The use of Coulombic repulsion in this way is called Coulomb blockade.

In addition to technology, nanomaterials are also interesting systems for basic scientific investigations. For example, small particles display deviations from bulk solid behavior such as reductions in the melting temperature and changes (usually reductions) in the lattice parameter. The changes in the lattice parameter observed for metal and semiconductor particles result from the effect of the surface free energy. Both the surface stress and surface free energy are caused by the reduced coordination of the surface atoms. By studying the size dependence of the properties of particles, it is possible to find the critical length scales at which particles behave essentially as bulk matter. Generally, the physical properties of a nanoparticle approach bulk values for particles containing more than a few hundred atoms.

New techniques have been developed recently that have permitted researchers to produce larger quantities of other nanomaterials and to better characterize these materials. Each fabrication technique has its own set of advantages and disadvantages. Generally it is best to produce nanoparticles with a narrow size distribution. In this regard, free jet expansion techniques permit the study of very small clusters, all containing the same number of atoms. Their disadvantage is that they produce only a limited quantity of material. Another approach involves the production of pellets of nanostructured materials by first nucleating and growing nanoparticles in a supersaturated vapor and then using a cold finger to collect the nanoparticles. The nanoparticles are then consolidated under vacuum. Chemical techniques are very versatile in that they can be applied to nearly all materials (ceramics, semiconductors, and metals) and can usually produce a large amount of material. A difficulty with chemical processing is the need to find the proper chemical reactions and processing conditions for each material. Mechanical attrition, which can also produce a large amount of material, often makes less pure material. One problem common to all of these techniques is that nanoparticles often form micron-sized agglomerates. If this occurs, the properties of the material may be determined by the size of the agglomerate and not the size of the individual nanoparticles. For example, the size of the agglomerates may determine the void size in the consolidated nanostructured material.

The ability to characterize nanomaterials has been increased greatly by the invention of the scanning tunneling microscope (STM) and other proximal probes such as the atomic force microscope (AFM), the magnetic force microscope, and the optical near-field microscope. The STM has been used to carefully place atoms on surfaces to write bits using a small number of atoms. It has also been employed to construct a circular arrangement of metal atoms on an insulating surface. Since electrons are confined to the circular path of metal atoms, it serves as a quantum "corral" of atoms. This quantum corral was employed to measure the local electronic density of states of these circular metallic arrangements. By doing this, researchers were able to verify the quantum mechanical description of electrons confined in this way. Other new instruments and improvements of existing instruments are increasingly becoming important tools for characterizing surfaces of films, biological materials, and nanomaterials. The development of nanoindenters and the improved ability to interpret results from nanoindentation measurements have increased our ability to study the mechanical properties of nanostructured materials. Improved high-resolution electron microscopes and modeling of electron microscope images have improved our knowledge of the structure of the particles and of the interphase region between particles in consolidated nanomaterials.

Nanotechnology

1. Introduction

What is nanotechnology? It is a term that entered the general vocabulary only in the late 1970s, mainly to describe the metrology associated with the development of X-ray, optical and other very precise components. We defined nanotechnology as the technology where dimensions and tolerances in the range 0.1-100 nm (from the size of the atom to the wavelength of light) play a critical role. This definition is too all-embracing to be of practical value because it could include, for example, topics as diverse as X-ray crystallography, atomic physics and indeed the whole of chemistry. So the field covered by nanotechnology was later narrowed down to manipulation and machining within the defined dimensional range (from 0.1 nm to 100 nm) by technological means, as opposed to those used by the craftsman, and thus excludes, for example, traditional forms of glass polishing. The technology relating to fine powders also comes under the general heading of nanotechnology, but we exclude observational techniques such as microscopy and various forms of surface analysis.

Nanotechnology is an "enabling" technology, in that it provides the basis for other technological developments, and it is also a "horizontal" or "cross-sectional" technology in that one technique may, with slight variations, be applicable in widely differing fields. A good example of this is thin-film technology, which is fundamental to electronics and optics. A wide range of materials are employed in devices such as computer and home entertainment peripherals, including magnetic disc reading heads, video cassette recorder spindles, optical disc stampers and ink jet nozzles. Optical and semiconductor components include laser gyroscope mirrors, diffraction gratings, X-ray optics and quantum-well devices.

2. Materials technology

The wide scope of nanotechnology is demonstrated in the materials field, where materials provide a means to an end and are not an end in themselves. For example, in electronics, inhomogeneities in materials on a very fine scale set a limit to the nanometre-sized features that play an important part in semiconductor technology; in a very different field, the finer the grain size of an adhesive, the thinner will be the adhesive layer, and the higher will be the bond strength.

(1) Advantages of ultra-fine powders. In general, the mechanical, thermal, electrical and magnetic properties of ceramics, sintered metals and composites are often enhanced by reducing the grain or fibre size in the starting materials. Other properties such as strength, the ductile-brittle transition, transparency, dielectric coefficient and permeability can be enhanced either by the direct influence of an ultra-fine microstructure or by the advantages gained by mixing and bonding ultra-fine powders. Other important advantages of fine powders are that when they are used in the manufacture of ceramics and sintered metals, their green (i.e. unfired) density can be greatly increased. As a consequence, both the defects in the final product and the shrinkage on firing are reduced, thus minimizing the need for subsequent processing.

(2) Applications of ultra-fine powders. Important applications include:

Thin films and coatings----the smaller the particle size, the thinner the coating can be
Electronic ceramics----reduction in grain size results in reduced dielectric thickness
Strength-bearing ceramics----strength increases with decreasing grain size
Cutting tools----smaller grain size results in a finer cutting edge, which can enhance the surface finish
Impact resistance----finer microstructure increases the toughness of high-temperature steels
Cements----finer grain size yields better homogeneity and density
Gas sensors----finer grain size gives increased sensitivity
Adhesives----finer grain size gives a thinner adhesive layer and higher bond strength

3. Precision machining and materials processing

A considerable overlap is emerging in the manufacturing methods employed in very different areas such as mechanical engineering, optics and electronics. Precision machining encompasses not only the traditional techniques such as turning, grinding, lapping and polishing refined to the nanometre level of precision, but also the application of "particle" beams: ions, electrons and X-rays. Ion beams are capable of machining virtually any material, and the most frequent applications of electrons and X-rays are found in the machining or modification of resist materials for lithographic purposes. The interaction of the beams with the resist material induces structural changes such as polymerization that alter the solubility of the irradiated areas.

(1) Techniques

1) Diamond turning. The large optics diamond-turning machine at the Lawrence Livermore National Laboratory represents a pinnacle of achievement in the field of ultra-precision machine tool engineering. This is a vertical-spindle machine with a face plate 1.6 m in diameter and a maximum tool height of 0.5 m. Despite these large dimensions, machining accuracy for form is 27.5 nm RMS and a surface roughness of 3 nm is achievable, though this depends on both the specimen material and the cutting tool.

2) Grinding. Fixed abrasive grinding: the term "fixed abrasive" denotes that a grinding wheel is employed in which the abrasive particles, such as diamond, cubic boron nitride or silicon carbide, are attached to the wheel by embedding them in a resin or a metal. The forces generated in grinding are higher than in diamond turning, and machine tools are usually tailored for one or the other process. Some Japanese work is in the vanguard of precision grinding, and surface finishes of 2 nm (peak-to-valley) have been obtained on single-crystal quartz samples using extremely stiff grinding machines.

Loose abrasive grinding: the most familiar loose abrasive grinding processes are lapping and polishing, where the workpiece, which is often a hard material such as glass, is rubbed against a softer material, the lap or polisher, with abrasive slurry between the two surfaces. In many cases, the polishing process occurs as a result of the combined effects of mechanical and chemical interaction between the workpiece, slurry and polisher. Loose abrasive grinding techniques can, under appropriate conditions, produce unrivalled accuracy both in form and surface finish when the workpiece is flat or spherical. Surface figures to a few nm and surface finishes better than 0.5 nm may be achieved. In one variant, the abrasive is in slurry and is directed locally towards the workpiece by the action of a non-contacting polyurethane ball spinning at high speed, which replaces the cutting tool in the machine. This technique has been named "elastic emission machining" and has been used to good effect in the manufacture of an X-ray mirror having a figure accuracy of 10 nm and a surface roughness of 0.5 nm RMS.

3) Thin-film production. The production of thin solid films, particularly for coating optical components, provides a good example of traditional nanotechnology. There is a long history of coating by chemical methods, electro-deposition, diode sputtering and vacuum evaporation, while triode and magnetron sputtering and ion-beam deposition are more recent in their wide application. Because of their importance in the production of semiconductor devices, epitaxial growth techniques are worth a special mention. Epitaxy is the growth of a thin crystalline layer on a single-crystal substrate, where the atoms in the growing layer mimic the disposition of the atoms in the substrate. The two main classes of epitaxy reviewed by Stringfellow (1982) are liquid-phase and vapour-phase epitaxy. The latter class includes molecular-beam epitaxy (MBE), which in essence is highly controlled evaporation in ultra-high vacuum. MBE may be used to grow high-quality layered structures of semiconductors with monolayer precision, and it is possible to exercise independent control over both the semiconductor band gap, by controlling the composition, and the doping level. Patterned growth is possible through masks and on areas defined by electron-beam writing.

4. Applications

There is an all-pervading trend to higher precision and miniaturization, and to illustrate this a few applications will be briefly referred to in the fields of mechanical engineering, optics and electronics. It should be noted, however, that the distinction between mechanical engineering and optics is becoming blurred, now that machine tools such as precision grinding machines and diamond-turning lathes are being used to produce optical components, often by personnel with a background in mechanical engineering rather than optics. By a similar token, mechanical engineering is also beginning to encroach on electronics, particularly in the preparation of semiconductor substrates.

(1) Mechanical engineering. One of the earliest applications of diamond turning was the machining of aluminium substrates for computer memory discs, and accuracies are continuously being enhanced in order to improve storage capacity: surface finishes of 3 nm are now being achieved. In the related technologies of optical data storage and retrieval, the tolerances of the critical dimensions of the disc and reading head are about 0.25 μm. The tolerances of the component parts of the machine tools used in their manufacture, i.e. the slideways and bearings, fall well within the nanotechnology range. Some precision components falling in the manufacturing tolerance band of 5-50 nm include gauge blocks, diamond indenter tips, microtome blades, Winchester disc reading heads and ultra-precision XY tables (Taniguchi 1986). Examples of precision cylindrical components in two very different fields, made to tolerances of about 100 nm, are bearings for mechanical gyroscopes and spindles for video cassette recorders.

The theoretical concept that brittle materials may be machined in a ductile mode has been known for some time. If this concept can be applied in practice it will be of significant practical importance, because it would enable materials such as ceramics, glasses and silicon to be machined with minimal sub-surface damage, and could eliminate or substantially reduce the need for lapping and polishing. Typically, the conditions for ductile-mode machining require that the depth of cut is about 100 nm and that the normal force falls in the range 0.1-0.01 N. These machining conditions can be realized only with extremely precise and stiff machine tools, such as the one described by Yoshioka et al (1985), with which quartz has been ground to a surface roughness of 2 nm peak-to-valley. The significance of this experimental result is that it points the way to the direct grinding of optical components to an optical finish. The principle can be extended to other materials of significant commercial importance, such as ceramic turbine blades, which at present must be subjected to tedious surface finishing procedures to remove the structure-weakening cracks produced by the conventional grinding process.

(2) Optics. In some areas of optics manufacture there is a clear distinction between the technological approach and the traditional craftsman's approach, particularly where precision machine tools are employed. On the other hand, in lapping and polishing, there is a large grey area where the two approaches overlap. The large demand for infrared optics from the 1970s onwards could not be met by the traditional suppliers, and provided a stimulus for the development and application of diamond-turning machines to optics manufacture. The technology has now progressed, and the surface figure and finishes that can be obtained span a substantial proportion of the nanotechnology range. Important applications of diamond-turned optics are in the manufacture of unconventionally shaped optics, for example axicons and, more generally, aspherics and particularly off-axis components such as paraboloids. The mass production (several million per annum) of the miniature aspheric lenses used in compact disc players and the associated lens moulds provides a good example of the merging of optics and precision engineering. The form accuracy must be better than 0.2 μm and the surface roughness must be below 20 nm to meet the criterion for diffraction-limited performance.

(3) Electronics. In semiconductors, nanotechnology has long been a feature in the development of layers parallel to the substrate and in the substrate surface itself, and the need for precision is steadily increasing with the advent of layered semiconductor structures. About one quarter of the entire semiconductor physics community is now engaged in studying aspects of these structures. Normal to the layer surface, the structure is produced by lithography, and for research purposes at least, nanometre-sized features are now being developed using X-ray, electron and ion-beam techniques.

5. A look into the future

With a little imagination, it is not difficult to conjure up visions of future developments in high technology, in whatever direction one cares to look. The following two examples illustrate how advances may take place both by novel applications and refinements of old technologies and by the development of new ones.

(1) Molecular electronics. Lithography and thin-film technology are the key technologies that have made possible the continuing and relentless reduction in the size of integrated circuits, to increase both packing density and operational speed. Miniaturization has been achieved by engineering downwards from the macro to the micro scale. By simple extrapolation it will take approximately two decades for electronic switches to be reduced to molecular dimensions. The impact of molecular biology and genetic engineering has thus provided a stimulus to attempt to engineer upwards, starting with the concept that single molecules, each acting as an electronic device in its own right, might be assembled using biotechnology to form molecular electronic devices or even biochip computers.

Advances in molecular electronics by downward engineering from the macro to the micro scale are taking place over a wide front. One fruitful approach is by way of the Langmuir-Blodgett (LB) film, using a method first described by Blodgett (1935). A multi-layer LB structure consists of a sequence of organic monolayers made by repeatedly dipping a substrate into a trough containing the monolayer floating on a liquid (usually water), one layer being added at a time. The classical film-forming materials were the fatty acids, such as stearic acid, and their salts. The late 1950s saw the first widespread and commercially important application of LB films in the field of X-ray spectroscopy (e.g. Henke 1964, 1965). The important properties of the films that were exploited in this application were the uniform thickness of each film, i.e. one molecule thick, and the range of thicknesses, say from 5 to 15 nm, available by changing the composition of the film material. Stacks of fifty or more films were formed on plane or curved substrates to form two-dimensional diffraction gratings for measuring the characteristic X-ray wavelengths of the elements of low atomic number for analytical purposes in instruments such as the electron probe X-ray micro-analyzer.

(2) Scanning tunneling engineering. It was stated earlier that observational techniques such as microscopy do not, at least for the purposes of this article, fall within the domain of nanotechnology. However, it is now becoming apparent that scanning tunneling microscopy (STM) may provide the basis of a new technology, which we shall call scanning tunneling engineering. In the STM, a sharp stylus is positioned within a nanometre of the surface of the sample under investigation. A small voltage applied between the sample and the stylus will cause a current to flow through the thin intervening insulating medium (e.g. air, vacuum, oxide layer). This is the tunneling electron current, which is exponentially dependent on the sample-tip gap. If the sample is scanned in a plane parallel to its surface, and if the tunneling current is kept constant by adjusting the height of the stylus to maintain a constant gap, then the displacement of the stylus provides an accurate representation of the surface topography of the sample. It is relevant to the applications that will be discussed that individual atoms are easily resolved by the STM, that the stylus tip may be as small as a single atom, and that the tip can be positioned with sub-atomic dimensional accuracy with the aid of a piezoelectric transducer.

The STM tip has demonstrated its ability to draw fine lines, which exhibit nanometre-sized structure, and hence may provide a new tool for nanometre lithography. The mode of action was not properly understood, but it was suspected that under the influence of the tip a conducting carbon line had been drawn as the result of polymerizing a hydrocarbon film, the process being assisted by the catalytic activity of the tungsten tip. By extrapolating their results, the authors believed that it would be possible to deposit fine conducting lines on an insulating film. The tip would operate in a gaseous environment containing the metal atoms in such a form that they could either be pre-adsorbed on the film and then liberated from their ligands, or form free radicals at the location of the tip and be transferred to the film by appropriate adjustment of the tip voltage.

Feynman proposed that machine tools be used to make smaller machine tools, which in turn would make still smaller ones, and so on all the way down to the atomic level. These machine tools would then operate via computer control in the nanometre domain, using high-resolution electron microscopy for observation and control. STM technology has short-circuited this rather cumbrous concept, but the potential applications and benefits remain.

Original (2): Micro-machines

1. Introduction

From the beginning, mankind seems instinctively to have desired large machines and small machines. That is, "large" and "small" in comparison with human scale. Machines larger than humans are powerful allies in the battle against the fury of nature; smaller machines are loyal partners that do whatever they are told.

If we compare the facility and technology of manufacturing larger machines, common sense tells us that the smaller machines are easier to make. Nevertheless, throughout the history of technology, larger machines have always stood out. The size of the restored models of the water-mill invented by Vitruvius in the Roman era, the windmill of the Middle Ages, and the steam engine invented by Watt is overwhelming. On the other hand, the smaller machines in the history of technology are mostly tools. If smaller machines were easier to make, a variety of such machines should exist, but until modern times no significant small machines existed except for guns and clocks. This fact may imply that smaller machines were actually more difficult to make. Of course, this does not mean simply that it was difficult to make a small machine; it means that it was difficult to invent a small machine that would be significant to human beings.

Microbiology English Literature and Translation: Original Text

This issue is the second lecture in the microbiology series; it mainly discusses anthrax and ascariasis, two diseases that were once common but are relatively rare today.

Anthrax is an acute zoonotic infectious disease caused by Bacillus anthracis.

Humans become infected through contact with diseased livestock and their products, or by eating the meat of diseased animals.

Clinically, it is mainly characterized by skin necrosis, ulcers, eschar, extensive edema of the surrounding tissue, and symptoms of toxemia. Ascaris lumbricoides, the roundworm, is one of the most common parasites of the human body.

The adult worms parasitize the small intestine and can cause ascariasis.

The larvae can migrate within the human body, causing visceral larva migrans.

Case analysis. Case 1: A local craftsman who makes garments from the hides of goats visits his physician because over the past few days he has developed several black lesions on his hands and arms. The lesions are not painful, but he is alarmed by their appearance. He is afebrile and his physical examination is unremarkable. (Translation of Case 1: A local craftsman who makes garments from goat hides seeks medical attention, reporting that several black skin lesions have appeared on his hands and arms over the past few days.)

The lesions are painless, but their appearance is alarming.

The patient is afebrile, and the physical examination is unremarkable.

1. What is the most likely diagnosis? Cutaneous anthrax, caused by Bacillus anthracis. The skin lesions are painless and dark or charred ulcerations known as black eschar. It is classically transmitted by contact with the hide of a goat at the site of a minor open wound. (Translation: cutaneous anthrax, caused by Bacillus anthracis; the lesions are typically painless, dark ulcerations known as black eschar.)

Study of Service Quality Management in the Hotel Industry: Foreign Literature Translation

Graduation project attachment: foreign literature translation, original and translation (3,500 words).

Original: Study of Service Quality Management in Hotel Industry. Borkar, Sameer.

Abstract: This paper is an attempt to understand the role of the quality improvement process in the hospitality industry and its effectiveness in making it a sustainable business enterprise. It is a survey of the presently adopted quality management tools which are making hotel operations better focused and reliable and helping them meet customer expectations. A descriptive research design is used to identify the parameters of service quality management in the hospitality industry. An exploratory research design is undertaken to dig out the service quality management practices and their effectiveness. Data analysis is done and presented; the hypothesis is tested against the collected data. Since the industry continuously tries to improve its services to meet the levels of customer satisfaction, the study presents tools for the continuous improvement process and shows how it benefits all the stakeholders. It can be inferred from the study that hotels implement the continuous improvement process and quality management tools to remain competitive in the market. The study involves hotels in a highly competitive market with a limited number of respondents. This limits the study to the hotel industry, and there is scope for including other hospitality service providers as well.

Keywords: Customer Satisfaction, Perception, Performance Measurement, Continuous Improvement Process.

Introduction: It has brought paradigm shifts in the operations of the hospitality industry. The overall perspective of the industry has changed due to the introduction of new techniques and methods of handling various processes. Awareness among hoteliers and guests has fuelled inventions focused on operations. The increased attention to customer satisfaction led to the use of high standards of service in the industry. The new service parameters made hoteliers implement quality management as an effective aid. It has significantly affected hotels' ability to control and adapt to changing environments. The use of new techniques began with the simple motive of sophistication and precision in the given field of operation, which may result in high standards of service in a global economy and has allowed the rise of a leisure class.

Conceptual framework: This study of service quality management in the hospitality industry is an attempt to understand the presence of the quality improvement process in the hospitality industry and its effectiveness in making it a sustainable business enterprise. It is a survey of the presently adopted quality management tools which are making hotel operations safer, more focused and more reliable and helping them meet customer expectations. As the hospitality industry becomes more competitive there is an obvious need to retain clientele as well as to increase profitability, and hence management professionals strive to improve guest satisfaction and revenues. The management professionals who are striving for these results, however, often have a limited understanding of the research surrounding the paradigms of guest satisfaction, loyalty and financial performance. This paper shall illuminate some of the variables and important facts of service quality resulting in guest satisfaction.

Review of literature: Customers of hospitality often blame themselves when dissatisfied, attributing their dissatisfaction to a bad choice. Employees must be aware that dissatisfied customers may not complain, and therefore employees should seek out sources of dissatisfaction and resolve them. (Zeithaml V., 1981, pp. 186-190)

It is said that service quality is what differentiates the hospitality sector; however, there is no agreed definition of what service quality is. There are, though, a few different suggestions of how to define it. One divides it into technical, functional and image components (Grönroos C., 1982); another is that service quality is determined by its fitness for use by internal and external customers. It is accepted that service quality depends upon guests' needs and expectations. One definition states that quality is simply conformance to specifications, which would mean that positive quality is when a product or service meets or exceeds preset standards or promises. This, however, seems too simple a view within the hospitality industry. Alternative definitions read as follows: 1) quality is excellence; 2) quality is value for money; 3) quality is meeting or exceeding expectations. These appear better aligned with the ideas which exist within hospitality management than the first, simplistic approach. Service quality and value are rather difficult to calculate; companies must therefore rely on guests' quality perceptions and expectations to get consistent results, which is best achieved by asking guests questions related to their expectations and their perceptions of service quality, which can effectively be done through carefully designed surveys.

A major problem with service quality is variability and the limited capability and robustness of the service production process. (Gummesson E., 1991) Hotel consumers have well-conceived ideas about service quality, and quality attributes are considered important for most types of services; the absence of certain attributes may lead consumers to perceive service quality as poor, while the presence of these attributes may not substantially improve the perceived quality of the service. Most customers would be willing to trade some convenience for a price break, and the behavior, skill level and performance of service employees are key determinants of the perceived quality of services. This is a major challenge in improving or maintaining a high level of service quality. (Tigineh M. et al 1992)

Studies focusing on service quality management suggest that service firms spend too little effort on planning for service quality. The resultant costs of poor service quality planning lead to lower profitability as part of the service failures. (Stuart F., et al 1996)

When discussing satisfaction, it is important to understand that guests' evaluation of service comprises two basic distinct dimensions: service delivery and service outcome (Mattila, 1999). Research indicates that how the service was delivered (perceived functional quality) is more important than the outcome of the service process (technical quality). This research clearly indicates that effort by staff has a strong effect on guests' satisfaction judgments. Companies delivering services must broaden their examination of productivity to help settle the conflicts, and leverage the synergies, between improving service quality and boosting service productivity. (Parasuraman A. 2002)

A key activity is to conduct a regularly scheduled review of progress by a quality council or working group, and management must establish a system to identify areas for future improvement and to track performance with respect to internal and external customers. They must also track the changing preferences of customers. Continuous improvement means not merely being satisfied with doing a good job or process. It is accomplished by incorporating process measurement and team problem solving into all work activities. An organization must continuously strive for excellence by reducing complexity, variation and out-of-control processes. Plan-Do-Study-Act (PDSA), developed by Shewhart and later modified by Deming, is an effective improvement technique: first plan carefully, then carry out the plan, study the results to check whether the plan worked exactly as intended, and act on the results by identifying what worked as planned and what didn't. Continuous process improvement is the objective, and these phases of PDSA are the framework to achieve it. (Besterfield D. et al 2003)

The "servicescape" is a general term to describe the physical surroundings of a service environment (Reimer 2005, p. 786), such as a hotel or cruise ship. Guests sometimes unconsciously try to obtain as much information as possible through experiences to decrease information asymmetries. This causes guests to look for quality signals or cues which would provide them with information about the service, which leads us to cue utilization theory. Cue utilization theory states that products or services consist of several arrays of cues that serve as surrogate indicators of product or service quality. There are both intrinsic and extrinsic cues to help guests determine quality. Consequently, due to the limited tangibility of services, guests are often left to accept the price of the experience and the physical appearance or environment of the hotel or cruise ship itself as quality indicators. Though many trade and academic papers discussing guest satisfaction have been published, limited attention has been paid to the value perceptions and expectations guests have towards product delivery, and to the influence that the price paid for an experience has on satisfaction and future spending. Furthermore, the role of pricing in relation to guests' perceived quality of service is not well understood.

Telephone conversations with peers and friends in the hospitality industry provided many useful inputs for drafting this paper. Secondary data sources: for this study, data sources included hospitality journals, books on service quality management and organizational behavior, and the websites of various hospitality majors. Consulting hospitality publications was helpful in learning about current innovations in the industry.

Research tools: A descriptive research design is used to identify the attributes of service quality management in the hospitality industry. An exploratory research design is undertaken to dig out the service quality management practices and their effectiveness. Data analysis is done and presented in tables. The hypothesis is tested against the collected data.

Hypotheses: The hypotheses framed for the subject are: Hypothesis 1: Implementing service quality management serves as a tool for improvement in customer satisfaction. Hypothesis 2: Practicing a continuous improvement program has benefited the hotel.

Limitation and scope of the study: Though a specific questionnaire was used for collecting information, the objective of the paper was discussed with every contributor, and the information provided by these sources was arranged for further analysis. The analysis of the available data was done for relevance to the topic. The effectiveness of technology in the conservation of resources was always a point of consideration. The data was sifted to make it as precise as possible.

Analysis and discussion: There is a significant relationship between service quality management and customer satisfaction. In the hospitality industry, customer satisfaction variables such as availability, access, information, time, delivery of service, availability of personal competence, a comfortable and safe atmosphere and a pollution-free environment are of prime concern to every hotelier. The industry continuously tries to improve its services to meet the levels of customer satisfaction. The intangible nature of the service as a product means that it can be very difficult to place quantifiable terms on the features that contribute to quality, and measurement of product quality is a problem for service quality management. The customer is frequently directly involved in the delivery of the service and as such introduces an unknown and unpredictable influence on the process. Customer variability in the process makes it difficult to determine the exact requirements of the customer and what they regard as an acceptable standard of service. This problem is magnified because judgments are often based on personal preferences or even mood, rather than on technical performance that can be measured. Every hotel has a target market to cater to, with very specific requirements in terms of expected and perceived quality of service. Customers arrive with a different perception of quality every time they come to the hotel, and this makes it quite difficult to define quality and set its level. It requires the hotel to continuously compare its own perception against customer perception, matching satisfaction measurement with performance measurement. The study has shown the effective tools which the management of various hotels uses for the continuous improvement process and how this is disseminated amongst all the stakeholders.

Translation: Study of Service Quality Management in the Hotel Industry. Borkar; Sameer. Abstract: This paper examines the role of the quality improvement process in the hotel industry and how it can effectively drive the sustainable development of the enterprise.

Thesis Foreign Literature Translation

The following is a thesis foreign literature translation of about 700 words. Original title: The Role of Artificial Intelligence in Medical Diagnostics: A Review. Original abstract: In recent years, there has been a growing interest in the use of artificial intelligence (AI) in the field of medical diagnostics. AI has the potential to improve the accuracy and efficiency of medical diagnoses, and can assist clinicians in making treatment decisions. This review aims to examine the current state of AI in medical diagnostics, and discuss its advantages and limitations. Several AI techniques, including machine learning, deep learning, and natural language processing, are discussed. The review also examines the ethical and legal considerations associated with the use of AI in medical diagnostics. Overall, AI has shown great promise in improving medical diagnostics, but further research is needed to fully understand its potential benefits and limitations. Translation: The Role of Artificial Intelligence in Medical Diagnostics: A Review. In recent years, the application of artificial intelligence (AI) in medical diagnostics has attracted growing attention.

Graduation Thesis Foreign Literature Translation: Crisis Management - Prevention, Diagnosis and Intervention (Chinese-English Parallel Translation)

第1页 共19页中文3572字毕业论文(设计)外文翻译标题:危机管理-预防,诊断和干预一、外文原文标题:标题:Crisis management: prevention, diagnosis and Crisis management: prevention, diagnosis andintervention 原文:原文:The Thepremise of this paper is that crises can be managed much more effectively if the company prepares for them. Therefore, the paper shall review some recent crises, theway they were dealt with, and what can be learned from them. Later, we shall deal with the anatomy of a crisis by looking at some symptoms, and lastly discuss the stages of a crisis andrecommend methods for prevention and intervention. Crisis acknowledgmentAlthough many business leaders will acknowledge thatcrises are a given for virtually every business firm, many of these firms do not take productive steps to address crisis situations. As one survey of Chief Executive officers of Fortune 500 companies discovered, 85 percent said that a crisisin business is inevitable, but only 50 percent of these had taken any productive action in preparing a crisis plan(Augustine, 1995). Companies generally go to great lengths to plan their financial growth and success. But when it comes to crisis management, they often fail to think and prepare for those eventualities that may lead to a company’s total failure.Safety violations, plants in need of repairs, union contracts, management succession, and choosing a brand name, etc. can become crises for which many companies fail to be prepared untilit is too late.The tendency, in general, is to look at the company as a perpetual entity that requires plans for growth. Ignoring the probabilities of disaster is not going to eliminate or delay their occurrences. Strategic planning without inclusion ofcrisis management is like sustaining life without guaranteeinglife. One reason so many companies fail to take steps to proactively plan for crisis events, is that they fail to acknowledge the possibility of a disaster occurring. Like an ostrich with its head in the sand, they simply choose to ignorethe situation, with the hope that by not talking about it, it will not come to pass. Hal Walker, a management consultant, points out “that decisions will be more rational and better received, and the crisis will be of shorter duration, forcompanies who prepare a proactive crisis plan” (Maynard, 1993) .It is said that “there are two kinds of crises: those that thatyou manage, and those that manage you” (Augustine, 1995). Proactive planning helps managers to control and resolve a crisis. Ignoring the possibility of a crisis, on the other hand,could lead to the crisis taking a life of its own. In 1979, theThree-Mile Island nuclear power plant experienced a crisis whenwarning signals indicated nuclear reactors were at risk of a meltdown. The system was equipped with a hundred or more different alarms and they all went off. But for those who shouldhave taken the necessary steps to resolve the situation, therewere no planned instructions as to what should be done first. Hence, the crisis was not acknowledged in the beginning and itbecame a chronic event.In June 1997, Nike faced a crisis for which they had no existi existing frame of reference. A new design on the company’s ng frame of reference. A new design on the company’s Summer Hoop line of basketball shoes - with the word air writtenin flaming letters - had sparked a protest by Muslims, who complained the logo resembled the Arabic word for Allah, or God.The council of American-Islamic Relations threatened aa globalNike boycott. 
Nike apologized, recalled 38,000 pairs of shoes,and discontinued the line (Brindley, 1997). To create the brand,Nike had spent a considerable amount of time and money, but hadnever put together a general framework or policy to deal with such controversies. To their dismay, and financial loss, Nike officials had no choice but to react to the crisis. This incident has definitely signaled to the company that spending a little more time would have prevented the crisis. Nonetheless,it has taught the company a lesson in strategic crisis management planning.In a business organization, symptoms or signals can alert the strategic planners or executives of an eminent crisis. Slipping market share, losing strategic synergy anddiminishing productivity per man hour, as well as trends, issues and developments in the socio-economic, political and competitive environments, can signal crises, the effects of which can be very detrimental. After all, business failures and bankruptcies are not intended. They do not usually happen overnight. They occur more because of the lack of attention to symptoms than any other factor.Stages of a crisisMost crises do not occur suddenly. The signals can usuallybe picked up and the symptoms checked as they emerge. A company determined to address these issues realizes that the real challenge is not just to recognize crises, but to recognize themin a timely fashion (Darling et al., 1996). A crisis can consistof four different and distinct stages (Fink, 1986). The phasesare: prodromal crisis stage, acute crisis stage, chronic crisisstage and crisis resolution stage.Modern organizations are often called “organic” due tothe fact that they are not immune from the elements of their surrounding environments. Very much like a living organism, organizations can be affected by environmental factors both positively and negatively. But today’s successfulorganizations are characterized by the ability to adapt by recognizing important environmental factors, analyzing them, evaluating the impacts and reacting to them. The art of strategic planning (as it relates to crisis management)involves all of the above activities. The right strategy, in general, provides for preventive measures, and treatment or resolution efforts both proactively and reactively. It wouldbe quite appropriate to examine the first three stages of acrisis before taking up the treatment, resolution or intervention stage.Prodromal crisis stageIn the field of medicine, a prodrome is a symptom of the onset of a disease. It gives a warning signal. In business organizations, the warning lights are always blinking. No matter how successful the organization, a number of issues andtrends may concern the business if proper and timely attentionis paid to them. For example, in 1995, Baring Bank, a UK financial institution which had been in existence since 1763,ample opportunitysuddenly and unexpectedly failed. There wasfor the bank to catch the signals that something bad was on thehorizon, but the company’s efforts to detect that were thwarted by an internal structure that allowed a single employee both to conduct and to oversee his own investment trades, and the breakdown of management oversight and internalcontrol systems (Mitroff et al., 1996). Likewise, looking in retrospect, McDonald’s fast food chain was given the prodromalsymptoms before the elderly lady sued them for the spilling ofa very hot cup of coffee on her lap - an event that resulted in a substantial financial loss and tarnished image of thecompany. 
Numerous consumers had complained about thetemperature of the coffee. The warning light was on, but the company did not pay attention. It would have been much simplerto pick up the signal, or to check the symptom, than facing the consequences.In another case, Jack in the Box, a fast food chain, had several customers suffer intestinal distress after eating at their restaurants. The prodromal symptom was there, but the company took evasive action. Their initial approach was to lookaround for someone to blame. The lack of attention, the evasiveness and the carelessness angered all the constituent groups, including their customers. The unfortunate deaths thatptoms,occurred as a result of the company’s ignoring thesymand the financial losses that followed, caused the company to realize that it would have been easier to manage the crisis directly in the prodromal stage rather than trying to shift theblame.Acute crisis stageA prodromal stage may be oblique and hard to detect. The examples given above, are obvious prodromal, but no action wasWebster’s New Collegiate Dictionary, an acute stage occursacutewhen a symptom “demands urgent attention.” Whether the acutesymptom emerges suddenly or is a transformation of a prodromalstage, an immediate action is required. Diverting funds and other resources to this emerging situation may cause disequilibrium and disturbance in the whole system. It is onlythose organizations that have already prepared a framework forthese crises that can sustain their normal operations. For example, the US public roads and bridges have for a long time reflected a prodromal stage of crisis awareness by showing cracks and occasionally a collapse. It is perhaps in light of the obsessive decision to balance the Federal budget that reacting to the problem has been delayed and ignored. This situation has entered an acute stage and at the time of this writing, it was reported that a bridge in Maryland had just collapsed.The reason why prodromes are so important to catch is thatit is much easier to manage a crisis in this stage. In the caseof most crises, it is much easier and more reliable to take careof the problem before it becomes acute, before it erupts and causes possible complications (Darling et al., 1996). In andamage. However, the losses are incurred. Intel, the largest producer of computer chips in the USA, had to pay an expensiveprice for initially refusing to recall computer chips that proved unreliable o n on certain calculations. The f irmfirm attempted to play the issue down and later learned its lesson. At an acutestage, when accusations were made that the Pentium Chips were not as fast as they claimed, Intel quickly admitted the problem,apologized for it, and set about fixing it (Mitroff et al., 1996). Chronic crisis stageDuring this stage, the symptoms are quite evident and always present. I t isIt is a period of “make or break.” Being the third stage, chronic problems may prompt the company’s management to once and for all do something about the situation. It may be the beginning of recovery for some firms, and a deathknell for others. For example, the Chrysler Corporation was only marginallysuccessful throughout the 1970s. It was not, however, until the company was nearly bankrupt that amanagement shake-out occurred. The drawback at the chronic stage is that, like in a human patient, the company may get used to “quick fixes” and “band “band--aid”approaches. 
After all, the ailment, the problem and the crisis have become an integral part of the daily operation. Or, the managers may be so overwhelmed by prodromal and acute problems that no time or attention is paid to the chronic problems, or the managers perceive the situation to be tolerable, thus putting the crisis on a back burner.

Crisis resolution

Crises could be detected at various stages of their development. Since the existing symptoms may be related to different problems or crises, there is a great possibility that they may be misinterpreted. Therefore, the people in charge may believe they have resolved the problem. However, in practice the symptom is often neglected. In such situations, the symptom will offer another chance for resolution when it becomes acute, thereby demanding urgent care. Studies indicate that today an increasing number of companies are issue-oriented and search for symptoms. Nevertheless, the lack of experience in resolving a situation and/or inappropriate handling of a crisis can lead to a chronic stage. Of course, there is this last opportunity to resolve the crisis at the chronic stage. No attempt to resolve the crisis, or improper resolution, can lead to grim consequences that will ultimately plague the organization or even destroy it.

It must be noted that an unresolved crisis may not destroy the company. But its weakening effects can ripple through the organization and create a host of other complications.

Preventive efforts

The heart of the resolution of a crisis is in the preventive efforts the company has initiated. This step, as with a human body, is actually the least expensive, but quite often the most overlooked. Preventive measures deal with sensing potential problems (Gonzales-Herrero and Pratt, 1995). Major internal functions of a company such as finance, production, procurement, operations, marketing and human resources are sensitive to the socio-economic, political-legal, competitive, technological, demographic, global and ethical factors of the external environment. What is imminently more sensible and much more manageable is to identify the processes necessary for assessing and dealing with future crises as they arise (Jackson and Schantz, 1993). At the core of this process are appropriate information systems, planning procedures, and decision-making techniques. A soundly-based information system will scan the environment, gather appropriate data, interpret this data into opportunities and challenges, and provide a concrete foundation for strategies that could function as much to avoid crises as to intervene and resolve them.

Preventive efforts, as stated before, require preparations before any crisis symptoms set in. Generally, strategic forecasting, contingency planning, issues analysis, and scenario analysis help to provide a framework that can be used in avoiding and encountering crises.

Source: Toby J. Kash and John R. Darling. Crisis management: prevention, diagnosis and intervention, 179-186.

II. Translated article. Title: Crisis Management: Prevention, Diagnosis and Intervention. Translation: The premise of this paper is that a crisis can be managed more effectively if the company is well prepared.

Online Library Management System: Foreign Literature Original and Translation


Graduation Design Document: English Literature and Chinese Translation. Class: ___ Name: ___ School: ___ Major: ___ Advisor: ___ June 2014. School of Software, Software Engineering.

An Introduction to Java

The first release of Java in 1996 generated an incredible amount of excitement, not just in the computer press, but in mainstream media such as The New York Times, The Washington Post, and Business Week. Java has the distinction of being the first and only programming language that had a ten-minute story on National Public Radio. A $100,000,000 venture capital fund was set up solely for products produced by use of a specific computer language. It is rather amusing to revisit those heady times, and we give you a brief history of Java in this chapter.

In the first edition of this book, we had this to write about Java: "As a computer language, Java's hype is overdone: Java is certainly a good programming language. There is no doubt that it is one of the better languages available to serious programmers. We think it could potentially have been a great programming language, but it is probably too late for that. Once a language is out in the field, the ugly reality of compatibility with existing code sets in."

Our editor got a lot of flack for this paragraph from someone very high up at Sun Microsystems who shall remain unnamed. But, in hindsight, our prognosis seems accurate. Java has a lot of nice language features; we examine them in detail later in this chapter. It has its share of warts, and newer additions to the language are not as elegant as the original ones because of the ugly reality of compatibility.

But, as we already said in the first edition, Java was never just a language. There are lots of programming languages out there, and few of them make much of a splash. Java is a whole platform, with a huge library containing lots of reusable code, and an execution environment that provides services such as security, portability across operating systems, and automatic garbage collection.

As a programmer, you will want a language with a pleasant syntax and comprehensible semantics (i.e., not C++). Java fits the bill, as do dozens of other fine languages. Some languages give you portability, garbage collection, and the like, but they don't have much of a library, forcing you to roll your own if you want fancy graphics or networking or database access. Well, Java has everything: a good language, a high-quality execution environment, and a vast library. That combination is what makes Java an irresistible proposition to so many programmers.

Simple

We wanted to build a system that could be programmed easily without a lot of esoteric training and which leveraged today's standard practice. So even though we found that C++ was unsuitable, we designed Java as closely to C++ as possible in order to make the system more comprehensible. Java omits many rarely used, poorly understood, confusing features of C++ that, in our experience, bring more grief than benefit.

The syntax for Java is, indeed, a cleaned-up version of the syntax for C++. There is no need for header files, pointer arithmetic (or even a pointer syntax), structures, unions, operator overloading, virtual base classes, and so on. (See the C++ notes interspersed throughout the text for more on the differences between Java and C++.) The designers did not, however, attempt to fix all of the clumsy features of C++. For example, the syntax of the switch statement is unchanged in Java. If you know C++, you will find the transition to the Java syntax easy. If you are used to a visual programming environment (such as Visual Basic), you will not find Java simple.
There is much strange syntax (though it does not take long to get the hang of it). More important, you must do a lot more programming in Java. The beauty of Visual Basic is that its visual design environment almost automatically provides a lot of the infrastructure for an application. The equivalent functionality must be programmed manually, usually with a fair bit of code, in Java. There are, however, third-party development environments that provide "drag-and-drop"-style program development.

Another aspect of being simple is being small. One of the goals of Java is to enable the construction of software that can run stand-alone on small machines. The size of the basic interpreter and class support is about 40K bytes; adding the basic standard libraries and thread support (essentially a self-contained microkernel) adds an additional 175K.

This was a great achievement at the time. Of course, the library has since grown to huge proportions. There is now a separate Java Micro Edition with a smaller library, suitable for embedded devices.

Object Oriented

Simply stated, object-oriented design is a technique for programming that focuses on the data (= objects) and on the interfaces to that object. To make an analogy with carpentry, an "object-oriented" carpenter would be mostly concerned with the chair he was building, and secondarily with the tools used to make it; a "non-object-oriented" carpenter would think primarily of his tools. The object-oriented facilities of Java are essentially those of C++.

Object orientation has proven its worth in the last 30 years, and it is inconceivable that a modern programming language would not use it. Indeed, the object-oriented features of Java are comparable to those of C++. The major difference between Java and C++ lies in multiple inheritance, which Java has replaced with the simpler concept of interfaces, and in the Java metaclass model (which we discuss in Chapter 5).

NOTE: If you have no experience with object-oriented programming languages, you will want to carefully read Chapters 4 through 6. These chapters explain what object-oriented programming is and why it is more useful for programming sophisticated projects than are traditional, procedure-oriented languages like C or Basic.

Network-Savvy

Java has an extensive library of routines for coping with TCP/IP protocols like HTTP and FTP. Java applications can open and access objects across the Net via URLs with the same ease as when accessing a local file system.

We have found the networking capabilities of Java to be both strong and easy to use. Anyone who has tried to do Internet programming using another language will revel in how simple Java makes onerous tasks like opening a socket connection. (We cover networking in Volume II of this book.) The remote method invocation mechanism enables communication between distributed objects (also covered in Volume II).

Robust

Java is intended for writing programs that must be reliable in a variety of ways. Java puts a lot of emphasis on early checking for possible problems, later dynamic (runtime) checking, and eliminating situations that are error-prone. The single biggest difference between Java and C/C++ is that Java has a pointer model that eliminates the possibility of overwriting memory and corrupting data.

This feature is also very useful. The Java compiler detects many problems that, in other languages, would show up only at runtime.
As for the second point, anyone who has spent hours chasing memory corruption caused by a pointer bug will be very happy with this feature of Java. If you are coming from a language like Visual Basic that doesn't explicitly use pointers, you are probably wondering why this is so important. C programmers are not so lucky. They need pointers to access strings, arrays, objects, and even files. In Visual Basic, you do not use pointers for any of these entities, nor do you need to worry about memory allocation for them. On the other hand, many data structures are difficult to implement in a pointerless language. Java gives you the best of both worlds. You do not need pointers for everyday constructs like strings and arrays. You have the power of pointers if you need it, for example, for linked lists. And you always have complete safety, because you can never access a bad pointer, make memory allocation errors, or have to protect against memory leaking away.

Architecture Neutral

The compiler generates an architecture-neutral object file format: the compiled code is executable on many processors, given the presence of the Java runtime system. The Java compiler does this by generating bytecode instructions which have nothing to do with a particular computer architecture. Rather, they are designed to be both easy to interpret on any machine and easily translated into native machine code on the fly.

This is not a new idea. More than 30 years ago, both Niklaus Wirth's original implementation of Pascal and the UCSD Pascal system used the same technique.

Of course, interpreting bytecodes is necessarily slower than running machine instructions at full speed, so it isn't clear that this is even a good idea. However, virtual machines have the option of translating the most frequently executed bytecode sequences into machine code, a process called just-in-time compilation. This strategy has proven so effective that even Microsoft's .NET platform relies on a virtual machine.

The virtual machine has other advantages. It increases security because the virtual machine can check the behavior of instruction sequences. Some programs even produce bytecodes on the fly, dynamically enhancing the capabilities of a running program.

Portable

Unlike C and C++, there are no "implementation-dependent" aspects of the specification. The sizes of the primitive data types are specified, as is the behavior of arithmetic on them. For example, an int in Java is always a 32-bit integer. In C/C++, int can mean a 16-bit integer, a 32-bit integer, or any other size that the compiler vendor likes. The only restriction is that the int type must have at least as many bytes as a short int and cannot have more bytes than a long int. Having a fixed size for number types eliminates a major porting headache. Binary data is stored and transmitted in a fixed format, eliminating confusion about byte ordering. Strings are saved in a standard Unicode format. The libraries that are a part of the system define portable interfaces. For example, there is an abstract Window class and implementations of it for UNIX, Windows, and the Macintosh.

As anyone who has ever tried knows, it is an effort of heroic proportions to write a program that looks good on Windows, the Macintosh, and ten flavors of UNIX. Java 1.0 made the heroic effort, delivering a simple toolkit that mapped common user interface elements to a number of platforms.
Unfortunately, the result was a library that, with a lot of work, could give barely acceptable results on different systems. (And there were often different bugs on the different platform graphics implementations.) But it was a start. There are many applications in which portability is more important than user interface slickness, and these applications did benefit from early versions of Java. By now, the user interface toolkit has been completely rewritten so that it no longer relies on the host user interface. The result is far more consistent and, we think, more attractive than in earlier versions of Java.

Interpreted

The Java interpreter can execute Java bytecodes directly on any machine to which the interpreter has been ported. Since linking is a more incremental and lightweight process, the development process can be much more rapid and exploratory. Incremental linking has advantages, but its benefit for the development process is clearly overstated. Early Java development tools were, in fact, quite slow. Today, the bytecodes are translated into machine code by the just-in-time compiler.

Multithreaded

The benefits of multithreading are better interactive responsiveness and real-time behavior. If you have ever tried to do multithreading in another language, you will be pleasantly surprised at how easy it is in Java (a short sketch follows the translation note below). Threads in Java also can take advantage of multiprocessor systems if the base operating system does so. On the downside, thread implementations on the major platforms differ widely, and Java makes no effort to be platform independent in this regard. Only the code for calling multithreading remains the same across machines; Java offloads the implementation of multithreading to the underlying operating system or a thread library. Nonetheless, the ease of multithreading is one of the main reasons why Java is such an appealing language for server-side development.

Overview of Java Programming (translation): The first release of Java in 1996 generated an incredible amount of interest.
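To ground the multithreading claim above, here is a minimal, self-contained sketch of two cooperating threads. It is our own illustration rather than an example from the book, and it uses the lambda syntax of modern Java rather than the anonymous classes of the Java 1.x era the text describes.

// ThreadDemo.java: a minimal two-thread program.
// Each worker prints a few numbered steps; join() waits for both to finish.
public class ThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            String name = Thread.currentThread().getName();
            for (int i = 1; i <= 3; i++) {
                System.out.println(name + " step " + i);
            }
        };
        Thread a = new Thread(task, "worker-A");
        Thread b = new Thread(task, "worker-B");
        a.start();   // both threads now run concurrently
        b.start();
        a.join();    // wait for completion before exiting
        b.join();
        System.out.println("done");
    }
}

The interleaving of the two workers' output varies from run to run, which is exactly the scheduling freedom the text attributes to the underlying operating system or thread library.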

Foreign Literature Translation: Customer Satisfaction (with original)


Foreign Literature Translation (with original). Translation 1: Determinants of Online Shoppers' Satisfaction in Korea.

Abstract

The purpose of this article is to identify the factors that may lead to customer satisfaction with online shopping malls in Korea. It is hypothesized that customers' positive perceptions of the usefulness of Internet shopping, its security, its technical competence, its customer support and the mall interface positively influence customer satisfaction. It is also posited that satisfied customers become loyal customers. The findings confirm that customer satisfaction has a significant effect on customer loyalty, indicating that customers show high loyalty when they are satisfied with the service. We also found that online customers' perceptions of the security risk of transactions, of customer support, of the usefulness of online shopping and of the mall interface are positively related to customer satisfaction.

Conceptual model

Online shoppers can easily sort the goods within one mall by price or quality, and can compare the same product across different malls. Online shopping can also save time and reduce information search costs. Customers may therefore perceive that they can get better deals online with less time and effort. This characteristic of the innovative system has been defined as perceived usefulness. Several empirical studies have found that customers' perceived usefulness of an adopted innovative technology influences their satisfaction. It is therefore hypothesized that the perceived usefulness of online shopping is positively related to satisfaction (H1).

Online customers' primary concern is the evident sense of insecurity involved in using credit cards on the Internet. Although authentication systems have improved markedly, customers' worry about transmitting sensitive information such as credit card numbers online will not be easily resolved. The online privacy protection environment is another issue of concern. Research shows that online customers worry that these online transactions may result in identity theft or misuse of their private information. It is therefore posited that the security of online shopping has a positive effect on customer satisfaction (H2).

Previous research shows that technical aspects of the system, such as network speed, error recovery capability and system stability, are important factors leading to customer satisfaction. For example, Kim and Lim (2001) found that network speed is related to online shoppers' satisfaction. Dellaert and Kahn (1999) also reported that slow web speeds negatively affect evaluations of website content when the delay is not well managed by the provider. Daniel and Aladwani report that rapid and accurate recovery from system errors, together with network speed, are important factors affecting the satisfaction of online banking users (H3).

Because of the impersonal nature of online transactions, rapid responses to customer inquiries about products and other services are important to customer satisfaction. It is also necessary to provide fast delivery, high-quality after-sales service and simple return procedures. For this reason, many online shopping malls equip themselves with interactive answering systems for customer inquiries. (A small code sketch of the satisfaction-loyalty correlation implied by these hypotheses follows.)
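To make the hypothesised satisfaction-loyalty relationship concrete, the sketch below computes a Pearson correlation over a handful of survey scores. The data, class name and scale are invented for illustration; they are not from the study.

// CorrelationSketch.java: Pearson correlation between two survey scales.
public class CorrelationSketch {
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double mx = 0, my = 0;
        for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
        mx /= n;
        my /= n;
        double sxy = 0, sxx = 0, syy = 0;
        for (int i = 0; i < n; i++) {
            sxy += (x[i] - mx) * (y[i] - my);   // covariance term
            sxx += (x[i] - mx) * (x[i] - mx);   // variance of x
            syy += (y[i] - my) * (y[i] - my);   // variance of y
        }
        return sxy / Math.sqrt(sxx * syy);
    }

    public static void main(String[] args) {
        // Hypothetical 5-point satisfaction and loyalty scores, six respondents.
        double[] satisfaction = {4, 5, 3, 2, 5, 4};
        double[] loyalty      = {4, 5, 2, 2, 4, 4};
        System.out.printf("r = %.3f%n", pearson(satisfaction, loyalty));
    }
}

A clearly positive r is what the studies summarised above report; the published work of course adds significance testing and multi-item measurement scales.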

The NIH Budget [Literature Translation]


Foreign Literature Translation. I. Original text: The NIH Budget

Introduction

Federal funding for biomedical research in the United States has fueled discoveries that have advanced our understanding of human disease, led to novel and effective diagnostic tools and therapies, and made our research enterprise an international paragon. Although it was not the original intent, this investment, through the National Institutes of Health (NIH), has also become an essential source of support for academic medical centers, providing funds for faculty and staff salaries, operational expenses, and even capital improvements related to research that can no longer be supported by clinical income. Until approximately 20 years ago, clinical income often subsidized research, but managed care, increased scrutiny and efficiency in the management of clinical expenses, and reductions in federal support for teaching hospitals have rendered clinical margins insufficient to support the research mission. Although some may see institution building as an inappropriate use of NIH funds, a consistent, productive biomedical research enterprise requires a solid infrastructure.

Ensuring durable federal support for such research has not, however, been without tribulations. As with all line items in the federal budget, NIH funding is subject to the vicissitudes of the political process, and intermittent periods of growth have been followed by periods of decline. Some argue that funding cycles refresh the research enterprise, eliminating through competition investigators whose work is not of the highest quality. Though not as sanguine about their purposes or consequences, the academic medical community has accepted these cycles and works to find ways to dampen the effects of downturns on research programs and institutional stability.

Redefining the concept of comprehensive budget management

Budgeting originated in eighteenth-century England, where it was first implemented in government departments, mainly to control the king's power to tax and thereby limit government spending. The budget was then developed further in the United States, where the implementation of public budgets by small towns played an important role in establishing the national budget system. Inspired by government budgeting, the budget management concept was subsequently taken up by large companies in the United States for business management.

Later the concept of comprehensive budget management emerged. Comprehensive budget management takes the budget as the main line along which the financial and non-financial resources of internal departments are controlled, reflected and evaluated, so as to improve the level and efficiency of management. Since the twentieth century, comprehensive budget management has been applied successfully by many large US enterprises, such as General Electric, DuPont and General Motors, with good results. The method soon became a standard operating procedure for large modern industrial and commercial enterprises.
From its initial role in planning and coordinating production, it has grown into an integrated strategic mechanism for enterprise management that also serves control, motivation and evaluation functions, and it now lies at the heart of the internal control system.

Comprehensive budget management reflects three features. First, it enhances the organization's governance capacity and strengthens the contractual nature of organization and management. Second, it provides the constraints and incentives the organization needs, playing the role that prices, the separation of powers and incentives play in the market. Finally, it links strategy with the enterprise's daily operations. To establish and perfect a modern enterprise system, a scientific budget management system must be set up: the comprehensive budget is not just a budget form for the modern enterprise, but a control mechanism integrating targets, coordination, control and assessment. Comprehensive budget management helps strengthen resistance and adaptation to market changes and risks, helps streamline the management system and operational mechanism, and provides the most effective support system for business strategy.

The comprehensive budget management system includes budget preparation, budget execution and monitoring, and budget assessment and performance evaluation. The budget starts from strategy, from shareholders' demands and market conditions, and transforms strategy into daily operational performance indicators carried by a series of quantitative, specific forms and documents. Budget execution and monitoring is the process of turning budget goals into reality: progress and results are reflected in budget execution through the identification and decomposition of variances. Budget assessment and performance evaluation then analyse and decompose those variances through regular and ad hoc evaluations, correcting course in a timely manner with appropriate incentives.

The Future of Biomedical Research

We have recently entered another period of stagnant funding for the NIH. Having doubled between 1998 and 2003, the NIH budget is expected to be $28.6 billion for fiscal year 2007, a 0.1 percent decrease from last year [1], or a 3.8 percent decrease after adjustment for inflation, the first true budgeted reduction in NIH support since 1970. Whereas national defense spending has reached approximately $1,600 per capita, federal spending for biomedical research now amounts to about $97 per capita, a rather modest investment in "advancing the health, safety, and well-being of our people" [1]. This downturn is more severe than any we have faced previously, since it comes on the heels of the doubling of the budget and threatens to erode the benefits of that investment. It takes many years for institutions to develop investigators skilled in modern research techniques and to build the costly, complicated infrastructure necessary for biomedical research. Rebuilding the investigator pool and the infrastructure after a downturn is expensive and time-consuming and weakens the benefits of prior funding.
This situation is unlikely to improve anytime soon: the resources required for the war in Iraq and for hurricane relief, along with the erosion of the tax base by the current administration's fiscal policies, are expected to have long-term, far-reaching effects.

Most institutes within the NIH have quickly adopted policy changes to minimize the adverse consequences, including reducing the maximum grant term from five years to four years, eliminating cost-of-living increases, and capping the amounts of awards. These changes have important effects on currently funded research and the infrastructure that it requires. Moreover, the future of biomedical research is also affected: NIH training grants represent a major source of support for postdoctoral and clinical fellows during their research experiences, and budget limitations affect not only available training slots but also the training climate. As it becomes increasingly difficult for established investigators to renew their grants, their frustration is transmitted to trainees, who increasingly opt for alternative career paths, shrinking the pipeline of future investigators.

Meanwhile, for more than 10 years, the pharmaceutical industry has been investing larger amounts in research and development than the federal government: $51.3 billion in fiscal year 2005 [2], for instance, or 78 percent more than NIH funding that year. Fiscal conservatives may view this industry investment as an appropriate, market-driven solution that should suffice and that does not justify additional government funding for biomedical research. However, the lion's share of industry funds is applied to drug development, especially clinical trials, rather than to fundamental research, and is targeted to applications that are first and foremost of value to the industry. Federal funding has traditionally targeted a broad range of investigator-initiated research, from studies of molecular mechanisms of disease to population-based studies of disease prevalence, promoting an unrestricted environment of biomedical discovery that serves as the basis for industry-driven development. These approaches are complementary, and both have served society well.

How, then, can we ensure that funding for biomedical research is maintained at adequate levels for the foreseeable future? Korn and colleagues have argued that stability and quality can be ensured by maintaining overall funding at an annual growth rate of 8 to 9 percent (unadjusted for inflation) [3]. They base their conclusion on the costs associated with six basic goals, which I endorse: preserving the integrity of the merit and peer-review process, which requires that funding levels not fall below the 30th percentile success rate; maintaining a stable pool of new investigators; sustaining commitments to continuing awards; preserving the capacity of institutions that receive grants by minimizing cost-sharing with the federal government (e.g., for lease costs or animal care); recognizing the continuous growth of new research technologies; and maintaining a robust intramural NIH research program. I would, however, modify the annual required growth rate to 5 to 6 percent real growth plus inflation: the annual growth rate over the past 30 years has been approximately 10 percent, which reflects an annual average real growth rate of 5.2 percent and an average inflation rate of 4.8 percent (ranging from 1.1 to 13.3 percent); compounding the two gives 1.052 x 1.048, or roughly 10 percent nominal growth per year.

Unfortunately, the federal government probably cannot accommodate this growth rate under its current fiscal constraints.
So maintaining, by statute, a stable base level of funding equivalent to the fiscal year 2006 budget, with annual inflationary adjustments, seems to me a reasonable starting point. Congress may then choose to allocate additional resources annually, subject to availability, aiming for an annual real growth rate of 5 to 6 percent. Alternatively, to avoid politicization of the flow of funds and their targets, a dedicated tax could be imposed on consumer products that threaten human health, such as fast foods, tobacco, and alcohol, and used to maintain the biomedical research infrastructure by a formulaic allocation, much as the gasoline tax is used to maintain the federal highway infrastructure.

The NIH can optimize the use of these funds by limiting the size and duration of awards as well as the number of awards per investigator. It might also consider shifting the target of grants. Whereas other countries often provide funding as an award for work accomplished before the application, the NIH theoretically funds proposed work, though in reality the peer-review process effectively requires that a hypothesis virtually be proved correct before funding is approved. Within the NIH intramural research program, funding levels for individual laboratories are often decided on the basis of accomplishments during the previous cycle, so there is already a precedent that can be applied to the extramural program. Of course, new investigators would need to be reviewed differently to ensure appropriate allocation of funds to these promising members of the research community who have no or limited previous research accomplishments.

Even with such changes, however, it would be preferable for academic medical centers to cease relying so heavily on the NIH for research funding. In addition to having investigators seek funding from not-for-profit organizations and from industry, I believe that centers should encourage major nongovernmental funding organizations to consolidate their resources into a durable pool of support for the best research proposals in the life sciences. In addition, individual centers should encourage generous donors to support unrestricted research endowments designed to fund translational and clinical research programs within the medical center, or to contribute to a national pool linked with support from industry to establish a national endowment for funding translational research and drug or device development within academic medical centers. Such promotion of later-phase research within academic medical centers could enhance the value of the intellectual property derived from it, financial benefits from which could, in turn, be used to establish research endowments within the medical centers.

The federal government might also consider alternative ways to fund the NIH budget that are independent of allocations from the tax base. One approach might include seeking support from industries whose products contribute to the burden of disease, providing tax credits as an incentive for their contribution. These resources could be used to establish an independently managed national fund, which could be used to ensure adequate support for biomedical research without the funding gaps or oscillations that currently plague the process.
In this scenario, unused money from any fiscal year would be retained in the fund, with the goal of achieving self-sustained growth.

Whatever mechanisms are ultimately chosen, it seems clear that new methods of support must be developed if biomedical research is to continue to thrive in the United States. The goal of a durable, steady stream of support for research in the life sciences has never been more pressing, since the research derived from that support has never promised greater benefits. The fate of life-sciences research should not be consigned to the political winds of Washington.

Source: Joseph Loscalzo. The NIH budget. The New England Journal of Medicine, April 20, 2006, Vol. 354(16): 1665-1667.

II. Translated article. Translation: The NIH Budget. Introduction: Federal funding for biomedical research in the United States has driven advances in our understanding of human disease, pointed the way to new and effective diagnostic tools and therapies, and made our research enterprise an international model.

Enterprise Fixed Assets Management: Foreign Literature Translation (Latest)


Graduation Project Appendix: Foreign Literature Translation, Original + Translation.

Original: The study on fixed assets management of enterprise
Daum J H

Abstract

The rapid development of computer technology has seen computers penetrate all walks of life; they have become an integral part of every industry, and the importance of computer software in the enterprise keeps growing. Fixed assets management occupies an important share of enterprise management: enterprise fixed assets are large in quantity, varied in type, high in value and long in use cycle, so fixed assets management faces a large volume of repetitive data copying, form filling, data saving and querying. Using fixed assets rationally and managing them scientifically and comprehensively, in particular strengthening the information management, utilization analysis and potential-mining of fixed assets and improving their utilization, is of great significance for enhancing the vitality of the enterprise and improving the economic efficiency of the unit.

Keywords: fixed assets; management information system; design

1 Introduction

Early on, before fixed asset management (EAM) software systems existed in Europe to support operations, the accounts, the physical items and the asset cards frequently disagreed in the course of enterprise fixed assets management. With so much asset information, data processing was slow and inefficient. Leadership often did not know exactly how much property the enterprise held or where each asset was located; this made assets hard to allocate, maintain, use and scrap, and meant the finance department could not write them off in time. With no unified scrap-handling procedure, depreciation calculation was complex and its accuracy low. In short, under the early mode of fixed assets management it was difficult to manage assets effectively. With the growth of the enterprise, the data involved in fixed assets processing grows ever larger, and the traditional mode of fixed assets management can no longer meet the demands of enterprise management; hence the fixed assets management system arose. The fixed assets management system achieves effective management of fixed assets: it automates the complicated management process, performs statistical analysis and calculation of assets, produces and prints reports, and simplifies many links in management. It realizes quick query, statistics and allocation of assets and implements paperless, mobile examination and approval. Through advanced bar-code technology, the physical fixed assets are comprehensively regulated from purchase and receiving through transfer and inventory to cleanup and scrapping. Effective asset assessment improves asset quality and purchasing.

2 Literature review

In recent years, many countries have built on the basics of fixed assets management by proposing quality management theory, with significant achievements. The six sigma quality management procedures used by MOTOROLA and GM not only reduced production and R&D costs, but also turned the original chaos of fixed assets into clearer, more concise and more effective management procedures. PWC, one of the big five global accounting firms, has focused its research on fixed assets management among the world's leading high-performing suppliers.
At the same time, PWC makes full use of the knowledge of its experts, consultants and industry leaders to deepen and promote the concept of fixed assets management, and has achieved remarkable success in this direction. The literature also points out that the development and implementation of fixed assets management information systems abroad is mainly of two kinds. The first comes from companies specializing in fixed assets information systems: engaged year after year in the research of fixed assets management and in information system planning and design, they have a wealth of management experience and a strong technical base; having started early and accumulated plenty of experience and development cases, these companies lead the industry in fixed assets management. The second comes from a company's own technology department, which studies and develops a special fixed assets management system customized to the specific needs of the company; such a system can meet the current needs of the business well, but it lacks scalability, and as the company's business grows defects often appear. Fixed assets management still has great space for development. The fixed assets management systems of many units are still in their infancy, with information islands, low system efficiency and slow data-update cycles. Most fixed assets management systems are still at the static, stand-alone, batch-processing stage, unable to meet the needs of all enterprise departments. With the current rapid development of network technology, developing a network-based fixed assets management information system can effectively resolve the plight of fixed assets management; this is also the inevitable developing trend of information systems in the future.

3 System construction requirements

Fixed assets management is very important work in every enterprise. Managed well, fixed assets accurately reflect the performance and results of an enterprise, help crack down on corruption and provide a basis for periodic staff inspection, whereas mismanagement lowers the utilization rate of fixed assets and can even lead to the loss of state-owned assets. This is especially true for enterprises with widely distributed fixed assets used by many departments, including assets held in the direct custody of intermediary departments and stations. The scope of use is wide and the classification structure complex, with coverage divided into more than a dozen categories: experimental equipment, daily business equipment, office equipment, measuring instruments and automation equipment, communications equipment, transportation equipment, tools, land, buildings, welfare facilities and so on. In addition, some enterprises hold large quantities of fixed assets, usually involving large sums of money and updated quickly; management must reflect these changes promptly and accurately. All of this poses challenges for enterprise fixed assets management.
The characteristics of enterprise fixed assets require a targeted approach to development, deploying the management information system according to the characteristics of different units; it is therefore absolutely necessary to establish a proprietary fixed assets management system for different business enterprises.

In order to meet the requirements of enterprise fixed assets management, the system uses an advanced computer programming language and database management technology, striving to establish a complete enterprise fixed assets management system. The system must truthfully and accurately reflect information such as the usage, service period, equipment number and type of the enterprise's fixed assets, covering a whole processing flow from asset purchase, numbering and distribution through maintenance to recycling. At the same time, the system should grant different permissions by department and by business role, and should be designed according to the enterprise's specific rules for fixed assets management, arriving at a seamless combination of the information system and the organization's business.

4 System design

The fixed assets management system comprises four main modules: system management, basic information management, asset management, and examination and approval administration. Following the business processes and development needs of enterprise fixed assets management, it realizes a networked management information system with asset management at its core, granting full authorization to users while providing overall information and office functions.

System management: this module contains the staff management sub-module, which mainly distributes system administrative authority. Employees at different levels are awarded different administrative privileges; employees with different permissions differ in the system functions they can use and in the data they can access (a short sketch of such tiered permissions follows below).
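As promised above, a minimal sketch of tiered permissions. The role and action names are hypothetical stand-ins, not the paper's specification; the point is only that each role maps to a fixed set of permitted operations.

// PermissionSketch.java: role-based access in the spirit of the text above.
import java.util.EnumSet;
import java.util.Set;

enum Role { ORDINARY_USER, DEPARTMENT_ADMIN, SYSTEM_ADMIN }
enum Action { QUERY_ASSET, RECEIVE_ASSET, EDIT_ASSET, EDIT_USER }

public class PermissionSketch {
    // Map each role to the actions it may perform.
    static Set<Action> allowed(Role role) {
        switch (role) {
            case SYSTEM_ADMIN:     return EnumSet.allOf(Action.class);
            case DEPARTMENT_ADMIN: return EnumSet.of(Action.QUERY_ASSET,
                                                     Action.RECEIVE_ASSET,
                                                     Action.EDIT_ASSET);
            default:               return EnumSet.of(Action.QUERY_ASSET,
                                                     Action.RECEIVE_ASSET);
        }
    }

    public static void main(String[] args) {
        // An ordinary user may query but not edit; only admins manage users.
        System.out.println(allowed(Role.ORDINARY_USER).contains(Action.EDIT_ASSET)); // false
        System.out.println(allowed(Role.SYSTEM_ADMIN).contains(Action.EDIT_USER));   // true
    }
}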
The staff management module also adjusts employees' access as their positions change.

Basic information management: this includes usage management, custody management and category management for the organization. Usage management handles fixed assets distributed in different locations: many enterprises today have departments spread over different areas of a city, or subsidiaries in other cities, so this module manages fixed assets by the region in which each unit sits. Management by organization classifies fixed assets by the institution they belong to, so it is clear which institution holds which fixed assets. Category management manages fixed assets by type.

Asset management is the core function of the fixed assets management information system, and includes the assets list, asset storage entry, the recipients list and query management. The assets list gives an inventory of all fixed assets in the unit, and the list can also be output by asset and category for printing. Asset storage entry covers the examination and approval of new assets, coding them and entering them in the database for convenient future query, management and maintenance. The recipients list reflects the use of all fixed assets in the unit; through this function, managers can clearly know the whereabouts and usage of assets. Query management supports specific queries on fixed assets to understand their usage. (A sketch of this receive-and-query flow closes this section, after the source note.)

Employees who access the enterprise fixed assets management system are divided by the staff management module into three broad categories: system administrators, assistants and information-consulting staff. Depending on the user's identity, the application interface they see differs, so that different users can access different functions. The post-login page performs authentication; only verified employees can log in to the corresponding pages. The staff management module mainly manages users and administrators at all levels. User information management includes modifying and querying user information, while administrator management includes adding or modifying administrator information, querying administrators and adding new users. Ordinary users can only query fixed assets and operate the corresponding asset receiving module; only administrators can operate the higher-authority user information and fixed assets management modules.

Source: Daum J H. The study on fixed assets management of enterprise [J]. Measuring Business Excellence, 2016, 2(1): 6-17.

Translation: The Study on Fixed Assets Management of Enterprise. Abstract: With the rapid development of computer technology, computers have penetrated every industry and have long been an indispensable part of each.
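And here is the receive-and-query sketch promised in the asset management discussion above. Class, field and status names are our own illustrative choices; a real system would back this with the database the paper calls for.

// AssetSketch.java: an asset record plus the receive/return and query flow.
import java.util.ArrayList;
import java.util.List;

enum AssetStatus { IN_STORAGE, IN_USE, SCRAPPED }

class FixedAsset {
    final String code;       // asset number assigned at storage entry
    final String category;   // e.g. office equipment, transportation equipment
    AssetStatus status = AssetStatus.IN_STORAGE;
    String holder;           // department or employee currently using it

    FixedAsset(String code, String category) {
        this.code = code;
        this.category = category;
    }

    void receive(String who) {        // corresponds to a recipients-list entry
        status = AssetStatus.IN_USE;
        holder = who;
    }

    void returnToStorage() {
        status = AssetStatus.IN_STORAGE;
        holder = null;
    }
}

public class AssetSketch {
    public static void main(String[] args) {
        List<FixedAsset> register = new ArrayList<>();  // the "assets list"
        FixedAsset pc = new FixedAsset("FA-0001", "office equipment");
        register.add(pc);                               // storage entry
        pc.receive("Finance Department");               // recipients list
        for (FixedAsset a : register) {                 // query management
            System.out.println(a.code + " | " + a.category
                               + " | " + a.status + " | " + a.holder);
        }
    }
}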

5. Foreign Literature Translation (with original): Industrial Cluster, Regional Brand


Foreign Literature Translation (with original). Translation 1: The Competitive Advantage of Industrial Clusters: The Case of Dalian Software Park, China. Weilin Zhao, Chihiro Watanabe, Charla Griffy-Brown [J]. Marketing Science, 2009(2): 123-125.

Abstract: With the original aim of promoting industrial development, this paper explores the competitive advantage of China's software parks. Industrial clusters are deeply rooted in local institutional systems and therefore possess distinctive competitive advantages. Based on Porter's "diamond" model and the results of a SWOT analysis, the case of Dalian Software Park in China is analysed qualitatively. An industrial cluster comprises a set of geographically concentrated firms rooted in a local institutional system of local government, industry and academia, from which it draws abundant resources and thereby gains competitive advantage for industrial economic development. To successfully steer the transition of China's economic paradigm from mass production to new product development, it is necessary to continuously strengthen the competitive advantage of industrial clusters and promote industrial and regional economic development.

Keywords: competitive advantage; industrial cluster; local institutional system; Dalian Software Park; China; science and technology park; innovation; regional development

Industrial clusters

The industrial cluster is a frontier concept in economic development that Porter [1] popularized. As a recognized expert in global economic strategy, he pointed out the role of industrial clusters in promoting regional economic development. He wrote that the concept of clusters, "or the presence in particular geographic locations of companies, suppliers and institutions associated with an industry, has become a new element in how companies and governments think about and evaluate local competitive advantage and make public policy." To this day, however, he has not given a precise definition of the industrial cluster. Recently, progress has been made in the literature on industrial clusters examined by Doeringer and Terkla [2] and Levy [3], which identifies them as "geographic concentrations of industries gaining advantage." "Geographic concentration" defines a key and distinctive basic property of industrial clusters. A cluster is formed by the agglomeration of many firms specific to a region; they usually share common markets, common suppliers, trading partners, educational institutions and other intangibles such as knowledge and information, and likewise they face similar opportunities and threats.

There are many development models of industrial clusters around the world. For example, Silicon Valley in California and Route 128 in Massachusetts are well-known industrial clusters in the United States: the former is famous for microelectronics, biotechnology and its venture capital market, while the latter is renowned worldwide for software, computers and communications hardware [4].

A Short English Literature Passage (original with Chinese translation)


A fern that hyperaccumulates arsenic (compiler's note: this is the title; the original is easy to find online and includes a table, which has not been translated here)

A hardy, versatile, fast-growing plant helps to remove arsenic from contaminated soils.

Contamination of soils with arsenic, which is both toxic and carcinogenic, is widespread [1]. We have discovered that the fern Pteris vittata (brake fern) is extremely efficient in extracting arsenic from soils and translocating it into its above-ground biomass. This plant, which to our knowledge is the first known arsenic hyperaccumulator as well as the first fern found to function as a hyperaccumulator, has many attributes that recommend it for use in the remediation of arsenic-contaminated soils.

We found brake fern growing on a site in Central Florida contaminated with chromated copper arsenate (Fig. 1a). We analysed the fronds of plants growing at the site for total arsenic by graphite furnace atomic absorption spectroscopy. Of 14 plant species studied, only brake fern contained large amounts of arsenic (As; 3,280–4,980 p.p.m.). We collected additional samples of the plant and soil from the contaminated site (18.8–1,603 p.p.m. As) and from an uncontaminated site (0.47–7.56 p.p.m. As). Brake fern extracted arsenic efficiently from these soils into its fronds: plants growing in the contaminated site contained 1,442–7,526 p.p.m. arsenic and those from the uncontaminated site contained 11.8–64.0 p.p.m. These values are much higher than those typical for plants growing in normal soil, which contain less than 3.6 p.p.m. of arsenic [3].

As well as being tolerant of soils containing as much as 1,500 p.p.m. arsenic, brake fern can take up large amounts of arsenic into its fronds in a short time (Table 1). Arsenic concentration in fern fronds growing in soil spiked with 1,500 p.p.m. arsenic increased from 29.4 to 15,861 p.p.m. in two weeks. Furthermore, in the same period, ferns growing in soil containing just 6 p.p.m. arsenic accumulated 755 p.p.m. of arsenic in their fronds, a 126-fold enrichment (755/6 is approximately 126). Arsenic concentrations in brake fern roots were less than 303 p.p.m., whereas those in the fronds reached 7,234 p.p.m. Addition of 100 p.p.m. arsenic significantly stimulated fern growth, resulting in a 40% increase in biomass compared with the control (data not shown).

After 20 weeks of growth, the plant was extracted using a solution of 1:1 methanol:water to speciate arsenic with high-performance liquid chromatography-inductively coupled plasma mass spectrometry. Almost all arsenic was present as relatively toxic inorganic forms, with little detectable organoarsenic species [4]. The concentration of As(III) was greater in the fronds (47–80%) than in the roots (8.3%), indicating that As(V) was converted to As(III) during translocation from roots to fronds.

As well as removing arsenic from soils containing different concentrations of arsenic (Table 1), brake fern also removed arsenic from soils containing different arsenic species (Fig. 1c). Again, up to 93% of the arsenic was concentrated in the fronds. Although both FeAsO4 and AlAsO4 are relatively insoluble in soils [1], brake fern hyperaccumulated arsenic derived from these compounds into its fronds (136–315 p.p.m.) at levels 3–6 times greater than soil arsenic.
Brake fern is mesophytic and is widely cultivated and naturalized in many areas with a mild climate. In the United States, it grows in the southeast and in southern California [5]. The fern is versatile and hardy, and prefers sunny (unusual for a fern) and alkaline environments (where arsenic is more available). It has considerable biomass, and is fast growing, easy to propagate, and perennial.

We believe this is the first report of significant arsenic hyperaccumulation by an unmanipulated plant. Brake fern has great potential to remediate arsenic-contaminated soils cheaply and could also aid studies of arsenic uptake, translocation, speciation, distribution and detoxification in plants.

*Soil and Water Science Department, University of Florida, Gainesville, Florida 32611-0290, USA. e-mail: lqma@
†Cooperative Extension Service, University of Georgia, Terrell County, PO Box 271, Dawson, Georgia 31742, USA
‡Department of Chemistry & Southeast Environmental Research Center, Florida International University, Miami, Florida 33199, USA

1. Nriagu, J. O. (ed.) Arsenic in the Environment Part 1: Cycling and Characterization (Wiley, New York, 1994).
2. Brooks, R. R. (ed.) Plants that Hyperaccumulate Heavy Metals (Cambridge Univ. Press, 1998).
3. Kabata-Pendias, A. & Pendias, H. in Trace Elements in Soils and Plants 203-209 (CRC, Boca Raton, 1991).
4. Koch, I., Wang, L., Ollson, C. A., Cullen, W. R. & Reimer, K. J. Envir. Sci. Technol. 34, 22-26 (2000).
5. Jones, D. L. Encyclopaedia of Ferns (Lothian, Melbourne, 1987).

A fern that hyperaccumulates arsenic (translation): A hardy, versatile, fast-growing plant helps to remove arsenic from contaminated soils. Contamination of soils with arsenic, which is both toxic and carcinogenic, is very widespread.

Foreign Literature Translation: Original + Translation


Foreign Literature Translation: Original

Analysis of Continuous Prestressed Concrete Beams
Chris Burgoyne
March 26, 2005

1、Introduction

This conference is devoted to the development of structural analysis rather than the strength of materials, but the effective use of prestressed concrete relies on an appropriate combination of structural analysis techniques with knowledge of the material behaviour. Design of prestressed concrete structures is usually left to specialists; the unwary will either make mistakes or spend inordinate time trying to extract a solution from the various equations.

There are a number of fundamental differences between the behaviour of prestressed concrete and that of other materials. Structures are not unstressed when unloaded; the design space of feasible solutions is totally bounded; in hyperstatic structures, various states of self-stress can be induced by altering the cable profile, and all of these factors are influenced by creep and thermal effects. How were these problems recognised and how have they been tackled?

Ever since the development of reinforced concrete by Hennebique at the end of the 19th century (Cusack 1984), it was recognised that steel and concrete could be more effectively combined if the steel was pretensioned, putting the concrete into compression. Cracking could be reduced, if not prevented altogether, which would increase stiffness and improve durability. Early attempts all failed because the initial prestress soon vanished, leaving the structure to behave as though it was reinforced; good descriptions of these attempts are given by Leonhardt (1964) and Abeles (1964).

It was Freyssinet's observations of the sagging of the shallow arches on three bridges that he had just completed in 1927 over the River Allier near Vichy which led directly to prestressed concrete (Freyssinet 1956). Only the bridge at Boutiron survived WWII (Fig. 1). Hitherto, it had been assumed that concrete had a Young's modulus which remained fixed, but he recognised that the deferred strains due to creep explained why the prestress had been lost in the early trials. Freyssinet (Fig. 2) also correctly reasoned that high tensile steel had to be used, so that some prestress would remain after the creep had occurred, and also that high quality concrete should be used, since this minimised the total amount of creep. The history of Freyssinet's early prestressed concrete work is written elsewhere.

Figure 1: Boutiron Bridge, Vichy
Figure 2: Eugen Freyssinet

At about the same time work was underway on creep at the BRE laboratory in England ((Glanville 1930) and (1933)). It is debatable which man should be given credit for the discovery of creep, but Freyssinet clearly gets the credit for successfully using the knowledge to prestress concrete.

There are still problems associated with understanding how prestressed concrete works, partly because there is more than one way of thinking about it. These different philosophies are to some extent contradictory, and certainly confusing to the young engineer. This is also reflected, to a certain extent, in the various codes of practice.

Permissible stress design philosophy sees prestressed concrete as a way of avoiding cracking by eliminating tensile stresses; the objective is for sufficient compression to remain after creep losses. Untensioned reinforcement, which attracts prestress due to creep, is anathema.
This philosophy derives directly from Freyssinet's logic and is primarily a working stress concept.

Ultimate strength philosophy sees prestressing as a way of utilising high tensile steel as reinforcement. High strength steels have high elastic strain capacity, which could not be utilised when used as reinforcement; if the steel is pretensioned, much of that strain capacity is taken out before bonding the steel to the concrete. Structures designed this way are normally designed to be in compression everywhere under permanent loads, but allowed to crack under high live load. The idea derives directly from the work of Dischinger (1936) and his work on the bridge at Aue in 1939 (Schonberg and Fichter 1939), as well as that of Finsterwalder (1939). It is primarily an ultimate load concept. The idea of partial prestressing derives from these ideas.

The load-balancing philosophy, introduced by T.Y. Lin, uses prestressing to counter the effect of the permanent loads (Lin 1963). The sag of the cables causes an upward force on the beam, which counteracts the load on the beam. Clearly, only one load can be balanced, but if this is taken as the total dead weight, then under that load the beam will perceive only the net axial prestress and will have no tendency to creep up or down.

These three philosophies all have their champions, and heated debates take place between them as to which is the most fundamental.

2、Section design

From the outset it was recognised that prestressed concrete has to be checked at both the working load and the ultimate load. For steel structures, and those made from reinforced concrete, there is a fairly direct relationship between the load capacity under an allowable stress design, and that at the ultimate load under an ultimate strength design. Older codes were based on permissible stresses at the working load; new codes use moment capacities at the ultimate load. Different load factors are used in the two codes, but a structure which passes one code is likely to be acceptable under the other.

For prestressed concrete, those ideas do not hold, since the structure is highly stressed, even when unloaded. A small increase of load can cause some stress limits to be breached, while a large increase in load might be needed to cross other limits. The designer has considerable freedom to vary both the working load and ultimate load capacities independently; both need to be checked.

A designer normally has to check the tensile and compressive stresses, in both the top and bottom fibre of the section, for every load case.
The critical sections are normally, but not always, the mid-span and the sections over piers, but other sections may become critical when the cable profile has to be determined. The stresses at any position are made up of three components, one of which normally has a different sign from the other two; consistency of sign convention is essential. If P is the prestressing force and e its eccentricity, A and Z are the area of the cross-section and its elastic section modulus, while M is the applied moment, then

f_t <= P/A + P*e/Z - M/Z <= f_c      (1)

where f_t and f_c are the permissible stresses in tension and compression. Thus, for any combination of P and M, the designer already has four inequalities to deal with.

The prestressing force differs over time, due to creep losses, and a designer is usually faced with at least three combinations of prestressing force and moment:
• the applied moment at the time the prestress is first applied, before creep losses occur,
• the maximum applied moment after creep losses, and
• the minimum applied moment after creep losses.

Figure 4: Gustave Magnel

Other combinations may be needed in more complex cases. There are at least twelve inequalities that have to be satisfied at any cross-section, but since an I-section can be defined by six variables, and two are needed to define the prestress, the problem is over-specified and it is not immediately obvious which conditions are superfluous. In the hands of inexperienced engineers, the design process can be very long-winded. However, it is possible to separate out the design of the cross-section from the design of the prestress. By considering pairs of stress limits on the same fibre, but for different load cases, the effects of the prestress can be eliminated, leaving expressions of the form:

Z >= (moment range) / (permissible stress range)      (2)

These inequalities, which can be evaluated exhaustively with little difficulty, allow the minimum size of the cross-section to be determined.

Once a suitable cross-section has been found, the prestress can be designed using a construction due to Magnel (Fig. 4). The stress limits can all be rearranged into the form:

e <= -Z/A + (f*Z + M) * (1/P)      (3)

By plotting these on a diagram of eccentricity versus the reciprocal of the prestressing force, a series of bound lines will be formed. Provided the inequalities (2) are satisfied, these bound lines will always leave a zone showing all feasible combinations of P and e (a small numerical sketch of this construction is appended at the end of this extract). The most economical design, using the minimum prestress, usually lies on the right hand side of the diagram, where the design is limited by the permissible tensile stresses.

Plotting the eccentricity on the vertical axis allows direct comparison with the cross-section, as shown in Fig. 5. Inequalities (3) make no reference to the physical dimensions of the structure, but these practical cover limits can be shown as well.

A good designer knows how changes to the design and the loadings alter the Magnel diagram. Changing both the maximum and minimum bending moments, but keeping the range the same, raises and lowers the feasible region. If the moments become more sagging, the feasible region gets lower in the beam.

In general, as spans increase, the dead load moments increase in proportion to the live load. A stage will be reached where the economic point (A on Fig. 5) moves outside the physical limits of the beam; Guyon (1951a) denoted the limiting condition as the critical span.
Shorter spans will be governed by tensile stresses in the two extreme fibres, while longer spans will be governed by the limiting eccentricity and the tensile stress in the bottom fibre. However, it does not take a large increase in moment before compressive stresses govern in the bottom fibre under maximum moment. Only when much longer spans are required, and the feasible region moves as far down as possible, does the structure become governed by compressive stresses in both fibres.

3、Continuous beams

The design of statically determinate beams is relatively straightforward; the engineer can work on the basis of the design of individual cross-sections, as outlined above. A number of complications arise when the structure is indeterminate, which means that the designer has to consider not only a critical section but also the behaviour of the beam as a whole. These complications are due to the interaction of a number of factors, such as creep, temperature effects and construction-sequence effects. It is the development of these ideas which forms the core of this paper. The problems of continuity were addressed at a conference in London (Andrew and Witt 1951). The basic principles, and the nomenclature, were already in use, but to modern eyes the concentration on hand-analysis techniques is unusual, and one of the principal concerns seems to have been the difficulty of estimating losses of prestressing force.

3.1 Secondary Moments

A prestressing cable in a beam causes the structure to deflect. Unlike the statically determinate beam, where this motion is unrestrained, in an indeterminate beam the movement causes a redistribution of the support reactions, which in turn induces additional moments. These are often termed secondary moments, though they are not always small, or parasitic moments, though they are not always harmful.

Freyssinet's bridge across the Marne at Luzancy, started in 1941 but not completed until 1946, is often thought of as a simply supported beam, but it was actually built as a two-hinged arch (Harris 1986), with support reactions adjusted by means of flat jacks and wedges which were later grouted in (Fig. 6). The same principles were applied in the later and larger beams built over the same river.

Magnel built the first indeterminate beam bridge at Sclayn, in Belgium (Fig. 7) in 1946. The cables are virtually straight, but he adjusted the deck profile so that the cables were close to the soffit near mid-span. Even with straight cables the sagging secondary moments are large: about 50% of the hogging moment at the central support caused by dead and live load.

The secondary moments cannot be found until the profile is known, but the cable cannot be designed until the secondary moments are known. Guyon (1951b) introduced the concept of the concordant profile, a profile that causes no secondary moments; e_s and e_p thus coincide. Any line of thrust is itself a concordant profile.

The designer is then faced with a slightly simpler problem: a cable profile has to be chosen which not only satisfies the eccentricity limits (3) but is also concordant. That in itself is not a trivial operation, but it is helped by the fact that the bending moment diagram resulting from any load applied to a beam is itself a concordant profile for a cable of constant force. Such loads are termed notional loads to distinguish them from the real loads on the structure; a short numerical illustration of secondary moments is given below.
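As a numerical aside on Section 3.1 (the force, eccentricity, span and stiffness are illustrative assumptions, not taken from the bridges described above): for a straight tendon at constant eccentricity e in a two-span beam, the flexibility method gives a secondary moment of 1.5·P·e at the central support, and the sketch below reproduces that classical result.

```python
# Secondary moment for a straight tendon in a two-span continuous beam,
# via the flexibility method: release the centre support, then restore it.
P  = 3.0e6      # prestressing force, N (assumed)
e  = 0.25       # constant tendon eccentricity, m (assumed)
L  = 20.0       # each span, m (assumed)
EI = 8.0e9      # flexural rigidity, N*m^2 (assumed)

# The primary (P*e) moment is uniform, so the released beam of length 2L
# has constant curvature P*e/EI and a mid-length deflection kappa*(2L)^2/8.
delta_primary = (P * e / EI) * (2 * L) ** 2 / 8

# Mid-length deflection of the released beam under a unit central load:
# (2L)^3 / (48 EI).
delta_unit = (2 * L) ** 3 / (48 * EI)

# Redundant central reaction that restores zero deflection at the support.
R = delta_primary / delta_unit            # equals 3*P*e/L

# Secondary moment at the centre support: R acting at mid-span of 2L.
M_secondary = R * (2 * L) / 4             # equals 1.5*P*e

print(f"R = {R/1e3:.1f} kN, M_secondary = {M_secondary/1e3:.1f} kN*m")
print(f"check against 1.5*P*e = {1.5*P*e/1e3:.1f} kN*m")
```

The secondary moment here is one and a half times the primary moment P·e, a reminder that "secondary" does not mean small.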
Superposition can be used to progressively build up a set of notional loads whose bending moment diagram gives the desired concordant profile.

3.2 Temperature effects

Temperature variations affect all structures, but their effect on prestressed concrete beams can be more pronounced than on other structures. The temperature profile through the depth of a beam (Emerson 1973) can be split into three components for the purposes of calculation (Hambly 1991). The first causes a longitudinal expansion, which is normally released by the articulation of the structure; the second causes a curvature, which leads to deflection in all beams and to reactant moments in continuous beams; the third causes a self-equilibrating set of stresses across the cross-section.

The reactant moments can be calculated and allowed for, but it is the self-equilibrating stresses that cause the main problems for prestressed concrete beams. These beams normally have high thermal mass, which means that daily temperature variations do not penetrate to the core of the structure. The result is a very non-uniform temperature distribution across the depth, which in turn leads to significant self-equilibrating stresses. If the core of the structure is warm while the surface is cool, such as at night, quite large tensile stresses can develop on the top and bottom surfaces. However, they penetrate only a very short distance into the concrete, and the potential crack width is very small. It can be very expensive to overcome these tensile stresses by changing the section or the prestress.
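The three-component split described above can be computed directly from a temperature profile. The following is a minimal sketch, assuming a rectangular section and a made-up night-time profile (warm core, cool surfaces); the dimensions, modulus and profile shape are illustrative, not Emerson's measurements:

```python
import numpy as np

# Split a temperature profile T(y) through a rectangular section into a
# uniform strain, a curvature, and a self-equilibrating residual stress.
h, b  = 1.2, 0.5      # section depth and width, m (assumed)
E     = 35e9          # concrete modulus, Pa (assumed)
alpha = 10e-6         # coefficient of thermal expansion, 1/K (assumed)

y  = np.linspace(-h / 2, h / 2, 2001)   # measured from the centroid
dy = y[1] - y[0]
# Night-time profile: warm core, surfaces cooled within ~0.25 m skins.
T = 12.0 * (1.0 - np.exp(-(((h / 2) - np.abs(y)) / 0.25) ** 2))

A = b * h
I = b * h**3 / 12

# Released components: uniform strain and curvature (area-weighted).
eps0  = (alpha / A) * np.sum(T * b) * dy       # axial expansion
kappa = (alpha / I) * np.sum(T * b * y) * dy   # bending (≈ 0 here, symmetric)

# Residual self-equilibrating stress (tension positive): the plane-section
# strain minus the free thermal strain, times the modulus.
sigma = E * (eps0 + kappa * y - alpha * T)

print(f"uniform strain = {eps0:.2e}, curvature = {kappa:.2e} 1/m")
print(f"surface stress: top {sigma[-1]/1e6:+.2f} MPa, "
      f"bottom {sigma[0]/1e6:+.2f} MPa")
print(f"core stress: {sigma[sigma.size // 2]/1e6:+.2f} MPa")
```

With these numbers the cool surfaces carry roughly 2-3 MPa of tension, decaying to compression within the outer quarter of the depth, consistent with the observation that the stresses penetrate only a short distance and the potential crack widths are small.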

English Accounting Literature Original (with Chinese Translation)


The Optimization Method of Financial Statements Based on Accounting Management Theory

ABSTRACT

This paper develops an approach to enhance the reliability and usefulness of financial statements. International Financial Reporting Standards (IFRS) were fundamentally flawed by fair value accounting and asset-impairment accounting. According to legal theory and accounting theory, accounting data must have legal evidence as its source document. The conventional "mixed attribute" accounting system should be replaced by a "segregated" system in which historical cost and fair value are kept strictly apart in financial statements. The proposed optimizing method will significantly enhance the reliability and usefulness of financial statements.

I. INTRODUCTION

Based on the international accounting convergence approach, the Ministry of Finance issued the Enterprise Accounting Standards in 2006, taking the International Financial Reporting Standards (hereinafter referred to as "the International Standards") for reference. The Enterprise Accounting Standards implement fair value accounting and spread the idea that accounting should reflect market value objectively. The objective of the follow-up accounting reform is to establish accounting theory and methodology which not only draw on advanced international theory, but also accord with the needs of the construction of China's socialist market economy. On the basis of a thorough evaluation of the achievements and limitations of the International Standards, this paper argues for deepening the accounting reform while enhancing the stability of accounting regulations.

II. OPTIMIZATION OF THE FINANCIAL STATEMENTS SYSTEM: PARALLELED LISTING OF LEGAL FACTS AND FINANCIAL EXPECTATIONS

As an important management activity, accounting should make use of information systems based on classified statistics, and serve both micro-economic management and macro-economic regulation at the same time. Optimization of the financial statements system should try to take into account all aspects of the demands on financial statements, at both the macro and the micro level.

Why do companies need to prepare financial statements? Whose demands should be considered while preparing financial statements? Those questions are basic issues to consider in the optimization of financial statements. From the perspective of the public interest, reliability and legal evidence are required as qualitative characteristics, which is the origin of traditional historical cost accounting. From the perspective of private interests, securities investors and financial regulatory authorities hope that financial statements reflect changes in market prices in a timely way, recording "objective" market conditions. This is the origin of fair value accounting. Can one set of financial statements be compatible with these two different views and balance the public interest and private interests? To solve this problem, we design a new balance sheet and a new income statement.

From 1992 to 2006, during the accounting reform in China, many new ideas and new perspectives were introduced into China's accounting practice from the international accounting standards in a gradual manner. These ideas and perspectives enriched the understanding of financial statements in China. These achievements deserve a full assessment and should be fully affirmed.
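To make the proposal in Section II concrete, here is a minimal sketch of the paralleled presentation; the class shape, field names, line items and amounts are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of the "paralleled listing" idea: legal facts (historical cost,
# backed by source documents) are kept strictly apart from financial
# expectations (fair value assessed only at the balance-sheet date).
@dataclass
class BalanceSheetLine:
    item: str
    historical_cost: float        # legal fact, from source documents
    fair_value: Optional[float]   # expectation; None when not assessed

    def row(self) -> str:
        fv = "n/a" if self.fair_value is None else f"{self.fair_value:,.0f}"
        return f"{self.item:<20}{self.historical_cost:>14,.0f}{fv:>14}"

lines = [
    BalanceSheetLine("Land and buildings", 5_000_000, 7_400_000),
    BalanceSheetLine("Trading securities", 1_200_000, 950_000),
    BalanceSheetLine("Receivables", 800_000, None),
]

print(f"{'Item':<20}{'Cost (legal)':>14}{'Fair value':>14}")
for line in lines:
    print(line.row())
```

The point of the separation is that the first column can always be traced to source documents, while the second is an assessed expectation that carries no legal evidentiary weight.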
However, academia and standard-setters are also aware that the International Standards are still in the process of development. The purpose of proposing new formats of financial statements in this paper is to push the accounting reform to a deeper level on the basis of international convergence.

III. THE PRACTICABILITY OF IMPROVING THE FINANCIAL STATEMENTS SYSTEM

Can the financial statements maintain their stability? It is necessary to mobilize the initiative of both the supply side and the demand side at the same time. We should consider whether financial statements can meet the demands of macro-economic regulation and business administration, and whether they are acceptable to millions of accountants.

Accountants are responsible for preparing financial statements and auditors are responsible for auditing them. Both will benefit from the implementation of the new financial statements.

Firstly, for accountants, under the segregated design of historical cost accounting and fair value accounting, daily accounting practice is greatly simplified. The accounting process will no longer need asset impairment and fair value adjustments. Accounting books will no longer record impairment and appreciation of assets, since historical cost accounting is comprehensively implemented. Fair value information will be recorded on the basis of assessment only at the balance-sheet date, and only in the annual financial statements. Historical cost accounting is more likely to be recognized by the tax authorities, which saves the heavy workload of tax adjustment. Accountants will no longer need to calculate deferred income tax expense, and the after-tax profit in the solid-line table is acknowledged by the Company Law, which solves the problem of determining the profit available for distribution.

Accountants do not need to record in the accounting books the fair value information needed by securities investors; instead, they only need to list the fair value information at the balance-sheet date. In addition, because the data in the solid-line table have legal credibility, the legal risks of accountants can be well controlled.

Secondly, the arbitrariness of the accounting process will be reduced, and the auditors' review process will be greatly simplified. The independent auditors will not have to bear considerable legal risk for the dotted-line table they audit, because the risk of fair value information has been flagged as "not supported by legal evidence". Accountants and auditors can quickly adapt to this financial statements system without the need for training. In this way, they can save a lot of time and help companies improve management efficiency. Surveys show that the above design of financial statements is popular with accountants and auditors. Since the workloads of accounting and auditing are substantially reduced, the total expense of auditing and evaluation will not exceed the current level either.

In short, from the perspectives of both the supply side and the demand side, the improved financial statements are expected to enhance the usefulness of financial statements without increasing the burden on the supply side.

IV. CONCLUSIONS AND POLICY RECOMMENDATIONS

The current rule of mixed presentation of fair value data and historical cost data can be improved.
The core concept of fair value is to make financial statements reflect the fair value of assets and liabilities, so that we can subtract the fair value of liabilities from that of assets to obtain the net fair value. However, the current International Standards do not implement this concept; instead they partly transform historical cost accounting, which leads to the mixed use of impairment accounting and fair value accounting. China's academic accounting research has followed up step by step since the 1980s, and a mixed-attributes model has now been introduced into corporate financial statements.

By distinguishing legal facts from financial expectations, we can balance public and private interests and redesign the financial statements system with enhanced management efficiency and the implementation of higher-level laws as the main objectives. By presenting fair value and historical cost in one set of financial statements at the same time, the statements will not only meet the need to keep books according to domestic laws, but also meet the demands of financial regulatory authorities and securities investors.

We hope that practitioners and theorists will offer advice and suggestions on the problem of improving the financial statements, so as to build a financial statements system which not only meets domestic needs but also converges with the International Standards.

Translation: The Optimization Method of Financial Statements Based on Accounting Management Theory. Abstract: This paper provides a method to improve the reliability and usefulness of financial statements.

Warehouse Logistics: Foreign Literature Translation, English Original and Chinese Translation, 2023-2023


Original 1: The Current Trends in Warehouse Management and Logistics

Warehouse management is an essential component of any supply chain and plays a crucial role in the overall efficiency and effectiveness of logistics operations. With the rapid advancement of technology and changing customer demands, the field of warehouse management and logistics has seen several trends emerge in recent years.

One significant trend is the increasing adoption of automation and robotics in warehouse operations. Automated systems such as conveyor belts, robotic pickers, and driverless vehicles have revolutionized the way warehouses function. These technologies not only improve accuracy and speed but also reduce labor costs and increase safety.

Another trend is the implementation of real-time tracking and visibility systems. Through the use of RFID (radio-frequency identification) tags and GPS (global positioning system) technology, warehouse managers can monitor the movement of goods throughout the entire supply chain. This level of visibility enables better inventory management, reduces stockouts, and improves customer satisfaction.

Additionally, there is a growing focus on sustainability in warehouse management and logistics. Many companies are implementing environmentally friendly practices such as energy-efficient lighting, recycling programs, and alternative transportation methods. These initiatives not only contribute to reducing carbon emissions but also result in cost savings and an improved brand image.

Furthermore, artificial intelligence (AI) and machine learning have become integral parts of warehouse management. AI-powered systems can analyze large volumes of data to optimize inventory levels, forecast demand accurately, and improve operational efficiency. Machine learning algorithms can also identify patterns and anomalies, enabling proactive maintenance and minimizing downtime.

In conclusion, warehouse management and logistics are continuously evolving fields, driven by technological advancements and changing market demands. The trends discussed in this article highlight the importance of adopting innovative solutions to enhance efficiency, visibility, sustainability, and overall performance in warehouse operations.

Translation 1: The Current Trends in Warehouse Management and Logistics. Warehouse management is an essential component of any supply chain and plays a crucial role in the overall efficiency and effectiveness of logistics operations.
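The inventory-optimization claim in Original 1 can be made concrete with the standard reorder-point calculation that such AI-driven systems automate. A minimal sketch, assuming normally distributed daily demand; all figures are illustrative, and a real system would learn the demand statistics from data:

```python
import math

# Reorder point from a demand forecast, assuming normally distributed
# daily demand -- the kind of calculation AI-driven WMS tools automate.
mean_daily_demand = 120.0   # units/day (assumed forecast)
std_daily_demand  = 35.0    # units/day (assumed forecast error)
lead_time_days    = 6.0
z_service         = 1.645   # ~95% cycle service level

lead_time_demand = mean_daily_demand * lead_time_days
safety_stock     = z_service * std_daily_demand * math.sqrt(lead_time_days)
reorder_point    = lead_time_demand + safety_stock

print(f"lead-time demand : {lead_time_demand:.0f} units")
print(f"safety stock     : {safety_stock:.0f} units")
print(f"reorder point    : {reorder_point:.0f} units")
```

A forecasting model supplies the demand mean and error terms; the rest is arithmetic, which is why such calculations are easily embedded in warehouse management software.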

20: Sample Format for Foreign Literature Translation (Original and Translation)


North China Electric Power University, Science and Technology College
Graduation Design (Thesis) Appendix: Foreign Literature Translation
Student ID: 0819******** Name: Zong Pengcheng
Department: Mechanical Engineering and Automation Class: Mechanical 08K1
Supervisor: Zhang Chao
Original title: Development of a High-Performance Magnetic Gear
Date: (year) (month) (day)

Development of a High-Performance Magnetic Gear

1 Abstract: This paper presents the calculated and measured results of a high-performance permanent-magnet gear.

The magnetic gear analysed has a gear ratio of 5.5 and can deliver a torque of 27 Nm.

The analysis shows that, because its torsional spring constant is small, special attention must be paid to any system in which such a high-performance magnetic gear is installed.

The gear analysed has also been put to practical use in order to verify the predicted efficiency.

As measured, the torque of the magnetic gear when driven through the larger gear was only 16 Nm.

A systematic study of the efficiency losses of the magnetic gear also shows why the actual operating efficiency is only 81%.

A large part of the magnetic losses originates in the bearings; because of a mechanical fault, a back-up bearing was necessary in this case.

Without the small amount of magnetic leakage from the shaft, we estimate that an efficiency as high as 96% could be obtained.

Comparison with conventional mechanical gears shows that magnetic gears offer better efficiency and a larger torque per unit volume.

Finally, it can be concluded that the results reported here may help promote the development from conventional mechanical gears towards magnetic gears.

Keywords: finite element analysis (FEA), gearbox, high torque density, magnetic gear.

I. Introduction

Because permanent magnets can produce flux and force, many people have remained fascinated by them down the centuries.

During the revival of the past 20 years, it is precisely these advantages that have brought permanent magnets into wide practical use, including in cranes, loudspeakers and couplings, and above all in permanent-magnet motors.

This revival is most evident in the field of small machines, where the use of permanent magnets markedly improves efficiency and torque density.

One area in which permanent magnets have not received much attention is that of transmissions; that is to say, magnetic couplings are not widely used as gearing.

A magnetic coupling can essentially be regarded as a magnetic gear with a transmission ratio of 1:1.

Compared with standard electrical machines, which reach a torque of about 10 kN·m/m³, magnetic couplings fitted with high-energy permanent magnets have a very high torque per unit volume, in the range of roughly 300–400 kN·m/m³.
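The warning about the small torsional spring constant can be quantified. A magnetic gear transmits torque through a magnetic "spring", so the drivetrain forms a torsional oscillator. In the minimal sketch below, the pole count and load inertia are illustrative assumptions; only the 27 Nm rating comes from the abstract:

```python
import math

# With a sinusoidal torque-angle characteristic T = T_max * sin(p * theta),
# the small-signal torsional stiffness about zero load is k = p * T_max,
# and the coupled load behaves as a torsional oscillator.
T_max  = 27.0      # peak transmissible torque from the abstract, N*m
p      = 11        # assumed pole pairs on the low-speed rotor
J_load = 0.05      # assumed load-side inertia, kg*m^2

k = p * T_max                                # linearised stiffness, N*m/rad
f_n = math.sqrt(k / J_load) / (2 * math.pi)  # natural frequency, Hz

print(f"torsional stiffness k ~ {k:.0f} N*m/rad")
print(f"torsional natural frequency ~ {f_n:.1f} Hz")
# Torque ripple or load pulsation near f_n is resonantly amplified, and
# demand beyond T_max makes the gear slip poles rather than transmit more.
```

Compared with the effectively rigid mesh of a steel gear, this compliance is orders of magnitude softer, which is why systems built around such gears need the special attention the abstract calls for.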

Children's Education: Foreign Literature Translation


Children's Education: Foreign Literature Translation (this document contains the English original and the Chinese translation)

Original: The Role of Parents and Community in the Education of the Japanese Child

Heidi Knipprath

Abstract

In Japan, there has been an increased concern about family and community participation in the child's education. Traditionally, the role of parents and community in Japan has been one of support, and less one of active involvement in school learning. Since the government commenced education reforms in the last quarter of the 20th century, a more active role for parents and the community in education has been encouraged. These reforms have been inspired by the need to tackle various problems that had arisen, such as the perceived harmful elements of society's preoccupation with academic achievement and the problematic behavior of young people. In this paper, the following issues are examined: (1) education policy and reform measures with regard to parent and community involvement in the child's education; (2) the state of parent and community involvement at the eve of the 20th century.

Key Words: active involvement, community, education reform, Japan, parents, partnership, schooling, support

Introduction: The Discourse on the Achievement Gap

When western observers are tempted to explain why Japanese students attain high achievement scores in international comparative assessment studies, they are likely to address the role of parents, and in particular of the mother, in the education of the child. "Education mom" is a phrase often brought forth in the discourse on Japanese education to depict the Japanese mother as a pushy and demanding home-bound tutor, intensely involved in the child's education due to severe academic competition. Although this image of the Japanese mother is a stereotype spread by the popular mass media in Japan and abroad, and the extent to which Japanese mothers are absorbed in their children is exaggerated (Benjamin, 1997, p. 16; Cummings, 1989, p. 297; Stevenson & Stigler, 1992, p. 82), Stevenson and Stigler (1992) argue that Japanese parents do play an indispensable role in the academic performance of their children. During their longitudinal and cross-national research project, they and their collaborators observed that Japanese first and fifth graders persistently achieved higher on math tests than American children. Besides citing teachers' teaching style, cultural beliefs, and the organization of schooling, Stevenson and Stigler (1992) mention the parents' role in supporting the learning conditions of the child to explain differences in achievement between elementary school students of the United States and students of Japan. In Japan, children receive more help at home with schoolwork (Chen & Stevenson, 1989; Stevenson & Stigler, 1992), and tend to perform fewer household chores than children in the USA (Stevenson et al., 1990; Stevenson & Stigler, 1992). More Japanese parents than American parents provide space and a personal desk and purchase workbooks for their children to supplement their regular textbooks at school (Stevenson et al., 1990; Stevenson & Stigler, 1992). Additionally, Stevenson and Stigler (1992) observed that American mothers are much more readily satisfied with their child's performance than Asian parents are, have less realistic assessments of their child's academic performance, intelligence, and other personality characteristics, and subsequently have lower standards.
Based on their observation of Japanese, Chinese and American parents, children and teachers, Stevenson and Stigler (1992) conclude that American families can increase the academic achievement of their children by strengthening the link between school and home, creating a physical and psychological environment that is conducive to study, and by making realistic assessments and raising standards. Also Benjamin (1997), who performed "day-to-day ethnography" to find out how differences in practice between American and Japanese schools affect differences in outcomes, discusses the relationship between home and school and how the Japanese mother is involved in the academic performance standards reached by Japanese children. She argues that Japanese parents are willing to pay noticeable amounts of money for tutoring in commercial establishments to improve the child's performance on entrance examinations, to assist in homework assignments, to facilitate and support their children's participation in school requirements and activities, and to check the notebooks of teachers on the child's progress and other school-related messages from the teacher. These booklets are read and written daily by teachers and parents. Teachers regularly provide advice and reminders to parents, and write about homework assignments of the child, special activities and the child's behavior (Benjamin, 1997, p. 119, pp. 193–195). Newsletters, parents' visits to school, school reports, home visits by the teacher and observation days sustain communication in later years at school. According to Benjamin (1997), schools also inform parents about how to coach their children on proper behavior at home. Shimahara (1986), Hess and Azuma (1991), Lynn (1988) and White (1987) also try to explain national differences in educational achievement. They argue that Japanese mothers succeed in internalizing into their children academic expectations and adaptive dispositions that facilitate an effective teaching strategy, and in socializing the child into a successful person devoted to hard work.

Support, Support and Support

Epstein (1995) constructed a framework of six types of involvement of parents and the community in the school: (1) parenting: schools help all families establish home environments to support children as students; (2) communicating: effective forms of school-to-home and home-to-school communications about school programs and children's progress; (3) volunteering: schools recruit and organize parents' help and support; (4) learning at home: schools provide information and ideas to families about how to help students at home with homework and other curriculum-related activities, decisions and planning; (5) decision making: schools include parents in school decisions, develop parent leaders and representatives; and (6) collaborating with the community: schools integrate resources and services from the community to strengthen school programs, family practices, and student learning and development.
All types of involvement mentioned in studies of Japanese education and in the discourse on the roots of the achievement gap belong to one of Epstein's first four types of involvement: the creation of a conducive learning environment (type 4), the expression of high expectations (type 4), assistance with homework (type 4), teachers' notebooks (type 2), mothers' willingness to facilitate school activities (type 3), teachers' advice about the child's behavior (type 1), observation days on which parents observe their child in the classroom (type 2), and home visits by the teachers (type 1). Thus, when one carefully reads Stevenson and Stigler's, Benjamin's and others' writings about Japanese education and Japanese students' high achievement level, one notices that the parents' role in the child's school learning is in particular one of support, expected and solicited by the school. The fifth type (decision making) as well as the sixth type (community involvement) is hardly ever mentioned in the discourse on the achievement gap.

In 1997, the OECD's Center for Educational Research and Innovation conducted a cross-national study to report the actual state of parents as partners in schooling in nine countries, including Japan. In its report, the OECD concludes that the involvement of Japanese parents in their schools is strictly limited, and that the basis on which it takes place tends to be controlled by the teacher (OECD, 1997, p. 167). According to the OECD (1997), many countries are currently adopting policies to involve families closely in the education of their children because (1) governments are decentralizing their administrations; (2) parents want to be increasingly involved; and (3) parental involvement is said to be associated with higher achievement in school (p. 9). However, parents in Japan, where students already score highly on international achievement tests, are hardly involved in governance at the national and local level, and communication between school and family tends to be one-way (Benjamin, 1997; Fujita, 1989; OECD, 1997). Parent-teacher associations (PTA, fubo to kyoshi no kai) are likewise primarily presumed to be supportive of school learning and not to participate in school governance (cf. OECD, 2001, p. 121). On the directions of the occupying forces after the Second World War, PTAs were established in Japanese schools and, together with the elective education boards, were intended to provide parents and the community an opportunity to participate actively in school learning (Hiroki, 1996, p. 88; Nakata, 1996, p. 139). The establishment of PTAs and elective education boards are only two examples of the numerous reform measures the occupying forces took to decentralize the formal education system and to expand educational opportunities. But after they left the country, the Japanese government was quick to undo liberal education reform measures and reduced the community and parental role in education. The stipulation that PTAs should not interfere with personnel and other administrative tasks of schools, and the replacement of elective education boards by appointed ones, led local education boards to believe that parents should not get involved with school education at all (Hiroki, 1996, p. 88). Teachers were regarded as the experts and parents as the laymen in education (Hiroki, 1996, p. 89).

In sum, studies of Japanese education point in one direction: parental involvement means being supportive, and community involvement is hardly an issue at all.
But what is the actual state of parent and community involvement in Japanese schools? Are these descriptions supported by quantitative data?

Statistics on Parental and Community Involvement

To date, statistics on parental and community involvement are rare. However, the school questionnaire of the TIMSS-R study did include some interesting questions that give us a clue about the degree of involvement relative to the degree of involvement in other industrialized countries. The TIMSS-R study measured science and math achievement of eighth graders in 38 countries. Additionally, a survey was held among principals, teachers and students. Principals answered questions relating to school management, school characteristics, and involvement. For convenience, the results of Japan are compared only with the results of those countries with a GNP of 20,650 US dollars or higher according to the World Bank's indicators in 1999.

Unfortunately, only very few items on community involvement were measured. According to the data, Japanese principals spend on average almost eight hours per month on representing the school in the community (Table I). Australian and Belgian principals spend slightly more hours, and Dutch and Singaporean principals slightly fewer, on representing the school and sustaining communication with the community. But when it comes to participation from the community, Japanese schools report a near absence of involvement (Table II). Religious groups and the business community have hardly any influence on the curriculum of the school. In contrast, half of the principals report that parents do have an impact in Japan. On the one hand, this seems a surprising result when one is reminded of the centralized control of the Ministry of Education. Moreover, this control and the resulting uniform curriculum are often cited as a potential explanation of the high achievement levels in Japan. On the other hand, this extent of parental impact on the curriculum might be an indicator of the pressure parents put on schools to prepare their children appropriately for the entrance exams of senior high schools.

In Table III, data on the extent of other types of parental involvement in Japan and other countries are given. In Japan, parental involvement is most common in the case of schools recruiting volunteers for school projects and programs, and schools expecting parents to make sure that the child completes his or her homework. The former, together with patrolling the grounds of the school to monitor student behavior, is most likely materialized through the PTA. The kinds and degree of activities of PTAs vary according to the school, but the activities of the most active and well-organized PTAs of the 395 elementary schools investigated by Sumida (2001) range from facilitating sport and recreation for children, teaching greetings, encouraging safe traffic, patrolling the neighborhood, and publishing the PTA newspaper to cleaning the school grounds (pp. 289–350). Surprisingly, fewer Japanese principals expect parents to check their child's completion of homework than principals of other countries. In the discourse on the achievement gap, western observers report that parents and families in Japan provide more assistance with their children's homework than parents and families outside Japan.
This apparent contradiction might be the result of the fact that these data are measured at the lower secondary level, while investigations of the roots of Japanese students' high achievement levels focus on childhood education and learning at primary schools. In fact, junior high school students are given less homework in Japan than their peers in other countries, and less homework than elementary school students in Japan. Instead, Japanese junior high school students spend more time at cram schools. Finally, Japanese principals also report very low degrees of expectation toward parents with regard to serving as a teacher aide in the classroom, raising funds for the school, assisting teachers on trips, and serving on committees which select school personnel and review school finances. The latter two items measure participation in school governance.

In other words, the data support by and large the descriptions of parental and community involvement in Japanese schooling. Parents are requested to be supportive, but not to enter the territory of the teacher nor to be actively involved in governance. Moreover, whilst Japanese principals spend a few hours per month on communication toward the community, involvement from the community with regard to the curriculum is nearly absent, reflecting the near absence of accounts of community involvement in studies on Japanese education. However, the reader needs to be reminded that these data are measured at the lower secondary educational level, when participation by parents in schooling decreases (Epstein, 1995; OECD, 1997; Osakafu Kyoiku Iinkai, unpublished report). Additionally, the question remains what stakeholders think of the current state of involvement in schooling. Some interesting local data provided by the Osaka Prefecture Education Board shed light on their opinion.

References

Benjamin, G. R. (1997). Japanese lessons. New York: New York University Press.

Cave, P. (2003). Educational reform in Japan in the 1990s: 'Individuality' and other uncertainties. Comparative Education Review, 37(2), 173–191.

Chen, C., & Stevenson, H. W. (1989). Homework: A cross-cultural examination. Child Development, 60(3), 551–561.

Chuo Kyoiku Shingikai (1996). 21 seiki o tenbo shita wagakuni no kyoiku no arikata ni tsuite [First report on the model for Japanese education in the perspective of the 21st century].

Cummings, W. K. (1989). The American perception of Japanese education. Comparative Education, 25(3), 293–302.

Epstein, J. L. (1995). School/family/community partnerships. Phi Delta Kappan, 701–712.

Fujita, M. (1989). It's all mother's fault: childcare and the socialization of working mothers in Japan. The Journal of Japanese Studies, 15(1), 67–91.

Harnish, D. L. (1994). Supplemental education in Japan: juku schooling and its implication. Journal of Curriculum Studies, 26(3), 323–334.

Hess, R. D., & Azuma, H. (1991). Cultural support for schooling, contrasts between Japan and the United States. Educational Researcher, 20(9), 2–8, 12.

Hiroki, K. (1996). Kyoiku ni okeru kodomo, oya, kyoshi, kocho no kenri, gimukankei [Rights and duties of principals, teachers, parents and children in education]. In T. Horio & T. Urano (Eds.), Soshiki toshite no gakko [School as an organization] (pp. 79–100). Tokyo: Kashiwa Shobo.

Ikeda, H. (2000). Chiiki no kyoiku kaikaku [Local education reform]. Osaka: Kaiho Shuppansha.

Kudomi, Y., Hosogane, T., & Inui, A. (1999). The participation of students, parents and the community in promoting school autonomy: case studies in Japan.
International Studies in Sociology of Education, 9(3), 275–291.

Lynn, R. (1988). Educational achievement in Japan. London: MacMillan Press.

Martin, M. O., Mullis, I. V. S., Gonzalez, E. J., Gregory, K. D., Smith, T. A., Chrostowski, S. J., Garden, R. A., & O'Connor, K. M. (2000). TIMSS 1999 international science report, findings from IEA's repeat of the Third International Mathematics and Science Study at the eighth grade. Chestnut Hill: The International Study Center.

Mullis, I. V. S., Martin, M. O., Gonzalez, E. J., Gregory, K. D., Garden, R. A., O'Connor, K. M., Chrostowski, S. J., & Smith, T. A. (2000). TIMSS 1999 international mathematics report, findings from IEA's repeat of the Third International Mathematics and Science Study at the eighth grade. Chestnut Hill: The International Study Center.

Ministry of Education, Science, Sports and Culture (2000). Japanese government policies in education, science, sports and culture 1999: educational reform in progress. Tokyo: Printing Bureau, Ministry of Finance.

Monbusho (Ed.) (1999). Heisei 11 nendo, wagakuni no bunkyoshisaku: Susumu kaikaku [Japanese government policies in education, science, sports and culture 1999: educational reform in progress]. Tokyo: Monbusho.

Educational Research for Policy and Practice (2004) 3: 95–107. © Springer 2005. DOI 10.1007/s10671-004-5557-6

Heidi Knipprath
Department of Methodology
School of Business, Public Administration and Technology
University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands

Translation: The Role of Parents and Community in the Education of the Japanese Child. Abstract: In Japan, people are paying increasing attention to family and community participation in children's education.


The New Public Management Situation

Owen E. Hughes
Monash University, Australia

There is no doubt that many countries around the world, developed and developing alike, began a sustained movement of public sector management reform in the late 1980s and early 1990s. In many respects that movement still shapes how governments are organised and managed. Views of these reforms differ sharply. Critics, especially in Britain and the United States, argue that the new model suffers from a variety of problems, and that there is no internationally prevailing model of public management reform that deserves to be called a paradigm. Criticism has come from almost every quarter, and most of the academic criticism is rhetorical: different schools of thought debate the details in journal articles that remain abstract and removed from practice. Meanwhile, in the practice of public management, reform and change continue to be implemented. As I have argued elsewhere, in most countries the traditional model of public administration has been replaced by a model of public management. Public sector reform responded to several interrelated realities, including: the inefficiency of the public sector in performing its functions and providing public services; changes in economic theory; the impact of related changes in the private sector, above all globalisation as an economic force; and changes in technology that made decentralisation, and better control on a global scale, possible. The development of administrative management can be divided into three distinct stages: the pre-traditional stage, the stage of traditional public administration, and the stage of public management reform. Each stage has its own management model. The transition from one stage to the next is not easy, and the transition from traditional public administration to public management is not yet complete. But that is only a matter of time, because the theoretical foundations of the new model are very strong. The "new public management movement", despite its name, is more than a flourishing debate: it expresses the management model actually adopted in most developed countries. The traditional administrative model was a great reform for its own age, but that age has passed.

A traditional pattern

Clearly, some form of administrative management existed long before the theory of bureaucracy was formulated in the late nineteenth century. Public administration has a long history; the concept is as old as government and the rise of civilisation itself. As Gladden has noted, a model of administration has existed for as long as government: first comes the founder or leader, then the organisers of permanence, the administrators. Administration, the carrying on of business, is present in all social activities, and although it is not among the glamorous factors, it is of vital importance to sustained social development. A recognised administrative system already existed in ancient Egypt, with jurisdiction over matters ranging from the irrigation brought by the annual Nile flood to the building of the pyramids. China adopted, in the Han dynasty, the Confucian precept that government should be staffed by people chosen not by birth but by character and ability, and that the government's main goal is to seek the welfare of the people. In Europe, the various empires, Greek, Roman, Holy Roman and Spanish, were administrative empires, ruled from the centre through rules and procedures.
Weber's thought, "modern" medieval countries develop simultaneously with "bureaucratic management structure development". Although these countries in different ways, but they have common features, it can be called before modern. Namely, the administrative system of early essence is the personification of, or the establishment in Max Weber's "nepotism" basis, i.e. to loyal to the king or minister certain human foundation, not is personified, With allegiance to the organization or individual basis rather than for the foundation. Although there are such a viewpoint that administration itself not only praise from traditional mode, the characteristic of early but often leads to seek personal interests corruption or abuse of power. In the early administrative system, we now feel very strange approach has the functions of government administration is generally behavior. All those who walk official tend to rely on friends or relatives for work or buy officer, which means the money to buy the first officer or tax officials, and then out to the customer to money, which is the first to buy officer recovery investment cost, and can make a fortune. America in the 19th century FenFei system of "political parties" means in the ruling changed at the same time, the government of all administrative position is changed. Modern bureaucracy is before "personal,traditional, diffusion and similar and special", and according to the argument, modern Weber bureaucracy is "impersonal, rational, concrete, achievement orientation and common". Personalized government is often inefficient: nepotism means incompetent not capable person was arranged to positions of leadership, FenFei political corruption, in addition to making often still exist serious low efficiency. The enormous success of traditional administrative pattern that early practice looks strange. Specialization and not politicized administrative in our opinion is so difficult to imagine that trace, there exist other system. Western administrative system even simple selection of officials to pass the exam, until 1854, Britain and north G..M. Trevelyan report after Northcote - began to establish in China, although the system has long passage.The traditional public administrative patternIn the late 19th century, additionally one kind of pattern on the world popular, this is the so-called traditional administrative pattern. Its main theoretical basis from several countries, namely, the American scholars and Germany Woodrow Wilson of Max Weber's, people put their associated with bureaucracy model, Frederick Tyler systematically elaborated the scientific management theory, the theory of the private sector from America, for public administration method was provided. And the other theorists, Taylor without focusing on public sector, but his theory was influential in this field. The three traditional public administration mode is theorist of main effect. In other countries, plus G..M. Trevelyan and North America, the state administration of administrative system, especially the Wilson has produced important influence. In the 19th century, the north G..M. Trevelyan and put forward through the examination and character, and appointed officials put forward bias and administrative neutral point of view. The traditional administrative pattern has the following features:1. The bureaucracy. The government shall, according to the principle of bureaucratic rank and organization. The German sociologist Max Weber bureaucracy system of a classic, and analysis. 
Although bureaucracy is found in business organisations and other bodies, it is in the public sector that it has been applied best and longest.

2. The best way of working and the procedures to follow are set out in full in detailed manuals for administrators to follow. Strict adherence to these principles provides the one best way of running the organisation.

3. Bureaucratic delivery of services. Once government takes on a policy area, it becomes the direct provider of the relevant public goods and services, through the bureaucracy.

4. On the relation between politics and administration, political and administrative leaders generally held that the two can be separated. Administration is the carrying out of instructions; any matter of policy or strategy is decided by the political leadership, which safeguards the democratic system.

5. The public interest is assumed to be the sole motivation of the individual civil servant; service to the public is selfless.

6. A professional bureaucracy. Public administration is viewed as a special kind of activity, and civil servants are therefore required to be anonymous and neutral, recruited on merit, employed for life, and able to serve any political leadership equally.

7. The administrative task is to carry out written instructions; responsibility for their content is not assumed personally but rests with the organisation.

Comparison with the early administrative pattern brings out the main advantages of the Weberian system and the differences between the two. The most important difference between the Weberian model and all the models that preceded it is this: a rule-based, impersonal system replaced the personalised system of administration. The organisation and its rules are more important than any of the people in it. The bureaucracy must be impersonal in its operation and in how it responds to clients. As Weber demonstrated, modern office management "is bound to rules" that reach deep into its workings. Modern public administration rests on law: the authority to command in certain matters is conferred as legitimate public authority. This does not grant an agency the right to regulate particular cases by personal instruction; it covers matters only in the abstract. The opposite is the regulation of all affairs through personal privilege and the granting of favours, which completely dominates patrimonial systems, at least in so far as tradition is not infringed. The distinction is very important. Early administration was based on personal relationships, loyalty to a relative, a patron, a leader or a party, rather than on the system. Early administration could also be politically sensitive, since the staff of the administrative organs were appointees, the instruments of politicians or of the ruling class. But it was often arbitrary as well, and arbitrary administration can be unjust, especially for those who cannot, or will not, join in the personal and political games. One of the basic aims of Weber's impersonal system was to eliminate arbitrariness completely, at least in the ideal case. Where files exist, where precedent is followed and where there is a legal basis, the same decision will always be made in the same circumstances. This is not only more efficient; citizens also know where they stand with the bureaucratic hierarchy.

Other differences are associated with this one. On a basis of rules and impersonality, a strict hierarchy naturally formed, within which the system of personal ranks and its provisions remained in place.
Although Weber emphasised the system as a whole, he also paid attention to the organisation of the bureaucracy and to the tenure of the individuals within it.

The traditional administrative model was a great success, and it was widely adopted by governments around the world. In theory and in practice alike it showed its advantages. It was more efficient than the corruption-ridden systems that preceded it, and the idea of a professional civil service of individuals was a great advance on amateur service. However, the model's problems have also been exposed, to the point where it can be said to be outdated.

The theoretical pillars of traditional public administration have become difficult to defend. The theory of political control has problems. If administration means following instructions, a well-ordered method of transmitting and receiving them is required, with a clear division between those who issue instructions and those who implement them. But this is not the reality, and it became ever less attainable as the scale of the public service expanded. The other theoretical pillar of the traditional model, the theory of bureaucracy, is no longer considered a particularly effective form of organisation. Formal bureaucracy may have its advantages, but it is thought to train routineers rather than innovators, to encourage administrators to avoid risk rather than take it, and to encourage them to waste scarce resources rather than use them effectively. Weber regarded bureaucracy as an "ideal type", but the ideal type has become inert, hostile to initiative, and a source of the inefficiency and mediocrity believed to be the characteristic disease of the public sector. It is criticised on other grounds as well. Indeed, the word "bureaucracy" today is more likely to be used as a synonym for inefficiency.

The new public management model

In the 1980s the public sector responded to the defects of the traditional administrative pattern with a new managerial approach. The new approach alleviates some of the problems of the traditional model, and it has meant significant change in the way the public sector operates. It has gone by many names: "managerialism", "new public administration", "market-based public administration", the "post-bureaucratic model", or "entrepreneurial government". By the late 1990s people tended to settle on the concept of "new public management".

Whatever name is used, there is still a consensus about the real changes that have taken place in public sector management. First, whatever it is called, the model represents a significant change from traditional public administration, with much more attention paid to the achievement of results and to the personal responsibility of managers. Second, there is an explicit intention to move away from classical bureaucracy, so as to make organisations, personnel, and terms and conditions of employment more flexible. Third, objectives for organisations and for individuals are set out clearly, so that the completion of tasks can be measured through performance indicators; programmes, too, can be assessed more systematically than ever before, and it can be determined more rigorously whether government programmes achieve their objectives. Fourth, senior executives are more likely to be politically committed to the government of the day than apolitical or neutral. Fifth, government functions are more likely to be tested by the market, with purchasers of public services distinguished from providers, "steering" distinguished from "rowing". Government intervention need not always mean government provision through bureaucratic means. Sixth, there has been a trend to reduce the functions of government through privatisation and other market-type means such as market testing and contracting out. In some cases this change has been fundamental.
Once changes as important as these have occurred, everything connected with them changes too, and continuity between the successive steps of reform becomes necessary.

Holmes and Shand provide a useful generalisation of the characteristics. They regard the new public management paradigm, understood as good managerial practice, as having the following features: (1) a more strategic or structured approach to decision-making, centred on efficiency, quality and service; (2) the replacement of highly centralised hierarchical structures by decentralised management environments, in which decisions on resource allocation and service delivery are made closer to the point of supply, and more information can be obtained from clients and other interest groups; (3) flexibility to explore alternatives to direct public provision that may deliver policy outcomes at lower cost; (4) a focus on responsibility and authority as the key links in improving performance, including an emphasis on explicit performance-contracting mechanisms; (5) the creation of competitive environments within and between public sector organisations; (6) the strengthening of strategic capacity at the centre, so that government can respond to external change and multiple interests quickly, flexibly and at low cost; (7) greater transparency and accountability through requirements to report on results and on comprehensive costs; and (8) general service budgeting and management systems that support and encourage these changes.

The new public management also recognises that there is no one best way of achieving results. Managers are given responsibility for achieving results without being told how to achieve them. Making decisions is part of the manager's duties; if the objectives are not achieved, the manager must take responsibility.

Conclusion

Government management over the past 150 years has passed through three models. The first was the personalised, pre-modern administrative model; as its defects became increasingly exposed, it was replaced, in the name of efficiency, by the second model, traditional bureaucracy. Similarly, when the traditional administrative model ran into problems, the third model, the new public management, turned from government toward the market for alternatives. Since the 1980s the market has been dominant, as bureaucracy was dominant from the 1920s to the 1960s. In any kind of government, market and bureaucratic systems coexist; one form is dominant at one stage and another form at another. The new public management marks a period in which bureaucracy is steadily weakened and the market becomes dominant in the field of public administration.
In the new public management mode, another a kind of new mode, but certainly not returned to the traditional administrative pattern.。
