Foreign-Language Originals

Foreign-language original for translation: science 2

Russian Geology and Geophysics 51 (2010) 1091–1101

The stages and duration of formation of gold mineralization at copper-skarn deposits (Altai–Sayan folded area)

I.V. Gaskov *, A.S. Borisenko, V.V. Babich, E.A. Naumov

V.S. Sobolev Institute of Geology and Mineralogy, Siberian Branch of the Russian Academy of Sciences, prosp. Akad. Koptyuga 3, Novosibirsk, 630090, Russia

Received 20 March 2009; accepted 16 November 2009
* Corresponding author. E-mail address: gaskov@uiggm.nsc.ru (I.V. Gaskov)

Abstract

Gold mineralization at copper-skarn deposits (Tardanskoe, Murzinskoe, Sinyukhinskoe, Choiskoe) in the Altai–Sayan folded area is related to different hydrothermal-metasomatic formations. It was produced at 400–150 °C in several stages spanning 5–6 Myr, which determined the diversity of its mineral assemblages. Gold mineralization associated with magnetite bodies is spatially correlated with magnesian and calcareous skarns, whereas gold mineralization in crushing zones and along fault sutures in moderate- and low-temperature hydrothermal-metasomatic rocks (propylites, beresites, serpentinites, and argillizites) is of postskarn formation. Different stages were manifested with different intensities at gold deposits. For example, the Sinyukhinskoe deposit abounds in early high-temperature mineral assemblages; the Choiskoe deposit, in low-temperature ones; and the Tardanskoe and Murzinskoe deposits are rich in both early and late gold minerals. Formation of commercial gold mineralization at different copper-skarn deposits is due to the combination of gold mineralization produced at different stages as a result of formation of intricate igneous complexes (Tannu-Ola, Ust'-Belaya, and Yugala) composed of differentiated rocks from gabbros to granites.

© 2010, V.S. Sobolev IGM, Siberian Branch of the RAS. Published by Elsevier B.V. All rights reserved.

Keywords: gold mineralization; skarns; copper-skarn deposits; hydrothermal-metasomatic formations

Introduction

Recent data on the isotope geology and geochronology of rocks and ores and geological data on the ore genesis gaps proved that ore deposits formed over a much longer time than was assumed earlier (Rundkvist, 1997). This is also true for commercial gold mineralization at many Cu-skarn deposits in the Altai–Sayan folded area (ASFA). Gold-containing Cu-skarn deposits are widespread in many ore districts of the ASFA: Gorny Altai (Sinyukhinskoe, Murzinskoe, Choiskoe), Kuznetsk Alatau (Natal'evskoe, Fedorovskoe), Gornaya Shoria (Maisko-Lebedskoe), and Tuva (Tardanskoe, Khopto). Most of them are commercial deposits (Fig. 1).

Skarn formation processes at these deposits were related to the Early and Middle Paleozoic granitoid magmatism in the Tannu-Ola (eastern Tuva), Yugala (Sinyukha, northeastern Altai), and Ust'-Belaya (northwestern Altai) intrusive complexes (Gusev, 2007; Shokalsky et al., 2000). Formation of commercial gold mineralization was a longer and more intricate process (Gaskov, 2008). In most of these deposits, gold mineralization is the product of a multistage ore process characterized by different mineral compositions and spatial occurrences. Almost all these deposits bear gold mineralization spatially and genetically related to skarns and aposkarns in assemblage with magnetite and sulfides (Korobeinikov and Matsyushevskii, 1976; Korobeinikov and Zotov, 2006; Korobeinikov et al., 1987; Vakhrushev, 1972) and gold mineralization isolated from skarns and represented by sulfide-containing (pyrite, chalcopyrite, bornite, chalcocite) hydrothermal products of moderate-temperature assemblage in crushing zones (Shcherbakov, 1974).
Often, the deposits also bear an epithermal gold-containing assemblage with low-temperature sulfides, tellurides, and selenides, usually developed at the final stage of mineral formation in rocks of different compositions, including sedimentary, igneous, and skarn (Gaskov, 2008; Gaskov et al., 2005). The recently obtained ages of ore formation products and igneous rocks (Gaskov, 2008; Rudnev et al., 2004, 2006; Shokalsky et al., 2000) provide a new concept of the sequence of ore formation, its duration, and its relation with multiphase magmatism.

Let us dwell on the specific features of gold mineralization at particular deposits.

Gold mineralization at Cu-skarn deposits

The Tardanskoe deposit is localized in the zone of the Kaa-Khem deep fault, in the exocontact part of the Kopto-Baisyut gabbro-diorite-plagiogranite massif (Fig. 2) (Korobeinikov and Zotov, 2006; Korobeinikov et al., 1987). At the massif contact, Lower Cambrian volcanogenic-carbonate deposits are transformed into magnesian and calcareous skarns described in detail earlier (Korobeinikov, 1999; Korobeinikov and Matsyushevskii, 1976; Korobeinikov et al., 1997). The skarn bodies are spatially close to aposkarn metasomatites bearing actinolite, tremolite, epidote, serpentine, chlorite, talc, quartz, carbonate, magnetite, and hematite.

Gold mineralization at the deposit is of two types: (1) in skarn-magnetite rocks and (2) in metasomatites of linear crushing zones. These types have specific mineralogical and geochemical features.

Gold mineralization in skarn-magnetite ores is widespread at the deposit. It is described elsewhere (Korobeinikov and Matsyushevskii, 1976; Korobeinikov and Zotov, 2006; Korobeinikov et al., 1987; Kudryavtseva, 1969). Gold is spatially related to areas of sulfide mineralization, and its contents are in direct correlation with the amount of sulfide minerals. Gold-sulfide mineralization is extremely unevenly distributed and is localized at the sites of magnetite ores that underwent cataclasis as well as in magnetite microcracks and interstices. The total amount of sulfides (pyrite, chalcopyrite, bornite, and scarcer sphalerite, pyrrhotite, and arsenopyrite) is 1–3%. Gold occurs as fine thin (0.3–0.01 mm) native segregations. This is mainly high-fineness gold (820–990) (Fig. 3, a) with impurities of silver (up to 13.6%) and copper (up to 5.07%).
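For reference, gold fineness as used throughout the paper is the standard measure of native-gold purity in parts per thousand by mass; the formula below is a reminder added here, not an equation from the paper itself:

$$\mathrm{fineness} = 1000 \times \frac{w_{\mathrm{Au}}}{w_{\mathrm{Au}} + w_{\mathrm{Ag}} + w_{\mathrm{Cu}} + w_{\mathrm{Hg}} + \dots}$$

where the $w_i$ are the mass fractions of the metals in the grain. A grain carrying 13.6% Ag and no other impurities, for example, would have a fineness of about 864.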
According to Korobeinikov (1999) and Korobeinikov and Matsyushevskii (1976), the temperatures of formation of magnetite ores were 430–550 °C, whereas the gold-sulfide assemblage and the hosting metasomatites (actinolite, tremolite, serpentine, talc) were produced at 250–320 °C (Gaskov et al., 2005; Vakhrushev, 1972).

Gold mineralization in crushing zones is localized in steeply dipping linear tectonic structures of NW, NE, and NS strikes (Fig. 2), which develop after different rocks, including volcanosedimentary, igneous, and skarn ones. These zones reach several hundred meters in length and a few tens of meters in width. The petrographic composition of these zones is diverse and depends mainly on the composition of the initial rocks that later underwent transformation. The rocks are metasomatic, close in composition to propylites, listwaenites, talc-containing and sericite-quartz metasomatites, and beresite-like rocks. Almost each type of hydrothermal-metasomatic rock is intimately associated with ore minerals. Though the total volume of these minerals does not exceed 3–5%, they are extremely diverse in composition and extremely unevenly distributed. Along with sulfide minerals typical of Cu-skarn deposits (chalcopyrite, pyrite, bornite, chalcocite, digenite, sphalerite, galena), the mineralized zones of the deposit abound in tellurides, namely hessite (Ag2Te), tellurobismuthite (Bi2Te3), and tetradymite (Bi2Te2S), and in low-temperature Co and Ni sulfides and sulfoarsenides (Table 1). The latter have a variable composition and often consist of intermediate phases of continuous mineral series, e.g., alloclasite (CoAsS)–arsenopyrite (FeAsS) or siegenite (CoNi2S4)–violarite (FeNi2S4).

Gold occurs mainly as native fine thin (0.01–0.5 mm) disseminations in rock microcracks and as inclusions in pyrite, chalcopyrite, and bornite. The gold fineness varies over a broad range of values, from 440 to 820 (Fig. 3, b). The lowest-fineness gold segregations are compositionally similar to electrum and have high contents of Ag (up to 54.78%) and Hg impurity (up to 3.65%).

On the flanks of mineralized crushing zones, there is sometimes gold mineralization in low-temperature argillitized rocks of chlorite-kaolinite-carbonate-hydromica composition. This gold is of low fineness (no more than 600). The main impurities are Ag (20–66%) and Hg (up to 5.47%). The formation temperatures of sulfide-telluride assemblages and gold mineralization in metasomatites and argillitized rocks are within 200–75 °C.

Fig. 1. Schematic occurrence of gold-bearing Cu-skarn deposits in the Altai–Sayan folded area: 1, Murzinskoe; 2, Sinyukhinskoe; 3, Choiskoe; 4, Maisko-Lebedskoe; 5, Fedorovskoe; 6, Natal'evskoe; 7, Tardanskoe; 8, Kopto.

The Murzinskoe deposit is localized at the contact of a small stock-like granodiorite body of the Ust'-Belaya gabbro-diorite complex (Fig. 4). In the exocontact zone, calcareous skarns composed of garnet, pyroxene, wollastonite, and magnetite develop after the calcareous sandstones of the Murzinka Formation (D1-2). In local zones, there are aposkarn metasomatic rocks consisting of quartz, epidote, calcite, chlorite, actinolite, and, more seldom, tourmaline, apatite, and rhodonite.

Fig. 2. Schematic geologic structure of the Tardanskoe deposit (compiled after the data of K.M. Kil'chichakov and L.V. Kopylova and our new data). 1–4, Lower Paleozoic deposits: 1, andesitic porphyrites and tuffs with siltstone and sandstone interbeds in the lower part of the Tumat-Taiga Formation (Є1tm1); 2, quartz porphyrites with interbeds of andesitic porphyrites and limestones in the upper part of the Tumat-Taiga Formation (Є1tm2); 3, limestones and calcareous shales of the Tapsa Formation (Є1tp); 4, Lower and Middle Silurian conglomerates and sandstones (S1-2); 5, Quaternary deposits (QIV); 6, 7, Lower Paleozoic igneous rocks of the Tannu-Ola complex (γδO1-2): 6, gabbro-diorite-plagiogranite formation; 7, small granite-porphyry and quartz diorite bodies; 8, calcareous and magnesian skarns; 9, hydrothermal-metasomatic rocks in mineralized crushing zones; 10, gold orebodies; 11, tectonic zones; 12, geologic boundaries.
Fig. 3. Variations in gold fineness in gold ores from skarn-magnetite bodies (a) and in ores from mineralized crushing zones (b) at the Tardanskoe deposit.

Table 1. Mineral parageneses in gold-bearing ores produced at different stages and the composition of host rocks at Au-Cu-skarn deposits

Tardanskoe
  Early aposkarn Au-sulfide mineralization in magnetite-skarn rocks. Ore parageneses: magnetite (Fe3O4), pyrite (FeS2), chalcopyrite (CuFeS2), bornite (Cu5FeS4), sphalerite (ZnS), pyrrhotite (FeS), arsenopyrite (FeAsS), gold (Au). Host rocks: magnesian skarns (pyroxene + fassaite + phlogopite + pargasite + forsterite + spinel); calcareous skarns (pyroxene + garnet + epidote + wollastonite + scapolite); aposkarn serpentine and serpentine-chlorite rocks.
  Late Au-telluride-sulfide mineralization in superposed crushing zones. Ore parageneses: cobaltite ((Co,Fe)AsS), glaucodot ((Co,Fe)AsS), siegenite (CoNi2S4), violarite (FeNi2S4), hessite (Ag2Te), gold (Au). Host rocks: propylites, listvaenites, talc-serpentine-containing and sericite-quartz metasomatites, and argillitized rocks.

Murzinskoe
  Early stage. Ore parageneses: magnetite (Fe3O4), chalcopyrite (CuFeS2), pyrite (FeS2), bornite (Cu5FeS4), sphalerite (ZnS), galena (PbS), fahlore, arsenopyrite (FeAsS), clinobisvanite (BiVO4), gold (Au). Host rocks: calcareous skarns (garnet + pyroxene + wollastonite); aposkarn metasomatic rocks (quartz + epidote + chlorite + actinolite).
  Late stage. Ore parageneses: cinnabar (HgS), metacinnabarite (HgS), bismuthine (Bi2S3), aikinite (CuPbBiS3), emplectite (CuBiS2), berryite [Pb2(Cu,Ag)3Bi5S11], naumannite (Ag2Se), polybasite (Ag16Sb2S11), barite (BaSO4), gold (Au). Host rocks: quartz and quartz-carbonate veins, near-vein metasomatites of quartz-chlorite-carbonate composition, and argillitized rocks.

Sinyukhinskoe
  Early stage. Ore parageneses: magnetite (Fe3O4), pyrite (FeS2), chalcopyrite (CuFeS2), bornite (Cu5FeS4), chalcocite (Cu2S), sphalerite (ZnS), pyrrhotite (FeS), cubanite (CuFe2S3), gold (Au). Host rocks: wollastonite, garnet-wollastonite, garnet-pyroxene, and pyroxene skarns, and aposkarn metasomatic rocks (chlorite + actinolite + calcite).
  Late stage. Ore parageneses: tetradymite (Bi2Te2S), siegenite (CoNi2S4), cobaltite ((Co,Ni,Fe)AsS), melonite (NiTe2), wittichenite (Cu3BiS3), hessite (Ag2Te), petzite (AuAg3Te2), altaite (PbTe), clausthalite (PbSe), gold (Au). Host rocks: local zones of actinolite-chlorite-calcite-quartz composition.

Choiskoe
  Early stage. Ore parageneses: magnetite (Fe3O4), pyrite (FeS2), chalcopyrite (CuFeS2), gold (Au). Host rocks: garnet, garnet-pyroxene, garnet-wollastonite, and pyroxene-epidote skarns.
  Late stage. Ore parageneses: tetradymite (Bi2Te2S), ingodite (Bi2TeS), joseite (Bi4TeS2), hedleyite (Bi2Te), tellurobismuthite (Bi2Te3), bismuthine (Bi2S3), native bismuth (Bi), gold (Au). Host rocks: quartz and quartz-carbonate veins and quartz-carbonate-chlorite metasomatites.

Gold mineralization at the Murzinskoe deposit was earlier ascribed to the gold-skarn type. But recent data have shown that only a minor part of the deposit ores (scarce postskarn sulfide mineralization spatially associated with skarn-magnetite bodies) can be referred to this type. Most of the commercial ores occur in mineralized crushing zones. They form gold-sulfide mineralization in quartz and quartz-carbonate veins and near-vein metasomatites in a 300–400 m thick zone stretching in the N-NW direction for more than 3 km (Fig. 4). The crust of weathering widespread at the deposit contains hypergene copper minerals: malachite, chrysocolla, azurite, chalcocite, covellite, and high-fineness gold.

Gold-sulfide mineralization spatially associated with skarn-magnetite bodies is superposed on skarn rocks.
It was produced either at the regressive stage of skarn formation or at the postskarn hydrothermal-metasomatic stage and was accompanied by the formation of moderate- and low-temperature metasomatic minerals: chlorite, actinolite, epidote, and quartz. Sulfide mineralization is unevenly distributed and occurs as veinlet-disseminated chalcopyrite, pyrite, bornite, and sphalerite. It amounts to a few percent. Gold occurs as fine thin (0.5–0.01 mm) native segregations. It is mainly of high fineness (840–994) (Fig. 5, a).

In crushing zones (Fig. 4), gold mineralization was found in quartz-carbonate-sulfide veinlets and veins in hydrothermal-metasomatic rocks of quartz-chlorite-carbonate composition with kaolinite, hydromica, and adularia (argillizite formation) developing after different rocks (skarns, hornfelses, shales, siltstones, and limestones), often beyond the skarning and hornfelsing zones. The quartz veins are 0.1 to 2.0 m (on average, 0.4 m) thick, of N-S strike and eastern dip. In contrast to the gold-skarn-magnetite type, this mineralization is of more complex composition. In addition to minerals typical of skarn deposits (chalcopyrite, pyrite, bornite, sphalerite, and galena), it includes fahlore, arsenopyrite (FeAsS), cinnabar (HgS), metacinnabarite (HgS), bismuthine (Bi2S3), aikinite (CuPbBiS3), emplectite (CuBiS2), berryite [Pb2(Cu,Ag)3Bi5S11], naumannite (Ag2Se), polybasite (Ag16Sb2S11), scheelite (CaWO4), hematite (Fe2O3), clinobisvanite (BiVO4), barite (BaSO4), and gold (Table 1). The content of gold in the ores varies over a broad range of values, from 0.1 to 232 ppm. This gold occurs as fine (<0.1 mm) thin segregations in assemblage with sulfides. Its fineness also varies greatly (640–840), but, compared with the first type of ores, low-fineness gold prevails here (Fig. 5, b).

Fig. 4. Schematic geologic structure of the Murzinskoe deposit. 1, mica-siliceous shales (O1); 2, sandstones, siltstones, and aleuropelites (S1); 3, terrigenous-carbonate deposits (D1-2): a, conglomerates, b, limestones, c, sandstones; 4, granodiorites of the Ust'-Belaya complex (D3); 5, altered rocks and metasomatites: a, hornfelses, b, skarns, c, quartz-tourmaline metasomatites; 6, mineralized crushing zones; 7, faults: a, established, b, predicted; 8, other types of mineralization: a, Murzinka-3 (Au), b, skarn Fe.

Fig. 5. Variations in the fineness of gold associated with skarn-magnetite bodies (a) and gold from ores of mineralized crushing zones (b) at the Murzinskoe deposit.

The presence of cinnabar, sulfides and sulfosalts of Bi, Se, and Sb, and barite, the predominance of low-fineness gold and electrum, and the low-temperature wallrock alteration (formation of kaolinite, hydromica, and adularia) distinguish these ores from the earlier formed ores in skarn-magnetite bodies. The gap between the skarn and ore formation processes is evidenced by the presence of basite dikes cutting the skarns, which bear superposed gold mineralization of this type. At the same time, the presence of gold-cinnabar intergrowths and fine dissemination of gold in cinnabar, the presence of Hg-minerals (cinnabar, Hg-sphalerite, saucovite) in the ores, and the high contents of As, Sb, and Tl (typical elements of many Au-Hg deposits) permit this mineralization to be referred to the epithermal Au-Hg type (Borisenko et al., 2006).
Thermometric studies showed that the homogenization temperatures of fluid inclusions in quartz veins are 215–200 °C in the northern and central parts of the mineralized zone and decrease to 160–130 °C in the southern part.

Fig. 6. Schematic geologic structure of the Sinyukhinskoe deposit (compiled by Gusev (2007) and supplemented by our data). 1, loose Quaternary deposits; 2–6, rocks of the Choya (O1cs), Elanda (Є2-3el), Ust'-Sema (Є2us), and Upper Ynyrga (Є2vy) Formations: 2, conglomerates, 3, siltstones, 4, sandstones, 5, limestones, 6, andesite-basaltic porphyrites; 7–9, rocks of the Yugala (Sinyukha) complex: 7, granites and granodiorites of the early phase (γδD2-3), 8, granites of the late phase (γD2-3), 9, dolerite and gabbro-dolerite dikes; 10, plagiogranites of the Sarakoksha complex (νЄ2); 11, skarns; 12, sites with gold mineralization (1, Pervyi Rudnyi (First Ore), 2, Zapadnyi (Western), 3, Faifanov, 4, West Faifanov, 5, Ynyrga, 6, Nizhnii (Lower), 7, Tushkenek, 9, Gorbunov); 13, faults.

The Sinyukhinskoe deposit is localized in northeastern Altai, at the contact of the large (600 km²) complex Sarakoksha pluton and Cambrian volcanosedimentary strata of the Ust'-Sema Formation (Shcherbakov, 1967; Vakhrushev, 1972) (Fig. 6). According to Shokalsky et al. (2000) and Gusev (2007), this massif includes the Lower Cambrian Sarakoksha diorite-tonalite-plagiogranite complex and the Lower Devonian Yugala gabbro-diorite-granite complex (Sinyukha complex (Gusev, 2003)). It is in the latter complex that the commercial mineralization of the Sinyukha ore field is localized. In the contact zone of the Sinyukha massif, skarns of different compositions are developed in horizons of carbonate rocks and tuffs. Wollastonite and garnet-wollastonite varieties are the most widespread, and garnet-pyroxene and pyroxene ones are scarcer. Near the contact with basic effusive bodies, small magnetite orebodies have been revealed among garnet-pyroxene skarns.

Gold mineralization occurs mainly among wollastonite, garnet-wollastonite, and pyroxene-wollastonite skarns and is intimately associated with an assemblage of sulfide minerals. The latter are dominated by bornite, chalcocite, chalcopyrite, and pyrite, which compose ore zones in these rocks and are present in the form of nest-disseminations and stockworks. In local zones of actinolite-chlorite-calcite-quartz composition we found minor amounts of sphalerite, pyrrhotite, cubanite, and tetradymite. There are also occasional findings of rare minerals, such as siegenite (CoNi2S4), cobaltite ((Co,Ni,Fe)AsS), melonite (NiTe2), wittichenite (Cu3BiS3), hessite (Ag2Te), petzite (AuAg3Te2), altaite (PbTe), and clausthalite (PbSe) (Table 1). The total content of sulfides does not exceed 5–10%. The sulfides are extremely unevenly distributed, from occasional dissemination to densely disseminated, almost massive ores. The composition of sulfide mineralization slightly changes with depth: the gold-chalcocite-bornite assemblage gives way to a gold-chalcopyrite one. The accumulation of gold-sulfide mineralization was accompanied by hydrothermal-metasomatic alteration of the host skarns with the formation of actinolite, chlorite, and calcite near ore veins and nests. Magnetite ores are poorer in gold, and sulfide-free rocks (marbles and diorite-porphyry and granite-porphyry dikes) virtually lack it.

Fig. 7. Variations in gold fineness in ores from the Sinyukhinskoe deposit.
Fig. 8. Schematic geologic structure of the Choiskoe deposit (compiled by Gusev and Gusev (1998) and supplemented by our data). 1–5, rocks of the Ishpa (O1is) and Tandosha (Є2-3td) Formations: 1, conglomerates, 2, siltstones, 3, sandstones, 4, limestones, 5, felsic tuffs; 6, 7, granitoids of the Yugala complex: 6, granites and granodiorites of the early phase (γδD2-3), 7, leucocratic granites of the late phase (γD2-3); 8, granite-porphyry, diorite, and lamprophyre dikes (γδD2-3); 9, skarns; 10, gold mineralization occurrences (1, occurrence of the Central skarn deposit, 2, Pikhtovyi, 3, Smorodinovyi); 11, faults.

Gold often occurs in ores as native segregations in the form of hooks, fine wires, lumps, and sheets intimately intergrown with bornite, chalcocite, and chalcopyrite. Sometimes, native gold segregations are observed as fine inclusions in cracks and interstices of skarn minerals, most often wollastonite. These gold particles are mainly no larger than hundredths of a millimeter. The gold of primary ores of the Sinyukhinskoe deposit is of high fineness varying over a narrow range of values (911–964) (Fig. 7). The fineness of gold decreases to 860–870 only in its parageneses with tellurides, selenides, and rare sulfide minerals (Roslyakova et al., 1999). The main impurities in gold are silver (up to 19%) and copper (up to 1.7%). The content of Hg does not exceed 0.1%. By formation conditions, these ores are postskarn hydrothermal, with deposition temperatures not exceeding 350 °C (Roslyakova et al., 1999; Shcherbakov, 1972).

The Choiskoe deposit is localized 20 km northeast of the Sinyukha ore field, in the zone of contact between the Upper Cambrian terrigenous-carbonate deposits of the Ishpa Formation and the Choya granitoid massif referred to the Lower Devonian Yugala gabbro-diorite-granite complex (Fig. 8). The Choya granitoid massif is small at the surface (1 × 5 km) and extends from west to east, tracing the Choya fault (Gusev, 2007). The deposit abounds in dikes of dolerite porphyrites, diorites, and granite-porphyry and in rocks of the lamprophyre series: kersantites, minettes, and spessartites. The zone of contact between the granitoids of the Choya massif and the horizons of limestones and terrigenous-carbonate rocks is composed of skarns, which form linear zones extending in the NE direction, like the other rocks. Most bodies are of persistent thickness, ~100 m. By composition, the skarn bodies are divided into zones of garnet, garnet-pyroxene, pyroxene, garnet-wollastonite, and pyroxene-epidote skarns. In the skarn zones and near lamprophyre bodies, poor scheelite-molybdenite mineralization in quartz veins was established (Gusev, 1998).

Gold mineralization at the deposit occurs in linear tectonic zones and is not spatially associated with skarns. It develops as quartz veins and quartz-carbonate and quartz-carbonate-chlorite veinlets and nests with gold-sulfide mineralization in crushing and brecciation zones in both the skarns and the granitoids of the Choya massif (Fig. 8).

The mineral composition of these objects is nearly the same: gold-sulfide and gold-telluride parageneses. A number of rare tellurides have been revealed among the Choya deposit ores: tetradymite (Bi2Te2S), ingodite (Bi2TeS), joseite (Bi4TeS2), hedleyite (Bi2Te), tellurobismuthite (Bi2Te3), bismuthine (Bi2S3), and native bismuth (Table 1).
Magnetite, pyrite, and chalcopyrite, typical minerals of Cu-skarn deposits, are extremely scarce here. The total content of sulfides does not exceed a few percent. They occur mainly as fine thin dissemination and do not form large accumulations and nests. Gold in the Choya deposit ores occurs as fine inclusions in sulfide and telluride minerals in quartz veinlets and as intergrowths with ore minerals. The gold particles are hundredths and tenths of a millimeter in size. By chemical composition, the gold is divided into two groups: medium-fineness (843–880) and high-fineness (940–959); the latter is probably of exogenous nature (Fig. 9). The gold contains Ag (3–12.5 wt.%) and Hg (0–0.48 wt.%) impurities and traces of Cu. Thermometric studies showed that homogenization of primary gas-liquid inclusions into liquid proceeds at 126–150 °C in quartz and at 105–128 °C in calcite from ore-bearing veins.

Fig. 9. Variations in gold fineness in ores from the Choiskoe deposit.

The sequence and duration of formation of gold mineralization and its correlation with magmatism

As seen from the above data, gold mineralization at all the considered Cu-skarn deposits has a complex multistage formation history. But the same stages ran with different intensities at different deposits. For example, at the Sinyukhinskoe deposit, mainly early high-temperature mineral assemblages are widespread, whereas at the Choiskoe deposit, low-temperature ones. The Tardanskoe and Murzinskoe deposits bear both early and late minerals. To elucidate the peculiarities of gold-ore formation, establish the correlation between different types of gold mineralization and magmatic activity, and evaluate the duration of ore formation, we performed Ar-Ar and U-Pb dating of different mineralization and igneous rocks from the Tardanskoe and Murzinskoe deposits.

Our investigations have shown that the formation of gold mineralization at the Tardanskoe deposit lasted longer than was supposed earlier. Skarn mineralization formed at the contact of diorites with carbonate rocks as a result of the intrusion of the Kopto-Baisyut massif. Ar-Ar biotite dating of the massif yielded an age of 485.7 ± 4.4 Ma, corresponding to the Early Ordovician (Table 2). The skarns at the massif contact as well as the magnetite ores and gold-sulfide mineralization (pyrite, chalcopyrite, pyrrhotite, bornite, gold) spatially and genetically associated with skarn-magnetite bodies are of similar age. Gold was deposited together with sulfides, as evidenced by the direct correlation between the contents of gold and sulfides (especially chalcopyrite) and by gold inclusions in the sulfides. The formation of skarn and aposkarn mineralization was followed (after some temporal gap) by the intrusion of small dike and stock-like granitoid bodies, as indicated by their cutting of the sulfide-bearing skarn and magnetite bodies. Ar-Ar dating of these granite bodies yielded an age of 484.2 ± 4.3 Ma (Table 2).

Graduation Thesis: Cost Control (foreign-language original and translation) [template]
Undergraduate Graduation Project (Thesis): Foreign-Language Original and Translation
Department: Management; Major: Financial Management; June 2014

Foreign original and translation:
Cost Control
Roger J. AbiNader
Reference for Business, Encyclopedia of Business, 2nd ed.

Cost control, also known as cost management or cost containment, is a broad set of cost accounting methods and management techniques with the common goal of improving business cost-efficiency by reducing costs, or at least restricting their rate of growth. Businesses use cost control methods to monitor, evaluate, and ultimately enhance the efficiency of specific areas, such as departments, divisions, or product lines, within their operations.

In their 1987 article "How Cost Accounting Systematically Distorts Product Costs," Cooper and Kaplan first put forward the theory of cost drivers: cost is, in essence, a function of several independent or interacting factors (independent variables) that jointly drive the result. So what factors actually drive cost? Traditionally, business volume (such as output) was treated as the only cost driver (independent variable), or at least as the decisive factor in cost allocation, with all other factors (drivers) set aside. In accordance with this single cost driver, the full cost of the enterprise is divided into two categories: variable costs and fixed costs.
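To make the traditional single-driver model concrete, here is a minimal C sketch (my own illustration, not from the article; all figures are hypothetical): total cost is modeled as a fixed component plus a variable component driven solely by volume.

```c
/* Traditional volume-driven cost model: C(q) = F + v * q.
   A sketch with hypothetical figures, not data from the article. */
#include <stdio.h>

/* total cost for a given volume, under the single-driver assumption */
static double total_cost(double fixed, double var_per_unit, double volume)
{
    return fixed + var_per_unit * volume;
}

int main(void)
{
    double fixed = 50000.0;      /* fixed costs, independent of volume   */
    double var_per_unit = 12.5;  /* variable cost per unit of output     */

    for (double q = 0.0; q <= 4000.0; q += 1000.0)
        printf("volume = %6.0f   total cost = %9.2f\n",
               q, total_cost(fixed, var_per_unit, q));
    return 0;
}
```

Activity-based costing, which grew out of Cooper and Kaplan's critique, would instead sum several such terms, one per cost driver, rather than loading everything onto volume.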

Foreign Literature Review: Power Quality Monitoring (foreign original with Chinese translation)
1 Power Quality Monitoring
Patrick Coleman

Many power quality problems are caused by inadequate wiring or improper grounding. These problems can be detected by simple examination of the wiring and grounding systems. Another large population of power quality problems can be solved by spot checks of voltage, current, or harmonics using handheld meters. Some problems, however, are intermittent and require longer-term monitoring for solution.

Long-term power quality monitoring is largely a problem of data management. If an RMS value of voltage and current is recorded each electrical cycle, for a three-phase system, about 6 gigabytes of data will be produced each day. Some equipment is disrupted by changes in the voltage waveshape that may not affect the rms value of the waveform. Recording the voltage and current waveforms will result in about 132 gigabytes of data per day. While modern data storage technologies may make it feasible to record every electrical cycle, the task of detecting power quality problems within this mass of data is daunting indeed.

Most commercially available power quality monitoring equipment attempts to reduce the recorded data to manageable levels. Each manufacturer has a generally proprietary data reduction algorithm. It is critical that the user understand the algorithm used in order to properly interpret the results.

1.1 Selecting a Monitoring Point

Power quality monitoring is usually done either to solve an existing power quality problem or to determine the electrical environment prior to installing new sensitive equipment. For new equipment, it is easy to argue that the monitoring equipment should be installed at the point nearest the point of connection of the new equipment. For power quality problems affecting existing equipment, there is frequently pressure to determine if the problem is being caused by some external source, i.e., the utility. This leads to the installation of monitoring equipment at the service point to try to detect the source of the problem. This is usually not the optimum location for monitoring equipment. Most studies suggest that 80% of power quality problems originate within the facility. A monitor installed on the equipment being affected will detect problems originating within the facility, as well as problems originating on the utility. Each type of event has distinguishing characteristics to assist the engineer in correctly identifying the source of the disturbance.

1.1.1 What to Monitor

At minimum, the input voltage to the affected equipment should be monitored. If the equipment is single phase, the monitored voltage should include at least the line-to-neutral voltage and the neutral-to-ground voltage. If possible, the line-to-ground voltage should also be monitored. For three-phase equipment, the voltages may be monitored either line to neutral or line to line. Line-to-neutral voltages are easier to understand, but most three-phase equipment operates on line-to-line voltages. Usually, it is preferable to monitor the voltage line to line for three-phase equipment.

If the monitoring equipment has voltage thresholds which can be adjusted, the thresholds should be set to match the sensitive equipment voltage requirements. If the requirements are not known, a good starting point is usually the nominal equipment voltage plus or minus 10%.

In most sensitive equipment, the connection to the source is a rectifier, and the critical voltages are DC. In some cases, it may be necessary to monitor the critical DC voltages.
Some commercial power quality monitors are capable of monitoring AC and DC simultaneously, while others are AC only.

It is frequently useful to monitor current as well as voltage. For example, if the problem is being caused by voltage sags, the reaction of the current during the sag can help determine the source of the sag. If the current doubles when the voltage sags 10%, then the cause of the sag is on the load side of the current monitor point. If the current increases or decreases 10–20% during a 10% voltage sag, then the cause of the sag is on the source side of the current monitoring point.
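That rule of thumb lends itself to a simple check. The C sketch below is my own encoding of it (the exact thresholds are illustrative assumptions, not values from the text): it compares the per-unit current reaction to a per-unit voltage sag and reports on which side of the monitoring point the cause likely lies.

```c
/* Classify the likely side of a sag cause from the current's reaction.
   Thresholds are illustrative; real monitors use proprietary criteria. */
#include <stdio.h>

typedef enum { LOAD_SIDE, SOURCE_SIDE, INDETERMINATE } SagSource;

/* v_pu, i_pu: voltage and current during the sag, per unit of pre-sag values */
static SagSource classify_sag(double v_pu, double i_pu)
{
    if (v_pu <= 0.90 && i_pu >= 1.5)                 /* current jumps while voltage sags */
        return LOAD_SIDE;
    if (v_pu <= 0.90 && i_pu >= 0.8 && i_pu <= 1.2)  /* current tracks voltage within ~20% */
        return SOURCE_SIDE;
    return INDETERMINATE;
}

int main(void)
{
    printf("%d\n", classify_sag(0.90, 2.0));  /* current doubles -> 0 (LOAD_SIDE)   */
    printf("%d\n", classify_sag(0.90, 0.9));  /* current dips 10% -> 1 (SOURCE_SIDE) */
    return 0;
}
```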
Sensitive equipment can also be affected by other environmental factors such as temperature, humidity, static, harmonics, magnetic fields, radio frequency interference (RFI), and operator error or sabotage. Some commercial monitors can record some of these factors, but it may be necessary to install more than one monitor to cover every possible source of disturbance.

It can also be useful to record power quantity data while searching for power quality problems. For example, the author found a shortcut to the source of a disturbance affecting a wide area by using the power quantity data. The recordings revealed an increase in demand of 2500 kW immediately after the disturbance. Asking a few questions quickly led to a nearby plant with a 2500 kW switched load that was found to be malfunctioning.

1.2 Selecting a Monitor

Commercially available monitors fall into two basic categories: line disturbance analyzers and voltage recorders. The line between the categories is becoming blurred as new models are developed. Voltage recorders are primarily designed to record voltage and current strip chart data, but some models are able to capture waveforms under certain circumstances. Line disturbance analyzers are designed to capture voltage events that may affect sensitive equipment. Generally, line disturbance analyzers are not good voltage recorders, but newer models are better than previous designs at recording voltage strip charts.

In order to select the best monitor for the job, it is necessary to have an idea of the type of disturbance to be recorded, and an idea of the operating characteristics of the available disturbance analyzers. For example, a common power quality problem is nuisance tripping of variable speed drives. Variable speed drives may trip due to the waveform disturbance created by power factor correction capacitor switching, or due to high or low steady-state voltage, or, in some cases, due to excessive voltage imbalance. If the drive trips due to high voltage or waveform disturbances, the drive diagnostics will usually indicate an overvoltage code as the cause of the trip. If the voltage is not balanced, the drive will draw significantly unbalanced currents. The current imbalance may reach a level that causes the drive to trip for input overcurrent. Selecting a monitor for variable speed drive tripping can be a challenge. Most line disturbance analyzers can easily capture the waveshape disturbance of capacitor switching, but they are not good voltage recorders, and may not do a good job of reporting high steady-state voltage. Many line disturbance analyzers cannot capture voltage unbalance at all, nor will they respond to current events unless there is a corresponding voltage event. Most voltage and current recorders can easily capture the high steady-state voltage that leads to a drive trip, but they may not capture the capacitor switching waveshape disturbance. Many voltage recorders can capture voltage imbalance, current imbalance, and some of them will trigger a capture of voltage and current during a current event, such as the drive tripping off.

To select the best monitor for the job, it is necessary to understand the characteristics of the available monitors. The following sections will discuss the various types of data that may be needed for a power quality investigation, and the characteristics of some commercially available monitors.

1.3 Voltage

The most commonly recorded parameter in power quality investigations is the RMS voltage delivered to the equipment. Manufacturers of recording equipment use a variety of techniques to reduce the volume of the data recorded. The most common method of data reduction is to record Min/Max/Average data over some interval. Figure 1.1 shows a strip chart of rms voltages recorded on a cycle-by-cycle basis. Figure 1.2 shows a Min/Max/Average chart for the same time period. A common recording period is 1 week. Typical recorders will use a recording interval of 2–5 minutes. Each recording interval will produce three numbers: the rms voltage of the highest single cycle, the lowest single cycle, and the average of every cycle during the interval. This is a simple, easily understood recording method, and it is easily implemented by the manufacturer. There are several drawbacks to this method. If there are several events during a recording interval, only the event with the largest deviation is recorded. Unless the recorder records the event in some other manner, there is no timestamp associated with the events, and no duration available. The most critical deficiency is the lack of a voltage profile during the event. The voltage profile provides significant clues to the source of the event. For example, if the event is a voltage sag, the minimum voltage may be the same for an event caused by a distant fault on the utility system and for a nearby large motor start. For the distant fault, however, the voltage will sag nearly instantaneously, stay at a fairly constant level for 3–10 cycles, and almost instantly recover to full voltage, or possibly a slightly higher voltage if the faulted section of the utility system is separated. For a nearby motor start, the voltage will drop nearly instantaneously, and almost immediately begin a gradual recovery over 30–180 cycles to a voltage somewhat lower than before. Figure 1.3 shows a cycle-by-cycle recording of a simulated adjacent feeder fault, followed by a simulation of a voltage sag caused by a large motor start. Figure 1.4 shows a Min/Max/Average recording of the same two events. The events look quite similar when captured by the Min/Max/Average recorder, while the cycle-by-cycle recorder reveals the difference in the voltage recovery profile.

FIGURE 1.1 RMS voltage strip chart, taken cycle by cycle.

FIGURE 1.2 Min/Max/Average strip chart, showing the minimum single cycle voltage, the maximum single cycle voltage, and the average of every cycle in a recording interval. Compare to the Fig. 1.1 strip chart data.
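To make the data reduction concrete, here is a minimal C sketch of the Min/Max/Average scheme just described (my own illustration; the data layout and sample values are hypothetical). It also shows why the method loses the sag profile: only three numbers survive per interval.

```c
/* Min/Max/Average reduction of cycle-by-cycle rms readings.
   Hypothetical data layout; real recorders add timestamps and triggers. */
#include <stdio.h>

typedef struct { double min, max, avg; } IntervalSummary;

static IntervalSummary reduce_interval(const double *rms, int n_cycles)
{
    IntervalSummary s = { rms[0], rms[0], 0.0 };
    for (int i = 0; i < n_cycles; i++) {
        if (rms[i] < s.min) s.min = rms[i];
        if (rms[i] > s.max) s.max = rms[i];
        s.avg += rms[i];
    }
    s.avg /= n_cycles;
    return s;  /* event count, timestamps, and the sag profile are all lost */
}

int main(void)
{
    /* six cycles of a 120 V feed, one of them sagged to 102.3 V */
    double rms[6] = { 120.1, 119.8, 102.3, 119.9, 120.0, 120.2 };
    IntervalSummary s = reduce_interval(rms, 6);
    printf("min=%.1f  max=%.1f  avg=%.1f\n", s.min, s.max, s.avg);
    return 0;
}
```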
Some line disturbance analyzers allow the user to set thresholds for voltage events. If the voltage exceeds these thresholds, a short-duration strip chart is captured showing the voltage profile during the event. This short-duration strip chart is in addition to the long-duration recordings, meaning that the engineer must look at several different charts to find the needed information.

Some voltage recorders have user-programmable thresholds, and record deviations at a higher resolution than voltages that fall within the thresholds. These deviations are incorporated into the strip chart, so the user need only open the strip chart to determine, at a glance, if there are any significant events. If there are events to be examined, the engineer can immediately "zoom in" on the portion of the strip chart with the event.

Some voltage recorders do not have user-settable thresholds, but rather choose to capture events based either on fixed default thresholds or on some type of significant change. For some users, fixed thresholds are an advantage, while others are uncomfortable with the lack of control over the meter function. In units with fixed thresholds, if the environment is normally somewhat disturbed, such as on a welder circuit at a motor control center, the meter memory may fill up with insignificant events and the monitor may not be able to record a significant event when it occurs. For this reason, monitors with fixed thresholds should not be used in electrically noisy environments.

FIGURE 1.3 Cycle-by-cycle rms strip chart showing two voltage sags. The sag on the left is due to an adjacent feeder fault on the supply substation, and the sag on the right is due to a large motor start. Note the difference in the voltage profile during recovery.

FIGURE 1.4 Min/Max/Average strip chart of the same voltage sags as Fig. 1.3. Note that both sags look almost identical. Without the recovery detail found in Fig. 1.3, it is difficult to determine a cause for the voltage sags.

1.3.1 Voltage Waveform Disturbances

Some equipment can be disturbed by changes in the voltage waveform. These waveform changes may not significantly affect the rms voltage, yet may still cause equipment to malfunction. An rms-only recorder may not detect the cause of the malfunction. Most line disturbance analyzers have some mechanism to detect and record changes in voltage waveforms. Some machines compare portions of successive waveforms, and capture the waveform if there is a significant deviation in any portion of the waveform. Others capture waveforms if there is a significant change in the rms value of successive waveforms. Another method is to capture waveforms if there is a significant change in the voltage total harmonic distortion (THD) between successive cycles.

The most common voltage waveform change that may cause equipment malfunction is the disturbance created by power factor correction capacitor switching. When capacitors are energized, a disturbance is created that lasts about 1 cycle, but does not result in a significant change in the rms voltage. Figure 1.5 shows a typical power factor correction capacitor energization disturbance.

FIGURE 1.6 RMS strip charts of voltage and current during a large current increase due to a motor start downstream of the monitor point.

1.4 Current Waveshape Disturbances

Very few monitors are capable of capturing changes in current waveshape. It is usually not necessary to capture changes in current waveshape, but in some special cases this can be useful data. For example, inrush current waveforms can provide more useful information than inrush current rms data. Figure 1.7 shows a significant change in the current waveform when the current changes from zero to nearly 100 amps peak.
The shape of the waveform, and the phase shift with respect to the voltage waveform, confirm that this current increase was due to an induction motor start. Figure 1.7 shows the first few cycles of the event shown in Fig. 1.6.

1.5 Harmonics

Harmonic distortion is a growing area of concern. Many commercially available monitors are capable of capturing harmonic snapshots. Some monitors have the ability to capture harmonic strip chart data. In this area, it is critical that the monitor produce accurate data. Some commercially available monitors have deficiencies in measuring harmonics. Monitors generally capture a sample of the voltage and current waveforms, and perform a Fast Fourier Transform to produce a harmonic spectrum. According to the Nyquist Sampling Theorem, the input waveform must be sampled at a rate of at least twice the highest frequency present in the waveform. Some manufacturers interpret this to mean the highest frequency of interest, and adjust their sample rates accordingly. If the input signal contains a frequency that is above the maximum frequency that can be correctly sampled, the high-frequency signal may be "aliased," that is, it may be incorrectly identified as a lower-frequency harmonic. This may lead the engineer to search for a solution to a harmonic problem that does not exist. The aliasing problem can be alleviated by sampling at higher sample rates, and by filtering out frequencies above the highest frequency of interest. The sample rate is usually found in the manufacturer's literature, but the presence of an antialiasing filter is not usually mentioned in the literature.

1.6 Summary

Most power quality problems can be solved with simple hand tools and attention to detail. Some problems, however, are not so easily identified, and it may be necessary to monitor to correctly identify the problem. Successful monitoring involves several steps. First, determine if it is really necessary to monitor. Second, decide on a location for the monitor. Generally, the monitor should be installed close to the affected equipment. Third, decide what quantities need to be monitored, such as voltage, current, harmonics, and power data. Try to determine the types of events that can disturb the equipment, and select a meter that is capable of detecting those types of events. Fourth, decide on a monitoring period. Usually, a good first choice is at least one business cycle, or at least 1 day, and more commonly, 1 week. It may be necessary to monitor until the problem recurs. Some monitors can record indefinitely by discarding older data to make space for new data. These monitors can be installed and left until the problem recurs. When the problem recurs, the monitoring should be stopped before the event data is discarded.

After the monitoring period ends, the most difficult task begins: interpreting the data. Modern power quality monitors produce reams of data during a disturbance. Data interpretation is largely a matter of experience, and Ohm's law. There are many examples of disturbance data in books such as The BMI Handbook of Power Signatures, Second Edition, and the Dranetz Field Handbook for Power Quality Analysis.
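As a closing illustration of the aliasing pitfall described in Section 1.5, the C sketch below (my own arithmetic example, not from the handbook) folds an out-of-band frequency back into the measurable band, showing how a real 3000 Hz component can masquerade as a lower harmonic on a 60 Hz system.

```c
/* Fold a true input frequency into [0, fs/2], the band an FFT can report.
   Standard sampling arithmetic; the sample rate chosen is hypothetical. */
#include <stdio.h>
#include <math.h>

static double aliased_frequency(double f, double fs)
{
    double r = fmod(f, fs);
    return (r <= fs / 2.0) ? r : fs - r;  /* mirror about the Nyquist frequency */
}

int main(void)
{
    double fs = 3840.0;  /* e.g. 64 samples per cycle at 60 Hz */
    double f  = 3000.0;  /* 50th harmonic of 60 Hz, above fs/2 = 1920 Hz */
    printf("true %.0f Hz appears as %.0f Hz\n", f, aliased_frequency(f, fs));
    /* prints 840 Hz, i.e. the 14th harmonic: a phantom the engineer might chase */
    return 0;
}
```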

Foreign-Language Original and Translation

I. Foreign Original

Subject: Financial Analysis with the DuPont Ratio: A Useful Compass
Source: Steven C. Isberg, Ph.D.

Financial Analysis and the Changing Role of Credit Professionals

In today's dynamic business environment, it is important for credit professionals to be prepared to apply their skills both within and outside the specific credit management function. Credit executives may be called upon to provide insights regarding issues such as strategic financial planning, measuring the success of a business strategy, or determining the viability of an acquisition candidate. Even so, the normal duties involved in credit assessment and management call for the credit manager to be equipped to conduct financial analysis in a rapid and meaningful way.

Financial statement analysis is employed for a variety of reasons. Outside investors are seeking information as to the long-run viability of a business and its prospects for providing an adequate return in consideration of the risks being taken. Creditors desire to know whether a potential borrower or customer can service loans being made. Internal analysts and management utilize financial statement analysis as a means to monitor the outcome of policy decisions, predict future performance targets, develop investment strategies, and assess capital needs. As the role of the credit manager is expanded cross-functionally, he or she may be required to answer the call to conduct financial statement analysis under any of these circumstances. The DuPont ratio is a useful tool in providing both an overview and a focus for such analysis.

A comprehensive financial statement analysis will provide insights as to a firm's performance and/or standing in the areas of liquidity, leverage, operating efficiency, and profitability. A complete analysis will involve both time-series and cross-sectional perspectives. Time-series analysis will examine trends using the firm's own performance as a benchmark. Cross-sectional analysis will augment the process by using external performance benchmarks for comparison purposes. Every meaningful analysis will begin with a qualitative inquiry as to the strategy and policies of the subject company, creating a context for the investigation. Next, goals and objectives of the analysis will be established, providing a basis for interpreting the results. The DuPont ratio can be used as a compass in this process by directing the analyst toward significant areas of strength and weakness evident in the financial statements.

The DuPont ratio is calculated as follows:

ROE = (Net Income / Sales) × (Sales / Average Assets) × (Average Assets / Average Equity)

The ratio provides measures in three of the four key areas of analysis, each representing a compass bearing, pointing the way to the next stage of the investigation.

The DuPont Ratio Decomposition

The DuPont ratio is a good place to begin a financial statement analysis because it measures the return on equity (ROE). A for-profit business exists to create wealth for its owner(s). ROE is, therefore, arguably the most important of the key ratios, since it indicates the rate at which owner wealth is increasing. While the DuPont analysis is not an adequate replacement for detailed financial analysis, it provides an excellent snapshot and starting point, as will be seen below.

The three components of the DuPont ratio, as represented in the equation, cover the areas of profitability, operating efficiency, and leverage.
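A minimal C sketch of the three-factor decomposition follows (my own illustration; the input figures are hypothetical, not drawn from the article's exhibits):

```c
/* Three-factor DuPont decomposition: ROE = NPM x turnover x leverage.
   Input figures are hypothetical, for illustration only. */
#include <stdio.h>

int main(void)
{
    double net_income = 120.0, sales = 2400.0;
    double avg_assets = 1600.0, avg_equity = 640.0;

    double npm      = net_income / sales;        /* profitability        */
    double turnover = sales / avg_assets;        /* operating efficiency */
    double leverage = avg_assets / avg_equity;   /* leverage multiplier  */
    double roe      = npm * turnover * leverage; /* = net_income / avg_equity */

    printf("NPM=%.3f  Turnover=%.3f  Leverage=%.3f  ROE=%.3f\n",
           npm, turnover, leverage, roe);
    return 0;
}
```

Because the three factors multiply out to net income over average equity, a weak ROE can be traced immediately to thin margins, slow asset turnover, or low leverage, which is what makes the ratio a "compass."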
In the following paragraphs, we examine the meaning of each of these components by calculating and comparing the DuPont ratio using the financial statements and industry standards for Atlantic Aquatic Equipment, Inc. (Exhibits 1, 2, and 3), a retailer of water sporting goods.

Profitability: Net Profit Margin (NPM: Net Income / Sales)

Profitability ratios measure the rate at which either sales or capital is converted into profits at different levels of the operation. The most common are gross, operating, and net profitability, which describe performance at different activity levels. Of the three, net profitability is the most comprehensive, since it uses the bottom-line net income in its measure.

A proper analysis of this ratio would include at least three to five years of trend and cross-sectional comparison data. The cross-sectional comparison can be drawn from a variety of sources. Most common are the Dun & Bradstreet Index of Key Financial Ratios and the Robert Morris Associates (RMA) Annual Statement Studies. Each of these volumes provides key ratios estimated for business establishments grouped according to industry (i.e., SIC codes). More will be discussed in regard to comparisons as our example is continued below. As is, over the two years, Whitbread has become less profitable.

Leverage: The Leverage Multiplier (Average Assets / Average Equity)

Leverage ratios measure the extent to which a company relies on debt financing in its capital structure. Debt is both beneficial and costly to a firm. The cost of debt is lower than the cost of equity, an effect which is enhanced by the tax deductibility of interest payments, in contrast to taxable dividend payments and stock repurchases. If debt proceeds are invested in projects which return more than the cost of debt, owners keep the residual, and hence, the return on equity is "leveraged up." The debt sword, however, cuts both ways. Adding debt creates a fixed payment required of the firm whether or not it is earning an operating profit, and therefore, payments may cut into the equity base. Further, the risk of the equity position is increased by the presence of debt holders having a superior claim to the assets of the firm.

Corporate Performance Management [foreign translation]
I. Foreign Original

Corporate Performance Management

Abstract

Two of the most important duties of a chief executive officer are (1) to formulate strategy and (2) to manage the company's performance.

In this article we examine the second of these tasks and discuss how corporate performance should be modeled and managed. We begin by considering the environment in which a company operates, which includes, besides outside stakeholders, the industry it belongs to and the market it supplies, and then proceed to explain how the functioning of a company can be understood by an examination of its business, operational, and performance management models. Next we describe the structure recommended by the authors for a corporate planning, control, and evaluation system, the most important part of a corporate performance management system. The core component of the planning system is the corporate performance evaluation model, the structure of which is mapped into the planning system's database, simulation models, and budgeting tools' structures, and is also used to shape the information contained in the system's products, besides being the nucleus of the language used by the system's agents to talk about corporate performance. The ontology of planning, the guiding principles of corporate planning, and the history of "MADE," the corporate performance management system discussed in this article, are reviewed next, before we proceed to discuss in detail the structural components of the corporate planning and control system introduced before. We conclude the article by listing the main steps which should be followed when implementing a performance planning, control, and evaluation system for a company.

1. Introduction

Two of the most important corporate tasks for which a chief executive officer is primarily responsible are (1) to formulate strategy and (2) to manage the company's performance. In this article we examine the second of these tasks and discuss how corporate performance should be modeled and managed.

Graduation Project (Thesis): Foreign-Language Original and Translation
I. Foreign Original

MCU

A microcontroller (or MCU) is a computer-on-a-chip. It is a type of microprocessor emphasizing self-sufficiency and cost-effectiveness, in contrast to a general-purpose microprocessor (the kind used in a PC). With the development of technology, control systems are applied in an ever wider range of fields, and equipment is becoming smaller and more intelligent; the single-chip microcontroller, with its small size, powerful functions, low cost, and flexibility of use, shows strong vitality. Compared with typical integrated circuits, it generally has better noise immunity and better adaptability to ambient temperature and humidity, and can run stably under industrial conditions. Single-chip microcontrollers are widely used in all kinds of instruments and meters, making instrumentation intelligent and improving measurement speed and accuracy while strengthening control functions. In short, with the advent of the information age, the inherent structural weaknesses of the traditional single-chip microcontroller have exposed many drawbacks: its speed, scale, and performance indicators increasingly fail to meet users' needs, and the development and upgrading of single-chip chipsets face new challenges.

The Description of the AT89S52

The AT89S52 is a low-power, high-performance CMOS 8-bit microcontroller with 8K bytes of In-System Programmable Flash memory. The device is manufactured using Atmel's high-density nonvolatile memory technology and is compatible with the industry-standard 80C51 instruction set and pinout. The on-chip Flash allows the program memory to be reprogrammed in-system or by a conventional nonvolatile memory programmer. By combining a versatile 8-bit CPU with In-System Programmable Flash on a monolithic chip, the Atmel AT89S52 is a powerful microcontroller which provides a highly flexible and cost-effective solution to many embedded control applications.

The AT89S52 provides the following standard features: 8K bytes of Flash, 256 bytes of RAM, 32 I/O lines, Watchdog timer, two data pointers, three 16-bit timer/counters, a six-vector two-level interrupt architecture, a full duplex serial port, on-chip oscillator, and clock circuitry. In addition, the AT89S52 is designed with static logic for operation down to zero frequency and supports two software-selectable power-saving modes. The Idle Mode stops the CPU while allowing the RAM, timer/counters, serial port, and interrupt system to continue functioning. The Power-down mode saves the RAM contents but freezes the oscillator, disabling all other chip functions until the next interrupt or hardware reset.

Features
• Compatible with MCS-51® Products
• 8K Bytes of In-System Programmable (ISP) Flash Memory
  – Endurance: 1000 Write/Erase Cycles
• 4.0V to 5.5V Operating Range
• Fully Static Operation: 0 Hz to 33 MHz
• Three-level Program Memory Lock
• 256 x 8-bit Internal RAM
• 32 Programmable I/O Lines
• Three 16-bit Timer/Counters
• Eight Interrupt Sources
• Full Duplex UART Serial Channel
• Low-power Idle and Power-down Modes
• Interrupt Recovery from Power-down Mode
• Watchdog Timer
• Dual Data Pointer
• Power-off Flag

Pin Description

VCC: Supply voltage.

GND: Ground.

Port 0
Port 0 is an 8-bit open-drain bidirectional I/O port. As an output port, each pin can sink eight TTL inputs. When 1s are written to Port 0 pins, the pins can be used as high-impedance inputs. Port 0 can also be configured to be the multiplexed low-order address/data bus during accesses to external program and data memory.
In this mode, P0 has internal pullups. Port 0 also receives the code bytes during Flash programming and outputs the code bytes during program verification. External pullups are required during program verification.

Port 1
Port 1 is an 8-bit bidirectional I/O port with internal pullups. The Port 1 output buffers can sink/source four TTL inputs. When 1s are written to Port 1 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, Port 1 pins that are externally being pulled low will source current (IIL) because of the internal pullups. In addition, P1.0 and P1.1 can be configured to be the Timer/Counter 2 external count input (P1.0/T2) and the Timer/Counter 2 trigger input (P1.1/T2EX), respectively. Port 1 also receives the low-order address bytes during Flash programming and verification.

Port 2
Port 2 is an 8-bit bidirectional I/O port with internal pullups. The Port 2 output buffers can sink/source four TTL inputs. When 1s are written to Port 2 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, Port 2 pins that are externally being pulled low will source current (IIL) because of the internal pullups. Port 2 emits the high-order address byte during fetches from external program memory and during accesses to external data memory that use 16-bit addresses (MOVX @DPTR). In this application, Port 2 uses strong internal pullups when emitting 1s. During accesses to external data memory that use 8-bit addresses (MOVX @Ri), Port 2 emits the contents of the P2 Special Function Register. Port 2 also receives the high-order address bits and some control signals during Flash programming and verification.

Port 3
Port 3 is an 8-bit bidirectional I/O port with internal pullups. The Port 3 output buffers can sink/source four TTL inputs. When 1s are written to Port 3 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, Port 3 pins that are externally being pulled low will source current (IIL) because of the pullups. Port 3 also serves the functions of various special features of the AT89S52, as shown in the following table. Port 3 also receives some control signals for Flash programming and verification.

RST
Reset input. A high on this pin for two machine cycles while the oscillator is running resets the device. This pin drives high for 96 oscillator periods after the Watchdog times out. The DISRTO bit in SFR AUXR (address 8EH) can be used to disable this feature. In the default state of bit DISRTO, the RESET HIGH out feature is enabled.

ALE/PROG
Address Latch Enable (ALE) is an output pulse for latching the low byte of the address during accesses to external memory. This pin is also the program pulse input (PROG) during Flash programming. In normal operation, ALE is emitted at a constant rate of 1/6 the oscillator frequency and may be used for external timing or clocking purposes. Note, however, that one ALE pulse is skipped during each access to external data memory. If desired, ALE operation can be disabled by setting bit 0 of SFR location 8EH. With the bit set, ALE is active only during a MOVX or MOVC instruction. Otherwise, the pin is weakly pulled high. Setting the ALE-disable bit has no effect if the microcontroller is in external execution mode.

PSEN
Program Store Enable (PSEN) is the read strobe to external program memory.
When the AT89S52 is executing code from external program memory, PSEN is activated twice each machine cycle, except that two PSEN activations are skipped during each access to external data memory.

EA/VPP
External Access Enable. EA must be strapped to GND in order to enable the device to fetch code from external program memory locations starting at 0000H up to FFFFH. Note, however, that if lock bit 1 is programmed, EA will be internally latched on reset. EA should be strapped to VCC for internal program executions. This pin also receives the 12-volt programming enable voltage (VPP) during Flash programming.

XTAL1
Input to the inverting oscillator amplifier and input to the internal clock operating circuit.

XTAL2
Output from the inverting oscillator amplifier.

Special Function Registers
Note that not all of the addresses are occupied, and unoccupied addresses may not be implemented on the chip. Read accesses to these addresses will in general return random data, and write accesses will have an indeterminate effect. User software should not write 1s to these unlisted locations, since they may be used in future products to invoke new features. In that case, the reset or inactive values of the new bits will always be 0.

Timer 2 Registers: Control and status bits for Timer 2 are contained in registers T2CON and T2MOD. The register pair (RCAP2H, RCAP2L) are the Capture/Reload registers for Timer 2 in 16-bit capture mode or 16-bit auto-reload mode.

Interrupt Registers: The individual interrupt enable bits are in the IE register. Two priorities can be set for each of the six interrupt sources in the IP register.

Dual Data Pointer Registers: To facilitate accessing both internal and external data memory, two banks of 16-bit Data Pointer Registers are provided: DP0 at SFR address locations 82H-83H and DP1 at 84H-85H. Bit DPS = 0 in SFR AUXR1 selects DP0, and DPS = 1 selects DP1. The user should always initialize the DPS bit to the appropriate value before accessing the respective Data Pointer Register.

Power-Off Flag: The Power-Off Flag (POF) is located at bit 4 (PCON.4) in the PCON SFR. POF is set to "1" during power-up. It can be set and reset under software control and is not affected by reset.

Memory Organization
MCS-51 devices have a separate address space for Program and Data Memory. Up to 64K bytes each of external Program and Data Memory can be addressed.

Program Memory
If the EA pin is connected to GND, all program fetches are directed to external memory. On the AT89S52, if EA is connected to VCC, program fetches to addresses 0000H through 1FFFH are directed to internal memory, and fetches to addresses 2000H through FFFFH are directed to external memory.

Data Memory
The AT89S52 implements 256 bytes of on-chip RAM. The upper 128 bytes occupy a parallel address space to the Special Function Registers. This means that the upper 128 bytes have the same addresses as the SFR space but are physically separate from SFR space. When an instruction accesses an internal location above address 7FH, the address mode used in the instruction specifies whether the CPU accesses the upper 128 bytes of RAM or the SFR space. Instructions that use direct addressing access the SFR space. For example, the following direct addressing instruction accesses the SFR at location 0A0H (which is P2):

MOV 0A0H, #data

Instructions that use indirect addressing access the upper 128 bytes of RAM.
For example, the following indirect addressing instruction, where R0 contains 0A0H, accesses the data byte at address 0A0H, rather than P2 (whose address is 0A0H):

MOV @R0, #data

Note that stack operations are examples of indirect addressing, so the upper 128 bytes of data RAM are available as stack space.

Timer 0 and 1
Timer 0 and Timer 1 in the AT89S52 operate the same way as Timer 0 and Timer 1 in the AT89C51 and AT89C52.

Timer 2
Timer 2 is a 16-bit Timer/Counter that can operate as either a timer or an event counter. The type of operation is selected by bit C/T2 in the SFR T2CON (shown in Table 2). Timer 2 has three operating modes: capture, auto-reload (up or down counting), and baud rate generator. The modes are selected by bits in T2CON. Timer 2 consists of two 8-bit registers, TH2 and TL2. In the Timer function, the TL2 register is incremented every machine cycle. Since a machine cycle consists of 12 oscillator periods, the count rate is 1/12 of the oscillator frequency.

In the Counter function, the register is incremented in response to a 1-to-0 transition at its corresponding external input pin, T2. In this function, the external input is sampled during S5P2 of every machine cycle. When the samples show a high in one cycle and a low in the next cycle, the count is incremented. The new count value appears in the register during S3P1 of the cycle following the one in which the transition was detected. Since two machine cycles (24 oscillator periods) are required to recognize a 1-to-0 transition, the maximum count rate is 1/24 of the oscillator frequency. To ensure that a given level is sampled at least once before it changes, the level should be held for at least one full machine cycle.

Interrupts
The AT89S52 has a total of six interrupt vectors: two external interrupts (INT0 and INT1), three timer interrupts (Timers 0, 1, and 2), and the serial port interrupt. These interrupts are all shown in Figure 10. Each of these interrupt sources can be individually enabled or disabled by setting or clearing a bit in Special Function Register IE. IE also contains a global disable bit, EA, which disables all interrupts at once. Note that Table 5 shows that bit position IE.6 is unimplemented. In the AT89S52, bit position IE.5 is also unimplemented. User software should not write 1s to these bit positions, since they may be used in future AT89 products.

The Timer 2 interrupt is generated by the logical OR of bits TF2 and EXF2 in register T2CON. Neither of these flags is cleared by hardware when the service routine is vectored to. In fact, the service routine may have to determine whether it was TF2 or EXF2 that generated the interrupt, and that bit will have to be cleared in software. The Timer 0 and Timer 1 flags, TF0 and TF1, are set at S5P2 of the cycle in which the timers overflow. The values are then polled by the circuitry in the next cycle. However, the Timer 2 flag, TF2, is set at S2P2 and is polled in the same cycle in which the timer overflows.

II. Translation

The Single-Chip Microcomputer
A single-chip microcomputer is a microcomputer that integrates the central processing unit, memory, timer/counters, and input/output interfaces on a single integrated-circuit chip.
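To make the direct- versus indirect-addressing distinction in the original text concrete, here is a minimal C sketch. It assumes the SDCC 8051 toolchain (on Keil C51 the storage-class keywords differ slightly), and the function name is invented for illustration:

    #include <8052.h>   /* SDCC header; declares P2 as the SFR at address 0xA0 */

    /* Address 0xA0 names two different physical locations on the AT89S52:
       the SFR P2 (reached only by direct addressing) and a byte of the
       upper 128 bytes of RAM (reached only by indirect addressing). */
    void addressing_demo(void)                 /* hypothetical demo function */
    {
        __idata unsigned char *p = (__idata unsigned char *)0xA0;

        P2 = 0x55;   /* direct addressing: compiles to a MOV direct, hits SFR P2 */
        *p = 0xAA;   /* indirect addressing via R0/R1: hits upper-128 RAM at 0xA0 */
    }

Inspecting the generated assembly shows a MOV to a direct address for the first write and a MOV @Rn for the second, matching the two MOV examples above.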
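Likewise, the remark in the original text that hardware does not clear TF2 or EXF2 is easy to miss in practice. The sketch below, again assuming SDCC (Timer 2 is interrupt vector number 5 on the 8052 family), shows the software clearing the datasheet requires; the counter variable and reload values are illustrative only:

    #include <8052.h>   /* declares T2CON bits TF2/EXF2/TR2, plus ET2, EA, RCAP2x */

    volatile unsigned int t2_overflows;   /* illustrative overflow counter */

    void timer2_isr(void) __interrupt(5)  /* Timer 2 interrupt vector */
    {
        if (TF2) {            /* an overflow caused the interrupt */
            TF2 = 0;          /* must be cleared by software, not hardware */
            t2_overflows++;
        }
        if (EXF2) {           /* a 1-to-0 transition on T2EX caused it */
            EXF2 = 0;         /* likewise cleared by software */
        }
    }

    void main(void)
    {
        RCAP2H = 0x00;        /* 16-bit auto-reload value (illustrative) */
        RCAP2L = 0x00;
        ET2 = 1;              /* enable the Timer 2 interrupt */
        EA  = 1;              /* global interrupt enable */
        TR2 = 1;              /* start Timer 2 */
        while (1) { }         /* foreground loop */
    }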

Foreign-Language Translation: The ChiNext (Growth Enterprise) Market

I. Original Text: China's Second Board

I. Significance of and events leading to the establishment of a Second Board

On 31 March 2009 the China Securities Regulatory Commission (CSRC) issued the Interim Measures on the Administration of Initial Public Offerings and Listings of Shares on the ChiNext [i.e., the Second Board, also called the Growth Enterprise Market] ("Interim Measures"), which came into force on 1 May 2009. This marked the creation by the Shenzhen Stock Exchange of the long-awaited market for venture businesses. As the original plan to establish such a market in 2001 had come to nothing when the dotcom bubble burst, the market's final opening came after a delay of nearly 10 years.

Ever since the 1980s, when the Chinese government began to foster the development of science and technology, venture capital has been seen in China as a means of supporting the development of high-tech companies financially. The aim, as can be seen from the name of the 1996 Law of the People's Republic of China on Promoting the Conversion of Scientific and Technological Findings into Productivity, was to support the commercialization of scientific and technological developments. Venture capital funds developed gradually in the late 1990s, and between then and 2000 it looked increasingly likely that a Second Board would be established. When the CSRC published a draft plan for this in September 2000, the stage was set. However, when the dotcom bubble (and especially the NASDAQ bubble) burst, this plan was shelved. Also, Chinese investors and venture capitalists were probably not quite ready for such a move.

As a result, Chinese venture businesses sought to list on overseas markets (a so-called "red chip listing") from the late 1990s. However, as these listings increased, so did the criticism that valuable Chinese assets were being siphoned overseas. On the policy front, in 2004 the State Council published Some Opinions on Reform, Opening and Steady Growth of Capital Markets ("the Nine Opinions"), in which the concept of a "multi-tier capital market" was presented for the first time. A first step in this direction was made in the same year, when an SME Board was established as part of the Main Board. Although there appear to have been plans to eventually relax the SME Board's listing requirements, which were the same as those for companies listed on the Main Board, and to make it a market especially for venture businesses, it was decided to establish a separate market (the Second Board) for this purpose and to learn from the experience of the SME Board.

As well as being part of the process of creating a multi-tier capital market, the establishment of the Second Board was one of the measures included in the policy document Several Opinions of the General Office of the State Council on Providing Financing Support for Economic Development ("the 30 Financial Measures"), published in December 2008 in response to the global financial crisis and intended as a way of making it easier for SMEs to raise capital.

It goes without saying that the creation of the Second Board was also an important development in that it gives private equity funds the opportunity to exit their investments. The absence of such an exit had been a disincentive to such investment, with most funds looking for a red chip listing as a way of exiting their investments. However, with surplus savings at home, the Chinese authorities began to encourage companies to raise capital on the domestic market rather than overseas.
This led, in September 2006, to a rule making it more difficult for Chinese venture businesses to list their shares on overseas markets. The corollary of this was that it increased the need for a means whereby Chinese private equity funds could exit their investments at an early opportunity and on their own market. The creation of the Second Board was therefore a belated response to this need.

II. Rules and regulations governing the establishment of the Second Board

We now take a closer look at some of the rules and regulations governing the establishment of the Second Board. First, the Interim Measures on the Administration of Initial Public Offerings and Listings of Shares on the ChiNext, issued by the CSRC on 31 March 2009 with effect from 1 May 2009. The Interim Measures consist of six chapters and 58 articles, stipulating issue terms and procedures, disclosure requirements, regulatory procedures, and legal responsibilities.

First, the General Provisions chapter. The first thing this says (Article 1) is: "These Measures are formulated for the purposes of promoting the development of innovative enterprises and other growing start-ups." This shows that one of the main listing criteria is a company's technological innovativeness and growth potential. The Chinese authorities have actually made it clear that, although the Second Board and the SME Board are both intended for SMEs of similar sizes, the Second Board is specifically intended for SMEs at the initial (rather than the growth or mature) stage of their development with a high degree of technological innovativeness and an innovative business model, while the SME Board is specifically intended for companies with relatively stable earnings at the mature stage of their development. They have also made it clear that the Second Board is not simply a "small SME Board." This suggests to us that the authorities want to see technologically innovative companies listing on the Second Board and SMEs in traditional sectors listing on the SME Board.

Next, Article 7 says: "A market access system that is commensurate with the risk tolerance of investors shall be established for investors on the ChiNext and investment risk shall be fully disclosed to investors." One noteworthy feature is the adoption of the concept of the "qualified investor" in an attempt to improve risk control.

Furthermore, Article 8 says: "China Securities Regulatory Commission (hereinafter, CSRC) shall, in accordance with law, examine and approve the issuer's IPO application and supervise the issuer's IPO activities. The stock exchange shall formulate rules in accordance with law, provide an open, fair and equitable market environment and ensure the normal operation of the ChiNext." Until the Second Board was established, it was thought by some that the stock exchange had the right to approve new issues. Under the Interim Measures, however, it is the CSRC that examines and approves applications.

First, offering conditions. Article 10 stipulates four numerical conditions for companies applying for IPOs. Second, offering procedures. The Interim Measures seek to make sponsoring securities companies more responsible by requiring them to conduct due diligence investigations, make prudential judgments on the issuer's growth, and render special opinions thereon. Third, information disclosure.
Article 39 of the Interim Measures stipulates that the issuer shall make a statement in its prospectus pointing out the risks of investing in Second Board companies: namely, inconsistent performance, high operational risk, and the risk of delisting. Fourth, supervision. Articles 51 and 52 stipulate that the stock exchange (namely, the Shenzhen Stock Exchange) shall establish systems for listing, trading and delisting Second Board stocks, urge sponsors to fulfill their ongoing supervisory obligations, and establish a market risk warning system and an investor education system.

1. Amendments to the Interim Measures on Securities Issuance and Listing Sponsor System and the Provisional Measures of the Public Offering Review Committee of the China Securities Regulatory Commission.

2. Rules Governing the Listing of Shares on the ChiNext of Shenzhen Stock Exchange. The Shenzhen Stock Exchange published the Rules Governing the Listing of Shares on the ChiNext of Shenzhen Stock Exchange on 6 June (with effect from 1 July).

3. Checking investor eligibility. As the companies listed on the Second Board are more risky than those listed on the Main Board and are subject to more rigorous delisting rules (see above), investor protection requires that checks be made on whether Second Board shares are suitable for all those wishing to invest in them.

4. Rules governing (1) application documents for listings on the ChiNext and (2) prospectuses of ChiNext companies. On 20 July the CSRC published rules governing Application Documents for Initial Public Offerings and Listings of Shares on the ChiNext and Prospectuses of ChiNext Companies, and announced that it would begin processing listing applications on 26 July.

III. Future developments

As its purpose is to "promote the development of innovative enterprises and other growing start-ups", the Second Board enables such companies to raise capital by issuing shares. That is why its listing requirements are less demanding than those of the Main Board, but also why it has various provisions to mitigate risk. For one thing, the Second Board has its own public offering review committee to check how technologically specialized applicant companies are, reflecting the importance attached to this. For another, issuers and their controlling shareholders, de facto controllers, and sponsoring securities companies are subject to more demanding accountability requirements. The key factor here is, not surprisingly, disclosure. Also, the qualified investor system is designed to mitigate the risks to retail investors.

Once the rules and regulations governing the Second Board were published, the CSRC began to process listing applications from 26 July 2009. It has been reported that 108 companies initially applied. As of mid-October, 28 of these had been approved, and on 30 October they were listed on the Second Board. As of 15 December, there were 46 companies whose listing applications had been approved by the CSRC (including the above-mentioned 28 companies). They come from a wide range of sectors, especially information technology, services, and biopharmacy. Thus far, few companies in which foreign private equity funds have a stake have applied. This is because these funds have tended to go for red-chip listings.

Another point is movement between the various tiers of China's multi-tier capital market. As of early September, four companies that are traded on the new Third Board had successfully applied to list on the Second Board.
As 22 new Third Board companies meet the listing requirements of the Second Board on the basis of their interim reports for the first half of fiscal 2009, a growing number of companies may transfer their listing from the new Third Board to the Second Board. We think this is likely to make the new Third Board a more attractive market for private equity investors.

The applicants include companies that were in the process of applying for a listing on the SME Board. The CSRC has also made it clear that it does not see the Second Board simply as a "small SME Board" and attaches great importance to the companies' innovativeness and growth potential. Ultimately, whether or not such risks can be mitigated will depend on whether the quality of the companies that list on the Second Board improves and disclosure requirements are strictly complied with. For example, according to the rules governing Prospectuses of ChiNext Companies, companies are required to disclose the above-mentioned supplementary agreements as a control-right risk. The point is whether such requirements will be complied with.

Since there is a potentially large number of high-tech companies in China in the long term, whether or not the Second Board becomes one of the world's few successful venture capital markets will depend on whether all these rules and regulations succeed in shaping its development and the way in which it is run. The authorities clearly want to avoid a situation where the Second Board attracts a large number of second-rate companies and becomes a vehicle for market abuse, as it would then run the risk of becoming an illiquid market shunned by investors who have lost trust in it. Indeed, such has been the number of companies applying to list on the Second Board that some observers have expressed concern about their quality.

There has also been some concern about investor protection. For example, supplementary agreements between private equity funds and issuers pose a risk to retail investors in that they may suddenly be faced with a change in the controlling shareholder. This is because such agreements can result in a transfer of shares from the founder or controlling shareholder to a private equity fund if the company fails to meet certain agreed targets, or in a shareholding structure that is different from the apparent one, for example. The problem of low liquidity, which has long faced the new Third Board market, where small-cap high-tech stocks are also traded, also needs to be addressed.

Meanwhile, the Second Board's Public Offering Review Committee was officially established on 14 August. It has 35 members. A breakdown reveals that the number of representatives of the CSRC and the Shenzhen Stock Exchange has been limited to three and two, respectively, to ensure that the committee has the necessary number of technology specialists. Of the remainder, 14 are accountants, six lawyers, three from the Ministry of Science and Technology, three from the China Academy of Sciences, two from investment trust companies, one from an asset evaluation agency, and one from the National Development and Reform Commission (NDRC). It has been reported that the members include specialists in the six industry fields the CSRC considers particularly important for Second Board companies (namely, new energy, new materials, biotechnology and pharmaceuticals, energy conservation and environmental protection, services, and IT).

Source: Takeshi Jingu. 2009. "China's Second Board".
Nomura Journal of Capital Markets, Winter 2009, Vol. 1, No. 4, pp. 1-15.

II. Translation: China's Second Board

I. The establishment of the Second Board and its significance

On 31 March 2009 the China Securities Regulatory Commission (hereinafter "CSRC") issued the Interim Measures on the Administration of Initial Public Offerings and Listings of Shares on the ChiNext [i.e., the Second Board, also called the Growth Enterprise Market] (the "Interim Measures"), with effect from 1 May 2009. This marked the imminent birth of the Shenzhen Stock Exchange's long-awaited market for venture businesses.

Foreign-Language References (with Chinese Translation)

Foreign-Language Material (Original) — Tu Minzhi, Accounting, 8051208076

Title: Future of SME Finance

Background – the environment for SME finance has changed

Future economic recovery will depend on the ability of crafts, trades and SMEs to exploit their potential for growth and employment creation. SMEs make a major contribution to growth and employment in the EU and are at the heart of the Lisbon Strategy, whose main objective is to turn Europe into the most competitive and dynamic knowledge-based economy in the world. However, the ability of SMEs to grow depends highly on their potential to invest in restructuring, innovation and qualification. All of these investments need capital and therefore access to finance. Against this background, the consistently repeated complaint of SMEs about their problems regarding access to finance is a highly relevant constraint that endangers the economic recovery of Europe.

Changes in the finance sector influence the behaviour of credit institutes towards crafts, trades and SMEs. Recent and ongoing developments in the banking sector add to the concerns of SMEs and will further endanger their access to finance. The main changes in the banking sector which influence SME finance are:

• globalization and internationalization have increased the competition and the profit orientation in the sector;
• worsening of the economic situation in some institutes (burst of the ITC bubble, insolvencies) strengthens the focus on profitability further;
• mergers and restructuring created larger structures, and many local branches, which had direct and personalized contacts with small enterprises, were closed;
• the upcoming implementation of new capital adequacy rules (Basel II) will also change the SME business of the credit sector and will increase its administrative costs;
• stricter interpretation of State-Aid Rules by the European Commission eliminates the support of banks by public guarantees; many of the affected banks are very active in SME finance.

All these changes result in a higher sensitivity to risks and profits in the finance sector. The changes in the finance sector affect the accessibility of SMEs to finance. Higher risk awareness in the credit sector, a stronger focus on profitability and the ongoing restructuring in the finance sector change the framework for SME finance and influence the accessibility of SMEs to finance.
The most important changes are:

• in order to make the higher risk awareness operational, the credit sector introduces new rating systems and instruments for credit scoring;
• risk assessment of SMEs by banks will force the enterprises to present more and better-quality information on their businesses;
• banks will try to pass their additional costs for implementing and running the new capital regulations (Basel II) through to their business clients;
• due to the increase of competition on interest rates, the bank sector demands more and higher fees for its services (administration of accounts, payment systems, etc.), which are not only additional costs for SMEs but also limit their liquidity;
• small enterprises will lose their personal relationship with decision-makers in local branches – the credit application process will become more formal and anonymous and will probably last longer;
• the credit sector will lose more and more its "public function" of providing access to finance for a wide range of economic actors, which it has in a number of countries in order to support and facilitate economic growth; the profitability of lending becomes the main focus of private credit institutions.

All of these developments will make access to finance for SMEs even more difficult and/or will increase the cost of external finance. Business start-ups and SMEs which want to enter new markets may especially suffer from shortages regarding finance. A European Code of Conduct between Banks and SMEs would have allowed at least more transparency in the relations between banks and SMEs, and UEAPME regrets that the bank sector was not able to agree on such a commitment.

Towards an encompassing policy approach to improve the access of crafts, trades and SMEs to finance

All analyses show that credits and loans will stay the main source of finance for the SME sector in Europe. Access to finance was always a main concern for SMEs, but the recent developments in the finance sector worsen the situation even more. Shortage of finance is already a relevant factor which hinders economic recovery in Europe. Many SMEs are not able to finance their needs for investment. Therefore, UEAPME expects the new European Commission and the new European Parliament to strengthen their efforts to improve the framework conditions for SME finance. Europe's crafts, trades and SMEs ask for an encompassing policy approach, which includes not only the conditions for SMEs' access to lending, but will also strengthen their capacity for internal finance and their access to external risk capital.

From UEAPME's point of view, such an encompassing approach should be based on three guiding principles:

• risk-sharing between private investors, financial institutes, SMEs and the public sector;
• increase of transparency of SMEs towards their external investors and lenders;
• improving the regulatory environment for SME finance.

Based on these principles and against the background of the changing environment for SME finance, UEAPME proposes policy measures in the following areas:

1. New Capital Requirement Directive: SME-friendly implementation of Basel II

Due to intensive lobbying activities, UEAPME, together with other business associations in Europe, has achieved some improvements in favour of SMEs regarding the new Basel Agreement on regulatory capital (Basel II).
The final agreement from the Basel Committee contains a much more realistic approach toward the real risk situation of SME lending for the finance market and will allow the necessary room for adaptations which respect the different regional traditions and institutional structures. However, the new regulatory system will influence the relations between banks and SMEs, and it will depend very much on the way it is implemented into European law whether Basel II becomes burdensome for SMEs and whether it will reduce their access to finance.

The new Capital Accord from the Basel Committee gives the financial market authorities, and herewith the European institutions, a lot of flexibility. In about 70 areas they have room to adapt the Accord to their specific needs when implementing it into EU law. Some of these will have important effects on the costs and the accessibility of finance for SMEs. UEAPME therefore expects from the new European Commission and the new European Parliament:

• The implementation of the new Capital Requirement Directive will be costly for the finance sector (up to 30 billion euro till 2006) and its clients will have to pay for it. Therefore, the implementation – especially for smaller banks, which are often very active in SME finance – has to be carried out with as little administrative burden as possible (reporting obligations, statistics, etc.).
• The European regulators must recognize traditional instruments for collateral (guarantees, etc.) as far as possible.
• The European Commission and later the Member States should take over the recommendations from the European Parliament with regard to granularity, access to the retail portfolio, maturity, partial use, adaptation of thresholds, etc., which will ease the burden on SME finance.

2. SMEs need transparent rating procedures

Due to the higher risk awareness of the finance sector and the needs of Basel II, many SMEs will be confronted for the first time with internal rating procedures or credit scoring systems by their banks. The banks will require more and better-quality information from their clients and will assess them in a new way. Both upcoming developments are already causing increasing uncertainty amongst SMEs. In order to reduce this uncertainty and to allow SMEs to understand the principles of the new risk assessment, UEAPME demands transparent rating procedures – rating procedures may not become a "black box" for SMEs:

• The bank should communicate the relevant criteria affecting the rating of SMEs.
• The bank should inform SMEs about its assessment in order to allow SMEs to improve.

The negotiations on a European Code of Conduct between Banks and SMEs, which would have included a self-commitment to transparent rating procedures by banks, failed. Therefore, UEAPME expects from the new European Commission and the new European Parliament support for:

• binding rules in the framework of the new Capital Adequacy Directive, which ensure the transparency of rating procedures and credit scoring systems for SMEs;
• elaboration of national Codes of Conduct in order to improve the relations between banks and SMEs and to support the adaptation of SMEs to the new financial environment.

3. SMEs need an extension of credit guarantee systems with a special focus on micro-lending

Business start-ups, the transfer of businesses and innovative fast-growth SMEs have also often depended in the past on public support to get access to finance.
Increasing risk awareness by banks and the stricter interpretation of State Aid Rules will further increase the need for public support. Already now, credit guarantee schemes in many countries are at the limit of their capacity, and too many investment projects cannot be realized by SMEs. Experience shows that public money spent on supporting credit guarantee systems is a very efficient instrument and has a much higher multiplying effect than other instruments: one euro from the European Investment Fund can stimulate 30 euro of investment in SMEs (for venture capital funds the ratio is only 1:2). Therefore, UEAPME expects the new European Commission and the new European Parliament to support:

• the extension of funds for national credit guarantee schemes in the framework of the new Multi-Annual Programme for Enterprises;
• the development of new instruments for securitization of SME portfolios;
• the recognition of existing and well-functioning credit guarantee schemes as collateral;
• more flexibility within the European instruments, because of national differences in the situation of SME finance;
• the development of credit guarantee schemes in the new Member States;
• the development of an SBIC-like scheme in the Member States to close the equity gap (0.2-2.5 million euro, according to the expert meeting on PACE on 27 April in Luxembourg);
• the development of a financial support scheme to encourage the internationalization of SMEs (currently there is no scheme available at EU level: termination of JOP, fading out of JEV).

4. SMEs need company and income taxation systems which strengthen their capacity for self-financing

Many EU Member States have company and income taxation systems with negative incentives for building up capital within the company by re-investing profits. This is especially true for companies which have to pay income taxes. Already in the past, tax regimes were one of the reasons for the higher dependence of Europe's SMEs on bank lending. In future, the result of rating will also depend on the amount of capital in the company; the high dependence on lending will influence the access to lending. This is a vicious circle which has to be broken. Even though company and income taxation fall under the competence of Member States, UEAPME asks the new European Commission and the new European Parliament to publicly support tax reforms which will strengthen the capacity of crafts, trades and SMEs for self-financing. Thereby, a special focus on non-corporate companies is needed.

5. Risk capital – equity financing

External equity financing does not have a real tradition in the SME sector. On the one hand, small enterprises and family businesses in general have traditionally not been very open towards external equity financing and are not used to informing transparently about their business. On the other hand, many investors of venture capital and similar forms of equity finance are very reluctant to invest their funds in smaller companies, which is more costly than investing bigger amounts in larger companies. Furthermore, it is much more difficult to exit such investments in smaller companies. Even though equity financing will never become the main source of financing for SMEs, it is an important instrument for highly innovative start-ups and fast-growing companies, and it therefore has to be further developed.
UEAPME sees three pillars for such an approach where policy support is needed:

Availability of venture capital
• The Member States should review their taxation systems in order to create incentives to invest private money in all forms of venture capital.
• Guarantee instruments for equity financing should be further developed.

Improve the conditions for investing venture capital into SMEs
• The development of secondary markets for venture capital investments in SMEs should be supported.
• Accounting standards for SMEs should be revised in order to ease transparent exchange of information between investor and owner-manager.

Owner-managers must become more aware of the need for transparency towards investors
• SME owners will have to realize that in future, access to external finance (venture capital or lending) will depend much more on a transparent and open exchange of information about the situation and the perspectives of their companies.
• In order to fulfil the new needs for transparency, SMEs will have to use new information instruments (business plans, financial reporting, etc.) and new management instruments (risk management, financial management, etc.).

Foreign-Language Material (Translation) — Tu Minzhi, Accounting, 8051208076

Title: The Future of SME Finance. Background: the environment for SME finance has changed. Future economic recovery will depend on the ability of crafts, trades and SMEs to exploit their potential for growth and employment creation.

Foreign-Language Translation: Combustible Gas Alarm

Fuzhou University Zhicheng College — Undergraduate Graduation Project (Thesis) Foreign-Language Translation
Topic: Design of an MCU-Based Combustible Gas Detection Alarm
Name: Cai Jiayang   Student ID: 211014128   Department: Information Engineering   Major: Electronic Information Engineering   Year: 2010   Advisor: (signature)   Date:

Appendix: Foreign-Language Literature and Translation

Original Text 1: Combustible Gas Alarm

As a powerful weapon against gas leakage, the combustible gas alarm does not seem to have attracted the attention it deserves. This safety device can be placed on a par with the household fire extinguisher; indeed, it deserves a place in the family even more than the fire extinguisher does. Yet most families do not regard it as essential, and many do not even know that such a "guardian" exists, one that can fundamentally prevent gas poisoning and gas explosions. Taking Shanghai as an example, last year a total of 86 deaths were caused by poisoning from gas water heaters, by pots boiling over and extinguishing the flame, and by leaks from aged or detached rubber hoses, accounting for 84% of all gas accidents. However, another survey released by an authoritative department shows that of the roughly three million gas users in Shanghai, fewer than 10% have installed a domestic gas-leakage alarm.

In daily life, both gas poisoning and gas explosions begin with a gas leak. No household can do without gas, and no matter how many precautions are taken, a lapse is inevitable, to say nothing of families that take no fire-safety measures at all, which are in even greater danger. It is therefore necessary to keep a combustible gas alarm at home to guard the gas appliances at all times, warning the owner when this invisible killer slips out quietly and helping to eliminate household dangers in the bud: a good housekeeper for home safety that lets family members use gas with peace of mind. For example, in many household gas explosions the occupants, not knowing that the room was full of leaked gas, blindly operated an electrical switch, and tragedy struck in an instant. With an alarm installed, tragedies like this can largely be avoided. That the combustible gas alarm, once brought into the family, becomes a good helper for home safety is an indisputable fact.

Product description:
Detected gases: natural gas, liquefied petroleum gas, city gas (H2)
Size: 115 mm x 71 mm x 43.3 mm
(1) Automatic sensor drift compensation prevents both missed alarms and false alarms.
(2) A fault prompt tells the user to replace or repair the unit, preventing failure to report.
(3) An MCU controls the entire process; working temperature -40 to 80 degrees C.
Operating voltage: 220 V AC or 110 V AC, 12-20 V DC
Additional features: linkage with an exhaust fan, manipulator, or solenoid valve
Networking: wired (NO, NC); wireless: 315 MHz/433 MHz (2262 or 1527)

Translation: Combustible Gas Alarm — As a powerful weapon for preventing gas leaks, the combustible gas alarm does not yet seem to have attracted the attention it deserves.
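The feature list above mentions automatic sensor drift compensation and MCU control of the whole process, but gives no implementation detail. The following C sketch illustrates one common way such compensation can work: the firmware slowly tracks the sensor baseline while the reading is quiet and alarms on a rapid rise above that baseline. All names, thresholds and hardware functions are invented for illustration; this is not the product's actual firmware.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical hardware hooks; their implementation is platform-specific. */
    extern uint16_t adc_read_gas_sensor(void);
    extern void     alarm_output(bool on);

    #define ALARM_MARGIN 200u   /* counts above baseline that trigger the alarm (illustrative) */
    #define DRIFT_STEP   1u     /* baseline creep per sample while the reading is quiet */

    static uint16_t baseline;   /* slowly tracked "clean air" reading */

    void gas_alarm_poll(void)   /* call periodically, e.g. every 100 ms */
    {
        uint16_t sample = adc_read_gas_sensor();

        if (baseline == 0)
            baseline = sample;              /* first-sample initialization */

        if (sample > baseline + ALARM_MARGIN) {
            alarm_output(true);             /* genuine rise: do NOT absorb it into the baseline */
        } else {
            alarm_output(false);
            /* slow baseline tracking compensates long-term sensor drift */
            if (sample > baseline)      baseline += DRIFT_STEP;
            else if (sample < baseline) baseline -= DRIFT_STEP;
        }
    }

The key design point is asymmetry: a slow, bounded baseline update absorbs gradual sensor aging and temperature drift (suppressing false alarms), while a genuine leak rises much faster than the baseline can follow and therefore still trips the alarm (preventing missed alarms).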

Online Library Management System: Foreign-Language Literature (Original and Translation)

Graduation Design Specification: English Literature and Chinese Translation
Name: ___   College: Software College   Major: Software Engineering   Advisor: ___   June 2014

An Introduction to Java

The first release of Java in 1996 generated an incredible amount of excitement, not just in the computer press, but in mainstream media such as The New York Times, The Washington Post, and Business Week. Java has the distinction of being the first and only programming language that had a ten-minute story on National Public Radio. A $100,000,000 venture capital fund was set up solely for products produced by use of a specific computer language. It is rather amusing to revisit those heady times, and we give you a brief history of Java in this chapter.

In the first edition of this book, we had this to write about Java: "As a computer language, Java's hype is overdone: Java is certainly a good programming language. There is no doubt that it is one of the better languages available to serious programmers. We think it could potentially have been a great programming language, but it is probably too late for that. Once a language is out in the field, the ugly reality of compatibility with existing code sets in."

Our editor got a lot of flack for this paragraph from someone very high up at Sun Microsystems who shall remain unnamed. But, in hindsight, our prognosis seems accurate. Java has a lot of nice language features—we examine them in detail later in this chapter. It has its share of warts, and newer additions to the language are not as elegant as the original ones because of the ugly reality of compatibility.

But, as we already said in the first edition, Java was never just a language. There are lots of programming languages out there, and few of them make much of a splash. Java is a whole platform, with a huge library, containing lots of reusable code, and an execution environment that provides services such as security, portability across operating systems, and automatic garbage collection.

As a programmer, you will want a language with a pleasant syntax and comprehensible semantics (i.e., not C++). Java fits the bill, as do dozens of other fine languages. Some languages give you portability, garbage collection, and the like, but they don't have much of a library, forcing you to roll your own if you want fancy graphics or networking or database access. Well, Java has everything—a good language, a high-quality execution environment, and a vast library. That combination is what makes Java an irresistible proposition to so many programmers.

Simple

We wanted to build a system that could be programmed easily without a lot of esoteric training and which leveraged today's standard practice. So even though we found that C++ was unsuitable, we designed Java as closely to C++ as possible in order to make the system more comprehensible. Java omits many rarely used, poorly understood, confusing features of C++ that, in our experience, bring more grief than benefit.

The syntax for Java is, indeed, a cleaned-up version of the syntax for C++. There is no need for header files, pointer arithmetic (or even a pointer syntax), structures, unions, operator overloading, virtual base classes, and so on. (See the C++ notes interspersed throughout the text for more on the differences between Java and C++.) The designers did not, however, attempt to fix all of the clumsy features of C++. For example, the syntax of the switch statement is unchanged in Java. If you know C++, you will find the transition to the Java syntax easy. If you are used to a visual programming environment (such as Visual Basic), you will not find Java simple.
There is much strange syntax (though it does not take long to get the hang of it). More important, you must do a lot more programming in Java. The beauty of Visual Basic is that its visual design environment almost automatically provides a lot of the infrastructure for an application. The equivalent functionality must be programmed manually, usually with a fair bit of code, in Java. There are, however, third-party development environments that provide "drag-and-drop"-style program development.

Another aspect of being simple is being small. One of the goals of Java is to enable the construction of software that can run stand-alone in small machines. The size of the basic interpreter and class support is about 40K bytes; adding the basic standard libraries and thread support (essentially a self-contained microkernel) adds an additional 175K. This was a great achievement at the time. Of course, the library has since grown to huge proportions. There is now a separate Java Micro Edition with a smaller library, suitable for embedded devices.

Object Oriented

Simply stated, object-oriented design is a technique for programming that focuses on the data (= objects) and on the interfaces to that object. To make an analogy with carpentry, an "object-oriented" carpenter would be mostly concerned with the chair he was building, and secondarily with the tools used to make it; a "non-object-oriented" carpenter would think primarily of his tools. The object-oriented facilities of Java are essentially those of C++.

Object orientation has proven its worth in the last 30 years, and it is inconceivable that a modern programming language would not use it. Indeed, the object-oriented features of Java are comparable to those of C++. The major difference between Java and C++ lies in multiple inheritance, which Java has replaced with the simpler concept of interfaces, and in the Java metaclass model (which we discuss in Chapter 5).

NOTE: If you have no experience with object-oriented programming languages, you will want to carefully read Chapters 4 through 6. These chapters explain what object-oriented programming is and why it is more useful for programming sophisticated projects than are traditional, procedure-oriented languages like C or Basic.

Network-Savvy

Java has an extensive library of routines for coping with TCP/IP protocols like HTTP and FTP. Java applications can open and access objects across the Net via URLs with the same ease as when accessing a local file system. We have found the networking capabilities of Java to be both strong and easy to use. Anyone who has tried to do Internet programming using another language will revel in how simple Java makes onerous tasks like opening a socket connection. (We cover networking in Volume II of this book.) The remote method invocation mechanism enables communication between distributed objects (also covered in Volume II).

Robust

Java is intended for writing programs that must be reliable in a variety of ways. Java puts a lot of emphasis on early checking for possible problems, later dynamic (runtime) checking, and eliminating situations that are error-prone. The single biggest difference between Java and C/C++ is that Java has a pointer model that eliminates the possibility of overwriting memory and corrupting data. This feature is also very useful. The Java compiler detects many problems that, in other languages, would show up only at runtime.
As for the second point, anyone who has spent hours chasing memory corruption caused by a pointer bug will be very happy with this feature of Java. If you are coming from a language like Visual Basic that doesn't explicitly use pointers, you are probably wondering why this is so important. C programmers are not so lucky. They need pointers to access strings, arrays, objects, and even files. In Visual Basic, you do not use pointers for any of these entities, nor do you need to worry about memory allocation for them. On the other hand, many data structures are difficult to implement in a pointerless language. Java gives you the best of both worlds. You do not need pointers for everyday constructs like strings and arrays. You have the power of pointers if you need it, for example, for linked lists. And you always have complete safety, because you can never access a bad pointer, make memory allocation errors, or have to protect against memory leaking away.

Architecture Neutral

The compiler generates an architecture-neutral object file format—the compiled code is executable on many processors, given the presence of the Java runtime system. The Java compiler does this by generating bytecode instructions which have nothing to do with a particular computer architecture. Rather, they are designed to be both easy to interpret on any machine and easily translated into native machine code on the fly. This is not a new idea. More than 30 years ago, both Niklaus Wirth's original implementation of Pascal and the UCSD Pascal system used the same technique.

Of course, interpreting bytecodes is necessarily slower than running machine instructions at full speed, so it isn't clear that this is even a good idea. However, virtual machines have the option of translating the most frequently executed bytecode sequences into machine code, a process called just-in-time compilation. This strategy has proven so effective that even Microsoft's .NET platform relies on a virtual machine. The virtual machine has other advantages. It increases security because the virtual machine can check the behavior of instruction sequences. Some programs even produce bytecodes on the fly, dynamically enhancing the capabilities of a running program.

Portable

Unlike C and C++, there are no "implementation-dependent" aspects of the specification. The sizes of the primitive data types are specified, as is the behavior of arithmetic on them. For example, an int in Java is always a 32-bit integer. In C/C++, int can mean a 16-bit integer, a 32-bit integer, or any other size that the compiler vendor likes. The only restriction is that the int type must have at least as many bytes as a short int and cannot have more bytes than a long int. Having a fixed size for number types eliminates a major porting headache. Binary data is stored and transmitted in a fixed format, eliminating confusion about byte ordering. Strings are saved in a standard Unicode format. The libraries that are a part of the system define portable interfaces. For example, there is an abstract Window class and implementations of it for UNIX, Windows, and the Macintosh.

As anyone who has ever tried knows, it is an effort of heroic proportions to write a program that looks good on Windows, the Macintosh, and ten flavors of UNIX. Java 1.0 made the heroic effort, delivering a simple toolkit that mapped common user interface elements to a number of platforms.
Unfortunately, the result was a library that, with a lot of work, could give barely acceptable results on different systems. (And there were often different bugs on the different platform graphics implementations.) But it was a start. There are many applications in which portability is more important than user interface slickness, and these applications did benefit from early versions of Java. By now, the user interface toolkit has been completely rewritten so that it no longer relies on the host user interface. The result is far more consistent and, we think, more attractive than in earlier versions of Java.

Interpreted

The Java interpreter can execute Java bytecodes directly on any machine to which the interpreter has been ported. Since linking is a more incremental and lightweight process, the development process can be much more rapid and exploratory. Incremental linking has advantages, but its benefit for the development process is clearly overstated. Early Java development tools were, in fact, quite slow. Today, the bytecodes are translated into machine code by the just-in-time compiler.

Multithreaded

The benefits of multithreading are better interactive responsiveness and real-time behavior. If you have ever tried to do multithreading in another language, you will be pleasantly surprised at how easy it is in Java. Threads in Java also can take advantage of multiprocessor systems if the base operating system does so. On the downside, thread implementations on the major platforms differ widely, and Java makes no effort to be platform independent in this regard. Only the code for calling multithreading remains the same across machines; Java offloads the implementation of multithreading to the underlying operating system or a thread library. Nonetheless, the ease of multithreading is one of the main reasons why Java is such an appealing language for server-side development.

Overview of Java Programming (translation): The first release of Java in 1996 aroused enormous interest.

Selected Foreign-Language Literature

English Original 1: Professional C#, Third Edition. Simon Robinson, Christian Nagel, Jay Glynn, Morgan Skinner, Karli Watson, Bill Evjen. Wiley Publishing, Inc., 2006.

Where C# Fits In

In one sense, C# can be seen as being the same thing to programming languages as .NET is to the Windows environment. Just as Microsoft has been adding more and more features to Windows and the Windows API over the past decade, Visual Basic and C++ have undergone expansion. Although Visual Basic and C++ have ended up as hugely powerful languages as a result of this, both languages also suffer from problems due to the legacies of how they have evolved.

In the case of Visual Basic 6 and earlier, the main strength of the language was the fact that it was simple to understand and made many programming tasks easy, largely hiding the details of the Windows API and the COM component infrastructure from the developer. The downside to this was that Visual Basic was never truly object-oriented, so that large applications quickly become disorganized and hard to maintain. As well as this, because Visual Basic's syntax was inherited from early versions of BASIC (which, in turn, was designed to be intuitively simple for beginning programmers to understand, rather than to write large commercial applications), it didn't really lend itself to well-structured or object-oriented programs.

C++, on the other hand, has its roots in the ANSI C++ language definition. It isn't completely ANSI compliant for the simple reason that Microsoft first wrote its C++ compiler before the ANSI definition had become official, but it comes close. Unfortunately, this has led to two problems. First, ANSI C++ has its roots in a decade-old state of technology, and this shows up in a lack of support for modern concepts (such as Unicode strings and generating XML documentation), and in some archaic syntax structures designed for the compilers of yesteryear (such as the separation of declaration from definition of member functions). Second, Microsoft has been simultaneously trying to evolve C++ into a language that is designed for high-performance tasks on Windows, and in order to achieve that they've been forced to add a huge number of Microsoft-specific keywords as well as various libraries to the language. The result is that on Windows, the language has become a complete mess. Just ask C++ developers how many definitions for a string they can think of: char*, LPTSTR, string, CString (MFC version), CString (WTL version), wchar_t*, OLECHAR*, and so on.

Now .NET has come along: a completely new environment that is going to involve new extensions to both languages. Microsoft has gotten around this by adding yet more Microsoft-specific keywords to C++, and by completely revamping Visual Basic into Visual Basic .NET, a language that retains some of the basic VB syntax but that is so different in design that we can consider it to be, for all practical purposes, a new language. It's in this context that Microsoft has decided to give developers an alternative—a language designed specifically for .NET and designed with a clean slate. Visual C# .NET is the result. Officially, Microsoft describes C# as a "simple, modern, object-oriented, and type-safe programming language derived from C and C++." Most independent observers would probably change that to "derived from C, C++, and Java." Such descriptions are technically accurate but do little to convey the beauty or elegance of the language.
Syntactically, C# is very similar to both C++ and Java, to such an extent that many keywords are the same, and C# also shares the same block structure with braces ({}) to mark blocks of code, and semicolons to separate statements. The first impression of a piece of C# code is that it looks quite like C++ or Java code. Behind that initial similarity, however, C# is a lot easier to learn than C++, and of comparable difficulty to Java. Its design is more in tune with modern developer tools than both of those other languages, and it has been designed to give us, simultaneously, the ease of use of Visual Basic, and the high-performance, low-level memory access of C++ if required. Some of the features of C# are:

• Full support for classes and object-oriented programming, including both interface and implementation inheritance, virtual functions, and operator overloading.
• A consistent and well-defined set of basic types.
• Built-in support for automatic generation of XML documentation.
• Automatic cleanup of dynamically allocated memory.
• The facility to mark classes or methods with user-defined attributes. This can be useful for documentation and can have some effects on compilation (for example, marking methods to be compiled only in debug builds).
• Full access to the .NET base class library, as well as easy access to the Windows API (if you really need it, which won't be all that often).
• Pointers and direct memory access are available if required, but the language has been designed in such a way that you can work without them in almost all cases.
• Support for properties and events in the style of Visual Basic.
• Just by changing the compiler options, you can compile either to an executable or to a library of components that can be called up by other code in the same way as ActiveX controls (COM components).
• C# can be used to write dynamic Web pages and XML Web services.

Most of the above statements, it should be pointed out, do also apply to Visual Basic .NET and Managed C++. The fact that C# is designed from the start to work with .NET, however, means that its support for these features is both more complete and offered within the context of a more suitable syntax than for those other languages. While the C# language itself is very similar to Java, there are some improvements: in particular, Java is not designed to work with the .NET environment.

Before we leave the subject, we should point out a couple of limitations of C#. The one area the language is not designed for is time-critical or extremely high-performance code—the kind where you really are worried about whether a loop takes 1,000 or 1,050 machine cycles to run through, and you need to clean up your resources the millisecond they are no longer needed. C++ is likely to continue to reign supreme among low-level languages in this area. C# lacks certain key facilities needed for extremely high-performance apps, including the ability to specify inline functions and destructors that are guaranteed to run at particular points in the code. However, the proportion of applications that fall into this category is very low.

Chinese Translation 1: "Where C# Fits In" — To a certain extent, C# can be regarded as being to programming languages what .NET is to the Windows environment.

Foreign-language original for translation: Gamma scattering scanning of concrete block for detection of voids

Gamma scattering scanning of concrete block for detection of voids

Shivaramu¹, Arijit Bose² and M. Margret¹
¹ Radiological Safety Division, Safety Group, IGCAR, Kalpakkam - 603102 (India)
² Chennai Mathematical Institute, Sipcot I.T. Park, Chennai - 603103 (India)
E-mail: shiv@.in, arijitbose@cmi.ac.in

Abstract: The present paper discusses a Non Destructive Evaluation (NDE) technique involving Compton back-scattering. Two 15 cm × 15 cm × 15 cm cubical concrete blocks were scanned for detection of voids. The setup used a PC controlled gamma scattering scanning system. A 137Cs radioactive source of strength 153.92 GBq with lead shielding, and a collimated and shielded 50% efficiency coaxial HPGe detector providing high resolution energy dispersive analysis of the scattered spectrum, were mounted on source and detector sub-assemblies respectively. In one of the concrete blocks, air cavities were created by insertion of two hollow cylindrical plastic voids, each of volume 71.6 cm³. Both the concrete blocks, one normal and another with air cavities, were scanned by lateral and depth-wise motion in steps of 2.5 cm. The results show that the scattering method is highly sensitive to changes in electronic and physical densities of the volume element (VOXEL) under study. The voids have been detected with a statistical accuracy of better than 0.1% and their positions have been determined with good spatial resolution. A reconstruction algorithm is developed for characterization of the block with voids. This algorithm can be generally applied to such back-scattering experiments in order to determine anomalies in materials.

Keywords: Gamma backscattering, Non-Destructive Evaluation, Compton scattering, HPGe detector, Voids, VOXEL, reconstruction algorithm.

arXiv:0912.1554v2 [cond-mat.mtrl-sci], 5 Jan 2010

1 Introduction

The need for advanced techniques for detection and evaluation of a class of sub-surface defects, which require access to only one side of a relatively thick material or structure to be inspected, has drawn attention to X-ray or gamma backscatter as a desirable choice [1-3]. There is a great demand for non-destructive testing and evaluation (NDT, NDE) techniques in many diverse fields of activity. A particular area of interest is encountered in detection of defects in concrete structures. Ultrasonic, radiography and Compton back-scattering techniques are currently employed to test the components of structures. Transmission provides line-integrated information along the path of radiation from the source to the detector, which masks the position of an anomaly present along the transmission line. Therefore, it is difficult to determine the position of an anomaly directly from transmission measurements. The backscatter technique can image a volume rather than a plane. Point-wise information can be obtained by focusing the field of view of the source and detector so that they intersect around a point. Since both the source and detector are located on the same side of the object, examination of massive or extended structures becomes possible. This NDE technique should be developed as it can be applied in:

• Detection of corrosion in steel liners that are buried (covered by concrete), and detection of voids > 20 mm diameter in concrete. Detection of flaws before they propagate to the point of causing failure is essential.
• Locating flaws in nuclear reactor walls or dam walls.
• Detection of landmines.

Therefore an urgent need exists to develop diversified and effective NDE technologies to detect flaws in structures. The gamma scattering method is a viable
tool for inspecting materials since it is strongly dependent on the electron density of the scattering medium, and in turn, its mass density. The concept is based on the Compton interaction between the incident photons and the electrons of matter. Gamma or X-rays scattered from a VOXEL are detected by a well-collimated detector placed at an angle which could vary from forward scattering angles to the back-scattering configuration. The scattered signal, therefore, provides an indication of the electron density of the material comprising the inspected volume [4]. By scanning a plane of interest of the object, it is possible to obtain the density distribution in this plane. In this process, scattered signals are recorded from different depths of the material. Hence, gamma scattering enables the detection of local defects and the discrimination between materials of different density and composition, such as concrete, void and steel. Moreover, in NDT, the nature of the inspected object is usually known and the purpose is to determine any disturbance in the measured signal that can indicate the presence of an anomaly.

Two concrete blocks, one normal and another with air cavities, were scanned completely by the voxel method. The inspection volume (voxel) was formed by the intersection of the collimated incident beam cone and the collimated field of view of the HPGe detector. The corresponding Compton scattered spectra of these two blocks were compared and analyzed. As the void intersected the sensitive volume, there was a decrease in the total electron density of the material comprising the voxel, hence it showed a decrease in detector response. The presence of a void could be clearly distinguished and located.

2 Experimental Procedure

The schematic of the experimental set up is shown in Fig. 1. The scanning system consists of a source and detector unit and a four-axis job positioning system for moving the block. The source and detector units are composed of a positioning stage, a fabricated base and, common to both units, a control panel and a laptop. The positioning stages of each unit consist of an X, Y-axis travel stage, a Z-axis vertical travel stage and a rotary stage. The 137Cs radioactive source of strength 153.92 GBq with a lead shielding, and the collimated and shielded 50% efficiency coaxial HPGe detector, providing high resolution energy dispersive analysis of the scattered spectrum, are mounted separately on the source and detector sub-assemblies of the 6-axis system respectively. The concrete blocks are mounted on the 4-axis job positioning system. The voxel to be analyzed is geometrically established by the intersection of the incident and scattered beams. The size of the voxel is defined by the diameter and length of the collimators employed and by the source-to-object and detector-to-object distances, and can be easily chosen by proper adjustment of the X, Y, Z and θ positions of the 6-axis and 4-axis job positioning systems. The source and detector are collimated with solid lead cylindrical collimators of diameter 17 and 3 mm respectively, and the size of the resulting voxel is 19 cm³. The scattered intensity from a voxel in the concrete block is detected and the pulse height spectrum (PHS) is accumulated and displayed using an 8K-channel analyzer which is interfaced with a PC for data storage and analysis. The scattering angle is 96°, the incident photon energy emitted by 137Cs is 661.6 keV, and the energy of the scattered photon is 273.6 keV (Compton scattering).
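The quoted scattered energy follows from the Compton shift formula. As a quick cross-check (not part of the original paper), with incident energy E = 661.6 keV, electron rest energy m_e c² = 511.0 keV and scattering angle θ = 96°:

$$E_s = \frac{E}{1 + \dfrac{E}{m_e c^2}\,(1-\cos\theta)} = \frac{661.6}{1 + 1.295 \times 1.105} \approx 272\ \mathrm{keV},$$

which agrees with the quoted 273.6 keV to within the rounding of the scattering angle (an angle of about 95.5° reproduces 273.6 keV exactly).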
Two concrete blocks, one normal and another with air cavities created by inserting two hollow cylindrical plastic voids of diameter 39 mm and height 60 mm, are chosen for void detection and quantification. These voids were placed symmetrically, at equal distances from the surface and from the centre of the concrete block, as shown in Fig. 1. The horizontal and depth-wise scanning of a plane in the concrete cube was performed by moving the blocks across the source and detector collimators in steps of 2.5 cm. The photo peak counts of the scattered spectrum for corresponding positions of both specimens, one normal and another with air cavities, were compared and analyzed.

3 Reconstruction Algorithm and Attenuation Corrections

The reconstruction algorithm developed to correct for attenuation of the incident and scattered photons by the material surrounding each scatter site is described. It provides accurate reconstruction of the Compton scattering data through an adequate correction of the absorption phenomena. The path from the source to detector can be broken into three stages. The first stage is the photon's travel from the source to the scattering point P along the path α. Neglecting attenuation due to air, from the Beer–Bouguer law:

$$I_1 = I_0 \exp\left[-\int_0^{x} \frac{\mu(E)}{\rho}\,\rho\,dx\right] \qquad (1)$$

where $I_1$ and $I_0$ are the transmitted and incident flux, respectively, $\mu(E)/\rho$ is the mass attenuation coefficient of the material for photons of energy E, ρ is the density of the material and x is the length of path α in the block [5]. The second stage is scattering towards the detector at point P. The scattered flux $I_2$ is determined by:

$$I_2 = I_1\,\frac{d\sigma}{d\Omega}(E,\theta)\,S(E,\theta,Z)\,d\Omega\,\rho_e(P)\,\Delta L \qquad (2)$$

where $\frac{d\sigma}{d\Omega}(E,\theta)$ is the differential scatter cross section as governed by the Klein–Nishina formula (a function of the incident gamma energy E and scatter angle θ), S is the incoherent scattering function (a function of E, θ and the atomic number of the element Z), dΩ is the solid angle subtended by the detector and its collimator, $\rho_e(P)$ is the electron density at point P and ΔL is an element of path α in the vicinity of point P (the voxel thickness). The electron density at P is the material property we are attempting to measure.
It is proportional to the physical density ρ according to the formula $\rho_e = \rho N Z / A$ [6], where N is Avogadro's number, Z is the atomic number and A is the atomic weight. The third stage is the transport of the scattered photons back through the material to the detector. The signal is further attenuated, so that:

$$I_3 = I_2 \exp\left[-\int_0^{x_s} \frac{\mu_s(E_s,\theta)}{\rho}\,\rho\,dx\right] \qquad (3)$$

Here $I_3$ is the flux intensity reaching the detector, $\mu_s(E_s,\theta)/\rho$ is the mass attenuation coefficient for scattered photons of energy $E_s$, now a function of θ by virtue of the Compton energy shift at P, and $x_s$ is the length of the path β in the block. Combining the expressions for the three stages, the signal intensity corresponding to point P can be written as:

$$I_3 = I_0 \exp\left[-\left(\int_0^{x} \frac{\mu(E)}{\rho}\,\rho\,dx + \int_0^{x_s} \frac{\mu_s(E_s,\theta)}{\rho}\,\rho\,dx\right)\right]\frac{d\sigma}{d\Omega}(E,\theta)\,S(E,\theta,Z)\,d\Omega\,\rho_e(P)\,\Delta L \qquad (4)$$

Let

$$k = \frac{d\sigma}{d\Omega}(E,\theta)\,S(E,\theta,Z)\,d\Omega\,\frac{NZ}{A}\,\Delta L \qquad (5)$$

The attenuation factor is

$$AF = \exp\left[-\left(\int_0^{x} \frac{\mu(E)}{\rho}\,\rho\,dx + \int_0^{x_s} \frac{\mu_s(E_s,\theta)}{\rho}\,\rho\,dx\right)\right] \qquad (6)$$

In the present case, as per Fig. 2:

$$AF = \exp\left[-\left(\int_0^{T} \frac{\mu(E)}{\rho}\,\frac{\rho\,dt}{\cos\theta_1} + \int_0^{T} \frac{\mu_s(E_s)}{\rho}\,\frac{\rho\,dt_s}{\cos\theta_2}\right)\right] \qquad (7)$$

Combining (4), (5) and (6) we get:

$$I_3 = I_0\,k\,\rho(P)\,AF \qquad (8)$$

Taking the ratio of $I_3$, which is experimentally the counts under the photo peak of the Compton scattered spectra, for the normal concrete block (Block 1) and the one with air cavities (Block 2), we get:

$$\frac{I_3(\mathrm{Block\ 2})}{I_3(\mathrm{Block\ 1})} = \frac{\rho(P,\mathrm{Block\ 2})\,AF(P,\mathrm{Block\ 2})}{\rho(P,\mathrm{Block\ 1})\,AF(P,\mathrm{Block\ 1})} \qquad (9)$$

The attenuation factor for the normal concrete block can be calculated just by substituting the mass attenuation coefficient and density of concrete in (7). Calculating the attenuation factor for different voxel positions of the concrete block with air cavities requires the geometry, and hence the path lengths traveled by the incident and scattered rays in void and concrete. This information is obtained using software graphics. Here the knowledge of the precise location and size of the voids is used. We see that equation (9) is iterative in density; iterative approximations can be taken up to obtain the void locations in the case of voids of unknown location and size. Other alternatives are:

• reduction of the voxel size, which will give a better spatial resolution and contour of the voids;
• changing the scattering angle and generating more density distribution pictures of the concrete block under investigation, and finally superimposing the results.

4 Results and Discussion

The lateral variation of the difference in Compton photo peak counts of the scattered spectrum between the normal and void-incorporated concrete block, measured at 7.5 cm from the bottom, for various depths (D0: front face, to D3: centre, each 2.5 cm apart) is shown in Figs. 3 and 4. The density ratio of the two concrete blocks derived from equation (9), plotted as a function of lateral distance, is shown in Fig. 5. As the sensitive volume intersects the void cavity, there is a reduction in the total electron density and hence an increase in the difference scattered intensity (Figs. 3 & 4) and a decrease in the density ratio (Fig. 5). At the front face of the blocks (D0) it is known that no void enters the investigation voxel, and the same can be inferred from the results shown in Figs. 3, 4 & 5, which show the same scattered intensity and hence the same density. On the other hand, at 5 cm depth from the front face (D2) the effect of the voids can be seen (Figs. 3 & 4) as a U-shaped scattered intensity curve and an inverted-U-shaped density ratio (Fig. 5). The magnitude of the difference in scattered intensity (Figs. 3 & 4) and the decrease in density (Fig. 5) are proportional to the size of the void within the voxel. This gives an idea of the size and location of the void, as the voxel size and its orientation in concrete are known for all positions. The voxel size was estimated as 85 cm³ in the present case.
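A short sketch of the bookkeeping in equations (7) and (9) for a single voxel, assuming straight-line paths through homogeneous concrete, is given below. It is not the authors' code; the attenuation coefficients are placeholders to be replaced with tabulated values (e.g., from the NIST XCOM database) for concrete at the incident (661.6 keV) and scattered (273.6 keV) energies, and the counts, path lengths and angles are invented for illustration.

    using System;

    class AttenuationDemo
    {
        // Placeholder mass attenuation coefficients (cm^2/g) and density (g/cm^3):
        const double MuRhoIn     = 0.077;  // assumed value for concrete at 661.6 keV
        const double MuRhoOut    = 0.11;   // assumed value for concrete at 273.6 keV
        const double RhoConcrete = 2.3;    // assumed concrete density

        // Eq. (7): attenuation along the incident leg (depth t, angle theta1)
        // and the scattered leg (depth tS, angle theta2).
        static double AttenuationFactor(double t, double cos1, double tS, double cos2)
        {
            return Math.Exp(-(MuRhoIn  * RhoConcrete * t  / cos1
                            + MuRhoOut * RhoConcrete * tS / cos2));
        }

        static void Main()
        {
            // Hypothetical geometry for a voxel about 5 cm into the block.
            double afNormal = AttenuationFactor(5.0, 0.9, 6.0, 0.8);
            double afVoid   = afNormal * 1.05;  // made-up AF for the block with a void

            // Eq. (9): photo-peak count ratio = (density ratio) x (AF ratio),
            // so the density ratio is recovered by dividing out the AF ratio.
            double countsVoid = 9200.0, countsNormal = 10000.0;  // made-up counts
            double densityRatio = (countsVoid / countsNormal) * (afNormal / afVoid);
            Console.WriteLine("AF = {0:F4}, density ratio = {1:F3}", afNormal, densityRatio);
        }
    }

A density ratio below 1 at a given voxel position flags a local electron-density deficit, i.e., a candidate void, mirroring the dips seen in Fig. 5.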
The cylindrical voids have been detected in the present study with a statistical accuracy of better than 0.1%. It is also possible to locate the position of the voids successfully even without considering the contribution due to the attenuation factor. The effectiveness of this inspection technique can be defined by the spatial resolution and the density contrast achievable. Good resolution requires a small sensitive volume, while high contrast demands large sensitivity to changes in composition. The size of the inspection volume defines the spatial resolution. Reducing the collimator aperture to improve the spatial resolution leads, however, to a decrease in the count rate. This can be compensated for by increasing the source strength and the counting period. A practical compromise is therefore necessary to achieve a reasonable resolution within an appropriate counting period and without exposure to a high dose of radiation. In order to increase the contrast, the contribution to the detector from the material contained within the sensitive volume should be enhanced while that of the surrounding media should be reduced. This can be achieved by reducing the attenuation of the radiation as it travels to and from the sensitive volume, and/or by increasing the probability of scattering within the volume. The attenuation and scattering probability, however, depend on the radiation energy and the angle of scattering. The incident angle, defined in Fig. 2, determines the photon path length and in turn affects the attenuation probability. The source energy and the scattering angle are, however, the two most important design parameters as they directly affect the detector response.

Fig. 3. The difference Compton photo peak counts.
Fig. 4. The 2D bar graph of difference Compton photo peak counts.
Fig. 5. The density ratio as a function of lateral distance.

5 References

1. Lawson, L., "Backscatter Imaging", Materials Evaluation, 60(11), 1295-1316, 2002.
2. Harding, G., "Inelastic photon scattering: Effects and applications in biomedical science and industry", Radiation Phys. Chem., 1997, 50, 91-111.
3. Hussein, E. M. A. and Whynot, T. M., "A Compton scattering method for inspecting concrete structures", Nucl. Instr. Method, 1989, A283, 100-106.
4. Zhu, P., Duvauchelle, P., Peix, G. and Babot, D., "X-ray Compton backscattering techniques for process tomography: imaging and characterization of materials", Meas. Sci. Technol., 7.
5. Poranski, C. F., Greenawald, E. C. and Ham, Y. S., "X-Ray Backscatter Tomography: NDT Potential and Limitations", Materials Science Forum, Vols. 210-213 (1996), pp. 211-218.
6. Cesareo, R., Balogun, F., Brunetti, A. and Cappio Borlino, C., "90° Compton and Rayleigh measurements and imaging", Radiation Physics and Chemistry, 61 (2001), 339-342.

6 Acknowledgements

• Dr. N. Mohankumar, Head, Radiological Safety Division, IGCAR, Kalpakkam.
• Ramar, IGCAR.
• Priyada, IGCAR.

Project cost control: foreign literature original

Project Budget Monitor and Control

With marketing competitiveness growing, budget control is more and more critical in every project. This paper discusses how a project manager can succeed in budget control during the construction phase. Many methods are discussed, and the paper shows that to be successful, the project manager must attend to all of them.

1. INTRODUCTION

Surveys show that most projects encounter cost over-runs (Williams, Ackermann, & Eden, 2002, p. 192). According to Wright's (1997) research, a good rule of thumb is to add a minimum of 50% to the first estimate of the budget (Gardiner and Stewart, 1998, p. 251). This indicates that projects are very complex and full of challenge, and many unexpected issues can lead to cost over-runs. Therefore, many techniques and methods have been developed for successful monitoring and control to lead a project to success. In this article, we discuss how a project manager can achieve successful budget control in the construction phase.

2. THE CONCEPT AND PURPOSE OF PROJECT CONTROL AND MONITORING

Erel and Raz (2000) state that the project control cycle consists of measuring the status of the project, comparing it to the plan, analyzing the deviations, and implementing any appropriate corrective actions. When a project reaches the construction phase, monitoring and control are critical to delivering project success. Project monitoring exists to establish the need to take corrective action while there is still time to take it. Through monitoring the activities, the project team can analyze the deviations, decide what to do, and actually do it. The purpose of monitoring and control is to support the implementation of corrective actions, ensure projects stay on target, and get a project back on target once it has gone off target.

3. SETTING UP AN EFFICIENT CONTROL SYSTEM

For the purpose of achieving the cost target, the manager needs to set up an efficient management framework including a reporting structure, progress assessment, and a communication system. Employees' responsibility and authority need to be defined in the reporting structure. Formal and informal progress assessment helps in getting a general perspective on the gap between reality and target, and is significant in identifying what is at risk and should be monitored and controlled.
Project success is strongly linked to communication. An efficient communication system benefits teamwork and facilitates problem solving (Diallo and Thuillier, 2005).

4. COST MONITOR AND CONTROL

4.1 Ranking the priority of monitoring

In the construction phase, many activities are carried out based on the original plan. It is necessary to know which activities are most likely to lead to project delay and disruption. Therefore, the first step is ranking the priority of the activities. Because the duration of a project is determined by the total time of activities on the critical path, any delay in an activity on the critical path will cause a delay in the completion date of the project (Ackermann, Eden, Howick and Williams, 2000, p. 295). Therefore, the activities on the critical path should be the first to be monitored and controlled. Secondly, monitor the activities with no free float remaining: a delay in any activity with no free float will inevitably delay some subsequent activity, and some resources may be unavailable because they are committed elsewhere. Thirdly, monitor the activities with less than a specified float, because an activity with very little float might use up that time before a control decision is made once it deviates from the target. Fourthly, managers should monitor high-risk activities, which are the most likely to overspend. Fifthly, managers should monitor the activities using critical resources; some resources are critical because they are very expensive or limited (Cotterell and Hughes, 1995).

4.2 Methods of cost control

The main costs of a project include staff cost, material cost and delay cost. To control these costs, managers should first set up a cost control system to:

(a) Allocate responsibilities for administration and analysis of financial data
(b) Ensure all costs are properly allocated against project codes
(c) Ensure all costs are genuinely in pursuit of project activities
(d) Check that other projects are not using the budget.

Then, managers should monitor and control changes to the project budget. This means the following:

(a) Being concerned with the key factors that cause changes to the budget
(b) Controlling actual cost changes as they occur:
- Monitor cost performance to detect variances
- Record all appropriate changes accurately in the cost baseline
- Prevent incorrect, unauthorized changes from being included in the cost baseline
- Determine positive and negative variances
- Integrate with all other control processes (scope, change, schedule, quality)

As a project is dynamic, the project manager sometimes knows from monitoring that the project is going off target, but does not know the best action to take. In this circumstance, net present value (NPV) should be used as an ongoing monitoring and control mechanism, because NPV takes account of the time element and discounts future cash flows; it is the result of the time effect on cash.
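To make the NPV suggestion above concrete, here is a minimal sketch (not from the paper; the cash flows and the 8% discount rate are invented illustration values) of how candidate corrective actions could be compared:

    using System;

    class NpvDemo
    {
        // NPV = sum over t of cashFlows[t] / (1 + rate)^t, with t = 0 meaning "now".
        static double Npv(double rate, double[] cashFlows)
        {
            double npv = 0.0;
            for (int t = 0; t < cashFlows.Length; t++)
                npv += cashFlows[t] / Math.Pow(1.0 + rate, t);
            return npv;
        }

        static void Main()
        {
            // Option A: spend 50 now to save 20 in each of the next three periods.
            double[] optionA = { -50.0, 20.0, 20.0, 20.0 };
            Console.WriteLine("NPV(A) = {0:F2}", Npv(0.08, optionA));  // ≈ 1.54
        }
    }

An action is worth taking when its NPV is positive; among several candidate actions, the one with the highest NPV is preferred.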
Change monitor and control

Voropajev states that dynamic changes in the project environment will influence the process of project implementation and the project itself, and may cause heightened risk. When carrying out some activities, methods different from those in the original plan must be used to keep the process moving forward (as experienced in practice). Therefore, changes are inevitable and need to be managed during the project life-cycle (Voropajev, 1998, pp. 16-17). An effective change control system should be established to ensure the change procedure is clear, unambiguous and easy for an employee to use when requesting a change. The following things also need to be considered:

(a) Monitor and forecast the most probable changes and the key factors that generate change, to ensure good results; make sure that every change is checked by a suitable person.
(b) Changes should take place only once they are approved in the project documentation.

CONCLUSION

This article shows the best methods of budget control. First, an efficient control system must be set up. Secondly, it is required to recognize and rank the important factors affecting the budget target. Thirdly, the manager should combine different control techniques to achieve project success.

Foreign literature original

Appendix 3, foreign literature original: Clusters and Competitiveness——A New Federal Role for Stimulating Regional Economies

By Karen G. Mills, Elisabeth B. Reynolds, Andrew Reamer

Clusters reinvigorate regional competitiveness. In recent decades, the nation's economic dominance has eroded across an array of industries and business functions. In the decades following World War II, the United States built world-leading industries that provided well-paying jobs and economic prosperity to the nation. This dominance flowed from the nation's extraordinary aptitude for innovation as well as a relative lack of international competition. Other nations could not match the economic prowess of the U.S. due to some combination of insufficient financial, human, and physical capital and economic and social systems that did not value creativity and entrepreneurship.

However, while the nation today retains its preeminence in many realms, the dramatic expansion of economic capabilities abroad has seen the U.S. cede leadership, market share, and jobs in an ever-growing, wide-ranging list of industries and business functions. Initially restricted to labor-intensive, lower-skill activities such as apparel and electronic parts manufacturing, the list of affected U.S. operations has expanded to labor-intensive, higher-skill ones such as furniture-making and technical support call centers; capital-intensive, higher-skill ones such as auto, steel, and information technology equipment manufacturing; and, more recently, research and development (R&D) activities in sectors as diverse as computers and consumer products. Looking ahead, the nation's capability for generating and sustaining stable, sufficiently well-paying jobs for a large number of U.S. workers is increasingly at risk. Across numerous industries, U.S.-based operations have not been fully effective in responding to competitive challenges from abroad. Many struggle to develop and adopt the technological innovations (in products and production processes) and institutional innovations (new ways of organizing firms and their relationships with customers, suppliers, and collaborators) that sustain economic activity and high-skill, high value-added jobs. As a result, too many workers are losing decent jobs without prospect of regaining them and too many regions are struggling economically.

In this environment, regional industry clusters provide a valuable mechanism for boosting national and regional competitiveness. Essentially, an industry cluster is a geographic concentration of interconnected businesses, suppliers, service providers, and associated institutions in a particular field. Defined by relationships rather than a particular product or function, clusters include organizations across multiple traditional industrial classifications (which makes drawing the categorical boundaries of a cluster a challenge).
Specifically, participants in an industry cluster include:

• organizations providing similar and related goods or services
• specialized suppliers of goods, services, and financial capital (backward linkages)
• distributors and local customers (forward linkages)
• companies with complementary products (lateral linkages)
• companies employing related skills or technologies or common inputs (lateral linkages)
• related research, education, and training institutions such as universities, community colleges, and workforce training programs
• cluster support organizations such as trade and professional associations, business councils, and standards-setting organizations

The power of clusters to advance regional economic growth was described (using the term "industrial districts") in the pioneering work of Alfred Marshall in 1890. With the sizeable upswing in regional economic restructuring in recent decades, understanding of and interest in the role of clusters in regional competitiveness has again come to the fore through the work of a number of scholars and economic development practitioners. In particular, the efforts of Michael Porter, in a dual role as scholar and development practitioner, have done much to develop and disseminate the concept.

Essentially, industry clusters develop through the attractions of geographic proximity—firms find that the geographic concentration of similar, related, complementary, and supporting organizations offers a wide array of benefits. Clusters promote knowledge sharing ("spillovers") and innovations in products and in technical and business processes by providing thick networks of formal and informal relationships across organizations. As a result, companies derive substantial benefits from participation in a cluster's "social structure of innovation." A number of studies indicate a positive correlation between clusters and patenting rates, one measure of the innovation process.

What is more, clusters enhance firm access to specialized labor, materials, and equipment and enable lower operating costs. Highly concentrated markets attract skilled workers by offering job mobility, and attract specialized suppliers and service providers—such as parts makers, workforce trainers, marketing firms, or intellectual property lawyers—by providing substantial business opportunities in close proximity. And concentrated markets tend to provide firms with various cost advantages; for example, search costs are reduced, market economies of scale can cut costs, and price competition among suppliers can be heightened. Entrepreneurship is one important means through which clusters achieve their benefits. Dynamic clusters offer the market opportunities and the conditions—culture, social networks, inter-firm mobility, access to capital—that encourage new business development.

In sum, clusters stimulate innovation and improve productivity. In so doing, they are a critical element of national and regional competitiveness. After all, the nation's economy is essentially an amalgamation of regional ones, the health of which depends in turn on the competitiveness of its traded sector—that part of the economy which provides goods and services to markets that extend beyond the region. In metropolitan areas and most other economic regions of any size, the traded sector contains one or more industry clusters. In this respect, the presence and strength of industry clusters has a direct effect on economic performance, as demonstrated by a number of recent studies.
A strong correlation exists between gross domestic product per capita and cluster concentrations. Several studies show a positive correlation between cluster strength and wage levels within clusters. And a third set of studies indicates that regions with strong clusters have higher regional and traded-sector wages.

For purposes of economic development policy, meanwhile, it should be kept in mind that every cluster is unique. Clusters come in a variety of purposes, shapes, and sizes and emerge out of a variety of initial conditions. (See Appendix A for examples.) The implication is that one size, in terms of policy prescription, does not fit all. Moreover, clusters differ considerably in their trajectory of growth, development, and adjustment in the face of changing market conditions. The accumulation of evidence suggests, in this respect, that there are three critical factors of cluster success: collaboration (networks and partnerships), skills and abilities (human resources), and organizational capacities to generate and take advantage of innovations. Any public policy for clusters, then, needs to aim at spurring these success factors.

Policy also needs to recognize that cluster success breeds success: the larger a cluster, the greater the benefits it generates in terms of innovation and efficiencies, the more attractive it becomes to firms, entrepreneurs, and workers as a place to be, the more it grows, and so on. As a result, most sectors have a handful of dominant clusters in the U.S. As the dominant clusters continually pull in firms, entrepreneurs, and workers, it is difficult for lower-tier regions to break into the dominant group. For instance, the biotech industry is led by the Boston and San Francisco clusters, followed by San Diego, Seattle, Raleigh-Durham, Washington-Baltimore, and Los Angeles. Moreover, as suggested by the biotech example, the dominant clusters tend to be in larger metro areas. Larger metros (almost by definition) tend to have larger traded clusters, which offer a greater degree of specialization and diversity, which leads to patenting rates almost three times higher than in smaller metros. The implication is that public policy needs to be realistic; not every region can be, as many once hoped, the next Silicon Valley.

At the same time, not even Silicon Valley can rest on its laurels. While the hierarchy of clusters in a particular industry may be relatively fixed for a period of time, the transformation of the American industrial landscape from the 1950s—when Detroit meant cars, Pittsburgh meant steel, and Hartford meant insurance—to the present makes quite clear that cluster dominance cannot be taken for granted. This is true now more than ever—as innovation progresses, many clusters have become increasingly vulnerable, for three related reasons.

First, since the mid-20th century, transportation and communications innovations have allowed manufacturers to untether production capacity from clusters and scatter isolated facilities around the nation and the world, to be closer to new markets and to take advantage of lower wage costs. Once relatively confined to the building of "greenfield" branch plants in less industrial, non-union areas of the U.S., the shift of nondurables manufacturing to non-U.S. locations is a more recent manifestation of this phenomenon.
Further, these innovations have enabled foreign firms to greatly increase their share of markets once dominated by American firms and their associated home-based clusters.

Second, more recent information technology innovations have allowed the geographic disaggregation of functions that traditionally had been co-located in a single cluster. Firms now have the freedom to place headquarters, R&D, manufacturing, marketing and sales, and distribution and logistics in disparate locations in light of the particular competitive requirements (e.g., skills, costs, access to markets) of each function. As a result, firms often locate operations in function-specific clusters. The geographic fragmentation of corporate functions has had negative impacts on many traditional, multi-functional clusters, such as existed in 1960. At the same time, it offers opportunities, particularly for mid-sized and smaller areas, to develop clusters around highly specific functions that may serve a variety of industry sectors. For instance, Memphis, TN and Louisville, KY have become national airfreight distribution hubs. Relying on Internet technologies, firms such as IBM and Procter & Gamble are creating virtual clusters, cross-geography "collaboratories." (However, by whatever name, and whatever the changes in information technology, the benefits of the geographic agglomeration of economic activity will continue for the foreseeable future.)

Third, as radically new products and services disrupt existing markets, new clusters that produce them can do likewise. For instance, the transformation in the computer industry away from mainframes and then from minicomputers in the 1970s and 1980s led to a shift in industry dominance from the Northeast to Silicon Valley and Seattle.

In the new world of global competition, the U.S. and its regions are in a perpetual state of economic transition. Industries rise and fall, transform products and processes, and move around the map. As a result, regions across the U.S. are working hard to sustain a portfolio of competitive clusters and other traded activities that provide decent jobs. In this process, some regional economies are succeeding for the moment, while others are struggling. For U.S. regions, states, and particularly the federal government, the challenge is to identify and pursue mechanisms—cluster initiatives, in particular—to enhance the competitiveness of existing clusters while taking advantage of opportunities to develop new ones.

Cluster initiatives stimulate cluster competitiveness and growth.
Cluster initiatives are formally organized efforts to promote cluster competitiveness and growth through a variety of collaborative activities among cluster participants. Examples of such collaborative efforts include:

• facilitating market development through joint market assessment, marketing, and brand-building
• encouraging relationship-building (networking) within the cluster, within the region, and with clusters in other locations
• promoting collaborative innovation—research, product and process development, and commercialization
• aiding innovation diffusion, the adoption of innovative products, processes, and practices
• supporting cluster expansion through attracting firms to the area and supporting new business development
• sponsoring education and training activities
• representing cluster interests before external organizations such as regional development partnerships, national trade associations, and local, state, and federal governments

While cluster initiatives have existed for some time, research indicates that the number of such initiatives has grown substantially around the world in a short period of time. In 2003, the Global Cluster Initiative Survey (GCIS) identified over 500 cluster initiatives in Europe, North America, Australia, and New Zealand; 72 percent of these had been created during the previous four years. That number likely has expanded significantly in the last five years. Today, the U.S. alone has several hundred distinct cluster initiatives.

A look across the breadth of cluster initiatives indicates the following:

• Clusters are present across the full array of industry sectors, including both manufacturing and services—as examples, initiatives exist in information technology, biomedical, photonics, natural resources, communications, and the arts
• They are almost always in sectors of economic importance; in other words, they tend not to be frivolously or naively chosen
• They carry out a diverse set of activities, typically in four to six of the bulleted categories above
• While the geographic boundaries of many are natural economic regions such as metro areas, others follow political boundaries, such as states
• Typically, they are industry-led, with active involvement from government and nonprofit organizations
• In terms of legal structure, they can be sponsored by existing collaborative institutions such as chambers of commerce and trade associations, or created as new sole-purpose nonprofits (e.g., the North Star Alliance)
• Most have a dedicated facilitator
• The number of participants in a cluster initiative can range from a handful to over 500
• Almost every cluster initiative is unique when the combination of regional setting, industry, size, range of objectives and activities, development, structure, and financing is considered

Successful cluster initiatives:

• are industry-led
• involve state and local government decision-makers that can be supportive
• are inclusive: they seek any and all organizations that might find benefit from participation, including startups, firms not locally owned, and firms rival to existing members
• create consensus regarding vision and roadmap (mission, objectives, how to reach them)
• encourage broad participation by members and collaboration among all types of participants in implementing the roadmap
• are well-funded initially and self-sustaining over the long term
• link with relevant external efforts, including regional economic development partnerships and cluster initiatives in other locations

As properly organized cluster
initiatives can effectively promote cluster competitiveness, it is in the nation's interest to have well-designed, well-implemented cluster initiatives in all regions. Cluster initiatives often emerge as a natural, firm-led outgrowth of cluster development. For example, the Massachusetts Biotechnology Council formed out of a local biotech softball league. However, left to the initiative of cluster participants, a good number of possible cluster initiatives never see reality because of a series of barriers to the efficient working of markets (what economists call "market failures"). First are "public good" and "free rider" problems. In certain instances, individual firms, particularly small ones, will under-invest in cluster activities because any one firm's near-term cost in time, money, and effort will outweigh the immediate benefits it receives. So no firm sees the incentive to be an early champion or organizer. Further, because all firms in the cluster benefit from the work of early champions (the "public good"), many are content to sit back and wait for others to take the lead (be a "free rider"). Consequently, if cluster firms are left to their own devices and no early organizers emerge, a sub-optimal amount of cluster activity will occur and the cluster will lose the economic benefits that collaboration could bring.

Some firms have issues of mistrust, with concerns about collaborating with the competition. In certain industries in certain regions, competition among firms is so intense that a culture of secrecy and suspicion has developed that stymies mutually beneficial cooperation. Even if the will to organize a cluster initiative is present, the way may be impeded by a variety of factors. Cluster initiatives may not get off the ground because would-be organizers lack knowledge about the full array of organizations in the cluster, relationships or standing with key organizations (i.e., they lack the power to convene), or financial resources to organize, or are uncertain about how organizing should best proceed. They see the "transaction costs" of overcoming these barriers (that is, seeking information, building relationships, raising money) as too high to move forward.

In the face of the various barriers to self-generating cluster initiatives, public purpose organizations such as regional development partnerships and state governments are taking an increasingly active role in getting cluster initiatives going. So, for example, the Massachusetts Technology Collaborative, a quasi-public state agency, was instrumental in initiating the Massachusetts Medical Device Industry Council (in response to an economic development report to the governor prepared by Michael Porter). And Maine's North Star Alliance was created through the effort of that state's governor. However, a number of states and regional organizations—and national governments elsewhere—have come to understand that creating single cluster initiatives in an ad hoc, "one-off" manner is an insufficient response to the problem and the opportunity. Rather, as discussed in the next section, they have created formal ongoing programs to seed and support a series of cluster initiatives. Even so, the nation's network of state and regional cluster initiatives is thin and uneven in terms of geographic and industry coverage. Consequently, the nation's ability to stay competitive and provide well-paying jobs across U.S. regions is diminished; broader, thoughtful federal action is necessary.

Foreign literature translation: original + translation

Foreign-language original: Analysis of Continuous Prestressed Concrete Beams

Chris Burgoyne
March 26, 2005

1. Introduction

This conference is devoted to the development of structural analysis rather than the strength of materials, but the effective use of prestressed concrete relies on an appropriate combination of structural analysis techniques with knowledge of the material behaviour. Design of prestressed concrete structures is usually left to specialists; the unwary will either make mistakes or spend inordinate time trying to extract a solution from the various equations.

There are a number of fundamental differences between the behaviour of prestressed concrete and that of other materials. Structures are not unstressed when unloaded; the design space of feasible solutions is totally bounded; in hyperstatic structures, various states of self-stress can be induced by altering the cable profile; and all of these factors are influenced by creep and thermal effects. How were these problems recognised and how have they been tackled?

Ever since the development of reinforced concrete by Hennebique at the end of the 19th century (Cusack 1984), it was recognised that steel and concrete could be more effectively combined if the steel was pretensioned, putting the concrete into compression. Cracking could be reduced, if not prevented altogether, which would increase stiffness and improve durability. Early attempts all failed because the initial prestress soon vanished, leaving the structure to behave as though it was reinforced; good descriptions of these attempts are given by Leonhardt (1964) and Abeles (1964).

It was Freyssinet's observations of the sagging of the shallow arches on three bridges that he had just completed in 1927 over the River Allier near Vichy which led directly to prestressed concrete (Freyssinet 1956). Only the bridge at Boutiron survived WWII (Fig. 1). Hitherto, it had been assumed that concrete had a Young's modulus which remained fixed, but he recognised that the deferred strains due to creep explained why the prestress had been lost in the early trials. Freyssinet (Fig. 2) also correctly reasoned that high tensile steel had to be used, so that some prestress would remain after the creep had occurred, and also that high quality concrete should be used, since this minimised the total amount of creep. The history of Freyssinet's early prestressed concrete work is written elsewhere.

Figure 1: Boutiron Bridge, Vichy
Figure 2: Eugène Freyssinet

At about the same time, work was underway on creep at the BRE laboratory in England (Glanville 1930, 1933). It is debatable which man should be given credit for the discovery of creep, but Freyssinet clearly gets the credit for successfully using the knowledge to prestress concrete.

There are still problems associated with understanding how prestressed concrete works, partly because there is more than one way of thinking about it. These different philosophies are to some extent contradictory, and certainly confusing to the young engineer. This is also reflected, to a certain extent, in the various codes of practice.

Permissible-stress design philosophy sees prestressed concrete as a way of avoiding cracking by eliminating tensile stresses; the objective is for sufficient compression to remain after creep losses. Untensioned reinforcement, which attracts prestress due to creep, is anathema.
This philosophy derives directly from Freyssinet's logic and is primarily a working-stress concept.

Ultimate-strength philosophy sees prestressing as a way of utilising high tensile steel as reinforcement. High strength steels have high elastic strain capacity, which could not be utilised when used as reinforcement; if the steel is pretensioned, much of that strain capacity is taken out before bonding the steel to the concrete. Structures designed this way are normally designed to be in compression everywhere under permanent loads, but allowed to crack under high live load. The idea derives directly from the work of Dischinger (1936) and his work on the bridge at Aue in 1939 (Schonberg and Fichter 1939), as well as that of Finsterwalder (1939). It is primarily an ultimate-load concept. The idea of partial prestressing derives from these ideas.

The load-balancing philosophy, introduced by T. Y. Lin, uses prestressing to counter the effect of the permanent loads (Lin 1963). The sag of the cables causes an upward force on the beam, which counteracts the load on the beam. Clearly, only one load can be balanced, but if this is taken as the total dead weight, then under that load the beam will perceive only the net axial prestress and will have no tendency to creep up or down.

These three philosophies all have their champions, and heated debates take place between them as to which is the most fundamental.

2. Section design

From the outset it was recognised that prestressed concrete has to be checked at both the working load and the ultimate load. For steel structures, and those made from reinforced concrete, there is a fairly direct relationship between the load capacity under an allowable-stress design and that at the ultimate load under an ultimate-strength design. Older codes were based on permissible stresses at the working load; new codes use moment capacities at the ultimate load. Different load factors are used in the two codes, but a structure which passes one code is likely to be acceptable under the other.

For prestressed concrete, those ideas do not hold, since the structure is highly stressed even when unloaded. A small increase of load can cause some stress limits to be breached, while a large increase in load might be needed to cross other limits. The designer has considerable freedom to vary both the working-load and ultimate-load capacities independently; both need to be checked.

A designer normally has to check the tensile and compressive stresses, in both the top and bottom fibre of the section, for every load case.
The critical sections are normally, but not always, the mid-span and the sections over piers, but other sections may become critical when the cable profile has to be determined. The stresses at any position are made up of three components, one of which normally has a different sign from the other two; consistency of sign convention is essential.

If P is the prestressing force and e its eccentricity, A and Z are the area of the cross-section and its elastic section modulus, while M is the applied moment, then

$$f_t \le \frac{P}{A} + \frac{Pe}{Z} - \frac{M}{Z} \le f_c \qquad (1)$$

where $f_t$ and $f_c$ are the permissible stresses in tension and compression. Thus, for any combination of P and M, the designer already has four inequalities to deal with.

The prestressing force differs over time, due to creep losses, and a designer is usually faced with at least three combinations of prestressing force and moment:

• the applied moment at the time the prestress is first applied, before creep losses occur,
• the maximum applied moment after creep losses, and
• the minimum applied moment after creep losses.

Figure 4: Gustave Magnel

Other combinations may be needed in more complex cases. There are at least twelve inequalities that have to be satisfied at any cross-section, but since an I-section can be defined by six variables, and two are needed to define the prestress, the problem is over-specified and it is not immediately obvious which conditions are superfluous. In the hands of inexperienced engineers, the design process can be very long-winded. However, it is possible to separate the design of the cross-section from the design of the prestress. By considering pairs of stress limits on the same fibre, but for different load cases, the effects of the prestress can be eliminated, leaving expressions of the form:

$$Z \ge \frac{\text{Moment range}}{\text{Permissible stress range}} \qquad (2)$$

These inequalities, which can be evaluated exhaustively with little difficulty, allow the minimum size of the cross-section to be determined.

Once a suitable cross-section has been found, the prestress can be designed using a construction due to Magnel (Fig. 4). The stress limits can all be rearranged into the form:

$$e \le -\frac{Z}{A} + \left(fZ + M\right)\frac{1}{P} \qquad (3)$$

By plotting these on a diagram of eccentricity versus the reciprocal of the prestressing force, a series of bound lines will be formed. Provided the inequalities (2) are satisfied, these bound lines will always leave a zone showing all feasible combinations of P and e. The most economical design, using the minimum prestress, usually lies on the right hand side of the diagram, where the design is limited by the permissible tensile stresses.

Plotting the eccentricity on the vertical axis allows direct comparison with the cross-section, as shown in Fig. 5. Inequalities (3) make no reference to the physical dimensions of the structure, but these practical cover limits can be shown as well.
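As a cross-check on the Magnel form reconstructed above (a derivation sketch, not reproduced from the paper), each bound line follows from a single fibre-stress limit. Taking the tension limit in (1) and multiplying through by Z/P (with P > 0):

$$\frac{P}{A} + \frac{Pe}{Z} - \frac{M}{Z} \ge f_t \;\Longrightarrow\; \frac{Z}{A} + e - \frac{M}{P} \ge \frac{f_t Z}{P} \;\Longrightarrow\; e \ge -\frac{Z}{A} + \left(f_t Z + M\right)\frac{1}{P}$$

Each stress limit, for each load case, yields one such expression, linear in e and 1/P, which is why the bounds plot as straight lines on the Magnel diagram.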
A good designer knows how changes to the design and the loadings alter the Magnel diagram. Changing both the maximum and minimum bending moments, but keeping the range the same, raises and lowers the feasible region. If the moments become more sagging, the feasible region gets lower in the beam. In general, as spans increase, the dead load moments increase in proportion to the live load. A stage will be reached where the economic point (A on Fig. 5) moves outside the physical limits of the beam; Guyon (1951a) denoted the limiting condition as the critical span. Shorter spans will be governed by tensile stresses in the two extreme fibres, while longer spans will be governed by the limiting eccentricity and tensile stresses in the bottom fibre. However, it does not take a large increase in moment before compressive stresses govern in the bottom fibre under maximum moment. Only when much longer spans are required, and the feasible region moves as far down as possible, does the structure become governed by compressive stresses in both fibres.

3. Continuous beams

The design of statically determinate beams is relatively straightforward; the engineer can work on the basis of the design of individual cross-sections, as outlined above. A number of complications arise when the structure is indeterminate, which means that the designer has to consider not only a critical section but also the behaviour of the beam as a whole. These are due to the interaction of a number of factors, such as creep, temperature effects and construction sequence effects. It is the development of these ideas which forms the core of this paper. The problems of continuity were addressed at a conference in London (Andrew and Witt 1951). The basic principles, and nomenclature, were already in use, but to modern eyes the concentration on hand analysis techniques was unusual, and one of the principal concerns seems to have been the difficulty of estimating losses of prestressing force.

3.1 Secondary moments

A prestressing cable in a beam causes the structure to deflect. Unlike the statically determinate beam, where this motion is unrestrained, the movement causes a redistribution of the support reactions, which in turn induces additional moments. These are often termed Secondary Moments (but they are not always small) or Parasitic Moments (but they are not always bad).

Freyssinet's bridge across the Marne at Luzancy, started in 1941 but not completed until 1946, is often thought of as a simply supported beam, but it was actually built as a two-hinged arch (Harris 1986), with support reactions adjusted by means of flat jacks and wedges which were later grouted in (Fig. 6). The same principles were applied in the later and larger beams built over the same river.

Magnel built the first indeterminate beam bridge at Sclayn, in Belgium (Fig. 7) in 1946. The cables are virtually straight, but he adjusted the deck profile so that the cables were close to the soffit near mid-span. Even with straight cables the sagging secondary moments are large: about 50% of the hogging moment at the central support caused by dead and live load.

The secondary moments cannot be found until the profile is known, but the cable cannot be designed until the secondary moments are known. Guyon (1951b) introduced the concept of the concordant profile, which is a profile that causes no secondary moments; e_s and e_p thus coincide. Any line of thrust is itself a concordant profile.

The designer is then faced with a slightly simpler problem: a cable profile has to be chosen which not only satisfies the eccentricity limits (3) but is also concordant. That in itself is not a trivial operation, but it is helped by the fact that the bending moment diagram that results from any load applied to a beam will itself be a concordant profile for a cable of constant force. Such loads are termed notional loads to distinguish them from the real loads on the structure.
Superposition can be used to progressively build up a set of notional loads whose bending moment diagram gives the desired concordant profile.

3.2 Temperature effects

Temperature variations apply to all structures, but the effect on prestressed concrete beams can be more pronounced than in other structures. The temperature profile through the depth of a beam (Emerson 1973) can be split into three components for the purposes of calculation (Hambly 1991). The first causes a longitudinal expansion, which is normally released by the articulation of the structure; the second causes curvature, which leads to deflection in all beams and reactant moments in continuous beams; while the third causes a self-equilibrating set of stresses across the cross-section.

The reactant moments can be calculated and allowed for, but it is the self-equilibrating stresses that cause the main problems for prestressed concrete beams. These beams normally have high thermal mass, which means that daily temperature variations do not penetrate to the core of the structure. The result is a very non-uniform temperature distribution across the depth, which in turn leads to significant self-equilibrating stresses. If the core of the structure is warm while the surface is cool, such as at night, then quite large tensile stresses can be developed on the top and bottom surfaces. However, they only penetrate a very short distance into the concrete and the potential crack width is very small. It can be very expensive to overcome the tensile stress by changing the section or the prestress.

Water supply treatment: foreign literature original + translation

Membrane technology and water treatment in environmental protection

REN Jianxin¹, ZHANG Baocheng²
(1. China National Blue Star Chemical Cleaning Co., No. 9 West Road, Beitucheng, Chaoyang District, Beijing 100029, China; 2. Department of Chemical Engineering, Polytechnic of Turin, Corso Duca degli Abruzzi 24, Torino 10129, Italy)

Abstract: The paper presents a general summary of the state of the water resources and membrane industry of China. Water pollution is becoming ever more serious, and water resources are increasingly scarce worldwide. China has 660 cities, 360 of which are short of water. The situation in 110 cities is serious, and in 40 cities it is dangerous. It has been predicted that water could become a main cause of local conflicts and international wars.

Foreign-language original for translation: The Role of the Father in Child Development

The Role of the Father in Child Development, Fifth Edition

Edited by Michael E. Lamb, University of Cambridge
John Wiley & Sons, Inc.

CHAPTER 1

How Do Fathers Influence Children's Development? Let Me Count the Ways

MICHAEL E. LAMB

IT IS OFTEN claimed that psychology became a science in the second half of the 19th century, led in part by continental (mostly German) research on perception, psychophysics, and memory, Galton's attempts to measure intelligence and establish the importance of heredity, and William James's efforts to create a coherent theoretical edifice, which might guide the derivation of empirical answers to age-old philosophical questions. For those who study the development of personality and social behavior, however, the key figure was Freud, who pioneered the close study of pathology as a medium through which to elucidate psychological functioning and spawned a plethora of admirers and critics who constructed much of the popular and scientific psychology we encounter in books such as this. For example, we owe Freud credit for the proposition, now widely viewed as an article of faith, that childhood experiences shape subsequent personality and behavior, although Freud himself only shifted the focus from late childhood and early adolescence to infancy very late in his life. Similarly, it was Freud who placed special emphasis on the formative importance of parent–child relationships, although the specific mechanisms he considered have since been widely discredited. Furthermore, although Freud (and the cohort of psychoanalysts and psychodynamic theorists he inspired) published prodigiously from just before the turn of the nineteenth century to the time of the Second World War, the scientific study of social, personality, and developmental psychology really took off in the postwar period, initially dominated by social learning theorists who rejected Freud's theoretical architecture even as they embraced many of the related beliefs and concepts, including those regarding the importance of parent–child relationships, although neo-analysts played a central role in the construction of attachment theory, which dominates parts of developmental psychology to this day.

Developmental psychology changed from a discipline dominated by theoretical analysis to one dominated by empirical research, much of it initially conducted in North America, in the years following World War II. This is often viewed as a politically conservative era, dominated by policies designed to put into the past the rigors and horrors of both the Depression and the two world wars by creating a new age of affluence and opportunity.
In practice,this involved championing the‘‘traditional’’nuclear family, dominated by a breadwinning father and a home-making,child-rearing mother,often housed some distance from either parent’s biological or metaphorical roots.Not surprisingly,psychologists embraced these values of the society in which they were reared and lived,so their initial empirical forays into research on children’s early development were dominated by mothers—as informants,as the cofocus of observations,and as the‘‘social-izing’’figures about whom they theorized.Where fathers did enter the picture,their roles were often represented through the eyes and voices of their partners,or they were judged against the models of family function developed by family theorists who shared similar societal assumptions.In such a context,it was easy(if exaggeratedly provocative)to entitle myfirst essay on the subject:‘‘Fathers:Forgotten Contributions to Child Develop-ment’’(Lamb,1975).Three and a half decades later,the scholarly landscape has changed dramatically.Thousands of professional articles have explored the ways in which fathers affect their children’s development,and the contributors to this anthology provide a thorough and readable summary of our contemporary understanding.My goal in this introductory chapter is to sketch some of the overarching themes that dominate the book.FATHERS AND THEIR ROLESW HAT D O F ATHERS D O?It seems logical to begin this anthology by examining definitions and de-scriptions of fathering.What roles do fathers play in family life today?What taxonomies might effectively characterize fathers’activities with and com-mitments to their children?What do fathers do when they are available to their children,and why they do what they do?In this regard,a fuller conceptualization of fathers’roles and the origins of their‘‘prescribed’’responsibilities is warranted.As several contributors illustrate in this volume, historical,cultural,and familial ideologies inform the roles fathers play and undoubtedly shape the absolute amounts of time fathers spend with their children,the activities they share with them,and perhaps even the quality of the relationships between fathers and children.In earlier times,fathers were viewed as all-powerful patriarchs who wielded enormous power over their families(Knibiehler,1995)and vestiges of these notions continued until quite recently.According to Pleck and Pleck (1997),for example,Euro-American fathers were viewed primarily as moral teachers during the colonial phase of American history.By popular consen-sus,fathers were primarily responsible for ensuring that their children grewFathers and their Roles3 up with an appropriate sense of values,acquired primarily from a study of the Bible and other scriptural texts.Around the time of industrialization, however,the primary focus shifted from moral leadership to breadwinning and economic support of the family.Then,perhaps as a result of the Great Depression,which revealed many hapless men as poor providers,social scientists came to portray fathers as sex role models,with commentators expressing concern about the failures of many men to model masculine behavior for their sons.Throughout the20th century,fathers were urged to be involved(Griswold,1993),and following feminist and scholarly cri-tiques of masculinity and femininity,there emerged in the late1970s a concern with the‘‘new nurturant father,’’who played an active role in his children’s lives.As Elizabeth Pleck(2004)explained,however,popular and scholarly discussions of 
fatherhood have long dwelled on the importance of involvement—often defined by successful breadwinning—and the fear of inadequate fathering.In contrast to earlier conceptualizations of fathers’roles,often focused quite narrowly on breadwinning,and later discussions focused narrowly on‘‘involvement,’’researchers,theorists,and practitioners no longer cling to the simplistic belief that fathers ideallyfill a unidimensional and universal role in their families and in their children’s eyes.Instead,they recognize that fathers play a number of significant roles—companions,care providers,spouses,protectors,models,moral guides,teachers,and bread-winners—whose relative importance varies across historical epochs and subcultural groups.Only by considering fathers’performance of these vari-ous roles,and by taking into account their relative importance in the socio-ecological contexts concerned,can fathers’impact on child development be evaluated.Unfortunately,theorists and social commentators have tended in the past to emphasize only one paternal role at a time,with different functions attracting most attention during different historical epochs.Focusing on fathers’behavior when with their children,much of the observational and survey data collected by developmental and social psy-chologists in the1970s and early1980s(e.g.,Lamb,1977)suggested that mothers and fathers engage in rather different types of interaction with their children,especially in Anglo-Saxon countries like the United States(see Chapter4).These studies have consistently shown that fathers tend to ‘‘specialize’’in play,whereas mothers specialize in caretaking and nurtur-ance,especially(but not only)in relation to infants.Although suchfindings seem quite reliable,the results have often been misrepresented,and have led to overly stereotypical and unidimensional portrayals of fathers as play pared with mothers,fathers indeed spend a greater proportion of their time with children engaged in play,but they still spend most of their time with children engaged in other activities.In absolute terms,most studies suggest that mothers play with their children more than fathers do,but because play(particularly boisterous, stimulating,emotionally arousing play)is more prominent in father–child interaction,paternal playfulness and relative novelty may help make fathers especially salient to their children(Lamb,Frodi,Hwang,&Frodi,1983).This enhanced salience may increase fathers’influence more than would be expected based on the amount of time they spend with their children.4H OW D O F ATHERS I NFLUENCE C HILDREN’S D EVELOPMENT?L ET M E C OUNT THE W AYS However,comparative studies,in which fathers’interactions are con-trasted with those of mothers,typically focus on mean level differences in parenting activities,and often obscure other common patterns of parent–child interaction.By highlighting the predominant qualities of fathers and mothers,they may promote narrow views of fathers’and mothers’roles, thereby failing to capture similarities in the meaning or degree of influence parents exert on their children.In fact,both fathers and mothers encourage exploration during play with their infants(Power,1985),alter their speech patterns to infants by speaking slowly and using shorter phrases(Dalton-Hummel,1982;Golinkoff&Ames,1979;Rondal,1980),respond to their infants’cries and smiles(Berman,1980),even when otherwise engaged (Notaro&Volling,1999),and adjust their behaviors to accommodate devel-opmental changes in their 
infants’competencies(Belsky,Gilstrap,&Rovine, 1984;Crawley&Sherrod,1984).Sensitive fathering—responding to,talking to,scaffolding,teaching and encouraging their children to learn—predicts children’s socio-emotional,cognitive,and linguistic achievements just as sensitive mothering does(e.g.,Conner,Knight,&Cross,1997;Easterbrooks &Goldberg,1984;Shannon,Tamis-LeMonda,London,&Cabrera,2002;Van IJzendoorn&De Wolff,1997).Suchfindings suggest that fathers can and do engage with their children in many different ways,not only as playmates,and that they are more than role models for their children.The broader,more inclusive conceptualization of fathers’roles recognizes the appreciable variation that exists both within and between fathers.Most individual fathers assume numerous roles in their families(including bread-winner,playmate,guide,caregiver),although fathers differ with respect to the relative importance of these diverse roles.F ATHERS’I NFLUENCES ON C HILDRENA second line of research on fatherhood examines fathers’effects on children and the pathways through which those effects are exerted.Which aspects of child development are influenced most,at what ages,under which circum-stances,and why?Three types of studies have been designed to explore this topic:correlational studies,studies of father absence and divorce,and studies of involved fathers.Here,we review these research methods and then examine direct and indirect effects of fathering on child development. Correlational Studies Many of the earliest studies of paternal influences were designed to identify correlations between paternal andfilial character-istics.The vast majority of these studies were conducted between1940and 1970,when the father’s role as a sex role model was considered most important;as a result,most studies were focused on sex role development, especially in sons(for reviews,see Biller,1971;Lamb,1981).The design of these early studies was quite simple:Researchers assessed masculinity in fathers and in sons,and then determined how strongly the two sets of scores were correlated.To the great surprise of most researchers,however,there was no consistent correlation between the two constructs,a puzzling finding because it seemed to violate a guiding assumption about the crucialFathers and their Roles5 function served by fathers.If fathers did not make their boys into men,what role did they really serve?It took a while for psychologists to realize that they had failed to ask:Why should boys want to be like their fathers?Presumably,they should only want to resemble fathers whom they liked and respected,and with whom their relationships were warm and positive.In fact,the quality of father–son relationships proved to be a crucial mediating variable:When the relationships between masculine fathers and their sons were good,the boys were indeed more masculine.Subsequent research even suggested that the quality of the father–child relationships was more important than the masculinity of the father(Mussen&Rutherford,1963;Payne&Mussen,1956;Sears,Maccoby,& Levin,1957).Boys seemed to conform to the sex role standards of their communities when their relationships with their fathers were warm,regard-less of how‘‘masculine’’the fathers were,even though warmth and intimacy have traditionally been seen as feminine characteristics.A similar conclusion was suggested by research on other aspects of psychosocial adjustment and on achievement:Paternal warmth or closeness appeared beneficial,whereas paternal masculinity appeared to be 
irrelevant(Biller,1971;Lamb,1981;Radin, 1981).By the1980s,it had thus become clear that fathers and mothers influence children in similar ways by virtue of nurturant personal and social character-istics(see Chapter4).Research summarized in this volume by Golombok and Tasker(Chapter11)goes even further,indicating that the sexual orientation of homosexual fathers does not increase the likelihood that their children will be homosexual,effeminate,or maladjusted.As far as influences on children are concerned,in sum,very little about the gender of the parent seems to be distinctly important.The characteristics of the father as a parent rather than the characteristics of the father as a male adult appear to be most significant,although some scholars and social commentators continued to underscore the crucial importance of distinctive maternal and paternal roles into the late1990s(Biller,1994;Blankenhorn, 1995;Popenoe,1996).Studies of Father Absence and Divorce While the whole body of research that is here termed correlational was burgeoning in the1950s,another body of literature comprising studies in which researchers tried to understand the father’s role by examining families without fathers was developing in paral-lel.The assumption was that,by comparing the behavior and personalities of children raised with and without fathers,one could—essentially by a process of subtraction—estimate what sort of influences fathers typically had on their children’s development.The early father-absence and correlational studies were conducted in roughly the same era;not surprisingly,therefore,the outcomes studied were very similar and the implications were similar and consistent with popular assumptions as well(see Adams,Milner,&Schrepf, 1984;Biller,1974,1993;Blankenhorn,1995;Herzog&Sudia,1973;Whitehead, 1993,for reviews):Children—especially boys—growing up without fathers seemed to have‘‘problems’’in the areas of sex role and gender-identity development,school performance,psychosocial adjustment,and perhaps in the control of aggression.THE DEFINITIVE REFERENCE ONTHE IMPORTANT ROLE FATHERS PLAY IN CHILD DEVELOPMENT TODAYMICHAEL E. LAMB, P H D, is Professor of Psychology in the Social Sciences, Cambridge University, and has served as head of the Section on Social and Emotional Development at the National Institute of Child Health and Human Development. His current research is concerned with the evaluation, validation, and facilitation of children’s accounts of sexual abuse; the effects of domestic violence on children’s development; the effects of contrasting patterns of early child care on children and their families; and the description of early patterns of infant care in diverse sociocultural ecologies.。

Accounts Receivable [Foreign Translation]


Foreign Literature Translation

I. Original Text

Accounts Receivable Issues

For many companies, the accounts receivable portfolio is its largest asset. Thus, it deserves special care and attention. Effective handling of the portfolio can add to the bottom line, while neglect can cost companies in unseen losses.

Accounts Receivable Strategies to Energize the Bottom Line

Don't be surprised to find the big shots from finance suddenly looking over your shoulder, questioning the ways your credit department operates. Accounts receivable has become the darling of those executives desperate to optimize working capital and improve their balance sheet. Here's a roundup of some of the tactics collected from the best credit managers to squeeze every last cent out of their accounts receivable portfolio:

· Have invoices printed and mailed as quickly as possible. Most customers start the clock ticking when the invoice arrives in their offices. The sooner you can get the invoice to them, the sooner they will pay you. While this strategy will not affect days sales outstanding (DSO), it will improve the bottom line.
· Look for ways to improve invoice accuracy without delaying the mail date.
· Offer more stringent terms where appropriate in your annual credit reviews and with new customers. Consider whether shorter terms might be better for your company.
· Offer financial inducements to customers who agree to pay your invoices electronically.
· If you have not had a lockbox study performed in the last few years, have one done to determine your optimal lockbox location.
· With customers who have a history of paying late, begin your collection efforts before the due date. Call to inquire whether they have the invoice and if everything is in order. Resolve any problems quickly at this point.
· If you have been giving a grace period to those taking discounts after the discount period, reduce or eliminate it.
· Resolve all discrepancies quickly so payment can be made promptly.
· If a customer indicates it has a problem with part of an invoice, authorize partial payments.
· Keep a log of customer problems and analyze it once a month to discover weaknesses in your procedures that cause these quandaries.
· Apply cash the same day the payment is received. Collectors can then spend their time with customers who have not paid rather than annoying ones who have already sent their payment.
· Deal with a bank that makes lockbox information available immediately by fax or, preferably, online. Then when a customer claims it has made a payment, the collector will be able to verify this.
· Look into ways to accept P-cards from customers placing small orders and those who cannot be extended credit on open account terms.
· Benchmark department and individual collectors' performance to pinpoint those areas and individuals in need of additional training.

Review your own policies and procedures to determine if there are any areas that could be tweaked to improve cash flow. Then, when the call comes from executive quarters, you will be ready, and they will be hard pressed to find ways that you fell down on the job.

Dealing with Purchase Orders

Leading credit managers have learned to pay attention to the purchase orders that their companies receive. Specifically, they want to ensure that the purchase order accepted by the salesperson does not include clauses that will ultimately cause trouble for their companies, or even legal difficulties later on. Realistically, the salesperson should have caught the problem, but he or she rarely does.
When the customer doesn't pay due to one of these technicalities, it's not the salesperson who will get blamed. To help avoid a purchase order disaster, credit professionals can take the following steps:

1. Simply read the purchase order. Vendors often slip clauses into purchase orders that you would never agree to. One favorite is to include a statement saying the seller will be paid as soon as its customer pays the buyer. This is a risk few companies are willing to tolerate.

2. Prioritize attachments. Typically, buyers write purchase orders that contain attachments. These include drawings, specifications, supplementary terms and conditions for work done on company premises, or safety rules for the supplier. When including attachments, it is recommended that one of them be a list of priorities to guard against any inconsistencies in the documents. The purchase order should "clearly reference all the attachments, and there should be a recitation as to which attachments are controlling over the others." In the event of any inconsistency between or among these documents, the purchase order shall be controlling over any attachments, and the attachments shall be interpreted using the priority listed.

3. Take care when reference is made to a buyer's documents in the purchase order. There are likely to be both helpful and harmful statements in those documents that reference the buyer's material. The buyer may have printed its own terms and conditions on the back of a document. By referring to the document in the purchase order, you may inadvertently refer not only to the price, but also to terms and conditions, which may include warranty disclaimers and limitations of remedies that your company does not intend to give. Instead, the recommendation is not to refer to the buyer's documents. Insist that the information is specified in the purchase order. If this is not practical, the following language might work: "Any reference to the purchaser's quotation contained in this purchase order is a reference for convenience only, and no such reference shall be deemed to include any of the purchaser's standard terms and conditions of sale. The seller expressly rejects anything in any of the buyer's documents that is inconsistent with the seller's standard terms and conditions." Another favorite is to include terms and conditions on the back of the purchase order written in very small print and a pale (almost undecipherable) color.

4. Be careful of confirming purchase orders. Often, buyers will place orders via telephone, only to later confirm them with a written purchase order. In oral contracts, the buyer will often want the purchase order to be more than just an offer. Therefore, the buyer will try to show on the purchase order that it is a confirming purchase order and cement the oral contract made over the phone. If the buyer does so, the confirming purchase order will satisfy the Uniform Commercial Code (UCC) requirement of a written confirmation unless the other side objects to it within ten days. More than one cunning purchaser has slipped terms into a confirming purchase order that were nothing like those agreed to orally. Don't fall into the trap of assuming that the confirming purchase order confirms what was actually said on the phone.

Credit professionals who take these few extra steps with regard to purchase orders will limit their troubles.

Quality of Accounts Receivable: Days Sales Outstanding

Many credit professionals are measured on their effectiveness by reviewing the accounts receivable portfolio.
The most common measurement is the length of time a sale stays outstanding before being paid. The Credit Research Foundation (CRF) defines DSO as the average time in days that receivables are outstanding. It helps determine if a change in receivables is due to a change in sales, or to another factor such as a change in selling terms. An analyst might compare the days' sales in receivables with the company's credit terms as an indication of how efficiently the company manages its receivables. Days sales outstanding is occasionally referred to as days receivable outstanding as well. The formula to calculate DSO is:

\[ \mathrm{DSO} = \frac{\text{Gross Receivables}}{\text{Annual Net Sales}} \times 365 \]

Quality of Accounts Receivable: Collection Effectiveness Index

Some feel that the quality of the portfolio is dependent to a large extent on the efforts of the collection staff. This is measured by the collection effectiveness index (CEI). The CRF says this percentage expresses the effectiveness of collection efforts over time. The closer to 100% the ratio gets, the more effective the collection effort. It is a measure of the quality of collection of receivables, not of time. Here's the formula to calculate the CEI:

\[ \mathrm{CEI} = \frac{\text{Beginning Receivables} + \text{Credit Sales}/N - \text{Ending Total Receivables}}{\text{Beginning Receivables} + \text{Credit Sales}/N - \text{Ending Current Receivables}} \times 100 \]

where N is the number of months or days in the period analyzed.

Quality of Accounts Receivable: Best Possible Days Sales Outstanding

Many credit professionals find fault with using DSO to measure their performance. They feel that a better measure is one based on average terms based on customer payment patterns. The CRF says that this figure expresses the best possible level of receivables, and believes this measure should be used together with DSO. The closer the overall DSO is to the average terms based on customer payment patterns (best possible DSO [BPDSO]), the closer the receivables are to the optimal level. The formula for calculating BPDSO is:

\[ \mathrm{BPDSO} = \frac{\text{Current Receivables} \times \text{Number of Days in Period Analyzed}}{\text{Credit Sales for Period Analyzed}} \]

Bad-Debt Reserves

Inevitably, no matter how good the credit professional, a company will have a customer that does not pay its debts. Most companies understand that bad debts are simply part of doing business and reserve for bad debts. In fact, many believe that a company with no bad debts is not doing a good job, the reasoning being that if the company loosened its credit terms slightly, it would greatly increase its sales and, even after accounting for the bad debts, its profits. Thus, most companies plan for bad debt, monitor it, and periodically, depending on the company's outlook, revise projections and credit policy to allow for an increase or decrease.

For example, as the economy goes into a recession, most companies will experience an increase in bad debts if their credit policy remains static. So, in light of declining economic conditions, companies should either increase their bad-debt reserves or tighten the credit policy. Similarly, if the economy is improving, a company would take reverse actions, either decreasing the reserve for bad debts or loosening the credit policy. Many companies take advantage of a favorable economy to expand their customer base; they might simultaneously increase the bad-debt reserve and loosen credit policy. Obviously, these decisions are typically made at a fairly high level. Other factors will also come into play in establishing a bad-debt reserve. Industry conditions are key and can often be quite different than the state of the economy.
This is especially true when competition comes from foreign markets. There is no one set way to calculate the reserve for bad debts. Many simply take a percentage of sales or outstanding accounts receivable, or they make some other relatively uncomplicated calculation.

How to Reduce Your Bad-Debt Write-Offs

Most credit and collection professionals would love to be able to brag about having no bad-debt write-offs. Few can. While a goal of reducing the amount of bad-debt write-offs to zero might be unrealistic in most industries, keeping that number as low as possible is something within the control of today's credit managers. The following seven techniques will help you keep your numbers as low as possible:

1. Call early. Don't wait until the account goes 30 or even 60 days past due before calling customers about late payments. Such delays can mean that, in the case of a financially unstable company, a second and perhaps even a third shipment will be made to a customer who ultimately will pay for naught. Some professionals even call a few days before the payment is due to ensure that everything is in order and the customer has everything it needs to make a timely payment. By beginning your calling campaign as early as possible, it is possible to uncover shaky situations. Even if payment is not received for the first delivery, future orders are not accepted, effectively reducing bad-debt write-offs.

2. Communicate, communicate, communicate. Keep the dialogue open with everyone involved. This not only includes your customers, but the sales force as well. In many cases, they are in a better position than the credit manager to know when a customer is on thin ice. With good lines of communication between sales and credit, it is possible to avoid taking some of those orders that will ultimately have to be written off.

3. Follow up, follow up, follow up. Continual follow-up with customers is important, whether you're trying to collect on a timely basis or attempting to avoid a bad-debt write-off. If the customer knows you will call every few days or will be calling to track the status of promises made, it is much more likely to pay. This can also be the case of the squeaky wheel getting the grease, or in this case the money, when cash is tight.

4. Systematize. Many collection professionals keep track of promises and deadlines by hand, on a pad or calendar. Items tend to fall through the cracks with this approach. Invest some money either in prepackaged software or in developing your own in-house, and the likelihood of losing track of customers diminishes. Some accounting programs have a tracking capability that many have not taken the time to learn. If your software has such a facility, use it.

5. Specialize. Set up a group of one or more individuals who do nothing but try to collect receivables that are overdue. By having experts on staff to handle such work, you will improve your collection rate and speed.

6. Credit hold. Putting customers on credit hold early in the picture will sometimes entice a payment from someone who really had no intention of paying you. This technique is particularly effective with customers who rely heavily on your product and would be hard put to get it elsewhere. Of course, if you sell something that many other vendors sell as well, putting a potentially good customer on hold could backfire.

7. Small claims court. Some credit professionals have had great success in collecting smaller amounts by taking the customer to small claims court.
The limits for such actions vary by state but can be as high as $10,000. While these techniques will not necessarily squeeze money from a bankrupt client, they will help you get as much as possible as soon as possible from as many of your customers as possible. This can be especially important in avoiding preference actions with clients who eventually do file. The quicker you get the clock ticking, the more likely you are to be able to avoid preference claims.

Source: Mary S. Schaeffer, 2002, Essentials of Credit, Collections, and Accounts Receivable, John Wiley & Sons, Inc. (October 01, 2002): pp. 81–102.

II. Translated Text: Accounts Receivable. For many companies, accounts receivable is their largest asset.
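To make the three CRF receivables metrics defined above concrete, here is a minimal C sketch that computes DSO, CEI, and BPDSO. All variable names and input figures are hypothetical illustrations, not data from the source; the period analyzed is assumed to be one month (N = 1, 30 days).

```c
#include <stdio.h>

/* Minimal sketch of the three CRF receivables metrics discussed above.
   All input figures are hypothetical, for illustration only. */
int main(void) {
    double gross_receivables     = 1200000.0; /* total receivables outstanding   */
    double annual_net_sales      = 7300000.0;
    double beginning_receivables = 1100000.0;
    double credit_sales          =  600000.0; /* credit sales for the period     */
    double ending_total_recv     = 1150000.0;
    double ending_current_recv   =  900000.0; /* not-yet-due portion             */
    double n_months              = 1.0;       /* N: months in the period         */
    double days_in_period        = 30.0;

    /* DSO = Gross Receivables / Annual Net Sales * 365 */
    double dso = gross_receivables / annual_net_sales * 365.0;

    /* CEI = (Beg. Recv + Credit Sales/N - Ending Total Recv)
           / (Beg. Recv + Credit Sales/N - Ending Current Recv) * 100 */
    double cei = (beginning_receivables + credit_sales / n_months - ending_total_recv)
               / (beginning_receivables + credit_sales / n_months - ending_current_recv)
               * 100.0;

    /* BPDSO = Current Receivables * Days in Period / Credit Sales for Period */
    double bpdso = ending_current_recv * days_in_period / credit_sales;

    printf("DSO   = %.1f days\n", dso);   /* 60.0 days with these inputs  */
    printf("CEI   = %.1f %%\n", cei);     /* 68.8 %% with these inputs    */
    printf("BPDSO = %.1f days\n", bpdso); /* 45.0 days with these inputs  */
    return 0;
}
```

Comparing the computed DSO against BPDSO, as the CRF suggests, indicates how far actual collections sit from the best achievable level given customer payment patterns.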

20 Foreign Literature Translation: Reference Sample of Original and Translated Text


North China Electric Power University, Science and Technology College. Graduation Design (Thesis) Appendix: Foreign Literature Translation. Student ID: 0819********; Name: Zong Pengcheng; Department: Mechanical Engineering and Automation; Class: Mechanical 08K1; Advisor: Zhang Chao. Original title: Development of a High-Performance Magnetic Gear.

Development of a High-Performance Magnetic Gear

Abstract: This paper presents calculated and measured results for a high-performance permanent-magnet gear.

The analyzed permanent-magnet gear has a gear ratio of 5.5 and can deliver a torque of 27 Nm. The analysis shows that, because its torsional spring constant is small, special attention must be paid to any system in which such a high-performance magnetic gear is installed. The analyzed gear has also been built and used in practice in order to verify the predicted efficiency. As measured, the torque of the magnetic gear is only 16 Nm, owing to the drive at the larger gear end. A systematic study of the efficiency losses of the magnetic gear also shows why the actual operating efficiency is only 81%. A large part of the loss originates in the bearings; because of mechanical faults, a backup bearing was necessary here. Without the small amount of magnetic flux leakage from the shaft, we estimate that an efficiency as high as 96% could be obtained. Comparison with conventional mechanical gears shows that the magnetic gear has better efficiency and a larger torque per unit volume. Finally, it can be concluded that the results of this study may help promote the development from conventional mechanical gears toward magnetic gears.

Keywords: finite element analysis (FEA), gearbox, high torque density, magnetic gear.

I. Introduction

Because permanent magnets can generate magnetic flux and magnetic force, many people have remained fascinated by them even after several centuries. During the revival of the past 20 years, it is precisely these advantages that have brought permanent magnets into wide practical use, including in cranes, loudspeakers, and couplings, and especially in permanent-magnet motors. This revival is most evident in small machines, where the use of permanent magnets has markedly improved efficiency and torque density. One area in which permanent magnets have not received much attention is that of transmissions; that is, magnetic couplings are not widely used in gearing. A magnetic coupling can essentially be regarded as a magnetic gear with a transmission ratio of 1:1. Compared with standard electrical machines, which have a torque density of about 10 kN·m/m³, magnetic couplings fitted with high-energy permanent magnets have a very high torque per unit volume, in the range of roughly 300–400 kN·m/m³.
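Restating that comparison compactly (the cubic-meter superscript appears to have been lost in extraction, so the unit kN·m/m³ is an assumption):

\[
\left(\frac{T}{V}\right)_{\mathrm{PM\ coupling}} \approx 300\text{--}400\ \mathrm{kN\,m/m^3}
\;\gg\;
\left(\frac{T}{V}\right)_{\mathrm{elec.\ machine}} \approx 10\ \mathrm{kN\,m/m^3}
\]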


Agricultural Wireless Temperature and Humidity Sensor Network Based on ZigBee Technology

Combining ZigBee technology with agricultural production practice, this paper proposes a design for an agricultural wireless temperature and humidity sensor network. The sensor nodes and the coordinator node are built around the CC2530, a ZigBee-protocol system-on-chip, and handle data acquisition, transmission, and display, with the goal of achieving automated and precision agriculture.

Keywords: agriculture, production, temperature and humidity, wireless network, sensor.

1. Introduction

At present, many aspects of production and daily life require acquiring and processing temperature and humidity information from the surrounding environment. The traditional approach has been to collect the readings with temperature and humidity sensors and then forward the data to a monitoring center over an RS-485 bus or a fieldbus, which requires laying large amounts of cable.

Traditional agriculture mainly uses isolated mechanical equipment with no communication capability and relies chiefly on people to monitor crop growth. If ZigBee wireless sensor network technology is adopted, however, agriculture can gradually shift to an information-centered production model, using more automated, networked, and intelligent farming methods and enabling remote wireless control of equipment. Sensors can collect information such as soil moisture, nitrogen concentration, pH value, precipitation, temperature, air humidity, and air pressure. The collected information, together with its location, is passed over the ZigBee network to a central control device for decision-making and reference, so problems can be identified early and accurately, helping to maintain and improve crop yields. In many data-oriented wireless transmission applications, such low-cost, low-complexity wireless networks are widely used.

2. Technical Features of ZigBee

ZigBee is a short-range, low-complexity, low-power, low-data-rate, low-cost, two-way wireless communication technology. It is used mainly in the fields of automatic control and remote control and can be embedded in all kinds of devices to automate them [1]. Among existing wireless communication technologies, ZigBee offers the lowest power consumption and cost. Its data rate is low, in the range of 10 kb/s to 250 kb/s, and it is aimed at low-rate transmission. In low-power standby mode, two ordinary AA batteries can last 6 to 24 months. Because the data rate is low and the protocol is simple, cost is greatly reduced. The network capacity is large, accommodating up to 65,000 devices, and latency is short, generally 15 ms to 30 ms. ZigBee provides data integrity checking and authentication using the AES-128 encryption algorithm, and it uses the license-free 2.4 GHz band as a reliable transmission path.

3. Overall System Design

The ZigBee-based wireless temperature and humidity sensor network consists of three parts: transmitters, a receiver, and a display system. The transmitter side is made up of multiple end-device nodes; each node contains a temperature and humidity sensor and a ZigBee RF module. The sensor in the greenhouse collects temperature and humidity readings, and the data are passed to the ZigBee RF module. Correction of the readings is handled by the chip embedded in the ZigBee RF module, and the corrected data are then sent over the ZigBee wireless network to the receiver. The receiver consists of a ZigBee RF module and an RS232 serial module; acting as the network coordinator, it establishes a star network, receives the collected data over the ZigBee network, and forwards them to the display system via RS232. This is the temperature and humidity collection and delivery process of the system's transmitting nodes.
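As a concrete illustration of this data path, here is a minimal C sketch of a sensor report record that a node might assemble before handing it to the radio, and that the coordinator would then forward unchanged over RS232. The field layout, names, and checksum scheme are assumptions for illustration, not taken from the source design.

```c
#include <stdint.h>

/* Hypothetical over-the-air report assembled by each end node.
   The coordinator forwards the same bytes to the PC over RS232. */
typedef struct {
    uint16_t node_addr;     /* ZigBee short (network) address of the node      */
    int16_t  temp_centi_c;  /* temperature in 0.01 degC steps (2534 = 25.34 C) */
    uint16_t rh_centi_pct;  /* relative humidity in 0.01 %RH steps             */
    uint8_t  checksum;      /* simple additive checksum over the fields above  */
} sensor_report_t;

/* Additive 8-bit checksum over the first len bytes of a buffer. */
static uint8_t checksum8(const uint8_t *p, unsigned len) {
    uint8_t sum = 0;
    while (len--) sum += *p++;
    return sum;
}
```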

Figure 1. Overall system simulation scheme
4. System Hardware Design

The CC2530 meets the low-cost, low-power requirements of ZigBee in the 2.4 GHz ISM band. It combines a high-performance 2.4 GHz DSSS (direct sequence spread spectrum) RF transceiver core with an 8051 microcontroller; the ZigBee RF front end, memory, and microcontroller are integrated on a single chip. It provides 128 KB of programmable flash and 8 KB of RAM and includes an ADC, timers, a 32 kHz crystal oscillator, sleep modes, a power-on reset circuit, a brown-out detection circuit, and 20 programmable I/O pins, which makes node miniaturization possible [2]. The CC2530 single-chip radio is characterized by very low power consumption: standby current draw is only 0.2 µA, and when running from the 32 kHz crystal clock it consumes less than 1 µA.

The SHT11 temperature and humidity sensor integrates several circuits on one chip: temperature and humidity sensing, signal amplification and conditioning, A/D conversion, and a digital communication interface. Its humidity measurement range is 0–100% RH and its temperature measurement range is −40 °C to +123.8 °C, with a humidity accuracy of ±3.0% RH, a temperature accuracy of ±0.4 °C, and a response time under 4 seconds. For the digital interface, the SHT11 provides a two-wire serial interface consisting of a clock line SCK and a data line DAT. SCK is the serial clock line used by the microprocessor to synchronize communication; DAT is the serial data line that carries data to and from the microprocessor. The chip interface is simple, transmission is reliable, and the measurement resolution can be adjusted by programming. After measurement and communication are complete, the chip automatically enters its low-power mode.
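The two-wire interface just described is typically driven by bit-banging two GPIOs. Below is a minimal C sketch of an SHT11 temperature read, assuming hypothetical pin macros SCK_HI()/SCK_LO()/DAT_HI()/DAT_LO()/DAT_RD() mapped to two CC2530 I/O pins; timing delays and CRC checking are omitted, so this illustrates the transmission-start/command/ACK sequence rather than production driver code.

```c
#define CMD_MEASURE_TEMP 0x03  /* SHT11 command codes */
#define CMD_MEASURE_RH   0x05

/* "Transmission start": DAT falls while SCK is high, then rises while SCK is high. */
static void sht11_start(void) {
    DAT_HI(); SCK_HI();
    DAT_LO();                    /* DAT low with SCK high  */
    SCK_LO();
    SCK_HI();
    DAT_HI();                    /* DAT high with SCK high */
    SCK_LO();
}

/* Clock out one command byte, MSB first, then check the sensor's ACK. */
static int sht11_send_cmd(unsigned char cmd) {
    unsigned char bit;
    for (bit = 0x80; bit; bit >>= 1) {
        if (cmd & bit) DAT_HI(); else DAT_LO();
        SCK_HI(); SCK_LO();
    }
    DAT_HI();                    /* release DAT so the sensor can pull it low */
    SCK_HI();
    if (DAT_RD()) return -1;     /* no ACK from the sensor */
    SCK_LO();
    return 0;
}

/* Read one byte, MSB first; ack = 1 keeps the transfer going. */
static unsigned char sht11_read_byte(int ack) {
    unsigned char bit, val = 0;
    DAT_HI();
    for (bit = 0x80; bit; bit >>= 1) {
        SCK_HI();
        if (DAT_RD()) val |= bit;
        SCK_LO();
    }
    if (ack) DAT_LO();           /* ACK: pull DAT low for one clock */
    SCK_HI(); SCK_LO();
    DAT_HI();
    return val;
}

/* Issue a temperature measurement and return the raw 14-bit result. */
int sht11_measure_temp(unsigned int *raw) {
    sht11_start();
    if (sht11_send_cmd(CMD_MEASURE_TEMP)) return -1;
    while (DAT_RD())             /* sensor pulls DAT low when the result is ready */
        ;                        /* a real driver would time out here (~320 ms)   */
    *raw  = (unsigned int)sht11_read_byte(1) << 8;
    *raw |= sht11_read_byte(0);  /* no ACK on the last byte: skip the CRC byte    */
    return 0;
}
```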

4.1 Hardware Design of the Transmitting Node

The transmitting node, the basic unit of the network, consists of an SHT11 temperature and humidity sensor module, a CC2530 processor module, an antenna module, and a power module. It is responsible for acquiring the temperature and humidity data, preprocessing them, and sending them to the ZigBee receiver. The sensor module collects the temperature and humidity data within the monitored area; the processor module performs analog-to-digital conversion on the collected signals and then preprocesses them; the preprocessed data are sent out by the antenna module [3]; and the power module supplies the processor. The transmitter hardware architecture is shown in Fig. 2.

Figure 2. Transmitter hardware architecture
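A minimal sketch of the transmitting node's acquire-preprocess-send cycle follows, under stated assumptions: sht11_measure_temp()/sht11_measure_rh() are the hypothetical driver calls sketched above, sensor_report_t and checksum8() come from the data-path sketch in Section 3, and zigbee_send() stands in for whatever send primitive the chosen stack provides (in TI's Z-Stack this would go through the AF data-request API). The conversion coefficients are typical SHT11 datasheet values and the sampling interval is arbitrary; this is an illustration, not the project's actual firmware.

```c
#include <stdint.h>

extern int  sht11_measure_temp(unsigned int *raw);
extern int  sht11_measure_rh(unsigned int *raw);   /* analogous to the temp read */
extern void zigbee_send(const void *payload, unsigned char len); /* stack-specific */
extern void sleep_ms(unsigned long ms);

void sensor_node_loop(void) {
    for (;;) {
        unsigned int raw_t, raw_rh;
        if (sht11_measure_temp(&raw_t) == 0 && sht11_measure_rh(&raw_rh) == 0) {
            sensor_report_t r;
            /* Typical SHT11 datasheet conversions (14-bit temp, 12-bit RH),
               scaled to the 0.01-unit fixed point used in sensor_report_t. */
            double t  = -39.6 + 0.01 * raw_t;
            double rh = -4.0 + 0.0405 * raw_rh
                      - 2.8e-6 * (double)raw_rh * raw_rh;
            r.node_addr    = 0x0001;   /* short address assigned when joining */
            r.temp_centi_c = (int16_t)(t * 100.0);
            r.rh_centi_pct = (uint16_t)(rh * 100.0);
            r.checksum     = checksum8((const uint8_t *)&r,
                                       sizeof r - sizeof r.checksum);
            zigbee_send(&r, sizeof r);
        }
        sleep_ms(60000);   /* sample once a minute; interval is arbitrary */
    }
}
```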
4.2 Hardware Design of the Receiving Node

The receiver consists of a power module, a key (button) module, a serial module, an LCD module, LED indicator lights, a CC2530 processor module, and an antenna module. The wireless temperature and humidity sensor network is not a standalone communication network: the monitored temperature and humidity data must be sent to a host computer and displayed there. The LED indicators show the network status of the receiving node (for example, whether the network has been established successfully); the LCD module displays the sensor network's operating mode, which the user can select with the buttons. The CC2530 acts as the coordinator responsible for data reception. When data arrive, the RF signal is amplified by a low-noise amplifier before being fed into the mixer; mixing produces the IF (intermediate frequency) signal. In the IF stage the signal is amplified and filtered before entering the demodulator; the demodulated data are shifted into a register and then into RFBUF. After the microcontroller removes the temperature and humidity data from RFBUF, they are placed in the serial data buffer register SBUF and sent through the RS232 serial module to the host computer for display.
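The last step, draining the radio's receive buffer and pushing the bytes out through the UART, might look like the following 8051-style C sketch. rf_rx_pending() and rf_read_byte() are hypothetical wrappers around the CC2530 receive FIFO; SBUF and TI are the classic 8051 serial-port registers named in the text above (on a real CC2530 build the vendor header's USART SFRs, e.g. U0DBUF and the USART0 TX flag, would be used instead).

```c
#include <stdint.h>

/* Hypothetical wrappers around the CC2530 radio receive FIFO. */
extern int     rf_rx_pending(void);
extern uint8_t rf_read_byte(void);

/* Classic 8051 serial SFRs, declared generically for this sketch;
   a real build would pull the proper SFR definitions from the
   vendor header rather than extern declarations. */
extern volatile uint8_t SBUF;
extern volatile uint8_t TI;

/* Blocking write of one byte to the UART feeding the RS232 driver. */
static void uart_putc(uint8_t c) {
    SBUF = c;        /* load the transmit buffer               */
    while (!TI) ;    /* wait for the transmit-complete flag    */
    TI = 0;          /* clear the flag for the next byte       */
}

/* Forward everything the radio delivers to the host PC unchanged. */
void forward_rf_to_uart(void) {
    while (rf_rx_pending())
        uart_putc(rf_read_byte());
}
```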

5. System Software Design

The development environment is IAR 7.51A, and the protocol stack is TI's Z-Stack. The receiver is connected to the computer through RS232. To tell the nodes' data apart when displaying them, the host must know each sensor node's network address, so every sensor node is required to send its own network address to the receiver after joining the network. On receiving a sensor node's network address, the receiver builds an address table that stores each node's address; when the user collects data from the sensors, each node's data can be retrieved through this address table.
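A minimal sketch of such an address table follows, assuming 16-bit ZigBee short addresses; the capacity and the linear lookup are arbitrary illustrative choices, not details from the source.

```c
#include <stdint.h>

#define MAX_NODES 32            /* illustrative capacity */

static uint16_t node_table[MAX_NODES];
static uint8_t  node_count;

/* Record a node's short address the first time it announces itself.
   Returns the node's index in the table, or -1 if the table is full. */
int table_register(uint16_t short_addr) {
    uint8_t i;
    for (i = 0; i < node_count; i++)
        if (node_table[i] == short_addr)
            return i;           /* already known */
    if (node_count >= MAX_NODES)
        return -1;
    node_table[node_count] = short_addr;
    return node_count++;
}

/* Map a received report back to a table slot (or -1 if unknown). */
int table_lookup(uint16_t short_addr) {
    uint8_t i;
    for (i = 0; i < node_count; i++)
        if (node_table[i] == short_addr)
            return i;
    return -1;
}
```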

The software flowcharts of the transmitter and receiver are shown in Fig. 3.

Figure 3. Software flowcharts of the transmitter and receiver
6. Construction of the ZigBee Wireless Sensor Network

An FFD (full-function device) must act as the network coordinator to establish the network, after which other FFDs or RFDs join it; an RFD (reduced-function device) can only attach to an FFD. Each device's program is configured in advance according to its role in the network. The coordinator's task is to scan the 16 channels, select the best free channel, and start a network; it can either form a network on a free channel or connect to an existing one. A router's task is to scan for a valid channel, connect to it, and then allow other devices to connect. An end device always tries to connect to an existing network. These end devices search for other devices in the network that can provide full discovery services; any network device may initiate a service search and may bind to other devices that provide complete services, supplying command and control functions within a particular group network (see the sketch below).
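The role-dependent startup logic described above can be pictured as a small state machine. The function names here are hypothetical placeholders rather than Z-Stack APIs (in Z-Stack the equivalent steps are driven by the ZDO layer); this is an illustration of the coordinator/router/end-device split, not the project's actual code.

```c
/* Illustrative startup logic per device role; every helper below is a
   hypothetical placeholder for the corresponding stack service. */
typedef enum { ROLE_COORDINATOR, ROLE_ROUTER, ROLE_END_DEVICE } role_t;

extern int  scan_channels(void);        /* pick the best of the 16 channels */
extern int  form_network(int channel);  /* coordinator only                 */
extern int  join_network(int channel);  /* router / end device              */
extern void permit_joining(int on);     /* let child devices attach         */
extern void announce_address(void);     /* report own short addr (Sec. 5)   */

void network_startup(role_t role) {
    int ch = scan_channels();
    switch (role) {
    case ROLE_COORDINATOR:
        form_network(ch);        /* start a new network on a free channel */
        permit_joining(1);
        break;
    case ROLE_ROUTER:
        join_network(ch);        /* attach, then accept children          */
        permit_joining(1);
        announce_address();
        break;
    case ROLE_END_DEVICE:
        join_network(ch);        /* attach to an FFD and report in        */
        announce_address();
        break;
    }
}
```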

The central control center is connected to the transmitters through the network. Wireless transfer of information between the receiver and the transmitters is implemented with ZigBee technology: the transmitters detect and process the data and send them to the receiver, and the control center obtains the collected information through the network. Multiple transmitters are distributed across the sensor network; through polling scans by the microcontroller, each one uploads its data in order according to its ID.

7. Conclusion

This paper has presented a wireless temperature and humidity sensor network system based on ZigBee technology; the network is flexible to construct and highly adaptable. In practical use in our laboratory and the surrounding offices, the system has proved very practical. In actual deployments, the number of end devices is determined by the requirements of the application environment. The system can be applied to agricultural production as well as many other areas of production and daily life, solving the problems of high cost, complex cabling, and harsh environments that wired networks face in temperature and humidity monitoring. Using ZigBee networking to deliver temperature and humidity information in real time lowers system cost, saves energy, and brings convenience to industrial production. As the price of ZigBee chips falls, this new temperature and humidity monitoring system will have broad application prospects.
