Extended gloss overlaps as a measure of semantic relatedness


Engineering Optics (English): Supplementary Content and Exercises

Homework
1. In ancient times the rectilinear propagation of light was used to measure the height of objects by comparing the length of their shadows with that of the shadow of an object of known length. A staff 2 m long, held erect, casts a shadow 3.4 m long, while a building's shadow is 170 m long. How tall is the building?
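A quick check by similar triangles (my working, not part of the original problem set): the staff and the building have the same height-to-shadow ratio, so

  h / 170 m = 2 m / 3.4 m,  hence  h = 170 m × (2 / 3.4) = 100 m.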
3. The Law of Reflection
I'' = −I, and the incoming ray, the outgoing ray, and the normal to the surface at the point of intersection all lie in the same plane.
4. The Law of Refraction
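The excerpt gives only the heading for the refraction law; as a supplement (standard statement — the symbols I, I′ for the angles of incidence and refraction and n, n′ for the indices on the two sides are assumed here, not taken from the textbook):

  n sin I = n′ sin I′,

and, as with reflection, the incident ray, the refracted ray, and the normal at the point of incidence are coplanar.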
Keywords and concepts
2. Nodal points are the conjugate points of unit angular magnification: a ray directed at the front nodal point emerges from the rear nodal point with its direction unchanged. Whenever the refractive indices on either side of the lens are the same, the nodal points coincide with the principal points. If the refractive indices on the two sides differ, the nodal points move away from the principal planes, toward the side with the higher index.
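As a sketch of how far the nodal points move (a standard Gaussian-optics result stated under the usual convention, with system power K, front focal length f = n/K and rear focal length f′ = n′/K; not taken from the excerpt), each nodal point is displaced from its principal point by

  PN = P′N′ = f′ − f = (n′ − n)/K.

The displacement vanishes when n = n′ and is directed toward the higher-index side when n′ > n, consistent with the statement above.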

A GRE Reading multiple-answer question that 80% of test takers answer incompletely


When doing GRE Reading multiple-answer questions, test takers frequently miss correct options. The worked example below shows how to avoid an incomplete selection.

Passage: Astronomers who study planet formation once believed that comets—because they remain mostly in the distant Oort cloud, where temperatures are close to absolute zero—must be pristine relics of the material that formed the outer planets. The conceptual shift away from seeing comets as pristine relics began in the 1970s, when laboratory simulations revealed there was sufficient ultraviolet radiation reaching comets to darken their surfaces and there were sufficient cosmic rays to alter chemical bonds or even molecular structure near the surface. Nevertheless, astronomers still believed that when a comet approached the Sun—where they could study it—the Sun's intense heat would remove the corrupted surface layer, exposing the interior. About the same time, though, scientists realized comets might contain decaying radioactive isotopes that could have warmed cometary interiors to temperatures that caused the interiors to evolve.

Consider each of the choices separately and select all that apply.
Q: According to the passage, astronomers recognize which of the following as being liable to cause changes to comets?
A. cosmic rays
B. radioactive decay
C. ultraviolet radiation

Common trap: most test takers locate the second sentence and select A and C, then reject B because it appears in the fourth sentence, which they assume lies outside the answer region.

TPO 36: Three TOEFL Reading Passages with Original Text, Translation, Questions, Answers, and Background Knowledge


Reading 1 — Original text

Soil Formation

① Living organisms play an essential role in soil formation. The numerous plants and animals living in the soil release minerals from the parent material from which soil is formed, supply organic matter, aid in the translocation (movement) and aeration of the soil, and help protect the soil from erosion. The types of organisms growing or living in the soil greatly influence the soil's physical and chemical characteristics. In fact, for mature soils in many parts of the world, the predominant type of natural vegetation is considered the most important direct influence on soil characteristics. For this reason, a soil scientist can tell a great deal about the attributes of the soil in any given area simply from knowing what kind of flora the soil supports. Thus prairies and tundra regions, which have characteristic vegetations, also have characteristic soils.

② The quantity and total weight of soil flora generally exceed that of soil fauna. By far the most numerous and smallest of the plants living in soil are bacteria. Under favorable conditions, a million or more of these tiny, single-celled plants can inhabit each cubic centimeter of soil. It is the bacteria, more than any other organisms, that enable rock or other parent material to undergo the gradual transformation to soil. Some bacteria produce organic acids that directly attack parent material, breaking it down and releasing plant nutrients. Others decompose organic litter (debris) to form humus (nutrient-rich organic matter). A third group of bacteria inhabits the root systems of plants called legumes. These include many important agricultural crops, such as alfalfa, clover, soybeans, peas, and peanuts. The bacteria that legumes host within their root nodules (small swellings on the root) change nitrogen gas from the atmosphere into nitrogen compounds that plants are able to metabolize, a process known as nitrogen fixation that makes the soil more fertile. Other microscopic plants also are important in soil development. For example, in highly acidic soils where few bacteria can survive, fungi frequently become the chief decomposers of organic matter.

③ More complex forms of vegetation play several vital roles with respect to the soil. Trees, grass, and other large plants supply the bulk of the soil's humus. The minerals released as these plants decompose on the surface constitute an important nutrient source for succeeding generations of plants as well as for other soil organisms. In addition, trees can extend their roots deep within the soil and bring up nutrients from far below the surface. These nutrients eventually enrich the surface soil when the tree drops its leaves or when it dies and decomposes. Finally, trees perform the vital function of slowing water runoff and holding the soil in place with their root systems, thus combating erosion. The increased erosion that often accompanies agricultural use of sloping land is principally caused by the removal of its protective cover of natural vegetation.

④ Animals also influence soil composition. The faunal counterparts of bacteria are protozoa. These single-celled organisms are the most numerous representatives of the animal kingdom, and, like bacteria, a million or more can sometimes inhabit each cubic centimeter of soil. Protozoa feed on organic matter and hasten its decomposition.
Among other soil-dwelling animals, the earthworm is probably the most important. Under exceptionally favorable conditions, up to a million earthworms (with a total body weight exceeding 450 kilograms) may inhabit an acre of soil. Earthworms ingest large quantities of soil, chemically alter it, and excrete it as organic matter called casts. The casts form a high-quality natural fertilizer. In addition, earthworms mix the soil both vertically and horizontally, improving aeration and drainage.

⑤ Insects such as ants and termites also can be exceedingly numerous under favorable climatic and soil conditions. In addition, mammals such as moles, field mice, gophers, and prairie dogs sometimes are present in sufficient numbers to have a significant impact on the soil. These animals primarily work the soil mechanically. As a result, the soil is aerated, broken up, fertilized, and brought to the surface, hastening soil development.

Junior High School Mathematics Exam (English)


Section 1: Multiple Choice (30 points)
1. Which of the following is a prime number?
   A. 25  B. 29  C. 40  D. 33
2. Solve for x: 3x + 5 = 19
   A. x = 3  B. x = 4  C. x = 5  D. x = 6
3. What is the area of a rectangle with a length of 8 units and a width of 5 units?
   A. 15 square units  B. 40 square units  C. 45 square units  D. 80 square units
4. If a car travels at a speed of 60 kilometers per hour for 3 hours, how far will it travel?
   A. 180 kilometers  B. 300 kilometers  C. 360 kilometers  D. 420 kilometers
5. Simplify the expression: 8 - 3(2 + 4)
   B. 5  C. 10  D. 14
6. Solve for y: 2y - 7 = 11
   A. y = 3  B. y = 4  C. y = 5  D. y = 6
7. What is the value of x in the equation 4x - 2 = 14?
   A. x = 3  B. x = 4  C. x = 5  D. x = 6
8. If a triangle has two sides measuring 6 cm and 8 cm, what is the range of possible lengths for the third side?
   A. 2 cm to 14 cm  B. 4 cm to 12 cm  C. 6 cm to 10 cm  D. 8 cm to 14 cm
9. Simplify the expression: 3(4 - 2) ÷ 2
   A. 3  B. 4  C. 5
10. What is the perimeter of a square with a side length of 7 units?
   A. 14 units  B. 21 units  C. 28 units  D. 49 units

Section 2: Fill in the Blanks (20 points)
11. The product of two consecutive integers is 42. Find the integers.
12. The sum of three numbers is 27. If two of the numbers are 8 and 11, find the third number.
13. Solve for z: 5z + 10 = 25
14. The area of a circle is 144π square units. Find the radius of the circle.
15. If a train travels at a speed of 100 kilometers per hour for 5 hours, what is the distance it will cover?

Section 3: Short Answer (25 points)
16. Explain how to find the perimeter of a rectangle when given the length and width.
17. Solve the following system of equations: 2x + 3y = 8; 4x - y = 12
18. Convert 0.25 into a percentage.
19. What is the meaning of the term "factor" in mathematics?
20. Solve the following equation for x: 4x - 3(2x + 1) = 11

Section 4: Extended Answer (25 points)
21. A bookstore sells books at a discount of 20% off the original price. If the original price of a book is $30, how much will the bookstore charge for the book after the discount?
22. A farmer has 200 meters of fencing to enclose a rectangular garden. If the length of the garden is twice its width, what are the dimensions of the garden?
23. A school is planning a trip for 30 students. The cost of the trip is $600 for transportation and $10 per student for admission. How much will the school spend in total for the trip?
24. Solve the following quadratic equation by factoring: x^2 - 5x + 6 = 0
25. A triangle has angles measuring 30°, 60°, and 90°. If the shortest side of the triangle is 6 units long, find the lengths of the other two sides.

Total Points: 100
Good luck!
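As an illustration of the expected working (my own solutions, not part of the exam paper), two of the items solve in a line or two:

  Question 20: 4x − 3(2x + 1) = 4x − 6x − 3 = −2x − 3 = 11, so −2x = 14 and x = −7.
  Question 24: x^2 − 5x + 6 = (x − 2)(x − 3) = 0, so x = 2 or x = 3.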

Surveying Engineering Professional English Translation, Unit 3


Unit 3: Distance Measurement

One of the fundamentals of surveying is the need to measure distance. Distances are not necessarily linear, especially if they occur on the spherical earth. In this subject we deal with distances in Euclidean space, which we can consider as a straight line from one point or feature to another. The distance between two points can be horizontal, slope, or vertical. Horizontal and slope distances can be measured with many techniques, the choice depending on the desired quality of the result. If the points are at different elevations, then the distance is the horizontal length between the plumb lines at the points.
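A compact way to relate the three kinds of distance (notation assumed here, not taken from the unit): if S is the slope distance between two points, α the vertical angle of the line above the horizontal, and ΔV the elevation difference, then

  H = S cos α = √(S² − ΔV²),  ΔV = S sin α,

so a measured slope distance is reduced to a horizontal distance once either the vertical angle or the height difference is known.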

Geomagnetic Field Drift and Reversals


GGALVANIC DISTORTIONThe electrical conductivity of Earth materials affects two physical processes:electromagnetic induction which is utilized with magneto-tellurics(MT)(q.v.),and electrical conduction.If electromagnetic induction in media which are heterogeneous with respect to their elec-trical conductivity is considered,then both processes take place simul-taneously:Due to Faraday’s law,a variational electric field is induced in the Earth,and due to the conductivity of the subsoil an electric cur-rent flows as a consequence of the electric field.The current compo-nent normal to boundaries within the heterogeneous structure passes these boundaries continously according tos1E1¼s2E2where the subscripts1and2indicate the boundary values of conductiv-ity and electric field in regions1and2,respectively.Therefore the amplitude and the direction of the electric field are changed in the vicinity of the boundaries(Figure G1).In electromagnetic induction studies,the totality of these changes in comparison with the electric field distribution in homogeneous media is referred to as galvanic distortion. The electrical conductivity of Earth materials spans13orders of mag-nitude(e.g.,dry crystalline rocks can have conductivities of less than 10–6S mÀ1,while ores can have conductivities exceeding106S mÀ1). Therefore,MT has a potential for producing well constrained mod-els of the Earth’s electrical conductivity structure,but almost all field studies are affected by the phenomenon of galvanic distortion, and sophisticated techniques have been developed for dealing with it(Simpson and Bahr,2005).Electric field amplitude changes and static shiftA change in an electric field amplitude causes a frequency-indepen-dent offset in apparent resistivity curves so that they plot parallel to their true level,but are scaled by a real factor.Because this shift can be regarded as spatial undersampling or“aliasing,”the scaling factor or static shift factor cannot be determined directly from MT data recorded at a single site.If MT data are interpreted via one-dimensional modeling without correcting for static shift,the depth to a conductive body will be shifted by the square root of the factor by which the apparent resistivities are shifted.Static shift corrections may be classified into three broad groups: 1.Short period corrections relying on active near-surface measurementssuch as transient electromagnetic sounding(TEM)(e.g.,Meju,1996).2.Averaging(statistical)techniques.As an example,electromagneticarray profiling is an adaptation of the magnetotelluric technique that involves sampling lateral variations in the electric field con-tinuously,and spatial low pass filtering can be used to suppress sta-tic shift effects(Torres-Verdin and Bostick,1992).3.Long period corrections relying on assumed deep structure(e.g.,a resistivity drop at the mid-mantle transition zones)or long-periodmagnetic transfer functions(Schmucker,1973).An equivalence relationship exists between the magnetotelluric impedance Z and Schmucker’s C-response:C¼Zi om0;which can be determined from the magnetic fields alone,thereby providing an inductive scale length that is independent of the dis-torted electric field.Magnetic transfer functions can,for example, be derived from the magnetic daily variation.The appropriate method for correcting static shift often depends on the target depth,because there can be a continuum of distortion at all scales.As an example,in complex three-dimensional environments near-surface correction techniques may be inadequate if 
the conductiv-ity of the mantle is considered,because electrical heterogeneity in the deep crust creates additional galvanic distortion at a larger-scale, which is not resolved with near-surface measurements(e.g.,Simpson and Bahr,2005).Changes in the direction of electric fields and mixing of polarizationsIn some target areas of the MT method the conductivity distribution is two-dimensional(e.g.,in the case of electrical anisotropy(q.v.))and the induction process can be described by two decoupled polarizations of the electromagnetic field(e.g.,Simpson and Bahr,2005).Then,the changes in the direction of electric fields that are associated with galvanic distortion can result in mixing of these two polarizations. The recovery of the undistorted electromagnetic field is referred to as magnetotelluric tensor decomposition(e.g.,Bahr,1988,Groom and Bailey,1989).Current channeling and the“magnetic”distortionIn the case of extreme conductivity contrasts the electrical current can be channeled in such way that it is surrounded by a magneticvariational field that has,opposite to the assumptions made in the geo-magnetic deep sounding(q.v.)method,no phase lag with respect to the electric field.The occurrence of such magnetic fields in field data has been shown by Zhang et al.(1993)and Ritter and Banks(1998).An example of a magnetotelluric tensor decomposition that includes mag-netic distortion has been presented by Chave and Smith(1994).Karsten BahrBibliographyBahr,K.,1988.Interpretation of the magnetotelluric impedance tensor: regional induction and local telluric distortion.Journal of Geophy-sics,62:119–127.Chave,A.D.,and Smith,J.T.,1994.On electric and magnetic galvanic distortion tensor decompositions.Journal of Geophysical Research,99:4669–4682.Groom,R.W.,and Bailey,R.C.,1989.Decomposition of the magneto-telluric impedance tensor in the presence of local three-dimensional galvanic distortion.Journal of Geophysical Research,94: 1913–1925.Meju,M.A.,1996.Joint inversion of TEM and distorted MT sound-ings:some effective practical considerations.Geophysics,61: 56–65.Ritter,P.,and Banks,R.J.,1998.Separation of local and regional information in distorted GDS response functions by hypothetical event analysis.Geophysical Journal International,135:923–942. 
Schmucker,U.,1973.Regional induction studies:a review of methods and results.Physics of the Earth and Planetary Interiors,7: 365–378.Simpson,F.,and Bahr,K.,2005.Practical Magnetotellurics.Cam-bridge:Cambridge University Press.Torres-Verdin,C.,and Bostick,F.X.,1992.Principles of special sur-face electric field filtering in magnetotellurics:electromagnetic array profiling(EMAP).Geophysics,57:603–622.Zhang,P.,Pedersen,L.B.,Mareschal,M.,and Chouteau,M.,1993.Channelling contribution to tipper vectors:a magnetic equivalent to electrical distortion.Geophysical Journal International,113: 693–700.Cross-referencesAnisotropy,ElectricalGeomagnetic Deep SoundingMagnetotelluricsMantle,Electrical Conductivity,Mineralogy GAUSS’DETERMINATION OF ABSOLUTE INTENSITYThe concept of magnetic intensity was known as early as1600in De Magnete(see Gilbert,William).The relative intensity of the geomag-netic field in different locations could be measured with some preci-sion from the rate of oscillation of a dip needle—a method used by Humboldt,Alexander von(q.v.)in South America in1798.But it was not until Gauss became interested in a universal system of units that the idea of measuring absolute intensity,in terms of units of mass, length,and time,was considered.It is now difficult to imagine how revolutionary was the idea that something as subtle as magnetism could be measured in such mundane units.On18February1832,Gauss,Carl Friedrich(q.v.)wrote to the German astronomer Olbers:“I occupy myself now with the Earth’s magnetism,particularly with an absolute determination of its intensity.Friend Weber”(Wilhelm Weber,Professor of Physics at the University of Göttingen)“conducts the experiments on my instructions.As, for example,a clear concept of velocity can be given only through statements on time and space,so in my opinion,the complete determination of the intensity of the Earth’s magnetism requires to specify(1)a weight¼p,(2)a length¼r,and then the Earth’s magnetism can be expressed byffiffiffiffiffiffiffip=rp.”After minor adjustment to the units,the experiment was completed in May1832,when the horizontal intensity(H)at Göttingen was found to be1.7820mg1/2mm–1/2s–1(17820nT).The experimentThe experiment was in two parts.In the vibration experiment(Figure G2) magnet A was set oscillating in a horizontal plane by deflecting it from magnetic north.The period of oscillations was determined at different small amplitudes,and from these the period t0of infinite-simal oscillations was deduced.This gave a measure of MH,where M denotes the magnetic moment of magnet A:MH¼4p2I=t20The moment of inertia,I,of the oscillating part is difficult to deter-mine directly,so Gauss used the ingenious idea of conductingtheFigure G2The vibration experiment.Magnet A is suspended from a silk fiber F It is set swinging horizontally and the period of an oscillation is obtained by timing an integral number of swings with clock C,using telescope T to observe the scale S reflected in mirror M.The moment of inertia of the oscillating part can be changed by a known amount by hanging weights W from the rodR. 
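Restated for readability (symbols as in the entry): the vibration experiment yields

  M·H = 4π² I / t₀²,

where M is the magnetic moment of magnet A, I the moment of inertia of the oscillating assembly, and t₀ the period of infinitesimal oscillations. The deflection experiment described next supplies the ratio M/H, and combining the two gives H = √[(MH) / (M/H)] and M = √[(MH)·(M/H)].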
278GAUSS’DETERMINATION OF ABSOLUTE INTENSITYexperiment for I and then I þD I ,where D I is a known increment obtained by hanging weights at a known distance from the suspension.From several measures of t 0with different values of D I ,I was deter-mined by the method of least squares (another of Gauss ’s original methods).In the deflection experiment,magnet A was removed from the suspension and replaced with magnet B.The ratio M /H was measured by the deflection of magnet B from magnetic north,y ,produced by magnet A when placed in the same horizontal plane as B at distance d magnetic east (or west)of the suspension (Figure G3).This required knowledge of the magnetic intensity due to a bar magnet.Gauss deduced that the intensity at distance d on the axis of a dipole is inversely proportional to d 3,but that just one additional term is required to allow for the finite length of the magnet,giving 2M (1þk/d 2)/d 3,where k denotes a small constant.ThenM =H ¼1=2d 3ð1Àk =d 2Þtan y :The value of k was determined,again by the method of least squares,from the results of a number of measures of y at different d .From MH and M /H both M and,as required by Gauss,H could readily be deduced.Present methodsWith remarkably little modification,Gauss ’s experiment was devel-oped into the Kew magnetometer,which remained the standard means of determining absolute H until electrical methods were introduced in the 1920s.At some observatories,Kew magnetometers were still in use in the 1980s.Nowadays absolute intensity can be measured in sec-onds with a proton magnetometer and without the considerable time and experimental skill required by Gauss ’s method.Stuart R.C.MalinBibliographyGauss,C.F.,1833.Intensitas vis magneticae terrestris ad mensuram absolutam revocata.Göttingen,Germany.Malin,S.R.C.,1982.Sesquicentenary of Gauss ’s first measurement of the absolute value of magnetic intensity.Philosophical Transac-tions of the Royal Society of London ,A 306:5–8.Malin,S.R.C.,and Barraclough,D.R.,1982.150th anniversary of Gauss ’s first absolute magnetic measurement.Nature ,297:285.Cross-referencesGauss,Carl Friedrich (1777–1855)Geomagnetism,History of Gilbert,William (1544–1603)Humboldt,Alexander von (1759–1859)Instrumentation,History ofGAUSS,CARL FRIEDRICH (1777–1855)Amongst the 19th century scientists working in the field of geomag-netism,Carl Friedrich Gauss was certainly one of the most outstanding contributors,who also made very fundamental contributions to the fields of mathematics,astronomy,and geodetics.Born in April 30,1777in Braunschweig (Germany)as the son of a gardener,street butcher,and mason Johann Friderich Carl,as he was named in the certificate of baptism,already in primary school at the age of nine perplexed his teacher J.G.Büttner by his innovative way to sum up the numbers from 1to ter Gauss used to claim that he learned manipulating numbers earlier than being able to speak.In 1788,Gauss became a pupil at the Catharineum in Braunschweig,where M.C.Bartels (1769–1836)recognized his outstanding mathematical abilities and introduced Gauss to more advanced problems of mathe-matics.Gauss proved to be an exceptional pupil catching the attention of Duke Carl Wilhelm Ferdinand of Braunschweig who provided Gauss with the necessary financial support to attend the Collegium Carolinum (now the Technical University of Braunschweig)from 1792to 1795.From 1795to 1798Gauss studied at the University of Göttingen,where his number theoretical studies allowed him to prove in 1796,that the regular 17-gon can be 
constructed using a pair of compasses and a ruler only.In 1799,he received his doctors degree from the University of Helmstedt (close to Braunschweig;closed 1809by Napoleon)without any oral examination and in absentia .His mentor in Helmstedt was J.F.Pfaff (1765–1825).The thesis submitted was a complete proof of the fundamental theorem of algebra.His studies on number theory published in Latin language as Disquitiones arithi-meticae in 1801made Carl Friedrich Gauss immediately one of the leading mathematicians in Europe.Gauss also made further pioneering contributions to complex number theory,elliptical functions,function theory,and noneuclidian geometry.Many of his thoughts have not been published in regular books but can be read in his more than 7000letters to friends and colleagues.But Gauss was not only interested in mathematics.On January 1,1801the Italian astronomer G.Piazzi (1746–1820)for the first time detected the asteroid Ceres,but lost him again a couple of weeks later.Based on completely new numerical methods,Gauss determined the orbit of Ceres in November 1801,which allowed F.X.von Zach (1754–1832)to redetect Ceres on December 7,1801.This prediction made Gauss famous next to his mathematical findings.In 1805,Gauss got married to Johanna Osthoff (1780–1809),who gave birth to two sons,Joseph and Louis,and a daughter,Wilhelmina.In 1810,Gauss married his second wife,Minna Waldeck (1788–1815).They had three more children together,Eugen,Wilhelm,and Therese.Eugen Gauss later became the founder and first president of the First National Bank of St.Charles,Missouri.Carl Friedrich Gauss ’interest in the Earth magnetic field is evident in a letter to his friend Wilhelm Olbers (1781–1862)as early as 1803,when he told Olbers that geomagnetism is a field where still many mathematical studies can be done.He became more engaged in geo-magnetism after a meeting with A.von Humboldt (1769–1859)and W.E.Weber (1804–1891)in Berlin in 1828where von Humboldt pointed out to Gauss the large number of unsolved problems in geo-magnetism.When Weber became a professor of physics at the Univer-sity of Göttingen in 1831,one of the most productive periods intheFigure G3The deflection experiment.Suspended magnet B is deflected from magnetic north by placing magnet A east or west (magnetic)of it at a known distance d .The angle of deflection y is measured by using telescope T to observe the scale S reflected in mirror M.GAUSS,CARL FRIEDRICH (1777–1855)279field of geomagnetism started.In1832,Gauss and Weber introduced the well-known Gauss system according to which the magnetic field unit was based on the centimeter,the gram,and the second.The Mag-netic Observatory of Göttingen was finished in1833and its construc-tion became the prototype for many other observatories all over Europe.Gauss and Weber furthermore developed and improved instru-ments to measure the magnetic field,such as the unifilar and bifilar magnetometer.Inspired by A.von Humboldt,Gauss and Weber realized that mag-netic field measurements need to be done globally with standardized instruments and at agreed times.This led to the foundation of the Göttinger Magnetische Verein in1836,an organization without any for-mal structure,only devoted to organize magnetic field measurements all over the world.The results of this organization have been published in six volumes as the Resultate aus den Beobachtungen des Magnetischen Vereins.The issue of1838contains the pioneering work Allgemeine Theorie des Erdmagnetismus where Gauss introduced the concept of the 
spherical harmonic analysis and applied this new tool to magnetic field measurements.His general theory of geomagnetism also allowed to separate the magnetic field into its externally and its internally caused parts.As the external contributions are nowadays interpreted as current systems in the ionosphere and magnetosphere Gauss can also be named the founder of magnetospheric research.Publication of the Resultate ceased in1843.W.E.Weber together with such eminent professors of the University of Göttingen as Jacob Grimm(1785–1863)and Wilhelm Grimm(1786–1859)had formed the political group Göttingen Seven protesting against constitutional violations of King Ernst August of Hannover.As a consequence of these political activities,Weber and his colleagues were dismissed. Though Gauss tried everything to bring back Weber in his position he did not succeed and Weber finally decided to accept a chair at the University of Leipzig in1843.This finished a most fruitful and remarkable cooperation between two of the most outstanding contribu-tors to geomagnetism in the19th century.Their heritage was not only the invention of the first telegraph station in1833,but especially the network of36globally operating magnetic observatories.In his later years Gauss considered to either enter the field of bota-nics or to learn another language.He decided for the language and started to study Russian,already being in his seventies.At that time he was the only person in Göttingen speaking that language fluently. Furthermore,he was asked by the Senate of the University of Göttingen to reorganize their widow’s pension system.This work made him one of the founders of insurance mathematics.In his final years Gauss became fascinated by the newly built railway lines and supported their development using the telegraph idea invented by Weber and himself.Carl Friedrich Gauss died on February23,1855as a most respected citizen of his town Göttingen.He was a real genius who was named Princeps mathematicorum already during his life time,but was also praised for his practical abilities.Karl-Heinz GlaßmeierBibliographyBiegel,G.,and K.Reich,Carl Friedrich Gauss,Braunschweig,2005. Bühler,W.,Gauss:A Biographical study,Berlin,1981.Hall,T.,Carl Friedrich Gauss:A Biography,Cambridge,MA,1970. Lamont,J.,Astronomie und Erdmagnetismus,Stuttgart,1851. 
Cross-referencesHumboldt,Alexander von(1759–1859)Magnetosphere of the Earth GELLIBRAND,HENRY(1597–1636)Henry Gellibrand was the eldest son of a physician,also Henry,and was born on17November1597in the parish of St.Botolph,Aldersgate,London.In1615,he became a commoner at Trinity Col-lege,Oxford,and obtained a BA in1619and an MA in1621.Aftertaking Holy Orders he became curate at Chiddingstone,Kent,butthe lectures of Sir Henry Savile inspired him to become a full-timemathematician.He settled in Oxford,where he became friends withHenry Briggs,famed for introducing logarithms to the base10.Itwas on Briggs’recommendation that,on the death of Edmund Gunter,Gellibrand succeeded him as Gresham Professor of Astronomy in1627—a post he held until his death from a fever on16February1636.He was buried at St.Peter the Poor,Broad Street,London(now demolished).Gellibrand’s principal publications were concerned with mathe-matics(notably the completion of Briggs’Trigonometrica Britannicaafter Briggs died in1630)and navigation.But he is included herebecause he is credited with the discovery of geomagnetic secular var-iation.The events leading to this discovery are as follows(for furtherdetails see Malin and Bullard,1981).The sequence starts with an observation of magnetic declinationmade by William Borough,a merchant seaman who rose to“captaingeneral”on the Russian trade route before becoming comptroller ofthe Queen’s Navy.The magnetic observation(Borough,1581,1596)was made on16October1580at Limehouse,London,where heobserved the magnetic azimuth of the sun as it rose through sevenfixed altitudes in the morning and as it descended through the samealtitudes in the afternoon.The mean of the two azimuths for each alti-tude gives a measure of magnetic declination,D,the mean of which is11 190EÆ50rms.Despite the small scatter,the value could have beenbiased by site or compass errors.Some40years later,Edmund Gunter,distinguished mathematician,Gresham Professor of Astronomy and inventor of the slide rule,foundD to be“only6gr15m”(6 150E)“as I have sometimes found it oflate”(Gunter,1624,66).The exact date(ca.1622)and location(prob-ably Deptford)of the observation are not stated,but it alerted Gunterto the discrepancy with Borough’s measurement.To investigatefurther,Gunter“enquired after the place where Mr.Borough observed,and went to Limehouse with...a quadrant of three foot Semidiameter,and two Needles,the one above6inches,and the other10inches long ...towards the night the13of June1622,I made observation in sev-eral parts of the ground”(Gunter,1624,66).These observations,witha mean of5 560EÆ120rms,confirmed that D in1622was signifi-cantly less than had been measured by Borough in1580.But was thisan error in the earlier measure,or,unlikely as it then seemed,was Dchanging?Unfortunately Gunter died in1626,before making anyfurther measurements.When Gellibrand succeeded Gunter as Gresham Professor,allhe required to do to confirm a major scientific discovery was towait a few years and then repeat the Limehouse observation.Buthe chose instead to go to the site of Gunter’s earlier observationin Deptford,where,in June1633,Gellibrand found D to be“muchless than5 ”(Gellibrand,1635,16).He made a further measurement of D on the same site on June12,1634and“found it not much to exceed4 ”(Gellibrand,1635,7),the published data giving4 50 EÆ40rms.His observation of D at Paul’s Cray on July4,1634adds little,because it is a new site.On the strength of these observations,he announced his discovery of secular variation(Gellibrand,1635,7and 
19),but the reader may decide how much of the credit should go to Gunter.Stuart R.C.Malin280GELLIBRAND,HENRY(1597–1636)BibliographyBorough,W.,1581.A Discourse of the Variation of the Compass,or Magnetical Needle.(Appendix to R.Norman The newe Attractive).London:Jhon Kyngston for Richard Ballard.Borough,W.,1596.A Discourse of the Variation of the Compass,or Magnetical Needle.(Appendix to R.Norman The newe Attractive).London:E Allde for Hugh Astley.Gellibrand,H.,1635.A Discourse Mathematical on the Variation of the Magneticall Needle.Together with its admirable Diminution lately discovered.London:William Jones.Gunter,E.,1624.The description and use of the sector,the crosse-staffe and other Instruments.First booke of the crosse-staffe.London:William Jones.Malin,S.R.C.,and Bullard,Sir Edward,1981.The direction of the Earth’s magnetic field at London,1570–1975.Philosophical Transactions of the Royal Society of London,A299:357–423. Smith,G.,Stephen,L.,and Lee,S.,1967.The Dictionary of National Biography.Oxford:University Press.Cross-referencesCompassGeomagnetic Secular VariationGeomagnetism,History ofGEOCENTRIC AXIAL DIPOLE HYPOTHESISThe time-averaged paleomagnetic fieldPaleomagnetic studies provide measurements of the direction of the ancient geomagnetic field on the geological timescale.Samples are generally collected at a number of sites,where each site is defined as a single point in time.In most cases the time relationship between the sites is not known,moreover when samples are collected from a stratigraphic sequence the time interval between the levels is also not known.In order to deal with such data,the concept of the time-averaged paleomagnetic field is used.Hospers(1954)first introduced the geocentric axial dipole hypothesis(GAD)as a means of defining this time-averaged field and as a method for the analysis of paleomag-netic results.The hypothesis states that the paleomagnetic field,when averaged over a sufficient time interval,will conform with the field expected from a geocentric axial dipole.Hospers presumed that a time interval of several thousand years would be sufficient for the purpose of averaging,but many studies now suggest that tens or hundreds of thousand years are generally required to produce a good time-average. The GAD model is a simple one(Figure G4)in which the geomag-netic and geographic axes and equators coincide.Thus at any point on the surface of the Earth,the time-averaged paleomagnetic latitude l is equal to the geographic latitude.If m is the magnetic moment of this time-averaged geocentric axial dipole and a is the radius of the Earth, the horizontal(H)and vertical(Z)components of the magnetic field at latitude l are given byH¼m0m cos l;Z¼2m0m sin l;(Eq.1)and the total field F is given byF¼ðH2þZ2Þ1=2¼m0m4p a2ð1þ3sin2lÞ1=2:(Eq.2)Since the tangent of the magnetic inclination I is Z/H,thentan I¼2tan l;(Eq.3)and by definition,the declination D is given byD¼0 :(Eq.4)The colatitude p(90 minus the latitude)can be obtained fromtan I¼2cot pð0p180 Þ:(Eq.5)The relationship given in Eq. 
(3) is fundamental to paleomagnetismand is a direct consequence of the GAD hypothesis.When applied toresults from different geologic periods,it enables the paleomagneticlatitude to be derived from the mean inclination.This relationshipbetween latitude and inclination is shown in Figure G5.Figure G5Variation of inclination with latitude for a geocentricdipole.GEOCENTRIC AXIAL DIPOLE HYPOTHESIS281Paleom a gnetic polesThe positio n where the time-averaged dipole axis cuts the surface of the Earth is called the paleomagnetic pole and is defined on the present latitude-longitude grid. Paleomagnetic poles make it possible to com-pare results from different observing localities, since such poles should represent the best estimate of the position of the geographic pole.These poles are the most useful parameter derived from the GAD hypothesis. If the paleomagnetic mean direction (D m , I m ) is known at some sampling locality S, with latitude and longitude (l s , f s ), the coordinates of the paleomagnetic pole P (l p , f p ) can be calculated from the following equations by reference to Figure G6.sin l p ¼ sin l s cos p þ cos l s sin p cos D m ðÀ90 l p þ90 Þ(Eq. 6)f p ¼ f s þ b ; when cos p sin l s sin l porf p ¼ f s þ 180 À b ; when cos p sin l s sin l p (Eq. 7)wheresin b ¼ sin p sin D m = cos l p : (Eq. 8)The paleocolatitude p is determined from Eq. (5). The paleomagnetic pole ( l p , f p ) calculated in this way implies that “sufficient ” time aver-aging has been carried out. What “sufficient ” time is defined as is a subject of much debate and it is always difficult to estimate the time covered by the rocks being sampled. Any instantaneous paleofield direction (representing only a single point in time) may also be con-verted to a pole position using Eqs. (7) and (8). In this case the pole is termed a virtual geomagnetic pole (VGP). A VGP can be regarded as the paleomagnetic analog of the geomagnetic poles of the present field. The paleomagnetic pole may then also be calculated by finding the average of many VGPs, corresponding to many paleodirections.Of course, given a paleomagnetic pole position with coordinates (l p , f p ), the expected mean direction of magnetization (D m , I m )at any site location (l s , f s ) may be also calculated (Figure G6). The paleocolatitude p is given bycos p ¼ sin l s sin l p þ cos l s cos l p cos ðf p À f s Þ; (Eq. 9)and the inclination I m may then be calculated from Eq. (5). The corre-sponding declination D m is given bycos D m ¼sin l p À sin l s cos pcos l s sin p; (Eq. 10)where0 D m 180 for 0 (f p – f s ) 180and180 < D m <360for 180 < (f p –f s ) < 360 .The declination is indeterminate (that is any value may be chosen)if the site and the pole position coincide. If l s ¼Æ90then D m is defined as being equal to f p , the longitude of the paleomagnetic pole.Te s ting the GAD hy p othesis Tim e scale 0– 5 MaOn the timescale 0 –5 Ma, little or no continental drift will have occurred, so it was originally thought that the observation that world-wide paleomagnetic poles for this time span plotted around the present geographic indicated support for the GAD hypothesis (Cox and Doell,1960; Irving, 1964; McElhinny, 1973). However, any set of axial mul-tipoles (g 01; g 02 ; g 03 , etc.) will also produce paleomagnetic poles that cen-ter around the geographic pole. 
Indeed, careful analysis of the paleomagnetic data in this time interval has enabled the determination of any second-order multipole terms in the time-averaged field (see below for more detailed discussion of these departures from the GAD hypothesis).The first important test of the GAD hypothesis for the interval 0 –5Ma was carried out by Opdyke and Henry (1969),who plotted the mean inclinations observed in deep-sea sediment cores as a function of latitude,showing that these observations conformed with the GAD hypothesis as predicted by Eq. (3) and plotted in Figure G5.Testing the axial nature of the time-averaged fieldOn the geological timescale it is observed that paleomagnetic poles for any geological period from a single continent or block are closely grouped indicating the dipole hypothesis is true at least to first-order.However,this observation by itself does not prove the axial nature of the dipole field.This can be tested through the use of paleoclimatic indicators (see McElhinny and McFadden,2000for a general discus-sion).Paleoclimatologists use a simple model based on the fact that the net solar flux reaching the surface of the Earth has a maximum at the equator and a minimum at the poles.The global temperature may thus be expected to have the same variation.The density distribu-tion of many climatic indicators (climatically sensitive sediments)at the present time shows a maximum at the equator and either a mini-mum at the poles or a high-latitude zone from which the indicator is absent (e.g.,coral reefs,evaporates,and carbonates).A less common distribution is that of glacial deposits and some deciduous trees,which have a maximum in polar and intermediate latitudes.It has been shown that the distributions of paleoclimatic indicators can be related to the present-day climatic zones that are roughly parallel with latitude.Irving (1956)first suggested that comparisons between paleomag-netic results and geological evidence of past climates could provide a test for the GAD hypothesis over geological time.The essential point regarding such a test is that both paleomagnetic and paleoclimatic data provide independent evidence of past latitudes,since the factors con-trolling climate are quite independent of the Earth ’s magnetic field.The most useful approach is to compile the paleolatitude values for a particular occurrence in the form of equal angle or equalareaFigure G6Calculation of the position P (l p ,f p )of thepaleomagnetic pole relative to the sampling site S (l s ,f s )with mean magnetic direction (D m ,I m ).282GEOCENTRIC AXIAL DIPOLE HYPOTHESIS。
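Restated for readability, the geocentric-axial-dipole relations used in this entry (m the dipole moment, a the Earth's radius, λ the latitude, p the colatitude; the inverse-cube radial dependence is the standard dipole result) are

  H = (μ₀ m / 4π a³) cos λ,  Z = (2 μ₀ m / 4π a³) sin λ,
  F = (H² + Z²)^(1/2) = (μ₀ m / 4π a³)(1 + 3 sin² λ)^(1/2),
  tan I = 2 tan λ = 2 cot p,  D = 0°.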

Influence of UV-Curing Conditions on Polymerization Kinetics and Gloss of Urethane Acrylate Coatings


Progress in Organic Coatings 76 (2013) 432–438Contents lists available at SciVerse ScienceDirectProgress in OrganicCoatingsj o u r n a l h o m e p a g e :w w w.e l s e v i e r.c o m /l o c a t e /p o r g c o atInfluence of UV-curing conditions on polymerization kinetics and gloss of urethane acrylate coatingsViera Janˇc oviˇc ová∗,Milan Mikula,Bohuslava Havlínová,Zuzana JakubíkováInstitute of Polymer Materials,Department of Graphics Arts Technology and Applied Photochemistry,Faculty of Chemical and Food Technology,Slovak University of Technology,Radlinského 9,SK-81237Bratislava 1,Slovak Republica r t i c l ei n f oArticle history:Received 21April 2012Received in revised form 28September 2012Accepted 20October 2012Available online 14 November 2012Keywords:Photopolymerization Urethane acrylate Kinetics FTIR Glossa b s t r a c tThe photochemically curable polymer films were prepared by addition of 2,2-dimethyl-2-hydroxyacetophenone (Darocure 1173)as a radical initiator to aliphatic urethane tetraacrylate Craynor 925.Kinetic study of the UV-curing of these films by medium pressured mercury lamp was performed by means of infrared spectroscopy.The results showed that the photoinitiator concentration,the light inten-sity,sample coating thickness,presence of air oxygen,as well as the UV light intensity were the most significant factors affecting the polymerization course of UV-cured films.The influence of the sample coating thickness on the kinetics and final gloss were also considerable.© 2012 Elsevier B.V. All rights reserved.1.IntroductionLight induced curing in polymer coating systems has been intensively studied due to environmental protection,lower energy consumption and rapid curing even at the room temperature.One of the most effective methods of fast generation of spatial crosslinked polymers is based on a multifunctional monomer or oligomer exposed by UV light in the presence of an initiator [1–3].Therefore,UV-curing technology has been considered as an alterna-tive to traditional solvent-borne coatings,due to its eco-compatible process and excellent properties,such as high hardness,gloss,scratch and chemical resistance caused by high crosslink density from acrylate groups [4].Desired ingredients in radically cured formulations are ure-thane acrylate oligomers providing chemical,water resistance and heat resistance and adhesion.Polyurethane acrylate resins are often used in the liquid state as precursors to produce three-dimensional networks giving high-performance final materials [5].As UV curable resins they prove excellent adhesion,flexibility,impact property,chemical and scratch resistance and weather-ability [6,7]but often suffer from the high viscosities.They are commercially available with molecular weights ranging from 600g/mol to 6000g/mol and with functionalities ranging from 2to 6.Depending on molecular weight and chemical structure,hard∗Corresponding author.Tel.:+421259325227;fax:+421252493198.E-mail address:viera.jancovicova@stuba.sk (V.Janˇc oviˇc ová).stiff to flexible coatings can be prepared in a broad range of prop-erties [8,9].The photoinitiated polymerization with photoinitiatorDarocure 1173(2-hydroxy-2-methyl-1-phenylpropane-1-one)was studied and the maximal conversion was obtained at 70◦C [10].Several authors dealt with water based urethane acrylate coatings.The advantages offered by these environment-friendly systems are partially offset by the necessity to introduce a drying step before the UV-exposure,which will increase the overall processing time.The water 
sensitivity of these UV-cured polymers and their hydrophilic character may also restrict their use in a humid environment and in exterior applications [11,12].The important aspect of a coated material in terms of qual-ity is a gloss [13–16].It is influenced by many factors such as rheological properties and formulation of the coating,film flat-tening,curing rate,layer thickness,refraction index,substrate characteristics (roughness,pore size distribution),film curing behaviour (wrinkling,cratering,and yellowing),etc.In principle,the gloss is a complex phenomenon resulting from the interaction between light and the surface of the coating.Kim et al.studied the influence of coating composition and curing conditions on the final surface properties (pencil hardness and coating gloss).They found out that some gloss decrease can be caused by oxygen inhibition of polymerization.If simultaneously the lower layers are cross-linked,then shrinkage could occur resulting in puckering or wrinkling in the top layer.Consequently,the wrinkled pattern on the surface leads to low gloss since the surface is no longer smooth [17].The influence of the curing conditions (UV light intensity,coating thickness)and coating formulation (photoinitiator type0300-9440/$–see front matter © 2012 Elsevier B.V. All rights reserved./10.1016/j.porgcoat.2012.10.010V.Janˇc oviˇc ováet al./Progress in Organic Coatings76 (2013) 432–438433and concentration)on the UV-curing of1,6-hexandioldiacrylate andfinal gloss of the cured surface was significant.In these low viscose formulations the gloss was decreasing during the curing process.The gloss decrease of the coatings thicker than15␮m was considerable,which might be caused by shrinking of the sample surface during its curing[15].Ruiz and Machado[16]discussed the behaviour of UV-clear coats submitted to degradation processes on the basis of gloss changes.The authors found that the composition of the curing system and the curing conditions effectively affect the rate of polymerization,the maximum conversion reached and the surface properties,including gloss,hydrophobicity,surface energy.Gloss is important parameter in the printing technology, providing products with a better overall look,higher chroma (greater depth of colours)[15].UV-cured urethane acrylate clear coats are suitable to function as protective coating for prints,their advantage is an improvement in the surface properties of the coated materials such as excellent scratch and abrasion resistance, the gloss and brilliancy of print[18].The aim of this study was to investigate the curing process of a simple varnish model system composed of urethane acrylate oligomer Craynor925and photoinitiator(Darocure1173)in rela-tion to a coating composition,curing conditions(UV light intensity, air)and coating thickness.Subsequently,the influence of these fac-tors on the gloss evolution during the curing process as well as the influence of coating on the colour stability was studied.2.Material and methods2.1.MaterialsLow viscosity modified aliphatic urethane tetraacrylate Craynor925(Sartomer,France)and radical photoinitiator,2-hydroxy-2-methyl-1-phenylpropane-1-one(Darocure1173,Ciba, Switzerland)were used in order to prepare a simple varnish model.The UV–vis spectrum of this photoinitiator has absorp-tion maxima at245nm(ε=7320dm3mol−1cm−1),280nm (ε=947dm3mol−1cm−1)and325nm(ε=85dm3mol−1cm−1).2.2.Preparation offilmsThe samples were applied immediately after preparation.Var-ious amounts of initiator(from0.5wt.%to5wt.%)were added to urethane 
acrylate,mechanically mixed and stored in opaque bot-tle.The viscosity of prepared coatings was2700mPa s at25◦C,the density1.1g cm−3and the surface energy about35mJ m−2.Pho-tocuring reactions were realized on aluminium and glass plates. The defined sample volume(according to layer thickness)was spread on the plate by spin coating apparatus(Tesla Roˇz nov,Czech Republic).Different layer thicknesses were achieved by different spin velocity(2000–4000rpm)and different amount of applied sample,while the average layer thickness was determined by gravi-metric measurements.Consequently,some layers were covered with polyethylene foil(PE,Chemosvit,Slovak Republic,thickness 30␮m,molecular weight3×103kg mol−1,permeability for oxy-gen450cm3m−2day−1).The PE foil reduced the sample contact with atmospheric oxygen,thus,preventing the oxygen inhibition influence.PE foil had absorbed round40%of radiation in the spec-tral absorption region of the photoinitiator that was considered at the UV exposition.2.3.UV-curing of coatingsThe samples on the aluminium plates were irradiated by a medium pressure mercury lamp250W(RVC,Czech Republic)built into an UV-cure device constructed in our laboratory.The lamp (without anyfilters)emits standard medium pressure mercury radiation with narrow bands in UV and vis regions.However, the absorption regions of the used photoinitiator with maxima at245nm,280nm and325nm,causes that only UV radiation is photochemically active.In order to prevent the overheating during exposure the samples were placed on the water cooled Cu plate kept at25◦C.The intensity of incident light was changed with the varying distance of the light source from the sample(5cm=23mW cm−2,9cm=17mW cm−2, 12cm=12mW cm−2,15cm=7mW cm−2).The incident light intensity was measured by UVX digital radiometer(UVP,USA)with the probes for UVA and UVB region(the given values are the sum of the two measured values).Full sample area(12cm2)was exposed with the same light intensity.The curing process was evaluated by IR spectroscopy(FTIR spectrophotometer EXCALIBUR SERIES Digi-lab FTS3000NX,USA)based on the transmittance measurements. 
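Restated for readability, the double-bond conversion defined in Eq. (1) below, with ν the chosen acrylate band (810, 1618, or 1635 cm⁻¹) and the carbonyl band at 1725 cm⁻¹ as internal standard, is

  X = [1 − (A_t(ν) / A₀(ν)) · (A₀(1725) / A_t(1725))] × 100 %,

and the relative polymerization rate is R_p = ΔX/Δt.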
The degree of conversion in the curedfilm was determined accord-ing to the amount of acrylate double bond(twisting vibration at 810cm−1,stretching vibration at1610–1640cm−1)by a baseline method.The internal standard was a carbonyl peak at1725cm−1, in order to eliminate the influence of scatter in layer thickness.The degree of conversion X and relative polymerization rate R p were cal-culated from well-known equation(1)[19]which were modified according to the standard peak:X=1−A t( )A0( )·A0(1725)A t(1725)×100(1)where A0( )and A t( )is the absorbance of monomers C C bonds measured at chosen wavelength(810,1618or1635cm−1)before and after the exposure to UV light for the time t,respectively and A0(1725)and A t(1725)is the absorbance of carbonyl bonds at the same exposure time.Generally,our experiences show that the absorbance at1725cm−1did not change with irradiation.The relative polymerization rate R p was calculated from equa-tion R p=( X/ t),where X is the conversion degree of monomer’s C C bonds,at the exposure time t.The values of maximum conver-sion X max and maximum polymerization rate R p,max were obtained from the plots of X and R p vs.time in initial stage of curing.The time interval of curing steps was changed during the curing pro-cess to obtain nearly the same and noticeable change of absorbance at1635cm−1.2.4.Sample gloss estimationGloss(G)is defined as the ability of a surface to reflect light to the specular angle.Gloss(in gloss units“GU”)can be measured by gloss-metres that are able to compare the amount of light reflected from the sample surface and from the gloss standard at the same geometry set-up.Glossy black glass with defined refractive index is usually used as a calibration standard(GU=100).The sample surface appears to be matt if its gloss is less than6GU,if the sample gloss is in the range of6–30GU then the surface is semi-matt,if the surface reaches the gloss of30–70GU then the surface appears to be semi-glossy,and if its gloss is over70GU of the standard gloss then the surface is high-glossy.The gloss of coated lustrously foils is often pretty higher than100GU because of light reflection from 2or more boundaries.Gloss changes were monitored in real time during the sample curing process using a monochromatic gloss-metre constructed in our laboratory.The gloss measurements were carried out using the glass substrate samples with a matt-white surface.The sample was placed andfixed on a horizontal support,illuminated by red diode-laser light(650nm)at the angle of45◦,and the light reflected from the sample surface was detected by a silicon photodetector with the linear amplifier.Illuminated area was10mm×5mm at the centre of the sample and the laser diode was25cm apart.At the434V.Janˇc oviˇc ováet al./Progress in Organic Coatings 76 (2013) 432–438same time,the sample was exposed and cured by UV light andthe change of the photodetector signal U t sample was recorded.The signal is proportional to the light reflected,i.e.to the gloss of the sample surface.The glossy black glass was used as the calibration standard (100GU).The gloss value at given exposure time t was calculated using equation:G t =U t sampleU standard×100(2)The final gloss values of completely cured surfaces G ∞were obtained from plots of G t vs.time [15].2.5.Testing the effect of coating on the light fastness of ink-jet prints coloursThe samples (solid printed areas,4cm 2)were printed by ink jet printer Desk Jet 560C (Hewlett Packard,resolution 300dpi)with CMYK dye based inks on paper,then coated by bar 
film applicator with a layer of urethane acrylate Craynor 925containing 3wt.%)of photoinitiator Darocure 1173.The coatings were cured (60s at 23mW cm −2).Consequently the samples were irradiated in the laboratory made box with metal halogen and fluorescent lamps simulating day light exposition (colour temperature 5000K).The total light dose varied between 5and 20MJ m −2.UV–vis reflectance spectra were measured by spectrocolorimeter Spectrolino,GretagMacbeth AG.The total colour difference E ∗abwas calculated from Eq.(3)[20]:E ∗ab=[( L ∗)2+( a ∗)2+( b ∗)2]1/2(3)where L ∗=L ∗2−L ∗1; a ∗=a ∗2−a ∗1and b ∗=b ∗2−b ∗1.Value L *represents lightness of colour spot,chromatic coordi-nates a *and b *range from red to green and from yellow to blue colour respectively.Difference of the two colours in the CIELAB colour space is given by the length of the line connecting the points given by the L *,a *,b *coordinates of respective colours and can be calculated using equation [24],where values L 1,a 1,b 1were mea-sured immediately after samples preparation and L 2,a 2,b 2after irradiation with the above-mentioned lamp.3.Results and discussionPhotochemical curing of coatings prepared from urethane acrylate Craynor 925with various content of initiator 2-hydroxy-2-methyl-1-phenylpropane-1-one was observed by FTIR spectroscopy.The samples were coated on aluminium plates.The curing process resulted in a decrease of the intensity of C C band vibrations at 810,1618and 1635cm −1(Fig.1).The conversion was calculated according to equation for all three wavenumbers (Fig.2).The calculated values of consumption of monomer and polymer-ization rate were very similar and independent on wavenumber.In the following experiments the conversion degrees and the reac-tion rates were calculated only based on the values at 810cm −1.The spectra were normalized according to the carbonyl peak at 1725cm −1.The double bond content of the uncured formulation was defined as 100%.3.1.Influence of the initiator concentrationThe photoinitiator plays a key role in the process of light induced curing.It produces free radicals that are initiating the chain reac-tion with double bond in tetrafunctional urethane acrylate.Hence it regulates the rate of initiation,the amount of light penetrating to the system and the degree of conversion.The lower production of initiating radicals at low photoinitiator concentration results in the20001750150 0125 0100 075020*********T r a n s m i t a n c e [%]Wavenumber [cm -1]Fig.1.Infrared spectra of a urethane acrylate Craynor 925with initiator Darocure 1173contents 3wt.%before (solid line)and after exposition 1min,light intensity 23mW cm −2(dot line).reduction of the polymerization rate and lower conversion.Another factor affecting the polymerization is the oxygen inhibition effect,which is due to the scavenging of the initiating and grows radi-cals by molecular oxygen [21,22].Longer UV exposure required at low photoinitiator concentration will increase the amount of atmospheric oxygen that diffuses into the sample and makes the oxygen inhibition to be more pronounced.In order to reduce the oxygen amount in the layer the samples were covered with thin polyethylene foil during curing.The samples were cured at various light intensities in the range from 7mW cm −2to 23mW cm −2.The curing at the lowest and at the highest intensity is presented in Fig.3and Table 1.The concentration of photoinitiator varied in the range of 0.5–5wt.%)according to the mass of urethane acrylate.The amount of initia-tor has significant 
3. Results and discussion

The photochemical curing of coatings prepared from urethane acrylate Craynor 925 with various contents of the initiator 2-hydroxy-2-methyl-1-phenylpropane-1-one was followed by FTIR spectroscopy. The samples were coated on aluminium plates. The curing process resulted in a decrease of the intensity of the C=C band vibrations at 810, 1618 and 1635 cm⁻¹ (Fig. 1). The conversion was calculated according to Eq. (1) for all three wavenumbers (Fig. 2). The calculated values of monomer consumption and polymerization rate were very similar and independent of the wavenumber chosen. In the following experiments the conversion degrees and the reaction rates were therefore calculated only from the values at 810 cm⁻¹. The spectra were normalized to the carbonyl peak at 1725 cm⁻¹. The double bond content of the uncured formulation was defined as 100%.

Fig. 1. Infrared spectra of urethane acrylate Craynor 925 with an initiator (Darocure 1173) content of 3 wt.% before (solid line) and after 1 min of exposure at a light intensity of 23 mW cm⁻² (dotted line).

Fig. 2. Estimation of the double bond conversion (60 s) in urethane acrylate at the wavenumbers 810, 1618 and 1635 cm⁻¹ (A_0(1635) = 0.78, A_60(1635) = 0.12, X_1635 = 84.6%; A_0(810) = 1.29, A_60(810) = 0.20, X_810 = 84.5%).

3.1. Influence of the initiator concentration

The photoinitiator plays a key role in the process of light-induced curing. It produces the free radicals that initiate the chain reaction with the double bonds of the tetrafunctional urethane acrylate; hence it controls the rate of initiation, the amount of light penetrating into the system and the degree of conversion. The lower production of initiating radicals at low photoinitiator concentration results in a reduction of the polymerization rate and a lower conversion. Another factor affecting the polymerization is the oxygen inhibition effect, caused by the scavenging of the initiating and growing radicals by molecular oxygen [21,22]. The longer UV exposure required at low photoinitiator concentration increases the amount of atmospheric oxygen that diffuses into the sample and makes the oxygen inhibition more pronounced. In order to reduce the amount of oxygen in the layer, the samples were covered with a thin polyethylene foil during curing.

The samples were cured at various light intensities in the range from 7 mW cm⁻² to 23 mW cm⁻². The curing at the lowest and at the highest intensity is presented in Fig. 3 and Table 1. The concentration of the photoinitiator was varied in the range of 0.5–5 wt.% relative to the mass of urethane acrylate. The amount of initiator has a significant influence on the curing of the urethane acrylate. The maximal conversion degree and the maximal polymerization rate were reached at an initiator concentration of 3 wt.% (X_max 92% at 23 mW cm⁻² and 86% at 7 mW cm⁻²). These results are in good agreement with the results of Huang and Shi [23], obtained for a similar system by DSC analysis. In our experiments, increasing the amount of photoinitiator from 3 wt.% to 5 wt.% decreased both the double bond conversion and the rate of polymerization. The likely reason is that 2-hydroxy-2-methyl-1-phenylpropane-1-one absorbs strongly at its absorption maximum at 245 nm, so that at higher concentration the initiator acts as an internal filter. This internal filtration effect (decreased penetration of UV light) can give rise to a concentration gradient between the surface and the bottom layer of the irradiated film. In addition, a locally high concentration of initiator radicals can promote radical recombination, and hence consumption of the initiator in side reactions that do not lead to polymerization.

Table 1. Effect of photoinitiator concentration and light intensity on the maximal degree of conversion X_max and the maximal relative polymerization rate R_p,max (layer thickness 10 µm, PE foil).

Initiator concentration (wt.%)   23 mW cm⁻²: X_max (%) / R_p,max (s⁻¹)   7 mW cm⁻²: X_max (%) / R_p,max (s⁻¹)
0.5                              66 / 0.06                               47 / 0.02
1                                79 / 0.33                               74 / 0.22
2                                86 / 0.42                               79 / 0.43
3                                92 / 0.71                               86 / 0.56
3.5                              89 / 0.44                               81 / 0.18
5                                85 / 0.37                               80 / 0.10

Fig. 3. Influence of irradiation time on the UV-curing of urethane acrylate Craynor 925 at various Darocure 1173 concentrations (layer thickness 10 µm, covered with PE foil) at light intensities of 23 mW cm⁻² (a) and 7 mW cm⁻² (b).

Although the conversion values achieved at a total light dose of 1 J cm⁻² were very similar for both intensities, the maximal conversion achieved at the higher intensity was higher than that at the lower intensity (Fig. 4). The influence of composition on the maximal achieved conversion was more important for curing at a light intensity of 7 mW cm⁻². For the composition with 0.5 wt.% of initiator, the maximal conversion at this intensity was only 47%; the composition was in fact uncured and tacky. When the light intensity was 23 mW cm⁻², a maximal conversion of 66% was obtained with the same initiator concentration and the system was completely cured. In any case, the highest conversion was observed at an initiator concentration of 3 wt.%.

Fig. 4. The effect of initiator concentration on the conversion of double bonds in urethane acrylate at different light intensities (X_max is the maximal achieved double bond conversion, X_1J is the conversion after a light dose of 1 J cm⁻²).

The rate of polymerization, R_p,max (Table 1), was also influenced by the photoinitiator concentration. The highest value was achieved at the higher radiation intensity (23 mW cm⁻²) and at an initiator concentration of 3 wt.%.
3.2. Influence of the external conditions

The radiation intensity, the thickness of the applied layer, the temperature [24] and the presence of oxygen [21,22] are factors which can significantly influence the curing process and the final quality of the cured film. The influence of three of these factors (incident light intensity, layer thickness and oxygen presence) was studied (Figs. 5 and 6). The conversion degree of the double bonds increased with increasing incident light intensity. Under the given experimental conditions the photopolymerization is a complex process, with the polymerization rate and conversion strongly dependent on light intensity. An extension of the exposure does not lead to an ever-increasing conversion, and the conversion values for formulations cured with the same UV dose but a higher incident light intensity were higher than those cured at a lower incident light intensity, in accordance with the literature [25,26].

Fig. 5. The influence of irradiation time on the curing of urethane acrylate Craynor 925 at various light intensities (initiator concentration 3 wt.%, layer thickness 10 µm), unprotected (a) and covered with PE foil (b).

The influence of light intensity was insignificant when the cured samples were covered with a PE foil preventing access of air oxygen into the cured layer (Fig. 5). The influence of oxygen increased when curing was carried out in air without the protection of the PE foil. At low intensity the curing was very slow and the hardening was obviously insufficient: the reaction of the formed radicals with oxygen was faster than the initiation reaction with the double bonds.

The effect of layer thickness on the curing of samples in air is shown in Fig. 6. The highest reaction rate in the initial curing phase was observed for the thickest layer. The reaction was probably more effective in the bottom part of the coating, the top layer acting as a barrier against air oxygen. As the viscosity increased during photopolymerization, oxygen penetration throughout the coating became more difficult and the photopolymerization was more effective in the thinner layers (10 and 15 µm). The highest maximal conversion degree was observed for the thinnest layer; with increasing layer thickness this value diminished. The slowdown of the reaction in the later reaction phase (exposure times longer than 10 s) with growing layer thickness is probably due to a decrease of UV light transmission caused by the higher absorption of reaction products in the surface layer of the irradiated film. A higher layer thickness will decrease the light penetration to the bottom of the coating and result in a low and non-homogeneous conversion, which may lead to inferior adhesion properties [22].

Fig. 6. The influence of irradiation time on the curing of urethane acrylate Craynor 925 at various layer thicknesses (initiator concentration 3 wt.%, light intensity 7 mW cm⁻², air).

The influence of air oxygen on the curing of the tetrafunctional urethane acrylate in the presence of the photoinitiator 2-hydroxy-2-methyl-1-phenylpropane-1-one (Fig. 5) was very significant. The samples covered with polyethylene foil were well hardened after 30 s at an intensity of 7 mW cm⁻² (light dose 0.21 J cm⁻², conversion degree 75%), while the uncovered samples were insufficiently cured even after 5 min (light dose 2.1 J cm⁻², conversion degree 65%) and remained tacky. Air oxygen significantly retarded the reaction. Even a longer exposure (3 min, exposure doses of 2–6 J cm⁻²) was not enough to cure the coating at this light intensity: the coatings were tacky, smelling and cracked.
The free radicals formed by the photolysis of the initiator are rapidly scavenged by O₂ molecules to yield peroxyl radicals. These are not reactive towards the acrylate double bonds and cannot initiate or take part in any polymerization reaction; nor can they abstract hydrogen atoms from the polymer backbone to generate hydroperoxides. Oxygen can also react with the growing polymer radicals to yield hydroperoxides, so that premature chain termination occurs. Eliminating the access of air oxygen to the cured layers therefore increased the curing efficiency [21,22].

3.3. Gloss changes during curing

Specular gloss is a measure of the ability of a coating surface to reflect a light beam at a particular angle without scattering. It is an important property of coatings used for aesthetic and decorative purposes [27]. The effect of layer thickness and curing conditions on the curing kinetics and the gloss was investigated using a sample of constant composition and three different UV light intensities.

The layer thickness affected the final gloss of the samples (Fig. 7). The value of the final gloss first increased with increasing layer thickness (in the range from 20 µm to 30 µm). For layers thicker than 30 µm (40 µm) a lower value of G_∞ was obtained, caused by the formation of an orange peel effect during curing. Reducing either the sample thickness or the photopolymerization rate can eliminate the orange peel effect. The final gloss of the cured samples also depended on the UV light intensity used (Fig. 7). The time dependence of the gloss was an increasing one, with a tendency to reach a steady state at which the gloss of the cured surface was constant. All samples showed very high gloss and were transparent after curing. The change of gloss was most intense in the initial part of the curing process and depended on both of the above-mentioned factors.

Fig. 7. Gloss changes during the curing of urethane acrylate Craynor 925 (initiator concentration 3 wt.%) at light intensities of 23 mW cm⁻² (a) and 7 mW cm⁻² (b) and different layer thicknesses.

Table 2. Total colour differences ΔE*_ab after 5 and 20 h of irradiation of unprotected samples and of samples protected with the system urethane acrylate CN-925/initiator Darocure 1173 (3 wt.%), layer thickness 10, 15 and 20 µm, for the four inks cyan (C), magenta (M), yellow (Y) and black (B).

Layer thickness of coating   5 h: C / M / Y / B         20 h: C / M / Y / B
0 µm                         4.1 / 5.7 / 1.0 / 0.9      18.2 / 22.1 / 5.1 / 1.2
10 µm                        3.6 / 4.6 / 0.5 / 0.2       9.3 / 12.4 / 4.6 / 0.8
15 µm                        3.8 / 5.3 / 0.6 / 0.3      10.0 / 13.1 / 4.2 / 0.4
20 µm                        3.7 / 4.6 / 0.6 / 0.3      10.1 / 13.1 / 3.8 / 0.4

3.4. The influence of the coating composition on ink stability

The low stability of ink-jet prints towards environmental influences is a serious problem, especially for the outdoor use of prints or coatings; the inks are often very sensitive to light [28]. One possible solution is the development of a transparent layer with barrier properties against water which also acts as a protecting layer against sunlight. The UV–vis reflectance spectra of the four inks (cyan, magenta, yellow and black) deposited on paper, and the corresponding CIELAB values L*, a*, b*, were measured immediately after sample preparation and after irradiation with the simulated daylight. The total colour difference ΔE*_ab was calculated according to the equation from Hunt [20] given in Section 2.5. The results summarized in Table 2 show that the applied accelerated ageing procedure caused significant colour changes, as documented by the values of the colour difference ΔE*_ab.
The most significant changes were observed for magenta and cyan. The influence of the coating was positive: ΔE*_ab for the coated foils was smaller than for the uncoated foils, but no significant difference was observed between samples with different layer thicknesses (Table 2). Since the layer thickness does not influence the colour stability significantly, it is possible to use the coating with the thinner layer (10 µm), which allows faster and more effective UV-curing.

4. Conclusions

The photochemical curing of coatings prepared from urethane acrylate Craynor 925 with various contents of the initiator 2-hydroxy-2-methyl-1-phenylpropane-1-one was studied by FTIR spectroscopy. The samples were coated on aluminium plates. The conversion degrees and the reaction rates were calculated from the values at a wavenumber of 810 cm⁻¹; the carbonyl peak at 1725 cm⁻¹ was used as an internal standard.

The final properties of UV-cured coatings depend on their composition as well as on the experimental curing conditions. The highest conversion was achieved for an initiator concentration of 3 wt.%. The initial slope of the curve of final conversion vs. initiator concentration was steepest for the lower irradiation intensities. The influence of light intensity was insignificant for curing samples covered with a PE foil, which prevented the access of air oxygen to the cured layer; the influence of light intensity increased when curing in air. Moreover, the layer thickness influenced the conversion degree: the highest conversion degree was calculated for the thinnest layer (10 µm).

The final gloss of the cured samples depended on the UV light intensity used; the samples cured at higher light intensity reached higher gloss values. The maximal value of the final gloss was obtained for layers with a thickness of 30 µm.

The influence of the prepared coating on the colour stability of the inks was positive; coated foils exhibited a lower total colour difference ΔE*_ab than uncoated foils. However, the total colour difference ΔE*_ab did not depend on the layer thickness of the protective coating.

Acknowledgements

The authors thank the Slovak Grant Agencies APVV (Project No. 0324-10) and VEGA (Project No. 1/0811/11) for their financial support.

Appendix A. Supplementary data

Supplementary data associated with this article can be found, in the online version, at /10.1016/j.porgcoat.2012.10.010.

References

[1] J.P. Fouassier, Photoinitiation, Photopolymerization and Photocuring, Carl Hanser Verlag, München, 1995.
[2] J. Kindernay, A. Blažková, J. Rudá, V. Jančovičová, Z. Jakubíková, J. Photochem. Photobiol. A: Chem. 151 (2002) 229–236.
[3] J.F. Rabek, Mechanisms of Photophysical and Photochemical Reactions in Polymers, Theory and Practical Applications, Wiley, New York, 1987.
[4] H. Hwang, C. Park, J. Moon, H. Kim, T. Masubuchi, Prog. Org. Coat. 72 (2011) 663–675.
[5] X. Yu, B.P. Grady, R.S. Reiner, S.L. Cooper, J. Appl. Polym. Sci. 49 (1993) 1943–1955.
[6] R. Schwalm, UV Coatings: Basics, Recent Developments and New Applications, first ed., Elsevier, Amsterdam, 2007.
[7] Y. Zhang, F. Zhan, W. Shi, Prog. Org. Coat. 71 (2011) 399–405.

新SAT评分详解及样题


* Combined score of two raters, each scoring on a 1– 4 scale 1-4
SAT 1. Composite Score 2 2. SAT raw score 3 3. SAT Test Score Evidence-Based Reading and Writing raw score 4. SAT Studies OG 1—15 5. SAT Subscore 7 Cross-section Score 3 Section Score 400—1600
3.
)
25+15min 49
:35min 44
2-12 25min
2-8 50min
History: Questions 1-5 are based on the following passage.
This passage is adapted from a speech delivered by Congresswoman Barbara Jordan of Texas on July 25, 1974, as a member of the Judiciary Committee of the United States House of Representatives. In the passage, Jordan discusses how and when a United States president may be impeached, or charged with serious offenses, while in office. Jordan’s speech was delivered in the context of impeachment hearings against then president Richard M. Nixon.

卫星轨道外推:第二部分(英文批注)


Orbital Propagation: Part II轨道外推:第二部分By Dr. T.S. KelsoIn our last column, we covered the basics of modeling as they apply to predicting the position of a satellite in earth orbit. As with any model, the complexity of the orbital model we choose depends upon several factors, chief among these being the accuracy of the desired predictions. At the same time, since we will be calculating predictions from this orbital model on a computer, we would also like to reduce the computational complexity of the model. Unfortunately, these two goals conflict, so we will seek a model which strikes an appropriate balance between the model's fidelity (accuracy) and the computational burden of producing a prediction.In order to obtain the appropriate level of model fidelity, it will be necessary to determine the types and relative magnitudes of the forces acting upon the satellite. For our orbital model, these include the gravity of the earth along with perturbations due to the nonuniform mass distribution of the earth, gravitational attraction of the sun, moon, and planets, and atmospheric drag. Which of these forces are most important will depend not only on their relative magnitude but upon whether their effects are periodic or secular.A periodic effect, such as your car's antenna whipping back and forth as you drive down the road, has little effect on the position of the tip of the antenna over time, even though the forces involved may be rather large. If you only need to know the position of the antenna tip to within one meter, you will likely ignore this effect. However, a secular effect, one which consistently increases or decreases over time, may have a considerable effect even though the force involved may be small. An example of a secular effect would be that of a light wind blowing a sailboat across the water.As far as computational complexity is concerned, we noted at the close of the last column that there were two primary computational methods of implementing an orbital model. One was to start with a satellite's position and velocity and to sum, or integrate, all of the forces acting on the satellite. This total force is assumed to act on the satellite over some small time span, at the end of which a new position and velocity is calculated and the process is repeated. When implemented on a computer, this approach is known as numerical integration. The advantage of this approach is that if sufficient detail is given to the forces acting upon the satellite and small enough time steps are used, highly-accurate predictions can be obtained. The down side to numerical integration is that we must calculate the satellite's position and velocity for each time step between our known initial conditions and our desired prediction time.Depending upon the size of the time step and the length of the time interval, the computational burden for even a single satellite can be large. Now, multiply this problem by the approximately 7,500 objects currently tracked by NORAD (the North American Aerospace Defense Command) and you'll begin to appreciate the need for another solution. We would prefer to have a model which provides an analytical solution, that is, a solution wherein if we know the time of interestwe can directly calculate the state of the satellite's orbit at that time without the need to 'step' along in time. This is our second computational method of implementing an orbital model.During the 1970s, NORAD came up with just such a solution: a fully-analytical orbital model. 
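Before the column turns to NORAD's analytical models, the "sum the forces and step forward in time" idea described above can be made concrete with a toy propagator. The sketch below is purely illustrative: it models the Earth as a point mass, ignores drag and every other perturbation, and uses a fixed-step Runge-Kutta integrator, so it is nowhere near an operational model such as SGP4; it only shows what numerical integration of an orbit looks like in code.

```python
# Toy illustration of the numerical-integration approach: two-body (point-mass
# Earth) propagation with a fixed-step 4th-order Runge-Kutta integrator.
# No drag, no non-spherical gravity, no third bodies - a sketch only.
MU_EARTH = 398600.4418  # km^3/s^2, standard gravitational parameter of the Earth

def accel(r):
    """Two-body gravitational acceleration (km/s^2) at position r (km)."""
    norm = (r[0]**2 + r[1]**2 + r[2]**2) ** 0.5
    k = -MU_EARTH / norm**3
    return [k * r[0], k * r[1], k * r[2]]

def rk4_step(r, v, dt):
    """Advance position r and velocity v by one time step dt (seconds)."""
    def deriv(rr, vv):
        return vv, accel(rr)
    k1r, k1v = deriv(r, v)
    k2r, k2v = deriv([ri + 0.5*dt*ki for ri, ki in zip(r, k1r)],
                     [vi + 0.5*dt*ki for vi, ki in zip(v, k1v)])
    k3r, k3v = deriv([ri + 0.5*dt*ki for ri, ki in zip(r, k2r)],
                     [vi + 0.5*dt*ki for vi, ki in zip(v, k2v)])
    k4r, k4v = deriv([ri + dt*ki for ri, ki in zip(r, k3r)],
                     [vi + dt*ki for vi, ki in zip(v, k3v)])
    r_new = [ri + dt/6.0*(a + 2*b + 2*c + d)
             for ri, a, b, c, d in zip(r, k1r, k2r, k3r, k4r)]
    v_new = [vi + dt/6.0*(a + 2*b + 2*c + d)
             for vi, a, b, c, d in zip(v, k1v, k2v, k3v, k4v)]
    return r_new, v_new

# Propagate a roughly circular low-earth orbit for one hour in 10-second steps.
r = [6778.0, 0.0, 0.0]   # km (about 400 km altitude)
v = [0.0, 7.6686, 0.0]   # km/s (circular speed at that radius)
for _ in range(360):
    r, v = rk4_step(r, v, 10.0)
print(r, v)
```

Note how the work grows with the length of the prediction interval: every step between the epoch and the time of interest must be computed, which is exactly the computational burden the analytical SGP models were designed to avoid.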
With this model, known as SGP (for Simplified General Perturbation), a user can calculate a satellite's position and velocity at a particular time directly, without the need for numerical integration. The result is a considerable reduction in computational burden to support tracking the growing population of earth-orbiting satellites.Obviously, some tradeoffs had to be made in the process of developing SGP. As with any model, certain simplifications had to be made and these simplifications place restrictions upon how the model can be used. In particular, the mass of the satellite relative to the mass of the earth is assumed to be negligible. In addition, the satellite is assumed to be in a low-eccentricity, near-earth orbit and not in a rapidly-decaying orbit. These conditions were true for a large percentage of the satellite population at the time SGP was developed and allowed a model to be formulated accounting only for the primary perturbations on these satellites due to the earth's nonuniform mass distribution and atmospheric drag.Subsequent refinements to SGP extended this theory to include a semi-analytic treatment of orbits with periods greater than 225 minutes. While this theory does depend to some extent upon numerical integration, the computational burden is still considerably reduced. The break at 225 minutes (about 6,000 km altitude) is the result of a somewhat natural gap which has resulted from historical choices of orbits for various satellite missions and a crossover in significant perturbing effects; above the gap the primary perturbing forces are no longer atmospheric drag and the earth's nonuniform density, but orbital resonances with the earth's nonuniform gravitational field, solar and lunar gravitational forces, and solar radiation pressure. This new model is referred to as SGP4 and is the model currently in use by NORAD and the United Stated Space Command.You may be wondering, at this point, why I am focusing on the history of SGP4. Certainly, there are many other models, many of which are more accurate than SGP4. But SGP4 has one thing going for it that none of the others do: data. Not unlike the advent of automobiles running on unleaded gasoline, it didn't make any difference how slick a new car you had if you couldn't find any unleaded gas. Most other orbital models only provide data for a limited number of satellites (such as the space shuttle). NORAD, on the other hand, is responsible for tracking all satellites on a daily basis and it uses SGP4 to do that. This means that orbital data for this model is available for all earth-orbiting objects capable of being tracked by the US Space Surveillance Network.But we're not talking about unleaded versus regular gasoline here. Aren't all orbital element sets alike? After all, they all measure things like inclination and semi-major axis (terms we'll define further in a future column) or I can convert to these terms. Why can't I take the NORAD elements and use them in my favorite tracking program (which doesn't use the SGP4 model)? Allow me to use a computer analogy here to explain.Most of you are familiar with various algorithms for compressing data. Whether you know them by their formal names, such as run-length encoding, Huffman or LZW compression, or by the terms used by some of the popular archiving packages, such as squeezing, imploding, crushing, or the like, you wouldn't be too happy with the result if someone sent you data compressed by one method and you used another to uncompress it. 
Of course, your software would probably report an error, but that's not true of orbital software. But the result will be the same: errors.The two-line orbital element sets made available by NORAD (and redistributed by NASA) are mean Keplerian orbital element sets. The mean values for each element are generated using the SGP4 orbital model. The effects of the major perturbing forces are incorporated into these mean values in a very specific way using SGP4 and SGP4 must be used to generate accurate predictions. Failure to do so will result in errors, the magnitude of which will depend heavily on the type of orbit being modeled. These errors are typically manifested as large time errors in satellite signal or visual acquisition or being unable to locate the satellite at all.The implications of the last couple of paragraphs should be clear. Just as you wouldn't expect to find a program compressed with LHARC to be distributed with a .ZIP extension, the only data distributed in the two-line format should be NORAD-generated SGP4-compatible data. Data from other sources should never be reformatted into the two-line format just so it can be fed into a particular software package unless you don't care about the accuracy of the results (just as you wouldn't rename a file with an .LHA extension to .ZIP just so you could use PKUNZIP).Many popular commercial, shareware, and public domain satellite tracking packages implement SGP4 and you should check for this feature if you are concerned about accurate predictions. We will review some of these packages in a future column. And, if you are concerned about the source of your data, allow me to suggest two reliable sources. Daily updates of NORAD-generated two-line orbital element sets are available via Internet (anonymous ftp, gopher, or mosaic) on in the directory pub/space or on the Celestial BBS at 205/409-9280 [Neither of these services is currently available. Data can now be found only on the CelesTrak WWW site.]. Both services are available free of charge to all users.In our next column, we will explore how the NORAD two-line element sets are generated and give some numerical examples to illustrate the magnitude of the errors which can result from improper combinations of models and data. I look forward to seeing you here next time!。
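One mechanical check that is easy to automate, and independent of which orbital model you use, is the modulo-10 checksum carried in the last column of each line of a NORAD two-line element set (digits count their face value, each minus sign counts as one, everything else counts as zero). The snippet below is a hedged sketch of that commonly documented convention; it can catch transcription or reformatting damage, but, as the column stresses, it cannot tell you whether the elements were generated for SGP4.

```python
# Sketch: verify the modulo-10 checksum in column 69 of a two-line element set line.
# Digits add their value, each '-' adds 1, all other characters add 0.
def tle_checksum_ok(line: str) -> bool:
    body, check = line[:68], line[68]
    total = sum(int(ch) if ch.isdigit() else (1 if ch == '-' else 0) for ch in body)
    return total % 10 == int(check)
```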

MULTI GLOSS 268 商品说明书


MULTI GLOSS 268UNI GLOSS 60A wide array of display modes(corresponding to six languages)You can measure by selecting Basic mode,Continuous,Statistics from the Mode screen.You can select the measurement angle from the Geometry selection screen and display two angles or three angles at the same time.(MULTI GLOSS 268)You can select Mode,Geometry selection,Difference,and Calibrationfrom the Main Menu screen.Geometric selection selected two angles.(MULTI GLOSS 268)Geometric selection selected three angles.(MULTI GLOSS 268)PASS/FAILStandard value nameSample nameLarge screen size and easily viewable liquid crystal display.Easy operation with mode scroll wheel.By operating the mode scroll wheel in accordance with easy-to-understand menu items,you can set a wide array of functions easily.Wide measurement range that can be measured from plastic to metal surface with a high gloss finish :0.0~2,000GU(in the case of 20˚)The gloss difference measurement of thestandard/sample and the PASS/FAIL evaluation.You can make a pass/fail evaluation by setting the limit value.You can enter the sample name and the standard value name in the measurement result.Autodiagnosis function of calibrationThis means a little frequency of calibration and long-term stability.In addition,if the calibration is required,the message will be displayed.Calibrating standardProtective holder•It has data compatibility with the previous MULTI GLOSS 268/UNI GLOSS 60,and improves repeatability and reproducibility.•Approx.10,000measure-ments are available with one AA (R6)size alkaline battery.(20˚,60˚,85˚)(60˚)1981∗ Microsoft Excel ®97or higher.Windows ®,Windows NT ®and Excel ®is a trademark or registered trademark of Microsoft Corporation of America or its subsidiaries.AEFDPK Printed in Japan9242-4890-11 2003KONICA MINOLTA SENSING,INC.4OS Windows ®98/2000/Me/XP,Windows NT ®4.03-91,Daisennishimachi,Sakai.Osaka 590-8551,JapanKonica Minolta Photo Imaging U.S.A.,Inc.725Darlington Avenue,Mahwah,NJ 07430,U.S.A.Phone:888-473-2656(in USA),201-529-6060(outside USA)FAX:201-529-6070Konica Minolta Photo Imaging Canada,Inc.1329Meyerside Drive,Mississauga,Ontario L5T 1C9,Canada Phone:905-670-7722FAX:905-795-8234Konica Minolta Photo Imaging Europe GmbH Europaallee 17,30855Langenhagen,Germany Phone:0511-740440FAX:0511-741050Konica Minolta Photo Imaging France S.A.S.Paris Nord II ,385,rue de la Belle-Etoile,B.P.50077,F-95948Roissy C.D.G.Cedex,France Phone:01-49386550/01-30866161FAX:01-48638069/01-30866280Konica Minolta Photo Imaging UK Ltd.Precedent Drive,Rooksley Park,Milton Keynes United Kingdom Phone:01-908200400FAX:01-908618662Konica Minolta Photo Imaging Austria GmbH Amalienstrasse 59-61,1131Vienna,Austria Phone:01-87882-430FAX:01-87882-431Konica Minolta Photo Imaging Benelux B.V.Postbus 6000,3600HA Maarssen,The Netherlands Phone:030-2470860FAX:030-2470861Konica Minolta Photo Imaging (Schweiz)AG Riedstrasse 6,8953Dietikon,Switzerland Phone:01-7403727FAX:01-7422350Konica Minolta Business Solutions Italia S.p.A.Via Stephenson 37,20157,Milano,Italy Phone:02-39011-1FAX:02-39011-219Konica Minolta Photo Imaging Svenska AB Solnastrandvägen 3,P.O.Box 9058S-17109,Solna,Sweden Phone:08-627-7650FAX:08-627-7685Konica Minolta Photo Imaging (HK)Ltd.Room 1818,Sun Hung Kai Centre,30Harbour Road,Wanchai,Hong Kong Phone:852-********FAX:852-********Shanghai OfficeRm 1211,Ruijin Building No.205Maoming Road (S)Shanghai 20020,China Phone:************FAX:************Konica Minolta Photo Imaging Asia HQ Pte Ltd.10,Teban Gardens Crescent,Singapore 
608923Phone:+656563-5533FAX:+656560-9721KONICA MINOLTA SENSING,INC.Seoul Office801,Chung-Jin Bldg.,475-22,BangBae-Dong,Seocho-ku,Seoul,Korea Phone:02-523-9726FAX:02-523-9729DimensionsUnits :mmWith dedicated Data transfer software(gloss-link),the data management such as data transfer to Excel ®∗is easy.。
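The PASS/FAIL feature mentioned above amounts to comparing a reading against a stored standard value within a user-set limit. The following is a minimal sketch of that logic, not Konica Minolta's firmware; the names and tolerance are invented for illustration.

```python
# Minimal sketch of a gloss pass/fail check against a standard value and a limit.
# Purely illustrative; not taken from the instrument's firmware or documentation.
def gloss_pass_fail(measured_gu, standard_gu, limit_gu):
    """Return 'PASS' if the gloss difference is within the set limit, else 'FAIL'."""
    return "PASS" if abs(measured_gu - standard_gu) <= limit_gu else "FAIL"

print(gloss_pass_fail(measured_gu=87.4, standard_gu=90.0, limit_gu=5.0))  # PASS
```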

tpo40三篇托福阅读TOEFL原文译文题目答案译文背景知识


tpo40三篇托福阅读TOEFL原文译文题目答案译文背景知识阅读-1 (2)原文 (2)译文 (5)题目 (8)答案 (17)背景知识 (17)阅读-2 (20)原文 (20)译文 (23)题目 (25)答案 (35)背景知识 (35)阅读-3 (38)原文 (38)译文 (41)题目 (44)答案 (53)背景知识 (54)阅读-1原文Ancient Athens①One of the most important changes in Greece during the period from 800 B.C. to 500 B.C. was the rise of the polis, or city-state, and each polis developed a system of government that was appropriate to its circumstances. The problems that were faced and solved in Athens were the sharing of political power between the established aristocracy and the emerging other classes, and the adjustment of aristocratic ways of life to the ways of life of the new polis. It was the harmonious blending of all of these elements that was to produce the classical culture of Athens.②Entering the polis age, Athens had the traditional institutions of other Greek protodemocratic states: an assembly of adult males, an aristocratic council, and annually elected officials. Within this traditional framework the Athenians, between 600 B.C. and 450 B.C., evolved what Greeks regarded as a fully fledged democratic constitution, though the right to vote was given to fewer groups of people than is seen in modern times.③The first steps toward change were taken by Solon in 594 B.C., when he broke the aristocracy's stranglehold on elected offices by establishing wealth rather than birth as the basis of office holding, abolishing the economic obligations of ordinary Athenians to the aristocracy, and allowing the assembly (of which all citizens were equal members) to overrule the decisions of local courts in certain cases. The strength of the Athenian aristocracy was further weakened during the rest of the century by the rise of a type of government known as a tyranny, which is a form of interim rule by a popular strongman (not rule by a ruthless dictator as the modern use of the term suggests to us). The Peisistratids, as the succession of tyrants were called (after the founder of the dynasty, Peisistratos), strengthened Athenian central administration at the expense of the aristocracy by appointing judges throughout the region, producing Athens’ first national coinage, and adding and embellishing festivals that tended to focus attention on Athens rather than on local villages of the surrounding region. By the end of the century, the time was ripe for more change: the tyrants were driven out, and in 508 B.C. a new reformer, Cleisthenes, gave final form to the developments reducing aristocratic control already under way.④Cleisthenes' principal contribution to the creation of democracy at Athens was to complete the long process of weakening family and clanstructures, especially among the aristocrats, and to set in their place locality-based corporations called demes, which became the point of entry for all civic and most religious life in Athens. Out of the demes were created 10 artificial tribes of roughly equal population. From the demes, by either election or selection, came 500 members of a new council, 6,000 jurors for the courts, 10 generals, and hundreds of commissioners. The assembly was sovereign in all matters but in practice delegated its power to subordinate bodies such as the council, which prepared the agenda for the meetings of the assembly, and courts, which took care of most judicial matters. Various committees acted as an executive branch, implementing policies of the assembly and supervising, for instance, the food and water supplies and public buildings. 
This wide-scale participation by the citizenry in the government distinguished the democratic form of the Athenian polis from other less liberal forms.⑤The effect of Cleisthenes’ reforms was to establish the superiority of the Athenian community as a whole over local institutions without destroying them. National politics rather than local or deme politics became the focal point. At the same time, entry into national politics began at the deme level and gave local loyalty a new focus: Athens itself. Over the next two centuries the implications of Cleisthenes’ reforms were fully exploited.⑥During the fifth century B.C. the council of 500 was extremely influential in shaping policy. In the next century, however, it was the mature assembly that took on decision-making responsibility. By any measure other than that of the aristocrats, who had been upstaged by the supposedly inferior "people", the Athenian democracy was a stunning success. Never before, or since, have so many people been involved in the serious business of self-governance. It was precisely this opportunity to participate in public life that provided a stimulus for the brilliant unfolding of classical Greek culture.译文古雅典①在公元前800年到公元前500年期间,希腊最重要的变化之一是城邦的崛起,并且每个城邦都发展了适合其情况的政府体系。

CellTiter Glo Luminescent Cell Viability Assay Protocol


Promega Corporation ·2800 Woods Hollow Road ·Madison, WI 53711-5399 USA Toll F ree in USA 800-356-9526·Phone 608-274-4330 ·F ax 608-277-2516 ·1.Description (1)2.Product Components and Storage Conditions (4)3.Performing the CellTiter-Glo ®Assay (5)A.Reagent Preparation (5)B.Protocol for the Cell Viability Assay (6)C.Protocol for Generating an ATP Standard Curve (optional) (7)4.Appendix (7)A.Overview of the CellTiter-Glo ®Assay..............................................................7B.Additional Considerations..................................................................................8C.References............................................................................................................11D.Related Products. (12)1.DescriptionThe CellTiter-Glo ®Luminescent Cell Viability Assay (a–e)is a homogeneous method to determine the number of viable cells in culture based on quantitation of the ATP present, which signals the presence of metabolically active cells. The CellTiter-Glo ®Assay is designed for use with multiwell-plate formats, making it ideal for automated high-throughput screening (HTS) and cell proliferation and cytotoxicity assays. The homogeneous assay procedure (Figure 1) involves adding a single reagent (CellTiter-Glo ®Reagent) directly to cells cultured in serum-supplemented medium. Cell washing, removal of medium or multiple pipetting steps are not required.The homogeneous “add-mix-measure” format results in cell lysis and generation of a luminescent signal proportional to the amount of ATP present (Figure 2).The amount of ATP is directly proportional to the number of cells present in culture in agreement with previous reports (1). The CellTiter-Glo ®Assay relies on the properties of a proprietary thermostable luciferase (Ultra-Glo™ Recombinant Luciferase), which generates a stable “glow-type” luminescent signal and improves performance across a wide range of assay conditions. The luciferase reaction for this assay is shown in Figure 3. The half-life of the luminescent signal resulting from this reaction is greater than five hours (Figure 4). This extended half-life eliminates the need for reagent injectors and provides flexibility for continuous or batch-mode processing of multiple plates. The unique homogeneous format reduces pipetting errors that may be introduced during the multiple steps required by other ATP-measurement methods.CellTiter-Glo ®Luminescent Cell Viability AssayAll technical literature is available on the Internet at: /protocols/ Please visit the web site to verify that you are using the most current version of this Technical Bulletin. Please contact Promega Technical Services if you have questions on useofthissystem.E-mail:********************Figure 1. Flow diagram showing preparation and use of CellTiter-Glo ®Reagent.Promega Corporation ·2800 Woods Hollow Road ·Madison, WI 53711-5399 USA Toll F ree in USA 800-356-9526·Phone 608-274-4330 ·F ax 608-277-2516 ·3170M A 12_0ACellTiter-Glo CellTiter-Glo MixerLuminometer®System Advantages•Homogeneous:“Add-mix-measure” format reduces the number of plate-handling steps to fewer than that required for similar ATP assays.•Fast:Data can be recorded 10 minutes after adding reagent.•Sensitive:Measures cells at numbers below the detection limits of standard colorimetric and fluorometric assays.•Flexible:Can be used with various multiwell formats. 
Data can be recorded by luminometer or CCD camera or imaging device.•Robust:Luminescent signal is very stable, with a half-life >5 hours,depending on cell type and culture medium used.•Able to Multiplex:Can be used with reporter gene assays or other cell-based assays from Promega (2,3).Figure 3. The luciferase reaction.Mono-oxygenation of luciferin is catalyzed byluciferase in the presence of Mg 2+, ATP and molecular oxygen.Promega Corporation ·2800 Woods Hollow Road ·Madison, WI 53711-5399 USA Toll F ree in USA 800-356-9526·Phone 608-274-4330 ·F ax 608-277-2516 ·3171M A 12_0A L u m i n e s c e n c e (R L U )Cells per Well10,00060,00020,00030,00040,00050,0000R² = 0.9990.5 × 1061.0 × 1061.5 × 1062.0 × 1062.5 × 1063.0 × 1063.5 × 1064.0 × 106r² = 0.99020,00010,00030,00040,00050,000r² = 0.9900100200300400HO SN S N O S N S N OCOOH +ATP+O 2Ultra-Glo™ Recombinant Luciferase +AMP+PP i +CO 2+LightBeetle Luciferin OxyluciferinMg 2+0Figure 2. Cell number correlates with luminescent output.A direct relationship exists between luminescence measured with the CellTiter-Glo ®Assay and the number of cells in culture over three orders of magnitude. Serial twofold dilutions of HEK293cells were made in a 96-well plate in DMEM with 10% FBS, and assays wereperformed as described in Section 3.B. Luminescence was recorded 10minutes after reagent addition using a GloMax ®-Multi+ Detection System. Values represent the mean ± S.D. of four replicates for each cell number. The luminescent signal from 50HEK293 cells is greater than three times the background signal from serum-supplemented medium without cells. There is a linear relationship (r 2= 0.99)between the luminescent signal and the number of cells from 0to 50,000 cells per well.Figure 4. Extended luminescent half-life allows high-throughput batchprocessing.Signal stability is shown for three common cell lines. HepG2 and BHK-21cells were grown and assayed in MEM containing 10% FBS, while CHO-K1 cells were grown and assayed in DME/F-12 containing 10% FBS. CHO-K1, BHK-21 and HepG2 cells, at 25,000 cells per well, were added to a 96-well plate. After an equal volume of CellTiter-Glo ®Reagent was added, plates were shaken and luminescence monitored over time with the plates held at 22°C. The half-lives of the luminescent signals for the CHO-K1, BHK-21 and HepG2 cells were approximately 5.4, 5.2 and5.8hours, respectively.2.Product Components and Storage ConditionsProduct Size Cat.#CellTiter-Glo ®Luminescent Cell Viability Assay 10ml G7570Substrate is sufficient for 100 assays at 100µl/assay in 96-well plates or 400 assays at 25µl/assay in 384-well plates. Includes:• 1 × 10mlCellTiter-Glo ®Buffer • 1 vial CellTiter-Glo ®Substrate (lyophilized)Product Size Cat.#CellTiter-Glo ®Luminescent Cell Viability Assay 10 × 10ml G7571Each vial of substrate is sufficient for 100 assays at 100µl/assay in 96-well plates or 400 assays at 25µl/assay in 384-well plates (1,000 to 4,000 total assays). Includes:•10 × 10mlCellTiter-Glo ®Buffer •10 vials CellTiter-Glo ®Substrate (lyophilized)Promega Corporation ·2800 Woods Hollow Road ·Madison, WI 53711-5399 USA Toll F ree in USA 800-356-9526·Phone 608-274-4330 ·F ax 608-277-2516 ·R e l a t i v e L u m i n e s c e n c e (%)Time (minutes)CHO-K101020304050607080901003173M A 12_0AProduct Size Cat.# CellTiter-Glo®Luminescent Cell Viability Assay100ml G7572 Substrate is sufficient for 1,000 assays at 100µl/assay in 96-well plates or 4,000assays at 25µl/assay in 384-well plates. 
Includes:•1 × 100ml CellTiter-Glo®Buffer• 1 vial CellTiter-Glo®Substrate (lyophilized)Product Size Cat.# CellTiter-Glo®Luminescent Cell Viability Assay10 × 100ml G7573Each vial of substrate is sufficient for 1,000 assays at 100µl/assay in 96-well plates or4,000 assays at 25µl/assay in 384-well plates (10,000to 40,000 total assays). Includes:•10 × 100ml CellTiter-Glo®Buffer•10 vials CellTiter-Glo®Substrate (lyophilized)Storage Conditions:For long-term storage, store the lyophilized CellTiter-Glo®Substrate and CellTiter-Glo®Buffer at –20°C. For frequent use, the CellTiter-Glo®Buffer can be stored at 4°C or room temperature for 48hours without loss of activity. See product label for expiration date information. ReconstitutedCellTiter-Glo®Reagent (Buffer plus Substrate) can be stored at room temperaturefor up to 8hours with <10% loss of activity, at 4°C for 48hours with ~5% lossof activity, at 4°C for 4days with ~20% loss of activity or at –20°C for 21weekswith ~3% loss of activity. The reagent is stable for up to ten freeze-thaw cycles,with less than 10% loss of activity.3.Performing the CellTiter-Glo®AssayMaterials to Be Supplied by the User•opaque-walled multiwell plates adequate for cell culture•multichannel pipette or automated pipetting station for reagent delivery•device (plate shaker) for mixing multiwell plates•luminometer, CCD camera or imaging device capable of reading multiwell plates •optional:ATP for use in generating a standard curve (Section 3.C)3.A.Reagent Preparation1.Thaw the CellTiter-Glo®Buffer, and equilibrate to room temperature priorto use. For convenience the CellTiter-Glo®Buffer may be thawed andstored at room temperature for up to 48hours prior to use.2.Equilibrate the lyophilized CellTiter-Glo®Substrate to room temperatureprior to use.Promega Corporation·2800 Woods Hollow Road ·Madison, WI 53711-5399 USA Toll F ree in USA 800-356-9526·Phone 608-274-4330 ·F ax 608-277-2516 ·3.A.Reagent Preparation (continued)3.Transfer the appropriate volume (10ml for Cat.# G7570 and G7571, or 100mlfor Cat.# G7572 and G7573) of CellTiter-Glo ®Buffer into the amber bottlecontaining CellTiter-Glo ®Substrate to reconstitute the lyophilizedenzyme/substrate mixture. This forms the CellTiter-Glo ®Reagent.4.Mix by gently vortexing, swirling or inverting the contents to obtain ahomogeneous solution. The CellTiter-Glo ®Substrate should go intosolution easily in less than 1minute.3.B.Protocol for the Cell Viability AssayWe recommend that you perform a titration of your particular cells todetermine the optimal number and ensure that you are working within thelinear range of the CellTiter-Glo ®Assay. 
Figure 2 provides an example of sucha titration of HEK293 cells using 0 to 50,000 cells per well in a 96-well format.1.Prepare opaque-walled multiwell plates with mammalian cells in culturemedium, 100µl per well for 96-well plates or 25µl per well for 384-wellplates.Multiwell plates must be compatible with the luminometer used.2.Prepare control wells containing medium without cells to obtain a value forbackground luminescence.3.Add the test compound to experimental wells, and incubate according toculture protocol.4.Equilibrate the plate and its contents at room temperature forapproximately 30 minutes.5.Add a volume of CellTiter-Glo ®Reagent equal to the volume of cell culturemedium present in each well (e.g., add 100µl of reagent to 100µl of mediumcontaining cells for a 96-well plate, or add 25µl of reagent to 25µl ofmedium containing cells for a 384-well plate).6.Mix contents for 2 minutes on an orbital shaker to induce cell lysis.7.Allow the plate to incubate at room temperature for 10 minutes to stabilizeluminescent signal.Note:Uneven luminescent signal within standard plates can be caused bytemperature gradients, uneven seeding of cells or edge effects in multiwellplates.8.Record luminescence.Note:Instrument settings depend on the manufacturer. An integration timeof 0.25–1 second per well should serve as a guideline.Promega Corporation ·2800 Woods Hollow Road ·Madison, WI 53711-5399 USA Toll F ree in USA 800-356-9526·Phone 608-274-4330 ·F ax 608-277-2516 ·3.C.Protocol for Generating an ATP Standard Curve (optional)It is a good practice to generate a standard curve using the same plate onwhich samples are assayed. We recommend ATP disodium salt (Cat.# P1132,Sigma Cat.# A7699 or GE Healthcare Cat.# 27-1006). The ATP standard curveshould be generated immediately prior to adding the CellTiter-Glo®Reagentbecause endogenous ATPase enzymes found in sera may reduce ATP levels.1.Prepare 1µM ATP in culture medium (100µl of 1µM ATP solution contains10–10moles ATP).2.Prepare serial tenfold dilutions of ATP in culture medium (1µM to 10nM;100µl contains 10–10to 10–12moles of ATP).3.Prepare a multiwell plate with varying concentrations of ATP standard in100µl medium (25µl for a 384-well plate).4.Add a volume of CellTiter-Glo®Reagent equal to the volume of ATPstandard present in each well.5.Mix contents for 2 minutes on an orbital shaker.6.Allow the plate to incubate at room temperature for 10 minutes to stabilizethe luminescent signal.7.Record luminescence.4.Appendix4.A.Overview of the CellTiter-Glo®AssayThe assay system uses the properties of a proprietary thermostable luciferase toenable reaction conditions that generate a stable “glow-type” luminescentsignal while simultaneously inhibiting endogenous enzymes released duringcell lysis (e.g., ATPases). Release of ATPases will interfere with accurate ATPmeasurement. Historically, firefly luciferase purified from Photinus pyralis(LucPpy) has been used in reagents for ATP assays (1,4–7). However, it hasonly moderate stability in vitro and is sensitive to its chemical environment,including factors such as pH and detergents, limiting its usefulness fordeveloping a robust homogeneous ATP assay. Promega has successfullydeveloped a stable form of luciferase based on the gene from another firefly,Photuris pennsylvanica(LucPpe2), using an approach to select characteristics thatimprove performance in ATP assays. 
The unique characteristics of this mutant(LucPpe2m) enabled design of a homogeneous single-reagent-addition approachto perform ATP assays with cultured cells. Properties of the CellTiter-Glo®Reagent overcome the problems caused by factors, such as ATPases, thatinterfere with ATP measurement in cell extracts. The reagent is physicallyrobust and provides a sensitive and stable luminescent output.Promega Corporation·2800 Woods Hollow Road ·Madison, WI 53711-5399 USA Toll F ree in USA 800-356-9526·Phone 608-274-4330 ·F ax 608-277-2516 ·4.A.Overview of the CellTiter-Glo®Assay (continued)Sensitivity and Linearity:The ATP-based detection of cells is more sensitivethan other methods (8–10). In experiments performed by Promega scientists,the luminescent signal from 50HEK293 cells is greater than three standarddeviations above the background signal from serum-supplemented mediumwithout cells. There is a linear relationship (r2= 0.99) between the luminescentsignal and the number of cells from 0 to 50,000 cells per well in the 96-wellformat. The luminescence values in Figure 2 were recorded after 10minutes ofincubation at room temperature to stabilize the luminescent signal as describedin Section3.B. Incubation of the same 96-well plate used in the experimentshown in Figure 2 for 360minutes at room temperature had little effect on therelationship between luminescent signal and number of cells (r2= 0.99).Speed:The homogeneous procedure to measure ATP using the CellTiter-Glo®Assay is quicker than other ATP assay methods that require multiple steps toextract ATP and measure luminescence. The CellTiter-Glo®Assay also is fasterthan other commonly used methods to measure the number of viable cells(such as MTT, alamarBlue®or Calcein-AM) that require prolonged incubationsteps to enable the cells’ metabolic machinery to convert indicator moleculesinto a detectable signal.4.B.Additional ConsiderationsTemperature:The intensity and decay rate of the luminescent signal from theCellTiter-Glo®Assay depends on the luciferase reaction rate. Environmentalfactors that affect the luciferase reaction rate will change the intensity andstability of the luminescent signal. Temperature is one factor that affects therate of this enzymatic assay and thus the light output. For consistent results,equilibrate assay plates to a constant temperature before performing the assay.Transferring eukaryotic cells from 37°C to room temperature has little effect onATP content (5). We have demonstrated that removing cultured cells from a37°C incubator and allowing them to equilibrate to 22°C for 1–2 hours hadlittle effect on ATP content. For batch-mode processing of multiple assayplates, take precautions to ensure complete temperature equilibration. Platesremoved from a 37°C incubator and placed in tall stacks at room temperaturewill require longer equilibration than plates arranged in a single layer.Insufficient equilibration may result in a temperature gradient effect betweenwells in the center and at the edge of the plates. The temperature gradientpattern also may depend on the position of the plate in the stack.Promega Corporation·2800 Woods Hollow Road ·Madison, WI 53711-5399 USA Toll F ree in USA 800-356-9526·Phone 608-274-4330 ·F ax 608-277-2516 ·Chemicals:The chemical environment of the luciferase reaction affects theenzymatic rate and thus luminescence intensity. 
Differences in luminescenceintensity have been observed using different types of culture media and sera.The presence of phenol red in culture medium should have little impact onluminescence output. Assaying 0.1µM ATP in RPMI medium without phenolred resulted in ~5% increase in luminescence output (in relative light units[RLU]) compared to assays in RPMI containing the standard concentration ofphenol red, whereas assays in RPMI medium containing twice the normalconcentration of phenol red showed a ~2% decrease in luminescence.Solvents for the various test compounds may interfere with the luciferasereaction and thus the light output from the assay. Interference with theluciferase reaction can be detected by assaying a parallel set of control wellscontaining medium without cells. Dimethylsulfoxide (DMSO), commonly usedas a vehicle to solubilize organic chemicals, has been tested at finalconcentrations of up to 2% in the assay and only minimally affects light output.Plate Recommendations:We recommend using standard opaque-walledmultiwell plates suitable for luminescence measurements. Opaque-walledplates with clear bottoms to allow microscopic visualization of cells also maybe used; however, these plates will have diminished signal intensity andgreater cross talk between wells. Opaque white tape may be used to decreaseluminescence loss and cross talk.Cellular ATP Content:Different cell types have different amounts of ATP,and values reported for the ATP level in cells vary considerably (1,4,11–13).Factors that affect the ATP content of cells may affect the relationship betweencell number and luminescence. Anchorage-dependent cells that undergocontact inhibition at high densities may show a change in ATP content per cellat high densities, resulting in a nonlinear relationship between cell numberand luminescence. Factors that affect the cytoplasmic volume or physiology ofcells also will affect ATP content. For example, oxygen depletion is one factorknown to cause a rapid decrease in ATP (1).Promega Corporation·2800 Woods Hollow Road ·Madison, WI 53711-5399 USA Toll F ree in USA 800-356-9526·Phone 608-274-4330 ·F ax 608-277-2516 ·4.B.Additional Considerations (continued)Mixing:Optimal assay performance is achieved when the CellTiter-Glo®Reagent is mixed completely with the cultured cells. Suspension cell lines (e.g., Jurkat cells) generally require less mixing to achieve lysis and extract ATP than adherent cells (e.g., L929 cells). Tests were done to evaluate the effect ofshaking the plate after adding the CellTiter-Glo® Reagent. Suspension cellscultured in multiwell plates showed only minor differences in light outputwhether or not the plates were shaken after adding the CellTiter-Glo®Reagent.Adherent cells are more difficult to lyse and show a substantial differencebetween shaken and nonshaken plates.Several additional parameters related to reagent mixing include the force ofdelivery of CellTiter-Glo®Reagent, sample volume and dimensions of the well.All of these factors may affect assay performance. The degree of reagent mixing required may be affected by the method used to add the CellTiter-Glo®Reagent to the assay plates. Automated pipetting devices using a greater or lesser force of fluid delivery may affect the degree of subsequent mixing required.Complete reagent mixing in 96-well plates should be achieved using orbitalplate shaking devices built into many luminometers and the recommended2-minute shaking time. 
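As a worked illustration of the standard-curve procedure in Section 3.C above, the short script below lays out the tenfold ATP dilution series and fits a log-log line to hypothetical luminescence readings; an unknown sample's ATP content can then be interpolated from its RLU value. This is only a sketch: the RLU numbers and function names are placeholders, not Promega data.

```python
# Illustrative sketch only: the tenfold ATP dilution series of Section 3.C
# (1 uM -> 10 nM in culture medium) and a log-log standard-curve fit to
# luminescence readings. The RLU values below are placeholders, not real data.
import math

atp_molar = [1e-6, 1e-7, 1e-8]        # 1 uM, 100 nM, 10 nM standards
rlu       = [2.1e6, 2.0e5, 1.9e4]     # hypothetical luminometer readings

# Least-squares fit of log10(RLU) = slope * log10([ATP]) + intercept
xs = [math.log10(c) for c in atp_molar]
ys = [math.log10(r) for r in rlu]
n = len(xs)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean

def atp_from_rlu(sample_rlu):
    """Interpolate an unknown sample's ATP concentration (M) from its RLU reading."""
    return 10 ** ((math.log10(sample_rlu) - intercept) / slope)

print(f"slope = {slope:.2f}, estimated [ATP] at 5e5 RLU = {atp_from_rlu(5e5):.2e} M")
```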
Extended Gloss Overlaps as a Measure of Semantic Relatedness

Satanjeev Banerjee, Carnegie Mellon University, Pittsburgh, PA 15213, satanjeev.banerjee@
Ted Pedersen, University of Minnesota, Duluth, MN 55812, tpederse@

Abstract

This paper presents a new measure of semantic relatedness between concepts that is based on the number of shared words (overlaps) in their definitions (glosses). This measure is unique in that it extends the glosses of the concepts under consideration to include the glosses of other concepts to which they are related according to a given concept hierarchy. We show that this new measure reasonably correlates to human judgments. We introduce a new method of word sense disambiguation based on extended gloss overlaps, and demonstrate that it fares well on the SENSEVAL-2 lexical sample data.

1 Introduction

Human beings have an innate ability to determine if two concepts are related. For example, most would agree that the automotive senses of car and tire are related while car and tree are not. However, assigning a value that quantifies the degree to which two concepts are related proves to be more difficult [Miller and Charles, 1991]. In part, this is because relatedness is a very broad notion. For example, two concepts can be related because one is a more general instance of the other (e.g., a car is a kind of vehicle) or because one is a part of another (e.g., a tire is a part of a car).

This paper introduces extended gloss overlaps, a measure of semantic relatedness that is based on information from a machine readable dictionary. In particular, this measure takes advantage of hierarchies or taxonomies of concepts as found in resources such as the lexical database WordNet [Fellbaum, 1998]. Concepts are commonly represented in dictionaries by word senses, each of which has a definition or gloss that briefly describes its meaning. Our measure determines how related two concepts are by counting the number of shared words (overlaps) in the word senses of the concepts, as well as in the glosses of words that are related to those concepts according to the dictionary. These related concepts are explicitly encoded in WordNet as relations, but can be found in any dictionary via synonyms, antonyms, or also-see references provided for a word sense.

To our knowledge, this work represents the first attempt to define a quantitative measure of relatedness between two concepts based on their dictionary definitions.

This paper begins with a brief description of WordNet, which was used in developing our measure. Then we introduce the extended gloss overlap measure, and present two distinct evaluations. First, we conduct a comparison to previous human studies of relatedness and find that our measure has a correlation of at least 0.6 with human judgments. Second, we introduce a word sense disambiguation algorithm that assigns the most appropriate sense to a target word in a given context based on the degree of relatedness between the target and its neighbors. We find that this technique is more accurate than all but one system that participated in the SENSEVAL-2 comparative word sense disambiguation exercise. Finally we present an extended analysis of our results and close with a brief discussion of related work.

2 WordNet

WordNet is a lexical database where each unique meaning of a word is represented by a synonym set or synset. Each synset has a gloss that defines the concept that it represents. For example the words car, auto, automobile, and motorcar constitute a single synset that has the following gloss: four wheel motor vehicle, usually propelled by an
internal combustion engine. Many glosses have examples of usages associated with them, such as "he needs a car to get to work."

Synsets are connected to each other through explicit semantic relations that are defined in WordNet. These relations only connect word senses that are used in the same part of speech. Noun synsets are connected to each other through hypernym, hyponym, meronym, and holonym relations.

If one noun synset is connected to another through the is–a–kind–of relation, then the more general synset is said to be a hypernym of the more specific one, and the more specific synset is a hyponym of the more general one. For example the synset containing car is a hypernym of the synset containing hatchback, and hatchback is a hyponym of car. If a noun synset is connected to another noun synset through the is–a–part–of relation, then the part is said to be a meronym of the whole, and the whole a holonym of the part. For example the synset containing accelerator is a meronym of car, and car is a holonym of accelerator. A noun synset is related to an adjective synset through the attribute relation when the adjective is a value of the noun. For example the adjective synset standard is a value of the noun synset measure.

Taxonomic or is–a relations also exist for verb synsets. Verb synset A is a hypernym of verb synset B if to B is one way to A; synset B is then called a troponym of A. For example the verb synset containing the word operate is a hypernym of drive, since to drive is one way to operate. Conversely drive is a troponym of operate. The troponym relation for verbs is analogous to the hyponym relation for nouns, and henceforth we shall use the term hyponym instead of the term troponym.

Adjective synsets are related to each other through the similar–to relation. For example the synset containing the adjective last is said to be similar to the synset containing the adjective dying. Verb and adjective synsets are also related to each other through cross–reference also–see links. For example, the adjectives accessible and convenient are related through also–see links.

While there are other relations in WordNet, those described above make up more than 93% of the total number of links in WordNet. These are the relations we have employed in the extended gloss overlap measure.

3 The Extended Gloss Overlap Measure

Gloss overlaps were introduced by [Lesk, 1986] to perform word sense disambiguation. The Lesk Algorithm assigns a sense to a target word in a given context by comparing the glosses of its various senses with those of the other words in the context. That sense of the target word whose gloss has the most words in common with the glosses of the neighboring words is chosen as its most appropriate sense.

For example, consider the glosses of car and tire: four wheel motor vehicle usually propelled by an internal combustion engine and hoop that covers a wheel, usually made of rubber and filled with compressed air. The relationship between these concepts is shown in that their glosses share the content word wheel. However, they share no content words with the gloss of tree: a tall perennial woody plant having a main trunk and branches forming a distinct elevated crown.
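To make this kind of gloss comparison concrete, a minimal sketch in plain Python follows, using the three glosses quoted above. The stop-word set and whitespace tokenisation are rough illustrative stand-ins, not the exact preprocessing used by the measure.

```python
# Naive gloss-overlap check in the spirit of the original Lesk idea:
# count the content words that two glosses have in common.

STOP_WORDS = {"a", "an", "the", "of", "by", "to", "in", "and", "or", "that",
              "usually", "for", "with", "is", "are", "having"}

def content_words(gloss):
    """Lower-case the gloss and drop stop words and trailing punctuation."""
    return {w.strip(".,") for w in gloss.lower().split()} - STOP_WORDS

def shared_words(gloss1, gloss2):
    """Return the set of content words the two glosses share."""
    return content_words(gloss1) & content_words(gloss2)

car  = "four wheel motor vehicle usually propelled by an internal combustion engine"
tire = "hoop that covers a wheel usually made of rubber and filled with compressed air"
tree = ("a tall perennial woody plant having a main trunk and branches "
        "forming a distinct elevated crown")

print(shared_words(car, tire))   # {'wheel'} -> some evidence of relatedness
print(shared_words(car, tree))   # set()     -> no overlap at all
```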
The original Lesk Algorithm only considers overlaps among the glosses of the target word and those that surround it in the given context. This is a significant limitation in that dictionary glosses tend to be fairly short and do not provide sufficient vocabulary to make fine grained distinctions in relatedness. As an example, the average length of a gloss in WordNet is just seven words.

The extended gloss overlap measure expands the glosses of the words being compared to include glosses of concepts that are known to be related to the concepts being compared. Our measure takes as input two concepts (represented by two WordNet synsets) and outputs a numeric value that quantifies their degree of semantic relatedness. In the sections that follow, we describe the foundations of the measure and how it is computed.

3.1 Using Glosses of Related Senses

There are two fundamental premises to the original Lesk Algorithm. First, words that appear together in a sentence will be used in related senses. Second, and most relevant to our measure, the degree to which senses are related can be identified by the number of overlaps in their glosses. In other words, the more related two senses are, the more words their glosses will share.

WordNet provides explicit semantic relations between synsets, such as through the is–a or has–part links. However such links do not cover all possible relations between synsets. For example, WordNet encodes no direct link between the synsets car and tire, although they are clearly related. We observe however that the glosses of these two synsets have words in common. Similar to Lesk's premise, we assert that such overlaps provide evidence that there is an implicit relation between those synsets. Given such a relation, we further conclude that synsets explicitly related to car are thereby also related to synsets explicitly related to tire. For example, we conclude that the synset vehicle (which is the hypernym synset of car) is related to the synset hoop (which is the hypernym synset of tire). Thus, our measure combines the advantages of gloss overlaps with the structure of a concept hierarchy to create an extended view of relatedness between synsets.

We base our measure on the idea of an extended set of comparisons. When measuring the relatedness between two input synsets, we not only look for overlaps between the glosses of those synsets, but also between the glosses of the hypernym, hyponym, meronym, holonym and troponym synsets of the input synsets, as well as between synsets related to the input synsets through the relations of attribute, similar–to and also–see. Not all of these relations are equally helpful, and the optimum choice of relations to use for comparisons is possibly dependent on the application in which the overlaps measure is being employed. Section 6 compares the relative efficacy of these relations when our measure of relatedness is applied to the task of word sense disambiguation.

3.2 Scoring Mechanism

We introduce a novel way of finding and scoring the overlaps between two glosses. The original Lesk Algorithm compares the glosses of a pair of concepts and computes a score by counting the number of words that are shared between them.
This scoring mechanism does not differentiate between single word and phrasal overlaps and effectively treats each gloss as a "bag of words". For example, it assigns a score of 3 to the concepts drawing paper and decal, which have the glosses paper that is specially prepared for use in drafting and the art of transferring designs from specially prepared paper to a wood or glass or metal surface. There are three words that overlap: paper and the two–word phrase specially prepared.

There is a Zipfian relationship [Zipf, 1935] between the lengths of phrases and their frequencies in a large corpus of text. The longer the phrase, the less likely it is to occur multiple times in a given corpus. A phrasal n-word overlap is a much rarer occurrence than a single word overlap. Therefore, we assign an n-word overlap the score of n². This gives an n-word overlap a score that is greater than the sum of the scores assigned to those n words if they had occurred in two or more phrases, each less than n words long. For the above gloss pair, we assign the overlap paper a score of 1 and specially prepared a score of 4, leading to a total score of 5. Note that if the overlap was the 3–word phrase specially prepared paper, then the score would have been 9.

Thus, our overlap detection and scoring mechanism can be formally defined as follows: When comparing two glosses, we define an overlap between them to be the longest sequence of one or more consecutive words that occurs in both glosses such that neither the first nor the last word is a function word, that is a pronoun, preposition, article or conjunction. If two or more such overlaps have the same longest length, then the overlap that occurs earliest in the first string being compared is reported. Given two strings, the longest overlap between them is detected, removed and in its place a unique marker is placed in each of the two input strings. The two strings thus obtained are then again checked for overlaps, and this process continues until there are no longer any overlaps between them. The sizes of the overlaps thus found are squared and added together to arrive at the score for the given pair of glosses.

3.3 Computing Relatedness

The extended gloss overlap measure computes the relatedness between two input synsets A and B by comparing the glosses of synsets that are related to A and B through explicit relations provided in WordNet.

We define RELS as a (non-empty) set of relations that consists of one or more of the relations described in Section 2. That is, RELS ⊆ {r | r is a relation defined in WordNet}. Suppose each relation R (R ∈ RELS) has a function of the same name that accepts a synset as input and returns the gloss of the synset (or synsets) related to the input synset by the designated relation. For example, assume hype represents the hypernym relation. Then hype(A) returns the gloss of the hypernym synset of A. R can also represent the gloss "relation" such that gloss(A) returns the gloss of synset A, and the example "relation" such that example(A) returns the example string associated with synset A. If more than one synset is related to the input synset through the same relation, their glosses are concatenated and returned.
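Before continuing, here is a hedged sketch of the overlap scoring just defined in Section 3.2, together with a toy gloss-gathering helper of the kind just described. The tokenisation, the function-word list, and the dictionary-based stand-in for WordNet are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the Section 3.2 scoring mechanism plus a toy gloss-gathering
# helper.  The function-word list and whitespace tokenisation are simplified.

FUNCTION_WORDS = {"a", "an", "the", "of", "to", "in", "for", "from",
                  "and", "or", "that", "is", "by", "with"}

def _longest_overlap(w1, w2):
    """Longest common consecutive word sequence that neither starts nor ends
    on a function word; returns (start1, start2, length) or None.  Ties go
    to the earliest match in the first gloss."""
    best = (0, 0, 0)
    for i in range(len(w1)):
        if w1[i] in FUNCTION_WORDS:          # an overlap may not start here
            continue
        for j in range(len(w2)):
            if w1[i] != w2[j]:
                continue
            k = 0
            while i + k < len(w1) and j + k < len(w2) and w1[i + k] == w2[j + k]:
                k += 1
            while k > 0 and w1[i + k - 1] in FUNCTION_WORDS:
                k -= 1                       # ...nor end on one
            if k > best[2]:
                best = (i, j, k)
    return best if best[2] > 0 else None

def score(gloss1, gloss2):
    """Find maximal phrasal overlaps one at a time, replace each with unique
    markers so it cannot be reused, and return the sum of squared lengths."""
    w1, w2 = gloss1.lower().split(), gloss2.lower().split()
    total, n = 0, 0
    while True:
        found = _longest_overlap(w1, w2)
        if found is None:
            return total
        i, j, k = found
        total += k * k
        w1[i:i + k] = ["<marker-a-%d>" % n]  # unique, so it never matches again
        w2[j:j + k] = ["<marker-b-%d>" % n]
        n += 1

# The two glosses from the drawing paper / decal example above:
g1 = "paper that is specially prepared for use in drafting"
g2 = ("the art of transferring designs from specially prepared paper "
      "to a wood or glass or metal surface")
print(score(g1, g2))   # 5 = 2*2 for 'specially prepared' + 1*1 for 'paper'

def related_gloss(synsets, synset_id, relation):
    """Concatenate the glosses of every synset reachable from synset_id via
    the given relation, or return "" if there are none.  `synsets` is a toy
    dictionary (see the next sketch), not the real WordNet database."""
    if relation == "gloss":
        return synsets[synset_id]["gloss"]
    related = synsets[synset_id].get(relation, [])
    return " ".join(synsets[r]["gloss"] for r in related)
```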
We perform this concatenation because we do not wish to differentiate between the different synsets that are all related to the input synset through a particular relation, but instead are only interested in all their definitional glosses. If no synset is related to the input synset by the given relation then the null string is returned.

Next, form a non–empty set of pairs of relations from the set of relations above. The only constraint in forming such pairs is that if the pair (R1, R2) is chosen, R1, R2 ∈ RELS, then the pair (R2, R1) must also be chosen, so that the relatedness measure is reflexive. That is, relatedness(A, B) = relatedness(B, A). Thus, we define the set RELPAIRS as follows:

RELPAIRS = {(R1, R2) | R1, R2 ∈ RELS; if (R1, R2) ∈ RELPAIRS, then (R2, R1) ∈ RELPAIRS}

Finally, assume that score is a function that accepts as input two glosses, finds the phrases that overlap between them and returns a score as described in the previous section. Given all of the above, the relatedness score between the input synsets A and B is computed as follows:

relatedness(A, B) = Σ score(R1(A), R2(B)), summed over all pairs (R1, R2) ∈ RELPAIRS

Our relatedness measure is based on the set of all possible pairs of relations from the list of relations described in Section 3.1. For purposes of illustration, assume that our set of relations is RELS = {gloss, hype, hypo} (where hype and hypo are contractions of hypernym and hyponym respectively). Further assume that our set of relation pairs is RELPAIRS = {(gloss, gloss), (hype, hype), (hypo, hypo), (hype, gloss), (gloss, hype)}. Then the relatedness between synsets A and B is computed as follows:

relatedness(A, B) = score(gloss(A), gloss(B)) + score(hype(A), hype(B)) + score(hypo(A), hypo(B)) + score(hype(A), gloss(B)) + score(gloss(A), hype(B))

Observe that due to our pair selection constraint as described above, relatedness(A, B) is indeed the same as relatedness(B, A).

4 Comparison to Human Judgements

Our comparison to human judgments is based on three previous studies. [Rubenstein and Goodenough, 1965] presented human subjects with 65 noun pairs and asked them how similar they were on a scale from 0.0 to 4.0. [Miller and Charles, 1991] took a 30 pair subset of this data and repeated this experiment, and found results that were highly correlated (.97) to the previous study. The results from the 30 pair set common to both studies were used again by [Budanitsky and Hirst, 2001] in an evaluation of five automatic measures of semantic relatedness that will be mentioned in Section 7. They report that all of the measures fared relatively well, with the lowest correlation being .74 and the highest .85. When comparing our measure to these 30 words, we find that it has a correlation of .67 to the Miller and Charles human study, and one of .60 to the Rubenstein and Goodenough experiment.

We do not find it discouraging that the correlation of extended gloss overlaps is lower than those reported by Budanitsky and Hirst for other measures. In fact, given the complexity of the task, it is noteworthy that it demonstrates some correlation with human judgement. The fact that the test set contains only 30 word pairs is a drawback of human evaluation, where rigorous studies are by necessity limited to a small number of words. Automatic measures can be evaluated relative to very large numbers of words, and we believe such an evaluation is an important next step in order to establish where differences lie among such measures. As a final point of concern, concepts can be related in many ways, and it is possible that a human and an automatic measure could rely on different yet equally well motivated criteria to arrive at diverging judgements.

5 Application to WSD

We have developed an approach to word sense disambiguation based on the use of the extended gloss overlap measure.
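As a concrete reference point for the algorithm described next, the relatedness computation of Section 3.3 might be sketched as follows. It reuses the score and related_gloss sketches above; the miniature synset dictionary (the IDs and simplified glosses) is invented purely for illustration and is not WordNet data.

```python
# Sketch of relatedness(A, B) from Section 3.3: sum the overlap scores over
# every relation pair in RELPAIRS.  Reuses score() and related_gloss() from
# the previous sketch; the synset data below is a toy stand-in for WordNet.

def relatedness(synsets, a, b, relpairs):
    return sum(score(related_gloss(synsets, a, r1), related_gloss(synsets, b, r2))
               for r1, r2 in relpairs)

SYNSETS = {   # invented IDs and simplified glosses, for illustration only
    "car#n#1":     {"gloss": "four wheel motor vehicle propelled by an engine",
                    "hype": ["vehicle#n#1"]},
    "vehicle#n#1": {"gloss": "a conveyance that transports people or objects"},
    "tire#n#1":    {"gloss": "hoop that covers a wheel made of rubber",
                    "hype": ["hoop#n#1"]},
    "hoop#n#1":    {"gloss": "a rigid circular band of metal or wood"},
}

# The illustrative RELPAIRS used in the text: gloss, hype and hypo pairs.
RELPAIRS = [("gloss", "gloss"), ("hype", "hype"), ("hypo", "hypo"),
            ("hype", "gloss"), ("gloss", "hype")]

print(relatedness(SYNSETS, "car#n#1", "tire#n#1", RELPAIRS))   # 1, from "wheel"
print(relatedness(SYNSETS, "tire#n#1", "car#n#1", RELPAIRS))   # same value
```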
In our approach, a window of context around the target word is selected, and a set of candidate senses is identified for each content word in the window. Assume that the window of context consists of n words denoted by w1 through wn, where the target word is wt. Further let |wi| denote the number of candidate senses of word wi, and let these senses be denoted by si,1 through si,|wi|.

Next we assign to each possible sense of the target word a SenseScore, computed by adding together the relatedness scores obtained by comparing the sense of the target word in question with every sense of every non–target word in the window of context. The SenseScore for sense st,k is computed as follows:

SenseScore(st,k) = Σ over all i ≠ t, Σ over j = 1 ... |wi| of relatedness(st,k, si,j)

That sense with the highest SenseScore is judged to be the most appropriate sense for the target word. If there are on average a senses per word and the window of context is n words long, there are on the order of a²(n − 1) pairs of synsets to be compared, which increases linearly with n.

5.1 Experimental Data

Our evaluation data is taken from the English lexical sample task of SENSEVAL–2 [Edmonds and Cotton, 2001]. This was a comparative evaluation of word sense disambiguation systems that resulted in a large set of results and data that are now freely available to the research community.

This data consists of 4,328 instances, each of which contains a sentence with a single target word to be disambiguated, and one or two surrounding sentences that provide additional context. A human judge has labeled each target word with the most appropriate WordNet sense for that context. A word sense disambiguation system is given these same instances (minus the human assigned senses) and must output what it believes to be the most appropriate senses for each of the target words. There are 73 distinct target words: 29 nouns, 29 verbs, and 15 adjectives, and the part of speech of the target words is known to the systems.

5.2 Experimental Results

For every instance, function words are removed and then a window of words is defined such that the target word is at the center (if possible). Next, for every word in the window, candidate senses are picked by including the synsets in WordNet that the word belongs to, as well as those that an uninflected form of the word belongs to (if any). Given these candidate senses, the algorithm described above finds the most appropriate sense of the target word.

It is possible that there be a tie among multiple senses for the highest score for a word. In this case, all those senses are reported as answers and partial credit is given if one of them proves to be correct. This would be appropriate if a word were truly ambiguous in a context, or if the meanings were very closely related and it was not possible to distinguish between them. It is also possible that no sense gets more than a score of 0; in this case, no answer is reported since there is no evidence to choose one sense over another.

Given the answers generated by the algorithm, we compare them with the human decided answers and compute precision (the number of correct answers divided by the number of answers reported) and recall (the number of correct answers divided by the number of instances). These two values can be summarized by the F–measure, which is the harmonic mean of precision and recall:

F–measure = (2 × Precision × Recall) / (Precision + Recall)
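Before turning to the results in Table 1, the window-based disambiguation procedure described above can be sketched as follows. The candidate-sense lookup and the two-argument relatedness callable are illustrative stand-ins; here they are wired to the toy data from the earlier sketches, not to real WordNet senses.

```python
# Sketch of the Section 5 disambiguation step: score every candidate sense
# of the target word against every sense of every other word in the window.

def sense_scores(window, target_index, candidate_senses, related):
    """Return {sense: SenseScore} for the target word in the window."""
    scores = {}
    for target_sense in candidate_senses(window[target_index]):
        total = 0
        for j, word in enumerate(window):
            if j == target_index:
                continue                      # skip the target word itself
            for other_sense in candidate_senses(word):
                total += related(target_sense, other_sense)
        scores[target_sense] = total
    return scores

def disambiguate(window, target_index, candidate_senses, related):
    """Report the sense(s) with the highest SenseScore; ties are all
    reported, and no answer is given when every score is zero."""
    scores = sense_scores(window, target_index, candidate_senses, related)
    if not scores or max(scores.values()) == 0:
        return None
    best = max(scores.values())
    return [sense for sense, value in scores.items() if value == best]

# Example wiring with the toy data from the previous sketches (hypothetical):
senses = {"car": ["car#n#1"], "tire": ["tire#n#1"]}
related = lambda a, b: relatedness(SYNSETS, a, b, RELPAIRS)
print(disambiguate(["car", "tire"], 0, lambda w: senses.get(w, []), related))
```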
recall:Table1lists the precision,recall and F–measure for all the S ENSEVAL-2words when disambiguated using a window size of3.The overall results for our approach are shown as Overall*,and these are also broken down based on the part of speech(POS)of the target word.This table also displays re-sults from other baseline or representative systems.The Orig-inal Lesk results are based on utilizing the glosses of only the input synsets and nothing else.While this does not exactly replicate the original Lesk Algorithm it is quite similar.The random results reflect the accuracies obtained by simply se-lecting randomly from the candidate senses.The Sval–First,Sval–Second,and Sval–Third results are from the top three most accurate fully automatic unsupervised systems in the S ENSEVAL-2exercise.This is the class of sys-tems most directly comparable to our own,since they require no human intervention and do not use any manually created training examples.These results show that our approach was considerably more accurate than all but one of the participat-ing systems.These results are significant because they are based on a very simple algorithm that relies on assigning relatedness scores to the senses of a target word and the senses of its im-mediately adjacent neighbors.While the disambiguation re-sults could be improved via the combination of various tech-niques,our focus is on developing the extended gloss overlap measure of relatedness as a general tool for Natural Language Processing and Artificial Intelligence.6DiscussionTable1shows that the disambiguation results obtained using the extended gloss overlap measure of semantic relatedness are significantly better than both the random and Original Lesk baselines.In the Original Lesk Algorithm,relatedness between two synsets is measured by considering overlaps be-tween the glosses of the candidate senses of the target wordand its neighbors.By adding the glosses of related synsets, the results improve by89%relative(16.3%absolute).This shows that overlaps between glosses of synsets explicitly re-lated to the input synsets provide almost as much evidence about the implicit relation between the input synsets as do overlaps between the glosses of the input synsets themselves.Table1also breaks down the precision,recall and F–measure according to the part of speech of the target word. 
Observe that the noun target words are the easiest to disambiguate, followed by the adjective target words. The verb target words prove to be the hardest to disambiguate. We attribute this to the fact that the number of senses per target word is much smaller for the nouns and adjectives than it is for the verbs. Noun and adjective target words have less than 5 candidate senses each on average, whereas verbs have close to 16. Thus, when disambiguating verbs there are more choices to be made and more chances of errors.

The results in Table 1 are based on a 3 word window of context. In other experiments we used window sizes of 5, 7, 9 and 11. Although this increase in window size provides more data to the disambiguation algorithm, our experiments show that this does not significantly improve disambiguation results. This suggests that words that are in the immediate vicinity of the target word are most useful for disambiguation, and that using larger context windows is either adding noise or redundant data. The fact that small windows are best corresponds with earlier studies on human subjects that showed that humans often only require a window of one or two surrounding words to disambiguate a target word [Choueka and Lusignan, 1985].

We also tried to normalize the overlap scores by the maximum score that two glosses can generate, but that did not help performance. We believe that the difference between the sizes of various glosses in terms of number of words is small enough to render normalization unnecessary.

6.1 Evaluating Individual Relation Pairs

Our measure of relatedness utilizes pairs of relations picked from the list of relations in Section 3.1. In this section we attempt to quantify the relative effectiveness of these individual relation pairs. Specifically, given a set of relations RELS, we create all possible minimal relation pair sets, where a minimal relation pair set is defined as a set that contains either exactly one relation pair of the form (R, R), or exactly two relation pairs (R1, R2) and (R2, R1), where R1 ≠ R2. For example, {(gloss, gloss)} and {(hype, gloss), (gloss, hype)} are both minimal relation pair sets.

We evaluate each of these minimal relation pair sets by performing disambiguation using only the given minimal relation pair set and computing the resulting precision, recall and F–measure. The higher the F–measure, the "better" the quality of the evidence provided by gloss overlaps from that minimal relation pair set. In effect we are decomposing the extended gloss overlap measure into its individual pieces and assessing how each of those pieces performs individually.
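For illustration, the minimal relation pair sets can be enumerated mechanically, as in the hedged sketch below. The relation list used here is an illustrative subset, not the exact inventory available for any particular part of speech.

```python
# Enumerate minimal relation pair sets as defined in Section 6.1: either a
# single reflexive pair (R, R), or a pair (R1, R2) together with its mirror
# (R2, R1), so each set on its own keeps the measure reflexive.

from itertools import combinations

def minimal_relation_pair_sets(relations):
    singles = [[(r, r)] for r in relations]
    mixed = [[(r1, r2), (r2, r1)] for r1, r2 in combinations(relations, 2)]
    return singles + mixed

RELATIONS = ["gloss", "example", "hype", "hypo", "mero"]   # illustrative subset
for pair_set in minimal_relation_pair_sets(RELATIONS)[:4]:
    print(pair_set)
# Each set can then be plugged in as RELPAIRS on its own; Table 2 ranks the
# sets by the F-measure they yield when used for disambiguation.
```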
Recall that each part of speech has a different set of relations associated with it. The difference in the numbers and types of relations available for the three parts of speech leads us to expect that the optimal minimal relation pair sets will differ with the part of speech of the input synsets. Table 2 lists the top 5 minimal relation pair sets for target words belonging to the three parts of speech, where relation pair sets are ranked on the F–measure achieved by using them in disambiguation. Note that in this table, hypo, mero, also, attr, and hype stand for the relations hyponym, meronym, also–see, attribute, and hypernym respectively. Also in the table, the relation pair R1-R2 refers to the minimal relation pair set {(R1, R2), (R2, R1)} if R1 ≠ R2 and {(R1, R2)} otherwise.

Table 2: Best Relation Pair Sets

Nouns
Relation pair     Prec.   Recall   F-Meas.
hypo-mero         0.263   0.091    0.136
hypo-hypo         0.168   0.111    0.134
gloss-mero        0.272   0.087    0.132
gloss-gloss       0.161   0.108    0.129
example-mero      0.314   0.074    0.120

Adjectives
Relation pair     Prec.   Recall   F-Meas.
also-gloss        0.220   0.084    0.122
attr-gloss        0.323   0.072    0.117
gloss-gloss       0.146   0.094    0.114
example-gloss     0.138   0.094    0.112
gloss-hype        0.164   0.083    0.110

Verbs
Relation pair     Prec.   Recall   F-Meas.
example-example   0.061   0.048    0.053
example-hype      0.060   0.046    0.052
hypo-hypo         0.061   0.042    0.050
gloss-hypo        0.053   0.046    0.049
example-gloss     0.054   0.045    0.049

Perhaps one of the most interesting observations is that no single minimal relation pair set achieves an F–measure even close to that achieved using all the relation pairs (0.42, 0.35, and 0.26 for nouns, adjectives, and verbs respectively), suggesting that there is no single relation pair that generates a lot of evidence for the relatedness of two synsets. This finding also implies that the richer the set of explicit relations between synsets in WordNet, the more accurate the overlap based measure of semantic relatedness will be. This fact is borne out by the comparatively high accuracy attained by nouns, which are the best developed portion of WordNet.

For nouns, Table 2 shows that comparisons between the glosses of the hyponyms and meronyms of the input synsets, and also between the glosses of the input synsets themselves, are most informative about the relatedness of the synsets. Interestingly, although both hyponyms and hypernyms make up the is–a hierarchy, the hypernym relation does not provide an equivalent amount of information. In WordNet, a noun synset usually has a single hypernym (parent) but many hyponyms (children), which implies that the hyponym relation provides more definitional glosses to the algorithm than the hypernym relation. This asymmetry also exists in the holonym–meronym pair of relations. Most noun synsets have fewer holonym (is–a–part–of) relations than meronyms (has–part), resulting in more glosses from the meronym relation. These observations further confirm that the accuracy of the relatedness measure depends at least partly on the number of glosses that we can access for a given pair of synsets. This finding also applies to adjectives.
Similarly for verbs,the hyponym relation again appears to be extremely useful.Interestingly,for all three parts of speech, the example“relation”(which simply returns the example string associated with the input synset)seems to provide use-ful information.This is in keeping with the S ENSEVAL–2 results where the addition of example strings to a Lesk–like baseline system improves recall from16%to23%.7Related WorkA number of measures of semantic relatedness have been pro-posed in recent years.Most of them rely on the noun taxon-omy of the lexical database WordNet.[Resnik,1995]aug-ments each synset in WordNet with an information content value derived from a large corpus of text.The measure of re-latedness between two concepts is taken to be the information content value of the most specific concept that the two con-cepts have in common.[Jiang and Conrath,1997]and[Lin, 1997]extend Resnik’s measure by scaling the common in-formation content values by those of the individual concepts. Our method of extended gloss overlaps is distinct in that it takes advantage of the information found in the glosses.The other measures rely on the structure of WordNet and corpus statistics.In addition,the measures above are all limited to relations between noun concepts,while extended gloss over-laps canfind relations between adjectives and verbs as well. 8ConclusionsWe have presented a new measure of semantic relatedness based on gloss overlaps.A pair of concepts is assigned a value of relatedness based on the number of overlapping words in their respective glosses,as well as the overlaps found in the glosses of concepts they are related to in a given concept hierarchy.We have evaluated this measure relative to human judgements and found it to be reasonably corre-lated.We have carried out a word sense disambiguation ex-periment with the S ENSEVAL-2lexical sample data.Wefind that disambiguation accuracy based on extended gloss over-laps is more accurate than all but one of the participating S ENSEVAL-2systems.AcknowledgementsThanks to Jason Rennie for his WordNet::QueryData mod-ule,and to Siddharth Patwardhan for useful discussions,ex-perimental help,and for integrating the extended gloss over-lap measure into his WordNet::Similarity module.Both of these modules are freely available from the Comprehensive Perl Archive Network().This work has been supported by a National Science Foundation Faculty Early CAREER Development award (#0092784)and NSF Grant no.REC–9979894.Any opin-ions,findings,conclusions,or recommendations expressed in this publications are those of the authors and do not necessar-ily reflect the views of the NSF or the official policies,either expressed or implied,of the sponsors or of the United States Government.References[Budanitsky and Hirst,2001]A.Budanitsky and G.Hirst.Semantic distance in WordNet:An experimental, application-oriented evaluation offive measures.In Work-shop on WordNet and Other Lexical Resources,Second meeting of the North American Chapter of the Association for Computational Linguistics,Pittsburgh,June2001. 
[Choueka and Lusignan, 1985] Y. Choueka and S. Lusignan. Disambiguation by short contexts. Computers and the Humanities, 19:147–157, 1985.

[Edmonds and Cotton, 2001] P. Edmonds and S. Cotton, editors. Proceedings of the Senseval-2 Workshop. Association for Computational Linguistics, Toulouse, France, 2001.

[Fellbaum, 1998] C. Fellbaum, editor. WordNet: An electronic lexical database. MIT Press, 1998.

[Jiang and Conrath, 1997] J. Jiang and D. Conrath. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of the International Conference on Research in Computational Linguistics, Taiwan, 1997.

[Lesk, 1986] M. Lesk. Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone. In Proceedings of SIGDOC '86, 1986.

[Lin, 1997] D. Lin. Using syntactic dependency as a local context to resolve word sense ambiguity. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, pages 64–71, Madrid, July 1997.

[Miller and Charles, 1991] G. Miller and W. G. Charles. Contextual correlates of semantic similarity. Language and Cognitive Processes, 6(1):1–28, 1991.

[Resnik, 1995] P. Resnik. Using information content to evaluate semantic similarity in a taxonomy. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, Montreal, August 1995.

[Rubenstein and Goodenough, 1965] H. Rubenstein and J. B. Goodenough. Contextual correlates of synonymy. Computational Linguistics, 8:627–633, 1965.

[Zipf, 1935] G. Zipf. The Psycho-Biology of Language. Houghton Mifflin, Boston, MA, 1935.
