Asymmetries in the production of lambda zero, cascade minus, and omega minus hyperons in 50


Analysis of the Components of Cimicifuga dahurica (Turcz.) Maxim. Alcohol Extract and Exploration of Its Mechanism of Promoting Skin Wound Healing Based on Network Pharmacology


LI Yanna, ZHANG Fuyuan, CHENG Weifeng, et al. Analysis of the Components of Cimicifuga dahurica (Turcz.) Maxim. Alcohol Extract and Exploration of Its Mechanism of Promoting Skin Wound Healing Based on Network Pharmacology[J]. Science and Technology of Food Industry, 2023, 44(24): 12−22. (in Chinese with English abstract). doi: 10.13386/j.issn1002-0306.2023050247

Invited Editor-in-Chief Column: Research and Development of Food-Medicine Homologous Health Foods such as Goji, Jujube, and Sea Buckthorn (guest editors: Fang Haitian, Tian Jinhu, Gong Guiping)

LI Yanna (1), ZHANG Fuyuan (1), CHENG Weifeng (2), SANG Yaxin (1), WANG Xianghong (1,*)
(1. College of Food Science and Technology, Hebei Agricultural University, Baoding 071000, Hebei, China; 2. Hebei Yaguo Food Co., Ltd., Baoding 071000, Hebei, China)

Abstract: To explore the physiological functions of Cimicifuga dahurica in depth, this study extracted its active components by ethanol extraction, analyzed the chemical composition by HPLC-MS, and investigated the mechanism by which the extract promotes skin wound healing using network pharmacology.

The results showed that 125 chemical components were identified in Cimicifuga dahurica, with terpenoids, phenylpropanoids, and ketones the most abundant; the main active components included cryptochlorogenic acid, chlorogenic acid, and atractylenolide III, each acting on one or more targets. EGFR, KDR, and F2 may be important targets in the healing-promotion process, and active components such as tangeretin, alpinetin, and 5-demethylnobiletin may play important roles in it, indicating that Cimicifuga dahurica promotes skin wound healing through multiple components acting on multiple targets. Cimicifuga dahurica may regulate the expression of related factors and proteins through the VEGF, EGFR, and TNF signaling pathways, thereby promoting the healing of skin wounds.

Aplastic Anemia (English PPT)


Physical Examination
1. Petechiae and ecchymoses are typical.
2. Pallor of the skin and mucous membranes.

Laboratory Studies
➢ 1. Blood Smear: Pancytopenia (anemia, leukopenia, and thrombocytopenia) is a universal presenting finding.
➢ Immunosuppression
➢ Supportive Care
➢ Other Therapies
A pathway for the diagnostic management of patients with aplastic anemia
D. Marrow smear in aplastic anemia. The marrow shows replacement of hematopoietic tissue by fat and only residual stromal and lymphoid cells.
Reference standard of diagnosis for AA
B. Aplastic anemia biopsy. This specimen shows few detectable hematopoietic cells, and those that can be seen (one small nest) are largely lymphocytes.
…hematopoietic cells increase.
▪ 4. Elimination of other diseases which may cause pancytopenia.

Estimating Production Functions


model favors this set of proxies because the firm jointly cultivates physical capital and intangible organizational knowledge, given market conditions. Whenever its market prospects are favorable, the firm has an incentive to expand physical capital and to improve its intangible organizational capital simultaneously. Similar to prior approaches, the estimation model permits resolution of transmission bias (controlling for the simultaneous determination of unobserved firm-level productivity and factor choices), survival bias (using exit-rule estimation on an unbalanced firm panel), and omitted-price bias (removing time-invariant demand components from otherwise confounded productivity estimates). The estimation model achieves the resolution of bias on the basis of a lean set of identifying assumptions with plausible implications for productivity evolution. Its assumptions and implications set the present estimation model apart from prior approaches.

First, identification of the production function does not have to rely on timing assumptions. Productivity shocks need not be fully known to the firm prior to the physical investment choice (Olley and Pakes 1996) or the variable input choice (Levinsohn and Petrin 2003). Instead, the asset model implies that variables related to a firm's market conditions, interacted with its physical investment, provide a natural source of identification for the productivity control function as long as an exit rule is estimated alongside. Estimation of an exit rule, as in Olley and Pakes (1996), is crucial if there are fixed costs of investment in organizational change, because the productivity control function for survivors changes strictly monotonically in its arguments only through the exit rule. So, an extended Olley-Pakes procedure is the estimation method of choice under an asset model of the firm. Beyond sector-level covariates, proxies to market conditions include the firm-specific mean characteristics of each firm's competitors.
When applied to a sample of medium-sized to large Brazilian manufacturing companies between 1986 and 1998, the extended Olley-Pakes procedure detects, and removes, frequently suspected biases. Bootstraps show that alternative estimators yield more volatile and less precise estimates than does extended Olley-Pakes estimation.

Second, observations with non-positive investment are permissible for estimation under the asset model of the firm. Non-positive net investments occur frequently in micro data. Common theory of the firm predicts that firms with higher marginal products of capital, and lower marginal products of other factors, invest more, so a restriction to a positive-investments-only sample is expected to result in capital coefficients that exceed those in the full sample. As a consequence, the positive-investments-only subsample does not reflect the average production technology, but an initially less capital-intensive technology. Estimates from the asset model of the firm confirm this prediction in the sample of Brazilian manufacturers.
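The transmission bias described above — unobserved productivity feeding into the firm's input choices — can be reproduced in a small simulation. All coefficients and the data-generating process below are invented for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
beta_k, beta_l = 0.4, 0.6          # true Cobb-Douglas elasticities (assumed)

omega = rng.normal(0.0, 1.0, n)    # unobserved firm-level productivity
log_k = 0.8 * omega + rng.normal(0.0, 1.0, n)   # capital responds to productivity
log_l = rng.normal(0.0, 1.0, n)                 # labor drawn independently here
log_y = beta_k * log_k + beta_l * log_l + omega + rng.normal(0.0, 0.1, n)

# Naive OLS of output on inputs ignores that omega sits in the error term
# and is correlated with log_k.
X = np.column_stack([np.ones(n), log_k, log_l])
coef, *_ = np.linalg.lstsq(X, log_y, rcond=None)
print(coef[1], coef[2])   # capital elasticity is biased well above 0.4
```

Because capital responds positively to productivity, the naive capital elasticity lands far above the true 0.4 while the labor elasticity stays near 0.6; removing exactly this simultaneity is what the control-function approach is for.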

Handbook of Element Abundance Data for Applied Geochemistry (original edition)


Handbook of Element Abundance Data for Applied Geochemistry. Compiled by Chi Qinghua and Yan Mingcai. Geological Publishing House, Beijing.

Synopsis: This handbook compiles the chemical compositions and element abundances of igneous rocks, sedimentary rocks, metamorphic rocks, soils, stream sediments, floodplain sediments, shallow-sea sediments, and the continental crust as proposed by different researchers in China and abroad. It also lists the certified values of the main Chinese geochemical reference materials commonly used in exploration geochemistry and environmental geochemistry. Everything provided is fundamental geochemical data, on the important geological media, that geochemical workers need to know.

The handbook is intended for researchers in geochemistry, petrology, exploration geochemistry, ecological, environmental and agricultural geochemistry, geological sample analysis and testing, mineral exploration, and basic geology; it can also be used by researchers in other fields of the Earth sciences.

Cataloguing in Publication (CIP) data: Handbook of Element Abundance Data for Applied Geochemistry / compiled by Chi Qinghua and Yan Mingcai. Beijing: Geological Publishing House, December 2007. ISBN 978-7-116-05536-0. III. Geochemical abundance – chemical elements – data – handbook. IV. P595-62. China Version Library CIP record no. (2007) 185917.

Editors: Wang Yongfeng, Chen Junzhong. Proofreader: Li Mei. Published by Geological Publishing House, 31 Xueyuan Road, Haidian District, Beijing 100083; tel.: (010) 82324508 (mail-order department); e-mail: zbs@; fax: (010) 82310759. Printed by Beijing Dida Color Printing Factory. Format: 889 mm × 1194 mm, 1/16; 10.25 printed sheets; 260,000 characters; print run 1–3,000 copies. First edition, first printing, Beijing, December 2007. Price: 28.00 yuan. ISBN 978-7-116-05536-0. (Suggestions or comments may be addressed to the publisher; copies with printing or binding defects will be exchanged by the publisher.)

On the Handbook of Element Abundance Data for Applied Geochemistry (in lieu of a preface)

Geochemical element abundance data are statistics of the contents of many elements, in various media and at various scales, within the Earth's five spheres.

They are important source material for applied geochemical research addressing resource and environmental problems.

Compiling these data in one place spares researchers a great deal of the labor and time of searching the literature.

This small handbook was compiled with exactly that idea in mind.

Applications of Bird Song in Ecological Research


Sound analysis – analytical methods
Spectrogram: structure + frequency and time parameters
Playback experiments
Sound analysis – analytical methods – spectrogram structure
Element (phoneme)
Element group (syllable): a fixed group of two or more different element types.
Verse (sentence): separated by pauses
NATURE | 29 January 2009: The first evidence of within-species dialects among neotropical primates has been revealed. Researchers recorded the vocal patterns of adult pygmy marmosets (Callithrix pygmaea) from 14 groups found in five geographically distinct regions and discovered consistent structural differences in calls between regions.
Potter, Science, 1945, 102, 463-470.
Introduction
Bird song research focuses on:
Development of song in the individual
The syrinx: the organ of vocal production
Brain mechanisms and the vocal control system
Bird song as a communication system
Evolution of song and of vocal learning
Systematics and taxonomy; individual identification

Fan is to monoid as scheme is to ring: a generalization of the notion of a fan


arXiv:math/0306221v2 [math.AG] 8 Jul 2004

FAN IS TO MONOID AS SCHEME IS TO RING

HOWARD M. THOMPSON

Abstract. This paper generalizes the notion of a toric variety. In particular, these generalized toric varieties include examples of non-normal non-quasi-projective toric varieties. Such an example seems not to have been noted before in the literature. This generalization is achieved by replacing a fan of strictly rational polyhedral cones in a lattice with a monoided space, that is, a topological space equipped with a distinguished sheaf of monoids. For a classic toric variety, the underlying topological space of this monoided space is its orbit space under the action of the torus. And, if σ is a cone, then the stalk of the structure sheaf of the monoided space at the point corresponding to σ is the monoid Sσ = σ∨ ∩ M that is usually associated to σ. When applied to the affine toric variety associated to some cone σ, the monoided space so obtained is isomorphic to the spectrum of the monoid Sσ, where the spectrum of a monoid is defined in analogy with the definition of the spectrum of a ring. More generally, we will call a monoided space that is locally isomorphic to the spectrum of a monoid a fan, and we form schemes from these fans by taking monoid algebras and glueing.

Introduction

In recent years, various mathematicians have studied non-separated toric varieties, or not necessarily normal toric varieties, or combinatorial gadgets for describing toric varieties other than a fan of strictly rational polyhedral cones in a lattice. For examples, see A'Campo-Neuen & Hausen [1], Sturmfels [7] and Berchtold & Hausen [2] respectively. Following DeMeyer, Ford & Miranda [3], we define a topology on a fan ∆ in R^d by declaring the open sets to be its subfans. Then, like Kato [6], we make such fans into monoided spaces by associating a sheaf of monoids to each ∆. Our sheaf of monoids is similar to but differs from Kato's. Observing that this new monoided space is locally isomorphic to the spectrum of some monoid in the same sort of way a scheme is locally isomorphic to the spectrum of some ring, we define any monoided space that is locally isomorphic to the spectra of monoids to be a fan. The monoid algebra functor can then be used to associate a scheme to such a fan. The advantages of the approach presented here are: 1) some of the intuition built up from studying schemes can be carried over to this new gadget; 2) for any toric variety, not just the projective ones, the gadget presented here comes equipped with an abstract analog of the moment map, and often torus invariant objects on the toric variety descend to this gadget via this map; 3) when the toric variety is projective, this new gadget is the corresponding polytope equipped with some extra structure, and the associated abstract moment map is, essentially, the moment map; 4) this gadget permits the consideration of non-normal toric varieties that are neither affine nor projective; in fact, the scheme under consideration need not be a variety; 5) at the same time, the collection of schemes built using these gadgets is small enough that all the normal varieties in it are classic toric varieties.

1. Basic Notions

Definition 1.1. Let S be a commutative monoid. We say a subset a ⊆ S is an ideal if a + S ⊆ a. We say an ideal p ⊂ S is prime if its complement is a submonoid of S. The spectrum of S is a pair (Spec S, M) consisting of a topological space Spec S and a sheaf of monoids M on Spec S defined as follows: the underlying set of Spec S is the collection of prime ideals of S, the open sets of the form D(f) = {p ∈ Spec S | f ∈ S \ p} form a base for the topology of Spec S, and Γ(D(f), M) = S + N(−f).

I should warn you that I will use the same notation for both rings and monoids. In this paper, if A is a ring, then Spec A is the set of prime ideals of A as a ring with its standard topology, etc.

A monoided space is a pair (∆, M) consisting of a topological space ∆ and a sheaf of monoids M on ∆. A morphism of monoided spaces consists of a continuous map ϕ: ∆ → ∆′ and a local homomorphism of sheaves of monoids ϕ#: M∆′ → ϕ∗M∆. That is, for every point σ ∈ ∆, the homomorphism of monoids ϕ#σ: M∆′,ϕ(σ) → ϕ∗M∆,σ maps non-units to non-units. We say a monoided space (∆, M) is a fan if every point of ∆ has an open neighborhood U such that there exists a monoid S with (U, M|U) ≅ (Spec S, M_Spec S). Every monoid S has a unique maximal ideal S+ = S \ S∗, so there is no distinct notion of a locally monoided space. Furthermore, every point σ on a fan (∆, M) has a unique smallest open neighborhood, and this neighborhood is isomorphic to Spec Mσ.

If ∆ is a fan and A is a ring, we associate a scheme to this pair using the monoid algebra functor A → A[∆]: that is, if Spec S and Spec S′ are open subsets of the fan ∆, and f ∈ S is such that D(f) ⊆ Spec S′, we glue the affine schemes Spec A[S] and Spec A[S′] along Spec S_f. Here the map A[S_f] → A[S′] is the one induced by the restriction map S_f → S′ from ∆. This makes sense because the canonical inclusions of the form S ↪ A[S]; s ↦ t^s (we write elements of A[S], or more expansively A[t; S], as polynomials in t with coefficients in A and exponents in S) induce a continuous map A[∆] → ∆. The pullback of D(s) ⊆ Spec S is D(t^s) ⊆ Spec A[S]. More generally, if ∆ is a fan and X is a scheme, we may associate to this pair the X-scheme X[∆] = X ×_Spec Z Z[∆] → X, via the first projection pr1.

2. The fan of a classic toric variety

Definition 2.1. Let ∆ be a fan in a lattice N ≅ Z^d; that is, let ∆ be a collection of strongly convex rational polyhedral cones in the R-vector space N_R = N ⊗_Z R. We associate to ∆ a monoided space as follows. The underlying topological space, ∆top, is the collection {σ ∈ ∆} equipped with the topology whose open sets are the subfans of ∆. That is, the open sets are subsets of ∆ that are also fans in N. And the sheaf of monoids on ∆top is the sheaf M such that Γ(σtop, M|σ) = Sσ (= σ∨ ∩ M). Here we have identified the cone σ with the fan in N consisting of σ and its faces, and M = Hom_Z(N, Z).

Theorem 2.2. If ∆ is a fan in N, then (∆top, M) is a fan.

Proof. It suffices to prove (σtop, M|σ) ≅ Spec Sσ for all σ ∈ ∆. This is straightforward.

Notice that, if one starts with the normal fan of a polytope, then the topological space we obtain is isomorphic to the set of faces of the polytope, equipped with the topology such that the star of a face is the smallest open subset containing that face. More generally, it is worth noting that ∆top is homeomorphic to the orbit space of the classic toric variety associated to ∆.

3. When is k[∆] a normal k-variety?

This section is devoted to proving: if ∆ is a fan and k is a field, then k[∆] is a normal k-variety if and only if it is a classic toric variety. Here by k-variety we mean an integral, separated scheme of finite type over k.

Theorem 3.1. [5, Exercise II.3.5] If f: X → Y is a morphism of schemes of finite type, then for every open affine subset V = Spec B ⊆ Y, and for every open affine subset U = Spec A ⊆ f^{−1}(V), A is a finitely generated B-algebra.

In particular, if k[∆] is a k-scheme of finite type, then the stalks of M are all finitely generated monoids. Recall that a monoid S is said to be torsion-free if ns = ns′ for some positive integer n implies s = s′ in S.

Theorem 3.2. [4, Theorem 8.1] Let A be a nonzero ring. The monoid algebra A[S] is an integral domain if and only if A is an integral domain and S is torsion-free and cancellative.

As a consequence of this proposition, if k[∆] is integral, then the stalks of M are all cancellative, torsion-free monoids. In particular, if k[S] is a domain, then S embeds in a group. We write Z_S to denote this group and identify S with its image in Z_S. If S is a finitely generated, cancellative, torsion-free monoid, Z_S is a finitely generated, torsion-free Abelian group. That is, Z_S ≅ Z^n for some non-negative integer n. Consider the rational polyhedral cone R≥0 S = {Σ_{i=1}^{l} r_i ⊗ s_i | r_i ≥ 0, s_i ∈ S} in the finitely generated R-vector space R_S = R ⊗_Z Z_S generated by S. Identify S and Z_S with their images in R_S. If s ∈ Z_S ∩ R≥0 S, then ns = s′ ∈ S for some positive integer n and some s′ ∈ S. So, the image of s in k[Z_S ∩ R≥0 S], namely t^s, satisfies the monic polynomial x^n − t^{s′} with coefficients in k[S]. Therefore, if s ∈ Z_S ∩ R≥0 S and k[S] is integrally closed, then s ∈ S. In other words, if Spec k[S] is a normal k-variety, then S is a rational polyhedral cone in some finite-dimensional vector space intersected with a sublattice generated by a basis of the vector space. More briefly, if X = Spec k[S] is a normal k-variety, then X is an affine toric variety.

So far, we know that if X = k[∆] is a normal integral scheme of finite type over k, then X can be glued together from affine toric varieties along open torus invariant subvarieties. We now wish to show that if in addition X is separated, then X is a classic toric variety.

Theorem 3.3. Suppose X = k[∆] is a normal integral scheme of finite type over k. If X is separated, then X is a classic toric variety.

Proof. We have already established that X is glued together from a collection of affine toric varieties. We need to show that the cones corresponding to these toric varieties fit together to form a fan of strictly rational polyhedral cones in a lattice. Since on each of the fans of these affine toric varieties the generic point is the unique smallest open set, all these generic points are identified by the gluing. So, there is a fixed lattice M, the stalk of M at the generic point, containing every monoid S that occurs. Furthermore, M = Z_S for every monoid S, and R≥0 S is dual to a strictly convex rational polyhedral cone in N, where N = Hom_Z(M, Z). So, it makes sense to ask whether X comes from a fan. All we have to prove is: the intersection of any two cones in this collection is a face of each. We know the intersection U = Spec k[S1] ∩ Spec k[S2] ⊆ X is affine since X is separated. Furthermore, U is torus invariant. So, U comes from a subfan of the cone, σ, in N corresponding to Spec k[S1]. Since U is also affine, U corresponds to a face of σ.

References

1. A. A'Campo-Neuen and J. Hausen, Toric prevarieties and subtorus actions, Geom. Dedicata 87 (2001), no. 1-3, 35–64. MR 2002h:14085
2. F. Berchtold and J. Hausen, Bunches of cones in the divisor class group — a new combinatorial language for toric varieties, Int. Math. Res. Not. (2004), no. 6, 261–302. MR 2041065
3. F. R. DeMeyer, T. J. Ford, and R. Miranda, The cohomological Brauer group of a toric variety, J. Algebraic Geom. 2 (1993), no. 1, 137–154. MR 93i:14051
4. R. Gilmer, Commutative semigroup rings, Chicago Lectures in Mathematics, University of Chicago Press, Chicago, Ill., 1984. MR 85e:20058
5. R. Hartshorne, Algebraic geometry, Graduate Texts in Mathematics, No. 52, Springer-Verlag, New York, 1977. MR 57#3116
6. K. Kato, Toric singularities, Amer. J. Math. 116 (1994), no. 5, 1073–1099. MR 95g:14056
7. B. Sturmfels, Gröbner bases and convex polytopes, University Lecture Series, vol. 8, American Mathematical Society, Providence, RI, 1996. MR 97b:13034

Department of Mathematics, University of Michigan, Ann Arbor, Michigan 48109
E-mail address: hmthomps@
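Definition 1.1's notions of ideal and prime ideal in a commutative monoid can be explored by brute force on a small example. The monoid below, {0, …, 5} under multiplication mod 6, is my own choice, written multiplicatively rather than in the paper's additive notation; the definitions are the same: an ideal a satisfies a·S ⊆ a, and a proper ideal p is prime when its complement S \ p is a submonoid.

```python
from itertools import product

S = list(range(6))   # the monoid ({0,...,5}, * mod 6); identity element is 1
IDENTITY = 1

def op(x, y):
    return (x * y) % 6

def is_ideal(a):
    # a is an ideal when multiplying any element of a by anything stays in a
    return all(op(x, s) in a for x in a for s in S)

def is_submonoid(t):
    return IDENTITY in t and all(op(x, y) in t for x in t for y in t)

subsets = [frozenset(e for e, bit in zip(S, bits) if bit)
           for bits in product((0, 1), repeat=len(S))]

# Spec S as a set: all proper ideals whose complement is a submonoid.
primes = sorted((a for a in subsets
                 if len(a) < len(S) and is_ideal(a)
                 and is_submonoid(set(S) - a)),
                key=lambda a: (len(a), sorted(a)))

for p in primes:
    print(sorted(p))
```

For this monoid the enumeration finds four prime ideals: the empty ideal, {0, 3}, {0, 2, 4}, and {0, 2, 3, 4}; the complement of each, e.g. {1, 5} for the last, is indeed closed under multiplication and contains the identity.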

Plasma Mutagenesis of Haematococcus lacustris and Optimization of Culture Conditions for High-Yield Astaxanthin Algal Strains


DAI Rongchun, LIN Ronghua, HE Wenjin, et al. Plasma Mutagenesis of Haematococcus lacustris and Optimization of Culture Conditions for High-yield Astaxanthin Algae Strains[J]. Science and Technology of Food Industry, 2023, 44(23): 213−220. (in Chinese with English abstract). doi: 10.13386/j.issn1002-0306.2023030120

Processing Technology

DAI Rongchun (1,2,*), LIN Ronghua (1,2), HE Wenjin (1,2), XUE Ting (1,2), CHEN Jiannan (1,2), CHEN Jing (1,2), SUN Huamiao (1,2)
(1. College of Life Sciences, Fujian Normal University, Fuzhou 350117, Fujian, China; 2. Southern Institute of Oceanography, Fujian Normal University, Fuzhou 350117, Fujian, China)

Abstract: To further increase the industrial value of Haematococcus lacustris, this study mutagenized the alga using an atmospheric and room temperature plasma (ARTP) mutagenesis instrument.

The suitable input power and treatment time for plasma mutagenesis were determined using the algal cell lethality rate as the index.

After mutagenesis, mutant strains with high astaxanthin yield were obtained by primary screening on solid plates and secondary screening in liquid culture.

Culture conditions for the vegetative growth stage of the high-yield strain were then optimized by single-factor and orthogonal experiments, using algal cell density as the index, and high-light conditions suitable for astaxanthin accumulation during the induction stage were screened.

Probability Theory and Mathematical Statistics (English-language text)


Introduction to probability theory and mathematical statistics

Probability theory and mathematical statistics study, by deduction and by induction, the statistical regularities of random phenomena; together they form the basic mathematical discipline that investigates the quantitative side of those regularities, and the field divides into the two branches of probability theory and mathematical statistics. Probability quantifies how likely a random event is to occur. The main content of probability theory includes the computation of classical probabilities, the distributions and characteristic numbers of random variables, and the limit theorems. Mathematical statistics is among the branches of mathematics most directly and widely connected with practice; it introduces the elementary knowledge and principles of estimation (the method of moments, maximum likelihood estimation), parametric hypothesis testing, non-parametric hypothesis testing, analysis of variance and multiple regression analysis, and reliability analysis, so that students gain a deep understanding of how statistical principles work. Through this course, students can comprehensively understand and master the ideas and methods of probability theory and mathematical statistics, grasp the basic and commonly used analytical and computational techniques, and learn to study and solve practical questions in economics and management from the viewpoint and with the methods of probability and statistics.

Random phenomena

In nature and in everyday life, things are interrelated and continuously developing. According to whether there is a causal relationship in their interrelations and development, phenomena can be divided into two quite different classes. One class consists of deterministic phenomena.
A deterministic phenomenon is one in which, under fixed conditions, a definite result must follow. For example, under normal atmospheric pressure, water heated to 100 degrees Celsius is bound to boil. Such links between things are matters of necessity, and the natural sciences largely study and come to know this necessity, seeking out this kind of inevitable phenomenon. The other class consists of uncertain phenomena: under fixed conditions, the result is uncertain. For example, when the same workers machine a number of identical parts on the same machine tool, the sizes of the parts will always differ a little. As another example, in artificially accelerated germination tests of a wheat variety under the same conditions, each seed germinates differently, some vigorously and some weakly, some sooner and some later. Why do such uncertain results appear in the "same" situation? Because what we call the "same conditions" refers only to certain principal conditions; besides these, there are many minor conditions and accidental factors that people cannot grasp in advance one by one. For this reason, in phenomena of this kind we cannot use causal necessity to determine the result of an individual case in advance. The relationships between the things involved are accidental, and such a phenomenon is called an accidental phenomenon, or a random phenomenon. In nature, in production, and in daily life, random phenomena are very common; that is to say, there are a great many of them. The winning numbers of a sports lottery and the lifetimes of bulbs from the same production line, for instance, are random phenomena. So we say: a random phenomenon is one where, under the same conditions, repeating the same experiment or observing the same phenomenon many times gives results that are not identical, and the result of the next trial cannot be predicted exactly.
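This combination of individual unpredictability and collective stability can be seen in a short simulation (illustrative only; the seed and toss counts are arbitrary):

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def heads_fraction(n_tosses):
    """Fraction of heads observed in n_tosses fair-coin flips."""
    return sum(random.random() < 0.5 for _ in range(n_tosses)) / n_tosses

# No single toss can be predicted, yet the aggregate frequency of heads
# settles ever closer to 1/2 as the number of tosses grows.
for n in (100, 10_000, 1_000_000):
    print(n, heads_fraction(n))
```

The printed fractions wander noticeably at 100 tosses and hug 0.5 tightly at a million, which is exactly the statistical regularity the text goes on to describe.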
The uncertainty in the results of a random phenomenon arises from minor, accidental factors. On the surface, a random phenomenon seems disorderly, without any rule. But practice has proved that when the same kind of random phenomenon is repeated a large number of times, its totality exhibits a definite regularity, and this regularity becomes more and more apparent as the number of observations increases. Flip a coin, for example: each individual toss is hard to predict, but if the coin is tossed many times, it becomes ever clearer that heads come up roughly as often as tails. This collective regularity exhibited by a large number of similar random phenomena is called statistical regularity. Probability theory and mathematical statistics are the mathematical disciplines that study the statistical regularity of large numbers of similar random phenomena.

The emergence and development of probability theory

Probability theory was created in the 17th century. Its development was driven by the insurance business, but the problems that set mathematicians thinking came from gamblers' requests. As early as 1654, a gambler put to the mathematician Pascal a question that had long troubled him: "Two gamblers bet on a series of games; whoever first wins m games takes all the stakes. But when one of them has won a games (a < m) and the other has won b games (b < m), the gambling is interrupted. How should the stakes be divided fairly?"
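The division problem quoted above (the "problem of points") can be solved by a short recursion on how many games each player still needs, assuming each remaining game is a fair 50/50; the concrete numbers below are my own illustration:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def p_first_wins(r, s):
    """Probability that the first player wins the match when they still
    need r games and the opponent still needs s, each game a fair 50/50."""
    if r == 0:
        return Fraction(1)
    if s == 0:
        return Fraction(0)
    # Either the first player wins the next game (needs r-1 more)
    # or loses it (the opponent needs s-1 more), each with probability 1/2.
    return (p_first_wins(r - 1, s) + p_first_wins(r, s - 1)) / 2

# Example: first to m = 3 wins, interrupted at 2 games to 1,
# so the players still need 1 and 2 games respectively.
print(p_first_wins(1, 2))  # the leader's fair share of the stakes: 3/4
```

The fair division is therefore proportional to each player's probability of winning the completed match, not to the games already won, which was Pascal and Fermat's key insight.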
Pascal had invented the world's first mechanical adding machine in 1642. Three years after the stakes question was posed, in 1657, the famous Dutch astronomer, physicist, and mathematician Huygens, who had also been trying to solve this problem, published his results in a book on the calculation of games of chance, the earliest work on probability theory. In recent decades, with the vigorous development of science and technology, probability theory has been applied to the national economy, to industrial and agricultural production, and to interdisciplinary fields. Many branches of applied mathematics, such as information theory, game theory, queuing theory, and cybernetics, are based on the theory of probability. Probability theory and mathematical statistics are closely linked branches of mathematics dealing with similar random subject matter. But it should be pointed out that probability theory, mathematical statistics, and statistical methods each have their own distinct content. Probability theory starts from the statistical regularity of large numbers of similar random phenomena and makes objective, scientific judgments about the possible outcomes of a random phenomenon, describing quantitatively how likely each outcome is to occur; it compares these likelihoods and studies the connections between them, thereby forming a body of mathematical theory and methods. Mathematical statistics applies probability theory to study the regularity of large numbers of random phenomena; through the scientific design of experiments, it gives statistical methods a rigorous theoretical justification, and it determines the conditions under which the various methods apply as well as the reliability and limitations of the methods, formulas, and conclusions.
From a set of samples we can decide whether a judgment can be guaranteed correct with a suitably large probability, and we can control the probability of error. Statistical methods, by contrast, supply procedures for all kinds of specific problems without attending to the theoretical basis or mathematical reasoning behind them. It should be pointed out that the research methods of probability and statistics have their own particular character; the main differences from other mathematical subjects are the following. First, because the statistical regularity of random phenomena is a collective regularity that appears only in a large number of similar random phenomena, observation and experiment are the cornerstone of the research methods of probability and statistics. Nevertheless, as a branch of mathematics the discipline still has its definitions, axioms, and theorems; and although these definitions, axioms, and theorems are derived from the random regularities of nature, they are themselves deterministic, with no randomness about them. Second, the study of probability and statistics uses statistical inference, which "concludes about the whole from a part". This is because the range of the objects of study, random phenomena, is very large, and at the time of experiment or observation it is impossible, and unnecessary, to examine them all. Instead, from conclusions obtained on part of the data, one infers how reliable those conclusions are over the whole range. Third, the randomness of a random phenomenon refers to the situation before the experiment or investigation is carried out; after the experiment, each trial yields only one definite result out of the previously uncertain possibilities.
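The "concluding about the whole from a part" with a controlled error probability can be sketched as a confidence interval; the population parameters below are invented for the illustration:

```python
import math
import random
import statistics

random.seed(7)

# The "part": 400 draws from a large population, assumed here to be
# normal with mean 50 and standard deviation 10 (invented numbers).
sample = [random.gauss(50.0, 10.0) for _ in range(400)]

xbar = statistics.fmean(sample)
se = statistics.stdev(sample) / math.sqrt(len(sample))

# An approximate 95% confidence interval: a statement about the whole
# population, made from the part, with error probability about 5%.
ci = (xbar - 1.96 * se, xbar + 1.96 * se)
print(xbar, ci)
```

Repeating this procedure on fresh samples, the computed interval would cover the true population mean in roughly 95% of repetitions; that coverage rate is precisely the "controlled probability of error" the text refers to.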
When we study such phenomena, the point is that before the test we can seek out the laws inherent in the phenomenon itself.

The content of probability theory

As a branch of mathematics, probability theory studies the probabilities of random events, the regularities of statistical independence, and deeper levels of structure. Probability is a quantitative index of the possibility that a random event occurs. In independent repetitions of an experiment, if the frequency of an event stabilizes, over a large range of trials, around a fixed constant, that constant can be regarded as the probability of the event. The probability of any event must lie between 0 and 1. There is one definite type of random experiment with two characteristics: first, there are only finitely many possible results; second, the results are all equally likely. A random phenomenon with these two characteristics is called a "classical scheme". In the objective world there are a great many random phenomena, and the outcome of a random phenomenon constitutes a random event. If a variable is used to describe each result of a random phenomenon, it is called a random variable. Random variables may take finitely or infinitely many values, and according to how the values arise they are usually divided into discrete and continuous random variables. If all possible values can be listed in a definite order, the random variable is called discrete; if the possible values fill an interval and cannot be listed in order, the random variable is called continuous.

The content of mathematical statistics

Mathematical statistics includes sampling, the problem of the line of best fit, hypothesis testing, analysis of variance, correlation analysis, and so on. Sampling inspection infers the overall situation from a survey of samples.
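The "classical scheme" just described, finitely many equally likely outcomes, can be computed by direct enumeration; two fair dice serve as the stock example here:

```python
from fractions import Fraction
from itertools import product

# Two fair dice: 36 ordered outcomes, all equally likely.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    """Classical probability: (# favorable outcomes) / (# all outcomes)."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

print(prob(lambda o: sum(o) == 7))   # six favorable pairs -> 1/6
print(prob(lambda o: sum(o) == 2))   # only (1, 1) -> 1/36
```

Every classical-scheme probability computed this way automatically lies between 0 and 1, since the favorable outcomes are a subset of all outcomes.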
Exactly how large the sample should be is a very important question; "small-sample theory", the theory of analysis and judgment when the sample is small, arose out of sampling inspection. The problem of the line of best fit is also called curve fitting. Some problems require finding a theoretical distribution curve from empirical data so that the whole problem becomes understood. But according to what principles should the theoretical curve be found? How should several different curves proposed for the same problem be compared? Once a good curve has been selected, how are its errors determined? Such questions belong to the scope of the optimal-fit problems of mathematical statistics. In hypothesis testing, when inspecting products by mathematical-statistical methods, one first makes a hypothesis and then, according to the results of sampling, judges the null hypothesis with a stated degree of reliability. Analysis of variance, also called deviation analysis, uses the concept of variance to draw judgments from a handful of experiments. Because random phenomena abound in human practical activities, probability and statistics have developed continuously alongside modern industry and agriculture and modern science and technology, forming many important branches, such as stochastic processes, information theory, experimental design, limit theory, and multivariate analysis.
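The hypothesis-testing procedure sketched above — make a hypothesis, sample, then judge it with a stated reliability — can be illustrated by testing whether a coin is fair. The normal approximation and all counts below are my own illustrative choices:

```python
import math

def coin_pvalue(heads, n):
    """Two-sided p-value for H0: the coin is fair (p = 1/2),
    using the normal approximation to the binomial."""
    se = math.sqrt(0.25 / n)                 # std. error of the frequency under H0
    z = (heads / n - 0.5) / se
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal cdf
    return 2 * (1 - phi)

# 5,400 heads in 10,000 tosses: overwhelming evidence against fairness.
print(coin_pvalue(5_400, 10_000))
# 5,050 heads in 10,000 tosses: entirely consistent with a fair coin.
print(coin_pvalue(5_050, 10_000))
```

Rejecting the null hypothesis whenever the p-value falls below 0.05 is exactly a judgment made "with a stated degree of reliability": the rule is wrong for a truly fair coin only about 5% of the time.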

2013_Double-in-the-supply-chain-with-uncertain-supply_Li_European-Journal-of-Operational-Research


Production,Manufacturing and LogisticsDouble marginalization and coordination in the supply chain with uncertain supplyXiang Li a ,Yongjian Li b ,⇑,Xiaoqiang Cai caCollege of Economic and Social Development,Nankai University,Tianjin 300071,PR China bBusiness School,Nankai University,Tianjin 300071,PR China cDepartment of Systems Engineering &Engineering Management,The Chinese University of Hong Kong,Shatin,NT,Hong Konga r t i c l e i n f o Article history:Received 6August 2011Accepted 26October 2012Available online 11November 2012Keywords:Supply chain management Uncertain supply Contract designDouble marginalizationa b s t r a c tThis paper explores a generalized supply chain model subject to supply uncertainty after the supplier chooses the production input level.Decentralized systems under wholesale price contracts are investi-gated,with double marginalization effects shown to lead to supply insufficiencies,in the cases of both deterministic and random demands.We then design coordination contracts for each case and find that an accept-all type of contract is required to coordinate the supply chain with random demand,which is a much more complicated situation than that with deterministic demand.Examples are provided to illustrate the application of our findings to specific industrial domains.Moreover,our coordination mech-anisms are shown to be applicable to the multi-supplier situation,which fills the research gap on assem-bly system coordination with random yield and random demand under a voluntary compliance regime.Ó2012Elsevier B.V.All rights reserved.1.IntroductionSupply uncertainty is a major issue in both the industrial and academic worlds.In traditional manufacturing processes,stochas-tic capacity and random yield are regarded as the main causes of supply uncertainty.In the remanufacturing field,uncertainty with-in the supply channel also stems from used product acquisition.These supply uncertainties pose great challenges to supply chain management,and 
result in substantial revenue losses. In a decentralized system, channel performance can be even worse because of the different, and usually conflicting, objectives that the supply chain members attempt to optimize. Therefore, it has been a significant problem to design coordination schemes that increase supply chain performance and properly allocate the supply risk between the channel members. This paper explores the underlying double marginalization of a supply chain with supply uncertainty and the related problem of coordination contract design. Specifically, a buyer procures a product from a supplier who is confronted with an uncertain production quantity (output) after deciding a certain production input level, because of random disturbance in the production/delivery/supply process. The supplier's modeling setting (partially or fully) generalizes many of the uncertain supply models in the literature. Examples include (i) random yield, such as the models of Gerchak et al. (1994) and Li and Zheng (2006); (ii) random capacity, such as those of Ciarallo et al. (1994) and Hsieh and Wu (2008); (iii) random deteriorating loss, such as that of Cai et al. (2010); and (iv) random used product acquisition, such as that of Kaya (2010). We explore the supply chain model in such a generalized setting and show that the double marginalization effect is translated into a lower production input level from the supplier, which leads to supply insufficiency for the buyer. We also investigate coordination mechanisms for the cases of deterministic and random demand. A simple shortage penalty contract is designed to coordinate the supply chain under conditions of deterministic demand, but the coordination issue becomes more complicated when there is random demand. Our study thus stands on the interface between supply uncertainty and supply chain coordination, both of which have attracted a vast amount of research. Most of the papers considering supply uncertainty focus on the production planning problems of uncertain
production capacity, random manufacturing yield, or unreliable suppliers, such as those of Gerchak et al. (1988), Parlar and Perry (1996), Gupta and Cooper (2005), Chopra et al. (2007), and Pac et al. (2009), among others. The subject of supply chain coordination has been investigated by Muson and Rosenblatt (2001), Cachon and Lariviere (2005), Li and Wang (2007), Chen and Xiao (2009), and Luo et al. (2010), to name just a few. We also refer the reader to Cachon (2003) for an excellent survey of this field. Other than the aforementioned studies, the works most related to this paper involve the study of a supply chain model under conditions of uncertain supply from the supplier, which constitutes a rather limited body of literature. Zimmer (2002) investigates the supply chain coordination issue between a buyer facing deterministic demand and a supplier facing uncertain capacity equal to the sum of newly built capacity and a random leftover. He and Zhang (2008) consider the risk-sharing problem between a retailer and a supplier subject to random production yield and a more expensive emergency production option, but do not address the supply chain coordination issue. [0377-2217/$ - see front matter © 2012 Elsevier B.V. All rights reserved. doi: 10.1016/j.ejor.2012.10.047. Corresponding author. Tel.: +86 22 23505341. E-mail addresses: xiangli@ (X. Li), liyongjian@ (Y. Li), xqcai@ .hk (X. Cai).] In a subsequent work, He and Zhang (2010) further explore a supply chain with random yield production and a yield-dependent secondary market in which the product is acquired or disposed of. Serel (2008) considers a supply chain in which the retailer orders products from one unreliable supplier and one reliable manufacturer, thereby analyzing the retailer's ordering problem in conjunction with the manufacturer's pricing problem. Keren (2009) considers a supply chain in which the distributor faces a known demand and orders from the producer subject to a random production yield, and shows that the distributor may find it optimal to order
more than the demand due to supply uncertainty under a uniform distribution. Li et al. (2012) further extend the problem to a generalized distribution of yield randomness and specify the condition under which ordering more is optimal. Xu (2010) investigates a supply chain model in which the retailer adopts an option contract and the supplier conforms with random yield production and a more expensive emergency production. None of the above papers, except Zimmer (2002), addresses the supply chain coordination issue. In addition, a number of recent papers explore issues of information asymmetry in settings of supply uncertainty based on incentive theory, such as Yang et al. (2009) and Chaturvedi and Martinez-de-Albeniz (2011). There is also research on the supply chain coordination of an assembly system with a random yield supply. Gurnani and Gerchak (2007), for example, explore an assembly system with random yield of component production and deterministic demand for the final product. They assume the voluntary compliance of the suppliers and propose two coordination contracts, one with a penalty only for each undelivered item and the other with an extra penalty for the worst performing supplier. Yan et al. (2010) extend Gurnani and Gerchak's model to the case of a positive salvage value and asymmetric suppliers, and derive a new coordination scheme that also falls within a voluntary compliance regime. Guler and Bilgic (2009) explore a similar problem under conditions of both random yield and random demand, but their model assumes forced compliance, i.e., the suppliers have to make the input production quantities equal to the manufacturer's ordered quantities. In contrast, the main contributions of our paper can be summarized as follows: (1) We prove theoretically that the wholesale price contract undermines the performance of the entire supply chain by lowering the supplier's production input level, which is shown as the double marginalization effect in the supply chain with uncertain
supply. (2) We consider both deterministic and random demands, provide coordination mechanisms for both cases, and point out the essential differences between them. In the case of deterministic demand, a shortage penalty contract can achieve channel coordination and arbitrary profit allocation, as the buyer will not accept more units than the known demand. In the case of random demand, in contrast, coordination contracts require the buyer to accept all yielded units from the supplier in response to the random disturbance on the demand side. (3) Our results are derived in a very generalized setting and are thus applicable to more specific industrial cases, such as random yield, uncertain capacity, and stochastic used product collection. Moreover, they can easily be extended to a multiple-supplier scenario. In particular, our model covers the problem of a decentralized assembly system with suppliers subject to random component yields, which is related to the aforementioned papers: Gurnani and Gerchak (2007), Guler and Bilgic (2009), and Yan et al. (2010). However, as we have reviewed, Gurnani and Gerchak (2007) and Yan et al. (2010) only consider the case of deterministic demand, while Guler and Bilgic (2009) explore the case of random demand but assume forced compliance, so their derived schemes can be applied only when the manufacturer is substantially powerful. Compared to them, our coordination contracts are based on voluntary compliance and unify the cases of deterministic and random demands. To the best of our knowledge, this paper is among the first to delve into the coordination mechanisms of an assembly system with both random yields and random demand under a voluntary compliance regime. Furthermore, our model is analyzed under a more general setup compatible with, e.g., non-linear production costs and additive production risk, and thus can be applied to wider industrial domains. The remainder of the paper is organized as follows. In Section 2, we describe the problem. Sections 3 and 4 investigate the cases
of deterministic demand and random demand, respectively, explore the double marginalization effects, and derive the coordination contracts. Section 5 provides examples to illustrate applications to specific industrial cases, and Section 6 further extends the model to the case of multiple suppliers. Section 7 concludes the paper with directions for future research.

2. Problem description

Consider a supply chain consisting of a supplier and a buyer. The buyer contracts the supplier to provide a specific product and sells it on the market to meet deterministic demand d or random demand D. The supplier determines production input level e, which, depending on the specific industrial scenario, could represent the product development effort, invested capacity, planned manufacturing quantity, or any other determinant of production (see Footnote 1). Due to the random disturbance in the production process, the output production quantity S(e) is stochastic and sensitive to input level e. The production cost for the supplier, C(e), satisfies C′(e) > 0 and C″(e) ≥ 0, which implies that the marginal production cost is non-decreasing as the production input level is raised. Such a (non-strictly) convex cost is a commonly used assumption in the production planning literature (see, e.g., Veinott, 1964; Morton, 1978; Smith and Zhang, 1998; Yang et al., 2005). In our model, it reflects the idea that the supplier first utilizes his most efficient pattern of production. However, as the production volume is asked to increase, it exhausts the most attractive resources available to the supplier, and he has to pay more to acquire additional resources to generate the extra output. We assume that the production quantity can be expressed as (1) S(e) = s(e) + ε or (2) S(e) = s(e)·ε, where s(e) is a deterministic function of the production input level and ε is an independent random variable with finite support, denoting the yield randomness.
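A quick simulation makes the two supply forms concrete. The numbers below (s(e) = e, uniform disturbances) are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative sketch of the two supply forms: s(e) = e, with an additive
# disturbance eps ~ U[-50, 50] (E eps = 0) or a multiplicative one
# eps ~ U[0.5, 1.5] (E eps = 1). All parameter values are assumptions.
eps_add = np.linspace(-50.0, 50.0, 1001)
eps_mul = np.linspace(0.5, 1.5, 1001)

def S_additive(e):
    """S(e) = s(e) + eps: output scattered around the planned level e."""
    return e + eps_add

def S_multiplicative(e):
    """S(e) = s(e) * eps: output proportional to the planned level e."""
    return e * eps_mul

d, e = 100.0, 120.0
# Expected sales E min[d, S(e)] fall short of d even though E S(e) = 120 > d,
# because low-yield realizations cap the deliverable quantity.
print(np.mean(np.minimum(d, S_additive(e))),
      np.mean(np.minimum(d, S_multiplicative(e))))
```

Under either form the buyer cannot count on the expected output E S(e); this is why the contracts studied below condition payment on the realized quantity S(e).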
These are called the additive and multiplicative forms, respectively, both of which are widely used in the literature. Let f_ε(·), F_ε(·) and F_ε^{-1}(·) be the density, distribution and inverse distribution functions of ε, respectively. Also suppose that s′(e) > 0 and s″(e) ≤ 0, which implies that the marginal expected production quantity is non-increasing as the production input level is raised. Without any loss of generality, we assume the linear function s(e) = a + be, b > 0, and normalize the yield randomness to Eε = 0 or Eε = 1 for the additive or multiplicative form, respectively.

(Footnote 1: Although in practice the decisions concerning product development, capacity reservation, and manufacturing quantity planning can be made at different time points, and thus have different degrees of forecast accuracy, our model can be applied as long as these decisions are appropriately abstracted and meet the required theoretical assumptions. Please see Section 4.3 for relevant examples.)

[X. Li et al. / European Journal of Operational Research 226 (2013) 228–236, p. 229]

In fact, for
contract.Finally,demand is realized (if it is random),and the buyer meets that de-mand.Note that we assume voluntary compliance in the supplier’s decision,which means the supplier freely chooses the production input level without any contracting obligation.In many industries,it would be very difficult or costly for the buyer to monitor the sup-plier’s production input level,thus the supply chain system should indeed be operated under a voluntary compliance regime.Further-more,we assume that both the supplier and the buyer are risk-neutral and seek to maximize their own expected profits,respectively,with information on the system parameters as com-mon knowledge.3.Deterministic demandIn the case of deterministic demand d ,we first investigate the optimization problem for the centralized supply chain,which serves as a benchmark for the decentralized case.The expected profit of the centralized supply chain isp c ðe Þ¼E f p min ½d ;S ðe Þ Àh ½S ðe ÞÀd þÀC ðe Þg :ð1ÞLemma 1.(1)There is a unique e Ãc that maximizes the expected profitof the centralized supply chain,and (2)e Ãc is increasing with p and decreasing with h .Lemma 1shows the uniqueness of the optimal production input level e Ãc that maximizes the expected profit of the entire supply chain,and it is increasing with the selling price and decreasing with the holding cost.In the following,this maximized supplychain profit is assumed to be positive,i.e.,p c e Ãc ÀÁ>0.We now ex-plore the decentralized supply chain.The simplest scheme,which is also widely adopted in practice,is a wholesale price contract with the buyer prescribing an order quantity q and paying whole-sale price w 2(Àh ,p )for each delivered unit in the order.Hence,the payment to the supplier is T (w ,q )=w min[S (e ),q ],and the sup-plier’s expected profit isp s ðe Þ¼E f w min ½q ;S ðe Þ Àh ½S ðe ÞÀq þÀC ðe Þg :ð2ÞLemma 2.For any order quantity q from the buyer,there is a unique optimal production input level e d (q )for the supplier characterized 
byðw þh Þ@E min ½q ;S ðe Þ¼b h þC 0ðe Þ:ð3ÞGiven the above optimal response from the supplier,the buyer can strategically choose an order quantity q which is no less than the deterministic demand d .In other words,the buyer anticipates the supplier’s response e d (q )characterized by (3)and determines order quantity q to maximizep b ðq Þ¼pE min ½d ;S ðe d ðq ÞÞ ÀhE ½min ½q ;S ðe d ðq ÞÞ Àd þÀwE min ½q ;S ðe d ðq ÞÞ ¼ðp þh ÞE min ½d ;S ðe d ðq ÞÞ Àðw þh ÞE min ½q ;S ðe d ðq ÞÞ ;ð4Þwhere the second equality is because the order quantity q should be no less than the deterministic demand d .We reformulate the above buyer’s expected profit into the func-tion of the induced production input level e as follows:p b ðe Þ¼ðp þh ÞE min ½d ;S ðe Þ Àðw þh ÞE min ½q d ðe Þ;S ðe Þ ;ð5Þwhere q d (e )is characterized by (3).Let e Ãd be the optimally induced production input level under the wholesale price contract,which maximizes the objective function (5).The following result shows the properties of the optimal production input levels in the decen-tralized case.Proposition 1.We have:(1)e Ãd <e Ãc ;(2)e Ãd is increasing with p .Proposition 1indicates that the optimal production input level in the decentralized supply chain is increasing with the selling price,which is an intuitive result.More importantly,it shows that this input level is lower than that in the centralized case,tending towards a smaller supply quantity.It has been well-known that double marginalization generally refers to the channel barrier that causes a lower transfer/delivery amount of products through the channel.This usually exhibits as a result of a higher selling price set by the retailer or a less order quantity submitted from the retailer (see e.g.,Cachon,2003;Dellarocas,2012).However,in a supply chain with uncertain supply,such a lower transfer/delivery amount may not necessarily occur.For example,under deterministic demand d the retailer may submit an order higher than d (see Li et al.,2012),and thus the 
ac-tual transfer/delivery amount within the channel can be even lar-ger than that in the centralized system.As we can see,the double marginalization in our model is the lower production input level as shown by Proposition 1and the resulting insufficient supply,as compared to that in the centralized optimum.This effect is differ-ent from the traditional one,although the root for such phenome-non is also the double price distortion that occurs when two successive vertical layers in a supply chain independently stack their price–cost margins under a price-only contract,as the term ‘‘double marginalization’’implies.With an appropriate coordination scheme,the systematically optimal decision can be induced and the maximal profit achieved,even if the supply chain members act in a decentralized way.Specifically,we derive a shortage penalty contract T (w ,b )=w min[S (e ),d ]Àb [d ÀS (e )]+,in which the supplier is not only paid wholesale price w for each delivered unit within the demand d ,but is also charged penalty b for each order shortage for the demand.The following result shows that this contract is an imple-mentable coordination scheme that achieves both optimal system performance and flexible profit allocation.Proposition 2.For any k 2(0,1),letw üp Àk p c e Ãc ÀÁ=db ük pc e ÃcÀÁ=d :8<:ð6ÞThen,we have:(1)w ⁄>0,b ⁄>0;and(2)the shortage penalty contract T (w ⁄,b ⁄)coordinates the sup-ply chain with deterministic demand d ,and the resulting expected profit shares of the buyer and supplier are k and 1Àk ,respectively.230X.Li et al./European Journal of Operational Research 226(2013)228–236We have two remarks on the contracts in Proposition1.First, one should note that our contracts can coordinate the supply chain under voluntary compliance since the monetary transfer is based on the realized delivery from the supplier.That is,the supplier is allowed to freely choose the production input,although in the equilibrium resulted from the contract he will rationally choose the 
production input level that is systematically optimal.In prac-tice,such contracts are more advantageous than those under forced compliance since in many cases the production inputs are not verifiable,and thus the coordination contracts under forced compliance are not applicable(see,for example,Cachon and Lariviere,2001,the case of unverifiable supplier’s capacity input). More importantly,as Cachon(2003)points out,any contract that coordinates a supply chain with voluntary compliance surely coor-dinates the supply chain with forced compliance,but the reverse is not true.We refer to Cachon(2003)and Bolandifar et al.(2010)for more detailed discussion on the concept of compliance regime.In addition,the contracts in Proposition1not only coordinate the supply chain but also leavesflexibility for profit allocation.In general,the particular profit split depends on thefirms’relative bargaining power.As the buyer’s bargaining position becomes stronger,one would anticipate k to increase.In the extreme case, if the buyer is a completely dominating party in the channel she might leave to the supplier only the profit he earns in the uncoor-dinated case to ensure the supplier’s participation in the coordina-tion contract.In the less extreme cases,cooperative game theories such as Nash-bargainning solutions may be adopted to determine what kind of allocation k is a fair one.We refer to Muthoo(1999) for a more detailed discussion on the issue of profit allocation un-der cooperation.4.Random demand4.1.Double marginalizationWe now turn our attention to the case in which the buyer faces random demand D,which is independent of supply randomness e. 
Let f D(Á),F D(Á)and FÀ1DðÁÞbe the density,distribution and inverse dis-tribution functions of D,respectively.In this subsection we will show how double marginalization is manifested under the whole-sale price contract.The expected profit of the centralized supply chain ispcðeÞ¼E f p min½D;SðeÞ ÀhÁ½SðeÞÀD þÀCðeÞg:ð7ÞLemma3.(1)There is a unique eÃcthat maximizes the expected profitof the centralized supply chain(7);(2)eÃcis increasing with p and decreasing with h.Lemma3shows that the uniqueness of the optimal productioninput level eÃcfor the centralized supply chain under random de-mand,and it is increasing with the selling price and decreasing with the holding cost.The results are similar to the case of deter-ministic demand.In the decentralized case,the wholesale price contract T(w,q)=w min[S(e),q]is adopted.The supplier’s expected profit p s(e)and his optimal production input level e d(q)are expressed by(2)and(3),respectively,for given order quantity q.The buyer’s problem is to choose order quantity q to maximize her expected profitpbðqÞ¼E f p min½D;Sðe dðqÞÞ;q Àh½min½Sðe dðqÞÞ;q;q ÀD þÀw min½q;Sðe dðqÞÞ;q g¼ðpþhÞE min½D;Sðe dðqÞÞ;q ÀðwþhÞE min½q;Sðe dðqÞÞ ;ð8Þwhich can be reformulated into the function of the induced produc-tion input level e as follows:pbðeÞ¼ðpþhÞE min½D;SðeÞ;q dðeÞ ÀðwþhÞE min½q dðeÞ;SðeÞ ;ð9Þwhere q d(e)is characterized by(3).Let qÃdand eÃdbe the optimal or-der quantity and its induced production input level,which maxi-mize(8)and(9),respectively.We have the following result.Proposition3.(1)If p b(q)in(8)is a quasi-concave function,theneÃd<eÃc,and eÃdis increasing with p.(2)When C00(e)=0,p b(q)in(8)is a strictly concave function.Proposition3shows that double marginalization also occurs and manifests at a lower production input level,which is similar to the case of deterministic demand.The condition of quasi-concavity for the buyer’s objective function is to ensure the optimal order quan-tity(and the induced production input level)to be 
unique.This con-dition holds in the case of linear production cost,as indicated in Proposition3.For other cost functions,we point out that although it is very difficult to theoretically prove that the quasi-concavity holds in general,extensive numerical experiments have shown that it is true for many convex polynomial functions of C(e).Proposition 3also shows that the optimal production input level in decentral-ized case is increasing with the selling price.The above results are valid when an order quantity q<+1is ap-pointed as a decision variable in wholesale price contract T(w,q), while the wholesale price w2(Àh,p)can be any given plausible value.In fact,contract T(w,1)can be adopted when the buyer des-ignates a specific supplier for the product supply,which implies that the buyer will accept as many units as the supplier pro-duces/provides and pays for them at a unified wholesale price w. We call this type of payment scheme with q=1as an accept-all type of wholesale price contract for the buyer,and it can be a prac-tical policy mechanism in some cases.For example,in the reman-ufacturing industry if a remanufacturer outsources the acquisition activity of used product to a specialized collector,then this collec-tor may be required to deliver all acquired units to the remanufac-turer,and the accept-all contract T(w,1)can be used.Similar situation also occurs in agriculture in China,where the government encourages the downstreamfirm to provide the farmers(or the production base)a contract with no order limit on the quantity of agricultural products given a certain price mechanism.Under the accept-all type of wholesale price contract T(w,1), the supplier determines the production input level e to maximize p sðeÞ¼E½wSðeÞÀCðeÞ :ð10ÞThe buyer anticipates the supplier’s optimal response e d(w)and determines the wholesale price w to maximizepbðwÞ¼E f p min½D;Sðe dðwÞÞ Àh½Sðe dðwÞÞÀD þÀwSðe dðwÞÞg;ð11Þwhich can be reformulated into the function of the induced produc-tion 
input level e as follows:

$\pi_b(w) = E\{p \min[D, S(e)] - h[S(e) - D]^+ - w_d(e)S(e)\},$  (12)

where $w_d(e)$ is the inverse function of $e_d(w)$ that maximizes (10). Let $w_d^*$ and $e_d^*$ be the optimal wholesale price and its induced production input level, which maximize (11) and (12), respectively. We have the following result.

Corollary 1. Under the accept-all wholesale price contract T(w, ∞), we have $e_d^* < e_c^*$ when $C''(e) > 0$.

Corollary 1 shows that the double marginalization effect is also manifested as a lower production input level under the accept-all wholesale price contract T(w, ∞), in which case the wholesale price w becomes a decision variable. The required condition $C''(e) > 0$
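Corollary 1 can be illustrated with a small simulation of the accept-all contract under random demand. All parameter values below are hypothetical (chosen so that C is strictly convex, as the corollary requires), and the Stackelberg game is solved by brute-force grid search rather than by the paper's analytical characterization:

```python
import numpy as np

# Hypothetical setup: p=10, h=1, C(e)=0.01*e^2 (strictly convex),
# multiplicative yield S(e)=e*eps with eps~U[0.5,1.5], demand D~U[50,150].
eps = np.linspace(0.5, 1.5, 201)
D = np.linspace(50.0, 150.0, 201)
Eg, Dg = np.meshgrid(eps, D)            # joint grid (eps and D independent)
p, h = 10.0, 1.0
C = lambda e: 0.01 * e * e

def pi_c(e):                            # centralized expected profit, Eq. (7)
    S = e * Eg
    return np.mean(p * np.minimum(Dg, S) - h * np.maximum(S - Dg, 0)) - C(e)

es = np.linspace(0.0, 400.0, 401)
e_c = es[np.argmax([pi_c(e) for e in es])]

def e_supplier(w):                      # supplier's response under T(w, inf):
    # E S(e) = e here (E eps = 1), so the objective of Eq. (10) is w*e - C(e).
    return es[np.argmax([w * e - C(e) for e in es])]

def pi_b(w):                            # buyer's expected profit, Eq. (11)
    e = e_supplier(w)
    S = e * Eg
    return np.mean(p * np.minimum(Dg, S) - h * np.maximum(S - Dg, 0) - w * S)

ws = np.linspace(0.5, 9.5, 91)
w_d = ws[np.argmax([pi_b(w) for w in ws])]
e_d = e_supplier(w_d)
print(e_d < e_c)                        # double marginalization: lower input level
```

In this sketch the buyer's optimal wholesale price settles strictly below p, so the supplier's induced input level falls short of the centralized optimum, which is exactly the distortion Corollary 1 describes.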

Vapor–liquid equilibrium of methanol–dimethyl carbonate at normal pressure and selection of extractant


Chemical Engineering (China), Vol. 39, No. 8, Aug. 2011. Received: 2011-03-11. Supported by: National Natural Science Foundation of China (20476005); sub-project of the National Key Technology R&D Program (2006BA109B07-01). About the authors: LI Qun-sheng (1963–), male, Ph.D., professor, engaged in research on the theory and technology of distillation, absorption, supercritical extraction and continuous crystallization, E-mail: liqsh@buct.edu.cn; ZHU Wei, corresponding author, E-mail: zw198204@163.com.

Vapor–liquid equilibrium of methanol–dimethyl carbonate at normal pressure and selection of extractant. LI Qun-sheng (1), ZHU Wei (1), FU Yong-quan (1), GAO Dong-jiang (2), WANG Hai-chuan (1), WANG Hao (1), LI Lun (1) — (1) College of Chemical Engineering, Beijing University of Chemical Technology, Beijing 100029, China; (2) Lanzhou New West Vinylon Company Limited, Lanzhou 730094, Gansu Province, China. Abstract: The route of catalytically synthesizing dimethyl carbonate from methanol (MeOH) and urea is currently the most promising synthesis method, but because this process uses excess methanol, an azeotrope of dimethyl carbonate and methanol forms during synthesis, making separation difficult.

Among the separation methods reported, extractive distillation is superior to the others in economic efficiency, operability and safety.

Dimethyl oxalate, ethylene carbonate and propylene carbonate can all serve as extractants for separating DMC by extractive distillation, but no mathematical model of MeOH-DMC-extractant extractive distillation has been reported, so it is necessary to study the vapor–liquid equilibrium data of the MeOH-DMC system and of its systems with these extractants.

In this work, the binary vapor–liquid equilibrium data of MeOH-DMC were measured at normal pressure in a modified Othmer still, and the experimental data were correlated with the Margules, Van Laar, Wilson, NRTL and UNIQUAC equations. The vapor–liquid equilibrium of the ternary systems MeOH-DMC-extractant (dimethyl oxalate, ethylene carbonate, propylene carbonate) at normal pressure was then estimated with the UNIFAC equation, providing the vapor–liquid equilibrium (VLE) data necessary for building a mathematical model of the extractive distillation that separates the DMC–methanol azeotropic system.

Keywords: dimethyl carbonate; methanol; extractant; vapor–liquid equilibrium
CLC number: TQ 013.1; Document code: A; Article ID: 1005-9954(2011)08-0044-04

Vapor-liquid equilibrium of methanol-dimethyl carbonate at normal pressure and selection of extractant
LI Qun-sheng (1), ZHU Wei (1), FU Yong-quan (1), GAO Dong-jiang (2), WANG Hai-chuan (1), WANG Hao (1), LI Lun (1) — (1. College of Chemical Engineering, Beijing University of Chemical Technology, Beijing 100029, China; 2. Lanzhou New West Vinylon Company Limited, Lanzhou 730094, Gansu Province, China)

Abstract: The route of catalytic synthesis of dimethyl carbonate from methanol (MeOH) and urea is the most promising synthetic method. But because of the excessive use of methanol, it is difficult to separate the dimethyl carbonate and methanol mixture due to the existence of an azeotrope in this process. Extractive distillation is more economical, simple, safe and effective than the other separation methods reported, and dimethyl oxalate (DMO), ethylene carbonate (EC) and propylene carbonate (PC) can be used as the extractant. However, a mathematical model of MeOH-DMC-extractant had not been reported, so it is necessary to study the vapor-liquid equilibrium (VLE) data of MeOH-DMC and its extractants. The vapor-liquid equilibrium data for dimethyl carbonate and methanol were measured at normal pressure in a modified Othmer still. The measured binary data were correlated by using the Margules, Van Laar, Wilson, NRTL and UNIQUAC models. The normal-pressure vapor-liquid equilibrium of the ternary system MeOH-DMC-extractant (dimethyl oxalate, ethylene carbonate, propylene carbonate) was simulated by using the UNIFAC equation. The VLE data are necessary for separating methanol from DMC by extractive distillation.

Key words: dimethyl carbonate; methanol; extractant; vapor-liquid equilibrium

Dimethyl carbonate [1-6] (DMC) is a green chemical product. Because its molecule contains several functional groups, such as carbonyl, methoxy and methyl, it can be used in organic synthesis as a methylating agent in place of the carcinogenic dimethyl sulfate, and is widely applied in the chemical production of pesticides, pharmaceuticals, fuels and other products.
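The activity-coefficient models named above (Margules, Van Laar, Wilson, NRTL, UNIQUAC) all enter the VLE calculation through the same gamma-phi (modified Raoult's law) framework. The sketch below uses the Wilson model; the Wilson parameters and saturation pressures are illustrative placeholders, not the values regressed in this work:

```python
import math

# Gamma-phi sketch for MeOH(1)-DMC(2). L12/L21 are hypothetical Wilson
# parameters (Lambda12, Lambda21), NOT the values regressed in this paper.
L12, L21 = 0.55, 0.35

def wilson_gamma(x1):
    """Wilson activity coefficients (gamma1, gamma2) for a binary liquid."""
    x2 = 1.0 - x1
    t = L12 / (x1 + L12 * x2) - L21 / (x2 + L21 * x1)
    g1 = math.exp(-math.log(x1 + L12 * x2) + x2 * t)
    g2 = math.exp(-math.log(x2 + L21 * x1) - x1 * t)
    return g1, g2

def bubble_pressure(x1, p1_sat, p2_sat):
    """Modified Raoult's law: P = x1*g1*P1sat + x2*g2*P2sat (pressures in kPa)."""
    g1, g2 = wilson_gamma(x1)
    return x1 * g1 * p1_sat + (1.0 - x1) * g2 * p2_sat

g1, g2 = wilson_gamma(0.5)
print(g1 > 1.0 and g2 > 1.0)   # positive deviations from ideality, consistent
                               # with a minimum-boiling MeOH-DMC azeotrope
```

With any parameter set giving gamma > 1, the bubble pressure lies above the ideal (Raoult's law) value, which is the qualitative signature of the MeOH-DMC azeotrope that the extractant is meant to break.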

Input, interaction and output

"The process in which, in an effort to communicate, learners and competent speakers provide and interpret signals of their own and their interlocutor's perceived comprehension, thus provoking adjustments to linguistic form, conversational structure, message content, or all three, until an acceptable level of understanding is achieved" (p. 418).
Introduction

This paper presents an overview of what has come to be known as the Interaction Hypothesis, the basic tenet of which is that through input and interaction with interlocutors, language learners have opportunities to notice differences between their own formulations of the target language and the language of their conversational partners. They also receive feedback which both modifies the linguistic input they receive and pushes them to modify their output during conversation. This paper focuses on the major constructs of this approach to SLA, namely, input, interaction, feedback and output, and discusses recent literature that addresses these issues. We begin by noting that the Interaction Hypothesis subsumes aspects of the Input Hypothesis (Krashen 1982, 1985) and the original Output Hypothesis (Swain 1985, 1995). As we explain in Gass and Mackey (in press), the Interaction Hypothesis has been characterized and referred to in various ways, evolving over the years to the point that current research often refers to it as the interaction 'approach' or as a 'model' (see, for example, Block's 2003 discussion of the input, interaction, output model). We return to these various characterizations at the end of this paper. In simple terms, the interaction approach considers exposure to language (input), production of language (output), and feedback on production (through interaction) as constructs that are important for understanding how second language learning takes

Development of Materials in Materials Module With Appropriate Resolution Application


Development of Materials in Materials Module With Appropriate Resolution Application

Ririn Dwi Agustin, Firda Alfiana Patricia
Mathematics Education, IKIP Budi Utomo Malang, Indonesia

Abstract—The purposes of this research are (1) the development of a matrix module with a problem solving approach for vocational (SMK) students of class X, and (2) to describe the quality of the matrix module with a problem solving approach for class X SMK students, viewed from the aspects of validity, effectiveness, and practicality. This is development research using the 4-D development model. Only the first three stages of this model are adopted, up to the development stage, due to time constraints and the needs of module development. This research produces a matrix module with a problem solving approach for students of class X SMK. The results show that: (1) based on the evaluation of module quality by the validator, the module is valid, with a percentage of 79% and the category "good"; (2) based on the pilot test, the developed module is effective, with a test completeness of 80%; (3) based on the student response questionnaire, the developed module is practical, with a percentage of 84% and the category "very good".

Keywords—Matrix; Problem Solving; Development; 4-D

I. INTRODUCTION

In learning mathematics, ideally students are accustomed to gaining understanding through experience and knowledge developed in accordance with their development of thinking, because students differ in the potential functioning of their thinking ability. This is in line with the learning objectives formulated by the National Council of Teachers of Mathematics [1], that students must learn mathematics through understanding and by actively building new knowledge from previously possessed experience and knowledge. According to [2], students' ability to solve problems is closely related to intellectual development.
Thus the difficulty level of the problems given to the students must be adjusted to the students' development, and the themes of the problems should be taken from daily (contextual) events close to the students' lives. To achieve the purpose of learning mathematics, and for students to master their own processes, systematic steps are required to guide students in solving problems. On the other hand, the availability of quality textbooks is still lacking. The authors of textbooks do not consider how the books can be easily understood by students: the rules of learning psychology and the design theories of a textbook are not applied in their preparation. As a result, students find the books they read difficult to understand as well as boring. A module is a unit of an instructional program organized in a certain form for learning purposes. In its original sense, a module is a complete unit that can function independently and separately, but can also function as part of a larger whole. In practice, a module is a planned type of learning activity, designed to assist students individually in achieving their learning objectives. The matrix is one of the materials taught to students of class X of SMK. In everyday life the matrix can be used to solve systems of linear equations, geometric transformations, computer programs, and more, but the matrix material is still often considered difficult and abstract. In the book published by the Ministry of Education and Culture of the Republic of Indonesia used in SMK PGRI 3 Malang, matrix material is taught in classes X and XII, whereas here the matrix material is taught in class X only. This is the background for choosing matrix material in this research.

II. RESEARCH METHODOLOGY

The research design used in developing this module is the 4-D development model. According to [3], the device development model as suggested by Thiagarajan and Semmel is the 4-D model.
This model consists of four development stages: Define, Design, Develop, and Disseminate. In this research the model is adopted only up to the third stage, Develop. (1) The Define stage includes five basic steps: (a) front-end analysis, (b) student analysis, (c) task analysis, (d) concept analysis, and (e) formulation of learning objectives. (2) In the Design stage, theme selection, selection of the initial design and format, reference selection, test preparation, and layout design are carried out. (3) The Develop stage consists of: (a) preparation of the matrix module with a problem-solving approach in accordance with the results of the planning analysis; (b) assessment of the quality of the module (module validation) before it is tested; (c) an initial revision after the module quality assessment; (d) testing of the revised module on 10 students, who are asked to study and solve the problems in the matrix module.

University of Muhammadiyah Malang's 1st International Conference of Mathematics Education (INCOMED 2017)

The research instruments used are: (1) a module validation sheet, used to determine the module's level of validity and to obtain suggestions and input on the matrix module; it is used by the validators to assess the module's eligibility from various aspects, including material, language, and appearance; (2) a response questionnaire, used to measure the practicality of the module; the student response questionnaire contains statements representing students' responses after studying the module the researchers developed; (3) a trial conducted on 10 students; the results of this limited trial are used to revise the module.
If the limited trial gets a good response and students can complete the test questions according to the specified standard, the module is said to be effective. The following criteria are used as the basis for the validity level.

TABLE I. Validity criteria and module revision decisions [4]
  Percentage (%)    Criteria
  p > 80            Very Good (no revision needed)
  60 < p ≤ 80       Good (no revision needed)
  40 < p ≤ 60       Good Enough (revised)
  20 < p ≤ 40       Not Good (revised)
  p ≤ 20            Poor (revised)

Data from the trial are analyzed quantitatively to find the class average and learning mastery. The scoring guidelines and test assessment are as follows:
• Each test item has a different score weight according to its level of difficulty.
• From the scores obtained, the student's value is calculated with the formula: student score (x) = total score.
• The completeness of individual learning (minimum mastery criterion) is 75.
• Test completeness is interpreted using the following criteria.

TABLE II. Criteria for test completeness and student response questionnaire results [4]
  Percentage (%)    Criteria
  p > 80            Very Good
  60 < p ≤ 80       Good
  40 < p ≤ 60       Good Enough
  20 < p ≤ 40       Not Good
  p ≤ 20            Poor

III. RESULT AND DISCUSSION

In the Define stage, determining and setting the learning requirements begins with an analysis of the objectives and the constraints of the material for which the device is developed. This stage includes five main steps. (1) Front-end analysis is done by analyzing the core competencies and basic competencies as well as the indicators of competence achievement, with reference to Curriculum 2013. (2) Student analysis is done by examining the first-semester scores of class X TKJ A students on matrix material. The data show that students' scores are not yet maximal. One cause is that supporting books for learning are few and incomplete; for matrix material, students do not get a supporting book that suits their needs.
This is because matrix material in Vocational High School (SMK) is taught in class X only, while the book used in the teaching and learning process splits the matrix material in two: class X covers the understanding of matrices, matrix types, the transpose of a matrix, the equality of two matrices, counting operations on matrices, and the inverse and determinant of a matrix, while class XII covers matrix minors, cofactors, and the inverse and determinant of a matrix. (3) Task analysis is done to detail the content of the teaching materials in outline form. Based on interviews with mathematics teachers at SMK PGRI 3 Malang about the matrix material, it is known that the material is taught up to the inverse matrix. The matrix material in the module is therefore presented in four learning activities. Learning activity 1 contains the understanding of matrices, matrix types, the transpose, and the equality of two matrices. Learning activity 2 contains matrix addition, matrix subtraction, and multiplication of a matrix by a scalar. Learning activity 3 contains matrix-by-matrix multiplication and matrix determinants. Learning activity 4 contains matrix minors, cofactors, and the inverse matrix. (4) In concept analysis, the analyses conducted are the analysis of core and basic competencies and the analysis of the teaching materials. (5) The formulation of learning objectives contains the results of the task analysis and the concept analysis.

The Design stage consists of the following. (1) Theme selection: the module is developed with a problem-solving approach, so it contains problem-solving exercises that require students to think critically, because the solution method is not routine. The module also presents common problems that often occur around the students, so that they can better understand the matrix material and no longer consider it abstract. (2) Selection of the initial format and design.
The module is arranged in A4 size (21 cm × 29.7 cm), using the Cambria Math font at size 12. (3) Preparation of the material concepts from various references: the researchers sought and collected relevant reference books as references in developing the module, namely: Advanced Mathematics for Class XII SMA/MA Language Program; Mathematics for the Technology, Health, and Agriculture Group for Vocational High School Class X; Mathematics Module for SMK - X, KTSP Curriculum; Application Matic Volume III; and the Mathematics Textbook for the SMK Business Management Group. In addition to reference books, the researchers also collected pictures, some downloaded from the internet and some from the authors' personal collections. (4) The test and module layout design is divided into three parts: (a) the initial section, consisting of the cover, identity page, foreword, table of contents, introduction, and concept map; (b) the content section, consisting of the introduction page, description of the material, worked examples, exercises, formative tests, retraining exercises, and summaries; (c) the final section, consisting of the glossary and bibliography. (5) Module making.

In the Develop stage the following were done. (1) Preparation of the matrix module with a problem-solving approach in accordance with the results of the planning analysis. (a) Assessment of the quality of the module (module validation) before the trial: the developed module was first consulted with the supervisor to obtain suggestions for improvement; after several improvements, the module was validated. The module developed in this research was validated by two lecturers from the Mathematics Education Department of IKIP Budi Utomo Malang and one mathematics teacher of SMK PGRI 3 Malang. (b) The revised module was tested on 10 students.
(4) The final product is a matrix module with a problem-solving approach for SMK.

IV. CONCLUSION

The matrix module with a problem-solving approach for class X vocational students was developed using the 4-D model (Define, Design, Develop, and Disseminate); in this research the model was adopted only up to the third stage, Develop. Based on the validators' assessment described above, all assessed aspects of the matrix module are valid, with a percentage of 79%. In addition, based on the trial with ten students, the matrix module is declared effective with a percentage of 80%. Based on the student response questionnaire, the module is declared practical with a percentage of 84%. The module is therefore declared valid, effective, and practical.

(1) The matrix module for class X vocational students can be further developed by other researchers for other mathematical materials. (2) The module can be further developed as teaching material at SMK; it is feasible to perfect it in terms of material, appearance, and development, so that development can be carried through to the last stage, Disseminate. (3) Testing can be done periodically on each sub-chapter of the material in the module, and trials can be run on a larger scale.

REFERENCES
[1] N. Citoresmi, Sugiatno, and D. S., "Pengembangan Modul Matematika Berbasis Masalah Untuk Meningkatkan Kemampuan Penyelesaian Masalah Dan Berpikir Kreatif Matematis Siswa," Program Studi Magister Pendidikan Matematika FKIP UNTAN, 2016.
[2] I. Rizkianto, "Workshop Kemampuan Pemecahan Masalah Topik Aljabar Bagi Guru SMP Di Kabupaten Sleman Yogyakarta."
[3] Trianto, Mendesain Pembelajaran Kontekstual (Contextual Teaching And Learning), Cerdas Pustaka, 2008.
[4] E. P. Widyoko, Evaluasi Program Pembelajaran, Yogyakarta: Pustaka Belajar, 2009.
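The quality thresholds used throughout the study (Tables I and II, plus the individual mastery criterion of 75) amount to a small classification rule. A minimal Python sketch of that rule; the function names are illustrative, not from the paper:

```python
def quality_category(p):
    """Map a percentage score p (0-100) to the category of Tables I and II."""
    if p > 80:
        return "Very Good"
    elif p > 60:
        return "Good"
    elif p > 40:
        return "Good Enough"
    elif p > 20:
        return "Not Good"
    return "Poor"

def needs_revision(p):
    """Per Table I, a module scoring 60 or below must be revised."""
    return p <= 60

def is_mastered(score, minimum=75):
    """Individual learning completeness: a test score of at least 75 counts as mastered."""
    return score >= minimum

# The study's own results fall out directly:
# 79% validity  -> "Good", no revision needed
# 84% practicality -> "Very Good"
```

Note that the category boundaries are half-open (60 < p ≤ 80 is "Good"), so the paper's 79% validity score sits in the "Good" band while 84% practicality lands in "Very Good", matching the reported categories.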

羟胺与氰基反应

羟胺与氰基反应

1. Single Step100%OverviewSteps/Stages Notes1.1 R:NH2OH, S:H2O, S:EtOH, 48 h, reflux Reactants: 1, Reagents: 1, Solvents: 2, Steps:1, Stages: 1, Most stages in any one step: 1ReferencesNitrile and amidoxime compounds, theirpreparation and use in semiconductorprocessingBy Lee, Wai MunFrom U.S. Pat. Appl. Publ., 20090111965, 30Apr 2009Experimental ProcedureAB) Reaction of Benzonitrile. Benzonitrile (0.99 cm3, 1 g, 9.7 mmol) and hydroxylamine (50% in water,0.89 cm3, 0.96 g, 14.55 mmol, 1.5 eq) were stirred under reflux in EtOH (10 cm3) for 48 hours. Thesolvent was evaporated under reduced pressure and water (10 cm3) was added to the residue. Themixture was extracted with dichloromethane (100 cm3) and the organic extract was evaporated underreduced pressure. The residue was purified by column chromatography to give the product. N'-hydroxybenzimidamide, yield 1.32 g, 100%, as a white crystalline solid. mp 79-81° C. (lit 79-80° C.) CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.2. Single Step100%OverviewSteps/Stages Notes1.1 R:NH2OH, S:EtOH, 1 h, reflux; reflux → rt stereoselective, Reactants: 1, Reagents: 1,Solvents: 1, Steps: 1, Stages: 1, Most stagesin any one step: 1ReferencesPreparation of diaryl-substituted 5-memberedheterocycles as antibacterial agentsBy Mobashery, Shahriar et alFrom PCT Int. 
Appl., 2009082398, 02 Jul2009Experimental Procedure(Z)-N'-hydroxybenzamidine (compound 17-structure shown below): A solution of ethanol (5.0 mL),benzonitrile (203 mg, 1.97 mmol) and hydroxylamine (520 mg, 7.87 mmol) were refluxed for 1 hour.The reaction was then cooled to room temperature and concentrated in vacuo to give the a clear oilwhich was taken to the next step without further purification (268 mg, 100%). 1H NMR (500 MHz,CDCL3) δ(ppm): 4.92 (2H, bs), 7.38-7.44 (3H, m), 7.62-7.65 (2H, m). 13C NMR (125 MHz, CDCL3)δ(ppm): 126.1 (CH), 128.9 (CH), 130.2 (CH), 132.6, 152.8. MS (FAB+): 137 (MH+). HRMS forC7H8N2O (MH+): calculated: 137.0715; found 137.0718.CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.3. Single Step100%OverviewSteps/Stages Notes1.1 R:NH2OH, S:EtOH, 1 h, reflux; reflux → rt stereoselective, Reactants: 1, Reagents: 1,Solvents: 1, Steps: 1, Stages: 1, Most stagesin any one step: 1ReferencesPreparation of oxadiazole derivatives asantibacterial agentsBy Mobashery, Shahriar et alFrom PCT Int. Appl., 2009041972, 02 Apr2009Experimental Procedure(Z)-N'-hydroxybenzamidine (compound 17 - structure shown below): A solution of ethanol (5.0 mL),benzonitrile (203 mg, 1.97 mmol) and hydroxylamine (520 mg, 7.87 mmol) were refluxed for 1 hour.The reaction was then cooled to room temperature and concentrated in vacuo to give the a clear oilwhich was taken to the next step without further purification (268 mg, 100%). 1H NMR (500 MHz,CDCL3) δ(ppm): 4.92 (2H, bs), 7.38-7.44 (3H, m), 7.62-7.65 (2H, m). 
13C NMR (125 MHz, CDCL3)δ(ppm): 126.1 (CH), 128.9 (CH), 130.2 (CH), 132.6, 152.8. MS (FAB+): 137 (MH+). HRMS forC7H8N2O (MH+): calculated: 137.0715; found 137.0718.CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.4. Single Step99%OverviewSteps/Stages Notes1.2 R:Disodium carbonate, S:H2OReferencesDiscovery and SAR exploration of N-aryl-N-(3-aryl-1,2,4-oxadiazol-5-yl)amines aspotential therapeutic agents for prostatecancerBy Krasavin, Mikhail et alFrom Chemistry Central Journal, 4, No pp.given; 2010CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.5. Single Step95%OverviewSteps/Stages Notes1.1 R:H2NOH-HCl, R:NaOH, S:H2O, 1 h, 30°C, pH 10; 2 h, reflux Reactants: 1, Reagents: 2, Solvents: 1, Steps:1, Stages: 1, Most stages in any one step: 1ReferencesTwo synthetic methods of 3,4-bis(3-nitrophenyl)furoxanBy Yang, Jian-ming et alFrom Hanneng Cailiao, 17(5), 527-530; 2009 CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. 
CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.6. Single Step95%OverviewSteps/Stages NotesReferencesSynthesis of 3,4-bis(3',5'-dinitrophenyl-1'-yl)furoxanBy Huo, Huan et alFrom Hecheng Huaxue, 17(2), 208-210; 2009 CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.7. Single Step93%OverviewSteps/Stages Notes1.1 R:NH2OH, S:H2O, S:MeOH, > 1 min, 50°C; 3 h, reflux Reactants: 1, Reagents: 1, Solvents: 2, Steps:1, Stages: 1, Most stages in any one step: 1ReferencesQuinazoline derivatives as adrenergicreceptor antagonists and their preparation,pharmaceutical compositions and use in thetreatment of diseasesBy Sarma, Pakala Kumara Savithru et alFrom Indian Pat. Appl., 2005DE01706, 31Aug 2007CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.8. 
Single Step92%OverviewSteps/Stages Notes1.1 R:NH2OH, R:Et3N, S:EtOH, rt stereoselective, Reactants: 1, Reagents: 2,Solvents: 1, Steps: 1, Stages: 1, Most stagesin any one step: 1ReferencesPotent inhibitors of lipoprotein-associatedphospholipase A2: Benzaldehyde O-heterocycle-4-carbonyloximeBy Jeong, Hyung Jae et alFrom Bioorganic & Medicinal ChemistryLetters, 16(21), 5576-5579; 2006 CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.9. Single Step89%OverviewSteps/Stages Notes1.1 R:EtN(Pr-i)2, R:H2NOH-HCl, S:EtOH, 18 h, 80°C Reactants: 1, Reagents: 2, Solvents: 1, Steps:1, Stages: 1, Most stages in any one step: 1ReferencesTuned methods for conjugate addition to avinyl oxadiazole; synthesis ofpharmaceutically important motifsBy Burns, Alan R. et alFrom Organic & Biomolecular Chemistry,8(12), 2777-2783; 2010CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.10. 
Single Step91%OverviewSteps/Stages Notes1.1 R:NaOH, R:H2NOH-HCl, S:H2O, S:EtOH, 12 h, 80°C; cooled Reactants: 1, Reagents: 2, Solvents: 2, Steps:1, Stages: 1, Most stages in any one step: 1ReferencesPreparation of heteropolycyclic compoundsand their use as metabotropic glutamatereceptor antagonistsBy Edwards, Louise et alFrom U.S. Pat. Appl. Publ., 20050272779, 08Dec 2005Experimental ProcedureGeneral/Typical Procedure: Example 6 N-Hydroxy-3-methoxy-benzamidine. Using the generalprocedure of Shine et al., J. Heterocyclic Chem. (1989) 26:125-128, hydroxylamine hydrochloride (22ml, 5 M, 110 mmol) and sodium hydroxide (11 ml, 10 M, 110 mmol) were added to a solution of 3-methoxybenzonitrile (11.5 ml. 94 mmol) in ethanol (130 ml). The reaction mixture was then heated atreflux (80 °C.) for 12 h. After the mixture was cooled, most of the solvent was removed in vacuo. Thecrude product was partitioned between ethyl acetate and water, washed with saturated brine, driedover anhydrous sodium sulfate and the solvent was removed in vacuo. Flash chromatography on silicagel using 35-50% ethyl acetate in hexane yielded the title compound (8.05 g, 52%). Examples 7-9were prepared in an analogous method to the procedure given in Example 6. N-Hydroxy-benzamidine.N-hydroxy-benzamidine (4.83 g, 91%, white solid) was obtained from benzonitrile (4 g, 38.9 mmol),hydroxylamine hydrochloride (8.89 ml, 44.0 mmol) and sodium hydroxide (4.49 ml, 45.0 mmol) inethanol (30 ml). 1H NMR (CDCl3), δ (ppm): 8.81 (broad peak, 1H), 7.63 (m, 2H), 7.39(m, 3H), 4.91 (s,2H).CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. 
Reproduced under license. All Rights Reserved.11. Single Step91%OverviewSteps/Stages Notes1.1 R:NaOH, R:H2NOH-HCl, S:H2O, S:EtOH, 12 h, 80°C literature preparation, Reactants: 1, Reagents:2, Solvents: 2, Steps: 1, Stages: 1, Moststages in any one step: 1ReferencesPreparation of five-membered heterocycliccompounds as mGluR5 receptor antagonistsBy Wensbo, David et alFrom PCT Int. Appl., 2004014881, 19 Feb2004CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.12. Single Step85%OverviewSteps/Stages Notes1.1 R:Et3N, R:H2NOH-HCl, S:EtOH, 18 h, reflux stereoselective (Z), Reactants: 1, Reagents: 2,Solvents: 1, Steps: 1, Stages: 1, Most stagesin any one step: 1ReferencesUnexpected C-C Bond Cleavage: Synthesisof 1,2,4-Oxadiazol-5-ones from Amidoximeswith Pentafluorophenyl or TrifluoromethylAnion Acting as Leaving GroupBy Gerfaud, Thibaud et alFrom Organic Letters, 13(23), 6172-6175;2011CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.13. 
Single Step85%OverviewSteps/Stages Notes1.1 R:Disodium carbonate, R:H2NOH-HCl, S:H2O, S:EtOH, 15 min,55°Cultrasound (40kHz), reaction withoutultrasound at room temperature decreasedyield and increased reaction time, Reactants:1, Reagents: 2, Solvents: 2, Steps: 1, Stages:1, Most stages in any one step: 1ReferencesSynthesis of amidoximes using an efficientand rapid ultrasound methodBy Barros, Carlos Jonnatan Pimentel et alFrom Journal of the Chilean ChemicalSociety, 56(2), 721-722; 2011CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.14. Single Step83%OverviewSteps/Stages Notes1.1 R:NaHCO3, R:H2NOH-HCl, S:H2O, S:EtOH, 4 h, 80°C Reactants: 1, Reagents: 2, Solvents: 2, Steps:1, Stages: 1, Most stages in any one step: 1ReferencesA novel bifunctional chelating agent based onbis(hydroxamamide) for 99mTc labeling ofpolypeptidesBy Ono, Masahiro et alFrom Journal of Labelled Compounds andRadiopharmaceuticals, 55(2), 71-79; 2012 CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.15. 
Single Step80%OverviewSteps/Stages Notes1.1 R:NaHCO3, R:H2NOH-HCl, S:H2O, 10 min, 25°C1.2 S:EtOH, 20 h, 25°C1.3 R:H2NOH-HCl, 50 h, 25°Cregioselective, other product also detected, in-situ generated reagent, Reactants: 1,Reagents: 2, Solvents: 2, Steps: 1, Stages: 3,Most stages in any one step: 3ReferencesSynthesis, mechanism of formation, andmolecular orbital calculations ofarylamidoximesBy Srivastava, Rajendra M. et alFrom Monatshefte fuer Chemie, 140(11),1319-1324; 2009CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.16. Single Step79%OverviewSteps/Stages Notes1.1 R:Disodium carbonate, R:H2NOH-HCl, S:H2O, S:EtOH Reactants: 1, Reagents: 2, Solvents: 2, Steps:1, Stages: 1, Most stages in any one step: 1ReferencesSynthesis of 1,2,4- and 1,3,4-oxadiazolesfrom 1-aryl-5-methyl-1H-1,2,3-triazole-4-carbonyl chloridesBy Obushak, N. D. et alFrom Russian Journal of Organic Chemistry,44(10), 1522-1527; 2008CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.17. 
Single Step85%OverviewSteps/Stages Notes1.1 R:K2CO3, R:H2NOH-HCl, S:EtOH1.2 R:HCl, S:Et2O, S:H2O1.3 R:NH3, R:NaCl1.4 S:Et2OReactants: 1, Reagents: 5, Solvents: 3, Steps:1, Stages: 4, Most stages in any one step: 4ReferencesModification of the Tiemann rearrangement:One-pot synthesis of N,N-disubstitutedcyanamides from amidoximesBy Bakunov, Stanislav A. et alFrom Synthesis, (8), 1148-1159; 2000 CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.18. Single Step76%OverviewSteps/Stages Notes1.1 R:EtN(Pr-i)2, R:H2NOH-HCl, S:EtOH, 6-12 h, 80°C Reactants: 1, Reagents: 2, Solvents: 1, Steps:1, Stages: 1, Most stages in any one step: 1ReferencesA versatile solid-phase synthesis of 3-aryl-1,2,4-oxadiazolones and analoguesBy Charton, Julie et alFrom Tetrahedron Letters, 48(8), 1479-1483;2007CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.19. 
Single Step70%OverviewSteps/Stages Notes1.1 R:Disodium carbonate, R:H2NOH-HCl, S:H2O, S:EtOH, 8 h, reflux Reactants: 1, Reagents: 2, Solvents: 2, Steps:1, Stages: 1, Most stages in any one step: 1ReferencesDesign, synthesis, characterization, andantibacterial activity of {5-chloro-2-[(3-substitutedphenyl-1,2,4-oxadiazol-5-yl)-methoxy]-phenyl}-(phenyl)-methanonesBy Rai, Neithnadka Premsai et alFrom European Journal of MedicinalChemistry, 45(6), 2677-2682; 2010 CASREACT ®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.20. Single Step70%OverviewSteps/Stages Notes1.1 R:H2NOH-HCl, R:NaHCO3, S:H2O, S:MeOH, 1 h, rt → 70°C; cooled stereoselective, Reactants: 1, Reagents: 2, Solvents: 2, Steps: 1, Stages: 1, Most stages in any one step: 1ReferencesDiscovery and Optimization of a Novel Series of N-Arylamide Oxadiazoles as Potent, Highly Selective and Orally Bioavailable Cannabinoid Receptor 2 (CB2) AgonistsBy Cheng, Yuan et alFrom Journal of Medicinal Chemistry, 51(16), 5019-5034; 2008Experimental ProcedureN-(9-Ethyl-9H-carbazol-3-yl)-3-(3-phenyl-1,2,4-oxadiazol-5-yl) propanamide (37). To a mixture ofsodium carbonate (1.0 g, 10 mmol) and hydroxylamine hydrochloride (1.0 g, 19 mmol) inmethanol/H2O was added benzonitrile (2 mL, 19 mmol). The mixture was heated to 70 °C for 1 h. Thecooled reaction mixture was concentrated, and the residue was taken up in dichloromethane. 
The organic layer was washed with water and concentrated to give (Z)-N'-hydroxybenzamidine (1.85 g, 70% yield), which was used without further purification.

CASREACT®: Copyright © 2012 American Chemical Society. All Rights Reserved. CASREACT contains reactions from CAS and from: ZIC/VINITI database (1974-1999) provided by InfoChem; INPI data prior to 1986; Biotransformations database compiled under the direction of Professor Dr. Klaus Kieslich; organic reactions, portions copyright 1996-2006 John Wiley & Sons, Ltd., John Wiley and Sons, Inc., Organic Reactions Inc., and Organic Syntheses Inc. Reproduced under license. All Rights Reserved.

21. Single Step, 75%
Steps/Stages: 1.1 R: H2NOH-HCl, R: disodium carbonate, S: MeOH
Notes: Reactants: 1, Reagents: 2, Solvents: 1, Steps: 1, Stages: 1, Most stages in any one step: 1
Reference: N-Aryl N'-Hydroxyguanidines, a New Class of NO-Donors after Selective Oxidation by Nitric Oxide Synthases: Structure-Activity Relationship. By Renodon-Corniere, Axelle et al. From Journal of Medicinal Chemistry, 45(4), 944-954; 2002.
Experimental Procedure: Benzamidoximes 30-32 were prepared by refluxing anhydrous methanolic solutions of hydroxylamine hydrochloride with the corresponding nitrile in the presence of sodium carbonate, as previously described.57 Benzamidoxime (30): compound 30 was obtained as a white solid in 75% yield from benzonitrile; mp 76 °C (literature: 76 °C).57

22. Single Step, 70%
Steps/Stages: 1.1 R: KOH, R: H2NOH-HCl, S: MeOH, 3-6 h, 60 °C
Notes: In-situ generated reagent. Reactants: 1, Reagents: 2, Solvents: 1, Steps: 1, Stages: 1, Most stages in any one step: 1
Reference: HCV NS5b RNA-Dependent RNA Polymerase Inhibitors: From α,γ-Diketoacids to 4,5-Dihydroxypyrimidine- or 3-Methyl-5-hydroxypyrimidinonecarboxylic Acids. Design and Synthesis. By Summa, Vincenzo et al. From Journal of Medicinal Chemistry, 47(22), 5336-5339; 2004.
Experimental Procedure: N'-Hydroxybenzenecarboximidamide (12), 3-(benzyloxy)-N'-hydroxybenzenecarboximidamide (13), and N'-hydroxy-3-[(4-methoxybenzyl)oxy]benzenecarboximidamide were prepared from the corresponding nitriles by use of known procedures. Generally, one equiv of potassium hydroxide dissolved in methanol was added to a solution of hydroxylamine hydrochloride (1 equiv) in methanol. The precipitated potassium chloride was removed by filtration, and the appropriate aryl nitrile was added to the above solution. The reaction mixture was stirred at 60 °C for the appropriate time (3-6 h, TLC monitoring). After cooling, the solvent was removed under vacuum, and the residue was triturated with diethyl ether. The precipitate was collected and eventually recrystallized from an appropriate solvent, furnishing the desired amidoxime in 60-70% yield. N'-Hydroxybenzenecarboximidamide (12): spectral data match literature data.3

23. Single Step, 65%
Steps/Stages: 1.1 R: K2CO3, R: H2NOH-HCl, S: EtOH, 1 h, rt; 6 h, reflux
Notes: Reactants: 1, Reagents: 2, Solvents: 1, Steps: 1, Stages: 1, Most stages in any one step: 1
Reference: Acetic Acid Aldose Reductase Inhibitors Bearing a Five-Membered Heterocyclic Core with Potent Topical Activity in a Visual Impairment Rat Model. By La Motta, Concettina et al. From Journal of Medicinal Chemistry, 51(11), 3182-3193; 2008.
Experimental Procedure: General procedure for the synthesis of N-hydroxybenzimidamides 3a-i and N-hydroxy-2-phenylacetimidamides 4a-i. A solution of the appropriate nitrile 1a-i or 2a-i (1.00 mmol), hydroxylamine hydrochloride (1.35 mmol), and potassium carbonate (1.00 mmol) in ethanol was left under stirring at room temperature for 1 h, then heated under reflux until the disappearance of the starting materials (6 h, TLC analysis). After cooling, the resulting mixture was filtered and the solvent was evaporated to dryness under reduced pressure to give the target compound as a white solid, which was purified by recrystallization (Supporting Information, Tables 1 and 2).

24. Single Step, 60%
Steps/Stages: 1.1 R: Et3N, R: H2NOH-HCl, S: EtOH, rt → reflux; 24 h, reflux
Notes: Reactants: 1, Reagents: 2, Solvents: 1, Steps: 1, Stages: 1, Most stages in any one step: 1
Reference: Synthesis and Cannabinoid Activity of 1-Substituted-indole-3-oxadiazole Derivatives: Novel Agonists for the CB1 Receptor. By Moloney, Gerard P. et al. From European Journal of Medicinal Chemistry, 43(3), 513-539; 2008.

25. Single Step, 60%
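The entries above all report isolated yields (60-75%). When reproducing such procedures, percent yield is simply isolated mass over theoretical mass. The sketch below assumes a hypothetical 10.0 mmol scale and 1.02 g product mass (the procedures above do not state these amounts) and standard molar masses for benzonitrile and benzamidoxime; it is an illustration of the arithmetic, not data from the records.

```python
# Percent-yield arithmetic for a nitrile -> amidoxime conversion (1:1 stoichiometry).
# The 10.0 mmol scale and 1.02 g product mass are hypothetical illustration values,
# NOT amounts taken from the procedures above.

M_BENZONITRILE = 103.12    # g/mol, C6H5CN (standard molar mass)
M_BENZAMIDOXIME = 136.15   # g/mol, C6H5C(=N-OH)NH2 (standard molar mass)

def percent_yield(mass_product_g: float, mmol_limiting: float,
                  molar_mass_product: float) -> float:
    """Isolated mass divided by theoretical mass, assuming 1:1 stoichiometry."""
    theoretical_g = mmol_limiting / 1000.0 * molar_mass_product
    return 100.0 * mass_product_g / theoretical_g

# Hypothetical run: 10.0 mmol benzonitrile affording 1.02 g of benzamidoxime,
# a yield comparable to entry 21 above.
y = percent_yield(1.02, 10.0, M_BENZAMIDOXIME)
```
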

Graduation thesis: plastic injection molding


Modeling of morphology evolution in the injection molding process of thermoplastic polymers

R. Pantani, I. Coccorullo, V. Speranza, G. Titomanlio*
Department of Chemical and Food Engineering, University of Salerno, via Ponte don Melillo, I-84084 Fisciano (Salerno), Italy
Received 13 May 2005; received in revised form 30 August 2005; accepted 12 September 2005

Abstract

A thorough analysis of the effect of operative conditions of the injection molding process on the morphology distribution inside the obtained moldings is performed, with particular reference to semi-crystalline polymers. The paper is divided into two parts: in the first part, the state of the art on the subject is outlined and discussed; in the second part, an example of the characterization required for a satisfactory understanding and description of the phenomena is presented, starting from material characterization, passing through the monitoring of the process cycle and arriving at a deep analysis of morphology distribution inside the moldings. In particular, fully characterized injection molding tests are presented using an isotactic polypropylene, previously carefully characterized as far as most properties of interest are concerned. The effects of both injection flow rate and mold temperature are analyzed. The resulting moldings' morphology (in terms of distribution of crystallinity degree, molecular orientation and crystal structure and dimensions) is analyzed by adopting different experimental techniques (optical, electronic and atomic force microscopy, IR and WAXS analysis). Final morphological characteristics of the samples are compared with the predictions of a simulation code developed at the University of Salerno for the simulation of the injection molding process. © 2005 Elsevier Ltd. All rights reserved.

Keywords: Injection molding; Crystallization kinetics; Morphology; Modeling; Isotactic polypropylene

Contents
1. Introduction (1186)
1.1. Morphology distribution in injection molded iPP parts: state of the art (1189)
1.1.1. Modeling of the injection molding process
(1190)
1.1.2. Modeling of the crystallization kinetics (1190)
1.1.3. Modeling of the morphology evolution (1191)
1.1.4. Modeling of the effect of crystallinity on rheology (1192)
1.1.5. Modeling of the molecular orientation (1193)
1.1.6. Modeling of the flow-induced crystallization (1195)
1.2. Comments on the state of the art (1197)
2. Material and characterization (1198)
2.1. PVT description (1198)
2.2. Quiescent crystallization kinetics (1198)
2.3. Viscosity (1199)
2.4. Viscoelastic behavior (1200)
3. Injection molding tests and analysis of the moldings (1200)
3.1. Injection molding tests and sample preparation (1200)
3.2. Microscopy (1202)
3.2.1. Optical microscopy (1202)
3.2.2. SEM and AFM analysis (1202)
3.3. Distribution of crystallinity (1202)
3.3.1. IR analysis (1202)
3.3.2. X-ray analysis (1203)
3.4. Distribution of molecular orientation (1203)
4. Analysis of experimental results (1203)
4.1. Injection molding tests (1203)
4.2. Morphology distribution along thickness direction (1204)
4.2.1. Optical microscopy (1204)
4.2.2. SEM and AFM analysis (1204)
4.3. Morphology distribution along flow direction (1208)
4.4. Distribution of crystallinity (1210)
4.4.1. Distribution of crystallinity along thickness direction (1210)
4.4.2. Crystallinity distribution along flow direction (1212)
4.5. Distribution of molecular orientation (1212)
4.5.1. Orientation along thickness direction (1212)
4.5.2. Orientation along flow direction (1213)
4.5.3. Direction of orientation (1214)
5. Simulation (1214)
5.1. Pressure curves (1215)
5.2. Morphology distribution (1215)
5.3. Molecular orientation (1216)
5.3.1. Molecular orientation distribution along thickness direction (1216)
5.3.2. Molecular orientation distribution along flow direction (1216)
5.3.3. Direction of orientation (1217)
5.4. Crystallinity distribution (1217)
6. Conclusions (1217)
References (1219)

*Corresponding author. Tel.: +39 089 964152; fax: +39 089 964057. E-mail address: gtitomanlio@unisa.it (G. Titomanlio).

1. Introduction

Injection molding is one of the most widely employed methods for manufacturing polymeric
products. Three main steps are recognized in the molding: filling, packing/holding, and cooling. During the filling stage, a hot polymer melt rapidly fills a cold mold reproducing a cavity of the desired product shape. During the packing/holding stage, the pressure is raised and extra material is forced into the mold to compensate for the effects that both temperature decrease and crystallinity development determine on density during solidification. The cooling stage starts at the solidification of a thin section at the cavity entrance (gate); from that instant, no more material can enter or exit from the mold impression and the holding pressure can be released. When the solid layer on the mold surface reaches a thickness sufficient to assure the required rigidity, the product is ejected from the mold.

Due to the thermomechanical history experienced by the polymer during processing, macromolecules in injection-molded objects present a local order. This order is referred to as 'morphology', which literally means 'the study of the form', where form stands for the shape and arrangement of parts of the object. When referred to polymers, the word morphology is adopted to indicate:

– crystallinity, which is the relative volume occupied by each of the crystalline phases, including mesophases;
– dimensions, shape, distribution and orientation of the crystallites;
– orientation of the amorphous phase.

[R. Pantani et al. / Prog. Polym. Sci. 30 (2005) 1185–1222]

Apart from the scientific interest in understanding the mechanisms leading to different order levels inside a polymer, the great technological importance of morphology relies on the fact that polymer characteristics (above all mechanical, but also optical, electrical, transport and chemical) are to a great extent affected by morphology. For instance, crystallinity has a pronounced effect on the mechanical properties of the bulk material, since crystals are generally stiffer than amorphous material, and also orientation
induces anisotropy and other changes in mechanical properties.

In this work, a thorough analysis of the effect of injection molding operative conditions on morphology distribution in moldings, with particular reference to crystalline materials, is performed. The aim of the paper is twofold: first, to outline the state of the art on the subject; second, to present an example of the characterization required for a satisfactory understanding and description of the phenomena, starting from material description, passing through the monitoring of the process cycle and arriving at a deep analysis of morphology distribution inside the moldings. To these purposes, fully characterized injection molding tests were performed using an isotactic polypropylene, previously carefully characterized as far as most properties of interest; in particular, quiescent nucleation density, spherulitic growth rate and rheological properties (viscosity and relaxation time) were determined. The resulting moldings' morphology (in terms of distribution of crystallinity degree, molecular orientation and crystal structure and dimensions) was analyzed by adopting different experimental techniques (optical, electronic and atomic force microscopy, IR and WAXS analysis). Final morphological characteristics of the samples were compared with the predictions of a simulation code developed at the University of Salerno for the simulation of the injection molding process. The effects of both injection flow rate and mold temperature were analyzed.

1.1. Morphology distribution in injection molded iPP parts: state of the art

From many experimental observations, it is shown that a highly oriented lamellar crystallite microstructure, usually referred to as 'skin layer', forms close to the surface of injection molded articles of semi-crystalline polymers. Far from the wall, the melt is allowed to crystallize three-dimensionally to form spherulitic structures. Relative dimensions and morphology of both skin and core layers are dependent on the local thermo-mechanical history, which is characterized on the surface by high stress levels, decreasing to very small values toward the core region. As a result, the skin and the core reveal distinct characteristics across the thickness and also along the flow path [1].

Structural and morphological characterization of injection molded polypropylene has attracted the interest of researchers in the past three decades. In the early seventies, Kantz et al. [2] studied the morphology of injection molded iPP tensile bars by using optical microscopy and X-ray diffraction. The microscopic results revealed the presence of three distinct crystalline zones on the cross-section: a highly oriented non-spherulitic skin; a shear zone with molecular chains oriented essentially parallel to the injection direction; and a spherulitic core with essentially no preferred orientation. The X-ray diffraction studies indicated that the skin layer contains biaxially oriented crystallites due to the biaxial extensional flow at the flow front. A similar multilayered morphology was also reported by Menges et al. [3]. Later on, Fujiyama et al. [4] investigated the skin–core morphology of injection molded iPP samples using X-ray Small and Wide Angle Scattering techniques, and suggested that the shear region contains shish–kebab structures. The same shish–kebab structure was observed by Wenig and Herzog in the shear region of their molded samples [5]. A similar investigation was conducted by Titomanlio and co-workers [6], who analyzed the morphology distribution in injection moldings of iPP.
both skin and core layers are dependent on local thermo-mechanical history,which is characterized on the surface by high stress levels,decreasing to very small values toward the core region.As a result,the skin and the core reveal distinct characteristics across the thickness and also along theflow path[1].Structural and morphological characterization of the injection molded polypropylene has attracted the interest of researchers in the past three decades.In the early seventies,Kantz et al.[2]studied the morphology of injection molded iPP tensile bars by using optical microscopy and X-ray diffraction.The microscopic results revealed the presence of three distinct crystalline zones on the cross-section:a highly oriented non-spherulitic skin;a shear zone with molecular chains oriented essentially parallel to the injection direction;a spherulitic core with essentially no preferred orientation.The X-ray diffraction studies indicated that the skin layer contains biaxially oriented crystallites due to the biaxial extensionalflow at theflow front.A similar multilayered morphology was also reported by Menges et al.[3].Later on,Fujiyama et al.[4] investigated the skin–core morphology of injection molded iPP samples using X-ray Small and Wide Angle Scattering techniques,and suggested that the shear region contains shish–kebab structures.The same shish–kebab structure was observed by Wenig and Herzog in the shear region of their molded samples[5].A similar investigation was conducted by Titomanlio and co-workers[6],who analyzed the morphology distribution in injection moldings of iPP. 
They observed a skin–core morphology distribution with an isotropic spherulitic core,a skin layer characterized by afine crystalline structure and an intermediate layer appearing as a dark band in crossed polarized light,this layer being characterized by high crystallinity.Kalay and Bevis[7]pointed out that,although iPP crystallizes essentially in the a-form,a small amount of b-form can be found in the skin layer and in the shear region.The amount of b-form was found to increase by effect of high shear rates[8].A wide analysis on the effect of processing conditions on the morphology of injection molded iPP was conducted by Viana et al.[9]and,more recently, by Mendoza et al.[10].In particular,Mendoza et al. report that the highest level of crystallinity orientation is found inside the shear zone and that a high level of orientation was also found in the skin layer,with an orientation angle tilted toward the core.It is rather difficult to theoretically establish the relationship between the observed microstructure and processing conditions.Indeed,a model of the injection molding process able to predict morphology distribution in thefinal samples is not yet available,even if it would be of enormous strategic importance.This is mainly because a complete understanding of crystallization kinetics in processing conditions(high cooling rates and pressures,strong and complexflowfields)has not yet been reached.In this section,the most relevant aspects for process modeling and morphology development are identified. 
In particular,a successful path leading to a reliable description of morphology evolution during polymer processing should necessarily pass through:–a good description of morphology evolution under quiescent conditions(accounting all competing crystallization processes),including the range of cooling rates characteristic of processing operations (from1to10008C/s);R.Pantani et al./Prog.Polym.Sci.30(2005)1185–12221189–a description capturing the main features of melt morphology(orientation and stretch)evolution under processing conditions;–a good coupling of the two(quiescent crystallization and orientation)in order to capture the effect of crystallinity on viscosity and the effect offlow on crystallization kinetics.The points listed above outline the strategy to be followed in order to achieve the basic understanding for a satisfactory description of morphology evolution during all polymer processing operations.In the following,the state of art for each of those points will be analyzed in a dedicated section.1.1.1.Modeling of the injection molding processThefirst step in the prediction of the morphology distribution within injection moldings is obviously the thermo-mechanical simulation of the process.Much of the efforts in the past were focused on the prediction of pressure and temperature evolution during the process and on the prediction of the melt front advancement [11–15].The simulation of injection molding involves the simultaneous solution of the mass,energy and momentum balance equations.Thefluid is non-New-tonian(and viscoelastic)with all parameters dependent upon temperature,pressure,crystallinity,which are all function of pressibility cannot be neglected as theflow during the packing/holding step is determined by density changes due to temperature, pressure and crystallinity evolution.Indeed,apart from some attempts to introduce a full 3D approach[16–19],the analysis is currently still often restricted to the Hele–Shaw(or thinfilm) approximation,which is 
warranted by the fact that most injection molded parts have the characteristic of being thin. Furthermore, it is recognized that the viscoelastic behavior of the polymer only marginally influences the flow kinematics [20–22]; thus the melt is normally considered as a non-Newtonian viscous fluid for the description of pressure and velocity gradient evolution. Some examples of adopting a viscoelastic constitutive equation in the momentum balance equations are found in the literature [23], but the improvements in accuracy do not justify a considerable extension of computational effort.

It has to be mentioned that the analysis of some features of kinematics and temperature gradients affecting the description of morphology needs a more accurate description with respect to the analysis of pressure distributions. Some aspects of the process which were often neglected and may have a critical importance are the description of the heat transfer at the polymer–mold interface [24–26] and of the effect of mold deformation [24,27,28]. Another aspect of particular interest to the development of morphology is the fountain flow [29–32], which is often neglected, being restricted to a rather small region at the flow front and close to the mold walls.

1.1.2. Modeling of the crystallization kinetics

It is obvious that the description of crystallization kinetics is necessary if the final morphology of the molded object is to be described. Also, the development of a crystalline degree during the process influences the evolution of all material properties, like density and, above all, viscosity (see below). Furthermore, crystallization kinetics enters explicitly in the generation term of the energy balance, through the latent heat of crystallization [26,33]. It is therefore clear that the crystallinity degree is not only a result of simulation but also (and above all) a phenomenon to be kept into account in each step of process modeling.

In spite of its dramatic influence on the process, the efforts to simulate the injection molding of semi-crystalline polymers are crude in most of the commercial software for processing simulation and rather scarce in the literature. Lafleur and Kamal [34], Papatanasiu [35], Titomanlio et al. [15], Han and Wang [36], Ito et al. [37], Manzione [38], Guo and Isayev [26], and Hieber [25] adopted the following equation (Kolmogoroff–Avrami–Evans, KAE) to predict the development of crystallinity:

\frac{dx}{dt} = (1 - x)\,\frac{d\delta_c}{dt}   (1)

where x is the relative degree of crystallization and \delta_c is the undisturbed volume fraction of the crystals (if no impingement would occur).

A significant improvement in the prediction of crystallinity development was introduced by Titomanlio and co-workers [39], who kept into account the possibility of the formation of different crystalline phases. This was done by assuming a parallel of several non-interacting kinetic processes competing for the available amorphous volume. The evolution of each phase can thus be described by

\frac{dx_i}{dt} = (1 - x)\,\frac{d\delta_{c,i}}{dt}   (2)

where the subscript i stands for a particular phase, x_i is the relative degree of crystallization, x = \sum_i x_i, and \delta_{c,i}
molding of semi-crystalline polymers are crude in most of the commercial software for processing simulation and rather scarce in the fleur and Kamal[34],Papatanasiu[35], Titomanlio et al.[15],Han and Wang[36],Ito et al.[37],Manzione[38],Guo and Isayev[26],and Hieber [25]adopted the following equation(Kolmogoroff–Avrami–Evans,KAE)to predict the development of crystallinityd xd tZð1K xÞd d cd t(1)where x is the relative degree of crystallization;d c is the undisturbed volume fraction of the crystals(if no impingement would occur).A significant improvement in the prediction of crystallinity development was introduced by Titoman-lio and co-workers[39]who kept into account the possibility of the formation of different crystalline phases.This was done by assuming a parallel of several non-interacting kinetic processes competing for the available amorphous volume.The evolution of each phase can thus be described byd x id tZð1K xÞd d c id t(2)where the subscript i stands for a particular phase,x i is the relative degree of crystallization,x ZPix i and d c iR.Pantani et al./Prog.Polym.Sci.30(2005)1185–1222 1190is the expectancy of volume fraction of each phase if no impingement would occur.Eq.(2)assumes that,for each phase,the probability of the fraction increase of a single crystalline phase is simply the product of the rate of growth of the corresponding undisturbed volume fraction and of the amount of available amorphous fraction.By summing up the phase evolution equations of all phases(Eq.(2))over the index i,and solving the resulting differential equation,one simply obtainsxðtÞZ1K exp½K d cðtÞ (3)where d c Z Pid c i and Eq.(1)is recovered.It was shown by Coccorullo et al.[40]with reference to an iPP,that the description of the kinetic competition between phases is crucial to a reliable prediction of solidified structures:indeed,it is not possible to describe iPP crystallization kinetics in the range of cooling rates of interest for processing(i.e.up to several hundreds 
of8C/s)if the mesomorphic phase is neglected:in the cooling rate range10–1008C/s, spherulite crystals in the a-phase are overcome by the formation of the mesophase.Furthermore,it has been found that in some conditions(mainly at pressures higher than100MPa,and low cooling rates),the g-phase can also form[41].In spite of this,the presence of different crystalline phases is usually neglected in the literature,essentially because the range of cooling rates investigated for characterization falls in the DSC range (well lower than typical cooling rates of interest for the process)and only one crystalline phase is formed for iPP at low cooling rates.It has to be noticed that for iPP,which presents a T g well lower than ambient temperature,high values of crystallinity degree are always found in solids which passed through ambient temperature,and the cooling rate can only determine which crystalline phase forms, roughly a-phase at low cooling rates(below about 508C/s)and mesomorphic phase at higher cooling rates.The most widespread approach to the description of kinetic constant is the isokinetic approach introduced by Nakamura et al.According to this model,d c in Eq.(1)is calculated asd cðtÞZ ln2ðt0KðTðsÞÞd s2 435n(4)where K is the kinetic constant and n is the so-called Avrami index.When introduced as in Eq.(4),the reciprocal of the kinetic constant is a characteristic time for crystallization,namely the crystallization half-time, t05.If a polymer is cooled through the crystallization temperature,crystallization takes place at the tempera-ture at which crystallization half-time is of the order of characteristic cooling time t q defined ast q Z D T=q(5) where q is the cooling rate and D T is a temperature interval over which the crystallization kinetic constant changes of at least one order of magnitude.The temperature dependence of the kinetic constant is modeled using some analytical function which,in the simplest approach,is described by a Gaussian shaped curve:KðTÞZ 
K0exp K4ln2ðT K T maxÞ2D2(6)The following Hoffman–Lauritzen expression[42] is also commonly adopted:K½TðtÞ Z K0exp KUÃR$ðTðtÞK T NÞ!exp KKÃ$ðTðtÞC T mÞ2TðtÞ2$ðT m K TðtÞÞð7ÞBoth equations describe a bell shaped curve with a maximum which for Eq.(6)is located at T Z T max and for Eq.(7)lies at a temperature between T m(the melting temperature)and T N(which is classically assumed to be 308C below the glass transition temperature).Accord-ing to Eq.(7),the kinetic constant is exactly zero at T Z T m and at T Z T N,whereas Eq.(6)describes a reduction of several orders of magnitude when the temperature departs from T max of a value higher than2D.It is worth mentioning that only three parameters are needed for Eq.(6),whereas Eq.(7)needs the definition offive parameters.Some authors[43,44]couple the above equations with the so-called‘induction time’,which can be defined as the time the crystallization process starts, when the temperature is below the equilibrium melting temperature.It is normally described as[45]Dt indDtZðT0m K TÞat m(8)where t m,T0m and a are material constants.It should be mentioned that it has been found[46,47]that there is no need to explicitly incorporate an induction time when the modeling is based upon the KAE equation(Eq.(1)).1.1.3.Modeling of the morphology evolutionDespite of the fact that the approaches based on Eq.(4)do represent a significant step toward the descriptionR.Pantani et al./Prog.Polym.Sci.30(2005)1185–12221191of morphology,it has often been pointed out in the literature that the isokinetic approach on which Nakamura’s equation (Eq.(4))is based does not describe details of structure formation [48].For instance,the well-known experience that,with many polymers,the number of spherulites in the final solid sample increases strongly with increasing cooling rate,is indeed not taken into account by this approach.Furthermore,Eq.(4)describes an increase of crystal-linity (at constant temperature)depending only on the current value of 
crystallinity degree itself, whereas it is expected that the crystallization rate should depend also on the number of crystalline entities present in the material. These limits are overcome by considering the crystallization phenomenon as the consequence of nucleation and growth. Kolmogoroff's model [49], which describes crystallinity evolution accounting for the number of nuclei per unit volume and the spherulitic growth rate, can then be applied. In this case, \delta_c in Eq. (1) is described as

\delta(t) = C_m \int_0^t \frac{dN(s)}{ds} \left[\int_s^t G(u)\,du\right]^n ds   (9)

where C_m is a shape factor (C_3 = 4/3\,\pi, for spherical growth), G(T(t)) is the linear growth rate, and N(T(t)) is the nucleation density. The following Hoffman–Lauritzen expression is normally adopted for the growth rate:

G[T(t)] = G_0 \exp\left[-\frac{U}{R\,(T(t) - T_\infty)}\right] \exp\left[-\frac{K_g\,(T(t) + T_m)}{2\,T(t)^2\,(T_m - T(t))}\right]   (10)

Eqs. (7) and (10) have the same form; however, the values of the constants are different.

The nucleation mechanism can be either homogeneous or heterogeneous. In the case of heterogeneous nucleation, two equations are reported in the literature, both describing the nucleation density as a function of temperature [37,50]:

N(T(t)) = N_0 \exp[\varphi\,(T_m - T(t))]   (11)

N(T(t)) = N_0 \exp\left[-\frac{3\,T_m}{T(t)\,(T_m - T(t))}\right]   (12)

In the case of homogeneous nucleation, the nucleation rate rather than the nucleation density is a function of temperature, and a Hoffman–Lauritzen expression is adopted:

\frac{dN(T(t))}{dt} = N_0 \exp\left[-\frac{C_1}{T(t) - T_\infty}\right] \exp\left[-\frac{C_2\,(T(t) + T_m)}{T(t)\,(T_m - T(t))}\right]   (13)

The concentration of nucleating particles is usually quite significant in commercial polymers, and thus heterogeneous nucleation becomes the dominant mechanism. When Kolmogoroff's approach is followed, the number N_a of active nuclei at the end of the crystallization process can be calculated as [48]

N_{a,final} = \int_0^{t_{final}} \frac{dN[T(s)]}{ds}\,(1 - x(s))\,ds   (14)

and the average dimension of crystalline structures can be attained by geometrical considerations. Pantani et al. [51] and Zuidema et
al. [22] exploited this method to describe the distribution of crystallinity and the final average radius of the spherulites in injection moldings of polypropylene; in particular, they adopted the following equation:

R = \sqrt[3]{\frac{3\,x_{a,final}}{4\,\pi\,N_{a,final}}}   (15)

A different approach is also present in the literature, somehow halfway between Nakamura's and Kolmogoroff's models: the growth rate (G) and the kinetic constant (K) are described independently, and the number of active nuclei (and consequently the average dimensions of crystalline entities) can be obtained by coupling Eqs. (4) and (9) as

N_a(T) = \frac{3 \ln 2}{4\,\pi} \left[\frac{K(T)}{G(T)}\right]^3   (16)

where heterogeneous nucleation and spherical growth are assumed (Avrami's index = 3). Guo et al. [43] adopted this approach to describe the dimensions of spherulites in injection moldings of polypropylene.

1.1.4. Modeling of the effect of crystallinity on rheology

As mentioned above, crystallization has a dramatic influence on material viscosity. This phenomenon must obviously be taken into account and, indeed, the solidification of a semi-crystalline material is essentially caused by crystallization rather than by temperature in normal processing conditions. Despite the importance of the subject, the relevant literature on the effect of crystallinity on viscosity is
Winter and co-workers [54],Vleeshouwers and Meijer[55](crystallinity evolution can be drawn from Swartjes[56]),Boutahar et al.[57],Titomanlio et al.[15],Han and Wang[36], Floudas et al.[58],Wassner and Maier[59],Pantani et al.[60],Pogodina et al.[61],Acierno and Grizzuti[62].All the authors essentially agree that melt viscosity experiences an abrupt increase when crystallinity degree reaches a certain‘critical’value,x c[15]. However,little agreement is found in the literature on the value of this critical crystallinity degree:assuming that x c is reached when the viscosity increases of one order of magnitude with respect to the molten state,it is found in the literature that,for iPP,x c ranges from a value of a few percent[15,62,60,58]up to values of20–30%[58,61]or even higher than40%[59,54,57].Some studies are also reported on the secondary effects of relevant variables such as temperature or shear rate(or frequency)on the dependence of crystallinity on viscosity.As for the effect of temperature,Titomanlio[15]found for an iPP that the increase of viscosity for the same crystallinity degree was higher at lower temperatures,whereas Winter[63] reports the opposite trend for a thermoplastic elasto-meric polypropylene.As for the effect of shear rate,a general agreement is found in the literature that the increase of viscosity for the same crystallinity degree is lower at higher deformation rates[62,61,57].Essentially,the equations adopted to describe the effect of crystallinity on viscosity of polymers can be grouped into two main categories:–equations based on suspensions theories(for a review,see[64]or[65]);–empirical equations.Some of the equations adopted in the literature with regard to polymer processing are summarized in Table1.Apart from Eq.(17)adopted by Katayama and Yoon [66],all equations predict a sharp increase of viscosity on increasing crystallinity,sometimes reaching infinite (Eqs.(18)and(21)).All authors consider that the relevant variable is the 
volume occupied by crystalline entities (i.e. x), even if the dimensions of the crystals should reasonably have an effect.

1.1.5. Modeling of the molecular orientation

One of the most challenging problems of present-day polymer science regards the reliable prediction of molecular orientation during transformation processes. Indeed, although pressure and velocity distribution during injection molding can be satisfactorily described by viscous models, details of the viscoelastic nature of the polymer need to be accounted for in the description

Table 1. List of the most used equations to describe the effect of crystallinity on viscosity (Equation — Author — Derivation — Parameters)

Eq. (17): \eta/\eta_0 = 1 + a_0 x — Katayama [66] — suspensions — a_0 = 99
Eq. (18): \eta/\eta_0 = 1/(x - x_c)^{a_0} — Ziabicki [67] — empirical — x_c = 0.1
Eq. (19): \eta/\eta_0 = 1 + a_1 \exp(-a_2/x^{a_3}) — Titomanlio [15], also adopted by Guo [68] and Hieber [25] — empirical
Eq. (20): \eta/\eta_0 = \exp(a_1 x^{a_2}) — Shimizu [69], also adopted by Zuidema [22] and Hieber [25] — empirical
Eq. (21): \eta/\eta_0 = 1 + (x/a_1)^{a_2}/(1 - (x/a_1)^{a_2}) — Tanner [70] — empirical, based on suspensions — a_1 = 0.44 for compact crystallites, a_1 = 0.68 for spherical crystallites
Eq. (22): \eta/\eta_0 = \exp(a_1 x + a_2 x^2) — Han [36] — empirical
Eq. (23): \eta/\eta_0 = 1 + a_1 x + a_2 x^2 — Tanner [71] — empirical — a_1 = 0.54, a_2 = 4, x < 0.4
Eq. (24): \eta/\eta_0 = (1 - x/a_0)^{-2} — Metzner [65], also adopted by Tanner [70] — suspensions — a_0 = 0.68 for smooth spheres

[R. Pantani et al. / Prog. Polym. Sci. 30 (2005) 1185–1222]
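To make the chain from Section 1.1.2 and Table 1 concrete, the sketch below integrates Nakamura's isokinetic model (Eqs. (1) and (4)) with the Gaussian kinetic constant of Eq. (6) for a constant cooling rate, then feeds the resulting crystallinity into a Shimizu-type viscosity closure (Eq. (20)). All parameter values (K0, T_MAX, D, a1, a2) are illustrative placeholders, not the fitted iPP constants of the paper.

```python
import math

# Illustrative parameters only -- NOT the fitted iPP constants used in the paper.
K0 = 0.05      # s^-1, peak kinetic constant in Eq. (6)
T_MAX = 340.0  # K, temperature of the maximum of K(T)
D = 30.0       # K, width parameter of the Gaussian
N_AVRAMI = 3   # Avrami index n

def K(T: float) -> float:
    """Gaussian kinetic constant, Eq. (6)."""
    return K0 * math.exp(-4.0 * math.log(2.0) * (T - T_MAX) ** 2 / D ** 2)

def crystallinity(q: float, T0: float = 480.0, T_end: float = 250.0,
                  dT: float = 0.01) -> float:
    """Final relative crystallinity for a constant cooling rate q (K/s).

    For T(t) = T0 - q*t, the time integral in Eq. (4) becomes a temperature
    integral: int K(T(s)) ds = (1/q) * int K(T) dT.  Eq. (4) then gives
    delta_c, and Eq. (3) (the integrated form of Eq. (1)) gives x.
    """
    steps = int((T0 - T_end) / dT)
    integral = sum(K(T0 - i * dT) for i in range(steps)) * dT / q
    delta_c = math.log(2.0) * integral ** N_AVRAMI
    return 1.0 - math.exp(-delta_c)

x_slow = crystallinity(q=0.1)    # slow cooling: crystallization completes
x_fast = crystallinity(q=100.0)  # fast cooling: crystallization is suppressed

# Shimizu-type closure, Eq. (20): viscosity rises sharply with crystallinity.
a1, a2 = 10.0, 1.0
eta_ratio_slow = math.exp(a1 * x_slow ** a2)
```

The qualitative trend reproduced here (less crystallinity at higher cooling rates, and a steep viscosity rise once x grows) is the behavior the review attributes to Eqs. (4), (6) and (20); quantitative predictions would require fitted parameters and the multi-phase treatment of Eq. (2).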

How do producer services affect the location of manufacturing firms? The role of information accessibility


1 Introduction

A perfectly remarkable fact in any country in the world is the existence of a high level of concentration of economic activity [see, for example, Ellison and Glaeser (1997) in the American case, Maurel and Sédillot (1999) in the French case, or Brülhart (2000) in the case of Europe]. In general, companies and individuals are not distributed uniformly in space, but rather agglomerate in some places with higher intensity than in others. In recent years a great number of works, involved in what has been called 'the new economic geography', have been focused on explaining agglomeration.(1) Several of these articles analyze to what extent the processes of economic integration may affect industrial location.(2) So, for example, in his seminal work, Krugman (1991) shows that, in a world of increasing returns to scale, taste for variety in consumption, transport costs, and workforce mobility, regional asymmetries can be intensified to the extent to which the regions are integrated, that is, if the costs of transport between them are reduced. One region (the core) will tend to absorb all manufacturing industry, to the detriment of the other region (the periphery), to the extent that low transport costs permit the companies to provide all the demand from one single location. Venables (1996) adds new elements to the previous model to show that vertical linkages between upstream and downstream industries can play an equivalent role to Krugman's (1991) labor mobility when explaining agglomeration. In this new context, Venables suggests that reductions in trade costs tend, initially, to provoke a major concentration of industry, and later a major dispersion, to the extent to which the periphery becomes attractive to companies in their search for reductions in salary costs. Therefore, the processes of economic integration would tend to cause, eventually,

How do producer services affect the location of manufacturing firms? The role of information accessibility
Olga Alonso-Villar, José-María Chamorro-Rivas
Departamento de Econom|a Aplicada,Universidade de Vigo,Lagoas-Marcosende,s/n,36200Vigo,Spain;e-mail:ovillar@uvigo.es ,chamorro@uvigo.es Received 7February 2001;in revised form 6June 2001Environment and Planning A 2001,volume 33,pages 1621^1642Abstract.The aim of this paper is to analyze the location decisions both of producer services and of manufacturing,exploring two main topics:how manufacturing and services location change when regions integrate,and how developments in telecommunications affect them both.We built a theoreti-cal model consisting of two regions and three sectors (manufacturing,producer services,and agriculture).Given that producer services constitute a sector oriented towards information and knowledge,this is a factor that is taken into account in the explanation of their location.Numerical simulations are made in a general equilibrium framework.In contrast to previous works,we find that when regions integrate,specialization (producer services being in the core regions,and manufacturing in the periphery)örather than convergence öarises.Besides,improvements in telecommunications may facilitate concentration of services in the core regions.DOI:10.1068/a3436(1)This literature combines insights of traditional regional science with the modeling of new trade theory.Important contributions are by Krugman (1980;1991),Krugman and Livas Elizondo (1996),Puga and Venables (1997),and Venables (1996),among others.A review of this literature can be seen in Schmutzler (1999).Also,Fujita et al (2000)offer a thorough analysis of the main contributions in the field.(2)Ottaviano and Puga (1998)offer a discussion on this particular topic.1622O Alonso-Villar,J-M Chamorro-Rivasdispersion of economic activity.Analogous results are also obtained by Puga(1999), combining both points of view:the input^output linkages and emigration.These works take into account factors such as proximity to demand,relations between companies which supply intermediate and final goods,and 
salary costs,factors which are of great utility in understanding the processes of agglomeration.Some important shortcomings remain,however,and we will try to address them in this paper.Without in any way belittling the importance of the role that manufacturing can play in the development of a region,we should also take note of services,and more particularly producer services.Producer services are a sector to which increasingly is attributed greater activity as a catalyst of regional growth(Coffey and Pole se,1989; Hansen,1990;Michalak and Fairbairn,1993;Stabler and Howe,1988).However,a study of the location of the sector has received little attention from the theoreticians. So,for example,Puga(1999)and Venables(1996)focus solely on analyzing the location of manufacturing,whereas the companies which supply intermediate goods are only a necessary element in explaining the high degree of industrial concentration,and there is no special characteristic in the intermediate goods sector that makes it a catalyst of regional growth.(3)Also,in the above works the location decisions of the intermediate and final products sectors necessarily go together;that is,either both sectors decide to locate in one region or both do so in another.However,both sectors are not necessarily affected by the same location factors.Indeed,to a greater and greater extent,the more routine jobs within companies are moved to low-pay regions,while knowledge and information intensive activities,which form a good part of producer services, occupy more central places(Warf,1995).What is more,the centralization that this type of service experiences is notorious.As Coffey and Pole se(1989,page16)point out: in the Canadian case,``over the period1971to1981,approximately80percent of employment growth in producer services occurred in metropolitan areas''.They suggest that this centralization,also corroborated in countries such as Great Britain,France, and the United States,as may be gathered from that work,may 
be a result of the existence of knowledge spillovers derived from the pool of highly skilled labor.In this vein,the aim of this paper is to analyze the location decisions of both producer services and manufacturing,exploring two main topics:how manufac-turing and service location change when regions integrate,and how developments in telecommunications affect them both.Given that producer services constitute a sector oriented towards information and knowledge(Tofflemire,1992;Warf,1995),this is a factor that should be taken into account in the explanation of their location.One of the goals of this paper is to explore whether it is possible for services to agglomerate in some regions,while manufacturing remains in others.In contrast to Rodr|guez-Clare(1996)and Rivera-Batiz(1988), we focus on producer services that are traded between locations,because producer services have emerged as one of the fastest growing components of both regional and international trade(Coffey and Pole se,1989;Stabler and Howe,1988).With respect to the first topic,we suggest that,when trade costs between regions are low,producer services tend to concentrate in regions with advantages in access to information or with high human capital levels,whereas producers locate in regions with low wages.As in Venables(1996),this model also shows that integration between (3)Recently,de Vaal and van den Berg(1999)have also found that input^output linkages promote concentration of economic activity,using a model a la Krugman(1991)with labor mobility.In a different framework,Rodr|¨guez-Clare(1996)explores the effects of multinationals on underdevel-oped regions through the generation of linkages between upstream and downstream industries. 
However,in this paper we do not analyze the effects of regional integration.Firm location and information accessibility1623regions initially fosters agglomeration of firms.However,in contrast to Venables(1996), we find that,when regions integrate further,specialization(producer services being in core regions,and manufacturers on the periphery)örather than convergenceöarises.The second topic of this paper is whether developments in telecommunications will lead to a spaceless world.As Gaspar and Glaeser(1998)suggest,this does not necessarily have to be the case.Telecommunications make it possible to communicate with more people,which may bring about more face-to-face contacts as ideas become more complex and harder to communicate.They show that an improvement in tele-communications will make cities even more attractive if urban residents use more telecommunications than rural residents do.Also,as Warf(1995)suggests,the emer-gence of a global economy based on producer services and telecommunications systems has encouraged the growth of world cities such as London,New Y ork,and Tokyo,which are centers of information-intensive activities.All of them are cities in which data-intensive financial and business services play an important economic role,which have caused teleports to be built in these areas.These teleports have further fostered their local advantages.In this respect,we explicitly model how changes in telecommunications technology will affect the location of services.The analysis of this paper has,therefore,some policy implications.Will improvements in telecommunications bring convergence or divergence between regions?On what forces does this depend?If economic integration reduces trade costs to intermediate levels,can consequent divergence suggested by different works(for example,Puga,1999;Venables,1996)be compensated by telecommunica-tions development?We show that,when trade costs are intermediate,improvements in telecommunications may facilitate 
concentration of services in the core regions.The paper is organized as follows.In section2we discuss the basic characteristics of the approach we use.In section3we present the assumptions of the model,which is built on Venables(1996),differentiating between three different sectors:agriculture, manufacturing,and producer services.In section4we solve the model,and in section5 we present the principal results,using numerical simulations in a general equilibrium framework.Section6contains our conclusions.2Agglomeration with the presence of a knowledge-intensive sectorAt the beginning of the1990s a new line of research began that,picking up some ideas from previous economic geographers,(4)tackles the question of agglomeration of eco-nomic activity with a more rigorous and formal approach.This literature reconsiders the point of view of the Myrdal(1957)theory of circular and cumulative causation,that emphasizes the positive feedback of the processes of economic growth.According to Myrdal,if in a particular place a critical development threshold is exceeded,whatever the cause that originated it,a still greater concentration process may be unleashed in the economic activity.To the extent to which a good number of companies are already installed in a particular location,this may prove attractive for new investment.This will generate a greater level of profit and,therefore,of consumption,which at the same time will permit greater creation of companies.These two elements set in motion a circular and cumulative phenomenon of concentration of firms,which eventually will generate grave interterritorial inequalities.Reworking these ideas,Krugman(1991)proposes a methodology that permits an explanation of why some regions concentrate the major part of their economic activity without assuming a priori differences in resources or technologies(causes traditionally advanced by the theory of comparative advantages).So,Krugman considers a theoreti-cal scheme in which the economy is 
composed of two regions and two sectors: (4)Christaller(1933),Lo sch(1940),and Pred(1966),among others.1624O Alonso-Villar,J-M Chamorro-Rivasa manufacturing sector,in which the companies compete in a regime of monopolistic competition,and an agricultural sector,which is assumed to be perfectly competitive.In this model,agglomeration arises from the interaction of several elements:the existence of increasing returns to scale at firm level,the demand for variety in con-sumption,interregional transport costs,and labor mobility in the industrial sector.The economies of scale mean that each company concentrates its production in a single location.Besides,for a nominal fixed salary,and given the preferences for variety in consumption,the real income of the consumers grows in the more industrialized regions,which offer greater access to manufactured goods without incurring transport costs.All this prompts more workers to migrate to the same location(forward linkage). Also,this increase in the number of consumers creates a major demand for goods, which makes it feasible to sustain a greater number of firms(backward linkage).As a consequence,these forward and backward linkages provoke increasing returns,no longer on a firm level,but on a regional level.Krugman's model assumes that workers in the industrial sector have incentives to move between different regions so long as there exists between them a significant salary difference.This assumption is quite reasonable and realistic if it is proposed to study the agglomeration phenomenon in the North American context.However,this assumption is found to be inadequate in the case of Europe,where interregional mobility is lower in spite of regional salary differences being notorious.Ottaviano and Puga(1998,page713)point out that``only1.5%of the European Union citizens live in a country different from that in which they were born''.Venables(1996)proposes the introduction of new elements into Krugman's meth-odology,more adjusted to 
the European reality or,in general,adequate to analyze the differences in the level of industrialization among countries or regions,between which the mobility of the labor force is limited.For this he introduces into the previous methodology the existence of two industrial sectors connected between themselves by input^output linkages,and eliminates the mobility of the workforce.In this new context the salary differences between regions will not be reduced by emigration,given that the population cannot change their place of residence,but,as a counterbalance, companies themselves can direct them to those less industrialized locations in which salaries are lower.So,in the above model the salary differences would be acting as a force of dispersion,whereas the links between the different manufacturing companies,derived from their input^output relations,would come to counteract the previous phenom-enon.As shown by Venables(1996),when industries are vertically linked,the upstream industry is attracted to locations where there are many downstream firms(demand linkage).Also,firms in the downstream industry will reduce costs by locating where there are many upstream firms(cost linkage).The work mentioned above is focused on analyzing the manufacturing concentra-tion,whereas the existence of the intermediate goods sector is simply the way that permits the realization of the study.However,in a more and more information-usage-intensive world,manufacture has inevitably to depend on services such as logistics, technological transfer,marketing,finance,industrial engineering,etc,in which infor-mation and knowledge constitute especially important factors,given that a good part of them carry a certain degree of greater sophistication than those associated with the manufacturing process.Indeed,as Hansen(1994)comments,only10^15%of the value of an IBM computer comes from the manufacturing process,the rest coming from services such as research,design,engineering,maintenance,or sales.It 
is for this reason that we consider an economy with three sectors:agriculture, manufacturing,and producer services.The agricultural and manufacturing sectors areFirm location and information accessibility1625analogous with those already considered in Venables(1996).However,given that many of the producer services constitute an information-oriented sector,we assume that this sector is characterized by a need for greater interaction among the companies in the sector,given that on occasions the diffusion of information and/or knowledge entails a certain degree of difficulty.A good part of what we know is the result of our interaction with other people,via both formal and informal channels[Saxenian (1996)offers numerous examples].However,distance is an element that undoubtedly affects this flow of information and ideas,as empirical evidence shows(see Jaffe et al,1993;Rauch,1993;Simon,1998). But information may also be imported from other places at greater or lesser costs.One can send an e-mail to another person to consult over a particular question.However,it is not always easy to substitute for face-to-face communication,especially when the information involves a certain degree of difficulty.Besides,the functioning of the network is not always adequate,and access to particular places may be very costly in time.On the other hand,not all individuals are familiar with the technology necessary for the transmission of information via the network,given that it requires a certain degree of computer knowledge.In this sense,it is clear that access to information is not the same for all regions or countries,because of their differences in levels of human capital,in their background in the use of certain technologies,or in their own tele-communications infrastructures,that may facilitate or impede such access.As Warf (1995,page368)points out,``it is evident that the greatest Internet access remains in the most economically developed parts of the world,notably North 
America,Europe and Japan.''For the aforementioned reasons,we consider that companies in the service sector benefit from cooperation,from their exchange of experiences that permits them to update information about the sector(technology,clients,etc).Interrelations between companies will be conducive to transforming information in productive knowledge to the extent to which they facilitate the interchange of ideas and to which these are later put into practice.However,physical distance impedes the transmission and acquisition of this information,from which we suppose that companies that are spatially separated share less information than do others that are closer.Also,we assume that the information a company may obtain from companies in other regions depends not only on the telecommunications that exist between both regions,but also on the characteristics of the region in which the company is located.Thus we will suppose that asymmetries exist between the regions;that is,a region will initially have advantages in the use of information with respect to another region.The fact that producer services need to share information and,taking into account that geographical distances make this transmission more difficult,gives rise to the appearance of a new force of agglomeration in the model:services will want to locate in the same place.This new element will affect the intersectoral linkages,which eventually will lead to different location patterns from those suggested by Venables (1996).3The modelConsider a world consisting of two locations,labeled1and2,which are populated with L1and L2individuals,respectively.This economy has three sectors:agriculture,man-ufacturing,and producer services.The first is perfectly competitive and the other two are imperfectly competitive and vertically linked,because services are intermediate goods of manufacturing.These two industries produce differentiated varieties under increasing returns to scale and firms are assumed to compete in a 
monopolistic regime of the Dixit and Stiglitz (1977) type. Labor is used by the three sectors and is perfectly mobile between them. However, labor is not mobile across locations.(5) We denote by $w_j$ the wage rate in location $j$. Following Venables (1996), we make two assumptions with respect to trade costs between the two locations, which affect only goods. First, trade of agricultural output will be assumed to be costless. Second, we assume ad valorem trade costs for services ($s$) and manufactured goods ($i$), so $p_{rj} t_{jk}$ is the price paid for a unit of good $r$ ($r = s, i$) produced at location $j$ and sold at location $k$, and $p_{rj}$ is its free on board (fob) price. If $k = j$, $t_{jk} = 1$; otherwise $t_{jk} > 1$.(6)

3.1 Preferences
Consumers have Cobb-Douglas preferences over the agricultural good and a constant-elasticity-of-substitution aggregate of manufacturing goods,
$$U = z_M^{b} z_A^{1-b},$$
where $z_A$ is consumption of the agricultural good and $z_M$ is consumption of the manufactures aggregate, which is defined by
$$z_M = \left( \sum_i z_i^{(E-1)/E} \right)^{E/(E-1)},$$
where $E$ is the elasticity of substitution between any two varieties, $E > 1$.(7) Following Dixit and Stiglitz (1977), we define the price of this aggregate for individuals living at location $j$ as
$$P_j = \left[ \sum_{k=1}^{2} \sum_{i=1}^{n_k} (p_{ik} t_{jk})^{1-E} \right]^{1/(1-E)}, \qquad j = 1, 2,$$
where $p_{ik}$ is the fob price of variety $i$ produced at location $k$, and $t_{jk}$ refers to the trade cost between locations $j$ and $k$. The number of manufacturing varieties in location $k$ is endogenously determined and denoted by $n_k$. Each individual supplies one unit of labor inelastically, and owns an equal proportion of the agriculture profits, which will be obtained in what follows.

3.2 Agriculture
Agriculture is perfectly competitive, producing a costlessly tradeable good which we choose as the numeraire. This sector is described by a strictly concave technology $F(L_A) = L_A^{a}$, where labor is the only factor of production, $a < 1$.(8) This assumption allows for the possibility of different wages between locations. Otherwise (that is, with constant returns to scale), wages in both locations
would be equal.(9) The profit function in location $j$ is then given by
$$\Pi_{Aj} = F(L_{Aj}) - w_j L_{Aj},$$

(5) This assumption implicitly corresponds to regions or countries between which the mobility of the labor force is limited because of their mutual remoteness or because of political restrictions. Puga (1999) combines Venables's (1996) and Krugman's (1991) models to analyze the effects of considering either free labor mobility or labor immobility. He finds that, when workers can move to locations with higher real wages, this intensifies agglomeration.
(6) Hereafter, the service sector is denoted by the subscript $S$ and we drop the subscript for the manufacturing sector. Furthermore, subscript $s$ (or $i$) will refer to a particular variety of producer services (or manufacturing). We choose to keep superscripts for parameters.
(7) It can be shown that $E$ is also the elasticity of demand (see Dixit and Stiglitz, 1977).
(8) Strict concavity can be interpreted as the presence of a sector-specific factor such as land. Puga (1999) explicitly considers this input in the production technology of the agricultural sector.
(9) Salary differentials between locations will allow us to analyze the effects of salary costs on firm locations.

where $L_{Aj}$ and $w_j$ are the number of farmers and the salary in location $j$, respectively. The first-order condition yields
$$L_{Aj}^{*} = \left( \frac{w_j}{a} \right)^{1/(a-1)}, \qquad (1)$$
so that
$$\Pi_{Aj}^{*} = (1-a) \left( \frac{w_j}{a} \right)^{a/(a-1)} > 0. \qquad (2)$$
To simplify the analysis, we assume that these profits are equally split among consumers.

We would like to emphasize why this sector is necessary in the model. As there is no labor mobility between locations, if we want to study why one region is more industrialized than another, we need a sector from which to take the workers required by the manufacturing and service industries in that location. The existence of this pool of agricultural workers seems to be crucial to explain industrialization.(10)

3.3 Manufacturing
We assume Cobb-Douglas technology between labor and an aggregate of differentiated services, which requires fixed ($f$) and variable ($x_{ij}$) quantities of the inputs,
$$A L_{ij}^{1-m} \left[ \sum_s z_s^{(E-1)/E} \right]^{Em/(E-1)} = f + x_{ij},$$
where $L_{ij}$ and $z_s$ are the labor and the amount of service $s$, respectively, used in producing $x_{ij}$ units of variety $i$ at location $j$, and $m$ denotes the intermediate share.(11) Therefore, the cost function of a firm producing variety $i$ at location $j$ is(12)
$$C_{ij} = w_j^{1-m} P_{Sj}^{m} (f + x_{ij}),$$
where $P_{Sj}$ is the services price index at location $j$, defined by
$$P_{Sj} = \left[ \sum_{k=1}^{2} \sum_{r=1}^{n_{Sk}} (p_{rk} t_{kj})^{1-E} \right]^{1/(1-E)}, \qquad j = 1, 2. \qquad (3)$$
As we can see, the price index depends on the fob prices of individual services and trade costs between locations. The number of producer services at location $j$ is endogenously determined and denoted by $n_{Sj}$.

3.4 Producer services
Following Krugman (1991), we assume the production of a single service variety $s$ involves a fixed cost and a constant marginal cost in terms of labor. However, rather than supposing that costs depend only on wages, we assume that they are also affected by the communication level among all
producer services in the economy, wherever they are located. Some explanations are in order. As has been broadly discussed in recent literature, producer services are an increasingly information-oriented sector (Tofflemire, 1992; Warf, 1995). For this reason we consider that firms in this sector benefit from contacts amongst themselves. Talks turn information into productive knowledge, but long distances increase the cost of collecting and spreading this information. We assume that the level of information that a firm can get from the other location depends both on the telecommunication system, which is shared by both locations, and on specific characteristics of the former location, what we call the technological environment (the number and quality of its computers, its extension of optic fibre, its 'cyberspace' background, its level of human capital, etc). Hence, each location benefits at a different level from the telecommunication network it shares with others. "Individuals, social groups and institutions are seen to have some degree of choice in shaping the design, development and application technologies in specific cases" (approach surveyed in Graham and Marvin, 1996, page 105). Instead of seeing technology as somehow autonomous from society, we stress the importance of political and economic forces in shaping how telecommunications increase in regions.

(10) In fact, the large pool of farmers available to work in the industrial sector has been stressed as the main reason for the existence of megacities in less developed countries (see Puga, 1998).
(11) The assumed elasticity of substitution between any two varieties in this sector is equal to $E$. This is a simplifying and standard assumption (for example, see Fujita et al, 2000; Venables, 1996).
(12) Actually, we have considered parameter $A = (1-m)^{m-1} m^{-m}$ in the technology function so as to obtain a simpler expression for the cost function. See appendix A.

The development of telecommunications is
then part of society. This can cause different regions (countries) to have different technological environments. We assume that producer services are a knowledge-intensive sector, so that they require an initial investment in learning. It seems reasonable to assume that the higher the information level available to firms in this sector, the lower this investment will be. This results in a lower fixed cost. We capture the essence of these ideas by defining the following technology of production:
$$L_{sj} = f \left[ \frac{n_{Sj} + (K_j^{f} T^{c})\, n_{Sk}}{n_{Sj} + n_{Sk}} \right]^{-1} + x_{sj}, \qquad j, k = 1, 2,$$
where $L_{sj}$ is the labor used in producing $x_{sj}$ units of service $s$ at location $j$, and $n_{Sj}$ is the number of producer services at that location. Thus, fixed and variable quantities of labor (the first and second terms in the technology of production, respectively) are required. As far as symmetry is concerned, all firms located in the same region are identical. As has often been argued, information has the characteristic of a public good: a firm can use information without reducing the amount available for others.
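The fixed-cost penalty implied by this technology can be sketched numerically. The sketch below is illustrative only: the parameter values (firm counts, $K_j$, $T$, and the exponents $f$ and $c$) are assumptions, not values taken from the paper.

```python
# Fixed-cost multiplier implied by the service technology
#   L_sj = f * [ (n_Sj + K_j^f T^c n_Sk) / (n_Sj + n_Sk) ]^(-1) + x_sj
# The bracketed share is the information available at location j;
# its inverse scales the fixed labor requirement f.

def fixed_cost_multiplier(n_j, n_k, K_j, T, f_exp=1.0, c_exp=1.0):
    """Multiplier on the fixed cost f for a service firm at location j.

    n_j, n_k : number of service firms at the home and foreign location
    K_j      : technological environment of location j, in [0, 1]
    T        : quality of the shared telecommunication system, in [0, 1]
    f_exp, c_exp : the exponents f and c of the paper (values assumed here)
    """
    informed_share = (n_j + (K_j ** f_exp) * (T ** c_exp) * n_k) / (n_j + n_k)
    return 1.0 / informed_share

# Full concentration at j: all information is local, so the fixed cost is just f.
print(fixed_cost_multiplier(n_j=20, n_k=0, K_j=0.8, T=0.5))   # -> 1.0

# Split industry: better telecommunications (higher T) lower the penalty.
print(fixed_cost_multiplier(n_j=10, n_k=10, K_j=0.8, T=0.5))  # ~1.43
print(fixed_cost_multiplier(n_j=10, n_k=10, K_j=0.8, T=0.9))  # ~1.16
```

Consistent with the text: the multiplier equals 1 (cost $f$) under full concentration or perfect telecommunications, and exceeds 1 whenever part of the industry's information must be imported through imperfect channels.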
Thus, firms can obtain benefits by the exchange of information among themselves (informational spillovers). We assume that the higher the information level available in a location, the lower the fixed costs that services face in that location.(13) We consider that the amount of information available from face-to-face contact in each location depends on the number of firms sited there (all firms sited in the region are of the same size). So, if one region has more firms than the other, it also has more local information. However, information can be taken from the other location, though its quality is lower because of distance-decay effects. By $n_{Sj}/(n_{Sj} + n_{Sk})$ we mean the information that firms share at location $j$ (based strongly on face-to-face contact), and by $(K_j^{f} T^{c})\, n_{Sk}/(n_{Sj} + n_{Sk})$ we mean the information they can take from location $k$ (by electronic communications), with $f, c, T$, and $K_j \in [0, 1]$. Note that the lowest fixed cost, $f$, is obtained when the information term in the technology of production is equal to 1. This happens when there is either full concentration or when telecommunications and the technological environment are perfect ($T = K_j = 1$). Otherwise, fixed costs are higher than $f$. Therefore, if a service firm is far away from the rest (that is, if it is in the periphery), the information it handles will be lower than that of firms in the core. All this means that being away from the information core is penalized with higher fixed costs and, thus, lower productivity. Also, the better the telecommunication system in the economy (high value of $T$) and the technological environment of the location

(13) Because information affects only the fixed cost, prices will not depend on the level of information [see equation (9)]. This allows us to use steps similar to those in Venables (1996) to undertake the analysis of the demand and cost linkages between sectors.
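The cost linkage runs through the CES service price index of equation (3): a location that must import most varieties at trade cost $t > 1$ faces a higher $P_{Sj}$ and hence higher manufacturing costs. A minimal sketch, assuming symmetric fob prices within each location and illustrative parameter values (not taken from the paper's simulations):

```python
# CES price index for services at location j, in the spirit of equation (3):
#   P_Sj = [ sum_k  n_Sk * (p_k * t_kj)^(1-E) ]^(1/(1-E))
# assuming every firm at location k charges the same fob price p_k.

def service_price_index(n, p, t_col, E):
    """n, p: per-location firm counts and fob prices; t_col[k] = t_kj."""
    s = sum(n[k] * (p[k] * t_col[k]) ** (1.0 - E) for k in range(len(n)))
    return s ** (1.0 / (1.0 - E))

n = [80, 20]      # most service firms concentrated at location 1 (the core)
p = [1.0, 1.0]    # symmetric fob prices
E = 4.0           # elasticity of substitution, E > 1
t = 1.3           # ad valorem trade cost between the two locations

P_core = service_price_index(n, p, [1.0, t], E)       # location 1 imports little
P_periphery = service_price_index(n, p, [t, 1.0], E)  # location 2 imports a lot

print(P_core, P_periphery)  # the periphery faces the higher index
```

With these assumed numbers the periphery's index exceeds the core's, which is the mechanism pulling downstream manufacturers toward (or pushing their costs up away from) the service agglomeration.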

Principles of Plasma Discharges and Materials Processing, Chapter 2


CHAPTER 2BASIC PLASMA EQUATIONS AND EQUILIBRIUM2.1INTRODUCTIONThe plasma medium is complicated in that the charged particles are both affected by external electric and magnetic fields and contribute to them.The resulting self-consistent system is nonlinear and very difficult to analyze.Furthermore,the inter-particle collisions,although also electromagnetic in character,occur on space and time scales that are usually much shorter than those of the applied fields or the fields due to the average motion of the particles.To make progress with such a complicated system,various simplifying approxi-mations are needed.The interparticle collisions are considered independently of the larger scale fields to determine an equilibrium distribution of the charged-particle velocities.The velocity distribution is averaged over velocities to obtain the macro-scopic motion.The macroscopic motion takes place in external applied fields and in the macroscopic fields generated by the average particle motion.These self-consistent fields are nonlinear,but may be linearized in some situations,particularly when dealing with waves in plasmas.The effect of spatial variation of the distri-bution function leads to pressure forces in the macroscopic equations.The collisions manifest themselves in particle generation and loss processes,as an average friction force between different particle species,and in energy exchanges among species.In this chapter,we consider the basic equations that govern the plasma medium,con-centrating attention on the macroscopic system.The complete derivation of these 23Principles of Plasma Discharges and Materials Processing ,by M.A.Lieberman and A.J.Lichtenberg.ISBN 0-471-72001-1Copyright #2005John Wiley &Sons,Inc.equations,from fundamental principles,is beyond the scope of the text.We shall make the equations plausible and,in the easier instances,supply some derivations in appendices.For the reader interested in more rigorous treatment,references to the literature 
will be given.In Section2.2,we introduce the macroscopicfield equations and the current and voltage.In Section2.3,we introduce the fundamental equation of plasma physics, for the evolution of the particle distribution function,in a form most applicable for weakly ionized plasmas.We then define the macroscopic quantities and indicate how the macroscopic equations are obtained by taking moments of the fundamental equation.References given in the text can be consulted for more details of the aver-aging procedure.Although the macroscopic equations depend on the equilibrium distribution,their form is independent of the equilibrium.To solve the equations for particular problems the equilibrium must be known.In Section2.4,we introduce the equilibrium distribution and obtain some consequences arising from it and from thefield equations.The form of the equilibrium distribution will be shown to be a consequence of the interparticle collisions,in Appendix B.2.2FIELD EQUATIONS,CURRENT,AND VOLTAGEMaxwell’s EquationsThe usual macroscopic form of Maxwell’s equations arerÂE¼Àm0@H@t(2:2:1)rÂH¼e0@E@tþJ(2:2:2)e0rÁE¼r(2:2:3) andmrÁH¼0(2:2:4) where E(r,t)and H(r,t)are the electric and magneticfield vectors and wherem 0¼4pÂ10À7H/m and e0%8:854Â10À12F/m are the permeability and per-mittivity of free space.The sources of thefields,the charge density r(r,t)and the current density J(r,t),are related by the charge continuity equation(Problem2.1):@rþrÁJ¼0(2:2:5) In general,J¼J condþJ polþJ mag24BASIC PLASMA EQUATIONS AND EQUILIBRIUMwhere the conduction current density J cond is due to the motion of the free charges, the polarization current density J pol is due to the motion of bound charges in a dielectric material,and the magnetization current density J mag is due to the magnetic moments in a magnetic material.In a plasma in vacuum,J pol and J mag are zero and J¼J cond.If(2.2.3)is integrated over a volume V,enclosed by a surface S,then we obtain its integral form,Gauss’law:e0þSEÁd 
A¼q(2:2:6)where q is the total charge inside the volume.Similarly,integrating(2.2.5),we obtaind q d t þþSJÁd A¼0which states that the rate of increase of charge inside V is supplied by the total currentflowing across S into V,that is,that charge is conserved.In(2.2.2),thefirst term on the RHS is the displacement current densityflowing in the vacuum,and the second term is the conduction current density due to the free charges.We can introduce the total current densityJ T¼e0@E@tþJ(2:2:7)and taking the divergence of(2.2.2),we see thatrÁJ T¼0(2:2:8)In one dimension,this reduces to d J T x=d x¼0,such that J T x¼J T x(t),independent of x.Hence,for example,the total currentflowing across a spatially nonuniform one-dimensional discharge is independent of x,as illustrated in Figure2.1.A generalization of this result is Kirchhoff’s current law,which states that the sum of the currents entering a node,where many current-carrying conductors meet,is zero.This is also shown in Figure2.1,where I rf¼I TþI1.If the time variation of the magneticfield is negligible,as is often the case in plasmas,then from Maxwell’s equations rÂE%0.Since the curl of a gradient is zero,this implies that the electricfield can be derived from the gradient of a scalar potential,E¼Àr F(2:2:9)2.2FIELD EQUATIONS,CURRENT,AND VOLTAGE25Integrating (2.2.9)around any closed loop C givesþC E Ád ‘¼ÀþC r F Ád ‘¼ÀþC d F ¼0(2:2:10)Hence,we obtain Kirchhoff’s voltage law ,which states that the sum of the voltages around any loop is zero.This is illustrated in Figure 2.1,for which we obtainV rf ¼V 1þV 2þV 3that is,the source voltage V rf is equal to the sum of the voltages V 1and V 3across the two sheaths and the voltage V 2across the bulk plasma.Note that currents and vol-tages can have positive or negative values;the directions for which their values are designated as positive must be specified,as shown in the figure.If (2.2.9)is substituted in (2.2.3),we obtainr 2F ¼Àre 0(2:2:11)Equation (2.2.11),Poisson’s equation 
, is one of the fundamental equations that we shall use. As an example of its application, consider the potential in the center (x = 0) of two grounded (Φ = 0) plates separated by a distance l = 10 cm and containing a uniform ion density n_i = 10^10 cm^−3, without the presence of neutralizing electrons. Integrating Poisson's equation

    d²Φ/dx² = −e n_i/ε0

using the boundary conditions that Φ = 0 at x = ±l/2 and that dΦ/dx = 0 at x = 0 (by symmetry), we obtain

    Φ = (e n_i/2ε0) [(l/2)² − x²]

The maximum potential in the center is 2.3 × 10^5 V, which is impossibly large for a real discharge. Hence, the ions must be mostly neutralized by electrons, leading to a quasi-neutral plasma.

FIGURE 2.1. Kirchhoff's circuit laws: The total current J_T flowing across a nonuniform one-dimensional discharge is independent of x; the sum of the currents entering a node is zero (I_rf = I_T + I_1); the sum of voltages around a loop is zero (V_rf = V_1 + V_2 + V_3).

BASIC PLASMA EQUATIONS AND EQUILIBRIUM

Figure 2.2 shows a PIC simulation time history over 10^−10 s of (a) the v_x–x phase space, (b) the number N of sheets versus time, and (c) the potential Φ versus x for 100 unneutralized ion sheets (with e/M for argon ions). We see the ion acceleration in (a), the loss of ions in (b), and the parabolic potential profile in (c); the maximum potential decreases as ions are lost from the system. We consider quasi-neutrality further in Section 2.4.

Electric and magnetic fields exert forces on charged particles given by the Lorentz force law:

    F = q(E + v × B)    (2.2.12)

FIGURE 2.2. PIC simulation of ion loss in a plasma containing ions only: (a) v_x–x ion phase space, showing the ion acceleration trajectories; (b) number N of ion sheets versus t, with the steps indicating the loss of a single sheet; (c) the potential Φ versus x during the first 10^−10 s of ion loss.

where v is the particle velocity and B = μ0 H is the magnetic induction vector. The charged particles move under the
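As a quick numerical cross-check of this worked example, the parabolic profile and its quoted 2.3 × 10^5 V maximum can be reproduced directly. The sketch below (Python, with standard values of e and ε0) evaluates Φ(x) = (e n_i/2ε0)[(l/2)² − x²]:

```python
# Numerical check of the grounded-plates Poisson example:
# Phi(x) = (e*n_i / (2*eps0)) * ((l/2)**2 - x**2), maximum at x = 0.
e = 1.602e-19        # elementary charge (C)
eps0 = 8.854e-12     # vacuum permittivity (F/m)
n_i = 1e10 * 1e6     # ion density: 10^10 cm^-3 converted to m^-3
l = 0.10             # plate separation: 10 cm in meters

def phi(x):
    """Potential (V) between grounded plates filled with uniform ion charge."""
    return (e * n_i / (2 * eps0)) * ((l / 2) ** 2 - x ** 2)

print(f"Phi(0) = {phi(0.0):.2e} V")   # ~2.3e5 V, as quoted in the text
```

The result, about 2.26 × 10^5 V, confirms why an un-neutralized ion cloud at this density cannot persist.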
action of the Lorentz force.The moving charges in turn contribute to both r and J in the plasma.If r and J are linearly related to E and B,then thefield equations are linear.As we shall see,this is not generally the case for a plasma.Nevertheless,linearization may be possible in some cases for which the plasma may be considered to have an effective dielectric constant;that is,the “free charges”play the same role as“bound charges”in a dielectric.We consider this further in Chapter4.2.3THE CONSERVATION EQUATIONSBoltzmann’s EquationFor a given species,we introduce a distribution function f(r,v,t)in the six-dimensional phase space(r,v)of particle positions and velocities,with the interpret-ation thatf(r,v,t)d3r d3v¼number of particles inside a six-dimensional phasespace volume d3r d3v at(r,v)at time tThe six coordinates(r,v)are considered to be independent variables.We illus-trate the definition of f and its phase space in one dimension in Figure2.3.As particles drift in phase space or move under the action of macroscopic forces, theyflow into or out of thefixed volume d x d v x.Hence the distribution functionaf should obey a continuity equation which can be derived as follows.InFIGURE2.3.One-dimensional v x–x phase space,illustrating the derivation of the Boltzmann equation and the change in f due to collisions.time d t,f(x,v x,t)d x a x(x,v x,t)d t particlesflow into d x d v x across face1f(x,v xþd v x,t)d x a x(x,v xþd v x,t)d t particlesflow out of d x d v x across face2 f(x,v x,t)d v x v x d t particlesflow into d x d v x across face3f(xþd x,v x,t)d v x v x d t particlesflow out of d x d v x across face4where a x v d v x=d t and v x;d x=d t are theflow velocities in the v x and x directions, respectively.Hencef(x,v x,tþd t)d x d v xÀf(x,v x,t)d x d v x¼½f(x,v x,t)a x(x,v x,t)Àf(x,v xþd v x,t)a x(x,v xþd v x,t) d x d tþ½f(x,v x,t)v xÀf(xþd x,v x,t)v x d v x d tDividing by d x d v x d t,we obtain@f @t ¼À@@x(f v x)À@@v x(fa x)(2:3:1)Noting that v x is independent of 
x and assuming that the acceleration a x¼F x=m of the particles does not depend on v x,then(2.3.1)can be rewritten as@f @t þv x@f@xþa x@f@v x¼0The three-dimensional generalization,@f@tþvÁr r fþaÁr v f¼0(2:3:2)with r r¼(^x@=@xþ^y@=@yþ^z@=@z)and r v¼(^x@=@v xþ^y@=@v yþ^z@=@v z)is called the collisionless Boltzmann equation or Vlasov equation.In addition toflows into or out of the volume across the faces,particles can “suddenly”appear in or disappear from the volume due to very short time scale interparticle collisions,which are assumed to occur on a timescale shorter than the evolution time of f in(2.3.2).Such collisions can practically instantaneously change the velocity(but not the position)of a particle.Examples of particles sud-denly appearing or disappearing are shown in Figure2.3.We account for this effect,which changes f,by adding a“collision term”to the right-hand side of (2.3.2),thus obtaining the Boltzmann equation:@f @t þvÁr r fþFmÁr v f¼@f@tc(2:3:3)2.3THE CONSERVATION EQUATIONS29The collision term in integral form will be derived in Appendix B.The preceding heuristic derivation of the Boltzmann equation can be made rigorous from various points of view,and the interested reader is referred to texts on plasma theory, such as Holt and Haskel(1965).A kinetic theory of discharges,accounting for non-Maxwellian particle distributions,must be based on solutions of the Boltzmann equation.We give an introduction to this analysis in Chapter18. 
Macroscopic QuantitiesThe complexity of the dynamical equations is greatly reduced by averaging over the velocity coordinates of the distribution function to obtain equations depending on the spatial coordinates and the time only.The averaged quantities,such as species density,mean velocity,and energy density are called macroscopic quantities,and the equations describing them are the macroscopic conservation equations.To obtain these averaged quantities we take velocity moments of the distribution func-tion,and the equations are obtained from the moments of the Boltzmann equation.The average quantities that we are concerned with are the particle density,n(r,t)¼ðf d3v(2:3:4)the particlefluxG(r,t)¼n u¼ðv f d3v(2:3:5)where u(r,t)is the mean velocity,and the particle kinetic energy per unit volumew¼32pþ12mu2n¼12mðv2f d3v(2:3:6)where p(r,t)is the isotropic pressure,which we define below.In this form,w is sumof the internal energy density32p and theflow energy density12mu2n.Particle ConservationThe lowest moment of the Boltzmann equation is obtained by integrating all terms of(2.3.3)over velocity space.The integration yields the macroscopic continuity equation:@n@tþrÁ(n u)¼GÀL(2:3:7)The collision term in(2.3.3),which yields the right-hand side of(2.3.7),is equal to zero when integrated over velocities,except for collisions that create or destroy 30BASIC PLASMA EQUATIONS AND EQUILIBRIUMparticles,designated as G and L ,respectively (e.g.,ionization,recombination).In fact,(2.3.7)is transparent since it physically describes the conservation of particles.If (2.3.7)is integrated over a volume V bounded by a closed surface S ,then (2.3.7)states that the net number of particles generated per second within V ,either flows across the surface S or increases the number of particles within V .For common low-pressure discharges in the steady state,G is usually due to ioniz-ation by electron–neutral collisions:G ¼n iz n ewhere n iz is the ionization frequency.The volume loss rate L 
,usually due to recom-bination,is often negligible.Hencer Á(n u )¼n iz n e (2:3:8)in a typical discharge.However,note that the continuity equation is clearly not sufficient to give the evolution of the density n ,since it involves another quantity,the mean particle velocity u .Momentum ConservationTo obtain an equation for u ,a first moment is formed by multiplying the Boltzmann equation by v and integrating over velocity.The details are complicated and involve evaluation of tensor elements.The calculation can be found in most plasma theory texts,for example,Krall and Trivelpiece (1973).The result is mn @u @t þu Ár ðÞu !¼qn E þu ÂB ðÞÀr ÁP þf c (2:3:9)The left-hand side is the species mass density times the convective derivative of the mean velocity,representing the mass density times the acceleration.The convective derivative has two terms:the first term @u =@t represents an acceleration due to an explicitly time-varying u ;the second “inertial”term (u Ár )u represents an acceleration even for a steady fluid flow (@=@t ;0)having a spatially varying u .For example,if u ¼^xu x (x )increases along x ,then the fluid is accelerating along x (Problem 2.4).This second term is nonlinear in u and can often be neglected in discharge analysis.The mass times acceleration is acted upon,on the right-hand side,by the body forces,with the first term being the electric and magnetic force densities.The second term is the force density due to the divergence of the pressure tensor,which arises due to the integration over velocitiesP ij ¼mn k v i Àu ðÞv j Àu ÀÁl v (2:3:10)2.3THE CONSERVATION EQUATIONS 31where the subscripts i,j give the component directions and kÁl v denotes the velocity average of the bracketed quantity over f.ÃFor weakly ionized plasmas it is almost never used in this form,but rather an isotropic version is employed:P¼p000p000p@1A(2:3:11)such thatrÁP¼r p(2:3:12) a pressure gradient,withp¼13mn k(vÀu)2l v(2:3:13)being the scalar pressure.Physically,the pressure 
gradient force density arises as illustrated in Figure2.4,which shows a small volume acted upon by a pressure that is an increasing function of x.The net force on this volume is p(x)d AÀp(xþd x)d A and the volume is d A d x.Hence the force per unit volume isÀ@p=@x.The third term on the right in(2.3.9)represents the time rate of momentum trans-fer per unit volume due to collisions with other species.For electrons or positive ions the most important transfer is often due to collisions with neutrals.The transfer is usually approximated by a Krook collision operatorf j c¼ÀXbmn n m b(uÀu b):Àm(uÀu G)Gþm(uÀu L)L(2:3:14)where the summation is over all other species,u b is the mean velocity of species b, n m b is the momentum transfer frequency for collisions with species b,and u G and u L are the mean velocities of newly created and lost particles.Generally j u G j(j u j for pair creation by ionization,and u L%u for recombination or charge transfer lossprocesses.We discuss the Krook form of the collision operator further in Chapter 18.The last two terms in(2.3.14)are generally small and give the momentum trans-fer due to the creation or destruction of particles.For example,if ions are created at rest,then they exert a drag force on the moving ionfluid because they act to lower the averagefluid velocity.A common form of the average force(momentum conservation)equation is obtained from(2.3.9)neglecting the magnetic forces and taking u b¼0in theÃWe assume f is normalized so that k f lv ¼1.32BASIC PLASMA EQUATIONS AND EQUILIBRIUMKrook collision term for collisions with one neutral species.The result is mn @u @t þu Ár u !¼qn E Àr p Àmn n m u (2:3:15)where only the acceleration (@u =@t ),inertial (u Ár u ),electric field,pressure gradi-ent,and collision terms appear.For slow time variation,the acceleration term can be neglected.For high pressures,the inertial term is small compared to the collision term and can also be dropped.Equations (2.3.7)and (2.3.9)together still do not 
form a closed set,since the pressure tensor P (or scalar pressure p )is not determined.The usual procedure to close the equations is to use a thermodynamic equation of state to relate p to n .The isothermal relation for an equilibrium Maxwellian distribution isp ¼nkT(2:3:16)so thatr p ¼kT r n (2:3:17)where T is the temperature in kelvin and k is Boltzmann’s constant (k ¼1.381Â10223J /K).This holds for slow time variations,where temperatures are allowed to equilibrate.In this case,the fluid can exchange energy with its sur-roundings,and we also require an energy conservation equation (see below)to deter-mine p and T .For a room temperature (297K)neutral gas having density n g and pressure p ,(2.3.16)yieldsn g (cm À3)%3:250Â1016p (Torr)(2:3:18)p FIGURE 2.4.The force density due to the pressure gradient.2.3THE CONSERVATION EQUATIONS 33Alternatively,the adiabatic equation of state isp¼Cn g(2:3:19) such thatr p p ¼gr nn(2:3:20)where g is the ratio of specific heat at constant pressure to that at constant volume.The specific heats are defined in Section7.2;g¼5/3for a perfect gas; for one-dimensional adiabatic motion,g¼3.The adiabatic relation holds for fast time variations,such as in waves,when thefluid does not exchange energy with its surroundings;hence an energy conservation equation is not required. 
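The numerical relation (2.3.18) between gas pressure and density is just the ideal-gas law n_g = p/kT evaluated at 297 K with unit conversions. A short check (Python; the constants are standard values):

```python
# Verify n_g(cm^-3) ~ 3.250e16 * p(Torr) at room temperature, eq. (2.3.18).
k = 1.381e-23          # Boltzmann constant (J/K)
T = 297.0              # room temperature (K)
torr_to_pa = 133.322   # 1 Torr in pascals

def gas_density_cm3(p_torr):
    """Ideal-gas number density n_g = p/(kT), returned in cm^-3."""
    n_m3 = p_torr * torr_to_pa / (k * T)
    return n_m3 * 1e-6  # m^-3 -> cm^-3

print(f"{gas_density_cm3(1.0):.3e} cm^-3 per Torr")  # ~3.25e16, matching (2.3.18)
```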
For almost all applications to discharge analysis,we use the isothermal equation of state.Energy ConservationThe energy conservation equation is obtained by multiplying the Boltzmannequation by12m v2and integrating over velocity.The integration and some othermanipulation yield@ @t32pþrÁ32p uðÞþp rÁuþrÁq¼@@t32pc(2:3:21)Here32p is the thermal energy density(J/m3),32p u is the macroscopic thermal energyflux(W/m2),representing theflow of the thermal energy density at thefluid velocityu,p rÁu(W/m3)gives the heating or cooling of thefluid due to compression orexpansion of its volume(Problem2.5),q is the heatflow vector(W/m2),whichgives the microscopic thermal energyflux,and the collisional term includes all col-lisional processes that change the thermal energy density.These include ionization,excitation,elastic scattering,and frictional(ohmic)heating.The equation is usuallyclosed by setting q¼0or by letting q¼Àk T r T,where k T is the thermal conduc-tivity.For most steady-state discharges the macroscopic thermal energyflux isbalanced against the collisional processes,giving the simpler equationrÁ32p u¼@32pc(2:3:22)Equation(2.3.22),together with the continuity equation(2.3.8),will often prove suf-ficient for our analysis.34BASIC PLASMA EQUATIONS AND EQUILIBRIUMSummarySummarizing our results for the macroscopic equations describing the electron and ionfluids,we have in their most usually used forms the continuity equationrÁ(n u)¼n iz n e(2:3:8) the force equation,mn @u@tþuÁr u!¼qn EÀr pÀmn n m u(2:3:15)the isothermal equation of statep¼nkT(2:3:16) and the energy-conservation equationrÁ32p u¼@@t32pc(2:3:22)These equations hold for each charged species,with the total charges and currents summed in Maxwell’s equations.For example,with electrons and one positive ion species with charge Ze,we haver¼e Zn iÀn eðÞ(2:3:23)J¼e Zn i u iÀn e u eðÞ(2:3:24)These equations are still very difficult to solve without simplifications.They consist of18unknown quantities n i,n e,p i,p e,T i,T 
e,u i,u e,E,and B,with the vectors each counting for three.Various simplifications used to make the solutions to the equations tractable will be employed as the individual problems allow.2.4EQUILIBRIUM PROPERTIESElectrons are generally in near-thermal equilibrium at temperature T e in discharges, whereas positive ions are almost never in thermal equilibrium.Neutral gas mol-ecules may or may not be in thermal equilibrium,depending on the generation and loss processes.For a single species in thermal equilibrium with itself(e.g.,elec-trons),in the absence of time variation,spatial gradients,and accelerations,the2.4EQUILIBRIUM PROPERTIES35Boltzmann equation(2.3.3)reduces to@f @tc¼0(2:4:1)where the subscript c here represents the collisions of a particle species with itself. We show in Appendix B that the solution of(2.4.1)has a Gaussian speed distribution of the formf(v)¼C eÀj2m v2(2:4:2) The two constants C and j can be obtained by using the thermodynamic relationw¼12mn k v2l v¼32nkT(2:4:3)that is,that the average energy of a particle is12kT per translational degree offreedom,and by using a suitable normalization of the distribution.Normalizing f(v)to n,we obtainCð2p0d fðpsin u d uð1expÀj2m v2ÀÁv2d v¼n(2:4:4)and using(2.4.3),we obtain1 2mCð2pd fðpsin u d uð1expÀj2m v2ÀÁv4d v¼32nkT(2:4:5)where we have written the integrals over velocity space in spherical coordinates.The angle integrals yield the factor4p.The v integrals are evaluated using the relationÃð10eÀu2u2i d u¼(2iÀ1)!!2ffiffiffiffipp,where i is an integer!1:(2:4:6)Solving for C and j we havef(v)¼nm2p kT3=2expÀm v22kT(2:4:7)which is the Maxwellian distribution.Ã!!denotes the double factorial function;for example,7!!¼7Â5Â3Â1. 
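The Maxwellian (2.4.7) can be sanity-checked numerically: integrated over all speeds it must return the density (here normalized to n = 1), and by (2.4.3) its mean squared speed must equal 3kT/m. The sketch below uses illustrative electron parameters (a 2 V temperature) that are assumptions, not values from the text:

```python
import numpy as np

# Check normalization and <v^2> = 3kT/m for the Maxwellian speed
# distribution f(v) = (m / 2*pi*k*T)^(3/2) * exp(-v^2 / (2 v_th^2)).
k = 1.381e-23        # Boltzmann constant (J/K)
m = 9.109e-31        # electron mass (kg)
T = 2.0 * 11604.5    # 2 V expressed in kelvin (assumed example value)
vth = np.sqrt(k * T / m)

v = np.linspace(0.0, 12 * vth, 400_000)
dv = v[1] - v[0]
f = (m / (2 * np.pi * k * T)) ** 1.5 * np.exp(-(v / vth) ** 2 / 2)

norm = np.sum(f * 4 * np.pi * v**2) * dv            # should be ~1
v2_avg = np.sum(v**2 * f * 4 * np.pi * v**2) * dv   # should be ~3kT/m
print(norm, v2_avg / (3 * k * T / m))               # both ~1.0
```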
Similarly, other averages can be performed. The average speed v̄ is given by

    v̄ = (m/2πkT)^{3/2} ∫₀^∞ v exp(−v²/2v_th²) 4πv² dv    (2.4.8)

where v_th = (kT/m)^{1/2} is the thermal velocity. We obtain

    v̄ = (8kT/πm)^{1/2}    (2.4.9)

The directed flux Γ_z in (say) the +z direction is given by n⟨v_z⟩_v, where the average is taken over v_z > 0 only. Writing v_z = v cos θ, we have in spherical coordinates

    Γ_z = n (m/2πkT)^{3/2} ∫₀^{2π} dφ ∫₀^{π/2} sin θ dθ ∫₀^∞ v cos θ exp(−v²/2v_th²) v² dv

Evaluating the integrals, we find

    Γ_z = (1/4) n v̄    (2.4.10)

Γ_z is the number of particles per square meter per second crossing the z = 0 surface in the positive direction. Similarly, the average energy flux S_z = n⟨(1/2)mv² v_z⟩_v in the +z direction can be found: S_z = 2kT Γ_z. We see that the average kinetic energy W per particle crossing z = 0 in the positive direction is

    W = 2kT    (2.4.11)

It is sometimes convenient to define the distribution in terms of other variables. For example, we can define a distribution of energies W = (1/2)mv² by

    4π g(W) dW = 4π f(v) v² dv

Evaluating dv/dW, we see that g and f are related by

    g(W) = (v(W)/m) f[v(W)]    (2.4.12)

where v(W) = (2W/m)^{1/2}.

Boltzmann's Relation

A very important relation can be obtained for the density of electrons in thermal equilibrium at varying positions in a plasma under the action of a spatially varying potential. In the absence of electron drifts (u_e ≡ 0), the inertial, magnetic, and frictional forces are zero, and the electron force balance is, from (2.3.15) with ∂/∂t ≡ 0,

    e n_e E + ∇p_e = 0    (2.4.13)

Setting E = −∇Φ and assuming p_e = n_e kT_e, (2.4.13) becomes

    −e n_e ∇Φ + kT_e ∇n_e = 0

or, rearranging,

    ∇(eΦ − kT_e ln n_e) = 0    (2.4.14)

Integrating, we have

    eΦ − kT_e ln n_e = const

or

    n_e(r) = n_0 e^{eΦ(r)/kT_e}    (2.4.15)

which is Boltzmann's relation for electrons. We see that electrons are "attracted" to regions of positive potential. We shall generally write Boltzmann's relation in more convenient units

    n_e = n_0 e^{Φ/T_e}    (2.4.16)

where T_e
is now expressed in volts, as is Φ. For positive ions in thermal equilibrium at temperature T_i, a similar analysis shows that

    n_i = n_0 e^{−Φ/T_i}    (2.4.17)

Hence positive ions in thermal equilibrium are "repelled" from regions of positive potential. However, positive ions are almost never in thermal equilibrium in low-pressure discharges because the ion drift velocity u_i is large, leading to inertial or frictional forces in (2.3.15) that are comparable to the electric field or pressure gradient forces.

Debye Length

The characteristic length scale in a plasma is the electron Debye length λ_De. As we will show, the Debye length is the distance scale over which significant charge densities can spontaneously exist. For example, low-voltage (undriven) sheaths are typically a few Debye lengths wide. To determine the Debye length, let us introduce a sheet of negative charge having surface charge density ρ_S < 0 C/m² into an
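The derivation being set up here leads to the standard expression λ_De = (ε0 kT_e/e² n_0)^{1/2}, or, with T_e expressed in volts, λ_De = (ε0 T_e/e n_0)^{1/2}. A small sketch with assumed (illustrative) discharge parameters:

```python
import math

# Electron Debye length, lambda_De = sqrt(eps0 * Te / (e * n0)),
# with Te in volts. Parameters below are assumed example values,
# not taken from the text.
eps0 = 8.854e-12     # vacuum permittivity (F/m)
e = 1.602e-19        # elementary charge (C)
Te = 4.0             # electron temperature (V), assumed
n0 = 1e10 * 1e6      # plasma density: 10^10 cm^-3 in m^-3, assumed

lambda_De = math.sqrt(eps0 * Te / (e * n0))
print(f"lambda_De = {lambda_De * 1e3:.3f} mm")   # -> 0.149 mm
```

At these typical low-pressure-discharge values the Debye length is a small fraction of a millimeter, consistent with sheaths being thin compared to the chamber dimensions.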


A modified version of this technical report will appear in ACM Computing Surveys,September2009. Anomaly Detection:A SurveyVARUN CHANDOLAUniversity of MinnesotaARINDAM BANERJEEUniversity of MinnesotaandVIPIN KUMARUniversity of MinnesotaAnomaly detection is an important problem that has been researched within diverse research areas and application domains.Many anomaly detection techniques have been specifically developed for certain application domains,while others are more generic.This survey tries to provide a structured and comprehensive overview of the research on anomaly detection.We have grouped existing techniques into different categories based on the underlying approach adopted by each technique.For each category we have identified key assumptions,which are used by the techniques to differentiate between normal and anomalous behavior.When applying a given technique to a particular domain,these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain.For each category,we provide a basic anomaly detection technique,and then show how the different existing techniques in that category are variants of the basic tech-nique.This template provides an easier and succinct understanding of the techniques belonging to each category.Further,for each category,we identify the advantages and disadvantages of the techniques in that category.We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains.We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic,and how techniques developed in one area can be applied in domains for which they were not intended to begin with.Categories and Subject Descriptors:H.2.8[Database Management]:Database Applications—Data MiningGeneral Terms:AlgorithmsAdditional Key Words and Phrases:Anomaly Detection,Outlier Detection1.INTRODUCTIONAnomaly detection 
refers to the problem offinding patterns in data that do not conform to expected behavior.These non-conforming patterns are often referred to as anomalies,outliers,discordant observations,exceptions,aberrations,surprises, peculiarities or contaminants in different application domains.Of these,anomalies and outliers are two terms used most commonly in the context of anomaly detection; sometimes interchangeably.Anomaly detectionfinds extensive use in a wide variety of applications such as fraud detection for credit cards,insurance or health care, intrusion detection for cyber-security,fault detection in safety critical systems,and military surveillance for enemy activities.The importance of anomaly detection is due to the fact that anomalies in data translate to significant(and often critical)actionable information in a wide variety of application domains.For example,an anomalous traffic pattern in a computerTo Appear in ACM Computing Surveys,092009,Pages1–72.2·Chandola,Banerjee and Kumarnetwork could mean that a hacked computer is sending out sensitive data to an unauthorized destination[Kumar2005].An anomalous MRI image may indicate presence of malignant tumors[Spence et al.2001].Anomalies in credit card trans-action data could indicate credit card or identity theft[Aleskerov et al.1997]or anomalous readings from a space craft sensor could signify a fault in some compo-nent of the space craft[Fujimaki et al.2005].Detecting outliers or anomalies in data has been studied in the statistics commu-nity as early as the19th century[Edgeworth1887].Over time,a variety of anomaly detection techniques have been developed in several research communities.Many of these techniques have been specifically developed for certain application domains, while others are more generic.This survey tries to provide a structured and comprehensive overview of the research on anomaly detection.We hope that it facilitates a better understanding of the different directions in which research has 
been done on this topic,and how techniques developed in one area can be applied in domains for which they were not intended to begin with.1.1What are anomalies?Anomalies are patterns in data that do not conform to a well defined notion of normal behavior.Figure1illustrates anomalies in a simple2-dimensional data set. The data has two normal regions,N1and N2,since most observations lie in these two regions.Points that are sufficiently far away from the regions,e.g.,points o1 and o2,and points in region O3,are anomalies.Fig.1.A simple example of anomalies in a2-dimensional data set. Anomalies might be induced in the data for a variety of reasons,such as malicious activity,e.g.,credit card fraud,cyber-intrusion,terrorist activity or breakdown of a system,but all of the reasons have a common characteristic that they are interesting to the analyst.The“interestingness”or real life relevance of anomalies is a key feature of anomaly detection.Anomaly detection is related to,but distinct from noise removal[Teng et al. 1990]and noise accommodation[Rousseeuw and Leroy1987],both of which deal To Appear in ACM Computing Surveys,092009.Anomaly Detection:A Survey·3 with unwanted noise in the data.Noise can be defined as a phenomenon in data which is not of interest to the analyst,but acts as a hindrance to data analysis. Noise removal is driven by the need to remove the unwanted objects before any data analysis is performed on the data.Noise accommodation refers to immunizing a statistical model estimation against anomalous observations[Huber1974]. 
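The two-dimensional picture of Figure 1 can be made concrete with a minimal distance-based detector: score each point by its distance to its k-th nearest neighbor, so that points far from any dense region score high. This is an illustrative sketch on synthetic data, not an algorithm taken from the survey itself; the cluster locations, outlier positions, and k are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two dense "normal" regions plus two isolated points, loosely
# mimicking N1, N2 and o1, o2 of Figure 1 (synthetic data).
N1 = rng.normal(loc=(0.0, 0.0), scale=0.3, size=(100, 2))
N2 = rng.normal(loc=(4.0, 4.0), scale=0.3, size=(100, 2))
outliers = np.array([[2.0, 7.0], [7.0, 0.0]])
X = np.vstack([N1, N2, outliers])

def knn_score(X, k=5):
    """Distance of each point to its k-th nearest neighbour."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k]   # column 0 is the zero self-distance

scores = knn_score(X)
top2 = np.argsort(scores)[-2:]        # indices of the two highest scores
print(sorted(int(i) for i in top2))   # -> [200, 201], the injected outliers
```

Points inside N1 and N2 have small k-NN distances, while the isolated points stand out, which is the intuition behind the nearest-neighbor-based techniques surveyed later.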
Another topic related to anomaly detection is novelty detection[Markou and Singh2003a;2003b;Saunders and Gero2000]which aims at detecting previously unobserved(emergent,novel)patterns in the data,e.g.,a new topic of discussion in a news group.The distinction between novel patterns and anomalies is that the novel patterns are typically incorporated into the normal model after being detected.It should be noted that solutions for above mentioned related problems are often used for anomaly detection and vice-versa,and hence are discussed in this review as well.1.2ChallengesAt an abstract level,an anomaly is defined as a pattern that does not conform to expected normal behavior.A straightforward anomaly detection approach,there-fore,is to define a region representing normal behavior and declare any observation in the data which does not belong to this normal region as an anomaly.But several factors make this apparently simple approach very challenging:—Defining a normal region which encompasses every possible normal behavior is very difficult.In addition,the boundary between normal and anomalous behavior is often not precise.Thus an anomalous observation which lies close to the boundary can actually be normal,and vice-versa.—When anomalies are the result of malicious actions,the malicious adversaries often adapt themselves to make the anomalous observations appear like normal, thereby making the task of defining normal behavior more difficult.—In many domains normal behavior keeps evolving and a current notion of normal behavior might not be sufficiently representative in the future.—The exact notion of an anomaly is different for different application domains.For example,in the medical domain a small deviation from normal(e.g.,fluctuations in body temperature)might be an anomaly,while similar deviation in the stock market domain(e.g.,fluctuations in the value of a stock)might be considered as normal.Thus applying a technique developed in one domain to another is not 
straightforward.—Availability of labeled data for training/validation of models used by anomaly detection techniques is usually a major issue.—Often the data contains noise which tends to be similar to the actual anomalies and hence is difficult to distinguish and remove.Due to the above challenges,the anomaly detection problem,in its most general form,is not easy to solve.In fact,most of the existing anomaly detection techniques solve a specific formulation of the problem.The formulation is induced by various factors such as nature of the data,availability of labeled data,type of anomalies to be detected,etc.Often,these factors are determined by the application domain inTo Appear in ACM Computing Surveys,092009.4·Chandola,Banerjee and Kumarwhich the anomalies need to be detected.Researchers have adopted concepts from diverse disciplines such as statistics ,machine learning ,data mining ,information theory ,spectral theory ,and have applied them to specific problem formulations.Figure 2shows the above mentioned key components associated with any anomaly detection technique.Anomaly DetectionTechniqueApplication DomainsMedical InformaticsIntrusion Detection...Fault/Damage DetectionFraud DetectionResearch AreasInformation TheoryMachine LearningSpectral TheoryStatisticsData Mining...Problem CharacteristicsLabels Anomaly Type Nature of Data OutputFig.2.Key components associated with an anomaly detection technique.1.3Related WorkAnomaly detection has been the topic of a number of surveys and review articles,as well as books.Hodge and Austin [2004]provide an extensive survey of anomaly detection techniques developed in machine learning and statistical domains.A broad review of anomaly detection techniques for numeric as well as symbolic data is presented by Agyemang et al.[2006].An extensive review of novelty detection techniques using neural networks and statistical approaches has been presented in Markou and Singh [2003a]and Markou and Singh [2003b],respectively.Patcha 

arXiv:hep-ex/0009016v1 6 Sep 2000

Asymmetries in the production of Λ0, Ξ−, and Ω− hyperons in 500 GeV/c π−–Nucleon Interactions

Fermilab E791 Collaboration

E.M. Aitala,i S. Amato,a J.C. Anjos,a J.A. Appel,e D. Ashery,n S. Banerjee,e I. Bediaga,a G. Blaylock,h S.B. Bracker,o P.R. Burchat,m R.A. Burnstein,f T. Carter,e H.S. Carvalho,a N.K. Copty,ℓ L.M. Cremaldi,i C. Darling,r K. Denisenko,e S. Devmal,c A. Fernandez,k G.F. Fox,ℓ P. Gagnon,b C. Gobel,a K. Gounder,i A.M. Halling,e G. Herrera,d G. Hurvits,n C. James,e P.A. Kasper,f S. Kwan,e D.C. Langs,ℓ J. Leslie,b B. Lundberg,e J. Magnin,a S. MayTal-Beck,n B. Meadows,c J.R.T. de Mello Neto,a R.H. Milburn,p J.M. de Miranda,a A. Napier,p A. Nguyen,g A.B. d'Oliveira,c,k K. O'Shaughnessy,b K.C. Peng,f L.P. Perera,c M.V. Purohit,ℓ B. Quinn,i S. Radeztsky,q A. Rafatian,i N.W. Reay,g J.J. Reidy,i A.C. dos Reis,a H.A. Rubin,f D.A. Sanders,i A.K.S. Santha,c A.F.S. Santoro,a A.J. Schwartz,c M. Sheaff,d,q R.A. Sidwell,g F.R.A. Simão,a A.J. Slaughter,r M.D. Sokoloff,c J. Solano,a N.R. Stanton,g K. Stenson,q D.J. Summers,i S. Takach,r K. Thorne,e A.K. Tripathi,g S. Watanabe,q R. Weiss-Babai,n J. Wiener,j N. Witchey,g E. Wolin,r D. Yi,i S. Yoshida,g R. Zaliznyak,m C. Zhang g

a Centro Brasileiro de Pesquisas Físicas, Rio de Janeiro, Brazil
b University of California, Santa Cruz, California 95064, USA
c University of Cincinnati, Cincinnati, Ohio 45221, USA
d CINVESTAV, Mexico
e Fermilab, Batavia, Illinois 60510, USA
f Illinois Institute of Technology, Chicago, Illinois 60616, USA
g Kansas State University, Manhattan, Kansas 66506, USA
h University of Massachusetts, Amherst, Massachusetts 01003, USA
i University of Mississippi–Oxford, University, MS 38677, USA
j Princeton University, Princeton, New Jersey 08544, USA
k Universidad Autonoma de Puebla, Mexico
ℓ University of South Carolina, Columbia, South Carolina 29208, USA
m Stanford University, Stanford, California 94305, USA
n Tel Aviv University, Tel Aviv 69978, Israel
o Box 1290, Enderby, British Columbia V0E 1VO, Canada
p Tufts University, Medford, Massachusetts 02155, USA
q University of Wisconsin, Madison, Wisconsin 53706, USA
r Yale
University, New Haven, Connecticut 06511, USA

Strange particle production is an important tool for studying how non-perturbative QCD affects light quark production and hadronization. One process that involves both production and hadronization is the leading particle effect. This effect is manifest as an enhancement in the production rate of particles which have one or more valence quarks in common with an initial-state hadron, compared to that of their antiparticles, which have fewer valence quarks in common. This enhancement increases as the momentum of the produced particle increases in the direction of the initial hadron with which the produced particle shares valence quarks. This process has been extensively studied in charm production in recent years from both experimental [1,2] and theoretical [3] points of view. The same type of leading particle effects have been seen in light hadron production [4] and are expected [5]. Other effects, like the associated production of a kaon and a hyperon, can also contribute to an asymmetry in hyperon–antihyperon production [6].

As a byproduct of our charm program in Fermilab experiment E791, we collected a large sample of Λ0, Ξ−, and Ω− hyperons and their antiparticles. The range of x_F (= 2p_L/√s) covered by our experiment was −0.12 ≤ x_F ≤ 0.12. Given E791's negative pion beam and nucleon target, we expect recombination effects [5] to produce different asymmetries in the beam and target fragmentation regions, as the content of valence quarks in the beam and target hadrons differ. A growing asymmetry is expected in Λ0–Λ̄0 production toward the target fragmentation region, since the Λ0 shares two valence quarks with the target nucleons while the Λ̄0 shares none; little asymmetry is expected in the beam fragmentation region, where the Λ0 and Λ̄0 each share one valence quark with the incident pion. For Ξ−–Ξ̄+ production, a growing asymmetry with |x_F| is expected in both regions since the Ξ− shares one valence quark with the π− as well as with the target particles (p and n) whereas the Ξ̄+ shares none.
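The Feynman-x variable used above, x_F = 2p*_L/√s with p*_L the longitudinal momentum in the pion–nucleon center-of-mass frame, can be illustrated with a short fixed-target kinematics sketch. This code is our own illustration, not from the experiment; the function names and the default beam momentum are ours.

```python
import math

M_N = 0.9383   # nucleon (proton) mass [GeV/c^2]
M_PI = 0.1396  # charged-pion mass [GeV/c^2]

def sqrt_s(p_beam):
    """Center-of-mass energy [GeV] for a pion beam of momentum p_beam [GeV/c]
    incident on a nucleon at rest (fixed-target kinematics)."""
    e_beam = math.hypot(p_beam, M_PI)  # beam energy from momentum and mass
    return math.sqrt(M_PI**2 + M_N**2 + 2.0 * M_N * e_beam)

def feynman_x(pl_lab, e_lab, p_beam=500.0):
    """Approximate Feynman x, x_F = 2 p*_L / sqrt(s), for a produced particle
    with lab-frame longitudinal momentum pl_lab [GeV/c] and energy e_lab [GeV]."""
    rs = sqrt_s(p_beam)
    e_tot = math.hypot(p_beam, M_PI) + M_N   # total lab energy of the system
    beta = p_beam / e_tot                    # velocity of the CM frame in the lab
    gamma = e_tot / rs
    pl_cm = gamma * (pl_lab - beta * e_lab)  # Lorentz boost to the CM frame
    return 2.0 * pl_cm / rs

# For a 500 GeV/c pion beam, sqrt(s) is about 30.6 GeV; the beam pion itself
# maps to x_F near +1 and the target nucleon to x_F near -1.
```

The hyperons used in this analysis populate only the small central window −0.12 ≤ x_F ≤ 0.12 of that full range.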
No leading particle effects are expected for the Ω− or Ω̄+, as they have no valence quarks in common with either the target or the beam. Measurements of the Λ0–Λ̄0 asymmetry can be found in the experiments listed in [9], but in general light hadron production asymmetries in π−N interactions have not been studied extensively.

Experiment E791 recorded data from 500 GeV/c π− interactions in five thin foils (one platinum and four diamond) separated by gaps of 1.38 to 1.39 cm. Each foil had a thickness of approximately 0.4% of a pion interaction length (0.5 mm for the upstream platinum target, and 1.6 mm for each of the carbon targets). The E791 spectrometer [10] in the Fermilab Tagged Photon Laboratory was a large-acceptance, two-magnet spectrometer augmented by eight planes of multiwire proportional chambers (MWPC) and six planes of silicon microstrip detectors (SMD) for beam tracking. The magnets provided a total transverse momentum kick of 512 MeV/c. Downstream of the target there were 17 planes of SMDs for track and vertex reconstruction, 35 drift chamber planes, two MWPCs, two multicell threshold Čerenkov counters, electromagnetic and hadronic calorimeters (with apertures about 70 by 140 mr), and a muon detector. An important element of the experiment was its extremely fast data acquisition system [11], which was combined with a very open transverse-energy trigger to record a data sample of 2×10^10 events. The trigger required that the total "transverse energy" (i.e., the sum of the products of the energy observed times the tangent of the angle from the target to each calorimeter segment) be at least 3 GeV.

For this analysis we use only interactions in the isoscalar carbon targets, so that our results truly represent a "nucleon", that is, the average of neutrons and protons. Most Λ0's decay before entering the drift chamber region (150 cm downstream of the targets) but downstream of the end of the silicon vertex detectors (50 cm from the targets), while some Ξ−'s and Ω−'s decay in the silicon region. Throughout this paper, references to a particle should be
taken to include its antiparticle unless explicitly stated otherwise.

Λ0's were reconstructed using the pπ− decay mode. The proton and π− tracks were required to have a distance of closest approach of less than 0.7 cm at the decay vertex and an invariant mass between 1.101 and 1.127 GeV/c². In addition, the ratio of the momentum of the proton to that of the pion was required to be larger than 2.5. The reconstructed Λ0 decay vertex formed by the two tracks was required to be downstream of the last target but upstream of the first magnet. For the Λ0 production study, we removed Λ0's coming from Ξ− decay by requiring that the Λ0 candidates have an impact parameter with respect to the primary vertex of less than 0.3 cm if decaying within the first 20 cm downstream of the target, and less than 0.4 cm otherwise. After these cuts, the remaining Ξ− contamination was ≈1.5%, having a negligible effect on the Λ0–Λ̄0 asymmetry. For the Λ0 analysis we use data from approximately 7% of the overall sample recorded for the experiment. The total signal, taken as the sum of the background-subtracted signal in each bin, was 2587870 ± 1780 Λ0's and 1690030 ± 1500 Λ̄0's.

The Ξ− and Ω− candidates were reconstructed through the decays Ξ− → Λ0π− and Ω− → Λ0K− (see Fig. 1). For the accepted kaon momentum range, the Čerenkov kaon identification efficiency was about 85% and the pion misidentification rate was about 5%. As with the Λ0 sample, the Ξ−, Ξ̄+, Ω−, and Ω̄+ invariant mass plots were fit to a Gaussian signal plus linear background for each interval of xF and pT². The total reconstructed mass distributions are shown in Figs. 1(c) through (f). With the final selection criteria and the full E791 data set, we found 996180 ± 1200 Ξ−, 706620 ± 1020 Ξ̄+, 8750 ± 110 Ω−, and 7460 ± 100 Ω̄+ after background subtraction. Again, these numbers and their errors come from the sum of the signals in all bins. We checked that the Ξ− contamination of the Ω− sample, after all cuts, was negligible.

For each xF and pT² bin and for each hyperon Y, we defined an asymmetry parameter A as

A_{Y−Ȳ} = (N_Y − r_Y N_Ȳ) / (N_Y + r_Y N_Ȳ);  r_Y = ε_Y / ε_Ȳ,   (1)

where N_Y (N_Ȳ) is the number of observed hyperons (antihyperons) and ε_Y (ε_Ȳ) is the product of the geometrical acceptance and reconstruction efficiency for each hyperon (antihyperon). Values for
the N ’s were obtained from the individual fits to the mass plots for events selected to lie within each x F and p 2T range.Selection criteria for the particle and antiparticle samples were identical.How-ever,geometrical acceptances and reconstruction efficiencies were not neces-sarily the same,mostly due to an inefficient region in the drift chambers produced by the negative pion beam.To evaluate this effect,a large sample of Monte Carlo (MC)events was created using the Pythia /Jetset event gen-erators [12].These were projected through a detailed simulation of the E791spectrometer to simulate “data”in digitized format which was then processed through the same computer reconstruction code as that used for data from the detector.Candidate events were then subjected to the same selection criteria as that used for data.To account for correlations between x F and p 2T ,efficien-cies were determined in bins of the two parameters.The ratios of efficiencies ǫY /ǫhyperon the asymmetry is presented as functions of xFfor different intervalsof p2T ,and also as functions of p2Tfor different intervals of xF.The asymmetriesshow a substantial dependence on xF and some dependence on p2T.There isevidence for a correlation between xF and p2T.The asymmetry A(xF )integrated over p2T,and the asymmetry A(p2T)integratedover xF,are presented in Fig.5.Thisfigure shows asymmetries for all three types of hyperons.The results are also listed in Table1along with statisticaland systematic errors.We looked for systematic effects from the following sources:•Event selection criteria;•The minimum transverse energy in the calorimeters required in the event trigger;•Uncertainties in calculating relative efficiencies for particle and antiparticle;•Effect of the2.5%K−contamination in the beam.Thefirst two effects were found to be negligible.A significant error came from the spectrometer efficiencies,and these uncertainties are included as systematic errors in Table1.The effect of the K−contamination in 
the beam was difficult to estimate, as no data on hyperon asymmetries in K− production in this xF range existed. However, even for equal π− and K− production of Λ0's (as opposed to Λ̄0's), only a 1.5% change would occur in the final result.

As for the π− (ūd) beam, no rise in the asymmetry in the positive xF direction was expected, since the Λ0 (uds) and the Λ̄0 each share only one valence quark with the beam. Consequently the Λ0–Λ̄0 asymmetry grows in the backward region but not for xF > 0. The Ω−–Ω̄+ asymmetry appears to be constant. Notice that neither the Ω− nor the Ω̄+ shares valence quarks with either the beam or target particles.

The observed positive asymmetry for the Ω could arise, in part, from the associated production of kaons. This could also be the origin of the observed positive asymmetry near xF = 0.0 for the Ξ's and Λ's. Part of the associated-production enhancement of baryons, as opposed to antibaryons, may come from the higher energy thresholds for the production of antibaryons. For production of hyperons, the conservation of strangeness requires only the associated production of one or more kaons. For production of an antihyperon, baryon number as well as strangeness must be conserved, requiring the associated production of at least two additional baryons, thus raising the energy thresholds and favoring particle over antiparticle production [6].

The behavior of the three asymmetries shown in Fig. 5 gives evidence for the leading particle effect. In the backward region (xF < 0), a larger asymmetry is observed when there is a larger difference between the hyperon and the antihyperon in the number of valence quarks shared with the target. Evidence for a similar effect in the production of D± mesons in the forward region (xF > 0) was presented by the E791 collaboration [1] and others [13]. The E791 collaboration has also studied asymmetries in the production of D_s± [2] and Λ_c± [14].
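The asymmetry of Eq. (1) and a simple statistical error are straightforward to compute once the fitted yields and the efficiency ratio are in hand. The following is a minimal sketch, not E791 analysis code; the example yields are invented:

```python
import math

def asymmetry(n, m, r, sigma_r=0.0):
    """Eq. (1): A = (N_Y - r*N_Ybar)/(N_Y + r*N_Ybar), r = eps_Y/eps_Ybar.

    Returns (A, sigma_A), propagating Poisson errors on the yields n, m
    and an optional independent uncertainty sigma_r on the ratio r.
    """
    s = n + r * m
    a = (n - r * m) / s
    var = ((2 * r * m / s**2) ** 2 * n              # (dA/dn)^2 * sigma_n^2, sigma_n^2 = n
           + (2 * r * n / s**2) ** 2 * m            # (dA/dm)^2 * sigma_m^2, sigma_m^2 = m
           + (2 * n * m / s**2) ** 2 * sigma_r**2)  # (dA/dr)^2 * sigma_r^2
    return a, math.sqrt(var)

# Invented yields for a single (xF, pT^2) bin:
a, da = asymmetry(n=12000, m=8000, r=1.05, sigma_r=0.01)
print(f"A = {a:.3f} +/- {da:.3f}")
```

For r = 1 and equal yields the error reduces to the familiar 1/sqrt(N_tot), which provides a quick sanity check on the propagation.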
However, leading particle effects for charmed hadrons typically occur at larger values of xF than they do for strange hadrons.

The Pythia/Jetset [12] model describes only some features of our results, and those only qualitatively, as can be seen in Fig. 5. This model predicts small values of the asymmetry for xF ∼ 0; this is in contrast with our results, which range from 0.08 to 0.18 in this region. Our data show that, even in the central region, the asymmetries are not zero, and suggest that leading particle effects play an even larger role than expected as |xF| increases.

In summary, we report the most precise, systematic study to date of the production asymmetry for Λ, Ξ, and Ω hyperons in a single experiment. The range of xF covered, −0.12 ≤ xF ≤ 0.12, allows the study of asymmetries in regions close to xF = 0 for the first time in a fixed-target experiment. Some evidence for possible correlations between xF and pT² is observed (see Figs. 3 and 4). Our results for particle–antiparticle asymmetries are consistent with the experimental results obtained by previous experiments [7,8] (see Table 2) but with smaller uncertainties. Our results can be described qualitatively in terms of the energy thresholds for the production of hyperons and antihyperons together with their associated particles, and a model in which the recombination of valence and sea quarks in the beam and target particles contributes to hyperon and antihyperon production in an asymmetric manner [5].

We gratefully acknowledge the assistance of the staffs of Fermilab and of all the participating institutions. This research was supported by the Brazilian Conselho Nacional de Desenvolvimento Científico e Tecnológico, CONACyT (Mexico), the U.S.–Israel Binational Science Foundation, the U.S. National Science Foundation, and the U.S. Department of Energy. Fermilab is operated by Universities Research Association, Inc., under contract with the United States Department of Energy.

References

[1] E. M. Aitala et al. (E791 Collaboration), Phys. Lett. B 371 (1996) 157; G. A. Alves et
al. (E769 Collaboration), Phys. Rev. Lett. 72 (1994) 812 and 72 (1994) 1946; M. Adamovich et al. (WA92 Collaboration), Nucl. Phys. B 495 (1997) 3.

[2] E. M. Aitala et al. (E791 Collaboration), Phys. Lett. B 411 (1997) 230.

[3] V. G. Kartvelishvili, A. K. Likhoded, and S. R. Slobospitskii, Sov. J. Nucl. Phys. 33 (1981) 434; R. C. Hwa, Phys. Rev. D 51 (1995) 85; R. Vogt and S. J. Brodsky, Nucl. Phys. B 478 (1996) 311; B. W. Harris, J. Smith, and R. Vogt, Nucl. Phys. B 461 (1996) 181; G. Herrera and J. Magnin, Eur. Phys. J. C 2 (1998) 477.

[4] L. G. Pondrom, Physics Reports 122, Nos. 2 and 3 (1985) 57–172.

[5] J. C. Anjos, J. Magnin, F. R. A. Simão, and J. Solano, Proceedings of II Silafae, p. 540, AIP Conf. Proc. No. 444 (1998), hep-ph/9806396; W. G. D. Dharmaratna and G. R. Goldstein, Phys. Rev. D 41 (1990) 1731.

[6] A. Capella, U. Sukhatme, C. I. Tan, and J. Tran Thanh Van, Phys. Rev. D 36 (1987) 109.

[7] S. Barlag et al. (ACCMOR Collaboration), Phys. Lett. B 325 (1994) 531.

[8] S. Barlag et al. (ACCMOR Collaboration), Phys. Lett. B 233 (1989) 522.

[9] S. Mikocki et al. (E580 Collaboration), Phys. Rev. D 34 (1986) 42; R. T. Edwards et al. (E415 Collaboration), Phys. Rev. D 18 (1978) 76; N. N. Biswas et al. (E002, E281, and E597 Collaborations), Nucl. Phys. B 167 (1980) 41; D. Bogert et al. (E234 Collaboration), Phys. Rev. D 16 (1977) 2098.

[10] J. A. Appel, Ann. Rev. Nucl. Part. Sci. 42 (1992) 367, and references therein; D. J. Summers et al. (E791 Collaboration), Proceedings of the XXVIIth Rencontre de Moriond, Electroweak Interactions and Unified Theories, Les Arcs, France (15–22 March 1992) 417; E. M. Aitala et al. (E791 Collaboration), EPJdirect C 4 (1999) 1.

[11] S. Amato et al., Nucl. Instr. and Methods A 324 (1993) 535.

[12] Pythia 5.7 and Jetset 7.4 Physics Manual, CERN-TH-7112/93 (1993); H. U. Bengtsson and T. Sjöstrand, Computer Physics Commun. 46 (1987) 43; T. Sjöstrand, CERN-TH-7112/93 (1993).

[13] G. A. Alves et al. (E769 Collaboration), Phys. Rev. Lett. 77 (1996) 2388; G. A. Alves et al. (E769 Collaboration), Phys. Rev. Lett. 81 (1998) 1537; M. Adamovich et al. (WA82 Collaboration), Phys. Lett. B 305 (1993) 402.
[14] E. M. Aitala et al. (E791 Collaboration), submitted to Phys. Lett. B (2000), hep-ex/0008020.

Fig. 1. Effective mass distributions for the decay products of hyperons in events selected for this study, and the corresponding signals with background subtracted. The plots correspond to (a) Λ0 → pπ−; (b) Λ̄0 → p̄π+; (c) Ξ− → Λ0π−; (d) Ξ̄+ → Λ̄0π+; (e) Ω− → Λ0K−; (f) Ω̄+ → Λ̄0K+. The numbers in the top corner of each plot ((2588±2)×10³, (1690±2)×10³, (996±1)×10³, (707±1)×10³, 8750±110, and 7460±100, respectively) come from the sum of the background-subtracted events in the individual (xF, pT²) bin fits.

Fig. 2. Ratio of efficiencies as functions of xF and pT² for the three hyperons. The transverse momentum axes are in units of (GeV/c)².

Fig. 3. The Λ0–Λ̄0, Ξ−–Ξ̄+, and Ω−–Ω̄+ asymmetries A(xF) in intervals of pT².

Fig. 4. The Λ0–Λ̄0, Ξ−–Ξ̄+, and Ω−–Ω̄+ asymmetries A(pT²) in intervals of xF.

Fig. 5. Comparison among the asymmetries in the production of the three hyperons for the E791 data and the Pythia/Jetset predictions. The error bars of the E791 data include the statistical and systematic uncertainties.

Table 1. Final asymmetries of Λ, Ξ, and Ω, showing the statistical plus systematic errors.
The statistical errors given include those due to the number of observed events and the number of MC events. Typically, the error is dominated by the MC uncertainty.

  xF bin           A(Λ0−Λ̄0)            A(Ξ−−Ξ̄+)            A(Ω−−Ω̄+)
  −0.12 – −0.09    0.392±0.007±0.001   0.286±0.014±0.001   0.047±0.096±0.028
  −0.09 – −0.06    0.315±0.005±0.001   0.220±0.010±0.001   0.128±0.046±0.005
  −0.06 – −0.03    0.240±0.004±0.001   0.182±0.007±0.001   0.100±0.030±0.001
  −0.03 – 0.00     0.179±0.002±0.001   0.154±0.007±0.001   0.099±0.027±0.002
  0.00 – 0.03      0.136±0.002±0.001   0.139±0.009±0.001   0.106±0.028±0.001
  0.03 – 0.06      0.111±0.005±0.001   0.158±0.012±0.001   0.085±0.038±0.005
  0.06 – 0.09      0.094±0.008±0.001   0.196±0.017±0.001   0.075±0.058±0.011
  0.09 – 0.12      0.086±0.012±0.003   0.250±0.025±0.001   0.129±0.106±0.018

  pT² bin (GeV/c)²  A(Λ0−Λ̄0)            A(Ξ−−Ξ̄+)            A(Ω−−Ω̄+)
  0.0 – 0.5        0.206±0.002±0.001   0.162±0.007±0.001   0.094±0.034±0.001
  0.5 – 1.0        0.198±0.002±0.001   0.173±0.006±0.001   0.088±0.026±0.001
  1.0 – 1.5        0.209±0.003±0.001   0.191±0.005±0.001   0.095±0.029±0.001
  1.5 – 2.0        0.224±0.003±0.001   0.207±0.006±0.001   0.099±0.036±0.001
  2.0 – 2.5        0.233±0.006±0.002   0.215±0.008±0.001   0.100±0.042±0.003
  2.5 – 3.0        0.245±0.008±0.005   0.229±0.011±0.001   0.123±0.052±0.011
  3.0 – 3.5        0.264±0.011±0.009   0.228±0.015±0.001   0.080±0.062±0.017
  3.5 – 4.0        0.254±0.015±0.015   0.235±0.020±0.001   0.163±0.080±0.019

Table 2. Comparison of the integrated asymmetries with previous results.

                 E791                E791              E791             ACCMOR
                 −0.12 ≤ xF ≤ 0.12   −0.12 ≤ xF ≤ 0    0 ≤ xF ≤ 0.12    0 ≤ xF ≤ 0.35
  A(Λ0−Λ̄0)       0.207±0.001         0.242±0.002       0.124±0.002      0.119±0.009 [7]
  A(Ξ−−Ξ̄+)       0.176±0.004         0.186±0.004       0.150±0.007      0.130±0.050 [8]
  A(Ω−−Ω̄+)       0.099±0.013         0.098±0.018       0.100±0.021      0.107±0.070 [8]
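Each yield in the tables comes from a Gaussian-signal-plus-linear-background fit in a single (xF, pT²) bin. A self-contained toy of the closely related sideband method (a linear background fitted outside the signal window and subtracted inside it) conveys the idea; all numbers here are invented, and this is a simplified stand-in for, not a reproduction of, the experiment's fitter:

```python
import random

random.seed(1)

# Toy invariant-mass spectrum: a Gaussian "Lambda0" peak near 1.1157 GeV/c^2
# on a linear (here flat) background; all numbers are invented.
LO, HI, NBINS = 1.090, 1.140, 100
width = (HI - LO) / NBINS
counts = [0] * NBINS

for _ in range(20000):                       # background events
    m = LO + (HI - LO) * random.random()
    counts[min(int((m - LO) / width), NBINS - 1)] += 1
for _ in range(5000):                        # signal events
    m = random.gauss(1.1157, 0.002)
    if LO <= m < HI:
        counts[int((m - LO) / width)] += 1

centers = [LO + (i + 0.5) * width for i in range(NBINS)]
win_lo, win_hi = 1.108, 1.124                # signal window (cf. the 1.101-1.127 cut)

# Closed-form linear least squares on the sideband bins only.
xs = [x for x in centers if not (win_lo <= x <= win_hi)]
ys = [c for x, c in zip(centers, counts) if not (win_lo <= x <= win_hi)]
npts, sx, sy = len(xs), sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (npts * sxy - sx * sy) / (npts * sxx - sx * sx)
intercept = (sy - slope * sx) / npts

# Background-subtracted yield inside the signal window.
raw = sum(c for x, c in zip(centers, counts) if win_lo <= x <= win_hi)
bkg = sum(intercept + slope * x for x in centers if win_lo <= x <= win_hi)
print(f"raw = {raw}, background = {bkg:.0f}, signal = {raw - bkg:.0f}")
```

A full Gaussian-plus-linear fit, as used in the paper, additionally returns the peak position and width, but the background-subtracted yield is the quantity that enters the asymmetry of Eq. (1).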
