Exact Path Integral Quantization of 2-D Dilaton Gravity
Generalized Berezin quantization, Bergman metrics and fuzzy Laplacians
TCDMATH 08-04
arXiv:0804.4555v2 [hep-th] 9 Sep 2008
Calin Iuliu Lazaroiu, Daniel McNamee and Christian Sämann
Trinity College Dublin, Dublin 2, Ireland; calin, danmc, saemann@maths.tcd.ie
Abstract: We study extended Berezin and Berezin-Toeplitz quantization for compact Kähler manifolds, two related quantization procedures which provide a general framework for approaching the construction of fuzzy compact Kähler geometries. Using this framework, we show that a particular version of generalized Berezin quantization, which we baptize "Berezin-Bergman quantization", reproduces recent proposals for the construction of fuzzy Kähler spaces. We also discuss how fuzzy Laplacians can be defined in our general framework and study a few explicit examples. Finally, we use this approach to propose a general explicit definition of fuzzy scalar field theory on compact Kähler manifolds. Keywords: Non-Commutative Geometry, Differential and Algebraic Geometry.
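As a concrete point of reference for the fuzzy Laplacians discussed in the abstract, the sketch below builds the simplest standard example, the fuzzy sphere, where functions become N x N matrices and the Laplacian acts by Δf = Σ_i [L_i, [L_i, f]] with L_i the su(2) generators in the N-dimensional irreducible representation. This is only an illustration of the general notion, not the Berezin-Bergman construction of the paper; the code and its conventions are ours.

```python
import numpy as np

def su2_generators(N):
    """su(2) generators L1, L2, L3 in the N-dimensional (spin j = (N-1)/2) irrep."""
    j = (N - 1) / 2.0
    m = j - np.arange(N)                      # magnetic quantum numbers j, j-1, ..., -j
    L3 = np.diag(m)
    Lp = np.zeros((N, N))
    for k in range(1, N):                     # L_+ |j, m_k> = sqrt(j(j+1) - m_k(m_k+1)) |j, m_k + 1>
        Lp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    Lm = Lp.T
    return 0.5 * (Lp + Lm), (Lp - Lm) / (2 * 1j), L3

def fuzzy_laplacian(N):
    """Matrix Laplacian Δf = sum_i [L_i, [L_i, f]] written as an N^2 x N^2 operator
    acting on row-major vectorized N x N matrices."""
    I = np.eye(N)
    ads = [np.kron(L, I) - np.kron(I, L.T) for L in su2_generators(N)]
    return sum(a @ a for a in ads)

if __name__ == "__main__":
    N = 4
    eigs = np.round(np.linalg.eigvalsh(fuzzy_laplacian(N)), 6)
    print(sorted(eigs))   # expect l(l+1), l = 0..N-1, each with multiplicity 2l+1
```

For N = 4 this prints the eigenvalues l(l+1) with l = 0, 1, 2, 3 and multiplicities 2l+1, i.e. the expected truncation of the round-sphere spectrum.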
A new goodness of fit test: the reversed Berk-Jones statistic
Leah Jager¹ and Jon A. Wellner², University of Washington
January 23, 2004; revised July 22 and 29, 2005

Abstract

Owen inverted a goodness of fit statistic due to Berk and Jones to obtain confidence bands for a distribution function using Noé's recursion. As argued by Owen, the resulting bands are narrower in the tails and wider in the center than the classical Kolmogorov-Smirnov bands and have certain advantages related to the optimality theory connected with the test statistic proposed by Berk and Jones. In this article we introduce a closely related statistic, the "reversed Berk-Jones statistic", which differs from the Berk and Jones statistic essentially because of the asymmetry of the Kullback-Leibler information in its two arguments. We parallel the development of Owen for the new statistic, giving a method for constructing the confidence bands using the recursion formulas of Noé to compute rectangle probabilities for order statistics. Along the way we uncover some difficulties in Owen's calculations and give appropriate corrections. We also compare the exclusion probabilities (corresponding to the power of the tests) of our new bands with the (corrected version of) Owen's bands for a simple Lehmann type alternative considered by Owen and show that our bands are preferable over a certain range of alternatives.

1 Introduction

Consider the classical goodness-of-fit testing problem: based on X_1, ..., X_n i.i.d. F, test

H_0: F(x) = F_0(x) for all x ∈ R (1)

versus

H_1: F(x) ≠ F_0(x) for some x ∈ R (2)

where F_0 is a fixed continuous distribution function. Berk and Jones (1979) introduced the test statistic R_n, which is defined as

R_n = sup_{−∞ < x < ∞} K(F_n(x), F_0(x)), (3)

where

K(x, y) = x log(x/y) + (1 − x) log((1 − x)/(1 − y)), (4)

and F_n is the empirical distribution function of the X_i's, given by F_n(x) = (1/n) Σ_{i=1}^n 1{X_i ≤ x}. The reversed statistic studied below interchanges the two arguments of K: R̃_n = sup_{X_(1) ≤ x < X_(n)} K(x, F_n(x)).

2 Exact quantiles of the null distribution of R̃_n for finite n

2.1 Exact null distribution of R̃_2

Proposition 1. Under the null hypothesis,

P(R̃_2 ≤ x) = r_x², 0 ≤ x ≤ log 2, (7)

where 0 ≤ r_x ≤ 1 is the unique solution of

(1 − r_x) log(1 − r_x) + (1 + r_x) log(1 + r_x) = 2x. (8)

Proof. Without loss of generality we can take F_0 to be the uniform distribution on [0,1], F_0(x) = x, 0 ≤ x ≤ 1. Note that

R̃_2 = sup_{X_(1) ≤ x < X_(2)} K(x, F_2(x)) = max{K(X_(1), 1/2), K(X_(2), 1/2)} = max{K(X_1, 1/2), K(X_2, 1/2)},

where X_1, X_2 ∼ Uniform[0,1] are independent. Thus we calculate

P(K(U, 1/2) ≤ x) = P(l_x ≤ U ≤ u_x), (9)

where u_x ≥ 1/2 solves K(u_x, 1/2) = x and l_x ≤ 1/2 solves K(l_x, 1/2) = x. Since K(1/2 − t, 1/2) = K(1/2 + t, 1/2) for 0 ≤ t ≤ 1/2, it is clear that u_x − 1/2 = 1/2 − l_x, or u_x = 1 − l_x. Hence it follows that (9) equals

P(l_x ≤ U ≤ 1 − l_x) = P(1/2 − v_x ≤ U ≤ 1/2 + v_x) = 2 v_x, (10)

where v_x = u_x − 1/2 ∈ [0, 1/2] satisfies K(1/2 + v_x, 1/2) = x. But this means that r_x = 2 v_x satisfies (8). The conclusion follows since

P(R̃_2 ≤ x) = P(max{K(X_1, 1/2), K(X_2, 1/2)} ≤ x) = P(K(U, 1/2) ≤ x)² = (2 v_x)² = r_x². □

To find the 1 − α quantile we solve

P(R̃_2 ≤ λ̃_2^{1−α}) = 1 − α, (11)

that is, r = √(1 − α), and then (8) gives

λ̃_2^{1−α} = (1/2)[(1 − √(1−α)) log(1 − √(1−α)) + (1 + √(1−α)) log(1 + √(1−α))]. (12)

The 0.95 quantile is then λ̃_2^{0.95} = 0.625251. The 0.99 quantile is λ̃_2^{0.99} = 0.675634.

2.2 Quantiles of the null distribution of R̃_n for n > 2

Owen (1995) computed exact quantiles of the Berk-Jones statistic under the null distribution for finite n using a recursion of Noé (1972). Using an analogous method, we compute exact quantiles of the reversed statistic using this recursion. We want to calculate λ̃_n^{1−α} such that P(R̃_n ≤ λ̃_n^{1−α}) = 1 − α. With this λ̃_n^{1−α} we can form 1 − α confidence bands for F by finding L̃_n(x) and H̃_n(x) (depending on the data) such that P(R̃_n ≤ λ̃_n^{1−α}) = P(L̃_n(x) ≤ F(x) ≤ H̃_n(x), x ∈ R). We can rewrite this probability in terms of the order statistics, and then use the recursions due to Noé (1972) to compute it. Our procedure is as follows. We want to find λ̃_n^{1−α} for a given confidence interval corresponding to 1 − α. Given this λ̃_n^{1−α}, we calculate a confidence band of the form {ã_i, i = 1, ..., n} and {b̃_i, i = 1, 2, ..., n} such that P(R̃_n ≤ λ̃_n^{1−α}) = P(ã_i < X_(i) ≤ b̃_i, i = 1, 2, ..., n). To see how to calculate {ã_i} and {b̃_
i},we look at the reversed statistic,˜R n,itself.Similarly to the n=2case above,we can separate the event[˜R n≤˜λn]into parts associated with each order statistic.Now˜Rn=supX(1)≤x<X(n)K(x,F n(x))=K(X(1),1n),K(X(i),in).So the event[˜R n≤˜λn]is equivalent to the intersection of the eventsK(X(1),1n ),K(X(i),in)≤˜λn.(15)Here we have managed to divide the event into smaller events relating to each order statistic separately.In order to compute thefinite sample quantiles,we are looking for{˜a i}n i=1and{˜b i}n i=1such that P(˜R n≤˜λn)=P(˜a i<X(i)≤˜b i,1≤i≤n).(Note that we have deliberately chosen somewhat different notation from Owen(1995).)Splitting the event[˜R n≤˜λn]into events(13),(14),and (15),we can define{˜a i}and{˜b i}in terms of these smaller events.From(13),K(X(1),1n)≤˜λn},(16)˜a1=min{x|K(x,1n)≤˜λn},˜a n=min{x|K(x,n−1Finally,the event(14)yields that for2≤i≤n−1,˜bi=max{x|max{K(x,i−1n)}≤˜λn}=max{x|K(x,i−1n)≤˜λn},(18)˜a i=min{x|max{K(x,i−1n)}≤˜λn}=min{x|K(x,i−1n)≤˜λn}.(19)The above equations for2≤i≤n−1are not as easy to deal with as those for the cases wherei=1and i=n.However,by noticing the relationship between K(x,i−1n ),we cansimplify further.To do this we use the following claim.Claim1.Let˜λn>0.Then for anyfixed y1and y2such that0<y1<y2<1,(i)max{x|K(x,y1)≤˜λn,K(x,y2)≤˜λn}=max{x|K(x,y1)≤˜λn},(ii)min{x|K(x,y1)≤˜λn,K(x,y2)≤˜λn}=min{x|K(x,y2)≤˜λn}, provided{x|K(x,y1)≤˜λn,K(x,y2)≤˜λn}is not empty.Proof.(i)First,note that∂y−log1−x∂xK(x,y1)>∂n)≤˜λn},(20)˜a i=min{x|K(x,iWe can further simplify the calculation by noticing that˜a i=1−˜b n−i+1for1≤i≤n.This is shown in the following two claims.Claim2.Letλ>0.Then for anyfixed y,max{x|K(x,y)≤˜λ}=1−min{x|K(x,1−y)≤˜λ}(22) Proof.Fix y.Now∂y−log1−x)≤˜λn}ni=1−max{x|K(x,1−)≤˜λn}n=1−˜b n−i+1.The cases i=1and i=n are trivial.2 Now that we have defined{˜b i}for all i,and thus defined{˜a i},we can calculate P(˜R n≤˜λn)= P(˜a i<X(i)≤˜b i,1≤i≤n)by a recursion due to No´e(1972).The computing process involved in finding the values of{˜a i},{˜b i},and˜λn follows the method outlined in Owen(1995),using the Van Wijngaarden-Decker-Brent method tofirstfind the{˜a i}and{˜b i}corresponding to a particular˜λnand then reapplying the same Van Wijngaarden-Decker-Brent method again to solve for the˜λ1−αn associated with the1−αquantile.There is,however,one slight complication in these calculations compared to the method outlined in Owen(1995).When calculating confidence bands by inverting the Berk-Jones statistic,we lookat K(x,y)as a function of y for afixed x.For each x∈(0,1),K(x,y)is a continuous function of y with a minimum of0at y=x that tends to∞as y→0or y→1.Therefore for anyλ>0there exists a y∗such that K(x,y∗)=˜λ.For the reversed statistic,however,we look at K(x,y)as a function of x for afixed y.Again, K(x,y)is a continuous function of x with a minimum of0at x=y.But K(0,y)=log1<∞.So we are not guaranteed that for each˜λ>0there exists an x∗such ythat K(x∗,y)=˜λ.Thus care must be taken when looking at˜b i as defined in(20).If there is no x∗satisfying K(x∗,i−1Proof.Note thatR n =sup 0≤x ≤1K (F n (x ),x )=max 1≤i ≤n{K (i −1n,X (i ))}.Thus for n =1we have (interpreting 0log 0=0)R 1=log1X 1.(25)It follows thatP (R 1≤x )=P (log1X 1≤x )=P (e −x≤X 1≤1−e −x )=(1−2e −x )1[log 2,∞)(x ).2Knowing the exact distribution of R 1allows us to calculate the exact quantiles for n =1.We do this by solvingP (R 1≤λ1−α1)=1−α(26)for λ1−α1given a value of 1−α.This implies that the 1−αquantile λ1−α1of R 1is given by λ1−α1=−logαn,X (i )),K (i1−X (1),K (1n,X (i )),K (in,X (n )),log11−X (1),K (1n 
,X (i )),K (in,X (n )),log1Again we are looking for numbers{a i}n i=1and{b i}n i=1(which depend also on n andλn, dependence suppressed in the notation)such thatP(R n≤λn)=P(a i<X(i)≤b i,1≤i≤n).Splitting the event[R n≤λn]into events(27),(28),and(29),we can define{a i}and{b i}in terms of these smaller events,as we did in the case of the reversed statistic.From(27),we see thatb1=max{x|max{log 1n,x)}≤λn}=max{x|log 1n,x)≤λn},a1=min{x|max{log 1n,x)}≤λn}=min{x|log 1n,x)≤λn}.Similarly,because of(29),we haveb n=max{x|max{K(n−1x}≤λn}=max{x|K(n−1x≤λn},a n=min{x|max{K(n−1x}≤λn}=min{x|K(n−1x≤λn}.Finally,event(28)gives us that for2≤i≤n−1,b i=max{x|max{K(i−1n,x)}≤λn}=max{x|K(i−1n,x)≤λn}(30)a i=min{x|max{K(i−1n,x)}≤λn}=min{x|K(i−1n,x)≤λn}.(31)Again,we can simplify these expressions for a i and b i by noticing a few things.We begin with the following claim.Claim4.Letλn>0.Then for anyfixed x1and x2such that0<x1<x2<1,(i)max{y|K(x1,y)≤λn,K(x2,y)≤λn}=max{y|K(x1,y)≤λn},(ii)min{y|K(x1,y)≤λn,K(x2,y)≤λn}=min{y|K(x2,y)≤λn}, provided{y|K(x1,y)≤λn,K(x2,y)≤λn}is not empty.Proof.(i)First,note that∂y(1−y).So K decreases in y on the interval[0,x),has a minimum of0at y=x,and increases in y on the interval(x,1].This means that max{y|K(x,y)≤λn}will occur on the interval(x,1].Nowfix x1and x2such that x1<x2.Then K(x1,y)and K(x2,y)will have a point of intersection at c in the interval(x1,x2).That is,K(x1,c)=K(x2,c).Now K(x1,y)is increasing in y on the interval(c,x2],while K(x2,y)is decreasing in y on this same interval.So K(x2,y)<K(x1,y)on(c,x2].But∂∂y K(x2,y)for all y.So K(x2,y)<K(x1,y)for all y in(c,1].There are three cases to consider.First,supposeλn>K(x1,c),where again,c is the point of intersection.Then max{y|K(x1,y)≤λn,K(x2,y)≤λn}>c.Since we have shown that K(x2,y)< K(x1,y)for all y>c,the maximum y value where both K(x1,y)≤λn and K(x2,y)≤λn is the same as the maximum y for which K(x1,y)≤λn.So max{y|K(x1,y)≤λn,K(x2,y)≤λn}= max{y|K(x1,y)≤λn}.Now supposeλn=K(x1,c).Then max{y|K(x1,y)≤λn,K(x2,y)≤λn}=c= max{y|K(x1,y)≤λn}.Finally,supposeλn<K(x1,y).Then there is no y in the interval[0,1]that satisfies max{y|K(x1,y)≤λn,K(x2,y)≤λn},since K(x1,y)>λn for y≥c and K(x2,y)>λn for y≤c.(ii)The result for the minimum is proved in a similar way.2 This claim allows us to simplify(30)and(31)to be(for2≤i≤n−1)b i=max{x|K(i−1n,x)≤λn}.(33) We can also simplify the cases where i=1and i=n.These cases can be written asb1=max{x|log1n,x)≤λn},(35)b n=max{x|K(n−1x≤λn}=e−λn.(37) The rationale for this is as follows.First the i=1case.Notice that log11−yis increasing on (0,c)and K(x,y)is decreasing on this interval,log1∂y log11−y>1y)=∂1−y>K(x,y)onthe interval(c,1).This gives the result for b1.The case for i=n is similar,except that log11−y case.Finally,as in the case of the reversed statistic described above,we see that we can once again define the{a i}in terms of the{b i}as a i=1−b n−i+1for1≤i≤n.The following two claims (analogous to claims2and3in the case of the reversed statistic)show this.Claim5.Letλ>0.Then for anyfixed x,max{y|K(x,y)≤λ}=1−min{y|K(1−x,y)≤λ}(38)Proof.Fix x.Now∂y(1−y).So K decreases in y on the interval[0,x),has a minimum of0at y=x,and increases in y on the interval(x,1].This means that max{y|K(x,y)≤λ}will occur on the interval(x,1],while min{y|K(x,y)≤λ}will occur on the interval[0,x).Also,notice that for all x and y,K(x,y)=K(1−x,1−y).Now,since K(x,y)→∞as x→1for afixed y,there exists a y∗in(x,1]such that K(x,y∗)=λ. 
So y∗=max{y|K(x,y)≤λ}.But K(1−x,1−y∗)=λas well.Now1−y∗is in the interval [0,1−x).So1−y∗=min{y|K(1−x,y)≤λ}.Thus the result is proved.2 Claim6.For1≤i≤n,a i=1−b n−i+1.(39) Proof.From(32)and(33)we have that for2≤i≤n−1a i=min{x|K(in,x)≤λn}by Claim5=1−max{x|K(n−iP(L i−1<X(i)≤H i,i=1,2,···,n).As defined by Owen,in the two displays below his formula (7),page517,{H i}and{L i}becomeH i=max{x|K(in ,x)≤λn},1≤i≤n−1,L0=0,and then Owen(1995)claims in his formula(9)page518that these are linked to the event {R n≤λn}by{R n≤λn}={L i−1≤X(i)≤H i:i=1,...,n}.(42) But in fact,(42)is false.Note that the event on the right side of(42)implies thatR n≥K((n−1)/n,H n−ǫ)=K((n−1)/n,1−ǫ)→∞asǫ↓0.Thus{R n≤λn}⊂{L i−1≤X(i)≤H i:i=1,...,n}(43) with strict inclusion since the event on the right side allows R n=∞.Moreover,note that R n= R n(X1,...,X n)>λ.95n also at the points X(i)=H i with i<n since K((i−1)/n,H i)>λ.95n. Our definition of{b i}isb1=max{x|log1n,x)≤λn},2≤i≤n.Note that Owen’s H i’s involve the maximum x such that K(in ,x)≤λn.Thus it follows that b i=H i−1for i=2,...,n−1,and similarly a i=1−b n−i+1=1−H n−i=L i,i=2,...,n−1by virtue of Owen’s relation L i=1−H n−i.Thus we claim the correct event identity is:{R n≤λn}={a i<X(i)≤b i,i=1,...,n}(44) where a i=L i,i=1,...,n−1,a n=e−λn,b i=H i−1,i=2,...,n,b1=1−e−λn The following table illustrates the situation numerically for n=4.Table1:Numerical illustration of order statistic bounds,n=4J-W bounds J-W boundsa1=.002737a1=.001340L1=.002737L1=.001340a3=.147088a3=.114653L3=.147088L3=.114653H1=.852912H1=.885347b2=.852912b2=.885347H3=.997263H3=.998660b4=.997263b4=.998660Actual coverage=.901771Actual coverage=.95 wrongλcorrectλcorrect bounds correct bounds Thefirst column of table1gives the constants L i and H i involved in the right side of Owen’s claimed event identity(42)corresponding to hisλ.954=.9149....The“actual coverage”in this column is the probability of the event on the right side of(42)and(43).[Note thatthis probability,.99841...,is not.95.It seems that in his computer program Owen used{a1,...,a n}={L1,...,L n−1,e−λn}in place of L0,...,L n−1.When we make this replacementwefind P(M4)≡P(∩4i=1[a i≤X(i)≤H i])=.95when a i and H i are determined by Owen’sλ.954.But this latter event M4also satisfies[R4≤λ.954]⊂M4.]The second column of table1gives the constants{a i}and{b i}corresponding to Owen’sλ.954=.9149....The resulting probability of thetwo equal events in(44)is0.9017...<.95.(This makes sense since the four dimensional rectangle involved on the right side of(44)is strictly contained in the rectangle on the right side of(42),even with the L i’s replaced by a i’s as in Owen’s program.)The third column of table1gives the constants L i and H i corresponding to ourλ.954=1.092....This is the correctλ.954,but Owen’s bounds L i and H i are still wrong,so equality in(42)fails and the resulting probability of the eventon right side of(43)is.9995...(or.9747...if we use the a i’s in place of L0,...,L4as the lower boundsas in Owen’s program).Finally,column4of table1gives our{a i}and{b i}corresponding to ourλ.954,and the probability of the event on the left side of(43)and both terms in(44)is.95.Table2compares theλ0.95n values calculated using Owen’s definition of the{H i}’s and thosecalculated using our{b i}’s with exact results for n=2,and simulation results for3≤n≤20 and selected larger values of n.The Monte Carlo simulations were carried out by simulating the Berk-Jones statistic100,000times and taking the0.95quantile(or95000th order statistic)of the simulated values.In each 
case,thefinite sample quantile calculated according to the{b i}found by our method (which is similar to the method used for the reversed statistic)agrees more closely with the simulated result.Finally,we determine the confidence bandsL n(x)=ni=0l i1(X(i),X(i+1)](x),andH n(x)=ni=0h i1[X(i),X(i+1))(x),where X(0)≡−∞and X(n+1)=∞by convention.For x∈(X(i),X(i+1))we have F n(x)=i/n and it is clear that the event[R n≤λ1−αn]restricts F(x)only by K(i/n,F(x))≤λ1−αn.Henceh i=max{p|K(i/n,p)≤λ1−αn},whilel i=min{p|K(i/n,p)≤λ1−αn}.From(32)and(36)it follows that h i=b i+1,i∈{0,...,n−1},and from(35)and(33)we have l i=a i,i∈{1,...,n}.Furthermore h n=1and l0=0(trivially).Table2:Comparison of0.95quantiles of the Berk-Jones statistic with simulation2 1.67031.90032 1.67117.90058 2.024950 2.027693 1.176631.90122 1.17665.90122 1.414108 1.4136240.914983.901950.915054.90198 1.092493 1.0890750.751753.901840.751894.902630.8927880.89133760.639718.903850.639889.903970.7562510.75572570.557816.903500.557992.903590.6567880.65978580.495200.905950.495369.906030.5809900.57874890.445698.907670.445852.907760.5212420.519253100.405531.906600.405670.906500.4728950.473739 110.372252.906460.372376.906610.4329430.433429 120.344209.905940.344319.906050.3993580.402910 130.320240.908650.320337.908760.3707180.370418 140.299506.909030.299592.909080.3459950.344839 150.281384.910730.281461.910410.3244320.322943 160.265404.909260.265473.909350.3054560.306919 170.251203.911450.251265.911560.2886220.287871 180.238495.911720.238551.911800.2735850.272886 190.227054.912120.227104.912190.2600690.258733 200.216696.911420.216742.911340.2478530.249022 500.093344.919850.0933644.919880.1042390.103634 1000.048899.921860.0489062.923330.0537660.053617 5000.010631.930430.0106328.930290.0113810.011379 10000.005466.932510.0054659.931450.005804.00580896As in Owen(1995)and Owen(2001),we give approximation formulas for the0.95and0.99 quantiles of the Berk-Jones statistic which are polynomial in log n.These formulas compare to (10)-(13)in Owen(1995)and to Table7.1,page159in Owen(2001).Wefind that Owen’s exact and approximate critical values for bands with claimed confidence coefficient.95have true coverage ranging from about.90to.93for sample sizes between3and1000;see Table2for estimated coverage probabilities(with105monte-carlo samples).λ0.95 n .=1n(3.7752+0.5062log n−0.0417(log n)2+0.0016(log n)3),100<n≤1000.(46)λ0.99n.=1n(5.6392+0.4018log n−0.0183(log n)2),100<n≤1000.(48)4Power considerations4.1Power heuristicsHere we more specifically address issues of power.We are able to get some qualitative ideas of the behavior of these test statistics against different alternatives by considering the functions K(F0(x),F(x))and K(F(x),F0(x))pointwise in x,rather than taking the supremum over all x.Consider the distribution functionsF1(x)=1x(49)andF2(x)=e−(1x0.00.20.40.60.8 1.00.00.20.40.60.81.Figure 1:Extreme distribution functions F 1(solid line)and F 2(dashed line).than the null,we are actually more interested in the power behavior for alternatives which are slightly different from the null distribution.For example,alternatives which are more moderately stochastically larger or smaller than F 0.Natural alternatives to consider are those of the form F c 0,for different values of c ∈(0,∞).For values of c >1,this distribution is stochastically larger than F 0.For values of c <1,this distribution is stochastically smaller than F 0.Based on the behavior of the functions g 1,g 2,˜g 1,and ˜g 2,we would guess that in this case as well,the dual statistic would be more powerful 
against stochastically larger alternatives (c >1),while the Berk-Jones statistic would remain more powerful for stochastically smaller alternatives (c <1).4.2Power calculationsTo test our conjectures about power,we use the same algorithm by No ´e (1972)to calculate the probability that F 1,F 2,and F c are contained in the 95%confidence band for F 0.Figure 3plots these probabilities for F c against c for different sample sizes.The curves for sample size n =20can be compared to Figure 5in Owen (1995).The line representing the Berk-Jones statistic is the same as that which Owen calls the curve for the nonparametric likelihood bands.We see that the reverse Berk-Jones statistic has greater power than both the Berk-Jones statistic and the Kolmogorov-Smirnov statistic for values of c >1.5ExamplesFigure 4shows the empirical distribution function of the velocities of 82galaxies from the Corona Borealis region along with 95%confidence intervals generated by inverting both the Berk-Jonesx0.00.20.40.60.8 1.00.00.20.40.60.81.0(a)x0.00.20.40.60.8 1.00.00.20.40.60.81.0(b)Figure 2:(a)The functions g 1(solid line)and ˜g 1(dashed line).(b)The functions g 2(solid line)and ˜g 2(dashed line).0.00010.00100.01000.10001.0000Figure 3:The the 95%confidence bands for F 0(vertical distribution F 0.statistic and the reversed statistic.This data appears in Table 1of Roeder (1990).This figure can be compared to Figures 1and 2in Owen (1995).Comparison shows that the confidence band based on the reversed Berk-Jones statistic are narrower at the tails than the one based on the Kolmogorov-Smirnov statistic.Also,there are slight differences between the confidence band based on the reversed statistic compared to the Berk-Jones statistic.In the region of the lower tail,the band based on the reversed statistic is shifted slightly downward,while in the region of the upper tail,this band is shifted slightly upward.This behavior is more noticable when looking at equally spaced data points.Figure 5shows these same 95%confidence bands for n =20equispaced observations.Velocity (km/sec)50001000015000200002500030000350000.00.20.40.60.81.0Figure 4:The empirical CDF of the velocities of 82galaxies in the Corona Borealis Region (dark solid line)and 95%confidence bands obtained by inverting the Berk-Jones statistic (dashed line)and the reversed Berk-Jones statistic (solid line)Equispaced Data51015200.00.20.40.60.81.0Figure 5:The empirical CDF of 20equally spaced data points (dark solid line)and 95%confidence bands obtained by inverting the Berk-Jones statistic (dashed line)and the reversed Berk-Jones statistic (solid line)Finally,Figure 6gives a comparison of Owen’s bands based on the Berk-Jones statistic to our bands based on the same statistic;as argued above,Owen’s bands do not have the correct coverage probability.Equispaced Data51015200.00.20.40.60.81.0Figure 6:Comparison of Owen’s bands based on the Berk-Jones statistic (dashed line)to our bands based on the Berk-Jones statistic (solid line)for 20equally spaced data pointsThe C and R programs used to carry out the computations presented here are available (in several forms)at/jaw/RESEARCH/SOFTWARE/software.list.html .Acknowledgements:We owe thanks to Art Owen for sharing his C programs used to carry out the computations for his 1995paper.Those programs were used as a starting point for the programs used here.We also owe thanks to Art for several helpful discussions.Mame Astou Diouf pointed out several typographical errors in the first version.ReferencesBerk,R.H.and 
Jones, D. H. (1979). Goodness-of-fit test statistics that dominate the Kolmogorov statistics. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 47, 47-59.
Noé, M. (1972). The calculation of distributions of two-sided Kolmogorov-Smirnov type statistics. Annals of Mathematical Statistics 43, 58-64.
Owen, A. B. (1995). Nonparametric likelihood confidence bands for a distribution function. Journal of the American Statistical Association 90, 516-521.
Owen, A. B. (2001). Empirical Likelihood. Chapman & Hall/CRC, Boca Raton.
Roeder, K. (1990). Density estimation with confidence sets exemplified by superclusters and voids in the galaxies. Journal of the American Statistical Association 85, 617-624.
Wellner, J. A. and Koltchinskii, V. (2003). A note on the asymptotic distribution of Berk-Jones type statistics under the null hypothesis. High Dimensional Probability III, 321-332. Birkhäuser, Basel.

University of Washington, Statistics, Box 354322, Seattle, Washington 98195-4322; e-mail: leah@
University of Washington, Statistics, Box 354322, Seattle, Washington 98195-4322; e-mail: jaw@
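For readers who want to reproduce these quantities numerically, the short sketch below (our own illustration, not the C and R programs distributed by the authors) evaluates K, the Berk-Jones statistic R_n and the reversed statistic R̃_n for a sample already transformed by F_0, using the fact that each piecewise supremum is attained or approached at the order statistics.

```python
import math

def K(x, y):
    """K(x, y) = x log(x/y) + (1 - x) log((1 - x)/(1 - y)), with 0 log 0 = 0."""
    def xlogr(a, b):
        return 0.0 if a == 0.0 else a * math.log(a / b)
    return xlogr(x, y) + xlogr(1.0 - x, 1.0 - y)

def berk_jones(u):
    """R_n = sup_x K(F_n(x), F_0(x)) for u_i = F_0(X_i); the supremum is reached
    at the order statistics, comparing F_n just below and at U_(i)."""
    u, n = sorted(u), len(u)
    return max(max(K((i - 1) / n, ui), K(i / n, ui))
               for i, ui in enumerate(u, start=1))

def reversed_berk_jones(u):
    """R~_n = sup_{U_(1) <= x < U_(n)} K(x, F_n(x)); on [U_(i), U_(i+1)) the
    empirical d.f. equals i/n and K(., i/n) is largest near the endpoints."""
    u, n = sorted(u), len(u)
    return max(max(K(u[i - 1], i / n), K(u[i], i / n))
               for i in range(1, n))

if __name__ == "__main__":
    import random
    random.seed(1)
    sample = [random.random() for _ in range(100)]   # U(0,1) data, so H_0 holds
    print(round(berk_jones(sample), 4), round(reversed_berk_jones(sample), 4))
```

As a quick sanity check, simulating many samples of size n = 2 and taking the empirical 0.95 quantile of reversed_berk_jones should come out close to the exact value 0.625251 from Proposition 1.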
arXiv:hep-th/0006196v3 15 Dec 2000
Correlation functions for M^N/S_N orbifolds
Oleg Lunin and Samir D. Mathur Department of Physics, The Ohio State University, Columbus, OH 43210, USA
¹ See however [3] for an analysis of the finite N case.
terms of a unitary microscopic process, not only qualitatively but also quantitatively, since one finds an agreement of spin dependence and radiation rates between the semiclassically computed radiation and the microscopic calculation [6]. While it is possible to use simple models for the low energy dynamics of the D1-D5 system when one is computing the coupling to massless modes of the supergravity theory, it is believed that the exact description of this CFT must be in terms of a sigma model with target space being a deformation of the orbifold M^N/S_N, which is the symmetric orbifold of N copies of M. (Here N = n_1 n_5, with n_1 being the number of D1 branes and n_5 being the number of D5 branes, and we must take the low energy limit of the sigma model to obtain the desired CFT.) In particular we may consider the 'orbifold point' where the target space is exactly the orbifold M^N/S_N with no deformation. It was argued in [7] that this CFT does correspond to a certain point in the moduli space of string theories on AdS_3 × S^3 × M, but at this point the string theory is in a strongly coupled domain where it cannot be approximated by tree level supergravity on a smooth background. The orbifold point is the closest we can get to a 'free' theory on the CFT side, and thus this point is the analogue of free N=4 supersymmetric Yang-Mills in the D3 brane example. Thus one would like to compare the three point functions of chiral operators in the supergravity limit with the 3-point functions at the orbifold point, to see if we have an analogue of the surprising agreement that was found in the case of the AdS_5 - 4-d Yang-Mills duality. The orbifold group in our case is S_N, the permutation group of N elements. This group is nonabelian, in contrast to the cyclic group Z_N which has been studied more extensively in the past for computation of correlation functions in orbifold theories [9]. Though there are some results in the literature for general orbifolds [10], the study of nonabelian orbifolds is much less developed than for abelian orbifolds. It turns out however that the case of the S_N orbifolds has its own set of simplifications which make it possible to develop a technique for computation of correlation functions for these theories. The essential quantities that we wish to compute are the correlation functions of 'twist operators', in the CFT that arises from the infra-red limit of the 2-d sigma model with target space M^N/S_N. If we circle the insertion of a twist operator of the permutation group S_N, different copies of the target space M permute into each other. We pass to the covering space of the 2-d base space, such that on this covering space the fields of the CFT are single-valued. For the special case where the orbifold group is S_N, the path integral on the base ...

We develop a method for computing correlation functions of twist operators in the bosonic 2-d CFT arising from orbifolds M^N/S_N, where M is an arbitrary manifold. The path integral with twist operators is replaced by a path integral on a covering space with no operator insertions. Thus, even though the CFT is defined on the sphere, the correlators are expressed in terms of partition functions on Riemann surfaces with a finite range of genus g. For large N, this genus expansion coincides with a 1/N expansion. The contribution from the covering space of genus zero is 'universal' in the sense that it depends only on the central charge of the CFT.
For 3-point functions we give an explicit form for the contribution from the sphere, and for the 4-point function we do an example which has genus zero and genus one contributions. The condition for the genus zero contribution to the 3-point functions to be non-vanishing is similar to the fusion rules for an SU(2) WZW model. We observe that the 3-point coupling becomes small compared to its large N limit when the orders of the twist operators become comparable to the square root of N; this is a manifestation of the stringy exclusion principle.
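The genus bookkeeping described in this abstract follows from the Riemann-Hurwitz formula: a correlator of single-cycle twists σ_{n_1}, ..., σ_{n_k} lifted to a connected s-sheeted branched cover of the sphere lives on a surface of genus g = 1 − s + (1/2) Σ_i (n_i − 1). The toy sketch below is our own illustration of that counting (Riemann-Hurwitz only gives a necessary condition; whether a cover actually exists is a separate Hurwitz-counting question). For three twists it reproduces the SU(2)-fusion-like selection rule for a genus-zero contribution mentioned above, and for four σ_2 twists it shows the genus-zero and genus-one pieces of the 4-point example.

```python
def candidate_genera(twists):
    """Genera allowed by Riemann-Hurwitz, 2g - 2 = -2s + sum_i (n_i - 1), for a
    connected s-sheeted cover of the sphere with one length-n_i branch point per twist."""
    b = sum(n - 1 for n in twists)              # total branching from the twists
    out = {}
    for s in range(max(twists), b // 2 + 2):    # g >= 0 forces s <= b/2 + 1
        two_g = b - 2 * s + 2
        if two_g >= 0 and two_g % 2 == 0:
            out[s] = two_g // 2                 # {number of sheets: genus}
    return out

def genus_zero_three_point(n1, n2, n3):
    """Necessary condition for a genus-zero piece of <sigma_n1 sigma_n2 sigma_n3>:
    s = (n1+n2+n3-1)/2 must be an integer no smaller than max(n_i)."""
    total = n1 + n2 + n3
    return total % 2 == 1 and 2 * max(n1, n2, n3) <= total - 1

if __name__ == "__main__":
    print(candidate_genera([2, 2, 2, 2]))       # {2: 1, 3: 0}: genus-1 and genus-0 pieces
    print(genus_zero_three_point(2, 2, 3))      # True
    print(genus_zero_three_point(2, 2, 5))      # False
```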
Integer Quantization of the Chern-Simons Coefficient in a Broken Phase
UCONN-94-8; hep-th/9411062
also generate a gauge field mass by the conventional Higgs mechanism, in a vacuum in which the gauge symmetry is spontaneously broken. In this Letter we consider the perturbative fate of the integer quantization condition on the Chern-Simons coupling in the broken phase of a spontaneously broken nonabelian topologically massive theory. We show that if there is still a nonabelian symmetry present in the broken phase, then the one-loop renormalized ratio (4πm/g²)_ren is shifted from its bare value by an integer. The abelian spontaneously broken Chern-Simons theory was investigated in [5, 6], where it was shown that in the broken vacuum q ≡ 4πm/g² receives a finite renormalization shift which is a complicated function of the various mass scales: the Chern-Simons mass m, the gauge coupling g², and the symmetry breaking mass scale. A similar situation exists for a completely broken nonabelian theory [7]. However, here there is no apparent contradiction with the Chern-Simons quantization condition since there is no residual nonabelian symmetry in the broken vacuum. To probe this question more deeply, the authors of Ref. [8] introduced an ingenious model in which a nonabelian topologically massive theory is partially broken, leaving a nonabelian gauge symmetry in the broken phase. One's immediate expectation is that once again the renormalized ratio (4πm/g²)_ren should be quantized, but with an integer renormalization shift characteristic of the smaller unbroken residual gauge symmetry. We have reconsidered this model both in a detailed canonical treatment and, as reported in this Letter, in a direct perturbative analysis. We disagree with the result reported in Ref. [8]. We find that the integer quantization condition is indeed preserved to one loop. The mechanism by which this quantization arises in a perturbative analysis is considerably more involved than the Pisarski and Rao case [3] where there is no symmetry breaking. The cancellations required to produce the integer shift can be viewed as new 'topological Ward identities' for the spontaneously broken theory, generalizing the 'topological Ward identity' found in [3]. For simplicity, we consider an SU(3) topologically massive gauge theory, coupled to a triplet of charged scalar fields with a potential possessing a vacuum in which the SU(3) symmetry is spontaneously broken to SU(2). We later show that our result applies also to the case of SU(N) broken to SU(N − 1), for N ≥ 3. The (Euclidean) Lagrange density for this model is
XPSPEAK User Manual
Using XPSPEAK Version 4.1, November 2000

Contents (page number)
XPS Peak Fitting Program for WIN95/98 XPSPEAK Version 4.1 (1)
Program Installation (1)
Introduction (1)
First Version (1)
Version 2.0 (1)
Version 3.0 (1)
Version 3.1 (2)
Version 4.0 (2)
Version 4.1 (2)
Future Versions (2)
General Information (from R. Kwok) (3)
Using XPS Peak (3)
Overview of Processing (3)
Appearance (4)
Opening Files (4)
Opening a Kratos (*.des) text file (4)
Opening Multiple Kratos (*.des) text files (5)
Saving Files (6)
Region Parameters (6)
Loading Region Parameters (6)
Saving Parameters (6)
Available Backgrounds (6)
Averaging (7)
Shirley + Linear Background (7)
Tougaard (8)
Adding/Adjusting the Background (8)
Adding/Adjusting Peaks (9)
Peak Types: p, d and f (10)
Peak Constraints (11)
Peak Parameters (11)
Peak Function (12)
Region Shift (13)
Optimisation (14)
Print/Export (15)
Export (15)
Program Options (15)
Compatibility (16)
File I/O (16)
Limitations (17)
Cautions for Peak Fitting (17)
Sample Files (17)
gaas.xps (17)
Cu2p_bg.xps (18)
Kratos.des (18)
ASCII.prn (18)
Other Files (18)

XPS Peak Fitting Program for WIN95/98 XPSPEAK Version 4.1

Program Installation
XPS Peak is freeware. Please ask RCSMS lab staff for a copy of the zipped 3.3 MB file if you would like your own copy.
Unzip the XPSPEA4.ZIP file and run Setup.exe in Win 95 or Win 98.
Note: I haven't successfully installed XPSPEAK on Win 95 machines unless they have been running Windows 95c - CMH.

Introduction
Raymond Kwok, the author of XPSPEAK, had spent >1000 hours on XPS peak fitting when he was a graduate student. During that time, he dreamed of many features in the XPS peak fitting software that could help obtain more information from the XPS peaks and reduce processing time. Most of the information in this user's guide has come directly from the readme.doc file, automatically installed with XPSPEAK 4.1.

First Version
In 1994, Dr Kwok wrote a program that converted the Kratos XPS spectral files to ASCII data. Once this program was finished, he found that the program could be easily converted to a peak fitting program. Then he added the dreamed features into the program, e.g.
∙ A better way to locate a point at a noise baseline for the Shirley background calculations
∙ Combine the two peaks of 2p3/2 and 2p1/2
∙ Fit different XPS regions at the same time

Version 2.0
After the first version and Version 2.0, many people emailed Dr Kwok and gave additional suggestions. He also found other features that could be put into the program.

Version 3.0
The major change in Version 3.0 is the addition of Newton's Method for optimisation
∙ Newton's method can greatly reduce the optimisation time for multiple region peak fitting.

Version 3.1
1. Removed all the run-time errors that were reported
2. A Shirley + Linear background was added
3. The Export to Clipboard function was added as requested by a user
∙ Some other minor graphical features were added

Version 4.0
Added:
1. The asymmetrical peak function. See note below
2. Three additional file formats for importing data
∙ A few minor adjustments
The addition of the Asymmetrical Peak Function required the peak function to be changed from the Gaussian-Lorentzian product function to the Gaussian-Lorentzian sum function. Calculation of the asymmetrical function using the Gaussian-Lorentzian product function was too difficult to implement.
The software of some instruments uses the sum function, while others use the product function, so both functions are available in XPSPEAK.See Peak Function, (Page 12) for details of how to set this up.Note:If the selection is the sum function, when the user opens a *.xps file that was optimised using the Gaussian-Lorentzian product function, you have to re-optimise the spectra using the Gaussian-Lorentzian sum function with a different %Gaussian-Lorentzian value.Version 4.1Version 4.1 has only two changes.1. In version 4.0, the printed characters were inverted, a problem that wasdue to Visual Basic. After about half year, a patch was received from Microsoft, and the problem was solved by simply recompiling the program2. The import of multiple region VAMAS file format was addedFuture VersionsThe author believes the program has some weakness in the background subtraction routines. Extensive literature examination will be required in order to revise them. Dr Kwok intends to do that for the next version.General Information (from R. Kwok)This version of the program was written in Visual Basic 6.0 and uses 32 bit processes. This is freeware. You may ask for the source program if you really want to. I hope this program will be useful for people without modern XPS software. I also hope that the new features in this program can be adopted by the XPS manufacturers in the later versions of their software.If you have any questions/suggestions, please send an email to me.Raymund W.M. KwokDepartment of ChemistryThe Chinese University of Hong KongShatin, Hong KongTel: (852)-2609-6261Fax:(852)-2603-5057email: rmkwok@.hkI would like to thank the comments and suggestions from many people. For the completion of Version 4.0, I would like to think Dr. Bernard J. Flinn for the routine of reading Leybold ascii format, Prof. Igor Bello and Kelvin Dickinson for providing me the VAMAS files VG systems, and my graduate students for testing the program. I hope I will add other features into the program in the near future.R Kwok.Using XPS PeakOverview of Processing1. Open Required Files∙See Opening Files (Page 4)2. Make sure background is there/suitable∙See Adding/Adjusting the Background, (Page 8)3. Add/adjust peaks as necessary∙See Adding/Adjusting Peaks, (Page 9), and Peak Parameters, (Page 11)4. Save file∙See Saving Files, (Page 6)5. Export if necessary∙See Print/Export, (Page 15)AppearanceXPSPEAK opens with two windows, one above the other, which look like this:∙The top window opens and displays the active scan, adds or adjusts a background, adds peaks, and loads and saves parameters.∙The lower window allows peak processing and re-opening and saving dataOpening FilesOpening a Kratos (*.des) text file1. Make sure your data files have been converted to text files. See the backof the Vision Software manual for details of how to do this. Remember, from the original experiment files, each region of each file will now be a separate file.2. From the Data menu of the upper window, choose Import (Kratos)∙Choose directory∙Double click on the file of interest∙The spectra open with all previous processing INCLUDEDOpening Multiple Kratos (*.des) text files∙You can open up a maximum of 10 files together.1. Open the first file as above∙Opens in the first region (1)2. In the XPS Peak Processing (lower) window, left click on 2(secondregion), which makes this region active3. 
Open the second file as in Step2, Opening a Kratos (*.des) text file,(Page 4)∙Opens in the second region (2)∙You can only have one description for all the files that are open. Edit with a click in the Description box4. Open further files by clicking on the next available region number thenfollowing the above step.∙You can only have one description for all the files that are open. Edit with a click in the Description boxDescriptionBox 2∙To open a file that has already been processed and saved using XPSPEAK, click on the Open XPS button in the lower window. Choose directory and file as normal∙The program can store all the peak information into a *.XPS file for later use. See below.Saving Files1. To save a file click on the Save XPS button in the lower window2. Choose Directory3. Type in a suitable file name4. Click OK∙Everything that is open will be saved in this file∙The program can also store/read the peak parameter files (*.RPA)so that you do not need to re-type all the parameters again for a similar spectrum.Region ParametersRegion Parameters are the boundaries or limits you have used to set up the background and peaks for your files. These values can be saved as a file of the type *.rpa.Note that these Region Parameters are completely different from the mathematical parameters described in Peak Parameters, (Page 11) Loading Region Parameters1. From the Parameters menu in the upper window, click on Load RegionParameters2. Choose directory and file name3. Click on Open buttonSaving Parameters1. From the Parameters menu in the XPS Peak Fit (Upper) window, clickon Save Region Parameters2. Choose directory and file name3. Click on the Save buttonAvailable BackgroundsThis program provides the background choices of∙Shirley∙Linear∙TougaardAveraging∙ Averaging at the end points of the background can reduce the time tofind a point at the middle of a noisy baseline∙ The program includes the choices of None (1 point), 3, 5, 7, and 9point average∙ This will average the intensities around the binding energy youselect.Shirley + Linear Background1. The Shirley + Linear background has been added for slopingbackgrounds∙ The "Shirley + Linear" background is the Shirley background plus astraight line with starting point at the low BE end-point and with a slope value∙ If the slope value is zero , the original Shirley calculation is used∙ If the slope value is positive , the straight line has higher values atthe high BE side, which can be used for spectra with higher background intensities at the high BE side∙ Similarly, a negative slope value can be used for a spectrum withlower background intensities at the high BE side2. The Optimization button may be used when the Shirley background is higher at some point than the signal intensities∙ The program will increase the slope value until the Shirleybackground is below the signal intensities∙ Please see the example below - Cu2p_bg.xps - which showsbackground subtraction using the Shirley method (This spectrum was sent to Dr Kwok by Dr. Roland Schlesinger).∙ A shows the problematic background when the Shirley backgroundis higher than the signal intensities. In the Shirley calculation routine, some negative values were generated and resulted in a non-monotonic increase background∙ B shows a "Shirley + Linear" background. 
The slope value was inputby trial-and-error until the background was lower than the signal intensities∙ C was obtained using the optimisation routineA slope = 0B slope = 11C slope = 15.17Note: The background subtraction calculation cannot completely remove the background signals. For quantitative studies, the best procedure is "consistency". See Future Versions, (Page 2).TougaardFor a Tougaard background, the program can optimise the B1 parameter by minimising the "square of the difference" of the intensities of ten data points in the high binding energy side of the range with the intensities of the calculated background.Adding/Adjusting the BackgroundNote: The Background MUST be correct before Peaks can be added. As with all backgrounds, the range needs to include as much of your peak as possible and as little of anything else as possible.1. Make sure the file of interest is open and the appropriate region is active2. Click on Background in the upper window∙The Region 0 box comes up, which contains the information about the background3. Adjust the following as necessary. See Note.∙High BE (This value needs to be within the range of your data) ∙Low BE (This value needs to be within the range of your data) NOTE: High and Low BE are not automatically within the range of your data. CHECK CAREFULLY THAT BOTH ENDS OF THE BACKGROUND ARE INSIDE THE EDGE OF YOUR DATA. Nothing will happen otherwise.∙No. of Ave. Pts at end-points. See Averaging, (Page 7)∙Background Type∙Note for Shirley + Linear:To perform the Shirley + Linear Optimisation routine:a) Have the file of interest openb) From the upper window, click on Backgroundc) In the resulting box, change or optimise the Shirley + LinearSlope as desired∙Using Optimize in the Shirley + Linear window can cause problems. Adjust manually if necessary3. Click on Accept when satisfiedAdding/Adjusting PeaksNote: The Background MUST be correct before peaks can be added. Nothing will happen otherwise. See previous section.∙To add a peak, from the Region Window, click on Add Peak ∙The peak window appears∙This may be adjusted as below using the Peak Window which will have opened automaticallyIn the XPS Peak Processing (lower) window, there will be a list of Regions, which are all the open files, and beside each of these will be numbers representing the synthetic peaks included in that region.Regions(files)SyntheticPeaks1. Click on a region number to activate that region∙The active region will be displayed in the upper window2. Click on a peak number to start adjusting the parameters for that peak.∙The Processing window for that peak will open3. Click off Fix to adjust the following using the maximum/minimum arrowkeys provided:∙Peak Type. (i.e. orbital – s, p, d, f)∙S.O.S (Δ eV between the two halves of the peak)∙Position∙FWHM∙Area∙%Lorenzian-Gaussian∙See the notes for explanations of how Asymmetry works.4. Click on Accept when satisfiedPeak Types: p, d and f.1. Each of these peaks combines the two splitting peaks2. The FWHM is the same for both the splitting peaks, e.g. a p-type peakwith FWHM=0.7eV is the combination of a p3/2 with FWHM at 0.7eV anda p1/2 with FWHM at 0.7eV, and with an area ratio of 2 to 13. If the theoretical area ratio is not true for the split peaks, the old way ofsetting two s-type peaks and adding the constraints should be used.∙The S.O.S. stands for spin orbital splitting.Note: The FWHM of the p, d or f peaks are the FWHM of the p3/2,d5/2 or f7/2, respectively. The FWHM of the combined peaks (e.g. 
combination of p3/2and p1/2) is shown in the actual FWHM in the Peak Parameter Window.Peak Constraints1. Each parameter can be referenced to the same type of parameter inother peaks. For example, for four peaks (Peak #0, 1, 2 and 3) with known relative peak positions (0.5eV between adjacent peaks), the following can be used∙Position: Peak 1 = Peak 0 + 0.5eV∙Position: Peak 2 = Peak 1 + 0.5eV∙Position: Peak 3 = Peak 2 + 0.5eV2. You may reference to any peak except with looped references.3. The optimisation of the %GL value is allowed in this program.∙ A suggestion to use this feature is to find a nice peak for a certain setting of your instrument and optimise the %GL for this peak.∙Fix the %GL in the later peak fitting process when the same instrument settings were used.4. This version also includes the setting of the upper and lower bounds foreach parameter.Peak ParametersThis program uses the following asymmetric Gaussian-Lorentzian sumThe program also uses the following symmetrical Gaussian-Lorentzian product functionPeak FunctionNote:If the selection is the sum function, when the user opens a *.xps file that was optimised using the Gaussian-Lorentzian product function, you have to re-optimise the spectra using the Gaussian-Lorentzian sum function with a different %Gaussian-Lorentzian value.∙You can choose the function type you want1. From the lower window, click on the Options button∙The peak parameters box comes up∙Select GL sum for the Gaussian-Lorentzian sum function∙Select GL product for the Gaussian-Lorentzian product function. 2. For the Gaussian-Lorentzian sum function, each peak can have sixparameters∙Peak Position∙Area∙FWHM∙%Gaussian-Lorentzian∙TS∙TLIf anyone knows what TS or TL might be, please let me know. Thanks, CMH3. Each peak in the Gaussian-Lorentzian product function can have fourparameters∙Peak Position∙Area∙FWHM∙%Gaussian-LorentzianSince peak area relates to the atomic concentration directly, we use it as a peak parameter and the peak height will not be shown to the user.Note:For asymmetric peaks, the FWHM only refers to the half of the peak that is symmetrical. The actual FWHM of the peak is calculated numerically and is shown after the actual FWHM in the Peak Parameter Window. If the asymmetric peak is a doublet (p, d or f type peak), the actual FWHM is the FWHM of the doublet.Region ShiftA Region Shift parameter was added under the Parameters menu∙Use this parameter to compensate for the charging effect, the fermi level shift or any change in the system work function∙This value will be added to all the peak positions in the region for fitting purposes.An example:∙ A polymer surface is positively charged and all the peaks are shifted to the high binding energy by +0.5eV, e.g. aliphatic carbon at 285.0eV shifts to 285.5eV∙When the Region Shift parameter is set to +0.5eV, 0.5eV will be added to all the peak positions in the region during peak fitting, but the listed peak positions are not changed, e.g. 285.0eV for aliphatic carbon. Note: I have tried this without any actual shift taking place. If someone finds out how to perform this operation, please let me know. Thanks, CMH.In the meantime, I suggest you do the shift before converting your files from the Vision Software format.OptimisationYou can optimise:1. A single peak parameter∙Use the Optimize button beside the parameter in the Peak Fitting window2. The peak (the peak position, area, FWHM, and the %GL if the "fix" box isnot ticked)∙Use the Optimize Peak button at the base of the Peak Fitting window3. 
A single region (all the parameters of all the peaks in that region if the"fix" box is not ticked)∙Use the Optimize Region menu (button) in the upper window4. All the regions∙Use the Optimize All button in the lower window∙During any type of optimisation, you can press the "Stop Fitting" button and the program will stop the process in the next cycle.Print/ExportIn the XPS Peak Fit or Region window, From the Data menu, choose Export or Print options as desiredExport∙The program can export the ASCII file of spectrum (*.DAT) for making high quality figures using other software (e.g. SigmaPlot)∙It can export the parameters (*.PAR) for further calculations (e.g. use Excel for atomic ratio calculations)∙It can also copy the spectral image to the system clipboard so that the spectral image can be pasted into a document (e.g. MS WORD). Program Options1. The %tolerance allows the optimisation routine to stop if the change inthe difference after one loop is less that the %tolerance2. The default setting of the optimisation is Newton's method∙This method requires a delta value for the optimisation calculations ∙You may need to change the value in some cases, but the existing setting is enough for most data.3. For the binary search method, it searches the best fit for each parameterin up to four levels of value ranges∙For example, for a peak position, in first level, it calculates the chi^2 when the peak position is changed by +2eV, +1.5eV, +1eV, +0.5eV,-0.5eV, -1eV, -1.5eV, and -2eV (range 2eV, step 0.5eV) ∙Then, it selects the position value that gives the lowest chi^2∙In the second level, it searches the best values in the range +0.4eV, +0.3eV, +0.2eV, +0.1eV, -0.1eV, -0.2eV, -0.3eV, and -0.4eV (range0.4eV, step 0.1eV)∙In the third level, it selects the best value in +0.09eV, +0.08eV, ...+0.01eV, -0.01eV, ...-0.09eV∙This will give the best value with two digits after decimal∙Level 4 is not used in the default setting∙The range setting and the number of levels in the option window can be changed if needed.4. The Newton's Method or Binary Search Method can be selected byclicking the "use" selection box of that method.5. The selection of the peak function is also in the Options window.6. The user can save/read the option parameters with the file extension*.opa∙The program reads the default.opa file at start up. Therefore, the user can customize the program options by saving the selectionsinto the default.opa file.CompatibilityThe program can read:∙Kratos text (*.des) files together with the peak fitting parameters in the file∙The ASCII files exported from Phi's Multiplex software∙The ASCII files of Leybold's software∙The VAMAS file format∙For the Phi, Leybold and VAMAS formats, multiple regions can be read∙For the Phi format, if the description contains a comma ",", the program will give an error. (If you get the error, you may use any texteditor to remove the comma)The program can also import ASCII files in the following format:Binding Energy Value 1 Intensity Value 1Binding Energy Value 2 Intensity Value 2etc etc∙The B.E. 
list must be in ascending or descending order, and the separation of adjacent B.E.s must be the same∙The file cannot have other lines before and after the data∙Sometimes, TAB may cause a reading error.File I/OThe file format of XPSPEAK 4.1 is different from XPSPEAK 3.1, 3.0 and 2.0 ∙XPSPEAK 4.1 can read the file format of XPSPEAK 3.1, 3.0 and 2.0, but not the reverse∙File format of 4.1 is the same as that of 4.0.LimitationsThis program limits the:∙Maximum number of points for each spectrum to 5000∙Maximum of peaks for all the regions to 51∙For each region, the maximum number of peaks is 10. Cautions for Peak FittingSome graduate students believe that the fitting parameters for the best fitted spectrum is the "final answer". This is definitely not true. Adding enough peaks can always fit a spectrum∙Peak fitting only assists the verification of a model∙The user must have a model in mind before adding peaks to the spectrum!Sample Files:gaas.xpsThis file contains 10 spectra1. Use Open XPS to retrieve the file. It includes ten regions∙1-4 for Ga 3d∙5-8 for Ga 3d∙9-10 for S 2p2. For the Ga 3d and As 3d, the peaks are d-type with s.o.s. = 0.3 and 0.9respectively3. Regions 4 and 8 are the sample just after S-treatment4. Other regions are after annealing5. Peak width of Ga 3d and As 3d are constrained to those in regions 1 and56. The fermi level shift of each region was determined using the As 3d5/2peak and the value was put into the "Region Shift" of each region7. Since the region shift takes into account the Fermi level shift, the peakpositions can be easily referenced for the same chemical components in different regions, i.e.∙Peak#1, 3, 5 of Ga 3d are set equal to Peak#0∙Peak#8, 9, 10 of As 3d are set equal to Peak#78. Note that the %GL value of the peaks is 27% using the GL sum functionin Version 4.0, while it is 80% using the GL product function in previous versions.18 Cu2p_bg.xpsThis spectrum was sent to me by Dr. Roland Schlesinger. It shows a background subtraction using the Shirley + Linear method∙See Shirley + Linear Background, (Page 7)Kratos.des∙This file shows a Kratos *.des file∙This is the format your files should be in if they have come from the Kratos instrument∙Use import Kratos to retrieve the file. See Opening Files, (Page 4)∙Note that the four peaks are all s-type∙You may delete peak 2, 4 and change the peak 1,3 to d-type with s.o.s. = 0.7. You may also read in the parameter file: as3d.rpa. ASCII.prn∙This shows an ASCII file∙Use import ASCII to retrieve the file∙It is a As 3d spectrum of GaAs∙In order to fit the spectrum, you need to first add the background and then add two d-type peaks with s.o.s.=0.7∙You may also read in the parameter file: as3d.rpa.Other Files(We don’t have an instrument that produces these files at Auckland University., but you may wish to look at them anyway. See the readme.doc file for more info.)1. Phi.asc2. Leybold.asc3. VAMAS.txt4. VAMASmult.txtHave Fun! July 1, 1999.。
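The Peak Parameters section above refers to an asymmetric Gaussian-Lorentzian sum function and a symmetric Gaussian-Lorentzian product function, but the formula images did not survive the conversion of this manual. As a point of reference only, the sketch below implements the Gaussian-Lorentzian sum and product lineshapes in a parameterization commonly used in XPS fitting (position E, FWHM F, mixing m); XPSPEAK's exact conventions, including the additional TS and TL parameters (presumably controlling the asymmetry), may differ.

```python
import math

def gl_sum(x, E, F, m):
    """Symmetric Gaussian-Lorentzian *sum*: (1-m)*Gaussian + m*Lorentzian,
    both with FWHM F and centred at E (m = 0 is pure Gaussian)."""
    t = (x - E) / F
    gauss = math.exp(-4.0 * math.log(2.0) * t * t)
    lorentz = 1.0 / (1.0 + 4.0 * t * t)
    return (1.0 - m) * gauss + m * lorentz

def gl_product(x, E, F, m):
    """Symmetric Gaussian-Lorentzian *product* lineshape with the same parameters."""
    t = (x - E) / F
    return math.exp(-4.0 * math.log(2.0) * (1.0 - m) * t * t) / (1.0 + 4.0 * m * t * t)

if __name__ == "__main__":
    # The two forms agree in the pure-Gaussian and pure-Lorentzian limits but differ
    # in between, which is why a spectrum optimised with one form needs a different
    # %GL value when refitted with the other (see the Version 4.0 note above).
    E, F = 285.0, 1.2                  # e.g. aliphatic C 1s at 285.0 eV, FWHM 1.2 eV
    for m in (0.0, 0.3, 0.8, 1.0):
        x = E + 0.6                    # half a FWHM away from the centre
        print(m, round(gl_sum(x, E, F, m), 4), round(gl_product(x, E, F, m), 4))
```

These are unit-height shapes; since XPSPEAK parameterizes peaks by area rather than height, an area normalization would be applied on top of them.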
A New Supersymmetric Index
Lyman Laboratory of Physics
Harvard University, Cambridge, MA 02138, USA
dimensions, independent of almost all deformations of the theory. This index is related to the geometry of the vacua (Berry's curvature) and satisfies an exact differential equation as a function of β. For integrable theories we can also compute the index thermodynamically, using the exact S-matrix. The equivalence of these two results implies a highly non-trivial equivalence of a set of coupled integral equations with these differential equations, among them Painlevé III and the affine Toda equations.
HUTP-92/A021 SISSA 68/92/EP BUHEP-92-14
arXiv:hep-th/9204102v1 30 Apr 1992
Sergio Cecotti†, Paul Fendley⋆, Ken Intriligator∞ and Cumrun Vafa∞
partition function Tr e^{−βH}. This powerful method is known as the thermodynamic Bethe ansatz (TBA) [10]. In particular, the TBA analysis for a large class of N=2 integrable theories in two dimensions was carried out in [11,12], confirming the conjectured S-matrices and in particular reproducing the correct central charges in the UV limit. One can extend the usual TBA analysis by allowing arbitrary chemical potentials, and in particular one can compute objects such as Tr e^{iαF} e^{−βH}. This allows us, as a special case, to compute Tr(−1)^F F e^{−βH} in these theories in terms of integral equations. Thus for integrable theories we seem to have two inequivalent methods to compute the
Fermionic Chern-Simons theory for the Fractional Quantum Hall Effect in Bilayers
state. In single-layer
systems, even though many transport anomalies have been reported, there is no evidence of FQHE. On the other hand, this is a well observed [5] FQHE state in double-layer systems. Motivated by the fact that very interesting physics can be found in these 2DES if one considers new degrees of freedom, we study double-layer FQHE systems. Our formalism can also be extended to the study of spin non-polarized systems. There are two energy scales that play a very important role in this problem. One is the potential energy between the electrons in different layers, and the other one is the tunneling
amplitude between layers. We only consider the case in which the tunneling between the layers may be neglected, and both layers are identical. Therefore, the number of particles in each layer is conserved, and the collective modes corresponding to in phase and out of phase density oscillations are decoupled. We generalize the fermionic Chern-Simons field theory developed in reference [6]. The generalization is straightforward. We consider a theory in which the electrons are coupled to both the electromagnetic field, and to the Chern-Simons gauge fields (two in this case, one for each layer). We show that this theory is equivalent to the standard system in which the Chern-Simons fields are absent, provided that the coefficient of the Chern-Simons action is such that the electrons are attached to an even number of fluxes of the gauge field in their own layer, and to an arbitrary number of fluxes of the gauge field in the opposite layer. In this form, the theory has a U(1) ⊗ U(1) gauge invariance. We obtain the same action as the one derived by Wen and Zee in their matrix formulation of topological fluids [7]. In this paper, we study the liquid-like solution of the semiclassical approximation to this theory. We can describe a large class of states which are characterized by filling fractions in each layer given by

ν_1 = [n − (±1/p_2 + 2s_2)] / [n² − (±1/p_1 + 2s_1)(±1/p_2 + 2s_2)],
ν_2 = [n − (±1/p_1 + 2s_1)] / [n² − (±1/p_1 + 2s_1)(±1/p_2 + 2s_2)].   (1.1)
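Equation (1.1) is reconstructed here from a garbled extraction; as reconstructed, it simply says that the filling fractions are the row sums of the inverse of the flux-attachment matrix K = [[±1/p_1 + 2s_1, n], [n, ±1/p_2 + 2s_2]]. The small check below is our own sketch, with that K-matrix reading taken as an assumption rather than a statement from the paper; it compares the matrix-inverse computation with the closed form and, for p_1 = p_2 = s_1 = s_2 = 1, n = 1 and the upper sign, reproduces ν_1 = ν_2 = 1/4 (total filling 1/2), the familiar (3,3,1)-type bilayer state.

```python
from fractions import Fraction

def fillings(p1, s1, p2, s2, n, sign=+1):
    """Filling fractions from the closed form (1.1) and, as a cross-check, from the
    row sums of K^{-1}, with K = [[sign/p1 + 2*s1, n], [n, sign/p2 + 2*s2]].
    The K-matrix interpretation of (1.1) is our assumption."""
    a = Fraction(sign, p1) + 2 * s1
    b = Fraction(sign, p2) + 2 * s2
    det = a * b - n * n
    nu1 = (n - b) / (n * n - a * b)          # closed form as in (1.1)
    nu2 = (n - a) / (n * n - a * b)
    # row sums of K^{-1} = (1/det) * [[b, -n], [-n, a]]
    nu1_inv = (b - n) / det
    nu2_inv = (a - n) / det
    assert (nu1, nu2) == (nu1_inv, nu2_inv)
    return nu1, nu2

if __name__ == "__main__":
    print(fillings(1, 1, 1, 1, 1))           # (1/4, 1/4): the nu_total = 1/2 bilayer state
    print(fillings(1, 1, 1, 1, 0))           # decoupled layers: two nu = 1/3 Laughlin states
```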
Least squares quantization in PCM
ple, F(x) = P{s(t) ≤ x}, −∞ < x < ∞,
is independent of t, −∞ < t < ∞, as indicated by the notation. Then the average power of the s process, assumed to be finite, is constant in time: S = E{s^2(t)} = \int_{-\infty}^{\infty} x^2 \, dF(x), −∞ < t < ∞.
I. INTRODUCTION

THE BASIC IDEAS in the pulse-code modulation (PCM) system [1], [2, ch. 19] are the Shannon-Nyquist sampling theorem and the notion of quantizing the sample values. The sampling theorem asserts that a signal voltage s(t), −∞ < t < ∞, containing only frequencies less than W cycles/s can be recovered from a sequence of its sample values according to

s(t) = \sum_{j=-\infty}^{\infty} s(t_j) K(t - t_j), \quad -\infty < t < \infty,   (1)

where s(t_j) is the value of s at the jth sampling instant t_j = j/(2W), −∞ < j < ∞, and where

K(t) = \frac{\sin 2\pi W t}{2\pi W t}, \quad -\infty < t < \infty,   (2)

is a "sin t/t" pulse of the appropriate width. The pulse-amplitude modulation (PAM) system [2, ch. 16] is based on the sampling theorem alone. One sends over the system channel, instead of the signal values s(t) for all times t, only a sequence

\ldots, s(t_{-1}), s(t_0), s(t_1), \ldots   (3)

of samples of the signal. The (idealized) receiver constructs the pulses K(t − t_j) and adds them together with the received amplitudes s(t_j) as in (1), to produce an exact reproduction of the original band-limited signal s. PCM is a modification of this. Instead of sending the exact sample values (3), one partitions the voltage range of the signal into a finite number of subsets and transmits to the receiver only the information as to which subset a sample happens to fall in. Built into the receiver there is a source of fixed representative voltages ("quanta"), one for each of the subsets. When the receiver is informed that a certain sample fell in a certain subset, it uses its quantum for that subset as an approximation to the true sample value and constructs a band-limited signal based on these approximate sample values. We define the noise signal as the difference between the receiver-output signal and the original signal, and the noise power as the average square of the noise signal. The problem we consider is the following: given the number of quanta and certain statistical properties of the signal, determine the subsets and quanta that are best in minimizing the noise power.

II. QUANTIZATION

Let us formulate the quantization process more explicitly. A quantization scheme consists of a class of sets {Q_1, Q_2, ..., Q_v} and a set of quanta {q_1, q_2, ..., q_v}. The {Q_a} are any v disjoint subsets of the voltage axis which, taken together, cover the entire voltage axis. The {q_a} are any v finite voltage values. The number v of quanta is to be regarded throughout as a fixed finite preassigned number. We associate with a partition {Q_a} a label function y(x), −∞ < x < ∞, defined for all (real) voltages x by

y(x) = 1 if x lies in Q_1,
y(x) = 2 if x lies in Q_2,
\ldots   (4)
y(x) = v if x lies in Q_v.

If s(t_j) is the jth sample of the signal s, as in Section I, then we denote by a_j the label of the set that this sample falls in:

a_j = y(s(t_j)), \quad -\infty < j < \infty.

In PCM the signal sent over the channel is (in some code or another) the sequence of labels

\ldots, a_{-1}, a_0, a_1, \ldots   (5)
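As a concrete illustration of the problem just posed (choosing the {Q_a} and {q_a} to minimize the noise power), the sketch below runs the alternating centroid/midpoint iteration commonly associated with this paper on samples of a unit-power Gaussian signal. The sample-based implementation and the 4-level example are illustrative choices, not taken from the text above.

import numpy as np

def lloyd_max(samples, v, iters=200):
    """Alternate between (a) moving each quantum to the mean of its cell and
    (b) placing cell boundaries halfway between adjacent quanta."""
    q = np.quantile(samples, (np.arange(v) + 0.5) / v)   # initial quanta
    for _ in range(iters):
        b = 0.5 * (q[:-1] + q[1:])                       # cell boundaries
        idx = np.searchsorted(b, samples)                # label function y(x)
        for a in range(v):
            cell = samples[idx == a]
            if cell.size:
                q[a] = cell.mean()                       # centroid condition
    b = 0.5 * (q[:-1] + q[1:])
    noise = np.mean((samples - q[np.searchsorted(b, samples)]) ** 2)
    return q, noise

rng = np.random.default_rng(0)
s = rng.normal(0.0, 1.0, 200_000)        # unit-power Gaussian "signal samples"
quanta, noise_power = lloyd_max(s, v=4)
print(quanta, noise_power)               # roughly +/-0.45, +/-1.51 and ~0.12

For v = 4 Gaussian levels the iteration settles near the well-known optimum of quanta at about ±0.45 and ±1.51 with noise power close to 0.12 S.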
Two-dimensional Quantum Field Theory, examples and applications
Abstract: The main principles of two-dimensional quantum field theories, in particular two-dimensional QCD and gravity, are reviewed. We study non-perturbative aspects of these theories which make them particularly valuable for testing ideas of four-dimensional quantum field theory. The dynamics of confinement and the theta vacuum are explained using the non-perturbative methods developed in two dimensions. We describe in detail how the effective action of string theory in non-critical dimensions can be represented by Liouville gravity. By comparing the helicity amplitudes in four-dimensional QCD to those of integrable self-dual Yang-Mills theory, we extract a four-dimensional version of two-dimensional integrability.
5 Four-dimensional analogies and consequences
6 Conclusions and Final Remarks
Su Rukeng, Lectures on Advanced Quantum Mechanics (English edition), Chapter 4: Path Integral
§4.2 Path integral
Normalization factor
§4.3 Gauss integration
A type of functional integration which can easily be calculated
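As a reminder of the type of integral meant here, for a symmetric positive-definite matrix A and a source vector J the finite-dimensional Gaussian formula reads

\int d^n x \, \exp\!\Big( -\tfrac{1}{2} x^T A x + J^T x \Big) = (2\pi)^{n/2} (\det A)^{-1/2} \exp\!\Big( \tfrac{1}{2} J^T A^{-1} J \Big),

and the functional (path-integral) version follows formally by promoting A to an operator kernel and det A to a functional determinant. This standard formula is recalled here only for orientation; the slide itself presumably treats the functional case.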
Chapter 4 Path Integral
§4.1 Classical action and the amplitude in Quantum Mechanics
Introduction: how to quantize?
Wave mechanics ↔ Schrödinger equation
Matrix mechanics ↔ commutator
Classical Poisson bracket ↔ quantum Poisson bracket
Path integral ↔ wave function
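A compact summary of these correspondences (standard relations, recalled here for orientation): canonical quantization replaces the classical Poisson bracket by the commutator, while the path integral expresses the transition amplitude directly through the classical action,

\{ q, p \}_{\mathrm{PB}} = 1 \;\longrightarrow\; [\hat{q}, \hat{p}] = i\hbar, \qquad \langle q_f, t_f | q_i, t_i \rangle = \int \mathcal{D}q(t)\, e^{\, i S[q]/\hbar}.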
§4.5 The canonical form of the path integral
An Energy-Efficient Cooperative Algorithm for Data Estimation in Wireless Sensor Networks

Abstract – In Wireless Sensor Networks (WSN), nodes operate on batteries and the network's lifetime depends on the energy consumption of the nodes. Consider the class of sensor networks where all nodes sense a single phenomenon at different locations and send messages to a Fusion Center (FC) in order to estimate the actual information. In classical systems all data processing tasks are done in the FC and there is no processing or compression before transmission. In the proposed algorithm, the network is divided into clusters and data processing is done in two parts. The first part is performed in each cluster at the sensor nodes after local data sharing, and the second part is done at the Fusion Center after receiving all messages from the clusters. Local data sharing results in more efficient data transmission in terms of number of bits. We also take advantage of having the same copy of data at all nodes of each cluster and suggest a virtual Multiple-Input Multiple-Output (V-MIMO) architecture for data transmission from the clusters to the FC. A virtual MIMO network is a set of distributed nodes, each having one antenna. By sharing their data among themselves, these nodes turn into a classical MIMO system. In the previously proposed cooperative/virtual MIMO architectures there has not been any data processing or compression in the conference phase. We modify the existing V-MIMO algorithms to suit the specific class of sensor networks that is of our concern. We use orthogonal Space-Time Block Codes (STBC) for the MIMO part and show by simulation that this algorithm saves considerable energy compared to classical systems.

I. INTRODUCTION

A typical Wireless Sensor Network consists of a set of small, low-cost and energy-limited sensor nodes which are deployed in a field in order to observe a phenomenon and transmit it to a Fusion Center (FC). These sensors are deployed close to one another and their readings of the environment are highly correlated. Their objective is to report a descriptive behavior of the environment, based on all measurements, to the Fusion Center. This diversity in measurement lets the system become more reliable and robust against failure. In general, each node is equipped with a sensing device, a processor and a communication module (which can be either a transmitter or a transmitter/receiver). Sensor nodes are equipped with batteries and are supposed to work for a long period of time without battery replacement. Thus, they are limited in energy, and one of the most important issues in designing sensor networks is the energy consumption of the sensor nodes. To deal with this problem, we might either reduce the number of bits to be transmitted by source compression, or reduce the required power for transmission by applying advanced transmission techniques while satisfying certain performance requirements. A lot of research has been done in order to take advantage of the correlation among sensors' data for reducing the number of bits to be transmitted. Some approaches are based on distributed source coding [1] while others use decentralized estimation [2-5]. In [1], the authors present an efficient algorithm that applies distributed compression based on the Slepian-Wolf [14] encoding technique and use an adaptive signal processing algorithm to track the correlation among sensors' data. In [2-5] the problem of decentralized estimation in sensor networks has been studied under different constraints.
In these algorithms, sensors perform a local quantization on their data, considering that their observations are correlated with those of the other sensors. They produce a binary message and send it to the FC. The FC combines these messages based on the quantization rules used at the sensor nodes and estimates the unknown parameter. Optimal local quantization and final fusion rules are investigated in these works. The distribution assumed for the sensor observations in these papers is a uniform probability distribution function. In our model we consider the Gaussian distribution introduced in [17] for the sensor measurements, which is closer to reality. As an alternative approach, some work has been done using energy-efficient communication techniques such as cooperative/virtual Multiple-Input Multiple-Output (MIMO) transmission in sensor networks [6-11]. In these works, as each sensor is equipped with one antenna, nodes are able to form a virtual MIMO system by cooperating with others. In [6] the application of MIMO techniques in sensor networks based on Alamouti [15] space-time block codes was introduced. In [8,9] the energy efficiency of MIMO techniques has been explored analytically, and in [7] a combination of the distributed signal processing algorithm presented in [1] and cooperative MIMO was studied. In this paper, we consider both techniques, compression and cooperative transmission, at the same time. We reduce energy consumption in two ways: 1) processing data in part at the transmitting side, which results in removing redundant information and thus having fewer bits to be transmitted, and 2) reducing the required transmission energy by applying diversity and space-time coding. Both of these goals are achieved by our proposed two-phase algorithm. In our model, the objective is to estimate the unknown parameter, which is basically the average of all nodes' measurements. That is, the exact measurements of individual nodes are not important and it is not necessary to spend a lot of energy and bandwidth to transmit all measured data with high precision to the FC. We can move some part of the data processing to the sensors' side. This can be done by local data sharing among sensors. We divide the network into clusters of m members. The number of members in the cluster (m) is both the compression factor in data processing and the diversity order in the virtual MIMO architecture. The remainder of this paper is organized as follows: in Section II we introduce our system model and basic assumptions. In Section III we propose our collaborative algorithm. In Section IV we present the mathematical analysis of the proposed algorithm and in Section V we give some numerical simulations. Finally, Section VI concludes the paper.

II. SYSTEM MODEL

A. Network Model

The network model that we use is similar to the one presented in [2-5]. Our network consists of N distributed Sensor Nodes (SN) and a Fusion Center (FC). Sensors are deployed uniformly in the field, close to one another, each taking observations of an unknown parameter (θ). The Fusion Center is located far from the nodes. All nodes observe the same phenomenon but with different measurements. These nodes, together with the Fusion Center, are supposed to find the value of the unknown parameter. Nodes send binary messages to the Fusion Center. The FC will process the received messages and estimate the unknown value.

B. Data Model

In our formulation we use the data model introduced in [17].
We assume that all sensors observe the same phenomenon (θ), which has a Gaussian distribution with variance σ_x². They observe different versions of θ, and we model this difference as additive zero-mean Gaussian noise with variance σ_n². Therefore, the sensor observations are described by

x_i = \theta + n_i,   (1)

where θ ~ N(0, σ_x²) and n_i ~ N(0, σ_n²) for i = 1, 2, ..., N. Based on this assumption the value of θ can be estimated by taking the numerical average of the nodes' observations, i.e.

\hat{\theta} = \frac{1}{N} \sum_{i=1}^{N} x_i.   (2)

C. Reference System Model

Our reference system consists of N conventional Single-Input Single-Output (SISO) wireless links, each connecting one of the sensor nodes to the FC. For the reference system we do not consider any communication or cooperation among the sensors. Therefore each sensor quantizes its observation with an L-bit scalar quantizer designed for the distribution of θ, generates a message of length L and transmits it directly to the FC. The Fusion Center receives all messages and performs the processing, which is the calculation of the numerical average of these messages.

III. COOPERATIVE DATA PROCESSING ALGORITHM

Sensor readings are analog quantities. Therefore, each sensor has to compress its data into several bits. For data compression we use an L-bit scalar quantizer [12,13]. In our algorithm, the network is divided into clusters, each cluster having a fixed and pre-defined number of members (m). Members of each cluster are supposed to cooperate with one another in two ways:
1. share, process and compress their data;
2. cooperatively transmit their processed data using virtual MIMO.

IV. ANALYSIS

The performance metric considered in our analysis is the total distortion due to compression and to errors occurring during transmission. The first distortion is due to the finite-length quantizer used in each sensor to represent the analog number by L bits. This distortion depends on the design of the quantizer. We consider a Gaussian scalar quantizer which is designed over 10^5 randomly generated samples. The second distortion is due to errors occurring during transmission through the channel. In our system, this distortion is proportional to the probability of bit error. Since the probability of bit error (Pe) is a function of the transmission energy per bit (Eb), the total distortion will be a function of Eb. In this section we characterize the transmission and total consumed energy of the sensors and find the relationship between distortion and probability of bit error.

V. SIMULATION AND NUMERICAL RESULTS

To give a numerical example, we assume m = 4 members in each cluster. Therefore our virtual MIMO scheme will consist of 4 transmit antennas. We assume that the network has N = 32 sensors. Sensor observations are Gaussian with σ_x² = 1 and are added to a Gaussian noise of σ_n² = 0.1. Nodes are deployed uniformly in the field and are 2 meters apart from each other, and the Fusion Center is located 100 meters away from the center of the field. The values of the circuit parameters are quoted from [6] and are listed in Table I. These parameters depend on the hardware design and technological advances. Fig. 1 illustrates the performance (distortion) of the reference system and of the proposed two-phase V-MIMO scheme versus transmission energy consumption on a logarithmic scale. As shown in the figures, depending on how much precision is needed in the system, we can save energy by applying the proposed algorithm.

TABLE I

Fig. 2 illustrates the distortion versus the total energy consumption of the sensor nodes.
That is, in this figure we consider both the transmission and the circuit energy consumption. The parameters that lead us to these results may be designed to give better performance than presented here. However, from these figures we can conclude that the proposed algorithm outperforms the reference system when we want to have distortion less than 10^{-3}, and it can save as much as 10 dB of energy.

VI. CONCLUSION

In this paper we proposed a novel algorithm which takes advantage of cooperation among sensor nodes in two ways: it not only compresses the set of sensor messages at the sensor nodes into one message appropriate for final estimation, but also encodes them into orthogonal space-time symbols which are easy to decode and energy-efficient. This algorithm is able to save as much as 10 dB of energy.

REFERENCES
[1] J. Chou, D. Petrovic and K. Ramchandran, "A distributed and adaptive signal processing approach to reducing energy consumption in sensor networks," Proc. IEEE INFOCOM, March 2003.
[2] Z.-Q. Luo, "Universal decentralized estimation in a bandwidth constrained sensor network," IEEE Trans. Information Theory, vol. 51, no. 6, June 2005.
[3] Z.-Q. Luo, "An isotropic universal decentralized estimation scheme for a bandwidth constrained ad hoc sensor network," IEEE J. Select. Areas Commun., vol. 23, no. 4, April 2005.
[4] Z.-Q. Luo and J.-J. Xiao, "Decentralized estimation in an inhomogeneous sensing environment," IEEE Trans. Information Theory, vol. 51, no. 10, October 2005.
[5] J.-J. Xiao, S. Cui, Z.-Q. Luo and A. J. Goldsmith, "Joint estimation in sensor networks under energy constraints," Proc. IEEE First Conference on Sensor and Ad Hoc Communications and Networks (SECON 04), October 2004.
[6] S. Cui, A. J. Goldsmith and A. Bahai, "Energy-efficiency of MIMO and cooperative MIMO techniques in sensor networks," IEEE J. Select. Areas Commun., vol. 22, no. 6, pp. 1089-1098, August 2004.
[7] S. K. Jayaweera and M. L. Chebolu, "Virtual MIMO and distributed signal processing for sensor networks - an integrated approach," Proc. IEEE International Conf. Commun. (ICC 05), May 2005.
[8] S. K. Jayaweera, "Energy efficient virtual MIMO-based cooperative communications for wireless sensor networks," 2nd International Conf. on Intelligent Sensing and Information Processing (ICISIP 05), January 2005.
[9] S. K. Jayaweera, "Energy analysis of MIMO techniques in wireless sensor networks," 38th Annual Conference on Information Sciences and Systems (CISS 04), March 2004.
[10] S. K. Jayaweera and M. L. Chebolu, "Virtual MIMO and distributed signal processing for sensor networks - an integrated approach," IEEE International Conf. on Communications (ICC 05), May 2005.
[11] S. K. Jayaweera, "An energy-efficient virtual MIMO communications architecture based on V-BLAST processing for distributed wireless sensor networks," 1st IEEE International Conf. on Sensor and Ad-hoc Communications and Networks (SECON 2004), October 2004.
[12] J. Max, "Quantizing for minimum distortion," IRE Trans. Information Theory, vol. IT-6, pp. 7-12, March 1960.
[13] S. P. Lloyd, "Least squares quantization in PCM," IEEE Trans. Information Theory, vol. IT-28, pp. 129-137, March 1982.
[14] D. Slepian and J. K. Wolf, "Noiseless coding of correlated information sources," IEEE Trans. on Information Theory, vol. 19, pp. 471-480, July 1973.
[15] S. M. Alamouti, "A simple transmit diversity technique for wireless communications," IEEE J. Select. Areas Commun., vol. 16, no. 8, pp. 1451-1458, October 1998.
[16] V. Tarokh, H. Jafarkhani and A. R. Calderbank, "Space-time block codes from orthogonal designs," IEEE Trans. Information Theory, vol. 45, no. 5, pp. 1456-1467, July 1999.
[17] Y. Oohama, "The rate-distortion function for the quadratic Gaussian CEO problem," IEEE Trans. Information Theory, vol. 44, pp. 1057-1070, May 1998.
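To make the reference (SISO) system of Section II concrete, the following minimal Python sketch generates observations according to Eqs. (1)-(2) with the paper's simulation values (N = 32, σ_x² = 1, σ_n² = 0.1) and lets the Fusion Center average the quantized messages. The uniform L-bit quantizer over a ±4σ range and the noiseless channel are illustrative stand-ins, not the paper's trained Gaussian quantizer or channel model.

import numpy as np

rng = np.random.default_rng(1)
N, L = 32, 8                                   # sensors and quantizer bits (L illustrative)
sigma_x, sigma_n = 1.0, np.sqrt(0.1)

theta = rng.normal(0.0, sigma_x)               # common phenomenon, theta ~ N(0, sigma_x^2)
x = theta + rng.normal(0.0, sigma_n, N)        # sensor observations, Eq. (1)

# L-bit uniform scalar quantizer over +/- 4 sigma_x (stand-in for the paper's
# Gaussian Lloyd-type quantizer); each sensor sends its quantized value to the FC.
step = 8.0 * sigma_x / 2**L
x_q = np.clip(np.round(x / step) * step, -4.0 * sigma_x, 4.0 * sigma_x)

theta_hat = x_q.mean()                         # Fusion Center estimate, Eq. (2)
print(theta, theta_hat, (theta - theta_hat) ** 2)   # squared error is of order sigma_n^2 / N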
A Workshop of MIT-RTG
Here are several broad goals of the workshop we could keep in mind:1)Understand equivalence between constructible sheaves and Fukaya category of cotangent bundle.Consider applications such as homological characterizations of com-pact exact branes,mirror symmetry for toric varieties,and Springer theory.In the first two cases,sheaves help us understand branes;in the third case,branes help us understand sheaves.All of the material here is available in the literature.2)Understand what the above equivalence should imply about quantizations of more general exact symplectic targets arising in representation theory.For example,we should be able to see the relation between quantizations of Slodowy slices in the form of the Fukaya category and in the form of modules over W-algebras.To my knowledge, this is not completely mapped out by the literature.3)Discuss directions for further investigation of relations between Fukaya categories and categories in representation theory.Starting point:Fukaya categories of cotangent bundles toflag varieties,D-modules on moduli spaces of bundles.Here the literature points to many open questions.1.Singularities and constructible sheaves.1.1.Tame geometry.Subanalytic geometry.Defining functions.Whitney stratifica-tions and triangulations.Thom isotopy lemmas.Example of real line.Build up the notion of subanalytic subset of a real analytic manifold by starting with the real line and then considering standard operations(with an emphasis on the special role of the image of a map).Explain relation between closed subanalytic subsets and zeros of subanalytic functions.Discuss axioms of Whitney stratifications and results about stratifying and triangulating subanalytic sets.Discuss Thom iso-topy lemmas,in particular the assertion:if f:M→N is a proper stratified map, then stratum-preserving homeomorphisms of N(smooth along each stratum)lift to stratum-preserving homeomorphisms of M(smooth along each stratum).Describe local structure of Whitney stratifications as iterated cone bundles along strata.This lecture should contain many simple counterexamples.For example,to illustrate the Whitney conditions,one could discuss the Whitney umbrella and cusp.Refs:[BM88],[VM96]1.2.Homotopical categories.Differential graded and A∞-categories.Functors and modules.Linear structure:shifts and cones.Localization with respect to collection of morphisms.Homological perturbation theory.This lecture should approach categories as multi-pointed versions of algebras.In fact,we should have in mind the case where the number of points isfinite so that in the end we could think in terms of algebras.Introduce chain complexes and basic notions:tensor and hom,shift,sum and sum-mand,cone,quasi-equivalence,...Introduce strong notion of algebra(differential graded12algebra)and weak notion of algebra(A∞-algebra).Draw operadic pictures for A∞-categories and functors between them.Describe equivalence of differential graded cate-gories and A∞-categories via homological perturbation theory.Explain what is gained (and perhaps lost)in the stabilization(Morita theory)of A∞-categories by passing to perfect modules:idempotent-completion of complexes of representable functors to chain complexes.Reminder that triangulated categories arise in nature as the under-lying discrete categories of stable A∞-categories.Describe localization of a category (basic example:passing from modules over an algebra to modules over a localization of the algebra).Refs:[Ke06],[S],[L]1.3.Constructible sheaves.Differential graded category of 
sheaves.Functoriality under maps.Standard triangles and bases.Relation to constructible functions.This lecture can be in the more traditional language of triangulated categories as long as it is understood that all of the constructions and results can be lifted to the differential graded setting.Fix Whitney stratification S of real analytic manifold X.Introduce differential graded category of S-constructible complexes.Discuss case where S consists of one stratum X itself(local systems and complexes with locally constant cohomology).In-troduce Grothendieck’s6operations(f∗,f∗),(f!,f!),⊗,H om and Verdier duality.Con-struct standard triangles associated to pair of an open U⊂X and closed V=X\U. Calculate morphisms between standard extensions of constant sheaves on strata of S. Possibly include:informal discussion of exit-path simplicial category of a Whitney stratification,and constructible sheaves asfinitely-generated modules over the exit-path category.Explain how stalk Euler characteristic identifies Grothendieck group of constructible sheaves with constructible functions.Refs:[KS84],[GM83]1.4.Examples.Constructible sheaves on R stratified with a single marked point.Con-structible sheaves on S1stratified with a single marked point.Constructible sheaves on A1stratified with a single marked point.Constructible sheaves on P1stratified with a single marked point.The aim here is to give quiver presentations of categories of constructible sheaves in some simple examples.By choosing enough functionals,we can describe a category with the description depending on the functionals.For the above examples,choose various functionals and describe resulting quivers.Describe objects representing the functionals considered.For example,first construct quiver arising from considering generic stalk and stalk at marked point,then construct quiver using generic stalk and vanishing cycles at marked point.Further example:a three stratum space such as A2with a marked singular curve.2.Microlocal geometry of sheaves2.1.Cotangent bundles.Exact symplectic structure.Geodesicflow.Examples of Lagrangians:conormals,graphs and generalizations.Conormals to stratifi-grangian correspondences.3 Summary of basic structures in exact symplectic geometry with emphasis on the case of cotangent bundles,including Liouvilleflow,contact hypersurfaces,compatible almost complex structures,exact Lagrangians,...Explain meaning of basic objects in terms of classical mechanics.Describe graph Lagrangians and conormal Lagrangians and their hybrids.Construct Lagrangian correspondences of cotangent bundles from maps of base manifolds,emphasizing case of projection and inclusion.Refs:[A],[KS84]2.2.Characteristic cycles.From constructible sheaves to conical Lagrangian cycles. 
Functoriality under maps.Introduce group of conical Lagrangian cycles.Construct characteristic cycle of con-structible sheaf on a manifold.Calculate everything in case when manifold is real line or complex line.Explain functoriality for Grothendieck’s6operations(f∗,f∗),(f!,f!),⊗,H om and Verdier duality.Show characteristic cycle construction descends to iso-morphism between group of constructible functions and group of conical Lagrangian cycles.Refs:[KS84],[SV96]2.3.Intersection of Lagrangian cycles.Perturbations near infinity.Intersections of characteristic cycles:compatibility with ext-pairing of constructible sheaves and corresponding pairing of constructible functions.Index theorems.Describe framework of perturbing conical Lagrangians by normalized geodesicflow near infinity.Discuss Z/2-grading on intersections of conical Lagrangian cycles.Show characteristic cycle takes pairing on constructible functions to intersection of conical Lagrangian cycles.Dubson-Kashiwara index formula(generalization of Poincar´e-Hopf index formula):calculate global Euler characteristic of constructible sheaf as intersec-tion with zero section.Construct automorphisms of group of conical Lagrangian cycles via motions of pieces of support.Example of Dehn twist on conical Lagrangian cycles in T∗S1.This topic is logically independent of the preceding but is reasonable to discuss at this juncture.Refs:[GrM97],[NZ09]2.4.Riemann-Hilbert correspondence.Differential operators as quantization of functions on cotangent bundle.Algebraic model of constructible sheaves:regular holo-nomic D-modules.Explain Riemann-Hilbert correspondence between regular holonomic D-modules and constructible sheaves.Discuss the failure of an abelian version and the resulting notion of a perverse sheaf.Introduce the singular support of a D-module and its relation to characteristic cycles.Illustrate everything with the case of A1stratified by a single marked point.Refs:[Be],[Kap]3.Exact Lagrangians in cotangent bundles3.1.Morse category of submanifolds.Gradient tree A∞-category of submanifolds with local systems.Equivalence with constructible sheaves.4This lecture should explain how the differential graded category of constructiblesheaves on a manifold can be reformulated in terms of a Morse A∞-category whoseobjects are locally closed submanifolds equipped with local systems.Basic case:explain equivalence of de Rham algebra of compact manifold with MorseA∞-algebra.This will provide opportunity to interpret A∞-operad in terms of trivalentgraphs.Show how Morse theory provides geometric ingredients to apply homologicalperturbation theory.(For bonus points:mention other sources of parallel geometricingredients such as Hodge theory.)Main topic:interpret constructible sheaves in termsof Morse theory.Explain how Thom’s isotopy lemma allows one to replace locally closedsubmanifolds with singular boundary with open submanifolds with smooth boundary.Draw vectorfields for constructible sheaves and calculate morphisms,for example forR stratified with a single marked point,and A1stratified with a single marked point. 
Possible further topic:explain some of Grothendieck’s6operations in terms of MorseA∞-category.Refs:[HL01],[KS01],[NZ09]3.2.Exact Floer-Fukaya theory.Fukaya category of compact exact Lagrangians inexact symplectic target.Brane structures.Moduli spaces of anization intoA∞-category.This lecture should be an introduction to Fukaya categories of exact targets.Seidel’sbook provides the foundations and the speaker should choose appropriate highlights.Itis likely more worthwhile that we understand the broad picture than the analytic details.We should know what brane structures are and why that is what they are(gradingsof intersections and orientations of moduli spaces).We should hear enough about thebehavior of moduli of disks to see the A∞-structure(most prominently,there shouldbe a discussion of the A∞-equations coming from the boundary of moduli).We shouldhear enough about continuation maps to believe that everything is well-defined.Thelecture can restrict to compact Lagrangians as we will be hearing about non-compactones soon enough.Should discuss the Piunikhin-Salamon-Schwarz(PSS)calculationof endomorphisms of compact branes.Resolutions of du Val singularities and theirdeformations would be a good example to illustrate the theory(and will appear inlater talks).Refs:[S]3.3.Infinitesimal Fukaya category of cotangent bundle.Noncompact branes:perturbations,tameness,bounds on parisons with directed and wrappedFukaya categories.Equivalence of subcategory of standard branes with Morse categoryof submanifolds.This talk should consist of roughly two halves:general theory of Fukaya categorieswith non-compact branes and example of the cotangent bundle.First half.Survey general techniques for dealing with disks along noncompact branes:energy bounds,tameness,diameter estimates.For exact target withfixed energy func-tion,introduce the infinitesimal Fukaya category where small Hamiltonian perturba-tions of branes are used near infiparisons could be made with directed Fukaya-Seidel categories of Lefschetzfibrations,and also wrapped Fukaya categories where theHamiltonian perturbations are not small but rather linear near infinity.5 Second half.Explain why the Morse A∞-version of constructible sheaves embeds in the infinitesimal Fukaya category of the cotangent bundle.Here the main ingredient is Fukaya-Oh’s analytic equivalence between gradient trees and pseudo-holomorphic disks (or alternatively,hybrid moduli spaces interpolating between them).Refs:[S],[Sik94],[FO97],[NZ09],[Nspr]3.4.Equivalence of sheaves and branes.Formalism of Yoneda lemma and bimod-ules.Beilinson’s argument.Decomposition of diagonal.Noncharacteristic motions.The aim of this talk is to prove that the infinitesimal Fukaya category of the cotangent bundle is equivalent to constructible sheaves.At this point,what is left to prove is that the standard branes coming from standard sheaves indeed generate.Begin with general discussion of the Yoneda lemma,functors and bimodules,and the formalism of generators.Introduce Beilinson’s construction of generators for coherent sheaves on projective space as guiding example.Bulk of talk should be devoted to applying this argument to the infinitesimal Fukaya category of the cotangent bundle. 
Here the main ingredient is the notion of non-characteristic propagation.Thom’s iso-topy lemma should be reinterpreted in the language of non-characteristic maps.An analogous lemma for continuation maps of branes should be formulated.Finally,we should see at least a sketch of Beilinson’s argument in the setting of the infinitesimal Fukaya category.Application:homological characterization of compact exact branes in cotangent bundle.Refs:[B78],[N09],[Nspr]4.Some examples and applicationsThis day’s talks are more independent of each other and the specific material covered can be determined by the speaker’s taste.4.1.Mirror symmetry for toric varieties.Fukaya category of cotangent bundle of torus.Consider torus(S1)n=R n/Z n.Introduce alternative viewpoints on symplectic geometry of(C×)n T∗(S1)n via two projections T∗(S1)n→(S1)n and T∗(S1)n→(R∨)n.Describe branes arising from considering a toric compactification of(C×)n. Explain how to think about them in terms of constructible sheaves.Discuss mirror symmetry and dual description of coherent sheaves in terms of constructible sheaves. Extra credit:equivariant generalization.If we understand nothing else,we should at least understand mirror symmetry be-tween A-model of T∗S1and B-model of P1.Refs:[FLTZ]and related papers.4.2.Springer theory.Fukaya category of cotangent bundle of Lie algebra.Fourier transform from Floer perspective.Describe basic diagram of Springer theory arriving at Springer brane in T∗g.Intro-duce Fourier dual perspective and Fourier transform for branes.Deduce consequences for Springer brane.Interpret preceding in classical language of constructible sheaves.Refs:[BoM81],[Nspr]64.3.Microlocalization and Hamiltonian reduction.Formalism of microlocaliza-tion and Hamiltonian reduction.Introduction to crepant resolutions,their deformations and quantizations.This talk and the one that follows could be planned in tandem.One approach would be to have thefirst talk cover theory,and the second cover examples.In any event,the speakers should strategize together.We should see that many important examples of exact symplectic manifolds(sym-plectic resolutions with C×-action)arising in representation theory can be constructed from conical open subsets of cotangent bundles via Hamiltonian reduction.We should learn how to think about categories(Fukaya,modules over deformation quantization) associated to such targets and their deformations can be arrived at from categories (Fukaya,microlocal constructible sheaves and D-modules)associated to conical open subsets of cotangent bundles via Hamiltonian reductions.4.4.W-algebras from topological viewpoint.Fukaya category beyond compact branes in Slodowy slices.This talk could map out the relation between branes in Slodowy slices and their deformations,sheaves onflag manifolds(or equivalently,regular holonomic D-modules onflag manifolds),and modules over W-algebras.The specific example of du Val resolutions could be discussed concretely.Refs:[KhS],[SS],[Ma],[Lo]among many related papers.5.Further directions5.1.Gauge theory setting.Hitchin integrable system.Relation to talks of previous day.Challenge of quantization offibers.Refs:[BD],[KW],[Kap]5.2.Where to go from here.References[A]Arnold,V.I.“Mathematical methods of classical mechanics.”Translated from the Russian byK.Vogtmann and A.Weinstein.Graduate Texts in Mathematics,60.Springer-Verlag,New York-Heidelberg,1978.x+462pp.ISBN:0-387-90314-3[B78]A.A.Be˘ılinson,“Coherent sheaves on P n and problems in linear algebra,”(Russian)Funktsional.Anal.i 
Prilozhen.12(1978),no.3,68–69;English translation:Functional Anal.Appl.12(1978), no.3,214–216(1979).[BD]A.Beilinson and V.Drinfeld,“Quantization of Hitchin Hamiltonians and Hecke Eigensheaves,”preprint.[Be]J.Bernstein,“Algebraic theory of D-modules.”/~mitya/langlands/Bernstein/Bernstein-dmod.ps[BM88]E.Bierstone and man,“Semianalytic and subanalytic sets,”Inst.Hautes´Etudes Sci.Publ.Math.67(1988),5–42.[FLTZ]Bohan Fang,Chiu-Chu Melissa Liu,David Treumann,Eric Zaslow.“T-Duality and Homolog-ical Mirror Symmetry of Toric Varieties”,arXiv:0811.1228[BoM81]W.Borho and R.MacPherson,“Repr´e sentations des groupes de Weyl et homologie d’intersection pour les vari´e t´e s nilpotentes,”C.R.Acad.Sci.Paris S´e r.I Math.292(1981),no.15,707–710.7 [FO97]K.Fukaya and Y.-G.Oh,“Zero-loop open strings in the cotangent bundle and Morse homo-topy,”Asian.J.Math.1(1997)96–180.[FSS]Kenji Fukaya,Paul Seidel,Ivan Smith,“The symplectic geometry of cotangent bundles from a categorical viewpoint”.arXiv:0705.3450.[GM83]M.Goresky and R.MacPherson.“Intersection homology.II.”Invent.Math.72(1983),no.1, 77–129.[GrM97]M.Grinberg and R.MacPherson.“Euler characteristics and Lagrangian intersections.”Sym-plectic geometry and topology(Park City,UT,1997),265–293,IAS/Park City Math.Ser.,7, Amer.Math.Soc.,Providence,RI,1999.[HL01]F.R.Harvey and wson,Jr.,“Finite Volume Flows and Morse Theory,”Annals of Math.vol.153,no.1(2001),1–25.[Kap]A.Kapustin,“A-branes and noncommutative geometry,”arXiv:hep-th/0502212.[KW]A.Kapustin and E.Witten,“Electric-Magnetic Duality And The Geometric Langlands Pro-gram,”arXiv:hep-th/0604151.[KS84]M.Kashiwara and P.Schapira,Sheaves on manifolds.Grundlehren der Mathematischen Wis-senschaften292,Springer-Verlag(1994).[Ke06]B.Keller,On differential graded categories.arXiv:math.AG/0601185.International Congress of Mathematicians.Vol.II,151–190,Eur.Math.Soc.,Z¨u rich,2006.[KhS]Mikhail Khovanov,Paul Seidel“Quivers,Floer cohomology,and braid group actions”, arXiv:math/0006056.[KS01]M.Kontsevich and Y.Soibelman,“Homological Mirror Symmetry and Torus Fibrations,”Sym-plectic geometry and mirror symmetry(Seoul,2000),203–263,World Sci.Publ.,River Edge,NJ, 2001.[Lo]I.Losev,“Finite W-algebras”,arXiv:1003.5811.[L]J.Lurie,“Stable Infinity Categories,”arXiv:math/0608228.[Ltft]J.Lurie,“On the Classification of Topological Field Theories,”arXiv:0905.0465[Ma]Ciprian Manolescu“Link homology theories from symplectic geometry”,arXiv:math/0601629. [N09]D.Nadler,“Microlocal Branes are Constructible Sheaves,”Selecta Math.15(2009),no.4,563–619.[Nspr]D.Nadler,“Springer theory via the Hitchinfibration”,arXiv:0806.4566.[NZ09]D.Nadler,E.Zaslow,“Constructible sheaves and the Fukaya category.”J.Amer.Math.Soc.22(2009),no.1,233–286.[SV96]W.Schmid and K.Vilonen,“Characteristic cycles of constructible sheaves,”Invent.Math.124 (1996),451–502.[S]P.Seidel,Fukaya Categories and Picard-Lefschetz Theory.[SS]Paul Seidel,Ivan Smith“A link invariant from the symplectic geometry of nilpotent slices”, arXiv:math/0405089.[STZ]N.Sibilla,D.Treumann,E.Zaslow,“Ribbon Graphs and Mirror Symmetry I”,arXiv:1103.2462. [Sik94]J.-C.Sikorav,“Some properties of holomorphic curves in almost complex manifolds,”in Holo-morphic Curves in Symplectic Geometry,Birkh¨a user(1994),165–189.[Sl80]P.Slodowy,Simple singularities and simple algebraic groups.Lecture Notes in Mathematics, 815.Springer,Berlin,1980.[VM96]L.van den Dries and ler,“Geometric categories and o-minimal structures,”Duke Math.J.84,no.2(1996),497–539.。
Exact conserved quantities on the cylinder I conformal case
arXiv:hep-th/0211094v3 23 Jul 2003
Exact conserved quantities on the cylinder I: conformal case.
D. Fioravanti and M. Rossi
it became well known that these local integrals of motion represent one series of local commuting charges of conformal field theories without extended symmetries [18, 19] (cf. [8, 1, 20] for details concerning the matrix Lax formulation). In addition, this series is exactly the one which is still conserved, after suitable modification, if the conformal field theory is perturbed by its Φ(1,3) operator. Actually, applying the investigations of the present paper and of [8] to the setup elaborated in [19] for the A_2^{(2)} KdV theory, it is only a matter of calculation to extend the subsequent results to the only other series [18], characterising the only other integrable perturbations. The very intriguing feature of the conformal formulation in terms of integrable hierarchies is that the conformal monodromy matrix already contains the perturbing field and the screening operator [19]. A similar discretisation should also be allowed in conformal field theories with extended symmetry algebra (e.g. for W algebras the setting of [21] is of basic importance; cf. [22] for developments in a peculiar case). We remark that similar results have been obtained in pioneering works [1, 2] by Bazhanov, Lukyanov and Zamolodchikov from a different starting point and via a different approach. They define the transfer matrix directly in the continuous field theory as an operator series acting on Virasoro modules (for 0 < β² < 1/2). On the other hand, we intend to create a bridge between conformal field theories and the powerful algebraic Bethe Ansatz formulation of lattice KdV [8, 9] – based on this interesting generalisation of the Yang-Baxter algebra – showing that the continuum field theory limit can be recovered in the braided case as well. As a particular consequence, in the second paper we will be able to formulate the Φ(1,3) perturbation of conformal field theory within the same framework as [8] by merging two KdV theories. In any case, the monodromy matrix of [1, 2] is different from the naïve continuous limit [8] of our monodromy matrices, which, unlike in [1, 2], are not solutions of the usual Yang-Baxter equation. In section 2 we summarise the main results on (m)KdV theories obtained in [8]. In section 3, starting from the Bethe equations of [8], we write the nonlinear integral equation for each vacuum of the left and right quantum KdV equations. In section 4 we use this equation to calculate up to quadratures the energy of the vacua. In section 5 we apply the nonlinear integral equation to find exact expressions for the continuous limit of the vacuum eigenvalues of the transfer matrix. From the asymptotic expansion of these eigenvalues we obtain the local abelian charges, mentioned before as characterising the Φ(1,3) perturbation. In addition, we have solved the theory exactly in two cases: at the free fermion point β² = 1/2 and in the limit of infinite twist (ω̃ → +∞). Some results are also compared with results and conjectures in [1, 2]. In Section 6 we summarise our work, describing also possible further applications.
Effective Field Theories and Quantum Chromodynamics on the Lattice
arXiv:hep-lat/0403006v4 31 Mar 2005   HU-EP-04/10

Effective Field Theories and Quantum Chromodynamics on the Lattice

A. Ali Khan
Institut für Physik, Humboldt-Universität zu Berlin, 12489 Berlin, Germany

Abstract
We give a selection of results on the spectrum and decay constants of light and heavy-light hadrons. Effective field theories relevant for their lattice calculation, namely non-relativistic QCD (NRQCD) for heavy quarks on the lattice and Chiral Perturbation Theory for light quarks, are briefly discussed.

1 INTRODUCTION
The Standard Model of the strong and electroweak interactions is based on an SU(3) × SU(2) × U(1) gauge symmetry with three generations of quarks and leptons as fermionic matter fields and a scalar field, the Higgs, which is responsible for the masses of the weak SU(2) gauge bosons and of the fermions. For a recent review of the status of the Standard Model and new physics see e.g. [1]. The SU(3) 'sector' of the model is Quantum Chromodynamics (QCD), a gauge theory of the strong interaction. With relativistic Dirac quarks, the model can be described classically by the Lagrangian

L_{QCD} = \sum_q \bar{q}\,(i\gamma^\mu D_\mu - m_q)\,q - \frac{1}{4} F^c_{\mu\nu} F^{c\,\mu\nu}.   (1)

The q fields are 4-component Dirac spinors, and the D_µ are covariant derivatives, e.g. D_µ ≡ ∂_µ − i g_s A^c_µ t^c with [D_µ, D_ν] = −i g_s F^c_{µν} t^c, where the F^c_{µν} are the field strength tensors, g_s is the coupling constant, and the t^c are generators of SU(3) in the fundamental representation. A consequence of the self-interactions among the gluon fields A^c_µ is asymptotic freedom, i.e. the interactions between particles become weak at short distances and can be described with perturbation theory in the strong coupling α_s = g_s²/(4π). At larger distances, the forces become strong, and non-perturbative methods are necessary to understand how hadron masses arise and whether it is possible to explain the hadron spectrum from first principles within the theory of strong interactions. Although rather successful, the Standard Model by itself does not seem completely satisfactory. On the experimental side, recent discoveries such as neutrino mixings, new results from accelerator experiments [3,4] and indications for 'dark energy' in the cosmos indicate a need for an extension of the model. The Higgs particle has not yet been found; recent reviews of the status of Higgs searches are [5]. Further, there are theoretical motivations to search for physics beyond the Standard Model (for a discussion see e.g. [2]). The Standard Model contains a considerably large set of coupling constants and masses as input parameters. It does not explain the values of typical energy scales such as the masses of the weak gauge bosons. A strategy in this research is to simultaneously measure as many physical quantities as possible, test the results for self-consistency within the Standard Model and search for indications of new physics. Among the most interesting search grounds are the elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, which parameterizes the flavor-changing weak currents and provides a mechanism for CP violation within the Standard Model.
Those CKM matrix elements which are relevant to reactions of heavy,for example b and c,quarks are at present studied intensively in experiment and theory.We introduce the CKM matrix with an emphasis on B meson decays in the framework of the weak effective theory in Section1.1.The status of the CKM matrix is reviewed in[6].For a review about recent results on quark masses see Ref.[7].Description of the long-range interactions of QCD requires non-perturbative ing a four-dimensional lattice description of space and time it is possible to calculate matrix elements numerically on a computer within a path integral formalism.A brief introduction to the lattice formalism is given in Section2;for detailed recent reviews see[8].Ideally,the lattice extent L should be much larger than the extent or the Compton wavelength of the particles that are supposed to be described,and the inverse lattice spacing a should be much larger than the masses and momenta in the theory in order to avoid cutoffeffects.The lightest hadrons,the pions,have a mass of around140MeV, whereas the B meson has a mass of5.28GeV and contains a heavy quark with a mass of5GeV.The problem is how lattice simulations can accommodate this large range of scales.Ideally L−1≪masses and≪a−1energy splittings1.1Heavy quark decaysStudy of weak decays of quarks is of interest for determinations of elements of the CKM matrix which parameterizes the mixings of quark generations in the Standard Model:L =−g 22c LB 0mixing,which is described by the left and middle diagrams in Figure1in the electroweak theory.Processes at energy scales much less than the W boson mass can be calculatedu,c,t u,c,tWWbddb><^<>vb ddb>>u,c,tu,c,tWWu,c,t<<<>Figure 1:Box diagrams describing B 0−B 0is described by the third diagram in Fig.1.To relate the weak processes between quarks with exclusive reaction rates of mesons,one uses form factors which get contributions from long-distance QCD interactions,and therefore have to be calculated nonperturbatively.This can be done from first principles using the lattice.In the effective theory the B meson decay is described by a matrix element of the heavy-light axial vector current0|A µ(x )|B (p ) =if B p µe −ipx ,(3)where f B is the B decay constant.The branching ratio for the decay B +→l +νl isBR (B +→l +νl )=G 2F m B m 2l m 2B2f 2B |V ub |2τB ,(4)where G F =g 22/(8M 2W )is the Fermi constant and τB the B lifetime.If f B is known,|V ub |can in principle be determined experimentally from this decay.There exist onlyexperimental upper bounds on f B,f Bs and f D,but there are results on f Ds:f Ds=280(17)(25)(35)MeV[13]and=285(19)(40)MeV[14].In the weak effective theory,the form factor for the B0−2 b Lγµd L)( bγµγ5d|0 0|N c Re Tr ,β=2N cimprovement program for on-shell quantities in QCD by Ref.[16]),or with renormalization group methods to obtain renormalization group improved(RG)[17]or perfect[18]actions.At typical values ofβin lattice simulations,there are large corrections due to gauge field loops on the lattice which shift the expectation value of U substantially with respect to the freefield value,one.The perturbative corrections can be reduced with‘mean-field’(or‘tadpole’)improvement[12]:the gauge links U are divided by their expectation value which can be calculated in perturbation theory or determined nonperturbatively in simulations.2.2Lattice fermionsDiscretization of the Euclidean continuum Dirac action by substituting the covariant derivatives by covariant symmetric lattice differences gives the’naive’lattice fermion actionS 
F=a4 1q xγµ U x,µq x+µ−U†x−µ,µq x−µ + x mq x∆q x to the action,where∆is a covariant second derivative.The doublers obtain masses which remainfinite in lattice units:ma=0if a→0.Chiral symmetry receives corrections at a=0,and O(am),O(ap)discretization errors occur.O(a)errors can be removed from the action with the clover term proportional to a4 xq x=Z DφO†(x)O(0)e−S[φ],(10)where Dφdenotes integration over all dynamicalfields(gauge,fermion,etc...)in the theory.To determine for example f B from the lattice,it is necessary to calculate the renormalization factors to match the unrenormalized lattice matrix element of the axial vector current to the corresponding matrix element in continuum QCD.Ideally,these calculations are done at various values of the lattice spacings,and the continuum estimate is obtained by extrapolating as a function of a to a→0.In practice, some lattice calculations are performed only at one or two values of a,in which case a continuum limit cannot be taken,and the discretization effects have to be included into the estimate of systematic errors.With NRQCD calculations,higher dimensional operators are included as discussed in Section3.1,and an a→0extrapolation cannot be done out of principle.Calculation at several values of a then serves to determine the systematic error from keeping the lattice spacingfinite.In full QCD,the path integral includes gauge and fermionicfields D U Dwith heavy quark Pauli spinorψand the HamiltonianH=− D2Mσ· B+ig s c28M2σ·( D× E− E× D)−c1( D2)2d quarks withL π=1f πτ π)transforms according to the (2,2)representation of SU (2)×SU (2).χ=2B 0M ,M is the quark mass matrix and B 0is proportional to the chiral condensate.The πare the pion fields,and the τi are Pauli matrices.f πis the pion decay constant.The nucleon Lagrangian at lowest order (O (p 1))isL (1)=2g A2[u †,∂µu ]and u µ=iu †∂µUu †.Ψis the Dirac spinor of the nucleon,m 0the nucleon mass in the chiral limit,and g A the nucleon axial vector coupling in the chiral limit.The O (p 2)Lagrangian is given byL(2)=c 1Tr(χ+)4m 20Tr(u µu ν)2Tr(u µu µ)4drr =r 0=1.65,which can be calculated on the lattice with high precision [35].The physical values cor-responding to potential models are around r 0=0.49−0.5fm.Unless noted otherwise we use r 0=0.5fm.For the string tension σ,usually experimental values of√a[GeV ]a -1/a -1(r 0)quenched Wilson gaugea[GeV -1]a -1/a -1(r 0)quenched RG gaugeFigure 2:Discrepancy of lattice spacings from various physical quantities on quenchedlattices.Results are from [35,37,39,41,42,44,45].Average over spin orientations is denoted by anoverbar.Figure 3:Comparison of lattice with experimental results from [40],using zero (left)and N f =2light +1strange flavors (right).a from the Υ′−Υmass splitting.In Fig.2we show examples for the discrepancy between lattice spacings from different physical quantities in the quenched approximation.With two flavors (N f =2),the agreement is improved:using Wilson gauge fields and two flavors of O (a )improved clover sea quarks,Ref.[36]quotes an agreement of scales from m ρ,f K and r 0=0.5fm.However,in the two flavor calculations of [37,38](Wilson gauge fields,staggered sea and clover valence quarks)and of [39,41](RG gauge fields and tadpole-improved clover sea and valence quarks)at a ∼0.5GeV −1,a ∼20%discrepancy between lattice spacings from χb −Υmass splittings and m ρremains.With two flavors of light and one flavor of strange dynamical quarks,using a 1-loop Symanzik O (a 2)improved gauge action and a tree-level tadpole O (a 2)improved staggered sea 
quark action Ref.[40]finds an agreement of a variety of physical quantities with experiment (see Fig.3).M [G e V ]M [G e V ]1.01.5m [G e V ]Figure 4:Unquenched light baryon spectrum from the lattice.Left and middle:results from [41].Right figure:from [42].Open symbols denote quenched,filled symbols unquenched results.K input:strange quark mass set by fixing the K meson mass to the physical value,φinput:strange quark mass set by fixing the φmeson mass.4.2The light baryon spectrum from the latticeIn quenched calculations it was found that the features of the experimental light hadron spectrum are described well by the lattice [46,47].It is of interest to study whether un-quenching improves the agreement.In Fig.4we plot the baryon spectrum from the recent unquenched simulations of [41,42].Discrepancies with experiment of ∼2σremain.A rea-son may be uncertainty in the chiral extrapolation.Ref.[42]assigns additional systematic errors of up to 25MeV from the chiral extrapolation uncertainty and the determination of r 0.In Table 1we give results for light baryon mass splittings corresponding to theΛ−N [MeV]Σ∗−Σ[MeV]N f =0m ρ346(41)247(26)(07)[42]116(33)(260)252(30)(010)N f =2m ρ358(68)279(56)(20)[42]128(26)(160)248(27)(05)N f =2light +1strangeχb −Υ293(54)[71]155180(15)Experiment 294215Table 1:Light baryon mass splittings.The first error is statistical,the second is the difference from fixing the strange quark mass using the K or φmeson where applicable.The quantity used to fix the lattice scale is indicated.results shown in Fig.4.For the splittings,the agreement with experiment is at the 1−2σlevel.A recent calculation [43]with N f =2light +1strange finds a ∆−N splitting whichagrees well with experiment.4.3Nucleon mass:chiral extrapolation and finite size effectsm π [GeV]0.70.91.11.31.51.7M N [G e V ]m 2[GeV 2]m N [G e V ]Figure 5:Chiral extrapolation of nucleon masses on large lattices in two-flavor QCD using non-relativistic (left:from Ref.[48])and relativistic (right:from Ref.[50])χP T at O (p 4).The dot-dashed line on the left shows the non-relativistic O (p 3)result.HBχP T predicts at O (p 3)a correction ∼m 3πto the quadratic dependence on m π,but with a coefficient which is very different from the value found from fits to the lattice data.In Ref.[48],a good description of lattice data up to pion masses ∼600MeV could be achieved using the non-relativistic formalism at O (p 4)(see Fig.5on the left).With relativistic χP T at O (p 4)[49],the agreement with the lattice data is also good up to rather large pion masses,as shown in Fig.5on the right [50].Having ensured that relativistic χP T O (p 4)indeed describes the nucleon mass on very large lattices,it should be possible to calculate the finite size effects on lattices which are not too small within in this formalism.Calculating the difference of the nucleon self-energy in a spatially finite and infinite volume within χP T at O (p 4)[50],assuming an infinite temporal extent of the lattice,one finds a good agreement with the finite size behavior of the lattice results.An example for a pion mass around 550MeV is given in Fig.6.The non-relativistic formalism at O (p 3)predicts finite size effects which are clearly smaller than the finite size effects of the lattice data [51].4.4The spectrum of hadrons with a b quarkA heavy quark with infinite mass can be regarded as a color source which is static in the rest frame of the hadron and whose spin is not relevant to the interactions.Corrections due to the finiteness of the heavy quark mass can be 
included in a 1/M expansion.Within the Heavy Quark Effective Theory (HQET),the mass of a heavy-light hadron H can beL [fm]m N [G e V ]Figure 6:Volume dependence of the nucleon mass relativistic χP T compared with N f =2lattice results,from Ref.[50].Solid line:O (p 4)result,dashed line:only O (p 3)terms.thought of as consisting of the following contributions:M H =M Q +2M QH |2M H +H |2M H+O (1/M 2Q ),(17)where Q is the heavy quark spinor,M Q the heavy quark mass,σ=427MeV.Ref.[53]and [54]uses r 0with physical values of 0.5and 0.525fm respectively,and [60]uses r 0=0.5fm and quarkonia at and around the charm [61].All other calculations from the set discussed here use m ρ.The NRQCD action has errors O (αs ΛQCD /M )from corrections to the spin-magnetic coefficient.Errors on spin splittings are treated as being dominated by an error on the spin-magnetic coefficient of O (αs )∼20−30%.The systematical error of each result is divided into a part common to all calcula-tions,which is taken to be of the order of the error of the calculation with the smallest uncertainty,and and a rest which is treated as independent.The error on the average is rescaled by r=∆M [M e V ]B *-B, lattice∆M [M e V ]B *-B, models∆M [M e V ]B *-B, latticeB * -B , modelsFigure 7:Comparison of splittings between P wave and the ground state of B s mesonsfrom Refs.[44,53,56,58,60](lattice)and Refs.[67,69,70](models).The lines denote theexperimental value of the narrow B ∗sJ (5850)resonance believed to come from orbitally excited B s mesons [62].excited B s states.A comparison of the lattice results with experiment and with model calculations is given in Fig.7.The sign of the B ∗2−B ∗0mass difference is disputed among potential model calculations (e.g.[67–70]).Individual lattice calculations [44,58]find a splitting around zero and arewithin errors compatible with a small negative splitting,but the lattice average for B ∗2−B ∗0is positive.4.4.3b baryonsBaryons with one b quark can be thought of as two light quarks coupling to form a spin zero or spin one diquark.The state with a spin zero diquark is the Λb .If the diquark has spin one,the heavy quark can couple to a spin 1/2state,the Σb ,and a spin 3/2state,the Σb ∗.If the light quarks in the Σb and the Σb ∗are substituted by strange quarks one obtains the Ωb and the Ωb ∗.In Table 4we summarize results for the spin-independent splittings Λb −Σb −Λb with M (B )=[3M (B ∗)+M (B )]/4,and the Σb ∗−Σb hyperfine splitting.The values quoted for [55]are obtained by interpolating the heavy-∆M [M e V ]quenched Λ-BN = 2 Λ-B∆M [M e V ]Λ-B, modelsFigure 8:Λb −B and the Λc −Σb −Λb splitting.Ref.[72]gives a rela-tion between the heavy-light and light baryon splittings,∆M∆M [M e V ]Σ-Λ, latticeΣ-Λ, models∆M [M e V ]Σ*-Σ, lattice Σ*-Σ, modelsFigure 9:Above:Σc −Λc splitting.Below:Σb ∗−Σbttice data from Refs.[52,55,56,59],model results from Refs.[71,72].4.5f BIn Table 5we summarize lattice results for f B and f B s since 1998.The first error given in the Table is statistical,and the second is the systematical error given by the authors added in quadrature.First we address unquenching effects on f B .They depend on how the scale is set.In Table 6we compare ratios of decay constants from quenched and two-flavor simulations with the same gauge field and valence quark actions and find an increase of ∼10%if a is set with f π,10−20%if a is set with m ρand no increase with Υand (for f D s )with r 0.We calculate weighted averages for quenched,N f =2and N f =2+1results of f B and f B s .Since the methods for 
Since the methods for error estimation can vary considerably between different collaborations, even if similar lattice actions and parameters are used, we make new assignments motivated by the error analysis of the authors themselves. We assign common systematic errors to the calculations with RG gauge fields using NRQCD [73] and using the FNAL heavy quark action as a non-relativistic effective field theory without taking the continuum limit [74], and to the calculations using Wilson gauge fields and NRQCD (the quenched ones are [64,75,77]). For quenched configurations with clover light quarks, we assign systematic errors of 20 and 22 MeV respectively for f_B and f_Bs. Ref. [60] uses NRQCD with an O(a^2) tadpole-improved gauge action and staggered light valence quarks, and we use their own systematic error assignment.

The quenched calculations of Refs. [45,78-80] use Wilson gauge fields and clover quarks at a = 0.35-0.37 GeV^-1 (β = 6.2), simulated at the charm quark mass. Ref. [79] uses tree-level tadpole-improved clover quarks without including O(α_s × a) terms in the renormalization. Ref. [78] uses a non-perturbatively O(a)-improved clover quark action and a partly non-perturbative current renormalization. Refs. [45,80] use non-perturbative O(a) improvement except for a perturbative value of the O(α_s a m_q) quark mass correction to the renormalization constant. Although different degrees of improvement are used, and the scaling behaviour is found to be different, the results for f_Ds agree at β = 6.2. We therefore assign a common systematic error to these results. Following the estimate of the discretization error given in [79] (8%) and the 1/M extrapolation error of ∼9% given in [45], we use 23 MeV for f_B and 26 MeV for f_Bs. Refs. [76,81,82] use heavy quarks in the FNAL formalism and extrapolate their results to a → 0. Refs. [83,84] use a step scaling method with the Schrödinger functional and clover heavy quarks. Part of the renormalization factors is calculated non-perturbatively. Their results are continuum extrapolated. Ref. [85] uses an interpolation between static and clover charm quarks which are non-perturbatively improved using the Schrödinger functional and continuum extrapolated. The second error on the quenched results includes the ambiguity between setting the scale from m_ρ and from Υ level splittings, obtained by varying the result by +30% if the scale is taken from m_ρ. The ratio f_Bs √M_Bs / (f_B √M_B) is expected to agree with f_Ds √M_Ds / (f_D √M_D) up to 1/M_Q corrections, as is supported by the results of a two-flavor calculation using FNAL heavy quarks [88]. As argued by Ref. [87] using χPT, the chiral extrapolation uncertainty of the ratio f_Bs √M_Bs / (f_B √M_B) × f_π/f_K should also be small. Employing unquenched staggered (N_f = 2+1) MILC gauge field ensembles at a ≃ 0.13 fm, the NRQCD estimate of [86] is

f_Bs = 260(7)(28) MeV,   (18)

where the systematic errors are added in quadrature. Calculations of the decay constants on the staggered N_f = 2+1 configurations using NRQCD [89] and FNAL [90] heavy quarks are in progress. Within the statistical and systematic errors quoted in Table 5, the results with N_f = 0, 2 and 2+1 agree among each other. We relate this to the experimental value for f_Ds, using unquenched lattice results for the ratio f_Bs/f_Ds from two-flavor calculations which work directly at the b and c quark masses without using extrapolations. Taking the experimental value f_Ds = 283(45) MeV (Eq. (5)) and the range of values for the ratio f_Bs/f_Ds from Table 6, one obtains f_Bs = 230-260 MeV. Other recent review articles [92-96] quote lattice estimates for f_B and f_Bs which are within errors in agreement with the averages quoted in Table 5.
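The f_Bs estimate from the ratio method quoted above is simple arithmetic; the short sketch below reproduces it. The ratio range is a stand-in chosen only so that the product lands in the quoted 230-260 MeV window, since Table 6 itself is not reproduced here.

```python
# Illustrative only: f_Ds from experiment (Eq. (5)); the ratio range is assumed.
f_Ds, f_Ds_err = 283.0, 45.0        # MeV
ratio_lo, ratio_hi = 0.82, 0.92     # assumed span for f_Bs / f_Ds from two-flavor lattice data

f_Bs_lo = f_Ds * ratio_lo           # ~232 MeV
f_Bs_hi = f_Ds * ratio_hi           # ~260 MeV
print(f"f_Bs in [{f_Bs_lo:.0f}, {f_Bs_hi:.0f}] MeV "
      f"(the ~{100 * f_Ds_err / f_Ds:.0f}% experimental error on f_Ds is not yet folded in)")
```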
In Table 5 we compare the lattice results with recent sum rule [97-100] and potential model [101] calculations, and we find that they are in agreement within errors.

5 Conclusions

Applications of non-relativistic QCD and chiral perturbation theory in lattice calculations are presented. The status of lattice results on the light and heavy-light hadron spectrum and on the decay constants f_B and f_Bs is summarized, and weighted lattice averages for b hadron mass splittings and decay constants are calculated. The agreement of the hadron spectrum with experiment is a major success of lattice QCD in general, and of non-relativistic methods for heavy quarks in particular, and supports the reliability of lattice predictions of hadronic matrix elements. The lattice has become instrumental in QCD calculations. Work on further understanding and reduction of lattice errors is in progress and will enable very precise checks.

Acknowledgements

I thank D. Ebert, R. N. Faustov, V. O. Galkin, A. Schäfer and G. Schierholz for discussions and comments on the manuscript, and C. Bernard, V. M. Braun, J. Koponen, L. Lellouch and C. Michael for discussions. I thank D. Toussaint for the numbers for the unquenched light baryons from MILC ([43]). I am grateful for a personal fellowship of the Deutsche Forschungsgemeinschaft. I would like to thank the group "Theory of Elementary Particles/Phenomenology - Lattice Gauge Theories" of the Humboldt University Berlin and NIC/DESY Zeuthen for their kind hospitality. The numerical computations relevant to my publications were performed at the computer centers EPCC (Edinburgh), LANL (Los Alamos), LRZ (München), NIC (Jülich), NIC (Zeuthen), NCSA (Urbana-Champaign), RCCP (Tsukuba) and SCRI (Tallahassee). I thank all institutions for their support.

References

[1] Wilczek, F. 2003, summary talk at ICHEP 2002, 24-31 July 2002, Amsterdam, Nucl. Phys. Proc. Suppl. 117, 410.
[2] Wilczek, F. 1999, invited theoretical summary talk at the 18th International Conference on Neutrino Physics and Astrophysics (NEUTRINO 98), Takayama, Japan, 4-9 June 1998, Nucl. Phys. (Proc. Suppl.) 77, 511.
[3] Ligeti, Z. 2004, plenary talk at ICHEP 2004, 16-22 August 2004, Beijing, China, to appear in the proceedings, hep-ph/0408267.
[4] Höcker, A. 2004, talk at ICHEP 2004, 16-22 August 2004, Beijing, China, to appear in the proceedings, hep-ph/0410081.
[5] Djouadi, A. 2005, hep-ph/0503172; Djouadi, A. 2005, hep-ph/0503173.
[6] Ali, A. 2003, lectures at the International Meeting on Fundamental Physics, Soto de Cangas (Asturias), Spain, February 23-28, 2003, to appear in the proceedings, hep-ph/0312303; Ligeti, Z. 2004, plenary talk at ICHEP 2004, 16-22 August 2004, Beijing, China, to appear in the proceedings, hep-ph/0408267.
[7] Gupta, R. 2003, summary talk at the 2nd Workshop on the CKM Unitarity Triangle, 5-9 April 2003, IPPP Durham, UK, hep-ph/0311033.
[8] Sharpe, S. 1995, lectures given at the Theoretical Advanced Study Institute in Elementary Particle Physics (TASI 94), Boulder, CO, 29 May - 24 June 1994, in: CP Violation and the Limits of the Standard Model, J. F. Donoghue (Ed.), World Scientific, Singapore; Davies, C. T. H. 2001, lectures at the 55th Scottish Universities Summer School, St Andrews, August 2001, in: Heavy Flavour Physics, C. T. H. Davies and S. M. Playfer (Eds.), Scottish Graduate Textbook Series, Institute of Physics 2002; Kronfeld, A. S. 2002, in: At the Frontiers of Physics: Handbook of QCD, Vol. 4, M. Shifman (Ed.), World Scientific, Singapore, hep-lat/0205021; McNeile, C. 2003, submitted to Int. Rev. Nucl. Phys., Vol. 9, Hadronic Physics from Lattice QCD, A. M. Green (Ed.), hep-lat/0307027; DeGrand, T. 2004, Int. J. Mod. Phys. A19, 1337.
[9] Isgur, N. and Wise, M. B. 1992, invited talk at Hadron 91, College
Park, MD, August 12-16, 1991, Adv. Ser. Direct. High Energy Phys. 10, 549.
[10] Isgur, N. and Wise, M. B. 1992, in: Heavy Flavors, A. J. Buras and M. Lindner (Eds.), World Scientific, Singapore, 234; Neubert, M. 1994, Phys. Rep. 245, 259; Grinstein, B. 1995, hep-ph/9508227; Manohar, A. and Wise, M. B. 2000, Heavy Quark Physics, Cambridge University Press, Cambridge.
[11] Thacker, B. A. and Lepage, G. P. 1991, Phys. Rev. D43, 196.
[12] Lepage, G. P. et al. 1992, Phys. Rev. D46, 4052.
[13] Chadha, M. et al. 1998, CLEO Collaboration, Phys. Rev. D58, 032002.
[14] Heister, A. et al. 2002, ALEPH Collaboration, Phys. Lett. B528, 1.
[15] Symanzik, K. 1980, in: Recent Developments in Gauge Theories, G. 't Hooft et al. (Eds.), Plenum, New York, 313; Symanzik, K. 1982, in: Mathematical Problems in Theoretical Physics, R. Schrader et al. (Eds.), Springer, New York; Symanzik, K. 1983, Nucl. Phys. B226, 187 and 205.
[16] Lüscher, M. and Weisz, P. 1985, Commun. Math. Phys. 97, 59; Erratum ibid. 98, 433; Lüscher, M. and Weisz, P. 1985, Phys. Lett. B158, 250.
[17] Iwasaki, Y. 1983, Univ. of Tsukuba Report No. UTHEP-118; Iwasaki, Y. 1985, Nucl. Phys. B258, 141.
[18] Hasenfratz, P. and Niedermayer, F. 1994, Nucl. Phys. B414, 785.
[19] Sheikholeslami, B. and Wohlert, R. 1985, Nucl. Phys. B259, 572.
[20] Lüscher, M. et al. 1997, ALPHA Collaboration, Nucl. Phys. B491, 323; Jansen, K. et al. 1998, ALPHA Collaboration, Nucl. Phys. B530, 185, and 2002, erratum ibid. B643, 517.
[21] Kaplan, D. 1992, Phys. Lett. B288, 342; Shamir, Y. 1993, Nucl. Phys. B406, 90; Furman, V. and Shamir, Y. 1995, Nucl. Phys. B439, 54.
[22] Neuberger, H. 1998, Phys. Rev. D57, 5417.
[23] Hasenfratz, P. 1998, Nucl. Phys. (Proc. Suppl.) 63, 53; Hasenfratz, P., Laliena, V. and Niedermayer, F. 1998, Phys. Lett. B427, 125.
[24] Heitger, J. and Sommer, R. 2004, JHEP 0402, 022.
[25] El-Khadra, A. X., Kronfeld, A. S. and Mackenzie, P. B. 1997, Phys. Rev. D55, 3933; Oktay, M. et al. 2004, Nucl. Phys. (Proc. Suppl.) 129, 349.
[26] Aoki, S., Kuramashi, Y. and Tominaga, S. 2002, Nucl. Phys. (Proc. Suppl.) 106, 349; Aoki, S., Kuramashi, Y. and Tominaga, S. 2003, Prog. Theor. Phys. 109, 383.
[27] Bernard, C. 2001, plenary talk at Lattice 2000, Nucl. Phys. (Proc. Suppl.) 94, 159.
[28] Kronfeld, A. S. 2004, Nucl. Phys. (Proc. Suppl.) 129, 46.
[29] Gasser, J. and Leutwyler, H. 1984, Ann. Phys. 158, 142; Gasser, J. and Leutwyler, H. 1985, Nucl. Phys. B250, 465; Weinberg, S. 1996, The Quantum Theory of Fields, Vol. II, Cambridge University Press, Cambridge.
[30] Jenkins, E. and Manohar, A. V. 1991, Phys. Lett. B255, 558; Bernard, V., Kaiser, N. and Meißner, U.-G. 1995, Int. J. Mod. Phys. E4, 193.
Tikhonov regularization
Tikhonov regularization is the most commonly used method of regularization of ill-posed problems and is named for Andrey Tikhonov. In statistics, the method is also known as ridge regression. It is related to the Levenberg-Marquardt algorithm for non-linear least-squares problems.

The standard approach to solve an overdetermined system of linear equations given as

Ax = b

is known as linear least squares and seeks to minimize the residual

||Ax − b||^2,

where ||·|| is the Euclidean norm. However, the matrix A may be ill-conditioned or singular, yielding a non-unique solution. In order to give preference to a particular solution with desirable properties, a regularization term is included in this minimization:

||Ax − b||^2 + ||Γx||^2

for some suitably chosen Tikhonov matrix Γ. In many cases, this matrix is chosen as the identity matrix Γ = I, giving preference to solutions with smaller norms. In other cases, high-pass operators (e.g., a difference operator or a weighted Fourier operator) may be used to enforce smoothness if the underlying vector is believed to be mostly continuous. This regularization improves the conditioning of the problem, thus enabling a numerical solution. An explicit solution, denoted by x̂, is given by

x̂ = (A^T A + Γ^T Γ)^{-1} A^T b.

The effect of regularization may be varied via the scale of the matrix Γ. For Γ = αI, when α = 0 this reduces to the unregularized least squares solution, provided that (A^T A)^{-1} exists.
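To make the recipe concrete, here is a minimal numerical sketch in Python/NumPy of the solution x̂ = (A^T A + Γ^T Γ)^{-1} A^T b with Γ = αI; the matrix, data, α value and function name are illustrative placeholders, not taken from the article.

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Tikhonov/ridge solution with Gamma = alpha * I, i.e. minimize ||Ax-b||^2 + alpha^2 ||x||^2."""
    n = A.shape[1]
    # Solve the regularized normal equations instead of forming an explicit inverse.
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ b)

# Illustrative ill-conditioned problem (placeholder data).
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 20), 8, increasing=True)   # nearly collinear columns
b = A @ rng.normal(size=8) + 1e-3 * rng.normal(size=20)

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]    # unregularized least squares
x_reg = tikhonov_solve(A, b, alpha=1e-2)       # regularized solution

print(np.linalg.norm(x_ls), np.linalg.norm(x_reg))  # the regularized solution has the smaller norm
```

The SVD representation discussed below yields the same x̂ and makes explicit how α damps the contributions of the small singular values.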
Bayesian interpretation

Although at first the choice of the solution to this regularized problem may look artificial, and indeed the matrix Γ seems rather arbitrary, the process can be justified from a Bayesian point of view. Note that for an ill-posed problem one must necessarily introduce some additional assumptions in order to get a stable solution. Statistically, we might assume that we know a priori that x is a random variable with a multivariate normal distribution. For simplicity we take the mean to be zero and assume that each component is independent with standard deviation σ_x. Our data are also subject to errors, and we take the errors in b to be also independent with zero mean and standard deviation σ_b. Under these assumptions the Tikhonov-regularized solution is the most probable solution given the data and the a priori distribution of x, according to Bayes' theorem. The Tikhonov matrix is then Γ = αI for the Tikhonov factor α = σ_b/σ_x.

If the assumption of normality is replaced by assumptions of homoscedasticity and uncorrelatedness of the errors, and one still assumes zero mean, then the Gauss-Markov theorem entails that the solution is the minimal unbiased estimate.

Generalized Tikhonov regularization

For general multivariate normal distributions for x and the data error, one can apply a transformation of the variables to reduce to the case above. Equivalently, one can seek an x to minimize

||Ax − b||^2_P + ||x − x_0||^2_Q,

where we have used ||x||^2_P to stand for the weighted norm x^T P x (cf. the Mahalanobis distance). In the Bayesian interpretation, P is the inverse covariance matrix of b, x_0 is the expected value of x, and Q is the inverse covariance matrix of x. The Tikhonov matrix is then given as a factorization of the matrix Q = Γ^T Γ (e.g., the Cholesky factorization) and can be considered a whitening filter. This generalized problem can be solved explicitly using the formula

x* = x_0 + (A^T P A + Q)^{-1} A^T P (b − A x_0).

Regularization in Hilbert space

Typically, discrete linear ill-conditioned problems result from the discretization of integral equations, and one can formulate Tikhonov regularization in the original infinite-dimensional context. In the above we can interpret A as a compact operator on Hilbert spaces, and x and b as elements in the domain and range of A. The operator A*A + Γ^T Γ is then a bounded invertible operator.

Relation to singular value decomposition and Wiener filter

With Γ = αI, this least squares solution can be analyzed in a special way via the singular value decomposition. Given the singular value decomposition of A,

A = U Σ V^T,

with singular values σ_i, the Tikhonov-regularized solution can be expressed as

x̂ = V D U^T b,

where D has diagonal values

D_ii = σ_i / (σ_i^2 + α^2)

and is zero elsewhere. This demonstrates the effect of the Tikhonov parameter α on the condition number of the regularized problem. For the generalized case, a similar representation can be derived using a generalized singular value decomposition. Finally, it is related to the Wiener filter:

x̂ = Σ_{i=1}^{q} f_i (u_i^T b / σ_i) v_i,

where the Wiener weights are f_i = σ_i^2/(σ_i^2 + α^2) and q is the rank of A.

Determination of the Tikhonov factor

The optimal regularization parameter α is usually unknown and often in practical problems is determined by an ad hoc method. A possible approach relies on the Bayesian interpretation described above. Other approaches include the discrepancy principle, cross-validation, the L-curve method and restricted maximum likelihood. Grace Wahba proved that the optimal parameter, in the sense of leave-one-out cross-validation, minimizes

G = RSS / τ^2 = ||Xβ̂ − y||^2 / [Tr(I − X(X^T X + α^2 I)^{-1} X^T)]^2,

where RSS is the residual sum of squares and τ is the effective number of degrees of freedom. Using the previous SVD decomposition, we can simplify the above expression:

RSS = ||y − Σ_{i=1}^{q} (u_i' b) u_i||^2 + Σ_{i=1}^{q} (α^2/(σ_i^2 + α^2))^2 (u_i' b)^2,

that is, RSS = RSS_0 + Σ_{i=1}^{q} (α^2/(σ_i^2 + α^2) u_i' b)^2, and

τ = m − q + Σ_{i=1}^{q} α^2/(σ_i^2 + α^2) = m − Σ_{i=1}^{q} σ_i^2/(σ_i^2 + α^2).

Relation to probabilistic formulation

The probabilistic formulation of an inverse problem introduces (when all uncertainties are Gaussian) a covariance matrix C_M representing the a priori uncertainties on the model parameters, and a covariance matrix C_D representing the uncertainties on the observed parameters (see, for instance, Tarantola, 2004). In the special case when these two matrices are diagonal and isotropic, C_M = σ_M^2 I and C_D = σ_D^2 I, the equations of inverse theory reduce to the equations above, with α = σ_D/σ_M.

History

Tikhonov regularization has been invented independently in many different contexts. It became widely known from its application to integral equations in the work of Andrey Tikhonov and D. L. Phillips. Some authors use the term Tikhonov-Phillips regularization. The finite-dimensional case was expounded by A. E. Hoerl, who took a statistical approach, and by M. Foster, who interpreted this method as a Wiener-Kolmogorov filter. Following Hoerl, it is known in the statistical literature as ridge regression.

References

• Tikhonov, A. N. (1943). "Об устойчивости обратных задач" [On the stability of inverse problems]. Doklady Akademii Nauk SSSR 39 (5): 195-198.
• Tikhonov, A. N. (1963). "О решении некорректно поставленных задач и методе регуляризации" [Solution of incorrectly formulated problems and the regularization method]. Doklady Akademii Nauk SSSR 151: 501-504. Translated in Soviet Mathematics 4: 1035-1038.
• Tikhonov, A. N. and Arsenin, V. Y. (1977). Solution of Ill-posed Problems. Washington: Winston & Sons.
• Hansen, P. C. (1998). Rank-Deficient and Discrete Ill-Posed Problems. SIAM.
• Hoerl, A. E. (1962). Application of ridge analysis to regression problems. Chemical Engineering Progress 58, 54-59.
• Foster, M. (1961). An application of the Wiener-Kolmogorov smoothing theory to matrix inversion. J. SIAM 9, 387-392.
• Phillips, D. L. (1962). A technique for the numerical solution of certain integral equations of the first kind. J. Assoc. Comput. Mach. 9, 84-97.
• Tarantola, A. (2004). Inverse Problem Theory. Society for Industrial and Applied Mathematics.
• Wahba, G. (1990). Spline Models for Observational Data. Society for Industrial and Applied Mathematics.
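As a numerical companion to the "Determination of the Tikhonov factor" section above, the sketch below (Python/NumPy; the test matrix, data and function name are illustrative placeholders, not part of the article) evaluates Wahba's criterion G(α) = RSS(α)/τ(α)^2, often referred to as generalized cross-validation, on a grid of α values using the SVD expressions quoted above, and picks the minimizing α.

```python
import numpy as np

def gcv_curve(A, y, alphas):
    """G(alpha) = RSS(alpha) / tau(alpha)^2, with RSS and the effective degrees of
    freedom evaluated through the SVD of A, as in the formulas above."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    m = A.shape[0]
    q = int(np.sum(s > s[0] * 1e-12))             # numerical rank of A
    uy = U.T @ y                                   # components u_i' y
    rss0 = float(y @ y) - float(uy[:q] @ uy[:q])   # residual outside the range of A
    G = []
    for a in alphas:
        w = a**2 / (s[:q]**2 + a**2)               # 1 - filter factor
        rss = rss0 + np.sum((w * uy[:q])**2)
        tau = m - np.sum(s[:q]**2 / (s[:q]**2 + a**2))
        G.append(rss / tau**2)
    return np.array(G)

# Placeholder problem for illustration only.
rng = np.random.default_rng(1)
A = np.vander(np.linspace(0.0, 1.0, 30), 10, increasing=True)
y = A @ rng.normal(size=10) + 1e-2 * rng.normal(size=30)

alphas = np.logspace(-6, 1, 80)
alpha_best = alphas[np.argmin(gcv_curve(A, y, alphas))]
print("GCV-selected alpha:", alpha_best)
```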
Quantum mechanics, pp. 103-174
In Chap. 2, we assumed that the Lagrangian density L(ψ (x), ∂µ ψ (x)) is nonsingular. In this chapter, we discuss the path integral quantization of gauge field theory. Gauge field theory is singular in the sense that the kernel of the quadratic part of the gauge field Lagrangian density is a fourdimensionally transverse projection operator, which is not invertible without the gauge fixing term. The most familiar example of a gauge field is the electrodynamics of J.C. Maxwell. The electromagnetic field has a well-known invariance property under the gauge transformation (gauge invariance). The charged matter field interacting with the electromagnetic field has a property known as the charge conservation law. The original purpose of the introduction of the notion of gauge invariance (originally called Eichinvarianz) by H. Weyl was a unified description of the gauge invariance of the electromagnetic field and the scale invariance of the gravitational field. Weyl failed to accomplish his goal. After the birth of quantum mechanics, however, Weyl reconsidered gauge invariance and discovered that the gauge invariance of the electromagnetic field is not related to the scale invariance of the gravitational field, but is related to the local phase transformation of the matter field and that the interacting system of the electromagnetic field and the matter field is invariant under the gauge transformation of the electromagnetic field plus the local phase transformation of the matter field. The invariance of the matter field Lagrangian density under a global phase transformation results in the charge conservation law, according to Noether’s theorem. Weyl’s gauge principle declares that the extension of the global invariance (invariance under the space-time independent phase transformation) of the matter field Lagrangian density Lmatter (ψ (x), ∂µ ψ (x)) to the local invariance (invariance under the space-time dependent phase transformation) of the matter field Lagrangian density under the continuous symmetry group G necessitates the introduction of the gauge field and the replacement of the derivative ∂µ ψ (x) with the covariant derivative Dµ ψ (x) in the matter field Lagrangian density,
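The displayed formula announced here is not reproduced in this extract. For the abelian (Maxwell) case just described, the standard textbook form of Weyl's prescription is quoted below as a reminder (with e the charge of ψ and ε(x) the local phase parameter; sign conventions for the coupling vary):

D_μ ψ(x) = ( ∂_μ + i e A_μ(x) ) ψ(x),    ψ(x) → e^{i e ε(x)} ψ(x),    A_μ(x) → A_μ(x) − ∂_μ ε(x),

so that D_μψ transforms with the same local phase factor as ψ itself, and the locally invariant matter field Lagrangian density is obtained as L_matter(ψ(x), D_μψ(x)).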
German physicists
German physicists have made significant contributions to the field of physics, shaping our understanding of the universe and the laws that govern it. From the early days of modern physics to the cutting-edge research of today, German scientists have been at the forefront of discovery and innovation.

One of the most renowned German physicists is Albert Einstein, who developed the theory of relativity, which revolutionized our understanding of space, time, and gravity. His famous equation, E=mc^2, demonstrated the equivalence of mass and energy, and it has become a fundamental principle in physics.

Another giant in the field was Max Planck, who is considered the father of quantum theory. Planck's work on black-body radiation led to the discovery of the quantization of energy, which laid the groundwork for the development of quantum mechanics.

In the realm of nuclear physics, Otto Hahn and Fritz Strassmann were pioneers in the discovery of nuclear fission, a process that would later be harnessed to produce nuclear energy and atomic weapons. Their work was built upon by physicists like Werner Heisenberg and Erwin Schrödinger, who made significant contributions to the development of quantum mechanics.

Heisenberg, in particular, formulated the uncertainty principle, which states that it is impossible to simultaneously measure the exact position and momentum of a particle. Schrödinger, on the other hand, is famous for his wave equation, which describes how the quantum state of a physical system changes over time.

The work of these German physicists has not only expanded our knowledge of the physical world but also led to practical applications that have transformed society. From the development of medical imaging technologies like MRI and PET scans to the advancement of computing and materials science, their discoveries have had far-reaching impacts.

Today, German physicists continue to be leaders in research, exploring the mysteries of dark matter, the nature of dark energy, and the potential for quantum computing. Their legacy is a testament to the enduring spirit of inquiry and the relentless pursuit of knowledge that characterizes the scientific community.
Infinite chain of N different deltas: A simple model for a quantum wire
José M. Cerveró∗ and Alberto Rodríguez
PACS Numbers: 03.65.-w: Quantum Mechanics 71.23.An: Theories and Models; Localized States 73.21.Hb: Quantum Wires
∗ cervero@al.es. Author to whom all correspondence should be addressed.
3 model. We close with a Section of Conclusions.
1 Periodic Array
Let us consider an electron in a periodic one-dimensional chain of atoms modelled by a potential consisting of an array of N delta functions, each one with its own coupling e^2_i (i = 1, 2, ..., N). After finishing the N-array, the structure repeats itself an infinite number of times. The number of species N can be arbitrarily large but finite. The case N = 1 is an old textbook exercise, but it may be worth revisiting [19] in order to take full advantage of our general results. The generalization can then be followed in a more straightforward manner. The relevant primitive cell for N = 2 can be represented by the following set of wavefunctions:
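The explicit cell wavefunctions announced above are not reproduced in this extract. As a generic reminder of their structure (a plane-wave ansatz; the notation here is illustrative and not necessarily the paper's), and of the N = 1 textbook case referred to, one can write for a cell containing two deltas separated by a distance a

ψ_1(x) = A_1 e^{iqx} + B_1 e^{-iqx}   (between the first and the second delta),
ψ_2(x) = A_2 e^{iqx} + B_2 e^{-iqx}   (between the second delta and the end of the cell),

with E = ħ^2 q^2/(2m) and the Bloch condition ψ(x + L) = e^{ikL} ψ(x) over the cell length L. For N = 1 (a Dirac comb of spacing a and coupling e^2), matching across a single delta yields the familiar band equation

cos(ka) = cos(qa) + (m e^2 / (ħ^2 q)) sin(qa),

and the allowed energy bands are the values of q for which the right-hand side lies between -1 and 1.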
Permanent address: Department of Theoretical Physics, St. Petersburg University, 198904 St. Petersburg, Russia
the canonical BVF formalism (Ref. 7), produces the determinant F = (det ∂_0)^2 det(∂_0 + X^+ U(X)) (Ref. 5). The generating functional for the Green functions is
L^{(1)} = √(−g) [ −X R/2 − U(X) (∇X)^2/2 + V(X) ].   (1)
A common feature of all these studies is that due to the particular structure of the theory the constraints can be solved exactly, yielding a finite dimensional phase space. This remarkable property raises hope that one will be able to get insight into the information paradox. However, in the presence of an additional matter field again an infinite number of modes must be quantized.
The result reproduces exactly the classical action (2) in our temporal gauge, up to surface terms. As expected, this clearly re-demonstrates the absence of (local) quantum effects.
Z = −i ln W = ∫ [ J X + J^- (1/∂_0) j^+ + J^+ (∂_0 + U(X))^{-1} ( (1/∂_0) j^+ j^- − V(X) ) ],   (8)
where X has to be replaced by X = ∂_0^{-2} j^+ + ∂_0^{-1} j. Note that the determinant F is precisely canceled by these last three integrations. Eq. (8) gives the exact non-perturbative generating functional for connected Green functions.
‡ The case of non-minimal coupling has been treated in Ref. 8.
∫ [ j ω_1 + j^+ e^-_1 + j^- e^+_1 + L_P ]   (9)

δ(X) = δ( ∂_0 ω_1 + J − α e^+_1 )   (10)
δ(X^+) = δ( ω_1 + J^- + ∂_0 e^-_1 )   (11)
δ(X^-) = δ( ∂_0 e^+_1 + J^+ )   (12)
1 Exact Path Integral Quantization – Matterless Case
Let our starting point be the first order action (X+, X− and X are auxiliary fields)
L^{(2)} = X^+ De^- + X^- De^+ + X dω + ε ( V(X) + X^+ X^- U(X) ),   (2)
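The torsion, curvature and volume two-forms appearing in (2) are only named later in the text. In the light-cone component notation that is standard for this class of first-order models (assumed here, since the paper's own definitions are not reproduced in this extract), they read

De^± = de^± ± ω ∧ e^± ,    ε = e^+ ∧ e^- ,

with dω the curvature two-form of the (abelian) two-dimensional Lorentz connection ω.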
We demonstrate that in the absence of ‘matter’ fields to all orders of perturbation theory and for all 2D dilaton theories the quantum effective action coincides with the classical one. This resolves the apparent contradiction between the well established results of Dirac quantization and perturbative (path-integral) approaches which seemed to yield non-trivial quantum corrections. For the Jackiw–Teitelboim (JT) model, our result is even extended to the situation when a matter field is present.
We must stress that, due to the partial integrations in which we discarded the surface terms, the following results are true only locally. A properly regularized definition of Green functions like ∂_0^{-1}, ∂_0^{-2} is presented in Ref. 6. The remaining integrations can be performed most conveniently in the order X^+, X^- and X to yield Eq. (8).
The remaining integrations can be done with the use of (10)-(12), during which the term F gets cancelled again. As a final result we arrive at Z = −i ln W = ∫ [ j ω_1 + j^+ e^-_1 + j^- e^+_1 + L_P ], where the zweibein and connection have to be expressed as the solutions of (10)-(12). This gives the exact non-perturbative generating functional for connected Green functions, even in the presence of matter. In the absence of external matter sources, the quantum JT model with matter is therefore locally equivalent to the classical JT model with the Polyakov term, i.e. the 'semiclassical' approximation becomes exact.
integration of the gravitational and the matter action. Beginning with the X, X^+ and X^- integration we arrive at W = ∫ (De^+_1)(De^-_1)(Dω_1) δ(X) δ(X^+) δ(X^-) F exp i ∫ [ j ω_1 + j^+ e^-_1 + j^- e^+_1 + L_P ], cf. Eq. (9), with the arguments of the delta functionals given in (10)-(12).
W = ∫ (DX)(DX^+)(DX^-)(De^+_1)(De^-_1)(Dω_1) F exp i ∫_x ( L^{(2)} + L_s ),   (3)
where L_s denotes the contribution of the sources (j^±, j, J^±, J) corresponding to the fields (e^∓_1, ω_1, X^∓, X). Integrating over the zweibein components and the spin connection results in a set of delta functionals.
2 Exact Path Integral Quantization – JT-Model with Matter
Coupling a scalar field minimally‡ to the gravitational action leads to the well-known Polyakov action L_P = √(−g) R □^{-1} R. Clearly L_P is not linear in the
In recent years, stimulated by the 'dilaton black hole' (Ref. 1), numerous studies of quantized gravity in d = 2 were performed (Ref. 2) using the second-order formalism
where Dea, dω and ǫ are the torsion, curvature and volume two form, respectively. The quantum equivalence of (2) to the second order form (1) was demonstrated in 5. We will be working in a ‘temporal’ gauge e+0 = ω0 = 0, e−0 = 1 which, by applying