On quantization of affine Jacobi varieties of spectral curves


An Evaluation and Defense of Cohen's Non-Pascalian Inductive Probability


The non-Pascalian probability logic established by Jonathan Cohen is an important branch of the local-justification approach in the domains where mathematical probability logic fails. Cohen's non-Pascalian probability system differs essentially from the classical probability system, and Cohen claims that his logical system is not only sound but also has a wide range of applications, above all where mathematical probability logic breaks down. Even so, quite a few scholars have questioned the soundness of this logical system, with respect both to its applicability and to the internal consistency of the theory itself. This paper examines these questions and offers an appropriate defense of Cohen's non-Pascalian logical system.
This means that if each sub-contention has been shown to be supported by a preponderance of the evidence, then the conjunction of these sub-contentions must also be so supported. Conversely, if the sub-contention with the lowest degree of confirmation does not meet the preponderance standard, their conjunction is not preponderant either. Cohen holds that a non-diminishing rule for the probability of conjunctions arises because, as new evidential factors accumulate, we revise the generalizations that carry lawlike force — that is, we use ever more refined generalizations. So, for example, if a case requires establishing four contentions A, B, C, and D in order to succeed, but one P(H_i) is far greater than another P(H_j), then on the Pascalian account one obtains P(H_1 ∧ … ∧ H_N) < P(H_N).
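Cohen's contrast with the Pascalian calculus is easy to state concretely. A minimal sketch (my own illustration, with made-up support values; taking the minimum of the conjuncts is the usual reading of Cohen's Baconian conjunction rule, and the product rule assumes independence):

```python
from math import prod

# Illustrative (hypothetical) degrees of support for four sub-contentions.
claims = {"A": 0.9, "B": 0.85, "C": 0.8, "D": 0.95}

# Pascalian rule (assuming independence): conjunction shrinks multiplicatively,
# so it can fall below a preponderance threshold even if every conjunct is above it.
pascalian = prod(claims.values())

# Baconian (Cohen-style) rule: a conjunction is exactly as well supported
# as its weakest conjunct, so support does not dilute under conjunction.
baconian = min(claims.values())

print(f"Pascalian conjunction: {pascalian:.3f}")  # 0.581 -- below every conjunct
print(f"Baconian conjunction : {baconian:.3f}")   # 0.800 -- the weakest conjunct
```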

Quantization of Gauge Field Theories on the Front-Form without Gauge Constraints I: The Abelian Case


arXiv:hep-th/9311017v1 3 Nov 1993
SLAC-PUB-6392
September 1993
T/E

Quantization of Gauge Field Theories on the Front-Form without Gauge Constraints I: The Abelian Case⋆

Ovid C. Jacob†
Stanford Linear Accelerator Center, Stanford University, Stanford, California 94309

ABSTRACT

Recently, we have proposed a new front-form quantization which treated both the x+ and the x− coordinates as front-form 'times.' This quantization was found to preserve parity explicitly. In this paper we extend this construction to local Abelian gauge fields. We quantize this theory using a method proposed originally by Faddeev and Jackiw. We emphasize here the feature that, quantizing along both x+ and x−, gauge theories do not require extra constraints (also known as 'gauge conditions') to determine the solution uniquely.

Submitted to Phys. Rev. D

1. Introduction

Front-form quantization is usually done by quantization along the front x+ = const. Usually this is done by quantizing a system with constraints [1][2][3][4]. In previous papers [5][6][7] we introduced a quantization which treated x+ and x− on an equal footing [8]. The main argument given was that this new approach is manifestly parity invariant. We also pointed out that this new approach has the same number of degrees of freedom as the equal-time approach.

We'd like to point out that in some work involving initial value problems in gravity using front-form coordinates [9][10][11][12][13], the initial data for these coordinates is also specified along both x+ = const and x− = const surfaces, as well as at x+ = x− = 0. R. Penrose [11] points out that in this approach there are no constraints. In this paper we study how this approach could bypass much of the difficulty coming from the presence of constraints in the case of a local Abelian (U(1)) gauge symmetry. (A future paper will look at the non-Abelian (SU(N)) case.) The point is as follows: in the usual gauge theory quantization, the gauge condition is a relation (constraint) between the quantizing degrees of freedom (the initial data). We show in this work that, using the two null hyperplanes, we don't need any constraints between initial data.

There are two points which we should mention. First, the use of the reduced phase space quantization of Faddeev and Jackiw [14] (see also [15]) allows us to get the commutation relations easily. Second, as they point out, if the two-form (which goes into defining the equations of motion) is invertible, then there are no constraints. This fact, coupled with Penrose's remark [11], seems to imply that using two null hyperplanes we always have an invertible two-form, hence never any constraints. Obviously this greatly facilitates the quantization procedure.

2. Reduced Phase Space Quantization of QED

We follow the reduced phase space quantization of Faddeev and Jackiw [14] to study QED:

    L = -(1/4) F_{μν} F^{μν} + ψ̄ (iγ^ν ∂_ν - m) ψ + L_I,    (2.1)

where L_I is the interaction part of the Lagrangean,

    L_I = -e ψ̄ γ^ν A_ν ψ.    (2.2)

The equations of motion are

    ∂_μ F^{μν} - e ψ̄ γ^ν ψ = 0,    (2.3)
    (iγ^μ ∂_μ - m - e γ^μ A_μ) ψ = 0.    (2.4)

To obtain the Hamiltonians for evolution along x+ and x−, we follow the approach of previous papers [5] and [6] and write L out explicitly in terms of the projected fermion fields (2.5), where ψ± = Λ± ψ, the Λ± are the front-form projection operators, and ∂± = 2 ∂/∂x∓. The corresponding conjugate momenta for x+-derivatives are

    π^i_A = ∂L/∂(∂_+ A_i) = (1/2) F^{+i},    (2.6)
    π_ψ(x) = (i/2) ψ†_+,    (2.7)
    π_{ψ†}(x) = (i/2) ψ_+.    (2.8)

For the momenta corresponding to x−-derivatives we get similar forms:

    ρ^i_A = ∂L/∂(∂_- A_i) = (1/2) F^{-i},    (2.9)
    ρ_ψ(x) = (i/2) ψ†_-,    (2.10)
    ρ_{ψ†}(x) = (i/2) ψ_-.    (2.11)

We then rewrite L d⁴x in first-order form (2.12); the remainder contains the pieces which give the 'constraints,' though these are not 'true constraints' [14], as they are consistent with the gauge-field equations of motion.
The Hamiltonians H and K are

    H = ∫ dx− d²x⊥ [ (1/2)(B−)² + 2e ψ†_+ A_+ ψ_+ + 2e ψ†_- A_- ψ_- + ψ†_+ γ⁰γ^i i∂_i ψ_- + (1/2)(∂_i ψ†_-) γ⁰γ^i ψ - e ψ†_- γ⁰γ^i A_i ψ_+ + (m/4) ψ†_+ γ^- ψ_+ ],    (2.13)

    K = ∫ dx+ d²x⊥ [ (1/2)(B+)² + 2e ψ†_- A_- ψ_- + 2e ψ†_+ A_+ ψ_+ + ψ†_- γ⁰γ^i i∂_i ψ_+ + (1/2)(∂_i ψ†_+) γ⁰γ^i ψ + e ψ†_+ γ⁰γ^i A_i ψ_- + (m/4) ψ†_- γ^+ ψ_- ],    (2.14)

where B− = B+ = (1/2) F_{12}. For the 'constraints' we get whatever is left over,

    M = -∂_i A_+ F^{+i} - ∂_i A_- F^{-i} - F^{+-} F^{+-} + 2e A_+ ψ†_+ ψ_+ + 2e A_- ψ†_- ψ_-.    (2.15)

We can rewrite this (up to total derivatives) as

    M = A_+ Cπ + A_- Cρ,    (2.16)

and the 'constraints' Cπ and Cρ are

    Cπ = -∂_- F^{-+} - ∂_i F^{i+} + 2e ψ†_+ ψ_+,    (2.17)
    Cρ = -∂_+ F^{+-} - ∂_i F^{i-} + 2e ψ†_- ψ_-.    (2.18)

We see that Cπ = Cρ = 0 identically by the classical equations of motion for the gauge fields, as in equation (2.3) for ν = +, −.

Let us write L d⁴x with the explicit momenta dependence (up to total derivatives, which we can discard [14], [16]), so as to make the resulting commutation relations clear:

    L d⁴x = (kinetic one-form terms in the momenta) − H dx+ − K dx− + A_+ Cπ d⁴x + A_- Cρ d⁴x.    (2.19)

We see now that we have two types of evolution. For evolution along x+, the first term in equation (2.19) gives the commutation relations along surfaces x+ = y+ according to the form

    [ξ_a, ξ_b] = Γ⁻¹_{ab},   a, b = 1, …, 8,    (2.20)

with

    ξ₁ = π¹_A, ξ₂ = π²_A, ξ₃ = π_ψ, ξ₄ = π_{ψ†}, ξ₅ = A₁, ξ₆ = A₂, ξ₇ = ψ_+, ξ₈ = ψ†_+    (2.21)

and

    Γ₁₅ = Γ₂₆ = Γ₃₇ = Γ₄₈ = 2 = −Γ₅₁ = −Γ₆₂ = −Γ₇₃ = −Γ₈₄,    (2.22)

all the other Γ's being 0. The second term in equation (2.19) gives the commutation relations along surfaces x− = y− according to the form

    [η_a, η_b] = Δ⁻¹_{ab},   a, b = 1, …, 8,    (2.23)

with

    η₁ = ρ¹_A, η₂ = ρ²_A, η₃ = ρ_ψ, η₄ = ρ_{ψ†}, η₅ = A₁, η₆ = A₂, η₇ = ψ_-, η₈ = ψ†_-    (2.24)

and

    Δ₁₅ = Δ₂₆ = Δ₃₇ = Δ₄₈ = 2 = −Δ₅₁ = −Δ₆₂ = −Δ₇₃ = −Δ₈₄,    (2.25)

all the other Δ's being 0. Going now to the quantum commutators, we get the following relations for fields at equal x+ = y+, the usual front-form 'time':

    i[A_i(x+, x−, x⊥), π^j_A(y+, y−, y⊥)]_{x+=y+} = (1/2) δ_i^j δ(x− − y−) δ²(x⊥ − y⊥),    (2.26–2.27)
    {ψ_+(x+, x−, x⊥), π_ψ(y+, y−, y⊥)}_{x+=y+} = −(i/2) Λ+ δ(x− − y−) δ²(x⊥ − y⊥),    (2.28–2.29)

with the analogous relations at equal x− = y− involving ρ^j_A, ρ_ψ, ρ_{ψ†} and Λ−, e.g.

    {ψ_-(x+, x−, x⊥), ρ_ψ(y+, y−, y⊥)}_{x−=y−} = +(i/2) Λ− δ(x+ − y+) δ²(x⊥ − y⊥).    (2.30–2.31)

Here the physical (quantized) degrees of freedom on x− = 0 are A_i, ψ_- and ψ†_-.
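The mechanics of reading brackets off an invertible two-form can be checked numerically. A toy sketch of my own (not from the paper; it reuses only the 8×8 index pattern of (2.21)–(2.22)):

```python
import numpy as np

# Faddeev-Jackiw reduction in zero dimensions: for a first-order Lagrangian
# L = (1/2) xi^T Gamma dxi/dt - H(xi) with Gamma invertible, the brackets
# are read off directly as [xi_a, xi_b] = (Gamma^{-1})_{ab} -- no constraint
# analysis is needed, which is exactly the point emphasized in the text.

n = 8
Gamma = np.zeros((n, n))
for a in range(4):            # conjugate pairs (1,5), (2,6), (3,7), (4,8)
    Gamma[a, a + 4] = 2.0     # Gamma_15 = Gamma_26 = Gamma_37 = Gamma_48 = 2
    Gamma[a + 4, a] = -2.0    # antisymmetric partners

assert np.linalg.matrix_rank(Gamma) == n   # invertible: no constraints arise
brackets = np.linalg.inv(Gamma)
print(brackets[0, 4], brackets[4, 0])      # -0.5, +0.5: one conjugate pair
```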
Note that A+ and A− do not enter the list of physical (quantized) degrees of freedom. The equations of motion are now as in Faddeev and Jackiw [14]:

    Γ_{ab} ∂_+ ξ_b = ∂H/∂ξ_a    (2.32)

for the x+ variation, and

    Δ_{ab} ∂_- η_b = ∂K/∂η_a    (2.33)

for the x− variation. For a = 5 and b = 1, equation (2.32) gives

    ∂_+ F^{+1} = +2e ψ†_+ γ⁰γ^i ψ_- + 2e ψ†_- γ⁰γ^i ψ_+,    (2.34)

which is just the equation of motion (2.3) for ν = 1. For a = 7 and b = 3 we recover the equation of motion for ψ†_+:

    i ∂_- ψ†_+ = (i/2) ∂_i ψ†_- + 2e ψ†_+ A_+.    (2.35)

We get similar results from (2.33). But what is the meaning of the fields A+ and A−? They obey the following coupled set of differential equations, according to Cπ and Cρ:

    (1/2)(∂_+)² A_- − (∂_i)² A_+ = −∂_i ∂_+ A_i + 2e ψ†_+ ψ_+,    (2.36)
    (1/2)(∂_-)² A_+ − (∂_i)² A_- = −∂_i ∂_- A_i + 2e ψ†_- ψ_-.    (2.37)

We've arranged the equations so that all the known fields, the independent fields, are on the right-hand side, and the 'new' fields are on the left-hand side. The point is that these are not constraint equations, since they are not relations between the initial data: neither A+ nor A− gets initialized on either hyperplane! We introduce these new fields so that we preserve Lorentz covariance and so that we have the same equations of motion in the Euler–Lagrange case and the Hamiltonian case. Inverting these equations, we get the following expressions for A+ and A−:

    A_+ = ((∂_i)²)⁻¹ ∂_i ∂_+ A_i − ((∂_i)²)⁻¹ e ψ†_+ ψ_+ + (∂_+∂_- − (∂_i)²)⁻¹ e ψ†_+ ψ_+ − ((∂_i)²)⁻¹ (∂_+∂_- − (∂_i)²)⁻¹ (∂_+)² e ψ†_+ ψ_+,    (2.38)

    A_- = ((∂_i)²)⁻¹ ∂_i ∂_+ A_i − ((∂_i)²)⁻¹ e ψ†_- ψ_- + (∂_-∂_+ − (∂_i)²)⁻¹ e ψ†_- ψ_- − ((∂_i)²)⁻¹ (∂_-∂_+ − (∂_i)²)⁻¹ (∂_-)² e ψ†_- ψ_-.    (2.39)

To fully define these fields, we need to define the operators ((∂_i)²)⁻¹ and (∂_+∂_- − (∂_i)²)⁻¹. Then A+ and A− are completely determined in terms of known fields. This is quite straightforward. For the definition of (∂_+)⁻¹, we use the idea of Zhang and Harindranath [16] of taking anti-periodic boundary conditions for all the fields. This then determines the definition of the operator we are considering.
It is

    (1/∂_+) f(x−) = (1/2π) ∫ dk+ e^{−ik+x−} (1/2)[1/(k+ + iǫ) + 1/(k+ − iǫ)] f̃(k+),    (2.40)

which leads to the following form for its square:

    (1/∂_+)² f(x−) = (1/2π) ∫ dk+ e^{−ik+x−} { (1/2)[1/(k+ + iǫ) + 1/(k+ − iǫ)] }² f̃(k+).    (2.41)

In position space, the operator (∂_+)⁻¹ is just the convoluted epsilon distribution [16], while (∂_+)⁻² becomes the corresponding convolution kernel. In the same way,

    (1/∂_-) f(x+) = (1/2π) ∫ dk− e^{−ik−x+} (1/2)[1/(k− + iǫ) + 1/(k− − iǫ)] f̃(k−),    (2.43)

with the analogous form for its square (2.44); in position space, (∂_-)⁻¹ is again the convoluted epsilon distribution [16], with (∂_-)⁻² its square. Finally, for the d'Alembertian-type operator,

    (∂_+∂_- − (∂_i)²)⁻¹ f(x) = (1/(2π)³) ∫ d²k⊥ dk+ dk− e^{+ik⊥·x⊥} e^{−i(k+x− + k−x+)} f̃(k+, k−, k⊥) / (4 k+ k− − k⊥² + iǫ).    (2.46–2.47)

3. Quantization of the Fields

Now that we have the commutation relations, we are ready to define the fields A_i and ψ. According to [11], using two null hyperplanes, the initial data must be specified on each of the hyperplanes as well as on their intersection. In this case, we will have initialization on the two surfaces x+ = 0 and x− = 0. We will require, though, that on the intersection of these surfaces, at x+ = x− = 0, these fields satisfy certain consistency conditions. This works out as follows. On x+ = 0 we have

    A_i(x+ = 0, x−, x⊥) = ∫ d²k⊥ dk+ / (2k+) [ ǫ_i(k+, k⊥) a(k+, k⊥) e^{−ik·x} + ǫ*_i(k+, k⊥) a†(k+, k⊥) e^{+ik·x} ],    (3.1)

    ψ_+(x+ = 0, x−, x⊥) = ∫ d²k⊥ dk+ / (2k+) Σ_λ [ b(k+, k⊥) u_+(k+, k⊥, λ) e^{−ik·x} + d†(k+, k⊥) v_+(k+, k⊥, λ) e^{+ik·x} ].    (3.2)

In this case, ik·x = ik+x− − ik⊥·x⊥, and the polarization vector is ǫ_i(k+, k⊥). On the other hyperplane, x− = 0, we get similar forms:

    A_i(x− = 0, x+, x⊥) = ∫ d²k⊥ dk− / (2k−) [ ǫ̂_i(k−, k⊥) â(k−, k⊥) e^{−ik̂·x} + ǫ̂*_i(k−, k⊥) â†(k−, k⊥) e^{+ik̂·x} ],    (3.3)

    ψ_-(x− = 0, x+, x⊥) = ∫ d²k⊥ dk− / (2k−) Σ_μ [ b̂(k−, k⊥) u_-(k−, k⊥, μ) e^{−ik̂·x} + d̂†(k−, k⊥) v_-(k−, k⊥, μ) e^{+ik̂·x} ].    (3.4)

Here, ik̂·x = ik−x+ − ik⊥·x⊥. We now require that the fields be consistent at x+ = x− = 0. This means that we have

    A_i(x+ = 0, x− = 0, x⊥) = A_i(x− = 0, x+ = 0, x⊥).    (3.5)

This implies

    ∫ d²k⊥ dk+/(2k+) [ ǫ_i(k+,k⊥) a(k+,k⊥) e^{+ik⊥·x⊥} + ǫ*_i(k+,k⊥) a†(k+,k⊥) e^{−ik⊥·x⊥} ]
    = ∫ d²k⊥ dk−/(2k−) [ ǫ̂_i(k−,k⊥) â(k−,k⊥) e^{+ik⊥·x⊥} + ǫ̂*_i(k−,k⊥) â†(k−,k⊥) e^{−ik⊥·x⊥} ].    (3.6)

As k+ and k− are just dummy variables here, we get that

    a(k+, k⊥) = â(k+, k⊥),   a†(k+, k⊥) = â†(k+, k⊥),    (3.7)

as well as

    ǫ_i(k+, k⊥) = ǫ̂_i(k+, k⊥),    (3.8)

and we need to point out that the variables are the same for both creation operators. So this means that

    a(k+, k⊥) = â(k−, k⊥),   ǫ_i(k+, k⊥) = ǫ̂_i(k−, k⊥);    (3.9)

hence the field A_i has different effects on the two surfaces. On x+ = 0, A_i(x+ = 0, x−, x⊥) creates or destroys vector quanta with momentum k = (k+, k⊥) and polarization ǫ_i(k+, k⊥). On x− = 0, A_i(x− = 0, x+, x⊥) creates or destroys quanta with momentum k̂ = (k−, k⊥) and polarization ǫ_i(k−, k⊥). [17]

The analysis for the fermion fields goes through just as in the previous paper [6]. What about the fields A+ and A−? As mentioned in the previous section, since these fields are not initialized on any of the surfaces, they do not constitute constraints. We have solved equations (2.36) and (2.37) in terms of the independent degrees of freedom A_i and ψ_+, ψ_- in equations (2.38) and (2.39). We get the following:

    A_+(x+, x−, x⊥) = ((∂_i)²)⁻¹ ∂_i ∂_+ A_i(x+=0, x−, x⊥) − ((∂_i)²)⁻¹ e ψ†_+ψ_+(x+=0, x−, x⊥) + (∂_+∂_- − (∂_i)²)⁻¹ e ψ†_+ψ_+(x+=0, x−, x⊥) − ((∂_i)²)⁻¹ (∂_+∂_- − (∂_i)²)⁻¹ (∂_+)² e ψ†_+ψ_+(x+=0, x−, x⊥),    (3.10)

    A_-(x−, x+, x⊥) = ((∂_i)²)⁻¹ ∂_i ∂_+ A_i(x−=0, x+, x⊥) − ((∂_i)²)⁻¹ e ψ†_-ψ_-(x−=0, x+, x⊥) + (∂_-∂_+ − (∂_i)²)⁻¹ e ψ†_-ψ_-(x−=0, x+, x⊥) − ((∂_i)²)⁻¹ (∂_-∂_+ − (∂_i)²)⁻¹ (∂_-)² e ψ†_-ψ_-(x−=0, x+, x⊥),    (3.11)

where we use the definitions (2.46), (2.47), (2.41), and (2.44). So all the fields entering the definition of QED are defined, and A+ and A− do not represent new modes or new quanta. It is important to point out here that our gauge field A has only two physical degrees of freedom, A_i, i = 1, 2. The fields A+ and A− are needed to guarantee Lorentz covariance, but are not obtained from constraint equations.

Let us point out that these equations are different in nature from the similar equations one gets in the case of constraint quantization. In the constrained case, one needs to solve the constraint equation before quantization. This is often hard and sometimes impossible analytically.
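The antiperiodic prescription of Zhang and Harindranath can be illustrated numerically: on an interval of length 2L, antiperiodic fields expand in modes with half-integer momenta k_n = π(n + 1/2)/L, so k_n never vanishes and dividing by k_n is always legal. A sketch of my own (not code from either paper):

```python
import numpy as np

L, N = np.pi, 64
x = np.linspace(-L, L, 2048, endpoint=False)
k = np.pi * (np.arange(N) + 0.5) / L        # half-integer momenta: no zero mode

# Antiperiodic test function f(x) = sum_n c_n sin(k_n x): f(x + 2L) = -f(x).
rng = np.random.default_rng(0)
c = rng.normal(size=N) / (1.0 + np.arange(N)) ** 2
f = sum(cn * np.sin(kn * x) for cn, kn in zip(c, k))

# Mode-by-mode inverse derivative: (1/d) applied to sin(kx) gives -cos(kx)/k,
# well defined for every mode precisely because k is never zero.
F = sum(-cn * np.cos(kn * x) / kn for cn, kn in zip(c, k))

# Check that d/dx F reproduces f (up to finite-difference error on the grid).
print(np.max(np.abs(np.gradient(F, x) - f)))   # small
```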
Here, we have already quantized our theory and are computing new fields, so we are past the quantization stage. The quantization procedure seems easier in this approach than in the constrained approaches [1], [2], [3], [4].

4. Parity in Front-Form Quantization

We are ready now to study how the fields A_i, ψ_+ and ψ_- transform under parity. For this we use (Bjorken and Drell, for instance [18]):

    P A_i(x+, x−, x⊥) P⁻¹ = −A_i(x−, x+, −x⊥),    (4.1)

since under parity (x+, x−, x⊥) → (x−, x+, −x⊥) and the vector field has negative intrinsic parity. For the vector field we get

    P A_i(x+ = 0, x−, x⊥) P⁻¹ = P ∫ d²k⊥ dk+/(2k+) [ ǫ_i(k+,k⊥) a(k+,k⊥) e^{−ik·x} + ǫ*_i(k+,k⊥) a†(k+,k⊥) e^{+ik·x} ] P⁻¹.    (4.2)

This becomes

    P A_i(x+ = 0, x−, x⊥) P⁻¹ = ∫ d²(−k⊥) dk−/(2k−) [ −ǫ_i(k−,−k⊥) a(k−,−k⊥) e^{−ik′·x′} − ǫ*_i(k−,−k⊥) a†(k−,−k⊥) e^{+ik′·x′} ]    (4.3)

if

    P a(k+,k⊥) P⁻¹ = a(k−,−k⊥),   P a†(k+,k⊥) P⁻¹ = a†(k−,−k⊥),
    P ǫ_i(k+,k⊥) P⁻¹ = −ǫ_i(k−,−k⊥),    (4.4)

and ik′·x′ = ik−x+ − ik⊥·x⊥. Redefining variables (k−, −k⊥) → (l−, l⊥), we get the result

    P A_i(x+ = 0, x−, x⊥) P⁻¹ = −A_i(x− = 0, x+, −x⊥).    (4.5)

Let us consider the fermion fields now. In this case we have the same result as in the previous paper [6],

    P ψ(x+, x−, x⊥) P⁻¹ = γ⁰ ψ(x−, x+, −x⊥),    (4.6)

and we expect that fields defined on x+ will be mapped into fields defined on x− by parity. Indeed, that is what we find for ψ_+. We derive now these relations for arbitrary x+ and x−. Note that for the x+ evolution we have

    A_i(x+, x−, x⊥) = e^{−iP−x+} A_i(x+ = 0, x−, −x⊥)    (4.7)

or

    ψ_-(x+, x−, x⊥) = e^{−iP−x+} ψ_-(x+ = 0, x−, −x⊥),    (4.8)

so that the parity-transformed field is

    P A_i(x+, x−, x⊥) P⁻¹ = P e^{−iP−x+} P⁻¹ · P A_i(x+ = 0, x−, −x⊥) P⁻¹,    (4.9)

which becomes

    P A_i(x+, x−, x⊥) P⁻¹ = e^{−iP+x−} A_i(x− = 0, x+, −x⊥),    (4.10)

since

    P P− P⁻¹ = P H P⁻¹ = K = P+    (4.11)

by use of equations (2.13) and (2.14). A similar result holds for the fermion case. The generator of x− evolutions also transforms properly, since

    P P+ P⁻¹ = P K P⁻¹ = H = P−,    (4.12)

again by use of equations (2.14) and (2.13). Since the generators of evolution along x+ and x− (H and K, respectively) transform properly under parity, we can evolve the parity relations obtained at x+ = 0 and x− = 0 to relations for arbitrary x+ and x−. For the vector case we get

    P A_i(x+, x−, x⊥) P⁻¹ = −A_i(x−, x+, −x⊥),    (4.13)

as expected from previous work [5]. For the fermion case, we get [7]

    P ψ_+(x+, x−, x⊥) P⁻¹ = γ⁰ ψ_-(x−, x+, −x⊥),    (4.14)

which shows very clearly that parity maps independent fields on x+ = 0 [ψ_+(x+ = 0, x−, x⊥)] to independent fields on x− = 0 [ψ_-(x− = 0, x+, x⊥)], demonstrating that it is crucial to take both x+ = 0 and x− = 0 as quantizing surfaces if we desire fields with parity as an explicit symmetry, as already noted [6].

Thus far we have looked at the transformation properties of independent fields on x+ = 0. It is quite straightforward to show that we get similar results for the fields which are initialized on x− = 0:

    P A_i(x−, x+, x⊥) P⁻¹ = −A_i(x+, x−, −x⊥)    (4.15)

for the vector field, and

    P ψ_-(x−, x+, x⊥) P⁻¹ = γ⁰ ψ_+(x+, x−, −x⊥)    (4.16)

for the fermion field [7]. Let us examine the parity transformation properties of the fields A+ and A−.
It is a straightforward exercise to check, using equations (3.10) and (3.11), that we get

    P A_+(x+ = 0, x−, x⊥) P⁻¹ = A_-(x− = 0, x+, −x⊥)    (4.17)

due to the transformation properties of the fields A_i and ψ_+. We likewise get

    P A_+(x+, x−, x⊥) P⁻¹ = A_-(x−, x+, −x⊥)    (4.18)

for arbitrary x+. For the other field, A−, the results come out as expected as well:

    P A_-(x− = 0, x+, x⊥) P⁻¹ = A_+(x+ = 0, x−, −x⊥)    (4.19)

due to the transformation properties of the fields A_i and ψ_-. We likewise get

    P A_-(x−, x+, x⊥) P⁻¹ = A_+(x+, x−, −x⊥)    (4.20)

for arbitrary x−. This completes our demonstration that fields defined on x+ = 0 and x− = 0 transform properly under parity, and define QED consistently.

5. Acknowledgements

I would like to thank Dharam Ahluwalia and Prof. Stanley Brodsky for discussions and to thank Prof. Richard Blankenbecler for his continuing support.

REFERENCES

1. Kurt Sundermeyer, 'Constrained Dynamics', Springer-Verlag, New York, 1982
2. D. M. Gitman and I. V. Tyutin, 'Quantization of Fields with Constraints', Springer-Verlag, New York, 1990
3. Jan Govaerts, 'Hamiltonian Quantization and Constrained Dynamics', Leuven University Press, Leuven, Belgium, 1991
4. Marc Henneaux and Claudio Teitelboim, 'Quantization of Gauge Systems', Princeton University Press, Princeton, NJ, 1992
5. Ovid C. Jacob, 'Parity Conserving Light-Cone Quantization', SLAC-PUB-6188, May 1993, hep-th/9305076, submitted to Mod. Phys. Lett. A
6. Ovid C. Jacob, 'Parity and Front-Form Quantization of Field Theories', SLAC-PUB, August 1993
7. See also Dharam V. Ahluwalia and Mikolaj Sawicki, 'Spinors on the Front-Form', LANL preprint, October 1993
8. A couple of recent preprints also address the issue of consistency of field equations on the light-cone (T. Heinzl and E. Werner, Regensburg preprint TPR-93-3) or for null-plane field theory (Norbert E. Ligterink and B. L. G. Bakker, Vrije Universiteit, Amsterdam preprint, 1993). In both of these papers the authors stay in the usual approach of taking only one light-cone (null-plane) 'time.' The Regensburg paper actually makes some rather strong claims (with scant support) regarding the lack of need for a second light-like hyperplane, but they do it in the context of a mixed initial-boundary value problem, whereas here we consider an initial value problem.
9. R. Gambini, A. Restuccia, Phys. Rev. D17, 908 (1978)
11. R. Penrose, Gen. Relat. and Grav. 12, 259 (1977)
13. H. Bondi, M. G. J. van der Burg and A. W. K. Metzner, Proc. R. Soc. London A269, 21 (1962)
15. R. Jackiw, '(Constrained) Quantization without Tears', CTP-#2215, hep-th/9306075
16. Wei-Min Zhang and Avaroth Harindranath, 'Light-Front QCD: Role of Longitudinal Boundary Integrals', hepth@xxx/9302119; Wei-Min Zhang and Avaroth Harindranath, 'Residual Gauge Fixing in Light-Front QCD', hepth@xxx/9302107
17. Note that the Fourier decomposition goes through even though we have two 'times.' The point is that on each constant surface, be it x+ = 0 or x− = 0, there is only one front-form time, so that there is a consistent definition of a Fourier decomposition. On each of these surfaces, space-time looks like 2+1 Minkowski space-time.
18. James D. Bjorken and Sidney D. Drell, 'Relativistic Quantum Fields', Chapter 15, McGraw-Hill, San Francisco, 1965

Quantization


Robert M. Gray, Fellow, IEEE, and David L. Neuhoff, Fellow, IEEE
(Invited Paper)

Abstract—The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.

Index Terms—High resolution theory, rate distortion theory, source coding, quantization.

I. INTRODUCTION

THE dictionary (Random House) definition of quantization is the division of a quantity into a discrete number of small parts, often assumed to be integral multiples of a common quantity. The oldest example of quantization is rounding off, which was first analyzed by Sheppard [468] for the application of estimating densities by histograms. Any real number x can be rounded off to the nearest integer, say q(x), with a resulting quantization error e = q(x) − x. More generally, a quantizer consists of an index set, which is ordinarily a collection of consecutive integers beginning with 0 or 1, together with a set of reproduction values or points or levels.

Fig. 2. A uniform quantizer.

The distortion is measured by squared error, d(x, x̂) = (x − x̂)². The quantizer index is encoded into a binary representation or channel codeword. If the quantizer has N possible levels and all of the binary representations or binary codewords have equal length (a temporary assumption), the binary vectors will need log₂N bits (or the next larger integer if N is not a power of 2); this number of bits per sample is the rate of the code, unless explicitly specified otherwise.

In summary, the goal of quantization is to encode the data from a source, characterized by its probability density function, into as few bits as possible (i.e., with low rate) in such a way that a reproduction may be recovered from the bits with as high quality as possible (i.e., with small average distortion). Clearly, there is a tradeoff between the two primary performance measures: average distortion (or simply distortion, as we will often abbreviate) and rate. This tradeoff may be quantified as the operational distortion-rate function δ(R), defined as the least distortion of any quantizer with rate R or less; the operational rate-distortion function r(D) is its inverse, the least rate of any quantizer with distortion D or less.

We will also be interested in the best possible performance among all quantizers. Both as a preview and as an occasional benchmark for comparison, we informally define the class of all quantizers as the class of quantizers that can 1) operate on scalars or vectors instead of only on scalars (vector quantizers), 2) have fixed or variable rate in the sense that the binary codeword describing the quantizer output can have length depending on the input, and 3) be memoryless or have memory, for example, using different sets of reproduction levels depending on the past. In addition, we restrict attention to quantizers that do not change with time. That is, when confronted with the same input and the same past history, a quantizer will produce the same output regardless of the time. We occasionally use the term lossy source code or simply code as alternatives to quantizer. The rate is now defined as the average number of bits per source symbol required to describe the corresponding reproduction symbol.
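The encoder/decoder split just defined is compact in code. A minimal sketch of my own, following the text's terms (cell index, codeword length log₂N, reproduction level at the cell center):

```python
import numpy as np

def make_uniform_quantizer(delta, n_levels):
    """Encoder maps a sample to an integer index; decoder maps the index
    back to the reproduction level at the center of the selected cell."""
    def encode(x):
        idx = np.floor(x / delta).astype(int)
        return np.clip(idx, -n_levels // 2, n_levels // 2 - 1)
    def decode(idx):
        return (idx + 0.5) * delta
    return encode, decode

rate = 3                                   # R bits -> N = 2^R levels
N = 2 ** rate
encode, decode = make_uniform_quantizer(delta=0.25, n_levels=N)

x = np.random.default_rng(1).normal(scale=0.25, size=100_000)
x_hat = decode(encode(x))
print(np.mean((x - x_hat) ** 2))           # ~ delta^2/12 while overload is rare
```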
We informally generalize the operational distortion-rate function to this class: δ(R) is now the least distortion achievable by any quantizer in the class with rate R or less. Thus it provides a benchmark for the structured quantizers considered in this paper. Exact analyses of quantizer performance are available only for special nonasymptotic cases, such as Clavier, Panter, and Grieg's 1947 analysis of the spectra of the quantization error for uniformly quantized sinusoidal signals [99], [100], and Bennett's 1948 derivation of the power spectral density of a uniformly quantized Gaussian random process [43]. The most important nonasymptotic results, however, are the basic optimality conditions and iterative-descent algorithms for quantizer design, as first developed by Steinhaus (1956) [480] and Lloyd (1957) [330], and later popularized by Max (1960) [349].

Our goal in the next section is to introduce in historical context many of the key ideas of quantization that originated in classical works and evolved over the past 50 years, and in the remaining sections to survey selectively and in more detail a variety of results which illustrate both the historical development and the state of the field. Section III will present basic background material that will be needed in the remainder of the paper, including the general definition of a quantizer and the basic forms of optimality criteria and descent algorithms. Some such material has already been introduced and more will be introduced in Section II. However, for completeness, Section III will be largely self-contained. Section IV reviews the development of quantization theories and compares the approaches. Finally, Section V describes a number of specific quantization techniques.

In any review of a large subject such as quantization there is no space to discuss or even mention all work on the subject. Though we have made an effort to select the most important work, no doubt we have missed some important work due to bias, misunderstanding, or ignorance. For this we apologize, both to the reader and to the researchers whose work we may have neglected.

II. HISTORY

The history of quantization often takes on several parallel paths, which causes some problems in our clustering of topics. We follow roughly a chronological order within each and order the paths as best we can. Specifically, we will first track the design and analysis of practical quantization techniques in three paths: fixed-rate scalar quantization, which leads directly from the discussion of Section I; predictive and transform coding, which adds linear processing to scalar quantization in order to exploit source redundancy; and variable-rate quantization, which uses Shannon's lossless source coding techniques [464] to reduce rate. (Lossless codes were originally called noiseless.) Next we follow early forward-looking work on vector quantization, including the seminal work of Shannon and Zador, in which vector quantization appears more to be a paradigm for analyzing the fundamental limits of quantizer performance than a practical coding technique. A surprising amount of such vector quantization theory was developed outside the conventional communications and signal processing literature. Subsequently, we review briefly the developments from the mid-1970's to the mid-1980's, which mainly concern the emergence of vector quantization as a practical technique.
Finally, we sketch briefly developments from the mid-1980's to the present. Except where stated otherwise, we presume squared error as the distortion measure.

A. Fixed-Rate Scalar Quantization: PCM and the Origins of Quantization Theory

Both quantization and source coding with a fidelity criterion have their origins in pulse-code modulation (PCM), a technique patented in 1938 by Reeves [432], who 25 years later wrote a historical perspective on and an appraisal of the future of PCM with Deloraine [120]. The predictions were surprisingly accurate as to the eventual ubiquity of digital speech and video. The technique was first successfully implemented in hardware by Black, who reported the principles and implementation in 1947 [51], as did another Bell Labs paper by Goodall [209]. PCM was subsequently analyzed in detail and popularized by Oliver, Pierce, and Shannon in 1948 [394].

PCM was the first digital technique for conveying an analog information signal (principally telephone speech) over an analog channel (typically, a wire or the atmosphere). In other words, it is a modulation technique, i.e., an alternative to AM, FM, and various other types of pulse modulation. It consists of three main components: a sampler (including a prefilter), a quantizer (with a fixed-rate binary encoder), and a binary pulse modulator. The sampler converts a continuous-time waveform into a sequence of samples taken at the sampling rate, with the high-frequency power removed by the lowpass filter. The binary pulse modulator typically uses the bits produced by the quantizer to determine the amplitude, frequency, or phase of a sinusoidal carrier waveform.

In the evolutionary development of modulation techniques it was found that the performance of pulse-amplitude modulation in the presence of noise could be improved if the samples were quantized to the nearest of a set of levels, since deciding which of the levels had been transmitted in the presence of noise could be done with such reliability that the overall MSE was substantially reduced. The design choice was to fix the number of quantization levels at a value giving acceptably small quantizer MSE and to binary encode the levels, so that the receiver had only to make binary decisions, something it can do with great reliability. The resulting system, PCM, had the best resistance to noise of all modulations of the time.

As the digital era emerged, it was recognized that the sampling, quantizing, and encoding part of PCM performs an analog-to-digital (A/D) conversion, with uses extending much beyond communication over analog channels. Even in the communications field, it was recognized that the task of analog-to-digital conversion (and source coding) should be factored out of binary modulation as a separate task. Thus PCM is now generally considered to just consist of sampling, quantizing, and encoding; i.e., it no longer includes the binary pulse modulation.

Although quantization in the information theory literature is generally considered as a form of data compression, its use for modulation or A/D conversion was originally viewed as data expansion or, more accurately, bandwidth expansion. For example, a speech waveform occupying roughly 4 kHz would have a Nyquist rate of 8 kHz. Sampling at the Nyquist rate and quantizing at 8 bits per sample and then modulating the resulting binary pulses using amplitude- or frequency-shift keying would yield a signal occupying roughly 64 kHz, a 16-fold increase in bandwidth! Mathematically this constitutes compression in the sense that a continuous waveform requiring an infinite number of bits is reduced to a finite number of bits, but for
practical purposes PCM is not well interpreted as a compression scheme.

In an early contribution to the theory of quantization, Clavier, Panter, and Grieg (1947) [99], [100] applied Rice's characteristic function or transform method [434] to provide exact expressions for the quantization error and its moments resulting from uniform quantization for certain specific inputs, including constants and sinusoids. The complicated sums of Bessel functions resembled the early analyses of another nonlinear modulation technique, FM, and left little hope for general closed-form solutions for interesting signals.

The first general contributions to quantization theory came in 1948 with the papers of Oliver, Pierce, and Shannon [394] and Bennett [43]. As part of their analysis of PCM for communications, they developed the oft-quoted result that for large rate or resolution, a uniform quantizer with cellwidth Δ yields average distortion D ≈ Δ²/12. If the quantizer has N levels and rate R = log₂N, and the source has input range (or support) of width A, so that Δ = A/N, then

    SNR = 10 log₁₀(σ²/D) ≈ 6.02 R + constant dB,

showing that for large rate, the SNR of uniform quantization increases 6 dB for each one-bit increase of rate, which is often referred to as the "6-dB-per-bit rule."

Bennett also analyzed companders, systems that preceded a uniform quantizer by a monotonic smooth nonlinearity called a "compressor," say G(x), so that the cascade is equivalent to a nonuniform quantizer. Bennett showed that in this case

    D ≈ (Δ²/12) ∫ f(x) / [G′(x)]² dx,

where Δ is the cellwidth of the uniform quantizer and the integral is taken over the granular range of the input. Since the compressor maps to the unit interval, G′ can be interpreted, as Lloyd would explicitly point out in 1957 [330], as a constant times a "quantizer point-density function" λ(x): integrating the point density over a region gives the fraction of quantizer reproduction levels in the region (one speaks of the fraction rather than the number, since in the present situation the number of levels is infinite). Rewriting Bennett's integral in terms of the point-density function yields its more common form

    D ≈ (1/12N²) ∫ f(x) / λ(x)² dx.    (7)

The idea of a quantizer point-density function will generalize to vectors, while the compander approach will not, in the sense that not all vector quantizers can be represented as companders [192].

Bennett also demonstrated that, under assumptions of high resolution and smooth densities, the quantization error behaved much like random "noise": it had small correlation with the signal and had approximately a flat ("white") spectrum. This led to an "additive-noise" model of quantizer error, and Bennett's spectral formula for a uniformly quantized Gaussian process provides one of the very few exact computations of quantization error spectra.

In 1951 Panter and Dite [405] developed a high-resolution formula for the distortion of a fixed-rate scalar quantizer using approximations similar to Bennett's, but without reference to Bennett.¹ They then used variational techniques to minimize their formula and found the following formula for the operational distortion-rate function of fixed-rate scalar quantization, for large values of R:

    δ(R) ≈ (1/12) 2^{−2R} ( ∫ f(x)^{1/3} dx )³.    (8)

Indeed, substituting the optimal point density

    λ*(x) = f(x)^{1/3} / ∫ f(x)^{1/3} dx    (9)

into Bennett's integral yields (8). As an example, if the input density is Gaussian with variance σ², then δ(R) ≈ (√3 π/2) σ² 2^{−2R} for large R. (It was not until Shannon's 1959 paper [465] that this could be compared with the best possible performance; for the Gaussian source the rate of optimal fixed-rate scalar quantization is approximately 0.72 bits/sample larger than that achievable by the best quantizers.)

In 1957, Smith [474] re-examined companding and PCM. Among other things, he gave somewhat cleaner derivations of Bennett's integral, the optimal compressor function, and the Panter–Dite formula.

¹ They also indicated that it had been derived earlier by P. R. Aigrain.
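Both the 6-dB-per-bit rule and the Panter–Dite constant are easy to see numerically. A quick sketch of my own, for a unit-variance Gaussian source and a uniform quantizer loaded at ±4σ:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500_000)                   # unit-variance Gaussian source

for R in range(2, 9):
    N = 2 ** R
    delta = 8.0 / N                            # support of width 8 ~ +/-4 sigma
    levels = (np.arange(N) - N / 2 + 0.5) * delta
    idx = np.clip(np.round(x / delta - 0.5) + N // 2, 0, N - 1).astype(int)
    D = np.mean((x - levels[idx]) ** 2)
    print(f"R={R}: SNR = {10 * np.log10(1.0 / D):5.2f} dB")

# Successive lines differ by ~6 dB once overload noise is negligible; an
# optimal nonuniform quantizer would instead approach the Panter-Dite value
# D ~ (sqrt(3) * pi / 2) * 2**(-2 * R).
```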
afixed-rate quantizer tobe locally optimal;i.e.,conditions that if satisfied implied thatsmall perturbations to the levels or thresholds would increasedistortion.Any optimal quantizer(one with smallest distortion)will necessarily satisfy these conditions,and so they are oftencalled the optimality conditions or the necessary conditions.Simply stated,Lloyd’s optimality conditions are that for afixed-rate quantizer to be optimal,the quantizer partition mustbe optimal for the set of reproduction levels,and the set ofreproduction levels must be optimal for the partition.Lloydderived these conditions straightforwardly fromfirst principles,without recourse to variational concepts such as derivatives.For the case of mean-squared error,thefirst condition impliesa minimum distance or nearest neighbor quantization rule,choosing the closest available reproduction level to the sourcesample being quantized,and the second condition implies thatthe reproduction level corresponding to a given cell is theconditional expectation or centroid of the source value giventhat it lies in the specified cell;i.e.,it is the minimum mean-squared error estimate of the source sample.For some sourcesthere are multiple locally optimal quantizers,not all of whichare globally optimal.Second,based on his optimality conditions,Lloyd devel-oped an iterative descent algorithm for designing quantizers fora given source distribution:begin with an initial collection ofreproduction levels;optimize the partition for these levels byusing a minimum distortion mapping,which gives a partitionof the real line into intervals;then optimize the set of levels forthe partition by replacing the old levels by the centroids of thepartition cells.The alternation is continued until convergenceto a local,if not global,optimum.Lloyd referred to thisdesign algorithm as“Method I.”He also developed a MethodII based on the optimality properties.First choose an initialsmallest reproduction level.This determines the cell thresholdto the right,which in turn implies the next larger reproductionlevel,and so on.This approach alternately produces a leveland a threshold.Once the last level has been chosen,theinitial level can then be rechosen to reduce distortion andthe algorithm continues.Lloyd provided design examplesfor uniform,Gaussian,and Laplacian random variables andshowed that the results were consistent with the high resolutionapproximations.Although Method II would initially gain morepopularity when rediscovered in1960by Max[349],it isMethod I that easily extends to vector quantizers and manytypes of quantizers with structural constraints.Third,motivated by the work of Panter and Dite butapparently unaware of that of Bennett or Smith,Lloyd re-derived Bennett’s integral and the Panter–Dite formula basedon the concept of point-density function.This was a criticallyimportant step for subsequent generalizations of Bennett’sintegral to vector quantizers.He also showed directly thatin situations where the global optimum is the only localoptimum,quantizers that satisfy the optimality conditionshave,asymptotically,the optimal point density given by(9).2330IEEE TRANSACTIONS ON INFORMATION THEORY,VOL.44,NO.6,OCTOBER1998Unfortunately,Lloyd’s work was not published in an archival journal at the time.Instead,it was presented at the1957Institute of Mathematical Statistics(IMS)meeting and appeared in print only as a Bell Laboratories Technical Memorandum.As a result,its results were not widely known in the engineering literature for many years,and many were 
independently rediscovered.All of the independent rediscoveries,however,used variational derivations,rather than Lloyd’s simple derivations.The latter were essential for later extensions to vector quantizers and to the development of many quantizer optimization procedures.To our knowledge, thefirst mention of Lloyd’s work in the IEEE literature came in 1964with Fleischer’s[170]derivation of a sufficient condition (namely,that the log of the source density be concave)in order that the optimal quantizer be the only locally optimal quantizer, and consequently,that Lloyd’s Method I yields a globally optimal quantizer.(The condition is satisfied for common densities such as Gaussian and Laplacian.)Zador[561]had referred to Lloyd a year earlier in his Ph.D.dissertation,to be discussed later.Later in the same year in another Bell Telephone Laborato-ries Technical Memorandum,Goldstein[207]used variational methods to derive conditions for global optimality of a scalar quantizer in terms of second-order partial derivatives with respect to the quantizer levels and thresholds.He also provided a simple counterintuitive example of a symmetric density for which the optimal quantizer was asymmetric.In1959,Shtein[471]added terms representing overload distortion totheth-power distortion measures, rediscovered Lloyd’s Method II,and numerically investigated the design offixed-rate quantizers for a variety of input densities.Also in1960,Widrow[529]derived an exact formula for the characteristic function of a uniformly quantized signal when the quantizer has an infinite number of levels.His results showed that under the condition that the characteristic function of the input signal be zero when its argument is greaterthanis a deterministic function of the signal.The“bandlimited”property of the characteristic function implies from Fourier transform theory that the probability density function must have infinite support since a signal and its transform cannot both be perfectly bandlimited.We conclude this subsection by mentioning early work that appeared in the mathematical and statistical literature and which,in hindsight,can be viewed as related to scalar quantization.Specifically,in1950–1951Dalenius et al.[118],[119]used variational techniques to consider optimal group-ing of Gaussian data with respect to average squared error. 
Lukaszewicz and H.Steinhaus[336](1955)developed what we now consider to be the Lloyd optimality conditions using variational techniques in a study of optimum go/no-go gauge sets(as acknowledged by Lloyd).Cox in1957[111]also derived similar conditions.Some additional early work,which can now be seen as relating to vector quantization,will be reviewed later[480],[159],[561].B.Scalar Quantization with MemoryIt was recognized early that common sources such as speech and images had considerable“redundancy”that scalar quantization could not exploit.The term“redundancy”was commonly used in the early days and is still popular in some of the quantization literature.Strictly speaking,it refers to the statistical correlation or dependence between the samples of such sources and is usually referred to as memory in the information theory literature.As our current emphasis is historical,we follow the traditional language.While not dis-rupting the performance of scalar quantizers,such redundancy could be exploited to attain substantially better rate-distortion performance.The early approaches toward this end combined linear processing with scalar quantization,thereby preserving the simplicity of scalar quantization while using intuition-based arguments and insights to improve performance by incorporating memory into the overall code.The two most important approaches of this variety were predictive coding and transform coding.A shared intuition was that a prepro-cessing operation intended to make scalar quantization more efficient should“remove the redundancy”in the data.Indeed, to this day there is a common belief that data compression is equivalent to redundancy removal and that data without redundancy cannot be further compressed.As will be discussed later,this belief is contradicted both by Shannon’s work, which demonstrated strictly improved performance using vec-tor quantizers even for memoryless sources,and by the early work of Fejes Toth(1959)[159].Nevertheless,removing redundancy leads to much improved codes.Predictive quantization appears to originate in the1946 delta modulation patent of Derjavitch,Deloraine,and Van Mierlo[129],but the most commonly cited early references are Cutler’s patent[117]2605361on“Differential quantization of communication signals”and on DeJager’s Philips technical report on delta modulation[128].Cutler stated in his patent that it“is the object of the present invention to improve the efficiency of communication systems by taking advantage of correlation in the signals of these systems”and Derjavitch et al.also cited the reduction of redundancy as the key to the re-duction of quantization noise.In1950,Elias[141]provided an information-theoretic development of the benefits of predictive coding,but the work was not published until1955[142].Other early references include[395],[300],[237],[511],and[572]. In particular,[511]claims Bennett-style asymptotics for high-resolution quantization error,but as will be discussed later, such approximations have yet to be rigorously derived. From the point of view of least squares estimation theory,if one were to optimally predict a data sequence based on its pastGRAY AND NEUHOFF:QUANTIZATION2331Fig.3.Predictive quantizer encoder/decoder.in the sense of minimizing the mean-squared error,then the resulting error or residual or innovations sequence would be uncorrelated and it would have the minimum possible variance. 
To permit reconstruction in a coded system, however, the prediction must be based on past reconstructed samples and not true samples. This is accomplished by placing a quantizer inside a prediction loop and using the same predictor to decode the signal. A simple predictive quantizer or differential pulse-coded modulator (DPCM) is depicted in Fig. 3. If the predictor is simply the last sample and the quantizer has only one bit, the system becomes a delta-modulator. Predictive quantizers are considered to have memory in that the quantization of a sample depends on previous samples, via the feedback loop.

Predictive quantizers have been extensively developed — for example, there are many adaptive versions — and are widely used in speech and video coding, where a number of standards are based on them. In speech coding they form the basis of ITU G.721, 722, 723, and 726, and in video coding they form the basis of the interframe coding schemes standardized in the MPEG and H.26X series. Comprehensive discussions may be found in books [265], [374], [196], [424], [50], and [458], as well as survey papers [264] and [198].

Though decorrelation was an early motivation for predictive quantization, the most common view at present is that the primary role of the predictor is to reduce the variance of the variable to be scalar-quantized. This view stems from the facts that a) it is the prediction errors rather than the source samples that are quantized, b) the overall quantization error precisely equals that of the scalar quantizer operating on the prediction errors, and c) the operational distortion-rate function of scalar quantization scales with the variance of the variable being quantized, so that reducing the variance of the prediction errors yields a corresponding reduction in distortion.
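A minimal DPCM loop in code makes the variance-reduction point concrete; a sketch of my own (first-order predictor, 3-bit uniform quantizer, AR(1) source), not anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
a, n = 0.95, 100_000
x = np.zeros(n)                          # unit-variance AR(1) source
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal(scale=np.sqrt(1 - a * a))

def quantize(v, n_levels, width):
    """Midrise uniform quantizer whose step is scaled to the signal spread."""
    delta = width / n_levels
    return delta * (np.floor(v / delta) + 0.5)

# Plain 3-bit scalar quantization of the source itself:
mse_pcm = np.mean((x - quantize(x, 8, 8 * x.std())) ** 2)

# DPCM: the same 3-bit quantizer applied to the prediction residual,
# predicting from past *reconstructed* samples (the feedback loop of Fig. 3).
recon, prev = np.zeros(n), 0.0
res_spread = 8 * np.sqrt(1 - a * a)      # residual is much narrower than x
for t in range(n):
    pred = a * prev
    recon[t] = pred + quantize(x[t] - pred, 8, res_spread)
    prev = recon[t]
mse_dpcm = np.mean((x - recon) ** 2)

print(f"PCM MSE  = {mse_pcm:.5f}")
print(f"DPCM MSE = {mse_dpcm:.5f}")      # ~10x smaller at the same rate
```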
assumptions.If the transform is the Karhunen–Lo`e ve transform,then the coefficients will be uncorrelated(and hence independent if the input vector is also Gaussian).The seminal work of Huang and Schultheiss showed that high-resolution approximation theory could provide analytical descriptions of optimal performance and design algorithms for optimizing codes of a given structure.In particular,they showed that under the high-resolution assumptions with Gaussian sources, the average distortion of the best transform code with a given rate is less than that of optimal scalar quantization bythefactor,where is the average of thevariances of the components of the source vectorandcovariance matrix.Note that this reduction indistortion becomes larger for sources with more memory(morecorrelation)because the covariance matrices of such sourceshave smaller determinants.Whenor less.Sincewe have weakened the constraint by expanding the allowedset of quantizers,this operational distortion-rate function willordinarily be smaller than thefixed-rate optimum.Huffman’s algorithm[251]provides a systematic methodof designing binary codes with the smallest possible averagelength for a given set of probabilities,such as those of thecells.Codes designed in this way are typically called Huffmancodes.Unfortunately,there is no known expression for theresulting minimum average length in terms of the probabilities.However,Shannon’s lossless source coding theorem impliesthat given a source and a quantizer partition,one can alwaysfind an assignment of binary codewords(indeed,a prefix set)with average length not morethan,where。

Quantization of soliton systems and Langlands duality

QUANTIZATION OF SOLITON SYSTEMS AND LANGLANDS DUALITY
arXiv:0705.2486v3 [math.QA] 9 Dec 2007
BORIS FEIGIN AND EDWARD FRENKEL

Abstract. We consider the problem of quantization of classical soliton integrable systems, such as the KdV hierarchy, in the framework of a general formalism of Gaudin models associated to affine Kac–Moody algebras. Our experience with the Gaudin models associated to finite-dimensional simple Lie algebras suggests that the common eigenvalues of the mutually commuting quantum Hamiltonians in a model associated to an affine algebra ĝ should be encoded by affine opers associated to the Langlands dual affine algebra ᴸĝ. This leads us to some concrete predictions for the spectra of the quantum Hamiltonians of the soliton systems. In particular, for the KdV system the corresponding affine opers may be expressed as Schrödinger operators with spectral parameter, and our predictions in this case match those recently made by Bazhanov, Lukyanov and Zamolodchikov. This suggests that the correspondence between quantum integrals of motion and differential operators may be viewed as a special case of the Langlands duality.
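For the KdV case just mentioned, the Schrödinger operators in question have potentials related to the modified-KdV variable by the classical Miura transform. A minimal sympy check of my own (a standard fact, not a computation taken from the paper) that ∂² − u factorizes when u = v² + v′:

```python
import sympy as sp

x = sp.symbols("x")
f = sp.Function("f")(x)          # arbitrary test function
v = sp.Function("v")(x)          # mKdV variable

u = v**2 + v.diff(x)             # Miura transform: u = v^2 + v_x

lhs = f.diff(x, 2) - u * f       # (d^2/dx^2 - u) f
g = f.diff(x) - v * f            # (d/dx - v) f
rhs = g.diff(x) + v * g          # (d/dx + v)(d/dx - v) f

print(sp.simplify(sp.expand(lhs - rhs)))   # 0: exact operator factorization
```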

Enantioseparation of Betti base and its derivatives by HPLC on chiral stationary phases

Journal of Zhejiang University (Science Edition)
Vol. 47, No. 1, Jan. 2020
DOI: 10.3785/j.issn.1008-9497.2020.01.014
Enantioseparation of Betti Base and Its Derivatives by High-Performance Liquid Chromatography on Chiral Stationary Phases
ZHI Mingyu 1,2, ZHU Yan 2*
(1. Hangzhou Vocational and Technical College, Hangzhou 310018, Zhejiang, China; 2. Department of Chemistry, Zhejiang University, Hangzhou 310028, Zhejiang, China)
Abstract: The enantioseparation of the chiral aminophenol 1-(α-aminobenzyl)-2-naphthol (Betti base) and its derivatives 1-(α-benzylaminobenzyl)-2-naphthol and 1-(α-piperidinylbenzyl)-2-naphthol was studied on a cellulose tris(3,5-dimethylphenylcarbamate) column (Chiralcel OD-H) and a Pirkle-type (R,R)-Whelk-O1 chiral column. The effects of the type and concentration of basic additives and alcohol additives in the n-hexane mobile phase on the chiral separation were examined. The results show that the solutes were well separated on the Chiralcel OD-H column, whereas on the (R,R)-Whelk-O1 column only 1-(α-benzylaminobenzyl)-2-naphthol was partially separated. The influence of steric structure on the chiral separation was studied, and the chiral recognition mechanisms of the solutes on the two columns were preliminarily discussed and compared. It was found that on the (R,R)-Whelk-O1 column the attractive interactions between the solutes and the stationary phase are very weak, whereas on the Chiralcel OD-H column the steric fit of the solute within the chiral cavity is most likely the key to chiral recognition.
Key words: Betti base; 1-(α-benzylaminobenzyl)-2-naphthol; 1-(α-piperidinylbenzyl)-2-naphthol; chiral stationary phase; enantiomer separation
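Separations like those reported in this abstract are conventionally quantified by the retention factor, selectivity, and resolution. A small sketch of my own (the numerical values are placeholders, not data from this paper):

```python
def retention_factor(t_r, t_0):
    """k = (tR - t0) / t0: retention relative to the column void time t0."""
    return (t_r - t_0) / t_0

def selectivity(k1, k2):
    """alpha = k2 / k1 for the two enantiomers (convention: k2 >= k1)."""
    return k2 / k1

def resolution(t_r1, t_r2, w1, w2):
    """Rs = 2 (tR2 - tR1) / (w1 + w2), with baseline peak widths w1, w2."""
    return 2.0 * (t_r2 - t_r1) / (w1 + w2)

# Illustrative numbers only (minutes); Rs >= 1.5 indicates baseline separation.
t0, t1, t2, w1, w2 = 3.0, 8.2, 10.1, 0.9, 1.1
k1, k2 = retention_factor(t1, t0), retention_factor(t2, t0)
print(f"k1 = {k1:.2f}, k2 = {k2:.2f}, alpha = {selectivity(k1, k2):.2f}, "
      f"Rs = {resolution(t1, t2, w1, w2):.2f}")
```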

Cyclodextrins


26 Cyclodextrins

Katia Martina and Giancarlo Cravotto

CONTENTS
26.1 Introduction
26.2 Inclusion Complex Formation
26.3 Applications of CD in Food
26.4 Analysis of CD
  26.4.1 Characterization of CD-Inclusion Complex
  26.4.2 Determination of CD Content
    26.4.2.1 The Colorimetric Method
    26.4.2.2 Chromatography
    26.4.2.3 Affinity Capillary Electrophoresis
26.5 Conclusion
References

26.1 Introduction

Cyclodextrins (CDs) are unique molecular complexation agents. They possess a cage-like supramolecular structure, which involves intra- and intermolecular interactions where no covalent bonds are formed between interacting molecules, ions, or radicals. It is mainly a "host–guest" type phenomenon. CDs are definitively the most important supramolecular hosts found in the literature. As a result of molecular complexation, CDs are widely used in many industrial fields (cosmetics, pharmaceutics, bioremediation, etc.) and in analytical chemistry. Their high biocompatibility and negligible cytotoxicity have opened the doors to uses such as drug excipients and agents for drug-controlled release (Stella and Rajewski 1997, Matsuda and Arima 1999), in food and flavors (Mabuchi and Ngoa 2001), cosmetics (Buschmann and Schollmeyer 2002), textiles (Buschmann et al. 2001), environment protection (Baudin et al. 2000), and fermentation and catalysis (Koukiekolo et al. 2001, Kumar et al. 2001).

CDs are cyclic oligosaccharides consisting of at least six glucopyranose units which are joined together by a (1 → 4) linkage. CDs are known as cycloamyloses, cyclomaltoses, and historically as Schardinger dextrins. They are produced as a result of an intramolecular transglycosylation reaction from the degradation of starch, which is performed by the CD glucanotransferase enzyme (CGTase) (Szejtli 1998). The first reference to the molecule which later proved to be CD was published by Villiers in 1891: digesting starch with Bacillus amylobacter, he isolated two crystalline products, probably α- and β-CDs. In 1903, Schardinger reported the isolation of two crystalline products that he called α- and β-dextrin, in which the helix of amylose was conserved in fixed-ring structures.

From the x-ray structures, it appears that the secondary hydroxyl groups (C2 and C3) are located on the wider edge of the ring and the primary hydroxyl groups (C6) on the other edge. The apolar –CH (C3 and C5) and ether-like oxygens are on the inside of the truncated cone-shaped molecules (Figure 26.1). This results in a hydrophilic structure with an apolar cavity, which provides a hydrophobic matrix, often described as a "microheterogeneous environment." As a result of this cavity, CDs are able to form inclusion complexes with a wide variety of hydrophobic guest molecules. One or two guest molecules can be entrapped by one, two, or three CDs.

Although CDs with up to 12 glucose units are known, only the first three homologues (α-, β-, and γ-CD) have been extensively studied and used. β-CD is the most accessible due to its low price and high versatility. The main properties of these CDs are given in Table 26.1. The safety profiles of the three most common natural CDs and some of their derivatives have recently been reviewed (Irie and Uekama 1997, Thompson 1997).
All toxicity studies have demonstrated that orally administered CDs are practically nontoxic, due to the fact that they are not absorbed by the gastrointestinal tract.

The pioneer country in the industrial application of CDs was Japan; by 1990 it had become the largest consumer in the world. Eighty percent of the annual consumption was used in the food industry and over 10% in cosmetics, while less than 5% was used in the pharmaceutical and agrochemical industries. The industrial usage of CDs has progressed somewhat more slowly in Europe and America. The constant annual growth of the number of scientific papers and patents indicates the scale of research and industrial interest in this field. From a regulatory standpoint, a monograph for β-CD is available in both the US Pharmacopoeia/National Formulary (USP 23/NF 18, 1995) and the European Pharmacopoeia (3rd ed., 1997). All native CDs are listed in the generally regarded and/or recognized as safe (GRAS) list of the US FDA for use as a food additive. β-CD was recently approved in Europe as a food additive (up to 1 g/kg food). In Japan, the native CDs were declared to be enzymatically modified starch and, therefore, their use in food products has been permitted since 1978.

FIGURE 26.1 Chemical structure of α, β, and γ-CD.

Apart from these naturally occurring CDs, many derivatives have been synthesized so as to improve solubility, stability to light or oxygen, and control over the chemical activity of guest molecules (Eastburn and Tao 1994, Szente and Szejtli 1999). Through partial functionalization, the applications of CDs are expanded: CDs are modified through substituting various functional groups on the primary and/or secondary face of the molecule. (CD derivatives carry substituents R, R′, and R″ on the glucose units; for the native CDs, R = R′ = R″ = H.)

26.2 Inclusion Complex Formation

The most notable feature of CDs is their ability to form solid inclusion complexes (host–guest complexes) with a very wide range of solid, liquid, and gaseous compounds by molecular complexation (Szejtli 1982). Since the exterior of the CDs is hydrophilic, they can include guest molecules in water solution. As depicted in Figure 26.2, the guest can be either completely or partially surrounded by the host molecule. The driving force in complex formation is the substitution of the high-enthalpy water molecules by an appropriate guest (Muñoz-Botella et al. 1995). One, two, or more CDs can entrap one or more guest molecules. Most frequently the host–guest ratio is 1:1; however, 2:1, 1:2, 2:2, or even more complicated associations and higher-order equilibria have been described. The packing of the CD adducts is related to the dimensions of the guest and cavity. Several factors play a role in inclusion complex formation, and several interactions have been found:

a. Hydrophobic effects, which cause the apolar group of a molecule to fit into the cavity.
b. Van der Waals interactions between permanent and induced dipoles.
c. Hydrogen bonds between guest molecules and secondary hydroxyl groups at the rim of the cavity.
d. Solvent effects.

TABLE 26.1 Physical Properties of α-, β-, and γ-CDs

Property                                 α-CD       β-CD       γ-CD
Number of glucose units                  6          7          8
Mol wt. (anhydrous)                      972        1135       1297
Volume of cavity (Å³ in 1 mol CD)        174        262        427
Solubility in water (g/100 mL, r.t.)     14.5       1.85       23.2
Outer diameter (Å)                       14.6       15.4       17.5
Cavity diameter (Å)                      4.7–5.3    6.0–6.5    7.5–8.3

FIGURE 26.2 1:1 and 1:2 host–guest CD complexes (shown with a naphthalene derivative).

Regardless of what kind of stabilizing forces are involved, the geometric characteristics and the polarity of the guest molecules, the medium, and the temperature are the most important factors for determining the stability of the inclusion complex. Geometric rather than chemical factors are decisive in determining the kind of guest molecules which can penetrate the cavity. If the guest is too small, it will easily pass in and out of the cavity with little or no bonding at all. Complex formation with guest molecules significantly larger than the cavity may also be possible, but the complex is then formed in such a way that only certain groups or side chains penetrate the CD cavity.

Complexes can be formed either in solution or in the crystalline state, and water is typically the solvent of choice. Inclusion complexation can be accomplished in cosolvent systems and also in the presence of a nonaqueous solvent. Inclusion in CDs exerts a strong effect on the physicochemical properties of guest molecules, as they are temporarily locked or caged within the host cavity, giving rise to beneficial modifications which are not achievable otherwise (Dodziuk 2006). Molecular encapsulation can be responsible for the solubility enhancement of highly insoluble guests, the stabilization of labile guests against degradation, and greater control over volatility and sublimation. It can also modify taste through the masking of flavors and unpleasant odors, and it allows the controlled release of drugs and flavors. Therefore, CDs are widely used in the food industry (Shaw 1990), in food packaging (Fenyvesi et al. 2007), in pharmaceuticals (Loftsson and Duchene 2007, Laze-Knoerr et al. 2010), and above all in cosmetics and toiletries (Szejtli 2006).

26.3 Applications of CD in Food

Today the nontoxicity of β-CD is well proven, and the same tenet is generally accepted for the other CDs. The regulatory statuses of CDs differ in Europe, the United States, and Japan, because the official processes for food approval are different. In the United States α-, β-, and γ-CD have obtained GRAS status and can be commercialized as such. In Europe, the approval process for α-CD as a Novel Food has just started and is expected to legalize the widespread application of α-CD to dietary products, including soluble fiber. In Japan, α-, β-, and γ-CDs are recognized as natural products, and their commercialization in the food sector is restricted only by purity considerations. In Australia and New Zealand, α- and γ-CD have been classified as Novel Foods since 2004 and 2003, respectively.

Nowadays the application of CD-assisted molecular encapsulation in foods offers many advantages (Cravotto et al. 2006):
2006):• Improvement in the solubility of substances.• Protection of the active ingredients against oxidation, light-induced reactions, heat-promoted decomposition, loss by volatility, and sublimation.• Elimination (or reduction) of undesired tastes/odors, microbiological contamination, hygro-scopicity, and so on.Typical technological advantages include, for example, stability, standardized compositions, simple dosing and handling of dry powders, reduced packing and storage costs, more economical, and man-power savings. CDs are mainly used, in food processing, as carriers for the molecular encapsulation of flavors and other sensitive ingredients. As CDs are not altered by moderate heat, they protect flavors throughout many rigorous food-processing methods such as freezing, thawing, and microwaving. β-CD preserves flavor quality and quantity to a greater extent and for a longer time compared to other encap-sulants (Hirayama and Uekama 1987).CDs can improve the chemical stability of foods by complete or partial inclusion of oxygen-sensitive components. They can be used to stabilize flavors against heat that can induce degradation and they can also be employed to prolong shelf-life by acting as stabilizers.CDs are used for the removal or masking of undesirable components; for example, trimethylamine can be deodorized by the inclusion of a mixture of α-, β-, and γ-CDs. CDs are also used to free soybean products from their fatty smell and astringent taste. Even the debittering of citrus juices with β-CD is a long pursued goal.Cyclodextrins 597 CDs have an important use in the removal of cholesterol from animal products such as milk, butter, and egg yolks and have recently been studied as neutraceutics carriers to disperse and protect natural lipophylic molecules such as polyunsaturated fatty acids, Coenzyme Q10 (ubiquinone) and Vitamin K3.26.4 A nalysis of CD26.4.1 C haracterization of CD-Inclusion ComplexWhen molecules are inserted within the hydrophobic interior of the CDs, several weak forces between the host and guest are involved, that is, dipole–dipole interaction, electrostatic interactions, van der Waals forces, and hydrophobic and hydrogen bonding interactions. An equilibrium exists between the free and complexed guest molecules. The equilibrium constant depends on the nature of the CD and guest molecule, as well as temperature, moisture level, and so on. The inclusion complexes formed in this way can be isolated as stable crystalline substances, and precise information on their topology can be obtained from the structural x-ray analysis of single crystals (Song et al. 2009). The topology of the inclusion complex can also be determined in solution. The interactions between host and guest may lead to characteristic shifts in the 1H and 13C NMR spectra (Dodziuk et al. 2004, Chierotti and Gobetto 2008). Nuclear Overhauser effects (NOE) provide more precise information since their magnitudes are a mea-sure of the distance between host and guest protons. Circular dichroism spectra give information on the topology of the adduct, when achiral guests are inserted into the chiral cavity (Silva et al. 2007). Potentiometry, calorimetry, and spectroscopic methods including fluorescence, infrared, Raman, and mass spectrometry have also been used to study inclusion complexes (Daniel et al. 2002).The molecular encapsulation of natural essential oils, spices, and flavors such as cheese, cocoa, meat, and coffee aromas with β-CD has been known since several years. 
The literature has dealt with the improved physical and chemical stability of these air-, light-, and heat-sensitive flavors (Szente et al. 1988; Qi and Hedges 1995) and investigated the interaction of these compounds with CDs.UV absorbance spectroscopy was applied to investigate hyperchromic effects induced by the addition of β-CD to a water solution of caffeine (Mejri et al. 2009). The spectroscopic and photochemical behav-ior of β-CD inclusion complexes with l-tyrosine were investigated by Shanmugam et al. (2008). UV–vis, fluorimetry, FT-IR, scanning electron microscope techniques, and thermodynamic parameters have been used to examine β-CD/l-tyrosine complexation.Nishijo and Tsuchitani (2001) studied the formation of an inclusion complex between α-CD and l-tryp-tophan using nuclear magnetic resonance (NMR). Linde et al. (2010) investigated the complexation of amino acids by β-CD using different NMR experiments such as diffusion-ordered spectroscopy (DOSY) and rotating frame Overhauser effect spectroscopy (ROESY). This study provided molecular level infor-mation on complex structure and association-binding constants and advanced the sensorial knowledge and the development of new technologies for masking the bitter taste of peptides in functional food products. The preparation of stable, host–guest complexes of β-CD with thymol, carvacrol, and oil of origanum has been described by LeBlanc et al. (2008). The complex was characterized by NMR and the inclusion constant was measured by fluorescence spectroscopy where 6-p-toluidinylnaphthalene-2-sulfonate was in competitive binding and acted as a fluorescent probe.Caccia et al. (1998) provide the evidence of the inclusion complex between neohesperidin dihydrochalcone/β-CD by x-ray, high resolution NMR and MS spectroscopy. The association constant was determined by NMR via an iterative nonlinear fitting of the chemical shift variation of H3 in β-CD. The geometry of the binding was studied by nuclear NOEs between the proton directly involved in the host/guest interaction as well as by ROESY. The use of fast atom bombardment (FAB) gave comple-mentary information on specific host–guest interaction, while x-ray diffractometry patterns could define the complex in solid state.Differential scanning calorimetry (DSC), thermogravimetry analysis (TGA), or nuclear magnetic resonance (1H-NMR) were employed by Marcolino et al. (2011) to study the stability of the β-CD com-plexes with bixin and curcumin. Owing to the huge industrial applications of natural colorants, this study aimed to compare different methods of complexes formation and evaluate their stability.598Handbook of Analysis of Active Compounds in Functional Foods Natural and synthetic coffee flavors were included in β-CD and the complexes were analyzed by x-ray diffraction by Szente and Szejtli (1986). By thermofractometry and the loss of a volatile constitu-ent, it was demonstrated that the volatility of these complexed flavors diminished in such a way that they could be stored for longer periods. Various spectroscopic methods have been compared, by Goubet et al. (1998, 2000), to study the competition for specific binding to β-CD. 
The substrates were a group of flavors which show different physicochemical properties, such as vapor pressure, water solubility, and log P.Inverse gas chromatography was recently used for the direct assessment of the retention of several aroma compounds of varying chemical functionalities by high amylose corn starch, wheat starch, and β-CD (Delarue and Giampaoli 2000). The inclusion selectivity of several monoterpene alcohols with β-CD in water/alcohol mixtures was studied by Chatjigakis et al. (1999) using reverse-phase HPLC. Flavor r etention in α-, β-, and γ-CDs was compared, by Reineccius et al. (2002), by the GC analysis of the released flavor compounds; quantification was accomplished using standard internal protocols.GC-MS was used for the identification of the volatile constituents of cinnamon leaf and garlic oils before and after the microencapsulation process with β-CD (Ayala-Zavala et al. 2008). The profile of volatile substances in the β-CD microcapsules was used to evaluate the competitive equilibrium between β-CD and all volatile substances. The eugenol and allyl disulfide content of cinnamon leaf and garlic oils were used as a pattern to evaluate the efficiency in the microencapsulation process. The IR spectra of the microcapsules was employed to demonstrate the formation of intramolecular hydrogen bonds between the guest and host molecules.Samperio et al. (2010) investigated the solubility in water and in apple juice of 23 different essential oils and 4 parabens. The study was focused on the β-CD complexes of few essential oil components (o-methoxycinnamaldehyde,trans, trans-2,4-decadienal, and citronellol), evaluating the increase of solubility in water and the storage stability. UV absorption spectrophotometry was performed to quan-tify the compound in solution. Linear regression analysis was used to calculate the concentration of test compounds in solution from day 0 to day 7.26.4.2 D etermination of CD ContentTraditionally, a variety of techniques have been developed to analyze CDs and their derivatives.Few analytical methods for the quantification of β-CD are described in the literature. Among them are colorimetric methods, LC methods based on the use of indirect photometric detection, pulse ampero-metry, or refractive index experiments, affinity capillary electrophoresis, and mass spectrometry are able to provide qualitative and quantitative data when analyzing the complex CD mixtures.26.4.2.1 T he Colorimetric MethodThe colorimetric method may be used as an alternative to chromatography especially at low CD concen-trations, this also works in the presence of linear oligosaccharides. The colorimetric method, based on the complexation of phenolphthalein, was employed by Higuti et al. (2004) to carry out sensitive and relatively specific quantification of β-CD. A decrease in absorbance at 550 nm, due to phenolphthalein–CD complex formation, was exploited to study the optimization of the CGTase production in Bacillus firmus. A highly reproducible and selective α-CD determination method had already been described by Lejeune et al. (1989). This involves the formation of an inclusion complex between the α-CD and methyl orange under conditions of low pH and low temperature. The metal indicator calmagite (1-(1-hydrohy-4-methyl-phenylazo)-2-naphthol-4-sulfonic acid) interacts selectively with γ-CD and was described by Hokse (1983) to quantify a standard solution of γ-CD.Kobayashi et al. 
(2008) observed that various kinds of hydrophobic food polyphenols and fatty acids could be dispersed in water containing starch by the action of GTAse (CD-producing enzyme). NMR and spectrophotometric methods were used to confirm the presence of CDs as solubilizing agents. The for-mation of inclusion complexes was demonstrated by using Congo Red as a model molecule in the pres-ence of GTAse or α-, β-, and γ-CD, respectively. Major changes in the 1H NMR profile of Congo Red were observed in the presence of γ- and β-CD.Cyclodextrins 599On the other hand, a spectrophotometric and infrared spectroscopic study of the interaction between Orange G, a valuable clastogenic and genotoxic acid dye used as a food colorant, and β-CD has been described by Wang et al. (2007) as a method for the quantitative determination of this dye. Based on the enhancement of the absorbance of Orange G when complexed by β-CD, the authors proposed a ratiomet-ric method, carried out spectrophotometrically, for the quantitative determination of Orange G in bulk aqueous solution. The absorbance ratio of the complex at 479 and 329 nm in a buffer solution at pH 7.0 showed a linear relationship in the range of 1.0 × 10−5 to 4.0 × 10−5 mol L−1. IR spectroscopy of the com-plex was described to confirm the inclusion complex formation.26.4.2.2 C hromatography26.4.2.2.1 T hin-Layer ChromatographyOne reference in the literature refers to the use of thin-layer chromatography (TLC) technique as an inexpensive, simple, and very informative method for the analysis and separation of CD inclusion com-plex food components. Prosek et al. (2004) isolated the inclusion complex between coenzyme Q10 (CoQ10) and β-CD and described its analysis and separation by one-dimensional, two-dimensional, and multidimensional TLC. The article described different TLC supports, mobile phases, and visualization methods in detail and the authors evaluated that 70% of the complex remained unchanged during the first semipreparative chromatography run and only a small amount of CoQ10 was lost from the complex dur-ing the TLC procedure. The results were confirmed by the use of other separation techniques such as HPLC, HPLC-MS, and NMR.26.4.2.2.2 L iquid Chromatography, LC-MS, HPLC-MSLiquid chromatography (LC) methods are employed for the analysis and separation of CDs and their derivatives. The separation of the complex samples containing CDs in mixture with linear oligosaccha-ride residual starch as well as protein salts and other substances may suffer from poor sensitivity, resolu-tion, and long separation times. Good results can be achieved where differences in mass or polarity are found or, otherwise, will require extensive sample preparation.Several stationary phases have been described, for example, resins modified with specific adsorbents and reverse-phase media used in combination with either refractive index detection (Berthod et al. 1998), evaporative light scattering (Caron et al. 1997, Agüeros et al. 2005), indirect photometric detection (Takeuchi et al. 1990), postcolumn complexation with phenolphthalein (Frijlink et al. 1987, Bassappa et al. 1998), polarimetric detection (Goodall 1993), or pulsed amperometric detection (Kubota et al. 1992).López et al. (2009) described the application of LC and refractive index detection to estimate the amount of residual β-CD (>20 mg per 100 g of product) present in milk, cream, and butter after treat-ment with β-CD. 
The analyses were performed with a C18 reversed-phase silica-based LC column, α-CD was defined as an internal standard. The repeatability of the analytical method for β-CD was tested on commercial milk, cream, and butter spiked with known amounts of β-CD.The detection limit in milk was determined to be >0.03 mg mL−1 of β-CD which is similar to that found by LC using amperometric detection (Kubota et al. 1992) and its reproducibility was comparable to that found in a colorimetric method for the estimation of β-CD using phenolphthalein (Basappa et al. 1998, Frijlink et al. 1987).LC-MS coupling has led to the development of new interfaces, extending the automation of various procedures and increasing the sensitivity for high-polar and high-molecular mass compounds. New ion-ization techniques such as electron spray (ESI) and matrix-assisted laser desorption ionization (MALDI) (Bartsch et al. 1996, Sporn and Wang 1998) on quadrupole, magnetic sector, or time-of-flight (TOF) instruments or coupled with instruments with tandem MS (MS-MS) capabilities have also been funda-mental in food applications. By coupling HPLC to isotope-ratio, MS has been proven valuable in provid-ing precise isotopic measurements for nonvolatile species such as carbohydrates. For these reasons, the number of reported applications of LC-MS in the analysis of CD in food is rapidly increasing.HPLC/MS analyses for the detection of minute amounts of CDs in enzyme and heat-treated, s tarch-containing food products were proposed by Szente et al. (2006). A suitable sensitive and selective600Handbook of Analysis of Active Compounds in Functional Foods analytical method was studied with the aim of verifying the presence of parent β- and γ-CDs and all the three, α-, β-, and γ-branched CDs with different degrees of glycosylation in appropriately preconcen-trated and purified food samples (beer samples, corn syrups, and bread). Both the HPLC-retention times and mass-spectral data were used for the identification of CDs. As the expected concentrations of CDs were very low, selected ion monitoring (SIM) was preferred to the routinely used refractive index and evaporative light scattering detection techniques as the only reliable detection method. The malto-oli-gomer mixture was analyzed with a detection window opened at the masses of CD sodium salts in order to enable the detection of any malto-oligomer side products.Wang et al. (1999) proposed the efficient qualitative and quantitative analysis of food oligosaccharides by MALDI-TOF-MS. In order to optimize the method, matrices, alkali–metal adducts, response inten-sity, and sample preparation were all examined individually. A series of experiments were carried out by the authors to study analyte incorporation in the matrix. In a first phase of experiments, maltohexanose and γ-CD were used as reference samples to verify the suitability of 2,5-dihydroxybenzoic acid (DHB), 3-aminoquinoline (3-AQ), 4-hydroxy-a-cyanocinnamic acid (HCCA), and 2,5-dihydroxybenzoic acid (DHB), 1-hydroxy-isoquinoline (HIC), (1:1) as the matrix material. Spot-to-spot or sample-to-sample repeatability tests and the ability to achieve a good quality spectrum with a reasonable signal-to-noise ratio and the best resolution were compared. Good quality spectra and acceptable repeatability were achieved with DHB but many interfering matrix peaks were observed in the low mass region. The best results were achieved using a 2,4,6-trihydroxy-acetophenone monohydrate (THAP) matrix. 
The authors exploited the high solubility of THAP in acetone, its fast evaporation to fine crystals, and the homo-geneous incorporation of the sample to avoid low-quality results which may be due to irregular crystal-lization when the substance is used directly in water.26.4.2.3 A ffinity Capillary ElectrophoresisAffinity capillary electrophoresis (ACE) techniques have been introduced more recently and are currently in rapid development. CDs have played a central role in the development of a wide variety of analytical methods based on ACE in the separation of chiral molecules. ACE also provides a powerful analytical tool for the analysis of CDs and their derivatives.The electrophoretic separation and analysis of α-, β-, and γ-CDs have been carried out recently without modification. CDs that are charged at very high pH can be separated by the formation of inclu-sion complexes. Their complexes, with a large range of aromatic ions, facilitate detection by indirect UV absorbance (Larsen and Zimmermann 1998, 1999). In addition, fluorescent molecules such as 2-anilinonaphthalene-6-sulfonic have been used for the separation and detection of CDs in a ACE system (Penn et al. 1994).Furthermore, the indirect electrophoretic determination of CD content has recently been described using periodate oxidation. The amount of produced iodate was monitored by ACE and reproducible quantitative results were obtained for α-, β-, and γ-CDs (Pumera et al. 2000). Nevertheless, ACE has not been yet exploited for the analysis of CDs in food. The major advantages of ACE compared to other analysis methods are their short analysis times and high versatility. An exhaustive review of this topic was published in 1999 (Larsen and Zimmermann 1998, 1999).26.5 C onclusionThe use of native CDs for human consumption is growing dramatically due to their well-established safety. CDs are effective in protecting lipophilic food components from degradation during cooking and storage. In this context, several methodologies have been developed to detect, identify, and quantify CDs in food extracts and to study molecular inclusion complexes. X-ray and NMR spectroscopy afford valuable and detailed insight into the structure and the dynamics of a wide range of complexes which are not amenable to study by other analytical techniques. HPLC coupled with refractive index and evaporative light scattering detection technique is routinely used in CD food analysis and LC-MS data in this respect are particularly useful in detecting minute amounts of CDs in complex food samples.。
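Several studies cited in this chapter (e.g., Caccia et al. 1998; LeBlanc et al. 2008) obtain a 1:1 association constant by nonlinear fitting of a titration response. The following sketch shows what such a fit looks like in practice; it is not the cited authors' procedure, and the fast-exchange 1:1 binding model, the function names, and the titration data are all assumptions made for illustration:

```python
# Fit a 1:1 host-guest association constant K from an NMR titration,
# assuming fast exchange so the observed shift change follows
#   d_obs = d_max * K*[G] / (1 + K*[G])
# with the guest in large excess (free [G] ~ total [G]).
import numpy as np
from scipy.optimize import curve_fit

def shift_1to1(guest_conc, k_assoc, d_max):
    return d_max * k_assoc * guest_conc / (1.0 + k_assoc * guest_conc)

# Hypothetical titration data: guest concentration (mol/L) vs. shift change (ppm)
g = np.array([0.5e-3, 1e-3, 2e-3, 4e-3, 8e-3, 16e-3])
d = np.array([0.021, 0.039, 0.068, 0.105, 0.145, 0.175])

(k_fit, dmax_fit), _ = curve_fit(shift_1to1, g, d, p0=[100.0, 0.2])
print(f"K = {k_fit:.0f} L/mol, limiting shift change = {dmax_fit:.3f} ppm")
```

The same saturation-curve fit applies to fluorescence or UV titrations; only the measured response changes.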

Quasi-Monte Carlo Methods in Asset Pricing and Data Science

Disadvantages

Quasi-Monte Carlo methods require the target probability distribution to be specified, which can be a challenge, especially when historical data or expert opinion is lacking. In addition, for large-scale, high-dimensional stochastic processes, generating sample points that conform to the distribution may require substantial computational resources and time.
05
Applications of Data Science in Asset Pricing

Big Data Processing Techniques

Data Cleaning

Remove duplicate, erroneous, or incomplete records to ensure data quality. For data storage, distributed storage systems are adopted so that large-scale data can be stored efficiently.
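As a concrete illustration of the cleaning step described on this slide (the column names and data are made up, and pandas is one common choice rather than anything prescribed by the deck):

```python
# Basic data-cleaning pass: drop duplicated and incomplete rows.
import pandas as pd

raw = pd.DataFrame({
    "ticker": ["AAA", "AAA", "BBB", "CCC"],
    "price":  [10.0, 10.0, None, 7.5],
})
clean = (raw.drop_duplicates()          # remove repeated records
            .dropna(subset=["price"]))  # remove incomplete records
print(clean)
```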
The Stochastic Discount Factor (SDF) Model

Summary

The stochastic discount factor model is a dynamic model for valuing assets. It is based on discounted cash flows and takes into account the effects of uncertainty and risk factors on asset prices.

Details

The SDF model assumes that an asset's intrinsic value is determined by a series of future cash flows; the discounted value of these cash flows is the asset's current price. The model also accounts for the influence of risk factors on the discount rate, so that assets with different risk levels carry different discount rates. The SDF model provides a dynamic framework for valuing assets and can be used to forecast movements in asset prices.
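The pricing rule behind the model is p = E[m·x], where m is the stochastic discount factor and x the payoff. A two-state toy computation (all numbers are hypothetical and chosen only to illustrate the formula):

```python
# SDF pricing: price = E[m * x] over discrete states.
import numpy as np

probs  = np.array([0.5, 0.5])     # state probabilities (hypothetical)
m      = np.array([1.05, 0.85])   # SDF realization: high in the "bad" state
payoff = np.array([90.0, 110.0])  # risky payoff per state

price = float(np.sum(probs * m * payoff))     # p = E[m x]
rf    = 1.0 / float(np.sum(probs * m)) - 1.0  # risk-free rate from E[m] = 1/(1+rf)
print(f"price = {price:.2f}, implied risk-free rate = {rf:.2%}")
```

Because the payoff is high exactly when m is low, the computed price (94.0) sits below the riskless discounting of the expected payoff, which is the risk correction the slide alludes to.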
Quasi-Monte Carlo Methods in Asset Pricing and Data Science

Presenter: 2023-12-24

Contents

• Introduction • Asset pricing theory • Monte Carlo simulation methods • Quasi-Monte Carlo methods • Data science in asset pricing • Case studies
01
Introduction

Research Background and Significance

Background

With the development of financial markets and advances in data science, asset pricing and risk management have become core problems in finance. The Monte Carlo method, an important statistical simulation technique, has been widely applied in asset pricing and risk management. However, traditional Monte Carlo methods have limitations, such as the large number of samples required and the high computational cost. To address these problems, quasi-Monte Carlo methods have been proposed and applied in asset pricing and data science.
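The contrast between the two approaches can be made concrete on a standard test problem: estimating a European call price by simulation, once with pseudo-random draws and once with a scrambled Sobol sequence. A sketch using NumPy and SciPy; all market parameters below are arbitrary choices, not values from the slides:

```python
# Plain Monte Carlo vs. quasi-Monte Carlo (Sobol) for a European call:
#   price = E[ exp(-r T) * max(S_T - K, 0) ],
#   S_T   = S0 * exp((r - 0.5 sigma^2) T + sigma sqrt(T) Z).
import numpy as np
from scipy.stats import norm, qmc

s0, k, r, sigma, t = 100.0, 105.0, 0.03, 0.2, 1.0
n_pow = 14  # 2**14 paths

def discounted_payoff_mean(z):
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    return np.exp(-r * t) * np.maximum(st - k, 0.0).mean()

rng = np.random.default_rng(0)
mc_price = discounted_payoff_mean(rng.standard_normal(2**n_pow))

sobol = qmc.Sobol(d=1, scramble=True, seed=0)
u = sobol.random_base2(m=n_pow).ravel()          # 2**14 low-discrepancy points
qmc_price = discounted_payoff_mean(norm.ppf(u))  # map uniforms to normals

print(f"MC: {mc_price:.4f}   QMC: {qmc_price:.4f}")
```

The low-discrepancy sequence typically reduces the integration error for a fixed number of paths, which is the practical advantage the deck is describing.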
Application Cases of Data Science in Asset Pricing

Big Data and Machine Learning

Using big data and machine learning techniques, large volumes of historical data can be analyzed and used for prediction, providing more accurate and comprehensive information for asset pricing.
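A minimal version of this idea is a cross-sectional least-squares fit from firm characteristics to returns. The data below are synthetic and purely illustrative; real applications would use many more characteristics and out-of-sample validation:

```python
# Predict returns from characteristics with ordinary least squares.
import numpy as np

rng = np.random.default_rng(1)
n_assets = 500
x = rng.standard_normal((n_assets, 3))          # e.g. size, value, momentum
true_beta = np.array([0.002, -0.004, 0.006])    # premia used to simulate data
ret = x @ true_beta + 0.01 * rng.standard_normal(n_assets)

beta, *_ = np.linalg.lstsq(x, ret, rcond=None)  # fitted characteristic premia
print("estimated characteristic premia:", np.round(beta, 4))
```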

Adjacent Chromatographic Peaks in Liquid Chromatography


Answer: In liquid chromatography (LC), adjacent chromatographic peaks can be separated or merged depending on several factors, including the selectivity of the stationary phase, the composition of the mobile phase, and the flow rate.

Selectivity refers to the ability of the stationary phase to differentiate between analytes. Higher selectivity results in better separation of adjacent peaks. Selectivity can be affected by the chemical nature of the stationary phase, the pH of the mobile phase, and the temperature.

The mobile phase composition also plays a role in peak separation. It can be varied to change the elution order of the analytes and to improve the separation of adjacent peaks. The most common mobile phases used in LC are aqueous–organic mixtures; the organic solvent can be changed to vary the polarity of the mobile phase and to improve the separation of analytes with different polarities.

The flow rate can also affect peak separation. A higher flow rate elutes the analytes faster but can decrease the separation of adjacent peaks; a lower flow rate elutes them more slowly but can improve the separation.

In addition to these factors, the column temperature affects peak separation in a similar way: a higher temperature speeds up elution but can decrease the separation of adjacent peaks, while a lower temperature slows elution but can improve it.
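The degree of separation the answer describes is usually quantified by the resolution Rs = 2(t2 − t1)/(w1 + w2), where t1 and t2 are the retention times and w1 and w2 the baseline peak widths; Rs of about 1.5 or more indicates baseline separation. A small helper with invented numbers:

```python
# Chromatographic resolution of two adjacent peaks:
#   Rs = 2 * (t2 - t1) / (w1 + w2)
# t1, t2: retention times; w1, w2: baseline peak widths (same units).
def resolution(t1, w1, t2, w2):
    return 2.0 * (t2 - t1) / (w1 + w2)

rs = resolution(t1=4.8, w1=0.35, t2=5.3, w2=0.40)  # minutes, hypothetical
print(f"Rs = {rs:.2f} ({'baseline' if rs >= 1.5 else 'partial'} separation)")
```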

Zernike Polynomials


Zernike polynomials are a class of orthogonal polynomials defined in polar coordinates. They are expressed by formulas with real coefficients and can be used to describe phase aberrations in optical systems as well as for image analysis and image reconstruction. They are named after the Dutch physicist Frits Zernike.

Zernike first proposed these polynomials in 1934. While studying the propagation of light waves in optical devices, he discovered a special class of orthogonal functions that can describe the complex amplitude distribution of a light wave. Zernike named these functions the Zernike polynomials and found that they possess many important properties.

A wavefront is usually represented by its Zernike coefficients, where the first term describes the amplitude and the subsequent terms describe the phase aberrations of the light wave. The polynomials exist in an orthonormal form and have special symmetry properties in polar coordinates. They are also complete: any function on the unit disk can be approximated by a linear combination of them.

Zernike polynomials are widely applied in optical systems, especially in optical coherence tomography (OCT) and adaptive optics (AO). Optical coherence tomography is a non-invasive medical imaging technique that can observe the internal structure of human tissue at high resolution. Adaptive optics corrects phase aberrations in optical systems in order to improve imaging quality.

Beyond optics, Zernike polynomials are also applied in image analysis and image reconstruction. By computing the Zernike coefficients of an image, information about its shape and features can be obtained; this information can be used for tasks such as image classification, object recognition, and object tracking. In addition, Zernike polynomials can be used for image deblurring and reconstruction to improve image clarity and quality.

In summary, Zernike polynomials are a class of orthogonal polynomials defined in polar coordinates with many important properties and applications. They are widely used in optical systems to describe the phase aberrations and complex amplitude distribution of light waves, and they play an important role in image analysis and reconstruction. As science and technology develop, the applications of Zernike polynomials will continue to expand into more fields.
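The radial building block R_n^m(ρ) of these polynomials has a simple closed form (the full polynomial follows by multiplying with cos(mθ) or sin(mθ)). The sketch below implements the standard radial formula; the function name and test values are ours:

```python
# Radial Zernike polynomial R_n^m(rho) for n >= m >= 0 with n - m even:
#   R_n^m(rho) = sum_{k=0}^{(n-m)/2} (-1)^k (n-k)!
#                / (k! ((n+m)/2 - k)! ((n-m)/2 - k)!) * rho^(n-2k)
from math import factorial

def zernike_radial(n, m, rho):
    if (n - m) % 2:  # R_n^m vanishes identically when n - m is odd
        return 0.0
    total = 0.0
    for k in range((n - m) // 2 + 1):
        coeff = ((-1) ** k * factorial(n - k)
                 / (factorial(k)
                    * factorial((n + m) // 2 - k)
                    * factorial((n - m) // 2 - k)))
        total += coeff * rho ** (n - 2 * k)
    return total

print(zernike_radial(2, 0, 0.5))  # defocus term: 2*rho^2 - 1 = -0.5
```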

Extracting Image Feature Points Using Scale Interaction of Mexican-Hat Wavelets

DING Nan-nan, LIU Yan-ying, ZHU Ming
(1. Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China; 2. Graduate University of Chinese Academy of Sciences, Beijing 100039, China)

Abstract: ... the method can still extract feature points whose relative positions and numbers remain fairly consistent for distorted images that have undergone rotation, brightness change, blurring, or noise. To address the problem that the feature points extracted by scale-interaction Mexican-hat wavelets from images at different scales have inconsistent relative positions, a method of introducing a scale factor into the Mexican-hat wavelet is proposed, and simulation experiments verify the correctness of the algorithm.

Article ID: 1007-2780(2012)01-0125-05
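The generic scale-interaction mechanism behind such detectors is to take Mexican-hat (inverted Laplacian-of-Gaussian) responses at two scales and keep the local maxima of their difference. The code below illustrates only that generic scheme; it does not implement the paper's scale-factor correction, and the scales, neighborhood size, threshold, and test image are arbitrary choices:

```python
# Generic scale-interaction feature detection:
#   F(x, y) = | R_sigma1(x, y) - gamma * R_sigma2(x, y) |
# where R_sigma is a Mexican-hat-type response (negated LoG filter)
# and feature points are local maxima of F above a threshold.
import numpy as np
from scipy import ndimage

def feature_points(image, sigma1=2.0, sigma2=4.0, gamma=1.0, thresh=0.05):
    r1 = -ndimage.gaussian_laplace(image, sigma=sigma1)
    r2 = -ndimage.gaussian_laplace(image, sigma=sigma2)
    f = np.abs(r1 - gamma * r2)
    # keep pixels that are the maximum of their 5x5 neighborhood
    local_max = (f == ndimage.maximum_filter(f, size=5))
    return np.argwhere(local_max & (f > thresh * f.max()))

# toy image: a few bright blobs on a dark background
img = np.zeros((64, 64))
img[16, 16] = img[40, 48] = img[50, 12] = 1.0
img = ndimage.gaussian_filter(img, sigma=2.0)
print(feature_points(img))
```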

Molecular Recognition in Aqueous Media with Molecular Imprinting Technique


Received: July 2006; revised manuscript received: August 2006. * Corresponding author, e-mail: chemxuzl@

Wang Xuejun, Xu Zhenliang*, Yang Zuoguo, Bing Naici
(Membrane Science and Engineering R&D Center, Chemical Engineering Research Institute, East China University of Science and Technology, Shanghai 200237, China)

Abstract: Among the variety of biomimetic recognition systems based on supramolecular approaches, molecularly imprinted polymers (MIPs) have proved their potential as synthetic receptors and have received more and more attention. Conventional molecular imprinting technology allows the synthesis, in organic solvents, of molecularly imprinted polymers selective toward relatively low-molecular-weight compounds. However, the synthesis in aqueous media of chemically and mechanically stable MIPs that can recognize biomolecules is still a great challenge. From small molecules to biomacromolecules and from organic phase to aqueous media: this reflects the development trend of molecular imprinting technology. This paper reviews and discusses the recent progress in the preparation and recognition of molecularly imprinted polymers in the aqueous phase, and examines the design strategies and preparation methods of aqueous-recognition MIPs. Emphasis is placed on the applications of aqueous recognition in solid-phase extraction, chromatographic stationary phases, drug delivery and controlled release, separation of active ingredients from traditional Chinese medicines, and recognition of biomolecules. Ways to improve the selectivity of MIPs in aqueous recognition are presented, and the remaining challenges, together with suggestions and an outlook for future development, are outlined.

Key words: molecular imprinting; recognition in aqueous media; molecularly imprinted polymers; molecularly imprinted membrane

CLC number: O658.12   Document code: A   Article ID: 1005-281X(2007)05-0805-08

1 Introduction

Molecular imprinting is a new functional-material preparation technique that organically combines materials science, biochemistry, and chemical engineering to obtain polymers (molecularly imprinted polymers, MIPs) whose spatial structure and binding sites perfectly match the template molecule.

Principle of the Bonferroni Test


On the question of whether parameter estimation is needed, two opposing views have long existed.

One view holds that when the data distribution carries no statistical meaning, parameter estimation is meaningless; for example, that when the observed data follow a normal distribution, performing parameter estimation is pointless. This view is also known as parameter-free (nonparametric) estimation.

Let us first look at this view. It argues that the data should be estimated nonparametrically because, when the data do carry some statistical meaning, that meaning would no longer exist if parameter estimation were performed. For example, in the case of the normal distribution, if the data follow the normal distribution, it is a constant. Under this view, if we apply nonparametric estimation to the observed data (assuming all the data are parameter-free), that statistical meaning disappears. Although this method can also produce parameter estimates (assuming all the data are parametric), a further statistical test is needed to confirm whether that statistical meaning actually exists. It is therefore necessary to subject nonparametric estimation methods to a test.
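The Bonferroni test named in the section title is a multiple-testing rule: when m hypotheses are tested at a family-wise error level α, each individual p-value is compared against the stricter threshold α/m. A minimal sketch; the p-values below are made up for illustration:

```python
# Bonferroni correction: control the family-wise error rate at alpha
# by testing each of the m hypotheses at level alpha / m.
alpha = 0.05
p_values = [0.001, 0.008, 0.020, 0.041, 0.300]  # hypothetical raw p-values

m = len(p_values)
threshold = alpha / m
for i, p in enumerate(p_values, start=1):
    verdict = "reject H0" if p < threshold else "fail to reject H0"
    adjusted = min(1.0, p * m)  # equivalent Bonferroni-adjusted p-value
    print(f"test {i}: p = {p:.3f}, adjusted p = {adjusted:.3f} -> {verdict}")
```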


Fuzzy Domains Based on a Preidempotent Commutative Unital Quantale


Yao Wei, Li Yan (School of Science, Hebei University of Science and Technology, Shijiazhuang 050018, Hebei, China)

Abstract: Based on a preidempotent commutative unital quantale, quantitative domain theory is studied using the approach of fuzzy sets. A fuzzy Scott topology is defined on fuzzy DCPOs, a Scott convergence theory for stratified L-filters is established, and the Cartesian-closedness of the category of fuzzy DCPOs is proved. Although the obtained results are similar to those in the existing literature, the proofs are very different. These results also indicate that a preidempotent commutative unital quantale is the most general lattice for studying quantitative domain theory by the fuzzy-set approach.

Journal: Journal of Hebei University of Science and Technology, 2013, 34(2): 119-124.

Key words: commutative unital quantale; preidempotency; fuzzy DCPO; fuzzy Scott open set; Cartesian closed; Scott convergence

CLC numbers: O159; O153.1
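As background for the abstract above, here is the standard definition of a commutative unital quantale in our notation; the additional preidempotency axiom used by the authors is their own condition and is not reproduced here:

```latex
% A commutative unital quantale is a complete lattice (Q, \leq)
% equipped with an associative, commutative operation * that has
% a unit e and distributes over arbitrary joins:
\[
a * \Big(\bigvee_{i \in I} b_i\Big) \;=\; \bigvee_{i \in I}\,(a * b_i),
\qquad a * e = a, \qquad a * b = b * a ,
\]
% for all a, b, b_i in Q. Such a Q serves as the lattice of
% truth values for the fuzzy orders in the paper.
```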

A Multiprocessor Jacobi Algorithm for Dense Symmetric Eigenvalue Problems and Singular Value Decomposition


Michael Berry; Ahmed Sameh; Wang Hongbin

Journal: Computer Engineering & Science
Year (volume), issue: 1990(000)001

Abstract: This paper presents two parallel algorithms based on the Jacobi method for computing all the eigenvalues of dense real symmetric matrices and for computing the singular value decomposition of rectangular matrices on multiprocessors. Our aim is to study and demonstrate the distinct advantages of Jacobi and Jacobi-like algorithms in the Alliant FX/8 multiprocessor environment compared with the then state-of-the-art EISPACK and LINPACK routines. For the dense eigenvalue problem we report quite favorable results for matrices of small order, and we show that a one-sided Jacobi-like algorithm also delivers good performance for the singular value decomposition of rectangular matrices (with more rows than columns).

Pages: 11 (pp. 82-92)
Authors: Michael Berry; Ahmed Sameh; Wang Hongbin
Affiliation: (not given)
Full-text language: Chinese
CLC number: TP3
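Both parallel variants build on the classical Jacobi rotation, which repeatedly zeroes the largest off-diagonal entry of a symmetric matrix. The following minimal single-threaded sketch (ours, not the paper's Alliant FX/8 implementation) shows that kernel:

```python
# Classical Jacobi eigenvalue iteration for a real symmetric matrix:
# repeatedly apply plane rotations that annihilate the largest
# off-diagonal element until the matrix is (numerically) diagonal.
import numpy as np

def jacobi_eigenvalues(a, tol=1e-12, max_rotations=10000):
    a = a.astype(float).copy()
    n = a.shape[0]
    for _ in range(max_rotations):
        off = np.abs(a - np.diag(np.diag(a)))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:
            break
        # rotation angle chosen so that the rotated a[p, q] becomes zero
        theta = 0.5 * np.arctan2(2.0 * a[p, q], a[q, q] - a[p, p])
        c, s = np.cos(theta), np.sin(theta)
        j = np.eye(n)
        j[p, p] = j[q, q] = c
        j[p, q], j[q, p] = s, -s
        a = j.T @ a @ j
    return np.sort(np.diag(a))

rng = np.random.default_rng(0)
m = rng.standard_normal((5, 5))
sym = (m + m.T) / 2
print(jacobi_eigenvalues(sym))
print(np.sort(np.linalg.eigvalsh(sym)))  # cross-check against LAPACK
```

The parallel variants in the paper exploit the fact that disjoint index pairs (p, q) can be rotated simultaneously, which is what makes the method attractive on multiprocessors.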

LPTHE-01-61
arXiv:math-ph/0111038v1 19 Nov 2001

On quantization of affine Jacobi varieties of spectral curves.

F.A. Smirnov [0], V. Zeitlin

Laboratoire de Physique Théorique et Hautes Energies [1], Université Pierre et Marie Curie, Tour 16, 1er étage, 4 place Jussieu, 75252 Paris Cedex 05, France

[0] Membre du CNRS
[1] Laboratoire associé au CNRS.

Abstract. A quantum integrable model related to $U_q(sl(N))$ is considered. A reduced model is introduced which allows interpretation in terms of a quantized affine Jacobi variety. Closed commutation relations for observables of the reduced model are found.

1 Classical case.

Consider a classical integrable model with the $l$-operator which is an $N \times N$ matrix depending on the spectral parameter $z$:
$$l(z) = l_+(z) + l_0(z) + z\,l_-(z), \tag{1}$$
$l_\pm(z)$ are polynomials of degree $n-1$, $l_0(z)$ is a polynomial of degree $n$; $l_+(z)$ ($l_-(z)$) is upper (lower) triangular, $l_0(z)$ is diagonal. The classical algebra of observables $\mathcal{A}$ is generated by the coefficients of the polynomials giving the matrix elements of $l(z)$. The algebra $\mathcal{A}$ is a Poisson algebra, the Poisson structure being given by the $r$-matrix relations:
$$\{l(z) \overset{\otimes}{,}\; l(z')\} = [\,r(z,z'),\; l(z) \otimes l(z')\,]$$
where the classical $r$-matrix is
$$r(z,z') = \frac{z+z'}{2(z-z')} \sum_{i=1}^{N} E_{ii} \otimes E_{ii} \;+\; \frac{z}{z-z'} \sum_{j>i} E_{ji} \otimes E_{ij} \;+\; \frac{z'}{z-z'} \sum_{j<i} E_{ji} \otimes E_{ij}$$
with $E_{ij}$ the elementary matrix whose only nonzero entry is a $1$ in row $i$, column $j$.

Let us introduce the polynomials $t_k(z)$:
$$\det(wI + l(z)) = \sum_{k=0}^{N} w^{N-k}\, t_k(z)$$
and their coefficients $t_k^{(j)}$ defined by
$$t_k(z) = \sum_{j=0}^{kn} t_k^{(j)}\, z^{kn-j}$$
The form of the Poisson brackets implies that all coefficients of the characteristic polynomial are in involution:
$$\{\det(wI + l(z)),\; \det(w'I + l(z'))\} = 0$$
Moreover, part of them belongs to the center of the Poisson brackets: these are the $t_N^{(j)}$, because of $\{\det(l(z)) \overset{\otimes}{,}\; l(z')\} = 0$, and a part of the remaining $t_k^{(j)}$. Let $\mathcal{M}$ denote the phase space obtained by fixing the central elements.

The model is related to the algebraic geometry of its spectral curve $X$, defined by $\det(wI + l(z)) = 0$. The relation is as follows. Consider the Jacobi variety of the curve $X$ of genus
$$g = \tfrac{1}{2}(N-1)(Nn-2),$$
i.e. the complex torus
$$J = \mathbb{C}^g / (\mathbb{Z}^g + B\,\mathbb{Z}^g)$$
where $B$ is the period matrix. Introduce the corresponding Riemann Theta-function, which satisfies:
$$\theta(\zeta + m + Bn) = \exp\!\Big(2\pi i\big(-\tfrac{1}{2}\,{}^t n B n - {}^t n\, \zeta\big)\Big)\, \theta(\zeta), \qquad \forall\, m, n \in \mathbb{Z}^g$$

The relation between integrable models and algebraic geometry is normally based on the following “identity”:
$$\text{Liouville torus} = \text{Real part of Jacobi variety.}$$
However, this relation only holds in the usual cases because the levels of the “non-compact” integrals are usually fixed from the very beginning in the particular integrable model under consideration, so these integrals rarely, if ever, appear explicitly in the discussion (the problem of having “extra” degrees of freedom resulting from imposing some additional implicit constraints is well known, though). But in our case this “identity” cannot be correct, as can easily be seen from the comparison of dimensions:
$$\dim(\mathcal{M}) = 2g + 2(N-1)$$
In fact, the real part of the Jacobi variety describes only the compact part of the level of integrals and, clearly, to use the usual algebra-geometric formulation we need to eliminate $N-1$ non-compact integrals. Let us discuss this point in some more detail.

The integrable model under consideration allows a complexification. Complexification of the compact part of the level of integrals should give the Jacobi variety. More precisely, the complexification gives the Jacobi variety from which the following divisor is cut off:
$$D = \{\zeta \in J \mid \theta(\zeta + \rho_1) \cdots \theta(\zeta + \rho_N) = 0\}$$
where $\rho_1, \dots, \rho_N$ are the images under the Abel map of the $N$ points of $X$ which project onto the point $\infty$ on the $z$-plane. In other words, the observables considered as functions on the Jacobi variety possess singularities (only) on $D$. Generally we have:
$$\text{Functions on the Level of Integrals of Motion} = (\text{Functions on } J_{\mathrm{aff}}) \times (\text{Sections})$$
where $J_{\mathrm{aff}} \equiv J - D$ stands for the affine Jacobi variety. The sections correspond exactly to the non-compact directions of the level of integrals. In algebra-geometric language they are given by expressions of the form
$$\frac{\theta(\zeta + \rho_i)}{\theta(\zeta + \rho_j)}$$
Our goal is to reduce the model to the sub-manifold of the phase space which does not contain the non-compact directions. There is a general Dirac procedure to do that: we have to fix $N-1$ first-kind constraints (the integrals of motion $t_k^{(0)}$), the non-compact coordinates being the “auxiliary” relations. However, in the case under consideration there is a very direct way of describing the reduced model which allows a quantum analogue. To be precise, there is an $N \times N$ matrix $s$ whose matrix elements are dynamical variables such that the similarity transformation of $l(z)$:
$$m(z) = s\, l(z)\, s^{-1}$$
is of the form:
$$m(z) = \begin{pmatrix} a(z) & b(z) \\ c(z) & d(z) \end{pmatrix}, \tag{2}$$
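As an illustration of the dimension count (a worked example added here, not part of the original paper), for $N = 2$ the formulas above specialize to the hyperelliptic situation:

```latex
\[
g = \tfrac{1}{2}(N-1)(Nn-2)\big|_{N=2} = n-1,
\qquad
\dim(\mathcal{M}) = 2g + 2(N-1) = 2g + 2,
\]
% so exactly N - 1 = 1 non-compact integral (with its conjugate
% direction) must be eliminated before the level of integrals can
% match the 2g real dimensions of the Jacobian of the genus-(n-1)
% spectral curve.
```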