From Double Chooz to Triple Chooz - Neutrino Physics at the Chooz Reactor Complex
Closed-form solution of absolute orientation using unit quaternions
1. INTRODUCTION

Suppose that we are given the coordinates of a number of points as measured in two different Cartesian coordinate systems (Fig. 1). The photogrammetric problem of recovering the transformation between the two systems from these measurements is referred to as that of absolute orientation. It occurs in several contexts, foremost in relating a stereo model developed from pairs of aerial photographs to a geodetic coordinate system. It is also of importance in robotics, in which measurements in a camera coordinate system must be related to coordinates in a system attached to a mechanical manipulator. Here one speaks of the determination of the hand-eye transform [2].

A. Previous Work

The problem of absolute orientation is usually treated in an empirical, graphical, or numerical iterative fashion [3,4]. Thompson gives a solution to this problem when three points are measured. His method, as well as the simpler one of Schut [6], depends on selective neglect of the extra constraints available when all coordinates of three points are known. Schut uses unit quaternions and arrives at a set of linear equations. I present a simpler solution to this special case in Subsection 2.A that does not require solution of a system of linear equations. These methods all suffer from the defect that they cannot handle more than three points. Perhaps more importantly, they do not even use all the information available from the three points. Oswal and Balasubramanian [7] developed a least-squares method that can handle more than three points, but their method does not enforce the orthonormality of the rotation matrix. An iterative method is then used to square up the result, bringing it closer to being orthonormal. The method for doing this is iterative, and the result is not the solution of the original least-squares problem.
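The closed-form solution summarized above admits a compact numerical implementation. The sketch below is not taken from the paper; it is a minimal illustration of the standard unit-quaternion recipe (centroid removal, cross-covariance, principal eigenvector of the 4x4 matrix N), and the function name, the NumPy-based eigensolver, and the noise-free self-check at the end are our own choices.

```python
import numpy as np

def absolute_orientation(P, Q):
    """Estimate rotation R and translation t with Q ~= R @ P + t,
    following the closed-form unit-quaternion approach.
    P, Q: (3, n) arrays of corresponding points."""
    # Centroids and centered coordinates
    p_bar, q_bar = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    Pc, Qc = P - p_bar, Q - q_bar
    # Cross-covariance matrix M, with entries S_ab = sum_i p_a,i * q_b,i
    M = Pc @ Qc.T
    Sxx, Sxy, Sxz = M[0]
    Syx, Syy, Syz = M[1]
    Szx, Szy, Szz = M[2]
    # Symmetric 4x4 matrix N whose principal eigenvector is the optimal quaternion
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx       ],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz       ],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy       ],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz ],
    ])
    w, V = np.linalg.eigh(N)          # eigenvalues in ascending order
    q0, qx, qy, qz = V[:, -1]          # eigenvector of the largest eigenvalue
    # Convert the unit quaternion (q0, qx, qy, qz) to a rotation matrix
    R = np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - q0*qz),     2*(qx*qz + q0*qy)],
        [2*(qx*qy + q0*qz),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - q0*qx)],
        [2*(qx*qz - q0*qy),     2*(qy*qz + q0*qx),     1 - 2*(qx*qx + qy*qy)],
    ])
    t = q_bar - R @ p_bar
    return R, t

# Quick self-check with a known rotation about the z-axis and a translation
theta = np.deg2rad(30)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
P = np.random.rand(3, 10)
R_est, t_est = absolute_orientation(P, Rz @ P + np.array([[1.0], [2.0], [3.0]]))
```

For well-spread, non-degenerate point sets the recovered R and t reproduce the simulated transformation up to numerical precision.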
Polynomial Averages Converge to the Product of Integrals
arXiv:math/0403454v1 [math.DS] 26 Mar 2004

POLYNOMIAL AVERAGES CONVERGE TO THE PRODUCT OF INTEGRALS

NIKOS FRANTZIKINAKIS AND BRYNA KRA

Abstract. We answer a question posed by Vitaly Bergelson, showing that in a totally ergodic system, the average of a product of functions evaluated along polynomial times, with polynomials of pairwise differing degrees, converges in L2 to the product of the integrals. Such averages are characterized by nilsystems and so we reduce the problem to one of uniform distribution of polynomial sequences on nilmanifolds.

1. Introduction

1.1. Bergelson's Question. In [B96], Bergelson asked if the average of a product of functions in a totally ergodic system (meaning that each power of the transformation is ergodic) evaluated along polynomial times converges in L2 to the product of the integrals. More precisely, if (X, X, mu, T) is a totally ergodic probability measure preserving system, p1, p2, ..., pk are polynomials taking integer values on the integers with pairwise distinct non-zero degrees, and f1, f2, ..., fk are in L^infinity(mu), does

lim_{N->infinity} (1/N) sum_{n=1}^{N} T^{p1(n)} f1 * T^{p2(n)} f2 * ... * T^{pk(n)} fk = (int f1 dmu)(int f2 dmu)...(int fk dmu),

with the limit taken in L2(mu)? The answer is affirmative, and more generally it holds for any independent family of polynomials:

Theorem 1.1. Let (X, X, mu, T) be a totally ergodic measure preserving system and let {p1, p2, ..., pk} be an independent family of polynomials. Then for f1, f2, ..., fk in L^infinity(mu),

(1)  lim_{N->infinity} (1/N) sum_{n=1}^{N} T^{p1(n)} f1 * ... * T^{pk(n)} fk = prod_{j=1}^{k} int fj dmu,

with convergence in L2(mu).

A factor of the measure preserving system (X, X, mu, T) is a measure preserving system (Y, Y, nu, S) such that there exists a measure preserving map pi: X -> Y taking mu to nu and such that S o pi = pi o T. In a slight abuse of terminology, when the underlying measure space is implicit we call S a factor of T. In this terminology, the result of Host and Kra means that there exists a factor (Z, Z, m) of X, where Z denotes the Borel sigma-algebra of Z and m its Haar measure, so that the action of T on Z is an inverse limit of nilsystems and, furthermore, whenever E(fj | Z) = 0 for some j in {1, 2, ..., k}, the average in (1) is itself 0. Since an inverse limit of nilsystems can be approximated arbitrarily well by a nilsystem, it suffices to verify Theorem 1.1 for nilsystems. Moreover, since measurable functions can be approximated arbitrarily well in L2 by continuous functions, Theorem 1.1 is equivalent to the following generalization of Weyl's polynomial uniform distribution theorem (see Section 4 for the statement of Weyl's Theorem):

Theorem 1.2. Let X = G/Gamma be a nilmanifold, (G/Gamma, G/Gamma, mu, T_a) a nilsystem, and suppose that the nilrotation T_a is totally ergodic. If {p1(n), p2(n), ..., pk(n)} is an independent polynomial family, then for almost every x in X the sequence (a^{p1(n)} x, a^{p2(n)} x, ..., a^{pk(n)} x) is uniformly distributed in X^k.

If G is connected, Theorem 1.2 reduces to a uniform distribution problem that is easily verified using the standard uniform distribution theorem of Weyl. The general (not necessarily connected) case is more involved. Using a result of Leibman [L02], Section 2 reduces the problem to studying the action of a polynomial sequence on a factor space with abelian identity component. The key step (Section 3) is then to prove that nilrotations acting on such spaces are isomorphic to affine transformations on some finite dimensional torus. Section 4 completes the proof by checking the result for affine transformations.

2. Reduction to an abelian connected component

Suppose that G is a nilpotent Lie group and Gamma is a discrete, cocompact subgroup. Throughout, G0 denotes the connected component of the identity element, and e denotes the identity element. A sequence g(n) = a1^{p1(n)} a2^{p2(n)} ... ak^{pk(n)} with a1, a2, ..., ak in G and p1, p2, ..., pk integer polynomials is called a polynomial sequence in G. We are interested in studying uniform distribution properties of polynomial sequences on the nilmanifold X = G/Gamma. Leibman [L02] showed that the uniform distribution of a polynomial sequence in a connected nilmanifold reduces to uniform distribution in a certain factor:

Theorem (Leibman). Let X = G/Gamma be a connected nilmanifold and let g(n) = a1^{p1(n)} a2^{p2(n)} ... ak^{pk(n)} be a polynomial sequence in G. Let Z = X/[G0, G0] and let pi: X -> Z be the natural projection. If x is in X, then {g(n)x} is uniformly distributed in X if and only if {g(n)pi(x)} is uniformly distributed in Z.

We remark that if G is connected, then the factor X/[G0, G0] is an abelian group. However, this does not hold in general, as the following examples illustrate.

Example 1. On the space G = Z x R^2, define multiplication as follows: if g1 = (m1, x1, x2) and g2 = (n1, y1, y2), let g1 * g2 = (m1 + n1, x1 + y1, x2 + y2 + m1 y1). Then G is a 2-step nilpotent group and G0 = {0} x R^2 is abelian. The discrete subgroup Gamma = Z^3 is cocompact and X = G/Gamma is connected. Moreover, [G0, G0] = {e}, and so X/[G0, G0] = X.

Example 2. A similar construction on the space G = Z x R^3, extending the multiplication rule of Example 1 by a third coordinate, gives an analogous example.

3. Reduction to an affine transformation on a torus

We reduce the uniform distribution problem (Theorem 1.2) to studying an affine transformation on a torus. If G is a group, a map T: G -> G is said to be affine if T(g) = bA(g) for a homomorphism A of G and some b in G. The homomorphism A is said to be unipotent if there exists n in N such that (A - Id)^n = 0; in this case the affine transformation T is called a unipotent affine transformation.

Proposition 3.1. Let X = G/Gamma be a connected nilmanifold such that G0 is abelian. Then any nilrotation T_a(x) = ax defined on X with the Haar measure mu is isomorphic to a unipotent affine transformation on some finite dimensional torus.

Proof (sketch). For every g in G the subgroup g^{-1} G0 g is both open and closed in G, so g^{-1} G0 g = G0 and G0 is normal in G. Similarly, (G0 Gamma)/Gamma is open and closed in X; since X is connected, X = (G0 Gamma)/Gamma and G = G0 Gamma. One checks that Gamma0 = Gamma intersect G0 is normal in G, so G/Gamma0 may be substituted for G and Gamma/Gamma0 for Gamma; we can therefore assume G0 intersect Gamma = {e}. Then G0 is a connected compact abelian Lie group and hence isomorphic to some finite dimensional torus T^d. Every g in G is uniquely representable as g = g0 gamma with g0 in G0 and gamma in Gamma, and the map phi: X -> G0 given by phi(g Gamma) = g0 is a well defined homeomorphism carrying mu to the Haar measure of G0. It conjugates T_a (with a = a0 gamma) to the map T'_a(g0) = a0 gamma g0 gamma^{-1}, which is affine because G0 is abelian; its linear part g0 -> gamma g0 gamma^{-1} is unipotent since G is nilpotent. Composing with an isomorphism psi: G0 -> T^d shows that T_a is isomorphic to the unipotent affine transformation S = psi T'_a psi^{-1} acting on T^d.

Example 3. Let X be as in Example 1 and let a = (m1, a1, a2). Since G0/Gamma0 = T^2, the nilrotation T_a is isomorphic to the unipotent affine transformation S: T^2 -> T^2 given by S(x1, x2) = (x1 + a1, x2 + m1 x1 + a2).

Example 4. Let X be as in Example 2 and a = (m1, a1, a2, a3). The map psi(x1, x2, x3) = (x1, x2, 2x3) defines an isomorphism from G0/Gamma0 onto T^3, and T_a is isomorphic to the unipotent affine transformation S: T^3 -> T^3 given by S(x1, x2, x3) = (x1 + a1, x2 + m1 x1 + a2, x3 + 2 m1 x2 + m1^2 x1 + 2 a3).

Proposition 3.2. Theorem 1.2 follows if it holds for all nilsystems (G/Gamma, G/Gamma, mu, T_a) such that T_a is isomorphic to an ergodic, unipotent, affine transformation on some finite dimensional torus.

Proof. Since X = G/Gamma admits a totally ergodic nilrotation T_a, it must be connected: if X0 is the identity component of X, then X is a disjoint union of d copies of translations of X0 for some d in N, a permutes these copies, a^d preserves X0, and the ergodicity of T_{a^d} = T_a^d forces X0 = X. By Proposition 2.1 we can assume that G0 is abelian, and since X is connected the result follows from Proposition 3.1.

4. Uniform distribution for an affine transformation

We are left with showing that Theorem 1.2 holds when the nilsystem is isomorphic to an ergodic, unipotent, affine system on a finite dimensional torus. If G is connected, the uniform distribution property of Theorem 1.2 holds for every x in X; this does not hold in general, as the following example illustrates.

Example 5. The nilrotation of Example 1 is isomorphic to the affine transformation S: T^2 -> T^2 given by S(x1, x2) = (x1 + a1, x2 + m1 x1 + a2). If m1 = 2 and a1 = a2 = a is irrational, then S is totally ergodic and S^n(x1, x2) = (x1 + na, x2 + 2n x1 + n^2 a). Then (S^n(0,0), S^{n^2}(0,0)) = (na, n^2 a, n^2 a, n^4 a) is not uniformly distributed on T^4. On the other hand, (S^n(x1, x2), S^{n^2}(x1, x2)) = (x1 + na, x2 + 2n x1 + n^2 a, x1 + n^2 a, x2 + 2 n^2 x1 + n^4 a) is uniformly distributed on T^4 as long as a and x1 are rationally independent.

The main tool used in the proof of Theorem 1.2 is the classic theorem of Weyl [W16] on uniform distribution:

Theorem (Weyl). Let a_n be a sequence in R^d. Then (a_n) is uniformly distributed in T^d if and only if, for every nonzero m in Z^d,

lim_{N->infinity} (1/N) sum_{n=1}^{N} e^{2 pi i m . a_n} = 0.

Before turning to the proof of Theorem 1.2, a lemma simplifies the computations:

Lemma 4.1. Let T: T^d -> T^d be defined by T(x) = Ax + b, where A is a d x d unipotent integer matrix and b is in T^d, and assume that T is ergodic. Then T is a factor of an ergodic affine transformation S: T^d -> T^d, where S = S1 x S2 x ... x Ss and, for r = 1, 2, ..., s, Sr: T^{dr} -> T^{dr} (with d1 + ... + ds = d) has the form Sr(x_{r1}, x_{r2}, ..., x_{r dr}) = (x_{r1} + br, x_{r2} + x_{r1}, ..., x_{r dr} + x_{r dr - 1}) for some br in T.

Proof (idea). Pass to the Jordan canonical form J of A; since A is unipotent, all diagonal entries of J are 1. A suitable integer matrix P with PA = JP defines a homomorphism of T^d exhibiting T as a factor of S(x) = Jx + c with c = P(b), and a change of variables brings S to the advertised form. Ergodicity of S follows from a theorem of Hahn [H63] together with the ergodicity of T.

Proof of Theorem 1.2 (outline). By Proposition 3.2 it suffices to verify the uniform distribution property for ergodic, unipotent, affine transformations on T^d. Relation (1) of Theorem 1.1 is preserved when passing to factors, so by Lemma 4.1 one may assume T = T1 x T2 x ... x Ts with each Tr of the block form above; since T is ergodic, the set {b1, b2, ..., bs} is rationally independent. One then shows that whenever x is chosen so that the set of coordinates {x_{rj}} is rationally independent, which happens for a set of x of full measure, the sequence (T^{p1(n)} x, ..., T^{pk(n)} x) satisfies Weyl's criterion on T^{dk}: every nontrivial integer combination R(n) of the coordinate polynomials contains a nonconstant coefficient polynomial, precisely because the polynomial family {p_i(n)} is independent. This establishes uniform distribution for a set of x of full measure, completing the proof.

Acknowledgment: The authors thank the referee for his help in organizing and simplifying the presentation, and in particular for the simple proof of Proposition 3.1.

References

[B87] V. Bergelson. Weakly mixing PET. Erg. Th. & Dyn. Sys., 7 (1987), 337-349.
[B96] V. Bergelson. Ergodic Ramsey theory - an update. Ergodic Theory of Z^d-actions, Eds.: M. Pollicott, K. Schmidt. Cambridge University Press, Cambridge (1996), 1-61.
[FW96] H. Furstenberg and B. Weiss. A mean ergodic theorem for certain nonconventional averages.
[W16] H. Weyl. Uber die Gleichverteilung von Zahlen mod Eins. Math. Ann., 77 (1916), 313-352.

Department of Mathematics, McAllister Building, The Pennsylvania State University, University Park, PA 16802
E-mail address: nikos@
E-mail address: kra@
Twisted vertex representations via spin groups and the McKay correspondence
TWISTED VERTEX REPRESENTATIONS VIA SPIN GROUPS AND THE MCKAY CORRESPONDENCE
IGOR B. FRENKEL, NAIHUAN JING, AND WEIQIANG WANG Abstract. We establish a twisted analog of our recent work on vertex representations and the McKay correspondence. For each finite group Γ and a virtual character of Γ we construct twisted vertex operators on the Fock space spanned by the super spin characters of the spin wreath products Γ ≀ Sn of Γ and a double cover of the symmetric group Sn for all n. When Γ is a subgroup of SL2 (C) with the McKay virtual character, our construction gives a group theoretic realization of the basic representations of the twisted affine and twisted toroidal algebras. When Γ is an arbitrary finite group and the virtual character is trivial, our vertex operator construction yields the spin character tables for Γ ≀ Sn .
Foreign Language Teaching and Research Press edition, Selective Compulsory Book 2, Unit 3: Times change (with analysis)
Selective Compulsory Book 2, Unit 3: Times change! Model essay to memorize: learn the writing approach and memorize the sample essay (a speech on the advantages and disadvantages of online learning). (2021 National College Entrance Examination, Paper B) Your school will hold an English speech contest.
You are asked to enter the contest with a speech entitled "Be smart online learners".
It should include: 1. an analysis of the advantages and disadvantages; 2. suggestions for learning.

Sample essay

Be smart online learners

Good morning, everyone. I feel greatly privileged to stand here to deliver a speech titled "Be smart online learners". It's widely acknowledged that online learning is becoming increasingly popular with Chinese students due to its convenience as well as flexibility. However, online learning also presents us learners with challenges in terms of self-discipline and time management. That's why we should develop a positive attitude towards online learning. First of all, we'd better obey our school timetable at home, which will surely contribute to our learning productivity. Besides, it's wise to follow the teachers closely in online class so that we can become more involved, focused and motivated. Follow these tips, and we will become smart online learners. That's all! Thank you.

Transfer practice. Sentence pattern to master: non-restrictive relative clauses introduced by which.
1. You'd better make full preparations before class and have a brief understanding of the history of the Tang Dynasty, which makes it easy for you to go through the class.
Semantic Triple Extraction - Overview and Explanation
Semantic Triple Extraction - Overview and Explanation

1. Introduction

1.1 Overview

Semantic triple extraction is a natural language processing technique that aims to automatically extract semantic information with a subject-predicate-object structure from text. By extracting the entities in a sentence and the relations between them, and arranging them as (subject, predicate, object) triples, one obtains semantic information that is more structured and easier to interpret. The technique has broad application prospects in information retrieval, knowledge graph construction, semantic analysis, and related fields. This overview introduces the basic concept and significance of semantic triple extraction and the main topics discussed below, so that readers can better understand the research motivation and application scenarios of the later content.

1.2 Structure of the article

The article consists of three main parts: introduction, main body, and conclusion. The introduction presents the topic from three angles: an overview, the structure of the article, and its purpose. It briefly describes the background and significance of semantic triple extraction, lays out the overall structure and the logical relations between the parts, and states the problems the article sets out to solve and the value of solving them.

The main body is divided into three subsections. The first introduces the concept of a semantic triple, including its definition, characteristics, and constituent elements. The second surveys extraction methods, including rule-based, statistical, and deep-learning-based approaches; a minimal rule-based sketch is given at the end of this section. The third discusses practical application scenarios, including knowledge graph construction, search engine optimization, and natural language processing.

The conclusion summarizes and looks ahead. It recapitulates the results and highlights, pointing out the importance and necessity of semantic triple extraction; it surveys future research directions and trends, exploring the potential application value of semantic triples in intelligent technologies; and it closes with a brief statement of the significance of semantic triple extraction for advancing intelligent systems.

1.3 Purpose

The purpose of this article is to introduce the technique of semantic triple extraction and to discuss its importance and application value in natural language processing, knowledge graph construction, semantic analysis, and related fields. Through the discussion of the concept and the extraction methods, it aims to help readers better understand and apply the technique and to improve their ability to understand and exploit the semantic information contained in text.
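As an illustration of the rule-based family of methods mentioned above, the following sketch extracts naive (subject, predicate, object) triples from English text with spaCy's dependency parser. It is a hypothetical minimal example, not a method from this article; the pipeline name en_core_web_sm, the dependency labels used, and the verb-centred pairing rule are simplifying assumptions, and real systems add entity linking, coreference resolution, and far richer patterns.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline with a dependency parser

def extract_triples(text):
    """Very naive rule-based extraction: for each verb, pair its nominal
    subject(s) with its direct object(s) to form (subject, predicate, object) triples."""
    triples = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
                objects = [c for c in token.children if c.dep_ in ("dobj", "obj", "attr")]
                for s in subjects:
                    for o in objects:
                        triples.append((s.text, token.lemma_, o.text))
    return triples

print(extract_triples("Marie Curie discovered polonium. The company acquired a startup."))
# e.g. [('Curie', 'discover', 'polonium'), ('company', 'acquire', 'startup')]
```

Note that the rule only captures the head token of each argument; expanding to the full noun phrase and handling passives or prepositional objects would be the next refinements.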
A Brief Description of Popper's Trial-and-Error Method
A Brief Description of Popper's Trial-and-Error Method

The "Popper trial-and-error method" described here (the binary search algorithm) is a search algorithm used to find a particular element in a sorted array. Its basic idea is to exploit the ordering of the array: at each step the middle element is compared with the target; if it matches, the search ends, and otherwise the search range is split according to the comparison and the search continues on the remaining half, iterating until the element is found.

Its greatest advantage when searching a sorted array is that it needs few comparisons and is therefore very time-efficient. It follows a divide-and-conquer idea, decomposing one large problem into smaller problems that can be solved quickly. The time complexity of the algorithm is O(log n), so when the amount of data to be searched is large, it can greatly improve search efficiency.

The implementation steps are simple. First locate the initial indices low and high in the array, where low is the smallest index of the search range and high is the largest. Then compute the middle index m = (low + high) / 2. Next compare the value being sought with the middle element: if it is larger than the middle value, update low to m + 1; if it is smaller, update high to m - 1. Repeat these steps until the sought value is found.

The method can be applied both to data held in memory and to data on external storage. In memory, the user partitions and sorts the data, loads it into memory, and then searches it with this method; on external storage, the user partitions and sorts the data, writes it to disk or an external device, and then searches it in the same way. Because the time complexity of the search is low, it greatly improves search efficiency when the amount of data is large.

In summary, this is an efficient search algorithm based on a divide-and-conquer strategy; on sorted arrays it can greatly improve search efficiency, which is why many programmers like to use it to search content on hardware or external storage devices.
(Extra)Ordinary Gauge Mediation
(Extra)Ordinary Gauge Mediation
Clifford Cheung,1,2 A. Liam Fitzpatrick,1 and David Shih2

1 Department of Physics, Harvard University, Cambridge, MA 02138 USA
2 School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540 USA
We study models of “(extra)ordinary gauge mediation,” which consist of taking ordinary gauge mediation and extending the messenger superpotential to include all renormalizable couplings consistent with SM gauge invariance and an R-symmetry. We classify all such models and find that their phenomenology can differ significantly from that of ordinary gauge mediation. Some highlights include: arbitrary modifications of the squark/slepton mass relations, small µ and Higgsino NLSP’s, and the possibility of having fewer than one effective messenger. We also show how these models lead naturally to extremely simple examples of direct gauge mediation, where SUSY and R-symmetry breaking occur not in a hidden sector, but due to the dynamics of the messenger sector itself.
Proof of Correctness of the Greedy Algorithm for the Fractional Knapsack Problem
Proof of Correctness of the Greedy Algorithm for the Fractional Knapsack Problem

1. The fractional knapsack problem

First, recall the 0-1 knapsack problem. Suppose there are N items, where item i has value V_i and weight W_i. A thief has a knapsack that can carry at most a weight of W, and he wants the items he takes to be as valuable as possible. Which items should he choose? The defining feature of the 0-1 knapsack problem is that each item (more precisely, each kind of item) is either taken (selected) or not taken (not selected); taking only part of an item is not allowed. In the fractional knapsack problem, by contrast, part of an item may be taken: the items can be subdivided arbitrarily finely (the difference between the continuous and the discrete case). One can picture an item in the 0-1 knapsack problem as a gold bar, which you either take or leave, whereas an item in the fractional knapsack problem is a pile of gold dust, of which you may take any fraction.

2. The greedy algorithm for the fractional knapsack problem

The fractional knapsack problem can be solved with a greedy algorithm, and the greedy solution is optimal. What is the greedy strategy? Sort the items by value per unit weight, and always prefer the item with the highest value per unit weight. The value per unit weight of item i is V_i / W_i.

An example: suppose the knapsack can hold 50 kg, and the items are as follows.

| Item i | Weight (kg) | Value | Value per kg |
|--------|-------------|-------|--------------|
| 1      | 10          | 60    | 6            |
| 2      | 20          | 100   | 5            |
| 3      | 30          | 120   | 4            |

Sorting by value per unit weight gives item 1 > item 2 > item 3, so we take as much of item 1 as possible, and only after item 1 is exhausted do we move on to item 2, and so on. The greedy choice is therefore: all of item 1, all of item 2, and 20 kg of item 3 (only part of item 3), for a total value of 60 + 100 + 80 = 240. This choice yields the maximum possible value; the proof is given in part 3.

For the 0-1 knapsack problem, the same greedy strategy of preferring the highest value per unit weight does not give an optimal solution: after taking items 1 and 2, the knapsack already holds 10 + 20 = 30 kg and item 3 no longer fits (50 - 30 < 30), since in the 0-1 problem an item must be taken whole or not at all. The total value obtained is then only 160, whereas taking items 2 and 3 instead would give 220.
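The greedy strategy just described translates directly into code. The following is a small illustrative sketch (the function name and data layout are our own); it assumes all weights are positive and that any item may be split arbitrarily:

```python
def fractional_knapsack(items, capacity):
    """Greedy solution to the fractional knapsack problem.
    items: list of (value, weight) pairs; capacity: maximum total weight.
    Returns the maximum total value when items may be taken fractionally."""
    # Sort by value per unit weight, highest first
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)      # take the whole item, or whatever still fits
        total += value * (take / weight)  # proportional value for a partial item
        capacity -= take
    return total

# The example from the text: capacity 50 kg, items given as (value, weight)
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```

Running it on the example from the text returns 240.0, the value obtained by taking items 1 and 2 whole and 20 kg of item 3.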
VANMASS 2 in 1 Wireless Charger CDRZ35 Instruction Manual
@VANMASS(TM) Thank you for purchasing and using this product! In order to have the best experience, please read this manual carefully before use, refer also to the manual of your mobile phone or other electronic product, and keep this manual in a safe place for future reference.

2-in-1 WIRELESS CHARGER

Product specifications:

| Item | Specification |
|------|---------------|
| Product name | Wireless charger |
| Model | CDRZ35 |
| Size | approx. (100 x 93 x 55) mm |
| Net weight | approx. 92 g |
| Input | 5V=2A, 9V=1.8A |
| Charging distance | approx. (4-6) mm |
| Compatible devices | Qi-standard cellphones: iPhone, Samsung, LG, Moto, etc. |
| Package list | Wireless car charger x1, USB cable x1, Air vent clip x1, Suction cup holder x1, Instruction book x1 |

The parameters above are derived from the VANMASS laboratory; actual parameters may differ because of the product and other factors.
Counting Principles for Combinatorial Selection
Counting Principles for Combinatorial Selection

Combinatorial counting principles are a set of rules that help us count the number of possible combinations or arrangements of objects. They are widely used in mathematics, statistics, and computer science to solve counting problems.

One of the fundamental principles is the multiplication principle, also known as the counting principle. It states that if there are m ways to do one thing and n ways to do another thing, then there are m x n ways to do both things. The principle rests on the fact that the number of choices at each step multiplies to give the total number of choices. For example, with 3 different shirts and 4 different pants, the number of possible outfits consisting of one shirt and one pair of pants is 3 x 4 = 12.

Another principle is the addition principle, which states that if there are m ways to do one thing and n ways to do another thing, then there are m + n ways to do either of the two things. It applies when the choices are mutually exclusive, meaning that only one option can be chosen. For example, with 2 different desserts and 3 different drinks, the number of ways to choose either a dessert or a drink is 2 + 3 = 5.

The two principles can be combined to solve more complex counting problems. For example, with 3 types of fruit, 2 types of vegetables, and 4 types of meat, the number of possible meals consisting of one fruit, one vegetable, and one meat is, by the multiplication principle, 3 x 2 x 4 = 24.

In summary, combinatorial counting principles provide a systematic way to count the number of possible combinations or arrangements: the multiplication principle applies when the choices are independent and multiply together, and the addition principle applies when the choices are mutually exclusive and add together. These principles are essential tools for solving counting problems and are widely applicable in many fields.
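A tiny script makes the two principles concrete; the item names below are invented purely for illustration:

```python
from itertools import product

shirts = ["shirt_A", "shirt_B", "shirt_C"]
pants = ["pants_1", "pants_2", "pants_3", "pants_4"]

# Multiplication principle: independent choices multiply
outfits = list(product(shirts, pants))
print(len(outfits))  # 12 = 3 * 4

# Addition principle: mutually exclusive choices add
desserts, drinks = ["cake", "pie"], ["tea", "coffee", "juice"]
print(len(desserts) + len(drinks))  # 5 ways to pick either a dessert or a drink
```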
An English Essay About a Different Kind of Person
In the vast tapestry of humanity, each thread represents an individual with a distinct story to tell. This essay delves into the life and perspectives of a person who stands out from the crowd, not just for their achievements, but for the unique lens through which they view the world.

Born in the heart of a bustling city, our subject, Alex, was always aware of the diversity that surrounded them. However, it was not the city's cacophony that shaped Alex's uniqueness, but rather the quiet moments of introspection and the boundless curiosity that propelled them to explore the world in their own way.

Alex's essay is not a conventional narrative. It is a mosaic of experiences, a collage of thoughts, and a symphony of emotions. The first paragraph is a poetic reflection on the concept of individuality, with Alex pondering the question, "What does it mean to be different?" They write, "To be different is to be a color in a world that often prefers monochrome, a note in a song that can't be harmonized, a question that seeks no answer."

As the essay unfolds, Alex discusses their journey of self-discovery, a path that was neither straightforward nor smooth. They recount the challenges of being misunderstood, the loneliness of standing apart, and the courage it took to embrace their differences. "I was a puzzle piece that didn't fit into any of the boxes," Alex eloquently puts it, "but instead of trying to squeeze into a space I wasn't made for, I decided to create my own puzzle."

The middle section of the essay is a tribute to the influences that have shaped Alex's worldview. From the works of obscure philosophers to the melodies of underground musicians, Alex has drawn inspiration from sources that are not typically part of mainstream culture. They speak passionately about the importance of seeking out diverse perspectives and the value of learning from those who are often overlooked.

In the penultimate part, Alex addresses the societal pressures to conform and the stereotypes that are imposed on individuals. They argue for the acceptance of diversity and the freedom for everyone to express their true selves. "We are not mere shadows cast by the societal norms; we are the light that illuminates the path to a more inclusive and understanding world," Alex writes, advocating for a world where uniqueness is not just tolerated but celebrated.

The essay concludes with a forward-looking perspective, where Alex shares their aspirations to use their unique voice to inspire change. They envision a future where every individual is empowered to be different and where the collective strength of humanity is derived from its diversity.

Alex's essay is a testament to the power of individuality and a call to action for a society that values and respects the differences in each person. It is a narrative that transcends the boundaries of a traditional essay, offering a glimpse into the mind of a truly unique individual.
A Description of the Binary Search Algorithm
A Description of the Binary Search Algorithm

Binary search (half-interval search) is a search algorithm for sorted arrays or lists. Its steps are as follows:

1. Initialize a left pointer left and a right pointer right. left points to the first position of the array and right points to the last position.
2. Compute the middle position mid = (left + right) / 2. If the number of elements in the current range is odd, mid is the index of the middle element; if it is even, mid is the smaller of the two middle indices.
3. Compare the middle element [mid] with the target value target.
   - If [mid] == target, the target has been found; return mid.
   - If [mid] < target, the target can only lie in [mid+1, right]; update left = mid + 1 and repeat step 2.
   - If [mid] > target, the target can only lie in [left, mid-1]; update right = mid - 1 and repeat step 2.
4. Repeat steps 2 and 3 until left > right or the target has been found. If the target is found, return its index; otherwise return -1.

The time complexity of binary search is O(log n), where n is the size of the array. Because the search interval is halved at every step, it is a highly efficient search algorithm. A runnable version of these steps is given below.
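The following is a direct transcription of the steps above into code; the function name and the sample data are our own choices:

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if it is absent."""
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2          # middle index (the smaller one for even sizes)
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1                 # target can only be in [mid+1, right]
        else:
            right = mid - 1                # target can only be in [left, mid-1]
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1
```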
The Neutrino World - Fermilab Home
Slide excerpts:

Is the spectral pattern normal or inverted? Generically, SO(10) grand unified models predict the normal pattern; the inverted pattern is un-quark-like, and would probably involve a lepton symmetry with no quark analogue.

Working group contacts. Neutrino Factory and Beta Beam Experiments: Stephen Geer, Michael Zisman. Neutrinoless Double Beta Decay and Direct Searches for Neutrino Mass: Steve Elliott, Petr Vogel.

At NOvA, with a 2nd detector, the determination would be possible for sin^2(2 theta_13) almost down to 0.01.

The leptonic mixing matrix U factorizes into atmospheric, cross-mixing, and solar parts:

U = [[1, 0, 0], [0, c23, s23], [0, -s23, c23]] x [[c13, 0, s13 e^{-i delta}], [0, 1, 0], [-s13 e^{i delta}, 0, c13]] x [[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]]

A neutrino of flavour alpha (l_e = e, l_mu = mu, l_tau = tau) arrives at the detector as the superposition sum_i U_{alpha i} nu_i of the mass eigenstates nu_i; U is the leptonic mixing matrix.

What have we learned? If LSND is confirmed, there are at least 4 neutrino species.
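The factorized form of U can be checked numerically. The sketch below builds the three factors and verifies that their product is unitary; the angle and phase values passed in are arbitrary placeholders rather than measured inputs.

```python
import numpy as np

def pmns(theta12, theta13, theta23, delta):
    """Build U = (atmospheric) x (cross-mixing) x (solar) from mixing angles (radians)
    and the Dirac CP phase delta."""
    c12, s12 = np.cos(theta12), np.sin(theta12)
    c13, s13 = np.cos(theta13), np.sin(theta13)
    c23, s23 = np.cos(theta23), np.sin(theta23)
    atmospheric = np.array([[1, 0, 0],
                            [0, c23, s23],
                            [0, -s23, c23]], dtype=complex)
    cross_mixing = np.array([[c13, 0, s13 * np.exp(-1j * delta)],
                             [0, 1, 0],
                             [-s13 * np.exp(1j * delta), 0, c13]], dtype=complex)
    solar = np.array([[c12, s12, 0],
                      [-s12, c12, 0],
                      [0, 0, 1]], dtype=complex)
    return atmospheric @ cross_mixing @ solar

# placeholder angles, roughly of the size quoted in global fits
U = pmns(theta12=0.58, theta13=0.15, theta23=0.78, delta=1.2)
print(np.allclose(U.conj().T @ U, np.eye(3)))  # True: U is unitary
```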
a rXiv:h ep-ph/61266v131Jan26TUM-HEP-618/06MADPH-06-1252From Double Chooz to Triple Chooz —Neutrino Physics at the Chooz Reactor Complex P.Huber a ,J.Kopp b ,M.Lindner c ,M.Rolinec d ,W.Winter e a Department of Physics,University of Wisconsin,1150University Avenue,Madison,WI 53706,USA b ,c ,d Physik–Department,Technische Universit¨a t M¨u nchen,James–Franck–Strasse,85748Garching,Germany e School of Natural Sciences,Institute for Advanced Study,Einstein Drive,Princeton,NJ 08540,USA February 2,2008Abstract We discuss the potential of the proposed Double Chooz reactor experiment to measure theneutrino mixing angle sin 22θ13.We especially consider systematical uncertainties and their partial cancellation in a near and far detector operation,and we discuss implications of a delayed near detector startup.Furthermore,we introduce Triple Chooz,which is a possible upgrade scenario assuming a second,larger far detector,which could start data taking in an existing cavern five years after the first far detector.We review the role of the Chooz reactor experiments in the global context of future neutrino beam experiments.We find that both Double Chooz and Triple Chooz can play a leading role in the search for a finite value of sin 22θ13.Double Chooz could achieve a sensitivity limit of ∼2·10−2at the 90%confidence level after 5years while the Triple Chooz setup could give a sensitivity below 10−2.1IntroductionNeutrino oscillations have now clearly been established for solar,atmospheric and reactor neutrinos,as well as with neutrino beams.However,these oscillations can still be described by an effective two neutrino picture to a very good approximation.This is a consequence of the smallness of the third mixing angleθ13and the fact that the solar mass splitting∆m221is much smaller than the atmospheric mass splitting∆m231.Establishing generic threeflavour effects by measuring afinite value for the third mixing angleθ13is therefore one of the most important tasks for future neutrino experiments.For a concise review and description of the current status see Ref.[1].Afinite value ofθ13is crucial for the search for leptonic CP violation,too.Since CP violating effects are proportional toθ13,discovering afinite value ofθ13or excluding a certain range of values is a key information for the planning of future long baseline neutrino beam experiments.Therefore,we discuss in this paper the potential to limit or measureθ13with Double Chooz,which is currently the most advanced reactor project.In addition,we consider the Triple Chooz upgrade option,which could benefit from an existing cavern where a second large far detector could be constructed.We also discuss how a timely information on sin22θ13will influence the choice of technology for the second generation neutrino beam facilities.The outline of the paper is as follows.In Sec.2,we present some general remarks on the neutrino oscillation framework and we discuss implications for reactor anti-neutrino disappearance measurements.In Sec.3,we describe the simulated experimental setups of Double Chooz and a potential upgrade to Triple Chooz.We then discuss in Sec.4 the systematical errors at Double Chooz and we present their implementation within our analysis.Next,in Sec.5,we present the results of our simulations for the sensitivity and the precision of sin22θ13.Here,we provide also a detailed discussion of the quantitative impact of the systematical uncertainties.Finally,we assess the role of Double Chooz and eventually Triple Chooz in the global context of 
sin22θ13measurements with reactors and future neutrino beam experiments.2Neutrino oscillation frameworkAs discussed in previous studies[2–6],reactor experiments can play a crucial role for mea-surements of the third small neutrino mixing angleθ13.An important aspect is that such a measurement in the¯νe-disappearance channel does not suffer from correlations with un-known parameters,such as the CP phaseδCP.Correlations with the other oscillation parameters were also found to be negligible[5].This can easily be seen in the expansion of the full oscillation probability in the small parameters sin22θ13andα≡∆m221/∆m231up to second order:1−P¯e¯e≃sin22θ13sin2∆31+α2∆231cos4θ13sin22θ12,(1) where∆31=∆m231L/4E,L is the baseline,and E the neutrino energy.Matter effects can also be safely ignored for such short baselines of L=1∼2km.For a measurement at the first oscillation maximum and sin22θ13>10−3even the second term in Eq.(1)becomesnegligible1.Unless stated differently,we use the following input oscillation parameters(see e.g.Refs.[7–10]):∆m231=2.5·10−5eV2;sin22θ23=1(2)∆m221=8.2·10−3eV2;sin22θ12=0.83(3) Our analysis is performed with a modified version of the GLoBES Software[11],which allows a proper treatment of all kinds of systematical errors which can occur at a reactor experiment such as Double Chooz.This is important since the sensitivity of a reactor experiment to sin22θ13depends crucially on these systematical uncertainties[5].The importance of systematical errors becomes obvious from Eq.(1),since a small quantity has to be measured as a deviation from1.3Experimental setupsThe basic idea of the Double Chooz experiment is a near and a far detector which are as similar as possible in order to cancel systematical uncertainties.The two detectors are planned to have the samefiducial mass of10.16t of liquid scintillator.However,there are also some unavoidable differences,such as the larger muon veto in the near detector.The thermal power of the reactor is assumed to be2·4.2GW(two reactor cores).The Double Chooz setup can benefit from the existing Chooz cavern at a baseline of L=1.05km from the reactor cores.This allows a faster startup of the far detector in order to collect as much statistics as possible at the larger baseline.For the near detector,a new underground cavern must be built close to the reactor cores.In this paper,we assume100m for the baseline of the near detector[12].Being so close to the reactor,it can catch up with the statistics of the far detector.As our standard scenario in this paper,we assume that the near detector starts1.5years after the far detector.We refer to the initial phase without the near detector as phase I,and to the period in which both the near and far detectors are in operation as phase II.Typically this leads for the far detector to19333unoscillated events per year,corresponding to1.071·106events per year in the near detector[12]. Besides the Double Chooz experiment,we discuss a potential Triple Chooz upgrade after a few years by construction of a second,larger far detector.Another existing cavern at roughly the same baseline from the Chooz reactor cores can be used for this purpose. 
This is a very interesting option,since this second cavern should be available around2010, and one could avoid large civil engineering costs and save time.In particular,one could essentially spend all of the money for a typical second generation reactor experiment on the detector.We therefore consider a200t liquid scintillating detector with costs comparable to other proposed next generation reactor experiments[13].The ultimate useful size of such a detector strongly depends on the level of irreducible systematics such as the bin-to-bin error,which will be discussed in greater detail in the following sections.4Systematical errors at Double ChoozA reactor neutrino experiment depends on a variety of different systematical errors,which are the most important limiting factor for sin22θ13measurements.Any deficit in the de-tected neutrinoflux could be attributed either to oscillations or to a different reactor neu-trinofluxΦ.The systematicalflux uncertainty is consequently the dominant contribution which must be minimized.In past experiments,theflux was deduced from the thermal power of the reactor,which can only be measured at the level of a few percent.However, in next generation reactor experiments such as Double Chooz,a dedicated identical near detector will be used to precisely measure the unoscillated neutrinoflux close to the reactor core such that the uncertainty inΦcancels out.In addition,the near detector eliminates,in principle,the uncertainties in the neutrino energy spectrum,the interaction cross sections, the properties of the liquid scintillator(which is assumed to be identical in both detec-tors),and the spill-in/spill-out effect.The latter occurs if the neutrino interaction takes place inside thefiducial volume,but the reaction products escape thefiducial volume or vice-versa.However,cancellation of systematical errors for a simultaneous near and far detector operation works only for the uncertainties that are correlated between both detec-tors.Any uncorrelated systematical error between near and far detector must therefore be well controlled.The knowledge of thefiducial detector mass or the relative calibration of normalization and energy reconstruction are,for instance,partly uncorrelated uncertainties and are therefore not expected to cancel completely.In addition,backgrounds play a special role,as some of the associated uncertainties are correlated(e.g.,the radioactive impurities in the detector),while others are not.In particular,since the overburden of the near detec-tor is smaller than that of the far detector,theflux of cosmic muons will be higher for the near detector site.This requires a different design for the outer veto and different cuts in thefinal data analysis,which again introduces additional uncorrelated systematical errors. Another complication in the discussion of cancellation of correlated uncertainties in Double Chooz is the fact that the near detector is supposed to start operation about1.5years later than the far detector.Therefore,only those systematical errors which are correlated between the detectors and which are not time-dependent can be fully eliminated.This applies to the errors in the cross-sections,the properties of the scintillator,and the spill-in/spill-out effects.However,it only partly applies to systematical uncertainties in the background.In particular,the errors in the reactorflux and spectrum will be uncorrelated between phase II, where both detectors are in operation,and phase I,where only the far-detector operates. 
The reason for this is the burn-up and the periodical partial replacement of fuel elements. The different systematical uncertainties discussed so far are summarized in Table1together with their magnitudes we assume for Double Chooz.For the proper implementation of all relevant correlated and uncorrelated systematical un-certainties,together with an appropriate treatment of the delayed near detector start up, we modified theχ2-analysis of the GLoBES Software and defined aχ2-function which in-corporates all the relevant uncertainties.The numerical simulation assumes the events to follow the Poisson distribution,but for illustrative purposes it is sufficient to consider the Gaussian approximation which is very good due to the large event rates in Double Chooz. The totalχ2is composed of the statistical contributions of the far detector in phase I,χ2F,I,Correlated Value for DC 1yes 2.0% Reactor spectrum yes3yes4yes 2.0%5yesFiducial mass noDetector normalization yesAnalysis cuts no9no0.5% Backgrounds partlyO F,I,i,(5)χ2F,II= i[(1+a F,fid+a norm+a drift)T F,II,i+(1+a F,fid+a bckgnd)B F,II,i−O F,II,i]2O N,II,i,(7)χ2pull=a2F,fidσ2N,fid+a2normσ2drift+a2bckgndσ2shape,i.(8)In these expressions,O F,I,i denotes the event number in the i-th bin at the far detector in phase I,O F,II,i the corresponding event number in phase II and O N,II,i the event number in the near detector during phase II.These event numbers are calculated with GLoBES assuming the values given in Eqs.(2)and(3)for the oscillation parameters.The T F,I,i, T F,II,i and T N,II,i are the corresponding theoretically expected event numbers in the i-th bin and are calculated with a varyingfit value forθ13.The other oscillation parameters are kept fixed,but we have checked that marginalizing over them within the ranges allowed by other neutrino experiments does not change the results of the simulations.This is in accordance with Ref.[5].B F,I,i,B F,II,i and B N,II,i denote the expected background rates,which we assume to be1% of the corresponding signal rates.This means in particular,that the background spectrumfollows the reactor spectrum.In reality,backgrounds will have different spectra,however,as long as these spectra are known,this actually makes it easier to discriminate between signal and background because the spectral distortion caused by backgrounds will be different from that caused by neutrino oscillations.If there are unknown backgrounds,we must introduce bin-to-bin uncorrelated errors,which will be discussed in section5.As systematical errors we introduce the correlated normalization uncertaintyσnorm=2.8% (describing the quadratic sum of the reactorflux error,the uncertainties in the cross sections and the scintillator properties,and the spill in/spill out effect)and thefiducial mass uncer-tainty for near and far detectorσN,fid=0.6%andσF,fid=0.6%.Furthermore,to account for errors introduced by the delayed startup of the near detector,we allow an additional bias to theflux normalization in phase II with magnitudeσdrift=1%per year of delay. We also introduce a shape uncertaintyσshape,i=2%per bin in phase I,which describes the uncertainty in the reactor spectrum.It is completely uncorrelated between energy bins. Note that in phase II a possible shape uncertainty is irrelevant as it will be canceled by the near detector.We assume a background normalization uncertainty ofσbckgnd=40%. 
Finally,we introduce a0.5%energy calibration error which is implemented as a re-binning of T F,I,i,T F,II,i and T N,II,i before theχ2analysis(see App.A of Ref.[5]).It is uncorrelated between the two detectors,but we neglect its time dependence,since we have checked that it hardly affects the results.5Physics potentialIn this section,we present the numerical results of our analysis and we discuss the perfor-mance of Double Chooz and the Triple Chooz upgrade.First,we discuss the quantitative impact of the systematical uncertainties introduced in the last section.In Fig.1,we as-sume a reactor experiment with identical near and far detectors located at a baseline of 1.05km which are running simultaneously.Note that this is neither the initial Double Chooz setup,where the near detector will be added with some delay,nor the Triple Chooz setup,which would have two different far detectors at slightly different baselines.Fig.1 is nevertheless interesting,since it allows to compare the principal strength of the Double Chooz and Triple Chooz setups.The vertical black lines in Fig.1correspond to5years of full Double Chooz operation(5yrs×10.16t×8.4GW),10years of full Double Chooz operation(10yrs×10.16t×8.4GW)and5years of full Double Chooz+5years Triple Chooz([5yrs×10.16t+5yrs×210.16t]×8.4GW),respectively.The sensitivity of an experiment with the integrated luminosity of∼103GW t yrs,such as Double Chooz,is quite independent of the bin-to-bin error as can be seen from Fig.1.This is not surprising and has been already discussed in detail in Ref.[5].Therefore,a sensitivity down to sin22θ13=0.02 is certainly obtainable.The situation is somewhat different for an experiment of the size of Triple Chooz.From discussions in Ref.[5]it is expected that the sin22θ13sensitivity limit at a reactor experiment of the size of Triple Chooz should be quite robust with re-spect to systematical uncertainties associated to the normalization,since the normalization is determined with good accuracy from the very good statistics and from additional spec-tral information.This robustness can be seen in Fig.1where the sensitivity limit at the 90%confidence level for different sets of systematical errors is shown as function of the totalIntegrated Luminosity in Far Detector GW t years0.0020.0050.010.020.050.1s i n 22Θ13s e n s i t i v i t y a t 90%C .L .Figure 1:The impact of systematical uncertainties on the sin 22θ13sensitivity limit at the 90%confidence level as function of the total integrated luminosity for a reactor experiment with near and far detector (both taking data from the beginning).The integrated luminosity is given by the product of reactor power,far detector mass and running time in GW t yrs.The vertical lines indicate the exposure in 5years of Double Chooz operation (left),10years of Double Chooz (middle),and 5years Double Chooz +5years Triple Chooz (right).We still neglect the effects of a delayed near detector startup and of the different baselines of the two far detectors in Triple Chooz.The plot illustrates that for high luminosities it is crucial to control the uncorrelated uncertainties,in particular the bin-to-bin errors.integrated luminosity in the far detector (given by the product of reactor power,detector mass and running time in GW t yrs).As can be seen in Fig.1,the performance at lumi-nosities associated with Triple Chooz decreases immediately if in addition bin-to-bin errors are introduced which are uncorrelated between near and far detector.These uncorrelated bin-to-bin errors are added to the 
χ2function in the same way as σshape was introduced in Eqs.(4)to (8)for each bin independently,but uncorrelated between the two detectors.These uncertainties could,for instance,come from uncorrelated backgrounds and different cutting methods necessary if the detectors are not 100%identical.Thus,especially for the Triple Chooz setup,these uncertainties have to be under control,because they can spoil the overall performance.The bin-to-bin error is used here as a parameterization for yet unknown systematical effects and is an attempt to account for the worst case.Thus,in a realistic situation,the bin-to-bin error would have to be broken down into individual knownRunning time years 0.010.0150.020.030.050.070.1S i n 22Θ13s e n s i t i v i t y a t 90%C .L .Figure 2:The sin 22θ13sensitivity limit at the 90%confidence level achievable at Double Chooz for three different delayed startup times of the near detector,and of the Triple Chooz Scenario,where the second far detector is added after 5years of Double Chooz running.components and thus the impact would be less severe.If bin-to-bin errors were excluded,the evolution of the sensitivity limit would already enter a second statistics dominated regime (curve parallel to dashed statistics only curve),since the systematical uncertainties could be reduced due to the spectral information in the data (see also Ref.[5]for explanations).Note that the σshape uncertainty does not affect the sin 22θ13sensitivity in a sizeable manner,since it is correlated between near and far detector and therefore cancels out.The evolution of the sin 22θ13sensitivity limit at the 90%confidence level as a function of the running time is shown in Fig.2.Here the upper thin dashed curve indicates the limit which could be obtained by the far detector of Double Chooz alone (i.e.,no near detector is assumed,which corresponds to phase I continuing up to 10years),while the lower thin dashed curve shows the limit which could be obtained if the near detector started data taking together with the far detector (i.e.,phase I is absent,while phase II continues up to 10years).The near detector improves the sensitivity considerably,but even the far detector alone would quickly improve the existing Chooz limit.The solid blue (black)curve corresponds to the standard Double Chooz scenario,where the near detector starts operation1.5years after the far detector.It can be seen that the sin 22θ13limit improves strongly after the startup of the near detector and converges very fast to the curve correspondingto a near detector in operation from the beginning.Thus,the Double Chooz performance does not suffer from the delayed near detector startup in the end.This“delayed startup”is in fact not a delay,but it allows a considerably quicker startup of the whole experiment, utilizing the fact that no civil engineering is necessary at the site of the far detector.There have been performed similar calculations by the Double Chooz collaboration[14],concerning the evolution of the sin22θ13sensitivity with a1.5years duration of phase I,followed by a phase II scenario,which are in good agreement with the corresponding curves in Fig.2. 
However,there are slight differences especially for the evolution of the sin22θ13sensitivity in phase I.These come from the inclusion of spectral information in Fig.2,whereas in the calculations in Ref.[14]only total rates were taken into account.The dashed and dotted blue(black)curves in Fig.2show the evolution of the sensitivity limit,if the near detector were operational not1.5years after the far detector,but2.5or5years,respectively.Again, the sensitivity limit improves quickly as soon as the near detector is available and quickly approaches the limit with a near detector from the beginning.The main reason for this is,that the overall sensitivity is ultimately dominated by the uncorrelated systematical uncertainties and not by statistics.Furthermore,Fig.2shows the evolution of the sin22θ13 sensitivity limit for the Triple Chooz setup,both without uncorrelated bin-to-bin errors (solid cyan/grey curve)and withσbin-to-bin=0.5%(dashed cyan/grey curve).It is assumed that the second far detector starts operation5years after thefirst far detector.In the Triple Chooz simulation,we have assumed the uncorrelated normalization and energy calibration errors of the second far detector to be1%each.This is slightly larger than the0.6%resp.0.5%in the original Double Chooz reflecting that the design of the new detector would have to be different from that of the two original detectors.It can be seen that the Triple Chooz scenario could achieve a90%confidence level sensitivity limit below sin22θ13=10−2after less than8years of total running time(5years Double Chooz+3years Triple Chooz),even if small bin-to-bin errors were allowed to account for backgrounds or detector characteristics that are not fully understood.If bin-to-bin errors are absent,the sensitivity will improve by about10%.The plot shows that the Triple Chooz setup can compete with the sensitivity expected from other second generation precision reactor experiments.It also demonstrates that the precision of reactor experiments could be further improved in a timely manner. 
The improved sin22θ13limits or measurements could be valuable input for planning and optimizing the second generation neutrino beam experiments.We have so far considered a200t detector for Triple Chooz and one may wonder how an even larger detector,which easilyfits into the large existing cavern,would perform.A larger detector implies even higher values of integrated far detector luminosity.From Fig.1one can immediately see that the achievable value ofσbin−to−bin determines the performance, i.e.if one can benefit from the larger detector mass or if the sensitivity is already saturated byσbin−to−bin.From Fig.1one can read offthat for100t or200tσbin−to−bin<0.5% should be achieved.A500t detector would requireσbin−to−bin<0.1%in order to obtain an improvement of the sensitivity limit to the level of5·10−3.Fig.3shows the dependence of the sin22θ13sensitivity of Double Chooz on the true value of∆m231.Such a parametric presentation makes sense,since∆m231will be known relatively precisely by then from the MINOS experiment.The sensitivity again is shown for four different scenarios:5years with the far detector of Double Chooz only(dashed black curve∆m231.The curves correspond to the following setups:a5-year run of only the Double Chooz far detector without near detector(dashed blue/black curve to the right),a5-year run of Double Chooz with near detector after1.5years(solid blue/black curve),and a5-year run of Double Chooz followed by a5-year run of Triple Chooz without bin-to-bin errors(solid cyan/grey curve)and with a0.5%bin-to-bin error(dashed cyan/grey curve).The light grey areas show the3σexcluded regions for∆m231from a globalfit[10],the horizontal line indicates the corresponding bestfit value.to the right),5years of Double Chooz with a near detector after1.5years(solid blue/black curve),andfinally the Triple Chooz scenario with and without bin-to-bin errors,where the second far detector is starting operation5years after thefirst far detector(cyan/grey curves).We also show the curves for a region of∆m231parameter space that is already excluded by current globalfits(upper grey-shaded region;see,e.g.,Refs.[7–10]).One can easily see that a larger true value of∆m231would be favorable for an experiment at the relatively short baseline of L∼1.05km between the reactor and the Double Chooz detector. 
As can be seen in Fig.3,the setup with only a far detector and the Triple Chooz setup show a characteristic dip around∆m231≈3·10−3eV2.This effect is due to the normalization errors and can be understood as follows:If the true∆m231is very small,thefirst oscillation maximum lies outside the energy range of reactor neutrinos.For∆m231≈2·10−3eV2, thefirst maximum enters at the lower end of the spectrum.Therefore oscillations cause a spectral distortion which cannot be mimicked by an error in theflux normalization.But0.0050.010.020.050.10.2True value of sin 22Θ130.20.40.60.81R e l .e r r o r o n l o g s i n 22Θ13 5y DC FD only 5y DC ND after 1.5years 5y DC 5y TC Σbin to bin 0.0% 5y DC 5y TC Σbin to bin 0.5% GLoBES 2006Figure 4:The precision of the sin 22θ13measurement at the 90%confidence level as a function of the true value of sin 22θ13.The curves correspond to the following setups:a 5-year run of only the Double Chooz far detector without near detector (dashed blue/black curve to the right),a 5-year run of Double Chooz with near detector after 1.5years (solid blue/black curve),and a 5-year run of Double Chooz followed by a 5-year run of Triple Chooz without bin-to-bin errors (solid cyan/grey curve)and with a 0.5%bin-to-bin error (dashed cyan/grey curve).with increasing true ∆m 231,a larger part of the relevant energy range is affected by the oscillations.This behaviour could also come from a normalization error which decreasesthe sensitivity to sin 22θ13in the region around ∆m 231≈3·10−3eV 2.For even larger ∆m 231 4·10−3eV 2,the second oscillation maximum enters the reactor spectrum,which again causes a characteristic spectral distortion.Up to now,we have only considered the achievable sin 22θ13sensitivity limit.If a finite value were observed,reactor experiments could determine sin 22θ13with a certain precision,since no correlations with the unknown CP phase δCP would exist.For a large reactor experiment,this might allow the first generation beam experiments,T2K and NOvA to have a first glimpse on CP violation [15].Fig.4shows the precision to sin 22θ13for the different considered setups.This precision is defined asRel.error on sin 22θ13=log(sin 22θ(u )13)−log(sin 22θ(d )13)where log(sin22θ(u)13)and log(sin22θ(d)13)are the upper and lower bounds of the90%confidenceregion,and log(sin22θ(true)13)is the true value assumed in the simulation(same definition asin Ref.[5]).The plot confirms the expectation that the precision is better for a larger value of sin22θ13.The ability to measure sin22θ13is then completely lost for true values near the sensitivity limit.6Role in the global context and complementarity to beam ex-perimentsIn order to discuss the role of the Double Chooz and Triple Chooz setups in the global context,we show in Fig.5a possible evolution of the sin22θ13discovery potential(left)and sin22θ13sensitivity limit(right)as function of time.In the left panel of Fig.5,we assume that sin22θ13isfinite and that a certain unknown value ofδCP exists.The bands in the figure reflect the dependence on the unknown value ofδCP,i.e.,the actual sensitivity will lie in between the best case(upper)and worst(lower)curve,depending on the value of δCP chosen by nature.In addition,the curves for the beam experiments shift somewhat to the worse for the inverted mass hierarchy,which,however,does not qualitatively affect this discussion.The right panel of thefigure shows the sin22θ13limit which can be obtained for the hypothesis sin22θ13=0,i.e.,no signal.Since particular parameter combinations can easily mimic 
sin22θ13=0in the case of the neutrino beams,theirfinal sin22θ13sensitivity limit is spoilt by correlations(especially withδCP)compared to Double Chooz2.The two panels of Fig.5very nicely illustrate the complementarity of beam and reactor experiments: Beams are sensitive toδCP(and the mass hierarchy for long enough baselines),reactor ex-periments are not.On the other hand,reactor experiments allow for a“clean”measurement of sin22θ13without being affected by correlations.There are a number of important observations which can be read offfrom Fig.5.First of all,assume that Double Chooz starts as planned(solid Double Chooz curves).Then it will quickly exceed the sin22θ13discovery reach of MINOS,especially after the near detector is online(left panel).For some time,it would certainly be the experiment with the best sin22θ13discovery potential.If afinite value of sin22θ13were established at Double Chooz, thefirst generation superbeam experiments T2K and NOvA could try to optimize a potential anti-neutrino running strategy.The breaking of parameter correlations and degeneracies might in this case be even achieved by the synergy with the Triple Chooz upgrade(similar to Reactor-II in Ref.[5]).For the sin22θ13sensitivity,i.e.,if there is no sin22θ13signal, the best limit will come from Double Chooz already from the very beginning even without near detector.Together with the near detector,this sensitivity cannot be exceeded by the superbeams without upgrades,because these suffer from the correlation withδCP.Double Chooz has altogether an excellent chance to observe afinite value ofθ13first.Ifθ13were zero or tiny,then Double Chooz would be an extremely good exclusion machine.It could exclude a large fraction of the parameter space already a few years before the corresponding。