Random matrix theory and L-functions at s = 1/2

ζ(s) = ∏_p (1 − 1/p^s)^{−1} = ∑_{n=1}^{∞} 1/n^s   (2)
for Re s > 1, and by analytic continuation in the rest of the complex plane. We conjectured that the moments of ζ(1/2 + it) high on the critical line (t ∈ R) factor into a part which is specific to the Riemann zeta function, and a universal component which is the corresponding moment of the characteristic polynomial Z(U, θ) of matrices in U(N), defined with respect to an average over the CUE. The connection between N and the height T up the critical line corresponds to equating the mean density of eigenvalues, N/2π, with the mean density of zeros, (1/2π) log(T/2π). This idea has subsequently been applied by Brezin and Hikami [2] to other random matrix ensembles, and by Coram and Diaconis [4] to other statistics. Our purpose here is to extend these calculations to SO(2N) and USp(2N), and to compare the results with what is known about the L-functions. (Only SO(2N) is relevant, because a family
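The CUE average described here can be explored numerically. Below is a minimal Monte Carlo sketch (my own illustration, not from the paper): it samples Haar-distributed matrices from U(N) by the standard QR recipe and estimates the moment E[|Z(U, θ)|^{2k}]. For k = 1 the exact CUE moment is N + 1, which the estimate should approach.

```python
import numpy as np

def haar_unitary(n, rng):
    """Haar-distributed matrix from U(n): QR of a complex Ginibre matrix,
    with the phases of R's diagonal fixed (Mezzadri's recipe)."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

def cue_moment(n, k, samples=4000, theta=0.0, seed=0):
    """Monte Carlo estimate of the CUE moment E[|Z(U, theta)|^(2k)],
    where Z(U, theta) = det(I - U exp(-i theta))."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(samples):
        u = haar_unitary(n, rng)
        z = np.linalg.det(np.eye(n) - u * np.exp(-1j * theta))
        total += abs(z) ** (2 * k)
    return total / samples

# For k = 1 the exact CUE answer is E[|Z|^2] = N + 1 (here 7).
est = cue_moment(n=6, k=1)
```

The function and parameter names are mine; the QR phase correction is needed because plain `np.linalg.qr` alone does not give Haar measure.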
Products of random rectangular matrices
Introduction
Matrix products might define the simplest dynamical systems possible, but nevertheless they still play an important role in investigations of dynamics. They arise naturally as linearizations of smooth systems and also as transition matrices in population dynamics or economic models. Moreover, they can be found in the form of stochastic matrices in the theory of random walks and Markov chains. In the trivial case of just one matrix A, linear algebra allows a complete description and characterization of the dynamics given by the iteration of the matrix, in terms of the eigenvalues and eigenspaces of A. For inhomogeneous matrix products A_n A_{n−1} ⋯ A_1 the situation immediately becomes very complicated. In the case of products of random matrices, where the A_i form a stochastic process, there is at least a possibility to describe the asymptotic behaviour of the dynamics using ergodic theory. This requires that the matrices are chosen in a stationary way according to some probability measure. The history of ergodic theory for products of random matrices really starts with a result of Furstenberg and Kesten [8] on the dominating growth rate, which can be described as follows. If (Ω, F, P) is a probability space and ϑ is a P-preserving transformation, consider a random variable A : Ω → R^{d×d}, d ∈ Z⁺, such that log⁺ ‖A(·)‖ ∈ L¹(Ω, F, P). Then there exists a measurable ϑ-invariant function λ₁ : Ω → R ∪ {−∞} with λ₁⁺ ∈ L¹(Ω, F, P) such that for P-almost all ω ∈ Ω,

lim_{n→∞} (1/n) log ‖A(ϑ^{n−1}ω) ⋯ A(ϑω)A(ω)‖ = λ₁(ω).
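The Furstenberg–Kesten limit above can be estimated numerically. The sketch below (illustrative, not from the text) tracks the growth of one vector under i.i.d. random matrices, renormalising at every step; the ensemble chosen, a uniformly random rotation composed with a fixed stretch, is a hypothetical example for which the exact top exponent is log((2 + 1/2)/2) = log(5/4) ≈ 0.223.

```python
import numpy as np

def top_lyapunov(sample_matrix, dim=2, n_steps=20000, seed=0):
    """Estimate the top Lyapunov exponent lambda_1 of a product of i.i.d.
    random matrices by tracking one vector and renormalising every step."""
    rng = np.random.default_rng(seed)
    v = np.ones(dim) / np.sqrt(dim)
    log_growth = 0.0
    for _ in range(n_steps):
        v = sample_matrix(rng) @ v
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)
        v /= norm                      # keep the vector at unit length
    return log_growth / n_steps

def sample(rng):
    """A uniformly random rotation composed with the fixed stretch diag(2, 1/2)."""
    t = rng.uniform(0.0, 2.0 * np.pi)
    rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return rot @ np.diag([2.0, 0.5])

lam = top_lyapunov(sample)   # exact value for this ensemble: log(5/4)
```

The per-step renormalisation is the standard trick to avoid overflow while accumulating exactly the logarithmic growth rate in the theorem.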
Percolation on random Johnson-Mehl tessellations and related models
arXiv:math/0610716v2 [math.PR] 16 Feb 2007

Percolation on random Johnson–Mehl tessellations and related models

Béla Bollobás, Oliver Riordan

October 24, 2006; revised February 16, 2007

Abstract. We make use of the recent proof that the critical probability for percolation on random Voronoi tessellations is 1/2 to prove the corresponding result for random Johnson–Mehl tessellations, as well as for two-dimensional slices of higher-dimensional Voronoi tessellations. Surprisingly, the proof is a little simpler for these more complicated models.

1 Introduction and results

The Johnson–Mehl tessellation of R^d may be described as follows: particles (nucleation centres) arrive at certain times according to a spatial (deterministic or random) birth process on R^d. When a particle arrives, it starts to grow a 'crystal' at a constant rate in all directions. Crystals grow only through 'vacant' space not yet occupied by other crystals; they stop growing when they run into each other. Also, a new particle that arrives inside an existing crystal never forms a crystal at all. This generates a covering of R^d by crystals meeting only in their boundaries: every point of R^d belongs to the crystal that first reached it, or to the boundaries of two or more such crystals if it is reached simultaneously by several crystals. These tessellations were introduced by Johnson and Mehl [14] in 1939 as spatial models for the growth of crystals in metallic systems. The same growth model (but not the resulting tessellation) had been considered earlier by Kolmogorov [15]; similar models were introduced independently by Avrami [1], [2]. Not surprisingly, these models go under a variety of names (see the references): in mathematics, they tend to be called Johnson–Mehl tessellations, so this is the term we shall use here. These models have been used to analyze a great variety of problems from phase transition kinetics to polymers, ecological systems and DNA replication (see Evans [8], Fanfoni and Tomellini [9], [10], Ramos, Rikvold and Novotny [24], Tomellini, Fanfoni and Volpe [25], [26], and Pacchiarotti, Fanfoni and Tomellini [21], to mention only a handful of papers); mathematical properties of these tessellations have been studied by Gilbert [12], Miles [17], Møller [18, 19], Chiu and Quine [5], and Penrose [23], among others.

If all particles arrive at the same time then we get a Voronoi tessellation; this was introduced into crystallography by Meijering [16] in 1953, although it had been studied much earlier by Delesse [6], Dirichlet [7] and Voronoi [27], in whose honour it is named. Random Voronoi tessellations have been studied in numerous papers: for a list of references, see [4]; here, we shall make heavy use of the results in [3]. For a discussion of many aspects of Voronoi and related tessellations, including random Voronoi tessellations and Johnson–Mehl tessellations, see the book by Okabe, Boots, Sugihara and Chiu [20].

In this paper we are mainly interested in random Johnson–Mehl tessellations of the plane. As in almost all probabilistic models in the literature, we shall assume that the birth process is a time-homogeneous Poisson process of constant intensity, say intensity 1. Thus, our particles arrive randomly on the plane at random times t ≥ 0, according to a homogeneous Poisson process P on R² × [0,∞). For each particle arriving at position w ∈ R² and time t ≥ 0 we have a point z = (w,t) ∈ P. The crystal associated to z = (w,t) reaches a point x ∈ R² at time d₂(x,w) + t, where d₂ denotes Euclidean distance. (The subscript in the notation refers to the power in the norm, not to the dimension: we shall write d_p for the metric on R^d associated to the ℓ^p-norm.) Let ‖·‖_JM denote the norm on R³ defined by ‖(x₁,x₂,t)‖_JM = ‖(x₁,x₂)‖₂ + |t|.

[Figure 1: Part of a random Johnson–Mehl tessellation of R². The dots are the projections onto R² of those points z of a Poisson process in R² × [0,∞) for which the corresponding cell V_z is non-empty.]

unnatural object. Indeed, for a general norm, the cells V_z ⊂ R² need not even be connected, and the associated graph G_P defined below need not be planar.
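The growth rule just described, namely that the crystal of z = (w,t) reaches x at time d₂(x,w) + t, means each point x belongs to the cell whose arrival time is smallest. This can be sketched in a few lines; the helper names are my own, not the paper's:

```python
import numpy as np

def jm_arrival_times(x, particles):
    """Time at which each crystal reaches the point x in R^2.  Each row of
    `particles` is (w1, w2, t): nucleation site w and birth time t; the
    crystal of z = (w, t) reaches x at time d2(x, w) + t."""
    sites, births = particles[:, :2], particles[:, 2]
    return np.linalg.norm(sites - x, axis=1) + births

def jm_cell_of(x, particles):
    """Index of the crystal owning x, i.e. the first one to reach it."""
    return int(np.argmin(jm_arrival_times(x, particles)))

# A particle born much later right next to an early one wins no territory there.
parts = np.array([[0.0, 0.0, 0.0],    # early nucleus at the origin
                  [0.1, 0.0, 5.0]])   # nearby site, but born at time 5
owner = jm_cell_of(np.array([0.05, 0.0]), parts)   # -> 0 (the early crystal)
```

This is exactly the nearest-point rule under ‖·‖_JM restricted to births t ≥ 0, which is why the tessellation can be treated as a (slice of a) Voronoi tessellation in R³.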
For this reason, we shall focus our attention on tessellations associated to ‖·‖_JM and to the Euclidean norm. Another example we shall consider is the norm ℓ¹ on R³, which may be viewed as ℓ¹ ⊕ ℓ¹ on R² ⊕ R¹; the associated tessellation of R² is a Johnson–Mehl type tessellation in which crystals grow as squares whose side-lengths increase at a constant rate. As our main focus will be the Johnson–Mehl tessellation, we shall always take P to be a Poisson process on R² × [0,∞), rather than on R³, noting that the only effect on the resulting tessellation of R² is a rescaling.

Having defined the cells V_z associated to the points z of a Poisson process P, there is a natural way to construct an associated graph G_P: the vertex set may be taken either to be the set of z ∈ P for which V_z ≠ ∅, or all of P (in which case vertices corresponding to empty cells will be isolated). Two vertices are adjacent if the corresponding cells meet, i.e., share one or more boundary points. Ignoring probability zero events, as we may, two vertices are adjacent if and only if their cells have a common boundary arc. (Two cells may share more than one boundary arc; there is an example in Figure 1.)

Our aim is to study site percolation on the random graph G_P, or, equivalently, 'face percolation' on the tessellation {V_z} itself. Let 0 < p < 1 be a parameter. We assign a state, open or closed, to each vertex of G_P, so that, given P, the states of the vertices are independent, and each is open with probability p. We are interested in the question 'for which p does G_P contain an infinite connected subgraph all of whose vertices are open?'. Equivalently, we may colour the cells V_z of the tessellation independently, taking each cell to be black with probability p and white otherwise, and we ask for which p there is an unbounded black component. We shall switch freely between these two viewpoints, writing P_p for the (common) associated probability measure. Let us say that a point x ∈ R² is black if it lies in a black cell, and white if it lies in a white
cell. Note that a point may be both black and white, if it lies in the boundary of two cells. Let C₀ be the set of points of R² joined to the origin by a black path, i.e., a topological path in R² every point of which is black. Let z₀ be the a.s. unique point of P in whose cell the origin lies, and let C₀^G be the open cluster of G_P containing z₀, i.e., the set of all vertices of G_P joined to z₀ by a path in the graph G_P in which every vertex is open. Since the cells V_z are connected (see Section 2), the set C₀ ⊂ R² is precisely the union of the V_z for z ∈ C₀^G. Let θ(p) = P_p(|C₀^G| = ∞) = P_p(C₀ is unbounded), and let χ(p) = E_p(|C₀^G|), where E_p is the expectation corresponding to P_p. Note that the graph G_P depends on the metric d as well as on P. Thus θ(p) and χ(p) depend on d; most of the time, we suppress this dependence. We say that our coloured random tessellation percolates if θ(p) > 0. It is easy to see (from Kolmogorov's 0-1 law, say) that, in this case, the tessellation a.s. contains an unbounded black component, while if θ(p) = 0, then a.s. there is none. We write

p_H = p_H(d) = inf{p : θ(p) > 0}

for the Hammersley critical probability associated to percolation on our random tessellation, and

p_T = p_T(d) = inf{p : χ(p) = ∞}

for the corresponding Temperley critical probability.

Our main aim in this paper is to determine the critical probabilities for the Johnson–Mehl tessellation and for a two-dimensional slice of the three-dimensional Voronoi tessellation. The corresponding task for the random Voronoi tessellation associated to a homogeneous Poisson process on R² was accomplished recently in [3], where it was proved that p_H = p_T = 1/2.

Theorem 1. Let d denote either d_JM or d₂, let P be a homogeneous Poisson process on R² × [0,∞) or on R³, and let G_P be the graph associated to the tessellation {V_z} of R² defined by (1). Then p_H(d) = p_T(d) = 1/2. More precisely, θ(p) > 0 if and only if p > 1/2 and, for every p < 1/2, there is a constant a = a(p) > 0 such that

P_p(size(C₀) ≥ n) ≤ exp(−a(p)n)

for all n ≥ 1, where size(C₀) is the area of C₀, the diameter of C₀, or the number |C₀^G| of cells in C₀.

The proof of the corresponding result for Voronoi tessellations in [3] is rather lengthy. Much of this proof adapts easily to the Johnson–Mehl setting, including, for example, the analogue of the Russo–Seymour–Welsh Lemma. However, the hardest part of the proof, a certain technical lemma, Theorem 6.1 in [3], does not. This result asserts that one can approximate the continuous Poisson process P by a suitable discrete process; the proof of this extremely unsurprising statement makes up a significant fraction of the length of [3]. The analogue of this result for the Johnson–Mehl model is Theorem 8 below; because the arguments depend on the details of the geometry, a fresh proof is required here. Surprisingly, although the Johnson–Mehl model is more complicated than the Voronoi model, the proof turns out to be simpler, though still not short. The key difference is that we can use the third dimension of the model to our advantage.

In the next section we describe basic properties of the Johnson–Mehl model. In Section 3 we outline the proof of Theorem 1, assuming Theorem 8; this part of the paper consists of a straightforward adaptation of arguments from [3]. The heart of the present paper is Section 4, where we prove the technical approximation lemma for the Johnson–Mehl model. In the final section we discuss some generalizations.

2 Basic properties

The probability that some point of the plane is equidistant from four points of P is zero. Hence, with probability 1, at most three cells V_z of the tessellation associated to P meet at any point. We shall always assume that P has this property. Similarly, given any measure zero set N (for example, the boundary of a fixed rectangle), we may assume that no point of N lies in three cells.
Also, as any ball in R³ contains only finitely many points of P (a.s. or always, depending on the definition of a Poisson process one chooses), we shall assume that every disk in R² meets finitely many cells V_z.

If we take our metric d to be the Euclidean metric d₂ or the ℓ¹-metric d₁ (and take P to be a Poisson process on R³), then the cells V_z are two-dimensional sections of (bounded) convex sets (in fact polyhedra) in R³, and hence convex. For d = d_JM this is not true, but the cell V_z associated to a point z = (w,t) ∈ P is still a star domain, with centre w: if x ∈ V_z and y is a point on the line segment wx, as in Figure 2, then we have

d_JM((y,0), z) = ‖y − w‖₂ + t = ‖x − w‖₂ − ‖y − x‖₂ + t = d_JM((x,0), z) − d_JM((x,0), (y,0)),

while for any z′ ∈ R³,

d_JM((y,0), z′) ≥ d_JM((x,0), z′) − d_JM((x,0), (y,0))

by the triangle inequality. Since d_JM((x,0), z) ≤ d_JM((x,0), z′) for all z′ ∈ P, the same inequality for y follows, i.e., y ∈ V_z. Thus V_z is a star domain, and in particular V_z is connected. Of course, the same argument applies to any metric d on R³ = R² ⊕ R that is the direct sum of a metric on R² and one on R.

[Figure 2: A point y on the line segment wx in the plane. As we move towards w from x at rate 1, the d_JM-distance from z decreases at rate 1. The d_JM-distance from any other z′ ∈ P decreases at most this fast, so if x lies in V_z then so does y.]

Rather than first constructing a Poisson process P on R² × [0,∞), and then colouring the points of P black with probability p and white with probability 1 − p, equivalently we may start with two independent Poisson processes P⁺, P⁻ with intensities p and (1 − p), corresponding to the black and white points, respectively. This is the viewpoint we shall adopt most of the time. In this viewpoint, our state space Ω consists of all pairs (X⁺, X⁻) of discrete subsets of R² × [0,∞). An event E ⊂ Ω is black-increasing, or simply increasing, if, whenever (X₁⁺, X₁⁻) ∈ E and (X₂⁺, X₂⁻) ∈ Ω with X₁⁺ ⊂ X₂⁺ and X₁⁻ ⊃ X₂⁻, then (X₂⁺, X₂⁻) ∈ E. In other words, E is increasing if it is preserved by the addition of (black) points to X⁺ and the deletion of (white) points from X⁻. If x ∈ R², then 'x is black' is an increasing event, and so is any event of the form 'there exists a black path P ⊂ R² with certain properties.' It is straightforward to check that Harris's Lemma concerning correlation of increasing events extends to the present context; see [3].

Lemma 2. Let E₁ and E₂ be (black-)increasing events, and let 0 < p < 1. Then P_p(E₁ ∩ E₂) ≥ P_p(E₁)P_p(E₂).

Let us note the following simple fact for future reference.

Lemma 3. There is an absolute constant A = A(d) with the following property: let S ⊂ R² be a set with diameter at most s, and let loc(S) be the event that every point of S is within d-distance A(log s)^{1/3} of some point of P. Then P(loc(S)) = 1 − o(1) as s → ∞.

This lemma is a simple consequence of the basic properties of Poisson processes (and is also a special case of a very weak form of a result of Penrose [22]); we omit the proof.

3 Reduction to a coupling result

In this section we present a proof of Theorem 1, assuming a certain coupling result, Theorem 8 below, that will allow us to discretize our Poisson process. In a sense, Theorem 8 is a technical lemma, and the arguments in this section are the heart of the proof. However, as in [3], the hardest part of the overall proof is the proof of Theorem 8, presented in the next section. The arguments in this section are, mutatis mutandis, exactly the same as those for random Voronoi percolation in [3], so in places we shall only outline the details.

Given a rectangle R = [a,b] × [c,d] ⊂ R², a < b, c < d, let H(R) = H_b(R) be the event that there is a piecewise linear path P ⊂ R joining the left- and right-hand sides of R with every point of P black. When H_b(R) holds, we say that R has a black horizontal crossing. Let V(R) = V_b(R) be the event that R has a black vertical crossing, defined similarly. Also, let H_w(R) and V_w(R) denote the events that R has a white horizontal crossing or a white vertical crossing, respectively, defined in the obvious way.

Note that H(R) = H_b(R) is a black-increasing event. Also, from the topology of our tessellation, H(R) holds
if and only if there is a sequence z₁, ..., z_t of black points of P such that the cells V_{z₁} and V_{z_t} meet the left- and right-hand sides of R, respectively, and the cells V_{z_i} and V_{z_{i+1}} meet at some point of R for each i.

If no boundary point of R lies in three or more Voronoi cells and no corner of R lies in two cells (which we may assume, as this event has probability 1), then from the topology of the plane exactly one of the events H_b(R) and V_w(R) holds, so P_p(H_b(R)) + P_p(V_w(R)) = 1.

Note that, from the symmetry of the model with respect to interchanging black and white, P_p(V_w(R)) = P_{1−p}(V_b(R)) for any R and any p. Furthermore, the metrics d we consider are invariant under rotation (of the plane) through π/2, so P_p(H_b(S)) = P_p(V_b(S)) for every square S. It follows that P_{1/2}(H(S)) = 1/2.

Let f_p(ρ, s) denote the P_p-probability of the event H([0, ρs] × [0, s]), i.e., the probability that a rectangle with aspect ratio ρ and vertical side length (or 'scale') s has a black horizontal crossing, and note that

f_{1/2}(1, s) = 1/2.   (2)

The events H(R) are defined in terms of the existence of certain black paths in a certain black/white-colouring of the plane (in which some points are both black and white). This random colouring has the following properties: firstly, the event that any point (or given set of points) is black is a black-increasing event, so any two such events are positively correlated. Secondly, the distribution of the random colouring is invariant under the symmetries of Z², i.e., under translations (by integer or in fact arbitrary vectors), under reflections in the axes, and under rotations through multiples of π/2. Thirdly, well separated regions are asymptotically independent: more precisely, let ρ > 0 and η > 0 be constants. Given ε > 0, if s is large enough, then for R₁ and R₂ two ρs by s rectangles separated by a distance of at least ηs, and E₁ and E₂ any events determined by the colours of the points (of R², not just of P) within R₁ and R₂ respectively, we have |P(E₁ ∩ E₂) − P(E₁)P(E₂)| ≤ ε. To see this, note that when the event loc(R_i) defined in Lemma 3 holds, the colouring of R_i is determined by the positions and colours of the points of P within distance O((log s)^{1/3}) = o(s) of R_i.

As noted in [3], the properties above are all that is needed in the proof of Theorem 4.1 of that paper, which thus carries over to the present setting.

Theorem 4. Let 0 < p < 1 and ρ > 1 be fixed. If lim inf_{s→∞} f_p(1, s) > 0, then lim sup_{s→∞} f_p(ρ, s) > 0.

Together with (2), Theorem 4 has the following corollary.

Corollary 5. Let ρ > 1 be fixed. There is a constant c₀ = c₀(ρ) > 0 such that for every s₀ there is an s > s₀ with f_{1/2}(ρ, s) ≥ c₀.

As in the context of ordinary Voronoi percolation, to prove Theorem 1 it suffices to prove the following result, analogous to Theorem 7.1 of [3].

Theorem 6. Let ρ > 1, p > 1/2, c₁ < 1 and s₁ be given. There is an s > s₁ such that f_p(ρ, s) > c₁.

Theorem 1 may be deduced from Theorem 6 by using the idea of 1-independent percolation. The argument is exactly the same as in the Voronoi setting, so we shall not give it. In the light of the comments above, our task is to deduce Theorem 6 from Corollary 5. The basic idea is simple: for s large, we shall show that a small increase in p greatly increases f_p(ρ, s) = P_p(H(R)), where R is a ρs by s rectangle. If H(R) were a symmetric event in a discrete product space, then this would be immediate from the sharp-threshold result of Friedgut and Kalai [11].
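The self-duality f_{1/2}(1, s) = 1/2 can be probed numerically, at least in a crude pixelated stand-in for the continuum model. The sketch below (my own illustration, not the paper's method) colours the cells of a Poisson–Voronoi tessellation under the Euclidean metric, rasterises the square [0,s]² to a pixel grid, and tests for a left-to-right black crossing; discretisation and finite-size effects bias the estimate, so it should only be roughly near 1/2.

```python
import numpy as np
from collections import deque

def has_lr_black_crossing(grid):
    """BFS for a left-to-right path of black (True) pixels, 4-adjacency."""
    rows, cols = grid.shape
    seen = np.zeros(grid.shape, dtype=bool)
    queue = deque()
    for r in range(rows):
        if grid[r, 0]:
            seen[r, 0] = True
            queue.append((r, 0))
    while queue:
        r, c = queue.popleft()
        if c == cols - 1:
            return True
        for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= rr < rows and 0 <= cc < cols and grid[rr, cc] and not seen[rr, cc]:
                seen[rr, cc] = True
                queue.append((rr, cc))
    return False

def estimate_f(p, s=8.0, res=50, trials=100, seed=0):
    """Monte Carlo estimate of f_p(1, s): the probability that [0, s]^2 has a
    black horizontal crossing when Poisson-Voronoi cells (Euclidean metric,
    intensity 1, each black with probability p) are rasterised to pixels."""
    rng = np.random.default_rng(seed)
    xs = np.linspace(0.0, s, res)
    px, py = np.meshgrid(xs, xs)
    hits = 0
    for _ in range(trials):
        lo, hi = -3.0, s + 3.0            # buffer so boundary cells look right
        n = rng.poisson((hi - lo) ** 2)   # Poisson(intensity 1) point count
        if n == 0:
            continue                      # no points means no black cells
        pts = rng.uniform(lo, hi, size=(n, 2))
        black = rng.random(n) < p
        # Nearest Poisson point (i.e. the Voronoi cell) for every pixel.
        d2 = (px[..., None] - pts[:, 0]) ** 2 + (py[..., None] - pts[:, 1]) ** 2
        hits += has_lr_black_crossing(black[d2.argmin(axis=-1)])
    return hits / trials

est = estimate_f(0.5)
```

All names and parameter choices (buffer width, resolution, trial count) are assumptions made for the illustration, not quantities from the paper.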
Unfortunately, H(R) is neither symmetric nor an event in a discrete product space, so we have two difficulties to overcome. The first is easily dealt with, by working on the torus. Let T(s) denote the s by s torus, i.e., the quotient of R² by the equivalence relation (x,y) ∼ (x′,y′) if x − x′, y − y′ ∈ sZ. Instead of R² × [0,∞), we shall work in the 'thickened torus' T(s) × [0,t]. Note that we do not wrap around in the third direction. It turns out that the precise thickness t is not important in the arguments that follow: we could use any thickness t larger than a certain constant times (log s)^{1/3} but bounded by a power of s. For simplicity we shall set t = s, working in T(s) × [0,s] throughout.

Let us write P_p^{T(s)} for the probability measure associated to a Poisson process P on T(s) × [0,s] of intensity 1 in which each point is coloured black with probability p and white otherwise, independently of the process and of the other points. Alternatively, P_p^{T(s)} is the probability measure associated to a pair (P⁺, P⁻) of independent Poisson processes on T(s) × [0,s] with intensities p and 1 − p, respectively. Our metric d = d_JM (or d₂, or d₁) induces a metric on T(s) × [0,s] ⊂ T(s) × R in a natural way. Thus, associated to P_p^{T(s)} we have a random black/white-coloured tessellation of T(s) by the Voronoi cells associated to (P, d).

If we restrict our attention to a region that does not come close to 'wrapping around' the torus, then P_p^{T(s)} and P_p are essentially equivalent. More precisely, identifying T(s) × [0,s] with [0,s)² × [0,s] ⊂ R³, we may couple the measures P_p^{T(s)} and P_p by realizing our coloured Poisson process on T(s) × [0,s] as a subset of that on R² × [0,∞). Let ε > 0 be fixed, and let R = [εs, (1−ε)s]². Whenever the event loc(R) defined in Lemma 3 holds, the colour of every point of R is determined by the restriction of the Poisson process to [0,s)² × [0,s], so the colourings of R associated to the measures P_p^{T(s)} and P_p coincide. Hence, Lemma 3 has the following consequence.

Lemma 7. Let 0 < a, b < 1 be constant, and let R_s be an as by bs rectangle. Then for every p we have

P_p^{T(s)}(H(R_s)) = P_p(H(R_s)) + o(1)

as s → ∞.

Lemma 7 says that when studying crossings of rectangles, we can work on the torus instead of in the plane. On the torus, there is a natural way to convert H(R) into a symmetric event; we shall return to this shortly. As in [3], we wish to apply a Friedgut–Kalai sharp-threshold result from [11]. A key step is to approximate our Poisson process P on T(s) × [0,s] by a discrete process. Given δ = δ(s) > 0 with s/δ an integer, partition T(s) × [0,s] into (s/δ)³ cubes Q_i of side-length δ in the natural way. (We may ignore the boundaries of the cubes, since the probability that P contains a point in any of these boundaries is 0.) As in [3], the crude state of a cube Q_i is bad if Q_i contains one or more points of P⁻, neutral if Q_i contains no points of P⁻ ∪ P⁺ = P, and good if Q_i contains one or more points of P⁺ but no points of P⁻. Let δ = δ(s) be a function of s that tends to 0 as s → ∞ (later, δ(s) will be a small negative power of s); all asymptotic notation refers to the s → ∞ limit. Writing γ = δ³, since δ(s) → 0, each Q_i is bad, neutral or good with respective probabilities

p_bad = 1 − exp(−γ(1−p)) ∼ γ(1−p),
p_neutral = exp(−γ),   (3)
p_good = exp(−γ(1−p))(1 − exp(−γp)) ∼ γp.

Also, the crude states of the Q_i are independent.

Writing N = (s/δ)³ for the number of cubes, and representing bad, neutral and good states by −1, 0 and 1 respectively, the measure P_p^{T(s)} induces a product measure on the set Ω_N = {−1,0,1}^N of crude states. To prove results about the continuous process, we shall pass to the discrete setting and then back; starting from a realization (P₁⁺, P₁⁻) of our Poisson process, first we generate the corresponding crude states, and then we return to a possibly different realization (P₂⁺, P₂⁻) consistent with the same crude states.
An event such as H(R) need not survive these transitions: a point x or path P may be black with respect to (P₁⁺, P₁⁻) but not with respect to (P₂⁺, P₂⁻). To deal with this problem, we consider a 'robust' version of the event that a point or path is black. Given η > 0, let us say that a point x ∈ T(s) is η-robustly black with respect to (P⁺, P⁻) if the closest point of P⁺ to x is at least a distance η closer than the closest point of P⁻, where all distances are measured in the metric d. [Note that whenever x is η-robustly black, the entire (η/2)-neighbourhood of x is black. There is no reverse implication: for any η > 0 and any r, it is possible for the r-neighbourhood of x to be black without x being η-robustly black.] A path P is η-robustly black if every point of P is η-robustly black. Set

C_d = sup{d(x,y) : x, y ∈ [0,1]³} < ∞.

If (P_i⁺, P_i⁻), i = 1, 2, are realizations of our Poisson process on T(s) × [0,s] consistent with the same crude state, and a point x ∈ T(s) is (2C_d δ)-robustly black with respect to (P₁⁺, P₁⁻), then it is easy to check that x is black with respect to (P₂⁺, P₂⁻).

Our starting point for the proof of Theorem 6 is Corollary 5, which gives us (with reasonable probability) a certain black path. Fortunately, we can 'bump up' a black path to a robustly black path at the cost of increasing p slightly, using the following analogue of Theorem 6.1 of [3]. Here, and in what follows, we say that an event holds with high probability, or whp, if it has probability 1 − o(1) as s → ∞ with any other parameters fixed.

Theorem 8. Let d denote either d_JM or d₂, and let 0 < p₁ < p₂ < 1 and ε > 0 be given. Let δ = δ(s) be any function with 0 < δ(s) ≤ s^{−ε}. We may construct in the same probability space Poisson processes P₁⁺, P₁⁻, P₂⁺ and P₂⁻ on T(s) × [0,s] of intensities p₁, 1 − p₁, p₂ and 1 − p₂, respectively, so that P_i⁺ and P_i⁻ are independent for i = 1, 2, and the following global event E_gl holds whp as s → ∞: for every piecewise-linear path P₁ ⊂ T(s) which is black with respect to (P₁⁺, P₁⁻) there is a piecewise-linear path P₂ ⊂ T(s) which is (2C_d δ)-robustly black with respect to (P₂⁺, P₂⁻), such that every point of P₂ is within distance log s of some point of P₁ and vice versa.

The proof of this result is a little involved, and will be given in the next section. This is the only part of the present paper that is essentially different from the arguments for usual Voronoi tessellations given in [3]. Before turning to the proof of Theorem 8, let us outline how Theorem 6 follows. The proof is exactly the same as that in Section 7 of [3], mutatis mutandis: we use Corollary 5 and Theorem 8 in place of their analogues Corollary 4.2 and Theorem 6.1 of [3]; we write (2C_d δ)-robustly black in place of 4δ-robustly black; γ = δ³, the volume of each small cube Q_i, replaces γ = δ², the area of a small square S_i in [3]; finally, N = (s/δ)³, the number of cubes Q_i, replaces N = (s/δ)², the number of squares S_i.

Very roughly, the strategy of the proof is as follows (for details see [3]). Fix p > 1/2, let ε > 0 be chosen below (depending on p), and let s be 'sufficiently large'. From Corollary 5, after increasing s if necessary, the crossing probability f_{1/2}(10, s/13) is at least some positive absolute constant. By Lemma 7, it follows that in the torus, i.e., in the measure P_{1/2}^{T(s)}, the probability that a given 10s/13 by s/13 rectangle has a black horizontal crossing is also at least a positive constant. Set p′ = (p + 1/2)/2, say, so that 1/2 < p′ < p, and set δ = s^{−ε}, decreasing δ slightly if necessary so that s/δ is an integer. Using Theorem 8, we can convert a black path in P_{1/2}^{T(s)} to a 'nearby' robustly black path in P_{p′}^{T(s)}: it follows that the P_{p′}^{T(s)}-probability that a given 3s/4 by s/12 rectangle has a (2C_d δ)-robustly black horizontal crossing is not too small, i.e., is at least some constant c > 0.

The sharp-threshold result we shall need is Theorem 2.2 of [3], a simple modification of a result of Friedgut and Kalai, Theorem 3.2 of [11]. Consider the state space Ω_N = {−1,0,1}^N with a product measure, in which the coordinates are independent and identically distributed. An event E in this space is increasing if ω = (ω_i)_{i=1}^N ∈ E and ω_i ≤ ω′_i
for every i imply ω′ ∈ E. Also, E is symmetric if there is a group acting transitively on the coordinates 1, 2, ..., N whose induced action on Ω_N preserves E. Roughly speaking, Theorem 2.2 of [3] says that if E is a symmetric increasing event in Ω_N, and we consider product measures on Ω_N in which the probability that a given coordinate is non-zero is 'small', say bounded by p_max, then increasing the probability that each coordinate is 1 by at least ∆ and decreasing the probability that it is −1 by at least ∆ is enough to increase the probability of E from η to 1 − η, where

∆ = C log(1/η) p_max log(1/p_max) / log N

and C is constant. (For details, see [3].)

To apply the result above, we need a symmetric event in a discrete product space. To achieve symmetry, following the notation in Section 7 of [3], we simply consider the event E₃ that some 3s/4 by s/12 rectangle in T(s) has a robustly black horizontal crossing. To convert to a discrete product space, we divide T(s) × [0,s] into N = (s/δ)³ cubes of volume γ = δ³, and consider the crude state of each cube as defined above. Let E₃^crude be the event that the crude states of the cubes are consistent with E₃, which may be naturally identified with an event in Ω_N. Note that

P_{p′}^{T(s)}(E₃^crude) ≥ P_{p′}^{T(s)}(E₃) ≥ c > 0.

The Friedgut–Kalai result implies that a small increase in the probability of black points increases the probability of E₃^crude dramatically (details below). It follows that P_p^{T(s)}(E₃^crude) is very close to 1 if s is large. Hence, there is a very high P_p^{T(s)}-probability that some 3s/4 by s/12 rectangle has a black crossing. (Not necessarily a robustly black crossing: in passing to the discrete approximation and back again, points of our Poisson process may move slightly. However, as noted above, any crossing that was robustly black remains black.) By a simple application of the square-root trick, one can deduce that the P_p^{T(s)}-probability that a fixed s/2 by s/6 rectangle in T(s) has a black horizontal crossing is also very close to 1. Finally, using Lemma 7 again, it follows that f_p(3, s/6) can be made arbitrarily close to 1, and Theorem 6 follows.

Turning to the quantitative application of the sharp-threshold result, we may take η to be a (very small) absolute constant. Each cube Q_i is very small, and the probability that a cube Q_i is either good or bad is at most γ = δ³, so we may take p_max = γ. Also, passing from P_{p′}^{T(s)} to P_p^{T(s)} increases the probability that a given cube is good, and decreases the probability that it is bad, by roughly (p − p′)γ; see (3). Hence, to deduce Theorem 6 from the Friedgut–Kalai result, we need

(p − p′)γ ≥ C′ γ log(1/γ) / log N,

for some constant C′. With p and p′ fixed, this reduces to C″ log(1/γ)/log N < 1. Since N = s^{Θ(1)} and γ = s^{−Θ(ε)}, this condition can be met by choosing ε sufficiently small. Note that it is irrelevant whether γ = δ³ and N = (s/δ)³, as here, or γ = δ² and N = (s/δ)², as in [3]. Indeed, this part of the argument works unchanged in any dimension: the key point is that we can afford only to discretize to a scale δ given by an arbitrarily small negative power of s.
Fortunately, Theorem 8 applies for such a δ.

4 Replacing black paths by robustly black paths

It remains only to prove Theorem 8. Roughly speaking, this states that a small increase in the probability p that each point is black allows any black path to be replaced by a nearby robustly black path. The proof, to which this section is devoted, turns out to be the hardest part of the paper. We shall use the following fact about random Voronoi tessellations in three dimensions; here O denotes the origin.

Theorem 9. Let P be a homogeneous Poisson process on R³ of intensity 1, and let d denote d₂ or d_JM. For A > 0 let E_k = E_{k,A} be the event that P contains k points P₁, ..., P_k with the following property: there are points Q₁, ..., Q_k ∈ R³ and real numbers 0 < r₁, ..., r_k < Ak^{1/3} such that d(O, Q_i) = d(P_i, Q_i) = r_i for every i, and d(P_j, Q_i) ≥ r_i for all i and j. If A > 0 and C > 0 are constant, then

P(E_k) = o(e^{−Ck})

as k → ∞.

Theorem 9 is essentially equivalent to the following statement: if V₀ is the cell of the origin in the Voronoi tessellation of R³ defined using the point set P ∪ {O} and metric d, then the probability that V₀ has at least k faces is o(e^{−Ck}), for any constant C. In fact, Theorem 9 easily implies this: suppose
Matrix calculus

In mathematics, matrix calculus is a specialized notation for doing multivariable calculus, especially over spaces of matrices, where it defines the matrix derivative. This notation was developed to describe systems of differential equations and to take derivatives of matrix-valued functions with respect to matrix variables. This notation is commonly used in statistics and engineering, while the tensor index notation is preferred in physics.

Notice

This article uses a different definition for vector and matrix calculus than the form often encountered within the fields of estimation theory and pattern recognition. The resulting equations will therefore appear to be transposed when compared to the equations used in textbooks within these fields.

Notation

Let M(n,m) denote the space of real n×m matrices with n rows and m columns; such matrices will be denoted using bold capital letters: A, X, Y, etc. An element of M(n,1), that is, a column vector, is denoted with a boldface lowercase letter: a, x, y, etc. An element of M(1,1) is a scalar, denoted with lowercase italic typeface: a, t, x, etc. X^T denotes the matrix transpose, tr(X) is the trace, and det(X) is the determinant. All functions are assumed to be of differentiability class C¹ unless otherwise noted.
Generally, letters from the first half of the alphabet (a, b, c, …) will be used to denote constants, and letters from the second half (t, x, y, …) to denote variables.

Vector calculus

Because the space M(n,1) is identified with the Euclidean space R^n and M(1,1) is identified with R, the notations developed here can accommodate the usual operations of vector calculus.

• The tangent vector to a curve x : R → R^n is dx/dt, with components dx_i/dt.
• The gradient of a scalar function f : R^n → R has components (∇f)_i = ∂f/∂x_i. The directional derivative of f in the direction of v is then ∇_v f = ∇f · v.
• The pushforward or differential of a function f : R^m → R^n is described by the Jacobian matrix (Jf)_{ij} = ∂f_i/∂x_j. The pushforward along f of a vector v in R^m is df(v) = (Jf) v.

Matrix calculus

For the purposes of defining derivatives of simple functions, not much changes with matrix spaces; the space of n×m matrices is isomorphic to the vector space R^{nm}. The three derivatives familiar from vector calculus have close analogues here, though beware the complications that arise in the identities below.

• The tangent vector of a curve F : R → M(n,m) is dF/dt, with entries dF_{ij}/dt.
• The gradient of a scalar function f : M(n,m) → R has entries (∇f)_{ij} = ∂f/∂X_{ji}. Notice that the indexing of the gradient with respect to X is transposed as compared with the indexing of X. The directional derivative of f in the direction of matrix Y is given by ∇_Y f = tr(∇f · Y).
• The differential or matrix derivative of a function F : M(n,m) → M(p,q) is an element of M(p,q) ⊗ M(m,n), a fourth-rank tensor (the reversal of m and n here indicates the dual space of M(n,m)). In short it is an m×n matrix each of whose entries is a p×q matrix: (∂F/∂X)_{ij} = ∂F/∂X_{ji}, where each ∂F/∂X_{ji} is a p×q matrix defined as above. Note also that this matrix has its indexing transposed: m rows and n columns. The pushforward along F of an n×m matrix Y in M(n,m) is then dF(Y) = tr((∂F/∂X) · Y), with the product and trace interpreted as formal block matrices.

Note that this definition encompasses all of the preceding definitions as special cases. According to Jan R.
Magnus and Heinz Neudecker, the following notations are both unsuitable, as the determinants of the resulting matrices would have "no interpretation" and "a useful chain rule does not exist" if these notations are being used:[1]

1. the block arrangement in which each entry of F is differentiated with respect to the whole matrix X;
2. the block arrangement in which the whole matrix F is differentiated with respect to each entry of X.

The Jacobian matrix, according to Magnus and Neudecker,[1] is DF(X) = ∂ vec F(X) / ∂ (vec X)^T.

Identities

Note that matrix multiplication is not commutative, so in these identities the order must not be changed.

• Chain rule: if z is a function of y, which in turn is a function of x, and these are all column vectors, then ∂z/∂x = (∂y/∂x)(∂z/∂y).
• Product rule: for column vectors y and z depending on x, ∂(y^T z)/∂x = (∂y/∂x) z + (∂z/∂x) y.

In all cases where the derivatives do not involve tensor products (for example, Y has more than one row and X has more than one column), the derivative is an ordinary matrix and the usual matrix algebra applies.

Examples

Derivative of linear functions. This section lists some commonly used vector derivative formulas for linear equations evaluating to a vector; in the convention used here, for example, ∂(Ax)/∂x = A^T and ∂(x^T A)/∂x = A.

Derivative of quadratic functions. This section lists some commonly used vector derivative formulas for quadratic matrix equations evaluating to a scalar; for example, ∂(x^T A x)/∂x = (A + A^T) x. Related to this is the derivative of the Euclidean norm: ∂‖x‖/∂x = x/‖x‖.

Derivative of matrix traces. This section shows examples of matrix differentiation of common trace equations; for example, ∂ tr(AX)/∂X = A in the transposed-gradient convention above.

Derivative of matrix determinant. For invertible X, ∂ det(X)/∂X = det(X) X^{-1} in this convention (Jacobi's formula).

Relation to other derivatives

The matrix derivative is a convenient notation for keeping track of partial derivatives for doing calculations. The Fréchet derivative is the standard way in the setting of functional analysis to take derivatives with respect to vectors. In the case that a matrix function of a matrix is Fréchet differentiable, the two derivatives will agree up to translation of notations. As is the case in general for partial derivatives, some formulae may extend under weaker analytic conditions than the existence of the derivative as an approximating linear mapping.

Usages

Matrix calculus is used for deriving optimal stochastic estimators, often involving the use of Lagrange multipliers.
This includes the derivation of:

• the Kalman filter
• the Wiener filter
• the expectation-maximization algorithm for Gaussian mixtures

Alternatives

The tensor index notation with its Einstein summation convention is very similar to matrix calculus, except that one writes only a single component at a time. It has the advantage that one can easily manipulate arbitrarily high-rank tensors, whereas tensors of rank higher than two are quite unwieldy in matrix notation. Note that a matrix can be considered simply a tensor of rank two.

Notes

[1] Magnus, Jan R.; Neudecker, Heinz (1999 [1988]). Matrix Differential Calculus. Wiley Series in Probability and Statistics (revised ed.). Wiley. pp. 171–173.
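The tables of concrete formulas referred to above can always be spot-checked numerically. A minimal sketch (assuming NumPy is available; the helper name numerical_grad is ours) verifying the quadratic-form identity d(x^T A x)/dx = (A + A^T) x against central finite differences:

```python
import numpy as np

def numerical_grad(f, x, h=1e-6):
    """Central finite-difference gradient of a scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # a generic (non-symmetric) constant matrix
x = rng.standard_normal(4)

f = lambda v: v @ A @ v           # the scalar quadratic form x^T A x
g_numeric = numerical_grad(f, x)
g_analytic = (A + A.T) @ x        # identity: d(x^T A x)/dx = (A + A^T) x
assert np.allclose(g_numeric, g_analytic, atol=1e-4)
```

The same pattern (symbolic formula vs. finite differences) applies to the trace and determinant identities as well.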
Random matrix

In probability theory and mathematical physics, a random matrix is a matrix-valued random variable. Many important properties of physical systems can be represented mathematically as matrix problems. For example, the thermal conductivity of a lattice can be computed from the dynamical matrix of the particle-particle interactions within the lattice.

Motivation

Physics

In nuclear physics, random matrices were introduced by Eugene Wigner[1] to model the spectra of heavy atoms. He postulated that the spacings between the lines in the spectrum of a heavy atom should resemble the spacings between the eigenvalues of a random matrix, and should depend only on the symmetry class of the underlying evolution.[2] In solid-state physics, random matrices model the behaviour of large disordered Hamiltonians in the mean-field approximation. In quantum chaos, the Bohigas–Giannoni–Schmit (BGS) conjecture[3] asserts that the spectral statistics of quantum systems whose classical counterparts exhibit chaotic behaviour are described by random matrix theory. Random matrix theory has also found applications to quantum gravity in two dimensions,[4] mesoscopic physics,[5] and more.[6][7][8][9][10]

Mathematical statistics and numerical analysis

In multivariate statistics, random matrices were introduced by John Wishart for the statistical analysis of large samples;[11] see estimation of covariance matrices. Significant results have been shown that extend the classical scalar Chernoff, Bernstein, and Hoeffding inequalities to the largest eigenvalues of finite sums of random Hermitian matrices.[12] Corollary results are derived for the maximum singular values of rectangular matrices. In numerical analysis, random matrices have been used since the work of John von Neumann and Herman Goldstine[13] to describe computation errors in operations such as matrix multiplication.
See also[14] for more recent results.

Number theory

In number theory, the distribution of zeros of the Riemann zeta function (and other L-functions) is modelled by the distribution of eigenvalues of certain random matrices.[15] The connection was first discovered by Hugh Montgomery and Freeman J. Dyson. It is connected to the Hilbert–Pólya conjecture.

Gaussian ensembles

The most studied random matrix ensembles are the Gaussian ensembles.

The Gaussian unitary ensemble GUE(n) is described by the Gaussian measure with density (1/Z_GUE(n)) exp(-tr H^2 / 2) on the space of n × n Hermitian matrices H = (H_ij), i,j = 1, …, n. Here Z_GUE(n) = 2^{n/2} π^{n^2/2} is a normalisation constant, chosen so that the integral of the density is equal to one. The term unitary refers to the fact that the distribution is invariant under unitary conjugation. The Gaussian unitary ensemble models Hamiltonians lacking time-reversal symmetry.

The Gaussian orthogonal ensemble GOE(n) is described by the Gaussian measure with density (1/Z_GOE(n)) exp(-tr H^2 / 4) on the space of n × n real symmetric matrices. Its distribution is invariant under orthogonal conjugation, and it models Hamiltonians with time-reversal symmetry.

The Gaussian symplectic ensemble GSE(n) is described by the Gaussian measure with density (1/Z_GSE(n)) exp(-tr H^2) on the space of n × n quaternionic Hermitian matrices. Its distribution is invariant under conjugation by the symplectic group, and it models Hamiltonians with time-reversal symmetry but no rotational symmetry.

The joint probability density for the eigenvalues λ_1, λ_2, ..., λ_n of GUE/GOE/GSE is given by

    ρ(λ_1, ..., λ_n) = (1/Z_{β,n}) Π_{i<j} |λ_i - λ_j|^β exp(-(β/4) Σ_k λ_k^2),   (1)

where β = 1 for GOE, β = 2 for GUE, and β = 4 for GSE; Z_{β,n} is a normalisation constant which can be explicitly computed, see Selberg integral.
In the case of GUE (β = 2), the formula (1) describes a determinantal point process.

Generalisations

Wigner matrices are random Hermitian matrices whose entries above the main diagonal are independent random variables with zero mean and identical second moments. Invariant matrix ensembles are random Hermitian matrices with a density on the space of real symmetric / Hermitian / quaternionic Hermitian matrices of the form (1/Z_n) exp(-n tr V(H)), where the function V is called the potential. The Gaussian ensembles are the only common special cases of these two classes of random matrices.

Spectral theory of random matrices

The spectral theory of random matrices studies the distribution of the eigenvalues as the size of the matrix goes to infinity.

Global regime

In the global regime, one is interested in the distribution of linear statistics of the form N_{f,H} = n^{-1} tr f(H).

Empirical spectral measure

The empirical spectral measure μ_H of H is defined by μ_H(A) = n^{-1} #{ eigenvalues of H in A }, for A ⊂ R. Usually, the limit of μ_H is a deterministic measure; this is a particular case of self-averaging. The cumulative distribution function of the limiting measure is called the integrated density of states and is denoted N(λ). If the integrated density of states is differentiable, its derivative is called the density of states and is denoted ρ(λ).

The limit of the empirical spectral measure for Wigner matrices was described by Eugene Wigner; see Wigner's semicircle law. A more general theory was developed by Marchenko and Pastur.[16][17] The limit of the empirical spectral measure of invariant matrix ensembles is described by a certain integral equation which arises from potential theory.[18]

Fluctuations

For the linear statistics N_{f,H} = n^{-1} Σ_j f(λ_j), one is also interested in the fluctuations about ∫ f(λ) dN(λ).
For many classes of random matrices, a central limit theorem of the form

    n ( N_{f,H} - ∫ f(λ) dN(λ) ) → N(0, σ_f^2)   in distribution

is known; see[19][20] and others.

Local regime

In the local regime, one is interested in the spacings between eigenvalues and, more generally, in the joint distribution of eigenvalues in an interval of length of order 1/n. One distinguishes between bulk statistics, pertaining to intervals inside the support of the limiting spectral measure, and edge statistics, pertaining to intervals near the boundary of the support.

Bulk statistics

Formally, fix λ_0 in the interior of the support of N(λ). Then consider the point process

    Ξ(λ_0) = Σ_j δ_{ n ρ(λ_0) (λ_j - λ_0) },

where λ_j are the eigenvalues of the random matrix. The point process Ξ(λ_0) captures the statistical properties of the eigenvalues in the vicinity of λ_0. For the Gaussian ensembles, the limit of Ξ(λ_0) is known;[2] thus, for GUE it is a determinantal point process with the kernel

    K(x, y) = sin(π(x - y)) / (π(x - y))

(the sine kernel). The universality principle postulates that the limit of Ξ(λ_0) as n → ∞ should depend only on the symmetry class of the random matrix (and neither on the specific model of random matrices nor on λ_0). This was rigorously proved for several models of random matrices: for invariant matrix ensembles,[21][22] for Wigner matrices,[23][24] and others.

Edge statistics

See Tracy–Widom distribution.

Other classes of random matrices

Wishart matrices

Main article: Wishart distribution

Wishart matrices are n × n random matrices of the form H = X X*, where X is an n × n random matrix with independent entries, and X* is its conjugate transpose.
In the important special case considered by Wishart, the entries of X are identically distributed Gaussian random variables (either real or complex). The limit of the empirical spectral measure of Wishart matrices was found[16] by Vladimir Marchenko and Leonid Pastur; see Marchenko–Pastur distribution.

Random unitary matrices

See circular ensembles.

Non-Hermitian random matrices

See circular law.

Guide to references

• Books on random matrix theory:[2][25]
• Survey articles on random matrix theory:[14][17][26][27]
• Historic works:[1][11][13]

References

1. ^ a b Wigner, E. (1955). "Characteristic vectors of bordered matrices with infinite dimensions". Ann. of Math. 62 (3): 548–564. doi:10.2307/1970079.
2. ^ a b c Mehta, M.L. (2004). Random Matrices. Amsterdam: Elsevier/Academic Press. ISBN 0-120-88409-7.
3. ^ Bohigas, O.; Giannoni, M.J.; Schmit, C. (1984). "Characterization of Chaotic Quantum Spectra and Universality of Level Fluctuation Laws". Phys. Rev. Lett. 52: 1–4. doi:10.1103/PhysRevLett.52.1.
4. ^ Franchini, F.; Kravtsov, V.E. (October 2009). "Horizon in random matrix theory, the Hawking radiation, and flow of cold atoms". Phys. Rev. Lett. 103 (16): 166401. doi:10.1103/PhysRevLett.103.166401. PMID 19905710.
5. ^ Sánchez, D.; Büttiker, M. (September 2004). "Magnetic-field asymmetry of nonlinear mesoscopic transport". Phys. Rev. Lett. 93 (10): 106802. arXiv:cond-mat/0404387. doi:10.1103/PhysRevLett.93.106802. PMID 15447435.
6. ^ Rychkov, V.S.; Borlenghi, S.; Jaffres, H.; Fert, A.; Waintal, X. (August 2009). "Spin torque and waviness in magnetic multilayers: a bridge between Valet-Fert theory and quantum approaches". Phys. Rev. Lett. 103 (6): 066602. doi:10.1103/PhysRevLett.103.066602. PMID 19792592.
7. ^ Callaway, D.J.E. (April 1991).
"Random matrices, fractional statistics, and the quantum Hall effect". Phys. Rev. B Condens. Matter 43 (10): 8641–8643. doi:10.1103/PhysRevB.43.8641. PMID 9996505.
8. ^ Janssen, M.; Pracz, K. (June 2000). "Correlated random band matrices: localization-delocalization transitions". Phys. Rev. E 61 (6 Pt A): 6278–86. arXiv:cond-mat/9911467. doi:10.1103/PhysRevE.61.6278. PMID 11088301.
9. ^ Zumbühl, D.M.; Miller, J.B.; Marcus, C.M.; Campman, K.; Gossard, A.C. (December 2002). "Spin-orbit coupling, antilocalization, and parallel magnetic fields in quantum dots". Phys. Rev. Lett. 89 (27): 276803. arXiv:cond-mat/0208436. doi:10.1103/PhysRevLett.89.276803. PMID 12513231.
10. ^ Bahcall, S.R. (December 1996). "Random Matrix Model for Superconductors in a Magnetic Field". Phys. Rev. Lett. 77 (26): 5276–5279. arXiv:cond-mat/9611136. doi:10.1103/PhysRevLett.77.5276. PMID 10062760.
11. ^ a b Wishart, J. (1928). "Generalized product moment distribution in samples". Biometrika 20A (1–2): 32–52.
12. ^ Tropp, J. (2011). "User-Friendly Tail Bounds for Sums of Random Matrices". Foundations of Computational Mathematics. doi:10.1007/s10208-011-9099-z.
13. ^ a b von Neumann, J.; Goldstine, H.H. (1947). "Numerical inverting of matrices of high order". Bull. Amer. Math. Soc. 53 (11): 1021–1099. doi:10.1090/S0002-9904-1947-08909-6.
14. ^ a b Edelman, A.; Rao, N.R. (2005). "Random matrix theory". Acta Numer. 14: 233–297. doi:10.1017/S0962492904000236.
15. ^ Keating, Jon (1993). "The Riemann zeta-function and quantum chaology". Proc. Internat. School of Phys. Enrico Fermi CXIX: 145–185.
16. ^ a b Marčenko, V.A.; Pastur, L.A. (1967). "Distribution of eigenvalues for some sets of random matrices". Mathematics of the USSR-Sbornik 1 (4): 457–483.
doi:10.1070/SM1967v001n04ABEH001994.
17. ^ a b Pastur, L.A. (1973). "Spectra of random self-adjoint operators". Russ. Math. Surv. 28 (1): 1–67. doi:10.1070/RM1973v028n01ABEH001396.
18. ^ Pastur, L.; Shcherbina, M. (1995). "On the Statistical Mechanics Approach in the Random Matrix Theory: Integrated Density of States". J. Stat. Phys. 79 (3–4): 585–611. doi:10.1007/BF02184872.
19. ^ Johansson, K. (1998). "On fluctuations of eigenvalues of random Hermitian matrices". Duke Math. J. 91 (1): 151–204. doi:10.1215/S0012-7094-98-09108-6.
20. ^ Pastur, L.A. (2005). "A simple approach to the global regime of Gaussian ensembles of random matrices". Ukrainian Math. J. 57 (6): 936–966. doi:10.1007/s11253-005-0241-4.
21. ^ Pastur, L.; Shcherbina, M. (1997). "Universality of the local eigenvalue statistics for a class of unitary invariant random matrix ensembles". J. Statist. Phys. 86 (1–2): 109–147. doi:10.1007/BF02180200.
22. ^ Deift, P.; Kriecherbauer, T.; McLaughlin, K.T.-R.; Venakides, S.; Zhou, X. (1997). "Asymptotics for polynomials orthogonal with respect to varying exponential weights". Internat. Math. Res. Notices 16 (16): 759–782. doi:10.1155/S1073792897000500.
23. ^ Erdős, L.; Péché, S.; Ramírez, J.A.; Schlein, B.; Yau, H.T. (2010). "Bulk universality for Wigner matrices". Comm. Pure Appl. Math. 63 (7): 895–925.
24. ^ Tao, T.; Vu, V. (2010). "Random matrices: universality of local eigenvalue statistics up to the edge". Comm. Math. Phys. 298 (2): 549–572. doi:10.1007/s00220-010-1044-5.
25. ^ Anderson, G.W.; Guionnet, A.; Zeitouni, O. (2010). An Introduction to Random Matrices. Cambridge: Cambridge University Press. ISBN 978-0-521-19452-5.
26. ^ Diaconis, Persi (2003). "Patterns in eigenvalues: the 70th Josiah Willard Gibbs lecture". Bulletin of the American Mathematical Society,
New Series 40 (2): 155–178. doi:10.1090/S0273-0979-03-00975-3. MR 1962294.
27. ^ Diaconis, Persi (2005). "What is ... a random matrix?". Notices of the American Mathematical Society 52 (11): 1348–1349. ISSN 0002-9920. MR 2183871.

External links

• Fyodorov, Y. (2011). "Random matrix theory". Scholarpedia 6 (3): 9886.
• Weisstein, E.W. "Random Matrix". MathWorld - A Wolfram Web Resource.

A random matrix is a matrix of given type and size whose entries consist of random numbers from some specified distribution. Random matrix theory is cited as one of the "modern tools" used in Catherine's proof of an important result in prime number theory in the 2005 film Proof.

For a real n×n matrix with elements having a standard normal distribution, the expected number of real eigenvalues has a closed form in terms of a hypergeometric function and a beta function (Edelman et al. 1994, Edelman and Kostlan 1994). Let p_k be the probability that there are exactly k real eigenvalues in the complex spectrum of the matrix. Edelman (1997) gave a closed form for the smallest of these probabilities. The entire probability function of the number of real eigenvalues in the spectrum of a Gaussian real random matrix was derived by Kanzieper and Akemann (2005) as a sum over partitions involving zonal polynomials and the number of pairs of complex-conjugate eigenvalues, making use of a frequency representation of the partitions (Kanzieper and Akemann 2005).
The arguments of these formulas depend on the parity of the matrix dimension and are expressed in terms of matrix traces, generalized Laguerre polynomials, the floor function, and the complementary error function erfc (Kanzieper and Akemann 2005).

Edelman (1997) proved that the density of a random complex pair of eigenvalues of a real matrix, whose elements are taken from a standard normal distribution, has a closed form involving the complementary error function erfc and the gamma function. Integrating over the upper half-plane (and multiplying by 2) gives the expected number of complex eigenvalues (Edelman 1997); the first few values appear in Sloane's A052928, A093605, and A046161.

Girko's circular law considers the (possibly complex) eigenvalues of a set of random real matrices with entries independent and taken from a standard normal distribution, and states that, suitably rescaled, the eigenvalues become uniformly distributed over the unit disk as the matrix size tends to infinity. Wigner's semicircle law states that for large symmetric real matrices with elements taken from a distribution satisfying certain rather general properties, the distribution of eigenvalues is the semicircle function.

If 2×2 matrices are chosen with probability 1/2 from one of

    {{0, 1}, {1, 1}}   or   {{0, 1}, {1, -1}},

then the nth root of the spectral norm of the n-fold product converges to 1.13198824... (Sloane's A078416), where ||·|| denotes the matrix spectral norm (Bougerol and Lacroix 1985, pp. 11 and 157; Viswanath 2000). This is the same constant appearing in the random Fibonacci sequence. The following Mathematica code can be used to estimate this constant.

    With[{n = 100000},
      m = Fold[Dot, IdentityMatrix[2],
            {{0, 1}, {1, #}} & /@ RandomChoice[{-1, 1}, {n}]] // N;
      Log[Sqrt[Max[Eigenvalues[Transpose[m] . m]]]] / n]

SEE ALSO: Complex Matrix, Girko's Circular Law, Integer Matrix, Matrix, Random Fibonacci Sequence, Real Matrix, Wigner's Semicircle Law

REFERENCES:

Bougerol, P. and Lacroix, J. Random Products of Matrices with Applications to Schrödinger Operators. Basel, Switzerland: Birkhäuser, 1985.
Chassaing, P.; Letac, G.; and Mora, M.
"Brocot Sequences and Random Walks on ." In Probability Measures on Groups VII (Ed. H. Heyer). New York: Springer-Verlag, pp. 36–48, 1984.
Edelman, A. "The Probability that a Random Real Gaussian Matrix has Real Eigenvalues, Related Distributions, and the Circular Law." J. Multivariate Anal. 60, 203–232, 1997.
Edelman, A. and Kostlan, E. "How Many Zeros of a Random Polynomial are Real?" Bull. Amer. Math. Soc. 32, 1–37, 1995.
Edelman, A.; Kostlan, E.; and Shub, M. "How Many Eigenvalues of a Random Matrix are Real?" J. Amer. Math. Soc. 7, 247–267, 1994.
Furstenberg, H. "Non-Commuting Random Products." Trans. Amer. Math. Soc. 108, 377–428, 1963.
Furstenberg, H. and Kesten, H. "Products of Random Matrices." Ann. Math. Stat. 31, 457–469, 1960.
Girko, V. L. Theory of Random Determinants. Boston, MA: Kluwer, 1990.
Kanzieper, E. and Akemann, G. "Statistics of Real Eigenvalues in Ginibre's Ensemble of Random Real Matrices." Phys. Rev. Lett. 95, 230201, 2005.
Katz, M. and Sarnak, P. Random Matrices, Frobenius Eigenvalues, and Monodromy. Providence, RI: Amer. Math. Soc., 1999.
Lehmann, N. and Sommers, H.-J. "Eigenvalue Statistics of Random Real Matrices." Phys. Rev. Lett. 67, 941–944, 1991.
Mehta, M. L. Random Matrices, 3rd ed. New York: Academic Press, 1991.
Sloane, N. J. A. Sequences A046161, A052928, A078416, and A093605 in "The On-Line Encyclopedia of Integer Sequences."
Viswanath, D. "Random Fibonacci Sequences and the Number 1.13198824...." Math. Comput. 69, 1131–1155, 2000.

CITE THIS AS: Weisstein, Eric W. "Random Matrix." From MathWorld - A Wolfram Web Resource.

Peter Forrester
Department of Mathematics and Statistics, The University of Melbourne, Parkville, Vic 3010, Australia.

Research Interests

Random matrices. Random matrix theory is concerned with giving analytic statistical properties of the eigenvalues and eigenvectors of matrices defined by a statistical distribution.
It is found that the statistical properties are to a large extent independent of the underlying distribution, and depend only on global symmetry properties of the matrix. Moreover, these same statistical properties are observed in many diverse settings: the spectra of complex quantum systems such as heavy nuclei, the Riemann zeros, the spectra of single-particle quantum systems with chaotic dynamics, and the eigenmodes of vibrating plates, amongst other examples. Imposing symmetry constraints on random matrices leads to relationships with Lie algebras and symmetric spaces, and the internal symmetry of these structures shows itself as a reflection-group symmetry exhibited by the eigenvalue probability densities. The calculation of eigenvalue correlation functions requires orthogonal polynomials, skew orthogonal polynomials, determinants and Pfaffians. The calculation of spacing distributions involves many manifestations of integrable systems theory, in particular Painlevé equations, isomonodromy deformation of differential equations, and the Riemann-Hilbert problem. Topics of ongoing study include the asymptotics of spacing distributions, eigenvalue distributions in the complex plane and low-rank perturbations of the classical random matrix ensembles.

Macdonald polynomial theory. Over forty years ago the many-body Schrödinger operator with 1/r^2 pair potential was isolated as having special properties. Around fifteen years ago families of commuting differential/difference operators based on root systems were identified and subsequently shown to underlie the theory of Macdonald polynomials, which are multivariable orthogonal polynomials generalizing the Schur polynomials. In fact these commuting operators can be used to write the 1/r^2 Schrödinger operator in a factorized form, and the multivariable polynomials are essentially the eigenfunctions. This has the consequence that ground-state dynamical correlations can be computed.
They explicitly exhibit the fractional statistical charge carried by the elementary excitations. This latter notion is the cornerstone of Laughlin's theory of the fractional quantum Hall effect, which earned him the 1998 Nobel prize for physics. The calculation of correlations requires knowledge of special properties of the multivariable polynomials, much of which follows from the presence of a Hecke algebra structure. The study of these special structures is an ongoing project.

Statistical mechanics and combinatorics. Counting configurations on a lattice is a basic concern in the formalism of equilibrium statistical mechanics. Of the many counting problems encountered in this setting, one attracting a good deal of attention at present involves directed non-intersecting paths on a two-dimensional lattice. There are bijections between such paths and Young tableaux, which in turn are in bijective correspondence with generalized permutations and integer matrices. This leads to a diverse array of model systems which relate to random paths: directed percolation, tilings, asymmetric exclusion and growth models, to name a few. The probability density functions which arise typically have the same form as eigenvalue probability density functions in random matrix theory, except that the analogue of the eigenvalues is discrete. One is thus led to consider discrete orthogonal polynomials and integrable systems based on difference equations. The Schur functions are fundamentally related to non-intersecting paths, and this gives rise to interplay with Macdonald polynomial theory.

Statistical mechanics of log-potential Coulomb systems. The logarithmic potential is intimately related to topological charge -- for example, vortices in a fluid carry a topological charge determined by the circulation, and the energy between two vortices is proportional to the logarithm of the separation.
The logarithmic potential is also the potential between two-dimensional electric charges, so properties of the two-dimensional Coulomb gas can be directly related to properties of systems with topological charges. In a celebrated analysis, Kosterlitz and Thouless identified a pairing phase transition in the two-dimensional Coulomb gas. They immediately realized that this mechanism, with the vortices playing the role of the charges, was responsible for the superfluid-normal fluid transition in liquid helium films. In my studies of the two-dimensional Coulomb gas I have exploited the fact that at a special value of the coupling the system is equivalent to the Dirac field and so is exactly solvable. This has provided an analytic laboratory on which to test approximate physical theories, and has also led to the discovery of new universal features of Coulomb systems in their conductive phase.

My book 'Log-gases and Random Matrices' (PUP, 2010). I started working on this project in August 1994, and finished (apart from minor changes) 15 years later. It can be browsed from its Princeton University Press web page.

Random Matrices

Date: Tuesday 29 May - Friday 1 June 2012
Venue: Mathematik-Zentrum, Lipschitz Lecture Hall, Endenicher Allee 60, Bonn
Organizers: Holger Rauhut, Patrik Ferrari, Benjamin Schlein

Abstract

Random matrices and their analysis play an important role in various areas, such as mathematical physics, statistics, Banach space geometry, signal processing (compressive sensing), analysis of optimization algorithms, growth models and more. The interaction with application fields, in particular, has triggered high research activity in random matrix theory recently. The proposed workshop aims at bringing together experts and junior researchers working on various aspects of random matrices, and to report on recent advances.
In particular, we aim at identifying possible new directions and methods that may arise from the combination of different expertise.

Random Matrix Theory and Applications in Theoretical Sciences

Date: December 15 - 17, 2011
Convenors: Gernot Akemann (Bielefeld), Igor Krasovsky (London), Dmitry Savin (Brunel), Igor Smolyarenko (Brunel)

The aim of this workshop is to bring together physicists and mathematicians who work in the area of Random Matrix Theory in a broad sense. The concept of matrices with stochastic matrix elements appears in many modern developments in pure and applied mathematics, physics and other sciences. This workshop will be devoted to topics which have seen a very rapid development in recent years. These include the study of universality using complex analysis and probability theory, the physics of quantum computation and entanglement, turbulence, and the evaluation of economic risk. One of the purposes of the conference is to intensify collaborations between Germany and the United Kingdom in these areas of research, where several highly active international centers are located. We will build upon the experience and networking established through previous workshops, especially a series of annual international workshops based at Brunel University London in 2005-2010. We encourage active participation by students and young scientists, who will comprise a significant fraction of the total of about 40-50 participants.

Large-Dimensional Random Matrix Theory and Its Applications

Principal contributor: Bai Zhidong. Lead institution: Northeast Normal University. Keywords: random matrices; distribution; eigenvalues; eigenvectors.

Summary: Large-dimensional random matrices first appeared in the 1950s, when random matrices were proposed in nuclear physics, as a branch of physics, for the analysis of spectroscopic data.
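Wigner's semicircle law, invoked several times above, is easy to observe numerically. A minimal sketch (assuming NumPy; the scaling convention, with off-diagonal variance 1 and spectrum rescaled by sqrt(n), is one standard choice) that checks the limiting second and fourth moments of the semicircle density, which are the Catalan numbers 1 and 2:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
A = rng.standard_normal((n, n))
M = (A + A.T) / np.sqrt(2)               # real symmetric, off-diagonal variance 1
ev = np.linalg.eigvalsh(M) / np.sqrt(n)  # rescaled spectrum approaches [-2, 2]

# Even moments of the semicircle density sqrt(4 - x^2)/(2 pi) on [-2, 2]
# are the Catalan numbers: E[x^2] = 1, E[x^4] = 2.
m2 = float(np.mean(ev**2))
m4 = float(np.mean(ev**4))
assert abs(m2 - 1.0) < 0.05 and abs(m4 - 2.0) < 0.2
assert -2.2 < ev.min() and ev.max() < 2.2
```

The same experiment with non-Gaussian entries of matching variance illustrates the universality discussed above: the moments converge to the same limits.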
Algebra terminology (Chinese-English glossary)
(0,2) 插值||(0,2) interpolation
0#||zero-sharp
0+||zero-dagger
1-因子||1-factor
3-流形||3-manifold; also called "three-dimensional manifold"
AIC准则||AIC criterion, Akaike information criterion
Ap 权||Ap-weight
A稳定性||A-stability, absolute stability
A最优设计||A-optimal design
BCH 码||BCH code, Bose-Chaudhuri-Hocquenghem code
BIC准则||BIC criterion, Bayesian modification of the AIC
BMOA函数||analytic function of bounded mean oscillation
BMO鞅||BMO martingale
BSD猜想||Birch and Swinnerton-Dyer conjecture
B样条||B-spline
C*代数||C*-algebra; read "C-star algebra"
C0 类函数||function of class C0; also called "class of continuous functions"
CAT准则||CAT criterion, criterion for autoregressive
CM域||CM field
CN 群||CN-group
CW 复形的同调||homology of CW complex
CW复形||CW complex
CW复形的同伦群||homotopy group of CW complexes
CW剖分||CW decomposition
Cn 类函数||function of class Cn; also called "class of n-times continuously differentiable functions"
Cp统计量||Cp-statistic
Random Matrix - Alokesh Banerjee - MBA and an Engineer
• Market attractiveness is based on as many relevant factors as are appropriate in a given context
GE VS BCG
The GE / McKinsey Matrix is more sophisticated than the BCG Matrix in three aspects:
1. Market (industry) attractiveness replaces market growth as the dimension of industry attractiveness. Market attractiveness includes a broader range of factors than the market growth rate alone that can determine the attractiveness of an industry / market.
3. Finally, the GE / McKinsey Matrix works with a 3×3 grid, while the BCG Matrix has only 2×2. This also allows for more sophistication.
GE (General Electric) / McKinsey Multi-Factor Matrix
• Evaluate potential for leadership via segmentation
• Identify weaknesses
• Build strengths
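A multi-factor score of this kind is just a weighted average of factor ratings mapped onto the grid's low/medium/high bands. Here is a minimal sketch (Python; the factor names, weights, ratings, and band cut-offs are illustrative assumptions, not prescribed by the GE/McKinsey framework):

```python
# Hypothetical GE/McKinsey-style multi-factor scoring. All factor names,
# weights, and thresholds below are invented for illustration.

def weighted_score(factors, weights):
    """Weighted average of factor ratings (each rated 1-9)."""
    assert set(factors) == set(weights)
    total_w = sum(weights.values())
    return sum(factors[f] * weights[f] for f in factors) / total_w

def grid_cell(score, lo=3.0, hi=6.0):
    """Map a 1-9 score onto the low/medium/high bands of the 3x3 grid."""
    if score < lo:
        return "low"
    elif score < hi:
        return "medium"
    return "high"

attractiveness = weighted_score(
    {"market growth": 7, "market size": 8, "competitive intensity": 4},
    {"market growth": 0.4, "market size": 0.4, "competitive intensity": 0.2},
)
strength = weighted_score(
    {"market share": 6, "brand": 7, "cost position": 5},
    {"market share": 0.5, "brand": 0.3, "cost position": 0.2},
)
print(grid_cell(attractiveness), grid_cell(strength))
```

Each axis of the 3×3 grid gets its own weighted score; the pair of bands then selects the cell for a business unit.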
Random Matrix Theory and Its Statistical Applications
Introduction: Random Matrix Theory (RMT) is a branch of mathematics that studies the properties and characteristics of matrices with randomly distributed elements. Initially developed in the mid-20th century to model nuclear energy levels, RMT has found extensive applications in various scientific fields, including physics, finance, and computer science. In this article, we will explore the basics of RMT, its statistical applications, and the significance it holds in understanding complex systems.

1. Random Matrix Theory Fundamentals

Random matrices are defined as matrices whose elements are generated from a probability distribution. The properties and statistics of these matrices differ from those of deterministic matrices, leading to unique mathematical characteristics. Key concepts in RMT include:

1.1 Universality
Random matrices exhibit universal behavior, meaning that certain statistical properties are independent of the specific distribution used to generate the matrix elements. This universality enables the application of RMT beyond specific systems, making it a powerful tool for understanding complex phenomena.

1.2 Ensembles
RMT classifies random matrices into different ensembles based on their symmetry properties. The most commonly studied ensembles are the Gaussian ensembles (GOE, GUE, and GSE), characterized by their symmetries and probability distributions. Each ensemble has distinct statistical properties and is utilized to model different physical or financial systems.

2. Statistical Applications of Random Matrix Theory

2.1 Quantum Chaos
Random Matrix Theory plays a crucial role in understanding quantum chaotic systems. By analyzing statistical properties of random matrices, RMT provides insights into the energy levels and spectral statistics of chaotic quantum systems, such as atoms, nuclei, and even quantum billiards. RMT has successfully explained the universal behavior observed experimentally in these systems.

2.2 Financial Markets
The complex dynamics of financial markets have attracted the application of RMT. By considering financial returns as random variables, RMT enables the analysis of correlations, volatility, and risk in financial time series. RMT-based methods have been applied to portfolio optimization, risk management, and financial forecast modeling.

2.3 Wireless Communication Systems
Random Matrix Theory has also found applications in wireless communication systems. Analysis of the random matrices formed by the correlation matrix of received signals helps in understanding multi-antenna systems, improving signal detection algorithms, and enhancing system capacity.

3. The Significance and Limitations of Random Matrix Theory

RMT has been successful in providing powerful analytical tools to understand complex systems in various fields. Its universality and applicability make it a valuable approach for statistical analysis. However, there are certain limitations to consider:

3.1 System-Specific Effects
Despite universality, RMT may not capture all the system-specific characteristics that determine the behavior of a given complex system. Therefore, careful consideration and adaptation of RMT principles are required based on specific applications.

3.2 Computational Challenges
Computational complexity can pose challenges in applying RMT to large-scale systems. The calculation of eigenvalues and eigenvectors of large random matrices can be time-consuming and computationally intensive, requiring efficient algorithms and modern computational resources.

Conclusion: Random Matrix Theory has emerged as a powerful mathematical tool for understanding complex systems and analyzing statistical properties. Its applications in fields like quantum physics, finance, and wireless communications demonstrate its versatility and broad impact. Despite limitations, RMT continues to evolve, providing valuable insights into a diverse range of phenomena and contributing to advancements in statistical analysis techniques.
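The universality and ensemble ideas in Sections 1.1-1.2 are easy to probe numerically. Below is a minimal sketch (Python; the matrix size and seed are arbitrary choices of ours), checking that the rescaled spectrum of a GOE-type matrix fills out the support predicted by Wigner's semicircle law:

```python
# Sample a GOE-type real symmetric matrix and check two consequences of the
# semicircle law. Off-diagonal entries of A have variance 1/2 here, so the
# limiting spectral support is [-sqrt(2), sqrt(2)] and the limiting second
# moment of the spectral distribution is 1/2.
import numpy as np

rng = np.random.default_rng(42)
n = 500
M = rng.normal(size=(n, n))
A = (M + M.T) / 2                         # real symmetric (GOE-type)
lam = np.linalg.eigvalsh(A) / np.sqrt(n)  # rescaled eigenvalues

print("spectral edge  ~", lam.max())        # close to sqrt(2) ~ 1.414
print("second moment  ~", (lam**2).mean())  # close to 0.5
```

Replacing the normal entries with any centered distribution of the same variance leaves these numbers essentially unchanged, which is universality in action.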
random-matrices
Random Matrices

A random matrix is a matrix whose entries are random variables. The moments of an n×n random matrix A are the expected values of the random variables tr(A^k). This project asks you to first investigate the moments of families of random matrices, especially limiting behavior as n→∞.

Here is an interesting family of square random matrices. Take an n×n random matrix M whose entries are normally distributed random variables with mean 0 and constant variance a, and form A = (M + M^T)/2. Preliminary question: what are the means and variances of the entries in A?

It's worthwhile thinking through what the mean and variance of the random variables tr(A^k) are for small n, but the fun begins when n grows large. The constant a will have to be allowed to vary with n in order to obtain limiting values of E[tr(A^k)] which are neither zero nor infinite. Write a_n for the value you use for the n×n case. Investigate the resulting limiting values. Can you account for whatever patterns you observe?

There is a close relationship between traces and eigenvalues. Investigate the limiting distribution of eigenvalues of these random matrices A as n grows large.

There are many other directions in which this kind of exploration can be pursued. For example, what happens if you use different random variables for the entries in A, perhaps not even normally distributed? The off-diagonal entries are independent and identically distributed; what happens if we alter those assumptions? And there are many families of matrices other than symmetric ones, as well.

Resources: We recommend the use of MATLAB for this project. We can also recommend the 18.440 textbook A First Course in Probability by Sheldon Ross.
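For the preliminary question: A_ij = (M_ij + M_ji)/2 has mean 0 and variance a/2 off the diagonal, while the diagonal entry A_ii = M_ii has variance a. The moment question can then be explored by Monte Carlo. A sketch follows (in Python rather than the recommended MATLAB; the choice a_n = 1/n and the extra division by n are our assumptions, one natural normalization to experiment with):

```python
# Monte Carlo estimate of the normalized moments E[tr(A^k)]/n for the
# family A = (M + M^T)/2 with M ~ i.i.d. N(0, a). The scaling a = 1/n is
# one experimental choice, not the project's prescribed answer.
import numpy as np

def moment(n, k, a, rng, trials=20):
    """Monte Carlo estimate of E[tr(A^k)] / n for A = (M + M^T)/2."""
    vals = []
    for _ in range(trials):
        M = rng.normal(scale=np.sqrt(a), size=(n, n))
        A = (M + M.T) / 2
        lam = np.linalg.eigvalsh(A)          # eigenvalues of A
        vals.append(np.sum(lam**k) / n)      # tr(A^k)/n via eigenvalues
    return float(np.mean(vals))

rng = np.random.default_rng(1)
n = 300
for k in (2, 4, 6):
    print(k, moment(n, k, a=1/n, rng=rng))
```

With this scaling the even moments should settle near 1/2, 1/2, 5/8, i.e. the semicircle moments C_{k/2} / 2^{k/2} with C_m the Catalan numbers, which is one of the patterns the project invites you to account for.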
How to Explain Hidden Markov Models with a Simple Example (teaching notes)
Hidden Markov models (HMMs) are easy to talk about; talking about them simply and clearly is harder.

Let me give a more intuitive example. I hope my readers are beginners who are interested in this topic, so I will spend more time on the mathematical ideas and write fewer formulas. Hawking once said that every formula you write costs you half your readers.

Let's use the most classic example: rolling dice. Suppose I have three different dice. The first is the ordinary die we see every day (call it D6): it has six faces, and each face (1, 2, 3, 4, 5, 6) appears with probability 1/6. The second die is a tetrahedron (call it D4): each face (1, 2, 3, 4) appears with probability 1/4. The third die has eight faces (call it D8): each face (1, 2, 3, 4, 5, 6, 7, 8) appears with probability 1/8.

Suppose we start rolling. First we pick one of the three dice; each die is picked with probability 1/3. Then we roll it and get a number, one of 1 through 8. Repeating this process over and over gives a sequence of numbers, each of them one of 1 through 8. For example, ten rolls might give: 1 6 3 5 2 7 3 5 2 4. This sequence is called the visible state chain.

In a hidden Markov model, however, we have not only this visible state chain but also a hidden state chain. In this example, the hidden state chain is the sequence of dice we used. For instance, the hidden state chain might be: D6 D8 D8 D6 D4 D8 D6 D6 D4 D8. Generally, the Markov chain that HMMs refer to is the hidden state chain, because there are transition probabilities between the hidden states (the dice).

In our example, the state after D6 is D4, D6, or D8, each with probability 1/3. From D4 or D8, the transition probabilities to D4, D6, and D8 are likewise all 1/3. We set things up this way so that the beginning is easy to explain, but in fact the transition probabilities can be set arbitrarily. For example, we could specify that D4 cannot follow D6, that D6 is followed by D6 with probability 0.9, and by D8 with probability 0.1.
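The dice process above is easy to simulate directly. This sketch follows the text's setup exactly (uniform 1/3 initial and transition probabilities):

```python
# Simulate the hidden (dice) and visible (rolls) chains of the HMM example.
import random

DICE = {"D6": 6, "D4": 4, "D8": 8}   # die name -> number of faces
STATES = list(DICE)

def simulate(n_rolls, rng=random):
    """Return (hidden state chain, visible state chain) of length n_rolls."""
    hidden, visible = [], []
    state = rng.choice(STATES)                       # initial die: 1/3 each
    for _ in range(n_rolls):
        hidden.append(state)
        visible.append(rng.randint(1, DICE[state]))  # roll the current die
        state = rng.choice(STATES)                   # transition: 1/3 each
    return hidden, visible

h, v = simulate(10)
print(h)
print(v)
```

Replacing the uniform `rng.choice(STATES)` with a weighted draw (e.g. `random.choices` with weights 0.9/0.1) gives the non-uniform transition matrix discussed at the end of the passage.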
Applying Kruskal's Algorithm to the Milgrom-Roberts Model
2019-10-10

Abstract: The Milgrom-Roberts limit-pricing model of monopoly is the first application of signaling games in industrial organization. Kruskal's algorithm is a greedy algorithm: at each step it selects a minimum-weight edge from the remaining edges and adds it to an edge set A, thereby approaching the monopoly limit price. Its optimal output and maximum sustainable profit balance the entry-deterrence game.

Keywords: Kruskal's algorithm; monopoly limit-pricing model; optimal entry-deterrence game

The Milgrom-Roberts limit-pricing model of monopoly is the first application of signaling games in industrial organization, and a simple model of the market entry-deterrence game. Milgrom and Roberts proposed the following explanation: limit pricing may reflect the fact that other firms do not know the monopolist's production cost, and the monopolist tries to use a low price to tell other firms that it is low-cost, so that entry is unprofitable.
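The greedy step the abstract describes (always take the cheapest remaining edge, keeping it in the edge set A) is ordinary Kruskal. Here is a self-contained sketch (Python; the example graph is invented, and the union-find cycle check is the standard companion of the greedy rule):

```python
# Kruskal's algorithm: repeatedly take the cheapest remaining edge and keep
# it only if it joins two previously unconnected components.
def kruskal(n_vertices, edges):
    """edges: list of (weight, u, v); returns the minimum spanning forest."""
    parent = list(range(n_vertices))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):     # greedy: smallest weight first
        ru, rv = find(u), find(v)
        if ru != rv:                  # accepting the edge creates no cycle
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3)]
print(kruskal(4, edges))   # -> [(1, 1, 2), (2, 2, 3), (3, 0, 2)]
```

Sorting once and scanning gives the familiar O(E log E) running time of the greedy rule.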
In a separating equilibrium, the entrant can infer the incumbent's true cost. Start from the one-firm case. With only one firm, the objective revenue function is u = Q(a - bQ); maximizing u gives Q0 = a/(2b) and u0 = a^2/(4b).

With two firms producing Q1 and Q2, the price is p = a - b(Q1 + Q2), and

u1(Q1, Q2) = p Q1 = Q1[a - b(Q1 + Q2)]
u2(Q1, Q2) = p Q2 = Q2[a - b(Q1 + Q2)]

The Nash equilibrium point (Q1, Q2) solves the system

∂u1/∂Q1 = 0    (1)
∂u2/∂Q2 = 0    (2)

Rearranging gives

2bQ1 + bQ2 = a    (3)
bQ1 + 2bQ2 = a    (4)

whose solution is Q1 = Q2 = a/(3b), with corresponding u1 = u2 = a^2/(9b). The Nash equilibrium is an extreme point: once it is reached, neither side has an incentive to move first.

Equation (1) describes firm 1's best response: given the opponent's output Q2, firm 1 uses (1) to maximize its own payoff, and from (3) its best-response function is Q1 = (a - bQ2)/(2b). Likewise, equation (2) describes firm 2's best response, and from (4) it is Q2 = (a - bQ1)/(2b). These are two straight lines, and, as shown in the figure, their intersection E is the Nash equilibrium point.

AB is firm 1's best-response line and CD is firm 2's. Suppose the initial choice is point A, i.e. Q1 = 0, Q2 = a/b. A lies on firm 1's best-response line, so firm 1 does not change; but firm 2's best response to Q1 = 0 is point C, so the decision point moves to C. At C, firm 1 adjusts its output and the decision point moves to F; then firm 2 adjusts back onto CD, and so on, until point E is reached. Starting from any initial point in the first quadrant, both firms converge to E through such a sequence of adjustments.
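The adjustment path A → C → F → ... → E can be reproduced numerically by alternating the two best-response formulas (the values of a and b below are arbitrary illustrative choices):

```python
# Alternate best responses Q1 <- (a - b*Q2)/(2b), Q2 <- (a - b*Q1)/(2b),
# starting from point A where Q1 = 0, Q2 = a/b.
a, b = 12.0, 1.0
q1, q2 = 0.0, a / b            # initial point A
for _ in range(100):
    q1 = (a - b * q2) / (2 * b)  # firm 1 best-responds to Q2
    q2 = (a - b * q1) / (2 * b)  # firm 2 best-responds to Q1
print(q1, q2)  # both approach the Nash output a/(3b) = 4.0
```

Each substep halves the distance to the fixed point, so the iteration converges geometrically from any starting point in the first quadrant, matching the graphical argument.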
Mathematical Terminology and Translations
一、字母顺序表 (1)二、常用的数学英语表述 (7)三、代数英语(高端) (13)一、字母顺序表1、数学专业词汇Aabsolute value 绝对值 accept 接受 acceptable region 接受域additivity 可加性 adjusted 调整的 alternative hypothesis 对立假设analysis 分析 analysis of covariance 协方差分析 analysis of variance 方差分析 arithmetic mean 算术平均值 association 相关性 assumption 假设 assumption checking 假设检验availability 有效度average 均值Bbalanced 平衡的 band 带宽 bar chart 条形图beta-distribution 贝塔分布 between groups 组间的 bias 偏倚 binomial distribution 二项分布 binomial test 二项检验Ccalculate 计算 case 个案 category 类别 center of gravity 重心 central tendency 中心趋势 chi-square distribution 卡方分布 chi-square test 卡方检验 classify 分类cluster analysis 聚类分析 coefficient 系数 coefficient of correlation 相关系数collinearity 共线性 column 列 compare 比较 comparison 对照 components 构成,分量compound 复合的 confidence interval 置信区间 consistency 一致性 constant 常数continuous variable 连续变量 control charts 控制图 correlation 相关 covariance 协方差 covariance matrix 协方差矩阵 critical point 临界点critical value 临界值crosstab 列联表cubic 三次的,立方的 cubic term 三次项 cumulative distribution function 累加分布函数 curve estimation 曲线估计Ddata 数据default 默认的definition 定义deleted residual 剔除残差density function 密度函数dependent variable 因变量description 描述design of experiment 试验设计 deviations 差异 df.(degree of freedom) 自由度 diagnostic 诊断dimension 维discrete variable 离散变量discriminant function 判别函数discriminatory analysis 判别分析distance 距离distribution 分布D-optimal design D-优化设计Eeaqual 相等 effects of interaction 交互效应 efficiency 有效性eigenvalue 特征值equal size 等含量equation 方程error 误差estimate 估计estimation of parameters 参数估计estimations 估计量evaluate 衡量exact value 精确值expectation 期望expected value 期望值exponential 指数的exponential distributon 指数分布 extreme value 极值F factor 因素,因子 factor analysis 因子分析 factor score 因子得分 factorial designs 析因设计factorial experiment 析因试验fit 拟合fitted line 拟合线fitted value 拟合值 fixed model 固定模型 fixed variable 固定变量 fractional factorial design 部分析因设计 frequency 频数 F-test F检验 full factorial design 完全析因设计function 函数Ggamma distribution 伽玛分布 geometric mean 几何均值 group 组Hharmomic mean 调和均值 heterogeneity 
不齐性histogram 直方图 homogeneity 齐性homogeneity of variance 方差齐性 hypothesis 假设 hypothesis test 假设检验Iindependence 独立 independent variable 自变量independent-samples 独立样本 index 指数 index of correlation 相关指数 interaction 交互作用 interclass correlation 组内相关 interval estimate 区间估计 intraclass correlation 组间相关 inverse 倒数的iterate 迭代Kkernal 核 Kolmogorov-Smirnov test柯尔莫哥洛夫-斯米诺夫检验 kurtosis 峰度Llarge sample problem 大样本问题 layer 层least-significant difference 最小显著差数 least-square estimation 最小二乘估计 least-square method 最小二乘法 level 水平 level of significance 显著性水平 leverage value 中心化杠杆值 life 寿命 life test 寿命试验 likelihood function 似然函数 likelihood ratio test 似然比检验linear 线性的 linear estimator 线性估计linear model 线性模型 linear regression 线性回归linear relation 线性关系linear term 线性项logarithmic 对数的logarithms 对数 logistic 逻辑的 lost function 损失函数Mmain effect 主效应 matrix 矩阵 maximum 最大值 maximum likelihood estimation 极大似然估计 mean squared deviation(MSD) 均方差 mean sum of square 均方和 measure 衡量 media 中位数 M-estimator M估计minimum 最小值 missing values 缺失值 mixed model 混合模型 mode 众数model 模型Monte Carle method 蒙特卡罗法 moving average 移动平均值multicollinearity 多元共线性multiple comparison 多重比较 multiple correlation 多重相关multiple correlation coefficient 复相关系数multiple correlation coefficient 多元相关系数 multiple regression analysis 多元回归分析multiple regression equation 多元回归方程 multiple response 多响应 multivariate analysis 多元分析Nnegative relationship 负相关 nonadditively 不可加性 nonlinear 非线性 nonlinear regression 非线性回归 noparametric tests 非参数检验 normal distribution 正态分布null hypothesis 零假设 number of cases 个案数Oone-sample 单样本 one-tailed test 单侧检验 one-way ANOVA 单向方差分析 one-way classification 单向分类 optimal 优化的optimum allocation 最优配制 order 排序order statistics 次序统计量 origin 原点orthogonal 正交的 outliers 异常值Ppaired observations 成对观测数据paired-sample 成对样本parameter 参数parameter estimation 参数估计 partial correlation 偏相关partial correlation coefficient 偏相关系数 partial regression coefficient 偏回归系数 percent 百分数percentiles 百分位数 pie chart 饼图 point estimate 点估计 poisson distribution 泊松分布polynomial curve 
多项式曲线polynomial regression 多项式回归polynomials 多项式positive relationship 正相关 power 幂P-P plot P-P概率图predict 预测predicted value 预测值prediction intervals 预测区间principal component analysis 主成分分析 proability 概率 probability density function 概率密度函数 probit analysis 概率分析 proportion 比例Qqadratic 二次的 Q-Q plot Q-Q概率图 quadratic term 二次项 quality control 质量控制 quantitative 数量的,度量的 quartiles 四分位数Rrandom 随机的 random number 随机数 random number 随机数 random sampling 随机取样random seed 随机数种子 random variable 随机变量 randomization 随机化 range 极差rank 秩 rank correlation 秩相关 rank statistic 秩统计量 regression analysis 回归分析regression coefficient 回归系数regression line 回归线reject 拒绝rejection region 拒绝域 relationship 关系 reliability 可*性 repeated 重复的report 报告,报表 residual 残差 residual sum of squares 剩余平方和 response 响应risk function 风险函数 robustness 稳健性 root mean square 标准差 row 行 run 游程run test 游程检验Sample 样本 sample size 样本容量 sample space 样本空间 sampling 取样 sampling inspection 抽样检验 scatter chart 散点图 S-curve S形曲线 separately 单独地 sets 集合sign test 符号检验significance 显著性significance level 显著性水平significance testing 显著性检验 significant 显著的,有效的 significant digits 有效数字 skewed distribution 偏态分布 skewness 偏度 small sample problem 小样本问题 smooth 平滑 sort 排序 soruces of variation 方差来源 space 空间 spread 扩展square 平方 standard deviation 标准离差 standard error of mean 均值的标准误差standardization 标准化 standardize 标准化 statistic 统计量 statistical quality control 统计质量控制 std. residual 标准残差 stepwise regression analysis 逐步回归 stimulus 刺激 strong assumption 强假设 stud. deleted residual 学生化剔除残差stud. 
residual 学生化残差 subsamples 次级样本 sufficient statistic 充分统计量sum 和 sum of squares 平方和 summary 概括,综述Ttable 表t-distribution t分布test 检验test criterion 检验判据test for linearity 线性检验 test of goodness of fit 拟合优度检验 test of homogeneity 齐性检验 test of independence 独立性检验 test rules 检验法则 test statistics 检验统计量 testing function 检验函数 time series 时间序列 tolerance limits 容许限total 总共,和 transformation 转换 treatment 处理 trimmed mean 截尾均值 true value 真值 t-test t检验 two-tailed test 双侧检验Uunbalanced 不平衡的 unbiased estimation 无偏估计 unbiasedness 无偏性 uniform distribution 均匀分布Vvalue of estimator 估计值 variable 变量 variance 方差 variance components 方差分量 variance ratio 方差比 various 不同的 vector 向量Wweight 加权,权重 weighted average 加权平均值 within groups 组内的ZZ score Z分数2. 最优化方法词汇英汉对照表Aactive constraint 活动约束 active set method 活动集法 analytic gradient 解析梯度approximate 近似 arbitrary 强制性的 argument 变量 attainment factor 达到因子Bbandwidth 带宽 be equivalent to 等价于 best-fit 最佳拟合 bound 边界Ccoefficient 系数 complex-value 复数值 component 分量 constant 常数 constrained 有约束的constraint 约束constraint function 约束函数continuous 连续的converge 收敛 cubic polynomial interpolation method三次多项式插值法 curve-fitting 曲线拟合Ddata-fitting 数据拟合 default 默认的,默认的 define 定义 diagonal 对角的 direct search method 直接搜索法 direction of search 搜索方向 discontinuous 不连续Eeigenvalue 特征值 empty matrix 空矩阵 equality 等式 exceeded 溢出的Ffeasible 可行的 feasible solution 可行解 finite-difference 有限差分 first-order 一阶GGauss-Newton method 高斯-牛顿法 goal attainment problem 目标达到问题 gradient 梯度 gradient method 梯度法Hhandle 句柄 Hessian matrix 海色矩阵Independent variables 独立变量inequality 不等式infeasibility 不可行性infeasible 不可行的initial feasible solution 初始可行解initialize 初始化inverse 逆 invoke 激活 iteration 迭代 iteration 迭代JJacobian 雅可比矩阵LLagrange multiplier 拉格朗日乘子 large-scale 大型的 least square 最小二乘 least squares sense 最小二乘意义上的 Levenberg-Marquardt method 列文伯格-马夸尔特法line search 一维搜索 linear 线性的 linear equality constraints 线性等式约束linear programming problem 线性规划问题 local solution 局部解M medium-scale 中型的 minimize 最小化 mixed quadratic and cubic 
polynomialinterpolation and extrapolation method 混合二次、三次多项式内插、外插法multiobjective 多目标的Nnonlinear 非线性的 norm 范数Oobjective function 目标函数 observed data 测量数据 optimization routine 优化过程optimize 优化 optimizer 求解器 over-determined system 超定系统Pparameter 参数 partial derivatives 偏导数 polynomial interpolation method 多项式插值法Qquadratic 二次的 quadratic interpolation method 二次内插法 quadratic programming 二次规划Rreal-value 实数值 residuals 残差 robust 稳健的 robustness 稳健性,鲁棒性S scalar 标量 semi-infinitely problem 半无限问题 Sequential Quadratic Programming method 序列二次规划法 simplex search method 单纯形法 solution 解 sparse matrix 稀疏矩阵 sparsity pattern 稀疏模式 sparsity structure 稀疏结构 starting point 初始点 step length 步长 subspace trust region method 子空间置信域法 sum-of-squares 平方和 symmetric matrix 对称矩阵Ttermination message 终止信息 termination tolerance 终止容限 the exit condition 退出条件 the method of steepest descent 最速下降法 transpose 转置Uunconstrained 无约束的 under-determined system 负定系统Vvariable 变量 vector 矢量Wweighting matrix 加权矩阵3 样条词汇英汉对照表Aapproximation 逼近 array 数组 a spline in b-form/b-spline b样条 a spline of polynomial piece /ppform spline 分段多项式样条Bbivariate spline function 二元样条函数 break/breaks 断点Ccoefficient/coefficients 系数cubic interpolation 三次插值/三次内插cubic polynomial 三次多项式 cubic smoothing spline 三次平滑样条 cubic spline 三次样条cubic spline interpolation 三次样条插值/三次样条内插 curve 曲线Ddegree of freedom 自由度 dimension 维数Eend conditions 约束条件 input argument 输入参数 interpolation 插值/内插 interval取值区间Kknot/knots 节点Lleast-squares approximation 最小二乘拟合Mmultiplicity 重次 multivariate function 多元函数Ooptional argument 可选参数 order 阶次 output argument 输出参数P point/points 数据点Rrational spline 有理样条 rounding error 舍入误差(相对误差)Sscalar 标量 sequence 数列(数组) spline 样条 spline approximation 样条逼近/样条拟合spline function 样条函数 spline curve 样条曲线 spline interpolation 样条插值/样条内插 spline surface 样条曲面 smoothing spline 平滑样条Ttolerance 允许精度Uunivariate function 一元函数Vvector 向量Wweight/weights 权重4 偏微分方程数值解词汇英汉对照表Aabsolute error 绝对误差 absolute tolerance 绝对容限 adaptive mesh 适应性网格Bboundary condition 边界条件Ccontour plot 等值线图 
converge 收敛 coordinate 坐标系Ddecomposed 分解的 decomposed geometry matrix 分解几何矩阵 diagonal matrix 对角矩阵 Dirichlet boundary conditions Dirichlet边界条件Eeigenvalue 特征值 elliptic 椭圆形的 error estimate 误差估计 exact solution 精确解Ggeneralized Neumann boundary condition 推广的Neumann边界条件 geometry 几何形状geometry description matrix 几何描述矩阵 geometry matrix 几何矩阵 graphical user interface(GUI)图形用户界面Hhyperbolic 双曲线的Iinitial mesh 初始网格Jjiggle 微调LLagrange multipliers 拉格朗日乘子Laplace equation 拉普拉斯方程linear interpolation 线性插值 loop 循环Mmachine precision 机器精度 mixed boundary condition 混合边界条件NNeuman boundary condition Neuman边界条件 node point 节点 nonlinear solver 非线性求解器 normal vector 法向量PParabolic 抛物线型的 partial differential equation 偏微分方程 plane strain 平面应变 plane stress 平面应力 Poisson's equation 泊松方程 polygon 多边形 positive definite 正定Qquality 质量Rrefined triangular mesh 加密的三角形网格 relative tolerance 相对容限 relative tolerance 相对容限 residual 残差 residual norm 残差范数Ssingular 奇异的二、常用的数学英语表述1.Logic∃there exist∀for allp⇒q p implies q / if p, then qp⇔q p if and only if q /p is equivalent to q / p and q are equivalent2.Setsx∈A x belongs to A / x is an element (or a member) of Ax∉A x does not belong to A / x is not an element (or a member) of AA⊂B A is contained in B / A is a subset of BA⊃B A contains B / B is a subset of AA∩B A cap B / A meet B / A intersection BA∪B A cup B / A join B / A union BA\B A minus B / the diference between A and BA×B A cross B / the cartesian product of A and B3. 
Real numbersx+1 x plus onex-1 x minus onex±1 x plus or minus onexy xy / x multiplied by y(x - y)(x + y) x minus y, x plus yx y x over y= the equals signx = 5 x equals 5 / x is equal to 5x≠5x (is) not equal to 5x≡y x is equivalent to (or identical with) yx ≡ y x is not equivalent to (or identical with) yx > y x is greater than yx≥y x is greater than or equal to yx < y x is less than yx≤y x is less than or equal to y0 < x < 1 zero is less than x is less than 10≤x≤1zero is less than or equal to x is less than or equal to 1| x | mod x / modulus xx 2 x squared / x (raised) to the power 2x 3 x cubedx 4 x to the fourth / x to the power fourx n x to the nth / x to the power nx −n x to the (power) minus nx (square) root x / the square root of xx 3 cube root (of) xx 4 fourth root (of) xx n nth root (of) x( x+y ) 2 x plus y all squared( x y ) 2 x over y all squaredn! n factorialx ^ x hatx ¯ x barx ˜x tildex i xi / x subscript i / x suffix i / x sub i∑ i=1 n a i the sum from i equals one to n a i / the sum as i runs from 1 to n of the a i4. Linear algebra‖ x ‖the norm (or modulus) of xOA →OA / vector OAOA ¯ OA / the length of the segment OAA T A transpose / the transpose of AA −1 A inverse / the inverse of A5. 
Functionsf( x ) fx / f of x / the function f of xf:S→T a function f from S to Tx→y x maps to y / x is sent (or mapped) to yf'( x ) f prime x / f dash x / the (first) derivative of f with respect to xf''( x ) f double-prime x / f double-dash x / the second derivative of f with r espect to xf'''( x ) triple-prime x / f triple-dash x / the third derivative of f with respect to xf (4) ( x ) f four x / the fourth derivative of f with respect to x∂f ∂ x 1the partial (derivative) of f with respect to x1∂ 2 f ∂ x 1 2the second partial (derivative) of f with respect to x1∫ 0 ∞the integral from zero to infinitylimx→0 the limit as x approaches zerolimx→0 + the limit as x approaches zero from abovelimx→0 −the limit as x approaches zero from belowlog e y log y to the base e / log to the base e of y / natural log (of) ylny log y to the base e / log to the base e of y / natural log (of) y一般词汇数学mathematics, maths(BrE), math(AmE)公理axiom定理theorem计算calculation运算operation证明prove假设hypothesis, hypotheses(pl.)命题proposition算术arithmetic加plus(prep.), add(v.), addition(n.)被加数augend, summand加数addend和sum减minus(prep.), subtract(v.), subtraction(n.)被减数minuend减数subtrahend差remainder乘times(prep.), multiply(v.), multiplication(n.)被乘数multiplicand, faciend乘数multiplicator积product除divided by(prep.), divide(v.), division(n.)被除数dividend除数divisor商quotient等于equals, is equal to, is equivalent to 大于is greater than小于is lesser than大于等于is equal or greater than小于等于is equal or lesser than运算符operator数字digit数number自然数natural number整数integer小数decimal小数点decimal point分数fraction分子numerator分母denominator比ratio正positive负negative零null, zero, nought, nil十进制decimal system二进制binary system十六进制hexadecimal system权weight, significance进位carry截尾truncation四舍五入round下舍入round down上舍入round up有效数字significant digit无效数字insignificant digit代数algebra公式formula, formulae(pl.)单项式monomial多项式polynomial, multinomial系数coefficient未知数unknown, x-factor, y-factor, z-factor 等式,方程式equation一次方程simple equation二次方程quadratic equation三次方程cubic 
equation四次方程quartic equation不等式inequation阶乘factorial对数logarithm指数,幂exponent乘方power二次方,平方square三次方,立方cube四次方the power of four, the fourth power n次方the power of n, the nth power开方evolution, extraction二次方根,平方根square root三次方根,立方根cube root四次方根the root of four, the fourth root n次方根the root of n, the nth root集合aggregate元素element空集void子集subset交集intersection并集union补集complement映射mapping函数function定义域domain, field of definition值域range常量constant变量variable单调性monotonicity奇偶性parity周期性periodicity图象image数列,级数series微积分calculus微分differential导数derivative极限limit无穷大infinite(a.) infinity(n.)无穷小infinitesimal积分integral定积分definite integral不定积分indefinite integral有理数rational number无理数irrational number实数real number虚数imaginary number复数complex number矩阵matrix行列式determinant几何geometry点point线line面plane体solid线段segment射线radial平行parallel相交intersect角angle角度degree弧度radian锐角acute angle直角right angle钝角obtuse angle平角straight angle周角perigon底base边side高height三角形triangle锐角三角形acute triangle直角三角形right triangle直角边leg斜边hypotenuse勾股定理Pythagorean theorem钝角三角形obtuse triangle不等边三角形scalene triangle等腰三角形isosceles triangle等边三角形equilateral triangle四边形quadrilateral平行四边形parallelogram矩形rectangle长length宽width附:在一个分数里,分子或分母或两者均含有分数。
Operations Research Transactions, Vol. 24, No. 3, September 2020. DOI: 10.15960/ki.issn.1007-6093.2020.03.008

Optimal investment strategies for a class of risky assets with jump-diffusion dependence under the stochastic interest rate

SUN Jingyun, GUO Jingjun (School of Statistics, Lanzhou University of Finance and Economics, Lanzhou 730020, China). Corresponding author e-mail: ***************.cn

Abstract: In this paper, we consider the continuous-time dynamic optimal asset allocation problem under the stochastic interest rate. We suppose that the market interest rate satisfies a stochastic process with the characteristic of mean reversion, and that the financial market consists of a zero-coupon bond and two dependent risky assets whose prices suffer a common shock. Under the mean-variance criterion, using stochastic optimal control theory and the Lagrange dual principle, the analytical solutions for the efficient investment strategies and the corresponding efficient frontier are obtained. Finally, through numerical examples, the sensitivity of the efficient strategies and the efficient frontier to the relevant parameters is analyzed, and the relevant theoretical results are verified.

Keywords: stochastic interest rate, jump-diffusion dependence, mean-variance criterion, Hamilton-Jacobi-Bellman equation, efficient frontier

Chinese Library Classification: F830.59, F224. 2010 Mathematics Subject Classification: 91G10, 91B30.

Received: 2019-01-02. Supported by the National Natural Science Foundation of China (Nos. 71701084, 71961013), the Innovation Ability Promotion Project of Gansu Higher Education Institutions (No. 2019A-060), and the Soft Science Project of the Gansu Provincial Department of Science and Technology (No. 1604ZCRA024).

Portfolio selection theory is one of the fundamental theories of modern mathematical finance; seeking optimal asset allocation strategies under different investment objectives and market environments is the main problem it addresses.
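As a loose numerical illustration of the mean-variance objects the abstract mentions (this is not the paper's model: there is no stochastic interest rate or bond here, and every parameter is invented), one can trace a static efficient frontier for two risky assets whose covariance includes a common-shock jump term:

```python
# Static two-asset mean-variance frontier with a common-shock covariance
# term. All parameter values are invented for illustration only.
import numpy as np

mu = np.array([0.08, 0.12])            # expected returns (assumed)
sigma = np.array([0.15, 0.25])         # diffusion volatilities (assumed)
intensity = 2.0                        # common shock intensity (assumed)
j = np.array([-0.02, -0.03])           # jump sizes of the common shock

# Covariance = independent diffusion part + common-shock contribution
cov = np.diag(sigma**2) + intensity * np.outer(j, j)
inv = np.linalg.inv(cov)
ones = np.ones(2)

for target in (0.09, 0.10, 0.11):
    # minimize w' cov w  subject to  w'mu = target, w'1 = 1 (Lagrange solution)
    G = np.array([[mu @ inv @ mu, mu @ inv @ ones],
                  [ones @ inv @ mu, ones @ inv @ ones]])
    mult = np.linalg.solve(G, np.array([target, 1.0]))
    w = inv @ (mult[0] * mu + mult[1] * ones)
    print(target, w.round(3), float(np.sqrt(w @ cov @ w)))
```

The printed (target return, weights, standard deviation) triples trace out points on the frontier; in the paper this static picture is replaced by a dynamic HJB/Lagrange-duality analysis.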
Manual for the KraljicMatrix package (Chinese name: 克拉尔吉矩阵策略分析包, "Kraljic Matrix strategy analysis package")
Package 'KraljicMatrix' (October 12, 2022)

Type: Package
Title: A Quantified Implementation of the Kraljic Matrix
Version: 0.2.1
Maintainer: Bradley Boehmke <************************>
Date: 2017-11-01
Description: Implements a quantified approach to the Kraljic Matrix (Kraljic, 1983, <https: ///1983/09/purchasing-must-become-supply-management>) for strategically analyzing a firm's purchasing portfolio. It combines multi-objective decision analysis to measure purchasing characteristics and uses this information to place products and services within the Kraljic Matrix.
URL: https:///koalaverse/KraljicMatrix
BugReports: https:///koalaverse/KraljicMatrix/issues
License: MIT + file LICENSE
Encoding: UTF-8
LazyData: true
Depends: R (>= 2.10)
Imports: ggplot2, dplyr, tibble, magrittr
Suggests: knitr, rmarkdown, testthat
VignetteBuilder: knitr
RoxygenNote: 6.0.1
NeedsCompilation: no
Author: Bradley Boehmke [aut, cre], Brandon Greenwell [aut], Andrew McCarthy [aut], Robert Montgomery [ctb]
Repository: CRAN
Date/Publication: 2018-03-06 22:49:03 UTC

R topics documented: geom_frontier, get_frontier, kraljic_matrix, kraljic_quadrant, MAVF_score, MAVF_sensitivity, psc, SAVF_plot, SAVF_plot_rho_error, SAVF_preferred_rho, SAVF_score, %>%

geom_frontier: Plotting the Pareto Optimal Frontier

Description: The frontier geom is used to overlay the efficient frontier on a scatterplot.

Usage:
  geom_frontier(mapping = NULL, data = NULL, position = "identity",
    direction = "vh", na.rm = FALSE, show.legend = NA, inherit.aes = TRUE, ...)
  stat_frontier(mapping = NULL, data = NULL, geom = "step", position = "identity",
    direction = "vh", na.rm = FALSE, show.legend = NA, inherit.aes = TRUE,
    quadrant = "top.right", ...)

Arguments:
  mapping: Set of aesthetic mappings created by aes or aes_. If specified and inherit.aes = TRUE (the default), it is combined with the default mapping at the top level of the plot. You must supply mapping if there is no plot mapping.
  data: The data to be displayed in this layer.
  position: Position adjustment, either as a string, or the result of a call to a position adjustment function.
  direction: Direction of stairs: 'vh' for vertical then horizontal, or 'hv' for horizontal then vertical.
  na.rm: If FALSE, the default, missing values are removed with a warning. If TRUE, missing values are silently removed.
  show.legend: Logical. Should this layer be included in the legends? NA, the default, includes if any aesthetics are mapped. FALSE never includes, and TRUE always includes.
  inherit.aes: If FALSE, overrides the default aesthetics, rather than combining with them. This is most useful for helper functions that define both data and aesthetics and shouldn't inherit behaviour from the default plot specification, e.g. borders.
  ...: Other arguments passed on to layer. These are often aesthetics, used to set an aesthetic to a fixed value, like color = "red" or size = 3. They may also be parameters to the paired geom/stat.
  geom: Use to override the default connection between geom_frontier and stat_frontier.
  quadrant: See get_frontier.

Examples:
  ## Not run:
  # default will find the efficient front in top right quadrant
  ggplot(mtcars, aes(mpg, wt)) + geom_point() + geom_frontier()
  # change the direction of the steps
  ggplot(mtcars, aes(mpg, wt)) + geom_point() + geom_frontier(direction = 'hv')
  # use quadrant parameter to change how you define the efficient frontier
  ggplot(airquality, aes(Ozone, Temp)) + geom_point() + geom_frontier(quadrant = 'top.left')
  ggplot(airquality, aes(Ozone, Temp)) + geom_point() + geom_frontier(quadrant = 'bottom.right')
  ## End(Not run)

get_frontier: Compute the Pareto Optimal Frontier

Description: Extract the points that make up the Pareto frontier from a set of data.

Usage:
  get_frontier(data, x, y, quadrant = c("top.right", "bottom.right", "bottom.left", "top.left"), decreasing = TRUE)

Arguments:
  data: A data frame.
  x: A numeric vector.
  y: A numeric vector.
  quadrant: Character string specifying which quadrant the frontier should appear in. Default is "top.right".
  decreasing: Logical value indicating whether the data returned is in decreasing or ascending order (ordered by x and then y). Default is decreasing order.

Value: A data frame containing the data points that make up the efficient frontier.

See Also: geom_frontier for plotting the Pareto front.

Examples:
  # default will find the Pareto optimal observations in top right quadrant
  get_frontier(mtcars, mpg, wt)
  # the output can be in descending or ascending order
  get_frontier(mtcars, mpg, wt, decreasing = FALSE)
  # use quadrant parameter to change how you define the efficient frontier
  get_frontier(airquality, Ozone, Temp, quadrant = 'top.left')
  get_frontier(airquality, Ozone, Temp, quadrant = 'bottom.right')

kraljic_matrix: Kraljic matrix plotting function

Description: kraljic_matrix plots each product or service in the Kraljic purchasing matrix based on the attribute value score of x and y.

Usage: kraljic_matrix(data, x, y)

Arguments:
  data: A data frame.
  x: Numeric vector of values.
  y: Numeric vector of values with compatible dimensions to x.

Value: A Kraljic purchasing matrix plot.

See Also: SAVF_score for computing the exponential single attribute value score for x and y.

Examples:
  # Given the following x and y attribute values we can plot each
  # product or service in the purchasing matrix:
  # to add a new variable while preserving existing data
  library(dplyr)
  psc2 <- psc %>%
    mutate(x_SAVF_score = SAVF_score(x_attribute, 1, 5, .653),
           y_SAVF_score = SAVF_score(y_attribute, 1, 10, .7))
  kraljic_matrix(psc2, x_SAVF_score, y_SAVF_score)

kraljic_quadrant: Kraljic quadrant assignment function

Description: kraljic_quadrant assigns the Kraljic purchasing matrix quadrant based on the attribute value score of x and y.

Usage: kraljic_quadrant(x, y)

Arguments:
  x: Numeric vector of values.
  y: Numeric vector of values with compatible dimensions to x.

Value: A vector of the same length as x and y with the relevant Kraljic quadrant name.

See Also: SAVF_score for computing the exponential single attribute value score for x and y.

Examples:
  # Given the following x and y attribute values we can determine
  # which quadrant each product or service falls in:
  # to add a new variable while preserving existing data
  library(dplyr)
  psc2 <- psc %>%
    mutate(x_SAVF_score = SAVF_score(x_attribute, 1, 5, .653),
           y_SAVF_score = SAVF_score(y_attribute, 1, 10, .7))
  psc2 %>%
    mutate(quadrant = kraljic_quadrant(x_SAVF_score, y_SAVF_score))

MAVF_score: Multi-attribute value function

Description: MAVF_score computes the multi-attribute value score of x and y given their respective weights.

Usage: MAVF_score(x, y, x_wt, y_wt)

Arguments:
  x: Numeric vector of values.
  y: Numeric vector of values with compatible dimensions to x.
  x_wt: Swing weight for x.
  y_wt: Swing weight for y.

Value: A vector of the same length as x and y with the multi-attribute value scores.

See Also: MAVF_sensitivity to perform sensitivity analysis with a range of x and y swing weights; SAVF_score for computing the exponential single attribute value score.

Examples:
  # Given the following x and y attribute values with x and y swing weight
  # values of 0.65 and 0.35 respectively, we can compute the multi-attribute
  # utility score:
  x_attribute <- c(0.92, 0.79, 1.00, 0.39, 0.68, 0.55, 0.73, 0.76, 1.00, 0.74)
  y_attribute <- c(0.52, 0.19, 0.62, 1.00, 0.55, 0.52, 0.53, 0.46, 0.61, 0.84)
  MAVF_score(x_attribute, y_attribute, x_wt = .65, y_wt = .35)

MAVF_sensitivity: Multi-attribute value function sensitivity analysis

Description: MAVF_sensitivity computes summary statistics for multi-attribute value scores of x and y given a range of swing weights for each attribute.

Usage: MAVF_sensitivity(data, x, y, x_wt_min, x_wt_max, y_wt_min, y_wt_max)

Arguments:
  data: A data frame.
  x: Variable from data frame to represent x attribute values.
  y: Variable from data frame to represent y attribute values.
  x_wt_min: Lower bound anchor point for x attribute swing weight.
  x_wt_max: Upper bound anchor point for x attribute swing weight.
  y_wt_min: Lower bound anchor point for y attribute swing weight.
  y_wt_max: Upper bound anchor point for y attribute swing weight.

Details: The sensitivity analysis performs a Monte Carlo simulation with 1000 trials for each product or service (row). Each trial randomly selects a weight from a uniform distribution between the lower and upper bound weight parameters and calculates the multi-attribute utility score. From these trials, summary statistics for each product or service (row) are calculated and reported for the final output.

Value: A data frame with added variables consisting of sensitivity analysis summary statistics for each product or service (row).

See Also: MAVF_score for computing the multi-attribute value score of x and y given their respective weights; SAVF_score for computing the exponential single attribute value score.

Examples:
  # Given the following data frame that contains x and y attribute values for
  # each product or service contract, we can compute how the range of swing
  # weights for each x and y attribute influences the multi-attribute value score.
  df <- data.frame(contract = 1:10,
                   x_attribute = c(0.92, 0.79, 1.00, 0.39, 0.68, 0.55, 0.73, 0.76, 1.00, 0.74),
                   y_attribute = c(0.52, 0.19, 0.62, 1.00, 0.55, 0.52, 0.53, 0.46, 0.61, 0.84))
  MAVF_sensitivity(df, x_attribute, y_attribute, .55, .75, .25, .45)

psc: Product and service contracts

Description: A dataset containing a single value score for the x attribute (i.e. supply risk) and y attribute (i.e. profit impact) of 200 product and service contracts (PSC). The variables are as follows:

Usage: psc

Format: A tibble with 200 rows and 3 variables:
  PSC: Contract identifier for each product and service.
  x_attribute: x attribute score, from 1 (worst) to 5 (best) in .01 increments.
  y_attribute: y attribute score, from 1 (worst) to 10 (best) in .01 increments.

SAVF_plot: Plot the single attribute value curve

Description: SAVF_plot plots the single attribute value curve along with the subject matter desired values for comparison.

Usage: SAVF_plot(desired_x, desired_v, x_low, x_high, rho)

Arguments:
  desired_x: Elicited input x value(s).
  desired_v: Elicited value score related to elicited input value(s).
  x_low: Lower bound anchor point (can be different than min(x)).
  x_high: Upper bound anchor point (can be different than max(x)).
  rho: Exponential constant for the value function.

Value: A plot that visualizes the single attribute value curve along with the subject matter desired values for comparison.

See Also: SAVF_plot_rho_error for plotting the rho squared error terms; SAVF_score for computing the exponential single attribute value score.

Examples:
  # Given the single attribute x is bounded between 1 and 5 and the subject matter experts
  # prefer x values of 3, 4, & 5 provide a utility score of .75, .90 & 1.0 respectively,
  # the preferred rho is 0.54. We can visualize this value function:
  SAVF_plot(desired_x = c(3, 4, 5), desired_v = c(.75, .9, 1), x_low = 1, x_high = 5, rho = 0.54)

SAVF_plot_rho_error: Plot the rho squared error terms

Description: SAVF_plot_rho_error plots the squared error terms for the rho search space to illustrate the preferred rho that minimizes the squared error between subject matter desired values and exponentially fitted scores.

Usage: SAVF_plot_rho_error(desired_x, desired_v, x_low, x_high, rho_low = 0, rho_high = 1)

Arguments:
  desired_x: Elicited input x value(s).
  desired_v: Elicited value score related to elicited input value(s).
  x_low: Lower bound anchor point (can be different than min(x)).
  x_high: Upper bound anchor point (can be different than max(x)).
  rho_low: Lower bound of the exponential constant search space for a best fit value function.
  rho_high: Upper bound of the exponential constant search space for a best fit value function.

Value: A plot that visualizes the squared error terms for the rho search space.

See Also: SAVF_preferred_rho for identifying the preferred rho value; SAVF_score for computing the exponential single attribute value score.

Examples:
  # Given the single attribute x
is bounded between1and5and the subject matter experts #prefer x values of3,4,&5provide a utility score of.75,.90&1.0respectively,we #can visualize the error terms for rho values between0-1:SAVF_plot_rho_error(desired_x=c(3,4,5),desired_v=c(.75,.9,1),x_low=1,x_high=5,rho_low=0,rho_high=1)SA VF_preferred_rho11 SAVF_preferred_rho Identify preferred rhoDescriptionSAVF_preferred_rho computes the preferred rho that minimizes the squared error between subject matter input desired values and exponentiallyfitted scoresUsageSAVF_preferred_rho(desired_x,desired_v,x_low,x_high,rho_low=0,rho_high=1)Argumentsdesired_x Elicited input x value(s)desired_v Elicited value score related to elicited input value(s)x_low Lower bound anchor point(can be different than min(x))x_high Upper bound anchor point(can be different than max(x))rho_low Lower bound of the exponential constant search space for a bestfit value func-tionrho_high Upper bound of the exponential constant search space for a bestfit value func-tionValueA single element vector that represents the rho value that bestfits the exponential utility function tothe desired inputsSee AlsoSAVF_plot_rho_error for plotting the rho squared error termsSAVF_score for computing the exponential single attribute value scoreExamples#Given the single attribute x is bounded between1and5and the subject matter experts #prefer x values of3,4,&5provide a utility score of.75,.90&1.0respectively,we #can search for a rho value between0-1that provides the best fit utility function: SAVF_preferred_rho(desired_x=c(3,4,5),desired_v=c(.75,.9,1),x_low=1,x_high=5,rho_low=0,rho_high=1)12SA VF_score SAVF_score Single attribute value functionDescriptionSAVF_score computes the exponential single attribute value score of xUsageSAVF_score(x,x_low,x_high,rho)Argumentsx Numeric vector of values to scorex_low Lower bound anchor point(can be different than min(x))x_high Upper bound anchor point(can be different than max(x))rho Exponential constant for the value 
functionValueA vector of the same length as x with the exponential single attribute value scoresSee AlsoSAVF_plot for plotting single attribute scoresSAVF_preferred_rho for identifying the preferred rhoExamples#The single attribute x is bounded between1and5and follows an exponential#utility curve with rho=.653x<-runif(10,1,5)x##[1]2.9648531.9631821.2239491.5620254.3814672.2860303.071066##[8]4.4708753.9209134.314907SAVF_score(x,x_low=1,x_high=5,rho=.653)##[1]0.78005560.50382750.14682340.33152170.96058560.61319440.8001003##[8]0.96731240.91896850.9553165%>%13 %>%Pipe functionsDescriptionLike dplyr,KraljicMatrix also uses the pipe function,%>%to turn function composition into a series of imperative statements.Argumentslhs,rhs An R object and a function to apply to itExamples#given the following\code{psc2}data setpsc2<-dplyr::mutate(psc,x_SAVF_score=SAVF_score(x_attribute,1,5,.653),y_SAVF_score=SAVF_score(y_attribute,1,10,.7))#you can use the pipe operator to re-write the following:kraljic_matrix(psc2,x_SAVF_score,y_SAVF_score)#aspsc2%>%kraljic_matrix(x_SAVF_score,y_SAVF_score)Index∗datasetspsc,8%>%,13geom_frontier,2,4get_frontier,3,3kraljic_matrix,4kraljic_quadrant,5MAVF_score,6,8MAVF_sensitivity,7,7psc,8SAVF_plot,9,12SAVF_plot_rho_error,9,10,11SAVF_preferred_rho,10,11,12SAVF_score,5–11,12stat_frontier(geom_frontier),214。
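The manual documents SAVF_score but never states the functional form behind it. The Python sketch below shows one exponential value function that reproduces the printed example output; the formula is reverse-engineered from those numbers and is an assumption, not the package's authoritative definition:

```python
import math

def savf_score(x, x_low, x_high, rho):
    """Exponential single attribute value score (inferred form):
    normalises x so that x_low maps to 0 and x_high maps to 1."""
    return (1 - math.exp(-rho * (x - x_low))) / \
           (1 - math.exp(-rho * (x_high - x_low)))

# Matches the first documented example value:
# SAVF_score(2.964853, 1, 5, .653) is printed as 0.7800556
score = savf_score(2.964853, x_low=1, x_high=5, rho=0.653)
```

Checking this form against the other printed values (for example x = 1.223949, printed as 0.1468234) gives the same agreement, which is why it was chosen.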
The Monte Carlo Method

Overview. The Monte Carlo method, also known as statistical simulation or random sampling, is a stochastic simulation technique grounded in probability and statistics: it uses random numbers (or, more commonly, pseudo-random numbers) to solve a wide range of computational problems.
The problem to be solved is associated with a suitable probability model, and statistical simulation or sampling is carried out on a computer to obtain an approximate solution.
The method borrows its name from the casino city of Monte Carlo as a symbol of its probabilistic and statistical character.
Origins. The Monte Carlo method was first proposed in the 1940s by S. M. Ulam and J. von Neumann, members of the Manhattan Project, the United States programme that developed the atomic bomb during the Second World War.
The mathematician von Neumann named the method after Monte Carlo, the world-famous casino city in Monaco, lending it an air of mystery.
Methods of this kind, however, existed long before the name did.
In 1777 the Frenchman Buffon proposed estimating the value of π by a needle-dropping experiment.
This is regarded as the origin of the Monte Carlo method.
Basic idea. The basic idea of the Monte Carlo method was discovered and exploited long ago.
As early as the 17th century, people knew to infer the "probability" of an event from the "frequency" with which it occurs.
In the 19th century, needle-dropping experiments were used to determine π.
The arrival of electronic computers in the 1940s, and especially of today's high-speed machines, made it possible to simulate such experiments mathematically, quickly and on a large scale.
Consider a unit square in the plane containing an irregularly shaped "figure". How can we find the figure's area? The Monte Carlo method gives a "randomized" answer: throw N points at the square uniformly at random; if M of them land inside the figure, then the figure's area is approximately M/N.
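The throw-N-points recipe takes only a few lines of code. In the Python sketch below, the quarter disc (true area π/4) stands in for the irregular figure; it is an illustrative choice, not anything from the text:

```python
import random

def area_hit_or_miss(inside, n=100_000, seed=42):
    """Throw n uniform random points at the unit square and return
    the fraction M/N that lands inside the figure."""
    rng = random.Random(seed)
    hits = sum(inside(rng.random(), rng.random()) for _ in range(n))
    return hits / n

# Quarter disc x^2 + y^2 <= 1 inside the unit square; true area is pi/4.
estimate = area_hit_or_miss(lambda x, y: x * x + y * y <= 1.0)
```

With N = 100,000 points the standard error of the estimate is about 0.0013, so the result typically lands within a few thousandths of π/4 ≈ 0.7854.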
A loose analogy is an opinion poll.
Pollsters do not canvass every registered voter; they identify the likely winner from a small-scale sample survey of the electorate.
The underlying idea is the same.
Problems in scientific computing are far more complex than this.
In the pricing of financial derivatives (options, futures, swaps, and so on) and the estimation of trading risk, for instance, the dimension of the problem (the number of variables) can run to hundreds or even thousands.
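As a one-dimensional illustration of Monte Carlo pricing (the real problems just mentioned run to hundreds of dimensions), the sketch below prices a European call by averaging discounted simulated payoffs under geometric Brownian motion; all parameter values are illustrative assumptions:

```python
import math
import random

def mc_european_call(s0, k, r, sigma, t, n_paths=200_000, seed=1):
    """Simulate terminal prices S_T = S_0 * exp((r - sigma^2/2) t + sigma sqrt(t) Z)
    and average the discounted call payoff max(S_T - K, 0)."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        s_t = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(s_t - k, 0.0)
    return math.exp(-r * t) * total / n_paths

price = mc_european_call(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0)
```

For these parameters the Black-Scholes closed form gives about 10.45, and the simulated average comes out close to it; in high-dimensional problems the closed form disappears, but the same averaging recipe survives.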
Hájek-Rényi inequalities for sequences of independent Banach-space-valued random variables

Authors: 朱永刚 (Zhu Yonggang); 于林 (Yu Lin)
Journal: Journal of China Three Gorges University (Natural Sciences)
Year (volume), issue: 2007, 29(3)
Abstract: A Hájek-Rényi-type inequality is proved for sequences of independent Banach-space-valued random variables, and the inequality is then used to establish a strong law of large numbers for such sequences; the results characterize the type-p property of the Banach space.
Pages: 3 (pp. 276-278)
Affiliation: College of Science, China Three Gorges University, Yichang, Hubei 443002, China
Language of text: Chinese
Chinese Library Classification: O211.4
An XOR-based region-incrementing visual cryptography scheme using random grids

Authors: 胡浩; 沈刚; 郁滨; 马浩俊夫. Journal of Computer Research and Development, 2016, 53(8). Abstract: Considering the problem that white pixels in secret images cannot be correctly recovered, which results from the OR operation to which existing region-incrementing visual cryptography schemes are confined, a novel definition of region-incrementing visual cryptography based on the XOR operation is proposed for the first time. By iterating the random-grid-based (k, k) single-secret visual cryptography scheme and using the property of 0 as the identity element of the group ({0, 1}, XOR), a share-generation algorithm for a (k, n) single-secret-sharing scheme under the XOR operation is designed, and the secret sharing and recovering procedures for the region-incrementing scheme are then developed. For any original secret pixel s, according to its security level, s is reassigned to a randomly chosen qualified set Q in the sharing procedure and then encoded by the proposed (k, n) single-secret-sharing scheme. The recovering procedure is the same as in previous schemes. The effectiveness is verified theoretically, and experimental results show that the present scheme not only requires no pixel expansion but also recovers white pixels perfectly when all the shares are stacked.
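The (k, k) XOR building block that the abstract iterates is easy to demonstrate. The Python sketch below shows only that building block, under the assumption of a bit-valued image (1 = black, 0 = white); it is not the paper's full (k, n) region-incrementing construction:

```python
import random

def share_xor(secret_bits, k, seed=0):
    """(k, k) sharing: generate k - 1 random grids, then a final share
    chosen so that XOR-ing all k shares reproduces the secret exactly."""
    rng = random.Random(seed)
    shares = [[rng.randint(0, 1) for _ in secret_bits] for _ in range(k - 1)]
    last = list(secret_bits)
    for share in shares:            # last = secret XOR (all random grids)
        last = [a ^ b for a, b in zip(last, share)]
    shares.append(last)
    return shares

def recover_xor(shares):
    """Stack (XOR) all shares; both white (0) and black (1) pixels
    come back losslessly."""
    out = list(shares[0])
    for share in shares[1:]:
        out = [a ^ b for a, b in zip(out, share)]
    return out

secret = [1, 0, 0, 1, 1, 0, 1, 0]   # a tiny one-row "image"
shares = share_xor(secret, k=3)
recovered = recover_xor(shares)
```

Because XOR is its own inverse and 0 is the group identity, recovery is exact and the shares are the same size as the secret, which is precisely the no-pixel-expansion, perfect-white-recovery property the abstract contrasts with OR-based stacking.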
Single-Peaked Preference Theory

Single-peaked preference theory was proposed by Duncan Black in his 1958 book The Theory of Committees and Elections.
It seeks to resolve the voting paradox by modifying Arrow's five conditions.
Its core restriction is that each voter's preference may have only one peak.
A single-peaked preference means that, among a set of alternatives arranged along some dimension, a voter has one most-preferred option, and the voter's preference (or utility) declines the further the alternatives move away from that option in either direction.
If a person instead has a double- or multi-peaked preference, utility falls as alternatives move away from the favourite but rises again afterwards.
Black proved that if every voter's preference is single-peaked, majority voting avoids Arrow's paradox: the individual preferences aggregate into a unique, determinate social preference, and that social preference coincides with the preference of the voter whose peak lies at the midpoint of all voters' peaks, with exactly as many voters peaking above him as below. This is the celebrated median voter model.
For this pioneering work, Black was called the founder of the public choice school by Gordon Tullock.
Black held that by suitably restricting individual preferences to a certain type, majority decision results satisfy the transitivity assumption.
The special type of preference Black proposed is the single-peaked shape.
This single-peaked type of individual preference can be illustrated as follows (Table 1: single-peaked individual preferences). Comparing the three alternatives A, B and C pairwise: B beats A, B beats C, and C beats A.
Thus, given this particular structure of individual preferences, the outcome of majority decision satisfies transitivity, and the social preference ordering is B p C p A (where p stands for "prefer": the former is preferred to the latter).
Why is the preference type in the table called single-peaked? Figure 1 illustrates.
Suppose there are three individuals 1, 2 and 3, each facing the same three choices: A, a high level of government budget; B, a medium level; and C, a low level.
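Black's conclusion can be checked mechanically. The Python sketch below uses distance-based utilities, one concrete family of single-peaked preferences chosen here as an illustrative assumption; with three voters peaking at the low, medium and high budget levels respectively, pairwise majority voting selects the median voter's peak:

```python
def condorcet_winner(alternatives, peaks):
    """Return the alternative that beats every other in pairwise majority
    votes, assuming each voter's utility falls with distance from their peak."""
    def prefers(peak, a, b):
        return abs(a - peak) < abs(b - peak)
    for a in alternatives:
        if all(sum(prefers(p, a, b) for p in peaks) >
               sum(prefers(p, b, a) for p in peaks)
               for b in alternatives if b != a):
            return a
    return None  # no Condorcet winner exists

# Budget levels coded on a line: 1 = low (C), 2 = medium (B), 3 = high (A);
# voters 1, 2, 3 peak at low, medium and high respectively.
winner = condorcet_winner([1, 2, 3], peaks=[1, 2, 3])
```

The winner is the medium budget B, the peak of the median voter, consistent with B being at the top of the B p C p A ordering in the text.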
High Attractiveness / Weak Competitive Position. The strategy advice for this cell is to invest opportunistically for earnings; however, if you cannot strengthen your enterprise, you should exit the market. Consider the following strategies:
- ride with the market growth
- seek niches or specialization
- seek an opportunity to increase strength through acquisition
Factors from the matrix diagram. Internal: managerial competence. External: market size, market growth rate, cyclicality, competitive structure, barriers to entry, industry profitability, technology.
Low Attractiveness / Average Competitive Position. The strategy advice for this cell is to restructure, harvest or divest. Consider the following strategies:
- make only essential commitments
- prepare to divest
- shift resources to a more attractive segment
Business Strengths axis: Medium, High. Matrix cells:
- Grow; seek dominance; maximise investment
- Identify growth segments; invest strongly; maintain position elsewhere
- Maintain overall position; seek cash flow; invest at maintenance level
Low Attractiveness / Weak Competitive Position. The advice for this cell is to harvest or divest: exit the market or prune the product line.
Strategic implications from the Industry Attractiveness – Business Strength Matrix
2) Develop growth strategies for adding new products and businesses to the portfolio
3) Decide which businesses or products should no longer be retained.
2. Competitive strength replaces market share as the dimension by which the competitive position of each SBU is assessed. Competitive strength likewise includes a broader range of factors than market share alone that can determine the competitive strength of a Strategic Business Unit.
Medium Attractiveness / Average Competitive Position. The strategy advice for this cell is to invest selectively for earnings. Consider the following strategies:
- segment the market to find a more attractive position
- make contingency plans to protect your vulnerable position
Low Attractiveness / Strong Competitive Position. The strategy advice for this cell is to invest selectively for earnings. Consider the following strategies:
- defend strengths
- shift resources to attractive segments
- examine ways to revitalize the industry
- time your exit by monitoring for harvest or divestment timing
• Originally developed by GE’s planners drawing on McKinsey’s approaches
• Market attractiveness is based on as many relevant factors as are appropriate in a given context
- Evaluate potential for leadership via segmentation; identify weaknesses; build strengths
- Identify growth segments; specialise; invest selectively
- Prune lines; minimise investment; position to divest
PORTFOLIO MANAGEMENT TOOLS
Alokesh Banerjee
The aim of a portfolio analysis
1) Analyze its current business portfolio and decide which SBUs should receive more or less investment, and
3. Finally, the GE / McKinsey Matrix works with a 3×3 grid, while the BCG Matrix has only 2×2. This also allows for more sophistication.
GE (General Electric) / McKinsey Multi-Factor Matrix
High Attractiveness / Average Competitive Position. The strategy advice for this cell is to invest for growth. Consider the following strategies:
- build selectively on strength
- define the implications of challenging for market leadership
- fill weaknesses to avoid vulnerability
Medium Attractiveness / Weak Competitive Position. The strategy advice for this cell is to preserve for harvest. Consider the following strategies:
- act to preserve or boost cash flow as you exit the business
- seek an opportunistic sale
- seek a way to increase your strengths
GE VS BCG
The GE / McKinsey Matrix is more sophisticated than the BCG Matrix in three aspects:
1. Market (industry) attractiveness replaces market growth as the dimension of industry attractiveness. Market attractiveness includes a broader range of factors than the market growth rate alone that can determine the attractiveness of an industry or market.
Further external factors from the matrix diagram: inflation, regulation, manpower availability, social issues, environmental issues, political issues, legal issues.
High Attractiveness / Strong Competitive Position. The strategy advice for this cell is to invest for growth. Consider the following strategies:
- provide maximum investment
- diversify
- consolidate your position to focus your resources
- accept moderate near-term profits to build share