The subgraph bisimulation problem
Superconducting qubits II: Decoherence
The transition from quantum to classical physics, now known as decoherence, has intrigued physicists since the formulation of quantum mechanics (Giulini et al., 1996; Leggett, 2002; Peres, 1993; Feynman and Vernon, 1963; Zurek, 1993). It was put in poignant form in the Schrödinger cat paradox (Schrödinger, 1935) and was considered an open fundamental question for a long time.
and compare it to the corresponding classical mixture leading to the same expectation value of $\sigma_z$,

$$\rho_{\mathrm{mix}} = \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad (2)$$

we can see that the von Neumann entropy $S = -k_B \,\mathrm{Tr}\left[\rho \log \rho\right]$ rises from $S_{\mathrm{pure}} = 0$ to $S_{\mathrm{mix}} = k_B \ln 2$. Hence, decoherence taking $\rho_{\mathrm{pure}}$ to $\rho_{\mathrm{mix}}$ creates entropy and is irreversible. Quantum mechanics, on the other hand, is always reversible. It can be shown that any isolated quantum system is described by the Liouville-von Neumann equation

$$i\hbar\,\dot\rho = [H, \rho]. \qquad (3)$$
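This entropy bookkeeping is easy to check numerically. The sketch below (plain NumPy; the pure state is taken to be the equal-weight superposition $(|0\rangle + |1\rangle)/\sqrt{2}$, an assumption consistent with the mixture in eq. (2)) evaluates $S = -\mathrm{Tr}[\rho \ln \rho]$ in units of $k_B$:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr[rho log rho], in units of k_B."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # convention: 0 * log 0 = 0
    return float(-np.sum(evals * np.log(evals)))

# Pure superposition (|0> + |1>)/sqrt(2) versus the classical mixture of eq. (2)
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho_pure = np.outer(psi, psi)
rho_mix = 0.5 * np.eye(2)

print(von_neumann_entropy(rho_pure))   # S_pure = 0
print(von_neumann_entropy(rho_mix))    # S_mix = ln 2 ~ 0.693
```

The eigenvalue form of the entropy is used because $\rho \log \rho$ is defined through the spectral decomposition; truncating eigenvalues near zero implements the usual $0 \log 0 = 0$ convention.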
inequalities for graph eigenvalues
Introduction
We consider the Laplacian and eigenvalues of graphs and induced subgraphs. Although an induced subgraph can be viewed as a graph in its own right, it is natural to consider an induced subgraph S as having a boundary, formed by edges joining vertices in S and vertices not in S but in the "host" graph. The host graph can then be regarded as a special case of a subgraph with no boundary. This paper consists of three parts. In the first part (Sections 2-5), we give definitions and describe basic properties of the Laplacian of graphs. We introduce the Neumann eigenvalues for induced subgraphs and the heat kernel for graphs and induced subgraphs. Then we establish the following lower bound for the Neumann eigenvalues of induced subgraphs.
Abstract. For an induced subgraph S of a graph, we show that its Neumann eigenvalue $\lambda_S$ can be lower-bounded by using the heat kernel $H_t(x, y)$ of the subgraph. Namely,

$$\lambda_S \ge \frac{1}{2t} \inf_{y \in S} \sum_{x \in S} H_t(x, y)\, \sqrt{\frac{d_x}{d_y}}$$

where $d_x$ denotes the degree of the vertex x. In particular, we derive lower bounds of eigenvalues for convex subgraphs which consist of lattice points in a d-dimensional Riemannian manifold M with convex boundary. The techniques involve both the (discrete) heat kernels of graphs and improved estimates of the (continuous) heat kernels of Riemannian manifolds. We prove eigenvalue lower bounds for convex subgraphs of the form $c\epsilon^2/(d\,D(M))^2$, where $\epsilon$ denotes the distance between the two closest lattice points, $D(M)$ denotes the diameter of the manifold M, and c is a constant (independent of the dimension d and of the number of vertices in S, but depending on how "dense" the lattice points are). This eigenvalue bound is useful for bounding the rates of convergence of various random walks. Since many enumeration problems can be approximated by considering random walks in convex subgraphs of some appropriate host graph, the eigenvalue inequalities here have many applications.
two-stage stochastic programming
Two-stage stochastic programming is a mathematical optimization approach used to solve decision-making problems under uncertainty. It is commonly applied in fields such as operations research, finance, energy planning, and supply chain management. In this approach, decisions are made in two stages: the first stage involves decisions made before uncertainty is realized, and the second stage involves decisions made after observing the uncertain events.

In two-stage stochastic programming, the decision-maker aims to optimize their decisions by considering both the expected value and the risk associated with different outcomes. The problem is typically formulated as a mathematical program with constraints and objective functions that capture the decision variables, the uncertain parameters, and their probability distributions.

The first-stage decisions are made with knowledge of the distributions of the uncertain parameters, but without knowing their actual realization. These decisions are usually strategic and long-term in nature, such as investment decisions, capacity planning, or resource allocation. The objective in the first stage is to minimize the expected cost or maximize the expected profit.

The second-stage decisions are made after observing the actual realization of the uncertain events. These decisions are typically tactical or operational in nature, such as production planning, inventory management, or scheduling. The objective in the second stage is to minimize the cost or maximize the profit given the realized values of the uncertain parameters.

To solve two-stage stochastic programming problems, various solution methods can be employed. One common approach is to use scenario-based methods, where a set of scenarios representing different realizations of the uncertain events is generated.
Each scenario is associated with a probability weight, and the problem is then transformed into a deterministic equivalent problem by replacing the uncertain parameters with their corresponding scenario values. The deterministic problem can be solved using traditional optimization techniques such as linear programming or mixed-integer programming.

Another approach is sample average approximation, where the expected value in the objective function is approximated by averaging the objective function values over a large number of randomly generated scenarios. This method can be computationally efficient but may introduce some approximation error.

Furthermore, there are also robust optimization techniques that aim to find solutions that perform well regardless of the actual realization of the uncertain events. These methods focus on minimizing the worst-case cost or maximizing the worst-case profit.

In summary, two-stage stochastic programming is a powerful approach for decision-making under uncertainty. It allows decision-makers to consider both the expected value and the risk associated with uncertain events. By formulating the problem as a mathematical program and employing appropriate solution methods, optimal or near-optimal solutions can be obtained to guide decision-making in a wide range of applications.
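A minimal sketch of the scenario-based deterministic equivalent, using an assumed single-product newsvendor instance (all numbers are illustrative): the first stage fixes an order quantity x at unit cost c; the second stage sells y_s <= min(x, d_s) units at price r once demand scenario s is revealed. The resulting LP is solved with scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: order cost, sale price, demand scenarios with probabilities
c, r = 1.0, 2.0
demands = np.array([50.0, 100.0, 150.0])
probs = np.array([0.3, 0.4, 0.3])
S = len(demands)

# Variables: [x, y_1, ..., y_S]
#   x   = first-stage order quantity ("here-and-now")
#   y_s = second-stage sales in scenario s ("wait-and-see" recourse)
# Minimize  c*x - sum_s p_s * r * y_s   (i.e. maximize expected profit)
obj = np.concatenate(([c], -probs * r))

# Recourse constraints y_s <= x: cannot sell more than was ordered
A_ub = np.hstack([-np.ones((S, 1)), np.eye(S)])
b_ub = np.zeros(S)

# Bounds: x >= 0, and 0 <= y_s <= d_s (cannot sell more than demanded)
bounds = [(0, None)] + [(0, d) for d in demands]

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x_star = res.x[0]
expected_profit = -res.fun
print(x_star, expected_profit)   # order 100 units, expected profit 70
```

Note how the deterministic equivalent couples one copy of the recourse variables per scenario to a single shared first-stage variable; for this newsvendor instance the LP recovers the classical critical-ratio solution.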
A Harmonic Analysis Solution to the Static Basket Arbitrage Problem
Harmonic Analysis on Semigroups
Some quick definitions...

• a pair (S, ·) is called a semigroup iff:
  ◦ if s, t ∈ S then s · t is also in S
  ◦ there is a neutral element e ∈ S such that e · s = s for all s ∈ S

• the dual S∗ of S is the set of semicharacters, i.e. applications χ : S → R such that:
  ◦ χ(s)χ(t) = χ(s · t) for all s, t ∈ S
  ◦ χ(e) = 1, where e is the neutral element in S

• a function α is called an absolute value on S iff:
  ◦ α(e) = 1
  ◦ α(s · t) ≤ α(s)α(t), for all s, t ∈ S
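As a concrete illustration (an assumed example, not from the slides): on the additive semigroup (N, +) with neutral element 0, every map χ_a(s) = a^s with a ≥ 0 is a semicharacter, which can be checked directly:

```python
# Illustrative check: on the additive semigroup (N, +) with neutral element 0,
# chi_a(s) = a**s satisfies the semicharacter axioms
#   chi(e) = 1   and   chi(s) * chi(t) = chi(s + t).
def chi(a):
    return lambda s: float(a) ** s

for a in (0.5, 1.0, 2.0):
    c = chi(a)
    assert c(0) == 1.0                       # chi(e) = 1
    for s in range(6):
        for t in range(6):
            assert abs(c(s) * c(t) - c(s + t)) < 1e-9
print("semicharacter axioms hold for chi_a(s) = a**s")
```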
No Arbitrage Conditions
Suppose we are given market prices q_i for basket calls with weights w_i and strike prices K_i:

• Fundamental theorem of asset pricing: there is no arbitrage in the static market if and only if there is a probability measure π such that

E_π[(w_i^T x − K_i)_+] = q_i for all i.
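The pricing-measure condition above can be tested as a linear-programming feasibility problem. The sketch below specializes to a single asset (all weights equal to 1) on an assumed discrete price support; `consistent_prices` is a hypothetical helper, and scipy's linprog serves purely as a feasibility oracle (zero objective):

```python
import numpy as np
from scipy.optimize import linprog

def consistent_prices(support, strikes, prices):
    """Return True iff some probability measure pi on `support`
    reprices every call exactly: sum_x pi(x) * (x - K_i)_+ = q_i."""
    payoff = np.maximum(support[None, :] - strikes[:, None], 0.0)
    A_eq = np.vstack([payoff, np.ones(len(support))])   # prices + total mass 1
    b_eq = np.append(prices, 1.0)
    res = linprog(np.zeros(len(support)), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * len(support))
    return res.status == 0          # status 2 would mean infeasible: arbitrage

support = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
strikes = np.array([50.0, 100.0, 150.0])
print(consistent_prices(support, strikes, np.array([55.0, 20.0, 5.0])))  # consistent
print(consistent_prices(support, strikes, np.array([55.0, 60.0, 5.0])))  # price rises with strike: arbitrage
```

The second price vector violates monotonicity of call prices in the strike, so no pricing measure exists and the LP is infeasible, exactly the "arbitrage" branch of the theorem.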
Existence of initial data satisfying the constraints for the spherically symmetric Einstein
ing the Vlasov equation and it will be seen that it gives rise to new mathematical
features compared to those cases studied up to now. The second is connected
1 Introduction
The global dynamical behavior of self-gravitating matter is a subject of central importance in general relativity. A form of matter which has particularly nice mathematical properties is collisionless matter, described by the Vlasov equation. It has the advantage that it lacks the tendency observed in certain other models, such as perfect fluids, that solutions of the equations of motion of the matter lose differentiability after a finite time. These singularities of the mathematical model form an obstacle to further analysis and prevent the study of the global dynamical properties of the solutions. Collisionless matter is free from these difficulties and there is a growing literature on global properties of solutions of the Einstein-Vlasov system [1], [8].
A Bernstein type result for special Lagrangian submanifolds
The results in this paper impose conditions on the image of the Gauss map of Σ. Recall that the set of all Lagrangian subspaces of C^n is parametrized by the Lagrangian Grassmannian U(n)/SO(n). The Gauss map γ : Σ → U(n)/SO(n) of a Lagrangian submanifold assigns to each x ∈ Σ the tangent space at x, T_xΣ.
We wish to thank Professor D. H. Phong and Professor S.-T. Yau for their encouragement and support.
2 Proof of Theorem
Let Σ be a complete submanifold of R^{2n}. Around any point p ∈ Σ, we choose orthonormal frames {e_i}_{i=1,...,n} for TΣ and {e_α}_{α=n+1,...,2n} for NΣ, the normal bundle of Σ.

$$F(h_{ijk}) = \sum_{i,j,k} h_{ijk}^2 + \sum_{i,k} \lambda_i^2 h_{iik}^2 + 2 \sum_{i<j,\,k} \lambda_i \lambda_j h_{ijk}^2 \ge 0 \qquad (1)$$
Algebraic Topology Exercises (English)
(b) X is the graph which is the suspension of n points and f is the suspension of a
cyclic permutation of the n points.
cylinder Mf is a CW complex. [The technique of cellular approximation described in §4.1 can be applied to show that the cellularity hypothesis can be dropped.]
3. Show that the n-skeleton of the simplex ∆^k has the homotopy type of a wedge sum of C(k, n+1) n-spheres.
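The count can be sanity-checked (not proved) through the reduced Euler characteristic: the n-skeleton of ∆^k has C(k+1, i+1) i-simplices, while a wedge of m n-spheres has reduced Euler characteristic (−1)^n m. A short Python check, assuming those two standard facts:

```python
from math import comb

def wedge_count(k, n):
    """Number m of n-spheres inferred from the reduced Euler characteristic
    of the n-skeleton of Delta^k, which has comb(k+1, i+1) i-simplices."""
    chi_reduced = -1 + sum((-1) ** i * comb(k + 1, i + 1) for i in range(n + 1))
    return (-1) ** n * chi_reduced      # wedge of m n-spheres has chi~ = (-1)^n m

# The inferred count matches C(k, n+1) for every 0 <= n < k
for k in range(1, 8):
    for n in range(0, k):
        assert wedge_count(k, n) == comb(k, n + 1)
print("reduced Euler characteristic matches C(k, n+1) n-spheres")
```

Of course the Euler characteristic only confirms the count once the wedge-of-spheres homotopy type is known; the exercise itself asks for the stronger homotopy statement.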
Section 1.1.
1. If x0 and x1 are two points in the same path component of X , construct a bijection between the set of homotopy classes of paths from x0 to x1 and π1(X, x0) .
9. (a) Show that a finite CW complex, or more generally one with a finite 1-skeleton,
has finitely generated fundamental group.
(b) Show that a map f : X→Y with X compact and Y a CW complex cannot induce an
Probability Theory and Mathematical Statistics, Problem Set 4 (English)
EECE 7204
APPL. PROB. & STOCH. PROC.
Homework-4 Problem Set with Text
Due on October 18, 2011
Fall 2011
Reading Assignment: Chapter 4 from Stark and Woods, 4th edition.
in part (a) above.
[Figure: plot of the pdf fX(x); the vertical axis is marked at 0.5 and the x-axis at 0, 1, 2.]
P4.9
Random variable X is distributed according to the pdf e^{−x} u(x).

(a) Determine the mean µ_X = E[X] of X.
(b) Determine the variance σ_X² = E[(X − µ_X)²] of X.
(c) Determine the skewness of X given by
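Assuming the skewness in part (c) means the standardized third central moment E[(X − µ_X)³]/σ³ (the formula itself is truncated above), the three answers can be checked by numerical integration with scipy:

```python
import numpy as np
from scipy.integrate import quad

# pdf of P4.9: f(x) = e^{-x} for x >= 0 (unit-rate exponential)
f = lambda x: np.exp(-x)

mean, _ = quad(lambda x: x * f(x), 0, np.inf)
var, _ = quad(lambda x: (x - mean) ** 2 * f(x), 0, np.inf)
third, _ = quad(lambda x: (x - mean) ** 3 * f(x), 0, np.inf)
skewness = third / var ** 1.5
print(mean, var, skewness)   # 1, 1, 2 for the unit exponential
```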
f_X(x) = 2x for 0 < x < 1, and f_X(x) = 0 otherwise.
P4.2 Text Problem 4.18 (Page 285). Verify your answer in part (c) by computing the mean of the conditional mean determined in part (b). Let X and Y be two RVs. The pdf fX (x) of X is given as
P4.5 Text Problem 4.38 (Page 288).
Let X be a uniform
Lattice Regularization and Symmetries
arXiv:hep-lat/0606021v2 1 Jul 2006

Lattice Regularization and Symmetries

Peter Hasenfratz, Ferenc Niedermayer* and Reto von Allmen
Institute for Theoretical Physics, University of Bern, Sidlerstrasse 5, CH-3012 Bern, Switzerland

Abstract

Finding the relation between the symmetry transformations in the continuum and on the lattice might be a nontrivial task as illustrated by the history of chiral symmetry. Lattice actions induced by a renormalization group procedure inherit all symmetries of the continuum theory. We give a general procedure which gives the corresponding symmetry transformations on the lattice.

* On leave from Eötvös University, HAS Research Group, Budapest, Hungary

1 Introduction

In 1981 Ginsparg and Wilson formulated a condition [1] to be satisfied by the lattice Dirac operator in order to have the physical consequences of chiral symmetry on the lattice. The derivation of this condition is based on renormalization group (RG) considerations, but the result is more general. Indeed, the GW relation is satisfied not only by the fixed point [2], but also by the overlap [3] and the domain-wall operator after dimensional reduction [4]. The Dirac operators in the latter two cases are not related to RG ideas. The GW condition is a non-linear relation for the lattice Dirac operator reflecting the fact that, concerning chiral symmetry, the physical content of the classical lattice theory is the same as that in the continuum. It was observed only many years later that an exact symmetry transformation exists on the lattice as well [5].

Discretizing a field theory by repeated RG block transformations, or equivalently by 'blocking out of the continuum' [6], has the advantage that all symmetries of the continuum theory will be inherited by the lattice action, even those which are explicitly broken by the block transformation. The symmetry transformations are, however, different from those in the continuum. We present here a general technique and a streamlined procedure to find the form of the symmetry transformations
and the symmetry conditions (like the GW relation).

We have to emphasize that in talking about a symmetry transformation on the lattice we mean a symmetry of the lattice action, i.e. the classical field theory. In the quantum theory this transformation enters as a change of variable in the path integral, which might induce a non-trivial contribution to the Ward identity through the integration measure.

Not all internal symmetries of the continuum theory can be kept by the block transformation. Consider, as an example, chiral symmetry. For a continuum action and a block transformation which both have an explicit γ5-invariance, the resulting lattice action will also be γ5-invariant. But due to the existence of the chiral anomaly, this action cannot be an acceptable one: it has to be either non-local or has to describe extra unwanted degrees of freedom (doublers) [7]. This is what happens in the limit when the coefficient of the natural block transformation goes to infinity. In this limit the blocking becomes chiral invariant, but at the same time the corresponding lattice action ceases to be local [8].

This paper is motivated by some unsolved theoretical problems in lattice regularized chiral gauge theories. In spite of the great progress during the last years [9] an important problem remains: the relative weight between the different topological sectors is undefined. This situation might be related to different technical problems. Although the chiral invariant vector theory has a controlled RG background, the steps towards a chiral theory are not related to RG anymore. The projectors [10] are introduced by hand and, seemingly unavoidably, they break CP and T symmetry [11]. Further, the fermion number anomaly¹ enters in an unusual way: the different topological sectors have different numbers of degrees of freedom on the lattice. These technical issues might be related to the problem mentioned above. A different strategy would be to start with a fermion number violating block transformation, which makes the
relation between the continuum and lattice symmetries non-trivial. The systematic approach discussed here might be a useful tool in this and similar problems.

2 Free massless fermions

Since fermions enter quadratically even in the presence of gauge fields, most of the equations below remain valid in the presence of interactions as well. The block transformation is a Gaussian integral which is equivalent to a formal minimization problem over the continuum fields ψ, ψ̄ at fixed lattice fields χ, χ̄. The minimizing fields in eq. (1) are given by ψ₀(χ) = A⁻¹ω†χ, together with the analogous expression for ψ̄₀(χ̄).

¹We mean here the global vector anomaly of a chiral gauge theory free of gauge anomalies.

From the equations above it is easy to derive the following useful relations, which will be used repeatedly in this work:

ωψ₀(χ) = (1 − D)χ,   ψ̄₀(χ̄)ω† = χ̄(1 − D).  (6)

The Ginsparg-Wilson relation can then be obtained from eq. (5) by using the continuum relation {D, γ₅} = 0 and the relations above²:

{D, γ₅} = 2Dγ₅D.  (7)

We formulate now a general statement on the form of infinitesimal symmetry transformations on the lattice.

Statement. Let δψ and δψ̄ be an infinitesimal symmetry transformation of the continuum action ψ̄Dψ. Define the infinitesimal change of the lattice fields by δχ = ωδψ₀(χ), δχ̄ = δψ̄₀(χ̄)ω†. Then the lattice action χ̄Dχ is invariant under this infinitesimal symmetry transformation.

Proof. One can use the explicit equations above to show the statement. Replace ψ₀(χ) by ψ₀(χ + δχ) − δψ₀(χ) and ψ̄₀(χ̄) by ψ̄₀(χ̄ + δχ̄) − δψ̄₀(χ̄) in eqs. (10)-(12); eq. (10) is then a symmetry transformation of the lattice action. For the continuum chiral rotation δψ₀(χ) = iǫγ₅ψ₀(χ), δψ̄₀(χ̄) = iǫψ̄₀(χ̄)γ₅, the procedure yields lattice chiral transformations with modified γ₅-matrices acting on χ and χ̄, respectively [10].
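Relation (7) can be checked numerically for an overlap-type construction. In the toy sketch below, a random hermitian matrix H merely stands in for γ₅D_W (an assumption; no actual Wilson operator is built): V = γ₅ sign(H) is unitary with γ₅Vγ₅ = V†, and D = (1 + V)/2 then satisfies the GW relation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                     # toy 2n-dimensional "Dirac" space
gamma5 = np.diag([1.0] * n + [-1.0] * n)

# Random hermitian H standing in for gamma5 * D_W; sign(H) via eigendecomposition
A = rng.standard_normal((2 * n, 2 * n))
H = (A + A.T) / 2
evals, U = np.linalg.eigh(H)
signH = U @ np.diag(np.sign(evals)) @ U.T  # hermitian unitary

V = gamma5 @ signH                         # unitary, with gamma5 V gamma5 = V^dagger
D = (np.eye(2 * n) + V) / 2                # overlap-type lattice Dirac operator

lhs = D @ gamma5 + gamma5 @ D              # {D, gamma5}
rhs = 2 * D @ gamma5 @ D                   # 2 D gamma5 D, cf. eq. (7)
print(np.max(np.abs(lhs - rhs)))           # essentially zero (machine precision)
```

The check works for any hermitian H because V² sandwiched with γ₅ collapses via V γ₅ V = γ₅; this is the algebraic content of eq. (7), independent of the detailed Dirac dynamics.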
Notice the asymmetry between the transformations of χ and χ̄, as well as the α = −1 case.

U(1) vector transformation

The standard infinitesimal vector rotation in the continuum, δψ₀(χ) = iǫψ₀(χ), δψ̄₀(χ̄) = −iǫψ̄₀(χ̄), implies the lattice transformation

δχ = iǫ(1 − D)χ,   δχ̄ = −iǫχ̄(1 − D),  (17)

while the transformation δψ₀(χ) = iǫ(1 − αD)ψ₀(χ), δψ̄₀(χ̄) = −iǫψ̄₀(χ̄)(1 − αD) leads to

δχ = iǫ(1 − (1 + α)D)χ,   δχ̄ = −iǫχ̄(1 − (1 + α)D).  (18)

For α = 0 the continuum, while for α = −1 the lattice transformation has the standard form. Considering finite transformations, note that for α = 0 in eqs. (15), (16) the transformation exp(itγ₅(1 − D)) is not compact (it is not 2π-periodic in t), as opposed to the corresponding transformation exp(itγ₅) in the continuum. On the other hand, for α = 1 (or α = −1) the lattice transformation exp(itγ̂₅) corresponding to eq. (16) is compact, while its continuum counterpart is not.

Infinitesimal translation

In the continuum we have δψ₀(χ) = ǫ∂̂µψ₀(χ), δψ̄₀(χ̄) = ǫψ̄₀(χ̄)∂̂†µ, where (∂̂µ)xy = ∂xµ δ(x − y). Our general procedure leads to the lattice transformations δχ = ǫω∂̂µψ₀(χ), δχ̄ = ǫψ̄₀(χ̄)∂̂†µω†.

In this case

A(R) = min_{S} { A(S) + T(R, ω(S)) }  (20)

where T is the block transformation, ω(S) defines the averaging, A(S) is the continuum action, while A(R) is the (fixed point [13]) action on the lattice. For the averaging one might take the flat, non-overlapping averaging in eq. (2). A simple example for T is

T = 2κ Σₙ Rₙ · (ωS)ₙ / |(ωS)ₙ|,   ˜ω(S)² = 1.  (22)

For notational simplicity we shall take κ = 1. The minimizing field in eq. (20) is denoted by S₀ = S₀(R). Consider now an infinitesimal symmetry transformation of the continuum action (infinitesimal translation, for example) acting on the minimizing field, S₀(R) → S₀(R) + δS₀(R). Introduce the notation

˜ω(S₀ + δS₀) = ˜ω(S₀) + δ˜ω,   (δ˜ω, ˜ω(S₀)) = 0.  (23)

Following the procedure used for fermions above, we compensate the change of the blocking term T by changing the lattice configuration R → R + δR:

(δR, ˜ω(S₀(R))) + (R, δ˜ω) = 0,   (R, δR) = 0,  (24)

where we used eq. (21). The solution of eq. (24) can be written as

δR = δ˜ω (˜ω, R) − ˜ω (δ˜ω, R).  (25)

If δS₀(R) is a symmetry transformation in the continuum, then A(R + δR) = A(R), i.e. δR
in eq. (25) is a symmetry transformation on the lattice.

Acknowledgements

P.H. is indebted for the kind hospitality and for the discussions with many of the participants at the ILFTN Workshop in Nara and at the Workshop at YITP in Kyoto. P.H. and F.N. thank the kind invitation to the Ringberg Meeting where part of this work was presented. This work was supported by the Schweizerischer Nationalfonds.

References

[1] P.H. Ginsparg and K.G. Wilson, Phys. Rev. D25 (1982) 2649.
[2] P. Hasenfratz, Nucl. Phys. Proc. Suppl. 63 (1998) 53 [arXiv:hep-lat/9709110].
[3] H. Neuberger, Phys. Lett. B427 (1998) 353 [arXiv:hep-lat/9801031].
[4] Y. Kikukawa, T. Noguchi, arXiv:hep-lat/9902022.
[5] M. Lüscher, Phys. Lett. B428 (1998) 342 [arXiv:hep-lat/9802011].
[6] W. Bietenholz and U.-J. Wiese, Nucl. Phys. B464 (1996) 319 [arXiv:hep-lat/9510026] and references therein.
[7] H.B. Nielsen and M. Ninomiya, Nucl. Phys. B185 (1981) 20.
[8] U.-J. Wiese, Phys. Lett. B315 (1993) 417 [arXiv:hep-lat/9306003].
[9] M. Lüscher, Nucl. Phys. B549 (1999) 295 [arXiv:hep-lat/9811032]; ibid. B568 (2000) 162 [arXiv:hep-lat/9904009]; M. Lüscher, J. High Energy Phys. 06 (2000) 028 [arXiv:hep-lat/0006014]; H. Suzuki, J. High Energy Phys. 10 (2000) 039 [arXiv:hep-lat/0009036]; Y. Kikukawa and Y. Nakayama, Nucl. Phys. B597 (2001) 519 [arXiv:hep-lat/0005015]; M. Lüscher, in 'Erice 2000, Theory and experiment heading for new physics', 41-89 [arXiv:hep-th/0102028] and references therein.
[10] M. Lüscher, Nucl. Phys. B538 (1999) 515 [arXiv:hep-lat/9808021]; R. Narayanan, Phys. Rev. D58 (1998) 97501 [arXiv:hep-lat/9802018]; F. Niedermayer, Nucl. Phys. Proc. Suppl. 73 (1999) 105 [arXiv:hep-lat/9810026].
[11] P. Hasenfratz, Nucl. Phys. Proc. Suppl. B106 (2002) 159 [arXiv:hep-lat/0111023]; K. Fujikawa, M. Ishibashi and H. Suzuki, JHEP 0204 (2002) 046 [arXiv:hep-lat/0203016]; Nucl. Phys. Proc. Suppl. B119 (2003) 781 [arXiv:hep-lat/0203016]; K. Fujikawa and H. Suzuki, Phys. Rev. D67 (2003) 034506 [arXiv:hep-lat/0210013]; P. Hasenfratz and M. Bissegger, Phys. Lett. B613 (2005) 57 [arXiv:hep-lat/0501010].
[12] T. DeGrand, A. Hasenfratz, P. Hasenfratz and
F. Niedermayer, Nucl. Phys. B454 (1995) 587 [arXiv:hep-lat/9506030].
[13] P. Hasenfratz and F. Niedermayer, Nucl. Phys. B414 (1994) 785 [arXiv:hep-lat/9308004].
David Hilbert - Mathematical Problems
Mathematical Problems

Lecture delivered before the International Congress of Mathematicians at Paris in 1900

By Professor David Hilbert

Who of us would not be glad to lift the veil behind which the future lies hidden; to cast a glance at the next advances of our science and at the secrets of its development during future centuries? What particular goals will there be toward which the leading mathematical spirits of coming generations will strive? What new methods and new facts in the wide and rich field of mathematical thought will the new centuries disclose?

History teaches the continuity of the development of science. We know that every age has its own problems, which the following age either solves or casts aside as profitless and replaces by new ones. If we would obtain an idea of the probable development of mathematical knowledge in the immediate future, we must let the unsettled questions pass before our minds and look over the problems which the science of today sets and whose solution we expect from the future. To such a review of problems the present day, lying at the meeting of the centuries, seems to me well adapted. For the close of a great epoch not only invites us to look back into the past but also directs our thoughts to the unknown future.

The deep significance of certain problems for the advance of mathematical science in general and the important role which they play in the work of the individual investigator are not to be denied. As long as a branch of science offers an abundance of problems, so long is it alive; a lack of problems foreshadows extinction or the cessation of independent development. Just as every human undertaking pursues certain objects, so also mathematical research requires its problems.
It is by the solution of problems that the investigator tests the temper of his steel; he finds new methods and new outlooks, and gains a wider and freer horizon.

It is difficult and often impossible to judge the value of a problem correctly in advance; for the final award depends upon the gain which science obtains from the problem. Nevertheless we can ask whether there are general criteria which mark a good mathematical problem. An old French mathematician said: "A mathematical theory is not to be considered complete until you have made it so clear that you can explain it to the first man whom you meet on the street." This clearness and ease of comprehension, here insisted on for a mathematical theory, I should still more demand for a mathematical problem if it is to be perfect; for what is clear and easily comprehended attracts, the complicated repels us.

Moreover a mathematical problem should be difficult in order to entice us, yet not completely inaccessible, lest it mock at our efforts. It should be to us a guide post on the mazy paths to hidden truths, and ultimately a reminder of our pleasure in the successful solution.

The mathematicians of past centuries were accustomed to devote themselves to the solution of difficult particular problems with passionate zeal. They knew the value of difficult problems. I remind you only of the "problem of the line of quickest descent," proposed by John Bernoulli. Experience teaches, explains Bernoulli in the public announcement of this problem, that lofty minds are led to strive for the advance of science by nothing more than by laying before them difficult and at the same time useful problems, and he therefore hopes to earn the thanks of the mathematical world by following the example of men like Mersenne, Pascal, Fermat, Viviani and others and laying before the distinguished analysts of his time a problem by which, as a touchstone, they may test the value of their methods and measure their strength.
The calculus of variations owes its origin to this problem of Bernoulli and to similar problems.

Fermat had asserted, as is well known, that the diophantine equation

x^n + y^n = z^n

(x, y and z integers) is unsolvable—except in certain self-evident cases. The attempt to prove this impossibility offers a striking example of the inspiring effect which such a very special and apparently unimportant problem may have upon science. For Kummer, incited by Fermat's problem, was led to the introduction of ideal numbers and to the discovery of the law of the unique decomposition of the numbers of a circular field into ideal prime factors—a law which today, in its generalization to any algebraic field by Dedekind and Kronecker, stands at the center of the modern theory of numbers and whose significance extends far beyond the boundaries of number theory into the realm of algebra and the theory of functions.

To speak of a very different region of research, I remind you of the problem of three bodies. The fruitful methods and the far-reaching principles which Poincaré has brought into celestial mechanics and which are today recognized and applied in practical astronomy are due to the circumstance that he undertook to treat anew that difficult problem and to approach nearer a solution.

The two last mentioned problems—that of Fermat and the problem of the three bodies—seem to us almost like opposite poles—the former a free invention of pure reason, belonging to the region of abstract number theory, the latter forced upon us by astronomy and necessary to an understanding of the simplest fundamental phenomena of nature.

But it often happens also that the same special problem finds application in the most unlike branches of mathematical knowledge. So, for example, the problem of the shortest line plays a chief and historically important part in the foundations of geometry, in the theory of curved lines and surfaces, in mechanics and in the calculus of variations.
And how convincingly has F. Klein, in his work on the icosahedron, pictured the significance which attaches to the problem of the regular polyhedra in elementary geometry, in group theory, in the theory of equations and in that of linear differential equations.

In order to throw light on the importance of certain problems, I may also refer to Weierstrass, who spoke of it as his happy fortune that he found at the outset of his scientific career a problem so important as Jacobi's problem of inversion on which to work.

Having now recalled to mind the general importance of problems in mathematics, let us turn to the question from what sources this science derives its problems. Surely the first and oldest problems in every branch of mathematics spring from experience and are suggested by the world of external phenomena. Even the rules of calculation with integers must have been discovered in this fashion in a lower stage of human civilization, just as the child of today learns the application of these laws by empirical methods. The same is true of the first problems of geometry, the problems bequeathed us by antiquity, such as the duplication of the cube, the squaring of the circle; also the oldest problems in the theory of the solution of numerical equations, in the theory of curves and the differential and integral calculus, in the calculus of variations, the theory of Fourier series and the theory of potential—to say nothing of the further abundance of problems properly belonging to mechanics, astronomy and physics. But, in the further development of a branch of mathematics, the human mind, encouraged by the success of its solutions, becomes conscious of its independence. It evolves from itself alone, often without appreciable influence from without, by means of logical combination, generalization, specialization, by separating and collecting ideas in fortunate ways, new and fruitful problems, and appears then itself as the real questioner.
Thus arose the problem of prime numbers and the other problems of number theory, Galois's theory of equations, the theory of algebraic invariants, the theory of abelian and automorphic functions; indeed almost all the nicer questions of modern arithmetic and function theory arise in this way.

In the meantime, while the creative power of pure reason is at work, the outer world again comes into play, forces upon us new questions from actual experience, opens up new branches of mathematics, and while we seek to conquer these new fields of knowledge for the realm of pure thought, we often find the answers to old unsolved problems and thus at the same time advance most successfully the old theories. And it seems to me that the numerous and surprising analogies and that apparently prearranged harmony which the mathematician so often perceives in the questions, methods and ideas of the various branches of his science, have their origin in this ever-recurring interplay between thought and experience.

It remains to discuss briefly what general requirements may be justly laid down for the solution of a mathematical problem. I should say first of all, this: that it shall be possible to establish the correctness of the solution by means of a finite number of steps based upon a finite number of hypotheses which are implied in the statement of the problem and which must always be exactly formulated. This requirement of logical deduction by means of a finite number of processes is simply the requirement of rigor in reasoning. Indeed the requirement of rigor, which has become proverbial in mathematics, corresponds to a universal philosophical necessity of our understanding; and, on the other hand, only by satisfying this requirement do the thought content and the suggestiveness of the problem attain their full effect.
A new problem, especially when it comes from the world of outer experience, is like a young twig, which thrives and bears fruit only when it is grafted carefully and in accordance with strict horticultural rules upon the old stem, the established achievements of our mathematical science.

Besides it is an error to believe that rigor in the proof is the enemy of simplicity. On the contrary we find it confirmed by numerous examples that the rigorous method is at the same time the simpler and the more easily comprehended. The very effort for rigor forces us to find out simpler methods of proof. It also frequently leads the way to methods which are more capable of development than the old methods of less rigor. Thus the theory of algebraic curves experienced a considerable simplification and attained greater unity by means of the more rigorous function-theoretical methods and the consistent introduction of transcendental devices. Further, the proof that the power series permits the application of the four elementary arithmetical operations as well as the term by term differentiation and integration, and the recognition of the utility of the power series depending upon this proof contributed materially to the simplification of all analysis, particularly of the theory of elimination and the theory of differential equations, and also of the existence proofs demanded in those theories. But the most striking example for my statement is the calculus of variations. The treatment of the first and second variations of definite integrals required in part extremely complicated calculations, and the processes applied by the old mathematicians had not the needful rigor. Weierstrass showed us the way to a new and sure foundation of the calculus of variations. By the examples of the simple and double integral I will show briefly, at the close of my lecture, how this way leads at once to a surprising simplification of the calculus of variations.
For in the demonstration of the necessary and sufficient criteria for the occurrence of a maximum and minimum, the calculation of the second variation and in part, indeed, the wearisome reasoning connected with the first variation may be completely dispensed with—to say nothing of the advance which is involved in the removal of the restriction to variations for which the differential coefficients of the function vary but slightly.

While insisting on rigor in the proof as a requirement for a perfect solution of a problem, I should like, on the other hand, to oppose the opinion that only the concepts of analysis, or even those of arithmetic alone, are susceptible of a fully rigorous treatment. This opinion, occasionally advocated by eminent men, I consider entirely erroneous. Such a one-sided interpretation of the requirement of rigor would soon lead to the ignoring of all concepts arising from geometry, mechanics and physics, to a stoppage of the flow of new material from the outside world, and finally, indeed, as a last consequence, to the rejection of the ideas of the continuum and of the irrational number. But what an important nerve, vital to mathematical science, would be cut by the extirpation of geometry and mathematical physics! On the contrary I think that wherever, from the side of the theory of knowledge or in geometry, or from the theories of natural or physical science, mathematical ideas come up, the problem arises for mathematical science to investigate the principles underlying these ideas and so to establish them upon a simple and complete system of axioms, that the exactness of the new ideas and their applicability to deduction shall be in no respect inferior to those of the old arithmetical concepts.

To new concepts correspond, necessarily, new signs. These we choose in such a way that they remind us of the phenomena which were the occasion for the formation of the new concepts.
So the geometrical figures are signs or mnemonic symbols of space intuition and are used as such by all mathematicians. Who does not always use along with the double inequality a > b > c the picture of three points following one another on a straight line as the geometrical picture of the idea "between"? Who does not make use of drawings of segments and rectangles enclosed in one another, when it is required to prove with perfect rigor a difficult theorem on the continuity of functions or the existence of points of condensation? Who could dispense with the figure of the triangle, the circle with its center, or with the cross of three perpendicular axes? Or who would give up the representation of the vector field, or the picture of a family of curves or surfaces with its envelope which plays so important a part in differential geometry, in the theory of differential equations, in the foundation of the calculus of variations and in other purely mathematical sciences? The arithmetical symbols are written diagrams and the geometrical figures are graphic formulas; and no mathematician could spare these graphic formulas, any more than in calculation the insertion and removal of parentheses or the use of other analytical signs. The use of geometrical signs as a means of strict proof presupposes the exact knowledge and complete mastery of the axioms which underlie those figures; and in order that these geometrical figures may be incorporated in the general treasure of mathematical signs, there is necessary a rigorous axiomatic investigation of their conceptual content. Just as in adding two numbers, one must place the digits under each other in the right order, so that only the rules of calculation, i.
e., the axioms of arithmetic, determine the correct use of the digits, so the use of geometrical signs is determined by the axioms of geometrical concepts and their combinations. The agreement between geometrical and arithmetical thought is shown also in that we do not habitually follow the chain of reasoning back to the axioms in arithmetical, any more than in geometrical discussions. On the contrary we apply, especially in first attacking a problem, a rapid, unconscious, not absolutely sure combination, trusting to a certain arithmetical feeling for the behavior of the arithmetical symbols, which we could dispense with as little in arithmetic as with the geometrical imagination in geometry. As an example of an arithmetical theory operating rigorously with geometrical ideas and signs, I may mention Minkowski's work, Die Geometrie der Zahlen.[2] Some remarks upon the difficulties which mathematical problems may offer, and the means of surmounting them, may be in place here. If we do not succeed in solving a mathematical problem, the reason frequently consists in our failure to recognize the more general standpoint from which the problem before us appears only as a single link in a chain of related problems. After finding this standpoint, not only is this problem frequently more accessible to our investigation, but at the same time we come into possession of a method which is applicable also to related problems. The introduction of complex paths of integration by Cauchy and of the notion of the IDEALS in number theory by Kummer may serve as examples. This way for finding general methods is certainly the most practicable and the most certain; for he who seeks for methods without having a definite problem in mind seeks for the most part in vain. In dealing with mathematical problems, specialization plays, as I believe, a still more important part than generalization.
Perhaps in most cases where we seek in vain the answer to a question, the cause of the failure lies in the fact that problems simpler and easier than the one in hand have been either not at all or incompletely solved. All depends, then, on finding out these easier problems, and on solving them by means of devices as perfect as possible and of concepts capable of generalization. This rule is one of the most important levers for overcoming mathematical difficulties and it seems to me that it is used almost always, though perhaps unconsciously. Occasionally it happens that we seek the solution under insufficient hypotheses or in an incorrect sense, and for this reason do not succeed. The problem then arises: to show the impossibility of the solution under the given hypotheses, or in the sense contemplated. Such proofs of impossibility were effected by the ancients, for instance when they showed that the ratio of the hypotenuse to the side of an isosceles right triangle is irrational. In later mathematics, the question as to the impossibility of certain solutions plays a preeminent part, and we perceive in this way that old and difficult problems, such as the proof of the axiom of parallels, the squaring of the circle, or the solution of equations of the fifth degree by radicals have finally found fully satisfactory and rigorous solutions, although in another sense than that originally intended. It is probably this important fact along with other philosophical reasons that gives rise to the conviction (which every mathematician shares, but which no one has as yet supported by a proof) that every definite mathematical problem must necessarily be susceptible of an exact settlement, either in the form of an actual answer to the question asked, or by the proof of the impossibility of its solution and therewith the necessary failure of all attempts.
Take any definite unsolved problem, such as the question as to the irrationality of the Euler-Mascheroni constant C, or the existence of an infinite number of prime numbers of the form 2^n + 1. However unapproachable these problems may seem to us and however helpless we stand before them, we have, nevertheless, the firm conviction that their solution must follow by a finite number of purely logical processes. Is this axiom of the solvability of every problem a peculiarity characteristic of mathematical thought alone, or is it possibly a general law inherent in the nature of the mind, that all questions which it asks must be answerable? For in other sciences also one meets old problems which have been settled in a manner most satisfactory and most useful to science by the proof of their impossibility. I instance the problem of perpetual motion. After seeking in vain for the construction of a perpetual motion machine, the relations were investigated which must subsist between the forces of nature if such a machine is to be impossible;[3] and this inverted question led to the discovery of the law of the conservation of energy, which, again, explained the impossibility of perpetual motion in the sense originally intended. This conviction of the solvability of every mathematical problem is a powerful incentive to the worker. We hear within us the perpetual call: There is the problem. Seek its solution. You can find it by pure reason, for in mathematics there is no ignorabimus. The supply of problems in mathematics is inexhaustible, and as soon as one problem is solved numerous others come forth in its place. Permit me in the following, tentatively as it were, to mention particular definite problems, drawn from various branches of mathematics, from the discussion of which an advancement of science may be expected. Let us look at the principles of analysis and geometry.
The most suggestive and notable achievements of the last century in this field are, as it seems to me, the arithmetical formulation of the concept of the continuum in the works of Cauchy, Bolzano and Cantor, and the discovery of non-euclidean geometry by Gauss, Bolyai, and Lobachevsky. I therefore first direct your attention to some problems belonging to these fields.

1. Cantor's problem of the cardinal number of the continuum

Two systems, i. e., two assemblages of ordinary real numbers or points, are said to be (according to Cantor) equivalent or of equal cardinal number, if they can be brought into a relation to one another such that to every number of the one assemblage corresponds one and only one definite number of the other. The investigations of Cantor on such assemblages of points suggest a very plausible theorem, which nevertheless, in spite of the most strenuous efforts, no one has succeeded in proving. This is the theorem: Every system of infinitely many real numbers, i. e., every assemblage of numbers (or points), is either equivalent to the assemblage of natural integers, 1, 2, 3,... or to the assemblage of all real numbers and therefore to the continuum, that is, to the points of a line; as regards equivalence there are, therefore, only two assemblages of numbers, the countable assemblage and the continuum. From this theorem it would follow at once that the continuum has the next cardinal number beyond that of the countable assemblage; the proof of this theorem would, therefore, form a new bridge between the countable assemblage and the continuum. Let me mention another very remarkable statement of Cantor's which stands in the closest connection with the theorem mentioned and which, perhaps, offers the key to its proof.
Any system of real numbers is said to be ordered, if for every two numbers of the system it is determined which one is the earlier and which the later, and if at the same time this determination is of such a kind that, if a is before b and b is before c, then a always comes before c. The natural arrangement of numbers of a system is defined to be that in which the smaller precedes the larger. But there are, as is easily seen, infinitely many other ways in which the numbers of a system may be arranged. If we think of a definite arrangement of numbers and select from them a particular system of these numbers, a so-called partial system or assemblage, this partial system will also prove to be ordered. Now Cantor considers a particular kind of ordered assemblage which he designates as a well ordered assemblage and which is characterized in this way, that not only in the assemblage itself but also in every partial assemblage there exists a first number. The system of integers 1, 2, 3, ... in their natural order is evidently a well ordered assemblage. On the other hand the system of all real numbers, i. e., the continuum in its natural order, is evidently not well ordered. For, if we think of the points of a segment of a straight line, with its initial point excluded, as our partial assemblage, it will have no first element. The question now arises whether the totality of all numbers may not be arranged in another manner so that every partial assemblage may have a first element, i. e., whether the continuum cannot be considered as a well ordered assemblage—a question which Cantor thinks must be answered in the affirmative. It appears to me most desirable to obtain a direct proof of this remarkable statement of Cantor's, perhaps by actually giving an arrangement of numbers such that in every partial system a first number can be pointed out.

2.
The compatibility of the arithmetical axioms

When we are engaged in investigating the foundations of a science, we must set up a system of axioms which contains an exact and complete description of the relations subsisting between the elementary ideas of that science. The axioms so set up are at the same time the definitions of those elementary ideas; and no statement within the realm of the science whose foundation we are testing is held to be correct unless it can be derived from those axioms by means of a finite number of logical steps. Upon closer consideration the question arises: Whether, in any way, certain statements of single axioms depend upon one another, and whether the axioms may not therefore contain certain parts in common, which must be isolated if one wishes to arrive at a system of axioms that shall be altogether independent of one another. But above all I wish to designate the following as the most important among the numerous questions which can be asked with regard to the axioms: To prove that they are not contradictory, that is, that a definite number of logical steps based upon them can never lead to contradictory results. In geometry, the proof of the compatibility of the axioms can be effected by constructing a suitable field of numbers, such that analogous relations between the numbers of this field correspond to the geometrical axioms. Any contradiction in the deductions from the geometrical axioms must thereupon be recognizable in the arithmetic of this field of numbers. In this way the desired proof for the compatibility of the geometrical axioms is made to depend upon the theorem of the compatibility of the arithmetical axioms. On the other hand a direct method is needed for the proof of the compatibility of the arithmetical axioms. The axioms of arithmetic are essentially nothing else than the known rules of calculation, with the addition of the axiom of continuity.
I recently collected them[4] and in so doing replaced the axiom of continuity by two simpler axioms, namely, the well-known axiom of Archimedes, and a new axiom essentially as follows: that numbers form a system of things which is capable of no further extension, as long as all the other axioms hold (axiom of completeness). I am convinced that it must be possible to find a direct proof for the compatibility of the arithmetical axioms, by means of a careful study and suitable modification of the known methods of reasoning in the theory of irrational numbers. To show the significance of the problem from another point of view, I add the following observation: If contradictory attributes be assigned to a concept, I say, that mathematically the concept does not exist. So, for example, a real number whose square is -1 does not exist mathematically. But if it can be proved that the attributes assigned to the concept can never lead to a contradiction by the application of a finite number of logical processes, I say that the mathematical existence of the concept (for example, of a number or a function which satisfies certain conditions) is thereby proved. In the case before us, where we are concerned with the axioms of real numbers in arithmetic, the proof of the compatibility of the axioms is at the same time the proof of the mathematical existence of the complete system of real numbers or of the continuum. Indeed, when the proof for the compatibility of the axioms shall be fully accomplished, the doubts which have been expressed occasionally as to the existence of the complete system of real numbers will become totally groundless. The totality of real numbers, i. e., the continuum according to the point of view just indicated, is not the totality of all possible series in decimal fractions, or of all possible laws according to which the elements of a fundamental sequence may proceed.
It is rather a system of things whose mutual relations are governed by the axioms set up and for which all propositions, and only those, are true which can be derived from the axioms by a finite number of logical processes. In my opinion, the concept of the continuum is strictly logically tenable in this sense only. It seems to me, indeed, that this corresponds best also to what experience and intuition tell us. The concept of the continuum or even that of the system of all functions exists, then, in exactly the same sense as the system of integral, rational numbers, for example, or as Cantor's higher classes of numbers and cardinal numbers. For I am convinced that the existence of the latter, just as that of the continuum, can be proved in the sense I have described; unlike the system of all cardinal numbers or of all Cantor's alephs, for which, as may be shown, a system of axioms, compatible in my sense, cannot be set up. Either of these systems is, therefore, according to my terminology, mathematically non-existent. From the field of the foundations of geometry I should like to mention the following problem:

3. The equality of the volumes of two tetrahedra of equal bases and equal altitudes

In two letters to Gerling, Gauss[5] expresses his regret that certain theorems of solid geometry depend upon the method of exhaustion, i. e., in modern phraseology, upon the axiom of continuity (or upon the axiom of Archimedes). Gauss mentions in particular the theorem of Euclid, that triangular pyramids of equal altitudes are to each other as their bases. Now the analogous problem in the plane has been solved.[6] Gerling also succeeded in proving the equality of volume of symmetrical polyhedra by dividing them into congruent parts. Nevertheless, it seems to me probable that a general proof of this kind for the theorem of Euclid just mentioned is impossible, and it should be our task to give a rigorous proof of its impossibility.
This would be obtained, as soon as we succeeded in specifying two tetrahedra of equal bases and equal altitudes which can in no way be split up into congruent tetrahedra, and which cannot be combined with congruent tetrahedra to form two polyhedra which themselves could be split up into congruent tetrahedra.[7]

4. Problem of the straight line as the shortest distance between two points

Another problem relating to the foundations of geometry is this: If from among the axioms necessary to
GRE Subject Math 2003 Recalled Problems
GRE SUB Test problems (mon)

1. Check whether two vectors are linearly independent or not.
2. Given two points on the Euclidean plane, determine whether they lie in a square with unit area.
3. Find all the maximum points of the function e^{-x} sin 2πx, 0 …
4. What is the minimum value of a + b + c + d when the following identity is satisfied: 4a = 3b = 5c = 15d?
5. If the mean of a …
6. It is known that f > 0 and f(x) = f(-x). Which of the following statements are correct? (The answer options, involving f(0), are garbled in the source.)
7. Suppose there is a field of q elements. Determine the number of invertible matrices over this field.
8. f_n and f are continuous functions and f_n(x) → f(x) pointwise. Which of the following is/are correct? ∫_0^x F_n(t) dt → ∫_0^x F(t) dt; F_n(x) → f(x); ∫_0^x f_n(t) dt → ∫_0^x f(t) dt, where F(x) = ∫ f(x) dx and F_n(x) = ∫ f_n(x) dx.
9. Which person made a great contribution to modern analysis?
10. Which letter is not homeomorphic to the letter C? J, N, S, O, U.
11. Determine the number of spanning trees of the following graph (figure missing).
12. Ten questions in all. If at least four of the first five must be answered, how many ways are there to answer seven of all the ten questions?
13. Fair coins are tossed, and the process stops when either four consecutive heads or four consecutive tails appear. What is the probability of two consecutive heads or tails, or any one of them, in one row?
14. T is a one-to-one and onto mapping defined as T : R^3 → R^3. Which of the following is/are correct? T∘(S∘N) = (T∘S)∘N; T∘N = N∘T; T∘A = T∘B ⟹ A = B.

Based on the GRE SUB Test in Year 2003-2004. All rights reserved (c) mon 2004. No illegal copying allowed. Restricted use for educational purposes.

15. A 10 × 10 cm square card is folded as shown (figure missing). A circle with radius 1 cm is placed so that it is entirely on the square. The upper part of the square is blue and the lower part of the square is red. What is the probability that the circle lies entirely in the red part?
16. Choose the graph of dy/dx = sin y.
17. Check the consistency of the linear equations (2 equations in all; coefficients missing in the source).
18. In a group G, a, b ∈ G are such that ab = ba. Choose the correct relation. (Easy, direct check.)
19. f is a piecewise linear function composed of two linear pieces and continuous on [0, 2], with f(0) = f(2) = 0 and max f = 1. What is the length of the graph of f?
20. Two particles move along the x and y directions separately; v_x and v_y are constant velocities, and at t = 0 the particles are at the origin. Which of the following can be used to calculate the distance between the particles at time t = 1? (Options garbled: at t = 3 the distance between x and y; at t = 10 the travelled distance of v_x and v_y.)
21. z lies on the unit circle |z| = 1. How many "parts" does the region transformed under z → e^z have?
22. Change the coordinates of the function (statement truncated in the source).
23. A = (0 1; 1 0) and I = (1 0; 0 1). Calculate Σ_{n=0}^{∞} (t^n / n!) A^n.
24. Calculate the volume of the solid generated by rotating the region bounded by x = 1, y = 2, and y = x about the y-axis.
25. Write the formula to calculate the volume of the intersection of x^2 + y^2 + z^2 = 4 and r = 2 cos θ.
26. ∮_c F · t ds = ?, where F = x - y and c is the counterclockwise closed path through (1, 1), (-1, 1), (-1, -1), (1, -1).
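Two of the recalled problems above admit short worked solutions. Assuming the garbled sum in problem 23 is the full series with A^0 = I, and reading problem 12 as requiring at least four of the first five questions to be answered:

```latex
% Problem 23: since A^2 = I, the even and odd powers of A separate.
\sum_{n=0}^{\infty}\frac{t^n}{n!}A^n
  = \sum_{k=0}^{\infty}\frac{t^{2k}}{(2k)!}\,I
  + \sum_{k=0}^{\infty}\frac{t^{2k+1}}{(2k+1)!}\,A
  = (\cosh t)\,I + (\sinh t)\,A
  = \begin{pmatrix}\cosh t & \sinh t\\ \sinh t & \cosh t\end{pmatrix}
  = e^{tA}.

% Problem 12: answer exactly four of the first five (and three of the
% remaining five), or all five of the first five (and two of the rest).
\binom{5}{4}\binom{5}{3}+\binom{5}{5}\binom{5}{2}
  = 5\cdot 10 + 1\cdot 10 = 60.
```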
A Doubly Nonnegative Relaxation for the Densest k-Subgraph Problem
Guo Chuanhao, Shan Erfang

Abstract: The densest k-subgraph problem is a classical problem of combinatorial optimization, which in general is nonconvex and NP-hard.
This paper proposes a new convex relaxation method for solving the problem, the doubly nonnegative relaxation, establishes the corresponding doubly nonnegative relaxation model, and proves that under certain conditions it is equivalent to a new semidefinite relaxation model.
Finally, these models are tested on some randomly generated examples; the numerical results show that the doubly nonnegative relaxation performs better computationally than the equivalent semidefinite relaxation.
Journal: Operations Research and Management Science, 2015 (Vol. 000), No. 5, 7 pages (pp. 144-150). Authors: Guo Chuanhao and Shan Erfang (Department of Management Science and Engineering, School of Management, Shanghai University, Shanghai 200444, China). Language: Chinese. CLC classification: O221.7. Key words: combinatorial optimization; doubly nonnegative relaxation; semidefinite relaxation; densest k-subgraph.

For a given graph G(V, E), where V denotes the vertex set and E the edge set, the densest k-subgraph (DkS) problem is the following: for a given parameter k, find a subgraph of G with k vertices such that the total weight of the edges spanned by these k vertices is maximized. Usually 1 < k < |V|, where |V| denotes the number of vertices of G, and the problem is NP-hard in general. The DkS problem was first proposed by Corneil and Perl [1], and it can be viewed as a natural generalization of the Maximum Clique problem. It has wide practical applications, such as community detection in social networks and protein identification [2]; its study therefore has both theoretical significance and practical value. Recently, in order to handle a class of completely positive programming problems [4], Burer [3] used Diananda's decomposition theorem [5] to obtain a class of tractable convex programs, the doubly nonnegative programs, which can be solved by polynomial-time interior-point algorithms as well as by many convex programming packages. This approach has since been studied further by a number of authors [3, 4, 6, 7]. Based on the ideas of [3], this paper studies the solution of the DkS problem further.
First, according to the structural features of the DkS problem, we establish the corresponding doubly nonnegative relaxation model and analyze its characteristics. Second, we study the relation between the doubly nonnegative relaxation and the corresponding semidefinite relaxation: under certain assumptions the two relaxations are equivalent. Finally, extensive numerical results show that the doubly nonnegative relaxation computes better than the equivalent semidefinite relaxation.

We first briefly recall the definition of the DkS problem.

Definition 1. The DkS problem is: given a graph G(V, E), find a subgraph with k vertices such that the total weight of the edges spanned by these k vertices is maximized, where 1 < k < |V| and |V| denotes the number of vertices of G.

Let A = (a_ij)_{n×n} denote the weighted adjacency matrix of G; note that A is symmetric. By the definition above, the DkS problem can be written as

  max x^T A x  s.t.  x^T e = k,  x ∈ {0,1}^n,   (DkS)

where e denotes the all-ones vector. Noting that x ∈ {0,1}^n if and only if x_i^2 = x_i for all i = 1, 2, ..., n, (DkS) is also equivalent to

  max x^T A x  s.t.  x^T x = k,  x ∈ {0,1}^n.   (DkS-1)

Without loss of generality, assume that A is not positive definite; then (DkS) and (DkS-1) are both nonconvex integer-constrained quadratic programs, which in general are NP-hard. Building corresponding computable relaxation models is therefore one effective way to solve them. To this end, note that the objective x^T A x in (DkS) and (DkS-1) can be written as A · xx^T, where · denotes the trace inner product of matrices. Setting X = xx^T and using the lifting technique, (DkS) and (DkS-1) can be transformed into the completely positive programs (CPP-DkS) and (CPP-DkS-1) over the cone C_{1+n} of completely positive matrices of order 1+n (the matrices expressible as sums of terms zz^T with z a nonnegative vector of dimension 1+n); it is easy to show that C_{1+n} is a closed convex cone. Analogously to the proof of Theorem 2.6 in [4], we obtain the following theorem, whose proof is omitted here.

Theorem 2.1. (i) (DkS) and (CPP-DkS) have the same optimal value, and so do (DkS-1) and (CPP-DkS-1). (ii) If (x*, X*) is an optimal solution of (CPP-DkS), then x* lies in the convex hull of the optimal solutions of (DkS); the same holds for (DkS-1).

By Theorem 2.1, (DkS) and (DkS-1) can be transformed into (CPP-DkS) and (CPP-DkS-1), and the transformation is equivalent in a certain sense. Since (CPP-DkS) and (CPP-DkS-1) are essentially linear convex programs, one might hope to solve them with existing convex optimization software. However, both contain the conic constraint C_{1+n}, and Dickinson and Gijben [8] proved that checking feasibility of this constraint is NP-hard, so (CPP-DkS) and (CPP-DkS-1) remain NP-hard even though they are convex in form. Fortunately, by the following theorem, C_{1+n} can be relaxed to a computable convex cone, so that (CPP-DkS) and (CPP-DkS-1) can be further transformed into computable convex programs.

Theorem 2.2 [5]. C_{1+n} ⊆ D_{1+n} for all n, and the reverse inclusion "⊇" holds if and only if 1+n ≤ 4.

By Theorem 2.2, (CPP-DkS) and (CPP-DkS-1) can be relaxed to the problems (DNN-DkS) and (DNN-DkS-1), where D_{1+n} = S_+^{1+n} ∩ N^{1+n} is the doubly nonnegative cone, S_+^{1+n} is the set of positive semidefinite matrices of order 1+n, and N^{1+n} is the set of symmetric nonnegative matrices of order 1+n; (DNN-DkS) and (DNN-DkS-1) are called doubly nonnegative programs. Note that (DNN-DkS) and (DNN-DkS-1) have the same objective and almost the same constraints, except that (DNN-DkS) has one more equality constraint than (DNN-DkS-1). Moreover, D_{1+n} is a convex set, being the intersection of a semidefinite constraint and a nonnegativity constraint; since the objective and constraint functions of (DNN-DkS) and (DNN-DkS-1) are all linear, both can be solved efficiently by existing convex optimization software such as CVX [9].

Thus, since (DkS) and (DkS-1) are equivalent, we have built the corresponding computable doubly nonnegative relaxation models (DNN-DkS) and (DNN-DkS-1). To examine their practical performance, we solve some randomly generated examples; all examples are coded in Matlab and solved by calling CVX.

Example 1. n = 3, k = 2, with coefficient matrix A (entries lost in the extraction). A simple computation shows that A is indefinite, so both relaxation models can be used. Solving (DNN-DkS) gives Opt(DNN-DkS) = 2 with x* = [1 0 1]^T and X* satisfying X* = x*(x*)^T, so the solution x* is optimal for the original problem. Solving (DNN-DkS-1) gives Opt(DNN-DkS-1) = 2.69218 with x* = [0.7184 0.5632 0.7184]^T and (text truncated).
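On tiny instances the integer program (DkS) above can be solved by brute force, which is a handy check against relaxation output. A minimal sketch; the 3-vertex matrix below is illustrative only, since the entries of the paper's Example 1 matrix are lost in the extraction:

```python
import itertools

def densest_k_subgraph(A, k):
    # Brute-force DkS: maximize x^T A x over 0/1 vectors with exactly k ones.
    # Exponential in n -- suitable only for tiny illustrative instances.
    n = len(A)
    best_val, best_set = None, None
    for S in itertools.combinations(range(n), k):
        # equals x^T A x for the 0/1 indicator vector x of S
        val = sum(A[i][j] for i in S for j in S)
        if best_val is None or val > best_val:
            best_val, best_set = val, set(S)
    return best_val, best_set

# Hypothetical 3-vertex instance: a triangle with unit edge weights.
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
val, S = densest_k_subgraph(A, 2)
# Any 2-subset spans one edge, counted twice in x^T A x, so val = 2.
```

Enumerating all C(n, k) subsets is exactly what the convex relaxations in the paper are designed to avoid; the brute force is only a correctness oracle.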
The mixed problem in L p for some two-dimensional Lipschitz domains
Loredana Lanzani*, Department of Mathematics, University of Arkansas, Fayetteville, Arkansas. lanzani@
Luca Capogna*, Department of Mathematics, University of Arkansas, Fayetteville, Arkansas. capogna@
Δu = 0 in Ω,
∂u/∂ν = f_N on N,
u = f_D on D.
Introduction
The goal of this paper is to study the mixed problem (or Zaremba’s problem) for Laplace’s equation in certain two-dimensional domains when the Neumann data
for |z| ≤ 1/2.
Hence, we do not have (∇u)* ∈ L^2_loc(R, dσ). On the other hand we have (∇u)* ∈ L^p_loc(R, dσ) for all 1 ≤ p < 2. We detour around this problem by establishing weighted estimates in L^2 using the Rellich identity. This relies on an observation of Luis Escauriaza [11] that a Rellich identity holds when the components of the vector field are, respectively, the real and imaginary part of a holomorphic function. Then, we imitate the arguments of Dahlberg and Kenig [10] to establish Hardy-space estimates (with a different weight). The weights are chosen so that interpolation will give us unweighted L^p as an intermediate space. The weights we consider will be of the form |x|^ε restricted to the boundary of a Lipschitz graph domain. Earlier work of Shen [30] (see also §3) gives a different approach to the study of weighted estimates for the Neumann and regularity problems when the weight is a power. In the result below and throughout this paper, we assume that Ω is a standard Lipschitz graph domain and that
Author's Contributions
L. Euler
In 1735, Euler presented a solution to the problem known as the Seven Bridges of Königsberg.[35] The city of Königsberg, Prussia was set on the Pregel River, and included two large islands that were connected to each other and the mainland by seven bridges. The problem is to decide whether it is possible to follow a path that crosses each bridge exactly once and returns to the starting point. It is not possible: there is no Eulerian circuit. This solution is considered to be the first theorem of graph theory, specifically of planar graph theory.[35] Euler also discovered the formula V − E + F = 2 relating the number of vertices, edges and faces of a convex polyhedron,[36] and hence of a planar graph. The constant in this formula is now known as the Euler characteristic for the graph (or other mathematical object), and is related to the genus of the object.[37] The study and generalization of this formula, specifically by Cauchy[38] and L'Huillier,[39] is at the origin of topology.

G. N. Frederickson
Frederickson proposed a heuristic that solves the Rural Postman Problem. The algorithm mainly consists of two steps: it first finds the shortest tree over the connected components, and then matches the odd-degree vertices.

S. L. Hakimi
He is known for characterizing the degree sequences of undirected graphs,[3] for formulating the Steiner tree problem on networks, and for his work on facility location problems on networks. There always exists an optimal solution located at the vertices.

C. Hierholzer
Hierholzer proved that a graph has an Eulerian cycle if and only if it is connected and every vertex has an even degree. This result had been given, without proof, by Leonhard Euler in 1736. He also proposed an algorithm for finding the Eulerian cycle.

J. Kruskal
In computer science, his best known work is Kruskal's algorithm for computing the minimal spanning tree (MST) of a weighted graph.
The algorithm first orders the edges by weight and then proceeds through the ordered list, adding an edge to the partial MST provided that adding the new edge does not create a cycle. Minimal spanning trees have applications to the construction and pricing of communication networks. In combinatorics, he is known for Kruskal's tree theorem (1960), which is also interesting from a mathematical logic perspective since it can only be proved nonconstructively. Kruskal's algorithm is a greedy algorithm in graph theory that finds a minimum spanning tree for a connected weighted graph. This means it finds a subset of the edges that forms a tree that includes every vertex, where the total weight of all the edges in the tree is minimized. If the graph is not connected, then it finds a minimum spanning forest (a minimum spanning tree for each connected component).

E. Weiszfeld
Weiszfeld's algorithm, after the work of Endre Weiszfeld,[4] is a form of iteratively re-weighted least squares. This algorithm defines a set of weights that are inversely proportional to the distances from the current estimate to the samples, and creates a new estimate that is the weighted average of the samples according to these weights.

C. Prins

R. M. Karp
For his continuing contributions to the theory of algorithms including the development of efficient algorithms for network flow and other combinatorial optimization problems, the identification of polynomial-time computability with the intuitive notion of algorithmic efficiency, and, most notably, contributions to the theory of NP-completeness.
Karp introduced the now standard methodology for proving problems to be NP-complete, which has led to the identification of many theoretical and practical problems as being computationally difficult.

E. Torricelli
In geometry, the Fermat point of a triangle, also called the Torricelli point, is a point such that the total distance from the three vertices of the triangle to the point is the minimum possible. The Fermat point of a triangle with largest angle at most 120° is simply its first isogonic center, X(13), which is constructed as follows:
1. Construct an equilateral triangle on each of two arbitrarily chosen sides of the given triangle.
2. Draw a line from each new vertex to the opposite vertex of the original triangle.
3. The two lines intersect at the Fermat point.

P. Miliotis
Miliotis developed the first completely automatic procedure for solving the symmetric TSP, using the 2-matching lower bound algorithm.
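Kruskal's edge-ordering procedure described above can be sketched with a union-find structure; the 4-vertex graph and its weights below are illustrative:

```python
def kruskal_mst(n, edges):
    # edges: list of (weight, u, v) with vertices labeled 0..n-1.
    # Sort edges by weight and add each edge unless it would close a
    # cycle, detected via a union-find (disjoint-set) structure.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:              # u and v lie in different components
            parent[ru] = rv       # union the two components
            mst.append((u, v, w))
            total += w
    return total, mst

# Illustrative 4-vertex weighted graph.
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
total, mst = kruskal_mst(4, edges)
# The MST takes edges of weight 1, 2, 3, so total = 6 with 3 edges.
```

If the input graph is disconnected, the same loop naturally returns a minimum spanning forest, one tree per component, exactly as the text notes.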
1 The Subgraph Bisimulation Problem

Agostino Dovier, Carla Piazza

Abstract. We study the complexity of the Subgraph Bisimulation Problem, which stands to Graph Bisimulation as Subgraph Isomorphism stands to Graph Isomorphism, and we prove its NP-completeness. Our analysis is motivated by its applications to semistructured databases.

Keywords: Bisimulation, Complexity, Semistructured Data

I. INTRODUCTION

Graph and Subgraph Isomorphism are two basic algorithmic problems [6]. Although the latter is NP-complete, the lower bound for the time complexity of the former is still an open and very attractive issue. Bisimulation, a relation weaker than isomorphism, emerged as a fundamental property in various areas of Computer Science [1], [8]. Polynomial-time procedures can be used to check whether two distinct graphs are bisimilar [9]. The Subgraph Bisimulation problem consists in identifying a subgraph G2' of a graph G2 bisimilar to a given graph G1.

The graphical query language G-log [10] uses this notion for retrieving data from semistructured information [4]. Data retrieval in the languages UnQL [2] and Graphlog [3] can be implemented on the basis of this notion, as shown in [5]. Relationships between web-like databases and hypersets (where bisimulation is used for testing equivalence) are pointed out in [7]. We prove that the Subgraph Bisimulation problem is NP-complete. As a consequence, data retrieval based on this notion in its generality is not feasible. However, a class of queries that allows polynomial-time data retrieval based on Subgraph Bisimulation is shown in [5].

A. Dovier is with the Dip. di Matematica e Informatica, Univ. di Udine, Via delle Scienze 206, 33100 Udine (Italy). email: dovier@dimi.uniud.it
C. Piazza is with the Dip. di Informatica, Univ. Ca' Foscari di Venezia, Via Torino 155, 30173 Mestre-Venezia (Italy). email: piazza@dsi.unive.it

II. BASIC DEFINITIONS AND RESULTS

A directed graph (graph) is a pair G = ⟨N, E⟩, where N is the set of nodes and E ⊆ N × N is the set of edges. G1 = ⟨N1, E1⟩ is a subgraph of G2 = ⟨N2, E2⟩
if N1 ⊆ N2 and E1 ⊆ E2. G1 = ⟨N1, E1⟩ and G2 = ⟨N2, E2⟩ are isomorphic (G1 ≡ G2) if there exists a bijection f : N1 → N2 such that ⟨u1, v1⟩ ∈ E1 ⇔ ⟨f(u1), f(v1)⟩ ∈ E2. The Graph Isomorphism problem GI(G1, G2) amounts to determining whether G1 ≡ G2; the Subgraph Isomorphism problem SI(G1, G2) consists in deciding whether there exists a subgraph G2' of G2 such that G1 ≡ G2'.

A bisimulation [1] between G1 = ⟨N1, E1⟩ and G2 = ⟨N2, E2⟩ is a relation b ⊆ N1 × N2 such that:
(1) u1 ∈ N1 ⇒ ∃u2 ∈ N2 (u1 b u2);
(2) u2 ∈ N2 ⇒ ∃u1 ∈ N1 (u1 b u2);
(3) u1 b u2 ∧ ⟨u1, v1⟩ ∈ E1 ⇒ ∃v2 ∈ N2 (v1 b v2 ∧ ⟨u2, v2⟩ ∈ E2);
(4) u1 b u2 ∧ ⟨u2, v2⟩ ∈ E2 ⇒ ∃v1 ∈ N1 (v1 b v2 ∧ ⟨u1, v1⟩ ∈ E1).
If there is a bisimulation between G1 and G2, we say that G1 is bisimilar to G2 (G1 ∼ G2).¹ The Graph Bisimulation problem GB(G1, G2) amounts to determining whether G1 ∼ G2; the Subgraph Bisimulation problem SB(G1, G2) consists in deciding whether there exists a subgraph G2' of G2 such that G1 ∼ G2'.

The problem GB(G1, G2) (resp., GI(G1, G2)) and the problem SB(G1, G2) (resp., SI(G1, G2)) are different. However, each graph isomorphism is a bisimulation. In Figure 1 a bisimulation between two non-isomorphic graphs is shown.

[Figure 1 omitted: two five-node graphs, with nodes u1, ..., u5 and v1, ..., v5.]
Fig. 1. b = {⟨u1, v1⟩, ⟨u2, v2⟩, ⟨u2, v3⟩, ⟨u3, v4⟩, ⟨u4, v4⟩, ⟨u5, v5⟩} is a bisimulation between the two graphs.

The size of an instance of each one of the problems is the sum of the numbers of nodes and edges of G1 and G2. The polynomiality of the GB problem follows from [9], where it is shown how to find the maximum bisimulation contraction in time O(|E| log |N|).

¹ In [8] the notion is given on labeled graphs. The unlabeled notion of [1] is the particular case in which all edges have the same label. Thus, NP-hardness of the unlabeled version implies NP-hardness of the labeled one. However, it can be shown that the labeled problem can be polynomially reduced to the unlabeled one.

III. COMPLEXITY RESULTS

SB(G1, G2) is in NP, since G2' ∼ G1 can be verified in polynomial time [9]. We
prove its NP-hardness by reducing the NP-complete Directed Hamilton Path (HP) problem² ([6], p. 60) to SB.

² HP(G) is the problem: given a graph G = ⟨N, E⟩, is there a path that visits each node of N exactly once?

Let n ∈ N; Cn = ⟨N, E⟩ is an n-chain if N = {x1, ..., xn} and E = {⟨x_{i+1}, x_i⟩ : 1 ≤ i < n}.

Lemma 1: (i) If G = ⟨N, E⟩ ∼ Cn, then |N| ≥ n. (ii) If G ∼ Cn and |N| = n, then G ≡ Cn.

Proof: Property (i) can be proved by induction on n ≥ 1. For (ii), let Cn = ⟨{1, ..., n}, {⟨i+1, i⟩ : 1 ≤ i < n}⟩ and let b be a bisimulation between G and Cn. We prove that b is a bijection from N to {1, ..., n}. Condition (1) states that b is defined on all nodes of N. We prove that b is a function. By contradiction, let ⟨i, j⟩ be the minimum pair (w.r.t. lexicographic order) such that

∃y ∈ N (y b i ∧ y b j ∧ i ≠ j).   (5)

If y has no outgoing edges, then by (4) i and j have no outgoing edges, i.e. i = j = 1: a contradiction. Otherwise, let ⟨y, z⟩ ∈ E. By (3), there are nodes i', j' of Cn such that ⟨i, i'⟩ and ⟨j, j'⟩ are edges of Cn, z b i', and z b j'. The form of Cn, however, implies that i' = i − 1 and j' = j − 1; in particular, i ≠ j gives i' ≠ j'. Hence ⟨i, j⟩ is not the minimum pair satisfying property (5): a contradiction. Thus b is a function, and from (2) we know that b(N) = {1, ..., n}, namely b is surjective. Since, by hypothesis and by the previous point, |N| = n = |b(N)|, b is also injective, and thus it is a bijection.

[Figure 2 omitted: three five-node graphs.]
Fig. 2. From left to right: a graph G that admits a Hamilton path ("thick" edges); a graph G that does not admit Hamilton paths; and C5, a 5-chain (which trivially is a Hamilton path). Each Hamilton path on a 5-node graph is isomorphic (hence bisimilar) to the 5-chain C5.

Theorem 1: The Subgraph Bisimulation problem is NP-complete.

Proof: It remains to prove the NP-hardness of the problem. We reduce the HP problem to it. Let G = ⟨N, E⟩; we claim that HP(G) is equivalent to SB(Cn, G), with Cn an n-chain and n = |N| (see also Fig. 2). Assume that G admits the Hamilton path v_n → v_{n−1} → ··· → v_1. By definition, the v_i's are pairwise distinct. Each edge occurring in the path occurs
exactly once (otherwise, some node would be repeated in the path). The path is a subgraph of G isomorphic (and hence bisimilar) to Cn. Conversely, assume that there is a subgraph G' of G bisimilar to Cn. By Lemma 1(i), G' has at least n nodes; thus, being a subgraph of G, it has exactly n nodes. By Lemma 1(ii), G' ≡ Cn, i.e. it is an n-chain describing a Hamilton path. The reduction is trivially computable in deterministic O(log n) space.

Acknowledgements. We thank Elisa Quintarelli for the useful discussions concerning this work.

REFERENCES
[1] P. Aczel. Non-well-founded Sets. Vol. 14 of CSLI Lecture Notes, CSLI, Stanford, 1988.
[2] P. Buneman, S. B. Davidson, G. G. Hillebrand, and D. Suciu. A Query Language and Optimization Techniques for Unstructured Data. In Proc. of the 1996 ACM SIGMOD, pp. 505–516, 1996.
[3] M. P. Consens and A. O. Mendelzon. Graphlog: a Visual Formalism for Real Life Recursion. In Proc. of 9th ACM PODS '90, pp. 404–416, 1990.
[4] A. Cortesi, A. Dovier, E. Quintarelli, and L. Tanca. Operational and Abstract Semantics of the Query Language G-Log. Theoretical Computer Science, 275(1–2):521–560, 2002.
[5] A. Dovier and E. Quintarelli. Model-Checking Based Data Retrieval. In Proc. of Database Programming Languages, 8th Int. Workshop, LNCS Vol. 2397, pp. 62–77, 2002.
[6] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, New York, 1979.
[7] A. Lisitsa and V. Sazonov. Bounded Hyperset Theory and Web-like Data Bases. In Proc. of 5th Kurt Gödel Colloquium, LNCS Vol. 1289, pp. 172–185, 1997.
[8] R. Milner. Operational and Algebraic Semantics of Concurrent Processes. In Handbook of Theoretical Computer Science, chapter 19. Elsevier Science, 1990.
[9] R. Paige and R. E. Tarjan. Three Partition Refinement Algorithms. SIAM Journal on Computing, 16(6):973–989, 1987.
[10] J. Paredaens, P. Peelman, and L. Tanca. G-Log: A Declarative Graphical Query Language. IEEE Trans. on Knowledge and Data Engineering, 7:436–453, 1995.
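The definitions and the reduction above can be exercised on small instances. The following Python sketch is not part of the paper: it decides graph bisimilarity by a naive greatest-fixpoint refinement of conditions (3)-(4) followed by a totality check for (1)-(2) (not the O(|E| log |N|) algorithm of [9]), and it decides SB by exhaustive search over all subgraphs, which takes exponential time, as one expects for an NP-complete problem. The two example graphs at the bottom are hypothetical and are not the graphs of Fig. 2.

```python
from itertools import combinations

def bisimilar(n1, e1, n2, e2):
    """Decide G1 ~ G2 for G1 = (n1, e1), G2 = (n2, e2).

    Start from the full relation N1 x N2 and delete pairs violating
    conditions (3)-(4).  The result is the maximal relation closed
    under (3)-(4); a bisimulation in the sense of (1)-(4) exists iff
    that maximal relation is total on both node sets.
    """
    succ1 = {u: [v for (x, v) in e1 if x == u] for u in n1}
    succ2 = {u: [v for (x, v) in e2 if x == u] for u in n2}
    b = {(u1, u2) for u1 in n1 for u2 in n2}
    changed = True
    while changed:
        changed = False
        for (u1, u2) in list(b):
            ok = (all(any((v1, v2) in b for v2 in succ2[u2]) for v1 in succ1[u1])
                  and all(any((v1, v2) in b for v1 in succ1[u1]) for v2 in succ2[u2]))
            if not ok:
                b.discard((u1, u2))
                changed = True
    # conditions (1) and (2): totality on both sides
    return (all(any((u1, u2) in b for u2 in n2) for u1 in n1)
            and all(any((u1, u2) in b for u1 in n1) for u2 in n2))

def subgraph_bisimilar(n1, e1, n2, e2):
    """SB(G1, G2): is some subgraph of G2 (any node subset together
    with any subset of the induced edges) bisimilar to G1?"""
    nodes2 = list(n2)
    for k in range(len(nodes2) + 1):
        for ns in combinations(nodes2, k):
            cand = [(u, v) for (u, v) in e2 if u in ns and v in ns]
            for m in range(len(cand) + 1):
                for es in combinations(cand, m):
                    if bisimilar(n1, e1, set(ns), list(es)):
                        return True
    return False

def chain(n):
    """The n-chain Cn: nodes {1, ..., n}, edges <i+1, i> for 1 <= i < n."""
    return set(range(1, n + 1)), [(i + 1, i) for i in range(1, n)]

c5_nodes, c5_edges = chain(5)
# Hypothetical 5-node host graphs (not those of Fig. 2):
cycle = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]   # admits a Hamilton path
star = [(1, 2), (1, 3), (1, 4), (1, 5)]            # admits no Hamilton path
print(subgraph_bisimilar(c5_nodes, c5_edges, set(range(1, 6)), cycle))  # True
print(subgraph_bisimilar(c5_nodes, c5_edges, set(range(1, 6)), star))   # False
```

Consistently with Theorem 1, SB(C5, G) answers the Hamilton-path question for both hosts: the directed 5-cycle contains the path 1 → 2 → 3 → 4 → 5 (a subgraph isomorphic, hence bisimilar, to C5), while every subgraph of the star has paths of length at most 1, so by Lemma 1 none can be bisimilar to C5.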