Uniform Bounds for the Least Almost-Prime Primitive Root
$$ G_A(u; f) = \sum_{i=1}^{M} a_i \,\lVert L_i u - f_i \rVert_{0,A}^{2} $$
Abstract. We establish an a-posteriori error estimate, with corresponding bounds, that is valid for any FOSLS L2-minimization problem. Such estimates follow almost immediately from the FOSLS formulation, but they are usually difficult to establish for other methodologies. We present some numerical examples to support our theoretical results. We also establish a local a-priori lower error bound that is useful for indicating when refinement is necessary and for determining the initial grid. Finally, we obtain a sharp theoretical error estimate under certain assumptions on the refinement region and show how this provides the basis for an effective refinement strategy. The local a-priori lower error bound and the sharp theoretical error estimate both appear to be unique to the least-squares approach.
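To make the refinement loop implied by the abstract concrete, here is a minimal Python sketch of how element-wise values of a FOSLS functional could serve as a-posteriori error indicators. The function names, the weighting, and the fixed-fraction marking rule are illustrative assumptions, not the estimator analyzed in the paper; `residual_norms` stands for hypothetical precomputed element-local residual norms.

```python
import numpy as np

def fosls_error_indicators(elements, residual_norms, weights=None):
    """Per-element a-posteriori error indicators from a FOSLS functional.

    residual_norms[e][i] is a hypothetical precomputed value of
    ||L_i u_h - f_i|| restricted to element e; the local indicator is the
    weighted sum of the squared residual norms on that element, and the
    global functional is the sum of the local indicators.
    """
    residual_norms = np.asarray(residual_norms, dtype=float)
    if weights is None:
        weights = np.ones(residual_norms.shape[1])
    indicators = residual_norms**2 @ np.asarray(weights, dtype=float)
    return dict(zip(elements, indicators))

def mark_for_refinement(indicators, fraction=0.3):
    """Mark the elements carrying the largest estimated error."""
    ranked = sorted(indicators, key=indicators.get, reverse=True)
    n_marked = max(1, int(fraction * len(ranked)))
    return ranked[:n_marked]

# Hypothetical usage: three elements, two first-order equations.
elems = ["K1", "K2", "K3"]
norms = [[0.10, 0.02], [0.50, 0.40], [0.05, 0.01]]
eta = fosls_error_indicators(elems, norms)
print(mark_for_refinement(eta))   # expected: ['K2']
```

The point of the sketch is only that the quantity being minimized is itself computable element by element, so it can double as a refinement indicator without any auxiliary estimator.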
Invited Lecture
8.4 ER-schemes
9 Conclusions and further research
We have introduced the use of translation schemes into database design theory. We have shown how they capture disparate notions such as information preservation and dependency preservation in a uniform way. We have shown how they relate to normal form theory and have stated what we think to be the Fundamental Problem of Database Design. Several resulting research problems have been explicitly stated in the paper. We have shown that the Embedded Implicational Dependencies are all needed when we deal with stepwise refinements of database schemes specified by Functional and Inclusion Dependencies. As the material presented grew slowly while teaching database theory, its foundational and didactic merits should not be underestimated. Over the years our students of the advanced database theory course confirmed our view that traditional database design lacks coherence and that this approach makes many issues accessible to deeper understanding. Our approach via dependency-preserving translation-refinements can be extended to a full-fledged design theory for Entity-Relationship design or, equivalently, for database schemes in ER-normal form, cf. [MR92]. It is also the appropriate framework to compare transformations of ER-schemes and to address the Fundamental Problem of ER-Database Design. Translation schemes can also be used to deal with views and view updates, as views are special cases of translation schemes. The theory of complementary views from [BS81] can be rephrased elegantly in this framework. It is connected with the notion of translation schemes invariant under a relation and implicit definability, [Kol90]. Order-invariant translation schemes play an important role in descriptive complexity theory, [Daw93] and [Mak94]. The theory of independent complementary views of [KU84] exhibits some severe limitations on the applicability of [BS81]. In spite of these limitations it seems worthwhile to explore further the connection between independent views and transformations invariant under certain relations. The latter two applications are currently being developed by the authors and their students and will be included in the full paper.
IEEE Transactions on Automatic Control
To date, bounds on performance measures for adaptive controllers which include control effort terms have remained elusive in the adaptive control literature. This paper presents the first bound on L2 control and state performance measures. The only related literature is [6], where ideas from inverse optimality are used. In that paper, a special adaptive controller is constructed for strict feedback systems for which a 'meaningful' cost functional can be exhibited with respect to which the controller can be shown to be optimal. However, the cost functionals constructed depend in a complicated fashion on the state, control and parameter estimate, and the interpretation of such performance measures is not clear. In this paper we fix the performance measure a-priori, and then produce a bound. Thus these results are interpretable although not optimal. It should be noted that there is a richer literature which concerns only the transient state performance, see e.g. [5]; for results in the presence of disturbances see e.g. [12].
Abstract
Hyper-Polynomial Hierarchies and the NP-Jump

Stephen Fenner (University of Southern Maine), Steven Homer (Boston University), Randall Pruim (Boston University / Calvin College), Marcus Schaefer (University of Chicago)

December 19, 1997

Abstract. Assuming that the polynomial hierarchy (PH) does not collapse, we show the existence of ascending sequences of ptime Turing degrees of length ω₁^CK, all of which are in PSPACE and uniformly hard for PH, such that successors are NP-jumps of their predecessors. This is analogous to the hyperarithmetic hierarchy, which is defined similarly but with the (computable) Turing degrees. The lack of uniform least upper bounds for ascending sequences of ptime degrees causes the limit levels of our hyper-polynomial hierarchy to be inherently non-canonical. This problem is investigated in depth, and various possible structures for hyper-polynomial hierarchies are explicated, as are properties of the NP-jump operator on the languages which are in PSPACE but not in PH.

1 Introduction

Since its definition in 1976 [Sto77], the polynomial hierarchy has been used to classify and measure the complexity of infeasible combinatorial problems. It has been hugely successful in this capacity, providing the main framework for complexity classes above polynomial time within which most subsequent complexity theory has taken place. The classes in this hierarchy, particularly in the first few levels of the hierarchy, have been studied extensively and their structure carefully examined. In this paper we consider extensions of the polynomial hierarchy into extended hierarchies, all lying within PSPACE. Our aim is to provide tools for a further understanding of many complex and interesting problems which lie just outside PH, as well as to gain further understanding of the intricacies of ptime reductions, degrees and the NP-jump operator. The NP-jump has proven to be a fundamental and useful tool in complexity theory. It is the central concept in the definition of the high and low hierarchies. Its properties have recently been explored by Fenner [Fen95]. The polynomial time hierarchy was defined and motivated in analogy with the arithmetic hierarchy first studied by Stephen Kleene. The structure and many key properties of the classes in the polynomial hierarchy are similar to those in the arithmetic hierarchy. Furthermore, various concepts and definitions originating in the arithmetic hierarchy have been important in illuminating interesting aspects of the complexity theory of problems in the resource-bounded setting. For example, the alternating quantifier characterizations of the levels of both the arithmetic and polynomial hierarchies provide a simple and useful method for placing problems within levels of these hierarchies. One of the deepest and most elegant developments in this area of mathematical logic was the extension of the arithmetic hierarchy to the hyperarithmetic hierarchy by transfinite iteration of the Turing jump operator and the subsequent development by Kleene, Spector and others of the properties of this hierarchy. (See, for example, [Sac90], [Rog67].) Our work here intends to develop an analogous resource-bounded framework for problems lying within PSPACE and above PH. In this work we define and (under reasonable assumptions) prove the existence of hyper-polynomial hierarchies formed by transfinite iteration of the NP-jump operator and study their properties and the properties of the NP-jump operator in the realm between PH and PSPACE. Assuming the polynomial hierarchy is infinite, Ambos-Spies [AS89] has shown the existence of a rich, infinite partial order of degrees in PSPACE above PH. In this paper we extend his
techniques to define infinite,NP-jump-respecting hierar-chies of length(thefirst nonconstructive ordinal)in above, which naturally extend the polynomial hierarchy.This shows that if does not collapse then not only is there a rich and complex structure to the degrees in1,but that is in some sense“very far”from,since not even many-jumps suffice to get from to.We are hopeful that the classes of problems hard for levels of these hierarchies will also provide a new classification scheme for interesting hard combinatorial problems,such as the -complete languages,which lie in but above.The major technical obstacle encountered in proving the existence of an ex-tended polynomial hierarchy is the lack of uniform least upper bounds for ascend-ing sequences of ptime degrees.This fact was noted by Ambos-Spies[AS89],and makes the definition of our hierarchies non-canonical at limit levels,giving rise to several possibilities for the properties of the extended hierarchy.This situation is explored in depth here and various possible structures for the hyper-polynomial hierarchy are explicated.For example,under reasonable assumptions about the structure of uniformly hard sets for,we prove that there is a problem which is a uniform upper bound for but is not such a bound for any ptime non-constant alternation class.Such a problem would lie“just above”,and a careful ex-amination of the proof of Toda's Theorem[Tod91]indicates that the-complete languages mayfit this description.Outline.After providing the necessary background on constructive ordinals and uniform upper bounds in Section2,we constuct in Section3an infinite hi-erarchy of languages of length in above.This hierarchy is proper provided that doesn't collapse.In Sections4and5we investigate the extent to which such a hierarchy is or is not canonical by asking where within such a hierarchy can be placed.This investigation leads to the differentiation between two types of uniform upper bounds,slow and fast.Finally, in Section6we present some directions for further investigation.2PreliminariesWe identify,the natural numbers,with,the set of all binary strings,via the usual dyadic representation.We let be the empty string and denote bya standard ptime-computable,ptime-invertible bijection such that,and for all.Wefix a standard,acceptable enumeration of nondeterminis-tic oracle TMs,and a standard enumeration of deterministic ptime oracle TMs,where for each,enumerates the set of all oracle com-putations running in time for all oracles and inputs of length.Often we will abuse notation and associate with a set(language)its characteristic function. Thus if and only if.We write if and accesses the oracle only in a manner allowed by the2reduction type.For the most part we are interested in-and-reductions. As usual,is a standard acceptable enumeration of the computable partial functions(as in[Soa87]).Definition2.1For any set,we define,in the spirit of Balc´a zar,et al. [BDG88],has an acceptingpath of lengthWe call the-jump of.It is complete for under(unrelativized) -reductions.It is easy to check that lifts to a well-defined operator on the degrees. 
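The preliminaries above fix an identification of the natural numbers with the set of all binary strings via the usual dyadic representation. As a small illustration, here is one standard way to realize that bijection in Python; the paper only asserts that some ptime-computable, ptime-invertible correspondence is fixed, so the particular convention below (ordering strings as the empty string, 0, 1, 00, 01, ...) is an assumption made for concreteness.

```python
def nat_to_string(n: int) -> str:
    """Dyadic representation: 0 -> '', 1 -> '0', 2 -> '1', 3 -> '00', ..."""
    # Write n+1 in binary and drop the leading 1.
    return bin(n + 1)[3:]

def string_to_nat(s: str) -> int:
    """Inverse of nat_to_string: prepend a 1 and read the result in binary."""
    return int("1" + s, 2) - 1

# Round-trip check over a small range.
assert all(string_to_nat(nat_to_string(n)) == n for n in range(100))
print([nat_to_string(n) for n in range(7)])  # ['', '0', '1', '00', '01', '10', '11']
```

Both directions are clearly computable in polynomial time, which is all the later constructions need from the encoding.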
We denote by the-fold iteration of.Let be a ptime alternating oracle Turing machine such that for all and,,the canonical -complete set,and write for.We assume without loss of generality that has been chosen so that for any oracle,only makes oracle queries of length.2.1Kleene'sHere we give a brief definition of Kleene's partial order,,of all notations for constructive ordinals.Here and is a binary relation on.The information in this section comes chiefly from Sacks[Sac90],but see also[Rog67]. Our development is slightly different from,but entirely isomorphic to,Kleene's original definition.Define and.We define by transfinite induction.It is the least partial order such that the following hold for all :1..2.If,then and.3.If is total,,and,then and for all.4.is transitive.It can be shown that is well-founded,and hence functions with domain can be defined by transfinite recursion.For all we define,the unique ordinal for which is a notation,in this way:1..32.If,then.3.If,then.Each element of is the notation for a constructive ordinal,and each constructive ordinal has at least one(but usually more than one)notation.Also,if,then ,but not conversely.The set of all constructive ordinals is,which is the least non-constructive ordinal,and is countable.The structure of is a tree,where(infinite)branching occurs at every limit level.Some branches(maximal linearly ordered subsets)peter out well be-fore reaching height(in fact,there are branches of height only),but some branches do reach height.The most important fact about is that one can construct objects via“effective transfinite recursion”up to by using notations from.We will do just that in Section3,where we define sets in for all such that implies,and.(This last inequality assumes that is infinite.)This mirrors the classical construction of the hyperarithmetic hierarchy.2.2Uniform Upper Bounds and Padding ArraysIn computability theory,it is a simple matter to define a canonical join of a uni-vormly enumerable sequence of sets which is the least uniform-upper bound (in fact,the least uniform-upper bound)for the sequence.In complexity the-ory this is not possible,since there is no least uniform-upper bound[AS89], [Lad75].Furthermore,the most natural join operator has the unfortunate(for our purposes)property that the join of a collection consisting of a complete language for each level of is-complete.In our case we are interested in un-derstanding the problems which lie between and and we would like the join to be as close to as possible.Therefore,we must work instead with uniform upper bounds,defined below,which correspond to possible choices for a nicely behaved join operator.Definition2.2Given a countable collection of languages,a uniform-upper bound for is a language such that there is a computable function with the property that for all,.We are primarily interested in uniform upper bounds for and similar classes.A uniform upper bound for is a uniform upper bound for. 
Since for any,is computably enumerable in,it also makes sense to talk about uniform upper bounds for.4Definition2.3For any computable function and any countable collection of languages,the padding array for via is the language defined byTwo types of padding arrays are of special interest.1.If for every,is monotone non-decreasing and is ptimecomputable,and for every is monotone non-decreasing,then we say that is a ptime padding array via.2.If in addition,there are constants and such that for all,then we say that is a ptime padding array of degree via.A ptime padding array for is a ptime padding array for.As the following lemma shows,padding arrays are particularly nice uniform upper bounds.Lemma2.4If is a ptime padding array for via then is a uniform-upper bound for.Proof.The map is a many-one reduction from to.As a partial converse to the result above we have the following lemma.Definition2.5A function is nice if is monotone non-decreasing,unbounded, and can be computed in steps.Lemma2.6Let be a uniform-upper bound for via.If is the ptime padding array for via,and is a nice function such that for all halts in fewer than steps,then.Proof.We describe a reduction from to.On input:1.If,then.This can be determined in polynomial timebecause is nice.2.If,then compute.Since is greater thanthe number of steps required to compute,this can also be done in polynomial time.5Thus,in particular,every ptime padding array is a uniform-upper bound for,and every set which is a uniform-upper bound for is-above some ptime padding array for.3A Hyper-Polynomial HierarchyWe now come to the construction of an extended polynomial hierarchy.We show how to“embed”into in such a way that successors correspond to-jumps and limits to uniform upper bounds.We will call such an embedding a hyper-polynomial hierarchy,or-.More formally,we construct a set such that if denotes,then satisfies the following properties.(P1).(P2).(P3)for all.(P4)For any with,is a uniform upper bound for.(P5)If is infinite,then for any,. Although is defined here for all,we are really only interested in when.will be constructed by transfinite induction over.We will say that is universal for this hyper-polynomial hierarchy.To ensure the last two properties,we build so that is infinite. 
At the same time,we must code into all for.We do the latter by making a uniform upper bound of,which we do by making a ptime padding array for.Now to get to separate over we delay coding each into until we notice that some designated-reduction fails to reduce some-level of the hierarchy over to the previous level(say,the st to the th).If we can do this for all and,we are done.Assuming by transfinite induction that separates for all,this can be accomplished by delayed diagonalization.We are guaranteed to kill off ourreduction just by waiting long enough before coding each level:will “look like”and since,will eventually make a mistake.The particular“delayed diagonalization”strategy employed here is simi-lar to those used by Ambos-Spies[AS89],which in turn are based on well-known techniques of Ladner[Lad75].We now define formally by simultaneous transfinite induction over and length-decreasing recursion.In what follows,are arbitrary and is a fixed ptime alternating oracle Turing machine such that for all and,,the canonical-complete set.We write for and assume without loss of generality that has been chosen so that for any oracle,only makes oracle queries of length.The limit case is as explained above.We need to perform the diagonalization via a look-back technique in order to keep in—this explains the stringent bounds on ,,and in3(b).1.(thus).2.(thus).3.,unless is of the form,where is least(if it exists)such that(a)all halt in a combined total of steps,and(b)there is“sufficient evidence”thatWe will say there is sufficient evidence ifsuch thatIf such a least exists and for some,then we letIn this way,will be a padded version of.It is important to observe that the value of is3(b)above only depends on and not on.7Theorem3.1The set satisfies properties(P1)–(P5)listed above.Proof.It is not too difficult to see that:the recursive aspects of the definition are all length-decreasing—due to the stringent bounds on,,and in3(b)—and the rest of the algorithm clearly needs no more than a polynomialamount of space.Properties(P2)and(P3)are also clearly satisfied.We prove properties(P4)and(P5)simultaneously by induction over. 
Actually,we need to prove a stronger property than(P5),namely:(P5a)If is infinite,then is infinite.Choose an arbitrary and assume that properties(P4)and(P5a)hold for all with.There are three cases:Case1:.Property(P4)holds vacuously.Since,property(P5a) holds.Case2:.Again,property(P4)holds vacuously for.Since is infinite and,clearly is infinite as well.Case3:.Note that is total.For property(P4),we will only show that is a uniform upper bound for.This suffices,be-cause for any,there is a such that and henceand one could furthermorefind such a reduction effectively in and, using certain basic facts about.Wefirst show that for every,the mentioned in case(3)of the definition ofmust always exist.Assuming otherwise,let be least such that no such corresponding exists.Then for all and all, so we never code into.This makes,or if then.Now by our inductive hypothesis,is infinite,so in particular,for all,there is a such thatand hence must exist by case3(b)in the definition of.The fact that exists for all immediately implies thatis infinite,via an argument similar to the one just given,and8for all,a padded version of is coded into,and thusvia the mapping,where is the corresponding to.This concludes the proof.4Uniform Upper Bounds forIn this section and the next we investigate the extent to which the construction of extended hierarchies in Section3is canonical.Since uniform upper bounds can be thought of as non-canonical joins,it is inevitable that,at least level by level,such a hierarchy cannot be canonically defined.As we shall see,the fact that we were able to use ptime padding arrays of bounded degree(in fact,degree0)for all of the uniform upper bounds in our construction allows for considerable manipulation of the structure of our extended hierarchies.4.1Quick Uniform Upper BoundsThe observation that all of the uniform upper bounds in our construction in the previous section were actually ptime padding arrays of degree0prompts the fol-lowing definition.We will call a uniform-upper bound forquick if there is a polynomial such that for each there is a-reduction which runs in time.A uniform-upper bound which is not quick is slow.All ptime padding arrays of bounded degree are quick uniform upper bounds.Quick uniform upper bounds for can also be characterized in terms of alternating time,as defined in[CKS81].For this we define the class to consist of all languages accepted by an alternating Turing machine in polynomial time and at most alternations,beginning in an existencial state.(Note that this really is alternations and not.)Lemma4.1A set is a quick uniform-upper bound for if and only if it is -hard for for some nice.Proof.If is a quick uniform-upper bound for,then there is a com-putable function and a polynomial such that in time. 
Let be the largest such that all of can be computed in less than steps.Then fulfills the conditions.9For the other direction let be-hard for for some as above.Con-sider the set for some., so.On the other hand via a reduction that can be found effectively.The following theorem says that quick uniform upper bounds for cannot be“just above”.Theorem4.2If is a quick uniform upper bound for,then there is a ptime padding array(hence also a uniform upper bound for)such that.The proof of Theorem4.2makes use of the following lemma.Lemma4.3If and are ptime padding arrays for via nice functions and respectively and there is some polynomial such thatthen.Note that for any,is a polynomial in of the same degree as.So to satisfy the requirements of the lemma,must be a ptime padding array of bounded degree.Thus,we have the theorem only for quick uniform upper bounds for.It remains open if there are slow uniform upper bounds for which cannot compute the jump of any other(necessarily slow)uniform upper bound for.Before proceeding to the proofs of Theorem4.2and Lemma4.3,we give a couple of examples.1.If is a padding array for via or,where is a constant,then,so is afixed point of the-jump.2.If is a bounded degree ptime padding array for via,then the ptimepadding array for via satisfies. Proof of Theorem4.2.If is a quick uniform upper bound for then there is a bounded degree ptime padding array for,such that.By the Lemma4.3there is another ptime padding array for,,such that .10Proof of Lemma4.3.We describe an algorithm for a reduction. Remember that accepts in steps.For this proof we will use the fact that-QBF is a true formula. WLOG we can assume that,the encoding of the formula as a binary string satisfies.Algorithm for:1.On input,first determine,,and such that.If none suchexist,then,so can be chosen to be somefixed element of .2.We need tofind a QBF,such that accepts if and only if is true.Furthermore,we must in time polynomial in be able to produce a string where is coded into.For this we need the following observations about the queries made during the computation(along any path)..Therefore,if makes queries to any strings,then.If does not have the form,where is a formula and,then we know that,so we could modify so that it answers such queries“internally”(via a syntax check)without actuallyquerying the oracle.If,then we will say that is a query of typeabout.Note that,since.Let be the query of type such that is maximal among thequeries.Then all of the queries in the simulation of not handledby the syntax check above are about formulas with codes of length .ing the observations above,we see that determining whether ac-cepts is equivalent to a formula of the formwhere codes a path in the non-deterministic computation tree,codes a sequence of queries to B,codes answer bits to those queries,and is a polynomial time predicate which checks that along path,makes queries to if the answers(to previous queries)are and halts in an accept-ing configuration after at most steps.11By the comments above,each predicate is,so that -QBF accepts.4.Finally,.Notice that and that,since there was a query of type.So,where is the degree of.Therefore,can be computed in time polynomial in.4.2Slow Uniform Upper BoundsWe will now construct a set which is a slow uniform upper bound for,i.e. 
it is an upper bound for,but there is no uniform time bound on the reductions from each level of the hierarchy to.For this proof we need to assume more than that separates.We present one hypothesis which is sufficient;modifications are possible.This hypothesis is expressed in terms of a notion of subexponential advice and says roughly that no level of can be computed from a previous level,even with the additional aid of polynomial time reductions with subexponential advice.By increasing the padding,we can modify our construction of a-to include slow uniform upper bounds,if they exist.We begin by defining what we mean by subexponential advice.For this,we use definitions of advice classes and reductions which are based on oracle access to the advice string.This definition is equivalent to the usual one for defining common classes such as,,etc.The motivation for our definition comes from the observation that,in defining for some function class such that,the usual definition in terms of languages(see[BDG88])allows the accepting machine not only to get advice from the advice string,but may also provide it with greater resource bounds,since the length of the advice string counts toward the length of the input.Thus,for example,an advice string consisting solely of a super-polynomially long sequence of0's becomes potentially useful, not because it contains any information,but simply because of its length.Our definition avoids this problem.Definition4.4(Superpolynomial advice)Let be a class of functions.For any string,let be the smallestfinite set for which is an initial segment of the characteristic sequence i.e.,.12We write if there is a an oracle Turing machine which runs in polynomial time regardless of oracle and a function in such thatacceptsand write if.This is equivalent to saying that access to the advice comes by querying bits of the advice string,.Similar notions could be defined for other classes of Turing machines as well.For,letThe classesandhave their usual meanings,since using the advice oracle,one can calculate the ad-vice string as a preprocess,and using the advice string one can simulate the queries to the advice oracle by checking bits of the advice string.Similar statements can be made whenever the machines are capable of querying the entire advice oracle bit by bit.Also note that is the class of all languages,since we can use bits to code the membership of all the strings of length.We can now state a hypothesis sufficient to demonstrate the existence of slow uniform upper bounds for.Hypothesis.The polynomial hierarchy does not collapse under polynomial time reductions with subexponential advice in the following sense:Notice that Hypothesis is(a priori)slightly stronger than the statement that whereand13Theorem4.5If is nice,then contains a slow uniform-upper bound for ,unless fails.Corollary4.6If is nice,then contains a uniform-upper bound for that is not hard for any(with nice),unless fails.Proof.We define and verify the properties.From the definition of it is clear that is a uniform-upper bound for.If were quick,then we would haveWe show that this contradicts Hypothesis.So suppose is quick,andfix.LetSo we can use the oracle to decide.14Thus,can be simulated making use only of and.Furthermore,since implies that and,we can code the information about into a string of length.So from our assumption that is quick it follows thatand therefore that fails.To this point we have not concerned ourselves with the complexity of.The proof just given can,however,be improved to get the upper 
bound required by the statement of the theorem.Given any nice,we can obtain a slow uniform upper bound for which is in as follows.Let be the smallest for which .We will use for additional padding:Then if is a string of length in,we know that,and hence.Hence lies in.The rest of the proof needs only minor adjustments.This yields the result as stated.4.3Fixed Points of the NP-jumpIn constructing a-we must avoidfixed points of the-jump.We show that everyfixed point of the-jump is a uniform upper bound for and that fixed points exist which are very unlikely(even more unlikely than the examples after Lemma4.3)to be-complete.Thus it really is necessary to actively avoid them in our construction.Lemma4.7If then is a uniform-upper bound for(in fact for).Proof.Fix the reduction from to i.e..We assume that our enumeration of nondeterministic OTMs is nice in that the jump and composition of machines is effective,i.e.,that there are two computable functions and such thatif via,then via,andif via and via,then via.15Now by definition say via.Then we can prove by induction thating we can define a computable function such that via which proves that is a uniform upper bound of.Starting with instead of will yield the same result for(without making any additional assumptions).Remarks.1.If we start with the stronger assumption that,then will bea uniform upper bound for with regard to-reductions.The necessaryadjustments in the proof are straightforward.2.If is witnessed by a reduction which runs in linear time,thenthe uniformity in the above proof,together with the fact that andare also computable in linear time,yields that is hard foroperator and upper bounds,and(assuming is infinite)has no-jumpfixed points.Furthermore,it is evident from our construction of that is a quick uniform upper bound for every with.Section4showed how every quick uniform upper bound can compute the jump of another quick uniform upper bining these results it is possible to give further evidence of the richness of the quick uniform upper bounds with respect to the-jump.We iterate our construction in Section4to construct a proper,“upside-down”image of in the quick uniform upper bounds below any given quick uniform upper bound. That is,Theorem5.1Given any which is a quick uniform-upper bound for,there is a such that(letting)1.for all,is a quick uniform upper bound for and if sepa-rates,then separates,2.,3.for all,via a reduction found effectively in,4.for all such that,is a uniform lower bound for,and5.if is infinite,then for all,,so the embedding is properin the ptime degrees.Within this upside-down-,it is also possible to place a(right-side up) -starting with any and completely below all the for. 
Proof Sketch.We define a hierarchy of padding functions so that successor func-tions satisfy the conditions in Lemma4.3with respect to their predecessors,and limit functions dominate all previous padding functions.This by itself is straight-forward,but there are a few extra wrinkles that we must smooth out.All our padding functions must be nice,to satisfy the conditions in Lemma4.3,and must give rise to good sets(i.e.,sets over which separates)to ensure a properly descending hierarchy of degrees.Let be a quick uniform upper bound for andfix anand a computable such that for all,.By taking a padded version of and choosing a new,we can assume that.Hence, by Lemma2.6there is a computable such that,and this fact remains true if we increase the padding.Now let be least such thatfor all.This implies that,,and,where is the function corresponding to,c.f.17。
Chemical Contaminants in Food (JIFSAN)
Modeling Detected Values
• Non-detect values are removed from the data set
• Detected values are modeled with distributions
• Probability tree is used to decide which model provides …
• … uncertainty
• Food consumption and current practices to address uncertainty
• Conclusions
Uncertainty
The imperfect knowledge concerning the present or future state of an organism, system, or (sub)population under consideration.
Summary statistics used to describe the chemical concentration in foods.
Sources of Uncertainty in Chemical Concentration Data
Sources of uncertainty:
Non-Detects in Chemical Concentration
Current practices for addressing the uncertainty from non-detects:
• Substitution method
• Modeling detected values
(both practices are illustrated in the sketch below)
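A minimal sketch contrasting the two practices named above, under illustrative assumptions: non-detects are reported only as being below a limit of detection (LOD), the substitution rule uses LOD/2, and a lognormal model is fit to the detected values with SciPy. None of these choices is prescribed by the slides; the code only makes the two options concrete.

```python
import numpy as np
from scipy import stats

# Hypothetical concentration data (mg/kg); None marks a non-detect
# reported only as "< LOD".
lod = 0.05
raw = [0.12, None, 0.30, 0.08, None, 0.21, None, 0.15]

# Practice 1: substitution -- replace each non-detect with a fixed
# fraction of the LOD (LOD/2 is a common, but arbitrary, choice).
substituted = np.array([x if x is not None else lod / 2 for x in raw])
print("substitution mean:", substituted.mean())

# Practice 2: model only the detected values with a parametric
# distribution (a lognormal is used here purely as an illustration)
# and report summary statistics of the fitted model instead.
detects = np.array([x for x in raw if x is not None])
shape, loc, scale = stats.lognorm.fit(detects, floc=0)
fitted = stats.lognorm(shape, loc=loc, scale=scale)
print("fitted mean:", fitted.mean(), "fitted 95th percentile:", fitted.ppf(0.95))
```

The two routes generally give different summary statistics, which is exactly the source of uncertainty the slides discuss.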
Example: Characterization of Summary Statistics with Multiple Distribution Models
The Laplacian of a uniform hypergraph
The Laplacian of a Uniform Hypergraph∗

Shenglong Hu†, Liqun Qi‡

February 5, 2013

∗To appear in: Journal of Combinatorial Optimization.
†Email: Tim.Hu@connect.polyu.hk. Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong.
‡Email: maqilq@.hk. Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong. This author's work was supported by the Hong Kong Research Grant Council (Grant No. PolyU 501909, 502510, 502111 and 501212).

Abstract. In this paper, we investigate the Laplacian, i.e., the normalized Laplacian tensor of a k-uniform hypergraph. We show that the real parts of all the eigenvalues of the Laplacian are in the interval [0,2], and the real part is zero (respectively two) if and only if the eigenvalue is zero (respectively two). All the H+-eigenvalues of the Laplacian and all the smallest H+-eigenvalues of its sub-tensors are characterized through the spectral radii of some nonnegative tensors. All the H+-eigenvalues of the Laplacian that are less than one are completely characterized by the spectral components of the hypergraph and vice versa. The smallest H-eigenvalue, which is also an H+-eigenvalue, of the Laplacian is zero. When k is even, necessary and sufficient conditions for the largest H-eigenvalue of the Laplacian being two are given. If k is odd, then its largest H-eigenvalue is always strictly less than two. The largest H+-eigenvalue of the Laplacian for a hypergraph having at least one edge is one; and its nonnegative eigenvectors are in one-to-one correspondence with the flower hearts of the hypergraph. The second smallest H+-eigenvalue of the Laplacian is positive if and only if the hypergraph is connected. The number of connected components of a hypergraph is determined by the H+-geometric multiplicity of the zero H+-eigenvalue of the Laplacian.

Key words: Tensor, eigenvalue, hypergraph, Laplacian
MSC (2010): 05C65; 15A18

1 Introduction

In this paper, we establish some basic facts on the spectrum of the normalized Laplacian tensor of a uniform hypergraph. It is an analogue of the spectrum of the normalized Laplacian matrix of a graph [6]. This work is motivated by the recent rapid developments in both the spectral hypergraph theory [7,16,19–21,23,27,29,30,33–35] and the spectral theory of tensors [4,5,11,13–15,17,19–22,24–26,28,31,32,36]. The study of the Laplacian tensor for a uniform hypergraph was initiated by Hu and Qi [16]. The Laplacian tensor introduced there is based on the discretization of the higher order Laplace-Beltrami operator. Following this, Li, Qi and Yu proposed another definition of the Laplacian tensor [19]. Later, Xie and Chang introduced the signless Laplacian tensor for a uniform hypergraph [33,34]. All of these Laplacian tensors are in the spirit of the scheme of sums of powers. In formalism, they are not as simple as their matrix counterparts, which can be written as D − A or D + A with A the adjacency matrix and D the diagonal matrix of degrees of a graph. Also, this approach only works for even-order hypergraphs. Very recently, Qi [27] proposed a simple definition D − A for the Laplacian tensor and D + A for the signless Laplacian tensor. Here A = (a_{i1...ik}) is the adjacency tensor of a k-uniform hypergraph and D = (d_{i1...ik}) the diagonal tensor with its diagonal elements being the degrees of the vertices. This is a natural generalization of the definition of D − A and D + A in spectral graph theory [3]. The elements of the adjacency tensor, the Laplacian tensor and the signless Laplacian tensor are rational numbers. Some results were derived in
[27].More results are expected along this simple and natural approach.On the other hand,there is another approach in spectral graph theory for the Laplacian of a graph [6].Suppose that G is a graph without isolated vertices.Let the degree of vertex i be d i .The Laplacian,or the normalized Laplacian matrix,of G is defined as L =I −¯A ,where I is the identity matrix,¯A =(¯a ij )is the normalized adjacency matrix,¯a ij =1√d i d j ,if vertices i and j are connected,and ¯a ij =0otherwise.This approach involves irrational numbers in general.However,it is seen that λis an eigenvalue of the Laplacian L if and only if 1−λis an eigenvalue of the normalized adjacency matrix ¯A.A comprehensive theory was developed based upon this by Chung [6].In this paper,we will investigate the normalized Laplacian tensor approach.A formal definition of the normalized Laplacian tensor and the normalized adjacency tensor will be given in Definition 2.7.In the sequel,the normalized Laplacian tensor is simply called the Laplacian as in [6],and the normalized adjacency tensor simply as the adjacency tensor.In this paper,hypergraphs refer to k -uniform hypergraphs on n vertices.For a positive integer n ,we use the convention[n ]:={1,...,n }.Let G =(V,E )be a k -uniform hypergraph with vertex set V =[n ]and edge set E ,and d i be the degree of the vertex i .If k =2,then G is a graph.For a graph,let λ0≤λ1≤···≤λn −1be the eigenvalues of L in increasing order.The following results are fundamental in spectral graph theory [6,Section 1.3].(i)λ0=0andi ∈[n −1]λi ≤n with equality holding if and only if G has no isolated vertices.(ii)0≤λi ≤2for all i ∈[n −1],and λn −1=2if and only if a connected component of Gis bipartite and nontrivial.2(iii)The spectrum of a graph is the union of the spectra of its connected components. (iv)λi=0andλi+1>0if and only if G has exactly i+1connected components.I.Ourfirst major work is to show that the above results can be generalized to the Laplacian L of a uniform hypergraph.Let c(n,k):=n(k−1)n−1.For a k-th order n-dimensional tensor,there are exactly c(n,k)eigenvalues(with algebraic multiplicity)[13,24]. 
Letσ(L)be the spectrum of L(the set of eigenvalues,which is also called the spectrum of G).Then,we have the followings.(i)(Corollary3.2)The smallest H-eigenvalue of L is zero.(Proposition3.1)m(λ)λ≤c(n,k)with equality holding if and only if G has noλ∈σ(L)isolated vertices.Here m(λ)is the algebraic multiplicity ofλfor allλ∈σ(L).(ii)(Theorem3.1)For allλ∈σ(L),0≤Re(λ)with equality holding if and only ifλ=0;and Re(λ)≤2with equality holding if and only ifλ=2.(Corollary6.2)When k is odd,we have that Re(λ)<2for allλ∈σ(L).(Theorem6.2/Corollary6.5)When k is even,necessary and sufficient conditions are given for2being an eigenvalue/H-eigenvalue of L.(Corollary6.6)When k is even and G is k-partite,2is an eigenvalue of L.(iii)(Theorem3.1together with Lemmas2.1and3.3)Viewed as sets,the spectrum of G is the union of the spectra of its connected components.Viewed as multisets,an eigenvalue of a connected component with algebraic multiplic-ity w contributes to G as an eigenvalue with algebraic multiplicity w(k−1)n−s.Here s is the number of vertices of the connected component.(iv)(Corollaries3.2and4.1)Let all the H+-eigenvalues of L be ordered in increasing order asµ0≤µ1≤···≤µn(G)−1.Here n(G)is the number of H+-eigenvalues of L(with H+-geometric multiplicity),see Definition4.1.Thenµn(G)−1≤1with equality holding if and only if|E|>0.µ0=0;andµi−2=0andµi−1>0if and only if log2i is a positive integer and G has exactly log2i connected components.Thus,µ1>0if and only if G is connected.On top of these properties,we also show that the spectral radius of the adjacency tensor of a hypergraph with|E|>0is equal to one(Lemma3.2).The linear subspace generated by the nonnegative H-eigenvectors of the smallest H-eigenvalue of the Laplacian has dimension exactly the number of the connected components of the hypergraph(Lemma3.4).Equalities that the eigenvalues of the Laplacian should satisfy are given in Proposition3.1.The only two H+-eigenvalues of the Laplacian of a complete hypergraph are zero and one(Corollary 4.2).We give the H+-geometric multiplicities of the H+-eigenvalues zero and one of the Laplacian respectively in Lemma4.4and Proposition4.2.We show that when k is odd and G is connected,the H-eigenvector of L corresponding to the H-eigenvalue zero is unique3(Corollary6.4).The spectrum of the adjacency tensor is invariant under multiplication by any s-th root of unity,here s is the primitive index of the adjacency tensor(Corollary6.3). 
In particular,the spectrum of the adjacency tensor of a k-partite hypergraph is invariant under multiplication by any k-th root of unity(Corollary6.6).II.Our second major work is that we study the smallest H+-eigenvalues of the sub-tensors of the Laplacian.We give variational characterizations for these H+-eigenvalues(Lemma 5.1),and show that an H+-eigenvalue of the Laplacian is the smallest H+-eigenvalue of some sub-tensor of the Laplacian(Theorem4.1and(8)).Bounds for these H+-eigenvalues based on the degrees of the vertices and the second smallest H+-eigenvalue of the Laplacian are given respectively in Propositions5.1and5.2.We discuss the relations between these H+-eigenvalues and the edge connectivity(Proposition5.3)and the edge expansion(Proposition 5.5)of the hypergraph.III.Our third major work is that we introduce the concept of spectral components of a hypergraph and investigate their intrinsic roles in the structure of the spectrum of the hypergraph.We simply interpret the idea of the spectral componentfirst.Let G=(V,E)be a k-uniform hypergraph and S⊂V be nonempty and proper.The set of edges E(S,S c):={e∈E|e∩S=∅,e∩S c=∅}is the edge cut with respect to S.Unlike the graph counterpart,the number of intersections e∩S c may vary for different e∈E(S,S c). We say that E(S,S c)cuts S c with depth at least r≥1if|e∩S c|≥r for every e∈E(S,S c).A subset of V whose edge cut cuts its complement with depth at least two is closely related to an H+-eigenvalue of the Laplacian.These sets are spectral components(Definition2.5). With edge cuts of depth at least r,we define r-th depth edge expansion which generalizes the edge expansion for graphs(Definition5.1).Aflower heart of a hypergraph is also introduced (Definition2.6),which is related to the largest H+-eigenvalue of the Laplacian.We show that the spectral components characterize completely the H+-eigenvalues of the Laplacian that are less than one and vice verse,and theflower hearts are in one to one correspondence with the nonnegative eigenvectors of the H+-eigenvalue one(Theorem4.1). In general,the set of the H+-eigenvalues of the Laplacian is strictly contained in the set of the smallest H+-eigenvalues of its sub-tensors(Theorem4.1and Proposition4.1).We introduce H+-geometric multiplicity of an H+-eigenvalue.The second smallest H+-eigenvalue of the Laplacian is discussed,and a lower bound for it is given in Proposition5.2.Bounds are given for the r-th depth edge expansion based on the second smallest H+-eigenvalue of L for a connected hypergraph(Proposition5.4and Corollary5.5).For a connected hypergraph, necessary and sufficient conditions for the second smallest H+-eigenvalue of L being the largest H+-eigenvalue(i.e.,one)are given in Proposition4.3.The rest of this paper begins with some preliminaries in the next section.In Section2.1, the eigenvalues of tensors and some related concepts are reviewed.Some basic facts about the spectral theory of symmetric nonnegative tensors are presented in Section2.2.Some new observations are given.Some basic definitions on uniform hypergraphs are given in Section2.3.The spectral components and theflower hearts of a hypergraph are introduced.In Section3.1,some facts about the spectrum of the adjacency tensor are discussed.4Then some properties on the spectrum of the Laplacian are investigated in Section3.2. 
We characterize all the H+-eigenvalues of the Laplacian through the spectral components and theflower hearts of the hypergraph in Section4.1.In Section4.2,the H+-geometric multiplicity is introduced,and the second smallest H+-eigenvalue is explored.The smallest H+-eigenvalues of the sub-tensors of the Laplacian are discussed in Section 5.The variational characterizations of these eigenvalues are given in Section5.1.Then their connections to the edge connectivity and the edge expansion are discussed in Section5.2 and Section5.3respectively.The eigenvectors of the eigenvalues on the spectral circle of the adjacency tensor are characterized in Section6.1.It gives necessary and sufficient conditions under which the largest H-eigenvalue of the Laplacian is two.In Section6.2,we reformulate the above conditions in the language of linear algebra over modules and give necessary and sufficient conditions under which the eigenvector of an eigenvalue on the spectral circle of the adjacency tensor is unique.Somefinal remarks are made in the last section.2PreliminariesSome preliminaries on the eigenvalues and eigenvectors of tensors,the spectral theory of symmetric nonnegative tensors and basic concepts of uniform hypergraphs are presented in this section.2.1Eigenvalues of TensorsIn this subsection,some basic facts about eigenvalues and eigenvectors of tensors are re-viewed.For comprehensive references,see[13,24–26]and references therein.Let C(R)be thefield of complex(real)numbers and C n(R n)the n-dimensional complex(real)space.The nonnegative orthant of R n is denoted by R n+,the interior of R n+is denotedby R n++.For integers k≥3and n≥2,a real tensor T=(t i1...i k)of order k and dimension nrefers to a multiway array(also called hypermatrix)with entries t i1...i k such that t i1...i k∈Rfor all i j∈[n]and j∈[k].Tensors are always referred to k-th order real tensors in this paper,and the dimensions will be clear from the content.Given a vector x∈C n,definean n-dimensional vector T x k−1with its i-th element beingi2,...,i k∈[n]t ii2...i kx i2···x ikfor alli∈[n].Let I be the identity tensor of appropriate dimension,e.g.,i i1...i k =1if and only ifi1=···=i k∈[n],and zero otherwise when the dimension is n.The following definitions are introduced by Qi[24,27].Definition2.1Let T be a k-th order n-dimensional real tensor.For someλ∈C,if polynomial system(λI−T)x k−1=0has a solution x∈C n\{0},thenλis called an eigenvalue of the tensor T and x an eigenvector of T associated withλ.If an eigenvalue5λhas an eigenvector x ∈R n ,then λis called an H-eigenvalue and x an H-eigenvector.Ifx ∈R n +(R n ++),then λis called an H +-(H++-)eigenvalue.It is easy to see that an H-eigenvalue is real.We denote by σ(T )the set of all eigenval-ues of the tensor T .It is called the spectrum of T .We denoted by ρ(T )the maximum module of the eigenvalues of T .It is called the spectral radius of T .In the sequel,unlessstated otherwise,an eigenvector x would always refer to its normalization x k √ i ∈[n ]|x i|k.This convention does not introduce any ambiguities,since the eigenvector defining equations are homogeneous.Hence,when x ∈R n +,we always refer to x satisfying n i =1x k i =1.The algebraic multiplicity of an eigenvalue is defined as the multiplicity of this eigenvalue as a root of the characteristic polynomial χT (λ).To give the definition of the characteristic polynomial,the determinant or the resultant theory is needed.For the determinant theory of a tensor,see [13].For the resultant theory of polynomial equations,see 
[8,9].Definition 2.2Let T be a k -th order n -dimensional real tensor and λbe an indeterminate variable.The determinant Det (λI −T )of λI −T ,which is a polynomial in C [λ]and denoted by χT (λ),is called the characteristic polynomial of the tensor T .It is shown that σ(T )equals the set of roots of χT (λ),see [13,Theorem 2.3].If λis a root of χT (λ)of multiplicity s ,then we call s the algebraic multiplicity of the eigenvalue λ.Let c (n,k )=n (k −1)n −1.By [13,Theorem 2.3],χT (λ)is a monic polynomial of degree c (n,k ).Definition 2.3Let T be a k -th order n -dimensional real tensor and s ∈[n ].The k -th order s -dimensional tensor U with entries u i 1...i k =t j i 1...j i k for all i 1,...,i k ∈[s ]is called the sub-tensor of T associated to the subset S :={j 1,...,j s }.We usually denoted U as T (S ).For a subset S ⊆[n ],we denoted by |S |its cardinality.For x ∈C n ,x (S )is defined as an |S |-dimensional sub-vector of x with its entries being x i for i ∈S ,and sup(x ):={i ∈[n ]|x i =0}is its support .The following lemma follows from [13,Theorem 4.2].Lemma 2.1Let T be a k -th order n -dimensional real tensor such that there exists an integer s ∈[n −1]satisfying t i 1i 2...i k ≡0for every i 1∈{s +1,...,n }and all indices i 2,...,i k such that {i 2,...,i k }∩{1,...,s }=∅.Denote by U and V the sub-tensors of T associated to [s ]and {s +1,...,n },respectively.Then it holds thatσ(T )=σ(U )∪σ(V ).Moreover,if λ∈σ(U )is an eigenvalue of the tensor U with algebraic multiplicity r ,then it is an eigenvalue of the tensor T with algebraic multiplicity r (k −1)n −s ,and if λ∈σ(V )is an eigenvalue of the tensor V with algebraic multiplicity p ,then it is an eigenvalue of the tensor T with algebraic multiplicity p (k −1)s .62.2Symmetric Nonnegative TensorsThe spectral theory of nonnegative tensors is a useful tool to investigate the spectrum of a uniform hypergraph[7,23,27,33–35].A tensor is called nonnegative,if all of its entriesare nonnegative.A tensor T is called symmetric,if tτ(i1)...τ(i k)=t i1...i kfor all permutationsτon(i1,...,i k)and all i1,...,i k∈[n].In this subsection,we present some basic facts about symmetric nonnegative tensors which will be used extensively in the sequel.For comprehensive references on this topic,see[4,5,11,14,22,28,31,32]and references therein.By[23,Lemma3.1],hypergraphs are related to weakly irreducible nonnegative tensors. 
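As a concrete reading of Definition 2.1 above, the following NumPy sketch evaluates T x^{k-1} by brute force and checks the defining system (λI − T)x^{k-1} = 0 entrywise for a real candidate pair (λ, x). It ignores the normalization convention adopted in the paper and is meant only to illustrate the equations, not to serve as an eigenvalue algorithm.

```python
import numpy as np
from itertools import product

def apply_tensor(T, x):
    """Return the vector T x^{k-1}: its i-th entry is the sum over
    i2,...,ik of T[i, i2, ..., ik] * x[i2] * ... * x[ik]."""
    k = T.ndim
    n = T.shape[0]
    out = np.zeros(n)
    for idx in product(range(n), repeat=k):
        out[idx[0]] += T[idx] * np.prod([x[j] for j in idx[1:]])
    return out

def is_h_eigenpair(T, lam, x, tol=1e-9):
    """Check (lambda*I - T) x^{k-1} = 0, i.e. (T x^{k-1})_i = lambda * x_i^{k-1}
    for every i, which is the H-eigenpair condition for real lambda and x."""
    k = T.ndim
    return np.allclose(apply_tensor(T, x), lam * x ** (k - 1), atol=tol)

# Toy example: the 3rd-order 2-dimensional identity tensor has
# H-eigenvalue 1, and any real vector, e.g. (1, 1), is an eigenvector.
I = np.zeros((2, 2, 2))
I[0, 0, 0] = I[1, 1, 1] = 1.0
print(is_h_eigenpair(I, 1.0, np.array([1.0, 1.0])))  # True
```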
Essentially,weakly irreducible nonnegative tensors are introduced in[11].In this paper,we adopt the following definition[14,Definition2.2].For the definition of reducibility for a nonnegative matrix,see[12,Chapter8].Definition2.4Suppose that T is a nonnegative tensor of order k and dimension n.We call an n×n nonnegative matrix R(T)the representation of T,if the(i,j)-th element of R(T)is defined to be the summation of t ii2...i kwith indices{i2,...,i k} j.We say that the tensor T is weakly reducible if its representation R(T)is a reducible matrix.If T is not weakly reducible,then it is called weakly irreducible.For convenience,a one dimensional tensor,i.e.,a scalar,is regarded to be weakly irreducible.We summarize the Perron-Frobenius theorem for nonnegative tensors which will be used in this paper in the next lemma.For comprehensive references on this theory,see[4,5,11, 14,28,31,32]and references therein.Lemma2.2Let T be a nonnegative tensor.Then we have the followings.(i)ρ(T)is an H+-eigenvalue of T.(ii)If T is weakly irreducible,thenρ(T)is the unique H++-eigenvalue of T.Proof.The conclusion(i)follows from[32,Theorem2.3].The conclusion(ii)follows from[11,Theorem4.1].2 The next lemma is useful.Lemma2.3Let B and C be two nonnegative tensors,and B≥C in the sense of compo-nentwise.If B is weakly irreducible and B=C,thenρ(B)>ρ(C).Thus,if x∈R n+is aneigenvector of B corresponding toρ(B),then x∈R n++is positive.Proof.By[31,Theorem3.6],ρ(B)≥ρ(C)and the equality holding implies that|C|=B. Since C is nonnegative and B=C,we must have the strict inequality.7The second conclusion follows immediately from the first one and the weak irreducibility of B .For another proof,see [31,Lemma 3.5].2Note that the second conclusion of Lemma 2.3is equivalent to that ρ(S )<ρ(B )for any sub-tensor S of B other than the trivial case S =B .By [14,Theorem 5.3],without the weakly irreducible hypothesis,it is easy to construct an example such that the strict inequality in Lemma 2.3fails.For general nonnegative tensors which are weakly reducible,there is a characterization on their spectral radii based on partitions,see [14,Theorems 5.2amd 5.3].As remarked before [14,Theorem 5.4],such partitions can result in diagonal block representations for symmetric nonnegative tensors.Recently,Qi proved that for a symmetric nonnegative tensor T ,it holds that [28,Theorem 2]ρ(T )=max {T x k :=x T (T x k −1)|x ∈R n +, i ∈[n ]x k i =1}.(1)We summarize the above results in the next theorem with some new observations.Theorem 2.1Let T be a symmetric nonnegative tensor of order k and dimension n .Then,there exists a pairwise disjoint partition {S 1,...,S r }of the set [n ]such that every tensor T (S j )is weakly irreducible.Moreover,we have the followings.(i)For any x ∈C n ,T x k =j ∈[r ]T (S j )x (S j )k ,and ρ(T )=max j ∈[r ]ρ(T (S j )).(ii)λis an eigenvalue of T with eigenvector x if and only if λis an eigenvalue of T (S i )with eigenvector x (S i )k √ j ∈S i |x j|k whenever x (S i )=0.(iii)ρ(T )=max {T x k |x ∈R n +,i ∈[n ]x k i =1}.Furthermore,x ∈R n +is an eigenvector ofT corresponding to ρ(T )if and only if it is an optimal solution of the maximization problem (1).Proof.(i)By [14,Theorem 5.2],there exists a pairwise disjoint partition {S 1,...,S r }of the set [n ]such that every tensor T (S j )is weakly irreducible.Moreover,by the proof for [14,Theorem 5.2]and the fact that T is symmetric,{T (S j ),j ∈[r ]}encode all the possible nonzero entries of the tensor T .After a reordering of the index set,if necessary,we get a diagonal block 
representation of the tensor T .Thus,T x k = j ∈[r ]T (S j )x (S j )k follows for every x ∈C n .The spectral radii characterization is [14,Theorem 5.3].(ii)follows from the partition immediately.(iii)Suppose that x ∈R n +is an eigenvector of T corresponding to ρ(T ),then ρ(T )=x T (T x k −1).Hence,x is an optimal solution of (1).8On the other side,suppose that x is an optimal solution of (1).Then,by (i),we haveρ(T )=T x k =T (S 1)x (S 1)k +···+T (S r )x (S r )k .Whenever x (S i )=0,we must have ρ(T )( j ∈S i (x (S i ))k j )=T (S i )x (S i )k ,since ρ(T )( j ∈S i(y (S i ))k j )≥T (S i )y (S i )k for any y ∈R n +by (1).Hence,ρ(T (S i ))=ρ(T ).By Lemma 2.3,(1)and the weak irreducibility of T (S i ),we must have that x (S i )is a positive vector whenever x (S i )=0.Otherwise,ρ([T (S i )](sup(x (S i ))))=ρ(T (S i ))with sup(x (S i ))being the support of x (S i ),which is a contradiction.Thus,max {T (S i )z k |z ∈R |S i |+,i ∈S iz k i =1}has an optimal solution x (S i )in R |S i |++.By optimization theory [2],we must have that(T (S i )−ρ(T )I )x (S i )k −1=0.Then,by (ii),x is an eigenvector of T .22.3Uniform HypergraphsIn this subsection,we present some preliminary concepts of uniform hypergraphs which will be used in this paper.Please refer to [1,3,6]for comprehensive references.In this paper,unless stated otherwise,a hypergraph means an undirected simple k -uniform hypergraph G with vertex set V ,which is labeled as [n ]={1,...,n },and edge set E .By k -uniformity,we mean that for every edge e ∈E ,the cardinality |e |of e is equal to k .Throughout this paper,k ≥3and n ≥k .For a subset S ⊂[n ],we denoted by E S the set of edges {e ∈E |S ∩e =∅}.For a vertex i ∈V ,we simplify E {i }as E i .It is the set of edges containing the vertex i ,i.e.,E i :={e ∈E |i ∈e }.The cardinality |E i |of the set E i is defined as the degree of the vertex i ,which is denoted by d i .Then we have that k |E |= i ∈[n ]d i .If d i =0,then we say that the vertex i is isolated .Two different vertices i and j are connected to each other (or the pair i and j is connected),if there is a sequence of edges (e 1,...,e m )such that i ∈e 1,j ∈e m and e r ∩e r +1=∅for all r ∈[m −1].A hypergraph is called connected ,if every pair of different vertices of G is connected.A set S ⊆V is a connected component of G ,if every two vertices of S are connected and there is no vertices in V \S that are connected to any vertex in S .For the convenience,an isolated vertex is regarded as a connected component as well.Then,it is easy to see that for every hypergraph G ,there is a partition of V as pairwise disjoint subsets V =V 1∪...∪V s such that every V i is a connected component of G .Let S ⊆V ,the hypergraph with vertex set S and edge set {e ∈E |e ⊆S }is called the sub-hypergraph of G induced by S .We will denoted it by G S .In the sequel,unless stated otherwise,all the notations introduced above are reserved for the specific meanings.Here are some convention.For a subset S ⊆[n ],S c denotes the complement of S in [n ].For a nonempty subset S ⊆[n ]and x ∈C n ,we denoted by x S the monomial i ∈S x i .Let G =(V,E )be a k -uniform hypergraph.Let S ⊂V be a nonempty proper subset.Then,the edge set is partitioned into three pairwise disjoint parts:E (S ):={e ∈E |e ⊆S },9E(S c)and E(S,S c):={e∈E|e∩S=∅,e∩S c=∅}.E(S,S c)is called the edge cut of G with respect to S.When G is a usual graph(i.e.,k=2),for every edge in an edge cut E(S,S c)whenever it is nonempty,it contains exactly one vertex from S and the other one from S c.When G is a k-uniform 
hypergraph with k≥3,the situation is much more complicated.We will say that an edge in E(S,S c)cuts S with depth at least r(1≤r<k)if there are at least r vertices in this edge belonging to S.If every edge in the edge cut E(S,S c)cuts S with depth at least r,then we say that E(S,S c)cuts S with depth at least r.Definition2.5Let G=(V,E)be a k-uniform hypergraph.A nonempty subset B⊆V is called a spectral component of the hypergraph G if either the edge cut E(B,B c)is empty or E(B,B c)cuts B c with depth at least two.It is easy to see that any nonempty subset B⊂V satisfying|B|≤k−2is a spectral component.Suppose that G has connected components{V1,...,V r},it is easy to see that B⊂V is a spectral component of G if and only if B∩V i,whenever nonempty,is a spectral component of G Vi.We will establish the correspondence between the H+-eigenvalues that are less than one with the spectral components of the hypergraph,see Theorem4.1.Definition2.6Let G=(V,E)be a k-uniform hypergraph.A nonempty proper subset B⊆V is called aflower heart if B c is a spectral component and E(B c)=∅.If B is aflower heart of G,then G likes aflower with edges in E(B,B c)as leafs.It is easy to see that any proper subset B⊂V satisfying|B|≥n−k+2is aflower heart. There is a similar characterization between theflower hearts of G and these of its connected components.Theorem4.1will show that theflower hearts of a hypergraph correspond to its largest H+-eigenvalue.We here give the definition of the normalized Laplacian tensor of a uniform hypergraph.Definition2.7Let G be a k-uniform hypergraph with vertex set[n]={1,...,n}and edge set E.The normalized adjacency tensor A,which is a k-th order n-dimension symmetric nonnegative tensor,is defined asa i1i2...i k :=1(k−1)!j∈[k]1k√i jif{i1,i2...,i k}∈E,0otherwise.The normalized Laplacian tensor L,which is a k-th order n-dimensional symmetric tensor,is defined asL:=J−A,where J is a k-th order n-dimensional diagonal tensor with the i-th diagonal element j i...i=1 whenever d i>0,and zero otherwise.10When G has no isolated points,we have that L =I −A .The spectrum of L is called the spectrum of the hypergraph G ,and they are referred interchangeably.The current definition is motivated by the formalism of the normalized Laplacian matrix of a graph investigated extensively by Chung [6].We have a similar explanation for the normalized Laplacian tensor to the Laplacian tensor (i.e.,L =P k ·(D −B )1)as that for the normalized Laplacian matrix to the Laplacian matrix [6].Here P is a diagonal matrixwith its i -th diagonal element being 1k √d iwhen d i >0and zero otherwise.We have already pointed out one of the advantages of this definition,namely,L =I −A whenever G has no isolated vertices.Such a special structure only happens for regular hypergraphs under the definition in [27].(A hypergraph is called regular if d i is a constant for all i ∈[n ].)By Definition 2.1,the eigenvalues of L are exactly a shift of the eigenvalues of −A .Thus,we can establish many results on the spectra of uniform hypergraphs through the spectral theory of nonnegative tensors without the hypothesis of regularity.We note that,by Definition 2.1,L and D −B do not share the same spectrum unless G is regular.In the sequel,the normalized Laplacian tensor and the normalized adjacency tensor are simply called the Laplacian and the adjacency tensor respectively.By Definition 2.4,the following lemma can be proved similarly to [23,Lemma 3.1].Lemma 2.4Let G be a k -uniform hypergraph with vertex set V and edge set E .G is connected if 
For a hypergraph $G = (V, E)$, it can be partitioned into connected components $V = V_1 \cup \cdots \cup V_r$ for $r \ge 1$. Reordering the indices if necessary, $\mathcal{L}$ can be represented in a block diagonal structure according to $V_1, \ldots, V_r$. By Definition 2.1, the spectrum of $\mathcal{L}$ does not change when the indices are reordered. Thus, in the sequel, we assume that $\mathcal{L}$ is in the block diagonal structure with its $i$-th block tensor being the sub-tensor of $\mathcal{L}$ associated to $V_i$ for $i \in [r]$. By Definition 2.7, it is easy to see that $\mathcal{L}(V_i)$ is the Laplacian of the sub-hypergraph $G_{V_i}$ for all $i \in [r]$. A similar convention is assumed for the adjacency tensor $\mathcal{A}$.

3 The Spectrum of a Uniform Hypergraph

Basic properties of the eigenvalues of a uniform hypergraph are established in this section.

3.1 The Adjacency Tensor

In this subsection, some basic facts about the eigenvalues of the adjacency tensor are discussed.

¹ The matrix-tensor product is in the sense of [24, Page 1321]: $\mathcal{L} = (l_{i_1 \ldots i_k}) := P^k \cdot (D - \mathcal{B})$ is a $k$-th order $n$-dimensional tensor with its entries being $l_{i_1 \ldots i_k} := \sum_{j_s \in [n],\, s \in [k]} p_{i_1 j_1} \cdots p_{i_k j_k} (d_{j_1 \ldots j_k} - b_{j_1 \ldots j_k})$.
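To make Definition 2.7 concrete, the sketch below materializes the nonzero entries of the normalized adjacency tensor of a small 3-uniform hypergraph and evaluates $(\mathcal{A} x^{k-1})_i = \sum_{i_2,\ldots,i_k} a_{i i_2 \ldots i_k} x_{i_2} \cdots x_{i_k}$ in two equivalent ways. This is my own illustration of the definition on a toy example, not code from the paper; the edge-based shortcut in the second function is just the entry-based sum with the $(k-1)!$ orderings collapsed.

```python
from itertools import permutations
from math import prod

k = 3
n = 5
E = [frozenset(e) for e in [(0, 1, 2), (1, 2, 3), (2, 3, 4)]]
deg = [sum(1 for e in E if i in e) for i in range(n)]

# Nonzero entries of the normalized adjacency tensor (Definition 2.7):
# a_{i1...ik} = 1/(k-1)! * prod_j d_{i_j}^(-1/k)  when {i1,...,ik} is an edge.
a = {}
for e in E:
    val = prod(deg[j] ** (-1.0 / k) for j in e) / prod(range(1, k))  # /(k-1)!
    for idx in permutations(e):
        a[idx] = val

def apply_A(x):
    """(A x^{k-1})_i = sum over i2..ik of a_{i i2...ik} * x_{i2} * ... * x_{ik}."""
    y = [0.0] * n
    for idx, val in a.items():
        y[idx[0]] += val * prod(x[j] for j in idx[1:])
    return y

def apply_A_by_edges(x):
    # Equivalent evaluation that sums directly over the edges containing i.
    y = [0.0] * n
    for e in E:
        w = prod(deg[j] ** (-1.0 / k) for j in e)
        for i in e:
            y[i] += w * prod(x[j] for j in e if j != i)
    return y

x = [1.0] * n
print(apply_A(x))
print(apply_A_by_edges(x))   # should match entry-wise
```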
青少年被允许选择他们自己的衣服英语作文Should Teenagers Get to Pick Their Own Clothes?Have you ever wished you could wear whatever you wanted? That's how I feel sometimes when I look at what teenagers get to wear compared to my school uniform. Teenagers seem so lucky that they can express themselves through their clothes and hairstyles. I think letting teens choose their own outfits is a good idea for a few reasons.Firstly, fashion is a way for people to show their personality and individuality. If teenagers wear the same thing every day, like a uniform, then they can't use their style choices to stand out. Wearing what you want lets you creatively combine different items in unique ways. That's a form of self-expression that shouldn't be stifled.I have some friends who are really into fashion already, even though we're just kids. They follow the latest trends and put together cool outfits by mixing and matching items from their closets. They seem to really enjoy the creative process of styling different looks. If they suddenly had to start wearing boring uniforms in high school, that could make them less interested in fashion and self-expression.Additionally, being able to choose your own outfit builds practical life skills for the future. As a teenager, you start taking more responsibility and ownership over your daily choices and habits. Deciding what to wear each day is a simple freedom, but it starts teaching decision-making abilities. Those skills become more crucial as you grow into an independent adult.Besides, having no say over your clothes and appearance can be frustrating. Imagine being stuck wearing the same boring outfits every single day for years! I think I'd go crazy without any variety. Forcing teens to conform to strict dress codes eliminates their sense of identity and autonomy during important developmental years.Of course, there have to be some reasonable limits and guidelines about what's appropriate for school. You can't have kids wearing offensive graphic tees, revealing clothes, or bizarre outfits that would cause distractions. But within the bounds of decent taste, teens should get flexibility to define their personal style.And it's not like teens who pick their clothes never make mistakes or look silly sometimes – that's just part of exploring fashion and figuring out what works. I've definitely seen teenagers out in public wearing some pretty cringeworthy outfits!But that's how you learn. Making those types of personal decisions, even the mistakes, is valuable experience.Fashion is also connected to peer approval and self-esteem, especially for young people. What you wear influences how you're perceived by your friends and classmates. If you don't like your outfit or feel you can't express your personality, it can really dampen your confidence during an already insecure time of life. Having some freedom over their clothes and style helps teenagers feel more comfortable in their own skin.Thinking back to some of my older cousins, I remember they absolutely lived in graphic tees, hoodies, and certain sneaker styles as teens. Forcing them into khakis and polos would've felt so wrong and restrictive for their generation. Each decade has its own defined fashion trends and aesthetic for young people. It would be undermining for schools to ignore those cultural forces completely.At the same time, I can understand why some parents, teachers, and admins want clear dress codes. They likely had their own embarrassing fashion moments back in the day too. 
But enforcing strict uniform requirements isn't the answer. There's a balance where you exercise some oversight forappropriateness, while still allowing room for personal expression through clothes.I think having input and choice is also good preparation for the working world. Unless you'll be wearing a standardized uniform for your career, you'll need to build common sense about what's acceptable professional attire. Getting practice at making those judgments as a teenager, with some guidance, is honestly not a bad thing. Then you're not blindsided by strict dress codes in your first real job.Plus, wearing trendy styles boosts teens' confidence about being current and relatable to their peers. That social acceptance and fitting in is so important at that age. I know I personally worry about wearing something embarrassing or Looking silly compared to other kids sometimes. For teenagers, those pressures are way amplified.At the end of the day, being a teenager is already stressful with all the bodily changes, social challenges, figuring out who you are, and cramming for tests. If there's a simple freedom like picking your outfit that can make them feel better about themselves, why not allow it? As long as they stay within appropriate boundaries, having some choice over their clothesand style is a pretty harmless way for teens to express individuality.Fashion is about a lot more than just material items. For young people, it's an integral component of their identity formation. By letting teenagers practice decision-making through their clothing choices, you're encouraging creativity, independence, and confidence during pivotal years. And really, doesn't making your own choices about small things like outfits prepare you for bigger life decisions too? I think having that ownership over daily details helps teenagers grow into responsible, self-assured adults.。
Monotone circuits for the majority functionShlomo Hoory Avner Magen†Toniann Pitassi†AbstractWe present a simple randomized construction of size O n3and depth53log n O1monotone circuits for the majority function on n variables.This result can be viewed as a reduction in the size anda partial derandomization of Valiant’s construction of an O n53monotone formula,[15].On the otherhand,compared with the deterministic monotone circuit obtained from the sorting network of Ajtai, Koml´o s,and Szemer´e di[1],our circuit is much simpler and has depth O log n with a small constant.The techniques used in our construction incorporate fairly recent results showing that expansion yields performance guarantee for the belief propagation message passing algorithms for decoding low-density parity-check(LDPC)codes,[3].As part of the construction,we obtain optimal-depth linear-size mono-tone circuits for the promise version of the problem,where the number of1’s in the input is promised to be either less than one third,or greater than two thirds.We also extend these improvements to general threshold functions.At last,we show that the size can be further reduced at the expense of increased depth,and obtain a circuit for the majority of size and depth about n1Department of Computer Science,University of British Columbia,Vancouver,Canada.†Department of Computer Science,University of Toronto,Toronto,Canada.1IntroductionThe complexity of monotone formulas/circuits for the majority function is a fascinating,albeit perplexing,problem in theoretical computer science.Without the monotonicity restriction,majority can be solvedwith simple linear-size circuits of depth O log n,where the best known depth(over binary AND,OR,NOT gates)is495log n O1[12].There are two fundamental algorithms for the majority function thatachieve logarithmic depth.Thefirst is a beautiful construction obtained by Valiant in1984[15]that achievesmonotone formulas of depth53log n O1and size O n53.The second algorithm is obtained from the celebrated sorting network constructed in1983by Ajtai,Koml´o s,and Szemer´e di[1].Restricting to binaryinputs and taking the middle output bit(median),reduces this network to a monotone circuit for the majorityfunction of depth K log n and size O n log n.The advantage of the AKS sorting network for majority is thatit is a completely uniform construction of small size.On the negative side,its proof is quite complicated andmore importantly,the constant K is huge:the best known constant K is about5000[11],and as observed byPaterson,Pippenger,and Zwick[12],this constant is important.Further converting the circuit to a formulayields a monotone formula of size O n K,which is roughly n5000.In order to argue about a quality of a solution to the problem,one should be precise about the differentresources and the tradeoffs between them.We care about the depth,the size,the number of random bitsfor a randomized construction,and formula vs circuit question.Finally,the conceptual simplicity of boththe algorithm and the correctness proof is also an important goal.Getting the best depth-size tradeoffs isperhaps the most sought after goal around this classical question,while achieving uniformity comes next. 
An interesting aspect of the problem is the natural way it splits into two subproblems,the solution to which gives a solution to the original problem.Problem I takes as input an arbitrary n-bit binary vector,and outputs an m-bit vector.If the input vector has a majority of1’s,then the output vector has at least a2/3fraction of 1’s,and if the input vector does not have a majority of1’s,then the output vector has at most a1/3fraction of1’s.Problem II is a promise problem that takes the m-bit output of problem I as its input.The output of Problem II is a single bit that is1if the input has at least a2/3fraction of1’s,and is a0if the input has at most a1/3fraction of1’s.Obviously the composition of these two functions solves the original majority problem.There are several reasons to consider monotone circuits that are constructed via this two-phase approach.First,Valiant’s analysis uses this viewpoint.Boppana’s later work[2]actually lower bounds each of thesesubproblems separately(although failing to provide lower bound for the entire problem).Finally,the secondsubproblem is of interest in its own right.Problem II can be viewed as an approximate counting problem,and thus plays an important role in many areas of theoretical computer science.Non monotone circuits forthis promise problem have been widely studied.Results The contribution of the current work is primarily in obtaining a new and simple construction ofmonotone circuits for the majority function of depth53log n and size O n3,hence significantly reducing the size of Valiant’s formula while not compromising at all the depth parameter.Further,for subproblem II as defined above,we supply a construction of a circuit size that is of a linear size,and it too does not compromise the depth compared to Valiant’s solution.A very appealing feature of this construction is that it is uniform,conditioned on a reasonable assumption about the existence of good enough expander graphs. 
To this end we introduce a connection between this circuit complexity question and another domain, namely message passing algorithms. The depth we achieve for the promise problem nearly matches the lower bound of Moore and Shannon [10]. We further show how to generalize our solution to a general threshold function, and explore another option in the tradeoffs between the different resources we use; specifically, we show that by allowing a depth of roughly twice that of Valiant's construction, we may get a circuit of significantly smaller size (the precise size/depth tradeoff is worked out in Section 5).

2 Definitions and amplification

For a monotone boolean function $H$ on $k$ inputs, we define its amplification function $A_H : [0,1] \to [0,1]$ as $A_H(p) = \Pr[H(X_1, \ldots, X_k) = 1]$, where the $X_i$ are independent boolean random variables that are one with probability $p$. Valiant [15] considered the function $H$ on four variables which is the OR of two AND gates, $H(x_1, x_2, x_3, x_4) = (x_1 \wedge x_2) \vee (x_3 \wedge x_4)$. The amplification function of $H$, depicted in Figure 1, is $A_H(p) = 1 - (1 - p^2)^2$, and has a non-trivial fixed point at $\beta = (\sqrt{5} - 1)/2$. Let $H_k$ be the depth-$2k$ binary tree with alternating layers of AND and OR gates, where the root is labeled OR. Valiant's construction uses the fact that $A_{H_k}$ is the composition of $A_H$ with itself $k$ times. Therefore, $H_k$ probabilistically amplifies $(\beta - \Delta, \beta + \Delta)$ to $(\beta - (\gamma - \epsilon)^k \Delta,\ \beta + (\gamma - \epsilon)^k \Delta)$, where $\gamma = A_H'(\beta) \approx 1.53$, as long as $(\gamma - \epsilon)^k \Delta$ stays below a suitable constant $\Delta_0$. This implies that for any constant $\epsilon > 0$ we can take $2k = 3.3 \log n + O(1)$ to probabilistically amplify $(\beta - \Omega(1/n), \beta + \Omega(1/n))$ to $(\epsilon, 1 - \epsilon)$, where 3.3 is any constant bigger than $\alpha = 2/\log_2 A_H'(\beta) \approx 3.27$ (a numerical illustration of this iteration count is sketched below).

Definition 1. Let $F$ be a boolean function $F : \{0,1\}^n \to \{0,1\}^m$, and let $S \subseteq \{0,1\}^n$ be some subset of the inputs. We say that $F$ deterministically amplifies $(p_l, p_h)$ to $(q_l, q_h)$ with respect to $S$ if for all inputs $x \in S$ the following promise is satisfied (we denote by $|x|$ the number of ones in the vector $x$):
$|F(x)| \le q_l m$ if $|x| \le p_l n$,
$|F(x)| \ge q_h m$ if $|x| \ge p_h n$.
Note that unlike probabilistic amplification, deterministic amplification has to work for all inputs or scenarios in the given set $S$. From here on, whenever we simply say "amplification" we mean deterministic amplification.

For an arbitrarily small constant $\epsilon > 0$, the construction we give is composed of two independent phases that may be of independent interest.

A circuit $C_1 : \{0,1\}^n \to \{0,1\}^m$ for $m = O(n)$ that deterministically amplifies $(\beta - \Omega(1/n), \beta + \Omega(1/n))$ to $(\delta, 1 - \delta)$ for an arbitrarily small constant $\delta > 0$. This circuit has size $O(n^3)$ and depth $(\alpha + \epsilon) \log n + O(1)$.

A circuit $C_2 : \{0,1\}^m \to \{0,1\}$ such that $C_2(x) = 0$ if $|x| \le \delta m$ and $C_2(x) = 1$ if $|x| \ge (1 - \delta) m$, where $\delta > 0$ is a sufficiently small constant. This circuit has size $O(m)$ and depth $(2 + \epsilon) \log m + O(1)$.

The first circuit $C_1$ is achieved by a simple probabilistic construction that resembles Valiant's construction. We present two constructions for the second circuit, $C_2$. The first construction is probabilistic; the second is a simulation of a logarithmic number of rounds of a certain message passing algorithm on a good bipartite expander graph. The correctness is based on the analysis of a similar algorithm used to decode a low density parity check (LDPC) code on the erasure channel [3]. Combining the two circuits together yields a circuit $C : \{0,1\}^n \to \{0,1\}$ for the $\beta n$-th threshold function.
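Since the parameters above all come from how fast iterates of $A_H$ move away from the fixed point $\beta$, the claimed $3.3 \log n$ level count is easy to check numerically. The short sketch below is only an illustration of that calculation (the choices of $n$ and $\epsilon$ are mine): it starts at $\beta \pm 1/n$ and counts how many applications of $A_H$, that is, how many pairs of AND/OR levels, are needed before the two trajectories reach $(\epsilon, 1-\epsilon)$.

```python
from math import sqrt, log2

def A_H(p):
    # Amplification function of H(x1,x2,x3,x4) = (x1 AND x2) OR (x3 AND x4).
    return 1.0 - (1.0 - p * p) ** 2

beta = (sqrt(5) - 1) / 2          # non-trivial fixed point of A_H
eps, n = 0.01, 10**6

lo, hi, k = beta - 1.0 / n, beta + 1.0 / n, 0
while lo > eps or hi < 1 - eps:   # iterate until both sides are amplified
    lo, hi, k = A_H(lo), A_H(hi), k + 1

depth = 2 * k                     # each H contributes an AND level and an OR level
print(k, depth, 3.3 * log2(n))    # depth should be close to 3.3*log2(n) + O(1)
```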
The circuit is of size O n3and depthα22εlog n O1.3Monotone circuits for MajorityIn this section we give a randomized construction of the circuit C:01n01such that C x is one if the portion of ones in x is at leastβn and zero otherwise.The circuit C has size O n3and depth2αεlog n O1for an arbitrary small constantε0.As we described before,we will describe C as the compositions of the circuits C1and C2whose parameters are given by the following two theorems: Theorem2.For everyεεc0,there exists a circuit C1:01n01m for m O n,of size O n3and depthαεlog n O1that deterministically amplifies all inputs fromβc nβc n toε1ε. Theorem3.For everyε0,there existsε0and a circuit C2:01n01,of size O n and depth 2εlog n O1that deterministically amplifies all inputs fromε1εto01.The two circuits use a generalization of the four input function H used in Valiant’s construction.For any integer d2,we define the function H d on d2inputs as the d-ary OR of d d-ary AND gates,i.e d i1d j1 x i j.Note that Valiant’s function H is just H2.Each of the circuits C1and C2is a layered circuit,where layer zero is the input,and each value at the i-th layer is obtained by applying H d to d2independently chosen inputs from layer i 1.However,the valuesof d we choose for C1and C2are different.For C1we have d2,while for C2we choose sufficiently large d dεto meet the depth requirement of the circuit.We let F n m F denote a random circuit mapping n inputs to m outputs,where F is afixed monotone boolean circuit with k inputs,and each of the m output bits is calculated by applying F to k independently chosen random inputs.We start with a simple lemma that relates the deterministic amplification properties of F to the probabilistic amplification function A F.1Lemma4.For anyεδ0,the random function F deterministically amplifies p l p h to A F p l1δA F p h1δwith respect to S01n with probability at least1ε,if:log S log1εmΩΘγ2εi1c nβγεγ2εi1c nThat is,we can chooseδas an increasing geometric sequence,starting fromΘ1n for i1,up toΘ1 for i logγ2εn.The implied layer size for error probability2n(which is much better than we need),is Θnδ2.Therefore,it decreases geometrically fromΘn3down toΘn.It is not difficult to see that after achieving the desired amplification fromβc n toβ∆0,only a constant number of layers is needed to get down toε.The corresponding value ofδin these last steps is a constant (that depends onε),and therefore,the required layer sizes are allΘn.Proof of Theorem3.The circuit C2is a composition of F n m1H d F m1m2H dF mt1m t H d,where d andthe layer sizes n m0m1m t1are suitably chosen parameters depending onε.We prove that with high probability such a circuit deterministically amplifies all inputs fromε1εto01.As before,we restrict our attention to the lower end of the promise problem and prove that C2outputs zero on all inputs with portion of ones smaller thanε.As in the circuit C1,the layer sizes must be sufficiently large to allow accurate computation.However, for the circuit C2,accurate computation does not mean that the portion of ones in each layer is close to its expected value.Rather,our aim is to keep the portion of ones bounded by afixed constantε,while making each layer smaller than the preceding one by approximately a factor of d.We continue this process until the layer size is constant,and then use a constant size circuit tofinish the computation.Therefore,since the number of layers of such a circuit is about log n log d,and the depth of the circuit for H d is2log d,the total depth is about2log n for large d.By the above 
discussion,it suffices to prove the following:For everyε0there exists a real number δ0and two integers d n0,such that for all n n0the random circuit F n m H d with m1εn d, deterministically amplifiesδtoδwith respect to all inputs,with failure probability at most1n.Since A Hδ11δd d dδd,the probability of failure for any specific input with portion of ones at most δ,is bounded by:mδmA Hδδm eamplification method to analyze the performance of a belief propagation message passing algorithm for decoding low density parity check(LDPC)codes.Today the use of belief propagation for decoding LDPC codes is one of the hottest topics in error correcting codes[9,14,13].Let G V L V R;E be a d regular bipartite graph with n vertices on each side,V L V R n.Consider the following message passing algorithm,where we think of the left and right as two players.The left player “plays AND”and the right player“plays OR”.At time zero the left player starts by sending one boolean message through each left to right edge,where the value of the message m uv from u V L to v V R is the input bit x u.Subsequently,the messages at time t0are calculated from the messages at time t 1.At odd times,given the left to right messages m uv,the right player calculates the right to left messages m vw, from v V R to w V L by the formula m vw u N v w m uv.That is,the right player sends a1along the edge from v V R to w V L if and only if at least one of the incoming messages/values(not including the incoming message from w)is1.Similarly,at even times the algorithm calculates the left to right messages m vw,v V L,w V R,from the right to left messages m uv,by the formula m vw u N v w m uv.That is,the left player sends a1along the edge from v V L to w V R if and only if all of the incoming messages/values (not including the incoming message from w)are1.We further need the following definitions.We call a left vertex bad at even time t if it transmits at least one message of value one at time t.Similarly,a right vertex is bad at odd time t if it is a right vertex that transmits at least one message of value zero at time t.We let b t be the number of bad vertices at time t.These definitions will be instrumental in providing a potential function measuring the progress of the message passing algorithm which is expressed in Lemma5.We say that a bipartite graph G V L V R;E isλe-expanding,if for any vertex set S V L(or S V R)of size at mostλn,N S e S.It will be convenient to denote the expansion of the set S by e S N S S. 
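The AND/OR message passing scheme just described is easy to simulate, which gives a quick sanity check of the claim that a small initial fraction of ones is driven to the all-zeros message pattern within a logarithmic number of rounds. The sketch below is mine and makes simplifying assumptions: it samples a random $d$-regular bipartite multigraph by a configuration-model matching (with no attempt to certify expansion), initializes the left-to-right messages with the input bits, and then alternates the OR and AND updates, printing the fraction of 1-messages after each round.

```python
import random

def simulate(n=2000, d=6, ones_fraction=0.05, rounds=15, seed=0):
    rng = random.Random(seed)
    # Random d-regular bipartite multigraph via a configuration-model matching:
    # left stub t is matched to right stub perm[t]; stub t doubles as edge id t.
    perm = list(range(n * d))
    rng.shuffle(perm)
    left_edges = [list(range(u * d, (u + 1) * d)) for u in range(n)]
    right_edges = [[] for _ in range(n)]
    for t in range(n * d):
        right_edges[perm[t] // d].append(t)

    # Input with a small fraction of ones; left vertex u holds bit x[u].
    x = [1 if i < ones_fraction * n else 0 for i in range(n)]
    rng.shuffle(x)
    m_lr = [x[t // d] for t in range(n * d)]   # time-0 left-to-right messages

    for _ in range(rounds):
        # Right player "plays OR": send 1 back on edge t iff some OTHER incoming message is 1.
        m_rl = [0] * (n * d)
        for v in range(n):
            ones = sum(m_lr[t] for t in right_edges[v])
            for t in right_edges[v]:
                m_rl[t] = 1 if ones - m_lr[t] > 0 else 0
        # Left player "plays AND": send 1 on edge t iff every OTHER incoming message is 1.
        new_lr = [0] * (n * d)
        for u in range(n):
            zeros = sum(1 - m_rl[t] for t in left_edges[u])
            for t in left_edges[u]:
                new_lr[t] = 1 if zeros - (1 - m_rl[t]) == 0 else 0
        m_lr = new_lr
        print(sum(m_lr) / len(m_lr))           # fraction of 1-valued messages

simulate()
```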
Lemma5.Consider the message passing algorithm using a d4regular expander graph with d1e d12.If b tλn d2then b t2b tη,whereηd1d1ηt and so b2t10for t log a d2d e gets,and the better the time guarantee above gets.How good are the expanders that we may use?One can show the existence of such expanders for sufficiently large d large,and e d c for an absolute constant c.The best known explicit construction that gets close to what we need,is the result of[4].However,that result does not suffice here for two reasons.Thefirst is that it only achieves expansion1εd for anyε0 and sufficiently large d depending onε.The second is that it only guarantees left-to-right expansion,while our construction needs both left-to-right and right-to-left expansion.We refer the reader to the survey[6] for further reading and background.For such expanders,ηd1d1log d1log d1iterations,all mes-sages contain the right answer,whereεcan be made arbitrarily small by choosing sufficiently large d.It remains to convert the algorithm into a monotone circuit,which introduces a depth-blowup of log d1 owing to the depth of a binary tree simulating a d1-ary gate.Thus we get a2εlog n-depth circuit for arbitrarily smallε0.The size is obviously dn depth O n log n.To get a linear circuit,further work is needed,which we now describe.The idea is to use a sequence of graphs G 0G G 1,where each graph is half the size of its preceding graph,but has the same degree and expansion parameters.We start the message passing algorithm using the graph G G 0,and every t 0rounds (each round consists of OR and then AND),we switch to the next graph in the sequence.Without the switch,the portion of bad vertices should decrease by a factor of ηt 0,every t 0rounds.We argue that each switch can be performed,while losing at most a constant factor.To describe the switch from G i to G i 1,we identify V L G i 1with an arbitrary half of the vertices V L G i ,and start the message passing algorithm on G i 1with the left to right messages from each vertex in V L G i 1,being the same as at the last round of the algorithm on G i .As the number of bad left vertices cannot increase at a switch,their portion,at most doubles.For the right vertices,the exact argument is slightly more involved,but it is clear that the portion of bad right vertices in the first round in G i 1,increases by at most a constant factor c ,compared with what it should have been,had there been no switch.(Precise calculation,yields c 2d η.)Therefore,to summarize,as the circuit consists of a geometrically decreasing sequence of blocks starting with a linear size block,the total size is linear as well.As for the depth,the amortized reduction in the portion of bad vertices per round,is by a factor of ηηc 1t 0.Therefore,the resulting circuit is only deeper than the one described in the previous paragraph,by a factor of log ηlog η.By choosing a sufficiently large value for t 0,we obtain:Theorem 6.For any ε0,there exists a 0such that for any n there exists a monotone circuit of depth 2εlog n O 1and size O n that solves a-promise problem.We note here that O log n depth monotone circuits for the a -promise problem can also be obtained from ε-halvers.These are building blocks used in the AKS network.However,our monotone circuits for the a -promise problem have two advantages.First,our algorithm relates this classical problem in circuit com-plexity to recent popular message passing algorithms.And second,the depth that we obtain is nearly ly,Moore and Shannon [10]prove that any monotone formula/circuit for 
majority requires depth 2log n O 1,and the lower bound holds for the a -promise problem as well.Proof of Lemma 5.(based on Burshtein and Miller [3])We consider only the case of bad left vertices.The proof for bad right vertices follows from the same proof,after exchanging ones with zeroes,ANDs with ORs,and lefts with rights.Let B V L be the set of bad leftvertices,and assume Bλd 2at some even time t and B the set of bad vertices at time t 2.We bound the size of B by considering separately B B and B B .Note that all sets considered in the proof have size at most λn ,and therefore expansion at least e.N(B’)To bound B B ,consider the set Q N B B N B N B B N B .Since vertices in Q are not adjacent to B ,then at time t 1they send right to left messages valued zero.On the other hand,any vertex in B B can receive at most one such zero message (otherwise all its messages at time t 2will be valuedzero and it cannot be in B).Therefore,since each vertex in Q must have at least one neighbour in B B,it follows that Q B B.Therefore,we have:N B B N B Q N B B B e B B B BOn the other hand,N B B e B B e B B B.Combining the above two inequalities,we obtain:B B e Be2B B1d12B(2) Combining inequalities(1)and(2)we get that:B B e B ed12Since e d12,and e B e,this yields the required bound:B B2d e d1As noted before in Section2,replacing the last2log n layers of Valiant’s tree with2log r n layers of r-ary AND/OR gates,results in an arbitrarily small increase in the depth of the corresponding formula for a large value of r.It is interesting to compare the expected behavior of the suggested belief-propagation algorithm to the behavior of the d1-ary tree.Assume that the graph G is chosen at random(in theconfiguration model),and that the number of rounds k is sufficiently small,d12k n.Then,almost surely the computation of all but o1fraction of the k-th round messages is performed by evaluating a d1-ary depth k trees.Moreover,introducing an additional o1error,one may assume that the leaves are independently chosen boolean random variables that are one with probability p,where p is the portion of ones in the input.This observation sheds some light on the performance of the belief propagation algorithm. 
However,our analysis proceeds far beyond the number of rounds for which a cycle free analysis can be applied.4Monotone formulas for threshold-k functionsConsider the case of the k-th threshold function,T k n,i.e.a function that is one on x01n if xk1and zero otherwise.We show that,by essentially the same techniques of Section3,we can construct monotone circuits to this more general problem.We assume henceforth that k n2,since otherwise, we construct the circuit T n1k n and switch AND with OR gates.For k nΘ1,the construction yields circuits of depth53log n O1and size O n3.However,when k o n,circuits are shallower and smaller (this not surprising fact is also discussed in[2]in the context of formulas).The construction goes as follows:(i)Amplify k n k1n toβΩ1kβΩ1k by randomly applying to the input a sufficiently large number of OR gates with arityΘn k(ii)AmplifyβΩ1kβΩ1k to O11O1using a variation of phase I,and(iii)Amplify O11O1to01using phase II.We now give a detailed description.For the sake of the section to follow,we require the following lemma which is more general than is needed for the results of this section.Lemma7.Let S01n,andε0.Then,for any k,there is a randomized construction of a monotone circuit that evaluates T k n correctly on all inputs from S and hasdepth log n23log k2εloglog S O1size O log S k nHere k min k n1k,and the constants of the O depend only onε.Proof.Let s log S,and let i be the OR function with arity i.Then An kk n11k n n k,while An k k1n11k1n n k.Hence An kk n is a constant bounded from zero andone.We further notice thatAn k k1nΘ1kIt is not hard to see that we can pick a constantρso that Aρn k knβΩ1k.Therefore,ρn k probabilistically amplify k n k1n toβΩ1kβΩingLemma4withδΘ1k and m sk2we get that F n mρn k amplifies k n k1n toβΩ1kβΩ1k with arbitrarily high probability.The depth required to implement the above circuit is log n k and the size is O skn.Next we apply a slight modification of phase I.The analysis there remains the same except that the starting point is separation guarantee ofΩ1k instead ofΩ1n,and log S is s instead of n.This leads to a circuit of depthαεlog k O1and of size O sk2,for an arbitrarily small constantε0.Also,we note that the output of this phase is of sizeΘs.Finally,we apply phase II,where the number of inputs isΘs instead ofΘn,to obtain an amplification from O11O1to01.This requires depth2εlog s O1and size O s,for an arbitrarily small constantε0.To guarantee the correctness of a monotone circuit for T n k,it suffices to check its output on inputs of weight k k1(as the circuit is monotone).Therefore,S n k n k1,implying that log S O k log n k. Therefore,we have:Theorem8.There is a randomized construction of a monotone circuit for T k n with:depth log n43log k O loglog n ksize O k2n log n kwhere k min k n1k,and the constants of the O are absolute.5Reducing the circuit sizeThe result obtained so far for the majority,is a monotone circuit of depth53log n O1and size O n3.In this section,we would like to obtain smaller circuit size,at the expense of increasing the depth somewhat. 
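Before turning to the size reduction, it is worth sanity-checking step (i) of the threshold construction of the previous section numerically: an OR gate of arity $\Theta(n/k)$ maps input density $k/n$ to a constant near $\beta$ while keeping the densities $k/n$ and $(k+1)/n$ separated by $\Theta(1/k)$. The sketch below only illustrates that calculation; the target offset $0.18/k$ below $\beta$ and the way the arity is rounded are my own choices for the illustration, not the constants used in the paper.

```python
from math import sqrt, log

beta = (sqrt(5) - 1) / 2          # fixed point of Valiant's amplification function

def A_or(r, p):
    # Amplification function of an OR gate of arity r on inputs of density p.
    return 1.0 - (1.0 - p) ** r

n = 10**6
for k in [10, 50, 250]:
    p_lo, p_hi = k / n, (k + 1) / n
    # Aim the image of density k/n at beta - 0.18/k (0.18 is an arbitrary
    # illustrative constant); the resulting arity r is Theta(n/k).
    target = beta - 0.18 / k
    r = int(log(1 - target) / log(1 - p_lo))
    lo, hi = A_or(r, p_lo), A_or(r, p_hi)
    # Both distances from beta scale like 1/k, so k*(...) stays roughly constant.
    print(k, r, round(k * (beta - lo), 3), round(k * (hi - beta), 3))
```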
The crucial observation is that the size of our circuit depends linearly on the logarithm of the number of scenarios it has to handle.Therefore,applying a preprocessing stage to reduce the wealth of scenarios may save up to a factor of n in the circuit size.We propose a recursive construction that reduces the circuit size to about n1We chooseαi2σi1to equate1αiσi with3αi.This implies thatσi132σi1,and δi153δi22σi1,resulting in the following sequence:iαiσiδi2,and the sequence of δi tends to129896.Therefore,we have:Theorem9.There is a randomized construction of a monotone circuit for the majority of size n1There are two central open problems related to this work.First,is the promise version really simpler than majority?A lower bound greater than2log n on the communication complexity of mMaj-search would settle this question.Boppana[2]and more recent work[5]show lower bounds on a particular method for obtaining monotone formulas for majority.However we are asking instead for lower bounds on the size/depth of unrestricted monotone formulas/circuits.Secondly,the original question remains unresolved. Namely,we would like to obtain explicit uniform formulas for majority of optimal or near optimal size.A related problem is to come up with a natural(top-down)communication complexity protocol for mMaj-Search that uses O log n many bits.References[1]M.Ajtai,J.Koml´o s,and E.Szemer´e di.Sorting in c log n parallel binatorica,3(1):1–19,1983.[2]R.B.Boppana.Amplification of probabilistic boolean formulas.IEEE Symposium on Foundations ofComputer Science(FOCS),pages20–29,1985.[3]D.Burshtein and ler.Expander graph arguments for message-passing algorithms.IEEE Trans.Inform.Theory,47(2):782–790,2001.[4]M.Capalbo,O.Reingold,S.Vadhan,and A.Wigderson.Randomness conductors and constant-degreeexpansion beyond the degree2barrier.In Proceedings34th Symposium on Theory of Computing, pages659–668,2002.[5]M.Dubiner and U.Zwick.Amplification by read-once formulas.SIAM put.,26(1):15–38,1997.[6]S.Hoory,N.Linial,and A.Wigderson.Expander graphs and their applications.survey article toappear in the Bulletin of the AMS.[7]Mauricio Karchmer and Avi Wigderson.Monotone circuits for connectivity require super-logarithmicdepth.In Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing,pages539–550,Chicago,IL,May1988.[8]M.Luby,M.Mitzenmacher,and A.Shokrollahi.Analysis of random processes via and-or tree evalu-ation.In ACM-SIAM Symposium on Discrete Algorithms(SODA),1998.[9]M.Luby,M.Mitzenmacher,A.Shokrollahi,and D.A.Spielman.Analysis of low density codes andimproved designs using irregular graphs.ACM Symposium on Theory of Computing(STOC),1998.[10]E.F.Moore and C.E.Shannon.Reliable circuits using less reliable relays.I,II.J.Franklin Inst.,262:191–208,281–297,1956.[11]M.S.Paterson.Improved sorting networks with O log N depth.Algorithmica,5(1):75–92,1990.[12]M.S.Paterson,N.Pippenger,and U.Zwick.Optimal carry save networks.In Boolean functioncomplexity(Durham,1990),volume169of London Math.Soc.Lecture Note Ser.,pages174–201.Cambridge Univ.Press,Cambridge,1992.[13]T.Richardson and R.Urbanke.Modern coding theory.Draft of a book.[14]T.Richardson and R.Urbanke.The capacity of low-density parity-check codes under message-passingdecoding.IEEE rm.Theory,47(2):599–618,2001.[15]L.G.Valiant.Short monotone formulae for the majority function.J.Algorithms,5(3):363–366,1984.。
Simulated TOEFL -- Synonym Vocabulary
Scarce---limited (limited, lacking)
Period---time (time, era)
Succeeding---later (subsequent, later)
Minute---tiny (tiny)
Motion---movement (motion, movement)
Random---unpredictable (random, unpredictable)
Illuminated---clarified (clarified, transparent)
Prerequisite---requirement (necessary condition, precondition)
Peculiar---unusual (special, uncommon)
Controlled---managed (control, restrain, manage)
Naturally---unsurprisingly (natural, not surprising)
Make the most---get the best yield from
Withstand---tolerate (resist, withstand, endure)
Constant---invariable (unchanging, constant)
Divergent---different (different, distinct)
Squarely---solidly (exactly)
Unwieldy---unmanageable (hard to handle)
Initiated---started (begin)
Stimulated---motivated (stimulated, excited)
Retain---preserve (keep, preserve)
Altered---changed (changed)
Vowed---promised (vow, promise)
Dream---expectation (dreamed-of, hoped-for)
Undiminished---not lessened (not diminished)
Monochrome---one color (single-color)
Eerily---strangely (strange)
Surpassed---exceeded (exceed)
Functional---useful (functional, useful)
Basic---fundamental (basic, fundamental)
Pattern---design model (design, model)
Marked---distinct (notable, obvious)
Enhanced---improved (strengthened, enhanced)
Spiraling---rising (spiraling, rising)
More modest---less ambitious (modest, less ambitious)
Precisely---exactly (precise, exactly)
Shape---form (shape, form)
Capture---catch (capture, catch)
Retracts---pulls back (retract, cancel, withdraw)
Transform---change (change)
Composition---arrangement (composition, arrangement)
Shots---photographs (photographs)
Great---strong (great, firm)
Sensational---spectacular (sensational, spectacular)
Tardy---late (slow, late)
Discounted---disregarded (disregarded, ignored)
Encountered---faced (encounter, face)
Hampered---impeded (hinder, prevent)
Array---large number (array, large quantity)
Advanced---proposed (put forward, proposed)
Inaccessible---unreachable (hard to approach or reach)
Extracting---removing (pull out, remove)
Strength---basis (advantage, foundation)
Surging---accelerating (surging, accelerating)
Trend---tendency (trend, tendency)
Peak---maximum (maximum)
Prior to---preceding
Hausdorff Convergence and Universal Covers
Christina Sormani and Guofang Wei
Abstract
We prove that if Y is the Gromov-Hausdorff limit of a sequence of compact manifolds, M_i^n, with a uniform lower bound on Ricci curvature and a uniform upper bound on diameter, then Y has a universal cover. We then show that, for i sufficiently large, the fundamental group of M_i has a surjective homomorphism onto the group of deck transforms of Y. Finally, in the non-collapsed case where the M_i have an additional uniform lower bound on volume, we prove that the kernels of these surjective maps are finite with a uniform bound on their cardinality. A number of theorems are also proven concerning the limits of covering spaces and their deck transforms when the M_i are only assumed to be compact length spaces with a uniform upper bound on diameter.
底线篮球羽毛球界限作文英文回答:In the competitive realms of basketball and badminton, the lines that define the boundaries of play hold immense significance, dictating the flow of the game and shaping the strategies employed by athletes. These lines establish clear limits within which players must operate, creating a structured framework that ensures fairness and accountability.The Court Dimension and Boundary Lines in Basketball and Badminton.Basketball is played on a rectangular court with dimensions of 28 meters in length and 15 meters in width. The boundaries are clearly marked by painted lines, including the sidelines, endlines, and center line. The three-point arc, located 6.75 meters from the basket, further delineates the court's perimeter.In badminton, the court measures 13.4 meters long and 6.1 meters wide for singles matches and 13.4 meters long and 6.7 meters wide for doubles matches. Similar to basketball, the boundaries are marked by painted lines, consisting of the sidelines, endlines, and center line. A service line, situated 1.98 meters from the net, divides the court into the service court and the back court.The Significance of Boundaries in Basketball and Badminton.The boundary lines serve several crucial functions in both basketball and badminton. They:Define the playing area: The lines establish thelimits within which players can move and interact with the ball or shuttlecock.Govern player movement: Boundary lines regulate player positioning, preventing them from illegally entering the opponent's court or committing out-of-bounds violations.Determine scoring opportunities: The three-point arcin basketball and the service line in badminton create distinct scoring zones, influencing the value anddifficulty of shots.Maintain fairness and consistency: Uniform and well-defined boundaries ensure that all players compete on an equal footing, regardless of the venue or officiating crew.Respecting the Boundaries in Basketball and Badminton.Upholding the sanctity of the boundary lines is paramount for fair play and sportsmanship in both basketball and badminton. Players must:Stay within the boundaries: Intentionally or unintentionally crossing the lines results in violations and can lead to penalties.Avoid touching the lines: The boundaries must not be disrupted or interfered with during play.Respect the opponent's court: Players must not encroach on the opponent's side of the court without permission.Maintain situational awareness: Players should be constantly aware of their position relative to the boundary lines to avoid committing violations.Consequences of Boundary Violations in Basketball and Badminton.Violating the boundary lines in basketball and badminton typically results in the following consequences:Out-of-bounds: Players who step out of the boundaries lose possession of the ball or shuttlecock.Foot fault: In basketball, stepping over the free throw line before releasing the ball is a foul.Service fault: In badminton, serving from outside theservice court is a fault.Delay of game: Repeated boundary violations can lead to technical fouls or timeouts.Conclusion.The boundary lines in basketball and badminton are not merely geometric constructs but rather fundamental elements that shape the dynamics of each sport. They establish the boundaries of competition, govern player movement, determine scoring opportunities, and ensure fairness. Respecting the boundaries is essential for maintaining the integrity of the game and fostering a spirit of sportsmanship.中文回答:篮球羽毛球界限。
## Judo Rules Essay Template
English Answer:General Principles:Judo is a martial art and combat sport that focuses on throwing and grappling techniques.The objective is to immobilize or submit the opponent through pins, chokes, or holds within a specific time frame on a mat.Mat Area:The judo mat is a square or circular area with specified dimensions.The mat is divided into a central area, a safety area,and an out-of-bounds area.Attire:Judokas wear a white or blue uniform called a judogi, consisting of a jacket, pants, and a belt.Scoring:Points are awarded for various techniques, including throws, immobilizations, chokes, and holds.The highest score is awarded for an ippon (full point), followed by waza-ari (half point) and yuko (quarter point).Accumulating three waza-ari or two yukos results in an ippon.Penalties:Penalties can be imposed for illegal techniques, such as hitting, kicking, or striking with the fist.Multiple penalties or severe violations can lead to disqualification.Time:Matches are timed and range from 4 to 6 minutes in duration, depending on the level and competition.Competition Format:Judo competitions typically involve a single-elimination tournament format.Judokas are paired based on weight categories and compete against each other until a winner is determined.Officiating:Judo matches are supervised by a referee and two judges on the mat.The referee stops and starts the match, awards points, and enforces the rules.The judges assist the referee in scoring and monitoring the match.Ranks and Belts:Judo practitioners are ranked according to their skill level and experience.Ranks are indicated by different colored belts, ranging from white (beginner) to black (master).Etiquette:Judo emphasizes respect and etiquette.Judokas bow to the mat, opponents, and the referee before and after matches.## 中文回答:柔道规则作文模板。
我的家规和校规英语作文初一,80字左右全文共6篇示例,供读者参考篇1My Home Rules and School RulesHi there! My name is Jamie and I'm a student in 6th grade. Today, I want to tell you all about the rules I have to follow at home and at school. Rules might seem like a drag, but they actually help keep things running smoothly and keep everyone safe and happy!At Home RulesAt my house, we have some pretty basic rules that aren't too hard to follow. The first big rule is that I have to keep my room clean and make my bed every morning. My mom says a messy room is a messy mind, so I try my best to keep my toys and clothes picked up. I'll admit, sometimes I let it get a little out of hand, but I always clean it up before bedtime.Another home rule is that I have a set bedtime of 8:30pm on school nights. My parents are really strict about that one because they know I need lots of sleep to have energy for school the nextday. I'm not allowed any screens like TV, video games or my tablet for an hour before bedtime either so my brain can start winding down.Dinner is another serious rule time at my house. We all have to sit together at the table with no phones, TVs or distractions and actually talk to each other about our days. Sometimes it's hard to sit still, but I know family time is important. We also have to try everything on our plates, even if we don't like a food at first.The last big home rule is that I have a set chore list. Every week, I'm responsible for taking out the recycling, feeding the dog, and cleaning my bathroom. If I slack off, I don't get my allowance! Chores may be boring, but I feel good after doing them.School RulesNow, let me tell you about all the rules I have to follow at school. The morning starts with the dress code rules. We have to wear a uniform polo shirt with the school logo, navy blue pants or skirts, and black or white shoes. No ripped jeans, graphic tees or flashy sneakers allowed! I think uniforms keep us all looking neat and professional.Once I'm inside the building, there are tons of rules about how to behave in the hallways and classrooms. We have to walk in straight, quiet lines when changing classes and get to our desks before the tardy bell. We also have to raise our hands to speak in class and ask permission to use the bathroom or get a drink of water. These rules just help keep us all focused.At lunch and recess, there's a whole other set of playground rules about taking turns, including others, using equipment properly and staying within bounds. We're also not allowed to have any food besides fruits and veggies outside to avoid bugs and rodents. I think these rules promote good sportsmanship and keep our outdoor spaces clean.Dismissal time has its own protocols too. We have to wait patiently in lines until called to the pickup area. If we're walking or biking home, we have to use designated routes and crosswalks and avoid any horseplay to stay safe. Bus riders have a whole separate set of bus safety rules they must follow. All these rules prevents chaos!Rules, Rules Everywhere!Phew, writing all those down makes me realize there sure are a lot of rules I have to follow at home and at school! They might seem restrictive, but I can see how they create structure and keepme on the right track. Even though I might not love every single one, I do my best to respect the reasons behind the rules. After all, the people making the rules just want what's best for me. With enough practice, following rules becomes a healthy habit for life. 
Now I'll get off this computer before I get a screen time violation! Thanks for reading, rule-followers!篇2My Family Rules and School RulesHey there! Lemme tell you about the rules I gotta follow at home and at school. It's kinda like having two different worlds, ya know? But the rules are pretty important in both places.At home, my parents have set up some family rules that we all gotta obey. Rule number one is always to be respectful and kind to each other. No fighting or name-calling allowed! We're a family, and we should love and support one another, not bring each other down.Another biggie is to do our chores without complainin'. I know, I know, chores can be such a drag. But my parents say it's important to pitch in and help out around the house. Plus, if we all do our part, the work gets done faster, and we can have more fun time afterwards!Oh, and we're not allowed to have too much screen time either. My parents want us to balance our time between studying, playing outside, and using our phones or tablets. I gotta admit, sometimes I get a little too hooked on my games, so it's probably a good rule to have.Now, let's talk about school rules. These are a whole different ballgame! At school, we have to follow the rules set by our teachers and the principal. One of the most important rules is to be on time and ready to learn. That means getting to class before the bell rings and having all our supplies and homework ready to go.We also have to raise our hands and wait to be called on before speaking in class. It can be tough sometimes, especially when you've got a burning question or a really good joke to share. But it's all about keeping things orderly and giving everyone a chance to participate.Another big rule at school is to treat others with respect, just like at home. No bullying, name-calling, or making fun of others is allowed. We're all different, and that's what makes our school community so cool and interesting!Oh, and let's not forget about the dress code. We have to wear our uniforms or follow the school's guidelines onappropriate clothing. No pajamas or flip-flops allowed! I know, it can be a bummer sometimes, but the rules are there to help us look neat and presentable.Breaking any of these rules can lead to consequences, like getting a warning, having to stay after class, or even getting detention. Nobody wants that, right? So it's important to do our best to follow the rules, even when it's not always easy or fun.But you know what? Having rules isn't all bad. They help keep things running smoothly and make sure everyone is safe, respected, and able to learn and grow. Plus, following rules shows that we're responsible and can be trusted with more freedom and privileges.So, there you have it – my family rules and school rules in a nutshell. They might seem like a lot, but they're really not that bad once you get used to them. And who knows, maybe one day we'll be the ones making the rules! But for now, it's our job to listen, follow along, and do our best to be good kids at home and at school.Phew, that was a lot of talkin'! But I hope you got a good idea of what it's like to navigate the world of rules as a kid. Just remember, even though they might seem like a drag sometimes,rules are there to help us and keep us safe. So let's do our best to follow 'em, and maybe even have a little fun while we're at it!篇3My Home Rules and School RulesAt home, my parents have some rules I have to follow. I can't watch too much TV or play video games for too long. 
I have to do my homework before anything fun. And I need to keep my room clean and make my bed every day. Seems fair enough!At school, there are a bunch of rules too. We have to be on time, wear the right uniform, and be ready to learn. No fighting, no bullying, and no cheating on tests. We have to respect teachers and be kind to classmates.Following rules at home and school isn't always easy, but I know the rules are there to help me. Home rules teach me discipline and responsibility. School rules keep everything running smoothly so we can all learn.I don't always love the rules, but I understand why we need them. If there were no rules at home, my parents' house would be crazy! My brother and I would fight over the TV all day andnever do chores. We'd eat nothing but candy and stay up all night. What a mess that would be!No rules at school would be even worse. Everyone would show up late in whatever clothes they want. Students would goof off and disrupt class constantly. Bullying would be everywhere with no consequences. Tests would be meaningless since everyone cheats. School would be total chaos!So while rules can feel restrictive sometimes, they actually give structure and keep things under control. Home rules help create a peaceful family life. School rules allow for a positive learning environment. If we had zero rules, our homes and schools would descend into anarchic madness!Of course, not all rules are good. Some are unfair or unnecessary. But in general, rules serve an important purpose. They set boundaries and expectations. They promote values like respect, responsibility and self-discipline. Rules aren't in place to ruin our fun - they're in place to protect us and help us develop into healthy, productive people.I may not love every single rule, but I know rules are necessary for society to function. As a kid, following rules at home and school is helping me build good habits for the future. Maybe rules feel confining now, but one day I'll be an adultmaking rules of my own. And you can bet I'll make sure they're reasonable rules that people actually want to follow!For now though, I'll just focus on the key rules - doing homework, helping around the house, being kind, and staying out of trouble. Those core rules are simple enough. If I can master following just the basics, everything else should fall into place. Then mom and dad - and my teachers - will be proud of me.So in the end, rules are okay by me. They give my life some helpful structure and steer me towards making good choices. With rules in place at home and school, I'm learning how to be a respectful, well-rounded person who contributes positively to my community. Maybe rules seem lame now, but I know they're preparing me for success. Listen to your parents and teachers, kids - the rules are there to help!篇4My Family Rules and School RulesBy A Middle SchoolerI have a bunch of rules I gotta follow at home and at school. At home, my parents have mad rules I can't break or I'll get inhuge trouble! I can't watch too much TV, gotta do my chores without being asked, and have to be nice to my little brother (even when he's annoying). We have family dinner every night too with no phones allowed. Ugh, my parents are so strict!At school, the rules seem never-ending. We have to walk in the hallways in straight lines and cant make any noise. If we talk in class or pass notes, we get a demerit. Detention is the worst - you have to miss recess and sit in a room in complete silence. 
U

1. UBPR
UBPR stands for Uniform Bank Performance Report. The UBPR is an analytical tool that is available at no charge through the Federal Financial Institutions Examination Council (FFIEC) at their website. The UBPR is created for bank supervisory, examination, and bank management purposes. The report is produced for each commercial bank in the US that is supervised by the Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, or the Office of the Comptroller of the Currency. UBPRs are also produced for FDIC-insured savings banks. This computer-generated report is drawn from a database derived from public and nonpublic sources.

2. Unbundling
[See also Bundling]

3. Uncommitted Line of Credit
An uncommitted line of credit does not have an up-front fee payment, and so the bank is not obliged to lend the firm money. If the bank chooses to lend under the terms of the line of credit, it may do so, but it also may choose not to lend. [See also Revolving credit agreement]

4. Underlying Asset
The asset whose price determines the profitability of a derivative. For example, the underlying asset for a purchased call is the asset that the call owner can buy by paying the strike price.

5. Underlying Variable
A variable on which the price of an option or other derivative depends. [See also Black-Scholes option pricing model]

6. Underpricing
Underpricing is the difference between the aftermarket stock price and the offering price. It represents money left on the table, that is, money the firm could have received had the offer price better approximated the aftermarket value of the stock.

7. Underwrite
To purchase securities from the initial issuer and distribute them to investors. [See also Underwriter]

8. Underwriter
The investment bank carries, or underwrites, the risk of fluctuating stock prices; thus an investment bank is sometimes called an underwriter. Should the market's perception of the issuer change, or a macroeconomic event result in a stock market decline, the investment bank carries the risk of loss, or at least the possibility of a smaller-than-expected spread.

9. Underwriting, Underwriting Syndicate
Underwriters (investment bankers) purchase securities from the issuing company and resell them. Usually a syndicate of investment bankers is organized behind a lead firm. [See also Syndicates]

10. Undivided Profits
Retained earnings or cumulative net income not paid out as dividends. They can be used as an internal source of funds.

11. Unearned Interest
Interest received prior to completion of the underlying contract.

12. Unemployment Rate
The ratio of the number of people classified as unemployed to the total labor force. The unemployment rate is used to judge whether a country's economy is in a boom, in a recession, or in a normal state.

13. Unexpected Losses
A popular term for the volatility of losses, but also used when referring to the realization of a large loss which, in retrospect, was unexpected.

14. Unfunded Debt
Short-term debt, such as accounts payable. The cost of capital of unfunded debt is the risk-free interest rate.

15. Uniform Limited Offering Registration
Several states offer programs to ease the process of public equity financing for firms within their borders. A firm in a state that has enacted a ULOR (Uniform Limited Offering Registration) law can raise $1 million by publicly selling shares worth at least $5. This law creates a fairly standardized, fill-in-the-blank registration document to reduce a firm's time and cost in preparing an offering.

16. Unique Risk
[See Diversifiable risk]

17. Unit Banking States
States that prohibit branch banking are called unit banking states. Since 1994, most states have become branch banking states.

18. Unit Benefit Formula
Method used to determine a participant's retirement benefit by multiplying years of service by a percentage of salary.

19. Unit Investment Trust
Money invested in a portfolio whose composition is fixed for the life of the fund. Shares in a unit trust are called redeemable trust certificates, and they are sold at a premium above net asset value.

20. Unit of Production Method
The unit of production method is one of the accelerated depreciation methods. This method determines depreciation according to the total production hours expected of the machine and the hours it operates each year. If we assume that a machine is purchased for $6,000 and has a salvage value of $600, then the expected useful life of 5,000 hours is divided into the depreciable cost (cost minus salvage value) to obtain an hourly depreciation rate of $1.08. If we assume the machine is used 2,000 hours the first year, 1,000 hours the second year, 900 hours the third year, 700 hours the fourth year, and 400 hours the fifth year, then the annual depreciation is determined as follows:
Year 1: $1.08 × 2,000 = $2,160
Year 2: $1.08 × 1,000 = $1,080
Year 3: $1.08 × 900 = $972
Year 4: $1.08 × 700 = $756
Year 5: $1.08 × 400 = $432
Total depreciation: $5,400.

21. Unit Volume Variability
Variability in the quantity of output sold can lead to variability in EBIT through variations in sales revenue and total variable costs such as raw material costs and labor costs. The net effect of fluctuating volume leads to fluctuations in EBIT and contributes to business risk. [See also Business risk]

22. Universal Financial Institution
A financial institution (FI) that can engage in a broad range of financial service activities. The financial system in the US has traditionally been structured along separatist or segmented product lines. Regulatory barriers and restrictions have often inhibited the ability of an FI operating in one area of the financial services industry to expand its product set into other areas. This might be compared with FIs operating in Germany, Switzerland, and the UK, where a more universal FI structure allows individual financial services organizations to offer a far broader range of banking, insurance, securities, and other financial services products. However, the recent merger between Citicorp and Travelers to create Citigroup, the largest universal bank or financial conglomerate in the world, was a sign that the importance of regulatory barriers in the US is receding. Moreover, the passage of the Financial Services Modernization Act of 1999 has accelerated the reduction of the barriers among financial services firms. Indeed, as consolidation in the US and global financial services industry proceeds apace, we are likely to see an acceleration in the creation of very large, globally oriented, multi-product financial service firms.

23. Universal Life Policy
An insurance policy that allows for a varying death benefit and premium level over the term of the policy, with an interest rate on the cash value that changes with market interest rates. Universal life was introduced in 1979. It combines the death protection features of term insurance with the opportunity to earn market rates of return on excess premiums. Unlike variable life, with its level premium structure, premiums on universal life policies can be changed. The policyholder can pay as high a "premium" as desired, instructing the insurer to invest the excess over that required for death protection in the insurer's choice of investments. Later, if the policyholder wishes to pay no premium at all, the insurer can deduct the cost of providing death protection for the year from the cash value accumulated in previous years. With other types of policies, skipping a premium would cause the policy to lapse. Unlike whole or variable life policies, the face amount of guaranteed death protection in a universal life policy can be changed at the policyholder's option. Also, unlike variable life, the cash value has a minimum guaranteed rate of return.

24. Unseasoned New Issue
An initial public offering (IPO); the first public equity issue made by a company. [See also Initial public offering and Going public]

25. Unsystematic Risk
A well-diversified portfolio can reduce the effects of firm-specific or industry-specific events (such as strikes, technological advances, and entry and exit of competitors) to almost zero. Risk that can be diversified away is known as unsystematic risk or diversifiable risk. Information that has negative implications for one firm may contain good news for another firm. In a well-diversified portfolio of firms from different industries, the effects of good news for one firm may effectively cancel out bad news for another firm, so the overall impact of such news on the portfolio's returns should approach zero.

26. Up-and-In
A knock-in option for which the barrier exceeds the current price of the underlying asset. [See also Knock-in option]

27. Up-and-Out
A knock-out option for which the barrier exceeds the current price of the underlying asset. [See also Knock-out option]

28. Up-and-Out Option
An option that ceases to exist when the price of the underlying asset increases to a prespecified level.

29. Uptick
A trade resulting in a positive change in a stock price, or a trade at a constant price following a preceding price increase.

30. Usury Ceilings
Usury refers to interest charges in excess of those legally allowed for a specific instrument. Besides disclosure and bankruptcy laws, some states restrict the rate of interest that may be charged on certain categories of loans, primarily consumer loans, but also some agricultural and small business loans. Usury laws establish rate ceilings that a lender may not exceed, regardless of the lender's costs. Usury ceilings apply to lenders of all types, not just to depository institutions.

31. Utility Function
Utility is the measure of the welfare or satisfaction of an investor. The utility function can be written U = f(R, σ²), where R is the average rate of return and σ² is the variance of the rate of return.

32. Utility Theory
Utility theory is the foundation of the theory of choice under uncertainty. Following Henderson and Quandt (1980), cardinal and ordinal theories are the two major alternatives used by economists to determine how people and societies choose to allocate scarce resources and to distribute wealth among one another over time. [See also Cardinal utility and Ordinal utility]

33. Utility Value
The welfare a given investor assigns to an investment with a particular return and risk. [See also Utility function]
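The schedule in entry 20 is simple enough to recompute mechanically. The following short Python sketch is ours, not part of the encyclopedia; the function name and structure are illustrative. It reproduces the hourly rate and the year-by-year charges for the $6,000 machine.

```python
def units_of_production_schedule(cost, salvage, life_hours, hours_per_year):
    """Depreciation under the units-of-production method.

    The depreciable base (cost - salvage) is spread over the asset's
    expected life in hours, and each year is charged for the hours used.
    """
    hourly_rate = (cost - salvage) / life_hours
    return hourly_rate, [hourly_rate * h for h in hours_per_year]

# Entry 20's example: $6,000 cost, $600 salvage, 5,000-hour expected life.
rate, schedule = units_of_production_schedule(6000, 600, 5000,
                                              [2000, 1000, 900, 700, 400])
print(f"hourly rate = ${rate:.2f}")            # $1.08
for year, dep in enumerate(schedule, start=1):
    print(f"Year {year}: ${dep:,.2f}")         # 2,160  1,080  972  756  432
print(f"total = ${sum(schedule):,.2f}")        # $5,400
```

Because the example uses the machine for its full 5,000-hour life, the total charge equals the entire depreciable base of $5,400.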
Central Limit Theorems and Moderate Deviations for Nonhomogeneous Markov Chains
Central Limit Theorems and Moderate Deviations for Nonhomogeneous Markov Chains
Xu Mingzhou; Ding Yunzheng; Zhou Yongzheng

Abstract: We study the central limit theorem for countable nonhomogeneous Markov chains whose transition probability matrices converge uniformly in the Cesàro sense. Using exponential equivalence and the Gärtner-Ellis theorem, we obtain the corresponding moderate deviation results.
Journal: Journal of Mathematics (数学杂志), 2019, 39(1), pages 137-146 (10 pages).
Keywords: central limit theorem; moderate deviations; nonhomogeneous Markov chains; martingale.
Affiliation (all authors): School of Information Engineering, Jingdezhen Ceramic University, Jingdezhen, Jiangxi 333403, China.
Language of the original text: Chinese. Chinese Library Classification: O211.6.

1 Introduction

Huang et al. [1] proved a central limit theorem for nonhomogeneous Markov chains with finite state space. Gao [2] obtained moderate deviation principles for homogeneous Markov chains, de Acosta [3] studied lower bounds for moderate deviations of empirical measures of homogeneous Markov chains, and de Acosta and Chen [4] established the corresponding upper bounds. It is natural and important to study the central limit theorem and moderate deviations for countable nonhomogeneous Markov chains. We investigate a central limit theorem and a moderate deviation principle for countable nonhomogeneous Markov chains under the condition that their transition probability matrices converge uniformly in the Cesàro sense.

Suppose that {X_n, n ≥ 0} is a nonhomogeneous Markov chain taking values in S = {1, 2, ···} with initial probability distribution (1.1) and transition matrices (1.2), where p_n(i, j) = P(X_n = j | X_{n−1} = i). Write P^(m,n) = P_{m+1}P_{m+2}···P_n; when the Markov chain is homogeneous, P and P^k denote the one-step and k-step transition matrices, respectively. If P is a stochastic matrix, we write δ(P) for its δ-coefficient, where [a]_+ = max{0, a}. Let A = (a_{ij}) be a matrix defined on S × S and write ‖A‖ = sup_i Σ_j |a_{ij}|; the norms ‖h‖ of a row vector h = (h_1, h_2, ···) and |g| of a column vector g = (g_1, g_2, ···) ≥ 0 are defined analogously. The following properties hold (see Yang [5, 6]):
(a) ‖AB‖ ≤ ‖A‖ ‖B‖ for all matrices A and B;
(b) ‖P‖ = 1 for every stochastic matrix P.

Suppose that R is a "constant" stochastic matrix, each row of which is the same. Then {P_n, n ≥ 1} is said to be strongly ergodic (with constant stochastic matrix R) if, for all m ≥ 0, the products P^(m,m+k) converge to R in norm as k → ∞. The sequence {P_n, n ≥ 1} is said to converge in the Cesàro sense (to a constant stochastic matrix R) if, for every m ≥ 0, the Cesàro averages of the P^(m,m+k) converge to R, and to converge uniformly in the Cesàro sense if this convergence is uniform in m ≥ 0.

S is divided into d disjoint subspaces C_0, C_1, ···, C_{d−1} by an irreducible stochastic matrix P of period d (d ≥ 1) (see Theorem 3.3 of Hu [7]), and P^d gives d stochastic matrices {T_l, 0 ≤ l ≤ d−1}, where T_l is defined on C_l. As in Bowerman et al. [8] and Yang [5], we shall discuss an irreducible stochastic matrix P of period d such that T_l is strongly ergodic for l = 0, 1, ···, d−1. Such a matrix will be called periodic strongly ergodic.

Remark 1.1 If S = {1, 2, ···}, d = 2, and P = (p(i, j)) with p(1, 2) = 1 and the transition probabilities p(k, k−1) prescribed for k ≥ 2, then P is an irreducible stochastic matrix of period 2, and the solution π of πP = π can be written down explicitly.

Theorem 1.1 Suppose {X_n, n ≥ 0} is a countable nonhomogeneous Markov chain taking values in S = {1, 2, ···} with initial distribution (1.1) and transition matrices (1.2). Assume that f is a real function satisfying |f(x)| ≤ M for all x ∈ R. Suppose that P is a periodic strongly ergodic stochastic matrix, and that R is a constant stochastic matrix each row of which is the left eigenvector π = (π(1), π(2), ···) of P satisfying πP = π and Σ_i π(i) = 1. Assume that conditions (1.4) and (1.5) hold. Moreover, if the sequence of δ-coefficients satisfies (1.6), then the central limit theorem (1.7) holds, where the convergence is convergence in distribution.

Theorem 1.2 Under the hypotheses of Theorem 1.1, if moreover (1.8) holds, then the moderate deviation lower bound holds for each open set G ⊂ R^1 and the corresponding upper bound holds for each closed set F ⊂ R^1, with the associated rate function.

In Sections 2 and 3 we prove Theorems 1.1 and 1.2. The ideas of the proof of Theorem 1.1 come from Huang et al. [1] and Yang [5].

2 Proof of Theorem 1.1

Let W_n be defined by (2.2) and write F_n = σ(X_k, 0 ≤ k ≤ n). Then {W_n, F_n, n ≥ 1} is a martingale, so that {D_n, F_n, n ≥ 0} is the related martingale difference sequence. As in Huang et al. [1], to prove Theorem 1.1 we first state a central limit theorem for the sequence {W_n}_{n≥1}, which is the key step in establishing Theorem 1.1.

Lemma 2.1 Assume {X_n, n ≥ 0} is a countable nonhomogeneous Markov chain taking values in S = {1, 2, ···} with initial distribution (1.1) and transition matrices (1.2). Suppose f is a real function satisfying |f(x)| ≤ M for all x ∈ R. Assume that P is a periodic strongly ergodic stochastic matrix, and R is a constant stochastic matrix each row of which is the left eigenvector π = (π(1), π(2), ···) of P satisfying πP = π and Σ_i π(i) = 1. Suppose that (1.4) and (1.5) are satisfied, and {W_n, n ≥ 0} is defined by (2.2). Then the central limit theorem (2.3) holds, where the convergence is convergence in distribution.

As in Huang et al. [1], to establish Lemma 2.1 we need two auxiliary results, Lemma 2.2 (see Brown [9]) and Lemma 2.3 (see Yang [6]).

Lemma 2.2 Assume that (Ω, F, P) is a probability space and {F_n, n = 1, 2, ···} is an increasing sequence of σ-algebras. Suppose that {M_n, F_n, n = 1, 2, ···} is a martingale, and denote its related martingale differences by ξ_0 = 0, ξ_n = M_n − M_{n−1} (n = 1, 2, ···). For n = 1, 2, ···, define the conditional variances with respect to F_{n−1}, where F_0 is the trivial σ-algebra. Assume that the following hold: (i) the normalized conditional variances converge in probability; (ii) the Lindeberg condition holds, i.e., for any ε > 0 the Lindeberg sums tend to zero, where I(·) denotes the indicator function. Then the conclusion of the martingale central limit theorem holds, where the arrows denote convergence in probability and convergence in distribution, respectively.

Write δ_i(j) = δ_{ij} for i, j ∈ S.

Lemma 2.3 Assume that {X_n, n ≥ 0} is a countable nonhomogeneous Markov chain taking values in S = {1, 2, ···} with initial distribution (1.1) and transition matrices (1.2). Suppose that P is a periodic strongly ergodic stochastic matrix, and R is a matrix each row of which is the left eigenvector π = (π(1), π(2), ···) of P satisfying πP = π and Σ_i π(i) = 1. Assume (1.4) holds. Then the limit relation used in the proof of Lemma 2.1 below holds.

Now let us establish Lemma 2.1.

Proof of Lemma 2.1 Properties of the conditional expectation and of Markov chains yield an expression for the conditional variances. We first use (1.4) and Fubini's theorem; hence it follows from (2.10) and πP = π that the first limit holds. We next claim a second limit. Indeed, we use (1.4) and (2.9), and then Lemma 2.3 again; therefore (2.12) is established. Combining (2.11) and (2.12) gives the convergence of the conditional variances normalized by n. Since {V(W_n)/n, n ≥ 1} is uniformly bounded, {V(W_n)/n, n ≥ 1} is uniformly integrable. By applying the above two facts and (1.5), we obtain condition (i) of Lemma 2.2. Also note that the relevant sequence is uniformly integrable; thus the Lindeberg condition holds. Application of Lemma 2.2 yields (2.3). This establishes Lemma 2.1.

Proof of Theorem 1.1 Note the decomposition of the partial sums of f(X_k) into W_n and a remainder term. Let us bound |E[f(X_k) | X_{k−1}] − E[f(X_k)]|. In fact, we use the Chapman-Kolmogorov formula for Markov chains to obtain an estimate for this quantity, and an application of (1.6) bounds it further. Combining (1.6), (2.3), (2.16), and (2.17) results in (1.7). This proves Theorem 1.1.

3 Proof of Theorem 1.2

We use the Gärtner-Ellis theorem and exponential equivalence to prove Theorem 1.2. By applying Taylor's formula for e^x, (1.5), (1.8), (2.15), Fubini's theorem, and properties of conditional expectations and martingales, we claim that the required limit of the scaled cumulant generating functions holds for any t ∈ R^1. In fact, the claim follows from (1.8). Hence, by the Gärtner-Ellis theorem, we deduce that W_n/a(n) satisfies the moderate deviation principle with the corresponding rate function. It follows from (1.8) and (2.17) that, for all ε > 0, the two normalized sums are exponentially equivalent. Thus, by the exponential equivalence method (see Theorem 4.2.13 of Dembo and Zeitouni [10] and Gao [11]), the normalized sum in Theorem 1.2 satisfies the same moderate deviation principle, with the same rate function. This completes the proof.
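The kind of statement made in Theorem 1.1 can be sanity-checked numerically. The sketch below is not from the paper: it uses a hypothetical two-state chain whose transition matrices P_n converge to a fixed ergodic matrix P (and hence converge uniformly in the Cesàro sense), together with a bounded test function f, and compares the standardized sums of f(X_k) with standard normal quantiles.

```python
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.3, 0.7],
              [0.6, 0.4]])      # fixed ergodic "limit" matrix on S = {0, 1}
f = np.array([1.0, -1.0])       # a bounded test function

def P_n(n):
    """Hypothetical transition matrices that converge to P as n grows, so in
    particular they converge to P uniformly in the Cesaro sense."""
    eps = 0.1 / (n + 10.0)
    return P + eps * np.array([[1.0, -1.0], [-1.0, 1.0]])

def path_sum(N):
    """Return sum_{k=1}^N f(X_k) for one trajectory of the chain."""
    x, total = 0, 0.0
    for n in range(1, N + 1):
        x = 1 if rng.random() < P_n(n)[x, 1] else 0
        total += f[x]
    return total

N, paths = 2000, 800
sums = np.array([path_sum(N) for _ in range(paths)])
standardized = (sums - sums.mean()) / sums.std()

# If a CLT of the kind in Theorem 1.1 holds, the standardized sums should be
# approximately standard normal; compare a few empirical quantiles with the
# Gaussian ones (the match is only as good as this small simulation allows).
for q, z in [(0.10, -1.2816), (0.50, 0.0), (0.90, 1.2816)]:
    print(f"quantile {q:.2f}: empirical {np.quantile(standardized, q):+.3f}  normal {z:+.3f}")
```

The perturbation of P decays like 1/n, which is one simple way to satisfy the Cesàro-type convergence hypothesis; any other decaying perturbation that keeps the rows valid probability vectors would do.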
References

[1] Huang H L, Yang W G, Shi Z Y. The central limit theorem for nonhomogeneous Markov chains[J]. Chinese J. Appl. Prob. Stat., 2013, 29(4): 337-347.
[2] Gao F Q. Moderately large deviations for uniformly ergodic Markov processes, research announcements[J]. Adv. Math. (China), 1992, 21(3): 364-365.
[3] de Acosta A. Moderate deviations for empirical measures of Markov chains: lower bounds[J]. Ann. Prob., 1997, 25(1): 259-284.
[4] de Acosta A, Chen X. Moderate deviations for empirical measures of Markov chains: upper bounds[J]. J. Theor. Prob., 1998, 11(4): 1075-1110.
[5] Yang W G. Convergence in the Cesàro sense and strong law of large numbers for nonhomogeneous Markov chains[J]. Linear Alg. Appl., 2002, 354(1): 275-286.
[6] Yang W G. Strong law of large numbers for nonhomogeneous Markov chains[J]. Linear Alg. Appl., 2009, 430(11-12): 3008-3018.
[7] Hu D H. Countable Markov Process Theory[M]. Wuhan: Wuhan University Press (in Chinese), 1983.
[8] Bowerman B, David H T, Isaacson D. The convergence of Cesàro averages for certain nonstationary Markov chains[J]. Stoch. Proc. Appl., 1977, 5(1): 221-230.
[9] Brown B M. Martingale central limit theorems[J]. Ann. Math. Statist., 1971, 42(1): 59-66.
[10] Dembo A, Zeitouni O. Large Deviations Techniques and Applications[M]. New York: Springer, 1998.
[11] Gao F Q. Moderate deviations for a nonparametric estimator for sample coverage[J]. Ann. Prob., 2013, 41(2): 641-669.
[12] Zhang H Z, Hao R L, Ye Z X, Yang W G. Some strong limit properties for countable nonhomogeneous Markov chains (in Chinese)[J]. Chinese J. Appl. Prob. Stat., 2016, 32(1): 62-68.
UNIFORM BOUNDS FOR THE LEAST ALMOST-PRIME PRIMITIVE ROOT

GREG MARTIN

arXiv:math/9807105v1 [math.NT] 20 Jul 1998

1. Introduction

A recurring theme in number theory is that multiplicative and additive properties of integers are more or less independent of each other, the classical result in this vein being Dirichlet's theorem on primes in arithmetic progressions. Since the set of primitive roots to a given modulus is a union of arithmetic progressions, it is natural to study the distribution of prime primitive roots. Results concerning upper bounds for the least prime primitive root to a given modulus q, which we denote by g^*(q), have hitherto been of three types. There are conditional bounds: assuming the Generalized Riemann Hypothesis, Shoup [11] has shown that g^*(q) ≪ ω(φ(q)) log 2ω(φ(q)) (log q)^2, where ω(n) denotes the number of distinct prime factors of n. There are also upper bounds that hold for almost all moduli q. For instance, one can show [9] that for all but O(Y^ε) primes up to Y, we have g^*(p) ≪ (log p)^{C(ε)} for some positive constant C(ε). Finally, one can unimaginatively apply a uniform upper bound for the least prime in a single arithmetic progression. The best uniform result of this type, due to Heath-Brown [7], implies that g^*(q) ≪ q^{5.5}. However, there is not at present any stronger unconditional upper bound for g^*(q) that holds uniformly for all moduli q. The purpose of this paper is to provide such an upper bound, at least for primitive roots that are "almost prime".

The methods herein will actually apply for any modulus q, not just those q whose group Z_q^× of reduced residue classes is cyclic (which occurs exactly when q = 2, 4, an odd prime power, or twice an odd prime power). We say that an integer n, coprime to q, is a λ-root (mod q) if it has maximal order in Z_q^×. We see that λ-roots are generalizations of primitive roots, and we extend the notation g^*(q) to represent the least prime λ-root (mod q) for any integer q ≥ 2. We also recall that a P_k integer is one that has at most k prime factors, counted with multiplicity. For any integer k ≥ 1, we let g_k^*(q) denote the least P_k λ-root (mod q) (so that g_1^*(q) = g^*(q)). We may now state our main theorem.

Theorem 1. For all integers q, r ≥ 2 and all ε > 0, we have

δ_3 = 0.074267,    δ_4 = 0.103974,

g_2^*(q) ≪ p^{1/2 + 1/873};    g_3^*(q) ≪ p^{3/8 + 1/207};    g_4^*(q) ≪ p^{1/3 + 1/334};    g_r^*(q) ≪_r p^{1/4 + O(1/r)}.

(The exponents here are simply approximations to the corresponding exponents in Theorem 1.) By comparison, from the work of Mikawa on small P_2's in almost all arithmetic progressions [10], one can easily derive that

φ(q), φ(φ(q))

which is majorized by the above theorems. We remark that the λ-roots we find to establish Theorems 1 and 2 are squarefree and have no small prime factors ("small" here meaning up to some fixed power of q_c). The proof of Theorem 1 uses the weighted linear sieve, specifically results due to Greaves [2], [3] (see equation (18) below). We note that, conjecturally, there is some choice of weight function W in the weighted linear sieve which would allow us to take δ_r arbitrarily small in Theorem 1; this would also allow us to replace the first three exponents in Theorem 2 by 1/2 + ε, 3/8 + ε, and 1/3 + ε respectively. We also note that if the generalized Lindelöf hypothesis for the L-functions corresponding to certain characters (the ones in the subgroup G defined in Lemma 4 below) were true, we could employ much stronger character sum estimates than Lemma 7 below, allowing us to improve Theorem 1 to g_2^*(q) ≪_ε q_c^ε.

We would of course like to be able to show the existence of small prime primitive roots rather than P_2 primitive roots. In his work on the analogous problem of finding primes in arithmetic progressions, Heath-Brown [6] first treats the case where the L-function corresponding to a real Dirichlet character has a real zero very close to s = 1. Although it is certainly believed that these "Siegel zeros" do not exist, disposing of this case allowed Heath-Brown in [7] to work with a better zero-free region for Dirichlet L-functions than is known unconditionally. Similarly, if we assume the existence of a sufficiently extreme Siegel zero, we can show the existence of small prime primitive roots, as the following theorem asserts.

Theorem 3. Let ε > 0, let q be an odd prime power or twice an odd prime power, and let χ_1 denote the nonprincipal quadratic Dirichlet character (mod q). Suppose that L(s, χ_1) has
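To make the objects defined in the introduction concrete, here is a small brute-force Python sketch. It is ours, purely illustrative, unrelated to the sieve machinery of the paper, and only feasible for small moduli; it computes the least prime λ-root g^*(q) and the least P_2 λ-root g_2^*(q) directly from the definitions.

```python
from math import gcd

def mult_order(n, q):
    """Multiplicative order of n modulo q (assumes gcd(n, q) == 1)."""
    k, x = 1, n % q
    while x != 1:
        x = (x * n) % q
        k += 1
    return k

def big_omega(n):
    """Number of prime factors of n counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

def max_order(q):
    """Maximal multiplicative order modulo q, found by brute force."""
    return max(mult_order(n, q) for n in range(1, q) if gcd(n, q) == 1)

def least_Pk_lambda_root(q, k):
    """g_k^*(q): the least P_k integer that is a lambda-root mod q."""
    lam = max_order(q)
    n = 2
    while True:
        if gcd(n, q) == 1 and big_omega(n) <= k and mult_order(n, q) == lam:
            return n
        n += 1

q = 41                               # a small prime modulus, chosen arbitrarily
print(least_Pk_lambda_root(q, 1))    # least prime lambda-root g^*(q): 7
print(least_Pk_lambda_root(q, 2))    # least P_2 lambda-root g_2^*(q): 6
```

For q = 41 the sketch prints 7 and 6: the primes 2, 3, and 5 all fail to have maximal order modulo 41, while 6 = 2·3 is a P_2 primitive root, so the least P_2 λ-root is smaller than the least prime λ-root.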