A no-go theorem for string warped compactifications

合集下载

gcc 常见的编译警告与错误(按字母顺序排列)

gcc 常见的编译警告与错误(按字母顺序排列)

gcc 常见的编译警告与错误(按字母顺序排列)C语言初学者遇到的最大问题往往是看不懂编译错误,进而不知如何修改程序。

有鉴于此,本附录罗列了用gcc编译程序时经常出现的编译警告与错误。

需要提醒读者的是,出现警告(warning)并不影响目标程序的生成,但出现错误(error)则无法生成目标程序。

为便于读者查阅,下面列出了经常遇到的警告与错误,给出了中英文对照(英文按字典顺序排列),并对部分错误与警告做了必要的解释。

#%s expects \FILENAME\ or …#%s 需要\FILENAME\ 或…#%s is a deprecated GCC extension#%s 是一个已过时的GCC 扩展#%s is a GCC extension#%s 是一个GCC 扩展#~ error:#~ 错误:#~ In file included from %s:%u#~ 在包含自%s:%u 的文件中#~ internal error:#~ 内部错误:#~ no newline at end of file#~ 文件未以空白行结束#~ warning:#~ 警告:#elif after #else#elif 出现在#else 后#elif without #if#elif 没有匹配的#if#else after #else#else 出现在#else 后#else without #if#else 没有匹配的#if#endif without #if#endif 没有匹配的#if#include nested too deeply#include 嵌套过深#include_next in primary source file#include_next 出现在主源文件中#pragma %s %s is already registered#pragma %s %s 已经被注册#pragma %s is already registered#pragma %s 已经被注册#pragma once in main file#pragma once 出现在主文件中#pragma system_header ignored outside include file#pragma system_heade 在包含文件外被忽略%.*s is not a valid universal character%.*s 不是一个有效的Unicode 字符%s in preprocessing directive预处理指示中出现%s%s is a block device%s 是一个块设备%s is shorter than expected%s 短于预期%s is too large%s 过大%s with no expression%s 后没有表达式%s: not used because `%.*s’ defined as `%s’ not `%.*s’%s:未使用因为‘%.*s’被定义为‘%s’而非‘%*.s’%s: not used because `%.*s’ is poisoned%s:未使用因为‘%.*s’已被投毒%s: not used because `%.*s’ not def ined%s:未使用因为‘%.*s’未定义%s: not used because `%s’ is defined%s:未使用因为‘%s’已定义%s: not used because `__COUNTER__’ is invalid%s:未使用因为‘__COUNTER__’无效(\%s\ is an alternative token for \%s\ in C++)(在C++ 中“%s”会是“%s”的替代标识符)(this will be reported only once per input file)(此警告为每个输入文件只报告一次)\%s\ after # is not a positive integer# 后的“%s”不是一个正整数\%s\ after #line is not a positive integer#line 后的“%s”不是一个正整数\%s\ cannot be used as a macro name as it is an operator in C++ “%s”不能被用作宏名,因为它是C++ 中的一个操作符\%s\ is not a valid filename“%s”不是一个有效的文件名\%s\ is not defined“%s”未定义\%s\ may not appear in macro parameter list“%s不能出现在宏参数列表中\%s\ re-asserted重断言“%s”\%s\ redefined“%s重定义\/*\ within comment“/*出现在注释中\\x used with no following hex digits\\x 后没有16 进制数字\defined\ cannot be used as a macro name“defined不能被用作宏名__COUNTER__ expanded inside directive with -fdirectives-only带-fdirectives-only 时__COUNTER__ 在指示中扩展__V A_ARGS__ can only appear in the expansion of a C99 variadic macro __V A_ARGS__ 只能出现在C99 可变参数宏的展开中_Pragma takes a parenthesized string literal_Pragma 需要一个括起的字符串字面常量‘%.*s’ is not in NFC‘%.*s’不在NFC 中‘%.*s’ is not in NFKC‘%.*s’不在NFKC 中‘##’ cannot appear at either end of a macro expansion‘##’不能出现在宏展开的两端‘#’ is not followed by a macro parameter‘#’后没有宏参数‘$’ in identifier or number‘$’出现在标识符或数字中‘:’ without preceding ‘?’‘:’前没有‘?’‘?’ without following ‘:’‘?’后没有‘:’'return' with a value, in function returning void在void返回类型的函数中,return返回值。

ON THE COMPUTATIONAL COMPLEXITY OF ALGORITHMS

ON THE COMPUTATIONAL COMPLEXITY OF ALGORITHMS

ON THE COMPUTATIONALCOMPLEXITY OF ALGORITHMSBYJ. HARTMANIS AND R. E. STEARNSI. Introduction. In his celebrated paper [1], A. M. Turing investigated the computability of sequences (functions) by mechanical procedures and showed that the setofsequencescanbe partitioned into computable and noncomputable sequences. One finds, however, that some computable sequences are very easy to compute whereas other computable sequences seem to have an inherent complexity that makes them difficult to compute. In this paper, we investigate a scheme of classifying sequences according to how hard they are to compute. This scheme puts a rich structure on the computable sequences and a variety of theorems are established. Furthermore, this scheme can be generalized to classify numbers, functions, or recognition problems according to their compu-tational complexity.The computational complexity of a sequence is to be measured by how fast a multitape Turing machine can print out the terms of the sequence. This particular abstract model of a computing device is chosen because much of the work in this area is stimulated by the rapidly growing importance of computation through the use of digital computers, and all digital computers in a slightly idealized form belong to the class of multitape Turing machines. More specifically, if Tin) is a computable, monotone increasing function of positive integers into positive integers and if a is a (binary) sequence, then we say that a is in complexity class ST or that a is T-computable if and only if there is a multitape Turing machine 3~ such that 3~ computes the nth term of a. within Tin) operations. Each set ST is recursively enumerable and so no class ST contains all computable sequences. On the other hand, every computable a is contained in some com-plexity class ST. Thus a hierarchy of complexity classes is assured. Furthermore, the classes are independent of time scale or of the speed of the components from which the machines could be built, as there is a "speed-up" theorem which states that ST = SkT f or positive numbers k.As corollaries to the speed-up theorem, there are several limit conditions which establish containment between two complexity classes. This is contrasted later with the theorem which gives a limit condition for noncontainment. One form of this result states that if (with minor restrictions)Received by the editors April 2, 1963 and, in revised form, August 30, 1963.285286J. HARTMANIS AND R. E. STEARNS[May»*«, U(n)then S,; properly contains ST. The intersection of two classes is again a class. The general containment problem, however, is recursively unsolvable.One section is devoted to an investigation as to how a change in the abstract machine model might affect the complexity classes. Some of these are related by a "square law," including the one-tape-multitape relationship: that is if a is T-computable by a multitape Turing machine, then it is T2-computable by a single tape Turing machine. It is gratifying, however, that some of the more obvious variations do not change the classes.The complexity of rational, algebraic, and transcendental numbers is studied in another section. There seems to be a good agreement with our intuitive notions, but there are several questions still to be settled.There is a section in which generalizations to recognition problems and functions are discussed. This section also provides the first explicit "impossibility" proof, by describing a language whose "words" cannot be recognized in real-time [T(n) = n] .The final section is devoted to open questions and problem areas. It is our conviction that numbers and functions have an intrinsic computational nature according to which they can be classified, as shown in this paper, and that there is a good opportunity here for further research.For background information about Turing machines, computability and related topics, the reader should consult [2]. "Real-time" computations (i.e., T(n) = n) were first defined and studied in [3]. Other ways of classifying the complexity of a computation have been studied in [4] and [5], where the complexity is defined in terms of the amount of tape used.II. Time limited computations. In this section, we define our version of a multitape Turing machine, define our complexity classes with respect to this type of machine, and then work out some fundamental properties of these classes.First, we give an English description of our machine (Figure 1) since one must have a firm picture of the device in order to follow our paper. We imagine a computing device that has a finite automaton as a control unit. Attached to this control unit is a fixed number of tapes which are linear, unbounded at both ends, and ruled into an infinite sequence of squares. The control unit has one reading head assigned to each tape, and each head rests on a single square of the assigned tape. There are a finite number of distinct symbols which can appear on the tape squares. Each combination of symbols under the reading heads together with the state of the control unit determines a unique machine operation. A machine operation consists of overprinting a symbol on each tape square under the heads, shifting the tapes independently either one square left, one square1965]ON THE COMPUTATIONAL COMPLEXITY OF ALGORITHMS287ti 1111 i n cm U I I i I I I ID mm.Tn T| in i i i i i i i m-m Î2II I I I I I I I I m II I I I I I I IIP TnTAPESFINITE STATECOMPUTEROUTPUT TAPEFigure 1. An «-tape Turing machineright, or no squares, and then changing the state of the control unit. The machine is then ready to perform its next operation as determined by the tapes and control state. The machine operation is our basic unit of time. One tape is signaled out and called the output tape. The motion of this tape is restricted to one way move-ment, it moves either one or no squares right. What is printed on the output tape and moved from under the head is therefore irrevocable, and is divorced from further calculations.As Turing defined his machine, it had one tape and if someone put k successive ones on the tape and started the machine, it would print some f(k) ones on the tape and stop. Our machine is expected to print successively /(l),/(2), ••• on its output tape. Turing showed that such innovations as adding tapes or tape symbols does not increase the set of functions that can be computed by machines. Since the techniques for establishing such equivalences are common knowledge, we take it as obvious that the functions computable by Turing's model are the same as those computable by our version of a Turing machine. The reason we have chosen this particular model is that it closely resembles the operation of a present day computer; and being interested in how fast a machine can compute, the extra tapes make a difference.To clear up any misconceptions about our model, we now give a formal definition.Definition 1. An n-tape Turing machine, &~, is a set of (3n + 4)-tuples, {(q¡; Stl, Sh, — , Sin ; Sjo, Sjl, — , Sh ; m0, mx, —, m… ; qf)},where each component can take on a finite set of values, and such that for every possible combination of the first n + 1 entries, there exists a unique (3zi-t-4)-tupIe in this set. The first entry, q¡, designates the present state; the next n entries, S(l,-",S,B, designate the present symbols scanned on tapes Tx, •■•, T…,respectively; the next n + 1 symbols SJa, ••-, Sjn, designate the new symbols to be printed on288J. HARTMANIS AND R. E. STEARNS[May tapes T0, •■», T…, respectively; the next n entries describe the tape motions (left, right, no move) of the n + 1 tapes with the restriction m0 # left ; and the last entry gives the new internal state. Tape T0 is called the output tape. One tuple with S¡. = blank symbol for 1 = j = n is designated as starting symbol.Note that we are not counting the output tape when we figure n. Thus a zero-tape machine is a finite automaton whose outputs are written on a tape. We assume without loss of generality that our machine starts with blank tapes.For brevity and clarity, our proofs will usually appeal to the English description and will technically be only sketches of proofs. Indeed, we will not even give a formal definition of a machine operation. A formal definition of this concept can be found in [2].For the sake of simplicity, we shall talk about binary sequences, the general-ization being obvious. We use the notation a = axa2 ••• .Definition 2. Let Tin) be a computable function from integers into integers such that Tin) ^ Tin + 1) and, for some integer k, Tin) ^ n/ k for all n. Then we shall say that the sequence a is T-computable if and only if there exists a multitape Turing machine, 3~, which prints the first n digits of the sequence a on its output tape in no more than Tin) operations, n = 1,2, ••», allowing for the possibility of printing a bounded number of digits on one square. The class of all T-computable binary sequences shall be denoted by ST, and we shall refer to T(n) as a time-function. Sr will be called a complexity class.When several symbols are printed on one square, we regard them as components of a single symbol. Since these are bounded, we are dealing with a finite set of output symbols. As long as the output comes pouring out of the machine in a readily understood form, we do not regard it as unnatural that the output not be strictly binary. Furthermore, we shall see in Corollaries 2.5, 2.7, and 2.8 that if we insist that Tin) ^ n and that only (single) binary outputs be used, then the theory would be within an e of the theory we are adopting.The reason for the condition Tin) ^ n/fc is that we do not wish to regard the empty set as a complexity class. For if a is in ST and F is the machine which prints it, there is a bound k on the number of digits per square of output tape and T can print at most fcn0 d igits in n0 operations. By assumption, Tikn0) ^ n0 or (substituting n0 = n/ k) Tin) à n/ k . On the other hand, Tin) ^ n/ k implies that the sequence of all zeros is in ST because we can print k zeros in each operation and thus ST is not void.Next we shall derive some fundamental properties of our classes.Theorem 1. TAe set of all T-computable binary sequences, ST, is recursively enumerable.Proof. By methods similar to the enumeration of all Turing machines [2] one can first enumerate all multitape Turing machines which print binary sequences. This is just a matter of enumerating all the sets satisfying Definition 1 with the1965] ON THE COMPUTATIONAL C OMPLEXITY O F ALGORITHMS 289 added requirement that Sjo is always a finite sequence of binary digits (regarded as one symbol). Let such an enumeration be &~x, 3~2, ••• . Because T(n) is comput-able, it is possible to systematically modify each ^"¡ to a machine &"'t w ith the following properties : As long as y¡ prints its nth digit within T(n) operations (and this can be verified by first computing T(n) and then looking at the first T(n) operations of ^"¡), then the nth digit of &~'t will be the nth output of &~¡. If &~¡ s hould ever fail to print the nth digit after T(n) operations, then ^"¡'will print out a zero for each successive operation. Thus we can derive a new enumeration •^"'u &~2> "•• If' &\ operates within time T(n), then ^", and ^"¡'compute the same T-computable sequence <x¡. O therwise, &~{ c omputes an ultimately constant sequence a¡ and this can be printed, k bits at a time [where T(n) — n / fc] by a zero tape machine. In either case, a¡ is T-computable and we conclude that {«,} = ST.Corollary 1.1. There does not exist a time-function T such that ST is the set of all computable binary sequences.Proof. Since ST is recursively enumerable, we can design a machine !T which, in order to compute its ith output, computes the z'th bit of sequence a, and prints out its complement. Clearly 3~ produces a sequence a different from all <Xj in ST.Corollary 1.2. For any time-function T, there exists a time-function U such that ST is strictly contained in Sv. Therefore, there are infinitely long chainsSTl cr STl cz •••of distinct complexity classes.Proof. Let &" compute a sequence a not in ST (Corollary 1.1). Let V(n) equal the number of operations required by ^"to compute the nth digit of a. Clearly V is computable and a e Sr. Lett/(n) = max [Tin), V(n)] ,then Vin) is a time-function and clearlyOrí ^3 Oj1 *Since a in Sv and a not in ST, we haveCorollary 1.3. The set of all complexity classes is countable.Proof. The set of enumerable sets is countable.Our next theorem asserts that linear changes in a time-function do not change the complexity class. // r is a real number, we write [r] to represent the smallest integer m such that m = r.290J. HARTMANIS AND R. E. STEARNS[MayTheorem 2. If the sequence cc is T-computable and k is a computable, positive real number, then a is [kT~\-computable; that is,ST = S[kTX.Proof. We shall show that the theorem is true for k = 1/2 and it will be true for fc = 1/ 2m b y induction, and hence for all other computable k since, given k, k ^ 1 /2'" for some m. (Note that if k is computable, then \kT~\ is a computable function satisfying Definition 2.)Let ¡F be a machine which computes a in time T. If the control state, the tape symbols read, and the tape symbols adjacent to those read are all known, then the state and tape changes resulting from the next two operations of &~ are determined and can therefore be computed in a single operation. If we can devise a scheme so that this information is always available to a machine 5~', then &' can perform in one operation what ST does in two operations. We shall next show how, by combining pairs of tape symbols into single symbols and adding extra memory to the control, we can make the information available.In Figure 2(a), we show a typical tape of S" with its head on the square marked 0. In Figure 2(b), we show the two ways we store this information in &~'. Each square of the ^"'-tape contains the information in two squares of the ^-tape. Two of the ^"-tape symbols are stored internally in 3r' and 3~' must also remember which piece of information is being read by 9~. In our figures, this is indicated by an arrow pointed to the storage spot. In two operations of &~, t he heads must move to one of the five squares labeled 2, 1,0, — l,or —2. The corresponding next position of our ^"'-tape is indicated in Figures 2(c)-(g). It is easily verified that in each case, &"' can print or store the necessary changes. In the event that the present symbol read by IT is stored on the right in ¡T' as in Figure 2(f), then the analogous changes are made. Thus we know that ST' can do in one operation what 9~ does in two and the theorem is proved.Corollary 2.1. If U and T are time-functions such that«-.«> Vin)then Svçz ST.Proof. Because the limit is greater than zero, Win) ^ Tin) for some k > 0, and thus Sv = SlkVj çz sT.Corollary 2.2. If U and T are time-functions such thatTin)sup-TTT-r- < 00 ,n-»a> O(n)then SV^ST.Proof. This is the reciprocal of Corollary 2.1.1965] ON THE COMPUTATIONAL COMPLEXITY OF ALGORITHMSE37291/HO W2|3l4[5l(/ZEEI33OÏÏT2Ï31/L-2_-iJ(c]¿m W\2I3I4I5K/(b)ZBE o2|3|4l5|\r2Vi!¿En on2l3l4l5|/l-T-i](d)¿BE2 34[5|6|7ir\10 l|(f)¿m2 34|5l6l7l /L<Dj(g)Figure 2. (a) Tape of ^" with head on 0. (b) Corresponding configurations of 9"'. (c) 9~' if F moves two left, (d) 9~> i f amoves to -1. (e) 9~' if ^~ moves to 0. (f)^"' if amoves to 1.(g) 9~' if 3~ moves two rightCorollary 2.3. If U and T are time-functions such thatTin)0 < hm ) ; < oo ,H-.« Uin)then Srj = ST .Proof. This follows from Corollaries 2.1 and 2.2.Corollary 2.4. // Tin) is a time-function, then Sn^ST . Therefore, Tin) = n is the most severe time restriction.Proof. Because T is a time-function, Tin) = n/ k for some positive k by Definition 2; hence292j. hartmanis and r. e. stearns[Maymf m à 1 > O…-»o, n kand S… çz s T by Corollary 2.1.Corollary 2.5. For any time-function T, Sr=Sv where t/(n)=max \T(n),n\. Therefore, any complexity class may be defined by a function U(n) ^ n. Proof. Clearly inf (T/ Í7) > min (1,1/ k) and sup (T/ U) < 1 .Corollary 2.6. If T is a time-function satisfyingTin) > n and inf -^ > 1 ,…-co nthen for any a in ST, there is a multitape Turing machined with a binary (i.e., two symbol) output which prints the nth digit of a in Tin) or fewer operations. Proof. The inf condition implies that, for some rational e > 0, and integer N, (1 - e) Tin) > n or Tin) > eTin) + n for all n > N. By the theorem, there is a machine 9' which prints a in time \zT(ri)\. 9' can be modified to a machine 9" which behaves like 9' except that it suspends its calculation while it prints the output one digit per square. Obviously, 9" computes within time \i.T(ri)\ + n (which is less than Tin) for n > N). $~" can be modified to the desired machine9~ by adding enough memory to the control of 9~" to print out the nth digit of a on the nth operation for n ^ N.Corollary 2.7. IfT(n)^nandoieST,thenforanys >0, there exists a binary output multitape Turing machine 9 which prints out the nth digit of a in [(1 + e) T(n)J or fewer operations.Proof. Observe that. [(1 + e) T(n)]inf —--——■— — 1 + enand apply Corollary 2.6.Corollary 2.8. // T(n)^n is a time-function and oteST, then for any real numbers r and e, r > e > 0, /Aere is a binary output multitape Turing machine ¡F which, if run at one operation per r—e seconds, prints out the nth digit of a within rT(n) seconds. Ifcc$ ST, there are no such r and e. Thus, when considering time-functions greater or equal to n, the slightest increase in operation speed wipes out the distinction between binary and nonbinary output machines.Proof. This is a consequence of the theorem and Corollary 2.7.Theorem 3. // Tx and T2 are time-functions, then T(n) = min [T^n), T2(n)~] is a time-function and STí O ST2 = ST.1965] ON THE COMPUTATIONAL COMPLEXITY OF ALGORITHMS 293 Proof. T is obviously a time-function. If 9~x is a machine that computes a in time T, and 9~2 computes a in time T2, then it is an easy matter to construct a third device &~ i ncorporating both y, and 3T2 which computes a both ways simul-taneously and prints the nth digit of a as soon as it is computed by either J~x or 9~2. Clearly this machine operates inTin) = min \Txin), T2(n)] .Theorem 4. If sequences a and ß differ in at most a finite number of places, then for any time-function T, cceST if and only if ße ST.Proof. Let ,T print a in time T. Then by adding some finite memory to the control unit of 3", we can obviously build a machine 3~' which computes ß in time T.Theorem 5. Given a time-function T, there is no decision procedure to decide whether a sequence a is in ST.Proof. Let 9~ be any Turing machine in the classical sense and let 3Tx be a multitape Turing machine which prints a sequence ß not in ST. Such a 9~x exists by Theorem 1. Let 9~2 be a multitape Turing machine which prints a zero for each operation $~ makes before stopping. If $~ should stop after k operations, then 3~2 prints the /cth and all subsequent output digits of &x. Let a be the sequence printed by 9"2, Because of Theorem 4, a.eST if and only if 9~ does not stop. Therefore, a decision procedure for oceST would solve the stopping problem which is known to be unsolvable (see [2]).Corollary 5.1. There is no decision procedure to determine if SV=ST or Sv c STfor arbitrary time-functions U and T.Proof. Similar methods to those used in the previous proof link this with the stopping problem.It should be pointed out that these unsolvability aspects are not peculiar to our classification scheme but hold for any nontrivial classification satisfying Theorem 4.III. Other devices. The purpose of this section is to compare the speed of our multitape Turing machine with the speed of other variants of a Turing machine. Most important is the first result because it has an application in a later section.Theorem 6. If the sequence a is T-computable by multitape Turing machine, !T, then a is T2-computable by a one-tape Turing machine 3~x .Proof. Assume that an n-tape Turing machine, 3~, is given. We shall now describe a one-tape Turing machine Px that simulates 9~, and show that if &" is a T-computer, then S~x is at most a T2-computer.294j. hartmanis and r. e. stearns[May The S~ computation is simulated on S'y as follows : On the tape of & y will be stored in n consecutive squares the n symbols read by S on its n tapes. The symbols on the squares to the right of those symbols which are read by S~ on its n tapes are stored in the next section to the right on the S'y tape, etc., as indicated in Figure 3, where the corresponding position places are shown. The1 TAPE T|A 1 TAPE T2I?TAPE Tn(a)J-"lo(b)Figure 3. (a) The n tapes of S. (b) The tape of S~\machine Tx operates as follows: Internally is stored the behavioral description of the machine S", so that after scanning the n squares [J], [o], ■■■, [5]»-^"îdetermines to what new state S~ will go, what new symbols will be printed by it on its n tapes and in which direction each of these tapes will be shifted. First,¡Fy prints the new symbols in the corresponding entries of the 0 block. Then it shifts the tape to the right until the end of printed symbols is reached. (We can print a special symbol indicating the end of printed symbols.) Now the machine shifts the tape back, erases all those entries in each block of n squares which correspond to tapes of S~ which are shifted to the left, and prints them in the corresponding places in the next block. Thus all those entries whose corresponding S~ tapes are shifted left are moved one block to the left. At the other end of the tape, the process is reversed and returning on the tape 9y transfers all those entries whose corresponding S~ tapes are shifted to the right one block to the right on the S'y tape. When the machine S', reaches the rigAz most printed symbol on its tape, it returns to the specially marked (0) block which now contains1965] ON THE COMPUTATIONAL COMPLEXITY OF ALGORITHMS 295 the n symbols which are read by &~ o n its next operation, and #", has completed the simulation of one operation of 9~. It can be seen that the number of operations of Tx is proportional to s, the number of symbols printed on the tape of &"¡. This number increases at most by 2(n + 1) squares during each operation of &. Thus, after T(fc) operations of the machine J~, the one-tape machine S"t will perform at most7(*)T,(fc) =C0+ T Cxii = loperations, where C0 and C, are constants. But thenr,(fe) g C2 £ i^C [T(fc)]2 .¡ =iSince C is a constant, using Theorem 2, we conclude that there exists a one tape machine printing its fcth output symbol in less than T(fc)2 tape shifts as was to be shown.Corollary 6.1. The best computation time improvement that can be gained in going from n-tape machines to in + l)-tape machines is the square root of the computation time.Next we investigate what happens if we allow the possibility of having several heads on each tape with some appropriate rule to prevent two heads from occupy-ing the same square and giving conflicting instructions. We call such a device a multihead Turing machine. Our next result states that the use of such a model would not change the complexity classes.Theorem 7. Let a. be computable by a multihead Turing machine 3T which prints the nth digit in Tin) or less operations where T is a time-function; then a is in ST .Proof. We shall show it for a one-tape two-head machine, the other cases following by induction. Our object is to build a multitape machine Jr' which computes a within time 4T which will establish our result by Theorem 2. The one tape of !T will be replaced by three tapes in 9"'. Tape a contains the left-hand information from 9", tape b contains the right-hand information of 9~, and tape c keeps count, two at a time, of the number of tape squares of ST which are stored on both tapes a and b_. A check mark is always on some square of tape a to indicate the rightmost square not stored on tape b_ and tape b has a check to indicate the leftmost square not stored on tape a.When all the information between the heads is on both tapes a and b. then we have a "clean" position as shown in Figure 4(a). As &" operates, then tape296j. hartmanis and r. e. stearns [May7/Fio TTzTTR" 5 "6Ï7M I 4T5T6" 7 8TT77' ^f(a) rT-Tô:TT2l3l4l?l \J ¿Kh.1y(b) J I l?IM2!3|4 5.6T7 /I |?|4,|5|6 7 8TT7(c) f\7~ /\V\/\A7\7M J M/l/yTITTTTTTJ(a) (b)Figure 4. (a) .^"' in clean position, (b) S' in dirty positiona performs like the left head of S~, tape A behaves like the right head, and tape c reduces the count each time a check mark is moved. Head a must carry the check right whenever it moves right from a checked square, since the new symbol it prints will not be stored on tape A; and similarly head A moves its check left.After some m operations of S~' corresponding to m operations of S~, a "dirty"position such as Figure 4(b) is reached where there is no overlapping information.The information (if any) between the heads of S~ must be on only one tape of S~',say tape A as in Figure 4(b). Head A then moves to the check mark, the between head information is copied over onto tape a, and head amoves back into position.A clean position has been achieved and S~' is ready to resume imitating S~. The time lost is 3/ where I is the distance between the heads. But / ^ m since headA has moved / squares from the check mark it left. Therefore 4m is enough time to imitate m operations of S~ and restore a clean position. Thusas was to be shown.This theorem suggests that our model can tolerate some large deviations without changing the complexity classes. The same techniques can be applied to other changes in the model. For example, consider multitape Turing ma-chines which have a fixed number of special tape symbols such that each symbol can appear in at most one square at any given time and such that the reading head can be shifted in one operation to the place where the special symbol is printed, no matter how far it is on the tape. Turing machines with such "jump instructions^ are similarly shown to leave the classes unchanged.Changes in the structure of the tape tend to lead to "square laws." For example,consider the following :Definition 3. A two-dimensional tape is an unbounded plane which is sub-divided into squares by equidistant sets of vertical and horizontal lines as shown in Figure 5. The reading head of the Turing machine with this two-dimensional tape can move either one square up or down, or one square left or right on each operation. This definition extends naturally to higher-dimensional tapes.。

《人工智能-一种现代方法》第四版习题答案

《人工智能-一种现代方法》第四版习题答案
Chapter 2
2.1 Define in your own words the following terms: agent, agent function, agent program, rationality, reflex agent, model-based agent, goal-based agent, utility-based agent, learning agent. The following are just some of the many possible definitions that can be written:
1.11 “surely computers cannot be intelligent-they can do only what their programmers tell them.” Is the latter statement true, and does it imply the former? This depends on your definition of “intelligent” and “tell.” In one sense computers only do what the programmers command them to do, but in another sense what the programmers consciously tells the computer to do often has very little to do with what the computer actually does. Anyone who has written a program with an ornery bug knows this, as does anyone who has written a successful machine learning program. So in one sense Samuel “told” the computer “learn to play checkers better than I do, and then play that way,” but in another sense he told the computer “follow this learning algorithm” and it learned to play. So we’re left in the situation where you may or may not consider learning to play checkers to be s sign of intelligence (or you may think that learning to play in the right way requires intelligence, but not in this way), and you may think the intelligence resides in the programmer or in the computer

Insight Problem Solving A Critical Examination of the Possibility

Insight Problem Solving A Critical Examination of the Possibility

The Journal of Problem Solving • volume 5, no. 1 (Fall 2012)56Insight Problem Solving: A Critical Examination of the Possibilityof Formal TheoryWilliam H. Batchelder 1 and Gregory E. Alexander 1AbstractThis paper provides a critical examination of the current state and future possibility of formal cognitive theory for insight problem solving and its associated “aha!” experience. Insight problems are contrasted with move problems, which have been formally defined and studied extensively by cognitive psychologists since the pioneering work of Alan Newell and Herbert Simon. To facilitate our discussion, a number of classical brainteasers are presented along with their solutions and some conclusions derived from observing the behavior of many students trying to solve them. Some of these problems are interesting in their own right, and many of them have not been discussed before in the psychologi-cal literature. The main purpose of presenting the brainteasers is to assist in discussing the status of formal cognitive theory for insight problem solving, which is argued to be considerably weaker than that found in other areas of higher cognition such as human memory, decision-making, categorization, and perception. We discuss theoretical barri-ers that have plagued the development of successful formal theory for insight problem solving. A few suggestions are made that might serve to advance the field.Keywords Insight problems, move problems, modularity, problem representation1 Department of Cognitive Sciences, University of California Irvine/10.7771/1932-6246.1143Insight Problem Solving: The Possibility of Formal Theory 57• volume 5, no. 1 (Fall 2012)1. IntroductionThis paper discusses the current state and a possible future of formal cognitive theory for insight problem solving and its associated “aha!” experience. Insight problems are con-trasted with so-called move problems defined and studied extensively by Alan Newell and Herbert Simon (1972). These authors provided a formal, computational theory for such problems called the General Problem Solver (GPS), and this theory was one of the first formal information processing theories to be developed in cognitive psychology. A move problem is posed to solvers in terms of a clearly defined representation consisting of a starting state, a description of the goal state(s), and operators that allow transitions from one problem state to another, as in Newell and Simon (1972) and Mayer (1992). A solu-tion to a move problem involves applying operators successively to generate a sequence of transitions (moves) from the starting state through intermediate problem states and finally to a goal state. Move problems will be discussed more extensively in Section 4.6.In solving move problems, insight may be required for selecting productive moves at various states in the problem space; however, for our purposes we are interested in the sorts of problems that are described often as insight problems. Unlike Newell and Simon’s formal definition of move problems, there has not been a generally agreed upon defini-tion of an insight problem (Ash, Jee, and Wiley, 2012; Chronicle, MacGregor, and Ormerod, 2004; Chu and MacGregor, 2011). It is our view that it is not productive to attempt a pre-cise logical definition of an insight problem, and instead we offer a set of shared defining characteristics in the spirit of Wittgenstein’s (1958) definition of ‘game’ in terms of family resemblances. Problems that we will treat as insight problems share many of the follow-ing defining characteristics: (1) They are posed in such a way as to admit several possible problem representations, each with an associated solution search space. (2) Likely initial representations are inadequate in that they fail to allow the possibility of discovering a problem solution. (3) In order to overcome such a failure, it is necessary to find an alternative productive representation of the problem. (4) Finding a productive problem representation may be facilitated by a period of non-solving activity called incubation, and also it may be potentiated by well-chosen hints. (5) Once obtained, a productive representation leads quite directly and quickly to a solution. (6) The solution involves the use of knowledge that is well known to the solver. (7) Once the solution is obtained, it is accompanied by a so-called “aha!” experience. (8) When a solution is revealed to a non-solver, it is grasped quickly, often with a feeling of surprise at its simplicity, akin to an “aha!” experience.It is our position that very little is known empirically or theoretically about the cogni-tive processes involved in solving insight problems. Furthermore, this lack of knowledge stands in stark contrast with other areas of cognition such as human memory, decision-making, categorization, and perception. These areas of cognition have a large number of replicable empirical facts, and many formal theories and computational models exist that attempt to explain these facts in terms of underlying cognitive processes. The main goal58W. H. Batchelder and G. E. Alexander of this paper is to explain the reasons why it has been so difficult to achieve a scientific understanding of the cognitive processes involved in insight problem solving.There have been many scientific books and papers on insight problem solving, start-ing with the seminal work of the Gestalt psychologists Köhler (1925), Duncker (1945), and Wertheimer (1954), as well as the English social psychologist, Wallas (1926). Since the contributions of the early Gestalt psychologists, there have been many journal articles, a few scientific books, such as those by Sternberg and Davidson (1996) and Chu (2009), and a large number of books on the subject by laypersons. Most recently, two excellent critical reviews of insight problem solving have appeared: Ash, Cushen, and Wiley (2009) and Chu and MacGregor (2011).The approach in this paper is to discuss, at a general level, the nature of several fun-damental barriers to the scientific study of insight problem solving. Rather than criticizing particular experimental studies or specific theories in detail, we try to step back and take a look at the area itself. In this effort, we attempt to identify principled reasons why the area of insight problem solving is so resistant to scientific progress. To assist in this approach we discuss and informally analyze eighteen classical brainteasers in the main sections of the paper. These problems are among many that have been posed to hundreds of upper divisional undergraduate students in a course titled “Human Problem Solving” taught for many years by the senior author. Only the first two of these problems can be regarded strictly as move problems in the sense of Newell and Simon, and most of the rest share many of the characteristics of insight problems as described earlier.The paper is divided into five main sections. After the Introduction, Section 2 describes the nature of the problem solving class. Section 3 poses the eighteen brainteasers that will be discussed in later sections of the paper. The reader is invited to try to solve these problems before checking out the solutions in the Appendix. Section 4 lays out six major barriers to developing a deep scientific theory of insight problem solving that we believe are endemic to the field. We argue that these barriers are not present in other, more theo-retically advanced areas of higher cognition such as human memory, decision-making, categorization, and perception. These barriers include the lack of many experimental paradigms (4.1), the lack of a large, well-classified set of stimulus material (4.2), and the lack of many informative behavioral measures (4.3). In addition, it is argued that insight problem solving is difficult to study because it is non-modular, both in the sense of Fodor (1983) but more importantly in several weaker senses of modularity that admit other areas of higher cognition (4.4), the lack of theoretical generalizations about insight problem solv-ing from experiments with particular insight problems (4.5), and the lack of computational theories of human insight (4.6). Finally, in Section 5, we suggest several avenues that may help overcome some of the barriers described in Section 4. These include suggestions for useful classes of insight problems (5.1), suggestions for experimental work with expert problem solvers (5.2), and some possibilities for a computational theory of insight.The Journal of Problem Solving •Insight Problem Solving: The Possibility of Formal Theory 592. Batchelder’s Human Problem Solving ClassThe senior author, William Batchelder, has taught an Upper Divisional Undergraduate course called ‘Human Problem Solving” for over twenty-five years to classes ranging in size from 75 to 100 students. By way of background, his active research is in other areas of the cognitive sciences; however, he maintains a long-term hobby of studying classical brainteasers. In the area of complex games, he achieved the title of Senior Master from the United States Chess Federation, he was an active duplicate bridge player throughout undergraduate and graduate school, and he also achieved a reasonable level of skill in the game of Go.The content of the problem-solving course is split into two main topics. The first topic involves encouraging students to try their hand at solving a number of famous brainteasers drawn from the sizeable folklore of insight problems, especially the work of Martin Gardner (1978, 1982), Sam Loyd (1914), and Raymond Smullyan (1978). In addition, games like chess, bridge, and Go are discussed. The second topic involves presenting the psychological theory of thinking and problem solving, and in most cases the material is organized around developments in topics that are covered in the first eight chapters of Mayer (1992). These topics include work of the Gestalt psychologists on problem solving, discussion of experiments and theories concerning induction and deduction, present-ing the work on move problems, including the General Problem Solver (Newell & Simon, 1972), showing how response time studies can reveal mental architectures, and describing theories of memory representation and question answering.Despite efforts, the structure of the course does not reflect a close overlap between its two main topics. The principal reason for this is that in our view the level of theoreti-cal and empirical work on insight problem solving is at a substantially lower level than is the work in almost any other area of cognition dealing with higher processes. The main goal of this paper is to explain our reasons for this pessimistic view. To assist in this goal, it is helpful to get some classical brainteasers on the table. While most of these problems have not been used in experimental studies, the senior author has experienced the solu-tion efforts and post solution discussions of over 2,000 students who have grappled with these problems in class.3. Some Classic BrainteasersIn this section we present eighteen classical brainteasers from the folklore of problem solving that will be discussed in the remainder of the paper. These problems have de-lighted brainteaser connoisseurs for years, and most are capable of giving the solver a large dose of the “aha!” experience. There are numerous collections of these problems in books, and many collections of them are accessible through the Internet. We have selected these problems because they, and others like them, pose a real challenge to any effort to • volume 5, no. 1 (Fall 2012)60W. H. Batchelder and G. E. Alexander develop a deep and general formal theory of human or machine insight problem solving. With the exception of Problems 3.1 and 3.2, and arguably 3.6, the problems are different in important respects from so-called move problems of Newell and Simon (1972) described earlier and in Section 4.6.Most of the problems posed in this section share many of the defining characteristics of insight problems described in Section 1. In particular, they do not involve multiple steps, they require at most a very minimal amount of technical knowledge, and most of them can be solved by one or two fairly simple insights, albeit insights that are rarely achieved in real time by problem solvers. What makes these problems interesting is that they are posed in such a way as to induce solvers to represent the problem information in an unproductive way. Then the main barrier to finding a solution to one of these problems is to overcome a poor initial problem representation. This may involve such things as a re-representation of the problem, the dropping of an implicit constraint on the solution space, or seeing a parallel to some other similar problem. If the solver finds a productive way of viewing the problem, the solution generally follows rapidly and comes with burst of insight, namely the “aha!” experience. In addition, when non-solvers are given the solu-tion they too may experience a burst of insight.What follows next are statements of the eighteen brainteasers. The solutions are presented in the Appendix, and we recommend that after whatever problem solving activity a reader wishes to engage in, that the Appendix is studied before reading the remaining two sections of the paper. As we discuss each problem in the paper, we provide authorship information where authorship is known. In addition, we rephrased some of the problems from their original sources.Problem 3.1. Imagine you have an 8-inch by 8-inch array of 1-inch by 1-inch little squares. You also have a large box of 2-inch by 1-inch rectangular shaped dominoes. Of course it is easy to tile the 64 little squares with dominoes in the sense that every square is covered exactly once by a domino and no domino is hanging off the array. Now sup-pose the upper right and lower left corner squares are cut off the array. Is it possible to tile the new configuration of 62 little squares with dominoes allowing no overlaps and no overhangs?Problem 3.2. A 3-inch by 3-inch by 3-inch cheese cube is made of 27 little 1-inch cheese cubes of different flavors so that it is configured like a Rubik’s cube. A cheese-eating worm devours one of the top corner cubes. After eating any little cube, the worm can go on to eat any adjacent little cube (one that shares a wall). The middlemost little cube is by far the tastiest, so our worm wants to eat through all the little cubes finishing last with the middlemost cube. Is it possible for the worm to accomplish this goal? Could he start with eating any other little cube and finish last with the middlemost cube as the 27th?The Journal of Problem Solving •Insight Problem Solving: The Possibility of Formal Theory 61 Figure 1. The cheese eating worm problem.Problem 3.3. You have ten volumes of an encyclopedia numbered 1, . . . ,10 and shelved in a bookcase in sequence in the ordinary way. Each volume has 100 pages, and to simplify suppose the front cover of each volume is page 1 and numbering is consecutive through page 100, which is the back cover. You go to sleep and in the middle of the night a bookworm crawls onto the bookcase. It eats through the first page of the first volume and eats continuously onwards, stopping after eating the last page of the tenth volume. How many pieces of paper did the bookworm eat through?Figure 2.Bookcase setup for the Bookworm Problem.Problem 3.4. Suppose the earth is a perfect sphere, and an angel fits a tight gold belt around the equator so there is no room to slip anything under the belt. The angel has second thoughts and adds an inch to the belt, and fits it evenly around the equator. Could you slip a dime under the belt?• volume 5, no. 1 (Fall 2012)62W. H. Batchelder and G. E. Alexander Problem 3.5. Consider the cube in Figure 1 and suppose the top and bottom surfaces are painted red and the other four sides are painted blue. How many little cubes have at least one red and at least one blue side?Problem 3.6. Look at the nine dots in Figure 3. Your job is to take a pencil and con-nect them using only three straight lines. Retracing a line is not allowed and removing your pencil from the paper as you draw is not allowed. Note the usual nine-dot problem requires you to do it with four lines; you may want to try that stipulation as well. Figure 3.The setup for the Nine-Dot Problem.Problem 3.7. You are standing outside a light-tight, well-insulated closet with one door, which is closed. The closet contains three light sockets each containing a working light bulb. Outside the closet, there are three on/off light switches, each of which controls a different one of the sockets in the closet. All switches are off. Your task is to identify which switch operates which light bulb. You can turn the switches off and on and leave them in any position, but once you open the closet door you cannot change the setting of any switch. Your task is to figure out which switch controls which light bulb while you are only allowed to open the door once.Figure 4.The setup of the Light Bulb Problem.The Journal of Problem Solving •Insight Problem Solving: The Possibility of Formal Theory 63• volume 5, no . 1 (Fall 2012)Problem 3.8. We know that any finite string of symbols can be extended in infinitely many ways depending on the inductive (recursive) rule; however, many of these ways are not ‘reasonable’ from a human perspective. With this in mind, find a reasonable rule to continue the following series:Problem 3.9. You have two quart-size beakers labeled A and B. Beaker A has a pint of coffee in it and beaker B has a pint of cream in it. First you take a tablespoon of coffee from A and pour it in B. After mixing the contents of B thoroughly you take a tablespoon of the mixture in B and pour it back into A, again mixing thoroughly. After the two transfers, which beaker, if either, has a less diluted (more pure) content of its original substance - coffee in A or cream in B? (Forget any issues of chemistry such as miscibility).Figure 5. The setup of the Coffee and Cream Problem.Problem 3.10. There are two large jars, A and B. Jar A is filled with a large number of blue beads, and Jar B is filled with the same number of red beads. Five beads from Jar A are scooped out and transferred to Jar B. Someone then puts a hand in Jar B and randomly grabs five beads from it and places them in Jar A. Under what conditions after the second transfer would there be the same number of red beads in Jar A as there are blue beads in Jar B.Problem 3.11. Two trains A and B leave their train stations at exactly the same time, and, unaware of each other, head toward each other on a straight 100-mile track between the two stations. Each is going exactly 50 mph, and they are destined to crash. At the time the trains leave their stations, a SUPERFLY takes off from the engine of train A and flies directly toward train B at 100 mph. When he reaches train B, he turns around instantly, A BCD EF G HI JKLM.............64W. H. Batchelder and G. E. Alexander continuing at 100 mph toward train A. The SUPERFLY continues in this way until the trains crash head-on, and on the very last moment he slips out to live another day. How many miles does the SUPERFLY travel on his zigzag route by the time the trains collide?Problem 3.12. George lives at the foot of a mountain, and there is a single narrow trail from his house to a campsite on the top of the mountain. At exactly 6 a.m. on Satur-day he starts up the trail, and without stopping or backtracking arrives at the top before6 p.m. He pitches his tent, stays the night, and the next morning, on Sunday, at exactly 6a.m., he starts down the trail, hiking continuously without backtracking, and reaches his house before 6 p.m. Must there be a time of day on Sunday where he was exactly at the same place on the trail as he was at that time on Saturday? Could there be more than one such place?Problem 3.13. You are driving up and down a mountain that is 20 miles up and 20 miles down. You average 30 mph going up; how fast would you have to go coming down the mountain to average 60 mph for the entire trip?Problem 3.14. During a recent census, a man told the census taker that he had three children. The census taker said that he needed to know their ages, and the man replied that the product of their ages was 36. The census taker, slightly miffed, said he needed to know each of their ages. The man said, “Well the sum of their ages is the same as my house number.” The census taker looked at the house number and complained, “I still can’t tell their ages.” The man said, “Oh, that’s right, the oldest one taught the younger ones to play chess.” The census taker promptly wrote down the ages of the three children. How did he know, and what were the ages?Problem 3.15. A closet has two red hats and three white hats. Three participants and a Gamesmaster know that these are the only hats in play. Man A has two good eyes, man B only one good eye, and man C is blind. The three men sit on chairs facing each other, and the Gamesmaster places a hat on each man’s head, in such a way that no man can see the color of his own hat. The Gamesmaster offers a deal, namely if any man correctly states the color of his hat, he will get $50,000; however, if he is in error, then he has to serve the rest of his life as an indentured servant to the Gamesmaster. Man A looks around and says, “I am not going to guess.” Then Man B looks around and says, “I am not going to guess.” Finally Man C says, “ From what my friends with eyes have said, I can clearly see that my hat is _____”. He wins the $50,000, and your task is to fill in the blank and explain how the blind man knew the color of his hat.Problem 3.16. A king dies and leaves an estate, including 17 horses, to his three daughters. According to his will, everything is to be divided among his daughters as fol-lows: 1/2 to the oldest daughter, 1/3 to the middle daughter, and 1/9 to the youngest daughter. The three heirs are puzzled as to how to divide the horses among themselves, when a probate lawyer rides up on his horse and offers to assist. He adds his horse to the kings’ horses, so there will be 18 horses. Then he proceeds to divide the horses amongThe Journal of Problem Solving •Insight Problem Solving: The Possibility of Formal Theory 65 the daughters. The oldest gets ½ of the horses, which is 9; the middle daughter gets 6 horses which is 1/3rd of the horses, and the youngest gets 2 horses, 1/9th of the lot. That’s 17 horses, so the lawyer gets on his own horse and rides off with a nice commission. How was it possible for the lawyer to solve the heirs’ problem and still retain his own horse?Problem 3.17. A logical wizard offers you the opportunity to make one statement: if it is false, he will give you exactly ten dollars, and if it is true, he will give you an amount of money other than ten dollars. Give an example of a statement that would be sure to make you rich.Problem 3.18. Discover an interesting sense of the claim that it is in principle impos-sible to draw a perfect map of England while standing in a London flat; however, it is not in principle impossible to do so while living in a New York City Pad.4. Barriers to a Theory of Insight Problem SolvingAs mentioned earlier, our view is that there are a number of theoretical barriers that make it difficult to develop a satisfactory formal theory of the cognitive processes in play when humans solve classical brainteasers of the sort posed in Section 3. Further these barriers seem almost unique to insight problem solving in comparison with the more fully developed higher process areas of the cognitive sciences such as human memory, decision-making, categorization, and perception. Indeed it seems uncontroversial to us that neither human nor machine insight problem solving is well understood, and com-pared to other higher process areas in psychology, it is the least developed area both empirically and theoretically.There are two recent comprehensive critical reviews concerning insight problem solving by Ash, Cushen, and Wiley (2009) and Chu and MacGregor (2011). These articles describe the current state of empirical and theoretical work on insight problem solving, with a focus on experimental studies and theories of problem restructuring. In our view, both reviews are consistent with our belief that there has been very little sustainable progress in achieving a general scientific understanding of insight. Particularly striking is that are no established general, formal theories or models of insight problem solving. By a general formal model of insight problem solving we mean a set of clearly formulated assumptions that lead formally or logically to precise behavioral predictions over a wide range of insight problems. Such a formal model could be posed in terms of a number of formal languages including information processing assumptions, neural networks, computer simulation, stochastic assumptions, or Bayesian assumptions.Since the groundbreaking work by the Gestalt psychologists on insight problem solving, there have been theoretical ideas that have been helpful in explaining the cog-nitive processes at play in solving certain selected insight problems. Among the earlier ideas are Luchins’ concept of einstellung (blind spot) and Duncker’s functional fixedness, • volume 5, no. 1 (Fall 2012)as in Maher (1992). More recently, there have been two developed theoretical ideas: (1) Criterion for Satisfactory Progress theory (Chu, Dewald, & Chronicle, 2007; MacGregor, Ormerod, & Chronicle, 2001), and (2) Representational Change Theory (Knoblich, Ohls-son, Haider, & Rhenius, 1999). We will discuss these theories in more detail in Section 4. While it is arguable that these theoretical ideas have done good work in understanding in detail a few selected insight problems, we argue that it is not at all clear how these ideas can be generalized to constitute a formal theory of insight problem solving at anywhere near the level of generality that has been achieved by formal theories in other areas of higher process cognition.The dearth of formal theories of insight problem solving is in stark contrast with other areas of problem solving discussed in Section 4.6, for example move problems discussed earlier and the more recent work on combinatorial optimization problems such as the two dimensional traveling salesman problem (MacGregor and Chu, 2011). In addition, most other higher process areas of cognition are replete with a variety of formal theories and models. For example, in the area of human memory there are currently a very large number of formal, information processing models, many of which have evolved from earlier mathematical models, as in Norman (1970). In the area of categorization, there are currently several major formal theories along with many variations that stem from earlier theories discussed in Ashby (1992) and Estes (1996). In areas ranging from psycholinguistics to perception, there are a number of formal models based on brain-style computation stemming from Rumelhart, McClelland, and PDP Research Group’s (1987) classic two-volume book on parallel distributed processing. Since Daniel Kahneman’s 2002 Nobel Memorial Prize in the Economic Sciences for work jointly with Amos Tversky developing prospect theory, as in Kahneman and Tversky (1979), psychologically based formal models of human decision-making is a major theoretical area in cognitive psychology today. In our view, there is nothing in the area of insight problem solving that approaches the depth and breadth of formal models seen in the areas mentioned above.In the following subsections, we will discuss some of the barriers that have prevented the development of a satisfactory theory of insight problem solving. Some of the bar-riers will be illustrated with references to the problems in Section 3. Then, in Section 5 we will assuage our pessimism a bit by suggesting how some of these barriers might be removed in future work to facilitate the development of an adequate theory of insight problem solving.4.1 Lack of Many Experimental ParadigmsThere are not many distinct experimental paradigms to study insight problem solving. The standard paradigm is to pick a particular problem, such as one of the ones in Section 3, and present it to several groups of subjects, perhaps in different ways. For example, groups may differ in the way a hint is presented, a diagram is provided, or an instruction。

Mata手册说明书

Mata手册说明书

Contents[M-0]Introduction to the Mata manualintro.................................................Introduction to the Mata manual[M-1]Introduction and adviceintro........................................................Introduction and advice ing Mata with ado-files first....................................................Introduction andfirst session help........................................................Obtaining help in Stata how.............................................................How Mata works ing Mata interactively LAPACK.........................................The LAPACK linear-algebra routines limits..................................................Limits and memory utilization naming......................................Advice on naming functions and variables permutation................................An aside on permutation matrices and vectors returnedargs..................................Function arguments used to return results source.....................................................Viewing the source code e and specification of tolerances[M-2]Language definitionnguage definition break..............................................Break out of for,while,or do loop class...........................................Object-oriented programming(classes) ments continue...........................Continue with next iteration of for,while,or do loop declarations...................................................Declarations and types do..............................................................do...while(exp) errors.................................................................Error codes exp..................................................................Expressions for......................................................for(exp1;exp2;exp3)stmt ftof...................................................Passing functions to functions goto...................................................................goto label if.............................................................if(exp)...else... op arith........................................................Arithmetic operators op assignment..................................................Assignment operator op colon...........................................................Colon operators op conditional..................................................Conditional operator op increment.......................................Increment and decrement operators op join..............................................Row-and column-join operators op kronecker........................................Kronecker direct-product operator op logical........................................................Logical operators op range..........................................................Range operators op transpose.............................................Conjugate transpose operator12Contents optargs.........................................................Optional arguments pointers..................................................................Pointers pragma...............................................Suppressing warning messages reswords...........................................................Reserved words return........................................................return and return(exp) e of semicolons struct..................................................................Structures e of subscripts syntax............................................Mata language grammar and syntax version............................................................Version control void................................................................V oid matrices while.............................................................while(exp)stmt[M-3]Commands for controlling Matamands for controlling Mata end...................................................Exit Mata and return to Stata mata.....................................................Mata invocation command mata clear.....................................................Clear Mata’s memory mata describe.....................................Describe contents of Mata’s memory mata drop...................................................Drop matrix or function mata help......................................................Obtain help in Stata mata matsave..............................................Save and restore matrices mata memory.........................................Report on Mata’s memory usage mata mlib....................................................Create function library mata mosave................................Save function’s compiled code in objectfile mata rename..............................................Rename matrix or function mata set......................................Set and display Mata system parameters mata stata...................................................Execute Stata command mata which........................................................Identify function namelists........................................Specifying matrix and function names [M-4]Index and guide to functions intro....................................................Index and guide to functions io...................................................................I/O functions manipulation....................................................Matrix manipulation mathematical.........................................Important mathematical functions matrix............................................................Matrix functions programming................................................Programming functions scalar..................................................Scalar mathematical functions solvers................................Functions to solve AX=B and to obtain A inverse standard..........................................Functions to create standard matrices stata........................................................Stata interface functions statistical........................................................Statistical functions string..................................................String manipulation functions utility.......................................................Matrix utility functions [M-5]Mata functions intro...............................................................Mata functionsContents3 abbrev().........................................................Abbreviate strings abs().......................................................Absolute value(length) adosubdir().........................................Determine ado-subdirectory forfile all()..........................................................Element comparisons args()........................................................Number of arguments asarray().........................................................Associative arrays ascii().....................................................Manipulate ASCII codes assert().....................................................Abort execution if false blockdiag()...................................................Block-diagonal matrix bufio()........................................................Buffered(binary)I/O byteorder().............................................Byte order used by computer C()...............................................................Make complex c()...............................................................Access c()value callersversion()........................................Obtain version number of caller cat()....................................................Loadfile into string matrix chdir().......................................................Manipulate directories cholesky().........................................Cholesky square-root decomposition cholinv()...................................Symmetric,positive-definite matrix inversion cholsolve()............................Solve AX=B for X using Cholesky decomposition comb()binatorial function cond()...........................................................Condition number conj()plex conjugate corr()....................................Make correlation matrix from variance matrix cross().............................................................Cross products crossdev()..................................................Deviation cross products cvpermute().................................................Obtain all permutations date()...................................................Date and time manipulation deriv()........................................................Numerical derivatives designmatrix().....................................................Design matrices det()........................................................Determinant of matrix diag().................................................Replace diagonal of a matrix diag().......................................................Create diagonal matrix diag0cnt()..................................................Count zeros on diagonal diagonal().........................................Extract diagonal into column vector dir()....................................................................File list direxists()..................................................Whether directory exists direxternal().....................................Obtain list of existing external globals display().............................................Display text interpreting SMCL displayas()........................................................Set display level displayflush()............................................Flush terminal-output buffer Dmatrix().......................................................Duplication matrix docx*().......................................Generate Office Open XML(.docx)file dsign()............................................FORTRAN-like DSIGN()function e()...................................................................Unit vectors editmissing()..........................................Edit matrix for missing values edittoint()......................................Edit matrix for roundoff error(integers) edittozero().......................................Edit matrix for roundoff error(zeros) editvalue()............................................Edit(change)values in matrix eigensystem()............................................Eigenvectors and eigenvalues4Contentseigensystemselect()pute selected eigenvectors and eigenvalues eltype()..................................Element type and organizational type of object epsilon().......................................Unit roundoff error(machine precision) equilrc()..............................................Row and column equilibration error().........................................................Issue error message errprintf()..................................Format output and display as error message exit()..........................................................Terminate execution exp().................................................Exponentiation and logarithms factorial()..............................................Factorial and gamma function favorspeed()...................................Whether speed or space is to be favored ferrortext()......................................Text and return code offile error code fft().............................................................Fourier transform fileexists().......................................................Whetherfile exists fillmissing()..........................................Fill matrix with missing values findexternal().................................Find,create,and remove external globals findfile().................................................................Findfile floatround().................................................Round tofloat precision fmtwidth().........................................................Width of%fmt fopen()..................................................................File I/O fullsvd()............................................Full singular value decomposition geigensystem().................................Generalized eigenvectors and eigenvalues ghessenbergd()..................................Generalized Hessenberg decomposition ghk()...................Geweke–Hajivassiliou–Keane(GHK)multivariate normal simulator ghkfast().....................GHK multivariate normal simulator using pregenerated points gschurd()...........................................Generalized Schur decomposition halton().........................................Generate a Halton or Hammersley set hash1()...........................................Jenkins’one-at-a-time hash function hessenbergd()..............................................Hessenberg decomposition Hilbert()..........................................................Hilbert matrices I()................................................................Identity matrix inbase()...........................................................Base conversion indexnot()..................................................Find character not in list invorder()............................................Permutation vector manipulation invsym().............................................Symmetric real matrix inversion invtokens()...............................Concatenate string rowvector into string scalar isdiagonal()..............................................Whether matrix is diagonal isfleeting()...........................................Whether argument is temporary isreal()......................................................Storage type of matrix isrealvalues()..................................Whether matrix contains only real values issymmetric().................................Whether matrix is symmetric(Hermitian) isview()....................................................Whether matrix is view J().............................................................Matrix of constants Kmatrix()mutation matrix lapack()PACK linear-algebra functions liststruct()...................................................List structure’s contents Lmatrix().......................................................Elimination matrix logit()...........................................Log odds and complementary log-logContents5 lowertriangle().........................................Extract lower or upper triangle lud()...........................................................LU decomposition luinv()......................................................Square matrix inversion lusolve()...................................Solve AX=B for X using LU decomposition makesymmetric().............................Make square matrix symmetric(Hermitian) matexpsym().......................Exponentiation and logarithms of symmetric matrices matpowersym().........................................Powers of a symmetric matrix mean()............................................Means,variances,and correlations mindouble().................................Minimum and maximum nonmissing value minindex().......................................Indices of minimums and maximums minmax().................................................Minimums and maximums missing().......................................Count missing and nonmissing values missingof()................................................Appropriate missing value mod()..................................................................Modulus moptimize().....................................................Model optimization more().....................................................Create–more–condition negate().......................................................Negate real matrix norm()....................................................Matrix and vector norms normal()................................Cumulatives,reverse cumulatives,and densities optimize().....................................................Function optimization panelsetup()...................................................Panel-data processing pathjoin()....................................................File path manipulation pinv().................................................Moore–Penrose pseudoinverse polyeval()........................................Manipulate and evaluate polynomials printf().............................................................Format output qrd()...........................................................QR decomposition qrinv().............................Generalized inverse of matrix via QR decomposition qrsolve()...................................Solve AX=B for X using QR decomposition quadcross().............................................Quad-precision cross products range()..................................................Vector over specified range rank().............................................................Rank of matrix Re()..................................................Extract real or imaginary part reldif()..................................................Relative/absolute difference rows()........................................Number of rows and number of columns rowshape().........................................................Reshape matrix runiform()..............................Uniform and nonuniform pseudorandom variates runningsum().................................................Running sum of vector schurd()......................................................Schur decomposition select()..............................................Select rows,columns,or indices setbreakintr()..................................................Break-key processing sign()...........................................Sign and complex quadrant functions sin()..........................................Trigonometric and hyperbolic functions sizeof().........................................Number of bytes consumed by object solve tol().....................................Tolerance used by solvers and inverters solvelower()..........................................Solve AX=B for X,A triangular solvenl().........................................Solve systems of nonlinear equations sort().......................................................Reorder rows of matrix6Contentssoundex().............................................Convert string to soundex code spline3()..................................................Cubic spline interpolation sqrt().................................................................Square root st addobs()....................................Add observations to current Stata dataset st addvar().......................................Add variable to current Stata dataset st data()...........................................Load copy of current Stata dataset st dir()..................................................Obtain list of Stata objects st dropvar()...........................................Drop variables or observations st global()........................Obtain strings from and put strings into global macros st isfmt()......................................................Whether valid%fmt st isname()................................................Whether valid Stata name st local()..........................Obtain strings from and put strings into Stata macros st macroexpand().......................................Expand Stata macros in string st matrix().............................................Obtain and put Stata matrices st numscalar().......................Obtain values from and put values into Stata scalars st nvar()........................................Numbers of variables and observations st rclear().....................................................Clear r(),e(),or s() st store().................................Modify values stored in current Stata dataset st subview()..................................................Make view from view st tempname()...............................................Temporary Stata names st tsrevar().....................................Create time-series op.varname variables st updata()....................................Determine or set data-have-changedflag st varformat().................................Obtain/set format,etc.,of Stata variable st varindex()...............................Obtain variable indices from variable names st varname()...............................Obtain variable names from variable indices st varrename()................................................Rename Stata variable st vartype()............................................Storage type of Stata variable st view()..........................Make matrix that is a view onto current Stata dataset st viewvars().......................................Variables and observations of view st vlexists()e and manipulate value labels stata()......................................................Execute Stata command stataversion().............................................Version of Stata being used strdup()..........................................................String duplication strlen()...........................................................Length of string strmatch()....................................Determine whether string matches pattern strofreal().....................................................Convert real to string strpos().....................................................Find substring in string strreverse()..........................................................Reverse string strtoname()...........................................Convert a string to a Stata name strtoreal().....................................................Convert string to real strtrim()...........................................................Remove blanks strupper()......................................Convert string to uppercase(lowercase) subinstr()...........................................................Substitute text sublowertriangle()...........................Return a matrix with zeros above a diagonal substr()......................................................Substitute into string substr()...........................................................Extract substring sum().....................................................................Sums svd()..................................................Singular value decomposition svsolve()..........................Solve AX=B for X using singular value decomposition swap()..............................................Interchange contents of variablesContents7 Toeplitz().........................................................Toeplitz matrices tokenget()........................................................Advanced parsing tokens()..................................................Obtain tokens from string trace().......................................................Trace of square matrix transpose()..................................................Transposition in place transposeonly().......................................Transposition without conjugation trunc()............................................................Round to integeruniqrows()..............................................Obtain sorted,unique values unitcircle()plex vector containing unit circle unlink().................................................................Erasefile valofexternal().........................................Obtain value of external global Vandermonde()................................................Vandermonde matrices vec().........................................................Stack matrix columns xl()............................................................Excelfile I/O class[M-6]Mata glossary of common terms Glossary........................................................................ Subject and author index...........................................................。

auto9

auto9

in sequential circuits.
Definition A Moore machine is a collection of five things:
1. A finite set of states q0, q1, q2 … where q0 is
designated as the start state. 2. An alphabet of letters for forming the input string = {a, b, c, …}. 3. An alphabet of possible output characters = {x, y, z, …}. 4. A transition table that shows for each state and each input letter what state is reached next. 5. An output table that shows what character from is
which is the first state chosen. • We shall adopt the policy that a Moore machine always begins by printing the character dictated by the mandatory start state. • This difference is not significant. If the input string has 7 letters, then the output string will have 8 characters
printed by each state that is entered.

FAA团队与Polarion的高空成功:应用生命周期管理的客户端,结果与挑战说明书

FAA团队与Polarion的高空成功:应用生命周期管理的客户端,结果与挑战说明书

U.S. FAA Team Flying High with PolarionThe CustomerThe SolutionThe ResultThe Challenges: Traceability &Standards ComplianceThe U.S. Federal Aviation Administration (FAA) is the national aviation authority of the government of the United States. A division of the U.S. Department of Transportation, the agency regulates and oversees all aspects of civilian aviation in the United States. A development group within the FAA has adopted Polarion as their application lifecycle management solution.The software product managed by this group primarily monitors compliance of contractor provided automatic dependent surveillance broadcast system (SBS) data with contract terms and expectations. Since the monitoring system is ground based, the group aligns its development methods with the aviation standard forground based software known as DO-278.Because the group already used Subversion to manage source code changes, any solution for traceability based on Subversion would be highly desirable. “We much preferred to have a lifecycle and traceability solution that was tightly coupled to our existing version controls rather than an external overlay system.” said the group leader. Polarion was the top candidate almost from first research stages because it was clear from the outset that it addressed all the project’s key requirements:• Polarion uses Subversion as the data repository, eliminating the need for any integration from the outset.• Polarion is designed to provide format-independent traceability from requirements to source code and onward to object code.•Polarion supports any process, easily mapping it to workflow control and automating key functions.Polarion LiveDoc ™ technology has brought significant benefits to this group. “Before Polarion, we had to change the entire documents when requirements were changed - not just one requirement. Now, we can change requirements one by one.”Import from Microsoft Word proved to be another plus. According to the group manager, the overall result in time savings for new requirements is as much as 50%.Polarion’s flexibility and customizability in mapping a process and controlling it throughout the lifecycle notTheir previous solution, which included Harvest SCM and Microsoft ® Word documents, simply could not provide the necessary traceability and baseline centric documentation required by DO-278. It could not handle all the different file types, linking of requirements to source code impossible, and linking to object code extremely difficult.Another challenge was a fast-growing set of stakeholders in different groups, necessitating an ever greater level of agility in their “cyclic waterfall” process. “Having to be nimble was a big challenge,” said the group’s configuration and verification manager.* “The ability to attach source code changes to requirements enhances portability and maintainability, and is essential for DO-278 at higher levels of assurance.”“Polarion takes new feature requests quickly, and that’s great!”time reducedDO-278on creating new requirements50%CUSTOMER SUCCESS STORYAbout PolarionAbout the FAAAdditional Polarion PlussesCreators of the world’s fastest enterprise scale web-based ALM solution, the success of Polarion Software is best described by the hundreds of Global 1000 companies and over 1 Million users who rely daily on Polarion’s Requirements Management , Quality Assurance and Application Lifecycle Management solutions in their business processes. Polarion is a thriving international company with offices across Europe and North America,and a wide ecosystem of partners world-wide.For more information, visit .than at the FAA. Whether your industry is aerospace, automotive, or medical devices, Polarion has solutions and know-how to help you remove obstacles, work more efficiently, and compete more effectively in today’s fast-evolving markets.* (Name withheld due to agency policy.)The Federal Aviation Administration describes its mission as providing “the safest, most efficient aerospace system in the world”. You can learn more about this civil aviation agency on their web site at .only helps this FAA team today, but is likely to deliver ongoing benefits down the road.“Right now, we need compliance with only a fairly low level of DO-278. With Polarion, we can easily ramp up to higher levels, and as standards evolve and change over time, it will be much easier for us to adapt.”“Our process is complex. We like the ability to use scripting and our own development to automatically link and trigger workflow without manual work.”Besides their top 3 decision criteria (“native” SVN, traceability, flexibility), The FAA team found other factors that have cemented the decision for Polarion.“Polarion takes new feature requests quickly,and that’s great. We’re looking t an even better product now than when we started.”“The ability to duplicate a Work Item as a different type and plug it into any Document is really efficient.”You can work in the Document view to enter new Requirements, then switch to the Table view, add a column for Revision, and easily find requirements that have no revision at all. You can write a query to do that, “Our process is complex. We like the ability to use scripting and our own development to automatically link and trigger workflow without manual work.”“The ability to attach source code changes to requirements is a really nice thing to have, especially for the higher levels of DO-278.”。

C++ 出错时英文翻译

C++ 出错时英文翻译

C++ 出错时英文翻译Ambiguous operators need parentheses -----------不明确的运算需要用括号括起Ambiguous symbol ''xxx'' ----------------不明确的符号Argument list syntax error ----------------参数表语法错误Array bounds missing ------------------丢失数组界限符Array size toolarge -----------------数组尺寸太大Bad character in paramenters ------------------参数中有不适当的字符Bad file name format in include directive --------------------包含命令中文件名格式不正确Bad ifdef directive synatax ------------------------------编译预处理ifdef有语法错Bad undef directive syntax ---------------------------编译预处理undef有语法错Bit field too large ----------------位字段太长Call of non-function -----------------调用未定义的函数Call to function with no prototype ---------------调用函数时没有函数的说明Cannot modify a const object ---------------不允许修改常量对象Case outside of switch ----------------漏掉了case 语句Case syntax error ------------------ Case 语法错误Code has no effect -----------------代码不可述不可能执行到Compound statement missing{ --------------------分程序漏掉"{"Conflicting type modifiers ------------------不明确的类型说明符Constant expression required ----------------要求常量表达式Constant out of range in comparison -----------------在比较中常量超出范围Conversion may lose significant digits -----------------转换时会丢失意义的数字Conversion of near pointer not allowed -----------------不允许转换近指针Could not find file ''xxx'' -----------------------找不到XXX文件Declaration missing ; ----------------说明缺少";" houjiumingDeclaration syntax error -----------------说明中出现语法错误Default outside of switch ------------------ Default 出现在switch语句之外Define directive needs an identifier ------------------定义编译预处理需要标识符Division by zero ------------------用零作除数Do statement must have while ------------------ Do-while语句中缺少while部分Enum syntax error ---------------------枚举类型语法错误Enumeration constant syntax error -----------------枚举常数语法错误Error directive :xxx ------------------------错误的编译预处理命令Error writing output file ---------------------写输出文件错误Expression syntax error -----------------------表达式语法错误Extra parameter in call ------------------------调用时出现多余错误File name too long ----------------文件名太长Function call missing -----------------函数调用缺少右括号Fuction definition out of place ------------------函数定义位置错误Fuction should return a value ------------------函数必需返回一个值Goto statement missing label ------------------ Goto语句没有标号Hexadecimal or octal constant too large ------------------16进制或8进制常数太大Illegal character ''x'' ------------------非法字符xIllegal initialization ------------------非法的初始化Illegal octal digit ------------------非法的8进制数字houjiumingIllegal pointer subtraction ------------------非法的指针相减Illegal structure operation ------------------非法的结构体操作Illegal use of floating point -----------------非法的浮点运算Illegal use of pointer --------------------指针使用非法Improper use of a typedefsymbol ----------------类型定义符号使用不恰当In-line assembly not allowed -----------------不允许使用行间汇编Incompatible storage class -----------------存储类别不相容Incompatible type conversion --------------------不相容的类型转换Incorrect number format -----------------------错误的数据格式Incorrect use of default --------------------- Default使用不当Invalid indirection ---------------------无效的间接运算Invalid pointer addition ------------------指针相加无效Irreducible expression tree -----------------------无法执行的表达式运算Lvalue required ---------------------------需要逻辑值0或非0值Macro argument syntax error -------------------宏参数语法错误Macro expansion too long ----------------------宏的扩展以后太长Mismatched number of parameters in definition ---------------------定义中参数个数不匹配Misplaced break ---------------------此处不应出现break语句Misplaced continue ------------------------此处不应出现continue语句Misplaced decimal point --------------------此处不应出现小数点Misplaced elif directive --------------------不应编译预处理elifMisplaced else ----------------------此处不应出现else houjiumingMisplaced else directive ------------------此处不应出现编译预处理elseMisplaced endif directive -------------------此处不应出现编译预处理endifMust be addressable ----------------------必须是可以编址的Must take address of memory location ------------------必须存储定位的地址No declaration for function ''xxx'' -------------------没有函数xxx的说明No stack ---------------缺少堆栈No type information ------------------没有类型信息Non-portable pointer assignment --------------------不可移动的指针(地址常数)赋值Non-portable pointer comparison --------------------不可移动的指针(地址常数)比较Non-portable pointer conversion ----------------------不可移动的指针(地址常数)转换Not a valid expression format type ---------------------不合法的表达式格式Not an allowed type ---------------------不允许使用的类型Numeric constant too large -------------------数值常太大Out of memory -------------------内存不够用houjiumingParameter ''xxx'' is never used ------------------能数xxx没有用到Pointer required on left side of -> -----------------------符号->的左边必须是指针Possible use of ''xxx'' before definition -------------------在定义之前就使用了xxx(警告)Possibly incorrect assignment ----------------赋值可能不正确Redeclaration of ''xxx'' -------------------重复定义了xxxRedefinition of ''xxx'' is not identical ------------------- xxx的两次定义不一致Register allocation failure ------------------寄存器定址失败Repeat count needs an lvalue ------------------重复计数需要逻辑值Size of structure or array not known ------------------结构体或数给大小不确定Statement missing ; ------------------语句后缺少";"Structure or union syntax error --------------结构体或联合体语法错误Structure size too large ----------------结构体尺寸太大Sub scripting missing ] ----------------下标缺少右方括号Superfluous & with function or array ------------------函数或数组中有多余的"&" Suspicious pointer conversion ---------------------可疑的指针转换Symbol limit exceeded ---------------符号超限Too few parameters in call -----------------函数调用时的实参少于函数的参数不Too many default cases ------------------- Default太多(switch语句中一个)Too many error or warning messages --------------------错误或警告信息太多Too many type in declaration -----------------说明中类型太多houjiumingToo much auto memory in function -----------------函数用到的局部存储太多Too much global data defined in file ------------------文件中全局数据太多Two consecutive dots -----------------两个连续的句点Type mismatch in parameter xxx ----------------参数xxx类型不匹配Type mismatch in redeclaration of ''xxx'' ---------------- xxx重定义的类型不匹配Unable to create output file ''xxx'' ----------------无法建立输出文件xxxUnable to open include file ''xxx'' ---------------无法打开被包含的文件xxxUnable to open input file ''xxx'' ----------------无法打开输入文件xxxUndefined label ''xxx'' -------------------没有定义的标号xxxUndefined structure ''xxx'' -----------------没有定义的结构xxxUndefined symbol ''xxx'' -----------------没有定义的符号xxxUnexpected end of file in comment started on line xxx ----------从xxx行开始的注解尚未结束文件不能结束Unexpected end of file in conditional started on line xxx ----从xxx 开始的条件语句尚未结束文件不能结束Unknown assemble instruction ----------------未知的汇编结构houjiuming Unknown option ---------------未知的操作Unknown preprocessor directive: ''xxx'' -----------------不认识的预处理命令xxx Unreachable code ------------------无路可达的代码Unterminated string or character constant -----------------字符串缺少引号User break ----------------用户强行中断了程序Void functions may not return a value ----------------- Void类型的函数不应有返回值Wrong number of arguments -----------------调用函数的参数数目错''xxx'' not an argument ----------------- xxx不是参数''xxx'' not part of structure -------------------- xxx不是结构体的一部分xxx statement missing ( -------------------- xxx语句缺少左括号xxx statement missing ) ------------------ xxx语句缺少右括号xxx statement missing ; -------------------- xxx缺少分号houjiumingxxx'' declared but never used -------------------说明了xxx但没有使用xxx'' is assigned a value which is never used ----------------------给xxx赋了值但未用过Zero length structure ------------------结构体的长度为零。

广东省广雅中学2023-2024学年高一上学期12月阶段测试英语试卷

广东省广雅中学2023-2024学年高一上学期12月阶段测试英语试卷

广东省广雅中学2023-2024学年高一上学期12月阶段测试英语试卷学校:___________姓名:___________班级:___________考号:___________一、单项选择1.In the dark street, there wasn’t a single person ________ she could turn for help.A.for whom B.to who C.to whom D.for who2.I am always grateful for Mr. Vamp, __________ warning I would have been drown in the river the way __________ other children did.A.with whose; /B.whose; in whichC.without whose;/D.without whose; what3.__________ is reported in the news, Harrison has been arrested who is said __________ his neighbour.A.Which; had harmed B.Which; to have harmedC.As; harming D.As; to have harmed4.John as well as the other children who_______no parents_______ good care of in the village.A.have; is being taken B.have; has takenC.has; is taken D.has; have been taken5.She didn’t expect the great trouble she had ________ the problem ________ currently.A.to solve; discussing B.solving; to be discussedC.to solve; discussed D.solving; being discussed6.---There isn’t anything serious, _________,Doctor?---_________.The children will be all right in a day or two.A.is it; Yes B.isn’t ; NoC.isn’t there; Yes D.is there; No7.Time should be made good use of ______ our lessons well.A.learning B.learned C.to learn D.having learned 8.It was in Beihai Park ______ they made a date for the first time ______ the old couple told us their love story.A.that; that B.where; when C.that; when D.where; that 9.The old man has a son and two daughters, ______ treat him well, ______ made him verysad.A.and none of them; that B.none of them; whichC.and none of whom; that D.none of whom; which10.The teacher proposed that the student ________ an opportunity to further his study________ his excellence.A.giving, because of B.be given, because ofC.was given, because D.give, because二、阅读理解As a language learning enthusiast, I’ve come up with the best apps for learning English from the thousands of mobile apps out there.Best for Pronunciation: ELSA SpeakELSA Speak is probably the best mobile app around for helping you improve your English pronunciation. The app’s greatest strength is its intensive AI feedback, but ELSA also provides mini-training sessions to really perfect your pronunciation. The AI analyzes your recordings based on pronunciation, intonation and fluency then points out exactly which parts sound inaccurate.Best for Immersing in English Videos: FluentUFuentU is a language learning app that teaches you English through authentic videos like news reports, movie scenes and interviews, with learner tools for all levels. Each clip has interactive subtitles so if you’re not sure what a word means, you can hover over it and get in explanation. The app also gives video examples for each word so you can learn vocabulary in context.Best for Practical Topics: BabbelBabbel has you learn and practice English with realistic conversations that surround things that you’re personally interested in. Lessons are short and consist of written and audio versions of the grammar featured in the lessons. Then you are able to complete practice exercises to solidify your understanding.Best for Fun Beginner Lessons: LingodeerLingodeer uses games and short exercises to teach beginner and intermediate English learners. Lingodeer takes a gamified approach to language learning with a goal-oriented curriculum consisting of structured lessons and regular reviews. Lingodeer’s lessons arearranged according to themes, such as sports, weather, parts of the body and shopping. 11.What is the feature of ELSA Speak?A.It analyzes learners’ recordings.B.It provides guidance for learners.C.It gives learners helpful feedback.D.It improves learners’ communication skill.12.Which app provides videos as learning resources?A.FluentU.B.ELSA Speak.C.Babbel.D.Lingodeer. 13.What can learners do with the app Lingodeer?A.Design games.B.Study around a theme.C.Structure lessons.D.Take advanced courses.Hotels were full, local specialties were nearly sold out, tourists came in droves from all parts of China… Rongjiang, a small county of Qiandongnan Miao and Dong autonomous prefecture in Guizhou Province, has recently gone viral for its “Village Super League”.Every week, more than 40,000 audiences flooded to the stadium for the “Village Super League”. As of the end of June, 2023, the championship has had more than 20 billion views on the Internet.“This is a soccer carnival deeply rooted in the soil! I’ve covered varieties of sports events, including the FIFA World Cup and the Olympics. But it is really the first time that I have experienced such a down-to-earth and lively scene as this,” said Han Qiaosheng, a Chinese sports TV commentator.During a match in May, Wu Chuguo, a decoration worker from Liubaitang village, Rongjiang County, scored a classic 40-meter goal that ignited the crowd. Wu said he was influenced by elders in his village and fell in love with the sport. In the county, rural soccer matches have taken place regularly since the 1990s. When there was a lack of facilities, soccer players used barren land as a field and wooden stakes (桩) for the posts. The boundaries were marked out with lime powder. When they can’t gather enough players, they play futsal, a soccer-based game played on a smaller hard court.In recent years, the county has focused on gymnastics, soccer, rock climbing and other sports, which has developed outstanding national and provincial-level sporting talent. What sets the Village Super League apart from other sports events are the vibrant displays of ethnic (民族的) cultures and traditions at the championship.“I hope that more and more tourists will come to Rongjiang to watch the matches, enjoy the folk customs and local cuisine, and have fun,” said Xiong Zhuqing, a melon grower who is also a cheerleader. She says she has sold over 10,000 kilograms of watermelons since the start of the championship. As of June 27, a total of 654 new businesses were established, including 91 in the catering sector, 188 in retail, and 195 in agriculture and food processing, the local government said.By combining tourism with sports, Rongjiang has found a creative and meaningful way to promote rural revival and rural economic growth.14.What can we infer from paragraph 4?A.Rural soccer matches are demanding on facilities.B.Wu Chuguo started the sports event at Liubaitang village.C.Rongjiang County has a deep tradition of soccer matches.D.The small court greatly limits soccer players’ performances.15.What is special about the “Village Super League”?A.It covers players from all walks of life.B.It combines ethnic elements with sports.C.It aims to select athletes for national-level games.D.It has a cheering team consisting of football fans.16.What do the statistics in the last but one paragraph tell us?A.The matches can help promote economic growth.B.Rongjiang’s tourism has fueled the sports industry.C.Selling agricultural products is key to rural vitalization.D.Local companies have offered significant financial support to the matches. 17.Where can we find this passage?A.Science magazine.B.Tourism brochure.C.Newspaper.D.Textbook.It is reported that endangered polar bears are breeding (繁殖) with grizzly bears (灰熊), creating “pizzly” bears, which is being driven by climate change.As the world warms and Arctic sea ice thins, starving polar bears are being forced ever further south, where they meet grizzlies, whose ranges are expanding northwards. And with that growing contact between the two come increasing hybrids (杂交种).With characteristics that could give the hybrids an advantage in warming northern habitats, some scientists guess that they could be here to stay. “Usually, hybrids aren’t better suited to their environments than their parents, but these hybrids are able to search for a broader range of food sources,” Larisa DeSantis, an associate professor of biological sciences at Vanderbilt University, told Live Science.The rise of “pizzly” bears appears with polar bears’ decline: their numbers are estimated to decrease by more than 30% in the next 30 years. This sudden fall is linked partly to “pizzly” bears taking up polar bears’ ranges, where they outcompete them, but also to polar bears’ highly specialized diets.“Polar bears mainly consumed soft foods even during the Medieval Warm Period, a previous period of rapid warming,” DeSantis said, referring to fat meals such as seals. “Although all of these starving polar bears are trying to find alternative food sources, like seabird eggs, it could be a tipping point for their survival.” Actually, the calories they gain from these sources do not balance out those they burn from searching for them. This could result in a habitat ready for the hybrids to move in and take over, leading to a loss in biodiversity if polar bears are replaced.“We’re having massive impacts with climate change on species,” DeSantis said. “The polar bear is telling us how bad things are. In some sense, “pizzly” bears could be a sad but necessary compromise given current warming trends.”18.Why are polar bears moving further south?A.To create hybrids.B.To expand ranges.C.To contact grizzlies.D.To relieve hunger.19.What enables “pizzly” bears to adapt to natural surroundings better than their parents?A.Broader habitats.B.Climate preference.C.More food options.D.Improved breeding ability.20.What does the underlined phrase “a tipping point” in paragraph 5 refer to?A.A rare chance.B.A critical stage.C.A positive factor.D.A constant change.21.What’s the main idea of the text?A.Polar bears are changing diets for climate change.B.Polar bears have already adjusted to climate change.C.“Pizzly” bears have replaced polar bears for global warming.D.“Pizzly” bears are on the rise because of global warming.Is my article mid or valid? If you can answer this question, you already are used to what we term “algospeak.” As more and more online users join social media platforms such as TikTok (抖音), algospeak continues to grow. But what is it, what do these words mean?Algospeak is a coded language or slang used online. For some communities it is the only way to talk safely about sensitive subjects. Due to the rise of algorithmic censorship (算法筛查) in media, algospeak developed as a way to prevent robot from deleting their videos and messages. Users had to get creative to avoid deletion. This means that as long as there is censorship, there will be a new language to avoid it.Further, new slang created on social media platforms fits itself into everyday life. Even if you are not writing a message on TikTok, you may have caught yourself using phrases from the app in your daily life. This connection between people all over the globe allows for shared vocabulary. It also has the potential to completely change the way we as English speakers speak. Here is some new slang popularized by TikTok.Bussin’ (adj.) — something is really good, usually referring to foodMid (adj.) — ordinary, not good or badSheesh (ex.) — response either meaning disbelief or surprise, can be positive or negative Valid (adj.) — something very good or meets a very high standard; a respectable opinion No one would have expected artificial intelligence (AI) would be the catalyst (催化剂) for change but it is. The combination of not only censorship but also high connectivity birthed a new language. Some older generations can’t even understand what the youngest generation says because of the lack of access to the new language. The gap between English before social media and English as it is now is huge, and it continues to grow.So, what should we expect for the future? Will censorship loosen or will the English language continue to develop from digital media? Time can only tell.22.What contributed to the appearance of algospeak?A.The need for netizens to escape censorship.B.The desire for a shared vocabulary around the world.C.The authority’s demand for creating a new language.D.The social media’s intention to catch public attention.23.What can we learn from the last two paragraphs?A.AI needs catalyst to develop.B.Social media birthed a new language.C.Algospeak may cause communication obstacles.D.Strict censorship is a barrier to interpersonal relationships.24.What is the author’s attitude towards the future of algospeak?A.Optimistic.B.Indifferent.C.Doubtful.D.Unclear. 25.What can we infer about algospeak?A.It offers a new outlook on life.B.It has reshaped the digital media.C.Its development is associated with AI.D.It has won popularity among all ages.A cobra (眼镜蛇) was set free on September 10 in a park in Xiangtan City, Southprevent it from hurting anyone.In the name of mercy, some people free captive (关在笼子里的) animals, including foreign species, mostly bought from pet shops or markets. 27 Last April, someone freed hundreds of foxes and raccoon dogs in a Beijing suburb, causing economic losses to animal farmers and endangering the safety of local residents.28 Besides, today people are increasingly aware of the importance of environmental protection and biodiversity. But it is necessary to ensure that acts of freeing captive animals do not violate laws or harm the interests of the public. 29 In order to regulate the release of captive animals, the government revised the law on wildlife conservation in 2016. According to the updated regulations, no individual or organization should harm the public interest by freeing captive animals. 30 A.And any creatures set free should be local species that have no threat to local biodiversity.B.However, their warm-hearted kindness often causes serious consequences.C.Freeing a cobra in a park reflects the troublemaker’s ignorance of other people’s lives.D.Such news has frequently appeared in recent years.E.Local police immediately arrested the troublemaker.F.Influenced by Buddhism, freeing captive animals is an act that deserves respect inChina.G.It is important to protect animals including those set free by people.三、完形填空On the April morning I found out about Lucy’s mother, it rained. I didn’t know what kind of cancer Mrs. Hastings had until later, 31 I knew her condition was very serious.I love Mr. and Mrs. Hastings almost as much as I love my own parents, and Lucy is my best friend. But I didn’t want to go to school that day. And I 32 didn’t want to see Lucy.What could I possibly say to her? What do people say to their friends at such a time? I tried every trick I knew to get out of going to school.But Mom 33 . “You have a history test this morning, Kristin,” she said, looking at me as if she’d crawled into my mind and knew I was just making 34 . Then she smiled. “Be sure to stay close to Lucy, especially today, because that poor girl is going to need your strength.”Strength? What was Mother talking about? I didn’t even know what to say to my best friend. I hid out in the choir room between classes 35 avoiding Lucy, but she was never out of my thoughts. I kept trying to come up with something 36 to say to her because I really wanted to get it right.Lucy and I had last-period English in Mrs. Green’s room. I’d avoided meeting her all day, I was going to have to face her last period, and I still didn’t have a(n) 37 . However, Mrs. Green said, “Kristin, Lucy Hastings is your best friend, and she is coming here in a few minutes to get her lesson assignments.”My heart tightened into a hard knot and I 38 inwardly. I still didn’t know what to say to Lucy, and time was running out.I rushed out of the classroom, racing down the hall and out the front door of school. It had stopped raining, and the air smelt clean and fresh. In the distance I 39 someone coming toward me, stumbling across the campus. I knew it was Lucy 40 I couldn’t see her face. As she was approaching, I noticed how Lucy’s shoulders shook with every step she took. She must be crying. The rain came down 41 . Raindrops mingled with my tears. Lucy’s heart was breaking, and I wasn’t doing a thing to help her!As I drew nearer to her, my throat tightened, making it impossible to 42 . I prayed for 43 and forced myself to move forward, 44 outstretched. “Oh. Kristin!” Lucy cried. “I was hoping it was you.” We hugged then. It was then that I learned something that I might never have grasped in any other way. I’d been always focusing on myself: What should I do? How should I act? What would I say to Lucy? But when we finally came face to face, I forgot myself and centered on Lucy and her needs. Only when I did that was I able to share Lucy’s grief and let her know that she was 45 and that I really cared.31.A.but B.and C.so D.if 32.A.abruptly B.apparently C.surely D.scarcely 33.A.agreed B.insisted C.hesitated D.swore 34.A.efforts B.sense C.excuses D.mistakes 35.A.in the wake of B.in hopes of C.in pursuit of D.in light of 36.A.personal B.discouraging C.unusual D.appropriate 37.A.plan B.explanation C.goal D.clue 38.A.rocked B.calmed C.sprang D.trembled 39.A.realized B.spotted C.greeted D.expected 40.A.in case B.as long as C.even though D.as though 41.A.again B.suddenly C.expectedly D.surprisingly 42.A.tell B.react C.scream D.speak 43.A.determination B.strength C.courage D.fortune 44.A.palms B.legs C.shoulders D.arms 45.A.special B.outstanding C.good D.sad四、语法填空The airport covers an area of about 150 square kilometers, 50 the terminal building (候机楼) alone covering 780,000 square meters, equivalent to 100 standard football fields. It is worth mentioning that the clever design takes passengers 51 (little) than eight minutes to walk to any gate from the center of the terminal.The airport is also 52 “intelligent airport”, with thousands of intelligent devices and tens of thousands of sensors. The airport 53 (inform) center is run with cutting-edge technologies including cloud computing and 5G, 54 (contribute) to passengers’ comfort and convenience.With the help of face check-in, self-service check-in, mobile phone query luggage location… in Beijing Daxing International Airport, you can feel the new travel experience 55 (bring) by high technology. More Chinese miracles are happening in this land.56.This (refer) book, though published years ago, is still up to date.(所给词的适当形式填空)57.There are (variety) activities for the teenagers to choose from. (所给词的适当形式填空)58.It is hard to make a (compare) with her previous book because they are totally different. (所给词的适当形式填空)59.The paintings were in an excellent state of (preserve). (所给词的适当形式填空)60.We believe in (equal) between men and women. (所给词的适当形式填空)五、单词拼写拼写)62.She is giving a detailed (描述) of what she has seen. (根据汉语提示单词拼写)63.She asked for (原谅) for what she had done wrong. (根据汉语提示单词拼写) 64.(尽管) applying for hundreds of jobs, he is still out of work. (根据汉语提示单词拼写)65.At that time, they had to (斗争) against natural disasters. (根据汉语提示单词拼写)66.The test questions are kept secret so as to (to stop something from happening) cheating. (根据英文提示单词拼写)67.The painting reflected the artist’s intelligence and (the ability to create new ideas). (根据英文提示单词拼写)68.Mr. Wang doesn’t agree to my (a plan or suggestion) that we should have a picnic this weekend. (根据英文提示单词拼写)69.He wrote a book to (to record something in writing or on film) his sailing experiences. (根据英文提示单词拼写)70.The task was so (needing a lot of time, ability, and energy) that Alice needs to work hard at it. (根据英文提示单词拼写)六、完成句子七、书信写作81.假如你是新华高中的学生李华,近年全球掀起了学汉语的热潮。

On Lattices, Learning with Errors,Random Linear Codes, and Cryptography

On Lattices, Learning with Errors,Random Linear Codes, and Cryptography

On Lattices,Learning with Errors,Random Linear Codes,and CryptographyOded Regev∗May2,2009AbstractOur main result is a reduction from worst-case lattice problems such as G AP SVP and SIVP to a certain learning problem.This learning problem is a natural extension of the‘learning from parity witherror’problem to higher moduli.It can also be viewed as the problem of decoding from a random linearcode.This,we believe,gives a strong indication that these problems are hard.Our reduction,however,isquantum.Hence,an efficient solution to the learning problem implies a quantum algorithm for G AP SVPand SIVP.A main open question is whether this reduction can be made classical(i.e.,non-quantum).We also present a(classical)public-key cryptosystem whose security is based on the hardness of the learning problem.By the main result,its security is also based on the worst-case quantum hardness ofG AP SVP and SIVP.The new cryptosystem is much more efficient than previous lattice-based cryp-tosystems:the public key is of size˜O(n2)and encrypting a message increases its size by a factor of˜O(n)(in previous cryptosystems these values are˜O(n4)and˜O(n2),respectively).In fact,under theassumption that all parties share a random bit string of length˜O(n2),the size of the public key can bereduced to˜O(n).1IntroductionMain theorem.For an integer n≥1and a real numberε≥0,consider the‘learning from parity with error’problem,defined as follows:the goal is tofind an unknown s∈Z n2given a list of‘equations with errors’s,a1 ≈εb1(mod2)s,a2 ≈εb2(mod2)...where the a i’s are chosen independently from the uniform distribution on Z n2, s,a i =js j(a i)j is theinner product modulo2of s and a i,and each equation is correct independently with probability1−ε. More precisely,the input to the problem consists of pairs(a i,b i)where each a i is chosen independently and∗School of Computer Science,Tel Aviv University,Tel Aviv69978,Israel.Supported by an Alon Fellowship,by the Binational Science Foundation,by the Israel Science Foundation,by the Army Research Office grant DAAD19-03-1-0082,by the European Commission under the Integrated Project QAP funded by the IST directorate as Contract Number015848,and by a European Research Council(ERC)Starting Grant.uniformly from Z n 2and each b i is independently chosen to be equal to s ,a i with probability 1−ε.The goal is to find s .Notice that the case ε=0can be solved efficiently by,say,Gaussian elimination.This requires O (n )equations and poly(n )time.The problem seems to become significantly harder when we take any positive ε>0.For example,let us consider again the Gaussian elimination process and assume that we are interested in recovering only the first bit of s .Using Gaussian elimination,we can find a set S of O (n )equations such that S a i is (1,0,...,0).Summing the corresponding values b i gives us a guess for the first bit of s .However,a standard calculationshows that this guess is correct with probability 12+2−Θ(n ).Hence,in order to obtain the first bit with good confidence,we have to repeat the whole procedure 2Θ(n )times.This yields an algorithm that uses 2O (n )equations and 2O (n )time.In fact,it can be shown that given only O (n )equations,the s ∈Z n 2that maximizes the number of satisfied equations is with high probability s .This yields a simple maximum likelihood algorithm that requires only O (n )equations and runs in time 2O (n ).Blum,Kalai,and Wasserman [11]provided the first subexponential algorithm for this problem.Their algorithm requires only 2O (n/log n )equations/time and is currently the best known algorithm for the problem.It is based on a clever idea that allows to find a small set S of equations (say,O (√n ))among 2O (n/log n )equations,such that S a i is,say,(1,0,...,0).This gives us a guess for the first bit of s that is correct with probability 12+2−Θ(√n ).We can obtain the correct value with high probability by repeating the whole procedure only 2O (√n )times.Their idea was later shown to have other important applications,such as the first 2O (n )-time algorithm for solving the shortest vector problem [23,5].An important open question is to explain the apparent difficulty in finding efficient algorithms for this learning problem.Our main theorem explains this difficulty for a natural extension of this problem to higher moduli,defined next.Let p =p (n )≤poly(n )be some prime integer and consider a list of ‘equations with error’s ,a 1 ≈χb 1(mod p )s ,a 2 ≈χb 2(mod p )...where this time s ∈Z n p ,a i are chosen independently and uniformly from Z n p ,and b i ∈Z p .The errorin the equations is now specified by a probability distribution χ:Z p →R +on Z p .Namely,for each equation i ,b i = s ,a i +e i where each e i ∈Z p is chosen independently according to χ.We denote the problem of recovering s from such equations by LWE p,χ(learning with error).For example,the learning from parity problem with error εis the special case where p =2,χ(0)=1−ε,and χ(1)=ε.Under a reasonable assumption on χ(namely,that χ(0)>1/p +1/poly(n )),the maximum likelihood algorithm described above solves LWE p,χfor p ≤poly(n )using poly(n )equations and 2O (n log n )time.Under a similar assumption,an algorithm resembling the one by Blum et al.[11]requires only 2O (n )equations/time.This is the best known algorithm for the LWE problem.Our main theorem shows that for certain choices of p and χ,a solution to LWE p,χimplies a quantum solution to worst-case lattice problems.Theorem 1.1(Informal)Let n,p be integers and α∈(0,1)be such that αp >2√n .If there exists anefficient algorithm that solves LWE p,¯Ψαthen there exists an efficient quantum algorithm that approximatesthe decision version of the shortest vector problem (G AP SVP )and the shortest independent vectors problem(SIVP )to within ˜O(n/α)in the worst case.The exact definition of ¯Ψαwill be given later.For now,it is enough to know that it is a distribution on Z p that has the shape of a discrete Gaussian centered around 0with standard deviation αp ,as in Figure 1.Also,the probability of 0(i.e.,no error)is roughly 1/(αp ).A possible setting for the parameters is p =O (n 2)and α=1/(√n log 2n )(in fact,these are the parameters that we use in our cryptographic application).Figure 1:¯Ψαfor p =127with α=0.05(left)and α=0.1(right).The elements of Z p are arranged on a circle.G AP SVP and SIVP are two of the main computational problems on lattices.In G AP SVP,for instance,the input is a lattice,and the goal is to approximate the length of the shortest nonzero lattice vector.The best known polynomial time algorithms for them yield only mildly subexponential approximation factors[24,38,5].It is conjectured that there is no classical (i.e.,non-quantum)polynomial time algorithm that approximates them to within any polynomial ttice-based constructions of one-way functions,such as the one by Ajtai [2],are based on this conjecture.One might even conjecture that there is no quantum polynomial time algorithm that approximates G AP SVP (or SIVP)to within any polynomial factor.One can then interpret the main theorem as say-ing that based on this conjecture,the LWE problem is hard.The only evidence supporting this conjecture is that there are no known quantum algorithms for lattice problems that outperform classical algorithms,even though this is probably one of the most important open questions in the field of quantum computing.1In fact,one could also interpret our main theorem as a way to disprove this conjecture:if one finds an efficient algorithm for LWE,then one also obtains a quantum algorithm for approximating worst-case lattice problems.Such a result would be of tremendous importance on its own.Finally,we note that it is possible that our main theorem will one day be made classical.This would make all our results stronger and the above discussion unnecessary.The LWE problem can be equivalently presented as the problem of decoding random linear codes.More specifically,let m =poly(n )be arbitrary and let s ∈Z n p be some vector.Then,consider the followingproblem:given a random matrix Q ∈Z m ×n p and the vector t =Q s +e ∈Z m p where each coordinate of the error vector e ∈Z m p is chosen independently from ¯Ψα,recover s .The Hamming weight of e isroughly m (1−1/(αp ))(since a value chosen from ¯Ψαis 0with probability roughly 1/(αp )).Hence,the Hamming distance of t from Q s is roughly m (1−1/(αp )).Moreover,it can be seen that for large enough m ,for any other word s ,the Hamming distance of t from Q s is roughly m (1−1/p ).Hence,we obtain that approximating the nearest codeword problem to within factors smaller than (1−1/p )/(1−1/(αp ))on random codes is as hard as quantumly approximating worst-case lattice problems.This gives a partial 1If forced to make a guess,the author would say that the conjecture is true.answer to the important open question of understanding the hardness of decoding from random linear codes.It turns out that certain problems,which are seemingly easier than the LWE problem,are in fact equiv-alent to the LWE problem.We establish these equivalences in Section4using elementary reductions.For example,being able to distinguish a set of equations as above from a set of equations in which the b i’s are chosen uniformly from Z p is equivalent to solving LWE.Moreover,it is enough to correctly distinguish these two distributions for some non-negligible fraction of all s.The latter formulation is the one we use in our cryptographic applications.Cryptosystem.In Section5we present a public key cryptosystem and prove that it is secure based on the hardness of the LWE problem.We use the standard security notion of semantic,or IND-CPA,secu-rity(see,e.g.,[20,Chapter10]).The cryptosystem and its security proof are entirely classical.In fact, the cryptosystem itself is quite simple;the reader is encouraged to glimpse at the beginning of Section5. Essentially,the idea is to provide a list of equations as above as the public key;encryption is performed by summing some of the equations(forming another equation with error)and modifying the right hand side depending on the message to be transmitted.Security follows from the fact that a list of equations with error is computationally indistinguishable from a list of equations in which the b i’s are chosen uniformly.By using our main theorem,we obtain that the security of the system is based also on the worst-case quantum hardness of approximating SIVP and G AP SVP to within˜O(n1.5).In other words,breaking our cryptosystem implies an efficient quantum algorithm for approximating SIVP and G AP SVP to within ˜O(n1.5).Previous cryptosystems,such as the Ajtai-Dwork cryptosystem[4]and the one by Regev[36],are based on the worst-case(classical)hardness of the unique-SVP problem,which can be related to G AP SVP (but not SIVP)through the recent result of Lyubashevsky and Micciancio[26].Another important feature of our cryptosystem is its improved efficiency.In previous cryptosystems, the public key size is˜O(n4)and the encryption increases the size of messages by a factor of˜O(n2).In our cryptosystem,the public key size is only˜O(n2)and encryption increases the size of messages by a factor of only˜O(n).This possibly makes our cryptosystem practical.Moreover,using an idea of Ajtai[3],we can reduce the size of the public key to˜O(n).This requires all users of the cryptosystem to share some(trusted) random bit string of length˜O(n2).This can be achieved by,say,distributing such a bit string as part of the encryption and decryption software.We mention that learning problems similar to ours were already suggested as possible sources of cryp-tographic hardness in,e.g.,[10,7],although this was done without establishing any connection to lattice problems.In another related work[3],Ajtai suggested a cryptosystem that has several properties in common with ours(including its efficiency),although its security is not based on worst-case lattice problems.Why quantum?This paper is almost entirely classical.In fact,quantum is needed only in one step in the proof of the main theorem.Making this step classical would make the entire reduction classical.To demonstrate the difficulty,consider the following situation.Let L be some lattice and let d=λ1(L)/n10 whereλ1(L)is the length of the shortest nonzero vector in L.We are given an oracle that for any point x∈R n within distance d of Lfinds the closest lattice vector to x.If x is not within distance d of L, the output of the oracle is undefined.Intuitively,such an oracle seems quite powerful;the best known algorithms for performing such a task require exponential time.Nevertheless,we do not see any way to use this oracle classically.Indeed,it seems to us that the only way to generate inputs to the oracle is the following:somehow choose a lattice point y∈L and let x=y+z for some perturbation vector z of lengthat most d .Clearly,on input x the oracle outputs y .But this is useless since we already know y !It turns out that quantumly,such an oracle is quite useful.Indeed,being able to compute y from x allows us to uncompute y .More precisely,it allows us to transform the quantum state |x ,y to the state |x ,0 in a reversible (i.e.,unitary)way.This ability to erase the contents of a memory cell in a reversible way seems useful only in the quantum setting.Techniques.Unlike previous constructions of lattice-based public-key cryptosystems,the proof of our main theorem uses an ‘iterative construction’.Essentially,this means that instead of ‘immediately’finding very short vectors in a lattice,the reduction proceeds in steps where in each step shorter lattice vectors are found.So far,such iterative techniques have been used only in the construction of lattice-based one-way functions [2,12,27,29].Another novel aspect of our main theorem is its crucial use of quantum computation.Our cryptosystem is the first classical cryptosystem whose security is based on a quantum hardness assumption (see [30]for a somewhat related recent work).Our proof is based on the Fourier transform of Gaussian measures,a technique that was developed in previous papers [36,29,1].More specifically,we use a parameter known as the smoothing parameter,as introduced in [29].We also use the discrete Gaussian distribution and approximations to its Fourier transform,ideas that were developed in [1].Open questions.The main open question raised by this work is whether Theorem 1.1can be dequantized:can the hardness of LWE be established based on the classical hardness of SIVP and G AP SVP?We see no reason why this should be impossible.However,despite our efforts over the last few years,we were not able to show this.As mentioned above,the difficulty is that there seems to be no classical way to use an oracle that solves the closest vector problem within small distances.Quantumly,however,such an oracle turns out to be quite useful.Another important open question is to determine the hardness of the learning from parity with errors problem (i.e.,the case p =2).Our theorem only works for p >2√n .It seems that in order to prove similar results for smaller values of p ,substantially new ideas are required.Alternatively,one can interpret our inability to prove hardness for small p as an indication that the problem might be easier than believed.Finally,it would be interesting to relate the LWE problem to other average-case problems in the liter-ature,and especially to those considered by Feige in [15].See Alekhnovich’s paper [7]for some related work.Followup work.We now describe some of the followup work that has appeared since the original publi-cation of our results in 2005[37].One line of work focussed on improvements to our cryptosystem.First,Kawachi,Tanaka,and Xa-gawa [21]proposed a modification to our cryptosystem that slightly improves the encryption blowup to O (n ),essentially getting rid of a log factor.A much more significant improvement is described by Peikert,Vaikuntanathan,and Waters in [34].By a relatively simple modification to the cryptosystem,they managed to bring the encryption blowup down to only O (1),in addition to several equally significant improvements in running time.Finally,Akavia,Goldwasser,and Vaikuntanathan [6]show that our cryptosystem remains secure even if almost the entire secret key is leaked.Another line of work focussed on the design of other cryptographic protocols whose security is based on the hardness of the LWE problem.First,Peikert and Waters [35]constructed,among other things,CCA-secure cryptosystems (see also [33]for a simpler construction).These are cryptosystems that are secure even if the adversary is allowed access to a decryption oracle (see,e.g.,[20,Chapter 10]).All previous lattice-based cryptosystems (including the one in this paper)are not CCA-secure.Second,Peikert,Vaikuntanathan,and Waters [34]showed how to construct oblivious transfer protocols,which are useful,e.g.,for performing secure multiparty computation.Third,Gentry,Peikert,and Vaikuntanathan [16]constructed an identity-based encryption (IBE)scheme.This is a public-key encryption scheme in which the public key can be any unique identifier of the user;very few constructions of such schemes are known.Finally,Cash,Peikert,and Sahai [13]constructed a public-key cryptosystem that remains secure even when the encrypted messages may depend upon the secret key.The security of all the above constructions is based on the LWE problem and hence,by our main theorem,also on the worst-case quantum hardness of lattice problems.The LWE problem has also been used by Klivans and Sherstov to show hardness results related to learning halfspaces [22].As before,due to our main theorem,this implies hardness of learning halfspaces based on the worst-case quantum hardness of lattice problems.Finally,we mention two results giving further evidence for the hardness of the LWE problem.In the first,Peikert [32]somewhat strengthens our main theorem by replacing our worst-case lattice problems with their analogues for the q norm,where 2≤q ≤∞is arbitrary.Our main theorem only deals with the standard 2versions.In another recent result,Peikert [33]shows that the quantum part of our proof can be removed,leading to a classical reduction from G AP SVP to the LWE problem.As a result,Peikert is able to show that public-key cryptosystems (including many of the above LWE-based schemes)can be based on the classical hardness of G AP SVP,resolving a long-standing open question (see also [26]).Roughly speaking,the way Peikert circumvents the difficulty we described earlier is by noticing that the existence of an oracle that is able to recover y from y +z ,where y is a random lattice point and z is a random perturbation of length at most d ,is by itself a useful piece of information as it provides a lower bound on the length of the shortest nonzero vector.By trying to construct such oracles for several different values of d and checking which ones work,Peikert is able to obtain a good approximation of the length of the shortest nonzero vector.Removing the quantum part,however,comes at a cost:the construction can no longer be iterative,the hardness can no longer be based on SIVP,and even for hardness based on G AP SVP,the modulus p in the LWE problem must be exponentially big unless we assume the hardness of a non-standard variant of G AP SVP.Because of this,we believe that dequantizing our main theorem remains an important open problem.1.1OverviewIn this subsection,we give a brief informal overview of the proof of our main theorem,Theorem 1.1.The complete proof appears in Section 3.We do not discuss here the reductions in Section 4and the cryptosystem in Section 5as these parts of the paper are more similar to previous work.In addition to some very basic definitions related to lattices,we will make heavy use here of the discrete Gaussian distribution on L of width r ,denoted D L,r .This is the distribution whose support is L (which is typically a lattice),and in which the probability of each x ∈L is proportional to exp −π x /r 2 (see Eq.(6)and Figure 2).We also mention here the smoothing parameter ηε(L ).This is a real positive number associated with any lattice L (εis an accuracy parameter which we can safely ignore here).Roughly speaking,it gives the smallest r starting from which D L,r ‘behaves like’a continuous Gaussian distribution.For instance,for r ≥ηε(L ),vectors chosen from D L,r have norm roughly r √n with high probability.Incontrast,for sufficiently small r ,D L,r gives almost all its mass to the origin 0.Although not required for thisFigure 2:D L,2(left)and D L,1(right)for a two-dimensional lattice L .The z -axis represents probability.Let α,p,n be such that αp >2√n ,as required in Theorem 1.1,and assume we have an oracle that solvesLWE p,¯Ψα.For concreteness,we can think of p =n 2and α=1/n .Our goal is to show how to solve thetwo lattice problems mentioned in Theorem 1.1.As we prove in Subsection 3.3using standard reductions,it suffices to solve the following discrete Gaussian sampling problem (DGS):Given an n -dimensional lattice L and a number r ≥√2n ·ηε(L )/α,output a sample from D L,r .Intuitively,the connection to G AP SVP and SIVP comes from the fact that by taking r close to its lower limit √2n ·ηε(L )/α,we can obtain short lattice vectors (of length roughly √nr ).In the rest of this subsection we describe our algorithm for sampling from D L,r .We note that the exact lower bound on r is not that important for purposes of this overview,as it only affects the approximation factor we obtain for G AP SVP and SIVP.It suffices to keep in mind that our goal is to sample from D L,r for r that is rather small,say within a polynomial factor of ηε(L ).The core of the algorithm is the following procedure,which we call the ‘iterative step’.Its input consists of a number r (which is guaranteed to be not too small,namely,greater than √2pηε(L )),and n c samples from D L,r where c is some constant.Its output is a sample from the distribution D L,r for r =r √n/(αp ).Notice that since αp >2√n ,r <r/2.In order to perform this ‘magic’of converting vectors of norm √nr into shorter vectors of norm √nr ,the procedure of course needs to use the LWE oracle.Given the iterative step,the algorithm for solving DGS works as follows.Let r i denote r ·(αp/√n )i .The algorithm starts by producing n c samples from D L,r 3n .Because r 3n is so large,such samples can be computed efficiently by a simple procedure described in Lemma 3.2.Next comes the core of the algorithm:for i =3n,3n −1,...,1the algorithm uses its n c samples from D L,r i to produce n c samples from D L,r i −1by calling the iterative step n c times.Eventually,we end up with n c samples from D L,r 0=D L,r and we complete the algorithm by simply outputting the first of those.Note the following crucial fact:using n c samples from D L,r i ,we are able to generate the same number of samples n c from D L,r i −1(in fact,we couldeven generate more than n c samples).The algorithm would not work if we could only generate,say,n c /2samples,as this would require us to start with an exponential number of samples.We now finally get to describe the iterative step.Recall that as input we have n c samples from D L,r and we are supposed to generate a sample from D L,r where r =r √n/(αp ).Moreover,r is known and guaranteed to be at least √2pηε(L ),which can be shown to imply that p/r <λ1(L ∗)/2.As mentioned above,the exact lower bound on r does not matter much for this overview;it suffices to keep in mind that ris sufficiently larger thanηε(L),and that1/r is sufficiently smaller thanλ1(L∗).The iterative step is obtained by combining two parts(see Figure3).In thefirst part,we construct a classical algorithm that uses the given samples and the LWE oracle to solve the following closest vector problem,which we denote by CVP L∗,αp/r:given any point x∈R n within distanceαp/r of the dual lattice L∗,output the closest vector in L∗to x.2By our assumption on r,the distance between any two points in L∗is greater than2αp/r and hence the closest vector is unique.In the second part,we use this algorithm to generate samples from D L,r .This part is quantum(and in fact,the only quantum part of our proof). The idea here is to use the CVP L∗,αp/r algorithm to generate a certain quantum superposition which,after applying the quantum Fourier transform and performing a measurement,provides us with a sample from D L,r√n/(αp).In the following,we describe each of the two parts in more detail.Figure3:Two iterations of the algorithmPart1:We start by recalling the main idea in[1].Consider some probability distribution D on some lattice L and consider its Fourier transform f:R n→C,defined asf(x)=y∈L D(y)exp(2πi x,y )=Expy∼D[exp(2πi x,y )]where in the second equality we simply rewrite the sum as an expectation.By definition,f is L∗-periodic, i.e.,f(x)=f(x+y)for any x∈R n and y∈L∗.In[1]it was shown that given a polynomial number of samples from D,one can compute an approximation of f to within±1/poly(n).To see this,note that by the Chernoff-Hoeffding bound,if y1,...,y N are N=poly(n)independent samples from D,thenf(x)≈1NNj=1exp(2πi x,y j )where the approximation is to within±1/poly(n)and holds with probability exponentially close to1, assuming that N is a large enough polynomial.By applying this idea to the samples from D L,r given to us as input,we obtain a good approximation of the Fourier transform of D L,r,which we denote by f1/r.It can be shown that since1/r λ1(L∗)one hasthe approximationf1/r(x)≈exp−π(r·dist(L∗,x))2(1)2In fact,we only solve CVPL∗,αp/(√2r)but for simplicity we ignore the factor√2here.(see Figure 4).Hence,f 1/r (x )≈1for any x ∈L ∗(in fact an equality holds)and as one gets away from L ∗,its value decreases.For points within distance,say,1/r from the lattice,its value is still some positive constant (roughly exp (−π)).As the distance from L ∗increases,the value of the function soon becomes negligible.Since the distance between any two vectors in L ∗is at least λ1(L ∗) 1/r ,the Gaussians around each point of L ∗are well-separated.Figure 4:f 1/r for a two-dimensional latticeAlthough not needed in this paper,let us briefly outline how one can solve CVP L ∗,1/r using samples from D L,r .Assume that we are given some point x within distance 1/r of L ∗.Intuitively,this x is located on one of the Gaussians of f 1/r .By repeatedly computing an approximation of f 1/r using the samples from D L,r as described above,we ‘walk uphill’on f 1/r in an attempt to find its ‘peak’.This peak corresponds to the closest lattice point to x .Actually,the procedure as described here does not quite work:due to the error in our approximation of f 1/r ,we cannot find the closest lattice point exactly.It is possible to overcome this difficulty;see [25]for the details.The same procedure actually works for slightly longer distances,namely O (√log n/r ),but beyond that distance the value of f 1/r becomes negligible and no useful information can be extracted from our approximation of it.Unfortunately,solving CVP L ∗,1/r is not useful for the iterative step as it would lead to samples from D L,r √n ,which is a wider rather than a narrower distribution than the one we started with.This is not surprising,since our solution to CVP L ∗,1/r did not use the LWE ing the LWE oracle,we will now show that we can gain an extra αp factor in the radius,and obtain the desired CVP L ∗,αp/r algorithm.Notice that if we could somehow obtain samples from D L,r/p we would be done:using the procedure described above,we could solve CVP L ∗,p/r ,which is better than what we need.Unfortunately,it is not clear how to obtain such samples,even with the help of the LWE oracle.Nevertheless,here is an obvious way to obtain something similar to samples from D L,r/p :just take the given samples from D L,r and divide them by p .This provides us with samples from D L/p,r/p where L/p is the lattice L scaled down by a factor of p .In the following we will show how to use these samples to solve CVP L ∗,αp/r .Let us first try to understand what the distribution D L/p,r/p looks like.Notice that the lattice L/p consists of p n translates of the original lattice L .Namely,for each a ∈Z n p ,consider the setL +L a /p ={L b /p |b ∈Z n ,b mod p =a }.Then {L +L a /p |a ∈Z n p }forms a partition of L/p .Moreover,it can be shown that since r/p is larger。

Coordination of groups of mobile autonomous agents using nearest neighbor rules

Coordination of groups of mobile autonomous agents using nearest neighbor rules

This research was supported by DARPA under its SEC program and by the NSF under its KDI/LIS initiative.
of nearest neighbors change with time as the system evolves. In this paper we provide a theoretical explanation for this observed behavior. There is a large and growing literature concerned with the coordination of groups of mobile autonomous agents. Included here is the work of Czirok et al. [3] who propose one-dimensional models which exhibit the same type of behavior as Vicsek’s. In [4, 5], Toner and Tu construct a continuous ”hydrodynamic” model of the group of agents, while other authors such as Mikhailov and Zanette [6] consider the behavior of populations of self propelled particles with long range interactions. Schenk et al. determined interactions between individual self-propelled spots from underlying reaction-diffusion equation [7]. Meanwhile in modeling biological systems, Gr¨ unbaum and Okubo use statistical methods to analyze group behavior in animal aggregations. In addition to these modelling and simulation studies, research papers focusing on the detailed mathematical analysis of emergent behaviors are beginning to appear. For example, Lui et al. [8] use Lyapunov methods and Leonard et al. [9] and Olfati and Murray [10] use potential function theory to understand flocking behavior while Fax and Murray [11] and Desai et. al. [12] employ graph theoretic techniques for the same purpose. The one feature which sharply distinguishes previously analyses from that undertaken here is that the latter explicitly takes into account possible changes in nearest neighbors over time, whereas the former do not. Changing nearest neighbor sets is an inherent property of the Vicsek model and in the other models we consider. To analyze such models, it proves useful to appeal to well-known results [13, 14] characterizing the convergence of infinite products of certain types of non-negative matrices. The study of infinite matrix products is ongoing [15, 16, 17, 18, 19, 20] and is undoubtedly producing results which will find application in the theoretical study of emergent behaviors. Vicsek’s model is set up in Section 2 as a system of n simultaneous, one-dimensional recursion equations, one for each agent. A family of simple graphs on n vertices is then introduced to characterize all possible neighbor relationships. Doing this makes it possible to represent the Vicsek model as an n-dimensional switched linear system whose switching signal takes values in the set of indices which parameterize the family of graphs. The matrices which are switched within the system turn out to be non-negative with special structural properties. By exploiting these properties and making use of a classical convergence result due to Wolfowitz [13], we prove that all n agents’ headings converge to a common steady state heading provided the n agents are all “linked together” via their neighbors with sufficient frequency as the system evolves. The model under consideration turns out to provide a graphic example of a switched linear system which is stable, but for which there does not exist a common quadratic Lyapunov function. In Section 2.2 we define the notion of an average heading vector in terms of graph Laplacians [21] and we show how this idea leads naturally to the Vicsek model as well as to other decentralized control models which might be used for the same purposes. We propose one such model which assumes each agent knows an upper bound on the number of agents in the group, and we explain why this model has the convergence properties similar to Vicsek’s. In Section 3 we consider a modified version of Vicsek’s discrete-time system consisting of the same group of n agents, plus one additional agent, labelled 0, which acts as the group’s leader. Agent 0 moves at the same constant speed as its n followers but with a fixed heading θ0 . The ith follower updates its heading just as in the Vicsek model, using the average of its own heading plus the headings of its neighbors. For this system, each follower’s set of neighbors can also include 2

编译原理龙书课后部分答案(英文版)

编译原理龙书课后部分答案(英文版)

1) What is the difference between a compiler and an interpreter?∙ A compiler is a program that can read a program in one language - the source language - and translate it into an equivalent program in another language – the target language and report any errors in the source program that it detects during the translation process.∙ Interpreter directly executes the operations specified in the source program on inputs supplied by the user.2) What are the advantages of:(a) a compiler over an interpretera. The machine-language target program produced by a compiler is usually much faster than an interpreter at mapping inputs to outputs.(b) an interpreter over a compiler?b. An interpreter can usually give better error diagnostics than a compiler, because it executes the source program statement by statement.3) What advantages are there to a language-processing system in which the compiler produces assembly language rather than machine language?The compiler may produce an assembly-language program as its output, becauseassembly language is easier to produce as output and is easier to debug.4.2.3 Design grammars for the following languages:a) The set of all strings of 0s and 1s such that every 0 is immediately followed by at least 1.S -> SS | 1 | 01 | ε4.3.1 The following is a grammar for the regular expressions over symbols a and b only, using + in place of | for unions, to avoid conflict with the use of vertical bar as meta-symbol in grammars:rexpr -> rexpr + rterm | rtermrterm -> rterm rfactor | rfactorrfactor -> rfactor * | rprimaryrprimary -> a | ba) Left factor this grammar.rexpr -> rexpr + rterm | rtermrterm -> rterm rfactor | rfactorrfactor -> rfactor * | rprimaryrprimary -> a | bb) Does left factoring make the grammar suitable for top-down parsing?No, left recursion is still in the grammar.c) In addition to left factoring, eliminate left recursion from the original grammar.rexpr -> rterm rexpr‟rexpr‟ -> + rterm rexpr | εrterm -> rfactor rterm‟rterm‟ -> rfactor rterm | εrfactor -> rprimary rfactor‟rfactor‟ -> * rfactor‟ | εrprimary -> a | bd) Is the resulting grammar suitable for top-down parsing?Yes.Exercise 4.4.1 For each of the following grammars, derive predictive parsers and show the parsing tables. You may left-factor and/or eliminate left-recursion from your grammars first.A predictive parser may be derived by recursive decent or by the table driven approach. Either way you must also show the predictive parse table.a) The grammar of exercise 4.2.2(a).4.2.2 a) S -> 0S1 | 01This grammar has no left recursion. It could possibly benefit from left factoring. Here is the recursive decent PP code.s() {match(…0‟);if (lookahead == …0‟)s();match(…1‟);}OrLeft factoring the grammar first:S -> 0S‟S‟ -> S1 | 1s() {match(…0‟); s‟();}s‟() {if (lookahead == …0‟)s(); match(…1‟);elsematch(…1‟);}Now we will build the PP tableS -> 0S‟S‟ -> S1 | 1First(S) = {0}First(S‟) = {0, 1}Follow(S) = {1, $}The predictive parsing algorithm on page 227 (fig4.19 and 4.20) can use this table for non-recursive predictive parsing.b) The grammar of exercise 4.2.2(b).4.2.2 b) S -> +SS | *SS | a with string +*aaa.Left factoring does not apply and there is no left recursion to remove.s() {if(lookahead == …+‟)match(…+‟); s(); s();else if(lookahead == …*‟)match(…*‟); s(); s();else if(lookahead == …a‟)match(…a‟);elsereport(“syntax error”);}First(S) = {+, *, a}Follow(S) = {$, +, *, a}The predictive parsing algorithm on page 227 (fig4.19 and 4.20) can use this table for non-recursive predictive parsing.5.1.1 a, b, c: Investigating GraphViz as a solution to presenting trees5.1.2: Extend the SDD of Fig. 5.4 to handle expressions as in Fig. 5.1:1.L -> E N1.L.val = E.syn2. E -> F E'1. E.syn = E'.syn2.E'.inh = F.val3.E' -> + T Esubone'1.Esubone'.inh = E'.inh + T.syn2.E'.syn = Esubone'.syn4.T -> F T'1.T'.inh = F.val2.T.syn = T'.syn5.T' -> * F Tsubone'1.Tsubone'.inh = T'.inh * F.val2.T'.syn = Tsubone'.syn6.T' -> epsilon1.T'.syn = T'.inh7.E' -> epsilon1.E'.syn = E'.inh8. F -> digit1. F.val = digit.lexval9. F -> ( E )1. F.val = E.syn10.E -> T1. E.syn = T.syn5.1.3 a, b, c: Investigating GraphViz as a solution to presenting trees5.2.1: What are all the topological sorts for the dependency graph of Fig. 5.7?1.1, 2, 3, 4, 5, 6, 7, 8, 92.1, 2, 3, 5, 4, 6, 7, 8, 93.1, 2, 4, 3, 5, 6, 7, 8, 94.1, 3, 2, 4, 5, 6, 7, 8, 95.1, 3, 2, 5, 4, 6, 7, 8, 96.1, 3, 5, 2, 4, 6, 7, 8, 97.2, 1, 3, 4, 5, 6, 7, 8, 98.2, 1, 3, 5, 4, 6, 7, 8, 99.2, 1, 4, 3, 5, 6, 7, 8, 910.2, 4, 1, 3, 5, 6, 7, 8, 95.2.2 a, b: Investigating GraphViz as a solution to presenting trees5.2.3: Suppose that we have a production A -> BCD. Each of the four nonterminals A, B, C, and D have two attributes: s is a synthesized attribute, and i is an inherited attribute. For each of the sets of rules below, tell whether (1) the rules are consistent with an S-attributed definition (2) the rules are consistent with an L-attributed definition, and (3) whether the rules are consistent with any evaluation order at all?a) A.s = B.i + C.s1.No--contains inherited attribute2.Yes--"From above or from the left"3.Yes--L-attributed so no cyclesb) A.s = B.i + C.s and D.i = A.i + B.s1.No--contains inherited attributes2.Yes--"From above or from the left"3.Yes--L-attributed so no cyclesc) A.s = B.s + D.s1.Yes--all attributes synthesized2.Yes--all attributes synthesized3.Yes--S- and L-attributed, so no cyclesd)∙ A.s = D.i∙ B.i = A.s + C.s∙ C.i = B.s∙ D.i = B.i + C.i1.No--contains inherited attributes2.No--B.i uses A.s, which depends on D.i, which depends on B.i (cycle)3.No--Cycle implies no topological sorts (evaluation orders) using the rules5.3.1: Below is a grammar for expressions involving operator + and integer or floating-point operands. Floating-point numbers are distinguished by having a decimal point.1. E -> E + T | T2.T -> num . num | numa) Give an SDD to determine the type of each term T and expression E.1. E -> Esubone + T1. E.type = if (E.type == float || T.type == float) { E.type = float } else{ E.type = integer }2. E -> T1. E.type = T.type3.T -> numsubone . numsubtwo1.T.type = float4.T -> num1.T.type = integerb) Extend your SDD of (a) to translate expressions into postfix notation. Use the binary operator intToFloat to turn an integer into an equivalent float.Note: I use character ',' to separate floating point numbers in the resulting postfix notation. Also, the symbol "||" implies concatenation.1. E -> Esubone + T1. E.val = Esubone.val || ',' || T.val || '+'2. E -> T1. E.val = T.val3.T -> numsubone . numsubtwo1.T.val = numsubone.val || '.' || numsubtwo.val4.T -> num1.T.val = intToFloat(num.val)5.3.2 Give an SDD to translate infix expressions with + and * into equivalent expressions without redundant parenthesis. For example, since both operators associate from the left, and * takes precedence over +, ((a*(b+c))*(d)) translates into a*(b+c)*d. Note: symbol "||" implies concatenation.1.S -> E1. E.iop = nil2.S.equation = E.equation2. E -> Esubone + T1.Esubone.iop = E.iop2.T.iop = E.iop3. E.equation = Esubone.equation || '+' || T.equation4. E.sop = '+'3. E -> T1.T.iop = E.iop2. E.equation = T.equation3. E.sop = T.sop4.T -> Tsubone * F1.Tsubone.iop = '*'2. F.iop = '*'3.T.equation = Tsubone.equation || '*' || F.equation4.T.sop = '*'5.T -> F1. F.iop = T.iop2.T.equation = F.equation3.T.sop = F.sop6. F -> char1. F.equation = char.lexval2. F.sop = nil7. F -> ( E )1.if (F.iop == '*' && E.sop == '+') { F.equation = '(' || E.equation || ')' }else { F.equation = E.equation }2. F.sop = nil5.3.3: Give an SDD to differentiate expressions such as x * (3*x + x * x) involving the operators + and *, the variable x, and constants. Assume that no simplification occurs, so that, for example, 3*x will be translated into 3*1 + 0*x. Note: symbol "||" implies concatenation. Also, differentiation(x*y) = (x * differentiation(y) + differentiation(x) * y) and differentiation(x+y) = differentiation(x) + differentiation(y).1.S -> E1.S.d = E.d2. E -> T1. E.d = T.d2. E.val = T.val3.T -> F1.T.d = F.d2.T.val = F.val4.T -> Tsubone * F1.T.d = '(' || Tsubone.val || ") * (" || F.d || ") + (" || Tsubone.d || ") * (" ||F.val || ')'2.T.val = Tsubone.val || '*' || F.val5. E -> Esubone + T1. E.d = '(' || Esubone.d || ") + (" || T.d || ')'2. E.val = Esubone.val || '+' || T.val6. F -> ( E )1. F.d = E.d2. F.val = '(' || E.val || ')'7. F -> char1. F.d = 12. F.val = char.lexval8. F -> constant1. F.d = 02. F.val = constant.lexval。

Paths, Trees, and Flowers

Paths, Trees, and Flowers

Paths,Trees,and Flowers“Jack Edmonds has been one of the creators of the field of combinatorial optimization and polyhedral combinatorics.His1965paper‘P aths,Trees,and Flowers’[1]was one of the first papers to suggest the possibility of establishing a mathematical theory of efficient combinatorial algorithms...”[from the award citation of the1985John von Neumann Theory Prize,awarded annually since1975by the Institute for Operations Research and the Management Sciences].In1962,the Chinese mathematician Mei-Ko Kwan proposed a method that could be used to minimize the lengths of routes walked by mail carriers.Much earlier, in1736,the eminent Swiss mathematician Leonhard Euler,who had elevated calculus to unprecedented heights,had investigated whether there existed a walk across all seven bridges that connected two islands in the river Pregel with the rest of the city of Ko¨nigsberg on the adjacent shores,a walk that would cross each bridge exactly once.Different as they may sound,these two inquiries have much in common.They both admit schematic represen-tations based on the concept of graphs.Thus they are problems in graph theory,a twentieth century discipline which combines aspects of combinatorics and topology. Given a set of items called nodes and a second set of items called arcs,a graph is defined as a relationship between such sets:For each arc,two of the nodes are specified to be joined by this arc.The actual,say physical,meaning of such nodes and arcs is not what distinguishes graphs.Only formal relationships count. In particular,the bridges of Ko¨nigsberg may be consid-ered as arcs,each joining a pair of the four nodes whichcorrespond to the two islands and the two shores. The commonality between the two graph-theoretical problem formulations reaches deeper.Given any graph, for one of its nodes,say i0,find an arc a1that joins node i0and another node,say i1.One may try to repeat this process and find a second arc a2which joins node i1to a node i2and so on.This process would yield a sequence of nodes—some may be revisited—successive pairs of which are joined by arcs(Fig.3).The first andthe last node in that sequence are then connected by this“path.”Fig.2.The graph of the bridges of Ko¨nigsberg.Fig.1.The bridges of Ko¨ningsberg.Fig.3.Open and closed paths in the graph.(Think of a route along street segments joining inter-sections.)If each pair of nodes in a graph can be con-nected by a path,then the whole graph is considered connected.A path in a graph is called closed if it returns to its starting point(Fig.3).Mei-Ko Kwan’s“Chinese Postman Problem,”as it is now generally called(since first suggested by Alan Goldman,then of NBS),is to determine,in a given connected graph,a shortest closed path which traverses every arc at least once—delivers mail on every assigned street block.Euler also considers closed paths meeting all arcs,but aims at characterizing all graphs which accommodate an Euler tour,a closed path that crosses each arc exactly once.As Euler found,they are precisely those connected graphs in which each node attaches to an even number of arcs,or in other words,every node is of even degree.By this result,there can be no Euler tour over the bridges of Ko¨nigsberg.Euler’s examination typifies the classical combinatorial query:Do certain constructs exist and if so,how many?The following question also illustrates the flavor of classical combinatorics:how many isomers are there of the hydrocarbon C n H2n+2?Each such isomer(Fig.4)is characterized by the joining patterns of the carbon atoms as nodes and bonds as arcs.The corresponding arrangements,therefore,represent graphs.These graphs have a very special property:For any two of their nodes there exists a unique connecting path without repeat arcs.Such a graph is called a“tree.”Every isomer defines a tree of carbon atoms whose nodes are of degree four or less.Conversely,every such tree uniquely characterizes an isomer.In graph-theoret-ical terms,the question is:How many trees with n nodes are there with node degrees not exceeding four?(For n=40,the number is62,481,801,147,341,as deter-mined by ingenious use of“generating functions.”) Here is one more topic in the same vein.A match-maker has a list of k single men and l single women.Her experience of many years enables her to discern which pairs are compatible.She considers a number of com-patible matches that might be arranged simultaneously. How can she be sure that she arrived at the largest possible number of such matches?A theorem by one of the founders of graph theory, the Hungarian mathematician De´nes Ko¨nig in1931, suggests an answer to that question.He considered theFig.4.The carbon graphs of the isomers of C6H14.Fig.5.A matching in a graph;two of the augmenting paths arehighlighted by........graph whose nodes are the matchmaker’s clients and whose arcs join the compatible pairs,leading to the graph-theoretical concept of matching.A matching in a graph is any subset of its arcs such that no two arcs in the matching meet at a common node(Fig.5). Similar concepts are that of a packing or independent set,namely,a subset of the nodes no two of which are connected by an arc,and of a cover,namely,a subset of the nodes that meets every arc.The matchmaker’s graph has the special property that each of its nodes is of one of two kinds,male or female, and every arc connects nodes of different gender.For such a bipartite graph,Ko¨nig’s theorem states that the number of arcs in a maximum matching equals the number of nodes in a minimum cover.However,this deep result ignores the problem of actually finding a minimum cover in order to prove the optimality of a matching.Given a particular matching in any graph,an exposed node is one that is not met by any arc of the matching.An alternating path,that is,a path whose every other arc is part of the matching,is called augmenting if it connects two exposed nodes.Indeed,switching arcs in and out of the matching along an augmenting path results in a new matching with one more arc in it.It is a1957theorem by the French mathematician Claude Berge that a larger matching exists not merely if,but also only if,the current matching admits an augmenting path.The classical graph theorist would look at this elegant characterization of maximum matchings and ask:what more needs to be said?That outlook had been changing during and after W orld W ar II.The extensive planning needs,military and civilian,encountered during the war and post-war years now required finding actual solutions to many graph-theoretical and combinatorial problems,but with a new slant:Instead of asking questions about existence or numbers of solutions,there was now a call for “optimal”solutions,crucial in such areas as logistics, traffic and transportation planning,scheduling of jobs, machines,or airline crews,facility location,microchip design,just to name a few.George B.Dantzig’s celebrated“Simplex Algorithm”for linear program-ming was a key achievement of this era.The Chinese Postman is a case in point.He does not care about how many tours of his street blocks there are to choose from nor whether there are indeed Euler tours. He cares about delivering his mail while walking the shortest possible distance.(Admittedly,mail carriers, Chinese and otherwise,don’t really think in terms of the Chinese Postman Problem.But how about garbage collection in a big city?)And then there were computers,and the expectation that those electronic wizards would handle applied combinatorial optimizations even if faced with the large numbers of variables which typically rendered manual computations impractical.Enter Jack Edmonds.Edmonds did his undergradu-ate work at the George W ashington University.He recounts—as one of the pioneers featured in the volume History of Mathematical Programming:A Collection of P ersonal Reminiscences[20]Ϫhis varied interests and activities of that period.Thus he designed toys and games with expectations of monetary rewards, which unfortunately did not materialize.A stint as a copy boy at the W ashington Post found him at the night desk during President Eisenhower’s heart attack in1955.Mathematics,however,survived as his over-riding interest.Fascinated by the study of polytopes by the Canadian Mathematician H.S.M.Coxeter, Edmonds’master thesis at the University of Maryland (1959)addressed the problem of embedding graphs into surfaces[6].Fig.6.Jack Edmonds.In1959he joined NBS,became a founding member of Alan Goldman’s newly created Operations Research Section in1961,and was steered towards precisely the endeavor of developing optimization algorithms for problems in graph theory and combinatorics.In particular,he was drawn to two fundamental problems: the“Maximum Packing Problem”and the“Minimum Cover Problem”of determining largest packings and smallest covers,respectively,in a given graph[2]. Edmonds quickly recognized the major challenge of that task,a challenge that he called“the curse of exponentiality.”While plainly“finite,”many of the known graph-theoretical algorithms required exponen-tial effort,or their authors had not detailed their proce-dures in a way that avoided such exponentiality. Consider the“Maximum Matching Problem”of finding the largest matching in a given graph.Recall that a matching can be enlarged whenever there is an augmenting path,that is,an alternating path connecting two exposed nodes.As a consequence,there exists an “algorithm”which in a finite number of steps—every-thing is finite here—determines whether a matching is maximum or shows how to improve it.“All”that’s needed is to examine,at each step,the alternating paths originating at exposed nodes.It’s just that there are so darn many of them!In a general framework,computational problems have a natural size,for instance,the number of nodes or arcs in a graph.For algorithms,computational effort can then be defined as the number of basic computational steps,such as individual additions or multiplications, and depends on problem size.If computational effort increases as fast or faster than exponentially with problem size,then it is said to require exponential effort. P olynomial time,by contrast,—Edmonds used the term good—prevails if computational effort is bounded by a power d of problem size n.The notation O(n d)is used to indicate that level of complexity.Regardless of technological progress,computers are helpless when faced with exponential effort.T o wit, the“Traveling Salesman Problem.”Here a connected graph is given along with a particular length for each arc.What is wanted is a closed path of least total length that visits every node.The sequence of visits essentially defines such a round trip.Consider,for instance,the 48state capitals of the contiguous United States and W ashington,DC,as nodes of a graph with an arc between any two of them.From a base in W ashington, all those capitals are to be visited while minimizing the total distance traveled.Thus there are48!(factorial) round trips in the running.A future supercomputer, spending1nanosecond per trip,would require more than3ϫ1044years to examine all possible trips.Stressing the integral role of complexity,Edmonds became the leading proponent of a new direction:to develop good algorithms for problems in graph theory and combinatorics(or to identify problems for which such algorithms can be proven not to exist).This has spawned a new area of research that has grown and flourished for four decades and is still going strong.It is not by accident that a graph-theoretical optimization problem,namely,the Traveling Salesman Problem,is now frequently used as a complexity standard,calling a problem NP-complete if it requires computational effort equivalent to that of the Traveling Salesman Problem. In1961,while attending a summer workshop at the RAND corporation in Santa Monica,California, Jack Edmonds discovered a good algorithm for the Maximum Matching Problem,whose complexity he conservatively pegged at O(n4).That algorithm is described in the paper P aths,Trees, and Flowers[1].In its title,the words“paths”and “trees”refer to the standard graph-theoretical concepts. The algorithms augmenting paths are found by a“tree search”combined with a sophisticated process of shrinking certain subgraphs called blossoms into single nodes of a reduced graph.Hence the term“flowers”in the title.Why was it a breakthrough?The answer is that all good graph-theoretical algorithms known at the time addressed“unimodular”problems such as the“Shortest Path”and“Network Flow”problems,the rigorous proof for the latter having been given by Edmonds with collab-oration by Richard M.Karp[13].These are problems that could be formulated as integrality-preserving linear programs,which by themselves did not create good algorithms but indicated the potential for such. Edmonds’matching algorithm was the very first instance of a good algorithm for a problem outside that mold.In addition,P aths,Trees,and Flowers contributed a major theoretical result:a generalization of Ko¨nig’s theorem that holds for matchings in all kinds of graphs, not just bipartite ones.Edmonds also conjectured in this paper that both the Maximum Packing Problem and the Minimum Cover Problem were intrinsically harder than the Maximum Matching Problem.Indeed,both of the former problems were subsequently shown to be NP-complete.In one of his seminal papers[3-10]published in the Journal of Research of the National Bureau of Standards,Edmonds[7]extended his algorithm to find matchings that optimize the sum of given arc weights, and,perhaps more importantly,he laid the foundationfor a polyhedral interpretation of matching theory which was pursued,for instance,in the doctoral thesis by William R.Pulleyblank[15]advised by Edmonds. Subsequently,Edmonds found other good algorithms, for instance,in his path-breaking research on combina-torial abstractions of matrices,called matroids[4,12, 14-17,19].And,last but not least,he and Ellis L. Johnson used the matching paradigm to arrive at a first good algorithm for the Chinese Postman Problem[18]. In1969,Edmonds accepted a professorship of mathematics at the University of W aterloo,where a list of distinguished doctoral students is testament to his special gift of guiding and motivating young mathemati-cians.He remains to this day an active and highly influential researcher in the field of graph theory and combinatorics.Why is it important to identify even a few graph-theoretical and combinatorial problems with good solution algorithms,when there is such a great variety of real-life optimization tasks,most of them defined in a less clear-cut fashion?The utility of good algorithms for idealized problems and their theory is that they sug-gest generalizations,variations,promising avenues of attack,treatable approximations,iterative applications, and also flag problem formulations best to avoid.In all these roles,Edmonds’matching algorithm has been an indispensable and inspirational part of the toolkit for combinatorial optimization and its multiple applications to modern technology.Prepared by Christoph Witzgall with help by Ronald Boisvert,Geraldine Cheok,Saul Gass,Alan Goldman, and James Lawrence.Bibliography[1]Jack Edmonds,Paths,Trees,and Flowers,Canad.J.Math.17,449-467(1965).[2]Jack Edmonds,Covers and Packings in a Family of Sets,Bull.Amer.Math.Soc.68,494-499(1962).[3]Jack Edmonds,Existence of k-Edge Connected Ordinary Graphswith Prescribed Degrees,J.Res.Natl.Bur.Stand.68B,73-74 (1964).[4]Jack Edmonds,Minimum Partition of a Matroid into Indepen-dent Subsets,J.Res.Natl.Bur.Stand.69B,67-72(1965).[5]Jack Edmonds,Lehman’s Switching Game and a Theorem ofTutte and Nash-Williams,J.Res.Natl.Bur.Stand.69B,73-77 (1965).[6]Jack Edmonds,On the Surface Duality of Linear Graphs,J.Res.Natl.Bur.Stand.69B,121-123(1965).[7]Jack Edmonds,Maximum Matching and a Polyhedron with0,1-V ertices,J.Res.Natl.Bur.Stand.69B,125-130(1965). [8]Jack Edmonds and D.R.Fulkerson,Transversals and MatroidPartition,J.Res.Natl.Bur.Stand.69B,147-153(1965). [9]Jack Edmonds,Optimum Branchings,J.Res.Natl.Bur.Stand.71B,233-240(1967).[10]Jack Edmonds,Systems of Distinct Representatives and LinearAlgebra,J.Res.Natl.Bur.Stand.71B,241-245(1967). [11]Jack Edmonds,Matroid Partition,in Mathematics of the Deci-sion Sciences,P art I,(Proceedings of the Fifth Summer Seminar on the Mathematics of the Decision Sciences,Stanford Univer-sity,Stanford,CA,1967),American Mathematical Society, Providence,RI(1968)pp.335-345.[12]Jack Edmonds,Submodular Functions,Matroids,and CertainPolyhedra,in Combinatorial Structures and their Applications (Proceedings of the Calgary International Conference on Combi-natorial Structures and their Applications,University of Calgary, Calgary,Alberta,Canada,1969),Gordon and Breach,New Y ork (1970)pp.69-81.[13]Jack Edmonds and Richard M.Karp,Theoretical Improvementsin Algorithmic Efficiency for Network Flow Problems,in Combinatorial Structures and their Applications(Proceedings of the Calgary International Conference on Combinatorial Structures and their Applications,University of Calgary, Calgary,Alberta,Canada,1969),Gordon and Breach,New Y ork (1970)pp.93-96.[14]Jack Edmonds,Matroids and the Greedy Algorithm,Math.Programming1,127-136(1971).[15]W.Pulleyblank and Jack Edmonds,Facets of1-MatchingPolyhedra,in Hypergraph Seminar(Proceedings of the First W orking Seminar on Hypergraphs,Ohio State Univ.,Columbus, Ohio,1972)Lecture Notes in Mathematics V ol.411,Springer-V erlag,Berlin(1974)pp.214-242.[16]Jack Edmonds,Edge-Disjoint Branchings,in CombinatorialAlgorithms(Courant Computer Science Symposium9,Naval Postgraduate School,Monterey,CA,1972),Algorithmics Press, New Y ork(1973)pp.91-96.[17]Peyton Y oung and Jack Edmonds,Matroid Designs,J.Res.Natl.Bur.Stand.77B,15-44(1973).[18]Jack Edmonds and Ellis L.Johnson,Matching,Euler T ours,andthe Chinese Postman,Math.Programming5,88-124(1973).[19]Jack Edmonds,Matroid Intersection,in Discrete Optimization I(Proceedings of the Advanced Research Institute on Discrete Optimization and Systems Applications,Alberta,Canada,1977);Ann.Discrete Math.4,39-49(1979).[20]Jan Karel Lenstra,Alexander H.G.Rinnooy Kan,AlexanderSchrijver,History of Mathematical Programming,A Collection of P ersonal Reminiscences,North-Holland,Amsterdam(1991).。

Optimal

Optimal

Optimal spaced seeds for faster approximate stringmatchingMartin Farach-Colton∗Gad ndau†S.Cenk Sahinalp‡Dekel Tsur§AbstractFiltering is a standard technique for fast approximate string matching in practice.Infiltering,a quickfirst step is used to rule out almost all positions of a text as possiblestarting positions for a pattern.Typically this step consists offinding the exact matchesof small parts of the pattern.In the followup step,a slow method is used to verify oreliminate each remaining position.The running time of such a method depends largelyon the quality of thefiltering step,as measured by its false positives rate.The qualityof such a method depends on the number of true matches that it misses,that is,on itsfalse negative rate.A spaced seed is a recently introduced type offilter pattern that allows gaps(i.e.don’t cares)in the small sub-pattern to be searched for.Spaced seeds promise toyield a much lower false positives rate,and thus have been extensively studied,thoughheretofore only heuristically or statistically.In this paper,we show how to design almost optimal spaced seeds that yield no false negatives.Keywords pattern matching,Hamming distance,seeds designAMS subject classifications68Q25,68R151IntroductionGiven a pattern string P of length m,a text string T of length ,and an integer k,the approximate pattern matching problem is tofind all substrings of T whose edit distance or Hamming distance to P is at most k.The basic idea employed in many approximate pattern matching algorithms[7,14]andcommonly used software tools such as BLAST[1]isfiltering based on the use of the pigeon-hole principle:Let P and S be two strings with edit distance or Hamming distance at most k.Then P and S must have identical substrings(contiguous blocks)whose sizes are at least(m−k)/(k+1) .This simple observation can be used to perform efficient approximate pattern matching through the following approach.(i)Anchorfinding:Choose some b≤ (m−k)/(k+1) .Consider each substring of P ofsize b,andfind all of its exact occurrences in T.(ii)Anchor verification:Verify whether each initial exact match extends to a complete ap-proximate match,through the use of(a localized)dynamic program or any other appropriatemethod.When the text string T is available off-line,the anchorfinding step above can be imple-mented very efficiently:(i)build a compact trie of all substrings in T of size b;(ii)search each substring in P of size b on the compact trie.By the use of suffix links,the compact trie can be built in O( )time and the anchorfinding step can be completed in O(m)time,both independent of the size of b.The running time of the anchor verification step depends on the specific method for extending an initial exact match and the value of b.As b increases,the number of false positives is expected to decrease,but if b> (m−k)/(k+1) some actual occurrences of the pattern may be missed,yielding false negatives.In the remainder of this paper we will focus onfiltering processes that yield to no falsenegatives,except as noted.Under this constraint,much of the literature on pattern matchingviafiltering focuses on improving the specific method for verifying anchors.The fastest approximate pattern matching algorithms based onfiltering have a running time of O( (1+ poly k·polylogk log k)[2].1.1The performance of thefiltering approachAlthough thefiltering approach does not always speed up pattern matching,it is usually quite efficient on high-entropy texts,such as those in which each character is drawn uniformly at random from the input alphabet(of sizeσ).Given a pattern P,suppose that the text string T is a concatenation of(1)actual matches of P:substrings of size m whose Hamming distance to P is at most k;(2)high entropy text:long stretches of characters,determined uniform i.i.d.from the input alphabet.On this T and P we can estimate the performance of the anchor verification step and thus thefiltering approach in general as follows.Let the number of actual matches be#occ. Each such match(i.e.true positive)will be identified as an anchor due to the pigeonhole principle.There will be other anchors yielding false positives.It is the expected number of false positives which will determine the performance of the anchor verification step and thus2the overall algorithm.The false positives,i.e.substrings from the high entropy text thatwill be identified as anchors,can be calculated as follows.The probability that a substringT[i:i+m−1]is identified as an anchor,i.e.has a block of size b which exactly matches its corresponding block in P,is≤mσ−b.The expected number of anchors from the highentropy text is thus≤ mσ−b.This implies that the running time of thefiltering approach is proportional to#occ+ mσ−b as well as the time required to verify a given anchor.The above estimate of the performance of thefiltering approach is determined mostlyby problem specific parameters,#occ, ,m or the time for verifying a given anchor,none of which can be changed.There is only one variable,b,that can be determined by thefilterdesigner.Unfortunately,in order to avoid false negatives b must be at most (m−k)/(k+1) .It is possible to relax the above constraint on b by performingfiltering through the use of non-contiguous blocks,namely substrings with a number of gaps(i.e.don’t care symbols).Tounderstand why searching with blocks having don’t care symbols can help,consider the case when k=1,that is,we allow one mismatch.When contiguous blocks are used,the maximumvalue of b is (m−1)/2 .Now consider using a block of size b+1with one don’t care symbolin the center position.How large can b be while guaranteeing that each substring of T with a single mismatching character with P will be found?No matter where the mismatch occurs,we are guaranteed that such a substring can be found even when b=2 m/3 −1.This is a substantial improvement over ungapped search,where b≤ (m−1)/2 ,reducing the timespent on false positives by a factor of≈σm/6.1.2Previous workThe idea of using gappedfilters(which are called spaced seeds)was introduced in[6],and the problem of designing the best possible seed wasfirst posed in[5].The design of spaced seedshas been extensively studied for the case when thefilter is allowed to have false negatives (lossyfilter),e.g.[3,4,9,13].In this scenario,the goal is tofind a seed that minimizes thefalse negatives rate,or in other words,a seed that maximizes the hit probability.In theabove papers,the seed design problem is solved using computational methods,namely,an optimal seed is found by enumerating all possible seeds and computing the hit probabilityof each seed.More recent work study the design of multiple spaced seeds,and also use computational methods[12,15,16].Unlike the case of one seed,going over all possible setsof seeds is impractical.Therefore,heuristic methods are used tofind a set of seeds whichmay be far from optimal.For some applications,it is desirable to construct seeds that are lossless,i.e.,seeds thatfind all matches of any pattern P of length m under Hamming distance k.Alternatively,one can want tofind all matches of any pattern P of length m ≥m within error rate k/m. The lossless seed design problem was studied in[5],and was solved using computationalmethods.Burkhardt and K¨a rkk¨a inen posed the question how tofind optimal seeds more directly.This combinatorial seed design problem has remained open both in the lossless and the lossycase.31.3Our contributionsIn this paper we study the combinatorial seed design problem in the lossless case.We give explicit design of seeds that are(almost)optimal for high entropy texts.Our specific results are as follows.The combinatorial seed design problem has four parameters:minimum pattern length m,the number of“solid”symbols in the seed b,the number of“don’t care”symbols in the seed g,and the maximum number of allowed errors between the pattern and its match k.We denote by n the seed length,namely n=g+b. One can optimize any one of the parameters,given the values of the other three parameters. In this paper we focus on two variants of this optimization problem:1.We study the following problem:Given m,n,and g,what is the spaced seed(of lengthn and with g don’t cares)that maximizes the number of allowed errors k?i.e.we want tofind a seed which guarantees that all matches of a pattern of length m within Hamming distance k are found,for the largest possible value of k.Our result for this problem is explicit construction of seeds(for various values of m,n,and g)which we prove to be almost optimal.2.More interestingly,given the number of errors k and minimum pattern length m,weare interested in the seed with largest possible b such that b+g=n≤m,which guarantees no false negatives for matches with at most k errors.Clearly this seed minimizes the time spent on false positives and thus maximizes the performance of the filtering approach for any given pattern of size m with k errors.Again,we give explicit construction of seeds that are almost optimal.Ourfinal result is on the design of multiple seeds:For anyfixed pattern length m and number of errors k(alternatively minimum pattern length m≤m and error rate k/m),we show that by the use of s≥m1/k seeds one can guarantee to improve on the maximum size of b achievable by a single seed.We note that the problem of maximizing b for the case when k is constant was solved in[10].Moreover,[10]considers the problem of maximizing n forfixed k,m,and g,and solves it for the case g=1.This problem is analogous to the problem of maximizing k.Our results are more general:We study the problem of maximizing k for wider range of g,and the problem of maximizing b for wider range of k.2PreliminariesFor the remainder of the paper,the letters A and B will be used to denote strings over the alphabet{0,1}.For a string A and an integer l,A l denotes the concatenation of A l times. Let Zeros(A)be the number zeros in the string A.A[i:j]denotes the substring of A that starts at the i-th character of A and ends at the j-th character.For two strings A and B of equal lengths,we write A≤B if A[i]≤B[i]for all i= 1,...,|A|.Note that this differs from lexicographic ordering.We say that a string A covers a string B if there is a substring B of B of length|A|such that A≤B .In words,A covers4B if we can align A against B such that every zero in the aligned region of B,is aligned against a zero in A.We will say that such an alignment covers B.The connection between the above definitions and the seed design problem is as follows: a seed of length n will be represented by a string A of length n such that A[i]=0if the i-th symbol of the seed is a don’t care,and A[i]=1otherwise.Given a pattern string P and a substring T of some text T of length m,let B be a string of length m,such that B[i]=0 if P[i]=T [i],and B[i]=1otherwise.Then,thefiltering algorithm using seed A willfind a match between P and T if and only if A covers B.We define k(n,g,m)to be the maximum k such that there is a string A of length n containing g zeros that covers every string B of length m with at most k zeros.In other words,for a seed length of n with g don’t cares,k(n,g,m)is the maximum possible number of errors between any P and any substring T of length m that is guaranteed to be detected by the best possible seed.Also,b(k,m)is the maximum b such that there is a string A with b ones that covers every string B of length m with at most k zeros.In other words,given the maximum number of errors k,b(k,m)is the maximum number of solid symbols one can have in a seed so that a match between any P and T with k errors could be detected by the best possible seed.In the next sections we will give upper and lower bounds on k(n,g,m)and b(k,m)for various values of parameters,effectively solving the combinatorial seed design problem.Our lower bounds on k(n,g,m)and are proved by giving explicit constructions of a seeds with desired value of k.The lower bounds on b(k,m)are proved using explicit constructions and probabilistic arguments.The respective upper bounds on k(n,g,m)and b(k,m)show that the seeds we construct are almost to optimal.3Spaced seeds that maximize kWefirst present our results on how to design a seed that maximizes the number of errors k when the other parameters n,g and m arefixed.In other words,we describe a seed with length n and g don’t care symbols that guarantees tofind all substrings T of size m whose Hamming distance to pattern P is as high as possible.Our results also extend to the problem of maximizing the error rate k /m for any pattern P of length m ≥m withfixed n and g.3.1Maximizing k for constant gTheorem1.For everyfixed g,k(n,g,m)=(2−1n±O(max(1,m2g+1,2·n2g+1,and ones elsewhere.5A1010111111 B 101111111111(a)A1010111111B 110111111111(b)A1010111111B 111110111111(c)Figure1:An example for the proof of the lower bound in Theorem1.Suppose that n=10 and g=2.Then,A=1010111111.If B is a string of length m with at least mg+1)·m2−1/(g+1).Lemma2.Let B be a string of length at least n with at most(2−1n−3zeros.Then, either there is a substring of B of length n containing no zeros,or there is a substring Bof B of length2L containing exactly one zero,and the zero appears in thefirst L characters of B .Proof.We prove the lemma using induction on|B|.The base of the induction is when |B|≤n+2L.In this case we have that B contains no zeros(since(2−1n−3<1). Now,suppose that B is a string of length greater than n+2L.W.l.o.g.B contains at least two zeros,and let x1and x2be the indices of the rightmost zero and second rightmost zero in B,respectively.If|B|−x1+1≤L,then the prefix of B of length x1−1contains at most (2−1n−4≤(2−1n−3ones,and by the induction hypothesis we have that B satisfies the statement of the lemma.If|B|−x2+1≤2L,then the prefix of B of length x2−1contains at most(2−1n−5≤(2−1n−3ones,and again we use the induction hypothesis.If neither of these two cases above occurs,we take B to be the suffix of B of length2L.Now we can complete the proof for the lower bound.Let B be a string of length m with at most(2−1n−3zeros.If B contains a substring of length n with no zeros,then clearly A≤B.Otherwise,let B be a substring of B of length2L that contains exactly onezero,and this zero appears in thefirst L characters of B .There are2L−n+1ways to alignA againstB ,and at least one of these alignments cover B (see Figure1for an example).More precisely,let j be the index such that B [j]=0,and let s be the integer for whichs2g+1n.Note that s≤g since j≤L.For s=0,the alignment of A and Bin which A[1]is aligned with B [j+1]covers B .For s≥1,the alignment of A and B in which the s-th zero in A is aligned against B [j]covers B6For every n which is not divisible by2g+1,let A be the string constructed above for the length n =(2g+1) n/(2g+1) ,and let A be the prefix of length n of A (if A contains less than g zeros,arbitrarily change some ones into zeros).A covers every string that is covered by A .Therefore,the seed A described above covers any string B of length m with k= 2−1n −3zeros or less.Thus we get the following bound for the maximum value of k that is achievable by any seed:k(n,g,m)≥ 2−1n −3≥ 2−1n+2g−3= 2−1n−(2−1n(n+2g)−3.Upper bound.We now give an upper bound on k(n,g,m),demonstrating that no seed can achieve a much higher value for k than the seed we described above.Let A be any seed of length n with g zeros.We will construct strings B0,...,B g that are not covered by A,+O(max(1,msuch that at least one of these strings has at most(2−1nn,then d0≥g+12g+1n−g22g+1(a)(b)Figure 2:An example for the proof of the upper bound in Theorem 1.Figure (a)shows a string A with two zeros,and the values of z 1and z 2.Figure (b)shows the corresponding strings B 0,B 1,and B 2.so B 0is defined andZeros (B 0)=m g +12=2g +1n +O (m 2g +1n ,so there is an index i ≥1such that z i +1−z i >12>12,so for large n ,d i >0(and also d i >0).Moreover,d i +d i ≥n −1+z i +1−z i −2|Y |≥n +1d i +d i +1≤2m 2g +1n −g 2+1=2g +1n +O (max(1,m n)Hamming errors.In this section we consider the case whereg ,the number of don’t care symbols,is not constant but g ≤n 1−1/r for some r ≥2.We show how to construct an (almost)optimal seed for this case,which turns out to be capable of capturing all matches of any pattern P within ≈r m l −r +2)·mn 1+1/r )).8Proof.Recall that in the construction of Theorem1,we were able tofind an alignment of A and B such that the aligned region of B contained the character0at most once,and this character was aligned against one of the zeros in A.Here,we willfind an alignment that contains at most r zeros in the aligned region of B,which will be aligned against zeros in A.Wefirst prove the theorem for the case r=2.Suppose that√n is divisible by3l−2.A is then constructed as follows.The seed A consists of√n.The blocks numbered2i n for i= 1,...,l−1contain only zeros.The other blocks contain√l )·m n3−2/l(n n)−5.Let B such a string,and define L=nn)that contains at most two zeros,and these zeros appears in thefirst2(L+√3l−2n+2√3l−2n+2√3l−2n+2√3l−2n+2√3l−2n].Notethat A[2s3l−2√3l−2n−j the position in A which is aligned against B [j ].Let d be the distancefrom position i to the nearest occurrence of a zero in A to the left of i,and d=i if there is no such occurrence.Since d<√n is not integer,or when√n is an integer divisible by3l−2,and we take A to be the prefix of length n of A .We now deal with the case of r>2.If n1/r is an integer divisible by(r+1)(l−r+1)+1, then we build the string A as follows:We begin by taking A=1n.We then partition A into blocks of different levels.Level i(i=0,...,r−1)consists of n1−i/r blocks of size n i/r each. For every i=0,...,r−2,and every j which is divisible by n1/r,we change block number j in level i to consists of all zeros.Furthermore,for j=1,...,l−r+1,we change block number j·l−r+2l−r+2 ·m n1+1/r)),9we have that either there is a substring of B of length n without zeros,or there is a substring B of B of length(r+1)/(r+1−rl−r+2)+O(n1−1/r)characters of B .Assume thatthe second case occurs.Suppose w.l.o.g.that B contains exactly r zeros.We create an alignment of A and B that covers B as follows:First,we align the rightmost zero in B withthe rightmost character of the appropriate zeros block of level r−1.Then,for i=2,...,r, we move A to the right,until the i-th zero from the right in B is either aligned against the rightmost character of some zeros block of level r−i,or it is outside the aligned region.Byour construction,the movement of A is no more than n1−(i−1)/r−1positions.Moreover, during the movements of A the following invariant is kept:For every j≤i−1,after the i-thmovement,the j-th zero from the right in B is aligned against a character of some zeros block of level j ,where r−i−1≤j ≤r−j.In particular,at the end,all the zeros in B are aligned against zeros in A,and therefore A covers B.The case when n1/r is not an integer,or when n1/r is not divisible by(r+1)(l−r+1)+1, is handled the same as before.The following theorem gives an upper bound that matches the lower bound in Theorem3,demonstrating that the seed it describes is(almost)optimal.Theorem4.For every integer r≥2,if g≤n1−1/r then k(n,g,m)≤r·mn+r.4Maximizing bWe now show how to maximize b,the number of solid symbols in a seed,given the numberof errors k and the pattern length m.As the number of false positives depends heavily on b,the resulting seed simply optimizes the performance of thefiltering method.Our result also provides the maximum b for the problem offinding all matches of any pattern P of size m ≥m within error rate k /m =k/m.10Theorem5.For every k<1.311Upper bound.Denote M= 1M.Since the number of k-tuples that satisfy(1)above is at least m−2M k /M,we have that there is a k-tuple(j,i1,...,i k−1)that satisfies(1)but does not satisfy(2).We now construct a string B of length m that contains zeros in positions jM,jM−i1,...,jM−i k−1and ones elsewhere.If A covers B,then consider some alignment of A and B that covers B,and suppose that A[1]is aligned with B[y].Since the length of A is at least b≥m−M+1,we have that y≤M.Therefore,B[jM]is aligned with A[x]for some x∈I j.It follows that (j,i1,...,i k−1)∈Y,a contradiction.Therefore,A does not cover B,and the upper bound follows.Theorem6.For every k≥1k log mklog m).Proof.The lower bound is trivial if k=Ω(m),so assume that k<m8·m k that contains b=Θ(n)ones and covers every string of length8n with at most log n zeros.If B is a string of length m with at most k zeros,then by the pigeon-hole principle,there is a substring B of B of length8n that has at most k·8nk≤log n zeros,and therefore A covers B .Thus,A covers B.Upper bound.Suppose that A is a string of length n with b≥mlog1b·log m≤k,and the upper bound follows.124.1Multiple seedsTo model multiple seeds,we define that a set of strings{A1,...,A s}covers a string B if at least one string A i from the set covers B.Let b(k,m,s)be the maximum b such that there is a set of strings{A1,...,A s}that covers every string B of length m with at most k zeros,and each string A i contains at least b ones.The following theorem shows that using multiple seeds can give better results than one seed,namely,b(k,m,m1/k)is slightly larger than b(k,m)(see Theorem5).Theorem7.For every k<log log m,b(k,m,m1/k)≥m−O(km1−1/k). Proof.We take s= m1/k ,and build a string A of length n=m− k−1l=0s l,that has k−1 levels of blocks as in the proof of Theorem5.Then,we build strings A1,...,A s−1,where A i is obtained by taking the string A and adding zeros at positions js−i for j≥s.It is easy to verify that{A1,...,A s−1}covers every string of length m with at most k zeros.References[1]S.Altschul,W.Gisch,ler,E.Myers,and D.Lipman.Basic local alignmentsearch tool.J.of Molecular Biology,215(3):403–410,1990.[2]A.Amir,M.Lewenstein,and E.Porat.Faster algorithms for string matching with kmismatches.J.of Algorithms,50(2):257–275,2004.[3]B.Brejov´a,D.G.Brown,and T.Vinar.Vector seeds:An extension to spaced seeds.J.Comput.Syst.Sci.,70(3):364–380,2005.[4]J.Buhler,U.Keich,and Y.Sun.Designing seeds for similarity search in genomic DNA.put.Syst.Sci.,70(3):342–363,2005.[5]S.Burkhardt and J.K¨a rkk¨a inen.Betterfiltering with gapped q-grams.Fundam.In-form.,56(1–2):51–70,2003.[6]A.Califano and I.Rigoutsos.FLASH:A fast look-up algorithm for string homology.InProc.1st International Conference on Intelligent Systems for Molecular Biology(ISMB), pages56–64,1993.[7]R.Cole and R.Hariharan.Approximate string matching:A simpler faster algorithm.SIAM put.,31(6):1761–1782,2002.[8]M.Karpinski and A.Zelikovsky.Approximating dense cases of covering problems.Electronic Colloquium on Computational Complexity(ECCC),4(4),1997.[9]U.Keich,M.Li,B.Ma,and J.Tromp.On spaced seeds for similarity search.DiscreteApplied Mathematics,138(3):253–263,2004.13[10]G.Kucherov,L.No´e,and M.A.Roytberg.Multiseed losslessfiltration.IEEE/ACMput.Biology Bioinform.,2(1):51–61,2005.[11]ndau and U.Vishkin.Fast parallel and serial approximate string matching.J.of Algorithms,10(2):157–169,1989.[12]M.Li,B.Ma,D.Kisman,and J.Tromp.Patternhunter II:Highly sensitive and fasthomology search.J.Bioinformatics and Computational Biology,2(3):417–440,2004.[13]B.Ma,J.Tromp,and M.Li.Patternhunter:faster and more sensitive homology search.Bioinformatics,18(3):440–445,2002.[14]S.C.Sahinalp and U.Vishkin.Efficient approximate and dynamic matching of patternsusing a labeling paradigm.In Proc.37th Symposium on Foundation of Computer Science (FOCS),pages320–328,1996.[15]Y.Sun and J.Buhler.Designing multiple simultaneous seeds for dna similarity search.J.of Computational Biology,12(6):847–861,2005.[16]J.Xu,D.G.Brown,M.Li,and B.Ma.Optimizing multiple spaced seeds for homologysearch.In Proc.14th Annual Symposium on Combinatorial Pattern Matching(CPM), pages47–58,2004.14。

The primes contain arbitrarily long polynomial progressions

The primes contain arbitrarily long polynomial progressions

Contents 1. Introduction 2. Notation and initial preparation 3. Three pillars of the proof 4. Overview of proof of transference principle 5. Proof of generalized von Neumann theorem 6. Polynomial dual functions 7. Proof of structure theorem 8. A pseudorandom measure which majorizes the primes 9. Local estimates 10. Initial correlation estimate 2 7 15 20 25 35 41 48 52 54
60 62 66 69 71 73 77 80
1. Introduction In 1975, Szemer´ edi [31] proved that any subset A of integers of positive upper ∩[N ]| density lim supN →∞ |A |[N ]| > 0 contains arbitrarily long arithmetic progressions. Throughout this paper [N ] denotes the discrete interval [N ] := {1, . . . , N }, and |X | denotes the cardinality of a finite set X . Shortly afterwards, Furstenberg [10] gave an ergodic-theory proof of Szemer´ edi’s theorem. Furstenberg observed that questions about configurations in subsets of positive density in the integers correspond to recurrence questions for sets of positive measure in a probability measure preserving system. This observation is now known as the Furstenberg correspondence principle. In 1978, S´ ark¨ ozy [29]1 (using the Hardy-Littlewood circle method) and Furstenberg [11] (using the correspondence principle, and ergodic theoretic methods) proved independently that for any polynomial2 P ∈ Z[m] with P (0) = 0, any set A ⊂ Z of positive density contains a pair of points x, y with difference y − x = P (m) for some positive integer m ≥ 1. In 1996 Bergelson and Leibman [6] proved, by purely ergodic theoretic means3, a vast generalization of the Fustenberg-S´ ark¨ ozy theorem - establishing the existence of arbitrarily long polynomial progressions in sets of positive density in the integers.

C++调试常见错误英汉翻译

C++调试常见错误英汉翻译

C++调试出错提示英汉对照表Ambiguous operators need parentheses -----------不明确的运算需要用括号括起Ambiguous symbol ''xxx'' ---------------- 不明确的符号Argument list syntax error ---------------- 参数表语法错误Array bounds missing ------------------ 丢失数组界限符Array size toolarge ----------------- 数组尺寸太大Bad character in paramenters ------------------ 参数中有不适当的字符Bad file name format in include directive ---------包含命令中文件名格式不正确Bad ifdef directive synatax --------------------------编译预处理ifdef有语法错Bad undef directive syntax ---------------------------编译预处理undef有语法错Bit field too large ---------------- 位字段太长Call of non-function ----------------- 调用未定义的函数Call to function with no prototype --------------- 调用函数时没有函数的说明Cannot modify a const object --------------- 不允许修改常量对象Case outside of switch ---------------- 漏掉了case 语句Case syntax error ------------------ Case 语法错误Code has no effect ----------------- 代码不可述不可能执行到Compound statement missing{ -------------------分程序漏掉"{" Conflicting type modifiers ------------------ 不明确的类型说明符Constant expression required ---------------- 要求常量表达式Constant out of range in comparison -------------在比较中常量超出范围Conversion may lose significant digits -----------转换时会丢失意义的数字Conversion of near pointer not allowed ---------不允许转换近指针Could not find file ''xxx'' ----------------------- 找不到XXX文件Declaration missing ; ---------------- 说明缺少";" houjiuming Declaration syntax error -----------------说明中出现语法错误Default outside of switch ------------------ Default 出现在switch语句之外Define directive needs an identifier ------------------定义编译预处理需要标识符Division by zero ------------------用零作除数Do statement must have while ------------------ Do-while语句中缺少while部分Enum syntax error ---------------------枚举类型语法错误Enumeration constant syntax error -----------------枚举常数语法错误Error directive :xxx ------------------------错误的编译预处理命令Error writing output file ---------------------写输出文件错误Expression syntax error -----------------------表达式语法错误Extra parameter in call ------------------------调用时出现多余错误File name too long ----------------文件名太长Function call missing -----------------函数调用缺少右括号Fuction definition out of place ------------------函数定义位置错误Fuction should return a value ------------------函数必需返回一个值Goto statement missing label ------------------ Goto语句没有标号Hexadecimal or octal constant too large ------------------16进制或8进制常数太大Illegal character ''x'' ------------------非法字符xIllegal initialization ------------------非法的初始化Illegal octal digit ------------------非法的8进制数字houjiuming Illegal pointer subtraction ------------------非法的指针相减Illegal structure operation ------------------非法的结构体操作Illegal use of floating point -----------------非法的浮点运算Illegal use of pointer --------------------指针使用非法Improper use of a typedefsymbol ----------------类型定义符号使用不恰当In-line assembly not allowed -----------------不允许使用行间汇编Incompatible storage class -----------------存储类别不相容Incompatible type conversion --------------------不相容的类型转换Incorrect number format -----------------------错误的数据格式Incorrect use of default --------------------- Default使用不当Invalid indirection ---------------------无效的间接运算Invalid pointer addition ------------------指针相加无效Irreducible expression tree -----------------------无法执行的表达式运算Lvalue required ---------------------------需要逻辑值0或非0值Macro argument syntax error -------------------宏参数语法错误Macro expansion too long ----------------------宏的扩展以后太长Mismatched number of parameters in definition ---------------------定义中参数个数不匹配Misplaced break ---------------------此处不应出现break语句Misplaced continue ------------------------此处不应出现continue语句Misplaced decimal point --------------------此处不应出现小数点Misplaced elif directive --------------------不应编译预处理elif Misplaced else ----------------------此处不应出现else houjiuming Misplaced else directive ------------------此处不应出现编译预处理else Misplaced endif directive -------------------此处不应出现编译预处理endifMust be addressable ----------------------必须是可以编址的Must take address of memory location ------------------必须存储定位的地址No declaration for function ''xxx'' -------------------没有函数xxx的说明No stack ---------------缺少堆栈No type information ------------------没有类型信息Non-portable pointer assignment --------------------不可移动的指针(地址常数)赋值Non-portable pointer comparison --------------------不可移动的指针(地址常数)比较Non-portable pointer conversion ----------------------不可移动的指针(地址常数)转换Not a valid expression format type ---------------------不合法的表达式格式Not an allowed type ---------------------不允许使用的类型Numeric constant too large -------------------数值常太大Out of memory -------------------内存不够用houjiumingParameter ''xxx'' is never used ------------------能数xxx没有用到Pointer required on left side of -> -----------------------符号->的左边必须是指针Possible use of ''xxx'' before definition -------------------在定义之前就使用了xxx(警告)Possibly incorrect assignment ----------------赋值可能不正确Redeclaration of ''xxx'' -------------------重复定义了xxxRedefinition of ''xxx'' is not identical ------------------- xxx的两次定义不一致Register allocation failure ------------------寄存器定址失败Repeat count needs an lvalue ------------------重复计数需要逻辑值Size of structure or array not known ------------------结构体或数给大小不确定Statement missing ; ------------------语句后缺少";"Structure or union syntax error --------------结构体或联合体语法错误Structure size too large ----------------结构体尺寸太大Sub scripting missing ] ----------------下标缺少右方括号Superfluous & with function or array ------------------函数或数组中有多余的"&"Suspicious pointer conversion ---------------------可疑的指针转换Symbol limit exceeded ---------------符号超限Too few parameters in call -----------------函数调用时的实参少于函数的参数不Too many default cases ------------------- Default太多(switch语句中一个)Too many error or warning messages --------------------错误或警告信息太多Too many type in declaration -----------------说明中类型太多houjiumingToo much auto memory in function -----------------函数用到的局部存储太多Too much global data defined in file ------------------文件中全局数据太多Two consecutive dots -----------------两个连续的句点Type mismatch in parameter xxx ----------------参数xxx类型不匹配Type mismatch in redeclaration of ''xxx'' ---------------- xxx重定义的类型不匹配Unable to create output file ''xxx'' ----------------无法建立输出文件xxx Unable to open include file ''xxx'' ---------------无法打开被包含的文件xxxUnable to open input file ''xxx'' ----------------无法打开输入文件xxx Undefined label ''xxx'' -------------------没有定义的标号xxx Undefined structure ''xxx'' -----------------没有定义的结构xxx Undefined symbol ''xxx'' -----------------没有定义的符号xxx Unexpected end of file in comment started on line xxx ----------从xxx行开始的注解尚未结束文件不能结束Unexpected end of file in conditional started on line xxx ----从xxx 开始的条件语句尚未结束文件不能结束Unknown assemble instruction ----------------未知的汇编结构houjiumingUnknown option ---------------未知的操作Unknown preprocessor directive: ''xxx'' -----------------不认识的预处理命令xxxUnreachable code ------------------无路可达的代码Unterminated string or character constant -----------------字符串缺少引号User break ----------------用户强行中断了程序Void functions may not return a value ----------------- Void类型的函数不应有返回值Wrong number of arguments -----------------调用函数的参数数目错''xxx'' not an argument ----------------- xxx不是参数''xxx'' not part of structure -------------------- xxx不是结构体的一部分xxx statement missing ( -------------------- xxx语句缺少左括号xxx statement missing ) ------------------ xxx语句缺少右括号xxx statement missing ; -------------------- xxx缺少分号houjiumingxxx'' declared but never used -------------------说明了xxx但没有使用xxx'' is assigned a value which is never used ----------------------给xxx赋了值但未用过Zero length structure ------------------结构体的长度为零。

英语技术写作试题及答案

英语技术写作试题及答案

英语技术写作试题及答案一、选择题(每题2分,共20分)1. The term "API" stands for:A. Application Programming InterfaceB. Artificially Programmed IntelligenceC. Advanced Programming InterfaceD. Automated Programming Interface答案:A2. Which of the following is not a common data type in programming?A. IntegerB. StringC. BooleanD. Vector答案:D3. In technical writing, what is the purpose of using the term "shall"?A. To indicate a requirement or obligationB. To suggest a recommendationC. To express a possibilityD. To denote a future action答案:A4. What does the acronym "GUI" refer to in the context of computing?A. Graphical User InterfaceB. Global User InterfaceC. Generalized User InterfaceD. Graphical Unified Interface答案:A5. Which of the following is a correct statement regarding version control in software development?A. It is used to track changes in software over time.B. It is a type of software testing.C. It is a method for encrypting code.D. It is a way to compile code.答案:A6. What is the primary function of a compiler in programming?A. To debug codeB. To execute codeC. To translate code from one language to anotherD. To optimize code for performance答案:C7. In technical documentation, what does "RTFM" commonly stand for?A. Read The Frequently Asked QuestionsB. Read The Full ManualC. Read The File ManuallyD. Read The Final Message答案:B8. Which of the following is a common method for organizing code in a modular fashion?A. LoopingB. RecursionC. EncapsulationD. Inheritance答案:C9. What is the purpose of a "pseudocode" in programming?A. To provide a detailed step-by-step guide for executing codeB. To serve as a preliminary version of code before actual codingC. To act as an encryption for the codeD. To be used as a substitute for actual code in production答案:B10. What does "DRY" stand for in software development?A. Don't Repeat YourselfB. Data Retrieval YieldC. Database Record YieldD. Dynamic Resource Yield答案:A二、填空题(每空2分,共20分)1. The process of converting a high-level code into machine code is known as _______.答案:compilation2. In programming, a _______ is a sequence of characters that is treated as a single unit.答案:string3. The _______ pattern in object-oriented programming is a way to allow a class to be used as a blueprint for creating objects.答案:prototype4. A _______ is a type of software development methodology that emphasizes iterative development.答案:agile5. The _______ is a set of rules that defines how data is formatted, transmitted, and received between software applications.答案:protocol6. In technical writing, the term "should" is used toindicate a _______.答案:recommendation7. The _______ is a type of software that is designed to prevent, detect, and remove malicious software.答案:antivirus8. A _______ is a variable that is declared outside the function and hence belongs to the global scope.答案:global variable9. The _______ is a programming construct that allows you to execute a block of code repeatedly.答案:loop10. In software development, the term "branch" in version control refers to a _______.答案:separate line of development三、简答题(每题10分,共40分)1. Explain the difference between a "bug" and a "feature" in software development.答案:A "bug" is an unintended behavior or error in a software program that causes it to behave incorrectly or crash. A "feature," on the other hand, is a planned and intentional part of the software that provides some functionality or capability to the user.2. What is the significance of documentation in technical writing?答案:Documentation in technical writing is significant as it serves to provide detailed information about a product or system, making it easier for users, developers, and other stakeholders to understand its workings, usage, and maintenance. It is crucial for training, troubleshooting, and future development.3. Describe the role of a software architect in a software development project。

关于黑神话悟空话题的英语阅读理解

关于黑神话悟空话题的英语阅读理解

关于黑神话悟空话题的英语阅读理解一、阅读理解原文。

Black Myth: Wukong is an upcoming action role - playing game that has captured the attention of gamers around the world. Developed by Game Science, it is set in a world inspired by Chinese mythology, with the main character being the well - known Monkey King, Sun Wukong.The game's trailers have shown stunning visuals, with detailed character models and beautiful landscapes. The combat system seems to be fast - paced and fluid, allowing players to perform a variety of acrobatic moves and powerful attacks as they battle against different enemies. One of the most impressive aspects is the way the developers have recreated the Monkey King's abilities. For example, his transformation skills are vividly presented, enabling him to change into different forms such as a bird or a fish to overcome obstacles or surprise opponents.The game also appears to have a rich story. It delves deep into thelore of Chinese mythology, not just focusing on the Monkey King's adventures but also exploring the relationships between different gods, demons, and mortals. This gives the game a sense of depth and authenticity that many players are looking forward to.However, developing such a game is not without challenges. The developers need to balance between staying true to the source material and making the game appealing to a global audience. They also have to deal with technical issues such as optimizing the game for different platforms to ensure smooth gameplay.Despite these challenges, the anticipation for Black Myth: Wukong continues to grow. It represents a new wave of games that draw inspirationfrom non - Western cultures, and it has the potential to introduce Chinese mythology to a whole new generation of gamers around the world.二、阅读理解题目。

c++ adhoc题

c++ adhoc题

c++ adhoc题全文共四篇示例,供读者参考第一篇示例:C++ Adhoc 题目是指在C++编程中遇到的一些具有挑战性和实用性的问题,这些问题可能不属于传统的算法或数据结构题目,但是考验着程序员的编程能力和思维逻辑。

在实际的工作中,遇到这类问题需要程序员有足够的经验和技能去解决,因此掌握一定数量的C++ Adhoc题目是非常有帮助的。

下面将介绍一些常见的C++ Adhoc题目,帮助大家更好地理解这类题目的特点和解题方法。

1. 反转字符串题目描述:给定一个字符串,要求将字符串中的字符顺序反转。

示例:输入: "hello"输出: "olleh"解题思路:可以使用两个指针指向字符串的两端,然后逐步交换两个指针指向的字符,直到两个指针相遇为止。

2. 数组去重题目描述:给定一个有序数组,要求去除其中的重复元素,返回去重后的数组长度,并且原数组的前几位是去重后的数组元素。

示例:输入: [1, 1, 2, 2, 3, 4]输出: [1, 2, 3, 4]解题思路:遍历数组,使用一个指针指向当前不重复元素应该存放的位置,另一个指针遍历数组,如果发现重复元素则继续遍历,如果不重复则将元素放到指定位置。

3. 查找数组中缺失的元素题目描述:给定一个从0 到n 的有序数组,但是其中可能缺少某个数字,要求找出缺失的元素。

示例:输入:[0, 1, 3, 5, 6]输出:2解题思路:可以通过遍历数组,逐个比较当前元素和下一个元素的差值,如果差值大于1,则说明中间有缺失的元素。

4. 字符串的全排列题目描述:给定一个字符串,要求输出所有的字符排列组合。

示例:输入:abc输出:abc, acb, bac, bca, cab, cba解题思路:可以使用递归的方法将问题分解为多个子问题,然后逐步解决子问题得到最终答案。

5. 实现一个简单的字符串解码器题目描述:给定一个编码后的字符串,解码成原始的字符串。

The Duplicity of Hargraves

The Duplicity of Hargraves

The Duplicity of HargravesWhen Major Pendleton Talbot, of Mobile, sir, and his daughter, Miss Lydia Talbot, came to Washington to reside, they selected for a boarding place a house that stood fifty yards back from one of the quietest avenues. It was an old-fashioned brick building, with a portico upheld by tall white pillars. The yard was shaded by stately locusts and elms, and a catalpa tree in season rained its pink and white blossoms upon the grass. Rows of high box busheslined the fence and walks. It was the Southern style and aspect of the place that pleased the eyes of the Talbots. In this pleasant private boarding house they engaged rooms, including a study for Major Talbot, who was adding the finishing chapters to his book, Anecdotes and Reminiscencesof the Alabama Army, Bench, and Bar.Major Talbot was of the old, old South. The present day had little interest or excellence in his eyes. His mind livedin that period before the Civil War when the Talbots owned thousands of acres of fine cotton land and the slaves totill them; when the family mansion was the scene ofprincely hospitality, and drew its guests from the aristocracy of the South. Out of that period he had brought all its old pride and scruples of honor, an antiquated and punctilious politeness, and (you would think) its wardrobe. Such clothes were surely never made within fifty years. The Major was tall, but whenever he made that wonderful, archaic genuflexion he called a bow, the corners of his frock coat swept the floor. That garment was a surprise even to Washington, which has long ago ceased to shy at thefrocks and broad-brimmed hats of Southern Congressmen. One of the boarders christened it a "Father Hubbard," and it certainly was high in the waist and full in the skirt.But the Major, with all his queer clothes, his immense area of plaited, raveling shirt bosom, and the little black string tie with the bow always slipping on one side, both was smiled at and liked in Mrs. Vardeman's select boarding house. Some of the young department clerks would often "string him," as they called it, getting him started upon the subject dearest to him--the traditions and history of his beloved Southland. During his talks he would quote freely from the Anecdotes and Reminiscences. But they were very careful not to let him see their designs, for in spite of his sixty-eight years he could make the boldest of them uncomfortable under the steady regard of his piercing grayeyes.Miss Lydia was a plump, little old maid of thirty-five, with smoothly drawn, tightly twisted hair that made her look still older. Old-fashioned, too, she was; but antebellum glory did not radiate from her as it did from the Major. She possessed a thrifty common sense, and it was she who handled the finances of the family, and met all comers when there were bills to pay. The Major regarded board bills and wash bills as contemptible nuisances. They kept coming in so persistently and so often. Why, the Major wanted to know, could they not be filed and paid in a lump sum at some convenient period--say when the Anecdotes and Reminiscences had been published and paid for? Miss Lydia would calmly go on with her sewing and say, "We'll pay as we go as long as the money lasts, and then perhaps they'llhave to lump it."Most of Mrs. Vardeman's boarders were away during the day, being nearly all department clerks and business men; but there was one of them who was about the house a great deal from morning to night. This was a young man named Henry Hopkins Hargraves--every one in the house addressed him by his full name--who was engaged at one of the popular vaudeville theaters. Vaudeville has risen to such a respectable plane in the last few years, and Mr. Hargraves was such a modest and well-mannered person, that Mrs. Vardeman could find no objection to enrolling him upon her list of boarders.At the theater Hargraves was known as an all-round dialect comedian, having a large repertoire of German, Irish, Swede, and black-face specialties. But Mr. Hargraves wasambitious, and often spoke of his great desire to succeed in legitimate comedy.This young man appeared to conceive a strong fancy for Major Talbot. Whenever that gentleman would begin his Southern reminiscences, or repeat some of the liveliest of the anecdotes, Hargraves could always be found, the most attentive among his listeners.For a time the Major showed an inclination to discourage the advances of the "play actor," as he privately termed him; but soon the young man's agreeable manner and indubitable appreciation of the old gentleman's stories completely won him over.It was not long before the two were like old chums. The Major set apart each afternoon to read to him the manuscript of his book. During the anecdotes Hargravesnever failed to laugh at exactly the right point. The Major was moved to declare to Miss Lydia one day that young Hargraves possessed remarkable perception and a gratifying respect for the old regime. And when it came to talking of those old days--if Major Talbot liked to talk, Mr. Hargraves was entranced to listen.Like almost all old people who talk of the past, the Major loved to linger over details. In describing the splendid, almost royal, days of the old planters, he would hesitate until he had recalled the name of the negro who held his horse, or the exact date of certain minor happenings, or the number of bales of cotton raised in such a year; but Hargraves never grew impatient or lost interest. On the contrary, he would advance questions on a variety of subjects connected with the life of that time, and he neverfailed to extract ready replies.The fox hunts, the 'possum suppers, the hoe-downs and jubilees in the negro quarters, the banquets in the plantation-house hall, when invitations went for fifty miles around; the occasional feuds with the neighboring gentry; the Major's duel with Rathbone Culbertson about Kitty Chalmers, who afterward married a Thwaite of South Carolina; and private yacht races for fabulous sums on Mobile Bay; the quaint beliefs, improvident habits, and loyal virtues of the old slaves--all these were subjects that held both the Major and Hargraves absorbed for hours at a time.Sometimes, at night, when the young man would be coming upstairs to his room after his turn at the theater was over, the Major would appear at the door of his study andbeckon archly to him. Going in, Hargraves would find alittle table set with a decanter, sugar bowl, fruit, and a big bunch of fresh green mint."It occurred to me," the Major would begin--he was always ceremonious--"that perhaps you might have found your duties at the--at your place of occupation--sufficiently arduous to enable you, Mr. Hargraves, to appreciate what the poet might well have had in his mind when he wrote, 'tired Nature's sweet restorer'--one of our Southern juleps."It was a fascination to Hargraves to watch him make it. He took rank among artists when he began, and he never varied the process. With what delicacy he bruised the mint; with what exquisite nicety he estimated the ingredients; with what solicitous care he capped the compound with thescarlet fruit glowing against the dark green fringe! Andthen the hospitality and grace with which he offered it, after the selected oat straws had been plunged into its tinkling depths!After about four months in Washington, Miss Lydia discovered one morning that they were almost without money. The Anecdotes and Reminiscences was completed, but publishers had not jumped at the collected gems of Alabama sense and wit. The rental of a small house which they still owned in Mobile was two months in arrears. Their board money for the month would be due in three days. Miss Lydia called her father to a consultation."No money?" said he with a surprised look. "It is quite annoying to be called on so frequently for these petty sums, Really, I--"The Major searched his pockets. He found only a two-dollarbill, which he returned to his vest pocket."I must attend to this at once, Lydia," he said. "Kindly get me my umbrella and I will go downtown immediately. The congressman from our district, General Fulghum, assured me some days ago that he would use his influence to get my book published at an early date. I will go to his hotel at once and see what arrangement has been made."With a sad little smile Miss Lydia watched him button his "Father Hubbard" and depart, pausing at the door, as he always did, to bow profoundly.That evening, at dark, he returned. It seemed that Congressman Fulghum had seen the publisher who had the Major's manuscript for reading. That person had said that if the anecdotes, etc., were carefully pruned down about one-half, in order to eliminate the sectional and classprejudice with which the book was dyed from end to end, he might consider its publication.The Major was in a white heat of anger, but regained his equanimity, according to his code of manners, as soon as he was in Miss Lydia's presence."We must have money," said Miss Lydia, with a little wrinkle above her nose. "Give me the two dollars, and Iwill telegraph to Uncle Ralph for some to-night."The Major drew a small envelope from his upper vest pocket and tossed it on the table."Perhaps it was injudicious," he said mildly, "but the sum was so merely nominal that I bought tickets to the theater to-night. It's a new war drama, Lydia. I thought you would be pleased to witness its first production in Washington. I am told that the South has very fair treatment in the play.I confess I should like to see the performance myself." Miss Lydia threw up her hands in silent despair.Still, as the tickets were bought, they might as well be used. So that evening, as they sat in the theater listening to the lively overture, even Miss Lydia was minded to relegate their troubles, for the hour, to second place. The Major, in spotless linen, with his extraordinary coat showing only where it was closely buttoned, and his white hair smoothly roached, looked really fine and distinguished. The curtain went up on the first act of A Magnolia Flower, revealing a typical Southern plantation scene. Major Talbot betrayed some interest."Oh, see!" exclaimed Miss Lydia, nudging his arm, and pointing to her program.The Major put on his glasses and read the line in the castof characters that her fingers indicated.Col. Webster Calhoun .... Mr. Hopkins Hargraves."It's our Mr. Hargraves," said Miss Lydia. "It must be his first appearance in what he calls 'the legitimate.' I'm so glad for him."Not until the second act did Col. Webster Calhoun appear upon the stage. When he made his entry Major Talbot gave an audible sniff, glared at him, and seemed to freeze solid. Miss Lydia uttered a little, ambiguous squeak and crumpled her program in her hand. For Colonel Calhoun was made up as nearly resembling Major Talbot as one pea does another. The long, thin white hair, curly at the ends, the aristocratic beak of a nose, the crumpled, wide, raveling shirt front, the string tie, with the bow nearly under one ear, were almost exactly duplicated. And then, to clinch theimitation, he wore the twin to the Major's supposed to be unparalleled coat. High-collared, baggy, empire-waisted, ample-skirted, hanging a foot lower in front than behind, the garment could have been designed from no other pattern. From then on, the Major and Miss Lydia sat bewitched, and saw the counterfeit presentment of a haughty Talbot "dragged," as the Major afterward expressed it, "through the slanderous mire of a corrupt stage."Mr. Hargraves had used his opportunities well. He had caught the Major's little idiosyncrasies of speech, accent, and intonation and his pompous courtliness to perfection--exaggerating all to the purpose of the stage. When he performed that marvelous bow that the Major fondly imagined to be the pink of all salutations, the audience sent forth a sudden round of hearty applause.Miss Lydia sat immovable, not daring to glance toward her father. Sometimes her hand next to him would be laid against her cheek, as if to conceal the smile which, in spite of her disapproval, she could not entirely suppress. The culmination of Hargraves audacious imitation took place in the third act. The scene is where Colonel Calhoun entertains a few of the neighboring planters in his "den." Standing at a table in the center of the stage, with his friends grouped about him, he delivers that inimitable, rambling character monologue so famous in A Magnolia Flower, at the same time that he deftly makes juleps for the party.Major Talbot, sitting quietly, but white with indignation, heard his best stories retold, his pet theories and hobbies advanced and expanded, and the dream of the Anecdotes andReminiscences served, exaggerated and garbled. His favorite narrative--that of his duel with Rathbone Culbertson--was not omitted, and it was delivered with more fire, egotism, and gusto than the Major himself put into it.The monologue concluded with a quaint, delicious, witty little lecture on the art of concocting a julep,illustrated by the act. Here Major Talbot's delicate but showy science was reproduced to a hair's breadth--from his dainty handling of the fragrant weed--"the one-thousandth part of a grain too much pressure, gentlemen, and you extract the bitterness, instead of the aroma, of this heaven-bestowed plant"--to his solicitous selection of the oaten straws.At the close of the scene the audience raised a tumultuous roar of appreciation. The portrayal of the type was soexact, so sure and thorough, that the leading characters in the play were forgotten. After repeated calls, Hargraves came before the curtain and bowed, his rather boyish face bright and flushed with the knowledge of success.At last Miss Lydia turned and looked at the Major. His thin nostrils were working like the gills of a fish. He laid both shaking hands upon the arms of his chair to rise. "We will go, Lydia," he said chokingly. "This is an abominable--desecration."Before he could rise, she pulled him back into his seat. "We will stay it out," she declared. "Do you want to advertise the copy by exhibiting the original coat?" So they remained to the end.Hargraves's success must have kept him up late that night, for neither at the breakfast nor at the dinner table did heappear.About three in the afternoon he tapped at the door of Major Talbot's study. The Major opened it, and Hargraves walkedin with his hands full of the morning papers--too full of his triumph to notice anything unusual in the Major's demeanor."I put it all over 'em last night, Major," he began exultantly. "I had my inning, and, I think, scored. Here's what The Post says:"'His conception and portrayal of the old-time Southern colonel, with his absurd grandiloquence, his eccentric garb, his quaint idioms and phrases, his motheaten pride of family, and his really kind heart, fastidious sense of honor, and lovable simplicity, is the best delineation of a character role on the boards to-day. The coat worn byColonel Calhoun is itself nothing less than an evolution of genius. Mr. Hargraves has captured his public.'"How does that sound, Major, for a first-nighter?""I had the honor"--the Major's voice sounded ominously frigid--"of witnessing your very remarkable performance, sir, last night."Hargraves looked disconcerted."You were there? I didn't know you ever--I didn't know you cared for the theater. Oh, I say, Major Talbot," he exclaimed frankly, "don't you be offended. I admit I did get a lot of pointers from you that helped out wonderfully in the part. But it's a type, you know--not individual. The way the audience caught on shows that. Half the patrons of that theater are Southerners. They recognized it.""Mr. Hargraves," said the Major, who had remained standing,"you have put upon me an unpardonable insult. You have burlesqued my person, grossly betrayed my confidence, and misused my hospitality. If I thought you possessed the faintest conception of what is the sign manual of a gentleman, or what is due one, I would call you out, sir, old as I am. I will ask you to leave the room, sir."The actor appeared to be slightly bewildered, and seemed hardly to take in the full meaning of the old gentleman's words."I am truly sorry you took offense," he said regretfully. "Up here we don't look at things just as you people do. I know men who would buy out half the house to have their personality put on the stage so the public would recognize it.""They are not from Alabama, sir," said the Major haughtily."Perhaps not. I have a pretty good memory, Major; let me quote a few lines from your book. In response to a toast at a banquet given in--Milledgeville, I believe--you uttered, and intend to have printed, these words:"'The Northern man is utterly without sentiment or warmth except in so far as the feelings may be turned to his own commercial profit. He will suffer without resentment any imputation cast upon the honor of himself or his loved ones that does not bear with it the consequence of pecuniary loss. In his charity, he gives with a liberal hand; but it must be heralded with the trumpet and chronicled in brass.' "Do you think that picture is fairer than the one you saw of Colonel Calhoun last night?""The description," said the Major, frowning, "is--not without grounds. Some exag--latitude must be allowed inpublic speaking.""And in public acting," replied Hargraves."That is not the point," persisted the Major, unrelenting. "It was a personal caricature. I positively decline to overlook it, sir.""Major Talbot," said Hargraves, with a winning smile, "I wish you would understand me. I want you to know that I never dreamed of insulting you. In my profession, all life belongs to me. I take what I want, and what I can, and return it over the footlights. Now, if you will, let's let it go at that. I came in to see you about something else. We've been pretty good friends for some months, and I'm going to take the risk of offending you again. I know you are hard up for money--never mind how I found out, a boarding house is no place to keep such matters secret--andI want you to let me help you out of the pinch. I've been there often enough myself. I've been getting a fair salary all the season, and I've saved some money. You're welcome to a couple hundred--or even more--until you get----" "Stop!" commanded the Major, with his arm outstretched. "It seems that my book didn't lie, after all. You think your money salve will heal all the hurts of honor. Under no circumstances would I accept a loan from a casual acquaintance; and as to you, sir, I would starve before I would consider your insulting offer of a financial adjustment of the circumstances we have discussed. I beg to repeat my request relative to your quitting the apartment." Hargraves took his departure without another word. He also left the house the same day, moving, as Mrs. Vardeman explained at the supper table, nearer the vicinity of thedowntown theater, where A Magnolia Flower was booked for a week's run.Critical was the situation with Major Talbot and Miss Lydia. There was no one in Washington to whom the Major's scruples allowed him to apply for a loan. Miss Lydia wrote a letter to Uncle Ralph, but it was doubtful whether that relative's constricted affairs would permit him to furnish help. The Major was forced to make an apologetic address to Mrs. Vardeman regarding the delayed payment for board, referring to "delinquent rentals" and "delayed remittances" in a rather confused strain.Deliverance came from an entirely unexpected source.Late one afternoon the door maid came up and announced an old colored man who wanted to see Major Talbot. The Major asked that he be sent up to his study. Soon an old darkeyappeared in the doorway, with his hat in hand, bowing, and scraping with one clumsy foot. He was quite decently dressed in a baggy suit of black. His big, coarse shoes shone with a metallic luster suggestive of stove polish. His bushy wool was gray--almost white. After middle life, it is difficult to estimate the age of a negro. This one might have seen as many years as had Major Talbot."I be bound you don't know me, Mars' Pendleton," were his first words.The Major rose and came forward at the old, familiar style of address. It was one of the old plantation darkeys without a doubt; but they had been widely scattered, and he could not recall the voice or face."I don't believe I do," he said kindly--"unless you will assist my memory.""Don't you 'member Cindy's Mose, Mars' Pendleton, what'migrated 'mediately after de war?""Wait a moment," said the Major, rubbing his forehead with the tips of his fingers. He loved to recall everything connected with those beloved days. "Cindy's Mose," he reflected. "You worked among the horses--breaking the colts. Yes, I remember now. After the surrender, you took the name of--don't prompt me--Mitchell, and went to the West--to Nebraska.""Yassir, yassir,"--the old man's face stretched with a delighted grin--"dat's him, dat's it. Newbraska. Dat's me--Mose Mitchell. Old Uncle Mose Mitchell, dey calls me now. Old mars', your pa, gimme a pah of dem mule colts when I lef' fur to staht me goin' with. You 'member dem colts, Mars' Pendleton?""I don't seem to recall the colts," said the Major. "You know. I was married the first year of the war and living at the old Follinsbee place. But sit down, sit down, Uncle Mose. I'm glad to see you. I hope you have prospered." Uncle Mose took a chair and laid his hat carefully on the floor beside it."Yessir; of late I done mouty famous. When I first got to Newbraska, dey folks come all roun' me to see dem mule colts. Dey ain't see no mules like dem in Newbraska. I sold dem mules for three hundred dollars. Yessir--three hundred. "Den I open a blacksmith shop, suh, and made some money and bought some lan'. Me and my old 'oman done raised up seb'm chillun, and all doin' well 'cept two of 'em what died. Fo' year ago a railroad come along and staht a town slamag'inst my lan', and, suh, Mars' Pendleton, Uncle Mose amworth leb'm thousand dollars in money, property, and lan'." "I'm glad to hear it," said the Major heartily. "Glad to hear it.""And dat little baby of yo'n, Mars' Pendleton--one what you name Miss Lyddy--I be bound dat little tad done growed up tell nobody wouldn't know her."The Major stepped to the door and called: "Lydie, dear,will you come?"Miss Lydia, looking quite grown up and a little worried, came in from her room."Dar, now! What'd I tell you? I knowed dat baby done be plum growed up. You don't 'member Uncle Mose, child?" "This is Aunt Cindy's Mose, Lydia," explained the Major. "He left Sunnymead for the West when you were two years old.""Well," said Miss Lydia, "I can hardly be expected to remember you, Uncle Mose, at that age. And, as you say, I'm 'plum growed up,' and was a blessed long time ago. But I'm glad to see you, even if I can't remember you."And she was. And so was the Major. Something alive and tangible had come to link them with the happy past. The three sat and talked over the olden times, the Major and Uncle Mose correcting or prompting each other as they reviewed the plantation scenes and days.The Major inquired what the old man was doing so far from his home."Uncle Mose am a delicate," he explained, "to de grand Baptis' convention in dis city. I never preached none, but bein' a residin' elder in de church, and able fur to pay my own expenses, dey sent me along.""And how did you know we were in Washington?" inquired Miss Lydia."Dey's a cullud man works in de hotel whar I stops, what comes from Mobile. He told me he seen Mars' Pendleton comin' outen dish here house one mawnin'."What I come fur," continued Uncle Mose, reaching into his pocket--"besides de sight of home folks--was to pay Mars' Pendleton what I owes him."Yessir--three hundred dollars." He handed the Major a roll of bills. "When I lef' old mars' says: 'Take dem mule colts, Mose, and, if it be so you gits able, pay fur 'em.' Yessir--dem was his words. De war had done lef' old mars' po' hisself. Old mars' bein' long ago dead, de debt descends to Mars' Pendleton. Three hundred dollars. Uncle Mose is plenty able to pay now. When dat railroad buy mylan' I laid off to pay fur dem mules. Count de money, Mars' Pendleton. Dat's what I sold dem mules fur. Yessir."Tears were in Major Talbot's eyes. He took Uncle Mose's hand and laid his other upon his shoulder."Dear, faithful, old servitor," he said in an unsteady voice, "I don't mind saying to you that 'Mars' Pendleton spent his last dollar in the world a week ago. We will accept this money, Uncle Mose, since, in a way, it is asort of payment, as well as a token of the loyalty and devotion of the old regime. Lydia, my dear, take the money. You are better fitted than I to manage its expenditure." "Take it, honey," said Uncle Mose. "Hit belongs to you.Hit's Talbot money."After Uncle Mose had gone, Miss Lydia had a good cry---for joy; and the Major turned his face to a corner, and smokedhis clay pipe volcanically.The succeeding days saw the Talbots restored to peace and ease. Miss Lydia's face lost its worried look. The major appeared in a new frock coat, in which he looked like a wax figure personifying the memory of his golden age. Another publisher who read the manuscript of the Anecdotes and Reminiscences thought that, with a little retouching and toning down of the high lights, he could make a really bright and salable volume of it. Altogether, the situation was comfortable, and not without the touch of hope that is often sweeter than arrived blessings.One day, about a week after their piece of good luck, a maid brought a letter for Miss Lydia to her room. The postmark showed that it was from New York. Not knowing any one there, Miss Lydia, in a mild flutter of wonder, satdown by her table and opened the letter with her scissors. This was what she read:DEAR MISS TALBOT:I thought you might be glad to learn of my good fortune. I have received and accepted an offer of two hundred dollars per week by a New York stock company to play Colonel Calhoun in A Magnolia Flower.There is something else I wanted you to know. I guess you'd better not tell Major Talbot. I was anxious to make him some amends for the great help he was to me in studying the part, and for the bad humor he was in about it. He refused to let me, so I did it anyhow. I could easily spare the three hundred.Sincerely yours,H. HOPKINS HARGRAVES.P.S. How did I play Uncle Mose?。

  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。

a r X i v :h e p -t h /0008232v 3 7 N o v 2000CERN-TH/2000-280A no-go theorem for string warped compactificationsS.Ivanov 1and G.Papadopoulos 21.Department of Mathematics University of Sofia “St.Kl.Ohridski”2.CERN Theory Division 1211Geneva,23Switzerland Abstract We give necessary conditions for the existence of perturbative heterotic and common sector type II string warped compactifications preserving four and eight supersymmetries to fourspacetime dimensions,respectively.In particular,we find that the only compactifications of heterotic string with the spin connection embedded in the gauge connection and type II strings are those on Calabi-Yau manifolds with constant dilaton.We obtain similar results for compactifications to six and to two dimensions.In past few years there has been interest in investigating perturbative[1]and non-perturbative string warped compactifications[2,3,4,5].One of the attractive features of such compactifications is the appearance of a scalar potential in the effective low di-mensional effective action depending on some of modulifields.This leads to the lifting of some of theflat directions of the associated compactifications without a warp factor.The presence of the potential can be understood in various ways.One way is that in warped compactifications some ten-or eleven-dimensional supergravity formfield strengths re-ceive an expectation value.Therefore one can view the presence of the potentials as a consequence of a Scherk-Schwarz type of mechanism.Another feature of these warped compactifications is that they preserve a fraction of spacetime supersymmetry.Warped compactifications have also been related to Randall-Sundrum type of senario[6].Warped compactifications of the perturbative heterotic string and of the common sector of type II strings have been investigated sometime ago by Strominger in[1].This was achieved by allowing the dilaton to be a non-constant function of the compact space. Necessary conditions were given for the existence of such compactifications.In particular it was shown that the compact manifold of such warped compactification is a2n-dimensional hermitian,but no K¨a hler,manifold equipped with a holomorphic(n,0)-form.Therefore it is required that the Hodge number h n,0=dim H n,0of the compact manifold is h n,0≥1.In this paper,we shall investigate the conditions for the existence of holomorphic(n,0)-forms on hermitian manifolds using the Gauduchon theorem[16]-[18].We shall assume the following:•The solution associated with the warped compactification is a smooth and the internal manifold is compact.•For compactifications to four-dimensions the solution preserves four and eight su-persymmetries for the heterotic and type II strings,respectively.The non-vanishing fields are those of the common sector.•The dilaton is a globally defined scalar function on the compact space.Then we shall show that there are restrictions for the existence of such forms which man-ifest themselves as conditions on the string three-formfield strength H and its exterior derivative dH.In particular,we shallfind that heterotic warped compactifications with the spin connection embedded in the gauge connection and type II string warped compact-ifications are ruled out.So the only compactifications of these systems with four and eight remaining supersymmetries in four dimensions,respectively,are the standard Calabi-Yau compactifications for which dilaton is constant and the string three-formfield strength van-ishes.Recently other no-go theorems have appeared in the literature for Randall-Sundrum [19,20,21,22]and warped compactifications[23];see also[24,25].We shall begin our analysis with a summary of the relevant results of[1].Then after giving some definitions of various curvature tensors associated with hermitian geometry,we shall present our main result.This is an inequality involving the length of the torsion of the Chern connection and the scalar derived by taking twice the trace of dH with the complexstructure of the hermitian manifold.We shall then explore this inequality in the context of both heterotic and type II strings.We shall conclude with some remarks regarding the relationship between the Killing spinor equation associated with the gravitino and that associated with the dilatino.In particular,we shall argue that the former does not always imply the latter.The string compactifications we consider involve supergravity solutions for which the non-vanishing bosonicfields are those of the NS⊗NS sector,ie the metric G,the dilaton φand the string three-formfield strength H.In type II strings,H is closed but in the heterotic string due to the anomaly cancellation mechanism,dH=0.In what follows,we shall be mainly concerned with compactifications of the heterotic string though our results can be easily adapted to the type II strings.The relevant heterotic Killing spinor equations in the string frame areˆ∇η=01(ΓM∂Mφ−H MR N Y R;(0.2)2Y is a vectorfield.The gamma matrices have been chosen to be hermitian along the space directions and antihermitian along the time direction.The spinorηis in the Majorana-Weyl representation of the spin group Spin(1,9).In the heterotic string,there is an additional Killing spinor equation associated with the gaugino.However this will not enter in our investigation and it will be only very briefly mentioned later.In the case of type II strings there are two additional Killing spinor equations to(0.1)but again do not enter in our analysis.It has been shown in[1]that for warped compactifications to R2(5−n),the solution for the background in the string frame can be written asds2=ds2(R2(5−n))+ds2(M)φ=φ(y)1H=φds2,(0.4)2where ds2E is the Einstein metric.For thefirst Killing spinor equation in(0.1)to have solutions,the holonomy of the connectionˆ∇has to be an appropriate subgroup of SO(1,9).For the compactifications(0.3)we are considering,the non-trivial part of the connectionˆ∇is given by a connection on the compact manifold M which we shall denote with the same letter.Therefore for the first Killing spinor equation in(0.1)to have solutions,we take the holonomy ofˆ∇to be in SU(n).This in particular implies that for compactifications to four dimensions(n=3), four supersymmetries will be preserved.The solutions of this Killing spinor equation are ˆ∇-parallel spinors which can be identified with the singlets in the decomposition of the Majorana-Weyl spinor representation of SO(1,9)under SU(n).In addition it was shown in[1]that M is a hermitian manifold,i.e.M admits a complex structure and it is equipped with a compatible hermitian metric.The complex structure is given in terms of a killing spinorη+asJ i j=−iη†+Γi jη+,(0.5) satisfyingˆ∇k J i j=0,(0.6) i.e.it is parallel with respect to theˆ∇connection.The torsion of the connectionˆ∇is determined in terms of the metric and the complex structure on M asH ijk=−3J m[i dΩ|m|jk],(0.7) whereΩij=g ik J k j is the K¨a hler form of M and dΩis its exterior derivative,(dΩ)ijk= 3∂[iΩjk]and ds2(M)=g ij dy i dy j is the metric on M.In mathematicsˆ∇has appeared sometime ago in the context of hermitian manifolds[11]and recently it is called Bismut connection[12],and in physics has appeared in the context of supersymmetric sigma models [13,14,15].Hermitian manifolds equipped with the Bismut connections are also called KT(K¨a hler with torsion)manifolds[15].Let us now investigate the second Killing spinor equation associated with the dilatino. For the compactifications we are considering,this Killing spinor equation forη+becomes1(Γi∂iφ−6{Γm,H})η+=0.(0.9) After some gamma matrix algebra,this equation givesθi=2∂iφ,(0.10) whereθi is the so called Lee form of J defined as1θi=−∇kΩkm J m i=We remark that thefield and Killing spinor equations of supergravity theory are well-defined if we take the dilaton to be locally defined on M and thereforeθto be closed but not exact.However in the supergravity action,φappears without a derivative acting on it and therefore in order the Lagrangian to be a scalar,φis required to be a globally defined scalar on M.In many backgrounds that arise in the worldsheet-conformalfield theory approach to strings,φis only locally defined function on the background.For example,the background associated with the Wess-Zumino-Witten(WZW)model on S1×S3is supersymmetric,in the supergravity sense,if one introduces a dilaton which depends linearly on the angular coordinate of S1factor.Such a dilaton is not globally defined on S1×S3;though is well-defined in the universal cover of S1×S3which is the near horizon geometry of theNS5-brane.As we shall see in conformalfield theory,a more general situation can arise.The key observation in[1]is that the existence of a solution to the supergravity Killing spinor equations implies the presence of a holomorphic(n,0)-form on M.To see this observe that M admits aˆ∇-parallel(n,0)-formǫbecause of the requirement that the holonomy ofˆ∇is in SU(n).Next we assume thatφis a globally-defined scalar on M and write the form˜ǫ=e−2φǫ.(0.12) Then usingˆ∇ǫ=0,ˆΓγ¯βγ=θ¯β,as it can be verified with an explicit computation,and(0.10),wefind∂¯β˜ǫα1...αn =−2∂¯βφ˜ǫα1...αn+nˆΓγ¯β[α1˜ǫ|γ|α2...αn]=−2∂¯βφ˜ǫα1...αn+ˆΓγ¯βγ˜ǫα1...αn=−2∂¯βφ˜ǫα1...αn+θ¯β˜ǫα1...αn=0,(0.13)whereα1,α2,...αn,γ,δ=1,...,n are holomorphic indices with respect to J.So˜ǫis holomorphic.This concludes our summary of the warped string compactifications.There are obstructions to the existence of holomorphic(n,0)-forms on hermitian man-ifolds.In particular,we shall show there does not exist such a holomorphic(n,0)-form for a large class of the compactifications we have described.For this,wefirst define the Chern connection˜∇on M as˜∇i Y j=∇i Y j+12(J m i dΩmjk+J m j dΩimk),(0.15) which can be expressed in terms of the string three-form H asC ijk=1Observe that if the torsion of the Chern connection vanishes,then the manifold M is K¨a hler and consequently H also vanishes.Next we defineb=−14˜RijklΩijΩkl,(0.18)whereˆR and˜R are the curvature tensors of the Bismut and Chern connections on M, respectively.Moreover a computation reveals that2u=b+C ijk C ijk+1where∇is the Levi-Civita connection of g.In particular,we haveL∗(g)(1)=−2∇αθα.(0.23) The corresponding operators with respect to a conformally equivalent metric¯g=e2n−1w L(g)(f)L∗(¯g)(f)=e−2nhd2n y u(h)≥0,(0.26)then the mth-plurigenus p m(J)=dimH0(M,O(K m))∈{0,1},m>0.Moreover,if the inequality in(0.26)is strict,then p m(J)=0,m>0.In particular h n,0=p1(J)and this gives necessary conditions for the existence of holomorphic(n,0)-forms.To briefly explain the proof of the plurigenera theorem,letλbe a holomorphic(n,0)-form.Then wefind that0=∂¯βλα1...αn=˜∇¯βηα1...αn(0.27) by the properties of the Chern connection˜∇.Next we apply the complex Laplacian L(g) to−12|λ|2)=g ¯βα∂¯β(˜∇αλα1...αnλα1...αn)=g¯βα˜∇¯β˜∇αλα1...αnλα1...αn+|˜∇λ|2=2u|λ|2+|˜∇λ|2.(0.29) We have used the Ricci identities for the Chern connection to derive the latter equality.Now if u≥0,then L(g)(−12gα¯β˜Rα¯β=−1g d2n y f o u(µ)(0.32)is independent of the choosen bundle metricµ,where Vol(g)(M)is the volume of M with respect to the metric g.Moreover there exist a canonical bundle metricµo with u(µo)= v g(L),constant.But the existence of holomorphic sections of a holomorphic bundle L does not depend on the choice offibre metricµ,thus it only depends on the sign of the constantu (µo ).In the case of interest where the line bundle L is a positive power of the canonical line bundle,the constantv g (L )=Vol −1(h )(M ) M√hd 2n y u (h )= M √2 M √4dH ijkl Ωij Ωkl ).(0.35)This is our main equation.Further,the torsion C of the Chern connection can be expressed in terms of the string three-form H using (0.16)but the above expression will suffice.Now consider warped compactifications of the heterotic string with compact space the hermitian,but no K¨a hler manifold,M and with spin connection embedded in the gauge one.If this is the case,then dH =0and soM √2 M √gd 2n y e −(n −1)f [C ijk C ijk +1From the heterotic string one-loop anomaly cancellation formuladH=λ[Tr(R′∧R′)−Tr(F∧F)],(0.38) we haveλ(dH)ijklΩijΩkl=−ǫr stσs∧σt(0.40)2and similarly for¯σr.The metric on M can be chosen to beds2=δrsσrσs+δrs¯σr¯σs.(0.41) (There is a two parameter family of bi-invariant metrics on M but this choice will suffice for our purpose.)The K¨a hler form of the metric that we shall consider isΩ=σ0∧¯σ0+σ1∧σ2+¯σ1∧¯σ2(0.42) In particular wefind that the string formfield strength in this case isH=−σ0∧σ1∧σ2−¯σ0∧¯σ1∧¯σ2.(0.43) The Bismut connection is the left-invariant connection on the group SU(2)×SU(2)and so it has holonomy group the identity.Thus its holonomy is contained in SU(3)and thefirst Killing spinor equation has the required solutions.It remains to see whether the second Killing spinor equation can be solved as well.For this we shall attempt to verify the equationθi=2∂iφ.To do this we compute the Lee formθandfindθ=σ0−¯σ0.(0.44) Thus dθ=0and so the second Killing spinor equation is not satisfied.The manifold M=SU(2)×SU(2)is associated with a two-dimensional N=2superconformal theory, for example that of a WZW model with target space M.Of course such conformal theories do not have the appropriate central charge to serve as string backgrounds,i.e.the dilaton field equation is not satisfied.Nevertheless,it is surprising that there is no choice of a dilaton for the above complex structure,even after adding for example background charges,for which the associated supergravity background preserves some of the spacetime supersymmetry.However if the complex structure J is chosen in a different way,then by adding two appropriate background charges,the resulting background can be thought of as having geometry S1×S3×S1×S3which preserves some spacetime supersymmetry.However the dilatonφin this case depends linearly on the two angular coordinates of S1×S1and so it is not a well-defined function on S1×S3×S1×S3leading to a dφwhich is a closed but not an exact one-form.This is similar to the case of S1×S3WZW background that has already been explained.Alternatively,one can consider the universal cover R×S3×R×S3 of S1×S3×S1×S3.In such case the dilaton is a well-defined function on this backgroundbut the solution is non-compact.All these are in accordance with our main result that there are no common sector type II and heterotic string warped compactifications with the spin connection embedded in the gauge one preserving appropriate supersymmetry.Acknowledgments:We would like to thank G.W.Gibbons and A.Tseytlin for help-ful discussions.S.I is supported by the contracts MM809/1998of the Ministry of Sci-ence and Education of Bulgaria,238/1998of the University of Sofia“St.Kl.Ohrid-ski”and EDGE,Contract HPRN-CT-2000-0010.G.P.is supported by a University Re-search Fellowship from the Royal Society.This work is partially supported by SPG grant PPA/G/S/1998/00613.References[1]A.Strominger,Superstrings with torsion,Nucl.Phys.B274(1986)253.[2]K.Becker and M.Becker,M-theory and Eight-Manifolds Nucl.Phys.B477(1996)155;hep-th/9605053.[3]S.Gukov, C.Vafa and E.Witten,CFT’s From Calabi-Yau Four Folds;hep-th/9906070.[4]S.Gukov,Solitons,Superpotentials and Calibrations Nucl.Phys.B574(2000)169;hep-th/9911011.[5]K.Dasgupta,G.Rajesh,S.Sethi,M-Theory,Orientifolds and G-flux,hep-th/9908088.[6]C.S.Chan,P.L.Paul and H.Verlinde,A note on warped string compactifications,hep-th/0003236.[7]M.B.Green,J.H.Schwarz and P.C.West,Anomaly free chiral theories in six dimen-sions,Nucl.Phys.B254(1985)327.[8]P.Candelas,G.T.Horowitz,A.Strominger and E.Witten,Vacuum Configurationsfor superstrings,Nucl.Phys.B258(1985)46.[9]B.Alexandrov,S.Ivanov,Vanishing theorems on Hermitian manifolds,to appear inDiff.Geom.and Appl.,math.DG/9901090.[10]S.Ivanov and G.Papadopoulos,Vanishing Theorems and String Backgrounds,math.DG/0010038.[11]K.Yano,Differential geometry on complex and almost complex spaces,PergamonPress,Oxford(1965);K.Yano and M.Kon,Structures on Manifolds,World Scientific (1984).[12]J.-M.Bismut,A local index theorem for non-K¨a hler manifolds,Math.Ann.284(1989),681−699.[13]S.J.Gates,Jr.,C.M.Hull and M.Rocˇe k,Twisted multiplets and new supersymmetricnonlinear sigma models Nucl.Phys.B248(1984)157.[14]P.S.Howe and G.Papadopoulos,Ultra-violet behavior of two-dimensional supersym-metric nonlinear sigma models,Nucl.Phys.B289(1987)264;Further remarks on the geometry of two-dimensional sigma models,Class.Quantum Grav.5(1988)1647. [15]P.S.Howe and G.Papadopoulos,Twistor spaces for HKT manifolds,Phys.Lett.B379(1996)80.[16]P.Gauduchon,Le theoreme de l’excentricit´e nulle,C.R.Acad.Sci.Paris Ser.A285(1977),387−390.[17]P.Gauduchon,Fibr´e s hermitiennes`a endomorphisme de Ricci non-n´e gatif.,Bull.Soc.Math.France105(1977),113-140.[18]P.Gauduchon,La1-forme de torsion d’une vari´e t´e hermitienne compacte,Math.Ann.267(1984),495−518.[19]K.Behrndt,M.Cvetic,Supersymmetric Domain Wall World from D=5Simple Su-pergravity,Phys.Lett.B475(2000)253,hep-th/9909058;Anti-deSitter vacua of gauged supergravities with eight supercharges,Phys.Rev.D61:101901,(2000),hep-th/0001159.[20]M.Wijnholt and S.Zhukov,On the uniqueness of black hole attractors,hep-th/9912002[21]R.Kallosh and A.Linde,Supersymmetry and the brane world,JHEP02(2000)005;hep-th/0001071.[22]G.W.Gibbons and mbert,Domain walls and solitons in odd dimensions,hep-th/0003197.[23]J.Maldacena and C.Nunez,Supergravity description offield theories on curved man-ifolds and a no go theorem,hep-th/0007018.[24]G.Papadopoulos,J.G.Russo,A.A.Tseytlin,Curved branes from string dualities,Class.Quantum Grav.17(2000)1713,hep-th/9911253.[25]M.Cvetic,H.Lu,C.N.Pope,J.F.Vazquez-Poritz,AdS in warped space-times,hep-th/0005246.[26]P.Spindel,A.Sevrin,W.Troost,A.van Proeyen,Extended supersymmetricσ-modelson group manifolds,Nucl.Phys.B308(1988),662-698.[27]A.Opfermann,G.Papadopoulos,Homogeneous HKT and QKT manifolds,math-ph/9807026.。

相关文档
最新文档