Parametric high resolution techniques for radio astronomical imaging

The higher sensitivity needed to observe faint objects leads to high dynamic range requirements within the image, where strong sources can affect the imaging of very weak sources. On the other hand, Moore's law [3], together with recent advances in optimization theory [4], opens the way to the application of more advanced computational techniques. In contrast to hardware implementations, image formation algorithms implemented in software can benefit from the continuing increase in computational power, even after the antennas and correlators have been built. In this paper we extend the parametric deconvolution approach of [5] to obtain better power estimation accuracy and higher robustness to interference and modeling uncertainty. The algorithms presented here can also be used in conjunction with real-time interference mitigation techniques as described in [5] and [6].

We briefly describe the current status of radio astronomical imaging techniques. For a more extensive overview the reader is referred to [7], [8] or [9]. A good historical perspective can be found in [10], whereas [11] provides a very recent perspective. The principle of radio interferometry has been used in radio astronomy since 1946, when Ryle and Vonberg constructed a radio interferometer using dipole antenna arrays [12]. During the 1950s several radio interferometers using the synthetic aperture created by movable antennas were constructed. In 1962 the principle of aperture synthesis using earth rotation was proposed [13]. The basic idea is to exploit the rotation of the earth to obtain denser coverage of the visibility domain (the spatial Fourier domain). The first instrument to use this principle was the five-kilometer Cambridge radio telescope. During the 1970s new instruments with large apertures were constructed. Among these are the Westerbork Synthesis Radio Telescope (WSRT) in the Netherlands and the Very Large Array (VLA) in the USA. More recently, the Giant Metrewave Radio Telescope (GMRT) has been constructed in India, and the Allen Telescope Array (ATA) in the US. Even these instruments subsample the Fourier domain, so that unique reconstruction is not possible without further processing, known as deconvolution. The deconvolution process uses a priori knowledge about the image to remove the effect of the "dirty beam" side-lobes.

Two principles dominate astronomical imaging deconvolution. The first method was proposed by Högbom [14] and is known as CLEAN. The CLEAN method is basically a sequential Least-Squares (LS) fitting procedure in which the brightest source location and power are estimated. The response of this source is removed from the image, and the process then continues to find the next brightest source, until the residual image is noise-like. Over the years it has been partially analyzed in [15], [16] and [17].
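To make the CLEAN iteration concrete, the following NumPy sketch implements the basic Högbom loop just described: find the brightest residual pixel, subtract a scaled copy of the dirty beam centred there, and stop when the residual looks noise-like. It is a toy illustration, not the algorithm of [14] or any production imager; the Gaussian dirty beam, the wraparound shift via np.roll, and the gain and threshold values are all invented for this example.

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, threshold=0.01, max_iter=500):
    """Minimal Hogbom CLEAN loop (illustrative sketch only).

    dirty : 2-D dirty image; psf : dirty beam of the same shape, with its
    peak at psf.shape // 2 and normalised to 1.  Returns (components,
    residual).  Hypothetical gain/threshold values, not from [14].
    """
    residual = dirty.copy()
    components = np.zeros_like(dirty)
    centre = np.array(psf.shape) // 2
    for _ in range(max_iter):
        # Locate the brightest remaining source (the sequential LS step).
        peak = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        flux = residual[peak]
        if np.abs(flux) < threshold:
            break  # residual is noise-like; stop cleaning
        components[peak] += gain * flux
        # Subtract the shifted, scaled dirty beam.  np.roll wraps around
        # the edges, which is acceptable only for this toy example.
        shifted_psf = np.roll(psf, tuple(np.array(peak) - centre), axis=(0, 1))
        residual -= gain * flux * shifted_psf
    return components, residual

# Toy usage: one point source blurred by an invented Gaussian dirty beam.
y, x = np.mgrid[:64, :64]
psf = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 18.0)
dirty = np.roll(psf, (20 - 32, 40 - 32), axis=(0, 1))  # source at (20, 40)
components, residual = hogbom_clean(dirty, psf)
print(components[20, 40], np.abs(residual).max())      # ~0.99, ~0.01
```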

ON THE COMPUTATIONAL COMPLEXITY OF ALGORITHMS

BY J. HARTMANIS AND R. E. STEARNS

Received by the editors April 2, 1963 and, in revised form, August 30, 1963.

I. Introduction. In his celebrated paper [1], A. M. Turing investigated the computability of sequences (functions) by mechanical procedures and showed that the set of sequences can be partitioned into computable and noncomputable sequences. One finds, however, that some computable sequences are very easy to compute whereas other computable sequences seem to have an inherent complexity that makes them difficult to compute.

In this paper, we investigate a scheme of classifying sequences according to how hard they are to compute. This scheme puts a rich structure on the computable sequences, and a variety of theorems are established. Furthermore, this scheme can be generalized to classify numbers, functions, or recognition problems according to their computational complexity.

The computational complexity of a sequence is to be measured by how fast a multitape Turing machine can print out the terms of the sequence. This particular abstract model of a computing device is chosen because much of the work in this area is stimulated by the rapidly growing importance of computation through the use of digital computers, and all digital computers in a slightly idealized form belong to the class of multitape Turing machines. More specifically, if T(n) is a computable, monotone increasing function of positive integers into positive integers and if α is a (binary) sequence, then we say that α is in complexity class S_T, or that α is T-computable, if and only if there is a multitape Turing machine 𝒯 such that 𝒯 computes the nth term of α within T(n) operations. Each set S_T is recursively enumerable and so no class S_T contains all computable sequences. On the other hand, every computable α is contained in some complexity class S_T. Thus a hierarchy of complexity classes is assured. Furthermore, the classes are independent of time scale or of the speed of the components from which the machines could be built, as there is a "speed-up" theorem which states that S_T = S_{kT} for positive numbers k.

As corollaries to the speed-up theorem, there are several limit conditions which establish containment between two complexity classes. This is contrasted later with the theorem which gives a limit condition for noncontainment. One form of this result states that if (with minor restrictions)

$$\inf_{n \to \infty} \frac{T(n)}{U(n)} = 0,$$

then S_U properly contains S_T. The intersection of two classes is again a class. The general containment problem, however, is recursively unsolvable.

One section is devoted to an investigation as to how a change in the abstract machine model might affect the complexity classes. Some of these are related by a "square law," including the one-tape versus multitape relationship: that is, if α is T-computable by a multitape Turing machine, then it is T²-computable by a single-tape Turing machine. It is gratifying, however, that some of the more obvious variations do not change the classes.

The complexity of rational, algebraic, and transcendental numbers is studied in another section. There seems to be a good agreement with our intuitive notions, but there are several questions still to be settled.

There is a section in which generalizations to recognition problems and functions are discussed.
This section also provides the first explicit "impossibility" proof, by describing a language whose "words" cannot be recognized in real time [T(n) = n].

The final section is devoted to open questions and problem areas. It is our conviction that numbers and functions have an intrinsic computational nature according to which they can be classified, as shown in this paper, and that there is a good opportunity here for further research.

For background information about Turing machines, computability and related topics, the reader should consult [2]. "Real-time" computations (i.e., T(n) = n) were first defined and studied in [3]. Other ways of classifying the complexity of a computation have been studied in [4] and [5], where the complexity is defined in terms of the amount of tape used.

II. Time limited computations. In this section, we define our version of a multitape Turing machine, define our complexity classes with respect to this type of machine, and then work out some fundamental properties of these classes.

First, we give an English description of our machine (Figure 1), since one must have a firm picture of the device in order to follow our paper. We imagine a computing device that has a finite automaton as a control unit. Attached to this control unit is a fixed number of tapes which are linear, unbounded at both ends, and ruled into an infinite sequence of squares. The control unit has one reading head assigned to each tape, and each head rests on a single square of the assigned tape. There are a finite number of distinct symbols which can appear on the tape squares. Each combination of symbols under the reading heads, together with the state of the control unit, determines a unique machine operation. A machine operation consists of overprinting a symbol on each tape square under the heads, shifting the tapes independently either one square left, one square right, or no squares, and then changing the state of the control unit. The machine is then ready to perform its next operation as determined by the tapes and control state. The machine operation is our basic unit of time. One tape is singled out and called the output tape. The motion of this tape is restricted to one-way movement: it moves either one or no squares right. What is printed on the output tape and moved from under the head is therefore irrevocable, and is divorced from further calculations.

[Figure 1. An n-tape Turing machine: working tapes T_1, ..., T_n attached to a finite-state computer, together with an output tape.]

As Turing defined his machine, it had one tape, and if someone put k successive ones on the tape and started the machine, it would print some f(k) ones on the tape and stop. Our machine is expected to print successively f(1), f(2), ... on its output tape. Turing showed that such innovations as adding tapes or tape symbols do not increase the set of functions that can be computed by machines. Since the techniques for establishing such equivalences are common knowledge, we take it as obvious that the functions computable by Turing's model are the same as those computable by our version of a Turing machine. The reason we have chosen this particular model is that it closely resembles the operation of a present-day computer; and, being interested in how fast a machine can compute, the extra tapes make a difference.

To clear up any misconceptions about our model, we now give a formal definition.
Definition 1. An n-tape Turing machine 𝒯 is a set of (3n + 4)-tuples,

$$\{(q_i;\; S_{i_1}, S_{i_2}, \dots, S_{i_n};\; S_{j_0}, S_{j_1}, \dots, S_{j_n};\; m_0, m_1, \dots, m_n;\; q_f)\},$$

where each component can take on a finite set of values, and such that for every possible combination of the first n + 1 entries there exists a unique (3n + 4)-tuple in this set. The first entry, q_i, designates the present state; the next n entries, S_{i_1}, ..., S_{i_n}, designate the present symbols scanned on tapes T_1, ..., T_n, respectively; the next n + 1 symbols, S_{j_0}, ..., S_{j_n}, designate the new symbols to be printed on tapes T_0, ..., T_n, respectively; the next n + 1 entries describe the tape motions (left, right, no move) of the n + 1 tapes, with the restriction m_0 ≠ left; and the last entry gives the new internal state. Tape T_0 is called the output tape. One tuple with S_{i_j} = blank symbol for 1 ≤ j ≤ n is designated as the starting tuple.

Note that we are not counting the output tape when we figure n. Thus a zero-tape machine is a finite automaton whose outputs are written on a tape. We assume without loss of generality that our machine starts with blank tapes.

For brevity and clarity, our proofs will usually appeal to the English description and will technically be only sketches of proofs. Indeed, we will not even give a formal definition of a machine operation. A formal definition of this concept can be found in [2].

For the sake of simplicity, we shall talk about binary sequences, the generalization being obvious. We use the notation α = a_1 a_2 ....

Definition 2. Let T(n) be a computable function from integers into integers such that T(n) ≤ T(n + 1) and, for some integer k, T(n) ≥ n/k for all n. Then we shall say that the sequence α is T-computable if and only if there exists a multitape Turing machine 𝒯 which prints the first n digits of the sequence α on its output tape in no more than T(n) operations, n = 1, 2, ..., allowing for the possibility of printing a bounded number of digits on one square. The class of all T-computable binary sequences shall be denoted by S_T, and we shall refer to T(n) as a time-function. S_T will be called a complexity class.

When several symbols are printed on one square, we regard them as components of a single symbol. Since these are bounded, we are dealing with a finite set of output symbols. As long as the output comes pouring out of the machine in a readily understood form, we do not regard it as unnatural that the output not be strictly binary. Furthermore, we shall see in Corollaries 2.5, 2.7, and 2.8 that if we insist that T(n) ≥ n and that only (single) binary outputs be used, then the theory would be within an ε of the theory we are adopting.

The reason for the condition T(n) ≥ n/k is that we do not wish to regard the empty set as a complexity class. For if α is in S_T and 𝒯 is the machine which prints it, there is a bound k on the number of digits per square of output tape, and 𝒯 can print at most kn_0 digits in n_0 operations. By assumption, T(kn_0) ≥ n_0, or (substituting n_0 = n/k) T(n) ≥ n/k. On the other hand, T(n) ≥ n/k implies that the sequence of all zeros is in S_T, because we can print k zeros in each operation, and thus S_T is not void.
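To make Definitions 1 and 2 concrete, the following Python sketch interprets a one-work-tape machine (n = 1) in roughly this style. It is illustrative only: the tuple encoding, the symbol and state names, and the example machine are all invented here, with the transition table playing the role of Definition 1's set of (3n + 4)-tuples. The output head's motion is left implicit, since it advances exactly when a digit is printed and can never move left.

```python
from collections import defaultdict

BLANK = "_"

def run(transitions, start_state, steps):
    """Interpret a one-work-tape machine in the spirit of Definition 1.

    transitions: (state, scanned_symbol) ->
                 (output_digit_or_None, new_work_symbol, move, new_state),
    with move one of "L", "R", "N".  The output tape is write-only and
    moves only right, so every printed digit is irrevocable.
    """
    state, pos = start_state, 0
    work = defaultdict(lambda: BLANK)   # work tape, unbounded at both ends
    out = []                            # output tape T0
    for _ in range(steps):              # one loop pass = one machine operation
        out_sym, work_sym, move, state = transitions[(state, work[pos])]
        if out_sym is not None:
            out.append(out_sym)
        work[pos] = work_sym
        pos += {"L": -1, "R": 1, "N": 0}[move]
    return "".join(out)

# Example machine: prints the alternating sequence 0101... in "real time",
# i.e., it delivers the nth digit within T(n) = n operations.
table = {
    ("even", BLANK): ("0", BLANK, "N", "odd"),
    ("odd",  BLANK): ("1", BLANK, "N", "even"),
}
print(run(table, "even", 8))            # -> 01010101
```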
Next we shall derive some fundamental properties of our classes.

Theorem 1. The set of all T-computable binary sequences, S_T, is recursively enumerable.

Proof. By methods similar to the enumeration of all Turing machines [2], one can first enumerate all multitape Turing machines which print binary sequences. This is just a matter of enumerating all the sets satisfying Definition 1 with the added requirement that S_{j_0} is always a finite sequence of binary digits (regarded as one symbol). Let such an enumeration be 𝒯_1, 𝒯_2, .... Because T(n) is computable, it is possible to systematically modify each 𝒯_i to a machine 𝒯_i′ with the following properties: as long as 𝒯_i prints its nth digit within T(n) operations (and this can be verified by first computing T(n) and then looking at the first T(n) operations of 𝒯_i), then the nth digit of 𝒯_i′ will be the nth output of 𝒯_i. If 𝒯_i should ever fail to print the nth digit after T(n) operations, then 𝒯_i′ will print out a zero for each successive operation. Thus we can derive a new enumeration 𝒯_1′, 𝒯_2′, .... If 𝒯_i operates within time T(n), then 𝒯_i and 𝒯_i′ compute the same T-computable sequence α_i. Otherwise, 𝒯_i′ computes an ultimately constant sequence α_i, and this can be printed, k bits at a time [where T(n) ≥ n/k], by a zero-tape machine. In either case, α_i is T-computable and we conclude that {α_i} = S_T.

Corollary 1.1. There does not exist a time-function T such that S_T is the set of all computable binary sequences.

Proof. Since S_T is recursively enumerable, we can design a machine 𝒯 which, in order to compute its ith output, computes the ith bit of sequence α_i and prints out its complement. Clearly 𝒯 produces a sequence α different from all α_i in S_T.

Corollary 1.2. For any time-function T, there exists a time-function U such that S_T is strictly contained in S_U. Therefore, there are infinitely long chains

$$S_{T_1} \subset S_{T_2} \subset \cdots$$

of distinct complexity classes.

Proof. Let 𝒯 compute a sequence α not in S_T (Corollary 1.1). Let V(n) equal the number of operations required by 𝒯 to compute the nth digit of α. Clearly V is computable and α ∈ S_V. Let

$$U(n) = \max\,[T(n), V(n)];$$

then U(n) is a time-function and clearly S_U ⊇ S_T. Since α is in S_U and α is not in S_T, the containment is strict.

Corollary 1.3. The set of all complexity classes is countable.

Proof. The set of enumerable sets is countable.

Our next theorem asserts that linear changes in a time-function do not change the complexity class. If r is a real number, we write ⌈r⌉ to represent the smallest integer m such that m ≥ r.

Theorem 2. If the sequence α is T-computable and k is a computable, positive real number, then α is ⌈kT⌉-computable; that is,

$$S_T = S_{\lceil kT \rceil}.$$

Proof. We shall show that the theorem is true for k = 1/2; it will then be true for k = 1/2^m by induction, and hence for all other computable k since, given k, k ≥ 1/2^m for some m. (Note that if k is computable, then ⌈kT⌉ is a computable function satisfying Definition 2.)

Let 𝒯 be a machine which computes α in time T. If the control state, the tape symbols read, and the tape symbols adjacent to those read are all known, then the state and tape changes resulting from the next two operations of 𝒯 are determined and can therefore be computed in a single operation. If we can devise a scheme so that this information is always available to a machine 𝒯′, then 𝒯′ can perform in one operation what 𝒯 does in two operations. We shall next show how, by combining pairs of tape symbols into single symbols and adding extra memory to the control, we can make the information available.

In Figure 2(a), we show a typical tape of 𝒯 with its head on the square marked 0. In Figure 2(b), we show the two ways we store this information in 𝒯′.
Each square of the 𝒯′-tape contains the information in two squares of the 𝒯-tape. Two of the 𝒯-tape symbols are stored internally in 𝒯′, and 𝒯′ must also remember which piece of information is being read by 𝒯. In our figures, this is indicated by an arrow pointed at the storage spot. In two operations of 𝒯, the heads must move to one of the five squares labeled 2, 1, 0, −1, or −2. The corresponding next position of the 𝒯′-tape is indicated in Figures 2(c)-(g). It is easily verified that in each case 𝒯′ can print or store the necessary changes. In the event that the present symbol read by 𝒯 is stored on the right in 𝒯′, as in Figure 2(f), the analogous changes are made. Thus we know that 𝒯′ can do in one operation what 𝒯 does in two, and the theorem is proved.

Corollary 2.1. If U and T are time-functions such that

$$\inf_{n \to \infty} \frac{T(n)}{U(n)} > 0,$$

then S_U ⊆ S_T.

Proof. Because the limit is greater than zero, kU(n) ≤ T(n) for some k > 0, and thus S_U = S_{⌈kU⌉} ⊆ S_T.

Corollary 2.2. If U and T are time-functions such that

$$\sup_{n \to \infty} \frac{T(n)}{U(n)} < \infty,$$

then S_T ⊆ S_U.

Proof. This is the reciprocal of Corollary 2.1.

[Figure 2. (a) Tape of 𝒯 with head on 0. (b) Corresponding configurations of 𝒯′. (c) 𝒯′ if 𝒯 moves two left. (d) 𝒯′ if 𝒯 moves to −1. (e) 𝒯′ if 𝒯 moves to 0. (f) 𝒯′ if 𝒯 moves to 1. (g) 𝒯′ if 𝒯 moves two right.]

Corollary 2.3. If U and T are time-functions such that

$$0 < \lim_{n \to \infty} \frac{T(n)}{U(n)} < \infty,$$

then S_U = S_T.

Proof. This follows from Corollaries 2.1 and 2.2.

Corollary 2.4. If T(n) is a time-function, then S_n ⊆ S_T. Therefore, T(n) = n is the most severe time restriction.

Proof. Because T is a time-function, T(n) ≥ n/k for some positive k by Definition 2; hence

$$\inf_{n \to \infty} \frac{T(n)}{n} \ge \frac{1}{k} > 0,$$

and S_n ⊆ S_T by Corollary 2.1.

Corollary 2.5. For any time-function T, S_T = S_U where U(n) = max [T(n), n]. Therefore, any complexity class may be defined by a function U(n) ≥ n.

Proof. Clearly inf (T/U) ≥ min (1, 1/k) and sup (T/U) ≤ 1.

Corollary 2.6. If T is a time-function satisfying

$$T(n) \ge n \quad \text{and} \quad \inf_{n \to \infty} \frac{T(n)}{n} > 1,$$

then for any α in S_T, there is a multitape Turing machine 𝒯 with a binary (i.e., two-symbol) output which prints the nth digit of α in T(n) or fewer operations.

Proof. The inf condition implies that, for some rational ε > 0 and integer N, (1 − ε)T(n) ≥ n, or T(n) ≥ εT(n) + n, for all n > N. By the theorem, there is a machine 𝒯′ which prints α in time ⌈εT(n)⌉. 𝒯′ can be modified to a machine 𝒯″ which behaves like 𝒯′ except that it suspends its calculation while it prints the output one digit per square. Obviously, 𝒯″ computes within time ⌈εT(n)⌉ + n (which is less than T(n) for n > N). 𝒯″ can be modified to the desired machine 𝒯 by adding enough memory to the control of 𝒯″ to print out the nth digit of α on the nth operation for n ≤ N.

Corollary 2.7. If T(n) ≥ n and α ∈ S_T, then for any ε > 0 there exists a binary-output multitape Turing machine 𝒯 which prints out the nth digit of α in ⌈(1 + ε)T(n)⌉ or fewer operations.

Proof. Observe that

$$\inf_{n \to \infty} \frac{\lceil (1 + \varepsilon)T(n) \rceil}{n} \ge 1 + \varepsilon > 1,$$

and apply Corollary 2.6.

Corollary 2.8. If T(n) ≥ n is a time-function and α ∈ S_T, then for any real numbers r and ε, r > ε > 0, there is a binary-output multitape Turing machine 𝒯 which, if run at one operation per r − ε seconds, prints out the nth digit of α within rT(n) seconds. If α ∉ S_T, there are no such r and ε.
Thus, when considering time-functions greater than or equal to n, the slightest increase in operation speed wipes out the distinction between binary and nonbinary output machines.

Proof. This is a consequence of the theorem and Corollary 2.7.

Theorem 3. If T_1 and T_2 are time-functions, then T(n) = min [T_1(n), T_2(n)] is a time-function and S_{T_1} ∩ S_{T_2} = S_T.

Proof. T is obviously a time-function. If 𝒯_1 is a machine that computes α in time T_1, and 𝒯_2 computes α in time T_2, then it is an easy matter to construct a third device 𝒯 incorporating both 𝒯_1 and 𝒯_2 which computes α both ways simultaneously and prints the nth digit of α as soon as it is computed by either 𝒯_1 or 𝒯_2. Clearly this machine operates in

$$T(n) = \min\,[T_1(n), T_2(n)].$$

Theorem 4. If sequences α and β differ in at most a finite number of places, then for any time-function T, α ∈ S_T if and only if β ∈ S_T.

Proof. Let 𝒯 print α in time T. Then by adding some finite memory to the control unit of 𝒯, we can obviously build a machine 𝒯′ which computes β in time T.

Theorem 5. Given a time-function T, there is no decision procedure to decide whether a sequence α is in S_T.

Proof. Let 𝒯 be any Turing machine in the classical sense and let 𝒯_1 be a multitape Turing machine which prints a sequence β not in S_T. Such a 𝒯_1 exists by Theorem 1. Let 𝒯_2 be a multitape Turing machine which prints a zero for each operation 𝒯 makes before stopping. If 𝒯 should stop after k operations, then 𝒯_2 prints the kth and all subsequent output digits of 𝒯_1. Let α be the sequence printed by 𝒯_2. Because of Theorem 4, α ∈ S_T if and only if 𝒯 does not stop. Therefore, a decision procedure for α ∈ S_T would solve the stopping problem, which is known to be unsolvable (see [2]).

Corollary 5.1. There is no decision procedure to determine if S_U = S_T or S_U ⊂ S_T for arbitrary time-functions U and T.

Proof. Similar methods to those used in the previous proof link this with the stopping problem.

It should be pointed out that these unsolvability aspects are not peculiar to our classification scheme but hold for any nontrivial classification satisfying Theorem 4.

III. Other devices. The purpose of this section is to compare the speed of our multitape Turing machine with the speed of other variants of a Turing machine. Most important is the first result, because it has an application in a later section.

Theorem 6. If the sequence α is T-computable by a multitape Turing machine 𝒯, then α is T²-computable by a one-tape Turing machine 𝒯_1.

Proof. Assume that an n-tape Turing machine 𝒯 is given. We shall now describe a one-tape Turing machine 𝒯_1 that simulates 𝒯, and show that if 𝒯 is a T-computer, then 𝒯_1 is at most a T²-computer.

The 𝒯 computation is simulated on 𝒯_1 as follows. On the tape of 𝒯_1 will be stored, in n consecutive squares, the n symbols read by 𝒯 on its n tapes. The symbols on the squares to the right of those which are read by 𝒯 on its n tapes are stored in the next section to the right on the 𝒯_1 tape, etc., as indicated in Figure 3, where the corresponding position places are shown.

[Figure 3. (a) The n tapes of 𝒯. (b) The tape of 𝒯_1.]
The machine 𝒯_1 operates as follows. Internally is stored the behavioral description of the machine 𝒯, so that after scanning the n squares of the 0 block, 𝒯_1 determines to what new state 𝒯 will go, what new symbols will be printed by it on its n tapes, and in which direction each of these tapes will be shifted. First, 𝒯_1 prints the new symbols in the corresponding entries of the 0 block. Then it shifts the tape to the right until the end of printed symbols is reached. (We can print a special symbol indicating the end of printed symbols.) Now the machine shifts the tape back, erases all those entries in each block of n squares which correspond to tapes of 𝒯 which are shifted to the left, and prints them in the corresponding places in the next block. Thus all those entries whose corresponding 𝒯-tapes are shifted left are moved one block to the left. At the other end of the tape, the process is reversed, and returning on the tape 𝒯_1 transfers all those entries whose corresponding 𝒯-tapes are shifted to the right one block to the right on the 𝒯_1 tape. When the machine 𝒯_1 reaches the rightmost printed symbol on its tape, it returns to the specially marked 0 block, which now contains the n symbols read by 𝒯 on its next operation, and 𝒯_1 has completed the simulation of one operation of 𝒯. It can be seen that the number of operations of 𝒯_1 needed to simulate one operation of 𝒯 is proportional to s, the number of symbols printed on the tape of 𝒯_1. This number increases by at most 2(n + 1) squares during each operation of 𝒯. Thus, after T(k) operations of the machine 𝒯, the one-tape machine 𝒯_1 will perform at most

$$T_1(k) = C_0 + C_1 \sum_{i=1}^{T(k)} i$$

operations, where C_0 and C_1 are constants. But then

$$T_1(k) \le C_2 \sum_{i=1}^{T(k)} i \le C\,[T(k)]^2.$$

Since C is a constant, using Theorem 2 we conclude that there exists a one-tape machine printing its kth output symbol in fewer than T(k)² tape shifts, as was to be shown.

Corollary 6.1. The best computation-time improvement that can be gained in going from n-tape machines to (n + 1)-tape machines is the square root of the computation time.

Next we investigate what happens if we allow the possibility of having several heads on each tape, with some appropriate rule to prevent two heads from occupying the same square and giving conflicting instructions. We call such a device a multihead Turing machine. Our next result states that the use of such a model would not change the complexity classes.

Theorem 7. Let α be computable by a multihead Turing machine 𝒯 which prints the nth digit in T(n) or fewer operations, where T is a time-function; then α is in S_T.

Proof. We shall show it for a one-tape two-head machine, the other cases following by induction. Our object is to build a multitape machine 𝒯′ which computes α within time 4T, which will establish our result by Theorem 2. The one tape of 𝒯 will be replaced by three tapes in 𝒯′. Tape a contains the left-hand information from 𝒯, tape b contains the right-hand information of 𝒯, and tape c keeps count, two at a time, of the number of tape squares of 𝒯 which are stored on both tapes a and b. A check mark is always on some square of tape a to indicate the rightmost square not stored on tape b, and tape b has a check to indicate the leftmost square not stored on tape a.

When all the information between the heads is on both tapes a and b, we have a "clean" position, as shown in Figure 4(a).
[Figure 4. (a) 𝒯′ in clean position. (b) 𝒯′ in dirty position.]

As 𝒯 operates, tape a performs like the left head of 𝒯, tape b behaves like the right head, and tape c reduces the count each time a check mark is moved. Head a must carry the check right whenever it moves right from a checked square, since the new symbol it prints will not be stored on tape b; and similarly head b moves its check left. After some m operations of 𝒯′ corresponding to m operations of 𝒯, a "dirty" position such as Figure 4(b) is reached, where there is no overlapping information. The information (if any) between the heads of 𝒯 must be on only one tape of 𝒯′, say tape b as in Figure 4(b). Head b then moves to the check mark, the between-head information is copied over onto tape a, and head a moves back into position. A clean position has been achieved, and 𝒯′ is ready to resume imitating 𝒯. The time lost is 3l, where l is the distance between the heads. But l ≤ m, since head b has moved l squares from the check mark it left. Therefore 4m is enough time to imitate m operations of 𝒯 and restore a clean position, as was to be shown.

This theorem suggests that our model can tolerate some large deviations without changing the complexity classes. The same techniques can be applied to other changes in the model. For example, consider multitape Turing machines which have a fixed number of special tape symbols such that each symbol can appear in at most one square at any given time, and such that the reading head can be shifted in one operation to the place where the special symbol is printed, no matter how far away that is on the tape. Turing machines with such "jump instructions" are similarly shown to leave the classes unchanged.

Changes in the structure of the tape tend to lead to "square laws." For example, consider the following:

Definition 3. A two-dimensional tape is an unbounded plane which is subdivided into squares by equidistant sets of vertical and horizontal lines, as shown in Figure 5. The reading head of the Turing machine with this two-dimensional tape can move either one square up or down, or one square left or right, on each operation. This definition extends naturally to higher-dimensional tapes.

Senior Three English reading comprehension: 20 questions on exploring the frontiers of modern science and technology

Question 1. Background passage:

Artificial intelligence (AI) is rapidly transforming the field of healthcare. In recent years, AI has made significant progress in various aspects of medical care, bringing new opportunities and challenges.

One of the major applications of AI in healthcare is in disease diagnosis. AI-powered systems can analyze large amounts of medical data, such as medical images and patient records, to detect diseases at an early stage. For example, deep learning algorithms can accurately identify tumors in medical images, helping doctors make more accurate diagnoses.

Another area where AI is making a big impact is in drug discovery. By analyzing vast amounts of biological data, AI can help researchers identify potential drug targets and design new drugs more efficiently. This can significantly shorten the time and cost of drug development.

AI also has the potential to improve patient care by providing personalized treatment plans. Based on a patient's genetic information, medical history, and other factors, AI can recommend the most appropriate treatment options.

However, the application of AI in healthcare also faces some challenges. One of the main concerns is data privacy and security. Medical data is highly sensitive, and ensuring its protection is crucial. Another challenge is the lack of transparency in AI algorithms. Doctors and patients need to understand how AI makes decisions in order to trust its recommendations.

In conclusion, while AI holds great promise for improving healthcare, it also poses significant challenges that need to be addressed.

1. What is one of the major applications of AI in healthcare?
A. Disease prevention.
B. Disease diagnosis.
C. Health maintenance.
D. Medical education.
Answer: B.

Quantum computing: foreign paper with Chinese-English translation, 2019

FROM BITS TO QUBITS, FROM COMPUTING TO QUANTUM COMPUTING: AN EVOLUTION ON THE VERGE OF A REVOLUTION IN THE COMPUTING LANDSCAPE

Pîrjan Alexandru; Petroşanu Dana-Mihaela

ABSTRACT

The "Quantum Computing" concept has evolved into a new paradigm in the computing landscape, having the potential to strongly influence the field of computer science and all the fields that make use of information technology. In this paper, we focus first on analysing the special properties of the quantum realm, as a proper hardware implementation of a quantum computing system must take these properties into account. Afterwards, we analyze the main hardware components required by a quantum computer, its hardware structure, and the most popular technologies for implementing quantum computers, like the trapped ion technology and the one based on superconducting circuits, as well as other emerging technologies. Our study offers important details that should be taken into account in order to successfully complement the classical computer world of bits with the enticing one of qubits.

KEYWORDS: Quantum Computing, Qubits, Trapped Ion Technology, Superconducting Quantum Circuits, Superposition, Entanglement, Wave-Particle Duality, Quantum Tunnelling

1. INTRODUCTION

The "Quantum Computing" concept has its roots in the "Quantum Mechanics" subdomain of physics, which specifies the way incredibly small particles, down to the subatomic level, behave. Starting from this concept, quantum computing has evolved into a new paradigm in the computing landscape. Initially, the concept was put forward in the 1980s as a means of enhancing the computing capability required to model the way in which quantum physical systems act. In the following decade, the concept drew an increased level of interest due to Shor's algorithm, which, had it been put into practice using a quantum computing machine, would have risked decrypting classified data due to the exponential computational speedup potential offered by quantum computing [1].

However, as the development of quantum computing machines was infeasible at the time, the whole concept was only of theoretical value. Nowadays, what was once thought to be solely a theoretical concept has evolved to become a reality in which quantum information bits (entitled "qubits") can be stored and manipulated. Governmental and private companies alike have an increased interest in leveraging the advantages offered by the huge computational speedup potential provided by quantum computing techniques in contrast to traditional ones [2].

One of the aspects that makes the development of quantum computers attractive is the fact that the shrinkage of silicon transistors at the nanometer scale, which has been taking place for more than 50 years according to Moore's law, is drawing to a halt, giving rise to the need for an alternative solution [3].

Nevertheless, the most important factor accounting for the boosted interest in quantum computing is the huge computational power offered by these systems and the fact that their development, from both hardware and software perspectives, has become a reality.
Quantum computing managed to surpass the extended Church-Turing computability thesis, which states that for any computing device, computational power can increase only polynomially when compared to a "standard" computer, the Turing machine [4].

Over time, hardware companies have designed and launched "classical" computing machines whose processing performance has been improved using two main approaches: first, operations have been accelerated through an increased processing clock frequency, and second, through an increase in the number of operations performed during each processing clock cycle [5].

Although computing power has increased substantially after applying the above-mentioned approaches, the overall gain has remained in accordance with the Church-Turing thesis. In 1993, Bernstein and Vazirani published in [6] a theoretical analysis stating that the extended Church-Turing thesis can be surpassed by means of quantum computing. In the following year, Peter Shor proved in his paper that by means of quantum computing, the factorization of a large number can be achieved with an exponential speedup when compared to a classical computing machine [7-9]. Astonishing as the theoretical framework was, a viable hardware implementation was still lacking at the time.

The first steps toward solving this issue were made in 1995, when scientists laid the foundations of a technology based on a trapped ion system [10] and, afterwards, in 1999, of a technology employing superconducting circuits [11]. Based on the advancement of technology over the last decades, researchers have made huge progress in this field, becoming able to build and employ the first quantum computing systems.

While in a classical computing machine data is stored and processed as bits (having the values 0 or 1), in a quantum computing machine the basic unit of quantum information under which data is stored and processed is the quantum bit, or qubit, which can take, besides the values 0 and 1, a combination of both values at the same time, representing a "superposition" of them [12].

At a certain moment in time, the binary values of the n bits of a classical computer define a single state for it, while n qubits of a quantum computer can, at a certain moment in time, define all of the classical computer's states, therefore covering an exponentially increased computational volume. Nevertheless, in order to achieve this, the qubits must be quantum entangled, a non-local property that makes it possible for several qubits to be correlated at a higher level than was previously possible in classical computing. For this purpose, in order to entangle two or more qubits, a specific controlled environment and special conditions must be provided [13].

During the last three decades, many studies have aimed to advance the state of knowledge in order to attain the special conditions required to build functional quantum computing systems.
Nowadays, besides the most popular technologies employed in the development of quantum computing systems, namely the ones based on trapped ion systems and superconducting circuits, a wide range of alternative approaches are being extensively tested in complex research projects in order to successfully implement qubits and achieve quantum computing [14].

One must take into account the fact that, along with the new hardware architectures and implementations of quantum computing systems, new challenges arise from the fact that this new computing landscape necessitates new operations, computing algorithms and specialized software, all of these being different from the ones used in classical computers.

A proper hardware implementation of a quantum computing system must take into account the special properties of the quantum realm. Therefore, this paper focuses first on analyzing these characteristics and afterwards on presenting the main hardware components required by a quantum computer, its hardware structure, and the most popular technologies for implementing quantum computers, like the trapped ion technology and the one based on superconducting circuits, as well as other emerging technologies. Our research offers important details that should be taken into account in order to successfully complement the classical computer world of bits with the enticing one of qubits.

2. SPECIAL PROPERTIES OF THE QUANTUM REALM

The huge processing power of quantum computers results from the capacity of quantum bits to take all the binary values simultaneously, but harnessing this vast amount of computational potential is a challenging task due to the special properties of the quantum realm. While some of these special properties bring considerable benefits to quantum computing, others can hinder the whole process.

One of the most accurate and extensively tested theories that comprehensively describes our physical world is quantum mechanics. While this theory offers intuitive explanations for large-scale objects, at the subatomic level, where it remains very accurate, its explanations may seem counterintuitive at first sight. At the quantum level, an object does not have a certain predefined state: the object can behave like a particle when a measurement is performed upon it, and like a wave if left unmeasured, a special quantum property entitled wave-particle duality [15].

The global state of a quantum system is determined by the interference of the multitude of states that its objects can simultaneously have at the quantum level, the state being mathematically described through a wave function. The system's state is often described by the sum of the different possible states of its components, each multiplied by a coefficient consisting of a complex number representing, for each state, its relative weight [16, 17]. Taking into consideration its trigonometric (polar) form, such a complex coefficient can be written as $$Ae^{i\theta} = A(\cos\theta + i\sin\theta)$$, where A > 0, the modulus of the complex number, is denoted the "amplitude", while θ, the argument of the complex number, is denoted "the phase shift". Therefore, the complex coefficient is known if the two real numbers A and θ are known.

All the constitutive components of a quantum system have wave-like properties, and are therefore considered "coherent".
In the case of coherence, the different states of the quantum components interact with one another, either constructively or destructively [1]. If a quantum system is measured at a certain moment, the system exposes only a single component, the probability of this event being equal to the squared absolute value of the corresponding coefficient, multiplied by a constant. Once the quantum system is measured, it behaves from that moment on like a classical system, leading to a disruption of its quantum state. This phenomenon causes a loss of information, as the wave function is collapsed and only a single state remains. As a consequence of the measurement, the wave function associated with the quantum object corresponds only to the measured state [1, 17].

Considering a qubit, one can easily show that its quantum state can be represented by a linear superposition of two vectors in a two-dimensional space endowed with a scalar product. The orthonormal basis in this space consists of the vectors $$|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$$ and $$|1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$. Two qubits can be represented as a linear combination of the 2² = 4 elements of the basis, namely |00⟩, |01⟩, |10⟩ and |11⟩. Generally, n qubits can be represented by a superposition state vector in a space of dimension 2ⁿ [2].
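A short NumPy sketch can make this superposition picture concrete. It is illustrative only and assumes no particular quantum SDK; the amplitudes and phases are invented values, chosen to show how the coefficients A·e^{iθ} weight the basis vectors and how n qubits require a 2ⁿ-dimensional state vector.

```python
import numpy as np

# Computational basis vectors |0> = (1, 0)^T and |1> = (0, 1)^T.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A general single-qubit state with coefficients A * e^{i*theta};
# the amplitude/phase values below are invented for illustration.
A0, theta0 = 1 / np.sqrt(2), 0.0
A1, theta1 = 1 / np.sqrt(2), np.pi / 4
psi = A0 * np.exp(1j * theta0) * ket0 + A1 * np.exp(1j * theta1) * ket1
print(np.isclose(np.linalg.norm(psi), 1.0))   # normalised: True

# Measurement probabilities are the squared moduli of the coefficients.
print(np.abs(psi) ** 2)                        # -> [0.5, 0.5]

# n qubits live in a 2^n-dimensional space built from tensor products.
n = 3
state = ket0
for _ in range(n - 1):
    state = np.kron(state, ket0)               # the state |000>
print(state.shape)                             # -> (8,), i.e. 2**3
```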
Another special property of the quantum realm is entanglement, a property that can exert a significant influence on quantum computing and open up a plethora of novel applications. The physical phenomenon of quantum entanglement takes place when two (or more) quantum objects are intercorrelated, so that the state of one quantum object instantaneously influences the state(s) of the other entangled quantum object(s), no matter the distance(s) between these objects [16].

Another important quantum mechanical phenomenon that plays a very important role in quantum computing is quantum tunneling, which allows a subatomic particle to go through a potential barrier, something that would otherwise be impossible to achieve if it obeyed only the physical laws of classical mechanics. An explanation of this different behavior is that in quantum mechanics matter is treated both as waves and as particles, as described above in the presentation of the wave-particle duality concept [15].

The Schrödinger equation describes the variation of the wave function, taking into account the energy environment that acts upon a quantum system, therefore highlighting the way in which this quantum system evolves. The mathematical description of the environment, of the energies corresponding to all the forces acting upon the system, is given by the Hamiltonian of the quantum system. Therefore, the control of a quantum system can be achieved by controlling its energy environment: by isolating the system from external forces and by subjecting it to certain energy fields so as to induce a specific behavior. One should note that a perfect isolation of the quantum system from the external world cannot be achieved, so in practice the interactions are minimized as much as possible. Over time, the quantum system is continuously influenced to a small extent by the external environment, through a process called "decoherence", which modifies the wave function, collapsing it to a certain degree [1].

Figure 1 depicts the main special properties of the quantum realm, which, when precisely controlled, can influence to a large extent the performance of a quantum computer implementation and open up new possibilities for innovation concerning the storing, manipulation and processing of data.

In the following, we analyze a series of hardware components and existing technologies used for developing and implementing quantum computers.

3. AN OVERVIEW OF THE NECESSARY HARDWARE AND OF THE EXISTING TECHNOLOGIES USED IN THE IMPLEMENTATIONS OF QUANTUM COMPUTERS

A proper hardware architecture is vital in order to be able to program, manipulate and retrieve qubits, and overall to achieve an appropriate and correct quantum computer implementation. When implementing a quantum computer at the hardware level, one must take into account the main hardware functions and a proper modularization of the equipment, along with both the similarities and the differences between quantum and classical computer implementations. Conventional computers are an essential part of the successful implementation of a quantum computer, considering the fact that, after having performed its computation, a quantum computer will have to interact with different categories of users and store or transmit its results using classical computer networks. In order to be efficient, quantum computers need to control the qubits precisely, an aspect that can be properly achieved by making use of classical computing systems.

The scientific literature [1, 18, 19] identifies four abstract layers in the conceptual modelling of quantum computers. The first layer, entitled the "quantum data plane", is used for storing the qubits. The second layer, called the "control and measurement plane", performs the necessary operations and measurement actions upon the qubits. The third layer, entitled the "control processor plane", sets up the particular order of operations that need to be performed, along with the measurement actions required by the algorithms, while the fourth abstract layer, the "host processor", consists of a classical computer that manages the interface with the different categories of personnel, the storage of data and its transmission over the networks.

In the following, we present the two most popular technologies employed in the development of quantum computing systems, namely the ones based on trapped ion systems and superconducting circuits, and afterwards other alternative approaches that are being extensively tested in complex research projects in order to successfully implement qubits and achieve quantum computing.

By means of trapping atomic ions, based on the theoretical concepts presented by Cirac et al. within [20], in 1995 Monroe et al. [21] revealed the first quantum logic gate. This was the starting point in implementing the first small-scale quantum processing units, making it possible to design and implement a rich variety of basic quantum computing algorithms.
However, the challenges in scaling up implementations of quantum computers based on trapped ion technology are enormous, because this process implies a synergy of complex technologies: coherent electronic controllers, lasers, radio frequency, vacuum and microwave technology [1, 22].

In a quantum computer based on trapped atomic ion technology, the qubits are represented by atomic ions contained within the quantum data plane by a mechanism that keeps them in a certain fixed location. The desired operations and measurement actions are performed upon the qubits using accurate lasers or a source of microwave electromagnetic radiation in order to alter the states of the quantum objects, namely the atomic ions. In order to reduce the velocity of the quantum objects and perform measurements upon them, a laser beam is used, while the state of the ions is assessed with photon detectors [14, 23, 24]. Figure 2 depicts an implementation of the quantum trapped atomic ion technology.

Another popular technology used in the development and implementation of quantum computers is based on superconducting quantum circuits. These quantum circuits have the property of emitting quantized energy when exposed to temperatures on the order of 10⁻³ K, and are referred to in the literature as "superconducting artificial atoms" [25]. In contrast to classical integrated circuits, the superconducting quantum circuits incorporate a distinctive characteristic, namely a "Josephson junction", which uses wires made of superconducting materials in order to achieve a weak connection. The common way of implementing the junction consists in using an insulator that exposes a very thin layer, created through the Niemeyer-Dolan technique, a specialized lithographic method that uses thin layers of film in order to achieve overlapping structures of nanometer size [26].

Superconducting quantum circuit technology offers a series of important advantages: reduced decoherence, improved scale-up potential, compatibility with microwave control circuits, and operation on time scales of the nanosecond order [1]. All of these characteristics make superconducting quantum circuits an attractive and performant technique for developing quantum computers. A superconducting quantum circuit developed by D-Wave Systems Inc. is depicted in Figure 3.

In order to overcome the numerous challenges regarding the scaling of quantum computers developed on the basis of trapped ion systems and superconducting circuits, many scientists focus their research activity on emerging technologies that leverage different approaches to developing quantum computers.

One of the alternatives that scientists investigate consists in making use of the photons' properties, especially the fact that photons interact weakly with each other and with the environment. Photons have been tested in a series of quantum experiments, and the obtained results led researchers to conclude that the main challenge in developing quantum computers through this approach is to obtain gates that operate on spaces of two qubits, as at present photons offer very good results in terms of single-qubit gates.
In order to obtain the two-qubit gates, two alternative approaches are being extensively investigated, as these have provided the most promising results. The first approach is based on operations and measurements of a single photon, thereby creating a strong interaction, useful in implementing a probabilistic gate that operates on a space of two qubits [1]. The second alternative approach employs semiconductor crystal structures of small dimensions in order to interact with the photons. These small structures can be found in nature, in which case they are called "optically active defects", but they can also be created artificially, in which case they are called "quantum dots".

An important challenge that must be overcome when analyzing quantum computers based on photons is their size. Until now, the development of this type of computer has been possible only at small dimensions, as a series of factors limit the possibility of increasing the dimensions of photonic quantum computers: the very small wavelengths of the photons (micron-size), their very high speed (that of light), and the direction of their movement along a certain dimension of the optical chip. Therefore, trying to significantly increase the number of qubits (represented by the photons) proves to be a difficult task in the case of a photonic device, much more difficult than in the case of other systems in which the qubits are located in space. Nevertheless, the evolution of this emerging technology promises efficient implementations in the near future [27].

Another technology, which resembles the trapped atomic ion approach for obtaining qubits, consists in the use and manipulation of neutral atoms by means of microwave radiation, lasers and optics. Just as in the trapped atomic ion technology, the "cooling" process is achieved using laser sources. According to [1, 28], in 2018 quantum systems having 50 qubits with a reduced space between them were successfully implemented. By altering the space between the qubits, these quantum systems proved to be a successful analog implementation of quantum computers. As concerns error rates, according to [29], in 2018 values as low as 3% were registered within two-qubit quantum systems that managed to properly isolate the operations performed by nearby qubits. Since there are many similarities between the two technologies, the scaling-up process faces many of the problems of the trapped atomic ion technology. However, the use of the neutral atom technology offers the possibility of creating multidimensional arrays.

Semiconductor qubits are classified according to the method used to manipulate them, which can be either photon manipulation or the use of electrical signals. Quantum dots are used in the case of semiconductor qubits gated by optical means, in order to assure a strong coupling of the photons, while in the case of semiconductor qubits manipulated via electrical signals, voltages are applied to lithographically defined metal gates in order to manipulate the qubits [1].
This quantum technology, although less popular than other alternatives, resembles existing classical electronic circuits; one might therefore argue that it has a better chance of attracting the considerable investments that will eventually help speed up the scaling of quantum computer implementations.

In order to scale up qubits that are optically gated, one needs a high degree of consistency and has to process every qubit separately at the optical level. In [30], Pla et al. state that even though electrically gated qubits can be packed very densely, material-related problems until recently posed serious quality problems even at the level of single-qubit gates. Although the high density provided by this type of quantum technology creates opportunities for integrating many qubits on a single processor, complex problems arise when one has to manipulate this kind of qubit, because the wiring will have to assure an isolation of the control signals so as to avoid interference and crosstalk.

Another ongoing approach to developing quantum computers consists in using topological qubits, in which the operations to be performed are safeguarded by a microscopically incorporated topological symmetry that allows the qubit to correct the errors that may arise during the computing process [1]. If in the future this approach materializes, the computational cost associated with correcting quantum errors will diminish considerably or even be eliminated altogether. Although this type of technology is still in its early stages, if someday one is able to implement it and prove its technical feasibility, topological quantum computers will become an important part of the quantum computing landscape.

4. CONCLUSIONS

Quantum computing represents a field in continuous evolution and development, and a huge challenge for researchers and developers, having the potential to influence and revolutionize a wide range of domains like computing theory, information technology and communications and, in the long run, even the evolution and progress of society itself. Therefore, each step of the quantum computers' evolution has the potential to become of paramount importance for humanity: from bits to qubits, from computing to quantum computing, an evolution on the verge of a revolution in the computing landscape.

The Economist bilingual reading: Supercomputing: Deeper thought

Science and technology | Supercomputing

Deeper thought

The world has a new fastest computer, thanks to video games

The ultimate games machine

SPEED fanatics that they are, computer nerds like to check the website of Top500, a collaboration between German and American computer scientists that keeps tabs on which of the world's supercomputers is the fastest.

On November 12th the website released its latest list, and unveiled a new champion.

The computer in question is called Titan, and it lives at Oak Ridge National Laboratory, in Tennessee. It took first place from another American machine, IBM's Sequoia, which is housed at Lawrence Livermore National Laboratory, in California.

Intrinsic safety has gradually become a hot topic in safety-management research since the 1990s

It can be seen that the definitions of intrinsic safety above objectively remain at a surface-level understanding, what may be called external intrinsic safety. Although they mention system harmony, system reliability, changes in human attitudes, human degrees of freedom and proactive accident prevention, they do not touch the core of intrinsic safety, namely its harmonious interactivity: the intrinsic safety of a system is achieved through harmonious interaction at the micro level that produces harmony of the system as a whole. The formation of intrinsic safety should proceed from the outside in, ultimately reaching the system's internal intrinsic safety through the harmony of cultural interaction.
Intrinsic safety has gradually become a hot topic in safety-management research since the 1990s. Some regard it as a completely new safety concept that will fundamentally change humanity's passive position in accident control and prevention. We know, however, that no new technology or idea is created out of thin air; all of them build on what already exists, and the idea of intrinsic safety is no exception. Its emergence reflects the weakness of human accident-prevention techniques and thinking, as well as the longing for safety. Faced with frequent air crashes, shipwrecks and mine disasters, and with natural disasters that are hard to predict and prevent, such as earthquakes, tsunamis, landslides, debris flows and avalanches, people have hoped to find an effective way to prevent or even eliminate accidents once and for all. Extensive and thorough exploration in safety-management practice has therefore produced a large number of accident-causation theories, such as human-error theory, the domino theory and comprehensive causation theory, which attempt to prevent and control accidents at the source. Every theory seems promising, yet accidents and disasters in the real world still occur casually, indifferent to humanity's good wishes and earnest expectations. Are the existing theories deficient, or are accidents inherently unpredictable and unpreventable? Can the idea of intrinsic-safety management fundamentally change this situation? Facing these doubts, this paper starts from an interpretation of the concept of intrinsic safety and carries out a necessary survey of the theoretical system of intrinsic-safety management.
3 Realization mechanisms and classification criteria of intrinsic safety
3.1 Research scope and realization mechanism of intrinsic safety
In experiments on interactive safety management we found that, for complex systems, the first thing a latent hazard destroys is the system's harmony; when system harmony drops to a critical point, accidents follow. It is precisely the harmony of the dynamic interaction mechanisms inside and outside a system that determines its safety. An accident is the coupled result of disharmony in these interactions: internal disharmony, caused by systematic deviations arising from fluctuations of the system's internal interactions, couples with external disharmony, caused by exogenous deviations arising from fluctuations of the system's interaction with its environment. The main research scope of intrinsic-safety management theory is therefore how to reduce the internal and external disharmony of complex socio-technical systems so that system harmony always stays above the critical point, keeping the system intrinsically safe both internally and externally. Moreover, the existence of the critical point of system harmony (that is, the harmony early-warning point) provides a quantitative basis for safety early warning and brings the theory of intrinsic-safety management to an operational level.

The Riemann Hypothesis (in English)


The Riemann Hypothesis, named after the 19th-century mathematician Bernhard Riemann, is one of the most profound and consequential conjectures in mathematics. It is concerned with the distribution of the zeros of the Riemann zeta function, a complex function denoted as $$\zeta(s)$$, where $$s$$ is a complex number. The hypothesis posits that all non-trivial zeros of this analytic function have their real parts equal to $$\frac{1}{2}$$.

To understand the significance of this conjecture, one must delve into the realm of number theory and the distribution of prime numbers. Prime numbers are the building blocks of arithmetic, as every natural number greater than 1 is either a prime or can be factored into primes. The distribution of these primes, however, has puzzled mathematicians for centuries. The Riemann zeta function encodes information about the distribution of primes through its zeros, and thus the Riemann Hypothesis is directly linked to understanding this distribution.

The zeta function is defined for all complex numbers except for $$s = 1$$, where it has a simple pole. For values of $$s$$ with a real part greater than 1, it converges to a sum over the positive integers, as shown in the following equation:

$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}$$
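To make the defining series concrete, here is a minimal numerical sketch (my own illustration, not part of the original text) that truncates the sum to approximate $$\zeta(2)$$ and compares the result with the known value $$\pi^2/6$$; the cutoff of 100,000 terms is an arbitrary choice.

```python
import math

def zeta_partial(s: float, terms: int) -> float:
    """Approximate zeta(s) for real s > 1 by truncating the defining series."""
    return sum(1.0 / n ** s for n in range(1, terms + 1))

# zeta(2) = pi^2/6 = 1.6449...; the truncated sum approaches it from below.
print(zeta_partial(2.0, 100_000))   # ~1.64492
print(math.pi ** 2 / 6)             # 1.64493...
```

The sketch only works for real $$s > 1$$, where the series converges; studying the non-trivial zeros requires the analytic continuation of $$\zeta$$, which the series itself does not provide.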

2021 Gaokao English (BNU edition) round-one review test: Unit 1 Lifestyle (with answers)


Timed practice: 12 reading passages + 1 cloze
I. Reading comprehension
A (2020 Hefei senior-three survey)

Rich as a King

William I, who conquered England some 950 years ago, had wealth, power and an army. Yet although William was very rich by the standard of his time, he had nothing like a flush toilet, paper towels, or a riding lawn mower. How did he get by?

History books are filled with wealthy people who were poor compared to me. I have storm windows; Croesus did not. Entire nations trembled before Alexander the Great, but he couldn't buy cat food. Czar Nicholas lacked an electric saw.

Given how much better off I am than so many famous dead people, you'd think I'd be content. The trouble is that, like most people, I compare my wealth with that of living persons: neighbors, school classmates, and famous TV people. The greed I feel toward my friend Howard's new kitchen is not reduced by the fact that no kings ever had a refrigerator with glass doors.

There is really no rising or falling standard of living. Over the centuries people simply find different things to feel sad about. You'd think that simply not having disease would put us in a good mood, but no, we want a hot bath too.

Of course, one way to achieve happiness would be to realize that even by today's standards the things I own are pretty nice. My house is smaller than the houses of many investment bankers, but even so it has a lot more rooms than my wife and I can keep clean.

Besides, to people looking back at our era from a century or two in the future, these bankers' fancy counter tops and my own worn Formica will seem equally shabby. I can't keep up with my neighbors right now. But just wait.

[Reading guide] By contrasting the author with wealthy figures of the past, the passage makes the point that there is no need to compare living standards with others: contentment brings happiness.

Contemporary Graduate English Reading and Writing Course (Vol. 1), Unit 3: text and translation


Unit 3

1 The first mistake is to think of mankind as a thing in itself. It isn't. It is part of an intricate web of life. And we can't think even of life as a thing in itself. It isn't. It is part of the intricate structure of a planet bathed by energy from the Sun.

2 The Earth, in the nearly 5 billion years since it assumed approximately its present form, has undergone a vast evolution. When it first came into being, it very likely lacked what we would today call an ocean and an atmosphere. These were formed by the gradual outward movement of material as the solid interior settled together.

3 Nor were ocean, atmosphere, and solid crust independent of each other after formation.

50 multiple-choice questions on cutting-edge technology vocabulary for senior high English


1. In the field of computer science, when we talk about data storage, "cloud computing" provides a ______ solution.
A. revolutionary B. traditional C. limited D. temporary
Answer: A. This question tests word meaning. "Revolutionary" fits: cloud computing provides a revolutionary solution for data storage. "Traditional" does not match the character of cloud computing; "limited" is inconsistent with its vast storage capacity; "temporary" is inconsistent with its role as a long-term data storage method.

2. The development of artificial intelligence requires advanced algorithms and powerful ______.
A. processors B. memories C. screens D. keyboards
Answer: A. "Processors" is correct: the development of artificial intelligence requires advanced algorithms and powerful processors. Memory is not the key hardware for developing AI; screens are not core hardware for AI development; keyboards are unrelated to the hardware AI development needs.

3. In the era of big data, ______ plays a crucial role in extracting valuable information.
A. data mining B. data hiding C. data deleting D. data adding
Answer: A. "Data mining" plays a crucial role in extracting valuable information in the era of big data.

Intrinsic safety has gradually become a hot topic in safety-management research since the 1990s

In the practice of intrinsic-safety management, how the disharmony of a socio-technical system is reduced depends mainly on the complexity of the system and of its components. For systems of low complexity, deterministic control and optimization dominate; as system complexity increases, the leading role of people in system operation becomes more and more evident, and the reduction of system disharmony is then dominated by mitigating human non-determinacy (human-nondeterminacy mitigating), as shown in Figure 1.

Figure 1. Modes of reducing system disharmony

3.2 Evolution patterns and classification criteria of intrinsic safety

The realization mechanism of intrinsic safety shows that the process of achieving intrinsic safety is in fact the process of achieving system harmony, and equally a process in which humanity's understanding of accident causation has gradually improved. Objectively speaking, this understanding has gone through the following three historical leaps, which laid a solid foundation for the theory of intrinsic-safety management.

The first leap: technical causation. In this leap, people moved from the "force majeure" view of accidents to a technical-causation view, holding that accidents originate in the unreliability of technology itself. Because a technology is itself flawed, machines built with that technology are also unreliable, and people using such machines will from time to time cause accidents. Research in this stage concentrated on how to use reliable techniques to design safe and reliable machinery, so as to eliminate accidents caused by mechanical failures.

The second leap: behavioral error. In this leap, the understanding of accident causation developed further: accidents were seen as the coupled result of human and technical factors, with human error playing the dominant role, that is, the operators' behavioral errors and cognitive deficiencies are the main cause of accidents. Research in this stage therefore focused on how to change and regulate human behavior, mainly through standardized management, institutions and legislation; for high-risk industries, countries enacted a large number of laws, regulations and technical standards to constrain and regulate people's behavior.

On Computation of Error Locations and Values in Hermitian Codes


Abstract: We present a technique to reduce the computational complexity associated with the decoding of Hermitian codes. In particular, we propose a method to compute the error locations and values using a univariate error locator and a univariate error evaluator polynomial. To achieve this, we introduce the notion of semi-erasure decoding of Hermitian codes and prove that decoding of Hermitian codes can always be performed using semi-erasure decoding. The central results are:
* Searching for error locations requires evaluating a univariate error locator polynomial over $q^2$ points, as in the Chien search for Reed-Solomon codes.
* Forney's formula for error value computation in Reed-Solomon codes can be applied directly to compute the error values in Hermitian codes.
The approach develops from the idea that transmitting a modified form of the information may be more efficient than transmitting the information itself.

2 Code Construction

We consider codes from a Hermitian curve $\chi : x^{q+1} = y^q + y$ over a finite field $F_{q^2}$. The space $L(mP_\infty)$ consists of all functions on $\chi$ that have a pole only at the unique point at infinity $P_\infty$, of multiplicity at most $m$. We present the following proposition from [11, 12] without proof.

Proposition 1: For each $m \ge 0$ the following set is a basis of $L(mP_\infty)$:

$$\{ x^a y^b : aq + b(q+1) \le m,\ 0 \le a,\ 0 \le b < q \} \qquad (1)$$
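As a quick illustration of Proposition 1, the following sketch (my own illustration, not code from the paper) enumerates the monomials $x^a y^b$ that span $L(mP_\infty)$ for a given $q$ and $m$; the key fact it encodes is that the pole order of $x^a y^b$ at $P_\infty$ is $aq + b(q+1)$.

```python
def hermitian_basis(q: int, m: int):
    """Monomials x^a y^b spanning L(m P_inf) on x^(q+1) = y^q + y over F_{q^2}.

    Per Proposition 1 the basis is {x^a y^b : a*q + b*(q+1) <= m, 0 <= a, 0 <= b < q}.
    Returns the exponent pairs (a, b).
    """
    basis = []
    for b in range(q):                       # 0 <= b < q
        a = 0
        while a * q + b * (q + 1) <= m:      # pole order at P_inf is a*q + b*(q+1)
            basis.append((a, b))
            a += 1
    return basis

# Example: Hermitian curve over F_16 (q = 4) and m = 10.
print(hermitian_basis(4, 10))       # [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (0, 2)]
print(len(hermitian_basis(4, 10)))  # dimension of L(10 P_inf)
```

The length of the returned list is the dimension of $L(mP_\infty)$; for $m \ge 2g - 1$ it agrees with the Riemann-Roch value $m - g + 1$, where $g = q(q-1)/2$ is the genus of the curve.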

State Space Reconstruction for Multivariate Time Series Prediction


arXiv:0809.2220v1 [nlin.CD] 12 Sep 2008

State Space Reconstruction for Multivariate Time Series Prediction

I. Vlachos and D. Kugiumtzis
Department of Mathematical, Physical and Computational Sciences, Faculty of Technology, Aristotle University of Thessaloniki, Greece
(Dated: September 12, 2008)

In the nonlinear prediction of scalar time series, the common practice is to reconstruct the state space using time-delay embedding and apply a local model on neighborhoods of the reconstructed space. The method of false nearest neighbors is often used to estimate the embedding dimension. For prediction purposes, the optimal embedding dimension can also be estimated by some prediction error minimization criterion. We investigate the proper state space reconstruction for multivariate time series and modify the two abovementioned criteria to search for optimal embedding in the set of the variables and their delays. We pinpoint the problems that can arise in each case and compare the state space reconstructions (suggested by each of the two methods) on the predictive ability of the local model that uses each of them. Results obtained from Monte Carlo simulations on known chaotic maps revealed the non-uniqueness of optimum reconstruction in the multivariate case and showed that prediction criteria perform better when the task is prediction.

PACS numbers: 05.45.Tp, 02.50.Sk, 05.45.a
Keywords: nonlinear analysis, multivariate analysis, time series, local prediction, state space reconstruction

I. INTRODUCTION

Since its publication Takens' Embedding Theorem [1] (and its extension, the Fractal Delay Embedding Prevalence Theorem by Sauer et al. [2]) has been used in time series analysis in many different settings ranging from system characterization and approximation of invariant quantities, such as correlation dimension and Lyapunov exponents, to prediction and noise-filtering [3]. The Embedding Theorem implies that although the true dynamics of a system may not be known, equivalent dynamics can be obtained under suitable conditions using time delays of a single time series, treated as a one-dimensional projection of the system trajectory.

Most applications of the Embedding Theorem deal with univariate time series, but often measurements of more than one quantity related to the same dynamical system are available. One of the first uses of multivariate embedding was in the context of spatially extended systems where embedding vectors were constructed from data representing the same quantity measured simultaneously at different locations [4, 5]. Multivariate embedding was used for noise reduction [6] and for surrogate data generation with equal individual delay times and equal embedding dimensions for each time series [7]. In nonlinear multivariate prediction, the prediction with local models on a space reconstructed from a different time series of the same system was studied in [8]. This study was extended in [9] by having the reconstruction utilize all of the observed time series. Multivariate embedding with the use of independent components analysis was considered in [10] and more recently multivariate embedding [...]

[...] as $x_n = h(y_n)$. Despite the apparent loss of information of the system dynamics by the projection, the system dynamics may be recovered through suitable state space reconstruction from the scalar time series.

A. Reconstruction of the state space

According to Takens' embedding theorem a trajectory formed by the points $\mathbf{x}_n$ of time-delayed components from the time series $\{x_n\}_{n=1}^{N}$ as

$$\mathbf{x}_n = (x_{n-(m-1)\tau}, x_{n-(m-2)\tau}, \ldots, x_n), \qquad (1)$$
under certain genericity assumptions, is a one-to-one mapping of the original trajectory of $y_n$, provided that $m$ is large enough.

Given that the dynamical system "lives" on an attractor $A \subset \Gamma$, the reconstructed attractor $\tilde{A}$ through the use of the time-delay vectors is topologically equivalent to $A$. A sufficient condition for an appropriate unfolding of the attractor is $m \ge 2d + 1$ where $d$ is the box-counting dimension of $A$. The embedding process is visualized in the following graph

  y_n ∈ A ⊂ Γ   --F-->   y_{n+1} ∈ A ⊂ Γ
      | h                     | h
  x_n ∈ R                 x_{n+1} ∈ R
      | e                     | e
  x_n ∈ Ã ⊂ R^m  --G-->  x_{n+1} ∈ Ã ⊂ R^m

where $e$ is the embedding procedure creating the delay vectors from the time series and $G$ is the reconstructed dynamical system on $\tilde{A}$. $G$ preserves properties of the unknown $F$ on the unknown attractor $A$ that do not change under smooth coordinate transformations.

B. Univariate local prediction

For a given state space reconstruction, the local prediction at a target point $\mathbf{x}_n$ is made with a model estimated on the $K$ nearest neighboring points to $\mathbf{x}_n$. The local model can have a simple form, such as the zeroth order model (the average of the images of the nearest neighbors), but here we consider the linear model

$$\hat{x}_{n+1} = a^{(n)} \mathbf{x}_n + b^{(n)},$$

where the superscript $(n)$ denotes the dependence of the model parameters ($a^{(n)}$ and $b^{(n)}$) on the neighborhood of $\mathbf{x}_n$. The neighborhood at each target point is defined either by a fixed number $K$ of nearest neighbors or by a distance determining the borders of the neighborhood, giving a varying $K$ with $\mathbf{x}_n$.

C. Selection of embedding parameters

The two parameters of the delay embedding in (1) are the embedding dimension $m$, i.e. the number of components in $\mathbf{x}_n$, and the delay time $\tau$. We skip the discussion on the selection of $\tau$ as it is typically set to 1 in the case of discrete systems that we focus on. Among the approaches for the selection of $m$ we choose the most popular method of false nearest neighbors (FNN) and present it briefly below [13].

The measurement function $h$ projects distant points $\{y_n\}$ of the original attractor to close values of $\{x_n\}$. A small $m$ may still give badly projected points and we seek the reconstructed state space of the smallest embedding dimension $m$ that unfolds the attractor. This idea is implemented as follows. For each point $\mathbf{x}^m_n$ in the $m$-dimensional reconstructed state space, the distance from its nearest neighbor $\mathbf{x}^m_{n(1)}$ is calculated, $d(\mathbf{x}^m_n, \mathbf{x}^m_{n(1)}) = \|\mathbf{x}^m_n - \mathbf{x}^m_{n(1)}\|$. The dimension of the reconstructed state space is augmented by 1 and the new distance of these vectors is calculated, $d(\mathbf{x}^{m+1}_n, \mathbf{x}^{m+1}_{n(1)}) = \|\mathbf{x}^{m+1}_n - \mathbf{x}^{m+1}_{n(1)}\|$. If the ratio of the two distances exceeds a predefined tolerance threshold $r$ the two neighbors are classified as false neighbors, i.e.

$$r_n(m) = \frac{d(\mathbf{x}^{m+1}_n, \mathbf{x}^{m+1}_{n(1)})}{d(\mathbf{x}^m_n, \mathbf{x}^m_{n(1)})} > r. \qquad (2)$$

[...]
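A compact sketch of the procedure just described, delay embedding per Eq. (1) plus the false-nearest-neighbor ratio test of Eq. (2), might look as follows; this is an illustrative reimplementation rather than the authors' code, and the threshold r = 10 and the Henon-map test data (Eq. (5) below) are arbitrary but typical choices.

```python
import numpy as np

def delay_embed(x, m, tau=1):
    """Delay vectors (x_{n-(m-1)tau}, ..., x_n) as rows, per Eq. (1)."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

def fnn_fraction(x, m, tau=1, r=10.0):
    """Fraction of false nearest neighbors when going from dimension m to m+1."""
    X_m = delay_embed(x, m, tau)
    X_m1 = delay_embed(x, m + 1, tau)
    N = len(X_m1)
    X_m = X_m[-N:]                 # align: row i of X_m and X_m1 end at the same sample
    false = 0
    for i in range(N):
        d = np.linalg.norm(X_m - X_m[i], axis=1)
        d[i] = np.inf              # exclude the point itself
        j = int(np.argmin(d))      # nearest neighbor in dimension m
        d1 = np.linalg.norm(X_m1[i] - X_m1[j])
        if d1 / d[j] > r:          # the ratio test of Eq. (2)
            false += 1
    return false / N

# Henon map as test data; the FNN fraction should drop to ~0 at m = 2.
x, y, xs = 0.1, 0.1, []
for _ in range(2100):
    x, y = 1.4 - x**2 + y, 0.3 * x
    xs.append(x)
xs = np.array(xs[100:])            # discard the transient
for m in (1, 2, 3):
    print(m, fnn_fraction(xs, m))
```

The quadratic-time neighbor search keeps the sketch short; a k-d tree would be the usual choice for long series.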
III. MULTIVARIATE EMBEDDING

In Section II we gave a summary of the reconstruction technique for a deterministic dynamical system from a scalar time series generated by the system. However, it is possible that more than one time series are observed that are possibly related to the system under investigation. For $p$ time series measured simultaneously from the same dynamical system, a measurement function $H: \Gamma \to R^p$ is decomposed to $h_i$, $i = 1, \ldots, p$, defined as in Section II, giving each a time series $\{x_{i,n}\}_{n=1}^{N}$. According to the discussion on univariate embedding any of the $p$ time series can be used for reconstruction of the system dynamics, or better, the most suitable time series could be selected after proper investigation. In a different approach all the available time series are considered and the analysis of the univariate time series is adjusted to the multivariate time series.

A. From univariate to multivariate embedding

Given that there are $p$ time series $\{x_{i,n}\}_{n=1}^{N}$, $i = 1, \ldots, p$, the equivalent to the reconstructed state vector in (1) for the case of multivariate embedding is of the form

$$\mathbf{x}_n = (x_{1,n-(m_1-1)\tau_1}, x_{1,n-(m_1-2)\tau_1}, \ldots, x_{1,n}, x_{2,n-(m_2-1)\tau_2}, \ldots, x_{2,n}, \ldots, x_{p,n}) \qquad (3)$$

and is defined by an embedding dimension vector $\mathbf{m} = (m_1, \ldots, m_p)$ that indicates the number of components used from each time series and a time delay vector $\tau = (\tau_1, \ldots, \tau_p)$ that gives the delays for each time series. The corresponding graph for the multivariate embedding process is shown below.

  y_n ∈ A ⊂ Γ  --F-->  y_{n+1} ∈ A ⊂ Γ
   /h_1 |h_2 ... \h_p           /h_1 |h_2 ... \h_p
  x_{1,n} x_{2,n} ... x_{p,n}   x_{1,n+1} x_{2,n+1} ... x_{p,n+1}
   \e |e ... /e                 \e |e ... /e
  x_n ∈ Ã ⊂ R^M  --G-->  x_{n+1} ∈ Ã ⊂ R^M

The total embedding dimension $M$ is the sum of the individual embedding dimensions for each time series, $M = \sum_{i=1}^{p} m_i$. Note that if redundant or irrelevant information is present in the $p$ time series, only a subset of them may be represented in the optimal reconstructed points $\mathbf{x}_n$. The selection of $\mathbf{m}$ and $\tau$ follows the same principles as for the univariate case: the attractor should be fully unfolded and the components of the embedding vectors should be uncorrelated. A simple selection rule suggests that all individual delay times and embedding dimensions are the same, i.e. $\mathbf{m} = m\mathbf{1}$ and $\tau = \tau\mathbf{1}$ with $\mathbf{1}$ a $p$-vector of ones [6, 7]. Here, we set again $\tau_i = 1$, $i = 1, \ldots, p$, but we consider both fixed and varying $m_i$ in the implementation of the FNN method (see Section III D).

B. Multivariate local prediction

The prediction for each time series $x_{i,n}$, $i = 1, \ldots, p$, is performed separately by $p$ local models, estimated as in the case of univariate time series, but for reconstructed points formed potentially from all $p$ time series as given in (3) (e.g. see [9]). We propose an extension of the NRMSE for the prediction of one time series to account for the error vectors comprised of the individual prediction errors for each of the predicted time series. If we have one step ahead predictions for the $p$ available time series, i.e. $\hat{x}_{i,n}$, $i = 1, \ldots, p$ (for a range of current times $n-1$), we define the multivariate NRMSE

$$\mathrm{NRMSE} = \sqrt{ \frac{\sum_n \|(x_{1,n} - \hat{x}_{1,n}, \ldots, x_{p,n} - \hat{x}_{p,n})\|^2}{\sum_n \|(x_{1,n} - \bar{x}_1, \ldots, x_{p,n} - \bar{x}_p)\|^2} } \qquad (4)$$

where $\bar{x}_i$ is the mean of the actual values of $x_{i,n}$ over all target times $n$.

C. Problems and restrictions of multivariate reconstructions

A major problem in the multivariate case is the problem of identification. There are often not unique $\mathbf{m}$ and $\tau$ embedding parameters that unfold fully the attractor. A trivial example is the Henon map [17]

$$x_{n+1} = 1.4 - x_n^2 + y_n, \qquad y_{n+1} = 0.3 x_n \qquad (5)$$

It is known that for the state space reconstruction from the observable $x_n$ the appropriate embedding parameters are $m = 2$ and $\tau = 1$. Due to the fact that $y_n$ is a lagged multiple of $x_n$ the attractor can obviously be reconstructed from the bivariate time series $\{x_n, y_n\}$ equally well with any of the following two-dimensional embedding schemes

$$\mathbf{x}_n = (x_n, x_{n-1}), \quad \mathbf{x}_n = (x_n, y_n), \quad \mathbf{x}_n = (y_n, y_{n-1})$$

since they are essentially the same. This example shows also the problem of redundant information, e.g. the state space reconstruction would not improve by augmenting the delay vector $\mathbf{x}_n = (x_n, x_{n-1})$ with the component $y_n$ that actually duplicates $x_{n-1}$. Redundancy is inevitable in multivariate time series as synchronous observations of the different time series are generally correlated and the fact that these observations are used as components in the same embedding vector adds redundant information in them. We note here that in the case of continuous dynamical systems, the delay parameter $\tau_i$ may be selected so that the components of the $i$ time series are not correlated with each other, but this does not imply that they are not correlated to components from another time series.
reconstruction from the observable x n the appropriate embedding parame-ters are m=2andτ=1.Due to the fact that y n is a lagged multiple of x n the attractor can obviously be reconstructed from the bivariate time series{x n,y n} equally well with any of the following two-dimensional embedding schemesx n=(x n,x n−1)x n=(x n,y n)x n=(y n,y n−1) since they are essentially the same.This example shows also the problem of redundant information,e.g.the state space reconstruction would not improve by augmenting the delay vector x n=(x n,x n−1)with the component y n that actually duplicates x n−1.Redundancy is inevitable in multivariate time series as synchronous observations of the different time series are generally correlated and the fact that these observations are used as components in the same embedding vector adds redundant information in them.We note here that in the case of continuous dynamical systems,the delay parameterτi may be se-lected so that the components of the i time series are not correlated with each other,but this does not imply that they are not correlated to components from another time series.4 A different problem is that of irrelevance,whenseries that are not generated by the same dynamicaltem are included in the reconstruction procedure.may be the case even when a time series is connectedtime series generated by the system underAn issue of concern is also the fact thatdata don’t always have the same data ranges andtances calculated on delay vectors withdifferent ranges may depend highly on only some ofcomponents.So it is often preferred to scale all theto have either the same variance or be in the samerange.For our study we choose to scale the data torange[0,1].D.Selection of the embedding dimension vector Taking into account the problems in the state space reconstruction from multivariate time series,we present three methods for determining m,two based on the false nearest neighbor algorithm,which we name FNN1and FNN2,and one based on local models which we call pre-diction error minimization criterion(PEM).The main idea of the FNN algorithms is as for the univariate case.Starting from a small value the embed-ding dimension is increased by including delay compo-nents from the p time series and the percentage of the false nearest neighbors is calculated until it falls to the zero level.The difference of the two FNN methods is on the way that m is increased.For FNN1we restrict the state space reconstruction to use the same embedding dimension for each of the p time series,i.e.m=(m,m,...,m)for a given m.To assess whether m is sufficient,we consider all delay embeddings derived by augmenting the state vector of embedding di-mension vector(m,m,...,m)with a single delayed vari-able from any of the p time series.Thus the check for false nearest neighbors in(2)yields the increase from the embedding dimension vector(m,m,...,m)to each of the embedding dimension vectors(m+1,m,...,m), (m,m+1,...,m),...,(m,m,...,m+1).Then the algo-rithm stops at the optimal m=(m,m,...,m)if the zero level percentage of false nearest neighbors is obtained for all p cases.A sketch of thefirst two steps for a bivariate time series is shown in Figure1(a).This method has been commonly used in multivariate reconstruction and is more appropriate for spatiotem-porally distributed data(e.g.see the software package TISEAN[18]).A potential drawback of FNN1is that the selected total embedding dimension M is always a multiple of p,possibly introducing redundant informa-tion in the embedding vectors.We modify the 
algorithm of FNN1 to account for any form of the embedding dimension vector $\mathbf{m}$, with the total embedding dimension $M$ increased by one at each step of the algorithm. Let us suppose that the algorithm has reached at some step the total embedding dimension $M$. For this $M$ all the combinations of the components of the embedding dimension vector $\mathbf{m} = (m_1, m_2, \ldots, m_p)$ are considered under the condition $M = \sum_{i=1}^{p} m_i$. Then for each such $\mathbf{m} = (m_1, m_2, \ldots, m_p)$ all the possible augmentations with one dimension are checked for false nearest neighbors, i.e. $(m_1+1, m_2, \ldots, m_p)$, $(m_1, m_2+1, \ldots, m_p)$, ..., $(m_1, m_2, \ldots, m_p+1)$. A sketch of the first two steps of the extended FNN algorithm, denoted as FNN2, for a bivariate time series is shown in Figure 1(b). The termination criterion is the drop of the percentage of false nearest neighbors to the zero level at every increase of $M$ by one for at least one embedding dimension vector $(m_1, m_2, \ldots, m_p)$. If more than one embedding dimension vector fulfills this criterion, the one with the smallest cumulative FNN percentage is selected, where the cumulative FNN percentage is the sum of the $p$ FNN percentages for the increase by one of the respective component of the embedding dimension vector.

The PEM criterion for the selection of $\mathbf{m} = (m_1, m_2, \ldots, m_p)$ is simply the extension of the goodness-of-fit or prediction criterion in the univariate case to account for the multiple ways the delay vector can be formed from the multivariate time series. Thus for all possible $p$-plets of $(m_1, m_2, \ldots, m_p)$ from $(1, 0, \ldots, 0)$, $(0, 1, \ldots, 0)$, etc. up to some vector of maximum embedding dimensions $(m_{\max}, m_{\max}, \ldots, m_{\max})$, the respective reconstructed state spaces are created, local linear models are applied and out-of-sample prediction errors are computed. So, in total $(m_{\max}+1)^p - 1$ embedding dimension vectors are compared and the optimal is the one that gives the smallest multivariate NRMSE as defined in (4).
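To make the combinatorial step of FNN2 explicit, here is a small helper (an illustrative sketch, not the authors' code) that lists, for a given total dimension M and p time series, every embedding dimension vector with component sum M together with the p one-dimension augmentations the algorithm would test.

```python
from itertools import product

def dimension_vectors(M: int, p: int):
    """All m = (m_1, ..., m_p) with nonnegative integer entries summing to M."""
    return [m for m in product(range(M + 1), repeat=p) if sum(m) == M]

def augmentations(m):
    """The p candidate vectors obtained by adding one delay to one of the series."""
    return [tuple(mi + 1 if i == j else mi for i, mi in enumerate(m))
            for j in range(len(m))]

# Step M = 2 of FNN2 for a bivariate series (p = 2):
for m in dimension_vectors(2, 2):
    print(m, "->", augmentations(m))
# (0, 2) -> [(1, 2), (0, 3)]
# (1, 1) -> [(2, 1), (1, 2)]
# (2, 0) -> [(3, 0), (2, 1)]
```

Each candidate augmentation would then be fed to an FNN ratio test such as the fnn_fraction sketch above, generalized to mixed delay vectors.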
IV. MONTE CARLO SIMULATIONS AND RESULTS

A. Monte Carlo setup

We test the three methods by performing Monte Carlo simulations on a variety of known nonlinear dynamical systems. The embedding dimension vectors are selected using the three methods on 100 different realizations of each system and the most frequently selected embedding dimension vectors for each method are tracked. Also, for each realization and selected embedding dimension vector, the multivariate NRMSE over the 100 realizations for each method is used as an indicator of the performance of each method in prediction. The selection of the embedding dimension vector by FNN1, FNN2 and PEM is done on the first three quarters of the data, $N_1 = 3N/4$, and the multivariate NRMSE is computed on the last quarter of the data ($N - N_1$). For PEM, the same split is used on the $N_1$ data, so that $N_2 = 3N_1/4$ data are used to find the neighbors (training set) and the rest $N_1 - N_2$ are used to compute the multivariate NRMSE (test set) and decide for the optimal embedding dimension vector. A sketch of the split of the data is shown in Figure 2. The number of neighbors for the local models in PEM varies with $N$ and we set $K_N = 10, 25, 50$ for time series lengths $N = 512, 2048, 8192$, respectively. The parameters of the local linear model are estimated by ordinary least squares. For all methods the investigation is restricted to $m_{\max} = 5$.

The multivariate time series are derived from nonlinear maps of varying dimension and complexity as well as spatially extended maps. The results are given below for each system.

B. One and two Ikeda maps

The Ikeda map is an example of a discrete low-dimensional chaotic system in two variables $(x_n, y_n)$ defined by the equations [19]

$$z_{n+1} = 1 + 0.9 z_n \exp(0.4\mathrm{i} - 6\mathrm{i}/(1 + |z_n|^2)), \quad x_n = \mathrm{Re}(z_n), \quad y_n = \mathrm{Im}(z_n),$$

where Re and Im denote the real and imaginary part, respectively, of the complex variable $z_n$. Given the bivariate time series of $(x_n, y_n)$, both FNN methods identify the original vector $\mathbf{x}_n = (x_n, y_n)$ and find $\mathbf{m} = (1, 1)$ as optimal at all realizations, as shown in Table I. On the other hand, the PEM criterion finds over-embedding as optimal, but this improves slightly the prediction, which as expected improves with the increase of $N$.

TABLE I: Dimension vectors and NRMSE for the Ikeda map. Columns 2, 3 and 4 contain the embedding dimension vectors sorted by their respective frequency of occurrence. [Table values lost in extraction.]

Next we consider the sum of two Ikeda maps as a more complex and higher dimensional system. The bivariate time series are generated as

$$x_n = \mathrm{Re}(z_{1,n} + z_{2,n}), \qquad y_n = \mathrm{Im}(z_{1,n} + z_{2,n}).$$

TABLE II: Dimension vectors and NRMSE for the sum of two Ikeda maps. [Table values lost in extraction.]

The results of the Monte Carlo simulations shown in Table II suggest that the prediction worsens dramatically from that in Table I and the total embedding dimension $M$ increases with $N$. The FNN2 criterion generally gives multiple optimal $\mathbf{m}$ structures across realizations and PEM does the same but only for small $N$. This indicates that high complexity degrades the performance of the algorithms for small sample sizes. PEM is again best for predictions but overall we do not observe large differences in the three methods. An interesting observation is that although FNN2 finds two optimal $\mathbf{m}$ with high frequencies they both give the same $M$. This reflects the problem of identification, where different $\mathbf{m}$ unfold the attractor equally well. This feature cannot be observed in FNN1 because the FNN1 algorithm inspects fewer possible vectors and only one for each $M$, where $M$ can only be a multiple of $p$ (in this case $(1,1)$ for $M = 2$, $(2,2)$ for $M = 4$, etc). On the other hand, the PEM criterion seems to converge to a single $\mathbf{m}$ for large $N$, which means that for the sum of the two Ikeda maps this particular structure gives best prediction results. Note that there is no reason that the embedding dimension vectors derived from FNN2 and PEM should match as they are selected under different conditions.
Moreover, it is expected that the $\mathbf{m}$ selected by PEM always gives the lowest average multivariate NRMSE, as it is selected to optimize prediction.

TABLE III: Dimension vectors and NRMSE for the KDR map. [Table values lost in extraction.]

TABLE IV: Dimension vectors and NRMSE for the driver-response Henon system. [Table values lost in extraction.]

TABLE V: Dimension vectors and NRMSE for the lattice of 3 coupled Henon maps. [Table values lost in extraction.]

TABLE VI: Dimension vectors and NRMSE for varying coupling strength C. [Caption and values lost in extraction.]

PEM cannot distinguish the two time series and selects with almost equal frequencies vectors of the form $(m, 0)$ and $(0, m)$, giving again over-embedding as $N$ increases. Thus PEM does not reveal the coupling structure of the underlying system and picks any embedding dimension structure among a range of structures that give essentially equivalent predictions. Here FNN2 seems to detect sufficiently the underlying coupling structure in the system, resulting in a smaller total embedding dimension that gives however the same level of prediction as the larger $M$ suggested by FNN1 and slightly smaller than the even larger $M$ found by PEM.

[...] Lattices of coupled Henon maps

The last system is an example of spatiotemporal chaos and is defined as a lattice of $k$ coupled Henon maps $\{x_{i,n}, y_{i,n}\}_{i=1}^{k}$ [22] specified by the equations

$$x_{i,n+1} = 1.4 - ((1 - C)x_{i,n} + C(x_{i-1,n} + x_{i+1,n}) [\ldots]$$

[...] sample size, at least for the sizes we used in the simulations. Such a feature shows lack of consistency of the PEM criterion and suggests that the selection is led by factors inherent in the prediction process rather than the quality of the reconstructed attractor. For example the increase of embedding dimension with the sample size can be explained by the fact that more data lead to abundance of close neighbors used in local prediction models, and this in turn suggests that augmenting the embedding vectors would allow to locate the $K$ neighbors used in the model.
On the other hand, the two schemes used here that extend the method of false nearest neighbors (FNN) to multivariate time series aim at finding the minimum embedding that unfolds the attractor, but often a higher embedding gives better prediction results. In particular, the second scheme (FNN2) that explores all possible embedding structures gives consistent selection of an embedding of smaller dimension than that selected by PEM. Moreover, this embedding could be justified by the underlying dynamics of the known systems we tested. However, lack of consistency of the selected embedding was observed with all methods for small sample sizes (somehow expected due to large variance of any estimate) and for the coupled maps (probably due to the presence of more than one optimal embeddings).

In this work, we used only a prediction performance criterion to assess the quality of state space reconstruction, mainly because it has the most practical relevance. There is no reason to expect that PEM would be found best if the assessment was done using another criterion not based on prediction. However, the reference (true) values of other measures, such as the correlation dimension, are not known for all systems used in this study. Another constraint of this work is that only noise-free multivariate time series from discrete systems are encountered, so that the delay parameter is not involved in the state space reconstruction and the effect of noise is not studied. It is expected that the addition of noise would perplex further the process of selecting optimal embedding dimension and degrade the performance of the algorithms. For example, we found that in the case of the Henon map the addition of noise of equal magnitude to the two time series of the system makes the criteria select any of the three equivalent embeddings ((2,0), (0,2), (1,1)) at random. It is in the purpose of the authors to extend this work and include noisy multivariate time series, also from flows, and search for other measures to assess the performance of the embedding selection methods.

Acknowledgments

This paper is part of the 03ED748 research project, implemented within the framework of the "Reinforcement Programme of Human Research Manpower" (PENED) and co-financed at 90% by National and Community Funds (25% from the Greek Ministry of Development - General Secretariat of Research and Technology and 75% from E.U. - European Social Fund) and at 10% by Rikshospitalet, Norway.

[1] F. Takens, Lecture Notes in Mathematics 898, 365 (1981).
[2] T. Sauer, J. A. Yorke, and M. Casdagli, Journal of Statistical Physics 65, 579 (1991).
[3] H. Kantz and T. Schreiber, Nonlinear Time Series Analysis (Cambridge University Press, 1997).
[4] J. Guckenheimer and G. Buzyna, Physical Review Letters 51, 1438 (1983).
[5] M. Paluš, I. Dvořák, and I. David, Physica A: Statistical Mechanics and its Applications 185, 433 (1992).
[6] R. Hegger and T. Schreiber, Physics Letters A 170, 305 (1992).
[7] D. Prichard and J. Theiler, Physical Review Letters 73, 951 (1994).
[8] H. D. I. Abarbanel, T. A. Carroll, L. M. Pecora, J. J. Sidorowich, and L. S. Tsimring, Physical Review E 49, 1840 (1994).
[9] L. Cao, A. Mees, and K. Judd, Physica D 121, 75 (1998), ISSN 0167-2789.
[10] J. P. Barnard, C. Aldrich, and M. Gerber, Physical Review E 64, 046201 (2001).
[11] S. P. Garcia and J. S. Almeida, Physical Review E 72, 027205 (2005).
[12] Y. Hirata, H. Suzuki, and K. Aihara, Physical Review E 74, 026202 (2006).
[13] M. B. Kennel, R. Brown, and H. D. I. Abarbanel, Physical Review A 45, 3403 (1992).
[14] D. T. Kaplan, in Chaos in Communications, edited
by L. M. Pecora (SPIE - The International Society for Optical Engineering, Bellingham, Washington, 98227-0010, USA, 1993), pp. 236-240.
[15] B. Chun-Hua and N. Xin-Bao, Chinese Physics 13, 633 (2004).
[16] R. Hegger and H. Kantz, Physical Review E 60, 4970 (1999).
[17] M. Hénon, Communications in Mathematical Physics 50, 69 (1976).
[18] R. Hegger, H. Kantz, and T. Schreiber, Chaos: An Interdisciplinary Journal of Nonlinear Science 9, 413 (1999).
[19] K. Ikeda, Optics Communications 30, 257 (1979).
[20] C. Grebogi, E. Kostelich, E. Ott, and J. A. Yorke, Physica D 25 (1987).
[21] S. J. Schiff, P. So, T. Chang, R. E. Burke, and T. Sauer, Physical Review E 54, 6708 (1996).
[22] A. Politi and A. Torcini, Chaos: An Interdisciplinary Journal of Nonlinear Science 2, 293 (1992).

Academic English (Science and Engineering) lecture notes: answers to after-class exercises, including Unit 4, complete


Unit 1 Choosing a Topic
1 Deciding on a topic
Topics: Energy, Internet, Artificial intelligence
Your narrower subtopics:
Questions:
- Is the topic appropriate for a 1500-word essay? Why or why not?
- If the topic is too general, how do you narrow it down to a more manageable topic?
- Can you suggest some appropriate topics for each subject?
Enhancing your academic language
Match the words with their definitions.
1 —— g 2 —— a 3 —— e 4 —— b 5 —— c
In which aspect do the two essays differ? Text 1 illustrates how hackers or unauthorized users use one way or another to get inside a computer, while Text 2 describes the various electronic threats a computer might face.

An English essay on complex artificial intelligence


Title: The Evolution and Implications of Complex Artificial Intelligence

Artificial Intelligence (AI) has rapidly advanced in recent years, transitioning from simple algorithms to complex systems capable of intricate tasks. This evolution has raised significant questions and implications across various domains. In this essay, I will delve into the development, challenges, and potential consequences of complex AI.

Firstly, the development of complex AI stems from breakthroughs in machine learning, neural networks, and computational power. These advancements enable AI systems to analyze vast amounts of data, recognize patterns, and make decisions autonomously. Deep learning algorithms, inspired by the human brain's neural networks, have revolutionized AI capabilities, allowing for tasks like image recognition, natural language processing, and even autonomous driving.

However, with complexity comes challenges. One major concern is the interpretability of complex AI systems. Deep neural networks, while highly accurate, often operate as "black boxes," making it difficult to understand the reasoning behind their decisions. This lack of transparency raises ethical, legal, and trust-related issues, particularly in critical applications such as healthcare and criminal justice.

Moreover, the integration of AI into society poses socio-economic implications. While AI has the potential to streamline processes, increase efficiency, and improve quality of life, it also threatens job displacement and exacerbates inequality. Industries heavily reliant on manual labor are particularly vulnerable to automation, leading to unemployment and economic disruption. Addressing these challenges requires proactive measures such as retraining programs, social safety nets, and policies to ensure equitable distribution of AI's benefits.

Furthermore, the rise of complex AI brings about ethical dilemmas surrounding its use and impact. Issues such as data privacy, algorithmic bias, and autonomous weapon systems demand careful consideration. As AI systems become increasingly autonomous, they raise questions of accountability and control. Ensuring that AI operates ethically and aligns with human values necessitates robust governance frameworks, ethical guidelines, and interdisciplinary collaboration.

Despite these challenges, complex AI holds immense potential for transformative impact across various domains. In healthcare, AI-powered diagnostics and personalized treatment plans can revolutionize patient care and disease management. In education, adaptive learning platforms can cater to individual student needs, enhancing learning outcomes. Additionally, in environmental sustainability, AI-driven solutions can optimize resource allocation and mitigate climate change.

However, realizing the full potential of complex AI requires collaboration and responsible stewardship. Interdisciplinary research efforts are needed to address technical challenges, enhance interpretability, and mitigate biases. Moreover, fostering a culture of transparency, accountability, and inclusivity is essential for building trust and acceptance of AI technologies.

In conclusion, the evolution of complex AI presents both opportunities and challenges for society. While it promises to revolutionize various domains, its adoption must be accompanied by ethical considerations, regulatory frameworks, and societal safeguards. By navigating these challenges thoughtfully, we can harness the power of complex AI to create a more equitable, sustainable, and prosperous future for all.

2021 Gaokao English round-one review: Unit 3 Amazing People study plan (Oxford Yilin edition, Compulsory 2)


Unit 3 Amazing People

I. Essential language knowledge

(1) Key vocabulary: memorize by category

I. Reading vocabulary: know the meanings
1. preserve vt. preserve, protect, keep
2. swallow vt. & vi. swallow
3. companion n. companion; company
4. status n. status, identity
5. superior n. superior adj. better, higher
6. coincidence n. coincidence
7. signal n. signal vi. & vt. signal; indicate
8. murder vt. & n. murder
9. survival n. survival

II. Core vocabulary: know the forms
1. content n. content
2. disturb vt. disturb, disrupt
3. desire n. wish, desire, longing vt. desire, expect
4. whichever pron. whichever; no matter which
5. punishment n. punishment
6. various adj. various
7. apply vi. apply (for) vt. use, apply
8. outgoing adj. sociable; friendly; outgoing
9. optimistic adj. optimistic

III. Extended vocabulary: master the derivations
1. curious adj. → curiosity n. → curiously adv.
2. fortune n. wealth; luck → fortunately adv. → fortunate adj.
3. scientific adj. → science n. → scientist n.
4. breathe vi. & vt. → breath n.
5. inspire vt. inspire; encourage → inspired adj. → inspiring adj. → inspiration n.
6. organization n. → organize v. → organizer n.
7. devotion n. devotion; loyalty → devote vt. → devoted adj.
8. requirement n. → require v.
9. connection n. → connect v. → connected adj.
10. death n. → dead adj. → die vi. → dying adj.
11. explorer n. → explore vt. → exploration n.
12. discourage vt. → courage n. → encourage v. → encouragement n.

1. A kaleidoscope of "personality" words: (1) outgoing (2) optimistic (3) pessimistic (4) easy-going (5) gentle (6) stubborn
2. Nouns whose singular and plural differ in meaning: (1) content/contents (table of contents) (2) art/arts (liberal arts) (3) arm/arms (weapons) (4) brain/brains (brainpower) (5) custom/customs (the customs) (6) paper/papers (documents) (7) work/works (works of art) (8) force/forces (armed forces)
3. Ways to say "all kinds of": various; (all) sorts of; (all) kinds of; a variety of; varieties of
4. Ways to say "disturb, trouble": (1) disturb (2) trouble (3) bother (4) interrupt

(2) Key phrases: memorize and use
1. be curious about 2. set sail 3. search for 4. along with 5. be known as 6. come across (meet by chance; discover) 7. as well as 8. die of 9. in connection with 10. result in 11. pay off (bring good results) 12. star in 13. get in touch with 14. come into use 15. be fit for 16. be in control (of sth.) 17. look up to (admire) 18. of all time

1. "as...as" phrases: (1) as well as (2) as far as (3) as long as (4) as soon as (5) as much as
2. "be known + preposition" phrases: (1) be known as (2) be known for (3) be known to (4) be known by
3. Phrases meaning "lead to": (1) result in (2) lead to (3) contribute to (4) bring about

(3) Key sentence patterns: memorize and apply

II. Contextual reinforcement training

I. Fill in words in context: write the correct form of the word suggested
1. Light-hearted and optimistic, she is the sort of woman to spread sunshine to people through her smile.
2. The disturbing problem drew the attention of the government at once.
3. (2019 Tianjin) However, technology is also the application of scientific knowledge to solve a problem, touching lives in countless ways.
4. (2018 Jiangsu) It may influence consumers' trust in media and disturb the market order.
5. (2018 Zhejiang, June) Many cities with bans still allow shoppers to purchase paper bags, which are easily recycled but require more energy to produce and transport.
6. Protective measures are necessary if the city's monuments are to be preserved (preserve).
7. Students soon get discouraged (discourage) if you criticize them too often.
8. Many experts thought that there is a definite connection (connect) between citizens' heart problems and their ways of life.
9. (2018 Tianjin) The possibility that there is life on other planets in the universe has always inspired (inspire) scientists to explore the outer space.
10. She signalled (signal) to the other girls that everything was all right.

II. Derivative practice: fill in the correct form of the given word
1. Listening to his inspiring speech, we were inspired to make great efforts.
It gave us not only hope but also inspiration. (inspire)
2. He wanted to have an organization set up to help those in need, whose organizers could make its work well organized. (organize)
3. Fortunately, I won the award. It was without doubt a small fortune to me. (fortune)
4. He devoted his life to his work, and his devotion to his work is admired by everyone, including some of his devoted friends. (devote)

III. Choose words to complete the passage
look up to; in control of; come across; dream of; result in
Li Qiang was 1. dreaming of becoming an astronaut some day. The strong desire inspired him to devote himself to learning scientific knowledge. Although he had 2. come across various difficulties, he was never discouraged. He was not only optimistic about his future but also always 3. in control of himself. It was his excellent qualities that 4. resulted in his success. Nowadays, the young 5. look up to him.
get in touch with; be fit for; search for; thanks to; be curious about
He 6. has been curious about Tibet since he was a boy. In order to go on an adventure to Tibet, he took exercise every day and 7. searched for all the useful information online about Tibet. 8. Thanks to his persistence and hard work, after half a year's preparation, he thought he 9. was fit for the trip and 10. got in touch with an explorers' club.

IV. Complete the sentences
1. By the end of last weekend, we had finished learning five units.


ON THE COMPLEXITY OF COMPUTING DETERMINANTS* (Extended abstract)
ERICH KALTOFEN¹ and GILLES VILLARD²

¹ Department of Mathematics, North Carolina State University, Raleigh, North Carolina 27695-8205, kaltofen@ [...]
1 Introduction
The computation of the determinant of an n × n matrix A of numbers or polynomials is a challenge for both numerical and symbolic methods. Numerical methods, such as Clarkson's algorithm [10, 7] for the sign of the determinant, must deal with conditionedness that determines the number of mantissa bits necessary for obtaining a correct sign. Symbolic algorithms that are based on Chinese remaindering [6, 17, Chapter 5.5] must deal with the fact that the length of the determinant in the worst case grows linearly in the dimension of the matrix. Hence the number of modular operations is n times the number of arithmetic operations in a given algorithm. Hensel lifting combined with rational number recovery [14, 1] has cubic bit complexity in n, but the algorithm can only determine a factor of the determinant, namely the largest invariant factor. If the matrix is similar to a multiple of the identity matrix, the running time is again that of Chinese remaindering. The techniques developed in [32] for computing the characteristic polynomial of a sparse matrix lead to a speedup for the general, dense determinant problem. For integer matrices, the bit complexity was shown [16] to be n^{3.5+o(1)} (log ||A||)^{2.5+o(1)}, where log ||A|| measures the length of the entries in A and the exponent adjustment by "+o(1)" captures missing log factors ("soft-O"). The algorithms of [32, 16] are randomized of the Monte Carlo kind—always fast, probably correct—and can be further speeded by a Strassen/Coppersmith-Winograd sub-cubic time matrix multiplication algorithm.* Note that Clarkson's algorithm and its new variants are in the worst case quartic in n.

An entirely different method, based on an algorithm in [33], was first described for dense matrices with polynomial entries [22]. For integer matrices the resulting randomized algorithm is of the Las Vegas kind—always correct, probably fast—and has worst case bit complexity (n^{3.5} log ||A||)^{1+o(1)} and again can be speeded with sub-cubic time matrix multiplication. We give a description of this algorithm in Section 2 below. That algorithm was originally put to a different use, namely that of computing the characteristic polynomial and adjoint of a matrix without divisions, counting additions, subtractions, and multiplications in the commutative ring of entries. By considering a bilinear map with two blocks of vectors rather than a single pair of vectors, Wiedemann's algorithm can be accelerated [11, 23, 30, 31]. This technique can be applied to our fast determinant algorithm, and results in a worst case bit complexity of (n^{3+1/3} log ||A||)^{1+o(1)}, again based on standard cubic time matrix multiplication. We discuss this modification and its mathematical justification in Section 3. Serendipitously, blocking can be applied to our original 1992 division-free algorithm, and a similar improvement of the division-free complexity of the determinant is obtained (see Section 4), thus changing the status of a problem that has now been open for over 9 years.

In this extended abstract we do not consider the use of fast matrix multiplication algorithms. By using the algorithms in [13, 12] the bit complexity for the determinant of an n × n matrix with integer entries can be reduced to O(n^{2.698} (log ||A||)^{1+o(1)}), and the division-free complexity of the determinant and adjoint of a matrix over a commutative ring to O(n^{2.698}) ring operations. These exponents have purely mathematical interest and no impact for the computation of a determinant on a computer. We shall also hide the exponents of the log n and log log ||A|| factors in the "+o(1)" of the exponents. In Section 2 we shall address the impact of those factors on the practicality of our methods. In general, the precise exponents of these logarithms are dependent on the actual computational model, such as multi-tape Turing machine, logarithmic random access machine, hierarchical memory machine, etc.

2 Wiedemann's algorithm with baby steps/giant steps

Already in his seminal paper, Wiedemann proposes a method for computing the determinant of a sparse matrix [33]. The algorithm is based on a sequence of bilinear projections of the powers of the input matrix. For vectors u, v ∈ K^n, where K is an arbitrary field, and the input matrix A ∈ K^{n×n}, consider the 2n field elements a_i = u^T A^i v, 0 ≤ i ≤ 2n − 1 [...]
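To fix ideas, here is a small illustrative sketch (mine, not the paper's implementation, and without the baby-steps/giant-steps or blocking accelerations this abstract discusses): it computes the projected sequence a_i = u^T A^i v over a prime field using only matrix-vector products, recovers the minimal linear recurrence with the Berlekamp-Massey algorithm, and reads off the determinant when the recurrence has full degree n. The modulus and test matrix are arbitrary choices; the random projections can fail for unlucky u, v, or when the minimal polynomial of A is not its characteristic polynomial, a case the preconditioners in the literature handle and this sketch merely reports as failure.

```python
import random

def berlekamp_massey(s, p):
    """Shortest recurrence c with s[i] = sum_j c[j] * s[i-1-j] (mod p), p prime."""
    n = len(s)
    C = [0] * n; B = [0] * n
    C[0] = B[0] = 1
    L, m, b = 0, 0, 1
    for i in range(n):
        m += 1
        d = s[i] % p                      # discrepancy of the current recurrence
        for j in range(1, L + 1):
            d = (d + C[j] * s[i - j]) % p
        if d == 0:
            continue
        T = C[:]
        coef = d * pow(b, p - 2, p) % p   # d / b mod p
        for j in range(m, n):
            C[j] = (C[j] - coef * B[j - m]) % p
        if 2 * L <= i:
            L, B, b, m = i + 1 - L, T, d, 0
    return [(-c) % p for c in C[1:L + 1]]

def wiedemann_det(A, p):
    """Determinant of A mod p via random bilinear projections of powers of A."""
    n = len(A)
    u = [random.randrange(p) for _ in range(n)]
    w = [random.randrange(p) for _ in range(n)]  # w starts as v and becomes A^i v
    seq = []
    for _ in range(2 * n):                # a_i = u^T A^i v, i = 0..2n-1
        seq.append(sum(ui * wi for ui, wi in zip(u, w)) % p)
        w = [sum(A[i][j] * w[j] for j in range(n)) % p for i in range(n)]
    c = berlekamp_massey(seq, p)
    if len(c) < n:
        return None                       # unlucky projection or degenerate min poly
    # Recurrence of full degree gives char poly x^n - c_1 x^{n-1} - ... - c_n,
    # so det A = (-1)^n * charpoly(0) = (-1)^(n+1) * c_n (mod p).
    return (-1) ** (n + 1) * c[-1] % p

print(wiedemann_det([[2, 1], [0, 3]], 1_000_003))  # 6, det of the test matrix
```

For a sparse A each matrix-vector product costs time proportional to the number of nonzero entries, which is the point of the method; the dense products written here are only for brevity.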