On the multiresolution structure of Internet traffic traces


On distinguishing quotients of symmetric groups

The notation used is fairly standard. We use κ, λ, µ, and ν to stand for cardinals (usually infinite), and |X| for the cardinality of the set X. If Ω is any set we write Sym(Ω) for the group of all permutations of Ω (1–1 maps from Ω onto itself), with permutations acting on the right, and we write S(µ) for Sym(µ) for any cardinal µ. For g ∈ Sym(Ω) we let supp g be the support of g. If we are working in S_λ(µ)/S_κ(µ) (where S_λ(µ), S_κ(µ) are as introduced above) then we refer to sets of cardinality less than κ as small. We use overlines such as x̄ to stand for finite sequences ('tuples') (x_1, x_2, ..., x_n).

By a permutation representation or action of a group G we understand a homomorphism θ from G into Sym(X) for some set X. The representation is faithful if θ is 1–1, it is transitive if for any x, y ∈ X there is g ∈ G such that x(gθ) = y, and it is trivial if its image is the trivial group. If X is a subset (or sequence of elements) of a group G, we let ⟨X⟩ denote the subgroup generated by X. If g, h ∈ G we write g^h for the conjugate h⁻¹gh of g by h. If ḡ is a sequence of members of G and h ∈ G, we write ḡ^h for the sequence whose ith entry is g_i^h, and if ḡ, h̄ are sequences of members of G of the same length, we let ḡ ∗ h̄ be the sequence whose ith entry is g_i h_i. If ḡ_1^h = ḡ_2 for some h, then ḡ_1 and ḡ_2 are said to be conjugate. If N ≤ G and f̄ = (f_1, ..., f_n) ∈ G^n we let N.f̄ = (N f_1, ..., N f_n).

We write P(X) for the power set of the set X, and P_κ(X) for the set of subsets of X of cardinality less than κ. Then P(X) is a boolean algebra, and each P_κ(X) for κ infinite is a ring of sets. Moreover, if ℵ₀ ≤ κ < λ ≤ |X|⁺, then P_κ(X) is an ideal of P_λ(X), so we may study the quotient ring P_λ(X)/P_κ(X), which is a boolean algebra just in the case where λ = |X|⁺ (that is, where P_λ(X) = P(X)).

In the remainder of this introductory section we give an outline of the main arguments of the paper. Our analysis of the quotient groups S_λ(µ)/S_κ(µ) is carried out using certain many-sorted structures M_κλµ and N_κλµ. (There is also a simpler version M*_κλµ of M_κλµ, applicable just in the case cf(κ) > 2^ℵ₀.) These structures are devised with the object of describing the permutation action of tuples of elements of S_λ(µ), modulo small sets. The essential properties of such an n-tuple ḡ = (g_1, g_2, ..., g_n) are described by its action on the orbits of the subgroup ⟨ḡ⟩. In fact, if ḡ_1 and ḡ_2 are n-tuples of elements of S_λ(µ) then ḡ_1 and ḡ_2 are conjugate if and only if the orbits of ⟨ḡ_1⟩ and ⟨ḡ_2⟩ can be put into 1–1 correspondence in such a way that the action of ḡ_1 on each orbit of ⟨ḡ_1⟩ is isomorphic to that of ḡ_2 on the corresponding orbit of ⟨ḡ_2⟩. Similar remarks apply in the quotient group, except that we have to allow fewer than κ 'mistakes' (by passing to equivalence classes of a suitable equivalence relation). These considerations lead us to observe that what should represent ḡ in M_κλµ is a list of how many ⟨ḡ⟩-orbits there are of the various possible isomorphism types, where by 'isomorphic' here we mean 'under the action of ḡ'. Included among the sorts of M_κλµ are therefore, for each positive integer n, the family IS_n of isomorphism types of pairs (A, f̄), where f̄ is an n-tuple of …
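A small finite illustration (not from the paper, which works with infinite symmetric groups) of the conventions just fixed: permutations act on the right, g^h = h⁻¹gh, and an n-tuple is understood through the orbits of the subgroup it generates; conjugating the tuple permutes those orbits while preserving the induced actions.

```python
from itertools import product

def compose(p, q):          # x(pq) = (x p) q  -- right action
    return tuple(q[p[x]] for x in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for x, y in enumerate(p):
        inv[y] = x
    return tuple(inv)

def conjugate(g, h):        # g^h = h^{-1} g h
    return compose(compose(inverse(h), g), h)

def orbits(gens, n):
    """Orbits of the subgroup generated by gens on {0, ..., n-1}."""
    seen, orbs = set(), []
    for x in range(n):
        if x in seen:
            continue
        orbit, frontier = {x}, [x]
        while frontier:
            y = frontier.pop()
            for g in gens:
                z = g[y]
                if z not in orbit:
                    orbit.add(z)
                    frontier.append(z)
        seen |= orbit
        orbs.append(sorted(orbit))
    return orbs

# A pair (g1, g2) in Sym(6) and its conjugate pair (g1^h, g2^h).
g1 = (1, 0, 2, 3, 4, 5)          # the transposition (0 1)
g2 = (0, 1, 3, 4, 2, 5)          # the 3-cycle (2 3 4)
h  = (5, 4, 3, 2, 1, 0)          # the order-reversing permutation
pair_h = tuple(conjugate(g, h) for g in (g1, g2))
print(orbits((g1, g2), 6))       # [[0, 1], [2, 3, 4], [5]]
print(orbits(pair_h, 6))         # orbits are permuted: [[0], [1, 2, 3], [4, 5]]
```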

Foreign literature translation -- Least squares phase unwrapping in the wavelet domain

Appendix A: English original (the Chinese translation, about 3,300 characters, appears in Appendix B)

Least squares phase unwrapping in wavelet domain

Abstract: Least squares phase unwrapping is one of the robust techniques used to solve two-dimensional phase unwrapping problems. However, owing to its sparse structure, the convergence rate is very slow, and some practical methods have been applied to improve this condition. In this paper, a new method for solving the least squares two-dimensional phase unwrapping problem is presented. This technique is based on the multiresolution representation of a linear system using the discrete wavelet transform. By applying the wavelet transform, the original system is decomposed into its coarse and fine resolution levels. Fast convergence in separate coarse resolution levels makes the overall system convergence very fast.

1 Introduction

Two-dimensional phase unwrapping is an important processing step in some coherent imaging applications, such as synthetic aperture radar interferometry (InSAR) and magnetic resonance imaging (MRI). In these processes, three-dimensional information about the measured objects can be extracted from the phase of the sensed signals. However, the observed phase data are wrapped principal values, which are restricted to a 2π modulus, and they must be unwrapped to their true absolute phase values. This is the task of phase unwrapping, especially for two-dimensional problems.

The basic assumption of the general phase unwrapping methods is that the discrete derivatives of the unwrapped phase at all grid points are less than π in absolute value. With this assumption satisfied, the absolute phase can be reconstructed perfectly by integrating the partial derivatives of the wrapped phase data. In the general case, however, it is not possible to recover unambiguously the absolute phase from the measured wrapped phase, which is usually corrupted by noise or aliasing effects such as shadow, layover, etc. In such cases, the basic assumption is violated and the simple integration procedure cannot be applied, owing to the phase inconsistencies caused by the contaminations. After Goldstein et al. introduced the concept of 'residues' in the two-dimensional phase unwrapping problem of InSAR, many phase unwrapping approaches to cope with this problem have been investigated. Path-following (or integration-based) methods and least squares methods are the two most representative basic classes in this field. There have also been some other approaches such as Green's function methods, Bayesian regularization methods, image processing-based methods, and model-based methods.

Least squares phase unwrapping, established by Ghiglia and Romero, is one of the most robust techniques for solving the two-dimensional phase unwrapping problem. This method obtains an unwrapped solution by minimizing the differences between the partial derivatives of the wrapped phase data and those of the unwrapped solution. Least squares methods are divided into unweighted and weighted least squares phase unwrapping. To isolate the phase inconsistencies, a weighted least squares method should be used, which depresses the contamination effects by using weighting arrays. Green's function methods and Bayesian methods are also based on the least squares scheme, but they differ from the latter in their treatment of phase inconsistencies.
Thus, this paper concerns only the least squares phase unwrapping problem of Ghiglia's category. The least squares method is well-defined mathematically and equivalent to the solution of Poisson's partial differential equation, which can be expressed as a sparse linear equation. An iterative method is usually used to solve this large linear equation. However, a large computation time is required, and therefore improving the convergence rate is a very important task when using this method. Some numerical algorithms have been applied to this problem to improve convergence conditions.

An approach for fast convergence of a sparse linear equation is to transfer the original equation system into a new system with larger supports. Multiresolution or hierarchical representation concepts have often been used for this purpose. Recently, the wavelet transform has been investigated deeply in science and engineering fields as a sophisticated tool for the multiresolution analysis of signals and systems. It decomposes a signal space into its low-resolution subspace and the complementary detail subspaces. In our method, the discrete wavelet transform is applied to the linear system of the least squares phase unwrapping problem to represent the original system in separate multiresolution spaces. In this new transferred system, a better convergence condition can be achieved. This method was briefly introduced in our previous work, where the proposed method was applied only to the unweighted problem. In this paper, the new method is extended to the weighted least squares problem. Also, a full description of the proposed method is given here.

2 Weighted least squares phase unwrapping: a review

Least squares phase unwrapping obtains an unwrapped solution by minimizing the $L^2$-norm between the discrete partial derivatives of the wrapped phase data and those of the unwrapped solution function. Given the wrapped phase $\psi_{i,j}$ on an $M \times N$ rectangular grid ($0 \le i \le M-1$, $0 \le j \le N-1$), the partial derivatives of the wrapped phase are defined as

$$\Delta^x_{i,j} = W\{\psi_{i+1,j} - \psi_{i,j}\}, \qquad \Delta^y_{i,j} = W\{\psi_{i,j+1} - \psi_{i,j}\} \tag{1}$$

where $W$ is the wrapping operator that wraps the phase into the interval $[-\pi, \pi]$. The differences between the partial derivatives of the solution $\phi_{i,j}$ and those in (1) can be minimized in the weighted least squares sense by differentiating the sum

$$\sum_{i,j} w^x_{i,j}\left(\phi_{i+1,j} - \phi_{i,j} - \Delta^x_{i,j}\right)^2 + \sum_{i,j} w^y_{i,j}\left(\phi_{i,j+1} - \phi_{i,j} - \Delta^y_{i,j}\right)^2 \tag{2}$$

with respect to $\phi_{i,j}$ and setting the result to zero. In (2), the gradient weights $w^x_{i,j}$ and $w^y_{i,j}$ are used to prevent phase values corrupted by noise or aliasing from degrading the unwrapping, and are defined by

$$w^x_{i,j} = \min\left(w^2_{i+1,j},\, w^2_{i,j}\right), \qquad w^y_{i,j} = \min\left(w^2_{i,j+1},\, w^2_{i,j}\right), \qquad 0 \le w_{i,j} \le 1 \tag{3}$$

The weighted least squares phase unwrapping problem is to find the solution $\phi_{i,j}$ that minimizes the sum in (2). The initial weight array $\{w_{i,j}\}$ is user-defined, and some methods for defining these weights are presented in [1, 11]. When all the weights $w_{i,j} = 1$, the above equation reduces to the unweighted phase unwrapping problem. Since the weight array is related to the exactitude of the resulting unwrapped solution, it must be defined properly. In this paper, however, it is assumed that the weight array is already defined for the given phase data, and how to define it is not covered here.
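To make equations (1)-(3) concrete, here is a minimal NumPy sketch (not from the paper; the array layout, the zero boundary handling and the function names are assumptions) that computes the wrapped phase gradients, the gradient weights and the weighted sum of equation (2).

```python
import numpy as np

def wrap(a):
    """Wrapping operator W: map values into [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def wrapped_gradients(psi):
    """Equation (1): forward differences of the wrapped phase, wrapped again.
    Axis 0 is i (rows, size M), axis 1 is j (columns, size N);
    the last row/column of differences is left at zero."""
    dx = np.zeros_like(psi)
    dy = np.zeros_like(psi)
    dx[:-1, :] = wrap(psi[1:, :] - psi[:-1, :])   # Delta^x_{i,j}
    dy[:, :-1] = wrap(psi[:, 1:] - psi[:, :-1])   # Delta^y_{i,j}
    return dx, dy

def gradient_weights(w):
    """Equation (3): w^x_{i,j} = min(w^2_{i+1,j}, w^2_{i,j}), similarly for y."""
    w2 = w ** 2
    wx = np.zeros_like(w)
    wy = np.zeros_like(w)
    wx[:-1, :] = np.minimum(w2[1:, :], w2[:-1, :])
    wy[:, :-1] = np.minimum(w2[:, 1:], w2[:, :-1])
    return wx, wy

def ls_energy(phi, dx, dy, wx, wy):
    """Equation (2): the weighted least squares objective for a candidate phi."""
    ex = wx[:-1, :] * (phi[1:, :] - phi[:-1, :] - dx[:-1, :]) ** 2
    ey = wy[:, :-1] * (phi[:, 1:] - phi[:, :-1] - dy[:, :-1]) ** 2
    return ex.sum() + ey.sum()
```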
Only the convergence rate issue of the weighted least squares phase unwrapping problem is considered here. The least squares solution to this problem yields the following equation:

$$w^x_{i,j}(\phi_{i+1,j} - \phi_{i,j}) - w^x_{i-1,j}(\phi_{i,j} - \phi_{i-1,j}) + w^y_{i,j}(\phi_{i,j+1} - \phi_{i,j}) - w^y_{i,j-1}(\phi_{i,j} - \phi_{i,j-1}) = \rho_{i,j} \tag{4}$$

where $\rho_{i,j}$ is the 'weighted phase Laplacian' defined by

$$\rho_{i,j} = w^x_{i,j}\Delta^x_{i,j} - w^x_{i-1,j}\Delta^x_{i-1,j} + w^y_{i,j}\Delta^y_{i,j} - w^y_{i,j-1}\Delta^y_{i,j-1} \tag{5}$$

The unwrapped solution $\phi_{i,j}$ is obtained by iteratively solving the following equation:

$$\phi_{i,j} = \frac{w^x_{i,j}\,\phi_{i+1,j} + w^x_{i-1,j}\,\phi_{i-1,j} + w^y_{i,j}\,\phi_{i,j+1} + w^y_{i,j-1}\,\phi_{i,j-1} - \rho_{i,j}}{w^x_{i,j} + w^x_{i-1,j} + w^y_{i,j} + w^y_{i,j-1}} \tag{6}$$

Equation (4) is the weighted and discrete version of Poisson's partial differential equation, $\nabla^2\phi = \rho$. By concatenating all the nodal variables $\phi_{i,j}$ into one $MN \times 1$ column vector $\phi$, the above equation is expressed as a linear system

$$A\phi = \rho \tag{7}$$

where the system matrix $A$ is of size $K \times K$ ($K = MN$) and $\rho$ is a column vector of the $\rho_{i,j}$. That is, the solution $\phi$ of the least squares phase unwrapping problem can be obtained by solving this linear system: for given $A$ and $\rho$, which are defined from the weight array $\{w_{i,j}\}$ and the measured wrapped phase $\psi_{i,j}$, the unwrapped phase has the unique solution $\phi = A^{-1}\rho$. But since $A$ is a very large matrix, the direct inverse operation is practically impossible. The structure of the system matrix $A$ is very sparse and most of the off-diagonal elements are zero, which is evident from (4).

Direct methods based on the fast Fourier transform (FFT) or the discrete cosine transform (DCT) can be applied to solve the unweighted phase unwrapping problem. However, in the weighted case, iterative methods should be adopted. The classical iterative method for solving the linear system is Gauss–Seidel relaxation, which solves (6) by simple iteration until it converges. However, this method is not practical owing to its extremely slow convergence, which is caused by the sparse characteristics of the system matrix $A$. Some numerical algorithms such as the preconditioned conjugate gradient (PCG) method or the multigrid method have been applied to implement weighted least squares phase unwrapping. The PCG method converges rapidly on unweighted phase unwrapping problems or on weighted problems that do not have large phase discontinuities. However, on data with large discontinuities, it requires many iterations to converge. The multigrid method is an efficient algorithm for solving a linear system and performs much better than the Gauss–Seidel method and the PCG method in solving the least squares phase unwrapping problem. However, in the weighted case, the method needs an additional weight restriction operation. This operation is very complicated and, although it is designed properly in some books, there may be some errors during the restriction.
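The Gauss–Seidel relaxation of equation (6) is simple enough to spell out. The sketch below (an illustration, not the paper's implementation) reuses the arrays from the previous snippet; it also makes the paper's point visible: each sweep only moves information between neighbouring grid points, so global low-frequency structure propagates very slowly.

```python
import numpy as np

def weighted_laplacian(dx, dy, wx, wy):
    """Equation (5): the weighted phase Laplacian rho_{i,j}
    (terms with negative indices are simply absent at the boundary)."""
    rho = wx * dx + wy * dy
    rho[1:, :] -= wx[:-1, :] * dx[:-1, :]
    rho[:, 1:] -= wy[:, :-1] * dy[:, :-1]
    return rho

def gauss_seidel(phi, rho, wx, wy, sweeps=100):
    """Iterate equation (6) in place; grid points whose four weights are all
    zero are skipped."""
    M, N = phi.shape
    for _ in range(sweeps):
        for i in range(M):
            for j in range(N):
                wxp = wx[i, j]                                # w^x_{i,j}
                wxm = wx[i - 1, j] if i > 0 else 0.0          # w^x_{i-1,j}
                wyp = wy[i, j]                                # w^y_{i,j}
                wym = wy[i, j - 1] if j > 0 else 0.0          # w^y_{i,j-1}
                denom = wxp + wxm + wyp + wym
                if denom == 0.0:
                    continue
                num = (wxp * (phi[i + 1, j] if i + 1 < M else 0.0)
                       + wxm * (phi[i - 1, j] if i > 0 else 0.0)
                       + wyp * (phi[i, j + 1] if j + 1 < N else 0.0)
                       + wym * (phi[i, j - 1] if j > 0 else 0.0)
                       - rho[i, j])
                phi[i, j] = num / denom
    return phi
```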
There are other approaches to solving a sparse linear system problem efficiently. In these approaches, a system is converted into another, equivalent system with a better convergence condition. The convergence speed of the system is characterised by the system matrix $A$. The structure of the system matrix of the least squares phase unwrapping problem is very sparse. In the iterative solving methods, the local connections between the nodal variables slow down the progress of the solution during iteration and result in a low convergence rate. In other words, the Gauss–Seidel method extracts the local high-frequency information of the surface from only the four neighbours of each nodal value. Thus, the global low-frequency surface information propagates very slowly, which is the main reason for the low convergence rate of the sparse problem. The computation speed of the least squares phase unwrapping problem is dominated by the low-frequency portions of the problem, and to obtain fast convergence, the low-frequency portions of the problem should be extracted. This concept is based on the multiresolution representation, in which a signal is represented at different resolutions, i.e. coarse and fine resolution levels. Separately solving the low-frequency portions at the coarse resolution level will speed up the overall system convergence rate.

The wavelet transform is the most sophisticated method to represent a system in the multiresolution sense. In this paper, an efficient method to solve the least squares phase unwrapping problem is proposed, using the discrete wavelet transform (DWT). This is an extension of the work presented in the literature. Some literature on wavelet approaches to the solution of partial differential equations can be found; those studies deal with the PDE structure itself in the wavelet domain to solve the problem efficiently. This paper, however, applies the wavelet transform to reform the structure of the linear system extracted from the PDE, and does not deal with the PDE problem itself.

3 Conclusions

An efficient method to solve the weighted least squares two-dimensional phase unwrapping problem has been presented. A biorthogonal wavelet transform is applied to transfer the original system into a new, equivalent system in the wavelet domain, with its low-frequency and high-frequency portions decomposed. Separately solving the low-frequency portion of the new system speeds up the overall system convergence rate. The convergence improvement has been shown by experiments with some synthetic phase images. The proposed method provides better results than those obtained by using the Gauss–Seidel relaxation and the multigrid method. Another advantage of this method is that the new system is mathematically equivalent to the original matrix, so that its solution is exact with respect to the original equation for both the weighted and unweighted least squares phase unwrapping problems.

Appendix B: Chinese translation (excerpt)

Least squares phase unwrapping in the wavelet domain. Abstract: Least squares phase unwrapping has been one of the key techniques for solving the two-dimensional phase unwrapping problem. …
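As a small illustration of the coarse/fine decomposition the paper relies on, the sketch below implements a single-level 2-D Haar analysis and synthesis by hand (the paper uses biorthogonal wavelets; Haar is an assumption chosen only because it fits in a few lines). The LL block is the coarse approximation where the slowly converging low-frequency information is concentrated.

```python
import numpy as np

def haar2_forward(x):
    """Split an array with even dimensions into (LL, (LH, HL, HH))."""
    a = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)   # rows: low-pass
    d = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)   # rows: high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)  # columns: low-pass of low-pass
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, (lh, hl, hh)

def haar2_inverse(ll, details):
    """Exact inverse of haar2_forward."""
    lh, hl, hh = details
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    d[:, 0::2], d[:, 1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2, :], x[1::2, :] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return x

phase = np.random.rand(8, 8)
ll, details = haar2_forward(phase)
assert np.allclose(haar2_inverse(ll, details), phase)   # perfect reconstruction
```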

Managing Resolution in Multi-Resolution Databases

Managing Resolution in Multi-Resolution Databases

David Skogan
1 Department of Computer Science, Keele University, ST5 5BG, United Kingdom (Visiting PhD Student 2000)
2 Department of Informatics, University of Oslo, P.O. Box 1080 Blindern, N-0316 Oslo, Norway. davids@ifi.uio.no, http://www.ifi.uio.no/~davids/
3 SINTEF Telecom and Informatics, P.O. Box 124 Blindern, N-0314 Oslo, Norway. david.skogan@informatics.sintef.no

Abstract. Resolution is an integral part of all forms of data. This paper presents a framework for quantifying, reasoning with and managing multi-resolution object-based data. The framework differs from other approaches in that it handles resolution at the object level explicitly. It defines a basic set of operations for changing an object's resolution that can be utilized for generalization, consistency control and integration purposes.

Keywords: multi-resolution, multiple representation, model generalization, data management

1 Introduction

All forms of data are captured at a certain resolution. Unfortunately, resolution is mostly ignored in object-based spatial databases, where all objects apparently seem to be at a single resolution, i.e. they are stored at a fixed precision. This is because of the difficulty in quantifying resolution for object-based data and the lack of a suitable framework for integrating variable-resolution objects. Partial solutions such as adding accuracy information to the data have been proposed, but methods to utilize this information haven't been developed [8]. Resolution is, however, far too important to ignore and should be an integral part of data management, utilized in generalization and consistency control procedures as well as in data integration.

This paper presents a framework for managing multi-resolution data in an object-based federated database system. The framework describes how to quantify and measure the resolution of individual objects and how to aggregate resolution at the database level. It also defines basic operations for coarsening and refining objects. The framework consists of four components: 1) the federated multi-resolution database management system for organizing databases and generalization functions; 2) the resolution space for organizing multiple partitions of space; 3) the multi-resolution type for representing objects at different resolutions, with operations for changing, assessing and comparing objects' resolution; and 4) methods for aggregating resolution for comparing and measuring resolution at the database level.

This work is a contribution to model-based generalization and to the emerging field of multi-resolution databases. Related works are Stell and Worboys' Stratified Map Space concept [17], Devogele et al.'s Multi-Scale Database [4], Kilpeläinen's Multiple Representation database [10], Timpf's Hierarchical Structures in Map Series [18] and Spaccapietra et al.'s Multi-Representation Multi-Resolution approach [16].

The paper is organized as follows: section 2 defines the Federated Multi-Resolution DBMS approach, 3 introduces the concept of resolution and resolution change, 4 defines spatial and thematic resolution at the object level, and 5 covers resolution at the database level. Finally, 6 concludes the paper.

2 Federated Multi-Resolution DBMS

This section defines a way of organizing multi-resolution databases according to a set of generalization functions. It considers how objects are generalized and addresses some of the deficiencies of earlier multi-resolution and generalization approaches.
Generalization is usually defined as the process of transforming a representation into a less detailed one [21]. This work focuses on model-oriented generalization, and the main objective is to control the reduction in resolution. Note that this is slightly different from the original objective of model-oriented generalization, which was to achieve data reduction [20]. A federated database system approach is chosen instead of the sequential organization preferred by other authors [3, 4, 11].

Assume that data are managed in a system of cooperating autonomous databases that are related through a set of generalization functions. The system is federated [15] in that the databases store different versions of the same information and cooperate by exchanging information. The generalization functions define how data may flow from one database to another and impose a graph structure on the system of databases. A database can have many input data streams and export data to many databases. The databases share a global schema that defines a set of multi-resolution types, formally defined in Sec. 4. For now we assume that objects of a multi-resolution type can be represented at multiple resolutions. A database is considered a multi-resolution database (MR-database), or a variable-resolution database, if it is capable of storing objects at multiple resolutions, and a system of databases is a multi-resolution system if the databases represent different views of the same information. Relative resolution differences between the databases exist. A multi-resolution system of databases implies that multiple objects represent the same real world entity, i.e. an entity can be represented by objects stored in different databases at various resolutions. Note that two objects representing the same entity cannot be stored in the same database. An example of such a system is shown in Fig. 1 (left). Each database is a multi-resolution database which represents a different view of the real world.

An object-oriented approach is assumed where the fundamental element of data is the object [14, 1]. An object is an abstraction of a real world phenomenon (virtual or real entity) and has structural and behavioral aspects defined by its type. The structural aspect is governed by the spatial and thematic attributes and the behavioral aspect by the operations.

Fig. 1. Federated Multi-Resolution DBMS (left) and a generalization function (right)

Figure 1 (right) shows two databases D_i and D_j. A function F_ij: D_i → D_j defines a generalization data flow from D_i to D_j. When the function is executed, candidate objects are selected from D_i and transformed into target objects in D_j. The generalization function defines a number of selection and transformation pairs. The criteria for selection may depend on the objects' types, attribute values and on their relations to other objects. The transformation operations must in principle adhere to the rule that the output objects are coarser than or equal to the input objects. Two variants of the coarsening operation are considered, i.e. coarsening of a single object and coarsening of several objects by amalgamating the input objects into a single output object.

ô = coarsen(o)    (1)
ô = coarsen(o1, o2, ..., on)    (2)

The first coarsen operation works by coarsening one or more of o's attributes, producing an object ô that has lower resolution than o. Here ô can be of the same type as o or it may be of another type. Note that the two objects describe the same real world entity. The second coarsen operation merges two or more objects into an aggregated object ô by amalgamating the objects' attributes.
Intuitively, the aggregated object has lower resolution than its constituent objects. An object can therefore in principle undergo two forms of resolution change, called intra-type resolution change, where the object keeps its type, and inter-type resolution change, where the object's type is changed.

Storing relationships between source and target objects based on the coarsening operations gives an indication of the relative resolution differences between objects. This has been treated by several authors [4, 11, 16], where the proposed solutions have been to define a set of types for each resolution level (database), model inter-resolution relationships between the types at the different resolution levels and create an integrated schema. However, modeling different types for each resolution level is restrictive in that users have to define a number of artificial types that describe the same entity. This approach is static in that if a new resolution level is required then a new set of types must be defined. The approach further ignores the fact that a database in reality stores objects at multiple resolutions.

Many existing generalization operations fail to take into account the source object's initial resolution and the output object's required resolution. Most approaches apply general coarsening operations to all objects so that the resulting objects just have coarser resolution. For example, applying the same line simplification operation to all objects with attributes of line type, to reduce the number of points used to represent the line, may easily lead to over- or under-generalization. This approach also fails to take into account the objects' types, e.g. that a road may require different generalization operations than a river. A solution to this problem is to associate type-specific and resolution-aware coarsening operations with each type.

The federated multi-resolution DBMS is a flexible way of organizing multi-resolution databases. New databases can be defined by adding the appropriate generalization functions. The generalization functions reduce the resolution of selected objects by applying coarsening operations. A coarsening operation results either in an intra-type resolution change or an inter-type resolution change. In the first the object keeps its type, whereas in the latter the object or objects are transformed into an object of another type. The next section looks at how resolution can be quantified at the object level.
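As a sketch of how such a generalization function might be organized (the class and function names below are hypothetical, not from the paper), one can pair selection predicates with coarsening transformations and enforce the rule that outputs are coarser than or equal to their inputs.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class MRObject:
    kind: str                  # e.g. "Road", "Building"
    attrs: dict
    granularity: float         # in metres; larger means coarser

Selection = Callable[[MRObject], bool]
Transformation = Callable[[MRObject], MRObject]

@dataclass
class GeneralizationFunction:
    """F_ij: a list of (selection, transformation) pairs from D_i to D_j."""
    pairs: List[Tuple[Selection, Transformation]]

    def apply(self, source: List[MRObject]) -> List[MRObject]:
        target = []
        for select, transform in self.pairs:
            for obj in filter(select, source):
                out = transform(obj)
                # outputs must be coarser than or equal to the inputs
                assert out.granularity >= obj.granularity
                target.append(out)
        return target

def coarsen_road(o: MRObject) -> MRObject:
    # intra-type resolution change: same type, coarser granularity
    return MRObject("Road", {**o.attrs, "simplified": True}, granularity=50.0)

F_ij = GeneralizationFunction([(lambda o: o.kind == "Road", coarsen_road)])
roads = [MRObject("Road", {"name": "E6"}, granularity=1.0)]
print(F_ij.apply(roads))
```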
3 Resolution

This section discusses resolution in general and introduces the concepts of spatial and thematic resolution for object-based data. It also covers an intuitive approach to quantifying resolution and operations for coarsening and refining spatial and thematic attributes.

The term resolution is used to indicate the level of detail that can be discerned in an observation of some data. The resolution of an object can be qualified by relative and vague predicates such as high or low, indicating detailed data and coarser data respectively. Or it can be given as absolute values, e.g. 1 or 100 metre resolution. This leads to the definition that resolution is an assertion or a measure of the level of detail, or the information content, of an object or a database with respect to some reference frame. Granularity is often used as a synonym for resolution. The resolution of an object is a statement about the level of detail stored by the object, whereas the resolution of a database is a statement about the level of detail of the objects stored.

Spatial resolution in regular gridded data such as images and raster structures is fairly straightforward. Here the pixels are the smallest objects, and assertions such as 300 dpi or 10 metre resolution hold for all the pixels in the structure. It is natural to talk about spatial resolution as the object's size compared to the real world and thematic resolution as the number of bits associated per pixel [19]. Intuitively we should be able to make similar statements about object-based data, e.g. this building object has a 1 metre spatial resolution and this road object has 100 metre spatial resolution, where it is well-defined what spatial resolution means. Operations to decrease the resolution of the objects to 50 metre resolution should result in a building that is at 50 metre resolution and a road that still is at 100 metre resolution. The three types that are often used to describe an object's spatial characteristics are point, curve and region. The definitions of these types vary. Assuming an underlying point-set topology [5], a point can be defined as a closed point set with only one member, a curve as a closed continuous point set with empty interior and a region as a closed point set with connected non-empty interior. From a mathematical point of view a point is considered to have no spatial extent and a curve to have zero width. But this assumption requires an underlying space that is continuous and has infinite resolution. If we take a discrete view, then a spatial object must have a resolution, often given by restrictions in the data capturing techniques. Following the definition of spatial resolution for the regular case, the resolution of a spatial object is a measure of the resolution of its components. A point that occupies a region of 1x1 metre has a resolution of 1 metre. A curve at 1 m resolution has all component points at a resolution of less than or equal to 1 m, and similarly for the resolution of a region. This duality between the wish for a well-defined continuous mathematical view of the world and a necessarily practical discrete representation is seldom captured in object-based data models.

Thematic resolution can be defined similarly to spatial resolution by qualifying a value with a resolution statement. For example, an object's age can have a resolution of 1 year and a date usually has a finest resolution of 1 day.
The resolution statement consists of a number followed by a string indicating a reference system. The number specifies the extent or the granularity of the value within the reference system. The value 2001 with resolution statement 2 years indicates that the value 2001 actually spans two years. A value can be transformed to a coarser or finer reference system, e.g. from day to month or from month to day, which results in a change in the object's resolution. Relationships between reference systems such as hierarchical value domains can be defined, where groups of values are aggregated into single values at a coarser resolution. Examples of such hierarchies are second, hour, day, month, year for a temporal type, and parish, county, region, country for spatial types. Similar value hierarchies are much used in the field of data mining [6, 12].

Semantic resolution, however, is more difficult to quantify. The semantics or meaning of an object is indicated by its type, which represents or models a concept. Semantic resolution is therefore related to the generality or the granularity of the type's concept, i.e. if the object's type corresponds to a general concept then its semantic resolution is low or coarse, and if the object's type corresponds to a fine-grained concept its semantic resolution is high. Semantic resolution will not be covered in this work.

By coarsening an object its resolution is reduced and by refining an object its resolution is increased. Three operations for resolution change are introduced in the following example. Assume an integer object with a value of 42 that has 1 metre resolution R1. Let's say we wish to decrease the resolution of the value to 10 metre resolution R2. A coarsen operation returns a coarsened value of 40. This value intuitively corresponds to an interval [35, 44] in R1. Since intervals can be problematic to work with, a single value is instead assigned to represent the coarsened value at R2. A refine operation can now be used to refine 40 to a single value in R1. This operation arbitrarily selects a single value, e.g. 37, from the ten possible values in the interval. An embed operation returns the image (interval or region) of the object at a higher resolution. An important consistency rule is that the original value of 42 must lie inside the interval returned by the embed operation of its coarsened value 40, i.e. [35, 44] = embed(40). Figure 2 shows examples of four types: real, point, curve and region. The objects are first coarsened then refined. Column three shows the embedded image and the refined object. Column four shows that the original object in fact lies inside the embedded version.

Fig. 2. Examples of coarsening and refining primitive types

This section introduced the concept of resolution and presented some intuitive ideas for defining resolution of object-based data, i.e. spatial and thematic resolution, which is related to an object's spatial and thematic attributes. Operations for changing an object's resolution were presented, including both coarsening and refining operations. The next section gives a formal treatment of these concepts.
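The integer example above translates almost directly into code. The following sketch fixes one plausible rounding convention (an assumption; the text only gives the resulting values) for coarsen, embed and refine between the 1-metre resolution R1 and the 10-metre resolution R2, and checks the consistency rule.

```python
import random

STEP = 10                                   # R2 is ten times coarser than R1

def coarsen(v):                             # R1 -> R2: nearest multiple of 10
    return int(round(v / STEP)) * STEP      # coarsen(42) == 40

def embed(v_coarse):                        # R2 -> interval of R1 values
    return range(v_coarse - STEP // 2, v_coarse + STEP // 2)   # embed(40) == 35..44

def refine(v_coarse):                       # R2 -> an arbitrary single R1 value
    return random.choice(list(embed(v_coarse)))                # e.g. 37

v = 42
assert coarsen(v) == 40
assert v in embed(coarsen(v))               # the consistency rule
assert refine(coarsen(v)) in embed(coarsen(v))
```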
4 Object Resolution

This section presents the central components in the framework that are responsible for representing resolution and controlling intra-type resolution change, i.e. the resolution space and the multi-resolution type. Intra-type resolution change means that an object changes its resolution within the framework of its type. Traditional data types are not resolution aware. A type therefore has to be extended to a multi-resolution type (MR-type). The MR-type integrates an ordinary type with a resolution space and a set of resolution-aware operations.

The concepts of resolution and resolution space defined by Worboys [23, 22] are essential in defining an MR-type. A resolution R defines a finite partition of a space S. Here S can be any 1-d, 2-d, ..., n-d space where R defines the number of elements that can be discerned. The partition divides the space into a number of non-overlapping elements (also called resels). There are many ways of partitioning a space, and the set of all partitions R of a space S can be organized as a partial order. Let R1, R2 ∈ R; then the partial order ≤r is defined as:

R1 ≤r R2 ⇔ ∀x ∈ R1, ∃y ∈ R2 : x ⊆ y.    (3)

Hence, if R1 ≤r R2, then each element in R2 must completely contain one or more elements from R1; otherwise the two resolutions are incomparable. Worboys further shows that R with the partial order is a lattice, and defines a resolution space to be any sublattice of R. In the following, Worboys' concepts of resolution and resolution space will be used to define object resolution and multi-resolution types.

A multi-resolution type (MR-type) is a type that defines a multi-resolution value domain. A definition of MR-types for the relational model was given by Read et al. [13]. They defined basic and structured MR-types as user-defined types with approximation operations to coarsen and relate multi-resolution values. In this work an MR-type is defined in the context of an object-oriented model and combines a resolution space with basic operations. A basic MR-type T is defined as a triple (T, R_T, O), where T is a basic type, R_T is the resolution space of this type and O is the set of operations for coarsening and refining objects of this type within R_T. A value of an MR-type can be represented as a pair ⟨R, V⟩, where R is a reference to a resolution in R_T and V is the value at that resolution. This value is called a resolution value or a value with resolution.
V must correspond to one or more elements (resels) in R, constrained by T. Any type with a value domain that can be partitioned and represented by a resolution space can be made into an MR-type, which includes integer, real, enumeration, and spatial types. If T is a point then V refers to one element in R; if T is a region then V must refer to a connected set of elements, etc. An identification scheme that corresponds with the granularity of the elements of the resolution must be chosen. For fine-grained regular spatial resolutions a coordinate-based scheme would probably be cost efficient, whereas an element-identifier scheme would be better at a coarser resolution. Note that different MR-types can share the same resolution space. The geometric types are examples of that, because point, curve and region objects all occupy the same space and therefore naturally share the same resolution space.

Figure 3 shows an example of a 1-dimensional resolution space for an integer MR-type. The finest resolution R0 partitions the space into 1000 different elements. If the space corresponds to a height of 1000 metres then it follows that one element has a resolution statement of 1 metre. Here R1 is 10 times coarser than R0 and R2 is 10 times coarser than R1. The value ⟨R2, 300⟩ corresponds to an interval in R1, [250, 340].

The above example shows the importance that the resolutions of two resolution values must match if we are to perform value comparison operations or calculations. For example, we can define that ⟨R2, 300⟩ = ⟨R1, 250⟩, since ⟨R1, 250⟩ is enclosed in the interval of ⟨R2, 300⟩. Care must therefore be taken either to align the resolution of objects before performing operations or, alternatively, to define resolution-aware operations.

Fig. 3. Example of a 1-dimensional resolution space for an integer type

The resolution of an object is given by the object's resolution value ⟨R, V⟩.
R corresponds to a partition of a space, but R does not tell us anything about the actual resolution of the object itself, unless R is regular. Let R_i be a partition of a space S. In an irregular partition each element e ∈ R_i will cover a different area in S. An operation granularity can be defined for each e that indicates the granularity of the element, or the area the element covers. For example, if the space is a 1-dimensional 4000 metre line and R is a regular partition of 1000 elements, then each element has a granularity of 4 metres. Note that granularity(S) = Σ_{e ∈ R} granularity(e). If R_i is part of a resolution space where R_{i-1} ≤r R_i ≤r R_{i+1}, then e will have a relation to its parent element in R_{i+1} and to its component elements in R_{i-1}. This gives rise to the definition of the operations parent and components for each element. Hence, parent(e, R_{i+1}) returns the parent element of e at a coarser resolution and components(e, R_{i-1}) returns the components of e at a higher resolution.

Figure 4 shows a hierarchical enumerated value domain which forms a thematic resolution space. Here parent(accommodation, R3) = {building-category} and components(accommodation, R1) = {hotel, guest-house, b&b}. Since this is not a linearly ordered space, we need a mechanism to indicate the granularity of the elements. One way is to assign a number to each element at the finest resolution that indicates its granularity at R1. These can then be used to create the aggregated granularity of the elements at the lower resolutions. Note that these numbers only have to be assigned at the finest resolution. Next we will see that the underlying notion that each element has a granularity results in a mechanism for quantifying and reasoning with an object's resolution.

Fig. 4. Example of a thematic resolution space

The granularity of the elements of an object's resolution value ⟨R, V⟩ can be used to define the object's actual resolution, also called the object's granularity to distinguish it from the object's resolution R. If V contains one element then the granularity of the object is the same as the granularity of the element. If |V| > 1 then a granularity measure called a g-measure can be computed based on the granularity of the elements of V. The g-measure is application dependent, and candidate measures are the minimum, average and maximum granularity of the elements of the object. For the rest of this paper the maximum granularity of the elements in V is chosen. A resolution value can now be thought of as a triple ⟨R, V, G⟩ where G is the g-measure of the object. The g-measure allows us to assess the difference in granularity of two objects of the same MR-type even if they are represented at incomparable resolutions.
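A tiny sketch of the thematic resolution space of Fig. 4 (the dictionary encoding and the summation of child granularities are assumptions; the text only requires that granularities be assigned at the finest level and aggregated upward) with parent, components and a maximum-based g-measure:

```python
PARENT = {                       # child element -> parent element one level up
    "hotel": "accommodation", "guest-house": "accommodation", "b&b": "accommodation",
    "accommodation": "building-category",
}
GRANULARITY = {"hotel": 1, "guest-house": 1, "b&b": 1}   # assigned at the finest level only

def parent(e):
    return PARENT[e]

def components(e):
    return {c for c, p in PARENT.items() if p == e}

def granularity(e):
    if e in GRANULARITY:
        return GRANULARITY[e]
    return sum(granularity(c) for c in components(e))     # aggregate upward

def g_measure(V):
    """Maximum granularity of the elements of V (the measure chosen in the text)."""
    return max(granularity(e) for e in V)

assert parent("accommodation") == "building-category"
assert components("accommodation") == {"hotel", "guest-house", "b&b"}
assert granularity("accommodation") == 3
assert g_measure({"hotel", "accommodation"}) == 3
```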
Four operations are, as a minimum, defined for each MR-type, i.e. coarsen, embed, refine and transform. Let o be an object of an MR-type with resolution value ⟨R1, V1⟩, let R1, R2 and R3 ∈ R_T and let R1 ≤r R2. The coarsen(o, R2) operation returns an object at a coarser resolution, where the coarsened object ô = parent(e1, R2) ∪ ... ∪ parent(en, R2) for all e_i ∈ V1. It should be ensured that ô's resolution value ⟨R2, V2⟩ corresponds to T and that o ⊆ embed(ô, R1). The embed(ô, R1) operation returns a region object by embedding the elements of a coarse resolution into a finer resolution. The region object is given by ō = components(e1, R1) ∪ ... ∪ components(en, R1) for all e_i ∈ V2. The refine(o, R1) operation returns an object at a finer resolution. Any object ǒ that is a T where ǒ ⊆ embed(ô, R1) can be chosen. The last operation is transform(o, R3), which returns an object at an incomparable resolution, i.e. R1 ≰r R3 and R3 ≰r R1. There are three ways to define this operation:

1. Coarsen/refine: o' = refine(coarsen(o, lub(R1, R3)), R3).
2. Refine/coarsen: o' = coarsen(refine(o, glb(R1, R3)), R3).
3. Project the elements of o into R3 according to some user-defined rule.

Note that the three transformation methods will most likely produce different results. Figure 5 shows an example of a resolution space and operations on objects in that space.

Fig. 5. Coarsening (c), refining (r) and transformation (t) operations

The four minimum operations all require an a priori defined resolution space. In many situations this is not available, especially when generalizing a database for the first time. A coarsening operation that creates elements at a lower resolution can be defined and associated with the MR-type. Such an operation could create a buffer around the existing object and create corresponding elements at a lower resolution. Note that type-specific operations should be defined. For example, buildings should keep their basic rectangular shape and roads should keep their main bends. Generalization is, however, a global task which involves coarsening, detection and resolving of conflicts for many objects at the same time; see Sec. 5.

A structured MR-type is created by combining the resolution spaces of the type's multi-resolution attributes. A structured type defines a sequence of n attributes ⟨a1, a2, ..., an⟩, where each attribute a_i = (name_i, m_i, T_i) is identified by a name, has a multiplicity statement and an MR-type. An object of a structured type can now be represented as a sequence of resolution values ⟨(R1, V1), (R2, V2), ..., (Rn, Vn)⟩, where R_i ∈ R_{T_i} and R_{T_i} is the resolution space of T_i. Let f_i, g_i ∈ R_{T_i} represent the resolution of the i-th attribute of objects o_f and o_g respectively. The combined resolution space R_T and the combined resolution comparison relation ≤r are defined as

R_T = R_{T_1} × R_{T_2} × ... × R_{T_n}    (4)
o_f ≤r o_g ⇔ ∀i ∈ [1, n] : f_i ≤r g_i.    (5)

The combined resolution comparison relation only holds if all the attributes are of comparable, equal or finer resolution. It is also possible to say that an object is coarser than another with respect to a subset of the attributes. Two attributes are resolution dependent if the resolution of one attribute depends on the resolution of the other. These dependencies can be expressed as constraints and associated with the definition of the MR-type, e.g. if attribute a1 is at resolution R2 then a3 must be at resolution R4 and a5 must be at R2. The constraints can be used to prune the structured type's resolution space.
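Equation (5) is easy to state operationally: a structured object is finer than or equal to another only when the comparison holds for every attribute. A short sketch (the attribute resolution spaces and the per-attribute order are assumed, for illustration only):

```python
def combined_leq_r(of_resolutions, og_resolutions, leq_r_per_attr):
    """Equation (5): of_resolutions, og_resolutions are the tuples (f_1..f_n),
    (g_1..g_n); leq_r_per_attr holds one partial-order predicate per attribute."""
    return all(leq(f, g)
               for leq, f, g in zip(leq_r_per_attr, of_resolutions, og_resolutions))

# Example with two attributes whose resolution spaces are linearly ordered by
# granularity in metres (smaller = finer):
metres = lambda f, g: f <= g
print(combined_leq_r((1, 10), (10, 10), [metres, metres]))   # True: finer or equal everywhere
print(combined_leq_r((1, 50), (10, 10), [metres, metres]))   # False: second attribute is coarser
```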
An MR-type provides a mechanism for storing values with associated resolution information. How does this work for associations and operations? An association is usually represented as an attribute of a pointer type that refers to other objects, called a link. A link points to a target object. The resolution of a link therefore corresponds to the resolution of its target object. To coarsen a link is the same as coarsening the target object. Operations, however, are usually not resolution aware. Since objects can now be represented at different resolutions, special care must be taken, especially with operations that change the state of the object. A precaution could be to coarsen or refine the output values of the operations according to the object's resolution. This, however, requires further work which is not addressed here.

An attribute can have a multiplicity m that indicates the number of possible occurrences of its value, where m ⊆ {0, 1, ..., *}. An instance of an attribute can therefore be non-existent, single-valued or multi-valued. For a single-valued attribute it is straightforward, but we need a way of expressing the resolution of a non-existent and a multi-valued attribute. If no value is present, V = ∅, R = S and G is defined to be equal to granularity(S). A multi-valued attribute consists of a sequence of values that can have variable resolution. They can be constrained to have the same resolution (single-resolution); alternatively, resolution measures for the sequence must be defined. A resolution measure, r-measure, …

Quantum Computing for Computer Scientists

The multidisciplinary field of quantum computing strives to exploit some of the uncanny aspects of quantum mechanics to expand our computational horizons. Quantum Computing for Computer Scientists takes readers on a tour of this fascinating area of cutting-edge research. Written in an accessible yet rigorous fashion, this book employs ideas and techniques familiar to every student of computer science. The reader is not expected to have any advanced mathematics or physics background. After presenting the necessary prerequisites, the material is organized to look at different aspects of quantum computing from the specific standpoint of computer science. There are chapters on computer architecture, algorithms, programming languages, theoretical computer science, cryptography, information theory, and hardware. The text has step-by-step examples, more than two hundred exercises with solutions, and programming drills that bring the ideas of quantum computing alive for today's computer science students and researchers.

Noson S. Yanofsky, PhD, is an Associate Professor in the Department of Computer and Information Science at Brooklyn College, City University of New York and at the PhD Program in Computer Science at The Graduate Center of CUNY.

Mirco A. Mannucci, PhD, is the founder and CEO of HoloMathics, LLC, a research and development company with a focus on innovative mathematical modeling. He also serves as Adjunct Professor of Computer Science at George Mason University and the University of Maryland.

QUANTUM COMPUTING FOR COMPUTER SCIENTISTS
Noson S. Yanofsky, Brooklyn College, City University of New York
and Mirco A. Mannucci, HoloMathics, LLC

Cambridge University Press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi
Cambridge University Press, 32 Avenue of the Americas, New York, NY 10013-2473, USA
Information on this title: /9780521879965
© Noson S. Yanofsky and Mirco A. Mannucci 2008

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2008. Printed in the United States of America.
A catalog record for this publication is available from the British Library.
Library of Congress Cataloging in Publication data: Yanofsky, Noson S., 1967– Quantum computing for computer scientists / Noson S. Yanofsky and Mirco A. Mannucci. p. cm. Includes bibliographical references and index.
ISBN 978-0-521-87996-5 (hardback)
1. Quantum computers. I. Mannucci, Mirco A., 1960– II. Title.
QA76.889.Y35 2008 004.1–dc22 2008020507
ISBN 978-0-521-87996-5 hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party Internet Web sites referred to in this publication and does not guarantee that any content on such Web sites is, or will remain, accurate or appropriate.

Dedicated to Moishe and Sharon Yanofsky, and to the memory of Luigi and Antonietta Mannucci.

Wisdom is one thing: to know the thought by which all things are directed through all things.
Heraclitus of Ephesus (535–475 BCE), as quoted in Diogenes Laertius's Lives and Opinions of Eminent Philosophers, Book IX, 1.
Contents

Preface xi
1 Complex Numbers 7
1.1 Basic Definitions 8
1.2 The Algebra of Complex Numbers 10
1.3 The Geometry of Complex Numbers 15
2 Complex Vector Spaces 29
2.1 C^n as the Primary Example 30
2.2 Definitions, Properties, and Examples 34
2.3 Basis and Dimension 45
2.4 Inner Products and Hilbert Spaces 53
2.5 Eigenvalues and Eigenvectors 60
2.6 Hermitian and Unitary Matrices 62
2.7 Tensor Product of Vector Spaces 66
3 The Leap from Classical to Quantum 74
3.1 Classical Deterministic Systems 74
3.2 Probabilistic Systems 79
3.3 Quantum Systems 88
3.4 Assembling Systems 97
4 Basic Quantum Theory 103
4.1 Quantum States 103
4.2 Observables 115
4.3 Measuring 126
4.4 Dynamics 129
4.5 Assembling Quantum Systems 132
5 Architecture 138
5.1 Bits and Qubits 138
5.2 Classical Gates 144
5.3 Reversible Gates 151
5.4 Quantum Gates 158
6 Algorithms 170
6.1 Deutsch's Algorithm 171
6.2 The Deutsch–Jozsa Algorithm 179
6.3 Simon's Periodicity Algorithm 187
6.4 Grover's Search Algorithm 195
6.5 Shor's Factoring Algorithm 204
7 Programming Languages 220
7.1 Programming in a Quantum World 220
7.2 Quantum Assembly Programming 221
7.3 Toward Higher-Level Quantum Programming 230
7.4 Quantum Computation Before Quantum Computers 237
8 Theoretical Computer Science 239
8.1 Deterministic and Nondeterministic Computations 239
8.2 Probabilistic Computations 246
8.3 Quantum Computations 251
9 Cryptography 262
9.1 Classical Cryptography 262
9.2 Quantum Key Exchange I: The BB84 Protocol 268
9.3 Quantum Key Exchange II: The B92 Protocol 273
9.4 Quantum Key Exchange III: The EPR Protocol 275
9.5 Quantum Teleportation 277
10 Information Theory 284
10.1 Classical Information and Shannon Entropy 284
10.2 Quantum Information and von Neumann Entropy 288
10.3 Classical and Quantum Data Compression 295
10.4 Error-Correcting Codes 302
11 Hardware 305
11.1 Quantum Hardware: Goals and Challenges 306
11.2 Implementing a Quantum Computer I: Ion Traps 311
11.3 Implementing a Quantum Computer II: Linear Optics 313
11.4 Implementing a Quantum Computer III: NMR and Superconductors 315
11.5 Future of Quantum Ware 316
Appendix A Historical Bibliography of Quantum Computing, by Jill Cirasella 319
A.1 Reading Scientific Articles 319
A.2 Models of Computation 320
A.3 Quantum Gates 321
A.4 Quantum Algorithms and Implementations 321
A.5 Quantum Cryptography 323
A.6 Quantum Information 323
A.7 More Milestones? 324
Appendix B Answers to Selected Exercises 325
Appendix C Quantum Computing Experiments with MATLAB 351
C.1 Playing with Matlab 351
C.2 Complex Numbers and Matrices 351
C.3 Quantum Computations 354
Appendix D Keeping Abreast of Quantum News: Quantum Computing on the Web and in the Literature, by Jill Cirasella 357
D.1 Keeping Abreast of Popular News 357
D.2 Keeping Abreast of Scientific Literature 358
D.3 The Best Way to Stay Abreast? 359
Appendix E Selected Topics for Student Presentations 360
E.1 Complex Numbers 361
E.2 Complex Vector Spaces 362
E.3 The Leap from Classical to Quantum 363
E.4 Basic Quantum Theory 364
E.5 Architecture 365
E.6 Algorithms 366
E.7 Programming Languages 368
E.8 Theoretical Computer Science 369
E.9 Cryptography 370
E.10 Information Theory 370
E.11 Hardware 371
Bibliography 373
Index 381

Preface

Quantum computing is a fascinating new field at the intersection of computer science, mathematics, and physics, which strives to harness some of the uncanny aspects of quantum mechanics to broaden our computational horizons. This book presents some of the most exciting and interesting topics in quantum computing. Along the way, there will be some amazing facts about the universe in which we live and about the very notions of information and computation.

The text you hold in your hands has a distinct flavor from most of the other currently available books on quantum computing.
First and foremost, we do not assume that our reader has much of a mathematics or physics background. This book should be readable by anyone who is in or beyond their second year in a computer science program. We have written this book specifically with computer scientists in mind, and tailored it accordingly: we assume a bare minimum of mathematical sophistication, a first course in discrete structures, and a healthy level of curiosity. Because this text was written specifically for computer people, in addition to the many exercises throughout the text, we added many programming drills. These are a hands-on, fun way of learning the material presented and getting a real feel for the subject.

The calculus-phobic reader will be happy to learn that derivatives and integrals are virtually absent from our text. Quite simply, we avoid differentiation, integration, and all higher mathematics by carefully selecting only those topics that are critical to a basic introduction to quantum computing. Because we are focusing on the fundamentals of quantum computing, we can restrict ourselves to the finite-dimensional mathematics that is required. This turns out to be not much more than manipulating vectors and matrices with complex entries. Surprisingly enough, the lion's share of quantum computing can be done without the intricacies of advanced mathematics.

Nevertheless, we hasten to stress that this is a technical textbook. We are not writing a popular science book, nor do we substitute hand waving for rigor or mathematical precision.

Most other texts in the field present a primer on quantum mechanics in all its glory. Many assume some knowledge of classical mechanics. We do not make these assumptions. We only discuss what is needed for a basic understanding of quantum computing as a field of research in its own right, although we cite sources for learning more about advanced topics.

There are some who consider quantum computing to be solely within the domain of physics. Others think of the subject as purely mathematical. We stress the computer science aspect of quantum computing.

It is not our intention for this book to be the definitive treatment of quantum computing. There are a few topics that we do not even touch, and there are several others that we approach briefly, not exhaustively. As of this writing, the bible of quantum computing is Nielsen and Chuang's magnificent Quantum Computation and Quantum Information (2000). Their book contains almost everything known about quantum computing at the time of its publication. We would like to think of our book as a useful first step that can prepare the reader for that text.

FEATURES

This book is almost entirely self-contained. We do not demand that the reader come armed with a large toolbox of skills. Even the subject of complex numbers, which is taught in high school, is given a fairly comprehensive review.

The book contains many solved problems and easy-to-understand descriptions. We do not merely present the theory; rather, we explain it and go through several examples. The book also contains many exercises, which we strongly recommend the serious reader should attempt to solve. There is no substitute for rolling up one's sleeves and doing some work!

We have also incorporated plenty of programming drills throughout our text. These are hands-on exercises that can be carried out on your laptop to gain a better understanding of the concepts presented here (they are also a great way of having fun).
We hasten to point out that we are entirely language-agnostic. The student should write the programs in the language that feels most comfortable. We are also paradigm-agnostic. If declarative programming is your favorite method, go for it. If object-oriented programming is your game, use that. The programming drills build on one another. Functions created in one programming drill will be used and modified in later drills. Furthermore, in Appendix C, we show how to make little quantum computing emulators with MATLAB or how to use a ready-made one. (Our choice of MATLAB was dictated by the fact that it makes it very easy to build quick-and-dirty prototypes, thanks to its vast amount of built-in mathematical tools.)

This text appears to be the first to handle quantum programming languages in a significant way. Until now, there have been only research papers and a few surveys on the topic. Chapter 7 describes the basics of this expanding field: perhaps some of our readers will be inspired to contribute to quantum programming!

This book also contains several appendices that are important for further study:

Appendix A takes readers on a tour of major papers in quantum computing. This bibliographical essay was written by Jill Cirasella, Computational Sciences Specialist at the Brooklyn College Library. In addition to having a master's degree in library and information science, Jill has a master's degree in logic, for which she wrote a thesis on classical and quantum graph algorithms. This dual background uniquely qualifies her to suggest and describe further readings.

Appendix B contains the answers to some of the exercises in the text. Other solutions will also be found on the book's Web page. We strongly urge students to do the exercises on their own and then check their answers against ours.

Appendix C uses MATLAB, the popular mathematical environment and an established industry standard, to show how to carry out most of the mathematical operations described in this book. MATLAB has scores of routines for manipulating complex matrices: we briefly review the most useful ones and show how the reader can quickly perform a few quantum computing experiments with almost no effort, using the freely available MATLAB quantum emulator Quack.

Appendix D, also by Jill Cirasella, describes how to use online resources to keep up with developments in quantum computing. Quantum computing is a fast-moving field, and this appendix offers guidelines and tips for finding relevant articles and announcements.

Appendix E is a list of possible topics for student presentations. We give brief descriptions of different topics that a student might present before a class of his peers. We also provide some hints about where to start looking for materials to present.

ORGANIZATION

The book begins with two chapters of mathematical preliminaries. Chapter 1 contains the basics of complex numbers, and Chapter 2 deals with complex vector spaces. Although much of Chapter 1 is currently taught in high school, we feel that a review is in order. Much of Chapter 2 will be known by students who have had a course in linear algebra. We deliberately did not relegate these chapters to an appendix at the end of the book because the mathematics is necessary to understand what is really going on. A reader who knows the material can safely skip the first two chapters. She might want to skim over these chapters and then return to them as a reference, using the index and the table of contents to find specific topics.

Chapter 3 is a gentle introduction to some of the ideas that will be encountered throughout the rest of the book.
models and simple matrix multipli-cation,we demonstrate some of the fundamental concepts of quantum mechanics,which are then formally developed in Chapter4.From there,Chapter5presentssome of the basic architecture of quantum computing.Here one willfind the notionsof a qubit(a quantum generalization of a bit)and the quantum analog of logic gates.Once Chapter5is understood,readers can safely proceed to their choice of Chapters6through11.Each chapter takes its title from a typical course offered in acomputer science department.The chapters look at that subfield of quantum com-puting from the perspective of the given course.These chapters are almost totallyindependent of one another.We urge the readers to study the particular chapterthat corresponds to their favorite course.Learn topics that you likefirst.From thereproceed to other chapters.Figure0.1summarizes the dependencies of the chapters.One of the hardest topics tackled in this text is that of considering two quan-tum systems and combining them,or“entangled”quantum systems.This is donemathematically in Section2.7.It is further motivated in Section3.4and formallypresented in Section4.5.The reader might want to look at these sections together.xivPrefaceFigure 0.1.Chapter dependencies.There are many ways this book can be used as a text for a course.We urge instructors to find their own way.May we humbly suggest the following three plans of action:(1)A class that provides some depth might involve the following:Go through Chapters 1,2,3,4,and 5.Armed with that background,study the entirety of Chapter 6(“Algorithms”)in depth.One can spend at least a third of a semester on that chapter.After wrestling a bit with quantum algorithms,the student will get a good feel for the entire enterprise.(2)If breadth is preferred,pick and choose one or two sections from each of the advanced chapters.Such a course might look like this:(1),2,3,4.1,4.4,5,6.1,7.1,9.1,10.1,10.2,and 11.This will permit the student to see the broad outline of quantum computing and then pursue his or her own path.(3)For a more advanced class (a class in which linear algebra and some mathe-matical sophistication is assumed),we recommend that students be told to read Chapters 1,2,and 3on their own.A nice course can then commence with Chapter 4and plow through most of the remainder of the book.If this is being used as a text in a classroom setting,we strongly recommend that the students make presentations.There are selected topics mentioned in Appendix E.There is no substitute for student participation!Although we have tried to include many topics in this text,inevitably some oth-ers had to be left out.Here are a few that we omitted because of space considera-tions:many of the more complicated proofs in Chapter 8,results about oracle computation,the details of the (quantum)Fourier transforms,and the latest hardware implementations.We give references for further study on these,as well as other subjects,throughout the text.More informationMore informationPreface xvANCILLARIESWe are going to maintain a Web page for the text at/∼noson/qctext.html/The Web page will containperiodic updates to the book,links to interesting books and articles on quantum computing,some answers to certain exercises not solved in Appendix B,anderrata.The reader is encouraged to send any and all corrections tonoson@Help us make this textbook better!ACKNOLWEDGMENTSBoth of us had the great privilege of writing our doctoral theses under the gentleguidance of the recently deceased Alex Heller.Professor Heller wrote the 
follow-ing1about his teacher Samuel“Sammy”Eilenberg and Sammy’s mathematics:As I perceived it,then,Sammy considered that the highest value in mathematicswas to be found,not in specious depth nor in the overcoming of overwhelmingdifficulty,but rather in providing the definitive clarity that would illuminate itsunderlying order.This never-ending struggle to bring out the underlying order of mathematical structures was always Professor Heller’s everlasting goal,and he did his best to passit on to his students.We have gained greatly from his clarity of vision and his viewof mathematics,but we also saw,embodied in a man,the classical and sober ideal ofcontemplative life at its very best.We both remain eternally grateful to him.While at the City University of New York,we also had the privilege of inter-acting with one of the world’s foremost logicians,Professor Rohit Parikh,a manwhose seminal contributions to thefield are only matched by his enduring com-mitment to promote younger researchers’work.Besides opening fascinating vis-tas to us,Professor Parikh encouraged us more than once to follow new directionsof thought.His continued professional and personal guidance are greatly appre-ciated.We both received our Ph.D.’s from the Department of Mathematics in The Graduate Center of the City University of New York.We thank them for providingus with a warm and friendly environment in which to study and learn real mathemat-ics.Thefirst author also thanks the entire Brooklyn College family and,in partic-ular,the Computer and Information Science Department for being supportive andvery helpful in this endeavor.1See page1349of Bass et al.(1998).More informationxvi PrefaceSeveral faculty members of Brooklyn College and The Graduate Center were kind enough to read and comment on parts of this book:Michael Anshel,DavidArnow,Jill Cirasella,Dayton Clark,Eva Cogan,Jim Cox,Scott Dexter,EdgarFeldman,Fred Gardiner,Murray Gross,Chaya Gurwitz,Keith Harrow,JunHu,Yedidyah Langsam,Peter Lesser,Philipp Rothmaler,Chris Steinsvold,AlexSverdlov,Aaron Tenenbaum,Micha Tomkiewicz,Al Vasquez,Gerald Weiss,andPaula Whitlock.Their comments have made this a better text.Thank you all!We were fortunate to have had many students of Brooklyn College and The Graduate Center read and comment on earlier drafts:Shira Abraham,RachelAdler,Ali Assarpour,Aleksander Barkan,Sayeef Bazli,Cheuk Man Chan,WeiChen,Evgenia Dandurova,Phillip Dreizen,C.S.Fahie,Miriam Gutherc,RaveHarpaz,David Herzog,Alex Hoffnung,Matthew P.Johnson,Joel Kammet,SerdarKara,Karen Kletter,Janusz Kusyk,Tiziana Ligorio,Matt Meyer,James Ng,SeverinNgnosse,Eric Pacuit,Jason Schanker,Roman Shenderovsky,Aleksandr Shnayder-man,Rose B.Sigler,Shai Silver,Justin Stallard,Justin Tojeira,John Ma Sang Tsang,Sadia Zahoor,Mark Zelcer,and Xiaowen Zhang.We are indebted to them.Many other people looked over parts or all of the text:Scott Aaronson,Ste-fano Bettelli,Adam Brandenburger,Juan B.Climent,Anita Colvard,Leon Ehren-preis,Michael Greenebaum,Miriam Klein,Eli Kravits,Raphael Magarik,JohnMaiorana,Domenico Napoletani,Vaughan Pratt,Suri Raber,Peter Selinger,EvanSiegel,Thomas Tradler,and Jennifer Whitehead.Their criticism and helpful ideasare deeply appreciated.Thanks to Peter Rohde for creating and making available to everyone his MAT-LAB q-emulator Quack and also for letting us use it in our appendix.We had a gooddeal of fun playing with it,and we hope our readers will too.Besides writing two wonderful appendices,our friendly neighborhood librar-ian,Jill Cirasella,was always just an e-mail 
away with helpful advice and support.Thanks,Jill!A very special thanks goes to our editor at Cambridge University Press,HeatherBergman,for believing in our project right from the start,for guiding us through thisbook,and for providing endless support in all matters.This book would not existwithout her.Thanks,Heather!We had the good fortune to have a truly stellar editor check much of the text many times.Karen Kletter is a great friend and did a magnificent job.We also ap-preciate that she refrained from killing us every time we handed her altered draftsthat she had previously edited.But,of course,all errors are our own!This book could not have been written without the help of my daughter,Hadas-sah.She added meaning,purpose,and joy.N.S.Y.My dear wife,Rose,and our two wondrous and tireless cats,Ursula and Buster, contributed in no small measure to melting my stress away during the long andpainful hours of writing and editing:to them my gratitude and love.(Ursula is ascientist cat and will read this book.Buster will just shred it with his powerful claws.)M.A.M.。
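The programming drills mentioned above boil down to manipulating complex vectors and matrices. As a purely illustrative sketch (not one of the book's drills, and written in Python/NumPy rather than the MATLAB assumed in Appendix C), here is a single qubit being put into superposition by a gate:

    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)                        # the state |0>
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate (a unitary matrix)

    state = H @ ket0                 # applying the gate is just a matrix-vector product
    probs = np.abs(state) ** 2       # measurement probabilities (Born rule)

    print(state)                     # [0.70710678+0.j  0.70710678+0.j]
    print(probs)                     # [0.5  0.5]

Nothing here goes beyond finite-dimensional linear algebra with complex entries, which is exactly the point the preface makes.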

Embedded Systems: Chinese-English Translation


6.1 Conclusions

Autonomous control for small UAVs imposes severe restrictions on control algorithm development, stemming from the limitations imposed by the on-board hardware and the requirement for on-line implementation. In this thesis we have proposed a new hierarchical control scheme for the navigation and guidance of a small UAV for obstacle avoidance. The multi-stage control hierarchy for a complete path control algorithm is comprised of several control steps: top-level path planning, mid-level path smoothing, and bottom-level path following control. In each stage of the control hierarchy, the limitation of the on-board computational resources has been taken into account to come up with a practically feasible control solution. We have validated these developments in realistic, non-trivial scenarios.

In Chapter 2 we proposed a multiresolution path planning algorithm. The algorithm computes at each step a multiresolution representation of the environment using the fast lifting wavelet transform. The main idea is to employ high resolution close to the agent (where it is needed most), and a coarse resolution at large distances from the current location of the agent. It has been shown that the proposed multiresolution path planning algorithm provides an on-line path solution which is most reliable close to the agent, while ultimately reaching the goal. In addition, the connectivity relationship of the corresponding multiresolution cell decomposition can be computed directly from the approximation and detail coefficients of the FLWT. The path planning algorithm is scalable and can be tailored to the available computational resources of the agent.

The on-line path smoothing algorithm incorporating the path templates is presented in Chapter 3. The path templates are comprised of a set of B-spline curves, which have been obtained from solving the off-line optimization problem subject to the channel constraints. The channel is closely related to the obstacle-free high-resolution cells over the path sequence calculated by the high-level path planner. Obstacle avoidance is implicitly dealt with since each B-spline curve is constrained to stay inside the prescribed channel, thus avoiding obstacles outside the channel. By the affine invariance property of B-splines, each component in the B-spline path templates can be adapted to the discrete path sequence obtained from the high-level path planner. We have shown that the smooth reference path over the entire path can be calculated on-line by utilizing the path templates and the path stitching scheme. The simulation results with the D*-lite path planning algorithm validate the effectiveness of the on-line path smoothing algorithm. This approach has the advantage of minimal on-line computational cost, since most of the computations are done off-line.

In Chapter 4 a nonlinear path following control law has been developed for a small fixed-wing UAV. The kinematic control law realizes cooperative path following, so that the motion of a virtual target is controlled by an extra control input to help the convergence of the error variables. We applied backstepping to derive the roll command for a fixed-wing UAV from the heading rate command of the kinematic control law. Furthermore, we applied parameter adaptation to compensate for the inaccurate time constant of the roll closed-loop dynamics.
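For readers who want to see the decomposition step in code, the sketch below is a minimal illustration only (it is not the thesis implementation; it assumes the PyWavelets package and uses its filter-bank DWT in place of the fast lifting wavelet transform). It splits a toy occupancy grid into the coarse approximation and detail coefficients from which a multiresolution cell decomposition and its connectivity can be derived:

    import numpy as np
    import pywt   # PyWavelets (assumed available); the thesis itself does not prescribe a library

    # Toy occupancy grid: 1 = obstacle, 0 = free space (16x16, purely illustrative)
    rng = np.random.default_rng(0)
    grid = (rng.random((16, 16)) > 0.85).astype(float)

    # Three-level 2-D Haar decomposition: 'approx' is the coarse occupancy map,
    # the detail bands hold the information needed to refine cells back to full
    # resolution where it matters (i.e. near the agent's current location).
    approx, *details = pywt.wavedec2(grid, 'haar', level=3)

    print("full-resolution grid  :", grid.shape)     # (16, 16)
    print("coarsest approximation:", approx.shape)   # (2, 2)
    for level, (cH, cV, cD) in zip(range(3, 0, -1), details):
        print(f"detail coefficients at level {level}:", cH.shape)   # (2,2), (4,4), (8,8)

Keeping only the coarse approximation far from the agent, and adding back detail bands near it, is one way to realize the "fine near, coarse far" representation described above.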
The proposed path following control algorithm is validated through a high-fidelity 6-DOF simulation of a fixed-wing UAV using realistic sensor measurements, which verifies the applicability of the proposed algorithm to the actual UAV.

Finally, the complete hierarchical path control algorithm proposed in this thesis is validated through a high-fidelity hardware-in-the-loop simulation environment using the actual hardware platform. From the simulation results, it has been demonstrated that the proposed hierarchical path control law can be successfully applied to path control of a small UAV equipped with an autopilot that has limited computational resources.

6.2 Future Research

In this section, several possible extensions of the work presented in this thesis are outlined.

6.2.1 Reusable graph structure

The proposed path planning algorithm involves calculating the multiresolution cell decomposition and the corresponding graph structure at each iteration. Hence, the connectivity graph G(t) changes as the agent proceeds toward the goal. Subsequently, let x ∈ W be a state (location) which corresponds to nodes of two distinct graphs as follows.

By the respective A* searches on those graphs, the agent might be led to visit x at different time steps t_i and t_j, i ≠ j. As a result, a cyclic loop with respect to x is formed, causing the agent to repeat this pathological loop while never reaching the goal. Although it has been suggested that maintaining a visited set might be a means of avoiding such pathological situations [142], such a trial-and-error scheme is not a systematic approach. Rather, suppose that we could employ a unified graph structure over the entire iteration, which retains the information from the previous search. Similar to the D*-lite path planning algorithm, an incremental search over the graph that reuses previous information would not only overcome the pathological situation but also reduce the computational time. In contrast to the D* or D*-lite algorithms, where a uniform graph structure is employed, a challenge lies in building the unified graph structure from a multiresolution cell decomposition. Specifically, it includes a dynamic, multiresolution scheme for constructing the graph connectivity between nodes at different levels. The unified graph structure will evolve as the agent moves, while updating nodes and edges associated with the multiresolution cell decomposition from the FLWT. If this is the case, we might be able to adapt the proposed path planning algorithm to an incremental search algorithm, hence taking advantage of both the efficient multiresolution connectivity (due to the FLWT) and the fast computation (due to the incremental search using previous information).

General Academic English Writing (China University of Political Science and Law), China University MOOC: end-of-chapter answers and final-exam question bank, 2023


1. (5) First of all, watching TV has the value of sheer relaxation. Watching television can be soothing and restful after an eight-hour day of pressure, challenges, or concentration. After working hard all day, people look forward to a new episode of a favorite show or yet another showing of Casablanca or Sleepless in Seattle. The main cohesive devices used in this paragraph are _____ and ______.

Reference answer: synonyms (topic synonyms: TV - television - show - showing; theme synonyms: relaxation - soothing - restful) and hypernym-hyponym pairs (TV -- Casablanca or Sleepless in Seattle).

2. (2) We hear a lot about the negative effects of television on the viewer. Obviously, television can be harmful if it is watched constantly to the exclusion of other activities. It would be just as harmful to listen to CDs all the time or to eat constantly. However, when television is watched in moderation, it is extremely valuable, as it provides relaxation, entertainment, and education. The two main elements of this paragraph are ________ and _________.

Logarithmic sheaves attached to arrangements of hyperplanes


1. Introduction

Any divisor D on a nonsingular variety X defines a sheaf of logarithmic differential forms Ω^1_X(log D). Its equivalent definitions and many useful properties are discussed in a fundamental paper of K. Saito [Sa]. This sheaf is locally free when D is a strictly normal crossing divisor, and in this situation it is a part of the logarithmic De Rham complex used by P. Deligne to define the mixed Hodge structure on the cohomology of the complement X \ D. In the theory of hyperplane arrangements this sheaf arises when D is a central arrangement of hyperplanes in C^{n+1}. In exceptional situations this sheaf could be free (a free arrangement), for example, when the arrangement is a complex reflection arrangement. Many geometric properties of the vector bundle Ω^1_X(log D) were studied in the case when D is a generic arrangement of hyperplanes in P^n [DK1]. Among these properties is a Torelli type theorem which asserts that two arrangements whose sheaves of logarithmic 1-forms are isomorphic coincide unless they osculate a normal rational curve. In this paper we introduce and study a certain subsheaf Ω̃^1_X(log D) of Ω^1_X(log D). This sheaf contains as a subsheaf (and coincides with it in the case when the divisor D is the union of normal irreducible divisors) the sheaf of logarithmic differentials considered earlier in [CHKS]. Its double dual is isomorphic to Ω^1_X(log D). Although Ω^1_X(log D) could be locally free for very singular arrangements, e.g. when n = 2 or for free arrangements, the sheaf Ω̃^1_X(log D) is never locally free unless the divisor D is locally formally isomorphic to a strictly normal crossing divisor. This disadvantage is

Lesson-preparation courseware for the new textbook, Spring 2024: Senior-school Biology, Chapter 3 "The Nature of the Gene", Section 3.3 "DNA Replication" (new PEP edition, Compulsory 2)

... includes all rounds of replication, whereas the latter includes only the n-th round of replication.
(2) Note whether the unit for bases is "pairs" or "individual bases". (3) Remember that during DNA replication, no matter how many rounds of replication take place, only two of the DNA molecules contain a parental single strand. (4) Read carefully whether the question asks for the "number of DNA molecules" or the "number of strands", and whether it says "contains" or "contains only", so as not to fall into a trap.
II. Replication of the DNA molecule
Example 1. A DNA molecule contains 1,000 base pairs (labelled with 32P), of which 400 bases are thymine. If this DNA molecule is placed in a culture medium containing only 31P-labelled deoxyribonucleotides and allowed to replicate twice, the relative molecular mass of the daughter DNA molecules is, on average, 1,500 less than that of the original.
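The figure of 1,500 can be checked by counting labelled phosphorus atoms; the short calculation below assumes each 32P-to-31P substitution lowers the relative molecular mass by exactly 1:

    base_pairs = 1000
    p_atoms_parent = 2 * base_pairs      # one phosphate per nucleotide, all 32P -> 2000
    rounds = 2
    daughters = 2 ** rounds              # 4 DNA molecules after two rounds

    # The two parental strands (all the 32P there is) end up in 2 of the 4 daughters,
    # so on average each daughter carries 2000 / 4 = 500 heavy phosphorus atoms.
    avg_32p_per_daughter = p_atoms_parent / daughters

    # Each remaining 32P adds 1 to the relative mass compared with an all-31P molecule.
    mass_drop = p_atoms_parent - avg_32p_per_daughter
    print(mass_drop)   # 1500.0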
[Slide diagram: "I. Inferring how DNA replicates - the hypothesis-deduction method": 1. pose the question; 2. propose hypotheses (semiconservative, conservative, dispersive replication); (1) deductive reasoning; 3. test the hypotheses. The accompanying figure outlines the Meselson-Stahl experiment: bacteria grown in 15N medium (P: 15N/15N DNA) are transferred to a medium containing 14NH4Cl; after one cell division (F1: 15N/14N) and a second division (F2), the DNA is extracted and centrifuged, and the positions of the high- and low-density bands are compared.]
Example 3. Suppose that, after mutagenesis, a normal base at a certain site of a parental DNA molecule has been changed into 5-bromouracil (BU). The mutated DNA molecule then undergoes 2 consecutive rounds of replication, producing 4 daughter DNA molecules as shown in the figure. The base replaced by BU could be ( C )
A. adenine    B. thymine or adenine    C. cytosine    D. guanine or cytosine

Example 4. 5-BrU (5-bromouracil) can pair with A and also with C. A normal cell capable of division is inoculated onto a suitable culture medium containing the five nucleotides A, G, C, T and 5-BrU. At least how many rounds of replication are required before a base pair at a certain site of a DNA molecule in the cell is changed from T-A to G-C? ( B )
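Under the pairing rules stated in Example 4, the earliest appearance of a G-C pair can be checked mechanically. The sketch below is purely illustrative ('U' stands for 5-BrU); it tracks which bases can occupy the mutated site after each round and reports the first round in which a G-C molecule is possible:

    # Pairing follows the rules stated in the question (5-BrU pairs with A or with C),
    # plus normal Watson-Crick pairing.
    pairing = {'A': {'T', 'U'}, 'T': {'A'}, 'G': {'C'}, 'C': {'G', 'U'}, 'U': {'A', 'C'}}

    strands = {'T', 'A'}                       # bases present at the site before replication
    for round_no in range(1, 6):
        # every existing strand acts as a template and receives some legal partner
        molecules = {frozenset({b, p}) for b in strands for p in pairing[b]}
        strands |= {p for b in strands for p in pairing[b]}
        if frozenset({'G', 'C'}) in molecules:
            print("A G-C pair is first possible after round", round_no)   # prints 3
            break

The chain is T-A, then A-BrU, then BrU-C, then C-G, so three rounds are the minimum, consistent with the quoted answer.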


An Implementation of RSA Exponentiation onReconfigurable LogicLaura Beth LincolnApril25,2004AbstractThe RSA public-key cryptosystem relies on the modular exponentia-tion of>1024-bit numbers to both encrypt and decrypt.The followingpaper reviews the paper“Efficient Architectures for Implementing Mont-gomery Modular Multiplication and RSA Modular Exponentiation on Re-configurable Logic”by Daly and Marnane as Daly and Marnane attemptto exploit the fast carry chains of the Xliinx Virtex V1000FG680-6by gen-erating variations of the algorithm for modular multiplication developedby P.L.Montgomery in1985.1BackgroundThe RSA public-key cryptosystem is commonly used to allow for secure telecom-munication networks by providing a means of authentication while maintaining confidentiality and data integrity.The security of the RSA cryptosystem is de-pendent upon the lack of techniques for factoring substantially large numbers, on the order of1024bits,into two distinct primes of roughly512bits each.The sender wishing to send the plain-text P,can encrypt P with the public1encryption key(E,M)by modular exponentiation.C=P E(modM)By similar method of modular exponentiation,the receiver can decrypt the cipher-text C received from the sender using his private-key(D,M).P=C D(modM)Since modular exponentiation is central to the encryption of plain-text and the decryption of cipher-text,and since the majority of all computers in use are connected to some network that would most likely use the RSA system for se-cure connections,it would be beneficial and reasonable to incorporate modular exponentiation into the processor’s architecture.Modular exponentiation is often implemented in software using the Square-and-Multiply(L-R Algorithm[DM02])or Multiply-and-Square(R-L Algorithm [DM02])algorithms.The method on the left is traditionally used in software because going from the highest order bit to the lowest order bit reduces the computation needed to convert the exponent currently in decimal representation into its equivalent binary representation.However,at the hardware level numbers are stored in registers thus naturally being represented in binary,but there is an advantage of choosing the version on the right for hardware implementation because the squaring and multiplication in a particular iteration may be computed in paral-lel,so given that there is enough space on the chip to allow for two multiplication units,the Multiply-and-Square method is preferable.2Square-and-Multiply(P,E,M) 1P←P2R←13for i←k−1to04do R←(R·Rmod)M5if E i←16then R←(R·P)mod M 7return R Multiply-and-Square(P,E,M) 1P←P2R←13for i←0to k−14do if E i←15then R←(R·P)mod M 6P←(P·P)mod M 7return RThe above has reduced the problem of making modular exponentiation faster to making modular multiplication faster.The Classical algorithm for modular multiplication simply multiplies the two numbers together and after the mul-tiplication reduces the product by modulus ing the Classical algorithm on two n-bit numbers requires a register of length2n bits to hold the resulting product,which in turn requires a division unit that takes an n-bit divisor and a2n-bit dividend to compute thefinal n-bit quotient to aid in the reduction of the product modulus M.Another possible method of modular multiplication could involve multiply-ing only one bit of thefirst operand by the entire second operand and re-ducing the result modulus M every1×n bit multiplication.In1985,P.L. 
Montgomery developed such an algorithm,but the Montgomery algorithm for modular exponentiation has a side effect.The goal of modular multiplica-tion is tofind A=XY mod M,but by Montgomery modular multiplication3A=XY r−1mod M where r=2n.LetC=(r2r−1)mod M≡(22n·r−1)mod MA=(XY r−1)mod Mthen(C·A)modM≡(XY)mod MThus we can pre-compute C by using the Montgomery algorithm on r×r and use this value at the end of each modular multiplication to remove the extra r−1.This suggests that the Montgomery algorithm should only be used when computing several multiplications with the same modulus since for each modu-lus we must compute r×r.Another disclaimer for the Montgomery algorithm is that the M must be odd.Since gcd(M,r)=1,M may not be divisible by2, but for the RSA cryptosystem M is guaranteed to be odd,thus not being divisi-ble by2,since all primes>2are odd and the product of two odd numbers is odd.[DM02]presents three variations on the Montgomery algorithm for modular multiplication,each lending itself to different hardware configurations,and thus it is only appropriate to discuss the details of each variation with respect to hardware configurations discussed later.2SolutionThe Field Programmable Gate Array(FPGA)circuit design has an underlying architecture which[DM02]exploits.Some Field Programmable Logic(FPL) units consist of4-input lookup tables which may be used as either4-input func-4tion generators or as a16x1RAM(or distributed RAM),“high speed intercon-nect lines between vertically adjacent logic blocks which are designed to provide efficient carry propagation”[DM02],and control logic.Placing all logic blocks in which the carry signals are generated and propagated in a single column allows [DM02]to utilize the fast carry chain towards a benefit.Using the dedicated AND gate in a Configurable Logic Block(CLB),2adder and b i A or q i M functions could be performed in a single CLB,and by using the fast carry chains of the CLB’s the clock period could be reduced.MonArch1, depicted in Figure4,uses MonPro1to define the structure in the CLB.Mon-Pro1takes n+1clock cycles to complete a calculation because M =(M+1)/2 must be calculated to compute any further steps.MonArch2reduced the num-ber of clock cycles by1since the algorithm MonPro2which defines a CLB in this architecture requires no pre-computation,but the MonPro2algorithm re-quires that the second n-bit adder be replaced with an n+1-bit adder since the new intermediate sum S i is not truncated to the higher n bits until the sum S i−1+q i M+b i A was computed.Another issue with MonPro2is that q i is dependent upon the computation of b i A.If A is shifted up one bit,then it is guaranteed that that the LSB of S i+1+b i A is zero,removing the aforementioned dependency.Unfortunately this modification comes at the cost of an extra clock cycle to account for that extra factor of two introduced by shifting the value of A.This modification is presented in MonPro3and implemented in MonArch3. 
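Because the exponentiation pseudocode quoted earlier is garbled by text extraction, a compact runnable rendering may help. The sketch below is ours, not the paper's hardware description: mon_pro is a plain radix-2 Montgomery product and mon_exp wires it into the right-to-left Multiply-and-Square method, with R = 2^n, m odd, and operands below m.

    def mon_pro(a, b, m, n):
        """Radix-2 Montgomery product: a*b*2^(-n) mod m, for odd m < 2^n and a, b < m."""
        s = 0
        for i in range(n):
            if (a >> i) & 1:      # add b whenever the i-th bit of a is set
                s += b
            if s & 1:             # make s even by adding m, then halve
                s += m
            s >>= 1
        return s - m if s >= m else s

    def mon_exp(p, e, m, n):
        """p^e mod m via the right-to-left Multiply-and-Square method, R = 2^n."""
        r2 = pow(1 << n, 2, m)          # R^2 mod m, used to move operands into Montgomery form
        p_bar = mon_pro(p, r2, m, n)    # p in Montgomery form: p*R mod m
        x = mon_pro(1, r2, m, n)        # 1 in Montgomery form: R mod m
        while e:
            if e & 1:
                x = mon_pro(x, p_bar, m, n)        # multiply step
            p_bar = mon_pro(p_bar, p_bar, m, n)    # square step (independent of the above)
            e >>= 1
        return mon_pro(x, 1, m, n)      # strip the extra factor of R

    # quick check against Python's built-in modular exponentiation
    assert mon_exp(7, 53, 241, 8) == pow(7, 53, 241)

In hardware the multiply and square steps of each iteration can run in parallel, which is exactly the argument made above for preferring the right-to-left variant when two multiplier units fit on the chip.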
If the order of addition is changed,which is valid since addition is commutative, the two adders can almost work in parallel.By computing(b i·2A)+(q i·M) with thefirst adder and adding S i to the result with the second adder,thefirst adder can operate soon after the LSB of the second adder is computed.This parallel computation comes at the cost of space.5Thefirst adder has only four possible results:0,M,2A,or M+2A.This smallfinite number of outcomes lends itself to a multiplexor based implemen-tation.The MuxMult1architecture requires the pre-computation of M+2A, which costs two extra clock cycles,but the replacement of the multiplexor for thefirst adder is space efficient.[DM02]discusses using the16x1RAM modules of the particular FPGA used in the discussed experiments because the intent is to exploit the intricacies of the hardware,but memory accesses are slow,thus the16x1RAM adds no e of the RAM is disadvantageous.Since the bit lengths of the numbers the RSA cryptosystem are going to be larger than the number of CLB’s in one column of the FPGA,thus exceeding the maximum carry chain length,the advantage of using a fast carry chain no longer exists.To combat this loss,the CLB’s should be pipelined.Taking the n-bit word we wish to compute and dividing it into j p-bit words,where p is roughly the number of CLB’s per column of the FPGA,and j is the number of pipelined units,clock speed can be recovered.The inputs to the multiplier must be broken into p-bit long words and the carry out of the sum will have to be delayed for one clock cycle before proceeding to the second adder.Also,the right-shift operation performed at the beginning of MonPro3requires that the LSB of the sum be fed directly to the most significant bit(MSB)of the previ-ous adder,which does not affect the speed of the overall computation because the LSB of the sum will be computed before the MSB is computed by thefirst adder.This pipelined multiplier takes n+3clock cycles from the cycle which thefirst word is loaded to the cycle which thefirst output word occurs.There6are two clock cycles needed for initialization and n+1cycles prescribed by use of MonPro3.Thus an n-bit multiplication,where the n-bit value is divided into j words is computed in(n+3)+(j−1)clock cycles.Since the multiplicands change,there are now have6inputs to be handled by the multiplexor because input A can be the constant C,mentioned in the previous section,R0the initial start result,or R the previous multiplication result.The two extra inputs to the multiplexor require an extra control line, and it should be noted that M and C are stored outside of the pipelined units since these two values are constant for all multiplications.The Multiply-and-Square method mentioned in the previous section can be modified to use Montgomery modular multiplication as described in Algorithm 5:L-R Algorithm from[DM02].This algorithm,as discussed before,allows for two parallel multiplications within one iteration.To implement two parallel multiplications,it is necessary to have two multiplier units or in other words2j of the pipelined Montgomery modular multiplication units.3ResultsThe target device in[DM02]is the Xilinx Virtex V1000FG680-6,which contains columns of124CLB’s.[DM02]choose p=120which implies j=9to achieve a1080-bit design which exceeds the minimum of a1024-bit RSA cryptosystem.Considering Table1in[DM02],it is easily seen that the MonArch4is faster than any MonArch1-3,the RAMMult4,and any MuxMult1-2.The RAMMult4 is obviously the slowest because memory accesses are 
slower than register ac-cesses.The RAMMult4ties with the MonArch4for most space needed.The7MuxMult1is slightly slower than the Monarch4,but the percent reduction in space outweighs the decrease in clock frequency.The MuxMult2incurs a very small increase in space and decrease in frequency compared to the MuxMult1, but the MuxMult1is not modified for pipelining like MuxMult2,and thus the MuxMult1only prevails in speed over the MuxMult2when n is less than the number of CLB’s in an FPGA column.The MuxMult2is extended to the Mux-Mult3to account for differing multiplicands.The difference between a pipelined unit and a non-pipelined unit is illus-trated by looking at Table2and Table3from[DM02].Each increase of the multiplier size by120-bits resulted in a33%−50%decrease in clock frequency; whereas,the pipelined Montgomery modular multiplier experiences only a sig-nificant decrease in clock frequency between a120-bit unit to a240-bit unit.Using two1080-bit Montgomery modular multiplication units,the clock fre-quencies of the exponentiator are comparable to the clock frequecies of a single Montgomery modular multiplier(see Table4[DM02]).The data rate however, decreases roughly33%−50%with each120-bit increase,but this is to be ex-pected.Thus,an FPGA can obtain data rate close to50kb/s with pipelining and proper use of fast carry chains.4ConclusionsWhile at the time[DM02]was written and from the sources cited in[DM02],this implementation of a Montgomery modular multiplication unit was the fastest known implementation.The VLSI implementation cited in[DM02],which uti-lized a Carry Propogation Adder(CPA)and two Carry Save Adders(CSA), acheived a data rate of45.6kb/s on a1024-bit exponentiator.The1080-bit8exponentiator designed in[DM02]acheived a data rate of49.63kb/s.This is an increase of only4.03kb/s.One year later[MMM+03]considered a2048-bit exponentiator.Considering that the data rate decreases by nearly50%for every120bits the Montgomery modular multiplier handles,extending to a2048-bit exponentiator implementation using FPGA’s as discussed in[DM02]seems ridiculously slow to even consider.Reconfigurable logic is also quite expensive compared to generic hardwired implementations.Also,reconfigurable logic units are not as generic as most other logic units.Thus an architecture developed for one FPGA is not necessar-ily easily transferrable to another FPGA family,and even if an implementation for an FPGA were to be migrated to another FPGA family,or other recon-figurable logic device,the speed due to exploitation of features on the original device does not necessarily translate to the same speed-up on the new device. 
[MMM+03]Since most modular exponentiation is accomplished through several mul-tiplications and the Montgomery algorithm for modular exponentiation lends itself to hardware implementation by computing everything modulus the word size,which is easily accomplished through shifts,the discussion of how to ma-nipulate the Montgomery algorithm tofit the hardware is beneficial to the use of Montgomery algorithm in the design of RSA Cryptographic processors.The mild success of implementation of a modular exponentiator using reconfigurable logic suggests that reconfigurable logic has little to offer for further endeavors in this area.9References[DM02]Alan Daly and William Marnane.Efficient architectures for im-plementing montgomery modular multiplication and rsa modu-lar exponentiation on reconfigurable logic.In Proceedings ofthe2002ACM/SIGDA Tenth International Symposium on Field-programmable Gate Arrays,pages40–49.ACM Press,2002. [MMM+03]C.McIvor,McLoone M.,J.V.McCanny,A.Daly,and W.Marnane.Fast montgomery modular multiplication and rsa cryptographicprocessor architectures.In37th Annual Asilomar Conference onSignals,Systems and Computers,November2003.10。

On the Origin of Cosmic Magnetic Fields


arXiv:0707.2783v2 [astro-ph] 19 Mar 2008

Table of contents
• Abstract (p. 1)
• 1. Introduction (p. 2)
• 2. Observed properties of galactic magnetic fields (p. 4)
• 3. Summary of our present understanding of cosmic magnetic field origins (p. 9)
• 4. Basic equations for magnetic field evolution (p. 12)
• 5. Cowling's theorem and Parker's dynamo (p. 15)
• 6. The alpha-Omega disc dynamo (p. 18)
• 7. The magnitudes of α and β in the interstellar medium (p. 23)
• 8. Ferrière's dynamo theory based on supernova and superbubble explosions (p. 26)
• 9. The validity of the vacuum boundary conditions (p. 34)
• 10. Arguments against a primordial origin (p. 37)
• 11. Seed fields (p. 41)
• 12. A protogalactic theory for magnetic field generation (p. 45)
• 13. Generation of small scale magnetic fields by turbulence (p. 48)
• 14. The saturation of the small scale magnetic fields (p. 54)
• 15. History of the evolution of a primordial magnetic field (p. 58)
• 16. Extragalactic magnetic fields (p. 66)
• 17. Summary and conclusions (p. 67)
• 18. Publications (p. 70)

On the Origin of Cosmic Magnetic Fields
Russell M. Kulsrud and Ellen G. Zweibel
Princeton University and the University of Wisconsin-Madison

Abstract

We review the extensive and controversial literature concerning how the cosmic magnetic fields pervading nearly all galaxies and clusters of galaxies actually got started. Some observational evidence supports a hypothesis that the field is already moderately strong at the beginning of the life of a galaxy and its disc. One argument involves the chemical abundance of the light elements Be and B, while a second one is based on the detection of strong magnetic fields in very young high-redshift galaxies.

Since this problem of initial amplification of cosmic magnetic fields involves important plasma problems, it is obvious that one must know the plasma in which the amplification occurs. Most of this review is devoted to this basic problem and, for this, it is necessary to devote ourselves to reviewing studies that take place in environments in which the plasma properties are most clearly understood. For this reason the authors have chosen to restrict themselves almost completely to studies of dynamo action in our Galaxy. It is true that one can get a much better idea of the grand scope of galactic fields in extragalactic systems. However, most mature galaxies share the same dilemma as ours of overcoming important plasma problems. Since the authors are both trained in plasma physics, they may be biased in pursuing this approach, but they feel this restriction is justified by the above argument. In addition, they feel they can produce a better review by staying close to that which they know best.

In addition they have chosen not to consider the saturation problem of the galactic magnetic field since, if the original dynamo amplification fails, the saturation question does not enter.

It is generally accepted that seed fields, whose strength is of order 10^-20 gauss, easily spring up in the era preceding galaxy formation.
Several mechanisms have been proposed to amplify these seed mag-neticfields to a coherent structure with the microgauss strengths of the currently observed galactic magneticfields.The standard and most popular mechanism is the alpha-Omega mean–field dynamo theory developed by a number of people in the2late sixties.This theory and its application to galactic magneticfields is discussed in considerable detail in this review.We point out certain difficulties with this theory that make it seem unlikely that this is the whole story.The main difficulty with this as the only such amplifica-tion mechanism,is rooted in the fact that,on galactic scales,flux is constant and is frozen in the interstellar medium.This implies that flux must be removed from the galactic discs,as is well recognized by the standard theory.For our Galaxy this turns out to be a major problem,since unless theflux and the interstellar mass are somehow separated,some inter-stellar mass must also be removed from the deep galactic gravitational well.This is very difficult.It is pointed out that unless thefield has a substantialfield strength,much larger than that of the seedfields, this separation can hardly happen.And of course,the alpha–Omega dynamo must start from the ultra weak seedfield.(It is our philos-ophy,expressed in this review,that if an origin theory is unable to create the magneticfield in our Galaxy it is essentially incomplete.) Thus,it is more reasonable for thefirst and largest amplification to occur before the Galaxy forms and the matter embedded in thefield is gravitationally trapped.Two such mechanisms are discussed for such a pregalactic origin;1)thefields are generated in the turbulence of the protogalaxy and2)thefields come from giant radio jets.Several arguments against a primordial origin are also discussed,as are ways around them.Our conclusion as to the most likely origin of cosmic magnetic fields is that they arefirst produced at moderatefield strengths by primordial mechanisms,and then changed and their strength increased to their present value and structure by a galactic disc dynamo.The primordial mechanisms have not yet been seriously developed as of yet,and this preliminary amplification of the magneticfields is still very open.If a convincing case can be made that these primordial mechanisms are necessary,more effort will of course be devoted to their study.31IntroductionIt is well established that the universe isfilled with magneticfields of very large scale and substantial strength.Thefields exist on all scales, in planets,stars,galaxies,and clusters of galaxies(Parker1979).But with respect to its origin,the magneticfield in stars and planets is secondary,while thefield in galaxies is primary.The situation for clus-ters of galaxies is not very clear(Carilli2002);their magnitude and structure being rather uncertain.Therefore,the best route to under-standing cosmicfields is through discovering their origin in galaxies, and in particular in our Galaxy.(Parker1979,Ruzmaikin et al,1988, Beck et al1996,Zweibel&Heiles1997,Kulsrud1999,Carilli&Taylor 2002,Widrow2002,Vallee2004).The idea embraced in this review is:that one has the clearest idea of what happens in our Galaxy.If one can not understand the origin problem here,then the cosmic origin theory of magneticfields has to be considered incomplete.It must be remarked that this choice of reviewing only the work on dynamos specifically in our Galaxy,is the choice of the authors and represents a certain bias on their part.Generally,reviews of galactic 
magneticfields discuss the magneticfields in a great variety of extragalactic systems.This in general is justified since,by examining the global shapes and properties offields in external galaxies one can form a much better picture of thesefields,than by restricting ourselves to thefield in our Galaxy,in which we see only its more local parts. Moreover,the display of these magneticfields has aesthetic beauty which alone should justify this approach.However,the authors feel that every one of these extragalactic fields represent a very difficult problem from a plasma physics point of view.If one wants to understand how all thesefield in ours and other galaxies got started from an extremely weak seedfield,one has tofirst deal withfields much weaker than those that can be observed.The problem that needs to be overcome is the problem offlux conservation, basically a plasma problem.Since the authors are trained plasma physicists,they need to know the most basic properties of the plasma in which this happened,so these is no better situation to examine than our own interstellar medium.Therefore,their choice makes it is possible to critically examine the basic plasma physics of galactic dynamos in this weak phase and here at home.In addition,they do not seriously consider the problem of the sat-4uration of the the interstellar magneticfield.In the opinion of the authors this problem is really secondary to the origin of thefield since if thefield cannot be amplified by the large amount required to reach its present value,the saturation problem does not enter.Although it is widely accepted that magneticfields were not pro-duced in the Big Bang,there seems little difficulty with the creation of seedfields in the universe,during the period subsequent to recom-bination,that is the creation offields with strengths of order10−20 gauss.There are a number of mechanisms that can operate during the period of galaxy formation and can generate suchfields.The Biermann battery(Biermann1950)is a simple such mechanism1.On the other hand,the currently observedfield strengths are of the order of10−6gauss.Thus,there is a long way to go betweenfields with these two strengths.Hence,the main problem with the origin of cos-mic magneticfields centers on how the strengths of cosmic magnetic fields were raised from weak values of10−20gauss to the currently observed microgauss strengths.In discussing this subject of magneticfield origins,we distinguish between(1)amplification offields that are already somewhat strong so that the amplification mechanisms can actually by aided by the magneticfields themselves,and(2)amplification of extremely weak fields,those whose strength is so weak that they can play no role in the amplification mechanisms.The latter are passively amplified by mechanisms that are totally unaffected by their presence.There is a second division of the problem of amplification that concerns the nature of the magneticfield itself.As we will see,it is relatively easy to increase the energy of magneticfields if one allows them to be very tangled,changing their direction on very small scales. 
It is much more difficult to produce coherent magneticfields that change their direction only on very large scales,as is the case for the magneticfield in our Galaxy.Since amplification of the magneticfield energy is relatively easy to understand,whether thefield is very weak or whether it is strong (Batchelor1950,Kazantsev1968,Kraichnan&Nagarajan1967,Kul-srud&Anderson1992,Boldyrev&Cattaneo2004),the real problem of concern for a theory of magneticfield origin is this coherency prob-lem,especially for very weakfields.Why are we interested in the generation and amplification of cos-micfields?There are several reasons.First,until we can be sure we understand this problem,we cannot be sure that we really under-stand the cosmological evolution of the universe.Second,the actual structure of the observed magneticfields is not very well determined because most of the measurements of the magneticfield use tech-niques,such as Faraday rotation,that average magneticfields over large distances,(Zweibel&Heiles1997,Heiles,1998).If we knew theoretically how the magneticfields were generated,this would give extra leverage to determining their local structure.Finally,many of the very mechanisms that produce magneticfields are astronomically interesting,and important in themselves.Why are galactic magneticfields astrophysically important?With-out their universal presence in the interstellar medium its astrophysi-cal properties would be very different.At the present time,magnetic fields play a crucial role in the way stars form(Spitzer1978).They also control the origin and confinement of cosmic rays,which in turn play important astrophysical roles.Further,magneticfields are an important ingredient in the equilibrium balance of the galactic disc.Why is the origin of magneticfields so difficult to understand? First,they are difficult to observe because they are so weak and far away.Second,to understand the physics of their origin one needs to understand astrophysical plasma physics(Kulsrud2005,Cowling 1953),fluid dynamics,and many otherfields of astrophysics.Plasma physics on galactic scales is still not a well developed subject and the details of how it works are hard to observe.More importantly, since the origin of cosmicfields occurs either over the entire life of the Galaxy,(Ruzmaikin et al1988)or possibly in a pregalactic age before galactic discs formed,it is very difficult to gain observational knowledge of the early generation mechanisms.The early stages in other galaxies are observed at large red shift where large distances make these observations difficult to make,(Kronberg,1994,Welter, Perry&Kronberg1984,Watson&Perry1991,Wolfe,Lanzetta,& Oren1992,Oren&Wolfe1995.)For these reasons definitive progress in uncovering magneticfield origins in the universe has been slow.A main goal of this review is to arrive at some sort of conclusion as to whether at the stage when the galactic disc formed the magnetic field was still extremely weak and the amplification occurred during the entire age of the disc,or whether the thefields already were sub-6stantial before the galactic disc started to form.If the former is the case,we will call the origin galactic and the dynamo generating it the galactic dynamo.In the latter case we call the origin pregalactic. 
(We avoid using primordial which suggest a much earlier origin then occurs,say,during the protogalactic era.)In the next two sections we briefly review the salient observations and present a historical introduction to galactic dynamo theory.The remainder of the review discusses current theories of magneticfield origin and concludes with a critical summary.2Observed properties of galactic mag-neticfieldsOur knowledge of galactic magneticfields rests on four observational pillars.Measurements of Faraday rotation and Zeeman splitting give the magneticfield perpendicular to the plane of the sky,integrated along the line of sight.These effects are direction sensitive,and con-tributions from oppositely directedfields tend to cancel each other. Observations of the polarized synchrotron continuum,and polarized emission and absorption by magnetically aligned dust grains,give the line–of–sight integrated magneticfield components in the plane of the sky.These diagnostics are sensitive to orientation,not direction,but 90◦swings in orientation also cancel.In addition to line–of–sight av-eraging,finite angular resolution of the telescope causes plane–of–sky averaging.According to these observations,the mean orientation of the mag-neticfield is parallel to the Galactic plane and nearly azimuthal;the deviation is consistent with assignment along the spiral arms(Heiles 1996).This orientation is consistent with the effects of induction in a system with strong differential rotation and a spiral density wave pattern(Roberts&Yuan1970).Assuming that the Galactic halo ro-tates more slowly than the disk,induction would act on a verticalfield so as to produce a reversal in the azimuthalfield across the Galactic plane(so-called dipole symmetry).The traditional view has been that thefield does not reverse across the plane(Beck et al1996),although some authors favor asymmetry(Han2003,Han et al1997,Han2001). This important question is still open.The Galacticfield within several kiloparsecs(kpc)of the Sun has both mean and random components(Rand&Kulkarni1989,Han et7al.2006,Beck2007),with the mean component being of order1.4-2µG and the rmsfield about5-6µG.The rms value for the randomfield is derived from the assumption that the randomfield is isotropic and from a measurement of its line–of–sight component.There is some evidence that thefluctuations are anisotropic,with more power paral-lel to the meanfield than perpendicular to it(Zweibel1996,Brown& Taylor2001).There is also evidence that the meanfield reverses with Galactocentric radius(so-called bisymmetric spiral structure),but the locations and frequency of reversal are quite uncertain(see Han& Wielebinski2002for a review,and Weisberg et al.2004,Han et al. 
2006,Brown et al2007for more recent studies).The discrepant re-sults of this important and difficult measurement reflect the high level of noise(fluctuations greater than the mean),uncertain distances to the background pulsars against which Faraday rotation is measured, uncertainty in the location of Galactic spiral arms(Vallee2005),and a possible systematic spatial variation in thefield structure which is unaccounted for in the models.There are some complications in the interpretation introduced by the spiral arms perturbing the direction of the magneticfield.Even if the unperturbedfield is toroidal,these spiral arm perturbations give the impression that the globalfield is aligned along the spiral arms and its lines of force are a spiral[Lin,Yuan and Shu,1969,Manchester, 1974(section iv page642)].How far back in time are galactic magneticfields detectable?Young galaxies and their environment have been probed through the absorp-tion lines found in quasar spectra whose redshift is different than that of the quasar.These absorption lines are impressed on the quasar light as it passes through clouds of gas,and these clouds are interpreted as young galaxies.Most of these systems are rather thin and are referred to as part of a Lyman alpha forest.However,some of these systems are much thicker and display both metallic lines(particularly lines of MgII)and very broad damped Lyman alpha lines.The latter are referred to as damped Lyman alpha systems.More important for our purpose,if the quasar emits polarized radio waves,the plane of po-larization of these waves would be Faraday rotated by a significant amount if these systems had coherent magneticfields(Perry1994).To determine this possibility Welter,Kronberg and Perry(1984) searched for Faraday rotation in a number of radio emitting quasars, and found a definite correlation between those which had a rotation measure and those which have metallic absorption lines.The ma-8jor difficulty with these observations is the correct subtraction of the Faraday rotation produced by the magneticfield of our Galaxy,which does not vary smoothly with angular position.Welter et.al.found five unambiguous cases.These results were reaffirmed by Watson& Perry(1991).Since the metallic absorption line systems had red shifts that were fairly large(of order two),these systems probably represent young galaxies in an early state of formation.This data was reana-lyzed by Wolfe and his collaborators(Oren and Wolfe,1995),but in a different manner.They restricted their analysis to61quasars with MgII absorption lines and separated out,as a subclass,11of these that also had damped Lyman alpha lines.They found that the incidence of Faraday rotation in the11damped systems was higher than that in the remaining50undamped systems with a99.8per cent confidence level.In this analysis they concluded that the errors introduced by the Faraday rotation in our Galaxy were larger than those assumed by Welter et al by a factor of three.Thus,they only found two cases in which they were certain that there was Faraday rotation.The detection of only two cases with definite intrinsic rotation measures seems to make a weak case for extragalacticfields in these damped systems.But these cases were those systems with the lowest red shift.For the other systems of larger redshifts z,any intrinsic Faraday rotation produced by them is diluted by a factor of(1+z)2. 
This is because the frequency of the radio waves passing through them is higher by the factor (1+z), and the amount of Faraday rotation decreases with the frequency squared. Thus, the other members of the damped class could very well have magnetic fields of the same strength as the lower redshift members without producing a detectable signal. This bolsters their correct subtraction of the galactic component of the Faraday rotation. Taking this into account, Oren and Wolfe conclude with 98 per cent confidence that all such systems have Faraday rotation and substantial magnetic fields.

Another important window on the history of the Galactic magnetic fields over cosmic time is provided by analyzing the chemical composition of the atmospheres of the oldest stars in the Galactic halo (Zweibel 2003). As a result of observations from the Hubble Space Telescope, the chemical abundances of these very early stars have now been analyzed. The light elements lithium, beryllium, and boron have been found in even the oldest stars, e.g. those with 10⁻³ times solar abundances. In addition, the amount of beryllium and boron in them increases with the iron abundance and in fact is directly proportional to it. Since the early stars are produced from the interstellar medium, their composition should reflect that of the interstellar medium (Primas et al. 1999, Duncan et al. 1998, Garcia-Lopez et al. 1998, 1999).

Now, it is known that no beryllium was produced in Big Bang nucleosynthesis, and that it is very difficult to make it in stars, since it quickly burns up. The leading theory is that it was made by cosmic ray nucleosynthesis (Meneguzzi, Audouze & Reeves 1971, Reeves 1994, 2007). The situation for boron is ambiguous, because this element can also be produced by neutrinos during Type II supernova explosions; see Ramaty et al. 1997. We will still include boron in the arguments for cosmic rays and magnetic fields early in the history of the Galaxy, but this caveat should be kept in mind.

The linearity of the Be and B abundances with iron is explained by assuming their creation is by spallation of the low energy (tens of MeV) carbon and oxygen cosmic rays. If it were due to the complementary process, spallation breakup of interstellar carbon and oxygen nuclei, one must take into account that the latter themselves are produced by stellar nucleosynthesis and supernovae, and their abundance should increase proportionally to the amount of iron produced in supernovae. Thus, their abundance should increase with that of iron in the interstellar medium, and the abundance of these light elements should be quadratic with iron abundance, or in time.

However, to get linearity in time with spallation of carbon and oxygen cosmic rays, one needs to assume that the composition of cosmic rays must not change with time. This would seem to be a stumbling block, since cosmic rays are assumed to be accelerated by shocks from interstellar-medium nuclei. Therefore, they would be expected to also reflect the changing abundances in the interstellar medium and would be expected to increase their abundance in carbon and oxygen with time. Ramaty and others (Ramaty, Lingenfelter & Kozlovsky 2000, Ramaty, Scully & Lingenfelter 2000a, Lingenfelter, Higdon & Ramaty 2000b, Ramaty, Tatischeff et al. 2000) argue that the acceleration of cosmic rays occurs mainly inside superbubbles, and that the material inside these superbubbles is the material that has just emerged from the supernova generating the superbubble. This freshly produced matter has not been diluted with the rest of the interstellar medium in the superbubble region where cosmic ray acceleration occurs. Thus, the relative abundance of different cosmic ray nuclei should be constant in time and determined by the undiluted material emerging from supernovae. On the other hand, iron in the interstellar material from which stars emerge has been diluted, since it was produced in supernovae. Therefore, its abundance relative to its solar abundance increases with time at a constant rate determined by the rate of supernova explosions.

This constancy of the chemical abundance of cosmic rays has been supported by detailed numerical simulations of Ramaty and others, and does lead to an explanation of the observations that the abundance of beryllium and boron is directly proportional to the abundance of iron in old stars (Garcia-Lopez et al. 1998, 1999). Now, to explain the numerical value of this ratio, the cosmic ray intensity at tens to hundreds of MeV per nucleon in the early Galaxy had to be the same then as now. Zweibel (2003) has shown that magnetic fields several orders of magnitude weaker than now suffice for cosmic ray acceleration and diffusive propagation at these energies. On the other hand, if the field is very weak, such a cosmic ray intensity might not be confined. This is because, if most of the mass in the interstellar medium was in the form of discrete, cold clouds, as appears to be the case today, then a high cosmic ray pressure between the clouds, which must be confined by magnetic tension, would inflate the field lines to infinity (Parker 1979). This can be quantified by a simple two-dimensional model. Let the cloud distribution be two-dimensional and have a scale height H, and let the clouds be on planes separated by a mean distance ℓ. Let the cosmic ray pressure be proportional to the magnetic field strength squared by a factor β/α. Then the model shows that the lines bow out above their average height in the clouds by a factor (β/α + 1)ℓ/H. Thus, if the field is very weak, β/α is very large and there is no solution, implying the field lines would bow out to infinity. Although the amount of cold interstellar material in the early Galaxy was doubtless lower than today because of the lower metallicity, thereby somewhat easing the cosmic ray confinement problem, we are faced with a situation in which primeval galaxies already must have had substantial magnetic fields, at a stage too early to have been produced by a conventional dynamo.

Finally, the problem of the abundance of the Li6 isotope, which is not at all linear with that of iron, is entirely up in the air (Reeves 2007, Ramaty, Tatischeff et al. 2000). Li6 also is not produced by Big Bang nucleosynthesis and, because it depletes so easily, can only be produced with great difficulty in stars. Thus, as long as we do not understand the process that produces lithium-6, we cannot be sure that this process does not in some way produce beryllium and boron as well, without the aid of cosmic rays. Until the lithium problem is resolved, we cannot be certain that the beryllium-boron argument proves that the origin of the Galactic field is pregalactic. It is interesting that boron has been observed in other galaxies at large redshifts (Prochaska et al. 2003). Thus, if the ideas concerning the origin of boron hold up, this provides even stronger evidence that magnetic fields are already present at the formation time of galactic discs and their origin is pregalactic.

3 Summary of our present understanding of cosmic magnetic field origins

Origin theories divide into two further parts: (1) the origin in small bodies such as planets and (2) the origin in larger bodies such as galaxies. This division occurs because, for a large body such as a galaxy, the resistive decay lifetime of its magnetic fields is much longer than the age of the universe. For these bodies a so-called fast dynamo is required. On the other hand, in small bodies the decay lifetime is much shorter and the required dynamo is called a slow dynamo.

For the Earth, the problem of where the field came from, and how it is sustained against resistive decay, is easier than the same problem for fields in large bodies (Parker 1979, Spitzer 1978). If a magnetic field starts to decay, the inductance of the body in which it resides produces a back voltage (or E.M.F.) that keeps its currents flowing against its resistance. Thus, one can roughly say that the lifetime of the magnetic field against decay (if no other mechanisms are present) is its total inductance divided by its total resistance. Inductance is proportional to the size of a body, L, while resistance is inversely proportional to L, so that the lifetime is proportional to L². Thus, the Earth has a relatively short magnetic decay time of order a few tens of thousands of years, while that of the Galactic disc is many orders of magnitude longer than the age of the universe. Thus, dynamo mechanisms aside, the Earth's field would decay away in a time very short compared to its age, and therefore there must be a mechanism to sustain it, just as to sustain a current in a laboratory circuit one needs a battery or dynamo (Parker 1955, Elsasser 1946, 1950).

On the other hand, while one need not worry about the galactic field decaying because of its enormous inductance (Fermi 1949), one has to worry about how to get the currents started in the interstellar medium to produce the galactic field: as the currents rise, the back voltage is so large that a very strong generator is needed to balance it. This turns out to be the essence of the problem behind the origin of galactic fields (Hoyle 1958).

This discussion of magnetic field generation and decay is not quite correct. It treats the bodies as being static and at rest, while in both cases the bodies are either fluid or gaseous with motions generated by their dynamics. When a fluid moves across a magnetic field, an electric field E = −v × B/c exists in the frame in which the velocity is measured, and this electric field results in the dynamo action that is needed to balance the magnetic field against resistive decay in the case of the Earth, and to balance inductance in the case of the Galaxy. The problem is to find a reasonable fluid velocity that would properly balance the inductive and resistive effects that must occur during the evolution of the field (its decay or growth). In 1955 Parker was able to find such velocities and to propose a model, containing non-axisymmetric motions, that explained how the Earth's field could be sustained against decay (Parker 1955). To do this, he built on the work of many others (Elsasser 1946, 1950).
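The resistive time scale that such motions must overcome can be made slightly more concrete. The estimate below is an illustrative order-of-magnitude calculation, not one quoted in the text: it uses the standard resistive (magnetic diffusion) time for a conductor of size L and conductivity σ, with rough round numbers for the Earth's core.

```latex
% Illustrative order-of-magnitude estimate; the formula and the numbers
% are assumptions made for this example, not values from the text.
\[
  \tau_{\mathrm{decay}} \;\sim\; \mu_0\,\sigma\,L^{2}
  \qquad \text{(resistive diffusion time, up to a geometric factor)},
\]
\[
  \tau_{\oplus} \;\sim\; (4\pi\times10^{-7})\,
     (3\times10^{5}\ \mathrm{S\,m^{-1}})\,
     (3\times10^{6}\ \mathrm{m})^{2}
  \;\approx\; 3\times10^{12}\ \mathrm{s}
  \;\approx\; 10^{5}\ \mathrm{yr}.
\]
```

Because τ scales as L², replacing the core radius by a galactic-disc scale of order a kiloparsec (about 3 × 10¹⁹ m) multiplies this by roughly 10²⁶, which is what puts the decay time of a galactic field far beyond the age of the universe, whatever reasonable conductivity is assumed.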
Parker found that a dynamo should exist in the liquid core of the Earth, and the fluid motions producing this dynamo action are a combination of differential rotation of the core and a multitude of rising and falling convective cells that are twisted by the Coriolis force of the Earth's rotation. His solution was gradually improved in the next decade (Backus 1958), until finally, in 1966, Steenbeck, Krause, and Rädler (1966) developed a refined theory for dynamos, the mean-field theory, which consists of such turbulent motions.

Once this theory was accepted as correct, it was applied to the problem of the origin of the Galactic magnetic field. Parker in the United States (Parker 1971a) and Vainshtein and Ruzmaikin in Russia (Vainshtein & Ruzmaikin 1972) showed in the early seventies that motions similar to the terrestrial motions exist in the Galaxy, and that they could overcome the inductance problem and generate the Galactic field.


D R A F T LECTURE NOTES Discrete Mathematics Maintainer:Anthony A.Aaby An Open Source Text DRAFT Version:α0.1Last Edited:August 18,2004Copyright c 2004by Anthony AabyWalla Walla College204S.College Ave.College Place,WA99324E-mail:aabyan@Original Author:Anthony AabyThis work is licensed under the Creative Commons Attribution License.To view a copy of this license,visit /licenses/by/2.0/or send a letter to Creative Commons,559Nathan Abbott Way,Stanford,California 94305,USA.This book is distributed in the hope it will be useful,but without any warranty; without even the implied warranty of merchantability orfitness for a particular purpose.No explicit permission is required from the author for reproduction of this book in any medium,physical or electronic.The author solicits collaboration with others on the elaboration and extension of the material in this ers are invited to suggest and contribute material for inclusion in future versions provided it is offered under compatible copyright provisions.The most current version of this text and L A T E Xsource is available at /~aabyan/Math/index.html.Dedication:WWC Computer Science Students4Contents1Sets,Relations,and Functions91.1Informal Set Theory (9)1.2Relations (12)1.3Functions (13)1.4References (14)2Paradox152.1Antinomies of intuitive set theory (15)2.2Zeno’s paradoxes (17)3The Basics of Counting193.1Counting Arguments (19)3.2The Pigeonhole Principle (19)3.3Permutations and combinations (20)3.4Solving recurence relations (20)3.5References (21)4Numbers and the Loss of Innocence234.1How many kinds of numbers? (23)4.2Pythagoras,the Pythagoreans,&Pure Mathematics (24)4.3How big is big? (25)4.4Why we will never catch up (26)5The Infinite295.1The infinite (29)6Cardinality and Countability3156.1The Cardinal Numbers (31)6.2Countability (31)7The Indescribable357.1Is the universe indescribable? 
(35)7.2Formal Languages (36)7.3Grammars (37)8Proof Methods in Logic398.1Preliminaries (39)8.2The Axiomatic Method (41)8.2.1Classical logic (41)8.2.2Hilbert’s Axiomatization (43)8.3Hilbert Style Proofs (44)8.4Natural Deduction (45)8.5The Analytic Properties (47)8.6The Method of Analytic Tableaux (49)8.7Sequent Systems(Gentzen) (52)9Many-sorted Algebra579.1Historical Perspectives and Further Reading (60)9.2Exercises (60)10Graphs and Trees6310.1Vertex,vertices (63)10.2Graphs (64)10.2.1Paths and cycles (64)10.3Trees and Forests (66)10.4Traversal strategies (67)11Discrete Probability6911.1Definition of probability (69)11.2Complementary events (70)11.3Conditional probability (70)11.4Independent events (70)11.5Bayes’theorem (70)611.6Random variables (71)11.7Expectation (71)Bibliography7378List of Figures7.1Alphabet and Language (37)8.1Formulas of Logic (42)8.2Natural Deduction Inference Rules (46)8.3Analytic Subformula Classification (48)8.4Block Tableau Construction (50)8.5Block Tableau Inference Rules (50)8.6Tableau for¬[(p∨q)→(p∧q)] (51)8.7Tableau for∀x.[P(x)→Q(x)]→[∀x.P(x)→∀x.Q(x)] (52)8.8Analytic Sequent Inference Rules (53)9.1Algebraic Definition of Peano Arithmetic (58)9.2Algebraic definition of an Integer Stack (59)9Chapter1Sets,Relations,and Functions1.1Informal Set Theory(Set theory is)Thefinest product of mathematical genius and oneof the supreme achievements of purely intellectual human activity.-David HilbertBag A bag is an unordered collection of elements.It is also called a multiset and may include duplicates.Set A set is an unordered collection of distinct elements selected from a domain of discourse or the universe of values,U.S,X,A,B,...are sets.a,b,x, y,...are elements.Set definition/description Sets may be defined by extension:specification by explicit listing of members A={x0,...,x n}.A set of one element is called a singleton set.Sets may be defined by intension/comprehension: specification by a membership condition or rule for inclusion in the set.A={x|P(x)}={x:P(x)}.P(x)is usually a logical expression.As no restrictions are placed on the condition or rule this method is called the unrestricted principle of comprehension or abstraction.Some Sets∅={x|x=x}The empty set is the set that has no elements.U The set of all the elements in the universe of discourse.N={0,1,2,...}The set of natural numbers.Z={...−1,0,1,...}The set of integers.11CHAPTER1.SETS,RELATIONS,AND FUNCTIONSA=∼A={x|x/∈A}=U\A The complement of a set A is a set consisting of those elements not found in the set A.Cross product-A×B={(a,b)|a∈A and b∈B}The cross product of a pair of sets is the set of ordered pairs of elements from each set.Note: sets are unordered while pairs are ordered thus{a,b}={b,a}while the tuple(a,b)=(b,a).Subset:A⊆B if x∈A then x∈B A set A is a subset of a set B if every element of A is an element of B.Proper subset-A⊂B={x|if x∈A then x∈B and A=B}A subset A of a set B is a proper subset if the sets are not equal.DRAFT COPY August18,200412RMAL SET THEORYCHAPTER1.SETS,RELATIONS,AND FUNCTIONS1.3.FUNCTIONSCHAPTER1.SETS,RELATIONS,AND FUNCTIONSChapter2ParadoxAntinomy a contradiction between two apparently equally valid principles or between inferences correctly drawn from such principlesParadox a self-contradictory statement that atfirst seems true.2.1Antinomies of intuitive set theoryThe paradoxes in intuitive set theory are actually antinomies and are the result of the use of the unrestricted principle of comprehension/abstraction(defining a set A={x:P(x)}where no restriction is placed on 
P(x).The most famous is Russell’s paradox.Russell(1901)and Zermelo:Let A={x|x/∈x}.Is A∈A?Boththe assumption that A is a member of A and A is not a member ofA lead to a contradiction(If R={x|x/∈x}then R∈R iffR/∈R).Two popular forms of this paradox are:•Is there is a bibliography that lists all bibliographies that don’tlist themselves.•In a village,there is a barber(a man)who shaves all those menwho do not shave themselves.Who shaves the barber?Logical antinomies•Burali-Forti(1897):Is there a set of all ordinal numbers?May have been discovered by Cantor in1885.•Cantor(1899):If there is a set of all sets,its cardinality must be the greatest possible cardinal yet the cardinality of the power set of the set17CHAPTER2.PARADOX2.2.ZENO’S PARADOXESCHAPTER2.PARADOXChapter3The Basics of Counting3.1Counting ArgumentsA set A isfinite if there is some n∈N such that there is a bijection from the set {0,1,2,...,n−1}to the set A.The number n is called the cardinality of A and we say“A has n elements,”or“n is the cardinal number of A.”The cardinality of A is denoted by|A|.3.2The Pigeonhole PrincipleThe fundamental rule of counting:The Pigeonhole Principle.The following are equivalent1.If m pigeons are put into m pigeonholes,there is an empty hole iffthere’sa hole with more than one pigeon.2.If n>m pigeons are put into m pigeonholes,there’s a hole with morethan one pigeon.3.Let|A|denote the number of elements in afinite set A.For twofinite setsA and B,there exists a1-1correspondence f:A→B iff|A|=|B|. Theorem3.1N can be placed in1-1correspondence with any infinite subset of itself.Proof:by the natural ordering of the elements of N.Theorem3.2|I|=|N|Proof:evens count positives and odds count negatives.Theorem3.3|N|=|Q|Place Q in tabular form and count along successive diagonals.21CHAPTER3.THE BASICS OF COUNTING(n−r)!Theorem3.9Let r,n∈N and r≤n.The number of combinations of n things taken r at a time is nr =n!3.5.REFERENCESCHAPTER3.THE BASICS OF COUNTINGChapter4Numbers and the Loss of InnocenceThere are several ideas that are introduced in this section that are covered in more detail in later sections:1.Relationship between language an new ideas.2.Emergence of pure mathematics.3.Consequences offixed world views.4.Big numbers as prelude to a discussion of infinity5.Big numbers as a prelude to a discussion of the limits of computation. 
4.1How many kinds of numbers?In our examination of informal set theory,we saw an example of how informal language could lead to paradox.The solution offered was more precision and careful use of the nguage may also lead to new ideas as in the following example.In this example,our language is that of equations.Different types of numbers are required as solutions for slightly different forms of the equation.•Natural Numbers are solutions to equations of the form:x+a=b where a≤b,e.g.,x+3=7•Negative numbers are solutions to equations of the form:x+a=b wherea>b,e.g.,x+5=325CHAPTER4.NUMBERS AND THE LOSS OF INNOCENCE2is the length of the diagonal of a unit square,andπis the ratio of the length of the diameter of a circle to the length of its circumference.•Imaginary numbers are solutions to equations of the form;x2=−1All these numbers are solutions to polynomial equations with integer coefficients and are collectively called algebraic numbers.Numbers which are not algebraic are called transcendental numbers.Among the transcendental numbers are pi and e.4.2Pythagoras,the Pythagoreans,&Pure Math-ematicsPythagoras and the PythagoreansThe material here has been stolen from else where.Pure mathematicsPure mathematics-numbers detached from reality-the irrationals,and non-constructive(indirect)proofs.Zeno of EleaGreek philosopher,born at Elea,about490B.C.At his birthplace Xenophanes and Parmenides had established the metaphysical school of philosophy known as the Eleatic School.The chief doctrine of the school was the oneness and immutability of reality and the distrust of sense-knowledge which appears to testify to the existence of multiplicity and change.Zeno’s contribution to the literature of the school consisted of a treatise,now lost,in which,according to Plato,he argued indirectly against the reality of motion and the existence of the manifold.There were,it seems,several discourses,in each of which he DRAFT COPY August18,2004264.3.HOW BIG IS BIG?CHAPTER4.NUMBERS AND THE LOSS OF INNOCENCE4.4.WHY WE WILL NEVER CATCH UP.Running Time Func-tion Example:n=256(instructions)1microsec/instruction 1×10−6sec/instructionConstant time O(1)check the timea log log n+b0.000003secLog N time O(log n)an+b0.0025secN Log N time O(n log n)an2+bn+c0.065secPolynomial time O(n k)matrix multiplication ak n+... 3.67x1061centuries(k=2) Still computable Ackermann’s function Computable functionsNP-non-deterministic polyno-mial complexityP-deterministic polynomialcomplexity(tractable prob-lems)29DRAFT COPY August18,2004CHAPTER4.NUMBERS AND THE LOSS OF INNOCENCEChapter5The Infinite5.1The infiniteIts infinite all the way up and down!•Small numbers:the infinitesimals,non-standard arithmetic and non-standard analysis•<ahref="../Math/Cardinality.html">Counting and the definition ofinfinity;cardinality of N,I,Q,&R and transfinite arithmetic-Georg Can-tor•[0,1]and[−∞,+∞],R and R n•℘(N)and infinite binary strings•When will it ever end?-Hierarchy of infinitiesℵ0,ℵ1,...•Paradoxs–One way infinite,bounded and infinite,unbounded and infinite–<ahref="Paradox.html">Zeno’s paradox and infinite algorithms–Gabriel’s horn–The Axiom of Choice–Banach-Tarski paradox–Spacefilling curves-fractals in general31CHAPTER5.THE INFINITEChapter6Cardinality and Countability6.1The Cardinal NumbersIf a set A isfinite,there is a nonnegative integer,denoted#A or|A|,which is the number of elements in A.That number is one of thefinite cardinal numbers. 
To do arithmetic with cardinal numbers,you use facts aboutfinite sets and the number of elements in them,such as the following:•If A and B can be put into one-to-one correspondence,then#A=#B, and conversely.•If A is contained in B,then|A|≤|B|.•If A is disjoint from B and C is their union,then|C|=|A|+|B|.•If A and B are sets,and C=A×B is the set of all ordered pairs of elements,thefirst from A and the second from B,then|C|=|A|×|B|.•If C is the set of all subsets of A,i.e.,C=℘(A),then|C|=2|A|.6.2CountabilityIf the set is infinite,the corresponding cardinal number is not one of thefinite cardinal numbers,so it is called a transfinite(or infinite)cardinal number.The smallest infinite cardinal number isℵ0=|{0,1,2,...}|.Sets having this cardinal number are called countably infinite sets,or just countable sets,because they can be put into one-to-one correspondence with the positive integers,or counting numbers33CHAPTER6.CARDINALITY AND COUNTABILITY6.2.COUNTABILITY02i∞b...where each of the b’s are0or1.The1’s indicate that the corre-sponding natural number is in the set.The binary sequence of allzeros corresponds to the empty set.The binary sequence of all onescorresponds to N.Theorem6.7|A|<|℘(A)|Proof:Since A⊂℘(A)(every element of A is in℘(A)),|A|≤|℘(A)|.Assume that there is a1to1correspondence f between A and℘(A).Let B={x|x∈A,x∈f(x)}-x is not a member of the set towhich it corresponds.Let y∈A be such that f(y)=B.If y∈Bthen by the definition of B,y∈B.If y∈B then by the definitionof B,y∈B.A contradiction∴Therefore,|A|=|℘(A)|and we haveestablished the theorem.Theorem6.8|N|<|℘(N)|.Examples:There are infinite sequences such as•1/3=0.33333...•1/1+1/2+...+1/n+...•All x P(x)=P x0∧P x1∧...Arithmetic of the infinite cardinals•ℵ0+n=n+ℵ0=ℵ0•ℵ0+ℵ0=ℵ0•ℵ0∗n=n∗ℵ0=ℵ0(n>0)•ℵ0∗ℵ0=ℵ0•ℵn0=ℵ0(n>0Subtraction and division are not definable operations in this arithmetic.The Associative Laws of Addition and Multiplication hold,and the Commutative 35DRAFT COPY August18,2004CHAPTER6.CARDINALITY AND COUNTABILITYChapter7The IndescribableLanguage and structures•How big is the English Language?-Alphabet=Σ,Words⊂Σ*,Sentences ⊂Σ∗∗,Texts⊂Σ∗∗∗–|Σ|=n,|Σ∗|=ℵ0,Σ*=Σ**=Σ***•How big is the universe?Constants=Σ,Strings=Σ∗,Relations=2|Σ∗|•Theorem:An infinite universe is not completely describable.Proof:Fora givenfinite alphabet(Σ,|Sigma|=n),there are at most,countablyinfinite many descriptions(|Σ∗|=ℵ0).For a given infinite set of constants (N,|N|=ℵ0),there are uncountably many relations(|℘(N)=ℵ1≤|℘∪i∈N N i)|.Therefore,some relation in N cannot be described,ℵ0<ℵ1.Q.E.D.7.1Is the universe indescribable?How big is the English Language?-Alphabet=Σ;,Words⊂Σ∗,Sentences ⊂Σ∗∗,Texts⊂Σ∗∗∗•|Σ|=n,•|Σ∗|=ℵ0,•|Σ∗|=|Σ∗∗|=|Σ∗∗∗|How big is the universe?Is itfinite or infinite?Can it be described in terms of a possibly infinite set of constants(=Σ)and a set of relations(=℘(Σ∗))on those constants?37CHAPTER7.THE INDESCRIBABLE7.3.GRAMMARS Σan alphabet.Σis a nonempty,finite set of symbols.Λhe empty string.Λis a string with no symbols at all.L a language L over an alphabetΣis a collection of strings of ele-ments ofΣΣ∗The set of all possiblefinite strings of elements ofΣis denotedbyΣ∗.Λis an element ofΣ∗and L is a subset ofΣ∗.Figure7.1:Alphabet and LanguageCHAPTER7.THE INDESCRIBABLEChapter8Proof Methods in Logic8.1PreliminariesLetΣbe a set of symbols andΣ*be the set of all strings offinite length composed of symbols inΣincluding the empty string.A language L is a subset ofΣ*.Alternately,let G= Σ,P,S be a grammar whereΣis a set of symbols, P a set of grammar 
rules,and S the symbol for sentences in the language.The notation L(G)designates the language defined by the grammar G.The set of strings in L/L(G)are called sentences or formulas.Three sets of formulas are distinguished,axioms(A),theorems(T),and formu-las(F).In monotonic logic systems the relationship among them is:A⊂T⊂F=L⊂Σ*If the set of theorems is the same as the set of formulas(T=F),then the system is of little interest and in logic is said to be contradictory.Inference rules I are functions from sets of formulas to formulas(I:℘(L)→L for each I∈I).The set of theorems are constructed from the set of axioms by the application of rules of inference.A proof is a sequence of statements,each of which is an axiom,a previously proved theorem,or is derived from previous statements in the sequence by means of a rule of inference.The notation U⊢T is used to indicate that there is a proof of T from the set of formulas U.The task of determining whether or not some arbitrary formula A is a member of the set of theorems is called theorem proving.There are several styles of proofs.The semi-formal style of proof common in mathematics papers and texts is a paragraph style.Formal proofs are presented in several formats.The following are the most common.•Hilbert style proofs•Natural Deduction41CHAPTER8.PROOF METHODS IN LOGIC8.2.THE AXIOMATIC METHODCHAPTER8.PROOF METHODS IN LOGICThe set of atomic formulas,P,is defined byP={P i j t k...t k+i−1|t l∈C,i,j,k,l∈N}with f∈P where C={F i j t k...t k+i−1|t k∈C,i,j,k∈N}is a set of terms,{P0j|j∈N}is a set of propositional constants,and{F0j|j∈N}is a set of individual constants.The set of formulas,F,is defined byF::=P|→FF|2F|∀x.[F]x twhere V={x i|i∈N}is a set of individual variables,t∈C,x∈V, and textual substitution,[F]t x,is a part of the meta language and designates the formula that results from replacing each occurrence of t with x.Additional operators and infix notation:(A→B)≡→AB¬A≡(A→f)(A∨B)≡(¬A→B)(A∧B)≡¬(A→¬B)f≡(A∧¬A)(A↔B)≡((A→B)∧(B→A))3A≡¬2¬A∃x.A≡¬∀x.¬AFigure8.1:Formulas of Logic8.2.THE AXIOMATIC METHODCHAPTER8.PROOF METHODS IN LOGIC8.4.NATURAL DEDUCTION Hilbert Style Proof FormatQ By Modus Ponens explanation explanationA→B By Contrapositive AssumptionexplanationBut A holds because explanationP∧Q→R By Deduction Assumption Assumption explanationP By Contradiction Assumption explanationP By Contradiction Assumption explanationR By Case Analysis explanation explanation explanationP↔R By Mutual implication explanation explanation∀n.P By Inductionexplanation(Base step) Assumption(Induction hypothesis) explanation(Induction step)8.4Natural DeductionNatural deduction was invented independently by S.Jaskowski in1934and G.Gentzen in1935.It is an approach to proof using rules that are designed to mirror human patterns of reasoning.There are no logical axioms,only inference rules.For each logical connective,there are two kinds of inference rules,an introduction rule and an elimination rule.•Each introduction rule answers the question,under what conditions can the connective be introduced.•Each elimination rule answers the question,underheat conditions can the connective be eliminated.The natural deduction rules of inference are listed in Figure8.2.47DRAFT COPY August18,2004CHAPTER8.PROOF METHODS IN LOGICIntroductionRules¬¬A AA,B A∧B∨A∨B AA⊢B A,A→B∀x.∀x.P(x)[P(x)]c x for any c∈CP(c)∃x.P(x)8.5.THE ANALYTIC PROPERTIESCHAPTER8.PROOF METHODS IN LOGIC。
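The proof systems surveyed in this chapter (Hilbert-style proofs, natural deduction, tableaux, sequents) are syntactic. For the propositional fragment, validity can also be checked semantically by brute-force truth tables; the sketch below is only an illustration of that semantic check, with an invented tuple encoding of formulas, and is not taken from the notes themselves.

```python
from itertools import product

# Formulas are nested tuples: ("var", "p"), ("not", f), ("and", f, g),
# ("or", f, g), ("imp", f, g).  This encoding is an illustrative choice.

def atoms(f):
    """Collect the propositional variables occurring in a formula."""
    if f[0] == "var":
        return {f[1]}
    return set().union(*(atoms(g) for g in f[1:]))

def evaluate(f, env):
    """Evaluate a formula under a truth assignment env: var -> bool."""
    op = f[0]
    if op == "var":
        return env[f[1]]
    if op == "not":
        return not evaluate(f[1], env)
    if op == "and":
        return evaluate(f[1], env) and evaluate(f[2], env)
    if op == "or":
        return evaluate(f[1], env) or evaluate(f[2], env)
    if op == "imp":
        return (not evaluate(f[1], env)) or evaluate(f[2], env)
    raise ValueError(f"unknown connective {op!r}")

def is_tautology(f):
    """True iff f evaluates to True under every truth assignment."""
    vs = sorted(atoms(f))
    return all(evaluate(f, dict(zip(vs, row)))
               for row in product([False, True], repeat=len(vs)))

# The formula from the tableau example, ¬[(p ∨ q) → (p ∧ q)], is not valid:
p, q = ("var", "p"), ("var", "q")
print(is_tautology(("not", ("imp", ("or", p, q), ("and", p, q)))))  # False
print(is_tautology(("imp", ("and", p, q), ("or", p, q))))           # True
```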

Distance Transform of Sampled Function: An Interpretation

Introduction

The distance transform of a sampled function is a fundamental concept in digital image processing and computer vision. It serves as a powerful tool for various applications such as object recognition, image segmentation, and shape analysis. In this article, we will delve into the intricacies of the distance transform of a sampled function, its key properties, and its significance in computer science.

Definition and Basic Principles

The distance transform is an operation that assigns a distance value to each pixel in an image, based on its proximity to a specific target object or region. It quantifies the distance between each pixel and the nearest boundary of the object, providing valuable geometric information about the image.

To compute the distance transform, first, a binary image is created, where the target object or region is represented by foreground pixels (usually white) and the background is represented by background pixels (usually black). This binary image serves as the input for the distance transform algorithm.

Distance Transform Algorithms

Several distance transform algorithms have been developed over the years. One of the most widely used algorithms is the chamfer distance transform, also known as the 3-4-5 algorithm. This algorithm assigns a distance value to each foreground pixel by considering the neighboring pixels and their corresponding distances. Other popular algorithms include the Euclidean distance transform, the Manhattan distance transform, and the Voronoi distance transform.

Properties of the Distance Transform

The distance transform possesses a set of important properties that make it a versatile tool for image analysis. These properties include:

1. Distance Metric Preservation: The distance values assigned to the pixels accurately represent their geometric proximity to the boundary of the target object.
2. Locality: The distance transform efficiently encodes local shape information. It provides a detailed description of the object's boundary and captures fine-grained details.
3. Invariance to Object Shape: The distance transform is independent of the object's shape, making it robust to variations in object size, rotation, and orientation.

Applications of the Distance Transform

The distance transform finds numerous applications across various domains. Some notable applications include:

1. Image Segmentation: The distance transform can be used in conjunction with segmentation algorithms to accurately delineate objects in an image. It helps in distinguishing objects from the background and separating overlapping objects.
2. Skeletonization: By considering the foreground pixels with a distance value of 1, the distance transform can be used to extract the object's skeleton. The skeleton represents the object's medial axis, aiding in shape analysis and recognition.
3. Path Planning: The distance transform can assist in path planning algorithms by providing a distance map that guides the navigation of robots or autonomous vehicles. It helps in finding the shortest path between two points while avoiding obstacles.

Conclusion

The distance transform of a sampled function plays a vital role in digital image processing and computer vision. Its ability to capture geometric information, preserve distance metrics, and provide valuable insights into the spatial structure of objects makes it indispensable in various applications. The proper understanding and utilization of the distance transform contribute to the advancement of image analysis techniques, enabling more accurate and efficient solutions in computer science.
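A minimal sketch may make the two-pass idea behind the chamfer-style algorithms mentioned above concrete. The code below is written for this article as an illustration, not taken from any particular library; it computes the simpler city-block (Manhattan) distance map with one forward and one backward raster scan, and the function and variable names are invented for the example.

```python
import numpy as np

def manhattan_distance_transform(binary):
    """Two-pass city-block distance transform of a 0/1 image.

    `binary` is a 2-D array in which foreground pixels are 1 (the object)
    and background pixels are 0.  The result holds, for every pixel, the
    city-block distance to the nearest background pixel.
    """
    h, w = binary.shape
    inf = h + w  # larger than any possible distance in the image
    dist = np.where(binary > 0, inf, 0).astype(int)

    # Forward pass: propagate distances from the top and left neighbours.
    for y in range(h):
        for x in range(w):
            if y > 0:
                dist[y, x] = min(dist[y, x], dist[y - 1, x] + 1)
            if x > 0:
                dist[y, x] = min(dist[y, x], dist[y, x - 1] + 1)

    # Backward pass: propagate distances from the bottom and right neighbours.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                dist[y, x] = min(dist[y, x], dist[y + 1, x] + 1)
            if x < w - 1:
                dist[y, x] = min(dist[y, x], dist[y, x + 1] + 1)
    return dist

img = np.zeros((7, 7), dtype=int)
img[2:5, 2:5] = 1           # a 3x3 foreground square
print(manhattan_distance_transform(img))
```

Replacing the +1 increments with the 3/4 weights of a chamfer mask (and including the diagonal neighbours in each pass) turns the same two-pass structure into the chamfer approximation of the Euclidean distance transform.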

Vol. 30, No. 20, Computer Engineering (计算机工程), October 2004 · Artificial Intelligence and Recognition Technology · Article ID: 1000-3428(2004)20-0136-02; Document code: A; CLC number: TP391.41

A multiscale morphological pyramid image denoising algorithm
REN Huorong, YANG Wanhai, WANG Jiali (School of Mechanical and Electrical Engineering, Xidian University, Xi'an 710071)

Abstract: A multiscale morphological pyramid denoising algorithm for scanned images is proposed. The image is decomposed with a multiscale morphological pyramid; isolated noise points in the first-level detail signal are removed with a morphological gradient operator and the HMT transform; the other detail signals are filtered by morphological opening and closing so as to preserve the structural features of the image; finally, the image is reconstructed from the processed detail signals.

Simulation results show that, compared with median filtering, the algorithm removes the noise in scanned images more effectively while preserving more of the detail information.

Keywords: mathematical morphology; morphological pyramid; denoising; scanned image

A Method of Image Denoising Based on Multiresolution Morphological Pyramid
REN Huorong, YANG Wanhai, WANG Jiali (School of Mechanical and Electrical Engineering, Xidian University, Xi'an 710071)
[Abstract] A method of scanned image denoising based on a mathematical morphological pyramid is proposed. The pyramid is used to decompose an image level-wise, the detail information is filtered in different resolution spaces, and a cleaned image is reconstructed from the processed detail information. Experimental results demonstrate the efficiency of the proposed method, which, compared with a conventional median filter, has advantages in eliminating scanning noise while preserving fine edges.
[Key words] Mathematical morphology; Morphological pyramid; Denoising; Scanned image

Image denoising is an important topic in image processing.
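The opening-closing step that the abstract applies to the coarser detail signals can be sketched in a few lines. The snippet below is only an illustration using SciPy's standard grey-scale morphology routines; it is not the authors' implementation, and the 3×3 flat structuring element is an arbitrary choice for the example.

```python
import numpy as np
from scipy import ndimage

def open_close_filter(detail, size=3):
    """Grey-scale opening followed by closing with a flat square element.

    Opening (erosion then dilation) suppresses small bright specks;
    closing (dilation then erosion) suppresses small dark specks, so the
    pair removes impulsive noise while keeping larger structures.
    """
    opened = ndimage.grey_opening(detail, size=(size, size))
    return ndimage.grey_closing(opened, size=(size, size))

# Example: a detail band containing a single isolated bright spike.
band = np.zeros((8, 8))
band[4, 4] = 10.0                       # isolated "noise" point
print(open_close_filter(band).max())    # the spike is removed (0.0)
```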

TPO-30 Reading 2: Answer Explanations


Q1. Correct answer: A. Explanation: innumerable means "countless, too many to count"; the word is numerable ("countable") with the negative prefix in-, so the closest synonym is countless ("too many to be counted"). occasional means "happening now and then, infrequent"; repeated means "done over and over again".

Q2. Correct answer: C. Explanation: in the first paragraph, choice A corresponds to the meaning of sentence 2, choice B to sentence 3, and choice D to the last sentence.

Choice C does not agree with the passage, which says throughout that Darwin's theory of evolution was widely accepted.

Q3. Correct answer: D. Explanation: the main clause of the highlighted sentence says that the punctuated equilibrium hypothesis challenged the theory of evolution; its relative clause says that, under punctuated equilibrium, new species arise relatively suddenly, without a long transition period.

The names and the year are not core information and can be ignored.

Choice D matches the meaning of both the main clause and the subordinate clause of the highlighted sentence.

In choice A the reference of "which" is unclear and easily misread; choice B contradicts the main clause of the original; choice C does not match the original, since "cannot occur without a lengthy transition period" is the exact opposite of punctuated equilibrium.

Q4. Correct answer: C. Explanation: according to the passage, the theory of evolution holds that species change through slow modification over long periods of time, whereas punctuated equilibrium holds that change comes in short bursts, so the two theories differ over the pace of change. Choice C corresponds to this point, namely whether evolution proceeds at a uniform rate.

Q5. Correct answer: C. Explanation: "lack of intermediate fossils" in the question matches sentence 3 of the third paragraph; moving to the preceding sentence, sentence 2 says that the fossils of many organisms remain unchanged over very long spans, a situation that is at odds with Darwin's theory of evolution (be at odds with: "to be in conflict with").

This corresponds to choice C.

The end of the paragraph also uses clam or coral species as examples to show that some species are suddenly replaced by new ones rather than changing gradually, which rules out B.

Sentence 1 of the third paragraph says that the punctuated equilibrium theory "has usually been ignored", which rules out A; the last sentence of the paragraph says that the clam or coral species case holds "In most localities", not that it is "most common", which rules out D.

Problems on electrorheological fluid flows

R.H.W. Hoppe, W.G. Litvinov
Lehrstuhl für Angewandte Analysis mit Schwerpunkt Numerik, Universität Augsburg, Universitätsstraße 14, 86159 Augsburg, Germany
where |ε|² = ∑_{i,j=1}^{n} ε_{ij}², n being the dimension of a domain of flow, γ1 − γ4 are constants, and k is a function of |E|². The constants γ1 − γ4 and the function k are determined by the approximation of flow curves which are obtained experimentally for different values of the vector of electric field E (see Subsection 2.2). But the conditions of coerciveness and monotonicity of the operator −div(σ + pI) impose severe constraints on the constants γ1 − γ4 and on the function k, see [20], such that with these restrictions one cannot obtain a good approximation of a flow curve, to say nothing of approximation of a set of flow curves corresponding to different values of E.

perceivable: English example sentences


Making sentences with "perceivable":

1. Of Skyland southerners are proud, Perceivable through fleeting or dispersing cloud. “天姥连天向天横,势拔五岳掩赤城,天台四万八千丈,对此欲倒东南倾。”

2. Work with Career Advisor, who will proceed further with candidates in a more perceivable manner, to ensure the adequacy of candidates’ information regarding the market trend. 与职业顾问(职业顾问将会深入跟进与候选人的沟通合作)紧密合作,以确保有充足的并符合职业市场动向的候选人信息。

3. There will not be any perceivable effect on the animal during communication, awake or asleep.于沟通时动物并不会有任何可感知的效果,无论是清醒还是睡著。

4. Some suggestions are formed by people's perception of the perceivable factors, the aura, such as signs, odours and light.人对气氛等可感知因素的感知形成的暗示,例如:符号、气味、光线。

5. But further traits like her inner psychological mechanism aren't easily perceivable at a glance.但是,再深一层的东西,比如十娘形象的深层心理机制,就不是泛泛的欣赏所能参悟的了。

6. It is perceivable that Kant's major concern in this book is about “change of heart”. 作者指出,康德的自由概念的中心关注是:“心灵改变”如何可能?

7. Let the text height perceivable. 使文字字高直观化。

8. The other extreme took "law" completely metaphorically, picking out some standard or norm perceivable in natural phenomena which governs behavior through entirely impersonal means. 另一极端是对“法律”采用完全的比喻,从自然现象中挑选出支配非个人的行为的可知觉的某些标准或规范。

一种准线性光束平差方法 (A Quasi-Linear Bundle Adjustment Method)


一种准线性光束平差方法刘侍刚;彭亚丽;韩崇昭【摘要】为了解决光束平差运行速度慢、复杂度高的缺点,提出了一种准线性光束平差方法(QBA),该方法利用深度因子等于投影矩阵的第3行与射影空间点相乘的特性,采用重投影点和已知图像点的代数距离建立目标函数,交替地将投影矩阵和射影空间点两个量中的一个保持不变,线性地求取另一个量,最后完成射影重建.实验结果表明,QBA方法具有较快的收敛速度,同时和传统的Leyenberg-Marquat方法及Mahamud方法比较,QBA方法运行时间约是L-M方法的1/8,是Mahamud方法的1/3.【期刊名称】《西安交通大学学报》【年(卷),期】2010(044)012【总页数】4页(P1-4)【关键词】光束平差;线性迭代;代数距离【作者】刘侍刚;彭亚丽;韩崇昭【作者单位】西安交通大学电子与信息工程学院,710049,西安;陕西师范大学计算机科学学院,710062,西安;西安交通大学电子与信息工程学院,710049,西安【正文语种】中文【中图分类】TP391.41;P232从图像序列中重建出三维场景结构是计算机视觉的主要目标之一[1-2].目前,它仍然是计算机视觉领域中的研究热点之一[3-4].如果没有任何先验知识,从图像测量中只能得到射影重建[5-6].现在所提出的射影重建算法大部分都是基于多线性约束[7-8].利用多线性约束关系进行射影重建的缺点是它并没有把所有的图像统一地看待,而是倚重某几幅图像,因此一旦获得了初始的射影重建,就要进一步求精,这一步叫做光束平差(bund le ad justment)[9].它是一种在计算机视觉领域有广泛应用的优化算法,其目的是以全局最优化来进一步求得所需要的精确解.在理想情况下,光束平差就是求已知图像点与重投影点之间的几何距离的最小值,可以采用Guass-New ton、Levenberg-M arquat(L-M)等求解非线性优化方法来求解[9-11],但这些求解方法计算量比较大,很难满足实时性的要求.有些文献将求解几何距离修改为求解代数距离,再利用奇异值分解(SVD)得到一个初始的射影重建,然后利用迭代的方法进一步求精,但这种方法需要对图像点数据矩阵的行和列进行归一化处理以避免全零解的出现,而这一步的出现,会导致算法不稳定.Mahamud等人对深度因子求导[12],令其等于0,再交替地求解投影矩阵和射影空间点.该方法最大的优点是能够线性迭代地求取在代数距离最小意义下的最优射影重建,而且可以保证算法的收敛,但该方法并没有考虑到深度因子实际上就是由投影矩阵的第3行行向量与射影空间点相乘而得到的,因此会影响该算法的收敛速度.本文针对上述缺点,提出了一种准线性光束平差方法(QBA),利用深度因子等于投影矩阵的第3行与射影空间点相乘的特性,采用重投影点和已知图像点的代数距离建立目标函数,线性迭代地求取投影矩阵和射影空间点,最后完成射影重建.1 基于代数距离的目标函数假定摄像机模型为经典的针孔模型,即成像过程可以用下列方程表示式中:λ为深度因子为3维空间点的齐次坐标为对应的图像平面点的齐次坐标;P为相机的投影矩阵,是一个3×4的矩阵.设有n个三维空间点,m幅图像,对于第i幅图像上的第j个图像平面点,由式(1)可得由式(2)可得基于代数距离的余差函数为式(3)可以利用线性的方法进行求解,但它是病态的.从式(3)可以看出,λi,j=0,Pi=0,Xj=0将是它的最优解,但是这种解并不是我们所期望的,因此应该增加一些附加的约束条件.2 准线性光束平差方法2.1 线性求解射影空间点首先,让投影矩阵Pi保持不变,线性地求解射影空间点Xj,使代数距离E最小.为了表示方便,令式中:pi,k表示投影矩阵Pi的第k列.由式(2)可得从式(9)可以看出,若不对Xj增加附加约束,将会出现全零解,这是本文不希望的.同时,若 Xj是它的一个解,则(α为常数)也是它的一个解,因此它有无穷多个解.为了求解方便,需要增加一个约束,通常可以令中的最后一个元素为1,或者增加一个约束条件进行SVD分解可得到射影空间点Xj,由此可对式(7)进行求解.2.2 线性求解投影矩阵现在让射影空间点 Xj保持不变,线性地求解投影矩阵Pi,使代数距离E最小.同样,为了表示方便,将投影矩阵Pi写成一个列向量,即同样,从式(15)可以看出,若不对qi增加附加约束,将会出现全零解.因此,本文增加一个约束条件|qi|=1.对Bi进行SVD分解可得到投影矩阵 Pi,由此可对式(15)进行求解.3 QBA方法本文提出的QBA方法步骤描述如下:(1)利用已知的初始射影重建},求到初始代数距离==,并令k=1及ε为任意小的一个正数;(2)令投影矩阵保持不变,利用式(9)求解每个射影空间点;(3)利用式(3)求重投影点到已知图像点的代数距离E(k)1,并判断时停止;否则,进行步骤(4);(4)令射影空间点保持不变,利用式(15)求解每个投影矩阵(5)利用式(3)求重投影点到已知图像点的代数距离并判断ε时停止;否则,k=k+1并转至第(2)步.收敛性分析:当投影矩阵保持不变时,是式(3)的极小值点,因此有.同样,当射影空间点保持不变时是式(3)的极小值点,因此也有.由以上分析可知,本文的方法能够保证收敛.4 实验为了检验本文提出的QBA方法的收敛性,用Matlab模拟产生8幅大小为640×480像素的图像,并在图像中分别加入均值为0、方差分别为1和2的高斯噪声,利用这些模拟图像点用文献[13]的方法完成初步的射影重建之后,分别用Levenberg-Marquat(L-M)非线性优化方法[9]、Mahamud方法[12]及本文提出的QBA方法进行光束平差,实验结果如图1所示.从图1可以看出,QBA方法具有良好的收敛性,在3~4步就可以达到收敛,而L-M方法和Mahamud方法都要5~6步才能够达到收敛.从图中还可以看出,QBA方法和Mahamud方法具有相同的收敛精度,而L-M方法的收敛精度要略高于本文方法和Mahamud方法,这是因为L-M方法求的是几何距离的最小值,而最后却用代数距离来衡量,通常情况下,几何距离的最小值点并不和代数距离的最小值重合.图1 代数距离随迭代次数变化情况同时,为了比较本文提出的QBA方法和L-M方法及Mahamud方法的运行速度,本文首先模拟产生8幅图像,每幅图像点由20个变化到400个.然后,固定每幅图像点数为100个,图像数由2幅变化到16幅.在所有的实验中,图像像素中都加入均值为0、方差为1的高斯噪声,当连续2个代数距离(对于L-M方法为几何距离)之差小于10-6时,就认为算法已经达到收敛.在每种情况下,实验重复100次,然后取其平均值,实验结果如图2和图3所示.图2 运行时间随空间点数变化图图3 运行时间随空间点数变化图从图2和图3中可以看,QBA方法运行时间约是L-M 方法的1/8,约是Mahamud 方法的1/3.由于L-M方法是采用非线性解法,因此它的运行速度最慢,而在Mahamud方法中,它并没有考虑深度因子实际上就是由投影矩阵和射影空间点相乘所组成,因此它的运行速度也较慢.5 结束语本文提出了一种准线性光束平差方法——QBA,利用重投影点和已知图像点的代数距离建立目标函数,通过线性迭代求取代数距离的极小值.由于本文方法考虑到深度因子实际上就是由投影矩阵的第3行行向量与射影空间点相乘而得到的,所以具有较快的运行速度,实验结果也表明了本文QBA方法具有收敛性好及运行速度快等优点.参考文献:【相关文献】[1]WANG Guanghui,WU J Q M.The quasi-perspective model:geometric properties and 3D reconstruction[J].Pattern Recognition,2010,43(5):1932-1942.[2]彭亚丽,刘芳,刘侍刚.一维子空间的三维重建方法[J].西安交通大学学报,2009,43(12):31-35.PENG Yali,LIU Fang,LIU Shigang.3D reconstruction method with 1D subspace[J].Journal of Xi′an Jiaotong University,2009,43(12):31-35.[3]彭亚丽,刘芳,焦李成,等.基于秩4约束的遮挡点恢复方法[J].机器人,2008,30(2):138-141.PENG Yali,LIU Fang,JIAO Licheng,et al.A 
method for occlusion recovery based on rank4[J].Robot,2008,30(2):138-141.[4]刘侍刚,彭亚丽,韩崇昭,等.3维子空间约束的遮挡点恢复方法[J].西安交通大学学报,2009,43(4):10-13.LIU Shigang,PENG Yali,HAN Chongzhao,et al.An occlusion recovery method based on 3D subspace[J].Journal of Xi′an Jiaoton g University,2009,43(4):10-13. [5]刘侍刚,彭亚丽,韩崇昭,等.基于秩1的射影重建方法[J].电子学报,2009,37(1):225-228.LIU Shigang,PENG Yali,HAN Chongzhao,et al.Projective reconstruction based on rank 1 matrix[J].Acta Elecronica Sinica,2009,37(1):225-228.[6]彭亚丽,刘侍刚,刘芳.基于秩1约束的三维重建方法[J].信号处理,2010,26(1):28-31.PENG Yali,LIU Shigang,LIU Fang.A 3D reconstruction method based on rank 1[J].Signal Processing,2010,26(1):28-31.[7]PENG Yali,LIU Shigang,LIU Fang.Projective reconstruction with occlusions[J].Opto-Electronics Review,2010,18(2):14-18.[8]刘侍刚,吴成柯,李良福,等.基于1维子空间线性迭代射影重建[J].电子学报,2007,35(4):692-696.LIU Shigang,WU Chengke,LI Liangfu,et al.An iterative method based on 1D subspace for projective structure and motion[J].Acta Elecronica Sinica,2007,35(4):692-696.[9]TRIGGS B,MCLAUCHLAN P,HARTLEY R I,et al.Bundle adjustment:a modernsynthesis[C]∥Proceedings of International Workshop on VisionAlgorithms.Berlin,Germany:Springer Verlag,2005:298-372[10]BARTOLI A.A unified framework for quasi-linear bundle adjustment[C]∥Proceeding of 16th International Conference on Pattern Recognition.Los Alamitos,CA,USA:IEEE Computer Society,2002:560-563.[11]MICHOT J,BARTOLI A,GASPARD F,et al.Algebraic line search for bundleadjustment[C]∥Proceedings of the Ninth British Machine Vision.Berlin,Germany:Springer Verlag,2009:1-8.[12]MAHAMUD S,HEBERT M,OMORI Y,et al.Provably-convergent iterative methods for projective structure from motion[C]∥IEEE Conference on Computer Vision and Pattern Recognition.Los Alamitos,CA,USA:IEEE Computer Society,2001:1018-1025.[13]MARQUES M,COSTEIRA J.Estimating 3D shape from degenerate sequences withmissing data[J].Computer Vision and Image Understanding,2009,113(2):261-272.[14]JULIA C,SAPPA A.An iterative multiresolution scheme for SFM with missing data:single and multiple object scenes[J].Image and Vision Computing,2010,28(1):164-176.。
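The alternating scheme summarized in the abstract above, i.e. hold the projection matrices fixed and solve linearly for the space points, then hold the points fixed and solve linearly for the projection matrices, each half-step minimizing an algebraic reprojection error through an SVD, can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code; the function names are invented, and details such as data normalization and the depth-factor handling discussed in the paper are omitted.

```python
import numpy as np

def solve_point(Ps, xs):
    """DLT-style triangulation: find the homogeneous point X minimizing the
    algebraic error for one scene point, with the cameras Ps held fixed.
    Ps: list of 3x4 arrays; xs: list of (u, v) observations, one per view."""
    rows = []
    for P, (u, v) in zip(Ps, xs):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1]                      # null vector = homogeneous point

def solve_camera(Xs, xs):
    """DLT-style resection: find the 3x4 projection P minimizing the
    algebraic error for one view, with the scene points Xs held fixed.
    Xs: list of homogeneous 4-vectors; xs: list of (u, v) observations."""
    rows = []
    z = np.zeros(4)
    for X, (u, v) in zip(Xs, xs):
        rows.append(np.concatenate([X, z, -u * X]))
        rows.append(np.concatenate([z, X, -v * X]))
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 4)

def alternate(Ps, Xs, obs, n_iters=10):
    """Alternate the two linear solves (cf. the QBA iteration in the paper).
    obs[i][j] is the (u, v) image of point j in view i."""
    for _ in range(n_iters):
        Xs = [solve_point(Ps, [obs[i][j] for i in range(len(Ps))])
              for j in range(len(Xs))]
        Ps = [solve_camera(Xs, obs[i]) for i in range(len(Ps))]
    return Ps, Xs
```

Each half-step is a standard linear solve (triangulation and resection, respectively), which is what makes the overall iteration quasi-linear rather than a full nonlinear Levenberg-Marquardt optimization.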

U(1) textures and Lepton Flavor Violation

Theoretical Physics Division, Ioannina University, GR-451 10 Ioannina, Greece CERN Theory Division, CH-1211, Switzerland
Abstract
U (1) family symmetries have led to successful predictions of the fermion mass spectrum and the mixing angles of the hadronic sector. In the context of the supersymmetric unified theories, they further imply a non-trivial mass structure for the scalar partners, giving rise to new sources of flavor violation. In the present work, lepton flavor non-conserving processes are examined in the context of the minimal supersymmetric standard model augmented by a U (1)-family symmetry. We calculate the mixing effects on the µ → eγ and τ → µγ rare decays. All supersymmetric scalar masses involved in the processes are determined at low energies using two loop renormalization group analysis and threshold corrections. Further, various novel effects are considered and found to have important impact on the branching ratios. Thus, a rather interesting result is that when the see-saw mechanism is applied in the 12 × 12 sneutrino mass matrix, the mixing effects of the Dirac matrix in the effective light sneutrino sector are canceled at first order. In this class of models and for the case that soft term mixing is already present at the GUT scale, τ → µγ decays are mostly expected to arise at rates significantly smaller than the current experimental limits. On the other hand, the µ → eγ rare decays impose important bounds on the model parameters, particularly on the supersymmetric scalar mass spectrum. In the absence of soft term mixing at high energies, the predicted branching ratios for rare decays are, as expected, well below the experimental bounds.

1 Introduction

1.1 The problem
The object of this paper is the study of the behavior of Internet traffic, as observed on a network link (typically shared by a large number of network users), aiming at the identification, quantification, and justification of its salient features. Traffic observation here assumes the form of data traces, i.e. sequences of the form {(t_i, d_i)}_{i=1}^N, where d_i is a data quantity (possibly measured in bits, bytes etc.), and t_i is its time stamp, the time when d_i was observed. Throughout the paper, though, it will be more convenient to use binned data traces X_i^Δ, i = 1, ..., M = t_N/Δ, with bin size Δ, defined as: X_i^Δ = Σ_{j: iΔ < t_j < (i+1)Δ} d_j. Our ignorance of the exact procedures taking place in the network, due either to their complexity, or to lack of information, as well as of the behavior of its users, forces us to view X^Δ as a stochastic process [5]. This process has been the object of study of numerous researchers. It has unusual and initially unexpected properties, such as long range dependence [2, 7], nonstationarity [6, 9], and different behavior on different time scales [15, 18, 20, 16]. Moreover, even for the coarser time scales, its marginal distributions may be neither Gaussian, nor p-Stable. This is all the more surprising, since the Central Limit Theorems would lead one to expect the opposite, because a) the observed traffic is the aggregate result of many users, possibly acting independently [24, 10, 3], b) each X_i^Δ is actually the sum of a large number of d_i's, and c) for small time bins, traffic is uncorrelated. Indeed, despite the fact that "bursts" and "spikes" seem to persist for at least 4 orders of bin magnitude (Fig. 1 and 2), which argues against a Gaussian limit, it can be easily seen, using classical fitting techniques, that the marginal distributions do not possess the characteristic heavy tails of p-Stable variables either; instead, their tails appear to be exponential or Weibull (Fig. 3). Thus, the process seems "unwilling" to converge to any limit, in contrast to most of the processes observed in nature or industry, which do not exhibit this type of "wild" behavior, but rather get "attracted" to the (Gaussian or p-Stable) central limit much sooner (this behavior will be quantified more precisely in section 5 below). The unusual properties of the traffic are generally attributed to the diversity of user applications, the network protocols, and the complexity of the network topology. Indeed, Internet traffic today is a result of numerous
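As an illustration of the binning just defined, the following short script builds X_i^Δ from a raw trace {(t_i, d_i)}. It is a toy example written for this text (the variable names and the synthetic trace are invented), not code from the authors' toolchain.

```python
import numpy as np

def bin_trace(timestamps, sizes, delta):
    """Aggregate a raw trace {(t_i, d_i)} into bins of width delta.

    Returns X[i] = sum of all d_j whose time stamp falls in
    [i*delta, (i+1)*delta), for i = 0, ..., M-1 with M = ceil(t_N / delta).
    """
    timestamps = np.asarray(timestamps, dtype=float)
    sizes = np.asarray(sizes, dtype=float)
    m = int(np.ceil(timestamps.max() / delta))
    bins = (timestamps // delta).astype(int)
    x = np.zeros(m)
    np.add.at(x, np.minimum(bins, m - 1), sizes)   # accumulate d_j per bin
    return x

# Toy trace: 10,000 packets over ~60 s with random sizes (bytes).
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 60.0, size=10_000))
d = rng.integers(40, 1500, size=10_000)

x_10ms = bin_trace(t, d, 0.010)    # fine bins
x_1s   = bin_trace(t, d, 1.0)      # coarse bins
print(len(x_10ms), len(x_1s), x_1s.sum() == d.sum())
```

Plotting the same trace at several values of Δ, as in Figures 1 and 2, is then just a matter of calling bin_trace with different bin sizes.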
Figure 1: Traces 94 -(a),(c),(e)- and 97 -(b),(d),(f)- binned with 3 different time bins: 1 ms, 100 ms, 1 s. The data sets 94 and 97 are two of the eight data sets used in this paper; for more details, see section 1.2 below.
∗ A preliminary version of this paper appeared in Proceedings of SPIE Vol. 4868 (2002) under the title: “Some results on the multiresolution structure of Internet traffic traces”. † The author is a scholar of the Lilian Boudouris Foundation. ‡ The largest portion of the research for this paper was conducted while the author was affiliated to Princeton University, and was partially supported by AT&T Research Center.
February 1, 2008
Abstract

Internet traffic on a network link can be modeled as a stochastic process. After detecting and quantifying the properties of this process, using statistical tools, a series of mathematical models is developed, culminating in one that is able to generate “traffic” that exhibits –as a key feature– the same difference in behavior for different time scales, as observed in real traffic, and is moreover indistinguishable from real traffic by other statistical tests as well. Tools inspired from the models are then used to determine and calibrate the type of activity taking place in each of the time scales. Surprisingly, the above procedure does not require any detailed information originating from either the network dynamics, or the decomposition of the total traffic into its constituent user connections, but rather only the compliance of these connections to very weak conditions.
On the multiresolution structure of Internet traffic traces∗
arXiv:math/0211307v2 [math.PR] 3 Feb 2003
Konstantinos Drakakis† Princeton University
Dragan Radulović‡ Yale University