Introduction of algorithms used in PIV technique
Introduction to Algorithms
• 本文對基本算法説明的基本方式
– 説明算法的基本原理 – 給出由此算法解決的實例
定義我們要解決的問題
給定有待排序的N 給定有待排序的N個項目 R1,R2,…,RN 我們稱這些項目為記錄(record),並稱N 我們稱這些項目為記錄(record),並稱N個記錄的整 個集合為一個文件(file)。每一個R 個集合為一個文件(file)。每一個Rj有一個鍵碼 (key) Kj ,支配排序過程。 排序的目標,是確定下標為{1,2,…,N}的一個排列 排序的目標,是確定下標為{1,2,…,N}的一個排列 p(1) p(2)…p(N),它以非遞降的次序來放置所有的 p(2)…p(N),它以非遞降的次序來放置所有的 鍵碼: Kp(1) ≤ Kp(2) ≤ …≤ Kp(N)
合併兩個已經排列好的序列
MERGE(A, p, q, r)
1 n1 ← q - p + 1 2 n2 ← r - q 3 create arrays L[1 ‥ n1 + 1] and R[1 ‥ n2 + 1] 4 for i ← 1 to n1 5 do L[i] ← A[p + i - 1] 6 for j ← 1 to n2 7 do R[j] ← A[q + j] 8 L[n1 + 1] ← ∞ 9 R[n2 + 1] ← ∞ 10 i ← 1 11 j ← 1 12 for k ← p to r 13 do if L[i] ≤ R[j] 14 then A[k] ← L[i] i←i+1 15 16 else A[k] ← R[j] 17 j←j+1
算法講義
目的
• 算法(algorithm)是一個優秀程序員的基本功, 算法(algorithm)
Introduction Algorithms for Non-negative Matrix Factorization
Algorithms for Non-negative MatrixFactorizationDaniel D.LeeBell Laboratories Lucent Technologies Murray Hill,NJ07974H.Sebastian SeungDept.of Brain and Cog.Sci.Massachusetts Institute of TechnologyCambridge,MA02138 AbstractNon-negative matrix factorization(NMF)has previously been shown tobe a useful decomposition for multivariate data.Two different multi-plicative algorithms for NMF are analyzed.They differ only slightly inthe multiplicative factor used in the update rules.One algorithm can beshown to minimize the conventional least squares error while the otherminimizes the generalized Kullback-Leibler divergence.The monotonicconvergence of both algorithms can be proven using an auxiliary func-tion analogous to that used for proving convergence of the Expectation-Maximization algorithm.The algorithms can also be interpreted as diag-onally rescaled gradient descent,where the rescaling factor is optimallychosen to ensure convergence.IntroductionUnsupervised learning algorithms such as principal components analysis and vector quan-tization can be understood as factorizing a data matrix subject to different constraints.De-pending upon the constraints utilized,the resulting factors can be shown to have very dif-ferent representational properties.Principal components analysis enforces only a weak or-thogonality constraint,resulting in a very distributed representation that uses cancellations to generate variability[1,2].On the other hand,vector quantization uses a hard winner-take-all constraint that results in clustering the data into mutually exclusive prototypes[3]. We have previously shown that nonnegativity is a useful constraint for matrix factorization that can learn a parts representation of the data[4,5].The nonnegative basis vectors that are learned are used in distributed,yet still sparse combinations to generate expressiveness in the reconstructions[6,7].In this submission,we analyze in detail two numerical algorithms for learning the optimal nonnegative factors from data.Non-negative matrix factorizationWe formally consider algorithms for solving the following problem:Non-negative matrix factorization(NMF)Given a non-negative matrix,find non-negative matrix factors and such that:(1)NMF can be applied to the statistical analysis of multivariate data in the following manner. Given a set of of multivariate-dimensional data vectors,the vectors are placed in the columns of an matrix where is the number of examples in the data set.This matrix is then approximately factorized into an matrix and an matrix. Usually is chosen to be smaller than or,so that and are smaller than the original matrix.This results in a compressed version of the original data matrix.What is the significance of the approximation in Eq.(1)?It can be rewritten column by column as,where and are the corresponding columns of and.In other words,each data vector is approximated by a linear combination of the columns of, weighted by the components of.Therefore can be regarded as containing a basis that is optimized for the linear approximation of the data in.Since relatively few basis vectors are used to represent many data vectors,good approximation can only be achieved if the basis vectors discover structure that is latent in the data.The present submission is not about applications of NMF,but focuses instead on the tech-nical aspects offinding non-negative matrix factorizations.Of course,other types of ma-trix factorizations have been extensively studied in numerical linear algebra,but the non-negativity constraint makes much of this previous work inapplicable to the present case [8].Here we discuss two algorithms for NMF based on iterative updates of and.Because these algorithms are easy to implement and their convergence properties are guaranteed, we have found them very useful in practical applications.Other algorithms may possibly be more efficient in overall computation time,but are more difficult to implement and may not generalize to different cost functions.Algorithms similar to ours where only one of the factors is adapted have previously been used for the deconvolution of emission tomography and astronomical images[9,10,11,12].At each iteration of our algorithms,the new value of or is found by multiplying the current value by some factor that depends on the quality of the approximation in Eq.(1).We prove that the quality of the approximation improves monotonically with the application of these multiplicative update rules.In practice,this means that repeated iteration of the update rules is guaranteed to converge to a locally optimal matrix factorization.Cost functionsTofind an approximate factorization,wefirst need to define cost functions that quantify the quality of the approximation.Such a cost function can be constructed using some measure of distance between two non-negative matrices and.One useful measure is simply the square of the Euclidean distance between and[13],(2)This is lower bounded by zero,and clearly vanishes if and only if.Another useful measure is(3) Like the Euclidean distance this is also lower bounded by zero,and vanishes if and only if.But it cannot be called a“distance”,because it is not symmetric in and, so we will refer to it as the“divergence”of from.It reduces to the Kullback-Leibler divergence,or relative entropy,when,so that and can beregarded as normalized probability distributions.We now consider two alternative formulations of NMF as optimization problems:Problem1Minimize with respect to and,subject to the constraints .Problem2Minimize with respect to and,subject to the constraints .Although the functions and are convex in only or only,they are not convex in both variables together.Therefore it is unrealistic to expect an algorithm to solve Problems1and2in the sense offinding global minima.However,there are many techniques from numerical optimization that can be applied tofind local minima. Gradient descent is perhaps the simplest technique to implement,but convergence can be slow.Other methods such as conjugate gradient have faster convergence,at least in the vicinity of local minima,but are more complicated to implement than gradient descent [8].The convergence of gradient based methods also have the disadvantage of being very sensitive to the choice of step size,which can be very inconvenient for large applications.Multiplicative update rulesWe have found that the following“multiplicative update rules”are a good compromise between speed and ease of implementation for solving Problems1and2.Theorem1The Euclidean distance is nonincreasing under the update rules(4)The Euclidean distance is invariant under these updates if and only if and are at a stationary point of the distance.Theorem2The divergence is nonincreasing under the update rules(5)The divergence is invariant under these updates if and only if and are at a stationary point of the divergence.Proofs of these theorems are given in a later section.For now,we note that each update consists of multiplication by a factor.In particular,it is straightforward to see that this multiplicative factor is unity when,so that perfect reconstruction is necessarily afixed point of the update rules.Multiplicative versus additive update rulesIt is useful to contrast these multiplicative updates with those arising from gradient descent [14].In particular,a simple additive update for that reduces the squared distance can be written as(6) If are all set equal to some small positive number,this is equivalent to conventional gradient descent.As long as this number is sufficiently small,the update should reduce .Now if we diagonally rescale the variables and set(7) then we obtain the update rule for that is given in Theorem1.Note that this rescaling results in a multiplicative factor with the positive component of the gradient in the denom-inator and the absolute value of the negative component in the numerator of the factor. For the divergence,diagonally rescaled gradient descent takes the form(8)Again,if the are small and positive,this update should reduce.If we now set(9) then we obtain the update rule for that is given in Theorem2.This rescaling can also be interpretated as a multiplicative rule with the positive component of the gradient in the denominator and negative component as the numerator of the multiplicative factor.Since our choices for are not small,it may seem that there is no guarantee that such a rescaled gradient descent should cause the cost function to decrease.Surprisingly,this is indeed the case as shown in the next section.Proofs of convergenceTo prove Theorems1and2,we will make use of an auxiliary function similar to that used in the Expectation-Maximization algorithm[15,16].Definition1is an auxiliary function for if the conditions(10) are satisfied.The auxiliary function is a useful concept because of the following lemma,which is also graphically illustrated in Fig.1.Lemma1If is an auxiliary function,then is nonincreasing under the update(11)Proof:Note that only if is a local minimum of.If the derivatives of exist and are continuous in a small neighborhood of,this also implies that the derivatives.Thus,by iterating the update in Eq.(11)we obtain a sequence of estimates that converge to a local minimum of the objective function:(12)We will show that by defining the appropriate auxiliary functions for both and,the update rules in Theorems1and2easily follow from Eq.(11).minFigure1:Minimizing the auxiliary function guarantees that for.Lemma2If is the diagonal matrix(13) then(14) is an auxiliary function for(15)Proof:Since is obvious,we need only show that.To do this,we compare(16) with Eq.(14)tofind that is equivalent to(17) To prove positive semidefiniteness,consider the matrix1:(18) which is just a rescaling of the components of.Then is positive semidefinite if and only if is,and(19)(20)(21)(22)(23)1One can also show that is positive semidefinite by considering the matrix.Then is a positive eigenvector of with unity eigenvalue,and application of the Frobenius-Perron theorem shows that Eq.17holds.We can now demonstrate the convergence of Theorem1:Proof of Theorem1Replacing in Eq.(11)by Eq.(14)results in the update rule:(24) Since Eq.(14)is an auxiliary function,is nonincreasing under this update rule,according to Lemma1.Writing the components of this equation explicitly,we obtain(25) By reversing the roles of and in Lemma1and2,can similarly be shown to be nonincreasing under the update rules for.We now consider the following auxiliary function for the divergence cost function: Lemma3Define(26)(27) This is an auxiliary function for(28) Proof:It is straightforward to verify that.To show that, we use convexity of the log function to derive the inequality(29) which holds for all nonnegative that sum to unity.Setting(30) we obtain(31) From this inequality it follows that.Theorem2then follows from the application of Lemma1:Proof of Theorem2:The minimum of with respect to is determined by setting the gradient to zero:(32)Thus,the update rule of Eq.(11)takes the form(33)Since is an auxiliary function,in Eq.(28)is nonincreasing under this update.Rewrit-ten in matrix form,this is equivalent to the update rule in Eq.(5).By reversing the roles of and,the update rule for can similarly be shown to be nonincreasing.DiscussionWe have shown that application of the update rules in Eqs.(4)and(5)are guaranteed to find at least locally optimal solutions of Problems1and2,respectively.The convergence proofs rely upon defining an appropriate auxiliary function.We are currently working to generalize these theorems to more complex constraints.The update rules themselves are extremely easy to implement computationally,and will hopefully be utilized by others for a wide variety of applications.We acknowledge the support of Bell Laboratories.We would also like to thank Carlos Brody,Ken Clarkson,Corinna Cortes,Roland Freund,Linda Kaufman,Yann Le Cun,Sam Roweis,Larry Saul,and Margaret Wright for helpful discussions.References[1]Jolliffe,IT(1986).Principal Component Analysis.New York:Springer-Verlag.[2]Turk,M&Pentland,A(1991).Eigenfaces for recognition.J.Cogn.Neurosci.3,71–86.[3]Gersho,A&Gray,RM(1992).Vector Quantization and Signal Compression.Kluwer Acad.Press.[4]Lee,DD&Seung,HS.Unsupervised learning by convex and conic coding(1997).Proceedingsof the Conference on Neural Information Processing Systems9,515–521.[5]Lee,DD&Seung,HS(1999).Learning the parts of objects by non-negative matrix factoriza-tion.Nature401,788–791.[6]Field,DJ(1994).What is the goal of sensory coding?Neural Comput.6,559–601.[7]Foldiak,P&Young,M(1995).Sparse coding in the primate cortex.The Handbook of BrainTheory and Neural Networks,895–898.(MIT Press,Cambridge,MA).[8]Press,WH,Teukolsky,SA,Vetterling,WT&Flannery,BP(1993).Numerical recipes:the artof scientific computing.(Cambridge University Press,Cambridge,England).[9]Shepp,LA&Vardi,Y(1982).Maximum likelihood reconstruction for emission tomography.IEEE Trans.MI-2,113–122.[10]Richardson,WH(1972).Bayesian-based iterative method of image restoration.J.Opt.Soc.Am.62,55–59.[11]Lucy,LB(1974).An iterative technique for the rectification of observed distributions.Astron.J.74,745–754.[12]Bouman,CA&Sauer,K(1996).A unified approach to statistical tomography using coordinatedescent optimization.IEEE Trans.Image Proc.5,480–492.[13]Paatero,P&Tapper,U(1997).Least squares formulation of robust non-negative factor analy-b.37,23–35.[14]Kivinen,J&Warmuth,M(1997).Additive versus exponentiated gradient updates for linearprediction.Journal of Information and Computation132,1–64.[15]Dempster,AP,Laird,NM&Rubin,DB(1977).Maximum likelihood from incomplete data viathe EM algorithm.J.Royal Stat.Soc.39,1–38.[16]Saul,L&Pereira,F(1997).Aggregate and mixed-order Markov models for statistical languageprocessing.In C.Cardie and R.Weischedel(eds).Proceedings of the Second Conference on Empirical Methods in Natural Language Processing,81–89.ACL Press.。
微积分介值定理的英文
微积分介值定理的英文The Intermediate Value Theorem in CalculusCalculus, a branch of mathematics that has revolutionized the way we understand the world around us, is a vast and intricate subject that encompasses numerous theorems and principles. One such fundamental theorem is the Intermediate Value Theorem, which plays a crucial role in understanding the behavior of continuous functions.The Intermediate Value Theorem, also known as the Bolzano Theorem, states that if a continuous function takes on two different values, then it must also take on all values in between those two values. In other words, if a function is continuous on a closed interval and takes on two different values at the endpoints of that interval, then it must also take on every value in between those two endpoint values.To understand this theorem more clearly, let's consider a simple example. Imagine a function f(x) that represents the height of a mountain as a function of the distance x from the base. If the function f(x) is continuous and the mountain has a peak, then theIntermediate Value Theorem tells us that the function must take on every height value between the base and the peak.Mathematically, the Intermediate Value Theorem can be stated as follows: Let f(x) be a continuous function on a closed interval [a, b]. If f(a) and f(b) have opposite signs, then there exists a point c in the interval (a, b) such that f(c) = 0.The proof of the Intermediate Value Theorem is based on the properties of continuous functions and the completeness of the real number system. The key idea is that if a function changes sign on a closed interval, then it must pass through the value zero somewhere in that interval.One important application of the Intermediate Value Theorem is in the context of finding roots of equations. If a continuous function f(x) changes sign on a closed interval [a, b], then the Intermediate Value Theorem guarantees that there is at least one root (a value of x where f(x) = 0) within that interval. This is a powerful tool in numerical analysis and the study of nonlinear equations.Another application of the Intermediate Value Theorem is in the study of optimization problems. When maximizing or minimizing a continuous function on a closed interval, the Intermediate Value Theorem can be used to establish the existence of a maximum orminimum value within that interval.The Intermediate Value Theorem is also closely related to the concept of connectedness in topology. If a function is continuous on a closed interval, then the image of that interval under the function is a connected set. This means that the function "connects" the values at the endpoints of the interval, without any "gaps" in between.In addition to its theoretical importance, the Intermediate Value Theorem has practical applications in various fields, such as economics, biology, and physics. For example, in economics, the theorem can be used to show the existence of equilibrium prices in a market, where supply and demand curves intersect.In conclusion, the Intermediate Value Theorem is a fundamental result in calculus that has far-reaching implications in both theory and practice. Its ability to guarantee the existence of values between two extremes has made it an indispensable tool in the study of continuous functions and the analysis of complex systems. Understanding and applying this theorem is a crucial step in mastering the powerful concepts of calculus.。
算法导论题目
• A map M = {r1 , r2 , . . . , rn } is a set of n roads. • A view rectangle V = �(Vx1 , Vy1 ), (Vx2 , Vy2 )� specifies the rectangular region that should be displayed by giving the coordinates of the rectangle’s lower-left and upper-right corners, (Vx1 , Vy1 ) and (Vx2 , Vy2 ) respectively. • A road r is visible in the view rectangle V if it intersects the interior of the rectangle V . There are two types of visible roads r: Type 1: One or both endpoints of the road r are inside the view rectangle V .
You will often be called upon to “give an algorithm” to solve a certain problem. Your write-up should take the form of a short essay. A topic paragraph should summarize the problem you are solving and what your results are. The body of your essay should provide the following: 1. A description of the algorithm in English and, if helpful, pseudocode. 2. At least one worked example or diagram to show more precisely how your algorithm works. 3. A proof (or indication) of the correctness of the algorithm. 4. An analysis of the running time of the algorithm. Remember, your goal is to communicate. Graders will be instructed to take off points for convo luted and obtuse descriptions.
如何成为一个程序员
如何成为一个程序员:想成为一个游戏程序员需要有以下资料疯狂代码 / ĵ:http://GameDevelopment/Article36086.html、书籍:算法和数据结构:数据结构(C语言版)——严蔚敏、吴伟民 清华出版社我觉得其配套习题集甚至比原书更有价值每个较难题都值得做下Introduction to Algorithms第 2版 中文名算法导论有关算法标准学习教材和工程参考手册在去年CSDN网站WebSite上其翻译版竟然评为年度 2十大技术畅销书同时员杂志上开设了“算法擂台”栏目这些溯源固本举动不由得使人对中国现今浮躁不堪所谓“IT”业又产生了线希望这本厚厚书幸亏打折我才买得起虽然厚达千页但其英文通俗晓畅内容深入浅出可见经典的作往往比般水准书还耐读还能找到MIT视频教程第节课那个老教授嘻皮笑脸后面就是长发助教上课了C语言名题精选百则 窍门技巧篇——冼镜光 机械工业出版社作者花费年时间搜集了各种常见C段极具窍门技巧性编程法其内容都是大有来头而且给出了详细参考资料如个普通Fibonacci数就给出了非递归解、快速算法、扩充算法等步步深入直至几无油水可榨对于视速度如生命连个普通浮点数转化为整数都另辟蹊径以减少CPU cycle游戏员怎可不看?计算机算法基础(第 2版)—— 佘祥宣等 华中科大出版社我看到几个学校研究生拿它作教材(研究生才开算法太开玩笑了吧)这本书薄是薄了点用作者话来说倒也“精辟”其实此书是Fundamentals of Computer Algorithms缩写版不过原书出版太久了反正我是没找到The Art of Computer ProgrammingVolume 1-3作者Donald E. Knuth是我心目中和冯.诺依曼、Dijkstra、Shannon并列 4位大师这本书作者从读大学本科时开始写直写到博士时十年磨剑足见其下足了功夫可作为计算机技术核心——算法和数据结构终极参考手册创新处也颇多譬如常见Shell排序他在书中提出可用(3i-1)/2间隔这使其稍快于O(n1. 5)当然这套书描述高度数学化为此恐怕般人(我?)最好还得先看本数学预备书Concrete Mathematics(直译为混凝土数学?^-^)再说可惜是这套书才出到第 3卷并没有覆盖全部常见算法内容不过好在对于游戏员来说越常见算法用得越多这也不算是什么要命损失STL源码剖析—— 侯捷 华中科大出版社侯捷不用介绍了华人技术作家中旗舰说其有世界级水准也不为过这本书我以为是C和数据结构葵花宝典(欲练此功必先自宫)也就是说不下几层地狱很难看懂它要求预备知识太多了如STL、数据结构、泛型编程、内存管理都要很扎实(为此是不是还要看看有内存管理设计模式的称Small Memory Software这本书呢?)但是旦看懂真会是所向披靡Data Structures for Game Programmers每个数据结构例程都是个小游戏还用SDL库实现了个算法演示系统虽然内容失的于浅但起码让人了解了数据结构在游戏中作用其实游戏并不比其它特殊甚至要求基本功更加扎实所以花时间做些看似和实际应用不甚相干习题对今后工作是大有裨益而且有些应用很广算法如常被人津津乐道[Page]A*算法及其变种牵涉到图检索周游和分枝-限界法恐怕还得读些艰深论文才能充分明白运用如Donald E. KnuthAn analysis of alpha-beta cutoffs其实还有不少此类好书如Data Structures and Algorithms in C、Programming Pearls、More Programming Pearls(算法珠玑)等我却以为要先看严谨点著作再看内容随笔点书汇编:IBM-PC 汇编语言设计第 2版 国内经典教材The Art of Assembly Language这本书足有1600页噢!C语言:The C Programming Language第 2版虽然篇幅短小但每个例程都很经典(我们老师开始拿它作教材后面换为谭小强C语言书理由为:例子尽是些文本处理我就纳了闷了难道现代计算机不是将大量时间消耗在串和文本处理上吗?)C:学过C语言再学C先看这本C Primer缩写版:Essential C对C有个入门了解再看C Common Knowledge: Essential Intermediate Programming就不会有什么重要知识点完全不知所措了接下来是The C Standard Library : A Tutorial and Reference标准库当然主要是标准模板库标准学习参考手册然后最好平时边写边参悟Effective C等我是说书名以形容词 + C那些书计有 7 8本慢慢看吧罗马不是日建成(Essential C、Effective C、More Effective C、Accelerated C、Effective STL、Exceptional C、More Exceptional C、Imperfect C虽然书名格式相似但每本都绝非马虎的作)谁说C比C要慢?那就请看下面:The Design and Evolution of C知其过去才能知其未来才能应用Inside the C Object Model揭露C编译器模型Efficient C Performance Programming Techniques当算法优化已到极致在运用汇编的前最后还可看看此书有时高级和低阶都能做成相同事情还有两本特别书:Modern C Design : Generic Programming and Design Patterns Applied作者想把设计模式和泛型编程结合起来并写了个尝试提供切Loki库来实作,不过其观点并未得到C社区普遍响应尽管如此本书仍称得上思想前沿性和技术实用性结合典范C Template Metaprogramming把编译器当作计算器?本书介绍了Boost库MPL模板元编程库当然提到Boost库对于游戏员不能不提到其中Graph库有The Boost Graph Library书可看还有其中Python库号称国内首款商业 3维图形引擎起点引擎就用了Boost-Python库说实话我觉得起点引擎还是蛮不错那个自制 3维编辑器虽然界面简陋但功能还算蛮完善给游戏学院用作教学内容也不错另有个号称中国首款自主研发全套网游解决方案我看到它那个 3维编辑器心想这不就是国外个叫freeworld3D编辑器吗?虽然有点偏门但我以前还较劲尝试破解过呢还把英文界面汉化了大概用[Page]exescope这样资源修改软件Software就能搞定吧我又心想为什么要找freeworld3D这个功能并不太强大编辑器呢?仅仅是它便宜到几十美金?它唯特别点地方就是支持导出OGRE图形引擎场景格式这样想不由得使人对它图形引擎“自主”性也产生怀疑了这样“自主”研发真让人汗颜只要中国还没封sourceforge这个网站WebSite(据说以前和freeBSD网站WebSite起被封过?)国人就能“自主”研发有人还会推荐C PrimerThinking in CThe C Programming Language等书吧诚然这些书也很好但我总觉得它们太大部头了还不如多花点时间看看国外好源代码Windows编程Operating Concepts第 5版国内有些操作系统教程其实就是它缩写版Windows 95 Programming Secrets深入剖析了Windows操作系统种种种种有人爱看Linux内核完全注释有人爱看自己动手写操作系统这样煽情书但我想作为商业操作系统把Windows内核剖析到这地步也高山仰止了Programming Applications for Microsoft Windows第 4版先进程线程再虚存管理再动态链接库最多讲到消息机制作者在序言中说:“我不讲什么ActiveX, COM等等当你了解了这些基础后那些东西很快就会明白!”可以作为Programming Windows先修课计算机体系:Computer s : A Programmer’s Perspective和The Art of Computer Programming在我心中是计算机史上两本称得上伟大书计算机组成原理操作系统汇编编译原理计算机网络等等课程汇成这本千页大书计算机在作者眼中就是个整体开源阅读:Code Reading : The Open Source Perspective张大千临摹了几百张明代石涛山水画出画以假乱真后来他去敦煌潜心临摹几年回来画风大变终成大家员其实有40%时间是在读别人源代码侯捷先生说:“源码面前了无秘密”又说“天下大事必作于细”可以和他上穷碧落下黄泉源码追踪经验谈参看MFC:深入浅出MFC我实在以为没有看过侯捷先生深入浅出MFC人多半不会懂得MFC编程其实我是打算用年多时间写个给游戏美工用 3维编辑器顺便作为毕业设计图形库就用MFC吧反正也没得选择如果要用wxWidgets无非是猎奇而已还不是MFC翻版当然它跨平台了就象阻击手对自己枪械零件了如指掌样要想用MFC写出非玩具人定要了解其内部构造还有本书叫MFC深入浅出并不是同本IDE:Microsoft Visual Studio 2005 Unleashed工欲善其事必先利其器当然我认为和其用形如Source Insight、Slick Edit、Code Visualizer的类代码阅读器、图形化工具还不如用自己大脑但如果你嫌打源代码慢话可以用Visual AssistX如果嫌老是写重复相似代码话可以用Code Smith单元测试可以用CppUnitBoost库中测试框架也不错有心情可以吧Visual Studio外接[Page]Intel Compiler内嵌STLport但不是大工程性能分析没必要动不动就用下VTune吧员的路:游戏的旅——我编程领悟云风大哥在我心目中游戏员国外首推卡马克国内首推云风也许过两年我会到网易当云风大哥助理员吧It’s my dream.(^-^)他写这本书时候本着只有透彻理解东西才写出来因此内容不会很酷新但是相信我每读遍都有新收获主要还不是知识上知识是学无止境授人以鱼不如授人以渔精神上启迪才是长久诚如经典游戏仙剑奇侠传主力员兼美术指导姚壮宪(人称姚仙)在序言中所说“云风得到只是些稿费而整个中国民族游戏产业得到将是次知识推动”此言不虚矣编程高手箴言梁肇新是豪杰超级解霸作者本来每个合格员(Programmer , 而非Coder)都应该掌握东西现在变成了编程高手独家箴言不知是作者幸运还是中国IT业悲哀知识点还是讲得蛮多不过对MFC地位颇有微词我实在认为MFC名声就是那些不懂得用它人搞臭不过作者牢骚也情有可原每个具有创造力员都应该不太喜欢frameworkMasters of DOOM: How Two Guys Created an Empire and Transformed Pop Culture中文名DOOM启世录卡马克罗洛斯这些游戏史上如雷贯耳名字(现在卡马克已专注于火箭制造上罗洛斯则携妻回乡隐居)要不是没上过大学卡马克和图形学大师亚伯拉罕功勋可能到现在游戏中还不知 3维为何物勿庸置疑在计算机界历史是英雄们所推动这本书真实记录了这些尘世英雄所为所思作为员我对这几本策划和美工书也产生了浓厚兴趣以前搞过两年3DS MAX插件编程觉得用maxscript还是好过MaxSDK毕竟游戏开发中所多是模型场景数据导入导出大可不必大动干戈策划:Creating Emotion in Games : The Craft and Art of Emotioneering在壮丽煊目宏伟 3维世界背后在残酷杀戮动人心魄情节背后我们还需要什么来抓住玩家心?答对了就是emotion.真正打动人心才是深入骨髓Ultimate Game Design : Building Game Worlds从名字可以看出写给关卡设计师特别是讲室外自然场景构建颇有可取的处Developing _disibledevent=>s Guide就象名为反模式书讲软件Software团队(Team)运营样这本书讲商业运作多过技术个历经艰难现在盛大游戏员翻译了这本书美工:Digital Cinematography & Directing数字摄影导演术每当你在3DS MAX或者Maya等 3维创作软件Software中摆放摄影机设计其运动轨迹时你可曾想过你也站在导演位置上了?The Animator’s Survival Kit看着这本讲卡通角色运动规律书边产生温习猫和老鼠念头边继续对前不久新闻联播中有关中国产生了某计算机自动卡通动画生成软件Software报道蔑视这条报道称此举可大大加快中国卡通动画产量我且不从技术上探讨其是否是在放卫星(其实我知道得很清楚前文已表本人搞过两年卡通动画辅助软件Software编程)但计算机机械生成动画怎可代替人类充满灵性创作?[Page]The Dark Side of Game Texturing用Photoshop制作材质贴图还真有些学问3维图形学:搞 3维图形学首先还是要扎扎实实先看解析几何、线性代数、计算几何教材后面习题个都不能少国内数学书还是蛮好苏步青大师计算几何称得上具有世界级水准可惜中国CAD宏图被盗版给击垮了现在是我们接过接力棒时候了It’s time!Computer Graphics Geometrical Tools计算机图形学几何工具算法详解算法很多纰漏处也不少3D Math Primer for Graphics and Game Development浅易可作为 3维数学“速食“Mathematics for 3D Game Programming & Computer Graphics第 2版比上面那本深入些证明推理数学气也浓些可作为专业数学书和编程实战个过渡桥梁吧内容涉猎也广射线追踪光照计算可视裁剪碰撞检测多边形技术阴影算法刚体物理流体水波数值思路方法曲线曲面还真够丰富Vector Game Math Processors想学MMX,SSE吗那就看它吧不过从基础讲起要耐心哦DirectX:Introduction to 3D Game Programming with DirectX 9.0DirectX入门龙书作者自己写简单举例框架后面我干脆用State模式把所有例子绑到块儿去了Beginning Direct3D Game Programming作者取得律师学位后变成了游戏员真是怪也哉本书虽定位为入门级书内容颇有独特可取的处它用到举例框架是DXSDK Sample Framework而不是现在通行DXUT要想编译有两种办法吧是自己改写成用DXUT 2是找旧Sample Framework我又懒得为了个举例框架下载整个早期版本DirectX后面在Nvidia SDK 9.5中发现了Advanced Animation with DirectXDirectX高级动画技术骨骼系统渐变关键帧动画偶人技术表情变形粒子系统布料柔体动态材质不而足我常常在想从 3维创作软件Software导出种种效果变成堆text或binary先加密压缩打包再解包解压解密再用游戏重建个Lite 动画系统游戏员也真是辛苦OpenGL:NeHe OpenGL Tutorials虽是网络教程不比正式书逊本来学OpenGL就不过是看百来条C文档工夫吧,如果图形学基础知识扎实话OpenGL Shading LanguageOpenGL支持最新显卡技术要靠修修补补插件扩展所以还要配合Nvidia OpenGL Extension Specications来看为上Focus _disibledevent=>Focus _disibledevent=>Focus _disibledevent=>顾名思义 3本专论虽然都很不深但要对未知 3维模型格式作反向工程前研读Geomipmapping地形算法论文前CAD前还是要看看它们为上如果没从别处得过到基础话脚本:先看Game Scripting Mastery等自己了解了虚拟机构造可以设计出简单脚本解释执行系统了再去查Python , Lua [Page]Ruby手册吧会事半半功倍倍Programming Role Playing Games with DirectX 8.0边教学边用DirectX写出了个GameCore库初具引擎稚形Isometric Game Programming with DirectX 7.03维也是建立在 2维基础上这就是这本书现在还值得看原因Visual C网络游戏建模和实现联众员写功力很扎实讲棋牌类游戏编程特别讲了UML建模和Rotional RoseObject-Oriented Game Development套用某人话:“I like this book.”Shader:要入门可先看Shaders for Game Programmers and Artists讲在RenderMonkey中用HLSL高级着色语言写Shader.再看Direct3D ShaderX : Vertex and Pixel Shander Tips and Tricks用汇编着色语言纯银赤金3大宝库:Game Programming Gems我只见到1-6本据说第7、8本也出来了?附带源代码常有bug不过瑕不掩瑜这套世界顶级游戏员每年度技术文集涉及游戏开发各个方面我觉得富有开发经验人更能在其中找到共鸣Graphics Gems全 5本图形学编程Bible看了这套书你会明白计算机领域科学家和工程师区别的所在科学家总是说这个东西在理论上可行工程师会说要使问题在logN时限内解决我只能忍痛割爱舍繁趋简GPU Gems出了 2本Nvidia公司召集图形学Gurus写等到看懂那天我也有心情跑去Siggraph国际图形学大会上投文章碰运气游戏引擎编程:3D Game Engine Programming是ZFXEngine引擎设计思路阐释很平实冇太多惊喜3D Game Engine Design数学物理理论知识讲解较多本来这样就够了还能期待更多吗?人工智能:AI Techniques for Game Programming讲遗传算法人工神经网络主要用到位图算法书原型是根据作者发表到论坛上内容整理出来还比较切中实际AI Game Programming Wisdom相当于AI编程GemsPC游戏编程(人机博弈)以象棋为蓝本介绍了很多种搜索算法除了常见极大极小值算法及其改进--负极大值算法还有深度优先搜索以外更提供了多种改进算法如:Alpha-Beta,Fail-soft alpha-beta,Aspiration Search, Minimal Window Search,Zobrist Hash,Iterative Deepening,History Heuristic,KillerHeuristic,SSS*,DUAL*,MFD and more.琳琅满目实属难得反外挂:加密和解密(第 2版) 看雪论坛站长 段钢破解序列号和反外挂有关系么?不过世上哪两件事情的间又没有关系呢?UML Distilled Martin Fowler很多人直到看了这本书才真正学懂UMLMartin Fowler是真正大师,从早期分析模式,到这本UML精粹,革命性重构都是他提出,后来又写了企业模式书现在领导个软件Software开发咨询公司去年JavaOne中国大会他作为专家来华了吧个人网站WebSite: [Page]设计模式 3剑客:Design Patterns Elements of Reusable Object-Oriented SoftwareDesign Patterns ExplainedHead First Design Patterns重构 3板斧:Refactoring : Improving the Design of Existing CodeRefactoring to PatternsRefactoring Workbook软件Software工程:Extreme Programming Explained : Embrace Change第 2版其中SimplicityValue真是振聋发聩这就是我什么都喜欢轻量级原因Agile Software Development Principles,Patterns,and Practices敏捷真是炒得够火连企业都有敏捷说不过大师是不会这么advertisingCode Complete第 2版名著数学:数学确定性丧失M.克莱因原来数学也只不过是人类发明和臆造用不着供入神殿想起历史上那么多不食人间烟火科学家(多半是数学家)自以为发现了宇宙运作奥秘是时候走下神坛了物理:普通物理学第册 Physics for Game Developers物理我想就到此为此吧再复杂我可要用Newton Engine,ODE了等待物理卡PPU普及那天就可充分发挥PhysX功效了看过最新细胞分裂游戏Demo演示成千上万个Box疯狂Collide骨灰级玩家该边摸钱包边流口水了2、开源代码:Irrlicht著名鬼火引擎从两年前第眼看到它这个轻量级 3维图形引擎就喜欢上了它源代码优雅高效且不故弄玄虚值得每个C员读并不限于图形编程者它周边中也有不少轻量级东西如Lightfeather扩展引擎ICE、IrrlichtRPG、IrrWizard.还有IrrEdit、IrrKlang、IrrXML可用(可能是为了效率原因很多开源作者往往喜欢自己写XML解析库如以上IrrXML库,即使有现成tinyXML库可用这真会让tomcat里面塞AxisAxis里面塞JUDDI弄得像俄罗斯套娃玩具Java Web Service Coder们汗颜)OGRE排名第开源图形引擎当然规模是很大周边也很多除了以C#写就OgreStudio ofusion嵌入3DS MAX作为WYSWYG式 3维编辑器也是棒棒特别是其几个场景、地形插件值得研究以至于Pro OGRE 3D Programming书专论其使用方法搜狐天龙 8部游戏就是以其作为图形引擎当然还另外开发了引擎插块啦我早知道OGRE开发组中有个中国人谢员他以前做了很多年传统软件Software编程有次天龙 8部游戏图形模块出错信息中包含了串某员工作目录有个文件夹名即是谢员英文名我据此推断谢员即是搜狐北京主程看来中国对开源事业还是有所贡献嘛王开源哥哥努力看来不会白费!(^-^)不过我侦测手法也有些像网站WebSite数据库爆库了非君子的所为作RakNet基于UDI网络库竟还支持声音传输以后和OpenVision结合起来做个视聊试试Blender声誉最盛开源 3维动画软件Software竟还带个游戏引擎虽然操作以快捷键驱动也就是说要背上百来个快捷键才能熟练使用但是作为从商业代码变为开源的作威胁 3维商业巨头轻骑兵历经十年锤炼代码达百万行此代码只应天上有人间哪得几回看怎可不作为长期源码参考?[Page]风魂2维图形库云风大哥成名的作虽然不代表其最高水平(最高水平作为商业代码保存在广州网易互动SVN里呢)但是也可以仰风采了圣剑英雄传2维RPG几个作者已成为成都锦天主力员锦天老总从百万发家 3年时间身价过亿也是代枭雄了这份代码作为几年前学生作品也算可以了个工程讲究是 4平 8稳并不定要哪个模块多么出彩反正我是没有时间写这么个东东连个美工都找不到只能整天想着破解别人资源(^-^)BoostC准标准库我想更多时候可以参考学习其源代码Yake我遇到最好轻量级游戏框架了在以前把个工程中图形引擎从Irrlicht换成OGRE尝试中遇到了它OGRE周边工程在我看来都很庸肿没有完善文档情况下看起来和Linux内核差不多不过这个Yake引擎倒是很喜欢它以个FSM有限状态机作为实时调度核心然后每个模块:物理、图形、网络、脚本、GUI、输入等等都提供个接口接口的下再提供到每种具体开源引擎接口然后再接具体引擎通过这样层层抽象此时你是接Newton Engine,ODE还是PysX都可以;是接OGRE,Crystal Space还是Irrlicht都可以;是接RakNet还是LibCurl都可以;是接PythonLua还是Ruby都可以是接CEGUI还是others是接OIS还是others(呵呵,记不起来others)都可以所以Yake本质上不是OGRE周边虽然用Neoengine人都倒向了它但是现在版本还很早特别是我认为学习研究时定要有这种抽象的抽象接口的接口东西把思维从具体绑定打开而开发时抽象要有限度就像蔡学镛在Java夜未眠中讲面向对象用得过滥也会得OOOO症(面向对象过敏强迫症)Quake Doom系列据说很经典卡马克这种开源黑客精神就值得赞许把商业源代码放出来走自己创新的路让别人追去吧不过Quake 和Unreal引擎 3维编辑器是现在所有编辑器鼻祖看来要好好看看了Nvidia SDK 9.X3维图形编程大宝库这些Diret3D和OpenGL举例都是用来展示其最新显卡技术硬件厂商往往对软件Software产品不甚在意源代码给你看,东西给你用去吧学完了还得买我硬件Intel编译器PhysX物理引擎大概也都是这样Havok会把它Havok物理引擎免费给别人用吗?别说试用版连个Demo都看不到所以这套SDK内容可比MS DirectX SDK里面那些入门级举例酷多了反正我是如获至宝 3月不知愁滋味不过显卡要so-so哦我GeForce 6600有两 3个跑不过去,差强人意3、网站WebSite:员大本营吧软文和“新技术秀”讨厌了点blog和社区是精华的所在www.基础编程学习知识的家员起点游戏员基地文档库中还有点东西投稿接收者Seabug和圣剑英雄传主程Seabug会是同个人吗?个在成都锦天担当技术重担高手还有时间维护网站WebSite吗?我不得而知“何苦做游戏”网站WebSite名字很个性站长也是历尽几年前产业发展初期艰难才出此名字[Page]2维游戏图片资源很多站长柳柳主推RPGMaker 软件Software也可以玩玩吧但对于专业开发者来说不可当真论坛中有不少热心国外高手在活动不用说了世界最大开源代码库入金山怎可空手而返?看到国外那些学生项目动不动就像模像样(DirectX稚形就是英国学生项目在学校还被判为不合格)源代码搜索引擎,支持正则表达式,google Lab中也有当你某种功能写不出来时,可以看下开源代码如何写,当然不过是仅供参考,开源代码未必都有产品级强度说到google,可看Google Power Tools Bible书你会发现google众多产品原来也有这么多使用门道2009-2-12 4:00:21疯狂代码 /。
PIV技术的几种实现方法
r12( x , y ) =
I1( x , y ) I 2( x + x , y + y) dx dy =
I ( x , y ) I ( x + x + x , y + y + y) dx dy
--
--
( 1)
收稿日期: 2002 11 11; 修订日期: 2003 02 20 基金项目: 国家杰出青年科学基金资助项目 ( 50125924) ; 国家自 然科学基 金重点资助 项目 ( 10332050) ; 国 家自然 科学基 金
( 7)
Ho(
) = H(
) - H (2
)=
f ( t ) sin( t ) dt
-
( 8)
He(
) = H(
) + H (2
)=
f ( t ) cos( t ) dt
-
( 9)
F( ) = f ( t ) exp(- j t ) dt = H e ( ) - jH o( )
( 10)
-
由 Fourier 变换的互相关特性可知:
rxy ( ) =
x ( t)y( t +
) dt
FT
RF(
)=
X
* F
(
) YF(
)
( 11)
-
利用式( 10) 与式( 11) 就可容易地推导出 Hartley 变换的互相关特性:
RH ( ) = XH ( ) YH o( ) + XH (- ) YH e ( )
( 12)
第 1期
孙鹤 泉等: PIV 技术的几种实现方法
( 25)
表 1 中列出了上述 3 种方法在计算不同长度的二维互相关函数时所需的计算
算法导论第4版英文版
算法导论第4版英文版Algorithm Introduction, Fourth Edition by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein is undoubtedly one of the most influential books in the field of computer science. With its comprehensive coverage of various algorithms and their analysis, this book has become a beloved resource for students, researchers, and professionals alike.The fourth edition of Algorithm Introduction builds upon the success of its predecessors, offering updated content and new insights into the world of algorithms. It starts with an introduction to algorithm analysis, providing readers with a solid foundation to understand the efficiency and effectiveness of different algorithms. The authors skillfully explain the techniques used in algorithm design and analysis, such as divide-and-conquer, dynamic programming, and greedy algorithms.One of the standout features of this book is its detailed and comprehensive treatment of various data structures. From arrays and linked lists to trees and graphs, the authors explore the intricacies of each data structure, discussing their properties, operations, and analysis. This thorough examination ensures that readers gain a deep understanding of the strengths and weaknesses of different data structures, enabling them to make informed decisions when choosing the appropriate structure for their algorithms.The book also covers a wide range of fundamental algorithms, including sorting, searching, and graph algorithms. The authors presentthese algorithms in a clear and concise manner, using pseudocode and diagrams to facilitate understanding. Additionally, they providedetailed analysis of these algorithms, discussing their time and space complexity, as well as their theoretical limits.Furthermore, Algorithm Introduction delves into advanced topics, such as computational geometry, network flow, and NP-completeness. These topics offer readers a glimpse into the cutting-edge research and real-world applications of algorithms. The authors' expertise in these areas shines through, making the book a valuable resource for those interested in pushing the boundaries of algorithmic research.In addition to its comprehensive content, Algorithm Introduction also stands out for its pedagogical approach. The authors include numerous exercises and problems throughout the book, encouraging readers to apply the concepts they have learned. These exercises not only serve as a means of reinforcing understanding but also provide an opportunity for readers to sharpen their problem-solving skills.The fourth edition of Algorithm Introduction is undoubtedly a must-have for anyone interested in algorithms and their applications. Its clear and concise explanations, comprehensive coverage of topics, and practical exercises make it an invaluable resource for students, researchers, and professionals alike. Whether you are a beginner looking to grasp the basics or an experienced practitioner seeking to expand your knowledge, this book will undoubtedly enhance your understanding of algorithms and their role in computer science.。
Introduction_to_Algorithms
1
书名为《现代计算机常用数据结构和算法》,南京大学出版社出版,现已绝版
目 录
关于本书
iiBiblioteka 第一部分 基础知识第一章 算法概念 1.1 算法 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.1.1 1.1.2 1.2 1.2.1 1.2.2 1.2.3 1.3 1.3.1 1.3.2 1.3.3 1.4 插入排序 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 伪代码的使用约定 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 插入排序算法的分析 . . . . . . . . . . . . . . . . . . . . . . . . . .
和式的界
思考题 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
第一部分 基础知识
2 在学习本篇的内容时,我们建议读者不必一次将这些数学内容全部消化。先浏览 一下这部分的各章,看看它们包含哪些内容,然后直接去读集中谈算法的章节。在阅 读这些章节时,如果需要对算法分析中所用到的数学工具有个更好的理解的话,再回 过头来看这部分。当然,读者也可顺序地学习这几章,以便很好地掌握有关的数学技 巧。 本篇各章的内容安排如下: • 第一章介绍本书将用到的算法分析和设计的框架。 • 第二章精确定义了几种渐近记号,其目的是使读者采用的记号与本书中的一致, 而不在于向读者介绍新的数学概念。 • 第三章给出了对和式求值和限界的方法,这在算法分析中是常常会遇到的。 • 第四章将给出求解递归式的几种方法。我们已在第一章中用这些方法分析了合并 排序,后面还将多次用到它们。一种有效的技术是“主方法”,它可被用来解决 分治算法中出现的递归式。第四章的大部分内容花在证明主方法的正确性,读者 若不感兴趣可以略过。 • 第五章包含了有关集合、关系、函数、图和树的基本定义和记号。这一章还给出 了这些数学对象的一些基本性质。如果读者已学过离散数学课程,则可以略过这 部分内容。 • 第六章首先介绍计数的基本原则,即排列和组合等内容。这一章的其余部分包含 基本概率的定义和性质。
纹理物体缺陷的视觉检测算法研究--优秀毕业论文
摘 要
在竞争激烈的工业自动化生产过程中,机器视觉对产品质量的把关起着举足 轻重的作用,机器视觉在缺陷检测技术方面的应用也逐渐普遍起来。与常规的检 测技术相比,自动化的视觉检测系统更加经济、快捷、高效与 安全。纹理物体在 工业生产中广泛存在,像用于半导体装配和封装底板和发光二极管,现代 化电子 系统中的印制电路板,以及纺织行业中的布匹和织物等都可认为是含有纹理特征 的物体。本论文主要致力于纹理物体的缺陷检测技术研究,为纹理物体的自动化 检测提供高效而可靠的检测算法。 纹理是描述图像内容的重要特征,纹理分析也已经被成功的应用与纹理分割 和纹理分类当中。本研究提出了一种基于纹理分析技术和参考比较方式的缺陷检 测算法。这种算法能容忍物体变形引起的图像配准误差,对纹理的影响也具有鲁 棒性。本算法旨在为检测出的缺陷区域提供丰富而重要的物理意义,如缺陷区域 的大小、形状、亮度对比度及空间分布等。同时,在参考图像可行的情况下,本 算法可用于同质纹理物体和非同质纹理物体的检测,对非纹理物体 的检测也可取 得不错的效果。 在整个检测过程中,我们采用了可调控金字塔的纹理分析和重构技术。与传 统的小波纹理分析技术不同,我们在小波域中加入处理物体变形和纹理影响的容 忍度控制算法,来实现容忍物体变形和对纹理影响鲁棒的目的。最后可调控金字 塔的重构保证了缺陷区域物理意义恢复的准确性。实验阶段,我们检测了一系列 具有实际应用价值的图像。实验结果表明 本文提出的纹理物体缺陷检测算法具有 高效性和易于实现性。 关键字: 缺陷检测;纹理;物体变形;可调控金字塔;重构
Keywords: defect detection, texture, object distortion, steerable pyramid, reconstruction
II
ACM算法书籍推荐收藏
ACM算法书籍推荐收藏.txt52每个人都一条抛物线,天赋决定其开口,而最高点则需后天的努力。
没有秋日落叶的飘零,何来新春绿芽的饿明丽?只有懂得失去,才会重新拥有。
1. CLRS 算法导论算法百科全书,只做了前面十几章的习题,便感觉受益无穷。
2. Algorithms 算法概论短小精悍,别据一格,准经典之作。
一个坏消息: 同算法导论,该书没有习题答案。
好消息:习题很经典,难度也适中,只需花点点时间自己也都能做出来。
不好也不坏的消息:我正在写习题的答案,已完成前三章,还剩九章约二百道题,顺利的话二个月之后发布。
另有中文版名《算法概论》,我没看过,不知道翻译得怎么样。
如果有心的话,还是尽量看原版吧,其实看原版与看中文版花费时间不会相差很大,因为大部分时间其实都花费在做习题上了。
3. Algorithm Design 算法设计很经典的一本书,很久之前看的,遗憾的是现在除了就记得它很经典之外其它都忘光了。
4. SICP 计算机程序的构造和解释六星之书无需多言,虽然这不是一本讲算法的书,但看完此书有助于你更深入的理解什么是递归。
我一直很强调习题,看完此书后你至少应该做完前四章的太部分习题。
否则那是你的遗憾,也是作者的遗憾。
5. Concrete Mathematics 具体数学有人说看TAOCP之前应该先弄清楚这本书的内容,要真是如此的话那我恐怕是看不到TAOCP 了。
零零碎碎的看了一大半,很多东西都没有时间来好好消化。
如果你是刚进大学不久的本科生,有着大把的可自由支配时间,那你幸运又幸福了,花上几个月时间好好的读一下此书吧,收获绝对大于你的期望值。
6. Introduction to The Design and Analysis of Algorithms 算法设计与分析基础很有趣的一本算法书,有许多在别的书上找不到的趣题,看完此书绝对能让你大开眼界,实在是一本居家旅行,面试装逼的必备佳作。
7. 编程之美--微软技术面试心得虽说是一本面试书,但如果把前面十几页扯掉的话,我更愿意把它看作是一本讲解题思维的算法小品。
遗传算法简介 英文版
Using GAs in MatLab
/mirage/GAToolBox/gaot/
MatLab Code
% Bounds on the variables bounds = [-5 5; -5 5]; % Evaluation Function evalFn = 'Four_Eval'; evalOps = []; % Generate an intialize population of size 80 startPop=initializega(80,bounds,evalFn,[1e-10 1]); % GA Options [epsilon float/binary display] gaOpts=[1e-10 1 0]; % Termination Operators -- 500 Generations termFns = 'maxGenTerm'; termOps = [500]; % Selection Function selectFn = 'normGeomSelect'; selectOps = [0.08]; % Crossover Operators xFns = 'arithXover heuristicXover simpleXover'; xOpts = [1 0; 1 3; 1 0];
of steepest gradient. Simple to implement, guaranteed convergence. Must know something about the derivative. Can easily get stuck in a local minimum.
The genetic algorithm was
算法导论习题答案 (1)
Introduction to Algorithms September 24, 2004Massachusetts Institute of Technology 6.046J/18.410J Professors Piotr Indyk and Charles E. Leiserson Handout 7Problem Set 1 SolutionsExercise 1-1. Do Exercise 2.3-7 on page 37 in CLRS.Solution:The following algorithm solves the problem:1.Sort the elements in S using mergesort.2.Remove the last element from S. Let y be the value of the removed element.3.If S is nonempty, look for z=x−y in S using binary search.4.If S contains such an element z, then STOP, since we have found y and z such that x=y+z.Otherwise, repeat Step 2.5.If S is empty, then no two elements in S sum to x.Notice that when we consider an element y i of S during i th iteration, we don’t need to look at the elements that have already been considered in previous iterations. Suppose there exists y j∗S, such that x=y i+y j. If j<i, i.e. if y j has been reached prior to y i, then we would have found y i when we were searching for x−y j during j th iteration and the algorithm would have terminated then.Step 1 takes �(n lg n)time. Step 2 takes O(1)time. Step 3 requires at most lg n time. Steps 2–4 are repeated at most n times. Thus, the total running time of this algorithm is �(n lg n). We can do a more precise analysis if we notice that Step 3 actually requires �(lg(n−i))time at i th iteration.However, if we evaluate �n−1lg(n−i), we get lg(n−1)!, which is �(n lg n). So the total runningi=1time is still �(n lg n).Exercise 1-2. Do Exercise 3.1-3 on page 50 in CLRS.Exercise 1-3. Do Exercise 3.2-6 on page 57 in CLRS.Exercise 1-4. Do Problem 3-2 on page 58 of CLRS.Problem 1-1. Properties of Asymptotic NotationProve or disprove each of the following properties related to asymptotic notation. In each of the following assume that f, g, and h are asymptotically nonnegative functions.� (a) f (n ) = O (g (n )) and g (n ) = O (f (n )) implies that f (n ) = �(g (n )).Solution:This Statement is True.Since f (n ) = O (g (n )), then there exists an n 0 and a c such that for all n √ n 0, f (n ) ←Similarly, since g (n )= O (f (n )), there exists an n � 0 and a c such that for allcg (n ). �f (n ). Therefore, for all n √ max(n 0,n Hence, f (n ) = �(g (n )).�()g n ,0← �),0c 1 � g (n ) ← f (n ) ← cg (n ).n √ n c � 0 (b) f (n ) + g (n ) = �(max(f (n ),g (n ))).Solution:This Statement is True.For all n √ 1, f (n ) ← max(f (n ),g (n )) and g (n ) ← max(f (n ),g (n )). Therefore:f (n ) +g (n ) ← max(f (n ),g (n )) + max(f (n ),g (n )) ← 2 max(f (n ),g (n ))and so f (n ) + g (n )= O (max(f (n ),g (n ))). Additionally, for each n , either f (n ) √max(f (n ),g (n )) or else g (n ) √ max(f (n ),g (n )). Therefore, for all n √ 1, f (n ) + g (n ) √ max(f (n ),g (n )) and so f (n ) + g (n ) = �(max(f (n ),g (n ))). Thus, f (n ) + g (n ) = �(max(f (n ),g (n ))).(c) Transitivity: f (n ) = O (g (n )) and g (n ) = O (h (n )) implies that f (n ) = O (h (n )).Solution:This Statement is True.Since f (n )= O (g (n )), then there exists an n 0 and a c such that for all n √ n 0, �)f ()n ,0← �()g n ,0← f (n ) ← cg (n ). Similarly, since g (n ) = O (h (n )), there exists an n �h (n ). Therefore, for all n √ max(n 0,n and a c � such thatfor all n √ n Hence, f (n ) = O (h (n )).cc�h (n ).c (d) f (n ) = O (g (n )) implies that h (f (n )) = O (h (g (n )).Solution:This Statement is False.We disprove this statement by giving a counter-example. Let f (n ) = n and g (n ) = 3n and h (n )=2n . Then h (f (n )) = 2n and h (g (n )) = 8n . Since 2n is not O (8n ), this choice of f , g and h is a counter-example which disproves the theorem.(e) f(n)+o(f(n))=�(f(n)).Solution:This Statement is True.Let h(n)=o(f(n)). We prove that f(n)+o(f(n))=�(f(n)). Since for all n√1, f(n)+h(n)√f(n), then f(n)+h(n)=�(f(n)).Since h(n)=o(f(n)), then there exists an n0such that for all n>n0, h(n)←f(n).Therefore, for all n>n0, f(n)+h(n)←2f(n)and so f(n)+h(n)=O(f(n)).Thus, f(n)+h(n)=�(f(n)).(f) f(n)=o(g(n))and g(n)=o(f(n))implies f(n)=�(g(n)).Solution:This Statement is False.We disprove this statement by giving a counter-example. Consider f(n)=1+cos(�≈n)and g(n)=1−cos(�≈n).For all even values of n, f(n)=2and g(n)=0, and there does not exist a c1for which f(n)←c1g(n). Thus, f(n)is not o(g(n)), because if there does not exist a c1 for which f(n)←c1g(n), then it cannot be the case that for any c1>0and sufficiently large n, f(n)<c1g(n).For all odd values of n, f(n)=0and g(n)=2, and there does not exist a c for which g(n)←cf(n). By the above reasoning, it follows that g(n)is not o(f(n)). Also, there cannot exist c2>0for which c2g(n)←f(n), because we could set c=1/c2if sucha c2existed.We have shown that there do not exist constants c1>0and c2>0such that c2g(n)←f(n)←c1g(n). Thus, f(n)is not �(g(n)).Problem 1-2. Computing Fibonacci NumbersThe Fibonacci numbers are defined on page 56 of CLRS asF0=0,F1=1,F n=F n−1+F n−2for n√2.In Exercise 1-3, of this problem set, you showed that the n th Fibonacci number isF n=�n−� n,�5where �is the golden ratio and �is its conjugate.A fellow 6.046 student comes to you with the following simple recursive algorithm for computing the n th Fibonacci number.F IB(n)1 if n=02 then return 03 elseif n=14 then return 15 return F IB(n−1)+F IB(n−2)This algorithm is correct, since it directly implements the definition of the Fibonacci numbers. Let’s analyze its running time. Let T(n)be the worst-case running time of F IB(n).1(a) Give a recurrence for T(n), and use the substitution method to show that T(n)=O(F n).Solution: The recurrence is: T(n)=T(n−1)+T(n−2)+1.We use the substitution method, inducting on n. Our Induction Hypothesis is: T(n)←cF n−b.To prove the inductive step:T(n)←cF n−1+cF n−2−b−b+1← cF n−2b+1Therefore, T(n)←cF n−b+1provided that b√1. We choose b=2and c=10.∗{For the base case consider n0,1}and note the running time is no more than10−2=8.(b) Similarly, show that T(n)=�(F n), and hence, that T(n)=�(F n).Solution: Again the recurrence is: T(n)=T(n−1)+T(n−2)+1.We use the substitution method, inducting on n. Our Induction Hypothesis is: T(n)√F n.To prove the inductive step:T(n)√F n−1+F n−2+1√F n+1Therefore, T(n)←F n. For the base case consider n∗{0,1}and note the runningtime is no less than 1.1In this problem, please assume that all operations take unit time. In reality, the time it takes to add two numbers depends on the number of bits in the numbers being added (more precisely, on the number of memory words). However, for the purpose of this problem, the approximation of unit time addition will suffice.Professor Grigori Potemkin has recently published an improved algorithm for computing the n th Fibonacci number which uses a cleverly constructed loop to get rid of one of the recursive calls. Professor Potemkin has staked his reputation on this new algorithm, and his tenure committee has asked you to review his algorithm.F IB�(n)1 if n=02 then return 03 elseif n=14 then return 15 6 7 8 sum �1for k�1to n−2do sum �sum +F IB�(k) return sumSince it is not at all clear that this algorithm actually computes the n th Fibonacci number, let’s prove that the algorithm is correct. We’ll prove this by induction over n, using a loop invariant in the inductive step of the proof.(c) State the induction hypothesis and the base case of your correctness proof.Solution: To prove the algorithm is correct, we are inducting on n. Our inductionhypothesis is that for all n<m, Fib�(n)returns F n, the n th Fibonacci number.Our base case is m=2. We observe that the first four lines of Potemkin guaranteethat Fib�(n)returns the correct value when n<2.(d) State a loop invariant for the loop in lines 6-7. Prove, using induction over k, that your“invariant” is indeed invariant.Solution: Our loop invariant is that after the k=i iteration of the loop,sum=F i+2.We prove this induction using induction over k. We assume that after the k=(i−1)iteration of the loop, sum=F i+1. Our base case is i=1. We observe that after thefirst pass through the loop, sum=2which is the 3rd Fibonacci number.To complete the induction step we observe that if sum=F i+1after the k=(i−1)andif the call to F ib�(i)on Line 7 correctly returns F i(by the induction hypothesis of ourcorrectness proof in the previous part of the problem) then after the k=i iteration ofthe loop sum=F i+2. This follows immediately form the fact that F i+F i+1=F i+2.(e) Use your loop invariant to complete the inductive step of your correctness proof.Solution: To complete the inductive step of our correctness proof, we must show thatif F ib�(n)returns F n for all n<m then F ib�(m)returns m. From the previous partwe know that if F ib�(n)returns F n for all n<m, then at the end of the k=i iterationof the loop sum=F i+2. We can thus conclude that after the k=m−2iteration ofthe loop, sum=F m which completes our correctness proof.(f) What is the asymptotic running time, T�(n), of F IB�(n)? Would you recommendtenure for Professor Potemkin?Solution: We will argue that T�(n)=�(F n)and thus that Potemkin’s algorithm,F ib�does not improve upon the assymptotic performance of the simple recurrsivealgorithm, F ib. Therefore we would not recommend tenure for Professor Potemkin.One way to see that T�(n)=�(F n)is to observe that the only constant in the programis the 1 (in lines 5 and 4). That is, in order for the program to return F n lines 5 and 4must be executed a total of F n times.Another way to see that T�(n)=�(F n)is to use the substitution method with thehypothesis T�(n)√F n and the recurrence T�(n)=cn+�n−2T�(k).k=1Problem 1-3. Polynomial multiplicationOne can represent a polynomial, in a symbolic variable x, with degree-bound n as an array P[0..n] of coefficients. Consider two linear polynomials, A(x)=a1x+a0and B(x)=b1x+b0, where a1, a0, b1, and b0are numerical coefficients, which can be represented by the arrays [a0,a1]and [b0,b1], respectively. We can multiply A and B using the four coefficient multiplicationsm1=a1·b1,m2=a1·b0,m3=a0·b1,m4=a0·b0,as well as one numerical addition, to form the polynomialC(x)=m1x2+(m2+m3)x+m4,which can be represented by the array[c0,c1,c2]=[m4,m3+m2,m1].(a) Give a divide-and-conquer algorithm for multiplying two polynomials of degree-bound n,represented as coefficient arrays, based on this formula.Solution:We can use this idea to recursively multiply polynomials of degree n−1, where n isa power of 2, as follows:Let p(x)and q(x)be polynomials of degree n−1, and divide each into the upper n/2 and lower n/2terms:p(x)=a(x)x n/2+b(x),q(x)=c(x)x n/2+d(x),where a(x), b(x), c(x), and d(x)are polynomials of degree n/2−1. The polynomial product is thenp(x)q(x)=(a(x)x n/2+b(x))(c(x)x n/2+d(x))=a(x)c(x)x n+(a(x)d(x)+b(x)c(x))x n/2+b(x)d(x).The four polynomial products a(x)c(x), a(x)d(x), b(x)c(x), and b(x)d(x)are computed recursively.(b) Give and solve a recurrence for the worst-case running time of your algorithm.Solution:Since we can perform the dividing and combining of polynomials in time �(n), recursive polynomial multiplication gives us a running time ofT(n)=4T(n/2)+�(n)=�(n2).(c) Show how to multiply two linear polynomials A(x)=a1x+a0and B(x)=b1x+b0using only three coefficient multiplications.Solution:We can use the following 3 multiplications:m1=(a+b)(c+d)=ac+ad+bc+bd,m2=ac,m3=bd,so the polynomial product is(ax+b)(cx+d)=m2x2+(m1−m2−m3)x+m3.� (d) Give a divide-and-conquer algorithm for multiplying two polynomials of degree-bound nbased on your formula from part (c).Solution:The algorithm is the same as in part (a), except for the fact that we need only compute three products of polynomials of degree n/2 to get the polynomial product.(e) Give and solve a recurrence for the worst-case running time of your algorithm.Solution:Similar to part (b):T (n )=3T (n/2) + �(n )lg 3)= �(n �(n 1.585)Alternative solution Instead of breaking a polynomial p (x ) into two smaller polynomials a (x ) and b (x ) such that p (x )= a (x ) + x n/2b (x ), as we did above, we could do the following:Collect all the even powers of p (x ) and substitute y = x 2 to create the polynomial a (y ). Then collect all the odd powers of p (x ), factor out x and substitute y = x 2 to create the second polynomial b (y ). Then we can see thatp (x ) = a (y ) + x b (y )· Both a (y ) and b (y ) are polynomials of (roughly) half the original size and degree, and we can proceed with our multiplications in a way analogous to what was done above.Notice that, at each level k , we need to compute y k = y 2 (where y 0 = x ), whichk −1 takes time �(1) per level and does not affect the asymptotic running time.。
算法的利与弊英文作文初中
算法的利与弊英文作文初中英文:Advantages and Disadvantages of Algorithms。
Algorithms play a crucial role in our daily lives, from the way we search for information on the internet to the way we navigate through traffic. They are essentially step-by-step procedures for solving problems or accomplishing tasks, and they have both advantages and disadvantages.One of the main advantages of algorithms is their efficiency. They can quickly and accurately process large amounts of data, making tasks such as data analysis and pattern recognition much easier. For example, when I search for a specific item on an e-commerce website, the algorithm quickly sorts through thousands of products and presents me with relevant options in a matter of seconds. This efficiency saves me time and effort, and helps me find what I need more easily.Another advantage of algorithms is their consistency. Once a set of instructions is programmed into an algorithm, it will consistently produce the same result when given the same input. This reliability is important in fields such as finance and healthcare, where accuracy is crucial. For instance, when I use a financial planning app to track my expenses, I can rely on the algorithm to consistently categorize my transactions and provide me with accurate reports on my spending habits.However, algorithms also have their drawbacks. One of the main disadvantages is their potential for bias. Algorithms are created by humans, and they can inherit the biases and prejudices of their creators. For example, in the field of recruiting, algorithms used to screen job applicants may inadvertently discriminate against certain groups based on factors such as race or gender. This can perpetuate inequality and limit opportunities for marginalized individuals.Another disadvantage of algorithms is their lack ofcreativity and adaptability. While algorithms excel at performing repetitive tasks with precision, they struggle to think outside the box or adapt to unexpected situations. For instance, when I use a navigation app to find the fastest route to a destination, the algorithm may not account for real-time road closures or traffic accidents, leading me to a less efficient route.中文:算法的利与弊。
算法c语言基础PPT课件
Step 1: 比较两个数,显示较大的数 Step 2: 算法结束
没有输入!
算法的五个特性【难点】 没有输出的情形
Concept of Algorithm
Step 1: 输入一个数 Step 2: 求大于该数的最小偶数 Step 3: 算法结束
没有输出!
从计算的本质看算法* 计算:从一个已知的符号串【输入】开始,按照一定的规则【确定性】改变符号串【有效性】,经过有限步骤【有穷性】,最后得到一个满足预先规定的符号串【输出】,这种变换过程就是计算 计算就是符号串的连续变换【算法的执行】
变量(内存单元)
变量(内存单元)
自然语言描述【重点】 用自然语言把算法表示为有穷的步骤 需要保证算法的五个特征(在一定的抽象层次上) 一般形式
Representation of Algorithm
算法名称:【算法命名】 输入:【算法的输入信息】 输出:【算法的输出结果】 Step 1: Step 2: …
Concept of Algorithm
计算过程(e.g.,OS)
算法的五个特性【难点】 有穷性破坏的情形
Concept of Algorithm
Step 1: 打印数字1 Step 2: 打印下一个自然数 Step 3: 转Step 2
死循环!
算法的五个特性【难点】 确定性破坏的情形
Concept of Algorithm
伪代码(Pseudo-code)——掌握程序控制结构之后经常会使用 算法的类程序语言描述形式,目的是为了使被描述的算法可以容易地以任何一种编程语言(Pascal,C,Java)实现 要求结构清晰、代码简单、可读性好,并且类似自然语言, 介于自然语言与编程语言之间 可以用C语言的语法编写,但只强调程序控制结构
【推荐下载】Introduction to Programming Using Python——第10章 笔记
Introduction to Programming Using Python——第10 章笔记10.3 Case Study: Lotto NumbersLISTING 10.2 LottoNumbers.pyisCovered = 99 * [False] # 初始化一个列表endOfInput = True #设定一个跳出loop的判定值while endOfInput: s = input(“Enter a line of numbers separated by space:\n”) # 用户输入数字并以“空格”隔开item = s.split() lst = [eval(x) for x in item] # 注意这种语法格式convert every string in item to number for number in lst: if number == 0: endOfInput = False else: isCovered[number -1] = True #将与之对应的位置标记成”True”allCovered= True #初始化一个值,假设所有值都覆盖了for i in range(99): if isCovered[i] == False: #如果有数没有覆盖,初始值设为F allCovered = False breakif allCovered: print(“The tickets cover all numbers”)else:print(“The tickets don’t cover all numbers”)----------------------------小节分割线-----------------------------10.4 Case Study: Deck of Cards从52 张扑克牌中任意抽取4 张。
deck = [x for x in range(52)] 初始化deck 列表,并赋值0~51,代表52 张牌。
算法的利与弊英语作文
算法的利与弊英语作文Title: Pros and Cons of Algorithms。
In today's digital age, algorithms play a crucial role in various aspects of our lives. From social media feeds to search engine results, algorithms are used to process and analyze data to provide us with personalized content and recommendations. While algorithms have brought about numerous benefits, they also raise concerns about privacy, bias, and ethical implications. In this essay, we will explore the pros and cons of algorithms.On the positive side, algorithms have revolutionized the way we access information and interact with technology. They help streamline processes, improve efficiency, and enhance user experience. For example, recommendation algorithms used by streaming platforms like Netflix and Spotify analyze our viewing and listening habits to suggest content that matches our preferences. This not only saves us time searching for new movies or songs but alsointroduces us to new content we might enjoy.Algorithms also play a crucial role in fields like healthcare, finance, and transportation. In healthcare, algorithms are used to analyze medical data and identify patterns that can help diagnose diseases and develop personalized treatment plans. In finance, algorithms are used for high-frequency trading and risk management. In transportation, algorithms optimize traffic flow, reduce congestion, and improve public transportation systems.However, despite their many benefits, algorithms also have their drawbacks. One major concern is the issue of bias. Algorithms are only as good as the data they are trained on, and if the data is biased, the algorithm will produce biased results. For example, a facial recognition algorithm trained on data that is predominantly white may struggle to accurately identify people of color. This can lead to discriminatory outcomes and reinforce existing inequalities.Another concern is the lack of transparency andaccountability in algorithmic decision-making. Algorithms are often treated as black boxes, making it difficult for users to understand how decisions are being made. This lack of transparency can erode trust in algorithms and raise questions about their fairness and reliability. For example, in the case of automated hiring systems, it is unclear how the algorithms are evaluating candidates and what criteria they are using to make decisions.Furthermore, algorithms raise concerns about privacyand data security. As algorithms collect and analyze vast amounts of data about us, there is a risk that this information could be misused or leaked. For example, social media algorithms track our online behavior to deliver targeted ads, but this also raises concerns about how our personal information is being used and shared without our consent.In conclusion, algorithms have the potential to bring about significant benefits in terms of efficiency, personalization, and innovation. However, it is importantto be aware of the potential drawbacks and risks associatedwith algorithms, such as bias, lack of transparency, and privacy concerns. As we continue to rely on algorithms in various aspects of our lives, it is essential to address these issues and ensure that algorithms are used responsibly and ethically. Only then can we fully harness the power of algorithms for the greater good.。
面向教学实验的轻型无监督PIV算法
基于运动的粒子的灰度及其外形在极短的间隔时间
内保持不变的假设,我们设计了图像变换层。图像变换
层的作用是利用图像 A 和运动矢量进行变换得到图像 B
的估计。图像变形层可以用如式(1)所示的公式描述图像
变换的过程。
I~i
B ,j
=
IA i+v, j+u
(1)
式(1)中,i,j 表示像素的位置,u,v 表示速度场估 计的水平分量和垂直分量。图像变换层使模型可以用图
UNet 网络,模型参数仅为 LiteFlowNet 模型的十分之
一,且训练过程没有速度场真值的参与。相较于其他几
种深度学习 PIV 算法,本文算法精度有所下降,但仍然
取得了可以接受的结果,在不同类型的粒子图像中平均
单像素的估计误差均低于 0.5 像素。
在运算速度上,基于深度学习的 PIV 算法可以使用
的干扰。主干神经网络提取出运动场的估价值,平滑层
对运动场估计值进行平滑滤波,同时利用掩膜层消除无
粒子区域的干扰。用运动场估价值对图像 A 进行空间变
换,得到图像 B 的估计。图像 B 与图像 B 的估计进行对
应像素的亮度值计算损失,并将损失值反向传播用于调
整主干神经网络的参数。重复以上过程直至损失值收敛。
1.1 模型概述 本文提出的轻型无监督学习粒子图像算法 MinUnS_ PIV,算法模型如图 1 所示。
图 1 MinUnS_PIV 模型 Fig.1 MinUnS_PIV model 收稿日期 :2023-09-10 * 基金项目 :浙江省教育厅一般科研项目(Y202147419);青年科学基金项目(12002334) 作者简介 :施芒 (1983— ),男,浙江金华人,硕士研究生,高级工程师,从事机器视觉及人工智能研究工作。
- 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
- 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
- 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。
PIV 技术的几种实现方法孙鹤泉,沈永明,王永学,康海贵,李广伟(大连理工大学海岸和近海工程国家重点实验室,辽宁大连 116023)摘要:PI V 技术是一种基于流场图像互相关分析的二维流场非接触式测试技术。
在介绍利用F ourier 变换的互相关特性及其快速算法实现PI V 技术的基础上,介绍了可提高分析效率的实序列F ourier 变换的实现方法,并进而提出了一种更高效的基于Hartley 变换的分析法,给出了分离式内核的Hartley 变换互相关特性的明确表达式。
从算法复杂性理论与实际分析时间两方面对3种方法进行的比较,充分说明了Hartley 变换在PI V 技术应用中的优越性。
关 键 词:粒子图像测速法;互相关分析;F ourier 变换;Hartley 变换中图分类号:T B9 文献标识码:A 文章编号:100126791(2004)0120105204收稿日期:2002211211;修订日期:2003202220基金项目:国家杰出青年科学基金资助项目(50125924);国家自然科学基金重点资助项目(10332050);国家自然科学基金资助项目(50125924);武器装备预研基金资助项目(51443010101JW0901)作者简介:孙鹤泉(1973-),男,山东莱州人,大连理工大学讲师,博士研究生,主要从事海洋环境和工程中信息技术及非接触测量方法的应用研究。
E 2mail :hqsun @dlut 1edu 1cn1 PIV 技术简介图1 PI V 实验示意图Fig 11Sketch of PT V experimentPI V 技术[1,2](粒子图像测速法)是一种基于流场图像互相关分析的非接触式二维流场测量技术,能够无扰动、精确有效地测量二维流速分布。
作为PI V 技术核心的流场图像分析法目前主要采用二维快速F ourier 变换实现互相关函数的计算,并利用速度的基本定义,通过测量水质点在已知时间间隔内的位移实现对水质点速度测量;对测量平面上的多个水质点进行跟踪、测量,就可实现流速分布的二维测量。
利用PI V 技术测量流场时,需在流场中散播比重适当且跟随性好的示踪粒子,由示踪粒子的运动来反映水质点的运动;并用自然光或激光对所测平面进行照射,形成光照平面,使用CC D 等摄像设备获得示踪粒子的图像,图1为PI V 实验示意图。
对得到的PI V 图像序列进行互相关分析,就能获得流场的二维速度矢量分布。
2 PIV 的F ourier 实现如果t 1时刻的流场图像表示为I 1(x ,y )=I (x ,y )+n 1(x ,y );那么,在时间间隔Δt 相对较小的情况下,t 2=t 1+Δt 时刻的流场图像就可表示为I 2(x ,y )=I (x +Δx ,y +Δy )+n 2(x ,y );n 1(x ,y )、n 2(x ,y )为随机噪音。
计算I 1(x ,y )与I 2(x ,y )的互相关函数r 12(τx ,τy ),并假定噪音n 1(x ,y )、n 2(x ,y )与有效图像函数I (x ,y )不相关,可得r 12(τx ,τy )=∫∞-∞∫∞-∞I 1(x ,y )I 2(x +τx ,y +τy)d x d y =∫∞-∞∫∞-∞I (x ,y )I (x +Δx +τx ,y +Δy +τy)d x d y(1)第15卷第1期2004年1月 水科学进展ADVANCES I N W ATER SCIE NCEV ol 115,N o 11 Jan 1,2004 I(x,y)的自相关函数为:r(τx,τy)=∫∞-∞∫∞-∞I(x,y)I(x+τx,y+τy)d x d y(2)从而,式(1)可转化为:r12(τx,τy)=r(τx+Δx,τy+Δy)。
自相关函数是偶函数,并在原点取得最大值,即r(τx,τy)≤r(0,0),使得如下不等式成立:r12(τx,τy)≤r12(-Δx,-Δy)(3)由此可知,互相关函数的最大值所在位置对应流场间的相对位移,即水质点在时刻t1与t2之间的位移,进而可计算出流场在Δt内的速度。
在算法实现上,为了提高运算效率,可采用二维快速F ourier变换实现互相关函数的计算。
令I1(x,y)、I2 (x,y)与r12(τx,τy)的F ourier变换分别为I F1(u,v)、I F2(u,v)与R F(u,v),则式(1)对应的频域表达式为R F(u,v)=I3F1(u,v)×I F2(u,v)其中,I3F1(u,v)为I F1(u,v)的共轭。
(4)由式(4)可知,利用F ourier变换的互相关性质和快速算法FFT,进行两次F ourier变换和一次逆变换,就可计算出互相关函数r12(τx,τy),不但提高了计算效率,而且避免了复杂的积分运算。
流场图像是实序列,而其F ourier变换是复序列,导致变换过程中存在很大的冗余,仍会限制存储效率与运算速度,可利用实序列F ourier变换的Hermitian对称性进一步提高分析效率。
构造一个复函数z(x,y)=I1(x,y)+j I2(x,y),该函数对应的F ourier变换为Z(u,v)=Re(u,v)+j Im (u,v),那么,以下关系成立:I F1(u,v)=Re(u,v)+Re(-u,-v)2+jIm(u,v)-Im(-u,-v)2I F2(u,v)=Im(u,v)+Im(-u,-v)2-jRe(u,v)-Re(-u,-v)2(5) 与直接采用复F ourier变换法相比,用式(5)分析PI V图像,可减少一次变换运算,节省约1/3的计算量。
3 基于Hartley变换的PIV方法Hartley变换是类似于F ourier变换的积分变换,其正反变换的积分核相同,具有F ourier变换的大部分特性,且实序列的Hartley变换仍是实序列,避免了变换过程中的冗余性,能成倍地节约内存空间。
另外,Hartley变换的快速实现FHT可采用FFT的结构形式,能进一步提高运算速度,更适合于应用于批量PI V图像分析。
311 一维变换Hartley变换[3、4]及其逆变换的定义为H(ω)=∫∞-∞f(t)cas(ωt)d t f(t)=∫∞-∞H(ω)cas(ωt)dω(6)式中 cas(ωt)=cos(ωt)+sin(ωt)。
定义由Hartley变换构造的奇函数与偶函数[4,5]:(7)H o(ω)=H(ω)-H(-ω)2=∫∞-∞f(t)sin(ωt)d t(8)H e(ω)=H(ω)+H(-ω)2=∫∞-∞f(t)cos(ωt)d t(9)F(ω)=∫∞-∞f(t)exp(-jωt)d t=H e(ω)-jH o(ω)(10)由F ourier变换的互相关特性可知:r xy(τ)=∫∞-∞x(t)y(t+τ)d t FT R F(ω)=X3F(ω)Y F(ω)(11) 利用式(10)与式(11)就可容易地推导出Hartley变换的互相关特性:R H(ω)=X H(ω)Y Ho(ω)+X H(-ω)Y He(ω)(12) 601水科学进展第15卷式中 R H (ω)、X H (ω)与Y H (ω)为r xy (τ)、x (t )与y (t )的Hartley 变换,下标o 、e 表示由式(9)与式(10)构造的奇函数与偶函数。
312 二维变换与二维F ourier 变换不同,二维Hartley 变换的积分核存在两种选择:cas (ux +vy )、cas (ux )cas (vy )。
为了便于快速实现,选择可分离的第二种形式,并得到如下的正逆变换表达式[4,6]H (u ,v )=∫∞-∞∫∞-∞f (x ,y )cas (ux )cas (vy )d x d y (13)f (x ,y )=∫∞-∞∫∞-∞H (u ,v )cas (ux )cas (vy )d u d v(14) 定义由二维Hartley 变换构造的奇函数与偶函数[6]H o (u ,v )=12[H (u ,v )-H (-u ,-v )](15)H e (u ,v )=12[H (u ,v )+H (-u ,-v )](16)可得二维F ourier 变换与Hartley 变换的关系[6]:F (u ,v )=H e (u ,-v )-jH o (u ,v )(17) 二维数据p (x ,y )与q (x ,y )的互相关函数的表达式为r pq (τx ,τy )=∫∞-∞∫∞-∞p (x ,y )q (x +τx,y +τy)d x d y(18) R H (u ,v )、P H (u ,v )与Q H (u ,v )分别对应r pq (τx ,τy )、p (x ,y )与q (x ,y )的二维Hartley 变换,可得到二维Hartley 变换的互相关特性表达式:R H (u ,v )=P He (u ,v )Q He (u ,v )-P Ho (-u ,v )Q Ho (u ,-v )+P He (-u ,v )Q Ho (u ,v )-P Ho (u ,v )Q He (u ,-v )(19)4 算法复杂性对比根据文献[7,8]中提供的FFT 算法与FHT 算法所需的实数乘法、加法次数的计算公式,利用FFT (复)、FFT (实)和FHT 对大小为M ×N 的数据进行互相关分析所需的实数乘法、加法分别为M CF (M ,N )与A CF (M ,N )、图2 速度矢量场Fig 12Velocity mapsM CR (M ,N )与A CR (M ,N )及M CH (M ,N )与A CH (M ,N ):M CF (M ,N )=3MN log 2(MN )-14MN +12(M +N )(20)A CF (M ,N )=9MN log 2(MN )-16MN +12(M +N )(21)M CR (M ,N )=2MN log 2(MN )-8MN +8(M +N )(22)A CR (M ,N )=6MN log 2(MN )-6MN +8(M +N )(23)M CH (M ,N )=32MN log 2(MN )-5MN +6(M +N )(24)A CH (M ,N )=92MN log 2(MN )-4MN +18(M +N )(25) 表1中列出了上述3种方法在计算不同长度的二维互相关函数时所需的计算量。
从表中计算量的对比可看出,利用FHT 的计算量比FFT (复)要减少近一半,从而其计算速度将提高一倍。
另外,在CPU 主频为Intel P -Ⅳ117G H z 的PC 机上,利用3种方法对下载自http ://piv 1vsj 1or 1jp/piv/image 1html 的流场图像piv4111bm p 与piv4121bm p 进行分析,都获得了图2所示的速度矢量图,相应的分析时间分别为:51741949、41561032与31192722s ,从分析时间上进一步反映出FHT701 第1期孙鹤泉等:PI V 技术的几种实现方法法PI V应用中的优越性。