Image Detection: Translated Foreign-Language References


Foreign References in Translation (Chinese)


High-Speed Railway Mobile Communication System Based on 4G LTE Technology
Prof. KS Solanki, Kratika Chouhan, Ujjain Engineering College, Ujjain, Madhya Pradesh, India

Abstract: Over time, high-speed rail (HSR) has come to demand reliable and safe train operation and passenger communication.

To achieve this goal, HSR systems need higher bandwidth and shorter response times, and HSR's older technology must evolve: developing new techniques, improving the existing architecture, and controlling cost.

To meet this requirement, HSR adopted GSM-R, an evolution of GSM, but it cannot satisfy customers' needs.

Therefore the new technology LTE-R was adopted; it provides higher bandwidth and delivers higher customer satisfaction at high speeds.

This paper introduces LTE-R, presents a comparison between GSM-R and LTE-R, and describes which railway mobile communication system performs better at high speed.

Keywords: high-speed rail, LTE, GSM, communication and signaling systems

I. Introduction

High-speed rail raises the requirements placed on the mobile communication system.

With this improvement, the network architecture and hardware equipment must accommodate train speeds of up to 500 km/h.

HSR also requires fast handover.

Therefore, to solve these problems, HSR needs a new technology called LTE-R; HSR based on LTE-R provides high data transmission rates, higher bandwidth, and low latency.

LTE-R can handle the growing traffic volume, ensure passenger safety, and provide real-time multimedia information.

As train speeds keep rising, a reliable broadband communication system is essential for HSR mobile communication.

Quality-of-service (QoS) measures for HSR applications include data rate, bit error rate (BER), and transmission delay.
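The QoS measures named above can be checked mechanically. A minimal sketch follows; the threshold values are hypothetical placeholders for illustration, not figures from the paper:

```python
# Minimal sketch: evaluating the three QoS measures named above.
# The threshold values are hypothetical placeholders, not from the paper.

def bit_error_rate(errored_bits: int, total_bits: int) -> float:
    """BER is simply the fraction of received bits that are wrong."""
    return errored_bits / total_bits

def qos_ok(data_rate_mbps: float, ber: float, delay_ms: float,
           min_rate=10.0, max_ber=1e-6, max_delay=50.0) -> bool:
    """True if all three measures meet the (assumed) targets."""
    return data_rate_mbps >= min_rate and ber <= max_ber and delay_ms <= max_delay

ber = bit_error_rate(errored_bits=2, total_bits=10_000_000)
print(ber)                      # 2e-07
print(qos_ok(50.0, ber, 20.0))  # True
```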

To meet HSR's operational needs, a new system is required that keeps pace with LTE's capabilities and offers new services, yet can still coexist with GSM-R for a long time.

When selecting a suitable wireless communication system for HSR, issues such as performance, services, attributes, frequency band, and industrial support must be considered.

Compared with third-generation (3G) systems, the 4G LTE system has a simple flat architecture, high data rates, and low latency.

Given LTE's performance and maturity, LTE-Railway (LTE-R) is likely to become the next-generation HSR communication system.

II. LTE-R System Description

Considering LTE-R's frequency and spectrum usage is very important for providing more efficient data transmission for high-speed railway (HSR) communication.

Digital Image Processing: Translated Foreign-Language References


Original:

Application of Digital Image Processing in the Measurement of Casting Surface Roughness

Abstract: This paper presents a surface image acquisition system based on digital image processing technology. The image acquired by CCD is pre-processed through image editing, image equalization, image binarization, and feature parameter extraction to achieve casting surface roughness measurement. A three-dimensional evaluation method is used to obtain the evaluation parameters and the casting surface roughness based on the extracted feature parameters. An automatic detection interface for casting surface roughness is compiled in MATLAB, which provides a solid foundation for online, fast detection of casting surface roughness based on image processing technology.

Keywords: casting surface; roughness measurement; image processing; feature parameters

I. INTRODUCTION

Nowadays the demand on machining quality and surface roughness has increased greatly, and machine vision inspection based on image processing has become one of the hotspots of measurement technology in the mechanical industry, thanks to advantages such as non-contact operation, high speed, suitable precision, and strong resistance to interference [1,2]. As there are no established standards for casting surfaces and the range of roughness is wide, detection parameters related only to the height direction cannot meet the current requirements of photoelectric technology; the horizontal spacing of the roughness also requires quantitative representation. Therefore, with a three-dimensional evaluation system for casting surface roughness as the goal [3,4], surface roughness measurement based on image processing technology is presented. Image preprocessing is carried out through image enhancement and image binarization. Three-dimensional roughness evaluation based on the feature parameters is then performed.
An automatic detection interface for casting surface roughness is compiled in MATLAB, providing a solid foundation for online, fast detection of casting surface roughness.

II. CASTING SURFACE IMAGE ACQUISITION SYSTEM

The acquisition system is composed of the sample carrier, a microscope, a CCD camera, an image acquisition card, and a computer. The sample carrier is used to place the tested castings. According to the experimental requirements, we can select a fixed carrier whose sample location can be manually changed, or fixed specimens with a movable sampling stage. Figure 1 shows the whole processing procedure. First, the detected castings should be placed against an illuminated background as far as possible; then, after adjusting the optical lens and setting the CCD camera's resolution and exposure time, the pictures collected by the CCD are saved to computer memory through the acquisition card. Image preprocessing and feature value extraction on the casting surface with the corresponding software follow. Finally, the detection result is output.

III. CASTING SURFACE IMAGE PROCESSING

Casting surface image processing includes image editing, equalization, image enhancement, and image binarization. The original and clipped images of the measured casting are given in Figure 2, in which a) presents the original image and b) shows the clipped image.

A. Image Enhancement

Image enhancement is a processing method that highlights certain image information according to specific needs while weakening or removing unwanted information at the same time [5]. In order to obtain a clearer contour of the casting surface, equalization of the image, namely correction of the image histogram, should be performed before image segmentation. Figure 3 shows the original grayscale image, the equalized image, and their histograms.
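The histogram equalization step described here can be sketched in a few lines. This is a generic pure-Python illustration of the textbook technique, not the authors' MATLAB code:

```python
# Pure-Python sketch of histogram equalization on an 8-bit grayscale image,
# illustrating the "correction of the image histogram" described above.
# Generic textbook version, not the authors' MATLAB implementation.

def equalize(pixels, levels=256):
    """Map gray levels through the normalized cumulative histogram."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function (CDF).
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    # Classic mapping: scale the CDF back to the gray-level range.
    return [round((levels - 1) * cdf[p] / n) for p in pixels]

flat = [50, 51, 52, 53]          # a low-contrast patch
print(equalize(flat))            # spreads the values across the full range
```

After equalization the few occupied gray levels are spread over the whole 0-255 range, which is exactly the flattening of the histogram the text describes.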
As shown in the figure, after gray-level equalization each level of the histogram has roughly the same number of pixels and the histogram becomes flatter. The image appears clearer after the correction and its contrast is enhanced.

Fig. 2 Casting surface image
Fig. 3 Equalization processing image

B. Image Segmentation

Image segmentation is in essence a process of pixel classification, and thresholding is a very important segmentation technique. The optimal threshold is attained through the instruction thresh = graythresh(I). Figure 4 shows the binarized image. The black areas of the image display the portion of the contour below the threshold (0.43137), while the white areas show gray values greater than the threshold. The shadows and shading that emerge in the bright region may be caused by noise or surface depressions.

Fig. 4 Binarization

IV. ROUGHNESS PARAMETER EXTRACTION

In order to detect the surface roughness, it is necessary to extract feature parameters of roughness. The histogram mean and variance are parameters used to characterize the texture size of the surface contour, while the peak area per unit surface is a parameter that can reflect the horizontal roughness of the workpiece, and the kurtosis parameter can characterize roughness in both the vertical and horizontal directions. Therefore, this paper establishes the histogram mean and variance, the peak area per unit surface, and the kurtosis as the roughness evaluation parameters for three-dimensional assessment of castings. An image preprocessing and feature extraction interface is compiled in MATLAB. Figure 5 shows the detection interface for surface roughness.
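MATLAB's graythresh, used above, computes a global threshold by Otsu's method. A compact pure-Python sketch of the same idea (a generic illustration, not the authors' code):

```python
# Pure-Python sketch of Otsu's method, the algorithm behind MATLAB's
# graythresh: pick the threshold that maximizes between-class variance.
# Generic illustration, not the authors' implementation.

def otsu_threshold(pixels, levels=256):
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels):
        w0 += hist[t]              # pixels at or below t (background class)
        if w0 == 0 or w0 == n:
            continue
        sum0 += t * hist[t]
        w1 = n - w0
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        between = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Two well-separated gray populations: the threshold falls between them.
img = [20] * 50 + [200] * 50
t = otsu_threshold(img)
binary = [1 if p > t else 0 for p in img]   # the binarization step
```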
Image preprocessing of the clipped casting can be successfully achieved by this software, which includes image filtering, image enhancement, image segmentation, and histogram equalization, and it can also display the extracted evaluation parameters of surface roughness.

Fig. 5 Automatic roughness measurement interface

V. CONCLUSIONS

This paper investigates a casting surface roughness measurement method based on digital image processing technology. The method is composed of image acquisition, image enhancement, image binarization, and extraction of the characteristic roughness parameters of the casting surface. The interface for image preprocessing and extraction of the roughness evaluation parameters is compiled in MATLAB, which provides a solid foundation for online, fast detection of casting surface roughness.

REFERENCES
[1] Xu Deyan, Lin Zunqi. The optical surface roughness research progress and direction [J]. Optical Instruments, 1996, 18(1): 32-37.
[2] Wang Yujing. Turning surface roughness based on image measurement [D]. Harbin: Harbin University of Science and Technology.
[3] Bradley C. Automated surface roughness measurement [J]. The International Journal of Advanced Manufacturing Technology, 2000, 16(9): 668-674.
[4] Li Chenggui, Li Xingshan, Qiang Xifu. 3D surface topography measurement method [J]. Aerospace Measurement Technology, 2000, 20(4): 2-10.
[5] Liu He. Digital Image Processing and Application [M]. China Electric Power Press, 2005.

Translation: Application of Digital Image Processing in the Measurement of Casting Surface Roughness. Abstract: This paper presents a surface image acquisition system based on digital image processing technology.

Image Recognition: Translated Literature with English-Chinese Parallel Text


Elastic Image Matching

Abstract

One fundamental problem in image recognition is to establish the resemblance of two images. This can be done by searching for the best pixel-to-pixel mapping taking into account monotonicity and continuity constraints. We show that this problem is NP-complete by reduction from 3-SAT, thus giving evidence that the known exponential-time algorithms are justified, but approximation algorithms or simplifications are necessary.

Keywords: Elastic image matching; Two-dimensional warping; NP-completeness

1. Introduction

In image recognition, a common problem is to match two given images, e.g. when comparing an observed image to given references. In that process, elastic image matching, two-dimensional (2D) warping (Uchida and Sakoe, 1998) or similar types of invariant methods (Keysers et al., 2000) can be used. For this purpose, we can define cost functions depending on the distortion introduced in the matching and search for the best matching with respect to a given cost function. In this paper, we show that it is an algorithmically hard problem to decide whether a matching between two images exists with costs below a given threshold. We show that the problem image matching is NP-complete by means of a reduction from 3-SAT, which is a common method of demonstrating a problem to be intrinsically hard (Garey and Johnson, 1979). This result shows the inherent computational difficulties in this type of image comparison, while interestingly the same problem is solvable for 1D sequences in polynomial time, e.g. the dynamic time warping problem in speech recognition (see e.g. Ney et al., 1992). This has the following implications: researchers who are interested in an exact solution to this problem cannot hope to find a polynomial-time algorithm, unless P=NP.
Furthermore, one can conclude that exponential-time algorithms as presented and extended by Uchida and Sakoe (1998, 1999a,b, 2000a,b) may be justified for some image matching applications. On the other hand, this shows that those interested in faster algorithms, e.g. for pattern recognition purposes, are right in searching for sub-optimal solutions. One method to do this is the restriction to local optimizations or linear approximations of global transformations as presented in (Keysers et al., 2000). Another possibility is to use heuristic approaches like simulated annealing or genetic algorithms to find an approximate solution. Furthermore, methods like beam search are promising candidates, as these are used successfully in speech recognition, although linguistic decoding is also an NP-complete problem (Casacuberta and de la Higuera, 1999).

2. Image matching

Among the varieties of matching algorithms, we choose the one presented by Uchida and Sakoe (1998) as a starting point to formalize the problem image matching. Let the images be given as (without loss of generality) square grids of size M×M with gray values (respectively node labels) from a finite alphabet Σ = {1,…,G}. To define the problem, two distance functions are needed: one acting on gray values, d_g : Σ×Σ → N, measuring the match in gray values, and one acting on displacement differences, d_d : Z×Z → N, measuring the distortion introduced by the matching. For these distance functions we assume that they are monotonous functions (computable in polynomial time) of the commonly used squared Euclidean distance, i.e. d_g(g1, g2) = f1(||g1 − g2||²) and d_d(z) = f2(||z||²) with f1, f2 monotonously increasing.
Now we call the following optimization problem the image matching problem (let µ = {1,…,M}).

Instance: The pair (A, B) of two images A and B of size M×M.

Solution: A mapping function f : µ×µ → µ×µ.

Measure:

c(A, B, f) = Σ_{(i,j) ∈ µ×µ} d_g(A_{ij}, B_{f(i,j)})
    + Σ_{(i,j) ∈ {1,…,M−1}×µ} d_d( f((i,j)+(1,0)) − (f(i,j)+(1,0)) )
    + Σ_{(i,j) ∈ µ×{1,…,M−1}} d_d( f((i,j)+(0,1)) − (f(i,j)+(0,1)) )

Goal: min_f c(A, B, f).

In other words, the problem is to find the mapping from A onto B that minimizes the distance between the mapped gray values together with a measure for the distortion introduced by the mapping. Here, the distortion is measured by the deviation from the identity mapping in the two dimensions. The identity mapping fulfills f(i,j) = (i,j), and therefore f((i,j)+(x,y)) = f(i,j)+(x,y).

The corresponding decision problem is fixed by the following

Question: Given an instance of image matching and a cost c′, does there exist a mapping f such that c(A, B, f) ≤ c′?

In the definition of the problem some care must be taken concerning the distance functions. For example, if either one of the distance functions is a constant function, the problem is clearly in P (for d_g constant, the minimum is given by the identity mapping, and for d_d constant, the minimum can be determined by sorting all possible matchings for each pixel by gray-value cost and mapping to one of the pixels with minimum cost). But these special cases are not those we are concerned with in image matching in general.

We choose the matching problem of Uchida and Sakoe (1998) to complete the definition of the problem. Here, the mapping functions are restricted by continuity and monotonicity constraints: the deviations from the identity mapping may locally be at most one pixel (i.e. limited to the eight-neighborhood with squared Euclidean distance less than or equal to 2). This can be formalized in this approach by choosing the functions f1, f2 as e.g. f1 = id and f2(x) = step(x) := 0 for x ≤ 2, and 10·G·M² for x > 2.
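For intuition, the cost function c(A, B, f) can be evaluated directly from the definition. A small pure-Python sketch under the choices above (squared-difference d_g, and the hard step-function d_d enforcing the eight-neighborhood constraint); this is an illustrative reading of the formula, not code from the paper:

```python
# Sketch: evaluating the matching cost c(A, B, f) from the definition above.
# Images are M×M lists; f maps (i, j) -> (i', j'). The distance choices
# follow the text: d_g = squared difference, d_d = step function that is 0
# within the eight-neighborhood (squared norm <= 2) and huge otherwise.

def cost(A, B, f):
    M = len(A)
    G = max(max(row) for row in A + B)
    big = 10 * G * M * M                      # prohibitive distortion cost

    def d_d(dz):
        return 0 if dz[0] ** 2 + dz[1] ** 2 <= 2 else big

    total = 0
    for i in range(M):                        # gray-value term
        for j in range(M):
            fi, fj = f[(i, j)]
            total += (A[i][j] - B[fi][fj]) ** 2
    for i in range(M - 1):                    # vertical distortion term
        for j in range(M):
            a, b = f[(i + 1, j)], f[(i, j)]
            total += d_d((a[0] - b[0] - 1, a[1] - b[1]))
    for i in range(M):                        # horizontal distortion term
        for j in range(M - 1):
            a, b = f[(i, j + 1)], f[(i, j)]
            total += d_d((a[0] - b[0], a[1] - b[1] - 1))
    return total

A = [[1, 2], [3, 4]]
identity = {(i, j): (i, j) for i in range(2) for j in range(2)}
print(cost(A, A, identity))   # 0: identical images under the identity mapping
```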
3. Reduction from 3-SAT

3-SAT is a very well-known NP-complete problem (Garey and Johnson, 1979), where 3-SAT is defined as follows:

Instance: A collection of clauses C = {c_1,…,c_K} on a set of variables X = {x_1,…,x_L} such that each c_k consists of 3 literals for k = 1,…,K. Each literal is a variable or the negation of a variable.

Question: Is there a truth assignment for X which satisfies each clause c_k, k = 1,…,K?

The dependency graph D(Φ) corresponding to an instance Φ of 3-SAT is defined to be the bipartite graph whose independent sets are formed by the set of clauses C and the set of variables X. Two vertices c_k and x_l are adjacent iff c_k involves x_l or ¬x_l. Given any 3-SAT formula Φ, we show how to construct in polynomial time an equivalent image matching problem I(Φ) = (A(Φ), B(Φ)). The two images of I(Φ) are similar according to the cost function (i.e. there exists f with c(A(Φ), B(Φ), f) ≤ 0) iff the formula Φ is satisfiable. We perform the reduction from 3-SAT using the following steps:

• From the formula Φ we construct the dependency graph D(Φ).
• The dependency graph D(Φ) is drawn in the plane.
• The drawing of D(Φ) is refined to depict the logical behaviour of Φ, yielding the two images (A(Φ), B(Φ)).

For this, we use three types of components: one component to represent variables of Φ, one component to represent clauses of Φ, and components which act as interfaces between the former two types. Before we give the formal reduction, we introduce these components.

3.1. Basic components

For the reduction from 3-SAT we need five components from which we will construct the instances for image matching, given a Boolean formula in 3-CNF, respectively its graph. The five components are the building blocks needed for the graph drawing and will be introduced in the following, namely the representations of connectors, crossings, variables, and clauses. The connectors represent the edges and have two varieties: straight connectors and corner connectors.
Each of the components consists of two parts, one for image A and one for image B, where blank pixels are considered to be of the 'background' color. We will depict possible mappings in the following using arrows indicating the direction of displacement (where displacements within the eight-neighborhood of a pixel are the only cases considered). Blank squares represent mapping to the respective counterpart in the second image. Some displacements of neighboring pixels can be used with zero cost, while other displacements result in costs greater than zero.

Fig. 1 shows the first component, the straight connector component, which consists of a line of two different interchanging colors, here denoted by the two symbols ◇ and □. Given that the outside pixels are mapped to their respective counterparts and the connector is continued infinitely, there are two possible ways in which the colored pixels can be mapped, namely to the left (i.e. f(2,j) = (2,j−1)) or to the right (i.e. f(2,j) = (2,j+1)), where the background pixels have different possibilities for the mapping, not influencing the main property of the connector. This property, which justifies the name 'connector', is the following: it is not possible to find a mapping which yields zero cost where the relative displacements of the connector pixels are not equal, i.e. one always has f(2,j) − (2,j) = f(2,j′) − (2,j′), which can easily be observed by induction over j′. That is, given an initial displacement of one pixel (which will be ±1 in this context), the remaining end of the connector has the same displacement if the overall cost of the mapping is zero.
Given this property and the direction of a connector, which we define to be directed from variable to clause, we can define the state of the connector as carrying the 'true' truth value if the displacement is 1 pixel in the direction of the connector, and as carrying the 'false' truth value if the displacement is −1 pixel in the direction of the connector. This property then ensures that the truth value transmitted by the connector cannot change at mappings of zero cost.

Fig. 1. The straight connector component with two possible zero-cost mappings (image A, image B, mapping 1, mapping 2).

For the drawing of arbitrary graphs, clearly one also needs corners, which are represented in Fig. 2. By considering all possible displacements which guarantee overall cost zero, one can observe that the corner component also ensures the basic connector property. For example, consider the first depicted mapping, which has zero cost. On the other hand, the second mapping shows that it is not possible to construct a zero-cost mapping with both connectors 'leaving' the component. In that case, the pixel at the position marked '?' either has a conflict (that is, introduces a cost greater than zero in the criterion function because of mapping mismatch) with the pixel above or to the right of it if the same color is to be met; otherwise, a cost in the gray-value mismatch term is introduced.

Fig. 2. The corner connector component and two example mappings (image A, image B, mapping 1, mapping 2).

Fig. 3 shows the variable component, in this case with two positive outputs (to the left) and one negated output (to the right) leaving the component as connectors. Here, a fourth color is used, denoted by ·. This component has two possible zero-cost mappings for the colored pixels, which map the vertical component of the source image to the left or the right vertical component in the target image, respectively. (In both cases the second vertical element in the target image is not a target of the mapping.)
This ensures ±1 pixel relative displacements at the entry to the connectors. This property again can be deduced by regarding all possible mappings of the two images. The property that follows (which is necessary for the use as variable) is that all zero-cost mappings ensure that all positive connectors carry the same truth value, which is the opposite of the truth value of all the negated connectors. It is easy to see from this example how variable components for arbitrary numbers of positive and negated outputs can be constructed.

Fig. 3. The variable component with two positive and one negated output and two possible mappings (for true and false truth values).

Fig. 4 shows the most complex of the components, the clause component. This component consists of two parts. The first part is the horizontal connector with a 'bend' in it to the right. This part has the property that zero-cost mappings are possible for all truth values of x and y with the exception of two 'false' values. This two-input disjunction can be extended to a three-input disjunction using the part in the lower left. If the z connector carries a 'false' truth value, this part can only be mapped one pixel downwards at zero cost. In that case the junction pixel (the fourth pixel in the third row) cannot be mapped upwards at zero cost and the 'two-input clause' behaves as described above. On the other hand, if the z connector carries a 'true' truth value, this part can only be mapped one pixel upwards at zero cost, and the junction pixel can be mapped upwards, thus allowing both x and y to carry a 'false' truth value in a zero-cost mapping. Thus there exists a zero-cost mapping of the clause component iff at least one of the input connectors carries a 'true' truth value.
Fig. 4. The clause component with three incoming connectors x, y, z and zero-cost mappings for the two cases (true, true, false) and (false, false, true).

The described components are already sufficient to prove NP-completeness by reduction from planar 3-SAT (which is an NP-complete sub-problem of 3-SAT where the additional constraint on the instances is that the dependency graph is planar), but in order to derive a reduction from 3-SAT, we also include the possibility of crossing connectors. Fig. 5 shows the connector crossing, whose basic property is to allow zero-cost mappings if the truth values are consistently propagated. This is assured by a color change of the vertical connector and a 'flexible' middle part, which can be mapped to four different positions depending on the truth value distribution.

Fig. 5. The connector crossing component and one zero-cost mapping.

3.2. Reduction

Using the previously introduced components, we can now perform the reduction from 3-SAT to image matching.

Proof of the claim that the image matching problem is NP-complete: Clearly, the image matching problem is in NP since, given a mapping f and two images A and B, the computation of c(A, B, f) can be done in polynomial time. To prove NP-hardness, we construct a reduction from the 3-SAT problem.
Given an instance of 3-SAT we construct two images A and B for which a mapping of cost zero exists iff all the clauses can be satisfied. Given the dependency graph D, we construct an embedding of the graph into a 2D pixel grid, placing the vertices at a large enough distance from each other (say 100(K+L)²). This can be done using well-known methods from graph drawing (see e.g. di Battista et al., 1999). From this image of the graph D we construct the two images A and B, using the components described above. Each vertex belonging to a variable is replaced with the respective parts of the variable component, having a number of leaving connectors equal to the number of incident edges, taking into account the positive or negative use in the respective clause. Each vertex belonging to a clause is replaced by the respective clause component, and each crossing of edges is replaced by the respective crossing component. Finally, all the edges are replaced with connectors and corner connectors, and the remaining pixels inside the rectangular hull of the construction are set to the background gray value. Clearly, the placement of the components can be done in such a way that all the components are at a large enough distance from each other, where the background pixels act as an 'insulation' against mapping of pixels which do not belong to the same component. It can easily be seen that the size of the constructed images is polynomial with respect to the number of vertices and edges of D, and thus polynomial in the size of the instance of 3-SAT, at most on the order of (K+L)². Furthermore, it can obviously be constructed in polynomial time, as the corresponding graph drawing algorithms are polynomial. Let there exist a truth assignment to the variables x_1,…,x_L which satisfies all the clauses c_1,…,c_K.
We construct a mapping f that satisfies c(f, A, B) = 0 as follows. For all pixels (i, j) belonging to variable component l with A(i, j) not of the background color, set f(i, j) = (i, j−1) if x_l is assigned the truth value 'true', and set f(i, j) = (i, j+1) otherwise. For the remaining pixels of the variable component, set f(i, j) = (i, j) if A(i, j) = B(i, j); otherwise choose f(i, j) from {(i, j+1), (i+1, j+1), (i−1, j+1)} for x_l 'false', respectively from {(i, j−1), (i+1, j−1), (i−1, j−1)} for x_l 'true', such that A(i, j) = B(f(i, j)). This assignment is always possible and has zero cost, as can be easily verified.

For the pixels (i, j) belonging to (corner) connector components, the mapping function can only be extended in one way without the introduction of nonzero cost, starting from the connection with the variable component. This is ensured by the basic connector property. By choosing f(i, j) = (i, j) for all pixels of background color, we obtain a valid extension for the connectors. For the connector crossing components the extension is straightforward, although here, as in the variable mapping, some care must be taken with the assignment of the background value pixels, but a zero-cost assignment is always possible using the same scheme as presented for the variable mapping.

It remains to be shown that the clause components can be mapped at zero cost if at least one of the input connectors x, y, z carries a 'true' truth value. For a proof we regard all seven possibilities and construct a mapping for each case. In the description of the clause component it was already argued that this is possible, and due to space limitations we omit the formalization of the argument here. Finally, for all the pixels (i, j) not belonging to any of the components, we set f(i, j) = (i, j), thus arriving at a mapping function which has c(f, A, B) = 0.

Foreign Literature and Translation


(English references and translation) June 2016. Undergraduate thesis. Title: STATISTICAL SAMPLING METHOD, USED IN THE AUDIT. Student: Wang Xueqin; School: School of Management; Department: Accounting; Major: Financial Management; Class: Financial Management 12-2; School code: 10128; Student ID: 201210707016

Statistics and Audit
Romanian Statistical Review nr. 5 / 2010

STATISTICAL SAMPLING METHOD, USED IN THE AUDIT - views, recommendations, findings
PhD Candidate Gabriela-Felicia UNGUREANU

Abstract

The rapid increase in the size of U.S. companies from the early twentieth century created the need for audit procedures based on the selection of a part of the total population audited to obtain reliable audit evidence, to characterize the entire population consisting of account balances or classes of transactions. Sampling is not used only in audit: it is used in sampling surveys, market analysis and medical research in which someone wants to reach a conclusion about a large number of data by examining only a part of these data. The difference is the "population" from which the sample is selected, i.e. that set of data from which a conclusion is intended to be drawn. Audit sampling applies only to certain types of audit procedures.

Key words: sampling, sample risk, population, sampling unit, tests of controls, substantive procedures.

Statistical sampling

The statistical sampling committee of the American Institute of Certified Public Accountants (AICPA) issued in 1962 a special report, titled "Statistical sampling and independent auditors", which allowed the use of the statistical sampling method in accordance with Generally Accepted Auditing Standards (GAAS). During 1962-1974, the AICPA published a series of papers on statistical sampling, "Auditor's Approach to Statistical Sampling", for use in continuing professional education of accountants.
In 1981, the AICPA issued the professional standard "Audit Sampling", which provides general guidelines for both sampling methods, statistical and non-statistical.

Earlier audits included checks of all transactions in the period covered by the audited financial statements. At that time, the literature did not give particular attention to this subject. Only in 1971 did an audit procedures program printed in the "Federal Reserve Bulletin" include several references to sampling, such as selecting the "few items" of inventory. The program was developed by a special committee, which later became the AICPA, the American Institute of Certified Public Accountants.

In the first decades of the last century, auditors often applied sampling, but sample size was not related to the efficiency of the entity's internal control. In 1955, the American Institute of Accountants published a case study on extending audit sampling, summarizing an audit program developed by certified public accountants, to show why sampling is necessary to extend the audit. The study was important because it is one of the leading works on sampling which recognize a relationship of dependency between detail testing and the reliability of internal control.

In 1964, the AICPA's Auditing Standards Board issued a report entitled "The relationship between statistical sampling and Generally Accepted Auditing Standards (GAAS)", which illustrated the relationship between accuracy and reliability in sampling and the provisions of GAAS.

In 1978, the AICPA published the work of Donald M.
Roberts, "Statistical Auditing", which explains the underlying theory of statistical sampling in auditing. In 1981, the AICPA issued the professional standard named "Audit Sampling", which provides guidelines for both sampling methods, statistical and non-statistical.

An auditor does not rely solely on the results of a single procedure to reach a conclusion on an account balance, class of transactions or operational effectiveness of controls. Rather, the audit findings are based on combined evidence from several sources, as a consequence of a number of different audit procedures. When an auditor selects a sample of a population, his objective is to obtain a representative sample, i.e. a sample whose characteristics are identical with the population's characteristics. This means that selected items are identical with those remaining outside the sample. In practice, auditors do not know for sure if a sample is representative, even after completing the test, but they "may increase the probability that a sample is representative by accuracy of activities made related to design, sample selection and evaluation" [1]. Lack of specificity of the sample results may be caused by observation errors and sampling errors. The risks of producing these errors can be controlled.

Observation error (risk of observation) appears when the audit test did not identify existing deviations in the sample, because an inadequate audit technique was used or through negligence of the auditor.

Sampling error (sampling risk) is an inherent characteristic of the survey, which results from the fact that only a fraction of the total population is tested. Sampling error occurs because it is possible for the auditor to reach a conclusion, based on a sample, that is different from the conclusion which would be reached if the entire population had been subjected to identical audit procedures.
Sampling risk can be reduced by adjusting the sample size, depending on the size and characteristics of the population, and by using an appropriate method of selection. Increasing the sample size will reduce the sampling risk; a sample of the whole population will present a null sampling risk.

Audit sampling is a testing method for gathering sufficient and appropriate audit evidence for the purposes of the audit. The auditor may decide to apply audit sampling to an account balance or class of transactions. Audit sampling applies audit procedures to less than 100% of the items within an account balance or class of transactions, such that every sampling unit is able to be selected. The auditor is required to determine appropriate ways of selecting items for testing. Audit sampling can be used as a statistical or a non-statistical approach.

Statistical sampling is a method by which the sample is drawn so that each unit of the total population has an equal probability of being included in the sample; the method of sample selection is random, which allows the results to be assessed based on probability theory and the sampling risk to be quantified. Choosing the appropriate population means that the auditor's findings can be extended to the entire population.

Non-statistical sampling is a method of sampling in which the auditor uses professional judgment to select the elements of a sample. Since the purpose of sampling is to draw conclusions about the entire population, the auditor should select a representative sample by choosing sample units which have characteristics typical of that population. Otherwise the results cannot be extrapolated to the entire population, as the sample selected would not be representative.

Audit tests can be applied to all the elements of the population where the population is small, or to an unrepresentative sample where the auditor knows the particularities of the population to be tested and is able to identify a small number of items of interest to the audit.
If the sample does not have characteristics similar to those of the entire population, the errors found in the tested sample cannot be extrapolated. The decision for a statistical or non-statistical approach depends on the professional judgment of the auditor, who seeks sufficient appropriate audit evidence on which to base the findings for the audit opinion. Statistical sampling methods rely on random selection, in which any possible combination of elements of the population is equally likely to enter the sample. Simple random sampling is used when the population has not been stratified for the audit. Random selection involves using random numbers generated by a computer. After selecting a random starting point, the auditor finds the first random number that falls within the range of the test document numbers. Only when the approach has the characteristics of statistical sampling are statistical assessments of sampling risk valid. In another variant of probability sampling, namely systematic selection (also called mechanical random selection), elements naturally succeed one another in space or time; the auditor has a preliminary listing of the population and makes a decision on sample size. “The auditor calculates a counting step, and selects the sample elements based on the step size. The counting step is determined by dividing the volume of the population by the desired number of sample units. The advantage of systematic selection is its usability. In most cases, a systematic sample can be extracted quickly, and the method automatically arranges numbers in successive series.” [2] Selection with probability proportional to size is a method which emphasizes those population units with higher recorded values.
The sample is constituted so that the probability of selecting any given element of the population is proportional to the recorded value of the item. Stratified selection is a method that emphasizes units with higher recorded values through stratification of the population into subpopulations. Stratification gives the auditor a complete picture when the population (the data table to be analyzed) is not homogeneous. In this case, the auditor stratifies the population by dividing it into distinct subpopulations which have pre-defined common characteristics. “The objective of stratification is to reduce the variability of elements in each layer and therefore allow a reduction in sample size without a proportionate increase in the risk of sampling.” [3] If the stratification of the population is done properly, the total sample size across the strata will be less than the sample size that would be obtained, at the same level of sampling risk, with a sample extracted from the entire population. Audit results applied to a stratum can be projected only onto the items that are part of that stratum. Some views on non-statistical sampling methods are also useful; these imply guided selection of the sample, selecting each element according to certain criteria determined by the auditor. The method is subjective, because the auditor intentionally selects items containing the features he has set. Selection in series is done by selecting multiple series of (successive) elements. Using series sampling is recommended only if a reasonable number of series is used; with just a few series, there is a risk that the sample is not representative. This type of sampling can be used in addition to other samples where there is a high probability of occurrence of errors. In arbitrary selection, no items are preferentially selected by the auditor, regardless of size, source or characteristics.
It is not a recommended method, because it is not objective. This sampling is based on the auditor's professional judgment, by which he may decide which items can or cannot be part of the sample. Because it is not a statistical method, the standard error cannot be calculated. Although the sample structure can be constructed to reproduce the population, there is no guarantee that the sample is representative; if a feature that would be relevant in a particular situation is omitted, the sample is not representative. Sampling applies when the auditor plans to draw conclusions about a population based on a selection. The auditor considers the audit program and determines the audit procedures to which random testing may apply. Sampling is used by auditors in testing internal control systems and in substantive testing of operations. The general objectives of tests of the control system and of substantive tests of operations are to verify the application of pre-defined control procedures and to determine whether operations contain material errors. Control tests are intended to provide evidence of the operational efficiency and design of controls, or of the operation of a control system, to prevent or detect material misstatements in the financial statements. Control tests are necessary if the auditor plans to assess control risk for management assertions. Controls are generally expected to be applied similarly to all transactions covered by the records, regardless of transaction value. Therefore, if the auditor uses sampling, it is not advisable to select only high-value transactions; samples must be chosen so as to be representative of the population. An auditor must be aware that an entity may change a particular control during the course of the audit. If the control is replaced by another which is designed to achieve the same specific objective, the auditor must decide whether to design a sample of all transactions made during the period or just a sample of the transactions controlled again.
The appropriate decision depends on the overall objective of the audit test. Verification of the internal control system of an entity is intended to provide guidance on the identification of relevant controls and on the design of evaluation tests of controls. Other tests: in testing the internal control system and testing operations, the audit sample is used to estimate the proportion of elements of a population containing a characteristic or attribute under analysis. This proportion is called the frequency of occurrence, or percentage of deviation, and is equal to the ratio of the number of elements containing the specific attribute to the total number of population elements. The weight of deviations in a sample is determined in order to calculate an estimate of the proportion of deviations in the total population. Risk associated with sampling refers to the possibility that a selected sample is not representative of the population tested; in other words, the sample itself may contain material errors or deviations. Consequently, a conclusion issued on the basis of a sample may be different from the conclusion which would be reached if the entire population were subject to audit. Types of risk associated with sampling: controls are assessed as more effective than they actually are, or significant errors are assessed as absent when they exist, which leads to an inappropriate audit opinion; or controls are assessed as less effective than they actually are, or significant errors are assessed as present when in fact they are not, which calls for additional activities to establish that the initial conclusions were incorrect. In attributes testing, the auditor should define the characteristics to be tested and the conditions of misconduct. Attributes testing is performed when objective statistical projections on various characteristics of the population are required.
The auditor may decide to select items from a population based on his knowledge of the entity and its control environment, based on risk analysis and the specific characteristics of the population to be tested. The population is the mass of data from which the auditor wishes to generalize the findings obtained on a sample. The population will be defined in compliance with the audit objectives and will be complete and consistent, because the results of the sample can be projected only onto the population from which the sample was selected. Sampling unit — a sampling unit may be, for example, an invoice, an entry or a line item. Each sampling unit is an element of the population. The auditor defines the sampling unit based on its compliance with the objectives of the audit tests. Sample size — to determine the sample size, it should be considered whether the sampling risk is reduced to an acceptably low level. Sample size is affected by the sampling risk that the auditor is willing to accept: the lower the risk the auditor is willing to accept, the larger the sample will be. Error — for tests of details, the auditor should project the monetary errors found in the sample onto the population, and should take into account the effect of the projected error on the specific objective of the audit and on other audit areas. The auditor projects the total error onto the population to get a broad perspective on the size of the error, comparing it with the tolerable error. For tests of details, the tolerable error is the tolerable misstatement, and will be a value less than or equal to the materiality used by the auditor for the individual classes of transactions or balances audited. If a class of transactions or account balances has been divided into strata, the error is projected separately for each stratum.
Projected errors and anomalous errors for each stratum are then combined when considering the possible effect on the total classes of transactions and account balances. Evaluation of sample results — the auditor should evaluate the sample results to determine whether the assessment of the relevant characteristics of the population is confirmed or needs to be revised. When testing controls, an unexpectedly high sample error rate may lead to an increase in the assessed risk of significant misstatement, unless additional audit evidence is obtained to support the initial assessment. For control tests, an error is a deviation from the prescribed performance of control procedures. The auditor should obtain evidence about the nature and extent of any significant changes in the internal control system, including changes in staffing. If significant changes occur, the auditor should review the understanding of the internal control environment and consider testing the changed controls. Alternatively, the auditor may consider performing substantive analytical procedures or tests of details covering the audit period. In some cases, the auditor might not need to wait until the end of the audit to form a conclusion about the operational effectiveness of a control in support of the control risk assessment; in this case, the auditor might decide to modify the planned substantive tests accordingly. In tests of details, an unexpectedly large amount of error in a sample may cause the auditor to believe that a class of transactions or account balances is significantly misstated, in the absence of additional audit evidence showing that there are no material misstatements. When the best estimate of the error is very close to the tolerable error, the auditor recognizes the risk that another sample would have a different best estimate that could exceed the tolerable error. Conclusions: following the analysis of sampling methods, we conclude that all methods have advantages and disadvantages.
What matters in the auditor's choice of sampling method is that it is based on professional judgment and takes into account the cost/benefit ratio. Thus, if a sampling method proves to be costly, the auditor should seek the most efficient method in view of the main and specific objectives of the audit. The auditor should evaluate the sample results to determine whether the preliminary assessment of the relevant characteristics of the population must be confirmed or revised. If the evaluation of sample results indicates that the assessment of the relevant characteristics of the population needs revision, the auditor may require management to investigate the identified errors and the likelihood of future errors, and make the necessary adjustments, changing the nature, timing and extent of further procedures to take into account the effect on the audit report. Selective bibliography: [1] Law no. 672/2002, updated, on public internal audit; [2] Arens, A. and Loebbecke, J., “Audit — An Integrated Approach”, 8th edition, Arc Publishing House; [3] ISA 530 — Financial Audit 2008 — International Standards on Auditing, IRECSON Publishing House, 2009; Dictionary of Macroeconomics, Ed. C.H. Beck, Bucharest, 2008. 摘要：美国公司的规模从二十世纪初开始迅速增加，因此产生了基于总体中选定部分进行审计的审计程序，以获得能够描述由账户余额或交易类别构成的整个总体的可靠审计证据。
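The systematic (mechanical) selection described above — a counting step obtained by dividing the population volume by the desired number of sample units, then taking every step-th element after a random start — can be sketched as follows. The invoice population and the seed are illustrative, not taken from the source.

```python
import random

def systematic_sample(population, sample_size, seed=0):
    """Systematic (mechanical) selection: pick every k-th item after a
    random start, where k = len(population) // sample_size is the
    'counting step' described in the text."""
    step = len(population) // sample_size
    random.seed(seed)
    start = random.randrange(step)          # random starting point in [0, step)
    return [population[start + i * step] for i in range(sample_size)]

invoices = list(range(1, 1001))             # e.g. invoice numbers 1..1000
sample = systematic_sample(invoices, 50)
print(len(sample), sample[:3])              # 50 evenly spaced invoice numbers
```

Because the step is fixed, the sampled items are evenly spaced through the listing, which is what makes systematic selection fast and easy to apply.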

医学影像学英文文献

医学影像学英文文献英文回答:Within the realm of medical imaging, sophisticated imaging techniques empower healthcare professionals with the ability to visualize and comprehend anatomical structures and physiological processes in the human body. These techniques are instrumental in diagnosing diseases, guiding therapeutic interventions, and monitoring treatment outcomes.Computed tomography (CT) and magnetic resonance imaging (MRI) are two cornerstone imaging modalities widely employed in medical practice. CT utilizes X-rays and advanced computational algorithms to generate cross-sectional images of the body, providing detailed depictions of bones, soft tissues, and blood vessels. MRI, on the other hand, harnesses the power of powerful magnets and radiofrequency waves to create intricate images that excel in showcasing soft tissue structures, including the brain,spinal cord, and internal organs.Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) are nuclear medicine imaging techniques that involve the administration of radioactive tracers into the body. These tracers accumulate in specific organs or tissues, enabling the visualization and assessment of metabolic processes and disease activity. PET is particularly valuable in oncology, as it can detect the presence and extent of cancerous lesions.Ultrasound, also known as sonography, utilizes high-frequency sound waves to produce images of internal structures. It is a versatile technique commonly employed in obstetrics, cardiology, and abdominal imaging. Ultrasound offers real-time visualization, making it ideal for guiding procedures such as biopsies and injections.Interventional radiology is a specialized field that combines imaging guidance with minimally invasive procedures. Interventional radiologists utilize imaging techniques to precisely navigate catheters and otherinstruments through the body, enabling the diagnosis and treatment of conditions without the need for open surgery. 
This approach offers reduced invasiveness and faster recovery times compared to traditional surgical interventions.Medical imaging has revolutionized healthcare by providing invaluable insights into the human body. The ability to visualize anatomical structures andphysiological processes in exquisite detail has transformed the practice of medicine, leading to more accurate diagnoses, targeted treatments, and improved patient outcomes.中文回答:医学影像学是现代医学不可或缺的一部分,它利用各种成像技术对人体的解剖结构和生理过程进行可视化和理解,在疾病诊断、治疗方案制定和治疗效果评估中发挥着至关重要的作用。

A Threshold Selection Method from Gray-Level Histograms图像分割经典论文翻译(部分)

A Threshold Selection Method from Gray-Level Histograms [1] — Otsu N., A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, SMC-9, 1979: 62-66. 一种由灰度直方图选取阈值的方法。摘要：介绍了一种用于图像分割的自动阈值选择的非参数、无监督方法。

最佳阈值由判别准则选择，即最大化按灰度级划分所得各类之间的方差（类间方差）。

该过程很简单，只利用灰度直方图的0阶和1阶累积矩。

该方法可以直接推广到多阈值问题。

文中给出的若干实验结果也验证了方法的有效性。

一.简介选择灰度充分的阈值,从图片的背景中提取对象对于图像处理非常重要。

在这方面已经提出了多种技术。

在理想的情况下,直方图具有分别表示对象和背景的能力,两个峰之间有很深的明显的谷,使得阈值可以选择这个谷底。

然而,对于大多数实际图片,它常常难以精确地检测谷底,特别是在这种情况下,当谷是平的和广泛的,具有噪声充满时,或者当两个峰是在高度极其不等,通常不产生可追踪的谷。

已经出现了,为了克服这些困难,提出的一些技术。

它们是,例如,谷锐化技术[2],这个技术限制了直方图与(拉普拉斯或梯度)的衍生物大于绝对值的像素,并且描述了绘制差分直方图方法[3],选择灰度级的阈值与差的最大值。

这些利用在原始图象有关的信息的相邻像素(或边缘),修改直方图以便使其成为阈值是有用的。

另一类方法与参数方法的灰度直方图直接相关。

例如,该直方图在最小二乘意义上与高斯分布的总和近似,应用了统计决策程序 [4]。

然而,这种方法需要相当繁琐,有时不稳定的计算。

此外,在许多情况下,高斯分布与真实模型的近似值较小。

在任何情况下,没有一个阈值的评估标准能够对大多数的迄今所提出的方法进行评价。

这意味着,它可能是派生的最佳阈值方法来建立一个适当的标准,从更全面的角度评估阈值的“好与坏”的正确方法。

图像去噪 英文文献及翻译

New Method for Image Denoising while Keeping Edge Information — Edge information is the most important high-frequency information of an image, so we should try to preserve more edge information while denoising.

In order to preserve image details while canceling image noise, we present a new image denoising method: image denoising based on edge detection.

Before denoising, the image's edges are first detected, and the noised image is then divided into two parts: an edge part and a smooth part.

We can therefore set a high denoising threshold for the smooth part of the image and a low denoising threshold for the edge part. The theoretical analyses and experimental results presented in this paper show that, compared to commonly used wavelet threshold denoising methods, the proposed algorithm not only keeps the edge information of an image, but also improves the signal-to-noise ratio of the denoised image.
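A minimal sketch of the edge-part/smooth-part split described above. It uses a simple central-difference gradient for edge detection and a 3x3 mean filter in place of the paper's wavelet thresholding, so the operators and the threshold value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def edge_aware_denoise(img, edge_thresh=30.0):
    """Split the image into an edge part and a smooth part, then smooth
    only the smooth part (here with a 3x3 mean filter); edge pixels are
    left untouched so edge information is preserved."""
    img = img.astype(float)
    # central-difference gradients (interior pixels only, borders stay 0)
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[1:-1, 1:-1] = img[1:-1, 2:] - img[1:-1, :-2]
    gy[1:-1, 1:-1] = img[2:, 1:-1] - img[:-2, 1:-1]
    edge = np.hypot(gx, gy) > edge_thresh        # binary edge map
    # 3x3 mean filter via shifted sums (edge-replicated padding)
    pad = np.pad(img, 1, mode='edge')
    mean = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    out = np.where(edge, img, mean)              # smooth part only is filtered
    return out, edge

img = np.zeros((10, 10)); img[:, 5:] = 100.0     # vertical step edge
out, edge = edge_aware_denoise(img)
```

A real implementation would replace the mean filter with wavelet coefficient thresholding, keeping the two-threshold idea (high for the smooth part, low for the edge part) intact.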

目标检测参考文献

目标检测参考文献目标检测是计算机视觉领域中的一个重要研究方向,主要目标是在图像或视频中识别和定位特定目标物体。

近年来,随着深度学习技术的兴起,目标检测取得了显著的进展,在许多实际应用中得到了广泛应用。

以下是一些关于目标检测的重要参考文献。

1. Viola, P., & Jones, M. (2001). Rapid Object Detection using a Boosted Cascade of Simple Features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (Vol.1, pp. I-511-I-518).这篇经典论文提出了基于级联AdaBoost算法的人脸检测方法,该方法将输入图像的特征与级联分类器相结合,实现了高效的目标检测。

这种方法为后续的目标检测方法奠定了基础,并被广泛应用于人脸检测等领域。

2. Dalal, N., & Triggs, B. (2005). Histograms of Oriented Gradients for Human Detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (Vol.1, pp. 886-893).这篇论文提出了一种基于梯度方向直方图的特征表示方法,称为“方向梯度直方图”(Histograms of Oriented Gradients,简称HOG),并将其应用于行人检测。

HOG特征具有对光照变化和局部几何形变的不变性、并采用局部对比度归一化等优点，在目标检测中取得了显著的性能提升。
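A simplified, single-cell version of the HOG descriptor described above: a gradient-magnitude-weighted histogram of unsigned orientations. Real HOG (Dalal & Triggs) adds block-level contrast normalization and interpolated voting, which are omitted in this sketch.

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Simplified HOG for one cell: histogram of unsigned gradient
    orientations in [0, 180) degrees, each pixel voting with its
    gradient magnitude; the result is L2-normalized."""
    cell = cell.astype(float)
    gx = np.zeros_like(cell); gy = np.zeros_like(cell)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]     # horizontal gradient
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]     # vertical gradient
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0 # unsigned orientation
    bins = (ang // (180.0 / n_bins)).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (np.linalg.norm(hist) + 1e-12)

# horizontal intensity ramp -> horizontal gradients -> the 0-degree bin wins
cell = np.tile(np.arange(8), (8, 1)) * 10.0
print(np.argmax(hog_cell_histogram(cell)))   # → 0
```

In the full detector, such cell histograms are grouped into overlapping blocks, normalized, and concatenated into the feature vector fed to a linear SVM.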

matlab图像处理外文翻译外文文献

matlab图像处理外文翻译外文文献附录A 英文原文Scene recognition for mine rescue robotlocalization based on visionCUI Yi-an(崔益安), CAI Zi-xing(蔡自兴), WANG Lu(王璐)Abstract:A new scene recognition system was presented based on fuzzy logic and hidden Markov model(HMM) that can be applied in mine rescue robot localization during emergencies. The system uses monocular camera to acquire omni-directional images of the mine environment where the robot locates. By adopting center-surround difference method, the salient local image regions are extracted from the images as natural landmarks. These landmarks are organized by using HMM to represent the scene where the robot is, and fuzzy logic strategy is used to match the scene and landmark. By this way, the localization problem, which is the scene recognition problem in the system, can be converted into the evaluation problem of HMM. The contributions of these skills make the system have the ability to deal with changes in scale, 2D rotation and viewpoint. The results of experiments also prove that the system has higher ratio of recognition and localization in both static and dynamic mine environments.Key words: robot location; scene recognition; salient image; matching strategy; fuzzy logic; hidden Markov model1 IntroductionSearch and rescue in disaster area in the domain of robot is a burgeoning and challenging subject[1]. Mine rescue robot was developed to enter mines during emergencies to locate possible escape routes for those trapped inside and determine whether it is safe for human to enter or not. Localization is a fundamental problem in this field. Localization methods based on camera can be mainly classified into geometric, topological or hybrid ones[2]. With its feasibility and effectiveness, scene recognition becomes one of the important technologies of topological localization.Currently most scene recognition methods are based on global image features and have two distinct stages: training offline and matching online.。
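The center-surround difference step that the paper uses to extract salient local image regions can be illustrated with a toy single-scale version: salience is the absolute difference between a small center-window mean and a larger surround-window mean. The window radii are assumptions for illustration; the paper's multi-scale scheme is not reproduced here.

```python
import numpy as np

def center_surround_salience(img, c=1, s=4):
    """Toy center-surround difference: |mean over a (2c+1)^2 center
    window - mean over a (2s+1)^2 surround window| at each pixel.
    Regions that differ strongly from their surroundings score high."""
    img = img.astype(float)

    def box_mean(r):
        pad = np.pad(img, r, mode='edge')
        acc = np.zeros_like(img)
        for di in range(2 * r + 1):
            for dj in range(2 * r + 1):
                acc += pad[di:di + img.shape[0], dj:dj + img.shape[1]]
        return acc / (2 * r + 1) ** 2

    return np.abs(box_mean(c) - box_mean(s))

img = np.zeros((32, 32)); img[14:18, 14:18] = 255.0   # small bright blob
sal = center_surround_salience(img)
print(np.unravel_index(np.argmax(sal), sal.shape))    # near the blob center
```

Peaks of such a salience map give candidate natural landmarks, which the paper then organizes with an HMM and matches with fuzzy logic.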

外文翻译----数字图像处理方法的研究(中英文)(1)

The research of digital image processing technique1IntroductionInterest in digital image processing methods stems from two principal application areas:improvement of pictorial information for human interpretation;and processing of image data for storage,transmission,and representation for autonomous machine perception.1.1What Is Digital Image Processing?An image may be defined as a two-dimensional function,f(x,y),where x and y are spatial(plane)coordinates,and the amplitude of f at any pair of coordinates(x,y)is called the intensity or gray level of the image at that point.When x,y,and digital image.The field of digital image processing refers to processing digital images by means of a digital computer.Note that a digital image is composed of a finite number of elements,each of which has a particular location and value.These elements are referred to as picture elements,image elements,pels,and pixels.Pixel is the term most widely used to denote the elements of a digital image.We consider these definitions in more formal terms in Chapter2.Vision is the most advanced of our senses,so it is not surprising that images play the single most important role in human perception.However,unlike human who are limited to the visual band of the electromagnetic(EM)spectrum,imaging machines cover almost the entire EM spectrum,ranging from gamma to radio waves.They can operate on images generated by sources that human are not accustomed to associating with image.These include ultrasound,electron microscopy,and computer-generated images.Thus,digital image processing encompasses a wide and varied field of application.There is no general agreement among authors regarding where image processing stops and other related areas,such as image analysis and computer vision,start. 
Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images.We believe this to be a limiting and somewhat artificial boundary.For example,under this definition,even the trivial task of computing the average intensity of an image(which yields a single number)would not be considered an image processing operation.On the other hand, there are fields such as computer vision whose ultimate goal is to use computer to emulate human vision,including learning and being able to make inferences and take actions based on visual inputs.This area itself is a branch of artificial intelligence(AI) whose objective is to emulate human intelligence.This field of AI is in its earliest stages of infancy in terms of development,with progress having been much slower than originally anticipated.The area of image analysis(also called image understanding)is in between image processing and computer vision.There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other.However,one useful paradigm is to consider three types of computerized processes is this continuum:low-,mid-,and high-ever processes.Low-level processes involve primitive operation such as image preprocessing to reduce noise,contrast enhancement,and image sharpening.A low-level process is characterized by the fact that both its input and output are images. Mid-level processing on images involves tasks such as segmentation(partitioning an image into regions or objects),description of those objects to reduce them to a form suitable for computer processing,and classification(recognition)of individual object. 
Amid-level process is characterized by the fact that its inputs generally are images, but its output is attributes extracted from those images(e.g.,edges contours,and the identity of individual object).Finally,higher-level processing involves“making sense”of an ensemble of recognized objects,as in image analysis,and,at the far end of the continuum,performing the cognitive function normally associated with vision. Based on the preceding comments,we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image.Thus,what we call in this book digital image processing encompasses processes whose inputs and outputs are images and,in addition, encompasses processes that extract attributes from images,up to and including the recognition of individual objects.As a simple illustration to clarify these concepts, consider the area of automated analysis of text.The processes of acquiring an image of the area containing the text.Preprocessing that images,extracting(segmenting)the individual characters,describing the characters in a form suitable for computer processing,and recognizing those individual characters are in the scope of what we call digital image processing in this book.Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement“making cense.”As will become evident shortly,digital image processing,as we have defined it,is used successfully in a broad rang of areas of exceptional social and economic value.The concepts developed in the following chapters are the foundation for the methods used in those application areas.1.2The Origins of Digital Image ProcessingOne of the first applications of digital images was in the newspaper industry,when pictures were first sent by submarine cable between London and NewYork. 
Introduction of the Bartlane cable picture transmission system in the early1920s reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours.Specialized printing equipment coded pictures for cable transmission and then reconstructed them at the receiving end.Figure 1.1was transmitted in this way and reproduced on a telegraph printer fitted with typefaces simulating a halftone pattern.Some of the initial problems in improving the visual quality of these early digital pictures were related to the selection of printing procedures and the distribution ofintensity levels.The printing method used to obtain Fig.1.1was abandoned toward the end of1921in favor of a technique based on photographic reproduction made from tapes perforated at the telegraph receiving terminal.Figure1.2shows an images obtained using this method.The improvements over Fig.1.1are evident,both in tonal quality and in resolution.FIGURE1.1A digital picture produced in FIGURE1.2A digital picture 1921from a coded tape by a telegraph printer made in1922from a tape punched With special type faces(McFarlane)after the signals had crossed theAtlantic twice.Some errors areVisible.(McFarlane)The early Bartlane systems were capable of coding images in five distinct level of gray.This capability was increased to15levels in1929.Figure1.3is typical of the images that could be obtained using the15-tone equipment.During this period, introduction of a system for developing a film plate via light beams that were modulated by the coded picture tape improved the reproduction process considerably. 
Although the examples just cited involve digital images,they are not considered digital image processing results in the context of our definition because computer were not involved in their creation.Thus,the history of digital processing is intimately tied to the development of the digital computer.In fact digital images require so much storage and computational power that progress in the field of digital image processing has been dependent on the development of digital computers of supporting technologies that include data storage,display,and transmission.The idea of a computer goes back to the invention of the abacus in Asia Minor, more than5000years ago.More recently,there were developments in the past two centuries that are the foundation of what we call computer today.However,the basis for what we call a modern digital computer dates back to only the1940s with the introduction by John von Neumann of two key concepts:(1)a memory to hold a stored program and data,and(2)conditional branching.There two ideas are the foundation of a central processing unit(CPU),which is at the heart of computer today. 
Starting with von Neumann,there were a series of advances that led to computers powerful enough to be used for digital image processing.Briefly,these advances maybe summarized as follow:(1)the invention of the transistor by Bell Laboratories in1948;(2)the development in the1950s and1960s of the high-level programminglanguages COBOL(Common Business-Oriented Language)and FORTRAN (Formula Translator);(3)the invention of the integrated circuit(IC)at Texas Instruments in1958;(4)the development of operating system in the early1960s;(5)the development of the microprocessor(a single chip consisting of the centralprocessing unit,memory,and input and output controls)by Inter in the early 1970s;(6)introduction by IBM of the personal computer in1981;(7)progressive miniaturization of components,starting with large scale integration(LI)in the late1970s,then very large scale integration(VLSI)in the1980s,to the present use of ultra large scale integration(ULSI).Figure1.3In1929from London to Cenerale Pershingthat New York delivers with15level tone equipmentsthrough cable with Foch do not the photograph by decorationConcurrent with these advances were development in the areas of mass storage and display systems,both of which are fundamental requirements for digital image processing.The first computers powerful enough to carry out meaningful image processing tasks appeared in the early1960s.The birth of what we call digital image processing today can be traced to the availability of those machines and the onset of the apace program during that period.It took the combination of those two developments to bring into focus the potential of digital image processing concepts.Work on using computer techniques for improving images from a space probe began at the Jet Propulsion Laboratory(Pasadena,California)in1964when pictures of the moontransmitted by Ranger7were processed by a computer to correct various types of image distortion inherent in the on-board television camera.Figure1.4shows 
the first image of the moon taken by Ranger7on July31,1964at9:09A.M.Eastern Daylight Time(EDT),about17minutes before impacting the lunar surface(the markers,called reseau mark,are used for geometric corrections,as discussed in Chapter5).This also is the first image of the moon taken by a U.S.spacecraft.The imaging lessons learned with ranger7served as the basis for improved methods used to enhance and restore images from the Surveyor missions to the moon,the Mariner series of flyby mission to Mars,the Apollo manned flights to the moon,and others.In parallel with space application,digital image processing techniques began in the late1960s and early1970s to be used in medical imaging,remote Earth resources observations,and astronomy.The invention in the early1970s of computerized axial tomography(CAT),also called computerized tomography(CT)for short,is one of the most important events in the application of image processing in medical diagnosis. Computerized axial tomography is a process in which a ring of detectors encircles an object(or patient)and an X-ray source,concentric with the detector ring,rotates about the object.The X-rays pass through the object and are collected at the opposite end by the corresponding detectors in the ring.As the source rotates,this procedure is repeated.Tomography consists of algorithms that use the sensed data to construct an image that represents a“slice”through the object.Motion of the object in a direction perpendicular to the ring of detectors produces a set of such slices,which constitute a three-dimensional(3-D)rendition of the inside of the object.Tomography was invented independently by Sir Godfrey N.Hounsfield and Professor Allan M. 
Cormack, who shared the 1979 Nobel Prize in Medicine for their invention. It is interesting to note that X-rays were discovered in 1895 by Wilhelm Conrad Roentgen, for which he received the 1901 Nobel Prize for Physics. These two inventions, nearly 100 years apart, led to some of the most active application areas of image processing today. Figure 1.4: The first picture of the moon by a U.S. spacecraft. Ranger 7 took this image on July 31, 1964 at 9:09 A.M. EDT, about 17 minutes before impacting the lunar surface. (Courtesy of NASA.) 中文翻译 数字图像处理方法的研究 1 绪论 数字图像处理方法的研究源于两个主要应用领域：其一是为了便于人们分析而对图像信息进行改进；其二是为了使机器自动理解而对图像数据进行存储、传输及显示。

人脸识别外文翻译参考文献


译文:基于PCA的实时人脸检测和跟踪方法

摘要:这篇文章提出了复杂背景条件下,实现实时人脸检测和跟踪的一种方法。

这种方法是以主成分分析(PCA)技术为基础的。

为了实现人脸的检测,首先,我们要用一个肤色模型和一些动作信息(如:姿势、手势、眼色)。

然后,使用PCA技术检测这些候选区域,从而判定人脸的真正位置。

而人脸跟踪是基于欧几里德(Euclidean)距离的,该距离定义在特征空间中,衡量先前被跟踪的人脸与最近被检测到的人脸之间的差异。

用于人脸跟踪的摄像机控制器以这样的方式工作:利用云台(pan/tilt)平台,把被检测的人脸区域保持在画面的中央。

这个方法还可以扩展到其他的系统中去,例如电信会议、入侵者检查系统等等。
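The detect-then-track pipeline summarized above can be illustrated with a small sketch. The code below is not the paper's implementation: it uses random vectors as stand-ins for candidate face regions, builds a PCA ("eigenface") subspace, verifies a candidate by its distance from that subspace, and matches a previously tracked face to the nearest new detection by Euclidean distance in feature space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for flattened grayscale face crops.
# (In the paper's setting these would come from skin-colour candidate regions.)
d, n_train = 100, 20
base = rng.normal(size=d)
faces = np.stack([base + 0.1 * rng.normal(size=d) for _ in range(n_train)])

# PCA: centre the data and keep the top-k right singular vectors.
mean = faces.mean(axis=0)
u, s, vt = np.linalg.svd(faces - mean, full_matrices=False)
k = 5
components = vt[:k]                       # the "eigenfaces"

def features(x):
    """Project a candidate vector into the k-dimensional face space."""
    return components @ (x - mean)

def reconstruction_error(x):
    """Distance from face space; small for face-like inputs."""
    proj = components.T @ features(x) + mean
    return float(np.linalg.norm(x - proj))

face_like = base + 0.1 * rng.normal(size=d)
clutter = rng.normal(size=d) * 3.0

# Verification: face-like candidates lie much closer to the PCA subspace.
# Tracking: the previously tracked face is matched to the nearest newly
# detected face by Euclidean distance between feature vectors.
prev = features(face_like)
detections = [features(clutter), features(face_like)]
nearest = int(np.argmin([np.linalg.norm(prev - f) for f in detections]))
```

The same distance that verifies a detection also associates it with the track from the previous frame, which is the Euclidean-distance matching the abstract refers to.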

1. 引言

视频信号处理有许多应用,例如用于可视化通讯的电信会议,以及为残疾人服务的唇读系统。

在上面提到的许多系统中,人脸的检测和跟踪是必不可缺的组成部分。

在本文中,涉及到一些实时的人脸区域跟踪[1-3]。

一般来说,根据跟踪角度的不同,可以把跟踪方法分为两类。

有一部分人把人脸跟踪分为基于识别的跟踪和基于动作的跟踪,而其他一部分人则把人脸跟踪分为基于边缘的跟踪和基于区域的跟踪[4]。

基于识别的跟踪是真正地以对象识别技术为基础的,而跟踪系统的性能是受到识别方法的效率的限制。

基于动作的跟踪是依赖于动作检测技术,且该技术可以被分成视频流(optical flow)的(检测)方法和动作—能量(motion-energy)的(检测)方法。

基于边缘的(跟踪)方法用于跟踪一幅图像序列的边缘,而这些边缘通常是主要对象的边界线。

然而,因为被跟踪的对象必须在色彩和光照条件下显示出明显的边缘变化,所以这些方法会遭遇到彩色和光照的变化。

此外,当一幅图像的背景有很明显的边缘时,(跟踪方法)很难提供可靠的(跟踪)结果。

当前很多文献涉及到的这类方法都源于 Kass 等人在主动轮廓模型(Snakes)[5]方面的成就。

关于结构检测的外国文献


结构检测(structure detection)是计算机视觉领域的一个重要研究方向,旨在从图像或视频中检测并识别出不同对象的结构信息。

本文将介绍几篇关于结构检测的外国文献,以展示该领域的研究进展和应用。

1. "Structure extraction in images and videos using deep learning"(使用深度学习在图像和视频中进行结构提取):这篇文献研究了如何利用深度学习算法来实现图像和视频中的结构提取。

作者提出了一种基于卷积神经网络的方法,通过训练网络来学习图像中的结构信息,并在实验中取得了很好的效果。

2. "Structure detection for 3D reconstruction"(用于三维重建的结构检测):该文献关注于如何在三维重建过程中进行结构检测。

作者提出了一种基于特征匹配和相机投影的方法,通过分析图像中的特征点和相机参数,实现对场景结构的检测和重建。

实验证明该方法在三维重建中具有较高的精度和鲁棒性。

3. "Structure detection for object recognition"(用于物体识别的结构检测):该文献研究了如何利用结构检测来提高物体识别的准确性。

作者提出了一种基于边缘检测和支持向量机的方法,通过检测物体的结构信息来辅助物体识别任务。

实验结果表明,该方法在提高物体识别准确性方面取得了显著的效果。

4. "Structure detection for video analysis"(用于视频分析的结构检测):该文献探讨了如何利用结构检测来进行视频分析。

作者提出了一种基于光流和运动分析的方法,通过检测视频中的结构信息来实现对视频内容的理解和分析。

实验证明该方法在视频分析任务中具有较高的准确性和实时性。

结构检测在计算机视觉领域具有广泛的应用,例如目标检测、场景理解、图像分割等。

图像边缘检测算法英文文献翻译中英文翻译


Image Edge Detection Algorithms

Abstract: Digital image processing is a comparatively young discipline that has developed rapidly with computer technology and is finding ever wider use. The edge is a basic feature of an image; it is widely applied in domains such as pattern recognition, image segmentation, image enhancement, and image compression. Edge detection methods are many and varied. Among them, brightness-based algorithms have been studied the longest and are theoretically the most mature: they compute the gradient of image brightness through difference operators and detect edges from its changes. The main operators are the Roberts, Laplacian, Sobel, Canny, and LoG operators.

This thesis first introduces digital image processing and surveys edge detection as a whole, enumerates several edge detection techniques and algorithms in common use, selects two of them to implement in C under Visual C++, and, by comparing the edge images the two algorithms extract, discusses their respective strengths and weaknesses.

Foreword: In image processing, the edge of the image, as a basic feature, is widely used in the recognition, segmentation, enhancement, and compression of images, and is often supplied to higher-level processing. There are many ways to detect the edge. Broadly, there are two main techniques: one is the classic approach based on the gray level of every pixel; the other is based on the wavelet transform and its multi-scale characteristics. The first class of methods, which has the longest research history, finds the edge according to the variation of pixel gray levels; the main techniques are the Roberts, Laplacian, Sobel, Canny, and LoG algorithms. The second class, based on the wavelet transform, uses the Lipschitz-exponent characterization of noise and of singular signals to remove noise and distill the real edge lines. In recent years a new kind of detection method has been developed that is based on the phase information of the pixels: nothing about the image needs to be hypothesized in advance, the edge is easy to find in the frequency domain, and the method is reliable.

In chapter one, we give an overview of image edges. In chapter two, some classic detection algorithms are introduced; the cause of positional error is analyzed, and a more precise method of edge orientation is discussed. In chapter three, wavelet theory is introduced, and detection methods based on the sampled wavelet transform, which can extract the main edges of an image effectively, and on the non-sampled wavelet transform, which retains the optimal spatial information, are presented respectively.
In the last chapter of this thesis, an algorithm based on phase information is introduced. Using the log-Gabor wavelet, a two-dimensional filter is constructed and many kinds of edges are detected, including Mach bands, which indicates that this is an outstanding, biologically inspired method.
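The brightness-gradient idea behind the classic operators listed above can be shown with a short sketch. This is an illustrative re-implementation in Python (the thesis's own implementations were written in C): it convolves the image with the 3×3 Sobel difference operators and thresholds the gradient magnitude.

```python
import numpy as np

def sobel_edges(img, thresh):
    """Gradient-magnitude edge map using the 3x3 Sobel difference operators."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                      # vertical-gradient kernel
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (kx * patch).sum()   # horizontal brightness change
            gy[i, j] = (ky * patch).sum()   # vertical brightness change
    mag = np.hypot(gx, gy)                  # gradient magnitude
    return mag > thresh

img = np.zeros((16, 16))
img[:, 8:] = 255.0                 # vertical step edge between columns 7 and 8
edges = sobel_edges(img, thresh=100.0)
```

Swapping the kernels for Roberts or Laplacian masks changes only the `kx`/`ky` arrays; the Canny and LoG operators add smoothing and non-maximum suppression on top of the same gradient idea.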

imagenet 参考文献引用



imagenet是一个颇具影响力的计算机视觉数据库,它包含了来自世界各地的数百万张图片,每一张图片都被标记和分类。

它的发布对图像识别和深度学习领域产生了深远的影响,并且在学术界和工业界都受到广泛关注。

1. imagenet的创建和影响

imagenet数据库创建于2009年,由斯坦福大学的李飞飞教授发起,并在图像识别领域引起了革命性的变化。

它为研究人员提供了一个丰富的数据集,让他们可以训练和测试各种图像识别算法。

在imagenet 的推动下,深度学习技术得到了快速发展,图像识别的准确率大幅提升,以及广泛应用于人脸识别、自动驾驶、医学影像分析等领域。

2. imagenet的重要性

imagenet作为图像识别领域的基础数据集,对于深度学习的发展起到了至关重要的作用。

许多研究者利用imagenet进行算法的验证和比较,以此来衡量他们的图像识别模型的准确性。

imagenet还激励了许多学者和工程师在算法优化、网络架构设计等方面的不懈努力,为图像识别技术的进步贡献力量。

3. 我对imagenet的个人观点和理解

作为一项重要的计算机视觉工具,imagenet在图像识别领域发挥着巨大的作用。

它促进了深度学习技术的飞速发展,也为研究人员提供了一个广阔的研究评台。

在未来,我相信imagenet会继续对图像识别技术的进步起到重要的推动作用,带来更多的创新和突破。


imagenet作为一个包含数百万张图片的庞大数据库,对计算机视觉和深度学习领域产生了深远的影响。

它的创建和发布不仅推动了图像识别技术的发展,还激发了学术界和工业界的广泛关注和探讨。

通过imagenet,研究人员可以进行算法的验证和比较,提高图像识别模型的准确性,进而应用于各种领域,如人脸识别、自动驾驶、医学影像分析等。

图像处理方面的参考文献


图像处理方面的参考文献:由整理[1] Pratt W K. Digital image processing :3rd edition [M]. New York :Wiley Inter-science ,1991.[2] HUANG Kai-qi ,WANG Qiao ,WU Zhen-yang ,et al. Multi-Scale Color Image Enhancement AlgorithmBased on Human Visual System [J]. Journal of Image and Graphics ,2003,8A(11) :1242-1247.[3] 赵春燕,郑永果,王向葵.基于直方图的图像模糊增强算法[J]. 计算机工程,2005,31(12):185-187.[4] 刘惠燕,何文章,马云飞. 基于数学形态学的雾天图像增强算法[J]. 天津工程师范学院学报, 2021.[5] Pal S K, King R A. Image enhancement using fuzzy sets[J]. Electron Lett, 1980, 16(9): 376 —378.[6] Zhou S M, Gan J Q, Xu L D, et al. Interactive image enhancement by fuzzy relaxation [J]. International Journalof Automation and Computing, 2007, 04(3): 229 —235.[7] Mir A.H. Fuzzy entropy based interactive enhancement of radiographic images [J]. Journal of MedicalEngineering and Technology, 2007, 31(3): 220 —231.[8] TAN K, OAKLEY J P. Enhancement of color images in poor visibility conditions[C] / / Proceedings of IEEEInternational Conference on Image Processing. Vancouver, Canada: IEEE press, 2000, 788 - 791.[9] TAN R T, PETTERSSON N, PETERSSON L. Visibility enhancement for roads with foggy or hazy scenes[C] / /IEEE intelligent Vehicles Symposium. Istanbul Turkey: IEEE Press, 2007: 19 - 24.[10] 祝培,朱虹,钱学明, 等. 一种有雾天气图像景物影像的清晰化方法[J ]. 中国图象图形学报, 2004, 9 (1) : 124 -128.[11] 王萍,张春,罗颖昕. 一种雾天图像低比照度增强的快速算法[J ]. 计算机应用, 2006, 26 (1) : 152 - 156.[12] 詹翔,周焰. 一种基于局部方差的雾天图像增强算法[J ]. 计算机应用, 2007, 27 (2) : 510 - 512.[13] 芮义斌,李鹏,孙锦涛. 一种图像去薄雾方法[ J ]. 计算机应用,2006, 26 (1) : 154 - 156.[14] Coltuc D ,Bolon P,Chassery J-M. Exact Histogram Specification [J]. IEEE TIP ,2006,15(05):1143-1152.[15] Menotti D. Contrast Enhancement in Digital Imaging Using Histogram Equalization [D]. Brazil :UFMGUniversidade Federal de Minas Gerais ,2021.[16] Alsuwailem A M. A Novel FPGA Based Real-time Histogram Equalization Circuit for Infrared ImageEnhancement [J]. Active and Passive Electronic Devices ,2021(03):311-321.[17] 杨必武,郭晓松,王克军,等.基于直方图的线性拉伸的红外图像增强新算法[J]. 
红外与激光工程,2003,32(1):1-3,7[18] 赵立兴,唐英干,刘冬,关新平.基于直方图指数平滑的模糊散度图像分割[J].系统工程与电子技术,2005,27(7):1182-1185[19] 李忠海.图像直方图局部极值算法及其在边界检测中的应用[J]. 吉林大学学报,2003,21( 5):89-91[20] Dah-Chung Chang,Wen-Rong Wu.Image Contrast Enhancement Based on a Histogram Transformation ofLocal Standard Deviation[J].IEEE Transactions on Medical Imageing,1998,17(4):518-530[21] Andrea Polesel, Giovanni Ramponi, and V.John Mathews. Image Enhancement via Adaptive UnsharpMasking[J].IEEE Transactions on Image Processing,2000,9(3)[22] 苏秀琴,张广华,李哲. 一种陡峭边缘检测的改进Laplacian 算子[J]. 科学技术与工程,2006,6(13):1833-1835[23] 张利平,何金其,黄廉卿•多尺度抗噪反锐化淹没的医学影像增强算法[J].光电工程,2004,31( 10):53-56 ,68[24] CHENG H D, XU HU I2JUAN. A novel fuzzy logic app roach to mammogram contrast enhancement[J ].Information Sciences, 2002,148: 167 - 184.[25] Zhou S M, Gan J Q. A new fuzzy relaxation algorithm for image enhancement [J]. International Journal ofKnowledge-based and Intelligent Engineering Systems, 2006, 10(3): 181 —192.[26] 李水根,吴纪桃.分形与小波[M]. 北京:科学出版社,2002[27] 冈萨雷斯. 数字图像处理[M]. 
电子工业出版社,2003.[28] Silvano Di Zenzo,Luigi Cinque.Image Thresholding Using FuzzyEntrops.IEEE TRANSACTIONSONSYSTEM,MAN,ANDCYBERNETICS-PARTB:CYBERNETICS,1998,28(1):23[29] 王保平,刘升虎,范九伦,谢维信.基于模糊熵的自适应图像多层次模糊增强算法[J].电子学报,2005,33(4):730-734[30] Zhang D,Wang Z.Impulse noise detection and removal using fuzzy techniques[J].IEEE ElectronicsLetters.1997,33(5):378-379[31] Jun Wang,Jian Xiao, Dan Hu.Fuzzy wavelet network modeling with B-spline wavelet[J].Machine Learning andCybernetics,2005:4144-4148[32] 刘国军,唐降龙,黄剑华,刘家峰.基于模糊小波的图像比照度增强算[J].电子学报,2005, 33⑷:643-646[33] 陈佳娟,陈晓光,纪寿文.基于遗传算法的图像模糊增强处理方法[J].计算机工程与应用,2001, 21:109-111[34] 汪亚明,吴国忠,赵匀•基于模糊上级遗传算法的图象增强技术[J].农业机械学报,2003,34(3):96-98[35] 赵春燕,郑永果,王向葵•基于直方图的图像模糊增强算法[J].计算机工程,2005, 31(12):185-186,222[36] Tobias,,Seara.Image segmentation by histogram thresholding using fuzzy sets[J].IEEE Transactions onImage Processing,2002 11(12):1457-1465[37] 王玉平,蔡元龙多尺度B样条小波边缘算子]J]冲国科学,A辑,1995,25⑷:426 —437.[38] 李玉,于凤琴,杨慧中等基于新的阈值函数的小波阈值去噪方法[J].江南大学学报,2006 ,5(4):476-479[39] ,,K.R.Subramanian.Low-Complexity Image Denoiseing based on Statistical Modeling of WaveletCofficients[J].IEEE Signal Process,1999,6(12):300-303[40] 朱云芳,戴朝华,陈维荣.小波信号消噪及阈值函数的一种改良方法[J].中国测试技术,2006, 32(7):28-30[41] 凌毓涛,姚远,曾竞.基于小波的医学图像自适应增强[J].华中师范大学学报(自然科学版),2004,38(1):47-51[42] 何锦平.基于小波多分辨分析的图像增强及其应用研究[D]. 西北工业大学硕士学位论文,2003,4[43] Hayit Grernspan,Charles H,Anderson.Image Enhancement by Nonlinear Extrapolation[J].IEEE Transactionson Image Processing,2000,9(6):[44] 李旭超,朱善安.基于小波模极大值和Neyman Pearson准那么阈值的图像去噪.中国图象图形学报,10(8):964-969[45] Tai-Chiu Hsung,Daniel Pak-Kong Lun,Wan-Chi Siu.Denoising by Singularity Detection[J].IEEETransactions on Signal Processing,1999,47(11)[46] Lei Zhang,Paul Bao.Denoising by Spatial Correlation Thresholding[J]. 
IEEE Transactions on Circuts andSystems for Video Technology,2003,13(6):535-538[47] ,Huai Li,Matthew T.Freedman Optimization of Wavelet Decomposition for ImageCompression and Feature Preservation.IEEE Transactions on Medical Imaging,2003,22(9):1141-1151 [48] Junmei Zhong,Ruola Ning.Image Denoising Based on Wavelets and Multifractals for SingularityDetection[J].IEEE Transactions on Image Processing,2005,14(10):1435-1447[49] Z Cai,T H Cheng,C Lu,, 2001,37(11):683-685[50] Ahmed,J,Jafri,Ahmad.Target Tracking in an Image Sequence Using Wavelet Features and a NeuralNetwork[J].TENCON 2005,10(11):1-6[51] Yunyi Yan,Baolong Guo,Wei Ni.Image Denoising:An Approach Based on Wavelet Neural Network andImproved Median Filtering[J].WCICA 2006:10063-10067[52] Changjiang Zhang,Jinshan Wang,Xiaodong Wang,Huajun Feng.Contrast Enhancement for Image withIncomplete Beta Transform and Wavelet Neural Network[J].Neural Networks and Brain,2005,10:1236-1241 [53] Cao Wanpeng,Che Rensheng,Ye Dong An illumination independent edge detection and fuzzy enhancementalgorithm based on wavelet transform for non-uniform weak illumination images[J]Pattern Recognition Letters 29(2021)192-199金炜,潘英俊,魏彪 .基于改良遗传算法的图象小波阈值去噪研究 [M]. 刘轩,刘佳宾.基于比照度受限自适应直方图均衡的乳腺图像增强[J].计算机工程与应 用,2021,44(10):173-175Pal S k,King R A.Image enhancement using smoothing with fuzzy sets[J].IEEE Trans,Syst,Man,Cybern, 1981,11(7):494-501徐朝伦.基于子波变换和模糊数学的图像分割研究[D]:[博士学位论文].北京:北京理工大学电子工程 系, 1998Philippe Saint-Marc,Jer-Sen Chen. Adaptive Smoothing: A General Tool for Early Vision[J].IEEE Transactionson Pattern Analysis and Machine Intelligence.2003 , 13( 6) : 514-528翟艺书柳晓鸣,涂雅瑗.基于模糊逻辑的雾天降质图像比照度增强算法 [J].计算机工程与应用,2021,3 艾明晶,戴隆忠,曹庆华.雾天环境下自适应图像增强去雾方法研究 [J].计算机仿真,2021,7 Joung - YounKim ,Lee - SupKim , Seung - HoHwang. An Advanced Contrast Enhancement Using PartiallyOverlapped Sub - Block Histogram Equalization [J]. IEEE Transactions on Circuits and Systems of VideoTechnology , 2001, 11 (4) : 475 - 484.Srinivasa G Narasimhan , Shree K. Nayar. 
Contrast Restoration of weather Degraded Images[ J ]. IEEE Transactions on Pattern Analysis andMachine Intelligence, 2003, 25 (6) : 713 -724陈武凡 ,鲁贤庆 . 彩色图像边缘检测的新算法 ——广义模糊算子法 [ J ]. 中国科学 (A) , 2000, 15 ( 2) : 219-224.刘博,胡正平,王成儒. 基于模糊松弛迭代的分层图像增强算法 [ J ]. 光学技术, 2021,1 黄凯奇,王桥,吴镇扬,等. 基于视觉特性和彩色空间的多尺度彩色图像增强算法 [J]. 电子学报,2004, 32(4): 673-676.吴颖谦,方涛,李聪亮,等. 一种基于小波分析和人眼视觉特性的图像增强方法 [J]. 数据采集与处理, 2003,18: 17-21.陈少卿,吴朝霞,程敬之 .骨肿瘤 X 光片的多分辨特征增强 .西安:西安交通大学校报, 2003. 周旋,周树道,黄峰. 基于小波变换的图像增强新算法 [J ] . 计算机应用 :2005 ,25 (3) :606 - 608. 何锦平. 基于小波多分辨分析的图像增强及其应用研究 [D] .西安:西北工业大学 ,2003. 张德干 .小波变换在数字图像处理中的应用研究[J]. 沈阳:东北大学, 2000. 方勇•基于软阈值的小波图像增强方法〔J 〕计算机工程与应用,2002, 23:16 一 19. 谢杰成,张大力,徐文立 .小波图像去噪综述 .中国图象图形学报,2002, 3(7), 209 一 217. 欧阳诚梓、李勇等,基于小波变换与中值滤波相结合的图像去噪处理 [J].中原工学院学报 王云松、林德杰,基于小波包分析的图像去噪处理 [J].计算技术与自动 李晓漫 , 雷英杰等,小波变换和粗糙集的图像增强方法 [J]. 电光与控制 ,2007.12 刘晨 ,候德文等 . 基于多小波变换与图像融合的图像增强方法 [J]. 计算机技术及应用 ,2021.5 徐凌,刘薇,杨光等. 结合小波域变换和空间域变换的图像增强方法 [J]. 波谱学杂志 Peli E. Cont rast in complex images [J ] . J Opt Soc Am A , 1990 , 7 (10) : 2 032 - 2 040.Huang K Q , Wu Z Y, Wang Q. Image enhancement based on t he statistics of visual representation [J ] .Image Vision Comput , 2005 , 23 (1) : 51 - 57.高彦平 .图像增强方法的研究与实现 [D]. 山东科技大学硕士学位论文, 2005, 4XIANG Q S, LI A. Water-fat imaging with direct phase encoding [J]. Magnetic ResonanceImaging,1997,7(6):1002-1015.MA J, SINGH S K, KUMAR A J, et al. Method for efficient fast spin echo Dixon imaging [J]. Magnetic Resonance in Medicine, 2002, 48(6):1021-1027. [54][55] [56] [57][58] [59][60][61] [62] [63] [64][65] [66] [67] [68] [69][70][71][72] [73] [74] [75] [76][77][78][79] [80][81] [82]。

数字图像处理 外文翻译 外文文献 英文文献 数字图像处理


Digital Image Processing

1 Introduction

Many operators have been proposed for representing a connected component in a digital image by a reduced amount of data or a simplified shape. In general we have to state that the development, choice and modification of such algorithms in practical applications are domain and task dependent, and there is no "best method". However, it is interesting to note that there are several equivalences between published methods and notions, and characterizing such equivalences or differences should be useful to categorize the broad diversity of published methods for skeletonization. Discussing equivalences is a main intention of this report.

1.1 Categories of Methods

One class of shape reduction operators is based on distance transforms. A distance skeleton is a subset of points of a given component such that every point of this subset represents the center of a maximal disc (labeled with the radius of this disc) contained in the given component. As an example in this first class of operators, this report discusses one method for calculating a distance skeleton using the d4 distance function, which is appropriate to digitized pictures. A second class of operators produces median or center lines of the digital object in a non-iterative way. Normally such operators locate critical points first, and calculate a specified path through the object by connecting these points.

The third class of operators is characterized by iterative thinning. Historically, Listing [10] used already in 1862 the term linear skeleton for the result of a continuous deformation of the frontier of a connected subset of a Euclidean space without changing the connectivity of the original set, until only a set of lines and points remains. Many algorithms in image analysis are based on this general concept of thinning. The goal is a calculation of characteristic properties of digital objects which are not related to size or quantity.
Methods should be independent from the position of a set in the plane or space, grid resolution (for digitizing this set), or the shape complexity of the given set. In the literature the term "thinning" is not used in a unique interpretation, besides that it always denotes a connectivity-preserving reduction operation applied to digital images, involving iterations of transformations of specified contour points into background points. A subset Q ⊆ I of object points is reduced by a defined set D in one iteration, and the result Q′ = Q \ D becomes Q for the next iteration. Topology-preserving skeletonization is a special case of thinning resulting in a connected set of digital arcs or curves.

A digital curve is a path p = p0, p1, p2, ..., pn = q such that pi is a neighbor of pi−1, 1 ≤ i ≤ n, and p = q. A digital curve is called simple if each point pi has exactly two neighbors in this curve. A digital arc is a subset of a digital curve such that p ≠ q. A point of a digital arc which has exactly one neighbor is called an end point of this arc. Within this third class of operators (thinning algorithms) we may classify with respect to algorithmic strategies: individual pixels are either removed in a sequential order or in parallel. For example, the often cited algorithm by Hilditch [5] is an iterative process of testing and deleting contour pixels sequentially in standard raster scan order. Another sequential algorithm by Pavlidis [12] uses the definition of multiple points and proceeds by contour following. Examples of parallel algorithms in this third class are reduction operators which transform contour points into background points. Differences between these parallel algorithms are typically defined by tests implemented to ensure connectedness in a local neighborhood. The notion of a simple point is of basic importance for thinning, and it will be shown in this report that different definitions of simple points are actually equivalent.
Several publications characterize properties of a set D of points (to be turned from object points to background points) to ensure that connectivity of object and background remain unchanged. The report discusses some of these properties in order to justify parallel thinning algorithms.

1.2 Basics

The used notation follows [17]. A digital image I is a function defined on a discrete set C, which is called the carrier of the image. The elements of C are grid points or grid cells, and the elements (p, I(p)) of an image are pixels (2D case) or voxels (3D case). The range of a (scalar) image is {0, ..., Gmax} with Gmax ≥ 1. The range of a binary image is {0, 1}. We only use binary images I in this report. Let ⟨I⟩ be the set of all pixel locations with value 1, i.e. ⟨I⟩ = I⁻¹(1). The image carrier is defined on an orthogonal grid in 2D or 3D space. There are two options: using the grid cell model, a 2D pixel location p is a closed square (2-cell) in the Euclidean plane and a 3D pixel location is a closed cube (3-cell) in the Euclidean space, where edges are of length 1 and parallel to the coordinate axes, and centers have integer coordinates. As a second option, using the grid point model, a 2D or 3D pixel location is a grid point.

Two pixel locations p and q in the grid cell model are called 0-adjacent iff p ≠ q and they share at least one vertex (which is a 0-cell). Note that this specifies 8-adjacency in 2D or 26-adjacency in 3D if the grid point model is used. Two pixel locations p and q in the grid cell model are called 1-adjacent iff p ≠ q and they share at least one edge (which is a 1-cell). Note that this specifies 4-adjacency in 2D or 18-adjacency in 3D if the grid point model is used. Finally, two 3D pixel locations p and q in the grid cell model are called 2-adjacent iff p ≠ q and they share at least one face (which is a 2-cell). Note that this specifies 6-adjacency if the grid point model is used.
Any of these adjacency relations Aα, α ∈ {0, 1, 2, 4, 6, 18, 26}, is irreflexive and symmetric on an image carrier C. The α-neighborhood Nα(p) of a pixel location p includes p and its α-adjacent pixel locations. Coordinates of 2D grid points are denoted by (i, j), with 1 ≤ i ≤ n and 1 ≤ j ≤ m; i, j are integers and n, m are the numbers of rows and columns of C. In 3D we use integer coordinates (i, j, k). Based on neighborhood relations we define connectedness as usual: two points p, q ∈ C are α-connected with respect to M ⊆ C and neighborhood relation Nα iff there is a sequence of points p = p0, p1, p2, ..., pn = q such that pi is an α-neighbor of pi−1, for 1 ≤ i ≤ n, and all points on this sequence are either in M or all in the complement of M. A subset M ⊆ C of an image carrier is called α-connected iff M is not empty and all points in M are pairwise α-connected with respect to set M. An α-component of a subset S of C is a maximal α-connected subset of S. The study of connectivity in digital images has been introduced in [15]. It follows that any set ⟨I⟩ consists of a number of α-components. In case of the grid cell model, a component is the union of closed squares (2D case) or closed cubes (3D case). The boundary of a 2-cell is the union of its four edges, and the boundary of a 3-cell is the union of its six faces. For practical purposes it is easy to use neighborhood operations (called local operations) on a digital image I which define a value at p ∈ C in the transformed image based on pixel values in I at p ∈ C and its immediate neighbors in Nα(p).

2 Non-iterative Algorithms

Non-iterative algorithms deliver subsets of components in specified scan orders without testing connectivity preservation in a number of iterations.
In this section we only use the grid point model.

2.1 "Distance Skeleton" Algorithms

Blum [3] suggested a skeleton representation by a set of symmetric points. In a closed subset of the Euclidean plane a point p is called symmetric iff at least 2 points exist on the boundary with equal distances to p. For every symmetric point, the associated maximal disc is the largest disc in this set. The set of symmetric points, each labeled with the radius of the associated maximal disc, constitutes the skeleton of the set. This idea of presenting a component of a digital image as a "distance skeleton" is based on the calculation of a specified distance from each point in a connected subset M ⊆ C to the complement of the subset. The local maxima of the subset represent a "distance skeleton". In [15] the d4-distance is specified as follows.

Definition 1: The distance d4(p, q) from point p to point q, p ≠ q, is the smallest positive integer n such that there exists a sequence of distinct grid points p = p0, p1, p2, ..., pn = q with pi a 4-neighbor of pi−1, 1 ≤ i ≤ n. If p = q, the distance between them is defined to be zero.

The distance d4(p, q) has all properties of a metric. Given a binary digital image, we transform this image into a new one which represents at each point p ∈ ⟨I⟩ the d4-distance to pixels having value zero. The transformation includes two steps. We apply function f1 to the image I in standard scan order, producing I*(i, j) = f1(i, j, I(i, j)), and f2 in reverse standard scan order, producing T(i, j) = f2(i, j, I*(i, j)), as follows:

f1(i, j, I(i, j)) =
  0, if I(i, j) = 0;
  min{I*(i−1, j) + 1, I*(i, j−1) + 1}, if I(i, j) = 1 and (i ≠ 1 or j ≠ 1);
  m + n, otherwise.

f2(i, j, I*(i, j)) = min{I*(i, j), T(i+1, j) + 1, T(i, j+1) + 1}

The resulting image T is the distance transform image of I.
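The two scan functions f1 and f2 above translate directly into code. The following sketch (not from the report itself) computes the same d4 distance transform with a forward raster scan followed by a reverse scan:

```python
import numpy as np

def d4_distance_transform(img):
    """Two-pass city-block (d4) distance transform of a binary image.

    The forward raster scan plays the role of f1, propagating distances from
    the top and left; the reverse scan plays the role of f2, completing them
    from the bottom and right.
    """
    n, m = img.shape
    inf = n + m                       # the 'm + n' initialisation from the text
    t = np.zeros((n, m), dtype=int)
    for i in range(n):                # forward pass (f1)
        for j in range(m):
            if img[i, j] == 0:
                t[i, j] = 0
            else:
                cands = [inf]
                if i > 0:
                    cands.append(t[i - 1, j] + 1)
                if j > 0:
                    cands.append(t[i, j - 1] + 1)
                t[i, j] = min(cands)
    for i in range(n - 1, -1, -1):    # backward pass (f2)
        for j in range(m - 1, -1, -1):
            if i + 1 < n:
                t[i, j] = min(t[i, j], t[i + 1, j] + 1)
            if j + 1 < m:
                t[i, j] = min(t[i, j], t[i, j + 1] + 1)
    return t

img = np.zeros((7, 7), dtype=int)
img[1:6, 1:6] = 1                     # a 5x5 square of object pixels
t = d4_distance_transform(img)
# pixels whose label is not exceeded by any 4-neighbour's label minus 1
# form the distance skeleton; here the centre pixel carries the peak label 3
```

On this example the centre of the square receives label 3 (its d4-distance to the background), and the local maxima of `t` are exactly the labeled maximal-disc centres the text calls the distance skeleton.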
Note that T is a set {[(i, j), T(i, j)] : 1 ≤ i ≤ n ∧ 1 ≤ j ≤ m}, and let T* ⊆ T such that [(i, j), T(i, j)] ∈ T* iff none of the four points in A4((i, j)) has a value in T equal to T(i, j) + 1. For all remaining points (i, j) let T*(i, j) = 0. This image T* is called the distance skeleton. Now we apply function g1 to the distance skeleton T* in standard scan order, producing T**(i, j) = g1(i, j, T*(i, j)), and g2 to the result of g1 in reverse standard scan order, producing T***(i, j) = g2(i, j, T**(i, j)), as follows:

g1(i, j, T*(i, j)) = max{T*(i, j), T**(i−1, j) − 1, T**(i, j−1) − 1}

g2(i, j, T**(i, j)) = max{T**(i, j), T***(i+1, j) − 1, T***(i, j+1) − 1}

The result T*** is equal to the distance transform image T. Both functions g1 and g2 define an operator G, with G(T*) = g2(g1(T*)) = T***, and we have [15]:

Theorem 1: G(T*) = T, and if T0 is any subset of image T (extended to an image by having value 0 in all remaining positions) such that G(T0) = T, then T0(i, j) = T*(i, j) at all positions of T* with non-zero values.

Informally, the theorem says that the distance transform image is reconstructible from the distance skeleton, and that the skeleton is the smallest data set needed for such a reconstruction. The used distance d4 differs from the Euclidean metric. For instance, this d4-distance skeleton is not invariant under rotation. For an approximation of the Euclidean distance, some authors suggested the use of different weights for grid point neighborhoods [4]. Montanari [11] introduced a quasi-Euclidean distance. In general, the d4-distance skeleton is a subset of pixels (p, T(p)) of the transformed image, and it is not necessarily connected.

2.2 "Critical Points" Algorithms

The simplest category of these algorithms determines the midpoints of subsets of connected components in standard scan order for each row. Let l be an index for the number of connected components in one row of the original image.
We define the following functions for 1 ≤ i ≤ n:

ei(l) = j, if this is the l-th case of I(i, j) = 1 ∧ I(i, j−1) = 0 in row i, counting from the left, with I(i, 0) = 0;

oi(l) = j, if this is the l-th case of I(i, j) = 1 ∧ I(i, j+1) = 0 in row i, counting from the left, with I(i, m+1) = 0;

mi(l) = int((oi(l) − ei(l)) / 2) + ei(l).

The result of scanning row i is a set of coordinates (i, mi(l)) of midpoints of the connected components in row i. The set of midpoints of all rows constitutes a critical point skeleton of an image I. This method is computationally efficient. The results are subsets of pixels of the original objects, and these subsets are not necessarily connected. They can form "noisy branches" when object components are nearly parallel to image rows. They may be useful for special applications where the scanning direction is approximately perpendicular to main orientations of object components.

References

[1] C. Arcelli, L. Cordella, S. Levialdi: Parallel thinning of binary pictures. Electron. Lett. 11:148–149, 1975.

[2] C. Arcelli, G. Sanniti di Baja: Skeletons of planar patterns. In: Topological Algorithms for Digital Image Processing (T. Y. Kong, A. Rosenfeld, eds.), North-Holland, 99–143, 1996.

[3] H. Blum: A transformation for extracting new descriptors of shape. In: Models for the Perception of Speech and Visual Form (W. Wathen-Dunn, ed.), MIT Press, Cambridge, Mass., 362–380, 1967.

数字图像处理

1 引言

许多算子已被提出,用较少的数据量或简化的形状来表示数字图像中的连通分量。

人脸识别的英文文献15篇


人脸识别的英文文献15篇英文回答:1. Title: A Survey on Face Recognition Algorithms.Abstract: Face recognition is a challenging task in computer vision due to variations in illumination, pose, expression, and occlusion. This survey provides a comprehensive overview of the state-of-the-art face recognition algorithms, including traditional methods like Eigenfaces and Fisherfaces, and deep learning-based methods such as Convolutional Neural Networks (CNNs).2. Title: Face Recognition using Deep Learning: A Literature Review.Abstract: Deep learning has revolutionized the field of face recognition, leading to significant improvements in accuracy and robustness. This literature review presents an in-depth analysis of various deep learning architecturesand techniques used for face recognition, highlighting their strengths and limitations.3. Title: Real-Time Face Recognition: A Comprehensive Review.Abstract: Real-time face recognition is essential for various applications such as surveillance, access control, and biometrics. This review surveys the recent advances in real-time face recognition algorithms, with a focus on computational efficiency, accuracy, and scalability.4. Title: Facial Expression Recognition: A Comprehensive Survey.Abstract: Facial expression recognition plays a significant role in human-computer interaction and emotion analysis. This survey presents a comprehensive overview of facial expression recognition techniques, including traditional approaches and deep learning-based methods.5. Title: Age Estimation from Facial Images: A Review.Abstract: Age estimation from facial images has applications in various fields, such as law enforcement, forensics, and healthcare. This review surveys the existing age estimation methods, including both supervised and unsupervised learning approaches.6. 
Title: Face Detection: A Literature Review.Abstract: Face detection is a fundamental task in computer vision, serving as a prerequisite for face recognition and other facial analysis applications. This review presents an overview of face detection techniques, from traditional methods to deep learning-based approaches.7. Title: Gender Classification from Facial Images: A Survey.Abstract: Gender classification from facial imagesis a widely studied problem with applications in gender-specific marketing, surveillance, and security. This surveyprovides an overview of gender classification methods, including both traditional and deep learning-based approaches.8. Title: Facial Keypoint Detection: A Comprehensive Review.Abstract: Facial keypoint detection is a crucialstep in face analysis, providing valuable information about facial structure. This review surveys facial keypoint detection methods, including traditional approaches anddeep learning-based algorithms.9. Title: Face Tracking: A Survey.Abstract: Face tracking is vital for real-time applications such as video surveillance and facial animation. This survey presents an overview of facetracking techniques, including both model-based andfeature-based approaches.10. Title: Facial Emotion Analysis: A Literature Review.Abstract: Facial emotion analysis has become increasingly important in various applications, including affective computing, human-computer interaction, and surveillance. This literature review provides a comprehensive overview of facial emotion analysis techniques, from traditional methods to deep learning-based approaches.11. Title: Deep Learning for Face Recognition: A Comprehensive Guide.Abstract: Deep learning has emerged as a powerful technique for face recognition, achieving state-of-the-art results. 
This guide provides a comprehensive overview of deep learning architectures and techniques used for face recognition, including Convolutional Neural Networks (CNNs) and Deep Residual Networks (ResNets).12. Title: Face Recognition with Transfer Learning: A Survey.Abstract: Transfer learning has become a popular technique for accelerating the training of deep learning models. This survey presents an overview of transferlearning approaches used for face recognition, highlighting their advantages and limitations.13. Title: Domain Adaptation for Face Recognition: A Comprehensive Review.Abstract: Domain adaptation is essential foradapting face recognition models to new domains withdifferent characteristics. This review surveys various domain adaptation techniques used for face recognition, including adversarial learning and self-supervised learning.14. Title: Privacy-Preserving Face Recognition: A Comprehensive Guide.Abstract: Privacy concerns have arisen with the widespread use of face recognition technology. This guide provides an overview of privacy-preserving face recognition techniques, including anonymization, encryption, anddifferential privacy.15. Title: The Ethical and Social Implications of Face Recognition Technology.Abstract: The use of face recognition technology has raised ethical and social concerns. This paper explores the potential risks and benefits of face recognition technology, and discusses the implications for society.中文回答:1. 题目,人脸识别算法综述。

无人机航拍图像外文参考文献

无人机在高原山区线路巡检中的应用[J]. 高坤, 李昶君, 杨金波, 莫卫权, 李立新. 电气技术. 2015(04)
无人机在输电线路巡检中的应用和发展[J]. 石书山. 光源与照明. 2021(07)
基于无人机的电力线路巡检多功能地面站设计与实现[J]. 许家浩, 王娇. 集成电路应用. 2022(03)
无人机在输电线路巡检中的应用及发展前景[J]. 李明明, 秦宇翔, 李志学. 电子制作. 2014(21)
无人机在高原山区线路巡检中的应用[J]. 王海东, 纪正刚, 静磊. 科技风. 2017(10)
研究输电线路巡检中无人机的运用及发展[J]. 莫文华. 通讯世界. 2016(09)


Foreign-language reference for image detection (the document contains the English original and the Chinese translation)

Translation: Face Detection Based on the Half-Face Template

Abstract: Face detection in images is an important branch of face recognition research. To detect faces in images more effectively, this work proposes a face detection method based on the half face. From the density features of the organs of half of a face in an image, such as the eye, the ear, the mouth, and part of the cheek, an average frontal full-face template can be constructed. The half-face template is then derived from it by exploiting the symmetry of the human face. The face detection experiments use template matching and a similarity measure to locate faces in images. The analysis shows that the average full-face template effectively reduces the uncertainty of the template's local density, and that the half-face template removes the symmetric redundancy of the face-template density and thereby increases the detection speed. Experimental results show that the method also works on side-face images taken at large angles, which considerably improves the accuracy of side-face detection.

Keywords: face template, half-face template, template matching, similarity, side face.

I. INTRODUCTION

In recent years, face recognition has been a hot topic in the research fields of image processing, pattern recognition, and computer vision. As an important step in face recognition, face detection is an extensive research field of its own. The main purpose of face detection is to determine information in the image such as whether a face exists, its position, its rotation angle, and its pose. Face detection methods vary with the facial features they rely on [1-4]. Moreover, whether a face is present can be judged from the fixed layout of the density or color of the facial organs, so methods based on skin-color models and template matching are of great significance for face detection [5-7].

Template-matching face detection selects the features of the frontal face as the matching template, which makes the computation of the face search relatively large. However, the vast majority of faces are symmetric, so a half frontal-face template, i.e. the left or the right half of the face, can be chosen as the matching template, which greatly reduces the computation of the face search.

II. METHOD FOR CONSTRUCTING THE FACE TEMPLATE

The quality of the face template directly affects the matching result. To reduce the uncertainty of the template's local density, the face template is built from the information of many faces, e.g. an average eye template or an average face-shape template. The method is simple, and under affine transformations of the template the effectiveness of face detection can be ensured. The face template is constructed as follows [8]:

Step 1: select frontal face images;
Step 2: decide the size of the face region and select the face regions;
Step 3: normalize the selected face regions to the same size;
Step 4: compute the average of the corresponding pixels of the face regions.

Before constructing the template, images containing frontal faces are selected. First the size of the face region is decided; then the face regions are selected manually in the images. Let the number of selected face regions be n. Because the matrix vectors of the face regions are independently distributed, the pixel values at the same position in those face images are also independently distributed. Let f_k(i, j) (k = 1, 2, ..., n) be the pixel value at position (i, j) of the k-th face region, and let w_k (k = 1, 2, ..., n) be the standard scale coefficient of that face image. The frontal face template is then

T(i, j) = \frac{\sum_{k=1}^{n} w_k f_k(i, j)}{\sum_{k=1}^{n} w_k}   (1)

By statistics, the pixel values f_k(i, j) at the k-th position of the face region tend to a normal distribution, where u is the mean of the pixels f_k(i, j) and \sigma^2 is the variance, so T(i, j) tends to the mean of that distribution and the uncertainty of the template's local density is greatly reduced.

If the sampled face images are all taken at the same distance, the corresponding face sizes are consistent and the standard scale coefficients w_k all equal 1, so the average face template T(i, j) becomes

T(i, j) = \frac{1}{n} \sum_{k=1}^{n} f_k(i, j)   (2)

III. CONSTRUCTION OF THE AVERAGE FRONTAL FULL-FACE TEMPLATE

With the same distance between face and camera and a bird's-eye shooting angle of 15°, 120 face images were sampled: frontal, turned 30° to the left, and turned 45° to the left.

There are 40 images for each angle; among them, 20 subjects wear hats and 20 do not. The sampled images are shown in Fig. 1.
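The pixel-wise averaging of Eqs. (1) and (2) is straightforward to express in code. The sketch below is illustrative rather than the authors' implementation; it assumes the face regions have already been selected and resized to a common size, and the function name is our own.

```python
import numpy as np

def build_average_template(face_regions, weights=None):
    # Stack the n same-size grayscale face regions f_k into an n x H x W array.
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in face_regions])
    if weights is None:
        # Equal scale coefficients w_k = 1: Eq. (2), a plain pixel-wise mean.
        return stack.mean(axis=0)
    # General case of Eq. (1): weighted mean with scale coefficients w_k.
    w = np.asarray(weights, dtype=np.float64)
    return np.tensordot(w, stack, axes=1) / w.sum()

# Toy example: three constant 2x2 "face regions" with values 10, 20, 30.
faces = [np.full((2, 2), v) for v in (10.0, 20.0, 30.0)]
T = build_average_template(faces)
```

With equal weights every pixel of `T` is the mean of the corresponding pixels, which is exactly how the uncertainty of the local density is averaged out.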

Fig. 1: Face images at each angle (frontal, turned 30° left, turned 45° left)

In the images, a frontal face contains characteristic organs such as the eyes, ears, nose, and parts of the cheeks, as shown in Fig. 2(a). The distribution features of these organs can serve as the basis for detecting the presence of a face. Therefore the eyes, ears, nose, mouth, and parts of the cheeks are selected as the main regions for constructing the whole frontal face template, as shown in Fig. 2(b). This choice excludes the influence of abnormal regions and of objects that are not characteristic of humans, such as hats and beards.

Fig. 2: Model of the characteristic facial organs

Sixteen face images were sampled manually, each 22 × 26 pixels. As a comparative experiment, the template has to match not only frontal images but also side-face images, so it must not be too wide. The construction of the whole frontal face template is shown in Fig. 3. From the 16 frontal face templates, the average frontal full-face template is constructed.

Fig. 3: Construction of the average frontal full-face template

IV. CONSTRUCTION OF THE AVERAGE HALF-FACE TEMPLATE

The average frontal full-face template can be regarded as the combination of a left-face template and a right-face template of the same size, so the frontal full-face template can be divided by its central axis of symmetry into a left-face template and a right-face template; the construction of the half-face template is shown in Fig. 4. Alternatively, the average half-face template can be constructed on the same principle as the average full-face template, which reduces the symmetric redundancy of the density in the full-face template; this method is shown in Fig. 5.

Fig. 4: Model for constructing the half-face template
Fig. 5: Construction of the average half-face template

In an ideal face template the densities of the left and right halves are symmetric, i.e. the two halves form a similar pair. In practice there are differences between the left and right halves of a face image, and the density distribution of the organs is not perfectly symmetric, so the similarity decreases. Taking the left half as an example, when the average half-face template searches a face image, the left half of the face is detected first, as in Fig. 6(a), where the solid frame marks the detected left face. The position of the right half is then predicted from the left-face template; the candidate positions of the right half, marked by dashed frames, are shown in Fig. 6(b).
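Under the symmetry assumption above, the half-face templates can be obtained by splitting the full template along its central column, and one half can be approximated from the other by mirroring. A minimal sketch, not from the paper, with illustrative function names and NumPy arrays as templates:

```python
import numpy as np

def split_half_templates(full):
    # Split a full-face template of width 2I along the vertical symmetry
    # axis into a left-half and a right-half template, each of width I.
    h, w = full.shape
    assert w % 2 == 0, "expected an even template width"
    return full[:, : w // 2], full[:, w // 2 :]

def mirror_half(half):
    # Approximate the opposite half by horizontal mirroring,
    # using the face-symmetry assumption of Section IV.
    return half[:, ::-1]

# On a perfectly symmetric template the mirrored left half reproduces the
# right half exactly; on real faces the match is only approximate.
t = np.array([[1.0, 2.0, 2.0, 1.0],
              [3.0, 4.0, 4.0, 3.0]])
left, right = split_half_templates(t)
```

This is why the left-right similarity reported later in the paper (0.9535) is high but not 1: real faces are only approximately symmetric.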

Fig. 6: Detected candidate positions of the half face

V. DISCRIMINANT FUNCTION

In the experiments, the half face is detected in an image by template matching. The basic principle is as follows. The selected average half-face template is slid over the whole image under test, and the similarity between the template and the covered sub-block is computed at every position. If the similarity function at some position satisfies the threshold, the sub-block image at that position is judged to be similar to the average half-face template.

The similarity is a statistic of a local region of the image. Different sub-block images may yield the same similarity value even though they are different sub-blocks. In the experiment, the similarity values at positions where the sub-block matches the half-face template are kept, and the unmatched ones are discarded. The method is described as follows.

Let the half-face template T have length I and width J, as in Fig. 4; the full-face template then has length 2I and width J. Let the image under test have length L and width W. When the template is placed at (m, n), the corresponding sub-block of the image is denoted S^{m,n}(i, j). The dissimilarity between the template and the sub-block S^{m,n} can be expressed as [9]

D(m, n) = \sum_{i=1}^{I} \sum_{j=1}^{J} [S^{m,n}(i, j) - T(i, j)]^2   (3)

With this measure, the rule for deciding whether a half face exists in the image is: given a threshold th, if D(m, n) ≤ th the half face is similar to the sub-block; if D(m, n) = 0 the half face coincides exactly with the sub-block.

Expanding Eq. (3) gives

D(m, n) = E_S(m, n) - 2 P(m, n) + E_T   (4)

where E_S(m, n) = \sum_{i=1}^{I} \sum_{j=1}^{J} [S^{m,n}(i, j)]^2 is the energy of the sub-block covered by the half-face template at position (m, n); its value changes slowly as the template is slid over the image. P(m, n) = \sum_{i=1}^{I} \sum_{j=1}^{J} S^{m,n}(i, j) \, T(i, j) is the correlation between the template T and the sub-block; it reaches its maximum when the template T matches the sub-block exactly. E_T = \sum_{i=1}^{I} \sum_{j=1}^{J} [T(i, j)]^2 is the energy of the half-face template T; its value is fixed once the half-face template has been constructed and does not depend on the position of the sub-block.

The similarity is therefore defined as the ratio of the correlation between the template T and the sub-block to their energies:

s(m, n) = \frac{\sum_{i=1}^{I} \sum_{j=1}^{J} S^{m,n}(i, j) \, T(i, j)}{\sqrt{\sum_{i=1}^{I} \sum_{j=1}^{J} [S^{m,n}(i, j)]^2 \cdot \sum_{i=1}^{I} \sum_{j=1}^{J} [T(i, j)]^2}}   (5)

which, written with the terms of Eq. (4), simplifies to

s(m, n) = \frac{P(m, n)}{\sqrt{E_S(m, n) \, E_T}}   (6)

where s(m, n) is the similarity, with 0 ≤ s(m, n) ≤ 1. The rule for deciding that a half face exists is: given a threshold th, if s(m, n) ≥ th, the half-face template T is similar to the sub-block; if s(m, n) = 1, the half-face template T matches the sub-block exactly.

Let O(T) denote the time cost of face detection based on the half-face template and O(F) that based on the full-face template. They are computed as

O(T) = I \cdot J \cdot (L - I) \cdot (W - J)   (7)
O(F) = 2I \cdot J \cdot (L - 2I) \cdot (W - J)   (8)

The ratio of O(T) to O(F) is

\frac{O(T)}{O(F)} = \frac{I \cdot J \cdot (L - I) \cdot (W - J)}{2I \cdot J \cdot (L - 2I) \cdot (W - J)} = \frac{L - I}{2L - 4I}   (9)

When L is much larger than I, the value of Eq. (9) approaches 1/2; that is, the time cost of face detection based on the half-face template is half that of the full-face template. The half-face method therefore saves about half the detection time.
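The matching procedure above can be sketched directly: the similarity of Eq. (6), correlation normalized by the square roots of the two energies, is evaluated at every template position, and the cost ratio of Eq. (9) can be checked numerically. This is an illustrative brute-force implementation, not the authors' code; a real system would vectorize the sliding-window search.

```python
import numpy as np

def similarity(sub, tmpl):
    # s(m,n) of Eq. (6): correlation of template and sub-block divided by the
    # square root of the product of their energies; equals 1.0 on a perfect match.
    num = float((sub * tmpl).sum())
    den = np.sqrt(float((sub ** 2).sum()) * float((tmpl ** 2).sum()))
    return num / den if den > 0 else 0.0

def search(image, tmpl, th=0.95):
    # Slide the half-face template over the image and keep every position
    # (m, n) whose similarity s(m, n) reaches the threshold th.
    H, W = image.shape
    h, w = tmpl.shape
    hits = []
    for m in range(H - h + 1):
        for n in range(W - w + 1):
            s = similarity(image[m:m + h, n:n + w], tmpl)
            if s >= th:
                hits.append((m, n, s))
    return hits

def cost_ratio(I, J, L, W):
    # O(T)/O(F) of Eq. (9); tends to 1/2 when L >> I.
    return (I * J * (L - I) * (W - J)) / (2 * I * J * (L - 2 * I) * (W - J))

# Toy image containing an exact copy of the template at position (1, 2).
tmpl = np.array([[1.0, 2.0], [3.0, 4.0]])
img = np.zeros((5, 6))
img[1:3, 2:4] = tmpl
hits = search(img, tmpl)
```

With th = 0.95 only the exact occurrence at (1, 2) survives, with s = 1.0, illustrating the threshold rule stated after Eq. (6).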

VI. EXPERIMENTAL RESULTS

For the average half-face templates shown in Fig. 5, the similarities between left and left, right and right, and left and right halves were computed with Eq. (6); the results are 1.0000, 1.0000, and 0.9535 respectively. The similarity between identical halves is thus maximal, and the two different halves form a similar pair, so the density of the left face is largely redundant with respect to the right face.

In the face detection experiments, the left half-face template was used to detect faces in frontal images and in images turned 30° and 45° to the left. The correct detection results are shown in Fig. 7. For comparison, the same experiment was also run with the average full-face template; the results are shown in Fig. 8.

Fig. 7: Face detection results based on the half-face template
Fig. 8: Face detection results based on the average full-face template

In face detection experiments the correct detection rate is an important evaluation criterion. The test set consists of 40 frontal face images, 40 images turned 30° to the left, and 40 images turned 45° to the left. Table 1 lists the numbers of correctly detected faces and the detection rates. The results of Table 1 can be summarized as follows:

(1) The detection rate on frontal images is high, because the average full-face template is built from a set of frontal face images and the average half-face template is built on the same principle; both capture the features of the face.

(2) When the average full-face template is used to detect tilted faces, e.g. turned 30° or 45° to the left, the detection rate drops quickly: when the face turns left, much of the information of the right half is lost, and little in the full-face template can match the right half.

(3) When side faces are detected with the average half-face template, the accuracy is comparatively higher, mainly because the left half loses little information when the face turns left and can still match the average half-face template well. However, when the deviation angle of the face is too large, the detection accuracy again drops considerably: when the face image is sampled, the imaging of the face from three-dimensional space onto the two-dimensional plane not only loses the depth information of the face positions but also severely changes the positions of the facial organs, so the similarity decreases.

VII. CONCLUSION

The average face template is constructed from the characteristic features of the face, and the face position is determined from the similarity between the face template and faces at different angles. The theoretical analysis and the experiments show the following. The eyes, ears, nose, mouth, and parts of the cheeks are the essential parts for constructing the face template, and their density distribution serves as the basis for face detection.

(1) The average face template greatly reduces the uncertainty of the density information of the local characteristic organs. On the basis of the average full-face template, the average half-face template can be constructed from the symmetry of the organ positions, and the other half of the face can then be detected directly, so the density redundancy of the face template is greatly reduced.

(2) The analysis of the time complexity of template-matching face detection shows that the half-face template is more practical than the full-face template: it saves half the detection time, so the detection speed can be increased.
