Translation of the Foreign Reference

High-Speed Railway Mobile Communication System Based on 4G LTE Technology
Prof. K. S. Solanki, Kratika Chouhan, Ujjain Engineering College, Ujjain, Madhya Pradesh, India

Abstract: As they develop, high-speed railways (HSR) demand reliable and safe train operation and passenger communication. To achieve this goal, HSR systems need higher bandwidth and shorter response times; the legacy technologies used on HSR must evolve, new technologies must be developed, the existing architecture must be improved, and costs must be controlled. To meet this requirement HSR adopted GSM-R, an evolution of GSM, but it cannot satisfy customer demand. The new technology LTE-R has therefore been adopted; it provides higher bandwidth and higher customer satisfaction at high speeds. This paper introduces LTE-R, presents a comparison between GSM-R and LTE-R, and discusses which railway mobile communication system performs better at high speed.

Keywords: high-speed railway, LTE, GSM, communication and signalling systems

I. Introduction
High-speed railways place ever higher demands on the mobile communication system. With this improvement, the network architecture and hardware must cope with train speeds of up to 500 km/h. HSR also requires fast handover. To solve these problems, HSR needs a new technology called LTE-R; an HSR system based on LTE-R offers high data rates, higher bandwidth and low latency. LTE-R can handle the growing traffic volume, ensure passenger safety and deliver real-time multimedia information. As train speeds keep rising, a reliable broadband communication system is essential for HSR mobile communication. Quality-of-service (QoS) measures for HSR applications include data rate, bit error rate (BER) and transmission delay. To meet the operational needs of HSR, a new system is required that matches the capabilities of LTE and offers new services, while still being able to coexist with GSM-R for a long time. When selecting a suitable wireless communication system for HSR, issues such as performance, services, attributes, frequency bands and industrial support must be considered. Compared with third-generation (3G) systems, the 4G LTE system has a simple flat architecture, high data rates and low latency. Given the performance and maturity of LTE, LTE-Railway (LTE-R) is likely to become the next-generation HSR communication system.

II. LTE-R System Description
Consideration of the frequency and spectrum usage of LTE-R is very important for providing more efficient data transmission for high-speed railway (HSR) communication.
Digital Image Processing: Translated Foreign Reference
(The document contains the English original and the Chinese translation.)

Original: Application of Digital Image Processing in the Measurement of Casting Surface Roughness

Abstract - This paper presents a surface image acquisition system based on digital image processing technology. The image acquired by a CCD is pre-processed through image editing, image equalization, image binarization and feature parameter extraction to achieve casting surface roughness measurement. A three-dimensional evaluation method is adopted to obtain the evaluation parameters and the casting surface roughness from the extracted feature parameters. An automatic detection interface for casting surface roughness based on MATLAB is compiled, which provides a solid foundation for the online, fast detection of casting surface roughness based on image processing technology.

Keywords - casting surface; roughness measurement; image processing; feature parameters

I. INTRODUCTION
Nowadays the demand for machining quality and surface roughness has greatly increased, and machine-vision inspection based on image processing has become one of the hotspots of measuring technology in the mechanical industry owing to advantages such as non-contact operation, high speed, suitable precision and strong anti-interference ability [1,2]. Since the casting surface follows no regular law and the range of roughness is wide, detection parameters related only to the height direction cannot meet the current requirements of the development of photoelectric technology; horizontal spacing, or roughness, also requires a quantitative representation. Therefore, with the establishment of a three-dimensional evaluation system for casting surface roughness as the goal [3,4], a surface roughness measurement based on image processing technology is presented. Image preprocessing is carried out through image enhancement and image binarization. The three-dimensional roughness evaluation based on the feature parameters is then performed. An automatic detection interface for casting surface roughness based on MATLAB is compiled, which provides a solid foundation for the online, fast detection of casting surface roughness.

II. CASTING SURFACE IMAGE ACQUISITION SYSTEM
The acquisition system is composed of the sample carrier, microscope, CCD camera, image acquisition card and the computer. The sample carrier is used to hold the castings under test. According to the experimental requirements, we can select a fixed carrier and change the sample position manually, or fix the specimen and change the position of the sampling stage. Figure 1 shows the whole processing procedure. First, the castings to be inspected are placed against an illuminated background as far as possible; then, after adjusting the optical lens and setting the CCD camera resolution and exposure time, the pictures collected by the CCD are saved to computer memory through the acquisition card. Image preprocessing and feature value extraction of the casting surface with the corresponding software follow. Finally the detection result is output.

III. CASTING SURFACE IMAGE PROCESSING
Casting surface image processing includes image editing, equalization, image enhancement and image binarization. The original and clipped images of the measured casting are given in Figure 2, in which (a) presents the original image and (b) shows the clipped image.

A. Image Enhancement
Image enhancement is a processing method that highlights certain image information according to specific needs while weakening or removing unwanted information at the same time [5]. In order to obtain a clearer contour of the casting surface, equalization of the image, namely correction of the image histogram, should be performed before image segmentation. Figure 3 shows the original grayscale image, the equalized image and their histograms. As shown in the figure, after gray-level equalization each gray level of the histogram has substantially the same number of pixels and the histogram becomes flatter. The image appears clearer after the correction and the contrast of the image is enhanced.

Fig. 2 Casting surface image
Fig. 3 Equalization processing image

B. Image Segmentation
Image segmentation is, in essence, a process of pixel classification, and thresholding is a very important segmentation technique. The optimal threshold is obtained through the instruction thresh = graythresh(I). Figure 4 shows the binarized image. The black areas of the image display the portion of the contour whose gray value is less than the threshold (0.43137), while the white areas show gray values greater than the threshold. The shadows and shading that emerge in the bright region may be caused by noise or surface depressions.

Fig. 4 Binarized image

IV. ROUGHNESS PARAMETER EXTRACTION
In order to detect the surface roughness, it is necessary to extract the feature parameters of roughness. The histogram mean and variance are parameters used to characterize the texture size of the surface contour, while the peak area per unit surface reflects the roughness of the workpiece in the horizontal direction, and the kurtosis parameter characterizes the roughness in both the vertical and the horizontal direction. Therefore, this paper establishes the histogram mean and variance, the peak area per unit surface and the kurtosis (steepness) as the roughness evaluation parameters for the three-dimensional assessment of castings. An image preprocessing and feature extraction interface is compiled in MATLAB. Figure 5 shows the detection interface of surface roughness. Image preprocessing of the clipped casting image can be achieved with this software, which includes image filtering, image enhancement, image segmentation and histogram equalization, and it can also display the extracted evaluation parameters of surface roughness.

Fig. 5 Automatic roughness measurement interface
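The processing chain of Sections III and IV (histogram equalization, automatic thresholding in the spirit of MATLAB's graythresh, and histogram-based roughness parameters) can be sketched in a few lines. The following Python/OpenCV code is a minimal illustration under assumed function and parameter names; it is not the paper's MATLAB interface, and the peak-area proxy used here is an assumption.

```python
import cv2
import numpy as np

def roughness_features(gray):
    """Equalize, binarize (Otsu) and extract simple histogram statistics.

    A minimal sketch of the pipeline described in the paper; the feature
    names and the peak-area proxy are illustrative assumptions."""
    # Histogram equalization flattens the gray-level histogram and raises contrast.
    eq = cv2.equalizeHist(gray)

    # Otsu's method plays the role of MATLAB's graythresh: it picks the
    # threshold that best separates dark contour pixels from bright background.
    thresh, binary = cv2.threshold(eq, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Histogram mean and variance characterize the texture of the surface contour.
    hist = cv2.calcHist([eq], [0], None, [256], [0, 256]).ravel()
    hist /= hist.sum()
    levels = np.arange(256)
    mean = float((levels * hist).sum())
    var = float(((levels - mean) ** 2 * hist).sum())
    # Kurtosis (peakedness) relates to roughness in both directions.
    kurt = float(((levels - mean) ** 4 * hist).sum() / (var ** 2 + 1e-12))
    # Fraction of dark ("peak") pixels stands in for the unit-surface peak area.
    peak_area = float((binary == 0).mean())
    return {"threshold": thresh / 255.0, "mean": mean, "variance": var,
            "kurtosis": kurt, "peak_area": peak_area}

# Usage (hypothetical file name):
# gray = cv2.imread("casting.png", cv2.IMREAD_GRAYSCALE)
# print(roughness_features(gray))
```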
V. CONCLUSIONS
This paper investigates a casting surface roughness measuring method based on digital image processing technology. The method is composed of image acquisition, image enhancement, image binarization and the extraction of the characteristic roughness parameters of the casting surface. The interface for image preprocessing and the extraction of roughness evaluation parameters is compiled in MATLAB, which provides a solid foundation for the online, fast detection of casting surface roughness.

REFERENCES
[1] Xu Deyan, Lin Zunqi. The optical surface roughness research progress and direction [J]. Optical Instruments, 1996, 18(1): 32-37.
[2] Wang Yujing. Turning surface roughness based on image measurement [D]. Harbin: Harbin University of Science and Technology.
[3] BRADLEY C. Automated surface roughness measurement [J]. The International Journal of Advanced Manufacturing Technology, 2000, 16(9): 668-674.
[4] Li Chenggui, Li Xingshan, Qiang Xifu. 3D surface topography measurement method [J]. Aerospace Measurement Technology, 2000, 20(4): 2-10.
[5] Liu He. Digital Image Processing and Application [M]. China Electric Power Press, 2005.

Translated text: Application of Digital Image Processing in the Measurement of Casting Surface Roughness. Abstract: This paper presents a surface image acquisition system based on digital image processing technology.
Foreign Literature Translation: Robust Analysis of Feature Spaces — Color Image Segmentation
Appendix 2: Foreign Literature Translation

Robust Analysis of Feature Spaces: Color Image Segmentation

Abstract
A general technique for the recovery of significant image features is presented. The technique is based on the mean shift algorithm, a simple nonparametric procedure for estimating density gradients. Drawbacks of the current methods (including robust clustering) are avoided. Feature space of any nature can be processed, and as an example, color image segmentation is discussed. The segmentation is completely autonomous, only its class is chosen by the user. Thus, the same program can produce a high quality edge image, or provide, by extracting all the significant colors, a preprocessor for content-based query systems. A 512×512 color image is analyzed in less than 10 seconds on a standard workstation. Gray level images are handled as color images having only the lightness coordinate.

Keywords: robust pattern analysis, low-level vision, content-based indexing

1 Introduction
Feature space analysis is a widely used tool for solving low-level image understanding tasks. Given an image, feature vectors are extracted from local neighborhoods and mapped into the space spanned by their components. Significant features in the image then correspond to high density regions in this space. Feature space analysis is the procedure of recovering the centers of the high density regions, i.e., the representations of the significant image features. Histogram-based techniques and the Hough transform are examples of this approach.
When the number of distinct feature vectors is large, the size of the feature space is reduced by grouping nearby vectors into a single cell. A discretized feature space is called an accumulator. Whenever the size of the accumulator cell is not adequate for the data, serious artifacts can appear. The problem was extensively studied in the context of the Hough transform. Thus, for satisfactory results a feature space should have a continuous coordinate system. The content of a continuous feature space can be modeled as a sample from a multivariate, multimodal probability distribution. Note that for real images the number of modes can be very large, of the order of tens.
The highest density regions correspond to clusters centered on the modes of the underlying probability distribution. Traditional clustering techniques can be used for feature space analysis, but they are reliable only if the number of clusters is small and known a priori. Estimating the number of clusters from the data is computationally expensive and not guaranteed to produce a satisfactory result.
A much too often used assumption is that the individual clusters obey multivariate normal distributions, i.e., the feature space can be modeled as a mixture of Gaussians. The parameters of the mixture are then estimated by minimizing an error criterion. For example, a large class of thresholding algorithms are based on the Gaussian mixture model of the histogram. However, there is no theoretical evidence that an extracted normal cluster necessarily corresponds to a significant image feature. On the contrary, a strong artifact cluster may appear when several features are mapped into partially overlapping regions.
Nonparametric density estimation avoids the use of the normality assumption. The two families of methods, Parzen window and k-nearest neighbors, both require additional input information (type of the kernel, number of neighbors). This information must be provided by the user, and for multimodal distributions it is difficult to guess the optimal setting.
Nevertheless, a reliable general technique for feature space analysis can be developed using a simple nonparametric density estimation algorithm. In this paper we propose such a technique whose robust behavior is superior to methods employing robust estimators from statistics.

2 Requirements for Robustness
Estimation of a cluster center is called in statistics the multivariate location problem. To be robust, an estimator must tolerate a percentage of outliers, i.e., data points not obeying the underlying distribution of the cluster. Numerous robust techniques were proposed, and in computer vision the most widely used is the minimum volume ellipsoid (MVE) estimator proposed by Rousseeuw.
The MVE estimator is affine equivariant (an affine transformation of the input is passed on to the estimate) and has a high breakdown point (it tolerates up to half the data being outliers). The estimator finds the center of the highest density region by searching for the minimal volume ellipsoid containing at least h data points. The multivariate location estimate is the center of this ellipsoid. To avoid combinatorial explosion a probabilistic search is employed. Let the dimension of the data be p. A small number of (p+1)-tuples of points are randomly chosen. For each (p+1)-tuple the mean vector and covariance matrix are computed, defining an ellipsoid. The ellipsoid is inflated to include h points, and the one having the minimum volume provides the MVE estimate.
Based on MVE, a robust clustering technique with applications in computer vision was proposed. The data is analyzed under several "resolutions" by applying the MVE estimator repeatedly with h values representing fixed percentages of the data points. The best cluster then corresponds to the h value yielding the highest density inside the minimum volume ellipsoid. The cluster is removed from the feature space, and the whole procedure is repeated until the space is empty. The robustness of MVE should ensure that each cluster is associated with only one mode of the underlying distribution. The number of significant clusters is not needed a priori.
The robust clustering method was successfully employed for the analysis of a large variety of feature spaces, but was found to become less reliable once the number of modes exceeded ten. This is mainly due to the normality assumption embedded into the method. The ellipsoid defining a cluster can also be viewed as the high confidence region of a multivariate normal distribution. Arbitrary feature spaces are not mixtures of Gaussians, and constraining the shape of the removed clusters to be elliptical can introduce serious artifacts. The effect of these artifacts propagates as more and more clusters are removed. Furthermore, the estimated covariance matrices are not reliable since they are based on only p+1 points. Subsequent post-processing based on all the points declared inliers cannot fully compensate for an initial error.
To be able to correctly recover a large number of significant features, the problem of feature space analysis must be solved in context. In image understanding tasks the data to be analyzed originates in the image domain. That is, the feature vectors satisfy additional, spatial constraints.
While these constraints are indeed used in the current techniques, their role is mostly limited to compensating for feature allocation errors made during the independent analysis of the feature space. To be robust the feature space analysis must fully exploit the image domain information.
As a consequence of the increased role of image domain information the burden on the feature space analysis can be reduced. First all the significant features are extracted, and only then are the clusters containing the instances of these features recovered. The latter procedure uses image domain information and avoids the normality assumption.
Significant features correspond to high density regions and to locate these regions a search window must be employed. The number of parameters defining the shape and size of the window should be minimal, and therefore whenever it is possible the feature space should be isotropic. A space is isotropic if the distance between two points is independent of the location of the point pair. The most widely used isotropic space is the Euclidean space, where a sphere, having only one parameter (its radius), can be employed as search window. The isotropy requirement determines the mapping from the image domain to the feature space. If the isotropy condition cannot be satisfied, a Mahalanobis metric should be defined from the statement of the task.
We conclude that robust feature space analysis requires a reliable procedure for the detection of high density regions. Such a procedure is presented in the next section.

3 Mean Shift Algorithm
A simple, nonparametric technique for estimation of the density gradient was proposed in 1975 by Fukunaga and Hostetler. The idea was recently generalized by Cheng.
Assume, for the moment, that the probability density function $p(x)$ of the $p$-dimensional feature vectors $x$ is unimodal. This condition is for the sake of clarity only and will be removed later. A sphere $S_X$ of radius $r$, centered on $x$, contains the feature vectors $y$ such that $\|y - x\| \le r$. The expected value of the vector $z = y - x$, given $x$ and $S_X$, is

$$\mu = E[z \mid S_X] = \int_{S_X} (y - x)\, p(y \mid S_X)\, dy = \int_{S_X} (y - x)\, \frac{p(y)}{p(y \in S_X)}\, dy \qquad (1)$$

If $S_X$ is sufficiently small we can approximate

$$p(y \in S_X) = p(x)\, V_{S_X}, \qquad \text{where } V_{S_X} = c \cdot r^p \qquad (2)$$

is the volume of the sphere. The first order approximation of $p(y)$ is

$$p(y) = p(x) + (y - x)^T \nabla p(x) \qquad (3)$$

where $\nabla p(x)$ is the gradient of the probability density function in $x$. Then

$$\mu = \int_{S_X} (y - x)(y - x)^T \, \frac{\nabla p(x)}{V_{S_X}\, p(x)}\, dy \qquad (4)$$

since the first term vanishes. The value of the integral is

$$\mu = \frac{r^2}{p + 2} \cdot \frac{\nabla p(x)}{p(x)} \qquad (5)$$

or

$$E[x \mid x \in S_X] - x = \frac{r^2}{p + 2} \cdot \frac{\nabla p(x)}{p(x)} \qquad (6)$$

Thus, the mean shift vector, the vector of difference between the local mean and the center of the window, is proportional to the gradient of the probability density at $x$. The proportionality factor is reciprocal to $p(x)$. This is beneficial when the highest density region of the probability density function is sought. Such a region corresponds to large $p(x)$ and small $\nabla p(x)$, i.e., to small mean shifts. On the other hand, low density regions correspond to large mean shifts (amplified also by small $p(x)$ values). The shifts are always in the direction of the probability density maximum, the mode. At the mode the mean shift is close to zero. This property can be exploited in a simple, adaptive steepest ascent algorithm.

Mean Shift Algorithm
1. Choose the radius r of the search window.
2. Choose the initial location of the window.
3. Compute the mean shift vector and translate the search window by that amount.
4. Repeat till convergence.
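A minimal NumPy sketch of the mean shift iteration described in steps 1-4: the search window is a sphere of radius r, and each iteration moves the window center to the mean of the samples it contains. The function name, convergence tolerance and the radius in the usage example are illustrative choices, not values from the paper (which derived its window size, 3.2828, from the MVE estimator).

```python
import numpy as np

def mean_shift_mode(points, start, radius, tol=1e-3, max_iter=100):
    """Seek the nearest mode of the sample density by repeated mean shifts.

    points : (n, p) array of feature vectors
    start  : (p,) initial window center
    radius : search-window radius r
    """
    center = np.asarray(start, dtype=float)
    for _ in range(max_iter):
        # Feature vectors inside the sphere of radius r around the center.
        inside = points[np.linalg.norm(points - center, axis=1) <= radius]
        if len(inside) == 0:
            break
        shift = inside.mean(axis=0) - center   # the mean shift vector
        center = center + shift                # translate the window
        if np.linalg.norm(shift) < tol:        # near a mode the shift vanishes
            break
    return center

# Reproducing the flavour of the synthetic example in the text:
# two 1-D normal samples with unit variance and means 0 and 3.5.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 100),
                       rng.normal(3.5, 1.0, 100)]).reshape(-1, 1)
print(mean_shift_mode(data, start=[1.7], radius=1.0))
```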
To illustrate the ability of the mean shift algorithm, 200 data points were generated from two normal distributions, both having unit variance. The first hundred points belonged to a zero-mean distribution, the second hundred to a distribution having mean 3.5. The data is shown as a histogram in Figure 1. It should be emphasized that the feature space is processed as an ordered one-dimensional sequence of points, i.e., it is continuous. The mean shift algorithm starts from the location of the mode detected by the one-dimensional MVE mode detector, i.e., the center of the shortest rectangular window containing half the data points. Since the data is bimodal with nearby modes, the mode estimator fails and returns a location in the trough. The starting point is marked by the cross at the top of Figure 1.

Figure 1: An example of the mean shift algorithm.

In this synthetic data example no a priori information is available about the analysis window. Its size was taken equal to that returned by the MVE estimator, 3.2828. Other, more adaptive strategies for setting the search window size can also be defined.

Table 1: Evolution of the Mean Shift Algorithm

In Table 1 the initial values and the final location, shown with a star at the top of Figure 1, are given.
The mean shift algorithm is the tool needed for feature space analysis. The unimodality condition can be relaxed by randomly choosing the initial location of the search window. The algorithm then converges to the closest high density region. The outline of a general procedure is given below.

Feature Space Analysis
1. Map the image domain into the feature space.
2. Define an adequate number of search windows at random locations in the space.
3. Find the high density region centers by applying the mean shift algorithm to each window.
4. Validate the extracted centers with image domain constraints to provide the feature palette.
5. Allocate, using image domain information, all the feature vectors to the feature palette.

The procedure is very general and applicable to any feature space. In the next section we describe a color image segmentation technique developed based on this outline.

4 Color Image Segmentation
Image segmentation, partitioning the image into homogeneous regions, is a challenging task. The richness of visual information makes bottom-up, solely image driven approaches always prone to errors. To be reliable, the current systems must be large and incorporate numerous ad-hoc procedures. The paradigms of gray level image segmentation (pixel-based, area-based, edge-based) are also used for color images. In addition, the physics-based methods take into account information about the image formation processes as well. See, for example, the reviews. The proposed segmentation technique does not consider the physical processes; it uses only the given image, i.e., a set of RGB vectors. Nevertheless, it can easily be extended to incorporate supplementary information about the input. As homogeneity criterion, color similarity is used.
Since perfect segmentation cannot be achieved without a top-down, knowledge driven component, a bottom-up segmentation technique should
· only provide the input into the next stage where the task is accomplished using a priori knowledge about its goal; and
· eliminate, as much as possible, the dependence on user set parameter values.
Segmentation resolution is the most general parameter characterizing a segmentation technique.
While this parameter has a continuous scale, three important classes can be distinguished.
Undersegmentation corresponds to the lowest resolution. Homogeneity is defined with a large tolerance margin and only the most significant colors are retained for the feature palette. The region boundaries in a correctly undersegmented image are the dominant edges in the image.
Oversegmentation corresponds to intermediate resolution. The feature palette is rich enough that the image is broken into many small regions from which any sought information can be assembled under knowledge control. Oversegmentation is the recommended class when the goal of the task is object recognition.
Quantization corresponds to the highest resolution. The feature palette contains all the important colors in the image. This segmentation class became important with the spread of image databases. The full palette, possibly together with the underlying spatial structure, is essential for content-based queries.
The proposed color segmentation technique operates in any of these three classes. The user only chooses the desired class; the specific operating conditions are derived automatically by the program.
Images are usually stored and displayed in the RGB space. However, to ensure the isotropy of the feature space, a uniform color space with the perceived color differences measured by Euclidean distances should be used. We have chosen the L*u*v* space, whose coordinates are related to the RGB values by nonlinear transformations. The daylight standard D65 was used as reference illuminant. The chromatic information is carried by u* and v*, while the lightness coordinate L* can be regarded as the relative brightness. Psychophysical experiments show that the L*u*v* space may not be perfectly isotropic; however, it was found satisfactory for image understanding applications. The image capture/display operations also introduce deviations which are most often neglected.
The steps of color image segmentation are presented below. The acronyms ID and FS stand for image domain and feature space, respectively. All feature space computations are performed in the L*u*v* space.

1. [FS] Definition of the segmentation parameters.
The user only indicates the desired class of segmentation. The class definition is translated into three parameters:
· the radius of the search window, r;
· the smallest number of elements required for a significant color, N_min;
· the smallest number of contiguous pixels required for a significant image region, N_con.
The size of the search window determines the resolution of the segmentation, smaller values corresponding to higher resolutions. The subjective (perceptual) definition of a homogeneous region seems to depend on the "visual activity" in the image. Within the same segmentation class an image containing large homogeneous regions should be analyzed at higher resolution than an image with many textured areas. The simplest measure of the "visual activity" can be derived from the global covariance matrix. The square root of its trace, σ, is related to the power of the signal (image). The radius r is taken proportional to σ. The rules defining the three segmentation class parameters are given in Table 2. These rules were used in the segmentation of a large variety of images, ranging from simple blood cells to complex indoor and outdoor scenes. When the goal of the task is well defined and/or all the images are of the same type, the parameters can be fine tuned.

Table 2: Segmentation Class Parameters
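Step 1 ties the window radius r to the square root of the trace of the global covariance matrix, σ. A small Python sketch of that computation is given below; since Table 2 is not reproduced here, the per-class proportionality factors are placeholders, not the paper's values.

```python
import numpy as np

def window_radius(luv_pixels, segmentation_class="oversegmentation"):
    """Derive the search-window radius r from the global covariance trace.

    luv_pixels : (n, 3) array of L*u*v* feature vectors for the whole image.
    The per-class factors are illustrative placeholders, not the values
    of Table 2 in the paper."""
    cov = np.cov(luv_pixels, rowvar=False)      # 3x3 global covariance matrix
    sigma = np.sqrt(np.trace(cov))              # "visual activity" measure
    factor = {"undersegmentation": 0.4,
              "oversegmentation": 0.2,
              "quantization": 0.1}[segmentation_class]
    return factor * sigma                       # r is proportional to sigma
```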
2. [ID+FS] Definition of the search window.
The initial location of the search window in the feature space is randomly chosen. To ensure that the search starts close to a high density region several location candidates are examined. The random sampling is performed in the image domain and a few, M = 25, pixels are chosen. For each pixel, the mean of its 3×3 neighborhood is computed and mapped into the feature space. If the neighborhood belongs to a larger homogeneous region, with high probability the location of the search window will be as wanted. To further increase this probability, the window containing the highest density of feature vectors is selected from the M candidates.
3. [FS] Mean shift algorithm.
To locate the closest mode the mean shift algorithm is applied to the selected search window. Convergence is declared when the magnitude of the shift becomes less than 0.1.
4. [ID+FS] Removal of the detected feature.
The pixels yielding feature vectors inside the search window at its final location are discarded from both domains. Additionally, their 8-connected neighbors in the image domain are also removed independent of the feature vector value. These neighbors can have "strange" colors due to the image formation process and their removal cleans the background of the feature space. Since all pixels are reallocated in Step 7, possible errors will be corrected.
5. [ID+FS] Iterations.
Repeat Steps 2 to 4 till the number of feature vectors in the selected search window no longer exceeds N_min.
6. [ID] Determining the initial feature palette.
In the feature space a significant color must be based on a minimum of N_min vectors. Similarly, to declare a color significant in the image domain, more than N_min pixels of that color should belong to a connected component. From the extracted colors only those are retained for the initial feature palette which yield at least one connected component in the image of size larger than N_min. The neighbors removed at Step 4 are also considered when defining the connected components. Note that the threshold is not N_con, which is used only at the post-processing stage.
7. [ID+FS] Determining the final feature palette.
The initial feature palette provides the colors allowed when segmenting the image. If the palette is not rich enough the segmentation resolution was not chosen correctly and should be increased to the next class. All the pixels are reallocated based on this palette. First, the pixels yielding feature vectors inside the search windows at their final location are considered. These pixels are allocated to the color of the window center without taking into account image domain information. The windows are then inflated to double volume (their radius is multiplied by the p-th root of 2). The newly incorporated pixels are retained only if they have at least one neighbor which was already allocated to that color. The mean of the feature vectors mapped into the same color is the value retained for the final palette. At the end of the allocation procedure a small number of pixels can remain unclassified. These pixels are allocated to the closest color in the final feature palette.
8. [ID+FS] Postprocessing.
This step depends on the goal of the task. The simplest procedure is the removal from the image of all small connected components of size less than N_con. These pixels are allocated to the majority color in their 3×3 neighborhood, or in the case of a tie to the closest color in the feature space.
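Steps 2-5 amount to a loop that seeds a window near a dense region, runs mean shift, and removes the detected feature until too few vectors remain. The sketch below reuses the mean_shift_mode helper from the earlier block and deliberately omits the image-domain bookkeeping (3×3 neighborhood means, 8-connected neighbor removal, connected-component checks), so it is an approximation of the procedure, not a faithful implementation.

```python
import numpy as np

def extract_palette(features, radius, n_min, m_candidates=25, rng=None):
    """Simplified feature-palette extraction loop (Steps 2-5).

    features : (n, p) array of feature vectors still to be allocated."""
    rng = rng or np.random.default_rng()
    remaining = features.copy()
    palette = []
    while len(remaining) > n_min:
        # Step 2: examine a few random candidates and keep the densest window.
        idx = rng.choice(len(remaining),
                         size=min(m_candidates, len(remaining)), replace=False)
        candidates = remaining[idx]
        counts = [(np.linalg.norm(remaining - c, axis=1) <= radius).sum()
                  for c in candidates]
        start = candidates[int(np.argmax(counts))]
        # Step 3: mean shift to the closest mode.
        mode = mean_shift_mode(remaining, start, radius)
        # Step 4: remove the feature vectors inside the converged window.
        inside = np.linalg.norm(remaining - mode, axis=1) <= radius
        if inside.sum() < n_min:          # Step 5 stopping rule
            break
        palette.append(mode)
        remaining = remaining[~inside]
    return np.array(palette)
```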
In Figure 2 the house image containing 9603 different colors is shown. The segmentation results for the three classes and the region boundaries are given in Figure 5a-f. Note that undersegmentation yields a good edge map, while in the quantization class the original image is closely reproduced with only 37 colors. A second example using the oversegmentation class is shown in Figure 3. Note the details on the fuselage.

5 Discussion
The simplicity of the basic computational module, the mean shift algorithm, enables the feature space analysis to be accomplished very fast. From a 512×512 pixel image a palette of 10-20 features can be extracted in less than 10 seconds on an Ultra SPARC 1 workstation. To achieve such a speed the implementation was optimized and, whenever possible, the feature space (containing fewer distinct elements than the image domain) was used for array scanning; lookup tables were employed instead of frequently repeated computations; direct addressing instead of nested pointers; fixed point arithmetic instead of floating point calculations; partial computation of the Euclidean distances, etc.
The analysis of the feature space is completely autonomous, due to the extensive use of image domain information. All the examples in this paper, and dozens more not shown here, were processed using the parameter values given in Table 2. Recently Zhu and Yuille described a segmentation technique incorporating complex global optimization methods (snakes, minimum description length) with sensitive parameters and thresholds. To segment a color image over a hundred iterations were needed. When the images used there were processed with the technique described in this paper, the same quality results were obtained unsupervised and in less than a second. The new technique can be used unmodified for segmenting gray level images, which are handled as color images with only the L* coordinate. In Figure 6 an example is shown.
The result of segmentation can be further refined by local processing in the image domain. For example, robust analysis of the pixels in a large connected component yields the inlier/outlier dichotomy which then can be used to recover discarded fine details.
In conclusion, we have presented a general technique for feature space analysis with applications in many low-level vision tasks like thresholding, edge detection, segmentation. The nature of the feature space is not restricted; currently we are working on applying the technique to range image segmentation, Hough transform and optical flow decomposition.

Figure 2: The house image, 255×192 pixels, 9603 colors.
Figure 3: Color image segmentation example. (a) Original image, 512×512 pixels, 77041 colors. (b) Oversegmentation: 21/21 colors.
Figure 4: Performance comparison. (a) Original image, 116×261 pixels, 200 colors. (b) Undersegmentation: 5/4 colors. Region boundaries.
Figure 5: The three segmentation classes for the house image. The right column shows the region boundaries. (a)(b) Undersegmentation; number of colors extracted initially and in the feature palette: 8/8. (c)(d) Oversegmentation: 24/19 colors. (e)(f) Quantization: 49/37 colors.
Figure 6: Gray level image segmentation example. (a) Original image, 256×256 pixels. (b) Undersegmentation: 5 gray levels. (c) Region boundaries.

Translated text: Robust Analysis of Feature Spaces: Color Image Segmentation. Abstract: A general technique for the recovery of significant image features is presented.
Image Segmentation and Registration: Chinese-English Translation
Foreign literature translation — Translator: Li Ruiqin; Supervisor: Liu Wenjun

Medical Image Registration with Partial Data
Senthil Periaswamy, Hany Farid

The goal of image registration is to find a transformation that aligns one image to another. Medical image registration has emerged from this broad area of research as a particularly active field. This activity is due in part to the many clinical applications including diagnosis, longitudinal studies, and surgical planning, and to the need for registration across different imaging modalities (e.g., MRI, CT, PET, X-ray, etc.). Medical image registration, however, still presents many challenges. Several notable difficulties are (1) the transformation between images can vary widely and be highly non-rigid in nature; (2) images acquired from different modalities may differ significantly in overall appearance and resolution; (3) there may not be a one-to-one correspondence between the images (missing/partial data); and (4) each imaging modality introduces its own unique challenges, making it difficult to develop a single generic registration algorithm.
In estimating the transformation that aligns two images we must choose: (1) to estimate the transformation between a small number of extracted features, or between the complete unprocessed intensity images; (2) a model that describes the geometric transformation; (3) whether to and how to explicitly model intensity changes; (4) an error metric that incorporates the previous three choices; and (5) a minimization technique for minimizing the error metric, yielding the desired transformation.
Feature-based approaches extract a (typically small) number of corresponding landmarks or features between the pair of images to be registered. The overall transformation is estimated from these features. Common features include corresponding points, edges, contours or surfaces. These features may be specified manually or extracted automatically. Fiducial markers may also be used as features; these markers are usually selected to be visible in different modalities. Feature-based approaches have the advantage of greatly reducing computational complexity. Depending on the feature extraction process, these approaches may also be more robust to intensity variations that arise during, for example, cross-modality registration. Also, features may be chosen to help reduce sensor noise. These approaches can be, however, highly sensitive to the accuracy of the feature extraction. Intensity-based approaches, on the other hand, estimate the transformation between the entire intensity images. Such an approach is typically more computationally demanding, but avoids the difficulties of a feature extraction stage.
Independent of the choice of a feature- or intensity-based technique, a model describing the geometric transform is required. A common and straightforward choice is a model that embodies a single global transformation. The problem of estimating a global translation and rotation parameter has been studied in detail, and a closed-form solution was proposed by Schonemann. Other closed-form solutions include methods based on singular value decomposition (SVD), eigenvalue-eigenvector decomposition and unit quaternions. One idea for a global transformation model is to use polynomials. For example, a zeroth-order polynomial limits the transformation to simple translations, a first-order polynomial allows for an affine transformation, and, of course, higher-order polynomials can be employed, yielding progressively more flexible transformations.
For example, the registration package Automated Image Registration (AIR) can employ (as an option) a fifth-order polynomial consisting of 168 parameters (for 3-D registration). The global approach has the advantage that the model consists of a relatively small number of parameters to be estimated, and the global nature of the model ensures a consistent transformation across the entire image. The disadvantage of this approach is that estimation of higher-order polynomials can lead to an unstable transformation, especially near the image boundaries. In addition, a relatively small and local perturbation can cause disproportionate and unpredictable changes in the overall transformation. An alternative to these global approaches are techniques that model the global transformation as a piecewise collection of local transformations. For example, the transformation between each local region may be modeled with a low-order polynomial, and global consistency is enforced via some form of a smoothness constraint. The advantage of such an approach is that it is capable of modeling highly nonlinear transformations without the numerical instability of high-order global models. The disadvantage is one of computational inefficiency due to the significantly larger number of model parameters that need to be estimated, and the need to guarantee global consistency. Low-order polynomials are, of course, only one of many possible local models that may be employed. Other local models include B-splines, thin-plate splines, and a multitude of related techniques. The package Statistical Parametric Mapping (SPM) uses the low-frequency discrete cosine basis functions, where a bending-energy function is used to ensure global consistency. Physics-based techniques that compute a local geometric transform include those based on the Navier-Stokes equilibrium equations for linear elasticity and those based on viscous fluid approaches.
Under certain conditions a purely geometric transformation is sufficient to model the transformation between a pair of images. Under many real-world conditions, however, the images undergo changes in both geometry and intensity (e.g., brightness and contrast). Many registration techniques attempt to remove these intensity differences with a pre-processing stage, such as histogram matching or homomorphic filtering. The issues involved with modeling intensity differences are similar to those involved in choosing a geometric model. Because the simultaneous estimation of geometric and intensity changes can be difficult, few techniques build explicit models of intensity differences. A few notable exceptions include AIR, in which global intensity differences are modeled with a single multiplicative contrast term, and SPM, in which local intensity differences are modeled with a basis function approach.
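As an illustration of the first-order (affine) polynomial model discussed above, the following Python sketch estimates a 2-D affine transform from corresponding landmark points by linear least squares. The function names are illustrative; this is a generic feature-based formulation, not the algorithm of AIR or SPM.

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Estimate A (2x2) and t (2,) such that dst ≈ src @ A.T + t.

    src, dst : (n, 2) arrays of corresponding landmark coordinates (n >= 3).
    A first-order polynomial (affine) model solved in closed form by
    least squares, as discussed in the text."""
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])      # design matrix [x y 1]
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A = params[:2].T                           # linear part of the transform
    t = params[2]                              # translation
    return A, t

def apply_affine(points, A, t):
    # Map points through the estimated affine transformation.
    return points @ A.T + t
```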
Having decided upon a transformation model, the task of estimating the model parameters begins. As a first step, an error function in the model parameters must be chosen. This error function should embody some notion of what is meant for a pair of images to be registered. Perhaps the most common choice is the mean square error (MSE), defined as the mean of the square of the differences (in either feature distance or intensity) between the pair of images. This metric is easy to compute and often affords simple minimization techniques. A variation of this metric is the unnormalized correlation coefficient applicable to intensity-based techniques. This error metric is defined as the sum of the point-wise products of the image intensities, and can be efficiently computed using Fourier techniques. A disadvantage of these error metrics is that images that would qualitatively be considered to be in good registration may still have large errors due to, for example, intensity variations, or slight misalignments. Another error metric (included in AIR) is the ratio of image uniformity (RIU), defined as the normalized standard deviation of the ratio of image intensities. Such a metric is invariant to overall intensity scale differences, but typically leads to nonlinear minimization schemes. Mutual information, entropy and the Pearson product moment cross correlation are just a few examples of other possible error functions. Such error metrics are often adopted to deal with the lack of an explicit model of intensity transformations.
In the final step of registration, the chosen error function is minimized, yielding the desired model parameters. In the most straightforward case, least-squares estimation is used when the error function is linear in the unknown model parameters. This closed-form solution is attractive as it avoids the pitfalls of iterative minimization schemes such as gradient descent or simulated annealing. Such nonlinear minimization schemes are, however, necessary due to an often nonlinear error function. A reasonable compromise between these approaches is to begin with a linear error function, solve using least squares, and use this solution as a starting point for a nonlinear minimization.

Translated text: Medical Image Registration with Partial Data, Senthil Periaswamy, Hany Farid. The goal of image registration is to find a transformation that aligns one image to another.
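To make the error metrics surveyed in this article concrete, here is a small Python sketch computing the mean square error, the unnormalized correlation, and an RIU-style measure (normalized standard deviation of the intensity ratio) for two equally sized intensity images. The function name and the epsilon guard are illustrative assumptions, not part of any package mentioned in the text.

```python
import numpy as np

def registration_errors(img_a, img_b, eps=1e-8):
    """Simple intensity-based error metrics for two aligned images.

    mse  : mean square error of intensity differences
    corr : unnormalized correlation (sum of point-wise products)
    riu  : normalized standard deviation of the intensity ratio
           (invariant to a global intensity scale difference)"""
    a = img_a.astype(float).ravel()
    b = img_b.astype(float).ravel()
    mse = float(np.mean((a - b) ** 2))
    corr = float(np.sum(a * b))
    ratio = a / (b + eps)
    riu = float(ratio.std() / (ratio.mean() + eps))
    return {"mse": mse, "corr": corr, "riu": riu}
```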
Foreign Literature and Translation: Digital Image Processing and Edge Detection
Digital Image Processing and Edge Detection

Digital Image Processing
Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation; and processing of image data for storage, transmission, and representation for autonomous machine perception.
An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used to denote the elements of a digital image.
Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.
There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI) whose objective is to emulate human intelligence. The field of AI is in its earliest stages of infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in between image processing and computer vision.
There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects.
A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.
Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects. As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing in this book. Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement "making sense." As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value.
The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field. One of the simplest ways to develop a basic understanding of the extent of image processing applications is to categorize images according to their source (e.g., visual, X-ray, and so on). The principal energy source for images in use today is the electromagnetic energy spectrum. Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of electron beams used in electron microscopy). Synthetic images, used for modeling and visualization, are generated by computer. In this section we discuss briefly how images are generated in these various categories and the areas in which they are applied.
Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnetic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy. Each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the spectrum shown in the figure below, ranging from gamma rays (highest energy) at one end to radio waves (lowest energy) at the other. The bands are shown shaded to convey the fact that bands of the EM spectrum are not distinct but rather transition smoothly from one to the other.
Image acquisition is the first process. Note that acquisition could be as simple as being given an image that is already in digital form.
Generally, the image acquisition stage involves preprocessing, such as scaling.
Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is when we increase the contrast of an image because "it looks better." It is important to keep in mind that enhancement is a very subjective area of image processing. Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a "good" enhancement result.
Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. It covers a number of fundamental concepts in color models and basic color processing in a digital domain. Color is used also in later chapters as the basis for extracting features of interest in an image.
Wavelets are the foundation for representing images in various degrees of resolution. In particular, this material is used for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions.
Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image file extensions, such as the jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.
Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. The material in this chapter begins a transition from processes that output images to processes that output image attributes.
Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.
Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region.
Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.
Recognition is the process that assigns a label (e.g., "vehicle") to an object based on its descriptors. As detailed before, we conclude our coverage of digital image processing with the development of methods for recognition of individual objects.
So far we have said nothing about the need for prior knowledge or about the interaction between the knowledge base and the processing modules in Fig. 2 above. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base also can be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. This distinction is made in Fig. 2 above by the use of double-headed arrows between the processing modules and the knowledge base, as opposed to single-headed arrows linking the processing modules.

Edge Detection
Edge detection is a term in image processing and computer vision, particularly in the areas of feature detection and feature extraction, referring to algorithms which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. Although point and line detection certainly are important in any discussion on segmentation, edge detection is by far the most common approach for detecting meaningful discontinuities in gray level.
Although certain literature has considered the detection of ideal step edges, the edges obtained from natural images are usually not ideal step edges at all. Instead they are normally affected by one or several of the following effects: focal blur caused by a finite depth-of-field and a finite point spread function; penumbral blur caused by shadows created by light sources of non-zero radius; shading at a smooth object edge; and local specularities or inter-reflections in the vicinity of object edges.
A typical edge might for instance be the border between a block of red color and a block of yellow. In contrast, a line (as can be extracted by a ridge detector) can be a small number of pixels of a different color on an otherwise unchanging background. For a line, there may therefore usually be one edge on each side of the line.
To illustrate why edge detection is not a trivial task, let us consider the problem of detecting edges in the following one-dimensional signal.
Here, we may intuitively say that there should be an edge between the 4th and 5th pixels:
5 7 6 4 152 148 149
If the intensity difference between the 4th and 5th pixels were smaller, and if the intensity differences between the adjacent neighbouring pixels were higher, it would not be as easy to say that there should be an edge in the corresponding region. Moreover, one could argue that this is a case in which there are several edges. Hence, to firmly state a specific threshold on how large the intensity change between two neighbouring pixels must be for us to say that there should be an edge between these pixels is not always a simple problem. Indeed, this is one of the reasons why edge detection may be a non-trivial problem unless the objects in the scene are particularly simple and the illumination conditions can be well controlled.
There are many methods for edge detection, but most of them can be grouped into two categories: search-based and zero-crossing based. The search-based methods detect edges by first computing a measure of edge strength, usually a first-order derivative expression such as the gradient magnitude, and then searching for local directional maxima of the gradient magnitude using a computed estimate of the local orientation of the edge, usually the gradient direction. The zero-crossing based methods search for zero crossings in a second-order derivative expression computed from the image in order to find edges, usually the zero-crossings of the Laplacian or the zero-crossings of a non-linear differential expression, as will be described below. As a pre-processing step to edge detection, a smoothing stage, typically Gaussian smoothing, is almost always applied.
The edge detection methods that have been published mainly differ in the types of smoothing filters that are applied and the way the measures of edge strength are computed. As many edge detection methods rely on the computation of image gradients, they also differ in the types of filters used for computing gradient estimates in the x- and y-directions.
Once we have computed a measure of edge strength (typically the gradient magnitude), the next stage is to apply a threshold to decide whether edges are present or not at an image point. The lower the threshold, the more edges will be detected, and the result will be increasingly susceptible to noise, and also to picking out irrelevant features from the image. Conversely, a high threshold may miss subtle edges, or result in fragmented edges.
If the edge thresholding is applied to just the gradient magnitude image, the resulting edges will in general be thick, and some type of edge thinning post-processing is necessary. For edges detected with non-maximum suppression, however, the edge curves are thin by definition and the edge pixels can be linked into an edge polygon by an edge linking (edge tracking) procedure. On a discrete grid, the non-maximum suppression stage can be implemented by estimating the gradient direction using first-order derivatives, then rounding off the gradient direction to multiples of 45 degrees, and finally comparing the values of the gradient magnitude in the estimated gradient direction.
A commonly used approach to handle the problem of choosing appropriate thresholds is thresholding with hysteresis. This method uses multiple thresholds to find edges. We begin by using the upper threshold to find the start of an edge. Once we have a start point, we then trace the path of the edge through the image pixel by pixel, marking an edge whenever we are above the lower threshold. We stop marking our edge only when the value falls below our lower threshold. This approach makes the assumption that edges are likely to be in continuous curves, and allows us to follow a faint section of an edge we have previously seen, without meaning that every noisy pixel in the image is marked down as an edge. Still, however, we have the problem of choosing appropriate thresholding parameters, and suitable thresholding values may vary over the image.
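The gradient-plus-hysteresis procedure just described is essentially what the Canny detector implements. The following Python/OpenCV sketch assumes illustrative threshold and kernel values; as the text notes, suitable values typically vary from image to image.

```python
import cv2
import numpy as np

def edges_with_hysteresis(gray, low=50, high=150):
    """Gaussian smoothing, gradient computation, and hysteresis thresholding.

    The thresholds and kernel size are illustrative choices, not values
    taken from the text."""
    smoothed = cv2.GaussianBlur(gray, (5, 5), 1.4)     # pre-smoothing step
    # Canny internally computes first-order derivatives, applies
    # non-maximum suppression, and links edges using the two thresholds.
    return cv2.Canny(smoothed, low, high)

def gradient_magnitude(gray):
    # A plain gradient-magnitude map, for comparison with a single threshold.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return np.hypot(gx, gy)
```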
Some edge-detection operators are instead based upon second-order derivatives of the intensity. This essentially captures the rate of change in the intensity gradient. Thus, in the ideal continuous case, detection of zero-crossings in the second derivative captures local maxima in the gradient.

We can come to the conclusion that, to be classified as a meaningful edge point, the transition in gray level associated with that point has to be significantly stronger than the background at that point. Since we are dealing with local computations, the method of choice to determine whether a value is "significant" or not is to use a threshold. Thus, we define a point in an image as being an edge point if its two-dimensional first-order derivative is greater than a specified threshold. A set of such points that are connected according to a predefined criterion of connectedness is by definition an edge. The term edge segment generally is used if the edge is short in relation to the dimensions of the image. A key problem in segmentation is to assemble edge segments into longer edges. An alternate definition, if we elect to use the second derivative, is simply to define the edge points in an image as the zero crossings of its second derivative; the definition of an edge in this case is the same as above. It is important to note that these definitions do not guarantee success in finding edges in an image; they simply give us a formalism to look for them. First-order derivatives in an image are computed using the gradient, and second-order derivatives are obtained using the Laplacian.

Digital Image Processing and Edge Detection
Digital image processing
Research on digital image processing methods stems from two principal application areas: the first is improving pictorial information so that it is easier for people to analyse; the second is storing, transmitting and displaying image data so that machines can understand it automatically.
5电气自动化 单片机 外文文献 英文文献 外文翻译 中英对照大学毕设论文
Single-chip
1. The definition of a single-chip microcomputer
A single-chip microcomputer is a complete computer system integrated on a single chip. Even though most of its features are packed into one small chip, it contains the majority of the components a complete computer needs: CPU, memory, internal and external bus systems, and in most cases a core. At the same time it integrates peripheral equipment such as communication interfaces, timers and a real-time clock. The most powerful single-chip microcomputer systems can even integrate voice, image, networking and complex input/output systems on a single chip. It is also known as the MCU (Microcontroller), because it was first used in the field of industrial control. The single-chip microcomputer evolved from dedicated processors built around a single CPU chip. The design concept was to integrate a large number of peripherals and the CPU on one chip, so that the computer system becomes smaller and is more easily built into complex, volume-constrained control devices. INTEL's Z80 was one of the first processors designed in accordance with this idea; from then on, the development of the MCU and that of the dedicated processor parted ways.

Early single-chip microcomputers were all 8-bit or 4-bit. One of the most successful was INTEL's 8031, which earned a great deal of praise for its simple and reliable performance. The MCS51 series of single-chip microcomputer systems was then developed on the basis of the 8031, and systems based on it are still widely used today. As requirements in the field of industrial control increased, 16-bit single-chip microcomputers appeared, but they were not widely used because their price was far from ideal. After the 1990s, with the rapid development of consumer electronics products, single-chip technology improved enormously. With INTEL's i960 series and especially the later ARM series finding a broad range of applications, 32-bit single-chip microcomputers quickly replaced 16-bit ones, and their processing power increased rapidly, by a few hundred times compared with the 1980s. At present, high-end 32-bit single-chip microcomputers run at frequencies over 300 MHz, with performance close on the heels of the dedicated processors of the mid-90s, while ordinary models have dropped in price to about one U.S. dollar and even the most high-end models cost only about 10 U.S. dollars. Contemporary single-chip microcomputer systems are no longer developed and used only in bare-metal environments; a large number of dedicated embedded operating systems are widely used across the full range of single-chip microcomputers, and the high-end single-chip microcomputers that serve as the core processors of PDAs and cell phones can even run dedicated Windows and Linux operating systems directly.

The single-chip microcomputer is more suitable than a dedicated processor for embedded systems, so it has found the widest range of applications. In fact, the single-chip microcomputer is the most numerous computer in the world. Almost every piece of electronic and mechanical equipment used in modern life has a single-chip microcomputer integrated in it. Cell phones, telephones, calculators, home appliances, electronic toys, handheld computers and computer accessories such as mice are each equipped with one or two single-chip microcomputers, and personal computers also contain a fair number of them. A vehicle is generally equipped with more than 40 single-chip microcomputers, and complex industrial control systems may even have hundreds working at the same time! The number of SCMs far exceeds not only the number of PCs and other computing devices, but even the number of human beings.

2. Introduction to the single-chip microcomputer
The single-chip microcomputer, also known as the single-chip microcontroller, is not a chip that completes a single
logic function of the chip,but a computer system integrated into a chip.Speaking in general terms: a single chip has become a computer .Its small size,light weight,cheap,for the learning,application and development of facilities provided .At the same time,learning to use the principle of single-chip computer to understand and structure the best choice.Single-chip and computer use is also similar to the module,such as CPU,memory,parallel bus, as well as the role and the same hard memory,is it different from the performance of these components are relatively weak in our home computer a lot,but the price is low ,there is generally no more than 10yuan,,can use it to make some control for a class of electrical work is not very complex is sufficient.We are using automatic drum washing machines, smoke hood,VCD and so on inside the home appliances can see its shadow! It is mainly as part of the core components of the control.It is an online real-time control computer,control-line is at the scene,we need to have a stronger anti-interference ability,low cost,and this is off-line computer(such as home PC)The main difference.By single-chip process,and can be amended.Through different procedures to achieve different functions,in particular the special unique features,this is the need to charge other devices can do a great effort,some of it is also difficult to make great efforts to do so .A function is not very complicated fi the United States the development of the 50s series of 74 or 60 during the CD4000series to get these pure hardware,the circuit must be a big PCB board !However,if the United States if the successful 70s seriesof single-chip market ,the result will be different!Simply because the adoption of single-chip preparation process you can achieve high intelligence,high efficiency and high reliability!Because of cost of single-chip is sensitive,so the dominant software or the lowest level assembly language,which is in addition to the lowest level for more than binary machine code of the language ,since such a low-level so why should we use ?Many of the seniors language has reached a level of visual programming why is it not in use ?The reason is simple ,that is,single-chip computer as there is no home of CPU,also not as hard as the mass storage device.A visualization of small high-level language program,even if there is only one button which will reach the size of dozens of K! For the home PCs hard drive is nothing,but in terms of the single-chip microcomputer is unacceptable.Single-chip in the utilization of hardware resources have to do very high ,so the compilation of the original while still in heavy use .The same token ,if the computer giants operating system and appplications run up to get the home PC,homePCcan not afford to sustain the same.It can be said that the twentieth century across the three “power”of the times,that is ,the electrical era,the electronic age and has now entered the computer age. However ,such a computer,usually refers to a personal computer,or PC.It consisits of the host ,keyboards,displays .And other components.There is also a type of computer,not how most people are familiar with . This computer is smart to give a variety of mechanical single-chip(also known as micro-controller).As the name suggests,these computer systems use only the minimum of an integrated circuit to make a simple calculation and control. Because of its small size,are usually charged with possession of machine in the “belly”in. 
It in the device,like the human mind plays a role, it is wrong,the entire device was paralyzed .Now,this single chip has a very wide field of use,such as smart meters,real-time industrial control,communications equipment,navigation systems,and household appliances. Once a variety of products with the use of the single-chip ,will be able to play so that the effectiveness of product upgrading,product names often adjective before the word “intelligent”,such as was hing machines and so intelligent.At present,some technical personnel of factories or other amateur electrtonics developers from engaging in certain products ,not the circuit is too complex ,that is functional and easy to be too simple imitation.The reason may be the product not on the cards or the use of single-chip programmable logic device on the other.3.single-chip historysingle-chip 70 was born in the late 20th century,experienced a SCM,MCU,SOC three stages.Single-chip micro-computer 1.SCM that(Single Chip Microcomputer)stage,is mainly a single from to find the best of the best embedded systems architecture.”Innovation model”to be successful,lay the SCM with the general-purpose computers,a completely different path of development . In embedded systems to create an independent development path,Intel Corporation credit.That is 2.MCU microcontroller(Micro Controller Unit)stage,the main direction of technology development: expanding to meet the embedded applications,the target system requirements for the various peripheral circuits and interface circuits,to highlingt the target of intelligent control.It covers all areas related with the objectSystem,therefore,the development of MCU inevitably fall on the heavy electrical,electronics manufacturers. From this point of view ,Intels development gradually MCU has its objective factors.MCU in the development ,the most famous manufacturers when the number of Philips Corporation.Philips in embedded applications for its enormous advantages,the MCS-51 from the rapid deveploment of single-chip micro-computer to the microcontroller.Therefore,when we look back at the path of development of embedded systems,Intel and Philips do not forget the historical merits.3.Single-chip is an independent embedded systems development,to the MCU an important factor in the development stage,is seeking applications to maximize the natural trend .With the mico-electronics technology,IC design,EDA tools development,based on the single-chip SOC design application systems will have greater development. Therefore,the understanding of single-chip micro-computer from a single ,monolithic single-chip microcontroller extends to applications.4.Single-chip applicationsAt present,single-chip microcomputer to infiltrate all areas of our lives,which is very difficult to find the area of almost no traces of single-chip microcomputer.Missile navigation equipment,aircraft control on a variety of instruments,compuer network communications and data transmission,industrial automation,real-time process control and data processing ,are widely used in a variety of smart IC card,limousine civilian security systems,video recorders,cameras,the control of automatic washing machines,as well as program-controllde toys,electronic pet,etc,which are inseparable from the single-chip microcomputer.Not to mention the field of robot automation ,intelligent instrumentation,medical equipment has been. 
Therefore,the single- chip learning ,development and application to a large number of computer applications and intelligent control of scientists,engineers.Single-chip widely used in instruments and meters,household appliances,medical equipment ,acrospace,specialized equipment and the intellingent management in areas such as process control,generally can be divided into the following areas:1.In the smart application of instrumentationSingle-chip with small size,low power consumption,control,and expansion flexibility , miniaturization and ease of sensors,can be realized,suchvoltage,power,frequency,humidity,temperature,flow,speed,thickness,angle,length,hardness,elemen t,measurement of physical pressure. SCM makes use of digital instrumentation,intelligence,miniaturization and functional than the use of electronic or digital circuitry even stronger.For example,precision measurement equipment(power meter,oscilloscope,and analyzer).2.In the industrial controlMCU can constitute a variety of control systems,data acquisition system.Such as factory assembly line of intelligent management ,intelligent control of the lift ,all kinds of alarm systems ,and computer networks constitute a secondary control system.3.In the applicationof household appliancesIt can be said that almost all home appliances are using the single-chip control,electric rice from favorable,washing machines,refrigerators,air conditioners,color TV and other audio video equipment,and then to the electronic weighing equipment,all kinds ,everywhere.4.On computer networks and communication applications in the field ofGenerally with the modern single-chip communication interface,can be easily carried out with computer carried out with computer data communications,computer networks and in inter-application communications equipment to provide an excellent material conditions,the communications equipment to provide an excellent material condition,from the mobile phone ,telephone , mini-program-controlled switchboards,buiding automated communications system call,the train wireless communications,and then you can see day-to-day work of mobile phones,Mobile communications,such as radios.5.Single-chip in the field of medical equipment applicationsSingle-chip microcomputer in medical devices have a wide range of purpose,such as medical ventilator,various analyzers,monitors,ultrasonic diagnostic equipment and hospital call systems.6.In a variety of large-scale electrical applications of modularSome special single-chip design to achieve a specific function to carry out a variety of modular circuitapplications,without requiring users to understand its internal structure.Integrated single-chip microcomputer such as music ,which seems to be simpleFunctions,a miniature electronic chip in a pure(as distinct from the principle of tape machine),would require a complex similar to the principle of the computer. Such as :music signal to digital form stored in memory(similar to ROM),read out by the microcontroller into analog music signal(similar to the sound card).In large circuits,modular applications that greatly reduces the size ,simplifying the circuit and reduce the damage,error rate ,but also to facilitate the replacement.In addition,single-chip microcomputer in the industrial,commercial,financial,scientific research ,education,defense aerospace and other fields have a wide range of uses.单片机1.单片机定义单片机是一种集成在电路芯片上的完整计算机系统。
计算机java外文翻译外文文献英文文献
英文原文:Title: Business Applications of Java. Author: Erbschloe, Michael, Business Applications of Java -- Research Starters Business, 2008DataBase: Research Starters - BusinessBusiness Applications of JavaThis article examines the growing use of Java technology in business applications. The history of Java is briefly reviewed along with the impact of open standards on the growth of the World Wide Web. Key components and concepts of the Java programming language are explained including the Java Virtual Machine. Examples of how Java is being used bye-commerce leaders is provided along with an explanation of how Java is used to develop data warehousing, data mining, and industrial automation applications. The concept of metadata modeling and the use of Extendable Markup Language (XML) are also explained.Keywords Application Programming Interfaces (API's); Enterprise JavaBeans (EJB); Extendable Markup Language (XML); HyperText Markup Language (HTML); HyperText Transfer Protocol (HTTP); Java Authentication and Authorization Service (JAAS); Java Cryptography Architecture (JCA); Java Cryptography Extension (JCE); Java Programming Language; Java Virtual Machine (JVM); Java2 Platform, Enterprise Edition (J2EE); Metadata Business Information Systems > Business Applications of JavaOverviewOpen standards have driven the e-business revolution. Networking protocol standards, such as Transmission Control Protocol/Internet Protocol (TCP/IP), HyperText Transfer Protocol (HTTP), and the HyperText Markup Language (HTML) Web standards have enabled universal communication via the Internet and the World Wide Web. As e-business continues to develop, various computing technologies help to drive its evolution.The Java programming language and platform have emerged as major technologies for performing e-business functions. Java programming standards have enabled portability of applications and the reuse of application components across computing platforms. Sun Microsystems' Java Community Process continues to be a strong base for the growth of the Java infrastructure and language standards. This growth of open standards creates new opportunities for designers and developers of applications and services (Smith, 2001).Creation of Java TechnologyJava technology was created as a computer programming tool in a small, secret effort called "the Green Project" at Sun Microsystems in 1991. The Green Team, fully staffed at 13 people and led by James Gosling, locked themselves away in an anonymous office on Sand Hill Road in Menlo Park, cut off from all regular communications with Sun, and worked around the clock for18 months. Their initial conclusion was that at least one significant trend would be the convergence of digitally controlled consumer devices and computers. A device-independent programming language code-named "Oak" was the result.To demonstrate how this new language could power the future of digital devices, the Green Team developed an interactive, handheld home-entertainment device controller targeted at the digital cable television industry. But the idea was too far ahead of its time, and the digital cable television industry wasn't ready for the leap forward that Java technology offered them. 
As it turns out, the Internet was ready for Java technology, and just in time for its initial public introduction in 1995, the team was able to announce that the Netscape Navigator Internet browser would incorporate Java technology ("Learn about Java," 2007).Applications of JavaJava uses many familiar programming concepts and constructs and allows portability by providing a common interface through an external Java Virtual Machine (JVM). A virtual machine is a self-contained operating environment, created by a software layer that behaves as if it were a separate computer. Benefits of creating virtual machines include better exploitation of powerful computing resources and isolation of applications to prevent cross-corruption and improve security (Matlis, 2006).The JVM allows computing devices with limited processors or memory to handle more advanced applications by calling up software instructions inside the JVM to perform most of the work. This also reduces the size and complexity of Java applications because many of the core functions and processing instructions were built into the JVM. As a result, software developersno longer need to re-create the same application for every operating system. Java also provides security by instructing the application to interact with the virtual machine, which served as a barrier between applications and the core system, effectively protecting systems from malicious code.Among other things, Java is tailor-made for the growing Internet because it makes it easy to develop new, dynamic applications that could make the most of the Internet's power and capabilities. Java is now an open standard, meaning that no single entity controls its development and the tools for writing programs in the language are available to everyone. The power of open standards like Java is the ability to break down barriers and speed up progress.Today, you can find Java technology in networks and devices that range from the Internet and scientific supercomputers to laptops and cell phones, from Wall Street market simulators to home game players and credit cards. There are over 3 million Java developers and now there are several versions of the code. Most large corporations have in-house Java developers. In addition, the majority of key software vendors use Java in their commercial applications (Lazaridis, 2003).ApplicationsJava on the World Wide WebJava has found a place on some of the most popular websites in the world and the uses of Java continues to grow. Java applications not only provide unique user interfaces, they also help to power the backend of websites. Two e-commerce giants that everybody is probably familiar with (eBay and Amazon) have been Java pioneers on the World Wide Web.eBayFounded in 1995, eBay enables e-commerce on a local, national and international basis with an array of Web sites-including the eBay marketplaces, PayPal, Skype, and -that bring together millions of buyers and sellers every day. You can find it on eBay, even if you didn't know it existed. On a typical day, more than 100 million items are listed on eBay in tens of thousands of categories. Recent listings have included a tunnel boring machine from the Chunnel project, a cup of water that once belonged to Elvis, and the Volkswagen that Pope Benedict XVI owned before he moved up to the Popemobile. 
More than one hundred million items are available at any given time, from the massive to the miniature, the magical to the mundane, on eBay; the world's largest online marketplace.eBay uses Java almost everywhere. To address some security issues, eBay chose Sun Microsystems' Java System Identity Manager as the platform for revamping its identity management system. The task at hand was to provide identity management for more than 12,000 eBay employees and contractors.Now more than a thousand eBay software developers work daily with Java applications. Java's inherent portability allows eBay to move to new hardware to take advantage of new technology, packaging, or pricing, without having to rewrite Java code ("eBay drives explosive growth," 2007).Amazon (a large seller of books, CDs, and other products) has created a Web Service application that enables users to browse their product catalog and place orders. uses a Java application that searches the Amazon catalog for books whose subject matches a user-selected topic. The application displays ten books that match the chosen topic, and shows the author name, book title, list price, Amazon discount price, and the cover icon. The user may optionally view one review per displayed title and make a buying decision (Stearns & Garishakurthi, 2003).Java in Data Warehousing & MiningAlthough many companies currently benefit from data warehousing to support corporate decision making, new business intelligence approaches continue to emerge that can be powered by Java technology. Applications such as data warehousing, data mining, Enterprise Information Portals (EIP's), and Knowledge Management Systems (which can all comprise a businessintelligence application) are able to provide insight into customer retention, purchasing patterns, and even future buying behavior.These applications can not only tell what has happened but why and what may happen given certain business conditions; allowing for "what if" scenarios to be explored. As a result of this information growth, people at all levels inside the enterprise, as well as suppliers, customers, and others in the value chain, are clamoring for subsets of the vast stores of information such as billing, shipping, and inventory information, to help them make business decisions. While collecting and storing vast amounts of data is one thing, utilizing and deploying that data throughout the organization is another.The technical challenges inherent in integrating disparate data formats, platforms, and applications are significant. However, emerging standards such as the Application Programming Interfaces (API's) that comprise the Java platform, as well as Extendable Markup Language (XML) technologies can facilitate the interchange of data and the development of next generation data warehousing and business intelligence applications. While Java technology has been used extensively for client side access and to presentation layer challenges, it is rapidly emerging as a significant tool for developing scaleable server side programs. The Java2 Platform, Enterprise Edition (J2EE) provides the object, transaction, and security support for building such systems.Metadata IssuesOne of the key issues that business intelligence developers must solve is that of incompatible metadata formats. Metadata can be defined as information about data or simply "data about data." 
In practice, metadata is what most tools, databases, applications, and other information processes use to define, relate, and manipulate data objects within their own environments. It defines the structure and meaning of data objects managed by an application so that the application knows how to process requests or jobs involving those data objects. Developers can use this schema to create views for users. Also, users can browse the schema to better understand the structure and function of the database tables before launching a query.To address the metadata issue, a group of companies (including Unisys, Oracle, IBM, SAS Institute, Hyperion, Inline Software and Sun) have joined to develop the Java Metadata Interface (JMI) API. The JMI API permits the access and manipulation of metadata in Java with standard metadata services. JMI is based on the Meta Object Facility (MOF) specification from the Object Management Group (OMG). The MOF provides a model and a set of interfaces for the creation, storage, access, and interchange of metadata and metamodels (higher-level abstractions of metadata). Metamodel and metadata interchange is done via XML and uses the XML Metadata Interchange (XMI) specification, also from the OMG. JMI leverages Java technology to create an end-to-end data warehousing and business intelligence solutions framework.Enterprise JavaBeansA key tool provided by J2EE is Enterprise JavaBeans (EJB), an architecture for the development of component-based distributed business applications. Applications written using the EJB architecture are scalable, transactional, secure, and multi-user aware. These applications may be written once and then deployed on any server platform that supports J2EE. The EJB architecture makes it easy for developers to write components, since they do not need to understand or deal with complex, system-level details such as thread management, resource pooling, and transaction and security management. This allows for role-based development where component assemblers, platform providers and application assemblers can focus on their area of responsibility further simplifying application development.EJB's in the Travel IndustryA case study from the travel industry helps to illustrate how such applications could function. A travel company amasses a great deal of information about its operations in various applications distributed throughout multiple departments. Flight, hotel, and automobile reservation information is located in a database being accessed by travel agents worldwide. Another application contains information that must be updated with credit and billing historyfrom a financial services company. Data is periodically extracted from the travel reservation system databases to spreadsheets for use in future sales and marketing analysis.Utilizing J2EE, the company could consolidate application development within an EJB container, which can run on a variety of hardware and software platforms allowing existing databases and applications to coexist with newly developed ones. EJBs can be developed to model various data sets important to the travel reservation business including information about customer, hotel, car rental agency, and other attributes.Data Storage & AccessData stored in existing applications can be accessed with specialized connectors. 
Integration and interoperability of these data sources is further enabled by the metadata repository that contains metamodels of the data contained in the sources, which then can be accessed and interchanged uniformly via the JMI API. These metamodels capture the essential structure and semantics of business components, allowing them to be accessed and queried via the JMI API or to be interchanged via XML. Through all of these processes, the J2EE infrastructure ensures the security and integrity of the data through transaction management and propagation and the underlying security architecture.To consolidate historical information for analysis of sales and marketing trends, a data warehouse is often the best solution. In this example, data can be extracted from the operational systems with a variety of Extract, Transform and Load tools (ETL). The metamodels allow EJBsdesigned for filtering, transformation, and consolidation of data to operate uniformly on datafrom diverse data sources as the bean is able to query the metamodel to identify and extract the pertinent fields. Queries and reports can be run against the data warehouse that contains information from numerous sources in a consistent, enterprise-wide fashion through the use of the JMI API (Mosher & Oh, 2007).Java in Industrial SettingsMany people know Java only as a tool on the World Wide Web that enables sites to perform some of their fancier functions such as interactivity and animation. However, the actual uses for Java are much more widespread. Since Java is an object-oriented language like C++, the time needed for application development is minimal. Java also encourages good software engineering practices with clear separation of interfaces and implementations as well as easy exception handling.In addition, Java's automatic memory management and lack of pointers remove some leading causes of programming errors. Most importantly, application developers do not need to create different versions of the software for different platforms. The advantages available through Java have even found their way into hardware. The emerging new Java devices are streamlined systems that exploit network servers for much of their processing power, storage, content, and administration.Benefits of JavaThe benefits of Java translate across many industries, and some are specific to the control and automation environment. For example, many plant-floor applications use relatively simple equipment; upgrading to PCs would be expensive and undesirable. Java's ability to run on any platform enables the organization to make use of the existing equipment while enhancing the application.IntegrationWith few exceptions, applications running on the factory floor were never intended to exchange information with systems in the executive office, but managers have recently discovered the need for that type of information. Before Java, that often meant bringing together data from systems written on different platforms in different languages at different times. Integration was usually done on a piecemeal basis, resulting in a system that, once it worked, was unique to the two applications it was tying together. Additional integration required developing a brand new system from scratch, raising the cost of integration.Java makes system integration relatively easy. Foxboro Controls Inc., for example, used Java to make its dynamic-performance-monitor software package Internet-ready. 
This software provides senior executives with strategic information about a plant's operation. The dynamic performance monitor takes data from instruments throughout the plant and performs variousmathematical and statistical calculations on them, resulting in information (usually financial) that a manager can more readily absorb and use.ScalabilityAnother benefit of Java in the industrial environment is its scalability. In a plant, embedded applications such as automated data collection and machine diagnostics provide critical data regarding production-line readiness or operation efficiency. These data form a critical ingredient for applications that examine the health of a production line or run. Users of these devices can take advantage of the benefits of Java without changing or upgrading hardware. For example, operations and maintenance personnel could carry a handheld, wireless, embedded-Java device anywhere in the plant to monitor production status or problems.Even when internal compatibility is not an issue, companies often face difficulties when suppliers with whom they share information have incompatible systems. This becomes more of a problem as supply-chain management takes on a more critical role which requires manufacturers to interact more with offshore suppliers and clients. The greatest efficiency comes when all systems can communicate with each other and share information seamlessly. Since Java is so ubiquitous, it often solves these problems (Paula, 1997).Dynamic Web Page DevelopmentJava has been used by both large and small organizations for a wide variety of applications beyond consumer oriented websites. Sandia, a multiprogram laboratory of the U.S. Department of Energy's National Nuclear Security Administration, has developed a unique Java application. The lab was tasked with developing an enterprise-wide inventory tracking and equipment maintenance system that provides dynamic Web pages. The developers selected Java Studio Enterprise 7 for the project because of its Application Framework technology and Web Graphical User Interface (GUI) components, which allow the system to be indexed by an expandable catalog. The flexibility, scalability, and portability of Java helped to reduce development timeand costs (Garcia, 2004)IssueJava Security for E-Business ApplicationsTo support the expansion of their computing boundaries, businesses have deployed Web application servers (WAS). A WAS differs from a traditional Web server because it provides a more flexible foundation for dynamic transactions and objects, partly through the exploitation of Java technology. Traditional Web servers remain constrained to servicing standard HTTP requests, returning the contents of static HTML pages and images or the output from executed Common Gateway Interface (CGI ) scripts.An administrator can configure a WAS with policies based on security specifications for Java servlets and manage authentication and authorization with Java Authentication andAuthorization Service (JAAS) modules. An authentication and authorization service can bewritten in Java code or interface to an existing authentication or authorization infrastructure. Fora cryptography-based security infrastructure, the security server may exploit the Java Cryptography Architecture (JCA) and Java Cryptography Extension (JCE). To present the user with a usable interaction with the WAS environment, the Web server can readily employ a formof "single sign-on" to avoid redundant authentication requests. 
A single sign-on preserves user authentication across multiple HTTP requests so that the user is not prompted many times for authentication data (i.e., user ID and password).Based on the security policies, JAAS can be employed to handle the authentication process with the identity of the Java client. After successful authentication, the WAS securitycollaborator consults with the security server. The WAS environment authentication requirements can be fairly complex. In a given deployment environment, all applications or solutions may not originate from the same vendor. In addition, these applications may be running on different operating systems. Although Java is often the language of choice for portability between platforms, it needs to marry its security features with those of the containing environment.Authentication & AuthorizationAuthentication and authorization are key elements in any secure information handling system. Since the inception of Java technology, much of the authentication and authorization issues have been with respect to downloadable code running in Web browsers. In many ways, this had been the correct set of issues to address, since the client's system needs to be protected from mobile code obtained from arbitrary sites on the Internet. As Java technology moved from a client-centric Web technology to a server-side scripting and integration technology, it required additional authentication and authorization technologies.The kind of proof required for authentication may depend on the security requirements of a particular computing resource or specific enterprise security policies. To provide such flexibility, the JAAS authentication framework is based on the concept of configurable authenticators. This architecture allows system administrators to configure, or plug in, the appropriate authenticatorsto meet the security requirements of the deployed application. The JAAS architecture also allows applications to remain independent from underlying authentication mechanisms. So, as new authenticators become available or as current authentication services are updated, system administrators can easily replace authenticators without having to modify or recompile existing applications.At the end of a successful authentication, a request is associated with a user in the WAS user registry. After a successful authentication, the WAS consults security policies to determine if the user has the required permissions to complete the requested action on the servlet. This policy canbe enforced using the WAS configuration (declarative security) or by the servlet itself (programmatic security), or a combination of both.The WAS environment pulls together many different technologies to service the enterprise. Because of the heterogeneous nature of the client and server entities, Java technology is a good choice for both administrators and developers. However, to service the diverse security needs of these entities and their tasks, many Java security technologies must be used, not only at a primary level between client and server entities, but also at a secondary level, from served objects. By using a synergistic mix of the various Java security technologies, administrators and developers can make not only their Web application servers secure, but their WAS environments secure as well (Koved, 2001).ConclusionOpen standards have driven the e-business revolution. As e-business continues to develop, various computing technologies help to drive its evolution. 
The Java programming language and platform have emerged as major technologies for performing e-business functions. Java programming standards have enabled portability of applications and the reuse of application components. Java uses many familiar concepts and constructs and allows portability by providing a common interface through an external Java Virtual Machine (JVM). Today, you can find Java technology in networks and devices that range from the Internet and scientific supercomputers to laptops and cell phones, from Wall Street market simulators to home game players and credit cards.Java has found a place on some of the most popular websites in the world. Java applications not only provide unique user interfaces, they also help to power the backend of websites. While Java technology has been used extensively for client side access and in the presentation layer, it is also emerging as a significant tool for developing scaleable server side programs.Since Java is an object-oriented language like C++, the time needed for application development is minimal. Java also encourages good software engineering practices with clear separation of interfaces and implementations as well as easy exception handling. Java's automatic memory management and lack of pointers remove some leading causes of programming errors. The advantages available through Java have also found their way into hardware. The emerging new Java devices are streamlined systems that exploit network servers for much of their processing power, storage, content, and administration.中文翻译:标题:Java的商业应用。
电气工程及其自动化专业 外文文献 英文文献 外文翻译 plc方面
1、外文原文(复印件)
A: Fundamentals of Single-chip Microcomputer
The single-chip microcomputer is the culmination of both the development of the digital computer and the integrated circuit, arguably the two most significant inventions of the 20th century [1]. These two types of architecture are found in single-chip microcomputers. Some employ the split program/data memory of the Harvard architecture, shown in Fig. 3-5A-1; others follow the philosophy, widely adapted for general-purpose computers and microprocessors, of making no logical distinction between program and data memory, as in the Princeton architecture, shown in Fig. 3-5A-2. In general terms a single-chip microcomputer is characterized by the incorporation of all the units of a computer into a single device, as shown in Fig. 3-5A-3.

Fig. 3-5A-1 A Harvard type
Fig. 3-5A-2 A conventional Princeton computer
Fig. 3-5A-3 Principal features of a microcomputer

Read only memory (ROM). ROM is usually for the permanent, non-volatile storage of an applications program. Many microcomputers and microcontrollers are intended for high-volume applications and hence the economical manufacture of the devices requires that the contents of the program memory be committed permanently during the manufacture of the chips. Clearly, this implies a rigorous approach to ROM code development, since changes cannot be made after manufacture. This development process may involve emulation using a sophisticated development system with a hardware emulation capability as well as the use of powerful software tools.

Some manufacturers provide additional ROM options by including in their range devices with (or intended for use with) user-programmable memory. The simplest of these is usually a device which can operate in a microprocessor mode by using some of the input/output lines as an address and data bus for accessing external memory. This type of device can behave functionally as the single-chip microcomputer from which it is derived, albeit with restricted I/O and a modified external circuit. The use of these ROMless devices is common even in production circuits where the volume does not justify the development costs of custom on-chip ROM [2]; there can still be a significant saving in I/O and other chips compared to a conventional microprocessor-based circuit. More exact replacements for ROM devices can be obtained in the form of variants with 'piggy-back' EPROM (Erasable programmable ROM) sockets or devices with EPROM instead of ROM.
自动化专业 单片机相关 外文文献 英文文献 外文翻译中英对照
本科生毕业论文(外文翻译) 译文名称:MCS -51 系列单片机的功能和结构专业:自动化班次:学员:指导教员:评阅人:完成时间:2022 年11 月30 日Structure and function of the MCS-51 series Structure and function of the MCS-51 series one-chip computer is a name ofa piece of one-chip computer series which Intel Company produces. This company introduced 8 top-grade one-chip computers of MCS-51 series in 1980 after introducing 8 one-chip computers of MCS-48 series in 1976. It belong to alot of kinds this line of one-chip computer the chips have,such as 8051, 8031, 8751, 80C51BH, 80C31BH,etc., their basic composition, basic performance and instruction system are all the same. 8051 daily representatives- 51 serial one-chip computers .An one-chip computer system is made up of several following parts: ( 1) One microprocessor of 8 (CPU). ( 2) At slice data memory RAM (128B/256B),it use not depositting not can reading /data that write, such as result not middle of operation, final result and data wanted to show, etc. ( 3) Procedure memory ROM/EPROM (4KB/8KB ), is used to preserve the procedure , some initial data and form in slice. But does not take ROM/EPROM within some one-chip computers, such as 8031 , 8032, 80C ,etc.. ( 4) Four 8 run side by side I/O interface P0 four P3, each mouth can use as introduction , may use as exporting too. ( 5) Two timer / counter, each timer / counter may set up and count in the way, used to count to the external incident, can set up into a timing way too, and can according to count or result of timing realize the control of the computer. ( 6) Five cut off cutting off the control system of the source . ( 7) One all duplexing serial I/O mouth of UART (universal asynchronous receiver/transmitter (UART) ), is it realize one-chip computer or one-chip computer and serial communication of computer to use for. ( 8) Stretch oscillator and clock produce circuit, quartz crystal finely tune electric capacity need outer. Allow oscillation frequency as 12 megahertas now at most. Every the above-mentioned part was joined through the inside data bus .Among them, CPU is a core of the one-chip computer, it is the control of the computer and command centre, made up of such parts as arithmetic unit and controller , etc.. The arithmetic unit can carryon 8 persons of arithmetic operation and unit ALU of logic operation while including one, the 1 storing device temporarilies of 8, storing device 2 temporarily, 8's accumulation device ACC, register B and procedure state register PSW, etc. Person who accumulate ACC count by 2 input ends entered of checking etc. temporarily as one operation often, come from person who store 1 operation is it is it make operation to go on to count temporarily , operation result and loopback ACC with another one. In addition, ACC is often regarded as the transfer station of data transmission on 8051 inside . The same as general microprocessor, it is the busiest register. Help remembering that agreeing with A expresses in the order. The controller includes the procedure counter , the order is depositted, the order decipher, the oscillator and timing circuit, etc. The procedure counter is made up of counter of 8 for two, amounts to 16. It is a byte address counter of the procedure in fact, the content is the next IA that will carried out in PC. The content which changes it can change the direction that the procedure carries out . Shake the circuit in 8051 one-chip computers, only need outer quartz crystal and frequency to finely tune the electric capacity, its frequency range is its 12MHZ of 1.2MHZ. 
This pulse signal, as 8051 basic beats of working, namely the minimum unit of time. 8051 is the same as other computers, the work in harmony under the control of the basic beat, just like an orchestra according to the beat play that is commanded.There are ROM (procedure memory , can only read ) and RAM in 8051 slices (data memory, can is it can write ) two to read, they have each independent memory address space, dispose way to be the same with general memory of computer. Procedure 8051 memory and 8751 slice procedure memory capacity 4KB, address begin from 0000H, used for preserving the procedure and form constant. Data 8051- 8751 8031 of memory data memory 128B, address false 00FH, use for middle result to deposit operation, the data are stored temporarily and the data are buffered etc.. In RAM of this 128B, there is unit of 32 byteses that can be appointed as the job register, this and generalmicroprocessor is different, 8051 slice RAM and job register rank one formation the same to arrange the location. It is not very the same that the memory of MCS-51 series one-chip computer and general computer disposes the way in addition. General computer for first address space, ROM and RAM can arrangein different space within the range of this address at will, namely the addressesof ROM and RAM, with distributing different address space in a formation. While visiting the memory, corresponding and only an address Memory unit, can ROM, it can be RAM too, and by visiting the order similarly. This kind of memory structure is called the structure of Princeton. 8051 memories are divided into procedure memory space and data memory space on the physics structure, there are four memory spaces in all: The procedure stores in one and data memory space outside data memory and one in procedure memory space and one outside one, the structure forms of this kind of procedure device and data memory separated form data memory, called Harvard structure. But use the angle from users, 8051 memory address space is divided into three kinds: (1) In the slice, arrange blocks of FFFFH , 0000H of location , in unison outside the slice (use 16 addresses). (2) The data memory address space outside one of 64KB, the address is arranged from 0000H 64KB FFFFH (with 16 addresses ) too to the location. (3) Data memory address space of 256B (use 8 addresses). Three above-mentioned memory space addresses overlap, for distinguishing and designing the order symbol of different data transmission in the instruction system of 8051: CPU visit slice, ROM order spend MOVC , visit block RAM order uses MOVX outside the slice, RAM order uses MOV to visit in slice.8051 one-chip computer have four 8 walk abreast I/O port, call P0, P1, P2 and P3. Each port is 8 accurate two-way mouths, accounts for 32 pins altogether. Every one I/O line can be used as introduction and exported independently. Each port includes a latch (namely special function register ), one exports the driver and a introduction buffer . Make data can latch when outputting, data can buffer when making introduction , but four function of passway these self-same.Expand among the system of memory outside having slice, four port these may serve as accurate two-way mouth of I/O in common use. Expand among the system of memory outside having slice, P2 mouth see high 8 address off; P0 mouth is a two-way bus, send the introduction of 8 low addresses and data / export in timesharingThe circuit of 8051 one-chip computers and four I/O ports is very ingenious in design. 
Familiar with I/O port logical circuit, not only help to use ports correctly and rationally, and will inspire to designing the peripheral logical circuit of one-chip computer to some extent. Load ability and interface of port have certain requirement, because output grade, P0 of mouth and P1 end output, P3 of mouth grade different at structure, so, the load ability and interface of its door demand to have nothing in common with each other. P0 mouth is different from other mouths, its output grade draws the resistance supremly. When using it as the mouth in common use to use, output grade is it leak circuit to turn on, is it is it urge NMOS draw the resistance on taking to be outer with it while inputting toEvery one with P0 mouth can drive 8 Model LS TTL load to export. P1 mouth is an accurate two-way mouth too, used as I/O in common use. Different from P0 mouth output of circuit its, draw load resistance link with power on inside have. In fact, the resistance is that two effects are in charge of FET and together: One FET is in charge of load, its resistance is regular. Another one can is it lead to work with close at two state, make its President resistance value change approximate 0 or group value heavy two situation very. When it is 0 that the resistance is approximate , can draw the pin to the high level fast ; When resistance value is very large, P1 mouth, in order to hinder the introduction state high. Output as P1 mouth high electricity at ordinary times, can is it draw electric current load to offer outwards, draw the resistance on needn't answer and thenning. Here when the port is used as introduction, must write into 1 to the corresponding latch first too, make FET end. Relatively about 20,000 ohmsbecause of the load resistance in scene and because 40,000 ohms, will not exert an influence on the data that are input. The structure of P2 some mouth is similar to P0 mouth, there are MUX switches. Is it similar to mouth partly to urge, but mouth large a conversion controls some than P1. P3 mouth one multi-functionalthese, make her besides accurate two-way function with P1 mouth just, can alsodetermines to be to output data of latch to output second signal of function. Act as W =At 1 o'clock, output Q end signal; Act as Q =At 1 o'clock, can output W line signal . At the time of programming, it is that the first function is still the second function but needn't have software that set up P3 mouth in advance . It hardware not inside is the automatic to have two function outputted when CPU carries on SFR and seeks the location (the location or the byte ) to visit to P3 mouth /at not lasting lining, there are inside hardware latch Qs =1.The operation principle of P3 mouth is similar to P1 mouth.Output grade , P3 of mouth , P1 of P1 , connect with inside have load resistance of drawing , every one of they can drive 4 Model LS TTL load to output. As while inputting the mouth, any TTL or NMOS circuit can drive P1 of 8051 one-chip computers as P3 mouth in a normal way . Because draw resistance on output grade of them have, can open a way collector too or drain-source resistance is it urge to open a way, do not need to have the resistance of drawing outerly . Mouths are all accurate two-way mouths too. When the conduct is input, must write the corresponding port latch with 1 first . 
As to 80C51 one-chip computer, port can only offer milliampere of output electric currents, is it output mouth go when urging one ordinary basing of transistor to regard as, should contact a resistance among the port and transistor base , in order to the electricity while restraining the high level from exporting P1~P3 Being restored to the throne is the operation of initializing of an one-chip computer. Its main function is to turn PC into 0000H initially , make theone-chip computer begin to hold the conduct procedure from unit 0000H. Except that the ones that enter the system are initialized normally,as because procedure operate it make mistakes or operate there aren't mistake, in order to extricate oneself from a predicament , need to be pressed and restored to the throne the key restarting too. It is an input end which is restored to the throne the signal in 8051 China RST pin. Restore to the throne signal high level effective , should sustain 24 shake cycle (namely 2 machine cycles ) the above its effective times. If 6 of frequency of utilization brilliant to shake, restore to the throne signal duration should exceed 4 delicate to finish restoring to the throne and operating. Produce the logic picture of circuit which is restored to the throne the signal:Restore to the throne the circuit and include two parts outside in the chip entirely. Outside that circuit produce to restore to the throne signal (RST ) hand over to Schmitt's trigger, restore to the throne circuit sample to output , Schmitt of trigger constantly in each S5P2 , machine of cycle in having one more , then just got and restored to the throne and operated the necessary signal insidly. Restore to the throne resistance of circuit generally, electric capacity parameter suitable for 6 brilliant to shake, can is it restore to the throne signal high level duration greater than 2 machine cycles to guarantee. Being restored to the throne in the circuit is simple, its function is very important. Pieces of one-chip computer system could normal running,should first check it can restore to the throne not succeeding. Checking and can pop one's head and monitor the pin with the oscillograph tentatively, push and is restored to the throne the key, the wave form that observes and has enough range is exported (instantaneous), can also through is it restore to the throne circuit group holding value carry on the experiment to change.MCS -51 系列单片机的功能和结构MCS - 51 系列单片机具有一个单芯片电脑的结构和功能,它是英特尔公司生产的系列产品的名称。
Foreign literature and translation - FPGA implementation of real-time adaptive image thresholding - other majors
FPGA Implementation of Real-Time Adaptive Image Thresholding. Elham Ashari, Department of Electrical and Computer Engineering, University of Waterloo; Richard Hornsey, Department of Computer Science and Engineering, York University. Abstract: This paper presents a general-purpose FPGA architecture for real-time thresholding. The hardware architecture is based on a weighted clustering algorithm whose focus is thresholding by clustering the foreground and background pixels. The method uses a binary weighted-clustering neural-network approach to find the centroids of the two pixel groups. The image threshold is the average of the two centroids. Because, for each input pixel, the nearest of the selected weights is updated, the technique is adaptive. The update is based on the difference between the input pixel's grey level and the associated weight, scaled by a learning-rate factor. The hardware system is implemented on an FPGA platform and contains two functional modules. The first module obtains the threshold for an image frame, and the other applies that threshold to the image frame. The parallelism of the two modules and the simplicity of the hardware components make the design suitable for real-time applications, and its performance is comparable to thresholding techniques commonly used off-line. Results of the algorithm are obtained from simulation and from FPGA experiments on numerous examples. The primary application of this work is determining the centroid of a laser spot, but other applications are also discussed.
Keywords: real-time thresholding, adaptive thresholding, FPGA implementation, neural networks.
1 Introduction. Image binarization is a major problem in image processing. To extract useful information from an image, we need to divide it into distinct parts (for example background and foreground) for more detailed analysis. In general, the grey levels of foreground pixels differ from those of the background. Good binarization algorithms already exist whose main goal is effectiveness in terms of performance rather than speed; however, for some applications, especially those built on custom hardware and real-time applications, speed is the most critical requirement. Fast, simple, easily implemented thresholding techniques are therefore widely used in practical imaging systems. For example, on-chip image processing combined with CMOS image sensors is common in a wide variety of imaging systems. In such systems, real-time processing of the image and of the information derived from it is essential. Application areas of real-time thresholding include robotics, automotive systems, target tracking and laser ranging. In laser ranging, that is, determining the range of a moving target, the captured image is binary.
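As a rough software illustration of the weighted two-centroid clustering described in the abstract above, the following Python sketch computes an adaptive threshold for one frame. The learning-rate value, the initial centroids and the function name are illustrative assumptions, not details taken from the paper, and the sketch does not reproduce the FPGA pipeline itself.

```python
import numpy as np

def adaptive_threshold(frame, lr=0.05):
    """Estimate a threshold as the mean of two cluster centroids.

    For each pixel, the nearer of the two centroids (background /
    foreground) is moved toward the pixel's grey level by a step
    proportional to their difference, scaled by the learning rate.
    """
    lo, hi = 64.0, 192.0              # assumed initial centroids
    for g in frame.ravel().astype(float):
        if abs(g - lo) <= abs(g - hi):
            lo += lr * (g - lo)       # update only the nearest weight
        else:
            hi += lr * (g - hi)
    return 0.5 * (lo + hi)            # threshold = average of centroids

# usage: estimate the threshold on one frame, then binarize it
frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
t = adaptive_threshold(frame)
binary = frame > t
```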
Automation major - foreign literature - English literature - foreign translation - PLC topics
1. Original text (photocopy). A: Fundamentals of Single-chip Microcomputer. The single-chip microcomputer is the culmination of both the development of the digital computer and the integrated circuit, arguably the two most significant inventions of the 20th century [1]. These two types of architecture are found in single-chip microcomputers. Some employ the split program/data memory of the Harvard architecture, shown in Fig. 3-5A-1; others follow the philosophy, widely adopted for general-purpose computers and microprocessors, of making no logical distinction between program and data memory, as in the Princeton architecture, shown in Fig. 3-5A-2. In general terms a single-chip microcomputer is characterized by the incorporation of all the units of a computer into a single device, as shown in Fig. 3-5A-3.
Research on Image Segmentation Techniques - Graduation Thesis
Undergraduate Graduation Thesis: Research on Image Segmentation Techniques (Survey on image segmentation). School: School of Electrical and Information Engineering; Class: Electronic Information Engineering 0601; June 2010.
Research on Image Segmentation Techniques. Abstract: Image segmentation is the first step of image analysis, the foundation of computer vision and an important component of image understanding; it is also a very important and very difficult problem in many fields such as image processing and pattern recognition. In image processing, existing segmentation methods inevitably produce errors, and these errors affect the results of subsequent processing and recognition. As an efficient, parallel, global search method, the genetic algorithm, with its inherent robustness, parallelism and adaptability, is well suited to optimization over large search spaces and has been widely applied in many disciplines and engineering fields. Its application in computer vision is also attracting growing attention, and it provides a new and effective approach to image segmentation. This thesis surveys the basic concepts of and research progress on genetic algorithms, concentrates on the principle and procedure of an image segmentation algorithm that combines the genetic algorithm with the maximum between-class variance (Otsu) criterion, and implements a simulation in MATLAB. Experimental results show that the genetic-algorithm-based maximum between-class variance method segments quickly, separates contour regions clearly and yields high segmentation quality, achieving the intended goal.
Keywords: image segmentation; genetic algorithm; threshold segmentation
Survey on image segmentation. Abstract: Image segmentation is the first step of image processing and the basis of computer vision. It is an important part of image understanding and a very important and difficult problem in the fields of image processing and pattern recognition. In image processing, existing segmentation methods inevitably produce errors, and these errors affect the results of image processing and recognition. This paper discusses the current use of genetic algorithms in image segmentation and gives the principles and process of genetic-algorithm-based image segmentation. It also describes the basic concepts of and research on genetic algorithms, emphasizes the algorithm based on the genetic algorithm and Otsu's method, and realizes a simulation in MATLAB. The experimental results show that this method works well in segmentation speed, in delineating contours and separate regions, and in segmentation quality, achieving the desired effect. The genetic algorithm (GA) is an efficient, parallel, global search method with inherent robustness, parallelism and self-adaptive characteristics. It is suitable for searching for the optimum in large search spaces and has been applied widely and successfully in many fields of study and engineering areas. In the computer vision field GAs are attracting increasing attention and provide a new and effective method for image segmentation.
Key words: image segmentation; genetic algorithm; image threshold segmentation
Contents: Chapter 1 Introduction (1.1 Background, purpose and significance of this research; 1.2 Current status and prospects; 1.3 Main work and organization of this thesis). Chapter 2 Basic theory of image segmentation (2.1 Basic concepts; 2.2 Architecture of image segmentation; 2.3 Classification of segmentation methods: 2.3.1 thresholding, 2.3.2 edge detection, 2.3.3 region extraction, 2.3.4 methods combined with specific theoretical tools; 2.4 Quality evaluation of segmentation). Chapter 3 Theory of genetic algorithms (3.1 Overview of applied research; 3.2 Development; 3.3 Basic concepts; 3.4 Basic flow; 3.5 Components: 3.5.1 encoding, 3.5.2 determining the initial population, 3.5.3 fitness function, 3.5.4 genetic operators, 3.5.5 control parameters; 3.6 Characteristics). Chapter 4 MATLAB (4.1 Introduction; 4.2 Main functions; 4.3 Technical features; 4.4 Genetic algorithm toolbox (Sheffield toolbox)). Chapter 5 Genetic-algorithm-based maximum between-class variance image segmentation (5.1 The maximum between-class variance method; 5.2 GA-based maximum between-class variance segmentation; 5.3 Flow chart; 5.4 Experimental results). Chapter 6 Summary and outlook (6.1 Summary of the work; 6.2 Outlook). Acknowledgements. References. Appendix.
Chapter 1 Introduction. 1.1 Background, purpose and significance of this research. Digital image processing is an interdisciplinary field.
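To make the maximum between-class variance (Otsu) criterion referred to in the thesis abstract above concrete, here is a minimal Python sketch that searches all grey levels for the threshold maximizing the between-class variance. The thesis pairs this criterion with a genetic algorithm to search for the optimum; the exhaustive search below is a simplification, and the function and variable names are illustrative assumptions.

```python
import numpy as np

def otsu_threshold(image):
    """Return the grey level that maximizes the between-class variance."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                  # grey-level probabilities
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()  # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0         # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# usage: segment an 8-bit image into foreground and background
img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
mask = img > otsu_threshold(img)
```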
MATLAB image processing - foreign literature translation
Appendix A: English original
Scene recognition for mine rescue robot localization based on vision. CUI Yi-an(崔益安), CAI Zi-xing(蔡自兴), WANG Lu(王璐). Abstract: A new scene recognition system is presented based on fuzzy logic and a hidden Markov model (HMM) that can be applied to mine rescue robot localization during emergencies. The system uses a monocular camera to acquire omni-directional images of the mine environment in which the robot is located. By adopting a center-surround difference method, salient local image regions are extracted from the images as natural landmarks. These landmarks are organized with the HMM to represent the scene where the robot is, and a fuzzy logic strategy is used to match the scene and landmarks. In this way the localization problem, which in this system is a scene recognition problem, can be converted into an HMM evaluation problem. These techniques give the system the ability to deal with changes in scale, 2D rotation and viewpoint. The experimental results also prove that the system achieves a high rate of recognition and localization in both static and dynamic mine environments. Key words: robot localization; scene recognition; salient image; matching strategy; fuzzy logic; hidden Markov model. 1 Introduction. Search and rescue in disaster areas is a burgeoning and challenging subject in robotics [1]. Mine rescue robots were developed to enter mines during emergencies, locate possible escape routes for those trapped inside, and determine whether it is safe for humans to enter. Localization is a fundamental problem in this field. Camera-based localization methods can be classified mainly into geometric, topological and hybrid ones [2]. With its feasibility and effectiveness, scene recognition has become one of the important technologies of topological localization. Currently most scene recognition methods are based on global image features and have two distinct stages: training offline and matching online.
Digital image processing thesis - Chinese-English bilingual foreign literature translation
Original text: Research on image edge detection algorithms. Abstract: Digital image processing is a relatively young discipline that, with the rapid development of computer technology, is finding wider application day by day. The edge is one of the basic features of an image and is widely used in fields such as pattern recognition, image segmentation, image enhancement and image compression. Edge detection methods are many and varied. Among them, brightness-based algorithms have been studied the longest and have the most mature theory; they mainly use difference operators to compute the gradient of the image brightness and thereby detect edges, the main operators being Roberts, Laplacian, Sobel, Canny and LoG.
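As a small illustration of the difference-operator approach described above, the following Python sketch applies the standard Sobel kernels to estimate the brightness gradient and mark edge pixels. The threshold value and the helper name are illustrative choices, not part of the original text.

```python
import numpy as np

def sobel_edges(img, thresh=100.0):
    """Mark pixels whose Sobel gradient magnitude exceeds a threshold."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                                  # vertical-gradient kernel
    img = img.astype(float)
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):                  # simple explicit convolution
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    magnitude = np.hypot(gx, gy)               # gradient magnitude
    return magnitude > thresh

# usage: detect edges in an 8-bit image
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
edges = sobel_edges(img)
```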
Electrical automation - single-chip microcomputer - foreign literature - English literature - foreign translation - Chinese-English bilingual
Single-chip1.The definition of a single-chipSingle-chip is an integrated on a single chip a complete computer system .Even though most of his features in a small chip,but it has a need to complete the majority of computer components:CPU,memory,internal and external bus system,most will have the Core.At the same time,such as integrated communication interfaces,timers,real-time clock and other peripheral equipment.And now the most powerful single-chip microcomputer system can even voice ,image,networking,input and output complex system integration on a single chip.Also known as single-chip MCU(Microcontroller),because it was first used in the field of industrial control.Only by the single-chip CPU chip developed from the dedicated processor. The design concept is the first by a large numberof peripherals and CPU in a single chip,the computer system so that smaller,more easily integrated into the complex and demanding on the volume control devices.INTEL the Z80 is one of the first design in accordance with the idea of the processor,From then on,the MCU and the development of a dedicated processor parted ways.Early single-chip 8-bit or all the four.One of the most successful is INTELs 8031,because the performance of a simple and reliable access to a lot of good praise.Since then in 8031to develop a single-chip microcomputer system MCS51 series.based on single-chip microcomputer system of the system is still widely used until now.As the field of industrial control requirements increase in the beginning of a 16-bit single-chip,but not ideal because the price has not been very widely used.After the90s with the big consumer electronics product development,single-chip technology is a huge improvement.INTEL i960 series with subsequent ARM in particular ,a broad range of application,quickly replaced by 32-bit single-chip 16-bit single-chip performance has been the rapid increase in processing power compared to the 80s to raise a few hundred times.At present,the high-end 32-bit single-chip frequency over 300MHz,the performance of the mid-90s close on the heels of a special processor,while the ordinary price of the model dropped to one U.S dollars,the most high-end models,only 10 U.S dollars.Contemporary single-chip microcomputer system is no longer only the bare-metal environment in the development and use of a large number of dedicated embedded operating system is widely used in the full range of single-chip microcomputer.In PDAs and cellphones as the coreprocessing of high-end single-chip or even a dedicated direct access to Windows and Linux operating systems.More than a dedicated single-chip processor suitable for embedded systems,so it was up to the application.In fact the number of single-chip is the worlds largest computer.Modern human life used in almost every piece of electronic and mechanical products will have a single-chip integration.Phone,telephone,calculator,home applicances,electronic toys,handheld computers and computer accessories such as a mouse in the Department are equipped with 1-2 single chip.And personal computers also have a large number of single-chip microcomputer in the workplace.Vehicles equipped with more than 40 Department of the general single-chip ,complex industrial control systems and even single-chip may have hundreds of work at the same time!SCM is not only far exceeds the number of PC and other integrated computing,even more than the number of human beings.2.single-chip introducedSingle-chip,also known as single-chip microcontroller,it is not the completion of a 
logic function of the chip,but a computer system integrated into a chip.Speaking in general terms: a single chip has become a computer .Its small size,light weight,cheap,for the learning,application and development of facilities provided .At the same time,learning to use the principle of single-chip computer to understand and structure the best choice.Single-chip and computer use is also similar to the module,such as CPU,memory,parallel bus, as well as the role and the same hard memory,is it different from the performance of these components are relatively weak in our home computer a lot,but the price is low ,there is generally no more than 10yuan,,can use it to make some control for a class of electrical work is not very complex is sufficient.We are using automatic drum washing machines, smoke hood,VCD and so on inside the home appliances can see its shadow! It is mainly as part of the core components of the control.It is an online real-time control computer,control-line is at the scene,we need to have a stronger anti-interference ability,low cost,and this is off-line computer(such as home PC)The main difference.By single-chip process,and can be amended.Through different procedures to achieve different functions,in particular the special unique features,this is the need to charge other devices can do a great effort,some of it is also difficult to make great efforts to do so .A function is not very complicated fi the United States the development of the 50s series of 74 or 60 during the CD4000series to get these pure hardware,the circuit must be a big PCB board !However,if the United States if the successful 70s seriesof single-chip market ,the result will be different!Simply because the adoption of single-chip preparation process you can achieve high intelligence,high efficiency and high reliability!Because of cost of single-chip is sensitive,so the dominant software or the lowest level assembly language,which is in addition to the lowest level for more than binary machine code of the language ,since such a low-level so why should we use ?Many of the seniors language has reached a level of visual programming why is it not in use ?The reason is simple ,that is,single-chip computer as there is no home of CPU,also not as hard as the mass storage device.A visualization of small high-level language program,even if there is only one button which will reach the size of dozens of K! For the home PCs hard drive is nothing,but in terms of the single-chip microcomputer is unacceptable.Single-chip in the utilization of hardware resources have to do very high ,so the compilation of the original while still in heavy use .The same token ,if the computer giants operating system and appplications run up to get the home PC,homePCcan not afford to sustain the same.It can be said that the twentieth century across the three “power”of the times,that is ,the electrical era,the electronic age and has now entered the computer age. However ,such a computer,usually refers to a personal computer,or PC.It consisits of the host ,keyboards,displays .And other components.There is also a type of computer,not how most people are familiar with . This computer is smart to give a variety of mechanical single-chip(also known as micro-controller).As the name suggests,these computer systems use only the minimum of an integrated circuit to make a simple calculation and control. Because of its small size,are usually charged with possession of machine in the “belly”in. 
It in the device,like the human mind plays a role, it is wrong,the entire device was paralyzed .Now,this single chip has a very wide field of use,such as smart meters,real-time industrial control,communications equipment,navigation systems,and household appliances. Once a variety of products with the use of the single-chip ,will be able to play so that the effectiveness of product upgrading,product names often adjective before the word “intelligent”,such as was hing machines and so intelligent.At present,some technical personnel of factories or other amateur electrtonics developers from engaging in certain products ,not the circuit is too complex ,that is functional and easy to be too simple imitation.The reason may be the product not on the cards or the use of single-chip programmable logic device on the other.3.single-chip historysingle-chip 70 was born in the late 20th century,experienced a SCM,MCU,SOC three stages.Single-chip micro-computer 1.SCM that(Single Chip Microcomputer)stage,is mainly a single from to find the best of the best embedded systems architecture.”Innovation model”to be successful,lay the SCM with the general-purpose computers,a completely different path of development . In embedded systems to create an independent development path,Intel Corporation credit.That is 2.MCU microcontroller(Micro Controller Unit)stage,the main direction of technology development: expanding to meet the embedded applications,the target system requirements for the various peripheral circuits and interface circuits,to highlingt the target of intelligent control.It covers all areas related with the objectSystem,therefore,the development of MCU inevitably fall on the heavy electrical,electronics manufacturers. From this point of view ,Intels development gradually MCU has its objective factors.MCU in the development ,the most famous manufacturers when the number of Philips Corporation.Philips in embedded applications for its enormous advantages,the MCS-51 from the rapid deveploment of single-chip micro-computer to the microcontroller.Therefore,when we look back at the path of development of embedded systems,Intel and Philips do not forget the historical merits.3.Single-chip is an independent embedded systems development,to the MCU an important factor in the development stage,is seeking applications to maximize the natural trend .With the mico-electronics technology,IC design,EDA tools development,based on the single-chip SOC design application systems will have greater development. Therefore,the understanding of single-chip micro-computer from a single ,monolithic single-chip microcontroller extends to applications.4.Single-chip applicationsAt present,single-chip microcomputer to infiltrate all areas of our lives,which is very difficult to find the area of almost no traces of single-chip microcomputer.Missile navigation equipment,aircraft control on a variety of instruments,compuer network communications and data transmission,industrial automation,real-time process control and data processing ,are widely used in a variety of smart IC card,limousine civilian security systems,video recorders,cameras,the control of automatic washing machines,as well as program-controllde toys,electronic pet,etc,which are inseparable from the single-chip microcomputer.Not to mention the field of robot automation ,intelligent instrumentation,medical equipment has been. 
Therefore,the single- chip learning ,development and application to a large number of computer applications and intelligent control of scientists,engineers.Single-chip widely used in instruments and meters,household appliances,medical equipment ,acrospace,specialized equipment and the intellingent management in areas such as process control,generally can be divided into the following areas:1.In the smart application of instrumentationSingle-chip with small size,low power consumption,control,and expansion flexibility , miniaturization and ease of sensors,can be realized,suchvoltage,power,frequency,humidity,temperature,flow,speed,thickness,angle,length,hardness,elemen t,measurement of physical pressure. SCM makes use of digital instrumentation,intelligence,miniaturization and functional than the use of electronic or digital circuitry even stronger.For example,precision measurement equipment(power meter,oscilloscope,and analyzer).2.In the industrial controlMCU can constitute a variety of control systems,data acquisition system.Such as factory assembly line of intelligent management ,intelligent control of the lift ,all kinds of alarm systems ,and computer networks constitute a secondary control system.3.In the applicationof household appliancesIt can be said that almost all home appliances are using the single-chip control,electric rice from favorable,washing machines,refrigerators,air conditioners,color TV and other audio video equipment,and then to the electronic weighing equipment,all kinds ,everywhere.4.On computer networks and communication applications in the field ofGenerally with the modern single-chip communication interface,can be easily carried out with computer carried out with computer data communications,computer networks and in inter-application communications equipment to provide an excellent material conditions,the communications equipment to provide an excellent material condition,from the mobile phone ,telephone , mini-program-controlled switchboards,buiding automated communications system call,the train wireless communications,and then you can see day-to-day work of mobile phones,Mobile communications,such as radios.5.Single-chip in the field of medical equipment applicationsSingle-chip microcomputer in medical devices have a wide range of purpose,such as medical ventilator,various analyzers,monitors,ultrasonic diagnostic equipment and hospital call systems.6.In a variety of large-scale electrical applications of modularSome special single-chip design to achieve a specific function to carry out a variety of modular circuitapplications,without requiring users to understand its internal structure.Integrated single-chip microcomputer such as music ,which seems to be simpleFunctions,a miniature electronic chip in a pure(as distinct from the principle of tape machine),would require a complex similar to the principle of the computer. Such as :music signal to digital form stored in memory(similar to ROM),read out by the microcontroller into analog music signal(similar to the sound card).In large circuits,modular applications that greatly reduces the size ,simplifying the circuit and reduce the damage,error rate ,but also to facilitate the replacement.In addition,single-chip microcomputer in the industrial,commercial,financial,scientific research ,education,defense aerospace and other fields have a wide range of uses.单片机1.单片机定义单片机是一种集成在电路芯片上的完整计算机系统。
Normalized cuts and image segmentation - translation
Normalized Cuts and Image Segmentation. Abstract: We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. This criterion measures both the total dissimilarity between the different groups and the total similarity within them. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize the criterion. We have applied this approach to static images and motion sequences, and the results are encouraging.
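For reference, the normalized-cut criterion and the generalized eigenvalue problem mentioned in the abstract take the following standard form in the usual graph-partitioning notation; this summary is added here for clarity and is not part of the translated text.

```latex
% Normalized cut between subsets A and B of the node set V, with edge weights w(u,v):
\mathrm{Ncut}(A,B)=\frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(A,V)}+\frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(B,V)},
\qquad
\mathrm{cut}(A,B)=\sum_{u\in A,\ v\in B} w(u,v),
\qquad
\mathrm{assoc}(A,V)=\sum_{u\in A,\ t\in V} w(u,t).
% Minimizing Ncut is approximated by solving the generalized eigensystem
(D-W)\,y=\lambda\,D\,y,
% where W is the affinity (similarity) matrix, D is the diagonal matrix with
% D(i,i)=\sum_j W(i,j), and the eigenvector with the second smallest eigenvalue
% is used to bipartition the graph.
```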
1 Introduction. Nearly 75 years ago, Wertheimer introduced the Gestalt approach, which laid out the importance of perceptual grouping and organization in visual perception. For our purposes, the grouping problem can be made concrete by considering the set of points shown in Figure 1.
Figure 1: How many groups? Typically a human observer will perceive four objects in this figure: a circular ring with a cloud of points inside it, and two loosely connected clumps of points on the right. However, this is not the only possible partitioning. Some may argue that there are three objects, since the two clumps on the right can be seen as one dumbbell-shaped object. Or there are only two objects: a dumbbell-shaped object on the right and a circular, galaxy-like structure on the left. If one were perverse, one could argue that in fact every point is a distinct object. This may seem like an artificial example, but every attempt at image segmentation faces a similar problem: there are many possible ways of partitioning an image domain D into subsets Di (including the extreme one of regarding each pixel as a separate entity). How do we pick the "right" one? We believe the Bayesian view is appropriate: one wants to find the most probable interpretation in the context of prior world knowledge. The difficulty, of course, lies in specifying that prior knowledge. Some of it is low level, such as coherence of brightness, color, texture or motion, but equally important is mid- or high-level knowledge about symmetries of objects or object models. This suggests that image segmentation based on low-level cues cannot and should not aim to produce a complete, final, "correct" segmentation. The objective should instead be to use the low-level coherence of brightness, color, texture or motion attributes to sequentially come up with hierarchical partitions. Mid- and high-level knowledge can then be used either to confirm these groups or to select some for further attention, and this attention may lead to further repartitioning or grouping. The key point is that image partitioning is to be done from the big picture downward, rather like a painter who first marks out the major areas and then fills in the details.
Digital Image Processing - English original version and translation
数字图象处理英文原版及翻译Digital Image Processing: English Original Version and TranslationIntroduction:Digital Image Processing is a field of study that focuses on the analysis and manipulation of digital images using computer algorithms. It involves various techniques and methods to enhance, modify, and extract information from images. In this document, we will provide an overview of the English original version and translation of digital image processing materials.English Original Version:The English original version of digital image processing is a comprehensive textbook written by Richard E. Woods and Rafael C. Gonzalez. It covers the fundamental concepts and principles of image processing, including image formation, image enhancement, image restoration, image segmentation, and image compression. The book also explores advanced topics such as image recognition, image understanding, and computer vision.The English original version consists of 14 chapters, each focusing on different aspects of digital image processing. It starts with an introduction to the field, explaining the basic concepts and terminology. The subsequent chapters delve into topics such as image transforms, image enhancement in the spatial domain, image enhancement in the frequency domain, image restoration, color image processing, and image compression.The book provides a theoretical foundation for digital image processing and is accompanied by numerous examples and illustrations to aid understanding. It also includes MATLAB codes and exercises to reinforce the concepts discussed in each chapter. The English original version is widely regarded as a comprehensive and authoritative reference in the field of digital image processing.Translation:The translation of the digital image processing textbook into another language is an essential task to make the knowledge and concepts accessible to a wider audience. The translation process involves converting the English original version into the target language while maintaining the accuracy and clarity of the content.To ensure a high-quality translation, it is crucial to select a professional translator with expertise in both the source language (English) and the target language. The translator should have a solid understanding of the subject matter and possess excellent language skills to convey the concepts accurately.During the translation process, the translator carefully reads and comprehends the English original version. They then analyze the text and identify any cultural or linguistic nuances that need to be considered while translating. The translator may consult subject matter experts or reference materials to ensure the accuracy of technical terms and concepts.The translation process involves several stages, including translation, editing, and proofreading. After the initial translation, the editor reviews the translated text to ensure its coherence, accuracy, and adherence to the target language's grammar and style. The proofreader then performs a final check to eliminate any errors or inconsistencies.It is important to note that the translation may require adapting certain examples, illustrations, or exercises to suit the target language and culture. This adaptation ensures that the translated version resonates with the local audience and facilitates better understanding of the concepts.Conclusion:Digital Image Processing: English Original Version and Translation provides a comprehensive overview of the field of digital image processing. 
The English original version, authored by Richard E. Woods and Rafael C. Gonzalez, serves as a valuable reference for understanding the fundamental concepts and techniques in image processing.The translation process plays a crucial role in making this knowledge accessible to non-English speakers. It involves careful selection of a professional translator, thoroughunderstanding of the subject matter, and meticulous translation, editing, and proofreading stages. The translated version aims to accurately convey the concepts while adapting to the target language and culture.By providing both the English original version and its translation, individuals from different linguistic backgrounds can benefit from the knowledge and advancements in digital image processing, fostering international collaboration and innovation in this field.。
3 - Electrical engineering and automation major - foreign literature - English literature - foreign translation
3-电气工程及其自动化专业外文文献英文文献外文翻译1、外文原文(复印件)A: Fundamentals of Single-chip MicrocomputerThe single-chip microcomputer is the culmination of both the development of the digital computer and the integrated circuit arguably the tow most significant inventions of the 20th century [1].These tow types of architecture are found in single-chip microcomputer. Some employ the split program/data memory of the Harvard architecture, shown in Fig.3-5A-1, others follow the philosophy, widely adapted for general-purpose computers and microprocessors, of making no logical distinction between program and data memory as in the Princeton architecture, shown in Fig.3-5A-2.In general terms a single-chip microcomputer is characterized by the incorporation of all the units of a computer into a single device, as shown in Fig3-5A-3.ProgramInput& memoryOutputCPU unitDatamemoryFig.3-5A-1 A Harvard typeInput&Output CPU memoryunitFig.3-5A-2. A conventional Princeton computerExternal Timer/ System Timing Counter clock componentsSerial I/OReset ROMPrarallelI/OInterrupts RAMCPUPowerFig3-5A-3. Principal features of a microcomputerRead only memory (ROM).ROM is usually for the permanent,non-volatile storage of an applications program .Many microcomputers and microcontrollers are intended for high-volume applications and hence the economical manufacture of the devices requires that the contents of the program memory be committed permanently during the manufacture of chips . Clearly, this implies a rigorous approach to ROM code development since changes cannot be made after manufacture .This development process may involve emulation using a sophisticated development system with a hardware emulation capability as well as the use of powerful software tools.Some manufacturers provide additional ROM options by including in their range devices with (or intended for use with) user programmablememory. The simplest of these is usually device which can operate in a microprocessor mode by using some of the input/output lines as an address and data bus for accessing external memory. This type of device can behave functionally as the single chip microcomputer from which itis derived albeit with restricted I/O and a modified external circuit. The use of these ROMlessdevices is common even in production circuits where the volume does not justify the development costs of custom on-chip ROM[2];there canstill be a significant saving in I/O and other chips compared to a conventional microprocessor based circuit. More exact replacement for ROM devices can be obtained in the form of variants with 'piggy-back' EPROM(Erasable programmable ROM )sockets or devices with EPROM instead of ROM 。
Literature Review on Image Segmentation
Literature review. Image segmentation is the technique and process of dividing an image into regions with distinct characteristics and extracting the targets of interest. It is the key step from image processing to image analysis and a fundamental computer vision technique. Image segmentation originated in the film industry. With the development of modern science and technology, image segmentation has found wide practical application, for example in industrial automation, online product inspection, production process control, document image processing, remote sensing and biomedical image analysis, as well as in military, sports and agricultural engineering. In short, almost any task involving feature extraction and measurement of target objects depends on image segmentation. Research on image segmentation has therefore always been a focus and a hot topic in image engineering.
Since image segmentation was first proposed, more than a thousand segmentation algorithms of various types have been put forward. Because there are so many algorithms, the ways of classifying them also differ. Based on the nature and level of the knowledge used, we divide them into two broad classes: data-driven and model-based. The former operates directly on the data of the current image; although it can use relevant prior information, it does not depend on knowledge. The latter is built directly on prior knowledge; such segmentation better matches the technical requirements of current image segmentation and is today's mainstream. Most data-driven segmentation algorithms are traditional ones; common examples include edge-detection-based methods, region-based methods, and methods combining edges and regions. These methods have the following drawbacks: (1) they are easily affected by noise and spurious edges, so the resulting boundaries are discontinuous and must be connected with special methods; (2) they extract only local image features and lack an effective constraint mechanism, so it is difficult to obtain global information about the image; (3) they use only low-level visual features, making it difficult to fuse prior information about the image into a high-level understanding mechanism. This is because traditional image processing algorithms all operate within the layered vision framework proposed by Marr of the MIT Artificial Intelligence Laboratory, in which the layers are mutually independent and processing proceeds strictly from low level to high level. Since there is no feedback between layers and data flow one way from bottom to top, high-level information cannot guide low-level feature extraction; as a result, low-level errors accumulate and cannot be corrected. Model-based segmentation methods can overcome these shortcomings. They can fuse useful information, such as prior knowledge of the segmentation target, into a high-level understanding mechanism and complete the segmentation task by modeling specific target objects in the image. This is a top-down process that organically combines the low-level visual features of the image with high-level information, and is therefore closer to human visual processing.
Paper on the application of image segmentation techniques in medical image processing
Research on the Application of Image Segmentation Techniques in Medical Image Processing. Abstract: Through a study of the application of image segmentation techniques in medical image processing, this paper develops an in-depth understanding of the theoretical foundations, application value, and advantages and disadvantages of various segmentation methods; it focuses on the application of deformable-model-based segmentation in medical image segmentation, examines the strengths and weaknesses of that approach, and proposes corresponding improved algorithms.
Keywords: image; segmentation methods. CLC number: TP399; Document code: A; Article ID: 1007-9599 (2011) 22-0000-01. Image Segmentation Technology Application Study in Medical Image Processing. Yang Jiaping (Wuxi Teachers' College, Wuxi 214000, China). Abstract: Through a study of the application of image segmentation techniques in medical image processing, this paper seeks a deep understanding of the theoretical foundations, application value, and strengths and weaknesses of various segmentation methods, with emphasis on deformable-model-based segmentation of medical images; it studies the merits and shortcomings of this method and puts forward corresponding improved algorithms. Keywords: image; segmentation method. With the rapid development of multimedia technology, medical imaging has become an important branch of modern medicine and an indispensable tool for diagnosis, treatment and research.