Advanced iterative algorithm for phase extraction of randomly phase-shifted interferograms
Translated paper: voltage flicker and fluctuation
Measurement of a power system nominal voltage, frequency and voltage flicker parameters
A.M. Alkandari, S.A. Soliman

Abstract: We present, in this paper, an approach for identifying the frequency and amplitude of the voltage flicker signal imposed on the nominal voltage signal, as well as the amplitude and frequency of the nominal signal itself. The proposed algorithm performs the estimation in two steps. In the first step, the original voltage signal is shifted forward and backward by an integer number of samples, one sample in this paper. The new signals generated by this shift, together with the original one, are used to estimate the amplitude of the original voltage signal, which is composed of the nominal voltage and the flicker voltage. The average of this amplitude gives the amplitude of the nominal voltage; this amplitude is subtracted from the original identified signal amplitude to obtain the samples of the flicker voltage. In the second step, the argument of the signal is calculated by simply dividing the magnitude of each signal sample by the amplitude estimated in the first step. Taking the arccosine of the argument, the frequency of the nominal signal as well as the phase angle can be computed using the least error squares estimation algorithm. Simulation examples are given within the text to show the features of the proposed approach.

Keywords: Power quality; Nominal frequency and amplitude measurements; Voltage flicker; Frequency and amplitude estimation; Forward and backward difference

1. Introduction

Voltage flicker and harmonics are introduced into a power system as a result of arc furnace operation, and power utilities are concerned about their effects. As such, an accurate model for the voltage flicker is needed. The definition of voltage flicker in IEEE standards is the "impression of fluctuating brightness or color, when the frequency of the observed variation lies between a few hertz and the fusion frequency of the image" [1]. The flicker phenomenon may be divided into two general categories, cyclic flicker and non-cyclic flicker. Cyclic flicker is repetitive and is caused by periodic voltage fluctuations due to the operation of loads such as spot welders, compressors, or arc welders. Non-cyclic flicker corresponds to occasional voltage fluctuations, such as the starting of large motors; some loads, such as arc furnaces, welders and ac choppers, cause both cyclic and non-cyclic flicker.

Over the past three decades, many digital algorithms have been developed and tested to measure power system frequency and rate of change of frequency. Ref. [2] presents the application of the continuous wavelet transform for power quality analysis. The transform appears to be reliable for detecting and measuring voltage sags, flicker and transients in power quality analysis. Ref. [3] pays attention to the fast Fourier transform and its pitfalls. A low-pass digital filter is used, and the effects of system voltage deviation on the voltage flicker measurements by direct FFT are studied. The DC component leakage effect on the flicker components in the spectrum analysis of the effective value of the voltage, and the windowing effect on the data acquisition of the voltage signal, are discussed as well. A digital flicker meter is proposed in Ref. [6] based on forward and inverse FFT and on filtering, in the frequency domain, for the implementation of the functional blocks simulating the lamp-eye-brain response. Refs.
[5-7] propose a method based on Kalman filtering algorithms to measure the low-frequency modulation of the 50/60 Hz signal. The method used in these references allows for random and deterministic variation of the modulation. The approach utilizes a combination of linear and non-linear Kalman filter models. Ref. [8] presents a method for direct calculation of the flicker level from digital measurements of voltage waveforms. The direct digital implementation uses the fast Fourier transform (FFT) as the first step in computation. A pruned FFT, customized for the flicker level computation, is also proposed. Presented in Ref. [9] is a static state estimation algorithm based on least absolute value error (LAV) for measurement of the voltage flicker level. The waveform of the voltage signal is assumed to have, for simplicity, one flicker component. This algorithm accurately estimates the nominal voltage waveform and the voltage flicker component. An application of the continuous wavelet transform (CWT) for analysis of voltage flicker-generated signals is proposed in Ref. [10]. With the time-frequency localization characteristics embedded in the wavelets, the time and frequency information of a waveform can be presented integrally.

Ref. [11] presents an arc furnace model implemented in the Simulink environment using chaotic and deterministic elements. This model is obtained by solving the corresponding differential equation, which yields the dynamic and multivalued v-i characteristics of the arc furnace load. In order to evaluate the flicker in the simulated arc furnace voltage, the IEC flicker meter is implemented based on the IEC 1000-4-15 standard in the Matlab environment. Ref. [12] presents an approach to estimate voltage flicker component magnitudes and frequencies based on Lp norms (p = 1, 2 and infinity) and Taylor series linearization. It has been found that it is possible to design an Lp estimator to identify flicker frequency and amplitude from time series measurements. The Teager energy operator (TEO) and the Hilbert transform (HT) are introduced in Ref. [13] as effective approaches for tracking voltage flicker levels. It has been found that TEO and HT are capable of tracking the amplitude variations of the voltage flicker and supply frequency in industrial systems with an average error of 3%.

Ref. [14] presents a control technique for flicker mitigation. This technique is based on instantaneous tracking of the measured voltage envelope. The ADALINE (ADAptive LINear) neuron algorithm and the recursive least squares (RLS) algorithm are introduced for the flicker envelope tracking. In Ref. [15], an algorithm for tracking the voltage envelope based on calculating the energy operator of a sinusoidal waveform is presented. It is assumed that the frequency of the sinusoidal waveform is known, and a lead-lag network with unity gain is used. Ref. [16] develops an enhanced method for estimating the voltage fluctuation (DV10) of the electric arc furnace (EAF). The proposed method considers the reactive power variation and also the active power variation in calculating the DV10 value of ac and dc EAFs.

Control and protection of power systems require accurate measurement of system frequency. A system operating at the nominal frequency, 50/60 Hz, indicates a balance in the active power, i.e. the power generated equals the demand power plus losses. An imbalance in the active power causes the frequency to change.
A frequency less than the nominal frequency means that the demand load plus losses is greater than the power generated, whereas a frequency greater than the nominal frequency means that the system generation is greater than the load demand plus losses. As such, frequency can be used as a measure of system power balance. Ref. [17] presents a numerical differentiation-based algorithm for high-accuracy, wide-range frequency estimation of power systems. The signal used by this algorithm may include up to 31st-order harmonic components. Ref. [18] presents a method for estimation of the power frequency and its rate of change. The proposed method accommodates the inherent non-linearity of the frequency estimation problem. The estimator is based on a quadrature phase-locked loop concept.

An approach for designing a digital algorithm for local system frequency estimation is presented in Ref. [19]. The algorithm is derived using the maximum likelihood method. A recursive Newton-type algorithm suitable for various measurement applications in power systems is developed in Ref. [20] and is used for power system frequency and spectra estimation. A precise digital algorithm based on discrete Fourier transforms (DFT) to estimate the frequency of a sinusoid with harmonics in real time is proposed in Ref. [21]. This algorithm, called the smart discrete Fourier transform (SDFT), avoids the errors due to frequency deviation and keeps all the advantages of the DFT. Ref. [22] presents an algorithm for frequency estimation from distorted signals. The proposed algorithm is based on the extended complex Kalman filter, which uses discrete values of a three-phase voltage transformed with the well-known alpha-beta transform. Using such a transformation, a non-linear state space formulation is obtained for the extended Kalman filter. This algorithm is iterative and complex, needs much computing time, and uses the three-phase voltage measurements to calculate the power system voltage frequency.

Ref. [23] describes the design, computational aspects and implementation aspects of a digital signal processing technique for measuring the operating frequency of a power system. It is suggested that this technique produces correct and noise-free estimates for near-nominal, nominal and off-nominal frequencies in about 25 ms, and it requires modest computation. The proposed technique uses per-phase digitized voltage samples and applies orthogonal FIR digital filters with the least error squares (LES) algorithm to extract the system frequency. Ref. [24] presents an iterative technique for measuring power system frequency to a resolution of 0.01-0.02 Hz for near-nominal, nominal and off-nominal frequencies in about 20 ms. The algorithm in this reference uses per-phase digitized voltage samples together with a FIR filter and the LES algorithm to extract the signal frequency iteratively. This algorithm has beneficial features including a fixed sampling rate, a fixed data window size and easy implementation. Refs. [25,26] present a new pair of orthogonal filters for phasor computation; the proposed technique accurately extracts the fundamental component of fault voltage and current signals. Ref. [27] describes an algorithm for power system frequency estimation. The algorithm applies orthogonal signal components obtained with the use of two orthogonal FIR filters. The essential property of the proposed algorithm is its outstanding immunity to variations in both the signal's orthogonal component magnitudes and the FIR filter gains.
Again, this algorithm uses the per-phase digitized voltage samples. Ref. [28] presents a method of measuring the power system frequency based on digital filtering and Prony's estimation. The discrete Fourier transform with a variable data window is used to filter out the noise and harmonics associated with the signal. The results obtained using this algorithm are more accurate than when applying the method based on the measurement of the angular velocity of the rotating voltage phasor. The response time of the proposed method equals three to four periods of the fundamental component. This method also uses per-phase digitized voltage samples to compute the system frequency from a harmonics-polluted voltage signal. Ref. [29] implements a digital technique for the evaluation of power system frequency. The algorithm is suitable for microprocessor implementation and uses only standard hardware. The algorithm works with any relative phase of the input signal and produces a new frequency estimate for every new input sample. This algorithm uses the orthogonal sine and cosine filtering algorithms.

A frequency relay, which is capable of under/over frequency and rate-of-change-of-frequency measurements using an instantaneous frequency-measuring algorithm, is presented in Ref. [30]. It has been shown that filtering the relay input signal could adversely affect the dynamic frequency evaluation response. Misleading frequency behavior is observed in this method, and an algorithm has been developed to improve this behavior. The under/over frequency function of the relay will cause it to operate within 30 ms. Digital state estimation is implemented to estimate the power system voltage amplitude, nominal frequency and its rate of change. The techniques employed for static state estimation are the least error squares technique [31-33] and the least absolute value technique [34-36], while linear and non-linear Kalman filtering algorithms are implemented for tracking the system operating frequency, rate of change of frequency and power system voltage magnitude from the harmonic-polluted environment of the system voltage at the relay location. Most of these techniques use per-phase digitized voltage samples and assume that the three-phase voltages are balanced and contain the same noise and harmonics, which is not the case in real time, especially in distribution systems, where different single-phase loads are supplied from different phases.

An approach for identifying the frequency and amplitude of the flicker signal imposed on the nominal voltage signal, as well as the amplitude and frequency of the nominal signal itself, is presented in this text. The proposed algorithm performs the estimation in two steps. In the first step, the original signal is shifted forward and backward by an integer number of samples, one sample in this paper. The signals generated by this shift, together with the original one, are used to estimate the amplitude of the original voltage signal, which is composed of the nominal voltage and the flicker voltage; the average of this amplitude gives the amplitude of the nominal voltage. This amplitude is subtracted from the original identified amplitude to obtain the samples of the flicker voltage. In the second step, the argument of the signal is calculated by simply dividing the magnitude of each signal sample by the amplitude estimated in step one.
Computing the arccosine of the argument, the frequency of the nominal signal as well as the phase angle can be calculated using the least error squares estimation algorithm. Simulation examples are given within the text to show the features of the proposed approach.

2. Flicker voltage identification

Generally speaking, the voltage during the time of flicker can be expressed as [2]

v(t) = A_0 [1 + sum_{i=1..M} A_i cos(w_fi t + phi_fi)] cos(w_0 t + phi_0),   (1)

where A_0 is the amplitude of the nominal power system voltage, w_0 is the nominal power frequency, and phi_0 is the nominal phase angle. Furthermore, A_i is the amplitude of the flicker voltage, w_fi its frequency, phi_fi its phase angle, and M is the expected number of flicker voltage components in the voltage waveform. This type of voltage signal is called an amplitude modulated (AM) signal.

2.1. Signal amplitude measurement

The first bracket in Eq. (1) is the amplitude of the signal, A(t), which can be written as

A(t) = A_0 [1 + sum_{i=1..M} A_i cos(w_fi t + phi_fi)].   (2)

As such, Eq. (1) can be rewritten as

v(t) = A(t) cos(w_0 t + phi_0).   (3)

Assume that the signal is given a forward and a backward shift by an angle equal to an integral number of sampling angles. Eq. (3) can then be written in the forward direction as Eq. (4) and in the backward direction as Eq. (5), where h is the shift angle given by Eq. (6); N is the number of samples required for the shift, f_0 is the signal frequency and m is the total number of samples over the data window size. Using Eqs. (4)-(6), one obtains an expression for the amplitude (Eq. (7)); the recursive form of this equation for A(k) is given by Eq. (8). Having identified the amplitude A(k), the amplitude of the nominal voltage signal of frequency w_0 can be calculated by taking the average over the complete data window,

A_0 = (1/m) sum_{k=1..m} A(k).   (9)

Having identified the power signal amplitude A_0, the flicker voltage components can then be determined from the difference between A(k) and A_0 (Eq. (10)). This voltage flicker signal can be written as a function of k*DT (Eq. (11)), where DT is the sampling time, the reciprocal of the sampling frequency.

2.2. Measurement of flicker frequency

Without loss of generality, we assume that the voltage flicker signal has only one component, i = 1 (Eq. (12)). To determine the flicker amplitude V_f1(k) and the frequency w_f1 from the available m samples, we may use the algorithm explained in Ref. [9]. The frequency is calculated from Eq. (13), while the amplitude can be calculated from Eq. (14). In these equations the first and second derivatives of the flicker signal appear; they can be calculated using the central, forward and backward differences [9].

2.3. Nominal voltage signal frequency and phase angle

The signal argument AR(k) is calculated from Eq. (16) and is modeled as a linear function of time (Eq. (17)). In this equation, w_0 and phi_0 are the two parameters to be estimated from the available m samples of the argument AR(k); at least two samples are required for such a linear estimation. Eq. (17) can be written, for m samples, as Eq. (19), which in vector form becomes

Z = H X + f,   (20)

where Z is the m x 1 measurement vector of argument samples, H is the m x 2 measurement matrix whose elements depend on the sampling time and sampling frequency, X is the 2 x 1 parameter vector to be estimated, and f is the m x 1 error vector to be minimized. The minimum of f, in the least error squares sense, occurs at

X_hat = (H'H)^(-1) H'Z.   (21)

These two parameters give the frequency and phase angle directly, in closed form, for the signal under study. For a practical approach, these formulas should not be sensitive to noise and harmonics. One way to reduce those sensitivities is to use the least error squares algorithm, as explained in Eq. (21), for the frequency estimation in this paper.
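To make step one concrete, the sketch below implements the amplitude and flicker extraction in Python/NumPy. The closed form used for A(k), A(k)^2 = (v(k)^2 - v(k+N) v(k-N)) / sin^2(h) with h = 2*pi*f0*N/fs, is one plausible reconstruction of Eqs. (6)-(8) based on the identity cos(theta+h)*cos(theta-h) = cos^2(theta) - sin^2(h); the paper's exact expressions are not reproduced here, and the 0.2 p.u., 5 Hz flicker in the test signal is an assumption for illustration.

```python
import numpy as np

def estimate_amplitude(v, f0, fs, N=1):
    """Step 1: estimate the envelope A(k), the nominal amplitude A0 and the
    flicker samples from forward/backward shifted copies of the signal.

    Assumed reconstruction of Eqs. (6)-(10): with h = 2*pi*f0*N/fs,
    v(k+N)*v(k-N) = A(k)^2 * (cos^2(theta_k) - sin^2(h)), so
    A(k)^2 = (v(k)^2 - v(k+N)*v(k-N)) / sin^2(h).
    """
    h = 2.0 * np.pi * f0 * N / fs                    # shift angle in radians
    vk, vf, vb = v[N:-N], v[2 * N:], v[:-2 * N]      # v(k), v(k+N), v(k-N)
    A = np.sqrt(np.maximum(vk**2 - vf * vb, 0.0)) / abs(np.sin(h))
    A0 = A.mean()                                    # Eq. (9): nominal amplitude
    flicker = A - A0                                 # Eq. (10): flicker samples
    return A, A0, flicker

# Illustrative test signal: 50 Hz carrier, 10 kHz sampling, 1000 samples,
# one assumed flicker component (0.2 p.u. at 5 Hz).
fs, f0 = 10_000.0, 50.0
t = np.arange(1000) / fs
v = (1.0 + 0.2 * np.cos(2 * np.pi * 5.0 * t)) * np.cos(2 * np.pi * f0 * t)
A, A0, flicker = estimate_amplitude(v, f0, fs)
print(round(A0, 3))   # nominal amplitude close to 1.0 p.u.
```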
In the following section we offer examples from the area of power system voltage flicker that can be considered as amplitude modulated signals.

3. Computer experiments

The above algorithm is tested using an amplitude modulated signal with one voltage flicker component. The signal is sampled at 10000 Hz and is given a forward shift and a backward shift by one sample, h = 7.2, and 1000 samples are used. The power system voltage (50 Hz signal) amplitude is estimated using the algorithm explained earlier, using Eqs. (8) and (9), and it has been found that the proposed algorithm estimates this amplitude with great accuracy; it is found to be A_0 = 1.0. Fig. 1 gives the actual voltage signal, the tracked signal and the voltage signal amplitude. Examining this figure reveals that the power voltage signal amplitude, at 50 Hz, is almost 1 p.u., the average value of A(t) as calculated using Eq. (9), and that the proposed technique tracks the actual signal exactly.

The flicker signal frequency is estimated using only 200 samples with Eq. (13). Fig. 2 gives the estimated flicker voltage frequency at each sampling step. Examining this figure reveals that the proposed algorithm estimates the flicker frequency with great accuracy. The spikes in these curves occur where the value of the voltage flicker signal at the sampling instant is very small; looking at Eq. (13), one can notice that to calculate the frequency we divide by this value. The algorithm also succeeded in estimating the flicker amplitude, except at the points of the spikes, as explained earlier for the frequency estimation.

Another example has been solved, where the voltage signal has two flicker components with different amplitudes and frequencies. The signal is sampled at 500 Hz and is given a forward shift and a backward shift by one sample, h = 7.2, and 500 samples are used. The nominal voltage amplitude is estimated using the technique explained earlier and has been found to be one per unit, and the tracking voltage obtained with this technique follows the signal exactly, as shown in Fig. 4.

4. Conclusions

An approach for identifying the frequency and amplitude of the flicker signal imposed on the nominal voltage signal, as well as the amplitude and frequency of the nominal signal itself, is presented in this paper. The proposed algorithm performs the estimation in two steps. In the first step, the original signal is shifted forward and backward by an integer number of samples, at least one sample. The new signals generated by this shift, together with the original one, are used to estimate the amplitude of the original voltage signal, which is composed of the nominal voltage and the flicker voltage. The average of this amplitude gives the amplitude of the nominal voltage; this amplitude is subtracted from the original identified amplitude to obtain the samples of the flicker voltage. The frequency of the flicker voltage is calculated using the forward and backward differences for the first and second derivatives of the voltage flicker signal. In the second step, the argument of the signal is calculated by simply dividing the magnitude of each signal sample by the amplitude estimated in step one. Taking the arccosine of the argument, the frequency of the nominal signal as well as the phase angle can be calculated using the least error squares estimation algorithm. Simulation examples are given.
It has been shown that the proposed algorithm succeeds in estimating the voltage flicker frequency and amplitude as well as the amplitude and frequency of the power voltage signal. The proposed algorithm can be used off-line as well as on-line. In the on-line mode we recommend the use of a digital lead-lag circuit, while in the off-line mode one simply shifts the registration counter one sample in the backward direction and another one in the forward direction to obtain the required samples of the data window.
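Step two of the procedure, the least error squares fit of the signal argument described in Section 2.3, can be sketched as follows. The fit is restricted here to a quarter-cycle of the carrier, where the arccosine is single-valued; this restriction, the test signal and the envelope formula repeat the assumptions of the previous listing and are not taken from the paper.

```python
import numpy as np

# Step 2 (sketch): least error squares fit of the signal argument, Eqs. (16)-(21).
fs, f0, N = 10_000.0, 50.0, 1
t_all = np.arange(1000) / fs
v = (1.0 + 0.2 * np.cos(2 * np.pi * 5.0 * t_all)) * np.cos(2 * np.pi * f0 * t_all)

h = 2 * np.pi * f0 * N / fs                            # assumed shift angle
A = np.sqrt(np.maximum(v[N:-N]**2 - v[2*N:] * v[:-2*N], 0.0)) / abs(np.sin(h))

AR = np.arccos(np.clip(v[N:-N] / A, -1.0, 1.0))        # argument samples, Eq. (16)
t = (np.arange(len(A)) + N) / fs                       # sample times of AR(k)
m = int(fs / f0 / 4)                                   # quarter of a carrier period
H = np.column_stack([t[:m], np.ones(m)])               # model AR = w0*t + phi0, Eq. (17)
x_hat, *_ = np.linalg.lstsq(H, AR[:m], rcond=None)     # least error squares, Eq. (21)
w0_hat, phi0_hat = x_hat
print(w0_hat / (2 * np.pi), phi0_hat)                  # ~50 Hz and ~0 rad
```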
Design of high-speed LDPC codes under Ethernet data-forwarding constraints
Design of high-speed LDPC codes under Ethernet data-forwarding constraints. 李霈霈; 周志刚; 那美丽. [Abstract] To flexibly support a variety of high-speed Ethernet interfaces, low-density parity-check (LDPC) coding is applied to Ethernet data forwarding, the conventional per-packet decoding is removed, and a parallel LDPC encoding architecture is proposed.
Taking into account the codeword-length constraints of the 1G to 100G Ethernet physical-layer coding, LDPC(192,120), LDPC(594,462) and LDPC(1188,990) codes were designed for the maximum per-lane rates of the 1G, 10G and 100G interfaces respectively, achieving low latency in the channel-coding processing.
Simulation results show that the constructed quasi-cyclic LDPC codes have good error-rate performance and that the overall processing delay is small (taking both encoding delay and decoding delay into account).
The LDPC encoding delay lies between 0.58 and 1.17 μs and the decoding delay between 3.20 and 4.26 μs, which meets the maximum per-lane coding and decoding rates of the different Ethernet interfaces.
This paper presents interface-aware low-density parity-check (LDPC) codes in a parallel encoding framework to support high-speed Ethernet data transmission and to remove the per-packet decoding process. Considering the encoded codeword-length constraints of the 1G to 100G Ethernet physical layers, LDPC(192,120), LDPC(594,462) and LDPC(1188,990) codes targeting the maximum lane rates of the 1G, 10G and 100G Ethernet interfaces were designed to achieve low latency in the channel-coding process. The simulation results show that the LDPC codes have excellent error performance and a small overall processing delay, taking both encoding delay and decoding delay into consideration. The encoding delay of 0.58-1.17 μs and the decoding delay of 3.20-4.26 μs can meet the maximum coding rate for the different lanes of the Ethernet interfaces.

Journal: Electronic Design Engineering (《电子设计工程》). Year (volume), issue: 2016 (024) 022. Pages: 4 (P1-4). Keywords: Ethernet interface; data forwarding; parallel encoding architecture; encoding delay; decoding delay. Authors: 李霈霈; 周志刚; 那美丽. Affiliation: Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 200050. Language: Chinese. CLC number: TN911.22.

Ethernet is the most widely used communication protocol standard in today's local area networks.
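As a point of reference for the encoding step, the sketch below shows generic systematic LDPC encoding with NumPy, using the (192,120) dimensions from the abstract; the generator matrix here is random and purely illustrative, and the paper's quasi-cyclic construction and parallel hardware architecture are not reproduced.

```python
import numpy as np

n, k = 192, 120                      # LDPC(192,120): n codeword bits, k message bits
rng = np.random.default_rng(0)

# Illustrative systematic generator G = [I_k | P]; a real QC-LDPC code would
# derive P from its circulant parity-check matrix rather than random bits.
P = rng.integers(0, 2, size=(k, n - k), dtype=np.uint8)
G = np.hstack([np.eye(k, dtype=np.uint8), P])

def encode(message: np.ndarray) -> np.ndarray:
    """Systematic encoding over GF(2): codeword = [message | message @ P]."""
    parity = (message.astype(int) @ P.astype(int)) % 2
    return np.concatenate([message, parity]).astype(np.uint8)

msg = rng.integers(0, 2, size=k, dtype=np.uint8)
cw = encode(msg)
assert cw.shape == (n,) and np.array_equal(cw[:k], msg)
```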
Comparison of phase extraction algorithms in the detection of phase defects by two-point interference
... and single-frame algorithms; among these, fixed-step algorithms are the most widely used because they are simple and fast. This paper therefore focuses on two representative fixed-step phase-shifting algorithms, the Hariharan five-frame method and the 13-frame method. Because fixed-step phase-shifting algorithms have only a limited ability to suppress phase-shift errors, an iterative random phase-shift algorithm based on least squares is also introduced here.
The formulas of the Hariharan five-frame method and the commonly used 13-frame method are given in [12]; the five-frame estimator is sketched in code below.
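The sketch below implements the standard Hariharan (Schwider-Hariharan) five-frame estimator in Python, assuming five interferograms recorded with nominal phase shifts of -π, -π/2, 0, π/2 and π; the 13-frame formula, which follows the same pattern with a longer weighted combination of frames, is not reproduced here, and the synthetic test data are made up for illustration.

```python
import numpy as np

def hariharan_five_frame(I1, I2, I3, I4, I5):
    """Wrapped phase from five frames with a nominal pi/2 step:
    phi = arctan[ 2*(I2 - I4) / (2*I3 - I1 - I5) ]."""
    return np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)

# Synthetic check: a known small phase map is recovered up to the 2*pi wrap.
x = np.linspace(0, 4 * np.pi, 256)
phi_true = 0.5 * np.sin(x)
frames = [1.0 + 0.8 * np.cos(phi_true + (m - 2) * np.pi / 2) for m in range(5)]
phi_est = hariharan_five_frame(*frames)
assert np.allclose(phi_est, phi_true, atol=1e-6)
```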
... to obtain the phase distribution. In the same year, 韩志刚 et al. [9] proposed, for broadband interferometers, an eight-step phase-shifting algorithm that is insensitive to envelope variation and phase-shift errors and allows on-line calibration and correction of the phase-shift amount. 卢丙辉 et al. [10] constructed, for linear phase-shift errors, a new five-frame algorithm that complements and corrects the Hariharan algorithm, greatly improving its ability to suppress linear errors.
2 System configuration
Inertial confinement fusion (ICF) uses spherical implosion compression: high-power, high-energy-density laser beams strike a spherical target capsule so that the deuterium-tritium fuel inside reaches ignition conditions and a self-sustaining thermonuclear reaction is formed. It is regarded as the most promising route to solving the future energy crisis [1]. The system contains many large-aperture optical elements, and phase defects are one of their common defects: their amplitude transmittance is uniform, but they modulate the phase of the light and cannot be detected by conventional optical inspection. Phase defects can cause light to converge and, in a high-power system, can damage the whole installation, so research on detecting phase defects in optical elements is essential. To this end, in 2013 F. L. Ravizza et al. at the Lawrence laboratory in the US proposed a detection method in which line-scanning phase-differential imaging (LPDI) [2] provides coarse localization of phase defects and phase-shifting diffraction interferometry (PSDI) provides the accurate solution. The method was successfully used for fast phase-defect inspection of thousands of large-aperture optical elements, meeting the need for fast inspection of large-aperture optics in the US NIF system and achieving very good results.
Comparison of phase extraction algorithms in testing of phase defects with two-point interference
Research on visual inspection algorithms for defects in textured objects -- outstanding graduation thesis
Abstract
In today's highly competitive, automated industrial production, machine vision plays a pivotal role in product quality control, and its application to defect inspection has become increasingly common. Compared with conventional inspection techniques, automated visual inspection systems are more economical, faster, more efficient and safer. Textured objects are ubiquitous in industrial production: substrates used for semiconductor assembly and packaging, light-emitting diodes, printed circuit boards in modern electronic systems, and cloth and fabrics in the textile industry can all be regarded as objects with textured surfaces. This thesis is devoted to defect inspection techniques for textured objects and aims to provide efficient and reliable algorithms for their automated inspection. Texture is an important feature for describing image content, and texture analysis has already been applied successfully to texture segmentation and texture classification. This work proposes a defect inspection algorithm based on texture analysis and a reference-comparison scheme. The algorithm tolerates image-registration errors caused by object distortion and is robust to the influence of texture. It is designed to provide rich and meaningful physical descriptions of the detected defect regions, such as their size, shape, brightness contrast and spatial distribution. Moreover, when a reference image is available, the algorithm can be applied to both homogeneous and non-homogeneous textured objects, and it also performs well on non-textured objects. Throughout the inspection process we adopt steerable-pyramid texture analysis and reconstruction. Unlike conventional wavelet-based texture analysis, we introduce a tolerance-control algorithm in the wavelet domain to handle object distortion and texture influence, so that the method tolerates object distortion and is robust to texture. Finally, the steerable-pyramid reconstruction guarantees that the physical meaning of the defect regions is recovered accurately. In the experimental stage we tested a series of images of practical application value; the results show that the proposed defect inspection algorithm for textured objects is efficient and easy to implement.
Keywords: defect detection, texture, object distortion, steerable pyramid, reconstruction
Process design for the separation of acrylonitrile and water by pressure-swing distillation
The total number of stages is 7, the feed stage is 5 and the reflux return stage is 3; the corresponding reflux ratio is 1.265.
Fig. 5 Optimization of the HPC column: (a) NLPC versus heat duty; (b) NHPC versus TAC
The flowsheet is optimized with a sequential iterative algorithm, taking minimum TAC as the objective. Before the simulation, the HPC is initially given 15 stages, a feed stage of 5 and a reflux ratio of 0.879. The numbers of theoretical stages (NLPC, NHPC) are set as the outermost loop variables, the reflux ratios (RR1, RR2) as the intermediate iteration variables, and the feed locations (NF1, NF2, NRE) as the innermost iteration variables. This setup is shown in Fig. 4, and a skeleton of the loop structure is sketched below.
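A minimal Python skeleton of this sequential iterative search is given below; the simulate_tac function standing in for the rigorous flowsheet evaluation (e.g. an Aspen Plus run) and the candidate ranges are hypothetical placeholders, not values from the paper.

```python
import itertools

def simulate_tac(nlpc, nhpc, rr1, rr2, nf1, nf2, nre):
    """Stand-in for the rigorous flowsheet evaluation that would return the TAC
    for one set of design variables; the toy expression below exists only so
    that the skeleton executes."""
    return ((nlpc - 7) ** 2 + (nhpc - 15) ** 2 +
            (rr1 - 1.265) ** 2 + (rr2 - 0.879) ** 2 +
            0.01 * (nf1 + nf2 + nre))

best_tac, best_design = float("inf"), None
# Outer loop: stage numbers; middle loop: reflux ratios; inner loop: feed and
# recycle locations - mirroring the sequential iterative scheme of Fig. 4.
for nlpc, nhpc in itertools.product((6, 7, 8), (14, 15, 16)):
    for rr1, rr2 in itertools.product((1.0, 1.265, 1.5), (0.7, 0.879, 1.0)):
        for nf1, nf2, nre in itertools.product(range(3, 6), range(4, 7), range(2, 5)):
            tac = simulate_tac(nlpc, nhpc, rr1, rr2, nf1, nf2, nre)
            if tac < best_tac:
                best_tac, best_design = tac, (nlpc, nhpc, rr1, rr2, nf1, nf2, nre)
print(best_design)
```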
Fig. 3 Flowsheet of the pressure-swing distillation process for the acrylonitrile-water system
Total annual cost (TAC), a concept proposed by DOUGLAS [11], is a key indicator of the economic performance of a chemical process and has been widely used as the criterion for evaluating process economics in chemical simulation and optimization. The TAC formula used in this paper is taken from the work of WANG [12] and CUI [13]. The total annual cost consists of two parts, the energy cost and the capital investment cost; its calculation follows the common form sketched below.
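A minimal sketch of the usual TAC formula is given below; the three-year payback period and the example cost figures are assumptions chosen for illustration, not values taken from the paper.

```python
def total_annual_cost(energy_cost_per_year: float,
                      capital_investment: float,
                      payback_period_years: float = 3.0) -> float:
    """TAC = annual energy (operating) cost + capital cost annualized over
    an assumed payback period (three years is a common choice after Douglas)."""
    return energy_cost_per_year + capital_investment / payback_period_years

# Illustrative numbers only: 1.2e5 $/year energy cost, 9.0e5 $ capital cost.
print(total_annual_cost(1.2e5, 9.0e5))   # -> 420000.0
```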
Research on frequency estimation algorithms for sinusoidal signals
Research on frequency estimation algorithms for sinusoidal signals. 何一鸣; 张刚兵; 钱显毅. [Abstract] To accurately estimate the frequency of a sinusoidal signal in noise, a frequency estimation algorithm for sinusoidal signals of arbitrary length is studied. A number of samples equal to an integer power of two is taken, and the frequency of the truncated sequence is estimated with a spectral-line interpolation algorithm. From the relationship among the signal frequency, the frequency resolution and the index of the largest spectral line, the index of the largest-magnitude line of the discrete Fourier transform (DFT) of the original signal is determined, and spectral-line interpolation then solves the frequency estimation problem for sinusoidal signals of arbitrary length. Simulation results show that the performance of the algorithm does not vary with the signal frequency, is stable over the whole frequency band, and approaches the Cramer-Rao lower bound for frequency estimation of sinusoidal signals.

To estimate the frequency of sinusoidal signals in noise accurately, an algorithm of frequency estimation for sinusoidal signals of arbitrary length is researched. The lengths of the samples are taken as integer powers of 2 and the frequency of the truncated sequences is estimated based on the interpolation algorithm. The corresponding index of the maximum amplitude of the spectrum of the original signal is given according to the relationship among the frequency, the frequency resolution and the index of the spectral line with the biggest amplitude. The frequency estimation for sinusoidal signals of arbitrary length is solved by using interpolation. Simulation results show that the performance of the proposed algorithm does not vary with the change of signal frequency and attains the Cramer-Rao lower bound for sinusoidal signals.

Journal: Journal of Nanjing University of Science and Technology (Natural Science Edition). Year (volume), issue: 2012 (036) 006. Pages: 4 (P989-992). Keywords: sinusoidal signal; frequency estimation; spectral-line interpolation; Cramer-Rao lower bound. Authors: 何一鸣; 张刚兵; 钱显毅. Affiliation: School of Electronic Information and Electrical Engineering, Changzhou Institute of Technology, Changzhou 231002, Jiangsu. Language: Chinese. CLC number: TN911.

Accurately estimating the frequency of a sinusoidal signal in noise is one of the active topics in signal processing. It is widely used in communications, radar, electronic warfare and biomedical signal processing, and many authors have studied frequency estimation of sinusoidal signals in Gaussian noise [1-12].
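The abstract does not reproduce the interpolation formula itself; as a stand-in, the sketch below applies one common two-point DFT interpolator (Jacobsen's estimator) to a power-of-two block, which illustrates the general idea of correcting the index of the largest spectral line by a fractional offset. The window length, noise level and test frequency are assumptions.

```python
import numpy as np

def interp_freq_estimate(x, fs):
    """Estimate a single tone's frequency from a power-of-two block by
    correcting the peak DFT bin with Jacobsen's two-point interpolator:
    delta = Re[(X[k-1] - X[k+1]) / (2*X[k] - X[k-1] - X[k+1])]."""
    n = len(x)
    X = np.fft.rfft(x)
    k = int(np.argmax(np.abs(X[1:-1]))) + 1           # peak bin (skip DC and Nyquist)
    delta = np.real((X[k - 1] - X[k + 1]) / (2 * X[k] - X[k - 1] - X[k + 1]))
    return (k + delta) * fs / n

# Assumed test case: 1500.3 Hz tone in white noise, 4096-sample (2^12) block.
rng = np.random.default_rng(1)
fs, f_true, n = 48_000.0, 1_500.3, 4096
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f_true * t) + 0.05 * rng.standard_normal(n)
print(interp_freq_estimate(x, fs))    # close to 1500.3 Hz
```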
The iterative refinement mechanism
Iterative refinement is an optimization or improvement scheme in which a solution is repeatedly iterated on and corrected until it meets a predefined criterion or goal.
Its basic workflow is: first evaluate an initial set of solutions, then in each iteration modify and improve them according to the evaluation results, until the predefined goal is reached.
In practice, iterative refinement can be applied to a wide range of problems, in areas such as optimization algorithms, machine learning and computer vision.
Common iterative optimization algorithms include stochastic gradient descent and the conjugate gradient method.
The advantage of iterative refinement is that it improves the solution step by step and keeps approaching the optimum, while also offering good robustness and scalability.
The drawback is that it requires many iterations and computations, so it consumes considerable time and computing resources.
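As one concrete instance of the evaluate-and-correct loop described above, the sketch below applies classical iterative refinement to a linear system Ax = b: the residual is evaluated, a correction is solved for, and the solution is updated until a tolerance is met. This is only an illustration of the general cycle, not a specific algorithm from the text.

```python
import numpy as np

def iterative_refinement(A, b, tol=1e-12, max_iter=20):
    """Evaluate-correct loop: x <- x + A^{-1} (b - A x) until the residual
    norm falls below tol or the iteration budget is exhausted."""
    x = np.linalg.solve(A, b)                 # initial solution
    for _ in range(max_iter):
        r = b - A @ x                         # evaluate: residual of the current solution
        if np.linalg.norm(r) < tol:           # stopping criterion reached
            break
        x = x + np.linalg.solve(A, r)         # correct: refine using the residual
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(iterative_refinement(A, b))             # ~ [0.0909, 0.6364]
```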
A phase-shift calculation method with built-in error estimation
A phase-shift calculation method with built-in error estimation. 刘乾; 袁道成; 何建国. [Abstract] For calibrating the phase shifter of a phase-shifting interferometer, an iterative method for calculating the phase shifts from the interferograms is proposed. The method has two steps: first, assuming the phase shifts are known, a three-unknown least-squares system is built to compute the phase; then, assuming the phase is known, a two-unknown least-squares system is built to compute the phase shifts. At the same time, based on trigonometric relationships and a traversal principle, a parameter is constructed to estimate the error of the calculated phase shifts. The effectiveness of the proposed method is verified by computer simulation and experiment. Simulations show that the proposed method is more accurate than existing algorithms and that the error estimate deviates from the actual calculation error by less than 15%. Verification experiments were carried out on a Fizeau interferometer, with two capacitive displacement sensors measuring the displacement of the mirror mount. The calculated results agree very well with the capacitive-sensor measurements, with a maximum deviation of only 0.7 nm. In addition, the error estimate obtained with the proposed method is 0.52 nm, showing that the deviation between the measured and calculated results lies within the estimated error. The proposed method extracts the phase shifts with high accuracy and also provides their calculation error, making it a simple and reliable way to calibrate phase shifters.

Journal: Optics and Precision Engineering. Year (volume), issue: 2016 (034) 010. Pages: 7 (P2565-2571). Keywords: phase-shifting interferometer; phase shifter; calibration; error estimation; iterative algorithm. Authors: 刘乾; 袁道成; 何建国. Affiliations: Institute of Machinery Manufacturing Technology, China Academy of Engineering Physics, Mianyang 621999, Sichuan; School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an 710049, Shaanxi. Language: Chinese. CLC number: TH744.3.

The phase-shifting interferometer has become an important instrument for wavefront measurement because of its high resolution and good repeatability [1].
The phase-shifting process lets the interferometer acquire multiple interferogram frames and compute the phase at every pixel, thereby achieving high spatial resolution.
During measurement the phase shift should equal its preset value, otherwise wavefront measurement errors result; for example, in classical four-step phase shifting, a phase-shift error of 10 nm leads to a wavefront error of 20 nm [2].
However, phase shifting based on piezoelectric ceramics [3] as well as on wavelength tuning [4] exhibits a certain degree of non-linearity.
High-accuracy calibration of the phase shifter of a phase-shifting interferometer is therefore required [5].
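To make the two-step iteration concrete, the sketch below implements a standard advanced-iterative-algorithm (AIA) style loop in NumPy: with the phase shifts fixed, a per-pixel least-squares fit recovers the phase; with the phase fixed, a per-frame least-squares fit recovers the phase shifts; the two steps alternate until the shifts settle. This is the generic formulation; the paper's specific two-unknown second step and its error-estimation parameter are not reproduced, and the test data are synthetic.

```python
import numpy as np

def iterative_phase_shift_extraction(frames, deltas0, n_iter=30):
    """Alternating least-squares estimation of the pixel phase phi and the
    frame phase shifts delta from interferograms I_j = a + b*cos(phi + delta_j)."""
    I = np.stack([f.ravel() for f in frames])            # shape (frames, pixels)
    deltas = np.asarray(deltas0, dtype=float).copy()
    for _ in range(n_iter):
        # Step 1: shifts fixed -> per-pixel fit of [a, b*cos(phi), b*sin(phi)]
        H = np.column_stack([np.ones_like(deltas), np.cos(deltas), np.sin(deltas)])
        coef, *_ = np.linalg.lstsq(H, I, rcond=None)
        phi = np.arctan2(-coef[2], coef[1])               # cos(phi+d) = cos d cos phi - sin d sin phi
        # Step 2: phase fixed -> per-frame fit of [a, b*cos(delta_j), b*sin(delta_j)]
        G = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
        cfrm, *_ = np.linalg.lstsq(G, I.T, rcond=None)
        deltas = np.arctan2(-cfrm[2], cfrm[1])
        deltas = np.mod(deltas - deltas[0], 2 * np.pi)    # pin the first shift to zero
    return phi.reshape(frames[0].shape), deltas

# Synthetic test: four frames with non-uniform (mis-set) phase shifts.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
phi_true = 2 * np.pi * (x**2 + y**2)
true_deltas = np.array([0.0, 1.45, 3.20, 4.65])           # nominal pi/2 steps with errors
frames = [1.0 + 0.7 * np.cos(phi_true + d) for d in true_deltas]
phi_est, deltas_est = iterative_phase_shift_extraction(frames, [0, np.pi/2, np.pi, 3*np.pi/2])
print(np.round(deltas_est, 3))                             # close to [0, 1.45, 3.20, 4.65]
```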
UG_TMG
NX TMG Thermal Analysis

NX TMG Thermal Analysis (TMG) is completely integrated within I-deas NX Series, enabling you to carry out sophisticated thermal analysis as part of a collaborative engineering process. TMG enables 3D part modeling to be used as the foundation for thermal analysis by enabling you to efficiently create and fully associate FE models with abstracted analysis geometry. All of the thermal design attributes and operating conditions can be applied as history-supported entities on 3D model geometry.

TMG Thermal Analysis incorporates sophisticated technologies for the efficient solution of element-based thermal models. A rigorous control volume scheme computes accurate conductive terms for even highly skewed meshes. Radiative heat transfer is computed using an innovative combination of radiosity and ray-tracing techniques; hemicube technology and sparse matrix solvers enable the code to easily solve very large radiative models. For analysis of assemblies, TMG provides powerful tools to connect disjoint meshes of parts and components. TMG offers outstanding model solution technology: a state-of-the-art biconjugate gradient solver delivers exceptional speed, reliability and precision.

The TMG user interface is icon-driven and forms-based, minimizing learning time and enhancing productivity. Units can be selected on individual forms for each entry field. Context-sensitive online help is always only a click away. The solution process is highly automated and fully integrated, which means that no additional input files are required and all analysis is carried out in a single pass. Thermal results are directly available for loading structural models and can be mapped onto a different mesh. These features, combined with a variety of interfaces and customization options, make TMG Thermal Analysis an ideal solution for any engineering environment.

Comprehensive thermal modeling tools
Thermal problems in mechanical and electronic systems are often difficult to detect and resolve because of the complex effects of convection or radiation. TMG provides a broad range of tools to model these effects and leading-edge simulation technology to get fast and accurate results.

NX TMG Thermal Analysis: Fast and accurate solutions to complex thermal problems (NX fact sheet)

Features
- Use advanced numerical techniques to model nonlinear and transient heat transfer processes
- Use 3D part modeling as the foundation for thermal analysis, creating and associating FE models with abstracted geometry
- Use sophisticated technologies for the efficient solution of element-based thermal models
- Perform analysis of assemblies with tools that connect disjointed meshes
- Employ thermal results that are directly available for loading structural models and can be mapped onto a different mesh

Benefits
- Perform accurate thermal analysis quickly and efficiently
- Perform integrated thermal analysis as part of a collaborative engineering process
- Minimize learning time and enhance productivity via an icon-driven, forms-based interface
- Use a highly automated and integrated solution process that requires no additional input files and carries out analysis in a single pass

Summary
NX TMG Thermal Analysis (TMG) within I-deas NX Series provides rapid and accurate thermal modeling and simulation. Augmenting the capabilities of NX MasterFEM, TMG makes it easy to model nonlinear and transient heat transfer processes including conduction, radiation, free and forced convection, fluid flow, and phase change. Leading-edge solver technology provides solid reliability and superior solution speed for
even the most challenging problems. With TMG, accurate thermal analysis can be performed quickly and effectively, delivering the engineering insight and turnaround speed needed to ensure success within today's rapid development cycles.
Linear n-point camera pose determination
Camera pose determination from redundant data has also been developed. However, most such methods rely either on iterative techniques or on applying the closed-form solutions to minimal subsets of the redundant data. Iterative methods suffer from the problems of
An introduction to coherent diffractive imaging and phase retrieval algorithms
My toy programs: phase retrieval; ptychography -- using fft2 and ifft2 in place of the forward and backward propagation of light.
===================================================
If you want to understand the behaviour of light at the electromagnetic level, the results are of course more detailed and more precise.
The opening chapters of the following thesis seem to give a more complete overview of CDI: "Research on methods of recovering phase from coherent-light intensity".
===================================================
This post mainly introduces coherent diffractive imaging (CDI) and the various phase retrieval algorithms used in it.
To understand CDI in depth, you need to learn some geometrical optics, Fourier optics, functional analysis and optimization theory.
And to actually build something, fairly proficient programming skills are indispensable.
The traditional way to measure microscopic structure is generally to obtain a diffraction pattern with a TEM and then infer the crystal structure of the sample.
What can CDI do? CDI also images with a TEM, but it extends and strengthens the approach above; done well, the resolution it can reach far exceeds that of the traditional method.
A CDI experimental setup is roughly: coherent light source -> aperture -> sample -> CCD camera. The aperture has a hole in it, so that only part of the sample is illuminated.
The incident light becomes transmitted light after passing through the sample, and the transmitted light diffracts after propagating a sufficiently long distance.
The CCD camera records the intensity of the diffraction pattern.
The coherent light source produces coherent light; this light illuminates the sample, so it can be called the incident light.
After passing through the sample, the incident light becomes the transmitted light.
What is known: (1) the shape and size of the aperture, and (2) the intensity of the diffraction pattern recorded by the CCD.
Because the CCD camera can only record the intensity of the diffraction pattern and cannot record its phase information, the phase is lost.
This problem is therefore called the phase retrieval problem.
The goal is to use a phase retrieval algorithm to recover an image of the illuminated region of the sample.
(Strictly speaking, the phase is lost only because the detector that records the diffraction pattern, or wavefront, cannot capture it.)
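Along the lines of the toy programs mentioned above, here is a minimal error-reduction (Gerchberg-Saxton-style) phase retrieval sketch that uses fft2/ifft2 as the forward and backward propagation, with the known aperture serving as the support constraint; the object, aperture size and iteration count are made up for illustration.

```python
import numpy as np

def error_reduction(measured_amplitude, support, n_iter=500, seed=0):
    """Alternate between the Fourier-magnitude constraint (replace the modulus
    with the measured one) and the support constraint (zero outside the aperture)."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(measured_amplitude.shape))
    field = np.fft.ifft2(measured_amplitude * phase)       # random-phase initial guess
    for _ in range(n_iter):
        F = np.fft.fft2(field)                              # forward propagation
        F = measured_amplitude * np.exp(1j * np.angle(F))   # impose measured modulus
        field = np.fft.ifft2(F)                             # backward propagation
        field = field * support                             # impose the aperture support
    return field

# Synthetic test object inside a small circular "illuminated" region.
n = 128
yy, xx = np.mgrid[:n, :n]
support = (xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 6) ** 2
obj = support * (1.0 + 0.5 * np.sin(0.3 * xx) * np.cos(0.2 * yy))
measured = np.abs(np.fft.fft2(obj))                          # recorded amplitude
recon = error_reduction(measured, support)
print(np.abs(recon).max())
```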
Time-window deconvolution (English notes)
反褶积分时窗英语Deconvolution is a widely used technique in signal processing and image analysis to recover the original signal or image from a degraded or convolved version. It is particularly useful in various fields such as astronomy, medical imaging, and telecommunications. In this article, we will explore the concept of deconvolution, its applications, and the methods used to perform it.Deconvolution is essentially the inverse operation of convolution. Convolution is a mathematical operation that combines two functions to produce a third function. In the context of signal processing, convolution is used to model the effect of a system on a signal. For example, in image processing, an image can be convolved with a blur kernel to simulate the blurring effect caused by the optics of a camera.The goal of deconvolution is to estimate the original signal or image that has been convolved with a known or unknown kernel. This is often referred to as the restoration problem. The restoration problem is ill-posed, meaning that there are an infinite number of solutions that can produce the same convolved result. Therefore, additional constraints or prior knowledge about the signal or image must be incorporated to obtain a unique solution.There are several methods commonly used for deconvolution. One of the most popular methods is the Wiener deconvolution. The Wiener deconvolution is based on the assumption that the original signal or image and the convolution kernel are both corrupted by additive noise. It aims to minimize the mean squared error between the estimated signal or image and the true signal or image.Another commonly used method is the Richardson-Lucy algorithm. The Richardson-Lucy algorithm is an iterative algorithm that estimates the original signal or image by alternating between estimating the blur kernel and the original signal or image. This iterative process continues until convergence is reached.In addition to these methods, there are also other advanced deconvolution techniques such as blind deconvolution, which aims to estimate both the original signal or image and the blur kernel without any prior knowledge. Blind deconvolution is particularly challenging as it requires solving an underdetermined system of equations.Deconvolution has numerous applications in various fields. In astronomy, deconvolution is used to enhance the resolution of telescopes and remove the blurring effect caused by the Earth's atmosphere. In medical imaging, deconvolution is used to improve the quality of images obtained from medical scanners and remove noise and artifacts. In telecommunications, deconvolution is used to mitigate the effects of channel distortion and improve the quality of transmitted signals.In conclusion, deconvolution is a powerful technique for recovering the original signal or image from a convolved or degraded version. It has a wide range of applications in fields such as astronomy, medical imaging, and telecommunications. Various methods, such as the Wiener deconvolution and the Richardson-Lucy algorithm, are used to perform deconvolution. The choice of method depends on the specific problem and the available information. With further advancements in computational power and algorithms, deconvolution techniques are expected to continue to improve and find new applications in the future.。
PRIZM 2020 lifestyle segmentation system technical documentation
Technical DocumentationPRIZM 2020April 2020Creating the PRIZM segmentation system involved more than a year of planning and development. The system categorizes every Canadian neighbourhood and postal code into one of 67 distinct lifestyle segments based on the characteristics of households. PRIZM examined five years of neighbourhood formation, change and, in numerous cases, stability to better understand the demographic, socioeconomic and behavioural characteristics of consumers.Every edition of PRIZM is developed by a team of experts familiar with the demographics and geography of Canada at both the city and regional levels. The system is built from the ground up using authoritative data from recognized suppliers like Statistics Canada, Canada Post, Canada Revenue Agency, Equifax, TomTom, Environics Research and others.The vast majority of targeting and marketing in Canada is still done based on age, sex and income. Examples of typical target groups using these characteristics are:•Women aged 18-34 for fashion (clothing)•Baby Boomers aged 55-70 who travel frequently•Households with income over $200,000 for prestige vehiclesGeodemographic systems permit refinement of this typical targeting method. Small neighbourhoods are assigned to household segments based on similarities of demographic attributes and general lifestyle behaviour. This approach is a well-established method for segmenting and identifying target groups for numerous products and industries.DATAThe primary foundation for any geodemographic system is high-quality demographic data. Determining which variables to include in the final clustering is a matter of science and art. The science element captures variables that are significant in their ability to differentiate neighbourhoods in ways that are important for marketing. The art component falls into two categories: a) understanding that users of the final system expect specific variables to be included, and b) determining the final selection of weights for each variable. When going to market, customer expectation and demand also play an important role. At the same time, there is a need to respect and adhere to thorough scientific methods. Successfully balancing these many requirements makes a product credible and effective.The number of variables in the final clustering should be kept to a minimum (for scientific parsimony) yet, at the same time, must include all of the important demographic characteristics. There is no known acceptable number of variables to include in clustering; it is up to the experienced analyst to select the optimal set of variables and experiment with weights to find the optimal mix.Creating PRIZM begins with CensusPlus, a database derived from Statistics Canada’s census, which has been enhanced by our modellers to fill in missing values. The core data are available at the dissemination area (DA) level, the smallest unit of geography for which any significant demographic and socioeconomic data are released. There are 56,590 DAs in Canada. CensusPlus contains about 850 variables for each of these DAs covering numerous themes.We take CensusPlus and combine it with the latest vintage of DemoStats, a proprietary database that reflects our estimates of current-year demographics and socioeconomic characteristics at the neighbourhoodlevel1, to select the final demographic and socioeconomic variables. 
The variables are categorized in the following 18 themes:Age Dwelling ValueHousehold Size Mother Tongue/Home LanguageMarital Status Ethnic OriginHouseholds with Children Visible MinorityMigration Aboriginal IdentityImmigration EducationDwelling Type Labour Force/Occupation/Work PlaceDwelling Tenure Mode of TransportDwelling Period of Construction IncomeIn addition to the core demographic and socioeconomic data, other data are used as basic ingredients of PRIZM. Data describing the settlement context—the geographic location of neighbourhoods—are fundamental to understanding where the resulting segments are situated geographically. Are the segments predominantly found in large urban centres, small suburban towns or sparse rural communities? Proximity to major retail centres is another measure we use to classify established urban cores differently from suburban, town and rural neighbourhoods.Another important source of input data comes from our SocialValues database, which is derived from data supplied by our sister company, Environics Research. Every year, Environics Research conducts a nationwide survey that measures human motivation and social relations, employing advanced techniques to understand the mindset of Canadians. From these data, they create “social constructs” that identify common trends in views and attitudes among Canadians.In addition, we use aggregated, anonymized small-area credit data from Equifax Canada. These data measure credit worthiness, credit usage and credit default rates. Additional variables capturing the financial theme are also included in creating PRIZM.From these data sources, we select more than 80 variables, including at least one variable for each of the above themes.ProcessAt the outset, our analysts determined that the PRIZM segmentation system was going to consist of three different products. The segments had to be available for populated residential DAs and postal codes, plus they had to operate at two levels: One with a conventional number of segments between 60 and 70 and a more detailed version with 150 segments, which we call Delta. This approach was divided into three stages. Stage OneThe initial phase of stage one involved creating a set of variables that captured the settlement context of the DAs. Settlement context is a scaled measure of urbanity, from the dense urban core of large cities to the most 1 For more information on the development of DemoStats 2020, refer to the DemoStats technical document available at .sparse, uninhabited rural parts of the countryside. These are key variables that serve in the initial segmentation process.The next phase involved assessing and selecting CensusPlus and DemoStats variables from the more than 1,400 variables available for the creation of the atoms. We selected variables that we know from experience to be significant for differentiation among the DAs.In the final step we selected the clustering algorithm for creating the atoms. Numerous algorithms are available for cluster analysis and the method used is critical to the success of PRIZM. Based on our research, we selected the K-medians algorithm, an iterative partitioning approach commonly used for these types of applications. The greatest strength of this algorithm is its ability to find similar patterns that maximize within-segment uniformity while differentiating between all the identified segments. 
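For readers unfamiliar with the algorithm, a minimal K-medians sketch is shown below: points are assigned to the nearest centre by L1 (Manhattan) distance and each centre is updated to the coordinate-wise median of its members. This is a generic illustration on synthetic data, not PRIZM's actual implementation or variable weighting.

```python
import numpy as np

def k_medians(X, k, n_iter=50, seed=0):
    """Iterative partitioning: assign by L1 distance, update with the
    coordinate-wise median (less sensitive to outliers than K-means)."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = np.abs(X[:, None, :] - centres[None, :, :]).sum(axis=2)   # L1 distances
        labels = d.argmin(axis=1)
        new_centres = np.array([
            np.median(X[labels == j], axis=0) if np.any(labels == j) else centres[j]
            for j in range(k)
        ])
        if np.allclose(new_centres, centres):
            break
        centres = new_centres
    return labels, centres

# Synthetic "neighbourhood profiles": three well-separated groups in 5 dimensions.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, 5)) for c in (0.0, 2.0, 4.0)])
labels, centres = k_medians(X, k=3)
print(np.round(centres, 1))
```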
Additionally, the resulting segments are not as influenced by extreme values (“outliers”) as many other traditional methods.To determine the best segmentation solution, we tested thousands of weighted data combinations. Every run was informed by the previous one, and with each subsequent run, adjustments to variables and variable weights were made. The runs that offered the greatest differentiation between segments were examined and systematically tested. The best solution offered the greatest discrimination of segments against actual consumer behaviour (more on this later). Finally, we produced what we consider to be an optimal 150-atom segment solution.Stage TwoThe focus of this stage was to link SocialValues data to the 150 atoms, and then aggregate them to create a system between 60 and 70 segments. We resolved to look for fewer than 70 segments, to make the system more manageable and maintain the greatest differentiation between segments. Using our estimated demographic and socioeconomic data, along with settlement context, financial credit data and SocialValues, the atoms from the DA system were aggregated using several clustering algorithms.Our analysts identified many segment solutions by applying different weights to a variety of variable subsets. In reviewing the solutions that were automatically generated through the clustering processes, 67 segments offered the greatest variety in neighbourhood and SocialValues types, while meeting our minimum cluster size as represented by the number of Canadian households assigned to it (0.50 percent of Canadian households). The 150 atoms nest perfectly within the final 67 segments.Testing segmentsWe tested each solution using a variety of data supplied by several of our data partners as well as with an analysis of key products and services. From the thousands of clustering runs, three solutions emerged as leading contenders.Key survey providers for the testing exercise were:•Vividata•Numeris•AskingCanadians™ (Social Media, Mobile and eShopper surveys)•IHS Markit™•Select client data from different industriesMore than 3,000 variables were selected from the various surveys covering such topics as category and product usage, media preferences, leisure activities and attitudes. The survey data were aggregated to the 67 segments and each variable was compared to i ts Canadian average. A review of each segment’s demographics, socioeconomics, settlement context and behavioural survey variables served as the method for analyzing and comparing the numerous cluster solutions.These solutions were tested against one another to help identify the single best segmentation system solution. In reviewing the solutions, we examined the following for each of the 67 segments:1)Demographic reports with more than 400 variables summarized at the segment level showing percentof a segment having the attribute and an index showing its relation to the Canadian average.2)Geographic maps showing the segment distribution and whether the segment was concentrated in afew markets or dispersed across the country.3)Survey reports with selected variables indexed against Canada for each segment on a large selectionof category, product, behavioural and attitudinal variables.Stage ThreeWith the 67 segments finalized at the DA-neighbourhood level, the next task was to assign all residential postal codes to the final solution of 150 atoms and its 67 segments. 
This stage involved combining DemoStats data with the Equifax credit data, all at the postal code level.A set of demographic and socioeconomic variables were selected from the more than 500 available in DemoStats. Added into the mix were the settlement context data that were assigned to all postal codes (based on the DA they fall within) and a small set of variables from the Equifax data. This information was assembled for the complete roster of residential postal codes.We then created 150 cluster centroids, the statistics reflecting multi-dimensional segment profiles—the basic building blocks of segments—using the atoms created at the DA-neighbourhood level. In addition to these DA-level centroids, we developed a new version using only postal code-level data. Several versions of the centroids were created and tested to ensure they captured the fundamental characteristics that describe each atom at the DA level. Did family-based segments have the correct ages of children? Were culturally diverse segments showing high concentrations of the relevant groups? Were urban segments found in urban areas and rural segments in rural areas?We selected the centroid that depicted the 150 atoms the best. Following this process, all postal codes were assigned to the closest atom based on statistical proximity to guarantee the optimal assignment for all selected variables.In addition, there were a few final manual adjustments to the automated cluster solutions. These adjustments were made to preserve, as much as possible, the settlement context structure of Urban, Urban Fringe, Suburban, Town and Rural. Other important considerations in the clean-up phase ensured that the wealthiest segments were captured appropriately, that key culturally diverse segments were identified correctly as speaking dialects of Chinese and South Asian languages, and that francophone segments had a minimum number of French-speaking populations.ResultPRIZM consists of a whole new set of geodemographic segments for Canada reflecting the most recent and reliable data. There are 67 segments, made up of 150 atoms, which capture all dimensions of the Canadian landscape. PRIZM is available for both DAs and postal codes.Socioeconomic Status Indicator (SESI)With the final segmentation system created, we had to decide how to number and rank the segments. A proprietary score was developed to characterize each segment using a Socioeconomic Status Indicator (SESI). This SESI score reflects a variety of factors such as average household income, discretionary income, education attainment, the value of private dwellings, average net worth and household size.As a result, a blue-collar, high school-educated segment whose residents earn above-average incomes may rank lower on the SESI ladder than an educated, up-and-coming younger segment whose residents have average household incomes. A segment with an older population, many of whom are on fixed incomes, may rise in the ranking if their net worth is significant. And a segment earning $120,000 on average will rise or fall in the ranking, depending on whether the household is composed of dual-income couples or families with several young children.The 67 segments have been ranked from 01 to 67 on the SESI scale, with 01 classified as the highest. 
Because this ranking reflects more than income alone, most of the segments have a SESI score that differs from their average-household-income ranking.

Social and Lifestage Groups
The 67 PRIZM segments were assigned to one of 20 Social Groups and 8 Lifestage Groups. The Social Groups consider the urban-rural context, home language (English, French and non-official), affluence, family status, age of maintainer and ethnicity. Each segment was assigned to one, and only one, Social Group. The Social Groups reflect various groupings, patterns and trends. A critical issue concerned the urban-rural dimension, which is neither linear nor one-dimensional. Each segment was assigned to one of five settlement types to form the Social Groups: Urban, Urban Fringe, Suburban, Town or Rural. Urban Fringe segments reflect once-suburban areas that, over the last 30 years, have been absorbed by urban sprawl. In general, urban segments are found in large- and medium-sized cities. Suburban segments tend to consist of communities located on the outskirts of cities and can often be found in the core neighbourhoods of smaller cities and larger towns. Town neighbourhoods are found in smaller towns across the country. Rural neighbourhoods reflect areas that are smaller than towns and include very small towns, villages, hamlets, rural farms and isolated areas.

The final segmentation solution features many francophone-based segments, a variety of culturally diverse segments and many segments that represent important combinations of age, lifestage and family status, from young singles living on their own up to widowed seniors in apartments. These were essential inputs into the creation of the Social Groups. The ranking of Social Groups is based on average income (not a SESI ranking). Groups have a letter and number combination. The letters U, F, S, T and R stand for Urban, Urban Fringe, Suburban, Town and Rural, while the numbers refer to income, with 1 indicating the highest average income for the group and 7 the lowest.

The Lifestage Groups categorize household composition according to the presence of singles, couples and families. The major grouping divides the 67 segments into Young, Family and Mature. These groups are then further subdivided by analyzing the commonality among the segments. The Young group is divided into three subgroups according to the presence of singles, couples or starter families. Families are split into three sets based on the age of children: the very young, tweens, teens and twenty-somethings. The Mature group is divided into two based on the age of maintainers and the presence of children at home.

Annual Update
Each year, when a new edition of DemoStats is completed, PRIZM will be updated. The update will reflect the most recent estimates of Canada's demographic and socioeconomic characteristics, along with updated SocialValues data and financial credit data from Equifax. Both DA and postal code PRIZM assignments will be reassessed and updated.
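The SESI ranking described above is a composite index built from several segment-level statistics. The exact weights are proprietary and not given in this document, so the sketch below simply standardizes each listed factor and averages the z-scores before ranking; it shows the general shape of such a calculation, not the actual SESI formula.

```python
import numpy as np

# Factors named in the text; weights and directions are unknown, so an
# unweighted z-score composite is used purely for illustration.
FACTORS = ["avg_household_income", "discretionary_income", "education",
           "dwelling_value", "avg_net_worth", "household_size"]

def sesi_rank(segment_table):
    """segment_table: dict mapping factor name -> length-67 array of values.
    Returns ranks 1..67, where 1 is the highest composite score."""
    z_scores = []
    for factor in FACTORS:
        v = np.asarray(segment_table[factor], dtype=float)
        z_scores.append((v - v.mean()) / (v.std() + 1e-12))
    composite = np.mean(z_scores, axis=0)
    order = np.argsort(-composite)          # descending composite score
    ranks = np.empty(len(order), dtype=int)
    ranks[order] = np.arange(1, len(order) + 1)
    return ranks
```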
Effect of iterative reconstruction algorithms on image quality in prospective coronary CT angiography performed with free heart rate, individualized scanning and iodinated contrast injection
·Young Scholars Forum·
金絫 高轶奕 孙英丽 高盼 赵伟 张渌恺 姜安琪 李铭
[Abstract] Objective: To evaluate the effect of sinogram-affirmed iterative reconstruction (SAFIRE) and advanced modeled iterative reconstruction (ADMIRE) on image quality in prospective ECG-gated coronary CT angiography (CCTA) performed with free heart rate, individualized scan parameters and an individualized iodinated contrast injection protocol, and to recommend the most appropriate iteration strength.
Methods: Fifty patients with suspected coronary artery disease who underwent CCTA at Huadong Hospital affiliated to Fudan University between August 2017 and January 2018 were enrolled. Patients numbered 1 to 30 were examined on a second-generation dual-source CT scanner with a prospectively ECG-triggered sequential scan mode (Flash group), and patients 31 to 50 on a third-generation dual-source CT scanner with the same prospectively ECG-triggered sequential scan mode (Force group).
Scan parameters were adjusted according to each patient's BMI, and the iodinated contrast injection protocol was adjusted according to body surface area (BSA).
Flash-group images were reconstructed with filtered back projection (Flash FBP) and with SAFIRE at iteration strengths 3 and 5 (SAFIRE 3, SAFIRE 5); Force-group images were reconstructed with filtered back projection (Force FBP) and with ADMIRE at iteration strengths 3 and 5 (ADMIRE 3, ADMIRE 5).
For each reconstruction, the CT value of the aortic root (AO), image noise, AO signal-to-noise ratio (SNR AO) and AO contrast-to-noise ratio (CNR AO) were compared, together with the contrast-to-noise ratio (CNR) of the coronary segments: proximal left anterior descending artery (LAD P), distal left anterior descending artery (LAD D), proximal left circumflex artery (LCX P), distal left circumflex artery (LCX D), proximal right coronary artery (RCA P) and distal right coronary artery (RCA D). Two radiologists scored the 15 coronary segment branches in a double-blind fashion, and kappa analysis was used to assess the consistency of their scores.
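The abstract does not state how SNR and CNR were computed. The conventional definitions used in CCTA image-quality studies, given here only as an assumption about what was measured, are:

```latex
\mathrm{SNR_{AO}} = \frac{\overline{\mathrm{CT}}_{\mathrm{AO}}}{\mathrm{SD}_{\mathrm{AO}}},
\qquad
\mathrm{CNR} = \frac{\overline{\mathrm{CT}}_{\mathrm{vessel}} - \overline{\mathrm{CT}}_{\mathrm{fat}}}{\mathrm{SD}_{\mathrm{AO}}}
```

where \(\overline{\mathrm{CT}}_{\mathrm{AO}}\) is the mean attenuation in an aortic-root region of interest, \(\mathrm{SD}_{\mathrm{AO}}\) is the standard deviation in that region (taken as image noise), and \(\overline{\mathrm{CT}}_{\mathrm{fat}}\) is the mean attenuation of adjacent perivascular fat.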
Results: All patients completed the CCTA examination successfully, and the image quality of every examination met diagnostic requirements.
The BSA of the Flash group was significantly larger than that of the Force group (P < 0.01), and the heart rate of the Force group was significantly slower than that of the Flash group (P < 0.01); the contrast injection rate of the Flash group was significantly higher than that of the Force group (both P < 0.01), and the contrast dose was significantly larger than that of the Force group (P < 0.01).
A gradient iterative algorithm for the coupled matrix equations \(AX + XB = C\) and \(DX + X^{T}E = F\)
A gradient iterative algorithm for the coupled matrix equations \(AX + XB = C\) and \(DX + X^{T}E = F\)
Authors: 张华民; 殷红彩
Journal: 安庆师范学院学报(自然科学版) (Journal of Anqing Normal University, Natural Science Edition), 2013, 19(2)
Abstract: By applying the hierarchical identification principle and extending the gradient iterative algorithm for the matrix equation \(AX = b\), this paper presents a gradient iterative algorithm for solving the coupled matrix equations \(AX + XB = C\) and \(DX + X^{T}E = F\).
The analysis shows that, as long as the matrix equations have a unique solution, the iterative solutions produced by this algorithm converge rapidly to the exact solution for any initial value.
A numerical example demonstrates the effectiveness of the algorithm.
This paper presents a gradient iterative algorithm for solving the coupled matrix equations \(AX + XB = C\) and \(DX + X^{T}E = F\) by using the hierarchical identification principle and extending the iterative algorithm for the matrix equation \(AX = b\). The analysis shows that if the matrix equations have a unique solution, then the iterative solutions converge fast to the exact one for any initial value. A numerical example demonstrates the effectiveness of the proposed algorithm.
Pages: 5 (12-16)
Author affiliations: 张华民, 蚌埠学院数理系, Bengbu, Anhui 233030; 殷红彩, 安徽财经大学管理科学与工程学院, Bengbu, Anhui 233000
Language of publication: Chinese
Chinese Library Classification: O241.6
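The gradient iteration itself is not reproduced in this excerpt. The sketch below is a simplified steepest-descent variant of the idea, minimizing the joint Frobenius-norm residual of the two equations with an exact line-search step; it is written for square n-by-n matrices and is not necessarily the exact update rule derived in the paper.

```python
import numpy as np

def coupled_gradient_solve(A, B, C, D, E, F, tol=1e-10, max_iter=10000):
    """Steepest-descent sketch for the coupled matrix equations
        A X + X B = C   and   D X + X^T E = F   (all matrices n x n),
    minimizing ||C - AX - XB||_F^2 + ||F - DX - X^T E||_F^2."""
    n = A.shape[0]
    X = np.zeros((n, n))                       # arbitrary initial value
    for _ in range(max_iter):
        R1 = C - A @ X - X @ B                 # residual of the first equation
        R2 = F - D @ X - X.T @ E               # residual of the second equation
        # Negative gradient (descent direction) of the joint objective.
        G = A.T @ R1 + R1 @ B.T + D.T @ R2 + E @ R2.T
        if np.linalg.norm(G) < tol:
            break
        # Exact line search along G.
        P1 = A @ G + G @ B
        P2 = D @ G + G.T @ E
        mu = np.linalg.norm(G)**2 / (np.linalg.norm(P1)**2 + np.linalg.norm(P2)**2)
        X = X + mu * G
    return X

# Small self-check against a known solution (illustrative only).
rng = np.random.default_rng(1)
n = 4
A, B, D, E = (rng.standard_normal((n, n)) for _ in range(4))
X_true = rng.standard_normal((n, n))
C = A @ X_true + X_true @ B
F = D @ X_true + X_true.T @ E
X_est = coupled_gradient_solve(A, B, C, D, E, F)
print(np.linalg.norm(X_est - X_true))
```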
AI agent processing flow
Artificial intelligence (AI) agents are revolutionizing the way we process information and make decisions in various fields. These advanced systems are capable of analyzing vast amounts of data at incredible speeds, helping us to automate tasks and improve overall efficiency. The processing flow of an AI agent involves several steps that are crucial for its successful functioning.
The first step in an AI agent's processing flow is data collection. This involves gathering relevant information from various sources, such as databases, sensors or online services. The quality and quantity of the collected data are crucial for the performance of the AI agent, as it relies on this data to make informed decisions and predictions. Data collection also involves cleaning and preprocessing the data to ensure its accuracy and reliability.
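As a concrete illustration of this collection-and-preprocessing stage, the sketch below pulls records from a set of source callables and applies a minimal cleaning pass. All names (Record, collect, preprocess, sensor_feed) are hypothetical; a real agent pipeline would add validation, deduplication and error handling appropriate to its sources.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Record:
    source: str
    payload: Dict

def collect(sources: List[Callable[[], List[Dict]]]) -> List[Record]:
    """Gather raw records from several source callables
    (stand-ins for databases, sensors or web APIs)."""
    records = []
    for fetch in sources:
        for item in fetch():
            records.append(Record(source=fetch.__name__, payload=item))
    return records

def preprocess(records: List[Record]) -> List[Record]:
    """Drop empty payloads and strip whitespace from string fields,
    a minimal stand-in for real cleaning and validation."""
    cleaned = []
    for r in records:
        if not r.payload:
            continue
        payload = {k: v.strip() if isinstance(v, str) else v
                   for k, v in r.payload.items()}
        cleaned.append(Record(r.source, payload))
    return cleaned

# Example: one fake source feeding the first two stages of the pipeline.
def sensor_feed() -> List[Dict]:
    return [{"temperature": " 21.5 "}, {}]

print(preprocess(collect([sensor_feed])))
```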
Advanced Techniques for GA-based sequential ATPGs
Advanced Techniques for GA-based Sequential ATPGs
F. Corno, P. Prinetto, M. Rebaudengo, M. Sonza Reorda (Politecnico di Torino, Torino, Italy); R. Mosca (CSP, Centro Supercalcolo Piemonte, Torino, Italy)
* This work has been partially supported by the European Union through the ESPRIT PCI project #9452 94 204 70 PETREL. Contact address: Paolo Prinetto, Dipartimento di Automatica e Informatica, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Torino (Italy), e-mail Paolo.Prinetto@polito.it

Abstract*
Genetic Algorithms have recently been investigated as an efficient approach to test generation for synchronous sequential circuits. In this paper we propose a set of techniques which significantly improves the performance of the GA-based ATPG algorithm proposed in [PRSR94]: in particular, the new techniques enhance the capability of the algorithm in terms of test length minimization and fault excitation. We report some experimental results gathered with a prototypical tool and show that a well-tuned GA-based ATPG is generally superior to both symbolic and topological ones in terms of achieved Fault Coverage and required CPU time.

1. Introduction
Different approaches have been proposed to solve the problem of Automatic Test Pattern Generation (ATPG) for synchronous sequential circuits.

The topological approach [NiPa91] is based on extending to sequential circuits the branch-and-bound techniques developed for combinational circuits by adopting Huffman's Iterative Array Model. The method's effectiveness heavily relies on the heuristics adopted to guide the search; the approach is based on a complete algorithm, but it often fails when applied to large circuits, where the search space is too large to explore.

The symbolic approach [CHSo93] exploits techniques for Boolean function representation and manipulation which were initially developed for formal verification; this approach is based on a complete algorithm, too, and is very effective when small- and medium-sized circuits are considered. Unfortunately, it is completely inapplicable when circuits with more than some tens of Flip-Flops are dealt with. This greatly limits its usefulness in practice.

Finally, the simulation-based approach [ACAg88] consists in generating random sequences, simulating them, and then modifying their characteristics in order to increase the obtained fault coverage. In the last few years, several methods [SSAb92] [RPGN94] [PRSR94] have been proposed which combine this approach with the use of Genetic Algorithms (GAs) [Gold89]. Results demonstrated that the approach is very flexible and provides good results for large circuits, where other methods fail.

However, the analysis we performed on the behavior of GATTO (the tool described in [PRSR94]) shows that the algorithm has some weak points:
• the cross-over operator is not as effective as in other problems GAs have been applied to;
• the method can hardly determine the length of the sequences; this results in an increase of the time required by the ATPG process and of the number of generated vectors;
• the phase devoted to finding sequences which excite faults is purely random; this obviously decreases the method's effectiveness in terms of achieved fault coverage and required CPU time.

In this paper we introduce some new techniques to overcome the above problems. We devised a more effective cross-over operator, and added new techniques which provide the method with the capability of automatically determining the minimal length of the test sequences.
Finally, we re-arranged the whole algorithm in order to increase the effectiveness of the fault excitation phase, which is no longer purely random but exploits information from the already generated sequences. To experimentally prove the effectiveness of the proposed techniques we implemented an improved version of GATTO, named GATTO+. The results show substantial improvements in GATTO+: on the one hand, they are now comparable to the competing algorithms on the small- and medium-sized circuits; on the other hand, results are further improved on the largest ones.

The paper is organized as follows: in Section 2 we briefly summarize the GATTO algorithm; Section 3 describes the improvements introduced in GATTO+; Section 4 presents the experimental results we gathered and provides some comparisons with other ATPG algorithms; Section 5 draws some conclusions.

2. The GATTO algorithm
The GATTO algorithm, presented in [PRSR94], is organized in three phases:
• the first phase aims at selecting one fault (denoted as the target fault); this phase consists of randomly generating sequences and fault simulating them with respect to the untested faults. As soon as one sequence is able to excite at least one fault, that fault is chosen as the target fault;
• the second phase aims at generating a test sequence for the target fault; it is implemented as a Genetic Algorithm: each individual is a test sequence to be applied starting from the reset state; cross-over and mutation operators are defined to modify the population and generate new individuals; a fitness function evaluates how close each individual is to the final goal (i.e., detecting the target fault); this function is a weighted sum of the numbers of gates and Flip-Flops having a different value in the good and faulty circuit. After a maximum number of unsuccessful generations the target fault is aborted and the second phase is exited;
• the third phase is a fault simulation experiment which determines whether the test sequence possibly generated in phase 2 detects other faults (fault dropping).
The three phases are repeated until either all the faults have been tested or aborted, or a maximum number of iterations has been reached.

3. Improvements
Based on the results reported in [PRSR94], we realized that several points in the GATTO algorithm were still worth improving. We describe them in detail in the following subsections, together with the improved solutions we devised for each of them.

3.1. Cross-Over Operator
The cross-over operator adopted in GATTO belongs to the category denoted as two-cut cross-over. The operator works in a horizontal manner: the new sequence is composed of some vectors coming from either parent (Fig. 1), according to the positions of two randomly generated cut points. Unfortunately, there is no guarantee that the vectors coming from the second parent produce in the new sequence the same behavior they produce in the parent sequence, as the state from which they are applied is different. As a consequence, we observed in GATTO that the offspring of two good individuals was often a bad individual; in general, the cross-over operator was not as effective for the ATPG problem as it usually is for other problems GAs have been applied to.

The cross-over operator defined for GATTO+ works in a vertical manner; the offspring does not inherit whole vectors from its parents: rather, the values for each input are taken either from one parent or from the other, depending on a random choice (Fig. 2), as the operator belongs to the category known as uniform cross-over. The length of the new sequence is that of the longer of the two parent sequences; inputs taken from the shorter parent are completed with random values (dark in Fig. 2).
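The following sketch illustrates the uniform (vertical) cross-over just described: for every primary input, the child inherits that input's values from one parent chosen at random, the child is as long as the longer parent, and positions beyond the end of the shorter parent are filled with random bits. The list-of-vectors representation and the per-input interpretation are mine, not taken from the GATTO+ implementation.

```python
import random

def uniform_crossover(parent_a, parent_b):
    """Vertical/uniform cross-over on two test sequences.

    A sequence is a list of input vectors (lists of 0/1, all of equal width).
    For each input column the child copies the whole column from one parent
    chosen at random; columns inherited from the shorter parent are padded
    with random bits up to the length of the longer parent."""
    width = len(parent_a[0])
    length = max(len(parent_a), len(parent_b))
    child = [[0] * width for _ in range(length)]
    for col in range(width):
        src = parent_a if random.random() < 0.5 else parent_b
        for k in range(length):
            if k < len(src):
                child[k][col] = src[k][col]
            else:                          # past the end of the shorter parent
                child[k][col] = random.randint(0, 1)
    return child

# Example: parents of different length over three primary inputs.
p1 = [[0, 1, 0], [1, 1, 0]]
p2 = [[1, 0, 1], [0, 0, 1], [1, 1, 1]]
print(uniform_crossover(p1, p2))
```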
3.2. Test Length
In GATTO it is up to the user to decide the initial test length, which is then automatically increased during the ATPG process. For some circuits this results in a test length higher than the minimum one, while for other circuits the process spends many iterations reaching the length required to test some faults. Moreover, the computational complexity of the whole process mainly depends on the cost of fault simulation; therefore, any unnecessary increase in the length of the sequences results in a corresponding waste of CPU time.

To face this problem we improved the GATTO algorithm in two ways: we first modified the evaluation function on which the fitness function is based, and then introduced new mutation operators.

3.2.1. New Evaluation Function
The evaluation function adopted in GATTO is based on the following expression:
\( h(v_j^k, f_i) = c_1\, b_1(v_j^k, f_i) + c_2\, b_2(v_j^k, f_i) \)  (1)
which provides a measure of how close the k-th input vector \(v_j^k\) of a sequence \(s_j\) is to detecting the fault \(f_i\). In Eq. (1), \(c_1\) and \(c_2\) are constants, while \(b_1\) and \(b_2\) are functions whose values are proportional to the number of gates and Flip-Flops (respectively) having a different value in the good and faulty circuit for fault \(f_i\). Once the value of \(h(v_j^k, f_i)\) is known for every vector in the sequence, the evaluation function H for the sequence \(s_j\) is computed as
\( H(s_j, f_i) = \max_k h(v_j^k, f_i), \quad \forall\, v_j^k \in s_j \)  (2)
In order to bias the evolution towards the shortest sequences, a modified version \(H^*(s_j, f_i)\) of the evaluation function H has been introduced in GATTO+; the value of \(h(v_j^k, f_i)\) for the k-th vector of the j-th sequence is weighted with a coefficient whose value decreases with k; the new evaluation function corresponds to the maximum value of the weighted function:
\( H^*(s_j, f_i) = \max_k \bigl(\mathrm{HANDICAP}^{\,k} \cdot h(v_j^k, f_i)\bigr), \quad \forall\, v_j^k \in s_j \)  (3)
where the value of the parameter HANDICAP ranges between 0 and 1.

3.2.2. New Mutation Operators
In GATTO, any change in the length of the sequences during phase 2 stems from the cross-over operator: in fact, the length of any new sequence can randomly vary up to the sum of the lengths of the two parent sequences. The new cross-over operator presented in the previous Section behaves in a completely different way and generates sequences as long as the longer parent. This means that the length of the sequences in a population can never be higher than that of the longest one in the previous generation. Unfortunately, there is thus no way to increase the sequence length.

To overcome this problem, and to force the algorithm to better explore the search space, we introduce two new mutation operators (MO+ and MO-), which are activated on an existing sequence with a given activation probability:
• MO+ introduces a randomly generated vector in a random position within the existing sequence; thanks to this operator, longer sequences are generated and evaluated;
• MO- removes a randomly selected vector from the existing sequence: if the vector is not essential, the evaluation function of the sequence increases.

3.3. Fault Excitation
Fault excitation is one of the most critical problems when devising a GA-based ATPG.
In fact, no way has been found, up to now, to evaluate how close a sequence is to exciting a given fault; for this reason, the ATPGs proposed in the literature resort to a purely random approach for this phase. In GATTO+, at the end of phase 2 all the sequences belonging to the last population are stored, and then used in the following phase 1 instead of the randomly generated ones (as in GATTO). If one of these sequences is able to excite at least one fault, that fault is selected as the target fault, and a new phase 2 is activated. Otherwise, random sequences are generated trying to excite faults, as in GATTO. The pseudo-code of phase 1 is reported in Fig. 3.

4. Experimental Results
We implemented a prototypical version of GATTO+ containing all the techniques described above: the new cross-over operator substitutes the old one, and the operators MO+ and MO-, as well as the parameter HANDICAP, have been introduced. The values of all the parameters have been experimentally determined through a preliminary set of runs: the operators MO+ and MO- are activated with probabilities 0.05 and 0.1, the parameter HANDICAP holds the value 0.98, and c1 and c2 have been assigned the values 1 and 10, respectively. Tab. 1 reports the results in terms of Fault Coverage (FC), CPU time and test length for the whole set of ISCAS'89 circuits. Experiments have been performed on a DECstation 3000/500.

Tab. 1: GATTO+ performance on ISCAS'89 circuits.

4.1. GATTO+ vs. GATTO
To demonstrate the effectiveness of the described techniques we report in Tab. 2 a comparison with the results of GATTO published in [PRSR94], where only the largest ISCAS'89 circuits were considered. GATTO+ is able to increase the fault coverage in 11 cases out of 12, and in 6 cases the increase is greater than 4%. On the other side, the CPU time is decreased in 9 cases out of 12. For all the circuits, we were able either to increase the Fault Coverage by more than 4% or to decrease the CPU time. GATTO+ achieves this result mainly thanks to the more effective technique adopted for phase 1, whose cost is now greatly decreased. Concerning the test length, the number of test vectors generated by GATTO+ is sometimes higher than that of GATTO, due to the new sequences added to detect other faults.

Tab. 2: comparison between GATTO and GATTO+.

4.2. GATTO+ vs. other algorithms
We report in the following the data published for two other ATPG algorithms on the ISCAS'89 benchmark circuits. In Tab. 3 we consider HITEC, a topological algorithm described in [NiPa91], and the GA-based ATPG proposed in [RPGN94]. The two algorithms were selected as they are representative of the two categories we denoted above as topological and simulation-based ATPG algorithms; we did not consider any ATPG belonging to the category of symbolic ones, as they are not able to deal with large circuits, which are normally the most critical problem in the real world.

Two difficulties must be faced when performing such a comparison: the first one concerns the hardware platform, which is different for the three ATPGs (results for HITEC were gathered on a SPARCstation 1, those in [RPGN94] on a SPARCstation II, and those for GATTO+ on a DECstation 3000/500). The second difficulty comes from the fact that GATTO+ assumes that all the Flip-Flops in the circuits are resettable, and generates sequences starting from the all-0s state, while the two other algorithms do not make this assumption and generate sequences starting from the all-Xs state.

Taking into account the two points above, the results in Tab. 3 show that:
• GATTO+ is able to reach higher Fault Coverage figures in all cases but 4 when HITEC is considered; the figures of GATTO+ are always better when the tool of [RPGN94] is analyzed;
• the CPU times required by GATTO+ are lower than the ones required by HITEC for all the circuits but S1196 and S1238; on the other side, GATTO+ is always faster than the method in [RPGN94]. For most circuits, we believe that the speed-up ratios are greater than any reasonable factor due to the different hardware platforms.

5. Conclusions
We described some advanced techniques to improve the effectiveness of a GA-based ATPG like GATTO [PRSR94]. They fully exploit the power of Evolutionary Computation by removing some weak points concerning the cross-over operator, the ability to determine the optimal sequence length, and the fault excitation phase.

Experimental results demonstrate that the new techniques are able to significantly improve the performance of GATTO in terms of Fault Coverage and CPU time. We also compared them with the results of a state-of-the-art topological algorithm and with those of another GA-based ATPG algorithm.

As a main contribution, this paper experimentally demonstrates that a carefully tuned GA-based ATPG algorithm is able to provide better results than any other approach: in fact, symbolic techniques, although faster on the small circuits, do not work with the large ones, while topological techniques, although able to identify untestable faults, are generally slower.

6. References
[ACAg88] V.D. Agrawal, K.-T. Cheng, P. Agrawal, "CONTEST: A Concurrent Test Generator for Sequential Circuits," Proc. 25th Design Automation Conf., 1988, pp. 84-89.
[CHSo93] H. Cho, G.D. Hatchel, F. Somenzi, "Redundancy Identification/Removal and Test Generation for Sequential Circuits Using Implicit State Enumeration," IEEE Trans. on CAD/ICAS, Vol. CAD-12, No. 7, pp. 935-945, July 1993.
[Gold89] D.E. Goldberg, "Genetic Algorithms in Search, Optimization, and Machine Learning," Addison-Wesley, 1989.
[NiPa91] T. Niermann, J.H. Patel, "HITEC: A Test Generator Package for Sequential Circuits," Proc. European Design Automation Conf., 1991, pp. 214-218.
[PRSR94] P. Prinetto, M. Rebaudengo, M. Sonza Reorda, "An Automatic Test Pattern Generator for Large Sequential Circuits based on Genetic Algorithms," Proc. Int. Test Conf., 1994, pp. 240-249.
[RPGN94] E.M. Rudnick, J.H. Patel, G.S. Greenstein, T.M. Niermann, "Sequential Circuit Test Generation in a Genetic Algorithm Framework," Proc. Design Automation Conf., 1994, pp. 698-704.
[SSAb92] D.G. Saab, Y.G. Saab, J. Abraham, "CRIS: A Test Cultivation Program for Sequential VLSI Circuits," Proc. Int. Conf. on Computer-Aided Design, 1992, pp. 216-219.
Frequency estimation of a single-tone complex sinusoidal signal
Abstract: Frequency estimation is an important topic in digital signal processing, and estimating the frequency of a sinusoidal signal buried in noise is a classic signal-processing problem.
High-precision frequency estimation has been successfully applied in radar detection, sonar and seismic monitoring, bridge vibration testing, and electronic communications; research on high-precision frequency estimation algorithms therefore has important theoretical significance and practical value.
For the frequency estimation of a single-tone complex sinusoid in white Gaussian noise, this thesis reviews several commonly used frequency estimation methods and proposes an iterative method for estimating the frequency of a complex sinusoid in complex additive white Gaussian noise.
Building on the weighted phase averaging (WPA) method proposed by Kay, the proposed method introduces an iterative scheme; with only a few iterations it overcomes the drawback of the WPA method that the signal-to-noise ratio (SNR) threshold rises as the frequency of the estimated complex sinusoid increases, thereby greatly improving estimation performance.
The estimation range of the new iterative method is the entire interval \([-\pi, \pi)\), and over this whole range the method attains essentially the same low SNR threshold.
Simulation results verify the performance improvement of the new iterative method over the WPA and WNLP methods, demonstrating its superiority.
Keywords: complex sinusoidal signal, frequency estimation, SNR threshold, weighted phase averaging algorithm, iterative algorithm, MATLAB

Abstract (English): Frequency estimation is an important part of digital signal processing, and estimating the frequency of a sine-wave signal submerged in noise is a classic signal-processing task. High-precision frequency estimation has been successfully applied to radar, sonar, seismic monitoring, bridge vibration testing and electronic communications, so the study of high-accuracy frequency estimation algorithms has important theoretical significance and application value. For the frequency estimation of a single complex sinusoid in white Gaussian noise, several commonly used frequency estimation methods are reviewed, and an iterative method is proposed for estimating the frequency of a complex sinusoid in complex additive white Gaussian noise. The method builds an iterative scheme on top of the weighted phase averaging (WPA) method proposed by Kay; with only a few iterations it overcomes the WPA method's drawback that the SNR threshold increases with the frequency being estimated, which greatly enhances estimation performance. The estimation range of the new iterative method is the whole interval \([-\pi, \pi)\), over which it achieves essentially the same low SNR threshold. Simulation results verify the improvement of the new iterative method over the WPA and WNLP methods and show its superiority.

Keywords: complex sinusoid, frequency estimation, SNR threshold, weighted phase averaging algorithm, iterative algorithm, MATLAB

Contents
1. Introduction
2. Survey of frequency estimation and review of related algorithms
  2.1 Maximum likelihood estimation
  2.2 Two-line amplitude method (Rife method)
  2.3 M-Rife algorithm (modified Rife algorithm)
  2.4 Quinn frequency estimation method
  2.5 Segmented-FFT frequency measurement
  2.6 Related conclusions
3. The weighted phase averaging algorithm for frequency estimation and its iterative version
  3.1 Weighted phase averaging method
  3.2 Iterative method
    3.2.1 Signal model
    3.2.2 The WPA method and its problems
    3.2.3 The iterative frequency estimation method
4. Performance comparison and computer simulation results
5. Conclusions
Acknowledgements
References
Appendix

1. Introduction
Frequency is an important physical quantity in parameter estimation.
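The base estimator that the thesis iterates on is Kay's weighted phase averaging (WPA) estimator: the frequency is obtained as a weighted average of the phase differences of successive samples. The sketch below implements only this base WPA estimator, with the parabolic weights written from the commonly cited form of Kay's window and explicitly renormalized; the iterative refinement proposed in the thesis (presumably demodulating by the current estimate and re-estimating the small residual frequency) is not reproduced here.

```python
import numpy as np

def kay_wpa_estimate(x):
    """Kay's weighted phase averaging (WPA) frequency estimator.

    x: complex samples of a single-tone complex sinusoid in noise.
    Returns the estimated angular frequency in radians per sample."""
    N = len(x)
    t = np.arange(N - 1)
    # Parabolic smoothing weights (largest in the middle of the record);
    # renormalized explicitly so that they sum to one.
    w = 1.0 - ((t - (N / 2 - 1)) / (N / 2)) ** 2
    w /= w.sum()
    # Phase differences of successive samples: arg(x[n+1] * conj(x[n])).
    delta = np.angle(x[1:] * np.conj(x[:-1]))
    return float(np.sum(w * delta))

# Quick check on a noiseless tone at 0.3 rad/sample (illustrative only).
n = np.arange(256)
x = np.exp(1j * (0.3 * n + 0.7))
print(kay_wpa_estimate(x))   # close to 0.3
```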
Advanced iterative algorithm for phase extraction of randomly phase-shifted interferograms

Zhaoyang Wang
Department of Mechanical Engineering, The Catholic University of America, Washington, D.C. 20064

Bongtae Han
Department of Mechanical Engineering, University of Maryland, College Park, Maryland 20742

Received March 1, 2004

An advanced random phase-shifting algorithm to extract phase distributions from randomly phase-shifted interferograms is proposed. The algorithm is based on a least-squares iterative procedure, but it copes with the limitation of the existing iterative algorithms by separating a frame-to-frame iteration from a pixel-to-pixel iteration. The algorithm provides stable convergence and accurate phase extraction with as few as three interferograms, even when the phase shifts are completely random. The algorithm is simple, fast, and fully automatic. A computer simulation is conducted to prove the concept. © 2004 Optical Society of America
OCIS codes: 120.5050, 120.3180, 100.2650, 100.2000.

Since the phase-shifting algorithm was introduced in the 1960s, numerous self-calibrating algorithms to cope with the phase-shift errors caused by imperfect phase-shifting mechanisms have been proposed. Among them, an overdeterministic approach that uses least-squares algorithms has been studied extensively for randomly shifted interferograms. Okada et al.¹ first proposed a least-squares-based iterative algorithm. Their research was a substantial extension of that previously proposed by Greivenkamp.² In the Okada algorithm, one solves the approximate linear equations iteratively to determine phase-shift amounts and phase distributions simultaneously. Similar methods were proposed by Lassahn et al.,³ Han and Kim,⁴ Kim et al.,⁵ and Wei et al.⁶ For all the iterative methods cited above it was claimed that four frames should be sufficient for accurate extraction of phase information. As indicated by Larkin,⁷ however, no published results of algorithms that used fewer than five frames can be found in the literature. In fact, it was found from an extensive study conducted by the present authors that the existing algorithms do not always provide accurate phase information unless a large number of frames is utilized (usually more than 15 frames). If a practically feasible number of frames (say, at most 5) is employed, the existing algorithms produce significant phase-shift errors unless the shift intervals are reasonably uniform and the initial estimates of shift amounts are within a few degrees of the actual shift amounts. These stringent requirements limit the applications of the existing iterative algorithms in practice.

In this Letter an improved iterative algorithm to cope with the above problem is proposed. The proposed algorithm makes the least squares converge accurately and rapidly, which substantially relaxes the required amount of input data; only three randomly shifted interferograms are sufficient for accurate extraction of phase information. The algorithm does not require accurate initial estimation of phase-shift amounts. The phase shifts can be completely random as long as at least three frames have different phase-shift amounts. The details of the proposed algorithm are described below.

Step 1. Pixel-by-pixel iteration to determine phase distribution: the intensity of an interferogram can be expressed as
\( I_{ij}^{t} = A_{ij} + B_{ij}\cos(\phi_j + \delta_i), \)  (1)
where the superscript t denotes the theoretical value, the subscript i denotes the i-th phase-shifted image (i = 1, 2, ..., M), and j denotes the individual pixel locations in each image (j = 1, 2, ..., N). In Eq. (1), \(A_{ij}\) is the background or mean intensity, \(B_{ij}\) is the modulation amplitude, \(\phi_j\) is the angular phase information, \(\delta_i\) is the phase-shift amount of each of the M frames (M ≥ 3), and N is the total number of pixels in each frame. As in the conventional phase-shifting algorithms, in Step 1 it is assumed that the background intensity and the modulation amplitude do not have frame-to-frame variation; i.e., they are functions of pixels only. Under that assumption, \(A_{ij}\) and \(B_{ij}\) become single-order tensors, i.e., \(A_{1j} = A_{2j} = \cdots = A_{Mj}\) and \(B_{1j} = B_{2j} = \cdots = B_{Mj}\). Defining a new set of variables as \(a_j = A_{ij}\), \(b_j = B_{ij}\cos\phi_j\), and \(c_j = -B_{ij}\sin\phi_j\), we can express Eq. (1) as
\( I_{ij}^{t} = a_j + b_j\cos\delta_i + c_j\sin\delta_i. \)  (2)

If \(\delta_i\) is known, there are 3N unknowns and MN equations. The unknowns can be solved by use of the overdetermined least-squares method. An expression of the least-squares error \(S_j\) accumulated from all the images described by Eqs. (1) and (2) can be written as
\( S_j = \sum_{i=1}^{M}\bigl(a_j + b_j\cos\delta_i + c_j\sin\delta_i - I_{ij}\bigr)^{2}, \)
where \(I_{ij}\) is the measured intensity of the i-th frame at pixel j.
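The pixel-by-pixel least-squares step just described can be summarized in a few lines of code. The sketch below assumes the phase shifts \(\delta_i\) are already known (in the full algorithm they are refined by the separate frame-to-frame iteration, which is not shown) and solves Eq. (2) for \((a_j, b_j, c_j)\) at every pixel simultaneously, recovering the wrapped phase as \(\phi_j = \arctan(-c_j/b_j)\). The function name and the vectorized NumPy formulation are mine, not the authors'.

```python
import numpy as np

def phase_from_known_shifts(frames, deltas):
    """Pixel-wise least-squares solution of Eq. (2).

    frames: (M, N) array of measured intensities, one row per phase-shifted
            image, pixels flattened to length N.
    deltas: (M,) array of phase-shift amounts delta_i in radians.
    Returns the wrapped phase phi_j for every pixel (length N)."""
    M, N = frames.shape
    # Design matrix of Eq. (2): I_ij = a_j + b_j*cos(delta_i) + c_j*sin(delta_i).
    G = np.column_stack([np.ones(M), np.cos(deltas), np.sin(deltas)])   # (M, 3)
    # One overdetermined least-squares solve shared by all pixels.
    coeffs, *_ = np.linalg.lstsq(G, frames, rcond=None)                 # (3, N)
    a, b, c = coeffs
    # phi = arctan(-c / b); atan2 keeps the result in the correct quadrant.
    return np.arctan2(-c, b)

# Illustrative check with synthetic data and three non-uniform shifts.
rng = np.random.default_rng(0)
N = 1000
phi_true = rng.uniform(-np.pi, np.pi, N)
deltas = np.array([0.0, 1.1, 2.7])
frames = 100.0 + 50.0 * np.cos(phi_true[None, :] + deltas[:, None])
phi_est = phase_from_known_shifts(frames, deltas)
print(np.max(np.abs(np.angle(np.exp(1j * (phi_est - phi_true))))))  # ~ 0
```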