

Automatic Seizure Detection Algorithm Based on Spike-Related Features of Smoothed Nonlinear Energy Operator Division


Intelligent Computer and Applications, Vol. 14, No. 3, March 2024. Article ID: 2095-2163(2024)03-0128-05. CLC number: TP18, TP399. Document code: A.

Automatic seizure detection algorithm based on spike-related features of smoothed nonlinear energy operator division
HE Xuelan, WU Jiang, JIANG Lurong
(School of Information Science and Engineering, Zhejiang Sci-Tech University, Hangzhou 310018, China)

Abstract: Most current automatic seizure detection algorithms focus on traditional features such as time-domain and frequency-domain features, which cannot fully characterize the information in epileptic EEG signals. Considering that the amplitude and frequency of abnormal waves in epileptic EEG increase, this paper proposes an automatic seizure detection algorithm based on spike-correlation features segmented by a smoothed nonlinear energy operator. The algorithm combines traditional time- and frequency-domain features with spike-correlation features to characterize the EEG signal, and uses supervised machine-learning classifiers to test its effectiveness and reliability for automatic seizure detection. The proposed method is evaluated on the open-source CHB-MIT dataset and obtains 96.52% accuracy, 95.65% sensitivity and 97.09% specificity. The experimental results show that spike-related features based on smoothed nonlinear energy operator segmentation can complement epileptic EEG information and improve seizure detection performance.

Keywords: seizure detection; machine learning; spike correlation; smoothed nonlinear energy operator

Funding: Zhejiang Provincial Basic Public Welfare Project (LGF19F010008); Key Laboratory of Universal Wireless Communications, Ministry of Education, BUPT (KFKT-2018101); Zhejiang Provincial Key R&D Program (2022C03136); National Natural Science Foundation of China (61602417).
About the authors: HE Xuelan (1999-), female, master's student; research interest: epilepsy detection. WU Jiang (1978-), male, Ph.D., senior engineer; research interests: wireless communication technology, industrial IoT. Corresponding author: JIANG Lurong (1982-), male, Ph.D., professor; research interests: physiological signal processing, complex networks and wireless sensor networks. Email: jianglurong@zstu.edu.cn.
Received: 2023-03-15.

0 Introduction
Epilepsy is a neurological disorder caused by abnormal discharges of neurons in the brain [1], typically sudden, repetitive and recurrent. The clinical manifestations of seizures are complex and varied, including paroxysmal convulsions, loss of consciousness and cognitive impairment [2]. These seizure events markedly affect patients' cognition and daily life, so the diagnosis and treatment of epilepsy are essential for preventing seizures and improving quality of life.

Scalp EEG is a non-invasive signal acquisition method used clinically to record brain activity [3], capturing the electrical changes that accompany it. Scalp EEG contains rich physiological, psychological and pathological information and is an effective tool for evaluating epilepsy and other brain disorders [4]. In EEG recordings, seizures and epileptiform discharges (such as spikes, sharp waves and spike-and-slow-wave complexes) are important biomarkers of epilepsy [5] and are widely used in clinical evaluation. EEG-based identification and analysis remains the clinical gold standard for epilepsy detection, but manual screening of massive clinical EEG data not only burdens physicians heavily but also suffers from strong subjectivity and inconsistent judgment criteria [6-7], which reduces the efficiency and accuracy of analysis. Designing an automatic seizure detection method is therefore an urgent problem.

To overcome the limitations of traditional diagnosis and improve medical efficiency, and with the rapid development of machine learning, automatic seizure detection has become a focus of the field. Researchers have built feature-engineering methods on the time-domain, frequency-domain or nonlinear features of scalp EEG [8-10], detecting seizures with classifiers over one or more features. Mursalin et al. [11] selected salient features from time-domain, frequency-domain and entropy-based features and used a random forest classifier to learn the selected feature set, obtaining better classification results. Yang et al. [12] characterized EEG with time-domain and nonlinear features and, combined with an XGBoost classifier, realized automatic epilepsy detection. Zarei et al. [13] used the discrete wavelet transform (DWT) and orthogonal matching pursuit (OMP) to extract different coefficients from EEG, computed nonlinear and statistical features, and classified with an SVM, achieving good detection performance. Wu et al. [14] computed spike rates from aEEG spike and cEEG spike extraction and detected seizures with a threshold method. Although these models all achieve good classification results, two problems remain: (1) most studies characterize EEG only in the time, frequency or time-frequency domain during feature extraction, and the information covered by these features is not sufficient to fully describe an EEG segment; (2) in automatic seizure detection, emphasizing periodic signal transitions is essential for effectively and reliably distinguishing the repetitive character of seizures, and cross-correlation is a widely used time-domain representation of signal periodicity.

To address these problems, this paper proposes an automatic seizure detection algorithm based on spike correlation (SC) features segmented by a smoothed nonlinear energy operator. SC is the maximum cross-correlation, over time delays, between adaptively extracted EEG spike segments. The smoothed nonlinear energy operator is used to measure the abnormal waves appearing in epileptic EEG, and the spike-correlation features of ictal and interictal EEG serve as an important complement for measuring the patient's brain activity. The proposed algorithm first filters the EEG with a Butterworth filter to remove external artifacts, then extracts traditional time- and frequency-domain features and combines them with the proposed spike-correlation features to further characterize the abnormal information during seizures. Finally, supervised machine-learning classification models realize automatic seizure detection.

1 Methods
The overall design of the automatic seizure detection pipeline is shown in Figure 1; it contains three modules: preprocessing, feature extraction and classification.
[Figure 1: Flowchart of seizure detection — EEG signal → preprocessing (channel selection, filtering, segmentation, normalization) → feature extraction (traditional time/frequency features; spike-correlation features) → classification (seizure / non-seizure).]

1.1 EEG preprocessing
Scalp EEG data are acquired by electrodes placed at fixed positions on the scalp. With external electrodes, this acquisition method is easily affected by external interference, so the collected data are contaminated by noise. In addition, internal artifacts produced by the subject's physiological activity during acquisition (e.g. eye blinks, heartbeat) [15] also disturb the data and affect the classification results. For internal artifacts, this paper first performs channel selection on the collected EEG: the two electrodes FT1 and FT2, which are strongly disturbed by eye movement, are removed; and since the left ear electrode is highly susceptible to ECG artifacts, the two electrodes FT9 and FT10 near the ears are also removed. After channel selection, 20 EEG channels are retained.

Filtering is a common way to remove external EEG artifacts; this paper uses a 1-48 Hz band-pass Butterworth filter to suppress signals in other frequency ranges [16]. Based on the seizure onset and offset times annotated in the dataset, and to preserve waveform integrity, the EEG is split into 4 s segments with a sliding window of 50% overlap, and all segments are then normalized. Since the amplitude fluctuation of the channel-selected, filtered EEG is generally within an acceptable range, min-max normalization largely preserves the true EEG waveform. This paper therefore normalizes the raw EEG with min-max normalization:

$$X_{\text{min-max}} = \frac{X - \min(X)}{\max(X) - \min(X)} \tag{1}$$
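A minimal sketch of this preprocessing stage, assuming the 256 Hz CHB-MIT sampling rate given in Section 2.1; the helper names are ours, and the zero-phase filtfilt application and filter order are our choices, since the paper specifies only a 1-48 Hz Butterworth band-pass:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # CHB-MIT sampling rate (Hz)

def bandpass(eeg, low=1.0, high=48.0, fs=FS, order=4):
    """1-48 Hz Butterworth band-pass, applied per channel (zero-phase)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def segment(eeg, win_s=4.0, overlap=0.5, fs=FS):
    """Split a (channels, samples) array into 4 s windows with 50% overlap."""
    win = int(win_s * fs)
    step = int(win * (1 - overlap))
    return [eeg[:, s:s + win] for s in range(0, eeg.shape[1] - win + 1, step)]

def minmax(seg):
    """Equation (1): rescale each channel to [0, 1]."""
    lo = seg.min(axis=-1, keepdims=True)
    hi = seg.max(axis=-1, keepdims=True)
    return (seg - lo) / (hi - lo + 1e-12)

# Placeholder data standing in for one 20-channel, 60 s EEG record.
segments = [minmax(s) for s in segment(bandpass(np.random.randn(20, 60 * FS)))]
```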
1.2 Feature extraction
Raw EEG data are voluminous and not directly representative, whereas feature extraction distills data that characterize seizures for model building. This paper therefore extracts traditional time- and frequency-domain features together with spike-correlation features based on the smoothed nonlinear energy operator.

1.2.1 Traditional feature extraction
Traditional features are extracted from two perspectives: the time domain and the frequency domain. In the time domain we extract, per channel, the maximum, minimum, mean, kurtosis, skewness and line length; in the frequency domain we extract the amplitude of each frequency component of the signal. Kurtosis, skewness and line length are defined in equations (2)-(4):

$$\text{Kurtosis} = \frac{E[(x - \text{mean}(x))^4]}{\{E[(x - \text{mean}(x))^2]\}^2} \tag{2}$$

$$\text{Skewness} = E\left[\left(\frac{x - \text{mean}(x)}{\text{std}(x)}\right)^3\right] \tag{3}$$

$$\text{LineLength} = \frac{1}{n}\sum_{i=1}^{n} \lvert x_{i+1} - x_i \rvert \tag{4}$$

where x denotes an EEG segment, E denotes the expectation of the bracketed quantity, x_i is the value at sample i, and n is the number of samples in segment x.

1.2.2 Spike-related feature extraction
Based on the specific phenomenon that the amplitude and frequency of abnormal EEG waves change during seizures, this paper proposes spike-correlation features as a complement for characterizing ictal abnormalities. The nonlinear energy operator (NLEO) is a method for measuring signal energy [17] that can track the instantaneous energy of a signal. For a discrete signal x(n), the NLEO is expressed as

$$\varphi[x(n)] = x(n-l)\,x(n-p) - x(n-q)\,x(n-s) \tag{5}$$

Usually, when abnormal discharges appear in epileptic EEG, the amplitude and frequency of the brain waves increase, so the NLEO can better highlight abnormal discharge waveforms against the stationary background; however, nonlinear energy is also highly sensitive to noise that may be present in the EEG. To further improve the NLEO's representation of non-stationary signals and its noise robustness, [18] proposed an improvement, the smoothed nonlinear energy operator (SNLEO), which convolves the computed energy with a window function, reducing to some extent the influence of low-amplitude noise on the output. The SNLEO is computed as

$$\varphi_s[x(n)] = w(n) * \varphi[x(n)] \tag{6}$$

where w is a rectangular window function and * denotes convolution. In the NLEO computation, this paper uses the parameter values l = 1, p = 2, q = 0 and s = 3, and convolves with a 7-point window.

Given the SNLEO, a suitable threshold must be set to select as many candidate spikes as possible while minimizing the missed-detection rate. This paper uses an adaptive threshold to screen the SNLEO for spikes, taking as optimal the threshold around which the number of detected spikes does not vary over a wide range; the search range for the optimal threshold is 10%-90% of the SNLEO [19]. The midpoint between two adjacent peaks is taken as the start or end of a spike. Because segmentation makes the waveform discontinuous at segment edges, the first and last detected spikes are discarded so that every retained spike has a complete morphology. If spikes are detected, each segmented spike is associated with the following 5 spike segments. This paper uses the spike correlation (SC) to define this matrix and takes the mean and standard deviation of SC as seizure detection features. SC is computed by equations (7)-(8):

$$SC_{i,j} = \max_m R_{x_i x_j}(m) \tag{7}$$

$$R_{x_i x_j}(m; i, j) = \frac{E[x_i(n)\, x_j(n+m)]}{\sigma_{x_i}\sigma_{x_j}} \tag{8}$$

where x_i and x_j are EEG spike segments, with i = [2, ..., S-6] and j = [i+1, ..., i+5]; S is the number of spikes detected in a segment; and σ is the standard deviation of the EEG segment.

The SC estimation process is illustrated in Figure 2: the first and last spikes of a segment are discarded, and each remaining spike is correlated with the following 5 spikes. The spike-correlation matrix computed from the example in Figure 2(a) is shown in Figure 3.
[Figure 2: Schematic of SNLEO spike-correlation computation with an adaptive threshold — (a) example seizure EEG segment; (b) the SNLEO of segment (a); (c) the adaptive threshold determined from (b) (th = 104.877); (d) the segmented spike fragments ("*" marks discarded fragments).]
[Figure 3: Maximum correlation matrix obtained from the spike fragments.]

In addition, we compute the mean and standard deviation of the SNLEO and the mean maximum SNLEO value, spikiness, defined as the maximum SNLEO peak divided by the mean SNLEO [20], as well as the number of detected spikes (snum), their mean duration (swidth) and the mean inter-spike interval (sgap). The spike-related features based on SNLEO segmentation are described in Table 1.

Table 1 Description of spike-related features
  Feature       Description
  mean(SC)      Mean of the spike-correlation matrix
  std(SC)       Standard deviation of the spike-correlation matrix
  mean(SNLEO)   Mean of the SNLEO
  std(SNLEO)    Standard deviation of the SNLEO
  spikiness     Mean maximum SNLEO value
  snum          Number of spikes
  swidth        Mean spike duration
  sgap          Mean inter-spike interval
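Below is a compact sketch of equations (5)-(8) with the stated parameters (l = 1, p = 2, q = 0, s = 3, 7-point rectangular window). The adaptive-threshold spike segmentation is omitted, and truncating each spike pair to a common length plus the 0-based index handling are our simplifications, not the paper's:

```python
import numpy as np

def snleo(x, l=1, p=2, q=0, s=3, win=7):
    """Equations (5)-(6): NLEO with the paper's parameters, smoothed by
    convolution with a 7-point rectangular window."""
    n = np.arange(max(l, p, q, s), len(x))
    nleo = x[n - l] * x[n - p] - x[n - q] * x[n - s]
    return np.convolve(nleo, np.ones(win) / win, mode="same")

def spike_correlation(spikes):
    """Equations (7)-(8): maximum normalized cross-correlation between each
    spike and its 5 successors; returns mean(SC) and std(SC).
    Assumes at least 7 detected spikes (first/last already discarded)."""
    sc = []
    for i in range(1, len(spikes) - 5):
        for j in range(i + 1, i + 6):
            a = spikes[i] - spikes[i].mean()
            b = spikes[j] - spikes[j].mean()
            m = min(len(a), len(b))  # truncate the pair to a common length
            r = np.correlate(a[:m], b[:m], mode="full") / (
                m * a[:m].std() * b[:m].std() + 1e-12)
            sc.append(r.max())
    return np.mean(sc), np.std(sc)
```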
1.3 Classification models
The traditional machine-learning classifiers RF and SVM, which are frequently used for automatic seizure detection, are used to evaluate the proposed method.

2 Experiments
2.1 Dataset
This study uses the public scalp EEG dataset CHB-MIT, which records scalp EEG from 23 epilepsy patients at Boston Children's Hospital. Each patient's data consist of multiple .edf files sampled at 256 Hz, containing 157 seizures in total. Most files contain 23 EEG channels, named after the EEG electrode positions of the international standard 10-20 system. Because ictal time is far shorter than interictal time, this paper balances positive and negative samples by undersampling: negative samples comparable in number to the seizure samples are drawn at random from interictal periods.

2.2 Evaluation metrics
To validate the proposed method, accuracy (Acc), sensitivity (Sen), specificity (Spe), F1 score and AUC are used for experimental evaluation, computed by equations (9)-(11):

$$Acc = \frac{TP + TN}{TP + TN + FP + FN} \tag{9}$$

$$Sen = \frac{TP}{TP + FN} \tag{10}$$

$$Spe = \frac{TN}{TN + FP} \tag{11}$$

where TP, FP, FN and TN are true positives, false positives, false negatives and true negatives, respectively. All experimental results in this paper were produced on a computer configured with an Intel(R) Core(TM) i7-9700 CPU @ 3.00 GHz and 16 GB RAM; the experimental models were built with Python 3.7 and scikit-learn.
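A sketch of the feature-and-evaluation stage, assuming segments have already been preprocessed as above; the paper does not specify the SVM hyperparameters, so scikit-learn defaults stand in, and the frequency-domain amplitude features are omitted for brevity:

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

def traditional_features(seg):
    """Per-channel max, min, mean, kurtosis, skewness and line length (Eq. 4)
    for one (channels, samples) segment."""
    ll = np.abs(np.diff(seg, axis=-1)).mean(axis=-1)
    return np.concatenate([seg.max(-1), seg.min(-1), seg.mean(-1),
                           kurtosis(seg, axis=-1), skew(seg, axis=-1), ll])

def evaluate(X_train, y_train, X_test, y_test):
    """Fit an SVM and report Acc/Sen/Spe per equations (9)-(11);
    y uses 1 = seizure, 0 = non-seizure."""
    clf = SVC().fit(X_train, y_train)
    tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
    acc = (tp + tn) / (tp + tn + fp + fn)  # Eq. (9)
    sen = tp / (tp + fn)                   # Eq. (10)
    spe = tn / (tn + fp)                   # Eq. (11)
    return acc, sen, spe
```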
2.3 Result analysis
We first tested the extracted traditional time- and frequency-domain features with the RF and SVM classification models; the results are shown in Table 2, where the SVM model performs best.

Table 2 Experimental results based on traditional features
  Features               Classifier   Acc      Sen      Spe
  Traditional features   RF           0.8621   0.7533   0.9339
  Traditional features   SVM          0.9590   0.9385   0.9736

With the SVM classification model fixed, the traditional features were then combined with the spike-related features to examine how well the latter characterize epileptic EEG. The before/after comparison is shown in Table 3.

Table 3 Classification results with and without spike-related features
  Classifier   Features                                 Acc      Sen      Spe
  SVM          Traditional features                     0.9590   0.9385   0.9736
  SVM          Traditional + spike-related features     0.9652   0.9565   0.9709

Table 3 shows that the spike-related features do characterize epileptic EEG information: after adding them, Acc improves by 0.62% and Sen by 1.8%, while Spe decreases somewhat. In actual clinical applications, correctly identifying seizure samples matters more than correctly identifying non-seizure samples, so Sen measures the merit of a method more accurately. Although the proposed method loses slightly in Spe, it gains a definite improvement in Sen.

3 Conclusion
This paper proposed an automatic seizure detection algorithm based on spike-correlation features segmented by a smoothed nonlinear energy operator. The algorithm characterizes EEG with traditional time- and frequency-domain features combined with spike-correlation features, and uses RF and SVM classifiers to test the effectiveness and reliability of automatic seizure detection. The proposed method was evaluated on the open-source CHB-MIT dataset; the SVM classifier obtained the better results, with accuracy, sensitivity and specificity of 96.52%, 95.65% and 97.09%, respectively. Furthermore, the feature-ablation experiments show that the proposed spike-related features based on smoothed nonlinear energy operator segmentation can complement epileptic EEG information and further improve seizure detection performance.

References
[1] PATEL D C, TEWARI B P, CHAUNSALI L, et al. Neuron-glia interactions in the pathophysiology of epilepsy[J]. Nature Reviews Neuroscience, 2019, 20(5): 282-297.
[2] SPECCHIO N, WIRRELL E C, SCHEFFER I E, et al. International League Against Epilepsy classification and definition of epilepsy syndromes with onset in childhood: Position paper by the ILAE Task Force on Nosology and Definitions[J]. Epilepsia, 2022, 63(6): 1398-1442.
[3] SCHAD A, SCHINDLER K, SCHELTER B, et al. Application of a multivariate seizure detection and prediction method to non-invasive and intracranial long-term EEG recordings[J]. Clinical Neurophysiology, 2008, 119(1): 197-211.
[4] BENBADIS S R, BENICZKY S, BERTRAM E, et al. The role of EEG in patients with suspected epilepsy[J]. Epileptic Disorders, 2020, 22(2): 143-155.
[5] WANG Xuefeng. EEG of epilepsy: traditional views, new understanding and new fields[J]. Chinese Journal of Neurology, 2004, 37(3): 7-9. (in Chinese)
[6] LIU Xiaoyan, HUANG Zhenni, QIN Jiong. Clinical and EEG analysis of different types of status epilepticus in children[J]. Chinese Journal of Neurology, 2000, 33(2): 73. (in Chinese)
[7] MATURANA M I, MEISEL C, DELL K, et al. Critical slowing down as a biomarker for seizure susceptibility[J]. Nature Communications, 2020, 11(1): 2172.
[8] PENG Ruimin, JIANG Jun, KUANG Guangtao, et al. EEG-based automatic seizure detection: review and outlook[J]. Acta Automatica Sinica, 2022, 48(2): 335-350. (in Chinese)
[9] HOSSEINI M P, HOSSEINI A, AHI K. A review on machine learning for EEG signal processing in bioengineering[J]. IEEE Reviews in Biomedical Engineering, 2020, 14: 204-218.
[10] ACHARYA U R, HAGIWARA Y, DESHPANDE S N, et al. Characterization of focal EEG signals: A review[J]. Future Generation Computer Systems, 2019, 91: 290-299.
[11] MURSALIN M, ZHANG Y, CHEN Y, et al. Automated epileptic seizure detection using improved correlation-based feature selection with random forest classifier[J]. Neurocomputing, 2017, 241: 204-214.
[12] YANG Shuhan, LI Bo, ZHOU Fengfeng. Machine-learning-based cross-patient automatic epilepsy detection algorithm[J]. Journal of Jilin University (Science Edition), 2021, 59(1): 101-106. (in Chinese)
[13] ZAREI A, ASL B M. Automatic seizure detection using orthogonal matching pursuit, discrete wavelet transform, and entropy based features of EEG signals[J]. Computers in Biology and Medicine, 2021, 131: 104250.
[14] WU Duanbo, WANG Zimeng, DONG Fang, et al. Seizure detection algorithm based on aEEG spike and cEEG spike extraction[J]. Experimental Technology and Management, 2020, 37(12): 57-62. (in Chinese)
[15] LUO Ruipeng, FENG Mingke, HUANG Xin, et al. A review of EEG signal preprocessing methods[J]. Electronic Science and Technology, 2023, 36(4): 36-43. (in Chinese)
[16] OCBAGABIR H T, ABOALAYON K A I, FAEZIPOUR M. Efficient EEG analysis for seizure monitoring in epileptic patients[C]//2013 IEEE Long Island Systems, Applications and Technology Conference (LISAT). Farmingdale, USA: IEEE, 2013: 1-6.
[17] BOONYAKITANONT P, LEK-UTHAI A, CHOMTHO K, et al. A review of feature extraction and performance evaluation in epileptic seizure detection using EEG[J]. Biomedical Signal Processing and Control, 2020, 57: 101702.
[18] MUKHOPADHYAY S, RAY G C. A new interpretation of nonlinear energy operator and its efficacy in spike detection[J]. IEEE Transactions on Biomedical Engineering, 1998, 45(2): 180-187.
[19] TAPANI K T, VANHATALO S, STEVENSON N J. Incorporating spike correlations into an SVM-based neonatal seizure detector[C]//EMBEC & NBC 2017: Joint Conference of the European Medical and Biological Engineering Conference (EMBEC) and the Nordic-Baltic Conference on Biomedical Engineering and Medical Physics (NBC). Singapore: Springer, 2018: 322-325.
[20] TAPANI K T, VANHATALO S, STEVENSON N J. Time-varying EEG correlations improve automated neonatal seizure detection[J]. International Journal of Neural Systems, 2019, 29(4): 1850030.

IND560 Weighing Terminal and Fill-560 Application Software Product Description


Industrial Weighing and Measuring — Dairy & Cheese News No. 2

Increase productivity with efficient filling processes

The new IND560 weighing terminal enables you to boost speed and precision during the filling process. Choose from a wide range of scales and weigh modules to connect to the terminal.

The versatile IND560 excels in controlling filling and dosing applications, delivering best-in-class performance for fast and precise results in manual, semi-automatic or fully automatic operations. For more advanced filling, the Fill-560 application software adds additional sequences and component inputs. Without complex and costly programming you can quickly configure standard filling sequences, or create custom filling and blending applications for up to four components, that prompt operators for action and reduce errors.

Ergonomic design
Reducing operator errors is achieved through the large graphic display, which provides visual signals. SmartTrac™, the METTLER TOLEDO graphical display mode for manual operations, clearly indicates the status of the current weight in relation to the target value and helps operators reach the fill target faster and more accurately.

Connectivity and reliability
Multiple connectivity options are offered to integrate applications into your control system, e.g. Allen-Bradley® RIO, Profibus® DP or DeviceNet™. Even in difficult vibrating environments, the TraxDSP™ filtering system ensures fast and precise weighing results. High reliability and increased uptime are achieved through predictive maintenance with the TraxEMT™ Embedded Maintenance Technician.

Bench & Floor Scales
Speed up manual operations with flexible checkweighing

Hygienic design, fast display readouts and the cutting-edge color backlight of the new BBA4x9 check scales and IND4x9 terminals set the standard for more efficient manual weighing processes.

Flexibility through customization
For optimal static checkweighing, the software modules 'check' and 'check+' are the right solutions. They allow customization of the BBA4x9 and the IND4x9 for individual activities and needs, e.g. manual portioning or over/under control. Flexibility is increased with the optional battery, which permits mobility.

Hygienic design
Easy-to-clean equipment is vital in food production environments. Both the BBA4x9 scale and the IND4x9 terminal are designed after the EHEDG and NSF guidelines for use in hygienically sensitive areas. Even the back side of the scale stand has a smooth and closed surface, which protects from dirt and allows trouble-free cleaning.

Fast and precise
The colorWeight® display with a colored backlight gives fast, clear indication when the weight is within, below or above the tolerance. The color of the backlight can be chosen (any mixture of red, green and blue), as can the condition it refers to (e.g. below tolerance). The ergonomic design enables operators to work more efficiently due to less exhaustion. Short stability time, typically between 0.5 s and 0.8 s, ensures high throughput and increased productivity.
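The over/under logic above maps each reading into one of three tolerance zones. A hypothetical sketch of such a classification, following the color legend used in this newsletter (yellow above, green within, red below) — the thresholds and function names are ours, not METTLER TOLEDO firmware:

```python
# Classify a weight reading against a target and tolerance band, as a
# colorWeight-style backlight might signal it to the operator.
def check(weight, target, tol_low, tol_high):
    if weight < target - tol_low:
        return "RED"     # below tolerance
    if weight > target + tol_high:
        return "YELLOW"  # above tolerance
    return "GREEN"       # within tolerance

assert check(0.495, target=0.500, tol_low=0.010, tol_high=0.010) == "GREEN"
```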
Publisher: Mettler-Toledo GmbH Industrial, Sonnenbergstrasse 72, CH-8603 Schwerzenbach, Switzerland. Production: MarCom Industrial, CH-8603 Schwerzenbach, Switzerland. MTSI 44099754. Subject to technical changes. © 06/2006 Mettler-Toledo GmbH. Printed in Switzerland.

The colored backlight of the LC display provides easy-to-recognize indication of whether the weight is within the tolerance limits or not: yellow — weight above tolerance; green — weight within tolerance; red — weight below tolerance.

Your benefits
• Fast and precise results and operations
• Higher profitability
• Ergonomic design, simple to operate
• Mobility up to 13 h due to optional battery

Fast facts BBA4x9 and IND4x9
• 6 kg x 1 g, 15 kg x 2 g, 30 kg x 5 g (2x3000d); for higher-capacity scales: IND4x9 terminal
• Weights-and-measures approved versions 2x3000e
• Functions: simple weighing, static checkweighing, dispensing
• Color backlight, bar graph
• Tolerances in weight or %
• 99 memory locations
• Optional WLAN, battery
• Meets the IP69k protection standards against high-pressure and steam cleaning
• Complete stainless steel construction

Hygienic design to improve food safety
HACCP programs, GMP (Good Manufacturing Practice), pathogen monitoring and good cleaning practices are essential for effective food safety plans. Our scales are constructed for compliance with the latest hygienic design guidelines.

METTLER TOLEDO supports you in complying with the latest food safety standards like BRC, IFS or ISO 22000 by offering solutions which are:
• Compliant with EHEDG (European Hygienic Engineering & Design Group) and NSF (National Sanitation Foundation) guidelines
• Full V2A stainless steel constructions, optional V4A load plates
• Smooth surface (Ra < 0.8 μm)
• Easy-to-clean construction, no exposed holes
• Radius of inside corners > 3 mm
• Ingress protection rating up to IP69k
• Hermetically sealed load cells

Your benefits
• Reduced biological and chemical contamination risks
• Fast and thorough cleaning procedures
• Fulfillment of hygiene regulations
• Long equipment life thanks to rugged design

Guaranteed service — keep your business running
Avoid unnecessary downtime with our wide range of service packages. With a range of innovative service solutions offering regulatory compliance, equipment calibration, training, routine service and breakdown assistance, you can profit from sustained uptime, together with ongoing performance verification and long life of equipment. There is a range of contract options designed to complement your existing quality systems.
Each one offers its own unique set of benefits depending on the equipment and its application. (/service)

Fast facts PUA579 low-profile scale
• 300 kg x 0.05 kg - 1500 kg x 0.2 kg
• Open design
• Lifting device for easy cleaning
• EHEDG conform (300 and 600 kg models -CS/-FL)
• Free-size scale dimensions
• Approach ramps
Example: PUA579, the first EHEDG-conform floor scale.

Weigh-Price Labeling
Weigh-price labeling: ergonomic, modular, fast

For individual weight labeling of various products, high-speed weighing, smart printing and fast product changes are essential. METTLER TOLEDO offers static and automated solutions for both manual and high-speed prepack applications. Choose from our Etica and PAS product range.

Challenges faced in the packaging area are:
• Responding quickly to retailer demands while improving margins
• Improving pack presentation
• Minimizing downtime and product giveaways

With a complete offering of cutting-edge weighing technology, high-performance printing, and smart software solutions, we can help you tackle your labeling challenges whether they are very simple or highly demanding.

Intuitive human-machine interface
Touch-screen operator displays with graphical icons guide the operator intuitively and reduce nearly every operation to just one or two keystrokes. This interface allows reduced operator training as well as increased operating efficiency.

Advanced ergonomics and sanitary design
Our weigh-price labelers are made out of stainless steel for extensive protection against food contamination. Careful attention to hygienic design requirements, with no dead spots and few horizontal parts, ensures that the labelers are easy to clean.

Modular design
Our product offering includes both manual and automatic weigh-price labelers constructed of flexible "building blocks." Different combinations and configurations can meet specific budget and operational requirements. METTLER TOLEDO will help you to select the right:
• Scale performance
• Display technology
• Memory capacity
• IT connections
• Degree of automation
• Integration kits
A large range of options and peripherals gives flexibility for meeting unique requirements, e.g. wireless network, hard disks, external keyboards, bar code scanners, RFID transponders, dynamic checkweighing, or metal detection.
[Pictured: Etica 2300 standard manual labeler; Etica 2400i combination with automatic stretch wrappers; Etica 2430G multi-conveyor weigh-price labeler range.]

Efficient label applicators
The unique Etica label applicator (Etica G series) does not require an air compressor, allowing savings on initial equipment expense and ongoing maintenance costs. Labels are gently applied in any pre-memorized orientation. PAS systems provide motorized height adjustment and place the label in any corner of the package. Users will have a new degree of freedom in planning their case display layouts to maximize both product presentation and consumer impact.

Smart label design tools
Retailers want labels to carry clear, correct information, in accordance with their traceability and style requirements. Our solutions are equipped with label design software tools which facilitate the design of labels customized for retailer demands.
A touch screen allows the user to create specific labels — even with scanned elements such as logos and graphics, pre-programmed label templates, or RFID.

Versatile integration capabilities
The engineers at METTLER TOLEDO worked closely with Europe's leading automatic stretch wrapper suppliers to design performance-enhancing and cost-effective weigh-wrap-label system solutions. Achieving a small system footprint means the systems require only slightly more floor space than the wrapper alone. The PAS and Etica weigh-price labelers can be integrated via TCP/IP in a METTLER TOLEDO scale network, in host computer systems and goods management systems.

Etica weigh-price-labeling systems
• Static and automatic weigh-price labeling up to 55 pieces/min
• Operator displays: 5.7" color backlit LCD (Etica 2300 series); 10.4" high-resolution touch screen (Etica 4400 series)
• 3-inch graphic thermal printer (125 to 250 mm/sec) with fully programmable label format (max. size 80 x 200 mm)
• Data memory: 64 to 256 MB RAM; 128 MB to 10 GB mass storage; unlimited number of logo graphics and label descriptions
• Interfaces: 1 serial RS232 interface; optional second RS232 + RS485 + Centronics port; Ethernet network communication interface (10baseT), TCP/IP, 2 USB ports; optional hand-held bar code scanner for automatic traceability data processing
[Pictured: Garvens PAS 3008/3012 price labelers; Etica 4400.]

Tank & Silo Weighing Solutions
Tank and silo weighing solutions master your batching processes

Boost your productivity and process uptime with reliable weighing equipment — improved batching speed and precision, maximum uptime at low maintenance cost.

Fast, reproducible and reliable batching and filling are key success factors for your production process. Various factors can affect precision: foam can compromise optical/radar sensors, and solids do not distribute evenly in a tank or silo. Our weighing technology is not affected by these conditions and provides direct, accurate and repeatable measurement of mass without media contact.
In addition, our range of terminals and transmitters/sensors enables easy connectivity to your control systems.

Key customer benefits
• Increased precision and consistency of your material transfer processes
• Faster batching process through supreme TraxDSP™ noise and vibration filtering
• Minimal maintenance cost

Fast facts terminals/transmitters: PTPN and IND130
• Exclusive TraxDSP™ vibration rejection and superior noise filtering system
• Easy data integration through a variety of interfaces, including Serial, Allen-Bradley® RIO, Modbus Plus®, Profibus® and DeviceNET
• IP65 stainless harsh versions

Process terminal PTPN
• Local display for weight indication and calibration checks
• Panel-mount or stainless steel desk enclosure

IND130 smart weight transmitter
• Direct connectivity where no local display is required
• Quick setup and run via PC tool
• CalFREE™ offers fast and easy calibration without test weights
• DIN rail mounting version
[Pictured: FlexMount® weigh module; PTPN process terminal; IND130 connected to a PLC. TraxDSP™ ensures accurate results even in difficult environments with vibration.]

Quality data under control? We have the right solution
Consistently improving the quality of your products requires the ability to efficiently control product and package quality parameters in a fast-changing and highly competitive environment. Competition in the food industry — with high volumes but tight margins — creates demand for efficient quality assurance systems. Statistical Quality Control (SQC) systems for permanent online information and documentation about your key quality parameters convert into real cost savings.

Our solutions for Statistical Quality Control (SQC) combine ease of operation, quality data management and analysis functionality:
• We offer mobile compact solutions with embedded SQC intelligence up to networked systems with an SQL database.
• The systems are upgradeable and can be expanded and adapted to meet changing customer needs.
• Simple and intuitive prompts guide the user through the sample process, reducing training costs as well as sampling errors.
• Real-time analysis and alarms help to take immediate corrective measures and to save money by reducing overfilling.
[Pictured: metal detection; checkweigher; Sample check® online quality data analysis.]

Throughout the manufacturing process, METTLER TOLEDO SQC solutions analyze your important product and package quality parameters and present them the way you want, helping you comply with legislation and control and document your product quality and your profitability.
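As an illustration of the kind of check an SQC system automates, here is a minimal sketch that flags filling drift from a sample mean against control limits; the limits, data and function name are invented for the example and are not a METTLER TOLEDO algorithm:

```python
# Hypothetical X-bar control check: alarm when a sample mean drifts outside
# target +/- 3*sigma/sqrt(n), the classic control-chart limit.
import math
import statistics

def xbar_alarm(samples, target, sigma):
    n = len(samples)
    mean = statistics.fmean(samples)
    limit = 3 * sigma / math.sqrt(n)
    return abs(mean - target) > limit, mean

alarm, mean = xbar_alarm([500.8, 501.2, 500.5, 501.9, 500.7],
                         target=500.0, sigma=0.6)
print(f"sample mean = {mean:.2f} g, alarm = {alarm}")  # overfilling detected
```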
For more information: /dairy-cheese
Mettler-Toledo GmbH, CH-8606 Greifensee, Switzerland
Tel. +41 44 944 22 11, Fax +41 44 944 30 60
Your METTLER TOLEDO contact:

A wide range of solutions to improve processes:
1. SevenGo™ portable pH-meter; 2. In-line turbidity, pH and conductivity sensors; 3. DL22 food and beverage analyzer; 4. Halogen moisture analyzers
1. Statistical Quality Control / Statistical Process Control; 2. Process weighing; 3. Predictive maintenance; 4. Methods of moisture content determination

Share our knowledge
Learn from our specialists — our knowledge and experience are at your disposal in print or online. Learn more about all of our solutions for the dairy and cheese industry at our website. You can find information on a wide range of topics to improve your processes, including case studies, application stories, return-on-investment calculators, plus all the product information you need to make an informed decision.

A Fast and Accurate Plane Detection Algorithm for Large Noisy Point Clouds Using Filtered Normals


A Fast and Accurate Plane Detection Algorithm for Large Noisy Point Clouds Using Filtered Normals and Voxel Growing
Jean-Emmanuel Deschaud, François Goulette
Mines ParisTech, CAOR-Centre de Robotique, Mathématiques et Systèmes, 60 Boulevard Saint-Michel, 75272 Paris Cedex 06
jean-emmanuel.deschaud@mines-paristech.fr, francois.goulette@mines-paristech.fr

Abstract
With the improvement of 3D scanners, we produce point clouds with more and more points, often exceeding millions of points. We then need a fast and accurate plane detection algorithm to reduce data size. In this article, we present a fast and accurate algorithm to detect planes in unorganized point clouds using filtered normals and voxel growing. Our work is based on a first step that estimates better normals at the data points, even in the presence of noise. In a second step, we compute a score of local planarity at each point. We then select the best local seed plane and, in a third step, start a fast and robust region growing by voxels that we call voxel growing. We have evaluated and tested our algorithm on different kinds of point clouds and compared its performance to other algorithms.

1. Introduction
With the growing availability of 3D scanners, we are now able to produce large datasets with millions of points. It is necessary to reduce data size and decrease the noise while increasing the quality of the model. It is interesting to model the planar regions of these point clouds by planes. In fact, plane detection is generally a first step of segmentation, but it can be used for many applications. It is useful in computer graphics to model the environment with basic geometry. It is used, for example, in modeling to detect building facades before classification. Robots do Simultaneous Localization and Mapping (SLAM) by detecting planes of the environment. In our laboratory, we wanted to detect small and large building planes in point clouds of urban environments with millions of points, for modeling. As mentioned in [6], the accuracy of the plane detection is important for later steps of the modeling pipeline. We also want to be fast, to be able to process point clouds with millions of points. We present a novel algorithm based on region growing, with improvements in normal estimation and in the growing process. Our method is generic enough to work on different kinds of data, such as point clouds from fixed scanners or from Mobile Mapping Systems (MMS). We also aim at detecting building facades in urban point clouds, or small planes like doors, even in very large data sets. Our input is an unorganized noisy point cloud, and with only three "intuitive" parameters, we generate a set of connected components of planar regions. We evaluate our method as well as explain and analyse the significance of each parameter.
2. Previous Works
Although there are many methods of segmentation in range images, as in [10] or [3], three have been thoroughly studied for 3D point clouds: region growing, the Hough transform [14], and Random Sample Consensus (RANSAC) [9].

The application of recognising structures in urban laser point clouds is frequent in the literature. Bauer [4] and Boulaassal [5] detect facades in dense 3D point clouds with a RANSAC algorithm. Vosselman [23] reviews surface growing and 3D Hough transform techniques to detect geometric shapes. Tarsh-Kurdi [22] detects roof planes in 3D building point clouds by comparing results of the Hough transform and the RANSAC algorithm, finding RANSAC more efficient than the former. Chao Chen [6] and Yu [25] present segmentation algorithms in range images for the same application of detecting planar regions in an urban scene. The method in [6] is based on a region growing algorithm in range images and merges the results into one labelled 3D point cloud. [25] uses a method different from the three we have cited: they extract a hierarchical subdivision of the input image built like a graph, where leaf nodes represent planar regions.

There are also other methods, like Bayesian techniques. In [16] and [8], they obtain smoothed surfaces from noisy point clouds with objects modeled by probability distributions, and it seems possible to extend this idea to point cloud segmentation. But techniques based on Bayesian statistics need to optimize a global statistical model, and it is then difficult to process point clouds larger than one million points.

We present below an analysis of the two main methods used in the literature: RANSAC and region growing. The Hough transform algorithm is too time-consuming for our application. To compare the complexity of the algorithms, we take a point cloud of size N with only one plane P of size n. We suppose that we want to detect this plane P, and we define n_min as the minimum size of the planes we want to detect. The size of a plane is the area of the plane; if the data density is uniform in the point cloud, the size of a plane can be specified by its number of points.

2.1. RANSAC
RANSAC is an algorithm initially developed by Fischler and Bolles in [9] that allows the fitting of models without trying all possibilities. RANSAC is based on the probability of detecting a model using the minimal set required to estimate the model. To detect a plane with RANSAC, we choose 3 random points (enough to estimate a plane) and compute the plane parameters from these 3 points. A score function is then used to determine how good the model is for the remaining points. Usually, the score is the number of points belonging to the plane; with noise, a point belongs to a plane if the distance from the point to the plane is less than a parameter γ. In the end, we keep the plane with the best score. The probability of getting the plane in the first trial is $p = (n/N)^3$, so the probability of getting it in T trials is $p = 1 - (1 - (n/N)^3)^T$. Using equation (1) and supposing $n_{min}/N \ll 1$, we know the minimal number of trials $T_{min}$ needed to have a probability $p_t$ of getting planes of size at least $n_{min}$:

$$T_{min} = \frac{\log(1 - p_t)}{\log\left(1 - \left(\frac{n_{min}}{N}\right)^3\right)} \approx \log\left(\frac{1}{1 - p_t}\right)\left(\frac{N}{n_{min}}\right)^3 \tag{1}$$

For each trial, we test all data points to compute the score of a plane. The RANSAC algorithm complexity therefore lies in $O(N(N/n_{min})^3)$ when $n_{min}/N \ll 1$, and $T_{min} \to 0$ when $n_{min} \to N$. RANSAC is thus very efficient at detecting large planes in noisy point clouds, i.e. when the ratio $n_{min}/N$ is close to 1, but very slow at detecting small planes in large point clouds, i.e. when $n_{min}/N \ll 1$.
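A quick numeric check of equation (1); the exact form is used here, and the numbers are illustrative:

```python
import math

def ransac_min_trials(N, n_min, p_t=0.99):
    """Equation (1): minimal number of RANSAC trials to detect, with
    probability p_t, a plane of at least n_min points among N points."""
    return math.ceil(math.log(1 - p_t) / math.log(1 - (n_min / N) ** 3))

# A small plane in a large cloud needs vastly more trials than a large one:
print(ransac_min_trials(1_000_000, 10_000))   # n_min/N = 0.01 -> ~4.6 million
print(ransac_min_trials(1_000_000, 500_000))  # n_min/N = 0.5  -> 35
```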
After selecting the best model, another step is to extract the largest connected component of each plane. Connected components mean that the minimum distance between each point of the plane and the other points is smaller than a fixed distance parameter.

Schnabel et al. [20] bring two optimizations to RANSAC: the point selection is done locally, and the score function has been improved. An octree is first created from the point cloud. The points used to estimate plane parameters are chosen locally at a random depth of the octree. The score function is also different from RANSAC: instead of testing all points for one model, they test only a random subset and find the score by interpolation. The algorithm complexity lies in $O(N r \frac{4^d N}{n_{min}})$, where r is the number of random subsets for the score function and d is the maximum octree depth. Their algorithm improves the plane detection speed, but its complexity lies in O(N²) and it becomes slow on large data sets. And again, we have to extract the largest connected component of each plane.

2.2. Region Growing
Region growing algorithms work well in range images, as in [18]. The principle of region growing is to start with a seed region and to grow it by neighborhood when the neighbors satisfy some conditions. In range images, we have the neighbors of each point through pixel coordinates. In the case of unorganized 3D data, there is no information about the neighborhood in the data structure. The most common method to compute neighbors in 3D is to build a Kd-tree to search the k nearest neighbors. The creation of a Kd-tree lies in O(N log N), and the search of the k nearest neighbors of one point lies in O(log N). The advantage of these region growing methods is that they are fast when there are many planes to extract, robust to noise, and extract the largest connected component immediately. But they only use the distance from point to plane to extract planes and, as we will see later, that is not accurate enough to detect correct planar regions.

Rabbani et al. [19] developed a method of smooth area detection that can be used for plane detection. They first estimate the normal of each point as in [13]. The point with the minimum residual starts the region growing. They test the k nearest neighbors of the last point added: if the angle between the normal of the point and the current normal of the plane is smaller than a parameter α, they add this point to the smooth region. With a Kd-tree for the k nearest neighbors, the algorithm complexity is O(N + n log N).
The complexity seems to be low but, in the worst case, when $n/N \approx 1$ — for example facade detection in point clouds — the complexity becomes O(N log N).

3. Voxel Growing
3.1. Overview
In this article, we present a new algorithm adapted to large data sets of unorganized 3D points and optimized to be accurate and fast. Our plane detection method works in three steps. In the first part, we compute a better estimation of the normal at each point by a filtered weighted plane fitting. In a second step, we compute the score of local planarity at each point. We select the best seed point, which represents a good seed plane, and in the third part we grow this seed plane by adding all points close to the plane. The growing step is based on a voxel growing algorithm. The filtered normals, the score function and the voxel growing are innovative contributions of our method.

As an input, we need dense point clouds related to the level of detail we want to detect. As an output, we produce connected components of planes in the point cloud. This notion of connected components is linked to the data density. With our method, the connected components of planes detected are linked to the parameter d of the voxel grid.

Our method has 3 "intuitive" parameters: d, area_min and γ — "intuitive" because they are linked to physical measurements. d is the voxel size used in voxel growing and also represents the connectivity of points in detected planes. γ is the maximum distance between a point of a plane and the plane model; it represents the plane thickness and is linked to the point cloud noise. area_min represents the minimum area of planes we want to keep.

3.2. Details
3.2.1 Local Density of Point Clouds
In a first step, we compute the local density of the point cloud as in [17]. For that, we find the radius $r_i$ of the sphere containing the k nearest neighbors of point i, and then calculate $\rho_i = \frac{k}{\pi r_i^2}$. In our experiments, we find that k = 50 is a good number of neighbors. It is important to know the local density because many laser point clouds are acquired with a fixed-resolution-angle scanner and are therefore not evenly distributed. We use the local density in section 3.2.3 for the score calculation.

3.2.2 Filtered Normal Estimation
Normal estimation is an important part of our algorithm. The paper [7] presents and compares three normal estimation methods, concluding that the weighted plane fitting (WPF) is the fastest and most accurate for large point clouds. WPF is an idea of Pauly et al. in [17]: the fitting plane of a point p must take the nearby points into consideration more than the distant ones. The normal least squares, explained in [21], is the minimum of $\sum_{i=1}^{k}(n_p \cdot p_i + d)^2$. The WPF is the minimum of $\sum_{i=1}^{k}\omega_i (n_p \cdot p_i + d)^2$, where $\omega_i = \theta(\lVert p_i - p \rVert)$ and $\theta(r) = e^{-2r^2/r_i^2}$. To solve for $n_p$, we compute the eigenvector corresponding to the smallest eigenvalue of the weighted covariance matrix $C_w = \sum_{i=1}^{k}\omega_i \,{}^t(p_i - b_w)(p_i - b_w)$, where $b_w$ is the weighted barycenter. For the three methods explained in [7], we get a good approximation of normals in smooth areas, but we have errors at sharp corners. In Figure 1, we have tested the weighted normal estimation on two planes with uniform noise forming an angle of 90°; we can see that the normal is not correct at the corners of the planes and in the red circle.

To improve the normal calculation — which improves plane detection, especially at the borders of planes — we propose a filtering process in two phases. In a first step, we compute the weighted normals (WPF) of each point as described above, by minimizing $\sum_{i=1}^{k}\omega_i (n_p \cdot p_i + d)^2$. In a second step, we compute the filtered normal using an adaptive local neighborhood: we compute the new weighted normal with the same sum minimization but keep only the points of the neighborhood whose normals from the first step satisfy $|n_p \cdot n_i| > \cos(\alpha)$. With this filtering step, we have the same results in smooth areas and better results at sharp corners. We call our normal estimation filtered weighted plane fitting (FWPF).
[Figure 1: Weighted normal estimation of two planes with uniform noise and a 90° angle between them.]
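A numpy sketch of the two-pass FWPF estimator, assuming k = 50 as in section 3.2.1 and α = 30° as selected below; degenerate neighborhoods (fewer than three points kept after filtering) are not handled:

```python
import numpy as np
from scipy.spatial import cKDTree

def wpf_normal(p, nbrs, r_i):
    """Weighted plane fitting: smallest eigenvector of the weighted
    covariance of the neighbors, weights theta(r) = exp(-2 r^2 / r_i^2)."""
    w = np.exp(-2 * np.sum((nbrs - p) ** 2, axis=1) / r_i ** 2)
    b = (w[:, None] * nbrs).sum(0) / w.sum()   # weighted barycenter b_w
    d = nbrs - b
    C = (w[:, None, None] * np.einsum('ni,nj->nij', d, d)).sum(0)
    return np.linalg.eigh(C)[1][:, 0]          # eigenvector of smallest eigenvalue

def fwpf_normals(pts, k=50, alpha_deg=30.0):
    """Two-pass FWPF: WPF normals first, then refit each normal using only
    neighbors whose first-pass normals agree within alpha."""
    tree = cKDTree(pts)
    dist, idx = tree.query(pts, k=k)
    r = dist[:, -1]                            # radius of the k-NN sphere
    n1 = np.array([wpf_normal(pts[i], pts[idx[i]], r[i])
                   for i in range(len(pts))])
    cos_a = np.cos(np.radians(alpha_deg))
    n2 = np.empty_like(n1)
    for i in range(len(pts)):
        keep = np.abs(n1[idx[i]] @ n1[i]) > cos_a
        n2[i] = wpf_normal(pts[i], pts[idx[i]][keep], r[i])
    return n2
```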
We have tested our normal estimation by computing normals on synthetic data with two planes, with different angles between them and different values of the parameter α. Figure 2 shows the mean error in normal estimation for WPF and FWPF with α = 20°, 30°, 40° and 90° (using α = 90° is the same as not doing the filtering step). We see in Figure 2 that α = 20° gives smaller errors in normal estimation when the angle between the planes is smaller than 60°, and α = 30° gives the best results when the angle between the planes is greater than 60°. We consider α = 30° the best value because it gives the smallest mean error in normal estimation when the angle between the planes varies from 20° to 90°. Figure 3 shows the normals of the planes with a 90° angle and better results in the red circle (normals are at 90° to the plane).
[Figure 2: Comparison of mean error in normal estimation of two planes with α = 20°, 30°, 40° and 90° (= no filtering).]
[Figure 3: Filtered weighted normal estimation of two planes with uniform noise and a 90° angle between them (α = 30°).]

3.2.3 The score of local planarity
In many region growing algorithms, the criterion used for the score of the local fitting plane is the residual, as in [18] or [19], i.e. the sum of the squared distances from the points to the plane. We have a different score function to estimate local planarity. For that, we first compute the neighbors $N_i$ of a point p as the points i whose normals $n_i$ are close to the normal $n_p$; more precisely, $N_i = \{p \text{ in } k \text{ neighbors of } i \,/\, |n_i \cdot n_p| > \cos(\alpha)\}$. It is a way to keep only the points that are probably on the local plane before the least squares fitting. Then we compute the local plane fit of point p with the $N_i$ neighbors by least squares, as in [21]. The set $N_i'$ is the subset of $N_i$ of points belonging to the plane, i.e. the points whose distance to the local plane is smaller than the parameter γ (to account for the noise). The score s of the local plane is the area of the local plane, i.e. the number of points "in" the plane divided by the local density $\rho_i$ (see section 3.2.1): $s = \frac{\text{card}(N_i')}{\rho_i}$. We take the area of the local plane as the score function, and not the number of points or the residual, in order to be more robust to the sampling distribution.

3.2.4 Voxel decomposition
We use a data structure that is the core of our region growing method: a voxel grid that speeds up the plane detection process. Voxels are small cubes of side length d that partition the point cloud space. Every data point belongs to a voxel, and a voxel contains a list of points. We use the Octree Class Template in [2] to compute an octree of the point cloud; the leaf nodes of the graph built are voxels of size d. Once the voxel grid has been computed, we start the plane detection algorithm.
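The paper stores the voxels as octree leaves using [2]; an equivalent minimal structure, assumed here purely for illustration, is a hash map from integer voxel coordinates to point indices, which also gives the constant-time 26-neighbor lookup used in voxel growing:

```python
import numpy as np
from collections import defaultdict

def voxelize(pts, d):
    """Hash each point into a cubic voxel of side d; the grid maps a voxel
    index triple to the list of point indices it contains."""
    grid = defaultdict(list)
    keys = np.floor(pts / d).astype(int)
    for i, key in enumerate(map(tuple, keys)):
        grid[key].append(i)
    return grid

def neighbors_26(key):
    """The 26 voxels surrounding a voxel, enumerated in constant time."""
    x, y, z = key
    return [(x + i, y + j, z + k)
            for i in (-1, 0, 1) for j in (-1, 0, 1) for k in (-1, 0, 1)
            if (i, j, k) != (0, 0, 0)]
```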
3.2.5 Voxel Growing
With the estimator of local planarity, we take the point p with the best score, i.e. the point with the maximum area of local plane. We have the model parameters of this best seed plane, and we start with an empty set E of points belonging to the plane. The initial point p is in a voxel $v_0$. All the points in the initial voxel $v_0$ whose distance from the seed plane is less than γ are added to the set E. Then we compute new plane parameters by least squares refitting with the set E. Instead of growing with the k nearest neighbors, we grow with voxels: we test the points in the 26 neighboring voxels. This is a way to search the neighborhood in constant time instead of O(log N) per neighbor, as with a Kd-tree. In a neighboring voxel, we add to E the points whose distance to the current plane is smaller than γ and for which the angle between the normal computed at each point and the normal of the plane is smaller than a parameter α: $|\cos(n_p, n_P)| > \cos(\alpha)$, where $n_p$ is the normal of the point p and $n_P$ is the normal of the plane P. We have tested different values of α and empirically found that 30° is a good value for all point clouds. If we added at least one point to E for this voxel, we compute new plane parameters from E by least squares fitting and test its 26 neighboring voxels. It is important to perform the plane least squares fitting at each voxel addition, because with noise the seed plane model is not good enough to be used for all of the voxel growing, but only in the surrounding voxels. This growing process is faster than classical region growing because we do not compute least squares for each point added, but only for each voxel added.

The least squares fitting step must be computed very fast. We use the same method as explained in [18], with incremental update of the barycenter b and covariance matrix C as in equation (2). We know from [21] that the barycenter b belongs to the least squares plane and that the normal of the least squares plane $n_P$ is the eigenvector of the smallest eigenvalue of C:

$$b_0 = 0_{3\times1}, \qquad C_0 = 0_{3\times3},$$
$$b_{n+1} = \frac{1}{n+1}\left(n\,b_n + p_{n+1}\right),$$
$$C_{n+1} = C_n + \frac{n}{n+1}\,{}^t(p_{n+1} - b_n)(p_{n+1} - b_n) \tag{2}$$

where $C_n$ is the covariance matrix of a set of n points, $b_n$ is the barycenter vector of a set of n points, and $p_{n+1}$ is the (n+1)-th point vector added to the set.
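A direct transcription of equation (2), with the plane recovered as in [21] (normal = eigenvector of the smallest eigenvalue of C, plane through the barycenter); the class name is ours:

```python
import numpy as np

class IncrementalPlaneFit:
    """Equation (2): running barycenter and covariance, so the plane can be
    refit after each voxel without revisiting the points already in E."""
    def __init__(self):
        self.n, self.b, self.C = 0, np.zeros(3), np.zeros((3, 3))

    def add(self, p):
        d = p - self.b  # deviation from the barycenter *before* the update
        self.C += self.n / (self.n + 1) * np.outer(d, d)
        self.b = (self.n * self.b + p) / (self.n + 1)
        self.n += 1

    def plane(self):
        """Return (n_P, b): unit normal and a point on the least squares plane."""
        n_P = np.linalg.eigh(self.C)[1][:, 0]
        return n_P, self.b
```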
GT Regions are the mean number of ground truth planes over the30ground truth range images.Correct detection, over-segmentation,under-segmentation,missed and noise are the mean number of correct,over,under,missed and noised planes detected by methods.The tolerance80%is the minimum percentage of points we must have detected comparing to the ground truth to have a correct detection. More details are in[12].UE is a method from[12],UFPR is a method from[10]. It is important to notice that UE and UFPR are range image methods and our method is not well suited for range images but3D Point Cloud.Nevertheless,it is a good benchmark for comparison and we see in table1that the accuracy of our method is very close to the state of the art in range image segmentation.To evaluate the different improvements of our algorithm, we have tested different variants of our method.We have tested our method without normals(only with distance from points to plane),without voxel growing(with a classical region growing by k neighbors),without our FWPF nor-mal estimation(with WPF normal estimation),without our score function(with residual score function).The compari-son is visible on table2.We can see the difference of time computing between region growing and voxel growing.We have tested our algorithm with and without normals and we found that the accuracy cannot be achieved whithout normal computation.There is also a big difference in the correct de-tection between WPF and our FWPF normal estimation as we can see in thefigure4.Our FWPF normal brings a real improvement in border estimation of planes.Black points in thefigure are non classifiedpoints.Figure5.Correct Detection of our segmentation algorithm when the voxel size d changes.We would like to discuss the influence of parameters on our algorithm.We have three parameters:area min,which represents the minimum area of the plane we want to keep,γ,which represents the thickness of the plane(it is gener-aly closely tied to the noise in the point cloud and espe-cially the standard deviationσof the noise)and d,which is the minimum distance from a point to the rest of the plane. 
These three parameters depend on the point cloud features and the desired segmentation.For example,if we have a lot of noise,we must choose a highγvalue.If we want to detect only large planes,we set a large area min value.We also focus our analysis on the robustess of the voxel size d in our algorithm,i.e.the ratio of points vs voxels.We can see infigure5the variation of the correct detection when we change the value of d.The method seems to be robust when d is between4and10but the quality decreases when d is over10.It is due to the fact that for a large voxel size d,some planes from different objects are merged into one plane.GT Regions Correct Over-Under-Missed Noise Duration(in s)detection segmentation segmentationUE14.610.00.20.3 3.8 2.1-UFPR14.611.00.30.1 3.0 2.5-Our method14.610.90.20.1 3.30.7308Table1.Average results of different segmenters at80%compare tolerance.GT Regions Correct Over-Under-Missed Noise Duration(in s) Our method detection segmentation segmentationwithout normals14.6 5.670.10.19.4 6.570 without voxel growing14.610.70.20.1 3.40.8605 without FWPF14.69.30.20.1 5.0 1.9195 without our score function14.610.30.20.1 3.9 1.2308 with all improvements14.610.90.20.1 3.30.7308 Table2.Average results of variants of our segmenter at80%compare tolerance.4.1.1Large scale dataWe have tested our method on different kinds of data.We have segmented urban data infigure6from our Mobile Mapping System(MMS)described in[11].The mobile sys-tem generates10k pts/s with a density of50pts/m2and very noisy data(σ=0.3m).For this point cloud,we want to de-tect building facades.We have chosen area min=10m2, d=1m to have large connected components andγ=0.3m to cope with the noise.We have tested our method on point cloud from the Trim-ble VX scanner infigure7.It is a point cloud of size40k points with only20pts/m2with less noise because it is a fixed scanner(σ=0.2m).In that case,we also wanted to detect building facades and keep the same parameters ex-ceptγ=0.2m because we had less noise.We see infig-ure7that we have detected two facades.By setting a larger voxel size d value like d=10m,we detect only one plane. We choose d like area min andγaccording to the desired segmentation and to the level of detail we want to extract from the point cloud.We also tested our algorithm on the point cloud from the LEICA Cyrax scanner infigure8.This point cloud has been taken from AIM@SHAPE repository[1].It is a very dense point cloud from multiplefixed position of scanner with about400pts/m2and very little noise(σ=0.02m). In this case,we wanted to detect all the little planes to model the church in planar regions.That is why we have chosen d=0.2m,area min=1m2andγ=0.02m.Infigures6,7and8,we have,on the left,input point cloud and on the right,we only keep points detected in a plane(planes are in random colors).The red points in thesefigures are seed plane points.We can see in thesefig-ures that planes are very well detected even with high noise. 
Table3show the information on point clouds,results with number of planes detected and duration of the algorithm.The time includes the computation of the FWPF normalsof the point cloud.We can see in table3that our algo-rithm performs linearly in time with respect to the numberof points.The choice of parameters will have little influence on time computing.The computation time is about one mil-lisecond per point whatever the size of the point cloud(we used a PC with QuadCore Q9300and2Go of RAM).The algorithm has been implented using only one thread andin-core processing.Our goal is to compare the improve-ment of plane detection between classical region growing and our region growing with better normals for more ac-curate planes and voxel growing for faster detection.Our method seems to be compatible with out-of-core implemen-tation like described in[24]or in[15].MMS Street VX Street Church Size(points)398k42k7.6MMean Density50pts/m220pts/m2400pts/m2 Number of Planes202142Total Duration452s33s6900sTime/point 1ms 1ms 1msTable3.Results on different data.5.ConclusionIn this article,we have proposed a new method of plane detection that is fast and accurate even in presence of noise. We demonstrate its efficiency with different kinds of data and its speed in large data sets with millions of points.Our voxel growing method has a complexity of O(N)and it is able to detect large and small planes in very large data sets and can extract them directly in connected components.Figure 4.Ground truth,Our Segmentation without and with filterednormals.Figure 6.Planes detection in street point cloud generated by MMS (d =1m,area min =10m 2,γ=0.3m ).References[1]Aim@shape repository /.6[2]Octree class template /code/octree.html.4[3] A.Bab-Hadiashar and N.Gheissari.Range image segmen-tation using surface selection criterion.2006.IEEE Trans-actions on Image Processing.1[4]J.Bauer,K.Karner,K.Schindler,A.Klaus,and C.Zach.Segmentation of building models from dense 3d point-clouds.2003.Workshop of the Austrian Association for Pattern Recognition.1[5]H.Boulaassal,ndes,P.Grussenmeyer,and F.Tarsha-Kurdi.Automatic segmentation of building facades using terrestrial laser data.2007.ISPRS Workshop on Laser Scan-ning.1[6] C.C.Chen and I.Stamos.Range image segmentationfor modeling and object detection in urban scenes.2007.3DIM2007.1[7]T.K.Dey,G.Li,and J.Sun.Normal estimation for pointclouds:A comparison study for a voronoi based method.2005.Eurographics on Symposium on Point-Based Graph-ics.3[8]J.R.Diebel,S.Thrun,and M.Brunig.A bayesian methodfor probable surface reconstruction and decimation.2006.ACM Transactions on Graphics (TOG).1[9]M.A.Fischler and R.C.Bolles.Random sample consen-sus:A paradigm for model fitting with applications to image analysis and automated munications of the ACM.1,2[10]P.F.U.Gotardo,O.R.P.Bellon,and L.Silva.Range imagesegmentation by surface extraction using an improved robust estimator.2003.Proceedings of Computer Vision and Pat-tern Recognition.1,5[11] F.Goulette,F.Nashashibi,I.Abuhadrous,S.Ammoun,andurgeau.An integrated on-board laser range sensing sys-tem for on-the-way city and road modelling.2007.Interna-tional Archives of the Photogrammetry,Remote Sensing and Spacial Information Sciences.6[12] A.Hoover,G.Jean-Baptiste,and al.An experimental com-parison of range image segmentation algorithms.1996.IEEE Transactions on Pattern Analysis and Machine Intelligence.5[13]H.Hoppe,T.DeRose,T.Duchamp,J.McDonald,andW.Stuetzle.Surface reconstruction from unorganized points.1992.International Conference on 
Computer Graphics and Interactive Techniques.2[14]P.Hough.Method and means for recognizing complex pat-terns.1962.In US Patent.1[15]M.Isenburg,P.Lindstrom,S.Gumhold,and J.Snoeyink.Large mesh simplification using processing sequences.2003.。

Use of Automation in Sensor Readout ASIC Chip Characterization to Improve Test Yield, Coverage and Man-Hour Reduction


Use of Automation in Sensor Readout ASIC Chip Characterization to Improve Test Yield, Coverage and Man-Hour Reduction
Noor Shelida Salleh, Siti Noor Harun and Tan Kong Yew
IC Design Lab, Nano Semiconductor Technology, MIMOS Berhad, Technology Park Malaysia, 57000 Kuala Lumpur, Malaysia
International Conference on Test, Measurement and Computational Method (TMCM 2015)

Abstract — An approach for efficient sensor readout ASIC chip characterization is presented in this paper. The use of automation results in better test yield and coverage, in addition to man-hour reduction. To perform this, the hardware setup has been integrated from laboratory bench instrumentation, all of it controlled by software written in the VEE PRO graphical programming language.

Keywords — ion sensing electrode; ion sensing field effect transistor; moisture sensor

I. INTRODUCTION
The trend in chip design verification grows with each new product, so device characterization tests have increased exponentially and created bottlenecks. If there is only one function (herein designated "A"), only "A" needs to be tested. If there are two functions, "A" and "B", the test sets become "A", "B", "AB" and "BA"; three functions or more increase the test sets dramatically. Automated software testing is therefore the best way to increase the effectiveness of resources and time: tests are faster, they are consistent, and they can be run over and over again with less overhead [2].

This paper elaborates automation setups and methods that replace manual test procedures so that design bottlenecks can be eliminated and device characterization improved in terms of test yield, coverage and man-hour reduction. In achieving the stated objective, it is necessary to develop a framework that attains and integrates commonality, maintainability and reusability [3].

Normally, the engineer manually sets up the power supplies and device stimulus, triggers the DUT (device under test), then records multimeter measurements by hand. This process is tedious and time-consuming, and it lends itself to errors. In essence, a typical validation procedure requires the use of three or four different instruments, which results in the need for different protocols to control them. The key goal of this paper is therefore to improve the test methodologies by automating the test techniques. The test project should use easy, flexible, modular, interchangeable components. Whenever feasible, the interface screen should match the actual test measurement instruments as closely as possible. The programming language used is VEE PRO, as most of the bench equipment is from Agilent. In the sections that follow, the ASIC chip to be tested (the DUT) is first explained, followed by the hardware setups necessary to implement the automated testing. Next, the software code and routines that drive the bench equipment are discussed. Finally, the conclusion surveys the benefits gained from such an exercise.

II. CHIP ARCHITECTURE
The ASIC to be tested is designed in-house. It provides the following sensor readout circuits:
• Three channels of individual ion sensing electrode (ISE) sensor readout circuit
• Two ion sensing field effect transistor (ISFET) readout circuits, of which either one (mutually exclusive) is used to read out signals from the ISFET sensor
• One moisture sensor readout circuit
[Figure 1: ASIC chip design architecture.]

Figure 1 shows the chip design architecture. At any instance in time, only one of these blocks is selected through a multiplexer for connection to an analog-to-digital converter (ADC) through an external low-pass filter. To verify the automation setups, each of these readout circuits has been separately tested manually, including the ISFET TYPE A, ISFET TYPE B, ISE1, ISE2, ISE3, MOISTURE and ADC tests. The results obtained from the manual tests are used as a reference to verify the performance of the automation routines and their results.
At any instant in time, only one of these blocks is selected through a multiplexer for connection to an Analog-to-Digital Converter (ADC) through an external low pass filter. To verify the automation setups, each of these readout circuits was first tested manually, including the ISFET TYPE A, ISFET TYPE B, ISE1, ISE2, ISE3, MOISTURE and ADC tests. The results obtained from the manual tests are used as the reference to verify the performance of the automation routines and their results.

III. EQUIPMENT & SOFTWARE
Performing all of this testing, in either manual or automated mode, involves some high-end instruments coupled with software, as follows:
∙ Agilent E3631A Triple Output DC Power Supply
∙ Agilent 34970A Data Acquisition unit (3 slots)
∙ Slot 1: Agilent 34903A 20 Channel Actuator/General Purpose Switch (ACT)
∙ Slot 2: Agilent 34901A 20 Channel Multiplexer (MUL)
∙ Test DUT PCB board
∙ National Instruments GPIB/GPIB interface
∙ Agilent 82357B USB/GPIB interface
∙ Agilent VEE PRO software
∙ Agilent Connection Expert software

The main instruments and software in this automation are the Agilent 34903A 20 Channel Actuator/General Purpose Switch, the Agilent 34901A 20 Channel Multiplexer, and the Agilent VEE PRO software.

Agilent VEE PRO is a graphical programming language designed specifically for test and measurement applications. With this software, instruments can easily communicate with each other and automation is possible; the switches can be programmed on/off according to the required test.

The Agilent 34903A 20 Channel Actuator/General Purpose Switch has 20 independent single-pole, double-throw (SPDT) relays and is used to cycle power to products under test, control indicator and status lights, and actuate external power relays and solenoids. Combining it with matrix and multiplexer modules allows a custom switch system to be built. In automated mode, it acts as the switches used to sequence the tests. From one test to another, different pins are used, so the correct pins must be chosen for a particular test by closing or opening the appropriate switches.

The Agilent 34901A 20 Channel Multiplexer is the most versatile multiplexer for general purpose scanning. It acts as the digital multimeter in the automation.

IV. SET-UP CONFIGURATION
As shown in Figure 2, USB/GPIB and GPIB/GPIB cables are used for communication between the computer and the instruments and between the instruments themselves. As mentioned earlier, the Agilent 34903A 20 Channel Actuator/General Purpose Switch is used extensively in the automation procedure for setting up the different test configurations. In manual testing, different tests have different pin set-up configurations, so when changing from one test to another the pin connections differ for each test setup. Changing the set-up takes time, and errors are possible, especially when it is done manually.

In automated mode, however, all the pins required for these tests are connected to the 20 Channel Actuator/General Purpose Switch, so when performing a particular test the required pin selection, i.e. the closure of particular switches, is done through software written in Agilent VEE PRO. Similar reasoning applies to the 20 Channel Multiplexer: all the output pins are connected to it, so reading the output value from a particular pin only requires closing the corresponding switch, which can also be done in VEE PRO. This is a convenient instrument when a few values need to be read from different pins: instead of using a multi-meter to read them one by one, all the values can be read automatically one after another. Therefore, with these two instruments, setup and measurement time is saved when changing from one test to another; a sketch of this switch-and-measure sequence is shown below.
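The paper's programs are graphical VEE PRO routines and cannot be reproduced as text, so the following is only an equivalent sketch in Python with PyVISA. The GPIB addresses and channel assignments are assumptions for illustration, not values from the paper; the SCPI commands are the standard E3631A and 34970A command sets.

```python
# Equivalent of the VEE PRO switch-and-measure sequence, sketched with PyVISA.
# GPIB addresses and channel numbers below are illustrative assumptions.
import pyvisa

rm = pyvisa.ResourceManager()
psu = rm.open_resource("GPIB0::5::INSTR")    # Agilent E3631A power supply
daq = rm.open_resource("GPIB0::9::INSTR")    # Agilent 34970A mainframe

# Bias the DUT: 3.3 V with a 100 mA current limit on the +6 V output.
psu.write("APPL P6V, 3.3, 0.1")
psu.write("OUTP ON")

ISE1_ACTUATOR_CHANNELS = "101,102"   # 34903A relays (slot 1) routing DUT pins for the ISE1 test
ISE1_OUTPUT_CHANNEL = "205"          # 34901A multiplexer channel (slot 2) reading the output pin

# Close the relays that configure the DUT pins for this test ...
daq.write(f"ROUT:CLOS (@{ISE1_ACTUATOR_CHANNELS})")
# ... read the DC voltage at the readout pin through the multiplexer ...
volts = float(daq.query(f"MEAS:VOLT:DC? (@{ISE1_OUTPUT_CHANNEL})"))
print(f"ISE1 output: {volts:.4f} V")
# ... and open the relays again so the next test starts from a known state.
daq.write(f"ROUT:OPEN (@{ISE1_ACTUATOR_CHANNELS})")
```

Sequencing the remaining tests is then just a matter of repeating this pattern with each test's own channel list, which is exactly what the VEE PRO program automates.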
After setting up all the hardware and software, the program written in VEE PRO is executed. All the results are exported and stored in Microsoft Excel format on the computer.

FIGURE II. SETUP CONFIGURATION FOR TEST AUTOMATION

V. FLOW CHART AND VEE PRO PROGRAM
Since all the tests are composed of the same hardware and software, the only difference between them is the set of channels used in the Agilent 34903A 20 Channel Actuator (which acts as the switches that change between tests). The flow charts for the tests are therefore almost identical; only the switches involved differ. Figure 3 shows the flow chart for the ISE1 test as an example.

The programs written in Agilent's VEE PRO software are based on the flow chart. The commands used in VEE PRO to communicate with the instruments are SCPI commands. Figures 4 and 5 show the VEE PRO program for the ISE1 test and the complete program for all the tests, which include ISE1, ISE2, ISE3, ISFET TYPE A and ISFET TYPE B.

Figure 5 shows the complete program for the whole automation process, combining all the ISE and ISFET tests. In the figure, the three gray icons contain the ISE tests while the pink icons contain the ISFET tests. For example, clicking the ISE1 icon opens the program of Figure 4. The Figure 5 program chains all of the tests so that they run automatically one after another, with the final results exported and stored in Excel format on the computer.

This automation gives a much faster testing time than the conventional method. From Table I below, the complete test for one chip takes 2.4 hours manually, compared to about 4 minutes when automated. Software testing can account for up to 80% of the total cost of software development [6], and this cost can be reduced if the testing process is automated.

FIGURE III. FLOW CHART FOR ISE1 TEST

TABLE I. MAN-HOUR COMPARISON
Total chips tested | Manual     | Automation | Total hours saved
623                | 1495 hours | 40 hours   | 1455 hours

VI. ADC ARCHITECTURE
The automation code does not include the ADC and serializer blocks, because characterization of the ADC and serializer involves AC tests as well as DC tests and so does not fit the hardware setup described earlier, which is for DC tests only. However, the ADC and serializer are characterized separately and confirmed working before the setup is used to test the functionality of the entire ASIC. Table II presents the test results for the ADC.

To perform the functionality test, a voltage input was applied at the input port of each ROIC. The input passes through the multiplexer, ADC and serializer, and the digital output in binary format is then converted to an analog value by calculation. The test results for several chips are tabulated in Tables III and IV.

FIGURE IV. VEE PRO PROGRAM FOR ISE N+ TEST
FIGURE V. COMPLETE PROGRAM FOR ALL ROIC
TABLE II. TEST RESULT FOR ADC BLOCK
Chip No. | Input setup | Serializer output (binary) | Output (V)
93       | 0 V         | 0000100110                 | 0.037
93       | 0.5 V       | 0110110001                 | 0.422
93       | 1 V         | 1110111001                 | 0.930
51       | 0 V         | 0000000010                 | 1.94 mV
51       | 0.5 V       | 0110110101                 | 0.426
51       | 1 V         | 1110111110                 | 0.935
86       | 0 V         | 0000011011                 | 0.026
86       | 0.5 V       | 010*******                 | 0.307
86       | 1 V         | 1010011111                 | 0.655
C19      | 0 V         | 0000100011                 | 0.034
C19      | 0.5 V       | 0111001010                 | 0.447
C19      | 1 V         | 1111011010                 | 0.962

TABLE III. TEST RESULT FOR ISFET-REFET ROIC
Chip No. | Input setup | Serializer output (binary) | Output (V)
93       | 0.515 V     | 0111000101                 | 0.442
51       | 0.519 V     | 0111001010                 | 0.447
86       | 0.423 V     | 010*******                 | 0.327
C19      | 0.513 V     | 0111010110                 | 0.458

TABLE IV. TEST RESULT FOR ISE ROIC
Chip No. | Input setup | Serializer output (binary) | Output (V)
93       | 0 V         | 0000100111                 | 0.038
93       | 0.5 V       | 010*******                 | 0.372
93       | 1 V         | 1110000000                 | 0.875

VII. CONCLUSION
A VEE PRO based software for ASIC chip test automation has been presented in this paper. Currently the system is only semi-automated, since mechanical handling is still required during part changes and the ADC and serializer block tests are still performed manually. Even with the present setup, however, significant gains in efficiency and man-hour reduction have been achieved. This is especially evident when a large batch of prototype chips needs to be characterized.

REFERENCES
[1] SCPI Command Reference, Volume 2.
[2] /litweb/pdf/E2120-90011.pdf
[3] /litweb/pdf/N5102-90001A.pdf
[4] /litweb/pdf/34970-90002.pdf
[5] /agilent/redirector.jspx?action=ref&cname=AGILENT_EDITORIAL&ckey=1887740&cc=US&lc=eng
[6] B. Beizer, Software Testing Techniques, 2nd edition, Van Nostrand Reinhold, 1990.

Power Your Titration Bench with LabX® Laboratory Software

Connect Your Lab

Single Software for Many Laboratory Instruments
The new LabX® from METTLER TOLEDO offers a great new possibility to connect your titrators, balances, density meters and refractometers, melting point instruments and Quantos systems, all with one single software. This uniform software and interface means less training and also helps to increase your efficiency. With a unique approach, LabX allows you to work from either the instrument or the PC or both and, if required, with complete compliance.

Cost Effective
Reducing redundant software for different instruments helps you save time and money spent on maintenance and support. Use LabX to increase the efficiency of your workflow and eliminate waste by using LabX for your paperless lab documentation. Integrate LabX with 3rd-party software such as LIMS, ERP, etc. for a fully integrated process.

Workflow Optimization
LabX offers many tools to help you optimize your workflow in the laboratory and increase efficiency. Connect your METTLER TOLEDO instruments to LabX and benefit from simplified operation, efficient method editing with the graphical layout, fast sample series modifications and tailored reports.

Less Training
Less training also means less time and money spent educating your workforce. LabX offers the same concept of use for all instruments. Beyond that, users will recognize similar workflow concepts throughout the instrument terminals for fast adaptation to new instruments and seamless integration.

Central Management
LabX offers you all the convenience of centrally managed software. With one database, all users have access to the same data, which can be restricted appropriately according to your user management. Create your SOP methods once and all the instruments from different laboratories will always have access to the latest version.

Work from PC or Titrator or Both — For Flexible Operation

Methods & Series
All methods and series are stored in one database, so the methods and sample series that you create in LabX or at the instrument are always available wherever you are. Beyond that, the titrator and workbench intelligently display only those methods and sample series that can be run by the selected titrator model.

Workbench
Each instrument connected to LabX has its own workbench correlating to that instrument. It offers all the components needed to run your daily tasks and the tools to monitor your sample series and results. Use the statistics view for quick and easy inspection of results to see if they are within established limits.

Flexible Operation
LabX offers you full flexibility for the way you want to work. Either start your analysis at the instrument or from the PC and have constant access to both your current sample analysis and latest results. No matter where you are, use LabX Mail to receive a tailored message with the latest results, when a sample changer is ready to test new samples, or when results are out of specification.

LabX anticipates the different demands of users and the way they like to work, and offers full flexibility wherever you work. For example: start your sample series at the titrator and move to other daily tasks, then open the workbench from your office and monitor the same series you previously started.

Full Flexibility

Results
Have access to your results from the titrator or your PC. Use search folders to structure results from different analyses.
LabX offers you a sample-based and a series-based view of your results to match your needs. Each series offers the statistics tab for a quick and simple overview of the results.

Automation
Use LabX to simplify sample changer operation with the transparent and easy-to-manage sample series tables. Track the state of your series from any LabX-connected PC in your laboratory. Alter your series by adding or inserting urgent samples while operating.

LabX Mail
Whether you are on the go or currently not located next to your titrator or LabX network, use LabX Mail to have messages or results sent to you. Increase your efficiency by receiving tailored messages, e.g. "Results out of specifications" or "Ready for more samples".

Execute Your Daily Tasks with One Click®

One Click is a well-recognized concept of METTLER TOLEDO instruments. LabX offers the same concept and helps to easily distribute shortcuts for different users over various instruments. To facilitate your workflow, LabX offers the task scheduler and SmartCodes™ to automate your analyses, and SmartSample™ for secure and efficient titration weighing.

Schedule Tasks
Use the Scheduler to automatically start tasks, e.g. begin Karl Fischer pretitrations just before coming into the lab for the day, or schedule LabX to do the daily sensor calibration for you. Periodically take samples of production-line baths to observe the correct ion concentrations.

Shortcuts
Execute your tasks with One Click. Simply create a shortcut to methods, sample series, series sequences, tasks or manual operations, e.g. drain your beaker to waste, and have direct access to your most common analyses. All the shortcuts you create at the workbench are also displayed at the titrator and vice versa.

Simple Weighing
Excellence Balances connected to LabX have a dedicated entry point for titration tasks that require weighing. Simply pick the task and the balance will guide you through the samples. Start your titration and add more sample weights as you're testing for improved productivity.

SmartSample™
Remove transcription and ordering errors as well as improve efficiency with the new SmartSample RFID technology workflow. Identify and weigh your titration samples at an Excellence Analytical Balance with the SmartSample kit. Enter the sample identification (ID) at the balance to make sure the ID and weight are properly assigned to the physical sample.

SmartCodes™
Fully automate the selection of the correct method and the transcription of sample IDs with a barcode or SmartSample RFID tag, eliminating sample order errors by reading the sample information as you test it and ensuring the right method is selected for each product. Use the automated 1D and 2D barcode reader with InMotion™ Autosamplers.

Traceable and Compliant, Ensured with LabX®

Work from the instrument or PC and LabX will ensure that you are fully compliant with regulatory standards such as FDA 21 CFR Part 11. Use the services offered by METTLER TOLEDO for software verification (IPac) or full validation of your system with the Validation Manuals I & II. A sophisticated user management, electronic signatures and a release process for methods and reports are additional handy features of LabX that support your daily reporting.

User Management
Centrally set up the user management with users and assign rights to roles. Instruments connected to LabX have the same user management, i.e.
you use the same login at the instrument and the PC, offering full traceability whether you work at the instrument or in LabX. Enforce your own account policies to meet company or regulatory guidelines and standards.

IPac
The LabX Initial Software verification Package (IPac) is a service product of METTLER TOLEDO to ensure successful software installation and proper operation at the installation site. Use the IPac to meet your standards for quality management and documentation. IPac contents are:
– Installation Qualification (IQ)
– Operational Qualification (OQ)
– General System Suitability Test

Validation Manual I and II
For software validation in accordance with regulatory bodies such as the FDA, METTLER TOLEDO offers onsite validation support with the Validation Manuals I and II. Validation Manual I offers the necessary documentation for vendor qualification, and Validation Manual II covers the validation of the software, including installation and operational qualification (IQ & OQ).

Regulatory Compliance
The Regulation option of LabX Server provides all the necessary tools to meet the FDA regulation (21 CFR Part 11) for data management and storage. All relevant actions taken at the instrument or PC are recorded in the audit trail of LabX for full traceability and flexibility, no matter where you work.

Propose & Release
To differentiate between development versions and the final version of methods, report templates and other objects can be approved and then released. Use the flexible views at the titrator and in LabX to display only the released method for daily use, or switch to the latest amended method for any further modifications needed.

Electronic Signature
Define your electronic signature policies according to your standards. For example, make sure that new methods follow your own review and approval process before they are released for your daily applications in the lab. Reviewed objects such as methods, results or reports are protected from any alteration.

Build the System That Perfectly Matches Your Needs — A Modular Approach
LabX® offers a modular licensing approach to meet your needs today and in the future. Choose the starter pack that suits your requirements and add optional licenses to match your customized needs. Integrate additional instruments into your LabX system by simply activating the relevant instrument license key in LabX.

Titration Starter Packs (included and optional licenses)
Login to all your instruments and PCs on your LabX net-work with the same username and password or fingerprint via the LogStraight™ fingerprint reader.OptionsName DescriptionPart bX IPacLabX Initial Software verification Package (IPac) ensures successful software installation51710898LabX Validation Manual I All information necessary for the qualification of METTLER TOLEDO as the software supplier (vendor qualification)30003640LabX Validation Manual II Starterpack TitrationAll instructions and forms necessary for the validation of the software (IQ, OQ).30097758LabX ® Servicesfor efficient LabX system qualification and validation/LabXTitrationMultiparameter Analysis with One Click ®Today’s quality control and testing labs have a multitude of tests to be per-formed on a single sample and often have multiple samples to run at the same time. Combining tests with a multiparameter system can op-timize accuracy and reproducibility while saving valuable time and en-suring proper correlation of results to sample identification.Benefits of a Multiparameter Analysis System:• Efficient high sample throughput• Robust system layout and smart security checks for reliable unattended operation • Simple and ergonomic operation – pleasant to work with, short learning period • Optimized for fast, accurate and repeatable tests, and cleaning • Secure data handling thanks to our powerful LabX ® software• Onsite METTLER TOLEDO service installation, qualification and training See what METTLER TOLEDO can do to optimize your lab with a Multipa-rameter Analysis system: /titration-multiparameterFor more informationMettler-Toledo AG, Analytical CH-8603 Schwerzenbach, Schweiz Tel. +41 44 806 77 11Fax +41 44 806 72 40Subject to technical changes© 10/2013 Mettler-Toledo AG, 30100354Marketing Titration / MarCom AnalyticalQuality certificate. Development, production and testing according to ISO 9001.Environmental management system according to ISO 14001.“European conformity”. The CE conformity mark provides you with the assurance that our products comply with the EU directives.One Click is a Registered Trademark of METTLER TOLEDO in Switzerland, the European Union, Russia and bX is a Registered Trademark of METTLER TOLEDO inSwitzerland, USA, China, Germany and a further 13 countries.。

Research on Cooperative Office Systems Based on Graphical and User-Defined Workflows

SHI Guo-dong, WANG Hai-tao, NIE Xiao-gai
(Institute of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China)

Abstract: To achieve concise and efficient process control in cooperative office work, this paper takes the workflow processing mechanism, framework design and database model as its main line and applies graphical and user-defined workflow design to a cooperative office system. Users can monitor workflows and use custom forms, countersigning, workflow preprocessing, availability-scope control, approval authorization and other functions; the graphical and user-defined workflow capabilities are ultimately realized through a management platform. This design approach, based on graphical and user-defined workflows, is an effective solution for making cooperative office work highly controllable and easy to use.

Keywords: cooperative office; graphical workflow; user-defined workflow; workflow engine

CLC number: TP311.52. Document code: A.

Received: xxxx-xx-xx. Funding: Yunnan Provincial General Fund Project, grant No. kksa201003002. About the authors: SHI Guo-dong (1983–), male, Han, from Zhoukou, Henan; M.S. candidate in computer application technology; research interest: software engineering; e-mail: guodongshi525@. Corresponding author: WANG Hai-tao, female, associate professor, master's supervisor. NIE Xiao-gai, female, M.S. candidate, Kunming University of Science and Technology.

0. Introduction
The goal of building a cooperative office system is to provide users with a collaborative platform that integrates internal communication, information publishing, collaborative work, knowledge management and office-support functions [1].
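The paper's management platform itself is not reproduced here. The following is a hypothetical minimal sketch, in Python, of how a user-defined workflow with role-based approval might be represented; the node names, roles and approval chain are invented for illustration and are not taken from the paper.

```python
# Hypothetical minimal model of a user-defined workflow (illustration only).
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    role: str                                  # role allowed to act here (availability-scope control)
    next_nodes: list = field(default_factory=list)

@dataclass
class WorkflowInstance:
    current: Node
    history: list = field(default_factory=list)

    def advance(self, actor_role: str, choice: int = 0) -> None:
        # Approval authorization: only the role bound to the step may advance it.
        if actor_role != self.current.role:
            raise PermissionError(f"{actor_role} may not act on '{self.current.name}'")
        self.history.append(self.current.name)  # audit trail for workflow monitoring
        if self.current.next_nodes:
            self.current = self.current.next_nodes[choice]

# A user could "draw" this graph in a graphical designer: draft -> countersign -> approve.
approve = Node("approve", role="manager")
countersign = Node("countersign", role="department head", next_nodes=[approve])
draft = Node("draft", role="clerk", next_nodes=[countersign])

flow = WorkflowInstance(current=draft)
flow.advance("clerk")
flow.advance("department head")
print(flow.current.name, flow.history)         # approve ['draft', 'countersign']
```

In such a design, the graphical editor only needs to serialize the node graph (e.g. to the database model the paper describes), and the engine's `advance` step enforces the user-defined routing and permissions at run time.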

QGraphicsScene: A Powerful Tool for Managing and Displaying Graphical Content

Introduction
In the field of computer graphics, efficiently managing and displaying graphical content is essential for creating engaging and interactive applications. QGraphicsScene, a class provided by the Qt framework, offers a powerful solution for managing graphical items such as shapes, images, and text. This document explores the features and functionality of QGraphicsScene, highlighting its importance in developing visually rich applications.

Overview of QGraphicsScene
QGraphicsScene is a fundamental part of the Qt Graphics View framework, which provides a high-level API for building 2D graphical user interfaces. It serves as a container for managing graphical items, enabling efficient rendering and handling of user interactions. With QGraphicsScene, developers can create and manipulate a variety of graphical elements, organize them in a hierarchical manner, and handle user events such as mouse clicks and key presses.

Key Features of QGraphicsScene
1. Hierarchical Organization: QGraphicsScene allows for the hierarchical organization of graphical items, creating a tree-like structure. This feature is particularly useful when dealing with complex scenes that require grouping of items or defining parent-child relationships. By organizing items hierarchically, developers can easily manipulate and transform sets of elements as a single entity.

2. Item Model: QGraphicsScene provides an item-based model for managing graphical items. Each item is represented by the QGraphicsItem class, which serves as the base class for various types of items such as QGraphicsRectItem, QGraphicsEllipseItem, QGraphicsPixmapItem, and QGraphicsTextItem. This model allows developers to add, remove, and modify graphical items dynamically at runtime, providing flexibility in creating interactive scenes.

3. Efficient Rendering: QGraphicsScene optimizes rendering by only updating the items that have changed or need to be redrawn. This optimization reduces the processing power required for rendering, allowing for smoother animations and improved performance. Additionally, QGraphicsScene provides methods for clipping and caching items, further enhancing rendering efficiency.

4. Collision Detection: QGraphicsScene offers built-in collision detection mechanisms, enabling developers to detect and respond to collisions between different items in the scene. This feature is particularly useful in scenarios where objects need to interact with each other, enabling the development of games, simulations, and other interactive applications.

5. Event Handling: QGraphicsScene provides a comprehensive event-handling system that allows for the handling of various user-generated events. Developers can easily define custom event handlers for mouse clicks, key presses, and other user interactions. This flexibility enables the creation of highly interactive applications with rich user experiences.

Applications of QGraphicsScene
QGraphicsScene finds diverse applications in a wide range of domains, some of which include:

1. Graphical Editors: QGraphicsScene provides a foundation for creating graphical editing applications such as vector graphics editors, diagramming tools, and flowchart designers. Its hierarchical organization and item model make it easy to manipulate and transform graphical elements, enabling users to create, edit, and arrange complex designs.

2. Data Visualization: QGraphicsScene is well suited for displaying and visualizing large datasets.
Developers can represent data points as graphical items in a scene and use various visualization techniques to display data relationships and patterns. This capability makes it useful in applications such as scientific visualization, financial analysis, and geographic information systems.

3. Interactive Games: QGraphicsScene's efficient rendering and collision detection make it suitable for developing interactive games. Game developers can define game objects as graphical items and leverage QGraphicsScene's event-handling capabilities to create engaging and immersive gaming experiences.

4. User Interfaces: With its ability to manage graphical elements and handle user interactions, QGraphicsScene serves as a powerful tool for creating visually appealing and interactive user interfaces. Developers can use QGraphicsScene to build custom widgets, design complex layouts, and handle user input effectively.

Conclusion
QGraphicsScene offers a powerful and flexible solution for managing and displaying graphical content in Qt applications. Its hierarchical organization, item model, efficient rendering, collision detection, and event-handling capabilities make it an indispensable tool for developing visually rich and interactive applications. By leveraging the features provided by QGraphicsScene, developers can enhance the user experience, create engaging interfaces, and build applications that stand out in terms of visual appeal and functionality.
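As a minimal sketch of the scene/view workflow described above, the following example uses the PyQt5 bindings (an assumption; the original text does not name a binding, and the item sizes and flags are illustrative). It builds a scene, adds a movable rectangle and a text item, and shows everything through a QGraphicsView.

```python
# Minimal QGraphicsScene demo: create a scene, add items, render via a view.
import sys
from PyQt5.QtWidgets import QApplication, QGraphicsScene, QGraphicsView
from PyQt5.QtCore import QRectF, Qt
from PyQt5.QtGui import QBrush

app = QApplication(sys.argv)

scene = QGraphicsScene(QRectF(0, 0, 400, 300))   # the item container

# Items are created by the scene; flags make them interactive.
rect = scene.addRect(QRectF(20, 20, 120, 60), brush=QBrush(Qt.cyan))
rect.setFlag(rect.ItemIsMovable)                 # user can drag it with the mouse
label = scene.addText("Hello, Graphics View")
label.setPos(20, 100)

# The view renders the scene and routes mouse/key events to the items.
view = QGraphicsView(scene)
view.setWindowTitle("QGraphicsScene demo")
view.show()
sys.exit(app.exec_())
```

The same structure scales to the editor and game use cases above: custom QGraphicsItem subclasses implement their own painting and event handlers, while the scene keeps the item hierarchy and dispatches events.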

Graph Regularized Nonnegative Matrix Factorization for Data Representation

1 INTRODUCTION

The techniques for matrix factorization have become popular in recent years for data representation. In many problems in information retrieval, computer vision, and pattern recognition, the input data matrix is of very high dimension. This makes learning from examples infeasible [15]. One then hopes to find two or more lower-dimensional matrices whose product provides a good approximation to the original one. The canonical matrix factorization techniques include LU decomposition, QR decomposition, vector quantization, and Singular Value Decomposition (SVD). SVD is one of the most frequently used matrix factorization techniques. A singular value decomposition of an M × N matrix X has the following form:

X = U Σ Vᵀ,

where U is an M × M orthogonal matrix, V is an N × N orthogonal matrix, and Σ is an M × N diagonal matrix with Σ_ij = 0 if i ≠ j and Σ_ii ≥ 0. The quantities Σ_ii are called the singular values of X, and the columns of U and V are called the left and right singular vectors of X, respectively.
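As a quick numerical check of these definitions (an illustrative sketch; the data matrix is random, not from the paper), NumPy's SVD routine reproduces the factorization and the low-rank approximation that factorization methods exploit:

```python
# Sketch of the SVD defined above, using NumPy on random stand-in data.
import numpy as np

M, N = 6, 4
X = np.random.rand(M, N)

# full_matrices=True gives U (M x M) and V^T (N x N); s holds the singular values.
U, s, Vt = np.linalg.svd(X, full_matrices=True)

# Rebuild the M x N diagonal Sigma and verify X = U @ Sigma @ V^T.
Sigma = np.zeros((M, N))
Sigma[:N, :N] = np.diag(s)
print(np.allclose(X, U @ Sigma @ Vt))      # True

# Keeping only the k largest singular values yields the best rank-k
# approximation of X, the property low-dimensional representations rely on.
k = 2
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
```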

Automatic Control Principles and Design, 5th Edition

Introduction:
Automatic control principles and design play a crucial role in various engineering fields, enabling the efficient operation of systems and processes. In this article, we delve into the key concepts and applications of automatic control, as outlined in the 5th edition of the book "Automatic Control Principles and Design."

Chapter 1: Introduction to Automatic Control
Automatic control is the use of control systems to regulate processes or machines without human intervention. It encompasses a wide range of applications, from simple domestic appliances to complex industrial systems. The chapter provides an overview of the basic principles and benefits of automatic control.

Chapter 2: Modeling of Dynamic Systems
The accurate modeling of dynamic systems is essential for effective control design. This chapter explores different techniques for modeling linear and nonlinear systems, including differential equations, transfer functions, and state-space representations. Real-world examples are used to illustrate the modeling process.

Chapter 3: Time-Domain Analysis of Control Systems
Time-domain analysis allows us to examine the transient and steady-state responses of control systems. This chapter covers the analysis of first- and second-order systems, as well as higher-order systems. It also introduces the concept of system stability and the influence of system parameters on stability.

Chapter 4: Frequency-Domain Analysis of Control Systems
Frequency-domain analysis provides insights into the behavior of control systems in the frequency spectrum. The chapter discusses transfer function analysis, Bode plots, Nyquist plots, and the relationship between the time- and frequency-domain representations of systems. Practical examples are included to enhance understanding.

Chapter 5: Control System Design by the Root Locus Technique
The root locus technique is a graphical method that aids in control system design. This chapter explores the construction of the root locus plot and its use in determining system stability, transient response, and controller design. Design guidelines and examples are presented to illustrate the application of this powerful tool.

Chapter 6: Control System Design by the Frequency Response Technique
Frequency response techniques provide an alternative approach to control system design. This chapter discusses the design of compensators and filters using frequency response methods such as gain and phase margins and loop shaping. The advantages and limitations of this design approach are highlighted.

Chapter 7: State-Space Analysis and Design
State-space analysis offers a modern and comprehensive framework for control system design. This chapter presents the concepts of state variables, state equations, observability, and controllability. The methods for state feedback control and observer design are also covered, along with their applications.

Chapter 8: Digital Control Systems
The design and implementation of control systems in the digital domain are covered in this chapter. It discusses the advantages of digital control, discretization of continuous-time systems, sampling and quantization, and various digital control algorithms. Practical considerations and implementation issues are addressed.

Chapter 9: Introduction to Nonlinear Systems and Control
Nonlinear systems present unique challenges in control design. This chapter introduces the basics of nonlinear systems and control techniques.
It covers phase-plane analysis, describing functions, feedback linearization, and sliding mode control. Real-world examples demonstrate the application of these methods.

Conclusion:
The 5th edition of "Automatic Control Principles and Design" provides a comprehensive and up-to-date resource for understanding the principles and design techniques of automatic control. Through its chapters, readers gain the knowledge and skills needed to design effective control systems for a wide range of applications. The book serves as an invaluable guide for students, researchers, and professionals in the field of automatic control.
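As a hedged illustration of the time- and frequency-domain analyses surveyed in Chapters 3 and 4 (the system and parameter values are invented for the example, not taken from the book), a standard second-order system can be analyzed in a few lines with SciPy:

```python
# Time-domain analysis of G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2) with SciPy.
import numpy as np
from scipy import signal

wn, zeta = 2.0, 0.5                      # natural frequency, damping ratio (assumed)
G = signal.TransferFunction([wn**2], [1.0, 2.0 * zeta * wn, wn**2])

t, y = signal.step(G)                    # unit-step response
overshoot = (y.max() - 1.0) * 100.0      # percent overshoot about the final value 1
print(f"peak = {y.max():.3f}, overshoot = {overshoot:.1f}%")

# Frequency-domain view of the same system (Bode data, as in Chapter 4).
w, mag_db, phase_deg = signal.bode(G)
```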

Micro Focus Unified Functional Testing Pro (LeanFT)

Case Study: Financial Market Exchange
Leading financial market institution maximizes development speed with Micro Focus UFT Pro.

At a Glance
■ Industry: Financial Services
■ Location: Sydney, Australia
■ Challenge: Automate testing, provide cross-platform and cross-browser support for API and GUI components, and integrate with standard IDEs.
■ Products and Services: UFT Pro (LeanFT)
■ Success Highlights:
+ Enables rapid software development and deployment to enhance agility
+ Reduces risks through efficient testing practices and script implementations
+ Standardizes functional testing on one platform
+ Facilitates greater collaboration between testing and development teams

Overview
One of the world's leading financial market institutions relies on speed, reliability and state-of-the-art technology to help its clients build wealth. With Micro Focus Unified Functional Testing Pro (LeanFT), the company has established an efficient application testing platform capable of integrating testing from application development through production to improve quality and speed time-to-market.

Challenge
One of the world's leading financial market institutions offers a full suite of capital market services. With a highly skilled team of people, the company works with a broad range of B2B and B2C customers. Operating in a highly regulated environment, the company helps clients build wealth and manage investment and operating risk. The company's network and data center are connected to leading financial hubs. Speed, reliability and state-of-the-art technology are fundamental to its success.

Running a combination of 200+ in-house, acquired and modified applications, the company is focused on delivering quality software to meet the needs of its users. A few years ago, it began to embrace agile methodologies to improve quality and speed-to-market. Having outgrown its use of Micro Focus Unified Functional Testing (UFT), the company decided to look for a testing solution that provided automation across Application Program Interface (API) and Graphical User Interface (GUI) components.

"We wanted to align test automation to development practices to feed into continuous integration," says a spokesperson for the company. "But we didn't want to add complexity to our testing ecosystem by having another toolset—we wanted a single application that would satisfy our requirements."

Solution
After investigating potential testing applications, the company selected UFT Pro (LeanFT), a powerful functional testing solution built specifically for continuous testing and continuous integration.

The spokesperson says, "We chose UFT Pro (LeanFT) due to its functionality, particularly with respect to cross-platform, cross-browser and desktop applications support, as well as its integration capabilities with API and GUI components. Its support for the most common Applications Under Test (AUT) technologies was important to us, as we needed a solution that could handle both desktop and web applications. Having a solution that works inside a standard Integrated Development Environment (IDE) using modern scripting languages was also a deciding factor.
The solution's Object Identification Center (OIC) meant that we could model AUTs and objects with ease as we create robust scripts."

To test the suitability of UFT Pro (LeanFT), the company ran a pilot on a live project. "We took the opportunity to investigate the functionality of UFT Pro (LeanFT) by testing one of our web-browser-based applications," explains the spokesperson. "After a few months of using the solution, we realized it was the right tool for us. We gained a good understanding of how to use application models, share code resources, integrate with Continuous Integration (CI) / Continuous Delivery (CD) tools, and support common AUT technologies including Java, .NET and AngularJS.

"Using UFT Pro (LeanFT) on this project gave us a roadmap for how we could leverage the solution's functionality more broadly. It showed us what it would take to create a company-wide foundation for integration from development through to production with end-to-end traceability."

Following the success of this pilot, the company rolled out UFT Pro (LeanFT) across its testing environment. It is now being used to test seven core applications.

Results
The ability in UFT Pro (LeanFT) to create Application Models that serve as a shared objects repository, and to easily identify objects via the OIC feature so that objects do not break from one build to the next, are two key benefits the company is realizing from the solution.

The spokesperson explains, "We can now create abstractions of our AUTs, and in turn provide our tests with an interface to the applications and their objects. This allows us to maintain our test objects in a single location for use across our testing suite, which in turn helps our developers to write code more quickly, without the need to write manual programmatic descriptions for each object.

"In addition, we now have access to multiple visual relational identifiers to discover fields in our AUTs based on neighboring objects. This feature alone has helped us many times when objects were changing depending on options set in the application."

The UFT Pro (LeanFT) integration capabilities facilitate greater collaboration between testing and development teams. The spokesperson continues, "This allows for robust software testing that easily accommodates changes to applications.

"Our investment in UFT Pro (LeanFT) is for the long term. We want to reduce our automation toolkit to a common AUT like Java and handle a myriad of third-party plug-ins. In due time, we'll be able to hook into all the tools our development teams use, and write test scripts in the same language. This will significantly reduce the time between development and testing, and enhance the quality of our applications, not to mention speed-to-market. We're currently leveraging the interoperability of UFT Pro (LeanFT) to create closer alignment with the CI/CD process."

While the company is still in the infancy stages of using UFT Pro (LeanFT), it has recognized the potential for improving the efficiency of its testing. The spokesperson says, "UFT Pro (LeanFT) is a powerful solution that provides openness and allows us to use our object-oriented programming knowledge and advanced coding techniques, such as polymorphism and inheritance. This has opened up so many options for us to make our test scripts more efficient.

"Once we mature our testing processes and get more familiar with UFT Pro (LeanFT), we will become more productive.
Certainly, as we migrate more and more of our 700+ UFT test scripts—automate them, store them in a central repository, and re-use them—we will see dramatic improvements in speed and efficiency.

"In the long run, having a single tool for test automation will make it easier for us to move our resources to different projects as required."

Looking ahead, the company is focused on getting up to speed with UFT Pro (LeanFT) and evolving its approach to functional testing as part of a broader commitment to quality assurance. "To maximize the return on our investment, we're working towards getting the most out of the functionality of UFT Pro (LeanFT)," says the spokesperson. "Our progress is a little slow because this is a brand new way of handling testing for us, particularly as we have amassed more than 10 years of experience with UFT, but we are becoming quicker and more confident every day."

The company has a lot of systems, including legacy applications, that use different technologies. The priority is to automate test cases for both API and GUI across multiple programming languages, and to leverage modern industry standards. Centralization and standardization of test scripts will also help drive further efficiency and productivity gains.

"We want to get the basics right first in terms of test creation and automation," concludes the spokesperson. "Once we've achieved that, we'll look at leveraging other functionality such as analytics. We're playing the long game with UFT Pro (LeanFT) and have chosen to implement the solution as we recognize the potential to achieve closer alignment between testing and development. We know this will ultimately help us release applications rapidly and confidently."

Expectation Propagation (EP) Algorithm: An Introduction

Expectation Propagation is a powerful algorithm used in probabilistic graphical models for approximating complex posterior distributions. It is particularly effective when dealing with models that contain both linear and non-linear dependencies. In this article, we walk through the basic principles of the EP algorithm, explaining each step along the way.

1. Introduction to Probabilistic Graphical Models:
Probabilistic graphical models are a popular framework for representing and reasoning about uncertainty in complex systems. They combine ideas from probability theory and graph theory, providing a powerful tool for data analysis and decision making. These models consist of two components:
- Nodes: representing random variables.
- Edges: representing dependence relationships between variables.

2. Problem Statement:
Consider a complex probabilistic graphical model with a set of observed variables denoted "y" and a set of hidden variables denoted "x." Our goal is to estimate the posterior distribution of the hidden variables given the observed data, P(x|y).

3. Introduction to Expectation Propagation:
Expectation Propagation approximates the posterior distribution P(x|y) by iteratively refining a set of factorized distributions. The algorithm is based on the principle of minimizing a divergence measure, the Kullback-Leibler (KL) divergence, between the true posterior and the approximate distribution.

4. Factor Graph Representation:
To understand the EP algorithm, first represent the probabilistic graphical model as a factor graph. A factor graph is a bipartite graph that connects variables to factors, where each factor represents the conditional distribution connecting its neighboring variables.

5. Initialization:
The EP algorithm starts with an initialization step. We begin by setting the approximate factorized distribution, denoted Q(x), to a simple form, often chosen as a factorial distribution.

6. Message Passing:
EP employs a message-passing scheme to update the approximate distributions. Messages represent the information that variables and factors share with each other during the iterative process. There are two types of messages in EP:
- Belief messages: carry information from variables to factors and reflect beliefs about the hidden variables according to the observed data.
- Calibration messages: carry information from factors to variables and reflect beliefs about the hidden variables based on the compatible distributions.

7. Update Step:
Messages are updated iteratively until convergence is achieved. In each update step, we sequentially send, receive, and update messages. This process continues until the maximum number of iterations is reached or a convergence criterion is satisfied.

8. Fusion Step:
After the messages have been updated, the EP algorithm performs a fusion step to combine the information from all the factorized distributions into a single refined distribution that approximates the true posterior. This refined distribution is the final estimate of P(x|y).

9. Convergence Analysis:
Convergence of the EP algorithm can be assessed by monitoring the changes in the KL divergence between the approximate and true posterior distributions. If the divergence falls below a predefined threshold, the algorithm is deemed converged.
10. Computational Complexity:
The computational complexity of the EP algorithm depends on the size and structure of the graphical model. In general, EP provides an efficient approach to approximating complex posterior distributions compared to other methods such as Monte Carlo sampling.

11. Applications and Limitations:
The EP algorithm has found applications in various fields, including machine learning, computer vision, and bioinformatics. It excels at approximating non-linear and high-dimensional models. However, its effectiveness may degrade when models exhibit non-conjugate factorizations or strong dependencies.

12. Conclusion:
Expectation Propagation is a powerful algorithm for approximating posterior distributions in probabilistic graphical models. It combines message-passing and distribution-fusion techniques to refine factorized distributions iteratively. Although there are challenges and limitations, EP remains a valuable tool for handling complex probabilistic models.

In conclusion, the EP algorithm provides a framework for efficiently approximating complex posterior distributions, enabling robust inference and decision-making in probabilistic graphical models.
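The moment-matching update at the heart of steps 6–8 can be illustrated numerically. The sketch below is a generic example, not from this article: the Student-t factor, the cavity parameters, and the integration grid are all assumed. It shows the single core EP operation of replacing one non-Gaussian factor by the Gaussian whose moments match those of the "tilted" distribution (cavity × true factor), which is exactly the KL-minimizing choice within the Gaussian family.

```python
# One EP moment-matching step, done by numerical integration on a grid.
import numpy as np

x = np.linspace(-10, 10, 4001)                 # grid for numerical integration

def gauss(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

# Cavity distribution: the current approximation with factor i removed (assumed values).
cavity_m, cavity_v = 1.0, 4.0
cavity = gauss(x, cavity_m, cavity_v)

# True non-Gaussian factor f_i(x): a heavy-tailed Student-t likelihood centered at 3.
nu = 3.0
factor = (1 + (x - 3.0) ** 2 / nu) ** (-(nu + 1) / 2)

# Tilted distribution proportional to cavity(x) * f_i(x); normalize on the grid.
tilted = cavity * factor
tilted /= np.trapz(tilted, x)

# Moment matching = minimizing KL(tilted || q) over Gaussians q.
new_m = np.trapz(x * tilted, x)
new_v = np.trapz((x - new_m) ** 2 * tilted, x)
print(f"matched Gaussian: mean={new_m:.3f}, var={new_v:.3f}")
```

A full EP loop simply repeats this step for every factor, dividing the matched Gaussian by the cavity to obtain the updated site approximation, until the parameters stop changing.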

Introduction to Modern Control Theory (English original)

Several factors provided the stimulus for the development of modern control theory:
a. The necessity of dealing with more realistic models of systems.
b. The shift in emphasis towards optimal control and optimal system design.
c. The continuing developments in digital computer technology.
d. The shortcomings of previous approaches.
e. Recognition of the applicability of well-known methods in other fields of knowledge.

The transition from simple approximate models, which are easy to work with, to more realistic models produces two effects. First, a larger number of variables must be included in the models. Second, a more realistic model is more likely to contain nonlinearities and time-varying parameters. Previously ignored aspects of the system, such as interactions with feedback through the environment, are more likely to be included.

With an advancing technological society, there is a trend towards more ambitious goals. This also means dealing with complex systems with a large number of interacting components. The need for greater accuracy and efficiency has changed the emphasis on control system performance. The classical specifications in terms of percent overshoot, settling time, bandwidth, etc. have in many cases given way to optimal criteria such as minimum energy, minimum cost, and minimum time operation. Optimization of these criteria makes it even more difficult to avoid dealing with unpleasant nonlinearities. Optimal control theory often dictates that nonlinear time-varying control laws be used, even if the basic system is linear and time-invariant.

The continuing advances in computer technology have had three principal effects on the controls field. One of these relates to the gigantic supercomputers. The size and the class of the problems that can now be modeled, analyzed, and controlled are considerably larger than they were when the first edition of this book was written.

The second impact of computer technology has to do with the proliferation and wide availability of microcomputers in homes and in the workplace. Classical control theory was dominated by graphical methods because, at the time, that was the only way to solve certain problems. Now every control designer has easy access to powerful computer packages for systems analysis and design. The old graphical methods have not yet disappeared, but have been automated. They survive because of the insight and intuition they can provide, although different techniques are often better suited to a computer. Although a computer can be used to carry out the classical transform-inverse transform methods, it is usually more efficient for a computer to integrate differential equations directly.

The third major impact of computers is that they are now so commonly used as just another component in the control system. This means that discrete-time and digital system control now deserves much more attention than it did in the past.

Modern control theory is well suited to the above trends because its time-domain techniques and its mathematical language (matrices, linear vector spaces, etc.) are ideal when dealing with a computer. Computers are a major reason for the existence of state variable methods.

Most classical control techniques were developed for linear constant-coefficient systems with one input and one output (perhaps a few inputs and outputs). The language of classical techniques is the Laplace or Z-transform and transfer functions. When nonlinearities and time variations are present, the very basis for these classical techniques is removed. Some successful techniques such as phase-plane methods, describing functions, and other ad hoc methods have been developed to alleviate this shortcoming. However, the greatest success has been limited to low-order systems. The state variable approach of modern control theory provides a uniform and powerful method of representing systems of arbitrary order, linear or nonlinear, with time-varying or constant coefficients. It provides an ideal formulation for computer implementation and is responsible for much of the progress in optimization theory, as the short sketch below illustrates.
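As a concrete illustration of the two representations the text contrasts (an assumed example, not from the original), SciPy can convert a classical transfer function into a modern state-variable model:

```python
# Converting a classical transfer function G(s) = 1 / (s^2 + 3s + 2)
# into the state-variable form x' = A x + B u, y = C x + D u.
from scipy import signal

A, B, C, D = signal.tf2ss([1.0], [1.0, 3.0, 2.0])
# This is the state-space form that computers integrate directly,
# instead of applying transform-inverse transform methods.
print(A)                    # [[-3. -2.], [ 1.  0.]]
print(B.ravel(), C, D)
```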
Modern control theory is a recent development in the field of control; therefore, the name is justified at least as a descriptive title. However, the foundations of modern control theory are to be found in other well-established fields. Representing a system in terms of state variables is equivalent to the approach of Hamiltonian mechanics, using generalized coordinates and generalized momenta. The advantages of this approach have been well known in classical physics for many years. The advantages of using matrices when dealing with simultaneous equations of various kinds have long been appreciated in applied mathematics. The field of linear algebra also contributes heavily to modern control theory, due to its concise notation, the generality of its results, and the economy of thought that it provides.

Mechanism of Surface Finish Production

There are basically five mechanisms which contribute to the production of a machined surface. These are:

(1) The basic geometry of the cutting process. In, for example, single-point turning, the tool advances a constant distance axially per revolution of the workpiece, and the resultant surface, when viewed perpendicularly to the direction of tool feed motion, carries a series of cusps whose basic form replicates the shape of the tool in cut.

(2) The efficiency of the cutting operation. It has already been mentioned that cutting with unstable built-up edges produces a surface containing hard built-up-edge fragments, which degrade the surface finish. It can also be demonstrated that cutting under adverse conditions, such as apply when using large feeds, small rake angles and low cutting speeds, produces conditions in which, besides the continuous shear occurring in the shear zone, tearing takes place, discontinuous chips of uneven thickness are produced, and the resultant surface is poor. This situation is particularly noticeable when machining very ductile materials such as copper and aluminum.

(3) The stability of the machine tool. Under some combinations of cutting conditions: workpiece size, method of clamping, and cutting tool rigidity relative to the machine tool structure, instability can be set up in the tool which causes it to vibrate. Under some conditions the vibration builds up and, unless cutting is stopped, considerable damage to both the cutting tool and the workpiece may occur. This phenomenon is known as chatter and, in axial turning, is characterized by long-pitch helical bands on the workpiece surface and short-pitch undulations on the transient machined surface.

(4) The effectiveness of removing swarf.
In discontinuous chip production machining, such as milling or the turning of brittle materials, it is expected that the chips (swarf) will leave the cutting zone either under gravity or with the assistance of a jet of cutting fluid, and that they will not influence the cut surface in any way. However, when continuous chip production is evident, unless steps are taken to control the swarf, it is likely to impinge on the cut surface and mark it. Inevitably, this marking, besides looking unattractive, often results in a poorer surface finish.

(5) The effective clearance angle on the cutting tool. For certain geometries of minor cutting edge relief and clearance angles, it is possible to cut on the major cutting edge and burnish on the minor cutting edge. This can produce a good surface finish but, of course, it is strictly a combination of metal cutting and metal forming and is not to be recommended as a practical cutting method. However, due to cutting tool wear, these conditions occasionally arise and lead to a marked change in the surface characteristics.

Surface Finishing and Dimensional Control

Products that have been completed to their proper shape and size frequently require some type of surface finishing to enable them to satisfactorily fulfill their function. In some cases, it is necessary to improve the physical properties of the surface material for resistance to penetration or abrasion. In many manufacturing processes, the product surface is left with dirt, chips, grease, or other harmful material upon it. Assemblies that are made of different materials, or from the same materials processed in different manners, may require some special surface treatment to provide uniformity of appearance.
Surface finishing may sometimes become an intermediate step in processing. For instance, cleaning and polishing are usually essential before any kind of plating process. Some of the cleaning procedures are also used for improving surface smoothness on mating parts and for removing burrs and sharp corners, which might be harmful in later use. Another important need for surface finishing is corrosion protection in a variety of environments. The type of protection procedure will depend largely upon the anticipated exposure, with due consideration to the material being protected and the economic factors involved.

Satisfying the above objectives necessitates the use of the main surface-finishing methods, which involve chemical change of the surface, mechanical work affecting surface properties, cleaning by a variety of methods, and the application of protective coatings, organic and metallic.

In the early days of engineering, the mating of parts was achieved by machining one part as nearly as possible to the required size, machining the mating part nearly to size, and then completing its machining, continually offering the other part to it, until the desired relationship was obtained. If it was inconvenient to offer one part to the other during machining, the final work was done at the bench by a fitter, who scraped the mating parts until the desired fit was obtained, the fitter therefore being a "fitter" in the literal sense. It is obvious that the two parts would have to remain together, and in the event of one having to be replaced, the fitting would have to be done all over again. These days, we expect to be able to purchase a replacement for a broken part, and for it to function correctly without the need for scraping and other fitting operations.

When one part can be used "off the shelf" to replace another of the same dimension and material specification, the parts are said to be interchangeable. A system of interchangeability usually lowers production costs, as there is no need for an expensive fitting operation, and it benefits the customer when worn parts need to be replaced.

Limits and Tolerances

Machine parts are manufactured so that they are interchangeable. In other words, each part of a machine or mechanism is made to a certain size and shape so it will fit into any other machine or mechanism of the same type. To make a part interchangeable, each individual part must be made to a size that will fit the mating part in the correct way. It is not only impossible, but also impractical, to make many parts to an exact size. This is because machines are not perfect and tools become worn. A slight variation from the exact size is therefore always allowed. The amount of this variation depends on the kind of part being manufactured. For example, a part might be made 6 in. long with a variation allowed of 0.003 (three thousandths) in. above and below this size. Therefore, the part could be anywhere from 5.997 to 6.003 in. and still be the correct size. These are known as the limits. The difference between the upper and lower limits is called the tolerance: the total permissible variation in the size of a part.

The basic size is the size from which the limits of size are derived by the application of allowances and tolerances.

Sometimes the variation is allowed in only one direction. This is known as unilateral tolerance: a system of dimensioning where the tolerance (that is, the variation) is shown in only one direction from the nominal size. Unilateral tolerancing allows the tolerance on a hole or shaft to be changed without seriously affecting the fit.

When the tolerance is in both directions from the basic size, it is known as a bilateral tolerance (plus and minus): a system of dimensioning where the tolerance is split and shown on either side of the nominal size. Limit dimensioning is a system of dimensioning where only the maximum and minimum dimensions are shown; the tolerance is then the difference between these two dimensions.

Introduction of Machining

Machining as a shape-producing method is the most universally used and the most important of all manufacturing processes. Machining is a shape-producing process in which a power-driven device causes material to be removed in chip form. Most machining is done with equipment that supports both the workpiece and the cutting tool, although in some cases portable equipment is used with an unsupported workpiece.

Low setup cost for small quantities. Machining has two applications in manufacturing. For casting, forging, and pressworking, each specific shape to be produced, even a single part, nearly always carries a high tooling cost. The shapes that may be produced by welding depend to a large degree on the shapes of raw material that are available. By making use of generally high-cost equipment but without special tooling, it is possible, by machining, to start with nearly any form of raw material, so long as the exterior dimensions are great enough, and produce any desired shape from any material.
Introduction of Machining

Machining as a shape-producing method is the most universally used and the most important of all manufacturing processes. Machining is a shape-producing process in which a power-driven device causes material to be removed in chip form. Most machining is done with equipment that supports both the workpiece and the cutting tool, although in some cases portable equipment is used with an unsupported workpiece.

Low setup cost for small quantities. Machining has two applications in manufacturing. For casting, forging, and pressworking, each specific shape to be produced, even one part, nearly always has a high tooling cost. The shapes that may be produced by welding depend to a large degree on the shapes of raw material that are available. By making use of generally high-cost equipment but without special tooling, it is possible, by machining, to start with nearly any form of raw material, so long as the exterior dimensions are great enough, and produce any desired shape from any material. Therefore, machining is usually the preferred method for producing one or a few parts, even when the design of the part would logically lead to casting, forging or pressworking if a high quantity were to be produced.

Close accuracies, good finishes. The second application for machining is based on the high accuracies and surface finishes possible. Many of the parts machined in low quantities would be produced with lower but acceptable tolerances if produced in high quantities by some other process. On the other hand, many parts are given their general shapes by some high-quantity deformation process and machined only on selected surfaces where high accuracies are needed. Internal threads, for example, are seldom produced by any means other than machining, and small holes in pressworked parts may be machined following the pressworking operations.

Chinese translation: Introduction to Modern Control Theory. The development of modern control theory has been encouraged by the following factors: 1. the need to deal with more realistic models of systems; 2. the shift in emphasis toward optimal control and optimal system design; 3. the continuing development of digital computer technology; 4. the immaturity of current techniques. The applicability of well-known methods to other fields of knowledge has been recognized. The move from simple, easily solved approximate models to more realistic models has two effects: first, the model must include a larger number of variables.

ABB Freelance Automation System User Manual

With Freelance you'll be right on target

Freelance combines the advantages of both worlds – process control system and PLC. It offers the small design and reasonable price of a PLC together with the entire functionality of a process control system. The integrated environment simplifies engineering, commissioning, maintenance and fieldbus management. The intuitive operator interface enables simple operation and diagnosis of the entire system.

The objective of process industry companies today is clearly defined: increased automation at lower cost. Based on this principle, ABB has redefined compact, scalable control systems, and is today viewed by many as a trend-setter in this industry. Today, people are impressed by the global success story of Freelance, with more than 15,000 applications covering all industry sectors. Freelance provides powerful automation that is not only cost-effective in terms of hardware and software, but is also very easy to use. The advanced design of Freelance makes the control system ideal for numerous applications in power, process or environmental technology plants.

[Photos: AC 800F controller; AC 700F controller]

AC 700F is a controller in a small footprint with direct S700 I/O modules that supports Profibus. The AC 800F controller can be equipped with up to four fieldbus modules. If required, these can also be of a different type. The AC 700F controller expands the scalability of Freelance for smaller applications. The Freelance controller offering provides scope for optimized scalability, thus allowing it to be used in systems ranging from just a few signals to several thousand signals. Both controller types can even be used several times in a single system. Both Freelance controllers are suitable both for installation in the control room and for use in junction boxes directly in the field.

Minimum engineering – maximum automation

The Freelance control system combines user-friendly engineering with an open, modern system architecture. In detail, this means:
– Only one tool for engineering, commissioning and diagnosis
– Fieldbus management completely integrated into control system engineering
– Integration of process automation and process electrification
– Potential savings in engineering, commissioning, testing, service and maintenance
– Assembly close to the field: reduction of field wiring and space requirements

Pre-configured components for the operator level

The engineering of the DigiVis operator level is relatively straightforward. The pre-configured visualization components include:
– Faceplates
– Module diagnostics
– Extended troubleshooting capabilities
– Automatically generated sequence diagrams
– Automatically generated system communication
– Event list and alarm line
– Trend displays with long-term archiving
These components can be used straight out of the box, eliminating time-consuming manual configuration.

Scalable controllers

The two controllers AC 700F and AC 800F are the core of the Freelance system.

User-friendly engineering

All tasks – configuration of the controllers and operator interface as well as fieldbus management – are performed quite simply using a single engineering tool, the Control Builder F. All five programming languages specified in IEC 61131-3 are available. Users especially appreciate how quickly they are able to familiarize themselves with the tool.
Supported by a uniform data basis throughout the system as well as cross-reference functions, the Control Builder F enables you to conduct the entire system configuration quickly and easily – including:
– Configuring and parameterizing the field devices and I/Os
– Setting the bus topology and parameters such as transmission rates and addresses
Control Builder F can be used with both controller types (AC 700F and AC 800F).

Freelance supports FDT/DTM and also user-defined hardware templates for the efficient configuration or maintenance of field devices. This eliminates the time-consuming task of integrating device GSD files. In addition to intuitive graphical parameterization dialogs for intelligent field devices, a DTM also, for instance, offers comprehensive diagnostics functions for efficient maintenance.

The structure of the field level

For the field level, ABB offers an extensive selection of devices that are fine-tuned to meet the needs of the relevant area of use. Thanks to established communication standards, integration into Freelance is quite simple. Using Profibus together with the AC 700F or AC 800F controllers, intelligent field devices can be integrated into the system directly using the fieldbus or with the aid of remote I/Os.

The choice of fieldbus type

In accordance with the concept of plug & produce, Freelance allows the integration of all common fieldbuses – leaving the user free to choose the fieldbus type. Various types of fieldbus can even be operated in parallel in a single controller. This proves to be a real advantage if the task at hand specifies which fieldbus type is to be used in each case.

Bus type                        AC 700F   AC 800F
PROFIBUS-DPV1                   Yes       Yes
FOUNDATION Fieldbus             No        Yes
HART via remote I/O             Yes       Yes
MODBUS Master and Slave         Yes       Yes
CAN for Freelance Rack I/O      No        Yes
Telecontrol protocol IEC870-5   No        Yes

The DigiVis operator interface

DigiVis meets all standard process control requirements with regard to operation and observation at an attractive price. Amongst other things, DigiVis offers the following visualization options:
– Clearly structured faceplates for operator interventions (which can also be combined as required in group displays)
– Trend displays including historical data and long-term archiving
– Alarm pages for specific plant areas, sequence control displays, shift logs, event logs and data archiving
– Standardized system display for system hardware diagnostics
– Free graphic displays that besides standard graphic elements are also supported by bitmaps and a 3D macro library
– Dual monitor operation
– Control aspect for interlocking displays
DigiVis configuration is fully integrated in the engineering tool Control Builder F. An optional batch package is also easy to integrate.

System overview

Freelance is divided into an operator level and a process level. The operator level contains the functions for operation and observation, archives and logs, trends and alarms.
Open-loop and closed-loop control functions are processed in the controllers, and exchange data with actuators and sensors in the field.

[Figure: system architecture showing operator stations, Panel 800 local operator stations, AC 700F and AC 800F controllers (including a redundant AC 800F pair with Redundancy Link Module RLM01), direct and remote I/O (S700, S800, S900, Freelance Rack I/O), PROFIBUS, FOUNDATION Fieldbus, MODBUS and IEC870 connections, field devices, intelligent switchgear MNS iS, universal motor control, and ACS800 variable speed drives]

You get technology that lasts

High availability
The technology has proven its worth in industrial use over several years and meets the toughest requirements regarding availability. The hardware can be structured redundantly at all levels. This includes redundant fieldbus modules and redundant fieldbus lines as well as network and controller redundancy.

Scalability
From small units with perhaps 8 signals to major systems with more than 10,000 signals: Freelance grows with your plant and can be extended very easily to meet requirements at any time.

Versatile communication
You can use the following as required: OPC, Ethernet, TCP/IP, PROFIBUS, FOUNDATION Fieldbus, MODBUS, HART or audio instructions to obtain a solution in the event of an alarm, video integration or Internet connection.

Regulatory compliance
With a view to meeting the requirements of regulatory authorities such as the American FDA (Food and Drug Administration) or the EFSA (European Food Safety Authority), Freelance provides a series of features that facilitate the validation procedure. Examples include:
– Encrypted log and trend data
– Audit trail functions
– Access rights and user administration (security lock)

Lifecycle management
With the Automation Sentinel Program you can keep your control software up-to-date at all times. At the same time, Automation Sentinel provides a flexible means of ensuring that the latest system software technologies are used. Automation Sentinel helps manage automation software assets with timely delivery of the latest system software releases, thus providing you with better productivity, lower support cost and simpler software management. Migration from traditional control systems to Freelance enables ease-of-use and lower maintenance cost.

Asset management
If you want to keep your production plant up and running in the long term, you need information about the availability and degree of wear and tear of your equipment. All of the information necessary for this is available in the Freelance control system. As a result, several customers have been able to avoid making investments that appeared essential but were in fact unnecessary. Freelance allows the use of modern asset management methods for more efficient maintenance and optimization – helping for instance to make optimum use of plant capacity.

Our comprehensive Life Cycle Services enable us to increase the value of your plant over its entire lifetime. Conventional, reactive service can reduce production downtimes, while the use of new technologies offers an increased number of capabilities for preventive service measures to identify and avoid cost-intensive faults at an early stage.
Proactive service such as asset management or ongoing modernization increases the value of our customers' plants and gives them a distinct competitive edge. Are you interested? Then get in touch with us to obtain more information about the Freelance system and its components.

You get a comprehensive customer service

A comprehensive customer service is worth its weight in gold. Service means a profitable investment in continually maximizing and optimizing the availability, performance, quality and security of a plant. ABB's support covers the following areas:
– Customer Support Services
– Training
– Spare Parts & Logistics, Repair Shops
– Process, Application & Consulting Services
Through the resulting specialization of our employees, we guarantee maximum competence for each task we perform. Whether it's more traditional service support such as commissioning and maintenance or individual consulting services – the result is measurable customer benefits.

High-Resolution Image Processing Workflow

Image processing is a complex and crucial step in obtaining high-quality visual data. It involves a series of procedures and techniques to enhance, analyze, and interpret images for various applications. From satellite imagery to medical scans, image processing plays a significant role in extracting valuable information and making informed decisions.

The first step in the high-resolution image processing workflow is image acquisition. This can be done through various means such as digital cameras, satellite sensors, or medical imaging devices. The quality of the acquired image greatly impacts the subsequent processing and analysis.
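To make one enhancement step of such a workflow concrete, the sketch below performs histogram equalization on a synthetic low-contrast image using NumPy only; the array stands in for an acquired image, and nothing here is tied to any particular camera or sensor:

```python
import numpy as np

def equalize_histogram(img):
    """Spread a low-contrast 8-bit image over the full 0-255 range
    using the cumulative distribution of its gray levels."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)         # gray-level mapping
    return lut[img]

# Synthetic low-contrast "acquired" image: values squeezed into 100..150.
rng = np.random.default_rng(0)
img = rng.integers(100, 151, size=(64, 64), dtype=np.uint8)
out = equalize_histogram(img)
print(img.min(), img.max(), "->", out.min(), out.max())  # 100 150 -> 0 255
```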

METHOD FOR EFFICIENTLY SIMULATING THE INFORMATION PROCESSING IN CELLS AND TISSUE OF THE NERVOUS SYSTEM WITH A TEMPORAL SERIES COMPRESSED ENCODING NEURAL NETWORK

Patent title: METHOD FOR EFFICIENTLY SIMULATING THE INFORMATION PROCESSING IN CELLS AND TISSUE OF THE NERVOUS SYSTEM WITH A TEMPORAL SERIES COMPRESSED ENCODING NEURAL NETWORK
Inventors: GUILLEN, Marcos, E.; MAROTO, Fernando, M.
Application number: US2010/001984
Filing date: 2010-07-13
Publication number: WO2011/011044A2
Publication date: 2011-01-27

Abstract: A neural network simulation represents components of neurons by finite state machines, called sectors, implemented using look-up tables. Each sector has an internal state represented by a compressed history of data input to the sector and is factorized into distinct historical time intervals of the data input. The compressed history of data input to the sector may be computed by compressing the data input to the sector during a time interval, storing the compressed history of data input to the sector in memory, and computing from the stored compressed history of data input to the sector the data output from the sector.

Applicant: CORTICAL DATABASE INC.; GUILLEN, Marcos, E.; MAROTO, Fernando, M.
Address: 85 Bluxome Street, Suite 301, San Francisco, CA 94107 US; Edificio Washington, Piso 4, Puerta 1, Washington Esq Con Juan De Salazar, Asuncion PY; 6426 Hagen Boulevard, El Cerrito, CA 34530 US
Agent: MCFARLANE, Thomas, J. et al.
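Read as a recipe, the abstract describes a finite state machine (a "sector") whose state is a compressed, interval-factorized history of its inputs and whose output comes from a look-up table. The sketch below is a speculative Python illustration of that idea; the class name, the two-window compression, and the toy look-up table are inventions for this example, not the patent's actual design:

```python
from collections import deque

class Sector:
    """Toy finite-state-machine component of a simulated neuron.

    The internal state is a compressed history of inputs, factorized
    into two distinct time intervals: an exact recent window and a
    summarized older window.  Output is produced by table look-up.
    """

    def __init__(self, lut, recent_len=4, older_len=16):
        self.lut = lut                           # state -> output table
        self.recent = deque(maxlen=recent_len)   # fine-grained interval
        self.older = deque(maxlen=older_len)     # coarse, older interval

    def step(self, x):
        if len(self.recent) == self.recent.maxlen:
            self.older.append(self.recent[0])    # evict into older window
        self.recent.append(x)
        # Compress: keep the recent window exactly, summarize the old one.
        state = (tuple(self.recent), sum(self.older))
        return self.lut.get(state, 0)            # default for unseen states

lut = {((1, 1, 0, 1), 0): 1}   # "fire" on one particular input history
sector = Sector(lut)
outputs = [sector.step(x) for x in (1, 1, 0, 1)]
print(outputs)                 # [0, 0, 0, 1]
```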

Font Chip Addressing

In today's fast-paced technological world, the demand for efficient and reliable components is higher than ever before. This is especially true in the field of electronics, where the need for high-quality parts is essential for the functioning of various devices. One such component that plays a crucial role in electronic devices is the font chip.

A font chip, also known as a character generator chip, is a type of integrated circuit that stores graphical representations of characters or symbols. These chips are commonly used in printers, computers, and other electronic devices to generate text and symbols on displays. Without a font chip, these devices would not be able to display characters or symbols, making them essentially useless.
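Addressing a font chip comes down to mapping a character code to the byte offset of its glyph. As a concrete illustration, the sketch below uses the widely documented layout of HZK16-style GB2312 16×16 dot-matrix font ROMs (94 characters per area, 32 bytes per glyph); real chips differ in character code, glyph size and layout, so treat this as one example scheme rather than any particular datasheet:

```python
def hzk16_offset(ch):
    """Byte offset of a GB2312 character's 16x16 glyph in an HZK16-style ROM.

    GB2312 encodes a character as two bytes; subtracting 0xA0 from each
    gives its area (qu) and position (wei), both in 1..94.  Each glyph
    occupies 16 rows x 2 bytes = 32 bytes.
    """
    hi, lo = ch.encode("gb2312")
    area, pos = hi - 0xA0, lo - 0xA0
    return ((area - 1) * 94 + (pos - 1)) * 32

def read_glyph(rom, ch):
    """Return the 32 glyph bytes for `ch` from a ROM image (bytes-like)."""
    off = hzk16_offset(ch)
    return rom[off:off + 32]

print(hex(hzk16_offset("啊")))  # 0xb040: '啊' is area 16, position 1
```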

Advanced Graphical User Interface for Analysis of Infrared Thermographic Sequence using MATLAB

B. Iyer, S. Nalbalwar and R. Pawade (Eds.), ICCASP/ICMMD-2016. Advances in Intelligent Systems Research, Vol. 137, pp. 187-193. © 2017. The authors. Published by Atlantis Press. This is an open access article under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).

Advanced Graphical User Interface for Analysis of Infrared Thermographic Sequence using MATLAB

M. Kante^1*, D. Reddy^2
^1 Narayana Engineering College, Nellore, India
^2 Andhra University College of Engineering, Vishakaptanam, India

Abstract: Infrared non-destructive evaluation (IRNDE) is an emerging approach for non-contact inspection of various solid materials, such as metals, composites and semiconductors, of industrial and research interest. Data processing is an essential step in IRNDE in order to visualize the subsurface defects in the sample and determine their shapes and sizes. The data processing intends to analyse temporal variations in the contrast of each pixel, relative to a defect-free reference point on the sample. For that, several post-processing algorithms are applied to the recorded thermograms. In this work, an advanced graphical user interface (GUI), built in MATLAB for processing the thermograms recorded by an IR camera and supporting preprocessing, processing and quantification, is presented. Investigations are carried out using thermal image sequences recorded with different active thermographic methods: frequency modulated thermal wave imaging (FMTWI), pulse thermography (PT), and lock-in thermography (LT). A comparative analysis of the results from the different thermographic techniques for defect visualization is presented.

Keywords: Graphical user interface (GUI), image processing, active thermography

1 Introduction

During the past few years, infrared thermography has been a promising technique for locating subsurface defects in inspected subjects. Based on the method of excitation, non-destructive testing is classified into several categories, viz. ultrasonic imaging, radiographic, eddy current, acoustic emission, heat energy (infrared), vibration, etc. Infrared thermography has become a popular non-destructive inspection method for evaluating subsurface defects in metallic, insulating, and composite materials, by virtue of its fast inspection rate, non-contact nature, portability, and easy interpretation. The raw images (thermograms) obtained from an IR camera do not often give exact information about defects. Analysis of the raw data is essential to find discontinuities and defects present in the analyzed specimen. This analysis is mainly divided into three parts: pre-processing, processing and analysis of results. This paper is focused on the study of different processing techniques for thermographic sequences (thermograms) recorded with different active thermographic methods: frequency modulated thermal wave imaging (FMTWI) [1], pulse thermography (PT) [2], and lock-in thermography (LT) [3]. A graphical user interface (GUI) is developed that is suitable for both qualitative and quantitative analysis of thermographic sequences. The main goal of the GUI is to cover the evaluation needs of researchers assessing a sample. In this paper, a graphical interface that supports preprocessing, processing and quantification is presented. Additionally, this GUI shall help researchers to better understand the strengths and weaknesses of the existing algorithms.
The GUI is intended for loading images from a database, displaying them and performing their processing.

2 Thermal Image

A thermographic sequence is composed of several hundreds of thermograms. The sample under test is irradiated using a source of infrared radiation, and then the infrared thermal camera records the transmitted/reflected radiation. The recorded thermal image provides information about the thermo-physical properties, defects and inhomogeneities inside the sample. Thermal images obtained from experiments always have high noise, especially for metal materials, due to high reflectivity and environmental factors, which also needs to be processed.

3 Related Work

In the literature, we can find many existing packages of this type that are available for processing thermograms, such as the open-source IR View Toolbox [4], the Altair-Li suite provided by Cedip, RTools by FLIR [5, 6], the ThermoFitPro software by Innovation Inc. [7] and Termidge [8]. Although several methods have been proposed to investigate defects, flaws, voids etc. in materials/systems, none has so far been free from certain disadvantages. The main objective of this GUI is to ensure loading of thermal images from a database, displaying them and performing their preprocessing, processing and quantification to characterize possible subsurface failures inside the sample.

4 Materials and Experimentation

Experimentation has been carried out on mild steel containing 24 drilled bottom holes of diameter 1 cm each, kept at different depths from the non-defective end. A frequency modulated chirped stimulation, with frequencies swept from 0.01 Hz to 0.1 Hz in 100 s, has been provided to the sample with the help of two halogen lamps of power 1 kW each.

5 Graphical User Interface

In order to present the results of the research in an accessible platform, a Graphical User Interface (GUI) has been designed. The tool is comprehensive in that it is capable of performing different analyses as required by the user. The tool is coded using Matlab Version 12. The Graphical User Interface enables the user to have seamless use and flexibility of operation. The implementation is carried out on a system having a Core 2 Duo processor clocking at a speed of 2 GHz with a RAM of 2 GB. The structure of the GUI is given in Figure 1.

Fig. 1. Basic structure of the GUI

A screenshot of the designed GUI is illustrated in Figure 2. The tool as such can be demarcated into 6 different functional regions. Each region has specific functions and analysis elements built into it.

Fig. 2. Screenshot of the GUI

5.1 Section 1

This is the input/output section of the tool. The tool is capable of handling images in normal image formats like .jpg, .bmp, etc., as well as images that are layered in the volume format. Most of the image data for thermal analysis are usually in the form of volume images, as series of image frames. This section has the necessary functions to load the image data into the workspace and further read the workspace volume for further processing. In the output section, function elements enable saving the image in the current display window for further analysis. Similarly, the current volume can be saved for further processing and analysis.

5.2 Section 2

This section helps the user in performing basic analysis of the images.
This section has three sub-functional parts: histogram analysis, the pixel profile of a particular region of the image, a specific tool for adjusting the image intensity for better visualization, and a specific function to visualize the current frame. The histogram profile of the images serves to give a trend in the distribution of intensity values and helps in the initial stages of choosing the threshold. The histograms before and after thermal processing are illustrated in Figure 3.

Fig. 3. Histogram profile of a specific slice before and after processing

From Figure 3 it can be clearly inferred that there is an appreciable shift in the histogram of the image before and after thermal processing. Histogram analysis gives a crucial indication of the representation of the pixels, which in turn can help in the visualization of the processing. Manual visual interpretation and analysis also play a very critical role in the evaluation procedure; the adjust-intensity-range tool helps in manual adjustment of the threshold value, thereby providing better visualization. The output of the adjust intensity function is illustrated in Figure 4.

Fig. 4. Resultant output of the adjust intensity function in the GUI

Similarly, other functions of this section include identification of the pixel value at a particular pointer position and a facility to visualize an individual frame.

5.3 Section 3

This section helps the user in performing further analysis of the images. It has sub-functional parts such as histogram equalization, image adjustment, and smoothing and sharpening functions, along with different color maps. Intensity profile analysis is another useful indication of the quality of segmentation. The intensity profile also gives the user an idea about the distribution of pixels, post-processing and pre-processing. The intensity profile of a particular region can also give an inclination towards the presence of faults. The intensity profiles of a processed image across two particular rows are given for illustration in Figures 5 and 6.

Fig. 5. Intensity profile of the particular row (top) marked in the image

Fig. 6. Intensity profile of the particular row (bottom) marked in the image

From Figures 5 and 6 it can be clearly observed that there is a clear difference in pixel profiles between the two rows selected. There is a clear change in intensity whenever there is a fault, and in the test images considered in this work the faults are holes. This difference in intensity values can give a clear indication of the holes and can help in identification and quantization. The troughs indicate the presence of holes: it can be observed from Figure 5 that 3 troughs correspond to the presence of 2 clearly identified holes, and in Figure 6 there are 4 troughs corresponding to 4 holes in the processed image.

Post-processing of the processed image plays a crucial role in its critical analysis. Since most image acquisition processes result in very few frames that can be used for analysis and interpretation, it is imperative to have post-processing techniques that can help in proper analysis of the existing information. In this tool, 4 functions in the form of imadjust, histogram equalization, smoothing and sharpening have been implemented. The results of these operations on a processed image are illustrated in Figure 7.

Fig. 7. Results of operations: a) histogram equalization, b) imadjust, c) smoothing, and d) sharpening
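The trough-counting reading of Figures 5 and 6 is easy to automate. The short sketch below (NumPy only; the synthetic profile, threshold and function name are our own illustration, not part of the paper's tool) flags local minima that dip below a threshold in a row's intensity profile:

```python
import numpy as np

def find_troughs(profile, threshold):
    """Indices of local minima below `threshold` in a 1-D intensity profile.
    In a processed thermogram row, such troughs indicate defects (holes)."""
    p = np.asarray(profile, dtype=float)
    interior = np.arange(1, len(p) - 1)
    is_min = (p[1:-1] < p[:-2]) & (p[1:-1] < p[2:])
    return interior[is_min & (p[1:-1] < threshold)]

# Synthetic row profile: bright background with two dark dips ("holes").
x = np.linspace(0, 1, 200)
profile = 0.8 + 0.02 * np.sin(40 * x)
profile[60:70] -= 0.4
profile[140:150] -= 0.35
print(find_troughs(profile, threshold=0.6))  # indices inside the two dips
```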
Colormaps enable the option of viewing the image in different color spaces like hot, gray and jet. This helps in better visualization of images. The human brain perceives changes in the lightness parameter as changes in the data much better than, for example, changes in hue. Therefore, colormaps which have monotonically increasing lightness through the colormap will be better interpreted by the viewer. Representations of an image in different color spaces are presented in Figure 8.

Fig. 8. An image represented in the different color maps hot, jet and inverse gray

5.4 Section 4

This section has functions which implement the different thermal processing approaches. In this section we have incorporated Principal Component Thermography (PCT), Total Harmonic Distortion (THD), Differential Absolute Contrast (DAC), Pulse Phase Thermography (PPT) and a new method which is proposed based on improved PCT. While choosing a particular method for analysis, the user will be asked to input the frame numbers at which to start and end the processing. This feature is of great help in analyzing huge volumes of data, as the user can analyze the data segment by segment. Since the scope of this paper is to elaborate on the tool designed, the methods are not explained here. The results of the different processing methods are illustrated in Figure 9.

Fig. 9. Results of thermal image processing by a) PCT, b) THD, c) DAC, d) PPT and e) the proposed method

The THD method among other things also produces a plot of different parameters like SNR, amplitude of harmonic distortion etc. This is illustrated in Figure 10.

Fig. 10. Plots of different parameters produced by THD

5.5 Section 5

This section displays the results of the different operations and analyses performed with the help of the different functions available in the toolbox. It has options to choose and view a particular frame in the display section.

5.6 Section 6

The quantifiable results are displayed in this section. For example, the SNR of a particular point in the image can be visualized here. The SNR changes with depth and can be a useful tool in analyzing the quality of the thermal processing as well.

Fig. 11. Points selected for analyzing SNR

Figure 11 illustrates the selection of points for analyzing the SNR. The SNR at a point adjacent to a hole was identified as 83.72, and the SNR at the centre of the hole was identified as 71.26.

6 Conclusions

Here, a comprehensive framework is presented to help researchers in the domain of thermal image processing. The prime objective behind this endeavor is to design a framework tool that has ease of use and versatility in aiding thermal image analysis, with specific emphasis on identification of faults. The proposed tool is capable of providing a seamless and flexible user experience and is also capable of providing multi-modal analysis. The tool is modular in that it can easily accommodate other analysis procedures as well. It is concluded that the proposed tool would be very helpful to researchers for carrying out tasks such as error analysis, system comparison and graphical representations.

References

[1] R. Mulaveesala and S. Tuli, "Applications of frequency modulated thermal wave imaging for non-destructive characterization," NCTP'07, American Institute of Physics, 15, 2007.
[2] S. M. Shepard, "Advances in pulsed thermography," Proceedings of SPIE, Vol. 4360, 511, 2001.
[3] D. Wu and G. Busse, "Lock-in thermography for nondestructive evaluation of materials," Rev. Gén. Therm. 37, 693, 1998.
[4] M. Klein, C. Ibarra-Castanedo, X. Maldague, and A. H. Bendada, "IR-View: a straightforward graphical user interface for basic and advanced signal processing of thermographic infrared sequences," in Thermosense XXX, SPIE Defense and Security Symposium, V. P. Vavilov and D. D. Burleigh, Eds., Vol. 6939, p. 693914, Orlando, Florida, USA, March 16-20, 2008.
[5] R. Jones and S. Pitt, using Altair-Li in "An experimental evaluation of crack face energy dissipation," International Journal of Fatigue, Vol. 28, Issue 12, pp. 1716-1724, December 2006.
[6] F. Escourbiac, S. Constans, X. Courtois, A. Durocher, using Altair-Li in "Application of lock-in thermography non destructive technique to CFC armoured plasma facing components," Journal of Nuclear Materials 367–370, 1492–1496, 2007.
[7] Software ThermoFitPro, from Innovation Inc., http://www.innovation.tomsk.ru/ind_en.html
[8] Vavilov V. P., Kourtenkov D., Grinzato E., Bison P., Marinetti S., Bressan C., "Inversion of Experimental Data and Thermal Tomography Using 'Thermo Heat' and 'Termidge' Software," Proc. QIRT'94, pp. 273-278, 1994.

English Essay: The Invention of the Computer

Title: The Invention of the Computer

The invention of the computer stands as one of the most significant milestones in human history. Its impact on society, economy, and technology is profound, shaping the way we live, work, and communicate. In this essay, we will delve into the history of the computer, its evolution, and its transformative effects.

The concept of a computer dates back to ancient times, when humans devised tools like the abacus to aid in calculations. However, the modern computer as we know it emerged in the 20th century, driven by the need for efficient computation and data processing.

One of the pivotal moments in the history of computing was the development of the electronic computer. In the 1940s, pioneers like Alan Turing, John von Neumann, and others laid the groundwork for electronic computing machines. These early computers were massive, cumbersome machines that relied on vacuum tubes and punched cards for processing information.

The breakthrough came with the invention of the transistor in the late 1940s, which paved the way for smaller, faster, and more reliable computers. The introduction of integrated circuits further revolutionized the field, enabling the development of miniaturized and affordable computers.

In the 1970s, the personal computer (PC) era dawned with the introduction of machines like the Altair 8800 and the Apple I. These early PCs were rudimentary by today's standards but represented a significant shift in computing, bringing the power of computation to individuals and small businesses.

The 1980s and 1990s witnessed rapid advancements in computer technology, with the rise of graphical user interfaces (GUIs), networking, and the internet. The World Wide Web, invented by Tim Berners-Lee in 1989, transformed the internet into a global platform for communication, commerce, and collaboration.

The 21st century saw the proliferation of smartphones, tablets, and other mobile devices, further blurring the lines between computing and everyday life. Cloud computing, artificial intelligence (AI), and big data emerged as dominant trends, driving innovation across various industries.

Today, computers are ubiquitous, permeating every aspect of society. From smartphones in our pockets to supercomputers powering scientific research, computing technology touches nearly every facet of modern life.

The impact of the computer revolution is undeniable. It has revolutionized industries, from finance and healthcare to entertainment and manufacturing. It has empowered individuals, enabling access to information, communication, and resources like never before. It has transformed education, commerce, and governance, reshaping the way we learn, work, and govern ourselves.

However, the rise of the computer age also poses challenges and concerns. Issues like privacy, cybersecurity, and the digital divide loom large in an increasingly connected world. As we embrace the benefits of computing technology, we must also address these challenges to ensure a fair, inclusive, and secure digital future.

In conclusion, the invention of the computer marks a watershed moment in human history. From its humble beginnings as a room-sized machine to its ubiquitous presence in our daily lives, the computer has transformed the world in profound ways. As we continue to push the boundaries of computing technology, let us strive to harness its power for the betterment of humanity.


arXiv:0708.1321v1 [math.ST] 9 Aug 2007

Graphical Methods for Efficient Likelihood Inference in Gaussian Covariance Models

Mathias Drton^1,* and Thomas S. Richardson^2
^1 Department of Statistics, The University of Chicago, 5734 S. University Avenue, Chicago IL 60637, USA. E-mail: drton@
^2 Department of Statistics, University of Washington, Box 354322, Seattle WA 98195-4322, USA. E-mail: tsr@
* Corresponding author

Running title: Graphical Gaussian Covariance Models

Abstract

In graphical modelling, a bi-directed graph encodes marginal independences among random variables that are identified with the vertices of the graph. We show how to transform a bi-directed graph into a maximal ancestral graph that (i) represents the same independence structure as the original bi-directed graph, and (ii) minimizes the number of arrowheads among all ancestral graphs satisfying (i). Here the number of arrowheads of an ancestral graph is the number of directed edges plus twice the number of bi-directed edges. In Gaussian models, this construction can be used for more efficient iterative maximization of the likelihood function and to determine when maximum likelihood estimates are equal to empirical counterparts.

Keywords: ancestral graph; bi-directed graph; covariance graph; graphical model; marginal independence; maximum likelihood estimation.

1 Introduction

In graphical modelling, bi-directed graphs encode marginal independences among random variables that are identified with the vertices of the graph (Pearl and Wermuth, 1994; Kauermann, 1996; Richardson, 2003). In particular, if two vertices are not joined by an edge, then the two associated random variables are assumed to be marginally independent. For example, the graph G in Figure 1, whose vertices are to be identified with a random vector (X_1, X_2, X_3, X_4), represents the pairwise marginal independences X_1 ⊥⊥ X_3, X_1 ⊥⊥ X_4, and X_2 ⊥⊥ X_4. While other authors (Cox and Wermuth, 1993, 1996; Edwards, 2000) have used dashed edges to represent marginal independences, the bi-directed graphs we employ here make explicit the connection to path diagrams (Wright, 1934; Koster, 1999).

Gaussian graphical models for marginal independence, also known as covariance graph models, impose zero patterns in the covariance matrix, which are linear hypotheses on the covariance matrix (Anderson, 1973). The graph in Figure 1, for example, imposes σ_13 = σ_14 = σ_24 = 0. An estimation procedure designed for covariance graph models is described in Drton and Richardson (2003). Other recent work involving these models includes Mao et al. (2004) and Wermuth et al. (2006).

In this paper we employ the connection between bi-directed graphs and the more general ancestral graphs with undirected, directed, and bi-directed edges (Section 2). We show how to construct a maximal ancestral graph, which we call an oriented simplicial graph and denote by G^os, that is Markov equivalent to a given bi-directed graph G, i.e.
the independence models associated with the two graphs coincide, and such that the number of arrowheads is minimal (Sections 3–4). Here the number of arrowheads of an ancestral graph is the number of directed edges plus twice the number of bi-directed edges. Oriented simplicial graphs provide useful nonparametric information about Markov equivalence of bi-directed, undirected and directed acyclic graphs. For example, the graph G in Figure 1 is not Markov equivalent to an undirected graph because G^os is not an undirected graph, and G is not Markov equivalent to a DAG because G^os contains a bi-directed edge. For other recent results on Markov equivalence, see e.g. Roverato (2005).

For covariance graph models, oriented simplicial graphs allow one to determine when the maximum likelihood estimate of a variance or covariance is available explicitly as its empirical counterpart (Section 5). For example, since no arrowheads appear at the vertices 1 and 4 in the graph G^os in Figure 1, the maximum likelihood estimates of σ_11 and σ_44 must be equal to the empirical variance of X_1 and X_4, respectively. The likelihood function for covariance graph models may be multi-modal, though simulations suggest this only occurs at small sample sizes, or under mis-specification (Drton and Richardson, 2004b). However, inspection of the oriented simplicial graph can reveal that certain parameters will take the same value at every mode. Perhaps most importantly, oriented simplicial graphs allow for computationally more efficient maximum likelihood fitting; see Remark 5.1 and the example in Section 6.

Figure 1: A bi-directed graph G and its oriented simplicial graph G^os.

2 Graphical terminology

In this paper, we consider simple mixed graphs, which feature undirected (v − w), directed (v → w) and bi-directed (v ↔ w) edges under the constraint that there is at most one edge between two vertices. For a formal definition, let E = {∅, −, ←, →, ↔} be the set of possible edges between an ordered pair of vertices, ∅ denoting that there is no edge. A simple mixed graph G = (V, E) is a pair of a finite vertex set V and an edge map E : V × V → E. The edge map E has to satisfy, for all v, w ∈ V:
(i) E(v, v) = ∅, i.e. there is no edge between a vertex and itself;
(ii) E(v, w) = E(w, v) if E(v, w) ∈ {−, ↔};
(iii) E(v, w) = → ⟺ E(w, v) = ←.
In the sequel, we write v − w ∈ G, v → w ∈ G, v ← w ∈ G or v ↔ w ∈ G if E(v, w) equals −, →, ← or ↔, respectively. If E(v, w) ≠ ∅, then v and w are adjacent. If there is an edge v ← w ∈ G or v ↔ w ∈ G, then there is an arrowhead at v on this edge. If there is an edge v → w ∈ G or v − w ∈ G, then there is a tail at v on this edge. A vertex w is in the boundary of v, denoted by bd(v), if v and w are adjacent. The boundary of a vertex set A ⊆ V is the set bd(A) = [∪_{v∈A} bd(v)] \ A. We write Bd(v) = bd(v) ∪ {v} and Bd(A) = bd(A) ∪ A. An induced subgraph of G over a vertex set A is the mixed graph G_A = (A, E_A), where E_A is the restriction of the edge map E to A × A. The skeleton of a simple mixed graph is obtained by making all edges undirected.

In a simple mixed graph, a sequence of adjacent vertices (v_1, ..., v_k) uniquely determines the sequence of edges joining consecutive vertices v_i and v_{i+1}, 1 ≤ i ≤ k−1. Hence, we can define a path π between two vertices v and w as a sequence of distinct vertices π = (v, v_1, ..., v_k, w) such that each vertex in the sequence is adjacent to its predecessor and its successor. A path v → ··· → w with all edges of the form → and pointing toward w is a directed path from v to w. If there is such a directed path from v to w ≠ v, or if v = w, then v is an ancestor of w. We denote the set of all ancestors of a vertex v by An(v), and for a vertex set A ⊆ V we define An(A) = ∪_{v∈A} An(v). Finally, a directed path from v to w together with an edge w → v ∈ G is called a directed cycle.

Important subclasses of simple mixed graphs are illustrated in Figure 2. Bi-directed, undirected and directed acyclic graphs contain only one type of edge. Directed acyclic graphs (DAGs) are directed graphs without directed cycles. These three types of graphs are special cases of ancestral graphs (Richardson and Spirtes, 2002).

Figure 2: Simple mixed graphs. (i) A bi-directed graph, (ii) an undirected graph, (iii) a DAG, (iv) an ancestral graph.

Definition 2.1. A simple mixed graph G that does not contain any directed cycles is an ancestral graph if the following conditions are met:
Hence,we can define a pathπbetween two vertices v and w as a sequence of distinct verticesπ=(v,v1,...,v k,w)such that each vertex in the sequence is adjacent to its predecessor and its successor.A path v→···→w with all edges of the form→and pointing toward w is a directed path from v to w.If there is such a directed path from v to w=v,or if v=w,then v is an ancestor of w.We denote the set of all ancestors of a vertex v by An(v)and for a vertex set A⊆V we define An(A)=∪v∈A An(v).Finally, a directed path from v to w together with an edge w→v∈G is called a directed cycle.Important subclasses of simple mixed graphs are illustrated in Figure2.Bi-directed, undirected and directed acyclic graphs contain only one type of edge.Directed acyclic graphs(DAGs)are directed graphs without directed cycles.These three types of graphs are special cases of ancestral graphs(Richardson and Spirtes,2002).Definition2.1.A simple mixed graph G that does not contain any directed cycles is an ancestral graph if the following conditions are met:(i)(ii)vxwy(iii)(iv)Figure2:Simple mixed graphs.(i)A bi-directed graph,(ii)an undirected graph,(iii)a DAG,(iv)an ancestral graph.(i)if v−w∈G,then there does not exist u such that u→v∈G or u↔v∈G;(ii)if v↔w∈G,then v is not an ancestor of w.Ancestral graphs can be given an independence interpretation,known as global Markov property,by a graphical separation criterion called m-separation(Richardson and Spirtes, 2002,§3.4).An extension of Pearl’s(1988)d-separation for DAGs,m-separation usesthe notion of colliders:a non-endpoint vertex v on a path is a collider on the path ifthe edges preceding and succeeding v on the path both have an arrowhead at v,that is,→v←,→v↔,↔v←or↔v↔is part of the path;a non-endpoint vertex v which isnot a collider is a non-collider on the path.Definition2.2.A path between vertices v and w in an ancestral graph G is said to bem-connecting given a set C⊆V(possibly empty),with v,w/∈C,if(i)every non-collideron the path is not in C,and(ii)every collider on the path is in An(C).If no path m-connects v and w given C,then v and w are said to be m-separated given C.Sets A andB are m-separated given C,if any two vertices v∈A and w∈B are m-separated givenC(A,B,C are disjoint sets;A,B are nonempty).Let G=(V,E)be an ancestral graph whose vertices index a random vector(X v|v∈V).For A⊆V,let X A be the subvector(X v|v∈A).The global Markov propertyfor G states that X A⊥⊥X B|X C,i.e.X A is conditionally independent of X B given X C, whenever A and B are m-separated given C in G.Subsequently,we write A⊥⊥B|C asa shorthand that avoids making the probabilistic context explicit.The global Markov property for the graphs in Figure2states that(i)v⊥⊥y and w⊥⊥x;(ii)v⊥⊥y|{w,x}andw⊥⊥x|{v,y};(iii)v⊥⊥y|{w,x}and w⊥⊥x|v;(iv)v⊥⊥y|x and w⊥⊥x|v.A bi-directed graph G encodes marginal independences in that the global Markov property states that v⊥⊥w if v and w are not adjacent.In a multivariate normal dis-tribution these pairwise marginal independences hold iffall independences stated bythe global Markov property for G hold(Kauermann,1996).Without any distributional assumption,Richardson(2003,§4)shows that the independences stated by the global Markov property of a bi-directed graph hold iffcertain(not only pairwise)marginal independences hold;cf.Mat´uˇs(1994).The graphs in Figure2have the property that for every pair of non-adjacent vertices vand w there exists some subset C such that the global Markov property states that v⊥⊥w|C.Ancestral graphs with this property are called 
maximal. DAGs, bi-directed and undirected graphs are always maximal. If an ancestral graph G is not maximal, then there exists a unique Markov equivalent maximal ancestral graph Ḡ such that G is a subgraph of Ḡ; any edge in Ḡ that is not present in G is bi-directed (Richardson and Spirtes, 2002, §3.7). Here two ancestral graphs G_1 and G_2 are Markov equivalent if the global Markov property for G_1 states the same independences as the global Markov property for G_2. Subsequently we will employ repeatedly the following simple lemma.

Lemma 2.1. If G is a bi-directed graph and Ḡ is an ancestral graph such that G and Ḡ have the same skeleton and are Markov equivalent, then Ḡ is a maximal ancestral graph.

Proof. Let v and w be non-adjacent vertices in Ḡ. Since v and w are also non-adjacent in G and every non-endpoint vertex on a path in G is a collider, the global Markov property of G states that v ⊥⊥ w. From the Markov equivalence of Ḡ and G, it follows that v ⊥⊥ w is also implied by the global Markov property of Ḡ. Thus, Ḡ is maximal.

3 Simplicial sets

In this section we show how simplicial vertex sets of a bi-directed graph can be used to construct a Markov equivalent maximal ancestral graph by removing arrowheads from certain bi-directed edges. Simplicial sets are also important in other contexts such as collapsibility (Madigan and Mosurski, 1990; Kauermann, 1996; Lauritzen, 1996, §2.1.3, p. 121 and 219) and triangulation of graphs (Jensen, 2001, §5.3).

Definition 3.1. A vertex v ∈ V is simplicial if Bd(v) is complete, i.e. every pair of vertices in Bd(v) are adjacent. Similarly, a set A ⊆ V is simplicial if Bd(A) is complete.

Proposition 3.1. If v ∈ V is simplicial and w ∈ bd(v), then Bd(v) ⊆ Bd(w).

If an edge between v and w has an arrowhead at v, then we say that we drop the arrowhead at v when either v ← w is replaced by v − w or v ↔ w is replaced by v → w.

Definition 3.2. Let G be a bi-directed graph. The simplicial graph G^s is the simple mixed graph obtained by dropping all the arrowheads at simplicial vertices of G.

For the graph from Figure 1, G^s = G^os; additional examples are given in Figure 3.

Figure 3: Bi-directed graphs with their simplicial and oriented simplicial graphs.
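Definitions 3.1 and 3.2 admit a direct computational reading. The following minimal Python sketch (the adjacency-dict representation and the function names are our own illustration, not from the paper) finds the simplicial vertices of a bi-directed graph and records at which endpoints of each edge the arrowheads of G^s remain:

```python
from itertools import combinations

def simplicial_vertices(bd):
    """Vertices v whose closed boundary Bd(v) = bd[v] | {v} is complete,
    i.e. all pairs of neighbours of v are adjacent (Definition 3.1)."""
    return {v for v, nbrs in bd.items()
            if all(u in bd[w] for u, w in combinations(nbrs, 2))}

def simplicial_graph(bd):
    """For each edge {v, w} of the bi-directed graph G, report the
    endpoints at which G^s keeps an arrowhead (Definition 3.2):
    set() means v - w, {w} means v -> w, {v, w} means v <-> w."""
    simp = simplicial_vertices(bd)
    return {(v, w): {u for u in (v, w) if u not in simp}
            for v in bd for w in bd[v] if v < w}

# The graph G of Figure 1: edges 1<->2, 2<->3, 3<->4.
bd = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(simplicial_vertices(bd))  # {1, 4}
print(simplicial_graph(bd))     # {(1, 2): {2}, (2, 3): {2, 3}, (3, 4): {3}}
```

On the Figure 1 graph this reproduces G^s = G^os with arrowheads only at vertices 2 and 3, matching the discussion in the introduction.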
Theorem 3.1. The simplicial graph G^s induced by a bi-directed graph G is a maximal ancestral graph that is Markov equivalent to G.

Proof. We claim that G^s is a maximal ancestral graph. Let v ∈ V. First assume that there exists w ∈ V such that v − w ∈ G^s. Then v must be simplicial, and consequently there may not exist u ∈ V such that v ← u ∈ G^s or v ↔ u ∈ G^s. Next assume that either u → v ∈ G^s or u ↔ v ∈ G^s and that there exists a directed path v → v_1 → ··· → v_k → u ∈ G^s. Then the presence of the directed edge v → v_1 ∈ G^s implies that v is simplicial, which is in contradiction to the fact that there is an arrowhead at v on the edge between v and u. Thus, G^s is an ancestral graph. Finally, G^s is maximal by Lemma 2.1 in conjunction with the Markov equivalence established next.

Since two vertices are adjacent in G^s iff they are adjacent in G, the Markov equivalence claim follows if we can show that two non-adjacent vertices v and w are m-connected given C ⊆ V in G iff they are m-connected given C in G^s. First, let v and w be non-adjacent vertices that are m-connected given C ⊆ V in G. Let π = (v, v_1, ..., v_k, w) be a path in G that is of minimal length among all paths in G that m-connect v and w given C. Since G is bi-directed, this implies that v_1, ..., v_k are not simplicial, are colliders, and are all in C (Richardson, 2003, Lemma 6). Since G and G^s have the same adjacencies, there exists a (unique) path π^s = (v, v_1, ..., v_k, w) in G^s. Since v_1, ..., v_k are not simplicial in G, they are colliders on π^s, and {v_1, ..., v_k} ⊆ C yields that π^s m-connects v and w given C in G^s.

Conversely, let v and w be two vertices that are m-connected given C ⊆ V in G^s, and let π^s = (v = v_0, v_1, ..., v_k, v_{k+1} = w) be a path in G^s that is of minimal length among all paths in G^s that m-connect v and w given C. Assume there exists a simplicial vertex v_i on π^s. Then it follows that v_{i−1} and v_{i+1} are adjacent in G^s, and that π^s_{−i} = (v, v_1, ..., v_{i−1}, v_{i+1}, ..., v_k, w) is a path in G^s. By definition of G^s, a vertex u in G^s is either such that every edge with endpoint u has an arrowhead at u, or such that every edge with endpoint u has a tail at u. This implies that π^s_{−i} m-connects v and w given C, contradicting that π^s is the shortest such path. Therefore, all of the vertices v_1, ..., v_k are non-simplicial and thus colliders on π^s. Moreover, there cannot be a directed path from any of the vertices v_1, ..., v_k to a vertex in C. This implies that {v_1, ..., v_k} ⊆ C, which yields that the path π = (v, v_1, ..., v_k, w) in G m-connects v and w given C.

Proposition 3.2. A bi-directed graph G is Markov equivalent to an undirected graph iff the simplicial graph G^s induced by G is an undirected graph iff G is a disjoint union of complete graphs.

Proof. If G^s is an undirected graph, then by Theorem 3.1, G is Markov equivalent to an undirected graph, namely G^s. Conversely, if G^s is not an undirected graph, assume that there exists an undirected graph U that is Markov equivalent to G. Necessarily, G, G^s and U have the same skeleton. Since G^s is not undirected, there exists a vertex v that is not simplicial, i.e. there exist two non-adjacent vertices u and w in bd(v). The global Markov property for G states that u ⊥⊥ w. However, the path (u, v, w) m-connects u and w given the empty set in U. Thus, the global Markov property of U does not state u ⊥⊥ w, which contradicts the assumption that U and G are Markov equivalent. Finally, the simplicial graph G^s is an undirected graph iff the vertex set of the inducing bi-directed
graph G can be partitioned into pairwise disjoint sets A_1, ..., A_q such that (a) if v ∈ A_i, 1 ≤ i ≤ q, and w ∈ A_j, 1 ≤ j ≤ q, are adjacent, then i = j, and (b) all the induced subgraphs G_{A_i}, i = 1, ..., q, are complete graphs (Kauermann, 1996).

Under multivariate normality, a bi-directed graph that is Markov equivalent to an undirected graph represents a hypothesis that is linear in the covariance matrix as well as in its inverse. The general structure of such models is studied in Jensen (1988).

4 Inclusion of boundary sets

The simplicial graph G^s sometimes may be a DAG. For example, the graph u ↔ v ↔ w has the simplicial graph u → v ← w. However, there exist bi-directed graphs that are Markov equivalent to a DAG and yet the simplicial graph contains bi-directed edges. For example, the graph G_1 in Figure 3 is Markov equivalent to the DAG G^os_1 in the same figure. Hence, some arrowheads may be dropped from bi-directed edges in a simplicial graph while preserving Markov equivalence. In this section, we construct a maximal ancestral graph that we call an oriented simplicial graph, from which no arrowheads may be dropped without either destroying Markov equivalence or making it not ancestral. Our construction uses inclusion properties of the boundaries Bd(v), v ∈ V.

Lemma 4.1. Let v and w be adjacent vertices in a simplicial graph G^s. Then
(i) if v − w ∈ G^s, then Bd(v) = Bd(w);
(ii) if v → w ∈ G^s, then Bd(v) ⊊ Bd(w);
(iii) if v ↔ w ∈ G^s, then each of Bd(v) = Bd(w), Bd(v) ⊊ Bd(w), and Bd(v) ⊈ Bd(w) ⊈ Bd(v) might be the case.

Proof. (i) and (ii) follow from Proposition 3.1. For (iii) see, respectively, the graphs G^s_1, G^s_2 in Figure 3, and G^s = G^os in Figure 1.

Definition 4.1. Let G be a bi-directed graph. An oriented simplicial graph G^os is a graph created from the simplicial graph G^s by
(i) replacing every bi-directed edge v ↔ w ∈ G^s with Bd(v) ⊊ Bd(w) by the directed edge v → w, and
(ii) replacing every bi-directed edge v ↔ w ∈ G^s with Bd(v) = Bd(w) by a directed edge such that the graph created, i.e. G^os, contains no directed cycles.

Figure 4: Induced subgraphs for which no arrowhead can be dropped from the edge v ↔ w.

The notion of oriented simplicial graphs is well-defined only if in step (ii) of Definition 4.1 directed cycles can indeed always be avoided. However, this can be shown as follows.
The relation u ≺ v given by Bd(u) ⊊ Bd(v) forms a partial order. Consequently, the vertex set of G can be well-ordered as V = {v_1, ..., v_p} such that the strict inclusion Bd(v_i) ⊊ Bd(v_j) implies that i < j. It follows from Lemma 4.1(ii) that after the introduction of the directed edges in step (i) of Definition 4.1, an edge v_i → v_j can only occur if i < j. Furthermore, in step (ii) we can select the directed edges such that v_i → v_j ∈ G^os only if i < j. Then G^os does not contain any directed cycles and meets the criterion to be an oriented simplicial graph.

Examples of oriented simplicial graphs are shown in Figure 3. Note that a given bi-directed graph may have multiple oriented simplicial graphs; e.g. for the graph G_1 in Figure 3 the edge v ↔ w in G^s_1 can be replaced by either v → w or v ← w.

Lemma 4.2. For a bi-directed graph G and an induced oriented simplicial graph G^os, it holds that
(i) if v − w is an undirected edge in G^os, then Bd(v) = Bd(w);
(ii) if v → w is a directed edge in G^os, then Bd(v) ⊆ Bd(w);
(iii) v ↔ w is a bi-directed edge in G^os iff Bd(v) ⊈ Bd(w) ⊈ Bd(v). In other words, v ↔ w ∈ G^os iff there exist vertices x ∈ bd(v) \ {w} and y ∈ bd(w) \ {v} such that the induced subgraph G_{x,y,v,w} equals one of the two graphs shown in Figure 4.

Proof. Follows from Lemma 4.1 and Definition 4.1.

If a graph satisfies properties (i) and (ii) of Lemma 4.2, so that for all distinct vertices v, w ∈ V, v − w or v → w implies Bd(v) ⊆ Bd(w), then we say that G has the directed boundary containment property. By Lemmas 4.1 and 4.2, G^os and G^s have this property.

Theorem 4.1. Let G^os be an oriented simplicial graph for the bi-directed graph G. Then
(i) G^os is a maximal ancestral graph;
(ii) G and G^os are Markov equivalent;
(iii) G^os has the minimum number of arrowheads of all maximal ancestral graphs that are Markov equivalent to G. Here the number of arrowheads of an ancestral graph G with d directed and b bi-directed edges is defined as arr(G) = d + 2b.

Proof of Theorem 4.1(i). Let v and w be two adjacent vertices in G^os. Since v − w ∈ G^os iff v − w ∈ G^s, it follows that there does not exist an arrowhead at v or w; compare the first part of the proof of Theorem 3.1. Furthermore, by definition, G^os does not contain any directed cycles. Finally, assume that there exists v ↔ w ∈ G^os. Then there cannot be a directed path from v to w, since by Lemma 4.2(ii) this would imply Bd(v) ⊆ Bd(w), contradicting Lemma 4.2(iii). Therefore, we have shown that G^os is an ancestral graph. The maximality of G^os will follow from the proof of Theorem 4.1(ii) and Lemma 2.1.

Proof of Theorem 4.1(ii). First, let v and w be non-adjacent vertices that are m-connected given C ⊆ V in G. Let π = (v = v_0, v_1, ..., v_k, v_{k+1} = w) m-connect v and w given C in G and be such that no shorter path m-connects v and w given C. Then v_1, ..., v_k are colliders, {v_1, ..., v_k} ⊆ C, and v_{i−1} and v_{i+1}, i = 1, ..., k, are not adjacent in G. Hence, for all i = 1, ..., k−1, v_{i−1} ∈ Bd(v_i) but v_{i−1} ∉ Bd(v_{i+1}), and similarly v_{i+2} ∉ Bd(v_i) but v_{i+2} ∈ Bd(v_{i+1}). It follows that Bd(v_i) ⊈ Bd(v_{i+1}) and Bd(v_i) ⊉ Bd(v_{i+1}) for all i = 1, ..., k−1. Thus, by Lemma 4.2(iii), v_i ↔ v_{i+1} ∈ G^os, i = 1, ..., k−1, and all of v_2, ..., v_{k−1} are colliders on the path π^os = (v, v_1, ..., v_k, w) in G^os. Similarly, it follows that v_2 ∈ Bd(v_1) \ Bd(v), which entails that Bd(v_1) ⊈ Bd(v). Thus, v_1 is a collider on π^os. Analogously, we can show that v_k is a collider on π^os, which yields that π^os is a path in G^os that m-connects v and w given C.

Conversely, let v and w be two vertices that are m-connected given C ⊆ V in G^os.
Since G^os has the directed boundary containment property, by Proposition A.1 from the Appendix there exists a path π^os = (v, v_1, ..., v_k, w) that m-connects v and w given C in G^os and is such that v_1, ..., v_k are colliders with {v_1, ..., v_k} ⊆ C. This entails that the path π = (v, v_1, ..., v_k, w) in G m-connects v and w given C.

Proof of Theorem 4.1(iii). Let Ḡ be a maximal ancestral graph that is Markov equivalent to the (bi-directed) graph G. The graphs Ḡ, G, and G^os must have the same skeleton. Assume that arr(Ḡ) < arr(G^os). Then either (a) there exists v → w ∈ G^os such that v − w ∈ Ḡ, or (b) there exists v ↔ w ∈ G^os such that v → w ∈ Ḡ or v − w ∈ Ḡ.

Case (a): If v → w ∈ G^os, then w cannot be simplicial. Hence, there exist x, y ∈ bd(w) such that x and y are not adjacent in G^os, thus not adjacent in G (v = x is possible). The global Markov property of G states that x ⊥⊥ y. Since Ḡ is an ancestral graph and v − w ∈ Ḡ, however, there may not be any arrowheads at w on the edges between x and w, and y and w in Ḡ. Therefore, x and y are m-connected given ∅ in Ḡ, which yields that the global Markov property of Ḡ does not imply x ⊥⊥ y; a contradiction.

Case (b): Now v ↔ w ∈ G^os but there is no arrowhead at v on the edge between v and w in Ḡ. By Lemma 4.2(iii) there exists x ∈ bd(v) \ Bd(w) such that x and w are not adjacent in G^os. Thus x and w are not adjacent in G, and x ⊥⊥ w is stated by the global Markov property for G. In Ḡ, however, v is a non-collider on the path (x, v, w) and thus this path m-connects x and w given ∅, which yields that the global Markov property of Ḡ does not imply x ⊥⊥ w; a contradiction.

Corollary 4.1. A bi-directed graph G is Markov equivalent to an undirected graph iff the induced oriented simplicial graph G^os is an undirected graph.

Proof. The claim follows immediately from Proposition 3.2, since G^os is an undirected graph iff G^s is an undirected graph.

Theorem 4.2. Let G be a bi-directed graph with oriented simplicial graph G^os. Then G is Markov equivalent to a DAG iff G^os contains no bi-directed edges.

Theorem 4.2 can be shown to be equivalent to the Markov equivalence result stated without proof in Pearl and Wermuth (1994, Thm. 1). The latter theorem requires 'no chordless four-chain', which must be read as excluding graphs with induced subgraphs equal to either of the graphs in Figure 4. Under this condition, Pearl and Wermuth (1994) also state that a Markov equivalent DAG can be constructed from the (undirected) skeleton of G by introducing directed and bi-directed edges in an operation they term 'sink orientation', and turning remaining undirected edges into directed ones. The sink orientation of the graph G_1 in Figure 3 has the directed edges of G^s_1 but an undirected edge v − w. Thus sink orientation need not yield an ancestral graph.

The class of covariance models considered in Theorem 4.2 also appears in the construction of generalized Wishart distributions (Letac and Massam, 2007, Thm. 2.2), where the models are called homogeneous and characterized in terms of Hasse diagrams.
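Definition 4.1 together with Theorem 4.2 suggests a simple mechanical test for Markov equivalence to a DAG: form G^os by orienting the remaining bi-directed edges of G^s along strict inclusion of closed boundaries, and check whether any edge with incomparable boundaries survives. The sketch below continues the illustration given after Definition 3.2 (same hypothetical adjacency-dict convention, assuming comparable integer vertex labels; the fixed-order tie-break for equal boundaries is one valid choice, since boundaries never decrease along directed edges and therefore no directed cycle can arise):

```python
from itertools import combinations

def simplicial_vertices(bd):
    # As in the sketch after Definition 3.2.
    return {v for v, nbrs in bd.items()
            if all(u in bd[w] for u, w in combinations(nbrs, 2))}

def oriented_simplicial_edges(bd):
    """Classify the edges of G^os via Lemma 4.2, using closed boundaries
    Bd(v) = bd[v] | {v}.  Equal boundaries give an undirected edge when
    both endpoints are simplicial; otherwise the edge is oriented by a
    fixed vertex order, which cannot create a directed cycle."""
    simp = simplicial_vertices(bd)
    Bd = {v: bd[v] | {v} for v in bd}
    edges = []
    for v in bd:
        for w in bd[v]:
            if not v < w:
                continue
            if Bd[v] == Bd[w]:
                edges.append((v, w, "-" if v in simp else "->"))
            elif Bd[v] < Bd[w]:
                edges.append((v, w, "->"))
            elif Bd[w] < Bd[v]:
                edges.append((w, v, "->"))
            else:                      # incomparable boundaries
                edges.append((v, w, "<->"))
    return edges

def markov_equivalent_to_dag(bd):
    """Theorem 4.2: G is Markov equivalent to a DAG iff G^os has no
    bi-directed edge."""
    return all(kind != "<->" for _, _, kind in oriented_simplicial_edges(bd))

bd = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}   # the graph G of Figure 1
print(oriented_simplicial_edges(bd))  # [(1, 2, '->'), (2, 3, '<->'), (4, 3, '->')]
print(markov_equivalent_to_dag(bd))   # False: the edge 2 <-> 3 survives
```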
Proof of Theorem 4.2. Let G be a bi-directed graph such that G^os contains no bi-directed edges. By Definition 3.1, the induced subgraph (G^os)_A is undirected and complete if A ⊆ V is a simplicial set. Let A_1, ..., A_q be the inclusion-maximal simplicial sets of G. Let D be a directed graph obtained by replacing each induced subgraph (G^os)_{A_i}, i = 1, ..., q, by a complete DAG. Then D itself has to be acyclic, i.e. a DAG, which can be seen as follows. First, since G^os is an ancestral graph and thus does not contain any directed cycles, a directed cycle π in D must involve a vertex v ∈ ∪_{i=1}^q A_i. Let v ∈ A_j. Since the induced subgraphs D_{A_i}, i = 1, ..., q, are all acyclic, π must also involve a vertex not in A_j. Therefore, there exists an edge x → w on π such that w ∈ A_j and x ∉ A_j. Since the sets A_i are inclusion-maximal simplicial sets, no vertex in A_i, i ≠ j, is adjacent to any vertex in A_j. Hence, x ∉ ∪_{i=1}^q A_i, which implies that the edge x → w is also present in G^os. This is a contradiction to w being a simplicial vertex.

Two vertices are adjacent in G^os iff they are adjacent in D. Further, D satisfies the directed boundary containment property since, by Lemma 4.2, G^os satisfies this property, and if u → ū in D then either u → ū in G^os or u − ū in G^os. If two vertices v and w are not adjacent and π = (v, v_1, ..., v_k, w) is a shortest m-connecting path given C in G^os with the maximum number of vertices in C, then, by Corollary A.1 from the Appendix, π = (v, v_1, w) with v_1 being a collider; otherwise there would have to be a bi-directed edge in G^os. In particular, v_1 ∉ ∪_{i=1}^q A_i and thus (v, v_1, w) has identical edge structure in D and G^os, and by Proposition A.1 m-connects v and w in D. Now let π be a path in D that is shortest among paths m-connecting two vertices v and w given C, and has the most vertices in C. Corollary A.1 implies that π = (v, v_1, w) with v_1 ∉ ∪_{i=1}^q A_i being a collider. Thus (v, v_1, w) has identical edge structure in G^os. Finally, we may apply Proposition A.1 to conclude that π is also m-connecting in G^os, which concludes the proof of Markov equivalence of G^os and the DAG D.

Conversely, let G be a bi-directed graph with oriented simplicial graph G^os such that v ↔ w ∈ G^os. Suppose for a contradiction that G is Markov equivalent to a DAG D. Note that D must have the same skeleton as G (and G^os). By Lemma 4.2(iii), there exist two different vertices x ∈ bd(v) \ Bd(w) and y ∈ bd(w) \ Bd(v) such that, by the Markov property of G, x ⊥⊥ w and v ⊥⊥ y. Hence, v and w must be colliders on the paths (x, v, w) and (v, w, y) in D, respectively. This is impossible in the DAG D.
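The proof's construction of D — keep the directed edges of G^os and replace each complete undirected block (G^os)_{A_i} by a complete DAG — can be sketched as follows. Orienting the undirected edges of a block by any fixed linear order produces a transitive tournament, i.e. a complete DAG, and the argument above guarantees that the resulting D is acyclic overall. Function names remain hypothetical:

def equivalent_dag(adj):
    """Sketch of the DAG D constructed in the proof of Theorem 4.2; it assumes
    that G^os contains no bi-directed edges."""
    dag = []
    for (v, w), kind in oriented_simplicial_graph(adj).items():
        if kind == '<->':
            raise ValueError('G is not Markov equivalent to a DAG (Theorem 4.2)')
        if kind == '->':
            dag.append((v, w))                           # directed edges of G^os are kept
        else:                                            # undirected edge inside a simplicial block:
            dag.append(tuple(sorted((v, w), key=str)))   # orient by a fixed linear order
    return dag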
As the next result reveals, bi-directed graphs that are Markov equivalent to DAGs exhibit a structure that corresponds to a multivariate regression model.

Proposition 4.1. Let G be a connected graph. If G^os contains no bi-directed edges, then the set A of all simplicial vertices is non-empty, the induced subgraph (G^os)_A is a disjoint union of complete undirected graphs, the induced subgraph (G^os)_{V\A} is a complete DAG, and an edge v → w joins any two vertices v ∈ A and w ∉ A in G^os.

Proof. For two adjacent vertices v and w in G^os, Lemma 4.2(i)-(ii) implies that Bd(v) ⊆ Bd(w) or Bd(w) ⊆ Bd(v). Hence, we can list the vertex set as V = {v_1, ..., v_p} such that Bd(v_i) ⊆ Bd(v_j) if v_i and v_j are adjacent and i ≤ j. It follows that v_1 ∈ A and thus A ≠ ∅. Let A_1, ..., A_q be the inclusion-maximal simplicial sets of G. Then (G^os)_A equals the union of the disjoint complete undirected graphs (G^os)_{A_1}, ..., (G^os)_{A_q}. Since G^os is an ancestral graph, (G^os)_{V\A} is a DAG.

In order to prove the remaining claims we proceed by induction on |V \ A|. If |V \ A| = 0, then the connected graph G^os is a complete undirected graph and there is nothing to show. Let |V \ A| ≥ 1. If the shortest path between v_{i_1} and v_p in G is of the form v_{i_1} ↔ ... ↔ v_{i_k} ↔ v_p, then i_1 < ... < i_k < p and Bd(v_{i_1}) ⊆ ... ⊆ Bd(v_{i_k}) ⊆ Bd(v_p), which is easily shown by induction on k. However, since v_{i_1} ∈ Bd(v_{i_1}), it must in fact hold that v_{i_1} and v_p are adjacent. Hence, there is an edge between every vertex v ∈ V \ {v_p} and v_p, which for v ∈ A is of the form v → v_p because clearly v_p ∉ A. The proof is finished by combining what we learned about v_p with the induction assumption applied to the induced subgraph G_W with W = {v_1, ..., v_{p−1}}. Note that (G_W)^os does not contain any bi-directed edges because for v, w ∈ W, the inclusion Bd_G(v) ⊆ Bd_G(w) implies that Bd_{G_W}(v) ⊆ Bd_{G_W}(w).
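For completeness, the block structure asserted by Proposition 4.1 can be verified directly on the skeleton, since G, G^os and D all share the same adjacencies. A sketch under the proposition's hypotheses (connected G, no bi-directed edges in G^os), with the same hypothetical conventions as before:

from itertools import combinations

def verify_proposition_4_1(adj):
    """Sketch: check the conclusions of Proposition 4.1 for a connected graph G
    whose oriented simplicial graph contains no bi-directed edges."""
    Bd = {v: adj[v] | {v} for v in adj}
    def complete(S):
        return all(y in adj[x] for x, y in combinations(S, 2))
    A = {v for v in adj if complete(Bd[v])}              # the set of all simplicial vertices
    assert A, 'A must be non-empty'
    non_simplicial = set(adj) - A
    # (G^os)_{V\A} is a complete DAG, so V\A must be a clique in the skeleton:
    assert all(w in adj[v] for v, w in combinations(non_simplicial, 2))
    # every simplicial v is joined to every non-simplicial w (by v -> w in G^os):
    assert all(w in adj[v] for v in A for w in non_simplicial)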
