An Efficient Sequential SAT Solver With Improved Search Strategies
A Privacy-Preserving Multi-Step Attack Correlation Method

Abstract: With the growing number of security threats, organizations have begun to cooperate with one another to defend against attacks jointly. Security alert data from the various organizations often contain sensitive information related to the data owners' privacy, so this sensitive information must be protected before the alert data are shared; however, privacy-protected alert data in turn have a negative effect on subsequent intrusion analysis. To balance the privacy and utility of alert data, this paper protects the alert data with the k-anonymity model and, building on existing sequential pattern mining algorithms, proposes the ESPM sequential pattern mining algorithm to correlate multi-step attacks in the alert data. Experimental results show that the method can effectively mine multi-step attack behavior patterns.

Keywords: privacy protection; k-anonymity; multi-step attack correlation; sequential pattern mining
CLC number: TP309  Document code: A  Article ID: 1007-9416(2011)12-0051-03

1. Introduction

In recent years, security threats such as worms and distributed denial-of-service (DDoS) attacks have increased steadily. To resist these attacks, many organizations have begun to cooperate with each other; for example, CERT (/certcc.html) and DShield (/) collect security alert data on the Internet, correlate and analyze the data, and then publish the analysis results to users and vendors [1]. Security alert data are usually collected from different companies and organizations and therefore contain sensitive information that the data owners are unwilling to disclose or share. To protect the sensitive information in security alert data and prevent it from being exploited by malicious parties, it is particularly important to apply privacy protection to the alert data before sharing.

This paper protects alert data with the k-anonymity [1] privacy-preservation technique and, building on existing sequential pattern mining algorithms, proposes the ESPM (Efficient Sequential Pattern Mining) algorithm to correlate multi-step attacks in the alert data. Experimental results show that the method can still effectively mine multi-step attack behavior patterns from the privacy-protected alert data.

2. Privacy Protection of Alerts

k-Anonymity is an effective approach to the problem of privacy disclosure in data publishing [1]. Based on the k-anonymity idea, this paper improves the classic Incognito algorithm [2] to protect the original alerts, so that the privacy-protected alert data can still be used effectively for multi-step attack correlation analysis.
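As a minimal illustration of the k-anonymity idea applied to alert data (this is a sketch, not the Incognito-based algorithm or the ESPM miner of the paper; the alert fields and helper names are hypothetical), a single quasi-identifier such as the source IP address can be generalized until every generalized value occurs at least k times:

```python
from collections import Counter

def generalize_ip(ip, level):
    """Generalize an IPv4 address by masking its last `level` octets
    (level 0 = full address, level 3 = keep only the first octet)."""
    octets = ip.split(".")
    return ".".join(octets[:4 - level] + ["*"] * level)

def k_anonymize_ips(alerts, k):
    """Raise the generalization level of the source-IP quasi-identifier
    until every generalized value occurs at least k times."""
    for level in range(4):
        generalized = [generalize_ip(a["src_ip"], level) for a in alerts]
        if all(c >= k for c in Counter(generalized).values()):
            return [dict(a, src_ip=g) for a, g in zip(alerts, generalized)]
    # Even the coarsest generalization failed: suppress the attribute entirely.
    return [dict(a, src_ip="*.*.*.*") for a in alerts]

alerts = [
    {"src_ip": "10.0.1.5", "sig": "scan"},
    {"src_ip": "10.0.1.9", "sig": "scan"},
    {"src_ip": "10.0.2.7", "sig": "overflow"},
    {"src_ip": "10.0.2.8", "sig": "overflow"},
]
print(k_anonymize_ips(alerts, k=2))   # every generalized source IP appears >= 2 times
```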
How to Write an Abstract

I. Definition of an abstract
A. ISO standard: a shortened but precise statement of a document's content, without added explanation or comment.
B. China national standard: a short text that concisely and accurately records the important content of a document, for the purpose of providing an outline of that content, without comment or supplementary explanation.

II. Importance of an abstract
A. For readers
1. Committee members, advisors, and editors: they read the abstract to decide whether the thesis is good, whether it is approved, and whether it is published. It is the first impression.
2. Other readers: whether to read or buy the work, and whether it is useful. (It is impossible for readers to read the full text before they read the abstract.)
B. For indexing (readers cannot read all texts; the abstract is what first interests them): an abstract can be used separately as an index entry (e.g., SCI) and as a basis for others' further study.

III. Classification
A. Informative abstract
1. Definition: it summarizes the key information contained in a larger document and typically includes the following elements.
2. Elements:
a. Background information / literature review (optional): what has been learned in this field and what is to be learned.
b. Principal purpose*: what is your purpose, e.g., proving a new idea.
c. Methodology*: what kind of method is used.
d. Results***: original ideas, new discoveries, different ideas.
e. Conclusion and recommendation (optional).
It is long, clearly expressed, and can be used separately.
B. Descriptive abstract: it simply describes what a report, article, or thesis is about and what methods were used. It is short, not clearly expressed, and cannot be used separately (not for an index).
C. Integrated abstract (shortened informative abstract): because an informative abstract can be a little long, most writers combine the two kinds.

IV. Requirements of an abstract
A. Length: it should be a brief and precise statement, roughly 0-300 words (ELT 150; TESOL Quarterly 200; our university: dissertation 1,500, theses 1,200; APA (American Psychological Association) thesis 350; research paper 120; Dissertation Abstracts International 350). For this reason an abstract can be shortened, but it should then be marked MODIFIED.
B. Organic whole: in clear logical sequence, with full content, usable separately.
C. Objective: use third-person pronouns instead of "I, we".
D. Style
1. It should be in a formal style.
2. Choice of words: usually technical terms; note specific requests (TESOL Quarterly asks that the abstract be written in easy, simple words so that those interested can understand the main idea even without background knowledge of the field).
3. Tense: usually present tense; past tense may be used to describe past work.
4. Voice: usually passive voice.
5. Sentences: short or long are both acceptable; paragraphs: one or more.
E. Position: usually after the author and before the text. In a thesis it is placed after the cover and takes one or two pages.
F. Other: do not use formulas, pictures, or diagrams in the abstract.

V. How to write
A. Comprehension of the work: first read the whole work carefully, even if you wrote it yourself, and be clear about its main theme and structure.
B. Be clear about the purpose of the abstract: whether it covers the whole work, a section, or particular problems. (According to this, abstracts can be divided into three classes.)
C. List all the important points in sequence according to the original work, and delete those points that have less value.
D. Write the first draft, paying attention to the following:
1. The abstract should contain the main facts and omit unnecessary details.
2. Keep the proportion of each fact appropriate; the main facts should get a larger share.
3. Pay attention to the cohesion of the abstract (the abstract is itself a piece of writing).
4. Use your own words.
5. Check whether your abstract is of appropriate length.
E. Finalize the abstract according to the requirements.

VI. When to write
A. Time.
B. Reason: write it after the larger document has been completed. The content of an abstract and its level and difficulty will vary depending on the target audience; for example, an abstract written to complement a technical article addresses readers who are familiar both with the subject matter and with the technical language used, so the writer can use technical terms and advanced concepts freely.

Key words
1. Write them before the whole work; this will help you concentrate on the topic.
2. Use nouns or noun phrases.
3. Purpose: they are written for indexing.
4. Usually 3-8 words; use ";" between entries.
5. Choose the most important, most frequent words.
6. They can be modified after the work is finished.
A Study of Appraisal Mechanisms in Scientific Discourse: The Abstracts of Science and Nature as Examples

Martin and Rose point out that appraisal is concerned with the attitudes negotiated in a text, the strength of the feelings involved, and the various expressive means by which values are signalled and readers are aligned [2]. Such resources allow writers to express attitude while still meeting the requirement of textual objectivity imposed by academic norms, avoiding the subjective creation of isolated academic camps and of academic opposition, and thereby helping to build alliances within the academic field and to open up new areas of research.

1. An overview of appraisal theory

Appraisal theory grew out of research on interpersonal meaning within systemic functional linguistics. Its graduation subsystem comprises force and focus (Martin & Rose, 2003).

Although scientific discourse is characterized by rigorous structure and objective expression, the abstracts of Nature and Science show that explicit appraisal, signalled by clear discourse markers, is by no means rare. For example:

(1) "Only rs... mapping in the GPC5 gene was signifi... with emergence of ... innate and adaptive immunity."
(2) "...cantly associated (... allele, P = ...; odds ratio ..., 95% confidence interval ...), supporting a role for this ..."
The barrier function of the skin (English-language literature)

Moisturization and skin barrier function

ABSTRACT: Over the past decade, great progress has been made toward elucidating the structure and function of the stratum corneum (SC), the outermost layer of the epidermis. SC cells (corneocytes) protect against desiccation and environmental challenge. The barrier provided by the SC is largely dependent on several factors. First, intercellular lamellar lipids, organized predominantly in an orthorhombic gel phase, provide an effective barrier to the passage of water through the tissue. Secondly, the diffusion path length also retards water loss, since water must traverse the tortuous path created by the SC layers and corneocyte envelopes. Thirdly, and equally important, is natural moisturizing factor (NMF), a complex mixture of low-molecular-weight, water-soluble compounds first formed within the corneocytes by degradation of the histidine-rich protein known as filaggrin. Each maturation step leading to the formation of an effective moisture barrier, including corneocyte strengthening, lipid processing, and NMF generation, is influenced by the level of SC hydration. These processes, as well as the final step of corneodesmolysis that mediates exfoliation, are often disturbed upon environmental challenge, resulting in dry, flaky skin conditions. The present paper reviews our current understanding of the biology of the SC, particularly its homeostatic mechanisms of hydration.

Keywords: corneocyte, corneodesmolysis, filaggrin, natural moisturizing factor, stratum corneum.

Introduction

For humans to survive in a terrestrial environment, the loss of water from the skin must be carefully regulated by the epidermis, a function dependent on the complex nature of its outer layer, the stratum corneum (SC) (1).
Advances in the application of flow cytometry to cytokine testing and cytological examination in ocular diseases

Chin J Exp Ophthalmol, March 2021, Vol. 39, No. 3 (Review)

Application progress of flow cytometry in the cytokine test and cytological examination for ocular diseases
Liu Ming (1), Tao Yong (2)
1 Beijing GiantMed Medical Diagnostics Laboratory, Beijing 101399, China; 2 Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China
Corresponding author: Tao Yong, Email: taoyong@

[Abstract] Flow cytometry is a detection technique capable of multi-parameter, quantitative analysis and sorting of individual cells or biological particles. As an advanced quantitative cell-analysis technology that has developed rapidly in recent years, it is widely used in immunology, oncology, hematology, and other fields of medical research and clinical diagnosis. The technique can quantitatively detect and analyze cytokine levels and the distribution characteristics of cell subsets in intraocular fluid. Compared with traditional methods such as enzyme-linked immunosorbent assay, cell morphology examination, and immunohistochemistry, flow cytometry offers simple operation, a small required sample volume, high sensitivity, and high throughput. It has been widely applied to the detection of cytokine levels and the analysis of cell subsets in ocular diseases such as intraocular neovascularization-related diseases, intraocular lymphoma, retinopathy, cataract, uveitis, sarcoid uveitis, and infectious intraocular inflammatory diseases, and it plays an increasingly important role in research on the pathogenesis of ocular diseases and in their targeted therapy. This article reviews the application progress of flow cytometry in the cytokine and cytological examination of intraocular fluid in ocular diseases.

[Key words] Flow cytometry; Eye diseases; Intraocular fluid; Cytokine; Cell subsets
Fund program: National Natural Science Foundation of China (82070948); Shunyi District "Beijing Science and Technology Achievements Transformation Coordination and Service Platform" Construction Fund
DOI: 10.3760/cma.j.cn115989-20210118-00049

Flow cytometry is a detection technique that can perform multi-parameter, quantitative analysis and sorting of single cells or biological particles. It is fast, precise, accurate, and high-throughput, making it one of the most advanced quantitative cell-analysis technologies currently available. It is widely used in medical research and clinical diagnosis in immunology, oncology, hematology, and other fields, particularly for lymphocyte subset analysis in immunology.
Studies on the α-Oxidation Reactions of Amides
Abstract

α-Keto amides are valuable intermediates in organic synthesis, have intriguing biological properties, and are present in many natural products as the main skeleton. The existing methods either lack an effective procedure to prepare the starting materials, are incompatible with sensitive functionalities due to the harsh conditions during these transformations, or could potentially be carried out in ...

The whole thesis consists of three parts. In the first part, we described a novel cesium carbonate/tetra-n-butylammonium ... system. In the second part, we developed an efficient procedure to synthesize the above N,N-substituted α-keto amides in ... yields under an atmosphere of air. Different ... with ... as ... preparation ...
An Improved Adaptive Local Noise Reduction Filtering Algorithm

Systems Engineering - Theory & Practice, No. 12, December 2006
Article ID: 1000-6788(2006)12-0099-06

An Improved Adaptive Filtering Algorithm Based on Local Noise Removal
ZHAO Xiaoming (1,2), YE Xijian (1), YAO Min (3)
(1. Department of Computer Science, Taizhou University, Taizhou 317000, China; 2. School of Computer Science, Beijing University of Aeronautics and Astronautics, Beijing 100083, China; 3. College of Computer Science, Zhejiang University, Hangzhou 310027, China)

Abstract: This paper points out a deficiency of the adaptive local noise reduction filter proposed by Professor Rafael C. Gonzalez in Digital Image Processing, and presents an improved adaptive local noise reduction filtering algorithm. Comparison shows that the improved algorithm reduces the mean squared error e_mse by an order of magnitude relative to the original algorithm, and raises the signal-to-noise ratios SNR, SNR_m, and PSNR by about 1/4. Computer simulation experiments confirm that the improved adaptive local noise reduction filtering algorithm is effective and of practical applicability.
Keywords: adaptive; local noise; filter
CLC number: TN713  Document code: A
Received: 2005-12-04. Supported by the National Natural Science Foundation of China (60473024) and the Zhejiang Provincial Natural Science Foundation (M603009). Author: ZHAO Xiaoming (1964-), male, Taizhou, Zhejiang; professor and master's supervisor; research interests include wavelet signal processing and image processing.

1 Introduction

Adaptive filtering is well suited to processing signals with unknown statistics and to removing noise from images, and it has developed rapidly over the past thirty years. Examples include the arithmetic mean filter, geometric mean filter, harmonic mean filter, contraharmonic mean filter, median filter, alpha-trimmed mean filter, Wiener filter, and adaptive local noise reduction filter [1-18], but all of these algorithms have certain shortcomings. The algorithm in [7] does not attenuate impulse noise but actually amplifies it, the parameter h cannot be chosen adaptively, and noise and detail cannot be treated separately. In the algorithm of [11], the parameters K1 and K2 are not adaptive, so their choice is blind, and when a single-pixel-wide line detail with strong contrast against the background appears, the algorithm easily mistakes it for noise and filters it out. The algorithm in [13] cannot recognize a single-pixel-wide continuous thin line inserted into a background of uniform gray level as a region boundary; moreover, the background estimate inside the filter window should be a statistic of the whole window, whereas that algorithm splits the window into left and right regions and treats them separately. The IMFLED algorithm in [16] does not handle edge detail, so edges are treated as noise and smoothed away, causing blurring; in addition, that paper does not explain how the median is chosen for images with only two gray levels.

Among adaptive local noise reduction filtering algorithms, the one proposed by Rafael C. Gonzalez, IEEE Fellow [19], is the most practical. It assumes an image noise model that is additive and position independent, and that the variance of the overall noise, sigma_eta^2, is no larger than the local variance sigma_L^2 at the pixel where the denoising filter is applied; under these assumptions an adaptive local noise reduction filter is constructed. Here "additive" means that the noise terms are mutually independent, do not interfere with each other, and can be superimposed, so that the response to the sum of two inputs equals the sum of the two responses; "position independent" means that the response at any point of the image depends only on the input value at that point and not on its position. The mean measures the average gray level of the region over which it is computed, and the variance measures the average contrast of that region.

Let the filter act on a local region S_xy. The filter response at any point (x, y) of the centered region is based on four quantities:
1) g(x, y), the gray value of the noisy image at (x, y);
2) sigma_eta^2, the variance of the noise corrupting f(x, y) to form g(x, y);
3) m_L, the local mean of the pixels in S_xy;
4) sigma_L^2, the local variance of the pixels in S_xy.
Here (x, y) are pixel coordinates, S_xy is the m x n rectangular window of the adaptive filter acting on the local image region, and f(x, y) is the gray value of the original noise-free image at (x, y).

If f^(x, y) denotes the gray value of the filtered image at any point (x, y), the expected behavior of the filter is:
1) if sigma_eta^2 is zero, the filter should simply return the value of g(x, y), since with zero noise g(x, y) equals f(x, y);
2) if the local variance is high relative to sigma_eta^2, the filter should return a value close to g(x, y); a high local variance is associated with edges, which should be preserved;
3) if the two variances are equal, the filter returns the arithmetic mean of the pixels in S_xy; this occurs when the local area has the same properties as the whole image, and the local noise can then be reduced simply by averaging.

The adaptive local noise reduction filter based on these assumptions is

    f^(x, y) = g(x, y) - (sigma_eta^2 / sigma_L^2) [g(x, y) - m_L],    (1)

The only quantity in (1) that needs to be known or estimated is the global noise variance sigma_eta^2; the other parameters are computed from the pixels (x, y) in S_xy, with the filter window centered at the pixel. Equation (1) assumes sigma_eta^2 <= sigma_L^2; since the noise in the model is additive and position independent, this is a reasonable assumption, because S_xy is a subset of g(x, y). In practice, however, exact knowledge of sigma_eta^2 is rarely available, so the assumption may be violated. For this reason a test is usually built into (1) so that the ratio is set to 1 whenever sigma_eta^2 > sigma_L^2. This makes the filter nonlinear, but it prevents meaningless results (negative gray values, depending on the value of m_L) caused by the lack of knowledge of the image noise variance.
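For reference, the following is a minimal NumPy sketch of the filter in (1); it is an illustration rather than the code used in the paper, the global noise variance is assumed to be supplied by the caller, and the ratio is clipped to 1 as described above.

```python
import numpy as np

def gonzalez_adaptive_filter(g, noise_var, win=3):
    """Adaptive local noise reduction filter, Eq. (1):
    f_hat = g - (sigma_eta^2 / sigma_L^2) * (g - m_L),
    with the ratio clipped to 1 when sigma_eta^2 > sigma_L^2."""
    pad = win // 2
    gp = np.pad(g.astype(float), pad, mode='reflect')
    out = np.empty_like(g, dtype=float)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            S = gp[x:x + win, y:y + win]          # local window S_xy
            m_L = S.mean()                        # local mean
            var_L = S.var()                       # local variance
            if noise_var == 0:
                ratio = 0.0                       # zero noise: return g(x, y)
            else:
                ratio = 1.0 if var_L <= noise_var else noise_var / var_L
            out[x, y] = g[x, y] - ratio * (g[x, y] - m_L)
    return out
```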
2 Limitations of the Gonzalez filtering algorithm

Although the Gonzalez adaptive local noise reduction filter is a fairly practical algorithm, analysis reveals the following shortcomings of the filter in (1):
1) The filtered gray value f^ is overly sensitive to small local variances sigma_L^2. The derivative of f^ with respect to sigma_L is 2 [g(x, y) - m_L] sigma_eta^2 / sigma_L^3, which tends to infinity as sigma_L tends to 0, so a tiny change in sigma_L^2 causes a drastic change in f^.
2) The image produced by the adaptive filter (1) is not smooth enough.
3) It cannot handle the strong-noise/weak-edge case. Filter (1) only handles the weak-noise/strong-edge case: because it is overly sensitive to small sigma_L^2, it applies only when sigma_eta^2 <= sigma_L^2 and is powerless when sigma_eta^2 > sigma_L^2.

The reason is as follows. When the noise probability P_noise in a noisy image increases, the noise variance sigma_eta^2 also increases, as shown in Fig. 1(a). In the weak-noise/strong-edge case, as the noise probability increases, the strong edges become increasingly blurred and their boundaries less distinct, so the noise makes the local variance of a strong-edge image decrease, as shown in Fig. 1(b). In the strong-noise/weak-edge case, as the noise probability increases, the flat gray regions of the image shrink, so the noise makes the local variance of a weak-edge image increase, as shown in Fig. 1(c). Note, however, that as the noise probability keeps increasing, the noise variance sigma_eta^2 and the local variances sigma_L^2 of both weak-noise/strong-edge and strong-noise/weak-edge images eventually converge to finite values rather than growing or shrinking without bound. Of course, as the noise increases, the useful information originally in the image is gradually covered by noise and its share in the noisy image keeps decreasing; in the extreme case the original information is completely covered by noise, the local image then has the same properties as the whole image, and the local variance sigma_L^2 equals the noise variance sigma_eta^2. Fig. 2 illustrates this: Fig. 2(a) shows the variation of the local variance for a weak-noise/strong-edge image and Fig. 2(b) for a strong-noise/weak-edge image; the dashed line is the curve y = x.

(Fig. 1: Relationship between noise and noise variance. Fig. 2: Relationship between the global variance and the local noise variance.)

The above analysis shows that the closer |sigma_eta - sigma_L| is to 0, the more likely the local region S_xy is noise; conversely, the larger |sigma_eta - sigma_L| is, the less likely S_xy is noise. Based on these shortcomings of the Gonzalez filtering algorithm, this paper proposes an improved adaptive local noise reduction filtering algorithm.

3 The improved adaptive local noise reduction filtering algorithm

Under the foregoing assumptions and analysis, this paper proposes a new adaptive local noise reduction filter that improves the adaptive filter (1). The improved filter should have the following expected behavior:
1) if sigma_eta^2 is zero, the filter simply returns g(x, y), since with zero noise g(x, y) equals f(x, y);
2) if the local variance is high relative to sigma_eta^2, the filter returns a value close to g(x, y); high local variance is associated with edges, which should be preserved;
3) if the two variances are equal, the filter returns the arithmetic mean of the pixels in S_xy; this occurs when the local area has the same properties as the whole image, and the local noise can be reduced simply by averaging;
4) the closer the two variances are, the more likely the local region of the adaptive filter contains noise, and the more strongly that point should be smoothed;
5) the filtered gray value f^ should vary gently with small local variances sigma_L^2, and the filter should be able to remove noise adaptively in both the strong-noise/weak-edge and the weak-noise/strong-edge cases.

Based on this expected behavior and the above assumptions, the improved adaptive filter is

    f^(x, y) = g(x, y) - [g(x, y) - m_L] [exp(sigma_eta) - 1]^|sigma_eta - sigma_L|,    (2)

Because the sigma_L^2 that used to sit in the denominator is now placed in an exponent, the sensitivity of the restored gray value f^ to small local variances sigma_L^2 is removed. Indeed, the derivative of f^ with respect to sigma_L is

    -[g(x, y) - m_L] [exp(sigma_eta) - 1]^|sigma_eta - sigma_L| ln[exp(sigma_eta) - 1] sgn(sigma_L - sigma_eta),

and as sigma_L tends to 0 it tends to

    [g(x, y) - m_L] [exp(sigma_eta) - 1]^sigma_eta ln[exp(sigma_eta) - 1]   (a constant),

so a tiny change in sigma_L^2 no longer causes a drastic change in f^. At the same time, the filter gains the ability to denoise strong-noise/weak-edge images, which the original filter lacks. A key parameter in image denoising is the local variance sigma_L^2 of the filter region; the improved algorithm is less sensitive to sigma_L^2 than the original, which means that the image filtered with the improved algorithm is smoother than that obtained with the original algorithm.

The steps of the improved adaptive filtering algorithm are:
1) read the gray data of the noisy image g and obtain the coordinate ranges M and N;
2) compute the noise variance sigma_eta^2;
3) choose the size of the filter template S_xy;
4) initialize x = -1;
5) set x = x + 1;
6) if x > M, stop the filtering computation;
7) set y = 0;
8) read the gray value g(x, y) of the current pixel (x, y);
9) compute the local mean m_L of the filter window S_xy centered at the pixel;
10) compute the local variance sigma_L^2 of the filter window S_xy;
11) compute the filtered gray value using the adaptive filtering formula (2);
12) set y = y + 1;
13) if y > N, go to step 5); otherwise go to step 8).

4 Comparison of the two filtering algorithms

The quality of a noise-removal filter is usually assessed in two ways: subjective evaluation and objective evaluation.

4.1 Subjective evaluation

To verify the effectiveness and superiority of the proposed improved adaptive local noise reduction filtering algorithm, the original and the improved algorithms were both applied to the Lena image in Matlab computer simulations. The experiments used a 3x3 filter template and Gaussian noise with zero mean and an occurrence probability of 0.5% as the noise contamination model. The results are shown in Fig. 3 (simulation results). The filtered images show clearly that the white noise on the left side of Lena's nose and at both corners of the mouth in Fig. 3(c) has disappeared in Fig. 3(d), and that Lena's forehead in Fig. 3(d) is obviously smoother than in Fig. 3(c). The comparison shows that the proposed adaptive local noise reduction filtering algorithm removes noise better than the original algorithm.

4.2 Objective evaluation

The objective criterion quantifies the error between the filtered image and the original noise-free image, usually through some average computed over the whole image or a specified region, giving the mean squared error [2]. Let the original image be {f(i, j), 0 <= i <= M-1, 0 <= j <= N-1}, the corresponding filtered image {f^(i, j)}, and the error image {e(i, j) = f(i, j) - f^(i, j)}. The mean squared error is

    e_mse = (1 / MN) sum_{i=0}^{M-1} sum_{j=0}^{N-1} e^2(i, j),    (3)

and the signal-to-noise ratio is

    SNR = 10 lg [ sum_{i,j} f^2(i, j) / sum_{i,j} (f(i, j) - f^(i, j))^2 ].    (4)

The signal-to-noise ratio, expressed in decibels, is the most common quantitative measure of image quality. Another form of the signal-to-noise ratio first removes the mean of the original image and then computes the ratio:

    f_bar = (1 / MN) sum_{i,j} f(i, j),    (5)

    SNR_m = 10 lg [ sum_{i,j} (f(i, j) - f_bar)^2 / sum_{i,j} (f(i, j) - f^(i, j))^2 ].    (6)

In [2] the most common measure of image quality is the peak signal-to-noise ratio (PSNR). With f_max = 2^K - 1, where K is the number of bits per pixel,

    PSNR = 10 lg [ MN f_max^2 / sum_{i,j} (f(i, j) - f^(i, j))^2 ].    (7)

In many video and image applications K = 8; this paper also uses K = 8 and substitutes f_max = 255 directly. The image-quality measures of the two algorithms, computed by program, are compared below.

Comparison of image-quality measures for the two algorithms:
                                 e_mse        SNR       SNR_m     PSNR
  Original filtering algorithm   0.0025       18.4828   12.2938   26.0206
  New filtering algorithm        0.00075377   23.6898   17.5009   31.2276

The table shows that the mean squared error e_mse of the new adaptive filtering algorithm is much smaller than that of the original, lower by an order of magnitude, while the signal-to-noise ratios SNR, SNR_m, and PSNR are all about 1/4 higher than those of the original algorithm. The results show that the proposed adaptive local noise reduction filter is indeed superior to the original algorithm.
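A corresponding sketch of the improved filter (2) and of the quality measures (3)-(7) is given below; it is illustrative rather than the authors' Matlab code, and sigma_eta is assumed to be estimated externally.

```python
import numpy as np

def improved_adaptive_filter(g, sigma_eta, win=3):
    """Improved filter, Eq. (2):
    f_hat = g - (g - m_L) * (exp(sigma_eta) - 1) ** |sigma_eta - sigma_L|."""
    pad = win // 2
    gp = np.pad(g.astype(float), pad, mode='reflect')
    out = np.empty_like(g, dtype=float)
    base = np.exp(sigma_eta) - 1.0
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            S = gp[x:x + win, y:y + win]
            m_L, sigma_L = S.mean(), S.std()
            out[x, y] = g[x, y] - (g[x, y] - m_L) * base ** abs(sigma_eta - sigma_L)
    return out

def quality_measures(f, f_hat, K=8):
    """Mean squared error, SNR, SNR_m and PSNR of Eqs. (3)-(7)."""
    f, f_hat = f.astype(float), f_hat.astype(float)
    err2 = np.sum((f - f_hat) ** 2)
    mse = err2 / f.size                                          # Eq. (3)
    snr = 10 * np.log10(np.sum(f ** 2) / err2)                   # Eq. (4)
    snr_m = 10 * np.log10(np.sum((f - f.mean()) ** 2) / err2)    # Eqs. (5)-(6)
    psnr = 10 * np.log10(f.size * (2 ** K - 1) ** 2 / err2)      # Eq. (7)
    return mse, snr, snr_m, psnr
```

Note that when sigma_L = sigma_eta the exponential factor equals 1 and the filter returns the local mean, and when sigma_eta = 0 it returns g(x, y), matching expected behaviors 1) and 3) above.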
5 Conclusions

Both the theoretical analysis and the subjective and objective evaluations of the simulation experiments show that the improved adaptive local noise reduction filtering algorithm proposed in this paper is more robust to small local variances than the original algorithm, and that it can be used to process strong-noise/weak-edge images. Moreover, the reduction in the mean squared error of the image filtered with the improved algorithm is considerable, and all of the signal-to-noise ratios improve by about 1/4. The proposed improved adaptive local noise reduction filtering algorithm therefore has considerable value for practical applications.

References:
[1] Shang Jiuquan. Application of Kalman filter method to dynamic parameter estimation of structures [J]. Earthquake Engineering and Engineering Vibration, 1991, 6.
[2] Wu Sentang, Zhang Shuixiang, Chen Haier. Approach for adaptive filter of systems with random changing structures [J]. Journal of Beijing University of Aeronautics and Astronautics, 2002, 28(3): 287-290.
[3] Liu Guangjun, Wu Xiaoping, Guo Jing. A numerically stable sub-optimal parallel Sage adaptive filter [J]. Acta Geodaetica et Cartographica Sinica, 2002, 31(4): 8-10.
[4] Wei Ruixuan, Han Chongzhao, Zhang Zonglin, et al. Robust total least mean square adaptive filter: algorithm and analysis [J]. Acta Electronica Sinica, 2002, 30(7): 1023-1026.
[5] Gao Ying, Xie Shengli. An adaptive filtering algorithm based on recursion of generalized inverse matrix [J]. Acta Electronica Sinica, 2002, 30(7): 1032-1034.
[6] Zhao Xin, Li Jie. A weighted global iteration parametric Kalman filter algorithm [J]. Chinese Journal of Computational Mechanics, 2002, 19(4).
[7] Jing Xiaojun, Li Jianfeng, Xiong Yuqing. An adaptive smooth filter algorithm for still images [J]. Journal of China Institute of Communications, 2002, 23(10).
[8] Guo Yecai, Zhao Junwei, Chen Huawei, Wang Feng. An adaptive filtering algorithm of higher-order cumulant-based signed coherent integration [J]. Journal of System Simulation, 2002, 14(10).
[9] Gao Ying, Xie Shengli. A third-order cumulant-based criterion and adaptive filtering algorithm [J]. Journal of Electronics and Information Technology, 2002, 24(9).
[10] Gao Weiguang, He Haibo. Adaptive robust federated filtering [J]. Journal of Institute of Surveying and Mapping, 2004, 21(1).
[11] Shan Yonggao, Fan Yingle, Pang Quan. A difference-based detail-preserving adaptive filter algorithm [J]. Computer Engineering & Science, 2004, 26(7).
[12] Zhang Heng, Lei Zhihui, Ding Xiaohua. An improved method of median filter [J]. Journal of Image and Graphics, 2004, 9(4).
[13] Li Jianfeng, Yue Guangxin, Shang Yong. Decision-level fusion filtering algorithm based on advanced D-S theory of evidence [J]. Acta Electronica Sinica, 2004, 32(7).
[14] Kong Xiangyu, Wei Ruixuan, Han Chongzhao, Ma Hongguang, et al. Stable total least mean square adaptive filter algorithm [J]. Journal of Xi'an Jiaotong University, 2004, 38(8).
[15] Chen J, Sato Y, Tamura S. Orientation space filtering for multiple orientation line segmentation [J]. IEEE Trans. Pattern Analysis and Machine Intelligence, 2000, 22(5): 417-429.
[16] Qu Yanfeng, Xu Jian, Li Weijun, et al. New effective filtering algorithm for the removal of impulse noise from images [J]. Journal of Computer-Aided Design & Computer Graphics, 2003, 15(4): 397-401.
[17] Zhu Hong, Li Yongsheng, Le Jing, et al. A filter algorithm for edge emphasis of glomerulus in the image of kidney tissue slices [J]. Chinese Journal of Biomedical Engineering, 2003, 22(4): 370-373.
[18] Zhao Chenglin. A new edge-preserving filtering algorithm [J]. Journal of Electronics and Information Technology, 2003, 25(11): 1581-1584.
[19] Rafael C. Gonzalez. Digital Image Processing [M]. 2nd ed. Beijing: Publishing House of Electronics Industry, 2003.
Group Signatures:Provable Security,Efficient Constructions and Anonymity from Trapdoor-HoldersAggelos Kiayias∗Computer Science&Engineering University of ConnecticutStorrs,CT,USA aggelos@Moti YungRSA Laboratories,Bedford,MA,USA,andColumbia UniversityNew York,NY,USAmoti@ AbstractTo date,a group signature construction which is efficient,scalable,allows dynamic adversar-ial joins,and proven secure in a formal model has not been suggested.In this work we give the first such construction in the random oracle model.The demonstration of an efficient construction proven secure in a formal model that captures all intuitive security properties of a certain primitive is a basic goal in cryptographic design.To this end we adapt a formal model for group signatures capturing all the basic requirements that have been identified as desirable in the area and we con-struct an efficient scheme and prove its security.Our construction is based on the Strong-RSA assumption(as in the work of Ateniese et al.).In our system,due to the requirements of provable security in a formal model,we give novel constructions as well as innovative extensions of the un-derlying mathematical requirements and properties.Our task,in fact,requires the investigation of some basic number-theoretic techniques for arguing security over the group of quadratic residues modulo a composite when its factorization is known.Along the way we discover that in the basic construction,anonymity does not depend on factoring-based assumptions,which,in turn,allows the natural separation of user join management and anonymity revocation authorities.Anonymity can,in turn,be shown even against an adversary controlling the join manager.∗Research partly supported by NSF Career Award CNS-0447808.1Contents1Introduction3 2Preliminaries6 3DDH over QR(n)with known Factorization7 4PK-Encryption over QR(n)with split n11 5Group Signatures:Model and Definitions16 6Building a Secure Group Signature206.1The Construction (22)6.2Correctness and Security of the Construction (25)7Separability:Anonymity vs.the GM29 A Generalized Forking Lemma3321IntroductionThe notion of group signature is a central anonymity primitive that allows users to have anonymous non-repudiable credentials.The primitive was introduced by Chaum and Van Heyst[13]and it involves a group of users,each holding a membership certificate that allows a user to issue a publicly verifiable signature which hides the identity of the signer within the group.The public-verification procedure employs only the public-key of the group.Furthermore,in a case of any dispute or abuse,it is possible for the group manager(GM)to“open”an individual signature and reveal the identity of its originator.Constructing an efficient and scalable group signature has been a research target for many years since its introduction with quite a slow progress,see e.g.,[14,12,10,11,8,27,3,2,9,24,7].In many of the early works the signature size was related to the group size.Thefirst construction that appeared to provide sufficient heuristic security and efficiency properties and where user joins are performed by a manager that is not trusted to know their keys,was the scalable scheme of Ateniese,Camenisch,Joye and Tsudik[2].It provided constant signature size and resistance to attacks by coalitions of users.This scheme was based on a novel use of the DDH assumption combined with the Strong-RSA assumption over groups of intractable order.Recently,Bellare,Micciancio and Warinschi[4],noticing that the work of[2]claims a collection of individual 
intuitive security properties,advocated the need for a formal model for arguing the security of group signature.This basic observation is in line with the development of solid security notions in modern cryptography,where a formal model that captures the properties of a primitive is defined and a scheme implementation is formally proven(in some model)to satisfy the security definitions. They also offered a model of a relaxed group signature primitive and a generic construction in that model.Generic constructions are inefficient and many times are simpler than efficient constructions (that are based on specific number theoretic problems).This is due to the fact that generic constructions can employ(as a black box)the available heavy and powerful machinery of general zero-knowledge protocols and general secure multi-party computations.Thus,generic constructions typically serve only as plausibility results for the existence of a cryptographic primitive,cf.[20].The relaxation in the model of[4]amounts to replacing the dynamic adversarial join protocols of[2]where users get individual keys with a trusted party that generates and distributes keys securely(relevant in some settings but perhaps unlikely in others).The above state of affairs([2,4])indicates that there exists a gap in the long progression of research efforts regarding the group signature primitive.This gap is typical in cryptography and is formed by a difference between prohibitively expensive constructions secure in a formal sense on the one hand, and efficient more ad-hoc constructions with intuitive claims on the other.In many cases,as indicated above,it is easier to come up with provably secure generic inefficient constructions or to design efficient ad-hoc constructions.It is often much harder to construct an efficient implementation that is proven secure within a formal model(that convincingly captures all desired intuitive security properties).To summarize the above,it is apparent that the following question remained open by earlier works: Design an efficient group signature with dynamic joins(and no trusted parties)which isprovably secure within a formal model.One of our contributions is solving the above open question by,both,adapting a new model for group signatures(based on the model of traceable signatures of[23]),which follows the paradigm of [22]for the security of signature schemes,as well as providing an efficient provably secure construction (in the sense of the scheme of[2]),and a comprehensive security proof.These contributions reveal many subtleties regarding the exact construction parameters,and in particular issues regarding what intractability assumptions are actually necessary for achieving the3security properties.For example,the anonymity property in our treatment is totally disassociated from any factoring related assumption.We note that,methodologically,in order to reveal such issues,a complete proof is needed following a concrete model.This has not been done in the realm of(efficient) group signatures and concrete proof and model are unique to our work.(We note that even though we try to build our constructions on prior assumptions and systems as much as possible,we need to modify them extensively as required by the constraints imposed by following formal model and arguments).Our investigation also reveals delicate issues regarding the proper formal modeling of the group signature primitive with regards to the work of[4].For example,the need of formalizing security against attacks by any internal or 
external entity that is active in the scheme(i.e.,no trusted parties). Lack of such treatment,while proper for the non-dynamic setting of[4],is insufficient for proving the security of schemes that follow the line of work of[2](i.e.,where there are no trusted key generators). Our Contributions.Below,we outline what this work achieves in more details.1.M ODELING.To model schemes like the scheme of[2]with dynamic(yet sequential)joins and no trusted parties we adapt the model of[23]which is thefirst formal model in the area of group signing without added trusted parties.In particular,our model has the three types of attacks that involve the GM and the users similarly to[23].We extend the model to allow adversarial opening of signatures(see the next paragraph).All the attacks are modeled as games between the adversaries and a party called the interface.The interface represents the system in a real environment and simulates the behavior of the system(a probabilistic polynomial time simulator)in the security proof.The attacker gets oracle query capabilities to probe the state of the system and is also challenged with an attack task.We note that this follows the basic approach of[22]for modeling security of digital signatures,yet in the complicated system with various parties,a few attacks which can co-exist are possible,and needed to be described as part of the system security.2.A DVERSARIAL O PENING IN E FFICIENT S CHEMES.As mentioned above,our formal model ex-tends the security requirements given by the list of security properties of[2]by allowing the adversary to request that the system opens signatures of its choice.In the work of[2],opening of signatures was implicitly assumed to be an internal operation of the GM.We note that such stronger adversarial capability was put forth for thefirst time in the formal model of[4].For achieving an efficient scheme with adversarial opening we needed to develop novel cryptographic constructs.(Note that adversarial opening can also be applied to strengthen the notion of traceable signatures).3.S TRONGER A NONYMITY P ROPERTY.In the scheme of[2]anonymity is claimed against an ad-versary that is not allowed to corrupt the GM.This is a natural choice since in their scheme the GM holds the trapdoor which provides the opening capability,namely an ElGamal key.The GM also holds the trapdoor that is required to enroll users to the group,namely the factorization of an RSA-modulus. 
However,pragmatically,there is no need to combine the GM function that manages group members and allow them to join the group(which in real life can be run by e.g.,a commercial company)with the opening authority function(which in real life can be run by a government entity).To manage members the GM who is the“Join Manager”still needs to know the factorization.The opening authority,on the other hand,must know the ElGamal key.This split of functions(separation of authorities)is not a relaxation of group signatures but rather a constraining of the primitive.One should observe that the introduction of such additional functionalities in a primitive potentially leads to new attacks and to a change in the security model.Indeed in the separated authorities setting,we must allow the anonymity adversary to corrupt the GM as well.4.N UMBER-T HEORETIC R ESULTS AND C RYPTOGRAPHIC P RIMITIVES.The last two contributions above required building cryptographic primitives over the set of quadratic residues modulo n=pq that remain secure when the factorization(into two strong primes)p,q is known to the adversary.4To this end,we investigate the Decisional Diffie Hellman Assumption over the quadratic residues modulo n and we prove that it appears to be hard even if the adversary knows the factorization.In particular,we prove that any adversary that knows the factorization p,q and solves the DDH problem over the quadratic residues modulo a composite n=pq,can be turned into a DDH-distinguisher for quadratic-residues modulo a prime number.This result is of independent interest since it suggests that the DDH over QR(n)does not depend to the factorization problem at all.Also,the present work requires a cca2(chosen ciphertext attack)secure encryption mechanism that operates over the quadratic residues modulo n so that(i)encryption should not use the factorization of n,(i.e.,the factorization need not be a part of the public-key),but on the other hand(ii)the factorization is known to the attacker.In this work we derive such a primitive in the form of an ElGamal variant following the general approach of twin encryption,cf.[29,16,19]which is cca2secure under the DDH assumption in the Random Oracle model(note that our efficient group signature requires the random oracle anyway since it is derived from the Fiat-Shamir transform,cf.[18,1]).5.E FFICIENT C ONSTRUCTION.We provide an efficient construction of a group signature that is proven secure in our model.While,we would like to note that our scheme is motivated by[2](and originally we tried to rely on it as much as possible),our scheme,nevertheless,possesses many subtle and important differences.These differences enable the proof of security of our scheme whereas the scheme presented by[2]claims security in heuristic arguments that are not complete and,in particular, cannot be proven secure in our model:There are many reasons for this,e.g.,the scheme of[2]lacks an appropriate cca2secure identity embedding mechanism.Moreover,our efficient construction can support formally(if so desired),the separation of group management and opening capability–some-thing not apparent in the prior scheme of[2].Finally,we note that a syntactically degenerated version of our construction(that retains its efficiency)can be proven secure in the model of[4](and is,in fact, a non-dynamic group signature scheme of the type they have suggested).An interesting technical result with respect to anonymity compared to previous work is highlighted in our investigation.Anonymity was argued in the work 
of[2]to be based on the decisional Diffie-Hellman Assumption over Quadratic Residues modulo a composite and given that the GM was assumed to be uncorrupted,the key-issuing trapdoor(the factorization of the modulus)was not meant to be known to the adversary.As argued above,we prove that anonymity still holds when the adversary is given the factorization trapdoor.Thus,we disassociate anonymity from the factoring problem.Taking this result independently it also implies the separability between the opening authority and the group manager.In addition,we note that many other technical and subtle details are different in our provable scheme from prior designs.An extended abstract of the present paper appeared in[26].Organization.In section2we present some background,useful tools and the intractability assump-tions.In section3we investigate the behavior of the DDH assumption over the quadratic residues modulo a composite which is multiple of two strong primes,when the factorization is known to the distinguisher.In section4we discuss the kind of cca2security that will be required in our setting (over QR(n)but with known factorization)and we present an efficient and provably secure construc-tion based on the ElGamal twin-encryption paradigm.In section5we present our security model and definitions and in section6we give our construction and its proofs of correctness and security.In sec-tion7we present group signatures with separated authorities(i.e.,the Group Manager(GM)and the Opening Authority(OA)).52PreliminariesN OTATIONS .We will write PPT for probabilistic polynomial-time.If D 1and D 2are two probability distributions defined over the same support that is parameterized by νwe will write dist A (D 1,D 2)to denote the computational distance |Prob x ←D 1[A (x )=1]−Prob x ←D 2[A (x )=1]|.Note thattypically dist A will be expressed as a function of ν.Similarly,we will write dist(D 1,D 2)to denote the maximum distance among all PPT predicates A .Note that the statistical distance of the distribu-tions D 1,D 2,namely 12 x |Prob D 1[x ]−Prob D 2[x ]|might be much larger than the computationaldistance.If n is any number,we will denote by [n ]the set {1,..., n }.If we write a ≡n b for two integers a,b we mean that n divides a −b or equivalently that a,b are the same element within Z n .A function f :I N →R will be called negligible if for all c >0there exists a νc such that for all ν≥νc ,f (ν)<ν−c .In this case we will write f (ν)=negl (ν).PPT will stand for “probabilistic polynomial time.”Throughout the paper (unless noted otherwise)we will work over the group of quadratic residues modulo n ,denoted by QR (n ),where n =pq and p =2p +1and q =2q +1and p,q,p ,q prime numbers.All operations are to be interpreted as modulo n (unless noted otherwise).In general we will use the letter νto denote the security parameter (i.e.,this value will be polynomially related to the sizes of all quantities involved).Next we define the cryptographic intractability assumptions that will be relevant in proving the security properties of our constructions.The first assumption is the Strong-RSA assumption.It is similar in nature to the assumption of the difficulty of finding e -th roots of arbitrary elements in Z ∗n with the difference that the exponent e is not fixed (i.e.,it is not part of the instance).Definition 1Strong-RSA .Given a composite n (as described above),and z ∈QR (n ),it is infeasibleto find u ∈Z ∗n and e >1such that u e =z (mod n ),in time polynomial in ν.Note that the variant we employ above restricts the 
input z to be a quadratic residue.This variant of Strong-RSA has been discussed before,cf.[15],and by restricting the exponent solutions to be only odd numbers we have that (i)it cannot be easier than the standard unrestricted Strong-RSA problem,but also (ii)it enjoys a random-self reducibility property (see [15]).The second assumption that we employ is the Decisional Diffie-Hellman Assumption (see e.g.,[6]for a survey).We state it below for a general group G and later on in definition 5we will specialize this definition to two specific groups.Decisional Diffie-Hellman Given a description of a cyclic (sub)group G that includes a generator g ,a DDH distinguisher A is a polynomial in νtime PPT that distinguishes the family of triples of the form g x ,g y ,g z from the family of triples of the form g x ,g y ,g xy ,where x,y,z ∈R #G .The DDH assumption suggests that this advantage is a negligible function in ν.Finally,we will employ the discrete-logarithm assumption over the quadratic residues modulo n with known factorization (note that the discrete-logarithm problem is assumed to be hard even when the factorization is known,assuming of course that the factors of n are large primes p,q and where p −1and q −1are non-smooth).Definition 2Range-bounded Discrete-Logarithm with known factorization .Given two values a,b that belong to the set of quadratic residues modulo n with known factorization n =pq ,so that there is an x ∈Λ⊆[p q ]:a x =b ,p,q are safe primes,#Λ=Θ(n )for a given constant >0,it is infeasible to find in time polynomial in νthe integer x so that a x =b (mod n ).63DDH over QR(n)with known FactorizationOur constructions will require the investigation of the number-theoretic results presented in this sec-tion that albeit entirely elementary they have not being observed in the literature to the best of our knowledge.In particular we will show that DDH over QR(n)does not depend on the hardness of factoring.Let n be a composite,n=pq with p=2p +1and q=2q +1(p,q,p ,q primes).Recall that elements of Z∗n are in a1-1correspondence with the set Z∗p×Z∗q.Indeed,given b,c ∈Z∗p×Z∗q, consider the system of equations x≡b(mod p)and x≡c(mod q).Using Chinese remaindering we can construct a solution of the above system since gcd(p,q)=1and the solution will be unique inside Z∗n.Alternatively for any a∈Z∗n we canfind the corresponding pair b,c in Z∗p×Z∗q by computing b=a(mod p)and c=a(mod q)(note that gcd(a,n)=1implies that b≡0(mod p)and c≡0(mod q).The mappingρfrom Z∗p×Z∗q to Z∗n is called the Chinese remaindering mapping. 
Observe thatρpreserves quadratic residuosity:ρ(QR(p)×QR(q))=QR(n).The following two lemmas will be useful in the sequel.They show(1)how the Chinese remainder-ing mapping behaves when given inputs expressed as powers inside the two groups QR(p)and QR(q), and(2)how discrete-logarithms over QR(n)can be decomposed.Lemma3Let g1,g2be generators of the groups QR(p)and QR(q)respectively,where the groups are defined as above.Then,ifβ=ρ(g x11,g x22),whereρis the Chinese remaindering mapping,it holds thatβ=αq x1+p x2(mod n)whereα=ρ(g(q )−11,g(p )−12)is a generator of QR(n).Proof.First we show thatαis a generator of QR(n).Assume without loss of generality that p >q .Then it holds that q ∈Z∗p and as a result q is an invertible element of Z∗p.It follows that g 1=g(q )−1 1is well defined and is a generator of QR(p)(since g1is a generator of QR(p)).Furthermorep (mod q )∈Z∗qsince it cannot be the case that p ≡q 0as this would mean that either p =q or p isnot prime.It follows that p has an inverse modulo q and as a result g 2=g(p )−12is well defined and isa generator of QR(q)(since g2is a generator of QR(q)).Finally we remark that if g1,g2are randomly selected generators of QR(p),QR(q)respectively,it holds that g 1,g 2are uniformly distributed over all generators.Sinceα=ρ(g 1,g 2),it follows thatα≡p g 1(p)andα≡q g 2(q).It is easy to see thatαmust be a generator unless the order ofαinside Z∗n is divisible by either p or q ;but this can only happen if α≡p1orα≡q1something not possible unless either g 1≡p1or g 2≡q1.This case is excluded given that g 1,g 2are generators of their respective groups QR(p)and QR(q).This completes the argument thatαis a generator of QR(n).Now,sinceβ=ρ(g x11,g x22)it follows thatβ≡g x11(p)andβ≡g x22(q);Using this fact together with the properties ofαwe have:αq x1+p x2≡pαq x1≡p(g(q )−11)q x1≡p g x11αq x1+p x2≡qαp x2≡p(g(p )−12)p x2≡p g x22Due to the uniqueness of the Chinese remaindering solution inside Z∗n it follows thatβ=αq x1+p x2(mod n)is the solution of the system.Lemma4Fix a generatorαof QR(n)and an integer t∈I N.The mappingτα:Z p ×Z q →QR(n),withτα(x1,x2)=α(q )t x1+(p )t x2is a bijection.The inverse mappingτ−1αis defined asτ−1α(αx)=(q )−t x mod p ,(p )−t x mod q .7Proof.Let x1,x2 , x 1,x 2 ∈Z p ×Z q be two tuples withτ(x1,x2)=τ(x 1,x 2).It follows that (q )t x1+(p )t x2≡order(α)(q )t x 1+(p )t x 2;sinceαis a generator,p q |(q )t(x1−x 1)+(p )t(x2−x 2), from which we have p |(q )t(x1−x 1)which implies p |x1−x 1,i.e.,x1=x 1.In a similar fashion we show that x2=x 2.The onto property follows immediately from the number of elements of the domain and the range.Regarding the inverse,define q∗,p∗to be integers in Z p ,Z q respectively,so that q∗(q )t≡p 1and p∗(p )t≡q 1.Moreover let y1=q∗x mod p and y2=p∗x mod q .Letπ1,π2be integers so that q∗x=π1p +y1and p∗x=π2q +y2.We will show that(q )t y1+(p )t y2≡p q x which will complete the proof.In order for p q to divide(q )t y1+(p )t y2−x it should hold that both p ,q divide(q )t y1+(p )t y2−x.Indeed,p divides(q )t y1+(p )t y2−x since(q )t y1+(p )t y2−x=(q )t(q∗x−π1p )+p y2−x≡p (q )t q∗x−x≡p 0.In a similar fashion we show that q divides(q )t y1+(p )t y2−x.From these two facts it follows immediately thatτ(τ−1(αx))=τ( y1,y2 )=αx.Let desc(1ν)be a PPT algorithm,called a group descriptor,that on input1νit outputs a description of a cyclic group G denoted by˜d G.Depending on the group,˜d G may have many entries;in our setting it will include a generator of G,denoted by˜d G.gen and the order of G denoted by˜d G.ord.We require that2ν−1≤˜d 
G.ord<2ν,i.e.,the order of G is aν-bit number with thefirst bit set.Additionally ˜dGcontains the necessary information that is required to implement multiplication over G.We will be interested in the following two group descriptors:•desc p:Given1νfind aν-bit prime p >2ν−1for which it holds that p=2p +1and p is also prime.Let g be any non-trivial quadratic residue modulo p.We set QR(p)to be the group of quadratic residues modulo p(which in this case is of order p and is generated by g).The descriptor desc p returns g,p,p and it holds that if˜d←desc p(1ν),˜d.ord=p and˜d.gen=g.•desc c:Givenνfind two distinct primes p ,q of bit-lengthν/2so that p q is aν-bit number that is greater than2ν−1and so that there exist primes p,q such that p=2p +1and q= 2q +1.The descriptor desc c returns α,n,p,q,p ,q and it holds that if˜d←desc c(1ν),˜d.ord=p q and˜d.gen=α.The implementation of descc that we will employ is the following:execute desc p twice,to obtain˜d1= g1,p,p and˜d2= g2,q,q with p=q,and set˜d=g,n=pq,p,q,p ,q whereα=ρ(g(q )−11,g(p )−12).For such a description˜d we will call thedescriptions˜d1and˜d2,the prime coordinates of˜d.Note that in the(unlikely)event p=q the procedure is repeated.Definition5A Decisional Diffie Hellman(DDH)distinguisher for a group descriptor desc is a PPT algorithm A with range the set{0,1};the advantage of the distinguisher is defined as follows:Adv DDHdesc,A (ν)=dist A(D descν,R descν)where D descνcontains elements of the form ˜d,g x,g y,g x·y where˜d←desc(1ν),g=˜d.gen andx,y←R[˜d.ord],and R descνcontains elements of the form ˜d,g x,g y,g z where˜d←desc(1ν),g=˜d.gen and x,y,z←R[˜d.ord].Finally we define the overall advantage quantified over all distinguishersas follows:Adv DDHdesc (ν)=max P P T A Adv DDHdesc,A(ν).The main result of this section is the theorem below that shows that the DDH over QR(n)with known factorization is essentially no easier than the DDH over the prime coordinates of QR(n).The proof of the theorem is based on the construction of a mapping of DDH triples drawn from the two prime coordinate groups of QR(n)into DDH triples of QR(n)that is shown in the following lemma:8Lemma6Let˜d←desc c(1ν)with˜d1,˜d2←desc p(1ν/2),its two prime coordinates,such that˜d1= g1,p,p and˜d2= g2,q,q .Consider a mappingρ∗defined as follows:ρ∗( ˜d1,A1,B1,C1 , ˜d2,A2,B2,C2 )=df ˜d,ρ(A1,A2),ρ(B1,B2),ρ((C1)q,(C2)p)⊥so that the⊥output is given if and only if˜d1.ord=˜d2.ord.Then it holds,thatρ∗satisfies the properties(i)dist(ρ∗(D desc pν/2,D desc pν/2),D desc cν)≤3log2·ν2ν/2and(ii)dist(ρ∗(R desc pν/2,R desc pν/2),R desc cν)≤3log2·ν2ν/2.Proof.Observe that if A1=g x11,B1=g y11,C1=g x1y11and A2=g x22,B2=g y22,C2=g x2y21,basedon the properties of the mappingρshown in lemma3it follows thatρ(A1,A2)=αq x1+p x2andρ(B1,B2)=αq y1+p y2ρ((C1)q ,(C2)p )=α(q )2x1y1+(p )2x2y2Now we show that if A1,B1,C1 is a DDH triple from˜d1,and A2,B2,C2 is a DDH triple from˜d2 then A,B,C is a DDH triple from˜d that has˜d1and˜d2as its two prime coordinates:αlogαA logαB=α(q x1+p x2)(q y1+p y2)=α(q )2x1y1+(p )2x2y2+p q (x1y2+x2y1)≡nα(q )2x1y1+(p )2x2y2=CFrom the above and lemma4and standard results on the distribution of primes we can deduce easily thatdist(ρ∗(D desc pν/2,D desc pν/2),D desc cν)≤3log2·ν2ν/2,i.e.,the two distributions are statistically indistinguishable.We conclude that the distribution defined byρ∗when applied to two distributions of DDH triples fromD desc pν/2over the respective groups is statistically close to the distribution D desc cν.This completes theproof for property(i)of the lemma.Regarding 
property(ii),observe that if A1=g x11,B1=g y11,C1= g z11and A2=g x22,B2=g y22,C2=g z21,based on the properties of the mappingρshown in lemma3it follows thatρ(A1,A2)=αq x1+p x2andρ(B1,B2)=αq y1+p y2ρ((C1)q ,(C2)p )=α(q )2z1+(p )2z2and thus,using lemma4,dist(ρ∗(R desc pν/2,R desc pν/2),R desc cν)≤3log2·ν2ν/2,i.e.,the two distributions arestatistically indistinguishable.The lemma is used for the proof of the theorem below:Theorem7Adv DDHdesc c (ν)≤2Adv DDHdesc p(ν/2)+(6log2·ν)/2ν/2.Proof.Let A be any DDH-distinguisher for desc ing property(i)of lemma6,we have thatdist A(D desc cν,ρ∗(D desc pν/2,D desc pν/2))≤3log2·ν2ν/2and given thatdist A(ρ∗(D desc pν/2,D desc pν/2),ρ∗(R desc pν/2,D desc pν/2))≤≤Adv DDHdesc p(ν/2)9we obtain(Fact1)dist A(D desc cν,ρ∗(R desc pν/2,D desc pν/2))≤≤Adv DDHdesc p (ν/2)+3log2·ν2Now using property(ii)of lemma6we have thatdist A(R desc cν,ρ∗(R desc pν/2,R desc pν/2))≤3log2·ν2ν/2and given thatdist A(ρ∗(R desc pν/2,D desc pν/2),ρ∗(R desc pν/2,R desc pν/2))≤≤Adv DDHdesc p(ν/2) we obtain(Fact2)dist A(ρ∗(R desc pν/2,D desc pν/2),R desc cν)==Adv DDHdesc p (ν/2)+3log2·ν2ν/2Finally by applying the triangle inequality to facts1and2above,we obtain:Adv DDHA,desc c (ν)=dist A(D desc cν,R desc cν)≤≤2·Adv DDHdesc p (ν/2)+6log2·ν2ν/2Since the above holds for an arbitrary choice of A the statement of the theorem follows.We proceed to state explicitly the two variants of the DDH assumption:Definition8The following are two Decisional Diffie Hellman Assumptions:•The DDH assumption over quadratic residues modulo a safe prime(DDH-Prime)asserts that:Adv DDHdesc p (ν)=negl(ν).•The DDH assumption over quadratic residues modulo a safe composite with known Factorization (DDH-Comp-KF)asserts that:Adv DDHdesc c(ν)=negl(ν).We conclude the section with the following theorem(where=⇒stands for logical implication): Theorem9DDH-Prime=⇒DDH-Comp-KF.Proof.An immediate corollary of theorem7and the easy fact that if f1,f2are negligible functions in νthen2·f1(ν)+f2(ν)is also a negligible function.10。
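The Chinese-remaindering map rho of Lemma 3 and the DDH-triple map rho* of Lemma 6 can be exercised numerically. The sketch below is illustrative only: the moduli are insecure toy values chosen here (not parameters from the paper), and Python 3.8+ is assumed for the modular inverse pow(x, -1, m).

```python
# Toy safe-prime parameters (far too small to be secure): p = 2p'+1, q = 2q'+1.
p, p_ = 23, 11        # p' = 11
q, q_ = 47, 23        # q' = 23
n = p * q

def crt(b, c):
    """Chinese-remaindering map rho: the unique x mod n with
    x = b (mod p) and x = c (mod q)."""
    u = pow(p, -1, q)                       # p * u = 1 (mod q)
    return (b + p * ((c - b) * u % q)) % n

# Generators of QR(p) and QR(q): the groups have prime order, so any
# non-trivial square generates them.
g1 = pow(5, 2, p)
g2 = pow(5, 2, q)

# alpha = rho(g1^{(q')^{-1} mod p'}, g2^{(p')^{-1} mod q'}) generates QR(n) (Lemma 3).
alpha = crt(pow(g1, pow(q_, -1, p_), p), pow(g2, pow(p_, -1, q_), q))

def rho_star(triple1, triple2):
    """Map DDH triples over QR(p) and QR(q) to a triple over QR(n) (Lemma 6)."""
    A1, B1, C1 = triple1
    A2, B2, C2 = triple2
    return (crt(A1, A2), crt(B1, B2), crt(pow(C1, q_, p), pow(C2, p_, q)))

# Sanity check: component-wise DDH triples map to a DDH triple over QR(n),
# since log_alpha A = q'x1 + p'x2 and log_alpha B = q'y1 + p'y2.
x1, y1, x2, y2 = 3, 7, 5, 9
t1 = (pow(g1, x1, p), pow(g1, y1, p), pow(g1, x1 * y1, p))
t2 = (pow(g2, x2, q), pow(g2, y2, q), pow(g2, x2 * y2, q))
A, B, C = rho_star(t1, t2)
assert pow(alpha, (q_ * x1 + p_ * x2) * (q_ * y1 + p_ * y2), n) == C
```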
Eye tracking algorithm based on a strong tracking fifth-degree cubature Kalman filter

Journal of Terahertz Science and Electronic Information Technology, Vol. 19, No. 6, December 2021
Article ID: 2095-4980(2021)06-1008-06

YIN Xiaochun (1a), CAI Chenxiao (2), LI Jianlin (1b)
(1a. Institute of Artificial Intelligence; 1b. School of Network and Communication, Nanjing Vocational College of Information Technology, Nanjing, Jiangsu 210023, China; 2. School of Automation, Nanjing University of Science and Technology, Nanjing, Jiangsu 210094, China)

Abstract: Non-intrusive eye tracking plays an important role in many vision-based human-computer interaction applications, but because eye motion is strongly nonlinear, ensuring robustness to external disturbances and maintaining tracking accuracy during eye tracking are the key problems for its application. To improve the robustness and accuracy of eye tracking, a Strong Tracking fifth-degree Cubature Kalman Filter (ST-5thCKF) algorithm is proposed. The algorithm introduces the suboptimal fading factor of the Strong Tracking Filter (STF) into the fifth-degree Cubature Kalman Filter (5thCKF), which uses nearly the fewest cubature sampling points while retaining fifth-degree filtering accuracy, so that it combines the good filtering accuracy of the 5thCKF for strongly nonlinear systems with the robustness of the STF to external disturbances. Experimental results under real conditions verify the effectiveness of the proposed algorithm for eye tracking.

Keywords: eye tracking; Strong Tracking Filter (STF); fifth-degree Cubature Kalman Filter (5thCKF); Strong Tracking fifth-degree Cubature Kalman Filter (ST-5thCKF)
CLC number: TP391.4  Document code: A  doi: 10.11805/TKYDA2020427

Eye tracking plays an important role in improving the quality of everyday human-computer interaction, and it is widely used in fields such as driver fatigue detection [1-4], image scanning in virtual reality (VR) systems, monitoring of flight crew behavior [5], and eye-computer interaction [6].
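The abstract does not spell out the ST-5thCKF equations. As a rough illustration of the strong-tracking idea only, the sketch below inflates the predicted covariance of a plain linear Kalman filter by a suboptimal fading factor computed from the innovation sequence, which is one common STF formulation; the forgetting factor rho and softening factor beta are illustrative values, and a real ST-5thCKF would propagate fifth-degree cubature points through the nonlinear eye-motion model instead of the linear prediction used here.

```python
import numpy as np

def stf_kalman_step(x, P, z, F, H, Q, R, V_prev, rho=0.95, beta=1.0):
    """One predict/update step of a linear Kalman filter with a
    strong-tracking suboptimal fading factor (illustrative formulation)."""
    # Prediction and innovation
    x_pred = F @ x
    gamma = z - H @ x_pred                                   # innovation
    # Recursive estimate of the innovation covariance with forgetting factor rho
    V = np.outer(gamma, gamma) if V_prev is None else \
        (rho * V_prev + np.outer(gamma, gamma)) / (1.0 + rho)
    # Suboptimal fading factor: lambda = max(1, tr(N) / tr(M))
    N = V - H @ Q @ H.T - beta * R
    M = H @ F @ P @ F.T @ H.T
    lam = max(1.0, np.trace(N) / np.trace(M))
    P_pred = lam * F @ P @ F.T + Q                           # inflated prediction
    # Measurement update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ gamma
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, V
```

Inflating the predicted covariance keeps the filter gain large when the innovations grow, which is what gives the strong tracking filter its robustness to abrupt disturbances.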
Advanced Techniques for GA-based sequential ATPGs
Advanced Techniques for GA-based sequential ATPGsF. Corno, P. Prinetto, M. Rebaudengo, M. Sonza Reorda R. MoscaPolitecnico di Torino CSP (Centro Supercalcolo Piemonte) Torino, Italy Torino, ItalyAbstract*Genetic Algorithms have been recently investigated as an efficient approach to test generation for synchro-nous sequential circuits. In this paper we propose a set of techniques which significantly improves the per-formance of the GA-based ATPG algorithm proposed in [PRSR94]: in particular, the new techniques en-hance the capability of the algorithm in terms of test length minimization and fault excitation. We report some experimental results gathered with a prototypical tool and show that a well-tuned GA-based ATPG is generally superior to both symbolic and topological ones in terms of achieved Fault Coverage and required CPU time.1. IntroductionDifferent approaches have been proposed to solve the problem of Automatic Test Pattern Generation for Synchronous Sequential circuits.The topological approach [NiPa91] is based on ex-tending to sequential circuits the branch and bound techniques developed for combinational circuits by adopting the Huffman’s Iterative Array Model. The method’s effectiveness heavily relies on the heuristics adopted to guide the search; the approach uses a com-plete, but often fails when applied to large circuits, where the search space is excessively large to explore.The symbolic approach [CHSo93] exploits tech-niques for Boolean function representation and ma-nipulation which were initially developed for formal verification; this approach is based on a complete al-gorithm, too, and is very effective when small- and medium-sized circuits are considered. Unfortunately, it is completely unapplicable when circuits with more than some tens of Flip-Flops are dealt with. This greatly limits its usefulness in real practice.Finally, the simulation-based approach [ACAg88] consists in generating random sequences, simulating them, and then modifying their characteristics in order to increase the obtained fault coverage. In the last few * This work has been partially supported by European Union through the ESPRIT PCI project #9452 94 204 70 PETREL. Contact address: Paolo Prinetto, Dipartimento di Automatica e Informatica, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Torino (Italy), e-mail Paolo.Prinetto@polito.it years, several methods [SSAb92] [RPGN94] [PRSR94] have been proposed, which combine this approach with the use of Genetic Algorithms (GAs) [Gold89]. Results demonstrated that the approach is very flexible and provides good results for large circuits, where other methods fail.However, the analysis we performed on the behavior of GATTO (the tool described in [PRSR94]) shows that the algorithm has some weakness points:• the cross-over operator is not as effective as in other problems GAs have been applied to;• the method can hardly determine the length of the sequences; this results in an increase of thetime required by the A TPG process, and of thenumber of generated vectors;• the phase devoted to find sequences which excite faults is purely random; this obviously decreasesthe method effectiveness in terms of achievedfault coverage and required CPU time.In this paper we introduce some new techniques to overcome the above problems. We devised a more ef-fective cross-over operator, and added new techniques which provide the method with the capability of auto-matically determining the minimal length of the test sequences. 
Finally, we re-arranged the whole algorithm in order to increase the effectiveness of the fault exci-tation phase, which is no longer purely random, but exploits information from the already generated se-quences. To experimentally prove the effectiveness of the proposed techniques we implemented an improved version of GATTO, named GATTO+. The results show substantial improvements in GATTO+: from one side, they are now comparable to the competing algorithms on the small- and medium-size circuits; from the other side, results are further enhanced on the largest ones.The paper is organized as follow: in Section 2 we briefly summarize the GATTO algorithm; Section 3 describes the improvements introduced in GATTO+. Section 4 presents the experimental results we gathered and provides some comparisons with other A TPG al-gorithms. Section 5 draws some conclusions.2. The GATTO algorithmThe GATTO algorithm, presented in [PRSR94], is organized in three phases:• the first phase aims at selecting one fault (denoted as target fault); this phase consists ofrandomly generating sequences and fault simu-lating them w.r.t. the untested faults. As soon asone sequence is able to excite at least one fault,the fault is chosen as target fault;• the second phase aims at generating a test se-quence for the target fault; it is implemented as aGenetic Algorithm: each individual is a test se-quence to be applied starting from the reset state;cross-over and mutation operators are defined tomodify the population and generate new indi-viduals; a fitness function evaluates how closeeach individual is to the final goal (i.e., detectingthe target fault); this function is a weighted sumof the numbers of gates and Flip-Flops having adifferent value in the good and faulty circuit.After a maximum number of unsuccessful gen-erations the target fault is aborted and the secondphase is exited;• the third phase is a fault simulation experiment which determines whether the test sequencepossibly generated in phase 2 detects other faults(fault dropping).The three phases are repeated till either all the faults have been tested or aborted, or a maximum number of iterations has been reached.3. ImprovementsBased on the results reported in [PRSR94], we real-ized that several points in the GATTO algorithm were still worth of improvements. We will describe them in details in the following subsections, together with the improved solutions we devised for each of them.3.1. Cross-Over OperatorThe cross-over operator adopted in GATTO belongs to the category denoted as two-cuts cross-over. The operator works in a horizontal manner: the new se-quence is composed of some vectors coming from ei-ther parents (Fig. 1), according to the position of two randomly generated cut points. Unfortunately, there is no guarantee that the vectors coming from the second parent produce in the new sequence the same behavior they produce in the parent sequence, as the state from which they are applied is different. As a consequence, we observed in GATTO that the off-spring of two good individuals was often a bad individual; in general, the cross-over operator was not as effective for the A TPG problem as it usually is for other problems GAs have been applied to.The cross-over operator defined for GATTO+ works in a vertical manner; the off-spring does not inherit whole vectors from parents: rather, the values for each input are taken either from one parent or from the other, depending on a random choice (Fig. 
2), as the operator belongs to the category known as uniform cross-over. The length of the new sequence is that of the longer of the two parent sequences: inputs taken from the shorter parent are completed with random values (dark in Fig. 2).

3.2. Test Length
In GATTO it is up to the user to decide the initial test length, which is then automatically increased during the ATPG process. For some circuits, this results in a test length higher than the minimum one, while for other circuits the process spends many iterations reaching the length required to test some faults. Moreover, the computational complexity of the whole process mainly depends on the cost of fault simulation; therefore, any unnecessary increase in the length of the sequences results in a corresponding waste in the required CPU time.
To face this problem we improved the GATTO algorithm in two ways: we first modified the evaluation function on which the fitness function is based, and then introduced new mutation operators.

3.2.1. New Evaluation Function
The evaluation function adopted in GATTO is based on the following expression:

    h(v_j^k, f_i) = c1 · b1(v_j^k, f_i) + c2 · b2(v_j^k, f_i)    (1)

which provides a measure of how close the k-th input vector v_j^k of a sequence s_j is to detecting the fault f_i. In (1), c1 and c2 are constants, while b1 and b2 are functions whose values are proportional to the number of gates and Flip-Flops (respectively) having a different value in the good and faulty circuit for fault f_i. Once the value of h(v_j^k, f_i) is known for every vector in the sequence, the evaluation function H for the sequence s_j is computed as:

    H(s_j, f_i) = max_k h(v_j^k, f_i)    ∀ v_j^k ∈ s_j    (2)

In order to bias the evolution towards the shortest sequences, a modified version H*(s_j, f_i) of the evaluation function H has been introduced in GATTO+; the value of h(v_j^k, f_i) for the k-th vector of the j-th sequence is weighted with a coefficient whose value decreases with k; the new evaluation function corresponds to the maximum value of the weighted function:

    H*(s_j, f_i) = max_k (HANDICAP^k · h(v_j^k, f_i))    ∀ v_j^k ∈ s_j    (3)

where the value of the parameter HANDICAP ranges between 0 and 1.

3.2.2. New Mutation Operators
In GATTO, any change in the length of the sequences during phase 2 stems from the cross-over operator: in fact, the length of any new sequence can randomly vary up to the sum of the lengths of the two parent sequences. The new cross-over operator presented in the previous Section behaves in a completely different way, and generates sequences as long as the longest parent. This means that the length of the sequences in a population can never be higher than that of the longest one in the previous generation. Unfortunately, there is thus no way to increase the sequence length.
To overcome this problem, and to force the algorithm to better explore the whole search space, we introduce two new mutation operators (MO+ and MO-), which are activated on an existing sequence with a given activation probability:
• MO+ introduces a randomly generated vector in a random position within the existing sequence; thanks to this operator, longer sequences are generated and evaluated;
• MO- removes a randomly selected vector from the existing sequence: if the vector is not essential, the evaluation function of the sequence increases.
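To make Eq. (3) and the new operators concrete, the following is a minimal sketch rather than the actual GATTO+ implementation: test sequences are plain lists of 0/1 input vectors, evaluate_vector is a hypothetical stand-in for the fault-simulation-based h(v, f) of Eq. (1), and only the parameter values (HANDICAP = 0.98, activation probabilities 0.05 and 0.1) come from Section 4; the function names and data layout are invented for illustration.

import random

HANDICAP = 0.98

def sequence_evaluation(sequence, evaluate_vector):
    """Eq. (3): max over k of HANDICAP**k * h(v_k, f).  Weighting by a factor
    that decays with k biases the evolution towards sequences that excite the
    fault early, i.e. towards shorter tests."""
    return max(HANDICAP ** k * evaluate_vector(v) for k, v in enumerate(sequence))

def uniform_crossover(parent_a, parent_b):
    """Vertical (uniform) cross-over: each input column of the child is taken
    from one parent or the other at random; the child is as long as the longer
    parent, and positions missing in the shorter parent are filled randomly."""
    n_inputs = len(parent_a[0])
    length = max(len(parent_a), len(parent_b))
    child = []
    for k in range(length):
        vec = []
        for i in range(n_inputs):
            donor = parent_a if random.random() < 0.5 else parent_b
            vec.append(donor[k][i] if k < len(donor) else random.randint(0, 1))
        child.append(vec)
    return child

def mutate(sequence, p_plus=0.05, p_minus=0.1):
    """MO+ inserts a random vector at a random position (lets sequences grow);
    MO- removes a randomly chosen vector (lets them shrink)."""
    seq = [v[:] for v in sequence]
    n_inputs = len(seq[0])
    if random.random() < p_plus:
        seq.insert(random.randrange(len(seq) + 1),
                   [random.randint(0, 1) for _ in range(n_inputs)])
    if random.random() < p_minus and len(seq) > 1:
        seq.pop(random.randrange(len(seq)))
    return seq

In GATTO+ these operators act on the phase-2 GA population; here they are shown in isolation.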
3.3. Fault Excitation
Fault excitation is one of the most critical problems when devising a GA-based ATPG. In fact, no way has been found, up to now, to evaluate how close a sequence is to exciting a given fault, and the GA-based ATPGs proposed in the literature resort to random search for this task. In GATTO+, at the end of phase 2 all the sequences belonging to the last population are stored, and then used in the following phase 1 instead of the randomly generated ones (as in GATTO). If one of these sequences is able to excite at least one fault, this is selected as the target fault, and a new phase 2 is activated. Otherwise, random sequences are generated trying to excite faults, as in GATTO. The pseudo-code of phase 1 is reported in Fig. 3.

4. Experimental Results
We implemented a prototypical version of GATTO+ containing all the techniques described above: the new cross-over operator substitutes the old one, and the operators MO+ and MO-, as well as the parameter HANDICAP, have been introduced. The values of all the parameters have been experimentally determined through a preliminary set of runs: the operators MO+ and MO- are activated with probability 0.05 and 0.1, the parameter HANDICAP holds the value 0.98, and c1 and c2 have been assigned the values 1 and 10, respectively. Tab. 1 reports the results in terms of Fault Coverage (FC), CPU time and test length for the whole set of ISCAS'89 circuits. Experiments have been performed

4.1. GATTO+ vs. GATTO
To demonstrate the effectiveness of the described techniques we report in Tab. 2 a comparison with the results of GATTO published in [PRSR94], where only the largest ISCAS'89 circuits were considered. GATTO+ is able to increase the fault coverage in 11 cases out of 12, and in 6 cases the increase is greater than 4%. On the other side, the CPU time is decreased in 9 cases out of 12. For all the circuits, we were able either to increase the Fault Coverage by more than 4%, or to decrease the CPU time. GATTO+ achieves this result mainly thanks to the more effective technique adopted for phase 1, whose cost is now greatly decreased. Concerning the test length, the number of test vectors generated by GATTO+ is sometimes higher than that of GATTO, due to the new sequences added to detect other faults.
Tab. 1: GATTO+ performance on ISCAS'89 circuits.

4.2. GATTO+ vs. other algorithms
We report in the following the data published for two other ATPG algorithms concerning the ISCAS'89 benchmark circuits. In Tab. 3 we consider HITEC, a topological algorithm described in [NiPa91], and the GA-based ATPG proposed in [RPGN94]. The two algorithms were selected as they are representative of the two categories we denoted above as topological and simulation-based ATPG algorithms; we did not consider any ATPG belonging to the category of symbolic ones, as they are not able to deal with large circuits, which are normally the most critical problem in the real world.
Two difficulties must be faced when performing such a comparison: the first one concerns the hardware platform, which is different for the three ATPGs (results for HITEC were gathered on a SPARCstation 1, those in [RPGN94] on a SPARCstation II, and those for GATTO+ on a DECstation 3000/500). The second difficulty comes from the fact that GATTO+ assumes that all the Flip-Flops in the circuits are resettable, and generates sequences starting from the all-0s state, while the two other algorithms do not make this assumption, and generate sequences starting from the all-Xs state.
Tab. 2: comparison between GATTO and GATTO+.
Taking into account the two points above, the results in Tab.
3 show that:• GATTO+ is able to reach higher Fault Coverage figures in all cases but 4 when HITEC is consid-ered; the figures of GATTO+ are always betterwhen the tool of [RPGN94] is analyzed;• the CPU times required by GATTO+ are lower than the ones required by HITEC for all the cir-cuits but S1196 and S1238; on the other side,GATTO+ is always faster than the method in[RPGN94]. For most circuits, we believe that thespeed-up ratios are greater than any reasonablefactor due to the different hardware platforms. 5. ConclusionsWe described some advanced techniques to improve the effectiveness of a GA-based A TPG like GATTO [PRSR94]. They fully exploit the powerfulness of Evo-lutionary Computation by removing some weakness points concerning the cross-over operator, the ability to determine the optimal sequence length, and the fault excitation phase.Experimental results demonstrate that the new techniques are able to significantly improve the per-formance of GATTO in terms of Fault Coverage and CPU times. We also compared them with the results of a state-of-the-art topological algorithm and with the ones of another GA-based ATPG algorithm.As a main contribution, this paper experimentally demonstrates that a carefully tuned GA-based A TPG algorithm is able to provide better results than any other approach: in fact, symbolic techniques, although faster on the small circuits, do not work with the large ones, while topological techniques, although able to identify untestable faults, are generally slower.6. References[ACAg88]V.D. Agrawal, K.-T. Cheng, P. Agrawal,“CONTEST: A Concurrent Test Generator forSequential Circuits,” Proc. 25th Design Auto-mation Conf., 1988, pp. 84-89[CHSo93]H. Cho, G.D. Hatchel, F. Somenzi, “Redundancy Identification/Removal and Test Generation forSequential Circuits Using Implicit State Enu-meration,” IEEE Trans. on CAD/ICAS, Vol.CAD-12, No. 7, pp. 935-945, July 1993[Gold89] D.E. Goldberg, “Genetic Algorithms in Search, Optimization, and Machine Learning,” Addison-Wesley, 1989[NiPa91]T. Niermann, J.H. Patel, “HITEC: A Test Gen-erator Package for Sequential Circuits,” Proc.European. Design Aut. Conf., 1991, pp. 214-218 [PRSR94]P. Prinetto, M. Rebaudengo, M. Sonza Reorda,“An Automatic Test Pattern Generator for LargeSequential Circuits based on Genetic Algo-rithms,” Proc. Int. Test Conf., 1994, pp. 240-249 [RPGN94] E.M. Rudnick, J.H. Patel, G.S. Greenstein, T.M.Niermann, “Sequential Circuit Test Generationin a Genetic Algorithm Framework,” Proc. De-sign Automation Conf., 1994, pp. 698-704 [SSAb92] D.G. Saab, Y.G. Saab, J. Abraham, “CRIS: A Test Cultivation Program for Sequential VLSICircuits,” Proc. Int. Conf. on Comp. Aided De-sign, 1992, pp. 216-219。
Lean Principles
Defects
Production defects and service errors waste resources in four ways. First, materials are consumed. Second, the labor used to produce the part (or provide the service) the first time cannot be recovered. Third, labor is required to rework the product (or redo the service). Fourth, labor is required to address any forthcoming customer complaints.
Discussion: What is waiting for the resources? What are the resources in your opinion?
Transportation
Materials should be delivered to their point of use. Instead of raw materials being shipped from the vendor to a receiving location, processed, moved into a warehouse, and then transported to the assembly line, Lean demands that the material be shipped directly from the vendor to the location on the assembly line where it will be used.
Judgement and Application of Ignition Moment During Engine Startup Period Based on Crankshaft Transient Acceleration Analysis
Judgement and Application of Ignition Moment During Engine Startup Period Based on Crankshaft Transient Acceleration Analysis — Yang Fuyuan, Zhang Jingyong, Wang Xiaoguang, Zhou Ming, Ouyang Minggao (State Key Laboratory of Automotive Safety and Energy, Tsinghua University, Beijing 100084). [Abstract] A method for judging ignition during the engine start-up process, based on crankshaft instantaneous acceleration analysis and distinct from misfire detection, is proposed: the start of combustion during a fuelled start-up is located by comparing the instantaneous acceleration of two start-up processes.
Theoretical analysis and experiments show that the method not only identifies the start of combustion accurately, but also improves the identification resolution from the cycle level to the crank-angle level.
The experiments were carried out on a six-cylinder electronically controlled common-rail diesel engine.
Subject terms: electronically controlled engine, start-up process, start-of-combustion detection, crankshaft instantaneous acceleration

Judgement and Application of Ignition Moment During Engine Startup Period Based on Crankshaft Transient Acceleration Analysis
Yang Fuyuan, Zhang Jingyong, Wang Xiaoguang, Zhou Ming & Ouyang Minggao — Tsinghua University, State Key Laboratory of Automotive Safety and Energy, Beijing 100084
Abstract: This paper brings forward an ignition detection method based on crankshaft transient acceleration analysis. It compares the transient acceleration of two start-up procedures and then finds the ignition moment (start of combustion) during the start-up period. Crankshaft dynamics analysis and real tests show that this method can identify the ignition moment, with the accuracy improved from cycle level to angle level. The test is carried out on a 6-cylinder common-rail diesel engine.
Key words: Electronically controlled engine, Startup period, Ignition moment, Crankshaft transient acceleration
…the fuel injection quantity, injection timing, injection pressure and other parameters of the diesel engine start-up process can then be optimized.
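The method compares the crankshaft instantaneous acceleration of a fuelled start against a reference (non-fuelled, motored) start and looks for the crank angle at which the fuelled trace begins to rise above the reference. The sketch below is only an illustration of that comparison, not the authors' implementation; the threshold, the persistence requirement, and the assumption that the two traces are already aligned on a common crank-angle grid are all invented here.

import numpy as np

def ignition_onset(angle_deg, accel_fuelled, accel_reference,
                   threshold=50.0, hold=5):
    """Return the crank angle (deg) at which the fuelled start-up trace first
    exceeds the reference (motored) trace by more than `threshold` for at
    least `hold` consecutive samples -- a simple stand-in for angle-resolved
    start-of-combustion detection.

    angle_deg, accel_fuelled, accel_reference: equal-length 1-D arrays sampled
    on the same crank-angle grid.  The threshold is in the units of the
    acceleration arrays (e.g. rad/s^2); the value here is arbitrary.
    """
    excess = np.asarray(accel_fuelled) - np.asarray(accel_reference)
    above = excess > threshold
    run = 0
    for i, flag in enumerate(above):
        run = run + 1 if flag else 0
        if run >= hold:
            return float(angle_deg[i - hold + 1])   # first angle of the run
    return None  # no ignition detected in this window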
ABSTRACT EFFICIENT ANALYSIS OF RARE EVENTS ASSOCIATED WITH INDIVIDUAL BUFFERS
EFFICIENT ANALYSIS OF RARE EVENTS ASSOCIATED WITH INDIVIDUAL BUFFERS IN A TANDEM JACKSON NETWORK Ramya Dhamodaran Bruce C. Shultes Department of Mechanical, Industrial, and Nuclear Engineering University of Cincinnati PO Box 210072 Cincinnati, OH 45221-0072, U.S.A.
interest more often. Large deviations theory has been used to derive a heuristic change of measure for estimating the probability that the total system size exceeds a given level before returning to zero in tandem Jackson networks (see Parekh and Walrand 1989). This exponential twisting or tilting change of measure interchanges the arrival rate and the smallest service rate in the network. This heuristic was later analyzed by Glasserman and Kou (1995) who established necessary and sufficient conditions for the asymptotic efficiency of this importance sampling estimator. More recently, de Boer, Kroese, and Rubinstein (2002) proposed an adaptive importance sampling method that utilizes a minimum cross-entropy optimization approach to estimate the overflow probability in three stages by approximating an optimal tilting parameter. The balanced likelihood ratio approach to importance sampling (see Alexopoulos and Shultes 1998, 2001) was developed for analyzing system performance in fault-tolerant repairable systems. This approach has been used to derive importance sampling estimators for limiting system unavailability and mean time to system failure that yield bounded relative error. Shultes (2002) applied this approach to estimate the system overflow probability in tandem Jackson networks. This method yields a zero variance importance sampling distribution for a single node system. For systems with more than one node, this method yields asymptotically efficient results with some restrictions on the model parameters. The rare event studied in this paper is the buffer overflow probability at the second node in a two node tandem Jackson network. An exponential tilting technique was developed by Kroese and Nicola to estimate this overflow probability (see Kroese and Nicola 2002). These authors exponentially tilt a Markov additive process representation of the system to derive an importance sampling estimator. Their distribution is state dependent in that it depends on the contents of the first buffer.
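The Parekh–Walrand change of measure mentioned above has a simple operational form: simulate the embedded jump chain with the arrival rate and the smallest service rate interchanged, and weight each replication by the accumulated likelihood ratio. The sketch below illustrates that estimator for the total-population overflow probability in a two-node tandem Jackson network; it is not the balanced likelihood-ratio method or the Markov-additive tilting discussed in this paper, and all numeric values are arbitrary.

import random

def overflow_prob_is(lam, mu1, mu2, N, n_runs=10_000, seed=0):
    """Importance-sampling estimate of the probability that the total number
    of customers in a two-node tandem Jackson network reaches N before the
    network empties, starting from one customer at node 1.

    Change of measure (Parekh and Walrand 1989): interchange the arrival rate
    with the smallest service rate; ties between service rates are handled
    arbitrarily here.  The likelihood ratio is accumulated over the embedded
    jump chain.
    """
    rng = random.Random(seed)
    mu_min = min(mu1, mu2)
    t_lam = mu_min                              # tilted arrival rate
    t_mu1 = lam if mu1 == mu_min else mu1       # tilted service rates
    t_mu2 = lam if mu2 == mu_min else mu2

    total = 0.0
    for _ in range(n_runs):
        x1, x2 = 1, 0            # state just after the first arrival
        weight = 1.0
        while True:
            if x1 + x2 == 0:     # emptied before overflow: contributes 0
                break
            if x1 + x2 >= N:     # overflow: contributes the likelihood ratio
                total += weight
                break
            # Enabled transitions with their original and tilted rates.
            events = [("arr", lam, t_lam)]
            if x1 > 0:
                events.append(("s1", mu1, t_mu1))
            if x2 > 0:
                events.append(("s2", mu2, t_mu2))
            p_sum = sum(r for _, r, _ in events)       # original total rate
            q_sum = sum(tr for _, _, tr in events)     # tilted total rate
            # Sample the next event from the tilted jump chain.
            u = rng.random() * q_sum
            for name, r, tr in events:
                if u < tr:
                    break
                u -= tr
            weight *= (r / p_sum) / (tr / q_sum)       # per-step likelihood ratio
            if name == "arr":
                x1 += 1
            elif name == "s1":
                x1, x2 = x1 - 1, x2 + 1
            else:
                x2 -= 1
    return total / n_runs

For example, overflow_prob_is(1.0, 4.0, 2.0, N=25) targets a probability so small that naive simulation with the same number of runs would almost never observe the event.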
A Top-K Sequential Pattern Mining Algorithm
1 Introduction
Extracting large numbers of meaningful patterns from massive data has always been a popular research direction in data mining. Sequential pattern mining was proposed by Agrawal and Srikant (1995) for the analysis of supermarket market-basket data, and has since become an important branch of frequent pattern mining. It is now widely applied to the analysis of web click-stream data, medical data, biological data and other sequence data. Although many efficient and practical algorithms have been proposed, they all require the user to specify a minimum support
…top-K Sequential Patterns Mining (KSPM). We used the OPUS search method to traverse all the possible candidate sequences, and utilized a bitmap as the data structure to reduce storage space. In addition, effective pruning rules were designed to improve the efficiency of the algorithm. Finally, experiments on web click-stream sequences, sign-language utterance sequences and other sequential datasets confirmed the effectiveness of the proposed algorithm.
KEYWORDS: Data mining; Sequential pattern; Bitmap
threshold minsup before the frequent sequential patterns whose frequency is not lower than that threshold can be extracted from the sequence database. In practical applications, users have no precise sense of what the support threshold should be; it can only be set through repeated trials or rich experience, and there is no unified criterion for choosing it. Mining top-K sequential patterns therefore becomes crucial: K is set by the user and denotes the number of patterns to be returned. This paper adopts the OPUS search strategy to traverse all possible candidate sequential patterns while continually raising the threshold, and finally obtains the top-K frequent sequential patterns that meet the requirement.
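As a toy illustration of the top-K idea (grow candidates, keep the K best found so far, and keep raising the support threshold so that the remaining search space is pruned), here is a self-contained sketch. It deliberately ignores the OPUS search order and the bitmap representation that the actual KSPM algorithm relies on, handles only single-item sequence elements, and recomputes supports naively; the function names and the toy data are invented.

import heapq
from itertools import count

def support(db, pattern):
    """Number of sequences in db that contain `pattern` as a subsequence."""
    def contains(seq):
        it = iter(seq)
        return all(any(x == p for x in it) for p in pattern)
    return sum(contains(s) for s in db)

def top_k_sequential_patterns(db, k):
    """Mine the k most frequent sequential patterns (single-item elements):
    start with a minimum support of 1, grow patterns depth-first, and raise
    the threshold to the k-th best support found so far."""
    items = sorted({x for s in db for x in s})
    heap, tie = [], count()          # min-heap of (support, tiebreak, pattern)
    minsup = 1

    def consider(pattern, sup):
        nonlocal minsup
        heapq.heappush(heap, (sup, next(tie), pattern))
        if len(heap) > k:
            heapq.heappop(heap)
        if len(heap) == k:
            minsup = max(minsup, heap[0][0])   # raise the threshold

    def grow(prefix):
        for item in items:
            cand = prefix + (item,)
            sup = support(db, cand)
            if sup < minsup:                   # anti-monotone pruning
                continue
            consider(cand, sup)
            grow(cand)

    grow(())
    return sorted(heap, key=lambda t: -t[0])

# Example: web click-stream style toy data.
db = [list("abcde"), list("abce"), list("acde"), list("bce")]
for sup, _, pat in top_k_sequential_patterns(db, k=5):
    print("".join(pat), sup)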
An Essay on English Word Classes
Understanding English Word Classes
English, like many languages, is a fascinating tapestry woven from various word classes, each with its unique role and function in communication. From the sturdy backbone of nouns to the dynamic verbs that propel action, understanding these word classes is akin to deciphering the intricate code of language itself.

Nouns: The Pillars of Language
Nouns are the bedrock upon which sentences are constructed. They are the names of people, places, things, and ideas. Without nouns, our language would be a barren landscape devoid of meaning. Consider the simple sentence, "The cat sat on the mat." Here, "cat" and "mat" are nouns, providing the essential elements of the sentence.

Verbs: The Engines of Action
Verbs infuse life into sentences, propelling them forward with action and dynamism. They denote actions, occurrences, or states of being. In the sentence, "The cat sat on the mat," "sat" is the verb, conveying the action performed by the subject, "cat." Verbs come in various forms, from the straightforward "run" to the more complex "contemplate," each adding depth and complexity to our expressions.

Adjectives: Painting with Descriptions
datax — the Snowflake Algorithm
datax 雪花算法"Snowflake Algorithm" - A Comprehensive Analysis of the Unique ID Generation SystemIntroductionWith the advent of the digital era, the need for unique identification across various platforms has become essential. The Snowflake Algorithm, also known as Twitter Snowflake, has emerged as an efficient solution to generate globally unique IDs at scale. In this article, we will delve into the intricacies of the Snowflake Algorithm, exploring its architecture, components, and the step-by-step process it employs to create unique IDs.1. Understanding the ProblemBefore diving into the Snowflake Algorithm, let's analyze the problem it aims to solve. In a distributed environment where multiple entities generate IDs concurrently, the potential for clashes and collisions is significant. Traditional approaches, like auto-incrementing integers or using a combination of timestamp and random numbers, fall short in ensuring global uniqueness.2. ArchitectureThe Snowflake Algorithm follows a decentralized architecture that allows generating unique IDs across multiple instances while minimizing the risk of collision. Its architecture consists of three key components:2.1 TimestampThe Snowflake Algorithm employs a 64-bit timestamp, representing the number of milliseconds since a custom epoch. By using a timestamp, it ensures that IDs generated at different instances will have a consistent order and prevent sequential collisions.2.2 Node IdentifierThe node identifier is a unique identifier assigned to each instance or worker responsible for generating IDs. This component provides uniqueness within a particular node, eliminating potential conflicts in generating IDs.2.3 Sequence NumberThe sequence number, also 64-bit in size, is generated for each ID within a single millisecond. It helps prevent collisions when multiple IDs are generated within the same millisecond by different nodes.3. ID Generation ProcessNow that we understand the architecture, let's explore the step-by-step process of generating unique IDs using the Snowflake Algorithm:3.1 Obtaining the Current TimestampWhen a request for a new ID is received, the Snowflake Algorithm retrieves the current timestamp. This ensures that each ID maintains temporal ordering and prevents potential collisions in the future.3.2 Determining the Node IdentifierNext, the algorithm determines the node identifier, which uniquely identifies the worker instance generating the ID. The node identifier is usually obtained from the hardware or an environment-specific configuration.3.3 Generating the IDTo generate the ID, the Snowflake Algorithm combines the obtained timestamp, node identifier, and a sequence number. The resulting 64-bit ID ensures global uniqueness while maintaining an ordered sequence.4. Scalability and LimitationsThe Snowflake Algorithm offers excellent scalability, making it suitable for large-scale distributed systems. With the architecture's decentralized nature, it allows for the creation of IDs across multiple instances without compromising performance. However, it is worth noting that the authenticity of the node identifier is critical. If a node identifier is replicated or incorrectly assigned, collisions may occur.5. Benefits and Use CasesThe Snowflake Algorithm's uniqueness and scalability make it highly advantageous in various scenarios. 
Some notable use cases include:
5.1 Distributed Systems
The algorithm is ideal for generating unique IDs in distributed systems across various components and services.
5.2 Database Sharding
In database sharding, where data is horizontally partitioned across multiple servers, the Snowflake Algorithm can generate unique shard keys to ensure data consistency and scalability.
5.3 Big Data Processing
When dealing with massive volumes of data in big data processing, Snowflake IDs can be utilized to maintain consistent and unique identifiers across different stages of processing.
Conclusion
The Snowflake Algorithm, a robust and scalable solution, solves the challenge of generating globally unique IDs in distributed systems. By employing a decentralized architecture and leveraging timestamp and node identifiers, it ensures uniqueness while maintaining ordered sequences. Its applications range from database sharding to big data processing, making it an invaluable tool in the modern digital landscape.
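To make the packing step concrete, here is a minimal sketch of a Snowflake-style generator. The bit widths follow the classic Twitter layout (41-bit timestamp, 10-bit node ID, 12-bit sequence inside a single 64-bit integer) — an assumption, since the article's "64-bit" figures describe the final ID rather than each field. The epoch, node ID, and class name are illustrative, and clock roll-back handling is omitted.

import threading
import time

class SnowflakeGenerator:
    """Minimal sketch of a Snowflake-style ID generator (classic bit layout)."""
    EPOCH_MS = 1_288_834_974_657          # custom epoch in milliseconds (arbitrary)
    NODE_BITS, SEQ_BITS = 10, 12
    MAX_SEQ = (1 << SEQ_BITS) - 1

    def __init__(self, node_id):
        assert 0 <= node_id < (1 << self.NODE_BITS)
        self.node_id = node_id
        self.last_ms = -1
        self.sequence = 0
        self._lock = threading.Lock()

    def _now_ms(self):
        return time.time_ns() // 1_000_000

    def next_id(self):
        with self._lock:
            now = self._now_ms()
            if now == self.last_ms:
                self.sequence = (self.sequence + 1) & self.MAX_SEQ
                if self.sequence == 0:          # sequence exhausted: wait for next ms
                    while now <= self.last_ms:
                        now = self._now_ms()
            else:
                self.sequence = 0
            self.last_ms = now
            timestamp = now - self.EPOCH_MS
            # Pack timestamp | node id | sequence into one 64-bit integer.
            return (timestamp << (self.NODE_BITS + self.SEQ_BITS)) \
                   | (self.node_id << self.SEQ_BITS) \
                   | self.sequence

gen = SnowflakeGenerator(node_id=1)
print(gen.next_id(), gen.next_id())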
Samsung 850 PRO SATA 6Gb/s SSD Datasheet
Summary-SATA 6Gb/s SSD for Client PCs-2.5 inch form factor-Samsung 3-core MEX controller-Samsung 3D V-NAND MLC-Up to 1GB Samsung Low Power DDR2 SDRAM cache memory-Samsung Magician Software for SSD management-Samsung Data Migration Software3D V-NAND Technology and THE SAMSUNG SSD 850 PRO Samsung's 3D V-NAND flash memory is fabricated using an innovative vertical design. Itsvertical architecture stacks cell-layers on top ofone another rather than trying to decrease thecells' length and width to fit today's ever-shrinking form factors. This architecture resultsin higher density and higher performance usinga smaller footprint. Samsung's 3D V-NAND technology is a breakthrough in overcoming thedensity limits currently faced with theconventional planar NAND architecture.The Samsung SSD 850 PRO is a brand newsuccessor of the 840 PRO, Samsung’s premium SSD product. The 850 PRO is powered by Samsung's 3D V-NAND technology. In 2013, Samsung mass-produced 3D V-NAND products for datacenters which have already been proved in the market. And Samsung is now releasing the world’s 1st 3D V-NAND SSD for client PCs.Ultimate Read/Write PerformancePowered by Samsung's own cutting-edge 3D V-NAND technology, the 850 PRO delivers up to twice the write speed than that of the traditional 2D planar NAND flash for ultimate read and write performance. By changing to the layercylindrical cell structure, more cells can bestacked vertically, resulting in smaller footprintand higher density. As a result, you gain superior maximum read and write performance in bothsequential and random aspects, especially withthe 128GB model that outperforms other similarmodels in the market by more than 100MB/s inwrite speed.Plus, with RAPID mode, you can boost its optional higher speeds - up to almost twice faster write performance speeds than those of the previous model. The Magician software allows you to enhance performance by processing data on a system level using free PC memory (DRAM) as a cache.Enhanced Endurance and ReliabilityEnhanced enduranceWith twice the endurance of a typical NAND flash SSD, the 850 PRO will keep working as long as you use it. Samsung's 3D V-NAND technology is built to handle 150 Tera Bytes Written (TBW) for 128GB and 256GB, 300 TBW for 512GB and 1TB. Plus, it comes with an industry-leading ten-year limited warranty.Advanced data encryptionThe 850 PRO provides the same data encryption features as the 840 EVO does. The Self-Encrypting Drive (SED) security technology will help keep your data safe at all times. It includes an AES 256-bit hardware-based encryption engine to ensure that your personal files remain secure. A hardware-based encryption engine secures your data without performance degradation that you may experience with a software-based encryption. Also, the 850 PRO is compliant with advanced security management solutions (TCG Opal and IEEE 1667). Magician will guide you through ”How to use security features”. Furthermore, users can delete or initialize data with the crypto erase service with PSID. Efficient Power Management Less Power consumption for active and stand-by statesEven with all of the above performance improvements, advances in the design of the MEX Controller and the use of the new technology of 3D V-NAND have enabled us tomake the 850 PRO as one of the most energy-efficient drives on the market. 
Its idle power is almost the same as the 840 PRO’s and its active (average) write power has been dramatically reduced up to 40% due to the 3D V-NAND technology.Moreover, when DEVSLP is enabled, the power consumption of 850 PRO is below 2mW.Technical SpecificationsUsage Application Client PCsCapacity 128GB, 256GB, 512GB,1TB(1,024GB)Dimensions(L x WxH) 100 x 69.85 x 6.8 (mm)Interface SATA 6Gb/s (compatible with SATA 3Gb/s and SATA 1.5Gb/s)Form Factor 2.5 inchController Samsung 3-core MEX ControllerNAND Flash Memory Samsung 3D V-NAND MLCDRAM Cache Memory 256MB (128GB) or 512MB(256GB&512GB) or 1GB (1TB) LPDDR2Performance*Sequential Read: Max. 550 MB/sSequential Write:Max. 520 MB/s (256GB/512GB/1TB)Max. 470 MB/s (128GB)4KB Random Read (QD1): Max. 10,000 IOPS4KB Random Write(QD1): Max. 36,000 IOPS4KB Random Read(QD32): Max. 100,000 IOPS4KB Random Write(QD32): Max. 90,000 IOPSTRIM Support Yes (Requires OS Support) Garbage Collection YesS.M.A.R.T YesSecurity AES 256-bit Full Disk Encryption (FDE) TCG/Opal V2.0, Encrypted Drive(IEEE1667)Weight Max. 66g (1TB) Reliability MTBF: 2 million hoursTBW 128/256GB : 150 Terabytes Written(TBW) 512GB/1TB : 300 TBWPower Consumption**Active Read (Average): Max. 3.3W(1TB) Active Write (Average): Max. 3.0W(1TB) Idle: Max. 0.4WDevice Sleep: 2mWTemperature Operating:Non-Operating:0°C to 70°C -40°C to 85°CHumidity 5% to 95%, non-condensingVibration Non-Operating: 20~2000Hz, 20GShock Non-Operating:1500G , duration 0.5msec, 3 axisWarranty 10 years limited* Sequential performance measurements based on CrystalDiskMark v.3.0.1.Random performance measurements based on Iometer 2010. Performance may vary based on SSD’s firmware version, system hardware & configuration.Testsystemconfiguration:************************,DDR31600MHz4GB,OS–Windows7Ultimatex64SP1IRST 11.5.4.1001, MS performance guide pre-condition Intel® Z68** Power consumption measured with IOmeter 1.1.0 version with Intel i7-4770K, Samsung DDR3 8GB, Intel® DH87RL OS- Windows7 Ultimate x64 SP1Product Lineup128 GB* MZ-7KE128 Samsung SSD 850 PRO 128GBWarranty statementInstallation guideSoftware CDMZ-7KE128BWMZ-7KE128B/KRMZ-7KE128B/CN256 GB* MZ-7KE256 Samsung SSD 850 PRO 256GBWarranty statementInstallation guideSoftware CDMZ-7KE256BWMZ-7KE256B/KRMZ-7KE256B/CN512 GB* MZ-7KE512 Samsung SSD 850 PRO 512GBWarranty statementInstallation guideSoftware CDMZ-7KE512BWMZ-7KE512B/KRMZ-7KE512B/CN1TB(1,024GB*) MZ-7KE1T0 Samsung SSD 850 PRO 1TBWarranty statementInstallation guideSoftware CDMZ-7KE1T0BWMZ-7KE1T0B/KRMZ-7KE1T0B/CN* GB: 1GB = 1,000,000,000 bytes. A certain portion of capacity may be used for system file and maintenance use, thus the actual capacity may differ that indicated on the product label.For more information, please visit /ssd.To download the latest software & manuals, please visit /samsungssdDISCLAIMERSAMSUNG ELECTRONICS RESERVES THE RIGHT TO CHANGE PRODUCTS, INFORMATION AND SPECIFICATIONS WITHOUT NOTICE.Products and specifications discussed herein are for reference purposes only. All information discussed herein may change without notice and is provided on an “AS IS” basis, without warranties of any kind. This document and all information discussed herein remain the sole and exclusive property of Samsung Electronics. No license of any patent, copyright, mask work, trademark or any other intellectual property right is granted by one party to the other party under this document, by implication, estoppels or otherwise. 
Samsung products are not intended for use in life support, critical care, medical, safety equipment, or similar applications where product failure could result in loss of life or personal or physical harm, or any military or defense application, or any governmental procurement to which special terms or provisions may apply. For updates or additional information about Samsung products, contact your nearest Samsung office. COPYRIGHT © 2015This material is copyrighted by Samsung Electronics. Any unauthorized reproductions, use or disclosure of this material, or any part thereof, is strictly prohibited and is a violation under copyright law.TRADEMARKS & SERVICE MARKSThe Samsung Logo is the trademark of Samsung Electronics. Adobe is a trademark and Adobe Acrobat is a registered trademark of Adobe Systems Incorporated. All other company and product names may be trademarks of the respective companies with which they are associated.。
A Possible Approach to the Framework of the Fundamental Theory of System Science and Some Specific Ideas (Part Seven) — Density Evolution of Discrete Dynamical Systems and the Information Structure of Sequences
2003年5月系统工程理论与实践第5期 文章编号:100026788(2003)0520001214建立系统科学基础理论框架的一种可能途径与若干具体思路(之七)——离散动力系统的密度演化与序列的信息结构丁义明1,范文涛1,龚小庆2(中国科学院物理与数学研究所,湖北武汉430071;2.杭州商学院统计与计算机学院,浙江杭州310035)摘要: 本文是总题目下的第七篇Λ全文的总目的是试图从现代物理、分子生物学与脑神经解剖等学科领域的最新实验事实,以及相应的前沿理论领域围绕着演化概念研究的展开所获得的已有理念与成就为基础,按照“由大爆炸理论所描述的物理世界之最初情景出现以来,世界物质总是在其不同时空点具体结构状态下的几种基本相互作用属性形成的制约机械造成的物质——能量结构与分布仍非完全平衡态势的推动下,不断地一层一层完成其全方位整体性进一步精细平衡结构,实现其该层次从无序到有序的起伏演化——这一总体自然法则”的认识主线,提出一种建立系统科学基础理论的定性定量框架思路与若干细节方法Λ文中对相互作用、进化、演化、适应性与复杂性等概念进行了分析,对突变、分歧、吸引子、混沌、协同、分形等基于此种理论作了一种较为直观的诠释Λ同时,也将论及所谓信息的本质与其人本意义下的价值概念,特别是她与非线性的密切关系等Λ当然,这一切均还是初步的,尽管其中的一部分我们也已经获得了一些较为严谨的结果Λ关键词: 系统科学基础理论;框架;密度演化;平稳密度;非平稳度;信息结构中图分类号: N94 文献标识码: A A Po ssib le A pp roach to the F ram ew o rkof the Fundam en tal T heo ry of System Science:Part SevenD I N G Y i2m ing1,FAN W en2tao1,GON G X iao2qing2(1.W uhan In stitu te of Physics andM athem atics,T he Ch inese A cadem y of Sciences,W uhan430071,Ch ina;2.Co llege of Statistics and Compu ting Science,H angzhou U n iversity of Comm erce,H angzhou310035,Ch ina)Abstract: A qualitative and quan titative fram ew o rk fo r the fundam en tal theo ry of system sciences asw ell as som e details is given on the basis of som e new experi m en tal evidences,ideas and ach ievem en tsconcern ing evo lu ti on in m any fields such as modern physics,mo lecu le b i o logy and b rain anatom y,etc.Som e concep ts such as in teracti on,evo lu ti on,adap tab ility,and comp lex ity are discu ssed.In tu iti onalanno tati on s abou t catastrophe,b ifu rcati on,attracto r,chao s,fractal and synergy are also given.T heessence and the value of info rm ati on,and the clo sed relati on betw een info rm ati on and non linearity areinvestigated.It is nodoub t that all the w o rk s are to be con sidered in detail,although w e have som erigo rou s resu lts in particu lar step s.It is the seven th part of the paper.T he den sity evo lu ti on of discrete dynam ical system and the non2 stati onarity m easu re of sequence as w ell as their app licati on to decisi on m ak ing are discu ssed.Key words: fundam en tal theo ry of system sciences;fram ew o rk;den sity evo lu ti on;stati onary den sity;non2stati onarity m easu re;info rm ati on structu re1 轨道演化与密度演化确定性系统能表现出复杂的动力学行为在一百多年前已为人所知.Po incaré(庞加莱)通过证明常微收稿日期:2003202226资助项目:国家自然科学基金(60174048;70271076) 作者简介:丁义明(1972—),男,博士,江西丰城人,研究方向:系统科学基础理论及应用分方程能表现出奇异行为,在N etw on (牛顿)物理学的基础上打开了通往现代非线性动力系统和混沌理论的窗户.尽管20世纪30、40年代在许多物理系统中观察到了奇异行为,但这个源于确定性系统的词汇并没有引起足够的重视,即使在60年代有了S .Sm ale (斯梅尔)的强有力的结果之后,确定性系统中的复杂行为在数学上仍是相当神秘的,直到70年代后期,随着快速廉价的计算机的出现,人们才意识到混沌行为在科学和技术中所起的重要作用.Sm ale 马蹄开始在许多科学领域中出现.1971年,“奇异吸引子”被用来描述确定性系统中的复杂行为,并迅速成为非线性动力学中的范例[1].敏感依赖于初始条件指得是系统在初始时刻的一个微小变化,将导致后来随时间指数增长的变化.这一事实最早是被法国数学家J .H adam ard 在19世纪末所考虑[2].实际上,他证明了负曲率表面上的测地流具有敏感依赖于初始条件的性质.许多物理系统在任意初始条件下呈现敏感依赖于初始条件的特性.因此,对明确的力学系统,我们有经典的决定论,也有初始条件的敏感依赖性.在预测系统的轨道方面仍存在极大的限制,因为对初值的估计不可能完全精确.一个典型的办法是考虑初值在某一个较小的范围内,然后让它随时间演化,考察轨道进入某个范围内的可能性,这样就把概率引进了确定性系统的讨论之中.在科学史上,密度的概念是近年来为本质上是统计的现象提供统一的描述而出现的[3].M axw ell (麦克斯韦)速度分布的引入诱导了稀薄气体(dilu te gas )理论的统一描述;量子力学的起源之一也是试图澄清P lank (普朗克)的黑体辐射密度方程;人口统计学在引入了Gom p ertz 年龄分布以后才得到迅速的发展.最早在物理学中考虑密度演化的是统计物理学家Bo ltz m ann (玻尔兹曼),他试图仿效进化论的创始人D arw in (达尔文)在生物学中的研究[4].D arw in 的研究表明,如果我们从研究群体而不是从研究个体开始,就可以理解依赖于选择压力的个体的易变性如何产生漂变.Bo ltz m ann 认为,从个体的动力学轨道开始,难以理解热力学第二定律及其所预言的熵的自发增加,从而必须从大量的粒子群体开始.熵增是这些粒子间大量碰撞造成的全局漂变.这两位巨人都用对群体的研究取代了对“个体”的研究,他们的研究表明细微的变化(个体得易变性或微观碰撞)在发生很长一段时间之后会在一个集体的层次上产生宏观演化.考虑一个具有3n 个自由度的封闭经典力学系统(例如在一个三维盒子中的n 个粒子).系统的状态完全由一组6n 个独立变量(p ,q )所确定,其中p =(p 1,p 2,…,p n ),q =(q 1,q 2,…,q n )分别为第i 个粒子的动量与位置.如果系统的状态X =X (p ,q 
)在某个时刻已知,则任何其它时刻的状态完全由牛顿定律所决定.如果能够对系统定义一个H am ilton 量H (X ,t ),则p i 与q i (i =1,…,n )随时间的演化可以由H am ilton 方程给出:p αi ≡d p i d t =-5H 5q i q αi ≡d q i d t =5H 5p i 若H (X ,t )不显含时间,则H (X )=E ,其中E 是系统的总能量.在这种情形,系统是保守的.现在给系统建立一个6n 维的相空间#,状态矢量X (p ,q )对应于相空间的一个点,当系统的状态随时间变化时,系统的点X 在空间#中画出一条轨道.因为经典系统的运动由初始条件唯一决定,故任何两条轨道都不相交.设X 为状态空间中的点,它表示系统在t 0时刻的状态,T t (X )为状态空间中表示系统在t +t 0时刻的状态的点.由此可见,T t 是状态空间中的一个变换,并且T 0=Id 为恒等算子,T t +s =T t ・T s ,所以,t →T t 是群R 在状态空间上的作用,由于沿解曲线系统的H am ilton 量是一个常数,每个能量曲面H -1(e )对变换T t 而言是不变的,从而可以把群R 作用到每个能量曲面上,人们对这个作用的渐近性质感兴趣,即:当t 充分大时,T t 的性质.变换T t 在H -1(e )上是连续的,当H -1(e )光滑时,它也是光滑的.另外,L i ouville 定理告诉我们,如果作用在系统上的力满足一定的条件,则可以选择适当的坐标系,使通常R 6n 中的2系统工程理论与实践2003年5月L ebesgue 测度于此坐标系中在T t 的作用下保持不变.如果我们能精确测量系统的初始状态,即:知道每个粒子的位置和速度,就可以知道系统的轨道随时间的演化的全部信息.但是,当n 较大时,在某一时刻精确测量系统中每个粒子的位置和速度是不可能的,我们总可以根据一些约束条件确知系统的状态一定在某个集合中.U lam 和V on N eum ann (1947)建议用随机的方法来考虑上述系统的演化[5].用集合A 表示系统可能的初始条件,但我们不能辨别它们:我们不能完全精确知道初始状态.系统在t 时刻的状态,由集合T t A 中的所有点描述.由于对初始条件敏感依赖性,T t A 不再是一个小集合,事实上它覆盖了t 时刻系统状态的所有可能性.令B 为我们所关心的一些特殊状态的集合(比如说,只有进入到B 中的状态才能被我们观察到).T t A 的一部分会落在B 中,一部分会落在B 外,我们关心的是交集T t A ∩B 中的点.为了能够展开讨论,我们利用这样一个事实:对于长时间的演化,系统存在一个不随时间演化而改变的自然概率测度m (L i ouville 定理),用以描述各种事件的概率.例如,m (T t A )=m (A )为与我们初始条件相关的事件A 的概率.m (T t A ∩B )就是我们所关心的事件的概率.在许多情况下,对于t 很大的时候,都会有m (T t A ∩B )≈m (A )×m (B ).这一所谓的混合(m ix ing )特性表明,集合T t A 是如此卷绕,以至于它在B 中的部分与B 的大小(用m (B )作为度量标准)成正比.因此,我们可以把系统的状态看成是相空间中的一个点,也可以把系统的状态就是相空间上的一个分布,它们随时间的演化对应于轨道演化和密度演化.当初始条件完全清楚时,初始分布对应于一个奇异的D irac 分布(广义函数).轨道演化一般每次只考察一条轨道,当系统具有某种轨道稳定性时,所考察的轨道可能是典型轨道,据此可以较好地把握系统的运行情况.密度演化同时考虑所有可能的轨道,但得不到准确的预测,只能得到概率意义下的预测.当系统行为较复杂,所有轨道都不稳定时,系统将敏感依赖于初始条件,长期准确预测将不可能,密度演化将得到更好的近似.几率与力学最早是通过气体运动论发生联系的.气体运动论的基础远在19世纪中叶就由K ron ig (克朗尼格1856),C lau siu s (克劳修斯,1857)与M axw ell (麦克斯韦,1860)接二连三地发表的一系列论文所奠定.这些研究以著名的Bo ltz m ann (玻尔兹曼)H 定理(1872)而达到了顶峰[6].它的极为重要的性质就在于引进了用分子速度分布函数来定义的物理量H ,H 的行为完全类同于热力学熵.用K ron ig 自己的话说(1856):“每个分子的路径谅必是这样的不规则,以至所有的计算都将无法进行.然而,根据几率理论的规律,我们可以假设一种完全有规律的运动来代替这种完全不规则的运动.”这种看法同样反映在速度分布变化率的Bo ltz m ann 方程中.在无外力的情况下,这个著名的方程可取如下的形式:5f 5t +v ・5f 5x =(5f 5t )coll .对一个无相互作用的粒子系统而言,“流动”项v ・5f 5x可以从力学定律导出(L i ouville 定理).另一方面,碰撞项不能单独由力学定律推导,但包含关于碰撞数的几率假设.1903年,Gibb s (吉布斯)引入了系综的概念:“我们可以想象有大量的性质相同的系统,但是在某个给定的时刻,它们具有不同的位形与速度,它们的差别不仅是无限小,而且也许甚至包含一切想象的位形与速度的组合……”系统的Gibb s 系综可以由相空间中的点“云”来表示.与该系综对应的相空间的密度为Θ(q 1,…,q n ,p 1,…,p n ,t ),且满足归一化条件.因此,t 时刻系统的状态位于相空间的体积元d q 1…dp n 中的概率为Θ(q 1,…,q n ,p 1,…,p n ,t )d q 1…dp n .在相空间的每个体积元中的密度变化应归因于通过该体积元的边界的流量差.这就给出了L i ouville 方程5Θ5t +∑ni =1[(5H 5p i )(5Θ5q i )-(5H 5q i )(5Θ5p i)]=0.L i ouville 方程的求解与正则方程的积分是等价的问题,它描述了系统的密度演化过程.研究复杂动力系统的行为存在着两种方法:一种是利用系统在相空间运动的轨道来研究;另一种是利用系统在相空间密度函数的演化来研究.后者主要是研究系统的演化算子(F roben iu s 2Perron 算子或Koopm ann 算子)在概率密度函数(状态)空间上作用的长期行为.与传统的轨道方法相比,概率密度演化3第5期建立系统科学基础理论框架的一种可能途径与若干具体思路(之七)4系统工程理论与实践2003年5月方法具有以下的优越性[7,8]:・尽管所研究的动力系统是非线性的,但概率密度的演化是线性的.・对不稳定系统,由于内在计算的限制,轨道的计算极为困难,而概率密度的演化有时是稳定的,所以计算较为容易.・演化算子的引入使我们可以应用和发展泛函分析方法来处理问题,为动力系统的研究提供了有力工具.・密度演化方法有助于从整体上把握系统演化的长期行为.一般而言,动力系统的研究有双重目的:一是描述动力系统的主要部分,典型的轨道行为,特别是当时间趋于无穷时的性质;二是了解这些典型的行为在系统作少许改变以后会如何变化,也就是说系统在什么程度下和对小扰动是稳定的.一个基本的稳定性概念是结构稳定性,它最早由A ndronov和Pon tryagin提出.它要求所有所有的轨道结构在小扰动下保持不变:即,存在一个同胚可以把与原来系统“充分接近“的扰动系统的轨道和原系统的轨道一一对应起来.为了考察结构稳定性,S.Sm ale在20世纪60年代早期引入了一致双曲(公理A)系统,Palis和Sm ale猜测:一个微分同胚(或流)是结构稳定的充分必要条件为它是一致双曲的,且满足所谓的强横截条件.70年代中期,Robb in,de M elo和Rob in son证明了上述猜想的充分性,必要性的证明仍然是一个公开问题.但对C1扰动,M ane和H ayash i分别解决了微分同胚和流的情形.我国学者廖山涛院士、文兰院士等在此方面也作了较基本的工作.尽管有了巨大的进展,但结构稳定性太强了,并不能较好地应用于较多的系统.一些重要的模型,如L o renz系统和H 
enon映射,并不是结构稳定的,但它们的主要特征在一些小扰动之后仍然是保持的.在上世纪60年代和70年代也曾出现一些具拓扑意义的稍弱的稳定性的概念,但都因限制太强而不能很好地应用.近年来,越来越多的注意力集中于考虑系统的统计性质在扰动下的持久性.一个自然的问题是考虑所关心的动力系统的“物理测度”在特定扰动下的连续性.一个重要的概念就是的Sinai2R eu lle2Bow en (SRB)测度(或物理测度).Bo rel概率测度Λ是SRB测度,如果存在相当多的点(L ebesgue测度大于零)的轨道平均弱收敛于Λ.因为动力系统的关于L ebesgue测度绝对连续的不变测度一定是一个SRB测度,而这样的测度可以用密度演化的方法得到,使得密度演化自然成了需要发展的工具.只有一个自由度的一维确定性系统(映射)的演化也可以得到宏观状态的分布密度,这一事实在半个世纪前由E.Bo rel(1909),A.Rényi(1957)和S.U lam和J.V on N eum an n(1947)等开创性工作所揭示[1,3],只是这些工作只为少数在遍历性理论方面工作的数学工作者所熟悉.本文将介绍密度演化方法的一些基本情况.出于方便和可读性等方面的考虑,我们选择一维离散动力系统作为对象来叙述.应该指出的是,对于连续流和更“复杂”的系统,同样可以用密度演化方法来处理.另外,对材料的选择也反映了我们的偏好,确定性系统的密度演化方法的许多方面难以在本文中全面介绍,许多结论的证明都因篇幅限制而省略,但都指出了参考文献Ζ更详细的内容可以参考L aso ta和M ackey的专著[3],Boyarsky和Góra的专著[1],P rigogine的普及性著作[4],以及这些书后的参考文献,部分结论引自学位论文[8]Ζ2 M arkov算子与Froben ius-Perron算子(X,2,Λ)为概率测度空间(几乎所有结论对Ρ-有限测度空间也成立),L1={f:∫X f dΛ<∞}为所有可积函数组成的空间,其范数为‖f‖=∫X f dΛ.L1+={f∈L1:f≥0}.D={f∈L1+:‖f‖=1}为所有概率密度函数组成的集合.f∈L1,f-(x)=m ax(-f(x),0)表示f的负部,f+(x)=m ax(f(x), 0)表示f的正部,supp f={x:f(x)≠0}表示f的支撑集合,注意到f的支集在相差一个零测集的意义下是唯一的.定义2.1[3] 线性算子P :L 1→L 1称为M arkov 算子,如果P 满足:1.非负性:f ≥0]Pf ≥0;2.M arkov 性:f ≥0]‖Pf ‖=‖f ‖.显然,如果P 为M arkov 算子,则P 把密度函数变为密度函数,即PD ΑD .如果Pf =f ,称f 为P 的不动点,若还有f ∈D ,则称f 为P 的不变密度.典型的M arkov 算子包括:1.状态有限(可数)的M arkov 链的转移矩阵;2.(X ,2,Λ)上的非奇异变换所对应的F roben iu s 2Perron 算子;3.由随机核定义的积分算子.F roben iu s 2Perron 算子是研究复杂动力系统的密度演化的主要工具,它于1928年由Kuz m in 首先引入[9],被用来描述所关心的变换对概率密度函数的作用.在正式引入F roben iu s 2Perron 算子之前,先看一个具体的实例.设Ν为I =[0,1]上的随机变量,其分布密度函数为f (x ),对任何集合A ΑI ,有p rob (Ν∈A )=∫Af (x )d m ,其中d m 为[0,1]上的L ebesgue 测度.如果S :I →I 为一个变换,则S (Ν)也是一个随机变量,一个自然的问题是:S (Ν)的密度函数是什么?注意到p rob (S (Ν)∈A )=p rob (Ν∈S -1(A ))=∫S -1A f dm ,为了得到S (Ν)的密度函数,只需把∫S -1A f d m 写成∫A <d m 的形式,其中<为某个特定的函数.如果S 是非奇异的(m (A )=0]m (S -1(A ))=0),设u (A )=∫S -1Af d x ,其中f 为随机变量Ν的密度函数,A 为L ebesgue 可测集.由于S 是非奇异的,m (A )=0蕴含m (S -1(A ))=0,从而u (A )=0,这表明测度u 关于测度m 绝对连续,根据R adom 2N ikodym 定理,存在唯一的可积函数使得对所有的L ebesgue 可测集A ,有u (A )=∫A<d m .令P S f =<,这样,概率密度函数f 就变成了概率密度函数P S f ,P S 依赖于S ,称为变换S 所对应的F roben iu s 2Perron 算子.有了上面的直观解释,下面给出F roben iu s 2Perron 算子的一般定义.设(X ,2,Λ)为概率测度空间,Μ为(X ,2)上的另一个测度,称测度Μ关于测度Λ绝对连续,如果Λ(A )=0蕴含Μ(A )=0.设S :X →X 为X 上的变换,称S 是可测的,如果ΠA ∈2,有S -1(A )∈2.可测变换S 称为是非奇异的,如果ΠA ∈2,Λ(A )=0蕴含Λ(S -1)(A )=0.定义2.2[1,3] (X ,2,Λ)为概率测度空间,S :X →X 为非奇异变换,由下式定义的算子P S :L 1→L 1,∫A P S f (x )Λ(d x )=∫A f (x )Λ(d x ),ΠA ∈2.称为变换S 所对应的F roben iu s 2Perron 算子.由定义可知,非奇异变换S 所对应的F roben iu s 2Perron 算子P S 具有以下性质:1.P S 是线性算子;2.当f (x )≥0时,P S f (x )≥0;3.当f (x )≥0时,‖P S f (x )‖=‖f (x )‖;4.f (x )∈L 1,‖P S f (x )‖≤‖f (x )‖;5.如果S 所对应的F roben iu s 2Perron 算子为P S ,则S n 对应的F roben iu s 2Perron 算子为P S n .从上述性质可以看出,非奇异变换S 所对应的F roben iu s 2Perron 算子P S 是M arkov 算子,它与M arkov 链中的转移矩阵有些类似.实际上,F roben iu s 2Perron 算子在某种意义下是转移矩阵在确定性系统中的推广.对于变换S 所诱导的离散动力系统,如果我们把[0,1]上的分布看成是系统的状态,S 所对应的F roben iu s 2Perron 算子P S 就描述了系统状态的密度的演化过程Ζ这一点和马氏链的演化非常相似Ζ一5第5期建立系统科学基础理论框架的一种可能途径与若干具体思路(之七)个马氏链的演化完全由其状态转移矩阵和初始分布唯一决定Ζ如果马氏链的转移矩阵为P,初始分布密度为q,则在第n个时刻,系统状态的分布密度为qP n.类似地,如果由变换S所对应的F roben iu s2Perron算子为P S,初始状态的分布密度为f(x),则在第n个时刻,系统的状态分布密度为P n S f(x).我们所说的密度演化,关心的就是当时间n趋近于无穷时P n S f(x)的性质:它在什么情况下、在什么意义下(平均收敛、弱收敛、强收敛等)是收敛的?如果收敛,是否是全局的?收敛速度如何?如果全局收敛,它的渐近分布密度如何近似计算?……在某些情况下,可以写出P S f的显示表达式.设X=[a,b]为一个区间,令A=[a,x],于是,∫a x P S f(t)d t=∫S-1([a,x])f(t)d t,对上式两端取微分得到:P S f(x)=dd x∫S-1([a,x])f(s)d s.3 利用密度演化方法研究混沌行为设(X,2,Λ)为概率测度空间,S:X→X为可测变换,如果对任何A∈2,Λ(A)=Λ(S-1(A))成立,则称S是保测变换.如果S是保测变换,也称S是关于Λ不变的测度,或Λ是S的不变测度.应该指出的是,保测变换是非奇异的.下面的定理3.1表明S所对应的F roben iu s2Perron算子的非负不动点的存在性蕴含了非奇异变换的绝对连续不变测度的存在性.定理3.1[3] 设(X,2,Λ)为概率测度空间,S:X→X为可测变换,P S为S所对应的F roben iu s2 Perron算子,f∈L1为非负函数.则测度Λf(Λf(A)=∫A fΛ(d x),A∈2)是S的绝对连续不变测度当且仅当P S 
f=f.于是,根据定理3.1,Λ为S的不变侧度当且仅当P S1=1.例3.2 r进制变换S(x)=rx(m od1),r>1为整数.它所对应的F roben iu s2Perron算子为P S f(x)=dd x∫S-1([0,x])f(s)d s=∑r-1i=0dd x∫i ri+xrf(s)d s=1r∑r-1i=0f(i+xr). 容易验证,P S1=1,这表明L ebesgue测度是S的不变测度.例3.3 二次映射S(x)=4x(1-x),x∈[0,1].S所对应的F roben iu s2Perron算子为P S f(x)=dd x∫S-1([0,x])f(s)d s=dd x∫12-121-xf(s)d s+dd x∫112+121-xf(s)d s=141-x[f(12-121-x+f(12+121-x)]. 1947年,S.U lam和J.V on N eum ann使用特殊的变换求出了S的绝对连续不变测度Λf3(Λf3(A)=∫A d xΠx(1-x),ΠA∈2)[5].直接验证如下:P S f3(x)=141-x[f3(12-121-x)+f3(12+121-x)]=141-x[1Π(12-121-x)(12+121-x) +1Π(12+121-x)(12-121-x)]6系统工程理论与实践2003年5月=1Πx (1-x )=f 3(x ). 定义3.4[1,3] 设(X ,2,Λ)为概率测度空间,S :X →X 为非奇异变换,称S 是遍历的(ergodic ),如果ΠA ∈2,S -1(A )=A 蕴含Λ(A )=0或Λ(A )=1.定理3.5[1,3] (X ,2,Λ)为概率测度空间,S :X →X 为非奇异变换,P S 为S 对应的F roben iu s 2Perron 算子.如果S 是遍历的,则P S 至多有一个不变密度f 3.若P S 有唯一不变密度f 3,且f 3>0,Λ2a .e .,则S 是遍历的.定理3.6(B irkhoff 个别遍历定理)[1,3] 设(X ,2,Λ)为概率测度空间,S :X →X 为保测变换,f ∈L 1.则存在f 3∈L 1使得:li m n →∞1n ∑n -1k =0f (S k (x ))=f3(x )对几乎所有x ∈X 成立,并且f 3.S =f 3,∫Xf 3(x )d Λ=∫X f (x )d Λ.推论3.7[3] 设(X ,2,Λ)为概率测度空间,S :X →X 为非奇异变换,如果S 遍历,则f 3几乎处处为常数.若Λ(X )<∞,则f 3(x )=∫X f d Λ Λ2a .e .x ∈X . 特别地,若S 遍历,则1n ∑n -1k =01A (S k (x ))→Λ(A ), Λ2a .e . x ∈X . 因此,X 中几乎所有点的轨道进入特定集合A 中的频率趋近于Λ(A ).f ∈L 1的关于点x ∈X 的轨道{S k (x )}的时间平均为:lim n →∞1n ∑n -1k =0f (S k (x )),f 关于空间X 的平均为:∫X f (x )Λ(d x ).如果变换S 是遍历的,B irkhoff 个别遍历定理及其推论表明对几乎所有的点x ∈X ,f关于x 的轨道{S n (x )}的时间平均等于f 关于空间的平均,即通常所说的时间平均等于空间平均.反之亦然,即:如果Πf ∈L 1,f 的时间平均等于空间平均,则S 遍历.定义3.8[1,3] 设(X ,2,Λ)为概率测度空间,S 为保测变换.S 被称为是混合的(m ix ing ),如果li m n →∞Λ(A ∩S -n (B ))=Λ(A )Λ(B ),ΠA ,B ∈2. 定义3.9[1,3] 设(X ,2,Λ)为概率测度空间,S 为保测变换.S 被称为是正合的(exact ),如果对任何A ∈2有S (A )∈2,且li m n →∞Λ(S n (A ))=1, ΠA ∈2,Λ(A )>0. 直观地讲,遍历变换意味着从给定点出发,经过多次迭代之后可接近任意点,即各态历经Ζ混合变换意味着从任何集合A 出发的点,经过多次迭代之后所得之点属于集合B 的可能性为两个集合测度之积,且与A ,B 的位置无关,即混合均匀Ζ正合变换则意味着从任何非零测度集合出发,经过多次迭代之后所得集合的点将充满整个空间,即充分扩散[10]ΖS 混合蕴含S 遍历,S 正合蕴含S 混合.实际上,有如下更有力的结论.定理3.10[3] 设(X ,2,Λ)为概率测度空间,S :X →X 为保测变换,P S 为S 所对应的F roben iu s 2Perron 算子,D 为概率密度函数的集合.1.S 是遍历的当且仅当对所有f ∈D ,密度函数序列{P S n f }平均收敛于密度函数1;2.S 是混合的当且仅当对所有f ∈D ,密度函数序列{P S n f }弱收敛于密度函数1;3.S 是正合的当且仅当对所有f ∈D ,密度函数序列{P S n f }强收敛于密度函数1;由此可见,遍历性、混合性与正合性反映了S 的复杂行为的三个不同的层次,而且这三个层次可用P S 在密度函数空间上作用的特性来描述.粗略地说,S 是保持初始密度不变的表明均匀密度f (x )=1为P S 的不变密度(P S 1=1).遍历性意味着P S 有唯一的不变密度,而混合性与正合性则对应于P S 的不变密7第5期建立系统科学基础理论框架的一种可能途径与若干具体思路(之七)度f(x)=1的两种不同层次的稳定性.4 绝对连续不变测度的存在性设(X,2,Λ)为概率测度空间,S:X→X为非奇异变换.我们在上一节介绍了S的不变测度的概念.在动力系统的研究中,寻求S的不变测度是一个重要的课题.因为不变测度忽略了暂时效应,描述了S的长期行为,同时不变测度还提供了S的奇异吸引子的概率描述.我们不去研究奇异吸引子本身,而只考虑系统在奇异吸引子上的统计行为,这样做似乎回避了主要困难,但在许多实际问题中已经足够了.B irkhoff遍历定理表明了不变测度的重要性,但它并没有给出有关不变测度存在性的任何信息.尽管K rylov和Bogo liubov定理断言紧空间上连续映射的不变测度的存在性[11],但它并不是一个非常有用的结果,因为这些不变测度在物理上可能没有用处.例如,支撑在不动点上的单点测度是该系统的一个不变测度,但这个测度没有给出除该不动点之外的系统行为的其它信息.因此,我们对那些在物理上有意义的不变测度感兴趣,实际上,也就是那些可以通过跟踪大多数初始点的轨道观察到的,并且支撑在正测集合上的不变测度.这就要求我们去研究S的绝对连续不变测度.考察一个区间上的变换S(例如S(x)=4x(1-x)),一般而言,这样的变换具有无穷多个不变测度,但是计算机对轨道的模拟只表现出一个测度,也就是S的绝对连续不变测度.这表明了S的绝对连续不变测度在实践中的重要性.这类不变测度的另一个重要特征是它们是“吸引的”.设Λ为变换S的一个不变测度,B irkhoff遍历定理表明对几乎所有的点x(关于测度Λ而言),时间平均等于空间平均,如果Λ还是关于L ebesgue测度绝对连续不变测度,可以得到更强的结论:时间平均等于空间平均对L ebesgue几乎所有的点成立,即便是Λ的支撑不是整个空间X时也是如此.也就是说,当我们从一个物理上有意义的点出发时,沿着从该点出发的轨道的时间平均等于关于绝对连续不变测度的积分.4.1 弱预紧性定理3.1把寻求非奇异变换S的绝对连续不变测度的问题转化为寻找相应的F roben iu s2Perron算子P S的非零不动点的问题.因此,为了判断S是否有绝对连续不变测度,只需考察P S f=f是否有非零解.容易想到的方法是,选一个f∈D,考察迭代序列{P S n f},如果{P S n f}收敛于某个f3∈D,则P S f3= f3.但是,要证明{P S n f}收敛(强或弱)于函数f3非常困难.因此,我们考虑平均序列{A n f}的收敛性,其中A n f=1n ∑n-1k=0P S k f.如果{A n f}收敛于f3,同样有P S f3=f3.下面的定理4.1讨论了平均序列{A n f}的收敛性.定理4.1[3] 设(X,2,Λ)为概率测度空间,S:X→X为非奇异变换,P S为S所对应的F roben iu s2 Perron算子.如果对给定的f∈L1,平均序列{A n f}所组成的集合是弱预紧的,则{A n f}强收敛于f3∈L1,并且f3为P 
S的不动点.更进一步,如果f∈D,则有f3∈D,f3为P S的不变密度.定理4.1是属于Kaku tan i和Yo sida的抽象遍历定理的特殊情形[12].至此,寻求S的绝对连续不变测度的问题转化为考察序列{A n f}所组成的集合的弱预紧性.定义4.2[12] 设(X,2,Λ)为概率测度空间,子集F∈L1被称为是弱预紧的,如果F中的每个序列{f n},有子列{f nk }弱收敛于fθ∈L1.在上述定义中称F是弱预紧的而不是弱紧的,是因为只要求极限函数fθ∈L1而不是要求fθ∈F.下面的定理4.3给出了刻画子集F∈L1是弱预紧的充分必要条件.定理4.3[12] FΑL1是弱预紧的,当且仅当1.存在M<∞使得‖f‖≤M, Πf∈F;2.ΠΕ>0,存在∆>0,使得∫A f(x) u(d x)<Ε,Πf∈F,Λ(A)<∆; 根据定理4.1和定理4.3可以得出下面的结论.推论4.4[3] 设(X,2,Λ)为概率测度空间,S:X→X为非奇异变换.8系统工程理论与实践2003年5月。
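The density-evolution viewpoint discussed above (Examples 3.2 and 3.3: the r-adic map and the logistic map S(x) = 4x(1−x), whose invariant density is 1/(π√(x(1−x)))) can be explored numerically with Ulam's method, which discretizes the Frobenius–Perron operator into a column-stochastic matrix on a partition of [0, 1]. The following sketch only illustrates that idea; the bin count, sample sizes, and iteration count are arbitrary choices, not values from the text.

import numpy as np

def ulam_matrix(S, n_bins=200, samples_per_bin=500, seed=0):
    """Discretize the Frobenius-Perron operator of a map S on [0,1] with
    Ulam's method: P[i, j] = fraction of sample points of bin j mapped by S
    into bin i, so P acts on piecewise-constant densities."""
    rng = np.random.default_rng(seed)
    P = np.zeros((n_bins, n_bins))
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for j in range(n_bins):
        x = rng.uniform(edges[j], edges[j + 1], samples_per_bin)
        idx = np.minimum((S(x) * n_bins).astype(int), n_bins - 1)
        for i in idx:
            P[i, j] += 1.0 / samples_per_bin
    return P

S = lambda x: 4.0 * x * (1.0 - x)          # logistic map of Example 3.3
n = 200
P = ulam_matrix(S, n)
f = np.ones(n)                             # start from the uniform density
for _ in range(50):                        # iterate the discretized operator
    f = P @ f
x = (np.arange(n) + 0.5) / n
f_star = 1.0 / (np.pi * np.sqrt(x * (1.0 - x)))   # analytic invariant density
print("max relative deviation away from the endpoints:",
      np.max(np.abs(f[5:-5] - f_star[5:-5]) / f_star[5:-5]))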
Huawei OceanStor Dorado 3000 All-Flash System Datasheet
Huawei OceanStor Dorado 3000is an entry-level storage system in the OceanStor Dorado all-flash series.It features an innovative hardware platform,FlashLink®intelligent algorithms,and end-to-end (E2E)NVMe architecture,which combine to deliver a 50%higher performance than the previous generation at an ultra-low 0.05ms latency.The intelligent algorithms are built into the storage system to make storage more intelligent during the application operations.SCM intelligent cache acceleration further reduces latency by 60%.Furthermore,the all active-active (A-A)architecture and simplified GUI design help simplify Operationsand Maintenance (O&M).Excelling in scenarios such as virtualization,OA,and branches,Huawei OceanStor Dorado 3000all-flash storage is a trustedoption for small and medium-sized businesses (SMBs)in thecarrier,finance,government,manufacturing,and other fields.Thestorage system provides cost-effective services,making it ideal forthe IT applications of SMBs.OceanStor Dorado 3000All-Flash Storage System 50%higher performance than theprevious generationE2E NVMe for 0.05ms of ultra-lowlatencyFlashLink®intelligent algorithmsSCM intelligent cache accelerationreduces latency by 60%Fast✓The intelligent multi-protocol interface module hosts theprotocol parsing previously performed by the general-purpose CPU, expediting the front-end access performanceby 20%.✓The computing platform offers industry-leadingperformance with 25% higher computing power than theindustry average.✓The intelligent accelerator module analyzes andunderstands I/O rules of multiple application models based Extensive intelligent software features (Smart series)3-layer management:•365-day capacity trends prediction •60-day performance bottleneck prediction •14-day disk fault prediction •Immediate solutions for 93%ofproblems FlashEver:No data migration over 10years for 3-gen systems Intelligent One device integrates multiple functions for easy management System configuration success in 3steps and resource readiness in 5minutes A-A architecture for non-disruptive upgrade (NDU)management Simplified Product Features Fast Innovative hardware platform : The hardware platform ofHuawei storage enables E2E data acceleration, improving thesystem performance by 50% compared to the previousgeneration.on machine learning frameworks to implement intelligentprefetching of memory space. This improves the read cache hit ratio by 50%.✓SmartCache+ SCM intelligent multi-tier caching identify whether or not the data is hot and uses different media tostore it, reducing the latency by 60% in OLTP (100% reads) scenarios.✓The intelligent SSD hosts the core Flash Translation Layer (FTL) algorithm, accelerating data access in SSDs andreducing the write latency by half.✓The intelligent hardware has a built-in Huawei storage fault library that accelerates component fault location anddiagnosis, and shortens the fault recovery time from 2hours to just 10 minutes.Intelligent algorithms: Most flash vendors lack E2E innate capabilities to ensure full performance from their SSDs. 
OceanStor Dorado 3000 runs industry-leading FlashLink® intelligent algorithms based on self-developed controllers, disk enclosures, and operating systems.✓Many-core balancing algorithm: Taps into the many-core computing power of a controller to maximize the dataprocessing capability.✓Service splitting algorithm: Offloads reconstruction services from the controller enclosure to the smart SSD enclosure to ease the load pressure of the controller enclosure for moreefficient I/O processing.✓Cache acceleration algorithm: Accelerates batch processing with the intelligent module to bring intelligence to storagesystems during application operations.The data layout between SSDs and controllers is coordinated synchronously.✓Large-block sequential write algorithm: Aggregates multiple discrete data blocks into a unified big data blockfor disk flushing, reducing write amplification and ensuringstable performance.✓Independent metadata partitioning algorithm: Effectively controls the performance compromise caused by garbagecollection for stable performance.✓I/O priority adjustment algorithm: Ensures that read and write I/Os are always prioritized, shortening the accesslatency.FlashLink® intelligent algorithms give full play to all flash memory and help Huawei OceanStor Dorado achieve unparalleled performance for a smoother service experience.E2E NVMe architecture for full series: All-flash storage has been widely adopted by enterprises to upgrade existing ITsystems, but always-on service models continue to push IT system performance boundaries to a new level. Conventional SAS-based all-flash storage cannot break the bottleneck of 0.5 ms latency. NVMe all-flash storage, on the other hand, is a future-proof architecture that implements direct communication between the CPU and SSDs, shortening the transmission path. In addition, the quantity of concurrencies is increased by 65,536 times, and the protocol interaction is reduced from four times to two, which doubles the write request processing. Huawei is a pioneer in adopting end-to-end NVMe architecture across the entire series. OceanStor Dorado 3000 uses the industry-leading 32 Gb FC-NVMe/25 Gb RoCE protocols at the front end and adopts Huawei-developed link-layer protocols to implement failover within seconds and plug-and-play, thus improving the reliability and O&M. It also uses a 100 Gb RDMA protocol at the back end for E2E data acceleration. This enables latency as low as 0.05 ms and 10x faster transmission than SAS all-flash storage.Linear increase of performance and capacity: Unpredictable business growth requires storage to provide simple linear increases in performance as more capacity is added to keep up with ever-changing business needs. OceanStor Dorado 3000 supports scale-out of 16 controllers, and IOPS increases linearly as the quantity of controller enclosures increases, matching the performance needs of future business development. IntelligentOn and off-cloud synergy: Huawei OceanStor Dorado 3000all-flash system combines general-purpose cloud intelligence with customized edge intelligence over a built-in intelligent hardware platform, providing incremental training and deep learning for a personalized customer experience. The eService intelligent O&M and management platform collects and analyzes over 190,000 device patterns on the live network in real time, extracts general rules, and enhances basic O&M. 
Intelligence throughout service lifecycle: Intelligent management covers resource planning, provisioning, system tuning, risk prediction, and fault location, and enables 60-day and 14-day predictions of performance bottlenecks and disk faults respectively, and immediate solutions for 93% of problems detected.Extensive intelligent software features: Thin provisioning and data reduction improve space utilization; intelligent QoS improves service quality; and intelligent heterogeneous virtualization and data migration combine to ensure simplified system lifecycle management.FlashEver : The intelligent flexible architecture implements component-based upgrades without the need for data migration within 10 years. Users can enjoy latest-generation software and hardware capabilities without investing again in the related storage software features.SimplifiedSimple management*:Huawei OceanStor Dorado 3000 delivers SAN and NAS services and supports their parallel access. Built-in containers support storage and compute convergence. The convergence of cross-generation devices allows for joint resource usage. As such, multiple functions are converged to simplify management and greatly reduce the TCO.Simple configuration: A brand-new graphical user interface (GUI) greatly simplifies the configuration process of traditional storage. This facilitates storage system configuration in just three steps and resource readiness in just five minutes, without assistance from dedicated personnel. This meets the key requirements of SMBs for simple and easy-to-use IT systems. Simple O&M: The active-active architecture ensures there is no LUN ownership, meaning a LUN does not belong to any specific controller. In addition, load balancing and non-disruptive upgrade (NDU) are supported. 
O&M personnel do not need to prepare much on the host side before an upgrade, greatly improving O&M efficiency.Technical SpecificationsModel OceanStor Dorado 3000Hardware SpecificationsMaximum Number ofControllers16Maximum Cache (DualControllers, Expanding withthe Number of Controllers)128 GB-1024 GB192 GB-1536 GBSupported Storage Protocols FC, iSCSI FC, iSCSI, NFS*, CIFS*Front-End Port Types8/16/32 Gbit/s FC/FC-NVMe*, 10/25/40/100 GbE, 25 Gb NVMe over RoCE*Back-End Port Types SAS 3.0/ 100 Gb RDMAMaximum Number of Hot-Swappable I/O Modules perController Enclosure6Maximum Number of Front-End Ports per ControllerEnclosure40Maximum Number of SSDs1,200SSDs 1.92 TB/3.84 TB/7.68 TB palm-sized NVMe SSD, 960 GB/1.92 TB/3.84 TB/7.68 TB/15.36 TB SASSSDSCM Supported800 GB SCM*Software SpecificationsSupported RAID Levels RAID 5, RAID 6 and RAID-TP (tolerates simultaneous failures of 3 SSDs)Number of LUNs8,192Value-Added Features SmartDedupe, SmartVirtualization, SmartCompression, SmartMigration, SmartThin,SmartQoS(SAN&NAS), HyperSnap(SAN&NAS), HyperReplication(SAN&NAS),HyperClone(SAN&NAS), HyperMetro(SAN&NAS), HyperCDP(SAN&NAS), CloudBackup*,SmartTier*, SmartCache*, SmartQuota(NAS)*, SmartMulti-Tenant(NAS)*, SmartContainer* Storage ManagementSoftwareDeviceManager UltraPath eServicePhysical SpecificationsPower Supply Controller enclosure: 100V–240V AC±10%, 192V–288V DC,-48V to -60V DCDisk enclosure: 100V–240V AC±10%, 192V–288V DC,-48V to -60V DCDimensions (H x W x D)SAS controller enclosure: 86.1 mm x 447 mm x 520 mm NVMe controller enclosure: 86.1 mm x 447 mm x 620 mm SAS SSD enclosure: 86.1 mm ×447 mm ×410 mm NVMe SSD enclosure*: 86.1 mm x 447 mm x 620 mmWeight SAS controller enclosure: ≤ 30 kg NVMe controller enclosure: ≤ 32 kgSAS SSD enclosure: ≤ 20 kg NVMe smart SSD enclosure: ≤ 35 kgOperating Temperature–60 m to +1800 m altitude: 5°C to 35°C (bay) or 40°C (enclosure)1800 m to 3000 m altitude: The max. temperature threshold decreases by 1°C for everyaltitude increase of 220 mOperating Humidity10% RH to 90% RH*For further details on specifications with an asterisk for a specific project, please contact Huawei sales.Copyright © Huawei Technologies Co., Ltd. 2021. All rights reserved.No part of this document may be reproduced or transmitted in any form or by any means without the prior written consent of Huawei Technologies Co., Ltd.Trademarks and Permissions, HUAWEI, and are trademarks or registered trademarks of Huawei Technologies Co., Ltd. Other trademarks, product, service and company names mentioned are the property of their respective holders.Disclaimer THE CONTENTS OF THIS MANUAL ARE PROVIDED "AS IS". EXCEPT AS REQUIRED BY APPLICABLE LAWS, NO WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO, THE IMPLIEDWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ARE MADE IN RELATION TOTHE ACCURACY, RELIABILITY OR CONTENTS OF THIS MANUAL.TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO CASE SHALL HUAWEI TECHNOLOGIESCO., LTD BE LIABLE FOR ANY SPECIAL, INCIDENTAL, INDIRECT, OR CONSEQUENTIAL DAMAGES, OR LOSTPROFITS, BUSINESS, REVENUE, DATA, GOODWILL OR ANTICIPATED SAVINGS ARISING OUT OF, OR INCONNECTION WITH, THE USE OF THIS MANUAL.Tel: + S h e n z h en 518129,P.R.C h i n aBantian Longgang DistrictHUAWEI TECHNOLOGIES CO.,LTD.To learn more about Huawei storage, please contact your local Huawei officeor visit the Huawei Enterprise website: .Huawei Enterprise APPHuawei IT。
[Figure 1 illustration: three time-frame copies of the combinational logic, connected through PI/PO and PPI/PPO signals.]
State reduction
As pointed out in the previous work [1], minimizing the set of assignments to the state variables, called state reduction, is a critical step for improving the efficiency of sequential search. Seq-SAT performs state reduction not only when an intermediate solution is produced but also when an intermediate state objective is proved unsatisfiable. In the first case, it employs an efficient two-step state reduction algorithm to obtain a smaller state clause. The algorithm follows a process similar to the D-algorithm [8] originally proposed for ATPG and utilizes 3-value simulation. For the second case, a different conflict analysis procedure, rather than UIP-based conflict analysis [12], is applied to derive a smaller state clause.
Flexible sequential search framework
Sequential search can
Figure 1: Backward time-frame expansion
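Returning to the state-reduction step described above: the core mechanism it builds on is 3-valued (0/1/X) simulation, which lets the solver check whether a particular state-variable assignment is actually needed to justify an objective. The sketch below shows that "lifting" idea on a toy gate-level netlist; it is not Seq-SAT's actual two-step algorithm, and the netlist format, function names, and example are invented.

X = None   # the unknown value in the 3-valued (0/1/X) domain

def and3(a, b):
    if a == 0 or b == 0: return 0
    if a == 1 and b == 1: return 1
    return X

def not3(a):
    return X if a is X else 1 - a

def simulate3(gates, assignment):
    """Evaluate a combinational netlist with 3-valued logic.  `gates` is a
    topologically ordered list of ('AND'|'NOT', out, ins); `assignment` maps
    input/PPI names to 0, 1 or X."""
    val = dict(assignment)
    for kind, out, ins in gates:
        if kind == "AND":
            a, b = (val.get(i, X) for i in ins)
            val[out] = and3(a, b)
        else:                      # NOT
            val[out] = not3(val.get(ins[0], X))
    return val

def reduce_state(gates, state_assign, other_assign, objective):
    """Drop state-variable assignments that are not needed to justify the
    objective: tentatively set each one to X and keep the X if 3-valued
    simulation still produces the required value at the objective signal."""
    sig, want = objective
    reduced = dict(state_assign)
    for var in list(reduced):
        trial = dict(reduced)
        trial[var] = X
        if simulate3(gates, {**other_assign, **trial})[sig] == want:
            reduced[var] = X       # this state bit is not needed
    return {v: b for v, b in reduced.items() if b is not X}

# Tiny example: objective y = 1 where y = AND(s1, NOT s2); s3 is irrelevant.
gates = [("NOT", "ns2", ("s2",)), ("AND", "y", ("s1", "ns2"))]
print(reduce_state(gates, {"s1": 1, "s2": 0, "s3": 1}, {}, ("y", 1)))
# -> {'s1': 1, 's2': 0}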
In sequential SAT, a sequential circuit is conceptually unfolded into multiple copies through backward time-frame expansion (Figure 1). In each time-frame, the circuit becomes combinational and hence, a combinational SAT solver can be applied. In each time-frame, a
state element such as a flip-flop is translated into two corresponding signals: a pseudo primary input (PPI) and a pseudo primary output (PPO). The initial state is specified at the PPIs in time-frame 0. The objective is specified at the signals in time-frame n (the last timeframe, where n is unknown before solving the problem). While solving in an intermediate time-frames i (0 < i < n − 1), intermediate state solutions are produced at the PPIs and they become intermediate objectives for further justification in the previous time-frames i − 1. Seq-SAT has three major improvements over Satori:
An Efficient Sequential SAT Solver With Improved Search Strategies
F. Lu, M.K. Iyer, G. Parthasarathy, L.-C. Wang, and K.-T. Cheng Department of ECE, University of California at Santa Barbara Abstract
A sequential SAT solver Satori [1] was recently proposed as an alternative to combinational SAT in verification applications. This paper describes the design of Seq-SAT – an efficient sequential SAT solver with improved search strategies over Satori. The major improvements include (1) a new and better heuristic for minimizing the set of assignments to state variables, (2) a new priority-based search strategy and a flexible sequential search framework which integrates different search strategies, and (3) a decision variable selection heuristic more suitable for solving the sequential problems. We present experimental results to demonstrate that our sequential SAT solver can achieve orders-of-magnitude speedup over Satori. We plan to release the source code of Seq-SAT along with this paper.
I. Introduction
Boolean SAT finds applications in many areas of circuit design and verification such as Bounded Model Checking [2, 16], Unbounded Model Checking [13, 14], Redundancy Identification, Equivalence Checking [3], Preimage Calculation [15], etc. State-of-the-art SAT algorithms, as implemented in tools such as ZCHAFF [4], BERKMIN [5], and C-SAT [6], have demonstrated that very hard SAT problems can now be solved in reasonable time. Bounded sequential search using SAT has been shown to be very effective in model checking. However, its major disadvantage is its lack of completeness in general sequential search. A sequential SAT solver, Satori, was proposed in [1], which utilizes combined ATPG and SAT techniques to realize a sequential SAT solver by retaining the efficiency of Boolean SAT and being complete in the search. Given a sequential circuit, we assume that the circuit follows the Huffman synchronous sequential circuit model as illustrated in Figure 1. Sequential SAT (or sequential justification) is the problem of finding an ordered sequence of input assignments to a sequential circuit, such that a desired objective is satisfied, or proving that no such sequence exists. A desired objective can be a collection of signal value constraints, such as requiring that a primary output be 1. In this paper, we focus on sequential problems in which the initial state is given.
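As a conceptual summary of the backward time-frame expansion described earlier, the following sketch shows the outer loop of a sequential justification engine: solve the objective combinationally in the current frame, turn the returned PPI (state) assignment into the objective for the previous frame, and stop when the required state is the given initial state. The function solve_combinational and the frame limit are stand-ins; this is not Seq-SAT's algorithm (it omits state reduction, learned state clauses, alternative-solution enumeration, and the priority-based search), only the skeleton that Seq-SAT refines.

def sequential_justify(objective, initial_state, solve_combinational,
                       max_frames=100):
    """Skeleton of backward time-frame expansion for sequential justification.

    `objective` constrains signals of the last time-frame; `initial_state` is
    the given assignment to the state elements; `solve_combinational` stands
    in for a combinational SAT call on a single time-frame and must return a
    pair (input_vector, ppi_state) satisfying its objective, or None.
    Returns an input sequence (earliest vector first) or None if nothing was
    found within the limits -- a complete solver would also enumerate
    alternative combinational solutions and learn from failures.
    """
    inputs_after = []                 # input vectors of the frames already solved
    state_objective = objective
    for _ in range(max_frames):
        solution = solve_combinational(state_objective)
        if solution is None:          # this state objective cannot be justified
            return None
        inputs, ppi_state = solution
        inputs_after = [inputs] + inputs_after
        if ppi_state == initial_state:
            return inputs_after       # the sequence applies from the initial state
        state_objective = ppi_state   # justify this state one frame earlier
    return None

Seq-SAT's contributions sit inside and around this loop: smaller state clauses via state reduction, a priority-based ordering of the intermediate state objectives, and conflict analysis tailored to unsatisfiable state objectives.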