Mining the Web for Object Recognition
elementary statistics 10th solutions

Summary: 1. Overview: the EBS (Elastic Block Store) server returns an unknown error. 2. Cause analysis: possible causes include EBS server configuration problems, network problems, and restrictive security group rules. 3. Solutions: check the EBS server configuration, check the network connection, and adjust the security group rules. 4. Conclusion: how to handle an unknown error returned by the EBS server.

Body: EBS (Elastic Block Store) is a block storage service of Amazon Web Services that provides persistent block storage for Amazon EC2 (Elastic Compute Cloud) instances. When using EBS, you may occasionally find that the server returns an unknown error. This article analyzes the possible causes and offers solutions.

I. Cause analysis
1. EBS server configuration problems: a misconfigured EBS server can return an unknown error; for example, its capacity may be insufficient, or its software version may be too old.
2. Network problems: the network connection between the EBS server and the client may be faulty, causing the unknown error. In this case, check the network connection and make sure the EBS server and the client can communicate normally.
3. Security group rule restrictions: if security group rules are configured on the EBS server, they may block clients from reaching it. Check the security group rules and make sure they allow client access.

II. Solutions
1. Check the EBS server configuration: first, verify that its capacity is sufficient and its software is up to date. If a configuration problem is found, correct it promptly.
2. Check the network connection: next, verify the network connection between the EBS server and the client. Make sure the connection works; you can ping the EBS server to confirm network reachability.
3. Adjust the security group rules: if the security group rules are blocking client access, adjust them promptly. You can modify the rules in the Amazon EC2 console to allow client access.
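As a minimal illustration of the checks described above, the sketch below uses boto3 to list a volume's state and a security group's inbound rules. The region, volume ID, and group ID are placeholders, and the exact checks a real deployment needs will differ.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# 1. Check the EBS volume's state (e.g., "available", "in-use", "error").
volumes = ec2.describe_volumes(VolumeIds=["vol-0123456789abcdef0"])  # placeholder ID
for vol in volumes["Volumes"]:
    print(vol["VolumeId"], vol["State"], vol["Size"], "GiB")

# 2. Inspect the security group rules that may be blocking client access.
groups = ec2.describe_security_groups(GroupIds=["sg-0123456789abcdef0"])  # placeholder ID
for group in groups["SecurityGroups"]:
    for rule in group["IpPermissions"]:
        print(rule.get("IpProtocol"), rule.get("FromPort"), rule.get("ToPort"),
              [r["CidrIp"] for r in rule.get("IpRanges", [])])
```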
An Image Recognition Method Based on a Bidirectional Feature Pyramid and Deep Learning

Journal of Harbin University of Science and Technology, Vol. 26, No. 2, April 2021. An Image Recognition Method Based on a Bidirectional Feature Pyramid and Deep Learning. Zhao Sheng 1, Zhao Li 2 (1. PET/CT Center, Third Affiliated Hospital of Kunming Medical University, Kunming 650118; 2. Basic Medical School, Kunming Medical University, Kunming 650500). Abstract: Image object recognition and detection (image recognition) is a fundamental task in computer vision.
In recent years, deep neural networks have driven progress in image object recognition. Handling multiple scales remains one of the difficult problems in image recognition, and introducing a feature pyramid is an effective way to recognize objects at different scales. However, most existing feature-pyramid methods fuse semantic information only along a top-down pathway, which cannot improve the precision of recognizing large-scale objects. To address this problem, this paper proposes a bidirectional semantic-information fusion model for the feature pyramid, which fuses semantic information across image scales in both directions. The model is then embedded into a deep neural network, yielding a new multi-scale image recognition method based on bidirectional semantic fusion over the feature pyramid, with the goal of improving recognition precision for objects of different scales. Experimental results show that the proposed method improves mean average precision by at least 0.7% over the compared methods on the PASCAL VOC dataset, and its average precision on the MS COCO dataset is also better than that of the other methods. These results confirm that the proposed method effectively improves the precision of multi-scale image recognition.
Keywords: image recognition; feature pyramid; deep neural network; computer vision. DOI: 10.15938/j.jhust.2021.02.006. CLC number: TP391.41. Document code: A. Article ID: 1007-2683(2021)02-0044-07.

On Image Recognition Using Bidirectional Feature Pyramid and Deep Neural Network. ZHAO Sheng 1, ZHAO Li 2 (1. PET/CT Center, Third Affiliated Hospital of Kunming Medical University, Kunming 650118, China; 2. Basic Medical School, Kunming Medical University, Kunming 650500, China). Abstract: Object recognition is one of the fundamental tasks in the area of computer vision. The development of deep neural networks advances object recognition. Nonetheless, multi-scale object recognition still remains a challenging task. The feature pyramid is a promising technology to address multi-scale object recognition. However, existing feature pyramid-based object recognition schemes usually employ a top-down pathway, which cannot improve the recognition of large-scale objects. To address this issue, a novel bidirectional enhanced feature pyramid-based object recognition scheme is proposed. The proposed scheme can improve the precision of both large-scale and small-scale object recognition by enabling semantic information enhancement from both top to bottom and bottom to top. The experimental results show that the proposed scheme improves mean average precision by at least 0.7% on the PASCAL VOC dataset and outperforms all the baselines on the MS COCO dataset. These findings verify the effectiveness of the proposed scheme. Keywords: object recognition; feature pyramid; deep neural network; computer vision.

Received: 2020-09-04. Funding: National Natural Science Foundation of China (81960310); Scientific Research Fund of the Yunnan Provincial Department of Education (K132199357). First author: Zhao Sheng (1977–), male, M.S., associate chief physician. Corresponding author: Zhao Li (1987–), female, M.S., assistant experimentalist, E-mail: lizhaoxw@163.com.

0 Introduction
Computer vision is a multidisciplinary field that studies theories and methods for automatically extracting, analyzing, and understanding valuable information from static images or video streams [1].
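The bidirectional fusion idea summarized in the abstract above — a top-down semantic pathway complemented by a bottom-up pathway — can be sketched roughly as follows. This is only an illustrative PyTorch approximation under assumed tensor shapes, not the authors' exact model.

```python
import torch
import torch.nn.functional as F

def bidirectional_fuse(features):
    """Fuse a multi-scale feature pyramid in both directions.

    features: list of tensors ordered from the finest (largest) to the
    coarsest (smallest) level, each of shape (N, C, H_i, W_i) with a
    shared channel count C.
    """
    n = len(features)

    # Top-down pass: upsample coarse, semantically strong maps and add
    # them into the finer levels (helps small objects).
    td = list(features)
    for i in range(n - 2, -1, -1):
        up = F.interpolate(td[i + 1], size=td[i].shape[-2:], mode="nearest")
        td[i] = td[i] + up

    # Bottom-up pass: pool fine, spatially precise maps and add them
    # back into the coarser levels (helps large objects).
    out = list(td)
    for i in range(1, n):
        down = F.adaptive_max_pool2d(out[i - 1], output_size=out[i].shape[-2:])
        out[i] = out[i] + down
    return out

# Toy example with three pyramid levels from a hypothetical backbone.
pyramid = [torch.randn(1, 16, 64, 64), torch.randn(1, 16, 32, 32), torch.randn(1, 16, 16, 16)]
fused = bidirectional_fuse(pyramid)
print([f.shape for f in fused])
```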
Survey: Representation Learning: A Review and New Perspectives

explanatory factors for the observed input. A good representation is also one that is useful as input to a supervised predictor. Among the various ways of learning representations, this paper focuses on deep learning methods: those that are formed by the composition of multiple non-linear transformations, with the goal of yielding more abstract – and ultimately more useful – representations. Here we survey this rapidly developing area with special emphasis on recent progress. We consider some of the fundamental questions that have been driving research in this area. Specifically, what makes one representation better than another? Given an example, how should we compute its representation, i.e. perform feature extraction? Also, what are appropriate objectives for learning good representations?
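As a toy illustration of "composition of multiple non-linear transformations", the sketch below stacks two nonlinear layers with random weights to map raw inputs to a more abstract feature vector; real representation-learning methods learn these weights rather than fixing them randomly.

```python
import numpy as np

rng = np.random.default_rng(0)

def nonlinear_layer(x, w, b):
    # One non-linear transformation: affine map followed by tanh.
    return np.tanh(x @ w + b)

# Two stacked layers turn a raw 8-dimensional input into a 3-dimensional
# representation; deeper compositions yield increasingly abstract features.
x = rng.normal(size=(5, 8))              # 5 raw input examples
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 3)), np.zeros(3)

representation = nonlinear_layer(nonlinear_layer(x, w1, b1), w2, b2)
print(representation.shape)  # (5, 3) -- features usable by a supervised predictor
```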
Mata Manual

Contents

[M-0] Introduction to the Mata manual — intro: Introduction to the Mata manual.

[M-1] Introduction and advice — intro: Introduction and advice; ado: Using Mata with ado-files; first: Introduction and first session; help: Obtaining help in Stata; how: How Mata works; interactive: Using Mata interactively; LAPACK: The LAPACK linear-algebra routines; limits: Limits and memory utilization; naming: Advice on naming functions and variables; permutation: An aside on permutation matrices and vectors; returnedargs: Function arguments used to return results; source: Viewing the source code; tolerance: Use and specification of tolerances.

[M-2] Language definition — intro: Language definition; break; class: Object-oriented programming (classes); comments; continue; declarations: Declarations and types; do: do...while(exp); errors: Error codes; exp: Expressions; for: for(exp1;exp2;exp3) stmt; ftof: Passing functions to functions; goto; if: if(exp)...else...; operators (arithmetic, assignment, colon, conditional, increment and decrement, row- and column-join, Kronecker direct product, logical, range, conjugate transpose); optargs: Optional arguments; pointers; pragma: Suppressing warning messages; reswords: Reserved words; return: return and return(exp); semicolons: Use of semicolons; struct: Structures; subscripts: Use of subscripts; syntax: Mata language grammar and syntax; version: Version control; void: Void matrices; while: while(exp) stmt.

[M-3] Commands for controlling Mata — intro; end: Exit Mata and return to Stata; mata: Mata invocation command; mata clear; mata describe; mata drop; mata help; mata matsave: Save and restore matrices; mata memory; mata mlib: Create function library; mata mosave: Save function's compiled code in object file; mata rename; mata set: Set and display Mata system parameters; mata stata: Execute Stata command; mata which: Identify function; namelists: Specifying matrix and function names.

[M-4] Index and guide to functions — intro, followed by topic guides: I/O functions; matrix manipulation; important mathematical functions; matrix functions; programming functions; scalar mathematical functions; solvers (functions to solve AX=B and to obtain the inverse of A); functions to create standard matrices; Stata interface functions; statistical functions; string manipulation functions; matrix utility functions.

[M-5] Mata functions — an alphabetical reference of Mata's built-in functions, running from abbrev() (abbreviate strings), abs(), and adosubdir() through string functions (strlen(), strpos(), subinstr(), tokens(), ...), matrix and linear-algebra routines (cholesky(), lud(), qrd(), svd(), eigensystem(), and solvers such as cholsolve(), lusolve(), qrsolve(), svsolve()), numerical tools (deriv(), optimize(), moptimize(), spline3(), fft()), Stata interface functions (st_data(), st_view(), st_matrix(), st_global(), st_local(), ...), and utility functions, up to xl() (Excel file I/O class).

[M-6] Mata glossary of common terms — Glossary; Subject and author index.
Data Mining English Terminology

With the continuous development of information technology and the Internet, data has become an indispensable part of decision-making and analysis for enterprises and individuals. Data mining, as a method of using big data technology to uncover the latent value in data, has therefore become increasingly important. In this article, we introduce English terms and concepts related to data mining.
I. Concepts
1. Data Mining: the process of extracting useful information from large-scale data. Data mining typically includes three stages: data preprocessing, data mining, and result evaluation.
2. Machine Learning: a method that improves and optimizes algorithms by learning from and analyzing data. Machine learning can be regarded as a data mining technique and can be used to predict future trends and behaviors.
3. Cluster Analysis: a method for discovering the inherent structure of data by grouping it into sets of similar items. It can be used for market segmentation, customer grouping, product categorization, and so on.
4. Classification Analysis: a method for discovering relationships in data by assigning items to different classes. It can be used to identify fraud, predict customer behavior, and so on.
5. Association Rule Mining: a method for discovering relationships between variables in a data set. It can be used for market-basket analysis, cross-selling, and so on (a small worked example follows this list).
6. Anomaly Detection: a method for finding anomalies by identifying data points that do not fit the normal pattern. It can be used to identify fraud, detect equipment failures, and so on.
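Concept 5 above can be made concrete with a small worked example. The sketch below uses the mlxtend library's apriori implementation on a toy basket table; the transactions and thresholds are invented for illustration.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Toy one-hot encoded basket data: each row is a transaction,
# each column indicates whether an item was purchased.
baskets = pd.DataFrame(
    {
        "bread":  [1, 1, 0, 1, 1],
        "butter": [1, 1, 0, 0, 1],
        "milk":   [0, 1, 1, 1, 1],
    },
    dtype=bool,
)

# Find itemsets that appear in at least 40% of transactions.
frequent = apriori(baskets, min_support=0.4, use_colnames=True)

# Derive rules such as {bread} -> {butter} with confidence >= 0.7.
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```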
II. Terms
1. Dataset: a collection of data, usually used for data mining and analysis.
2. Feature: an attribute or variable used to describe the data in data mining and machine learning.
3. Sample: a subset of data selected from a dataset, usually used for machine learning and prediction.
4. Training Set: the collection of samples used to train a machine learning model.
5. Test Set: the collection of samples used to evaluate a machine learning model (see the example after this list).
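Terms 3–5 fit together as shown in the following sketch, which splits a sample dataset into a training set and a test set and fits a simple classifier with scikit-learn; the dataset and model choice are illustrative only.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Load a small sample dataset and split it into a training set and a test set.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Fit a classification model on the training set (term 4)...
model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

# ...and evaluate it on the held-out test set (term 5).
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```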
Traffic Sign Recognition Based on an Improved Cascade R-CNN

Transducer and Microsystem Technologies, Vol. 45, No. 5, 2021. Traffic Sign Recognition Based on an Improved Cascade R-CNN. Supported by the National Natural Science Foundation of China. Xu Guozheng 1, Zhou Yue 2, Dong Bin 3, Liao Chencong 1 (1. School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai 200240; 2. School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240; 3. School of Civil Engineering, Southeast University, Nanjing 211189, Jiangsu). Abstract: In harsh conditions such as rain, snow, and fog, traffic signs are easily occluded and the targets are small, which makes them hard to recognize and localize with high precision. To address this, a coarse-then-fine detection strategy is proposed, using an improved Cascade R-CNN with optimized anchor design, online hard example mining, and multi-scale training; image dehazing and brightening algorithms are used for data augmentation, and finally models with two different backbone networks are fused.
The results show that, on the dataset provided by the autonomous-driving traffic sign recognition competition based on a virtual simulation environment, the proposed algorithm exhibits excellent generalization ability and accuracy, reaching an F1 score of 0.9972; it effectively overcomes interference factors such as varying weather and pedestrian conditions in the virtual scenes and achieves precise recognition of roadside traffic signs.
Keywords: intelligent transportation; traffic sign recognition; Cascade R-CNN. CLC number: U495; TP391. Document code: A.

Traffic sign recognition based on improved Cascade R-CNN. XU Guozheng 1, ZHOU Yue 2, DONG Bin 3, LIAO Chencong 1 (1. School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai 200240, China; 2. School of Electronic, Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China; 3. School of Civil Engineering, Southeast University, Nanjing 211189, China). Abstract: Aiming at the problem that in harsh environments such as rain, snow, and fog, traffic signs are easily occluded and the targets are small, making them difficult to identify and position accurately, a coarse-to-fine (C to F) strategy is proposed and an improved cascade region with convolutional neural network features (R-CNN) network is adopted, which includes improved anchor design, online hard example mining, and multi-scale training. At the same time, image enhancement is performed by image brightness enhancement and defogging algorithms. Finally, two models with different backbone networks are fused. The results show that the proposed algorithm achieves excellent generalization ability and accuracy on the dataset provided by the autonomous-driving traffic sign recognition competition based on a virtual simulation environment and scores 0.9972 on the F1 index. It can effectively overcome interference factors such as different weather conditions and pedestrian conditions in virtual scenes and accurately identify the traffic signs around the road. Keywords: intelligent transportation; traffic sign recognition; cascade region with CNN features (R-CNN).

0 Introduction
Traffic sign detection refers to collecting traffic signs with computer vision technology while a vehicle is driving and detecting and recognizing them automatically.
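The augmentation step mentioned above (image dehazing and brightening) is not specified in detail here; as a stand-in, the sketch below applies simple gamma correction to brighten dark images, which is one common way to implement such enhancement.

```python
import numpy as np

def brighten(image, gamma=0.6):
    """Brighten an 8-bit image with gamma correction (gamma < 1 brightens).

    image: uint8 array of shape (H, W) or (H, W, 3).
    """
    table = ((np.arange(256) / 255.0) ** gamma * 255.0).astype(np.uint8)
    return table[image]

# Toy example: a dark random "traffic scene" patch becomes brighter.
dark = np.random.randint(0, 80, size=(32, 32, 3), dtype=np.uint8)
bright = brighten(dark, gamma=0.5)
print(dark.mean(), "->", bright.mean())
```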
Data Management Reference Manual

Contents

Intro — Introduction to the data management reference manual; Data management — Introduction to data management commands.

Entries (alphabetical): append (append datasets); assert (verify truth of claim); assertnested; bcal (business calendar file manipulation); by (repeat Stata command on subsets of the data); cd (change directory); cf (compare two datasets); changeeol; checksum; clear; clonevar; codebook (describe data contents); collapse (make dataset of summary statistics); compare (compare two variables); compress; contract; copy; corr2data; count; cross; Data types (quick reference); datasignature; Datetime (values and variables, business calendars and their creation, conversion, display formats, durations, relative dates, values from other software); describe; destring; dir; drawnorm; drop; ds (compactly list variables with specified properties); duplicates; dyngen; edit (Data Editor); egen (extensions to generate); encode; erase; expand; expandcl; export; filefilter; fillin; format; fralias; frames intro; frames; frame change; frame copy; frame create; frame drop; frame prefix; frame put; frame pwf; frame rename; frames describe; frames dir; frames reset; frames save; frames use; frget; frlink; frunalias; generate; gsort; hexdump; icd (and icd9, icd9p, icd10, icd10cm, icd10pcs); import (overview, plus import dbase, delimited, excel, fred, haver, sas, sasxport5, sasxport8, spss); infile (fixed and free format); infix (fixed format); input; insobs; inspect; ipolate; isid; jdbc; joinby; label (and label language, labelbook); list; lookfor; memory; merge; Missing values (quick reference); mkdir; mvencode; notes; obs; odbc; order; outfile; pctile; putmata; range; recast; recode; rename (and rename group); reshape; rmdir; sample; save; separate; shell; snapshot; sort; split; splitsample; stack; statsby; sysuse; type; unicode (utilities, collator, convertfile, encoding, locale, translate); use; varmanage; vl (and vl create, vl drop, vl list, vl rebuild, vl set); webuse; xpose; zipfile; Glossary; Subject and author index.
Research on Visual Inspection Algorithms for Defects in Textured Objects — Outstanding Graduate Thesis

Abstract

In highly competitive, automated industrial production, machine vision plays a crucial role in controlling product quality, and its application to defect inspection has become increasingly common. Compared with conventional inspection techniques, automated visual inspection systems are more economical, faster, more efficient, and safer. Textured objects are ubiquitous in industrial production: substrates and light-emitting diodes used in semiconductor assembly and packaging, printed circuit boards in modern electronic systems, and cloth and fabrics in the textile industry can all be regarded as objects with textured surfaces. This thesis focuses on defect-inspection techniques for textured objects, aiming to provide efficient and reliable detection algorithms for their automated inspection. Texture is an important feature for describing image content, and texture analysis has been applied successfully to texture segmentation and classification. This work proposes a defect-detection algorithm based on texture analysis and reference comparison. The algorithm tolerates image-registration errors caused by object distortion and is robust to the influence of texture. It is designed to provide rich and physically meaningful descriptions of the detected defect regions, such as their size, shape, brightness contrast, and spatial distribution. When a reference image is available, the algorithm can inspect both homogeneously and non-homogeneously textured objects, and it also achieves good results on non-textured objects. Throughout the detection process we use steerable-pyramid texture analysis and reconstruction. Unlike traditional wavelet-based texture analysis, we add a tolerance-control step in the wavelet domain to handle object distortion and the influence of texture, making the method tolerant to distortion and robust to texture. Finally, steerable-pyramid reconstruction ensures that the physical attributes of the defect regions are recovered accurately. In the experiments we inspected a series of images of practical value; the results show that the proposed defect-detection algorithm for textured objects is efficient and easy to implement.
Keywords: defect detection, texture, object distortion, steerable pyramid, reconstruction
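The thesis's steerable-pyramid method is not reproduced here; the sketch below shows only the basic reference-comparison idea it builds on: subtract a reference image from the test image, threshold the difference, and report simple physical statistics of the defect region. The images and threshold are placeholders.

```python
import numpy as np

def detect_defects(test, reference, threshold=30):
    """Flag pixels whose deviation from the reference exceeds a threshold.

    test, reference: uint8 grayscale images of identical shape.
    Returns a boolean defect mask plus simple statistics (area, mean contrast).
    """
    diff = np.abs(test.astype(np.int16) - reference.astype(np.int16))
    mask = diff > threshold
    stats = {
        "area_px": int(mask.sum()),
        "mean_contrast": float(diff[mask].mean()) if mask.any() else 0.0,
    }
    return mask, stats

# Toy textured reference with a small injected bright defect.
reference = np.random.randint(100, 140, size=(64, 64), dtype=np.uint8)
test = reference.copy()
test[20:24, 30:35] += 60
mask, stats = detect_defects(test, reference)
print(stats)
```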
domexception blocked a frame — a response

Topic: "DOMException blocked a frame" — analysis and resolution

Preface: In web development we constantly run into errors and exceptions of all kinds. One common error is "DOMException: blocked a frame". This article takes that error as its topic and walks step by step through its cause and its solution.

Part 1: Understanding DOMException
1.1 Definition and explanation. DOMException is a JavaScript object that represents an exceptional condition occurring while a DOM operation is being performed. The DOM (Document Object Model) represents the structure of a web page and allows programs to update and modify page content dynamically through JavaScript. When JavaScript attempts a DOM operation and an error occurs, a DOMException is thrown.
1.2 Common DOMExceptions. Many different exceptions can be triggered while JavaScript performs DOM operations; one of them is "DOMException blocked a frame". Other common DOMExceptions include: HierarchyRequestError, thrown when a DOM operation would produce an invalid document structure; NotFoundError, thrown when a DOM lookup cannot find the specified node; NotSupportedError, thrown when the requested DOM operation is not supported.

Part 2: Why "DOMException blocked a frame" happens
2.1 The same-origin policy. "DOMException blocked a frame" is usually caused by the same-origin policy, a browser security mechanism that prevents scripts from different origins from accessing each other's content. An origin is identified by the protocol, host name, and port of a URL.
2.2 Cross-origin access. When a script in one frame tries to reach into the content of another frame from a different origin through DOM operations, the same-origin policy is triggered and the DOMException is raised. For example, if a page embeds a frame loaded from a different origin, scripts in the parent page cannot directly read or modify that frame's DOM.

Part 3: Resolving "DOMException blocked a frame"
3.1 A proxy page. One solution is to use a proxy page, so that the content is served from the same origin as the script that needs to access it.
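A same-origin proxy can be implemented in many ways; the following is only a minimal server-side sketch in Python, with a placeholder target URL and port. The browser requests /proxy from its own origin, and the server fetches the cross-origin document on its behalf, so the page itself never performs a cross-origin DOM access.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

REMOTE = "https://example.org/embedded-page"  # placeholder cross-origin URL

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/proxy":
            self.send_error(404)
            return
        # Fetch the cross-origin content server-side and re-serve it from
        # this origin, so the browser never violates the same-origin policy.
        with urlopen(REMOTE) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), ProxyHandler).serve_forever()
```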
Mining Uncertain Data — lecture slides by Jian Pei

D. Burdick, P. Deshpande, T. S. Jayram, R. Ramakrishnan, and S. Vaithyanathan: OLAP Over Uncertain and Imprecise Data. VLDB 2005
Uncertain Data and Models
- Long-term drift: a slow degradation of sensor properties over a long period
- Noise: random deviation of the signal, varying in time
- A sensor may to some extent be sensitive to properties (e.g., temperature) other than the one being measured
- Dynamic error due to the sampling frequency of digital sensors
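These error sources can be mimicked with a toy generative model; the sketch below combines a slow drift term with per-sample Gaussian noise (all magnitudes invented) to produce the kind of uncertain readings the rest of the tutorial reasons about.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_sensor(true_value, n_samples, drift_per_step=0.01, noise_std=0.5):
    """Simulate uncertain sensor readings: truth + long-term drift + noise."""
    steps = np.arange(n_samples)
    drift = drift_per_step * steps                  # slow degradation over time
    noise = rng.normal(0.0, noise_std, n_samples)   # random deviation in time
    return true_value + drift + noise

readings = noisy_sensor(true_value=20.0, n_samples=10)
print(readings.round(2))
```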
A Cross-Domain Authentication Protocol Based on Identity-Based Cryptography and Blockchain

第44卷第5期2021年5月Vol.44Ao.5May2021计算机学报CHINESE JOURNAL OF COMPUTERS基于身份密码系统和区块链的跨域认证协议魏松杰年,李莎莎年王佳贺年年(南京理工大学计算机科学与工程学院南京210094)2南京理工大学网络空间安全学院南京210094)摘要随着信息网络技术的快速发展和网络规模的持续扩张,网络环境中提供的海量数据和多样服务的丰富性和持久性都得到了前所未有的提升•处于不同网络管理域中的用户与信息服务实体之间频繁交互,在身份认证、权限管理、信任迁移等方面面临一系列安全问题和挑战•本文针对异构网络环境中用户访问不同信任域网络服务时的跨域身份认证问题,基于NC身份密码系统,结合区块链技术的分布式对等网络架构,提出了一种联盟链上基于身份密码体制的跨信任域身份认证方案本先网十对基于NC架构下固有的实体身份即时撤销困难问题,通过加入安全仲裁节点来实现用户身份管理,改进了一种基于安全仲裁的身份签名方案mIBE,在保证功能有效性和安全性的基础上,mNS性能较IN-BME方案节省1次哈希运算、2次点乘运算和3次点加运算.其次,本文设计了区块链证书用于跨域认证,利用联盟链分布式账本存储和验证区块链证书,实现域间信任实体的身份核验和跨域认证本提出的跨域认证协议通过安全性分析证明了其会话密钥安全,并且协议的通信过程有效地减轻了用户端的计算负担本过真实机器上的算法性能测试,与现有同类方案在统一测试标准下比较,本文方案在运行效率上也体现出了明显的优势处关键词区块链;身份密码擞字签名;安全仲裁;可信共识中图法分类号TP304DOI号年2年97/SP.J.年16.2021.00908A Cross-Domain Authentication Protocol by Identity-Based Cryptography onConsortium BlockchaitWRI SongOe年实LI ShrShy WANG年南chool of Computer Science and Engineering,Nanjing UniuersiLy of Science and Technology,Nanjing210094) 2(School of Cyberspace SecuriLo,Nanjing UniversiLo of Science anE Technologo,Nanjiny210094)Abstrach With the exciting growth ol global Internet services and applications in the past decades,tremendoue amounS of varioue dats and service resourcee are prevailing on networO and attracting usere frow different administration domaine all oven the world.The Internes cyberspaceis nevee short of security threate and resource abusere.Reliable and efficient netwoW entity authenticatione and identifica/tion veiificatione an the cornee stonee foe all typee of secure networe applicatiod environmente and usage scenarioe.Especiallp how to verify an entity's identity outside ite origin,and how to extend such authentication capability acrose different administration domainein network without obvious securitp wean point os performance bottlenece,it is a realistic challengefoe traditional cryptography basen authentication schemee.Eithee the encryption kep basen oe thePKI certificate Used approaches suffee the threate on credential managemente and the inefficiencb revocation.Towards the problec of cross-domain authentication when users in heterogeneous netwom environmente accese netwom servicee from different trust homains,this papee proposee收稿日期=2019-11-30;在线发布日期:2021-01-22.本课题得到国家自然科学基金南1年2年6本1年2年9)、赛尔网络下一代互联网创新项目南GI1年年3上海航天科技创新基金(SAST2019-033)资助.魏松杰实士实教授实国计算机学会南CF)会员实要研究方向为网络安全、区块链技术、网络协议分析.E-mail:swei@njush edu.co.李莎莎,硕士研究生,主要研究方向为区块链技术、分布式系统.王佳贺(通信作者),硕士,主要研究方向为区块链技术、协议设计、身份认证.E-mail:jhwang@.本5期魏松杰等:基于身份密码系统和区块链的跨域认证协议909a new design of blockchain certificate to implement cross-domain authentication based on theidentity-based cryptosystem and the distributed architecturs oi blockchain technology.A novel cross-trust-domaie authenticatioe scheme baset on IBC system b constructeO and evaluated.Firstly,to solvo the problem oi instantaneout entity identity revocatioe based on the ICC architecture,n security-mediatos based identity signature scheme,mIBS,is proposed with optimized identity managemebC scheme.A securitp mediatoc servee t c trust.^五!!to approw or decline anp nthmtidon attempt.Cp retaininy part oi each entity?s identity authenticatiob key in the domaib,the security mmia_tor ca_n quickly collaboratc with other nodee to eithm verify the entity?s identity or fail its requesh for authenticatiob?i.e.The proposed mIBS algorithm for IBC-based authenticatiob,ensuree entitp nthentica/tion functionalitp u O securitp,with the computatioo overheaU reduced greatlp compared with the IC-BMS scheme.The cross-domaio authenticatioo is supported and implemented on c consortium blockchaio system.Wc 
optimizc the PKI certificatc structurr nd desigo c blockchaio certificatc to record domaio credential on blockchaio.Clockchaio certificatc authoritiec,just liks CAc in X.509,arc organized and coordinated together to ruo the(:01150讥;11111ledgee a_s the domaio credentiaO storage,veiificatioo and exchanys pared with ths centralized CA organizatioo,ths distributed ledgee on blockchaio nodee hae better replicatioo of certificate data,higher scalabilitp,cryptography-guaranteed informatioo integritp,and decentralized consensuc calculation capabilitp.The proposed mIBS algorithm and the blockchain-based authentication protocol arc thoroughly evaluated foe securitp and eCciencp.TheorCca!analysie and deduction show the new scheme holds the same securhp strength pc the original IBA system,buh saves some on the operation execution overhead.The state-of-the-art distributed usee authentication schemee in literature arc used u benchmarke to evaluate the proposed blockchain-based distribution authentication.The new scheme is robusU enough to survive any typical networU attacke and interruptions,and with sigXicotly improved computation overheaU efficiency when beiny measured alive on experimentai machinee.Keywords blockchain;identity-based cryptography;digital signature;security mediator;trush consensue1引言以Interneh为代表的信息网络技术,极大地拓展了数字服务用户行为的持续时间和延展范围,让原本属于不同服务区域、用户体系、业务流程的信息,能够依托网络基础设施而自由流动、广泛传播. Interneh作为海量异构网络的融合连接体,造就了网络资源的全球覆盖与服务应用域的全面联通,在带来数字服务繁荣普及的同时,也使得用户在不同应用服务域间的信息交互愈发频繁.用户在跨域访问网络资源时,由于身份认证和权限验证过程所带来的额外开销不可避免,因而设计面向全域网络环境的身份认证机制,实现身份的有效验证、一致认证、统一管理显得尤为重要.针对Internet等大规模网络应用场景下,特定数字空间范围内不同信息服务实体(Information Service Entity,ISE)间荣杂的交互过程,实现用户在不同ISE间的跨域认证过程具有广泛的应用意义和工程价值.跨域认证,即用户在多个可信区域之间完成一致的身份验证过程,既要保证全域信任关系建立的可信性、高效认证的可用性、认证过程的可靠性等,又要实现多信任域内的认证系统对有效用户的及时统一认证和即时管理巴在分布式系统的实施场景下,出现了三种主流的跨域认证框架和实践方案:⑴应用对称密钥技术设计认证架构;⑵采用公钥基础设施(Publie Key Infrastructure,PKI)实施分布式认证;(3)基于身份密码学(Identity-based Cryptography,IBC)设计认证架构.这三种架构方案所采用的密码学技术具有特质差异,适用于不同场景,也造成各自不同的优劣效果.基于传统对称密钥技术的方案运行速度快、认证效率高,但面临密钥912计算机学报010年5泄露的安全风险.鉴于网络空间内的恶意攻击和安全威胁愈加复杂多样和广泛持久⑵,这类方案的应用场景具有局限性.采用PKI体系的认证架构有效避免了对称密钥难以管理的困境,尤其适合于分布式应用场景,具有优良的系统扩展性和实践灵活性3但PKI认证过程在数字证书的管理、分发过程中存在计算复杂、开销冗杂现象,性能不佳.基于体C的认证方案直接以实体本身的有效标识作为公钥,使得认证过程不再囿于证书机制,简化了实体身份对应密钥的管理过程.但体C认证系统的实体私钥依赖密钥生成中心(Key Generation Center, KGC)集中计算产生,依然需要密钥托管,因此体C 方案适用于小规模信任域网络中.现有体C方案中实体身份撤销是通过KCC定期停止提供私钥来实现,撤销过程缺乏即时有效能力.总之,目前的实体跨域认证方案或技术均未能兼顾有效性、安全性、高效性,无法支撑用户与认E间跨域认证的完整需求4为解决现有身份认证方案在大规模跨域场景应用过程中存在的问题,本文在体C认证系统的基础上上行改进,结合区块链分布式存储与共识的特性,设计了区块链证书结构以支撑跨域认证过程具新性地构造了基于身份密码体制的跨信任域认证方案.具体工作内容和研究成果包括:(南针对原有体C架构下实体身份难以及时撤销的问题,改进设计了基于安全仲裁的身份签名方案mIBh;(0)结合身份密码体制与区块链技术设计并实现了分布式跨域认证方案,体C模式用于域内认证,借鉴联盟链分布式共识方法实现了域间认证,采用区块链证书支持跨域身份认证的完整过程;身)设计了一种多域信任的身份认证协议,既保证了密钥协商过程的安全性,又有效降低网络通信与节点计算开销,提高了认证效率,满足用户与ISS间在大规模分布式应用场景下的跨域认证需求.2相关工作异军突起的区块链技术,提供了一种具有多中心、防篡改、可追溯、易扩展特点的分布式数据记录实现方法.不断扩增的数据单元按照顺序组织起来,其间通过哈希摘要相关联,以数据发布者、记录者和确认者的电子签名为保障期通过数据加密、离散共识、时序关联等手段,区块链实现了去中心化的点对点可信事务交互,提供了融合数据可用、内容可验、操作可溯能力的分布式安全应用服务,是信用进化史上继生物血亲、贵重金属、国家货币信用后的第四座里程碑2基于身份的密码技术体c脱胎并借鉴于PKI 技术,同样采用公钥密码认证体系,但以用户身份信息来绑定生成公钥,避免了PKI体系中依赖证书的公钥认证管理过程.以此为基础,众多学者针对分布式应用场景下如何处理不同信任域间的认证传递问题,即跨域身份认证技术,展开了研究并取得了-系列成果.例如一种方案⑴提出利用PKI体制构建新型虚拟网桥CA(Certification Authority)信任模型,用以实现虚拟企业间有效的跨域认证过程.这类方案采用分布式可验证秘密共享协议和基于椭圆曲线密码系统的签名算法,实施简单且应用场景广泛.基于PKI体系搭建分布式跨域信任平台,实现域间可信的模控制与管理,在而支持多模环境下的信任传递9此外,一些方案利用逐步成熟的体C体制构建无证书的跨域认证系统和认证协议,实现多域WMA环境下安全高效的实体认证和通信功能—国内已有科研人员设计了基于身份的签名算法,尝试利用椭圆曲线加法群上运算实现身份匿名条件下的跨域认证过程年•Wang等人给出了一种认证密钥协商协议⑴,,结合异构签密实现了认C和PKI系统之间的认证转换,具有更高的安全性和更好的可用性.区块链技术在比特币等数字货币应用领域取得广泛成功,其相关设计理念和系统架构也为众多研究者们在探索跨域身份认证问题提供了新的思路. 
Wang等人年采用区块链技术提出了跨域认证模型BlockCAN及其跨域认证协议,将根证书颁发机构作为验证节点组织在联盟链上,解决了用户在访问多域资源时面临的安全和效率问题,展现了优于基于PKI体系的跨域身份验证能力.国内研究者结合区块链技术、分C域和PKI体系设计跨异构域认证方案,采用国密SA4和区块链代理协同生成密钥,同样通过构造联盟链提供跨域认证过程的可靠性,实现可SOV逻辑证明的协议安全性与实用性年.区块链技术极大地拓宽了解决跨域认证问题的探索空间,技巧性地融合了自证身份、互证信任、共证真实等一系列信息安全功能.本文基于联盟区块链技术尝试解决相互独立的认C系统间的跨域认证和信任传递问题,实现用户身份多域一致性的安全保证.5期魏松杰等:基于身份密码系统和区块链的跨域认证协议9113基于身份密码体制的签名算法首先对跨域认证模型中基于IBC 的域内认证方案进行改进,设计了基于仲裁的身份签名和认证 算法,简要称为mIBS.本节给出基于仲裁的IBC 域 结构设计,描述了 mIBS 方案的实现原理和方法,并对其进行安全性分析.3. 1基于仲裁的IBC 域结构设计高可靠性的用户身份认证和权限管理系统,必不可少地需要支持用户身份的实时可信验证、有效 控制与及时撤销.在ICC 系统中,公钥是基于用户身份信息关联生成的,理论上可以通过撤销用户身 份来使得对应的公钥失效1.但用户身份信息作为公开数据被广泛发布用于验证,也正是IBC 系统的 特色所在,实际应用中难以直接核销用户身份.因此,类似PKI 体系中的证书撤销机制,对于IBC 系 统无法有效适用1.既然身份验证是为了对用户行 为和权限进行限定,也可以设置权限仲裁,通过控制 某一身份用户在体C 系统中服务权限验证结果,来实现密钥管理和身份撤销的效果1.即基于安全仲 裁(Security Mediator,SEM )的 IBC 域内方案.这里KGC 和SEM 在系统内分立,KGC 密钥生成中心为系统内用户生成私钥,SEM 在系统运行过程中给用 户使用密码服务提供信令,如图1所示.相比于基于“ID ||有效期”的公钥撤销方式,KGC 和SEM 的分立能够提供更高细粒度的安全控制,提高系统的访问控制灵活性.这里的体C 认证过程仅在域内采②返回签名信令裁I 仲▼全安①申请签名信令用户终端1的另一部分私钥用户终端1的一部分私钥用户终端1图1基于仲裁的体C 域结构用,节点数量和网络规模可控,而SEM 信令仅运行一个数据量不大的简单运算,因此整体运算负载和 网络开销不大,性能可控.3.2基于仲裁的身份签名方案借鉴身份签名体S 算法,本文方案同样根据双 线性映射构造mBS 数字签名,并保证以下两点:(l)mlCE 算法在SEM 签发签名信令前能够验证签名请求消息的来源合法性,即判断是否来自合法用户;(2)用户发送给SEM 作为验证签名的依据不为 明文,需要隐藏好待签的明文信息1.整个方案包括参数生成(Setup )、密钥生私(KeyGen)、签名(Sign)和验证(Verify)四个算法,具体描述如下.Setup 阶段:设置安全系数儿初始化得到阶为大素数P (K >)的循环群(V )K),(V,X),这里G 的生成元为 P.选取双线性映射e :基)犌—犌,满足双线性对的可计算性、非退化性和双线性要求.选取哈希函数H 1: {0,也 Gi* 和 H ,: {0,也 * X G ) fZ 犖,具和 G )代表G /0}和G )⑴.KGC 随机选取s 狊[1> —)作为系统主密钥,并计算群G 的元素PpTs . 作为系统主公钥也]为取整运算,系统主密钥对为VK )g).KGC 保存系统私钥s,并公开系统参数为2 S 具具的本系具具,具);消息空间M =(0即)",签名空间 Stgn = GrXG 犖.KeyGen 阶段:对于一个用户标识为犐犇⑴系和GC 为其计算公钥犘犐和私钥犱D :P d = H 的犐和犌的⑴犱D = G ]为犇(2)然后KGC 对该用户私钥进行切分,先随机选择S d GG 本T 为并根据下式计算:d-Do 为P d (3)dDDg d T D —d : = E 15—s id DJ.id(4)犱由KGC 发送给用户,本M"发给SEM.Sign 阶段:给定消息底犕计算消息犿的正确签名如下:⑴用户签名消息观之前① 随机选择任意点P l GG 和任意整数咗 G 本-叮,计算群G 冲元素g ,q = eKP, ,K)(5)② 计算整数ng= H⑴系、V 、③ 计算签名S p ,S p =+ 莒犱0(7)912计算机学报2022年④向SEM发送请求Reques狋⑴gtruU.(2)SEM收到用户签名后①首先检索该用户的身份查看是否属于被撤销的情况,若已撤销则停止服务.②接着计算签名信令SsEM,拼接得到完整的消息犿的签名犛犿,SsEM=(8)S m S use e I SsEM(9)③SEM根据式⑴计算P m并验证元素g的正确性:g'=e(S”,P)•e(.P ID,—间(19)若=则证明消息观的签名申请是合法的,SsEM可由SEM发送给用户.根据哈希函数的特性,如果需要签名的消息那么签名信令S semi H S sem)即信令难以被重用.⑴用户签名为了验证目标信令SsEM的有效性,用户在收到S sem后,计算S”和g'则=g时输出签名〈S”〉根Verify阶段:对签名S,的验证过程是根据g'计算g',g'=犎⑵'(19) g'=g时证明该签名S,正确.3.3安全性证明为了检验更高细粒度下跨域身份认证过程的可靠性和有效性,下面对mIBS签名算法从计算和算法设计两方面作简要安全证明.⑴计算安全性证明群G中生成系统和用户的密钥,群G)产生用户和SEM签名,攻击者利用田1(D)和s,H年IDS 推导S和S d的难度等同于求解椭圆曲线上离散对数难题.同理,由签名信令S sem通过式⑴年求解(s+sd)的难度也相当,因此离散对数难题与哈希函数的安全性假设保障了方案的计算安全性.S sem=gd D M=g(s+s D)求⑵)(12)⑴算法安全性证明在PBS方案中,判断签名申请的合法性和判断S sem签名结果的有效性都需要验证因此只需要证明"立便能保证可信域内的身份双向安全认证,即式⑴).用户在向SEM申请消息犿签名〈S”〉时发送Request=⑴ggeU,不包含原始消息,这也保证了待签名消息,的隐私性.U计犲犿不计(d)——om)g=e(kP)+g<5?D r+'i MM t)•ed,,——om和=e(.kP)+gd ID—))ed,,——om间=e(gsPm判求e(kPi这)•ed,,——间=e(⑵),$因间•e⑵Pi则算e(⑵)则oo K g—e(kP9,P')—q(13) 4基于区块链的跨域身份认证模型mIBS算法可以在域内实现基于IBC的用户身份认证,本节利用区块链技术实现用户和信息服务实体间跨域的交互认证过程,设计了区块链系统模型,并描述了区块链证书的结构和认证工作原理. 
4.1跨域认证协议设计本文设计的基于IBC和区块链架构的跨域认证模型,遵循如下设计目标:(1)基于区块链的分布式系统架构将多个IBC信任域在链上组织起来作为跨域信任机制的共同参与者.⑴通过区块链交易共识的方式建立域间的信任验证和身份管理.每个IBC域的代理服务器作为区块链节点参与交易传播和共识,同时依照区块链记录交易的方式对信任授权进行管理.⑴在区块链上存储目标域证书,用于快速组装和验证跨域身份认证交易2.本文采用联盟区块链架构来设计跨域身份认证模型,采用基于身份的方式来认证分属于不同IBC 信任域的用户实体和信息服务实体.如图2中所示,域内信息服务实体的私钥通过KGC密钥拆分两部分后,分发给仲裁机构SEM和实体本身.作为区块链的区域代理节点,在每个IBC域设有区块链证书服务器(Blockchain Certificate Authority,PCCA).信息服务实体ISE与用户间的认证过程如下:(1)当用户请求同域内某信息服务时,首先向ISEM出认证请求,ISE随即向SEM发起请求,收到SEM签名信令后完成-系列签名操作,签名结果发到域内身份验证服务器(Identity Authentication Server,IAS)基行认证中.如需撤销一个ISE的身份不证要求SEM停止为其发送签名信令.最后,用户可以根据IAS发回的认证响应决定ISE是否通过认证.同域内的认证过程可视为跨域认证的特殊情况,具体过程描述从略.⑴当请求用户和实体服务资源分属不同信任域时,通过区块链来进行跨域信任传递,完成用户与ISE间的认证过程.在图2中,假设用户U】与IS方进行交互,认证过程如下.用C】域和IBQ域前期通过BCC域和BCC域基于区块链证书完成域间认5期魏松杰等:基于身份密码系统和区块链的跨域认证协议913IBC 域] BCCA]图0跨域认证系统模型(步骤1:跨域认证请求Req. 4步骤2、3记认C 域间公共参数交换,步骤4发送会话密钥ST,步骤5期、7:信息服务实体体已权限认证,步骤8、开会话密钥K'认证)证,同时交换两个域认证系统的公开参数和公钥生 成算法WCCH 会为用户生成会话密钥,发送给身份认证服务器体方魏性收到认证请求后,向本域内的SER 申请签名信令,方法参照0. 2节中的相 关描述.通过SER 认证后,系E?会将完整签名结果发送给用户所在访问域的IAN ,待其验证签名信息后将认证结果返回用户.这时用户即可根据认证结 果来访问体-中的相应服务.4.2区块链证书设计为了解决0. 3节体G 域和体Q 域代理BCO和BCCH 的可信认证问题,本文利用区块链的不 可篡改性来完善数字证书,将设计的区块链证书 作为信任凭证支撑身份跨域认证过程.具体地,区块链证书依照PKI 体系中X.504数字证书标准进行改进,由各域中参与联盟链的BCCH 生成并记录 在链上.针对于身份跨域认证需求,图3比较了本文构造的区块链证书和原始X.504数字证书,具体改|签发者|磁者珂X509证书履甬看丽丽|使用者公钥|版本号1有效期1序列号1签发者1起始II 结束娥者id 服务的URL签名算漓拓展项|CA 签名||拓展项|i 使用者区块链证书|囲者版本号aw 序列号起始H 结束|跨域凭证1|图3区块链证书进内容如下:通)省略了 X.504数字证书中的签名算法内 容.签名算法用于验证证书的真实一致性.区块链本身已经采用相应密码学方法实现了链上数据的原始 真实性、完整一致性保障.各个域中的BCCH 区块 链代理,只需要生成区块链证书后将其哈希值记入区块链账本中,证书可以在链上查验.传统X.504数字证书的签名和验证过程被链上证书的存储和查验 操作所替代,这有助于提高用户认证的效率.通)取消了 X.5期证书中用于撤销检查服务的统一资源定位符URL 模块.区块链证书直接存储 在数据时序关联的联盟链上,随时可通过检索链上数据来查询证书状态,或者通过发送交易来记录新 数据,不再需要提供在线证书状态协议0CSP 和证书吊销列表CRL 管理服务.可以通过向链上发送交易签发(Issue )和交易撤销(Revoke )两种类型的数 据操作来管理证书的实时状态.这避免了传统X.504 使用OCSP 和CLL 带来的通信和查询开销.5基于区块链的跨域认证协议针对基于区块链的跨域身份认证模型,本节描述了用户跨域认证协议,详细给出了身份跨域认证中双方会话密钥的具体协商步骤,同时采用理 论分析和实验测试方法,评估协议的安全性和有 效性.914计算制学报2029年5.1跨域认证协议设计作为协议运行的初始状态,假定所有IBC域的KGC、SEM和IAS等服务节点都是诚实可靠的,域内实体间认证已经完成.每个KCC公开一致性参数(1假具],假具Pot认认),但各KCC系统显然具有不同的主公钥P a和主密钥和根联盟区块链上可以提供各域BCCA的证书状态查询.如图2中所示,以C域中用户U访问IBC)域中信息服务资源ISE?为例,描述协议的跨域认证工作过程.KGC】和KGQ分别为两个域的密钥生成中心认密钥分别为S1和S2P[1假-年,对应的系统公钥分别为Pone=1(,和P am=1,表9中定义了以下协议过程描述用到的重要符号.表1协议符号定义符号定义A f B:{m}从实体A到实体:送消息)Encry(C对消息)实施非对称加密计算M S⑵)对消息,实施基于身份的签名计算D E⑵)对消息,实施基于身份的加密计算CerLscA)IBCA的区块链证书(1)Ui^-BCCA::基I M u,I-Disn2,不,Request^,D SdDuU l D el|(e es狋犜当U]向BCC域发起IS方身份认证请求,不为时间戳,为证明自己的合法性,使用自身私钥对消息进行IBS签名操作.(2)SCCA^BCCA):基""『(D方方,认)Cerhcu,认描uest2))BCCA,收到请求后确认用户UU法身份,确认□在有效范围内,并在链上查询ISE)寸应的域代理BCCA,:BCC域选取时间戳犜、1方)、区块链证书CerhcA,s认证请求,加密后发送给BCCA.(3)BCCA—BCCA:基"c r y(gokK,认当BCCA收到并解密BCCA:的消息,验证T,后,与IAS联合查验C(cca的合法性,若证书有效则响应Request,;同时BCCA,将所在域的系统主公钥Pom与时间戳丁3加密后返回给BCCA.(4)BCCA—BCCA:{Ewcr-ydodK,并当BCCAif IAA:{LBE(K,并’Request)}K—H:(IMu:21-Disn,2,*1(,P public BCCA收到并解密BCCA发送的消息,验证T T效后,保存C域的系统公钥Poc KCCA:同样需要将自身域的系统公钥Pone与一个时间戳T密后返回给BCC,同时BCCA根据上式计算会话密钥K IBE加密后发送给IAS:(5)BCCA2^ISE2:{IBE(B/,R6,Request)} K=H)(M:2D)和[和.PodK BCCA收到BCCA】的消息,验证U有效后,保存l域的系统公钥Poe.BCCA计算会话密钥K(显然K z=H),IBn加密后发给ISM.(^ISEz—IASi:基,〈S”〉},C=—㊉KISE?获得BCCA发送的消息,验证八有效后,保存会话密钥K根据mIBS算法,验鸟通过仲裁获得消息犿的完整签名〈S”〉.最后,ISM计算密文C后,同签名结果〈S,}-起发给IASi:(7)IASif U:11X10,—,验)}在步骤显)中,S会解密来自BCCA的消息,验证八后获得会话密钥K通过K解密C获得消息观,就可以用mIBS方名算法验证〈S”〉获知ISM 的合法性.验证通过后,IAS将成功认证消息、会话密钥K及时间戳基于身份加密后发送给用户U 否则中止验证过程.5.2安全性分析5.2.1会话密钥安全证明会话密钥KK的安全性是基于攻击者UM 模型提出的.在这个模型中,会话密钥需具备如下安全性质2.性质1若通信双方都没有被攻陷且会话匹配验证方获得的会话密钥是一致的.引理1假定EncryD采用ECC加密算法, IBS(显)和I B E(显)基于椭圆曲线的双线性映射实现,均为选择密文攻击(Chosen CiphertexU 
5 Blockchain-Based Cross-Domain Authentication Protocol

For the blockchain-based cross-domain identity authentication model, this section describes the user cross-domain authentication protocol, gives the concrete steps by which the two parties negotiate a session key, and evaluates the protocol's security and effectiveness through theoretical analysis and experiments.

5.1 Cross-Domain Authentication Protocol Design

As the initial state of the protocol, all service nodes of every IBC domain (the KGC, SEM, and IAS) are assumed honest and reliable, and intra-domain authentication has already been completed. Every KGC publishes the common parameters (G1, G2, e, N, P, H1, H2), but each KGC clearly has its own master public key P_pub and master key s. The consortium blockchain provides certificate-status queries for every domain's BCCA. As in Figure 2, the cross-domain workflow is described for user U1 in domain IBC1 accessing the information service resource ISE2 in domain IBC2. KGC1 and KGC2 are the key generation centers of the two domains, with master keys s1, s2 ∈ [1, N-1] and corresponding system public keys P_pub1 = s1·P and P_pub2 = s2·P. Table 1 defines the notation used in the protocol description.

Table 1. Protocol notation.
A → B: {m}   entity A sends message m to entity B
Encry(m)     asymmetric (ECC) encryption of message m
IBS(m)       identity-based signature on message m
IBE(m)       identity-based encryption of message m
Cert_BCCA    the blockchain certificate of a BCCA

(1) U1 → BCCA1: {ID_U1, ID_ISE2, T1, Request1, IBS(ID_U1 ‖ ID_ISE2 ‖ Request1 ‖ T1)}. U1 initiates an authentication request for ISE2 to BCCA1, where T1 is a timestamp; to prove its own legitimacy, U1 signs the message with its private key using IBS.

(2) BCCA1 → BCCA2: {Encry(ID_U1, ID_ISE2, T2, Cert_BCCA1, Request2)}. After receiving the request, BCCA1 confirms the legitimacy of U1's identity, checks that T1 is within its validity window, and looks up on the chain the domain proxy BCCA2 responsible for ISE2. BCCA1 assembles an authentication request from the timestamp T2, Request2, and its blockchain certificate Cert_BCCA1, encrypts it, and sends it to BCCA2.

(3) BCCA2 → BCCA1: {Encry(P_pub2, T3)}. After receiving and decrypting BCCA1's message and verifying T2, BCCA2 checks the validity of Cert_BCCA1 together with the IAS; if the certificate is valid it answers Request2, and it returns its own domain's master public key P_pub2, encrypted together with the timestamp T3, to BCCA1.

(4) BCCA1 → BCCA2: {Encry(P_pub1, T4)}, and BCCA1 → IAS1: {IBE(K, T4, Request)}, where K = H2(ID_U1 ‖ ID_ISE2 ‖ T ‖ P_pub). After receiving and decrypting BCCA2's message and verifying T3, BCCA1 stores the system public key P_pub2 of domain IBC2. BCCA1 likewise returns its own domain's system public key P_pub1, encrypted with a timestamp T4, to BCCA2; at the same time BCCA1 computes the session key K according to the formula above, encrypts it with IBE, and sends it to the IAS.

(5) BCCA2 → ISE2: {IBE(K', T5, Request)}, where K' = H2(ID_U1 ‖ ID_ISE2 ‖ T ‖ P_pub). After receiving BCCA1's message and verifying T4, BCCA2 stores the system public key P_pub1 of domain IBC1, computes the session key K' (clearly K' = K), encrypts it with IBE, and sends it to ISE2.

(6) ISE2 → IAS1: {C, ⟨S_m, v⟩}, where C = m ⊕ K. After receiving BCCA2's message and verifying the timestamp, ISE2 stores the session key. Following the mIBS algorithm, it obtains the complete signature ⟨S_m, v⟩ on message m through the mediator, computes the ciphertext C, and sends both to IAS1.

(7) IAS1 → U1: {IBE(success, K, T)}. IAS1 decrypts the message received from BCCA1 in step (4), verifies the timestamp, and obtains the session key K; it then decrypts C with K to recover m and can verify ⟨S_m, v⟩ with the mIBS verification algorithm, which establishes the legitimacy of ISE2. If verification succeeds, IAS1 sends the success notification, the session key K, and a timestamp, encrypted under the user's identity, to U1; otherwise the verification process is aborted.

5.2 Security Analysis

5.2.1 Security proof for the session key. The security of the session key K is stated with respect to an attacker model in which the session key must satisfy the following property.

Property 1. If neither communicating party is compromised and their sessions match, the session keys obtained by the two parties are identical.

Lemma 1. Assume that Encry(·) uses ECC encryption and that IBS(·) and IBE(·) are built on the bilinear map over the elliptic curve, all secure against chosen-ciphertext attack (CCA). If none of the entities or nodes U, ISE, SEM, IAS, and BCCA is compromised, the session key K is successfully negotiated during the execution of the protocol.

Proof (by contradiction). Suppose an attacker forges a message passed between entities during the authentication run with non-negligible probability ε. Then for the individual protocol steps the probability of a successful forgery is bounded as follows: in step (1) it equals the probability ε_IBS of breaking the IBS algorithm; in steps (2) and (3) it equals the probability ε_ECC of breaking the ECC encryption; in steps (4) and (5) it equals the probability ε_IBE of breaking the IBE algorithm; in step (6) it equals ε_IBS; and in step (7) it equals ε_IBE. Altogether ε ≤ 2ε_IBS + 2ε_ECC + 3ε_IBE; since each of these probabilities is negligible under the CCA-security assumptions, ε is negligible as well, contradicting the assumption.
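To make the key agreement of steps (4) and (5) concrete, the sketch below derives the session key by hashing the two identities, a timestamp, and the two domain public keys. The exact field order and the use of SHA-256 for H2 are assumptions, since the original formula for K is only partially legible.

```python
import hashlib

def derive_session_key(id_u1: str, id_ise2: str, timestamp: str,
                       p_pub1: bytes, p_pub2: bytes) -> bytes:
    """K = H2(ID_U1 || ID_ISE2 || T || P_pub1 || P_pub2).
    Both BCCA1 and BCCA2 can evaluate this independently, so the keys
    they hand to IAS1 and ISE2 coincide (Property 1)."""
    h = hashlib.sha256()
    for part in (id_u1.encode(), id_ise2.encode(),
                 timestamp.encode(), p_pub1, p_pub2):
        h.update(part)
    return h.digest()

# Both domain proxies compute the same key from the same inputs.
k1 = derive_session_key("U1", "ISE2", "2022-05-01T08:00:00Z", b"\x02" * 33, b"\x03" * 33)
k2 = derive_session_key("U1", "ISE2", "2022-05-01T08:00:00Z", b"\x02" * 33, b"\x03" * 33)
assert k1 == k2
```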
IJWA sample paper

ABSTRACT: Data mining is part of a process called KDD (knowledge discovery in databases). This process consists of steps that are performed before data mining itself, such as data selection, data cleaning, pre-processing, and data transformation. Association rule techniques are used in data mining when the goal is to detect relationships or associations between specific values of categorical variables in large data sets. There may be thousands or millions of records to read in order to extract the rules, but the question is what happens if new data arrives, or if some or all of the existing data must be modified or deleted while data mining is under way. In the past the user would repeat the whole procedure, which is time-consuming and inefficient. This is where the importance of a dynamic data mining process appears, and for this reason it is the main topic of this paper. The purpose of this study is therefore to find a solution for a dynamic data mining process that takes all updates (insert, update, and delete problems) into account.

Key words: Static data mining process, dynamic data, data mining, data mining process, dynamic data mining process.
Received: 11 July 2009, Revised 13 August 2009, Accepted 18 August 2009
© 2009 D-line. All rights reserved.

1. Introduction

Data mining is the task of discovering interesting and hidden patterns from large amounts of data, where the data can be stored in databases, data warehouses, OLAP (online analytical processing) systems, or other information repositories [1]. It is also defined as knowledge discovery in databases (KDD) [2]. Data mining involves an integration of techniques from multiple disciplines such as database technology, statistics, machine learning, neural networks, and information retrieval [3]. According to [4]: "Data mining is the process of discovering meaningful patterns and relationships that lie hidden within very large databases". [5] defines data mining as "the analysis of observational data sets to find unsuspected relationships and to summarize the data in novel ways that are both understandable and useful to the data owner". Data mining is part of the KDD process [3], which consists of steps performed before the mining itself, such as data selection, data cleaning, pre-processing, and data transformation [6]. The architecture of a typical data mining system has the following major components [3]: a database, data warehouse, or other information repository; a server responsible for fetching the relevant data based on the user's data mining request; a knowledge base used to guide the search; a data mining engine consisting of a set of functional modules; a pattern evaluation module that interacts with the data mining modules to focus the search towards interesting patterns; and a graphical user interface that mediates between users and the data mining system, allowing the user to interact with the system.
Research on the big data feature mining technology based on the cloud computing

Research on the big data feature mining technology based on the cloud computing

WANG Yun
Sichuan Vocational and Technical College, Suining, Sichuan, 629000

Abstract: The cloud computing platform can allocate dynamic resources efficiently, generate dynamic computing and storage according to user requests, and thus provides a good platform for big data feature analysis and mining. Big data feature mining in the cloud computing environment is an effective way to apply massive data efficiently in the information age. Among existing approaches, big data feature mining based on gradient sampling has poor logical structure: it mines features only from a single-level perspective, which reduces the precision of the feature mining.

Keywords: Cloud computing; big data features; mining technology; model method

With the development of the times, people need more and more valuable data, so a new technology is needed to process large amounts of data and extract the information we need. Data mining is a wide-ranging subject that integrates statistical methods and goes beyond traditional statistical analysis; it is the process of extracting the useful data we need from massive data by technical means. Experiments show that this method has high data mining performance and can provide an effective means for big data feature mining in all sectors of social production.

1. Feature mining method for the big data feature mining model

1-1. The big data feature mining model in the cloud computing environment
This paper uses a big data feature mining model in the cloud computing environment to realize big data feature mining. The model mainly includes the big data storage system layer, the big data mining and processing layer, and the user layer, which are studied in detail below.

1-2. The big data storage system layer
The interaction of multi-source data and the integration of network technology in cloud computing depend on three different models in the cloud computing environment, the I/O, USB, and disk layers, together with the architecture of the big data storage system layer. The big data storage system in the cloud computing environment therefore includes a multi-source information resource service layer, a core technology layer, a multi-source information resource platform service layer, and a multi-source information resource base layer.

1-3. The big data feature mining and processing layer
To address the low classification accuracy and long running time of big data feature mining, a new and efficient method of big data feature classification mining based on cloud computing is proposed. The first step decomposes the big data training set with map tasks and generates the training splits. The second step acquires the frequent item-sets. The third step merges the counts with reduce tasks; association rules are derived from the frequent item-sets and then pruned to obtain classification rules. Based on these classification rules, a classifier over the big data features is constructed to realize effective classification and mining of the features.
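The mining layer above is described only at the level of map and reduce phases. The sketch below shows the shape of that computation for the first two steps, splitting transactions in a map phase and merging candidate counts in a reduce phase, using plain Python in place of a real Hadoop job; the transaction format and the minimum-support threshold are assumptions.

```python
from collections import Counter
from itertools import combinations

def map_phase(transactions, max_size=2):
    """Emit (itemset, 1) pairs for every small itemset in each transaction split."""
    for items in transactions:
        items = sorted(set(items))
        for size in range(1, max_size + 1):
            for itemset in combinations(items, size):
                yield itemset, 1

def reduce_phase(pairs, min_support=2):
    """Merge counts and keep the frequent itemsets; candidate association rules
    would then be generated and pruned from these."""
    counts = Counter()
    for itemset, n in pairs:
        counts[itemset] += n
    return {itemset: c for itemset, c in counts.items() if c >= min_support}

transactions = [["bread", "milk"], ["bread", "butter"], ["bread", "milk", "butter"]]
print(reduce_phase(map_phase(transactions)))
```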
1-4. Client layer
The user input module in the client layer provides a platform for users to express their requests. The module analyses the data entered by the users and matches it with suitable data mining methods, which are then used to mine the features of the pre-processed data. Through the result display module, users obtain the corresponding results of the big data feature mining, realizing big data feature mining in the cloud computing environment.

2. Parallel distributed big data mining

2-1. Platform system architecture
Hadoop provides a platform on which programmers can easily develop and run massive-data applications. Its distributed file system, HDFS, can reliably store big data sets on large clusters and is reliable and strongly fault tolerant. MapReduce provides a programming model for efficient parallel programming. On this basis we developed a parallel data mining platform, PD Miner, which stores large-scale data on HDFS and implements various parallel data preprocessing and data mining algorithms through MapReduce.

2-2. Workflow subsystem
The workflow subsystem provides a friendly, unified user interface (UI) that lets users easily set up data mining tasks. When creating a mining task, the ETL data preprocessing algorithms, the classification algorithms, the clustering algorithms, and the association rule algorithms can be selected, and the drop-down box on the right selects the specific algorithm of the service unit. The workflow subsystem serves users through a graphical UI and flexibly establishes self-customized mining tasks that conform to the business workflow. Through the workflow interface, multiple workflow tasks can be established, both within each mining task and across different data mining tasks.

2-3. User interface subsystem
The user interface subsystem consists of two modules: the user input module and the result display module. It is responsible for interaction with the users, reading and writing parameter settings, accepting user operation requests, and displaying the results in the interface. For example, the parameter-setting interface of the parallel Naive Bayes algorithm in the parallel classification suite makes it easy to set the algorithm's parameters, including the training data, the test data, the output results, the storage path of the model files, and the number of Map and Reduce tasks. The result display part visualizes the results, for example as histograms and pie charts.

2-4. Parallel ETL algorithm subsystem
The data preprocessing algorithms play a very important role in data mining, and their output is usually the input of the data mining algorithms. Because of the dramatic increase in data volume, serial data preprocessing needs a great deal of time to complete.
To improve the efficiency of preprocessing, 19 preprocessing algorithms are designed and developed in the parallel ETL algorithm subsystem, including parallel sampling (Sampling), parallel data preview (PD Preview), parallel data labeling (PD Add Label), parallel discretization (Discrete), parallel addition of sample IDs, and parallel attribute exchange (Attribute Exchange).

3. Analysis of the big data feature mining technology based on the cloud computing

The emergence of cloud computing provides a new direction for the development of data mining technology, and data mining based on cloud computing can develop new patterns. As far as concrete implementation is concerned, the development of several key technologies is crucial.

3-1. Cloud computing technology
Distributed computing is the key technology of the cloud computing platform and one of the effective means of handling massive data mining tasks and improving mining efficiency. It includes distributed storage and parallel computing. Distributed storage effectively solves the storage problem of massive data and realizes key properties such as high fault tolerance, high security, and high performance. The distributed file system theory proposed by Google is the basis of the popular distributed file systems in industry; the Google File System (GFS) was developed to solve the storage, search, and analysis of Google's massive data. The distributed parallel computing framework is the key to completing data mining and computing tasks efficiently. Popular frameworks encapsulate many technical details of distributed computing, so users only need to consider the logical relationships between tasks without paying too much attention to those details, which greatly improves development efficiency and effectively reduces system maintenance costs. Typical examples are the MapReduce parallel computing framework proposed by Google and the Pregel iterative processing framework.

3-2. Data aggregation scheduling technology
Data aggregation and scheduling must aggregate and schedule the different types of data that reach the cloud computing platform. It must support source data in different formats and provide a variety of data synchronization methods; reconciling the protocols of different data sources is its central task. The technical solution needs to support the data formats generated by different systems on the network, such as data from on-line transaction processing (OLTP) systems, data from on-line analytical processing (OLAP) systems, various logs, and crawler data. Only then can data mining and analysis be realized.

3-3. Service scheduling and service management technology
To enable different business systems to use this computing platform, the platform must provide service scheduling and service management functions.
Service scheduling is based on the priority of the services and on matching services with resources; it resolves parallel exclusion and isolation of services, ensures that the cloud services of the data mining platform are safe and reliable, and schedules and controls them according to the service management. Service management realizes unified service registration and service exposure: it not only supports exposing local service capabilities but also supports access to third-party data mining capabilities, extending the service capability of the data mining platform.

3-4. Parallelization technology of the mining algorithms
Parallelizing the mining algorithms is one of the key technologies for effectively using the basic capabilities provided by the cloud computing platform; it involves whether an algorithm can be parallelized at all and the choice of parallelization strategy. The data mining algorithms mainly include the decision tree algorithm, the association rule algorithm, and the K-means algorithm. Parallelizing these algorithms is the key to data mining on a cloud computing platform.

4. Data mining technology based on the cloud computing

4-1. Data mining research methods based on the cloud computing
The first is association mining. When analyzing the details and extracting the value of massive data, association mining can bring together divergent network data. It is usually divided into three steps. First, determine the scope of the data to be mined and collect the data objects to be processed, so that the attributes of the association study are clearly defined. Second, pre-process the large amounts of data to ensure the authenticity and integrity of the mining data, and store the results of the pre-processing in the mining database. Third, carry out the data mining on the shaped training set; the entity thresholds are analyzed by permutation and combination.

The second is the data fuzziness learning method. Its principle is to assume a certain number of information samples under the cloud computing platform, describe any information sample, calculate the standard deviation of all information samples, and finally realize the operation and high compression of the mined value information. For massive data mining, the key to applying the data fuzziness learning method is to select and determine the fuzzy membership function, and finally to carry out the fuzzification of the value information of massive data mining based on cloud computing. Note that activation conditions are needed here in order to collect the information of the network data nodes.

The third is the Apriori data mining algorithm. The Apriori algorithm is an algorithm for mining association rules, a basic algorithm designed by Agrawal et al. It is based on the idea of two-stage mining and is implemented by scanning the transaction database several times. Unlike other algorithms, the Apriori algorithm effectively avoids the poor convergence that the redundancy and complexity of massive data cause in mining algorithms. While saving investment costs as much as possible, using computer simulation greatly improves the speed of mining massive data.
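Since the Apriori algorithm is only summarized above, a compact sketch may be useful. This is the textbook level-wise procedure (count candidates, keep the frequent ones, then join and prune to form the next level), not the specific implementation the paper refers to, and the support threshold is an assumed parameter.

```python
from itertools import combinations

def apriori(transactions, min_support=2):
    """Return all frequent itemsets, level by level."""
    transactions = [set(t) for t in transactions]
    items = sorted({i for t in transactions for i in t})
    frequent, current = {}, [frozenset([i]) for i in items]
    while current:
        # Count candidates in one scan of the database.
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # Join step: build (k+1)-candidates from frequent k-itemsets,
        # keeping only those whose k-subsets are all frequent (prune step).
        keys = sorted(level, key=sorted)
        current = []
        for a, b in combinations(keys, 2):
            cand = a | b
            if len(cand) == len(a) + 1 and all(
                frozenset(s) in level for s in combinations(cand, len(a))
            ):
                current.append(cand)
        current = list(dict.fromkeys(current))  # deduplicate candidates
    return frequent

print(apriori([["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"]]))
```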
4-2. Data mining architecture based on the cloud computing
Data mining based on cloud computing relies on the massive storage capacity of the cloud and on its ability to process massive data in parallel, so as to solve the problems traditional data mining faces when dealing with massive data. Figure 1 shows the architecture of data mining based on cloud computing, which is divided into three layers. The first layer is the cloud computing service layer, which provides storage and parallel processing services for massive data. The second layer is the data mining processing layer, which includes data preprocessing and the parallelization of the data mining algorithms; preprocessing effectively improves the quality of the mined data and makes the entire mining process easier and more effective. The third layer is the user-oriented layer, which receives the users' data mining requests, passes them to the second and first layers, and displays the final mining results to the users in the display module.

5. Conclusion
Cloud computing technology is itself still developing rapidly, which also leads to some deficiencies in the data mining architecture based on it. First, cloud computing brings demands for personalized and diversified services. Second, the amount of data mined and processed may continue to increase; in addition, dynamic data, noisy data, and high-dimensional data also hinder data mining and processing. Third, choosing an appropriate algorithm is directly related to the final mining results. Fourth, there may be many uncertainties in the mining process, and how to deal with them and minimize their negative impact is also a problem to be considered in data mining based on cloud computing.
is not recognized as valid resource method (Reply)

Topic: an in-depth look at the causes of the error "[is not recognized as valid resource method]" and how to fix it.

Introduction: In software development and in web applications we frequently run into error messages. One of the common ones is "[is not recognized as valid resource method]". It usually appears in an application when we try to call some web service. This article examines the causes of the problem and its solutions, to help readers understand the error and give them some ideas for correcting it.

Part 1: Causes of the error
1.1 Problems with the resource method definition. The error appears when we call a web service in a way that is not recognized as a valid resource method. This is usually because the resource method is defined incorrectly, or because our code does not reference it correctly.
1.2 Missing dependencies. In some cases we may not have configured or added the required dependencies correctly, which can also trigger the "[is not recognized as valid resource method]" error.
1.3 Wrong URL path. If we supply the wrong URL path when calling the web service, the server cannot identify the resource method, which raises this error.

Part 2: Solutions
2.1 Check the resource method definition. First, carefully check whether the resource method is defined correctly in the code; make sure it is properly marked as a resource method and referenced correctly.
2.2 Confirm the dependencies. Re-check how the dependencies are configured and added; make sure the right packages, libraries, or modules are present and that they are consistent with the resource method definitions in the code.
2.3 Check the URL path. Carefully inspect the URL path used when calling the web service; make sure it matches the resource method on the server and contains no spelling mistakes.
2.4 Update the relevant framework or library. Sometimes the "[is not recognized as valid resource method]" problem is caused by an outdated framework or library version; upgrading to a newer version can then resolve the error.
On-Chain Metadata Formats

On-chain metadata usually refers to metadata associated with transactions or smart contracts on a blockchain. It can include the sender's address, the transaction amount, and the transaction timestamp, or information such as a smart contract's name, version number, and functional description.

The format of on-chain metadata generally depends on the blockchain platform and the smart contract programming language being used. Some common formats are:

1. JSON: JSON (JavaScript Object Notation) is a lightweight data interchange format that is easy for humans to read and write and easy for machines to parse and generate. Many blockchain platforms use JSON to store and transmit on-chain metadata.
2. CBOR: CBOR (Concise Binary Object Representation) is a binary format for representing structured data. It is compact and extensible, which makes it suitable for storing and transmitting metadata on a chain.
3. EDIPARTY: EDIParty is a format for representing the identity of a digital asset owner. It defines an identifier for the owner, including an address, a name, and other related information. The EDIParty format is widely used on Ethereum for representing on-chain metadata.
4. W3C DID: a W3C DID (Decentralized Identifier) is a blockchain-based standard for decentralized identifiers. It provides a standardized way to represent and manage digital identities and credentials, which can be stored and transmitted on a blockchain as on-chain metadata.

Note that different blockchain platforms and smart contract languages may use different on-chain metadata formats and specifications. In practice, on-chain metadata should be designed and used according to the requirements of the specific platform and programming language.
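As an illustration of the JSON option above, the snippet below builds one hypothetical metadata record and serializes it compactly; the field names are examples only and are not mandated by any particular chain. The same structure could be re-encoded with CBOR (for instance via a third-party CBOR library) when a smaller binary footprint is wanted.

```python
import json

# Illustrative transaction-level metadata record; field names are examples only.
metadata = {
    "sender": "0x1234...abcd",          # transaction sender address
    "amount": "1.5",                     # transferred amount, kept as a string
    "timestamp": 1700000000,             # block or transaction timestamp
    "contract": {
        "name": "ExampleToken",
        "version": "1.0.2",
        "description": "Minimal fungible token used for illustration",
    },
}

encoded = json.dumps(metadata, separators=(",", ":")).encode("utf-8")
print(len(encoded), "bytes as compact JSON")
```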
open set recognition code (Reply)

Open set recognition is a machine learning task that aims to recognize data from unknown classes. Unlike the traditional closed-set recognition task, open set recognition faces a bigger challenge, because the model must be able to handle samples from classes that never appeared in the training set. This article describes the definition, challenges, and applications of open set recognition, along with common solution methods.

First, open set recognition is a way of applying machine learning to open-set problems. In a traditional closed-set recognition task, the model is trained to discriminate among the labeled classes in the training set, whereas in an open-set task the model must also recognize and reject samples from unknown classes. This capability matters in many real applications, for example recognizing new kinds of threats in security, or recognizing never-before-seen obstacles in autonomous driving.

Open set recognition, however, faces several challenges. First, the training set usually contains only a limited number of samples from the known classes, which makes it hard for the model to fully learn the characteristics of unknown classes. Second, samples from unknown classes may have features similar to those of known classes, making them hard to separate correctly. In addition, open set recognition must strike a balance between recognizing known classes and rejecting unknown ones: it should lower the probability of wrongly accepting an unknown sample as a known class while keeping the probability of wrongly rejecting known-class samples as small as possible.

To address these challenges, researchers have proposed a variety of open set recognition methods. One common family is feature-space methods. These map known-class and unknown-class samples into a feature space and try to separate the two using a threshold or a distance measure: if the distance between a sample's feature vector and the mean feature vector of a known class is below the threshold, the sample is assigned to that known class; otherwise it is classified as unknown. The advantage of this approach is that it is simple and easy to use, but it also has limitations, in particular the difficulty of finding a threshold that properly balances the probabilities of false acceptance and false rejection.
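A minimal sketch of the feature-space method just described: class means are estimated from known-class feature vectors, and a test sample is rejected as unknown when its distance to the nearest mean exceeds a threshold. The feature extractor is omitted, and the Euclidean distance and the threshold value are assumptions.

```python
import numpy as np

def fit_class_means(features, labels):
    """features: (n, d) array of known-class feature vectors; labels: (n,) ints."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict_open_set(x, class_means, threshold):
    """Return the nearest known class, or -1 ('unknown') if every class mean
    is farther away than the threshold."""
    dists = {c: np.linalg.norm(x - mu) for c, mu in class_means.items()}
    c_best = min(dists, key=dists.get)
    return c_best if dists[c_best] < threshold else -1

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)
means = fit_class_means(feats, labels)
print(predict_open_set(np.array([0.05, -0.02]), means, threshold=1.0))  # known class 0
print(predict_open_set(np.array([10.0, 10.0]), means, threshold=1.0))   # -1, i.e. unknown
```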
Another common family of open set recognition methods uses generative models. These build probability models to estimate the probability distribution of the samples of each known class and of the unknown classes. One widely used generative model is the generative adversarial network (GAN), which consists of a pair of adversarially trained models, a generator and a discriminator.
Mining the Web for Object Recognition
Charles Rosenberg
November 16, 1999

Goal
Use machine learning techniques to construct a large number of special purpose object recognizers. The problem is that this typically requires hand labeling a large number of examples, which is very time consuming and expensive. Reduce the labeling effort by mining the large number of images on the web that have already been loosely labeled with associated text.

What is different about this work?
These are real world, messy images. Entire images are not clearly labeled as belonging to a particular class. The result is that each image contains many objects in addition to the object of interest. The assumption is that there is some image region, a subset of the image pixels, which is the "cause" of the specific class label. The algorithm must attempt to discover the statistical properties of that subset which differentiate it from other regions in the image.
This is an example of multiple instance learning. In the multiple instance learning framework, the learning algorithm is given "bags" of training examples. A bag is labeled as positive if it contains at least one positive example. A bag is labeled as negative if it contains no positive examples. It is the task of the learning algorithm to learn to distinguish positive from negative examples using this data.

Data Collection Procedure
News web sites were spidered nightly, and images and associated text were collected. Spider runs were conducted on three news web sites. The total number of pages visited in a particular site per night was limited because many sites contain large archives. To avoid loops, the same URL was never visited twice in one spidering session.

Description of Data Collected
Approximately 100,000 images were collected, amounting to a total of 3 GB of data over a period of 60 days. Each image and its associated caption was collected, as well as the html source for the page which contained it. Associated text was labeled as one of the following:
• full marked caption
• half marked caption
• raw text containing the word photo
• raw text containing parenthesis
Data was collected into two databases. One is a cache of web objects (including both images and html) indexed by their MD5 hash. The second contains pointers to images and the text associated with each image; these are indexed by the MD5 hash of the image and caption text. Care was taken to eliminate duplicates in both databases. This was achieved by pre-filtering html before caching, which is necessary because much of the html today is dynamic, causing the same page to be different each time it is visited.
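A small sketch of the two-database layout described above, with in-memory dictionaries standing in for the on-disk stores; the actual storage engine of the original system is not specified, so this only illustrates the MD5-based indexing and the duplicate elimination.

```python
import hashlib

object_cache = {}     # MD5(content) -> raw bytes (images and pre-filtered HTML)
caption_index = {}    # MD5(image bytes + caption text) -> (image key, caption)

def md5(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def store(image_bytes: bytes, caption: str) -> None:
    image_key = md5(image_bytes)
    object_cache.setdefault(image_key, image_bytes)        # duplicates collapse here
    pair_key = md5(image_bytes + caption.encode("utf-8"))
    caption_index.setdefault(pair_key, (image_key, caption))

store(b"\x89PNG...", "President Clinton waves a flag")
store(b"\x89PNG...", "President Clinton waves a flag")     # ignored as a duplicate
print(len(object_cache), len(caption_index))                # 1 1
```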
Proposed Training Process
1) Extract one group of images which contain a specific keyword in their caption and a second group of images which do not.
2) For each block in each image calculate the average color in HSV space, discarding intensity (V).
3) Quantize the (H, S) pair for each block to one of 200 values.
4) Generate two histograms: one for colors in images containing the keyword and one for images which do not.
5) Use these histograms to build a class conditional probability distribution over colors in and out of the desired class.
6) Segment the image on a block basis based on color, using the log odds ratio of the class conditional probability distribution of color.
7) Train an appearance based object recognition algorithm using the segmented examples.

Proposed Recognition Process
1) Use color segmentation to locate sections of the image likely to contain the target object.
2) Scan the trained object recognizer over portions of the image likely to contain the desired object. Use segmented region position and size to achieve some scale and shift invariance.

Future Work
• Use EM to get a better estimate of class conditional color probabilities.
• Use other features, like edges, to aid segmentation.
• Implement region extraction based on segmentation.
• Integrate the object recognition algorithm.

References
A Framework for Multiple-Instance Learning, by Oded Maron and Tomás Lozano-Pérez, in Neural Information Processing Systems 10, 1998.
Multiple-Instance Learning for Natural Scene Classification, by Oded Maron and Aparna Lakshmi Ratan, in ICML-98.
Solving the multiple-instance problem with axis-parallel rectangles, by T. Dietterich, R. Lathrop, and T. Lozano-Pérez, in Artificial Intelligence, 89(1-2), pp. 31-71, 1997.
A Note on Learning from Multiple-Instance Examples, by Avrim Blum and Adam Kalai, Machine Learning, 30:23-29, 1998.

Preliminary Segmentation Results
Likelihood ratio for "Clinton" class / Likelihood ratio for "Not Clinton" class: colors in the left diagram are more likely to exist in images whose captions contained the word "clinton", colors in the right diagram are more likely to exist in images whose captions did not contain the word "clinton", and colors equally likely to occur in both classes are not shown.
Example color based segmentations of images with captions containing the keyword "clinton". Blocks not considered to be members of the "clinton" class were colored green.
Likelihood ratio for "Flag" class / Likelihood ratio for "Not Flag" class: colors in the left diagram are more likely to exist in images whose captions contained the word "flag", colors in the right diagram are more likely to exist in images whose captions did not contain the word "flag", and colors equally likely to occur in both classes are not shown.
Example color based segmentations of images with captions containing the keyword "flag". Blocks not considered to be members of the "flag" class were colored green.
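As a concrete illustration of steps 3 to 6 of the proposed training process, the sketch below builds the two color histograms from pre-quantized block colors, forms the class conditional distributions, and keeps the blocks whose log odds ratio favors the keyword class. Image decoding, HSV conversion, and the 200-bin quantizer are omitted, and the smoothing constant is an assumption.

```python
import numpy as np

N_COLORS = 200  # quantized (H, S) bins, as in step 3

def color_log_odds(pos_blocks, neg_blocks, smoothing=1.0):
    """pos_blocks / neg_blocks: 1-D arrays of quantized color indices taken from
    images whose captions do / do not contain the keyword (steps 4-5)."""
    pos = np.bincount(pos_blocks, minlength=N_COLORS) + smoothing
    neg = np.bincount(neg_blocks, minlength=N_COLORS) + smoothing
    return np.log(pos / pos.sum()) - np.log(neg / neg.sum())

def segment(block_colors, log_odds):
    """Step 6: keep the blocks that are more likely under the keyword class."""
    return log_odds[block_colors] > 0.0

rng = np.random.default_rng(1)
pos = rng.integers(0, 50, 5000)       # toy "keyword" images favour low color bins
neg = rng.integers(0, N_COLORS, 5000)
lo = color_log_odds(pos, neg)
print(segment(np.array([3, 120, 40, 180]), lo))  # e.g. [ True False  True False]
```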