Feature Extraction
Attributes for rule-based classification in ENVI EX Feature Extraction
ENVI EX Feature Extraction is a powerful workflow for segmentation and classification.
EX first segments the image and then classifies and extracts targets from the resulting segments (objects) using spectral, texture, and geometric information.
EX offers two classification modes, supervised classification and rule-based classification; in rule-based classification, targets of interest are extracted by adjusting thresholds on the various object attributes.
EX provides a rich set of attributes for classification; the attribute descriptions below are translated from the Help.
Spectral attributes:
MINBAND_x: minimum gray value over all pixels of the object in band x.
MAXBAND_x: maximum gray value over all pixels of the object in band x.
AVGBAND_x: mean gray value over all pixels of the object in band x.
STDBAND_x: standard deviation of the gray values of all pixels of the object in band x.
Texture attributes:
TX_RANGE: average data range of the object's texture kernels.
TX_MEAN: average value of the object's texture kernels.
TX_VARIANCE: average variance of the object's texture kernels.
TX_ENTROPY: average entropy of the object's texture kernels.
Geometric attributes:
AREA: total area of the object's polygon, minus the area of any holes in the polygon. The value is given in map units.
LENGTH: combined length of all boundaries of the object, including the boundaries of its holes.
COMPACT: compactness of the object. A circle is the most compact shape, with a compactness of 1/pi; a square has a compactness of 1/(2*sqrt(pi)). It is computed as COMPACT = sqrt(4*AREA/pi) / length of the outer contour.
CONVEXITY: convexity of the object. A convex polygon with no holes has a convexity of 1, and a concave polygon has a convexity less than 1. It is computed as CONVEXITY = length of the convex hull / LENGTH.
SOLIDITY: solidity of the object, the ratio of the polygon's area to the area of its convex hull. A convex polygon with no holes has a solidity of 1, and a concave polygon has a solidity less than 1. It is computed as SOLIDITY = AREA / area of the convex hull.
ROUNDNESS: roundness of the object, the ratio of the polygon's area to the square of its longest diameter. The longest diameter is the length of the major axis of the polygon's bounding box.
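These geometric attributes are easy to prototype outside ENVI. Below is a minimal Python sketch, assuming the shapely library; the compute_attributes helper and the example square are illustrative, not part of ENVI EX (for the square, COMPACT should come out as 1/(2*sqrt(pi)), matching the definition above):

    from shapely.geometry import Polygon
    from math import pi, sqrt

    def compute_attributes(poly: Polygon) -> dict:
        area = poly.area                      # AREA: shapely already subtracts holes
        length = poly.length                  # LENGTH: exterior plus hole boundaries
        hull = poly.convex_hull
        # COMPACT = sqrt(4*AREA/pi) / outer contour length
        compact = sqrt(4 * area / pi) / poly.exterior.length
        convexity = hull.length / length      # CONVEXITY = hull perimeter / LENGTH
        solidity = area / hull.area           # SOLIDITY = AREA / hull area
        # ROUNDNESS = AREA / (longest diameter)^2, taking the major axis of the
        # minimum rotated bounding rectangle as the longest diameter
        xs, ys = poly.minimum_rotated_rectangle.exterior.coords.xy
        major = max(((xs[i+1]-xs[i])**2 + (ys[i+1]-ys[i])**2) ** 0.5 for i in range(4))
        roundness = area / major**2
        return {"AREA": area, "LENGTH": length, "COMPACT": compact,
                "CONVEXITY": convexity, "SOLIDITY": solidity, "ROUNDNESS": roundness}

    print(compute_attributes(Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])))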
Image Feature Extraction
[Abstract] With the continuous development of technology, the demand for computer image recognition is gradually increasing, and image feature extraction is the core problem of image recognition. Image feature extraction involves the application of many extraction algorithms, and the quality of the extracted color and texture features directly affects the results. This article analyzes common image features and their extraction methods.
[Keywords] image feature; extraction; algorithm
1 Feature extraction: color features
Computer vision and image processing are inseparable from feature extraction. Feature extraction is itself repeatable, and it is the first-level operation of image processing: every pixel is examined to determine what feature it represents.
Common image features and the extraction algorithms frequently applied to them are studied in detail below.
1.1 Characteristics
Color features describe the surface properties of an image and are based mainly on per-pixel characteristics; every pixel in the image plays its own role. Because color is relatively insensitive to changes of the objects in a scene, color features alone cannot effectively extract the local features of the scene.
Summary of English-Chinese Technical Terms in Artificial Intelligence
名词解释中英文对比<using_information_sources> social networks 社会网络abductive reasoning 溯因推理action recognition(行为识别)active learning(主动学习)adaptive systems 自适应系统adverse drugs reactions(药物不良反应)algorithm design and analysis(算法设计与分析) algorithm(算法)artificial intelligence 人工智能association rule(关联规则)attribute value taxonomy 属性分类规范automomous agent 自动代理automomous systems 自动系统background knowledge 背景知识bayes methods(贝叶斯方法)bayesian inference(贝叶斯推断)bayesian methods(bayes 方法)belief propagation(置信传播)better understanding 内涵理解big data 大数据big data(大数据)biological network(生物网络)biological sciences(生物科学)biomedical domain 生物医学领域biomedical research(生物医学研究)biomedical text(生物医学文本)boltzmann machine(玻尔兹曼机)bootstrapping method 拔靴法case based reasoning 实例推理causual models 因果模型citation matching (引文匹配)classification (分类)classification algorithms(分类算法)clistering algorithms 聚类算法cloud computing(云计算)cluster-based retrieval (聚类检索)clustering (聚类)clustering algorithms(聚类算法)clustering 聚类cognitive science 认知科学collaborative filtering (协同过滤)collaborative filtering(协同过滤)collabrative ontology development 联合本体开发collabrative ontology engineering 联合本体工程commonsense knowledge 常识communication networks(通讯网络)community detection(社区发现)complex data(复杂数据)complex dynamical networks(复杂动态网络)complex network(复杂网络)complex network(复杂网络)computational biology 计算生物学computational biology(计算生物学)computational complexity(计算复杂性) computational intelligence 智能计算computational modeling(计算模型)computer animation(计算机动画)computer networks(计算机网络)computer science 计算机科学concept clustering 概念聚类concept formation 概念形成concept learning 概念学习concept map 概念图concept model 概念模型concept modelling 概念模型conceptual model 概念模型conditional random field(条件随机场模型) conjunctive quries 合取查询constrained least squares (约束最小二乘) convex programming(凸规划)convolutional neural networks(卷积神经网络) customer relationship management(客户关系管理) data analysis(数据分析)data analysis(数据分析)data center(数据中心)data clustering (数据聚类)data compression(数据压缩)data envelopment analysis (数据包络分析)data fusion 数据融合data generation(数据生成)data handling(数据处理)data hierarchy (数据层次)data integration(数据整合)data integrity 数据完整性data intensive computing(数据密集型计算)data management 数据管理data management(数据管理)data management(数据管理)data miningdata mining 数据挖掘data model 数据模型data models(数据模型)data partitioning 数据划分data point(数据点)data privacy(数据隐私)data security(数据安全)data stream(数据流)data streams(数据流)data structure( 数据结构)data structure(数据结构)data visualisation(数据可视化)data visualization 数据可视化data visualization(数据可视化)data warehouse(数据仓库)data warehouses(数据仓库)data warehousing(数据仓库)database management systems(数据库管理系统)database management(数据库管理)date interlinking 日期互联date linking 日期链接Decision analysis(决策分析)decision maker 决策者decision making (决策)decision models 决策模型decision models 决策模型decision rule 决策规则decision support system 决策支持系统decision support systems (决策支持系统) decision tree(决策树)decission tree 决策树deep belief network(深度信念网络)deep learning(深度学习)defult reasoning 默认推理density estimation(密度估计)design methodology 设计方法论dimension reduction(降维) dimensionality reduction(降维)directed graph(有向图)disaster management 灾害管理disastrous event(灾难性事件)discovery(知识发现)dissimilarity (相异性)distributed databases 分布式数据库distributed databases(分布式数据库) distributed query 分布式查询document clustering (文档聚类)domain experts 领域专家domain knowledge 领域知识domain specific language 领域专用语言dynamic databases(动态数据库)dynamic logic 动态逻辑dynamic network(动态网络)dynamic system(动态系统)earth mover's distance(EMD 距离) education 教育efficient algorithm(有效算法)electric commerce 电子商务electronic health records(电子健康档案) entity disambiguation 实体消歧entity recognition 实体识别entity 
recognition(实体识别)entity resolution 实体解析event detection 事件检测event detection(事件检测)event extraction 事件抽取event identificaton 事件识别exhaustive indexing 完整索引expert system 专家系统expert systems(专家系统)explanation based learning 解释学习factor graph(因子图)feature extraction 特征提取feature extraction(特征提取)feature extraction(特征提取)feature selection (特征选择)feature selection 特征选择feature selection(特征选择)feature space 特征空间first order logic 一阶逻辑formal logic 形式逻辑formal meaning prepresentation 形式意义表示formal semantics 形式语义formal specification 形式描述frame based system 框为本的系统frequent itemsets(频繁项目集)frequent pattern(频繁模式)fuzzy clustering (模糊聚类)fuzzy clustering (模糊聚类)fuzzy clustering (模糊聚类)fuzzy data mining(模糊数据挖掘)fuzzy logic 模糊逻辑fuzzy set theory(模糊集合论)fuzzy set(模糊集)fuzzy sets 模糊集合fuzzy systems 模糊系统gaussian processes(高斯过程)gene expression data 基因表达数据gene expression(基因表达)generative model(生成模型)generative model(生成模型)genetic algorithm 遗传算法genome wide association study(全基因组关联分析) graph classification(图分类)graph classification(图分类)graph clustering(图聚类)graph data(图数据)graph data(图形数据)graph database 图数据库graph database(图数据库)graph mining(图挖掘)graph mining(图挖掘)graph partitioning 图划分graph query 图查询graph structure(图结构)graph theory(图论)graph theory(图论)graph theory(图论)graph theroy 图论graph visualization(图形可视化)graphical user interface 图形用户界面graphical user interfaces(图形用户界面)health care 卫生保健health care(卫生保健)heterogeneous data source 异构数据源heterogeneous data(异构数据)heterogeneous database 异构数据库heterogeneous information network(异构信息网络) heterogeneous network(异构网络)heterogenous ontology 异构本体heuristic rule 启发式规则hidden markov model(隐马尔可夫模型)hidden markov model(隐马尔可夫模型)hidden markov models(隐马尔可夫模型) hierarchical clustering (层次聚类) homogeneous network(同构网络)human centered computing 人机交互技术human computer interaction 人机交互human interaction 人机交互human robot interaction 人机交互image classification(图像分类)image clustering (图像聚类)image mining( 图像挖掘)image reconstruction(图像重建)image retrieval (图像检索)image segmentation(图像分割)inconsistent ontology 本体不一致incremental learning(增量学习)inductive learning (归纳学习)inference mechanisms 推理机制inference mechanisms(推理机制)inference rule 推理规则information cascades(信息追随)information diffusion(信息扩散)information extraction 信息提取information filtering(信息过滤)information filtering(信息过滤)information integration(信息集成)information network analysis(信息网络分析) information network mining(信息网络挖掘) information network(信息网络)information processing 信息处理information processing 信息处理information resource management (信息资源管理) information retrieval models(信息检索模型) information retrieval 信息检索information retrieval(信息检索)information retrieval(信息检索)information science 情报科学information sources 信息源information system( 信息系统)information system(信息系统)information technology(信息技术)information visualization(信息可视化)instance matching 实例匹配intelligent assistant 智能辅助intelligent systems 智能系统interaction network(交互网络)interactive visualization(交互式可视化)kernel function(核函数)kernel operator (核算子)keyword search(关键字检索)knowledege reuse 知识再利用knowledgeknowledgeknowledge acquisitionknowledge base 知识库knowledge based system 知识系统knowledge building 知识建构knowledge capture 知识获取knowledge construction 知识建构knowledge discovery(知识发现)knowledge extraction 知识提取knowledge fusion 知识融合knowledge integrationknowledge management systems 知识管理系统knowledge management 知识管理knowledge management(知识管理)knowledge model 知识模型knowledge reasoningknowledge representationknowledge representation(知识表达) knowledge sharing 知识共享knowledge storageknowledge technology 知识技术knowledge verification 知识验证language model(语言模型)language modeling approach(语言模型方法) large graph(大图)large 
graph(大图)learning(无监督学习)life science 生命科学linear programming(线性规划)link analysis (链接分析)link prediction(链接预测)link prediction(链接预测)link prediction(链接预测)linked data(关联数据)location based service(基于位置的服务) loclation based services(基于位置的服务) logic programming 逻辑编程logical implication 逻辑蕴涵logistic regression(logistic 回归)machine learning 机器学习machine translation(机器翻译)management system(管理系统)management( 知识管理)manifold learning(流形学习)markov chains 马尔可夫链markov processes(马尔可夫过程)matching function 匹配函数matrix decomposition(矩阵分解)matrix decomposition(矩阵分解)maximum likelihood estimation(最大似然估计)medical research(医学研究)mixture of gaussians(混合高斯模型)mobile computing(移动计算)multi agnet systems 多智能体系统multiagent systems 多智能体系统multimedia 多媒体natural language processing 自然语言处理natural language processing(自然语言处理) nearest neighbor (近邻)network analysis( 网络分析)network analysis(网络分析)network analysis(网络分析)network formation(组网)network structure(网络结构)network theory(网络理论)network topology(网络拓扑)network visualization(网络可视化)neural network(神经网络)neural networks (神经网络)neural networks(神经网络)nonlinear dynamics(非线性动力学)nonmonotonic reasoning 非单调推理nonnegative matrix factorization (非负矩阵分解) nonnegative matrix factorization(非负矩阵分解) object detection(目标检测)object oriented 面向对象object recognition(目标识别)object recognition(目标识别)online community(网络社区)online social network(在线社交网络)online social networks(在线社交网络)ontology alignment 本体映射ontology development 本体开发ontology engineering 本体工程ontology evolution 本体演化ontology extraction 本体抽取ontology interoperablity 互用性本体ontology language 本体语言ontology mapping 本体映射ontology matching 本体匹配ontology versioning 本体版本ontology 本体论open government data 政府公开数据opinion analysis(舆情分析)opinion mining(意见挖掘)opinion mining(意见挖掘)outlier detection(孤立点检测)parallel processing(并行处理)patient care(病人医疗护理)pattern classification(模式分类)pattern matching(模式匹配)pattern mining(模式挖掘)pattern recognition 模式识别pattern recognition(模式识别)pattern recognition(模式识别)personal data(个人数据)prediction algorithms(预测算法)predictive model 预测模型predictive models(预测模型)privacy preservation(隐私保护)probabilistic logic(概率逻辑)probabilistic logic(概率逻辑)probabilistic model(概率模型)probabilistic model(概率模型)probability distribution(概率分布)probability distribution(概率分布)project management(项目管理)pruning technique(修剪技术)quality management 质量管理query expansion(查询扩展)query language 查询语言query language(查询语言)query processing(查询处理)query rewrite 查询重写question answering system 问答系统random forest(随机森林)random graph(随机图)random processes(随机过程)random walk(随机游走)range query(范围查询)RDF database 资源描述框架数据库RDF query 资源描述框架查询RDF repository 资源描述框架存储库RDF storge 资源描述框架存储real time(实时)recommender system(推荐系统)recommender system(推荐系统)recommender systems 推荐系统recommender systems(推荐系统)record linkage 记录链接recurrent neural network(递归神经网络) regression(回归)reinforcement learning 强化学习reinforcement learning(强化学习)relation extraction 关系抽取relational database 关系数据库relational learning 关系学习relevance feedback (相关反馈)resource description framework 资源描述框架restricted boltzmann machines(受限玻尔兹曼机) retrieval models(检索模型)rough set theroy 粗糙集理论rough set 粗糙集rule based system 基于规则系统rule based 基于规则rule induction (规则归纳)rule learning (规则学习)rule learning 规则学习schema mapping 模式映射schema matching 模式匹配scientific domain 科学域search problems(搜索问题)semantic (web) technology 语义技术semantic analysis 语义分析semantic annotation 语义标注semantic computing 语义计算semantic integration 语义集成semantic interpretation 语义解释semantic model 语义模型semantic network 语义网络semantic relatedness 语义相关性semantic relation learning 语义关系学习semantic search 语义检索semantic similarity 语义相似度semantic similarity(语义相似度)semantic web rule language 
语义网规则语言semantic web 语义网semantic web(语义网)semantic workflow 语义工作流semi supervised learning(半监督学习)sensor data(传感器数据)sensor networks(传感器网络)sentiment analysis(情感分析)sentiment analysis(情感分析)sequential pattern(序列模式)service oriented architecture 面向服务的体系结构shortest path(最短路径)similar kernel function(相似核函数)similarity measure(相似性度量)similarity relationship (相似关系)similarity search(相似搜索)similarity(相似性)situation aware 情境感知social behavior(社交行为)social influence(社会影响)social interaction(社交互动)social interaction(社交互动)social learning(社会学习)social life networks(社交生活网络)social machine 社交机器social media(社交媒体)social media(社交媒体)social media(社交媒体)social network analysis 社会网络分析social network analysis(社交网络分析)social network(社交网络)social network(社交网络)social science(社会科学)social tagging system(社交标签系统)social tagging(社交标签)social web(社交网页)sparse coding(稀疏编码)sparse matrices(稀疏矩阵)sparse representation(稀疏表示)spatial database(空间数据库)spatial reasoning 空间推理statistical analysis(统计分析)statistical model 统计模型string matching(串匹配)structural risk minimization (结构风险最小化) structured data 结构化数据subgraph matching 子图匹配subspace clustering(子空间聚类)supervised learning( 有support vector machine 支持向量机support vector machines(支持向量机)system dynamics(系统动力学)tag recommendation(标签推荐)taxonmy induction 感应规范temporal logic 时态逻辑temporal reasoning 时序推理text analysis(文本分析)text anaylsis 文本分析text classification (文本分类)text data(文本数据)text mining technique(文本挖掘技术)text mining 文本挖掘text mining(文本挖掘)text summarization(文本摘要)thesaurus alignment 同义对齐time frequency analysis(时频分析)time series analysis( 时time series data(时间序列数据)time series data(时间序列数据)time series(时间序列)topic model(主题模型)topic modeling(主题模型)transfer learning 迁移学习triple store 三元组存储uncertainty reasoning 不精确推理undirected graph(无向图)unified modeling language 统一建模语言unsupervisedupper bound(上界)user behavior(用户行为)user generated content(用户生成内容)utility mining(效用挖掘)visual analytics(可视化分析)visual content(视觉内容)visual representation(视觉表征)visualisation(可视化)visualization technique(可视化技术) visualization tool(可视化工具)web 2.0(网络2.0)web forum(web 论坛)web mining(网络挖掘)web of data 数据网web ontology lanuage 网络本体语言web pages(web 页面)web resource 网络资源web science 万维科学web search (网络检索)web usage mining(web 使用挖掘)wireless networks 无线网络world knowledge 世界知识world wide web 万维网world wide web(万维网)xml database 可扩展标志语言数据库附录 2 Data Mining 知识图谱(共包含二级节点15 个,三级节点93 个)间序列分析)监督学习)领域 二级分类 三级分类。
Detailed explanation of the feature_extractor parameters in COLMAP
Feature extraction is an important task in computer vision: the process of extracting representative feature points or feature descriptors from an image.
In COLMAP, the feature extractor consists of two parts, a detector and a descriptor generator.
The detector locates feature points in the image, and the descriptor generator computes a feature descriptor for each detected point.
The main parameters of the feature extractor are specified in a configuration file, in the following format:

    feature_extractor {
        image_path = <image folder path>
        database_path = <database path>
        image_list_path = <image list path>
        single_camera = false
        single_camera_id = -1
        single_camera_thread_id = 0
        single_camera_min_num_points = 50
        log_file = <feature extraction log file path>
        feature_extractor = <feature extraction algorithm>
        adaptive_scale_levels = <whether to use adaptive scales>
        adaptive_scale_params = <adaptive scale parameters>
        ...
    }

Each parameter and its role are introduced below:
1. image_path: the folder containing the images; the feature extractor looks in this folder for images to process.
2. database_path: the path of the COLMAP database that the extracted features are written to.
3. image_list_path: the path of a text file with one image path per line, so that feature extraction can be restricted to selected images.
4. single_camera: whether to use only a single camera for feature extraction; the default is false. When set to true, only a single camera is used for feature extraction.
5. single_camera_id: the ID of the single camera to use; the default is -1.
The Four-Step Motion Estimation Algorithm
The "four-step" motion estimation algorithm usually refers to a method used in computer vision to estimate the motion of objects.
The method consists of four basic steps.
Note that concrete implementations vary; the following is an outline:
1. Feature Extraction:
Extract feature points or feature descriptors from consecutive image frames; these features uniquely identify key points in the scene.
Common features include corners and edges.
2. Feature Matching:
Match the features extracted in the first frame with those in subsequent frames to establish their correspondences across frames.
Various matching algorithms can be used, such as nearest-neighbor matching or optical flow.
3. Motion Model Estimation:
From the feature matches, estimate the motion of the object or camera with a motion model.
The motion model may be a rigid transform, an affine transform, and so on, depending on the complexity of the scene.
4. Motion Parameter Optimization:
Adjust the parameters of the motion model with an optimization algorithm (for example, least squares) to minimize the error of the feature points between adjacent frames.
This step improves the accuracy of the motion estimate. A code sketch of all four steps follows below.
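A minimal Python sketch of the four steps, assuming OpenCV; the ORB detector, the RANSAC homography, and the file names are illustrative stand-ins for whatever detector and motion model a given system uses:

    import cv2
    import numpy as np

    def estimate_motion(frame1, frame2):
        # Step 1: feature extraction (ORB keypoints + descriptors)
        orb = cv2.ORB_create(nfeatures=2000)
        kp1, des1 = orb.detectAndCompute(frame1, None)
        kp2, des2 = orb.detectAndCompute(frame2, None)

        # Step 2: feature matching (Hamming-distance nearest neighbors)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # Steps 3 and 4: fit a motion model (here a homography) and refine it;
        # RANSAC plus the internal least-squares fit minimizes the error of the
        # matched points between the two frames
        H, inliers = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
        return H, inliers

    frame1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file names
    frame2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)
    H, _ = estimate_motion(frame1, frame2)
    print(H)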
This four-step motion estimation approach is used in many computer vision applications, including object tracking, optical flow estimation, and SLAM (Simultaneous Localization and Mapping).
In practice, factors such as image noise, occlusion, and illumination changes also have to be considered, so the robustness of the algorithm is an important concern.
Note that this is only one common motion estimation method; there are many other, more sophisticated algorithms and techniques, and the choice depends on the application scenario and requirements.
Feature Selection and Feature Extraction
Feature selection and feature extraction are dimensionality-reduction methods commonly used in machine learning.
In the data preprocessing stage, selecting or extracting features that are relevant to the target variable and representative of the data can effectively improve a model's performance and generalization ability.
Feature selection means choosing from the original feature set a subset of the most relevant features, discarding irrelevant or redundant features to reduce computational cost and model complexity.
It can be divided into three types of methods: filter methods, wrapper methods, and embedded methods; minimal sketches of all three follow below.
Filter methods use statistical or information-theoretic measures to score how strongly each feature relates to the target variable and then select features by score.
Common filter methods include mutual information, variance selection, and correlation-coefficient selection.
Wrapper methods train a model on candidate feature subsets, judge a subset by the change in model performance, and keep the best-performing subset.
Representative wrapper algorithms include recursive feature elimination (RFE) and genetic algorithms.
Embedded methods integrate feature selection into model training itself, selecting features through a regularization term or a specific optimization objective.
Common embedded methods include L1 regularization and the feature importances of decision trees.
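A minimal scikit-learn sketch of the three families, assuming a generic classification dataset (the dataset and the parameter values are illustrative):

    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SelectKBest, mutual_info_classif, RFE
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_breast_cancer(return_X_y=True)

    # Filter: score each feature against y with mutual information, keep the top 10
    X_filter = SelectKBest(mutual_info_classif, k=10).fit_transform(X, y)

    # Wrapper: recursive feature elimination around a logistic regression
    rfe = RFE(LogisticRegression(max_iter=5000), n_features_to_select=10).fit(X, y)
    X_wrapper = rfe.transform(X)

    # Embedded: L1 regularization zeroes out weak features during training ...
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
    # ... or use tree-based feature importances
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    print(X_filter.shape, X_wrapper.shape,
          (lasso.coef_ != 0).sum(), forest.feature_importances_[:5])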
Feature extraction, by contrast, constructs new features by transforming the original ones; common methods include principal component analysis, linear discriminant analysis, and factor analysis, sketched below.
Principal component analysis (PCA) is an unsupervised method that linearly projects the original features onto a set of orthogonal principal components such that the projected features have maximal variance.
PCA reduces the dimensionality of the features while retaining the main information of the original features.
Linear discriminant analysis (LDA) is a supervised method that finds a projection under which samples from different classes are easier to separate; it effectively captures between-class differences and within-class similarity.
Factor analysis is a probabilistic model that extracts latent shared features by accounting for the correlations among variables.
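A minimal scikit-learn sketch of all three extraction methods (the iris dataset and the choice of two components are illustrative):

    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA, FactorAnalysis
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    X, y = load_iris(return_X_y=True)

    X_pca = PCA(n_components=2).fit_transform(X)                             # unsupervised, max variance
    X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)   # supervised, class separation
    X_fa = FactorAnalysis(n_components=2).fit_transform(X)                   # latent shared factors
    print(X_pca.shape, X_lda.shape, X_fa.shape)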
MATLAB Feature Extraction
English answer:
Feature extraction is a crucial step in data analysis and pattern recognition tasks, including machine learning and computer vision. It involves transforming raw data into a set of representative features that capture the essential characteristics of the data. These features can then be used to train models or make predictions.
There are various techniques available for feature extraction in Matlab. One common approach is to use statistical measures such as mean, standard deviation, and skewness to describe the distribution of data. For example, if we have a dataset of images, we can extract features such as the average pixel intensity, the variance of pixel values, and the histogram of pixel intensities.
Another popular technique is to use transform-based methods, such as the Fourier Transform or Wavelet Transform, to extract frequency- or time-domain features. For instance, in speech recognition, we can extract features such as Mel-Frequency Cepstral Coefficients (MFCCs) using the Fourier Transform.
In addition to these traditional techniques, Matlab also provides advanced feature extraction methods based on deep learning. Deep learning models, such as Convolutional Neural Networks (CNNs), can automatically learn features from raw data without the need for manual feature engineering. For example, in image classification tasks, a CNN can learn to extract features such as edges, textures, and shapes from images.
Detailed explanation of the feature_extractor parameters in COLMAP
COLMAP is an open-source computer-vision toolkit for reconstructing dense, high-quality 3D models from large image or video datasets.
In COLMAP, feature_extractor is an important component; it is used to extract image features.
This article explains in detail what the feature_extractor does and what its individual sub-parameters mean, answering the related questions step by step.
1. What is feature_extractor?
feature_extractor is a module in COLMAP that extracts feature points and feature descriptors from the input image data.
Feature points are points in an image with distinctive properties, commonly used to represent salient image features such as corners and edges.
A feature descriptor is a vector describing the pixel information around a feature point, commonly used to judge the similarity of two feature points.
2. What are the important sub-parameters of feature_extractor?
feature_extractor includes many specific sub-parameters; several important ones are:
- ImageReader: specifies the image reader used to read the format of the input images. Available options include OpenCV, PNG, JPEG, etc.
- ImageListFile: specifies a list file containing image file paths.
- ImageDirectory: specifies a directory containing the image files.
- SiftExtraction: specifies whether to use the SIFT algorithm for feature extraction.
- RootSift: specifies whether to apply square-root (RootSIFT) normalization to the SIFT features.
3. What does the ImageReader sub-parameter do?
The ImageReader sub-parameter specifies the image reader, which determines the format COLMAP uses when reading the input images.
An appropriate image reader can be chosen as needed so that the image data are parsed correctly.
Common options include OpenCV, PNG, JPEG, etc.
4. What do the ImageListFile and ImageDirectory sub-parameters do?
The ImageListFile sub-parameter specifies a list file containing image file paths.
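As a point of reference, COLMAP's actual command-line interface drives feature extraction through the feature_extractor command. A minimal Python sketch invoking it might look as follows; the paths are placeholders, and the option names are taken from COLMAP's CLI rather than from the parameter list discussed above:

    import subprocess

    # Hypothetical project paths; COLMAP must be installed and on PATH.
    subprocess.run([
        "colmap", "feature_extractor",
        "--database_path", "project/database.db",
        "--image_path", "project/images",
        "--image_list_path", "project/image_list.txt",  # optional subset of images
        "--ImageReader.single_camera", "1",              # share one camera model
        "--SiftExtraction.use_gpu", "1",                 # GPU SIFT if available
    ], check=True)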
Efficient Feature Extraction for 2D/3D Objects in Mesh Representation
EFFICIENT FEATURE EXTRACTION FOR 2D/3D OBJECTS IN MESH REPRESENTATION
Cha Zhang and Tsuhan Chen
Dept. of Electrical and Computer Engineering, Carnegie Mellon University
5000 Forbes Avenue, Pittsburgh, PA 15213, USA
{czhang,tsuhan}@

ABSTRACT
Meshes are dominantly used to represent 3D models as they fit well with graphics rendering hardware. Features such as volume, moments, and Fourier transform coefficients need to be calculated from the mesh representation efficiently. In this paper, we propose an algorithm to calculate these features without transforming the mesh into other representations such as the volumetric representation. To calculate a feature for a mesh, we show that we can first compute it for each elementary shape such as a triangle or a tetrahedron, and then add up all the values for the mesh. The algorithm is simple and efficient, with many potential applications.

1. INTRODUCTION
3D scene/object browsing is becoming more and more popular as it engages people with a much richer experience than 2D images. The Virtual Reality Modeling Language (VRML) [1], which uses mesh models to represent the 3D content, is rapidly becoming the standard file format for the delivery of 3D contents across the Internet. Traditionally, in order to fit graphics rendering hardware well, a VRML file models the surface of a virtual object or environment with a collection of 3D geometrical entities, such as vertices and polygons.
In many applications, there is a high demand to calculate some important features for a mesh model, e.g., the volume of the model, the moments of the model, or even the Fourier transform coefficients of the model. One example application is the search and retrieval of 3D models in a database [2][3][9]. Another example is shape analysis and object recognition [4]. Intuitively, we may calculate these features by first transforming the 3D mesh model into its volumetric representation and then finding these features in the voxel space. However, transforming a 3D mesh model into its volumetric representation is a time-consuming task, in addition to a large storage requirement [5][6][7]. (Work supported in part by NSF Career Award 9984858.)
In this paper, we propose to calculate these features from the mesh representation directly. We calculate a feature for a model by first finding it for the elementary shapes, such as triangles or tetrahedrons, and then adding them up. The computational complexity is proportional to the number of elementary shapes, which is typically much smaller than the number of voxels in the equivalent volumetric representation. Both 2D and 3D meshes are considered in this paper. The result is general and has many potential applications.
The paper is organized as follows. In Section 2 we discuss the calculation of the area/volume of a mesh. Section 3 extends this idea and presents the method to compute moments and Fourier transform for a mesh. Some applications are provided in Section 4. Conclusions and discussions are given in Section 5.

2. AREA/VOLUME CALCULATION
The computation of the volume of a 3D model is not trivial work. One can convert the model into a discrete 3D binary image. The grid points in the discrete space are called voxels. Each voxel is labeled with '1' or '0' to indicate whether this point is inside or outside the object. The number of voxels inside the object, or equivalently the summation of all the voxel values in the discrete space, can be an approximation for the volume of the model. However, the transformation from a 3D mesh model into a binary image is very time-consuming. Moreover, in order to improve the accuracy, the resolution of the 3D binary image needs to be very high, which can further increase the computation load.

2.1. 2D Mesh Area
We explain our approach starting from the computation of areas for 2D meshes. A 2D mesh is simply a 2D shape with polygonal contours. As shown in Figure 1, suppose we have a 2D mesh with bold lines representing its edges. Although we can discretize the 2D space into a binary image and calculate the area of the mesh by counting the pixels inside the polygon, doing so is very computationally intensive.
[Figure 1: The calculation of a 2D polygon area, showing "positive" and "negative" elementary triangle areas.]
To start with our algorithm, let us make the assumption that the polygon is closed. If it is not, a contour closing process can be performed first [9]. Since we know all the vertices and edges of the polygon, we can calculate the normal for each edge easily. For example, edge AB in Figure 1 has the normal:

    N_AB = ((y2 - y1)*x_hat - (x2 - x1)*y_hat) / sqrt((x2 - x1)^2 + (y2 - y1)^2)    (1)

where (x1, y1) and (x2, y2) are the coordinates of vertices A and B, respectively, and x_hat and y_hat are the unit vectors of the axes. We define the normal here as a normalized vector which is perpendicular to the corresponding edge and points outwards from the mesh. In the computer graphics literature, there are different ways to check whether a point is inside or outside a polygon [8], thus it is easy to find the correct direction of the normal. Later we will show that even if we only know that all the normals are pointing to the same side of the mesh (either inside or outside, as long as they are consistent), we are still able to find the correct area of the mesh.
After getting the normals, we construct a set of triangles by connecting all the polygon vertices with the origin. Each edge and the origin form an elementary triangle, which is the smallest unit for computation. We define the signed area for each elementary triangle as follows: the magnitude of this value is the area of the triangle, while the sign of the value is determined by checking the position of the origin with respect to the edge and the direction of the normal. Take the triangle OAB in Figure 1 as an example. The area of OAB is:

    S_OAB = (1/2)(x1*y2 - x2*y1)    (2)

The sign of S_OAB is the same as the sign of the inner product OA . N_AB, which is positive in this case. The total area of the polygon can be computed by summing up all the signed areas. That is,

    S_total = sum_i S_i    (3)

where i goes through all the edges or elementary triangles. Following the above steps, the result of equation (3) is guaranteed to be positive, no matter whether the origin is inside or outside the mesh. Note here that we do not make any assumption that the polygon is convex.
In a real implementation, we do not need to check the signs of the areas each time. Let:

    S'_i = (1/2)(x_i1*y_i2 - x_i2*y_i1),    S'_total = sum_i S'_i    (4)

where i stands for the index of all the edges or elementary triangles, and (x_i1, y_i1), (x_i2, y_i2) are the coordinates of the starting point and the end point of edge i. When we loop through all the edges, we need to keep going forward so that the inside part of the mesh is always kept at the left hand side or the right hand side. According to the final sign of the result S'_total, we may know whether we are looping along the right direction (the right direction should give the positive result), and the final result can simply be achieved by taking the magnitude of S'_total.

2.2. 3D Case
We can extend the above algorithm to the 3D case. In a VRML file, the mesh is represented by a set of vertices and polygons. Before we calculate the volume, we do some preprocessing on the model and make sure that all the polygons are triangles. Such preprocessing, called triangulation, is commonly used in mesh coding, mesh signal processing, and mesh editing. The direction of the normal for a triangle can be determined by the order of the vertices and the right-hand rule, as shown in Figure 2. The consistency condition is very easy to satisfy: for two neighboring triangles, if the common edge has different directions, then the normals of the two triangles are consistent. For example, in Figure 2, AB is the common edge of triangles ACB and ABD. In triangle ACB, the direction is from B to A, and in triangle ABD, the direction is from A to B, thus N_ACB and N_ABD are consistent.
[Figure 2: Normals and order of vertices.]
In the 3D case, the elementary calculation unit is a tetrahedron. For each triangle, we connect each of its vertices with the origin and form a tetrahedron, as shown in Figure 3. As in the 2D case, we define the signed volume for each elementary tetrahedron: the magnitude of its value is the volume of the tetrahedron, and the sign of the value is determined by checking if the origin is on the same side as the normal with respect to the triangle.
[Figure 3: The calculation of 3D volume.]
In Figure 3, triangle ACB has a normal N_ACB. The volume of tetrahedron OACB is:

    V_OACB = (1/6)(x1*y2*z3 + x2*y3*z1 + x3*y1*z2 - x1*y3*z2 - x2*y1*z3 - x3*y2*z1)    (5)

As the origin O is on the opposite side of N_ACB, the sign of this tetrahedron is positive. The sign can also be calculated by the inner product OA . N_ACB.
In a real implementation, again we only need to compute:

    V'_i = (1/6)(x_i1*y_i2*z_i3 + x_i2*y_i3*z_i1 + x_i3*y_i1*z_i2
                 - x_i1*y_i3*z_i2 - x_i2*y_i1*z_i3 - x_i3*y_i2*z_i1),    V'_total = sum_i V'_i    (6)

where i stands for the index of triangles or elementary tetrahedrons, and (x_i1, y_i1, z_i1), (x_i2, y_i2, z_i2), (x_i3, y_i3, z_i3) are the coordinates of the vertices of triangle i, ordered so that the normal of triangle i is consistent with the others. The volume of a 3D mesh model is always positive; the final result can be achieved by taking the absolute value of V'_total. In order to compute other 3D model features such as moments or Fourier transform coefficients, we reverse the sequence of vertices for each triangle if V'_total turns out to be negative.

3. MOMENTS AND FOURIER TRANSFORM
The above algorithm can be generalized to calculate other features for 2D and 3D mesh models. Actually, whenever the feature to be calculated can be written as a signed sum of features of the elementary shape (triangle in the 2D case and tetrahedron in the 3D case), and the feature of the elementary shape can be derived in an explicit form, the proposed algorithm applies. Although this seems to be a strong constraint, many of the commonly used features fall into this category. For example, all the features that have the form of an integration over the space inside the object can be calculated with this algorithm. This includes moments, Fourier transform, wavelet transform, and many others.
In classical mechanics and statistical theory, the concept of moments is used extensively. In this paper, the moments of a 3D mesh model are defined as:

    M_pqr = integral of x^p * y^q * z^r * rho(x, y, z) dx dy dz    (7)

where rho(x, y, z) is an indicator function:

    rho(x, y, z) = 1 if (x, y, z) is inside the mesh, 0 otherwise    (8)

and p, q, r are the orders of the moment. Central moments can be obtained easily from the result of equation (7). Since the integration can be rewritten as the sum of integrations over each elementary shape:

    M_pqr = sum_i s_i * integral of x^p * y^q * z^r * rho_i(x, y, z) dx dy dz    (9)

where rho_i(x, y, z) is the indicator function for elementary shape i, and s_i is the sign of the signed volume for shape i. We can use the same process as that in Section 2 to calculate a number of low-order moments for triangles and tetrahedrons that are extensively used. A few examples for the moments of a tetrahedron are given in the Appendix. More examples can be found in [9].
The Fourier transform is a very powerful tool in many signal processing applications. The Fourier transform of a 2D or 3D mesh model is defined by the Fourier transform of its indicator function:

    Theta(u, v, w) = integral of exp(-i(xu + yv + zw)) * rho(x, y, z) dx dy dz    (10)

Since the Fourier transform is also an integration over the space inside the object, it can also be calculated by decomposing the integration into integrations over each elementary shape. The explicit form of the Fourier transform of a tetrahedron is given in the Appendix.
As the moments and Fourier transform coefficients of an elementary shape are explicit, the above computation is very efficient. The computational complexity is O(N), where N is the number of edges or triangles in the mesh. Note that in the volumetric approach, where a 2D or 3D binary image is obtained first before getting any of the features, the computational complexity is O(M), where M is the number of grid points inside the model, not considering the cost of transforming the data representation. It is obvious that M is typically much larger than N, especially when a relatively accurate result is required and the resolution of the binary image has to be large. The storage space required by our algorithm is also much smaller.
Previous work by Lien and Kajiya [10] provides a similar method for calculating the moments of tetrahedrons. Our work gives more explicit forms of the moments and extends their work to calculating the Fourier transform.

4. APPLICATIONS
A good application of our algorithm is to find the principal axes of a 3D mesh model. This is useful when we want to compare two 3D models that are not well aligned. In a 3D model retrieval system [2][9], this is required because some of the features may not be invariant to arbitrary rotations. We construct a 3x3 matrix from the second-order moments of the 3D model:

    S = [ M200  M110  M101
          M110  M020  M011
          M101  M011  M002 ]    (11)

The principal axes are obtained by computing the eigenvectors of matrix S, which is also known as principal component analysis (PCA). The eigenvector corresponding to the largest eigenvalue is made the first principal axis. The next eigenvector, corresponding to the second-largest eigenvalue, is the second principal axis, and so on. In order to make the final result unique, we further make sure that the 3rd-order moments, M300 and M030, are positive after the transform. Figure 4 shows the results of this algorithm.
[Figure 4: 3D models before and after PCA (shown before and after rotation).]
The Fourier transform of a 3D mesh model can be used in many applications. For example, the coefficients can be directly used as features in a retrieval system [9]. Other applications are shape analysis, object recognition, and model matching. Note that in our algorithm, the resulting Fourier transform is in continuous form. There is no discretization alias since we can evaluate a Fourier transform coefficient from the continuous form directly.

5. CONCLUSIONS AND DISCUSSIONS
In this paper, we propose an algorithm for computing features for a 2D or 3D mesh model. Explicit methods to compute the volume, moments and Fourier transform from a mesh representation directly are given. The algorithm is very efficient, and has many potential applications.
The proposed algorithm still has some room for improvement. For example, it is still difficult to get the explicit form of a high-order moment for triangles and tetrahedrons. Also, the Fourier transform may lose its computational efficiency if many coefficients are required simultaneously. More research is in progress to speed this up.

REFERENCES
[1] R. Carey, G. Bell, and C. Marrin, "The Virtual Reality Modeling Language", Apr. 1997, ISO/IEC DIS 14772-1.
[2] Eric Paquet and Marc Rioux, "A Content-based Search Engine for VRML Database", Computer Vision and Pattern Recognition, 1998. Proceedings. 1998 IEEE Computer Society Conference on, pp. 541-546, 1998.
[3] Sylvie Jeannin, Leszek Cieplinski, Jens Rainer Ohm, Munchurl Kim, MPEG-7 Visual part of eXperimentation Model Version 7.0, ISO/IEC JTC1/SC29/WG11/N3521, Beijing, July 2000.
[4] Anthony P. Reeves, R. J. Prokop, Susan E. Andrews and Frank P. Kuhl, "Three-Dimensional Shape Analysis Using Moments and Fourier Descriptors", IEEE Trans. Pattern Analysis and Machine Intelligence, pp. 937-943, Vol. 10, No. 6, Nov. 1988.
[5] Homer H. Chen, Thomas S. Huang, "A Survey of Construction and Manipulation of Octrees", Computer Vision, Graphics, and Image Processing, pp. 409-431, Vol. 43, 1988.
[6] Shi-Nine Yang and Tsong-Wuu Lin, "A New Linear Octree Construction by Filling Algorithms", Computers and Communications, 1991. Conference Proceedings. Tenth Annual International Phoenix Conference on, pp. 740-746, 1991.
[7] Yoshifumi Kitamura and Fumio Kishino, "A Parallel Algorithm for Octree Generation from Polyhedral Shape Representation", Pattern Recognition, 1996. Proceedings of the 13th International Conference on, pp. 303-309, Vol. 3, 1996.
[8] James D. Foley, Andries van Dam, Steven K. Feiner, and John F. Hughes, Computer Graphics: Principles and Practice, Second Edition, Addison-Wesley Publishing Company, Inc., 1996.
[9] /projects/3DModelRetrieval/
[10] Sheue-ling Lien and James T. Kajiya, "A Symbolic Method for Calculating the Integral Properties of Arbitrary Nonconvex Polyhedra", IEEE Computer Graphics and Applications, pp. 35-41, Oct. 1984.

APPENDIX
For a tetrahedron with vertices at the origin and (x1, y1, z1), (x2, y2, z2), (x3, y3, z3):

    M000 = (1/6)(x1*y2*z3 + x2*y3*z1 + x3*y1*z2 - x1*y3*z2 - x2*y1*z3 - x3*y2*z1)
    M100 = (1/4)(x1 + x2 + x3) * M000
    M200 = (1/10)(x1^2 + x2^2 + x3^2 + x1*x2 + x1*x3 + x2*x3) * M000
    M300 = (1/20)(x1^3 + x2^3 + x3^3 + x1^2*(x2 + x3) + x2^2*(x1 + x3) + x3^2*(x1 + x2) + x1*x2*x3) * M000

With p_k = u*x_k + v*y_k + w*z_k for k = 1, 2, 3, the Fourier transform of the tetrahedron is:

    Theta(u, v, w) = 6*M000 * [ i/(p1*p2*p3)
        - i*exp(-i*p1) / (p1*(p1 - p2)*(p1 - p3))
        - i*exp(-i*p2) / (p2*(p2 - p1)*(p2 - p3))
        - i*exp(-i*p3) / (p3*(p3 - p1)*(p3 - p2)) ]
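The signed-tetrahedron trick of Section 2.2 translates directly into code. Below is a minimal numpy sketch that computes the volume and centroid (first-order moments) of a triangle mesh by summing per-tetrahedron contributions; the unit-cube mesh at the end is an illustrative test case, not from the paper:

    import numpy as np

    def mesh_volume_and_centroid(vertices, triangles):
        """vertices: (N,3) array; triangles: (M,3) vertex indices, consistently oriented."""
        p1 = vertices[triangles[:, 0]]
        p2 = vertices[triangles[:, 1]]
        p3 = vertices[triangles[:, 2]]
        # Signed volume of each tetrahedron (origin, p1, p2, p3), as in eq. (5)/(6)
        v = np.einsum("ij,ij->i", p1, np.cross(p2, p3)) / 6.0
        volume = v.sum()
        # First-order moments of a tetrahedron with one vertex at the origin are
        # (1/4)(sum of vertex coordinates) times its signed volume (see Appendix)
        centroid = ((p1 + p2 + p3) / 4.0 * v[:, None]).sum(axis=0) / volume
        return abs(volume), centroid

    # Unit cube with 12 consistently oriented triangles (outward normals)
    V = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],
                  [0,0,1],[1,0,1],[1,1,1],[0,1,1]], float)
    T = np.array([[0,2,1],[0,3,2],[4,5,6],[4,6,7],[0,1,5],[0,5,4],
                  [1,2,6],[1,6,5],[2,3,7],[2,7,6],[3,0,4],[3,4,7]])
    vol, c = mesh_volume_and_centroid(V, T)
    print(vol, c)  # expect 1.0 and (0.5, 0.5, 0.5)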
How feature_extraction works
Feature extraction refers to the process of extracting representative features from raw data.
A feature is a quantified representation of some attribute, characteristic, or pattern of the data, and it serves as the input to subsequent machine learning or data analysis tasks.
The principle of feature extraction can be divided into the following steps (a pipeline sketch follows the list):
1. Data preprocessing: first preprocess the raw data, including cleaning, denoising, and normalization, to ensure the data are accurate and consistent.
2. Feature selection: before feature extraction, select among the features of the raw data to reduce the influence of redundant and noisy features, keeping only the features that are meaningful for the target task.
3. Feature transformation: convert the raw data into a feature representation that better expresses the characteristics of the data. Commonly used methods include principal component analysis (PCA), linear discriminant analysis (LDA), and the discrete cosine transform (DCT).
4. Feature extraction: extract representative features from the transformed feature representation. Common methods include statistical feature extraction and frequency-domain and time-domain feature extraction.
5. Dimensionality reduction: the extraction process may produce a large number of features. To reduce the feature dimensionality and improve computational efficiency and model training, apply dimensionality reduction; common methods include PCA, LDA, and feature selection.
Through these steps, the raw data are converted into a set of representative features for use in subsequent machine learning or data analysis tasks.
The quality and choice of features strongly affect the final model's performance, so the characteristics of the data and the requirements of the task must be considered carefully throughout the feature extraction process.
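A minimal scikit-learn sketch of such a pipeline; the concrete steps chosen here (standardization, variance-based selection, PCA) are one illustrative combination, not the only possible one:

    from sklearn.datasets import load_wine
    from sklearn.decomposition import PCA
    from sklearn.feature_selection import VarianceThreshold
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X, _ = load_wine(return_X_y=True)
    pipe = Pipeline([
        ("preprocess", StandardScaler()),     # step 1: normalization
        ("select", VarianceThreshold(0.0)),   # step 2: drop constant features
        ("transform", PCA(n_components=5)),   # steps 3-5: transform and reduce
    ])
    features = pipe.fit_transform(X)
    print(features.shape)  # (178, 5)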
Feature Extraction and Fine-Tuning
Title: A deeper look at feature extraction and fine-tuning in deep learning
I. Introduction
In deep learning, feature extraction and fine-tuning are two commonly used techniques; they play an important role in image recognition, natural language processing, speech recognition, and other tasks.
This article explains both concepts in detail: what they are, how they work, and the practical steps to apply them.
II. Feature Extraction
1. Definition. Feature extraction means using a pretrained deep learning model to extract high-level abstract features from the input data (such as images or text).
These features usually have strong expressive power and generalization ability and can effectively describe the key information of the data.
2. How it works. In a deep learning model, every layer of the neural network transforms the input data and extracts corresponding features.
The lower layers mainly extract basic, local features (such as the colors and textures of an image), while the higher layers extract more complex, more semantic features (such as shapes and object categories).
We can therefore take the high-level output of a pretrained model as the feature representation of the input data.
3. Practical steps (a PyTorch sketch follows the list):
(1) Choose a pretrained deep learning model, such as ResNet, VGG, or BERT.
(2) Load the pretrained weights and set the model's parameters to non-trainable, so that the pretrained parameters are not modified later.
(3) Feed the data into the pretrained model and take the high-level output.
(4) Use the high-level output as the feature representation of the input data for the subsequent machine learning or deep learning task.
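A minimal PyTorch sketch of steps (1)-(4), assuming torchvision and using ResNet-18 with its classification head removed; the random batch stands in for real data:

    import torch
    import torch.nn as nn
    from torchvision import models

    # (1)+(2): load a pretrained ResNet-18 and freeze all parameters
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Identity()          # drop the classifier, keep the 512-d features
    for p in backbone.parameters():
        p.requires_grad = False
    backbone.eval()

    # (3)+(4): run data through the frozen model to obtain feature vectors
    images = torch.randn(8, 3, 224, 224)  # stand-in for a real batch
    with torch.no_grad():
        features = backbone(images)
    print(features.shape)  # torch.Size([8, 512])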
III. Fine-Tuning
1. Definition. Fine-tuning means retraining some or all of the parameters of a pretrained model for a specific task, to optimize the model's performance on that task.
2. How it works. A pretrained model has already learned rich, general features from large-scale datasets, but it may not fit a specific task perfectly.
Fine-tuning adjusts some or all of the pretrained model's parameters so that it adapts better to the characteristics of the specific task, thereby improving performance.
3. Practical steps (a fine-tuning sketch follows):
(1) Choose a pretrained deep learning model and load its weights.
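Continuing the previous sketch, fine-tuning might look as follows; the two-class head, the learning rate, and the choice to retrain only the final layer are illustrative assumptions:

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, 2)   # new task-specific head (trainable)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on a stand-in batch
    images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(loss.item())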
What is the difference between dimensionality reduction (feature extraction) and feature selection?
Feature extraction and feature selection both belong to dimensionality reduction.
To understand the difference between the two, one must first see that dimensionality reduction encompasses feature selection, and then understand the differences among the various algorithms and methods within that framework.
Feature extraction differs from feature selection in that feature extraction creates and distills new features on top of the original ones, whereas feature selection only filters the original features.
Feature extraction has many methods, including PCA, LDA, LSA, and so on; the related algorithms are even more numerous: pLSA, LDA, ICA, FA, UV decomposition, LFM, SVD, and more.
These share one common algorithm: the famous SVD.
SVD is in essence a mathematical method, not a machine learning algorithm as such, but it has very wide application in the machine learning field.
The goal of PCA is maximal variance in the new low-dimensional space; that is, the projection of the original data onto the principal components should have maximal variance.
This is the variance interpretation, and it corresponds exactly to the principal components with the largest eigenvalues.
Some say that PCA is essentially de-centered SVD, which shows the intrinsic connection between PCA and SVD.
PCA is obtained by first subtracting from every sample of the original data X the mean over all samples, and then normalizing each dimension by its standard deviation.
If each row of the original matrix X corresponds to a sample and each column to a feature, the de-centering step means first averaging over all rows to obtain a vector and subtracting this vector from every row; then, compute the standard deviation of each column and divide each column by it.
The result is the de-centered (standardized) matrix; a numpy sketch follows.
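A minimal numpy sketch of exactly this standardization followed by SVD-based PCA; the random matrix is an illustrative stand-in for real data:

    import numpy as np

    X = np.random.rand(100, 5)                 # rows = samples, columns = features

    # Center: subtract the per-column mean; standardize: divide by per-column std
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)

    # PCA via SVD of the centered matrix: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained_variance = S**2 / (len(X) - 1)
    scores = Xc @ Vt[:2].T                     # project onto the top 2 components
    print(explained_variance, scores.shape)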
While organizing these notes, I came to the following view: what is learning, and what is its essence? To me it is itself a process of feature extraction. When you learn a new subject, there is a knowledge point here and a knowledge point there, your head is a jumble and you have no idea what it all means; those knowledge points sit in your mind in complete disorder. Is this not exactly what data features look like in a high-dimensional space? The most essential data are completely drowned in far too much noise, and what we must do is distill: find the essential truth within a heap of seemingly structureless perturbations.
Selection and Extraction of Image Features
Let P(j, i) be the value of the i-th color component of the j-th pixel of the image. The first-order moment is

    mu_i = (1/N) * sum_{j=1..N} P(j, i)

which represents the mean color of the region under test.
Second-order moment (Variance):

    sigma_i = ( (1/N) * sum_{j=1..N} (P(j, i) - mu_i)^2 )^(1/2)

which represents the color variance, i.e. the non-uniformity, of the region under test.
Third-order moment (Skewness):

    s_i = ( (1/N) * sum_{j=1..N} (P(j, i) - mu_i)^3 )^(1/3)
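The three color moments translate directly into code. A minimal numpy sketch (the random pixels stand in for a real image region; the sign handling on the cube root is needed because the third moment can be negative):

    import numpy as np

    def color_moments(region):
        """region: (N, C) array of N pixels with C color components."""
        mu = region.mean(axis=0)                              # first moment
        sigma = (((region - mu) ** 2).mean(axis=0)) ** 0.5    # second moment
        third = ((region - mu) ** 3).mean(axis=0)
        s = np.sign(third) * np.abs(third) ** (1 / 3)         # third moment
        return mu, sigma, s

    pixels = np.random.rand(1000, 3)   # stand-in for an RGB region
    print(color_moments(pixels))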
• Let f(i, j) be the pixel value at position (i, j). The edge strength at (i, j) is usually expressed by difference values or a function of them. Simple difference operators are: x-direction difference: △x f(i, j) = f(i, j) - f(i, j-1); y-direction difference: △y f(i, j) = f(i, j) - f(i-1, j); edge strength = |△x f(i, j)| + |△y f(i, j)|, or = △x²f(i, j) + △y²f(i, j).
Image features
Common target features are divided into gray-level (color), texture, and geometric-shape features. Gray level and texture are internal features, while geometric shape is an external feature.
Texture features
Geometric features: judging convexity and concavity
• The selected features should have the following properties:
• distinguishability
• reliability
• good mutual independence
• small in number
• as insensitive as possible to transformations such as scaling and rotation
Point feature extraction
• Point features mainly refer to salient points in an image, such as the corner points of buildings or circular dots. The operators used to extract point features are called interest operators (also favorable operators).
Edge feature extraction for binary images
• Extracting edge features from a binary image is, in effect, the process of finding the positions where pixel gray values change sharply, setting the pixel value to "1" at those positions and to "0" everywhere else, thereby obtaining the boundary of the target. Edge extraction for binary images is implemented with mathematical operators such as Sobel, Prewitt, Kirsch, and the Laplacian. Each of these operators multiplies a 3x3 template with a 3x3 region of the image, and the result is taken as the edge strength at the center of that region. After the edge strength of every pixel has been computed, the points whose edge strength exceeds a certain value are extracted and assigned the pixel value "1", and the rest are assigned "0". A sketch follows below.
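A minimal numpy sketch of this 3x3 template scheme, using the Sobel masks; the threshold value and the synthetic step-edge image are illustrative choices:

    import numpy as np

    SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    SOBEL_Y = SOBEL_X.T

    def sobel_edges(img, threshold=1.0):
        h, w = img.shape
        strength = np.zeros((h, w))
        # Multiply the 3x3 template with every 3x3 region of the image
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                patch = img[i-1:i+2, j-1:j+2]
                gx = (patch * SOBEL_X).sum()
                gy = (patch * SOBEL_Y).sum()
                strength[i, j] = abs(gx) + abs(gy)
        # Keep only strong edges: value 1 on edges, 0 elsewhere
        return (strength > threshold).astype(np.uint8)

    img = np.zeros((16, 16)); img[:, 8:] = 1.0   # a vertical step edge
    print(sobel_edges(img).sum())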
FX User Manual
1.3 Thresholding. The method of merging segments by threshold is called thresholding; it is useful for extracting point-like features (for example, aircraft). Thresholding is optional; it operates on the first band of the Region Means image and merges adjacent segments. For objects with high contrast against the background (for example, white ships on dark water), thresholding extracts the object very well. Choose one of the following options:
No Thresholding (default): skip this step and continue with the next operation;
Thresholding (advanced): if this option is selected, a histogram of the Region Means image pops up; continue as follows:
(1) Click Preview to open the preview window;
(2) Click and drag the white dashed lines on the histogram to set the minimum and maximum threshold values; the preview window segments the image into white and black purely by DN value, white for foreground and black for background; leaving the dashed lines unchanged is equivalent to No Thresholding.
Note: the transparency can be adjusted to preview the segmentation result, as shown in Figure 2-4.
1.4 Computing Attributes
In this step, ENVI Zoom computes spatial, spectral, and texture attributes for every target object, as shown in Figure 2-5.
FX Feature Extraction Module User Manual
1. Introduction to the ENVI FX feature extraction module
The ENVI Feature Extraction module extracts information from high-resolution panchromatic or multispectral data based on the spatial and spectral characteristics of the image. The module can extract all kinds of characteristic ground objects, such as vehicles, buildings, roads, rivers, bridges, lakes, and fields. Conveniently, the module can preview the segmentation result, and it classifies the image on an object basis (traditional image classification is pixel-based, i.e. it classifies the image using the spectral information of each pixel). The technique works very well on hyperspectral data and is equally applicable to panchromatic data. For high-resolution panchromatic data, this object-based extraction method is better at extracting ground objects of various characteristic types. A target object is a region of interest defined in terms of size, spectrum, and texture (brightness, color, etc.), and ENVI FX can define multiple such regions of interest at the same time. ENVI Feature Extraction segments the image into different regions to produce target regions; the workflow is helpful and intuitive, and it also allows you to customize your own applications. The overall workflow is shown in the figure below:
Text Feature Extraction: Bag-of-Words, TF-IDF, and N-gram Models
Suppose we have a piece of text: "I have a cat, his name is Huzihu. Huzihu is really cute and friendly. We are good friends." How do we extract features from this text? A simple method is the bag-of-words model.
Put a chosen set of the text's words into the bag, count how many times each word in the bag occurs in the text (ignoring grammar and word order), and represent the counts as a vector.
Word counting can be done with scikit-learn's CountVectorizer:

    text1 = "I have a cat, his name is Huzihu. Huzihu is really cute and friendly. We are good friends."
    from sklearn.feature_extraction.text import CountVectorizer
    CV = CountVectorizer()
    words = CV.fit_transform([text1])  # note: the text string must be wrapped in a list
    print(words)

CountVectorizer first maps the text into a dictionary whose keys are the words of the text and whose values are the word indices; it then fits this vocabulary, converts the text into a word-count matrix, and the output is:

    (0, 3) 1
    (0, 4) 1
    (0, 0) 1
    (0, 11) 1
    (0, 2) 1
    (0, 10) 1
    (0, 7) 2
    (0, 8) 2
    (0, 9) 1
    (0, 6) 1
    (0, 1) 1
    (0, 5) 1

Here (0, 7) 2 means that word number 7, "Huzihu", occurred twice.
We usually extract text features for document classification, so we need to know how similar the documents are to one another.
This can be compared by computing the Euclidean distance between the documents' feature vectors, as the sketch below shows.
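A minimal sketch of that comparison, vectorizing two documents together and computing their Euclidean distance; the second sentence is an illustrative stand-in:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.metrics.pairwise import euclidean_distances

    docs = [
        "I have a cat, his name is Huzihu.",
        "I have a dog, his name is Wangcai.",  # hypothetical second document
    ]
    X = CountVectorizer().fit_transform(docs)
    print(euclidean_distances(X[0], X[1]))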
Agilent Feature Extraction Migration Utility Manual
Feature Extraction 11.5 or 12.0 to 12.1 Migration Utility
For Research Use Only. Not for use in diagnostic procedures.
Contents: Installing the Migration Utility; Running the Migration Utility; To export data from a previous version of Feature Extraction (11.5 or 12.0); To import data to Feature Extraction 12.1; Troubleshooting; To display the log file; Error messages.
The Feature Extraction Migration Utility provides a way to move grid templates, protocols, and metric sets from Feature Extraction 11.5 or 12.0 to Feature Extraction 12.1. This guide contains information on how to use the Migration Utility for this purpose.

Installing the Migration Utility
You can use the Migration Utility to migrate data for the following scenarios:
• Feature Extraction 11.5 or 12.0 is installed on a different computer than Feature Extraction 12.1. In this scenario, you need to install the Migration Utility software on both computers.
• Feature Extraction 11.5 or 12.0 is installed on the same computer as Feature Extraction 12.1. In this scenario, only one copy of the Migration Utility software is required. However, because the previous version of Feature Extraction cannot simultaneously co-exist on the same computer as Feature Extraction 12.1, you must first export data from the previous version of Feature Extraction, then uninstall Feature Extraction. You can then install Feature Extraction 12.1 and import the data using the Migration Utility software.
1 Download the MigrationUtility.zip file from /genomics/MigrationUtility.
2 Unzip the MigrationUtility.zip file to a location where you want to install the Migration Utility software. A Migration_Utility folder is created that contains the Migration Utility.
Note: It is not necessary to close Feature Extraction while you run the Migration Utility. However, make sure that you are not extracting data. Close the Migration Utility before extracting data in Feature Extraction.
Note: If the previous version of Feature Extraction is installed on a different computer than Feature Extraction 12.1, then you need to install the Migration Utility software on both computers.

Running the Migration Utility
Complete migration of data from a previous version of Feature Extraction (11.5 or 12.0) to Feature Extraction 12.1 is performed by the program in two steps. First, the grid templates, protocols, and metric sets are exported from the previous version of Feature Extraction. Then, the exported data are imported into Feature Extraction 12.1.
• If the previous version of Feature Extraction is installed on a different computer than Feature Extraction 12.1, first run the Migration Utility on the computer with the previous version of Feature Extraction to export the data. Then, run the Migration Utility on the computer with Feature Extraction 12.1 to import the data.
• If the previous version of Feature Extraction is currently installed on the same computer where you want to install Feature Extraction 12.1, first run the Migration Utility in the previous version to export the data. Then, uninstall the previous version of Feature Extraction and install Feature Extraction 12.1. Finally, run the Migration Utility in Feature Extraction 12.1 to import the data.
Note: Do not run extractions or perform export, import, or delete operations within Feature Extraction while the Migration Utility is exporting or importing data.
Note: If you are migrating data from one computer to another, make sure that the computer with Feature Extraction 12.1 has access to the folder where you are exporting the data. If not, after you export the data, you must copy the Migration Utility Output folder from the computer with the previous version of Feature Extraction to the Migration Utility installation folder on the computer with Feature Extraction 12.1.

To export data from a previous version of Feature Extraction (11.5 or 12.0)
1 On the computer where the previous version of Feature Extraction is installed, browse to the folder where you installed the Migration Utility software, and start the MigrationUtility.exe program. The MigrationUtility dialog box opens. When the program starts, it creates Output and Logs folders in the Migration Utility installation folder. The program recognizes that Feature Extraction 11.5 or 12.0 software is installed, and by default selects to export grid templates, protocols, and metric sets.
[Figure 1: Migration Utility dialog box]
2 (Optional) Clear the boxes for data you do not want to migrate (Grid Templates, Protocols, Metric Sets).
3 By default, the program exports the data to the <Migration Utility installation folder>\Migration_Utility\Output folder. To change the export location:
a Next to the Data Location field, click Browse.
b In the Browse for Folder dialog box, browse to the folder where you want to export the data and click the folder to select it. OR, to create a folder, browse to a location, click Make New Folder, then type the name of the folder where you want to export the data.
c Click OK.
Note: Make sure you have write permissions in the Output folder location.
4 Click Start. The program exports the selected data from the previous version of Feature Extraction. During data export, a progress bar appears in the dialog box.
5 When data export is complete, a message box opens. Click OK. Exported data is saved in the designated Output folder, in separate folders: GridTemplates, MetricSets, and Protocols.
6 If Feature Extraction 12.1 is to be installed on the same computer as the previous version of Feature Extraction, uninstall the previous version and install version 12.1 before continuing.

To import data to Feature Extraction 12.1
1 On the computer where Feature Extraction 12.1 is installed, browse to the folder where you installed the Migration Utility software, and double-click the MigrationUtility.exe program. The MigrationUtility dialog box opens. The program recognizes that Feature Extraction 12.1 software is installed, and by default selects to import grid templates, protocols, and metric sets.
Note: Make sure you have write permissions in the Output folder before you start the Import operation.
2 (Optional) Clear the boxes for data you do not want to migrate.
3 By default, the program imports the data from the <Migration Utility Installation Folder>\Migration_Utility\Release\Output folder. To change the import location:
a Next to the Data Location field, click Browse.
b In the Browse for Folder dialog box, browse to and select the folder that contains the Output subfolder to which data was exported.
[Figure 2: Browse For Folder dialog box - select the folder containing the Output sub-folder]
Note: If not all files are imported, check the log file for further information. See "To display the log file".
4 Click Start. The program imports the selected data from the Import Location folder. During data import, the progress is displayed in the MigrationUtility dialog box.
5 When import is complete, a message appears. Click OK.
Note: If you need to stop the data migration process for any reason, click Abort. In the message box that opens, click Yes to confirm. The data migration does not abort until you click Yes. If data migration is stopped during data export, the export operation stops and the current output file is not saved in the Output folder; all files exported before the abort operation remain in the Output folder, and no further files are exported. If data migration is stopped during data import, the current file import is completed, but no additional files are imported; data imported before the abort operation are available in Feature Extraction 12.1.

Troubleshooting
During data migration, the status of each data migration operation is saved in a log file. The log file is useful in troubleshooting.
To display the log file
1 Browse to the Logs folder in the Migration Utility export/import location. By default, this is <Migration Utility installation folder>\Migration_Utility\Logs. If you changed the default location, browse to that location.
2 Using a text editor, open the Logs.txt file. The Migration Utility log file is displayed.
[Figure 3: Migration Utility log file]

Error messages
When a problem occurs during data migration, a message is displayed that provides information about what happened. The following table describes possible messages and explanations.
Table 1: Error messages
• Problem: Feature Extraction (v 11.5 or 12.0) is not installed on the system. Message: "The Migration Utility requires Feature Extraction Software (11.5 or 12.0) to be installed on the system. Please check if one of these versions is installed on the system. Check log file for details." Resolution: Check that Feature Extraction 11.5 or 12.0 is installed on the computer.
• Problem: Incorrect connection parameters provided. Message: "Error in connecting to FE database. Please check log file at <Migration_utility_executable_path>\Logs for details." Resolution: Verify that the database service is running on the computer. Also, check that the database is properly installed and accessible through Feature Extraction.
• Problem: Incorrect export folder path. Message: "Error in creating Grid Template/Protocol/Metric Set XML. Please check log file at <Migration_utility_executable_path>\Logs for details." Resolution: Check that the export folder is located at the provided path, and check that the user has access rights to write the file in that path. Check the log file for more details.
• Problem: Incorrect import folder path. Message: "Error in importing Grid Template/Protocol/Metric Set XML. Please check log file at <Migration_utility_executable_path>\Logs for details." Resolution: Check that the import folder is located at the provided path and the user has read access rights. Also check that the GridTemplates, Protocols, and MetricSets folders are present at this location. Check the log file for more information.
• Problem: Not enough space. Message: "Not enough disk space to generate Grid Template XML." Resolution: Check the disk space available on the destination folder drive and create some free disk space.
• Problem: One instance of the utility is already running on the system and the user tries to launch another instance. Message: "Migration Utility is already running on this computer. Only one instance of Migration Utility is allowed per computer." Resolution: Only one instance of the utility can run at a time. Do not start the Migration Utility while it is already running.
• Problem: Not enough physical memory left to migrate large data files. Message: "Not enough storage is available to process this command." Resolution: Close all unnecessary programs and try again. Make sure the computer meets the memory requirements for Feature Extraction.
• Problem: Tried to import a design that was previously imported. Message: "Either grid template <Migration Utility Installation_folder>\Migration_Utility\Output\GridTemplates\<GridFileName>.xml already exists or unable to write to Temp folder while importing." Resolution: Grid template already exists in the database.

Agilent Technologies, Inc. 2018. Revision A0, January 2018. G4460-90059.
In this book: This guide contains information on how to use the Feature Extraction Migration Utility to transfer grid templates, protocols, and metric sets from Feature Extraction version 11.5 or 12.0 to Feature Extraction version 12.1.
Feature Extractor
I. Introduction to feature extractors
A feature extractor is an algorithm used mainly to extract meaningful features from raw data.
The features may be numbers, vectors, or points in a high-dimensional space.
Feature extractors are widely applied in many fields, such as image processing, speech recognition, and natural language processing.
II. What feature extractors do
The main roles of a feature extractor are:
1. Reducing data dimensionality: extracting useful features removes redundancy and noise and lowers the dimensionality of the data, making it easier to process.
2. Data preprocessing: preparing the data for subsequent analysis and modeling, e.g., by normalization and standardization.
3. Improving model performance: the extracted features better reflect the essence of the data and help improve a model's prediction accuracy.
III. Kinds of feature extractors
1. Traditional feature extractors, such as the Hough transform and the Fourier transform, used mainly in image processing.
2. Deep-learning feature extractors, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), used mainly in natural image processing, speech recognition, and similar fields.
3. Machine-learning-based feature extractors, such as principal component analysis (PCA) and linear discriminant analysis (LDA), used mainly for dimensionality reduction and classification of high-dimensional data.
IV. How to choose a suitable feature extractor in practice
1. Analyze the problem domain: understand the characteristics of the domain and choose a feature extractor relevant to the problem.
2. Evaluate the extraction results: assess the performance of different feature extractors with cross-validation, visualization, and similar methods.
3. Consider computational resources and time: choose a suitable algorithm according to the hardware conditions and the requirements.
V. Applications of feature extractors in machine learning
1. Image recognition: e.g., face recognition and license-plate recognition.
2. Speech recognition: e.g., speech signal processing and voice assistants.
3. Natural language processing: e.g., text classification and sentiment analysis.
4. Recommender systems: extracting user and item features to improve recommendation accuracy.
VI. Future trends and challenges
1. Deep-learning feature extractors: as deep learning advances, more efficient and more interpretable deep feature extractors will keep emerging.
2. Cross-domain feature extraction: fusing knowledge from different domains to achieve feature extraction across domains.
3. Unsupervised feature extraction: reducing manual intervention and automatically learning the intrinsic features of the data.
4. Real-time feature extraction: adapting to dynamically changing data and extracting features in real time.
In short, feature extractors are of great significance in every field.
English Terms for Feature Extraction
Feature extraction: 特征提取
1) Feature Extraction 特征提取
1. Feature Extraction From Carbon Fiber Composites Ultrasonic Signals Based On Wavelet Packet Transform 基于小波包变换的复合材料超声波检测信号特征提取
2. New Feature Extraction Method For Laser-Induced Fluorescence Spectra 一种激光诱导荧光光谱特征提取新方法
3. Statistics Analysis And Feature Extraction Of EEG For Imaging Left-Right Hands Movement 基于想象左右手运动脑电特征提取及其统计特性分析
Example phrases/sentences:
1. Feature Extraction Algorithm Based On Boosting Bootstrap FLD Projections 结合提升自举FLD投影的特征提取算法
2. Fingerprint matching includes feature extraction, feature coding, and minutiae matching. 指纹特征匹配包括指纹特征提取、特征编码以及特征匹配等。
3. The Recognition Of The Digits Graphics Based On The Drawing Of Traces And Features 基于轨迹提取和特征提取的数字图形识别
4. The Application Of Feature Extraction And Selection In Handwritten Digit Recognition 特征提取和特征选择在手写数字识别中的应用
5. 3D Footprint Shape Characteristic Pick-Up And Biological Characteristic Analysis 立体足迹形态特征提取与生物特征分析
6. Document Feature And Feature Extraction In Web Mining Web文本挖掘中的特征表示与特征提取技术
7. Optimization Calculation Feature Scale For Mutual Information Measure Feature Extraction 一种可最优化计算特征规模的互信息特征提取
8. Experimental Study Of Feature Extraction Based On Independent Component Analysis And Evaluation Of Feature Independence 基于ICA的特征提取实验研究及特征独立性评价
9. Research On Web Page Feature Extraction Method Based On Semantic Orientation 基于特征倾向性的网页特征提取方法研究
10. Ear Feature Extraction Combining The Shape Feature Of Outer Ear With The Structure Feature Of Inner Ear 外耳形状特征和内耳结构特征结合的人耳特征提取
Related phrases/sentences:
2) Feature Selection 特征提取
1. TCM Syndrome Differentiation Based On Bioinformatics Feature Selection 基于生物信息特征提取的中医辨证
2. Research On Feature Selection In Pattern Matching 模式识别中的特征提取研究
3. Improved Feature Selection Algorithm Based On Variance In Text Categorization 文本分类中基于方差的改进特征提取算法
3) Feature Extracting 特征提取
1. Robust Image Feature Extracting And Matching Algorithm For Mobile Robots Vision 移动机器人视觉图像特征提取与匹配算法
2. In this paper, a method of feature selecting and feature extracting based on wavelet was proposed. 基于小波变换提出了一种特征提取及特征选择的方法。
Extracting FeatureCollections in GEE
I. What is a FeatureCollection in GEE?
In Google Earth Engine (GEE), a FeatureCollection is a data structure for storing and manipulating vector geospatial data.
A FeatureCollection is a collection of Features, each of which contains a geometry object and attribute information.
The geometry can be a point, a line, a polygon, etc., and the attributes are descriptive data about each geometry.
FeatureCollections can be used to handle all kinds of geospatial data, such as satellite imagery and vector geographic data.
GEE provides rich functions and methods for extracting, analyzing, and visualizing the data in a FeatureCollection, making geospatial data processing and analysis more efficient and convenient.
II. Creating and importing FeatureCollections
In GEE, FeatureCollections can be created and imported in several ways.
1. Creating a FeatureCollection. A FeatureCollection can be created by defining geometry objects and attribute information directly in code. For example, the following code creates a FeatureCollection containing two points:

    var features = ee.FeatureCollection([
      ee.Feature(ee.Geometry.Point([-122.0865, 37.4218]), {name: 'San Francisco'}),
      ee.Feature(ee.Geometry.Point([-73.9857, 40.7484]), {name: 'New York City'})
    ]);

2. Importing a FeatureCollection. Besides creating one directly, a FeatureCollection can also be created by importing a vector file.
GEE supports importing common vector file formats such as Shapefile and GeoJSON:

    var features = ee.FeatureCollection('path/to/shapefile');

III. Operating on and analyzing FeatureCollections
FeatureCollection provides a rich set of operations and analysis methods; the data in it can be extracted, filtered, computed on, and more, as the sketch below shows.
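For example, filtering by attribute looks like the following sketch, written here with the Earth Engine Python API, which mirrors the JavaScript API used above; it assumes the earthengine-api package and prior authentication, and the property values are illustrative:

    import ee

    ee.Initialize()  # assumes credentials from a prior ee.Authenticate()

    fc = ee.FeatureCollection([
        ee.Feature(ee.Geometry.Point([-122.0865, 37.4218]), {"name": "San Francisco"}),
        ee.Feature(ee.Geometry.Point([-73.9857, 40.7484]), {"name": "New York City"}),
    ])

    # Filter by attribute, then read results back to the client
    sf = fc.filter(ee.Filter.eq("name", "San Francisco"))
    print(sf.size().getInfo())               # 1
    print(sf.first().get("name").getInfo())  # San Francisco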
RECORDING AND VISUALIZATION OF THE CENOTAPH OF GERMAN EMPEROR MAXIMILIAN I

K. Hanke a *, W. Boehler b

a University of Innsbruck, Institute of Geodesy, Technikerstrasse 13, A 6020 Innsbruck, Austria, klaus.hanke@uibk.ac.at
b i3mainz, Institute for Spatial Information and Surveying Technology, Holzstrasse 36, D 55116 Mainz, Germany, boehler@geoinform.fh-mainz.de

* Corresponding author.

Commission V, Working Group V/4

KEY WORDS: Close Range Photogrammetry, Laser Scanning, Data Fusion, Heritage Documentation, Visualization, Feature Extraction

ABSTRACT:

The Hofkirche in Innsbruck, Austria, with its tomb of German Emperor Maximilian I, is one of the most famous and outstanding historical monuments in Central Europe. For centuries the cenotaph (i.e. empty tomb) was separated from the visitors by a black iron lattice, and the finely carved marble plates were covered by glass. Because of a basic restoration of the tomb, lattice and glass plates were removed for the first time since its construction in the 16th century. For a short period in May 2002 all sides were accessible, after the temporary housing of the restoration technicians had been removed from one side and not yet moved to the other side for the second restoration period. This window was used for a complete metric documentation of the object.

Both close-range photogrammetry and 3D scanning techniques were used. Photogrammetric images consisted of stereo pairs and separate color images. 3D scanning was accomplished with a MENSI S25 laser scanner for the overall structures and a GOM ATOS II structured light scanner at high resolution for the relief plates.

Different methods can be used to visualize the meshed surface model. Line plots from the photogrammetric stereo models do not really give an adequate representation of this object. 3D visualization using the scanning results can achieve a much better impression of the complicated geometry after data modeling, error correction, and filling in remaining 'holes'. In order to model the complex geometry, it is necessary to use huge amounts of data: the model in its highest resolution consists of more than 1.000.000.000 triangles. Because of restrictions in the hardware and software presently available, the high-resolution model has to be processed and stored in more than 140 separate virtual models. The project proves the enormous potential of these new technologies, but shows as well that more progress is needed in hardware and software development to accomplish such demanding tasks.

An attempt was made to convert the meshed 3D models to 2D vector drawings, similar to those resulting from traditional plotting using photogrammetric stereo models. Solutions could be found to select all object outlines and all edge areas where curvature shows high absolute values. For other "lines" (e.g. the details of a human face), where a stereo plotter operator will draw lines on his own intuition, no obvious mathematical definitions could be found.

Fig. 1: Overview model of the cenotaph derived from scanned data

1. INTRODUCTION

1.1 Background

Between 1420 and 1665, Innsbruck was the residence of one of Europe's best-known imperial families, the Habsburgs. The Hofkirche at Innsbruck with the tomb of Emperor Maximilian I is probably the most important art-historical monument possessed by the country of Tyrol. It was built between 1555 and 1565 under Emperor Ferdinand I, who was the grandson of Maximilian and the brother of the famous German Emperor Karl V. The cenotaph (i.e.
technical term for an empty tomb), with the statue of the kneeling Emperor, stands in the center of the church's nave. The tomb was created by artists from various countries who cooperated in its production. It is a unique testimony of European court art, influenced by the personality of the Emperor and his successors as clients. The sarcophagus is surrounded by 28 more-than-life-sized bronze figures embodying ancestors and relatives of Maximilian, the so-called "Schwarze Mander" (i.e. black men).

The cenotaph itself (fig. 1) has an extent of 6.4 m x 4.5 m x 3.3 m and consists of a frame of black marble in which the 24 reliefs of white marble (each approx. 80 cm x 45 cm) are embedded in two horizontal rows. These reliefs show scenes from the life of Emperor Maximilian I. They have a level of detail in the range of 0.1 mm and had to be documented with the highest precision available. On the cover of the tomb, the kneeling figure of the Emperor is central, surrounded by representations of the four cardinal virtues arranged at the four corners. All of these figures are made of dark bronze.

1.2 Restoration

On the occasion of the preservation and restoration of the tomb, a complete art-historical and geometrical documentation was initiated for the first time since the completion around the year 1568. In order to allow continuous access for tourists, only one half at a time was affected by the restoration measures and covered in a boarding; the other part remained accessible to the public. The cenotaph had been separated from the visitors for centuries by a wrought-iron lattice, and the white reliefs were additionally hidden by glass plates. In May 2002 the right half was completely restored, and it became necessary to dismantle the temporary housing set up by the restoration technicians and transfer it to the other side. Thus, for ten days, for the first time since its establishment, the cenotaph was accessible from all sides, free of both lattice and window panes. This time slot was used for the complete documentation and the measurement work described here.

2. DATA ACQUISITION

2.1 General remarks

The setting of tasks was not clearly defined, as is often the case in comparable projects, and had to be developed in cooperation with the responsible authorities. It stood firm that the rare chance of accessibility from all sides should be used for documentation by all means. Neither detailed plans nor art-historical documentation of this tomb was available at this time. Because of the preciousness of the object, and the uniqueness of the opportunity for data collection, a combination of geodetic measuring methods was suggested and carried out in May 2002.

On the one hand, classical close-range photogrammetry was used for the complete measurement of the cenotaph; on the other hand, due to the complex 3D details of the reliefs, the documentation was carried out with 3D scanning devices. The appropriate scanners were chosen from a list of available instruments (i3mainz, updated 2004). The geometric survey of the object by the scanners would later also be combinable with the radiometric information from the photos when both methods were used in one operation. The measurements were accomplished by three independent teams.
In order to avoid interference during the short time available, all measurements had to be coordinated exactly and scheduled accurately in advance. Since the surveying methods for the geometric documentation of the cenotaph have been described in earlier publications (Marbs 2002, Hanke 2003), only a brief outline is given in the following sections.

2.2 Geodetic survey and photogrammetric densification

A general requirement for all surveys was a common coordinate reference. A precise network of eight observation points around the cenotaph was established, and vertical and horizontal angles were observed to the reference targets on the object for the scans and for the photogrammetric images (spheres and self-adhesive flat targets). An accuracy of better than 0.5 mm (standard deviation of spatial location) could be achieved. Additional targets, necessary for the detail scans of the reliefs, were stuck onto transparent adhesive tape fixed in front of the reliefs without touching them. The coordinates of these targets were derived by photo triangulation using GOM's largely automatic TRITOP system.

2.3 3D scanning

A complete scan of the cenotaph was achieved with a MENSI S25 triangulation-type laser scanner. A point density of about 2 mm was chosen. This resulted in 20 observation locations from which a total of about 10 million points were recorded in about 60 hours of scanning time. As long as a scanning range of 5 m is not exceeded, the MENSI S25 will achieve a point accuracy (standard deviation of spatial location) of better than 1 mm if correctly calibrated.

Because the marble reliefs show very fine details, it was necessary to use a high-precision scanner for their documentation. A GOM ATOS II scanner was chosen. This scanner projects fringe patterns onto the object and uses two cameras to analyze the resulting images. Since high resolution was important, the version with a 400 mm base and 35 mm camera lenses was selected. In this configuration, the scanner yields about 1.3 million points in a field of view of 175 mm x 140 mm. Thus, twelve scans would cover one relief (not counting the numerous additional scans needed to reduce the hidden areas due to occlusions). The raw data for one single relief amounted to about 450-700 MB.

The GOM ATOS II was also used to document the five statues on top of the cenotaph, since their surfaces show very fine detail (fig. 2).

2.4 Photogrammetric imaging

A photogrammetric documentation of the whole object was carried out by a private surveying company experienced in the documentation of cultural heritage. A Zeiss UMK metric camera was used. In addition, stereo images were acquired for each relief on high-resolution b/w film, and orthogonal images were exposed on color film for later rectification and/or texturing.

Fig. 2: Kneeling Maximilian I. Virtual model from scanned data.

3. DATA PROCESSING FOR SCANNED DATA

3.1 Merging and thinning

Data processing, which kept one person busy for nearly one year, was a very delicate task. The procedure, accomplished using Raindrop Geomagic Studio software, has been described earlier (Boehler et al. 2003). Before the objects can be treated, neighboring point clouds have to be merged into one data set. The following point thinning process has to reduce the data for easier handling without degrading object resolution. In smooth surface areas a considerable reduction can be achieved, whereas detailed object parts (typical for the reliefs) should not be thinned out at all.
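The adaptive thinning just described (strong reduction on smooth areas, none on detailed parts) can be sketched roughly as follows. This toy illustration is not the Geomagic Studio procedure; the voxel size, the coplanarity threshold, and all names are assumptions made for the example.

# Adaptive point thinning: collapse near-planar voxel cells to one point, keep detail.
import numpy as np

def adaptive_thin(points, cell=0.02, flat_tol=1e-6):
    keys = np.floor(points / cell).astype(int)           # voxel index of every point
    out = []
    for key in np.unique(keys, axis=0):
        pts = points[(keys == key).all(axis=1)]
        if len(pts) >= 4 and np.linalg.eigvalsh(np.cov(pts.T))[0] < flat_tol:
            out.append(pts.mean(axis=0, keepdims=True))  # flat patch: one representative
        else:
            out.append(pts)                              # detailed patch: keep all points
    return np.vstack(out)

# Demo: a flat plate thins heavily, a rough (detailed) cluster is preserved.
rng = np.random.default_rng(0)
flat = np.c_[rng.uniform(0, 0.1, (500, 2)), np.zeros(500)]
rough = rng.uniform(0, 0.02, (500, 3))
print(adaptive_thin(np.vstack([flat, rough])).shape)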
Even with the most advanced hardware and software, one complete relief plate could not be treated at the same time without loss of detail. Consequently, several partial models had to be created.

3.2 Meshing, checking manifold meshes, cleaning and hole filling

The following data processing steps include a check for manifold meshes, which are unavoidable where hidden parts of the surface cannot be observed from any point, and a cleaning procedure which readjusts neighboring triangles that show large orientation differences. A hole-filling process completes the data processing.

4. RESULTS

Presently, the results of our cenotaph documentation belong to the most detailed virtual models of art worldwide. The model in its highest resolution consists of more than 1.000.000.000 triangles. It should be noted that this number does not express the amount of data collected (which is much larger) but the data necessary to describe the object with the desired resolution. Because of restrictions in the hardware and software presently available, the high-resolution model has to be processed and stored in more than 140 separate virtual models. Various other models with a reduced number of triangles are available for visualization and publication (e.g. figs. 1, 2 and 3 of this publication).

Fig. 3: One of the relief plates. Top: Virtual model from scanned data. Bottom: Result of stereo plotting.

Fig. 4: Detail from the relief plate above (fig. 3), about 10 cm x 10 cm in reality. Left: Photograph. Center: Result of stereo plotting. Right: Virtual model from scanned data.

5. VISUALIZATION

5.1 3D model versus photography

Virtual images from meshed 3D models can give a much better perception of the object than photographic images of the real object (see fig. 4, left and right images). This is especially true when there is hardly any texture to the object, as is the case with sculptures of stone or bronze. Since the virtual model is illuminated by virtual light sources which can be arranged according to the desired effects, the resulting images show details much more clearly. If the lights (or the object itself) are gently moved in an animation, the resulting changes and movements allow a good 3D perception, even though the monitor image itself is still 2D. If the whole object definition relies on texture only, photographic images have to be used: the written text in figure 3 could not be modeled from the scans, for example. In such a case, photographs have to replace or complement 3D scanning.

5.2 3D model versus line drawing

It has often been discussed whether metric line drawings are an adequate means of documentation of sculptural works of art. Nevertheless, this form of documentation is often asked for, and the considerable cost of the line drawing process, usually accomplished using stereo photographs, is accepted. Undoubtedly, this form of documentation can result in remarkable interpretations of the art works (fig. 3 bottom, fig. 4 center). The selection of the lines to be drawn is a subjective process carried out by the operators based on their own ideas: complex surfaces are reduced to a set of lines. Metric quality is present but cannot really be used if a complete reconstruction should become necessary. The virtual 3D model from scanning data, on the other hand, is a complete metric representation of all surfaces involved and has considerable visualization potential at the same time (as can be seen in figs.
1, 2, 3 top and 4 right).

5.3 Outlines from 3D models

Outlines describe a 3D object's furthest extent in a specific 2D projection. In stereo photogrammetry, these silhouette lines are often very difficult to measure. If the images have been taken at an angle to the selected projection and the object has no defined edges (as is often the case with sculptures), it is nearly impossible to find these lines correctly. Meshed 3D models can be used to find the correct outlines. After the rotation into the desired projection is accomplished, a program developed at i3mainz selects all those lines that are common to a pair of triangles where one triangle has a normal pointing towards the observer and the other triangle has a normal pointing away from the observer. After all hidden lines are removed, the resulting line pattern will give a good description of the object if it has a definite 3D structure (as in fig. 5). In the case of relief-type object parts, only few lines have an outline character (as in fig. 6, center).

Fig. 5: Outlines of the cenotaph, automatically derived from the meshed 3D model.

5.4 Complete line drawings from 3D models

Additional lines are needed to complement the outlines if a line representation is required. If a drawing similar to a stereo plot is desired, the rules a stereo plotter operator uses to select and draw lines have to be analyzed. If these rules are known, a mathematical model can be created which selects the proper lines from the 3D surface model accordingly. Trying to find those rules is possible only to a certain extent, since every operator will select different lines intuitively and without a fixed set of rules. In addition to the outlines, all sharp edges are significant for the object and will undoubtedly be selected. Other features (e.g. eyes and other parts of faces) are important for understanding the drawing; an operator will therefore add lines to show such elements although there is not enough geometrical evidence in the 3D model to justify them.

If a procedure is to be developed to select meaningful lines from a meshed 3D model, it has to detect edges (in addition to the outlines). Edges in meshed models can be described as local curvature, which can be quantified by the angles between neighboring triangles. It cannot be expected that the resulting plan will have the quality of a plan produced by an operator with artistic skills, but if this product can be derived automatically, it can serve as a template to be used as a background for drawing a meaningful and expressive 2D line drawing with metric qualities.

The software developed at i3mainz computes the normals of all triangles of a mesh. For every triangle, the spatial angles to the three neighboring triangle normals are identified and a weighted mean value is computed. Positive (convex) and negative (concave) values can occur. Then a histogram of all values is produced. Based on this, the user can define a number of classes for local curvature and select intensity values (green for convex, red for concave) which are used to shade the triangles concerned. This can easily be accomplished if the OBJ format is used for the mesh. Since shaded triangles are used, the result is not a true vector plan. This is not a real problem since, for the reasons mentioned above, the result cannot be used as a final drawing anyhow. Instead, it is used together with the plan of the outlines as a background for the creation of the final vector product.
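The outline rule from section 5.3 (an edge shared by one front-facing and one back-facing triangle) is compact enough to sketch in code. The following is a toy reimplementation of that test, not the i3mainz software; the orthographic +Z view, the consistent counter-clockwise face orientation, and all names are assumptions made for the example.

# Silhouette (outline) edges of a triangle mesh under an orthographic view along +Z.
import numpy as np
from collections import defaultdict

def silhouette_edges(vertices, faces, view_dir=(0.0, 0.0, 1.0)):
    v = vertices[faces]                                  # (n_faces, 3, 3) corner coordinates
    normals = np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0])
    facing = normals @ np.asarray(view_dir) > 0          # front-facing flag per triangle
    edge_faces = defaultdict(list)                       # undirected edge -> adjacent faces
    for fi, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces[tuple(sorted(e))].append(fi)
    # An outline edge separates a triangle facing the observer from one facing away.
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and facing[fs[0]] != facing[fs[1]]]

# Toy mesh: a tetrahedron with outward (counter-clockwise) face orientation.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
faces = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
print(silhouette_edges(verts, faces))                    # outline of the face seen from +Z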
Fig. 6: Left: Part of a virtual relief model. Center: Automatically detected outlines. Right: Automatically detected edges; unfortunately, the red and green colors (for concave and convex edges) cannot be shown in this black-and-white print.

6. ACKNOWLEDGEMENTS

We want to thank Westcam Datentechnik GmbH for the fruitful cooperation during the measurement process. Linsinger Vermessung, a private surveying company, did an excellent job performing the photogrammetric documentation of the monument. Monica Bordas Vicent from i3mainz conscientiously created the 3D models from the point clouds. Mirko Siebold and Stefan Tschöpe, also from i3mainz, developed the software for the line drawings. Our acknowledgement goes, last but not least, to the local authorities of Tyrol (Land Tirol, Landesbaudirektion) for the financial support and the continuous help in solving the administrative problems.

7. REFERENCES

Boehler, W., Bordas Vicent, M., Hanke, K., Marbs, A., 2003: Documentation of German Emperor Maximilian I's Tomb. CIPA Symposium 2003, Antalya, Turkey. The ISPRS International Archives of Photogrammetry and Remote Sensing, Vol. XXXIV-5/C15 (ISSN 1682-1750), and The CIPA International Archives for Documentation of Cultural Heritage, Vol. XIX-2003 (ISSN 0256-1840), pp. 474-479.

Hanke, K., 2003: Dokumentation des Grabmals Kaiser Maximilian I. in der Innsbrucker Hofkirche. In: Chesi, Weinold (eds.), "12. Internationale Geodätische Woche Obergurgl 2003". Wichmann-Verlag. ISBN 3-87907-401-1.

Marbs, A., 2002: Experiences with Laser Scanning at i3mainz. Proc. of the CIPA WG6 Int. Workshop on Scanning for Cultural Heritage Recording, /commission5/workshop/.

i3mainz, 2004: 3D scanning web site: http://scanning.fh-mainz.de