Bayesian map learning in dynamic environments
Remote Sensing Image Change Detection Based on Bayesian Network Classification
陈雪;马建文;戴芹
【Journal】《遥感学报》
【Year (Volume), Issue】2005, Vol. 9, No. 6
【Abstract】In remote sensing imaging, the uncertainty of ground, atmospheric, and other factors and the correlation between spectral bands degrade classification accuracy and thus make change detection unreliable; improving classification accuracy usually requires introducing prior knowledge. The Bayesian network is a data representation and inference model that imposes no strict assumption of normally distributed data and can effectively improve classification accuracy by dynamically adjusting the prior probability density. Taking two Landsat TM images of Tongzhou, Beijing, acquired on 1996-05-29 and 2001-05-19 as an example, this paper introduces a Bayesian-network-based classification algorithm and, on that basis, implements change detection between the two dates. The experimental results show that post-classification comparison based on Bayesian network classification is an effective new method for remote sensing image change detection.
【Pages】6 (pp. 667-672)
【Authors】陈雪; 马建文; 戴芹
【Affiliations】Institute of Remote Sensing Applications, Chinese Academy of Sciences, Beijing 100101; Research Center for Remote Sensing and GIS, School of Geography and Remote Sensing Science, Beijing Normal University, Beijing 100875; Institute of Remote Sensing Applications, Chinese Academy of Sciences, Beijing 100101; Institute of Remote Sensing Applications, Chinese Academy of Sciences, Beijing 100101
【Language】Chinese
【CLC Classification】TP751.1
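The post-classification comparison strategy named in the abstract can be illustrated with a short sketch: each acquisition date is classified independently and the two label maps are compared pixel by pixel. This is a minimal illustration of the comparison step only, not the paper's Bayesian network classifier; the class count and random label maps below are placeholders.

```python
import numpy as np

def post_classification_change(labels_t1: np.ndarray, labels_t2: np.ndarray):
    """Compare two classified label maps (same shape) pixel by pixel.

    Returns a boolean change mask and a (from_class, to_class) transition count matrix.
    """
    assert labels_t1.shape == labels_t2.shape
    change_mask = labels_t1 != labels_t2
    n_classes = int(max(labels_t1.max(), labels_t2.max())) + 1
    transitions = np.zeros((n_classes, n_classes), dtype=np.int64)
    # Accumulate how many pixels moved from class i (date 1) to class j (date 2).
    np.add.at(transitions, (labels_t1.ravel(), labels_t2.ravel()), 1)
    return change_mask, transitions

# Example with two hypothetical 3-class label maps.
rng = np.random.default_rng(0)
t1 = rng.integers(0, 3, size=(100, 100))
t2 = rng.integers(0, 3, size=(100, 100))
mask, trans = post_classification_change(t1, t2)
print("changed pixels:", int(mask.sum()))
print("transition matrix:\n", trans)
```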
Generating Remote Sensing Image Target Samples for Deep Learning
In today's technological era, remote sensing plays a vital role in many fields thanks to its ability to acquire wide-area, high-resolution information about the Earth's surface.
Deep learning, as a powerful tool for data analysis and pattern recognition, has shown great potential for processing remote sensing imagery.
However, the performance of deep learning models depends heavily on large quantities of high-quality training samples.
How to generate remote sensing image target samples suited to deep learning has therefore become a key problem.
Understanding how such samples are generated starts with the characteristics of remote sensing images.
Remote sensing images are typically high-dimensional and multi-band and contain complex ground-object information.
These characteristics make it challenging to extract valuable target samples directly from raw imagery.
One common approach to generating target samples is data augmentation.
In short, a series of transformations is applied to the original images to increase the number and diversity of samples.
For example, images can be rotated, flipped, scaled, or cropped.
Through these transformations, a deep learning model can learn the appearance of targets under different orientations and sizes, which improves its generalization ability.
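As a concrete illustration of the transformations listed above, here is a minimal augmentation sketch in a torchvision style; the library choice, the specific transforms, and their parameter values are illustrative assumptions rather than anything prescribed by the text.

```python
from torchvision import transforms

# Hypothetical augmentation pipeline for remote sensing target chips:
# random rotation, horizontal/vertical flips, and scale jitter via a resized crop.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=90),                      # in-plane rotation
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomResizedCrop(size=224, scale=(0.7, 1.0)),   # scale + crop jitter
    transforms.ToTensor(),
])

# Usage: augmented = augment(pil_image)   # applied on the fly during training
```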
Another approach is to synthesize remote sensing target samples.
This can be done by simulating realistic ground scenes and targets with computer graphics and physical models.
For instance, building samples can be rendered under different illumination conditions and viewing angles from the buildings' geometry and material properties.
The advantage of this approach is that large numbers of customized samples can be generated on demand; the drawback is that synthetic samples may differ from real remote sensing images and must be used with care.
Annotation is also critical when generating target samples.
Accurate annotations give a deep learning model a clear learning objective.
For targets in remote sensing images, annotations may include the target's class, location, and shape.
To improve annotation efficiency and accuracy, semi-automatic or automatic labeling methods can be used.
For example, an image segmentation algorithm can produce an initial segmentation that annotators then correct and refine.
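A minimal sketch of such a pre-annotation step follows: a simple threshold-based segmentation proposes candidate regions whose bounding boxes a human annotator would then correct. The thresholding rule and the size filter are illustrative assumptions; any segmentation algorithm could play this role.

```python
import numpy as np
from scipy import ndimage

def propose_boxes(image: np.ndarray, threshold: float, min_area: int = 20):
    """Propose candidate bounding boxes from a single-band image.

    A crude foreground mask is obtained by thresholding, connected components are
    labeled, and small components are discarded. The boxes are a starting point
    for manual correction, not final annotations.
    """
    mask = image > threshold
    labeled, num = ndimage.label(mask)
    boxes = []
    for sl in ndimage.find_objects(labeled):
        if sl is None:
            continue
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if h * w >= min_area:
            boxes.append((sl[1].start, sl[0].start, sl[1].stop, sl[0].stop))  # (x0, y0, x1, y1)
    return boxes
```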
At the same time, to ensure that the generated samples are representative and effective, they need to be screened and refined.
This can be done by assessing sample quality, diversity, and relevance to the intended application.
Samples that are of poor quality or fail to meet requirements can be discarded or regenerated.
Machine Learning-Based Maritime Target Recognition and Tracking
Abstract: Maritime target recognition and tracking is an important technology with applications in maritime navigation safety, border defense, and related fields.
This paper studies machine learning-based maritime target recognition and tracking and proposes a method based on convolutional neural networks (CNNs).
By training on a large set of maritime target image samples, the CNN model automatically recognizes and tracks different types of maritime targets.
Experimental results show that the method achieves high accuracy and robustness and provides an effective solution for maritime target recognition and tracking.
1. Introduction
The ocean is one of society's most important resources, and maritime navigation and maritime border security matter greatly to national development and security.
Accurate recognition and tracking of maritime targets is therefore a key technology for safeguarding maritime security.
Traditional methods typically rely on manually designed rules and require hand-crafted features and manual classification, which makes them inefficient and subject to human bias.
Machine learning methods, in contrast, can be trained on large amounts of sample data to learn target features automatically, improving recognition and tracking performance.
This paper applies machine learning to maritime target recognition and tracking and proposes a CNN-based method.
2. Methods
2.1 Dataset collection and preprocessing
To build an effective machine learning model, a dataset containing different types of maritime targets must be collected.
Data can be gathered from sensors, satellite imagery, or other reliable sources.
The collected data is then preprocessed, including denoising, image enhancement, and cropping, to improve training and recognition performance.
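A minimal sketch of the kind of preprocessing pipeline described here is given below; the specific operations (Gaussian denoising, percentile contrast stretch, center crop) and their parameters are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from scipy import ndimage

def preprocess(image: np.ndarray, crop_size: int = 224) -> np.ndarray:
    """Denoise, contrast-stretch, and center-crop a single-band image."""
    # Light Gaussian smoothing as a simple denoising step.
    smoothed = ndimage.gaussian_filter(image.astype(np.float32), sigma=1.0)
    # Percentile-based contrast stretch to [0, 1].
    lo, hi = np.percentile(smoothed, (2, 98))
    stretched = np.clip((smoothed - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    # Center crop to the fixed size expected by the network (image assumed larger).
    h, w = stretched.shape
    top, left = (h - crop_size) // 2, (w - crop_size) // 2
    return stretched[top:top + crop_size, left:left + crop_size]
```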
2.2 Convolutional neural networks (CNNs)
A convolutional neural network is a deep learning model with strong image feature extraction capability.
In maritime target recognition and tracking, a CNN can learn feature representations of the targets.
By combining multiple convolutional and pooling layers, it effectively extracts both local and global image features for classification and recognition.
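The following is a minimal sketch of such a convolution-plus-pooling classifier in PyTorch; the layer sizes, depth, and the assumption of three target classes (e.g., ship, fishing boat, life raft) are illustrative choices, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Two conv/pool stages followed by a linear classifier (3 classes assumed)."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                  # local features, halve resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> global features
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))
```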
2.3 Training and optimization
During training, the prepared dataset is split into a training set and a validation set; the model is trained on the training set and evaluated and tuned on the validation set.
An appropriate loss function and optimization algorithm, such as cross-entropy loss and stochastic gradient descent (SGD), are chosen to optimize the model parameters and improve performance.
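A minimal training-loop sketch with cross-entropy loss and SGD follows, reusing the SmallCNN sketch above; the learning rate, momentum, and `train_loader` are placeholders for whatever data pipeline is actually used.

```python
import torch
import torch.nn as nn

model = SmallCNN(num_classes=3)          # SmallCNN from the sketch above
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def train_one_epoch(train_loader):
    model.train()
    for images, labels in train_loader:  # train_loader: placeholder DataLoader
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()                 # SGD parameter update
```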
3. Experiments and results
The experiments use a dataset containing maritime ships, fishing boats, and life rafts.
A Bayesian Image Classification Algorithm
张小红;张倩
【Journal】《计算机应用与软件》
【Year (Volume), Issue】2009, Vol. 26, No. 9
【Abstract】This paper proposes a Bayesian image classification algorithm. Starting from the original digital image, the algorithm analyzes the distribution of image features and scans local image regions, then extracts the feature elements of the target image to obtain its color, texture, and shape features, and finally uses a Bayesian classifier to perform fast automatic image classification. Experimental results show that the algorithm effectively extracts local image features and classifies images quickly and accurately.
【Pages】3 (pp. 250-252)
【Authors】张小红; 张倩
【Affiliations】Department of Information Engineering, Henan College of Finance and Taxation, Zhengzhou 450002, Henan; School of Computer and Information Engineering, Xinxiang University, Xinxiang 453000, Henan
【Language】Chinese
【CLC Classification】TP3
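To illustrate the general pattern the abstract describes (hand-crafted color/texture-style features fed to a Bayesian classifier), here is a minimal sketch using color histograms and Gaussian naive Bayes; the feature choice and classifier are stand-ins for illustration, not the paper's actual implementation.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def color_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Concatenate per-channel intensity histograms of an (H, W, 3) uint8 image."""
    feats = [np.histogram(image[..., c], bins=bins, range=(0, 256), density=True)[0]
             for c in range(3)]
    return np.concatenate(feats)

# train_images/train_labels and test_images are placeholders for a real dataset.
def fit_and_predict(train_images, train_labels, test_images):
    X_train = np.stack([color_histogram(im) for im in train_images])
    X_test = np.stack([color_histogram(im) for im in test_images])
    clf = GaussianNB().fit(X_train, train_labels)   # Bayesian classifier on the features
    return clf.predict(X_test)
```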
A New Edge-Enhancing Image Prior Model
张地;彭宏
【Journal】《太赫兹科学与电子信息学报》
【Year (Volume), Issue】2006, Vol. 4, No. 6
【Abstract】Among super-resolution image reconstruction algorithms, the maximum a posteriori (MAP) approach has attracted wide attention for its excellent reconstruction performance. However, the smoothness-type image priors commonly used in MAP algorithms blur the edges of the reconstructed image and lose some detail. This paper proposes a new edge-enhancing image prior model. Unlike existing image models, the new model does not penalize discontinuities in the image but enhances them. Experimental results show that the new model achieves better reconstructions than smoothness-type image priors.
【Pages】4 (pp. 440-443)
【Authors】张地; 彭宏
【Affiliations】Department of Computer Science, Shaoguan University, Shaoguan 512005, Guangdong; School of Computer Science and Engineering, South China University of Technology, Guangzhou 510641, Guangdong; School of Computer Science and Engineering, South China University of Technology, Guangzhou 510641, Guangdong
【Language】Chinese
【CLC Classification】TN911.73
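For reference, the generic MAP super-resolution formulation that this kind of work builds on can be written as follows; the degradation operators and the form of the prior potential are generic assumptions, not the paper's particular edge-enhancing model.

```latex
\hat{\mathbf{x}}
  = \arg\max_{\mathbf{x}} \, p(\mathbf{y}\mid\mathbf{x})\,p(\mathbf{x})
  = \arg\min_{\mathbf{x}} \; \frac{1}{2\sigma^{2}}\,\lVert \mathbf{y} - \mathbf{D}\mathbf{H}\mathbf{x}\rVert^{2}
    \;+\; \lambda \sum_{i} \rho\!\big( (\nabla \mathbf{x})_{i} \big)
```

Here $\mathbf{y}$ denotes the observed low-resolution data, $\mathbf{H}$ models blur, $\mathbf{D}$ downsampling, and $\rho$ is the prior potential; smoothness priors penalize large gradients, whereas an edge-enhancing prior reshapes $\rho$ so that discontinuities are preserved or sharpened rather than penalized.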
NSCT-Domain Image Denoising Based on Spatially Adaptive Bayesian Shrinkage
孙强;高勇;焦李成
【Journal】《计算机应用》
【Year (Volume), Issue】2010, Vol. 30, No. 8
【Abstract】This paper proposes an image denoising method in the nonsubsampled contourlet transform (NSCT) domain based on spatially adaptive Bayesian shrinkage. The method models the NSCT subband coefficients with a generalized Gaussian distribution and characterizes the local context of the coefficients within each subband with an anisotropic elliptical window, yielding a spatially adaptive Bayesian shrinkage scheme for NSCT-domain denoising. Denoising experiments verify the effectiveness of the proposed method, and comparisons with four shift-invariant contourlet denoising methods further confirm its strong denoising performance.
【Pages】5 (pp. 2080-2084)
【Authors】孙强; 高勇; 焦李成
【Affiliations】Department of Electronic Engineering, Xi'an University of Technology, Xi'an 710048; Department of Electronic Engineering, Xi'an University of Technology, Xi'an 710048; Institute of Intelligent Information Processing, Xidian University, Xi'an 710071
【Language】Chinese
【CLC Classification】TP391.41
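As background for the shrinkage idea the abstract refers to, a standard Bayesian shrinkage (BayesShrink-style) rule for a subband coefficient $w$ is sketched below; the paper's spatially adaptive, generalized-Gaussian version refines this, so the formula is illustrative rather than a statement of the paper's estimator.

```latex
\hat{w} = \operatorname{sgn}(w)\,\max\!\big(|w| - T,\; 0\big),
\qquad
T = \frac{\hat{\sigma}_{n}^{2}}{\hat{\sigma}_{x}},
\qquad
\hat{\sigma}_{x} = \sqrt{\max\!\big(\hat{\sigma}_{w}^{2} - \hat{\sigma}_{n}^{2},\; 0\big)}
```

Here $\hat{\sigma}_{n}^{2}$ is the noise variance and $\hat{\sigma}_{w}^{2}$ the variance of the observed coefficients estimated in a local (here, elliptical) neighborhood; spatial adaptivity comes from re-estimating $\hat{\sigma}_{x}$ for every coefficient from its local context.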
Image Denoising Based on an Adaptive Bivariate Model in the Quaternion Wavelet Domain
盖杉
【Journal】《微电子学与计算机》
【Year (Volume), Issue】2015, No. 3
【Abstract】This paper proposes a new image denoising algorithm based on an adaptive bivariate model in the quaternion wavelet transform domain. In the quaternion wavelet domain, an adaptive bivariate model serves as the prior for the sparse distribution of decomposition coefficients across adjacent scales, fully exploiting the statistical dependence between coefficients. The marginal variance of the inter-scale coefficients is estimated with Newton-Raphson iteration, and the image is denoised within the Bayesian maximum a posteriori estimation framework. The algorithm achieves better denoising performance.
【Pages】5 (pp. 81-85)
【Keywords】image denoising; quaternion wavelet; adaptive bivariate model; Bayesian estimation
【Author】盖杉
【Affiliation】School of Computer Science and Engineering, Southeast University
【Language】Chinese
【CLC Classification】TP391.41
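For context, bivariate models of this kind are usually associated with the classical bivariate shrinkage estimator, which couples a coefficient $w_1$ with its parent $w_2$ at the next coarser scale; the formula below is that standard estimator, given for illustration, and is not necessarily the exact MAP estimator derived in the paper.

```latex
\hat{w}_{1} \;=\;
\frac{\Big(\sqrt{w_{1}^{2}+w_{2}^{2}} \;-\; \dfrac{\sqrt{3}\,\sigma_{n}^{2}}{\sigma}\Big)_{\!+}}
     {\sqrt{w_{1}^{2}+w_{2}^{2}}}\; w_{1}
```

Here $(\cdot)_{+}$ denotes the positive part, $\sigma_{n}^{2}$ is the noise variance, and $\sigma$ is the locally estimated marginal standard deviation of the noise-free coefficients; adaptivity enters through estimating $\sigma$ from a local window around each coefficient pair.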
A Label Field Fusion Bayesian Model and Its Penalized Maximum Rand Estimator for Image Segmentation
1610IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 6, JUNE 2010A Label Field Fusion Bayesian Model and Its Penalized Maximum Rand Estimator for Image SegmentationMax MignotteAbstract—This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to easily derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each segmentation results to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for definition of an interesting penalized maximum probabilistic rand estimator with which the fusion of simple, quickly estimated, segmentation results appears as an interesting alternative to complex segmentation models existing in the literature. This fusion framework has been successfully applied on the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature. Index Terms—Bayesian model, Berkeley image database, color textured image segmentation, energy-based model, label field fusion, Markovian (MRF) model, probabilistic Rand index.I. INTRODUCTIONIMAGE segmentation is a frequent preprocessing step which consists of achieving a compact region-based description of the image scene by decomposing it into spatially coherent regions with similar attributes. This low-level vision task is often the preliminary and also crucial step for many image understanding algorithms and computer vision applications. A number of methods have been proposed and studied in the last decades to solve the difficult problem of textured image segmentation. Among them, we can cite clustering algorithmsManuscript received February 20, 2009; revised February 06, 2010. First published March 11, 2010; current version published May 14, 2010. This work was supported by a NSERC individual research grant. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Peter C. Doerschuk. The author is with the Département d’Informatique et de Recherche Opérationnelle (DIRO), Université de Montréal, Faculté des Arts et des Sciences, Montréal H3C 3J7 QC, Canada (e-mail: mignotte@iro.umontreal.ca). Color versions of one or more of the figures in this paper are available online at . Digital Object Identifier 10.1109/TIP.2010.2044965[1], spatial-based segmentation methods which exploit the connectivity information between neighboring pixels and have led to Markov Random Field (MRF)-based statistical models [2], mean-shift-based techniques [3], [4], graph-based [5], [6], variational methods [7], [8], or by region-based split and merge procedures, sometimes directly expressed by a global energy function to be optimized [9]. 
Years of research in segmentation have demonstrated that significant improvements on the final segmentation results may be achieved either by using notably more sophisticated feature selection procedures, or more elaborate clustering techniques (sometimes involving a mixture of different or non-Gaussian distributions for the multidimensional texture features [10], [11]) or by taking into account prior distribution on the labels, region process, or the number of classes [9], [12], [13]. In all cases, these improvements lead to computationally expensive segmentation algorithms and, in the case of energy-based segmentation models, to costly optimization techniques. The segmentation approach, proposed in this paper, is conceptually different and explores another strategy initially introduced in [14]. Instead of considering an elaborate and better designed segmentation model of textured natural image, our technique explores the possible alternative of fusing (i.e., efficiently combining) several quickly estimated segmentation maps associated with simpler segmentation models for a final reliable and accurate segmentation result. These initial segmentations to be fused can be given either by different algorithms or by the same algorithm with different values of the internal parameters such as several -means clustering results with different values of , or by several -means results using different distance metrics, and applied on an input image possibly expressed in different color spaces or by other means. The fusion model, presented in this paper, is derived from the recently introduced probabilistic rand index (PRI) [15], [16] which measures the agreement of one segmentation result to multiple (manually generated) ground-truth segmentations. This measure efficiently takes into account the inherent variation existing across hand-labeled possible segmentations. We will show that this non-parametric measure allows us to derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Finally, this fusion model emerges as a classical optimization problem in which the Gibbs energy function related to this model has to be minimized. In other words, or analytically expressed in the regularization framework, each quickly estimated segmentation (to be fused) provides a set of constraints in terms of pairs of pixel labels (i.e., binary cliques) that should be equal or not. Finally, our fusion result is found1057-7149/$26.00 © 2010 IEEEMIGNOTTE: LABEL FIELD FUSION BAYESIAN MODEL AND ITS PENALIZED MAXIMUM RAND ESTIMATOR FOR IMAGE SEGMENTATION1611by searching for a segmentation map that minimizes an energy function encoding this precomputed set of binary constraints (thus optimizing the so-called PRI criterion). In our application, this final optimization task is performed by a robust multiresolution coarse-to-fine minimization strategy. This fusion of simple, quickly estimated segmentation results appears as an interesting alternative to complex, computationally demanding segmentation models existing in the literature. This new strategy of segmentation is validated in the Berkeley natural image database (also containing, for quantitative evaluations, ground truth segmentations obtained from human subjects). Conceptually, our fusion strategy is in the framework of the so-called decision fusion approaches recently proposed in clustering or imagery [17]–[21]. 
With these methods, a series of energy functions are first minimized before their outputs (i.e., their decisions) are merged. Following this strategy, Fred et al. [17] have explored the idea of evidence accumulation for combining the results of multiple clusterings. Reed et al. have proposed a Gibbs energy-based fusion model that differs from ours in the likelihood and prior energy design, as final merging procedure (for the fusion of large scale classified sonar image [21]). More precisely, Reed et al. employed a voting scheme-based likelihood regularized by an isotropic Markov random field priorly used to inpaint regions where the likelihood decision is not available. More generally, the concept of combining classifiers for the improvement of the performance of individual classifiers is known, in machine learning field, as a committee machine or mixture of experts [22], [23]. In this context, Dietterich [23] have provided an accessible and informal reasoning, from statistical, computational and representational viewpoints, of why ensembles can improve results. In this recent field of research, two major categories of committee machines are generally found in the literature. Our fusion decision approach is in the category of the committee machine model that utilizes an ensemble of classifiers with a static structure type. In this class of committee machines, the responses of several classifiers are combined by means of a mechanism that does not involve the input data (contrary to the dynamic structure type-based mixture of experts). In order to create an efficient ensemble of classifiers, three major categories of methods have been suggested whose goal is to promote diversity in order to increase efficiency of the final classification result. This can be done either by using different subsets of the input data, either by using a great diversity of the behavior between classifiers on the input data or finally by using the diversity of the behavior of the input data. Conceptually, our ensemble of classifiers is in this third category, since we intend to express the input data in different color spaces, thus encouraging diversity and different properties such as data decorrelation, decoupling effects, perceptually uniform metrics, compaction and invariance to various features, etc. In this framework, the combination itself can be performed according to several strategies or criteria (e.g., weighted majority vote, probability rules: sum, product, mean, median, classifier as combiner, etc.) but, none (to our knowledge) uses the PRI fusion (PRIF) criterion. Our segmentation strategy, based on the fusion of quickly estimated segmentation maps, is similar to the one proposed in [14] but the criterion which is now used in this new fusion model is different. In [14], the fusion strategy can be viewed as a two-stephierarchical segmentation procedure in which the first step remains identical and a set of initial input texton segmentation maps (in each color space) is estimated. Second, a final clustering, taking into account this mixture of textons (expressed in the set of different color space) is then used as a discriminant feature descriptor for a final -mean clustering whose output is the final fused segmentation map. Contrary to the fusion model presented in this paper, this second step (fusion of texton segmentation maps) is thus achieved in the intra-class inertia sense which is also the so-called squared-error criterion of the -mean algorithm. 
Let us add that a conceptually different label field fusion model has been also recently introduced in [24] with the goal of blending a spatial segmentation (region map) and a quickly estimated and to-be-refined application field (e.g., motion estimation/segmentation field, occlusion map, etc.). The goal of the fusion procedure explained in [24] is to locally fuse label fields involving labels of two different natures at different level of abstraction (i.e., pixel-wise and region-wise). More precisely, its goal is to iteratively modify the application field to make its regions fit the color regions of the spatial segmentation with the assumption that the color segmentation is more detailed than the regions of the application field. In this way, misclassified pixels in the application field (false positives and false negatives) are filtered out and blobby shapes are sharpened, resulting in a more accurate final application label field. The remainder of this paper is organized as follows. Section II describes the proposed Bayesian fusion model. Section III describes the optimization strategy used to minimize the Gibbs energy field related to this model and Section IV describes the segmentation model whose outputs will be fused by our model. Finally, Section V presents a set of experimental results and comparisons with existing segmentation techniques.II. PROPOSED FUSION MODEL A. Rand Index The Rand index [25] is a clustering quality metric that measures the agreement of the clustering result with a given ground truth. This non-parametric statistical measure was recently used in image segmentation [16] as a quantitative and perceptually interesting measure to compare automatic segmentation of an image to a ground truth segmentation (e.g., a manually hand-segmented image given by an expert) and/or to objectively evaluate the efficiency of several unsupervised segmentation methods. be the number of pixels assigned to the same region Let (i.e., matched pairs) in both the segmentation to be evaluated and the ground truth segmentation , and be the number of pairs of pixels assigned to different regions (i.e., misand . The Rand index is defined as matched pairs) in to the total number of pixel pairs, i.e., the ratio of for an image of size pixels. More formally [16], and designate the set of region labels respecif tively associated to the segmentation maps and at pixel location and where is an indicator function, the Rand index1612IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 6, JUNE 2010is given by the following relation:given by the empirical proportion (3) where is the delta Kronecker function. In this way, the PRI measure is simply the mean of the Rand index computed between each [16]. As a consequence, the PRI pair measure will favor (i.e., give a high score to) a resulting acceptable segmentation map which is consistent with most of the segmentation results given by human experts. More precisely, the resulting segmentation could result in a compromise or a consensus, in terms of level of details and contour accuracy exhibited by each ground-truth segmentations. Fig. 8 gives a fusion map example, using a set of manually generated segmentations exhibiting a high variation, in terms of level of details. Let us add that this probabilistic metric is not degenerate; all the bad segmentations will give a low score without exception [16]. C. 
Generative Gibbs Distribution Model of Correct Segmentations (i.e., the pairwise empirical As indicated in [15], the set ) defines probabilities for each pixel pair computed over an appealing generative model of correct segmentation for the image, easily expressed as a Gibbs distribution. In this way, the Gibbs distribution, generative model of correct segmentation, which can also be considered as a likelihood of , in the PRI sense, may be expressed as(1) which simply computes the proportion (value ranging from 0 to 1) of pairs of pixels with compatible region label relationships between the two segmentations to be compared. A value of 1 indicates that the two segmentations are identical and a value of 0 indicates that the two segmentations do not agree on any pair of points (e.g., when all the pixels are gathered in a single region in one segmentation whereas the other segmentation assigns each pixel to an individual region). When the number of and are much smaller than the number of data labels in points , a computationally inexpensive estimator of the Rand index can be found in [16]. B. Probabilistic Rand Index (PRI) The PRI was recently introduced by Unnikrishnan [16] to take into accounttheinherentvariabilityofpossible interpretationsbetween human observers of an image, i.e., the multiple acceptable ground truth segmentations associated with each natural image. This variability between observers, recently highlighted by the Berkeley segmentation dataset [26] is due to the fact that each human chooses to segment an image at different levels of detail. This variability is also due image segmentation being an ill-posed problem, which exhibits multiple solutions for the different possible values of the number of classes not known a priori. Hence, in the absence of a unique ground-truth segmentation, the clustering quality measure has to quantify the agreement of an automatic segmentation (i.e., given by an algorithm) with the variation in a set of available manual segmentations representing, in fact, a very small sample of the set of all possible perceptually consistent interpretations of an image [15]. The authors [16] address this concern by soft nonuniform weighting of pixel pairs as a means of accounting for this variability in the ground truth set. More formally, let us consider a set of manually segmented (ground truth) images corresponding to an be the segmentation to be compared image of size . Let with the manually labeled set and designates the set of reat pixel gion labels associated with the segmentation maps location , the probabilistic RI is defined bywhere is the set of second order cliques or binary cliques of a Markov random field (MRF) model defined on a complete graph (each node or pixel is connected to all other pixels of is the temperature factor of the image) and this Boltzmann–Gibbs distribution which is twice less than the normalization factor of the Rand Index in (1) or (2) since there than pairs of pixels for which are twice more binary cliques . is the constant partition function. After simplification, this yields(2) where a good choice for the estimator of (the probability of the pixel and having the same label across ) is simply (4)MIGNOTTE: LABEL FIELD FUSION BAYESIAN MODEL AND ITS PENALIZED MAXIMUM RAND ESTIMATOR FOR IMAGE SEGMENTATION1613where is a constant partition function (with a factor which depends only on the data), namelywhere is the set of all possible (configurations for the) segof size pixels. 
Let us add mentations into regions that, since the number of classes (and thus the number of regions) of this final segmentation is not a priori known, there are possibly, between one and as much as regions that the number of pixels in this image (assigning each pixel to an individual can region is a possible configuration). In this setting, be viewed as the potential of spatially variant binary cliques (or pairwise interaction potentials) of an equivalent nonstationary MRF generative model of correct segmentations in the case is assumed to be a set of representative ground where truth segmentations. Besides, , the segmentation result (to be ), can be considered as a realization of this compared to generative model with PRand, a statistical measure proportional to its negative likelihood energy. In other words, an estimate of , in the maximum likelihood sense of this generative model, will give a resulting segmented map (i.e., a fusion result) with a to be fused. high fidelity to the set of segmentations D. Label Field Fusion Model for Image Segmentation Let us consider that we have at our disposal, a set of segmentations associated to an image of size to be fused (i.e., to efficiently combine) in order to obtain a final reliable and accurate segmentation result. The generative Gibbs distribution model of correct segmentations expressed in (4) gives us an interesting fusion model of segmentation maps, in the maximum PRI sense, or equivalently in the maximum likelihood (ML) sense for the underlying Gibbs model expressed in (4). In this framework, the set of is computed with the empirical proportion estimator [see (3)] on the data . Once has been estimated, the resulting ML fusion segmentation map is thus defined by maximizing the likelihood distributiontions for different possible values of the number of classes which is not a priori known. To render this problem well-posed with a unique solution, some constraints on the segmentation process are necessary, favoring over segmentation or, on the contrary, merging regions. From the probabilistic viewpoint, these regularization constraints can be expressed by a prior distribution of treated as a realization of the unknown segmentation a random field, for example, within a MRF framework [2], [27] or analytically, encoded via a local or global [13], [28] prior energy term added to the likelihood term. In this framework, we consider an energy function that sets a particular global constraint on the fusion process. This term restricts the number of regions (and indirectly, also penalizes small regions) in the resulting segmentation map. So we consider the energy function (6) where designates the number of regions (set of connected pixels belonging to the same class) in the segmented is the Heaviside (or unit step) function, and an image , internal parameter of our fusion model which physically represents the number of classes above which this prior constraint, limiting the number of regions, is taken into account. From the probabilistic viewpoint, this regularization constraint corresponds to a simple shifted (from ) exponential distribution decreasing with the number of regions displayed by the final segmentation. 
In this framework, a regularized solution corresponds to the maximum a posteriori (MAP) solution of our fusion model, i.e., that maximizes the posterior distribution the solution , and thus(7) with is the regularization parameter controlling the contribuexpressing fidelity to the set of segtion of the two terms; encoding our prior knowledge or mentations to be fused and beliefs concerning the types of acceptable final segmentations as estimates (segmentation with a number of limited regions). In this way, the resulting criteria used in this resulting fusion model can be viewed as a penalized maximum rand estimator. III. COARSE-TO-FINE OPTIMIZATION STRATEGY A. Multiresolution Minimization Strategy Our fusion procedure of several label fields emerges as an optimization problem of a complex non-convex cost function with several local extrema over the label parameter space. In order to find a particular configuration of , that efficiently minimizes this complex energy function, we can use a global optimization procedure such as a simulated annealing algorithm [27] whose advantages are twofold. First, it has the capability of avoiding local minima, and second, it does not require a good solution. initial guess in order to estimate the(5) where is the likelihood energy term of our generative fusion . model which has to be minimized in order to find Concretely, encodes the set of constraints, in terms of pairs of pixel labels (identical or not), provided by each of the segmentations to be fused. The minimization of finds the resulting segmentation which also optimizes the PRI criterion. E. Bayesian Fusion Model for Image Segmentation As previously described in Section II-B, the image segmentation problem is an ill-posed problem exhibiting multiple solu-1614IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 6, JUNE 2010Fig. 1. Duplication and “coarse-to-fine” minimization strategy.An alternative approach to this stochastic and computationally expensive procedure is the iterative conditional modes (ICM) introduced by Besag [2]. This method is deterministic and simple, but has the disadvantage of requiring a proper initialization of the segmentation map close to the optimal solution. Otherwise it will converge towards a bad local minima . In order associated with our complex energy function to solve this problem, we could take, as initialization (first such as iteration), the segmentation map (8) i.e., in choosing for the first iteration of the ICM procedure amongst the segmentation to be fused, the one closest to the optimal solution of the Gibbs energy function of our fusion model [see (5)]. A more robust optimization method consists of a multiresolution approach combined with the classical ICM optimization procedure. In this strategy, rather than considering the minimization problem on the full and original configuration space, the original inverse problem is decomposed in a sequence of approximated optimization problems of reduced complexity. This drastically reduces computational effort and provides an accelerated convergence toward improved estimate. Experimentally, estimation results are nearly comparable to those obtained by stochastic optimization procedures as noticed, for example, in [10] and [29]. To this end, a multiresolution pyramid of segmentation maps is preliminarily derived, in order to for each at different resolution levels, and a set estimate a set of of similar spatial models is considered for each resolution level of the pyramidal data structure. 
At the upper level of the pyramidal structure (lower resolution level), the ICM optimization procedure is initialized with the segmentation map given by the procedure defined in (8). It may also be initialized by a random solution and, starting from this initial segmentation, it iterates until convergence. After convergence, the result obtained at this resolution level is interpolated (see Fig. 1) and then used as initialization for the next finer level and so on, until the full resolution level. B. Optimization of the Full Energy Function Experiments have shown that the full energy function of our model, (with the region based-global regularization constraint) is complex for some images. Consequently it is preferable toFig. 2. From top to bottom and left to right; A natural image from the Berkeley database (no. 134052) and the formation of its region process (algorithm PRIF ) at the (l = 3) upper level of the pyramidal structure at iteration [0–6], 8 (the last iteration) of the ICM optimization algorithm. Duplication and result of the ICM relaxation scheme at the finest level of the pyramid at iteration 0, 1, 18 (last iteration) and segmentation result (region level) after the merging of regions and the taking into account of the prior. Bottom: evolution of the Gibbs energy for the different steps of the multiresolution scheme.perform the minimization in two steps. In a first step, the minimization is performed without considering the global constraint (considering only ), with the previously mentioned multiresolution minimization strategy and the ICM optimization procedure until its convergence at full resolution level. At this finest resolution level, the minimization is then refined in a second step by identifying each region of the resulting segmentation map. This creates a region adjacency graph (a RAG is an undirected graph where the nodes represent connected regions of the image domain) and performs a region merging procedure by simply applying the ICM relaxation scheme on each region (i.e., by merging the couple of adjacent regions leading to a reduction of the cost function of the full model [see (7)] until convergence). In the second step, minimization can also be performed . according to the full modelMIGNOTTE: LABEL FIELD FUSION BAYESIAN MODEL AND ITS PENALIZED MAXIMUM RAND ESTIMATOR FOR IMAGE SEGMENTATION1615with its four nearest neighbors and a fixed number of connections (85 in our application), regularly spaced between all other pixels located within a square search window of fixed size 30 pixels centered around . Fig. 3 shows comparison of segmentation results with a fully connected graph computed on a search window two times larger. We decided to initialize the lower (or third upper) level of the pyramid with a sequence of 20 different random segmentations with classes. The full resolution level is then initialized with the duplication (see Fig. 1) of the best segmentation result (i.e., the one associated to the lowest Gibbs energy ) obtained after convergence of the ICM at this lower resolution level (see Fig. 2). We provide details of our optimization strategy in Algorithm 1. Algo I. Multiresolution minimization procedure (see also Fig. 2). Two-Step Multiresolution Minimization Set of segmentations to be fusedPairwise probabilities for each pixel pair computed over at resolution level 1. 
Initialization Step • Build multiresolution Pyramids from • Compute the pairwise probabilities from at resolution level 3 • Compute the pairwise probabilities from at full resolution PIXEL LEVEL Initialization: Random initialization of the upper level of the pyramidal structure with classes • ICM optimization on • Duplication (cf. Fig 1) to the full resolution • ICM optimization on REGION LEVEL for each region at the finest level do • ICM optimization onFig. 4. Segmentation (image no. 385028 from Berkeley database). From top to bottom and left to right; segmentation map respectively obtained by 1] our multiresolution optimization procedure: = 3402965 (algo), 2] SA : = 3206127, 3] rithm PRIF : = 3312794, 4] SA : = 3395572, 5] SA : = 3402162. SAFig. 3. Comparison of two segmentation results of our multiresolution fusion procedure (algorithm PRIF ) using respectively: left] a subsampled and fixed number of connections (85) regularly spaced and located within a square search window of size = 30 pixels. right] a fully connected graph computed on a search window two times larger (and requiring a computational load increased by 100).NUU 0 U 00 U 0 U 0D. Comparison With a Monoresolution Stochastic Relaxation In order to test the efficiency of our two-step multiresolution relaxation (MR) strategy, we have compared it to a standard monoresolution stochastic relaxation algorithm, i.e., a so-called simulated annealing (SA) algorithm based on the Gibbs sampler [27]. In order to restrict the number of iterations to be finite, we have implemented a geometric temperature cooling schedule , where is the [30] of the form starting temperature, is the final temperature, and is the maximal number of iterations. In this stochastic procedure, is crucial. The temperathe choice of the initial temperature ture must be sufficiently high in the first stages of simulatedC. Algorithm In order to decrease the computational load of our multiresolution fusion procedure, we only use two levels of resolution in our pyramidal structure (see Fig. 2): the full resolution and an image eight times smaller (i.e., at the third upper level of classical data pyramidal structure). We do not consider a complete graph: we consider that each node (or pixel) is connected。
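For readability, the Rand index (the excerpt's equation (1)) and the probabilistic Rand index (its equations (2)-(3)) can be stated compactly in standard notation, following the excerpt's own description; the symbols are the ones the text introduces ($S^{t}$ the segmentation under evaluation, $S^{gt}$ or $\{S_k\}_{k=1}^{L}$ the ground truths, $l_i$ the region label at pixel $i$, $N$ the number of pixels).

```latex
R\big(S^{t}, S^{gt}\big) \;=\; \frac{2}{N(N-1)} \sum_{i<j}
\Big[ \mathbb{I}\big(l_i^{t}=l_j^{t}\big)\,\mathbb{I}\big(l_i^{gt}=l_j^{gt}\big)
    + \mathbb{I}\big(l_i^{t}\neq l_j^{t}\big)\,\mathbb{I}\big(l_i^{gt}\neq l_j^{gt}\big) \Big]

\mathrm{PRI}\big(S^{t}, \{S_k\}\big) \;=\; \frac{2}{N(N-1)} \sum_{i<j}
\Big[ \mathbb{I}\big(l_i^{t}=l_j^{t}\big)\,\hat{p}_{ij}
    + \mathbb{I}\big(l_i^{t}\neq l_j^{t}\big)\,\big(1-\hat{p}_{ij}\big) \Big],
\qquad
\hat{p}_{ij} \;=\; \frac{1}{L}\sum_{k=1}^{L}\mathbb{I}\big(l_i^{S_k}=l_j^{S_k}\big)
```

The PRI is thus the mean of the Rand index between $S^{t}$ and each ground truth, and the empirical pairwise probabilities $\hat{p}_{ij}$ are exactly the quantities the fusion model's Gibbs energy encodes as binary-clique potentials.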
Rao-Blackwellised particle filtering for dynamic Bayesian networks
Arnaud DoucetEngineering Dept.Cambridge University ad2@Nando de FreitasKevin MurphyStuart RussellComputer Science Dept.UC Berkeleyjfgf,murphyk,russell @AbstractParticle filters (PFs)are powerful sampling-based inference/learning algorithms for dynamic Bayesian networks (DBNs).They allow us to treat,in a principled way,any type of probabil-ity distribution,nonlinearity and non-stationarity.They have appeared in several fields under such names as “condensation”,“sequential Monte Carlo”and “survival of the fittest”.In this pa-per,we show how we can exploit the structure of the DBN to increase the efficiency of parti-cle filtering,using a technique known as Rao-Blackwellisation.Essentially,this samples some of the variables,and marginalizes out the rest exactly,using the Kalman filter,HMM filter,junction tree algorithm,or any other finite di-mensional optimal filter.We show that Rao-Blackwellised particle filters (RBPFs)lead to more accurate estimates than standard PFs.We demonstrate RBPFs on two problems,namely non-stationary online regression with radial ba-sis function networks and robot localization and map building.We also discuss other potential ap-plication areas and provide references to some fi-nite dimensional optimal filters.1INTRODUCTIONState estimation (online inference)in state-space models is widely used in a variety of computer science and engineer-ing applications.However,the two most famous algorithms for this problem,the Kalman filter and the HMM filter,are only applicable to linear-Gaussian models and models with finite state spaces,respectively.Even when the state space is finite,it can be so large that the HMM or junction tree algorithms become too computationally expensive.This is typically the case for large discrete dynamic Bayesian net-works (DBNs)(Dean and Kanazawa 1989):inference re-quires at each time space and time that is exponential in thenumber of hidden nodes.To handle these problems,sequential Monte Carlo meth-ods,also known as particle filters (PFs),have been in-troduced (Handschin and Mayne 1969,Akashi and Ku-mamoto 1977).In the mid 1990s,several PF algorithms were proposed independently under the names of Monte Carlo filters (Kitagawa 1996),sequential importance sam-pling (SIS)with resampling (SIR)(Doucet 1998),bootstrap filters (Gordon,Salmond and Smith 1993),condensation trackers (Isard and Blake 1996),dynamic mixture models (West 1993),survival of the fittest (Kanazawa,Koller and Russell 1995),etc.One of the major innovations during the 1990s was the inclusion of a resampling step to avoid de-generacy problems inherent to the earlier algorithms (Gor-don et al.1993).In the late nineties,several statistical im-provements for PFs were proposed,and some important theoretical properties were established.In addition,these algorithms were applied and tested in many domains:see (Doucet,de Freitas and Gordon 2000)for an up-to-date sur-vey of the field.One of the major drawbacks of PF is that sampling in high-dimensional spaces can be inefficient.In some cases,however,the model has “tractable substructure”,which can be analytically marginalized out,conditional on cer-tain other nodes being imputed,c.f.,cutset conditioning in static Bayes nets (Pearl 1988).The analytical marginal-ization can be carried out using standard algorithms,such as the Kalman filter,the HMM filter,the junction tree al-gorithm for general DBNs (Cowell,Dawid,Lauritzen and Spiegelhalter 1999),or,any other finite-dimensional opti-mal filters.The advantage of this strategy is that it can 
drastically reduce the size of the space over which we need to sample.Marginalizing out some of the variables is an example of the technique called Rao-Blackwellisation ,because it is related to the Rao-Blackwell formula:see (Casella and Robert 1996)for a general discussion.Rao-Blackwellised particle filters (RBPF)have been applied in specific con-texts such as mixtures of Gaussians (Akashi and Ku-mamoto 1977,Doucet 1998,Doucet,Godsill and Andrieu2000),fixed parameter estimation(Kong,Liu and Wong 1994),HMMs(Doucet1998,Doucet,Godsill and Andrieu 2000)and Dirichlet process models(MacEachern,Clyde and Liu1999).In this paper,we develop the general theory of RBPFs,and apply it to several novel types of DBNs.We omit the proofs of the theorems for lack of space:please refer to the technical report(Doucet,Gordon and Krishna-murthy1999).2PROBLEM FORMULATIONLet us consider the following general state space model/DBN with hidden variables and observed vari-ables.We assume that is a Markov process of ini-tial distribution and transition equation. The observations are assumed to be conditionally independent given the process of marginal distribution.Given these observations, the inference of any subset or property of the statesrelies on the joint posterior distribution .Our objective is,therefore,to estimate this distribution,or some of its characteristics such as thefilter-ing density or the minimum mean square error (MMSE)estimate.The posterior satisfies the following recursionThe problem of how to automatically identify which vari-ables should be sampled,and which can be handled analytically, is one we are currently working on.We anticipate that algorithms similar to cutset conditioning(Becker,Bar-Yehuda and Geiger 1999)might prove useful.the alternative recursionThis estimate is unbiased and,from the strong law of large numbers(SLLN),One way to estimate and con-sists of using the well-known importance sampling method (Bernardo and Smith1994).This method is based on the following observation.Let us introduce an arbitrary impor-tance distribution,from which it is easy to get samples,and such that implies.ThenGiven i.i.d.samples distributed accord-ing to,a Monte Carlo estimate ofis given bywhere the normalized importance weights are equal to Intuitively,to reach a given precision,will require a reduced number of samples over as we only need to sample from a lower-dimensional distribution.This is proven in the following propositions.Proposition1The variances of the importance weights, the numerators and the denominators satisfy for anyvar varvar varvar varA sufficient condition for to satisfy a CLT is varand for any(Bernardo and Smith1994).This trivially implies that also satis-fies a CLT.More precisely,we get the following result. 
Proposition2Underthe assumptions given above,and satisfy a CLTwhere,and being given byThe Rao-Blackwellised estimate is usually compu-tationally more extensive to compute than so it is of interest to know when,for afixed computational com-plexity,one can expect to achieve variance reduction.Onehasso that,accordingly to the intuition,it will be worth gen-erally performing Rao-Blackwellisation when the average conditional variance of the variable is high.4RAO-BLACKWELLISED PARTICLE FILTERSGiven particles(samples)at time,approximately distributed according to the distribution,RBPFs allow us to compute particles approximately distributed according to the posterior,at time.This is ac-complished with the algorithm shown below,the details of which will now be explained.For,sample:and set:For,evaluate the importanceweights up to a normalizing constant:Multiply/suppress samples with high/lowimportance weights,respectively,to obtainrandom samples approximately distributedaccording to.3.MCMC step4.1IMPLEMENTATION ISSUES4.1.1Sequential importance samplingIf we restrict ourselves to importance functions of the fol-lowing form(3) we can obtain recursive formulas to evaluateand thus.The“incremental weight”is given byand the associated importance weight isUnfortunately,computing the optimal importance sampling distribution is often too expensive.Several deterministic approximations to the optimal distribution have been pro-posed,see for example(de Freitas1999,Doucet1998). Degeneracy of SISThe following proposition shows that,for importance func-tions of the form(3),the variance of can only in-crease(stochastically)over time.The proof of this propo-sition is an extension of a Kong-Liu-Wong theorem(Konget al.1994,p.285)to the case of an importance function of the form(3).Proposition4The unconditional variance(i.e.with the observations being interpreted as random variables) of the importance weights increases over time.In practice,the degeneracy caused by the variance increase can be observed by monitoring the importance weights. Typically,what we observe is that,after a few iterations, one of the normalized importance weights tends to1,while the remaining weights tend to zero.4.1.2Selection stepTo avoid the degeneracy of the sequential importance sam-pling simulation method,a selection(resampling)stage may be used to eliminate samples with low importance ra-tios and multiply samples with high importance ratios.A selection scheme associates to each particle a num-ber of offsprings,say,such that. 
Several selection schemes have been proposed in the lit-erature.These schemes satisfy,but their performance varies in terms of the variance of the particles,var.Recent theoretical results in(Crisan, Del Moral and Lyons1999)indicate that the restrictionis unnecessary to obtain convergence re-sults(Doucet et al.1999).Examples of these selection schemes include multinomial sampling(Doucet1998,Gor-don et al.1993,Pitt and Shephard1999),residual resam-pling(Kitagawa1996,Liu and Chen1998)and stratified sampling(Kitagawa1996).Their computational complex-ity is.4.1.3MCMC stepAfter the selection scheme at time,we obtain par-ticles distributed marginally approximately according to .As discussed earlier,the discrete nature of the approximation can lead to a skewed importance weights distribution.That is,many particles have no offspring (),whereas others have a large number of off-spring,the extreme case being for a particular value.In this case,there is a severe reduction in the di-versity of the samples.A strategy for improving the re-sults involves introducing MCMC steps of invariant distri-bution on each particle(Andrieu,de Freitas and Doucet1999b,Gilks and Berzuini1998,MacEachern et al. 1999).The basic idea is that,by applying a Markov tran-sition kernel,the total variation of the current distribution with respect to the invariant distribution can only decrease. Note,however,that we do not require this kernel to be er-godic.4.2CONVERGENCE RESULTSLet be the space of bounded,Borel measurable functions on.We denote.The fol-lowing theorem is a straightforward consequence of Theo-rem1in(Crisan and Doucet2000)which is an extension of previous results in(Crisan et al.1999).Theorem5If the importance weights are upper bounded and if one uses one of the selection schemes de-scribed previously,then,for all,there exists inde-pendent of such that for anywhere the expectation is taken w.r.t.to the randomness in-troduced by the PF algorithm.This results shows that,un-der very lose assumptions,convergence of this general par-ticlefiltering method is ensured and that the convergence rate of the method is independent of the dimension of the state-space.However,usually increases exponentially with time.If additional assumptions on the dynamic sys-tem under study are made(e.g.discrete state spaces),it is possible to get uniform convergence results(for any)for thefiltering distribution.We do not pursue this here.5EXAMPLESWe now illustrate the theory by briefly describing two ap-plications we have worked on.5.1ON-LINE REGRESSION AND MODELSELECTION WITH NEURAL NETWORKS Consider a function approximation scheme consisting of a mixture of radial basis functions(RBFs)and a linear regression term.The number of basis functions,,their centers,,the coefficients(weights of the RBF centers plus regression terms),,and the variance of the Gaussian noise on the output,,can all vary with time,so we treat them as latent random variables:see Figure1.For details, see(Andrieu,de Freitas and Doucet1999a).In(Andrieu et al.1999a),we show that it is possible to simulate,and with a particlefilter and to com-pute the coefficients analytically using Kalmanfilters. 
This is possible because the output of the neural network is linear in,and hence the system is a conditionally lin-ear Gaussian state-space model(CLGSSM),that is it is a linear Gaussian state-space model conditional upon the lo-cation of the bases and the hyper-parameters.This leads to an efficient RBPF that can be combined with a reversible jump MCMC algorithm(Green1995)to select the numberFigure1:DBN representation of the RBF model.The hyper-parameters have been omitted for clarity.Figure2:The top plot shows the one-step-ahead output predictions[—]and the true outputs[]for the RBF model.The middle and bottom plots show the true val-ues and estimates of the model order and noise variance respectively.of basis functions online.Forexample,we generated some data from a mixture of2RBFs for,and then from a single RBF for;the method was able to track this change,as shown in Figure2.Further experiments on real data sets are described in(Andrieu et al.1999a).5.2ROBOT LOCALIZATION AND MAPBUILDINGConsider a robot that can move on a discrete,two-dimensional grid.Suppose the goal is to learn a map of the environment,which,for simplicity,we can think of as a matrix which stores the color of each grid cell,which can be either black or white.The difficulty is that the color Figure3:A Factorial HMM with3hidden chains. represents the color of grid cell at time,represents the robot’s location,and the current observation.sensors are not perfect(they may accidentallyflip bits),nor the motors(the robot may fail to move in the desired di-with some probability due e.g.,to wheel slippage).it is easy for the robot to get lost.And when robot is lost,it does not know what part of the matrix to So we are faced with a chicken-and-egg situation: robot needs to know where it is to learn the map,butto know the map tofigure out where it is.problem of concurrent localization and map learn-for mobile robots has been widely studied.In(Mur-2000),we adopt a Bayesian approach,in which wea belief state over both the location of the robot,,and the color of each grid cell,,,where is the number cells,and is the number of colors.The DBN we using is shown in Figure3.The state space has size .Note that we can easily handle changing envi-since the map is represented as a random vari-unlike the more common approach,which treats the map as afixed parameter.The observation model is,where is a function thatflips its binary argument with somefixed probability.In other words,the robot gets to see the color of the cell it is currently at,corrupted by noise:is a noisy multiplexer with acting as a“gate”node.Note that this conditional independence is not obvious from the graph structure in Figure3(a),which suggests that all the nodes in each slice should be correlated by virtue of sharing a common observed child,as in a factorial HMM(Ghahra-mani and Jordan1997).The extra independence informa-tion is encoded in’s distribution,c.f.,(Boutilier,Fried-man,Goldszmidt and Koller1996).The basic idea of the algorithm is to sample with a PF, and marginalize out the nodes exactly,which can be done efficiently since they are conditionally independent given:Some results on a simple one-dimensional grid world aretime tg r i d c e l l i123456780.10.20.30.40.50.60.70.80.91g r i d c e l l iProb. location, i.e., P(L(t)=i | y(1:t)), 50 particles, seed 1246810121416123456780.10.20.30.40.50.60.70.80.91g r i d c e l l iBK Prob. 
location, i.e., P(L(t)=i | y(1:t))246810121416123456780.10.20.30.40.50.60.70.80.91a b cFigure 4:Estimated position as the robot moves from cell 1to 8and back.The robot “gets stuck”in cell 4for two steps in a row on the outgoing leg of the journey (hence the double diagonal),but the robot does not realize this until it reaches the end of the “corridor”at step 9,where it is able to relocalise.(a)Exact inference.(b)RBPF with 50particles.(c)Fully-factorised BK.shown in Figure 4.We compared exact Bayesian infer-ence with the RBPF method,and with the fully-factorised version of the Boyen-Koller (BK)algorithm (Boyen and Koller 1998),which represents the belief state as a product of marginals:We see that the RBPF results are very similar to the ex-act results,even with only 50particles,but that BK gets confused because it ignores correlations between the map cells.We have obtained good results learning a map (so the state space has size )using only 100particles (the observation model in the 2D case is that the robot observes the colors of all the cells in a neighbor-hood centered on its current location).For a more detailed discussion of these results,please see (Murphy 2000).5.3CONCLUSIONS AND EXTENSIONSRBPFs have been applied to many problems,mostly in the framework of conditionally linear Gaussian state-space models and conditionally finite state-space HMMs.That is,they have been applied to models that,conditionally upon a set of variables (imputed by the PF algorithm),admit a closed-form filtering distribution (Kalman filter in the con-tinuous case and HMM filter in the discrete case).One can also make use of the special structure of the dynamic model under study to perform the calculations efficiently using the junction tree algorithm.For example,if one had evolv-ing trees,one could sample the root nodes with the PF and compute the leaves using the junction tree algorithm.This would result in a substantial computational gain as one only has to sample the root nodes and apply the juction tree to lower dimensional sub-networks.Although the previoulsy mentioned models are the mostfamous ones,there exist numerous other dynamic systems admitting finite dimensional filters.That is,the filtering distribution can be estimated in closed-form at any time using a fixed number of sufficient statistics.These includeDynamic models for counting observations (Smith and Miller 1986).Dynamic models with a time-varying unknow covari-ance matrix for the dynamic noise (West and Harrison 1996,Uhlig 1997).Classes of the exponential family state space models (Vidoni 1999).This list is by no means exhaustive.It,however,shows that RBPFs apply to very wide class of dynamic models.Con-sequently,they have a big role to play in computer vision (where mixtures of Gaussians arise commonly),robotics,speech and dynamic factor analysis.ReferencesAkashi,H.and Kumamoto,H.(1977).Random samplingapproach to state estimation in switching environ-ments,Automatica 13:429–434.Andrieu,C.,de Freitas,J.F.G.and Doucet,A.(1999a).Se-quential Bayesian estimation and model selection ap-plied to neural networks,Technical Report CUED/F-INFENG/TR 341,Cambridge University Engineering Department.Andrieu,C.,de Freitas,J.F.G.and Doucet,A.(1999b).Se-quential MCMC for Bayesian model selection,IEEE Higher Order Statistics Workshop ,Ceasarea,Israel,pp.130–134.Becker,A.,Bar-Yehuda,R.and Geiger,D.(1999).Randomalgorithms for the loop cutset problem.Bernardo,J.M.and Smith,A.F.M.(1994).Bayesian The-ory ,Wiley Series in Applied Probability and 
Statis-tics.Boutilier,C.,Friedman,N.,Goldszmidt,M.and Koller,D.(1996).Context-specific independence in bayesian networks,Proc.Conf.Uncertainty in AI .Boyen,X.and Koller,D.(1998).Tractable inferencefor complex stochastic processes,Proc.Conf.Uncer-tainty in AI .Casella,G.and Robert,C.P.(1996).Rao-Blackwellisationof sampling schemes,Biometrika 83(1):81–94.Cowell,R.G.,Dawid,A.P.,Lauritzen,S.L.and Spiegel-halter,D.J.(1999).Probabilistic Networks and Ex-pert Systems ,Springer-Verlag,New York.Crisan,D.and Doucet,A.(2000).Convergence of gen-eralized particlefilters,Technical Report CUED/F-INFENG/TR381,Cambridge University Engineering Department.Crisan,D.,Del Moral,P.and Lyons,T.(1999).Dis-cretefiltering using branching and interacting parti-cle systems,Markov Processes and Related Fields 5(3):293–318.de Freitas,J.F.G.(1999).Bayesian Methods for Neu-ral Networks,PhD thesis,Department of Engineer-ing,Cambridge University,Cambridge,UK.Dean,T.and Kanazawa,K.(1989).A model for reason-ing about persistence and causation,Artificial Intelli-gence93(1–2):1–27.Doucet,A.(1998).On sequential simulation-based meth-ods for Bayesianfiltering,Technical Report CUED/F-INFENG/TR310,Department of Engineering,Cam-bridge University.Doucet, A.,de Freitas,J. F.G.and Gordon,N.J.(2000).Sequential Monte Carlo Methods in Practice, Springer-Verlag.Doucet,A.,Godsill,S.and Andrieu,C.(2000).On se-quential Monte Carlo sampling methods for Bayesian filtering,Statistics and Computing10(3):197–208. Doucet, A.,Gordon,N.J.and Krishnamurthy,V.(1999).Particlefilters for state estimation of jump Markov linear systems,Technical Report CUED/F-INFENG/TR359,Cambridge University Engineering Department.Ghahramani,Z.and Jordan,M.(1997).Factorial Hidden Markov Models,Machine Learning29:245–273. Gilks,W.R.and Berzuini,C.(1998).Monte Carlo in-ference for dynamic Bayesian models,Unpublished.Medical Research Council,Cambridge,UK.Gordon,N.J.,Salmond,D.J.and Smith,A.F.M.(1993).Novel approach to nonlinear/non-Gaussian Bayesian state estimation,IEE Proceedings-F140(2):107–113.Green,P.J.(1995).Reversible jump Markov chain Monte Carlo computation and Bayesian model determina-tion,Biometrika82:711–732.Handschin,J.E.and Mayne,D.Q.(1969).Monte Carlo techniques to estimate the conditional expectation in multi-stage non-linearfiltering,International Journal of Control9(5):547–559.Isard,M.and Blake, A.(1996).Contour tracking by stochastic propagation of conditional density,Euro-pean Conference on Computer Vision,Cambridge, UK,pp.343–356.Kanazawa,K.,Koller,D.and Russell,S.(1995).Stochastic simulation algorithms for dynamic probabilistic net-works,Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence,Morgan Kauf-mann,pp.346–351.Kitagawa,G.(1996).Monte Carlofilter and smoother for non-Gaussian nonlinear state space models,Journal of Computational and Graphical Statistics5:1–25.Kong, A.,Liu,J.S.and Wong,W.H.(1994).Se-quential imputations and Bayesian missing data prob-lems,Journal of the American Statistical Association 89(425):278–288.Liu,J.S.and Chen,R.(1998).Sequential Monte Carlo methods for dynamic systems,Journal of the Ameri-can Statistical Association93:1032–1044.MacEachern,S.N.,Clyde,M.and Liu,J.S.(1999).Sequential importance sampling for nonparametric Bayes models:the next generation,Canadian Jour-nal of Statistics27:251–267.Murphy,K.P.(2000).Bayesian map learning in dynamic environments,in S.Solla,T.Leen and K.-R.M¨u ller (eds),Advances in Neural Information Processing Systems12,MIT 
Press,pp.1015–1021.Pearl,J.(1988).Probabilistic Reasoning in Intelligent Sys-tems:Networks of Plausible Inference,Morgan Kauf-mann.Pitt,M.K.and Shephard,N.(1999).Filtering via simula-tion:Auxiliary particlefilters,Journal of the Ameri-can Statistical Association94(446):590–599.Smith,R.L.and Miller,J.E.(1986).Predictive records, Journal of the Royal Statistical Society B36:79–88.Uhlig,H.(1997).Bayesian vector-autoregressions with stochastic volatility,Econometrica.Vidoni,P.(1999).Exponential family state space models based on a conjugate latent process,Journal of the Royal Statistical Society B61:213–221.West,M.(1993).Mixture models,Monte Carlo,Bayesian updating and dynamic models,Computing Science and Statistics24:325–333.West,M.and Harrison,J.(1996).Bayesian Forecasting and Dynamic Linear Models,Springer-Verlag.。
Bayesian deep learning (bayesian deep learning)

Contents: this article briefly explains what Bayesian deep learning is, how Bayesian deep learning is used to make predictions, and how it differs from ordinary deep learning. On how Bayesian deep learning is trained, only a rough outline is given here. Before introducing Bayesian deep learning, let us first review Bayes' formula.

Bayes' formula:
$$p(z\mid x)=\frac{p(x,z)}{p(x)}=\frac{p(x\mid z)\,p(z)}{p(x)} \tag{1}$$
where $p(z\mid x)$ is called the posterior, $p(x,z)$ the joint probability, $p(x\mid z)$ the likelihood, $p(z)$ the prior, and $p(x)$ the evidence. If we further introduce the law of total probability, $p(x)=\int p(x\mid z)\,p(z)\,dz$, equation (1) can be rewritten as
$$p(z\mid x)=\frac{p(x\mid z)\,p(z)}{\int p(x\mid z)\,p(z)\,dz} \tag{2}$$
If $z$ is a discrete variable, the integral $\int$ in the denominator of (2) is simply replaced by a sum $\sum$. (A probability mass function is usually written with an uppercase $P(\cdot)$ and a probability density function with a lowercase $p(\cdot)$; for simplicity we do not distinguish the two here and use the continuous case as the example.)

What is Bayesian deep learning? Consider the simplest possible neural unit, shown in Figure 1 (a single neuron). In ordinary deep learning, the weights $w_i\ (i=1,\dots,n)$ and the bias $b$ are definite values, for example $w_1=0.1$, $b=0.2$. Even when gradient descent updates $w_i \leftarrow w_i - \alpha\,\partial J/\partial w_i$, this does not change the fact that each $w_i$ and $b$ remains a single definite value. So what is Bayesian deep learning? Turning $w_i$ and $b$ from definite values into distributions is exactly Bayesian deep learning: it holds that every weight and every bias should be a distribution rather than a single definite value.
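The paragraph above says that Bayesian deep learning replaces point-valued weights and biases with distributions. Below is a minimal, illustrative numpy sketch (not from the original article) of what this means at prediction time: each parameter of a single linear unit is modeled as a Gaussian, and the predictive distribution is approximated by Monte Carlo sampling of the parameters. The means and standard deviations are made-up numbers chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A single "neuron" y = w . x + b, but with w and b treated as distributions
# (here independent Gaussians), rather than as fixed values.
mu_w, sigma_w = np.array([0.1, -0.3]), np.array([0.05, 0.10])   # illustrative values
mu_b, sigma_b = 0.2, 0.05

def predict(x, n_samples=1000):
    """Monte Carlo approximation of the predictive distribution p(y | x)."""
    w = rng.normal(mu_w, sigma_w, size=(n_samples, len(mu_w)))   # sample weight vectors
    b = rng.normal(mu_b, sigma_b, size=n_samples)                # sample biases
    y = w @ x + b                                                # one prediction per sample
    return y.mean(), y.std()                                     # predictive mean and uncertainty

mean, std = predict(np.array([1.0, 2.0]))
print(f"predictive mean {mean:.3f} +/- {std:.3f}")
```

The point of the sketch is only that, once the parameters are distributions, a prediction is itself a distribution, and its spread can be reported as an uncertainty estimate.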
A GPS navigation algorithm based on Bayesian estimation theory

Zhou Zebo; Shen Yunzhong

[Journal] Journal of Astronautics (宇航学报)
[Year (Volume), Issue] 2009, 30(3)
[Abstract] When the redundancy of the observations is low or anomalies occur, reliable positioning results are hard to obtain from single-epoch GPS data alone, and historical information is needed to improve positioning accuracy and reliability. This paper derives the general form of the Bayesian recursive algorithm, establishes prior-information acquisition schemes based on a moving-window Newton forward-interpolation model and on constant-velocity and quadratic-curve least-squares fitting models, and introduces robust estimation for the correlated observation equations at the current epoch. Analysis with real kinematic GPS data shows that robust estimation effectively suppresses the influence of gross errors on GPS positioning results; the fitting models smooth the errors, with the quadratic-curve model clearly outperforming the constant-velocity model, although both require the fitting coefficients to be computed in real time; the Newton forward-interpolation model, by contrast, has constant coefficients, is simple to compute, and markedly improves positioning accuracy and reliability.
[Pages] 6 (pp. 1073–1078)
[Authors] Zhou Zebo; Shen Yunzhong
[Affiliation] Department of Surveying and Geo-Informatics, Tongji University, Shanghai 200092; Key Laboratory of Modern Engineering Surveying, National Administration of Surveying and Mapping, Shanghai 200092
[Language] Chinese
[CLC Classification] V417.9
Research on machine-learning-based processing of ocean remote sensing images

In recent years, with the development of science and technology and the growing attention paid to the marine environment, ocean remote sensing has been widely applied. As remote sensing technology keeps improving, people can extract more information and data from ocean remote sensing images and gain a better understanding of, and better control over, the marine environment. Machine-learning-based processing of ocean remote sensing images offers even more convenience and possibilities.

1. Applications of machine learning in ocean remote sensing image processing

Machine learning is a family of methods by which computers learn from data on their own; it can recognize images, sounds, speech, and other signals, and analyze and interpret that information. In ocean remote sensing image processing, machine learning can be trained on large volumes of ocean remote sensing data, mine the useful information contained in the images, and carry out a whole series of data analysis and processing steps automatically. For example, machine learning can help automatically identify various substances and organisms in the ocean, such as seawater, algae, and plankton, and from this infer the state of the marine ecosystem and its trends. In addition, machine learning can use remote sensing images to predict changes in environmental factors such as water temperature, water quality, and ocean currents, providing further marine information and data support.

2. Choosing a machine learning algorithm for ocean remote sensing image processing

In practice, different ocean remote sensing data and application scenarios call for different machine learning algorithms. The most commonly used ones include the following (a minimal classification sketch follows after this list):

1. Convolutional neural networks (CNN). A CNN is a deep learning algorithm widely used for image and video analysis. In ocean remote sensing image processing, a CNN can automatically extract features from the image through convolution and pooling operations and then, via classification or regression, help identify the various substances and organisms in the ocean.

2. Support vector machines (SVM). An SVM is a classification algorithm that separates the classes in the training data by finding a maximum-margin decision boundary between them. In ocean remote sensing image processing, SVMs can help automatically classify and identify the various substances and organisms appearing in the images.

3. Decision trees. A decision tree is a common classification and regression algorithm that recursively splits the data set into subsets in order to classify or predict. In ocean remote sensing image processing, decision trees can classify the different regions of an image and help automatically identify the various substances and organisms in the ocean.
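As a concrete illustration of the classification step described in the list above, here is a minimal scikit-learn sketch (not from the original text) that trains an RBF-kernel SVM on synthetic "pixel" feature vectors standing in for multi-band remote sensing measurements. The band values, class labels, and hyperparameters are invented purely to exercise the pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic "pixels": 4 spectral bands, 3 classes (e.g. water / algae / plankton bloom).
n = 600
y = rng.integers(0, 3, size=n)
X = rng.normal(loc=y[:, None] * 0.8, scale=0.5, size=(n, 4))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Standardize the bands, then fit an RBF-kernel support vector classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

In a real application, the feature vectors would come from the spectral bands (and possibly texture features) of each pixel or image patch, and the labels from ground-truth survey data.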
Visualization of high-dimensional data based on random forests

Lü Bing; Wang Huazhen

[Journal] Journal of Computer Applications (计算机应用)
[Year (Volume), Issue] 2014, 34(6)
[Abstract] Most current methods for mining high-dimensional data are based on mathematical theory rather than on visual intuition. To make high-dimensional data easier to analyze and evaluate intuitively, this paper proposes using random forests (RF) for data visualization. First, supervised learning with an RF yields a similarity measure between samples; principal coordinate analysis then reduces its dimensionality, mapping the relational information of the high-dimensional data into a low-dimensional space; finally, the data are visualized as a scatter plot in that low-dimensional space. Experiments on high-dimensional gene data sets show that visualization based on RF supervised dimensionality reduction displays the class structure of high-dimensional data well and is superior to visualization after traditional unsupervised dimensionality reduction.
[Pages] 6 (pp. 1613–1617, 1644)
[Authors] Lü Bing; Wang Huazhen
[Affiliation] College of Computer Science and Technology, Huaqiao University, Xiamen 361021, Fujian, China
[Language] Chinese
[CLC Classification] TP391.411
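As a rough illustration of the pipeline described in the abstract above (an RF-derived sample similarity, followed by dimensionality reduction and a scatter plot), here is a scikit-learn sketch. It is not the authors' code: the proximity matrix is built from shared leaf assignments via `RandomForestClassifier.apply`, metric MDS on a precomputed dissimilarity is used as a stand-in for principal coordinate analysis, and the iris data set is only a placeholder for a high-dimensional gene data set.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris            # placeholder for a gene expression data set
from sklearn.ensemble import RandomForestClassifier
from sklearn.manifold import MDS

X, y = load_iris(return_X_y=True)

# Supervised step: fit an RF and record which leaf each sample falls into in each tree.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
leaves = rf.apply(X)                               # shape (n_samples, n_trees)

# Proximity = fraction of trees in which two samples share a leaf; dissimilarity from it.
prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
dist = np.sqrt(1.0 - prox)

# Embed the dissimilarities in 2-D (metric MDS as a stand-in for PCoA) and plot.
emb = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)
plt.scatter(emb[:, 0], emb[:, 1], c=y)
plt.title("RF-proximity embedding (illustrative)")
plt.show()
```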
FastSLAM A factored solution to the simultaneous localization and mapping problem
FastSLAM:A Factored Solution to the Simultaneous Localization and Mapping ProblemMichael Montemerlo and Sebastian Thrun School of Computer ScienceCarnegie Mellon UniversityPittsburgh,PA15213mmde@,thrun@ Daphne Koller and Ben Wegbreit Computer Science DepartmentStanford UniversityStanford,CA94305-9010koller@,ben@AbstractThe ability to simultaneously localize a robot and ac-curately map its surroundings is considered by many tobe a key prerequisite of truly autonomous robots.How-ever,few approaches to this problem scale up to handlethe very large number of landmarks present in real envi-ronments.Kalmanfilter-based algorithms,for example,require time quadratic in the number of landmarks to in-corporate each sensor observation.This paper presentsFastSLAM,an algorithm that recursively estimates thefull posterior distribution over robot pose and landmarklocations,yet scales logarithmically with the number oflandmarks in the map.This algorithm is based on an ex-act factorization of the posterior into a product of con-ditional landmark distributions and a distribution overrobot paths.The algorithm has been run successfullyon as many as50,000landmarks,environments far be-yond the reach of previous approaches.Experimentalresults demonstrate the advantages and limitations ofthe FastSLAM algorithm on both simulated and real-world data.IntroductionThe problem of simultaneous localization and mapping,also known as SLAM,has attracted immense attention in the mo-bile robotics literature.SLAM addresses the problem of building a map of an environment from a sequence of land-mark measurements obtained from a moving robot.Since robot motion is subject to error,the mapping problem neces-sarily induces a robot localization problem—hence the name SLAM.The ability to simultaneously localize a robot and accurately map its environment is considered by many to be a key prerequisite of truly autonomous robots[3,7,16]. The dominant approach to the SLAM problem was in-troduced in a seminal paper by Smith,Self,and Cheese-man[15].This paper proposed the use of the extended Kalmanfilter(EKF)for incrementally estimating the poste-rior distribution over robot pose along with the positions of the landmarks.In the last decade,this approach has found widespread acceptance infield robotics,as a recent tutorial paper[2]documents.Recent research has focused on scal-ing this approach to larger environments with more than aFigure1:The SLAM problem:The robot moves from pose through a sequence of controls,.As it moves,it observes nearby landmarks.At time,it observes landmark out of two landmarks,.The measurement is denoted (range and bearing).At time,it observes the other landmark, ,and at time,it observes again.The SLAM problem is concerned with estimating the locations of the landmarks and the robot’s path from the controls and the measurements.The gray shading illustrates a conditional independence relation.plementation of this idea leads to an algorithm that requires time,where is the number of particles in the particlefilter and is the number of landmarks.We de-velop a tree-based data structure that reduces the running time of FastSLAM to,making it significantly faster than existing EKF-based SLAM algorithms.We also extend the FastSLAM algorithm to situations with unknown data association and unknown number of landmarks,show-ing that our approach can be extended to the full range of SLAM problems discussed in the literature. 
Experimental results using a physical robot and a robot simulator illustrate that the FastSLAM algorithm can han-dle orders of magnitude more landmarks than present day approaches.We alsofind that in certain situations,an in-creased number of landmarks leads to a mild reduction of the number of particles needed to generate accurate maps—whereas in others the number of particles required for accurate mapping may be prohibitively large.SLAM Problem DefinitionThe SLAM problem,as defined in the rich body of litera-ture on SLAM,is best described as a probabilistic Markov chain.The robot’s pose at time will be denoted.For robots operating in the plane—which is the case in all of our experiments—poses are comprised of a robot’s-coordi-nate in the plane and its heading direction.Poses evolve according to a probabilistic law,often re-ferred to as the motion model:(1) Thus,is a probabilistic function of the robot control and the previous pose.In mobile robotics,the motion model is usually a time-invariant probabilistic generalization of robot kinematics[1].The robot’s environment possesses immobile land-marks.Each landmark is characterized by its location in space,denoted for.Without loss of gen-erality,we will think of landmarks as points in the plane,so that locations are specified by two numerical values.To map its environment,the robot can sense landmarks. For example,it may be able to measure range and bearing to a landmark,relative to its local coordinate frame.The mea-surement at time will be denoted.While robots can often sense more than one landmark at a time,we follow com-monplace notation by assuming that sensor measurements correspond to exactly one landmark[2].This convention is adopted solely for mathematical convenience.It poses no restriction,as multiple landmark sightings at a single time step can be processed sequentially.Sensor measurements are governed by a probabilistic law, often referred to as the measurement model:(2) Here is the set of all landmarks,andis the index of the landmark perceived at time.For example,in Figure1,we have, and,since the robotfirst observes landmark, then landmark,andfinally landmark for a second time. 
Many measurement models in the literature assume that the robot can measure range and bearing to landmarks,con-founded by measurement noise.The variable is often referred to as correspondence.Most theoretical work in the literature assumes knowledge of the correspondence or,put differently,that landmarks are uniquely identifiable.Practi-cal implementations use maximum likelihood estimators for estimating the correspondence on-the-fly,which work well if landmarks are spaced sufficiently far apart.In large parts of this paper we will simply assume that landmarks are iden-tifiable,but we will also discuss an extension that estimates the correspondences from data.We are now ready to formulate the SLAM problem.Most generally,SLAM is the problem of determining the location of all landmarks and robot poses from measurementsand controls.In probabilis-tic terms,this is expressed by the posterior, where we use the superscript to refer to a set of variables from time1to time.If the correspondences are known,the SLAM problem is simpler:(3) As discussed in the introduction,all individual landmark es-timation problems are independent if one knew the robot’s path and the correspondence variables.This condi-tional independence is the basis of the FastSLAM algorithm described in the next section.FastSLAM with Known Correspondences We begin our consideration with the important case where the correspondences are known,and so is the number of landmarks observed thus far.Factored RepresentationThe conditional independence property of the SLAM prob-lem implies that the posterior(3)can be factored as follows:(4)Put verbally,the problem can be decomposed into esti-mation problems,one problem of estimating a posterior over robot paths,and problems of estimating the locationsof the landmarks conditioned on the path estimate.This factorization is exact and always applicable in the SLAM problem,as previously argued in[12].The FastSLAM algorithm implements the path estimatorusing a modified particlefilter[4].As we argue further below,thisfilter can sample efficiently from this space,providing a good approximation of the poste-rior even under non-linear motion kinematics.The land-mark pose estimators are realized by Kalmanfilters,using separatefilters for different landmarks. Because the landmark estimates are conditioned on the path estimate,each particle in the particlefilter has its own,lo-cal landmark estimates.Thus,for particles and land-marks,there will be a total of Kalmanfilters,each of dimension2(for the two landmark coordinates).This repre-sentation will now be discussed in detail.Particle Filter Path EstimationFastSLAM employs a particlefilter for estimating the path posterior in(4),using afilter that is similar (but not identical)to the Monte Carlo localization(MCL) algorithm[1].MCL is an application of particlefilter tothe problem of robot pose estimation(localization).At each point in time,both algorithms maintain a set of particles rep-resenting the posterior,denoted.Each particle represents a“guess”of the robot’s path:(5) We use the superscript notation to refer to the-th par-ticle in the set.The particle set is calculated incrementally,from theset at time,a robot control,and a measurement.First,each particle in is used to generate a probabilistic guess of the robot’s pose at time:(6) obtained by sampling from the probabilistic motion model. 
This estimate is then added to a temporary set of parti-cles,along with the path.Under the assumption that the set of particles in is distributed according to(which is an asymptotically cor-rect approximation),the new particle is distributed accord-ing to.This distribution is commonly referred to as the proposal distribution of particlefiltering. After generating particles in this way,the new set is obtained by sampling from the temporary particle set.Each particle is drawn(with replacement)with a probability proportional to a so-called importance factor,which is calculated as follows[10]:target distribution(7) The exact calculation of(7)will be discussed further below. The resulting sample set is distributed according to an ap-proximation to the desired pose posterior,an approximation which is correct as the number of particlesgoes to infinity.We also notice that only the most recent robot pose estimate is used when generating the parti-cle set.This will allows us to silently“forget”all other pose estimates,rendering the size of each particle indepen-dent of the time index.Landmark Location EstimationFastSLAM represents the conditional landmark estimatesin(4)by Kalmanfilters.Since this estimate is conditioned on the robot pose,the Kalmanfilters are attached to individual pose particles in.More specifi-cally,the full posterior over paths and landmark positions in the FastSLAM algorithm is represented by the sample set(8) Here and are mean and covariance of the Gaus-sian representing the-th landmark,attached to the-th particle.In the planar robot navigation scenario,each mean is a two-element vector,and is a2by2matrix. The posterior over the-th landmark pose is easily ob-tained.Its computation depends on whether or not, that is,whether or not was observed at time.For, we obtain(9)For,we simply leave the Gaussian unchanged:(10) The FastSLAM algorithm implements the update equation (9)using the extended Kalmanfilter(EKF).As in existing EKF approaches to SLAM,thisfilter uses a linearized ver-sion of the perceptual model[2].Thus, FastSLAM’s EKF is similar to the traditional EKF for SLAM[15]in that it approximates the measurement model using a linear Gaussian function.We note that,with a lin-ear Gaussian observation model,the resulting distributionis exactly a Gaussian,even if the mo-tion model is not linear.This is a consequence of the use of sampling to approximate the distribution over the robot’s pose.One significant difference between the FastSLAM algo-rithm’s use of Kalmanfilters and that of the traditional SLAM algorithm is that the updates in the FastSLAM algo-rithm involve only a Gaussian of dimension two(for the two landmark location parameters),whereas in the EKF-based SLAM approach a Gaussian of size has to be updated (with landmarks and3robot pose parameters).This cal-culation can be done in constant time in FastSLAM,whereas it requires time quadratic in in standard SLAM. 
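To make the per-particle structure described above concrete, here is a heavily simplified Python/numpy sketch of one FastSLAM update for a single observed landmark with known data association: each particle carries a pose sample plus an independent 2-D EKF (mean and covariance) per landmark. The additive motion model, the noise values, and the landmark-initialization step are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def fastslam_step(particles, u, z, j, R, motion_noise=(0.05, 0.05, 0.02)):
    """One simplified FastSLAM update for a single observed landmark.

    particles : list of dicts {'pose': np.array([x, y, phi]), 'lm': {j: (mu, Sigma)}}
    u         : control, here an additive world-frame motion (dx, dy, dphi)
    z         : observation (range, bearing) of landmark j
    R         : 2x2 measurement noise covariance
    """
    weights = np.zeros(len(particles))
    for i, p in enumerate(particles):
        # 1. Sample a new pose from the (simplified, additive) motion model.
        p['pose'] = p['pose'] + np.asarray(u) + np.random.randn(3) * motion_noise
        x, y, phi = p['pose']

        if j not in p['lm']:
            # Initialise a new landmark from the observation (illustrative choice).
            r, b = z
            mu = np.array([x + r * np.cos(b + phi), y + r * np.sin(b + phi)])
            p['lm'][j] = (mu, np.eye(2))
            weights[i] = 1.0
            continue

        # 2. EKF update of landmark j, conditioned on this particle's pose.
        mu, Sigma = p['lm'][j]
        dx, dy = mu[0] - x, mu[1] - y
        q = dx ** 2 + dy ** 2
        z_hat = np.array([np.sqrt(q), wrap(np.arctan2(dy, dx) - phi)])
        H = np.array([[dx / np.sqrt(q), dy / np.sqrt(q)],
                      [-dy / q,          dx / q]])
        S = H @ Sigma @ H.T + R
        K = Sigma @ H.T @ np.linalg.inv(S)
        nu = np.array([z[0] - z_hat[0], wrap(z[1] - z_hat[1])])
        p['lm'][j] = (mu + K @ nu, (np.eye(2) - K @ H) @ Sigma)

        # 3. Importance weight = likelihood of the observation under the predicted measurement.
        weights[i] = np.exp(-0.5 * nu @ np.linalg.inv(S) @ nu) / \
                     (2 * np.pi * np.sqrt(np.linalg.det(S)))

    # 4. Resample particles in proportion to their weights (multinomial resampling).
    weights /= weights.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return [{'pose': particles[k]['pose'].copy(),
             'lm': {m: (mu.copy(), S.copy()) for m, (mu, S) in particles[k]['lm'].items()}}
            for k in idx]

# usage sketch:
# particles = [{'pose': np.zeros(3), 'lm': {}} for _ in range(100)]
# particles = fastslam_step(particles, u=(1.0, 0.0, 0.0), z=(2.2, 0.3), j=0, R=np.diag([0.1, 0.01]))
```

Note how the landmark filters are only two-dimensional and private to each particle, which is the source of the constant-time per-landmark update contrasted with the quadratic cost of the joint EKF.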
Calculating the Importance WeightsLet us now return to the problem of calculating the impor-tance weights needed for particlefilter resampling,as defined in(7):µµµµµµµµ8,Σ87,Σ76,Σ65,Σ54,Σ43,Σ32,Σ21,Σ1[m][m][m][m][m][m][m][m][m][m][m][m][m][m][m][m]Figure 2:A tree representinglandmark estimates within asingle particle.(a)(b)(c)Figure4:(a)Physical robot mapping rocks,in a testbed developed for Mars Rover research.(b)Raw range and path data.(c)Map generated using FastSLAM(dots),and locations of rocks determined manually(circles).in the map.It also has to determine if a measurement cor-responds to a new,previously unseen landmark,in whichcase the map should be augmented accordingly.In most existing SLAM solutions based on EKFs,theseproblems are solved via maximum likelihood.More specif-ically,the probability of a data association is given by(12)The step labeled“PF”uses the particlefilter approxima-tion to the posterior.Thefinal step assumesa uniform prior,which is commonly used[2].The maximum likelihood data association is simply the in-dex that maximizes(12).If the maximum value of—with careful consideration of all constantsin(12)—is below a threshold,the landmark is consideredpreviously unseen and the map is augmented accordingly.In FastSLAM,the data association is estimated on a per-particle basis:.As a result,different particles may rely on different values of.Theymight even possess different numbers of landmarks in theirrespective maps.This constitutes a primary difference toEKF approaches,which determine the data association onlyonce for each sensor measurement.It has been observedfrequently that false data association will make the conven-tional EKF approach fail catastrophically[2].FastSLAM ismore likely to recover,thanks to its ability to pursue multi-ple data associations simultaneously.Particles with wrongdata association are(in expectation)more likely to disap-pear in the resampling process than those that guess the dataassociation correctly.We believe that,under mild assumptions(e.g.,minimumspacing between landmarks and bounded sensor error),thedata association search can be implemented in time loga-rithmic in.One possibility is the use of kd-trees as anindexing scheme in the tree structures above,instead of thelandmark number,as proposed in[11].Experimental ResultsThe FastSLAM algorithm was tested extensively under vari-ous conditions.Real-world experiments were complimentedby systematic simulation experiments,to investigate thescaling abilities of the approach.Overall,the results indicatefavorably scaling to large number of landmarks and smallparticle sets.Afixed number of particles(e.g.,)appears to work well across a large number of situations.Figure4a shows the physical robot testbed,which consistsof a small arena set up under NASA funding for Mars Roverresearch.A Pioneer robot equipped with a SICK laser rangefinder was driven along an approximate straight line,gener-ating the raw data shown in Figure4b.The resulting mapgenerated with samples is depicted in Figure4c,with manually determined landmark locations marked bycircles.The robot’s estimates are indicated by x’s,illustrat-ing the high accuracy of the resulting maps.FastSLAM re-sulted in an average residual map error of8.3centimeters,when compared to the manually generated map.Unfortunately,the physical testbed does not allow for sys-tematic experiments regarding the scaling properties of theapproach.In extensive simulations,the number of land-marks was increased up to a total of50,000,which Fast-SLAM successfully mapped 
with as few as100particles.Here,the number of parameters in FastSLAM is approx-imately0.3%of that in the conventional EKF.Maps with50,000landmarks are entirely out of range for conventionalSLAM techniques,due to their enormous computationalcomplexity.Figure5shows example maps with smallernumbers of landmarks,for different maximum sensor rangesas indicated.The ellipses in Figure5visualize the residualuncertainty when integrated over all particles and Gaussians.In a set of experiments specifically aimed to elucidate thescaling properties of the approach,we evaluated the map androbot pose errors as a function of the number of landmarks,and the number of particles,respectively.The resultsare graphically depicted in Figure6.Figure6a illustratesthat an increase in the number of landmarks mildly re-duces the error in the map and the robot pose.This is be-cause the larger the number of landmarks,the smaller therobot pose error at any point in time.Increasing the numberof particles also bears a positive effect on the map andpose errors,as illustrated in Figure6b.In both diagrams,thebars correspond to95%confidence intervals.Figure5:Maps and estimated robot path,generated using sensors with(a)large and(b)small perceptualfields.The correct landmark locations are shown as dots,and the estimates as ellipses,whose sizes correspond to the residual uncertainty.ConclusionWe presented the FastSLAM algorithm,an efficient new so-lution to the concurrent mapping and localization problem. This algorithm utilizes a Rao-Blackwellized representation of the posterior,integrating particlefilter and Kalmanfilter representations.Similar to Murphy’s work[12],FastSLAM is based on an inherent conditional independence property of the SLAM problem.However,Murphy’s approach main-tains maps using grid positions with discrete values,and therefore scales poorly with the size of the map.His ap-proach also did not deal with the data association problem, which does not arise in the grid-based setting.In FastSLAM,landmark estimates are efficiently repre-sented using tree structures.Updating the posterior requires time,where is the number of particles and the number of landmarks.This is in contrast to the complexity of the common Kalman-filter based ap-proach to SLAM.Experimental results illustrate that Fast-SLAM can build maps with orders of magnitude more land-marks than previous methods.They also demonstrate that under certain conditions,a small number of particles works well regardless of the number of landmarks. 
Acknowledgments We thank Kevin Murphy and Nando de Freitas for insightful discussions on this topic.This research was sponsored by DARPA’s MARS Program(Contract number N66001-01-C-6018)and the National Science Foundation(CA-REER grant number IIS-9876136and regular grant number IIS-9877033).We thank the Hertz Foundation for their support of Michael Montemerlo’s graduate research.Daphne Koller was supported by the Office of Naval Research,Young Investigator (PECASE)grant N00014-99-1-0464.This work was done while Sebastian Thrun was visiting Stanford University.References[1] F.Dellaert,D.Fox,W.Burgard,and S.Thrun.Monte Carlolocalization for mobile robots.ICRA-99.[2]G.Dissanayake,P.Newman,S.Clark,H.F.Durrant-Whyte,and M.Csorba.An experimental and theoretical investigation into simultaneous localisation and map building(SLAM).Lecture Notes in Control and Information Sciences:Exper-imental Robotics VI,Springer,2000.[3]G.Dissanayake,P.Newman,S.Clark,H.F.Durrant-Whyte,and M.Csorba.A solution to the simultaneous localisation and map building(SLAM)problem.IEEE Transactions of Robotics and Automation,2001.[4] A.Doucet,J.F.G.de Freitas,and N.J.Gordon,editors.Se-quential Monte Carlo Methods In Practice.Springer,2001.(a)(b)Figure6:Accuracy of the FastSLAM algorithm as a function of (a)the number of landmarks,and(b)the number of particles .Large number of landmarks reduce the robot localization error, with little effect on the map error.Good results can be achieved with as few as100particles.[5]A Doucet,N.de Freitas,K.Murphy,and S.Russell.Rao-Blackwellised particlefiltering for dynamic Bayesian net-works.UAI-2000.[6]J.Guivant and E.Nebot.Optimization of the simultaneouslocalization and map building algorithm for real time imple-mentation.IEEE Transaction of Robotic and Automation, May2001.[7] D.Kortenkamp,R.P.Bonasso,and R.Murphy,editors.AI-based Mobile Robots:Case studies of successful robot sys-tems,MIT Press,1998.[8]J.J.Leonard and H.J.S.Feder.A computationally efficientmethod for large-scale concurrent mapping and localization.ISRR-99.[9] F.Lu and ios.Globally consistent range scan alignmentfor environment mapping.Autonomous Robots,4,1997. [10]N.Metropolis, A.W.Rosenbluth,M.N.Rosenbluth, A.H.Teller,and E.Teller.Equations of state calculations by fast computing machine.Journal of Chemical Physics,21,1953.[11] A.W.Moore.Very fast EM-based mixture model clusteringusing multiresolution kd-trees.NIPS-98.[12]K.Murphy.Bayesian map learning in dynamic environments.NIPS-99.[13]K.Murphy and S.Russell.Rao-blackwellized particlefil-tering for dynamic bayesian networks.In Sequential Monte Carlo Methods in Practice,Springer,2001.[14]P.Newman.On the Structure and Solution of the Simulta-neous Localisation and Map Building Problem.PhD thesis, Univ.of Sydney,2000.[15]R.Smith,M.Self,and P.Cheeseman.Estimating uncertainspatial relationships in robotics.In Autonomous Robot Vehni-cles,Springer,1990.[16] C.Thorpe and H.Durrant-Whyte.Field robots.ISRR-2001.[17]S.Thrun,D.Fox,and W.Burgard.A probabilistic approachto concurrent mapping and localization for mobile robots.Machine Learning,31,1998.。
An image texture recognition method based on Bayesian linear programming

Zheng Zhaobao

[Journal] Geomatics and Information Science of Wuhan University (武汉大学学报·信息科学版)
[Year (Volume), Issue] 2007, 32(3)
[Abstract] This paper introduces the basic principles of Bayesian linear programming and uses it to solve image texture recognition with uncertain MRF (Markov random field) parameters, proposing five selection methods for turning uncertain state parameters into deterministic ones. Comparative recognition experiments on real aerial images identify a reasonable selection method. The approach provides a new way of handling uncertain feature parameters in texture recognition.
[Pages] 4 (pp. 193–196)
[Keywords] Bayesian linear programming; texture image; recognition
[Author] Zheng Zhaobao
[Affiliation] School of Remote Sensing and Information Engineering, Wuhan University
[Language] Chinese
[CLC Classification] TP751; P231.5
Image restoration based on a wavelet-domain hidden Markov tree model

Wang Xuelin; Zhao Shubin; Peng Silong

[Journal] Chinese Journal of Computers (计算机学报)
[Year (Volume), Issue] 2005, 28(6)
[Abstract] Starting from the Bayesian formulation of image restoration, this paper proposes a linear image restoration algorithm based on a wavelet-domain hidden Markov tree (HMT) model. The wavelet-domain HMT model describes the probability distribution of the coefficients in each subband with a Gaussian mixture, and uses the Markov dependence of the hidden states of the wavelet coefficients across scales to capture the property that the wavelet coefficients of natural images decay exponentially as the scale decreases. Because the wavelet-domain HMT model accurately characterizes the statistics of the wavelet transform of natural images, the proposed algorithm adopts it as the prior model for natural images, converts the image restoration problem into a constrained optimization problem, and solves it by steepest descent. An adaptive method for choosing the regularization parameter and the HMT model parameters is also proposed. Experimental results show that the algorithm reproduces edge information well, and the restored images improve markedly in both signal-to-noise ratio and visual quality.
[Pages] 7 (pp. 1006–1012)
[Authors] Wang Xuelin; Zhao Shubin; Peng Silong
[Affiliation] National ASIC Design Engineering Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100080
[Language] Chinese
[CLC Classification] TP391
A machine-learning-based model for predicting ocean resources

As humanity continues to explore and exploit natural resources, the importance of ocean resources has become increasingly evident. The ocean holds abundant mineral, energy, biological, and spatial resources, and their development and use matter greatly for easing global resource shortages, driving economic growth, and advancing science and technology. The marine environment, however, is complex and changeable, and assessing and predicting ocean resources effectively is no easy task. Against this background, machine-learning-based ocean resource prediction models have emerged, opening a door to more accurate prediction of ocean resources.

The distribution and abundance of ocean resources are shaped by many interacting factors. Physical factors such as seafloor topography, ocean circulation, seawater temperature, and salinity, as well as biological factors such as the distribution and ecological characteristics of marine organisms, all determine to varying degrees the form and quantity in which ocean resources occur. Traditional resource surveys rely on in-situ sampling and observation, which is time-consuming and labor-intensive and makes it hard to obtain ocean data over large areas and long time series, limiting how comprehensively and deeply ocean resources can be understood.

Machine learning, as a powerful tool for data analysis and prediction, offers new ideas and methods for this problem. Its core idea is to let the computer learn from and analyze large amounts of data, automatically discover the regularities and patterns hidden in them, and use those patterns to predict unseen data. In ocean resource prediction, machine learning algorithms can process and analyze multi-source, heterogeneous data from satellite remote sensing, ocean observation stations, research vessels, and other platforms, extract the valuable information they contain, and build models of the complex relationships between ocean resources and the various influencing factors.

Among the many machine learning algorithms, decision trees, random forests, support vector machines, and artificial neural networks are widely used in ocean resource prediction (a minimal prediction sketch appears at the end of this section). A decision tree repeatedly splits and classifies the data to build a tree from which resource predictions are read off. A random forest builds on decision trees by combining the predictions of many trees, which improves accuracy and stability. A support vector machine seeks an optimal hyperplane that separates data of different classes and handles both linear and nonlinear classification problems. An artificial neural network mimics the way neurons in the human brain work, learning from and predicting data through the connections and signal passing among a large number of neurons.

To build an effective ocean resource prediction model, a large amount of relevant data must first be collected, including physical, chemical, and biological ocean parameters as well as historical data on resource development and use.
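Purely as an illustration of the kind of model the preceding paragraphs describe, the sketch below fits a random forest regressor to synthetic environmental features (depth, sea-surface temperature, salinity, chlorophyll) and a synthetic "resource abundance" target. All variable names and numbers are invented to exercise the pipeline; they are not real survey data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 500
# Hypothetical predictors: depth (m), SST (deg C), salinity (PSU), chlorophyll-a (mg/m^3).
X = np.column_stack([
    rng.uniform(0, 4000, n),
    rng.uniform(-2, 30, n),
    rng.uniform(30, 37, n),
    rng.uniform(0, 10, n),
])
# Synthetic "resource abundance" with noise, used only to make the example runnable.
y = 0.002 * X[:, 0] + 0.5 * X[:, 3] - 0.1 * np.abs(X[:, 1] - 15) + rng.normal(0, 0.5, n)

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```

In a real setting the features would come from remote sensing products and in-situ measurements, and the target from survey or catch records; cross-validation as above gives a first check on predictive skill.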
Self-learning artificial intelligence — a speech (about 1500 characters)

Hello everyone. I am very happy and honored to take part in such a grand gathering today. The talk I bring you is a small piece of my own learning experience, titled "Self-learning artificial intelligence".

First, as we all know, on this 60th anniversary the person we should remember above all is the pioneer of artificial intelligence, Turing. It is under the call of his question that we have today's gathering and today's rapid progress in artificial intelligence. His question, "Can machines think?", can be read along different dimensions, and humanity's exploration of artificial intelligence can likewise be organized around explorations of those different readings.

The first exploration was at the level of logic. In the 1960s, the pioneers of AI thought about using logic and search to study artificial intelligence: playing chess, reasoning, path planning, and so on. They worked under a strong assumption that was, in some sense, very intuitive: intelligence, including whatever intelligence computers might be given, amounts to the arrangement and combination of computational physical symbols, and if we can arrange those physical symbols cleverly enough, intelligence can be obtained from sequences of zeros and ones. After some early successes, people also discovered that this assumption has its bottlenecks. Afterwards, part of the community turned to artificial intelligence with the capacity to learn, and various learning algorithms, machine learning algorithms, were developed, among them the artificial neural networks we are all familiar with.

We are also familiar with several milestones of artificial intelligence. The first widely recognized milestone is Deep Blue, and that match meant several things. One is that large-scale search over the space of possible states is, in effect, the arrangement and combination of physical symbols; in other words, part of the 1960s assumption was correct, and a great deal of intelligence really can be obtained from this kind of search and symbol manipulation. The following stage was "knowledge is power", a wave that arrived with the internet and big data: from the web and from different media we obtain a great deal of data, and once these data are distilled into knowledge, we can win a televised human-versus-machine quiz contest of that kind. After that, as Dr. Rui Yong has just reviewed in depth, the most recent breakthrough in artificial intelligence is deep neural networks. From a computational point of view, this breakthrough brings several benefits. One of them is that it turns a demand for global computation into a demand for local computation while losing very little information, and that is a central point behind countless achievements in computing.
Research on machine-learning-based target detection in ocean remote sensing images

Ocean remote sensing uses remote sensor systems to detect, observe, and record the signals produced by the ocean, providing information and data about the ocean component of the Earth system. Ocean remote sensing images, an important data source, are widely used in marine resource surveys, marine environmental monitoring, marine disaster early warning, and other fields. In these applications, detecting targets in ocean remote sensing images is a key technical problem.

Machine-learning-based target detection in ocean remote sensing images means using machine learning algorithms to recognize and detect targets in such images. The technology draws mainly on computer vision, pattern recognition, machine learning, and signal processing. The main difficulty is that changes in the marine environment affect how targets appear in the images, so the visual features are unstable and accurate recognition is hard. To address this, researchers use machine learning to train models that learn from and interpret the images, making accurate recognition and detection possible. Among machine learning approaches, deep learning performs particularly well in target detection: its models, trained as neural networks, can adapt to detect targets under different conditions and offer high accuracy and robustness, so deep learning is now widely applied to target detection in ocean remote sensing images.

Commonly used deep learning algorithms include convolutional neural networks (CNN), recurrent neural networks (RNN), and deep belief networks (DBN). Among them, CNNs, thanks to their ability to extract image features and their computational efficiency, have become one of the mainstream methods for target detection in ocean remote sensing images. A CNN is suited to tasks such as image classification and target detection; its core idea is to extract features from the image through multiple layers of convolution and pooling and to feed those features into fully connected layers for classification or detection (a minimal sketch follows at the end of this section). In ocean remote sensing imagery, a CNN that has learned ocean-specific features can detect targets in the marine environment efficiently and accurately.

Applying this technology in practice can provide solutions for a range of problems in marine science and engineering: identifying marine pollutants, estimating the distribution and abundance of populations in marine ecosystems, analyzing interactions among species in the marine environment, and so on. Nevertheless, target detection in ocean remote sensing images still faces problems and challenges in practice. First, the complexity and variability of the marine environment make target features unstable, which has to be addressed by collecting and processing training and test sets at large scale.
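To make the convolution–pooling–fully-connected pipeline described above concrete, here is a minimal PyTorch sketch of a small patch classifier that labels a 32x32 image chip as target versus background. The architecture, chip size, and random inputs are illustrative assumptions, not a published detection model.

```python
import torch
import torch.nn as nn

# A small patch classifier: input is a 3-band 32x32 image chip, output "target" vs "background".
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

x = torch.randn(8, 3, 32, 32)                         # a batch of 8 synthetic chips
logits = model(x)                                     # shape (8, 2)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
loss.backward()                                       # an optimizer step would follow in training
print(logits.shape, float(loss))
```

In a full detection system such a classifier would be applied to candidate windows or replaced by a dedicated detection architecture, and trained on labeled chips extracted from annotated remote sensing scenes.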
Kevin P.MurphyComputer Science DivisionUniversity of CaliforniaBerkeley,CA94720-1776murphyk@AbstractWe show how map learning can be formulated as inference in a graphicalmodel,which allows us to handle changing environments in a naturalmanner.We describe several different approximation schemes for theproblem,and illustrate some results on a simulated grid-world with doorsthat can open and close.We close by briefly discussing how to learn moregeneral models of(partially observed)environments,which can containa variable number of objects with changing internal state.1IntroductionMobile robots need to navigate in dynamic environments:on a short time scale,obstacles, such as people,can appear and disappear,and on longer time scales,structural changes, such as doors opening and closing,can occur.In this paper,we consider how to create models of dynamic environments.In particular,we are interested in modeling the location of objects,which we can represent using a map.This enables the robot to perform path planning,etc.We propose a Bayesian approach in which we view the map as a(matrix-valued)random variable,which can be updated by Bayes rule in just the same way that the other hidden state variables,such as the robot’s pose(position and orientation),are updated in the widely-used techniques of Markov localization[FBT98]and Kalmanfiltering.To do Bayesian updating,we must specify the observation model(how the state predicts what the robot should see)and the transition model(how the state changes,both over time and in response to the robot’s actions).We use the formalism of factored POMDPs (Partially Observable Markov Decision Process)[CKK96,KLC98,BDH99];see Section2 for details.We must also provide a mechanism for efficiently performing the update equations;we discuss various approximation algorithms for this in Section3.In Section4, we show some results of applying our algorithm to a simulated dynamic environment,and in Section5we conclude and discuss future work.2The modelThe three main components of a POMDP are the transition model,the observation model, and the reward model(which induces a policy).In this section,we focus on the transitionmapa b cFigure 1:(a)A POMDP depicted as a graphical model.The dotted nodes (,,and )represent the fixed parameters.The square nodes (12)are the decision variables (actions).The clear nodes,(12)represent the hidden state,such as the robot’slocation.The shaded nodes (12)are the observations.(b)A POMDP in which the map is explicitely represented as a set of random variables.All the map nodes connect to and ,as represented by the thick arrow.(c)A model in which the dynamic state of the world,such as whether doors 1and 2are open or closed,is stored separately from the static map.and observation models,treating the sequence of actions as observed inputs;in this paper,we use a simple exploration policy to choose these actions,as described in Section 4.If the “external”world is considered static (for modeling purposes),we can treat it as a fixed (but unknown)parameter ;in this case,the state usually just consists of the robot’s position and orientation (see Figure 1(a)).If the state variables are discrete-valued,we can perform Bayesian updating using a Hidden Markov Model (HMM);otherwise we can use (extended)Kalman filtering,or non-parametric sample-based filtering (see e.g.,[KF98,TLF99]).For example,let us consider a simple HMM model in which the state consists of the robot’s location ,which is represented as a point on a discrete grid,and the actions consist of 
moving one step in each of the four compass directions.The observation model specifies the probability of each possible observation vector as a function of location.In this paper,we assume an observation is one of symbols,although it is simple to extend the technique to handle observations from I R .With a discrete set of observations,the observation modelcan be represented as a stochastic matrix,.Similarly,the transition model,which specifies the probability of moving from one grid cell to another as a function of the action taken,can be represented as a set of stochastic matrices:1.The parameters and implicitely encode the map;hence estimating these parameters using EM is equivalent to estimating the map [KS96,SK98,TBF98].If the external world can change over time,we must represent it as a random variable,instead of a fixed parameter.We then augment our (hidden)state to include the randomvariables 1,where means location contains object at time :see Figure 1(b).1To parameterize the model,we need to specify a transition matrix forpositionFigure2:A two-dimensional sensor model in which the robot receives observations from a33neighborhood.The double-ringed nodes are deterministic multiplexer nodes,and connect in1-to-1fashion to the observations.the’s.For example,if an open door can become a closed door with probability change, we can write1change.(We discuss a more parsimonious representation of objects with internal state,such as doors,in Section5.)If the robot can change the external environment(e.g.,it can open and close doors,or move obstacles),then 1will also depend on.We also need to specify how affects the observations.The simplest way to do this is to create an indexical representation of space,i.e.,in robot-centered coordinates.If the robot knew its location,it could simply extract the appropriate part of to build a model of its immediate surroundings,which we call a local map.We can implement this in a graphical model by using deterministic multiplexer nodes,with the location acting as the selector:this is illustrated in Figure2.The nodes in the local map are defined such that ∆∆∆∆,for∆∆101. 
Each entry in the local map then gets passed through a stochastic link to produce the actual observation.The local map can also be used to predict whether actions will succeed or not, since the probability of moving to a neighboring cell will depend on whether it contains an obstacle,and the probability of actuator failure.3InferenceNaive application of an exact inference algorithm,such as junction tree[Jen96],to the graphical model in Figure1(b)would be wildly intractable,because all the nodes in the map would become correlated by virtue of having a common child.Intuitively,the correlation arises whenever the robot is uncertain of its location,as we can explain by means of a simple example.Suppose at time1the robot sees some object,but it does not know if it is in cell or.Hence we need to increase and,but record the fact that these values are not independent.The posterior on now has(at least)two modes, corresponding to the possible worlds in which contains or contains.If,at time2,the robotfigures out its location is1(e.g.,because it recognizes a landmark),then it can infer that at time1its location was(with high probability),and hence should go up—but should go down to compensate,since they are correlated.The Bayesian approach will handle this automatically:one of the two modes in the posteriorwill“collapse”once the robot knows where it is.Unfortunately,an exact posterior over would be of size,so we need tofind a more efficient representation.If the robot always knows where it is,there will be no correlation amongst the’s,because of the deterministic nature of the local map(see Section3.1). Hence we can represent the posterior in factored form,11, which takes space.(If we had perfect sensors,we would know what object was present in each cell,and could represent the map in just space.)In practice,the robot will not know its exact location,but,as an approximation,we can still project the posterior down onto the factored representation at each step.In[BK98b],they show that the error introduced by this kind of iterated approximation does not grow with time.If we assume that the map changes slowly,we can treat is as a constant,at least over a short window of time,,and“compile”it into the parameters of the local map. We can then alternate between estimating the locations in this window given the map,and updating the map given the estimated locations.(This is just an online version of the standard EM approach cf.[BK98a].)The main advantage is that locations early in the window get to look at“future”evidence,which can help to disambiguate them(consider the example of moving down a corridor with known landmarks at each end),and this can lead to less uncertainty about which part of the map to update.However,the(standard) assumption of parameter independence means this approach cannot handle correlations. 
Some alternative approximations would be to use sample-basedfiltering algorithms(see e.g.,[KF98,TLF99])or the variational approximation of[GJ97].3.1Exact inference within a sliceWe will now discuss how to do inference in the two-dimensional graphical model in Figure2.Given the assumption that the’s are independent,we only need to compute the marginals on each separately,which can be done exactly and efficiently.The basic idea is that we will consider each location one at a time,and update the part of the map around with the information from the sensors,weighted by the probability that is the correct location.The conditional likelihood of the evidence from the sensors in the33neighborhood centered on is given by a product of factors:Hence∆∆101∆∆∆∆(1)∆∆101each term being just a diagonal matrix times a vector,which takes time to compute.So we can update the posterior on in time as follows:,where is a normalizing constant.Once we have the weighting matrix,we can update the map in2time usingwhere.The terms can be computed in constant time by exploiting the fact that,when and are known,only a33submatrix of the nodes get“connected”to the evidence(via the local map),the other nodes being effectively “disconnected”from the graph.Formally,we haveif101otherwisewhere the terms,etc.have already been computed in Equation1.If we only condition on the most probable values of,Figure3:A simple environment used for the experiments.Shaded blocks represent walls,R represents the starting location of the robot;to its north is an open door and to its north-east is a closed door.Figure4:A sample run of the algorithm for the environment in Figure3. instead of considering all possible values,we can update the map in time;this approximation is called bounded conditioning.4ResultsWe illustrate the behavior of the algorithm when the robot is placed in the simple envi-ronment shown in Figure3.We used change01and assumed sensors and actuators could fail with probability0.1.The policy was to always visit the nearest uncertain cell cf. [YSA99].In Figure4,we show the belief state of the robot at each time step.The letters in the center of each cell is the robot’s best guess as to what is there(W=wall,O=open door, C=closed door,F=free space,D=open or closed door,?=very uncertain).The true location of the robot is indicated by H,and the robot’s best guess of its location is indicated by h. The observations from a33grid of sensors centered on the true location are shown above each cell;an asterisk denotes an erroneous observation,i.e.,a label which is different from the true contents of the cell.The title indicates the goal the robot is heading towards,using (row,column)notation(so(1,1)represents the top left corner),and the action it intends to take.We now discuss some of the interesting steps in this example.At step1,the robot starts in(4,2),and knows this fact for sure.The two nearest,accessible,uncertain cells are(2,2) and(4,4);it arbitrarily decides to head towards thefirst.Because of the33sensing neighborhood,it is sufficient to reach(3,2),so this becomes the goal.It then moves to the upper corridor.At step5,the robot receives several observations that are inconsistent with its prior beliefs.Consequently,it gets“confused”,and decides to stay put(action=“-”). 
At step6,the robot’s uncertainty about the status of the door at(3,2)exceeds threshold, as shown by the fact that the’O’becomes a’D’,which represents the fact that the robot knows the object is a door,but is not sure whether it is open or closed.This increase in itsuncertainty about door objects(but not walls,etc.)happens because the transition matrix for the nodes specifies that open doors can become closed doors and vice versa.The robot will revisit the door,to check its status,after it has examined all the closer cells.After succesfully learning a map of the environment,the robot starts running from door to door, trying to keep its map up-to-date.In experiments on other,similar,environments,we have found that the robot always learns the correct map unless the noise levels become too large.In this case,the robot can get “lost”,and it starts updating the wrong part of the map.If the robot wanders into an area with known distinguishing characteristics,it can relocalize itself,and hence repair the damage to the map.5Discussion and future workThe problem of modeling dynamic environments has been open for quite a while.In 1996,Cassandra et al.[CKK96]wrote:“One of the big shortcomings of modeling the environment with POMDP models is that there is too much dependence on the world being static.”In1998,Thrun[Thr98]wrote:“It is an open question as to how to incorporate models of moving objects into a grid-based representation.”In this paper,we have shown how to model dynamic environments using graphical models.(The only previous work we know that has addressed the problem of learning maps in dynamic environments is[YB96].) However,our representation still leaves much to be desired.In particular,suppose we are interested in representing the location of individual objects,each with their own internal state.If there are such objects,each of which can be in states,the current approach would require each to have possible values,as we saw in our representation of doors(which could be open or closed).In addition,the transition matrix for the’s would be hard to specify.A more parsimonious representation would use the map to store (a distribution over)the identity(or type)of the object in each cell,and would store the state of each object in separate random variables.(Of course,uncertainty about object identity would induce correlation amongst all the state variables,just as uncertainty about location did in our previous representation.)A simple example of an object-centered representation is shown in Figure1(c),in which we store the state of each door(open or closed)in the nodes;now can take on values in12.The transition matrix for the’s becomes the identity matrix(since we assume an object in a given location doesn’t move or change identity),and the transition matrix for the’s has change on the diagonal and1change off the diagonal.As another example,consider the task of following a person through a maze-like environment;since they may disappear from view(e.g.,around a corner),it is useful to have a model of their current position and heading.The transition model for the person will be similar to that of the robot’s(since they share the same physical environment),but will be based on a local map centered on the person’s location.The example above assumed that we knew there were two doors,and further,that we knew 1corresponded to the one in location(3,2)and2to the other one(or vice versa).When an agent is placed in an unknown environment,it will need to decide whether its current observations are due to an 
existing object,and if so,which one,or whether it needs to create a new object.(This problem arises even in the simpler context of learning maps of unknown size—afixed graphical model is clearly inadequate.)The situation is very similar to the problem of data association and tracking with multiple targets[BSF88].The standard approach there is to model each object’s state(usually its position and velocity) with a Gaussian;when a measurement is observed,it is assumed to come from the nearest object;if,however,the observation does not lie inside’s1-Σconfidence ellipsoid,a new object is created.Note that this hard-thresholding approach eliminates the need to modelthe correlation between the different objects.If a new object is needed,the agent will have to decide what kind of object it is,too;if it does not have a pre-existing ontology,it will have to invent categories for itself,perhaps by clustering percepts.We hope to work on these problems in the future,to better understand how to build an embedded Bayesian agent. AcknowledgementsI would like to thank Stuart Russell for many useful discussions.This work was supported by grant number ONR N00014-97-1-0941.References[BDH99] C.Boutilier,T.Dean,and S.Hanks.Decision theoretic planning:Structural assumptions and computational leverage.J.of AI Research,1999.[BK98a]X.Boyen and D.Koller.Approximate learning of dynamic models.In Advances in Neural Info.Proc.Systems,1998.[BK98b]X.Boyen and D.Koller.Tractable inference for complex stochastic processes.In Proc.of the Conf.on Uncertainty in AI,1998.[BSF88]Y.Bar-Shalom and T.Fortmann.Tracking and data association.Academic Press,1988. [CKK96] A.Cassandra,L.P.Kaelbling,and J.Kurien.Acting under uncertainty:Discrete Bayesian models for mobile-robot navigation.In IEEE Intl.Conf.on Intel.Robotics and Systems,1996.[FBT98] D.Fox,W.Burgard,and S.Thrun.Active Markov localization for mobile robots.Robotics and Autonomous Systems,1998.[GJ97]Z.Ghahramani and M.Jordan.Factorial Hidden Markov Models.Machine Learning, 29:245–273,1997.[Jen96] F.V.Jensen.An Introduction to Bayesian Networks.UCL Press,London,England,1996. [KF98] D.Koller and ing learning for approximation in stochastic processes.In Intl.Conf.on Machine Learning,1998.[KLC98]L.P.Kaelbling,M.Littman,and A.Cassandra.Planning and acting in partially observable stochastic domains.Artificial Intelligence,101,1998.[KS96]S.Koenig and R.Simmons.Unsupervised learning of probabilistic models for robot navigation.In IEEE Intl.Conf.on Robotics and Automation,1996.[SK98]H.Shatkay and L.P.Kaelbling.Heading in the right direction.In Intl.Conf.on Machine Learning,1998.[TBF98]S.Thrun,W.Burgard,and D.Fox.A probabilistic approach to concurrent mapping and localization for mobile robots.Machine Learning,31:29–53,1998.[Thr98]S.Thrun.Learning metric-topological maps for indoor mobile robot navigation.Artificial Intelligence,99(1):21–71,1998.[TLF99]S.Thrun,ngord,and D.Fox.Sample-based hidden Markov models.In Intl.Conf.on Machine Learning,1999.[YB96] B.Yamauchi and R.Beer.Spatial learning for navigation in dynamic environments.IEEE Trans.Systems,Man and Cybernetics B,26(3):496–505,1996.[YSA99] B.Yamauchi,A.Schultz,and W.Adams.Integrating exploration and localization for mobile robots.Adaptive Behavior,1999.。