ISR 2003: Adaptive Robot Based Visual Inspection of Complex Parts


High Technology Brings a Brand-New Visual Experience

Anonymous
[Journal] 《中国眼镜科技杂志》 (China Glasses Science-Technology Magazine)
[Year (Volume), Issue] 2006(000)004
[Abstract] As is well known, the German physicist Dr. Ernst Abbe, inventor of the Abbe number, a key parameter in lens design, was also a giant of world optics and one of the founders of Carl Zeiss.

[Pages] 2 (pp. 92-93)
[Language] Chinese
[CLC classification] F124.3

Reservoir Computing Approaches to Recurrent Neural Network Training

Corresponding author. Email addresses: m.lukosevicius@jacobs-university.de (Mantas Lukoševičius), h.jaeger@jacobs-university.de (Herbert Jaeger). Preprint submitted to Computer Science Review, January 18, 2010.
1. Introduction

Artificial recurrent neural networks (RNNs) represent a large and varied class of computational models that are designed by more or less detailed analogy with biological brain modules. In an RNN numerous abstract neurons (also called units or processing elements) are interconnected by likewise abstracted synaptic connections (or links), which enable activations to propagate through the network. The characteristic feature of RNNs that distinguishes them from the more widely used feedforward neural networks is that the connection topology possesses cycles. The existence of cycles has a profound impact:

• An RNN may develop a self-sustained temporal activation dynamics along its recurrent connection pathways, even in the absence of input. Mathematically, this renders an RNN to be a dynamical system, while feedforward networks are functions.

• If driven by an input signal, an RNN preserves in its internal state a nonlinear transformation of the input history — in other words, it has a dynamical memory, and is able to process temporal context information.

This review article concerns a particular subset of RNN-based research in two aspects:

• RNNs are used for a variety of scientific purposes, and at least two major classes of RNN models exist: they can be used for purposes of modeling biological brains, or as engineering tools for technical applications. The first usage belongs to the field of computational neuroscience, while the second
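As a minimal illustration of this dynamical-system view (not from the review; the sizes, weights and driving input below are invented), a discrete-time RNN state update can be written as:

import numpy as np

# Minimal discrete-time RNN update: the state x carries a nonlinear
# transformation of the whole input history (a "dynamical memory").
rng = np.random.default_rng(0)
n_units, n_in = 100, 1
W = rng.normal(scale=0.1, size=(n_units, n_units))   # recurrent weights (cycles)
W_in = rng.normal(scale=0.5, size=(n_units, n_in))   # input weights

def step(x, u):
    # x(n+1) = tanh(W x(n) + W_in u(n))
    return np.tanh(W @ x + W_in @ u)

x = np.zeros(n_units)
for n in range(200):                 # drive the network with a sine input
    u = np.array([np.sin(n / 10)])
    x = step(x, u)                   # internal state = temporal context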

Lightweight Snake Robot: System Design and Segmented Motion Planning Strategy

Contents: Introduction; Lightweight Snake Robot System Design; Segmented Motion Planning Strategy; Experiments and Analysis; Conclusions and Outlook; References

1. Introduction

Background: With the development of science and technology, robots are applied ever more widely in military, rescue, industrial and other fields. As a frontier of robotics, snake robots hold great research value and development potential.

Significance: As flexible, highly adaptable robots capable of complex motion, lightweight snake robots play an irreplaceable role in complex environments, and studying them is important for advancing robotics.

Research status: Research on snake robots at home and abroad has already produced solid results; some snake robots can move autonomously and adapt to complex environments. Problems remain, however, such as insufficient speed and stability and immature motion planning and control methods.

Outlook: Future snake robots will develop toward being lighter, more agile and more intelligent; with advances in artificial intelligence and machine learning, their level of autonomy will rise further.

Objective: This project aims to design a lightweight snake robot system that moves flexibly and adapts to complex environments, and to study a segmented motion planning strategy that improves the robot's speed and stability, providing technical support for the further application and development of snake robots.

Tasks:
1. Design the lightweight snake robot system, including the mechanical structure, control system and perception system.
2. Study the segmented motion planning strategy, achieving adaptive motion planning in response to environmental changes and task requirements.
3. Realize autonomous motion and environmental adaptation, including terrain following and obstacle avoidance.
4. Verify the robot's performance and the effectiveness of the segmented motion planning strategy through experiments.

2. Lightweight Snake Robot System Design

System composition: The lightweight snake robot system consists of the mechanical structure, the control system, the sensor system and the segmented motion planning strategy.

Working principle: The control system drives the mechanical structure to bend and extend, the sensor system monitors the robot's pose and position in real time, and the segmented motion planning strategy governs the robot's motion in complex environments.

The body of the snake robot is made of a highly elastic, lightweight, corrosion-resistant flexible material, enabling flexible bending and extension.
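The slides give no concrete gait law; a common choice in the snake robot literature, shown here purely as an illustration with invented parameter values, is the serpenoid gait, in which joint i tracks a phase-shifted sinusoid:

import math

# Serpenoid gait: joint i tracks a phase-shifted sinusoid.
# phi_i(t) = A * sin(omega * t + i * beta) + gamma
# A: amplitude, omega: temporal frequency, beta: phase shift between
# segments, gamma: turning offset (0 = straight). Values are illustrative.
A, OMEGA, BETA, GAMMA = 0.5, 2.0, math.pi / 4, 0.0
N_JOINTS = 8

def joint_angles(t: float) -> list[float]:
    """Commanded joint angles (rad) for all segments at time t."""
    return [A * math.sin(OMEGA * t + i * BETA) + GAMMA for i in range(N_JOINTS)]

# Example: stream commands at 50 Hz to a (hypothetical) joint controller.
dt = 0.02
for k in range(5):
    print([round(q, 3) for q in joint_angles(k * dt)])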

2003 - Applications of remote sensing to urban problems

Preface: Applications of remote sensing to urban problems

Urban planners frequently make use of remotely sensed imagery, which they can then superimpose on other demographic or geographic information to provide a more detailed and insightful picture of the human landscape. For the most part the images used to compile such databases are panchromatic aerial photographs. Some agencies take the trouble to digitize the photos so that they can have a repository of computer coverages for their area of interest. Urban planners are, however, somewhat reluctant to make use of satellite-based remotely sensed imagery even though it is in digital form and new images are constantly being acquired. There are practical reasons for this reticence: (1) urban planners are overburdened with everyday decisions and do not have time to investigate uses for satellite imagery; (2) planners are unaware of the capabilities of any satellite or of current research on applications; (3) many planning offices lack the additional money, expertise and software required for handling satellite imagery; (4) the resolution of satellite data has been too coarse for delineating urban features, such as houses and streets. The last item, spatial resolution, is often cited as a major factor by planners in not using satellite imagery. As the contents of the papers in this special issue of Remote Sensing of Environment suggest, however, lower resolution sensors such as Landsat TM also have their place as a tool in urban planning.

All of the papers in this collection make use of satellite data, sometimes from two satellites and often with supplementary data from aircraft imagery or from GIS. The approaches include the use of IKONOS, SPOT, METEOSAT, MODIS, and DMSP data, as well as Landsat data. It was our purpose in soliciting papers for this special edition of Remote Sensing of Environment to publish what we feel are some of the more practical applications of satellite technology that might be of interest to urban planners, watershed managers, highway engineers, county administrators and other individuals in a variety of agencies. This set of papers should be considered a progress report from the research community to those responsible for dealing with some problems created by urbanization. Most papers in this collection make use of satellite imagery with 15–30 m resolution. Yet the topics span a wide range of applications: (1) analysis of land use change and the nature of urban sprawl, (2) modeling and prediction of future land use changes, (3) estimation of present and future surface runoff in watersheds, (4) assessment of watershed health, and (5) creation of databases in which inventories of woodland, farmland and wetland areas can be monitored with time.

One innovation discussed in several papers is the incorporation of satellite imagery, along with other data, into urban prediction models, from which predictions of future land use are made. Papers by Arthur et al., Clapham, Herold et al., and Weber and Puissant specifically describe predictions made with an urban growth model in which satellite information constitutes the major source of input data along with ancillary land-based information, such as road networks and river systems. The paper by Wilson et al.
discusses the use of their methodology to describe future urban growth; however, they do not provide any specific predictions in their paper. The Arthur et al. and Clapham papers address the problems of urban watersheds, underscoring the fact that increases in impervious surface area may exert a deleterious effect on water quality and water abundance. Watershed health and impervious surface area are also subjects in the Gillies et al. paper, but with an emphasis on the loss of species due to increases in impervious surface area.

An oft-used term, which emerges in several papers, is 'urban sprawl.' While the concept floats near the surface of most of these papers, Sutton's paper provides one of the first approaches to quantifying this somewhat elusive concept. Sutton defines urban sprawl in terms of population density and the efficiency with which land is used in the city and thereby provides a possible standard against which urban growth can be judged. The paper by Wilson et al. also addresses this timely subject. A popular topic, urban climatology and the urban heat island, is the subject of a paper by Voogt and Oke, who provide an extensive review on this subject. Two papers lie slightly outside the focus of the aforementioned papers: Milesi et al. examine the decrease of plant productivity, and therefore the decrease of carbon dioxide intake at the surface, in the face of diminishing vegetation cover and increasing urbanization. Hammer et al. describe a method of estimating solar irradiance from satellite as a means of offsetting energy consumption in buildings.

My feeling is that, despite the reservations of many urban planners, the current resolution of various satellites in the 15–30 m range is not marginal but ideal for many societal applications, such as urban growth modeling. I believe that the most fruitful advances in the near future will be made in the use of urban prediction models coupled with satellite imagery and GIS to define and monitor urban sprawl, and to assess urban watershed health. Above all, progress will be swiftest in those areas where scientists and societal agencies—planners, developers, road engineers, watershed managers, and environmental protection agents—can communicate directly with each other and with researchers in the field, wedding the possible with the practical.

Toby Carlson
Department of Meteorology, Pennsylvania State University,
6th Floor Walker Building,
University Park, PA 16802, USA
E-mail address: tnc@
15 March 2003

Remote Sensing of Environment 86 (2003) 273–274

Introduction to Robotics
MEAM 520, University of Pennsylvania

Application areas: manufacturing automation; the service industry.

What is a robot?
Webster: "An automatic apparatus or device that performs functions ordinarily ascribed to humans or operates with what appears to be almost human intelligence."

History
Origin of the word "robot": from the Czech word "robotnik"; popularized by Karel Capek's 1920 play and by Isaac Asimov's science fiction of the 1940s. History of automation.

Parallel robot manipulators (continued)
Planar parallel manipulators are capable of movements in the horizontal plane.

The Honda Humanoid

What is a robot? The definition of a robot revisited.

Soft Robots

Origins

Soft robotics researchers draw inspiration from nature to create robots far more flexible and versatile than their traditional metal counterparts.

Scientists at Harvard University have built a new kind of pliable robot whose body is so soft that it can crawl, worm-like, through very narrow spaces. The Harvard team, led by chemist George M. Whitesides, took its inspiration from squid, starfish and other animals without rigid skeletons and developed a small, four-legged rubber robot.

Earlier in the year, a group at Tufts University demonstrated a worm robot only 10 cm long, made of silicone rubber, that can crawl into a ball and push it forward from inside.

The Harvard work was carried out under a research grant from the U.S. Department of Defense, and the progress was reported this week in the Proceedings of the National Academy of Sciences. The soft robot is about 12.7 cm long and took two months to build. Each of its four limbs can be actuated independently by feeding compressed air into it, under manual or automatic computer control. This gives the new robot unmatched flexibility: it can freely crawl or slide across the ground.

"This is not a labored concept, but achieving this locomotion is quite unusual," Whitesides said. "From these seemingly simple (four-limb) actuations you see very interesting motion." He noted that although the robot's motion and construction indeed resemble those of a starfish-like soft-bodied animal, the aim is to mimic its function, not its mechanism.

Materials

The new soft robots can be reinforced with synthetic paper, woven fabric and wire, and have a molded silicone body. Once molded, the robot is connected to a source of compressed gas, for example an air syringe pump.

Principle

Soft robot molds are made with the soft lithography technique invented by Whitesides' team. In the production process, light is shone onto the surface of the mold, exposing a thin polymer film that covers the pattern and dissolving away the unpatterned regions. "This is a very successful technique; it has very high resolution and is quite compact, but it is rather expensive before mass production," Whitesides said.
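The article gives no control code for the pneumatic actuation described above; purely as an illustration (hypothetical valve interface, invented timings), pressurizing the four limb chambers in sequence is the kind of program that produces a crawling gait:

import time

# Purely illustrative: a hypothetical valve driver for a four-limb
# pneumatic soft robot. set_valve() stands in for whatever hardware
# interface actually drives the solenoid valves; it is not from the article.
LIMBS = ["front_left", "front_right", "rear_left", "rear_right"]

def set_valve(limb: str, pressurized: bool) -> None:
    print(f"{limb}: {'inflate' if pressurized else 'vent'}")

def crawl_cycle(hold_s: float = 0.5) -> None:
    """One crawling cycle: inflate limbs one at a time, then vent."""
    for limb in LIMBS:
        set_valve(limb, True)   # bend this limb by pressurizing its chamber
        time.sleep(hold_s)
        set_valve(limb, False)  # vent so the elastomer relaxes

for _ in range(3):
    crawl_cycle()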

Zhengzhou University Master's Thesis Template

School code: 10459. Student/application number. Confidentiality level. Master's degree thesis. Thesis title. Author: Zhang San. Supervisor: Prof. Li Si. Discipline: engineering. Major. School: Electrical Engineering / Physical Engineering. Completed: April 20xx.

A thesis submitted to Zhengzhou University for the degree of Master. Thesis Title. By San Zhang. Supervisor: Prof. Si Li. Major Name. Institute Name. April 20xx.

Declaration of Originality

I solemnly declare that the thesis submitted here presents results obtained through my own independent research under my supervisor's guidance.

Except for the cited material explicitly acknowledged in the text, this thesis contains no research results published or written by any other individual or group.

Individuals and groups that made important contributions to this research are all clearly identified in the text.

I bear the legal responsibility for this declaration.

Thesis author: ________  Date: ____ (year) ____ (month) ____ (day)

Statement of Authorization

The intellectual property of this thesis and related works completed under my supervisor's guidance belongs to Zhengzhou University.

In accordance with Zhengzhou University's regulations on retaining and using degree theses, I agree that the university may retain the thesis or submit copies and electronic versions to the relevant national departments or institutions, allowing the thesis to be consulted and borrowed; I authorize Zhengzhou University to include all or part of this thesis in relevant databases for retrieval, and to preserve and compile it by photocopying, microfilming or other means of reproduction.

After leaving the university, when I publish or use this thesis, or academic papers or results directly related to it, the first affiliation shall still be Zhengzhou University.

Confidential theses shall follow these rules after declassification.

Thesis author: ________  Date: ____ (year) ____ (month) ____ (day)

Abstract

With the rapid development and progress of China's industrial technology, the requirements on material quality keep rising.

In every industrial sector, the integrity of surfaces and subsurfaces of materials and devices affects and determines the working efficiency and service life of systems and instruments.

In addition, with the rapid development of Chinese industry, air and water pollution have become increasingly severe.

Environmental pollution has led to a high incidence of certain diseases; skin disease is one typical disease induced by environmental factors.

In recent years the incidence of skin disease among Chinese residents has risen year by year, and new cases are trending younger.

It is therefore especially important to achieve nondestructive detection of surface cracks in materials and of lesions on the skin surface.

50 Topic Titles in AI Model Evaluation and Performance Analysis

1. Research on model evaluation methods based on machine learning algorithms
2. Research on performance analysis and optimization techniques for AI models
3. Performance evaluation of adversarial example attacks and defense mechanisms
4. Evaluation and performance analysis of deep-learning image recognition models
5. Performance evaluation and optimization of natural language processing models
6. Robustness evaluation and analysis of machine learning models
7. Performance evaluation and optimization strategies for distributed AI models
8. Performance evaluation and analysis of rare-event detection models
9. Evaluation and performance optimization of reinforcement-learning-based AI models
10. Performance analysis of AI models on cross-domain datasets
11. Performance evaluation and analysis of transfer-learning-based AI models
12. Effectiveness evaluation of AI models in autonomous driving systems
13. Performance analysis of intelligent recommender systems based on deep reinforcement learning
14. Evaluation and correction of data bias in AI models
15. Performance evaluation and analysis of AI models on multimodal data
16. Performance evaluation and optimization strategies for adversarial transfer learning models
17. Performance analysis and improvement of graph-neural-network-based AI models
18. Performance evaluation and robustness analysis of generative adversarial network models
19. Performance evaluation of privacy-preserving AI models based on federated learning
20. Parallel computing and performance evaluation techniques for efficient AI models
21. Performance evaluation and analysis of AI models on time-series data
22. Exploration performance evaluation and optimization of deep reinforcement learning models
23. Performance evaluation and robustness analysis of machine learning models in complex environments
24. Performance evaluation and optimization strategies for online learning models
25. Performance evaluation and analysis of knowledge-graph-based AI models
26. Performance evaluation and interpretability of deep neural network models
27. Performance analysis and optimization of machine learning models on large-scale datasets
28. Vulnerability evaluation and adversarial analysis of neural network models
29. Performance evaluation and optimization of AI models based on autonomous learning
30. Performance evaluation and robustness analysis of hierarchical learning models
31. Performance evaluation and optimization strategies for progressive learning models
32. Performance analysis of recurrent-neural-network-based natural language processing models
33. Performance evaluation and analysis of AI models on heterogeneous data
34. Performance evaluation and optimization of visual perception models
35. Performance analysis of GAN-based image enhancement models
36. Evaluation and optimization of interruption-recovery capability in AI models
37. Cognitive trustworthiness evaluation and analysis of deep learning models
38. Performance evaluation of robot control models based on transfer reinforcement learning
39. Performance analysis and optimization of machine learning models for high-dimensional data
40. Performance evaluation and analysis of AI models based on adaptive learning
41. Decision reliability evaluation and optimization in AI models
42. Performance evaluation and robustness analysis of machine learning models in informal environments
43. Performance evaluation and optimization strategies for AI models based on incremental learning
44. Performance evaluation and analysis of multi-task learning models
45. Performance analysis and optimization of AI models based on meta-learning
46. Performance evaluation and robustness analysis of graph neural network models
47. Reliability evaluation and optimization of AI models based on information entropy
48. Performance evaluation and interpretability of deep generative models
49. Performance evaluation and analysis of cross-domain knowledge transfer models
50. Performance evaluation and optimization of AI models based on weakly supervised learning

Omnidirectional Wheeled Mobile Robots

First, the structural design of the omnidirectional wheeled mobile robot is described, and its kinematic and dynamic models are established by coordinate transformation. Given the robot's motion characteristics, hierarchical control based on the kinematic model is chosen as the design scheme for trajectory tracking control.

Second, taking the kinematic model as the controlled plant and the linear and angular velocities as control inputs, several controllers are designed and the design methods are verified in simulation. (1) A fuzzy controller is designed for trajectory tracking of the omnidirectional wheeled mobile robot, achieving tracking of the desired trajectory. (2) To improve the system's robustness, a sliding mode variable structure controller is designed that effectively rejects uncertain external disturbances; simulations confirm the validity and feasibility of the design. (3) To attenuate the chattering of the sliding mode variable structure controller, the original sign function is replaced with a continuous function, yielding a quasi-sliding-mode controller; simulation shows that it attenuates chattering well.

Some researchers have combined genetic algorithms with fuzzy algorithms to design trajectory tracking controllers for mobile robots, but because the fuzzy control algorithm adapts poorly, the tracking performance is not ideal [30][31]. The idea of sliding mode variable structure control is to design, for the model of a given mobile robot, a suitable surface in state space, called the sliding surface, and then to use a high-speed switching control law to drive the state trajectory of the nonlinear system asymptotically onto this pre-designed surface and keep it there thereafter, so that the desired trajectory is tracked.

Then, for the omnidirectional wheeled mobile robot described by its kinematic model, a fuzzy reaching-law sliding mode trajectory tracking controller is studied, which achieves complete tracking of a circular trajectory with constant linear and angular velocities in finite time. This controller retains the advantages of both fuzzy control and sliding mode variable structure control: it controls this multi-input, multi-output, highly coupled nonlinear system well, and it attenuates the chattering of the sliding mode controller, giving good robustness.

Finally, the work of this thesis is summarized and an outlook is given, pointing out directions and problems for further research.

Keywords: mobile robot; trajectory tracking; fuzzy control; sliding mode control
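To make the chattering-attenuation idea concrete, here is a generic first-order sketch, not the thesis's actual controller (gains, boundary-layer width and disturbance are invented), comparing the discontinuous sign law with its continuous tanh replacement:

import numpy as np

# Quasi-sliding-mode control on a toy first-order tracking error e:
#   e_dot = -u + d(t),  sliding surface s = e,
#   classic law  u = k * sign(s)        -> chatters at the sampling rate
#   smoothed law u = k * tanh(s / phi)  -> continuous inside a boundary
# layer of width phi. All values are illustrative.
k, phi, dt = 2.0, 0.05, 1e-3

def simulate(smooth: bool, steps: int = 5000) -> np.ndarray:
    e, hist = 1.0, []
    for i in range(steps):
        d = 0.5 * np.sin(2 * np.pi * i * dt)          # bounded disturbance
        u = k * (np.tanh(e / phi) if smooth else np.sign(e))
        e += dt * (-u + d)
        hist.append(e)
    return np.array(hist)

chatter = simulate(smooth=False)
smoothed = simulate(smooth=True)
print("final |e|, sign law :", abs(chatter[-1]))
print("final |e|, tanh law :", abs(smoothed[-1]))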

Books about Robots

Books about robots are plentiful, covering basic theory, design and development, applied practice, future trends and more. Here are some representative recommendations in different areas:

1. 《机器人学:基础与前沿》 (Robotics: Fundamentals and Frontiers): by Richard Paul and Karl Henrik Johansson. The book systematically introduces the basic concepts of robotics, mathematical models, motion control, perception and intelligent decision making; suitable for beginners and researchers.

2. 《现代机器人学》 (Modern Robotics): by John J. Craig. The book expounds robots' mechanical structures, sensors and control systems in detail; one of the classic textbooks in robotics.

3. Artificial Intelligence: A Modern Approach: though not specifically about robots, this book by Stuart Russell and Peter Norvig is essential for understanding the AI algorithms and techniques used in modern robots.

4. 《ROS机器人编程实战》 (ROS Robot Programming in Practice): by Quigley, Gerkey, Faust et al. An accessible introduction to developing robot systems on ROS (Robot Operating System); well suited to readers interested in robot operating systems.

5. 《无人驾驶:从自动到自主》 (Driverless: From Automatic to Autonomous): by Hod Lipson and Melba Kurman. The book discusses the history, key technologies and ethical issues of self-driving cars, a frontier of robotics.

6. 《服务机器人:设计与应用》 (Service Robots: Design and Applications): introduces the design philosophy and key technologies of service robots and concrete application cases across industries; very helpful for understanding and working in service robot development.

These are only a selection; depending on your interests and needs, you can find more specialized books on particular areas such as industrial robots, service robots, medical robots and social robots for deeper study.

Robotics Paper: English Translation Assignment

Foreign-language translation. Major: Industrial Engineering. Student: Qian Xiaoguang. Class: BD Mechanical Design 082. Student ID: 0820101205. Advisor: Qiu Yalan. Source of the foreign material: Applied Mathematics and Computation 185 (2007) 1149–1159. Attachments: 1. translation of the foreign material; 2. the original text.

Dynamics and Control of a Flexible Dual-Arm Space Robot Capturing an Object
Translated by Qian Xiaoguang

Abstract: In this paper we present the effect of the payload on the control of a flexible dual-arm space robot capturing an object. The dynamic model of the robot system is derived using the Lagrangian formulation. Starting from initial conditions obtained from the dynamic model, the capture process of the whole system is simulated. A PD controller is designed to stabilize the robot while capturing the object, and dynamic simulations are carried out for two example cases: (1) the robot system undergoes the impact without control, and the simulation results show the effect of the impact; (2) the space robot captures the object successfully, and the simulation results show that the robot's joint angles and the manipulator's velocities settle to stable values.

Keywords: flexible arm; space robot; impact; dynamics; PD control. Scheme: cylindrical robot; skill training.

1. Introduction

Space robots will become a key element of humanity's future routine work in space, such as inspection, assembly, and retrieval of faulty equipment. Space robots are valuable as a complement to astronauts' extravehicular activities. The cost and time of life-support facilities place limits on astronauts, and the high risk makes space robots the natural choice as astronauts' assistants. To increase the mobility of the equipment, in a free-flying system one or more arms are mounted on a spacecraft equipped with thrusters; extended use of the thrusters, however, is severely limited. A free-floating operating mode can increase the system's operability.

There is a large body of research on rigid-arm space robots. Considering the characteristics of space robots, namely light weight, long arms, heavy payloads, flexibility and effectiveness, good control accuracy and performance should be taken into account. At the same time, there are also many results on dynamic modeling and flexible control of single-arm space robots. The authors describe an impact dynamics modeling scheme for space robots and study multi-arm flexible space robots. Wu Zhongshu used the assumed-modes method to describe the elastic deformation, established a dynamic model, and studied the Lagrangian formulation and simulation of a flexible dual-arm space manipulator. The operation consists of two specific phases: the impact phase and the post-impact phase. The impact phase determines the initial conditions for the object.
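The excerpt names a PD controller without giving its form; a minimal joint-space sketch of such a law (the generic textbook form, with invented gains and a toy unit-inertia model, not the paper's parameters) is:

import numpy as np

# Generic joint-space PD law, the form named in the abstract:
#   tau = Kp (q_des - q) + Kd (qd_des - qd)
# Gains and the toy double-integrator joints are illustrative only.
Kp = np.diag([50.0, 50.0])
Kd = np.diag([10.0, 10.0])

def pd_torque(q, qd, q_des, qd_des):
    return Kp @ (q_des - q) + Kd @ (qd_des - qd)

# Toy simulation: two independent unit-inertia joints settling to q_des.
q, qd = np.zeros(2), np.zeros(2)
q_des, qd_des = np.array([0.5, -0.3]), np.zeros(2)
dt = 1e-3
for _ in range(5000):
    tau = pd_torque(q, qd, q_des, qd_des)
    qd += dt * tau          # unit inertia: qdd = tau
    q += dt * qd
print(np.round(q, 3))       # converges to approximately [0.5, -0.3]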

A Survey of Kinematic and Dynamic Modeling of Soft Robots

1. Overview

With the rapid development of technology, soft robotics has emerged as a new field attracting growing research attention. As robots with high flexibility and adaptability, soft robots show great application potential in medicine, aerospace, deep-sea exploration and many other areas. However, kinematic and dynamic modeling of soft robots has remained one of the key factors limiting their further development. This survey reviews kinematic and dynamic modeling of soft robots and organizes the research results in the field, with the aim of providing theoretical support for the future design and application of soft robots.

The paper first introduces the basic concepts and classification of soft robots and their unique advantages over traditional rigid robots. It then details the basic principles and methods of soft robot kinematic modeling, including modeling based on geometric relations and modeling based on energy principles. On the dynamics side, it focuses on the construction of soft robot dynamic models, including the determination of mass distribution, inertia matrices and stiffness matrices, and the formulation and solution of the dynamic equations. The paper also reviews the challenges and problems faced in kinematic and dynamic modeling of soft robots, such as model complexity, parameter identification and real-time control, and surveys and evaluates the latest research progress in soft robot modeling at home and abroad, offering readers a comprehensive, in-depth reference framework for the kinematic and dynamic modeling of soft robots. Finally, it looks ahead to future trends in soft robot kinematic and dynamic modeling and proposes possible research directions and application areas, providing reference and inspiration for researchers in related fields.

2. Kinematic Modeling of Soft Robots

Kinematic modeling is an important method for studying and describing the motion of soft robots. Unlike traditional rigid robots, soft robots, owing to the softness and deformability of their structure, require a more complex modeling process. Kinematic modeling of soft robots focuses on kinematic quantities such as the position, velocity and acceleration of the robot's end effector or of particular points, without involving dynamic factors such as internal stress and strain.

Kinematic modeling of soft robots is usually based on geometry and kinematic principles. One common approach, based on continuum mechanics, treats the soft robot as a continuously deforming elastic body and builds the kinematic model by describing changes in its shape and position. Another approach, based on discrete-element methods, divides the soft robot into a series of discrete units and builds the kinematic model by describing the relative positions and relations of these units.
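A widely used concrete instance of the geometry-based approach, not specific to this survey, is the piecewise constant curvature (PCC) model: each segment is treated as a circular arc of curvature kappa and arc length L. A minimal planar sketch (all values illustrative):

import numpy as np

# Piecewise constant curvature (PCC) forward kinematics, planar case:
# a segment of arc length L bent with constant curvature kappa ends at
#   x = (1 - cos(kappa * L)) / kappa,  y = sin(kappa * L) / kappa,
# with tip orientation theta = kappa * L.
def pcc_tip(kappa: float, L: float) -> tuple[float, float, float]:
    theta = kappa * L
    if abs(kappa) < 1e-9:               # straight-segment limit
        return 0.0, L, 0.0
    x = (1.0 - np.cos(theta)) / kappa
    y = np.sin(theta) / kappa
    return x, y, theta

# Chain two segments by composing planar rigid transforms.
def chain(segments):
    pose = np.array([0.0, 0.0, 0.0])    # x, y, heading
    for kappa, L in segments:
        x, y, th = pcc_tip(kappa, L)
        c, s = np.cos(pose[2]), np.sin(pose[2])
        pose[:2] += np.array([c * x - s * y, s * x + c * y])
        pose[2] += th
    return pose

print(chain([(2.0, 0.5), (-1.0, 0.5)]))  # tip pose of a 2-segment arm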

Lesson 12: Getting to Know Robots (courseware)

Learning tasks:
1. Learn the origin of robots.
2. Become familiar with the history of robot development.
3. Know the classification of robots.

First encounter with robots: the word "robot" comes from Karel Capek's play Rossum's Universal Robots.

The birth of robots: the world's first industrial robot, "Unimate"; China's first humanoid robot, "Xianxingzhe" (Forerunner).

The development of robots: first-generation robots, teach-and-playback robots; second-generation robots, sensing robots; third-generation robots, intelligent robots.

Classification: industrial robots, multi-joint manipulators or multi-degree-of-freedom robots for the industrial field; special-purpose robots, advanced robots serving humans in non-manufacturing sectors.

Classroom activity: download pictures of robots from the Internet, classify them into industrial robots and special-purpose robots, and save the pictures into the corresponding folders.

Lesson Plan: "Getting to Know Open-Source Robots" (Tao Liying, open-class teaching design and courseware materials)

Teaching content:

Chapter 6: Technical principles of open-source robots
6.1 Hardware composition
6.2 Control system
6.3 Sensors and actuators

Chapter 7: Network communication of open-source robots
7.1 Principles of network communication
7.2 Controlling robots via network communication

Chapter 8: Programming languages
8.1 Introduction to common programming languages
8.2 Controlling robots with a programming language
8.3 Programming languages in robot development

Chapter 9: Project practice with open-source robots
9.1 The significance of project practice
9.2 The workflow of project practice
9.3 Sharing of project practice cases

Chapter 10: The future development of open-source robots
10.1 Industry trends
10.2 The future of open-source platforms
10.3 Students' career planning and outlook

Teaching methods:
1. Lecturing: explain basics such as the technical principles, network communication and programming languages of open-source robots;
2. Programming practice: assess students' skill in programming and debugging open-source robots;
3. Project practice: evaluate students' abilities in project practice, teamwork and innovation;
4. Homework: consolidate students' understanding and application of classroom knowledge.

Teaching resources:
1. Open-source robot hardware;
2. Open-source programming software;
3. Relevant textbooks and reference materials;
4. Online resources: websites, forums and tutorials on open-source robots;
5. Project practice cases: successful open-source robot projects for students' reference.

Teaching schedule:
1. Chapter 1: 2 class hours;
2. Chapter 2: 2 class hours;
3. Chapter 3: 3 class hours;
4. Chapter 4: 2 class hours;
5. Chapter 5: 2 class hours.

2.3 The home sector

Books about Soft Robots

A soft robot is a robot built from compliant materials and controlled by software and algorithms; it has greater flexibility, pliability and programmability, and can adapt to different tasks and environments. Soft robotics research spans robotics, artificial intelligence, computer science and several other disciplines, and the related books offer rich theoretical and practical knowledge. Below are several reference books on soft robots.

1. 《软机器人:原理、方法与应用》 (Soft Robots: Principles, Methods and Applications), by Chen Kai and Gu Naijie. A classic textbook in the field, comprehensively introducing the principles, methods and applications of soft robots. It covers modeling and simulation, path planning and trajectory tracking, robot control and neural networks, helping readers understand the basic concepts and techniques of soft robots and learn to apply them to practical problems.

2. 《软体机器人与传感技术:现状与应用前瞻》 (Soft Robots and Sensing Technology: Current Status and Application Outlook), by Shao Shaofeng. This book presents the latest research results and application prospects of soft robots and sensing technology, with detailed descriptions of soft structures and motion principles and of the sensing technologies used in soft robots, such as elastic, tactile and visual sensing. Readers can follow the latest developments and master the relevant methods and application cases.

3. 《软体机器人设计与实践》 (Soft Robot Design and Practice), by Stuart Elston. The author, an expert in soft robotics, offers a practical guide to designing and building soft robots through hands-on projects. The book covers dynamic modeling, control algorithm design, and the selection and processing of soft materials, demonstrating concrete engineering steps and technical details through examples; an indispensable reference for readers who want to study and practice soft robotics in depth.

4. 《机器人学导论》 (Introduction to Robotics), by Chen Hejinxia. A classic robotics textbook that systematically introduces the basic principles and methods of robotics, including content relevant to soft robots such as kinematics, dynamics, sensing and perception, and motion planning. Studying it helps readers build a comprehensive understanding of robotics and soft robots and master the relevant theory and techniques.

5. 《软体机器人》 (Soft Robots), by Shi Benxin. A professional reference for engineers and researchers; drawing on the author's own research and practice, it expounds the principles, design and applications of soft robots in depth.

Parametric Geometric Modeling of a Computer 3D Human Body Model

Li Xiaoying; Zhang Jianguo
[Journal] 《机械》 (Machinery)
[Year (Volume), Issue] 2003(000)0S1
[Abstract] Using ObjectARX and object-oriented technology, an application was developed for a human body model seated in a wheelchair. Both the wheelchair model and the human body model are generated parametrically. The data for constructing the human body model are the Chinese adult body dimensions of national standard GB10000-88. The model will be used in computer-aided residential and furniture design.
[Pages] 3
[Authors] Li Xiaoying; Zhang Jianguo
[Affiliation] Tianjin University of Science and Technology, Tianjin
[Language] Chinese
[CLC classification] TP391.7

A Supercomputer Based on the Human Visual System

Anonymous
[Journal] 《军民两用技术与产品》 (Dual-Use Technologies & Products)
[Year (Volume), Issue] 2010(000)011
[Abstract] Researchers at Yale University's School of Engineering and Applied Science have developed a supercomputer based on the human visual system. Compared with similar computers developed previously, it is greatly improved in speed, energy efficiency and other respects.
[Pages] 1 (p. 18)
[Language] Chinese
[CLC classification] TP391.41

Stäubli Robots (product range)

[Datasheet tables; the columns were flattened in extraction. Recoverable information: per-axis working ranges (e.g. ±180°, ±120°, ±145°, ±160°, ±137.5°, ±150°, ±270°, +120°/−105°, +130°/−110°) and per-axis speeds for several arm variants; RX160 headline data: 34 kg maximum load, 20 kg nominal load, 1710 mm reach, 6 axes, ±0.05 mm repeatability, IP65 protection (wrist IP67); Cartesian speeds up to 1600 mm/s nominal (1923 mm/s maximum); explosion-protection ratings Group II Category 2,3, Zones 1, 2, 21, 22, Class I, II, III Div 1&2, and Group II Category 3, Zones 2, 22.]

Adaptive Robot Based Visual Inspection of Complex Parts

M. Gauss (1), A. Buerkle (1), T. Laengle (1), H. Woern (1), J. Stelter (2), S. Ruhmkorf (3), R. Middelmann (4)
(1) Institute for Process Control and Robotics (IPR), Universitaet Karlsruhe (TH), Germany
(2) AMATEC ROBOTICS GmbH, Germering, Germany
(3) ISRA VISION SYSTEMS AG, Karlsruhe, Germany
(4) VITRONIC Dr.-Ing. Stein Bildverarbeitungssysteme GmbH, Wiesbaden, Germany

Abstract

In this paper an architecture for robot based visual inspection of complex parts is presented. Based on this architecture two systems have been implemented, one for visual engine compartment inspection and another for visual weld seam inspection in car manufacturing. The two main goals of the work are: (1) defining open interfaces between a host computer, an industrial robot and a vision system; (2) automating the adaptation of a robot based visual inspection system to new tasks.

1. Introduction

The use of image processing systems in industrial manufacturing and assembly has significantly increased in recent years. Used as inspection systems they can enhance product quality and minimize loss through waste. The employment of such systems is profitable only under certain conditions: the fixed costs of acquisition, adaptation and initial operation are to be compared with the savings of the ongoing operation. In order to use inspection systems reasonably not only at huge lot sizes, it is necessary that the systems can be adapted to a given problem with little effort.

The goal of the ARIKT project (ARIKT = Adaptive Roboter-Inspektion komplexer Teile, German for 'adaptive robot inspection of complex parts') is to develop an architecture for a robot based visual inspection system which is easy to adapt to new tasks. The problem of inspecting large objects, and of inspecting objects from changing points of view, is solved by employing a robot as the carrier of a camera. In order to make it possible for robots and image processing systems from different manufacturers to work together in a 'plug and work' manner, a standard communication profile is suggested for the control of these components. To achieve easy adaptation to new tasks, the user is supported by the system in the determination of machine vision procedures and parameters; for this, a knowledge base associated with the vision system is used. The position of the camera in an inspection task is determined by the necessary distance and the location of the part to be inspected. Thus the automatic programming of a robot to reach the desired camera position is possible when the workcell geometry and the robot are modeled.

In section 2 a short overview of the system is given; the main components host, robot and vision system and their collaboration are introduced. In section 3 two inspection applications are presented which are implemented on the basis of the ARIKT architecture: an engine compartment inspection and a weld seam inspection. In section 4 a protocol profile for communication between a host, a vision system and an industrial robot is proposed; on the application level XML (eXtensible Markup Language) [1][2] is used to express robot and vision commands, and Ethernet with TCP/IP is used as the communication base. Section 5 shows how the user can be supported in adapting the inspection system to new tasks: the knowledge base which helps with the determination of vision procedures and parameters for a given inspection task is described, and the automatic computation of robot trajectories and the integrated model for workcell and product are illustrated.
2. System Overview

In the framework of the ARIKT project the following main components are provided: host, robot and vision system. During an inspection procedure (online phase) the collaboration between the main components works, simplified, as follows: First, the host sends robot commands to the robot control (see Figure 1). The robot moves the camera or sensor head to the desired position and announces reaching this position to the host. The host then transmits and starts the procedure of the vision system. When finished, the vision system returns the inspection result. The determination of whether a part is faulty or not is solely the task of the vision system.

Figure 1: Collaboration during the online phase of the inspection

In some applications one image is acquired from one single position for analysis, e.g. the engine compartment inspection. In other applications the camera has to be moved on a continuous path and a series of images is acquired, e.g. the weld seam inspection. In the latter case a cyclic transmission of position and orientation of the sensor head from robot to vision system can be provided. The commands sent from the host to robot and vision system are stored in an inspection plan. It consists of one or more tasks which are built up of one or more vision commands, one or more robot commands and the analysis result. The inspection plan is built up offline and stored on the host (see section 5).

3. Inspection Applications

3.1. Engine Compartment Inspection

The task comprises several inspections of different components within the engine compartment, the assembly of which can be faulty. The assembly faults are unknown ahead of inspection. Additionally, the occurrence of assembly faults can vary strongly with regard to timing. The inspection tasks chosen as examples within the ARIKT project can be subdivided into three different classes. The first class comprises the inspection of correct positioning of tube connectors and clamping pieces (see Figure 2). The second class takes care of the verification of distances between cables, tubes and pipes. The third class embraces inspections of the presence of sealing plugs, cables or tubes in gaps of the car body.

Figure 2: Inspection of clamping pieces

The control algorithms are started by a control job from the host, during which they are fed with all relevant data regarding the objects to be inspected. This data consists of size, position and shape of the object as well as information regarding the surface properties of object and background. On the basis of this information the chosen inspection algorithm is parameterised and executed automatically. The inspection results are sent back to the host computer. The host can select different inspection algorithms. The algorithm itself can be either a simple task such as a light meter or a blob analysis, or a complex multi-task inspection such as a complete control of the clamping piece. The sensor head, as an integral part of the vision system, features a shuttered camera and an externally controlled LED ring light. The ring light is manufactured with multi-colour LEDs, thereby enabling an automatic adaptation of the illumination to the inspection task at hand. The basis for the modular and adaptive vision system is a standard vision system for general assembly control and quality inspection. The adaptation to several different tasks, such as presence/absence control, gauging, dimension control, and quality inspection of objects and object surfaces, is possible through the multitude of integrated inspection tools and algorithms.
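The paper publishes no host-side code; the following Python sketch only illustrates the online-phase sequence of Figure 1 for a single task. The host names, ports, message framing and command serializations are assumptions, not part of the ARIKT profile:

import socket

# Illustrative host orchestration of one inspection task: move the robot,
# wait for the synchronization message, then run the vision procedure.
# Newline-terminated XML framing is an assumption; ARIKT only fixes
# Ethernet/TCP/IP plus XML, not how messages are delimited.
def send_xml(sock: socket.socket, xml: str) -> None:
    sock.sendall(xml.encode("utf-8") + b"\n")

def recv_xml(sock: socket.socket) -> str:
    buf = b""
    while not buf.endswith(b"\n"):
        buf += sock.recv(4096)
    return buf.decode("utf-8").strip()

robot = socket.create_connection(("robot-ctrl", 5000))    # hypothetical endpoints
vision = socket.create_connection(("vision-sys", 6000))

send_xml(robot, "<AR:RobotCommand>...</AR:RobotCommand>") # move to camera position
assert "Position reached" in recv_xml(robot)              # OutputSignal from robot

send_xml(vision, "<LoadObj objName='CheckOilFilter'/>")   # load inspection program
send_xml(vision, "<Trigger/>")                            # acquire and analyse
print("inspection result:", recv_xml(vision))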
The adaptation to severaldifferenttasks as presence/absence control, gauging, dimension control, quality inspection of objects and object surfaces is possible through the multitude of integrated inspection tools and algorithms.3.2. Weld Seam InspectionThe requirements for weld inspection are primarily drawn from standards such as EN30042 [3] / ISO5817 [4], combined with experience gained in developing inspec-tion systems for automotive requirements. Using the principal of laser triangulation, a high speed, high reso-lution 3D representation of the surface of the weld is generated. A series of feature extraction algorithms each determine different properties of the weld at every loca-tion where a cross sectional profile was obtained from the sensor. The measured properties of the weld are compared with the tolerances specified in the standards, to arrive at an overall assessment of the acceptability of the weld.A specifically designed sensor that integrates a laser with line generating lens and a high resolution camera produces a stream of 2D cross sectional profiles of the surface of the weld and the adjacent unwelded material at the location where the line is projected. Relative move-ment between the sensor and the part being inspected is required to build up the third dimension in the measure-ment data. For complex parts, this relative movement is ideally provided by a robot which is able to maintain a constant distance and viewing angle between the sensor and the weld.As the measurement system is optical, only the surface of the weld is mapped. The applied inspection criteria therefore form subset of the criteria specified in the standards. Typical examples of these criteria include measurement of the height and cross sectional area of the weld, together with detection of porosity density and undercuts.Whilst calibration of the sensor as such results in ac-curate 2D data within each profile, the accuracy of the third dimension depends entirely on the relative move-ment speed between the sensor and the robot. In some applications a constant velocity is achievable, however the frequently encountered complex geometry of auto-motive components and the kinematics of the robot result in a non-constant velocity. To prevent this causing meas-urement errors, the robot is required to constantly com-municate its Tool Center Point (TCP) to the inspection system. The actual data transmitted is the distance trav-eled, relative to a defined weld start or trigger point. This data is cyclically transmitted as an XML packet, con-taining at a minimum a single distance value. Accuracy of the transmitted data packets is further enhanced through inclusion of a time stamp.4. Communication between MainComponents4.1. Communication Protocol Profile Communication between devices can be divided into layers according to the ISO/OSI reference model [5]. To allow collaboration between components in ‘plug and work’ manner communication on each layer has to be defined. The protocol profile of the ARIKT system is listed in Table 1. On layers 1-6 existing standard protocols are used. The application level will be considered in sec-tion 4.2 and 4.3.Layer Task Protocol7Application ARIKT6Presentation XML5Session TCP4Transport TCP3Network IP2Data link Ethernet 10Mbit/s1Physical10BaseTTable 1: ARIKT communication profileEthernet with TCP/IP in general is not real time com-pliant because of the used medium access protocol CSMA [6]. 
4. Communication between Main Components

4.1. Communication Protocol Profile

Communication between devices can be divided into layers according to the ISO/OSI reference model [5]. To allow collaboration between components in a 'plug and work' manner, communication on each layer has to be defined. The protocol profile of the ARIKT system is listed in Table 1. On layers 1-6 existing standard protocols are used; the application level is considered in sections 4.2 and 4.3.

Layer | Task         | Protocol
7     | Application  | ARIKT
6     | Presentation | XML
5     | Session      | TCP
4     | Transport    | TCP
3     | Network      | IP
2     | Data link    | Ethernet 10 Mbit/s
1     | Physical     | 10BaseT

Table 1: ARIKT communication profile

Ethernet with TCP/IP is in general not real time compliant because of the medium access protocol CSMA [6]. But the cyclic transmission of position and orientation of the sensor head from robot to vision system (see Figure 1) shall run within assured short periods of time. To realize real time transmission with Ethernet/TCP/IP two techniques can be used:

• Communication of certain partners is paused according to an application level protocol. While the position of the sensor head is transmitted from robot to vision system, the host does not contact robot or vision system.

• Through the use of an (Ethernet) switch, one collision domain can be split up into several smaller ones.

XML as a syntactical format for application telegrams brings some advantages compared to a binary format: the content is readable and understandable by humans with a simple text editor; the structure of the content can be defined in XML schemas [7] (XML documents look like HTML documents with self-defined tags); and free software modules are available for syntactical parsing. Disadvantages are: the size in bytes of XML messages is significantly larger than that of binary data, and software complexity is increased by the need for syntactical parsing.

4.2. XML Robot Commands

The robot commands which have been defined are designed for remote control of industrial robots. The goal is to control robots from different vendors with the same commands. The commands are intended to allow the definition of trajectories and to support synchronization between a robot and external devices.

A condensed example of an XML robot program is given in Figure 3. It is used for weld seam inspection. With the TCP element the Tool Center Point can be defined. A Vector element is used to describe displacements in mm and a rotation around the axes x, y and z. The Velocity element determines the velocity of the TCP in mm per second. The Accuracy element declares the distance by which the TCP may diverge from defined waypoints; it results in rounding or fly-by behavior. The Linear element defines a linear movement; besides linear movements there are commands for circular and point-to-point (PTP) movements. To define waypoints and the orientation of the TCP the Point element is used (Vector elements could be used here instead). All commands listed so far allow the definition of movements and trajectories. The OutputSignal element accomplishes synchronization between the robot and the host instance: after the completion of the last movement before an OutputSignal, the robot sends a message to the host. The host then knows the status of the robot and can, for example, send a 'start-your-work' command to the vision system.

<?xml version="1.0" encoding="UTF-8"?>
<AR:RobotCommand ...>
  <TCP>
    <Vector XPos="0" YPos="-77" ZPos="43" RX="90" RY="0" .../>
  </TCP>
  <Velocity Nominal="150" Min="0" Max="115"/>
  <Accuracy ToleranceXYZ="10"/>
  <Linear>
    <!-- acceleration phase -->
    <Point XPos="953.68" YPos="-144.42" ZPos="588.02"
           RX="81.8305" RY="-5.4639" RZ="174.4402"/>
    ...
    <Point XPos="1474.54" YPos="-106.98" ZPos="482.48" .../>
  </Linear>
  <OutputSignal Value="Position reached"/>
  <!-- begin of weld seam -->
  <Linear>
    <Point XPos="1474.91" YPos="-106.94" ZPos="469.19" .../>
    <Point XPos="1475.44" YPos="-106.87" ZPos="448.87" .../>
    ...
    <Point XPos="1331.11" YPos="-270.06" ZPos="-32.53" .../>
    <Point XPos="1326.17" YPos="-302.55" ZPos="-48.2" .../>
  </Linear>
  <OutputSignal Value="Position reached"/>
  <!-- end of weld seam -->
</AR:RobotCommand>

Figure 3: XML robot commands sample
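Such programs can be generated mechanically from a waypoint list; the following sketch is illustrative (it reuses the element and attribute names of Figure 3 but omits the AR: namespace declaration and real tool data):

import xml.etree.ElementTree as ET

# Build a minimal ARIKT-style robot command from Cartesian waypoints.
# Element and attribute names follow Figure 3; everything else is illustrative.
def linear_move(waypoints, signal="Position reached") -> str:
    cmd = ET.Element("RobotCommand")
    ET.SubElement(cmd, "Velocity", Nominal="150", Min="0", Max="115")
    ET.SubElement(cmd, "Accuracy", ToleranceXYZ="10")
    lin = ET.SubElement(cmd, "Linear")
    for x, y, z, rx, ry, rz in waypoints:
        ET.SubElement(lin, "Point", XPos=str(x), YPos=str(y), ZPos=str(z),
                      RX=str(rx), RY=str(ry), RZ=str(rz))
    ET.SubElement(cmd, "OutputSignal", Value=signal)
    return ET.tostring(cmd, encoding="unicode")

print(linear_move([(953.68, -144.42, 588.02, 81.83, -5.46, 174.44),
                   (1474.54, -106.98, 482.48, 81.83, -5.46, 174.44)]))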
The omission of robot joint angles in the commands and the exclusive use of Cartesian coordinates allow the same robot program to be used with different robot types. There are already existing attempts to form vendor independent robot programming languages. In DIN 66312 [8] the Industrial Robot Language (IRL) is defined. In contrast to the ARIKT robot command set, IRL provides a 'full' robot programming language: it allows robots to be programmed in a type specific way and is not intended for remote control. IRL is not commonly supported by robot vendors (maybe due to its complexity).

4.3. XML Vision Commands

The enormous variety of optical sensors and their broad range of image processing algorithms has predestined the class of vision sensors for many different applications, such as object position detection, object recognition, barcode reading, seam inspection and so on. Against this background, the usability of a vision system in conjunction with an industrial robot depends highly on the integration of robot and vision system in terms of:

• communication
• operation
• service and diagnosis

To ensure transparent and intuitive communication in robot applications based on vision sensors, a set of abstract vision commands has been designed. The main goal of these vision commands is to provide:

• application independency
• robot independency
• consistent command syntax
• flexibility through command parameterization

The command syntax is based on an abstract view onto a common vision based robot application. The center of all considerations is a physical object (Obj) that is dealt with by the inspection system, i.e. a workpiece. Figure 4 shows as an example the structure of the LoadObj command, while Table 2 provides an overview of all predefined commands.

Figure 4: Command structure of LoadObj

Command | Description
LoadObj(<objName>) | Loads an object pattern (image processing program) into the vision system. On this pattern all further image processing tasks are performed
Trigger() | Triggers the image acquisition
StartTrigger() / StopTrigger() | Starts and stops the cyclical triggering of the vision system by the robot
SetSysParameter() | Sets global parameters on the vision system, e.g. operation mode, timeout, etc.
SetParameter() | Sets task specific parameters, e.g. settings for integrated illumination devices
GetStatus() | Gets the status (e.g. idle, busy, waiting) of the vision system
Reset() | Resets the vision system to an initial state

Table 2: Vision commands
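On the host, the Table 2 command set could be wrapped in a thin client class; the sketch below is illustrative only, since the paper defines the command names and semantics but not their XML serialization:

# Illustrative wrapper around the Table 2 command set. The wire format of
# each command is an assumption; ARIKT defines names and semantics, not
# the exact XML serialization used here.
class VisionClient:
    def __init__(self, sock):
        self.sock = sock            # a connected TCP socket to the vision system

    def _call(self, name: str, **attrs) -> str:
        args = "".join(f' {k}="{v}"' for k, v in attrs.items())
        self.sock.sendall(f"<{name}{args}/>\n".encode("utf-8"))
        return self.sock.recv(4096).decode("utf-8").strip()

    def load_obj(self, obj_name: str) -> str:
        return self._call("LoadObj", objName=obj_name)

    def trigger(self) -> str:
        return self._call("Trigger")

    def get_status(self) -> str:
        return self._call("GetStatus")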
The return value of a dedicated command is defined similarly within a schema definition file. The types of the return values are basically defined according to the fundamental types of higher programming languages. Only the representation of coordinate systems (or transformation matrices respectively), a fundamental data type in robot applications, and a variable type to indicate the status of the vision system are defined separately.

Type | Description
BOOLEAN | Designates a boolean variable
INTEGER | Designates an integer variable
REAL | Designates a floating point variable
STRING | Designates character-based variables
FRAME | Contains a homogeneous transformation matrix
VISIONSTATUS | Contains detailed information about the vision system

Table 3: Data types

Advanced mechanisms of the schema definition can be used to specify either range limitations, by setting upper and lower boundaries (Figure 5), or restrictions on the size of a type, by predefining minimal and maximal data lengths (Figure 6). These mechanisms can be used to customize the type definitions according to the conditions of the robot controller.

<xs:simpleType name="REAL"
    xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:restriction base="xs:double">
    <xs:minInclusive value="-3.4E+38" />
    <xs:maxInclusive value="3.4E+38" />
  </xs:restriction>
</xs:simpleType>

Figure 5: Schema definition for type REAL

<xs:simpleType name="STRING"
    xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:restriction base="xs:string">
    <xs:maxLength value="80" />
    <xs:minLength value="0" />
  </xs:restriction>
</xs:simpleType>

Figure 6: Schema definition for type STRING

For each type shown in Table 3 an accessing function is provided which is structured consistently:

get<TYPE>(<ObjName>.<TaskName>.<VariableName>)

To be more specific, the use of the vision commands is demonstrated with the following application scenario: a vision equipped robot has to check whether the oil filter has been mounted on an engine block. The first step is to prepare the vision system by implementing an appropriate image processing mechanism. This mechanism is named intuitively, e.g. 'CheckOilFilter', and might consist of several tasks that operate upon the oil filter: one task checks the existence of the oil filter (FindFilter), another task might read the serial number of the oil filter (ReadSerial). Each task possesses a number of variables that are used to represent the result of the owning task, e.g. BOOLEAN OilFilterExistance. Figure 7 displays the sequence and structure of commands to execute the described job.

/* Declaration */
BOOLEAN OilFilterExistance
---------------------------------------------
/* Checking oil filter */
LoadObject(CheckOilFilter)
Trigger()
result = getBoolean(CheckOilFilter.FindFilter.Existance,
                    OilFilterExistance)
if (result == OK)
    if (OilFilterExistance == TRUE)
        /* complete!! */
    else
        /* NOT complete */
if (result == NOT_OK)
    /* error handling */

Figure 7: Inspection operation sequence

For certain applications it is absolutely necessary that the vision system receives robot specific information, e.g. the Cartesian position of the TCP, joint angles or the position of external axes. This can be handled by establishing a cyclic data export from robot to vision system. The corresponding XML-based protocol description is shown in Figure 8.

<RobData>
  <PosAct X="1620.00" Y="120.00" Z="620.00" A="0.00" B="70.00" C="-10.00"/>
  <PosDesired X="1620.10" Y="120.10" Z="619.90" A="0.00" B="70.00" C="-10.00"/>
  <AxisAngleAct A1="132.00" A2="20.00" A3="65.00" A4="2.00" A5="-13.00" A6="-21.00"/>
  <AxisPosDesired A1="131.84" A2="20.20" A3="64.70" A4="2.00" A5="-13.20" A6="-21.02"/>
  <ExternalAxisAngleAct E1="0.00" E2="15.30" E3="57.00" E4="0.20" E5="5.90" E6="0.00"/>
  <ExternalAxisAngleDesired E1="0.00" E2="15.00" E3="57.20" E4="0.23" E5="6.00" E6="0.00"/>
</RobData>

Figure 8: Position and orientation of the TCP in XML format
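On the vision side, a packet like Figure 8 can be decoded with a generic parser; a small sketch using Python's standard ElementTree (field names exactly as in Figure 8, everything else illustrative):

import xml.etree.ElementTree as ET

# Decode one cyclic RobData packet (Figure 8) into plain dictionaries.
def parse_robdata(xml: str) -> dict[str, dict[str, float]]:
    root = ET.fromstring(xml)
    return {child.tag: {k: float(v) for k, v in child.attrib.items()}
            for child in root}

packet = '''<RobData>
  <PosAct X="1620.00" Y="120.00" Z="620.00" A="0.00" B="70.00" C="-10.00"/>
  <AxisAngleAct A1="132.00" A2="20.00" A3="65.00"
                A4="2.00" A5="-13.00" A6="-21.00"/>
</RobData>'''
data = parse_robdata(packet)
print(data["PosAct"]["X"], data["AxisAngleAct"]["A6"])   # 1620.0 -21.0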
The advantages provided by this approach are extremely important in industrial applications, which are dominated by heterogeneous environmental conditions such as the employment of different robot and vision systems as well as the installation of diverse communication media (Ethernet, fieldbus etc.):

• a consistent set of commands to employ vision systems within robot applications;
• adaptability and configurability of the data exchange due to the use of advanced protocol definition mechanisms (XML and schemas);
• an extensible structure; protocol modifications can easily be introduced by rearranging the underlying schema files.

XML schemas for robot control and vision control will be published in full at the end of the ARIKT project. To demonstrate the universality of the proposed protocol profile, two demonstrators are built up whereby robot and vision system are exchanged crosswise: the weld seam inspection system from Vitronic is run with both a Kuka robot (KR6/1) and a Reis robot (RV6L), and likewise the engine compartment inspection from Isra is run with both robot systems.

5. Adaptation to New Inspection Tasks

Adapting the inspection system to a new task means determining appropriate machine vision procedures and parameters and finding suitable sensor positions and trajectories. These activities shall be partially automated [9].

5.1. Determination of the Vision Procedure by a Knowledge Base

The determination of the vision procedure for a specified inspection task is supported by a 'vision knowledge base'. Because it is difficult to implement overly generalized knowledge, an own knowledge base is associated with each problem domain; for example, a vision knowledge base dealing with weld seam inspection shall not be used in engine compartment inspection and vice versa. Figure 9 shows the general data flow in the determination of a vision procedure.

Figure 9: Determination of a vision procedure in general

As an example, the determination of vision procedures and parameters in engine compartment inspection is described in the following. The knowledge base consists of numerous inspection routines to be executed by the vision system. Related to these inspection routines, information is stored in the knowledge base regarding the input data modelled for each routine. This input data stream does not require specialized vision information, but rather a general description of the objects, the inspected scene, and the camera and illumination hardware (a toy selection sketch follows the lists below):

• Object data:
  - Dimensions and shape
  - Material
  - Surface properties and colour
  - Texture / structure

• Scene data:
  - Position of the object in the area
  - Position of the camera with regard to the object
  - Number of objects
  - Inspection area(s)
  - Position of the illumination with regard to the object

• Hardware data:
  - Camera resolution
  - Camera distance
  - Image size
  - Focal length
  - Position of the illumination with regard to the camera
  - Bright field / dark field
  - Illumination intensity
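As a toy illustration of this selection step (the data model, routine names and predicates here are invented, not taken from the ARIKT knowledge base):

# Toy illustration of knowledge-base routine selection: each routine
# declares the input data it needs and a predicate on the object
# description. Routines and predicates are invented for illustration.
ROUTINES = {
    "clamping_piece_check": {
        "needs": {"dimensions", "surface", "material", "camera_pose", "image_size"},
        "fits": lambda obj: obj.get("shape") == "rectangular",
    },
    "tube_distance": {
        "needs": {"dimensions", "camera_pose"},
        # two elongated objects are mandatory for a distance check
        "fits": lambda obj: obj.get("shape") == "elongated" and obj.get("count", 1) >= 2,
    },
}

def fitting_routines(obj: dict) -> list[str]:
    provided = set(obj) | {"camera_pose", "image_size"}   # scene/hardware data
    return [name for name, r in ROUTINES.items()
            if r["needs"] <= provided and r["fits"](obj)]

obj = {"shape": "rectangular", "dimensions": (40, 12, 8),
       "surface": "metallic", "material": "steel"}
print(fitting_routines(obj))    # ['clamping_piece_check']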
Each generation of a new inspection task starts with the assignment of this task to an inspection type. This is shown in the following with the example of the clamping piece. According to the knowledge base, the following data is needed for the inspection of the clamping piece:

• dimensions and shape
• surface properties and colour
• material
• position of the camera with regard to the object
• image size

Based on this data input, the inspection task is configured automatically with the position where the object is expected, and the search for the actual object will start. The information about colour and material generates the parameters for the inspection algorithm (such as the contrast between object and background). The information about correct dimensions and correct position finally allows the verification of correct shape and correct assembly. Feedback data of the inspection routine is either dimensional data or OK/NOK information; for the latter the input parameters have to be coupled with tolerances.

During the whole process of generating an inspection plan, the user is supported in several instances by the knowledge base. In the first step a sub-sample of objects to be inspected is selected out of the information that is archived regarding the whole engine compartment. Through this sub-sample of objects to be inspected, the sub-sample of relevant data is automatically generated. Based on the specific information about the inspection object, the knowledge base then offers all fitting inspection algorithms for this object. When choosing a rectangular metallic object, for example, the inspection 'tube distance' can be neglected, because for this inspection task two elongated objects are mandatory. After the inspection task is selected, every further step is executed automatically, since all relevant data can be obtained from the knowledge base. In order to allow manual interaction there is the additional option to modify the inspection plan and the automatically generated parameters via the user interface.

5.2. Computation of Trajectories of the Sensor Head and Integrated X3D Model

One result of the determination of a vision procedure is, among others, the location of the sensor head relative to the part to be inspected. For example, the weld seam inspection sensor head shall be 60 mm away from the seam, with the line of sight perpendicular to it. From the location of the sensor head relative to the target and the given geometry of the part to be inspected, the positions of the sensor head in Cartesian scene coordinates can be computed (see Figure 10). The result is either a set of single positions (engine compartment inspection) or a continuous path trajectory (weld seam inspection).

In the case of the single positions it is not yet determined how the robot moves the sensor head from position to position: a collision free path between the single positions has to be computed. For this, the whole scene geometry (workcell, robot, part) and the robot kinematics are needed. In the case where a continuous path is given, no path planning is needed anymore, but the path has to be verified as collision free. The result is in both cases a collision free robot trajectory (or the conclusion that no collision free trajectory exists).

Figure 10: Data flow with computing robot trajectories
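The step from seam geometry to sensor poses can be sketched as follows; this is a simplified illustration (the 60 mm stand-off is from the text, while the roll convention and all coordinates are arbitrary):

import numpy as np

# Sensor head pose from a weld seam point and its surface normal:
# stand off 60 mm along the normal, optical axis looking back at the seam.
# The viewing direction fixes only two rotational DOF; the roll about the
# optical axis is chosen arbitrarily here.
STANDOFF_MM = 60.0

def sensor_pose(seam_pt: np.ndarray, normal: np.ndarray):
    n = normal / np.linalg.norm(normal)
    position = seam_pt + STANDOFF_MM * n       # camera origin
    z_axis = -n                                # optical axis points at the seam
    helper = np.array([1.0, 0.0, 0.0])         # any vector not parallel to z
    if abs(np.dot(helper, z_axis)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    x_axis = np.cross(helper, z_axis)
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.cross(z_axis, x_axis)
    rotation = np.column_stack([x_axis, y_axis, z_axis])  # 3x3 orientation
    return position, rotation

pos, rot = sensor_pose(np.array([1474.9, -106.9, 469.2]), np.array([0.0, 0.0, 1.0]))
print(np.round(pos, 1))     # 60 mm above the seam point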
As seen in the sections above, the following data has to be provided for automating the adaptation to new inspection tasks: for the computation of trajectories, the geometry of workcell, product and robot, and the kinematics of the robot; for the determination of the vision procedure, the geometry of the parts to inspect, the surface, texture and special attributes of the parts, and the attributes of the vision system sensors. Computation of trajectories and determination of vision procedures partially need the same data (the geometry of parts), and surface and texture attributes are naturally associated with part objects. Optimally, all this data should be stored together in one model to avoid redundancies and to achieve consistency.

The use of X3D (Extensible 3D) [10] allows the needed data to be integrated in one model. X3D is the 'XML variant' of VRML (Virtual Reality Modeling Language) [11]. VRML allows, among others, the description of:

• the geometry of primitive volume bodies (sphere, cone, box etc.);
• the geometry of surfaces (neighbouring polygons: indexed face sets);
• surface attributes like texture, colour, brightness, transparency, reflectivity etc.

X3D has all the modelling facilities which VRML offers, but because it is XML based, it is extensible. That means additions can be made to the X3D schema [12]; for example, an attribute like 'weld seam porosity' can be introduced for the geometric element 'line set'. Complex data structures, for example to define the (direct) kinematics of a robot, can also be described in XML and added to an X3D document. Common viewers for X3D or VRML documents, however, do not know how to handle such self-defined X3D/XML additions; they can only be handled by custom applications.

Figure 11 shows a clipping of the surface of a part with a weld seam. The surface is modelled as an X3D indexed face set, each face defined by three points. The weld seam is modelled as an X3D indexed line set: a set of points defines a set of connected lines.

Figure 11: X3D surface model: indexed face set

To determine a sensor head trajectory, the normal at each point of the indexed line set is computed. The TCP of the robot is set to a position such that it lies on the weld seam when the sensor head is at the correct distance (60 mm) and direction. The positions of the sensor head are defined by the position of the point on the weld seam and the normal at that point. With the TCP set properly, the robot system will compute the position of the sensor head by itself from the given normal and 'target point'.

6. Summary

In this paper an architecture for robot based visual inspection was introduced. This architecture shall allow the creation of inspection systems that are easy to adapt. A communication profile (Ethernet, TCP/IP, XML and application level commands) for the interaction of a host instance, a robot and a vision system has been proposed.
