Manipulator Graduation Project Foreign Literature Translation -- Manipulator Control with a Combined Strategy for Minimizing Sensor-Level Uncertainty
Manipulator Foreign Literature Translation (Revised Edition)
Security classification:  Classification number:  Serial number:  Grade:
Undergraduate Graduation Project (Thesis) Foreign Literature Translation
Original title: Simple Manipulator And The Control Of It. Translated title: A Simple Manipulator and Its Control. Department: Department of Mechanical Engineering. Major: xxxxx. Class: xxxxxxxx. Author: xxxx. Student number: xxxxxx. Supervisor: xxxxxx. Supervisor's title: Associate Professor. Completed: February 2012. Prepared by the Academic Affairs Office of North China Institute of Aerospace Engineering.
Translated title: A Simple Manipulator and Its Control. Original title: Simple Manipulator And The Control Of It. Author: Jdzj.com. Nationality: China. Original source: Jdzj.com.

Chinese translation: A Simple Manipulator and Its Control

As social production continually advances and the pace of life quickens, ever higher demands are placed on production efficiency. Owing to the rapid development of microelectronics and of computer software and hardware, and the continuous refinement of modern control theory, manipulator technology has developed quickly. Pneumatic manipulator systems in particular, with their readily available working medium, freedom from environmental pollution, low component cost, easy maintenance, and safe, reliable operation, have penetrated every sector of industry and occupy an important position in industrial development.

The pneumatic manipulator described in this paper consists of a pneumatically controlled gripper, an XY-axis lead-screw assembly, a turntable mechanism, and a rotating base. Its main task is to transfer mechanical parts; it can be installed in various production lines or logistics lines to make part handling and goods transport faster and more convenient.

1. Structure and operating sequence of the four-axis simple manipulator

The structure of the manipulator is shown in Fig. 1 below; it consists of a pneumatically controlled gripper (1), an XY-axis lead-screw assembly (2), a turntable mechanism (3), and a rotating base (4).

Fig. 1. Manipulator structure. Its motion is controlled as follows:
(1) A servo motor drives the pneumatically controlled gripper, which can rotate through 360° (a photoelectric sensor establishes the starting zero point).
(2) Stepper motors drive the lead-screw assemblies to move the gripper along the X and Y axes (with X- and Y-axis limit switches).
(3) The turntable mechanism, which can swivel through 360°, carries the gripper and the lead-screw assembly (its electric drive comprises a DC motor, a photoelectric encoder, and proximity switches).
(4) The rotating base mainly supports the three parts above.
(5) The opening and closing of the gripper is controlled pneumatically: the gripper clamps when pressurized and releases when vented. A minimal control-sequence sketch follows.
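The translated text includes no source code; the following is a minimal Python sketch of one pick-and-place cycle, assuming hypothetical driver functions (home_all, move_axis, rotate_gripper, set_gripper_pressure) for the axes and valve described above.

    import time

    # Hypothetical low-level drivers for the hardware described above.
    # These names are illustrative assumptions, not part of the original text.
    def home_all(): ...                 # drive X/Y to limit switches, find the 0 point
    def move_axis(axis, mm): ...        # stepper-driven lead screw, "x" or "y"
    def rotate_gripper(deg): ...        # servo-driven 360-degree gripper rotation
    def set_gripper_pressure(on): ...   # True = pressurize (clamp), False = vent

    def pick_and_place(pick_xy, place_xy, rotate_deg=0.0):
        """One pick-and-place cycle of the four-axis pneumatic manipulator."""
        home_all()
        move_axis("x", pick_xy[0]); move_axis("y", pick_xy[1])
        set_gripper_pressure(True)      # clamp: gripper closes when pressurized
        time.sleep(0.2)                 # allow pressure to build
        move_axis("x", place_xy[0]); move_axis("y", place_xy[1])
        rotate_gripper(rotate_deg)      # reorient the part if required
        set_gripper_pressure(False)     # vent: gripper releases the part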
Multi-Degree-of-Freedom Manipulator Graduation Thesis: Chinese-English Foreign Literature Translation
Graduation thesis Chinese-English foreign literature translation. Major: Mechanical Design, Manufacturing and Automation. Topic: mechanical design of a multi-degree-of-freedom manipulator.

English original: Automated Tracking and Grasping of a Moving Object with a Robotic Hand-Eye System

Abstract. Most robotic grasping tasks assume a stationary or fixed object. In this paper, we explore the requirements for tracking and grasping a moving object. The focus of our work is to achieve a high level of interaction between a real-time vision system capable of tracking moving objects in 3-D and a robot arm with gripper that can be used to pick up a moving object. There is an interest in exploring the interplay of hand-eye coordination for dynamic grasping tasks such as grasping of parts on a moving conveyor system, assembly of articulated parts, or grasping from a mobile robotic system. Coordination between an organism's sensing modalities and motor control system is a hallmark of intelligent behavior, and we are pursuing the goal of building an integrated sensing and actuation system that can operate in dynamic, as opposed to static, environments. The system we have built addresses three distinct problems in robotic hand-eye coordination for grasping moving objects: fast computation of 3-D motion parameters from vision, predictive control of a moving robotic arm to track a moving object, and interception and grasping. The system is able to operate at approximately human arm movement rates, and experimental results are presented in which a moving model train is tracked, stably grasped, and picked up by the system. The algorithms we have developed that relate sensing to actuation are quite general and applicable to a variety of complex robotic tasks that require visual feedback for arm and hand control.

I. INTRODUCTION

The focus of our work is to achieve a high level of interaction between real-time vision systems capable of tracking moving objects in 3-D and a robot arm equipped with a dexterous hand that can be used to intercept, grasp, and pick up a moving object. We are interested in exploring the interplay of hand-eye coordination for dynamic grasping tasks such as grasping of parts on a moving conveyor system, assembly of articulated parts, or grasping from a mobile robotic system. Coordination between an organism's sensing modalities and motor control system is a hallmark of intelligent behavior, and we are pursuing the goal of building an integrated sensing and actuation system that can operate in dynamic, as opposed to static, environments.

There has been much research in robotics over the last few years that addresses either visual tracking of moving objects or generalized grasping problems. However, there have been few efforts that try to link the two problems. It is quite clear that complex robotic tasks such as automated assembly will need integrated systems that use visual feedback to plan, execute, and monitor grasping.

The system we have built addresses three distinct problems in robotic hand-eye coordination for grasping moving objects: fast computation of 3-D motion parameters from vision, predictive control of a moving robotic arm to track a moving object, and interception and grasping. The system is able to operate at approximately human arm movement rates, using visual feedback to track, intercept, stably grasp, and pick up a moving object. The algorithms we have developed that relate sensing to actuation are quite general and applicable to a variety of complex robotic tasks that require visual feedback for arm and hand control.

Our work also addresses a very fundamental and limiting problem that is inherent in building integrated sensing/actuation systems: integration of systems with different sampling and processing rates. Most complex robotic systems are actually amalgams of different processing devices, connected by a variety of methods. For example, our system consists of three separate computation systems: a parallel image processing computer; a host computer that filters, triangulates, and predicts 3-D position from the raw vision data; and a separate arm control system computer that performs inverse kinematic transformations and joint-level servoing. Each of these systems has its own sampling rate, noise characteristics, and processing delays, which need to be integrated to achieve smooth and stable real-time performance. In our case, this involves overcoming visual processing noise and delays with a predictive filter based upon a probabilistic analysis of the system noise characteristics. In addition, real-time arm control needs to be able to operate at fast servo rates regardless of whether new predictions of object position are available.

The system consists of two fixed cameras that can image a scene containing a moving object (Fig. 1). A PUMA-560 with a parallel-jaw gripper attached is used to track and pick up the object as it moves (Fig. 2). The system operates as follows:
1) The imaging system performs a stereoscopic optic-flow calculation at each pixel in the image. From these optic-flow fields, a motion energy profile is obtained that forms the basis for a triangulation that can recover the 3-D position of a moving object at video rates.
2) The 3-D position of the moving object computed by step 1 is initially smoothed to remove sensor noise, a nonlinear filter is used to recover the correct trajectory parameters which can be used for forward prediction, and the updated position is sent to the trajectory-planner/arm-control system.
3) The trajectory planner updates the joint-level servos of the arm via kinematic transform equations. An additional fixed-gain filter is used to provide servo-level control in case of missed or delayed communication from the vision and filtering system.
4) Once tracking is stable, the system commands the arm to intercept the moving object, and the hand is used to grasp the object stably and pick it up.

The following sections of the paper describe each of these subsystems in detail, along with experimental results.

II. PREVIOUS WORK

Previous efforts in the areas of motion tracking and real-time control are too numerous to list exhaustively here. We instead list some notable efforts that have inspired us to use similar approaches. Burt et al. [9] have focused on high-speed feature detection and hierarchical scaling of images in order to meet the real-time demands of surveillance and other robotic applications. Related work has been reported by Lee and Wohn [29] and by Wiklund and Granlund [43], who use image-differencing methods to track motion. Corke, Paul, and Wohn [13] report a feature-based tracking method that uses special-purpose hardware to drive a servo controller of an arm-mounted camera. Goldenberg et al. [16] have developed a method that uses temporal filtering with vision hardware similar to our own. Luo, Mullen, and Wessel [30] report a real-time implementation of motion tracking in 1-D based on Horn and Schunck's method. Verghese et al. [41] report real-time short-range visual tracking of objects using a pipelined system similar to our own. Safadi [37] uses a tracking filter similar to our own and a pyramid-based vision system, but few results are reported with this system. Rao and Durrant-Whyte [36] have implemented a Kalman filter-based decentralized tracking system that tracks moving objects with multiple cameras. Miller [31] has integrated a camera and arm for a tracking task where the emphasis is on learning kinematic and control parameters of the system. Weiss et al. [42] also use visual feedback to develop control laws for manipulation. Brown [8] has implemented a gaze control system that links a robotic "head" containing binocular cameras with a servo controller that allows one to maintain a fixed gaze on a moving object. Clark and Ferrier [12] also have implemented a gaze control system for a mobile robot. A variation of the tracking problem is the case of moving cameras; some of the papers addressing this interesting problem are [9], [15], [44], and [18].

The majority of the literature on the control problems encountered in motion-tracking experiments is concerned with generating smooth, up-to-date trajectories from the noisy and delayed outputs of different vision algorithms. Our previous work [4] coped with that problem in a similar way as in [38], using an alpha-beta-gamma filter, which is a form of steady-state Kalman filter. Other approaches can be found in papers by [33], [34], [28], [6]. In the work of Papanikolopoulos et al. [33], [34], visual sensors are used in the feedback loop to perform adaptive robotic visual tracking. Sophisticated control schemes are described which combine a Kalman filter's estimation and filtering power with an optimal (LQG) controller which computes the robot's motion. The vision system uses an optic-flow computation based on the SSD (sum of squared differences) method which, while time consuming, appears to be accurate enough for the tracking task. Efficient use of windows in the image can improve the performance of this method. The authors have presented good tracking results, and state that the controller is robust enough that the use of more complex (time-varying LQG) methods is not justified. Experimental results with the CMU Direct Drive Arm II show that the methods are quite accurate, robust, and promising.

The work of Lee and Kay [28] addresses the problem of uncertainty of cameras in the robot's coordinate frame. The fact that cameras have to be strictly fixed in the robot's frame can be quite troublesome, since each time they are (most often accidentally) displaced, one has to undertake the tedious job of recalibration. Again, the estimation of the moving object's position and orientation is done in Cartesian space, and a simple error model is assumed. Andersen et al. [6] adopt a third-order Kalman filter to allow a robotic system (consisting of two degrees of freedom) to play the labyrinth game. A somewhat different approach has been explored in the work of Houshangi [24] and Koivo et al. [27]. In these works, autoregressive (AR) and autoregressive moving-average with exogenous input (ARMAX) models are investigated for visual tracking.

III. VISION SYSTEM

In a visual tracking problem, motion in the imaging system has to be translated into 3-D scene motion.
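The alpha-beta-gamma filter mentioned above is a steady-state position/velocity/acceleration form of the Kalman filter. The paper's excerpt gives no code; the following is a minimal one-axis sketch with illustrative gain values (in the actual system the gains would come from the noise analysis the paper describes).

    class AlphaBetaGammaFilter:
        """Steady-state position/velocity/acceleration tracker for one axis."""

        def __init__(self, dt, alpha=0.5, beta=0.4, gamma=0.1):
            self.dt = dt
            self.alpha, self.beta, self.gamma = alpha, beta, gamma
            self.x = self.v = self.a = 0.0   # position, velocity, acceleration

        def update(self, z):
            dt = self.dt
            # Predict one sample ahead (this prediction bridges vision delays).
            x_pred = self.x + self.v * dt + 0.5 * self.a * dt * dt
            v_pred = self.v + self.a * dt
            r = z - x_pred                    # innovation (measurement residual)
            # Correct with fixed gains.
            self.x = x_pred + self.alpha * r
            self.v = v_pred + (self.beta / dt) * r
            self.a = self.a + (2.0 * self.gamma / dt ** 2) * r
            return x_pred

In a setup like the one described, one such filter would run per Cartesian axis, with the prediction used by the arm controller between (or in place of) delayed vision updates.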
Our approach is to initially compute local optic-flow fields that measure image velocity at each pixel in the image. A variety of techniques for computing optic-flow fields have been used with varying results, including matching-based techniques [5], [10], [39], gradient-based techniques [23], [32], [11], and spatio-temporal energy methods [20], [2]. Optic-flow was chosen as the primitive upon which to base the tracking algorithm for the following reasons.
· The ability to track an object in three dimensions implies that there will be motion across the retinas (image planes) that are imaging the scene. By identifying this motion in each camera, we can begin to find the actual 3-D motion.
· The principal constraint in the imaging process is high computational speed, to satisfy the update process for the robotic arm parameters. Hence, we needed to be able to compute image motion quickly and robustly. The Horn-Schunck optic-flow algorithm (described below) is well suited for real-time computation on our PIPE image processing engine.
· We have developed a new framework for computing optic-flow robustly using an estimation-theoretic framework [40]. While this work does not specifically use these ideas, we have future plans to try to adapt this algorithm to such a framework.

Our method begins with an implementation of the Horn-Schunck method of computing optic-flow [22]. The underlying assumption of this method is the optic-flow constraint equation, which assumes the image irradiance at time t and at time t + dt will be the same:

    I(x + u dt, y + v dt, t + dt) = I(x, y, t)    (1)

If we expand this constraint via a Taylor series expansion, and drop second- and higher-order terms, we obtain the form of the constraint we need to compute normal velocity:

    I_x u + I_y v + I_t = 0    (2)

where u and v are the velocities in image space, and I_x, I_y, and I_t are the spatial and temporal derivatives in the image. This constraint limits the velocity field in an image to lie on a straight line in velocity space. The actual velocity cannot be determined directly from this constraint due to the aperture problem, but one can recover the component of velocity normal to this constraint line:

    v_n = -I_t / sqrt(I_x^2 + I_y^2)    (3)

A second, iterative process is usually employed to propagate velocities in image neighborhoods, based upon a variety of smoothness and heuristic constraints. These added neighborhood constraints allow for recovery of the actual velocities u, v in the image. While computationally appealing, this method of determining optic-flow has some inherent problems. First, the computation is done on a pixel-by-pixel basis, creating a large computational demand. Second, the information on optic flow is only available in areas where the gradients defined above exist.

We have overcome the first of these problems by using the PIPE image processor [26], [7]. The PIPE is a pipelined parallel image processing computer capable of processing 256 x 256 x 8 bit images at frame-rate speeds, and it supports the operations necessary for optic-flow computation in a pixel-parallel method (a typical image operation such as convolution, warping, or addition/subtraction of images can be done in one cycle, 1/60 s). The second problem is alleviated by our not needing to know the actual velocities in the image. What we need is the ability to locate and quantify gross image motion robustly. This rules out simple differencing methods, which are too prone to noise and make location of image movement difficult. Hence, a set of normal velocities at strong gradients is adequate for our task, precluding the need to iteratively propagate velocities in the image.
A. Computing Normal Optic-Flow in Real Time

Our goal is to track a single moving object in real time. We are using two fixed cameras that image the scene and need to report motion in 3-D to a robotic arm control program. Each camera is calibrated with the 3-D scene, but there is no explicit need to use registered (i.e., scan-line coherent) cameras. Our method computes the normal component of optic-flow for each pixel in each camera image, finds a centroid of motion energy for each image, and then uses triangulation to intersect the back-projected centroids of image motion in each camera. Four processors are used in parallel on the PIPE per camera: two each for the calculation of the X and Y motion-energy centroids in each image. We also use a special processor board (ISMAP) to perform real-time histogramming. The steps below correspond to the numbers in Fig. 3.
1) The camera images the scene and the image is sent to processing stages in the PIPE.
2) The image is smoothed by convolution with a Gaussian mask. The convolution operator is a built-in operation in the PIPE and can be performed in one frame cycle.
3-4) In the next two cycles, two more images are read in, smoothed and buffered, yielding smoothed images I0, I1, and I2. The ability to buffer and pipeline images allows temporal operations on images, albeit at the cost of processing delays (lags) on output. There are now three smoothed images in the PIPE, with the oldest image lagging by 3/60 s.
5) Images I0 and I2 are subtracted, yielding the temporal derivative It.
6) In parallel with step 5, image I1 is convolved with a 3 x 3 horizontal spatial gradient operator, returning the discrete form of Ix. In parallel, the vertical spatial gradient is calculated, yielding Iy (not shown).
7-8) The results from steps 5 and 6 are held in buffers and then are input to a look-up table that divides the temporal gradient at each pixel by the absolute value of the summed horizontal and vertical spatial gradients, which approximates the denominator in (3). This yields the normal velocity in the image at each pixel. These velocities are then thresholded, and any isolated (i.e., single-pixel motion energy) blobs are morphologically eroded. The above-threshold velocities are then encoded as gray value 255. In our experiments, we threshold all velocities below 10 pixels per 60 ms to zero velocity.
9-10) In order to get the centroid of the motion information, we need the X and Y coordinates of the motion energy. For simplicity, we show only the situation for the X coordinate. The gray-value ramp in Fig. 3 is an image that encodes the horizontal coordinate value (0-255) for each point in the image as a gray value. Thus, it is an image that is black (0) at horizontal pixel 0 and white (255) at horizontal pixel 255. If we logically AND each pixel of the above-threshold velocity image with the ramp image, we obtain an image which encodes high-velocity pixels with their positional coordinates in the image, and leaves pixels with no motion at zero.
11) By taking this result and histogramming it, via a special stage of the PIPE which performs histograms at frame-rate speeds, we can find the centroid of the moving object by finding the mean of the resulting histogram. Histogramming the high-velocity position-encoded images yields 256 16-bit values (a result for each intensity in the image). These 256 values can be read off the PIPE via a parallel interface in about 10 ms. This operation is performed in parallel to find the moving object's Y centroid (and in parallel for the X and Y centroids for camera 2). The total associated delay time for finding the centroid of a moving object becomes 15 cycles, or 0.25 s.

The same algorithm is run in parallel on the PIPE for the second camera. Once the motion centroids are known for each camera, they are back-projected into the scene using the camera calibration matrices and triangulated to find the actual 3-D location of the movement. Because of the pipelined nature of the PIPE, a new X or Y coordinate is produced every 1/60 s with this delay. While we are able to derive 3-D position from motion stereo at real-time rates, there are a number of sources of noise and error inherent in the vision system. These include stereo triangulation error, moving shadows which are interpreted as object motion (we use no special lighting in the scene), and small shifts in centroid alignment due to the different viewing angles of the cameras, which have a large baseline. The net effect of this is to create a 3-D position signal that is accurate enough for gross-level object tracking, but not sufficient for the smooth and highly accurate tracking required for grasping the object.

Chinese translation (excerpt): Automated Tracking and Grasping of a Moving Object with a Robotic Hand-Eye System. Abstract: most robotic grasping tasks are assumed to be performed on a stationary or fixed object.
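For concreteness, here is a minimal sketch of the triangulation step described above: each camera's motion centroid is back-projected to a 3-D ray using its calibration, and the object position is taken as the midpoint of the shortest segment between the two rays. The camera model and names are illustrative assumptions, not the paper's actual calibration code.

    import numpy as np

    def backproject(K, R, t, pixel):
        """Ray (origin, direction) through `pixel` for a camera with
        intrinsics K and pose (R, t) mapping world -> camera coordinates."""
        d_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
        origin = -R.T @ t                  # camera center in world coordinates
        direction = R.T @ d_cam
        return origin, direction / np.linalg.norm(direction)

    def triangulate(ray_a, ray_b):
        """Midpoint of the common perpendicular between two skew 3-D rays."""
        (pa, da), (pb, db) = ray_a, ray_b
        # Solve [da, -db] [s, t]^T ~= (pb - pa) in least squares
        # for the parameters of the closest points on each ray.
        A = np.stack([da, -db], axis=1)
        s, t = np.linalg.lstsq(A, pb - pa, rcond=None)[0]
        return 0.5 * ((pa + s * da) + (pb + t * db))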
Manipulator Design: English Reference Literature and Translation
Translator: Wang Momo, Shandong University of Science and Technology. Source paper: Automated Calibration of Robot Coordinates for Reconfigurable Assembly Systems. The translation follows.

Automated Calibration of Robot Coordinates for Reconfigurable Assembly Systems

T. Arai, Y. Maeda, H. Kikuchi, M. Sugi
Department of Precision Engineering, School of Engineering, The University of Tokyo, Tokyo, Japan

Abstract. To achieve higher reconfigurability of assembly lines, rapid plug-in and plug-out of essential devices such as robots is studied. When a new device is installed in a line, it should be equipped with a calibration system. This work develops an automated calibration system for relative position and orientation that uses two CCD cameras and is based on the direct linear transformation (DLT) method. The cameras are placed arbitrarily, and each manipulator then executes a set of motions. By detecting the manipulator motions with the cameras, the relative positions of the two robots are captured. The best accuracy obtained was an RMS value of 0.16 mm.
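The excerpt does not include the DLT details; the following is a minimal sketch, under standard DLT assumptions, of how one camera's 11 DLT parameters can be estimated by linear least squares from known 3-D marker positions (e.g., points visited by a manipulator) and their observed image coordinates.

    import numpy as np

    def dlt_calibrate(world_pts, image_pts):
        """Estimate the 11 DLT parameters of one camera.

        world_pts: (N, 3) known 3-D points (N >= 6); image_pts: (N, 2) pixels.
        Returns L = (L1..L11) such that
          u = (L1 X + L2 Y + L3 Z + L4) / (L9 X + L10 Y + L11 Z + 1)
          v = (L5 X + L6 Y + L7 Z + L8) / (L9 X + L10 Y + L11 Z + 1)
        """
        rows, rhs = [], []
        for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
            rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
            rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
            rhs.extend([u, v])
        L, *_ = np.linalg.lstsq(np.array(rows, float),
                                np.array(rhs, float), rcond=None)
        return L

With both cameras calibrated this way, a point observed in both images can be triangulated, which is how the relative poses of the two robots would be recovered.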
Keywords: assembly, calibration, robot

1 Introduction

New manufacturing systems for the 21st century require new capabilities such as reusability, extensibility, agility, and reconfigurability [1]. Low-cost changes of system configuration allow a system to respond to both foreseeable and unforeseeable market fluctuations. For assembly systems, many researchers have proposed decentralized approaches to achieve reconfigurability [2][3]. Most of these are agent-based systems in which agents negotiate with one another to establish a new configuration. Negotiation, however, is only part of the goal: in real production systems, physical issues such as workspace conflicts must be resolved effectively. To achieve higher reconfigurability, some researchers have developed special uniform cells, despite their high cost [4][5][6]. The authors have proposed an autonomous decentralized robot system for assembly cells that incorporates a variety of conventional devices [7][8]. The system can add assembly devices to, or remove them from, one system or another, and it resolves workspace conflicts appropriately through negotiation. We may call this capability "plug and produce". During reconfiguration, calibration of the assembly robots is essential, because it is needed to measure the features of the agents concerned so that good cooperation can be established between the physical agents. The calibration must meet the various requirements listed in Table 1.
Manipulator Design Foreign Literature Translation 2
Translation 1: The Manipulator

The robot is a typical mechatronic device. It draws on the latest research results in mechanical and precision engineering, microelectronics and computing, automatic control and drives, sensing and information processing, and artificial intelligence. With economic development and rising demands for automation in every industry, robot technology has developed rapidly and a wide variety of robot products have appeared.

Modern industrial robots are true marvels of engineering. A robot the size of a person can easily lift a load of over one hundred pounds and move it repeatedly within an error of ±0.006 inches. More importantly, these robots can work 24 hours a day without ever stopping. In many applications (particularly in the automotive industry) they are controlled by programs, and once programmed they can repeat the same job for many years.

The practical application of robot products both solves many problems that are difficult to handle by human labor alone and advances industrial automation. At present, because robot research and development involves many technologies and the system structures are complex, development costs are generally high, which to some extent limits the wide application of the technology. Developing economical, practical, and highly reliable robot systems therefore has broad social and economic value.

With the rapid economic development and urbanization of our country, urban sewage discharge is growing quickly, and sewage treatment has been placed on the agenda. As science and technology progress and people's knowledge grows, the importance and urgency of sewage treatment are increasingly recognized. Scientists and researchers have found that plastic products immersed in water are very effective carriers for the sludge bacteria used in sewage treatment. The large demand for plastic products makes automated, high-efficiency plastic production an economic necessity.

Drawing on problems encountered in developing a single-pass plastic extruder and a plastic-grasping manipulator, this paper reviews the state of robot research in recent years and, on the basis of fully exploiting the respective strengths of mechanics, electronics, software, and hardware, analyzes and designs the overall mechanical structure, transmission system, drive devices, and control system of a material-grasping manipulator, and proposes an economical design scheme. A frame-type mechanical structure combining Cartesian and joint coordinates is adopted, which improves the stability and operating flexibility of the system.

The function of the transmission is to transfer the power of the drive elements to the corresponding actuators of the manipulator so as to realize the necessary motions. The transmission uses a compact worm-gear drive with a large transmission ratio, together with a screw drive that converts rotary motion into linear motion; a worked example follows.
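As a worked example of these two transmission elements (the numbers are illustrative, not values from the original text): for a worm drive, the ratio is the wheel tooth count over the number of worm starts; for a lead screw, linear speed is the lead times the rotational speed.

    \[
    i = \frac{z_{\text{wheel}}}{z_{\text{starts}}} = \frac{40}{1} = 40,
    \qquad
    n_{\text{out}} = \frac{n_{\text{in}}}{i} = \frac{1200\ \text{r/min}}{40} = 30\ \text{r/min}
    \]
    \[
    v = p_{\text{lead}} \cdot n_{\text{out}}
      = 5\ \text{mm/rev} \times 30\ \text{r/min}
      = 150\ \text{mm/min} = 2.5\ \text{mm/s}
    \]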
Mechanical Arm Foreign Literature Translation
Source: Manufacturing Engineering and Technology—Machining. Attachment 1: English original.

Manipulator

First, an overview of the industrial manipulator. With the rapid development of China's industrial production, and especially the rapid rise in automation since reform and opening-up, workpiece handling, turning, and conveying, as well as operations with brazing torches, spray guns, wrenches, and other tools for processing and assembly, have attracted more and more attention. The manipulator is an automatic mechanical device that imitates part of the actions of the human hand and, according to a given program, trajectory, and requirements, automatically grasps, handles, or operates on objects. Using manipulators in production can raise the automation level and labor productivity; it can reduce labor intensity, ensure product quality, and achieve safe production. Particularly in poor environments with high temperature, high pressure, low temperature, low pressure, dust, explosive or toxic gases, and radioactivity, it can replace people in normal work. Here I consider designing such a manipulator for actual production.

Why design the manipulator with pneumatic power (the item numbering follows the excerpt):
1. Air is inexhaustible; after use it is discharged into the atmosphere, requires no recycling or disposal, and does not pollute the environment (an environmental consideration).
2. The viscosity of air is small, so pipeline pressure losses are small (typically less than one-thousandth of the pressure drop of an oil circuit), which facilitates long-distance transmission.
5. Air is a clean medium; it does not degrade and does not readily clog pipelines.
But there are also drawbacks:
2. Because the working pressure is low, the output power cannot be too large; to increase the output power, the size of the whole pneumatic system must grow.

Air is inexhaustible; used air is discharged into the atmosphere without recycling or disposal and does not pollute the environment, and accidental or small leakage has no serious impact on production. The viscosity of air is small, so pipeline pressure loss is also very small, easing long-distance transmission. Compared with hydraulic transmission, its action and response are faster, which is one of the outstanding merits of pneumatics.

The system comprises: 1. actuators; 2. transmission mechanism; 3. control system.

Robots of this kind are generally divided into three categories:
First, mechanical hands (loading/unloading robots, assembly robots, handling robots, stacking robots, assist robots, vacuum handling machines, vacuum suction cranes, labor-saving spreaders, pneumatic balancers, etc.).
Second, cantilever cranes (cantilever cranes, electric chain-hoist cranes, air-balanced hoists, etc.).
Third, rail-type transport systems (suspended rail, light rail, single-girder cranes, double-girder cranes).

Fourth, applications of industrial mechanical hands (numbering as in the excerpt):
(3) They can replace human labor in poor, monotonous, repetitive, fatigue-prone working environments.
(4) They can work in dangerous situations, such as handling of military goods and removal of dangerous and hazardous materials.
(5) Space and ocean development.
(6) Military engineering and biomedical research and testing.

Assist mechanical hands. Attachment 1: translated text. The manipulator is a high-tech automated production device developed over the past few decades.
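The excerpt describes pneumatic actuation but contains no control code; the following is a minimal sketch, assuming a hypothetical digital-output interface (DigitalOutput) driving a 5/2 solenoid valve that pressurizes or vents the gripper cylinder.

    import time

    class DigitalOutput:
        """Hypothetical stand-in for a PLC/GPIO digital output channel."""
        def __init__(self, channel):
            self.channel, self.state = channel, False
        def write(self, state):
            self.state = state          # real hardware I/O would happen here

    class PneumaticGripper:
        """Gripper on a 5/2 solenoid valve: energized = clamp, de-energized = vent."""
        def __init__(self, valve, actuate_time=0.2):
            self.valve, self.actuate_time = valve, actuate_time

        def clamp(self):
            self.valve.write(True)      # pressurize cylinder: jaws close
            time.sleep(self.actuate_time)

        def release(self):
            self.valve.write(False)     # vent to atmosphere: jaws open
            time.sleep(self.actuate_time)

    gripper = PneumaticGripper(DigitalOutput(channel=3))
    gripper.clamp()
    gripper.release()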
Industrial Manipulator Foreign Literature Translation (Chinese-English)
Chapter 1: Overview

1.1 History of manipulator development

In the historical process of transforming nature, as human understanding and use of materials, energy, and information have grown, people have continually created tools (machines) to satisfy and drive the development of productive forces. As industrial society moves toward the information society, production automation and adaptability requirements keep rising, and the old machine systems appear cumbersome and inflexible. People therefore imitated their own bodies and functions, integrating the controller, power source, transmission, and working machine into a single "integrated" machine system: the robot. This brought about a great transformation of production systems into "human - robot - object of labor", or "human - robot - power machine - working machine - object of labor".

Although robotics has a history of only some thirty years since its birth, it has shown vigorous vitality. In recent years, calls for developing robots have only grown worldwide: developed countries compete to lead, and developing countries hurry to catch up. Many technologically advanced countries have included robot technology in national plans and pursued it vigorously. Robotics research in our country has also started; research topics related to robot technology, such as "robot development research" and research on flexible manufacturing systems and equipment, have been included in the national Seventh and Eighth Five-Year science and technology development plans and in the "863" high-technology development program.

The industrial manipulator is a new technology that has emerged in the modern field of automatic control and has become an important component of modern mechanical manufacturing systems. This new technology is developing rapidly and is gradually forming a new discipline: manipulator engineering.

1.2 Significance of manipulator development

The rapid development of the manipulator is due to its positive effects being increasingly recognized: first, it can partially replace manual operation; second, it can follow the requirements of the production process, completing the transfer, loading, and unloading of workpieces according to a given program, timing, and position; third, it can operate the necessary tools for welding and assembly. It thereby greatly improves workers' working conditions, significantly raises labor productivity, and speeds up the mechanization and automation of industrial production. It has accordingly received attention in the advanced industrial countries, which have invested large amounts of manpower and material resources in its research and application. In recent years, with the development of industrial automation, the manipulator has gradually become a new discipline and has developed rapidly. Manipulators are widely used in forging, stamping, casting, welding, assembly, machining, spray painting, heat treatment, and other industries. Particularly in heavy, high-temperature, toxic, dangerous, radioactive, and dusty harsh working environments, the manipulator has received special attention because of its notable advantages.
Graduation Project Manipulator Foreign Literature Translation
Foreign literature translation. Translated title: A kinematically compatible framework for cooperative payload transport by nonholonomic mobile manipulators (2). Original title: A kinematically compatible framework for cooperative payload transport by nonholonomic mobile manipulators (2). Source: Auton Robot (2006) 21:227-242.

A kinematically compatible framework for cooperative payload transport by nonholonomic mobile manipulators (2)

M. Abou-Samah (1), C. P. Tang (2), R. M. Bhatt (2), and V. Krovi (2)
(1) MSC Software Corporation, Ann Arbor, MI 48105, USA
(2) Mechanical and Aerospace Engineering, State University of New York at Buffalo, Buffalo, NY 14260, USA
Received: 5 August 2005 / Revised: 25 May 2006 / Accepted: 30 May 2006 / Published online: 5 September 2006

Abstract. In this paper, we examine the development of a kinematically compatible control framework for a modular system of wheeled mobile manipulators that can team up to cooperatively transport a common payload. Each individually autonomous mobile manipulator consists of a differentially-driven wheeled mobile robot (WMR) with a mounted two degree-of-freedom (d.o.f.) revolute-jointed, planar, and passive manipulator arm. The composite wheeled vehicle, formed by placing a payload at the end-effectors of two (or more) such mobile manipulators, has the capability to accommodate, detect, and correct both instantaneous and finite relative configuration errors. The kinematically compatible motion-planning/control framework developed here is intended to facilitate maintenance of all kinematic (holonomic and nonholonomic) constraints within such systems. Given an arbitrary end-effector trajectory, each individual mobile manipulator's bi-level hierarchical controller first generates a kinematically feasible desired trajectory for the WMR base, which is then tracked by a suitable lower-level posture-stabilizing controller. Two variants of system-level cooperative control schemes (leader-follower and decentralized control) are then created based on the individual mobile-manipulator control scheme. Both methods are evaluated within an implementation framework that emphasizes both virtual prototyping (VP) and hardware-in-the-loop (HIL) experimentation. Simulation and experimental results of an example two-module system are used to highlight the capabilities of a real-time local sensor-based controller for the accommodation, detection, and correction of relative formation errors.

Keywords: Composite system - Hardware-in-the-loop - Mobile manipulator - Physical cooperation - Redundancy resolution - Virtual prototyping

Kinematic collaboration of two mobile manipulators

We now examine two variants of system-level cooperative control schemes (leader-follower and decentralized control) that can be created based on the individual mobile-manipulator control scheme.

Leader-follower approach

The first method of modeling such a system considers the midpoint of the mobile base (MP B) of mobile manipulator B to be rigidly attached to the end-effector of mobile manipulator A, as depicted in Fig. 4. Figure 4(b) depicts how the end-effector frame {E} of MP A is rigidly attached to the frame at MP B (separated by a constant rotation angle β):

(15)

Fig. 4. Schematic diagrams of the leader-follower scheme: (a) the 3-link mobile manipulator under analysis, and (b) the two-module composite system.

MP B now takes on the role of the leader and can be controlled to follow any trajectory that is feasible for a WMR. Hence, given a trajectory of the leader MP B and the preferred manipulator configuration, Eq.
(5) can be rewritten as:

(16)

and correspondingly Eqs. (6)-(8) as:

(17)

Thus, the trajectory of the virtual (reference) robot for the follower MP A, and the derived velocities, can now be determined. This forms the leader-follower scheme used for the control of the collaborative system carrying a common payload.

Decentralized approach

The second approach considers the frame attached to a point of interest on the common payload as the end-effector frame of both flanking mobile manipulator systems, as depicted in Fig. 5. Thus, a desired trajectory specified for this payload frame can provide the desired reference trajectories for the two mobile platforms using the framework developed in the previous section. This permits Eq. (5) to be rewritten as:

(18)

Fig. 5. The decentralized control scheme permits (a) the composite system to be treated as (b) two independent 2-link mobile manipulators.

and correspondingly Eqs. (6)-(8) as:

(19)

Each two-link mobile manipulator now controls its configuration with reference to this common end-effector frame mounted on the payload. However, the locations of the attachments of the physical manipulators with respect to the payload reference frame must be known a priori.

Implementation framework

We examine the design and development of a two-stage implementation framework, shown in Fig. 6, that emphasizes both virtual prototyping (VP) based refinement and hardware-in-the-loop (HIL) experimentation.

Fig. 6. Paradigm for rapid development and testing of the control scheme on virtual and physical prototypes.

Virtual prototyping based refinement

In the first stage, we employ virtual prototyping (VP) tools to rapidly create, evaluate, and refine parametric models of the overall system and test various algorithms in simulation within a virtual environment. 3D solid models of the mobile platforms and the manipulators of interest are created in a CAD package (footnote 4) and exported with their corresponding geometric and material properties into a dynamic simulation environment (footnote 5). Figure 7(a) shows an example of the application of such a framework for simulating the motion of a mobile platform controlled by an algorithm implemented in Simulink (footnote 6). However, it is important to note that the utility of such virtual testing is limited by: (a) the ability to correctly model and simulate the various phenomena within the virtual environment; (b) the fidelity of the available simulation tools; and (c) ultimately, the ability of the designer to correctly model the desired system and suitably interpret the results.
Fig. 7. A single WMR base undergoing testing within (a) the virtual prototyping framework and (b) the hardware-in-the-loop (HIL) testing framework.

Hardware-in-the-loop experimentation

We employ a hardware-in-the-loop (HIL) methodology for rapid experimental verification of the real-time controllers on the electromechanical mobile manipulator prototypes. Each individual WMR is constructed using two powered wheels and two unactuated casters. Conventional disc-type rear wheels, powered by gear-motors, are chosen because of their robust physical construction and ease of operation in the presence of terrain irregularities. Passive ball casters are preferred over wheel casters to simplify the constraints on maneuverability introduced by the casters. The mounted manipulator arm has two passive revolute joints with axes of rotation parallel to each other and perpendicular to the base of the mobile platform. The first joint is placed at the geometric center of the top frame of the platform. The location of the second joint can be adjusted to any position along the slotted first link. The second link itself is reduced to a flat plate supported by the second joint. Each joint is instrumented with an optical encoder that can measure the joint rotations. The completely assembled two-link mobile manipulator is shown in Fig. 1(c). The second mobile manipulator (see the left module of Fig. 1(b) and (d)) employs the same overall design but possesses a motor at the base joint of the mounted two-link arm. The motor may be used to control the joint motion along a predetermined trajectory (which can include braking/holding the joint at a predetermined position). When the motor is switched off, the joint reverts to a passive joint (with much greater damping). The motor is included to permit future force-redistribution studies. In this paper, however, the motor is used solely to lock the joint to prevent self-motions of the articulated linkage in certain pathological cases (Bhatt et al., 2005; Tang and Krovi, 2004). PWM-output motor driver cards are used to drive the gear-motors, and encoder cards monitor the encoders instrumenting the various articulated arms. This embedded controller communicates with a designated host computer using TCP/IP for program download and data logging. The host computer, with MATLAB/Simulink/Real-Time Workshop (footnote 8), provides a convenient graphical user interface environment for system-level software development using a block-diagrammatic language. The compiled executable is downloaded over the network and executes in real time on the embedded controller while accessing locally installed hardware components. In particular, the ability to selectively test components and systems at various levels (e.g., individual motors, individual WMRs, or entire systems) without wearing out components during design iterations was very useful. Figure 7(b) illustrates the implementation of such a system on one of the WMRs.
Numerous calibration, simulation, and experimental studies carried out with this framework, at the individual level and the system level, are reported in Abou-Samah (2001).

Experimental results

For the subsequent experiments, we prescribe the initial configuration of the two-module composite system, as shown in Fig. 8. Specifically, we position the two WMRs such that MP A is located at a prescribed relative position and relative orientation with respect to MP B. For fixed link lengths this inherently specifies the values of the various configuration angles, as shown in Table 1.

Table 1. Parameters for the initial configuration of the two-module composite wheeled system (see Fig. 8 for details)
  Link lengths of the articulation:          L1 = 0.28 m (11 in); L2 = 0.28 m (11 in); L3 = 0.28 m (11 in)
  Relative angles of the configuration:      phi1 = 33.98°; phi2 = 280.07°; phi3 = 337.36°; phi4 = 128.59°
  Offset between the wheeled mobile bases:   delta = 0.00°; 0.00 m (0 in); 0.61 m (24 in)

Fig. 8. Initial configuration of the two-module composite wheeled system.

Leader-follower approach

A straight-line trajectory at a velocity of 0.0254 m/s is prescribed for the leader, MP B. Given a desired configuration of the manipulator arm, the algorithm described in Section 4.1 is used to obtain a desired trajectory for MP A. A large disruption is intentionally introduced by causing one of the wheels of MP A to run over a bump, to evaluate the effectiveness of the disturbance accommodation, detection, and compensation. The results are examined in two case scenarios. Case A: MP A employs odometric estimation for localization, as seen in Fig. 9. Case B: MP A employs the sensed articulations for localization, as seen in Fig. 10. In each of these figures, (a) presents the overall trajectory of {M} of MP A with respect to the end-effector frame {E} (which is rigidly attached to the {M} of MP B), while (b), (c), and (d) present the relative orientation difference, X-difference, and Y-difference as functions of time. Further, in both sets of figures, the 'Desired' (-- line) is the desired trajectory, typically computed offline, and the 'Actual' (-o- line) is the actual trajectory followed by the system, as determined by post-processing the measurements of the instrumented articulations. However, in Fig. 9 the (-x- line) represents the odometric estimate, while in Fig. 10 it stands for the articulation-based estimate (which therefore coincides with the 'Actual' (-o- line) trajectory).

Fig. 9. Case A: odometric estimation of frame M, used in the control of MP A following MP B in a leader-follower approach, is unable to detect non-systematic errors such as wheel slip. (a) XY trajectory of frame M; (b) orientation versus time; (c) X position of frame M versus time; (d) Y position of frame M versus time.

Fig. 10. Case B: articulation-based estimation of frame M, used for control of MP A with respect to MP B in a leader-follower approach, is able to detect and correct non-systematic errors such as wheel slip.
(a) XY trajectory of frame M; (b) orientation versus time; (c) X position of frame M versus time; (d) Y position of frame M versus time.

In Case A, the introduction of the disruption causes a drift in the relative configuration of the system which remains undetected by the odometric estimation. Further, as seen in Fig. 9, this drift has a tendency to grow if left uncorrected. However, as seen in Fig. 10, the system can use the articulation-based estimation (Case B) not only to detect disturbances to the relative configuration but also to successfully restore the original system configuration.

Decentralized control approach

In this decentralized control scenario, a straight-line trajectory with a velocity of 0.0254 m/s is prescribed for the payload frame. As in the leader-follower scenario, a large disruption is introduced by causing one of the wheels of MP A to run over a bump. The algorithm is tested using two further case scenarios. Case C: both mobile platforms employ odometric estimation for localization, as shown in Fig. 11. Case D: both mobile platforms employ the sensed articulations for localization, as shown in Fig. 12.

Fig. 11. Case C: odometric estimation of frames M of MP A and MP B, used in the control of MP A with respect to MP B in the decentralized approach, is again unable to detect non-systematic errors such as wheel slip. (a) XY trajectory of frame M of MP A; (b) XY trajectory of frame M of MP B; (c) relative orientation between MP A and MP B versus time; (d) X distance between MP A and MP B versus time; (e) Y distance between MP A and MP B versus time; (f) sequential photographs of the corresponding composite system motion (time progresses from left to right along each row).

Fig. 12. Case D: articulation-based estimation of frames M of MP A and MP B, used for the control of MP A and MP B with respect to a payload-fixed frame, is able to detect and correct non-systematic errors such as wheel slip. (a) XY trajectory of frame M of MP A; (b) XY trajectory of frame M of MP B; (c) relative orientation between MP A and MP B versus time; (d) X distance between MP A and MP B versus time; (e) Y distance between MP A and MP B versus time; (f) sequential photographs of the corresponding composite system motion (time progresses from left to right along each row).

In each of these figures, subplots (a) and (b) present the overall trajectories of frames {M} of MP A and MP B, respectively, with respect to their initial poses. Subplots (c), (d), and (e) present the relative orientation difference, X-difference, and Y-difference of frames {M} of MP A and MP B, respectively, as functions of time. Further, in both sets of figures, the 'Desired' (-- line) is the desired trajectory, typically computed offline, and the 'Actual' (-o- line) is the actual trajectory followed by the system, as determined by post-processing the measurements of the instrumented articulations.
However, in Fig. 11 the (-x- line) represents the odometric estimate, while in Fig. 12 it stands for the articulation-based estimate.

In Case C, both mobile platforms use odometric estimation for localization; hence, as expected, Fig. 11 reflects the fact that the system is unable to detect or correct changes in the relative system configuration. However, the data obtained from the articulations accurately captures the existence of errors between the frames of reference of MP B and MP A. Thus, using the articulation-based estimation of the relative configuration for control, as in Case D, allows the detection of disturbances and the successful restoration of the original system configuration, as shown in Fig. 12. Note, however, that while the relative system configuration is maintained, errors relative to a global reference frame cannot be detected if both WMRs undergo identical simultaneous disturbances. Detection of such absolute errors would require an external reference and is beyond the scope of the existing framework.

Conclusion

In this paper, we examined the design, development, and validation of a kinematically compatible framework for cooperative transport of a common payload by a team of nonholonomic mobile manipulators. Each individual mobile manipulator module consists of a differentially driven WMR retrofitted with a passive planar manipulator arm with two revolute joints. A composite multi-degree-of-freedom vehicle system can then be modularly created by attaching a common payload to the end-effectors of two or more such modules. The composite system allows payload trajectory tracking errors, arising from subsystem controller errors or environmental disturbances, to be readily accommodated within the compliance offered by the articulated linkage. The individual mobile manipulators compensate by modifying their WMR bases' motion plans to ensure prioritized satisfaction of the nonholonomic constraints. The stabilizing controllers of the WMR bases can then track the recomputed "desired motion plans" to help restore the system configuration. This scheme not only explicitly ensures maintenance of the kinematic compatibility constraints within the system but is also well suited to an online sensor-based motion-planning implementation. This algorithm was then adapted to create two control schemes for the overall composite system: the leader-follower approach and the decentralized approach. We evaluated both approaches within an implementation framework that emphasized both virtual prototyping and hardware-in-the-loop experimentation, using the case study of a two-module composite system.
Experimental results verified the ability of the composite system with sensed articulations not only to accommodate instantaneous disturbances due to terrain irregularities but also to detect and correct drift in the relative system configuration. The overall framework readily facilitates generalization to larger systems with many more mobile manipulator modules.

Appendix A

The kinematic constraint equations shown in Eq. (3) may be written in the general form as:

    C(q) = 0    (20)

Then the velocity constraint equation can be written as:

    (dC/dq) q' = 0    (21)

The general solution to this differential equation is:

    C(q(t)) = C(q(t0))    (22)

By appropriate selection of the initial conditions within the experimental setup, one can create the condition C(q(t0)) = 0, i.e., exactly satisfying the constraint at the initial time, which then permits the constraint to be satisfied for all time. However, one could also enhance this process by adding another term to the right-hand side of Eq. (21) as:

    (dC/dq) q' = -Phi C(q)    (23)

where Phi is a positive-definite constant matrix. This results in a first-order stable system:

    C' + Phi C = 0    (24)

whose solution for any arbitrary initial configuration is:

    C(q(t)) = exp(-Phi (t - t0)) C(q(t0))    (25)

In such systems, there is no longer a requirement to enforce C(q(t0)) = 0, since the solution exponentially stabilizes to zero regardless of the initial conditions. This feature could potentially be added to transform Eq. (5) to further improve overall performance, but was not required.

References

Abou-Samah, M. 2001. A kinematically compatible framework for collaboration of multiple non-holonomic wheeled mobile robots. M.Eng. thesis, McGill University, Montreal, Canada.
Abou-Samah, M. and Krovi, V. 2002. Optimal configuration selection for a cooperating system of mobile manipulators. In Proceedings of the 2002 ASME Design Engineering Technical Conferences, DETC2002/MECH-34358, Montreal, QC, Canada.
Adams, J., Bajcsy, R., Kosecka, J., Kumar, V., Mandelbaum, R., Mintz, M., Paul, R., Wang, C.-C., Yamamoto, Y., and Yun, X. 1996. Cooperative material handling by human and robotic agents: Module development and system synthesis. Expert Systems with Applications, 11(2):89-97.
Bhatt, R.M., Tang, C.P., Abou-Samah, M., and Krovi, V. 2005. A screw-theoretic analysis framework for payload manipulation by mobile manipulator collectives. In Proceedings of the 2005 ASME International Mechanical Engineering Congress & Exposition, IMECE2005-81525, Orlando, FL, USA.
Borenstein, J., Everett, B., and Feng, L. 1996. Navigating Mobile Robots: Systems and Techniques. A. K. Peters, Wellesley, MA.
Brockett, R.W. 1981. Control theory and singular Riemannian geometry. In P.J. Hilton and G.S.
Young (eds.), New Directions in Applied Mathematics, Springer-Verlag, New York, pp. 11-27.
Campion, G., Bastin, G., and D'Andrea-Novel, B. 1996. Structural properties and classification of kinematic and dynamic models of wheeled mobile robots. IEEE Transactions on Robotics and Automation, 12(1):47-62.
Canudas de Wit, C., Siciliano, B., and Bastin, G. 1996. Theory of Robot Control. Springer-Verlag, Berlin.
Desai, J. and Kumar, V. 1999. Motion planning for cooperating mobile manipulators. Journal of Robotic Systems, 16(10):557-579.
Donald, B.R., Jennings, J., and Rus, D. 1997. Information invariants for distributed manipulation. The International Journal of Robotics Research, 16(5):673-702.
Honzik, B. 2000. Simulation of the kinematically redundant mobile manipulator. In Proceedings of the 8th MATLAB Conference 2000, Prague, Czech Republic, pp. 91-95.
Humberstone, C.K. and Smith, K.B. 2000. Object transport using multiple mobile robots with non-compliant end-effectors. In Distributed Autonomous Robotic Systems 4, Springer-Verlag, Tokyo, pp. 417-426.
Juberts, M. 2001. Intelligent control of mobility systems, needs. ISD Division, Manufacturing Engineering Laboratory, National Institute of Standards and Technology.
Khatib, O., Yokoi, K., Chang, K., Ruspini, D., Holmberg, R., and Casal, A. 1996. Vehicle/arm coordination and multiple mobile manipulator decentralized cooperation. In Proceedings of the 1996 IEEE/RSJ International Conference on Intelligent Robots and Systems, Osaka, Japan, pp. 546-553.
Kosuge, K., Osumi, T., Sato, M., Chiba, K., and Takeo, K. 1998. Transportation of a single object by two decentralized-controlled nonholonomic mobile robots. In 1998 IEEE International Conference on Robotics and Automation, Leuven, Belgium, pp. 2989-2994.
Kube, C.R. and Zhang, H. 1997. Task modelling in collective robotics. Autonomous Robots, 4(1):53-72.
Latombe, J.-C. 1991. Robot Motion Planning. Kluwer Academic Publishers, Boston, MA.
Li, Z. and Canny, J.F. 1993. Nonholonomic Motion Planning. Kluwer Academic Publishers, Boston, MA.
Murray, R.M. and Sastry, S.S. 1993. Nonholonomic motion planning: Steering using sinusoids. IEEE Transactions on Automatic Control, 38(5):700-716.
Nakamura, Y. 1991. Advanced Robotics: Redundancy and Optimization. Addison-Wesley, California.
Samson, C. and Ait-Abderrahim, K. 1991a. Feedback control of a nonholonomic wheeled cart in Cartesian space. In Proceedings of the 1991 IEEE International Conference on Robotics and Automation, Sacramento, CA, pp. 1136-1141.
Samson, C. and Ait-Abderrahim, K. 1991b. Feedback stabilization of a nonholonomic wheeled mobile robot.
In Proceedings of the 1991 IEEE/RSJ International Workshop on Intelligent Robots and Systems, Osaka, Japan, pp. 1242-1247.
Schenker, P.S., Huntsberger, T.L., Pirjanian, P., Trebi-Ollennu, A., Das, H., Joshi, S., Aghazarian, H., Ganino, A.J., Kennedy, B.A., and Garrett, M.S. 2000. Robot work crews for planetary outposts: Close cooperation and coordination of multiple mobile robots. In Proceedings of the SPIE Symposium on Sensor Fusion and Decentralized Control in Robotic Systems III, Boston, MA.
Seraji, H. 1998. A unified approach to motion control of mobile manipulators. The International Journal of Robotics Research, 17(2):107-118.
Spletzer, J., Das, A.K., Fierro, R., Taylor, C.J., Kumar, V., and Ostrowski, J.P. 2001. Cooperative localization and control for multi-robot manipulation. In Proceedings of the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems, Maui, HI, USA, pp. 631-636.
Stilwell, D.J. and Bay, J.S. 1993. Towards the development of a material transport system using swarms of ant-like robots. In Proceedings of the 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems, Atlanta, GA, pp. 766-771.
Tang, C.P. 2004. Manipulability-based analysis of cooperative payload transport by robot collectives. M.S. thesis, University at Buffalo, Buffalo, NY.
Tang, C.P. and Krovi, V. 2004. Manipulability-based configuration evaluation of cooperative payload transport by mobile robot collectives. In Proceedings of the 2004 ASME Design Engineering Technical Conferences and Computers and Information in Engineering Conference, DETC2004-57476, Salt Lake City, UT, USA.
Tanner, H.G., Kyriakopoulos, K.J., and Krikelis, N.I. 1998. Modeling of multiple mobile manipulators handling a common deformable object. Journal of Robotic Systems, 15(11):599-623.
Wang, Z.-D., Nakano, E., and Matsukawa, T. 1994. Cooperating multiple behavior-based robots for object manipulation. In 1994 IEEE/RSJ International Conference on Intelligent Robots and Systems.
Whitney, D.E. 1969. Resolved motion rate control of manipulators and human prostheses. IEEE Transactions on Man-Machine Systems, MMS-10:47-53.
Yamamoto, Y. 1994. Control and coordination of locomotion and manipulation of a wheeled mobile manipulator. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA.
Yamamoto, Y. and Yun, X. 1994. Coordinating locomotion and manipulation of a mobile manipulator. IEEE Transactions on Automatic Control, 39(6):1326-1332.

Footnotes
1. R indicates a revolute joint; RRR indicates serial linkages connected by three revolute joints.
2. We denote "desired" quantities using superscript d in the rest of the paper.
3. Reference robot variables are denoted using superscript r, and actual robot variables without any superscript, for the rest of the paper.
4. SolidWorks (TM) was the CAD package used for this work.
5. MSC Visual Nastran (TM) was the dynamic simulation environment used for this work.
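As a numerical illustration of the constraint stabilization in Appendix A (a sketch under the generic form written above, not the paper's own code), the following integrates a scalar constraint error with the stabilizing term of Eq. (23) and checks the exponential decay of Eq. (25).

    import numpy as np

    def simulate_constraint_error(c0=0.1, phi=2.0, dt=0.01, steps=500):
        """Integrate c' = -phi * c (Eq. 24) with forward Euler."""
        c = np.empty(steps)
        c[0] = c0                          # initial constraint violation C(q(t0))
        for k in range(1, steps):
            c[k] = c[k - 1] + dt * (-phi * c[k - 1])
        return c

    c = simulate_constraint_error()
    # Closed form of Eq. (25): C(t) = exp(-phi * t) * C(0)
    t = 0.01 * np.arange(500)
    assert np.allclose(c, 0.1 * np.exp(-2.0 * t), atol=1e-3)

The error decays to zero for any initial violation, which is why the enforcement of C(q(t0)) = 0 is no longer required.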
Foreign Literature Translation: The Mechanical and Control System of a Manipulator Hand
本科毕业设计外文翻译题目机械手的机械和控制系统姓名谢百松学号20051103006 专业机械设计制造及其自动化指导教师肖新棉职称副教授中国·武汉二○○九年二月机械手的机械和控制系统文章来源:Dirk Osswald, Heinz Wörn.Department of Computer Science , Institute for Process Control and Robotics (IPR).,Engler-Bunte-Ring 8 - Building 40.28.摘要:最近,全球内带有多指夹子或手的机械人系统已经发展起来了,多种方法应用其上,有拟人化的和非拟人化的。
Not only the mechanical structure of these systems but also the necessary control systems have been investigated. Like the human hand, these robot systems can grasp different objects with the same hand, without exchanging grippers. Such robot hands offer special kinematic abilities (e.g., low mass and low inertia), which make more complex and more precise manipulation of the grasped object within the hand's workspace possible. Such complex manipulation includes, for example, rotating the grasped object by arbitrary angles about arbitrary axes. This paper outlines the general design approach for such robot hands and presents an example of this class, the Karlsruhe Dexterous Hand II. At the end of the paper, some new concepts are introduced, such as a completely new robot hand for humanoid robots driven by fluidic actuators.
Keywords: multifingered gripper; robot hand; fine manipulation; mechanical system; control system
1. Introduction
The special research program "Humanoid Robots", started in Karlsruhe, Germany, in June 2001, aims to develop robot systems that can cooperate and interact with humans in everyday environments such as a kitchen or a living room. These robot systems are designed to help us grasp objects of different sizes, shapes, and weights under non-specialized, non-industrial conditions, for example among many surrounding objects. At the same time, they must be able to manipulate the grasped objects well. Such great flexibility can only be achieved with a highly adaptable robotic grasping system, the so-called multifingered gripper or robot hand. The research program mentioned above is to build a humanoid robot that will be equipped with such a robot hand system. The new hand will be built in cooperation between two institutions: the IPR (Institute for Process Control and Robotics) of the University of Karlsruhe and the Institute for Applied Computer Science.
Manipulator Dynamics — Foreign Literature Translation for Graduation Design
Graduation Thesis (Design) Foreign Literature Translation
Title: Research on the Dynamics and Control of Manipulators
Department: Department of Mechanical Engineering  Major and class: Mechanical Design and Automation
Student name:  Student ID:  Supervisor:  Supervisor's title:  March 20, 20**

2009 IEEE International Conference on Robotics and Automation, Kobe International Conference Center, Kobe, Japan, May 12-17, 2009
Research on the Dynamics and Control of Manipulators
Lars-Peter Ellekilde
Abstract: The combination of a manipulator and a mobile platform provides an efficient and flexible manipulation system that can be used in a wide range of applications, especially in the field of service robotics.
One of the many challenges for mobile manipulators is the design of control systems that ensure the robot works safely in potentially dynamic environments. In this paper, we present an approach in which a mobile manipulator is controlled using the dynamical systems approach. The method is a two-level approach, in which competitive dynamics is used at the upper level to coordinate the mobile platform and the manipulator, and at the lower level to fuse obstacle-avoidance and target-acquisition behaviors.
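The competitive-dynamics idea summarized in this abstract can be illustrated with a toy example. The sketch below shows the kind of attractor/repeller heading dynamics such approaches integrate, with a target term pulling the heading toward the goal and obstacle terms pushing it away; the gains, angular window, and scenario are invented for illustration and do not come from the paper.

```python
import numpy as np

def heading_rate(phi, psi_tar, psi_obs, k_tar=2.0, k_obs=4.0, sigma=0.4):
    """Attractor/repeller dynamics for the platform heading phi (rad).

    The target direction psi_tar acts as an attractor; each obstacle
    direction in psi_obs acts as a repeller whose strength decays over
    an angular window sigma. All parameters are illustrative.
    """
    dphi = -k_tar * np.sin(phi - psi_tar)  # attractor toward the target
    for psi in psi_obs:
        d = phi - psi
        dphi += k_obs * d * np.exp(-d**2 / (2.0 * sigma**2))  # repeller
    return dphi

# Forward-Euler rollout: one obstacle lies between the robot and the target.
phi, dt = 0.0, 0.01
for _ in range(600):
    phi += dt * heading_rate(phi, psi_tar=0.8, psi_obs=[0.5])
print(round(phi, 3))  # heading settles near the target, deflected by the obstacle
```

The same weighted-superposition pattern extends to fusing several behaviors: each behavior contributes a force term, and the competition between terms decides which behavior dominates at any instant.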
I. INTRODUCTION
Over the past few decades, most robotics research has focused on either mobile platforms or manipulators, and many encouraging results have been achieved in both areas. One of today's new challenges is to combine the two fields into systems that can both move efficiently and manipulate their environment. Service robotics, in particular, will see an increasing demand for such systems. Demographics in most Western countries show a steadily growing number of elderly people who need care, while comparatively few workers will be available to actually support them. This calls for a higher degree of automation in the service sector, for which robots that can work safely in dynamic indoor environments are essential.
Fig. 1. A platform consisting of a Segway RMP200 and a KUKA Light-Weight Robot.
The platform used in this work, shown in Fig. 1, consists of a Segway RMP200 base and a KUKA Light-Weight Robot. The platform's relatively small footprint and high maneuverability make it well suited to moving in indoor environments. The KUKA robot has a long reach and a high payload-to-weight ratio, which makes it suitable for mobile manipulation.
When controlling a mobile manipulator system, one choice to make is whether to consider it as one entity or as two. In [1] and [2], the end-effector and the mobile platform are combined, through their Jacobians, into a single control system. The work presented in [3] and [4], on the other hand, treats them as separate entities at design time, without including the constraints between the two, such as reachability and stability. The control system proposed here is based on the dynamical systems approach [5], [6].
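To contrast with the dynamical systems approach adopted here, the single-entity formulation of [1] and [2] can be illustrated in generic terms: stack the platform and arm coordinates into one configuration vector and run resolved-rate control through the combined Jacobian. The sketch below does this for a made-up planar system (a prismatic base carrying a two-link arm) with a damped pseudo-inverse; it is a minimal sketch under those assumptions, not the cited papers' formulation.

```python
import numpy as np

L1, L2 = 0.6, 0.4  # assumed link lengths (m)

def fk(q):
    """End-effector position; q = [base position xb, joint 1, joint 2]."""
    xb, t1, t2 = q
    return np.array([xb + L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

def jacobian(q):
    """Combined platform+arm Jacobian d(fk)/dq, shape (2, 3)."""
    _, t1, t2 = q
    s1, s12 = np.sin(t1), np.sin(t1 + t2)
    c1, c12 = np.cos(t1), np.cos(t1 + t2)
    return np.array([[1.0, -L1 * s1 - L2 * s12, -L2 * s12],
                     [0.0,  L1 * c1 + L2 * c12,  L2 * c12]])

def resolved_rate_step(q, p_des, gain=1.0, damping=0.05):
    """qdot = J^T (J J^T + lambda^2 I)^-1 v  (damped least squares)."""
    J = jacobian(q)
    v = gain * (p_des - fk(q))              # commanded task-space velocity
    JJt = J @ J.T + damping**2 * np.eye(2)
    return J.T @ np.linalg.solve(JJt, v)

q = np.array([0.0, 0.5, 0.5])
p_des = np.array([1.2, 0.3])
for _ in range(400):
    q += 0.02 * resolved_rate_step(q, p_des)
print(np.round(fk(q) - p_des, 3))  # task-space error shrinks toward zero
```

Because the base coordinate appears in the same Jacobian as the arm joints, the pseudo-inverse automatically distributes the motion between platform and arm, which is exactly what the two-entity designs of [3] and [4] must coordinate explicitly.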
Foreign Literature Translation for Graduation Design: Design of a Palletizing Manipulator
Design of a Palletizing Manipulator
ABOUT THE MODERN INDUSTRIAL MANIPULATOR
The robot is a type of mechatronic equipment that synthesizes the latest research achievements in mechanical and precision-machinery engineering, microelectronics and computing, automatic control and drives, sensing and information processing, artificial intelligence, and other fields. With economic development and the growing demand for automatic control, robot technology has advanced rapidly and all types of robot products have come into being. The practical use of robots not only solves problems that are difficult or dangerous for human operators, but also advances industrial automation. Modern industrial robots are true marvels of engineering. A robot the size of a person can easily carry a load of over one hundred pounds and move it very quickly, with a repeatability of 0.006 inches. Furthermore, these robots can do that 24 hours a day for years on end with no failures whatsoever. Though they are reprogrammable, in many applications they are programmed once and then repeat that exact same task for years.
At present, robot research and development involves several kinds of technology, and robot system configurations are so complex that costs remain high, which to a certain extent limits the broad use of robots. Developing economical, practical, and highly reliable robot systems will therefore be valuable for both the social application of robots and economic development. With the rapid progress of the economy and the expansion of modern cities, the discharge of sewage is increasing quickly; with the development of modern technology and growing environmental awareness, more and more people have realized the importance and urgency of sewage disposal. The active-bacteria method is an effective technique for sewage disposal, and the large demand for porous plastic makes automated, high-productivity plastic production a natural consequence. Therefore, it is very necessary to design a manipulator that can automatically fulfill the plastic-holding task. By analyzing the problems in the design of the plastic-holding manipulator and synthesizing recent robot research and development, an economical scheme is concluded on the basis of an analysis of the mechanical configuration, transmission system, drive devices, and control system, guided by the idea of integrating mechanical configuration, electronics, software, and hardware.
In this article, the mechanical configuration adopts a Cartesian coordinate layout, which improves the stability and operating flexibility of the system. The main function of the transmission mechanism is to transmit power to the actuating parts and produce the necessary movements. In this transmission structure, the screw mechanism converts rotary motion into linear motion, and the worm gear provides a variable transmission ratio; both mechanisms are compact. The design of the drive system is often constrained by environmental conditions, cost, and the available technical level. The stepping motor can receive digital signals directly, responds to the outer environment immediately, and has no accumulated error, so it is often used in drive systems. In this drive system, an open-loop control scheme built around stepping motors satisfies the demands of control precision as well as the goals of economy and practicality.
On this basis, an analysis of the stepping motors' power calculation and type selection is also given. Kinematic and dynamic analyses of the object-holding manipulator complete the design of the mechanical structure and drive system.
Current industrial approaches to robot arm control treat each joint of the robot arm as a simple joint servomechanism. The servomechanism approach models the varying dynamics of a manipulator inadequately because it neglects the motion and configuration of the whole arm mechanism. The changes in the parameters of the controlled system are sometimes significant enough to render conventional feedback control strategies ineffective. The result is reduced servo response speed and damping, limiting the precision and speed of the end-effector and making it appropriate only for limited-precision tasks. Manipulators controlled in this manner move at slow speeds with unnecessary vibrations. Any significant performance gain in this and other areas of robot arm control requires the consideration of more efficient dynamic models, sophisticated control approaches, and the use of dedicated computer architectures and parallel processing techniques.
In industrial production and other fields, workers are often endangered by such factors as high temperature, corrosion, and poisonous gases, which increase labor intensity and sometimes even endanger life. Such problems have been eased since the robot arm came into use. Robot arms can grasp, place, and carry objects, and their movements are flexible and diversified. They are suitable for small- and medium-batch automated production in which product varieties are switched, and they are widely used on flexible automatic lines. Robot arms are generally made of materials that withstand high temperatures and resist corrosion, so as to adapt to harsh environments. They therefore reduce the labor intensity of workers significantly and raise work efficiency. The robot arm is an important component of the industrial robot, and on many occasions it is simply called an industrial robot. The industrial robot is an important piece of modern manufacturing equipment that integrates machinery, electronics, control, computing, sensing, artificial intelligence, and other advanced multidisciplinary technologies. The wide use of industrial robots can not only improve product quality and output, but is also of great significance for protecting personal safety, improving the labor environment, reducing labor intensity, raising labor productivity, saving raw materials, and lowering production costs.
Mechanical components such as ball screws, slides, and a pneumatically controlled mechanical hand are used in the design. A programmable controller, a programming device, stepping motors, stepping-motor drives, direct-current motors, sensors, a switching power supply, an electromagnetic valve, and a control console are used in the electrical connections.
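As a back-of-the-envelope illustration of the screw drive and the open-loop stepping-motor control described above (the step angle, screw lead, and step rate are assumed values for the example, not figures from the actual design):

```python
# Open-loop positioning resolution of a stepping-motor-driven lead screw.
step_angle_deg = 1.8                     # assumed full-step angle
steps_per_rev = 360.0 / step_angle_deg   # 200 steps per revolution
lead_mm = 4.0                            # assumed axial travel per revolution

resolution_mm = lead_mm / steps_per_rev  # travel per step, no feedback needed
rate_hz = 1000.0                         # assumed commanded step rate
speed_mm_s = resolution_mm * rate_hz

print(f"linear resolution: {resolution_mm:.3f} mm/step")              # 0.020 mm/step
print(f"feed speed at {rate_hz:.0f} steps/s: {speed_mm_s:.1f} mm/s")  # 20.0 mm/s
```

Since each step is a fixed increment and errors do not accumulate, position can be commanded by step counting alone, which is why an open-loop scheme can meet the precision target without encoders.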
Manipulator — Foreign Literature and Translation
Model-Based Control for a 6-DOF Parallel Manipulator
Abstract
A novel model-based controller for a six-degree-of-freedom (6-DOF) parallel manipulator is proposed in this paper, in order to abate the influence of platform load variation and compel the steady-state errors to converge to zero. In this paper, the 6-DOF parallel manipulator is described as a multi-rigid-body system; its mathematical model, comprising dynamics based on the Kane method and kinematics based on closed-form solutions and the Newton-Raphson method, is built in a generalized coordinate system.
The model-based controller is presented with the feedback of the platform's cylinder positions, the desired trajectories, and the dynamic gravity terms as its input, and the servo-valve currents as its output.
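The abstract pairs closed-form (inverse) kinematics with the Newton-Raphson method (for the forward solution). A minimal sketch of that pairing for a generic Stewart-type platform follows; the symmetric joint layout, dimensions, and angle convention are invented for the example and are not the paper's platform.

```python
import numpy as np

def rot_zyx(rx, ry, rz):
    """Rotation matrix from roll-pitch-yaw angles (assumed ZYX convention)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def leg_lengths(pose, base_pts, plat_pts):
    """Closed-form inverse kinematics: cylinder lengths for a platform pose."""
    p, ang = pose[:3], pose[3:]
    R = rot_zyx(*ang)
    return np.linalg.norm(p + (R @ plat_pts.T).T - base_pts, axis=1)

def forward_kinematics(l_meas, base_pts, plat_pts, x0, tol=1e-10, it=50):
    """Newton-Raphson forward kinematics with a finite-difference Jacobian."""
    x, h = x0.astype(float).copy(), 1e-7
    for _ in range(it):
        f = leg_lengths(x, base_pts, plat_pts) - l_meas
        if np.linalg.norm(f) < tol:
            break
        J = np.zeros((6, 6))
        for j in range(6):
            dx = np.zeros(6); dx[j] = h
            J[:, j] = (leg_lengths(x + dx, base_pts, plat_pts)
                       - leg_lengths(x - dx, base_pts, plat_pts)) / (2 * h)
        x -= np.linalg.solve(J, f)
    return x

# Invented symmetric geometry: base and platform joints on circles.
ab = np.deg2rad([0, 60, 120, 180, 240, 300])
ap = np.deg2rad([30, 90, 150, 210, 270, 330])
base_pts = np.c_[2.0 * np.cos(ab), 2.0 * np.sin(ab), np.zeros(6)]
plat_pts = np.c_[1.0 * np.cos(ap), 1.0 * np.sin(ap), np.zeros(6)]

true_pose = np.array([0.05, -0.03, 1.5, 0.02, -0.01, 0.03])
l = leg_lengths(true_pose, base_pts, plat_pts)      # "measured" lengths
est = forward_kinematics(l, base_pts, plat_pts, np.array([0, 0, 1.4, 0, 0, 0]))
print(np.round(est - true_pose, 6))  # recovered pose error, ~0
```

In a controller of the kind described, the closed-form inverse map runs directly on the desired trajectory, while the Newton-Raphson solver recovers the platform pose from the measured cylinder positions at each sample.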
Chinese-English Translation on Manipulators
Xuzhou Institute of Technology, Graduation Design Foreign Literature Translation
Student name:  College:  Major:  Supervisor:  May 27, 20**

COMBINATION OF ROBOT CONTROL AND ASSEMBLY PLANNING FOR A PRECISION MANIPULATOR
Abstract
This paper researches how to realize automatic assembly operations on a two-finger precision manipulator. A multi-layer assembly support system is proposed. At the task-planning layer, based on the computer-aided design (CAD) model, the assembly sequence is first generated, and the information necessary for skill decomposition is also derived. Then, the assembly sequence is decomposed into robot skills at the skill-decomposition layer. These generated skills are managed and executed at the robot control layer. Experimental results show the feasibility and efficiency of the proposed system.
Keywords: Manipulator; Assembly planning; Skill decomposition; Automated assembly
1 Introduction
Owing to micro-electro-mechanical systems (MEMS) techniques, many products are becoming very small and complex, such as microphones, micro-optical components, and microfluidic biomedical devices, which creates increasing needs for technologies and systems for the automated precision assembly of miniature parts. Many efforts aiming at semi-automated or automated assembly have been focused on microassembly technologies. However, microassembly techniques of high flexibility, efficiency, and reliability are still open to further research. This paper researches how to realize automatic assembly operations on a two-finger micromanipulator. A multi-layer assembly support system is proposed.
Automatic assembly is a complex problem that may involve many different issues, such as task planning, assembly sequence generation, execution, and control. It can be simply divided into two phases: assembly planning and robot control. At the assembly-planning phase, the information necessary for assembly operations, such as the assembly sequence, is generated. At the robot control phase, the robot is driven based on the information generated at the assembly-planning phase, and the assembly operations are conducted. Skill primitives can work as the interface from assembly planning to robot control. Several robot systems based on skill primitives have been reported. The basic idea behind these systems is robot programming. Robot movements are specified as skill primitives, based on which the assembly task is manually coded into programs. With the programs, the robot is controlled to fulfill assembly tasks automatically.
A skill-based micromanipulation system has been developed in the authors' lab, and it can realize many micromanipulation operations. In the system, the assembly task is manually decomposed into skill sequences and compiled into a file. After the file is imported into the system, the system can automatically execute the assembly task. This paper attempts to explore a user-friendly, and at the same time easy, sequence-generation method, to relieve the burden of manually programming the skill sequence.
Determining the assembly sequence from geometric computer-aided design (CAD) models is an effective method, and many approaches have been proposed. This paper applies a simple approach to generate the assembly sequence. It does not involve the low-level data structure of the CAD model, and can be realized with the application programming interface (API) functions that many commercial CAD software packages provide.
In the proposed approach, a relation graph among the different components is first constructed by analyzing the assembly model; then possible sequences are searched based on the graph, and according to a certain criterion the optimal sequence is finally obtained.
To decompose the assembly sequence into robot skill sequences, some works have been reported. In Nnaji et al.'s work, the assembly task commands are expanded into more detailed commands, which can be seen as robot skills, according to a predefined format. The decomposition approach of Mosemann and Wahl is based on the analysis of hyperarcs of AND/OR graphs representing the automatically generated assembly plans. This paper proposes a method to guide the skill decomposition. The assembly process of a part is grouped into different phases, and the part passes through different states. Specific workflows push a part forward from one state to another. Each workflow is associated with a skill generator. According to the different start state and target state of the workflow, the skill generator creates a series of skills that promote the part to its target state.
In the hierarchy of the system proposed here, the assembly information on how to assemble a product is transferred to the robot through multiple layers. The top layer is for assembly-task planning. The information needed for task planning and skill generation is extracted from the CAD model and saved in the database. Based on the CAD model, the assembly task sequences are generated. At the skill-decomposition layer, tasks are decomposed into skill sequences. The generated skills are managed and executed at the robot control layer.
2 Task planning
Skills are not used directly at the assembly-planning phase. Instead, the concept of a task is used. A task can fulfill a series of assembly operations, for example, from locating a part, through moving the part, to fixing it with another part. In other words, one task includes many functions that may be fulfilled by several different skills. A task is defined as:
T = (Base_Part, Assembly_Part, Operation)
Base_Part and Assembly_Part are the two parts that are assembled together. Base_Part is fixed on the worktable, while Assembly_Part is handled by the robot's end-effector and assembled onto the Base_Part. Operation describes how the Assembly_Part is assembled with the Base_Part; Operation ∈ {Insertion_T, screw_T, align_T, ...}.
The structure of microparts is usually uncomplicated, and they can be modeled by the constructive solid geometry (CSG) method. Currently, many commercial CAD software packages support 3D CSG modeling. The assembly model is represented as an object that consists of two parts with certain assembly relations that define how the parts are to be assembled. In the CAD model, the relations are defined by geometric constraints. The geometric information cannot be used directly to guide the assembly operation; we have to derive the information necessary for assembly operations from the CAD model.
By searching the assembly tree and the geometric relations (mates' relations) defined in the assembly's CAD model, we can generate a relation graph among the parts. In the graph, the nodes represent the parts. If nodes are connected, it means that there are assembly relations among these connected nodes (parts).
2.1 Mating direction
In CSG, the relations between two parts, i.e., the geometric constraints, are finally represented as relations between planes and lines, such as collinear, coplanar, tangential, and perpendicular.
For example, a shaft is assembled in a hole. The assembly relations between the two parts may consist of two constraints: collinearity between the centerline of the shaft, Lc_shaft, and the centerline of the hole, Lc_hole, and coplanarity between the plane P_Shaft and the plane P_Hole. The mating direction is a key issue for an assembly operation. This paper applies the following approach to compute the possible mating directions based on the geometric constraints (the shaft-in-hole operation of Fig. 3 is taken as an example):
1. For a part in the relation graph, calculate the remaining degrees of freedom, also called degrees of separation, of each geometric constraint. For the coplanar constraint, the remaining degrees of freedom are R1 = {x, y, Rot_z}. For the collinear constraint, the remaining degrees of freedom are R2 = {z, Rot_z}. R1 and R2 can also be represented as R1 = {1, 1, 0, 0, 0, 1} and R2 = {0, 0, 1, 0, 0, 1}, where a 1 means that there is a degree of separation between the two parts in that direction. R1 ∩ R2 = {0, 0, 0, 0, 0, 1}, and so the degree of freedom around the z axis will be ignored in the following steps.
In the case that there is a loop in the relation graph, such as parts Part5, Part6, and Part7 in Fig. 2, the loop has to be broken before the mating direction is calculated. Under the assumption that all parts in the CAD model are fully constrained and not over-constrained, the following simple approach is adopted. For a part t in the loop, calculate the number of 1s in N_ti = R_i1 ∩ R_i2 ∩ ... ∩ R_in, where R_ik is the remaining degrees of freedom of constraint k imposed by part i. For example, in Fig. 2, given that the numbers of 1s in U_Part5,Part7 and U_Part6,Part7 are larger than those in U_Part5,Part6 and U_Part6,Part5, respectively, it can be regarded that the position of Part7 is determined by constraints with both Part5 and Part6, while Part5 and Part6 can be fully constrained by the constraints between themselves. We can unite Part5 and Part6 as one node in the relation graph, also called a composite node, as shown in Fig. 2b. The composite node will be regarded as a single part, but it is obvious that the composite node implies an assembly sequence.
2. Calculate mating directions for all nodes in the relation graph. Beginning at the state in which the shaft and the hole are assembled, separate the parts along one degree of separation by a certain distance (larger than the maximum tolerance), and then check whether interference occurs. Separation along both the ±x axis and the ±y axis of R1 causes interference between the shaft and the hole; separation in the +z direction raises no interference. The +z direction is therefore selected as the mating direction, which is represented as a vector M measured in the coordinate system of the assembly. It should be noted that, in some cases, there may be several possible mating directions for a part. The condition under which the assembly operation along the mating direction ends should also be given: when contact occurs between parts in the mating direction at the assembled state, which can be checked simply with the geometric constraints, the end condition is measured by force sensory information; otherwise, position information is used as the end condition.
3. Calculate the grasping position. In this paper, parts are handled and manipulated with two separate probes, which will be discussed in Sect. 4, and planes or edges are considered for grasping.
In the case that there are several mating directions, the grasping planes are selected as G = G1 ∩ G2 ∩ ... ∩ Gi, where Gi is the set of possible grasping planes/edges for the ith mating direction when the part is in its free state. For example, in Fig. 4, the plane pairs P1/P1′, P2/P2′, and P3/P3′ can serve as possible grasping planes, and the grasping planes are then
G = G_mating_dir_1 ∩ G_mating_dir_2 ∩ G_mating_dir_3 = {P1/P1′, P2/P2′, P3/P3′} ∩ {P1/P1′, P3/P3′} ∩ {P1/P1′, P2/P2′} = {P1/P1′}.
The approach direction of the end-effector is selected as the normal vector of the grasping planes. Obviously, not all points on a grasping plane can be grasped, so the following method is used to determine the grasping area. The end-effector, which is modeled as a cuboid, is first added to the CAD model, with the constraint of being coplanar or tangential with the grasping plane. Beginning at the edge that is far away from the Base_Part in the mating direction, move the end-effector in the mating direction along the grasping plane until the end-effector is fully in contact with the part, the grasping plane is fully in contact with the end-effector, or a collision occurs. Record the edge and the distance, both measured in the part's coordinate system.
4. Separate the two parts gradually along the mating direction, while checking for interference in the other degrees of separation, until no interference occurs in any of them. There is obviously a separation distance that assures that interference does not occur in every degree of separation. It is called the safe length in that direction. This length is used for the collision-free path calculation, which will be discussed in the following section.
2.2 Assembly sequence
Some criteria can be used to search for the optimal assembly sequence, such as the mechanical stability of subassemblies, the degree of parallel execution, and the types of fixtures. For microassembly, however, when selecting the assembly sequence we should pay more attention to one of its most important features: the limited workspace. Microassembly operations are usually conducted and monitored under microscopy, and the workspace for microassembly is very small, so the assembly sequence has much influence on assembly efficiency. Consider, for example, a simple assembly with three parts. In sequence a, part A is first fixed onto part B. In the case that part C cannot be mounted in the workspace at the same time as component AB because of the small workspace, in order to assemble part C with AB, component AB has to be unmounted from the workspace. Then, component C is transported and fixed into the workspace. After that, component AB is transported back into the workspace again. In sequence b, there is no need to unmount any part. Sequence a is obviously inefficient and may cause much uncertainty. In other words, the greater the number of times components must be unmounted in an assembly sequence, the more inefficient the assembly sequence. In this paper, due to the small-workspace feature of microassembly, the number of times parts must be mounted is selected as the search criterion, so as to find an assembly sequence with as few mountings as possible.
This paper proposes the following approach to search for the assembly sequence. The relation graph of the assembly is used to search for the optimal assembly sequence. Heuristic approaches are adopted in order to reduce the search time:
1. Check nodes connected with more than two nodes.
If the mating directions of a node's connected nodes are different, mark them as inactive nodes; shared mating directions are marked as active mating directions.
2. Select a node that is not an inactive node. Mark the current node as the base node (part). The first base part is fixed in the workspace with the mating direction pointing upward (this is done in the CAD model). Compare the size (e.g., weight or volume) of the base part with its connected parts, which can be done easily by reading the bill of materials (BOM) of the assembly. If the base part is much smaller, then mark it as an inactive node.
3. Select a node connected with the base node as an assembly node (part). Check the mating direction to see whether the base node needs to be unmounted from the workspace. If so, update a counter, say mount++. Reposition the component (note that there may be more than the base part in the workspace; some other parts may already have been assembled with the base part) in the workspace so that the mating direction is kept upward.
4. In the CAD model, move the assembly part toward the base part in the possible mating direction, while checking whether interference (collision) occurs. If interference occurs, mark the base node as an inactive node and go to step 2; otherwise, select the Operation type according to the parts' geometric features. In this step, an Obstacle Box is also computed. The box, which is modeled as a cuboid, encloses all parts in the workspace. It is used to calculate the collision-free path for moving the assembly part, which will be introduced in the following section. The Obstacle Box is described by a position vector and its width, height, and length.
5. Record the assembly sequence with the Operation type, the mating direction, and the grasping position.
6. If all nodes have been searched, then mark the first base node as an inactive node and go to step 2. If not, select a node connected with the assembly node. Mark it as an assembly node, and the previous assembly node is updated as a base node. Check whether one of the mating directions of the assembly node is the same as the mating direction of the former assembly node. If there is one, use the former mating direction in the following steps. Go to step 3.
After searching the entire graph, we may have several assembly sequences. By comparing the values of mount, the more efficient one can be selected. If no sequence is returned, users may have to select one manually. If there are N nodes in the relation graph of Fig. 2b, none of which are classed as inactive nodes, and each node may have M mating directions, then finding all assembly sequences requires M^N computations. But because one part usually has only one mating direction, and there are some inactive nodes, the computation should be less than M^N.
It should be noted that, in the above computation, several coordinate systems are involved, such as the coordinates of the assembly sequence, the coordinates of the base part, and the coordinates of the assembly. The relations among the coordinate systems are represented by 4×4 transformation matrices, which are calculated from the assembly CAD model when creating the relation graph. These matrices are stored with all of the related parts in the database. They are also used in skill decomposition.
3 Skill decomposition and execution
3.1 Definition of skill primitives
Skill primitives are the interface between assembly planning and robot control. There have been several definitions of skill primitives.
The basic difference among these definitions is the skill's complexity and the functions that one skill can fulfill. From the point of view of assembly planning, it is obviously better for one skill to fulfill more functions. However, the control of a skill with many functions may become complicated. In this paper, two separate probes, rather than a single probe or a parallel-jaw gripper, are used to manipulate the part, so even the grasp operation is not easy to control. In addition, moving a part, for example, may involve not only the manipulator but also the worktable. Therefore, to simplify the control process, the skills defined in this paper do not include many functions. More importantly, the skills should be easily applied to various assembly tasks; that is, the set of skills should have the generality to express specific tasks, and there should be no overlap among skills. In this paper, a skill primitive for robot control is defined as:
S_i = (Action_i(Attribute_i), Start_i(Attribute_i), End_i(Attribute_i), Condition_i(Attribute_i))
Attribute_i: the information necessary for S_i to be executed. The attributes can be classified as required attributes and optional attributes, or as sensory attributes and CAD-model-driven attributes. They are represented by global variables used in the different layers.
Action_i: the robot's action, i.e., the basic sensor-guided motion. Many actions are defined in the system, such as Move_Worktable, Move_Probes, Rotation_Worktable, Rotation_Probes, Touch, Insert, Screw, Grasp, etc. For one skill, there is only one Action. Due to limited space, the details of the actions will not be discussed in this paper.
Start_i: the start state of Action_i, which is measured by sensor values.
End_i: the end state of Action_i, which is measured by sensor values.
Condition_i: the condition under which Action_i is executed.
From the above definitions, we find that the skill primitives in this paper are robot motions with a start state and an end state, executed under specific conditions. Assembly planning in this paper thus amounts to generating a sequence of robot actions and assigning values to the attributes of these actions.
3.2 Skill decomposition
Some approaches have been proposed for skill decomposition. This paper presents a novel approach to guide the skill decomposition. As discussed above, a task is to assemble the Assembly_Part with the Base_Part. We define the process from the state in which the Assembly_Part is free to the state in which it is fixed to the Base_Part as the assembly lifecycle of the Assembly_Part. In its assembly lifecycle, the Assembly_Part may be at different assembly states.
A shaft's states (shown as blocks) and the associated workflows of an insertion task illustrate this. A workflow consisting of a group of skills pushes the Assembly_Part forward from one state to another. A workflow is associated with a specific skill generator that is in charge of generating skills. For different assembly tasks, the same workflows may be used, though the specific skills generated for different tasks may differ.
The system provides default task templates, in which default states are defined. These templates are imported into the system and instantiated after they are associated with the corresponding Assembly_Part. In some cases, some states defined by the default template may not be needed.
For example, if the shaft has already been placed into the workspace with an accurate position, determined, for example, by the fixture, then the Free and In_WS states can be removed from the shaft's assembly lifecycle. The system provides a tool for users to modify these templates or generate their own templates; the tool's user interface is displayed in the accompanying figure.
For a workflow, the start state is measured by sensory values, while the target state is calculated based on the CAD model and sensory attributes. According to the start state and the target state, the generator generates a series of skills. Here, we use the Move workflow as an example to show how skills are generated.
After the assembly task (assembly lifecycle) is initiated, the template is read into the Coordinator. For the workflow Move, its start state is Grasped, which implies that the Assembly_Part is grasped by the robot's end-effector and, obviously, that the position of the Assembly_Part has been obtained. Its target state is Adjusted, which is the state immediately before it is fixed with the Base_Part. At the Adjusted state, the orientation of the Assembly_Part is determined by the mating direction, while the position is determined by the Safe Length. These values have been calculated in the task-planning layer and are stored in a database. When the task template is imported, these values are read into memory at the Coordinator and transformed into the coordinates of the workspace.
There is an important and necessary step that has to be performed in the skill-decomposition phase: the generation of a collision-free path. Here, we use a straight-line path, which is simple and easily calculated. Assume that P3 is the position of the Assembly_Part at the Adjusted state and P0 is the position at the Grasped state. The following approach is applied to generate the path:
1. Based on the orientation of the Assembly_Part and the mating direction, select skills (Rotate_Table or Rotate_Probes) to adjust the orientation of the part, and assign values to the attributes of these skills.
For example, for the adjustment or the error recovery, users can suspend the ongoing skill to input commands directly or move the robot in a manual mode.4 Experiment4.1 Experimental platformThe experimental platform used in the paper. For microassembly operations, the precision and workspace are tradeoffs. In order to acquire both a large workspace and high precision, the two-stage control approach is usually used. These systems usually consist of two different sets of actuators; the coarse one, which is of large workspace but lower precision, and the fine one,which is of small workspace but higher precision. In our system, the large-range coarse motion is provided by a planar motion unit, with a repeatability of 2 μm in the x and y directions, which is driven by two linear sliders made by NSK Ltd. The worktable can also provide a rotation motion around the z axis, which is driven by a stepper motor with a maximum resolution of 0.1°/step.In the manipulator, two separate probes, rather than a single probe or parallel jaw grippers, are used to manipulate the miniature parts. The two probes are fixed onto two stepper motors with a maximum resolution of 0.05°/step. The two motors are then fixed onto the parallel motion mechanism respectively. It is a serial connection of a parallel-hexahedron link and a parallelogram link. When the 1θ,2θ, and 3θ are small enough, the motion of the end-effector can be considered as linear motion.The magnetic actuator to drive the parallel mechanism consists of an air-core coil and a permanent magnet. The permanent magnet is attached to the parallel link, while the coil is fixed onto the base frame. The magnetic levitation is inherently unstable, because it is weak to external disturbances due to its non-contact operation in nature. To minimize the effect of external disturbances, a disturbance-observer-based method is used to control our micromanipulator.Laser displacement sensors are used to directly measure the probe ’s position. The reflector is attached to the endeffector. Nano-force sensors produced by the BL AUTOTEC company are used to measure the forces. The position resolution of the micromanipulator is 1 um. The maximal resolution of the force is 0.8 gf, and the maximal resolution of the torque is 0.5 gfcm. A more detailed explanation on the mechanism of the manipulator can be found. All assembly operations are conducted under a microscope SZCTV BO61 made by the Olympus Company. The image information is captured by a Sharp GPB –K PCI frame grabber, which works at 25 MHz.4.2 ExperimentAn assembly with three components is assembled with the proposed manipulator. It is a wheel of a micromobile robot developed in the authors'lab. The following geometric constraints are defined in the CAD model: collinear between CL_cup and CL_axis , collinear between CL_gear and CL_axis , coplanar between Plane_cup and Plane_gear_1, coplanar between Plane_gear_1 and Plane_axis. According to the above geometric constraints, the three parts construct a loop in the relation graph.The CAD model is created with the commercial software Solidworks 2005, and its API functions are used to develop the assembly planning model. The assembly Information database is developed with Oracle 9.2. Models involved with skill generation are developed with Visual Basic 6.0. The skill-generation models are run withWindows 2000 on an HP workstation with aCPU of 2.0 G Hz and memory of 1.0 GB. 
Assuming that the positions of the parts are available beforehand, it took about 7 min to generate the skill sequence. The generated assembly sequence is to assemble the gear onto the axis, and then to assemble the cup onto the axis and the gear.
In the assembly operation, the parts are placed on the worktable with special fixtures and then transported into the workspace, so that their initial positions and orientations are assured. Therefore, in the experiment, all of the skill sequences for the different parts can be generated and then transferred to the Skill Management unit. The skills are transmitted to the micromanipulator through TCP/IP communication. Because the controller of the micromanipulator runs on DOS, the WTTCP toolkit is adopted to develop the TCP/IP communication protocol.
Because automated control of the fixtures has not been realized yet, the parts have to be fixed onto the worktable manually. The promotion between different tasks (the assembly lifecycles of different parts) is also conducted manually. Screenshots of the assembly process were recorded: in a, the axis is fixed in the workspace; in b, the gear is fixed in the workspace; from c to e, the gear is grasped, moved, and fixed onto the axis by the probes; in f, the cup is fixed in the workspace; and from g to i, the cup is fixed with the gear and the axis. It can be seen that the proposed system performs the assembly successfully.
5 Conclusion
This paper has introduced a skill-based manipulation system. The skill sequences are generated based on a computer-aided design (CAD) model. By searching the assembly tree and mate trees, an assembly graph is constructed. The paper proposes an approach to calculate the mating directions and grasping positions based on the geometric constraints that define the relations between different parts. Because the workspace of the micromanipulator is very small, the assembly sequence has much influence on the assembly efficiency. In the present paper, the number of times parts must be mounted in the workspace is selected as the criterion for selecting the optimal skill sequence.
This paper also presents a method to guide the skill decomposition. The assembly process is divided into different phases. In each phase, the part is at an assembly state. A specific workflow pushes the part forward to its target state, which is the next desired state of the part in the assembly lifecycle and is calculated based on CAD model information and sensory information. A special skill generator is associated with the workflow to generate the skills that promote the part to the target state. After the skill sequence is generated, the system dispatches it to the controller of the manipulator to drive the manipulator.
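To make the skill-primitive definition of Sect. 3.1 and the Move workflow of Sect. 3.2 concrete, here is a hypothetical sketch that mirrors them in code; the class layout, state names, and the segment-selection rule are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """S_i = (Action_i(Attr), Start_i(Attr), End_i(Attr), Condition_i(Attr))."""
    action: str                    # e.g. "Move_Table", "Move_Probe", "Grasp"
    attributes: dict = field(default_factory=dict)
    start: str = ""                # start state, checked against sensor values
    end: str = ""                  # end state, checked against sensor values
    condition: str = "always"

def move_workflow(p_grasped, p_adjusted, coarse_threshold_mm=1000.0):
    """Generate the skills taking a grasped part to the Adjusted state.

    Hypothetical two-step version: orient the part along the mating
    direction, then move it, choosing the worktable for long segments
    and the probes for short ones.
    """
    skills = [Skill("Rotate_Probes", {"align_to": "mating_direction"},
                    start="Grasped", end="Oriented")]
    dist = sum((a - b) ** 2 for a, b in zip(p_grasped, p_adjusted)) ** 0.5
    mover = "Move_Table" if dist > coarse_threshold_mm else "Move_Probe"
    skills.append(Skill(mover, {"from": p_grasped, "to": p_adjusted},
                        start="Oriented", end="Adjusted"))
    return skills

for s in move_workflow((0.0, 0.0, 5.0), (1200.0, 300.0, 0.5)):
    print(f"{s.action}: {s.start} -> {s.end}")
```

A generator of this shape can be attached to each workflow, so that promoting a part from one state to the next is always a matter of emitting and executing a short, inspectable skill list.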
English Literature Translation on Manipulators
1 English literature translation
1.1 Cherry-harvesting robot
1.1.1 Introduction
In Japan, cherries are harvested carefully by human labor. As the harvesting season is short, the harvesting work is concentrated in a short time period, and the labor shortage tends to limit farm acreage. Moreover, cherry trees are tall, so the harvesting work must be conducted using pairs of steps. This makes harvesting dangerous and inefficient. To save labor, a cherry-harvesting robot was manufactured for trial purposes and initial experiments were conducted. Research on fruit-harvesting robots has already been conducted (Kawamura et al., 1984; Harrell et al., 1990; Fujiura et al., 1990; Hanten et al., 2002). Many of the fruit-harvesting robots previously reported are equipped with a video camera. Fruit images are distinguished from the background by differences in color or spectral reflectance. The 3-D location of the fruit was calculated using binocular stereo vision (Kawamura et al., 1985) or by visual feedback control (Kondo and Endo, 1989). Applications of a 3-D vision sensor have also been reported (Subrata et al., 1996; Gao et al., 1997). The 3-D vision sensor has the advantage that each pixel of the image carries distance information. Making use of this advantage, an object can be recognized by its 3-D shape. For cherry-harvesting work, it is necessary to harvest the fruit while avoiding collisions with obstacles such as leaves and stems. To obtain a successful harvesting motion, detection of obstacles as well as of the red ripe fruit is required. To achieve this, a 3-D vision system with two laser diodes was manufactured. One of them emits a red beam and the other an infrared beam. To prevent the influence of sunlight, position-sensitive devices (PSDs) were used to detect the reflected light. By blinking the laser beams at a high frequency, the signal components of the lasers in the PSD outputs were distinguished from those of the sunlight. The 3-D shape of the object was measured by scanning the laser beams, and the red fruits were distinguished from other objects by the difference in spectral-reflection characteristics between the red and infrared laser beams. The robot needs to harvest correctly and efficiently, without damaging the fruits and branches, under the environmental conditions (temperature, sunshine, etc.) of the orchard. Many cherry trees are cultivated in rain-cover vinyl tents for protection against rain; a robot that works in such a tent is not exposed to wind and rain. Cherry fruit, both for the fresh market and for processing, must be harvested with its peduncle. In manual harvesting, therefore, farmers grip the upper part of the peduncle with their fingers and lift it upward to detach it from the tree. For the same reason, the robot manufactured for the experiment also grips the upper part of the peduncle, just like the farmers, and lifts it upward to detach the peduncle from the tree.
1.1.2 Materials and methods
The robot consists of a manipulator with 4 degrees of freedom (DOF), a 3-D vision sensor, an end effector, a computer, and a traveling device (Fig. 2). It is about 1.2 m high, 2.3 m wide, and 1.2 m long. The 3-D vision sensor is attached to the manipulator so that it can scan from different viewpoints through the motion of the manipulator. A vacuum is used to suck the fruit into the sucking pipe of the end effector. Cherry trees cultivated by the single-trunk training method distribute their fruits around the main trunk.
In order to harvest a fruit while avoiding obstacles such as leaves and trunks, the end effector needs to approach the fruit from the outside of the trunk. For this reason, in this study we manufactured an articulated manipulator with one axis of up-down traverse and three axes of right-left turning, so that fruits can be harvested in any direction (Fig. 2). The up-down traverse requires a comparatively large force because of gravity. It is therefore driven by an AC servomotor (Yaskawa Electric, SGMAH-01BAA2C, rated power 100 W, rated torque 0.318 N·m, rated speed 3000 min^-1) and a screw mechanism (lead 10 mm). The three axes of right-left turning do not require large torque. The first and second right-left turning axes are driven by small AC servomotors (Yaskawa Electric, SGMAH-A5BAA21, rated power 50 W, rated torque 0.159 N·m, rated speed 3000 min^-1) with harmonic reduction gears (reduction ratio 100). The remaining right-left turning axis is driven by a small DC motor with reduction gears. The manipulator is designed to be able to move around the circumference of the tree trunk, so that not only the fruits on the front side of the trunk but also those on the other side can be harvested.
Since the fruits are located around the tree trunk, if the vision sensor scans from only one viewpoint, fruits beyond the trunk are hidden. To scan from different viewpoints, the 3-D vision sensor was attached to the second arm. The movement of the manipulator changes the location and direction of the 3-D vision sensor so that the blind area becomes small.
The 3-D vision sensor is equipped with a light projector, a photo detector, and a scanning device (Fig. 3). The light projector consists of an infrared laser module, a red laser module, cold mirrors, a half mirror, and two fully reflecting mirrors. The photo detector consists of two PSDs, a lens, and a red optical filter that weakens the influence of sunlight. The scanning device consists of a galvanometer scanner, which scans vertically, and a stepping motor, which scans horizontally. The red and infrared laser beams are united on the same optical axis by a cold mirror that transmits infrared light and reflects visible light. The beam is further split into two beams (each still including both wavelengths) by a half mirror. These two beams scan the object simultaneously, and the light they reflect from the object is focused onto the two PSDs. The distance from the 3-D vision sensor to the object is calculated by a triangulation method using the ratio of the currents at the two electrodes of each PSD. The laser beams are emitted as blinking signals in order to eliminate the influence of sunlight; by this method, the reflected laser light is separated from the sunlight, which is continuous. Infrared light with wavelengths of about 700-1000 nm is reflected well by all parts of the cherry tree. On the other hand, red light at about 690 nm is not reflected well by unripe fruit, leaves, and stalks, but is reflected well by red ripe fruit. In this study, an infrared beam of 830 nm and a red beam of 690 nm were used. The infrared laser beam (830 nm) measures the distance from the 3-D vision sensor to each part of the cherry tree, and the red laser beam (690 nm) detects the red fruit to be harvested.
As mentioned above, the laser beam is split into two beams. The 3-D vision sensor scans these two beams simultaneously, and two pixels are measured at once to increase the scanning speed.
The number of pixels was 50,000 (250 in the vertical and 200 in the horizontal direction). The scan time was 1.5 s. The field of vision was 43.8° in the vertical direction and 40.6° in the horizontal direction. The effective range of the sensor was from 170 mm to 500 mm; if the object was too far from the sensor, the detected light was weakened and the measuring accuracy deteriorated.
The reflected light from these laser beams is detected by two PSDs, one for each beam. The signals from the PSDs include both red and infrared components. To acquire the red and infrared signals separately, the red and infrared lasers were blinked at a frequency of 41.6 kHz with a phase shift of 90°. The photoelectric currents from the PSDs are amplified, and the red and infrared signals are detected separately using lock-in amplifiers, which also eliminate the influence of ambient light. The 3-D vision sensor can be used even under sunlight with an illuminance of 100 klx. A red image and an infrared image are fed to the computer, and then a range image and a segmentation are obtained.
The range image is calculated by triangulation using the infrared signals from anodes A and B of the PSD. Segmentation is obtained from the ratio between the infrared and red signals. Red fruits were distinguished from other objects, such as leaves, by the reflectivity of the red laser. However, the trunk, as well as the fruits, reflects the red laser beam, so it had to be distinguished from the fruits by another method. Fruits show specular reflection: when they are scanned, the center of each fruit reflects the laser beam strongly, whereas this phenomenon does not occur on the trunk surface. The center of each fruit was recognized using this specular reflection. When the center of a fruit is visible from the 3-D vision sensor, the fruit can be recognized by this method. By processing these images, the locations of red fruits and obstacles, such as leaves and trunks, can be recognized.
Fig. 4 shows examples of the images. The range image was obtained by triangulation using the infrared signals of the PSD. By processing the infrared, red, and range images, the scene was segmented into red fruits and other objects; the image on the right shows the result of the segmentation.
Cherry fruit must be harvested with its peduncle attached. The tensile strength needed to detach the fruit was measured: the strength between the peduncle and the fruit was about 1 N, while the strength between the peduncle and the branch was about 2.5 N. Therefore, if the fruit itself were pulled, the peduncle and the fruit would separate, because the strength in that area is the weakest. To harvest the fruit with its peduncle, a special end effector was used. It consists of a fruit-sucking device, an open-close mechanism, a back-and-forth mechanism, and a pair of fingers. It is about 80 mm high, 30 mm wide, and 145 mm long (Fig. 5). The vacuum pressure from a vacuum cleaner sucks the fruit so that the fruit position is fixed at the tip of the pipe. The fingers can be opened or closed by the rotation of a servomotor attached to the end effector. After the fingers grasp the peduncle, the end effector is lifted up to remove the peduncle from the tree.
Fig. 6 shows the motion of the end effector. First, the fingers are opened and retracted by the servomotors. Then, the end effector approaches a fruit and sucks it in. After sucking the fruit, the fingers move halfway forward and close halfway, until the clearance between the fingers becomes 5 mm.
In order to enclose only the target fruit, the fingers are equipped with soft rubber components for obstacle exclusion, so that other fruits do not enter between the fingers. It is necessary to grip the peduncle as near as possible to its root. Therefore, after the fingers are closed halfway, they move further forward. Then, they close completely and grasp the peduncle. Finally, the end effector moves upward to detach the peduncle. The end effector then moves to the position above a fruit box, and the fingers open and release the fruit.
……
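The two sensing ideas above, PSD triangulation and red/infrared ratio segmentation, can be sketched in a few lines. The optical geometry, currents, and threshold below are invented for illustration and do not reproduce the sensor's actual calibration.

```python
import numpy as np

def psd_spot(i_a, i_b, length=0.01):
    """Spot position on a 1-D PSD from its two electrode currents."""
    return 0.5 * length * (i_a - i_b) / (i_a + i_b)

def psd_range(i_a, i_b, baseline=0.08, focal=0.02):
    """Range by triangulation, z = f * b / x, from the spot offset x."""
    x = psd_spot(i_a, i_b)
    return np.inf if x <= 0 else focal * baseline / x

def segment(ir, red, ratio_thresh=0.6):
    """Label pixels: ripe fruit reflects both beams well, while leaves,
    stalks, and unripe fruit reflect mainly the infrared beam."""
    ir = np.asarray(ir, dtype=float)
    red = np.asarray(red, dtype=float)
    ratio = np.where(ir > 0, red / ir, 0.0)
    return np.where(ratio > ratio_thresh, "fruit", "other")

print(round(psd_range(0.62, 0.38), 3))            # range for one sample (m)
print(segment([1.0, 1.0, 0.9], [0.8, 0.2, 0.7]))  # ['fruit' 'other' 'fruit']
```

In the real sensor, the same ratio test is applied per pixel after the lock-in amplifiers have stripped away the sunlight component, and the specular highlight at each fruit's center is then used to separate fruit from trunk.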
Foreign Literature Translation (Mechanical Automation): Fusion Strategies for Minimizing Sensing-Level Uncertainty in Manipulator Control
Fusion Strategies for Minimizing Sensing-Level Uncertainty in Manipulator Control
Abstract: Humanoid robotic applications require the robot to act and behave like a human being. Following a soft-computing-like approach, a human being can think, decide, and control himself in unstructured dynamic surroundings, where a great degree of uncertainty exists in the information obtained through the sensory organs. In the robotics domain as well, one of the key issues in extracting useful knowledge from sensory data is that of coping with information uncertainty, as well as sensory uncertainty, at various levels. In this paper, a generalized fusion-based hybrid classifier (ANN-FDD-FFA) has been developed and applied for validation on synthetic data generated from an observation model as well as on data from a real hardware robot. The fusion goal selected here is primarily to minimize uncertainties in robotic manipulation tasks that are based on internal (joint sensors) as well as external (vision camera) sensory information. The effectiveness of the present methodology has been studied extensively with a specially configured experimental robot having five degrees of freedom and with a simulated model of a vision-guided manipulator. In the present investigation, the main uncertainty-handling approach includes weighted parameter selection (for geometric fusion) by a trained neural network, which is not available in standard manipulator controller designs. These approaches, in a hybrid configuration, significantly reduce the uncertainty at different levels, allowing faster and more accurate manipulator control, as demonstrated here through rigorous simulations and experiments.
Key words: sensor, fusion, FDD, FFA, ANN, computing, manipulators, repeatability, accuracy, covariance matrix, uncertainty, uncertainty ellipsoid.
1. Introduction
Robotic applications (industrial, military, scientific, medical, welfare, household, and amusement) are increasingly coming up with recent progress, in which a robot has to operate in large and unstructured environments [3,12,15]. In most cases, the knowledge of how the surroundings are changing at every instant is fundamentally important for optimal control of robot motions.
Mobile robots also essentially have to navigate and operate in very large unstructured dynamic surroundings and deal with significant uncertainty [1,9,19]. Whenever a robot operates in a natural nondeterministic environment, there always exists some degree of uncertainty in the conditions under which a given job will be done. These conditions may, at times, vary while a given operation is being carried out. The major causes of the uncertainty are the discrepancies arising in the robot motion parameters and in the various task-defining information. The amounts by which they differ from those called for in the process specifications may not always be insignificant. These deviations may be due to inaccuracies in the analytical design or in the reproduction of programmed motions, or to deterministic as well as random errors in the algorithms, measurement data, data transmission links, and other factors. Changes in the status of the robot, such as instances of malfunctions, failures, shifts in the frame of reference, etc., also lead to uncertainty in the operating conditions of the robot. The presence of substantial uncertainty significantly affects the robot in the various steps of sensing the state of a task, in adapting to the changes through the control system, and in reasoning to select the actions needed to achieve a goal. In fact, it is felt that one of the key issues in extracting useful knowledge from data is that of coping with uncertainty at all levels, and especially at the sensing level. Along with the quantity of the observed sensory measurements, the quality involved also needs to be investigated in terms of the residual uncertainty it propagates to the desired sensory information. In the robotics domain, the uncertainty problem at the sensory interpretation level is a very crucial one for specific tasks like robotised space-structure manipulation, robotised surgery, etc., where both a high level of machine precision and human-like prehension are needed. The key problem in the sensing process is making the connection between the signal outputs of all the sensors and the attributes of the three-dimensional world. One of the recent trends is to solve the problem through sensor fusion, and there are numerous fusion techniques covering a very broad spectrum of application areas [10,13]. Against the backdrop of these research works, it was felt that there is a great need for evolving a generalized and easily apprehensible soft-computing-based sensor fusion strategy (a humanoid approach) for multiple sensory systems. The humanoid approach makes it available for versatile applications. The easily apprehensible character of the development makes it particularly suitable for processing complex, highly nonlinear functional relationships between low-level sensory data and high-level information. The fusion strategies would be most suitable for distributed fusion architectures, as they can effectively enable us to minimize the uncertainties at any desired level. A review of papers on uncertainty analysis in the context of manipulator control [4,14,16,20,23] shows that a common step involved in all these systems is the interpretation of identical information that has been acquired through multiple sensory units. The fused information needs to be represented with minimized uncertainty, and the level of this minimization depends on the task-specific application. The research study described in this paper focuses on this objective in the context of sensor-guided robotic manipulation. As a token application, the
challenge of improving the repeatability of a very ordinary RCS-type robot has been undertaken here. Real-world systems are stochastic in nature, having nonlinearity and uncertainty in their behavior, and hence humanoid approaches are, in many such tasks, the only acceptable solutions. For multivariable input-output systems, the effects of such nonlinearity and uncertainty are significant and need to be addressed properly in order to control them effectively. Take, for example, the case of advanced robotic systems (manipulating robots having redundant degrees of freedom or mobile robots having redundant sensory systems would fall in this category). These systems require various kinds of sensors for responding intelligently to a dynamic environment. They may be equipped with external sensors such as force-torque sensors, range sensors, proximity sensors, ultrasonic and infrared sensors, tactile arrays and other touch sensors, overhead or eye-in-hand vision sensors, and cross-fire, overload, and slip sensing devices. In addition, there are also various internal state sensors such as encoders, tachometers, resolvers, and others. The more sensors there are, the greater the computational complexity of controlling the system, and the higher its intelligence level. Since recent industrial as well as non-industrial applications need robotic systems with a high level of intelligence, the complexity associated with them has to be addressed properly. For this purpose, systems equipped with multiple sensors having different ranges of uncertainty have been taken up here for study. Information obtained from different sensors is inherently uncertain, imprecise, and inconsistent. Occasionally, it may also be incomplete or partial, spurious or incorrect, and, at times, geographically or geometrically incompatible among the different sensor views. Our knowledge of the spatial relationships among objects is also inherently uncertain. Take the example of a man-made object. It may not match its geometric model exactly because of manufacturing tolerances, human/machine errors, and other natural uncertainties. Even if it does (at the macro level), a sensor cannot measure the geometric features and locate the object exactly because of measurement errors. Even if it can (within a certain bounded tolerance limit), a robot using the sensor may not manipulate the object exactly as intended, perhaps because of all the cumulative errors added to the end-effector positioning errors. These errors can be reduced to a very significant level for some tasks by reengineering the solution, structuring the working environment, and using specially suited high-precision equipment, but at great cost in time and equipment [20]. An alternative solution is to develop sensor fusion strategies that can minimize the uncertainties of an engineering system to a desired level, at much lower cost, while incorporating all inherent uncertainties. In this paper we focus on developing an FDD-FFA-ANN based hybrid sensor fusion strategy. The organization of the paper is as follows. Section 2 outlines the computational steps through which the overall fusion algorithm has been formulated and developed. These developments and propositions are applied in Section 3 for validation on synthetic data from an observation model. Section 4 is dedicated to applying the developed hybrid fusion strategies to improving the repeatability of a hardware robot manipulator. Their effectiveness has been studied extensively with a specially configured RCS-type experimental robot having five degrees of
freedom. A neural network formulation of the fusion algorithm is also presented. Finally, Section 5 lists the significant results and inferences.

2. Formulation of the Fusion Algorithm Structure

The fusion algorithm structure consists of the following computational steps:

(i) The uncertainties in the information derived by processing multiple noisy sensory data are represented by individual uncertainty ellipsoids.
(ii) The uncertainty ellipsoids are merged so as to minimize the volume of the fused uncertainty ellipsoid through proper assignment of optimal weighting matrices.
(iii) Fusion in the Differential Domain (FDD) has been developed to further reduce the uncertainty of the fused information at finer resolutions, through an iterative process that predicts correction terms for all the sensory information. These terms are then fused and applied to the fused information to increase its precision.
(iv) The Fission Fusion Approach (FFA) is used to minimize uncertainties significantly for some specific sensor models, where the covariance matrix of the sensory information can be "fissioned" and information from multiple measurements of the same set of sensors is available for fusion.
(v) An ANN model of the manipulator has been developed for initial estimation of the uncertainties (mean square error) of the joint sensors, which can then be further minimized by the fusion process (FDD, FFA).

The fusion methods represented by steps (i) and (ii) give a physical, or rather geometric, insight into the complicated information processing, as they involve the fusion of the uncertainty ellipsoids of each individual piece of sensory information. Given a set of uncertainty ellipsoids associated with each sensor, the problem is to assign a weighting matrix (Wi) to each sensory system so as to geometrically minimize the volume of the fused uncertainty ellipsoid [17]. The parameter representing the information Xi ∈ Rn is usually determined from a set of sensory observational data, Di ∈ Rmi. Here Rn denotes the general n-dimensional Euclidean space, i denotes the ith sensor, mi is the number of independent measurements, and n is the dimension of the information (i = 1, ..., N, N being the total number of sensor units); Xi and Di are known to be related through a known nonlinear vector function. The fused information Xf is then formed as a linear combination of the individual estimates and, using Lagrangian optimization, the weighting matrices for the geometrically optimized fusion are obtained, as reconstructed below.
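The explicit forms of these relations did not survive in this copy; the following reconstruction assumes they take the standard minimum-volume (inverse-covariance-weighted) form that geometric fusion of uncertainty ellipsoids normally takes (cf. [17]), with Pi denoting the covariance of the ith estimate:

```latex
% Reconstructed relations (assumed standard geometric-fusion form; cf. [17])
D_i = f_i(X_i), \qquad i = 1, \dots, N \quad \text{(known nonlinear observation functions)}

X_f = \sum_{i=1}^{N} W_i X_i, \qquad \text{subject to } \sum_{i=1}^{N} W_i = I_n

W_i = \Bigl(\sum_{j=1}^{N} P_j^{-1}\Bigr)^{-1} P_i^{-1}, \qquad
P_f = \Bigl(\sum_{j=1}^{N} P_j^{-1}\Bigr)^{-1}
```

A minimal numerical sketch of steps (i) and (ii) follows, assuming each sensor already delivers an unbiased estimate Xi with covariance Pi; the function name and the two-sensor example data are illustrative, not taken from the paper:

```python
import numpy as np

def geometric_fusion(estimates, covariances):
    """Fuse unbiased estimates X_i with covariances P_i using
    W_i = (sum_j P_j^{-1})^{-1} P_i^{-1}; returns the fused estimate X_f
    and the fused covariance P_f (minimum-volume ellipsoid)."""
    infos = [np.linalg.inv(P) for P in covariances]   # information matrices P_i^{-1}
    P_f = np.linalg.inv(sum(infos))                   # fused covariance
    X_f = P_f @ sum(I @ X for I, X in zip(infos, estimates))
    return X_f, P_f

# Two 2-D estimates of the same quantity with different uncertainty ellipsoids.
X1, P1 = np.array([1.02, 0.48]), np.diag([0.04, 0.01])
X2, P2 = np.array([0.97, 0.53]), np.diag([0.01, 0.09])
X_f, P_f = geometric_fusion([X1, X2], [P1, P2])

# The fused ellipsoid is smaller than either input ellipsoid.
vol = lambda P: np.sqrt(np.linalg.det(P))
print(X_f, vol(P_f), vol(P1), vol(P2))
assert vol(P_f) < min(vol(P1), vol(P2))
```

Steps (iii) and (iv), FDD and FFA, respectively iterate on and decompose this basic operation; their exact recursions are those developed in the paper and are not reproduced in this sketch.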
3. Fusion to Improve Sensory Information

In multisensor fusion systems with redundant and/or complementary sensors, each sensor can always be regarded as an individual source of uncertain information, able to communicate, cooperate, and coordinate with the other members of the sensing group. Based on this structure, Durrant-Whyte [7] has presented sensor models described as a probabilistic function of state and of decisions communicated from other information sources, treating three components of the sensor model: the observation model, which processes the measurement characteristics; the dependency model, which describes the sensor's dependence on other information sources; and the state model, which characterizes the sensor's dependence on its location and internal state.

4. Experimental Verification of Hybrid (Neuro-FFA-FDD) Approaches for Handling Uncertainties in Improving the Repeatability of a Robotic Manipulator

In robotic manipulation there are several sources of uncertainty: (i) uncertainties associated with sensors, (ii) uncertainties associated with actuators, and (iii) uncertainties associated with modeling. In the present investigation, attention is focused on the uncertainties associated with sensors and their minimization. Most industrial robots execute simple repetitive tasks by playing back prerecorded or preprogrammed sequences of motions that have previously been guided or taught by a user. For this type of performance the robot does not need any information about its working environment, and external sensors are not that important, as the manipulator simply has to move to goal points that have been taught. A "taught" point is one to which the manipulator is moved physically, the corresponding joint position sensors are read, and the joint-angle values stored. Subsequently, when the robot is next commanded to return to the same point in space, each joint is moved to the stored value. The precision with which a manipulator can return to a "taught" point is specified by the factor "repeatability of the manipulator". An indispensable capability for most manipulator applications is to provide high-speed, high-precision trajectories, and in such applications the repeatability of the manipulator needs to be quantified as accurately as possible. For this, an analytical description of the spatial displacement of the robot as a function of time is primarily required; this, in particular, depends on the functional relation between the joint-angle variables and the position and orientation (with respect to a reference coordinate frame) of the robot arm end-effector.
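As a concrete illustration of how repeatability can be quantified, the sketch below applies the common ISO 9283-style pose-repeatability index (one standard choice, not necessarily the exact formulation used in this paper): the manipulator is commanded back to one taught point m times, and the dispersion of the attained positions about their barycenter is reported. The data are synthetic and purely illustrative.

```python
import numpy as np

def pose_repeatability(attained_points):
    """ISO 9283-style repeatability for one taught point.

    attained_points: (m, 3) array of end-effector positions recorded
    over m returns to the same taught point. Returns RP = l_bar + 3*S_l,
    where l_j is the distance of each attained position from the
    barycenter of all attained positions."""
    pts = np.asarray(attained_points, dtype=float)
    barycenter = pts.mean(axis=0)
    l = np.linalg.norm(pts - barycenter, axis=1)   # per-trial deviations
    return l.mean() + 3.0 * l.std(ddof=1)

# Hypothetical data: 30 returns to a taught point with ~0.1 mm scatter.
rng = np.random.default_rng(0)
trials = np.array([10.0, 25.0, 5.0]) + rng.normal(0.0, 0.1, size=(30, 3))
print(f"RP = {pose_repeatability(trials):.3f} (same units as input)")
```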
5. Conclusions

Future humanoid robots will have to work in a multisensor framework, and the fused information needs to be represented with minimized uncertainty. The level up to which the minimization is significant again depends on the specific application and on the sophistication of the classifier handling the fused information. This paper has proposed and developed a hybrid sensor fusion classifier consisting of three levels of fusion: Geometric Fusion, the Fission Fusion Approach (FFA), and Fusion in the Differential Domain (FDD). These are directed towards the objective of minimizing the uncertainties associated with any type of information acquired through multiple sensors or sensory units. The essence of the FFA technique is based on "fissioning" (reducing the dimensionality of) the covariance matrix and considering each dimension of the information separately for fusion. The FDD processing technique uses feedback from the fusion processor (i.e., the fused information) to each of the individual sensory data streams; its formulation has been presented as a recursive iteration process that predicts correction terms for the different sensory information. By fusing these terms in the differential domain using the same weighting matrices, the fused information is represented with minimized uncertainty in a recursive way.

6. References

1. Barshan, B. and Durrant-Whyte, H.F.: Inertial navigation systems for mobile robots, IEEE Trans. Robotics Automat. 11(3) (1995), 328-342.
2. Chen, W.Z., Korde, U.A., and Skaar, S.B.: Position control experiments using vision, Internat. J. Robotics Res. 13(3) (1994), 199-208.
3. Dallaway, J.L., Jackson, R.D., and Gosine, R.G.: An interactive robot control environment for rehabilitation applications, Robotica 11 (1993), 541-551.
4. Di, X., Ghosh, B.K., Ning, X., and Tzyh, J.T.: Intelligent robotic manipulation with hybrid position/force control in an uncalibrated workspace, in: Proc. IEEE Internat. Conf. on Robotics and Automation, May 1998, pp. 1671-1676.
5. Duda, R.O., Hart, P.E., and Stork, D.G.: Pattern Classification, 2nd edn, Wiley, New York, 2001.
6. Durrant-Whyte, H.F.: Sensor models and multisensor integration, Internat. J. Robotics Res. 7(6) (1988), 97-113.
7. Feng, L. and Brandt, R.D.: An optimal control approach to robust control of robot manipulators, in: IEEE Internat. Conf. on Control Applications, September 1996, pp. 31-36.
8. Golfarelli, M., Maio, D., and Rizzi, S.: Correction of dead-reckoning errors in map building for mobile robots, IEEE Trans. Robotics Automat. 17(1) (2001), 37-47.
9. Hayati, S.: Robot arm geometric link parameter estimation, in: Proc. of the 22nd IEEE Conf. on Decision and Control, December 1983.
10. Hall, D.L. and Llinas, J.: An introduction to multisensor data fusion, Proc. IEEE 85(1) (1997), 6-23.
11. Hirzinger, G. et al.: Advances in robotics: The DLR experience, Internat. J. Robotics Res. 18(11) (1999), 1064-1087.
12. Klein, L.A.: Sensor and Data Fusion Concepts and Applications, SPIE Publications, San Jose, CA, 1993.
13. Langlois, D., Elliott, J., and Croft, E.A.: Sensor uncertainty management for an encapsulated logical device architecture. Part II: A control policy for sensor uncertainty, in: American Control Conference, June 2001, pp. 4288-4293.
机械毕业设计英文外文翻译:简易机械手及控制
附录 外文文献原文: Simple Manipulator And The Control Of It

With the continuous progress of social production and the quickening pace of people's lives, ever higher demands are placed on production efficiency. Thanks to the rapid development of microelectronics and of computing software and hardware, and to the continuing refinement of modern control theory, manipulator technology has developed quickly. Pneumatic manipulator systems in particular, with their readily available and non-polluting working medium, cheap components, easy maintenance, and safe, reliable operation, have penetrated every sector of industry and play an important role in industrial development. The pneumatic manipulator described in this article consists of a gas-controlled gripper, an XY-axis leadscrew group, a turntable mechanism, and a rotating base. Its main task is to transfer mechanical parts; it can be placed in various production lines or logistics pipelines to make parts handling and goods transport quicker and more convenient.

I. Structure and working process of the simple four-axis manipulator

The manipulator structure, as shown in Figure 1 below, consists of a gas-controlled gripper (1), an XY-axis leadscrew group (2), a turntable mechanism (3), and a rotating base (4).

Figure 1 Manipulator structure

Its motion control scheme is as follows: (1) the gas-controlled gripper can be rotated through 360° by a servo motor (a photoelectric sensor establishes the zero reference); (2) stepping motors drive the leadscrew assemblies to move the gripper along the X and Y axes (with X- and Y-axis limit switches); (3) the turntable mechanism, which can revolve through 360°, carries the gripper and leadscrew group freely around (its electric drive consists of a DC motor, a photoelectric encoder, proximity switches, etc.); (4) the rotating base supports the three parts above; (5) the opening and closing of the gripper is pneumatically controlled (the gripper clamps when inflated and releases when deflated).

Its working process (rendered as code in the sketch after this paragraph) is: when the goods arrive, the manipulator system starts to move. The vertical-axis stepping motor drives the gripper downward while the horizontal-axis stepping motor extends it; the servo motor rotates the gripper to the orientation of the goods to be grasped, the gripper is inflated, and the goods are clamped. The vertical-axis stepping motor then drives the gripper up while the horizontal-axis stepping motor moves it forward; the DC motor rotates the whole mechanism to the receiving area; the vertical-axis stepping motor drives the gripper down again to the designated location; the bleed valve opens and the gripper releases the goods; and the system returns to its home position, ready for the next action.
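The working process above is essentially a fixed sequence, which the following sketch renders directly as code. All driver objects and method names are hypothetical stand-ins for the actual motor drives and valve hardware; each call is assumed to block until its motion or valve action completes.

```python
def pick_and_place_cycle(x_axis, y_axis, turntable, gripper, part_sensor):
    """One full cycle of the manipulator's working process (hypothetical API)."""
    part_sensor.wait_for_part()    # goods arrive -> the system starts to move
    y_axis.move_down()             # vertical-axis stepper lowers the gripper
    x_axis.move_out()              # horizontal-axis stepper extends it
    turntable.rotate_to_pickup()   # servo orients the gripper over the goods
    gripper.inflate()              # inflating makes the gripper clamp
    y_axis.move_up()               # lift the goods
    x_axis.move_out()              # advance along the horizontal axis
    turntable.rotate_to_dropoff()  # DC motor swings the arm to the receiving area
    y_axis.move_down()             # descend to the designated location
    gripper.deflate()              # bleed valve -> the gripper releases
    y_axis.move_up()
    turntable.rotate_home()        # back to the start, ready for the next cycle

class _Stub:
    """Print-only stand-in so the sketch can run without hardware."""
    def __init__(self, name):
        self.name = name
    def __getattr__(self, action):
        return lambda *args: print(f"{self.name}: {action}")

pick_and_place_cycle(_Stub("x-axis"), _Stub("y-axis"), _Stub("turntable"),
                     _Stub("gripper"), _Stub("sensor"))
```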
II. Device control

To achieve precise control, the following key components were selected according to market availability.

1. Stepper motor and drive

The vertical (Y) and horizontal (X) axes of the manipulator use 42BYG250C two-phase hybrid stepping motors from Beijing Stone Motor Technology Co., Ltd., with a step angle of 0.9°/1.8° and a current of 1.5 A. Motor M1 on the horizontal axis drives the gripper's extension and retraction; motor M2 on the vertical axis drives its rise and fall. The stepper drive chosen is the SH-20403. It operates from a 10-40 V DC supply and uses H-bridge bipolar constant-current drive, with a maximum output current of 3 A selectable among 8 levels and microstep subdivision up to 64 selectable among 7 modes. It features optical isolation, a standard single-pulse interface, an offline (motor-release) capability, a semi-enclosed housing suited to harsh environments, and an automatic half-current energy-saving mode. The drive's internal switching power supply ensures operation over a wide voltage range, and users can choose any supply between 10 and 40 V DC according to their situation; a higher supply voltage generally improves the motor's high-speed torque, but increases drive losses and temperature rise. Of the six DIP switches on the drive panel, switches 5, 6, and 7 combine into 8 states corresponding to 8 output currents from 0.9 A to 3 A to suit different motors, while switches 1, 2, and 3 combine to select among the 7 operating modes: full step, improved half step, and subdivision into 4, 8, 16, 32, or 64 microsteps.
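Since the SH-20403's microstep setting directly determines the pulse train the controller must generate, a small calculation helps when configuring an axis. The step angle and subdivision come from the component descriptions above; the 5 mm leadscrew lead is an assumed value for illustration only, as the text does not specify it.

```python
STEP_ANGLE_DEG = 1.8   # full-step angle of the 42BYG250C (from the text)
MICROSTEP = 8          # one of the SH-20403's selectable subdivision modes
LEAD_MM = 5.0          # leadscrew lead per revolution -- assumed, not given

pulses_per_rev = (360.0 / STEP_ANGLE_DEG) * MICROSTEP   # 200 * 8 = 1600
pulses_per_mm = pulses_per_rev / LEAD_MM                # 320 pulses per mm

def pulse_rate_hz(feed_mm_per_s: float) -> float:
    """Pulse frequency the controller must output for a given axis speed."""
    return feed_mm_per_s * pulses_per_mm

print(pulses_per_rev, pulses_per_mm, pulse_rate_hz(20.0))  # 1600.0 320.0 6400.0
```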
2. Servo motor and drive

The rotational motion of the manipulator uses a Panasonic A-series small-inertia servo motor, MSMA5AZA1G: rated 50 W, 100/200 V, with an incremental rotary encoder (2500 p/r, resolution 10000 p/r, 11-wire lead), sealed shaft, no brake, and keyed shaft connection. The motor uses Panasonic's proprietary algorithms, which double the frequency response to 500 Hz and reduce the positioning settling time to about one quarter of that of Panasonic's earlier V-series servo products. Its resonance-suppression and closed-loop control can compensate for a lack of mechanical rigidity to achieve high positioning accuracy, and an external grating scale can be added to form a fully closed loop for still higher accuracy. It offers both conventional automatic gain adjustment and real-time automatic gain adjustment, and provides RS-485 and RS-232C communication ports through which a host controller can control up to 16 axes. The matching servo drive is the A-series MSDA5A3A1A, intended for small-inertia motors.

3. DC motor

The 360° swing of the turntable mechanism is driven by a brushless DC motor; the system uses the 57BL1010H1 brushless DC motor from Beijing HollySys, which offers a wide speed range, high low-speed torque, smooth running, low noise, and high efficiency. The motor is driven by HollySys's BL-0408 drive, which runs from a 24-48 V DC supply, provides start/stop and direction control, overcurrent, overvoltage, and locked-rotor protection with fault alarm output, accepts an external analog speed reference, and supports fast braking.

4. Rotary encoder

An OMRON E6A2 incremental rotary encoder is fitted on the 360°-swinging turntable body; its signals are fed to the PLC to achieve precise positioning of the rotating body.

5. PLC selection

According to the system design requirements, an OMRON CPM2A micro PLC was chosen. The CPM2A integrates a variety of features in a compact unit, including synchronized pulse control, interrupt inputs, pulse outputs, analog settings, and clock functions. Its CPU unit is a stand-alone unit capable of handling a wide range of machine-control applications, making it ideal as a control unit built into equipment. It communicates readily with personal computers, other OMRON PLCs, and OMRON Programmable Terminals; this communication capability allows the simple four-axis manipulator to be integrated easily into industrial control systems.

III. Software programming

1. Software flow chart

A flow chart is the basis of PLC programming. Only when the flow has been designed can the ladder diagram be prepared and the statement list written smoothly, completing the program design. Drawing the flow chart is therefore the first and most critical task of program design. Based on the control requirements of the simple four-axis manipulator, the flow chart shown in Figure 2 was drawn.

Figure 2 Software flow chart

2. Program part

Because space is limited, only the first two program segments are listed here for the reader.

Figure 3 Program part

IV. Conclusion

With the various movements of the simple four-axis manipulator under PLC control, the robot can not only meet the requirements of manual and semi-automatic operation, with their large numbers of buttons, switches, and position-detection points, but can also form an industrial LAN with computers through the PLC's interface components for network communication and network control. The simple four-axis manipulator can thus be easily embedded into an industrial production line.
最小化传感级别不确定性联合策略的机械手控制

摘要:人形机器人的应用要求机器人的行为和举止表现得像人。其决策与控制在很大程度上需要依靠软计算方法,以便像人一样处理从非结构化动态环境中获取感官信息时所存在的不确定性。在机器人领域,关键问题之一是从感官数据中提取有用的知识,并在各个层次上对信息和感知的不确定性进行最小化。本文提出了一种基于广义融合的混合分类策略(FDD、FFA与人工神经网络相结合),并在由观测模型生成的合成数据以及实际硬件机器人上进行了验证。选择这种融合方式的主要目标,是根据内部(关节传感器)和外部(视觉摄像头)感官信息,最大限度地减少机器人操纵任务中的不确定性。该方法的有效性已通过一台专门配置的五自由度实验室机器人以及视觉控制机械手的仿真模型得到了广泛研究。所研究的主要不确定性处理方法包括加权参数选择(几何融合)和经过训练的神经网络,这些在标准机械手控制器的设计中是没有的。这些方法以混合方式配置,能够更快、更精确地在不同级别上大幅减少机械手控制中的不确定性,并已通过严格的仿真和实验验证。

关键词:传感器融合,FDD,FFA,人工神经网络,软计算,机械手,重复性,精度,协方差矩阵,不确定性,不确定性椭球
1. 引言

各种各样的机器人应用(工业、军事、科学、医药、社会福利、家庭和娱乐)已涌现出越来越多的产品,它们操作范围大,并能在非结构化环境中运行[3,12,15]。在大多数情况下,认识不断变化的环境并在每一瞬间对机器人动作进行最优控制是至关重要的。移动机器人通常也必须在非常大的非结构化动态环境中导航和作业,并处理重大的不确定性[1,9,19]。每当机器人在自然的非确定性环境中作业时,完成给定作业的条件总是存在某种程度的不确定性,而且这些条件有时还会在作业执行过程中发生变化。导致这种不确定性的主要原因,是机器人运动参数以及各种任务定义信息中出现的偏差,它们与工艺规范要求值之间的差异并不总是可以忽略的。这些偏差可能源于分析设计或程序化动作复现中的不精确,也可能源于算法、测量数据、数据传输链路等环节中的确定性误差和随机误差,以及其他因素。机器人状态的变化,如故障、失效、参考坐标系偏移等,也会导致机器人作业条件的不确定性。大量不确定因素的存在,显著影响机器人感知任务状态、通过控制系统适应变化以及推理选择实现目标所需动作等各个环节。事实上,从数据中提取有用知识的关键问题之一,就是应对各个层次、特别是传感层次上的不确定性。
除了观测到的感官测量数据的数量之外,还需要就其向目标感官信息传播的剩余不确定性来考察测量的质量。在机器人领域,对于机器人化空间结构操作、机器人手术等既需要高机器精度又需要类人抓握能力的特定任务而言,感知解释层次上的不确定性问题至关重要。传感过程中的关键问题,在于建立所有传感器的信号输出与三维世界属性之间的联系。最近的趋势之一是通过传感器融合来解决这一问题,目前已有众多融合技术,涵盖非常广泛的应用领域[10,13]。在研究这些工作的背景下,我们认识到很有必要为多传感器系统发展一种通用且易于理解的基于软计算的传感器融合策略(类人方法)。类人方法使其可用于多种不同的应用;其易于理解的特点使其特别适合处理低层感官数据与高层信息之间复杂的、高度非线性的函数关系。这种融合策略最适合应用于分布式融合架构,因为它能有效地使我们把不确定性减小到任何期望的水平。对机械手控制中不确定性分析若干文献[4,14,16,20,23]的回顾表明,所有这些系统的一个共同步骤,是对通过多个传感单元获取的同一信息进行解释。融合后的信息需要以最小化的不确定性来表示,而这种最小化的程度取决于具体任务应用。本文所述研究即在感官引导机器人操作的背景下围绕这一目标展开。作为一个示范性应用,本文针对提高一台普通RCS型机器人的重复性这一挑战开展了研究。
真实世界的系统本质上是随机的,其行为具有非线性和不确定性,因此在许多此类任务中,类人方法往往是唯一可接受的解决方案。对于多变量输入输出系统,这种非线性和不确定性的影响很大,必须妥善处理才能对其进行有效控制。以先进的机器人系统为例(具有冗余自由度的操纵机器人或具有冗余感觉系统的移动机器人都属于此类),这些系统需要各种传感器来对动态环境作出智能响应。它们可能装有力-力矩传感器、测距传感器、接近传感器、超声波和红外传感器、触觉阵列及其他触摸传感器、俯视或手眼视觉传感器、对射、过载和打滑传感装置等外部传感器;此外还有编码器、测速计、旋转变压器等各种内部状态传感器。传感器越多,控制系统的计算复杂性就越高,系统的智能水平也越高。由于近来的工业及非工业应用需要高智能水平的机器人系统,与之相关的复杂性必须得到妥善处理。为此,本文选取装有不确定性范围各不相同的多个传感器的系统作为研究对象。

从不同传感器获得的信息本质上是不确定、不精确且不一致的,偶尔还可能是不完整或局部的、虚假或不正确的,有时不同传感器视图之间还存在地理或几何上的不兼容。我们关于物体之间空间关系的知识本身也具有内在的不确定性。以人造物体为例:由于制造公差、人/机误差及其他自然不确定性,它可能与其几何模型并不完全吻合;即使吻合(在宏观层面上),由于测量误差,传感器也无法精确测量其几何特征并准确定位该物体;即使可以(在某一有界公差范围内),使用该传感器的机器人也可能因各种累积误差再加上末端执行器定位误差而无法完全按预期操纵该物体。对某些任务而言,通过重新设计解决方案、构造结构化的工作环境以及使用专门的高精度设备,可以将这些误差减小到非常低的水平,但要付出时间和设备上的高昂代价[20]。另一种解决办法是发展传感器融合策略,在计入所有固有不确定性的情况下,以小得多的成本将工程系统的不确定性减小到期望的水平。本文重点提出一种基于FDD、FFA和人工神经网络的混合型传感器融合策略。
本文的组织结构如下:第2节概述了整体融合算法的制定和发展所经过的计算步骤;第3节将这些成果应用于观测模型的合成数据上进行验证;第4节致力于应用所开发的混合融合策略来提高硬件机器人机械手的重复性,并通过一台专门配置的五自由度RCS型实验机器人对其有效性进行了广泛研究,文中还给出了融合算法的神经网络表述;最后,第5节列出了重要的结果和推论。
2. 融合算法结构的制定

融合算法结构包括以下计算步骤:

(一)通过处理多路含噪感应数据所获信息的不确定性,用各自的不确定性椭球来表示。
(二)通过适当分配最优加权矩阵,按使融合后不确定性椭球体积最小的方式合并各不确定性椭球。
(三)提出微分域融合(FDD),通过一个迭代过程预测各感官信息的校正项,在更精细的分辨率上进一步减小融合信息的不确定性;随后将这些校正项融合并施加于融合信息,以提高其精度。
(四)裂变融合方法(FFA)用于对某些特定的传感器模型显著减小不确定性,在这些模型中,感官信息的协方差矩阵可以被"裂解",且同一组传感器多次测量的信息可用于融合。
(五)建立机械手的神经网络(ANN)模型,用于初步估计关节传感器的不确定性(均方误差),再通过融合过程(FDD、FFA)进一步将其最小化。

步骤(一)和(二)所代表的融合方法为复杂的信息处理提供了物理的、或者说几何的直观理解,因为它涉及各感官信息不确定性椭球的融合。给定与每个传感器相关联的一组不确定性椭球,问题就是为每套感觉系统分配加权矩阵(Wi),使融合后不确定性椭球的体积在几何上最小[17]。代表信息的参数Xi∈Rn通常由一组感应观测数据Di∈Rmi确定。这里Rn表示一般的n维欧氏空间,i表示第i个传感器,mi为独立测量的次数,n为信息的维数(i=1,...,N,N为传感器单元总数),且Xi与Di通过一个已知的非线性向量函数相关联。
融合信息Xf由各信息的线性组合给出;利用拉格朗日优化,即可得到几何优化融合的加权矩阵,其形式重构如下。
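原文此处的公式在转录中缺失;下面按这类几何融合通常采用的标准最小体积(协方差逆加权)形式加以重构(参见文献[17];Pi 为第 i 个估计的协方差矩阵,具体形式以原文为准):

```latex
D_i = f_i(X_i), \qquad
X_f = \sum_{i=1}^{N} W_i X_i, \qquad \sum_{i=1}^{N} W_i = I_n

W_i = \Bigl(\sum_{j=1}^{N} P_j^{-1}\Bigr)^{-1} P_i^{-1}, \qquad
P_f = \Bigl(\sum_{j=1}^{N} P_j^{-1}\Bigr)^{-1}
```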
3. 利用融合提高感官信息

在具有冗余和/或互补传感器的多传感器融合系统中,每个传感器都可视为不确定信息的一个独立来源,能够与传感组中的其他成员通信、合作与协调。基于这种结构,Durrant-Whyte[7]提出了将传感器模型描述为状态以及其他信息源所传递决策的概率函数,并讨论了该传感器模型的三个组成部分:处理测量特性的观测模型;描述传感器对其他信息源依赖关系的依赖模型;以及刻画传感器对其位置和内部状态依赖关系的状态模型。
4. 用于处理不确定性的混合(Neuro-FFA-FDD)方法提高机器人机械手重复性的实验验证

在机器人操纵中,不确定性有多种来源:(一)与传感器相关的不确定性;(二)与执行器相关的不确定性;(三)与建模相关的不确定性。在本研究中,注意力集中于与传感器相关的不确定性及其最小化。大多数工业机器人通过回放由用户预先示教或预先编程的运动序列来执行简单的重复性任务。对于这种工作方式,机器人不需要任何有关其工作环境的信息,外部传感器也不那么重要,因为机械手只需移动到已示教的目标点。所谓"示教"点,是指将机械手实际移动到该点,读取相应的关节位置传感器,并存储各关节角度值;此后,当命令机器人返回空间中的同一点时,各关节即移动到所存储的值。机械手返回"示教"点的精度,由"机械手的重复性"这一指标来描述。对大多数机械手应用而言,提供高速、高精度的轨迹是一项不可或缺的能力。

5. 结论

未来的人形机器人必须在多传感器框架下工作,融合信息需要以最小化的不确定性来表示,而这种最小化有意义的程度同样取决于具体应用以及处理融合信息的分类器的复杂程度。