Motion Planning for a Mobile Manipulator with Redundant DOFs

合集下载

A Human Aware Mobile Robot Motion Planner

A Human Aware Mobile Robot Motion Planner

1A Human Aware Mobile Robot Motion PlannerEmrah Akin Sisbot,Luis F.Marin-Urias,Rachid Alami,and Thierry Sim´e on,Member,IEEEAbstract—Robot navigation in the presence of humans raises new issues for motion planning and control when the humans must be taken explicitly into account.We claim that a human-aware motion planner must not only provide safe robot paths, but also synthesize good,socially acceptable and legible paths.This paper focuses on a motion planner that takes explicitly into account its human partners by reasoning about their accessi-bility,their visionfield and their preferences in terms of relative human-robot placement and motions in realistic environments. This planner is part of a human-aware motion and manipulation planning and control system that we aim to develop in order to achieve motion and manipulation tasks in the presence or in synergy with humans.Index Terms—HRI,Motion Planning,Social InteractionI.I NTRODUCTIONT HE introduction of robots in our daily life raises a key issue that is“added”to the“standard challenge”of autonomous robots:the presence of humans in the robot environment and the necessity to interact with them.In the factory,the robot is systematically physically separated from the human workers.This will not be the case for future applications where the robot will be in situation where it will have to assist humans.To allow the robots“co-exist”with humans,human-robot interaction needs to be taken into account in all steps of the robot design.This paper addresses issues related to the close interaction between humans and robots from the standpoint of the motion decisions that must be taken by the robot in order to ensure:•Safe motion,i.e.,that does not harm the human,•Reliable and effective motion,i.e,that achieves the task adequately considering the motion capacities of the robot,•Socially acceptable motion,i.e,that takes into account a motion model of the human as well as his preferences and needs.Let us consider a simple“fetch and carry task”as illustrated infigure1for a socially interactive robot[1].The robot has to perform motion and manipulation actions and should be able to determine where a given task should be achieved,how to place itself relatively to a human,how to approach him,how to hand the object and how to move in a relatively constrained environment in the presence of humans(an apartment for instance).Our goal is to develop a robot that is able to take into account”social constraints”and to synthesize plans compatible with human preferences,acceptable by humans and easily legible in terms of intention.This work is part of a broader effort to develop a decisional framework for human-robot interactive task achievement,em-bedded in a cognitive architecture,aimed to allow the robot Authors are with LAAS/CNRS,University of Toulouse,7,avenue du Colonel Roche,31077,Toulouse,France.Email:{LastName@laas.fr}not only to accomplish its tasks but also to produce behaviours that support its commitment vis-a-vis its human partner and also to interpret human behaviours and intentions[2]Fig.1.A“fetch-and-carry”scenario in a domestic environment in presence of a person.We have introduced our approach and presented preliminary results in[3],[4].We have discussed in[5]how user studies have influenced the design of our planner.In this paper,we present in detail a Human Aware Motion Planner(HAMP)and its implementation with simulation and real world results.In Section II,we briefly discuss related work.Section III provides the main characteristics and algorithms of our motion planner.We show simulation results in different scenarios in Section IV.Finally,we describe in Section V the implemen-tation of the planner on a mobile robot and present real-world results.II.R ELATED W ORKAlthough human-robot interaction is a very active research field,there is not extensive research on motion planning in the presence of humans.In the factory,safety is assured by not allowing humans to approach robots at work.Although this method mostly prevents collision risks,it cannot be applied in applications where the robot has to assist,sometimes physically,a human. Obviously safety issues become the primary concern when robots come into humans’everyday environment.The notion of safety becomes very critical and must be studied in detail with all of its aspects[6].In user studies conducted by Nonaka et al.[7],two aspects of human’s safety have been studied:“physical”safety and2“mental”safety.With this separation,the notion of safety extends its meaning by including not only physical aspects but also psychological effects of the robot’s motions on humans. Physical safety is absolutely necessary for the human-robot interaction.It must be assured during the hardware and software design process of the robot.Ikuta et al.classify safety strategies in two different categories:“design”strategies and “control”strategies.Besides new designs[8],[9]that will ensure safety at the physical level,fault-tolerant approaches [10]tend to detect and limit the consequences of hardware and software problems.A danger index is considered in control strategies and robot motions are executed by minimizing this index[11]–[13].With these approaches physical safety is assured by avoiding collisions with humans and by minimizing the intensity of the impact in case of a collision.Another issue is illustrated by the research on smart wheelchairs.Although there is no direct interaction between the chair and the transported person,the wheelchair motion needs to take into account implicitly the human comfort[14].In usual interactions between humans,some non-written rules are respected and determine the distance between two persons(see the proxemics theory of E.T.Hall[15]).The robot should comply to similar conventions[1][16].To achieve more human friendly behaviors,there is much work trying to imitate human motions and to better understand how humans behave in social environments.[17]describes a behavior-based method for placing the robot like a person in a multi-partnered conversation.In recent work by Pacchierotti et al.[18],[19],a human-robot hallway passage scenario is stud-ied and“social patterns”for relative human-robot placement are extracted from these studies.These patterns are encoded into robot behaviors and result in a human friendly motions for a specific hallway crossing scenario.Another approach that deals not only with safety but also implicitly with comfort issues is the work on velocity profiles along a planned trajectory by Krishna et al.[20].Here the robot adapts its trajectory and its speed to optimize the execution time and also to guarantee that no collision will occur in a dynamic environment.Although the human is not considered explicitly,this method guarantees a motion without collision by taking into account the sensor capabilities of the robot as well as its dynamics and the dynamics of the environment.Since the sensors have a certain range,it is likely necessary to slow down in some places of the robot’s trajectory where the sensor’sfield of view is blocked by narrow passages, doors or corners.Although several authors propose motion planning or reac-tive schemes considering humans,there is no contribution that tackles globally the problem that we consider in this paper.III.H UMAN A WARE N AVIGATION P LANNINGUser studies on robot motion and approach direction with respect to humans[21][5]provided us a number of proper-ties and non-written rules/protocols[15]of human-robot or human-human interactions.Only very limited works consider such properties and often in an ad hoc manner.We describe be-low a new technique that integrates such additional constraints in a more generic way.First,we introduce two additional criteria to the motion planning stage in order to ensure human safety and comfort.These criteria,namely“safety criterion”and“visibility criterion”,present two important aspects of robot navigation in a human-robot interaction scenario. Each criterion is represented by a set of numerical values stored in a2D grid.This criterion grid contains a set of cells with various costs derived from the relative positions of humans in the environment,humans’states,their capabilities, and preferences.A criterion grid G is defined as:G=(M n,p,H1...H n,f)where M n,p is a matrix containing n∗p cells represented by a i,j,the cost of the coordinate(i,j)in the grid,H1...H n is the list of humans in the environment.The function f calculates the value of each cell according to its coordinate by taking into account only one human.The matrix M is constructed by the equation:a i,j=max k(f(H k,i,j))A human H i is modeled by H i=(St,State1...State n) where St is the structure and kinematics of the human and State i is a human state defined by a number of cost parame-ters.A state is defined by:State i=(Name,Conf,P aram)where Name is the name of a posture state(= SIT T ING,ST ANDING),Conf is the human’s configu-ration in that state(if applicable)and P aram represents the data needed to compute costs according to that state.We will further explain below the structure of the“safety”and of the“visibility”criteria and their underlying properties.A.Safety CriterionThefirst criterion,called“safety criterion”,mainly focuses on ensuring the safety of the robot and the humans by controlling the distance between these two.This property aims to keep a distance between the robot and the humans in the environment.However in some cases,as in the necessity of a close interaction(e.g.handling an object),the robot has to approach the person whom it wants to interact with.Therefore, the distance between the robot and the human is neither uniform norfixed and depends on the interaction.The feeling of safety is highly dependent on the human’s personality,his physical capabilities and his actual states;for example,safety differs highly in a sitting position compared to standing.When the human is sitting,his mobility is reduced and he tends to have a low tolerance to the robot getting close.On the contrary, when standing up he has a higher mobility,thus allowing the robot to come closer.These properties are treated in the current system by a “safety grid”.This grid contains a human centered Gaussian form of cost distribution.Each coordinate(x,y)in this grid contains a cost inversely proportional to the distance to the human.When the distance between the human and a point in the environment(in the grid)D(x i,y j)is greater than the distance of another point D(x k,y l),we have Cost(x k,y l)>3Cost (x i ,y j ).Since the safety concerns lose their importance when the robot is far away from the human,the cost also decreases when getting farther from the human,until some maximal distance at which it becomes null.Figure 2shows a computed safety grid attached to a sit-ting/standing human.The height of the vertical lines represents the cost associated with each cell.As shown in the figure,human’s current state (sitting,standing,etc)plays an important role in the cost of the grid.Also note that this approach allows us to consider other types of human states.Once this grid is computed,searching for a minimum cost path will result in a motion that avoids moving too close to the human unless it is necessary.However,if the environment is constrained or if the task requires so,the robot is allowed to approach to the human.Only very close proximity of the human is strictly prohibited to avoidcollisions.Fig.2.A Safety grid is built around every human in the environment.It depends highly on the human’s posture.As the person feels less “threatened”when standing,the value and the range of the costs are less important.B.Visibility CriterionThe second criterion,called “visibility criterion”,aims to improve human comfort during robot’s motion.Humans gen-erally feel more comfortable when the robot is in their field of view.This criterion allows the robot to be mostly in the human’s field of view during its motions.The resulting grid,namely “visibility grid”,is constructed according to costs reflecting the effort required by the human to get the robot in his field of view.For example,grid points located in a direction for which the human only has to move his eyes have a lower cost than positions requiring him to move his head in order to get the robot in his field of view.Also,when the robot is far away from the human,the effect of the visibility must decrease.The computed visibility costs are shown in figure 3.The zone situated in front of the human has very low costs.On the contrary,the zone situated behind the human has higher costs.Since the grid is attached to the head of the human,the computed costs are updated when the human changes his field of view (turn his head or his direction)during planning and/or execution stage.C.An extension:Hidden ZonesIn the grids illustrated above,the costs are calculated without taking into account the obstacles in the environment.However,obstacles in close vicinity of the human canhaveFig.3.Visibility grid is computed by taking into account human’s field of view.Places that are difficult for the human to see have higher costs.various effects on safety and comfort.If the robot is behind an obstacle,the human would feel secure because the obstacle blocks the direct path between the human and the robot.So the safety criterion must be canceled in the zones located behind the obstacles.On the other hand,when the robot becomes hidden by an obstacle,the visibility costs lose their meanings.To handle this issue,we introduce an extension to visibility and safety,called “hidden zones”criterion.This criterion helps to determine better costs for positions hidden by the obstacles.Another important effect of obstacles to human’s comfort is the surprise factor.When the robot is hidden by an obstacle and suddenly appears in the human field of view,it can cause surprise and fear,especially if it is close to the human.To avoid this effect,we must discourage the robot from passing behind an obstacle too closely,and must constrain it to enter human’s field of view sufficiently far away.This is done by putting costs to zones hidden by obstacles with respect to the human.The costs in the hidden zone grid is inversely proportional to the distance between the human and the robot.In our system,the range of the surprise factor is approximately 3m,so the costs decreases to zero in the 3m range and remains null for the other grid points (Fig.4).These values can be additionally tuned according to the scenario and type ofinteraction.Fig.4.Decreasing costs attributed to the zones hidden by obstacles.The supplementary costs discourage the robot getting too close to the obstacles and thus prevents the robot from appearing suddenly behind hidden places.D.Path plannerOnce the safety,visibility and hidden zones grids have been computed,they are merged into a single grid in which the robot4will search for a minimum cost path.Different methods can be used to merge the grids.Afirst way can be to compute the overall cost from the weighted sum of the elementary costs:Cost merged(x,y)=w1Cost safety(x,y)+w2Cost visibility(x,y)where(x,y)is a point in the grid,w1is the weight of the safety grid and w2is the weight of the visibility grid. Another way is to consider the maximum cost values when merging the gridsCost merged(x,y)=max(Cost safety(x,y),Cost visibility(x,y))Note that we do not merge the hidden zones grid with the other two grids.That is mainly because the hidden zones grid serves as a replacement of these two grids for positions where the robot could not be seen because of an obstacle.The cost of a point(x,y)in thefinal grid is computed by:if((x,y)is infield of view of H i)∧(H i cannot see(x,y)because of obstacle O)thenCost final(x,y)←w3Cost hiddenzones(x,y)elseCost final(x,y)←Cost merged(x,y)end ifOur planner can use both merging methods depending on the task and on the balance between criteria.Also,the weights of the grids can be tuned according to the properties of the task.Tofind a path between two given positions of the robot, we search for a path in thefinal grid that minimizes the sum of the costs of the cells linking the cells corresponding to these two positions.The cells corresponding to the obstaclesin the environment are labeled as forbidden and an A search is performed tofind a minimum-cost collision-free path linking two positions.The computed path is collision-free and also respects the human’s safety and comfort by taking into account safety,visibility and hidden zones.Neither thefinal grid nor3criterion grids are constructed explicitly but the values of the cells are calculated for the ones explored during A search.As humans in the environments can change their positions and orientations often,avoiding explicit grid construction gives us the possibility to replan a new pathif a change in the environment occurs(i.e.change in human positions,orientations,or states).IV.S IMULATION R ESULTSThe Human Aware Motion Planner is implemented in C and integrated within the Move3D[22]software platform developed at LAAS.Figure5illustrates two similar situations where two persons are in a conversation.The aim for the robot is to approach and join them.Although the robot can take the shortest path and pass between humans,the planner calculates a path longer but safer and more comfortable for both humans(Figure5-a).By following this path,the robot does not approach too close to the humans when it is invisible,and enters the conversation ina more natural way by making a frontal approach.To illustrate the effect of obstacles in the environment,we place a wall in the same scenario,next to the human on the right(Figure5-b).Although the obstacle is not blocking the path of the robot and the path is still valid from a classical planning view,the robot calculates a new path.Because of the obstacle blocking a part of the human’sfield of view,the previous path becomes undesirable by making the robot suddenly appear too close. With this new path the robot enters smoothly into theview. Fig.5.A scenario with two persons talking and a robot that wants to join them.a)The planner calculates a path by taking into account safety and visibility.b)Although the previous path is valid,the plannerfinds a new path to avoid the surprise effect that can come from its sudden appearance in the humans’view.The behavior of the Human Aware Motion Planner in a hallway is illustrated infigure6.In this scenario,the robot and a person cross in a hallway.The planner calculates a path to avoid a collision.Although the motion possibilities are restricted because of the environment,a friendly behavior appears.The robot avoids the human by moving to the right. After passing the human,instead of taking immediately its previous lane,the robot stays a certain distance from the human and thus behaves morefriendly.Fig.6.A robot and a person cross in a hallway.The robot planner calculates a path that integrates a social behaviour:the robot avoids to come too close to the human’s back.Figure7illustrates another scenario with a person sitting in a room.The robot is initially located in the right corner of the room and has to move next to the human hidden by the wall obstacle.An example of the behaviour of a classical motion planner is shown in7-a.Both paths are uncomfortable since the robot either passes too close and behind the human or appears suddenly in the human’sfield of view.5The second figure (7-b)shows the path computed by the Human Aware Motion Planner.This path has the following characteristics:•The robot does not approach too close to the humans.It chooses a solution that only enters in the humans 3m zone in the last portion of the path.•The robot remains as visible as possible along the path.Because of the hidden start position,there is no possibil-ity to be in the human field of view at the beginning of path.Therefore the planner chooses to pass behind the wall instead of passing behind the human.•The robot is not too close to the human when it appears in his field of view.The transition from the invisible zone behind the wall to the visible one is sufficiently far from the human to avoid any surprise effect.Then the robot can approach the human to reach its finalposition.Fig.7.A comparison between a standard motion planner and HAMP.Clearly paths produced by the first one are not acceptable since the robot either looms into the human’s field of view or passes too near to the human’s back.The path found by HAMP is more friendly with respect to safety,visibility,and hidden zones.A last example of the features of our planner is illustrated in figure 8representing an apartment scenario with two persons:Clark (with light shirt)and Bruce (with dark shirt).We look at the synthesized paths between the living room and the kitchen in different situations.In figure 8-a,we show the path generated by the navigation planner for a situation in which Clark orders the robot to bring a sandwich from the kitchen.The computed motion takes into account the safety and the comfort of both humans by trying to stay in the visibility fields.We can see in figure 8-b a computed path that avoids looming from behind the kitchen wall.Instead the robot chooses a path that keeps a certain distance to this wall.TABLE IC OMPUTATION TIMES OF THE PATHS IN FIGURE 8Grid ResolutionFigure 8-a Figure 8-b Figure 8-c Figure 8-d 0.2m 0.070.090.060.150.1m 0.210.250.230.500.05m0.440.780.490.20In figure 8-c,we can see that Bruce came to talk to Clark,so the robot calculates a different path which stays in Clark’s field of view and also avoids passing too near to Bruce’s back.The minimum cost approach of our navigation planner allows the robot to choose an alternative path if the path is blocked by an obstacle or a person as shown in figure 8-d where Bruce is blocking the passage.Our planner is fast enough to replan and adapt its path along the execution.If a grid change occurs,like a change in human state,position,orientation or appearance of a an obstacle,fast computation times allow online replanning and a smooth switch to the new path.Table I shows the processing CPU-times on an AMD Athlon 1.8GHz processor of the paths shown infigure 8for 3different grid resolutions.V.R OBOT I MPLEMENTATION &R ESULTSThe planner is integrated into OpenGenom [23]as a module of the LAAS architecture [24].As shown in the figure 9,the whole system has been carried to our robot Rackham,equipped with a front SICK laser scanner,a tilt &pan camera,infrared proximity sensors and sonars with three Pentium III processors.Fig.9.General architecture of the robot composed of various OpenGenom [23]modulesTwo additional modules are introduced into the system.As the positions and orientations of each human in the environment must be known,a human detection and tracking module (named humPos)has been developed.This module guarantees the data flow of human positions needed by the planner module.6 Fig.8.Paths generated by the Human-Aware Motion Planner.Note that(1)the robot avoids suddenly appearing in the close vicinity of humans,(2)it tries to stay in theirfield of sight,and(3)it avoids approaching humans from behind.A.HumPos-Human Detection&Tracking Module Detecting humans is necessary for a robotic/computer[25] system that involves interaction with humans.There are dif-ferent methods depending on the robot’s sensor capabilities. With camera and laser,the information can be used to detect more precisely humans in the robot’s proximity[26].In the absence of cameras,the laser can be used to detect leg-like shapes[27].After the detection,tracking[28][29]must be launched in order to follow the human motions and detect motion patterns.For this purpose,we have developed the“HumPos”module, a module that provides human detection and tracking services based on laser and camera data.HumPos provides a list of humans in the environment to the motion planner.This list contains positions and orientations of the detected humans associated with a confidence index and an identifier.The algorithm and methods used for laser based human detection and tracking are very simplistic and work under two assumptions:•The gaze direction of a person is always the same as the direction of his body.•A moving person is always moving forward looking at his motion direction.The general algorithm consists of making two types of human detection(laser and visual),matching these two and tracking.At the end,an orientation assignment stage is per-formed on the results of the tracking.Figure10shows the overall mechanism of human detection andtracking.Fig.10.The Human Detection process combines laser and visual data to detect and track humans.In laser based detection,static obstacles in the environment map arefiltered from the sensor data.Resulting points are then used to detect leg-like shapes(a leg or pair of legs)according to their geometry and neighborhood.This process produces a list of detected humans with their positions and an attached confidence index.7On the other hand,the visual data coming from the camera are used to detect people in near proximity of the robot by the visual face detection module(figure11-b).The visual face detection module provides a list of humans looking directly at the robot with their estimated distance based on facial size metrics within a range of approx.1to3meters[30]. These two lists are then matched to produce only one list of humans with corresponding positions,orientations, and confidence index(figure11-c).Finally,detected humans are tracked by the tracking stage.At the end of this stage, orientations are assigned to detected humans according to their motions,the visual detection result and,the2assumptions that we made above.The orientation assignment procedure is described in Algorithm1.Algorithm1Orientation assignment1:if a person P detected by laser then2:if VisualFaceDetection detects P then3:Direction P←looking at the robot(body toward the robot)4:else5:if P is moving then6:Direction P←motion direction7:else8:if P detected before then9:Direction P←OldDirection P10:else11:Direction P←looking at the robot(body toward the robot)12:end if13:end if14:end if15:end ifIf a person is looking at the robot and thus detected visually, we assign his orientation to the direction of the robot.If the visual face detection fails,then laser based leg detection decides humans’orientations.If a person is detected and he is moving,his motion direction is assigned as his head/body orientation.If a person stops,his last orientation is conserved and assigned to next detections until he moves,disappears,or is detected by visual face detection.B.Planner ModuleThe planner module works with a static internal3D map along with each human’s model,his grid construction param-eters and the robot model.The humans’positions are updated by the HumPos module and the robot’s current position is updated by Position Manager module.A constant data flow from HumPos to the planner is necessary to maintain information about humans.With these inputs(figure12),the planner module calculates a path that takes into account the safety and visibility constraints explained in SectionIII.Fig.11.a-Two persons have been detected based on laser data b-One of them is also detected using a vision-based face detection c-The one detected by the camera has high probability and is marked with red while the other person is marked with a lowerprobability.Fig.12.Architecture of the Human Aware Motion Planner moduleThe computed path is then sent to the execution module [31].This module produces a smooth trajectory following the path given in the form of a set of passage points.If the system detects a new person or a change of position, orientation,or state of an existing person during the robot’s motion,the planner replans a new path and sends it to the execution module.Since the robot is in motion,the planner replans a new path that begins with a small portion of the old path.With this property the robot passes smoothly from the current path to a new one.To avoid constant replanning and possible errors from the human detection phase,the planner only replans a new path if a human position changes0.2meters or his orientation changes0.3radians.C.Implementation ResultsInfigure13,a scenario with two people in a conversation and a comparison between a standard motion planner and。

无人驾驶的英语课件PPT

无人驾驶的英语课件PPT
It can also improve road safety by reducing human errors, which is a leading cause of accidents
Other potential applications include long haul trucking, public transportation, and even self driving taxis or shared mobility services
3D Reconstruction
The creation of a 3D model of the environment from sensor data to provide more accurate representation of the scene
Path planning technology
Application scenarios for autonomous driving
Autonomous driving has the potential to revolutionize transportation, particularly in urban areas where traffic congestion and pollution are major issues
Techniques used to regulate the vehicle's velocity, acceleration, and steel angle to achieve desired performance and safety standards
Risk Assessment
The evaluation of potential hazards and their associated risks to inform decision making processes

毕业设计机械手外文翻译

毕业设计机械手外文翻译

外文翻译译文题目 一种与移动机械臂的部分零件所受载荷相协译文题目调的运动结构(2)原稿题目A kinematically compatible framework for cooperative payload transport by nonholonomic mobile manipulators(2)原稿出处 Auton Robot (2006) 21:227–242 原稿出处A kinematically compatible framework for cooperative payload transport by nonholonomic mobile manipulators (2) M. Abou-Samah 1 , C. P. Tang 2 , R. M. Bhatt 2 and V. Krovi 2(1) MSC Software Corporation, Ann Arbor, MI 48105, USA (2) Mechanical (2) Mechanical and and Aerospace Engineering, State University of of New New York at at Buffalo, Buffalo, Buffalo, NY 14260, USA Received: 5 August 2005 Revised: 25 May 2006 Accepted: 30 May 2006 Published online: 5 September 2006 Abstract In this paper, we examine the development of a kinematically compatible control framework for a modular system of wheeled mobile manipulators that can team up to cooperatively transport a common payload. Each individually autonomous mobile manipulator consists of a differentially-driven Wheeled Mobile Robot (WMR) with a mounted two degree-of-freedom (d.o.f) revolute-jointed, planar and passive manipulator arm. The composite wheeled vehicle, formed by placing a payload at the end-effectors of two (or more) such mobile manipulators, has the capability to accommodate, detect and correct both instantaneous and finite relative configuration errors.The kinematically-compatible motion-planning/control framework developed here is intended to facilitate maintenance of all kinematic (holonomic and nonholonomic) constraints within such systems. Given an arbitrary end-effector trajectory, each individual mobile-manipulator's bi-level hierarchical controller first generates a kinematically- feasible desired trajectory for the WMR base, which is then tracked by a suitable lower-level posture stabilizing controller. Two variants of system-level cooperative control schemes schemes——leader-follower and decentralized control control——are then created based on the individual mobile-manipulator control scheme. Both methods are evaluated within an implementation framework that emphasizes both virtual prototyping (VP) and hardware-in-the-loop (HIL) experimentation. Simulation and experimental results of an example of a two-module system are used to highlight the capabilities of a real-time local sensor-based controller for accommodation, detection and corection of relative formation errors. Keywords Composite system-Hardware-in-the-loop-Mobile manipulator- Physical cooperation-Redundancy resolution-Virtual prototypingKinematic collaborationof two mobilemanipulators We now examine two variants of system-level cooperative control schemes schemes——leader-follower and decentralized control control——that can be created based on the individual mobile-manipulator control scheme.Leader-follower approach The first method of modeling such a system considers the midpoint of the mobile base (MP B) of the mobile-manipulator B to be rigidly attached to the end-effector of mobile manipulator A, as depicted in Fig. 4. Figure 4(b) depictshow the end-effector frame {E } of MP A is rigidly attached to the frame at MP B (separated by a constant rotation angle β).(15)Fig. 4 Schematic diagrams of the leader-follower scheme: (a) the 3-link mobile manipulator under analysis, and (b) the two-module composite system MP B now takes on the role of the leader and can be controlled to follow any trajectory trajectory that that is feasible feasible for for a WMR. Hence, given a trajectory trajectory of of the leader MP B , and the preferred manipulator configuration of, Eq. (5) can be rewritten as:(16)and correspondingly Eqs. (6)and correspondingly Eqs. (6)––(8) as:(17)Thus, the trajectory of the virtual (reference) robot for the follower MP A, and the derived velocities can now bedetermined. determined. This This This forms forms forms the the the leader-follower leader-follower leader-follower scheme scheme scheme used used used for for for the the the control control control of of of the the collaborative system carrying a common payload.Decentralized approachThe second second approach approach approach considers considers considers the the frame attached attached to to a point of interest interest on on the common payload as the end-effector frame of both the flanking mobile manipulator systems, as depicted in Fig. 5. Thus, a desired trajectory specified for this payload frame can then provide the desired reference trajectories for the two mobile platforms using the similar framework developedin the previous section by takingand , where . Thispermits Eq. (5) to be rewritten as: (18)Fig. 5 Decentralized control scheme implementation permits the (a) composite system; to be treated as (b) two independent 2-link mobile manipulators and correspondingly Eq. (6)and correspondingly Eq. (6)––(8) as:(19)Each two-link mobile manipulator now controls its configuration with reference to this common end-effector frame mounted on the payload. However, thelocations locations of of of the the the attachments attachments attachments of of of the the the physical physical physical manipulators manipulators manipulators with respect with respect with respect to to to the the payload reference frame must be known apriori.Implementation frameworkWe examine the design and development of a two-stage implementation framework, shown in Fig. 6, that emphasizes both virtual prototyping (VP) based refinement and hardware-in-the-loop (HIL) experimentation.Fig. 6 Paradigm for rapid development and testing of the control scheme on on virtual and physical virtual and physical prototypes Virtual prototyping based refinementIn the first stage, we employ virtual prototyping (VP) tools to rapidly create, evaluate and refine parametric models of the overall system and test various algorithms algorithms in in in simulation simulation simulation within within within a a a virtual virtual virtual environment. environment. environment. 3D 3D 3D solid solid solid models models models of of of the the mobile platforms and the manipulators of interest are created in a CAD package,4 and exported with their corresponding geometric and material properties into a dynamic simulation environment.5 Figure 7(a) shows an example of the application of such framework for simulating the motion of a mobile platformcontrolled controlled by an by an by an algorithm algorithm algorithm implemented implemented implemented in Simulink.in Simulink.6 However, However, it is important it is importantto note that the utility of such virtual testing is limited by: (a) the ability to correctly model and simulate the various phenomena within the virtual environment; (b) the fidelity of the available simulation tools; and (c) ultimately, ultimately, the the the ability ability ability of of of the the the designer designer designer to to to correctly correctly correctly model model model the the the desired desired desired system system and suitably interpret the results.Fig. 7 A single WMR base undergoing testing within the (a) virtual prototyping framework; and (b) hardware-in-the-loop (HIL) testing framework Hardware-in-the-loop experimentationWe employ a hardware-in-the-loop (HIL) methodology for rapid experimental verification of the real-time controllers on the electromechanical mobile manipulator prototypes. Each individual WMR is constructed using two powered wheels and two unactuated casters. Conventional disc-type rear wheels, powered by gear-motors, are chosen because of robust physical construction and ease of operation in the presence of terrain irregularities. Passive ball casters are preferred over wheel casters to simplify the constraints on maneuverability introduced by the casters. The mounted manipulator arm has two passive revolute joints with axes of rotation parallel to each other and perpendicular to the base of the mobile platform. The first joint is placed appropriately at the geometric center on top frame of the platform. The location of the second joint can be adjusted to any position along the slotted first link. The second link itself is reduced to a flat plate supported by the second joint. Each joint is instrumented instrumented with with optical encoder that can measure the joint rotations. The completely assembled two-link mobile manipulator is shown in Fig. 1(c). The second mobile manipulator (see left module of Fig. 1(b) and (d)) employs the same same overall overall overall design design design but but possesses possesses a a motor motor at at the the base base base joint joint joint of of of the the mounted two-link arm. The motor may be used to control the joint two-link arm. The motor may be used to control the joint motion along a motion along a predetermined trajectory (which can include braking/holding the joint at a predetermined predetermined position). position). position). When When When the the the motor motor motor is is is switched switched switched off off off the the the joint joint joint now now now reverts reverts to a passive joint (with much greater damping). The motor is included for permitting future force-redistribution studies. In this paper, however, the motor is used solely to lock the joint prevent self-motions self-motions of of the articulated linkage for certain pathological cases (Bhatt et al., 2005; Tang and Krovi, 2004). PWM-output motor driver cards are used to drive the gearmotors; and encoder cards monitor the encoders instrumenting the various articulated arms. This embedded controller controller communicates communicates communicates with with a designated designated host host computer using TCP/IP for program download and data logging. The host computer withMATLAB/Simulink/Real Time Workshop 8 provides a convenient graphical userinterface environment for system-level software development using a block-diagrammatic language. The compiled executable is downloaded over the network and executes in real-time on the embedded controller while accessing locally installed hardware components.In particular, the ability to selectively test components/systems at various levels (e.g. individual motors, individual WMRs or entire systems) without wearing out components during design iterations was very useful. Figure 7(b) illustrates the implementation of such a system on one of the WMRs. Numerous calibration, simulation and experimental studies carried out with this framework, at the individual-level and system-level, are reported in Abou-Samah (2001).Experimental resultsFor the subsequent experiments,99we prescribe the initial configuration configuration of of the two-module composite system, as shown in Fig. 8. Specifically, we position thetwo two WMRs such WMRs such WMRs such that MP that MP that MP A is A is A is located located located at a relative position of at a relative position of ,and with a relative orientation difference of with respect to MP B.For fixed link-lengths this inherently specifies the values of the various configuration angles as shown in Table 1.Table 1 Parameters for the initial configuration of the two-module composite wheeled system (see Fig. 8 for details) Link lengths of the articulationL 1 0.28 0.28 m m m (11 (11 (11 in) in)L 2 0.28 0.28 m m m (11 (11 (11 in) in) Relative angles of the configuration of thearticulationL 3 0.28 0.28 m m m (11 (11 (11 in) in)φ 1 333333.98°.98°.98°φ 2 280.07°280.07°φ 3 337.36°337.36° Offset between the wheeled mobile basesφ 4 128.59°128.59°δ 0.00°0.00°0.00 m (0 in)0.61 0.61 m m m (24 (24 (24 in) in)Fig. 8 Initial configuration of the two-module composite wheeled system Leader-follower approachA straight line trajectory at a velocity of 0.0254 m/s is prescribed for the leader, MP B. Given a desired configuration of the manipulator arm, the algorithm algorithm described in described in described in Section 4.1 Section 4.1 Section 4.1 is used is used is used to to to obtain a desired obtain a desired obtain a desired trajectory for trajectory for MP A. A large disruption is intentionally introduced by causing one of the wheels of MP A to run over a bump, to evaluate the effectiveness of wheels of MP A to run over a bump, to evaluate the effectiveness of the the disturbance accommodation, detection and compensation.The results are examined in two case scenarios examined in two case scenarios –– Case A: MP A employs odometric estimation for localization as seen in Fig. 9, and Case B: MP A employs sensed articulations for localization as seen in Fig. as seen in Fig. 1010. In each of these figures, (a) presents the overall -trajectory of {M } of MPA with respect to the end-effector frame {E } (that is rigidly attached to the {M } of MPB) while (b), (c) and (d) present the relative orientation difference, X -difference and Y -difference as functions of time. Further in both sets of figures, the ‘Desired’ (––(–– line) is the desired trajectory typically computed offline; and ‘Actual’ (–o – line) is the actual trajectory followed by the system, as determined by post-processing the measurements of the instrumented articulations.However, in Fig. However, in Fig. 99, the (, the (––x – line) represents the odometric estimate while in Fig. in Fig. 1010 it stands for the articulation based estimate (which therefore coincides with the ‘Actual’ (–coincides with the ‘Actual’ (–o o – line) trajectory).Fig. 9 Case A: Odometric Estimation of Frame M, used in the control of MP A following MPB in a leader-follower approach, is unable to detect non-systematic errors such as wheel-slip. (a) XY trajectory of Frame M; (b) Orientation versus Time; (c) X position of Frame M versus Time; and (d) Y position of Frame M versus Time Fig. 10 Case B: Articulation based Estimation of Frame M, used for control of MPA with respect to MP B in a leader-follower approach is able to detect and correct non-systematic errors such as wheel-slip. slip. (a) (a) XY trajectory of Frame M; (b) Orientation versus Time; (c) X position of Frame M versus time; and (d) Y position of Frame M versus time In Case A, the introduction of the disruption causes a drift in the relative configuration of the system which remains undetected by the odometric estimation. Further, as seen in Fig. 9, this drift has a tendency to grow if left uncorrected. However, as seen in Fig. 10, the system can use the articulation-based articulation-based estimation estimation estimation (Case (Case (Case B) B) B) to to not only only detect detect detect disturbances disturbances disturbances to to to the the relative configuration but also to successfully restore the original system configuration.Decentralized control approachIn this decentralized control scenario, a straight line trajectory with a velocity of 0.0254 m/s is presented for the payload frame. As in the leader-follower scenario, a large disruption is introduced by causing one of the wheels of MP A to run over a bump. The algorithm is tested using two further case scenarios scenarios——Case Case C:C: Both mobile mobile platforms platforms platforms employ employ odometric odometric estimation estimation estimation for for localization as shown in Fig. as shown in Fig. 1111, and Case D: Both mobile platforms employsensed articulations for localization as shown in Fig. as shown in Fig. 1212.Fig. 11 Case C: Odometric estimation of frames M of MP A and MP B, used in the control of MP A with respect to MP B in the decentralized approach, is again unable to detect non-systematic errors such as wheel-slip. (a) XY trajectory of frame M of MP A; (b) XY trajectory of frame M of MP B; (c) Relative orientation, between MP A and MP B, versus time; (d) X distance, between MP A and MP B, versus time; and (e) Y distance, between MP A and MP B, versus time. (f) Sequential photographs of the corresponding composite system motion (as time progresses from left to right along each row)Fig. 12 Case D: Articulation based estimation of frames M of MP A and MP B, used for the control of MP A and MP B with respect to a payload-fixed frame is able to detect and correct non-systematic errors such as wheel-slip. (a) XY trajectory of frame M of MP A; (b) XY trajectory of frame M of MP B; (c) Relative orientation, between MP A and MP B, versus time; (d) X distance, between MP A and MP B, versus time; and (e) Y distance, between MP A and MP B, versus time. (f) Sequential photographs of the corresponding composite system motion (as time progresses from left to right along each row) In In each each each of of of these these these figures, figures, figures, subplots subplots subplots (a) (a) (a) and and and (b) (b) (b) presents presents presents the the the overall overalltrajectories trajectories of of frames frames {{M } of MP A and MP B respectively respectively with with with respect respect respect to to their initial poses. Subplots (c), (d) and (e) present the relative orientation difference, X -difference and Y -difference of frames {M } of MP A and MP B respectively as functions of time. Further in both sets of figures, the ‘Desired’ ‘Desired’ (––(––(–– line) is the desired trajectory trajectory typically typically computed offline; and ‘Actual’ (–and ‘Actual’ (–o o – line) is the actual trajectory followed by the system, as determined by post-processing the measurements of the instrumented articulations. articulations. However, However, However, in in in Fig. Fig. Fig. 1111, the (–x – line) line) represents represents represents the the odometric estimate while in Fig. estimate while in Fig. 1212 it stands for the articulation based estimate.In Case C, both mobile platforms use the odometric estimation for localization —hence hence as as as expected, expected, expected, Fig. Fig. Fig. 1111 reflects reflects the the the fact fact fact that that that the the the system system system is is unable unable to to to detect detect detect or or or correct correct correct for for for changes changes changes in in in the the the relative relative relative system system system configuration. configuration. However the data obtained from the articulations accurately captures theexistence of errors between the frames of reference of MP B and MP A. Thus, using the articulation-based articulation-based estimation estimation of relative configuration for control as in Case D allows the detection of disturbances and successful restoration of the original system configuration configuration as as shown in Fig. 12. Note, however, however, while while the relative system configuration is maintained, errors relative to a global reference frame cannot be detected if both WMRs undergo identical simultaneous disturbances . Detection of such absolute errors would require an external reference and is beyond the scope of the existing framework.ConclusionIn this paper, we examined the design, development and validation of a kinematically compatible framework for cooperative transport of a common payload payload by by by a a team team of of nonholonomic nonholonomic mobile mobile mobile manipulators. manipulators. manipulators. Each Each Each individual individual individual mobile mobile manipulator module consists of a differentially driven wheeled WMR retrofitted with with a a a passive passive passive two two two revolute revolute revolute jointed jointed jointed planar planar planar manipulator manipulator manipulator arm. arm. arm. A composite A composite A composite multi multi degree-of-freedom degree-of-freedom vehicle vehicle system could then be modularly created by attaching a common payload on the end-effector of two or more such modules.The composite composite system system allowed payload trajectory tracking errors, arising from subsystem controller errors or environmental disturbances, to be readily accommodated within the compliance offered by the articulated linkage. The individual mobile manipulators compensated by modifying their WMR bases’ motion plans to ensure prioritized satisfaction of the nonholonomic constraints. constraints. The The stabilizing controllers of the WMR bases could then track the recomputed “desired motion plans” to help restore the system system-configuration. -configuration. This scheme not only explicitly ensured maintenance of the kinematic compatibility constraints within the system but is also well suited for an online sensor-based motion planning implementation.This This algorithm was algorithm was algorithm was then then then adapted to adapted to adapted to create two create two create two control schemes for control schemes for the overall composite system — the leader follower approach and the decentralized approach. We evaluated both approaches within an implementation framework that emphasized both, virtual prototyping and hardware-in-the-loop using the case-study of a two module composite system. Experimental results verified the ability of the composite system with sensed articulations to not only accommodate instantaneous disturbances due to terrain irregularities but also to to detect detect detect and and and correct correct correct drift drift drift in in in the the the relative relative relative system system system configuration. configuration. configuration. The The The overall overall framework readily facilitates generalization to treat larger systems with many more mobile manipulator modules.Appendix AThe kinematic constraint equations shown in Eq. (3) may be written in the general form as:(20)Then the velocity constraint equation can be written as:(21)The general solution to this differential equation is:(22)By appropriate selection of the initial conditions within the experimentalsetup, one can create the condition , i.e., exactly satisfying the constraint at the initial time, which then permits the constraint constraint to be to be to be satisfied satisfied satisfied for all for all for all time. However, time. However, time. However, one could also one could also one could also enhance this enhance this process by adding another term to the right-hand-side of Eq. (21) as:(23)where Φ is a positive-definite positive-definite constant constant matrix. This results in a first order stable system:(24)whose solution for any arbitrary initial configuration is:(25)In such systems, there is no longer a requirement to enforcesince the solution exponentially stabilizes to regardless of the initial conditions. This feature could potentially be easily added to transform Eq. (5) to further improve overall performance, but was not required. ReferencesAbou-Samah, M. 2001. A kinematically compatible framework for collaboration of multiple non-holonomic wheeled mobile robots. M. Eng. Thesis, McGill University, Montreal, Canada. Abou-Samah, Abou-Samah, M. M. M. and and and Krovi, Krovi, Krovi, V. V. V. 2002. 2002. 2002. Optimal Optimal Optimal configuration configuration configuration selection selection selection for for for a a a cooperating cooperating cooperating system system system of of mobile manipulators. In P roceedings Proceedings of the 2002 ASME Design Engineering Technical Conferences , DETC2002/MECH-34358, Montreal, QC, Canada. Adams,J.,Bajcsy, R.,Kosecka,J., R.,Kosecka,J., Kumar, Kumar, Kumar, V., V., V., Mandelbaum, Mandelbaum, Mandelbaum, R., R., R., Mintz, Mintz, Mintz, M., M., M., Paul, Paul, Paul, R., R., R., Wang, Wang, Wang, C.-C., C.-C., Yamamoto, Yamamoto, Y Y ., and and Yun, Yun, Yun, X. X. X. 1996. 1996. 1996. Cooperative Cooperative Cooperative material material material handling handling handling by by by human human human and and and robotic robotic robotic agents: agents: Module development and system synthesis. Expert Systems with Applications, 11(2):89, 11(2):89, 11(2):89––97. Bhatt, R.M., Tang, C.P ., Abou-Samah, M., and Krovi, V. 2005. A screw-theoretic analysis framework for for payload payload payload manipulation manipulation manipulation by by by mobile mobile mobile manipulator manipulator manipulator collectives. collectives. collectives. In In Proceedings of the 2005 ASME International Mechanical Engineering Congress & Exposition, IMECE2005-81525, Orlando, Florida, USA. Borenstein, J., Everett, B., and Feng, L. 1996. Navigating Mobile Robots: Systems and Techniques. A. K. Peters, Ltd., Wellesley, MA. Brockett, R.W. 1981. Control theory and singular riemannian geometry. In P .J. Hilton and G.S. Young (eds), New Directions in Applied Mathematics, Springer-Verlag, New York, pp. 11–27. Campion, Campion, G., G., G., Bastin, Bastin, Bastin, G., G., G., and and and D'Andrea-Novel, D'Andrea-Novel, D'Andrea-Novel, B. B. B. 1996. 1996. 1996. Structural Structural Structural properties properties properties and and and classification classification classification of of kinematic kinematic and and and dynamic dynamic dynamic models models models of of of wheeled wheeled wheeled mobile mobile mobile robots. robots. IEEE Transactions on Robotics and Automation , 12(1):47–62. Canudas de Witt, C., Siciliano, B., and Bastin, G. 1996. Theory of Robot Control. Springer-Verlag, Berlin. Desai, Desai, J. J. J. and and and Kumar, Kumar, Kumar, V. V. V. 1999. 1999. 1999. Motion Motion Motion planning planning planning for for for cooperating cooperating cooperating mobile mobile mobile manipulators. manipulators. Journal of Robotic Systems , 16(10):557–579. Donald, B.R., Jennings, J., and Rus, D. 1997. Information invariants for distributed manipulation. TheInternational Journal of Robotics Research, 16(5):673–702. Honzik, B. 2000. Simulation of the kinematically redundant mobile manipulator. In Proceedings of the 8th MATLAB Conference 2000, Prague, Czech Republic, pp. 91, Prague, Czech Republic, pp. 91–95. Humberstone, C.K. and Smith, K.B. 2000. Object transport using multiple mobile robots with non-compliant endeffectors. In D istributed Distributed Autonomous Robotics Systems 4, Springer-Verlag, Tokyo, 4, Springer-Verlag, Tokyo, Tokyo, pp. 417–426. Juberts, M. 2001. Intelligent control of mobility systems, Needs. ISD Division, Manufacturing Engineering Laboratory, National Institute of Standards and Technology. Khatib, Khatib, O., O., O., Yokoi, Yokoi, K., K., Chang, Chang, Chang, K., K., K., Ruspini, Ruspini, Ruspini, D., D., D., Holmberg, Holmberg, Holmberg, R., R., R., and and and Casal, Casal, Casal, A. A. A. 1996. 1996. 1996. Vehicle/arm Vehicle/arm coordination and multiple mobile manipulator decentralized cooperation. In Proceedings of the 1996IEEE/RSJ International Conference on Intelligent Robots and Systems, Osaka, Japan, pp. 546–553. Kosuge, K., Osumi, T., Sato, M., Chiba, K., and Takeo, K. 1998. Transportation of a single object by two decentralized-controlled nonholonomic mobile robots. In 1998 IEEE International Conference on Robotics and Automation, Leuven, Belgium, pp. 2989–2994. Kube, C.R. and Zhang, H. 1997. Task modelling in collective robotics. Autonomous Robots , Kluwer Academic Publishers, 4(1):53–72. Latombe, J.-C. 1991. R obot Robot Motion Planning. Kluwer Academic Publishers, Boston, MA. Li, Z. and Canny, J.F. 1993. Nonholonomic Motion Planning. Kluwer Academic Publishers, Boston, MA. Murray, R.M. and Sastry, S.S. 1993. Nonholonomic motion planning: Steering using sinusoids. IEEE Transactions on Automatic Control, 38(5):700–716. Nakamura, Y . 1991. A dvanced Advanced Robotics: Redundancy and Optimization. Addison-Wesley Publishing Company, Inc., California. Samson, Samson, C. C. C. and and and Ait-Abderrahim, Ait-Abderrahim, Ait-Abderrahim, K. K. K. 1991a. 1991a. 1991a. Feedback Feedback Feedback control control control of of of a a a nonholonomic nonholonomic nonholonomic wheeled wheeled wheeled cart cart cart in in cartesian space. space. In In Proceedings of the 1991 IEEE International Conference on Robotics and Automation , Sacramento CA, pp. 1136–1141. Samson, C. and Ait-Abderrahim, K. 1991b. Feedback stabilization of a nonholonomic wheeled mobile robot. robot. In In Proceedings of the 1991 IEEE/RSJ International Workshop on Intelligent Robots andSystems , Osaka, Japan, pp. 1242–1247. Schenker, P .S., Huntsberger, T.L., Pirjanian, P ., Trebi-Ollennu, A., Das, H., Joshi, S., Aghazarian, H., Ganino, A.J., Kennedy, B.A., and Garrett, M.S. 2000. Robot work crews for planetary outposts: Close cooperation and coordination of multiple mobile robots. In Procedings of SPIE Symposium on Sensor Fusion and Decentralized Control in Robotic Systems III , Boston, MA. Seraji, Seraji, H. H. H. 1998. 1998. 1998. A A A unified unified unified approach approach approach to to to motion motion motion control control control of of of mobile mobile mobile manipulators. manipulators. The International Journal of Robotics Research, 17(2):107, 17(2):107–118. Spletzer, J., Das, A.K., Fierro, C.J., Kumar, V., and Ostrowski, J.P. 2001. Cooperative localization and control for multi-robot manipulation. In Procedings of the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems, Maui, Hawaii, USA, pp. 631–636. Stilwell, Stilwell, D.J. D.J. D.J. and and and Bay, Bay, Bay, J.S. J.S. J.S. 1993. 1993. 1993. Towards Towards Towards the the the development development development of of of a a a material material material transport transport transport system system system using using swarms of ant-like robots. In Procedings of the 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems, Atlanta, GA, pp. 766, Atlanta, GA, pp. 766–771. Tang, C.P . 2004. Manipulability-based analysis of cooperative payload transport by robot collectives. M.S. Thesis, University at Buffalo, Buffalo, NY . Tang, C.P . and Krovi, V. 2004. Manipulability-based configuration evaluation of cooperative payload transport by by mobile mobile robot collectives. In Proceedings of the 2004 ASME Design Engineering Technical Conferences and Computers and Information in Engineering Conference , DETC2004-57476, Salt Lake City, UT, USA. Tanner, H.G., Kyriakopoulos, K.J., and Krikelis, N.I. 1998. Modeling of multiple mobile manipulators handling a common deformable object. Journal of Robotic Systems, 15(11):599, 15(11):599, 15(11):599––623. Wang, Z.-D., Nakano, E., and Matsukawa, T. 1994. Cooperating multiple behavior-based robots for object manipulation. In 1994 1994 IEEE/RSJ International Conference on Intelligent Robots and Systems . Whitney, Whitney, D.E. D.E. D.E. 1969. 1969. 1969. Resolved Resolved Resolved motion motion motion rate rate rate control control control of of of manipulators manipulators manipulators and and and human human human prostheses. prostheses. IEEE Transactions on Man-Machine Systems, MMS-10;47, MMS-10;47–53. Yamamoto, Y. 1994. Control and coordination of locomotion and manipulation of a wheeled mobile manipulator. Ph.D. Thesis, University of Pennsylvania, Philadelphia, PA. Yamamoto, Y . and Yun, X. 1994. Coordinating locomotion and manipulation of a mobile manipulator. IEEE Transactions on Automatic Control, 39(6):1326–1332. Footnotes 1.R indicates revolute joint. RRR indicates serial linkages connected by three revolute joints. 2.We denote “desired ” quantities using superscript d in the rest of the paper. 3.Reference robot variables are denoted using superscript r and actual robot variables without any superscript for the rest of the paper. 4.SolidWorks TM was the CAD package used for this work. 5.MSC Visual Nastran TM was the dynamic simulation environment used for this work. 。

常见的英语考纲词汇

常见的英语考纲词汇

常见的英语考纲词汇,按照字母顺序排列:A:abandon, ability, absorb, access, accommodate, accompany, achieve, acknowledge, acquire, adapt, address, adequate, adjust, administer, admire, admit, adopt, advance, adventure, advertise, advice, advocate, affect, afford, agency, agenda, aggregate, agree, aim, album, alert, allocate, allow, ally, alter, alternative, amateur, ambassador, amount, analyze, ancestor, angle, animate, announce, annoy, annual, anticipate, anxious, appear, appreciate, approach, appropriate, approve, approximate, architecture, argue, arise, arrange, arrest, arise, art, articulate, ascend, aspect, aspire, assert, assess, assign, assist, associate, assume, assure, astonish, attach, attain, attempt, attend, attract, attribute, audience, authorize, automate, available, average, avoidB:bachelor, background, backbone, bacteria, balanced, ban, band, bank, bar, bare, bargain, barometer, based, basic, battle, bear, beat, behalf, behavior, belief, belong, benefit, besides, bet, betray, better, bias, beyond, bill, biology, birth, bite, bitter, blade, blame, blend, bless, blind, block, blood, blue, board, boat, body, bold, bond, bonus, book, boom, border, bore, born, boundary, boycott, brain, branch, brave, breach, bread, break, breed, brief, bright, bring, broad, broadcast, brother, brush, budget, build, bulk, bullet, burden, bury, bus, business, busy, but, butter, button, buy, byC:cabinet, cable, calculate, call, calm, camera, campaign, canal, capable, capacity, capture, carbon, card, care, career, careful, carry, case, cash, cast, casual, catch, category, cause, celebrate, cell, censure, center, century, ceremony, certain, chain, chair, challenge, chance, change, channel, chaos, chapter, characteristic, charge, charity, chart, chase, cheap, check, cheese, chemical, chief, child, choir, choose, circle, cite, citizen, city, civil, claim, clarify, clash, class, classify, clean, clear, clerk, client, climate, climb, close, cloth, clout, club, clue, cluster, coach, coal, coast, coat, code, coffee, coin, collaborate, collapse, collar, collect, college, colonel, color, column, combination, combine, come, comfort, command, comment, commercial, commission, commit, committee, common, communicate, community, company, compare, competition, complain, complete, complex, comply, compose, compound, comprehend, compromise, compute, concentrate, concept, concern, concert, conclude, concrete, condition, conduct, cone, conference, confess, confidence, conflict, confront, confuse, congratulate, congress, connect, conquer, conscience, consensus, consent, consequence, consider, consist, consolidate, constant, constitute, construct, consult, consume, contain, contempt, contend, content, contest, context, continue, contract, contradict, contrary, contrast, contribute, control, controversy, convert, convince, cook, cool, cooperate, coordinate, cope, copy, core, corn, correct, correspond, cost, cotton, council, count, counter, country, couple, courage, course, court, cousin, cover, cow, crack, craft, crash, crazy, create, credit, creep, crew, crisis, critical, criticize, crop, cross, crowd, crown, crucial, cruel, cruise, cry, crystal, culture, cup, cure, curry, curve, custom, customer, cutD:damage, danger, data, date, daughter, day, deal, dear, death, debate, decade, decay, decide, declare, decline, decorate, decrease, dedicate, deep, defeat, defend, defer, define, definite, degree, delay, delete, deliberate, deliver, demand, democracy, demonstrate, dense, deny, depend, depict, deploy, deposit, deprive, derive, describe, desert, deserve, design, desire, despair, destroy,detail, detect, detonate, develop, devote, diagnose, diagonal, dictate, die, differ, differentiate, difficult, dig, digest, dignity, dilemma, dimension, diminish, dip, direct, disaster, discipline, disclose, discount, discourage, discover, discredit, discriminate, discuss, disease, disgust, dish, dismiss, disorder, display, dispose, dissolve, distance, distinguish, distribute, district, disturb, dive, divide, divorce, do, doctor, document, dog, dollar, donate, door, double, doubt, down, dozen, draft, drag, drain, drama, draw, dream, dress, drink, drive, drop, drought, drug, drum, dry, due, dull, dumb, dump, during, dust, dutyE:each, eager, ear, early, earn, ease, east, easy, eat, ecology, economic, edge, edit, educate, effect, efficient, effort, egg, either, elaborate, election, electric, elegant, element, elevate, eliminate, elite, else, embarrass, embassy, embryo, emerge, emergency, emit, emotion, emphasize, employ, empower, empty, enable, encounter, encourage, end, endure, enforce, engage, engineer, enhance, enjoy, enlist, enormous, enough, enter, entertain, entire, enthusiasm, entrance, envelope, environment, envy, epidemic, episode, equal, equip, equivalent, era, erode, error, escape, establish, estate, estimate, ethnic, evacuate, evaluate, even, event, eventually, ever, every, evidence, evil, evolution, exact, examine, example, exceed, excellent, exception, excess, exchange, exciting, exclude, excuse, execute, exercise, Exhaust, exhibit, exist, exit, expand, expect, experience, experiment, expert, explicit, explode, explore, export, exposure, express, extend, extensive, external, extract, extreme, eyeF:face, factor, fade, fail, faint, fair, faith, fall, false, fame, familiar, family, famous, fan, fancy, fantastic, far, farm, fashion, fast, fat, fate, father, fault, favor, fear, feature, federal, fee, feed, feel, female, fence, festival, fetch, fever, few, fiber, fiction, field, fifteen, fifth, fifty, fight, figure, file, fill, film, final, finance, find, fine, finger, finish, fire, firm, first, fish, fit, five, fix, flag, flame, flare, flat, flavor, flee, flood, floor, flow, flower, fluctuate, fly, focus, fold, follow, fond, food, foot, for, force, forecast, foreign, forest, forget, forgive, fork, form, formal, former, forth, fortunate, forward, foster, foul, found, four, fox, fraction, fragile, frame, free, freedom, frequent, fresh, friend, frighten, from, front, fruit, frustrate, fuel, full, fun, function, fund, funeral, funny, furnish, further, futureG:gain, galaxy, game, gang, gap, garage, garden, gas, gather, gear, general, generate, generation, generous, genius, gentle, genuine, geography, gesture, get, giant, gift, girl, give, glad, glance, glass, global, glory, glove, go, goal, god, gold, good, goods, govern, government, grace, graduate, grain, grand, grant, grass, grateful, grave, great, greedy, green, greet, grey, grief, grocery, ground, group, grow, growth, guarantee, guard, guess, guest, guide, guiltyH:habit, habitat, hair, half, hall, halt, hammer, hand, handle, hang, happen, harbor, hard, hardware, harm, harvest, hat, hate, have, hazard, he, head, heal, health, hear, heart, heat, heaven, heavy, heel, height, hell, help, hence, her, here, heritage, hero, hesitate, hide, high, highlight, highway, hill, him, himself, hip, hire, his, historical, hit, hold, hole, holiday, holy, home, honest, honor, hope, horizon, horse, hospital, host, hot, hotel, hour, house, how, however, hug, huge, human, humble, humor, hundred, hunger, hunt, hurry, hurt, husband, hybridI:ice, idea, identify, idle, if, ignore, ill, illegal, illuminate, illustrate, imagination, imagine,immediate, immigrant, immune, impact, imperfection, implement, importance, impose, impress, improvement, in, inch, incident, include, income, increase, independent, index, indicate, individual, industry, inevitable, infect, inferior, infest, influence, inform, infra-red, ingredient, inherit, initiate, inject, injure, inner, innocent, innovate, inquiry, insect, inside, insist, inspire, install, instant, instead, insufficient, intend, interaction, interest, interim, internal, international, interrupt, interview, interval, intervene, intimate, introduce, inure, invest, investigate, invite, involve, iron, island, issue, it, itemJ:jacket, jail, jam, jaw, jealous, jeopardy, jerk, jet, jewelry, job, join, joking, journey, joy, judge, jug, juice, jump, jungle, justK:keep, key, kick, kid, kill, kilogram, kilometer, kind, king, kiss, kit, kitchen, kneel, knife, knock, know, knowledgeL:labor, laboratory, lack, lady, lake, lamp, land, language, large, last, late, laugh, launch, law, lay, layer, laziness, lead, leader, leaf, leak, lean, leap, learn, least, leave, lecture, left, legal, legend, lemon, lend, length, less, lesson, let, letter, level, liar, liberal, liberty, library, license, life, lift, light, like, likely, limit, line, link, lion, liquid, list, listen, little, live, lizard, load, loan, local, locate, lock, long, look, loose, lose, loud, love, low, loyal, luck, luggage, lump, lunch, luxuryM:machine, mad, magazine, magic, magnet, mail, main, make, man, manage, mandolin, manipulate, manner, many, map, maple, march, mark, market, marriage, marry, mask, Mass, master, match, material, math, matter, maximum, may, maybe, me, meal, mean, measure, meat, mechanism, media, medical, medication, medicine, meet, melt, member, memo, memory, men, mend, mental, merchant, message, metal, meter, middle, midnight, might, mile, milk, million, mind, mine, mineral, minimum, minister, minor, minute, miracle, mirror, misbehave, miss, mistake, mix, model, moderate, modern, modest, modifies, moment, momentum, money, monkey, month, moon, moral, more, morning, most, mother, motion, mountain, mouth, move, much, mud, multiply, murder, muscle, museum, music, must, mute, mysteryN:nail, name, narrow, nation, native, natural, Navy, near, necessary, neck, needle, negative, negotiate, neighbor, neither, nephew, nerve, nest, net, network, never, new, news, next, nice, night, nine, no, noble, nobody, nod, noise, nominal, non, none, noodle, noon, nor, north, nose, not, note, nothing, notice, notion, now, nowhere, nuclear, null, number, nurse, nutO:oak, object, occupy, occur, ocean, odd, of, off, offer, office, often, oil, OK, old, on, once, one, only, open, operate, opinion, opportunity, oppose, opposite, option, or, orange, orbit, order, ordinary, organize, original, other, oust, out, outlaw, outlet, outline, output, outside, oven, over, overall, overcome, overlook, owe, own, oxygenP:pace, pack, package, page, pain, paint, pair, palace, pale, palm, panel, panic, paper, paradise, pardon, parent, park, part, participate, particular, party, pass, passage, past, pasta, patent, path, patient, pattern, pause, pay, peace, peak, pear, peasant, pedestrian, peel, peer, penalty, people, percent, perfect, perform, perhaps, period, permanent, permit, person, persuade, pet, phase,philosophy, photo, phrase, physical, piano, pick, picture, piece, pig, pile, pill, pilot, pin, pinch, pink, pipe, pitch, place, plain, plan, planet, plant, plastic, plate, play, pleasant, please, pleasure, plenty, plot, plug, plus, pocket, poem, point, poison, pole, police, polish, polite, poll, pollutant, pollution, pool, poor, pop, popular, population, port, pose, position, positive, possess, possible, post, postpone, pot, potato, potential, pour, poverty, powder, power, practical, practice, prairie, praise, pray, preface, prefer, prepare, present, preserve, president, press, pressure, pretend, prevent, previous, price, pride, primary, prime, principal, print, prior, prison, private, probable, problem, procedure, proceed, process, produce, product, profession, profit, program, progress, prohibit, project, promise, promote, prompt, pronounce, proof, proper, property, proposal, propose, protect, protest, Proud, provide, provoke, publish, pull, pulse, pump, punish, pupil, purchase, pure, purpose, pursue, push, putQ:qualify, quality, quantity, quarter, question, quick, quiet, quit, quite, quizR:race, rack, radio, raft, rage, rain, raise, rally, random, range, rank, rap, rapid, rare, rate, rather, reach, react, read, ready, real, reality, realize, reason, rebel, rebuild, recall, receive, recent, reception, recipe, record, recover, recruit, red, reduce, reflect, reform, refuse, regard, region, register, regret, reinforce, reject, relate, release, relevant, reliable, relief, religion, rely, remain, remarkable, remember, remind, remove, rent, repair, repeat, replace, reply, report, represent, reproduce, request, require, rescue, research, resemble, reserve, reside, resign, resist, resolve, resort, respect, respond, responsibility, rest, restore, restrain, restrict, result, retrace, retrieve, return, reveal, revenge, revenue, review, revise, revolution, reward, rhythm, rich, ride, ridicule, right, rigid, ring, riot, rise, risk, river, road, roar, robot, rock, rocket, rod, roll, roof, room, root, rope, rose, rough, round, route, row, royal, rubber, ruin, rule, run, rural, rush, rust, ruthlessS:sack, sacred, sacrifice, sad, safe, sail, salad, sale, salt, same, sample, sanction, sand, satisfy, sauce, save, say, scale, scan, scare, scatter, scene, schedule, scheme, school, science, scope, score, scrap, scratch, screen, script, sculpture, sea, search, season, seat, second, secret, section, secure, see, seek, seem, select, sell, senate, send, senior, sense, sensitive, separate, series, serve, service, session, set, settle, seven, several, severe, sex, shack, shade, shadow, shake, shall, share, sharp, shave, she, sheep, sheet, shelf, shell, shelter, shift, shine, ship, shirt, shoe, shoot, shop, short, should, shout, show, shower, shred, shrink, shrug, shy, sibling, sick, side, siege, sight, sign, signal, significant, silent, silk, silly, silver, similar, simple, since, sing, single, sink, sir, sister, sit, site, situation, six, size, skate, ski, skill, skin, skirt, sky, sleep, slice, slide, slight, slip, slope, slow, small, smart, smash, smell, smile, smoke, smooth, snake, snow, so, soap, social, society, sock, soft, soil, solar, soldier, solid, solution, somber, some, son, song, soon, sort, soul, sound, source, south, space, spare, speak, special, specific, speech, speed, spell, spend, spill, spirit, split, spoil, sponsor, spoon, sport, spot, spray, spread, spring, spy, square, stable, staff, stage, stake, standard, star, start, state, statement, station, statistics, status, stay, steady, steal, steam, steel, steep, steer, step, stick, still, sting, stipulate, stock, stomach, stone, stop, store, storm, story, stout, straight, strange, strategy, street, strength, stress, stretch, strike, string, strip, stroke, strong, struggle, student, study, stuff, stupid, subject, submit, subside, subsequent, substance, subtle, subtract, succeed, success, such, suffer, sugar, suggest, suit, suite, summer, summit, sun, super, supermarket, supervise, supply, support, suppose, supreme, sure, surface, surge, surprise, surrender, surround,survey, suspect, suspend, sustain, swallow, swear, sweat, sweep, sweet, swim, swing, switch, symbol, sympathetic, systemT:table, taboo, tackle, tag, tail, take, talent, talk, tall, tank, tape, target, task, taste, tax, teach, team, tear, technical, technique, technology, tee, teen, telephone, telescope, tell, temperature, temporary, ten, tend, tense, term, terrible, test, text, than, thank, that, the, theater, their, them, theme, themselves, then, theory, there, therefore, these, they, thick, thief, thin, thing, think, third, thirst, this, those, though, thought, thousand, thread, threaten, three, threshold, thrive, through, throw, thus, ticket, tie, tight, tile, time, tin, tip, tire, title, toast, today, toe, together, toilet, tolerance, tomato, tomorrow, tone, tongue, tonight, too, tool, tooth, top, topic, torch, torn, torture, total, touch, tough, tour, tourist, toward, tower, town,。

moveit motion planning的实现

moveit motion planning的实现

moveit motion planning的实现The MoveIt Motion Planning Framework is a robust and flexible solution for robot motion planning tasks. It leverages the capabilities of the Robot Operating System (ROS) to provide a comprehensive set of tools for generating collision-free paths for robotic arms and other mobile robots.MoveIt Motion Planning Framework是一个强大且灵活的机器人运动规划任务解决方案。

它利用机器人操作系统(ROS)的功能,提供了一套全面的工具,用于为机械臂和其他移动机器人生成无碰撞路径。

The core of MoveIt lies in its ability to create and manipulate a robot's kinematic model. This model represents the robot's physical structure and its relationships between different joints and links. MoveIt uses this model to understand the robot's workspace and potential collisions with obstacles.MoveIt的核心在于其创建和操纵机器人运动学模型的能力。

该模型代表了机器人的物理结构以及不同关节和连杆之间的关系。

MoveIt利用此模型来理解机器人的工作空间和潜在的障碍物碰撞。

Prioritized Motion Planning for Multiple Robots英文翻译2

Prioritized Motion Planning for Multiple Robots英文翻译2

对于多机器人运动规划的优先级马克・范・里冰荷兰信息和计算机科学大学摘要:本文我们解决这个问题的规划为多发性的机器人。

介绍了优先方法基于一种有力的方法,对运动规划在动态的环境,是最近发展起来的。

我们的作法是一般适用:没有限制在这个号码的自由度的机器人的,各种类型和机器人的——例如free-flying 机器人与表示机器人可以同时使用。

结果表明,优质的路径可以在不到第二,计算时间,甚至在封闭的环境涉及到许多机器人。

我们考虑了三个问题本文的优先任务:机器人,优先级的规划与性能的协调规划的影响,以及由机器人的程度运动是约束的业绩的方法。

结果发表在运行时间和生产质量的途径。

Terms-motion 规划、多机器人的指数,优先。

1.介绍本文将对运动规划问题多机器人,这种机器人是一个重要的课题。

这个任务是计划的机器人轨迹法每个机器人从开始到一些进球。

配置没有相互碰撞和碰撞的配置静态障碍物。

这一问题已被广泛地研究。

大多数的研究重点放在协调方法。

他们常常分类沿谱之间集中、解耦的规划[12]。

集中计算路径规划的复合结构空间,这是由笛卡尔乘积的配置空间的个体机器人(15]、[16]。

在一个解耦方法计算的路径是每个机器人独立,并协调图是用来计划每个机器人轨迹的无碰撞沿着它的路径[12]、[14]、[m].北京:17]。

这个方法的约束这个运动的机器人, 然后再考虑之间的相互作用机器人可以分为几类之间的中间光谱。

他们通常使用方法对于每一种机器人的结构都空间好[8],[12],[]。

沿着谱中的各种方法交易完整的速度和适用范围。

集中ap -针对完备,但在总体上要求,或仅适用于简单的机器人操作简单的环境。

解耦方法都适用对任何类型的机器人的路径,但他们计算是可能的远离最优。

方法使用一个路标这两个极端之间的一种妥协。

图1。

一个环境与四铰链机器人操纵了一辆汽车一个较少方法多运动机器人,这种机器人却常常用于实践优先计划[2]、[5],[6]。

Temporal logic motion planning for mobile robots

Temporal logic motion planning for mobile robots

Temporal Logic Motion Planning for Mobile Robots∗Georgios E.Fainekos,Hadas Kress-Gazit and George J.PappasGRASP Laboratory,Departments of ESE and CISUniversity of PennsylvaniaPhiladelphia,PA19104,USA{fainekos,hadaskg,pappasg}@Abstract—In this paper,we consider the problem of robot motion planning in order to satisfy formulas expressible in temporal logics.Temporal logics naturally express traditional robot specifications such as reaching a goal or avoiding an obstacle,but also more sophisticated specifications such as sequencing,coverage,or temporal ordering of different tasks.In order to provide computational solutions to this problem,wefirst construct discrete abstractions of robot motion based on some environmental decomposition.We then generate discrete plans satisfying the temporal logic formula using powerful model checking tools,andfinally translate the discrete plans to continuous trajectories using hybrid control. Critical to our approach is providing formal guarantees ensuring that if the discrete plan satisfies the temporal logic formula,then the continuous motion also satisfies the exact same formula.Index Terms—Motion planning,temporal logics,model check-ing,discrete abstractions,hybrid control.I.I NTRODUCTIONRobot motion planning problem has historically focused on generating trajectories which reach a goal configuration while avoiding obstacles[1],[2].Mathematically formu-lating specifications such as motion sequencing,synchro-nization,or temporal ordering of different motions present new challenges for motion planning,as they require not only novel formulations,but also powerful computational approaches due to the inherent problem complexity. Formally defining such specifications can be achieved using temporal logics,such as linear temporal logic(LTL)and computation tree logic(CTL),developed in concurrency theory.The applicability of temporal logics in robotics was advocated as far back as[3].Over the years,the for-mal methods community has developed very sophisticated model checking tools such as S PIN[4]and N U S MV[5], which verify whether a discrete transition system satisfies a temporal logic formula.More recently,model checking approaches have been used for discrete planning in order to satisfy temporal logic specifications.This research has led to planning algorithms and tools such as M BP[6], TL PLAN[7]and U MOP[8].These tools generate high-level,discrete plans that do not take into consideration the dynamic model of the robot,resulting in potentially infeasible plans.∗This work is partially supported by NSF EHS0311123,NSF ITR 0324977,and ARO MURI DAAD19-02-01-0383.This paper addresses the novel problem of generating continuous trajectories for mobile robots while satisfying formulas in temporal logic.Our approachfirst lifts the problem to the discrete level by partitioning the environ-ment into afinite number of equivalence classes.A variety of partitions are applicable,in particular the cellular de-composition in[9]or the triangular decomposition in[10]. The partition results in a natural discrete abstraction for robot motion which is used then for planning using model checking tools,in particular S PIN and N U S MV.In order to ensure that the discrete plan is feasible at the continuous level,the decomposition must satisfy the so-called bisimulation property[11].Bisimulations allow us to prove that if the abstract,discrete robot model satisfies the LTL formula,then the continuous robot model also satisfies the same formula.To ensure this critical property we utilize the hybrid control framework of[10],even though the framework of[9]is equally applicable but computationally more demanding.Related work can be found in the hybrid systems commu-nity,and in particular the recent work of[12]which focuses on designing controllers for discrete-time control systems in order to satisfy temporal logic specifications.In[13], controllers are designed for satisfying LTL formulas by composing controllers using navigation functions[14].In [15],the U PPAAL model checking tool for timed automata has been used for multi-robot motion planning using CTL formulas,but without taking into account the dynamics of the robots.This paper differentiates itself from all previous approaches by building upon the framework proposed in[10]which has,comparatively,the best computational properties,is fully automated,and is ideally suited for interfacing with model checking tools.In addition to addressing this novel problem,we believe that this direction of research is important for at least three reasons.First,this work formally connects high-level planning with low-level control,resulting in a mathemati-cally precise interface between discrete AI planning and continuous motion planning.Second,the mapping from temporal logic to physical motion is thefirst important step in the mapping from natural language to physical motion in a compositional manner.Finally,this work can be extended to multi-agent environments where formal specifications and computational solutions will result in verified coordination logic for cooperating robots.II.P ROBLEM F ORMULATIONWe consider a fully actuated,planar model of robot motion operating in a polygonal environment P.The motion of the robot is expressed as˙x(t)=u(t)x(t)∈P⊆R2u(t)∈U⊆R2(1) where x(t)is the position of the robot at time t,and u(t) is the control input.The goal of this paper is to construct a control input u(t)for system(1)so that the resulting trajectory x(t)satisfies a formula in a temporal logic,such as the temporal logic LTL[16].The formulas are built from afinite number of atomic propositions or observables which label areas of interest in the environment such as rooms or obstacles.LetΠ={π1,π2,...πn}be a set of such propositions.For system(1)we then associate an observation maph C:P→Π(2) which maps the continuous states of the robot to thefinite set of propositions.1Propositionπi∈Πrepresents an area of interest in the environment which can be characterized by a convex set of the form:P i={x∈R2| 1≤k≤m a T k x+b k≤0,a k∈R2,b k∈R}In other words,the observation map h C:P−→Πhas the form h C(x)=πi iff x belongs in the associated set P i. Wefirst give some informal examples of LTL formulas and defer the formal syntax and semantics of LTL to Section III.Propositional logic is the traditional logic of conjunction(∧),disjunction(∨),negation(¬),implication (⇒),and equivalence(⇔).LTL is obtained from standard propositional logic by adding temporal operators such as eventually(3),always(2),next( )and until(U).Some LTL examples that express interesting properties include:•Reach goal while avoiding obstacles:The formula ¬(o1∨o2∨···∨o n)Uπexpresses the property that eventuallyπwill be true,and untilπis reached,we must avoid all obstacles labeled as o i,i=1,...,n.•Sequencing:The requirement that we mustfirst visit π1,π2,andπ3in this order is naturally captured by the formula3(π1∧3(π2∧3π3)).•Coverage:Formula3π1∧3π2∧···∧3πm reads as the robot will eventually reachπ1and eventuallyπ2 and...eventuallyπm,requiring the robot to eventually visit all regions of interest in any order.More complicated specifications can be composed from more basic specifications using the logic operators.For such temporal logic formulas,in this paper we provide computational solution of the following problem.1Uninteresting regions of the state space could be mapped to a dummy proposition or no proposition(resulting in a partial observation map). Furthermore,one can easily consider overlapping propositions resulting in non-deterministic observation maps.Problem1:[Temporal logic motion planning]Given robot model(1),observation map(2),initial condition x(0)∈P, and a LTL temporal logic formulaϕ,construct a control input u(t)so that the resulting robot trajectory x(t)satisfies the formula.Example1:In order to better explain the different steps in this paper,we will consider throughout the paper the following example.Consider a robot that is moving in a square environment with four areas of interest denoted byπ1,π2,π3,π4.Initially,the robot is placed somewhere in the region labeledπ1(see Figure1).The desired specification for the robot given in natural language is:“Visit areaπ2then areaπ3then areaπ4and,finally,return to regionπ1while avoiding areasπ2andπ3”.III.L INEAR T EMPORAL L OGICIn this section,we formally describe linear temporal logic (LTL)by giving its syntax and semantics.Syntax:LTL formulas are interpreted over all trajectories of the system starting from some initial state x(0)[16]. The atomic propositions of the logic are labels representing areas of interest in the environment such as rooms or obsta-cles.LetΠ={π1,π2,...}be a set of such propositions. The LTL formulas are defined according to the following grammar:φ::=π|¬φ|φ∨φ|φUφAs usual,the Boolean constants and⊥are defined as =π∨¬πand⊥=¬ respectively.Given negation(¬)and disjunction(∨),we can define conjunction (∧),implication(⇒),and equivalence(⇔).Furthermore, we can also derive additional temporal operators such as eventuality3φ= Uφand safety2φ=¬3¬φ.Note that our syntax does not contain the so-called next operator φ. Semantics:We define the continuous semantics of LTL formulas over robot trajectories.Let x(t)for t≥0denote the state of the robot at time t and let x[t]be a possible robot trajectory starting at x(t).That is x[t]={x(s)|s≥t and˙x(t)=u(t)}or x[t]denotes theflow of x(s)under the input u(s)for s≥t.Fig.1.Example1.The4areas of interest and the initial position of the robot marked with x.LTL formulasφare interpreted over a trajectory x[t]. x[t]|=Cφdenotes the satisfaction of the formulaφover the trajectory x[t]starting at x(t).The semantics of any formula can be recursively defined as:•x[t]|=Cπiff h C(x(t))=π•x[t]|=C¬φif x[t]|=Cφ•x[t]|=Cφ1∨φ2if x[t]|=Cφ1or x[t]|=Cφ2•x[t]|=Cφ1Uφ2if there exists s≥t such that x[s]|=C φ2and for all s with t≤s <s we have x[s ]|=Cφ1Therefore,the formulaφ1Uφ2intuitively expresses the property that over the trajectory x[t]φ1is true until φ2becomes true.Formula3φindicates that over the trajectory the formulaφbecomes eventually true,whereas 2φindicates thatφis true over the trajectory x[t]for all time t ≥t.Example2:Coming back to Example(1),we can now formally define the specification using temporal logic for-mulas.Letπi be the proposition that is true when the robot is in area ing LTL the precise specification is:φ=3(π2∧3(π3∧3(π4∧(¬π2∧¬π3)Uπ1)))IV.T EMPORAL L OGIC M OTION P LANNINGOur solution to generating continuous robot trajectories satisfying LTL formulasφconsists of the following three steps:1)Discrete Abstraction of Robot Motion:Decomposethe environment P into afinite number of equiva-lence classes resulting in afinite state model of robotmotion.2)Temporal Logic Planning using Model Checking:Construct plans for the discrete robot motion satis-fying desired specifications using model checkers.3)Continuous Implementation of Discrete Plan:Imple-ment the discrete plan at the continuous level whilepreserving the satisfaction of the temporal formula.A.Discrete Abstraction of Robot MotionWefirst partition the workspace P of the robot into a finite number of equivalence classes(or cells).Clearly, we can use many efficient cell decomposition methods for polygonal environments[2].In this paper,we chose to triangulate P for two main reasons.First,there exist several efficient triangulation algorithms which can partition very complicated environments[17].Second,the choice of controllers used in Section IV-C is proven to exist and be efficiently computable on triangles[10].Despite this choice,many of the results in this section can be easily adapted to similar decompositions,such as the decompo-sition described in[9].Let T:P−→Q denote the map which sends each state x∈P to thefinite set Q={q1,...,q n}of all equivalence classes(triangles in this paper).In other words,T−1(q)Fig.2.The triangulation of the workspace of Example1appears with solid lines.The edges of the dual graph appear with dashes.The numbers denote the nodes of the undirected graph.contains all states x∈P which are contained in the triangle labeled by q and{T−1(q i)|q i∈Q}is a partition of the state space.Given such a partition of P,we can naturally abstract the robot motion by defining afinite transition systemD=(Q,q(0),→D,h D)(3) where Q is thefinite set of states,and q(0)∈Q is the cell containing the initial robot state x(0)∈P,that is q(0)=T(x(0)).The dynamics are captured by the transition relation→D⊆Q×Q,defined as q i→D q j iff the cells labeled by q i,q j are topologically adjacent, that is triangles T−1(q i)and T−1(q j)have a common line segment.The transition relation→D is also known as the dual graph of the triangulation and can be easily computed.Having defined transitions→D for transition system D,we can define trajectories p of D as sequences of the form p[i]=p i→D p i+1→D p i+2→D...,where p i=p(i)∈Q.In addition to defining the transition relation,we also define the observation map h D:Q−→Π,as h D(q)=π, if there exists x∈T−1(q)such that h C(x)=π.In order to ensure that h D is well defined,we must impose the requirement that the decomposition is proposition or observation preserving,that is for all x1,x2∈P and all π∈Π,T(x1)=T(x2)⇒h C(x1)=h C(x2).In other words,states that belong in the same equivalence class or cell,map to the same observations.Example3:Revisiting Example1,we can now triangulate the environment(see[18]for the algorithm used)and construct the dual graph of the triangulation(Figure2).The resulting undirected graph has34states and49transitions. The transition system D will serve as an abstract model of robot motion.We must now lift our problem formulation from the continuous to the discrete domain.In the previous section we defined the semantics of LTL formulas over continuous trajectories.We keep the LTL syntax exactlythe same,but we reformulate the semantics of the temporal logic formula to be interpreted over the discrete trajectories generated by transition system D.Discrete LTL Semantics:Path formulasφare interpreted over an execution p[i],denoted as p[i]|=Dφ.The seman-tics of any path formula can be recursively defined as:•p[i]|=Dπiff h D(p(i))=π•p[i]|=D¬φif p[i]|=Dφ•p[i]|=Dφ1∨φ2if p[i]|=Dφ1or p[i]|=Dφ2•p[i]|=Dφ1Uφ2if there exists j≥i s.t.p[j]|=Dφ2, and for all j with i≤j <j we have p[j ]|=Dφ1We are interested in understanding the relationship between the continuous robot model satisfying formula x[0]|=Cφwith continuous LTL semantics and the transition system D satisfying formula p[0]|=Dφ,where p(0)=T(x(0)), but with the discrete LTL semantics.B.Temporal Logic Planning using Model CheckingIn a nutshell,model checking is the algorithmic procedure for testing whether a specification formula holds over some semantic model[19].The model of the system is usually given in the form of a discrete transition system like the one described in Section IV-A.The specification formula is usually given in the form of temporal logics such as LTL. As mentioned earlier,we are looking for computation paths p[i]that satisfy the temporal formula p[0]|=Dφ. In the model checking community,this is known as the generation of witnesses.Unfortunately,the current versions of the model checking software tools do not support the construction of witnesses as they are mainly analysis tools.Hence,we have to employ the algorithms that solve the dual problem,i.e.the generation of counterexamples. In this case,when the model checker determines that a formulaφis false,it constructs afinite trace p[0]which demonstrates that the negation ofφis true,i.e.p[0]|=D¬φ. Letφbe the formula that the system should satisfy.Assume now that we give as input to our model checking algorithm the LTL formula¬φ,representing the negation of the desired behavior.If the formula is false in our discrete model of the environment,then the model checker will return afinite trace p[0]that satisfies the formula¬(¬φ)≡φand,thus,we are done as we have found afinite path that satisfies the original LTL formulaφ.Out of the variety of model checking tools that have been developed over the years,we chose the most dominant ones,that is,N U S MV[5]which is based on symbolic model checking techniques and is mainly targeted for CTL (but it can also handle LTL)model checking problems, and S PIN[4]which uses an automaton approach to the model checking problem and accepts only LTL formu-las.Both toolboxes support hierarchy and composition, multiple agents,generation of counterexamples in case the temporal formula is invalidated and nondeterministic environments.Of course,there are also several differences between the two toolboxes mainly concerning the way they deal with the model checking problem,the user interface and the expressive power of the underlying logic.S PIN only supports asynchronous communication among agents,but it gives us the option for the generation of traces that are optimal in the sense of minimum number of transitions (trace length).The conversion of the discrete transition system of Section IV-A to the input language of N U S MV or to the input language of SPIN is straightforward and it is automated.Example4:Using N U S MV for our example,we get the following witness trace p={33,34,24,25,27,16,15,14, 3,4,5,32,23,26,29,30,3,14,33},which satisfies our specification.C.Continuous Implementation of Discrete Trajectory Our next task is to utilize the discrete trajectory p[0] in order to construct a control input u(t)for t≥0 and,therefore,a continuous trajectory x[0]that satisfies exactly the same path formula.We achieve this desired goal by simulating(or implementing)at the continuous level each discrete transition of p[0].This means that if the discrete system D makes a transition p i→D p j,then the continuous system must match this discrete step by moving the robot from states in triangle T−1(p i)to states in triangle T−1(p j).We define a transition relation→C⊂P×P between continuous robot states in P.Formally,there is a transition x→C x if x and x belong to adjacent triangles,and it is possible to construct a trajectory x(t)for0≤t≤T with x(0)=x and x(T)=x ,and,furthermore,for all0≤t≤T we have x(t)∈(T−1(T(x))∪T−1(T(x ))).Informally, x→C x if we can steer the robot from x to x without visiting any triangle other than the triangle containing x or the neighboring triangle containing x .Having defined →C allows us to formally define a transition system C= (P,x(0),→C,h C).In order to ensure that the continuous system can imple-ment any discrete plan obtained by the model checker,we require that the decomposition of P satisfies the so called bisimulation property[11].Definition1(Bisimulations):A partition T:P−→Q is called a bisimulation if the following properties hold for all x,y∈P:•(Observation preserving)If T(x)=T(y),thenh C(x)=h C(y)•(Reachability preserving)If T(x)=T(y),then if x→C x then y→C y for some y with T(x )= T(y ).In other words,the triangulation is a bisimulation if the whole triangle is mapped to the same observation,and furthermore,if one state x can move to the adjacent triangleFig.3.Example1:Continuous trajectory implementationto some state x ,then all states y in the same triangle with x can also move to the same triangle with x . Assuming that this property is satisfied by the partition with respect to transitions we just defined,it is straightforwardto show the following proposition.Proposition1:Letφbe an LTL path formula,and let T: P−→Q be a bisimulation.If p[0]|=Dφ,then for every x(0)∈T−1(p(0))there exists a trajectory x[0]satisfying x[0]|=Cφ.It remains to design controllers that satisfy the so-called bisimulation property.There are several recent approaches for generating such controllers,such as[9],[10]and[20]. We use the framework developed in[10]due to its com-putational properties in triangular environments.In this approach,an affine vectorfield is created in each triangle that drives the robot to the desired adjacent triangle,while taking into consideration any velocity bounds the robot might have.For a description of this controller design,we refer the reader to[10].Note however,that by satisfying the bisimulation property using feedback controllers,the temporal logic formula is robustly satisfied not only by the initial state x(0),but also by all other states in the same triangle.Furthermore, the design of the controllers in[10]can guarantee the continuity of the vectorfields at the common edges of adjacent triangles.Example5:Figure3shows the continuous trajectory cor-responding to our example,which was created using the triangulation from Example3and the discrete path gener-ated by N U S MV in example4.V.SIMULATIONSIn order to test our approach to the problem of motion plan-ning,we ran several simulations.We started with simple environments and continued by increasing the complexity of both the environment and the specification in order to make sure our approach scales well.In this section,we describe the process of creating a solution to themotionFig.4.Example6:Visit all the roomsplanning problem,and we show examples of non-trivial behaviors in complex environments,which may include holes and regions of interest.Thefirst step consists of specifying the environment.The environment is described as a set of vertices which define the outer contour,inner holes and inner regions of interest (such as rooms).We specify these vertices either by using a M ATLAB based graphical user interface which allows the user to select points on a grid or by writing M ATLAB functions that create vertices in a desired pattern.Next,we triangulate the polygonal environment using the software developed in[18]and we create the input code for the model checker which we augment with the temporal logic formula.The required path is generated as a counter example trace using a model checker.Thefinal step is to create the control law u(t)for t≥0and to simulate the robot path.This step is performed in M ATLAB and the control law is generated according to the method developed in[10]using linear programming.Example6:Figure4is an example of a trajectory,gener-ated by N U S MV,satisfying a coverage requirement.In this example the desired behavior was to visit each of the rooms (shaded areas)in no particular order.The LTL formula that captures the specification is:3r1∧3r2∧3r3∧3r4∧3r5∧3r6.For problems of this size,the generation of the discrete path is almost instant and the controller synthesis in MATLAB takes less then15seconds.Example7:This is an example of a trajectory satisfying a more complex requirement.In this example the desired behavior is“Visit room r2,then room r1and then cover rooms r3,r4,r5-all this while avoiding obstacles o1,o2, o3”.Figure5depicts the path generated by S PIN. Example8:Figure6is an example of a very large envi-ronment.This environment includes1156areas of interest (rooms)and its discrete abstraction consists of9250tri-angles.The specification for this example was“Start in the white room and go to both black rooms”.Even though this environment is very large,the computation time was a few seconds for the triangulation,about55seconds for the path generation in N U S MV and around90seconds for theFig.5.Example 7:While avoiding the obstacles go to room 2,then to room 1and then go to rooms 3,4,5(in anyorder)Fig.6.Example 8:Complex environment -Visit the two square areas in black colorcontroller synthesis of a path of 145triangles in M ATLAB .VI.C ONCLUSIONS -F UTUREWORKIn this paper,we have described our approach to the problem of motion planning,which begins at a high level of behavior specification,expressed in temporal logic,and ends in creating continuous control inputs for the robot that satisfy those requirements.We have shown that this approach is computationally feasible,that complex envi-ronments can be handled easily and that many complex robot behaviors can be expressed and satisfied.We find this approach to be very promising and there are several directions in which we are planning to proceed,such as,extending this framework to multiple robots,incorporating natural language as a higher level specifica-tion (which will be automatically translated into temporal logic),and looking at different cell decomposition tech-niques.Currently,we are investigating the extension of the pre-sented approach to the design of hybrid controllers that would guarantee the satisfaction of a path formula φin the presence of localization and actuation errors,in the presence of observable predicates (sensory input)and undera set R of initial conditions (i.e.∀x (0)∈R ⇒x [0]|=C φ).Furthermore,we plan to run experiments testing our approach using A CTIV M EDIA mobile robots as a testbed.A CKNOWLEDGMENTSThe authors would like to thank the anonymous reviewers for their comments.R EFERENCES[1]Valle,”Planning Algorithms”,[Online,Available at/planning/],2004[2]H.Choset,K.M.Lynch,L.Kavraki,W.Burgard,S.A.Hutchinson,G.Kantor,and S.Thrun.Robotic Motion Planning:Foundations and Implementation .2004.In preparation.[3]M.Antoniotti and B.Mishra,”Discrete Event Models +Temporallogic =Supervisory Controller:Automatic Synthesis of Locomotion Controllers”,IEEE International Conference on Robotics and Au-tomation ,1995.[4]G.J.Holzmann,”The Spin Model Checker Primer and ReferenceManual”,Addison-Wesley,Reading Massachusetts,2004.[5] A.Cimatti,E.M.Clarke,E.Giunchiglia,F.Giunchiglia,M.Pis-tore,M.Roveri,R.Sebastiani and A.Tacchella,”NuSMV 2:An OpenSource Tool for Symbolic Model Checking”,In Proceeding of International Conference on Computer-Aided Verification (CA V 2002),Copenhagen,Denmark,July 27-31,2002.[6]P.Bertoli,A.Cimatti,M.Pistore,M.Roveri,and P.Traverso,”MBP:A Model Based Planner”,In Proc.IJCAI’01Workshop on Planning under Uncertainty and Incomplete Information ,2001.[7] F.Bacchus and F.Kabanza,”Using Temporal Logics to ExpressSearch Control Knowledge for Planning”,Artificial Intelligence ,vol 116,2000.[8]R.M.Jensen and M.M.Veloso,”OBDD-based Universal Planningfor Synchronized Agents in Non-Deterministic Domains”,Journal of Artificial Intelligence Research ,2000,V olume 13,189-226.[9] D.C.Conner,A.Rizzi,and H.Choset,”Composition of local potentialfunctions for global robot control and navigation”,Proceedings of 2003IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003),IEEE,V ol.4,October,2003,pp.3546-3551.[10] C.Belta and L.C.G.J.M.Habets,”Constructing decidable hybridsystems with velocity bounds”,43rd IEEE Conference on Decision and Control ,Bahamas,Dec 2004.[11]R.Alur,T.A.Henzinger,fferriere,and G.J.Pappas.”Discreteabstractions of hybrid systems”,Proceedings of the IEEE ,88:971984,2000.[12]P.Tabuada and G.J.Pappas.Model checking LTL over controllablelinear systems is decidable.Hybrid Systems :Computation and Control .volume 2623of Lecture Notes in Computer Science .Springer-Verlag,Prague,2003.[13]S.Loizou and K.Kyriakopoulos,”Automatic Synthesis of Multi-Agent Motion Tasks Based on LTL Specifications”,43rd IEEE Conference on Decision and Control ,Bahamas,Dec 2004.[14] E.Rimon and D.E.Kodischek,”Exact robot navigation usingartificial potential functions”,IEEE Transactions on Robotics and Automation ,8(5):501–518,1992.[15]M.M.Quottrup,T.Bak,and R.Izadi-Zamanabadi,”Multi-RobotPlanning:A Timed Automata Approach”,Proc.2004IEEE Int.Conf.on Robotics and Automation ,New Orleans,LA.[16] A.E.Emerson,”Temporal and Modal Logic”,in:Van Leeuwen (ed)Handbook of Theoretical Computer Science ,V ol.B,pp.997-1072,Elsevier Science Publishers,1990.[17]M.de Berg,M.van Kreveld,M.Overmars and O.Schwarzkopf,”Computational Geometry:Algorithms and Applications”,2nd rev.ed.2000.[18] A.Narkhede,and D.Manocha,”Fast Polygon Tri-angulation based on Seidel’s Algorithm”,[Online at /dm/CODE/GEM/chapter.html#Seidel91].[19] E.M.Clarke,O.Grumberg and D.A.Peled,”Model Checking”,The MIT Press,Cambrige,MA,1999.[20]L.C.G.J.M.Habets and J.H.van Schuppen.A control problem foraffine dynamical systems on a full-dimensional polytope.Automatica ,40:21–35,2004.。

Dynamic Motion Planning for Mobile Robots Using Potential Field Method

Dynamic Motion Planning for Mobile Robots Using Potential Field Method
ENT
The motion planning problem for a mobile robot in a dynamic environment is to plan and control the robot motion from an initial position to track a moving target in a desired manner while avoiding moving obstacles. To simplify the analysis, we have the following assumptions: Assumption 1 The robot is a point mass which can move omni-directionally, whose mass mrob , position p and velocity v are known and its maximum velocity vmax is greater than that of the target. The acceleration of the robot, a, is omnidirectional and its maximum value is amax. Assumption 2 The point target moves at con-
obstacles can be measured accurately. obstacle is close enough to the robot and needs to be avoided. The rest are assumed to be farther away and their in uences are neglected for that instant.

【机械专业英文文献】利用动力系统方法的移动机械手的控制

【机械专业英文文献】利用动力系统方法的移动机械手的控制

2009 IEEE International Conference on Robotics and AutomationKobe International Conference CenterKobe, Japan, May 12-17, 2009Control of Mobile Manipulator using the Dynamical Systems ApproachLars-Peter EllekildeAbstract—The combination of a mobile platform and a manipulator, known as a mobile manipulator, provides a highly flexible system, which can be used in a wide range of applications, especially within the field of service robotics. One of the challenges with mobile manipulators is the construction of control systems, enabling the robot to operate safely in potentially dynamic environments. In this paper we will present work in which a mobile manipulator is controlled using the dynamical systems approach. The method presented is a two level approach in which competitive dynamics are used both for the overall coordination of the mobile platform and the manipulator as well as the lower level fusion of obstacle avoidance and target acquisition behaviors.I. INTRODUCTIONThe majority of robotic research has in the last decades focused on either mobile platforms or manipulators, and there have been many impressive results within both areas. Today one of the new challenges is to combine the two areas, into systems, which are both highly mobile and have the ability to manipulate the environment. Especially within service robotics there will be an increased need for such systems. The demography of most western countries causes the number of old people in need of care to increase, while there will be less working to actually support them. This requires an increased automation of the service sector, for which robots able to operate safely in indoor and dynamic environments are essential.Fig. 1. Platform consisting of a Segway RMP200 and a Kuka Light Weight Robot. The platform used in this work is shown in Figure 1, and consist of a Segway RMP200 with a Kuka Light Weight Robot. The result is a platform that has a relative small footprint and is highly maneuverable, making it well suited for moving around in an indoor environment. The Kuka Light Weight Robot has a fairly long reach and high payload compared to its own weight, making it ideal for mobile manipulation. When controlling a mobile manipulator, there is a choice of whether to consider the system as one or two entities. In [1] and [2] they derive Jacobians for both the mobile platform and the manipulator and combine them into a single control system. The research reported in [3] and [4], on the other hand, considers them as separate entities when planning, but do include constraints, such as reachability and stability, between the two.The control system we propose is based on the dynamical systems approach [5], [6]. It is divided into two levels, where we at the lower level consider the mobile platform and the manipulator as two separate entities, which are then combined in a safe manner at the upper level. The main reesarch objective in this paper is to demonstrate how the dynamical systems approach can be applied to a mobile manipulator and used to coordinate behaviours at various levels of control.The remaining of this paper is organized as follows. The overall architecture isdescribed in Section II, followed by the control of the mobile platform and the manipulator in Sections III and IV. In Section V we will show some experiments before concluding the paper in Section VI. However, first a summary of work related to the dynamical systems approach will be provided in Section I-A.A. Related WorkThe dynamical systems approach [5], [6] provides a framework for controlling a robot through a set of behaviors, such as obstacle avoidance and target acquisition. Each behavioris generated through a set of attractors and repellors of a nonlinear dynamical system. These are combined through simple addition of the vector fields to provide the overall behavior of the system. The dynamical systems approach relates to the more widely used potential field method [7], but has certain advantages. Where the behavior in the potential field method is the result of following gradients of the field, the behavior variables, such as heading direction and velocity, can be controlled directly using the dynamical systems approach.The relative low computational cost associated with the approach, makes it suitable for online control in dynamic environments, and allows it to be implemented even on fairly low-level platforms with limited computational capabilities [8]. The robustness to noisy sensors is shown in [9] and [10] where a combination of infrared sensors and microphones is used for obstacle avoidance and target acquisition. Despite being able to solve various tasks it is only a local method, for task and mission-level planning other methods (see e.g. [11]) should be applied.A drawback of the approach in [5], [6] is the potential creation of spurious attractors when multiple behaviors are combined. To overcome this problem [12] introduces a weighting of the behaviors based on competitive dynamics. The influence of each behavior is controlled using an associated competitive advantage, which together with competitive interactions defined between the behaviors, controls the weights. This approach generalizes to an arbitrary number, n, behaviors, but with a O(n2) worst-case complexity, if competitive interactions between all behaviors areneeded.Real-world indoor experiments using this competitive dynamics approach can be found in [13], [14]. In [13] only the heading direction of the vehicle is used, whereas in [14] both heading direction and velocity are controlled. [15] provides a brief discussion of strategies for the velocity behavior.The dynamical systems approach has not only been used for planar mobile robots, but also for controlling the tool motion of a manipulator [16]. More complex dynamical systems using the Hopf Oscillator for generating limit cycles can also be used. [17] shows how limit cycles with different shapes can be constructed and used for both obstacle avoidance and trajectory generation. [18] uses the Hopf Oscillator to generate a timed trajectory, enabling a manipulator to catch a ball rolling down a table. The dynamical systems approach can not only be used for controlling the tool, but also to control the redundancy of a 7 degrees of freedom manipulator as demonstrated in [19].II. OVERALL ARCHITECTUREThe overall architecture of our system is illustrated in Figure 2. To control the mobile platform, in this case a Segway, two low level behaviors are use: One for target acquisition and one for obstacle avoidance. Using competitive dynamics these are fused together to provide the Mobile behavior, which specifies the desired motion of the mobile platform. Similarly we have target acquisition and obstacle avoidance behaviors for the manipulator fused together based on competitive dynamics, to give the Manipulator Acquisition behavior. When the target is not within reach, the manipulator should retract to a safe configuration, which is the purpose of the Manipulator Retract behavior. The last fusion combines the controls in a safe manner, such that the target acquisition and retract behaviors do not disturb one another and the mobile platform does not start moving towards a new target before the manipulator has been retracted.Fig. 2. Overall architecture of the control systemUsing weights w mobile ,manip acquisition w and manip retract w to represent the influence of theMobile, Manipulator Acquisition and Manipulator Retract behaviors, the control signals mobile u and manip q for the mobile platform and the manipulator are given by()leftright u mobile mobile u u w =(1)manip manip manip manip manip acquisition acquisition retract retract qq q w w =+ (2)Where (left u right u ) are control inputs to the left and right wheels of the platformas described in Section III, manipacquisition qand manip retract q are the manipulator joint velocities asdescribed in Section IV.A. Competitive Dynamics The competitive dynamics approach used is based on [12], but with the additional parameter b T used to control the transition rate as in [14]. The dynamical system used is thus given by''3'2(),b b b b b b b b b T w a w w rb bw w noise≠=--+∑(3)In which b a is the competitive advantage of behavior b and r 'b ,b is the competitive interaction of behavior 'b upon b.1) Mobile: The competitive advantages of the mobile platform should strengthen the behavior when far away from the target and reduce it when the target is reached. This is achieved throught a n h (()m o b i l e m o b i l e m o b i l e a t a r t h r e s h o l d a k d d =-(4)In which mobile a k determines how rapidly the advantage should change, tar d isthe distance to the target and mobile threshold d specifies a minimum distance to the targetrequired before the mobile platform should move.The mobile behavior has no ability to interact and suppress other behaviors, thus its competitive interactions are set to 0.2) Manipulator Acquisition: This behavior should be strengthened when the mobile platform gets close to its target. The competitive advantage will thus be defined astanh(())manip manip manip acuisition a tar threshold a k d d =--(5)The activation distance manip threshold d must be greater than mobile threshold d to make sure thebehavior is activated. This behavior has no direct interaction with the others, thus its interactions are set to 0.3) Manipulator Retract: The retract behavior should be activated opposite the goal behavior, hencemanip manip retract acqisitiona a =- tanh(())manip mobile a tar threshold k d d =-(6)Except for a very small transition time this prevents the manipulators acquisition and retract behaviors from being active at the same time, thus we can set ,0retract acquisition r =. For the interaction between the retract and the mobile behaviors wewish retract to deactivate mobile when the manipulator is far away from its home configuration. The interaction is therefore defined ashom ,1(1tanh(()))2retract r current e q retract acquisition k q q r ε=+--(7) In which cur q and hom e q are the manipulators current and home configurations, ∈q specifies a proximity distance around hom e q andretract r k specifies how quicklythe interaction changes. III. CONTROL OF THE MOBILE PLATFORMThe control of the mobile platform is constructed very similar to what is presented in [14], but with a few differences. First of all only the target acquisition and obstacle avoidance behaviors are used. The corridor following and wall avoidance are not included, but would be straight forward extensions. The second area in which this work differs is in how the density of obstacles is calculated. Details of this will be explained in section III-D.For the control to actually be able to navigate through the environment, it is necessary with a method for localization. The approach we have used is based on the method described in [20], which combines odometry and laser range measurements matched against a map of dominating lines in the environment.The control of the platform is encoded using the orientation, φ and the velocity,ν, which results in a system with control inputs {},mobile f v φ=; The values of mobilef are made up of two parts, mobiletar f and mobile obs f , which are combined asmobile mobile mobile mobile mobile tar obs tar obs ff f w w =+ (8)Where the weights mobiletar w and mobileobs w are controlled using Eq. (3) with thecompetitive advantage and interactions described in section III-C.As control input we need expressions for the left and right wheels of the mobile platform, denoted left u and right u , respectively. To obtain these v is integrated to get v, which together with the desired rotational velocity φ, the wheel diameter wheel d and the distance between the wheels wheelbase d can be used to calculate the control inputs as(,)2left wheel v v d u φπ∆=-(9) (,)2right right wheel v v d u u φπ∆=+ (10) Where ∆ is the needed difference in wheel speed given bywheelbase wheel d d φπ∆=(12)A. Target DynamicsThe basic dynamics of this target behavior is,,()sin()mobile mobile tar tar tar fφφφλψφ=-(13) ,,max ()(min(,))mobile v mobile v mobile tar tar tar tar v k d v v fλ=-(14) In which ,mobile tar φλ and ,mobile v tar λ are the strengths of the attractors and tar ψ is thedirection to the target. The constant mobile tar k gives the relation between the distance tothe target and the desired velocity. Finally max v is the maximal velocity allowed for the mobile platform 。

协作移动机器人-前因和方向外文文献翻译、中英文翻译、外文翻译

协作移动机器人-前因和方向外文文献翻译、中英文翻译、外文翻译

Cooperative Mobile Robotics: Antecedents and DirectionsY. UNY CAOComputer Science Department, University of California, Los Angeles, CA 90024-1596ALEX S. FUKUNAGAJet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109-8099ANDREW B. KAHNGComputer Science Department, University of California, Los Angeles, CA 90024-1596Editors: R.C. Arkin and G.A. BekeyAbstract. There has been increased research interest in systems composed of multiple autonomous mobile robots exhibiting cooperative behavior. Groups of mobile robots are constructed, with an aim to studying such issues as group architecture, resource conflict, origin of cooperation, learning, and geometric problems. As yet, few applications of cooperative robotics have been reported, and supporting theory is still in its formative stages. In this paper, we give a critical survey of existing works and discuss open problems in this field, emphasizing the various theoretical issues that arise in the study of cooperative robotics. We describe the intellectual heritages that have guided early research, as well as possible additions to the set of existing motivations.Keywords: cooperative robotics, swarm intelligence, distributed robotics, artificial intelligence, mobile robots, multiagent systems1. PreliminariesThere has been much recent activity toward achieving systems of multiple mobile robots engaged in collective behavior. Such systems are of interest for several reasons:•tasks may be inherently too complex (or im-possible) for a single robot to accomplish, or performance benefits can be gained from using multiple robots;•building and using several simple robots can be easier, cheaper, more flexible and more fault-tolerant than having a single powerful robot foreach separate task; and•the constructive, synthetic approach inherent in cooperative mobile robotics can possibly∗This is an expanded version of a paper which originally appeared in the proceedings of the 1995 IEEE/RSJ IROS conference. yield insights into fundamental problems in the social sciences (organization theory, economics, cognitive psychology), and life sciences (theoretical biology, animal ethology).The study of multiple-robot systems naturally extends research on single-robot systems, butis also a discipline unto itself: multiple-robot systems can accomplish tasks that no single robot can accomplish, since ultimately a single robot, no matter how capable, is spatially limited. Multiple-robot systems are also different from other distributed systems because of their implicit “real-world” environment, which is presumably more difficult to model and reason about than traditional components of distributed system environments (i.e., computers, databases, networks).The term collective behavior generically denotes any behavior of agents in a system having more than one agent. the subject of the present survey, is a subclass of collective behavior that is characterized by cooperation. Webster’s dictionary [118] defines “cooperate” as “to associate with anoth er or others for mutual, often economic, benefit”. Explicit definitions of cooperation in the robotics literature, while surprisingly sparse, include:1. “joint collaborative behavior that is directed toward some goal in which there is a common interest or reward” [22];2. “a form of interaction, usually based on communication” [108]; and3. “[joining] together for doing something that creates a progressive result such as increasing performance or saving time” [137].These definitions show the wide range of possible motivating perspectives. For example, definitions such as (1) typically lead to the study of task decomposition, task allocation, and other dis-tributed artificial intelligence (DAI) issues (e.g., learning, rationality). Definitions along the lines of (2) reflect a concern with requirements for information or other resources, and may be accompanied by studies of related issues such as correctness and fault-tolerance. Finally, definition (3) reflects a concern with quantified measures of cooperation, such as speedup in time to complete a task. Thus, in these definitions we see three fundamental seeds: the task, the mechanism of cooperation, and system performance.We define cooperative behavior as follows: Given some task specified by a designer, a multiple-robot system displays cooperative behavior if, due to some underlying mechanism (i.e., the “mechanism of cooperation”), there is an increase in the total utility of the system. Intuitively, cooperative behavior entails some type of performance gain over naive collective behavior. The mechanism of cooperation may lie in the imposition by the designer of a control or communication structure, in aspects of the task specification, in the interaction dynamics of agent behaviors, etc.In this paper, we survey the intellectual heritage and major research directions of the field of cooperative robotics. For this survey of cooperative robotics to remain tractable, we restrict our discussion to works involving mobile robots or simulations of mobile robots, where a mobile robot is taken to be an autonomous, physically independent, mobile robot. In particular, we concentrated on fundamental theoretical issues that impinge on cooperative robotics. Thus, the following related subjects were outside the scope of this work:•coordination of multiple manipulators, articulated arms, or multi-fingered hands, etc.•human-robot cooperative systems, and user-interface issues that arise with multiple-robot systems [184] [8] [124] [1].•the competitive subclass of coll ective behavior, which includes pursuit-evasion [139], [120] and one-on-one competitive games [12]. Note that a cooperative team strategy for, e.g., work on the robot soccer league recently started in Japan[87] would lie within our present scope.•emerging technologies such as nanotechnology [48] and Micro Electro-Mechanical Systems[117] that are likely to be very important to co-operative robotics are beyond the scope of this paper.Even with these restrictions, we find that over the past 8 years (1987-1995) alone, well over 200papers have been published in this field of cooperative (mobile) robotics, encompassing theories from such diverse disciplines as artificial intelligence, game theory/economics, theoretical biology, distributed computing/control, animal ethology and artificial life.We are aware of two previous works that have surveyed or taxonomized the literature. [13] is abroad, relatively succinct survey whose scope encompasses distributed autonomous robotic systems(i.e., not restricted to mobile robots). [50] focuses on several well-known “swarm” architectures (e.g., SWARM and Mataric’s Behavior-based architecture –see Section 2.1) and proposes a taxonomy to characterize these architectures. The scope and intent of our work differs significantly from these, in that (1) we extensively survey the field of co-operative mobile robotics, and (2) we provide a taxonomical organization of the literature based on problems and solutions that have arisen in the field (as opposed to a selected group of architectures). In addition, we survey much new material that has appeared since these earlier works were published.Towards a Picture of Cooperative RoboticsIn the mid-1940’s Grey Walter, along with Wiener and Shannon, studied turtle-like robots equipped wit h light and touch sensors; these simple robots exhibited “complex social behavior” in responding to each other’s movements [46]. Coordination and interactions of multiple intelligent agents have been actively studied in the field of distributed artificial intelligence (DAI) since the early 1970’s[28], but the DAI field concerned itself mainly with problems involving software agents. In the late 1980’s, the robotics research community be-came very active in cooperative robotics, beginning with projects such as CEBOT [59], SWARM[25], ACTRESS [16], GOFER [35], and the work at Brussels [151]. These early projects were done primarily in simulation, and, while the early work on CEBOT, ACTRESS and GOFER have all had physical implementations (with≤3 robots), in some sense these implementations were presented by way of proving the simulation results. Thus, several more recent works (cf. [91], [111], [131])are significant for establishing an emphasis on the actual physical implementation of cooperative robotic systems. Many of the recent cooperative robotic systems, in contrast to the earlier works, are based on a behavior-based approach (cf. [30]).Various perspectives on autonomy and on the connection between intelligence and environment are strongly associated with the behavior-based approach [31], but are not intrinsic to multiple-robot systems and thus lie beyond our present scope. Also note that a recent incarnation of CEBOT, which has been implemented on physical robots, is based on a behavior-based control architecture[34].The rapid progress of cooperative robotics since the late 1980’s has been an interplay of systems, theories and problems: to solve a given problem, systems are envisioned, simulated and built; theories of cooperation are brought from other fields; and new problems are identified (prompting further systems and theories). Since so much of this progress is recent, it is not easy to discern deep intellectual heritages from within the field. More apparent are the intellectualheritages from other fields, as well as the canonical task domains which have driven research. Three examples of the latter are:•Traffic Control. When multiple agents move within a common environment, they typically attempt to avoid collisions. Fundamentally, this may be viewed as a problem of resource conflict, which may be resolved by introducing, e.g., traffic rules, priorities, or communication architectures. From another perspective, path planning must be performed taking into con-sideration other robots and the global environment; this multiple-robot path planning is an intrinsically geometric problem in configuration space-time. Note that prioritization and communication protocols – as well as the internal modeling of other robots – all reflect possible variants of the group architecture of the robots. For example, traffic rules are commonly used to reduce planning cost for avoiding collision and deadlock in a real-world environment, such as a network of roads. (Interestingly, behavior-based approaches identify collision avoidance as one of the most basic behaviors [30], and achieving a collision-avoidance behavior is the natural solution to collision avoidance among multiple robots. However, in reported experiments that use the behavior-based approach, robots are never restricted to road networks.) •Box-Pushing/Cooperative Manipulation. Many works have addressed the box-pushing (or couch-pushing) problem, for widely varying reasons. The focus in [134] is on task allocation, fault-tolerance and (reinforcement) learning. By contrast, [45] studies two boxpushing protocols in terms of their intrinsic communication and hardware requirements, via the concept of information invariants. Cooperative manipulation of large objects is particularly interesting in that cooperation can be achieved without the robots even knowing of each others’ existence [147], [159]. Other works in the class of box-pushing/object manipulation include [175] [153] [82] [33] [91] [94] [92][114] [145] [72] [146].•Foraging. In foraging, a group of robots must pick up objects scattered in the environment; this is evocative of toxic waste cleanup, harvesting, search and rescue, etc. The foraging task is one of the canonical testbeds for cooperative robotics [32] [151] [10] [67] [102] [49] [108] [9][24]. The task is interesting because (1) it can be performed by each robot independently (i.e., the issue is whether multiple robots achieve a performance gain), and (2) as discussed in Section 3.2, the task is also interesting due to motivations related to the biological inspirations behind cooperative robot systems. There are some conceptual overlaps with the related task of materials handling in a manufacturing work-cell [47]. A wide variety of techniques have been applied, ranging from simple stigmergy (essentially random movements that result in the fortuitous collection of objects [24] to more complex algorithms in which robots form chains along which objects are passed to the goal [49].[24] defines stigmergy as “the production of a certain behaviour in agents as a consequence of the effects produced in the local environment by previous behaviour”. This is actually a form of “cooperation without communication”, which has been the stated object of several for-aging solutions since the corresponding formulations become nearly trivial if communication is used. On the other hand, that stigmergy may not satisfy our definition of cooperation given above, since there is no performance improvement over the “naive algorithm” –in this particular case, the proposed stigmergic algorithm is the naive algorithm. Again, group architecture and learning are major research themes in addressing this problem.Other interesting task domains that have received attention in the literature includemulti-robot security systems [53], landmine detection and clearance [54], robotic structural support systems (i.e., keeping structures stable in case of, say ,an earthquake) [107], map making [149], and assembly of objects using multiple robots [175].Organization of PaperWith respect to our above definition of cooperative behavior, we find that the great majority of the cooperative robotics literature centers on the mechanism of cooperation (i.e., few works study a task without also claiming some novel approach to achieving cooperation). Thus, our study has led to the synthesis of five “Research Axes” which we believe comprise the major themes of investigation to date into the underlying mechanism of cooperation.Section 2 of this paper describes these axes, which are: 2.1 Group Architecture, 2.2 Resource Conflict, 2.3 Origin of Cooperation, 2.4 Learning, and 2.5 Geometric Problems. In Section 3,we present more synthetic reviews of cooperative robotics: Section 3.1 discusses constraints arising from technological limitations; and Section 3.2discusses possible lacunae in existing work (e.g., formalisms for measuring performance of a cooperative robot system), then reviews three fields which we believe must strongly influence future work. We conclude in Section 4 with a list of key research challenges facing the field.2. Research AxesSeeking a mechanism of cooperation may be rephrased as the “cooperative behavior design problem”: Given a group of robots, an environment, and a task, how should cooperative behavior arise? In some sense, every work in cooperative robotics has addressed facets of this problem, and the major research axes of the field follow from elements of this problem. (Note that certain basic robot interactions are not task-performing interactions per se, but are rather basic primitives upon which task-performing interactions can be built, e.g., following ([39], [45] and many others) or flocking [140], [108]. It might be argued that these interactions entail “control and coordination” tasks rather than “cooperation” tasks, but o ur treatment does not make such a distinction).First, the realization of cooperative behavior must rely on some infrastructure, the group architecture. This encompasses such concepts as robot heterogeneity/homogeneity, the ability of a given robot to recognize and model other robots, and communication structure. Second, for multiple robots to inhabit a shared environment, manipulate objects in the environment, and possibly communicate with each other, a mechanism is needed to resolve resource conflicts. The third research axis, origins of cooperation, refers to how cooperative behavior is actually motivated and achieved. Here, we do not discuss instances where cooperation has been “explicitly engineered” into the robots’ behavior since this is the default approach. Instead, we are more interested in biological parallels (e.g., to social insect behavior), game-theoretic justifications for cooperation, and concepts of emergence. Because adaptability and flexibility are essential traits in a task-solving group of robots, we view learning as a fourth key to achieving cooperative behavior. One important mechanism in generating cooperation, namely,task decomposition and allocation, is not considered a research axis since (i) very few works in cooperative robotics have centered on task decomposition and allocation (with the notable exceptions of [126], [106], [134]), (ii) cooperative robot tasks (foraging, box-pushing) in the literature are simple enough that decomposition and allocation are not required in the solution, and (iii) the use of decomposition and allocation depends almost entirely on the group architectures(e.g. whether it is centralized or decentralized).Note that there is also a related, geometric problem of optimizing the allocation of tasks spatially. This has been recently studied in the context of the division of the search of a work area by multiple robots [97]. Whereas the first four axes are related to the generation of cooperative behavior, our fifth and final axis –geometric problems–covers research issues that are tied to the embed-ding of robot tasks in a two- or three-dimensional world. These issues include multi-agent path planning, moving to formation, and pattern generation.2.1. Group ArchitectureThe architecture of a computing sys tem has been defined as “the part of the system that remains unchanged unless an external agent changes it”[165]. The group architecture of a cooperative robotic system provides the infrastructure upon which collective behaviors are implemented, and determines the capabilities and limitations of the system. We now briefly discuss some of the key architectural features of a group architecture for mobile robots: centralization/decentralization, differentiation, communications, and the ability to model other agents. We then describe several representative systems that have addressed these specific problems.Centralization/Decentralization The most fundamental decision that is made when defining a group architecture is whether the system is centralized or decentralized, and if it is decentralized, whether the system is hierarchical or distributed. Centralized architectures are characterized by a single control agent. Decentralized architectures lack such an agent. There are two types of decentralized architectures: distributed architectures in which all agents are equal with respect to control, and hierarchical architectures which are locally centralized. Currently, the dominant paradigm is the decentralized approach.The behavior of decentralized systems is of-ten described using such terms as “emergence” and “self-organization.” It is widely claimed that decentralized architectures (e.g., [24], [10], [152],[108]) have several inherent advantages over centralized architectures, including fault tolerance, natural exploitation of parallelism, reliability, and scalability. However, we are not aware of any published empirical or theoretical comparison that supports these claims directly. Such a comparison would be interesting, particularly in scenarios where the team of robots is relatively small(e.g., two robots pushing a box), and it is not clear whether the scaling properties of decentralization offset the coordinative advantage of centralized systems.In practice, many systems do not conform toa strict centralized/decentralized dichotomy, e.g., many largely decentralized architectures utilize “leader” agents. We are not aware of any in-stances of systems that are completely centralized, although there are some hybrid centralized/decentralized architectures wherein there is a central planner that exerts high-levelcontrol over mostly autonomous agents [126], [106], [3], [36].Differentiation We define a group of robots to be homogeneous if the capabilities of the individual robots are identical, and heterogeneous otherwise. In general, heterogeneity introduces complexity since task allocation becomes more difficult, and agents have a greater need to model other individuals in the group. [134] has introduced the concept of task coverage, which measures the ability of a given team member to achieve a given task. This parameter is an index of the demand for cooperation: when task coverage is high, tasks can be accomplished without much cooperation, but otherwise, cooperation is necessary. Task coverage is maximal in homogeneous groups, and decreases as groups become more heterogeneous (i.e., in the limit only one agent in the group can perform any given task).The literature is currently dominated by works that assume homogeneous groups of robots. How-ever, some notable architectures can handle het-erogeneity, e.g., ACTRESS and ALLIANCE (see Section 2.1 below). In heterogeneous groups, task allocation may be determined by individual capabilities, but in homogeneous systems, agents may need to differentiate into distinct roles that are either known at design-time, or arise dynamically at run-time.Communication Structures The communication structure of a group determines the possible modes of inter-agent interaction. We characterize three major types of interactions that can be sup-ported. ([50] proposes a more detailed taxonomy of communication structures). Interaction via environmentThe simplest, most limited type of interaction occurs when the environment itself is the communication medium (in effect, a shared memory),and there is no explicit communication or interaction between agents. This modality has also been called “cooperation without communication” by some researchers. Systems that depend on this form of interaction include [67], [24], [10], [151],[159], [160], [147].Interaction via sensing Corresponding to arms-length relationships inorganization theory [75], interaction via sensing refers to local interactions that occur between agents as a result of agents sensing one another, but without explicit communication. This type of interaction requires the ability of agents to distinguish between other agents in the group and other objects in the environment, which is called “kin recognition” in some literatures [108]. Interaction via sensing is indispensable for modeling of other agents (see Section 2.1.4 below). Because of hard-ware limitations, interaction via sensing has often been emulated using radio or infrared communications. However, several recent works attempt to implement true interaction via sensing, based on vision [95], [96], [154]. Collective behaviors that can use this kind of interaction include flocking and pattern formation (keeping in formation with nearest neighbors).Interaction via communicationsThe third form of interaction involves explicit communication with other agents, by either directed or broadcast intentional messages (i.e. the recipient(s) of the message may be either known or unknown). Because architectures that enable this form of communication are similar tocommunication networks, many standard issues from the field of networks arise, including the design of network topologies and communications protocols. For ex-ample, in [168] a media access protocol (similar to that of Ethernet) is used for inter-robot communication. In [78], robots with limited communication range communicate to each other using the “hello-call” protocol, by which they establish “chains” in order to extend their effective communication ranges. [61] describes methods for communicating to many (“zillions”) robots, including a variety of schemes ranging from broadcast channels (where a message is sent to all other robots in the system) to modulated retroreflection (where a master sends out a laser signal to slaves and interprets the response by the nature of the re-flection). [174] describes and simulates a wireless SMA/CD ( Carrier Sense Multiple Access with Collision Detection ) protocol for the distributed robotic systems.There are also communication mechanisms designed specially for multiple-robot systems. For example, [171] proposes the “sign-board” as a communication mechanism for distributed robotic systems. [7] gives a communication protocol modeled after diffusion, wherein local communication similar to chemical communication mechanisms in animals is used. The communication is engineered to decay away at a preset rate. Similar communications mechanisms are studied in [102], [49], [67].Additional work on communication can be found in [185], which analyzes optimal group sizes for local communications and communication delays. In a related vein, [186], [187] analyzes optimal local communication ranges in broadcast communication.Modeling of Other Agents Modeling the intentions, beliefs, actions, capabilities, and states of other agents can lead to more effective cooperation between robots. Communications requirements can also be lowered if each agent has the capability to model other agents. Note that the modeling of other agents entails more than implicit communication via the environment or perception: modeling requires that the modeler has some representation of another agent, and that this representation can be used to make inferences about the actions of the other agent.In cooperative robotics, agent modeling has been explored most extensively in the context of manipulating a large object. Many solutions have exploited the fact that the object can serve as a common medium by which the agents can model each other.The second of two box-pushing protocols in[45] can achieve “cooperation without commun ication” since the object being manipulated also functions as a “communication channel” that is shared by the robot agents; other works capitalize on the same concept to derive distributed control laws which rely only on local measures of force, torque, orientation, or distance, i.e., no explicit communication is necessary (cf. [153] [73]).In a two-robot bar carrying task, Fukuda and Sekiyama’s agents [60] each uses a probabilistic model of the other agent. When a risk threshold is exceeded, an agent communicates with its partner to maintain coordination. In [43], [44], the theory of information invariants is used to show that extra hardware capabilities can be added in order to infer the actions of the other agent, thus reducing communication requirements. This is in contrast to [147], where the robots achieve box pushing but are not aware of each other at all. For a more com-plex task involving the placement of five desks in[154], a homogeneous group of four robots share a ceiling camera to get positional information, but do not communicate with each other. Each robot relies on modeling of otheragents to detect conflicts of paths and placements of desks, and to change plans accordingly.Representative Architectures All systems implement some group architecture. We now de-scribe several particularly well-defined representative architectures, along with works done within each of their frameworks. It is interesting to note that these architectures encompass the entire spectrum from traditional AI to highly decentralized approaches.CEBOTCEBOT (Cellular roBOTics System) is a decentralized, hierarchical architecture inspired by the cellular organization of biological entities (cf.[59] [57], [162] [161] [56]). The system is dynamically reconfigurable in tha t basic autonomous “cells” (robots), which can be physically coupled to other cells, dynamically reconfigure their structure to an “optimal” configuration in response to changing environments. In the CEBOT hierarchy there are “master cells” that coordinate subtasks and communicate with other master cells. A solution to the problem of electing these master cells was discussed in [164]. Formation of structured cellular modules from a population of initially separated cells was studied in [162]. Communications requirements have been studied extensively with respect to the CEBOT architecture, and various methods have been proposed that seek to reduce communication requirements by making individual cells more intelligent (e.g., enabling them to model the behavior of other cells). [60] studies the problem of modeling the behavior of other cells, while [85], [86] present a control method that calculates the goal of a cell based on its previous goal and on its master’s goal. [58] gives a means of estimating the amount of information exchanged be-tween cells, and [163] gives a heuristic for finding master cells for a binary communication tree. Anew behavior selection mechanism is introduced in [34], based on two matrices, the priority matrix and the interest relation matrix, with a learning algorithm used to adjust the priority matrix. Recently, a Micro Autonomous Robotic System(MARS) has been built consisting of robots of 20cubic mm and equipped with infrared communications [121].ACTRESSThe ACTRESS (ACTor-based Robot and Equipments Synthetic System) project [16], [80],[15] is inspired by the Universal Modular AC-TOR Formalism [76]. In the ACTRESS system,“robotors”, including 3 robots and 3 workstations(one as interface to human operator, one as im-age processor and one as global environment man-ager), form a heterogeneous group trying to per-form tasks such as object pushing [14] that cannot be accomplished by any of the individual robotors alone [79], [156]. Communication protocols at different abstraction levels [115] provide a means upon which “group cast” and negotiation mechanisms based on Contract Net [150] and multistage negotiation protocols are built [18]. Various is-sues are studied, such as efficient communications between robots and environment managers [17],collision avoidance [19].SWARM。

Motion planning for multiple mobile robots using dynamic networks

Motion planning for multiple mobile robots using dynamic networks

Motion Planning for Multiple Mobile Robots using Dynamic NetworksChristopher M. Clark & Stephen M. RockAerospace Robotics LabDepartment of Aeronautics & AstronauticsStanford University{chrisc, rock}@Jean-Claude Latombe Department of Computer Science Stanford Universtylatombe@Abstract - A new motion planning framework is presented that enables multiple mobile robots with limited ranges of sensing and communication to maneuver and achieve goals safely in dynamic environments. To combine the respective advantages of centralized and de-centralized planning, this framework is based on the concept of centralized planning within dynamic robot networks. As the robots move in theirenvironment, localized robot groups form networks, within which world models and robot goals can be shared. Whenever a network is formed, new information then becomes available to all robots in this network. With this new information, each robot uses a fast, centralized planner to compute new coordinated trajectories on the fly. Planning over several robot networks is decentralized and distributed. Both simulated and real-robot experiments have validated the approach.I.INTRODUCTIONWhen many robots operate in the same environment, high-level motion planning is required for the robots to reach their goals while avoiding collisions among themselves and with static and moving obstacles. In unknown or partially known environments, it is unlikely that a system of sensors can provide global knowledge. In addition, continuous inter-robot communication is usually not feasible. Instead, only robots that are sufficiently close to each other can exchange information, e.g., share their goals and local world models.This paper introduces a new planning framework that exploits the changing communication links between robots, as the robots move, to combine the respective advantages of centralized and decentralized planning. More precisely, our approach is based on dynamic robot networks that are capable of: 1) forming dynamically whenever communication and sensing capabilities permit;2) sharing world models and robot goals within each network; and 3) constructing “on the fly” coordinated trajectories for all robots in each network using a fast centralized motion planner.An overview of this approach is presented in Section II.A background review (Section III) justifies the choices made in our approach. We then describe aspects of our framework in more detail, namely the representation of partial world models (Section IV) and the planning technique (Section V). Section VI presents the test-platform used for simulations and robot experiments. Section VII gives some experimental results.a) b)Net 0Net 1Net 2Fig. 1 Example with 5 robots. Dashed lines between robots depict communication links. In a) the robots form two distinct networks Net0 and Net1. In b), two robots have moved, and the two networks in a) have merged into Net2.II.PLANNING IN DYNAMIC NETWORKSA. Network FormationWhen any two robots are within communication range of each other, they establish a communication link. Define G to be the graph whose nodes are the robots and edges are the communication links. A network of robots is any group of k≥ 1 robots forming a maximal connected component of G. So, any two robots in a network can communicate through one or several communication links, but two robots from different networks can not. Fig. 1a shows an environment with 5 robots, where 2 networks have formed. In Net1, the top and bottom robots can exchange information via their communication links with the middle robot. Because robots are moving to achieve their goal locations, the networks are dynamic. Robots may leave networks and/or form new networks (see Fig. 1b). An application level protocol ensures that at any time robots in each network can access the local sensing information of all other robots in the same network, and hence share a common world model.B. Planning ProcessMotion planning in a network N is triggered by any one of the following events:•N just got formed, i.e., two robots from different networks entered one another’s communication range.• A significant change in the world model occurs, e.g., a robot in N senses a new obstacle.• A new goal location is requested for one or several robots in N.robots are in communication range and form a network. Their centralized planners create coordinated collision-free trajectories that lead toward the goals (cross-hairs). The right robot forms a network by itself, and its trajectory is planned independently from the other two. Theright robot enter communication range with each other, and all threec) A new plan is made for all three robots in the network. This plancommunication range of each other and some network links are broken. They continue to follow the planned trajectories.Fig. 2 Top-down view of a planning example with three robots. In each of the fours snapshots, the illustration on the left shows the robots folowing their trajectories to their respective goals (cross-hairs). The diagram on the right depicts the communication range of each robot and the existing communication links.When such a triggering event occurs, data is exchanged between the robots in N, so that each one gets an updated world model that combines the local world model and goal of every robot. Once robots have shared this information, each robot runs its own copy of a centralized motion planner to construct coordinated trajectories for all robots in the network. When the planner terminates, each robot broadcasts its plan to all other robots in the network. Each robot selects the same best plan and immediately starts executing its trajectory in this plan. The planner is a single-query probabilistic-roadmap (PRM) planner similar to the one presented in [12] (see Section V).This process is illustrated in Fig. 2 on a simple example involving 3 robots, with no obstacles. A triggering event automatically occurs at the start of the process, as the first networks get formed.Since robots also have limited sensing, the world model shared through a network is partial. Planning is done using this model. As robots move, their sensors may detect previously unknown obstacles or a change in the trajectory followed by a known obstacle. Such an event triggers a re-planning operation within the network where the new obstacle or change of trajectory was detected.III.BACKGROUND REVIEWMost previous work on multi-robot motion planning can be grouped into centralized and decentralized planning [2,23]. While centralized planning considers all robots together as if they were forming a single multi-body robot [4,6,17,22,26,27], a decentralized planner plans for each robot separately before coordinating the individual plans by tuning the robot velocities along their respective paths [1,3,9,14,15,19,21,25]. A variant of decentralized, called prioritizing planning, plans for one robot at a time, in some sequence, considering the robots whose trajectories have already been planned as moving obstacles [5,10].Centralized planners can be advantageous because they allow the possibility of completeness and global optimization. For example, it was shown in [23] that a centralized planner based on PRM techniques can reliably solve problems requiring the tight coordination of multiple articulated arms, while decentralized planners based on similar PRM techniques fail often. On the other hand, centralized planning may take more time due to the high dimensionality of the configuration spaces that are searched. A worse drawback is that they require all information (partial world models and robot goals) to be centralized in one single place, which is only possible if the robots have unlimited communication capabilities. This is not the case in many practical settings.A major advantage of decentralized planning is that it allows for distributed planning. Each robot can then plan its own trajectory using its own partial model of the environment. If two robots eventually get close to one another and risk colliding, simple velocity-tuning techniques or reactive techniques can be used to locally coordinate their motions. However, a fully distributed approach fails to exploit the fact that localized groups of robots can exchange information to improve planning. While decentralized planning is potentially less computationally intensive because it searches several configuration spaces of smaller dimensionality, it cannot offer any completeness or optimality guarantee. Various attempts have been made to improve the outcome of decentralized planners (e.g., [3,5,11]). In particular, anegotiation scheme between localized groups of robots is used in [3] to assign priority orders to robots, which allow the decentralized planner to compute trajectories of reduced lengths. This negotiation scheme demonstrates the benefits of localized inter-robot communication, and is the technique most closely related to the robot network planning framework presented in this paper. But de-centralized planning remains intrinsically incomplete. The planning approach presented in this paper exploits the respective advantages of centralized and decentralized planning. In each robot network, it uses a centralized single-query PRM planner to increase completeness and still provide fast on-the-fly planning. However, planning is distributed over the various networks – hence, planning over multiple networks is decentralized – to accommodate the fact that robots from different networks cannot share information. The triggering event caused by the merging of two previously distinct networks into a single network leads the robots in this new network to take advantage of the information they now share by centrally re-planning their coordinated trajectories.Planning with incomplete world models and on-the-fly re-planning when a sensor detects the presence of a still unknown obstacle or a change in an obstacle’s trajectory have previously been described in [12, 16] for a single robot. We use similar techniques, but extend them to multiple robot networks.IV.WORLD MODELDescribing the world model in a concise but useful form is necessary to allow for information sharing between robots in the same network. In the experimental system that we have built, world models simply consist of a list of robots and their descriptions, and a list of obstacles and their descriptions. The following table outlines the information stored in each list:World Model Description1)List of Robot Descriptions- State (position and velocity)- Size (Radius)- Most Recent Update Time- Information Source- Goal position- Current Trajectory2)List of Obstacle Descriptions- State (position and velocity)- Size (Radius)- Most Recent Update Time- Information SourceRobots report their own size and state, while obstacle sizes and states are estimated by robot sensors. The most recent update time is useful when updating world models with information received from other robots. The information source is a robot ID that indicates which robot sensed (or communicated with) the object. It is used to keep track of which robots are currently in the network. Several assumptions were made to allow such a concise world model:•Each robot has access to its own state relative to a global coordinate system (e.g., GPS).•Each object is approximated as a circular object to allow its geometry to be described by a single parameter, its radius.•Each obstacle has constant linear velocity estimated by a robot’s sensor. As in [12], if at any later time its trajectory is found to diverge by more than some threshold from the predicted trajectory (either because the obstacle did not move at constant velocity, or because the error in the velocity estimate was too high), then the robot that detects this divergence calls for the construction of a new plan within its network. The planner “grows” the obstacles (and the robots) to allow for some errors in predicted trajectories of the objects.•All objects in the environment are easily identifiable by robot sensors, which can also precisely estimate their positions and velocities. Any discrepancy between two local world models can be easily resolved.The second assumption is rather easy to eliminate, as it has been shown before that PRM planners can efficiently deal with geometrically complex robots and obstacles (e.g., [22]). In [12], the third assumption has been shown to be quite reasonable, even when obstacle velocities change frequently, provided that (re-)planning is fast enough. The last assumption is more crucial. In our experimental system, it is enforced by engineering the vision system appropriately (Section VI). In the future, it will be important to relax this assumption by using more general sensing systems and data fusion techniques [20].V.MOTION PLANNING ALGORITHMAs indicated earlier, motion planning within a robot network is done using a centralized single-query PRM planner (more precisely, several copies of this planner running in parallel). This planner searches the joint state×time space C of the k robots in this network. The state of each robot is defined by the two coordinates of its center and two velocity parameters, so C has 4k+1 dimensions. This representation can easily be extended to other robots. For instance, we have implemented a version of the planner for robots in three-dimensional space [8]. The planner searches C for a collision-free trajectory from the initial state of the robots to their goal state. Theresulting trajectory defines the coordinated motions of the robots to their respective goals.Our planner searches C by incrementally building a tree of milestones (the roadmap), as described in [12,13,17]. At each iteration, it selects a milestone m in the current roadmap, generates a collision-free state m’ at random in a neighborhood of m in C and, if the path from m to m’tests collision-free, installs m’ as a new milestone in the roadmap. The search terminates when m’ falls into an “endgame” region around the goal. See [12] for details.As in [12,24], our planner satisfies kinodynamic constraints as follows: to generate each new milestone m’, it picks a control input at random and integrates the equations of motion of the robots over a short duration. We name our planner Kinodynamic Randomized Motion Planning - KRMP. As shown in [12], under reasonable assumptions on the free space, the probability of not finding a plan when one exists decreases exponentially to 0 as the number of milestones increases. This is a major advantage over our previous work in [7,9], which used a decentralized prioritized planning approach. Note, however, that the fact that the planner is probabilistically complete does not imply that the entire system is also probabilistically complete. The robots use partial world models and thus need to re-plan their trajectories when they encounter discrepancies in their model, (e.g. new obstacles). Since there is no guarantee that a series of complete plans is itself a complete plan, the robots are not guaranteed to find a global plan if one exists. While it is unclear to what extent the notion of completeness applies when planning for global goals with only partial knowledge of the environment, it is still desirable to achieve completeness in the system’s components whenever this is possible.The work in [12] also demonstrated empirically that the above techniques successfully compute trajectories for a single robot with kinodynamic motion constraints, in real-time. To enable motion planning within robot networks, KRMP extends this previous work to accommodate multiple robots. Modified techniques are needed to 1) select milestones for expansion, 2) generate new milestones, and 3) define the endgame region. Below we present the technique used to generate new milestones. When planning for multiple robots, one may generate m’ using the following “parallel” approach: first, pick the control inputs for all the robots at random; next, integrate the motions of all the robots concurrently; if no collision is detected, then record the endpoint as a new milestone, otherwise pick another set of control inputs. We found that this technique yields a high rejection rate, especially in tight space. This led us to develop the following “sequential” approach: consider the robots in some order, pick the control inputs one robot at a time and integrate their motion (considering the previous robots as moving obstacles); if the motion collides, pick new control inputs or change the motion of a previous robot. Experiments show that this sequential approach makes it possible to get each new milestone much faster, without affecting the probabilistic completeness of the overall planner. Finally, we take advantage of the various processors available in a robot network by concurrently running a separate copy of KRMP on each robot of the network. Each copy uses a different seed of the random number generator, hence constructs different roadmaps. We set the same timeout constraint (typically, a small fraction of a second) on every robot. Each robot then returns a plan or its failure to generate one. The same best plan is selected by the robots and each robot immediately executes its new trajectory. This is made possible because we use a PRM planning approach.VI.EXPERIMENTAL TEST-PLATFORMA. Micro-Autonomous RoverS Test-PlatformLocated in the Aerospace Robotics Lab at Stanford University, the Micro-Autonomous RoverS (MARS) test -platform is used to model mobile robots in a two-dimensional workspace. The platform consists of a large 3m x 2m flat, granite table with six autonomous robots that move about the table’s surface. The robots are cylindrical in shape and use two independently driven wheels that allow them to rotate on the spot, but inhibit lateral movement (nonholonomic constraint). Each robot is equipped with its own planner (copy of KRMP) and controller that are located off-board.B. SensorsAn overhead vision system is used to track the states of all objects on the table. The vision system processor calculates these states and publishes them to all applications that subscribe (see Section VI). This makes global state information available to all robots. To simulate the limited sensing range that would occur when sensors are mounted on robots, the object states are filtered such that robots only receive state information regarding objects within some preset range of the robot. C. Network CommunicationFig. 3 shows the computer/network architecture of the MARS test-platform. All the processing is done off-board. Two processors are assigned to each robot, respectively for planning and control. These computers are connected through a LAN. All communication within the LAN is accomplished with Real Time Innovation's Network Data Delivery Service (NDDS) software. Because a LAN is used for inter-robot communication instead of a wireless medium, there are no physical barriers to limit the range of communication. Hence the communication barrier is simulated.Fig. 3 Network architecture of MARS test-platformNDDS is based on a publish/subscribe architecture. To broadcast messages by flooding a robot network, the sender will publish a message to which all robots subscribe. Before robots can receive their subscriptions, the messages are filtered so that only robots within some predetermined range of the sender will receive the message. This effectively simulates a discrete physical communication range.VII. EXPERIMENTSA. Physical ExperimentsTo illustrate the applicability of the planner to a physical system, real robot experiments with up to 5 robots have been carried out. One example of such an experiment is illustrated in Fig. 4. The left photos are screen-shots of the GUI taken throughout the experiment. The right photos show the physical hardware, and were taken at the same time as the corresponding GUI screen-shots. In the GUI, robots are depicted as small circles and obstacles are depicted as larger circles. Robot goal locations are indicated by cross-hairs, and lines leading to the goals depict the trajectories. When robots form a network as described in Section II, it is indicated by a color change. Hence robots within a network have a common color, and this color will differ between networks.Fig. 4 Example experiment on the MARS test-platform involving 5 robots and 3 obstacles.Along the way, networks are continually changing as illustrated by the robots changing colors between frames. A result of this is real-time re-planning. This is illustrated by the fact that trajectories change between frames. Throughout the experiment, robots planned an average of 3.4 times, and planning times were an average of 9 ms.B. SimulationsThe physical experiments shown above validate the planner’s ability to function on real robots. However, the limited number of robots and obstacles available prevent us from performing experiments that demonstrate the planner’s ability to handle more complex scenarios.All five robots are initially located at the close end of the table (i.e. bottom of the GUI screen). Communication and sensing ranges were limited to 0.75 m. Robot colors indicate that 2 networks have formed on startup, one in the bottom left and one in the bottom right. As the experiment progresses, the robots follow their trajectories to reach their goal locations at the far end of the table. A single scenario was simulated that incorporates 12 robots, 6 static obstacles and 6 moving obstacles. The workspace was given dimensions 4m x 6m while robots and obstacles had diameters of 0.14m.[3] K. Azarm & G. Schmidt. Conflict-Free Motion of Multiple Mobile Robots Based on Decentralized Motion Planning and Negotiation, Proc. IEEE Int. Conf. on Robotics and Automation, p. 3526-3533, 1997.[4] J. Barraquand, B. Langlois, & J.C. Latombe. Numerical Potential Field techniques for Robot Path Planning, IEEE Tr. On Syst., Man, and Cyb., 22(2):224-241, 1992.[5] M. Bennewitz, W. Burgard & S. Thrun. Optimizing Schedules for Prioritized Path Planning of Multi-Robot Systems, Proc. Int. Conf. on Robotics and Automation , 2001.[6] S.J. Buckley. Fast Motion Planning for Multiple Moving Robots. Proc. IEEE Int. Conf. on Robotics and Autom., p. 1419-1424, 1989.[7] C. Clark & S. Rock. Randomized Motion Planning for Groups of Nonholonomic Robots, Proc. Int. Symp. of Artificial Intelligence, Robotics and Automation in Space , 2001.[8] C. Clark, S. Rock & J. C. Latombe. Dynamic Networks for Motion Planning in Multi-Robot Space Systems, Proc. Int. Symp. of Artificial Intelligence, Robotics and Automation in Space , 2003.[9] C. Clark, T. Bretl, & S. Rock. Kinodynamic Randomized Motion Planning for Multi-Robot Space Systems, Proc. of IEEE Aerospace Conf., 2002.Fig. 5 Screen-shot of the test scenario.[10] M. Erdmann & T. Lozano-Perez. On Multiple Moving Objects, Proc. IEEE Int. Conf. on Robotics and Automation , p. 1419-1424, 1986. To add complexity to the scenario, 4 of the moving obstacles were directed towards a network of 2 robots with little room to maneuver, (see middle of Fig. 5). Also, 2 networks of 2 robots were placed between a row of 3 obstacles and a workspace boundary.[11] Y. Guo & L. E. Parker. A Distributed and Optimal Motion Planning Approach for Multiple Mobile Robots, Proc. IEEE Int. Conf. on Robotics and Automation , p. 2612-2619, 2002.[12] D. Hsu, R. Kindel, J.C. Latombe, & S. Rock. Randomized Kinodynamic Motion Planning with Moving Obstacles, Int. J. of Robotics Research , 21(3):233-255, March 2002.The scenario was run 25 times with different initial random seeds. Despite the apparent difficulty of the scenario, the planner demonstrated fast planning times (an average of 15.8 ms), while planning for up to 5 robots in a network. To provide an idea of the level of complexity, robots formed on average 49 different networks throughout simulations that lasted several minutes. [13] D. Hsu, J.C. Latombe, & R. Motwani. Path planning in expansive configuration spaces, Proc. IEEE Int. Conf. on Robotics and Automation , p. 2719-2726, 1997.[14] K. Kant & S. Zucker. Toward efficient Trajectory Planning: The path-velocity decomposition, Int. J. of Robotics Research , 5(3):72-89,1986.[15] S. Kato, S. Nishiyama, & J. Takeno. Coordinating mobile robots by applying traffic rules, Proc. IEEE/RSH Int. Conf. on Intelligent Robots and Systems , p. 1535-1541, 1992.Table 1 Simulation data for test scenario[16] J.J. Kuffner. Autonomous Agents for Real-Time Animation . PhD Thesis, Computer Science Dept., Stanford U., 1999.Avg. number of robots per plan 2.12 Avg. planning time (ms)17.3 Avg. number of plans per robot per simulation 5.07 Avg. number of networks formed per simulation 49.4[17] S.M. LaValle & S.A. Hutchinson. Optimal Motion Planning for Multiple Robots Having Independent Goals, IEEE Tr. on Robotics and Automation , 14:912-925, 1998.[18] S.M. LaValle & J.J. Kufner. Randomized Kinodynamic Planning,” Int. J. of Robotics Research , 20(5):278-300, 2001.[19] V.J. Lumelsky & K.R. Harinarayan. Decentralized Motion Planning for Multiple Mobile Robots: The Cocktail Party Model, Autonomous Robots J., 4:121-135, 1997.VIII. CONCLUSIONSThe motion planning framework presented has demonstrated its effectiveness in planning for multiple mobile robots within a bounded workspace. It plans with a high probability of success in environments involving robots, stationary obstacles and moving obstacles. Planning times of less than 100 ms allowed the robots to plan on-the-fly and react to changes in the environment. [20] P. Moutarlier & R. Chatila. Stochastic Multisensory Data Fusion for Mobile Robot Location and Environment Modelling. Proc. Int. Symp. on Robotics Research, Tokyo , 1989.[21] D. Parsons & J. Canny. A Motion Planner for Multiple Mobile Robots, Proc. IEEE Int. Conf. on Robotics and Autom., p. 8-13, 1992 [22] G. Sánchez & J.C. Latombe. On Delaying Collision Checking in PRM Planning : Application to Multi-Robot Coordination, Int. J. of Robotics Research , 21(1):5-26, Jan. 2002.[23] G. Sánchez-Ante & J.C. Latombe. Using a PRM Planner to Compare Centralized and Decoupled Planning for Multi-Robot Systems, Proc. IEEE Int. Conf. on Robotics and Autom.., 2002.Future work includes incorporating more sophisticated methods of modeling the environment into the communication system. Another future direction will be to investigate the effects of varying the ratio between sensor range and communication range. [24] S. Sekhavat, P. Svetska, J.P. Laumond, & M.H. Overmars. Multilevel Path Planning for Nonholonomic Robots Using Semiholonomic Subsystems. Int. J. of Robotics Res., 17:840-857, 1998. [25] T. Siméon, S. Leroy, & J.P. Laumond. Path Coordination for Multiple Mobile Robots: a Geometric Algorithm, Proc. Int. Joint Conf. on Artificial Intelligence , 1999.REFERENCES[26] P. Svestka, & M.H. Overmars. Coordinated Motion Planning for Multiple Car-Like Robots Using Probabilistic Roadmaps, Proc. IEEE Int. Conf. on Robotics and Autom., p. 1631-1636, 1995.[1] R. Alami, F. Robert, F. Ingrand, & S.Suzuki. Multi-Robot Cooperation Through Incremental Plan-Merging, Proc. IEEE Int. Conf. on Robotics and Automation , p. 2573-2678, 1995.[27] C.W. Warren. Multiple Path Coordination using Artificial Potential Fields, Proc. IEEE In. Conf. on Robotics and Autom., p. 500-505, 1990.[2] T. Arai & J. Ota. Motion Planning of multiple mobile robots. Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Syst., p. 1761-1768, 1992.。

Sampling-Based Motion Planning Using Uncertain Knowledge

Sampling-Based Motion Planning Using Uncertain Knowledge

Sampling-Based Motion Planning Using Uncertain KnowledgeTechnical Report06-30Brendan Burns Oliver BrockLaboratory for Pereceptual RoboticsDepartment of Computer ScienceUniversity of Massachusetts AmherstJune2006AbstractSampling-based algorithms have dramatically improved the state of the art in robotic motion plan-ning.However,these approaches still make significant assumptions that limit their applicability forreal-world planning.This work describes how one of these assumptions:that the world is perfectlyknown,can be removed.We propose a predictive roadmap planner that incorporates uncertainty directlyinto the planning process.This enables the planner to identify configuration space paths that minimizethe risk due to uncertainty and,when necessary,directs sensing to reduce uncertainty.Experimentalresults in several domains indicate that the predictive roadmap is adept at planning despite uncertaintyin its perception of the workspace.1IntroductionIn the past decade,sampling-based algorithms[9,10]have significantly advanced the state of the art in motion planning.However,these planners make several significant assumptions that limit their applica-bility to real-world robotic tasks.In particular,these approaches assume that the world is static,that the path,once computed,is executed perfectly by the physical robot and that the workspace is perfectly known. Using sampling-based motion planning for real-world robots,such as the autonomous mobile manipulator shown in Figure1,requires relaxing these assumptions.Real-world environments often contain dynamically moving objects,motions of physical robots are subject to execution error and knowledge of the workspace must be obtained from noisy sensors.In this work we focus on thisfinal assumption.We propose a novel, predictive roadmap planning algorithm that integrates uncertainty directly into the planning process.This integration allows the planner to compute paths that minimize risk despite potential errors in the representa-tion of the workspace.The integration of uncertainty into the planner also enables it to actively direct sensor refinement.Most robots possess many different sensors for perceiving the world.Each of these sensors has different accuracy characteristics and computational costs associated with it.Even within a single sensor modality,different granularities of perception are possible with associated differences in computational cost.A motion planner that is aware of the uncertainty in its perception should also be capable of adjusting the granularity of its perception of different regions of workspace to adapt sensing to the planning rge open regions of the configuration space may permit a significantly coarser granularity of sensing than constricted regions. The integration of uncertainty and sensing into the planner enables a robot to minimize the sensing required to successfully motion plan in a given environment.Figure1:A platform for autonomous mobile-manipulation under construction at UMassThis paper proposes an approach to integrating uncertainty and sensing into sampling-based motion planning based upon the idea of a predictive roadmap.In contrast to a traditional roadmap,the predictive roadmap does not guarantee that the roadmap graph is unobstructed.Instead,each node and edge in the graph is labeled with the probability that it is obstructed or free.The predictive roadmap allows cycles in the roadmap graph.Maintaining redundant paths provides alternatives in the advent that a particular path is obstructed.Allowing cycles also allows the planner to add new more certain paths between configurations despite the presence of another path in the roadmap.For any particular path query,the predictive roadmap is queried using A*[17]tofind the path that is most certain.If this search fails tofind a satisfactorily certain path,information obtained in the search can be used to identify areas of the workspace where further sensing is required.2Related WorkNoisy sensing and uncertainty in representation has received significant attention in thefield of simultaneous localization and mapping(SLAM)for mobile robots.Thrun et al.[18]provide an excellent summary of this work.There has been significantly less work in thefield of motion planning for robots with many degrees of freedom.Classically,the problem of uncertainty in motion planning was addressed with preimage backchain-ing[12].This approach deals with planning given uncertainty in the effects of a motion command.Thefirst use of a predictive roadmap is Burns and Brock[4].This work does not address perceptual uncertainty,but rather used statistical approximate models to predict the state of roadmap edges to improve the efficiency of the motion planner.FuzzyPRM[14]also explores assigning probabilities to edges in the roadmap.The only other work to directly address perceptual uncertainty in sampling-based motion planning is Missiuro and Roy[13].This work describes a method for adapting the configuration space sampling distri-bution based upon the certainty of the sampled configurations.Like predictive roadmaps,paths are found in the resulting roadmap using A*search and a uncertainty heuristic.This work only addresses simple vertex based model of uncertainty and planning for simple2-DOF mobile robot.Another source of uncertainty in motion planning is the movement of dynamic obstacles.Leven[11] explores how to improve the efficiency of PRM planning to allow it to operate quickly enough to facilitate re-planning in the presence of dynamic obstacles.Jaillet[8]introduces cycles in the roadmap graph andlabeling of edges with possible obstructing obstacles in order to allow the planner to reason about whichpaths are feasible given certain moving obstacles.van den Berg[19]also addresses roadmap based planning in dynamic environments.Another method for dealing with perceptual uncertainty is augmenting the execution of a motion plan with online control that is capable of obstacle avoidance.The elastic strip algorithm[3]dynamically mod-ifies a computed motion plan to avoid dynamic objects.This approach can only generate paths that are homeotopic with the original path.Alami et al.[16]use anticipation of potential obstacle movement and local control to adjust the velocity and trajectory of a mobile robot following a specified path.Yu and Gupta[20]have proposed work that integrates sensing into motion planning.They use an information theoretic approach that guides a eye-in-hand robot to perform the set of motions that reduce the entropy in the robot’s perception of its environment.This exploration is focused on understanding the environment rather than reducing the uncertainty of the motion planning.Other work is not directly related to uncertainty but has explore concepts that are central to the predictive roadmap zyPRM[2]delays the evaluation of edges in the roadmap until a path query is received and uses A*to search for roadmap paths.Others[15,7]have also explored the use of cycles in roadmaps for redundant motion planning.3Incorporating Uncertainty with Predictive RoadmapsWhen sensor data from the real world is used to perceive the workspace,motion planning algorithms must be aware of the inherent uncertainty in this perception.The process of sensing introduces error into the model of the workspace.When the motion planning algorithm uses this model to determine if a configuration is uncertain or free,this error introduces uncertainty into the resulting motion plans and may lead to invalid paths.Traditional PRM planning assumes that its perception of the workspace is perfect.When this is not true,two important things must be considered:First,it is necessary to maintain redundant paths connecting configurations.This allows for recovery in the advent that a path is found to be obstructed.It also allows the motion planner to add new paths that are more likely to be free than existing uncertain paths connecting pairs of configurations.Second,complete examination of the trajectories that connect configurations is no longer warranted.Edge checking has been found[4]to be the primary computational expensive in PRM planning.This computational expense is not worthwhile in uncertain environments.Given uncertainty, exhaustive examination of a trajectory does not provide significantly better guarantees about the state of an edge compared to less expensive approximations of the edge’s state.These distinctions motivate the development of the predictive roadmap for motion planning which allows cycles and labels each edge with the probability that it obstructed or free.Methods for assigning this probability are discussed in Section4. For the time being,we will simply assume that each edge has been labeled appropriately. ConstructionIn traditional PRM planning,as configuration space is sampled,edges are directly added into the roadmap graph by modifying the representation of the graph stored in memory.The predictive roadmap is not built in this way.Instead it is constructed lazily.Pieces of the predictive roadmap are only actually constructed in response to specific motion planning queries.Delaying the concrete construction of the roadmap until queried has three significant benefits.First,it allows the predictive roadmap to maintain redundant paths with no computational cost.Because edges are not inserted into the roadmap until required,there is no cost associated with allowing redundant edges.Only when those redundant edges are required by actual path planning are they inserted into the graph.Second, delayed evaluation of the roadmap ensures that only those parts of the roadmap that are required to satisfya query are actually computed.If a portion of the configuration space is never required to solve a query, the roadmap associated with that region is never constructed.This adapts computation in a task-specific manner.Finally,delaying the construction of the roadmap ensures that the most up to date information about the workspace is used for construction.While waiting for path queries,the robot may be refining its perception of its workspace.Delaying the evaluation of edges until they are required means that the latest knowledge is always used.The predictive roadmap is sampling-strategy agnostic.Numerous sampling strategies for selecting con-figurations have been proposed[1,6,5,13]including one[13]that directly incorporates uncertainty into the sampling strategy.Any one of these can be used to select the samples that define a predictive roadmap.For simplicity we used uniform sampling in our experimental evaluation(Section5).QueryingWe use the A*[17]algorithm to perform path queries and construct the predictive roadmap.The A*al-gorithm is a general approach for searching implicitly defined graphs.For a concrete implementation it requires a cost function,a heuristic function and a method of obtaining the children of a node.The function for estimating the cost of an edge in the predictive roadmap is:obstructedobstructed is the probability that the edge is obstructed.Methods for estimating this probability are described in Section4.is a constant that estimates the cost of moving along an edge that turns out to be obstructed.is a constant that ensures that no edge has a cost of zero even if its probability of obstruction is zero.This plays an important role in balancing A*’s exploration of paths likely to be free with directed search towards the goal.For the heuristic function we use the Euclidean distance in the configuration space.All else being equal,this biases the A*search toward shorter paths through the configuration space.When A*chooses to expand a node,the set of nearest neighboring configurations are found and returned as possible children of.For each configuration,the planner evaluates the probability that the edge from to is obstructed.If this probability is greater than a threshold,the is discarded. Otherwise,an edge is added to the predictive roadmap.Both the edges and their probabilities are cached across multiple instances of A*search in to prevent redundant computation.This method of node expansion is modeled on the traditional PRM connection step.When A*searchfinds a path,its certainty is evaluated.If the minimal certainty that the path is free exceeds a threshold,it is returned as the result of the query.If this threshold test fails,the planner indicates that further refinement of the workspace representation is required to compute a path with the requisite certainty.RefinementFor a given representation of the workspace,a planner may be unable tofind a path whose certainty satisfies the requirements of the robot’s task.However,in such circumstances,the process of searching indicates workspace regions that require further sensing tofind a satisfactory path.This refinement proceeds as follows:Each edge in the path whose uncertainty is below some task-dependent certainty threshold indicates a region of the workspace that requires further exploration.Given these workspace regions,sensing is initiated by the planning algorithm to reduce their uncertainty.Once sensing is completed,the certainty of the path is re-evaluated.If the certainty of the path is now great enough,the path is returned as the result.Otherwise,the uncertain edges are removed from predictive roadmap and to enable the search for analternate path.This post-process step ensures that sensingfidelity is adapted as required by motion planning queries.This minimizes the cost of sensing required for successful motion planning.4Modeling UncertaintyThe predictive roadmap requires an estimate of the probability that each edge is obstructed.This estimate is dependent upon the underlying representation of the workspace.Each representation of the workspace introduces different types of error.Analysis of the source of this error provides a method for reasoning about the resulting uncertainty.In this work,we consider two different representations:a workspace occupancy grid and a representation of localized obstacles with known geometry.4.1Occupancy Grid RepresentationsAn occupancy grid subdivides the workspace into series of cells.Although some occupancy grids allow mixed or probabilistic labels for cells,we assume that the grid is binary.Cells are marked as either obstructed or free.To evaluate the state of a configuration,the workspace location of the robot given the configuration is computed.If any of the cells that overlap this location are obstructed,the configuration is considered obstructed.If all of the cells are free,the configuration is free.Error in this representation causes a cell that is obstructed to be marked as free or vice-versa.In either case,error introduces uncertainty in the state of roadmap edges.To evaluate this uncertainty we examine the problem as an example of observed data emitted by some underlying unknowable hidden process.In this context,the examination of an edge results in a series of observations of whether a particular configuration is obstructed or free.The planner can not know the true state of each configuration.It must instead obtain information through the fallible workspace model.This means the planner receives noisy observations of the true state of each configuration.Given a sequence of noisy observations,the task is to predict the underlying hidden state.There are many well known methods for solving this problem.We examine two:a naive Bayes model and a hidden Markov model(HMM)[17].The following describes the details of these models.Section5examines their accuracy. Naive BayesA naive Bayes model derives its name from its assumption that all observations are independent of each other.For edge checking,this assumption is false,hence the model’s naivete.However,the falsehood of this assumption does not significantly impact the model’s predictive performance.By assuming each observation is independent,the probability of a hidden state given a series of observations is simply the product of each individual probability.For our work,the probability of an obstructed edge is given by:obs.obs.Figure2:The hidden Markov model used for predicting the state of an edge given a set of observations.Hidden Markov ModelsHidden Markov models(HMMs)are a graphical model[17].Unlike naive Bayes models,HMMs do not assume that each observation is independent.Thus they are a more accurate representation of sequential observations like the examination of an edge between configurations.We can model the noisy examination of a sequence of trajectories as the simple HMM shown in Figure2.There is a vertical slice in the model for each sequential observation of a configuration along the linear edge.In each slice the hidden node represents the true,unknowable,state of the configuration.The observed node contains the noisy observation generated by the planner’s representation of the workspace.Estimating the obstructed probability of a particular sequence of observations begins by labeling the observation nodes with the observed values.The Viterbi algorithm[17]is used to calculate the maximum likelihood sequence of hidden states given the observations.This estimate of the hidden state is used to predict if the edge is obstructed or free.The likelihood of the sequence of hidden states is used to estimate the certainty of this prediction.4.2Localized Obstacle RepresentationsAn alternative method for representing the workspace is to assume that the geometry of all obstacles in the workspace is known a priori and that these obstacles can be identified and localized using sensor informa-tion.We call this a localized obstacle representation.Given this representation,error takes the form of mispredicting the location of an obstacle rather than mislabeling the state of a cell.By assuming that the magnitude of this localization error follows a probability distribution,we can calculate the certainty of each observation of the state of a configuration in the workspace representation.Gaussian Error ModelsIn the following,we assume that the magnitude of the localization error follows a Gaussian distribution.We also assume that localization only introduces translational error in the position of an obstacle.We do not consider rotational error.Extending this approach to consider rotational error is part of our future plans.Whether a configuration is obstructed or free the certainty of that observation is related to the magnitude of localization error that would invalidate the observation.For obstructed configurations,we estimate this magnitude by calculating the penetration distance in the positive and negative directions along all three translational axes.This provides two translational magnitudes in each direction that would invalidate the observation.A simple example of this along one axis is illustrated in Figure3.In this example,the localized obstacle(the gray box)in the workspace model indicates that configuration is obstructed.If the magnitude of error in localization of the obstacle is between and then the true position of the obstacle overlaps and the observation is correct.By assuming the magnitude of localization error is drawn from a Gaussian distribution,then the probability that the magnitude of error thatFigure3:The possible results of localization error.If the localization error for the gray obstacle is between ,then the the configuration q’s state is correctly known.lies within a range that doesn’t invalidate the observation is given by the cumulative distribution function (CDF)of the Gaussian function:erfwhere erf is the Gauss error function,and are magnitudes in along the positive and negative axis respectively.We calculate this probability for each axis.The lowest probability is used as the certainty estimate.This is only an approximation of the true probability but seems to work reasonably well in practice. In general,penetration distances are challenging to calculate.Better approximations are possible but incur significantly greater computational costs.For free configurations we measure the distance from the robot to the closest obstacle()and use this value as the magnitude of error required to invalidate the free observation.The probability of this is also computed using the CDF.5Empirical EvaluationWe ran numerous experiments to demonstrate the suitability of the approach described in previous sections. This evaluation addressed two important questions.First,can the models discussed in Section4provide reasonable values for use in constructing the predictive roadmap?Second,does the predictive roadmap significantly improve the ability of a motion planner to compute collision-free paths in the presence of uncertainty?To answer these questions,we ran the planner in three simulated worlds designed to resemble real-world environments.Two of these are pictured in Figure4:a10-DOF mobile manipulator platform in an office environment and a14-DOF humanoid torso in a construction environment.A third world with a2-DOF cylindrical robot in the same workspace as the mobile manipulator was also used.We ran experiments in each of these worlds simulating both methods of representing the workspace.To simulate error in the occupancy grid representation we used an adjustable probability err that any particular cell in the grid was mis-labeled.To simulate error in the localized obstacle representation,we added translational error to each obstacle’s true position.The magnitude of the error was sampled from a Gaussian distribution and was different for each obstacle.(a)10-DOF Mobile Manipulator (b)14-DOF Humanoid TorsoFigure 4:The experimental workspaces and robots used to evaluate the predictive roadmap.5.1ModelingWe first examine the models used to assign values to the predictive roadmap.Occupancy GridTo evaluate the naive Bayes and hidden Markov models we generated 500free and 500obstructed edges in configuration space.The edges were chosen uniformly at random from the set of edges that were shorter than the radius used for the nearest neighbor query in motion planning.In this way we believe they are representative of the edges encountered in motion planning.For each edge,each model was asked to predict if the edge was obstructed or free.The traditional edge checking model simply returns obstructed if any configuration along the edge is observed to be obstructed.The fraction of the edges that each model correctly predicted as a function of the probability of error is shown in Figure 5.In all three worlds,as error increases traditional edge checking quickly devolves into predicting that all edges are obstructed.This occurs because the probability of moving along an edge and not receiving an erroneous observation rapidly becomes quite small.This means that even completely free edges are likely to result in an obstructed observation.In contrast,both the naive Bayes and hidden Markov models are significantly more robust to error.Interestingly,HMMs only slightly outperform naive Bayes models in the environments.From this we conclude that the added accuracy does not justify the increased computational cost of the HMM and in our planning experiments we chose to use the naive Bayes model for predictions.LocalizationUnlike the occupancy grid model,the model of uncertainty in localization can not be used to provide pre-dictions about the state of an edge,instead it provides the reliability of the workspace model’s predictions.This uncertainty is used to bias the posture of the robot toward configurations that are more certain to be free.An image of this uncertainty function for the 2-DOF cylindrical robot in the office cubicle environment is shown in Figure 6.In the image,black indicates obstructed with absolute certainty and white indicates free with absolute certainty.Looking at the image it is easy to see that a planner that selects motions biased by this uncertainty function will choose configuration-space paths that are more likely to be unobstructed.0 0.20.4 0.6 0.8 1 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4F r a c t i o n c o r r e c t Grid Error Fraction Cylindrical AccuracyTraditional Edge Checking Naive Bayes Classifier Hidden Markov Model(a)2-DOF Cylindrical Robot 0 0.2 0.4 0.6 0.8 1 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4F r a c t i o n c o r r e c t Grid Error FractionMobile Manipulator Accuracy Traditional Edge Checking Naive Bayes Classifier Hidden Markov Model (b)10-DOF Mobile Manipulator0 0.20.4 0.6 0.8 1 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4F r a c t i o nc o r r e c t Grid Error FractionHumanoid Torso AccuracyTraditional Edge Checking Naive Bayes Classifier Hidden Markov Model(c)14-DOF Humanoid TorsoFigure 5:Edge predictions accuracy for HMM,Naive Bayes and traditional edge checking as a function of error for three different robots with varying degrees of freedom.Figure 6:A graphical representations of the uncertainty function for the 2-DOF cylindrical robot using a localized obstacle model of the workspace.Black indicates absolute certainty of obstruction,white indicates absolute certainty of free space.0 0.20.4 0.6 0.8 1 0 0.1 0.2 0.30.4F r a c t i o n o f p a t h s f o u n d Grid Error Fraction Cylindrical RobotTraditional PRM Predictive PRM (a)2-DOF Cylindrical Robot0 0.2 0.4 0.6 0.8 1 0 0.05 0.1 0.15 0.2F r a c t i o n o f p a t h s f o u n d Grid Error Fraction Mobile Manipulator Traditional PRM Predictive PRM (b)10-DOF Mobile Manipulator 0 0.2 0.4 0.6 0.8 1 0 0.05 0.1 0.15 0.2F r a c t i o n o f q u e r i e s a n s w e r e d c o r r e c t l y Grid Error Fraction Humanoid Torso Traditional PRM Predictive PRM (c)14-DOF Humanoid TorsoFigure 7:Fraction of paths successfully found by tradition PRM and the predictive roadmap algorithm as a function of occupancy grid error.5.2PlanningIn addition to assessing the uncertainty models,we performed experiments to evaluate the use of these models in predictive roadmap motion planning.To evaluate the planner we generated a series of random path queries for each environment.We then asked both a traditional implementation of PRM planning and the predictive roadmap planner to find a path that satisfied the query.We varied the amount of error present in the workspace representation and observed both the percentage of queries that the planner could solve correctly and the average runtime that the planner took in computing these solutions.All of the path queries had solutions given accurate representations of the configuration space.All experiments were implemented in C++and were run on a 3Ghz Pentium 4with 1GB of RAM running the Linux operating system.Occupancy GridThe experiments using an occupancy grid representation introduced error by increasing the probability that a cell in the grid was mis-labeled.The naive Bayes model was used by the predictive planner to estimate the state of edges in the predictive roadmap.The fraction of path found and the average runtime for each planner in each environment as a function of error is shown in Figures 7and 8.From these graphs it is easy to see that the because the predictive roadmap algorithm is aware of uncer-0 5 10 15 20 25 00.10.20.30.4R u n t i m e (S e c o n d s )Grid Error Fraction Cylindrical RobotTraditional PRM Predictive PRM(a)2-DOF Cylindrical Robot 0 100 200 300 400 500 00.05 0.1 0.15 0.2R u n t i m e (S e c o n d s )Grid Error FractionMobile Manipulator Traditional PRM Predictive PRM(b)10-DOF Mobile Manipulator0 200 400 600 800 1000 00.05 0.1 0.15 0.2R u n t i m e (S e c o n d s )Grid Error FractionHumanoid TorsoTraditional PRM Predictive PRM(c)14-DOF Humanoid TorsoFigure 8:Runtime required to successfully find a path as a function of occupancy grid error.。

motion

motion

motionMotion: Understanding the Basics and ApplicationsIntroductionMotion is a fundamental concept in various fields, including physics, engineering, and computer science. At its essence, motion refers to the movement of an object or a system of objects in relation to a frame of reference. This document aims to provide an overview of the basic principles of motion and explore its applications in different domains.I. Newtonian Mechanics and MotionMotion, as understood in classical physics, is primarily governed by the laws of motion formulated by Sir Isaac Newton. These laws describe how the motion of an object changes in response to forces acting upon it. The three laws of motion can be summarized as follows:1. Law of Inertia: An object at rest tends to stay at rest, and an object in motion tends to stay in motion with the same speed and direction unless acted upon by an external force.2. Law of Acceleration: The acceleration of an object is directly proportional to the net force exerted on it and inversely proportional to its mass.3. Law of Action and Reaction: For every action, there is an equal and opposite reaction.Understanding these laws is crucial for predicting and analyzing the motion of objects in a wide range of scenarios, from everyday movements to celestial mechanics.II. Types of MotionMotion can be classified into various types based on different criteria. Some common types of motion include:1. Linear Motion: This type of motion refers to the movement of an object along a straight line. Examples include themotion of a car on a straight road or a ball rolling down a slope.2. Circular Motion: Circular motion involves the movement of an object along a circular path. A simple example is the motion of a satellite orbiting the Earth.3. Oscillatory Motion: Oscillatory motion is characterized by repeated back-and-forth or to-and-fro motion about a fixed point or equilibrium position. Simple pendulums and springs undergoing compression and expansion are examples of oscillatory motion.4. Rotational Motion: Rotational motion occurs when an object spins or rotates about a fixed axis. The spinning of a top or a spinning wheel are examples of rotational motion.5. Projectile Motion: Projectile motion describes the motion of an object launched into the air at an angle, following a curved trajectory. A perfect example is the motion of a thrown ball.III. Applications of MotionUnderstanding motion has significant practical applications across multiple fields. Some notable applications include:1. Robotics: Motion planning and control are crucial in the field of robotics. Robots need to precisely navigate their surroundings and perform tasks, such as assembly or movement, based on predefined algorithms.2. Animation and Gaming: The realistic animation of characters and objects in movies, video games, and virtual reality environments heavily relies on understanding and replicating natural motion. Accurate physics-based simulations are key to providing immersive virtual experiences.3. Transportation: The study of motion plays a vital role in the design, analysis, and optimization of transportation systems. Understanding the principles of motion helps engineers develop efficient vehicles and transportation networks, leading to improved safety and reduced energy consumption.4. Sports Science: Analyzing and enhancing human motion in sports is essential for improving performance, preventing injuries, and optimizing training programs. Motion capture technology allows coaches and athletes to analyzebiomechanics and technique, leading to better results on the field.ConclusionMotion is a fundamental concept across various disciplines and has a wide range of applications. Understanding the principles of motion, as described by Newtonian mechanics, enables us to predict and manipulate the movements of objects and systems. From robotics to animation, transportation to sports science, motion plays a vital role in advancing technology and improving our understanding of the world around us.。

视觉基于的抓取规划与移动手臂实验说明书

视觉基于的抓取规划与移动手臂实验说明书

Vision-Based Grasp Planning and Experiments of a Mobile ManipulatorYi-Fu ChiuInstitute of Electrical Control Engineering, National Chiao Tung University1001 Ta Hsueh Road, Hsinchu 30010, TaiwanKai-Tai SongInstitute of Electrical Control Engineering, National Chiao Tung University1001 Ta Hsueh Road, Hsinchu 30010, TaiwanE-mail:**************************.tw,****************.edu.twAbstractIn this paper, a motion planner is designed and implemented for of a mobile manipulator to travel to a spot for grasping of an object. In this work, the probability of successful grasping inside the workspace of the robot arm is used for grasping planning. A vision SLAM system is combined with reachability calculation to figure out the grasping position. Using a laboratory dual-arm robot, we conducted experiments in different conditions to verify the effectiveness of the developed system.Keywords: Mobile manipulation, visual servoing, visual navigation, robot grasping.1.IntroductionMobile manipulation technologies have been developed rapidly in recent years. It has been expected that various robots will come into our everyday life for living aids.In order to help people with housework, a robot needs to be equipped with many abilities such as communication, navigation, grasping and object recognition. Therefore, developing a mobile manipulator which is able to move and handle objects such that it can support people in a home setting deserves urgent attention.Cosero1has achieved notable success in RoboCup@Home contests for its mobile manipulation performance. It localizes objects and plans path based on a 2D occupancy grid map constructed from multiple3D scans of the environment2. Collision with obstacles can be avoid using the map, even though it cannot perceive an obstacle with current sensor’s view. PR23 processes sensory data from 3D visual sensors in point clouds. Multilayered 2D costmaps and a layered representation of the robot body are used to reduce the possibility of collision4. In the work of Stulp et al.5, through experience-based learning, the robot first learns a so-called generalized success model which distinguishes among positions from which manipulation may succeed or fail. The model is used to compute a probability distribution that maps positions to a predicted probability of successful manipulation, taking into account the uncertainty of the robot and object’s positions.A key factor of mobile manipulation design is that a suitable pose of the mobile robot needs to be obtained through navigation in order to conduct object manipulation. One possibility is simply to travel from the current position to a position such that the target object is reachable, as depicted in Fig.1. As shown in Fig 1(a), the object is placed in front of the table, where an obstacle exists to influence the grasping. In this case, the object might not be successfully grasped due to the limited workspace of the robot arm. However, if the mobile robot travels to the side of the table as shown in Fig. 1(b), the object can be easily grasped. It is thereforeJournal of Robotics, Network ing and Artificial Life, Vol. 1, No. 4 (March 2015), 257-260desirable to develop a method to plan a pose for the robot such that the target object is easy-to-reach.Attamimi et al.6 proposed a method to use perception maps that express the robot arm reachability and robot platform navigation reachability to a target object. The authors also show that by integrating these two maps, a proper position that has maximum probability of successful grasping is determined. In this study, we developed a motion planning system for determining the proper pose for grasping objects. The robot position is determined based on the arm’s reach and ease of movement. We adopted the design of Attamimi et al.6 to describe the limitations of robot workspace, the situation of environment and target object position. By constructing the reachability map that represents the ease of robot mobile platform motion and the graspability map that describes the success rate of grasping of robot arms in workspace, robot is able to determine the proper position to move toward and increase the success rate of mobile manipulation task. Visual EFK-SLAM and visual servo technics are integrated into the design for vision-based grasping of the object.2.The Dual Arm RobotA dual-arm robot is designed and constructed for mobile manipulation experiments. The 6-DOF dual-arm design allows the robot to grasp various objects in the environment. A Kinect RGBD camera is installed on top of the robot for visual servo control. The forward and inverse kinematics of two robot arms are derived and implemented on the robot controller. Two laser range finders are equipped on the mobile platform in forward and backward directions for obstacles detection. An on-board industrial PC (IPC) is used for data processing and motion control of the robot. Thesteer- and-drive mechanism allows the robot to move in omnidirection 7. The implemented EKF-SLAM 8 allows the robot to navigate and localize itself in the environment. The visual servo system 9 allows the robot to move and fetch an object without collisions. A. Visual EKF-SLAMA visual SLAM system has been developed for the omnidirectional mobile robot 8. There are three main steps of the EKF-SLAM: a measurement step to acquire the location vector of robot relative to the feature in the environment, a prediction step to predict the location of robot using motion model and an update step to calculate Kalman gain and update robot stare according to above two steps. Then, the updated robot state is delivered to prediction step to calculate the next moment state of robot. These steps repeat to achieve the real time localization and mapping. B. Visual servo graspingBy using the depth data from Kinect, the input image is segmented into several planes. In this method 9, the plane that the target object locates is first labeled. The labeled plane is featured by SURF algorithm and is matched with data base to find where the target object is. Except for the target object, the other objects in the environment are obstacle for robot manipulation. Potential field is then combined gravity and repulsion with these indices for safe path planning of robot arms. According to this, the robot is able to move its arms in the workspace without collisions and to grasp the targeted object.3.Grasp PlanningThe proposed system architecture is shown in Fig. 2. For representation of the environment, we utilize EKF-SLAM algorithm and Kinect sensor to construct a feature map of the environment. For grasping planning, we adopted the method of Attamimi et al.6 in this design. The feature map is used as input to the mobile platform to calculate the easiness of traveling to the reachable region of the robot. On the other hand, the target location is inputted to arm graspability calculation to represent the extent of graspable region of robot arms. We fuse the two representations to decide the goal location for robot to grasp the target object. This workuses EKF-SLAM algorithm for robot navigation(a) (b) Fig.1 An example of mobile manipulation is influenced by an obstacle in front of the target.control. First, we extract SURF features that represent the configuration of the environment. Then, EKF is utilized to improve the localization error and to correct the coordinates of features. For grasping, we use Kinect for environment detection so that the robot is able to detect not only the target object but also the obstacle in the environment and to guide its arms to fetch the target. The inverse kinematics solution map suggested in the work of Attamimi et al.6is adopted to calculate reachable/unreachable points. Then a probability distribution is given to all reachable points for later decision making of the grasping position.To include the description of robot navigation in the motion planning of grasping, we have to consider the navigation control of the mobile robot in an environment. Obstacles in the environment influence the motion planning of the robot. In this work, a feature map is built to express the environment by using EKF-SLAM algorithm. We adopted the method proposed by Attamimi et al.6to construct a reachability map for motion planning. The closer to obstacles, the less probability is given for the robot to reach. The motion planning of the robot is then determined for the robot to move to the maximum probability of the integrated representation of the navigation and arm motion6.4.Experiments and discussionThree interesting experiments have been carried out for grasp planning to verify the effectiveness of the proposed system. We first verify that different locations of target object in the same environment will result in different robot navigation behavior. Then we add an obstacle around the target area to verify the adaptation of the robot.A. Target object in different locationsFig. 3 shows the environment of this experiment, Condition A and Condition B are termed to distinguish two locations of the object on the table. We first utilize EKF-SLAM to construct the feature map of the environment. Then we replace the features with the size of robot. Fig. 4(a) shows the planning result of condition A. It is noted that the planned grasping position is in front of the table, which is depicted in dotted line. For condition B, the planned grasp position is at the side of the table, as shown in Fig. 4(b). Fig. 5 shows the experiment by real robot in Condition A and Fig. 6 shows the experiment in Condition B.B. Grasping planning affected by obstaclesThe purpose of this experiment is to verify the proposed method is able to judge the proper grasp position under the obstacle situation. Fig. 7 (a) shows the setup of this experiment, termed Condition C. The target object is at the same position as Condition A but is behind the obstacle. The planning result is shown in Fig. 7(b). The graspable region moves to the side of the table because of the graspability affected by the obstacle changes. Compare with Condition A, the difference is the obstacle effect that causes the different planning fromproposed method.(a) (b) Fig. 3 Environment setup for experiments (a) Condition A, target in the front of the front of the table, (b) Condition B,target on the side of the table. Fig.2 System architecture5. Conclusions and Future WorkA dual-arm robotic system has been designed and implemented for mobile manipulation tasks in an unstructured environment. A grasp planning system is developed based on visual SLAM and visual servoing of the robot. The proposed system has been verified in three different conditions. The experimental results show that the robot can adapt to the variation of environment and find a suitable place to grasp the object. In the future, object fetching in more extensive environments will be studied. AcknowledgementsThis work was partly supported by Ministry of Science and Technology of Taiwan, under grant NSC 102-2221-E-009-140. References1. Jorg Stuckler and Sven Behnke, “Benchmarking Mobile Manipulation in Everyday Environments,” in Proc. of 2012 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), (Munich, Germany, 2012), pp. 1-6.2. J. Klaess, J. Stuckler and S. Behnke, “Efficient MobileRobot Navigation Using 3D Surfel Grid Maps,” in Proc. of 7th German Conf. on Robotics (ROBOTIK), (Munich, Germany, 2012), pp. 1-4.3. Sachin Chitta, E. Gil Jones, Matei Ciocarlie and KaijenHsiao, “Mobile Manipulation in Unstructured Environments: Perception, Planning, and Execution,” in IEEE Robotics & Automation Magazine, 19(2012), 58-71.4. Hornung, A., Phillips, M., Jones, E.G., Bennewitz, M.,Likhachev, M. and Chitta, S., “Navigation in Three-Dimensional Cluttered Environments for Mobile Manipulation,” in Proc. of 2012 IEEE International Conference on Robotics and Automation , (Minnesota, USA, 2012), pp. 423–429.5. Freek Stulp, Andreas Fedrizzi and Michael Beetz, “Action-Related Place-Based Mobile Manipulation,” in Proc. International Conference on Intelligent Robots and Systems , (St. Louis, MO, USA, 2009), pp. 3115-3120.6. Muhammad Attamimi, Keisuke Ito, Tomoaki Nakamura and Takayuki Nagai, “A Planning Method for Efficient Mobile Manipulation Considering Ambiguity,” in Proc. International Conference on Intelligent Robots and Systems , (Vilamoura-Algarve, Portugal, 2012), pp. 965-972.7. Kai-Tai Song, Sin-Yi Jiang and Chia-Chang Wu, “PoseControl of a Four-Wheel Drive Omni-Directional Mobile Robot,” in Proc. of 2012 CACS International Automatic Control Conference, ( Yunlin, Taiwan, 2012).8. Chien-Hung Liu and Kai-Tai Song, “A New Approach toMap Joining for Depth-Augmented Visual SLAM,” in Proc. of ASCC 2013, (Istanbul, Turkey, 2013).9. Kai-Tai Song and Shih-Cheng Tsai, “Vision-BasedAdaptive Grasping of a Humanoid Robot Arm,” in Proc. of IEEE ICAL 2012, (Zhengzhou, China, 2012), pp.160-165.(a) (b)Fig. 7 (a) Environment setup of Condition C (b) theplanning result of Condition C.(a) (b)Fig. 4 (a) The planning result of Condition A, (b) theplanning result of Condition B.(a) (b) (c)Fig. 5 Task execution of Condition A on real robot(a) (b) (c) Fig. 6 Task execution of Condition B on real robot。

差动驱动式移动机器人的运动规划

差动驱动式移动机器人的运动规划

第35卷 第9期2003年9月 哈 尔 滨 工 业 大 学 学 报JOURNA L OF H ARBI N I NSTIT UTE OF TECH NO LOGYV ol 135N o 19Sep.,2003差动驱动式移动机器人的运动规划柳长安,李国栋,刘春阳(华北电力大学计算机科学与技术系,北京102206)摘 要:提出了差动驱动式移动机器人的运动规划算法.推导出差动驱动式移动机器人的运动学模型,给出了基于广义雅可比矩阵的差动驱动式移动机器人分解运动速度控制算法.计算机仿真验证了该算法的正确性.关键词:移动机器人;差动驱动;广义雅可比矩阵;运动规划中图分类号:TP24216文献标识码:A文章编号:0367-6234(2003)09-1095-03Motion planning for differential drive mobile robotLI U Chang 2an ,LI G uo 2dong ,LI U Chun 2yang(Dept.of C om puter Science and T echnology ,N orth China E lectric P ower University ,Beijing 102206,China )Abstract :A m otion planning alg orithm has been proposed for a differential drive m obile robot by developing a kine 2matic m odel and proposing a res olved m otion rate control alg orithm based on G eneralized Jacobian Matrix and the ef 2fectiveness of the alg orithm is verified through com puter simulation.K ey w ords :m obile robot ;differential drive ;generalized jacobian matrix ;m otion planning收稿日期:2003-05-01.基金项目:华北电力大学(北京)博士科学基金资助项目.作者简介:柳长安(1971-),男,博士,讲师. 机器人足球比赛所采用的机器人都由1个移动平台和2个独立驱动的驱动轮组成,称为差动驱动式.平台有2个独立的驱动轮,该种机构组成简单,而且旋转半径可从0到无限大任意设定.当旋转半径为0时,由于能绕本体中心旋转,所以有利于在狭窄场所改变方向.本文针对差动驱动式移动机器人运动特点,研究了一种新的运动控制方法,使移动机器人的运动分解为2个驱动轮的转动.1 运动学模型差动驱动式移动机器人的几何模型如图1所示[1,2].主要参数如下:F 为移动平台上的关键点;G 为移动平台的质心;b 为2个驱动轮之间的距离;r 为驱动轮的半径;l G 为点G 与点F 之间的距离;<为平台的转角;θl 为左驱动轮的转角;θr 为右驱动轮的转角.图1 差动驱动式移动机器人几何模型Fig.1 M odel of differential drive m obile robot 设平台质心的速度为v G ,它垂直于平台的轮轴,因此它在2个坐标轴上的分量分别为x G =v G cos <, y G =v G sin <,消去v G 得x G sin <-y G cos <=0.点F 与点G 有如下位置关系:x F =x G +l G cos <,(1)y F =y G +l G sin <.(2)对式(1)、(2)求导可得 x F = x G -l G <sin <, y F = y G +l G <cos <.F 点的速度关系可以表示为x F sin <- y F cos <+l G <=0,平台质心的速度与驱动轮转速的关系式为<= θr r - θl r b ,v G =r2( θl+ θr ),可以得到矩阵表达式为x Fy F <=r2c os <+l G rbsin <r2c os <-l G rbsin<r2sin <-l G rbc os <r2sin <+l G rb c os <-rbrbθl θr,从而得P ・=J Φ・.(3)式中:P ・=x F y F <T ;Φ・= θl θrT;J =r2cos <+l G rbsin <r 2cos <-l G rb sin <r2sin <-l G rb cos <r2sin <+l G rbcos <-r br b.J 定义为差动驱动式移动机器人的广义雅可比矩阵,它与基座固定的机械手雅可比矩阵类似,反映差动驱动式移动机器人关键点的速度与驱动轮转速之间的关系.2 分解运动速度控制由式(3)可知,给出机器人驱动轮转速可以求出机器人上关键点的速度,给出移动机器人关键点的速度也可以求出驱动轮转速,这时驱动轮控制量可由下式给出:Φ=J -1 P .采用计算机控制,速度项可以表示为单位时间里(控制周期Δt )的位移增量,即ΔΦ=J -1ΔP .(4)式中:ΔP 为一个控制周期内的位移变化;ΔΦ为同一时间间隔内的角度变化.如果给定F 点的轨迹,当Δt 选定时,可以确定ΔP ,将其代入式(4)求出各个角度增量ΔΦ,可得到各个角度的给定值,最后由角度伺服控制系统去实现位置控制,这就是分解运动速度控制.差动驱动式移动机器人的分解运动速度控制是一种基于逆广义雅可比矩阵的连续路径跟踪控制方法[3,4].该算法要考虑合理的控制周期.分解运动速度控制方法通过控制每个周期内的角度输出量实现对轨迹的跟踪或速度控制.如果控制周期选择的太大,则运动控制的精度难以达到要求,如果控制周期选择过小,在每个周期内都要计算逆矩阵,给实时控制造成困难[5].具体算法描述如下:步骤(1):输入机器人的初始状态信息.平台质心的位置为r G (0),平台的方向角为<(0),驱动轮的转角θl (0)=0、θr (0)=0,置i =0.步骤(2):输入点F 的轨迹x F =f 1(t )、y F =f 2(x F ).确定控制周期Δt ,并把整个运动过程分为N 步完成.步骤(3):确定该周期的ΔP ,计算ΔΦ(i )=J -1ΔP (i ),确定该周期内的ΔΦ(i ).步骤(4):按照ΔΦ(i )驱动机器人运动.步骤(5):确定系统的新状态.平台质心的位置为r G (i +1)=r G (i )+Δr G ,平台的方向角为<(i +1)=<(i )+Δ<,驱动轮的转角为θl (i +1)=θl (i )+Δθl (i ),θr (i +1)=θr (i )+Δθr (i ).步骤(6):if (i =N )转步骤(7),否则i =i +1,转步骤(3).步骤(7):结束.3 计算机仿真为了验证本文提出的差动驱动式移动机器人运动规划算法的正确性和可行性,在I BM -PC 机上进行了计算机仿真,仿真软件采用MAT LI B 可视化编程工具.机器人系统的参数选择半自主足球机器人的实际参数,其中b =01075、l G =010375、r =0101875.初始条件为(x 0F ,y 0F ,<0)=(011,015,90°);终了条件为(x n F ,y n F ,<n )=(016,011,0°).假设F 点的轨迹为x F =a 1t +a 0,y F =b 1(x E )3+b. 设整个运动过程用时间为10s ,取Δt =0105s.图2给出了机器人平台上关键点的理想运动轨迹,图3给出了移动机器人的运动过程仿真结果.从仿真结果可以看出,差动驱动式移动机器人运用该控制算法分别驱动2个驱动轮转动可以实现机器人上关键点跟踪预定轨迹,同时也能完成机器人方向角的改变.・6901・哈 尔 滨 工 业 大 学 学 报 第35卷 图2 移动机器人的理想运动轨迹Fig.2 Desired path of m obile robot图3 移动机器人运动过程的仿真结果Fig.3 Animation of m otion of m obile robot 4 结 论(1)该算法是一种基于广义雅可比矩阵的分解运动速度控制算法,使移动机器人的运动分解为驱动轮的转动.(2)利用该算法应该考虑合理的控制周期.如果控制周期选择的太大,则运动控制的精度难以达到要求;如果控制周期选择过小,则计算速度慢,给实时控制造成困难.(3)该算法很好地解决了差动驱动式移动机器人系统的运动规划问题,为研究此类系统的动力学问题奠定了基础.参考文献:[1]蔡自兴.机器人学[M].北京:清华大学出版社,2000.[2]白井良明.机器人工程[M].北京:科学出版社,2001.[3]薄喜柱.基于多智能体合作的机器人足球比赛系统的研究[D].哈尔滨:哈尔滨工业大学,2001.[4]洪炳熔.机器人足球比赛-发展人工智能的里程碑[J].电子世界,2000(4):4-5.[5]SER J I H.A unified approach to m otion control of m obile ma2nipulators[J].The International Journal of R obotics re2 search,1998,17(2):10-15.(编辑 王小唯)(上接第1094页)适应值函数的影响,否则进行惩罚———取负值,使适应值减小.通过f来评价路径在遗传过程中的适应值,f越大,路径的优化程度越高.113 遗传操作步骤(1)根据编码方案,把路径点编码成位串形式,转化为染色体(路径).(2)选择合适的参数:群体的大小(所含个体数M)、交叉概率p c和变异概率p m.(3)随机产生一个初始群体即M条路径.(4)根据适应值函数计算每条路径的适应值f i,并计算群体的总适应值F=Σf i,i= 1……M.(5)选择:计算每一条路径的选择概率p i=f i/F及累计概率q i=Σp j,j=1,…I.(6)交叉:对每条路径产生[0,1]间随机数r,如果r<p c,则该条路径参加交叉操作,如此选出参加交叉的一组路径后,随机配对;对每一对,产生[0,1]间的随机数以确定交叉的位置.(7)变异:如果变异概率为p m,则可能变异的位数的期望值为p m・m・M(m为染色体串长,M 为群体).(8)如果新个体数未达到M个,则转向第(5)步继续进行遗传操作,否则代数加一,d=d+1;将新群体的M条路径的适应值由大到小进行排序;保存适应值最大的路径点,如果d≠g(g是设定的代数),则转向第(4)步,否则选用g代中f 值最好的路径上的点.2 结 语在仿真足球机器人的决策系统中进行了实验,在实验中,选择与同一支队伍进行比赛,使用同一种战术,进行40场比赛,在应用遗传算法避障后,每10场的平均进球数增加了近2个.本文中介绍的基于遗传算法的回避多个障碍的路径规划策略通过仿真系统和实赛的检验,显示出良好的效果,取得了可喜的成绩.(下转第1101页)・791・第9期柳长安,等:差动驱动式移动机器人的运动规划运动方程为I1I 2…I 6p =J 1a…0J 1u 0…J 2aω…0J 2uω……ωω0…ωω0…J 6a…J 6u q 1aq 2a …q 6a q 1uq 2u …q 6u ,或者A 0 p =B 0p q p .式中:A 0为24×4矩阵; p 为4×1矩阵RV XRRV YRRV ZRRωZRT,代表月球车的期望速度值和车体方向值;B 0p 为24×96的块对角矩阵;q ・p 为96×1矩阵,为待求解的可执行月球车变量;q ・ia 表示月球车可执行的变量数;q ・iu 表示月球车不需执行的变量数.采用最小二乘法,可执行的月球车变量的反向运动解为q a =[J T 1aJ 1a ]-1J T 1a[J T2a J 2a ]-1J T2a …[J T6a J 6a ]-1J T6ap .(7) 通过式(7)可以求得月球车的48个可执行变量.本文所叙述的月球车的逆运动学求解依赖月球车的位型假定,一般而言,在崎岖地形中移动的多轮机器人的逆运动学数值解都是有效解.2 结 论(1)该方法不仅能描述车体在崎岖地形中6个自由度的运动变化,得到车体速度的最优解,而且能够在存在传感噪声和车轮滑移情况下估推月球车的位置和方向,便于月球车的路径规划和自主导航.(2)逆运动学求解方法通过将地形等高图、月球车运动的期望位置和方向作为输入,得出月球车在地图中给定点的稳定性和可穿越性,便于月球车的运动控制.参考文献[1] M UIR P F ,NE UM ANN C P.K inematic m odeling ofwheeled m obile robots[J ].J R obotics Systems ,1987,4(2):281-340.[2] A LEX ANDER J C ,M ADDOCK SJ H.On the kinematic ofwheeled m obile robots [J ].Int J R obotics Research ,1989,8(5):15-26.[3] T AROK H M ,MC DERM OTT G,H AY ATI S ,et al .K inematic odeling of a high m obility mars rover [A ].International C on ference on R obotics and Automation [C].Detroit :M ichigan ,1999.[4] I AG NE M M A K,SHI BLY H ,RZEPNIEWSKI A ,et al.Planning control alg orithms for enhanced rough -terrain rover m obility [A ].Proceeding of the 6th International Sym posium on Artificial Intelligence and R obotics and Automation in S pace [C ].Quebec :Canadian S pace A 2gency ,S t -Hubert ,2001.[5] I AG NE M M A K,RZEPNIEWSKI A ,DUBOSKY S ,et al.C ontrol of robotic vehicles with actively articulated sus 2pensionsin rough terrain[J ].Autonom ous R obots ,2003,14:5-16.[6] I AG NE M M A K,RZEPNIEWSKI A ,DUBOWSKY S ,etal.M obile robot kinematic recon figurability for rough -terrain[A].Proceedings of the SPIE Sym posium on Sen 2s or Fusion and Decentralized C ontrol in R obotic Systems III[C].Boston :[s.n.],2000.(编辑 王小唯)(上接第1097页)参考文献:[1]金 玺.基于仿真机器人足球平台的多智能体控制研究[D].北京:北京理工大学,2003.[2]ARAU T ,OT A J.M otion planning of multiple m obile robots[J ].IEEE Proc Int C on f on Intelligent R obots and System ,1992:1761-1768.[3]薛宏涛,沈林成,常文森.PTS 领域中的Agent 体系结构设计与实现[J ].计算机工程与应用,2000,36(7):76-78.[4]H ONG Bing 2rong.R obot s occer simulation com petition plat 2form based on multi 2agent [M].[s.l.]:FIRA -K AIST Cup W orkshop ,2002.[5]李 实,徐旭明,叶 榛.机器人足球仿真比赛的S erver模型[J ].系统仿真学报,2000,12(2):138-142.(编辑 王小唯)・1011・第9期蔡则苏,等:HIT -1型月球车的运动学分析。

在某人的掌控中英语短语

在某人的掌控中英语短语

在某人的掌控中英语短语In the palm of someone's hand, the English language can be both a powerful tool and a delicate instrument. It is a means of communication that transcends borders, cultures, and ideologies, connecting individuals from diverse backgrounds and facilitating the exchange of ideas, emotions, and experiences.The mastery of the English language is a journey that requires dedication, perseverance, and a deep appreciation for the nuances and complexities that lie within its structure. For those who have the privilege of wielding this linguistic prowess, the ability to articulate their thoughts, emotions, and aspirations with clarity and eloquence can open doors to a world of opportunities.However, the power of the English language is not merely confined to the individual. It can also be a source of control and manipulation, a weapon in the hands of those who seek to shape the narrative and influence the perceptions of others. In the wrong hands, the English language can become a tool of oppression, used to marginalize, silence, and subjugate those who do not possess the same level of linguistic proficiency.One such example can be found in the realm of education, where the dominance of the English language has often been a barrier to the inclusion and empowerment of non-native speakers. In many partsof the world, the educational system is heavily biased towards English-based instruction, leaving those who struggle with the language at a significant disadvantage. This can lead to a wideningof the achievement gap, as students from disadvantaged backgrounds are denied access to the same educational opportunities as their more linguistically privileged counterparts.The issue of linguistic control extends beyond the educational sphere, infiltrating various aspects of societal and political structures. In the realm of international diplomacy and global governance, the English language has become the lingua franca, shaping the discourse and decision-making processes that impact the lives of millions. Those who possess the ability to navigate and manipulate this linguistic landscape wield a considerable amount of power, often at the expense of those who are excluded from the conversation.Furthermore, the dominance of the English language in the digital age has raised concerns about the potential for cultural homogenization and the erosion of linguistic diversity. As the internet and social media platforms become increasingly English-centric, there is a risk of marginalized languages and cultures being overshadowed or even lost altogether. This can have profoundimplications for the preservation of cultural heritage, the transmission of traditional knowledge, and the fostering of linguistic and cognitive diversity.In response to these challenges, there have been growing calls for a more inclusive and equitable approach to the use of the English language. Advocates for language justice argue that the promotion of multilingualism and the empowerment of non-native speakers are essential for creating a more just and inclusive society. This may involve the implementation of language policies that ensure equal access to educational resources, the development of translation and interpretation services, and the active promotion of linguistic diversity in various spheres of public life.Ultimately, the control and mastery of the English language is a complex and multifaceted issue that requires a nuanced and thoughtful approach. While the English language can be a powerful tool for communication, it must be wielded with care and consideration for the diverse needs and perspectives of all individuals. By acknowledging the potential for linguistic control and working towards a more inclusive and equitable language landscape, we can strive to create a world where the English language is not a source of oppression, but rather a bridge that connects us all.。

Optimal Motion Planning for Multiple Robots Having Independent Goals

Optimal Motion Planning for Multiple Robots Having Independent Goals

In addition to introducing multiple-objective optimality to the multiple-robot geometric motion planning, we expand the traditional view of centralized and decoupled planning by considering these two approaches as opposite ends of a spectrum. An approach that only weakly constrains the robot motions before considering interactions between robots could be considered as lying somewhere in the middle of the spectrum. By utilizing this view, we show that many useful solutions can be obtained by constraining the robots to travel on independent networks of paths called roadmaps. Many approaches exist that construct roadmaps for a single robot (e.g., 8, 17]), which can be used as a preprocessing step in our coordination approach. Our algorithms are based on applying the dynamic programming principle to generate multiple solutions in a partially-ordered space of motion strategies. The generation of these solutions is signi cantly more challenging in comparison to the standard case of scalar optimization. Many variations of dynamic programming for scalar optimization have been applied in motion planning (e.g., 11, 15, 22]) and in AI planning (e.g., 4, 5, 23]); however, techniques are presented in this paper to derive multiple solutions for the case of multiple, independent performance measures.

工业机器人外文翻译

工业机器人外文翻译

附录外文文献原文Industrial RobotsDefinition“A robot is a reprogrammable,multifunctional machine designed to manipulate materials,parts,tools,or specialized devices,through variable programmed motions for the performance of a variety of tasks.”--Robotics Industries Association “A robot is an automatic device that performs functions normally ascribrd to humans or a machine in orm of a human.”--Websters Dictionary The industrial robot is used in the manufacturing environment to increase productivity . It can be used to do routine and tedious assembly line jobs , or it can perform jobs that might be hazardous to do routine and tedious assembly line jobs , or it can perform jobs that might be hazardous to the human worker . For example , one of the first industrial robots was used to replace the nuclear fuel rods in nuclear power plants . A human doing this job might be exposed to harmful amounts of radiation . The industrial robot can also operate on the assembly line , putting together small components , such as placing electronic components on a printed circuit board . Thus , the human worker can be relieved of the routine operation of this tedious task . Robots can also be programmed to defuse bombs , to serve the handicapped , and to perform functions in numerous applications in our society .The robot can be thought of as a machine that will move an end-of-arm tool , sensor , and gripper to a preprogrammed location . When the robot arrives at this location , it will perform some sort of task . This task could be welding , sealing , machine loading , machine unloading , or a host of assembly jobs . Generally , this work can be accomplished without the involvement of a human being , except for programming and for turning the system on and off .The basic terminology of robotic systems is introduced in the following :1. A robot is a reprogrammable , multifunctional manipulator designed to move parts , materials , tools , or special devices through variable programmed motions for the performance of a variety of different task . This basic definition leads to other definitions , presented in the following paragraphs , that give a complete picture of a robotic system .2. Preprogrammed locations are paths that the robot must follow to accomplish work . At some of these locations , the robot will stop and perform some operation , such as assembly of parts , spray painting , or welding . These preprogrammed locations are stored in the robot’s mem ory and are recalled later for continuous operation . Furthermore , these preprogrammed locations , as well as other program data , can be changed later as the work requirements change . Thus , with regard to this programming feature , an industrial robot is very much like a computer , where data can be stored and later recalled and edited .3. The manipulator is the arm of the robot . It allows the robot to bend , reach , and twist . This movement is provided by the manipulator’s axes , also called the degrees of freedom of the robot . A robot can have from 3 to 16 axes . The term degrees of freedom of freedom will always relate to the number of axes found on a robot .4. The tooling and grippers are not part of the robotic system itself ; rather , they are attachments that fit on the end of the robot’s arm . These attachments connected to the end of the robot’s arm allow the robot to lift parts , spot-weld , paint , arc-weld , drill , deburr , and do a variety of tasks , depending on what is required of the robot .5. The robotic system can also control the work cell of the operating robot . the work cell of the robot is the total environment in which the robot must perform its task . Included within this cell may be the controller , the robot manipulator , a work table , safety features , or a conveyor . All the equipment that is required in order for the robot to do its job is included in the work cell . In addition , signals from outside devices can communicate with the robot in order to tell the robot when it should assemble parts , pick up parts , or unload parts to a conveyor .The robotic system has three basic components : the manipulator , the controller ,and the power source .A . ManipulatorThe manipulator , which does the physical work of the robotic system , consists of two sections : the mechanical section and the attached appendage . The manipulator also has a base to which the appendages are attached . Fig.1 illustrates the connection of the base and the appendage of a robot .The base of the manipulator is usually fixed to the floor of the work area . Sometimes , though , the base may be movable . In this case , the base is attached to either a rail or a track , allowing the manipulator to be moved from one location to another .As mentioned previously , the appendage extends from the base of the robot . The appendage is the arm of the robot . It can be either a straight , movable arm or a jointed arm . the jointed arm is also known as an articulated arm .The appendages of the robot manipulator give the manipulator its various axes of motion . These axes are attached to a fixed base , which , in turn , is secured to a mounting . This mounting ensures that the manipulator will remain in one location。

英文单字母词源

英文单字母词源

1、B=板子,小木条Bed, boatBack 后背Bone 骨头 (骨头棒子)Bar ,barrel 木桶 , barrier 栅栏(bar就是英文当中的木字旁)Embarrass 尴尬的 (em- 进去 bar- 小木条)2、B=棒子Bat 棒子,球拍Pat 拍( p, 手)Hat 帽子 (h , 头)Cat 猫(c, 抓,跑)通过这几个单词,可以看出,辅音字母b,p,h,c 的功能是表达单词的含义,而元音字母和元单字母组合,-at,则用来发音。

所以,辅音字母表示含义,元音字母表示发音,这是个非常重要的概念。

Battle 战争Debate de 下+ bat 棒子=放下棒子,不大战了,改唇枪舌剑。

Combat 格斗Bound 边界Abound 丰富,充裕 (a=否定,无边界的就是)3、B=棒,身体棒Boy , bull , bear , beast补充: beat 打 , heat 热, seat 坐位 , peat 说从这四个单词,再次看出,辅音字母作含义,元音字母作为读音标示的作用。

4、由“真棒,棒的”引申为: best =棒,好Bene- 作为词根,表示:好Benefit 恩惠,好处Benign 良性的5、B=背包Boot 长筒靴Bale 大包补充: male 男性 ,tale 传说 ,Baggage 行李 (age = 名词后缀)Burden 负担,包袱(en =名词后缀)6、B=燃烧Burn, bake , burstBrass 黄铜Bronze 青铜Brand 烙印,商标二、辅音字母C的含义1、C=抓Catch 抓,捉补充:watch , matchCable 缆绳Can 能Cept- ( 词根)= 抓,捉,拿Accept, except, reception注意:一个单词难记,往往是其词根难记,词根难记往往是因为词根中的发音成份比较复杂,比如,下面的变化:Ceive- (词根):取得,带领Perceive 觉察, deceive 欺骗 , conceive 构思。

  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。

Motion Planning for a Mobile Manipulator withRedundant DOFsDe Xu1,2, Huosheng Hu2, Carlos A. Acosta Calderon2, and Min Tan11 The Key Laboratory of Complex System and Intelligence Science, Institute of Automation,Chinese Academy of Sciences, Beijing 100080, China2Department of Computer Science, University of Essex, Colchester CO4 3SQ, UKsdxude@, tan@, hhu@, caacos@AbstractThis paper investigates how to track a desired trajectory by a mobile manipulator that has redundant degrees of freedom (DOFs). Based on the analysis of its kinematics models, a motion planning method is proposed. The position and orientation of the end-effector is decomposed into two parts. In the first part, the manipulator contributes sub-vectors projected on the Z-axis in the world frame, including position and orientation. In the second part, the mobile base and the manipulator move along the direction of the desired path and reach the sub-vectors on axes X and Y in the world frame respectively. Simulated results are presented to show the effectiveness of the proposed approach.Keyword: Mobile manipulator, Redundant DOFs, Insufficient DOFs, Path planning, Trajectory tracking.I. IntroductionA mobile manipulator consists of a mobile base and a manipulator [1], which represents several advantages and various constraints [2]. The most important feature of a mobile manipulator is the flexible operational workspace in contrast with the limited workspace of a fixed manipulator. This feature endows a mobile manipulator with the ability to operate in a large scale of operation [3], such as handling and transporting parts from one place to another. However, there is an intrinsic problem with a mobile manipulator: the total number of degrees of freedom (DOFs) is generally greater than six DOFs. How to deal with the redundant DOFs attracts much attention in the robotics community recently [1-10].Seraji [4] presented an approach to motion control of a mobile manipulator, in which both mobility and manipulation were put on the same frame with an equal treatment using the combined Jacobian matrix. Tanner et al [5] proposed a motion planning method for multiple cooperating mobile manipulators. Huang et al [6] introduced the Zero Moment Point to path planning for a mobile base, and to orientation planning for a manipulator with 5 DOFs. The orientation of the end- effector is not concerned except for avoiding instability in mechanics. Papadopoulos and Poulakakis [7] presented a model-based control and planning method for a mobile manipulator. Perrier et al [8] presented a global approach to motion generation for a non-holonomic mobile manipulator. Both homogeneous1Motion Planning for a Mobile Manipulator with Redundant DOFsmatrices and dual quarter- nions are respectively employed to generate a point-to-point trajectory for a mobile robot. Sugar and Kumar [3][9] provided a framework and algorithm for cooperating mobile manipulators to hold and transport large objects. Matsikis et al [10] proposed a behavior coordination manager based on Bayesian Belief Networks for a mobile manipulator to estimate the effectiveness of its behaviors. It should be noticed all these mobile manipulators have enough DOFs such that the mobile base and the manipulator can be decoupled in the control design.However, the mobile manipulator concerned in this paper consists of a differential driven mobile robot, Pioneer II, and a 5-DOF manipulator Pioneer Arm (PArm). It is impractical to decouple them since the 5-DOF PArm itself cannot satisfy some desired positions and orientations of its end-effector. Hence, it is a real challenge to deal with the problem of insufficient DOFs for the PArm and redundant DOFs for the mobile manipulator, i.e. the integration of the Pioneer II and the PArm.The rest of the paper is organized as follows. The configuration of the mobile manipulator and its task are described in Section 2. The models for the mobile manipulator are investigated in Section 3. In Section 4, a new strategy to deal with the motion control of the mobile manipulator is introduced based on its kinematics models. The method of freedom assignment for the mobile base and manipulator is given, and the resolve of joint angles for the PArm is deduced. The position assignment of the mobile base is also presented in this section. Simulation results are shown in Section 5. Finally, Section 6 concludes the paper.II. The Configuration of the Mobile Manipulator and Its TaskThe sketch of the mobile manipulator is as shown in Fig. 1(a). The manipulator PArm is on the top of the mobile base Pioneer II. A gripper is mounted on the end of the PArm, namely the end-effector in the rest of this paper.6ZFig.1. The mobile manipulator and its task. The configuration sketch of the mobile manipulator is shown in (a). The principle sketch of opening a door is shown in (b).Assume that the robot’s local frame O m is at the center of the main axis connecting two driven wheels of the mobile base, and the PArm is located at (m x, m y, m z) in the robot local frame. O0 is the base frame of the manipulator, and O6 is the end-effector’s frame. O1 to O5 are the joint frames of the manipulator. a1, a2, d1 and d4 are the inherent parameters of the manipulator. More detail about the frame assignment can be found in [11][12].2The task of the mobile manipulator is to open a door, as shown in Fig. 1(b). The end-effector catches the doorknob in the upward orientation. In other words, the Z 6-axis in the end-effector frame is upward. The Y 6-axis is the direction from the doorknob to the turning axis of the door. When the end-effector opens the door, its trajectory is a piece of circle arc. This task can be described as: following the desired trajectory and orientation by a mobile manipulator. How to deal with its redundant DOFs is the key factor to ensure the PArm and the mobile base working reasonably.III. The Model of the Mobile ManipulatorAssume that c i denotes for cos(θi ), s i for sin(θi ), c 23 for cos(θ2+θ3), s 23 for sin(θ2 +θ3), c for cos, and s for sin. The transform A j between two neighboring frames, i.e. O j -1 and O j , can be obtained from Denavit-Hartenberg parameters. Therefore, the transforms A 1-A 6 can be obtained. Then the position and orientation of the end-effector [11] are indicated as⎥⎦⎤⎢⎣⎡==100065432160p a o n A A A A A A T v v v v (1) here ,[]T z yxn n n n =v[]T z yx o o o o =v,[]Tzy x a a a a =v,[]Tz y x p p p p =v.A. The Model of Mobile Robot Pioneer IIPioneer II is a mobile robot with two differential driven wheels and a caster. W 1 and W 2 represent the left and right wheel separately, along the forward direction of the mobile robot in the view of bird’s eye. The world frame O w is selected at a fixed position on the floor, which is taken as absolute reference frame. Assume that O m i indicates the coordinate frame O m of the robot at i -th sampling. r i denotes its turning radius in the period between the i -th and i +1-th samplings, which is relative to W 1. θ i represents its direction angle at the i -th sampling. l is the distance between W 1 and W 2.The position and orientation of frame O m in the world frame at the i +1-th sampling, w T m i +1, can be derived from w T m i according to homogeneous transformation. Suppose w T m i is as shown in Eq. (2).⎥⎥⎥⎥⎦⎤⎢⎢⎢⎢⎣⎡−=1000010000i y i i ix i i i m w p c s p s c T θθθθ (2) ⎥⎥⎥⎥⎥⎥⎦⎤⎢⎢⎢⎢⎢⎢⎣⎡+++++++−+−+=⎥⎥⎥⎥⎥⎦⎤⎢⎢⎢⎢⎢⎣⎡−=+++++++100001002)2)(2(20)()(2)2)(2(20)()(10000100001111111i iii i y i i i ii iiiixi i i i i y i i i x i i i m w s l r c p c s s l r s p s c p c s p s c T ααθαθαθααθαθαθθθθθ (3) The distance between frame O m i +1 and frame O m i is , and the rotation angle between Y )2/()2/(2i i s l r α+m i +1 and Y m i is αi . Then w T m i +1 can be derived as (3). More detail is available in [13].3Motion Planning for a Mobile Manipulator with Redundant DOFsOf course, Eq. (3) can be rewritten as a group of iteration equations that represent positioning with Odometry for the mobile robot [14].B. The Model of the Mobile ManipulatorThe position and orientation of the end-effector in the world frame can be derived from homogeneous transform according to the position and orientation of the mobile robot in the world frame, that of the end-effector in the manipulator’s base frame, and the transform between the mobile robot frame and the manipulator’s base frame. The kinematics of the mobile manipulator can be described in Eq. (4).⎥⎥⎥⎥⎥⎦⎤++−−++−−+−⎢⎢⎢⎢⎢⎣⎡−−−−−−−−−===1)()()()(006006006zz i y i x y i y x i x i x y i y x z zz iy i x iy i x iyi x i y i x i y i x i y i x m i m w i w i w m p p s m p c m p p c m p s m p a o n s a c a s o c o s n c n c a s a c o s o c n s n T T T T T T θθθθθθθθθθθθθθθθ (4)where the up index i denotes the i -th sampling, w T 6i denotes the position and orientation of theend-effector in the frame O w , w T 0i is the position and orientation of frame O 0 expressed in frame O w . m T 0 as Eq. (5) is the position and orientation of the frame O 0 in frame O m .⎥⎥⎥⎥⎦⎤⎢⎢⎢⎢⎣⎡−=10001000010100z y x mm m m T (5)here [m x m y m z ]T is the offsets of frame O 0 in axis X , Y , and Z of frame O m separately.IV. Deal with the Insufficient and Redundant DOFs ProblemA. The Freedom Assignment for the Mobile Robot and the ManipulatorAs explained previously, the PArm has 5-DOFs, and the Pioneer II robot has 3-DOFs.Obviously, the mobile manipulator is with redundant DOFs. If the links of the PArm have enough length, or the desired moving range of the end-effector is adequate, the mobile manipulator will have 8-DOFs. In fact, the link length is limited and the desired range of the end-effector is large [15-20]. Therefore, how to distribute the desired 6-DOFs to the manipulator and the mobile robot is a key factor for the mobile manipulator to satisfy the desired position and orientation of the end-effector in an efficient and simple manner.It is easy to find that the position and orientation of the Pioneer II base have no contribution to the position and sub-vectors of orientation for the end-effector in Z -axis of the world frame. Therefore, the manipulator can only satisfy these desired position and sub-vectors of orientation. This means that these 3-DOFs must be completely handed over to the manipulator. Other 3-DOFs should be the interaction results between the manipulator and the mobile base.4B. Position and Orientation in Z-axisSuppose the desired position and orientation of the end-effector in frame O w is⎥⎦⎤⎢⎣⎡=106p a o n T wwww i wv v v v (6) where []T z wywxw n n n n =v,[]T z wywxwo o oo =v,[]T z wywxwa a aa =v ,.[]Tz wywxwp p p p =v wn z , wo z , wa z and wp z are determined by the joint angles θ2-θ5 of the manipulator. Considering redundancy, the assignment θ5=0 will be a good selection to simplify the realization of the position and orientation of the end-effector in the world frame. Hence, Eq. (7) is deduced from Eq. (4) and Eq. (6).⎪⎩⎪⎨⎧+−−−=−−=−==12223423623423423,,d s a s d s d m p s a s c o c c n z z wz w z w z w (7) θ23 has two candidate solutions denoted as θ23-1 and θ23-2 from s 23=-w a z . Submitting s 23=-w a z tothe last equation in (7), two candidate solutions for θ2, i.e. θ2-1 and θ2-2, are presented. Submitting θ23 to other equations in (7), θ4 is determined according to the sign of c 23.Note that c 23=0 is satisfied if and only if w a z =±1. In this case, θ4 can be assigned to any value in the working range of joint 4. However, it had better be evaluated with the consideration for θ1 and θ i .C. Position and Orientation in X and Y axesThe orientation angle θ i of the mobile base can be calculated using the last position and the current desired one of the end-effector, which is a known variable. Therefore, 0T 6 is a known matrix as shown in Eq. (8).⎥⎥⎥⎥⎥⎦⎤−+−−−−−−+−−⎢⎢⎢⎢⎢⎣⎡−−−−−−+−+−+−==−1)()()()(00061060zz w x i i y y w i ix x w y i i y y w i ix x w z wz wz w iy w i x w iy w i x w i y w i x w i y w i x w iy w i x w i y w i x w iw i w m p m s p p c p p m c p p s p p a o n s a c a s o c o s n c n c a s a c o s o c n s n T T T θθθθθθθθθθθθθθθθ (8)The resolving of θ1 is based on Eq. (8) according to the values of θ4 and θ23. If c 4=0 and c 23≠0,the terms n x and a x can be employed to find θ1. If c 4=0 and c 23=0, the terms n x and o x are employed. If c 4≠0 and c 23≠0, the terms o y and a y are used. If c 4≠0 and c 23=0, the terms n y and o y are selected. The solution of θ1 is obtained as Eq. (9) in the condition θ5=0.⎪⎪⎩⎪⎪⎨⎧=≠−−−≠≠−−==−≠==0,0],),()(atan[0,0 ],/)/(,/atan[0,0 )],( ),(atan[0,0 ],/ ),(atan[2344423423423444234232323442342342341c c if c o s n s sig s o s sig c n c c if c o c c s s a c a c c if s s sig o s sig n c c if c a s sig n y y y y x x y x x x x θ (9) 5Motion Planning for a Mobile Manipulator with Redundant DOFswhere atan(y , x ) is a function to calculate the arc tangent of y /x , the signs of both arguments x and y are used to determine the quadrant of the result.The solutions for θ1-θ4 can be selected from the candidates according to the actual range of the joint angles and criterions in [11]. If θ1 is out of the working range, it is assigned to a maximum value. In other words, if θ1>θ1max then θ1=θ1max ; if θ1<θ1min then θ1=θ1min . In the case c 23=0, θ4 could be calculated after the assignment of θ1.),atan(123123114s o s c n s c o s n x x x x +−=θ (10) In other cases, i.e. c 23≠0, θ4 is uniquely determined. The parameter, what can only be adjusted, is the direction of the mobile base to satisfy the desired orientation of the end-effector. The reached 0T 6 can be calculated using the values of θ1 to θ4 and θ5=0. Combining the desired position and orientation of the end-effector in the world frame O w at the i -th control cycle, the desired w T m i can be determined using Eq. (11). Then the desired direction angle and position of the mobile base are derived in Eq. (12).⎥⎥⎥⎥⎦⎤⎢⎢⎢⎢⎣⎡−=⎥⎥⎥⎥⎦⎤⎢⎢⎢⎢⎣⎡==−10000100001000)(16006i y i i i x i i imz w i mz w i mz w i mz w i my w i my w i my w i my w i mx w i mx w i mx w i mx w m i w i m w p c s p s c p a o n p a o n p a o n T T T T θθθθ (11) where w T 6i is the desired position and orientation of the end-effector in world frame at the i -thcontrol cycle, w T m i is that of the mobile base. w n i , w o i and w a i are the orientation vectors of w T m i and w p i is its position vector.imyw i y i mx w i x i mx w i my w i p p p p n n ===,),,(atan θ (12) V. Simulations and ResultsIn simulations, the parameters of the mobile base and the manipulator were assigned respectively. For the mobile base, parameters were given as the wheel gauge l =320mm and the maximum velocity of the driven wheel v max =0.5m/s. For the manipulator, the actual parameters of the PArm were employed: a 1=68.75mm, a 2= 160.0mm, d 1=120.0mm, d 4=137.75mm, and d 6=113.21mm [21]. The displacements of the frame O 0 in the frame O m were assigned as m x =0mm, m y =155mm, m z =155mm. The wheel diameter was 187.5mm. To verify the effectiveness of the methods proposed in Section 3, two simulations were designed.The first simulation was designed for checking the effectiveness of the methods with real manipulator parameters. In this simulation, each joint angle was strictly limited to the real working range. The desired trajectory of the end-effector was generated with Eq. (13) and Eq. (14). Its positions varied with time, and its orientation was kept.⎪⎩⎪⎨⎧+=+=−=20/55)360/2cos(20002500)360/2sin(20002500t p t p t p z wy wx w ππ (13) 6⎥⎥⎥⎥⎦⎤⎢⎢⎢⎢⎣⎡−−=100010********z wy wx wwp p p T (14) where t (=1,2,…,360) was an independent variable.The simulation results are shown in Fig. 2. Fig. 2(a) gives the desired and reached positions of theend-effector in the world frame in 3D space. Fig. 2(b) shows the positions of the frame O m of the mobile base, and the positions of the end-effector in the world frame in simulation. Trajectories of both the mobile base and the effector were smooth. The direction angles of the mobile base and their increments are shown in Fig. 2(c). Fig. 2(d) shows the values of joint angles of the manipulator. The distinguish angle value is one degree for each joint in the PArm. The direction of the mobile base varied smoothly. The simulation results verified the correctness of the methods provided in Section 4.The second simulation was designed to open a door. The distance from the knob to the turning axis of the door was set to 230mm. The height of the doorknob was set to 340mm. The Z 6-axis in the end-effector frame was keeping upward. The Y 6-axis was the direction from the doorknob to the turning axis of the door. In addition, X 6-axis was the tangent direction of the circle arc moved. When the(a) (b)(c) (d)Fig.2. The results of the 1st simulation with real working range for manipulator joints. (a) gives the desired and reached positions of the end-effector in the world frame in 3D space. (b) shows the positions of frame O m of the mobile base, and the positions of the end-effector in the world frame. The direction angles of the mobile base and their increments are shown in (c). The values of joint angles of the manipulator are shown in (d).7Motion Planning for a Mobile Manipulator with Redundant DOFs(a) (b)Fig.3. The simulation results of opening a door with real working range for manipulator joints. (a) gives the desired and reached positions of the end-effector in the world frame in 3D space. (b) shows the positions of frame O m of the mobile base, the positions of the wheels, and the positions of the end-effector in the world frame in 2D.gripper opened the door, its trajectory was a piece of circle arc. The simulation results of desired and reached positions of the end- effector in the world frame in 3D space are shown in Fig. 3(a). The positions of frame O m of the mobile base, the positions of the wheels, and the positions of the end-effector in the world frame in 2D are shown in Fig. 3(b). The simulation results show the effectiveness of the proposed method.VI. ConclusionsIn this paper, a new strategy is proposed to effectively deal with the problem in a mobile manipulator that has redundant DOFs. It decomposes the position and orientation of the end-effector into two parts. The manipulator realizes the sub-vectors projected on Z-axis in the world frame, including position and orientation. The mobile base and the manipulator are responsible for moving along the main direction of the desired path and the sub-vectors on axes X and Y in the world frame. The simulation results show the effectiveness of the proposed method. Moreover, the simulation results also show that the little working ranges of the joints of the manipulator have seriously limited the application. A large working range for each joint is very helpful for the mobile manipulator to follow a desired trajectory with given orientations.AcknowledgementsThe authors would like to thank the National High Technology Research and Development Program of China for the support to this work under the grant No. 2002AA422160. Thanks also go to National Key Fundamental Research and Development Project of China (973, No.2002CB312200) for the partly support.References[1]P.Thompson, M. Rombaut, G. Rabatel, F. Pierrot, A. Liegeois, F. Sevila, “Design andControl of a Mobile Manipulator for Weed Control,” Proc. of the Int. Conf. on Advanced Robotics, 1995, pp. 933-938.[2]M. H. Korayem, H. Ghariblu, “Maximum Allowable Load on Wheeled MobileManipulators Imposing Redundancy Constraints,” Robotics and Autonomous Systems, vol.44, 2003, pp. 151-159.[3]T. Sugar, V. Kumar, “Decentralized Control of Cooperating Mobile Manipulators,” Proc. ofthe IEEE Int. Conf. on Robotics and Automation, Leuven, Belgium, 1998, pp. 2916-2921.8[4] H. G. Tanner, S. G. Loizou, K. J. Kyriakopouls, “Nonholonomic Navigation and Control of Cooperating Mobile Manipulators,” IEEE Transactions on Robotics and Automation, vol.19, no.1, 2003, pp. 53-64.[5] Q. Huang, S. Sugano, K. Tanie, “Motion Planning for a Mobile Manipulator Considering Stability and Task Constraints,” Proc. of the 1998 IEEE Int. Conf. on Robotics and Automation, Leuven, Belgium, 1998, pp. 2192-2198.[6] E. Papadopoulos, J. Poulakakis, “Planning and Model-Based Control for Mobile Manipulators,” Proc. IROS 2000 Conf. on Intelligent Robots and Systems, vol. 3, Takamatsu, Japan, 2000, pp. 1810-1815.[7] C. Perrier, P. Dauchez, F. Pierrot, “A Global Approach for Motion Generation of Non-holonomic Mobile Manipulator,” Proc. IEEE Int. Conf. on Robotics and Automation. Leuven, Belgium, 1998, pp. 2971-2976.[8] H. Seraji, “A Unified Approach to Motion Control of Mobile Manipulators,” The Int. J. of Robotics Research, vol. 17, no. 2, 1998, pp.107-118.[9] T. G. Sugar, V. Kumar, “Control of Cooperating Mobile Manipulators,” IEEE Transactions on Robotics and Automation, vol. 18, no. 1, 2002, pp. 94-103.[10] A. Matsikis, F. Schulte, F. Broicher, K. F. Fraiss, “A Behaviour Coordination Manager for a Mobile Manipulator,” Proc. of the 2003 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Las Vegas, USA, 2003, pp. 174-181.[11] D. Xu, C. A. A. Calderon, J. Q. Gan, H. Hu, M. Tan, “The Analysis of Inverse Kinematics for Manipulator Parm,” submitted to Int. J. of Robotic Systems, July 2004.[12] J. Q. Gan, E. Oyama, E. M. Rosales, H. Hu, “A Complete Analytical Solution to the Inverse Kinematics of the Pioneer2 Robotic Arm,” J. of Robotica, vol. 23, 2005, pp. 123–129.[13] D. Xu, M. Tan, G. Chen, “An Improved Dead Reckoning Method for Mobile Robot with Redundant Odometry Information,” Int. Conf. on Control, Automation, Robotics And Vision, Singapore, 2002, pp. 631-636.[14] D. Xu, M. Tan, “Accurate Positioning in Real Time for Mobile Robot,” Acta Automatica Sinica, vol.29, no.5, 2003, pp. 716-725.[15] N. A. M. Hootsmans, S. Dubowsky, “Large Motion Control of Mobile Manipulators Including Vehicle Suspension Characteristics,” Proc. of the 1991 IEEE Int. Conf. on Robotics and Automation, Sacramento, 1991, pp. 2336-2341.[16] C. S. Tzafestas, S. G. Tzafestas, “Full-state Modelling, Motion Planning and Control of Mobile Manipulators,” Studies in Informatics and Control, vol. 10, no.2, 2001, pp. 109-127. [17] A. Gupta, W. H. Huang, “A Carrying Task for Nonprehensile Mobile Manipulators,” Proc. of the 2003 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Las Vegas, 2003, pp. 2896-901.[18] L. Zlajpah, B. Nemec, “Kinematic Control Algorithms for On-line Obstacle Avoidance for Redundant Manipulators,” Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Lausanne, 2002, pp. 1898-1903.[19] O. B. Shahar, E. Rivlin, “Practical Pushing Planning for Rearrangement Tasks,” IEEE Transactions on Robotics and Automation, vol. 14, no. 4, 1998, pp. 549-565.[20] T. Wosch, W. Neubauer, “Collision Avoidance and Handling for a Mobile Manipulator,” Proc. of the 7th. Int. Conf. Intelligent Autonomous Systems, California, USA, 2002, pp. 388-391.[21] E. M. Rosales, J. Q. Gan, “Forward and Inverse Kinematics Models for a 5-DOF Pioneer 2 Robot Arm,” Technical Report, University of Essex, U.K., 2003.9De Xu graduated from Shandong University of Technology (SUT), China, in 1985. He received the Master degree from SUT in 1990 and the Ph.D. degree from Zhejiang University, China, in 2001. He has been with Institute of Automation, Chinese Academy of Sciences (IACAS) since 2001. He is an associate professor with the Laboratory of Complex Systems and Intelligence Science, IACAS. He is a member of IEEE. His research interests includerobotics and automation, especially the control of robots such as visual controland intelligent control.Motion Planning for a Mobile Manipulator with Redundant DOFsMin Tan graduated from Tsing Hua University, China, in 1986. He received the Ph.D. degree in 1990 from IACAS. He is a professor with the Laboratory of Complex Systems and Intelligence Science, IACAS. He has published over 100 papers in journals, books and conferences. His research interests includerobotics and complex system theory.Carlos Antonio Acosta Calderon received the B.S. degree in Computer Science Engineering from Pachuca Institute of Technology, Mexico, in 2000, and M.Sc. degree in Computer Science (Robotics and Intelligent Machines) from the University of Essex, United Kingdom, in 2001. He is currently pursuing the Ph.D degree in Computer Science at the University of Essex, United Kingdom. His research interests have been focus on mobile robots, in particular, coordination of multi-robot systems, mobile manipulators, and learning by imitation. He is a member of IEEE.Huosheng Hu is a Professor in Department of Computer Science, University of Essex, and head of the Human Centred Robotics Group. His research interests include autonomous mobile robots, human-robot interaction,evolutionary robotics, multi-robot collaboration, embedded systems, pervasive computing, sensor integration, RoboCup, intelligent control and networked robotics. He has published over 180 papers in journals, books and conferences, and received two best paper awards. He is a founding member of IEEE Society of Robotics and Automation Technical Committee of Internet and Online Robots since 2001 and a member of the IASTED Technical Committee on "Robotics" for 2001-2004. He was a Conference Chairman for the 1st European Embedded Systems conference in Paris, 1996, and has been amember of the Program Committees for many international conferences such as IROS (2005-2006), IASTED Robotics and Applications Conferences (2000 - present), and RoboCup Symposiums (2000 - 2004).Dr. Hu is currently Editor-in-chief for the International Journal ofAutomation and Computing, and served as a member of the Editorial Advisory Board for the International Journal of Industrial Robots during 1997 to 2000. He is a reviewer for many international journals such as IEEE Transactions on Robotics and Automation, Automatic Control, Neural Networks and theInternational Journal of Robotics Research. Since 2000 he has been a Visiting Professor at 6 universities in China - namely Central South University,Shanghai University, Wuhan University of Science and Engineering, Kunming University of Science and Technology, and Northeast Normal University. Dr. Hu is a Chartered Engineer, a senior member of IEEE, and a member of IEE, AAAI, ACM, IASTED and IAS.10。

相关文档
最新文档