Six Degree-of-Freedom Haptic Display of Polygonal Models

合集下载

TDistribution-distribution”,or“t-distribution”isW

TDistribution-distribution”,or“t-distribution”isW

Team Members: Ran Xu, Ruofan Jia, Abigail Gluck, Yujia Wu, Heng Fun.T Distribution:I.History of the t-DistributionAs a method of making inferences when specific information about a population such as an unknown population standard deviation, the t-distribution utilizes elements and aspects of other distributions to calculate population estimates.William Sealy Gosset, an analyst for Guinness Brewery, published the uses of the t-distribution in 1908.His employer prevented employees from publishing scientific work under their own names, consequently;Gosset published his work under the name Student. He corresponded with R.A. Fisher over years discussing the potentials of the t-distribution and later:Fisher realized that the unified treatment of tests of significance of a mean, of the difference betweentwo means, and of simple and partial coefficients of correlation and regression could be achieved morereadily in terms of where v is the number of degrees of freedom associated with the sum ofsquares used in defining z. (43 Kotz)Eventually, the uses of the z-distribution and chi-squared distribution with the t-distribution resulted in the ability to test many hypotheses with large amounts of data1.II.Definition of T-DistributionIf a population has a normal distribution, then the “student t-distribution”, or “t-distribution” is . Where n is thenumber of degrees of freedom and Γ is the Gamma function.2III.Picture of probability density function of the t-distribution for various parameter values.Above is t-distribution for 2 different values of degree s of freedom. 3IV.Expected Value and VarianceBecause the standard normal curve is used in the t-distribution, the expected value is zero for all degrees of freedom over one. If the degrees of freedom are equal to one as with the Cauchy distribution, the expectedvalue is undefined and does not exist. The variance of the t-distribution is , where n is the degree of freedomand n>2; otherwise, the variance is undefined.V.Appropriate Situations/Special CasesThe t-d istribution can be used when making deductions about a population mean when one does not know the standard deviation of a chi-square distribution, the t-distributionequal to one, the t-distribution becomes the Cauchy distribution . As the degrees of freedom increase, theprobability density function of t-distribution approaches the normal curve. Overall, the t-distribution is bell-shaped and symmetric around zero (because of the standard normal curve).4VI. Relationships to Other Distributions —Normal distributionIt is appropriate to use standard normal distribution (z-distribution ) for a large sample size (N) case. In general, if N > 30, use z-distribution ; N ≤ 30, use t-distribution . Note: n = N - 1 Since the variance of n/(n-2) > 1, t-distribution has a larger variance than standard normal distribution. The t-distribution becomes flatter with a smaller value of n. As seen below, when the degrees of freedom increase, the t-distribution approaches a normal distribution .Cauchy DistributionI. History of Cauchy DistributionSimeon-Denis Poisson first discovered the Cauchy distribution and its stable properties about twenty years before L. A. Cauchy. This distribution is named after Baron Louis-Augustin Cauchy (1789-1857), a Frenchmathematician and engineer ( ndre i o aevich Ko mogorov, and do f av ovich IU shkevich ). Cauchy distribution has many applications in physics, where it is more frequently known as Lorentz distribution (after the name of the Dutch physicist H. Lorentz, 1853-1928).II. Definition of Cauchy Distribution Probability Density FunctionGeneral Form: t the location parameter, defines the location of the peak of distribution.s the scale parameter, defines the dispersionStandard Form:Where t =0 and s =1. The peak point is located at 0 with scale of 1. III. Picture of Probability Density Function of the Cauchy DistributionComments: Cauchy distributions look similar to a normal distribution ; however, they have much heavier tails. Many times the hypothesis has normality, but people still will run the data through the Cauchy distribution because it is a good indicator of the sensitivity of tests compared to the normal. (Note: the purple curve is the standard Cauchy distribution.)IV.Expected Value and VarianceThe Expected Value and the Variance are undefined.Instead, we use t(a location parameter) and s(a scale parameter) to describe the distribution.If a distribution has no mean and variance, it basically means that if we collect 10,000 data points, then it gives no more precise an estimate of the mean and standard deviation than does a single point.V.Appropriate Situations/Special CasesWe use Standard Cauchy distribution when the degree of freedom in t distribution is 1 with t=0 and s=1. In physics, economics, mechanics, and electric, and especially in technical scientific fields (with calibration problems), we use it instead of normal distributions when extreme events are comparatively likely to occur. We also describe this phenomenon as a “fat-tai ed” behavior(Jacod and Protter).The Cauchy distribution is often used for counter-examples in probability theory. One of the big reasons is that its “heavy tai s” lead to the absence of a lot of concrete properties.VI.Relationship to Other Distributions1.Student t-distribution vs standard Cauchy DistributionThe standard Cauchy distribution is defined by s=1, t=0.The distribution pdf is the same as a special case of the t-distribution withone degree of freedom. It is related to other distributions in the same way asthe t-distribution( Upton, and Cook).2.Normal DistributionLike the normal distribution, the Cauchy distribution is bell-shapedand depends on two parameters (Meucci). Also, the ratio ofindependent normally distributed variables with zero mean is distributedwith a Cauchy distribution (Johnson).F DistributionI.History of F-DistributionThe F-distribution is named after Sir Ronald A. Fisher, it was however first formalized by George W. Snedecor in Calculation and Interpretation of Analysis of Variance and Covariance, which is why the F-Distribution is sometimes referred to Snedecor F-distribution. F-distribution first came into light in a discussi on on the ana ysis of variance in Fisher’s The Correlation Between Relatives on the Supposition of Mendelian Inheritance. It was ater popu arized in Fisher’s Statistical Methods for Research Workers (David, May 1995). Today the F-Distribution is commonly used in ANOVA to assign p values to F ratios and also in comparing statistical models that have been fit to a data set. (Everitt & Skrondal, 2010) Simply put the F-distribution allows us to test the likelihood that two population variance are either the same or similar.II.Definition of F-Distribution: Probability DensityFunctionThe random variable F is defined to be the ratio of twoindependent chi-sqaure random variables, each divided by itsnumber of degrees of freedom. V and U are independent chisquare random variables.III.Picture of Probability Density Function of the F-DistributionThe F random variable is nonnegative, and the distribution is skewedto the right. The two parameters that define the function are degrees offreedom1 = m, and degrees of freedom2 = n.IV.Expected Value and VarianceV.Appropriate Situations/Special CasesF-Distribution is used when we perform an F-test. F-test is no more than a ratio of sample variances, which was its original motivation when Fisher created the statistic in the 1920s. (Lomax, 2007) The F-test is for the null hypothesis that two normal populations have the same variance.However, the F-test is extremely vulnerable to non-normality. Because we are employing the F-Distribution we are assuming that the two variables have a normal distribution and therefore their variances follow a chi-squared distribution. We are also assuming that each sample is statistically independent.A formalized process, ANOVA, which is a statistical method used to compare the means of two or more groups, uses the F-test to compare the components of variation.VI.Relationship to Other DistributionsThe F distribution allows us to compare the ratio of two sample variances with their respective degrees of freedom. Well what happens if we use this method not on experiments but rather on the other families of other distributions such t and z, more specifically their error functions. Then we will see that by manipulating the degree of freedom parameters we are able to derive a surprising relationship. In 1924, "On a distributionFisher shows these relationships. (Fisher R. , 1924)Since we know F is just the ratio of variances we can get back t ratio squared. Furthermore this relationship along with a simple proof that random variables Y and sample standard deviation S, allows us to derive the pdfof the t. (Larsen & Marx, 2006)Work CitedDefinition of T-Distribution. /definitions/t-distribution/908, Cramster Inc., 20 Jan 2011. Weisstein, Eric W. "Student's t-Distribution." From MathWorld--A Wolfram Web Resource./Studentst-Distribution.htmlKa bf eish, J.G., “ robabi ity and Statistica Inference Vo ume One: robabi ity.” Springer-Verlag. New York Inc. 1985Kotz, Samue . “Encyc opedia of Statistica Sciences Vo ume 9”. John Wi ey & Sons, Inc. 1988Larsen, Richard J. An Introduction to Mathematical Statistics and its Applications. Pearson Education Inc. 2006. Electronic Book:Jean Jacod, and Philip E. Protter. Probability Essentials (Google eBook). Springer, 2003.</books?id=JHYMvY0Bd7YC&dq=cauchy+distribution+name+after&source=gbs_nav links_s>Graham J. G. Upton, and Ian Cook, A Dictionary of Statistics, Oxford University Press, 2008.</books?id=u97pzxRjaCQC&dq=cauchy+distribution&source=gbs_navlinks_s>ndre i o aevich Ko mogorov, and do f av ovich I U shkevich, Mathematics of the 19th Century: Mathematical Logic, Algebra, Number Theory, Probability Theory (Google eBook), Birkhäuser, 2001.</books?id=X3u5hJCkobYC&dq=cauchy+distribution+first+used&source=gbs_navlin ks_s>Attilio Meucci, Risk and asset allocation, シュプリンガー・ジャパン株式会社, 2005.</books?id=bAS63cyIp0EC&dq=cauchy+distribution+normal+distribution&source=gb s_navlinks_sNorman I. Johnson, (1994) “Continuous univariate distribution-1” Houghton Miff in Company-Boston p159-160Ka bf eish, J.G. 1985 , “ robabi ity and Statistica Inference Vo ume One: robabi ity.” SpringerVer ag. ew York Inc. p35-36Mario F Trio a (2005), “E ementary statistics tenth edition”, earson & ddison We s ey.Inc p350-351Kotz, Samue . “Encyc opedia of Statistica Sciences Vo ume 9”. John Wi ey & Sons, Inc. 1988Larsen, Richard J. An Introduction to Mathematical Statistics and its Applications. Pearson Education Inc. 2006 p261。

Degrees_of_Freedom

Degrees_of_Freedom

Degrees of freedomDr. Alex Yu"Degrees of freedom" have nothing to do with your life after you get married. Actually, "Degrees offreedom" (df) is an abstract and difficult statistical concept. Many elementary statistics textbook introduces this concept in terms of the number that are "free to vary" (Howell, 1992; Jaccard & Becker, 1990).Some statistics textbooks just give the df of various distributions (e.g. Moore & McCabe, 1989; Agresti & Finlay, 1986). Johnson (1992) simply said that degrees of freedom is the "index number" foridentifying which distribution is used.Probably the preceding explanations cannot clearly show the purpose of df. Even advanced statisticstextbooks do not discuss degrees of freedom in detail (e.g. Hays, 1981; Maxwell and Delany, 1986;Winner, 1985). It is not uncommon that many advanced statistics students and experienced researchers havea vague idea of degrees of freedom. In this write-up I would like to introduce this concept from two angles:working definitions and mathematical definitions.Working DefinitionsToothaker (1986), my statistic professor at the University of Oklahoma, explain df as the number ofindependent components minus the number of parameters estimated. This approach is based upon the definition provided by Walker (1940): the number of observation minus the number of necessary relations among these observations.Although Good (1973) criticized that Walker's approach is not obvious in the meaning of necessaryrelations, I do consider the above working definition the clearest explanation of df I ever heard. If it does not sound clear to you, I would like to use an illustration introduced by Dr. Robert Schulle, my SAS mentor atthe University of Oklahoma, :In a scatterplot when there is only one data point, you cannot do any estimation of the regression line. The line can go in any direction as shown in the following graph.Here you have no degree of freedom (n - 1 = 0 where n = 1) for estimation. In order to plot a regession line, you must have at least two data points as indicated in the following scattergram.In this case, you have one degree of freedom for estimation (n - 1 = 1 where n = 2). In other words, the degree of freedom tells you the number of useful data for estimation. However, when you have two data points only, you can always join them to be a straight regression line and get a perfect correlation ( r = 1.00). Thus, the lower the degree of freedom is, the poorer the estimation is.Mathematical DefinitionsThe following are indepth definitions of df:Good (1973) looked at degrees of freedom as the difference of the dimensionalities of theparameter spaces. Almost every test of a hypothesis is a test of a hypothesis H within a broaderhypothesis K. Degrees of freedom, in this sense, is d(K) - d(H), where "d" stands for dimensalityin parameter space.Galfo (1985) viewed degrees of freedom as the representation of the quality in the givenstatistic which is computed using the sample X values. Since in the computation of m,the X values can take on any of the values present in the population, the number of X values, n,selected for the given sample is the df for m. The n for the computation of m also expresses the"rung of the ladder" of quality of the m computed; i.e. if n = 1, the df, or restriction, placed on thecomputation is at the lowest quality level.Chen Xi (1994) asserted that the best way to describe the concept of the degree of freedom is incontrol theory: the degrees of freedom is a number indicating constraints. With the samenumber of the more constraints, the whole system is determined. For example, a particle moving ina three dimensional space has 9 degrees of freedom, 3 for positions, 3 for velocities, 3 foraccelerations. If it is a free falling and 4 degrees of the freedom is removed, there are 2 velocitiesand 2 accelerations in x-y plane. There are infinite ways to add constraints, but each of theconstraints will limit the moving in a certain way. The order of the state equation for a controllableand observable system is in fact the degree of the freedom.Cramer (1946) defined degrees of freedom as the rank of a quadratic form. Muirhead (1994)also adopted a geometrical approach to explain this concept. Degrees of freedom typically referto chi-square distributions (and to F distributions, but they're just ratios of chi-squares). Chi-squaredistributed random variables are sums of squares (or quadratic forms), and can be represented as thesquared lengths of vectors. The dimension of the subspace in which the vector is free toroam is exactly the degrees of freedom. For examples,Let X_1, ... , X_n be independent N(0,1) variables, and let X be the column vector whoseith element is X_i. Then X can roam all over Euclidean n-space. Its squared length, X'X =X_1^2 + ... + X_n^2, has _a chi-square distribution with n degrees of freedom.Same setup as in (1), but now let Y be the vector whose ith element is X_i-{X-bar}, whereX-bar is the sample mean. Since the sum of the elements of Y must always be zero, Ycannot roam all over n-dimensional Euclidean space, but is restricted to taking values in then-1 dimensional subspace of all n x 1 vectors whose elements sum to zero. Its squaredlength, Y'Y has a chi^2 distribution with n-1 degrees of freedom.All commonly occurring situations involving chi-sqaure distributions are similar. The most commonof these are in analysis of variance (or regression) settings. F-ratios here are ratios of independentchi-square random variables, and inherit their degrees of freedom from the subspaces in which thecorresponding vectors must lie.Rawlings (1988) associated degrees of freedom with each sum of squares (in multipleregression) as the number of dimensions in which that vector is "free to move." Yis free to fall anywhere in n-dimensional space and, hence, has n degrees of freedom. Y-hat, on theother hand, must fall in the X-space and , hence, has degrees of freedom equal to the dimension ofthe X-space -- [p', or the number of independent variables's in the model]. The residual vector e canfall anywhere in the subspace of the n-dimensional space that is orthogonal to the X-space. Thissubspace has dimensionality (n-p') and hence, e has (n-p') degrees of freedom.Selig (1994) stated that degrees of freedom are lost for each parameter in a model that isestimated in the process of estimating another parameter. For example, one degree offreedom is lost when we estimate the population mean using the sample mean; two degrees offreedom are lost when we estimate the standard error of estimate (in regression) using Y-hat (onedegree of freedom for the Y-intercept and one degree of freedom for the slope of the regression line).Lambert (1994) regarded degrees of freedom as the number of measurements exceeding the amount absolutely necessary to measure the "object" in question. For example, to measure the diameter of a steel rod would require a minimum of one measurement. If ten measurements are taken instead, the set of ten measurements has nine degrees of freedom. In Lambert's view, once the concept is explained in this way, it is not difficult to extend it to explain applications to statistical estimators. i.e. if n measurements are made on m unknown auantities then the degrees of freedom are n-m.NavigationIndexSimplified NavigationTable of ContentsSearch EngineContact。

Performance Testing of the Desdemona Motion System

Performance Testing of the Desdemona Motion System

AIAA Modeling and Simulation Technologies Conference and Exhibit 20 - 23 August 2007, Hilton Head, South CarolinaAIAA 2007-6472Performance Testing of the Desdemona Motion SystemManfred Roza1, Mark Wentink2 and Philippus Feenstra3 TNO Human Factors, Soesterberg, The NetherlandsTP PT TP PT TP PTIn the spring of 2007 TNO Human Factors together with AMST Systemtechnik GmbH have completed the development of their newest research simulator, the Desdemona, in The Netherlands. The Desdemona research simulator features a unique motion system not seen elsewhere in the world. Its serial design and geometrical dimensions give the motion system a large cylindrical motion space and a broad range of dynamic performance capabilities, which go beyond those of a classical Stewart platform. Like any other motion-base simulator the Desdemona motion system is driven by motion filters that transform the various simulation model outputs into safe and optimal motion cues. For the development of these motion filters it is necessary to exactly determine the dynamic performance characteristics of Desdemona and check whether these characteristics meet the specified motion system requirements. This paper describes the test protocol to measure, specify and verify the dynamic performance characteristics of the Desdemona motion system. The performance test protocol builds upon and extends the classical synergistic motion system test approaches, like the AGARD standard, to suite the specific Desdemona motion system capabilities.NomenclatureM&S DOF IMU MCC PLC = = = = = = modeling and simulation degree of freedom inertial measurement unit motion control computer programmable logical controller central yaw axisψ centrR Hφcab ψ cab θ cab= radial axis = heave axis = cabin roll axis = cabin yaw axis = cabin pitch axisI. IntroductionTNO Defense, Safety & Security in the Netherlands has a long tradition in research into modeling and simulation (M&S) technology and applications. The M&S effort of TNO Human Factors is centered in the area of flight, driving and ship simulators for human performance, training and behavior research. Over the years TNO Human Factors has specialized in human perception research ranging from visual & vestibular research to motion sickness and fidelity, to 3D-audio and haptic interfacing experiments1,4,5,6,7,8,9. To better facilitate this kind of research and the spatial disorientation training for the Royal Netherlands Air Force, TNO Human Factors initiated the development of a new research simulator, the Desdemona11,12,13.P P P PIn co-operation with AMST Systemtechnik GmbH, TNO Human Factors completed the development of the Desdemona simulator, in the late spring of 2007. The Desdemona research simulator features a unique and special designed non-synergistic motion system with six degrees of freedom (DOF). Its serial design and geometrical1TPResearch Scientist, TNO Defence, Safety & Security, Human Factors Department, manfred.roza@tno.nl, AIAA Member. Research Scientist, TNO Defence, Safety & Security, Human Factors Department, mark.wentink@tno.nl, AIAA Member. 3 Research Scientist, TNO Defence, Safety & Security, Human Factors Department, philippus.feenstra@tno.nl.PT HTU UTH2TP TPPTHTUUTHPTHTUUTH1 American Institute of Aeronautics and AstronauticsCopyright © 2007 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.dimensions give the motion system a large cylindrical motion space and a broad range of dynamic performance capabilities. One of the unique motion capabilities is the ability to combine onset cueing like a classical Stewart or hexapod platform with sustained acceleration cueing as found in dynamic flight simulators. Furthermore, the rotating gimbal system gives the Desdemona motion system the possibility to replicate unusual attitudes and large attitude changes one-to-one. The motion system houses a cabin, which is equipped with a 120 degree visual system. The modular hard and software design of the cabin enables fast reconfiguration of the interior and the installation of human machine interfaces. This makes the Desdemona simulator potentially suitable for a wide range of research applications including unusual aircraft and rotorcraft maneuvers, suburban, urban and terrain driving simulation, motion perception and cueing, spatial disorientation, motion sickness, and human-performance in artificial gravity conditions.Figure 1. The Desdemona Research Simulator Exterior Like all other motion-base simulators the Desdemona motion system is driven by motion filters that transform the various simulation model outputs into safe and optimal motion cues 13, 14. For the development of these motion filters it is necessary to exactly determine the Desdemona dynamic performance characteristics. In addition, the measured performance is used to verify whether it meets the specified motion system requirements. This paper describes the test protocol to measure, specify and verify the dynamic performance characteristics of the Desdemona motion system. The paper starts in Section II with a presentation of the Desdemona motion system configuration and properties. Next the rationale and top-level design of the dynamic motion test protocol are discussed (Section III). This protocol builds upon and extends the classical synergistic motion system test approaches described in literature such as AGARD and ASC/MIL standards to suite the specific Desdemona motion system capabilities 15,16. In two subsequent sections the various tests and associated performance metrics of the protocol are discussed. These tests are divided in two categories. The first category is the single axis tests (Section IV). These tests comprise timedomain tests for motion system position, velocity and acceleration limits and signal tracking accuracy, and frequency domain system identification based tests. Multi-axis performance tests form the second category of tests (Section V). These tests are typical for serial motion systems, like the Desdemona, where independent axes have to be moved in conjunction to reproduce the desired motion cues during simulation. Lessons-learned and experiences from the first execution of the Desdemona motion performance test protocol are presented in Section VI. The paper ends in Section VII with conclusions and future work to motion system performance testing in relation to research into motion system fidelity, perception and requirements.2 American Institute of Aeronautics and AstronauticsII. Desdemona Motion System DescriptionA. Motion System Configuration and Characteristics Unlike conventional motion systems such as the Stewart platform, the Desdemona motion system is not a synergistic or parallel robotic system. Instead the Desdemona has a non-synergistic or a serial motion system with six axes that can be moved independently (Figure 2). These six axes are respectively called: central yaw (ψ centr ), radius (R), heave (H), cabin roll ( φcab ), cabin yaw (ψ cab ) and cabin pitch ( θ cab ). The Desdemona cabin is suspended in a fully gimbaled 3DoF system ( φcab ,ψ cab and θ cab ), which allows unlimited cabinrotation around any arbitrary axis in space. This gimbaled system is mounted in a heave system (H) that translates the gimbal system and the cabin in the vertical plane. The heave/gimbal system can be moved horizontally over a sledge; the radius (R). The sledge itself can be rotated unlimited in the middle around a vertical axis (ψ centr ); the central yaw axis. This central yaw axis in combination with the radius gives the Desdemona motion system the capability of sustained g-load generation up to 3g.Figure 2. The Desdemona Motion System DOFAll the axes are driven by electric servo-systems. The required performance characteristics for each of these Desdemona axes in terms of the maximum attainable position, velocity and acceleration are given in the table below.Central Yaw Max. position Max. Velocity Max. Acceleration Radius Heave Cabin Roll Cabin Yaw Cabin Pitch>360P0P±4m 3.2 m/s 4.9 m/s2P±1m 2.0 m/s 4.9 m/s2>360P0P>360P0P>3600P P155 /sP P0180 /sP P0180 /sP P0180 /sP P045 /sP P P0290 /sP P P02P90 /sP P P02P90 /s2P P P P0Table 1 Maximum Position, Velocity and Acceleration Characteristics of the Desdemona Motion System B. Motion Control, Measurement and Integrated Test Systems The Desdemona motion platform is controlled by a motion control computer (MCC), which runs on a standard PC with a real-time operating system. The MCC hosts all necessary control logic, safety and communication I/O software to safely operate the Desdemona motion system. An Ethernet network with a dedicated protocol is used for the communication between the MCC and several PLC’s. These PLC’s form the interface between the MCC and the peripheral motion system hardware such as the engine drives, measurement and safety systems, and various analogue and digital I/O. A CanOpen field bus is used for communication and data transport between the drive PLC’s and the engine drives. The MCC software architecture allows for both running the , e.g. vehicle model and motion filter on the MCC itself or remotely on different PC’s in a distributed simulator architecture design. This gives the Desdemona simulator additional flexibility in developing and off-line testing of vehicle model and motion filter configurations. The MCC operates and generates motion control reference signals at a rate of 200Hz. The Desdemona motion system is equipped with three types of measurement systems, which could be used for dynamic performance testing. The first measurement system comprises the various position encoders mounted on each axis, which are used by the electrical drives to control each axis position. The second measurement system comprises three solid state accelerometers mounted on the cabin chair at the position of the subjects head. These 3 American Institute of Aeronautics and Astronauticsthree sensors are intended for safety purposes to measure the local specific force vector exerted on the subject7. The last measurement system is an Inertial Measurement Unit (IMU), which comprises a fiber-optics and temperature compensated 3-axis gyro in combination with a solid-state 3-axis accelerometer. These high quality sensors have specifically been selected for motion performance measurement purposes. The gyro is rigidly attached to the inside of the cabin structure and the accelerometer is also attached to the cabin but with a flexible mechanical connection. This provides digital output signals of the three cabin (mechanical low-pass filtered) accelerations and the angular rates. All measurement systems are connected to the Desdemona integrated test system. This test system is capable of injecting test-signals, logging and visualizing all the actual sensor data at a rate of 200Hz. C. Post processing of measurement data: filtering and differentiating The motion performance measurement and analysis of Desdemona comprises the orientation or positions of each degree of freedom and their first and second order derivatives. However, not all derivatives are directly measured by the aforementioned Desdemona measurement systems and are therefore not directly available for analysis. Therefore, some of the required derivatives have to be obtained numerically in the Desdemona test system software. In the literature there exist various ways to approximate these derivatives. Three commonly used implementations are the forward Euler approximation, the backward Euler approximation and Tustin’s approximation. For the Desdemona motion performance analysis the backward Euler approximation is used. A derivative operation amplifies the high frequency components of a measured signal. These high frequency components are due to measurement and sampling noise. Therefore, a low-pass filter is needed to filter out these noise components without affecting the real signal too much. A second order anti-causal (reverse digital filtering) low-pass filter has been utilized for this purpose, where the filter cut-off frequencies were found by trial and error. The cut-off frequencies are in the range of 8 to 12 Hz. Moreover, an anti-causal filter prevents a phase lag between the filtered and unfiltered signal.III. Desdemona Motion Performance Test Protocol DesignA. Existing Motion Performance Test Methodologies The AGARD Advisory Report 144 is probably the first and most extensive publication on flight simulator motion system performance15. The AGARD report stems from the seventies. Another often cited publication, from the same era as the AGARD standard, is the Department of Defense MIL-STD-1558 standard. This standard has been revised and is currently included in the superseding U.S. air-force guide specification for flight simulators16. Compared to the AGARD report the air-force guide is less extensive in its metrics but provides generic requirements, based upon lessons learned, for each of its described metrics. Considering the simulation-technology advances made over the past decades one can question whether the techniques and metrics described in both reports have to be adjusted or extended to meet the today’s requirements and needs. More recent publications confirm this notion and have proposed several modifications and extensions to both original standards 17,18,19,20,21,23,24. The basis for the Desdemona motion performance test protocol consists therefore of a mixture of both classical standards and several of these proposed improvements and lessons-learned. B. The Basic Desdemona Motion Performance Concept In the context of the Desdemona simulator the major limit of the existing literature on motion system performance is that it mainly focuses on the classic Stewart platforms, i.e. a synergistic or parallel system, around a single operation point of the workspace. The Desdemona motion system, however, has a non-synergistic or serial motion system instead. Due to this the Desdemona motions system has a broader range of motion capabilities and no specific single operation point. Moreover, each axis, i.e. electric servo system, can be moved and controlled independently of the other axes. This makes it possible to execute dynamic performance tests for each single axis separately. Such tests are not possible with a Stewart platform. The advantage is that the performance of each axis, i.e. driving servo system, is directly measured unlike the classical 6-DOF (surge, sway, heave, pitch, roll and yaw) of the cabin of a Stewart platform. Therefore, the single axis performance tests can be used for both the optimization of the servo system control laws and the motion filters that build-upon it. There are, however, some limitations to single-axis tests in assessing motion performance capabilities of serial motion systems. With only single axis performance tests it is hard, if not impossible, to directly compare the performances with other motion systems. These comparisons have to be made based-upon the 6-DOF of the cabin in which the human subject experiences the simulated motion. Obviously, from the user perspective this is also the area of interest. For most simulation applications two or more axes of the Desdemona motion system have to be moved 4 American Institute of Aeronautics and Astronauticsin conjunction to replicate these 6-DOF movements to be experienced by the subject13, 14. This can be realized by means of various, often not unique, combinations of axis movements. More importantly, under these different kinds of multi-axis operations structural and other mechanical (cross) coupling effects, like dynamic changes in the moment of inertia, centre of gravity, vibrations, Coriolis and centrifugal effects, could occur22. This may require changes to or additional compensation schemes in the Desdemona motion control system to obtain acceptable overall motion performance for multi axis operation. Furthermore, multi-axis tests provide additional information for optimization of motion-cueing algorithms for specific simulation applications.P P P PSuch multi-axis operation effects and knowledge cannot be identified through single axis tests. Therefore, the Desdemona motion performance test protocol combines both single axis (Chapter IV) and multi-axis performance tests (Chapter V). The multi-axis tests are based-upon the Desdemona mechanical configuration and its currently foreseen operational modes, motion filter types and research applications.IV. Desdemona Single Axis Motion Performance Test ProtocolA. System Limits Tests System limits define the upper and lower bounds for the position, velocity and acceleration of each degree of freedom provided by the motion system15. These system limits are usually expressed in the frequency domain in terms of the maximum allowed acceleration per frequency. A double log scale is used to get a convenient plot (Figure 3). System limits show the dynamic motion envelope of each axis and are used in the design of the dynamic performance tests in the remainder of this paper.P PFigure 3. A typical example of a motion system limit plotThe system limits for each Desdemona axis have been specified by TNO together with AMST as a trade-off between what TNO requires for the intended research applications and what is physically possible given the current state-of-the-art in motion system hardware (structure, controls, servo-systems, etc.). Table 1 shows the required Desdemona motion system performance limits. These motion system requirements are tested using sinusoidal reference signals that are preceded and followed by a cosine profile to ensure a smooth signal fade-in and fade-out. For each axis two of these reference signals are created with angular rates (ω) that meet the next relationships:ω=v amax and ω = max pmax vmaxT(1)THere amax, vmax and pmax are respectively the (absolute) maximum axis acceleration, velocity and position as specified in Table 1. The above relationships (1) directly follow from differentiating a sinusoidal signal.B B B B B BB. Frequency Domain Analysis Tests Commonly applied frequency domain analysis techniques are a powerful manner to identify, analyze and specify the dynamic behavior of both linear and non-linear systems23, 24. The basic concept behind these techniques is to excite the motion system with a reference acceleration signal of known frequency contents and analyze it against the frequency contents of the system response. There are two types of reference signals that can be used23. Deterministic sinusoidal signals or broadband sinusoidal signals, like the Schroeder multi-sine, and broad-band random input signals, like white or colored noise, or a pulse width modulated signals. Performing tests with broad-band signals is far less time-consuming than a series of separate sinusoidal signals covering the same frequency spectrum. However, for the Desdemona dynamic motion performance testing the single sinusoidal signal approach is chosen.P P P P5 American Institute of Aeronautics and AstronauticsThe most important rationale for this is a safety concern. The uniqueness and complexity of the Desdemona mechanical, drive and control system design requires certain care to avoid unforeseen hazardous situations and structural damage. Therefore, a rectangular grid of measurement points is chosen inside the system limits of each axis and the measurements are executed from low-power to high-power sinusoidal signals. This approach is visualized in Figure 3 by the blue arrow. For all these grid points, the next two classes of frequency domain performance measures are determined 15, 16, 17, 18, 19.P P1. Describing Functions Describing functions presented in the form of a series of Bode plots are a common manner to analyze and specify the dynamic behavior of a non-linear system in terms off gain and phase-lag. Describing functions are a more general version of linear system’s frequency response functions23. The difference is that describing functions not only vary the frequency but also the amplitude to identify amplitude dependent non-linearity. The AGARD standard assumes linearity of motion systems and only uses acceleration amplitudes of 10% of the system limits. For the Desdemona motion system this assumption is not trivial, therefore the following acceleration amplitudes have been chosen to test the correctness of this assumption; 2%, 5%, 10%, 25%, 50% and 75% of the system limits. The frequency grid points are chosen in the interval of 0.2 Hz to twice the expected design bandwidth for each axis. On each measurement grid point (ωk) the describing function gain and phase-lag ( G(j ωk) ) is calculate by dividing the cross and power spectral density estimates of the input and output acceleration signals as follows:P P B B B BG ( jωk ) =SUU ( jωk )TSYU ( jωk )(2)Expression (2) gives the describing function of the driven axis. However due to mechanical cross-coupling the driven axis will also excite the other axes. This means that in-total for each axis six describing functions can be constructed, one primary and five cross-talk describing functions. A cross-talk gain of maximum 2% in any nondriven axis is commonly acceptable16. To smoothen these estimates i.e. reduce the variance and leakage, the Welch’s method of periodogram averaging estimates for the spectral densities are applied. A quantification for the accuracy of these estimates is given by the coherence (γ) function:P Pγ2( ωk ) =SUU ( jωk ) SYY ( jωk )SYU ( jωk )2(3)The value of the coherence ranges always between zero and one. A value closer to one indicates a more accurate estimate. A coherence value larger or equal to 0.6 is considered to be adequate for an accurate estimate 24.P PThere exist two classical performance metrics that can be derived from the describing functions23,24,26. The first metric is the system bandwidth, which is defined as the frequency (F-3dB) at which the system amplitude gain sinks below the -3db. The second metric is defined as the frequency at which the system response exhibits a 90-degree phase lag (F-90deg).P P B B B B2. Noise Level Characteristics The objective of noise level measurement is to quantify the output noise characteristics of the Desdemona motion system for a single axis driven by a sinusoidal reference signal with a discrete frequency and acceleration amplitude. The basis for the noise level characterization is the variance of the measured noise, which can be estimated through calculating the average power over a frequency interval (N1<fk<N2) as:B B B B B Bσ Y2 =2 Nk = N1∑S (f )YY kP PN2(4)From this expression six noise level metrics can be defined, which are presented in Table 215, 17, 18. The major metrics are visualized in Figure 4 at the next page. For the Desdemona motion system these noise levels are quantified over the whole measurement grid as specified in the previous section and Figure 3 6 American Institute of Aeronautics and AstronauticsFundamental Frequency Powerσf =22 NSYY ( Ffund )N 2Total Noise Powerσn =22∑ S ( f ) −σ NYY k k =02−12 fLow Frequency Non-Linearity2σ lfn = σ freq 2 ⋅ F fund + σ freq 3 ⋅ F fund2()()High Frequency Non-LinearityFigure 4. Noise Level Characteristics Visualization The found noise level metrics for each axis are best expressed in terms of dimensionless noise ratios (Table 3) 15, 17, 18. These ratios are usually plotted for each frequency against the acceleration amplitude of the input signal. Similarly, for the non-driven axes the acceleration-noise ratio and peak-noise ratios caused by mechanical coupling are determined and plotted against the acceleration amplitude. According to the MIL standard the peak noise (Ap) should not exceed 0.04g peak acceleration in the driven axis and not exceed 2% acceleration cross-talk amplitude in the non-driven axes16.P P B B P Pσ hfn =22 N2∑ S (k ⋅ F )2N−1YYfundk =0Roughnessσ r = σ n − σ lfn2 2Peak NoiseAp = max{ Ynoise (t ) }Table 2 Noise Level MetricsThe signal-to-noise ratio is a special and important noise characteristic. Small signal to noise ratios imply that the noise significantly disturbs the commanded input acceleration that could result in perceivable false motion cues. By means of fitting a thin-plate spline through all rsn values over the whole measurement grid, it is possible to construct smooth contours with a constant signal-tonoise ratio value. These signal-to-noise contours specify, within the system limits, practical operational areas in which the rsn values will remain below a certain limit. Such information is useful when designing motion filters and applications for Desdemona.B B B BAcceleration noiseSignal-to-noise ratiorn =rp =σn σfAp 2⋅σ frsn =σ2 f2 σnPeak noise ratioLow-frequency non-linearity ratiorlfn = rhfn =σ lfn σf σ hfn σfRoughness ratioHigh-frequency non-linearity ratiorr =σr σfTable 3 Noise Ratio’sC. Time Domain Analysis Tests Single axis time domain analysis for the Desdemona motion comprises a series of tests to study and quantify how accurately each axis is capable of following a typical reference input signal. These input signals are either axis position, velocity and acceleration reference signals. 1. Acceleration Accuracy The Desdemona single axis acceleration accuracy test is an extension of the classical AGARD dynamic threshold measurement15. These measurements comprise the time-domain analysis of the motion system’s response to acceleration step inputs. Seven metrics are used to characterize and quantify the axis step response over time. The first four metrics are dead-time, rise-time, settling time and overshoot (Figure 5). From these metrics three other measures can be derived: dynamic threshold, estimate bandwidth and damping ratio.P P7 American Institute of Aeronautics and AstronauticsOvershoot is measured in terms of the percentage of the acceleration pulse amplitude. Settling-time is defined as the time needed before the system response remains within a defined tolerance band around the commanded acceleration input. The tolerance bands for this purpose are chosen in relation to human motion perception thresholds: 0.05 [m/s2] (R, H), 0.0166 [rad/s2] (ψ centr ) and 0.0052 [rad/s2] ( φcab , ψ cab , θ cab ). Too large overshoots and too long settling times may be a source for false perceivable motion cues.Figure 5. Acceleration Step Response Analysis MetricsThe dynamic threshold measure is the time required for each axis to reach sixty-three percent of the commanded acceleration step input. This time is the sum of two parts: dead-time and 0-63% rise-time. Dead-time is the elapsed time before a response is discernable in the axis. The result provides an indication of the magnitude of the system computation & communication time delays and responsiveness. Both the measured 10%-90% rise-time and the percentage overshoot can be used to make a rough estimate of the axis bandwidth (F-3dB) and damping ratio (ζ) respectively. The rules of thumb for these estimations are given in Table 4 and give acceptable estimates for low order system behavior23, 24.B B P PBandwidth EstimateDamping Ratio Estimate−ςπF−3dB0.35 = Trise10−90Povershoot = 100e1−ς 2Table 4 Bandwidth and damping estimatesThe measurements are performed with alternating step input pulse trains of varying amplitudes (25%, 50% and 75% of system acc. limits). This enhances the estimation quality due to averaging, helps to reduce non-linear effects (backlash, stick-slip, etc.) and identifies non-symmetrical behavior. Non-symmetrical behavior can be present, for instance, due to gravitational contributions or geometrical configurations in combination with motion control laws. 2. Position and Velocity Accuracy Position and velocity accuracy tests are additional tests to the classical AGARD and MIL acceleration accuracy tests, but common in robotics engineering 15,16,22. Unlike the acceleration-accuracy test the position and velocity accuracy are both tested with a profile that is generated by a smooth sin2 profile. The profile is shown in Figure 6. The amplitudes of these profiles have been set a-priori to 25%, 50% and 75% of the system limits.P P P Pptestp, vtestvtest atestt settltAFigure 6 Position and Velocity Accuracy Test Signal and Settling Time Definition (right picture details a square area in the left picture)Similar to the acceleration accuracy tests, several performance metrics are defined based upon a tolerance band around the test amplitude of the position and velocity profile. In Figure 6 the time tA is the time where the positionB B8 American Institute of Aeronautics and Astronautics。

八维六自由度动态测力装置的研制及应用

八维六自由度动态测力装置的研制及应用

收稿日期63作者简介赵卓(),男,河南商丘人,工程师,主要研究方向直升机动力学试验。

文章编号:1673-1220(2010)03-062-06八维六自由度动态测力装置的研制及应用赵卓,章剑,廖良全(中国直升机设计研究所,江西景德镇333001)摘要针对直升机动力学试验中载荷难于准确测量且各分量间耦合大的难题,研制了一种新型结构的八维六自由度动态测力装置。

介绍了八维六自由度动态测力装置的结构设计原理、测量算法、标定及在直升机动力学试验中的应用情况。

关键词动态测力装置;八维六自由度;动力学试验中图分类号:V216.2文献标识码:ADevelop m ent and App lica ti on of E igh t -D i m ensi onal SpaceSix Degrees of F reedo m Force -M eas ur i ng DeviceZ HAO Zhuo ,Z HANG Jian ,LI AO L iangquan(Ch i na H e lico pte r R esea rch and Deve l op m en t Instit u te ,Jingdez hen 333001,Ch i na)A bstra ct It was har d to accurately measure the l oad i n he li co pter dyna m ics test and the d istr i bu -tio n co up li ng is big .So the writer develo ped this ne w 8d i m ensi ona l space si x degrees of f reedo mf orce -m easuri ng device .This paper i ntr oduced t he structure ,m easuri ngmetho d ,cali brati on and itwas applica ti on i n helico pter dy na m ics tes.t K ey wordsf orce -measuri ng device ;eig h-t d i m ensio nal space six degrees of f reedo m;dy na m ics test1前言六自由度测力装置是一种广义的六分量力传感器,能够同时检测出空间三个力分量和三个力矩分量。

2006_Colgate-The Cobotic Hand Controller Design, Control and

2006_Colgate-The Cobotic Hand Controller Design, Control and

The Cobotic Hand Controller:Design,Control and Performance of a Novel Haptic DisplayEric L.Faulring,J.Edward Colgate,and Michael A.PeshkinAbstract—We examine the design,control and performance of the Cobotic Hand Controller,a novel,six-degree-of-freedom, admittance controlled haptic display.A highly geared admittance architecture is often used to render high impedances with reasonable sized actuators for a haptic display.The Cobotic Hand Controller is an extremely faithful realization of an admittance display,since it is capable of obtaining an infinite gear ratio and can render infinite impedances(up to its own structural stiffness).The incorporation of continuously variable transmissions utilizing hardened steel elements in dry-friction rolling contact provide the Cobotic Hand Controller with high bandwidth,low power requirements,and an extremely wide stable dynamic range.Here we describe an admittance based control algorithm for powered cobots,a novel solution to the actuation redundancy of this device,and a heuristic to avoid slip in the transmissions.We measure the performance of the Cobotic Hand Controller in terms of dynamic range.Index Terms—haptics,admittance display,cobots,continu-ously variable transmission,traction driveI.I NTRODUCTIONA N increasing number of virtual environment and teleoper-ation tasks demand highfidelity haptic interfaces.These include interaction with computer aided design models,flight simulators,telerobotic surgery,micro/nano-manipulation,un-dersea salvage,as well as telerobotic maintenance and de-contamination and decommissioning of chemical and nuclear facilities.The execution of these tasks by a remote operator is affected by his/her level of telepresence and the transparency of the master-slave relationship[1].This illusion of presence is enhanced by audio,visual and haptic cues.While visual cues are certainly mandatory,and audio cues beneficial at times, haptic cues can significantly improve theflow of information from the environment to the operator for many tasks requiring dexterity.Haptic cues are impedances;relationships between motion and force that an operator encounters when interacting with a display.It is desired that the user perceive a high dynamic range including rigid constraints and unimpeded free motion.This paper introduces a novel display that improves transparency and presence by extending the range of cues (dynamic range)that can be rendered.The specific application for the development of the master hand controller introduced here is the teleoperation of the Dual Arm Work Platform(DAWP)at Argonne National Lab-oratory[2],[3].One of the key improvements the Cobotic Manuscript initially submitted March12,2006.Notification of acceptance on August24,2006.Revised and resubmitted on September4,2006.This work was supported by the DOE grant number DE-FG07-01ER63288.Eric L.Faulring is with Chicago PT,LLC(eric.faulring@).J.Ed-ward Colgate and Michael A.Peshkin are with the Department of Mechanical Engineering at Northwestern University(colgate,peshkin@).Hand Controller can provide to DAWP operation is the implementation of virtual surfaces,or virtual constraints on motion,as suggested by[4],[5],[6],[7].Such constraints can vastly simplify execution of a six-degree-of-freedom task in a teleoperation setting.While constraints can be implemented at the slave side in the existing system,an active master allows for the reproduction of these constraints at the master and may reduce operator fatigue while increasing efficiency by eliminating unneeded motions in six-space.Thus,if the operator is using a saw and constrains the motion of the saw to the plane of the blade at the slave,he/she feels these same constraints at the master.Rendering these constraints at the master also avoids time delay issues stemming from communication latencies.Existing haptic displays consist of admittance and impedance devices.Admittance displays are highly geared and therefore non-backdrivable while impedance displays have low inertia and are highly backdrivable.Admittance displays are reviewed in[8],[9],[10].The Haptic Master[11]and Steady Hand Robot[7]are notable implementations of the admittance paradigm.Although well-engineered admittance devices may have a higher dynamic range than impedance displays,they are rare due to cost and complexity.The required multi-degree-of-freedom force sensors,additional gears and bearings,and tight machining tolerances lead to significant cost.Thus the successful commercial haptic displays are often impedance devices.Impedance displays include the Phantom[12],the Whole Arm Manipulator(W AM)[13],and many others[14], [15],[16],[17],[18],[19].While today’s impedance and admittance displays may both be used to simulate a wide range of mechanical behaviors, they excel in different areas due to the nature of their control and mechanical structures.Among commercial and research devices,most serial link haptic displays have a maximum stable stiffness on the order of1-5N/mm(various models of the Phantom range in capability from1to3.5N/mm[20]) and most parallel haptic displays have an upper stable stiffness bound of15-50N/mm[14],[21].A stiffness of1N/mm is generally accepted as the minimum required to convey the presence of a constraint.Greater than24N/mm is required to convey the presence of a“hard”or“rigid”constraint[22]. Impedance displays can have an unmasked inertia as low as 0.05kg,while admittance displays typically have a minimum stable mass of2-5kg.Impedance displays are well-adapted to simulating low inertia,low damping environments,but have difficulty rendering energetically passive stiff constraints[23], [24].On the other hand,admittance displays are well-adapted to displaying rigid constraints but struggle to simulate un-Fig.1.Parallel cobotic transmission architecture.While there are six joint speeds that must be controlled for the Cobotic Hand Controller to render a virtual environment,there are six CVTs and a power cylinder(common element)that must be actuated.It is arbitrary at what speed to have the power cylinder moving since it is related via CVTs to the joints. encumbered motion.Unlike impedance displays,admittance displays must actively mask inertia and damping,which are their inherent physical behaviors due to their non-backdrivable transmissions.A.CobotsThe word cobot is derived from collaborative and robot, meaning shared control between a human operator and a computer[25],[26].Cobotic devices control the relative velocities of their joints by modulating continuously variable transmissions(CVTs)with small steering actuators(Figure1). Cobotic CVTs have been developed to relate two translational velocities,two rotational velocities,or a rotational velocity to a translational velocity,and have been used in many prototype devices[27],[28],[29],[30].The velocity ratios enforced by constraints in the transmissions cause cobots to have only a single mechanical instantaneous motion freedom,regardless of the dimension of their configuration space.The dynamics along this single instantaneous motion freedom defined by the CVTs are controlled via a single power injector in an active cobot such as the Cobotic Hand Controller,or by a human operator in the case of a passive cobot.Rolling constraints in the transmission elements,not electrical power,resist forces orthogonal to the current motion direction.The transmissions draw power from a single common element actuator as needed, potentially reducing the weight and power requirements of the overall ing a continuously variable cobotic transmission can eliminate the need to make compromises on outputflow and effort,which are inherent in choosing a fixed transmission ratio,and also allow the common element actuator to be operated at an efficient speed nearly all of the time.In addition,the cobotic architecture allows for the ability to both lock or decouple joints without any additional actuators beyond the single low-power steering actuator for each CVT.B.Summary of paperIn Section II we provide a detailed description of mechani-cal design of the Cobotic Hand Controller recently introduced by[31],[32].In Section III we review the computation of forward and inverse kinematics for the device.In Section IV we review the workspace,mechanism stiffness,force limits and the backdrivability of the device.In Section V,we summa-rize our virtual environment admittance control algorithm and outline the overall control strategy and low level controllers.A heuristic is developed that limits slip in the CVTs and therefore protects against instability of the display.In Section VI we derive a novel methodology for dealing with the actuation redundancy of the display.In Section VII we analyze the acceleration ability of the display.In Section VIII we analyze the dynamic range of the Cobotic Hand Controller.Finally in Section IX we provide conclusions and suggestions for future investigations.II.D ESIGNA.GeometryThe design of the six-degree-of-freedom Cobotic Hand Controller,shown in Figure2,utilizes the kinematics of a parallel platform introduced by Merlet[33],[34].The parallel platform portion of the geometry(i.e.,everything but the cylinder and wheels)has also been used in an ophthalmic surgery robot developed by[35]and,in a slightly modified form,in an industrial dextrous assembly robot called the Paradex[36].The proximal links are coupled by three-degree-of-freedom universal joints to the distal links,and these in turn are coupled via two-degree-of-freedom universal joints to an end-effector platform.Here a multi-axis force sensor is placed to measure the user’s intent.Our addition to Merlet’s kinematics is to relate the six linear actuators to a central power cylinder through non-holonomic rolling constraints.An alternative CVT design for a six-degree-of-freedom cobot was proposed by[37].Linear actuation of the proximal links is achieved by a rotational-to-linear continuously variable transmission(CVT), namely a steered wheel.The steering angle of each wheel relates the linear velocity,˙l i,of each proximal link to the rotational velocity,ω,of the power cylinder.A linearly moving carriage,shown in Figure3,couples each CVT wheel to each proximal link.When the wheels are steered such that their rolling axis is parallel to the power cylinder’s(φi=0),a ratio˙l i=−Rωtanφi=0is set.If the wheels are steered in either direction fromφi=0,ratios between±infinity can be achieved.In practice,wheel slip limits this range.It is also evident that turning all six wheels toφi=0locks the six actuators,and turning them toφi=π/2completely decouples the actuators from the cylinder’s velocity,although the cylinder would then be unable to turn.The cobot was designed for some degree of kinematicflex-ibility.Thus the offset clamps(Figure2)adjoining proximal and distal links have two attachment points for the distal links and can be rotated about the proximal links.Rotating them inward yields a larger rotational workspace but reduced stiffness.The mounting positions of the distal universal joints to the end-effector plate are adjustable as well.In addition,the length of the distal links is easily changed as they are made of threaded rod that inserts directly into the universal joints. The universal joints themselves are unusual in that they exhibit continuous rotation even when coupling shafts that are almost perpendicular(87.5degrees).This severe operationFig.3.In this figure,the motor driving the cylinder is explicitly shown.Two of the steering wheels are exposed.Carriages relating two other wheels to their proximal links are visible.angle would be detrimental if they needed to transmit power rotationally,but here they need only to transmit power through translation of the universal joint as they maintain a kinematic constraint.They were designed and built specifically for the Cobotic Hand Controller and each contains four preloaded radial bearings.As shown in Figure 4,the two ends of the device are capped by endplates which sit in v-groove rollers.Thus the whole device can be rotated and fixed by a locking pin at increments of 30degrees for maintenance or kinematic purposes.Each of the carriages can be removed independently if the proximal-distal offset clamp is detached.Wire management guides all wiring (not shown)through the rear endplate.The whole cobot can be turned upright and operated with the cylinder orientedFig.5.Top and bottom isometric views of a linear actuation assembly.vertically,although significant power (and a fraction of the preload at the wheel)would be consumed to move the joints against gravity.B.Joint assembliesThe parallel nature of the Merlet-Cobotic mechanism allows for six identical actuator assemblies.As shown in Figure 2,there are six equally spaced proximal links and actuator as-semblies.These assemblies,depicted in Figure 5and detailed in Figures 6-10,are bolted to a central core,detailed in Figure 11.All structural components are machined from aluminum with the exception of the proximal links.These are 15.875mm diameter ceramic tubes chosen for their high strength to weight and stiffness to volume ratios.The ceramic tubes at their current length provide 24cm of workspace along the axis of the cylinder.The upper limit of the workspace is limited by the cylinder’s 25cm length.The moving portion of each joint assembly (depicted in Figure 6)has mass m l =0.9kg.A conductive-plastic linear potentiometer (see Figure 7)was chosen as a continuous linear sensor over numerous digital incremental options due to its lightweight untethered wiper,as well as for the ability to perform analog differentiation of its output in order to obtain a high resolution velocity signal.Fig.7.Shown is one of six identical actuator assemblies.The proximal link is grounded to a carriage on a linear guideway.An Igus T Mflexible wire guide manages wiring for the steering motor and encoder.A ramp allows the carriage to be inserted between the guide-rods and cylinder,with the spacing decreasing gradually as the CVT wheel approaches the cylinder.This allows the application of a preload force by compressing springs within the steering bell(see Figure10).Although the circuitry and code were developed to interpret this analog differentiated signal,the signal-to-noise ratio is such that thefinite-differentiated and digitallyfiltered position signal yields an equally good velocity signal.Figure8details the linear guideway chosen.It was designed to minimize the friction in and construction tolerances required for the linear guideway.In addition,we desired to locate the CVT wheel,which is preloaded against the power cylinder, between the two guide rods in order to avoid requiring the guideway to resist significant moments.The resulting design utilizes two guide rods andfive rollers,four of which are aligned against one guide rod,thefifth against a second guide rod.The sixth point of contact,which constrains the carriage to one or zero degrees of freedom depending on the steering angle,is provided by the cobotic steering wheel.There are several key advantages of this non-of parallel or series,between the yoke of the CVT wheel and its housing.The spring-constant for this set of Bellevilles needs to be such that as the carriage travels from one end of the cylinder to another,minor changes in the length of the spring(±50µm)do not significantly alter the preload,since the preload will affect the dynamics of steering,linear motion control and cylinder control.Also mounted on the carriage is an optical encoder for measuring steering angle,a steering motor coupled via gears(2.33:1)to the steering bell and a wiper for a linear potentiometer(see Figure9).Designing wheels for use in cobots has always been problematic.Conflicting design goals when choosing wheel materials has limited wheel performance.It is desired that the wheels provide a high transverse frictional force with minimal preload,yet it is also desired that the wheels have low steering friction in order to allow for smaller steering actuators and higher bandwidth of control.It is also desired that the wheels have low rolling friction and little dissipation in order to provide for backdrivability and a reduced power requirement for powered cobots.The wheel should not have any compliance transverse to the rolling direction if a rigid transmission is desired.Finally the wheel material should incur minimal wear due to steering or rolling.Previous cobots have typically utilized polyurethaneFig.9.Carriage features.Each carriage relates a CVT wheel to a proximal link.It houses the steering motor which drives the steering bell assembly via a single-stage gear pair.An eccentric bushing allows fine adjustment of the inter-gear spacing.Fig.10.Steering bell features.The yoke supports the CVT wheel axle and is able to slide freely within the bell,guided by the brass bushings normal to the cylinder.If the distance between the linear guideway and cylinder changes over the stroke length,the Belleville springs absorb the change in position of the yoke while maintaining a preload.The wheel axle intersects the bell and causes the wheel to steer as the bell is driven by a gear pressed around it.Rollerblade T M wheels (75mm in diameter)in order to obtain the necessary transverse coefficient of friction.For the Cobotic Hand Controller we chose to move to harder wheel materials to increase the stiffness and bandwidth of the device,and to reduce rolling losses.Also,the linear guideways required the existence of a high preload (unnecessary for the high friction polyurethane wheels).With these two constraints in mind,steel wheels (18mm diameter)were chosen to run against a steel power cylinder even though the coefficient of friction of steel on steel is an order of magnitude less than that of polyurethane on steel.Depending on performance needs (transverse friction or resisting of wrenches on the guideway),more or less preload can be utilized.Currently the preload P is set to around 250N.The CVT wheels start out with a spherical profile and are the centers of plain spherical bearings with a hardness of Rockwell C 58.After a few hours of use,the wheels,originally with a black-oxide coating,have a shiny flat strip 880µm across.Even after 12months of intermittent use in the lab,the stripis no larger than 910µm across,which amounts to a total of 11µm of wear off the radius of the wheel.We find the coefficient of friction,µ,for support of lateral forces between the steel wheels and steel cylinder,to be around 0.12.This is the point at which lateral creep breaks down into gross slip.C.Power cylinderAs shown in Figure 11,the power cylinder is located between two mating blocks.The steel cylinder is 25cm in length,13.64cm in diameter,and has a 6.25mm wall thickness.The cylinder shell has been welded to its end-caps,and these to the shaft (total inertia of 0.0286kgm 2).The cylinder/end-caps/shaft were then hardened to Rockwell C 59.8and cylindrically ground between centers to a 12µinch finish.In 12months of use,the cylinder has not shown evidence of wear.Each of the six linear actuator assemblies bolt to the mating blocks.Also connected to the blocks are the power motor and a high resolution encoder.The motor is connected via a rigid coupling to the cylinder.A flexible coupling was originally present,but later removed to avoid unwanted resonances.A large 1200watt motor was chosen as it was readily avail-able in the lab and had sufficient torque to operate without gearing.The original goal of the large motor was to mitigate backlash,thus allowing smooth operation including reversal of direction,and to allow backdrivability if the system was ever operated passively.Ultimately,control algorithms were never implemented to take advantage of this feature,and preload and speed limitations have only allowed us to draw about 60watts of mechanical power from the cylinder motor,five percent of its capacity.Assuming we had a 70percent efficient gear-train between a much smaller motor and the cylinder,a motor capable of peaking at 86watts would have sufficed to drive the cylinder.D.Electronics and SoftwareTable I summarizes the specifications of the sensors and actuators.All motors are brushless DC operated in torque (current)mode.The linear position and force are recorded viaTABLE IS ENSOR AND ACTUATOR SPECIFICATIONS.Sensor LinearityPower Cyl Enc NA40,960cnts/rev2πLinear Pots1/200016bit ADC(2.5mN)±40N,±2Nm Actuator Cont TorquePower Cyl Motor 3.7Nm260mNm30a This oversized motor was chosen as it was readily available and eliminated the need for gearing and the associated backlash and nonbackdrivability. Ultimately control modes never took advantage of these features,and only 60mechanical Watts have ever been asked of the cylinder motor.16bit ADC boards.All the necessary electronics,including motor amplifiers and power supplies,the control computer,and signal conditioning hardware were placed in a single cabinet measuring41x46x53cm.The control computer is comprised of a1.53GHz standard personal computer running the QNX 6.2real-time operating system.An oscillator and counter, on one of three data acquisition boards,is used to generate hardware interrupts at approximately2000Hz,to which all data acquisition and output is latched electronically.Board IO and algorithms that run at the full2000Hz take about 60µs and220µs respectively.Writing data to disk,network communication and updating the GUI are performed at lower rates,and are lower priority threads.All code was written in C.III.K INEMATICSThe Cobotic Hand Controller has two discrete sets of kinematics,thefirst general to robotic devices and the second specific to cobots.Thefirst set of kinematics transforms between the SE(3)rigid body motion of the end-effector(task space)and the R6straight-line motion of the six proximal links (joint space).The second set of kinematics transforms between joint space and steering space,as a function of cylinder speed.A.Joint-to-task kinematicsLet us define as the joint-to-task forward kinematics of the parallel platform portion of the Cobotic Hand Controller as the functions,ϑ(l),that take us from joint space coordinates, l,to task space coordinates,x=ϑ(l),(1) of the manipulandum(end-effector).The Jacobian,J,relates motion in joint space,˙l,to motion in task space,˙x.˙x=J(l)˙l J(l)=∂ϑ(l)∂xis easily establishedfrom the expressions l i=ϑ−1i(x),and relates velocities˙l and˙x.˙l=J−1(x)˙x J−1ij(x)=∂ϑ−1i(x)∂x j∂x k(4)¨li=6j=1J−1ij(x)¨x j+6j=16k=1H−1i,jk(x)˙x k˙x j2)Forward kinematics:For the general case of a six-degree-of-freedom parallel manipulator,if no pairings(in-tersections of axes of universal joints)exist at the platformor base,a closed form analytical solution is not availablefor the forward kinematics.In fact,twelve solutions arepossible for task space coordinates for a given set of jointcoordinates without using any heuristics about collisions orrange of motion.In practice,a Newton-Raphson iterativescheme can be used to compute the task space coordinates,x,given measured joint coordinates,l,and an initial estimatefor the task space coordinates,x o.However,it will becomeapparent that knowledge of the actual task space coordinates isunnecessary and the Newton-Raphson scheme is not needed.The Cobotic Hand Controller tracks a desired trajectory intask space and we map this desired position,velocity andacceleration to joint space.This is done by utilizing a Jacobianand Hessian computed from the desired task space location.Then our feedback control is implemented in joint space.Thuswe do not need to map the actual joint space location to taskspace via a Newton-Raphson scheme.B.Steering-to-joint kinematicsLet us define as the steering-to-joint forward kinematics ofthe continuously variable transmission portion of the CoboticHand Controller as the functions that take us from steeringangle,φi,and cylinder speed,ω,to joint space velocity,˙l i.1)Forward kinematics:The input and outputflows for eachrotational-to-linear transmission are related via˙liFig.12.Translational workspace without allowing rotation.The workspace is best approximated by an8cm radius hemisphere stacked on top of a 16cm diameter,13cm long cylinder,oriented along the x3direction(see Figure4for the coordinate directions).Thus the workspace has a relatively flat bottom and a domed top.The three-fold symmetry of the proximal-distal link connection points is apparent in the grooves on the bottom,and in the slightly hexagonal shape of the cylinder and dome.2)Inverse kinematics:During operation of the display, we seek to control joint motion,and thus the appropriate steering velocities are computed given the commanded joint accelerations.We differentiate Equation5and obtain˙φi =−¨li+R˙ωtan(φ)τc=−1Fig.14.Forces from cylinder acting on the wheel at the contact patch. and are diagrammed in Figure14.f w,i is the net force applied by the cylinder on the wheel in the joint direction.τc is the cylinder torque acting on the wheel.Here we have neglected the effort losses due to rolling friction in the transmission and the CVT wheel axle bearings which we model in[38]and[39]. The output force of the joint at the wheel,f w=m l¨l+c d,l P sgn(˙l)−f l,(8) is composed of the inertial force of the joint,m l¨l,the joint friction force,c d,l P sgn(˙l),and the net output force of the joint,f l(the operator applied force).c d,l is the linear guideway dynamic coefficient of Coulomb friction.For preload force P set to250N,joint friction c d,l P sgn(˙l)is0.84N.The joint masses m l are0.9kg.Forces in the longitudinal(rolling direction)of the wheel are essentially zero,unless the wheel is accelerating,or experiencing rolling friction.The net lateral force,f w secφ, is of primary concern.Adequate lateral friction force,µP, must be present so thatµP≥f w secφ.(9) When this is satisfied,adequate friction force is available to accelerate the linear carriage,to combat joint friction and to apply the net force,f l,to an operator.With all transmissions steered toφ=0,thus attaining their maximum available lateral friction forcesµP/sec(φ)=30N,the combined six joints of the Cobotic Hand Controller can sustain task space loads of≥50N without the expense of any electrical power.D.BackdrivabilityHere we examinefirst the inertial forces and then the friction forces that a user feels when attempting to backdrivea passively operated Cobotic Hand Controller.1)Apparent inertia:Due to the rolling constraints in the transmissions,the cobot only has a single motion freedom for a given set of steering angles,and the apparent inertia of the cobot along this single motion freedom incorporates the six joint masses as well as the cylinder inertia,in some combination depending on the transmission ratios.The lowest the apparent inertia could be in the translational direction along the cylinder’s axis is6m l=5.4kg,the sum of the six joint masses,plus the steering angle dependent contribution of the cylinder inertia which can be zero forφ=πr sin(φ)for each wheel,where r is the radius of a CVT wheel andτw,fr the rolling friction torque from inelastic losses at the wheel-cylinder interface and the friction in the wheel axle bearings.The joint frame force needed to backdrive the cylinder bearings isτc,fr2 such that the cylinder does not spin,an operator would haveto apply effort5.7N in order to backdrive the six joints.If the wheels were steered atφ=π。

Transactions on Systems, Man and Cybernetics, SMC-14(4)665{671, 1984.

Transactions on Systems, Man and Cybernetics, SMC-14(4)665{671, 1984.

Bibliography[AB95] Motoman Robotics AB. Arcsystem 6000: Specications, 1995.[ADA00] ADAMS. Adams user manual. Dynamic Computation Software, 2000.[Ang00] M. Angerilli. Analisi, progettazione ed ottimizzazione di una interfaccia aptica per la valutazione della qualità a percepita di cambi automobilistici. Master's thesis, Università a degli Studi di Pisa, October 2000.[AS84] D. Ariel and R. Sivan. False cue reduction in moving ight simulators. IEEE Transactions on Systems, Man and Cybernetics, SMC-14(4):665{671, 1984.[BD98] T. S. Mruthyunjaya B. Dasgupta. Singularity-free path planning for the stewart platform manipulator. Mechanism and Machine Theory, 6(33):711{725, 1998.[BD99] T. S. Mruthyunjaya B. Dasgupta. The stewart platform manipulator: a review. Mechanism and Machine Theory, 1(35):15{40, 1999.[BMW94] BMW. High speed realtime simulation with acsl under unix. Technical Report 96, ATZ Automobilitechnische Zeitschrift, 1994. Extract from the following: Der Neue BMW Simulationspruefstand fuer Antiblockiersysteme.[Bob92] M. Bobben. Binaurale Signalverarbeitung: Modellierung der Richtungserkennung und des Cocktail-Party-E ektes. VDI-Verlag, Duesseldorf, 1992.[BT83] H. Rathgeber B. Thomson. Automated systems used for rapid and exible generation of vehicle simulation models exemplied by a veried passenger car and a motorcycle model. Proc. 8TH IASVD Symp. on the Dynamics of Vehicles on Road and on Railway Tracks, 983.[Cap00] K. Capek. Russums universal robot. Web Page, 2000./1921ad.htm.[Cau13] A. Cauchy. Deuxieme memoire sur les polygones et les polyedres. Journal del'Ecole Polytechnique, pages 87{98, 1813. [Cen] NASA Ames Research Center. Vertical motion simulator. Web Page. .[CGG83] R. S. Sharp C. G. Giles. Static and dynamic stiness and de ection mode measurement on a motorcycle, with particular reference to steering behaviour. Proc. Inst. Mech. Engrs M.I.R.A., 1983.[CK80] H. B. Pacejka C. Koenen. Vibrational modes of motorcycles in curves. Proc. Int. Motorcycle Safety Conf., 1980.[Com99] Honda Motor Company. Riding simulator. Tokyo, Japan, 1999. Techical Brochure.[Cor99a] Frasca Corporation. Web Page, 1999. .[Cor99b] Re ecstone Corporation. Web Page, 1999. http://www.re .[Cor00] Microsoft Corporation. Web Page, 2000. .[Cra88] J. Craig. Introduction to Robotics. Addison Wesley Publishing Company, 1988.[DF99] F. Barbagli D. Ferrazzin. A simplied dynamic model for a motorcycle. Technical Report MO-SS-ME-D-DM-01, PERCRO, Scuola Superiore S.Anna, Pisa, Italy, September 1999.[DFO99] F. Salsedo D. Ferrazzin and Others. Strategy manager as a design tool in a motorcycle driving simulator. In Proceedings of 32nd ISATA, Wien, number 99SI26, 1999. [DH55] J. Denavit and R. S. Hartenberg. A kinematic notation for lowerpair mechanisms based on matrices. Journal of Applied Mechanics, pages 215{221, June 1955.[Doe56] E. Doering. Steering wobble in a single track vehicles. A.T.Z., 58, 1956.[DRL88] M. A. Nahon D. R. Lloyd. Response of airline pilots to variations in ight imulator motion algorithms. Journal of Aircraft, 25(7):639{646, 1988.[dU00] Ente Nazionale di Unicazione. Normativa. Web Page, 2000. .[DW78] J. Zellner D. Weird. Lateral-directional motorcycle dynamics and rider control. Society of Automotive Engineers Inc., (780304), 1978.[Eat73] D. J. Eaton. Man-Machine Dynamics in the Stabilisation of Single-Track Vehicles. PhD thesis, 1973.[F+99] D. Ferrazzin et al. Strategy manager implementation in a motion based two wheeled vehicle simulator. In 32nd ISATA, pages 281{288, June 1999.[FB00] D. Ferrazzin F. Barbagli. Washout lter design and implementation for the moris motorcycle simulator. Technical Report MOSS-CO-D-SMU-01, PERCRO, Scuola Superiore S.Anna, Pisa, Italy, January 2000.[Fer97] D. Ferrazzin. Rider command acquisition subsystem: Sensorselection and verication. Technical Report MO-SS-ME-D-RCA01, PERCRO Scuola Superiore S.Anna, 1997.[Fic00] A. Ficola. Corso di robotica industriale. Web Page, 2000.http://istel.ing.unipg.it/DIDATTICA/CORSI/cola/Appunti.htm.[Fit86] E. F. Fitcher. A stewart platform-base manipulator: General theory and practical construction. International Journal of Robotics Research, 5(2):157{182, 1986.[Fu65] H. Fu. Fundamental characteristics of single-track vehicles in steady turning. JSME Bulletin, 9, 1965.[Fun91] H. Funakubo. Actuators for Control, volume 2. Gordon and Breach Science Publishers, 1991.[GB+94] C.M. Johnston G.P. Bertollini et al. Driving simulation at general motors. Automotive Engineering, 1994.[Gen84] K. Genuit. Ein Modell zur Beschreibung von Ausenohreigenschaften. PhD thesis, RWTH Aachen, 1984.[GP94] J.A. Greenberg and T.J. Park. Driving simulation at ford. Automotive Engineering, 1994.[Gum73] D. R. Gum. Modeling of the human force and motion-sensing mechanisms. Technical Report AFHRL-TR-72-54, NASA, June 1973.[HA] V. Hayward and O. R. Astley. Performance measure for haptic interfaces. pages195{207.[Hay95] V. Hayward. Toward a seven axis haptic interface. In IROS95 Conference, 1995.[HB57] M. D. Havron and L. F. Butler. Evaluation of training effectiveness of the 2fh2 helicopter ight trainer research tool.Technical Report NAVTRADEVCEN 1915-00-1, NY: Naval Training Device Center, April 1957.[HHD93] T. De Rose H. Hoppe and T. Duchamp. Mesh optimization. In Computer Graphics, 1993.[HI85] T. Fakuizumi H. Inoue, Y. Tsusaka. In 3rd ISRR, 1985.[HS+96] K.Hiramatsu H. Soma et al. System architecture of the jari driving simulator and its validation. In Symposium on the Design and Validation of Driving Simulators, 1996. [Hun78] K. Hunt. Kinematic Geometry of Mechanisms. Oxford University Press, 1978.[Hun83] K. H. Hunt. Structural kinematics of in-parallel-actuated robot arms. Transaction of ASME, Journal of Mechanisms, Transmissions and Automation in Design, 105:705{712, 1983.[Ind89] SKF Industrie. Catalogo generale, 1989. Catalog ID 4000 I.[IS96] M. Idan and D. Sahar. A robust controller for a dynamic six degree of freedom ight simulator. In AIAA Proc. Of Conf. On Flight Simulator Technologies, pages 53{60, 1996. paper n. AIAA-99-4238.[Jen74] G. Jennings. A study of motorcycle suspension damping characteristic. SAE 740628, 1974.[Jon70] D. E. H. Jones. The stability of the bicycle. Physics Today, 1970.[JR94] J. Helmann J. R. Iris performer: a high performance multi processing toolkit for real-time 3d graphics. In Proceeding of Siggraph, 1994.[Kad95] W. Kading. The advanced daimler benz driving simulator. SAE Japan, 9530012, 1995.[KK+96] T. Fukuda K. Kosuge et al. Force control of parallel link manipulator with hydraulic actuators. In International Conference on Robotics and Automation, Minneapolis, April 1996.[Kol95] E. M. Kolasinski. Simulator sickness in virtual environments. Technical Report Tech. Rep. 1027, US Army Research Institute for the Behavioural and Social Sciences, Virginia - USA, May 1995.[KY+00] D. Ferrazzin K. Yoshimoto et al. Development of a motorcycle simulator using a parallel manipulator and a head mounted display. In DCS2000, September 2000.[Leh90] H. Lehnert. Binaurale Raumsimulation: Ein Computermodell zur Erzeugung virtueller auditiver Umgebungen. Verlag Shaker, Aachen, 1990.[Lis80] G. Lisini. Servomeccanismi. Azienda per il Diritto Universitario, 1980.[LLC96] N. K. M'Sirdi L. Laval and J. C. Cadiou. hinf force control of a hydraulic servo-actuator with environmental uncertainties. In ICRA 1996, pages 1566{1571, 1996.[LS91] M. W. Levine and J. M. Shefner. Sensation and perception (2nd ed.). Brooks/Cole Publishing Company, Pacic Grove, CA, 1991.[LS97] D. Li and S. E. Salcudean. Modeling simulation and control of a hydraulic stewart platform. In IEEE Intl. Conf. Rob. Aut., pages 3360{3366, 1997.[LS00] B. Siciliano L. Sciavicco. Robotica Industriale. McGraw-Hill, 2000.[MA91] O. Ma and J. Angeles. Optimum architecture design of platform manipulators. IEEE, (7803-0078):1130{1135, 1991.[MANG95] R. Remi M. A. Nahon and C. Gosselin. A comparison of ight simulator motion-base architectures. In CEAS Symposium on Simulation Technology, pagesMSy02:1{16, October-November 1995. MSy02.[MANK92] L. D. Reid M. A. Nahon and J. Kirdeikis. Adaptive simulator motion software with supervisory control. Journal of Guidance, Control, and Dynamics, 15(2):376{383, March-April 1992.[Mer87] J. P. Merlet. Technical Report 646, Inria Reseach Report, 1987.[Mer99] J. P. Merlet. State of the art and perspectives. Web Page, 1999. http://www-sop.inria.fr/saga/personnel/ memrlet/Etat/etat de lart99.[MK63] F. Yoshimura M. Kondo, A. Nagaoka. Theoretical study on the running stability oftwo wheelers. Trans. Soc. Auto Engrs Japan, 1963.[MM+90] M. Ouhyoung M. Minsky et al. Feeling and seeing: Issues in force display. Computer Graphics, 2(24):235{243, 1990.[MMB98] J. G. Fontaine M. Maza and S. Baselga. Modular architecture of driving simulators for off-road vehicles. virtual reality in driving simulators. In Proceedings of ISATA, Duesseldorf, number 98VR052, 1998.[MS95] J. B. Morrell and K. Salisbury. Parallel coupled actuators for high performance force control: A micro-macro concept. In Fourth International Symposium on Experimental Robotics, pages 165{ 170, 1995.[Nie83] G. Niemann. Elementi di Macchine. Edizioni Scienza e Tecnica Milano, 1983. [NM94] R. Nair and H. Maddocks. On the forward kinematics of parallel manipulators. The International Journal of Robotics Research, 13(2):171{188, 1994.[NR88] M. A. Nahon and L. D. Reid. Response of airline pilots to variatios in ight simulator motion algorithms. Journal of Aircraft, 25(7):639{646, July 1988.[NR90] M. A. Nahon and L. D. Reid. Simulator motion-drive algorithms: a designer perspective. Journal of Guidance, Control, and Dynamics, 13(2):356{362, March-April 1990.[Org94] International Standard Organisation. Motorcycle and motorcycle-rider kinematics - vocabulary. Draft International Standard 11838, International Organisation for Standardisation, 1994.[P+98] N. A. Pouliot et al. Motion simulation capabilities of three degree-of-freedom ight simulators. Journal of Aircraft, 35(1):9{17, January-February 1998.[P+00] J. P. Pickett et al. The American Heritage Dictionary. Houghton Miàin Company, 2000.[Pag00] Parametric Technology Corporation Home Page. Web Page, 2000..[Pau81] R. P. Paul. Robot Manipulators. MIT Press, Cambridge, 1981.[PNM90] K. J. Waldron P. Nanua and V. Murty. Direct kinematic solution of a stewart platform. IEEE Transaction Robotic Automation, 6(4):438{444, 1990.[Ran69] J. S. Rankine. Dynamical principles of the motion of velocipedes. The Engineer, 28,1869.[RJT99] F. M. Cardullo R. J. Telban. Developments in human centered cueing algorithms for control of ight simulator motion systems. In AIAA Modelling and Simulation Technologies Conference, 1999. paper n. AIAA-99-4238.[RN85] L.D. Reid and M. A. Nahon. The simulation of truck motions. Technical Report UTIAS Report 294, Univ. of Toronto, Toronto - Canada, September 1985.[RN86] L. D. Reid and M. A. Nahon. Fligt simulator motion-base drive algorithms: Part 3 - pilot evaluations. Technical Report UTIAS Report 319, Univ. of Toronto, Toronto - Canada, December 1986.[Rob80] D. A. Robinson. Control of eye movements. In The handbook ofPhysiology - The Nervous System II, pages 1275{1320. ?, 1980.[Rol73] R. D. Roland. Simulation study of motorcycle stability at highspeed. 2ND Int. Cong. on Automotive Safety, 1973.[Rot83] B. Roth. Principle of automation, future directions in manifacturing technology. In Unilever Research and Engineering Division Symposium, Port Sunlight, 1983.[RSH82] J. Ish-Shalom R. Sivan and J. K. Huang. An optimal control approach to the design of moving ight simulators. IEEE Transactions on Systems, Man and Cybernetics, SMC-12(6):818{827, 1982.[RSS77] C. J. Jones R. S. Sharp. The straight-running stability of singletrack vehicles. Proc. 5TH IASVD Symp. on the Dynamics of Vehicles on Road and on Railway Tracks, 1977.[RSS80] C. J. Alstead R. S. Sharp. The in uence of structural exibilities on the straight-running stability of motorcycles. Veh. Syst. Dyn., 9, 1980.[RVPB75] J. E. Dieudonne R. V. Parrish and R. L. Bowles. Coordinated adaptive washout for motion simulators. Journal of Aircraft, vol 12(1):44{50, 1975.[SC70] S. F. Schmidt and B. Conrad. Motion drive signals for piloted ight simulators. Technical Report Contract NAS2-4869, Analytical Mechanics Associated, May 1970.[Sch69] V. Scheinman. Design of a computer controlled manipulator. Master's thesis, Stanford University, 1969.[Sha71] R. S. Sharp. The stability and control of motorcycles. Jour. Mech. Engng. Sci., 13, 1971.[Sha74] R. S. Sharp. The in uence of frame exibility on the lateral stability of motorcycles. Jour. Mech. Engng. Sci., 15, 1974.[Sha76a] R. S. Sharp. The in uence of the suspension system on motorcycle weave mode oscillations. Veh. Syst. Dyn., 5, 1976.[Sha76b] R. S. Sharp. The stability of motorcycles in acceleration and deceleration. Proc. Inst. Mech. Engrs. Conf. on Braking Road Vehicles, 1976.[Sha82] R. S. Sharp. Motorcycle stability and responses to steering control. Proc. 1ST Course on Advance Vehicle System Dynamics, 1982.[Sil83] W. Silver. On the equivalence of lagrangian and newton-euler dynamics for manipulators. Int. Journal of Robotics Research, 1(2):60{70, 1983.[SKAA97] N. Haeck S. K. Advani, M. A. Nahon and J. Albronda. Optimization of six-degrees-of-freedom motion system for ight simulators. In AIAA Modeling and Simulation Technologies Conference, pages 12{22, New Orleans (LA), August 1997.[SL87] S. M. Song and Y. J. Lin. Dynamics of pantograph type manipulators. In IEEE Conference, 1987.[SL88] S. M. Song and J. K. Lee. The mechanical eÆciency and kinematics of pantograph type manipulators. In IEEE Conference, 1988.[SM92] T. J. Sharkey and M. E. McCauley. Does motion base prevent simulator sickness? In AIAA/AHS Flight Simulation Technologies Conference, HeadIsland, August 1992.[SS97] L. J. Stocco and S. E. Salcudean. Hybrid serial/parallel manipulator. Technical Report US06047610, United State Patent, 1997.[Ste66] D. Stewart. A platform with six degrees of freedom. In Institution of Mechanical Engineers, volume 180, 1965-1966.[TAF] C. H. Sequin T. A. Funkhouser. Adaptive display algorithm for interactive frame rates during visualization of complex virtual environments. Master's thesis, University of California at Berkeley.[Tea96a] MORIS Team. Deliverable 1.1. Technical report, Piaggio Veicoli Europei, 1996.[Tea96b] MORIS Team. Deliverable 2.1: Mathematical model. Technical report, Piaggio Veicoli Europei, 1996.[Tea97] MORIS Team. Deliverable 3.2: User requirements. Technical report, Scuola Superiore S.Anna, 1997.[Tea00] MORIS Team. Deliverable 4.1: Mechanical subsystem development. Technical report, Scuola Superiore S.Anna, 2000.[TN85] T. Katayama T. Nishimi, A. Aoki. Analysis of straight running stability of motorcycles. The 10TH Int. Tech. Conf. on Experimental SafetyVehicle Report, 1985. [Uni95] IOWA University. The iowa driving simulator: An immersive research environment. IEEE Computer, 1995.[VEG62] S. G. Whitehall V. E. Gough. Universal tyre test machines. In 9th International Congress FISITA, pages 117{137, 1962.[Ver78] M. K. Verma. Theoretical and Experimental Investigations of Motorcycle Dynamics. PhD thesis, 1978.[WC97] W. Wu and M. Cardullo. Is there an optimum motion cueing algorithm? pilot and aircraft simulators. In AIAA, Modeling and Simulation Technologies Conference, New Orleans (LA), August 1997.[Wei72] D. H. Weir. Motorcycle Handling Dynamics and Rider Controland the Effect of Design Conguration on Response and Performance. PhD thesis, 1972.[Wel98] M. Wells. Communicating situation awareness in virtual environments. Technical Report Report E-6A, Washington University, 1998. Internal Aviation Maintenance Curriculum Evaluation: A Case Study.[Whi77] D. E. Whitney. Force feedback control of manipulatorngers. ASME Transactions, Journal of Dynamic Systems, Measurement and Control, pages 91{97, 1977.[WHPV93] S. A. Teukolsky W. H. Press, B. P. Flannery and W. T. Vetterling. Numerical Recipes in C The Art of Scientic Computing 2nd edition. Cambridge University Press, 1993.[Wip99] F. J. W. Wipple. The stability of the motion of a bicycle. Quarterly Journal of Pure and Applied Mathematics, 30, 1899.[WJ51] R. A. Wilson-Jones. Steering and stability of single-track vehicles. Proc. Inst. Mech. Engrs., 1951.[Wor99a] The Math Works. Matlab: The language of technical computing,1999.[Wor99b] The Math Works. Simulink: Dynamic system simulation formatlab, 1999.[WWS99] C. Basdogan W. Wu and M. A. Srinivasan. Visual, haptic and bimodal perception of size and stiffness in virtual environments. In ASME Dynamic Systems and Control Divisions, number 67, pages 19{26, 1999.[ZA96] K. E. Zanganeh and J. Angeles. Displacement analysis of a sixdegree-of-freedom hybrid hand controller. In International Conference on Robotics and Automation, Minneapolis, April 1996.[Zac78] G. L. Zacharias. Motion cue models for pilot-vehicle analysis. Technical Report AMRL-TR-78-2, NASA, May 1978.[ZGC90] L. Haynes Z. Geng and R. L. Carroll. Direct forward kinematic solution for general stewart platforms. In 3rd International Symposium Robotics and Manufactoring, pages 11{16, 1990.[ZW78] J. W. Zellner and D. H. Weir. Development of handling test procedures for motorcycles. Society of Automotive Engineers Inc., (780313), 1978.。

Adaptive Dynamic Surface Control for Uncertain Nonlinear Systems With Interval Type-2 Fuzzy Neural

Adaptive Dynamic Surface Control for Uncertain Nonlinear Systems With Interval Type-2 Fuzzy Neural

Adaptive Dynamic Surface Control for Uncertain Nonlinear Systems With Interval Type-2FuzzyNeural NetworksYeong-Hwa Chang,Senior Member,IEEE,and Wei-Shou ChanAbstract—This paper presents a new robust adaptive control method for a class of nonlinear systems subject to uncertainties. The proposed approach is based on an adaptive dynamic sur-face control,where the system uncertainties are approximately modeled by interval type-2fuzzy neural networks.In this paper, the robust stability of the closed-loop system is guaranteed by the Lyapunov theorem,and all error signals are shown to be uniformly ultimately bounded.In addition to simulations,the proposed method is applied to a real ball-and-beam system for performance evaluations.To highlight the system robustness, different initial settings of ball-and-beam parameters are con-sidered.Simulation and experimental results indicate that the proposed control scheme has superior responses,compared to conventional dynamic surface control.Index Terms—Ball-and-beam system,dynamic surface control, interval type-2fuzzy neural network.I.IntroductionN ONLINEAR phenomena,popularly existing in physical systems,have attracted a lot of attention in both industry and academia.The design of stabilizing controller for a class of nonlinear systems is usually full of challenges,especially for the systems with uncertainties.To achieve desired control goals,several techniques have been developed for uncertain nonlinear systems.In[1],a fuzzy hyperbolic model,consisting of fuzzy,neural network,and linear models,was proposed for a class of complex systems.In[2],the guaranteed cost control problem was presented for uncertain stochastic T-S fuzzy systems with multiple time delays.Recently,type-2fuzzy logic has been developed to deal with uncertain information in intelligent control systems[3]–[5].The concept of type-2fuzzy sets(T2FSs)is an extension of type-1fuzzy sets (T1FSs).Type-2fuzzy techniques with uncertainties in the antecedent and/or consequent membership functions have at-tracted much interest.The membership functions of T2FSs are Manuscript received June23,2012;revised December5,2012;accepted March4,2013.Date of publication April19,2013;date of current version January13,2014.This work was supported in part by the National Science Council,Taiwan,under Grant NSC101-2221-E-182-041-MY2,and by the High Speed Intelligent Communication Research Center,Chang Gung Uni-versity,Taiwan.This paper was recommended by Associate Editor H.Zhang. The authors are with the Department of Electrical Engineering, Chang Gung University,Kwei-Shan,Tao-Yuan333,Taiwan(e-mail: yhchang@.tw;d9721002@.tw).Color versions of one or more of thefigures in this paper are available online at .Digital Object Identifier10.1109/TCYB.2013.22535483-D,where a footprint of uncertainty is particularly addressed. The primary membership function grade in type-2can be a subset in the universe[0,1].Moreover,corresponding to each primary membership,there is a secondary membership,which is also in[0,1].T2FSs provide additional degrees of freedom such that modelling and minimizing the effects of uncertainties become more feasible.However,the same works usually cannot be executed by type-1fuzzy logic systems(T1FLSs) [6],[7].Although the types of membership functions are different,the structures of type-1fuzzy logic control(T1FLC) and type-2fuzzy logic control(T2FLC)are similar,in which a fuzzifier,a rule base,a fuzzy inference engine,and an output processor are included.In particular,the output processor of a T2FLC has a type reducer.The outcome of a type reducer is a type-1fuzzy set,and then a crisp output can be obtained through a defuzzifier.However,heavy computational loads would be the main concern with regard to T2FLSs[8].To simplify the computation complexity,the grades of secondary membership functions can be simply set to1,which becomes the so-called interval T2FLSs(IT2FLCs).IT2FLCs have been successfully applied in many control applications due to the capability of modeling uncertainties.In[9],an ant optimized fuzzy controller is applied to a wheeled-robot for wall-following under a reinforcement learning,where the interval type-2fuzzy sets are adopted in the antecedent part of each fuzzy rule to improve robustness.Recently,interval type-2 fuzzy neural networks(IT2FNN),combining the capability of interval type-2fuzzy reasoning to handle uncertain information and the capability of artificial neural networks to learn from processes,have been popularly addressed.In[10],an IT2FNN control system was designed for the motion control of a linear ultrasonic motor based moving stage.The proposed controller combines the merits of FLS and neural networks,and the update laws of the network parameters are obtained based on the backpropagation algorithm.However,the stability of the IT2FNN control system cannot be guaranteed.As a breakthrough in the area of nonlinear control,a system-atic and recursive design procedure,named the backstepping control,has been provided to design a stabilizing controller for a class of strict-feedback nonlinear systems.The backstepping control is basically attributed to the stabilization consideration of subsystems using the Lyapunov stability theory.A number of applications have been addressed in the utilization of backstepping techniques.In[11],the problem of a one degree-2168-2267c 2013IEEEFig.1.Structure of IT2FNN.of-freedom master-slave teleoperated device was discussed by the adaptive backstepping haptic control approach.The adaptive backstepping control was also applied to a plane-type 3-DOF precision positioning table,in which the unmodeled nonlinear effects and parameter uncertainties were considered [12].In[13],a backstepping controller design was investigated for the maneuvering of helicopters to autonomously track predefined trajectories.In[14],a voltage-controlled active magnetic bearing system was addressed,where a robust con-troller was designed to overcome unmodeled dynamics and parametric uncertainties by backstepping techniques.In[15], a backstepping oriented nonlinear controller was proposed to improve the shift quality of vehicles with clutch-to-clutch gearshifts.In[16],theflocking of multiple nonholonomic wheeled robots was studied,where distributed controllers were proposed with the backstepping techniques,graph theory and singular perturbation theory.The position control of a highly-nonlinear electrohydraulic system was considered,where an indirect adaptive backstepping was utilized to deal with param-eter variations[17].In[18],a neural network and backstepping output feedback controller was proposed for leader-follower-based multirobot formation control.In[19],an adaptive back-stepping approach was presented for the trajectory tracking problem of a closed-chain robot,where radial basis function neural networks were used to compensate complicated non-linear terms.In[20],an adaptive neural network backstepping output-feedback control was presented for a class of uncertain stochastic nonlinear strict-feedback systems with time-vary delays.Although backstepping control can be applied to a wide range of systems,however,there is a substantial drawback of the explosion of terms with conventional backstepping techniques.The dynamic surface control(DSC)has been proposed to overcome the“explosion of terms"problem, caused by repeated differentiations.Similar to the backstep-ping approach,DSC is also an iteratively systematic design procedure.The complexity arisen from the explosion terms in conventional backstepping control methods can be avoided by introducingfirst-order low passfilters[21]–[23].In[24], an integrator-backstepping-based DSC method for a two-axis piezoelectric micropositioning stage was proposed,where the robustness of tracking performance can be improved.In[25],a feedback linearization strategy including the dynamic surface control was developed to improve paper handling performance subject to noises.In[26]and[27],an adaptive DSC was adopted for mobile robots,where the exact parameters of robot kinematics and dynamics including actuator dynamics were assumed to be unknown a priori.In[28],a T-S fuzzy model based adaptive DSC was proposed for the balance control of a ball-and-beam system.Based on the T-S fuzzy modelling,the dynamic model of the ball-and-beam system was formulated as a strict feedback form with modelling errors.An observer-based adaptive robust controller was developed via DSC tech-niques for the purpose of achieving high-performance for servo mechanisms with unmeasurable states[29].A robust adaptive DSC approach was presented for a class of strict-feedback single-input-single-output nonlinear systems,where radial-basis-function neural networks were used to approximate those unknown system functions[30],[31].In[32],a novel-function approximator was constructed by combining a fuzzy-logic system with a Fourier series expansion to approximate the un-known system function depending nonlinearly on the periodic disturbances.Based on this approximator,an adaptive DSC was proposed for strict-feedback and periodically time-varyingCHANG AND CHAN:ADAPTIVE DYNAMIC SURFACE CONTROL FOR UNCERTAIN NONLINEAR SYSTEMS295systems with unknown control-gain functions.In[33]and [34],an adaptive fuzzy backstepping dynamic surface control approach was developed for a class of multiple-input multiple-output nonlinear systems and uncertain stochastic nonlinear strict-feedback systems,respectively.In[35],an adaptive DSC was proposed for a class of stochastic nonlinear systems with the standard output-feedback form,where neural networks were used to approximate the unknown system functions. In[36],a neural-network-based decentralized adaptive DSC control design method was proposed for a class of large-scale stochastic nonlinear systems,where neural network method was used to approximate the unknown nonlinear functions. In this paper,a robust adaptive control method is presented for a class of uncertain nonlinear systems.The proposed con-trol scheme is based on adaptive DSC techniques combining with IT2FNN learning(IT2FNNADSC).To achieve a desired tracking goal,a stabilizing control action can be systematically derived.Also,the uncertainty modelling can be attained using IT2FNNs.The IT2FNN has the ability to accurately approxi-mate nonlinear systems or functions with uncertainties based on the merits of both the type-2fuzzy logic and the neural network.From the Lyapunov stability theorem,it is shown that the closed-loop system is stable,and all error signals are uniformly ultimately bounded.In addition,a ball-and-beam system is utilized for performance validations,including both simulation and experimental results.Moreover,the form of strict-feedback systems is a special case of addressed general nonlinear systems.Thus,the proposed IT2FNNADSC control method can be easily applied to many applications of interest. The organization of this paper is as follows.In Section II, the dynamic characteristics of uncertain nonlinear systems are discussed.The structure and design philosophy of IT2FNNs and the design procedures of the adaptive DSC were intro-duced in Section III.The stability analysis is investigated in Section IV.In Section V,a ball-and-beam system is applied for performance validations,where both simulation and ex-perimental results are included.The concluding remarks are given in Section VI.II.PreliminariesA.Problem FormulationConsider a class of nonlinear systems as follows.˙x1=f1(x)+g1(x1)x2˙x2=f2(x)+g2(x1,x2)x3...˙x i=f i(x)+g i(x1,x2,...,x i)x i+1...˙x n=f n(x)+g n(x)u(t)(1)where i=1,2,...,n−1,x=[x1x2···x n]T∈R n is the state vector,u(t)∈R is the control input.The f i,g i, i=1,2,...,n are smooth system and virtual control-gain functions,respectively.In this paper,the state vector x is assumed to be in a compact set in R n.Definition1:System(1)is uniformly ultimately bounded with ultimate bound b if there exist positive constants band Fig.2.Interval type-2membership function with an uncertain mean. c,independent of t0≥0,and for every a∈(0,c),there is T=T(a,b)≥0,independent of t0such that[37]x(t0) ≤a⇒ x(t) ≤b,∀t≥t0+T.(2) With the consideration of system uncertainties,the perturbed model of(1)can be represented in˙x1=f1(x)+g1(x1)x2+ 1(x,t)˙x2=f2(x)+g2(x1,x2)x3+ 2(x,t)...˙x i=f i(x)+g i(x1,x2,...,x i)x i+1+ i(x,t)...˙x n=f n(x)+g n(x)u(t)+ n(x,t)(3)where i=1,2,...,n−1.The i(x,t),i=1,2,...,n are bounded uncertainties,consisting of unknown modelling errors,parameter uncertainties,and/or external disturbances. Denote x1d(t)as a control goal of x1(t).The control objective is to develop an adaptive controller such that the corresponding closed-loop system of(3)is stable,all errors are uniformly ultimately bounded,and the magnitude of the tracking error x1(t)−x1d(t)can be arbitrary small as t→∞. Assumption1:There exist some constantsγi>0,ψi>0, i=1,2,...,n such thatψi≤ g i(x1,x2,...,x i) ≤γi.(4) Assumption2:The reference command x1d(t)is a continu-ous and bounded function of degree2such that x1d,˙x1d,and ¨x1d are well defined.Remark1:In particular casesf i(x)=f i(x1,x2,...,x i),i=1,2,...,n(5) the representation of(1)turns out to be a class of nonlinear systems with a general strict-feedback form.B.Interval Type-2Fuzzy Neural NetworksRecently,interval type-2fuzzy neural networks handling uncertainty problems have been popularly addressed.The design procedures of IT2FNNs will be introduced in this section.First,the IF-THEN rules for an IT2FNN can be expressed asR j:IF x(1)1is˜F1j AND···AND x(1)n is˜F njTHEN y(5)is[w(4)jL w(4)jR],j=1,2,...,m(6)296IEEE TRANSACTIONS ON CYBERNETICS,VOL.44,NO.2,FEBRUARY2014 where˜F ij is an interval type-2fuzzy set of the antecedentpart,[w(4)jL w(4)jR]is a weighting interval set of the consequentpart,and x(1)i is the input of IT2FNN,i=1,2,...,n.Thenetwork structure of IT2FNN is shown in Fig.1,in whichthe superscript is used to identify the layer number.Thefunctionality of each layer of the IT2FNN will be introducedin sequence.1)Input Layer:In this layer,x(1)1,...,x(1)n,denoted as thestate variables of(3),are the inputs of IT2FNN.2)Membership Layer:In this layer,each node performsits work as an interval type-2fuzzy membership function.Asshown in Fig.2,the associated Gaussian membership functionregarding the i th input and j th nodes is represented asμ(2)˜F ij (x(1)i)=exp⎛⎝−12x(1)i−m(2)ijσ(2)ij2⎞⎠≡N(m(2)ij,σ(2)ij,x(1)i)(7)where m(2)ij∈[m(2)ijL m(2)ijR]is an uncertain mean,andσ(2)ij is a variance,i=1,...,n,j=1,...,m.In Fig.2,the upper mem-bership function(UMF)and the lower membership function(LMF)are denoted as¯μ(2)˜F ij (x(1)i)andμ(2)˜F ij(x(1)i),respectively.The upper and lower bounds of each type-2Gaussian membership function can be respectively represented as¯μ(2)˜F ij (x(1)i)=⎧⎪⎨⎪⎩N(m(2)ijL,σ(2)ij,x(1)i),x(1)i<m(2)ijL1,m(2)ijL≤x(1)i≤m(2)ijRN(m(2)ijR,σ(2)ij,x(1)i),x(1)i>m(2)ijR,(8)μ(2)˜F ij (x(1)i)=⎧⎨⎩N(m(2)ijL,σ(2)ij,x(1)i),x(1)i>m(2)ijL+m(2)ijR2N(m(2)ijR,σ(2)ij,x(1)i),x(1)i≤m(2)ijL+m(2)ijR2.(9)Therefore,the outputs of membership layer can be describedas a collection of intervals[μ(2)˜F ij (x(1)i)¯μ(2)˜F ij(x(1)i)],i=1,...,n,j=1,...,m.3)Rule Layer:The operation of this layer is to multiply the input signals and output the result of product.For example, the output of the j th node in this layer is calculated as[f(3)j ¯f(3)j]=n i=1μ(2)˜F ij(x(1)i) n i=1¯μ(2)˜F ij(x(1)i).(10)4)Type-Reduction Layer:Type-reducer has been applied to reduce a type-2fuzzy set to a type-reduced set[6],[38]. This type-reduced set is then defuzzified to derive a crisp output.To simplify the computation complexity,singleton output fuzzy sets with the center-of-sets type reduction method are considered such that the output y(4)=[y(4)L y(4)R]is an interval type-1set.Then,the output of the type-reduction layercan be expressed asy(4)= mj=1f(3)j w(4)jmj=1f(3)j(11)where w(4)j=[w(4)jL w(4)jR]is consequent interval weighting andf(3)j=[f(3)j ¯f(3)j]is the interval degree offiring.Withoutloss of generality,w(4)jR and w(4)jL are assumed to be assigned in ascending order,i.e.,w(4)1L≤w(4)2L≤···≤w(4)mL and w(4)1R≤w(4)2R≤···≤w(4)mR.With reference to[39]and[40],the calculation of y(4)L and y(4)R can be derived asy(4)L=lj=1¯f(3)jw(4)jL+mj=l+1f(3)jw(4)jLj=1f j+j=l+1f(3)j,(12) y(4)R=rj=1f(3)jw(4)jR+mj=r+1¯f(3)jw(4)jRrj=1f(3)j+mj=r+1¯f(3)j.(13)Obviously,the numbers l and r in(12)and(13)are essential in the switching between the lowerfiring strength f(3)jand the upperfiring strength¯f(3)j.In general,the values of l and r can be obtained from a sorting process.In[9],a simplified type of reduction was proposed as follows:ˆy(4)L=mj=1f(3)jw(4)jLmj=1f(3)j≡W T L F(14)ˆy(4)R=mj=1¯f(3)jw(4)jRmj=1¯f(3)j≡W T R¯F(15)where W L and W R are the weighting vectors connected between the rule layer and type-reduction layer,W T L=w(4)1L w(4)2L···w(4)mL,W T R=w(4)1R w(4)2R···w(4)mR;F and¯F are the weightedfiring strength vectorF=f(3)1mj=1f(3)jf(3)2mj=1f(3)j···f(3)mmj=1f(3)jT,¯F=¯f(3)1mj=1¯f(3)j¯f(3)2mj=1¯f(3)j···¯f(3)mmj=1¯f(3)jT.It is noted that only the lower and upper extremefiring-strengths are used in(14)and(15),respectively,such that the computation burden can be reduced.5)Output Layer:The output y(5)is determined as the average ofˆy(4)L andˆy(4)Ry(5)=ˆy(4)L+ˆy(4)R2.(16) Remark2:The results obtained so far are mainly on the structure description of IT2FNNs.It is more interesting to investigate how to model the uncertain terms in(3)with IT2FNNs.Moreover,the closed-loop stability affected by the approximation errors will be addressed later on.III.Design of IT2FNNADSCIn this paper,the dynamic surface control method is utilized to design a stabilizing controller for a class of uncertain nonlinear systems of(3).With IT2FNNs,uncertainty terms of(3)can be approximately modeled asi(x,t)=12W T i F i+εi(17) where W T i=W T iL W T iRis an ideal constant weights vector, F i=F T i¯F T iTis thefiring strength vector,andεi is the approximation error,i=1,2,...,n.In(17),there exist some unknown bounded constants W M andεM such that W i ≤CHANG AND CHAN:ADAPTIVE DYNAMIC SURFACE CONTROL FOR UNCERTAIN NONLINEAR SYSTEMS297 W M and εi ≤εM,respectively.Since there is no a prioriknowledge to determine exact W i s,an adaptive mechanismis applied such that the estimatedˆW i can asymptoticallyconverge to W i.Let the estimation error be defined as follows:˜Wi(t)=ˆW i(t)−W i,i=1,2,...,n.(18)For the aforementioned uncertain nonlinear systems,an adap-tive dynamic surface control(ADSC)will be proposed topreserve the closed-loop stability.The design procedures re-garding the ADSC are discussed in the following.To stayconcise,the notations i,i=1,2,...,n,will be used torepresent the uncertainties of(3).Step3.1:Defines1=x1−x1d.(19)It is desired to track the reference command x1d with giveninitial conditions.From(3)and(17),the derivative of(19)canbe obtained as˙s1=f1+g1x2+12W T1F1+ε1−˙x1d.(20)Let¯x2be a stabilizing function to be determined for(3)¯x2=g−11(−f1−12ˆW T1F1−k1s1+˙x1d)(21)where k1is a positive constant.An adaptive law is designated as˙ˆW 1=121F1s1−η 1ˆW1(22)with a symmetric positive definite matrix 1andη>0.Let x2d be the regulated output of¯x2through a low-passfilter τ2˙x2d(t)+x2d(t)=¯x2,x2d(0)=¯x2(0).(23) With a properly chosenτ2,the smoothed x2d can be equiva-lently considered the required¯x2.Step3.2:Analogous to the discussion in Step3.1,an error variable is defined ass2=x2−x2d.(24) From(3)and(17),the derivative of(24)can be obtained as ˙s2=f2+g2x3+12W T2F2+ε2−˙x2d.(25) Let¯x3be a stabilizing function to be determined for(3)¯x3=g−12(−f2−12ˆW T2F2−k2s2+˙x2d)(26)where k2is a positive constant.An adaptation law is designed as˙ˆW 2=122F2s2−η 2ˆW2(27)with a symmetric positive definite matrix 2.Let x3d be the smoothed variable of¯x3through a low-passfilterτ3˙x3d(t)+x3d(t)=¯x3,x3d(0)=¯x3(0).(28) With a properly chosenτ3,the smoothed x3d can be equiva-lently considered the required¯x3.Step3.i:The state tracking error of x i is defined ass i=x i−x id.(29) From(3)and(17),the derivative of(29)can be obtained as˙s i=f i+g i x i+1+12W T i F i+εi−˙x id.(30) Let¯x i+1be a stabilizing function to be determined for(3)¯x i+1=g−1i(−f i−12ˆW T i F i−k i s i+˙x id)(31) where k i is a positive constant.Similarly,an adaptive law is given as˙ˆWi=12i F i s i−η iˆW i(32) with a positive definite matrix i= T i.Similarly,let x i+1,d be the smoothed counterpart of¯x i+1through a low-passfilter τi+1˙x i+1,d(t)+x i+1,d(t)=¯x i+1,x i+1,d(0)=¯x i+1(0).(33) With a properly chosenτi+1,d,the smoothed x i+1,d can be equivalently considered the required¯x i+1.Step3.n:The state tracking error of x n is defined ass n=x n−x nd.(34) From(3)and(17),the derivative of(34)can be obtained as˙s n=f n+g n u(t)+12W T n F n+εn−˙x nd.(35) Let u(t)be a stabilizing function to be determined for(3) u(t)=g−1n(−f n−12ˆW T n F n−k n s n+˙x nd)(36) where k n is a positive constant.Analogous to the aforemen-tioned discussion,an adaptive law is given as˙ˆWn=12n F n s n−η nˆW n(37) where n is a symmetric matrix, n>0.Remark3:So far,only the design procedures,combining DSC approach and IT2FNN,are identified step by step.The derivation of control actions and learning rules in each step will be investigated later,where the closed-loop stability will also be discussed in detail.IV.Stability AnalysisIn this section,the overall stability of a class of uncertain nonlinear systems will be investigated,where a DSC controller with an IT2FNN adaption is adopted.In this paper,let 0and n be feasible sets of IT2FNNADSC such that0={(x1d,˙x1d,¨x1d):x1d+˙x1d+¨x1d≤p0}(38)n=(s i,˜W i,h i):12ni=1s2i+˜W T i −1i˜W i+12ni=2h2i≤p(39)where p0and p are positive constants.It is noticed that 0 and n are compact sets in R3and R(2m+2)n−1.298IEEE TRANSACTIONS ON CYBERNETICS,VOL.44,NO.2,FEBRUARY 2014Theorem 1:Consider a class of uncertain nonlinear systems of (3).With the DSC controller and IT2FNN adaptive laws designed as (19),(21)–(24),(26)–(29),(31)–(34),(36)–(37),the corresponding s i and ˜Wi are uniformly ultimately bounded,i =1,2,...,n ,where (38)and (39)are satisfied.Proo f :First,the errors between the stabilizing variables and the smoothed variables are defined ash i +1=x i +1,d −¯x i +1=x i +1,d −g −1i (−f i −12ˆW Ti F i −k i s i +˙x id )(40)where i =1,2,...,n −1.From (30),(31),and (40),the derivative of s i can be derived as˙s i=g i s i +1+g i h i +1−12˜W Ti F i +εi −k i s i(41)in which i =1,2,...,n −1.From (35)and (36),the derivativeof the tracking error s n can be obtained as˙s n =−k n s n −12˜W Tn F n +εn .(42)From (33)and (40),the dynamics of the smoothed variablex id can be formulated as˙x id (t )=¯x i −x id τi =−h iτi,i =2,3,...,n.(43)Therefore,from (40)and (43),the dynamic changes of h i +1can be derived in the following:˙h i +1=˙x i +1,d −˙¯x i +1=−h i +1τi +1+B i +1,i =1,2,...,n −1(44)in whichB i +1(s 1,...,s n ,h 2,...,h n ,ˆW T 1,...,ˆW T i ,x 1d ,˙x 1d ,¨x 1d )=g −2i [(˙f i +12˙ˆW T i F i +12ˆW T i ˙F i +k i ˙s i+˙h i τi )g i −˙g i (f i +12ˆW TiF i −˙x id +k i s i )].Choose a Lyapunov function candidate asV =12 n i =1 s 2i +˜W T i −1i ˜W i +12 n i =2h 2i .(45)The derivative of (45)can be obtained as˙V = n i =1 s i ˙s i +˜W T i −1i ˙ˆW i + n i =2h i ˙h i .(46)From (41),(42),(44),and (46),it leads to ˙V≤s 1(g 1s 2+g 1h 2−12˜W T1F 1+ε1−k 1s 1)+s 2(g 2s 3+g 2h 3−12˜W T2F 2+ε2−k 2s 2)+···+s i (g i s i +1−12˜W Ti F i +g i h i +1+εi −k i s i )+···+s n (−k n s n −12˜W Tn F n +εn )+ n i =1˜W T i −1i ˙ˆW i + n i =2 −h 2i τi + h i B i.(47)It is true thats i s i +1≤s 2i +s 2i +1,s i h i +1≤s 2i +h 2i +1,i =1,2,...,n −1(48)ands i εi ≤s 2i +ε2i ,i =1,2,...,n.(49)Therefore,the following inequality can be obtained by substi-tuting (4),(22),(27),(32),(37),and (48),(49)into (47)˙V ≤ n −1i =1(2γi s 2i +γi s 2i +1+γi h 2i +1)+ n i =1(−k i s 2i +ε2i +s 2i )+ n i =1(−η˜W T i ˆW i )+ n i =2 −h 2i τi+ h i B i .(50)Let α0>0be a constant such that k 1=2γ1+1+α0,k n =γn −1+1+α0,and k i =2γi +γi −1+1+α0,i =2,3,...,n −1,are satisfied.With reference to [30],using ˜Wi 2− W i 2≤2˜WT i ˆW i ,it gives ˙V ≤ ni =1(−α0s 2i +ε2i )+ n −1i =1(γi h 2i +1)+ n i =2 −h 2iτi + h i B i− n i =112η˜W i 2− W i 2 ≤ n i =1 −α0s 2i −η2λmax ( −1i )˜W T i −1i ˜W i+ n −1i =1(γi h 2i +1)+ n i =2 −h 2i τi + h i B i+ n i =1 ε2i +η2 W i 2.(51)Denoted ε2i +η2 W i 2=c i ,it leads to c i ≤ε2M +η2W 2M =c M ,i =1,2,...,n .From the positive definiteness of i ,a positive ηcan be determined such that η2λmax ( −1i)≥α0.From (51),itresults in˙V ≤ n i =1 −α0 s 2i +˜W T i −1i ˜W i +nc M + n −1i =1(γi h 2i +1)+ n i =2 −h 2iτi+ h i B i .(52)Therefore,from the assumptions of boundedness,smoothness,and compactness,B i is bounded on 0× n ,i.e.,there exists an M i >0such that B i ≤M i .Let1τi =γi −1+M 2i 2β+α0,i =2,3,...,n (53)where βis a positive number.Substituting (53)into (52),the following inequality can be obtained:˙V ≤ n i =1 −α0 s 2i +˜W T i −1i ˜W i +nc M + n i =2 − M 2i2β+α0 h 2i+ h i B i .(54)Notice that h i B i ≤h 2i B 2i 2β+ β2,i =2,3,...,n .Then,one can have˙V ≤ n i =1 −α0 s 2i +˜W T i −1i ˜W i +nc M +(n −1)β2+ n i =2 −α0h 2i −(M 2i −B 2i )h 2i 2β ≤ n i =1−α0 s 2i +˜W T i −1i ˜W i − n i =2(α0h 2i )+nc M +(n −1)β2=−2α0V +nc M +(n −1)β2.(55)Solving (55)leads toV (t )≤12α0 nc M +(n −1)β2+ V (0)−12α0nc M +(n −1)β2e (−2α0t ),∀t >0.(56)CHANG AND CHAN:ADAPTIVE DYNAMIC SURFACE CONTROL FOR UNCERTAIN NONLINEAR SYSTEMS299Fig.3.Experiment setup of the ball-and-beam system.From (56),it can be observed that V (t )is uniformly ultimately bounded,and the proof is completed.ٗRemark 4:By increasing the value of α0,the quantity of12α0(nc M +(n −1)β2)can be made arbitrarily small.Also,the convergence of V (t )implies that s i ,˜Wi ,and h i are uniformly ultimately bounded.Thus a large enough α0can make s 1=x 1−x 1d arbitrarily small.Equivalently,the stability of the proposed IT2FNNADSC control system can be guaranteed,and the purpose of tracking can be achieved in the boundness of steady-state errors.V .Example:Ball-and-Beam SystemTo verify the feasibility of the proposed IT2FNNADSC,anend-point driven ball-and-beam system is considered,where the scheme diagram is shown in Fig.3,and the associated dynamic equations are described as follows [41]:˙x 1=x 2+ 1(x ,t )˙x 2=Ax 1x 24−Agx 3+ 2(x ,t )˙x 3=x 4+ 3(x ,t )˙x 4=B cos x 3[C cos l b x 3d u −Dx 4cos l b x 3d−E −Hx 1]−BGx 1x 2x 4+ 4(x ,t )(57)where x (t )=[x 1(t )x 2(t )x 3(t )x 4(t )]T is the state vector,x 1is the ball position (m),x 2is the ball velocity (m/sec),x 3is the beam angle (rad),x 4is the angular velocity of the beam(rad/sec),A = 1+m −1B J B R −2 −1,B = J B +J b +m B x 21 −1,C =n g K b l b (R a d )−1,D = n g K b l b 2 R a d 2 −1,E =0.5l b m b g ,H =m B g ,G =2m B ,and u (t )is the input voltage of a DC motor.The physical parameters of the ball-and-beam system are listed in Table I.Remark 5:The representation of (57)can be fitted into the general form of (3).In consequence,the proposed IT2FNNADSC will be applied for the position control of a ball-and-beam system,where the ball is desired to be asymptotically balanced at a designated position.TABLE IParameters of the Ball-and-Beam System Symbol DefinitionValue m B Mass of the ball 0.0217kg m b Mass of the beam 0.334kg R Radius of the ball 0.00873m l b Beam length0.4m d Radius of the large gear 0.04mJ B Ball inertia 6.803×10−7kgm 2J b Beam inertia0.017813kgm 2K b Back-EMF constant 0.1491Nm/A R a Armature resistance 18.91 g Acceleration of gravity 9.8m /s 2n gGear ratio4.2TABLE IIParameters of ADSCi 1234k i 2840.887434.943727τi N/A0.0250.0250.025 i 100I 32α025η0.5A.Simulation ResultsFrom (21),(26),(31),and (57),stabilizing functions can be obtained as follows:¯x 2=−12ˆW T1F 1−k 1s 1+˙x 1d(58)¯x 3=−1Ag −12ˆW T 2F 2−k 2s 2−Ax 1x 24+˙x 2d (59)and¯x 4=−12ˆWT3F 3−k 3s 3+˙x 3d (60)where the employing neural networks are adaptively obtained from (32).From (36)and (57),the IT2FNNADSC based control action can be drived asu (t )=1BC cos x 3cos l b x 3d·[B cos x 3(Dx 4cos l b x 3d +E +Hx 1)+BGx 1x 2x 4+˙x4d −k 4s 4−12ˆW T4F 4].(61)The parameters for the conventional DSC and the IT2FNNADSC are initially given in Tables II and III,in which F 1=F 2=F 3=F 4=⎡⎣f (3)1 16j =1f (3)j···f (3)16 16j =1f (3)j¯f (3)1 16j =1¯f (3)j ···¯f (3)16 16j =1¯f(3)j ⎤⎦T ,[f (3)j¯f (3)j ]=[ 4i =1μ(2)˜F ij 4i =1¯μ(2)˜F ij ],j =1,2,...,16,and ˆW 1(0)=ˆW 2(0)=ˆW 3(0)=ˆW 4(0)=[ˆw(4)1L (0)···ˆw (4)16L (0)ˆw (4)1R (0)···ˆw (4)16R (0)]T .The rule table of an IT2FNN is shown in Table IV,where PS (positive small),PB (positive big),Neg (negative),Pos (positive),and Zero are interval type-2fuzzy sets.The input and output membership functions for the corresponding interval type-2fuzzy sets are shown in Fig.4.。

负折射率和光学隐形装置

负折射率和光学隐形装置
前还仅限于二维的情形,还没有实现三维物体的隐 形。而且,他们仅仅实现了单色光(实验中用的是 632.8nm的红光)的负折射,还无法同时实现整个 可见光光谱的负折射。但他同时表示,实现混合色 光的隐形装置只是个时间问题,而且,在不久的将 来,就可以使足够大尺寸的物体隐形。
负折射率材料以及隐形装置的研制成功,不仅给 促进了纳米、光学等向光学科的发展,而且给人们 提出了更多的课题。
参考文献:
1. Steven A. Cummer, Bogdan-Ioan Popa, David Schurig, David R. Smith, John Pendry: Full-wave simulations of electromagnetic cloaking structures[J]
被遮蔽物
隐形材质
真空
隐形材料
Vladimir Shalaev等人进行了一组二维的光学隐形实验。上图显示的是683.2 纳米的红光通过一个二维的光学隐形装置的传播情况。控制隐形材质的相关 变量(主要是介电常数和磁导率), Vladimir Shalaev等人使光线光滑的绕 过了障碍物,也就是,使被遮蔽物隐形了。
这就是这种负折射率材质的立体模型
Pendry和D.R. Smith在 他们的论文中指出,如 左图所示,电磁波在负 折射率介质(a)中发生 的折射的确与正常介质 (b)中的不同。 Nhomakorabeaa
b
在负折射 率介质透 镜中,电 磁波也发 生了与在 通常介质 中完全相 反的折射 现象。
但是Pendry等人的模型存在着很大的局限, 光波的频率范围是4.5×1014 Hz~7.3×10 14 Hz,而Pendry & Smith模型的有效范围是 GHz频段,也就是10 9 Hz。这种材料还不能 真正使可见光波发生负折射现象。

中国机械工程学报论文模板

中国机械工程学报论文模板

ZHANG Jiafan 1, 3, *, FU Hailun 2, DONG Yiming 1, ZHANG Yu 1, YANG Canjun 112 Zhejiang Province Instituteof Metrology,Hangzhou 310027, China3Abstract: robot teleoperation with the force-feedback in the unknown mechanism, the 3-revolution-prismatic-spherical (3RPS) parallel mechanism is devised from the concept of the human upper-limb anatomy and applied for the shoulder 3-DOF joint. Meanwhile, the orthogonal experiment design method is introduced for its optimal design. Aiming at enhancing the performance of teleoperation, the force feedback is employed by the pneumatic system on ZJUESA to produce the vivid feeling in addition to the soft control interface. Due to the compressibility and nonlinearity of the pneumatic force feedback system, a feasibility of ZJUESA system and the effect of its hybrid fuzzy controller are verified.Key words:1At first look at modern society, more and more robotsand automated devices are coming into our life and servemechatronic devices replace lower levels, essentially levels just as the term human which is coined by GOERTZ, et al [2]widely developed in the fields haptic interface to enhance the operator, also in the exciting applications in surgeryplanning, personnel training, and physical rehabilitation.DUBEY , et al [3], developed a methodology to incorporatesensor and model based computer assistance into humancontrolled teleoperation systems. In their approach, theThis project is supported by National Natural China (Grant No. 50305035), National Hi-tech Research and Development Program of China(863 Program, Grant No. ##), Beijing Municipal NaturalScience Foundation of China((Grant No. ##), and Zhejiang Provincial Natural Science Foundation of China((Grant No. ##)and was assisted parameters which the mapping of positions and velocities between the master and slave and their impedance parameters. The ESA humanarm exoskele- ton was developed to enable force-feedbackarms [4]. In recent work [5–6], been used to control the concepts were applied in researchers from Korea(KIST) introduced[11–12].master arm, in the torque sensor beams [14]. Likewise, the authorsto describe the bilateralremote manipulation in the view of the control theory [15–17].In this research, a wearable exoskeleton arm, ZJUESA,based on man-machine system is designed and ahierarchically distributed teleoperation control system isexplained. This system includes three main levels: ①supervisor giving the command through the exoskeletonsafe zone with the operator interface; ②working in hazardous zone; ③ data between supervisor-master and master-slave through the Internet or Ethernet. In section 2, by using the orthogonal experiment design method, the designY ZHANG Jiafan, et al: Novel 6-DOF Wearable Exoskeleton Arm with Pneumatic Force-Feedback for Bilateral Teleoperation·2·foundation of ZJUESA and its optimal design are presented. Then in section 3, we describe a novel hybrid fuzzy control system for the force feedback on ZJUESA. Consequently, the force feedback control simulations and experiment results analysis are presented in section 4, followed by discussions and conclusions.2 Configuration of the Exoskeleton ArmSystemThe master-slave control is widely employed in the robot the routine input device for the robot master-slave control system. The system presented in this paper is shown in Fig. 1.Fig. 1.In the system the exoskeleton arm —ZJUESA replaces the joystick as the command generator. It is an external structure mechanism, which can be worn by the operator, and can transfer the motions of human upper arm to the slave manipulator position-control-commands through the Internet or Ethernet between the master and slave computers. With this information, the slave manipulator mimics the motion of the operator. At the same time, the force-feedback signals, detected by the 6-axis force/torque sensor on the slave robot arm end effector, are sent back to indicate the pneumatic actuators for the force-feedback on ZJUESA to realize the bilateral teleoperation.Since ZJUESA is designed by following the physiological parameters of the human upper-limb, with such a device the human operator can control the manipulator more comfortably and intuitively than the system with the joystick or the keyboard input.3 Design of the Exoskeleton ArmWhat we desire is an arm exoskeleton which is capable of following motions of the human upper-limb accurately and supplying the human upper-limb with proper force feedback if needed. In order to achieve an ideal controlling performance, we have to examine the structure of thehuman upper-limb.3.1 Anatomy of human upper-limb3.1.1 Upper-limbRecently, various models of the human upper-limb anatomy have been derived. The biomechanical models of the arm that stand for precise anatomical models includingmuscles, tendons and bones are too complex to be utilized in mechanical design of an anthropomorphic robot arm. From the view of the mechanism, we should set up a more practicable model for easy and effective realization.Fig. 2 introduces the configuration of human upper-limb and its equivalent mechanical model, which is a 7-DOF structure, including 3 degrees of freedom for shoulder (flexion/extension, abduction/adduction and rotation), 1 degree of freedom for elbow (flexion/extension) and 3 degrees of freedom for wrist (flexion/ extension, abduction/adduction and rotation) [18]. The details about the motion characteristics of these skeletal joints can be obtained in Refs. [18-20]. Compared to the mechanical model, the shoulder and wrist can be considered as spherical joints and the elbow as a revolution joint. It is a good approximate model for the human arm, and the base for the design and construction of exoskeleton arm-ZJUESA.Fig. 2. Configuration of human upper limband its equivalentmechanical model3.2 Mechanism of the exoskeleton armBecause the goal of this device is to follow motions of the human arm accurately for teleoperation, ZJUESA ought to make the best of motion scope of the human upper-limb and limit it as little as possible. A flexible structure with the same or similar configuration of human upper-limb is an ideal choice. Based on the anatomy of human upper-limb, the joint motion originates from extension or flexion of the muscle and ligament with each other to generate torque around the bones. Compared with the serial mechanism, the movements of the parallel mechanism are driven by the prismatics, which act analogically to the human muscles and ligament. Besides, using the parallel mechanism not only realizes the multi-DOF joint for a compact structure and ligament. Besides, using the parallel mechanism not only realizes the multi-DOF joint for a compact structure of图题字号9磅,行距固定值11磅,段前0.3行,段后回车换行1次;图中字号7.5磅 二级标题字号10磅 图题后遇标题时,段后回车换行2次 图前段落,段后回车换行1次双码页面页眉字号8磅,单倍行距,段后1.2磅三级标题字号10磅,斜体,段前0.5行CHINESE JOURNAL OF MECHANICAL ENGINEERING·3·human upper-limb. The 3RPS parallel mechanism is one of the simplest mechanisms. Fig. 3 explains the principle of the 3RPS parallel mechanism. KIM, et al [11],introduced it into the KIST design. Here we follow this concept. The two revolution degrees of freedom embodied in the 3RPS are for flexion/extension, abduction/adduction at shoulder. Its third translation degree of freedom along z axis can be used for the dimension adjustment of ZJUESA for different operators. The prismatic joints are embodied by pneumatic actuators, which are deployed to supply force reflective capability. Also displacement sensors are located along with the pneumatic actuators and the ring-shaped joints to measure their linear and angular displacements. At elbow, a crank-slide mechanism composed of a cylinder and links is utilized for flexion/extension. At wrist, since the abduction/ adduction movement is so limited and can be indirectly reached by combination of the other joints, we simplify the configuration by ignoring the effect of this movement. As shown in Fig. 4, the additional ring the same as that at shoulder for the elbow rotation. Thus our exoskeleton arm-ZJUESA has 6 degrees of freedom totally.Fig. 3. 3RPS parallel mechanismFig. 4. Prototype of the exoskeleton arm-ZJUESA3.3 Optimization design of ZJUESAAs nentioned above, the best design is to make the workspace of ZJUESA as fully cover the scope of the human upper-limb motion as possible. We employ the 3RPS parallel mechanism for the shoulder, whoseworkspace mainly influences the workspace of ZJUESA. The optimal design of 3RPS parallel mechanism for theshoulder is the key point of ZJUESA optimal design. However, it is a designing problem with multi-factors, saying the displacement of the prismatics (factor A ), circumradius ratio of the upper and lower platforms (factor B ), initial length of the prismatics (factor C ), and their coupling parameters (factor A *B , A *C and B *C ) (Table 1) and multi-targets, namely, its workspace, weight, size. So,we use theexperiment design method with foregoing 6 key factors [21] and Eq. (1) gives the expression of the optimal target function of this problem: 0, x r Q F L R ⎛⎫= ⎪⎝⎭ where L 0 is the initial length of the prismatics, R is the circumradius of the lower base in 3RPS mechanism, r is thecircumradius of the upper base in 3RPS mechanism, θ is the expected reachable angle around axis, and xθ is thereachable angle around axis.Table 1. Factors and their levels mmLevel rankA B C A *B A *C B *C 1 60 0.5 150 - - - 2 80 0.438 160 - - - 3 100 0.389 170 - - - 4 --180---The orthogonal experiment design is outlined because of the ease with which levels can be allocated and its efficiency. The concept of orthogonal experiment design is discussed in Ref. [21] to obtain parameters optimization, finding the setting for each of a number of input parameters that optimizes the output(s) of the design. Orthogonal experiment design allows a decrease in the number of experiments performed with only slightly less accuracy than full factor testing. The orthogonal experiment design concept can be used for any complicated system being investigated, regardless of the nature of the system. During the optimization, all variables, even continuous ones, are thought of discrete “levels ”. In an orthogonal experiment design, the levels of each factors are allocated by using an orthogonal array [22]. By discretizing variables in this way, a design of experiments is advantageous in that it can reduce the number of combinations and is resistant to noise and conclusions valid over the entire region spanned by the control factors and their setting.Table 2 describes an orthogonal experiment design array for 6 key factors [23]. In this array the first column implies the number of the experiments and factors A , B , C , A *B , A *B and B *C are arbitrarily assigned to columns respectively. From Table 2, 36 trials of experiments are needed, with the level of each factor for each trial-run indicated in the array. The elements represent the levels of each factors. The vertical columns represent the另行排的数学式必须居中,单倍行距,段后回车换行1次表题字号9磅,字体加粗,段后0.3行表中字号8磅,行距固定值11磅,段后回车换行1次数学式前段落,段后回车换行1次 表前段落,段后回车换行1次单码页面页眉字号10.5磅,单倍行距,段后1.2磅 页码文字周围的图文框宽1.1 cm ,高0.4 cm ,相对于“页面”水平距离18cm ,相对于“段落”垂直距离0.4 cm图序与图题间空两格Table 后空一格,表序与表题间空两格缩写点后空一格Y ZHANG Jiafan, et al: Novel 6-DOF Wearable Exoskeleton Arm with Pneumatic Force-Feedback for Bilateral Teleoperation·4·experimental factors to be studied using that array. Each ofthe columns contains several assignments at each level for the corresponding factors. The levels of the latter three factors are dependent on those of the former three factors. The elements of the column IV , namely factor A *B , are determined by the elements in the columns I, II, and elements of column V , factor A *C , has the relationship with the elements of columns I, III, and the column VI, factor B *C , lies on the columns II, III.Table 2. Orthogonal experiment design array L36for 6 key factorsExperiment numberA B C A *B A *C B *C Result Q 1 1 1 1 1 1 1 Y 1 2 1 1 2 1 2 2 Y 2 3 1 1 3 1 3 3 Y 3 4 1 1 4 1 4 4 Y 4 5 1 2 1 2 1 5 Y 5 6 1 2 2 2 2 6 Y 6 33 3 3 1 9 9 9 Y 33 34 3 3 2 9 10 10 Y 34 35 3 3 3 9 11 11 Y 35 3633491212Y 36The relation between column IV and columns I, II is that: if level of A is n and level of B is m , the level of A *B is 3(n –1)+m , where n=1, 2, 3 and m=1, 2, 3. All the cases can be expressed as follows:(1, 1)→1 (1, 2)→2 (1, 3)→3; (2, 1)→4 (2, 2)→5 (2, 3)→6; (3, 1)→7 (3, 2)→8 (3, 3)→9.The first element in the bracket represents the corresponding level of factor A in Table 1 and the latter means the corresponding level of the factor B . Factor A *B has totally 9 levels, as factor A and factor B have 3 levels, respectively.Likewise, the relation between column V and columns I, III is(1, 1)→1 (1, 2)→2 (1, 3)→3 (1, 4)→4; (2, 1)→5 (2, 2)→6 (2, 3)→7 (2, 4)→8; (3, 1)→9 (3, 2)→10 (3, 3)→11 (3, 4)→12.Also the relation between column VI and columns II, III is(1, 1)→1 (1, 2)→2 (1, 3)→3 (1, 4)→4; (2, 1)→5 (2, 2)→6 (2, 3)→7 (2, 4)→8; (3, 1)→9 (3, 2)→10 (3, 3)→11 (3, 4)→12.The optimal design is carried out according to the first three columns:1211121235*36*1/91/91/9000000000000,0000000001/3A A B C B C I Y I Y I Y ⎛⎫⎛⎫⎛⎫ ⎪ ⎪⎪ ⎪ ⎪⎪ ⎪ ⎪ ⎪= ⎪ ⎪⎪ ⎪ ⎪⎪ ⎪⎪ ⎪⎝⎭⎝⎭⎝⎭ (2)max{}min{}i ij ij K I I =-,(3)where i = A , B , C , A *B , A *C , B *C ; j is the number of i rank. By Eqs. (2), (3) and the kinematics calculation of the 3RPS parallel mechanism [35], the relationship between the target and each factor can be obtained, as shown in Fig.5.Fig. 5. Relation between levels of factors and QAccording to the plots in 5, we can get thesuperiority and the degree of the influence (sensitivity) of each design factor. The factor with bigger extreme difference K i , as expressed in Eq. (3) has more influence on Q . In this case, it can be concluded that the sensitivity of the factors A *B and A *C are high and factors B *C and C have weak influence, since K A *B and K A *C are much bigger than K B *C and K C . And the set A 3B 1, A 2C 1, A 2, B 1, C 1, B 1C 1 are the best combination of each factor levels. But there is a conflict with former 3 items in such a set. As their K i have little differences between each other, the middle course is chosen. After compromising, we take the level 2 of factor A , the level 1 of factor B and the level 1 of factor C , namely d =80 mm, r /R =0.5, L 0=150 mm [32].It is interesting to know how good the results derived from the above 36 trials are, when compared with all other possible combinations. Because of its mutual balance of orthogonal arrays, this performance ratio can be guaranteed by the theorem in non-parametric statistics [13]. It predicts that this optimization is better than 97.29% of alternatives. Combined with the kinematics and dynamics simulation of the 3RPS parallel mechanism and ZJUESA with chosen design parameters by ADAMS, we perform the optimal design. Table 3 indicates the joint range and joint torque of双数页码周围的图文框,相对于“页面”水平距离1.8 cm量名称与量符号间空两格缩写点与后续文字间空两格CHINESE JOURNAL OF MECHANICAL ENGINEERING·5·each joint on ZJUESA. It is apparent that ZJUESA can almost cover the workspace of human upper-limb well so that it can follow the motion of human operation upper-limb with little constrain, as shown in Fig. 6.Table 3. Joint ranges and joint torques for each jointon ZJUESA Joint on ZJUESAJoint range θ/(°) Joint torque T/(N ·m)Flexion/extension (shoulder) –60-60 36 Abduction/adduction (shoulder) –50-60 36 Rotation (shoulder)–20-90 18 Flexion/extension (elbow) 0-90 28 Rotation (wrist)–20-90 13 Flexion/extension (wrist) 0-6028 Abduction/ adduction (wrist)Fig. 6. Motion of exoskeleton arm following the operator4 Hybrid Fuzzy-Controller for the ForceFeedback On ZjuesaIn master-slave manipulation, besides the visual feedback and man-machine soft interface, the force feedback is another good choice to enhance the control performance. If the slave faithfully reproduces the master motions and the master accurately feels the slave forces, the operator can experience the same interaction with the teleoperated tasks, as would the slave. In this way the teleoperation becomes more intuitive.In our bilateral teleoperation system with ZJUESA, a 6 axis force/torque sensor is mounted on the end effector of the slave manipulator and detects the force and torque acting on the end effector during performing the work. This information is transferred to the master site in real time. With dynamic calculation, the references of the generating force on actuators of ZJUESA are obtained. Hereafter, the feeling can be reproduced by means of the pneumatic system.Eq. (4) expresses the relation between the force and torque on the end effector and the torques generating on the joints: T =τJ F(4)where F —Force and torque on the end effector,⎛⎫= ⎪⎝⎭f F n ,τ —Torque on each joint,T 126()τττ=τ,J —Jacobian matrix of ZJUESA.By dividing the force arm, it is easy to get to the generating force on the joints, such as shoulder ring, elbow, wrist ring and wrist, as explained by Eq. (5):()TT345645673456f f f f a a a a ττττ⎛⎫== ⎪⎝⎭f (5)where a i (i =3, 4, 5, 6) is the force arm of the shoulder ring, elbow, elbow ring and wrist joints, respectively.As for the generating force of the prismatics on the 3RPS parallel mechanism, it can be calculated as follows [35]:13RPS 23RPS 3 f F f f f ⎛⎫⎛⎫ ⎪= ⎪ ⎪⎝⎭ ⎪⎝⎭τG f (6)where f FG —Jacobian matrix of 3RPS parallel mechanism,3RPS τ—Torques on 3RPS parallel mechanism, ()T3RPS 12ττ=τ,f 3RPS —Force on 3RPS parallel mechanism.Therefore, with Eqs. (5), (6), the total seven force references are obtained for the pneumatic system on ZJUESA. Fig. 7 explains the scheme of the pneumatic cylinder-valve system for the force feedback.Therefore, with Eqs. (5), (6), the total seven force references are obtained for the pneumatic system on ZJUESA. Fig. 7 explains the scheme of the pneumatic cylinder-valve system for the force feedback. Therefore, with Eqs. (5), (6), the total seven force references are obtained for the pneumatic system on ZJUESA. Fig. 7 explains the scheme of the pneumatic cylinder-valve system for the force feedback. Therefore, with Eqs. (5), (6), the total seven force references are obtained for the pneumatic system on ZJUESA. Fig. 7 explains the scheme of the pneumatic cylinder-valve system for the force feedback. Therefore, with Eqs. (5), (6), the total seven force references are obtained for the pneumatic system on ZJUESA. Fig. 7 explains the scheme of the pneumatic cylinder-valve system for the force feedback. Therefore, with Eqs. (5), (6), the total seven force references are obtained for the pneumatic system on ZJUESA.数学式下方的解释语及其他数学式,各行间单倍行距Y ZHANG Jiafan, et al: Novel 6-DOF Wearable Exoskeleton Arm with Pneumatic Force-Feedback for Bilateral Teleoperation ·6·Fig. 7. Scheme of the pneumatic cylinder-valve systemp1, v1, a1—Pressure, volume and section area of cylinder chamber 1p2, v2, a2—Pressure, volume and section area of cylinder chamber 2 m p—Mass of the pistona r—Section area of rodm L —Mass of loadThe high-speed on-off valves, working as the command components in the system, are controlled by the pulse width modification (PWM) signals from the control units, respectively. Rather than the proportional or servo valve, this is an inexpensive and widely used method in the application of position and force control in the pneumatic system [23–28]. To simplify the control algorithm, there is just one valve on work at any moment. For instance, when a leftward force is wanted, the valve V1 works and valve V2 is out of work. Under this case, we can control the pressure p1 in chamber 1 by modifying the PWM signals. Chamber 2 connects to the atmosphere at that time and the pressure p2 inside the chamber 2 of cylinder is absolutely ambient pressure, and vice versa. At each port of the cylinder, there is a pressure sensor to detect the pressure value inside the chamber for the close-loop control. And the throttle valves are equipped for limiting the flow out of the chamber to reduce piston vibrations. In our previous work, we gave out the specific mathematic models of the system, including pneumatic cylinder, high-speed on-off valve and connecting tube[33].However, the pneumatic system is not usually a well linear control system, because of the air compressibility and its effect on the flow line. Also the highly nonlinear flow brings troubles into the control. The conventional controllers are often developed via simple models of the plant behavior that satisfy the necessary assumptions, via the specially tuning of relatively simple linear or nonlinear controllers. As a result, for pressure or force control in such a nonlinear system, especially in which the chamber pressure vibrates rapidly, the conventional control method can hardly have a good performance.Fortunately, the introduction of the hybrid control method mentioned, gives out a solution to this problem. But the traditional design of the hybrid controller is always complicated and only available to the proportion or servo valve system. In our system, we figured out a kind of novel hybrid fuzzy control strategy for the high-speed on-off valves, which is much simpler and can be realized by micro control units (MCUs) in the contributed architecture. This strategy is composed of two main parts: a fuzzy controller and a bang-bang controller. The fuzzy controller provides a formal methodology for representing, manipulating, and implementing a person’s heuristic knowledge about how to control a system. It can be regarded as an artificial decision maker that operates in a closed-loop system in real time and can help the system to get the control information either from a human decision maker who performs the control task or by self-study, while the bang-bang controller is added to drive the response of the system much more quickly.Fig. 8 shows the concept of the proposed hybrid fuzzy controller. The concept of multimode switching is applied to activate either the bang-bang controller or the fuzzy controller mode.Fig. 8. Concept of the hybrid fuzzy controllerBang-bang control is applied when the actual output is far away from reference value. In this mode, fast tracking of the output is implemented. The fuzzy controller is activated when the output is near the set point, which needs accuracy control.In the fuzzy-control mode, we use pressure error ref actual()()()e t P t P t=-and its change ()e t as the input variables on which to make decisions. On the other hand, the width of the high voltage in one PWM period is denoted as the output of the controller.As mentioned above, the PC on master site works as the supervisor for real-time displaying, kinematics calculation and exchanges the control data with the slave computer and so on. For the sake of reducing the burden of the master PC, the distributed control system is introduced. Each control unit contains a Mega8 MCU of A TMEL Inc., working as a hybrid fuzzy-controller for each cylinder respectively, and forms a pressure closed-loop control. The controller samples the pressure in chamber with 20 kHz sampling rate by the in-built analog- digital converters. These controllers keep in contact or get the differential pressure signals from the master PC through RS232, as depicted in Fig. 9. In this mode, fast tracking of the output is implemented.当图题后面有注释时,图题前、后各0.3行注释文字字号8磅,单倍行距,最后一行段后回车换行1次CHINESE JOURNAL OF MECHANICAL ENGINEERING·7·Fig. 9. Distributed control system of the master arm5 Force Feedback ExperimentsFig. 10 gives out the set up of the force feedback experiments. The system includes the soft interface, data acquisition, Mega8 MCU experiment board, on-off valves, sensors of displacement and pressure, and the oscilloscope. We chose the cylinder DSNU-10-40-P produced by FESTO Inc. The soft signal generator and data acquisition are both designed in the LabVIEW, with which users may take advantage of its powerful graphical programming capability. Compared with other conventional programming environments, the most obvious difference is that LabVIEW is a graphical compiler that uses icons instead of lines of text. Additionally, LabVIEW has a large set of built-in mathematical functions and graphical data visualization and data input objects typically found in data acquisition and analysis applications.Fig. 10. Set-up of force feedback experimentThe plots in Fig. 11 give out experimental results of the chamber pressure outputs with step input signals on one joint. While at frequencies higher than 80 Hz, force is sensed through the operator’s joint, muscle and te ndon receptors, and the operator is unable to respond to, and low amplitude disturbances at these frequencies. We remove reflected force signals above 80Hz band by fast Fourier transfer (FFT) and get the smoothed curve in the plots. One is obtained by using hybrid control strategy and another is obtained by using traditional fuzzy controller without bang-bang controller. Although these two curves both track the reference well with very good amplitude match (less than 5% error) and a few milliseconds misalignment in the time profile, by comparing these two curves, it can be found that the adjust time of the curve with hybrid control strategy is less than 0.03 s, which is much less than 0.05 s of other with traditional fuzzy controller. It proves effect of the hybrid control strategy.Fig. 11. Experimental results with a step signalFig. 12 shows the results of tracking a sinusoidal commander. This experiment is implemented to test the dynamic nature of the system. Although there is a little error and delay between the reference curve and the experiment curve, the system has good performance. According to the experiments, the system with the help of hybrid fuzzy control strategy can track an up to 5 Hz frequency sinusoidal command well.Fig. 12. Experiment results for sinusoidal pressure commandsY ZHANG Jiafan, et al: Novel 6-DOF Wearable Exoskeleton Arm with Pneumatic Force-Feedback for Bilateral Teleoperation ·8·After then, another two experiments are carried out torealize the bilateral teleoperation with simple motion, inwhich the slave manipulator is controlled for the shoulderabduction/ adduction (the movement of a boneaway/toward the midline in the frontal plane) andextension/flexion of elbow (the movement in the sagittalplane) by the teleoperation with ZJUESA.In the first experiment, the operator performs theshoulder abduction/adduction movement with ZJUESA,when the slave robot follows and holds up the load. Withthe force feedback on ZJUESA, the operator has feeling asif he holds the load directly without the mechanicalstructure, as shown in Fig. 13. Plots in Figs. 14, 15 showthe torque and force on each joint on ZJUESA during theshoulder abduction/adduction movement from 45° to 90°(inthe frontal plane) with 5 kg load. There are some remarks. In plots of Fig. 14 shoulder 3RPS-x means the torque around x-axis of 3RPS mechanism at shoulder and the same to shoulder 3RPS-y. Shoulder ring, elbow, wrist ring and wrist represent the torques on these joints, respectively. The characters shoulder 3RPS-1, shoulder 3RPS-2 and shoulder 3RPS-3 in Fig. 15 represent corresponding force on the cylinders on 3RPS parallel mechanism (referring to Fig. 3) with length L1, L2 and L3, respectively.Fig. 13. Shoulder abduction/adduction teleoperation Fig. 14. Torques on the joints of the shoulderabduction/adduction for 5 kg load liftingFig. 15. Force feedback on the cylinders of the shoulder abduction/adduction for 5 kg load liftingThe operator teleoperates the slave manipulator with force feedback as if he performs for lifting a dumbbell or raising package in daily life (Fig. 16). Fig. 17 shows the moment on each joint during the process for producing the feeling of lifting a 10 kg dumbbell. Fig.18 depicts the force output of every pneumatic cylinder on ZJUESA.All these results of experiments demonstrate the effect of ZJUESA system. ZJUESA performs well by following the motions of human upper-limb with little constrain and the pneumatic force feedback system supplies a proper force feedback tracking the reference well.Fig. 16. Extension/flexion for elbow teleoperationFig. 17. Torques on the joints of the elbowextension/flexion for 10 kg load lifting。

负折射率隐身衣英文版

负折射率隐身衣英文版
Geometric optics-based multiband cloaking of large objects with the wave phase and amplitude preservation
Ran Duan,1 Elena Semouchkina,2,* and Ravi Pandey1
1Leabharlann Abstract: The geometric optics principles are used to develop a unidirectional transmission cloak for hiding objects with dimensions substantially exceeding the incident radiation wavelengths. Invisibility of both the object and the cloak is achieved without metamaterials, so that significant widths of the cloaking bands are provided. For the preservation of wave phases, the λ-multiple delays of waves passing through the cloak are realized. Suppression of reflection losses is achieved by using half-λ multiple thicknesses of optical elements. Due to periodicity of phase delay and reflection suppression conditions, the cloak demonstrates efficient multiband performance confirmed by full-wave simulations.

Teleoperation of ABB Industrial Robots

Teleoperation of ABB Industrial Robots

Teleoperation of ABB Industrial RobotsMohsen Moradi Dalvand and Saeid NahavandCentre for Intelligent Systems Research (CISR), Deakin University, Waurn Ponds Campus, AustraliaAbstractPurpose – This paper analyses teleoperation of an ABB industrial robot with an ABB IRC5 controller. A method to improve motion smoothness and decrease latency using the existing ABB IRC5 robot controller without access to any low level interface is proposed.Design/methodology/approach –The proposed control algorithm includes a high-level PID controller used to dynamically generate reference velocities for different travel ranges of the tool centre point (TCP) of the robot. Communication with the ABB IRC5 controller was performed utilising the ABB PC software development kit (SDK). The multitasking feature of the IRC5 controller was used in order to enhance the communication frequency between the controller and the remote application. Trajectory tracking experiments of a pre-defined 3D trajectory were carried out and the benefits of the proposed algorithm was demonstrated. The robot was intentionally installed on a wobbly table and its vibrations were recorded using a six degrees of freedom (DOF) force/torque sensor fitted to the tool mounting interface of the robot. The robot vibrations were used as a measure of the smoothness of the tracking movements.Findings – A communication rate of up to 250 Hz between computer and controller was established using C# .Net. Experimental results demonstrating the robot tool centre point (TCP), tracking errors, and robot vibrations for different control approaches were provided and analysed. It was demonstrated that the proposed approach results in the smoothest motion with less than 0.2 mm tracking errors.Research limitations/implications – The proposed approach may be employed to produce smooth motion for a remotely operated ABB industrial robot with the existing ABB IRC5 controller. However, in order to achieve high-bandwidth path following, the inherent latency of the controller must be overcome, for example by utilising a low level interface. It is particularly useful for applications including a large number of short manipulation segments, which is typical in teleoperation applications.Social implications – Using the proposed technique, off-the-shelf industrial robots can be used for research and industrial applications where remote control is required.Originality/value – Although low-level control interface for industrial robots seem to be the ideal long-term solution for teleoperation applications, the proposed remote control technique allows out-of-the-box ABB industrial robots with IRC5 controllers to achieve high efficiency and manipulation smoothness without requirements of any low-level programming interface.Keywords - Robotics, Teleoperation, Remote control,PID controller, Trajectory tracking, Force/torque sensor, VibrationIntroductionIndustrial robot manipulators are relatively cheap and mechanically optimised (Hirzinger et al., 2005, Brogårdh, 2007) and are therefore frequently utilised for both industrial applications and research projects (Desbats et al., 2006, You et al., 2012, Moradi Dalvand and Shirinzadeh, 2013). Commercial industrial robotic systems typically provide a static off-line programming interface (Xiao et al., 2012). Although off-line programming is sufficient for most industrial applications, some applications including teleoperation and force-guided manipulation require a more flexible and dynamic approach to incorporate online trajectory generation and path planning into the control architecture (Xiao et al., 2012, Cederberg et al., 2002, Nilsson and Johansson, 1999). In on-line control applications, new information including set points of the trajectory are calculated on-line from a remote client and sent to the controller “on-the-fly” (Neto et al., 2010). This approach enables the control architecture to react instantaneously to unforeseen events. Teleoperation applications typically involve a large number of short displacement commands executed consecutively. For force-controlled manipulation and teleoperation applications in stiff environments, control rates of more than 1 kHz are essential to avoid damaging the robot and its environment and to improve control stability/quality or perceptual stability, respectively (Cederberg et al., 2002, Kubus et al., 2009, Kroger et al., 2004, Moradi Dalvand et al., 2013, In press). Especially for high approach velocities and stiff environments, high update rates and low delays are key requirements for stability and thus for realistic haptic perception (Kubus et al., 2009). Different control techniques have been studied and investigated for teleportation of robot manipulators requiring direct access to motor controllers including feed-forward control, trajectory pre-filtering, and predictive control (Slama et al., 2007, Slama et al., 2008, Torfs et al., 1998). In the literature, to the best knowledge of the authors of this article, very limited research has been conducted to study and analyse control techniques to improve the smoothness and decrease the latency of the movements of off-the-shelf industrial robots for teleoperation applications (Cederberg et al., 2002, Li et al., 2003).While commercially available manipulators generally offer favourable mechanical and electrical characteristics for control-related research, the provided control interfaces do not allow for high-rate low-level control (Blomdell et al., 2005). The robot controllers provided by manufacturers typically lack high-rate low-latency communication interfaces and only provide control intervals in the two-digit millisecond range (Kubus et al., 2010). In ABB robot controllers, the delay between when a movement instruction executes and the time at which the robot moves comprises of the time it takes for the robot to plan for the next movement (approximately 0.1 s) and the servo lags (0-1.0 s depending on the velocity and the actual deceleration performance of the robot) (ABB-Robotics, 2000). Often the performance of robotic systems and the quality of research results can be improved significantly by employing an appropriate low-level control interface instead of the original controller (Kubus et al., 2010). Therefore, the main focus of the researchers in this field in the last decade have been concentrated on the development and implementation of low-level interfaces to the robot hardware in order to overcome the limitations of the commercially available control systems for the industrial robot manipulators (Kubus et al., 2010).Several suppliers of industrial robots have introduced a possibility of interoperability of their robot controllers with control algorithms executing on external computers and developed new control interfaces enabling open communication channels with rates higher than 1 kHz. Such communication channels allow the users low-level access (with given limits) to the control strategies of the commercial devices. Mitsubishi developed an optical ARCNet communication channel to the PA-10 robot arm with 400 Hz communication rate (Artemiadis and Kyriakopoulos, 2006). COMAU Robotics introduced a new policy and changed the open version of the industrial COMAU C3G 9000 controller produced by Tecnospazio into a partially opened C3G control system (Lippiello et al., 2007). KUKA Roboter GmbH introduced a real-time Ethernet channel at 1 kHz for the third generation of DLR lightweight robots (LWR III) (Albu-Schäffer et al., 2007). Finally, Staubli introduced a Low Level Interface (LLI) that allows an easy interface to the low level control loop and is integrated with CS8C controllers (Pertin and Bonnet des Tuves, 2004).Modifying commercial robot controllers or developing open robot controllers for commercial manipulators requires advanced knowledge in several fields and is difficult to achieve without compromising safety. Although low-level control interfaces seem to be the ideal long-term solution for applications like teleoperation, very little has been published about short-term solutions or methods to improve the performance of the current built-in commercial control systems. Only limited information can be obtained from the manufacturers and robot suppliers, and very little research has been conducted to analyse thefunctional properties of industrial robots and controllers (Cederberg et al., 2002, Li et al., 2003). In one limited study directly dealing with remote control of the industrial robots, the response frequency of an ABB industrial robot with an S4C+ controller was found to be 10 Hz. (Cederberg et al., 2002). This study also proposed a control technique; however, no experimental results and validation were provided. In another research effort, remote control of an ABB industrial robot for meat processing applications was studied (Li et al., 2003). This study utilised an ABB IRB 4400 robot with an S4C+ robot controller to evaluate the accuracy and latency of multi-point non-continuous trajectory tracking for a remote application. Experimental results indicated two cases of 800 ms delay with low tracking errors and 150 ms latency in remote control responses with 10 mm tracking errors. The conclusion was that although commercial industrial robots can achieve good accuracy, the motion response of industrial robots does not support real time control and it is therefore impossible to adapt an industrial robot directly to an application with real time or quick response operation requirements.Figure 1 – Experimental setup including an ABB IRB 120 robot incorporated with a force/torque sensor, an ABB IRC5 robot controller, and a standard computer connected to the controller through Ethernet network Both of these studies have revealed that it is challenging to produce remote control motions with high accuracy and low delay; however, more research is required to investigate other control techniques for modern robot controllers like the ABB IRC5 controller with built-in PC-Interface and Multitasking features. This paper focuses on evaluation and improvement of control techniques for teleoperation of an ABB IRB 120 robot with the ABB IRC5 controller (Figure 1). The objective was to follow 3D trajectories continuously generated from a remote location. Extensive experiments using various control techniques were conducted and the results were analysed. The remainder of this paper is organised as follows: A brief explanation of the remote control techniques of industrial robots is introduced in Section 2 while Section 3 describes the experimental setup. In Section 4, experimental results are reported and analysed. Finally, conclusions are presented in Section 5.Control TechniqueABB PC software development kit (SDK) makes it possible for system integrators, third parties or end-users to add their own customised operator interfaces to the IRC5 controller. This allows flexible solutions and customised interfaces fitting the specific requirements of an operator (ABB-Robotics, 2000). Such stand-alone applications with customised interfaces are able to communicate with the real robot controller over a network or with a virtual robot controller in ABB RobotStudio. There are several restrictions for remoteclients and specific privileges must be acquired by requesting so-called “mastership” of the controller domain explicitly by the remote application. The IRC5 controller must also be in automatic operating mode. The remote applications are able to monitor and access one or several robot controllers from one location and are less limited in terms of memory resources and process power compared with “local” clients on the FlexPendant (ABB-Robotics).ABB PC SDK provides a set of services to monitor and control the robot. These services include connection, communication, file and task management services (Cederberg et al., 2002). The connection management services basically provide support for other services, for example start, stop, and restart of a connection. The communication management service is responsible for retrieving information from the robot controller and updating data of the robot controller, for example reading or modifying predefined persistent controller variables. The functionality to access files on the robot controller, for example to open, read, write, close, rename and delete a file is provided by the file management services. Finally, the task management service provides methods such as start, stop, and restart of RAPID tasks in single task or multitask configurations.Normally, a RAPID program is prepared for a particular application and all of its parameters are predefined, updated and used within the program execution loop. It is realised as a stand-alone program and typically has no communication with the outside world except for digital and analogue input and output. However, for some applications (e.g. teleoperation) the local RAPID program must communicate with a remote program in order to send and receive RAPID data. Figure 2 presents an overview of the control architecture used in this research. The external computer interacts with an executing RAPID program by updating RAPID data defined as persistent variables in the program. This interaction is achieved by a remote master-slave application developed by the ABB PC SDK. The RAPID program on the slave side typically includes a sequence of MoveL instructions whose parameters are modified by a remote master application. The MoveL instruction refers to a linear movement to a given destination (ABB-Robotics, 2000). Its arguments include v and z that denote TCP velocity and zonedata structures, respectively. Zonedata is used to specify how a position is to be terminated and how close to the programmed position the robot must be before moving towards the next ordered position.Figure 2 – Remote Control Architecture (RAB: Robot Application Builder, PERS: persistent)The pseudo code for the RAPID program and remote program proposed in this paper are listed in Table 1. In the RAPIDprogram, two variables, p1 and p2, of RobTarget structures are predefined and act as buffers for the variables being updated by the remote application. A RobTarget structure defines the position and orientation of the robot and external axes. As a robot is often able to achieve the same position and orientation of the tool with two or more axis configurations, the RobTarget structure also includes data tounambiguously define the axis configuration of the robot. In order to enable read and write of variables between multiple RAPID programs in multitasking programming, the variables are declared as persistent. During execution, the values of p1 and p2 are dynamically updated in a circular fashion with the data sent by the remote application. The remote application and the RAPID code are synchronised so that the remote application keeps track of which MoveL instruction is currently running and which RobTarget structure needs to be updated for the next MoveL instruction.In order to achieve smooth transition between consecutive linear movements between positions that are updated on-the-fly, the zone value should be used. If the argument fine is used instead of a zone value, the following move instruction is executed after the robot is standing still in its destination. This results in a jaggy motion. If instead a zone value is defined, the next MoveL instruction is executed before the robot has reached the target position. This produces the smooth transitional motions which are desired in teleoperation applications. Utilising the zone argument requires that the RobTarget structure for the next MoveL instruction is defined before the interpolator is finished with the first destination. For example, when the robot starts moving to p2, the next point (p1) must be defined.Table 1 – Pseudo code for the RAPID program and remote programRAPID Program Remote Program! VARIABLE DECLARATIONS proc IntelliMove(start,end) Dist:=Distance(start,end); vVar:=PID(Dist);MoveL end, vVar, z, tool0; Dist_pre:=Dist;ENDPROCproc Main()!INITIALISATION...WHILE NOT ABORTED DOWaitUntil Flag1=FALSE;IntelliMove pre_p2,p1;pre_p1:=p1;curP:=2;Flag1:=FALSE;WaitUntil Flag2=FALSEIntelliMove pre_p1,p2;pre_p2:=p2;curP:=1;Flag2:=FALSE;ENDWHILEENDPROC // VARIABLE DECLARATIONS ... getControllerMASTERSHIP(); startRAPIDTasks();while (true){getRAPIDData();if NOT IsChanged(curP) continue;switch (curP){case 2:updateRAPIDData(p1);updateRAPIDData(Flag2); case 1:updateRAPIDData(p2);updateRAPIDData(Flag1); }}On the other hand, initiating a MoveL instruction ahead of a robot movement may lead to executing multiple MoveL instructions with identical RobTarget structures. This leads to a stop which also causes jaggy motions. To avoid this problem, a mechanism was established in order to make sure that the RobTarget value of the next MoveL instruction has already been updated with a Robtarget different than the previous one before initiating a MoveL instruction. Variables Flag1, Flag2 and curP are defined in the remote application (Table 1) to make sure the update sequence of the RAPID data are one step ahead of the move instruction sequence. Once the next Robtarget and consequently the corresponding variable (Flag1 or Flag2) are validated from the remote application on the master side, the RAPID program that is continuously monitoring the variables Flag1 and Flag2 is notified. The variableFlag1 or Flag2 confirms that the next RobTarget has been updated and the next MoveL instruction can be initiated. The variable curP is set at this exact time to avoid re-validation of the variable Flag1 or Flag2 by the remote application.The positions of the RobTarget variables updated by the remote application are not equidistant. Resultsfrom the experiments performed in this paper show that the size of the programmed velocity has a profound effect on the motion quality. If the programmed velocity deviates too much from an optimal value, the resulting motion becomes very jaggy. In order to resolve this problem, a PID control algorithm was implemented to dynamically calculate the necessary velocity of the TCP according to the required travel distance. The procedure IntelliMove in the pseudo code for the RAPID program (Table 1) calculates the dynamically updating velocity according to the distance to the new target and initiates the MoveL instruction.Experimental SetupIn order to verify the validity and effectiveness of the proposed teleoperation control technique, trajectory tracking experiments for a time-varying 3D figure-eight curve shaped like an ∞symbol (also known as lemniscate of Gerono) were carried out. Figure 3 presents consecutive photos of the ABB IRB 120 robot during the trajectory tracking experiment. Highlighted photos represent the robot states at start, middle and end of the tracking trajectory. Furthermore, to compare the smoothness of the robot motion for different control parameters, the robot was installed on a wobbly table and the vibrations caused by the robot movements were used as a measure of motion smoothness. The vibrations were captured by a 6-DOF force/torque sensor from Sunrise Instruments (/) mounted at the tool mounting interface of the robot.As shown in Figure 1, the experimental setup includes an ABB IRB 120 industrial robot incorporated with a force/torque sensor, an ABB IRC5 robot controller, and a standard computer that is connected to the robot controller through Ethernet network. Software including Visual Studio .Net, ABB PC SDK components of Robot Application Builder (RAP) and ABB RobotWare were installed on the computer. The IRC5 robot controller was integrated with the option PC Interface, which is required for communication with an external computer. Trajectory tracking manipulations were performed by updating the RobTarget data structures one by one and submitting them to the RAPID program to process and initiate the MoveL instructions accordingly.Figure 3 – Consecutive photos of the robot during the trajectory tracking motionstarting from left to right and top to bottomExperimental Results and DiscussionTo evaluate the proposed control technique various tracking experiments with different velocities of the MoveL instructions were conducted. Experiments were performed both with constant speed and by dynamically updating the speed using the proposed PID controller. The constant velocities were chosen as 40, 50, 60, 80, 100, 200, 500, 1000 and vmax mm/s, where vmax is the maximum achievable TCP speed in the present configuration. As listed in Table 1, vVar is the dynamic velocity calculated using the PID controller. The calculation of vVar is based on the distances between the ordered spatial positions along the tracking path and the orientations of the TCP at those points are not involved. If the manipulation involves sudden changes in the orientations of the TCP, the calculation method should be modified accordingly.Figure 4 shows the experimental results for the robot TCP displacements for the constant velocities vmax, 500, 200, 100, 80, 60, and 40 mm/s. Figure 5 illustrates the desired and real trajectories of the TCP for these experiments in 3D. Tracking errors for these experiments are listed in Figure 6. The vibration of the robot measured by the force/torque sensor is used as a measure of the smoothness of the tracking motion. The robot vibrations in the form of the force measured by the sensor for all of the constant velocities are presented in Figure 7. The latencies of the robot movement with respect to the desired time-varying trajectory for different constant velocities with zone value and fine argument as well as the dynamically updating velocity calculated using the proposed PID controller are listed in Table 2.As shown in Figures 4a-4c (and 5a-5-c), the results for the velocities vmax, 500, and 200 mm/s are similar and all relatively jagged. For these high speed experiments, the robot TCP reached the destinations along the tracking path fairly quickly before executing the next MoveL instruction available to the controller. In other words, the update rate of the RobTarget data structures were slower than the robot movements. Hence, the robot had stationary states in different intervals waiting for the next commands to be ready, explaining the jaggedness in the movements. The reason of having similar results for these high speed experiments is that the control system reduces the programmed speed to what is physically possible. The duration of each experiment was around 17.5 s and the each tracking error was less than 0.2 mm (Figures 6a-6c). As shown in Figures 7a-7c, the vibration of the robot in each experiment was extreme with a maximum of around 9 N.The robot tracking movement was greatly improved with much smoother tracking motions by decreasing the velocity to 100 mm/s (Figures 4d and 5d). The results are still jagged in some parts of the tracking trajectory, depending on the travel distances for different RobTarget structures along the tracking path and so the robot vibration remained at a high level with a maximum around 9 N (Figure 7d). Also in these cases, the duration of each tracking motion was around 17.5 s and the tracking error less than 0.2 mm (Figure 6d).Table 2 – Robot movement latencies for constant velocities with zone value and fine argument as well as the dynamically updating velocity calculated using the proposed PID controller Speed Type Constant DynamicZone zone value fine zone valueSpeed (mm/s) vmaxto200100 80 6050to40vmaxto200100 80 6050to40vVarLatency (ms) 430380 542 2786 N/A 327 710 1553 3417 N/A 344When the speed was decreased to 80 mm/s, uneven and irregular delays appeared at different stages of the tracking experiment (Figures 4e and 5e) and no improvement was made in terms of vibration which remained at the maximum level measured by the sensor (around 9 N in Figure 7e). The total time of the experiment was also slightly increased to around 18 s (Figures 4e and 5e). The tracking errors due to the irregular delays had a large jump to around 1.7 mm at the maximum level (Figure 6e). These problems increased when the speed was decreased to 60 and 40 mm/s (Figures 4f, 4g, 5f, and 5g), respectively, with huge delays between consecutive MoveL instructions and much longer time for the entire robot movement (20.5 and 23 s, respectively). The corresponding tracking errors at the maximum level increased to around 32 and 40 mm (Figures 6f and 6g), respectively. However, the vibration of the robot for these low speed manipulations were decreased, resulting in smoother plots with maximum value reduced to around 5 and 3.5 (N) (Figures 7f and 7g), respectively. The minimum latency in the robot motion was measured around 380 ms for the constant velocity of 100 mm/s (Table 2 and Figures 4d and 5d).Figure 4 – TCP displacements for seven differentconstant velocitiesFigure 5 – Desired and real trajectories in 3Dforce/torque sensor mounted at the TCP of therobot for seven different constant velocitiesdifferent constant velocitieswith fine argumentsrobot vibration, and dynamic velocity calculatedusing the proposed PID controller Figure 8 presents the results for TCP displacements, tracking errors, and robot vibrations using the constant speed vmax and the argument fine in the MoveL instructions. As shown in this figure, this leads to a very jagged tracking movement (Figure 8a) with the largest vibration of all the simulations (Figure 8c). However, this case also has the lowest tracking errors of all simulations (Figure 8b). The reason why the tracking errors are minimised in this case is that when a MoveL instruction is initiated with the argument fine, the robot does not process next instructions until the robot reaches the target. Therefore, no interpolation between the consecutive targets are used which minimises the tracking errors (Figure 8b). The manipulation latency using the constant speed vmax and the argument fine in the MoveL instructions was measured around 327 ms (Table 2 and Figure 8a).Figure 9 shows the experimental results for the TCP displacements, tracking errors, robot vibrations as well as the dynamics velocity calculated by the proposed controller PID using the proposed PID controller. The speed vVar was calculated on-the-fly and dynamically updated. As this figure indicates, this leads to a significantly smoother tracking motion compared with the best observed results for the constant speeds at 100 mm/s (Figure 4d and 5d). The tracking errors were less than 0.2 mm which are similar to the errors for the tracking motions with the constant velocities of 100 mm/s (Figure 6d) and higher (Figures 6a-6c). The amount of the vibration was also similar to the range of the vibrations for the low speed tracking motions with the constant velocities of 60 and 40 mm/s (Figures 7f and 7g). Figure 9 also illustrates the velocities calculated using the PID controller during the trajectory tracking experiment (Figure 7g). The manipulation latency using the dynamically updating velocity was around 344 ms (Table 2 and Figure 9).The experimental results imply that for each MoveL instruction, there exists one optimal constant TCPspeed which leads to the smoothest tracking motion. This specific speed depends on the type and time variance of the trajectory and employing a constant speed for all movements will therefore not be ideal neither in terms of smoothness nor in terms of vibration. Using a speed that is lower or higher than the optimal speed causes the stop-start scenarios, irregular delays and consequently jaggy or inaccurate motions shown in Figures 4-7. This issue was solved in the proposed control technique using the PID control algorithm designed to dynamically calculate the velocity according to the distance between the points along the desired trajectory (Figure 9). Using the proposed PID controller, the latency in the robot movements was also decreased from 420 ms for the constant velocity of 100 mm/s (Figures 4d and 5d) to 340 mm/s for the dynamically updating velocity.ConclusionDue to the inherent design of commercial robot control systems, industrial robots typically have limited capabilities for real time operations. Industrial robots typically exhibit a long delay between the time a motion is ordered and the robot actually moves. This latency makes standard industrial robots unsuitable for many applications requiring an immediate response to sensor inputs. However, this paper demonstrates that smooth motion with minimum latency can be achieved using a control technique based on circularly validating two pre-defined variables in the RAPID program using a remote application. Extensive experiments were conducted showing successful tracking motions with tracking errors less than 0.2 mm with constant speeds; however, the achieved motions were jaggy and caused lots of vibrations. To improve the motion smoothness, a PID control algorithm was introduced to dynamically update the programmed speed for the next consecutive motion based on the distance between ordered positions along the trajectory. It was demonstrated that this approach maintains good positioning accuracy and leads to a smooth tracking motion with minimised vibration and a latency of 344 ms. This result was achieved using the existing IRC5 robot controller without access to the low-level programs of the controller. The proposed control technique may be employed to remotely control industrial robots for applications in dangerous environments like nuclear power plants and microbial diseases laboratories, or any other application with hard and/or dangerous working condition.ReferencesABB-ROBOTICS PC SDK Application manual. ABB Robotics.ABB-ROBOTICS 2000. PC Rapid Reference Manual - Rapid Overview On-Line. ABB Robotics,3HAC 0966-50.ALBU-SCHÄFFER, A., OTT, C. & HIRZINGER, G. 2007. A unified passivity-based control framework for position, torque and impedance control of flexible joint robots. The International Journal of Robotics Research, 26, 23-39.ARTEMIADIS, P. K. & KYRIAKOPOULOS, K. J. Teleoperation of a robot arm in 2d catching movements using emg signals and a bio-inspired motion law. Biomedical Robotics and Biomechatronics, 2006.BioRob 2006. The First IEEE/RAS-EMBS International Conference on, 2006. IEEE, 41-46. BLOMDELL, A., BOLMSJO, G., BROGARDH, T., CEDERBERG, P., ISAKSSON, M., JOHANSSON, R., HAAGE, M., NILSSON, K., OLSSON, M. & OLSSON, T. 2005. Extending an industrial robot controller: implementation and applications of a fast open sensor interface. Robotics & Automation Magazine, IEEE, 12, 85-94.BROGÅRDH, T. 2007. Present and future robot control development—an industrial perspective. Annual Reviews in Control, 31, 69-79.CEDERBERG, P., OLSSON, M. & BOLMSJÖ, G. Remote control of a standard ABB robot system in real time using the Robot Application Protocol (RAP). Proceedings of the 33rd ISR (International Symposium on Robotics) October, 2002. 11.DESBATS, P., GEFFARD, F., PIOLAIN, G. & COUDRAY, A. 2006. Force-feedback teleoperation of an industrial robot in a nuclear spent fuel reprocessing plant. Industrial Robot: An International Journal, 33, 178-186.。

六度分隔的另一种证明方式

六度分隔的另一种证明方式

六度分隔的另一种证明方式英文回答:The six degrees of separation theory, also known as the small-world phenomenon, is a sociological theory that states that any two people on Earth are connected by a chain of six or fewer acquaintances. This theory has been popularized by the play and film "Six Degrees of Separation," which follows the story of a young man who claims to be the son of Sidney Poitier.There are several different ways to prove the six degrees of separation theory. One method is to use the "Erdős number," which is a measure of how close someone is to the Hungarian mathematician Paul Erdős. Erdős was a prolific collaborator, and he published over 1,500 papers with more than 500 different co-authors. As a result, anyone who has ever collaborated with Erdős has an Erdős number of 1. Anyone who has collaborated with someone who has an Erdős number of 1 has an Erdős number of 2, and soon.By calculating the Erdős numbers of a large number of people, it is possible to show that the average Erdős number is around 4. This means that any two people on Earth are likely to be connected by a chain of four or fewer acquaintances.Another way to prove the six degrees of separation theory is to use the "small-world experiment." In this experiment, participants are given a list of names and asked to send a message to a target person. Theparticipants are only allowed to send the message to someone they know, and the recipient of the message is then allowed to send it to someone they know, and so on. The experiment has been conducted several times, and it has consistently shown that the average number of steps it takes to send a message from one person to another is around six.The six degrees of separation theory has implications for a variety of different fields, including sociology,psychology, and marketing. For example, the theory suggests that it is possible for people to spread information or ideas very quickly through their social networks. The theory also suggests that it is possible for people to find common ground with others, even if they come from different backgrounds.中文回答:六度分隔理论,又称小世界现象,是一种社会学理论,声称地球上的任何两个人都可以通过六个人或更少的熟人联系起来。

Haptic device

Haptic device

专利名称:Haptic device 发明人:Mor, Andrew B.申请号:EP99108224.9申请日:19990427公开号:EP0980037A2公开日:20000216专利内容由知识产权出版社提供摘要:Apparatus is provided to extend the number of active degrees of freedom of a haptic interface to provide the user with the feel of a tool which is cantilevered over obstacles or which experiences frictional drag along its shaft. This is accomplished by providing two additional active degrees of freedom while not limiting the rotation or translation of the handle that the user is grasping. The apparatus constrains, through a 4 degree of freedom gimbal, the shaft of the tool handle, whose tip is controlled by another 3 spatial degree of freedom haptic device. The shaft of the tool slides and rotates in a sleeve hearing or collar which is mounted in a 2 degree of freedom gimbal. The gimbal is rigidly connected to a 2 degree of freedom parallel planar manipulator, with both degrees of freedom of the planar manipulator being powered by actuators used to generate the requisite haptic forces. The use of this device provides users with a 5 degree of freedom device, through which they can feel forces and moments, instead of only point forces which are generated by 3 degree of freedom devices. This is useful when performing simulations where a portion of the tool removed from the tip may contact an obstruction instead of just the tip.申请人:MITSUBISHI DENKI KABUSHIKI KAISHA地址:2-3, Marunouchi 2-chome Chiyoda-ku Tokyo 100-8310 JP国籍:JP代理机构:Pfenning, Meinig & Partner 更多信息请下载全文后查看。

自由度与自由度数

自由度与自由度数

分子种类
3
0
3
3
2
5
3
3
6
说明 1. 分子的自由度不仅取决于其内部结构,还取决于温度。
2. 实际上,双原子、多原子分子并不完全是刚性的,还有振动 自由度(s)。但在常温下将其分子作为刚性处理,给出与 实验大致相符的结果,因此可以不考虑分子 内部的振动, 认为分子都是刚性的。
3. 非刚性双原子分子有一个沿两质心联线振动的振动自由度
自由度与自由度数
自由度(degree of freedom)定义:描述一个物体在空间 的位置所需的独立坐标称为该物体的自由度。 自由度数:确定一个分子的空间位置所需要的独立坐标数目。
刚性分子:分子内原子之间的距离保持不变的分子。
z
zz
zH
O z
H zH
O
H
C CC
zO z z
H
H
C
C
C
He
(a)
x
O
He
(a) yx
O2
z
zHe
O O(b) (a)
x
x y
yx y
Oz 2 (b)
xy O
x
y O xy
x
HO2O2
( cO()b)z x y
H2O
(c)z
yx y
z yx
Oy
H2O
(cO)
z
O
x
z xy
z yx
x x y xy y
y
(a) (a) (a)
(b) (b) (b)
(c) (c) (c)
4. N个原子组成的多原子分子,其自由度数最多为3N个。一般 来说,在这3N自由度中,有三个(整体)平动、三个(整 体)

科学文献

科学文献

per S.E., “Roles of Proprioceptive Input in the Programmingof Arm Trajectories”.Cold Spring Harbor Symp. Quant. Bi-ol.,1990,55:837-847.[5]Ghez C., Hening W., and Favilla M., “Parallel InteractingChannels in the Initiation and Specification of Motor Re-sponse Features”. In Attention and Performance XIII. MotorRepresentation and Control, M Jeannerod, ed. Hillsdale, N.J.:Erlbaum, pp. 265-293, 1990.[6]Ghez C., Hening W., and Gordon.J, “Organization of V olun-tary Movement”. Curr. Opin. Neurobiol. 1:664-671, 1991. [7]Graziono M.S. and Gross C.G., “A Bimodal Map of Space:Tactile Receptive Fields in the Macaque Putamen with Corre-sponding Visual Receptive Fields”.Exp Brain Res.97:96-109, 1993.[8]Kelso J.A.S and Holt K.G, “Exploring a Vibratory SystemAnalysis of Human Movement Production”.J. Neurophysiol.43:1183-1196, 1980.[9]Kurzion Y. and R. Yagel, “Volume deformation using RayDeflectors”,The 6th Eurographics Workshop on Rendering,Dublin, June 1995, pp. 21-32.[10]Laur D. and P. Hanrahan, “Hierarchical Splatting: A Progres-sive Refinement Algorithm for V olume Rendering”,Comput-er Graphics,25(4):285-288,Proceedings of SIGGRAPH’91,July 1991.[11]Leslie A.M. and Keeble S., “Do Six-Month-Old Infants Per-ceive Causality?”,Cognition 25:267-287, 1987.[12]Mcdonald J.S., Rosenberg L.B., Stredney D., “Virtual RealityTechnology Applied to Anesthesiology, Interactive Technolo-gy and the New Paradigm for Health Care”,ProceedingsMedicine Meets Virtual Reality III, R.M. Satava, ed., IOSPress Amesterdam, 237-243, 1995.[13]Messerklinger, W., “Endoscopy of the Nose”, Urban andSchwartzenburg, Inc. Baltimore, Maryland, 1978.[14]Rohlf J. and J. Helman, “IRIS Performer: A High Perfor-mance Multiprocessing Toolkit for Real-Time 3D Graphics”,Proceedings of SIGGRAPH ‘94 (Orlando, Florida, July1994), pp. 381--395.[15]Schmalbrock P., Pruski J., Sun L., Rao A., and Monroe J.W.,“Phased Array RF Coils for High Resolution Imaging of theInner Ear and the Brain Stem”,J. Comp. Assist. Tom.,19:8-14, 1995.[16]Stammberger, H., “Functional Endoscopic Sinus Surgery. TheMesserklinger Technique”, B. C. Decker, Philadelphia, Penn-sylvania. 1991.[17]Stein B.E. and Meridth M.A., “The Merging of the Senses”,Cambridge, MA, MIT Press, 1993.[18]Streri A.,“Seeing, Reaching, Touching-The Relations be-tween Vision and Touch in Infancy”, Cambridge, Mass.: MITPress 1993.[19]Udupa J. K. and D. Odhner, “Interactive Surgical Planning:High-Speed Object Rendition and Manipulation Without Spe-cialized Hardware”, in Proceedings of the First Conferenceon Visualization in Biomedical Computing, pp. 330-335, May1990.[20]Westover L., “Footprint Evaluation for V olume Rendering,Proceedings of SIGGRAPH ‘90, Computer Graphics,24(4):367-376, August 1990.[21]Wiet G.J., Schuller D.E., Goodman J., Stredney D.L., BenderC.F., Yagel R., Swan J.E., and Schmalbrock P., “Virtual Sim-ulations of Brain and Cranial Base Tumors”,Proceedings ofthe 98th Annual Meeting of the American Academy of Oto-laryngology Head and Neck Surgery, San Diego, California,September 1994.[22]Wittenbrink, `IFS Fractal Interpolation for 2D and 3D Visual-ization'',Proceedings Visualization `95, Atlanta, GA, 1995.[23]Yagel R. and W. Ray, “Visibility Computation for EfficientWalkthrough Complex Environments”, accepted to PRES-ENCE, April 1994.[24]Ying K., Schmalbrock P., Clymer B.D., Echo-Time Reductionfor Submillimeter Resolution Imaging with a Phase EncodeTime Reduced Acquisition Method”,Magn. Reson. Med.,33:82-87, 1995.4.2 Collision DetectionOne of the advantages of voxel representation is the natural spatial organization of the data and the random access to any point in this space. Our data structures and voxel organization of the data and tools give us collision detection at a low cost.We currently check only for collision of the tip of the tool with the data. If such collision is detected, a visual feedback is given (highlight), and the user’s motion is restricted. The information of the touched surface is passed to the user inter-face for haptic feedback processing. In future work we will detect collision of the entire surface of the tool with the data,providing more appropriate haptic response including interac-tion of the instruments.4.3 PerformanceOur renderer is composed of two processes, one that per-forms some preprocessing and one that renders. The general design of the renderer borrows on some of the ideas in Iris Performer [14]. However, while Performer operates in three phases: traversal, culling, and rendering, we have only two:culling and rendering. Our culling phase is a little more com-plex than using Performer. Most of the polygons in a standard Performer scene are independent of the viewer eye point.Only in special cases (e.g., billboards) does performer rotate polygons to make them face the viewer. In our case all the polygons (splats) must face the viewer. In orthographic view-ing we create all the viewer-facing polygons by translating a single polygon around the volume. Therefore, during the cull-ing phase we do not treat a fuzzy set as a stream of indepen-dent splats but as a geometric structure with some regularities.For culling, we hold our splats as sparse vectors aligned with the original volume major axis. We cull splats by culling every such sparse vector. This operation reduces the complex-ity of the problem from n 3 to . The log(n) term comes from a binary search for the sparse vector location where clipping occurs. The output of the culling process is a display list containing all the information for the most rapid rendering of an image. Our render task takes this display list and renders it.We run our splat renderer on a multiprocessor Onyx with Reality Engine graphic hardware, and a single Raster Man-ager (RM) board. Extracting a fuzzy set out of a 1283 volume takes ≈60 seconds. Depending on the choice of splat thresh-old, we can control the resulting number of splats in the fuzzy set. For a fuzzy set with 50,000 splats lighted by four light sources at infinity, we get render rates of about twenty frames per second for point splats and about seven frames per second for textured rectangular splats. Although initialization has to be repeated whenever the user changes the transfer function or loads another dataset, for the most significant visualization operations involved in data exploration it provides very attrac-tive rendering speed.n 2n log 5. DiscussionSeveral drawbacks exist. First, the image is degraded as the “endoscope” approaches objects for a closer look; this is a result of the lack of a higher resolution acquisition in the dataset. We currently obtain CT data at the resolution of present-day hospital scanners. An option we have been con-sidering is to use magnetic resonance data, which can obtain submillimeter resolutions. The problem with MR is the lack of bony differentiation, which shows up extremely well in CT.Progress is being made in this area in MRI [15][24]. One advantage to using MR is that the data gives very high resolu-tion soft tissue information (the CT currently gives only bone detail). This allows for a more complete visual display of the regional anatomy. Third, we plan to incorporate more power-ful visibility preprocessing tools [23] that can single out all voxels that are not visible from the current viewpoint, before handing the model to the renderer. We also plan to incorporate in the system our volume deformation algorithms [9] that will allow to simulate some surgery-specific operators. The com-putation of forces from the medical data is also under explora-tion. Surface normal and voxel tags, which are computed at the rendering and segmentation phases, respectively, can be used to assist in the efficient generation of force st, the data is displayed in grayscale with shading. We even-tually hope to incorporate color into the display by texture mapping actual photographic representations of nasal mucous membranes onto the surface of the displayed data. The sys-tem, once complete, will integrate realistic display, simulated surgery operators, and haptic feedback. It is our plan to go through clinical trials to evaluate the system’s fidelity and training value.6. AcknowledgmentsThis project was supported by the National Science Foun-dation under grant CCR-9211288, Department of Defense USAMRDC 94228001. We thank Naeem Shareef and Ed Swan for their work in the implementation of various compo-nents of the system.7. Bibliography[1]Bizzi E., Accornero E, Chapple W., and Hogan N., “Posture Control and Trajectory Formation During Arm Movement”,J. Neuroscience, 4:2738-2744, 1984.[2]Feldman A.G., Adamovich S.V ., Ostry D.J., and Flanagan J.R., “The Origin of Electromyograms - Explanations based on Equilibrium Point Hypothesis”. In “Multiple Muscle Sys-tems: Biomechanics and Movement Organization ”, Winters and Woo, eds. Springer-Verlag, New York, 1990, pp. 195-213.[3]Flash T., “The Control of Hand Equilibrium Trajectories in Multi-joint Arm Movements”.Biol. Cybern ., 1987, 57:257-274.[4]Ghez C., Gordon J., Ghilardi M.F., Christakos C.N., and Coo-inal voxels; thus, for each splat in a row, we maintain its posi-tion in 3D space. In addition, we maintain the normal of each splat, which we calculate based on the information in all its twenty six adjacent voxels.The volume splatting algorithm takes as input a fuzzy set.It traverses the fuzzy set in a back-to-front order. For each member of the set it renders a rectangle facing the viewer, tex-tured with a splat texture. The splat texture contains an imageFIGURE 4. Two perspective views of the region ofinterest.of a fuzzy circle, with opaque center and transparent circum-ference. V arious functions can be used to govern the decay of opacity in this circle of influence, and we use a Gaussian func-tion [20]. We also implemented a faster version of the render-ing algorithm, in which, instead of rectangles, we render enlarged points on the screen. These points have constant opacity and therefore generate images with some visible arti-facts; however, because points are very simple graphic primi-tives, this method supports higher rendering speeds.We control the material properties of the splats; however,for reasons of speed, we vary only opacity and diffuse reflec-tion of the material for each splat. We define multiple light sources (infinite and local) and use the GL light routines to shade the splats. Unlike existing splatting algorithm [20][10],we exploit rendering hardware to provide real time perfor-mance. Transforming the rectangles, scan-converting the tex-tured rectangles, and compositing colors and opacities are performed by the SGI graphic hardware. Figure 4 shows the bony tissue of the sinus region as rendered by the volume splatter.4.1 The Hierarchical Data RepresentationBecause we are dealing with very small landmarks (in mil-limeters) and we are very close to these landmarks using a very wide viewing angle (90°), these small landmarks will take up a very large space on the viewing screen. Even with the highest resolution datasets currently available, this charac-teristic causes severe degradation of the image quality. In addition, the nature of the splatting algorithm tends to smear feature such as edges and silhouettes. Some of this smoothing artifacts are also apparent in the endoscope and we do not wand to remove it altogether. To reduce the degradation, we subdivide the datasets and create several levels of detail (LODs). Because of memory constraint, LODs are con-structed only for regions in which surgery is expected to take place. The LODs are created during initialization using multi-processing by tri-linearly interpolating higher resolution LODs. This simple interpolation scheme will be replaced in the future with a more powerful, fractal based, interpolation procedure [22]. We maintain pointers to the LODs and splats for each voxel in the area of interest using a sparse 3D array .The LOD data and 3D array require large amounts or memory,which we trade off for rendering speed. The resolution we choose to display a landmark depends on its distance from the eye. Smooth transition between LODs will be achieved by lin-ear interpolation between levels. In addition, its is possible to display the hierarchical LOD representation in a progressive refinement rendering [10]. Because we now naively display hundreds of small splats instead of one big splat, our render-ing time drops to 6Hz in the common case and down to 0.2Hz for extremely complex areas. In the future we plan to use tex-ture mapping to aid with the degradation problem as well as optimize which sub-voxels get displayed.acquisition of force data throughout the procedure. Specifi-cally, a Frazier tip suction instrument was retrofitted with a force sensor and used during the procedure in a standard fash-ion to palpate, probe, and bluntly dissect the structures of the nasal cavity and sinus tissues. (see Figure 2). Specifically, total ethmoidectomy, maxillary antrostomy, and sphenoidot-omy (various surgical maneuvers that are part of endoscopic technique) were performed bilaterally (on each side). Force data was acquired during initial palpation of the internal nasal structures and during total ethmoidectomy on one side. Palpa-tion of normal structures, which constitute landmark surgical boundaries, was performed. Force measurements from the sensored instrument were sampled along with video of the surgeon performing the maneuvers. In addition, video of the operative field were recorded in real-time and synchronized with a video time stamp. Throughout the procedure, the sur-geon utilized a “think aloud” protocol, orally describing his actions. This data provided concise correlation of forces with anatomical landmarks and surgical maneuvers. Currently, the data is being analyzed and integrated with the acquired vol-ume datasets.3.2 Interaction in the ESS SystemOur system consists of a mock patient head housing elec-tromechanical mechanisms that provide the force reflection to the user (see Figure 2, 3). We have decided to provide the user with a physical interface for several reasons. 1) The face of the patient is used to help provide surgeons with external land-marks for reference to internal positions; 2) it provides the users with a rest and fulcrum for the hands, commonly used in surgical practice; and 3) the nostrils of the model provide the fulcrum for the endoscopic instruments. As the user moves the instruments, sensors monitor the position and orientation of the tool within a given three-dimensional workspace. The computer can dictate force sensations to the instruments by commanding the actuators to generate forces on the drive mechanism. The drive mechanism applies forces upon the sur-gical tool, which in turn applies forces to the user who is hold-ing the tool.Reconstructed anatomy derived from merged MR and CT data is registered with the model of the face. The 3D volumet-ric model is rendered to the screen using a novel volume ren-dering algorithm described in Section 4. A surface model of the surgical instruments is integrated with the internal view of the reconstructed anatomy. The locations and perspective of the digital referents of the instruments are graphically updated through information provided by the sensors inside the model head. One of the instruments is the endoscope, which provides the viewpoint and viewing vector for the user and the genera-tion of the scene. We have integrated a five degrees-of-free-dom (5DOF) probe, the Microscribe TM by Immersion Corporation of San Jose. This probe presents an intuitive interface for the users, allowing them to work in a natural way within the patient model. The user views the internal anatomy,which has been reconstructed in the computer, by direct obser-vation of the computer monitor. This method is similar toviewing the surgical field via a monitor during actual practice.Eventually, we will incorporate the video directly to the scopeas this technique is employed by experts.4. RenderingWe have developed a visual interface using splats [20] thatcan produce frame rates up to 20 Hz using an SGI ONYX/RE.This same system produces graphics at a rate of 12Hz usingan SGI CRIMSON/RE.The Volume Splatter relies on the notion of fuzzy voxel setwhich consists of a subset of the volume’s voxels. For eachvoxel in the original volume we evaluate a transfer functionwhere∇andρare the gradient and the den-sity of the given voxel and t is the inclusion criterion. Weinclude a voxel in the fuzzy set if it has a large enough t value(above some user-defined threshold). Note that if we pick F tobe a projection onto the second coordinate,ρ, we do merelythresholding on the density. We name each voxel passing theF threshold a “splat”. The idea of fuzzy voxel set is similar tosemi-boundaries and shells used by [19]. However, unlikeprevious methods, which choose the voxels for the set by seg-mentation methods, our approach chooses the voxels to beincluded by their contribution to the final image. This processwhich effectively rejects all the voxels that contribute little ornothing to the final image, greatly reduces the burden placedon the rendering pipeline.The resulting subset of voxels, the fuzzy set, is ordered inthe same fashion as the original volume: we hold slices ofsplats, where each slice contains rows of splats. The only dif-ference is that the number of elements (splats) in each rowmay not be equal. Each row of splats is a sparse vector of orig-FIGURE 3. The ESS simulator.F:∇ρ,()t→through this narrow area to gain access to the deeper sinus structures. Recognition of key structures by both visual and haptic queues is paramount to safe, minimally invasive, and adequate technique.Today the technique is taught by a stepwise approach. First, the student of ESS must master the detailed anatomy of the sinuses. To do this, the student must study detailed two-dimensional atlases and text to begin to develop a working “virtual” three-dimensional model in the mind’s eye. Next, detailed dissection of cadaver specimens is done to verify the model and gain a “feel” for the surgical techniques. Often, cadaver material is in limited supply and, when embalmed, does not exhibit the same subtle tissue characteristics of live tissue. Next, the student observes surgery on live patients under the direction of a mentor. Last, the student begins actu-ally performing the procedure on live patients under the men-tor’s direction. Beginning with simple tasks and progressing in a stepwise fashion to more complicated procedures, the novice surgeon gains experience and eventually becomes pro-ficient.The learning process is labor intensive and can be frustrat-ing because models and cadavers can show only limited vari-ability and interaction. There is no way to actually practice the techniques learned without using either cadaver material or live subjects. A realistic simulator of the paranasal sinuses will allow for a quicker understanding of the three-dimen-sional anatomy and allow a safe and realistic environment for the novice surgeon to learn the techniques. A true simulator must have realistic haptic as well as visual display. Many of the surgical maneuvers require subtle haptic cues as well as visual. Inside the sinuses, many of the tissues look similar, and it is their “feel” that allows the expert surgeon to distin-guish between such structures as another air-filled passage versus the thin bone that covers the brain and eye. One can be removed aggressively, while the other’s disruption may lead to a disastrous result for the patient. Haptic display is essential if the simulation is to be more than just a “fly through”. Finally, to convey a sense of reality the system must deliver real-time photo-realistic rendering of the 3D environment.Our previous work includes the development of a system to provide for the simulation of regional anaesthesia [12]. This system integrates visual, haptic, and speech recognition for simulating an epidural block, a regional anesthesia technique commonly used in obstetrics. In addition, our group has developed a real-time volumetric system for the use in pre-planning the removal of brain and cranial base tumors [21]. The system currently employs only dexterous interfaces for the clinician to interact and pre-plan surgical approach for tumor management. In this paper, however, we wish to focus on a specific application being developed, where we are inte-grating haptic and visual display technology for of simulating functional endoscopic surgery.3. The ESS SimulatorOur system provides direct interaction with a volumetric model of the anatomical region while delivering haptic feed-back (force reflection) to the user. Due to the need to dissect and view tissue regions beyond the outer surface, it is obvious one needs to maintain and manipulate a volumetric represen-tation of the nasal cavity. The volume-based approach has many advantages over surface-based models in the arena of medical applications. For surgery simulation, we note espe-cially the efficient collision detection (see Section 4.2) and more powerful deformation schemes [9]. We first describe the way we acquire force data and then provide more information on the system itself.3.1 Data Acquisition: Sensored Surgical StudyThe ESS system requires both the anatomical information as well as force data. Prior to force acquisition, an adult white male cadaveric head (embalmed) underwent imaging with both computed tomography (CT) and magnetic resonance (MR) to acquire structural data of the anatomy of the head. Vitamin E capsules, visible to both CT and MR scanners, were glued with colloidion to specific sites on the head to serve as a common set of reference points between the CT and MR acquisitions. Sub-millimeter resolution MR images were acquired with at three-dimensional gradient echo sequence[24], using a repetition time of 23ms, an echo time of 4ms anda 40 degree flip angle. A volumetric data set with mm resolution of each image element was obtained.Endoscopic sinus surgery was performed on the cadaveric specimen, using surgical instruments that provided real-time512512×124×0.470.47× 1.5×FIGURE 2. The active sensor used to sample forces inthe model, the same way it was used on the cadaver.endoscope. This type of surgery requires the spatial localiza-tion of the surgical instrument through the integration of such non-direct observation via an endoscope and direct haptic cues provided via the surgical instruments. Most endoscopes provide only a monocular view of the scene, although stereo scopes are being introduced into surgical practice. In the fol-lowing section we describe one type of endoscopic surgery we are attempting to simulate− endoscopic sinus surgery.The requirements for a computer system to simulate com-plex interaction, such as that required in endoscopic surgery, are many. The system must be able to generate realistic visual and haptic cues that provide temporal and spatial coherence with the data manipulation. This correspondence of action/ reaction provides the user with a sense of causality, similar to that experienced in interactions with the real-world [11]. It must provide high fidelity haptic and visual cues to “con-vince” and thereby engage the user. And finally, it must pro-vide interactions and behaviors that are similar enough to the “real-world” in order to maximize transfer to actual practice. We nevertheless believe that real-time interaction is the most essential requirement and practitioners prefer to operate in interactive environment even if it is rendered in a somewhat less realistic way. In the next section we overview the applica-tion area of Endoscopic Sinus Surgery. In Section 3 we describe our system and describe its components in the sec-tions that follow. We conclude with some discussion and our future plans.2. Endoscopic Sinus SurgeryEndoscopic Sinus Surgery (ESS) is currently the procedure of choice for the treatment of medically resistant recurrent acute and chronic sinusitis. Its rationale is based on the simple premise of improving the natural drainage of the sinuses into the nasal airway. Endoscopic sinus surgery was first popular-ized by Messerklinger [13] and furthered by Stamberger [16]. In the past, sinus surgery was composed of rather crude tech-niques including blind probing and removal of tissue without direct visualization or the creation of external facial incisions to gain direct access. The further understanding of sinus anat-omy and the advent of endoscopic techniques have allowed a less invasive, more precise and physiologically sound method of treatment of patients with these disease entities.The paranasal sinuses are composed of a series of labyrin-thine passageways and spaces located in the lower forehead, between the eyes, at the center of the cranial base, and behind the cheeks. Several factors complicate surgery in these areas. First, the sinuses are surrounded by vital structures such as theeye socket and its contents, the internal carotid artery, which supplies blood to the brain, and the brain itself. Such struc-tures lie within millimeters of the sinus boundaries. Second, to avoid external incisions, access to the sinuses is through the nostril, which precludes direct visualization and manipulation of the sinus structures by the naked eye (see Figure 1a). The technique consists of visualizing landmark structures within the nasal cavity and sinuses, excising and “biting” out dis-eased tissue, probing and sectioning under direct visualization through the endoscope or via a video monitor with attached endoscopic camera (see Figure 1b). The nostril becomes a ful-crum of rotation as the scope and other instruments must passFIGURE 1. (a) An illustration showing a surgeon looking through the eyepiece while the video image isdisplayed on the monitor in the back. (b) A view of the nasal cavity as seen through the endoscope eyepiece, showing a cutting tool (bottom left) performing sectioning.(a)(b)Multisensory Platform for Surgical Simulation Roni Yagel1, Don Stredney2, Gregory J. Wiet2,3, Petra Schmalbrock4, Louis Rosenberg5,Dennis J. Sessanna2, Yair Kurzion1, and Scott King11Department of Computer and Information Science, The Ohio State University, Columbus, OH 2The Ohio Supercomputer Center, Columbus, OH3Department of Otolaryngology, The Ohio State University Hospitals, Columbus, OH 4Department of Radiology, The Ohio State University Hospitals, Columbus, OH5Immersion Corporation, San Jose, CAAbstractAdvanced display technologies have made the virtual explora-tion of relatively complex models feasible in many applica-tions. Unfortunately, only a few human interfaces allow natural interaction with the environment. Moreover, in surgi-cal applications, such realistic interaction requires real-time rendering of volumetric data −placing an overwhelming per-formance burden on the system. We report on a collaboration of a unique interdisciplinary group developing a virtual real-ity system that provides intuitive interaction with complex vol-ume data by employing real-time realistic volume rendering and convincing force feedback (haptic) sensations. We describe our rendering methods and the haptic devices in detail and demonstrate the utilization of this system in the real-world application of Endoscopic Sinus Surgery (ESS) simulation.1. IntroductionHuman investigation of spatial information requires the subtle integration of several sensory modalities. Vision is the primary sensory modality, as it is instantaneous, provides a broad bandwidth, and allows the user to sample the environ-ment with little or no physical navigation. Navigation is com-monly used to evaluate environments and verify spatial comprehension previously provided by sight. However, inves-tigations of objects are predominately explored both visually and manually. Manual interaction provides the movement required to verify size and shapes (like navigation) but addi-tionally provides tactile information such as forces and tex-tures. Most computer interfaces suffer from lack of multisensory information, providing only a passive witnessing of information.The human capacities for manual and visual investigations are inherent and develop early, as is evident in infant object exploration before the development of locomotive skills required for navigation [18]. The emphasis and value of this multimodal interaction are clearly evident in the cerebral area apportioned to visual and dexterous activity. Convergence of visual, auditory, and somatosensory inputs has been identified by Stein [17] in the superior colliculus. Bi-modal neural involvement has been identified in the putamen, parietal area 7b, and inferior area 6 [8] and is presumed integral to the rep-resentation of extrapersonal space.Surgery is a human activity that employs the use of visual and manual investigations. The user must integrate both hap-tic (manual) and visual information and correlate sensory information with previously held conceptual information, such as the internal representation of the regional anatomy.This representational map serves as a mental paragon for which the user will perform spatial processing required in the task, including orientation, position fixing, spatial reasoning (including spatial and temporal prediction), and performance (including execution, analysis, and re-evaluation). The repre-sentation will be utilized to generate and re-calibre of virtual trajectories of hand and arm movements that will be used to program actual physical trajectories of the limbs [1][2][3][8]. This mental representation will be modified with direct sen-sory information including visual, haptic, and information supplied by proprioreception of joints and visual feedback of user limb position [4][5][6].In minimally invasive surgery, surgeons insert various tools through one or two small incisions. Their visual feed-back is limited to the view they get through the eyepiece of an。

第一章 振动和转动光谱

第一章 振动和转动光谱

ν4 = ν2{F(m1+m2)/(4 π 2m1m2)}
ν = 0,A1=A2,X1=X2,平动
ν = 1÷(2 π)*(F(1/m1+1/m2))1/2
量纲: ν: s-1,F: N m-1,m1 and m2:kg 作业: ν (cm-1)= 1303{F(m1+m2)÷(m1×m2)}1/2 , m1和m2为原子1或2 的原子量, 则力常数F的量纲为?
-
简正模式定义: 1)分子内所有原子以同 样的频率振动 2)所有原子同时通过平 衡点 所以:CO2 反对称振动模 式为简正振动模式。
每一简正模式可以用一简正坐标来描述。 简正坐标:1)周期性变化 2)简正振动模式的幅度 3) Cartesian 位移坐标和内坐标可以用简正坐标表示
简正振动模式可以用笛卡 尔坐标,内坐标,简正坐 标来描述。 CO2 对称伸缩振动: 是否简正模式? 笛卡尔坐标表示? 内坐标? 简正坐标? 简正坐标与其它坐标的关 系?
力 = - dV / d( r-re )= - F( r-re ) Hooke’s law
F = d2V/d(r-re )2
双原子分子真实位能 在(r-re)很小时,与谐振子 位能函数非常接近。
经验公式 Morse function: V β常数,De 分子离解能
= De(1-e-β(r-re))2
一个原子一个质点确定质点的位置xyz三个坐标运动自由度为3一个分子n个原子3n个坐标确定所有原子的位置自由度3n分子自由度总数平动自由度转动自由度振动自由度例如hcl自由度63个平动自由度确定质心位置2个转动自由度确定方位1个振动自由度化学键伸缩水分子三个平动三个转动9333振动自由度线形分子振动自由度3n5非线形分子振动自由度3n614normalmodesofvibration振动的简正模式非线形分子3n6个internaldegreesoffreedom对应于3n6个独立的振动简正模式

Series 4 底部无冻食品储存器电子冰箱说明书

Series 4 底部无冻食品储存器电子冰箱说明书

Series 4, free-standing fridge-freezer with freezer at bottom, 203 x 70 cm, Stainless steel (with anti-fingerprint) KGN49XIDP The NoFrost bottomfreezer with VitaFresh: Keeps your fresh foods fresh longer.• Perfect Fit: place your fridge right next to walls or in a niche.• LED interior light: Illuminates the refrigerator evenly and glare-free, and lasts for its entire life.• VarioZone: more flexibility due to variable-use glass shelves and drawers in the freezer compartment.• Automatic Super Freezing function: optimal for freezing smaller food load faster whilst protecting your frozen items from defrosting, thanks to the detection of temperature increase and the automatic decrease of temperature• Easy Access Shelf: pull-out glass shelf gives you better access to the contents of your fridge.Energy Efficiency Class (Regulation (EU) 2017/1369): .....................D Average annual energy consumption in kilowatt hour per year(kWh/a) (EU) 2017/1369: ..........................................207 kWh/annum Sum of volume of frozen compartments (EU 2017/1369): ..........108 l Sum of volume of chill- and unfrozen compartments (EU 2017/1369): ....................................................................................................330 l Airborne acoustical noise emissions (EU 2017/1369): .38 dB(A) re 1 pWAirborne acoustical noise emission class (EU 2017/1369): ..............C Built-in / Free-standing: .................................................Free-standing Number of compressors: . (1)Number of independent cooling systems: (2)Width of the appliance: ..........................................................700 mm Height: .................................................................................2030 mm Depth of the product: ............................................................670 mm Net weight: ..............................................................................86.7 kg Door panel options: ........................................................Not possible Door hinge: ................................................................Right reversible Number of Adjustable Shelves in fridge compartment: (4)Shelves for Bottles: .......................................................................Yes Frost free system: ...................................................................Freezer Interior ventilator: ..........................................................................No Reversible Door Hinge: ..................................................................Yes Length electrical supply cord: ..............................................240.0 cm Noise level: ..............................................................38 dB(A) re 1 pW Multi-Flow Air Tower: ....................................................................Yes Fast cooling switch: .......................................................................Yes Fast freezing switch: ......................................................................Yes Temperature Controlled Drawer: ....................................................No Humidity Control Drawer: ..............................................................Yes Number of Door Bins - Refrigerator: .. (4)Door bin adjustability - Refrigerator: .............................................Yes Tilt-out door bins in fridge: ............................................................Yes Gallon wide door bins: ..................................................................Yes Number of Gallon storage: . (1)Motorized Shelf: .............................................................................No Material ofthe shelves: .....................................................Safety glass Door opened indicator freezer: .......................................................No Automatic motor-driven ice-maker: .................................................No Connection rating: ....................................................................100 W Fuse protection: ...........................................................................10 A Voltage: ...............................................................................220-240 V Frequency: .................................................................................50 Hz Storage Period in Event of Power Failure (h): ..............................16 h Door panel options: ........................................................Not possible Noise level: ..............................................................38 dB(A) re 1 pW Energy Star Qualified: .....................................................................No Plug type: .......................................................Gardy plug w/ earthing Required cutout/niche size for installation (in): .. (x)Gross weight: ...................................................................204.000 lbsSeries 4, free-standing fridge-freezerwith freezer at bottom, 203 x 70 cm, Stainless steel (with anti-fingerprint)KGN49XIDPThe NoFrost bottomfreezer with VitaFresh: Keeps your fresh foods fresh longer. Performance and consumptionDesign and Styling- Doors inox EasyClean, side panels Pearl grey (VZF 07127)- vertical handle- LED with Soft Start in fridge sectionConvenience & Safety- NoFrost - never again defrosting!- Separate, electronic temperature control, LED indicators- Two Cooling Systems- Super Cooling: automatic deactivation- Super Freezing: Manual/automatic activiation- Optical and acoustic door open warning system- Visual / acousticFridge Compartment- Multi Airflow-System- 5 safety glass shelves of which 4 x EasyAccess Shelf, extendable- Bottle grid- 1 door shelf large, 3 door shelf smallFreshness System- 1 VitaFresh drawer with humidity control - Fruits and vegetables retain vitamins and stay fresh for longer.- , 2 VitaFresh 0°C drawers with temperature control - Fish and meat stay fresh for longerFreezer Section- 3 transparent freezer drawers- Vario Zone - removable safety glass shelves for extra space! Dimensions- Dimensions ( H x W x D): 203.0 cm x 70.0 cm x 67.0 cmTechnical Information- Door right hinged, reversible- Height adjustable front feet- Connected load: 100 W- Nominal voltage: 220 - 240 VAccessories- egg tray- ice cube trayCountry Specific Options- Based on the results of the standard 24-hour test. Actual consumption depends on usage/position of the appliance.- To achieve the declared energy consumption, the attached distanceholders have to be used. As a result, the appliance depth increases by about 3.5 cm. The appliance used without the distance holder is fully functional, but has a slightly higher energy consumption.Environment and Safety InstallationGeneral InformationSeries 4, free-standing fridge-freezer with freezer at bottom, 203 x 70 cm, Stainless steel (with anti-fingerprint) KGN49XIDP。

  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。

Six Degree-of-Freedom Haptic Display of Polygonal Models Arthur Gregory Ajith Mascarenhas Stephen EhmannMing Lin and Dinesh ManochaDepartment of Computer ScienceUniversity of North Carolina at Chapel Hillgeom@/geom/6DHapticsAbstract:We present an algorithm for haptic display of moderately complex polygonal models with a six degree of freedom(DOF)force feedback device.We make use of incremental algorithms for contact determination between convex primitives.The resulting contact information is used for calculating the restoring forces and torques and thereby used to generate a sense of virtual touch.To speed up the computation,our approach exploits a combination of geometric locality,temporal coherence,and predictive methods to compute object-object contacts at kHz rates. The algorithm has been implemented and interfaced with a6-DOF PHANToM Premium1.5.We demonstrate its performance on force display of the mechanical interaction between moderately complex geometric structures that can be decomposed into convex primitives.CR Categories:I.3.6[Computer Graphics]:Methodology and Techniques-Interaction TechniquesAdditional Keywords:haptics,virtual reality,force-feedback devices,interactive computer graphics1IntroductionExtending the frontier of visual computing,haptic inter-faces,or force feedback devices,have the potential to in-crease the quality of human-computer interaction by accom-modating the sense of touch.They also provide an attrac-tive augmentation to visual display and enhance the level of immersion in a virtual world.They have been effectively used for a number of applications including molecular dock-ing,manipulation of nano-materials,surgical training,vir-tual prototyping and digital sculpting.Given their poten-tial,a number of research prototypes and commercial de-vices that can accommodate up to seven degrees of freedom (DOF)[Bur96]have been designed.Compared with visual and auditory display,haptic ren-dering has extremely demanding computational require-ments.In order to maintain a stable system while displaying smooth and realistic forces and torques,haptic update rates must be as high as Hz.This involves accurately com-puting all the contacts between the object attached to the probe and the simulated environment,as well as the restor-ing forces and torques–all in less than one millisecond.Some of the commonly used haptic devices include the 3-DOF PHAMToM arm[MS94]and the SARCOS Dexter-ous Arm[NNHJ98]that compute point-object contacts and provide only force feedback.However,many applications like scientific exploration,virtual prototyping(e.g.assembly planning and maintainability studies),medical simulation and tele-operation need to simulate arbitrary object-object interactions.A-DOF haptic device,that provides torque feedback in addition to force display within a large transla-tion and rotational range of motion,is very useful for such applications.It gives a user the much needed dexterity to feel,explore and maneuver around other objects in the vir-tual environment.Although a number of commercial and research6-DOF haptic devices are becoming available,their applications have been limited.This is mainly due to the complexity of accurate calculation of all the contacts and restoring forces that must be computed in less than one millisecond,as out-lined by McNeeley et al.[MPT99].The existing fast haptic rendering algorithms developed for3-DOF haptic devices primarily deal with single point contact with the virtual mod-els[ST97,JC98,GLGT99,RKK97]and are not directly ap-plicable to accurately compute object-object contacts.Up to date,only approximate haptic rendering technique for mostly static environments,such as the one based on point-voxel sampling,has been proposed for haptic display using a 6-DOF haptic device[MPT99].Current algorithms for exact contact determination and response computation for general polygonal models are unable to meet the real-time require-ments of6-DOF haptic rendering.1.1Main ResultsIn this paper,we present a novel algorithm for haptic dis-play of moderately complex polygonal environments using a six degree-of-freedom force feedback device at kHz up-date rates.We assume that each object can be decomposed into convex primitives.Since the probe’s position changes little between successive frames,our algorithm keeps track of the pairs of closest features between convex primitives [LC91].By exploiting the temporal coherence and spatial locality of closest features,we can take advantage of incre-mental computation by caching the last closest feature pair and performing a“greedy walk”to the current pair of closest features.We exploit motion coherence from extremely high haptic update rates to predict contact locations and minimize penetration between the probe and the virtual environment. The restoring forces are computed based on a concept sim-ilar to“virtual proxy”[RKK97]extended to6-DOF force feedback devices.The algorithm has been applied to haptic display of mechanical interaction between moderately com-plex structures composed of tens of convex primitives.As compared to earlier approaches,our algorithm offers the following advantages:Applicability to Dynamic Environments:We do not assume the environment is static.Objects in the scene are free to move simultaneously.Our current imple-mentation can only handle a few moving objects at the same time.Accurate Contact Determination:We do not need to trade off accuracy for performance,while maintaining required force update rates.Smooth Collision Response:By an extension of the virtual proxy concept,our system generates a natural haptic response that does not result in force discontinu-ities.1.2OrganizationThe rest of the paper is organized as follows.Section2 briefly surveys related work.Section3presents an overview of our approach.Section4describes the algorithm for con-tact determination and penetration depth estimation.Section 5describes contact force and torque computation.Finally, we describe the implementation in Section6and highlight its performance on different environments.2Previous WorkIn this section,we survey previous work related to haptic rendering,contact determination and collision response.2.1Haptic DisplaySeveral techniques have been proposed for integrating force feedback with a complete real-time virtual environment to enhance the user’s ability to perform interaction tasks [CB94,MS94,SBM95].Iwata describes a6-DOF haptic master and the concept of time-critical rendering at a lower update rate of hundreds Hz[Iwa90].Ruspini et al.[RKK97] presented a haptic interface library“HL”that uses a virtual proxy and a multi-level control system to effectively dis-play forces using3-DOF haptic devices.Thompson et al. [TJC97]have presented a system for direct haptic rendering of sculptured models.2.2Contact DeterminationThe problem of fast collision detection and contact deter-mination has been well studied in computational geometry, robotics and graphics literature.Convex Polytopes:A number of specialized algorithms have been developed for contact determination and distance computation between convex polytopes[GJK88,Bar90, LC91,DHKS93,GHZ99,EL00].Hierarchical Approaches:Some of the commonly used al-gorithms for general polygonal models are based on hierar-chical data structures.These include bounding volume hi-erarchies where the bounding volume may correspond to a sphere[Hub95],axis-aligned bounding box,oriented bound-ing box[GLM96],a k-DOP[KHM98]or a swept sphere volume[LGLM99].These algorithms can only compute all pairs of overlapping triangles and not the intersection re-gion.Algorithms for separation distance computation based on bounding volume hierarchies have been proposed by [Qui94,JC98,LGLM99].Intersection Region and Penetration Depth:Given two polyhedral models,algorithms for intersection computation and boundary evaluation have been extensively studied in solid modeling.Bouma and Vanecek[BV93]used spatial partitioning approaches to compute the contact region.Pon-amgi et al.[PML97]combined hierarchical representations with incremental computation to detect contacts between polygonal models.Snyder[Sny95]used temporal coherence to track penetration depth on spline models.All these algorithms rely on a surface-based representa-tion of the model.Their performance varies as a function of the size and relative configuration of two models. Volumetric Approaches:Gibson[Gib95],Avila and So-bierajski[AS96]have proposed algorithms for object ma-nipulation including haptic interaction with volumetric ob-jects and physically-realistic modeling of object interac-tions.More recently,McNeely et al.[MPT99]have pro-posed a voxel sampling technique for6-DOF haptic render-ing,where point samples from one surface are tested againstthe voxel representations of the static environment.This ap-proximation approach achieves constant query time at the expense of accuracy and correctness of haptic force display.2.3Collision ResponseThere is considerable work on dynamic simulation and re-sponse computation[Bar90,Bar94,MC95].However,these algorithms do not guarantee real-time performance.Other algorithms propose an artificial coupling between the haptic display and the virtual environment[Cea94]or the notion of a“god object”[ZS95,RKK97].3OverviewIn this section,we give an overview of our approach.We highlight our rendering algorithm for6-DOF haptic devices. The same methods are applicable to other devices as well.3.1PreliminariesIn general,the computation for haptic rendering at each time frame involves the following steps:1.Collision Detection-The algorithmfirst detects ifan intersection has occurred between an object(or a probe)held by the user and the virtual environment. puting the Contact Manifold-If an intersectionhas occurred,then the algorithm needs to compute the intersection points that form the intersection region and the contact normal direction.3.Estimating the Penetration Depth-A measure or anestimation of penetration depth along the contact nor-mal direction is computed from the intersection region.puting Restoring Forces and Torques-A restor-ing or contact force is often calculated based on penalty methods that require the penetration depth.Given the force and the contact manifold,restoring torques can be easily computed.The contact manifold refers to the set of all points where the two objects come into contact with each other or may come into contact for predictive methods.For haptic sim-ulation,the stability of force computation is extremely im-portant.An accurate computation of the contact manifold at each frame also helps in smoothing the transition of force display from one frame to the next.This is crucial for the stability of the force feedback system with a human-in-the-loop,especially in a situation when there are multiple con-tacts between the object attached to the probe and the simu-lated environment.None of the current algorithms and systems can perform force display of interaction between general polygonal mod-els in an efficient and accurate manner.Given the time con-straint of performing all these computations in less than a millisecond,our approach uses predictive techniques and in-cremental computation along with spatial and temporal co-herence.As part of pre-computation we initially decompose each object into convex primitives[BD92],if such a decom-position is not given.For the rest of the paper,we will refer to them as primitives.By exploiting the convexity of the ob-jects,we can compute the contacts between a pair of convex primitives in expected constant time,with some simple pre-processing.As long as the number of contacts are bounded, our algorithm can guarantee the performance of force update rates.3.26-DOF Haptic Rendering with A VirtualProxyThe computation of our6-DOF haptic rendering involves three components:(1)the use of virtual proxy[RKK97],(2) contact manifold computation and penetration depth estima-tion,and(3)haptic response computation.Extension of the Virtual Proxy:A virtual proxy is a rep-resentative object that substitutes for the physical object that the probe is attached to in the virtual environment.The mo-tion of the virtual proxy is greedy in nature.It will move as long as it is not obstructed.Once it runs into a surface of some object in the environment,its motion is constrained to the surface location,in such a way that the actual object position will locally minimize the penetration depth.This is accomplished by predicting where the collision may occur between the two objects,based on the proxy position and ve-locity(also the actual probe position and velocity)from the previous frame and a simple linear interpolation along the travel path.At the beginning of the current frame,the object attached to the probe is constrained to travel no more than a safe threshold distance based on the predicted scheme,so as to minimize the amount of penetration.Contact Determination and Penetration Depth Estima-tion:The contact determination algorithm initially narrows down pairs of primitives on different objects that are in close proximity using a combination of an N-body“sweep and prune”test and“real-time scheduling”.Then,it checks those pairs in close proximity for contacts.For a pair of con-vex primitives,we use an expected constant time algorithm to track the pair of closest features,thereby computing the contact manifold,the contact normal and the estimated pen-etration depth.Moreover,a predictive approach is used to minimize penetration computations between the probe and the virtual environment.This is described in Section5. Collision Response:The algorithm uses penalty methods to compute a force that is proportional to the penetration depth.It is then applied to the contact manifold in the direction of the contact normal.4Contact Manifold&Depth Estima-tionIn this section,we present our algorithm for computing the contact manifold and estimating the penetration depth.Our algorithm uses convex decomposition to subdivide each ob-ject into convex primitives.The algorithm for contact de-termination uses a two phase approach.In thefirst phase,it narrows down pairs of primitives that are in close proximity. The algorithm computes a tightfitting axis-aligned bound-ing box for each primitive using incremental methods.It checks these bounding boxes for overlap by projecting them onto the coordinate axes and sorting them using insertion sort[CLMP95].In the second phase,the algorithm performs exact contact determination tests on all pairs whose bound-ing boxes overlap.4.1Contact Determination between ConvexPartsWe use a closest-feature tracking algorithm based on V oronoi regions for convex primitives,first proposed by Lin and Canny[LC91].It is an incremental algorithm that keeps track of closest features between convex primitives from the previous frame.A feature corresponds to a vertex,edge or face of the ing the external V oronoi regions of the convex primitives,the algorithm marches towards the new set of closest features in a greedy manner.The proof of correctness and the analysis of this algorithm is given in [Lin93].Expected Constant Time Performance:The running time of this algorithm is,where is the number of features it traverses and is typically much smaller than the total num-ber of features on a given polytope.The number of features that is traversed is expected to be constant due to coherence. When the bounding boxes of a pair of convex primitives overlap for thefirst time,coherence does not exist.In that case we use a directional lookup table[EL00]that has been precomputed for each convex primitive.It consists of near-est vertices to certain samples on a bounding sphere.The directional lookup table provides the means to quicklyfind out which vertex of a primitive is near a given direction by a simple lookup.The size of this table is determined by the resolution of directions on a unit sphere and is generally set to a constant.Given a constant size table,the table look up time is also constant.Given the centers of two convex prim-itives and the vector connecting them,each primitive’s table is used to look up a vertex in order to initialize the closest feature tracking algorithm.This method can be used to help restore coherence and the closest features can be determined in expected constant time(a few microseconds)[EL00].Thedirectional lookup table we used takes up less than a kilobyte for each convex primitive.Contact Manifold Computation:The tracking algorithm always returns a pair of closest features for initializing con-tact manifold computation.There are six different featurecombinations possible.If one of the features is a vertex, then the vertex and the closest point to it on the other featureare used.If both the features are edges,the closest points on the edges are used,assuming they are not parallel.If the twoedges are parallel,the algorithm computes the projection ofone edge to the other.For the edge-face and face-face cases, the algorithm uses a combination of edge-clipping and face-clipping routines.Estimating Penetration Depth:The penetration depth iscomputed by extending the closest-feature algorithm.It isdefined as the smallest distance that one of the primitives has to move so that the two primitives are just touching and notpenetrating along the contact normal direction.This can becomputed using the pseudo internal V oronoi region of each primitive[PML97].For a convex polytope,the boundariesof the pseudo internal V oronoi region correspond to lines and planes,as opposed to quadric surfaces for general poly-hedra.The pseudo internal V oronoi regions can be used totrack andfind all features that form the intersection volume. Given the intersection volume,we compute the penetrationdepth along the direction of the motion.4.2Real-Time SchedulingAt each frame,the algorithm initially uses the sweep-and-prune technique[CLMP95]to compute pairs of primitives in close proximity.The complexity of the sweep-and-prunealgorithm is expected linear time(with a small constant,)in terms of the number of primitives that the objects have been decomposed into.Let the total number of primitives be and the total number of potential contacts be.The to-tal computation time for each update,,for6-DOF hapticrendering is bounded by:where is the time required for computing the restoringforce and torque for each of resulting contact pairs,and is an upper bound on the runtime performance of the closest features tracking between convex primitives.For large envi-ronments,it is possible that the algorithm cannot check all possible pairs for exact contacts in less than a millisecond.As a result,we use a scheduling scheme to prioritize the pairs which will be checked for contact based on their im-portance.The pairs are assigned a priority based on: Pairs of objects that were in contact in the last frame, are given the highest priority.The algorithm prioritizes the remaining pairs based on the amount of time since they were last checked for a contact(the greater the time,the higher the priority).Given these criteria,the algorithm uses a simple greedy strategy.It sorts all the pairs based on increasing priority values into an array offixed length.The length of the ar-ray is some constant derived from the maximum number of contacts the system can handle and still maintain the force update rate.The time of the pairs in this array is then up-dated to be that of the current frame,and they are checked for contacts.5Contact Forces and TorquesGiven the contact manifold and estimated penetration depth between the probe and the virtual environment,we can com-pute the contact forces and torques for6-DOF haptic ren-dering.In this section,we describe the basic formulation of 6-DOF force display and the use of predictive techniques to avoid penetration as much as possible.5.1Restoring Forces and TorquesWe compute the restoring or contact forces based on the penalty ing Hooke’s law,we generate a spring force that is proportional to the penetration depth: where is the spring stiffness constant and is the depth of penetration.We use0.6N/mm as the value of in our implementation.The computed restoring force vector is applied to the contact manifolds along the contact normal di-rection to resolve the penetration,thereby generating a sense of touch.Restoring torques are generated bywhere is the contact force vector applied at the point and is the radius vector from the center of mass to. 5.2Predictive Collision ResponseThe computation of penetration depth is expensive in gen-eral.Furthermore,our depth estimation algorithm only com-putes a local approximation to the penetration depth.In con-junction with the use of virtual proxy,we minimize the fre-quency of computing the penetration depth by conceptually “growing”the actual surface along its surface normal direc-tion by some small amount.Whenever the distance be-tween two objects is less than,say,we declare a collision.Then,we apply a restoring forceIf an actual penetration of occurs,then we modify the contact force using the same principle by settingThis formulation reduces the need for computing penetration depth,which is relatively more expensive than computing the separation distance.The value is a function of the upper bound on the mag-nitude of the current velocity,which takes into consid-eration both the linear and angular velocity of the moving objects.is set to be:where is1ms for typical haptic force update.5.3Force and Torque InterpolationThe displayed force is computed as a function of:penetra-tion depth,contact manifold and contact normal direction. It is possible that the magnitude of contact forces can vary, thereby creating sudden jumps and introduce sharp discon-tinuities between successive frames.We use a notion of force and torque interpolation that adopt the interpolated normals for“force shading”,similar to that presented in [RKK97].We interpolate between two different force nor-mals to achieve smooth force shading effects.In addition, we use a simple linear smoothing scheme to minimize dis-continuity in force and torque display between successive frames.Let be the force displayed at the previous frame and be the force generated during the current frame.Without loss of generality,let us assume that.Let be the maximum amount of force difference allowed between successive updates.We use the following formulation to dis-play the restoring force:ifthenelse ifthenDISPLAYWe have considered other higher order smoothing func-tions.However,this formulation seems to work reasonably well and is simple to compute.Due to the very fast force update rate,a more complex smoothing function may take unnecessarily long to compute and is likely to only result in very minute and subtle differences in the force and torque feedback to the user.Figure1.Overall architecture of the6-DOF hap-tic display system6System Implementation and Perfor-manceIn this section,we describe our system and its application to force display of mechanical interaction between moderately complex structures.We used the6-DOF PHANToM Pre-mium1.5device[Che99]designed by SensAble Technolo-gies.It provides intuitive force as well as torque feedback.6.1ImplementationThe system consists of routines for hierarchical transforma-tions,predictive methods,collision detection and contact manifold computation,and computing contact force/torque. It is interfaced with the provided library as well as graphical display routines.The overall architecture of the system is shown in Figure1.The main loop of the system consists of three basic parts: Contact Determination:The probe proxy is typically attached to some object in the scene.The linear and angular velocities for this object are computed from the difference in the transformation between the proxy and the probe.Given the time slice for the current frame and the velocities of the moving objects,a conserva-tive estimate is computed as a tolerance for collisions.Contact determination initially consists of applying an N-body algorithm based on sweep and prune.If the en-vironment has too many potential pairs,it makes use of the scheduling scheme to prioritize the pairs as de-scribed in Section4.2.For each pair in close proxim-ity,it checks them for contact and computes the con-System componentWorst TimeAverage Col.Det.timeAverage N-body timeAverage Exact CD timeTotal number of collisions=Table1.Timings for the different system compo-nents.The average timing is further broken down into its various components.The collision de-tection calls normally include an N-body test and typically a few exact collision tests plus the timeto determine the closest features and to compute the contact normal for each contact feature pair. tact manifold between each pair.For convex objects the contact manifold can be used to compute a contact nor-mal and a pair of contact points to apply the response forces.Collision Response:For each pair in contact,the algo-rithm uses the relative velocity at the contact points,the contact normal and the distance between them to esti-mate the time of impact.If there are multiple contacts during the given time frame,the algorithm uses the one with the smallest time of impact.Each object is then advanced using its current velocities to the time of im-pact,and the contact forces are applied.Finally it is advanced with its new velocities for the remainder of the time in the frame.Computing Proxy Forces:The actual force and torque being applied to the probe are computed using a sim-ple spring model based on the transformation from the actual probe to its proxy at the end of the frame.Af-ter that a simple smoothing is applied to the force and torque vectors to make sure that thefirst derivatives of both the magnitude and the direction of these vectors have not changed drastically between frames.Real-Time Constraints:An important component of a high-fidelity haptic rendering system is that all the computations including contact determination and re-sponse computation have to be performed in less than a millisecond.In practice,this is a rather hard con-straint and many portions of our system need to be op-timized to achieve such performance.Currently,our system only works well on geometric models that can be decomposed into a few convex primitives(convex primitives).The algorithm for collision detec-tion between a pair of convex primitives takes just a few microseconds.Average running times for each phase of the force/torque display during a haptic session are shown in Table1.6.2System DemonstrationWe have applied this system for force and torque display to several applications:gear turning,inserting a peg in a hole, and interaction with multiple moving objects. Mechanical Interaction Between Gears:The gears are modeled and positioned in such a manner that the user can turn one gear with the other in either di-rection.The teeth are not so tightly interlocking that there is always a collision.In the demonstration shown,the gears contain a few hundred polygons and about20convex prim-itives,as shown in Color Plate I(Left).In order to aid the user in turning one gear with the other, constraints are applied to the gears so their position cannot be altered,and their rotation is constrained to be about an axis.In order to generate a correct response these constraints are applied to the velocities at the beginning of the haptic frame,and to the new transformations computed for the ob-jects towards the end.As a result the user can feel the gears rotating against each other just as they would in a real me-chanical part,as shown in Color Plate I(Right).The user can attach the haptic probe to either gear in two different modes.Clicking the button towards the rim of the gear is like inserting a rod into a bicycle wheel.The probe’s position remains at afixed radius from the center of the gear and its orientation isfixed so as to rotate with the gear at that radius.If the user clicks around the center of a gear, the probe effectively becomes the gear.Its position isfixed, and it is only allowed to rotate about one axis.In this mode, the user turns the gears and feels their interactions entirely through the torque of the probe handle.We encourage the reader to view the system demonstration on our website: /˜geom/6DHaptics. Inserting a Peg in a Hole:In this scenario,the user attaches the probe to a rectan-gular peg and attempts to insert it into a rectangular hole. Often one or two pairs of the parallel faces are the pair(s) of the closest features and may be in contact.If any type of sampling technique is used,the number of contact points would be very high,since nearly all faces of the peg are in close proximity with the walls of the hole.Collision detec-tion and contact determination for such a seemingly simple scene are actually rather difficult,due to its contact config-uration and geometric robustness problem.A sequence of snapshots are given in Color Plate II to demonstrate a suc-cessful attempt of a user inserting the peg into the hole with 6-DOF haptic display.A Dynamic Scene of Multiple Moving Objects:In this scenario,all objects are moving simultaneously under the influence of gravity and impact due to collision with other objects.The user can pick up any of the objects with the probe and move it to hit other objects or feel other objects hitting it.Sample snapshots are shown in Color Plate III.In this particular setup,there are four cubes,four spheres(320faces each),four ellipsoids(320faces each)and a stick-like block. There are two types of simulated force.There is continu-ous force/torque such as gravity.There is also impulsive force/torque due to impact between the user controlled ob-ject and other moving objects.The motion of all moving ob-jects is simulated using impulse-based rigid body dynamics. The continuous force can be felt quite well but the impul-sive ones currently feel like small blips.This is exactly what we expect since the impulsive contact duration is very short. We are considering the possibility of force expansion over time or force amplification to exaggerate the feel of impul-sive force/torque.6.3DiscussionForce display of mechanical interaction is useful in virtual assembly,maintenance studies and other electronic proto-typing applications.In cases where the user has to interact with the virtual environment through a sense of touch(e.g.a mechanic trying to remove a virtual part from a virtual en-gine),haptic display appears to be the only means of human-computer interaction.In many other cases(e.g.molecular graphics[BOYBK90]),force display can provide additional means to visualize complex systems or environments.As overall scenes become more complex,the type of contact scenarios do not necessarily become more complex, since the contact configuration in most cases is only local to the region of impact.In fact,in our gear-turning demon-stration,the polygon count is much higher than the visu-ally simple peg-in-the-hole scenario.However,the collision detection and contact determination problem becomes sub-stantially harder for peg-in-the-hole insertion,since nearly the entire peg is in contact with the hole.Although there are already many contact pairs between the interlocking gears, there are significantly many more contacts(in theory in-finitely many point contacts)in the peg-in-the-hole case. Haptic display becomes much more difficult to control due to multiple contact forces generated in opposing directions. This is an extremely challenging scenario.Our approach can provide a more accurate and smoother response than an approximate method.However,if the user exerts too much force which causes a large amount of penetration,the con-。

相关文档
最新文档