AN ADAPTIVE NEURAL CONTROL OF A DC MOTOR
A Survey of Robot Motion Control Methods

Abstract: With the continuous development of industry, robots are applied in ever wider fields, such as the welding, grinding and polishing, and assembly of components for automobiles, aircraft, and generators.
In these tasks, the motion accuracy of the robot is crucial to improving product quality.
However, robots are strongly coupled and nonlinear, their performance depends on the configuration, and many complex uncertainties such as friction and disturbances are present during actual motion, so analyzing and organizing robot motion control methods has significant value for both research and application.
This paper surveys the mainstream motion control algorithms for robots (mainly manipulators), analyzes the characteristics and shortcomings of these algorithms, and finally presents prospects for research on robot motion control.
This work can serve as a basis for selecting robot motion control methods and has practical value.
Keywords: robot, motion control, survey

1 Introduction
With advances in technologies such as multi-sensing and artificial intelligence, robots are increasingly used in intelligent manufacturing tasks such as welding, grinding and polishing, and assembly, significantly improving production efficiency and reducing production cost.
Robotics is therefore one of the main directions of China's manufacturing technology field [1] and one of the strategic high technologies helping China move rapidly from a large manufacturing country to a strong one.
At present, high-precision motion is one of the main development trends in robotics and is critical to achieving high operation quality; realizing it requires control methods with high dynamic response and strong robustness.
However, both the gears and the link structures of a robot deform to varying degrees under load, and the whole system is strongly coupled and nonlinear, with time-varying dynamics and complex, diverse operating uncertainties. This makes it particularly difficult to design motion control methods with high dynamic response and strong robustness; the resulting control deviations degrade the robot's motion accuracy and hence the operation quality.
Based on the above analysis, this paper surveys the mainstream motion control methods for robots (mainly manipulators), analyzes their shortcomings, and offers some thoughts on future development trends of robot motion control methods.
2 Robot Motion Control Methods
As a complex automated system, a robot is nonlinear, strongly coupled, multivariable, and time-varying.
During high-speed motion the joint inertias vary greatly and the coupling is strong, while at low speed nonlinear effects such as friction and saturation become pronounced; both greatly increase the difficulty of control.
For the robot system to run stably, the control system must provide in real time control characteristics that match the robot's dynamics and the multi-source disturbances, which poses a great challenge to motion control methods.
An English Essay on Whether Artificial Intelligence Can Think

English answer:

When we contemplate the intriguing realm of artificial intelligence (AI), a fundamental question arises: can AI think? This profound inquiry has captivated the minds of philosophers, scientists, and futurists alike, generating a rich tapestry of perspectives.

One school of thought posits that AI can achieve true thought by emulating the intricate workings of the human brain. This approach, known as symbolic AI, seeks to encode human knowledge and reasoning processes into computational models. By simulating the cognitive functions of the mind, proponents argue, AI can unlock the ability to think, reason, and solve problems akin to humans.

A contrasting perspective, known as connectionism, eschews symbolic representations and instead focuses on the interconnectedness of neurons and the emergence of intelligent behavior from complex networks. This approach, inspired by biological neural systems, posits that thought and consciousness arise from the collective activity of vast numbers of nodes and connections within an artificial neural network.

Yet another framework, termed embodied AI, emphasizes the role of physical interaction and embodiment in shaping thought. This perspective contends that intelligence is inextricably linked to the body and its experiences in the real world. By grounding AI systems in physical environments, proponents argue, we can foster a more naturalistic and intuitive form of thought.

Beyond these overarching approaches, ongoing research in natural language processing (NLP) and machine learning (ML) is contributing to the development of AI systems that can engage in sophisticated dialogue, understand complex texts, and make predictions based on vast data sets. These advancements are gradually expanding the cognitive capabilities of AI, bringing us closer to the possibility of artificial thought.

However, it is essential to recognize the limitations of current AI systems. While they may excel at performing specific tasks, they still lack the comprehensive understanding, self-awareness, and creativity that characterize human thought. The development of truly thinking machines remains a distant horizon, requiring significant breakthroughs in our understanding of consciousness, cognition, and embodiment.

Chinese answer (translated): Can artificial intelligence think? One of the central questions in the field of artificial intelligence is whether AI can think.
Underactuated AUV Trajectory Tracking Sliding Mode Control with Input Limitation

Journal of Unmanned Undersea Systems, Vol. 30, No. 1, Feb. 2022, pp. 44-53. Received: 2021-03-29; revised: 2021-06-04. Supported by the National Natural Science Foundation of China (61873224) and the Hebei Provincial Science and Technology Program (F2020203037). First author: LI Xin-bin (1969-), male, Ph.D., professor; research interest: intelligent control of underwater robots. Citation: LI Xin-bin, WANG Peng, LUO Xi, et al. Underactuated AUV trajectory tracking sliding mode control with input limitation[J]. Journal of Unmanned Undersea Systems, 2022, 30(1): 44-53. DOI: 10.11993/j.issn.2096-3920.2022.01.006. CLC number: TJ630.1; TB71.2. Document code: A. Article ID: 2096-3920(2022)01-0044-10.

LI Xin-bin, WANG Peng, LUO Xi, PAN Hong-tao (Engineering Research Center of the Ministry of Education for Intelligent Control System and Intelligent Equipment, Yanshan University, Qinhuangdao 066004, China)

Abstract: For the horizontal-plane trajectory tracking problem of an underactuated autonomous undersea vehicle (AUV) under external disturbances and input constraints, a sliding mode controller based on a nonlinear disturbance observer and a radial basis function (RBF) neural network is proposed. First, the kinematic model of the underactuated AUV is transformed by a coordinate change into an error kinematic model so as to stabilize the position error. Second, backstepping is used to design a virtual yaw-rate control law that stabilizes the attitude error. A nonlinear disturbance observer then estimates the time-varying ocean-current disturbance, and a filter estimates the derivative of the virtual control law, avoiding the "explosion of differentiation" that direct differentiation of the virtual control law would cause. Finally, an adaptive RBF neural network is designed to compensate the actual input of the underactuated AUV, and Lyapunov analysis proves that the closed-loop tracking-error signals are uniformly bounded. Simulations verify the effectiveness of the designed controller.

Keywords: underactuated autonomous undersea vehicle; input limitation; trajectory tracking; sliding mode control

0 Introduction
With the continuous progress of science and technology, human development of and investment in marine resources have also increased.
Constrained Control of Strict-Feedback Systems with Input Saturation

TANG Li, JIA Jiyang

Citation: TANG Li, JIA Jiyang. Constrained control of strict-feedback systems with input saturation[J]. Modeling and Simulation (建模与仿真), 2019, 8(3): 102-116. DOI: 10.12677/mos.2019.83013.

Abstract: For nonlinear strict-feedback systems with full-state constraints and input saturation, an adaptive neural-network tracking control scheme is proposed. The adaptive controller is designed by combining backstepping recursion with neural-network approximation. In the design, first, a continuously differentiable asymmetric saturation model is constructed using the Gaussian error function. Second, a tan-type barrier Lyapunov function is chosen to handle the state-constraint problem, that is, to ensure that the full-state constraints are not violated; compared with the existing log-type and integral-type barrier Lyapunov functions, the tan-type barrier Lyapunov function can handle both constrained and unconstrained systems. Third, it is proved that all signals of the closed-loop system are uniformly ultimately bounded, that the error signals remain in a small neighborhood of zero, and that the full-state constraints are not violated. Finally, simulation results verify the effectiveness and feasibility of the proposed scheme.
Modeling and Simulation (建模与仿真), 2019, 8(3), 102-116. Published online August 2019 in Hans (/journal/mos). DOI: 10.12677/mos.2019.83013.
where u(v) ∈ R is the input applied to the system and v ∈ R is the input of the saturator, described by

$$ u(v) = u_M \,\mathrm{erf}\!\left( \frac{\sqrt{\pi}}{2 u_M}\, v \right) \qquad (2) $$

where $u_M = \dfrac{(u_{+} + u_{-}) + (u_{+} - u_{-})\,\mathrm{sgn}(v)}{2}$, $u_{+}$ and $u_{-}$ are respectively the upper and lower bounds of the actuator, and $\mathrm{sgn}(\cdot)$ and $\mathrm{erf}(\cdot)$ are the standard sign function and the Gaussian error function, respectively. To facilitate the subsequent controller design, the following function is defined
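As a numerical check on Eq. (2), the smooth asymmetric saturation can be evaluated directly; the sketch below is illustrative only, with the actuator bounds u_plus and u_minus chosen arbitrarily rather than taken from the paper.

```python
import math

def smooth_saturation(v, u_plus=2.0, u_minus=-1.0):
    """Continuously differentiable asymmetric saturation of Eq. (2),
    built from the Gaussian error function erf."""
    # u_M switches between the upper and lower actuator bound with sgn(v)
    sgn = 1.0 if v >= 0 else -1.0
    u_m = ((u_plus + u_minus) + (u_plus - u_minus) * sgn) / 2.0
    return u_m * math.erf(math.sqrt(math.pi) / (2.0 * u_m) * v)

# Near the origin the map has unit slope; for large |v| it approaches the bounds.
print(smooth_saturation(0.1), smooth_saturation(10.0), smooth_saturation(-10.0))
```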
Introduction to Computer Controlled Systems
Adaptive Control
Adaptation: adjustment of behavior, e.g., parameters in a neural network or weights in a Braitenberg vehicle.
System Identification
System identification is the discipline of building mathematical models of systems from experimental data, measurements, and observations. Typically, the user chooses a model structure that contains unknown parameters (i.e., one puts forward a certain parameterisation). Having a model of a system is often very important for analysis, simulation, prediction, monitoring, diagnosis, control system design, and more.
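A minimal illustration of this idea, a chosen model structure whose unknown parameters are fitted from measured data, is a first-order ARX model y(k) = a*y(k-1) + b*u(k-1) identified by least squares. The plant, noise level, and data below are invented for the sketch.

```python
import numpy as np

# Simulated "measurements" from an unknown first-order plant (true a = 0.8, b = 0.5)
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 200)
y = np.zeros(201)
for k in range(200):
    y[k + 1] = 0.8 * y[k] + 0.5 * u[k] + 0.01 * rng.standard_normal()

# Chosen parameterisation: y(k) = a*y(k-1) + b*u(k-1); stack regressors, solve least squares
Phi = np.column_stack([y[:-1], u])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated (a, b):", theta)   # close to (0.8, 0.5)
```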
Adaptive Control
In practice this implies that an adaptive controller is a controller with adjustable parameters that are tuned on-line according to some mechanism in order to cope with time-variations in the process dynamics and changes in the environment.
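As a toy illustration of a controller with parameters tuned on-line, the sketch below applies the classical MIT-rule gradient adaptation to a feedforward gain. The plant, reference model, and adaptation gain are arbitrary choices for the example, not taken from the course material.

```python
import numpy as np

dt, gamma = 0.01, 0.5          # integration step and adaptation gain (arbitrary)
kp_true, km = 2.0, 1.0         # unknown plant gain, reference-model gain
theta = 0.0                    # adjustable feedforward parameter
y = ym = 0.0

for k in range(20000):
    r = np.sign(np.sin(0.01 * k))        # square-wave reference
    u = theta * r                        # controller with adjustable parameter
    y  += dt * (-y + kp_true * u)        # plant:            dy/dt  = -y  + kp*u
    ym += dt * (-ym + km * r)            # reference model:  dym/dt = -ym + km*r
    e = y - ym
    theta += dt * (-gamma * e * ym)      # MIT rule: dtheta/dt = -gamma * e * ym

print("adapted gain:", theta, "(ideal value km/kp =", km / kp_true, ")")
```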
What does "intelligent" mean in Chinese?

Many people find the English word "intelligent" difficult at first sight, mainly because they do not know its Chinese meaning. Below are the Chinese meanings that the English word "intelligent" expresses; I hope they help you.

Chinese meanings of the English word "intelligent": British [ɪnˈtelɪdʒənt], American [ɪnˈtɛlədʒənt]. Adjective: 聪明的 (clever); 理解力强的 (quick to understand); 有智力的 (having intellect); [计] 智能的 (intelligent, in the computing sense).

Related example sentences (adjective):
1. The child made a very intelligent comment. 那孩子作了很有见地的评论。
2. Elephants are intelligent animals. 象是有灵性的动物。
3. Can you say that dolphins are much more intelligent than other animals? 你能说海豚比其它动物聪明得多吗?
4. He has really been very intelligent about the whole thing. 他对事情的全过程确实很了解。
Monolingual example sentences with "intelligent":
1. But by and large, the CBA's fans are intelligent and fun to share an arena with.
2. Boys need male teachers to learn how to act as responsible, intelligent adults when they grow up.
3. Their client has pleaded not guilty by reason of insanity, but prosecutors are seeking the death penalty and have portrayed him as highly intelligent.
4. By then the United States will be the world's leading authority on creationism and intelligent design.
5. Downey said he told Silver he was interested in doing an intelligent action movie that also was a period piece.
6. He said he hoped his coming out would be a catalyst for intelligent discourse and took a measured approach to NBA players' reactions.
7. The second part builds protective cave eaves and installs intelligent management protection facilities within the caves.
8. Circuit Court of Appeals interviewed by The Associated Press described Alito as thoughtful, intelligent and fair.
9. Myles reveals a few more colors on her palette while playing Isolde as an impulsive yet intelligent woman torn between duty and passion.

Bilingual example sentences with "intelligent":
1. Aiming at the characteristics of ship power systems, two hybrid intelligent rough control methods are presented for the first time: an adaptive neural PID control strategy based on RS-RBF neural network identification, and an excitation compound control method with rough neural inverse-system feedforward compensation. Simulation results prove the validity of these methods. 针对船舶电力系统的特点,首次提出了基于粗糙-RBF网络辨识的发电机励磁神经PID控制和粗糙-神经网络逆前馈补偿的发电机励磁复合控制两种混合智能粗糙控制方法,仿真结果验证了所设计方法的有效性。
ADAPTIVE-ROBUST CONTROL OF THE STEWART-GOUGH PLATFORM AS A SIX DOF PARALLEL ROBOT
Keywords: Parallel robots, Stewart-Gough platform, adaptive-robust control scheme, Lagrangian dynamics.
1. INTRODUCTION
Parallel manipulators such as the Stewart-Gough platform [1] have several advantages: a high force-to-weight ratio (high rigidity), compact size, the capability of control with a very high bandwidth, robustness against external forces and error accumulation, and high dexterity, which make them suitable for accurate positioning systems. These manipulators have found a variety of applications in flight and vehicle simulators, high-precision machining centers, mining machines, medical instruments, spatial devices, etc. However, they have the drawbacks of a relatively small workspace and a difficult forward kinematics problem. Generally, because of the nonlinearity and complexity of the equations, the forward kinematics of parallel manipulators is very complicated and difficult to solve, in contrast to serial manipulators. There are analytic solutions [2, 3, 4], numerical ones [5], and solutions using observers [6] for the forward kinematics problem of parallel robots. The analytical methods provide the exact solution, but they are too complicated because the solution is obtained by solving high-order polynomial equations, and there remains the problem of selecting the correct solution among the several obtained; in fact, no general closed-form solution exists for this problem. The Newton-Raphson method is a simple algorithm for solving nonlinear equations with good convergence, but it takes much calculation time and sometimes converges to the wrong solution depending on the initial values. This method was used to solve the forward kinematics problem of platform-type robotic manipulators [5]. In the observer-based methods [6], two kinds of observers, linear and nonlinear, have been used: the linear observer is based on linearizing the nonlinear terms and leaves a steady-state error, while the nonlinear observer has the difficulty of selecting the observation gains. A neural-network-based method may also be applied to solve the forward kinematics problem as a basic element in the modeling and control of parallel robots [7]. While the kinematics of parallel manipulators has been studied extensively during the last two decades, fewer contributions can be found on the dynamics problem. Dynamic analysis of parallel robots, which is very important for developing a model-based controller, is complicated because of the existence of multiple closed-loop chains [8, 9]. The dynamic equations of a Stewart-Gough platform can be derived based on Lagrange's formulation [10].
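To make the Newton-Raphson option concrete, the sketch below solves a generic forward-kinematics problem: given measured leg lengths and an inverse-kinematics map ik(pose) that returns leg lengths, it iterates on the pose with a numerical Jacobian. The toy two-leg geometry at the bottom is a placeholder for illustration, not the Stewart-Gough geometry of the paper, and the sensitivity to the initial guess mentioned above applies here as well.

```python
import numpy as np

def newton_forward_kinematics(ik, l_meas, pose0, tol=1e-9, max_iter=50):
    """Solve ik(pose) = l_meas for the pose by Newton-Raphson iteration.
    ik: callable mapping an n-vector pose to an n-vector of leg lengths."""
    pose = np.asarray(pose0, dtype=float)
    for _ in range(max_iter):
        r = ik(pose) - l_meas                      # residual
        if np.linalg.norm(r) < tol:
            break
        n, eps = pose.size, 1e-6                   # numerical Jacobian (central differences)
        J = np.empty((r.size, n))
        for j in range(n):
            dp = np.zeros(n); dp[j] = eps
            J[:, j] = (ik(pose + dp) - ik(pose - dp)) / (2 * eps)
        pose = pose - np.linalg.solve(J, r)        # Newton step
    return pose

# Placeholder 2-DOF "platform": leg lengths are distances from two fixed anchors
ik_toy = lambda p: np.array([np.hypot(p[0] - 1.0, p[1]), np.hypot(p[0] + 1.0, p[1])])
true_pose = np.array([0.3, 1.2])
print(newton_forward_kinematics(ik_toy, ik_toy(true_pose), pose0=[0.0, 1.0]))
```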
Adaptive tracking control of uncertain MIMO nonlinear systems with input constraints
Abstract
In this paper, adaptive tracking control is proposed for a class of uncertain multi-input multi-output nonlinear systems with non-symmetric input constraints. An auxiliary design system is introduced to analyze the effect of the input constraints, and its states are used in the adaptive tracking control design. The spectral radius of the control coefficient matrix is used to relax the nonsingularity assumption on that matrix. Subsequently, the constrained adaptive control is presented, where command filters are adopted to emulate the actuator physical constraints on the control law and the virtual control laws and to avoid the tedious analytic computation of time derivatives of virtual control laws in the backstepping procedure. Under the proposed control techniques, semi-global uniformly ultimately bounded stability of the closed-loop system is achieved via Lyapunov synthesis. Finally, simulation studies are presented to illustrate the effectiveness of the proposed adaptive tracking control. © 2011 Elsevier Ltd. All rights reserved.
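A hedged sketch of the command-filter ingredient mentioned above: a second-order filter that tracks a raw command while enforcing magnitude and rate limits, and whose two states deliver the filtered signal and its derivative without analytic differentiation. The natural frequency, damping, and limits below are arbitrary values for the example, not the paper's.

```python
import numpy as np

def command_filter_step(x1, x2, raw_cmd, dt, wn=20.0, zeta=0.9,
                        mag_limit=1.0, rate_limit=5.0):
    """One Euler step of a magnitude- and rate-limited second-order command filter.
    Returns the filtered command x1 and its derivative x2."""
    sat_cmd = np.clip(raw_cmd, -mag_limit, mag_limit)                  # magnitude limit
    inner = np.clip(wn / (2.0 * zeta) * (sat_cmd - x1),
                    -rate_limit, rate_limit)                           # rate limit
    x2_dot = 2.0 * zeta * wn * (inner - x2)
    return x1 + dt * x2, x2 + dt * x2_dot

# Feed a step command of 2.0 through the filter; the magnitude limit caps it at 1.0
x1 = x2 = 0.0
for _ in range(400):
    x1, x2 = command_filter_step(x1, x2, raw_cmd=2.0, dt=0.005)
print("filtered command ~", x1, ", derivative ~", x2)
```

Without the limits active, the filter reduces to the linear dynamics x1_ddot + 2*zeta*wn*x1_dot + wn^2*x1 = wn^2*cmd, so x1 converges to the commanded value and x2 to its derivative.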
BackStepping_Control
Chapter 20. Back-Stepping Control of Quadrotor: A Dynamically Tuned Higher Order Like Neural Network Approach

Abhijit Das, The University of Texas at Arlington, USA; Frank Lewis, The University of Texas at Arlington, USA; Kamesh Subbarao, The University of Texas at Arlington, USA. DOI: 10.4018/978-1-61520-711-4.ch020. Copyright © 2010, IGI Global.

ABSTRACT
The dynamics of a quadrotor is a simplified form of helicopter dynamics that exhibits the same basic problems of strong coupling, multi-input/multi-output design, and unknown nonlinearities. The Lagrangian model of a typical quadrotor, which involves four inputs and six outputs, results in an underactuated system. Several design techniques are available for nonlinear control of mechanical underactuated systems; one of the most popular among them is backstepping. Backstepping is a well-known recursive procedure in which the underactuation of the system is resolved by defining "desired" virtual control and virtual state variables. The virtual control variables are determined in each recursive step by assuming the corresponding subsystem is Lyapunov stable, and the virtual states are typically the errors between the actual and desired virtual control variables. The application of backstepping becomes even more interesting when a virtual control law is applied to a Lagrangian subsystem. The information needed to select virtual control and state variables for these systems can be obtained through model identification methods. One such method uses neural network approximation to identify the unknown parameters of the system, which may include uncertain aerodynamic force and moment coefficients or unmodeled dynamics. These aerodynamic coefficients are generally functions of higher-order state polynomials. In this chapter we discuss how linear-in-parameter first-order neural network approximation methods can be implemented to identify these unknown higher-order state polynomials in every recursive step of the backstepping. The first-order neural network thus eventually estimates the higher-order state polynomials, which is in fact a higher-order-like neural net (HOLNN). Moreover, when these NNs are placed into a control loop, they become dynamic NNs whose weights are tuned only. Due to the inherent characteristics of the quadrotor, the Lagrangian form of the position dynamics is bilinear in the controls, which is confronted using a bilinear inverse kinematics solution. The result is a controller of intuitively appealing structure, having an outer kinematics loop for position control and an inner dynamics loop for attitude control. The stability of the control law is guaranteed by a Lyapunov proof. The control approach described in this chapter is robust, since it explicitly deals with unmodeled state-dependent disturbances without needing any prior knowledge of them. A simulation study validates the results, such as decoupling and tracking, obtained in the chapter.

1. INTRODUCTION
Nowadays helicopters are designed to operate with greater agility and rapid maneuvering, and are capable of working in degraded environments including wind gusts. Helicopter control often requires holding at a particular trimmed state, generally hover, as well as making changes of velocity and acceleration in a desired way (T. J. Koo & Sastry). The control of unmanned rotorcraft is also becoming more and more important due to their usefulness in rescue, surveillance, inspection, mapping, etc. For these applications the ability of the rotorcraft to maneuver sharply and hover precisely is important.

Like fixed-wing aircraft control, rotorcraft control involves controlling attitude (pitch, yaw, and roll) and position, either separately or in a coupled way. The main difference is that, due to the unique body structure of a rotorcraft as well as the rotor dynamics, the attitude dynamics and position dynamics are strongly coupled. Therefore, it is very difficult to design a decoupled control law of good structure that stabilizes the faster and slower dynamics simultaneously. On the contrary, for a fixed-wing aircraft it is easy to design decoupled standard control laws (B. L. Stevens & Lewis, 2003) with intuitively comprehensible performance. Controllers of good structure are needed for robustness, as well as to give some intuitive feel for the functioning of autopilots, the Stability Augmentation System (SAS), and the Control Augmentation System (CAS).

The dynamics of a quadrotor (A. Mokhtari, A. Benallegue, & Daachi, 2006; A. Mokhtari, A. Benallegue, & Orlov, 2006; P. Castillo, R. Lozano, & Dzul, 2005a; S. Bouabdallah, A. Noth, & Siegwart, 2004; T. Madani & Benallegue, 2006) are a simplified form of rotorcraft dynamics that exhibit the basic problems of underactuation, strong coupling, multi-input/multi-output design, and unknown nonlinearities. In the quadrotor, the movement is characterized by the resultant forces and moments of four independent rotors. Control design for a quadrotor is quite similar to that for a rotorcraft; therefore the quadrotor serves as a suitable, more tractable case study for rotorcraft control design. In view of the similarities between a quadrotor and a rotorcraft, control design for the quadrotor reveals corresponding approaches for rotorcraft control design. The 6-DOF airframe dynamics of a typical quadrotor involves force and moment dynamics in which the position dynamics often appear as kinematics. Backstepping control is one of the solutions for handling such coupled dynamic-kinematic systems. There are many approaches, such as (C. D. Yang & Liu, 2003; R. Enns & Si, 2000; R. Mahony & Hamel, 2005; V. Mistler, A. Benallegue, & M'Sirdi, 2001), that reveal different control techniques for rotorcraft models. Popular methods include input-output linearization and backstepping.
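To make the recursive procedure concrete, here is the textbook two-step backstepping design for a double integrator, given only as an illustration of the mechanism, not the chapter's quadrotor controller. For $\dot{x}_1 = x_2$, $\dot{x}_2 = u$ and a reference $x_{1d}(t)$, define $z_1 = x_1 - x_{1d}$ and choose the virtual control $\alpha_1 = \dot{x}_{1d} - k_1 z_1$, so that with $z_2 = x_2 - \alpha_1$ one gets $\dot{z}_1 = -k_1 z_1 + z_2$. In the second step choose

$$ u = \dot{\alpha}_1 - z_1 - k_2 z_2 , $$

which gives $\dot{z}_2 = -z_1 - k_2 z_2$. With the Lyapunov function $V = \tfrac12 z_1^2 + \tfrac12 z_2^2$ the derivative is $\dot{V} = -k_1 z_1^2 - k_2 z_2^2 \le 0$ for any gains $k_1, k_2 > 0$, so both errors converge. The chapter follows the same pattern, with neural networks approximating the unknown terms that enter the virtual-control derivatives, applied step by step to the quadrotor subsystems.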
Neural Control Robot Learning
Neural control robot learning is a fascinating and rapidly evolving field that holds great promise for the future of robotics and artificial intelligence. This innovative approach to robot learning involves the use of neural networks to enable robots to learn and adapt to new tasks and environments in a manner that is more analogous to human learning. By leveraging the power of neural control, robots can become more autonomous, versatile, and capable of performing complex tasks with greater efficiency and accuracy. However, this cutting-edge technology also raises important ethical and societal considerations that must be carefully addressed as neural control robot learning continues to advance.

One of the most significant benefits of neural control robot learning is its potential to revolutionize the capabilities of robots in various industries and applications. Traditional robotic systems are typically programmed with specific instructions for carrying out predefined tasks, which limits their flexibility and adaptability in dynamic or unpredictable environments. In contrast, neural control allows robots to learn from experience, make decisions based on sensory input, and continuously improve their performance over time. This means that robots equipped with neural control technology can effectively learn new tasks, optimize their movements, and even collaborate with humans in shared workspaces, making them invaluable assets in fields such as manufacturing, logistics, healthcare, and beyond.

Moreover, neural control robot learning has the potential to enhance human-robot interaction and collaboration in ways that were previously unattainable. By enabling robots to learn and understand human intentions, preferences, and gestures, neural control technology can facilitate more natural and intuitive communication between humans and robots. This can be particularly beneficial in settings where close human-robot collaboration is essential, such as in assistive robotics for individuals with disabilities, interactive educational robots for children, or collaborative robots in industrial settings. The ability of robots to adapt to human behavior through neural control learning can foster a sense of trust, cooperation, and mutual understanding, ultimately leading to more seamless and productive human-robot partnerships.

However, the rapid advancement of neural control robot learning also raises important ethical considerations and societal implications that must be carefully considered. As robots equipped with neural control technology become increasingly autonomous and capable of learning from their environment, there is a growing concern about the potential impact on employment and job displacement. The widespread adoption of highly adaptable and intelligent robots could lead to the automation of a wide range of jobs, potentially resulting in unemployment and economic disruption for many individuals. Additionally, there are concerns about the ethical implications of giving robots the ability to learn and make decisions independently, particularly in critical domains such as healthcare, transportation, and public safety.

Furthermore, the use of neural control robot learning also raises important questions about accountability, transparency, and the potential for unintended consequences. As robots become more autonomous and capable of learning from their experiences, it becomes increasingly challenging to predict and control their behavior in all possible scenarios.
This raises concerns about the potential for robots to make mistakes or exhibit unexpected behaviors, especially in high-stakes or safety-critical situations. Ensuring that robots equipped with neural control technology adhere to ethical and legal standards, as well as maintaining accountability for their actions, presents a significant challenge that must be addressed through robust regulations, standards, and oversight mechanisms.

In conclusion, neural control robot learning represents a groundbreaking advancement in the field of robotics and artificial intelligence, with the potential to revolutionize the capabilities of robots and enhance human-robot interaction in diverse applications. However, the rapid development of this technology also raises important ethical and societal considerations that must be carefully addressed to ensure its responsible and beneficial integration into society. By proactively addressing these challenges and leveraging the potential of neural control robot learning in a thoughtful and ethical manner, we can harness the full potential of this technology to create a future where robots and humans can collaborate effectively and harmoniously.
Adaptive RBF neural-networks control for a class of time-delay nonlinear systems
Neurocomputing 71 (2008) 3617– 3624
Supported by the National Natural Science Foundation of PR China (60594006, 60774017). Corresponding author. E-mail addresses: cleanzhu2002@ (Q. Zhu), smfei@ (S. Fei).
Abstract

A control scheme combining backstepping, radial basis function (RBF) neural networks, and adaptive control is proposed for the stabilization of nonlinear systems with input and state delay. By using a state transformation, the original system is converted into a system without input delay. The RBF neural network is employed to estimate the unknown continuous function. The controller is designed for the converted system so that the closed-loop system is bounded. From the relation between the original system and the converted one, the state of the original system is proved to be bounded. The control scheme ensures that the closed-loop system is semi-globally uniformly ultimately bounded. © 2008 Elsevier B.V. All rights reserved.
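As a self-contained illustration of the RBF-plus-adaptive-law ingredient (not the paper's time-delay design), the sketch below stabilizes a scalar plant containing an unknown smooth nonlinearity: an RBF network estimates the nonlinearity on-line, and the weights follow a Lyapunov-motivated update with sigma-modification. The plant, gains, and network layout are all invented for the example.

```python
import numpy as np

centers = np.linspace(-2, 2, 9)                 # RBF centers and width (design choices)
width = 0.5
phi = lambda x: np.exp(-((x - centers) ** 2) / (2 * width ** 2))

f_true = lambda x: 0.5 * x ** 2 * np.sin(x)     # "unknown" continuous nonlinearity
W_hat = np.zeros_like(centers)                  # adaptive RBF weights
x, dt = 1.5, 0.001
k, gamma, sigma = 5.0, 20.0, 0.01               # feedback gain, adaptation gain, sigma-mod

for _ in range(20000):
    u = -k * x - W_hat @ phi(x)                                   # feedback + NN compensation
    x += dt * (f_true(x) + u)                                     # plant: xdot = f(x) + u
    W_hat += dt * (gamma * x * phi(x) - sigma * gamma * W_hat)    # adaptive law

print("state after adaptation:", x)   # driven into a small neighborhood of zero
```

The sigma-modification term keeps the weight estimates bounded even when the approximation error does not vanish, which is what yields the uniformly-ultimately-bounded (rather than asymptotic) conclusion.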
Application Research of Deep Reinforcement Learning in Autonomous Driving
With the continuous development and progress of artificial intelligence technology, autonomous driving has become one of the research hotspots in the field of intelligent transportation. Within this research, deep reinforcement learning, as an emerging artificial intelligence technique, is increasingly widely used. This paper explores the application of deep reinforcement learning in autonomous driving.

1. Introduction to Deep Reinforcement Learning

Deep reinforcement learning is a machine learning method based on reinforcement learning that enables machines to acquire knowledge and experience from the external environment so that they can better complete tasks. Its basic framework is to use a deep network to learn the mapping from states to actions; through continuous interaction with the environment, the machine can learn the optimal policy and thereby automate the task. In the field of autonomous driving, deep reinforcement learning is used to automate driving decisions and thus realize intelligent driving.

2. Applications of Deep Reinforcement Learning in Autonomous Driving

1. State recognition in autonomous driving

In autonomous driving, state recognition is a critical step: it obtains the state of the environment through sensors and converts it into data the computer can understand. Traditional state recognition methods are mainly based on rules and feature engineering, but such methods require human involvement and have low accuracy in complex environments, so state recognition based on deep learning has gradually become the mainstream approach. Deep networks such as convolutional neural networks can extract features from, and classify, the images and videos collected by the sensors, realizing state recognition in complex environments.

2. Decision making in autonomous driving

Decision making in autonomous driving is the process of formulating an optimal driving strategy based on the state information acquired by the sensors and on the goals and constraints of the driving task. In deep reinforcement learning, machines learn optimal strategies by interacting with the environment, enabling automated decision making. The decision-making process involves two parts: learning a state-value function, used to evaluate the value of the current state, and learning a policy function, used to select the optimal action. Both are learned through interaction with the environment, thereby automating the driving decision.

3. Behavior planning in autonomous driving

Behavior planning in autonomous driving means selecting an optimal behavior from all possible behaviors based on the current state information and the goal of the driving task. In deep reinforcement learning, machines can learn optimal strategies for behavior planning.
4. Path planning in autonomous driving

Path planning in autonomous driving refers to selecting the optimal driving path according to the goals and constraints of the driving task. In deep reinforcement learning, machines can learn optimal strategies for path planning.

3. Advantages and Challenges of Deep Reinforcement Learning in Autonomous Driving

1. Advantages

Deep reinforcement learning has the following advantages in autonomous driving: (1) it can automatically carry out driving decision making, behavior planning, and path planning, reducing manual involvement and improving driving efficiency and safety; (2) deep networks can extract features from and classify the images and videos collected by the sensors, realizing state recognition in complex environments; (3) it can learn the optimal policy through interaction with the environment, realizing decision making, behavior planning, and path planning in autonomous driving.

2. Challenges

Deep reinforcement learning also presents challenges in autonomous driving: (1) insufficient data: it requires a large amount of data for training, but obtaining large-scale driving data is very difficult; (2) safety: the safety of autonomous driving is a major issue, because the consequences of an accident are unpredictable, so applying deep reinforcement learning in autonomous driving requires very strict safety safeguards; (3) computational cost: training and optimization require substantial computing resources and time, so computing performance and time cost must be considered in practical applications; (4) interpretability: deep reinforcement learning models are usually black-box models whose decision-making process is difficult to understand and explain, which negatively affects the reliability and safety of autonomous driving systems, so improving their interpretability is an important research direction; (5) generalization ability: vehicles face a wide variety of environments and situations, so deep reinforcement learning models need strong generalization ability in order to make accurate and safe decisions and plans.

In summary, deep reinforcement learning has great application potential in autonomous driving, but challenges such as data scarcity, safety, interpretability, computational performance, and generalization must be addressed. Future research should address these issues and promote the development and application of deep reinforcement learning in the field of autonomous driving.
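The value-function/policy idea above can be shown with a deliberately tiny example: tabular Q-learning on a toy road with a goal cell, standing in for the much richer deep-RL pipelines the text describes. The environment, rewards, and hyperparameters are all invented for the sketch.

```python
import numpy as np

n_states, n_actions = 6, 2          # toy road: cells 0..5; actions: 0 = stay, 1 = advance
Q = np.zeros((n_states, n_actions))
alpha, discount, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(1)

def step(s, a):
    """Advance moves one cell toward the goal (cell 5); reaching it pays +1."""
    s_next = min(s + 1, n_states - 1) if a == 1 else s
    reward = 1.0 if (s_next == n_states - 1 and s != n_states - 1) else -0.01
    return s_next, reward, s_next == n_states - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward the bootstrapped target
        Q[s, a] += alpha * (r + discount * np.max(Q[s_next]) * (not done) - Q[s, a])
        s = s_next

print(np.argmax(Q, axis=1))   # learned policy: advance (action 1) in every non-goal cell
```

A deep variant replaces the table Q with a neural network trained on the same kind of bootstrapped target, which is the step that lets the method scale to image-based driving states.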
Transactions of the Institute of Measurement and Control-2013-Juan-1008-15
Adaptive fuzzy approach for non-linearity compensation in MEMS gyroscope
Wanru Juan and Juntao Fei
Transactions of the Institute of Measurement and Control 35(8) 1008–1015. © The Author(s) 2013. Reprints and permissions: /journalsPermissions.nav. DOI: 10.1177/0142331212472224
Corresponding author: Juntao Fei, Hohai University, 200 North Jinling Road, Changzhou, China. Email: jtfei@
motor and an XY-positioning table using fuzzy logic control. Park and Han (2011) derived a backstepping and recurrent fuzzy neural controller for a robot manipulator with dead zone and friction.
Jiangsu Key Laboratory of Power Transmission and Distribution Equipment Technology, College of Computer and Information, Hohai University, Changzhou, China
Dead-Zone Compensation Control for Nonlinear Systems Based on Barrier Lyapunov Functions

Author: LIU Lei. Source: Journal of Nanjing University of Information Science & Technology (Natural Science Edition), 2018, No. 6.

Abstract: This paper focuses on adaptive controller design for nonlinear single-input single-output systems with partial state constraints. Considering the nonlinear input characteristic of an asymmetric dead zone, a barrier Lyapunov function is chosen to prevent the constrained states from violating their constraints. Based on barrier-Lyapunov-function backstepping, the output tracking problem of this class of systems is solved and the effect of the dead-zone nonlinearity is handled. For nonlinear systems in lower-triangular form, an adaptive controller is designed; it is proved that all signals of the closed-loop system are bounded and that the system output tracks the reference signal. Finally, simulation results show the effectiveness of the proposed method.

Keywords: constrained control; barrier Lyapunov function; dead-zone nonlinear input; backstepping
CLC number: TP13. Document code: A.

0 Introduction

Constrained systems are common among real physical systems. For example, in an electrostatic micro-actuator the velocity and displacement of the movable electrode must be constrained to prevent it from touching the fixed electrode. In practical control systems, appropriately constraining the system variables is necessary to guarantee safe operation. In general, the overshoot must not be too large, otherwise the operating state becomes unsatisfactory and the system may even become unstable. Handling constraints quickly and effectively is therefore an important task in industrial process control.

Control design based on barrier Lyapunov functions (BLFs) can effectively handle a class of constraint problems. The basic idea is that the value of the BLF tends to infinity as its argument approaches the boundary of certain regions; by guaranteeing boundedness of the BLF, the system states are kept within their limits. Broadly, BLF-based control methods fall into three classes: output-constrained control, full-state-constrained control, and partial-state-constrained control. Output-constrained control ensures, on the basis of a stability analysis, that the system output stays within a prescribed range; reference [1] solved the output-constraint problem for strict-feedback systems and, using conventional and symmetric BLFs with backstepping, proposed corresponding adaptive control strategies. Full-state-constrained control guarantees that all states satisfy given constraints while ensuring stability; reference [2] proposed two adaptive control algorithms, using symmetric and asymmetric BLFs, for a class of stochastic nonlinear systems with parametric uncertainties, and reference [3] presented an adaptive full-state-constrained control scheme for strict-feedback systems that also handles an unknown control direction. Partial-state-constrained control addresses the case where only some of the states must satisfy specific constraints while stability is guaranteed; in fact, output-constrained and full-state-constrained control can be viewed as special cases of partial-state-constrained control [4].

Although research on constrained control is maturing, many existing results ignore the dead-zone input nonlinearity [5-6]. The dead zone is a typical form of input nonlinearity. Because of actuator physical limitations, mechanical design, and manufacturing, dead-zone input characteristics are unavoidable in practical control systems; they degrade closed-loop performance and can even destabilize the system. Research on dynamic systems with dead-zone inputs has therefore received wide attention in recent years and produced a number of results [7-8]. Using the bounded-slope property of the dead zone, reference [9] proposed a robust adaptive fuzzy output-feedback dead-zone compensation strategy based on the small-gain theorem and a state observer, ensuring that all signals are semi-globally uniformly ultimately bounded. These results, however, address actuators with symmetric dead zones, whereas asymmetric dead-zone mechanisms often exist in practical mechanical systems. To overcome this limitation, reference [10] gave a detailed model of the asymmetric dead zone and proposed a decentralized adaptive stabilizing controller for interconnected nonlinear systems in triangular form. These existing methods [8-10] do not consider state constraints, and when some of the states must also satisfy constraints, how to design an effective adaptive controller that compensates the dead zone remains unsolved.

Based on barrier Lyapunov functions, this paper designs an adaptive dead-zone compensation controller for a class of nonlinear systems with partial state constraints and solves the output tracking problem for such systems. Stability analysis proves that all signals of the system are bounded. Compared with existing results, this paper introduces the dead-zone nonlinearity into the partial-state-constraint control problem and, combined with an adaptive auxiliary signal, overcomes the difficulty that traditional partial-state-constrained control cannot easily compensate a dead zone. Finally, numerical simulations verify the effectiveness of the proposed method.

1 Problem Formulation

Consider the following strict-feedback single-input single-output system.

In the simulation, the initial values are chosen as x1(0) = 0.2 and x2(0) = 0.15, and the design parameters are μ1 = 10, μ2 = 20, γ = 0.5, k1 = 2.2, kb1 = 2. Figures 1-4 show the simulation results. Figure 1 shows the tracking curve: the system output tracks the reference signal and the first state x1 stays within its prescribed constraint bound. Figure 2 gives the trajectory of η2, showing that η2 is bounded. Figure 3 shows the control input, and Figure 4 depicts the phase portrait. Figures 1-4 show that all of these signals are bounded.

4 Conclusion

Based on the barrier Lyapunov function method, this paper has proposed an adaptive partial-state-constrained controller for nonlinear single-input single-output systems, together with a compensation strategy for the asymmetric dead zone, effectively solving the output tracking problem for this class of systems. Using barrier-Lyapunov-function backstepping and a stability analysis, all signals of the closed-loop system are proved to be bounded. The method establishes a partial-state-constraint control scheme for lower-triangular systems and achieves dead-zone compensation control for such systems. A simulation example verifies the effectiveness of the proposed method.

References
[1] Tee K P, Ge S S, Tay E H. Barrier Lyapunov functions for the control of output-constrained nonlinear systems[J]. Automatica, 2009, 45(4): 918-927.
[2] Liu Y J, Lu S M, Tong S C, et al. Adaptive control based Barrier Lyapunov functions for a class of stochastic nonlinear systems with full state constraints[J]. Automatica, 2018, 87: 83-93.
[3] Liu Y J, Tong S C. Barrier Lyapunov functions for Nussbaum gain adaptive control of full state constrained nonlinear systems[J]. Automatica, 2017, 76: 143-152.
[4] Tee K P, Ge S S. Control of nonlinear systems with partial state constraints using a Barrier Lyapunov function[J]. International Journal of Control, 2011, 84(12): 2008-2023.
[5] Liu L, Wang Z, Zhang H. Adaptive fault-tolerant tracking control for MIMO discrete-time systems via reinforcement learning algorithm with less learning parameters[J]. IEEE Transactions on Automation Science and Engineering, 2017, 14(1): 299-313.
[6] Li Y, Tong S, Li T. Composite adaptive fuzzy output feedback control design for uncertain nonlinear strict-feedback systems with input saturation[J]. IEEE Transactions on Cybernetics, 2015, 45(10): 2299-2308.
[7] Tong S, Li Y. Adaptive fuzzy output feedback control of MIMO nonlinear systems with unknown dead-zone inputs[J]. IEEE Transactions on Fuzzy Systems, 2013, 21(1): 134-146.
[8] Liu L, Liu Y J, Chen C L P. Adaptive neural network control for a DC motor system with dead-zone[J]. Nonlinear Dynamics, 2013, 72(1/2): 141-147.
[9] Li Y, Tong S, Liu Y, et al. Adaptive fuzzy robust output feedback control of nonlinear systems with unknown dead zones based on a small-gain approach[J]. IEEE Transactions on Fuzzy Systems, 2014, 22(1): 164-176.
[10] Yoo S J, Park J B, Choi Y H. Decentralized adaptive stabilization of interconnected nonlinear systems with unknown non-symmetric dead-zone inputs[J]. Automatica, 2009, 45(2): 436-443.

Barrier Lyapunov function based compensation control for a class of nonlinear systems with dead zone

LIU Lei (College of Science, Liaoning University of Technology, Jinzhou 121001)

Abstract: An adaptive controller design for a class of nonlinear single-input single-output systems with partial state constraints is presented in this paper. The approach adopted is to consider the asymmetric dead zone of the nonlinear systems and employ the barrier Lyapunov function (BLF) to prevent partial states from transgressing the constraints. Using a BLF-based backstepping technique, a good tracking performance is obtained and the nonlinearity of the dead zone is addressed. For the triangular system, an adaptive controller is designed. It is shown that all the signals in the resulting closed-loop system are bounded, and the system output can track the reference signal. The simulation results show the effectiveness of the proposed method.

Key words: constrained control; barrier Lyapunov function; dead-zone nonlinear input; backstepping method
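For reference, two standard forms from the literature cited above, given here as context rather than as this paper's exact definitions: the symmetric log-type barrier Lyapunov function of Tee et al. [1],

$$ V_b(z_1) = \frac{1}{2}\,\log\frac{k_{b1}^{2}}{k_{b1}^{2} - z_1^{2}}, \qquad |z_1| < k_{b1}, $$

which grows without bound as $|z_1| \to k_{b1}$, so keeping $V_b$ bounded keeps the constrained error strictly inside its bound; and a common asymmetric dead-zone model along the lines of [10],

$$ u = D(v) = \begin{cases} m_r\,(v - b_r), & v \ge b_r, \\ 0, & -b_l < v < b_r, \\ m_l\,(v + b_l), & v \le -b_l, \end{cases} $$

with right and left slopes $m_r, m_l > 0$ and dead-band widths $b_r, b_l > 0$.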
Robust Optimization Design Methods for Automation Control Systems: Innovation and Application (Paper Source Material)

Robust optimization is one of the important research directions in the design of automation control systems.
It aims to optimize the system design while taking system uncertainties into account. This article introduces innovative methods and applications of robust optimization design for automation control systems and provides related source material.

1 Introduction
Automation control systems play an important role in modern industry: they realize automatic control of industrial processes and improve the efficiency and quality of industrial production. However, because various uncertain factors exist in industrial processes, such as external disturbances, sensor noise, and model parameter errors, traditional optimization design methods often exhibit poor stability and robustness. Robust optimization design has therefore become one of the hot topics in research on automation control systems.

2 Innovations in Robust Optimization Design Methods
2.1 Modeling of parameter uncertainty. In robust optimization design, accurately establishing a model of the system's parameter uncertainty is key. Traditional methods usually model the parameters with probability distributions, but in practical applications the uncertainty more often appears as fuzzy intervals or imprecisely known values. Innovative methods therefore use tools such as fuzzy mathematics and interval analysis to model the parameters, improving the accuracy and reliability of robust optimization design.
2.2 Robust controller design. Robust controller design is one of the core parts of robust optimization design. Traditional methods mainly use linear robust controller design techniques such as H∞ control and μ-synthesis. In practice, however, nonlinear systems and systems with model errors call for more innovative robust controller design methods, for example adaptive and neural-network-based control or fuzzy control; through model adaptation and nonlinear correction, these methods improve the robustness and stability of the control system.
2.3 Multi-objective robust optimization design. In practical industrial applications there are often multiple optimization objectives, such as control performance, energy consumption, and cost. Traditional single-objective optimization design ignores the trade-offs and balance among these objectives. Innovative multi-objective robust optimization design methods are therefore applied to automation control system design: by introducing multi-objective optimization algorithms and weighing the objectives against one another, more robust and feasible design schemes are obtained, as the simple scalarization sketch below illustrates.
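One simple way to realize this trade-off is weighted-sum scalarization: sweep the weights, minimize the scalarized cost for each weight, and collect the resulting designs. The two toy objectives below (tracking cost versus control effort as functions of a scalar feedback gain) are invented purely for illustration.

```python
import numpy as np

# Toy design variable: a scalar feedback gain k on a grid
gains = np.linspace(0.1, 10.0, 200)
tracking_cost = 1.0 / gains            # larger gain -> better tracking (lower cost)
control_effort = 0.05 * gains ** 2     # larger gain -> more actuator effort

for w in np.linspace(0.1, 0.9, 9):     # sweep the trade-off weight
    scalarized = w * tracking_cost + (1.0 - w) * control_effort
    k_best = gains[np.argmin(scalarized)]
    print(f"weight on tracking {w:.1f} -> chosen gain {k_best:.2f}")
```

Each weight yields one compromise design; collecting them over the sweep traces out an approximation of the Pareto front between the two objectives.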
3 Applications of Robust Optimization Design Methods
3.1 Industrial process control. Robust optimization design methods are widely applied in all kinds of industrial process control, for example temperature, pressure, and liquid-level control in chemical processes; generator control and power-dispatch control in power systems; and robot control and cutting control in machining.
Adaptive Neural Robust Control for a Free-Floating Space Manipulator Facing an Unknown Model

WANG Chao, JING Lijian, YE Xiaoping, JIANG Lihong, ZHANG Wenhui (School of Engineering, Lishui University, Lishui 323000, Zhejiang, China)

Abstract (translated from the Chinese): Because the precise mathematical model of a free-floating space manipulator is difficult to obtain accurately, a neural-network controller is used to compensate the manipulator's dynamic model, and an adaptive learning law for the network weights is designed for on-line, real-time adjustment, avoiding dependence on the mathematical model. An adaptive robust controller is designed to suppress external disturbances and compensate the approximation error, improving the robustness and control accuracy of the system. Based on Lyapunov theory, the stability of the closed-loop system is proved. Simulation experiments verify the effectiveness of the proposed control method, which is of significance for research on free-floating space robots.

Keywords: space robot; neural network; robust control; adaptive; stability
CLC number: TP24. Document code: A. Article ID: 1672-5581(2019)02-0153-06.
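The abstract does not reproduce the control law itself. A typical structure consistent with its description, stated here as an assumption rather than as the authors' exact equations, combines a neural-network compensation term, a linear feedback term, and a robust term:

$$ \tau = \hat{W}^{\mathsf{T}}\sigma(x) + K_v\, r + u_r, \qquad \dot{\hat{W}} = \Gamma\,\sigma(x)\, r^{\mathsf{T}} - \kappa\,\Gamma\,\lVert r\rVert\,\hat{W}, \qquad u_r = -K_r\,\operatorname{sgn}(r), $$

where $r$ is a filtered tracking error, $\sigma(\cdot)$ collects the network's basis functions, $\hat{W}$ holds the on-line adapted weights, and $\Gamma$, $K_v$, $K_r$, $\kappa$ are positive design gains; in the Lyapunov analysis the robust term $u_r$ absorbs the network approximation error and the external disturbance.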
Brain Science: The Dark Technology (an English essay)
脑科学的黑科技英语Brain Science: The Dark TechnologyThe field of brain science has been rapidly evolving, unlocking secrets and unveiling remarkable advancements that were once thought to be the realm of science fiction. From the ability to read and manipulate human thoughts to the development of brain-computer interfaces, the breakthroughs in this domain have been both awe-inspiring and unsettling. As we delve deeper into the mysteries of the human mind, we find ourselves confronted with a double-edged sword – the immense potential for good, and the equally daunting potential for abuse.One of the most captivating developments in brain science is the ability to read and interpret human thoughts. Through the use of advanced neuroimaging techniques and machine learning algorithms, researchers have demonstrated the feasibility of decoding the neural patterns associated with specific thoughts and mental states. This technology has profound implications for fields such as mental health, cognitive enhancement, and even lie detection. Imagine a world where the inner workings of the mind are no longer hidden, where our thoughts and emotions can be accessedand analyzed with unprecedented precision.While this level of insight into the human mind holds immense potential for improving our understanding of the brain and developing more effective therapies, it also raises significant ethical concerns. The prospect of having our most private thoughts and memories accessible to others, without our consent, is a chilling thought. The implications of such technology in the hands of governments, corporations, or malicious actors are far-reaching and potentially devastating. Imagine the implications of a totalitarian regime that can monitor and manipulate the thoughts of its citizens, or a corporation that can exploit the neural patterns of its employees to maximize productivity and profit.Another remarkable development in brain science is the advancement of brain-computer interfaces (BCIs). These technologies aim to create a direct communication pathway between the human brain and external devices, allowing for the control of various electronic systems through the power of thought alone. From prosthetic limbs that can be controlled by the mind to gaming experiences that are entirely driven by neural activity, the potential applications of BCIs are truly staggering.However, the development of these technologies has also raised concerns about the ethical implications of merging the human mindwith machines. Questions of personal autonomy, privacy, and the blurring of the line between human and machine become increasingly complex. Imagine a world where our thoughts and actions are no longer solely our own, but are influenced or even controlled by external devices or software. The potential for manipulation, addiction, and the erosion of individual agency becomes a pressing concern.Moreover, the advancement of brain science has also led to the exploration of neural enhancement technologies. From the development of drugs and devices that can improve cognitive function to the prospect of direct brain-to-brain communication, the ability to augment and expand the capabilities of the human mind is a tantalizing prospect. Yet, this too comes with its own set of ethical quandaries.The prospect of creating a class of "enhanced" individuals raises concerns about fairness, equity, and the potential for societal stratification. 
If access to these technologies is limited or unevenly distributed, it could lead to the creation of a divide between those who can afford the enhancements and those who cannot. This could exacerbate existing social and economic inequalities, further marginalizing already disadvantaged groups.

Furthermore, the long-term effects of neural enhancement technologies on the human brain and psyche are largely unknown. The potential for unintended consequences, such as cognitive impairments, personality changes, or the disruption of natural cognitive development, must be carefully considered before widespread adoption.

As we continue to delve deeper into the realm of brain science, it is crucial that we approach these advancements with a keen sense of ethical responsibility. The power to read, manipulate, and enhance the human mind is a double-edged sword, and we must ensure that the pursuit of scientific progress is balanced with a deep consideration of the moral and societal implications.

Policymakers, researchers, and the public must engage in robust and ongoing dialogues to establish robust ethical frameworks and regulatory mechanisms that can guide the development and deployment of these technologies. Only through a collaborative and thoughtful approach can we harness the immense potential of brain science while mitigating the risks and preserving the fundamental rights and dignity of the human individual.

The future of brain science is both exciting and daunting. As we unravel the mysteries of the mind, we must remain vigilant and committed to ensuring that these advancements serve the greater good of humanity, rather than becoming the tools of oppression, exploitation, or the erosion of our shared humanity.
Proceedings of the 2001 IEEE International Symposium on Intelligent Control, September 5-7, 2001, México City, México

AN ADAPTIVE NEURAL CONTROL OF A DC MOTOR

IEROHAM S. BARUCH, JOSE-MARTIN FLORES, RUBEN GARRIDO, JUAN-CARLOS MARTINEZ

Department of Automatic Control, CINVESTAV-IPN, AV. IPN No 2508, A.P. 14740, Mexico D.F. 07360, MEXICO, Fax: (+52) 5747-7089, E-mail: baruch@ctrl.cinvestav.mx

Abstract. A Recurrent Trainable Neural Network (RTNN) together with a Backpropagation Through Time (BPTT) learning algorithm are applied for real-time identification and adaptive control of a DC-motor drive. The paper proposes to use three RTNNs, separately for systems identification, state-feedback control, and feedforward control. The applied RTNN model has a minimum number of parameters due to its Jordan canonical structure, which permits the generated state vector to be used directly for DC-motor feedback control. The experimental results confirm the applicability of the described identification and control methodology in practice and also confirm the good quality of the RTNN.

Key Words: DC-Motor, Recurrent Neural Network, Backpropagation Through Time Learning, Systems Identification, Adaptive Control

1. INTRODUCTION

Recent developments in science and technology provide a wide scope of applications of high-performance electric motor drives in areas involving mechatronics, such as robotics, rolling mills, and machine tools, where accurate speed or position control is of critical importance. There is an increasing number of applications of high-precision motion control systems in manufacturing, i.e., ultra-precision machining, assembly of small components, and micro drives. It is very difficult to assure high positioning accuracy and high trajectory-tracking ability due to the many factors affecting the precision of motion, such as load-torque variations, friction, backlash, and stiffness in the drive system [5]. Friction is a natural resistance to relative motion between two contacting bodies. The friction model has been widely studied by numerous researchers, as found in [1], [4]. It is commonly modeled as a linear combination of Coulomb friction, stick, viscous friction, and the Stribeck effect [1], [4]. The presence of nonlinear friction forces is unavoidable in high-performance motion control systems. In servo systems, if the controller is designed without consideration of the friction, the closed-loop system may show steady-state tracking error and/or oscillations. In addition, the friction characteristics may change easily due to environmental changes such as load variations, temperature and humidity changes, and some dynamic effects could be observed [1]. So the standard PID-type servo control algorithm is not capable of delivering the desired precision under the influence of friction. To compensate the friction effects, adaptive schemes were developed. Adaptive friction compensation methods for DC-motor drives and robot control systems are given in [5]. Some advanced work has also been done on Neural Network (NN) applications for adaptive friction compensation. The works cited in [4] applied CMAC-based NNs for robust control of systems with friction. Kim and Lewis [4] applied reinforcement learning based on a Functional Link NN for friction compensation of high-speed precise mechanical systems. Weerasooriya and El-Sharkawi [5] applied feedforward NNs for identification and control of DC-motor drives. As can be seen, the schemes proposed in the literature on NN learning control systems possess high complexity and high dimensionality, which makes them hard to apply. To avoid this complexity, it is appropriate to use the recurrent NN approach. Baruch et al. [2], [3] have done work in this field, identifying nonlinear dynamic objects by means of recurrent neural networks. So, the purpose of this paper is to apply the Recurrent Trainable Neural Network (RTNN) approach, described in [2], [3], for real-time identification and control of DC-motor-driven mechanical systems with friction and unknown load characteristics.

2. BRIEF DESCRIPTION OF THE RTNN TOPOLOGY AND LEARNING

In [2], [3], a discrete-time model of the RTNN and the Backpropagation Through Time (BPTT) weight-updating rule are given. The RTNN model is a parametric one: it gives information about the state, control, and output matrices, and the state vector as well. The RTNN topology is described by the following vector-matrix equations:

Where: T is a target vector with dimension 1 and [T(k) − Y(k)] is an output error vector with the same dimension; R is an auxiliary variable.
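The vector-matrix equations announced above do not survive in this copy. As a hedged placeholder consistent with the Jordan-canonical RTNN described in the authors' related publications [2], [3] (an assumption, not a quotation of the missing equations), the model can be written in the discrete-time state-space form

$$ X(k+1) = J\,X(k) + B\,U(k), \qquad Z(k) = S\big(X(k)\big), \qquad Y(k) = S\big(C\,Z(k)\big), $$

where X, U, and Y are the state, input, and output vectors, J is a block-diagonal (Jordan-form) state matrix whose eigenvalues are kept inside the unit circle for stability, B and C are the input and output weight matrices, and S(·) is a saturation-type activation function. The BPTT learning rule then adjusts J, B, and C from the output error [T(k) − Y(k)] mentioned above.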