A City Modeling and Simulation Platform Based on Google Map API

Book review

Modeling, Simulation, and Control of Flexible Manufacturing Systems – A Petri Net Approach; Meng Chu Zhou; Kurapati Venkatesh; Yushun Fan; World Scientific, Singapore, 1999

1. Introduction

A flexible manufacturing system (FMS) is an automated, mid-volume, mid-variety, central computer-controlled manufacturing system. It can be used to produce a variety of products with virtually no time lost in changeover from one product to the next. An FMS is a capital-intensive and complex system, so its design, implementation and operation must be carried out carefully in order to obtain the best economic benefit. A great deal of research has been done on the modeling, simulation, scheduling and control of FMS [1–6]. The Petri net (PN) method has frequently been used as a tool by different researchers studying these problems, and many papers and books have been published in this area [7–14]. "Modeling, Simulation, and Control of Flexible Manufacturing Systems – A PN Approach" is a new book by Zhou and Venkatesh that focuses on studying FMS using PNs as a systematic method and integrated tool.

The book's contents can be classified into four parts: an introductory part (Chapters 1–4), a PN applications part (Chapters 5–8), a new research results part (Chapters 9–13), and a part on future development trends (Chapter 14). In the introductory part, the background, motivation and objectives of the book are described in Chapter 1, along with a brief history of manufacturing systems and PNs. The basic definitions and problems in FMS design and implementation are introduced in Chapter 2; the authors divide FMS-related problems into two major areas, managerial and technical. In Chapter 4, the basic definitions, properties and analysis techniques of PNs are presented; this chapter can serve as a PN primer for readers who are not familiar with the method. In Chapter 3, the authors
present their approach to studying FMS-related problems, using PNs as an integrated tool and methodology in FMS design and implementation. Various applications of PNs in the modeling, analysis, simulation, performance evaluation, discrete event control, planning and scheduling of FMS are also surveyed there. From the introductory part, readers can obtain the basic concepts and methods of FMS and PNs, and a clear picture of the relationship between them.

Mechatronics 11 (2001) 947–950. 0957-4158/01/$ - see front matter © 2001 Elsevier Science Ltd. All rights reserved. PII: S0957-4158(00)00057-X

The second part of the book covers PN applications: how PNs are used to solve various FMS-related problems. FMS modeling is the basis for simulation, analysis, planning and scheduling. In Chapter 5, after an introduction to several kinds of PNs, a general method for modeling FMS with PNs is given. A systematic bottom-up and top-down modeling method is presented and demonstrated by modeling a real FMS cell at the New Jersey Institute of Technology. The application of PNs to FMS performance analysis is introduced in Chapter 6, which also covers stochastic PNs and time distributions.
The performance analysis of a flexible workstation using SPNP, a PN tool developed at Duke University, is given in Section 6.4. In Chapter 7, the procedures and steps involved in discrete event simulation using PNs are discussed. The use of various modeling techniques for simulation, such as queuing network models, state-transition models, high-level PNs and object-oriented models, is briefly explained. A software package for simulating PN models is introduced, along with several CASE tools for PN simulation. Chapter 8 shows how PNs can be applied to study the differing effects of push and pull paradigms, a method useful for selecting a suitable management paradigm for a manufacturing system. A manufacturing system modeled under both push and pull paradigms serves as a practical example in Section 8.3, and general procedures for the performance evaluation of an FMS under the pull paradigm are given in Section 8.4.

The third part of the book mainly presents the authors' own research results in the area of PN applications. In Chapter 9, an augmented timed PN is put forward and used to model manufacturing systems with breakdown handling; it is demonstrated on a flexible assembly system in Section 9.3.
In Chapter 10, a new class of PNs called real-time PNs is proposed and used to model and control discrete event control systems. A comparison of the proposed method with ladder logic diagrams is given in Chapter 11. Because of its significant advantages, the object-oriented method has been combined with PNs to define a new kind of PN. In Chapter 12, the authors propose an object-oriented design methodology for the development of FMS control software. OMT and PNs are integrated in order to develop reusable, modifiable and extendible control software. The proposed methodology is applied to an FMS: OMT is used to find the static relationships among the different objects, and PN models are formulated to study the performance of the FMS. In Chapter 13, scheduling methods for FMS using PNs are introduced, with examples drawn from an automated manufacturing system and a semiconductor test facility. In the last chapter, future research directions for PNs are pointed out, including CASE tool environments, scheduling of large production systems, supervisory control, multi-lifecycle engineering and benchmark studies.

2. Comments

As a monograph on PNs and their applications in FMS, the book is abundant in content. Besides a rich treatment of PNs, it covers almost every aspect of FMS design and analysis, such as modeling, simulation, performance evaluation, planning and scheduling, breakdown handling, real-time control and control software development. The reader can therefore gain much knowledge of PNs, FMS, discrete event system control, system simulation and scheduling, as well as software development.

The book combines PN theory and practical applications very well, and this integrated style is demonstrated throughout. It is well suited to graduate students and beginners who are interested in using PN methods to study their specific problems, and it is especially suited to researchers
working in the areas of FMS, CIMS and advanced manufacturing technologies. Feedback from our graduate students shows that, compared with other books about PNs, this one is more interesting and easier to learn from: it is easy to get a clear picture of what the PN method is and how it can be used in FMS design and analysis. The book is thus a very good textbook for graduate students majoring in manufacturing systems, industrial engineering, factory automation, enterprise management and computer applications.

Both PNs and FMS are complex and research-intensive areas. Thanks to their deep understanding of PNs and FMS, and to their writing skills, the authors describe complex problems and theories in an easily readable and understandable fashion. This readability and the abundant content make the book a good reference for students and researchers alike. Through the book, readers can also learn about new research results on PNs and their applications in FMS that are not contained in other books. Because most of the new results given in the book are the authors' own achievements, readers come to know not only the results but also the background, history and research methodology of the related areas. This helps researchers who are about to enter the field learn the state of the art, so that they can begin their studies with less preparation time and obtain new results sooner.

Compared with other books, the organization of this one is very application oriented. Its aim is to present new research results on FMS applications of the PN method, and its organization is cohesive around that topic. Many live examples reinforce the presented methods. These advantages make the book a very good practical guide for students and beginners starting research in the related areas. The history and references of related research given in the book provide the reader a good way to
better understand PN methods and their applications in FMS. It is especially suited to Ph.D. candidates who have decided to choose PNs as their thesis topic.

3. Conclusions

Because of the significant importance of PNs and their applications, PNs have become a common background and basic method for students and researchers working on modeling, planning and scheduling, performance analysis, discrete event system control, and shop-floor control software development. The book under review provides a good way to learn, and to begin research on, PNs and their application to manufacturing systems. Its integrated and application-oriented style makes it a very good book for both graduate students and researchers, and its clear, step-by-step deepening presentation makes it a good textbook for graduate students, particularly those majoring in manufacturing systems, industrial engineering, enterprise management, computer applications and automation.

References

[1] Talavage J, Hannam RG. Flexible manufacturing systems in practice: application, design, and simulation. New York: Marcel Dekker; 1988.
[2] Tetzlaff UAW. Optimal design of flexible manufacturing systems. New York: Springer; 1990.
[3] Jha NK, editor. Handbook of flexible manufacturing systems. San Diego: Academic Press; 1991.
[4] Carrie C. Simulation of manufacturing. New York: John Wiley & Sons; 1988.
[5] Gupta YP, Goyal S. Flexibility of manufacturing systems: concepts and measurements. European Journal of Operational Research 1989;43:119–35.
[6] Carter MF. Designing flexibility into automated manufacturing systems. In: Stecke KE, Suri R, editors. Proceedings of the Second ORSA/TIMS Conference on FMS: Operations Research Models and Applications. New York: Elsevier; 1986. p. 107–18.
[7] David R, Alla H. Petri nets and grafcet. New York: Prentice Hall; 1992.
[8] Zhou MC, DiCesare F. Petri net synthesis for discrete event control of manufacturing
systems. Norwell, MA: Kluwer Academic Publishers; 1993.
[9] Desrochers AA, Al-Jaar RY. Applications of Petri nets in manufacturing systems. New York: IEEE Press; 1995.
[10] Zhou MC, editor. Petri nets in flexible and agile automation. Boston: Kluwer Academic Publishers; 1995.
[11] Lin C. Stochastic Petri nets and system performance evaluations. Beijing: Tsinghua University Press; 1999.
[12] Peterson JL. Petri net theory and the modeling of systems. Englewood Cliffs, NJ: Prentice-Hall; 1981.
[13] Reisig W. Petri nets. New York: Springer; 1985.
[14] Jensen K. Coloured Petri nets. Berlin: Springer; 1992.

Yushun Fan
Department of Automation, Tsinghua University
Beijing 100084, People's Republic of China
E-mail address: *****************

Research on Collaborative Modeling & Simulation Platform for Multi-Disciplinary Virtual Prototype and Its Key Technologies

Computer Integrated Manufacturing Systems, Vol. 11, No. 7, July 2005. Article ID: 1006-5911(2005)07-0901-08

Research on collaborative modeling & simulation platform for multi-disciplinary virtual prototype and its key technologies

DI Yan-qiang (1), LI Bo-hu (1), CHAI Xu-dong (2), WANG Peng (1)
(1. School of Automation, Beihang University, Beijing 100083, China; 2. The Second Academy, China Aerospace Science & Industry Corp., Beijing 100854, China)

Abstract: The system architecture and technology architecture of the Collaborative Modeling & Simulation Platform (Cosim-Platform) were established, focusing on the development of multi-disciplinary virtual prototypes. To solve the problems of collaborative modeling and simulation for multi-disciplinary virtual prototypes, four key technologies of the platform were put forward: (1) model-oriented multi-domain collaborative simulation technology; (2) modeling technology based on systems engineering theory and component technology; (3) grid technology and Microsoft automation technology; (4) internal and external integration technology for the Cosim-Platform, based on the eXtensible Markup Language (XML) and Product Lifecycle Management (PLM) systems. A virtual prototype development process model based on the platform, in accordance with the principles of concurrent engineering, was also provided. Finally, an application example of the platform in the ship domain is briefly introduced; preliminary practice in the fields of astronautics, shipbuilding and satellites indicates that the Cosim-Platform can effectively support virtual prototyping engineering.

Keywords: multi-disciplinary virtual prototype; collaborative simulation; simulation platform; grid technology; integration technology
CLC number: TP391.9    Document code: A
Received: 2004-06-24; revised: 2004-09-20

ADvanced Architecture for Modeling and Simulation (ADAMS)

Abstract—The Advanced Architecture for Modeling and Simulation (ADAMS) framework was developed at Lockheed Martin Advanced Technology Laboratories (LM ATL) to support the use of social science and related models in combination for decision support purposes. The ultimate products include decision aid tools for commanders or analysts, intended to enable decision makers to better understand the dynamics of foreign societies of particular strategic interest to the U.S., and to provide a method to develop and evaluate strategies for interactions with these societies. These systems include representations of societies, adversaries, governments, and other related organizations. LM ATL's experience in implementing these systems is that no single model or set of models can efficiently handle and represent all aspects of a complex realistic problem. Likewise, no single exploration or analysis technique can exploit the full value of a set of models. For this reason, a multi-model, multi-hypothesis approach was developed to enable the user to more fully exploit the information provided by multiple models and analysis techniques. However, a multi-model approach requires large amounts of processed data to populate the models, and the ability to easily integrate multiple models, run them in combinations, view results, and compare the results of successive "what-if" experiments. LM ATL's ADAMS platform satisfies this need by providing an extensible environment to manage the interactions between the models, data, and analysis components. Enhancements are in development to further improve and automate these processes. ADAMS allows construction and maintenance of multi-model environments at relatively low cost.
Development programs using ADAMS can then focus their efforts on the components (data, models, exploration and experimentation), yet reap much more benefit and knowledge by treating the components as a portfolio and applying them to the most appropriate requirements.

Manuscript received May 22, 2009. Janet Wedgwood is with Lockheed Martin Advanced Technology Laboratories, Cherry Hill, NJ 08002 USA (856-792-9879; fax: 856-792-9930; jwedgwoo@). Zacharias Horiatis, Timothy Siedlecki, and John Welsh are with Lockheed Martin Advanced Technology Laboratories, Cherry Hill, NJ 08002 USA ({zhoriati, tsiedlec, jwelsh}@).

I. INTRODUCTION

The Advanced Architecture for Modeling and Simulation (ADAMS) framework was developed at Lockheed Martin Advanced Technology Laboratories (LM ATL) to support the use of social science and related models in combinations for decision support purposes. The ultimate products include decision aid tools for commanders or analysts that enable decision makers to realize an improved understanding of the dynamics of foreign societies of particular strategic interest to the US, and to provide a method to develop and evaluate strategies for interactions with these societies. These systems include representations of societies, adversaries, governments, and other related individuals and organizations. The decision aid systems support multiple types of users (multiple levels), and produce top-level summary reports and recommendations, as well as detailed "what-if" analyses and explanation reports for in-depth analytic users. LM ATL's experience in implementing these systems reveals that no single model or modeling paradigm can efficiently handle and represent all aspects of a complex realistic problem. Likewise, no single exploration or analysis technique can exploit the full value of a set of models.
For this reason, a multi-model, multi-hypothesis approach was developed to enable the user to more fully exploit the information provided by multiple models and analysis techniques. However, a multi-model approach requires large amounts of processed data to populate the parameters of the models, and the ability to easily integrate multi-paradigm models, run them in combinations, view results, and compare the results of successive "what-if" experiments. This in turn requires that models can be easily integrated into the system and interconnected with each other, and that the desired Human Computer Interface (HCI) components to control the models, run experiments, and analyze/visualize the outputs can easily be plugged in.

LM ATL's ADAMS platform (Fig. 1) satisfies this requirement by providing an extensible environment that manages the interactions between the models, data, model control, and analysis/visualization components. (Fig. 1. ADAMS provides automated integration of models and User Interface components.) ADAMS uses the Dynamic Information Architecture System (DIAS) [1] from Argonne National Laboratory at its core, with automated processes in place for model integration. Additional enhancements to further improve and automate the processes necessary to assemble a decision support system are in development. These processes and the underlying automation enable the construction and maintenance of multi-model environments in less time and at a lower cost than existing ad-hoc frameworks.
Development programs using ADAMS can then focus their efforts on the components, reaping significant benefit and knowledge from treating the components as a portfolio and applying them to the most appropriate requirements.

The model integration framework and support services provided by ADAMS will be described, as well as the relevant models and model collections. Applications of this technology include course of action (COA) generation and strategy analysis for command and control, and experiment support for modeling and simulation programs, offering substantial benefits to the various technology owners. Example user groups include: JFCOM Information Operations (IO) Range analysts; CENTCOM IO analysts; combatant commander staff; and intelligence analysts from a number of agencies.

II. APPROACH

2.1 Overview

Engineers at LM ATL are developing the ADAMS processes as part of their research into the use of modeling and simulation to understand the dynamics of foreign societies relative to such things as national stability. No single model or modeling paradigm can provide the rich set of integrated behaviors needed to adequately simulate regions of interest and forecast possible futures for those regions. Currently, the complexity and time required to integrate a diverse set of multi-paradigm, multi-domain models is prohibitive unless automated assistance is available.

With ADAMS, we are developing processes and implementing wizards to rapidly integrate and configure simulation models, enabling non-computer-scientists to more easily exploit modeling and simulation through automation of scenario development, model instantiation, integration and initialization, and experimentation, including visualization and analysis support.
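The model "wiring" that such automation produces can be illustrated with a minimal sketch: each model exposes named inputs and outputs, and an integration layer routes outputs to inputs along declared links, then steps the models in order. All class, model, and port names below are hypothetical illustrations, not part of the ADAMS or DIAS APIs.

```python
# Minimal sketch of multi-model wiring (hypothetical names, not the ADAMS/DIAS API).
class Model:
    def __init__(self, name, step_fn):
        self.name = name
        self.step_fn = step_fn  # maps an input dict to an output dict
        self.inputs = {}
        self.outputs = {}

    def step(self):
        # Compute this model's outputs from its current inputs.
        self.outputs = self.step_fn(self.inputs)

class Integrator:
    """Routes outputs to inputs along declared links, then steps each model."""
    def __init__(self):
        self.models = {}
        self.links = []  # (src_model, src_port, dst_model, dst_port)

    def add(self, model):
        self.models[model.name] = model

    def link(self, src, src_port, dst, dst_port):
        self.links.append((src, src_port, dst, dst_port))

    def run(self, steps):
        for _ in range(steps):
            # Route data along every declared link before stepping.
            for src, sp, dst, dp in self.links:
                if sp in self.models[src].outputs:
                    self.models[dst].inputs[dp] = self.models[src].outputs[sp]
            for m in self.models.values():
                m.step()

# Hypothetical example: an infrastructure model feeding a group-support model.
infra = Model("infrastructure", lambda i: {"power_hours": i.get("repair_level", 0) * 4})
support = Model("support", lambda i: {"support_pct": min(100, 40 + 5 * i.get("power_hours", 0))})

sim = Integrator()
sim.add(infra)
sim.add(support)
sim.link("infrastructure", "power_hours", "support", "power_hours")

infra.inputs["repair_level"] = 2
sim.run(steps=2)
print(support.outputs["support_pct"])  # support rises as power hours flow in
```

In a real system the link table would be generated from the block and sequence diagrams described below, rather than written by hand.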
Our Scenario Generation Process, and our progress on the Model Instantiation and Experimentation Processes, are described in the following sections. These processes address some of the largest challenges in automating the instantiation, simulation, and analysis of a scenario. They rely on representations of the integrated model set (nodes and relationships, models, events, data flow, COAs, and results) that an autocoding tool/wizard can use to generate the model integration code. This allows the analyst to explore the solution space (change model parameters) without being overwhelmed by unnecessary, irrelevant complexity.

2.2 Scenario Generation Process

The results of the Scenario Generation Process are well defined. This paper presents our current thinking in this area and illuminates some of the difficult points. We have selected a Semantic Conceptual Model (SCM) as a way for analysts to think about scenarios. The SCM represents the nodes (people, places, and things) and relationships of interest to the user. The Scenario Generation Process is designed to be supported by a user interface component for selecting the entities and linkages that will be explored in the simulation. The scenario must be represented in such a way that an autocoding tool/wizard can generate not only the model integration code, but also the hooks into the model parameters for the Experimentation Process that will allow either the analyst or an Experimentation Component to explore the solution space of the integrated models (Section 2.4).

Currently we represent the SCM in the Web Ontology Language, OWL [2]. This can be displayed in a user-friendly node-link diagram [3] that can be easily learned and manipulated by the user, such as the left column of Fig. 2. It is clear that the node and relationship information alone is not sufficient to allow an autocoding tool to generate a functional simulation.
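As an illustration, the node-and-relationship content of an SCM can be thought of as a set of subject-predicate-object triples. The sketch below uses plain Python rather than OWL, with hypothetical node and relation names loosely drawn from the example scenario of Section 2.2.1; a real SCM would also carry class definitions and property constraints that this toy version omits.

```python
# Toy SCM as subject-predicate-object triples (hypothetical names; real SCMs are OWL ontologies).
scm = {
    ("GroupA", "isA", "TerroristGroup"),
    ("LeaderA", "isA", "Leader"),
    ("GroupA", "supports", "LeaderA"),
    ("GroupA", "repairs", "ElectricalInfrastructure"),
    ("GroupB", "uses", "ElectricalInfrastructure"),
}

def related(node, relation):
    """Return all objects linked from `node` by `relation`."""
    return sorted(o for s, p, o in scm if s == node and p == relation)

print(related("GroupA", "supports"))  # ['LeaderA']
```

A node-link diagram viewer is essentially a renderer over exactly this kind of triple set, which is why the representation is easy for analysts to learn and manipulate.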
The nodes and relationships need to be interconnected through model inputs and outputs, not concepts like "support." Information about when each model should run is also necessary. This structural and dynamic information, which indicates how the models are "wired together" (how data and events flow between models), is pre-populated into the ADAMS model library by modelers and subject matter experts who are familiar with the region of interest. An example of a mapping between an SCM representing a scenario and a UML/SysML [4] block diagram is shown in the second column of Fig. 2.

Our next area of research will be to determine the best way to abstract these block and sequence diagrams and present them in such a way that they can be easily understood and manipulated by the analyst. In general, an SCM can be developed in any way that contains the required information; its visualization is completely up to the designer of the user interface. The output of the Scenario Generation Process is the representation of these nodes and relationships in a manner suitable for the Model Integration Process (Section 2.3).

2.2.1 Example User Scenario

These processes are best described in the context of an example. In this case, the hypothetical scenario involves a terrorist group (Group A) attempting to increase support for its leader (Leader A) by helping to repair the electrical infrastructure in Group B's neighborhood. The analyst is interested in understanding how Group B can be influenced not to support Leader A. The scenario-generation process shown in Fig. 2 begins with a semantic representation of the scenario that the analyst wants to explore, the Semantic Conceptual Model (SCM). (Fig. 2. The SCM, on the left, is augmented with model interconnection information to enable autocoding of the SCM into the DIAS framework.) This portion of the analyst's "virtual world" includes nodes that represent important entities in the virtual world of
interest, and the relationships between them. In our case, these are represented as an ontology in OWL (Web Ontology Language). One possible view is a nodal diagram of the situation, such as the one shown in Fig. 2, from which the analyst can select models for the two groups and their leaders, a model of support for leaders and groups customized for Leader A and Groups A and B (the latter not shown), and a model of electrical infrastructure customized for Group B. We will be researching issues such as bounding the selection of models (basically drawing a circle around the set of entities and relationships that are of interest), selecting default values on that boundary, and automated model population and updating from a database.

2.3 ADAMS Installation and Model Integration Process

The second column of Fig. 2 shows how a portion of the scenario is enhanced with the model interconnection information that allows it to be targeted to DIAS from Argonne National Laboratory. The analyst can view as much or as little of this additional information as desired; it is quite possible for an analyst to go through an entire experimentation phase using only SCM diagrams.

The underlying framework and capabilities support the translation of the SCM into an integrated model set. These steps occur before the analyst can use the tool, and are generally performed by what we will call an installation user: someone who is familiar with installing tools, although not necessarily with the models.

Fig. 3 shows the steps needed to create the data sets necessary for our SimBuilder to create an instance of the ADAMS framework. (Fig. 3. ADAMS SimBuilder provides rapid assembly of model collections for analysis support.) An instance of the ADAMS framework is essentially a set of integrated and interconnected models that can be run through our supplied model controller to generate model output files that can be read by our supplied visualization tool.
Sufficient information is provided with the installation to enable users to connect their own model control, experimentation, and visualization/analysis components.

Our initial installation package contains the ADAMS.jar file, which holds all of the templated code that will be customized for the particular instance of ADAMS. It also provides a standard application schema and an autocoding schema. The application schema provides the ability to describe the capabilities of the models, and the autocoding schema contains all of the autocoding keywords found in the templates. The Installation Wizards walk the installation user through the task of telling SimBuilder where the modeling frameworks for the various models have been installed on the system.

The development of the underlying Model Library, consisting of the block diagrams and sequence diagrams, is really an exercise in collaboration between multiple model development teams. This third set of users comprises the social scientists who understand, and wish to experiment with, for instance, the relationships between a leader model, a group model, a model of group support, and an electrical infrastructure model. In this process, the modelers use block and sequence diagrams to explain the interconnections and interactions. They may also provide insight into the abstraction of those interconnections and interactions.

Data flow information from the block diagrams allows the autocoding wizards to indicate to each model where to find and gather inputs before running, and provides parameter name translations as necessary. Unlike many approaches, the semantics of each simulation model is not required to match the common semantics of an overall model. We are exploring methods that allow the semantics of the nodes to vary based on the simulation model used to implement them. This way, any new functionality in a model will be available to any other node in the integrated system that can take advantage of it.
These methods include a future flexible ontology mapping capability, allowing each simulation model to be used to the greatest extent possible. We are also investigating how far up the user chain we can place such a tool.

Events, as specified by the sequence diagrams, not only indicate the order of model execution, but also allow entities to communicate with each other in a "push" mode if necessary. Events can carry data with them and can be used to signal entities to execute internal methods or run their models.

Once the Model Library is available, the installation administrator runs SimBuilder and completes the integration of the models into ADAMS. After the Model Integration Process is executed, the resulting integrated model set is representative of the nodes and relationships, events, data flow, actions, and effects specified by the Scenario Creation Process. At this point, the HCI (including experimentation), the visualization, the models, and the user-supplied data sources that contain the parameters for the models are all "wrapped" into the system. These steps are designed to be completed with minimal hand coding by the installation user. Certain documentation regarding the running of the models must be delivered with each model to be integrated in order to complete the model integration.

2.4 Experimentation Process

After the analyst has set up the scenario and instantiated the integrated simulation, we move to the Experimentation Process. It is important to note that the analyst only interacts with the models through intuitive graphical user interfaces, never through the code.

Our Experimentation Process will enable users to explore the solution space of the integrated models.
Three methods will be available for exploring the solution space: (1) a Design of Experiments [5] or optimization engine can be integrated through SimBuilder as a component of the modeling and simulation platform; (2) the user can choose pre-defined action models that are integrated with the models under consideration and represented just like the social models; and (3) the first method to be made available, in which the user chooses "what-if" explorations where individual model parameters can be modified.

The Experimentation Process is designed to be supported by a user interface component that provides the required data for the desired experimentation method(s). The general result of any of these methods is to change some number of model parameters at a particular time during the simulation run, possibly to perform multiple runs with different sets of parameters, and finally to link all of the result sets generated from a particular experiment to support the visualization and analysis components. A model parameter wizard is planned that will enable the modeler to describe the model parameters, provide metadata about them, and connect them to a database. The metadata for a parameter may include whether it can be modified and, if so, its minimum and maximum values. This information can be used by the Experimentation Component to control the modification of parameters, so that the user or Experimentation Component cannot accidentally enter a number that does not make sense for the model.

2.4.1 The Design of Experiments Method

Design of Experiments (DOE) can be implemented through multiple commercially available tools, as well as many in-house tools. The planned ADAMS framework DOE capability enables the user to select not only the inputs to change, but also the order in which they are changed and the simulation time at which they should be changed.
This means that the models that have been encapsulated in the framework can be driven by sophisticated DOE frameworks such as Dakota [6] from Sandia National Laboratories or Isight from Simulia [7]. The goal is to enable sophisticated, adaptive search through the complex solution spaces that result from the multi-model simulation.

2.4.2 The Action/Effect Method

In this method, the experimentation process is supported by action and effect entities in the SCM. To continue our example, the analyst is provided with a library of actions defined for the Leader, Support, Group, and Infrastructure models to see which actions may be used to change the support for Leader A. This can be viewed as a course-of-action (COA) development process. For COA development, multiple actions can be "active" at any time. Alternatively, the analyst may change parameters in the models directly to understand their effects, and then follow up with Subject Matter Experts to define new actions that might produce such changes. The mapping between actions and model inputs, and between effects and model outputs, together with action discovery and instantiation, is an area of interesting future development. We are tracking efforts in action discovery, and as progress is made in this area, the ADAMS framework will support the aspects that are relevant.

2.4.3 The "What-if" Method

The "what-if" method involves changing specific inputs to the models. This can be done either by designating start, step, and stop values or by providing a set of desired values. In addition, if multiple parameters and models are involved, the analyst can specify how to combine the parameter changes. This gives the analyst the ability to change any parameter values in the SCM as they apply to the particular purpose of the analysis. For example, the analyst might want to observe the effects of increasing the number of hours that electricity is available in a particular province, but with the repairs performed by coalition forces instead of members of Group A.
The goal might be to see whether Group B will lower their support for Leader A if they no longer perceive that they are being helped by Leader A. The analyst uses a simple timeline tool to define when to take the action (coalition repairs infrastructure to increase the electricity availability by some percentage) and the duration of the action.

III. CONCLUSION

Our Scenario Generation, Model Integration, and Experimentation Processes are promising approaches to making complex modeling and simulation ever more automated and, therefore, more accessible to a wide range of analysts and decision makers without requiring computer science skills. Multiple representations and wizards are being effectively employed to hide the complexity of modeling and simulation while increasing its usability. We are continuing development work in this area to effectively address a growing number of application areas involving computational social science and related disciplines.

REFERENCES

[1] A. Peter Campbell and John R. Hummel, "The Dynamic Information Architecture System": /DIAS/papers/SCS/SCS.html
[2] Deborah L. McGuinness and Frank van Harmelen, Editors, "OWL Web Ontology Language Overview," W3C Recommendation, 10 February 2004: /TR/owl-features/
[3] /
[4] SysML, UML/SysML: /
[5] Design of Experiments: /wiki/Design_of_experiments
[6] Dakota from Sandia National Laboratory: /DAKOTA/index.html
[7] Isight from Simulia: /products/sim_opt.html
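The "what-if" sweep of Section 2.4.3 can be sketched in a few lines. The following is a hypothetical Python illustration, not part of the ADAMS framework; all names are invented, and the min/max check mirrors the parameter-wizard metadata described in Section 2.4:

```python
import itertools

def expand(spec):
    """Expand one parameter spec into concrete values.

    spec is either {"start", "step", "stop"} or {"values": [...]},
    matching the two input styles described for the what-if method.
    """
    if "values" in spec:
        return list(spec["values"])
    vals, v = [], spec["start"]
    while v <= spec["stop"] + 1e-9:
        vals.append(v)
        v += spec["step"]
    return vals

def what_if_runs(param_specs, metadata):
    """Cartesian combination of all parameter sweeps -> list of run configs.

    metadata[name] = (min, max); values outside the range are rejected,
    standing in for the wizard's modifiability and bounds checks.
    """
    names = list(param_specs)
    grids = []
    for n in names:
        lo, hi = metadata[n]
        vals = expand(param_specs[n])
        if any(not (lo <= v <= hi) for v in vals):
            raise ValueError(f"{n}: value outside [{lo}, {hi}]")
        grids.append(vals)
    return [dict(zip(names, combo)) for combo in itertools.product(*grids)]

# Example: sweep electricity availability from 8 to 16 hours in steps of 4.
runs = what_if_runs(
    {"electricity_hours": {"start": 8, "step": 4, "stop": 16}},
    {"electricity_hours": (0, 24)},
)
# -> three runs: 8, 12 and 16 hours
```

Combining several swept parameters via a Cartesian product, as here, is the simplest interpretation of "specify how to combine the parameter changes"; a real implementation could also support paired or sequential combinations.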

Evaluation of Land Ecological Security in Chengdu Based on the PSR Model


Modeling and Simulation (建模与仿真), 2023, 12(5), 4449-4457. Published online September 2023 in Hans. https:///journal/mos https:///10.12677/mos.2023.125405

Evaluation of Land Ecological Security in Chengdu Based on the PSR Model

Tao Chen, Shiyue Chen, Yuru Fan
School of Public Administration, Southwest Minzu University, Chengdu, Sichuan

Received: June 7, 2023; accepted: September 1, 2023; published: September 8, 2023

Abstract

Research purpose: By constructing an evaluation index system for land ecological security, this study makes a comprehensive evaluation of land ecological security in Chengdu from 2011 to 2022, reveals the differences in Chengdu's land ecological security and their main influencing factors, and provides guidance for the rational use of land.

Research methods: The PSR evaluation model is used to establish the evaluation index system, and the entropy method is used to calculate the indicator weights, yielding a comprehensive land ecological security index for Chengdu.
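The entropy weighting step mentioned above can be sketched as follows. This is a minimal Python illustration of the standard entropy method, not the authors' code, and it assumes the indicator matrix has already been positively normalized (larger = better, all values > 0):

```python
import math

def entropy_weights(matrix):
    """Entropy-method weights for an n-years x m-indicators matrix.

    Each column is turned into proportions p_ij; its information entropy
    is e_j = -sum(p_ij * ln p_ij) / ln n. Indicators with more divergence
    across years (lower entropy) receive larger weights.
    """
    n, m = len(matrix), len(matrix[0])
    divergences = []
    for j in range(m):
        col = [matrix[i][j] for i in range(n)]
        total = sum(col)
        p = [v / total for v in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        divergences.append(1.0 - e)  # degree of divergence of indicator j
    s = sum(divergences)
    return [d / s for d in divergences]
```

An indicator that is identical across all years has entropy 1 and therefore weight 0: it carries no information for distinguishing years, which is exactly why the entropy method is attractive for objective weighting.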

Research results: From 2011 to 2020, the land ecosystem pressure index of Chengdu first decreased, then rose, and then leveled off; the land ecosystem state index showed an overall upward trend; and the land ecological security response index rose steadily.

Research conclusions: On the whole, the comprehensive land ecological security evaluation index of Chengdu has developed unstably. The Chengdu government should increase investment in the ecological environment and formulate land ecological development strategies to comprehensively raise the city's land ecological security level.

Keywords: PSR Model, Evaluation of Land Ecological Security, Chengdu

Evaluation of Land Ecological Security in Chengdu Based on PSR Model

Tao Chen, Shiyue Chen, Yuru Fan
School of Public Administration, Southwest Minzu University, Chengdu, Sichuan

Received: Jun. 7th, 2023; accepted: Sep. 1st, 2023; published: Sep. 8th, 2023

Abstract

The paper makes a comprehensive evaluation of land ecological security in Chengdu during 2011~2022 by constructing an evaluation index system of land ecological security. The paper also reveals the differences and main influencing factors of land ecological security in Chengdu and offers guidance for the rational use of land. The PSR evaluation model was used to establish the evaluation index system, and the entropy method was used to calculate the weights to comprehensively evaluate the land ecological security index of Chengdu. The results showed that from 2011 to 2020, the land ecosystem pressure index in Chengdu decreased first, then increased, and then stabilized. The state index of the land ecosystem showed an upward trend on the whole. The response index of land ecological security increased steadily. It can be concluded that the development of the comprehensive evaluation index of land ecological security in Chengdu is not stable. The Chengdu government should increase investment in the ecological environment, formulate a land ecological development strategy, and comprehensively enhance the level of land ecological security in Chengdu.

Copyright © 2023 by author(s) and Hans Publishers Inc. This work is licensed under the Creative Commons Attribution International License (CC BY 4.0). /licenses/by/4.0/

1. Introduction

With the acceleration of urbanization and modernization, China's economy has entered the fast lane while its population has continued to grow. As a basic factor of production, land resources provide important support for economic and social development.

On the Modeling and Simulation of Friction


Abstract
Two new models for "stick-slip" friction are presented. One, called the "bristle model," is an approximation designed to capture the physical phenomenon of sticking. This model is relatively inefficient numerically. The other model, called the "reset integrator model," does not capture the details of the sticking phenomenon, but is numerically efficient and exhibits behavior similar to the model proposed by Karnopp in 1985. All three of these models are preferable to the classical model, which poorly represents the friction force at zero velocity. Simulation experiments show that the new models and the Karnopp model give similar results in two examples. In a closed-loop example, the classical model predicts a limit cycle which is not observed in the laboratory. The new models and the Karnopp model, on the other hand, agree with the experimental observations.
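The zero-velocity behavior that separates these models can be illustrated with a small sketch of a Karnopp-style law (Python for illustration; the parameter values are hypothetical): inside a narrow velocity band the friction force cancels the applied force up to the static limit, and outside it Coulomb plus viscous friction opposes motion.

```python
def karnopp_friction(v, f_applied, f_static=1.2, f_coulomb=1.0,
                     c_viscous=0.1, dv=1e-3):
    """Karnopp-style stick-slip friction force (illustrative parameters).

    |v| < dv  -> "stuck": friction opposes the applied force, capped at
                 the static (breakaway) level f_static.
    otherwise -> sliding: Coulomb + viscous friction opposing motion.
    """
    if abs(v) < dv:
        return -max(-f_static, min(f_static, f_applied))
    sign = 1.0 if v > 0 else -1.0
    return -(f_coulomb + c_viscous * abs(v)) * sign
```

Unlike the classical model, which leaves the force ill-defined at exactly zero velocity, this law returns a well-defined sticking force there, which is what makes it usable in simulation without chattering.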

Tire Dynamics Model


The Tire block models a vehicle tire in contact with the road. The driveline port transfers the torque from the wheel axis to the tire. You must specify the vertical load F_z and the vehicle longitudinal velocity V_x as Simulink input signals. The model provides the tire angular velocity Ω and the longitudinal force F_x on the vehicle as Simulink output signals. All signals have MKS units.

The convention for the F_z signal is positive downward. If the vertical load F_z is zero or negative, the horizontal tire force F_x vanishes. In that case, the tire is just touching or has left the ground.

Tire Model

The tire is a flexible body in contact with the road surface and subject to slip. When a torque is applied to the wheel axle, the tire deforms, pushes on the ground (while subject to contact friction), and transfers the resulting reaction as a force back on the wheel, pushing the wheel forward or backward.

The Tire block models the tire as a rigid-wheel, flexible-body combination in contact with the road. The model includes only longitudinal motion and no camber, turning, or lateral motion. At full speed, the tire acts like a damper, and the longitudinal force F_x is determined mainly by the slip. At low speeds, when the tire is starting up from or slowing down to a stop, the tire behaves more like a deformable, circular spring. The effective rolling radius r_e is normally slightly less than the nominal tire radius because the tire deforms under its vertical load. The tire relaxation length σ_κ is the ratio of the slip stiffness to the longitudinal force stiffness. It determines the transient response of F_x to slip.

This figure and table define the tire model variables. The figure displays the forces from the ground on the tire.
The normal convention for F_z is positive downward, representing the vertical load on the tire and the force from the tire on the ground.

Tire Model Variables and Constants

r_e: Effective rolling radius (m)
I: Wheel-tire assembly inertia (kg·m²)
T_drive: Torque applied by the axle to the wheel (N·m)
V_x: Wheel center longitudinal velocity (m/s)
Ω: Wheel angular velocity (rad/s)
Ω′: Contact point angular velocity (rad/s)
V_sx: Wheel slip velocity (m/s)

Tire Deformation and Response

If the tire were rigid and did not slip, it would roll and translate as V_x = r_e·Ω. In reality, even a rigid tire slips, and a tire develops a longitudinal force F_x only in response to slip. The wheel slip velocity V_sx = V_x − r_e·Ω ≠ 0. The wheel slip κ = −V_sx/V_x is more convenient. For a locked, sliding tire, κ = −1. For perfect rolling, κ = 0.

The tire is also flexible. Because it deforms, the contact point turns at a slightly different angular velocity Ω′ from the wheel. The contact point slip κ′ = −V′_sx/V_x, where V′_sx = V_x − r_e·Ω′.

The tire deformation u directly measures the difference of wheel and contact point slip and satisfies

du/dt = V_sx − V′_sx

A tire model must provide the longitudinal force F_x that the tire exerts on the wheel once given:

∙ the vertical load F_z
∙ the contact slip κ′

The tire characteristic function specifies this relationship in the steady state: F_x = f(κ′, F_z).

The contact slip κ′ in turn depends on the deformation u. The longitudinal force F_x is approximately proportional to the vertical load because F_x is generated by contact friction and the normal force F_z. (The relationship is somewhat nonlinear because of tire deformation and slip.) The dependence of F_x on κ′ is more complex.

Tire Dynamics

The tire model incorporates transient as well as steady-state behavior and is thus appropriate for starting from, and coming to, a stop. Because the rolling, stressed tire is not in a steady state, the contact slip κ′ and deformation u are not constant.
Before they can be used in the characteristic function, their time evolution must be accounted for. In this model, u and κ′ are assumed to be small, so the relationships of F_x to u and of u to κ′ are linear. These properties are taken from empirical tire data.

The deformation u evolves according to a first-order relaxation law, and the slip κ′ follows from u. The tire behaves like a driven damper of damping rate V_x/σ_κ. At low speeds the slip remains finite, and the tire behaves more like a circular spring of stiffness C_Fx. Manifestly nonsingular forms of the tire evolution equation avoid the division by V_x; the second form explicitly shows the dependence on a varying vertical load F_z.

Finding the Wheel and Vehicle Motion

With the tire characteristic function f(κ′, F_z), the vertical load F_z, and the evolved u and κ′, you can find the longitudinal force F_x and the wheel velocity Ω. From these, the equations of motion determine the wheel angular motion (the angular velocity Ω) and the longitudinal motion (the wheel center velocity V_x):

I·dΩ/dt = T_drive − r_e·F_x

m·dV_x/dt = F_x − m·g·sin β

where β is the slope of the incline upon which the vehicle is traveling (positive for uphill), and m and g are the wheel load mass and the gravitational acceleration, respectively. T_drive is the driveshaft torque applied to the wheel axis.

Relationship to Block Parameters

The effective rolling radius is r_e. The rated load normalizes the tire characteristic function f(κ′, F_z), and the peak force, slip at peak force, and relaxation length fields determine the peak and slope of f(κ′, F_z) and thus C_Fx and σ_κ.

[Block diagrams: slip velocity and transient slip computation, low-speed damping, and numerical differentiation of the Magic Formula for the tire force derivatives with respect to F_z and κ]

References

Genta, G., Motor Vehicle Dynamics: Modeling and Simulation, World Scientific, Singapore, 1997.
Pacejka, H. B., Tire and Vehicle Dynamics, Society of Automotive Engineers/Butterworth-Heinemann, Oxford, 2002.
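A compact numerical sketch of this longitudinal model is given below (Python, forward-Euler integration). The tanh curve is a stand-in for the empirical characteristic function f(κ′, F_z) such as the Magic Formula, the sign conventions are simplified to keep the relaxation dynamics stable, and all parameter values are illustrative rather than the block's defaults:

```python
import math

def simulate_tire(T_drive, steps=20000, dt=1e-4, r_e=0.3, I=1.0,
                  m=200.0, g=9.81, beta=0.0, sigma_k=0.3,
                  F_z=2000.0, mu_peak=1.0, k_slip=4.0):
    """Longitudinal tire model: relaxation-length slip dynamics plus the
    wheel and vehicle equations of motion, integrated by forward Euler.

      kappa = u / sigma_k                          (contact slip)
      F_x   = mu_peak * F_z * tanh(k_slip*kappa)   (stand-in for f)
      du/dt = -V_sx - (|V_x|/sigma_k) * u          (driven-damper form)
      I  dOmega/dt = T_drive - r_e * F_x
      m  dV_x/dt   = F_x - m g sin(beta)
    """
    omega, V_x, u = 0.0, 0.1, 0.0   # small initial V_x avoids 0/0 slip
    F_x = 0.0
    for _ in range(steps):
        V_sx = V_x - r_e * omega                      # wheel slip velocity
        kappa = u / sigma_k
        F_x = mu_peak * F_z * math.tanh(k_slip * kappa)
        du = -V_sx - (abs(V_x) / sigma_k) * u
        domega = (T_drive - r_e * F_x) / I
        dV_x = (F_x - m * g * math.sin(beta)) / m
        u += du * dt
        omega += domega * dt
        V_x += dV_x * dt
    return omega, V_x, F_x
```

In steady state u settles at −σ_κ·V_sx/|V_x|, so the contact slip κ′ approaches the wheel slip κ and F_x approaches T_drive/r_e for a steadily driven wheel, consistent with the damper behavior described above.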

Modeling and Simulation of a Horizontal Axis Wind Turbine Using S4WT


Modeling and Simulation of a Horizontal Axis Wind Turbine Using S4WT

Sanem Evren, Mustafa Unel, Omer K. Adak, Kemalettin Erbatur, Mahmut F. Aksit
Mechatronics Engineering, Faculty of Engineering and Natural Sciences, Sabanci University

Abstract—In this paper, modeling and simulation of a 500 kW prototype wind turbine that is being developed in the context of the MILRES (National Wind Energy Systems) Project in Turkey are presented. This prototype wind turbine has a nominal power of 500 kW at a nominal wind speed of around 11 m/s. Aerodynamic, mechanical, electrical and control models are built in the S4WT (Samcef for Wind Turbines) environment. The Kaimal turbulence model has been used to generate realistic wind profiles in TurbSim that can be integrated with S4WT. The standard components (tower, bedplate, rotor, rotor shaft, gearbox, generator and coupling shaft) consisting of Samcef elements (bush, hinge, beam) have been used, compatible with IEC 61400-1, to perform the simulations in S4WT. The pitch and torque controllers are used to achieve the ideal power curve. A pitch function and a PI controller with gain scheduling have been used to control the pitch angle of the blades to limit the power in the full load operating region. The generator torque, which is computed with an optimal mode gain method, is used to control the power in both the partial and full load operating regions. The performance analysis of the 500 kW wind turbine prototype is done under different scenarios, including power production, start up, emergency stop, shut down and parked, and the simulation results are presented.

I. INTRODUCTION

Electrical energy is the most consumed form of energy throughout the world. This is supported by the fact that electricity consumption growth is almost double the total energy demand growth of the world and is projected to grow 76% from 2007 to 2030 (growing at an average of 2.5% per year, from 16,429 TWh to 28,930 TWh). Increased demand is most dramatic in Asia, averaging 4.7% per year to 2030. Wind energy is a renewable and sustainable kind of energy that is
becoming increasingly important in the last decades. The technologies converting wind energy into usable forms of electricity are developed as alternatives to traditional power plants that rely on fossil fuels. According to the 2011 half-year report of the World Wind Energy Association (WWEA), the world market for wind energy saw a sound revival in the first half of 2011 and regained momentum after a weak year in 2010. The worldwide wind capacity reached 215,000 MW by the end of June 2011, out of which 18,405 MW were added in the first six months of 2011. This added capacity is 15% higher than in the first half of 2010, when only 16,000 MW were added [1].

A wind turbine system consists of aerodynamic, mechanical and electrical models. The calculations of the aerodynamic power and torque which are extracted from horizontal axis wind turbines are presented in [2]. The power coefficient C_p is the most important term for power generation. It depends on the tip speed ratio and the pitch angle. The power coefficient curve is given in [3] and [4]. The aerodynamic torque is the input to the mechanical system. The mechanical equations of the two-mass and one-mass wind turbine models are detailed in [5] and [6]. The electrical model of the wind turbine can be designed in three different ways using different generators, such as the squirrel cage induction generator (SCIG), the doubly fed induction generator (DFIG) and the permanent magnet synchronous generator (PMSG). The structural and operational differences between them are given in [7] and [8].

In the literature, mechanical, electrical and aerodynamic models have been designed and simulated in different environments: Matlab/Simulink, dSpace, etc. The aim of this paper is to construct these models with FEM models and to perform analysis under different scenarios. SAMCEF for Wind Turbines (S4WT) is a perfect tool to achieve this goal.
It provides engineers with easy access to the detailed linear and nonlinear analysis of all relevant wind turbine components. In order to understand how the finite element modeling and analysis of the wind turbine models are used in S4WT, Samcef elements are introduced. A Samcef element is a model of a mechanical device, such as a beam, a hinge or a gear, that is used in the connections between the various components of the wind turbine. The term "element" in this context should not be confused with either finite elements or structural elements. The Samcef elements used in the prototype wind turbine design are beam elements, bushing elements and hinge elements. The details about these Samcef elements are given in [9].

S4WT is based on the SAMTECH general tools CAESAM, SAMCEF Field and SAMCEF Mecano. CAESAM is a general framework able to integrate models and computation tools in a user-friendly environment. Components are defined in a modular and parameterized way. Transient, modal and fatigue analyses have been done to cover most of the needs concerning the wind turbine dynamics. SAMCEF Field is the standard graphical pre-processor program of SAMCEF. It has been used to build the various components of the wind turbine. SAMCEF Mecano is the nonlinear solver of SAMCEF, which is the kernel of all dynamic analyses on wind turbines.

The organization of this paper is as follows: In Section II, the aerodynamic, mechanical and electrical models of the prototype wind turbine are constructed in S4WT using Samcef elements. In Section III, the pitch and torque controllers are designed in S4WT. Section IV describes the different scenarios of the prototype wind turbine. Section V presents simulation results of the prototype wind turbine under different scenarios. Finally, Section VI concludes the paper with some remarks and indicates possible future directions.

II. WIND TURBINE MODELING IN S4WT

The basic goal of S4WT¹ is to construct a model of a wind turbine from basic components, to import engineering parameters into the model and then to analyze the model with
these parameter values. Before analyzing a wind turbine, an initialization process must be carried out. The initialization process consists of designing and assembling the various components of the wind turbine model. As shown in Figure 1, the wind turbine model consists of a segmented tower, bedplate, gearbox, rotor, rotor shaft, coupling shaft and generator.

Fig. 1: Components of the Wind Turbine [10]

A. Tower

The parametric tower consists of a series of flanged segments modeled as beams. It is made of steel S235. Young's modulus is assumed to be 210e3 N/mm². The geometry of the tower segments and the top flange is given in Figure 2 and Figure 3.

Fig. 2: Dimensions of the flanged segments [10]
Fig. 3: Dimensions of the top flange [10]

The tower of the prototype wind turbine is 63.5 m long. The top flange has an internal diameter of 2.076 m and a thickness of 0.06 m. The tower has 3 segments, ordered from top (0) to bottom (2). Segment (0) has a length of 18.7 m; segment (1) and segment (2) are 22.4 m long. The diameters of each segment are:

Segment (0): upper external 2.1 m, upper internal 2.076 m; lower external 2.541 m, lower internal 2.517 m
Segment (1): upper external 2.541 m, upper internal 2.511 m; lower external 3.07 m, lower internal 3.04 m
Segment (2): upper external 3.07 m, upper internal 3.03 m; lower external 3.6 m, lower internal 3.56 m

¹S4WT is a trademark of SAMTECH.

B. Bedplate

The bedplate is modeled as a set of beams. It supports the hub, the gearbox and the generator. There is one main bearing in the bedplate to support the rotor shaft. To define the dimensions between the bedplate and the other turbine components (hub, gearbox, generator), the following levels are defined:

∙ Rotor Axis Level: the axis of the rotor, 0 mm
∙ Tower Yaw Level: the centre of the top of the tower (yaw mechanism), -1.396e3 mm
∙ Yokes Level: the level of the axis of the arms in the yokes, 0 mm
∙ Generator Support Level: the level of the generator support, -550 mm

Fig. 4: Bedplate Dimensions [10]

C. Gearbox

The gearbox consists of two planetary stages and one helical stage, shown in Figure 5. Both planetary stage 1 and stage 2 have three planet gears around the sun gear. The teeth numbers of all
stages are presented in Tables I-II. The prototype gearbox has a reduction ratio of 33.5.

Fig. 5: Gearbox

TABLE I: Teeth numbers of both planetary stages
Planetary: Stage 1, Stage 2
Number of teeth on the sun: 36, 46
Number of teeth on a planet: 60, 70
Number of teeth on the fixed wheel: 91, 121

TABLE II: Teeth numbers of the helical stage
Helical Stage: input wheel 79 teeth, output wheel 29 teeth

D. Blades

It is assumed that three rotor blades are used and that the blades are identical. Each blade is 21.5 m long and the rotor diameter is 45 m. Each blade has 15 sections, with the aerodynamic data given in Figure 6.

Fig. 6: Aerodynamic Properties of Each Blade Section

E. Rotor Shaft and Coupling Shaft

The components of the rotor shaft are the hub, one main bearing and the shaft itself. In this design, the main bearing is regarded as being on the rotor side. The coupling shaft extends from the gearbox to the generator. It consists of the brake, the slip coupling, the elastic coupling and the shaft itself. The rotating shaft is modeled as a beam element, the brake is modeled as a hinge element and the elastic coupling is modeled as a bushing element.

F. Generator

The dynamical behavior of the generator is represented as a one-dimensional, linear time-invariant (LTI) system with bounded output. The doubly fed induction generator (DFIG) is designed in S4WT using the components of rotor, stator, bearings and generator support bushings. DFIGs use power converters with a rating of only about one third of the nominal power of the generator. Also, they always work in generator mode both above and below the synchronous speed. The mathematical model of the DFIG is presented in [11] and [12]. The mechanical power which is gained from the wind is reduced by the losses in the generator. The generator efficiency is 90% for the worst-case turbine scenario.

III. WIND TURBINE CONTROL IN S4WT

The capacity of wind turbines is related to the maximum power captured from the wind. The ideal power curve shows the optimum energy gathered from the wind
depending on the wind speed. A typical power curve for a wind turbine is given in Figure 7. The ideal power curve has two operating regions depending on the wind speed:

∙ Partial load operating region: the operating region with wind speeds below the nominal value
∙ Full load operating region: the operating region with wind speeds above the nominal value

Fig. 7: Ideal Power Curve

In the partial load operating region, the pitch angle is kept constant and the generator torque is controlled to operate the turbine at the maximum power coefficient, C_pmax. The tip speed ratio is held constant at its optimal value to maintain C_pmax. Therefore, the wind turbine is operated to gain maximum power between the V_cutin and V_n wind speeds. In the full load operating region, this maximum energy is limited to its nominal value, P_n, between the V_n and V_cutoff wind speeds in order to protect the turbine from excessive loads. The rotor angular speed is fixed to the nominal speed. The aerodynamic torque is limited by changing the pitch angle. The pitch angle can be increased or decreased. If the pitch angle is increased, the technique known as pitch-to-feather is implemented. If the pitch angle is decreased, the active stall technique is implemented [13]. The pitch controller is slower than the torque controller; therefore, the generator torque control is also used together with the pitch control for limiting the power to its nominal value in the full load operating region.

The operating principle of the pitch angle and generator torque controllers in S4WT is shown in Figure 8. The input of the control system is the measured generator speed. The outputs of the control system are the collective pitch angle and the demanded generator torque.

Fig. 8: Torque and Pitch Controllers

A. Pitch Control

In S4WT, the pitch controller consists of:
∙ a pitch function
∙ a PI controller
∙ gain scheduling

The pitch function controls the pitch angle depending on the turbine rotor speed, as shown in Figure 9.

Fig. 9: Pitch Function

A PI controller is used together with the pitch function
curve. The PI controller uses the generator speed as input, not the turbine rotor speed. The error variable for the PI feedback controller is given by

e = max(ω_g − ω_nom, 0)    (1)

where ω_g is the (filtered) measured generator speed and ω_nom is the nominal generator speed. There is a PI starting time parameter, which is a threshold time for distinguishing how to control the pitch angle. Before this time is reached, the pitch function is used; after that, the PI algorithm is used.

The pitch control can be designed with gain scheduling. This means that the already defined PI gains K_P, K_I change when they are multiplied by a weight function which depends on the instantaneous blade pitch. The weight function is given in Figure 10.

Fig. 10: Gain Factor

The pitch behavior of the wind turbine can be adjusted by means of characteristic actuator limits in S4WT. These impose restrictions that affect both the allowed pitch speed and the pitch acceleration values. The effective parameters are:

∙ Pitch Speed Limit: limits the achievable pitch speed to 1.24140856 rpm.
∙ Pitch Acceleration Limit: limits the achievable pitch acceleration to 1 rad/s².
∙ Pitch Speed Reduction Threshold: the speed limit for the pitch actuator is decreased when the difference between the demanded pitch angle and the measured pitch angle is lower than this threshold, 100%.

B. Generator Torque Control

The optimal mode gain for the generator, K_optimal, is a constant parameter needed to define the demanded generator torque T_demand. If chosen appropriately, it ensures that the wind turbine achieves the condition of optimum tip speed ratio (TSR). When the optimal mode gain is used, the demanded generator torque is given by:

T_demand = K_optimal · ω_g²    (2)

In Equation (2), ω_g is the measured generator speed. In this method, the maximum and minimum generator torque parameters are used to impose upper and lower limits on the torque the generator can provide. The maximum torque is assumed to be 6370 N·m and K_optimal is 0.57.

IV. S4WT SCENARIOS

The following scenarios can be implemented in S4WT:

∙ Power Production
∙ Start Up
∙ Emergency Stop
∙ Shut Down
∙ Parked

Power production scenarios provide transient analysis for both partial and full load operating regions.

Power generation is not possible in the start up scenarios because the efficiency at the start up wind speed is very low. The wind turbine begins to transmit voltage to the generator as the wind speed reaches the cut-in wind speed, not the start up wind speed. Therefore, these speeds should not be confused with each other.

The emergency stop scenarios are situations in which the grid connection is lost. Grid loss occurs due to technical faults in the transmission cable and environmental events such as lightning strikes. Whenever grid loss occurs, the wind turbine operation is stopped.

Shut down scenarios are under manual control, and the generator operation is fully stopped. The turbine blades no longer spin. These scenarios are required when the wind speed exceeds the cut-off wind speed, so that the turbine is protected from excessive loads.

Parked scenarios are scenarios in which the blades are locked at a special parked angle whenever the generated power starts to exceed the demanded power. Thus, the power delivered to the grid will not be higher than the desired value.

V. SIMULATIONS

In order to simulate the prototype wind turbine in S4WT, we need to generate realistic wind profiles. To this end, TurbSim, a stochastic, full-field, turbulent-wind simulator, is integrated with S4WT. It uses a statistical model to numerically simulate time series of three-component wind-speed vectors at points in a two-dimensional vertical rectangular grid that is fixed in space. The Kaimal spectrum is used as the wind model to simulate the prototype 500 kW wind turbine. The spectrum model is given in [14].

Different scenarios are simulated in S4WT. The first simulations are done in both partial and full load operating regions under the power production scenario. The input wind
speed is 7 m/s in the partial load operating region and 11 m/s in the full load operating region, as shown in Figure 11. The mechanical power that is extracted from the wind is around 150 kW in the partial load operating region. However, the generated power is around 130 kW. It is smaller than the mechanical power because the generator has 90% efficiency. The mechanical power is around 550 kW and the generated power is around the nominal value in the full load operating region, since the wind speed increases to the nominal value. The pitch angle is increased by the PI controller when the rotor speed exceeds its nominal value, as shown in Figure 12. Thus, the generated power is limited.

Fig. 11: Wind speed profiles and results of the power production scenario
Fig. 12: Results of the power production scenario

The input wind speed of the start-up scenario is 1 m/s, as shown in Figure 13. This speed is smaller than the cut-in wind speed, 3 m/s. The start-up scenario starts at the 15th second and its duration is 25 seconds. The prototype turbine must not generate electrical power at the 15th second because the input wind speed is below the cut-in wind speed, as previously mentioned. It must be noted that the pitch angle was set to 90° until the beginning time of the start-up scenario. Thus, mechanical power extraction from the wind is prevented. As a result, the generated power is zero at the beginning of the scenario, since the pitch angle was set to 90°.

Fig. 13: Wind speed profile and the results of the start up scenario

The input wind speed exceeds the cut-off wind speed, 23 m/s, under the normal shut down scenario, as depicted in Figure 14. In this case, the turbine operation must be stopped to protect it from excessive loads. When the pitch angle is limited to 90°, the generated power and the rotor speed decrease to zero.

Fig. 14: Wind speed profile for normal shut down scenario
Fig. 15: Results of the normal shut down scenario

The input wind speed of the emergency scenario is presented in Figure 16. Grid loss occurs at the 15th
second. The generated power drops to zero because the generator is immediately disconnected from the turbine, as shown in Figure 17. However, there is a peak in the mechanical power because the rotor shaft still turns. In order to decrease the rotor shaft speed to zero, the pitch angle is increased to the target value of 90°. Thus, mechanical power is no longer gained.

Fig. 16: Wind speed profile for emergency scenario
Fig. 17: Results of the emergency scenario

The input wind speed of the parked scenario is depicted in Figure 18. The blades are parked at 90° by the pitch controller to decrease the generated power. The generated electrical power reduces from 1800 W to zero, as shown in Figure 19.

Fig. 18: Wind speed profile for parked scenario
Fig. 19: Results of the parked scenario

VI. CONCLUSIONS AND FUTURE WORK

We have presented modeling and simulation of a prototype wind turbine in the S4WT environment. The parameters of the prototype wind turbine components (tower, bedplate, gearbox, blades, rotor shaft, coupling shaft and generator) are used in simulations. Realistic wind profiles are created by using the Kaimal spectrum in TurbSim. The pitch controller is designed such that it consists of the pitch function and the PI controller with gain scheduling. The PI starting time parameter is used for distinguishing how to control the pitch angle. The torque controller ensures that the wind turbine achieves the condition of optimum tip speed ratio (TSR) by using the optimal mode gain parameter, K_optimal. Power production, start-up, shut down, emergency and parked scenarios are simulated with different wind speeds of the Kaimal spectrum. The results are quite successful. As future work, modal and fatigue analyses will be done under different turbine scenarios. The other IEC wind models, including Extreme Coherent Gust, Extreme Direction Change, Extreme Coherent Gust with Direction Change, Extreme Operating Gust and Extreme Wind Shear, will be used as inputs for these scenarios.

VII. ACKNOWLEDGMENT

The authors would like to
acknowledge the support provided by TUBITAK (the Scientific and Technological Research Council of Turkey) through Grant 110G010.
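As an illustration of the control laws summarized in the conclusions, the sketch below implements an optimal-mode-gain torque law, T = K_optimal * omega^2 (using a common formulation of K_optimal), together with a simple PI pitch limiter. All parameter values, gains and function names here are illustrative assumptions, not the prototype turbine's.

```python
import math

# Illustrative parameters only, NOT the paper's prototype values.
RHO = 1.225          # air density [kg/m^3]
R = 10.0             # rotor radius [m]
CP_MAX = 0.45        # assumed maximum power coefficient
LAMBDA_OPT = 7.0     # assumed optimum tip speed ratio

# Optimal mode gain: in the partial load region the generator torque
# reference T = K_OPTIMAL * omega^2 drives the rotor toward the
# optimum tip speed ratio.
K_OPTIMAL = 0.5 * RHO * math.pi * R**5 * CP_MAX / LAMBDA_OPT**3

def torque_reference(omega):
    """Generator torque reference [Nm] for rotor speed omega [rad/s]."""
    return K_OPTIMAL * omega**2

def pitch_pi_step(omega, omega_nom, state, kp=2.0, ki=0.5, dt=0.01):
    """One step of a PI pitch controller (gains are illustrative).
    Pitch stays at 0 deg below nominal speed; above it, the PI loop
    increases the pitch angle to limit the captured power."""
    err = omega - omega_nom
    state["integral"] = max(0.0, state["integral"] + ki * err * dt)
    pitch = kp * max(0.0, err) + state["integral"]
    return min(90.0, max(0.0, pitch))  # pitch limited to [0, 90] degrees
```

Below nominal rotor speed the pitch command stays at 0°, so the torque law alone governs power capture; above nominal speed the pitch angle rises toward its 90° limit, mirroring the limiting behaviour described for the power production scenario.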

AIDA II Progress Report Towards a Co-simulation Environment for Computer Control Systems


AIDA II Progress Report: Towards a Co-simulation Environment for Computer Control Systems

Jad El-khoury
KTH, Department of Machine Design, Mechatronics Lab, 100 44 Stockholm
jad@md.kth.se

ABSTRACT

Modern machinery, such as automobiles, trains and aircraft, are equipped with embedded distributed computer control systems in which the software-implemented functionality is steadily increasing. The development of such systems, however, is still a relatively new and young discipline. There is consequently a lack of tools, methods and models to support, in particular, the early architectural design stages.

Closely associated with the design of a distributed control system are design decisions on the overall system structure, function triggering and synchronization, and policies for scheduling, communication and error handling. These parameters to a large extent determine the resulting system timing and dependability, and hence impact the control application's performance and robustness.

The primary target of the research described here is the development of executable models to support architectural design. The report describes earlier efforts in this direction, including case studies in the DICOSMOS and AIDA projects. Experiences from the studies are discussed, including aspects of modelling to support interdisciplinary design, modelling abstraction and accuracy, and tool implementation.

A state-of-the-art survey of related modelling efforts reveals that very little appears to have been done in terms of modelling and simulation that involves simulating the environment together with the computer control system. In addition, the treatment of distributed computer systems, redundant systems, and error scenarios in this context appears to be scarce.

The possibilities for extending the simulation models to arrive at a ready-to-use library to support the design of distributed control systems are discussed. This work is already under way.
Remaining work includes verifying the developed simulation models and evaluating the models and their usage in case studies. One related topic for further work is integrating the simulation models with the AIDA modelling framework. A longer-term aim is that the models and simulation features should form part of a larger toolset supporting earlier and later development stages as well, thus providing more comprehensive analysis and synthesis support.

1. INTRODUCTION

This report is a summary of an internal report developed at the Mechatronics Lab in order to identify the short- and long-term research work to be done in the AIDA II project. Please refer to (Törngren, 2001) for more detailed information.

1.1. Background: embedded control systems, trends and challenges

Machinery such as automobiles, trains and aircraft is increasingly being equipped with embedded control systems that are based on distributed computer systems. In these systems the functionality is steadily increasing due to the possibilities enabled by software and by networks for information exchange. The computer system is typically composed of a number of networks that connect the different nodes of the system. A node typically includes sensor and actuator interfaces, a microcontroller, and a communication interface to the broadcast bus. From a control-function perspective, such machines can be said to be controlled by a hierarchical system, where subfunctions interact with each other and through the vehicle dynamics to provide the desired control performance.

However, a number of challenges face the developers of such machinery before they can really benefit from these technology advances:

• The need to cope with the dramatic increase in system (software) complexity.
Apart from the sheer amount of new functionality, complexity also arises from different types of interference between functions, partly arising from their use of shared computer system resources.

• Managing and exploiting multidisciplinarity. The development of embedded control systems requires knowledge in, and cooperation between, several different scientific and engineering disciplines.

• Verification and validation of dependability requirements. The main challenge here is to develop mechatronic (software-based) systems that are safer and more reliable than their mechanical counterparts. Despite its advantages, software easily becomes very complex and is difficult to verify and validate. In addition, the distributed computer systems in which the software is implemented add "new" failure modes.

1.2. Modelling and simulation in the context of architectural design

The development of embedded computer control systems is still a relatively new and young discipline. Consequently, there is a lack of tools, methods and models to support, in particular, the early so-called architectural design stages. The primary target of the research described here is the development of models to support architectural design.
In the context of designing a distributed computer control system, architectural design is here used to refer to decisions on:

• the overall system structure, including the functional, software and computer structure (this includes deciding on allocation and partitioning);
• function triggering, synchronization, and policies for scheduling all resources;
• communication principles;
• policies for error handling, and the mechanisms used for error detection.

Choices of these "design parameters" to a large extent determine the resulting system reliability, safety and timing, and impact the control application's performance and robustness. The basic idea is to develop models that can be used to analyse different architectural proposals; simulation is the analysis approach taken in this work. In a longer-term perspective, the aim is that the models and simulation features should form part of a larger toolset being developed at the Mechatronics Lab: a toolset that supports the design of embedded control systems in order to meet the main identified challenges of complexity, multidisciplinarity and dependability.

Some of the requirements on the models are as follows (the reader is referred to Törngren (1995) and Redell (1998) for more detail and background):

• The models should allow both the functionality to be implemented in a computer system and the computer system itself to be represented.
• The models should allow co-simulation of the functionality, as implemented in a computer system, together with the controlled continuous-time processes and the behaviour of the computer system.
• The models should support interdisciplinary design, taking into account the different supporting methods, modelling views, abstractions and accuracy required by control, system and computer engineers.
• The models should also be useful for a descriptive framework, visualising different aspects of the system, as well as for other types of analysis such as scheduling analysis.

1.3.
Why model-based architectural design?

By model-based design we primarily refer to models that are sufficiently formal to allow analysis to be carried out. Early architectural analysis allows the solution space to be explored and, at the same time, provides a better opportunity for the early detection of erroneous requirements and design bugs. Clearly, such mistakes are very costly if detected late. Compared to traditional testing, the approach allows different failure modes to be approached and analysed one at a time, progressing the verification through the development in an iterative and incremental fashion.

A strong advantage of model-based design used in the context of early analysis is the ability to evaluate alternative architectures with very few restrictions. Compared to traditional integration testing, or hardware-in-the-loop simulation where the complete environment of a node is simulated in real time, model-based design and analysis provides additional advantages, including the ability to change the design easily and at low cost, and the possibility to instrument and analyse the internals of the distributed computer system. This is not to say that model-based design can replace the others, but the tasks of, for example, the system integrator can be made much easier given that some aspects of the system, such as its timing behaviour, have been analysed thoroughly earlier. A strong point of the modelling activity is that it is a very useful process that can reveal a number of issues, such as missing requirements and design bugs.

The investigation of computer system models at different levels of abstraction, with different accuracy and complexity, is an educative process that we hope can provide additional understanding to support interdisciplinary design. The model and tool prototypes are also very important parts of the research because they enable different forms of feedback.

2. ACCOMPLISHMENTS IN MODELLING AND SIMULATION AT THE MECHATRONICS LAB

2.1.
Early work on modelling and simulation in the DICOSMOS project

An early investigation of "timing problems" was carried out in the DICOSMOS project; see Wittenmark et al. (1995), Törngren (1995) and Nilsson (1996). Here the effects of time-varying feedback delays, sampling period jitter and data loss in feedback control systems were investigated. Inspired by earlier work along the lines of Ray (1988), co-simulation was one approach taken in DICOSMOS towards analysing the timing problems. Special Simulink blocks were developed to model time-varying delays, sampling instant jitter, vacant sampling and sample rejection (data loss through overwriting of data); see Törngren (1995). A feedback control system, as modelled in Simulink, could then be instrumented with these blocks to come one step closer to modelling a distributed computer implementation. The approach turned out to be useful for illustrating the effects of the timing problems and for analysing them, e.g. for comparing the effects of sampling period jitter vs. a time-varying delay, or for analysing the sensitivity of a particular control design.

2.2. The AIDA modelling framework

The basis for developing the modelling framework was to determine the information needed in the early stages of design of distributed control systems. The models were targeted towards motion control applications, implying the need to model time- and event-triggered, multirate control systems with different modes of operation. These control systems are to be implemented on distributed heterogeneous hardware with both serial and parallel communication links interconnecting the processing elements. There is also a need to model the various system-specific overheads, scheduling policies, error handling, etc. The derived requirements and the AIDA modelling framework are thoroughly described by Redell (1998) and by Törngren and Redell (2000).

2.3.
Modelling and simulation of the SMART satellite fault-tolerant computer system

During the very early design stages of the distributed computer control system of the SMART satellite, to be launched in 2002 by the European Space Agency, the Swedish Space Corporation needed to analyse and verify the use of the Controller Area Network (CAN) in the on-board satellite computer control system. Of particular concern was the error handling of CAN in conjunction with the chosen design for a fault-tolerant network. This work was partly carried out by the authors of this report and included assessing the design by means of modelling and simulation. The main aim of the simulation was to verify that the given rules for redundancy are not conflicting, and that the system can recover from a single permanent fault. For more information on the simulation models the reader is referred to Törngren and Fredriksson (1999).

2.4. Co-simulation within the DICOSMOS2 project

Sanfridson (1999, 2000) describes a co-simulation of a truck and semi-trailer using a CAN bus for the on-board distributed control system, carried out in order to investigate the performance of control applications and to adjust control periods and message priorities on-line. The simulations were implemented in Matlab/Simulink. The model of the CAN bus is fairly detailed, which gives a realistic timing behaviour of the network communication. The major drawback is the amount of processor time required to carry out the simulation at this level of detail.

3. THE STATE OF THE ART

Various simulation tools have been developed to tackle some of the issues discussed in this report. A small survey has been performed in order to evaluate these tools, with an earlier complementing survey given by Redell (1998). What these tools have in common is that they were developed with the aim of simulating real-time computer systems.
However, each tool is focused on particular aspects of such systems.The following tools were evaluated: (See Törngren, 2001 for further details)•RT/CS Co-design: A Matlab Toolbox for Real-time and Control Systems Co-design •DRTSS: A Simulation Framework for Complex Real-time Systems•STRESS - A Simulator for Hard Real-time Systems•HaRTS: Design and Simulation of Hard Real-time Applications4. SOME ESSENTIAL ISSUES IN MODELLING AND SIMULATION4.1. Interdisciplinary design: modeling purpose, abstraction and accuracyGood cooperation and interfaces between the involved engineers is essential to maintain con-sistency between specifications, design and implementation. When developing a motion control system for a machine, somehow the functions and elements thereof need to be allocated to the nodes. This principally means that an implementation independent functional design needs to be enhanced with new “system” functions that:•Perform communication between parts of the control system.•Perform scheduling of the computer system processors and networks.•Perform additional error detection and handling.This mapping will change the timing behavior of the functions due to effects such as delays and jitter.4.2. Issues related to system development and tool implementationWhile developing distributed control systems, it would be advantageous to have a simulation toolbox or library, in which the user can build the system based on prebuilt modules to define things such as the network protocols and the scheduling algorithms. With such a tool, the user can focus on the application details instead. Such a tool is possible since components like dif-ferent types of schedulers and CAN network are well defined and standardised across applica-tions. Such a tool will enforce a boundary between the application and the rest of the system. 
This will speed up the development process and give developers extra flexibility in developing the application. Also, to be useful and cost-effective for system development, the usage of the models and simulation facilities needs to be integrated into the development process.

5. FUTURE WORK

Investigating and extending the AIDA modelling framework. Some further ideas for developing the models are as follows:

• The developed simulation models should be integrated with the AIDA modelling framework. Ideally, the same basic models should be useful for both static analysis and simulation.
• The simulation models do describe both system structure and behaviour, but they are not always very descriptive, since they sometimes lack graphical views. Some proposals in this direction have been made in the AIDA models (Redell, 1998).
• The semantics of pure simulation models (such as those in Simulink) inherently differ from the timing behaviour of real-time execution (Törngren et al., 1997). How can this semantic gap be bridged?
• The simulation models and the AIDA framework need to be extended both upwards, to models used in earlier design stages, and downwards, towards the implementation. The motivation is the need for a holistic design framework that enables models (and work accomplished) to be reused efficiently. It would thus be desirable to refine the models so that they can be used beyond early design. In addition, work carried out in configuring a computer node, for example, should be reusable in later prototyping and implementation stages.
• The use of the Unified Modelling Language (UML, 1997) and CODARTS models (Gomaa, 1993, a development of structured-analysis models) in conjunction with the modelling framework will be investigated. Key aspects here include assessing when and why to use object models vs.
functional (structured-analysis) models, and how they map to other entities such as tasks.

Developing a toolset for architectural analysis. This topic builds upon the ideas of the AIDA project: to develop a toolset that provides comprehensive analysis and modelling support for embedded control systems. The following are some ideas for the implementation of a prototype toolset for research purposes:

• The envisioned toolset concept is based on a number of well-established engineering tools that are complemented and extended with additional functionality, including further models to enable important analysis and synthesis to be carried out, as well as the necessary interfaces between tools. There are two strong reasons for this structure: given limited research resources, energy should not be spent on reinventing the wheel, and using available tools will make it easier to demonstrate and apply the toolset concept in a realistic setting. It will also help pinpoint the opportunities provided by the complementing tools. As an example, there are tools available that can deal with modelling and analysis of reliability, safety, control system design, finite state machines, timing analysis, etc., but these tools are not integrated; and even if they were, they could not provide the functionality required to appropriately support the design of distributed real-time control systems.

• The functionalities considered include support for overall system structuring in early design stages (where reliability, safety, resource load and costs are evaluated), hazard and safety analysis integrated with control system design, and control design complemented with support for distributed real-time system implementation, in which timing analysis is one important part.

• Model reuse must be enabled by the envisioned toolset.
Reuse can come in different forms. One aspect is being able to use a developed model throughout the life cycle of a product in order to support product maintenance, including upgrades. As discussed earlier, this requires models to be refined during development. Having developed a simulatable model, it would be valuable to reuse this information, for example the computer system configuration, in further prototyping and development work.

Many interesting issues exist and will arise in the development of this type of toolset, including how to manage the different types of models and how to use the toolset facilities in system development. The toolset should be complemented by an appropriate design methodology. One important piece of this methodology should be a verification framework that promotes the development of dependable embedded systems. Extending the functionality of the toolset also requires the models to be extended (as partly discussed in the previous section). For example, the AIDA models currently do not include explicit fault models or attributes such as criticality.

6. REFERENCES

Gomaa (1993). Hassan Gomaa. Software Design Methods for Concurrent and Real-Time Systems. Addison-Wesley, 1993.

Nilsson (1996). Johan Nilsson. Real-Time Control Systems with Delays. PhD thesis, ISRN LUTFD2/TFRT--1049--SE, Lund Institute of Technology, Sweden.

Ray and Halevi (1988). Asok Ray and Y. Halevi. Integrated Communication and Control Systems: Part II - Design Considerations. ASME Journal of Dynamic Systems, Measurement and Control, Vol. 110, Dec. 1988, pp. 374-381.

Redell (1998). Ola Redell. Modelling of Distributed Real-Time Control Systems: An Approach for Design and Early Analysis. Licentiate thesis, Department of Machine Design, KTH, TRITA-MMK 1998:9, ISSN 1400-1179, ISRN KTH/MMK--98/9--SE, Stockholm, Sweden.

Sanfridson (1999). Martin Sanfridson. QoS in Distributed Control of Safety-Critical Motion Systems.
Work-in-progress paper at the 20th Real-Time Systems Symposium, Phoenix, December 1999.

Sanfridson (2000). Martin Sanfridson. Timing Problems in Distributed Control. Licentiate thesis, TRITA-MMK 2000:14, ISSN 1400-1179, ISRN KTH/MMK--00/14--SE, May 2000.

Törngren (1995). Martin Törngren. Modelling and Design of Distributed Real-Time Control Applications. Doctoral thesis, Department of Machine Design, KTH, TRITA-MMK 1995:7, ISSN 1400-1179, ISRN KTH/MMK--95/7--SE.

Törngren and Fredriksson (1999). Martin Törngren and Peter Fredriksson. SMART-1: CAN and Redundancy Logic Simulation of the SMART SU. Swedish Space Corporation, Report S80-1-SRAPP-1.

Törngren and Redell (2000). Martin Törngren and Ola Redell. A modelling framework to support the design and analysis of distributed real-time control systems. Journal of Microprocessors and Microsystems 24 (2000) 81-93, Elsevier; special issue based on selected papers from the Mechatronics 98 proceedings.

Törngren (2001). Martin Törngren, Jad El-khoury, Martin Sanfridson and Ola Redell. Modelling and Simulation of Embedded Computer Control Systems: Problem Formulation. Internal report, Department of Machine Design, KTH, TRITA-MMK 2001:3, ISSN 1400-1179, ISRN KTH/MMK--01/3--SE.

Törngren et al. (1997). Martin Törngren, Christer Eriksson and Kristian Sandström. Real-time issues in the design and implementation of multirate sampled data systems. In Preprints of SNART 97 - Swedish National Association on Real-Time Systems Conference, Lund, 21-22 August 1997.

UML (1997). UML Notation Guide, Version 1.1, Sept. 1997. Object Management Group, doc. no. ad/97-08-05. /uml

Wittenmark et al. (1995). Björn Wittenmark, Johan Nilsson and Martin Törngren. Timing Problems in Real-Time Control Systems. Proceedings of the 1995 American Control Conference, Seattle, WA, USA.
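As a closing toy illustration of the timing effects studied in the DICOSMOS work (section 2.1), the following Python sketch compares a discrete-time proportional control loop with and without a randomly occurring one-sample feedback delay. The plant, gains and cost measure are invented for illustration and are not the DICOSMOS models.

```python
import random

def closed_loop_cost(p_delay, steps=500, a=1.1, b=1.0, k_gain=0.6, seed=1):
    """Accumulated squared state of x[k+1] = a*x[k] + b*u[k], where the
    feedback sample is one period old with probability p_delay,
    a crude model of a time-varying feedback delay."""
    rng = random.Random(seed)
    x, x_prev, cost = 1.0, 1.0, 0.0
    for _ in range(steps):
        meas = x_prev if rng.random() < p_delay else x   # stale or fresh sample
        u = -k_gain * meas                               # proportional control
        x_prev, x = x, a * x + b * u                     # plant update
        cost += x * x
    return cost
```

With these gains the undelayed loop decays quickly, while frequent one-sample delays slow the decay and raise the accumulated cost, the same qualitative degradation the DICOSMOS blocks were built to expose.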



Modeling and Simulating an All-Digital Phase Locked Loop

By Russell Mohn, Epoch Microelectronics Inc.

Implementing a PLL design on silicon can consume months of development time and hundreds of thousands of dollars in fabrication costs. To minimize these costs, engineers need a way to predict whether the design will meet specifications before implementing it on silicon.

Simulation is an obvious solution, but it, too, can be difficult because of the vastly different time scales involved in PLL design. Phase lock time is usually measured in hundreds of microseconds, while femtosecond resolution is required to evaluate phase noise. It can take days to weeks of computing time to run a circuit-level simulation that spans the few milliseconds necessary to capture a PLL locking, and multiple simulations are required to fully evaluate a design.

At Epoch Microelectronics, we use MATLAB and Simulink to ensure that our all-digital PLL (ADPLL) design meets the specification before committing to hardware. We create analytical and behavioral models of the ADPLL design in two domains: we start with an analytical model in MATLAB and then build phase-domain and time-domain models in Simulink, into which we introduce imperfections such as nonlinearities and noise. Simulations using these models are easier to get off the ground and more reconfigurable than Verilog simulations. Simulink behavioral simulation is much faster than circuit-level simulation; as a result, we can complete many simulations in one day, experimenting with different implementation ideas for the functional blocks. The behavioral simulations are instrumental in determining the block-level specifications that will satisfy a given set of top-level PLL specifications.
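To illustrate the kind of behavioral model involved, the following Python sketch iterates a minimal ADPLL loop (phase detector, PI digital loop filter, DCO and divider) at the reference rate. All gains, frequencies and the function name are illustrative assumptions, not values from the design described in this article.

```python
def simulate_adpll(f_ref=1.0e6, n_div=64, steps=2000,
                   kp=0.5, ki=0.05, f0=60.0e6, kdco=1.0e6):
    """Behavioral ADPLL loop sketch; in lock the divided output phase
    tracks the reference phase, so f_out approaches n_div * f_ref."""
    phi_ref = phi_div = 0.0
    integ = 0.0
    f_out = f0
    for _ in range(steps):
        err = phi_ref - phi_div             # phase detector output
        integ += ki * err                   # integral path of the loop filter
        ctrl = kp * err + integ             # PI loop filter output
        f_out = f0 + kdco * ctrl            # DCO: control word sets frequency
        phi_ref += 1.0                      # one reference cycle per step
        phi_div += f_out / (n_div * f_ref)  # divided output phase advance
    return f_out
```

With the illustrative gains above, the loop settles near the 64 MHz target within the simulated 2000 reference cycles, the kind of lock-time question a behavioral model answers in seconds rather than the days a circuit-level simulation would take.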
We combine and analyze simulation results from these models to build confidence in our design before investing the resources required to implement it.

ADPLLs: Advantages and Design Challenges

Used to synchronize the phase of two signals, the phase-locked loop (PLL) is employed in a wide array of electronics, including microprocessors and communications devices such as radios, televisions, and mobile phones. A PLL consists of a phase detector, a low-pass filter, a variable-frequency oscillator, and a divider (Figure 1). Originally composed entirely of analog components, these components have been replaced over time with digital equivalents to produce an all-digital PLL (ADPLL).

Figure 1. Block diagram of a PLL.

ADPLLs offer several advantages over analog PLLs, particularly for microprocessor applications. ADPLLs generally have shorter lock times, and they are easier to integrate with digital components on mixed-signal integrated circuits (ICs). They also consume less area on ICs than analog PLLs, reducing die sizes and production costs. As fabrication technologies improve, ADPLLs will continue to shrink, whereas analog PLLs will not.

Designing an ADPLL can be challenging, particularly for engineers who are more familiar with analog circuit design. For example, the designer must use digital signal processing design concepts and deal with practical digital design tradeoffs, such as choosing the number of bits for a signal to avoid overflow or saturation while keeping the design area small.

Creating a Linearized Model in MATLAB

In the linearized, phase-domain analytical model, each component is represented by a MATLAB transfer function. As a result, we can analyze the phase of the signal at each point in the loop. For example, we can estimate the noise at the input to the controlled oscillator and use the model to understand how that noise affects the output. This purely analytical approach lets us prototype ideas and verify functionality.
It also shows us early on which component blocks pose challenges in meeting the overall design specifications.

Building a Phase-Domain Model in Simulink

After analyzing the MATLAB model, we build the equivalent phase-domain model in Simulink, replacing the MATLAB equations with predefined blocks that implement the transfer function for each component (Figure 2). We compare the results of this model with the results of the MATLAB analytical model to verify that they agree and to understand how nonlinearities influence the output.

Figure 2. Simulink phase-domain model.

With the Simulink model, we can easily simulate noise, nonlinearities, and the kinds of effects seen in real devices, such as a mismatch between the up current and the down current in the charge pump. Similarly, the Simulink model shows us how phase noise is affected by spurs generated by a sigma-delta modulator or by large variations in oscillator gain.

Unlike circuit-level simulations, which take days, we can run 10 Simulink simulations in less than two minutes. By averaging the resulting power spectral densities, we obtain a reliable estimate of the PLL design's phase noise. We also use the phase-domain model to analyze tradeoffs for individual components. For example, if we decrease the digital phase frequency detector resolution to reduce current consumption, we can see whether the resulting increase in integrated phase error and phase noise remains within the specifications.

Building a Time-Domain Model

While a phase-domain model can reveal a great deal about how a PLL will perform, it does not provide a complete picture. Time-domain models help fill in vital details, such as whether the PLL locks, and if so, how long it takes. Time-domain models also bring the design closer to the actual implementation, giving designers a more realistic view of how the PLL will perform on hardware.
At the same time, because it is a behavioral model, it can be simulated in much less time than a circuit-level model. Instead of waiting weeks for a result, we can complete a 1.2-millisecond simulation with a 30-picosecond timestep in Simulink in about 15 minutes.

We build each component of the time-domain model separately using basic Simulink blocks (Figure 3). The component models, which start as behavioral descriptions, are increasingly refined as the high-level blocks are replaced with more hardware-accurate functional models.

Figure 3. Simulink time-domain model.

For example, as a behavioral model the digital low-pass filter can be a simple Digital Filter library block with the appropriate coefficients to specify the H(z). As a functional model, the digital low-pass filter consists of D flip-flop, adder, and bit-shift blocks (Figure 4).

Figure 4. Simulink model of a digital low-pass filter. The functional model (top) is more hardware-accurate, while the behavioral model (bottom) enables quick verification of the chosen transfer function.

We develop a floating-point version of the time-domain model to verify basic functionality. To prepare for implementation, we then convert each functional block to a fixed-point representation using Simulink Fixed Point. We compare the results of the fixed-point and floating-point versions to ensure that no errors were introduced during the conversion.

In creating the model for each component, we make sure that we can easily change key parameters, such as the digital phase frequency detector gain, the digital low-pass filter coefficients, the oscillator gain, and the divider's division ratio. Parameterizing these values enables us to run parameter sweeps using a MATLAB script to initiate a series of Simulink simulations.
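The parameter-sweep idea can be sketched in Python as follows; here `simulate` is a stub standing in for launching a Simulink run, and its cost surface, parameter names and grid values are invented for illustration.

```python
import itertools

def simulate(kp, kdco):
    """Stub 'simulation run': returns an invented settling-time-like
    score that happens to be minimized near kp = 0.1, kdco = 1.0."""
    return (kp - 0.1) ** 2 + 0.5 * (kdco - 1.0) ** 2

def sweep(kp_values, kdco_values):
    """Run the stub simulation over every parameter combination and
    return the best-scoring pair."""
    return min(itertools.product(kp_values, kdco_values),
               key=lambda p: simulate(*p))

best_kp, best_kdco = sweep([0.05, 0.1, 0.2], [0.5, 1.0, 2.0])
```

In the real flow each `simulate` call would launch a Simulink simulation and the scoring would happen in the post-processing step described next.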
We then post-process the simulation results in MATLAB to identify the best settings for each parameter based on the PLL's specifications.

The time-domain model is most useful for understanding the transient behavior of the PLL, but it also provides some insight into phase noise performance. We plot the phase noise as a function of frequency for the phase-domain model and the time-domain model to make sure that there is broad agreement over a range of frequencies (Figure 5).

Figure 5. Phase noise vs. frequency for phase-domain and time-domain models.

In the plot shown in Figure 5, the results are aligned for frequencies below 1 MHz. Above that frequency, the time-domain model shows lower phase noise. This is to be expected, as the time-domain model is less accurate at simulating noise and does not model as many real effects as the phase-domain model. The spur in the 100 kHz region of the time-domain results (called out on the graph) does not appear in the phase-domain results, a discrepancy that may indicate a problem in the model or that the design needs further investigation.

While time-domain and phase-domain simulations in Simulink reduce the need for lengthy circuit-level simulations, they do not eliminate it. Circuit-level results will still be required to further investigate a specific area of the design or unexpected results in the behavioral model.

Verifying the Hardware Implementation

After using simulation to verify the design and identify an optimal set of parameters, we're ready to use hardware prototyping on an FPGA to further verify the design before ASIC fabrication. Only blocks that can be implemented on the FPGA are verified in this way; we exclude circuits that run at frequencies above what the FPGA can handle. We implement the design in HDL code using Verilog, and use a Verilog simulator to run test cases that we had previously executed against the fixed-point Simulink model.
We then compare the output of the two simulations bit by bit to verify the HDL implementation. Anything short of a perfect match between the results leads us to reinspect the HDL code to find the source of the discrepancy.

We then synthesize the design to create the FPGA prototype of the block to be tested. During tests in the FPGA environment, we set up probe points on the FPGA and capture the output from various points in the ADPLL. We download this data stream as a vector and import it into MATLAB. We can then verify the FPGA implementation of functional blocks in the design by comparing the captured results against the Simulink simulation results. At that point, we can deliver a verified register-transfer level (RTL) schematic of the ADPLL circuit to our customer, confident that it meets specifications.
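The bit-by-bit comparison amounts to locating the first sample where the HDL simulation and the fixed-point model disagree. A minimal sketch (function name and vectors are hypothetical, standing in for the MATLAB comparison step):

```python
def first_mismatch(hdl_out, model_out):
    """Compare two output vectors sample by sample; return the index of
    the first disagreement, or None if they match bit for bit."""
    if len(hdl_out) != len(model_out):
        return min(len(hdl_out), len(model_out))   # one stream ended early
    for i, (a, b) in enumerate(zip(hdl_out, model_out)):
        if a != b:
            return i          # point the engineer at the failing sample
    return None

# Identical vectors pass; a single corrupted word is located exactly.
golden = [0x1A, 0x2B, 0x3C, 0x4D]
assert first_mismatch(golden, list(golden)) is None
assert first_mismatch(golden, [0x1A, 0x2B, 0x7C, 0x4D]) == 2
```

Returning the failing index, rather than a pass/fail flag, is what makes the "reinspect the HDL" step tractable on long capture vectors.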

System Modeling and Simulation


System modeling and simulation is a process that involves creating models of systems and simulating them to analyze their behavior. It is a crucial aspect of engineering and scientific research as it helps in understanding complex systems and predicting their behavior in different scenarios. In this essay, I will discuss the importance of system modeling and simulation, the different types of models, and the challenges associated with the process.

One of the key benefits of system modeling and simulation is that it allows engineers and scientists to test the behavior of a system in a controlled environment. This helps in identifying potential problems and improving the system's performance. For example, in the automotive industry, engineers use simulation software to test the performance of different car components, such as the engine, brakes, and suspension, before the actual production process. This helps in identifying potential problems and improving the design before the car is manufactured.

Another benefit of system modeling and simulation is that it allows for the optimization of systems. By simulating different scenarios, engineers can identify the best possible configuration for a system that will maximize its performance. For example, in the aerospace industry, engineers use simulation software to optimize the design of aircraft wings, which helps in reducing drag and improving fuel efficiency.

There are different types of models used in system modeling and simulation. One of the most common types is the mathematical model, which uses equations to describe the behavior of a system. Mathematical models are often used in the analysis of physical systems, such as the behavior of fluids or the movement of objects. Another type of model is the physical model, which involves creating a physical representation of a system.
Physical models are often used in the testing of prototypes, such as wind tunnel testing of aircraft.

In addition to mathematical and physical models, there are also computer-based models. These models use computer software to simulate the behavior of a system. Computer-based models are often used in the analysis of complex systems, such as the behavior of large-scale networks or the movement of crowds. Computer-based models are also used in the development of video games and virtual reality simulations.

Despite the benefits of system modeling and simulation, there are also challenges associated with the process. One of the main challenges is the accuracy of the models. Models are often simplifications of complex systems and may not accurately represent the behavior of the system in all scenarios. Engineers and scientists must be aware of the limitations of their models and use them appropriately.

Another challenge is the complexity of the models. As systems become more complex, the models used to simulate them also become more complex. This can make it difficult to analyze the behavior of the system and identify potential problems. Engineers and scientists must have a deep understanding of the system they are modeling and the tools they are using to simulate it.

In conclusion, system modeling and simulation is a crucial aspect of engineering and scientific research. It allows for the testing and optimization of systems, which helps in improving their performance. There are different types of models used in system modeling and simulation, including mathematical, physical, and computer-based models. However, there are also challenges associated with the process, such as the accuracy and complexity of the models. Engineers and scientists must be aware of these challenges and use the appropriate tools and techniques to overcome them.
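A mathematical model of the kind described above can be very small. The sketch below (a generic textbook example, not tied to any industrial tool mentioned here) simulates an undamped mass-spring system m·x'' = -k·x with a semi-implicit Euler integrator and checks that the motion returns to its starting point after one period.

```python
import math

def simulate_spring(k=4.0, m=1.0, x0=1.0, dt=1e-3, t_end=None):
    """Semi-implicit (symplectic) Euler integration of m*x'' = -k*x."""
    w = math.sqrt(k / m)                  # natural frequency (rad/s)
    if t_end is None:
        t_end = 2 * math.pi / w           # simulate one full period
    x, v = x0, 0.0
    for _ in range(int(round(t_end / dt))):
        v += -(k / m) * x * dt            # update velocity first...
        x += v * dt                       # ...then position, for stability
    return x, v

x, v = simulate_spring()
# After one period the mass is back near x = 1 with near-zero velocity.
```

The symplectic update order is a deliberate design choice: plain explicit Euler steadily gains energy on oscillatory systems, while this variant keeps the amplitude bounded, which is exactly the kind of limitation-awareness the essay argues modelers need.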

CIMOC


Introduction to CIMOC

CIMOC (CIty MOdeling and City simulation) is a software tool for city modeling and city simulation.

It helps urban planners, architects, and policymakers understand and analyze the impact of urban planning decisions and predict future trends in a city's development.

CIMOC focuses on key areas of the urban environment such as sustainable development, transportation planning, land-use planning, architectural design, and energy management.

This article introduces CIMOC's main functions, its application scenarios, and its role in urban planning.

Functions

CIMOC is a powerful software tool with the following main functions:

1. City modeling. CIMOC helps users simulate and reconstruct three-dimensional models of a city.

Users can build city models from real-world conditions and data, including buildings, roads, green spaces, and other elements.

These elements can be edited and organized as needed to better present the overall appearance of the city.

2. City simulation. CIMOC can simulate the factors and processes at work within a city, such as traffic volume, energy consumption, and population migration.

By adjusting parameters and input data, users can simulate urban development trends under different scenarios and predict the city's future development.

3. Data analysis. CIMOC provides powerful data analysis capabilities that help users analyze city models and simulation results in depth.

Using the tools and algorithms CIMOC provides, users can visualize and statistically analyze the data to better understand and interpret the effects of planning decisions.

4. Decision support. CIMOC can provide decision support for urban planners and policymakers.

By simulating and analyzing urban development under different plans, users can evaluate the impact of alternative planning decisions on the city and make better-informed choices.

Application Scenarios

CIMOC is suited to the following application scenarios:

1. Urban planning. CIMOC helps urban planners carry out city planning.

By building city models and simulating the effects of different plans, planners can weigh the merits of alternative planning decisions and provide a scientific basis for the city's future development.

2. Architectural design. CIMOC supports architects in building design.

By adding building elements to the city model, architects can simulate and evaluate the effect of different designs and ensure that new buildings harmonize with their surroundings.

3. Transportation planning. CIMOC assists transportation planners with traffic planning.
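The kind of traffic simulation described here can be illustrated with the classic Greenshields macroscopic model (a generic textbook model, offered as a sketch only, not CIMOC's actual algorithm): speed falls linearly with vehicle density, so flow q = k·v peaks at half the jam density.

```python
def greenshields_flow(k, v_free=60.0, k_jam=120.0):
    """Flow (vehicles/hour) at density k (vehicles/km), with speed
    v = v_free * (1 - k / k_jam) falling linearly toward the jam density."""
    v = v_free * (1.0 - k / k_jam)
    return k * v

# Sweep density across scenarios, as a planner would, to find the
# road's capacity; the model puts it at half the jam density.
flows = {k: greenshields_flow(k) for k in range(0, 121, 10)}
k_best = max(flows, key=flows.get)
```

With the illustrative parameters above, capacity occurs at 60 vehicles/km, a simple example of the "simulate scenarios, then compare outcomes" workflow the tool is said to support.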

The Digital Earth


The Digital Earth: Understanding Our Planet in the 21st Century

A new wave of technological innovation is allowing us to capture, store, process, and display an unprecedented amount of information about our planet and a wide variety of environmental and cultural phenomena. Much of this information will be "georeferenced": that is, it will refer to some specific place on the Earth's surface.

The hard part of taking advantage of this flood of geospatial information will be making sense of it: turning raw data into understandable information. Today, we often find that we have more information than we know what to do with. The Landsat program, designed to help us understand the global environment, is a good example. The Landsat satellite is capable of taking a complete photograph of the entire planet every two weeks, and it's been collecting data for more than 20 years. In spite of the great need for that information, the vast majority of those images have never fired a single neuron in a single human brain. Instead, they are stored in electronic silos of data. We used to have an agricultural policy under which we stored grain in Midwestern silos and let it rot while millions of people starved to death. Now we have an insatiable hunger for knowledge, yet a great deal of data remains unused.

Part of the problem has to do with the way information is displayed. Someone once said that if we tried to describe the human brain in computer terms, it looks as if we have a low bit rate but very high resolution. For example, researchers have long known that we have trouble remembering more than seven pieces of data in our short-term memory. That's a low bit rate.
On the other hand, we can absorb billions of bits of information instantly if they are arrayed in a recognizable pattern within which each bit gains meaning in relation to all the others, such as a human face or a galaxy of stars.

The tools we have most commonly used to interact with data, such as the "desktop metaphor" employed by the Macintosh and Windows operating systems, are not really suited to this new challenge. I believe we need a "Digital Earth": a multi-resolution, three-dimensional representation of the planet, into which we can embed vast quantities of geo-referenced data.

Imagine, for example, a young child going to a Digital Earth exhibit at a local museum. After donning a head-mounted display, she sees Earth as it appears from space. Using a data glove, she zooms in, using higher and higher levels of resolution, to see continents, then regions, countries, cities, and finally individual houses, trees, and other natural and man-made objects. Having found an area of the planet she is interested in exploring, she takes the equivalent of a "magic carpet ride" through a 3-D visualization of the terrain. Of course, terrain is only one of the many kinds of data with which she can interact. Using the system's voice recognition capabilities, she is able to request information on land cover, distribution of plant and animal species, real-time weather, roads, political boundaries, and population. She can also visualize the environmental information that she and other students all over the world have collected as part of the GLOBE project. This information can be seamlessly fused with the digital map or terrain data. She can get more information on many of the objects she sees by using her data glove to click on a hyperlink. To prepare for her family's vacation to Yellowstone National Park, for example, she plans the perfect hike to the geysers, bison, and bighorn sheep that she has just read about.
In fact, she can follow the trail visually from start to finish before she ever leaves the museum in her hometown.

She is not limited to moving through space, but can also travel through time. After taking a virtual field trip to Paris to visit the Louvre, she moves backward in time to learn about French history, perusing digitized maps overlaid on the surface of the Digital Earth, newsreel footage, oral histories, newspapers, and other primary sources. She sends some of this information to her personal e-mail address to study later. The time-line, which stretches off in the distance, can be set for days, years, centuries, or even geological epochs, for those occasions when she wants to learn more about dinosaurs.

Obviously, no one organization in government, industry, or academia could undertake such a project. Like the World Wide Web, it would require the grassroots efforts of hundreds of thousands of individuals, companies, university researchers, and government organizations. Although some of the data for the Digital Earth would be in the public domain, it might also become a digital marketplace for companies selling a vast array of commercial imagery and value-added information services. It could also become a "collaboratory," a laboratory without walls, for research scientists seeking to understand the complex interaction between humanity and our environment.

Technologies Needed for a Digital Earth

Although this scenario may seem like science fiction, most of the technologies and capabilities that would be required to build a Digital Earth are either here or under development. Of course, the capabilities of a Digital Earth will continue to evolve over time. What we will be able to do in 2005 will look primitive compared to the Digital Earth of the year 2020. Below are just a few of the technologies that are needed.

Computational Science: Until the advent of computers, both experimental and theoretical ways of creating knowledge have been limited.
Many of the phenomena that experimental scientists would like to study are too hard to observe: they may be too small or too large, too fast or too slow, occurring in a billionth of a second or over a billion years. Pure theory, on the other hand, cannot predict the outcomes of complex natural phenomena like thunderstorms or air flows over airplanes. But with high-speed computers as a new tool, we can simulate phenomena that are impossible to observe, and simultaneously better understand data from observations. In this way, computational science allows us to overcome the limitations of both experimental and theoretical science. Modeling and simulation will give us new insights into the data that we are collecting about our planet.

Mass Storage: The Digital Earth will require storing quadrillions of bytes of information. Later this year, NASA's Mission to Planet Earth program will generate a terabyte of information each day. Fortunately, we are continuing to make dramatic improvements in this area.

Satellite Imagery: The Administration has licensed commercial satellite systems that will provide 1-meter resolution imagery beginning in early 1998. This provides a level of accuracy sufficient for detailed maps that was previously only available using aerial photography. This technology, originally developed in the U.S. intelligence community, is incredibly accurate. As one company put it, "It's like having a camera capable of looking from London to Paris and knowing where each object in the picture is to within the width of a car headlight."

Broadband Networks: The data needed for a digital globe will be maintained by thousands of different organizations, not in one monolithic database. That means that the servers participating in the Digital Earth will need to be connected by high-speed networks.
Driven by the explosive growth of Internet traffic, telecommunications carriers are already experimenting with 10 gigabit/second networks, and terabit networking technology is one of the technical goals of the Next Generation Internet initiative. The bad news is that it will take a while before most of us have this kind of bandwidth to our homes, which is why it will be necessary to have Digital Earth access points in public places like children's museums and science museums.

Interoperability: The Internet and the World Wide Web have succeeded because of the emergence of a few simple, widely agreed-upon protocols, such as the Internet protocol. The Digital Earth will also need some level of interoperability, so that geographical information generated by one kind of application software can be read by another. The GIS industry is seeking to address many of these issues through the Open GIS Consortium.

Metadata: Metadata is "data about data." For imagery or other georeferenced information to be helpful, it might be necessary to know its name, location, author or source, date, data format, resolution, etc. The Federal Geographic Data Committee is working with industry and state and local government to develop voluntary standards for metadata.

Of course, further technological progress is needed to realize the full potential of the Digital Earth, especially in areas such as automatic interpretation of imagery, the fusion of data from multiple sources, and intelligent agents that could find and link information on the Web about a particular spot on the planet. But enough of the pieces are in place right now to warrant proceeding with this exciting initiative.

Potential Applications

The applications that will be possible with broad, easy-to-use access to global geospatial information will be limited only by our imagination.
We can get a sense of the possibilities by looking at today's applications of GIS and sensor data, some of which have been driven by industry, others by leading-edge public sector users.

Conducting virtual diplomacy: To support the Bosnia peace negotiations, the Pentagon developed a virtual-reality landscape that allowed the negotiators to take a simulated aerial tour of the proposed borders. At one point in the negotiations, the Serbian President agreed to a wider corridor between Sarajevo and the Muslim enclave of Gorazde after he saw that mountains made a narrow corridor impractical.

Fighting crime: The City of Salinas, California has reduced youth handgun violence by using GIS to detect crime patterns and gang activity. By collecting information on the distribution and frequency of criminal activities, the city has been able to quickly redeploy police resources.

Preserving biodiversity: Planning agencies in the Camp Pendleton, California region predict that population will grow from 1.1 million in 1990 to 1.6 million in 2010. This region contains over 200 plants and animals that are listed by federal or state agencies as endangered, threatened, or rare. By collecting information on terrain, soil type, annual rainfall, vegetation, land use, and ownership, scientists modeled the impact on biodiversity of different regional growth plans.

Predicting climate change: One of the significant unknowns in modeling climate change is the global rate of deforestation. By analyzing satellite imagery, researchers at the University of New Hampshire, working with colleagues in Brazil, are able to monitor changes in land cover and thus determine the rate and location of deforestation in the Amazon.
This technique is now being extended to other forested areas in the world.

Increasing agricultural productivity: Farmers are already beginning to use satellite imagery and Global Positioning Systems for early detection of diseases and pests, and to target the application of pesticides, fertilizer, and water to those parts of their fields that need it the most. This is known as precision farming, or "farming by the inch."

The Way Forward

We have an unparalleled opportunity to turn a flood of raw data into understandable information about our society and our planet. This data will include high-resolution satellite imagery of the planet, digital maps, and economic, social, and demographic information. If we are successful, it will have broad societal and commercial benefits in areas such as education, decision-making for a sustainable future, land-use planning, agriculture, and crisis management. The Digital Earth project could allow us to respond to man-made or natural disasters, and to collaborate on the long-term environmental challenges we face.

A Digital Earth could provide a mechanism for users to navigate and search for geospatial information, and for producers to publish it. The Digital Earth would be composed of the "user interface" (a browsable, 3-D version of the planet available at various levels of resolution), a rapidly growing universe of networked geospatial information, and the mechanisms for integrating and displaying information from multiple sources.

A comparison with the World Wide Web is instructive. [In fact, it might build on several key Web and Internet standards.] Like the Web, the Digital Earth would organically evolve over time, as technology improves and the information available expands. Rather than being maintained by a single organization, it would be composed of both publicly available information and commercial products and services from thousands of different organizations.
Just as interoperability was the key for the Web, the ability to discover and display data contained in different formats would be essential.

I believe that the way to spark the development of a Digital Earth is to sponsor a testbed, with participation from government, industry, and academia. This testbed would focus on a few applications, such as education and the environment, as well as the tough technical issues associated with interoperability and policy issues such as privacy. As prototypes became available, it would also be possible to interact with the Digital Earth in multiple places around the country with access to high-speed networks, and to get a more limited level of access over the Internet.

Clearly, the Digital Earth will not happen overnight. In the first stage, we should focus on integrating the data from multiple sources that we already have. We should also connect our leading children's museums and science museums to high-speed networks such as the Next Generation Internet so that children can explore our planet. University researchers would be encouraged to partner with local schools and museums to enrich the Digital Earth project, possibly by concentrating on local geospatial information. Next, we should endeavor to develop a digital map of the world at 1-meter resolution. In the long run, we should seek to put the full range of data about our planet and our history at our fingertips.

In the months ahead, I intend to challenge experts in government, industry, academia, and non-profit organizations to help develop a strategy for realizing this vision. Working together, we can help solve many of the most pressing problems facing our society, inspire our children to learn more about the world around them, and accelerate the growth of a multi-billion-dollar industry.

School of Economics and Management, Nanjing University of Aeronautics and Astronautics

Institute of Information Management and Electronic Commerce

Research achievements:
In recent years the institute has won five provincial- and ministerial-level awards and published more than 200 papers, of which over 50 are indexed by SSCI, SCI, or EI.

Projects undertaken:
In recent years the institute has undertaken nine projects funded by the National Natural Science Foundation of China, the National Social Science Fund, the Humanities and Social Sciences Fund of the Ministry of Education, and provincial key programs in philosophy and social science for universities. It has also carried out fifteen contract projects for enterprises and research institutes, including the development of management information systems, knowledge management systems, and online public-opinion monitoring systems. Clients include government agencies, enterprises and public institutions, aviation and aerospace research institutes, and national defense information centers. Core technologies include real-time web information collection, automatic ontology generation, multi-source knowledge fusion architectures, and large-scale text analytics.

About the institute:
The Institute of Information Management and Electronic Commerce at NUAA is composed of professors and associate professors whose expertise combines information technology and management science. Its main research concerns key technologies for information systems, big data analytics, electronic commerce, and knowledge innovation for national defense.

Research areas include: information management and information systems; big data analytics in business and manufacturing; online public opinion and complex network analysis; e-commerce behavior and business models; knowledge management and mining; modeling and simulation of management systems; and methods and support systems for aerospace R&D innovation in a big data environment.

Representative publications:
Optimal path for controlling CO2 emissions in China: A perspective of efficiency analysis. Energy Economics 45 (2014), 99-110.
On estimating shadow prices of undesirable outputs with efficiency models: A literature review. Applied Energy 130 (2014), 799-806.

Young Scientists Projects of the National Natural Science Foundation of China:
Research on factor substitution elasticity in modeling energy and climate-change policy; Efficiency models with undesirable outputs and their application to evaluating energy efficiency and environmental performance.

An English Essay on Weather Forecasting: Beijing


Weather forecasting plays a crucial role in the daily lives of people in Beijing and across the world. It provides valuable information about the expected atmospheric conditions, allowing individuals and organizations to plan their activities accordingly. In the bustling city of Beijing, where the climate can be quite unpredictable, accurate weather forecasts are essential for a variety of sectors, from transportation and agriculture to tourism and emergency management.

One of the primary benefits of weather forecasting in Beijing is its impact on transportation. The city's extensive road network and public transportation system rely heavily on accurate weather information to ensure the safe and efficient movement of people and goods. Heavy snowfall, for example, can disrupt traffic flow, leading to delays and accidents. By monitoring weather patterns and issuing timely warnings, meteorologists help transportation authorities and commuters prepare for such events, implementing appropriate measures to mitigate the impact on daily commutes and logistics.

The agricultural sector in Beijing also greatly benefits from weather forecasting. Farmers in the region rely on accurate predictions of rainfall, temperature, and other climatic factors to make informed decisions about planting, irrigation, and pest management. This information helps them optimize their operations, maximize crop yields, and minimize the risk of weather-related losses. For instance, a forecast of an upcoming heat wave can prompt farmers to adjust their irrigation schedules or implement measures to protect their crops from the damaging effects of high temperatures.

In the realm of tourism, weather forecasting plays a crucial role in Beijing. The city is a popular destination for both domestic and international visitors, who often plan their itineraries around the expected weather conditions.
Accurate forecasts allow tourists to make informed decisions about their activities, ensuring they can make the most of their stay. Whether it's planning outdoor excursions, scheduling sightseeing tours, or choosing the best time to visit iconic landmarks, weather information is essential for the tourism industry in Beijing.

Beyond these practical applications, weather forecasting also plays a vital role in emergency management in Beijing. Severe weather events, such as heavy storms, floods, or extreme temperatures, can pose significant risks to the safety and well-being of the city's residents. By closely monitoring weather patterns and issuing timely warnings, meteorologists enable emergency response teams to prepare for and respond to these situations effectively. This includes coordinating evacuation efforts, mobilizing resources, and ensuring the availability of essential services during times of crisis.

The accuracy of weather forecasting in Beijing has improved significantly in recent years, thanks to advancements in technology and the ongoing efforts of meteorological agencies. The use of sophisticated weather modeling and simulation tools, coupled with a network of weather observation stations and satellite data, has enabled forecasters to provide more reliable and detailed predictions. Additionally, the integration of real-time data from various sources, such as radar and sensor networks, has enhanced the ability to detect and track weather patterns with greater precision.

However, weather forecasting in Beijing remains challenging. The city's complex topography, with its mountainous terrain and proximity to the sea, can create unique microclimates and weather patterns that are difficult to predict accurately.
Furthermore, the rapidly changing urban landscape, with its high-rise buildings and dense infrastructure, can influence local weather conditions in ways that are not always well understood.

To address these challenges, the meteorological community in Beijing is continuously working to refine its forecasting models, improve data collection methods, and enhance communication strategies. Collaboration between research institutions, government agencies, and private sector organizations is crucial in this endeavor, as it allows for the sharing of knowledge, the development of innovative technologies, and the implementation of best practices.

In conclusion, weather forecasting is a vital component of daily life in Beijing, providing essential information that impacts a wide range of sectors and activities. From transportation and agriculture to tourism and emergency management, the accurate prediction of atmospheric conditions plays a crucial role in the city's resilience and prosperity. As technology and scientific understanding continue to advance, the importance of weather forecasting in Beijing will only grow, helping to ensure the well-being and safety of its inhabitants and visitors alike.

The American Museum of Natural History in New York City has built a new exhibit space---The


Visualizations of Earth Process for the American Museum of Natural History

Abstract

The American Museum of Natural History in New York City has built a new exhibit space, the Hall of Planet Earth. This hall highlights earth processes using various exhibits including actual rocks and core samples, demonstration models, and video display stations. One specific scientific area that the museum wants to highlight is that of modeling and simulation. Los Alamos has a long history in this area through our involvement in programs such as the DOE Grand Challenges and the Institute for Geophysics and Planetary Physics. Because of this, we were asked to participate in the design of, and provide content for, five exhibits designed to showcase modeling and simulation of individual earth processes. This paper briefly describes the scientific visualizations developed and used to model atmosphere, ocean, and mantle processes for the American Museum of Natural History's exhibit.

1 Introduction

The American Museum of Natural History in New York City is currently building a new exhibit space, the Hall of Planet Earth. This hall will highlight earth processes using various exhibits including actual rocks and core samples, demonstration models, and video display stations. One specific scientific area that the museum wants to highlight is that of modeling and simulation. Los Alamos has a long history in this area through our involvement in programs such as the DOE Grand Challenges and the Institute for Geophysics and Planetary Physics. Because of this, we were asked to participate in the design of, and provide content for, five exhibits designed to showcase modeling and simulation of individual earth processes.

The modeling and simulation exhibits will consist of five video display stations distributed throughout the hall. Each video station will play 4-5 minutes of pre-recorded video when triggered by a museum visitor.
The first few minutes of each video will explain the details of modeling a specific earth process through graphic animations, textual overlays, and interviews with the simulation scientists. The final 1-2 minutes of each video use actual scientific visualizations of the simulation data to explain specific features under study. The museum specified the use of scientific visualization, as opposed to artists' renditions, to convey the process that scientists use to understand simulation results. In the following sections we'll describe three of the five visualizations that Los Alamos delivered to the museum: an atmospheric simulation of a severe winter storm, a global ocean model, and the process of mantle convection. In each section we'll briefly describe the model and the visualization tools and techniques used to produce these animations.

2 Atmospheric Model

The atmospheric model was used to simulate the development of one of the strongest storms to hit the eastern United States this century and tracks its development from Brownsville to Newfoundland. This storm, through a combination of heavy snow, high winds, severe storms, and coastal flooding, claimed dozens of lives and caused over 2 billion dollars in damages. The storm also produced one of the largest areal coverages of deep snow ever, paralyzing the eastern seaboard, and its effects were felt deep into the tropics, including in Cuba and the Yucatan. The Regional Atmospheric Modeling System (RAMS) [1], originally developed at Colorado State University, uses measurements from weather stations all over the country and numerical calculations to predict evolving weather patterns. Model output includes temperature, pressure, wind vectors, and species of condensate such as ice crystals, high-elevation snow, snow, and rain.

Two animations were created to visualize the dynamics of the simulated storm system. An overhead view animation details the life cycle of the storm. A side view highlights the storm's intense development phase. To create these animation sequences for the scientists and museum we used IBM's Visualization Data Explorer (DX) product [2]. Data Explorer provides a full collection of visualization operators and allows for fast program creation via a data-flow program graph editor. Both animations depict three and a half days of simulated time, from 12 PM, March 11, 1993 to 12 AM, March 15, 1993.

The overhead view animation details the winds near the jet stream level using stream ribbons. The extent of the clouds associated with the storm is shown using volume rendering. Contours of surface pressure are used to show how the storm intensified over the eastern seaboard, producing hurricane-force winds in some locations. The areal extent of the rain/snow is depicted using scalar color mappings as the storm propagates from Texas to Maine. Local temperatures are also reported using numerical values. Figure 1 shows a frame from this animation.

Figure 1: Overhead view of storm.

A side view highlights the storm's intense development phase. The view shows the strong vertical lifting associated with the low pressure at the center of the storm by using stream ribbons which originate at the surface. This lifting produces the heavy clouds and rain/snow shown using volume rendering. Figure 2 shows a frame from this animation.

Figure 2: Side view of storm's vertical development.

The importance of these animations is in being able to see how all of the different variables that define atmospheric structure interact to produce such an extraordinary event. From this type of visualization scientists can better understand how subtle aspects of atmospheric dynamics can come together at the right time to produce a killer storm.

3 Ocean Model

The Earth's climate is determined by a complicated interaction between the ocean, sea ice, atmosphere, and land. Computer models that simulate numerically the behavior of this system are one of the best means we have for projecting future climate and the impact of humanity's activities on it. Present-day general circulation models (GCMs) are able
to simulate satisfactorily many aspects of the current climate, though a new generation of models is needed that has finer spatial resolution and that more realistically treats the physical processes that control our climate. To meet these objectives, we need GCMs that run on massively parallel computers. As part of the DOE's Grand Challenge program, scientists at Los Alamos have developed one such model: a global ocean circulation model named the Parallel Ocean Program (POP). The POP ocean simulation, running on the Laboratory's SGI Origin 2000 parallel computers, employs a global grid containing 1280 uniformly spaced points in longitude and 896 variably spaced points from N to S latitude, yielding a spatial resolution ranging from 31 km at the Equator to 7 km at high latitude. The model uses 20 non-uniformly spaced depth levels and realistic bottom topography (bathymetry). Observed surface winds from the period 1985-1995 and realistic monthly mean heat and salt fluxes are used to force the model. Additionally, we run the model using roughly the same grid spacing over only the North Atlantic, resulting in much greater spatial resolution for that area. As POP runs it periodically writes data files representing the progress of the simulation. Although the simulation computes on a 30-minute time-step, these files are written every three days of simulated time.
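The quoted figures are internally consistent and easy to check. The short sketch below (the mean Earth radius is an assumed standard value, not given in the text) recomputes the equatorial grid spacing and the number of 30-minute time-steps summarized by each three-day output file:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; assumed, not stated in the text

# 1280 uniformly spaced longitude points -> spacing at the Equator
equatorial_circumference = 2 * math.pi * EARTH_RADIUS_KM
dx_equator = equatorial_circumference / 1280
print(f"equatorial spacing: {dx_equator:.1f} km")  # ~31.3 km, matching the quoted 31 km

# a 30-minute time-step with files written every 3 simulated days
steps_per_file = 3 * 24 * 60 // 30
print(f"time-steps per output file: {steps_per_file}")  # 144
```

So each data file condenses 144 computed time-steps into one snapshot, which is why the file sequence is so much more manageable than the raw simulation state.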
At each three-day time-step, one file is written for each variable being computed: salinity, temperature, sea-surface height, and flow vectors. Historically, we've visualized this sequential collection of data files using video technology. These video visualizations, while useful for viewing the progress of the simulation, have a serious drawback: they are static and can't be modified without creating a new video. Because large simulations are run infrequently, these video animations have long lifespans, and their shortcomings become increasingly apparent as time goes on. To address these and other limitations we developed an interactive ocean model rendering tool called POPTEX [3]. This tool duplicates the benefit of video visualizations (putting the results of the simulation into motion) while adding capabilities that enable dynamic, flexible, and interactive exploration of the data. To do this we used the powerful combination of hardware features available on the Laboratory's SGI Origin 2000 and its InfiniteReality (iR) graphics pipes. Specifically, we exploit the iR's fast texture mapping capabilities [4] to provide the desired interactivity. The results of the simulation are converted to 8-bit texture images which are then mapped through an editable texture lookup table (TLUT) onto the globe. The TLUT itself is implemented in the iR's hardware and can be loaded almost instantaneously. The main advantage of POPTEX, though, is its animation capability. The collection of textures in main memory can be continuously streamed into texture memory at an observed maximum rate of 72 million texels per second. This results in a maximum frame rate of 60 Hz.
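The TLUT scheme described above (8-bit texels recolored through an editable lookup table, so the table can be reloaded without touching the texel data) can be sketched in NumPy. The blue-to-red ramp here is an illustrative colormap, not the one POPTEX actually uses:

```python
import numpy as np

# Hypothetical 8-bit "texture" of scalar indices (values 0..255).
texels = np.array([[0, 64], [128, 255]], dtype=np.uint8)

# Editable 256-entry RGB lookup table: a simple blue-to-red ramp.
# Reloading this small table recolors every frame without touching the
# texel data, which is why the hardware TLUT updates almost instantly.
tlut = np.zeros((256, 3), dtype=np.uint8)
tlut[:, 0] = np.arange(256)          # red rises with value
tlut[:, 2] = 255 - np.arange(256)    # blue falls with value

rgb = tlut[texels]                   # per-texel color lookup
print(rgb.shape)                     # (2, 2, 3)
print(rgb[0, 0], rgb[1, 1])          # [0 0 255] [255 0 0]
```

Editing `tlut` and reapplying it is the software analogue of loading a new hardware TLUT: the mapping changes while the 8-bit texture stream stays fixed.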
At this rate, ten years of simulated time pass in just 21 seconds. More useful than end-to-end animation, though, is the ability to choose a period of time and selectively animate over only that range, at any speed, forward or backward, pausing or changing the rate as desired. Although most of the variables we visualize (sea-surface height, temperature, and salinity) are mapped to colors, we have experimented with some alternative mappings. For example, we've used the hillshading technique [5] to display sea-surface height in shaded relief. Figure 3 shows sea-surface height in the North Atlantic using this technique. Visualizations of sea-surface height are of interest in many areas of the world. For example, the strong eddies seen in the Caribbean and Gulf of Mexico can affect the operations of oil drilling platforms. For the museum project, we added code to visualize the surface currents of the ocean. Our first attempt was to advect particles (or drifters) through the vector field, leaving a dissipating trail behind them as they progress through the flow field. Figure 4 shows an example of this technique applied to the Agulhas current that flows around southern Africa. Both the visualization researchers and the simulation scientists were encouraged by the results of this technique: the drifters effectively tracked the eddies in the flow field. We previewed an animation of this technique to the museum staff, fully expecting an equally positive response.

Figure 3: Hillshaded sea-surface height in North Atlantic.

Unfortunately, that's not what we heard. Their initial response to the drifters was that they looked "creepy" or "like bugs". We spent quite a bit of time trying alternate color schemes and line styles, none of which appeared a great deal more appealing. In the end, we stayed with the original depiction since that's what the ocean scientists will be using on a day-to-day basis.

4 Mantle Model

Solid state convection within the Earth's mantle determines one of the longest time scales of our planet.
The Earth's mantle, the 2900 km thick silicate shell that extends from the iron core to the Earth's surface, though solid, is deforming slowly by viscous creep over long time periods. While gradual in human terms, the vigor of this subsolidus convection is impressive, producing flow velocities of 1-10 cm/year. Plate tectonics, the piecewise continuous movement of the Earth's surface, is the prime manifestation of this internal deformation, but ultimately all large scale geological activity of our planet, such as mountain building and continental drift, must be explained dynamically by mass displacements within the mantle. A major problem for researchers in computational mantle dynamics is to resolve the Earth's outer 100 km deep skin, or lithosphere. This lithosphere is an integral part of the mantle, and thus a 100 km wide spatial resolution has to be achieved throughout the volume. The resulting computational problem requires numerical discretizations with approximately 10-100 million grid points to resolve the mantle volume on scales of 50 km or less. Mantle convection researchers at Los Alamos use the 3D spherical mantle dynamics code TERRA, which solves the Navier-Stokes equations in the infinite Prandtl number limit using a multigrid approach [6]. A message passing version of TERRA runs on a wide variety of parallel platforms, from clusters of Linux PCs through large parallel machines such as the SGI/Cray Origin 2000 and the SGI/Cray T3E [7]. The large memories of these machines have allowed scientists to investigate convection using a numerical grid of more than 10 million finite elements and thus to resolve a large range of dynamical length scales within the mantle.

Figure 4: Drifters following the Agulhas current.

When the TERRA visualization effort was first begun, simulations were primarily run on a 256-processor Cray T3D system, and being able to run visualization codes on the same platform as the simulation was a big advantage. We therefore developed purely software visualization tools that ran
on the parallel computer where the data was generated. This allowed for both rapid and high-resolution display of simulation results too large for visualization on even the high-end graphics workstations of the time, and avoided time-consuming data transfers between the simulation host and the visualization computers. In today's environment at the Advanced Computing Laboratory, where our main platform for computation is a 2048-processor cluster of Origin 2000 systems that can include hardware graphics accelerators, an OpenGL solution might have better performance. Still, the parallel software tools are portable and scalable and can run efficiently on any platform where the simulation code runs. The parallel visualization tools consist of an isosurface extractor, a parallel software polygon renderer, and a parallel slicer that can interpolate arbitrary planar slices through field data. These tools use a message passing and active message programming model [8]. The tools operate directly on the TERRA grid structure. While the TERRA grid is not a structured grid, the recursive subdivision basis of the grid allows the grid geometry to be implicitly represented rather than explicitly stored, saving memory and allowing for efficient geometric queries of the grid. Our software parallel renderer uses a sort-middle based rendering algorithm. Both the data domain and the image are partitioned evenly among the processors. Each processor first handles the geometric processing for the portion of the data it holds: isosurface extraction, arbitrary slicing, and geometric transformation.
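The even image partition used by this sort-middle renderer can be illustrated with a toy band-ownership function; the image height and processor count below are made up for illustration, not taken from the paper:

```python
# Toy sketch of an even image partition for a sort-middle renderer:
# the image is split into equal horizontal bands, one band per processor.
IMAGE_HEIGHT = 512
NUM_PROCS = 8
BAND = IMAGE_HEIGHT // NUM_PROCS           # 64 scanlines per processor

def owner(scanline_y):
    """Processor responsible for rasterizing this scanline."""
    return scanline_y // BAND

def route(ymin, ymax):
    """Split a primitive's scanline span into per-processor segments."""
    segments = {}
    for y in range(ymin, ymax + 1):
        segments.setdefault(owner(y), []).append(y)
    return segments

# A primitive covering scanlines 60..70 straddles processors 0 and 1.
print(sorted(route(60, 70)))               # [0, 1]
```

A primitive whose screen-space extent crosses a band boundary is split and sent to every processor owning part of it, which is the essence of the "sort-middle" redistribution step.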
The resulting geometric primitives are partitioned into scanline segments according to the portion of screen space they cover and sent to the processor responsible for that portion of the image using an active message communications model. When the active message arrives at its destination processor, a handler function is invoked that completes the rasterization of the primitives it contains. Opaque scanline segments are directly z-buffered. Transparent scanline segments are buffered, sorted, and composited after all processors complete geometric processing. Arbitrary slicing is handled through software-based texture mapping, which maps pixels in the slice plane back into the field grid for color lookup. Isosurfaces are extracted using a parallel version of the NOISE algorithm [9]. More details about the TERRA visualization tools can be found in [10]. Figure 5 shows a frame from the mantle convection animation that will be used at the American Museum of Natural History. This frame shows the temperature field from the simulation colormapped red (hot) to blue (cold). The outer blue transparent isosurface is a relatively low temperature and indicates where cold material moves back toward the interior of the mantle. The inner orange isosurface is a relatively high temperature and indicates hot material moving outward. The visualization tools run at interactive rates (3-5 FPS). While the video produced for the museum was rendered in batch mode, the rendering rate was still over 3 frames per second, excluding image write time.

Figure 5: Mantle convection animation frame.

5 Acknowledgements

The authors would like to thank and acknowledge the following for their support: Allison Alltucker, John Ballentyne, James Bossert, Hans-Peter Bunge, Allegra Burnette, Alice Chapman, Elliot Hoyt, Ro Kinzler, Robert Malone, Mathew Maltrud, Ed Mathez, and Judy Winterkamp.

6 References

[1] R. A. Pielke et al., "A Comprehensive Meteorological Modeling System - RAMS," Meteorology and Atmospheric Physics, Vol. 49, 1992, pp. 69-91.
[2] G. Abram and
L. Treinish, "An Extended Data-Flow Architecture for Data Analysis and Visualization," IEEE Visualization '95 Conf. Proc., IEEE Computer Society Press, Los Alamitos, Calif., Oct. 1995, pp. 263-270.
[3] John S. Montrym, Daniel R. Baum, David L. Dignam, and Christopher J. Migdal, "InfiniteReality: A Real-Time Graphics System," SIGGRAPH 97 Conf. Proc., ACM SIGGRAPH, Addison-Wesley, August 1997, pp. 293-302.
[4] Allen McPherson and Mathew Maltrud, "POPTEX: Interactive Ocean Model Visualization Using Texture Mapping Hardware," IEEE Visualization '98 Conf. Proc., IEEE Computer Society Press, Los Alamitos, Calif., Oct. 1998, pp. 471-474.
[5] B. K. P. Horn, "Hillshading and the Reflectance Map," Proceedings of the IEEE, 69(1):14-47, 1981.
[6] John R. Baumgardner, "Three dimensional treatment of convective flow in the Earth's mantle," J. Stat. Phys., 39(5-6):501-511, 1985.
[7] Hans-Peter Bunge and John R. Baumgardner, "Mantle convection modeling on parallel virtual machines," Computers in Physics, 9(2):207-215, 1995.
[8] James S. Painter, Patrick McCormick, Michael Krogh, Charles Hansen, and Guillaume Colin de Verdière, "The ACL message passing library," EPFL Supercomputing Review, 7, November 1995.
[9] Yarden Livnat, Han-Wei Shen, and Christopher R. Johnson, "A near optimal isosurface extraction algorithm using the span space," IEEE Transactions on Visualization and Computer Graphics, 2(1):73-84, 1996.
[10] James S. Painter, Hans-Peter Bunge, and Yarden Livnat, "Case Study: Mantle Convection Visualization on the Cray T3D," in Proceedings of IEEE Visualization '96, pages 409-412, San Francisco, CA, October 1996.
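As a quick sanity check on the grid sizes quoted in the mantle section (10-100 million grid points at 50 km resolution), the mantle shell can be divided into (50 km)^3 cells; the Earth radii below are standard values assumed here, not taken from the text:

```python
import math

R_SURFACE_KM = 6371.0                  # Earth's surface radius (assumed standard value)
R_CORE_KM = R_SURFACE_KM - 2900.0      # core-mantle boundary, from the 2900 km shell thickness

# Volume of the spherical shell, then one cell per (50 km)^3 block.
shell_volume = (4.0 / 3.0) * math.pi * (R_SURFACE_KM**3 - R_CORE_KM**3)
cells = shell_volume / 50.0**3
print(f"~{cells / 1e6:.0f} million cells")
```

The result, roughly 7 million cells, lands at the lower end of the quoted 10-100 million grid points, which is the right order of magnitude for a volume-filling 50 km discretization.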

AMESim Proportional Reversing Valve Model and Simulation Analysis Guide

The Modeling and Simulation of a Proportional Reversing Valve Based on AMESim

Lin Chuang 1, a, Fei Ye 2, b
1-2 School of Mechanical Engineering, Shenyang Jianzhu University, No. 9, Hunnan East Road, Hunnan New District, Shenyang City, Liaoning, P.R. China, 110168
a *********************, b ***************

Keywords: AMESim; Proportional Reversing Valve; Modeling and Simulation

Abstract. Taking a particular model of proportional reversing valve as an example, a finite element analysis model of the proportional solenoid is established with the Ansoft software and a simulation model of the proportional reversing valve is established with the AMESim software. The output characteristic parameters obtained from Ansoft are imported into the AMESim proportional solenoid model, simulation parameters are set, and the theoretical characteristic curve is compared with the sample parameters to confirm that the proportional solenoid model is correct. Analysis of the pilot-valve control pressure curve and the main-valve spool displacement curve obtained by simulating the proportional reversing valve model shows that the pilot valve controls the main valve well and that the model meets the corresponding functional requirements, providing an important reference for its use in hoisting hydraulic circuit simulation models.

1 Introduction

At present, AMESim simulation is an important means of analyzing the operating characteristics of hydraulic systems. When the software is used to simulate a truck crane hoisting circuit containing a proportional reversing valve, the valve must be modeled with the help of AMESim's HCD (Hydraulic Component Design) function [1].
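The validation step described in the abstract (comparing the Ansoft-simulated solenoid force with the sample catalog value) amounts to a relative-error check; the numbers below are the ones reported later in Section 3.1:

```python
# Relative error between the simulated mean steady-state force and the
# GH263-060 catalog rating, using the values reported in Section 3.1.
simulated_mean_force = 148.4  # N, mean of the 137-161 N Ansoft output range
sample_rated_force = 145.0    # N, catalog value
error = abs(simulated_mean_force - sample_rated_force) / sample_rated_force
print(f"relative error: {error:.1%}")  # 2.3%, the figure quoted in the paper
```

An error of a few percent against the catalog rating is what the authors take as confirmation that the finite element solenoid model is usable as the drive signal for the valve model.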
Using HCD alone to build the simulation model of the proportional reversing valve usually means simplifying the proportional solenoid: a piecewise function, set up from the parameters given in the sample catalog, is used to simulate its drive on the valve spool, and the simulation accuracy is hard to guarantee. The author instead uses the finite element analysis software Ansoft Maxwell to model the proportional solenoid and takes the simulated input/output characteristics of the solenoid as the input signal for the AMESim simulation model of the proportional reversing valve, to ensure the accuracy of simulating hydraulic systems that contain proportional control valves.

Fig. 1 Pilot proportional directional valve structure diagram

Based on the structure and working principle of the proportional directional valve, this paper builds a model in AMESim and runs simulations. Analysis of the simulated pilot-valve control pressure curve and main-valve spool displacement curve shows that the pilot valve controls the main valve well and that the proportional directional valve model meets the corresponding functional requirements, making it an important reference for use in hoisting hydraulic circuit simulation models.

International Conference on Automation, Mechanical Control and Computational Engineering (AMCCE 2015)

2 The working principle of the proportional directional valve

Fig. 1 is the structure diagram of the pilot-operated proportional directional valve. The valve mainly consists of two parts, the proportional pilot valve and the main valve. The pilot valve's internal structure includes an integrated proportional amplifier, a proportional solenoid, a centering spring, etc. The proportional amplifier amplifies the power of the command signal and inputs a proportional current to the proportional solenoid; the solenoid outputs an electromagnetic force in proportion and pushes the pilot valve spool, generating a control pressure at the outlet of the
pilot valve. This pressure acts on the end faces of the main valve spool; under its action the spool gradually overcomes the force of the return spring and begins to move, forming a valve port opening, so that the oil flow rate can be changed proportionally and the flow direction can be reversed, thereby controlling the position and speed of the actuator.

3 Proportional solenoid modeling and simulation

In order to study the dynamic output characteristics of the proportional solenoid working alone, the AMESim simulation model shown in Fig. 2 is built. Its main part consists of a signal input, a mass block, and a return spring. The mass block M is set according to the total mass of the proportional solenoid armature and push rod, and the friction coefficient, return spring preload, and stiffness are designed reasonably.

Fig. 2 Proportional solenoid AMESim simulation model

3.1 Creating the output files for the AMESim proportional solenoid model

The proportional solenoid GH263-060 is taken as the sample, with a rated current of 1.11 [A], rated travel of 4 [mm], and rated attraction force of 145 [N] [2]. A model of the proportional solenoid is built with the Ansoft software. When the input rated current is 1.11 [A], the steady-state output force varies between 137 and 161 [N] with a mean value of 148.4 [N]; the error relative to the 145 [N] sample value is only 2.3%, so the model correctly reflects the output characteristic of the proportional solenoid [3]. Setting up the proportional solenoid in the AMESim simulation model requires the electromagnetic force and inductance output characteristics as data support. Through the AMESim table edit module, the electromagnetic force and inductance data obtained from the Ansoft Maxwell 2D analysis are stored in AMESim in the form of 2D tables.
The electromagnetic-force and inductance data files in this format allow the proportional solenoid simulation parameters to be imported.

3.2 Simulation parameter settings

When current flows in the solenoid coil, the electromagnetic circuit formed around the armature makes it output an electromagnetic force; once this force exceeds the return spring preload, the spring begins to compress under the push of the armature push rod. In the AMESim environment parameter settings, parameters are set for the model on the basis of the above conditions; the main parameters are given in Table 1.

3.3 Running the simulation

After setting the simulation parameters and running the simulation, the proportional solenoid results are as follows:

(1) The input voltage and current curves

As can be seen from Fig. 3, the coil input voltage is the proportional solenoid input voltage: between 0 and 1 seconds it grows linearly over the range 0 to 13 [V]. As the input voltage rises, the current also increases gradually, peaking at 1.104 [A] at 1 [s], very close to the proportional solenoid's rated current of 1.11 [A].

Fig. 3 Input voltage and current curves over time

(2) The armature push-rod current-force curves

Fig. 4 Armature push-rod current-force characteristic curve
Fig. 5 Theoretical curve

The current-force output characteristic of the proportional solenoid is an important index for evaluating its control performance. Fig. 4 shows the armature push-rod output force changing with current. Before 0.58 [A], the electromagnetic force grows approximately with a certain slope; at 0.58 [A] an inflection point appears; between 0.58 [A] and 0.93 [A] the force increases slowly with another slope; and at 1.1 [A] the output reaches the maximum electromagnetic force of 144.932 [N]. The slow increase in the middle stage occurs because, after the armature has moved a displacement, the increased inductance obstructs the growth of
thrust. As the current increases further, after a certain stage the push-rod electromagnetic force increases rapidly again; finally, when the current is 1.1 [A], the push-rod output force is 144.932 [N], in almost exact agreement with the 145 [N] rated attraction force from the proportional solenoid sample. The armature push-rod output force thus passes through three stages: rapid rise, slow increase, and rapid rise again. Compared with the theoretical current-force output characteristic curve of the proportional solenoid shown in Fig. 5, the trend and numerical differences are not large and lie within the allowable error range. A possible cause of these differences is that, in the process of modeling and simulating the proportional solenoid, the model was simplified and default assumptions were made for the parameters of individual modules, which introduces a small error [4]. In view of the above analysis of the simulation results, the proportional solenoid current-force characteristic curve is close to the theoretical analysis and its values are very close to the sample parameters, so the proportional solenoid model can be applied in the following study.

4 Modeling and simulation of the proportional directional valve

As shown in Fig. 6, the pilot proportional directional valve is modeled using the AMESim software.

Fig. 6 Pilot proportional directional valve simulation model

4.1 Simulation parameter settings

The pilot valve, as the front stage of the proportional directional valve, accurately converts the manual input signal into the proportional solenoid force output signal and passes it to the control valve spool, using the spool's motion to control the oil loaded into each control cavity of the main valve spool. The proportional solenoid is the drive carrier of the whole process: the electromagnetic force acts on the pilot valve spool through the armature push rod, and when the output electromagnetic force is greater than the reset spring pre-tightening
force, the pilot valve spool begins to move and opens the valve port, directing control oil into the spring cavity on the left side of the main valve spool. When this pressure is enough to overcome the preload of the right-hand spring and the spool friction, the main valve spool moves to the right, opening the main valve port and realizing the main valve's reversing and throttling. In the AMESim environment parameter settings, parameters are set for the model according to the proportional solenoid simulation model and the operating conditions of the pilot-operated proportional directional valve; the main parameters are given in Table 2.

Table 2 Main parameter settings of the pilot proportional directional valve
- Control pressure: constant source, 30 [bar]
- Directional valve spool: piston diameter 15 [mm], rod diameter 2 [mm], remaining values default
- Main valve mass: mass 0.02 [kg], coefficient of viscous friction 15 [N/(m/s)], upper displacement limit 15.2 [mm], remaining values default
- Main valve spring cavity: preload 15 [N], spring rate 10000 [N/m]
- Flow source: constant flow rate 2 [L/min]
- Solver settings: simulation time 1 [s], time interval 0.001 [s]

4.2 Running the simulation

Running the simulation yields the following curves:

Fig. 7 Pilot valve control pressure curve

Fig. 7 is the pilot valve control pressure output curve. The pressure output by the pilot valve acts on the two sides of the main valve spool; under the action of the control pressure, the spool gradually overcomes the return spring force and flow forces, finally producing main-spool motion and a valve port opening so that the main valve realizes reversing and throttling. As the figure shows, the output pressure is 0 [bar] before 0.13 [s] (a 0.13 [s] output delay); between 0.13 [s] and 0.7 [s] the control pressure increases gradually, reaching its maximum output value of 30 [bar] at 0.7 [s].

Fig. 8 Main valve spool displacement curve

Fig. 8 shows that the main valve spool
displacement curve follows the same trend as the pilot control pressure curve: the main valve spool produces no displacement before 0.13 [s], and between 0.13 [s] and 0.7 [s], under the control pressure, it reaches its maximum displacement of 15.2 [mm]. The curve reflects that the pilot valve controls the main valve well [5,6].

5 Summary

In the AMESim environment, the proportional solenoid data relating working current and air gap to output force and inductance are converted, via the 2D table format, into the corresponding format files; the 2D table data are imported into the magnetic linear converter of the AMESim proportional solenoid simulation model; and the dynamic output characteristics are analyzed by simulation in the AMESim software. Comparison of the resulting current-force curve with the theoretical curve verifies the validity of the model and provides an adequate basis for further in-depth theoretical research. A pilot proportional directional valve HCD model is then set up in AMESim on the basis of the proportional solenoid model; analysis of the simulated pilot-valve control pressure curve and main-valve spool displacement curve shows that the pilot valve controls the main valve well, so the model can be used as the directional control valve in hoisting hydraulic circuit simulation models.

References

[1] Bideaux E., Scavarda S. Pneumatic library for AMESim. Fluid Power Systems and Technology, (1998), p. 185-195.
[2] GH263-060 proportional electromagnet sample data.
[3] Roccatello A., Manco S., Nervegna N. Modeling a variable displacement axial piston pump in a multibody simulation environment [C]. American Society of Mechanical Engineers (ASME), Torino, (2006), p. 456-468.
[4] Wong J. Y. Theory of Ground Vehicles [M]. John Wiley & Sons, New York, (2001), p. 169-174.
[5] Stringer, John. Hydraulic System Analysis [J]. The Macmillan Press Ltd., 1976.
[6] Ying Sun, Ping He, Yunqing Zhang, Liping Chen.
Modeling and Co-simulation of Hydraulic Power Steering System [C]. 2011 Third International Conference on Measuring Technology and Mechatronics Automation. IEEE, 2011: p. 595-600.
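The main-spool behavior reported in Section 4.2 (no motion until the preload is overcome, then displacement rising to the 15.2 mm limit) can be roughly reproduced from the Table 2 values. This is a simplified sketch: the control pressure is idealized as a linear 30 bar ramp over 0.7 s, and flow forces are ignored, so the exact 0.13 s delay of the paper's pilot curve is not reproduced.

```python
import math

# Spool motion under a ramped control pressure, using Table 2 values:
# piston diameter 15 mm, mass 0.02 kg, viscous friction 15 N/(m/s),
# spring preload 15 N, spring rate 10000 N/m, displacement limit 15.2 mm.
# The linear 30 bar ramp over 0.7 s idealizes the pilot pressure curve.
AREA = math.pi * (0.015 / 2) ** 2          # piston area, m^2
M, C, K, F0, XMAX = 0.02, 15.0, 10000.0, 15.0, 0.0152

dt, x, v = 1e-5, 0.0, 0.0
for n in range(int(1.0 / dt)):
    p = 30e5 * min(n * dt / 0.7, 1.0)      # control pressure, Pa
    f = p * AREA - F0 - K * x - C * v      # pressure vs. spring and friction
    if x <= 0.0 and f < 0.0:
        v = 0.0                            # preload holds the spool seated
        continue
    v += dt * f / M                        # semi-implicit Euler step
    x = min(max(x + dt * v, 0.0), XMAX)    # hard stops at 0 and 15.2 mm
print(f"final displacement: {x * 1000:.1f} mm")  # reaches the 15.2 mm limit
```

With these parameters the peak pressure force (about 530 N on the 15 mm piston) far exceeds the roughly 167 N of spring force at full stroke, so the spool ends hard against the 15.2 mm stop, matching the final value of Fig. 8.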

Visual Modeling and Simulation Technology of the Loading Process for Landing Craft

Vol. 6, No. 4, Aug. 2015, Command Information System and Technology
Practice and Application
doi: 10.15908/kt cist.2015.04.014

Visual Modeling and Simulation Technology of the Loading Process for Landing Craft

Yan Changsheng 1, Huang Yanyan 1,2, Wang Jianyu 1
(1 School of Automation, Nanjing University of Science & Technology, Nanjing 210094, China)
(2 The 28th Research Institute of China Electronics Technology Group Corporation, Nanjing 210007, China)

Abstract: To achieve efficient and timely weapon loading of landing craft, an overall framework of visual modeling and simulation technology is proposed, and four key techniques are analyzed in detail: modeling, simulation program design based on the Microsoft Foundation Classes (MFC), multi-degree-of-freedom (DOF) handling, and collision detection.

Taking the tank-loading process of a landing craft as an example, a visual simulation application is implemented based on the MultiGen Creator and Vega Prime development platforms.

The simulation results have a strong demonstrative effect and help support the evaluation and analysis of naval logistics support capability.

Keywords: landing craft loading; modeling and simulation; three-dimensional model; collision detection; multi-degree of freedom
CLC number: TP391.9    Document code: A    Article ID: 1674-909X(2015)04-0073-08

Visual Modeling and Simulation Technology of Loading Process for Landing Craft

Yan Changsheng 1, Huang Yanyan 1,2, Wang Jianyu 1
(1 School of Automation, Nanjing University of Science & Technology, Nanjing 210094, China)
(2 The 28th Research Institute of China Electronics Technology Group Corporation, Nanjing 210007, China)

Abstract: To achieve effective and timely weapon loading for the landing craft, an overall framework for visual modeling and simulation techniques is proposed. Four key techniques in the process are analyzed, including modeling, design of the simulation program based on Microsoft Foundation Classes (MFC), multi-degree-of-freedom (DOF) handling, and collision detection technology. The visual simulation program is realized on the MultiGen Creator and Vega Prime development platforms, taking a landing craft loading tanks as an example. The simulation results have a demonstration effect, thus helping the assessment and analysis of naval logistics support capability.

Keywords: loading for the landing craft; modeling and simulation; three-dimensional model; collision detection; multi-degree of freedom (DOF)

0 Introduction

The landing craft, also called an amphibious vessel, is a practical weapon of modern naval landing warfare, designed to transport landing troops together with their weapons, equipment, and supplies.

Vibration and Acoustics

Vibration and acoustics are two interconnected fields that play a crucial role in various aspects of our daily lives. From the noise produced by machinery to the vibrations felt in buildings and structures, these phenomena have both positive and negative impacts on our environment and well-being. In this response, we will explore the significance of vibration and acoustics, their effects on human health, the measures taken to mitigate their negative effects, and the advancements in technology that aim to improve these areas.

First and foremost, it is important to understand the significance of vibration and acoustics in our surroundings. Vibration refers to the oscillations of an object or a system, often resulting in the production of sound. Acoustics, on the other hand, is the study of sound and its behavior in various environments. Both of these fields are essential in the design and operation of machinery, buildings, and transportation systems. For instance, in the automotive industry, engineers and designers need to consider the vibrations and acoustics produced by the vehicle to ensure a comfortable and safe driving experience for the passengers. Similarly, in the construction industry, the impact of vibrations and noise on nearby residents and structures must be taken into account to prevent any potential damage or disturbance.

The effects of vibration and acoustics on human health cannot be overlooked. Excessive exposure to noise and vibrations can lead to various health issues, including hearing loss, stress, sleep disturbances, and even cardiovascular problems. In industrial settings, workers are often exposed to high levels of noise and vibrations, which can have detrimental effects on their well-being. Additionally, individuals living in urban areas are constantly exposed to traffic noise and other sources of environmental noise, which can have long-term implications for their health.
It is crucial to address these concerns and implement measures to mitigate the negative effects of vibration and acoustics on human health.

To address these negative impacts, various measures are taken to control and reduce the effects of noise and vibrations. In industrial settings, engineering controls such as vibration isolators, acoustic enclosures, and sound-absorbing materials are used to minimize the transmission of noise and vibrations. Additionally, administrative controls such as limiting the duration of exposure and providing personal protective equipment are implemented to protect workers from the harmful effects of noise and vibrations. In urban areas, city planners and policymakers work towards implementing noise barriers, soundproofing measures, and traffic management strategies to reduce the impact of environmental noise on residents. These measures aim to create a more comfortable and safe environment for individuals exposed to high levels of noise and vibrations.

Advancements in technology have played a significant role in improving the management of vibration and acoustics. In the automotive industry, active noise cancellation systems have been developed to reduce the interior noise of vehicles, providing a quieter and more comfortable driving experience. Similarly, in the construction industry, advanced modeling and simulation tools are used to predict and mitigate the effects of vibrations on buildings and structures. Furthermore, the development of smart materials and innovative engineering solutions has led to the creation of more effective vibration isolators and soundproofing materials. These technological advancements continue to drive progress in the field of vibration and acoustics, offering new opportunities to address the challenges associated with noise and vibrations.

In conclusion, vibration and acoustics are integral aspects of our environment that have a profound impact on our daily lives.
From the design of machinery and buildings to the management of environmental noise, these phenomena play a crucial role in various industries and settings. It is essential to recognize the effects of vibration and acoustics on human health and well-being, and to implement measures to mitigate their negative impacts. Through the use of advanced technology and innovative solutions, we can continue to improve the management of vibration and acoustics, creating a more comfortable and safe environment for all.
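The active noise cancellation mentioned above works by destructive interference: the system emits an anti-phase copy of the noise so that the two waveforms sum to nearly zero. A minimal idealized illustration (real systems must estimate the anti-noise adaptively and never cancel perfectly):

```python
import math

# A 100 Hz "noise" tone sampled at 8 kHz, and a perfectly inverted copy.
sample_rate, freq = 8000, 100.0
noise = [math.sin(2 * math.pi * freq * n / sample_rate) for n in range(800)]
anti = [-s for s in noise]                     # ideal anti-phase signal
residual = [a + b for a, b in zip(noise, anti)]
print(max(abs(r) for r in residual))           # 0.0: complete cancellation
```

In practice the anti-noise arrives with small amplitude and phase errors, so cancellation is only partial; the ideal case above just shows why inverting the waveform is the goal.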

Framework of grinding process modeling and simulation based on microscopic interaction analysis

Xuekun Li*, Yiming Rong
Computer-Aided Manufacturing Lab, Department of Mechanical Engineering, Worcester Polytechnic Institute, Worcester, MA 01609, USA
Robotics and Computer-Integrated Manufacturing 27 (2011) 471-478. doi:10.1016/j.rcim.2010.06.029
Presented at the 19th International Conference on Flexible Automation and Intelligent Manufacturing.
* Corresponding author. E-mail address: xuekunli@ (X. Li).

Article history: received 7 December 2009; received in revised form 23 May 2010; accepted 25 June 2010.
Keywords: Grinding; Modeling; Simulation; Framework; Microscopic interactions

Abstract. This paper describes a framework of grinding process modeling to understand the grinding fundamentals and design grinding processes with predictive performance. The model regards grinding as a time-dependent process and an integration of microscopic interactions in the wheel-workpiece contact zone, including cutting, plowing, and sliding as well as other frictional interactions. Grinding process control and design are in fact the management and balancing of all these interactions. The principles of the microscopic interactions are analyzed and used to correlate the grinding process input parameters with the performance output. (c) 2010 Elsevier Ltd. All rights reserved.

1. Introduction

Grinding is a special machining process with a large number of mutually influencing parameters; it can be considered as a process in which thousands of irregular cutting edges interact with the workpiece simultaneously at high speed. Given the complex nature of grinding, it would be impossible to synthesize the process without a systematic approach. Taking the process as a whole, every abrasive process is influenced by the abrasive product used, the machine tool involved, the workpiece material, and the operational variables. These four input categories interact with each other, and together they culminate in the process measures, technical output, and system output.

Irrespective of the choice of variables in the input categories, for every grinding process it is possible to visualize the basic microscopic wheel-workpiece interactions in terms of the grain-workpiece interface, bond-workpiece interface, chip-workpiece interface, and chip-bond interface, as shown in Fig. 1. Depending on the engagement condition, the grain-workpiece interface can be further divided into cutting, plowing, and sliding modes. Any minute change in these six modes can result in dramatic changes in the output of the system. This input/output representation, the "system approach", greatly simplifies the understanding and use of the principles of machining and tribology to manage and/or improve grinding processes. Hence every abrasive machining process control is an effort to balance cutting (surface generation) against the tribological interactions of plowing/sliding while eliminating all the other frictional interactions [1]. Proper measurement and analysis immensely help application engineers in such strategic management of grinding processes.

Moreover, grinding processes exhibit a strong time-dependent characteristic, which is a combination of all microscopic interactions changing as a function of time. In industry, where the grinding power signal is widely measured for process monitoring, the power curve shows a steady and gradual change as the wheel wears, loads, or glazes, as in Fig. 2. Superimposing the power profiles of cycle 5 and cycle 1 makes the change visible, as shown in Fig. 3. Within one individual grinding cycle, which consists of several segments (rough, semi-finish, finish, spark-out, etc.), the MRR-power draw can be obtained by curve fitting to a straight line, and its change from cycle 1 to cycle 5 tells the "inside story" of the grinding process. An in-depth analysis of the MRR-power draw (Fig. 3) leads to the decomposition of the power into threshold and cutting components and other time-dependent components. Each of these components is in turn associated with specific aspects of the microscopic interactions, as well as with the alteration of wheel properties that leads to such interactions. Fig. 3 gives a qualitative understanding of the MRR-power draw change, which can indicate grinding wheel surface conditions but is still insufficient to provide an explicit solution for grinding optimization. Quantification of this power curve superimposition and of the MRR-power draw in terms of the microscopic interactions and their change is a key aspect of managing modern grinding processes.

2. Literature review

The literature on grinding process modeling is rather extensive, and it would not be possible to cover it in any detail. As grinding force contributes to almost all aspects of grinding process output, only pertinent literature dealing with force modeling is covered.

Malkin and Guo [2] decomposed the grinding force into chip formation force, plowing force, and sliding force. The chip formation energy was related to the melting energy for iron, which is about u_ch = 13.8 J/mm^3. The tangential plowing force per unit width was estimated to be 1 N/mm for steels. Sliding was associated with the rubbing of dulled flattened areas on the abrasive grain tips (wear flats) against the workpiece surface. The grinding force was thus a summation of all these components. Inasaki [3] measured the cutting edges by counting the peak points on the wheel surface; the cross-section area was calculated automatically in his simulation software to obtain the force acting on each single grain, and the integration of the force over all cutting edges gave the grinding force on a global scale. Chen and Rowe [4] modeled a grinding wheel surface with statistical methods; the single-grain cutting force was regarded as comparable to the indenter-specimen interaction in a Brinell hardness test in the absence of friction, and the kinematic simulation generated the active grain number and the cutting force on each grain. Badger and Torrance [5] used both Challen and Oxley's 2D plane-strain slip-line field theory and Williams and Xie's 3D pyramid-shaped asperity model to calculate the grinding force on each single grain. Hou and Komanduri [6] incorporated the random nature of grain distribution into their work; the dynamic grinding force was formulated as the convolution of a single-grit force and the grit density function. All these works suggest that the primary modules of a grinding process model should include a grinding wheel model, a kinematics model, and a single grain cutting model.

[Fig. 1. System approach developed by Dr. Subramanian [1]: grain-workpiece interface (cutting: material removal; plowing: material displacement; sliding: surface modification), chip-bond, chip-workpiece, and bond-workpiece interfaces. Fig. 2. Grinding power change in typical OD grinding processes: (a) an outer diameter (OD) grinding process and (b) measurement of grinding power and wheel displacement. Fig. 3. Superimposition of grinding power signals and its correlation with microscopic interaction modes: (a) superimposition of grinding power curves and (b) qualitative decomposition of grinding power.]
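The force-summation principle shared by the reviewed models (e.g. Inasaki [3]) can be sketched as follows: the force on each active cutting edge is a specific force times its engagement cross-section area, and the wheel-level force is the sum over all active edges. The specific-force constant and the engagement areas below are illustrative assumptions only, not values from the cited works.

```python
# Sketch of per-grain force summation: total grinding force = sum over
# active cutting edges of (specific force) x (engagement cross-section).
# The specific-force value and grain areas are illustrative assumptions.

def grain_force(cross_section_mm2, specific_force=2.0e4):
    """Force (N) on one grain: specific force (N/mm^2) x engagement area (mm^2)."""
    return specific_force * cross_section_mm2

def total_grinding_force(engagement_areas_mm2):
    """Sum the single-grain forces over all active cutting edges."""
    return sum(grain_force(a) for a in engagement_areas_mm2)

# Five hypothetical active grains with engagement areas in mm^2:
areas = [0.001, 0.0008, 0.0015, 0.0005, 0.0012]
print(total_grinding_force(areas))  # about 100 N with the assumed constant
```

In the full models the specific force is not constant but a function of engagement depth, which is exactly the refinement Section 3.2.1 introduces.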
However, the chip-workpiece, chip-bond, and bond-workpiece interfaces, and the time-dependent properties of grinding processes, are overlooked in current research. For such reasons, a grinding process model is needed to quantify the significance and change of each single microscopic interaction as a function of time.

3. Structure of the grinding process model

The understanding of grinding processes in terms of time-dependent microscopic interactions sheds light on the principles for the modeling. First, the wheel surface condition and its change during the process is the fundamental cause of the time-dependent behavior of grinding. Therefore, a grinding wheel model is a prerequisite for grinding process modeling, through which the microscopic wheel properties and their change as a function of time can be represented. Second, in order to specify and quantify the six interaction modes, criteria must be deduced for the identification and modeling of each mode. The output of each single interaction mode, such as force (or power) consumption and heat source generation, should be correlated with the input parameters in the microscopic interaction model. Third, to calculate the cumulative effect of all microscopic modes in the wheel-workpiece contact zone, a process integration model is necessary, from which the contact condition of each interaction can be specified; the integration of the output force (or power) provides the grinding force (or temperature) at macro-scale, and the heat source, which is discrete in nature, can be deduced from the microscopic force. In addition, the instantaneous output of the process can serve as feedback that affects the process input. In total, the time-dependent grinding process modeling can be broken down into three levels (Fig. 4):

(1) A grinding wheel model, which describes both surface topographical and mechanical properties.
(2) The microscopic interaction model, which serves to categorize and quantify the performance of each individual mode.
(3) A process integration model, which is used to study the collaborative performance of the individual microscopic interaction modes.

[Fig. 4. Framework of the grinding process model.]

3.1. Grinding wheel model

Currently, there are two popular approaches to grinding wheel modeling. One is to use pure mathematical methods to simulate the wheel surface; the other is to apply statistical approaches considering the random distribution of grains on the grinding wheel surface [7]. However, the pore volume and the mechanical properties of the wheel, which are critical to the grinding process, cannot be obtained from any of the reported methods. In order to provide a virtual grinding wheel morphology equivalent to a real product in terms of topography and mechanical features, a through-the-process modeling method is proposed. The idea is to use mathematical methods to imitate each wheel fabrication step, from raw material mixing to final wheel dressing. Not only the composition of the wheel (grain size, grain shape, grain fraction, and bond fraction) but also the mechanics and bond material diffusion during wheel firing are considered. After dressing simulation, the virtual wheel surface should bear resemblance to the real product in terms of static grain count, protrusion height, effective pore volume, and local wheel hardness, as indicated in Fig. 5.

[Fig. 5. Methodology of grinding wheel modeling: wheel fabrication procedure and modeling algorithms; virtual wheel modeling parameters (grain hardness, linear grain count (#/mm), protrusion height, pore volume, wheel sharpness, wheel hardness); and the output virtual wheel surface.]

3.2. Microscopic interaction analysis

3.2.1. Grain-workpiece interaction: cutting and plowing

Cutting and plowing are the most fundamental interactions in grinding; they modify the workpiece surface directly and dominate the material removal efficiency. Moreover, the chip generation in microscopic cutting also contributes to the chip-bond and chip-workpiece interfaces. Fig. 6 shows the major factors that influence single-grain micro-machining in grinding processes. The output from the single-grain micro-machining analysis for the process integration in the next step typically includes the cutting force, side-flow geometry, and heat source density for mode discrimination and chip volume generation calculation. Meanwhile, the surface modification by one grain influences the material removal of the successive grain, as indicated in Fig. 7 [8]. Therefore, the mechanisms of cutting and plowing at the microscopic level should be established for a comprehensive understanding of grinding. Although a number of grinding experiments with a single abrasive grain have been conducted, it is still quite intricate to establish these mechanisms in 3D because of the difficulty of measuring force, temperature, and workpiece material deformation. In contrast, finite element models and packaged FEM software are capable of describing metal cutting processes explicitly. Therefore, finite element modeling can be applied to investigate single-grain material removal under a wide range of grinding conditions. This can quantify the force (or energy) consumption, the chip generation mechanism, and the localized material deformation, which are difficult to acquire from common grinding knowledge and single-grain tests alone. In this research, a commercial FEM software package, AdvantEdge(TM), which incorporates the thermo-mechanical properties of the material, is employed to study single-grain material removal.

In grinding, an abrasive grain can be considered inverse-cone shaped or pyramid shaped, varying with wheel specification. During grinding, an abrasive grain is seldom fully engaged with the workpiece; instead it contacts the workpiece only partially, as indicated in Fig. 7. At moment t, the force consumption can be expressed as the product of the specific cutting force and the grain-workpiece engagement cross-section area A(t) [3]. In addition, Fig. 8 indicates that, according to micro-cutting theory, the specific cutting force is not a constant but a function of the grain-workpiece engagement depth d. The side-flow formation of each single grain in cutting and plowing also needs to be considered, as the side-flow geometry affects the material removal of the successive grain. The side-flow shape is considered to be a spherical cap, characterized by width b and height h. The chip volume generated during a time interval Dt can be derived for the loading force calculation, which is explained later in this paper. Through the FEM simulation, the force consumption, side-flow geometry, and chip volume generation for each grain-workpiece contact couple can be deduced from the following equations:

F = Specific_Force * A(t)    (1)
Specific_Force = f(d)    (2)
F = f(d) * A(t)    (3)
b = f(v, d, grain shape, t)    (4)
h = f(v, d, grain shape, t)    (5)
chip volume = f(b, h, grain shape, t) * v * Dt    (6)

[Fig. 6. Framework of the single grain cutting model. Fig. 7. Material removal by a single grain and the successive cutting grain. Fig. 8. Single grain micro-machining in the grinding process: specific cutting force and cutting force versus depth of cut, with plowing (no chip) and cutting (chip forms) regimes.]

3.2.2. Grain-workpiece interaction: sliding

Fig. 9 shows the grain-workpiece sliding that occurs when a wear flat develops on the grain tips. The wear flat area gradually increases with grinding time, and the rate of growth depends on the grain-workpiece combination, the grinding parameters, and the environment. With the growth of the wear flat, both the tangential and the normal forces increase, resulting in further increases in grinding energy consumption, grinding temperature, and thermal damage [2]. The total wear flat area, the summation of the wear flat areas on all grain tips, is found to be in a linear relationship with the grinding force [9]. Considering the proportional relationship between the grain-workpiece friction force and the grain-workpiece contact area, the sliding force (power) consumption is

F_GW = mu_GW * p * A_GW    (7)

where mu_GW is the friction coefficient between grain and workpiece material, p is the pressure on the interface, which can be derived from the preceding cutting simulation, and A_GW is the grain-workpiece contact area.

[Fig. 9. Grain-workpiece sliding in grinding.]

According to the tool wear model developed by Usui et al. [10], the wear flat development during grinding can be estimated as a function of time as

dA_GW = C * d^n * V_s * exp(-lambda/theta) dt    (8)

Therefore, the sliding force, which is equivalent to the glazing force, is

F_GW = mu_GW * p * Int[ C * d^n * V_s * exp(-lambda/theta) dt ]    (9)

3.2.3. Chip-bond and chip-workpiece interactions

Chip-bond and chip-workpiece friction, absent in other cutting processes, is one key area that distinguishes grinding from cutting. These interactions are usually associated with wheel loading, one of the undesirable phenomena in grinding. As wheel loading increases, additional energy is consumed by the increased friction. Loading is the criterion for wheel dressing with fine grinding in 15% of cases or even more [11]. Wheel loading can be defined as the accumulation or adhesion of grinding chips in the inter-grain space, as indicated in Fig. 10. It can therefore be inferred that the relationship between the chip volume generated by a grain and the pore volume in front of the grain is likely to influence the tendency toward loading. Aside from chemical effects, loading can clearly be avoided by an appropriate choice of grinding and wheel parameters. One possible solution that does not reduce grinding efficiency is to use a wheel with high porosity or improved pore shape for easier chip flow [12]. With this understanding, the loading force (power) consumption can be modeled as a function of the ratio between the leftover chip volume and the pore volume:

F_loading = mu_avg * p_chip * (V_chip / V_pore)    (10)
V_chip = f * Int[t1, t2]( chip volume dt ) = f * Int[t1, t2]( f(b, h, grain shape, t) dt )    (11)

where mu_avg is the average friction coefficient for the chip-bond and chip-workpiece interfaces; p_chip is the pressure in the interface, which is associated with the wheel hardness; V_pore is the effective pore volume in front of a contacting grain; and V_chip is the chip volume remaining in the pore in front of the grain, which is the summation of the chip leftover from previous cuts and the chip generated by the current cut. The factor f is related to the cleaning efficiency of the coolant: a smaller f corresponds to higher-pressure coolant, which evacuates chip clogging from the pores more effectively and thus reduces V_chip and, consequently, the loading force.

3.2.4. Bond-workpiece interaction

The bond-workpiece interface is another key area that distinguishes grinding from cutting. As grains wear or break out of the bond material, some of the bond rubs against the workpiece, consuming energy and thereby raising the specific energy requirement and heat generation, as shown in Fig. 10(b). Bond-workpiece friction seems insignificant in conventional grinding; however, it plays an important role in superabrasive machining, where metal bond is used as the bonding agent [13]. The calculation of the friction force between bond and workpiece material is similar to that for grain-workpiece sliding; the proportional relationship between the bond-workpiece friction force and the bond-workpiece contact area still applies. Therefore, the bond-workpiece friction force can be expressed as

F_BW = mu_BW * p * A_BW(t)    (12)

where mu_BW is the friction coefficient between the bond and the workpiece material, p is the strength of the bond, which is associated with the wheel hardness, and A_BW is the bond-workpiece contact area.

[Fig. 10. Chip-bond friction, chip-workpiece friction, and bond-workpiece friction.]

3.3. Process integration

The process integration is carried out to calculate the collaborative impact of all microscopic interaction modes, and it can be divided into a kinematics simulation and a kinetics analysis (Fig. 11). When the kinematics simulation starts as an iteration procedure, the wheel surface and the workpiece are transferred into the same coordinate system. At t = 0, the grinding wheel stands some distance away from the workpiece without contacting it. At each time increment Dt, the wheel rotates and translates linearly toward the workpiece according to the wheel speed and feed rate. As soon as the wheel contacts the workpiece, the localized contact condition of each individual active grain i is determined in terms of the grain-workpiece engagement cross-section area CSA_i. By calling the data from the microscopic interaction analysis, the six modes can be recognized and identified. The plastic material pile-up on both sides of the grain tip is also considered as a function of CSA, so the workpiece surface is updated at each simulation iteration step. The chip volume generation is likewise determined as a function of CSA.

In the kinetics analysis, the single-grain force for material removal is determined as the product of the specific cutting force and the engagement cross-section area A, and the loading force is calculated from the accumulated chip volume. Taking the wheel-workpiece contact zone as a whole, the number of cutting and plowing grains, the grinding force, and the resultant surface texture can be derived from the integration, as indicated in Fig. 12. The iteration continues until the time reaches the preset value T. In addition, after each iteration step the localized force is compared with the corresponding wheel surface mechanical properties, so the wheel surface condition is updated considering bond breakage and grain wear flat:

Grinding force = Cutting force + Plowing force + Sliding force + Loading force + Bond-Work force    (13)

[Fig. 11. Grinding kinematics simulation for process integration. Fig. 12. Force integration for kinetics analysis and simulated ground surface.]

4. Potential applications

As a whole, the grinding process model enables the decomposition of the grinding force components as a function of time. Since the grinding force components are associated with changes in the grinding wheel surface properties, the model helps predict the wheel surface evolution for grinding diagnosis and optimization. Furthermore, the model enables the calculation of the grinding force (or power) for a full cycle, which consists of several segments: rough, semi-finish, finish, spark-out, etc. Therefore, grinding process design can be carried out proactively, eliminating "trial and error". In addition, the grinding wheel model itself can be used to guide the development and optimization of grinding wheels. For verification of the process model, an experimental setup should be developed to verify the grinding wheel model and the single-grain micro-machining model.
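The iteration idea above can be illustrated numerically: per Eq. (13), the total grinding force at each time step is the sum of the mode forces, and the sliding term grows with wear-flat area over time. The sketch below uses a linear simplification of the Eq. (8) wear growth, and all constants are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the process-integration kinetics of Eq. (13), with a
# time-growing sliding term standing in for wear-flat development.
# All numbers are illustrative assumptions.

def sliding_force(t, mu_gw=0.3, p=50.0, wear_rate=0.02):
    """Eq. (7)-style sliding force mu * p * A_wear, with A_wear = wear_rate * t
    (a linear simplification of the Eq. (8) wear model)."""
    return mu_gw * p * wear_rate * t

def grinding_force(t, cutting=45.0, plowing=12.0, loading=3.0, bond_work=2.0):
    """Eq. (13): cutting + plowing + sliding + loading + bond-work forces (N)."""
    return cutting + plowing + sliding_force(t) + loading + bond_work

# The total rises with time, mirroring the steady cycle-1 to cycle-5
# power increase shown in Figs. 2-3:
for t in (0.0, 1.0, 5.0):
    print(t, grinding_force(t))
```

A full implementation would replace the constant cutting and plowing terms with the per-grain integration of Section 3.3 and update the wheel state after each step.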
In addition, advanced monitoring and measurement techniques may be necessary to capture and quantify the loading phenomenon and to measure the cutting and plowing grains.

5. Conclusions and future research

A grinding process model framework based on the understanding and analysis of the time-dependent microscopic interactions has been developed, and the modules that constitute this model are described in detail. The model enables a quantitative description of the "inside story" of the grinding zone through separation and quantification of the six microscopic interaction modes. Further developments of this model include the incorporation of different grinding processes, such as slot grinding and profile grinding, and the application of more efficient computation algorithms.

Acknowledgements

This research is partially supported by the Worcester Polytechnic Institute Fellowship and the open project from the State Key Laboratory of Digital Manufacturing Equipment and Technology (Huazhong University of Science and Technology, China). The authors would like to thank the Surface Preparation Technologies Group at Saint-Gobain High-Performance Materials for helpful discussions and suggestions.

References

[1] Subramanian S, Ramanath, Tricard M. Mechanism of material removal in the precision grinding of ceramics. Journal of Manufacturing Science and Engineering 1997;119:509-19.
[2] Malkin S, Guo C. Grinding technology: theory and applications of machining with abrasives. Industrial Press; 2008.
[3] Inasaki I. Grinding process simulation based on the wheel topography measurement. CIRP Annals, Manufacturing Technology 1996;45(1):347-50.
[4] Chen X, Rowe WB. Analysis and simulation of the grinding process. Part 2: mechanics of grinding. International Journal of Machine Tools and Manufacture 1996;36(8):883-96.
[5] Badger JA, Torrance AA. A comparison of two models to predict grinding forces from wheel surface topography. International Journal of Machine Tools and Manufacture 2000;40(8):1099-120.
[6] Hou ZB, Komanduri R. On the mechanics of the grinding process, Part I: stochastic nature of the grinding process. International Journal of Machine Tools and Manufacture 2003;43(15):1579-93.
[7] Doman DA, Warkentin A, Bauer R. A survey of recent grinding wheel topography models. International Journal of Machine Tools and Manufacture 2006;46(3):343-52.
[8] Torrance AA. Modelling abrasive wear. Wear 2005;258:281-93.
[9] Hwang TW, Evans CJ, Malkin S. An investigation of high speed grinding with electroplated diamond wheels. CIRP Annals, Manufacturing Technology 2000;49(1):245-8.
[10] Usui E, Shirakashi T, Kitagawa T. Analytical prediction of cutting tool wear. Wear 1984;258.
[11] Jackson MJ, Khangar A, Chen X. Laser cleaning and dressing of vitrified grinding wheels. Journal of Materials Processing Technology 2007;185:17-23.
[12] Onchi Y, Matsumori N, Ikawa N. Porous fine CBN stones for high removal rate superfinishing. CIRP Annals, Manufacturing Technology 1995;44(1):291-4.
[13] Brinksmeier E, Heinzel C, Wittmann M. Friction, cooling and lubrication in grinding. CIRP Annals, Manufacturing Technology 1999;48(2):581-98.


A City Modeling and Simulation Platform Based on Google Map API

Peng Fuquan1, Su Jian2,*, Weng Wenyong2, and Wang Zebing2
1 College of Computer Science and Technology, Zhejiang University, Hangzhou, China. pengfuquan@
2 Lab of Digital City and Electronic Service, Zhejiang University City College, Hangzhou, China. {suj,wengwy,wangzb}@

Abstract. At present, China is at the most prominent stage of its social transition. At the same time, urban populations and vehicle numbers are increasing rapidly, and social interactions and large-scale events occur frequently in cities. All of this makes the problems of urbanization increasingly serious. To help solve these problems, city modeling has developed rapidly in recent years. This paper presents a Google Maps based platform for city modeling and simulation, which aims to ease the problems caused by urbanization; developing applications on this platform is feasible and cost-effective. The paper also introduces an application based on the Google Maps API for Flex, which implements functions such as adding, deleting, and modifying points and lines on the map, with marker properties that can be formulated dynamically; all added data is stored in a local database. On the basis of this platform, city modelers can reduce their workload and concentrate on modeling methods.

Keywords: Google map API, Flex, city modeling and simulation, Google Maps.

1 Introduction

With the rapid development of computer technology and space technology, and the ever-improving theory of computer graphics, GIS (Geographic Information System) technology has matured and gradually been recognized and accepted. In recent years, after the "Digital Earth" concept was proposed, GIS core technology has drawn the attention of more governments all over the world.
At present, GIS, which is known for managing spatial data, plays an increasingly important role in many areas, such as global change monitoring, military applications, resource management, city planning, land management, environmental studies, disaster prediction, and traffic management. Meanwhile, China is in a social transition: the rapid increase in urban population, growing social interaction, and more public activities make public facilities, schools, and other construction problems fundamental to the survival and development of cities. In particular, cities with millions of people, such as Beijing and Shanghai, face especially prominent urbanization pressure because of their growing floating populations. In order to provide a virtual scene and a basis for solving these problems, city modeling has developed greatly. To build a vivid and reliable city modeling platform, it must be combined with GIS technology. This paper mainly introduces a map platform based on the Google Maps API for city modeling and simulation.

2 Key Technology Introduction

2.1 Google Map API Description

The Google Maps API is a secondary development interface provided by Google. It offers many practical tools for working with the map on a page and for adding content to the map through a variety of services. The Google Maps API is a free service and can be used on any non-profit web site [1].

Google Maps has three forms: traditional maps, satellite maps, and hybrid maps. Traditional maps provide guidance to users while they are moving and help them find directions directly. Satellite maps give users real aerial photography of the current location, for an immersive feeling.
Combined with traditional maps, this gives users a more accurate sense of direction, a result unreachable with traditional GPS technology [2].

Currently Google Maps offers several API services: the Maps JavaScript API, Maps API for Flash, Maps Data API, Earth API, and Google Static Maps API. This platform is developed on the Maps API for Flash version. Since Google opened up its API, application developers have been able to provide useful and simple map services for the public. Compared with traditional GIS development, the basic features can be described as follows:

● Map operation. Google Maps can be shifted (mouse drag) and zoomed freely.
● Pre-generated maps. The map is not generated dynamically for each user request; it is pre-processed into an image pyramid, cut into tiles with quad-tree encoding, and stored on the server side. When the map window is shifted or zoomed, only the new tiles that fill the newly exposed area need to be downloaded, which makes full use of the browser's multi-threaded simultaneous downloads; tiles already downloaded need not be fetched again on subsequent accesses [3].
● Analysis. Google Maps can perform some spatial analysis, such as measurement, nearest-neighbor analysis, and path analysis. Recently, Google announced Street View, a feature that helps users understand the world through maps: people can wander in a 360° view of virtual city streets and watch the street scene, just as if walking in the real environment.
● Development cost. The Google Maps API is available for free; you need only apply for a key. This makes secondary development much easier, both for the map service and for development, and it enhances and extends the mapping service.
● Data update.
Google Maps offers map services in two ways: vector maps and high-resolution satellite images. Google updates the maps from time to time, so users can enjoy the latest map information. However, for national security and other considerations, real-time high-resolution satellite images are not available; commonly, QuickBird remote sensing images from about 4 years ago are used.

2.2 Technical Framework for Flex

Adobe Flex is a software development kit (SDK) released by Adobe Systems for the development and deployment of cross-platform rich Internet applications based on the Adobe Flash platform. Flex applications can be written using Adobe Flash Builder or with the freely available Flex compiler from Adobe. The application development process:

● Define an application interface using a set of pre-defined components (forms, buttons, and so on).
● Arrange the components into a user interface design.
● Use styles and themes to define the visual design.
● Add dynamic behavior (for example, one part of the application interacting with another).
● Define and connect to data services as needed.

3 System Architecture Description

The system uses MyEclipse as the development platform. Java was used to develop the controller layer, which interacts with the database, and Flex was used to develop the view layer. Tomcat 5.5 serves as the web application server, and the Hibernate framework was used to develop the data model layer. The system follows the currently popular MVC design pattern.

3.1 Integration between Flex and Java

JSP is usually used as the display language in traditional J2EE projects. With the rise of Flex, RIA (Rich Internet Application) clients bring a more attractive user interface and shorter response times, closer to desktop applications. This map platform uses LCDS (LiveCycle Data Services) to integrate Flex and Java.
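The pre-generated tile pyramid described in Section 2.1 can be sketched numerically. In the Web Mercator scheme that tiled web maps such as Google Maps use, the world at zoom level z is a 2^z x 2^z grid of 256-pixel tiles, so a pan or zoom only requires fetching the newly exposed tiles; the coordinates below are illustrative.

```python
import math

# Sketch of quad-tree tile addressing for a pre-generated map pyramid:
# converts a latitude/longitude pair to Web Mercator tile (x, y) at a
# given zoom level. The test location is illustrative.

def latlng_to_tile(lat_deg, lng_deg, zoom):
    """Standard Web Mercator tile coordinates on a 2^zoom x 2^zoom grid."""
    n = 2 ** zoom
    x = int((lng_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * n)
    return x, y

# Hangzhou city center (about 30.27 N, 120.16 E) at zoom level 10:
print(latlng_to_tile(30.27, 120.16, 10))  # -> (853, 421)
```

Each tile's (x, y, zoom) triple is exactly the quad-tree key under which the pre-rendered image is stored server-side, which is why only a handful of new tiles must be downloaded when the viewport moves.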
Please refer to the reference documentation for how to configure LCDS in MyEclipse.
3.2 Flex Integration with the Google Maps API
Google Maps provides not only a JavaScript interface but also a Flash/Flex interface. After the Flex development environment has been configured, three steps remain to develop a Google Maps application:
●Apply for a Google Maps API key.
●Download the Google Maps API for Flash SDK.
●Configure the library path of the Flex project: add map_flex_*.swc, found in the directory of the Google Maps API for Flash SDK, to the Flex build path.
3.3 Database Design Overview
The map platform uses a SQL Server 2000 database, which was designed with PowerDesigner.
1) Database E-R diagram
According to the project requirements and the Google Maps API component model, the database was structured as follows. There are nine tables in the E-R diagram. Because there are two kinds of users, administrators and general users, there is a user table. Because points and lines are the two types of markers that can be added, there are point and line tables. The names of the points and lines and their attributes are both defined dynamically, so three tables are needed to manage the types of points and lines; the next section gives a detailed description. Because a line to be added may consist of a number of line segments, there is also a table that stores its intermediate points.
2) Generation of Dynamic Tables
This system is a modeling platform and is not confined to one particular application. To meet the requirement that the attributes of points and lines can be defined dynamically, the program needs to create tables at runtime. Taking points as an example, different kinds of points with different properties are created in the database. We added a table that describes all the types of points; the relationship between a point type and its attributes is one-to-many.
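To make the runtime table creation concrete, a minimal sketch of generating the DDL for a user-defined point type might look as follows. The helper name, the point_&lt;type&gt; naming scheme, and the fixed columns are assumptions for illustration, not the platform's actual schema.

```java
// Minimal sketch of dynamic DDL generation for a user-defined point type.
// Names (buildCreateTable, point_<type>) are illustrative, not the platform's
// actual schema. In a real system the type name must be validated as a safe
// SQL identifier, and the resulting statement would be executed via JDBC,
// since Hibernate mappings cannot cover tables created at runtime.
import java.util.List;

public class DynamicTable {
    // Build a CREATE TABLE statement from a type name and its attribute names.
    static String buildCreateTable(String typeName, List<String> attributes) {
        StringBuilder sql = new StringBuilder();
        sql.append("CREATE TABLE point_").append(typeName).append(" (");
        sql.append("id INT PRIMARY KEY, lat FLOAT, lng FLOAT");
        for (String attr : attributes) {
            sql.append(", ").append(attr).append(" VARCHAR(255)");
        }
        sql.append(")");
        return sql.toString();
    }

    public static void main(String[] args) {
        String ddl = buildCreateTable("school",
                List.of("name", "address", "scale", "contact"));
        System.out.println(ddl);
    }
}
```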
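The intermediate-point table mentioned above must preserve the order of a line's coordinates so the line can be redrawn later. One way to encode and decode the coordinate series is sketched below; the "lat,lng;lat,lng" text format is an assumption for illustration, not the platform's actual storage format.

```java
// Sketch of round-tripping a line's intermediate points through a single
// text value. The semicolon/comma encoding is an illustrative assumption,
// not the platform's actual schema.
import java.util.ArrayList;
import java.util.List;

public class LineCodec {
    // Encode an ordered series of (lat, lng) pairs into one string.
    static String encode(List<double[]> points) {
        StringBuilder sb = new StringBuilder();
        for (double[] p : points) {
            if (sb.length() > 0) sb.append(';');
            sb.append(p[0]).append(',').append(p[1]);
        }
        return sb.toString();
    }

    // Decode the string back into the ordered point list.
    static List<double[]> decode(String s) {
        List<double[]> points = new ArrayList<>();
        for (String pair : s.split(";")) {
            String[] xy = pair.split(",");
            points.add(new double[]{
                    Double.parseDouble(xy[0]), Double.parseDouble(xy[1])});
        }
        return points;
    }

    public static void main(String[] args) {
        List<double[]> line = List.of(
                new double[]{30.26, 120.12}, new double[]{30.27, 120.13});
        String stored = encode(line);
        System.out.println(stored + " -> " + decode(stored).size() + " points");
    }
}
```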
For example, a school has many attributes, such as name, address, scale, and contact information. If the type of the point is public bicycle, its attributes are different from a school's. Because different types have different attributes, we need a table to store the attributes of each type. The point_type_detail table stores these attributes: through the id of a point, we can find all the attributes owned by its type. When we create the type table, we also store some SQL operations, such as adding, deleting, and modifying, in the table, so that the dynamic tables can be operated more conveniently.
4 System Design and Implementation
The system is divided into two function modules: we can not only call Google Maps services but also operate on the local database.
4.1 Google Maps API Service Overview
The Google Maps API is regularly extended, adding new functionality and features. This section covers the services used by the platform.
3) Address Inquiry
Using the address resolution (geocoding) service of the Google Maps API, we can implement an address lookup function. Address resolution is the process of converting a physical address (e.g., Zhejiang University) into a geographic location, i.e., latitude and longitude [4]. Address resolution is performed with the geocode() method of ClientGeocoder; for example, "Zhejiang University" can be located on the map and shown with its label [5].
4) Path Query
A route is added through the Directions object of the Google Maps API for Flash. The program code is as follows:
var dir:Directions = new Directions();
dir.addEventListener(DirectionsEvent.DIRECTIONS_SUCCESS, onDirLoad);
dir.addEventListener(DirectionsEvent.DIRECTIONS_FAILURE, onDirFail);
dir.load("from: " + from.text + " to: " + to.text);
For example, to get the path from Zhejiang University to Zhejiang Gongshang University:
Fig. 1.
Path query on the map platform
4.2 Add Points and Lines on the Map
5) Customize the Names and Attributes of Points and Lines
According to the user's input, we customize the name and attributes of each point and line. A listener calls jptype.creatpnttype(ptype); if the call succeeds, the corresponding SQL statement is constructed. Because the tables are dynamic, we do not use Hibernate to interact with the database here but use JDBC directly. Taking points as an example, two tables need to be operated on: the type table and the type_detail table. All the types of points and lines are then read and displayed.
6) Store Points and Lines in the Local Database
In the Google Maps API (Flash version), the Marker object is used to add points on the map and the Polyline object to add lines. The latitude and longitude of a position are obtained by listening for mouse events on the map. Adding a point requires a single coordinate (latitude and longitude), while adding a line requires a series of coordinates; in addition, all the points of a line must be saved so that the line can be displayed again when the data are read back from the database.
While adding points and lines, a message window pops up so that the user can conveniently enter information about them. It is worth noting that the information window directly shows a custom MXML page, so the customContent attribute of InfoWindowOptions is used; when customContent is set, the other attributes of InfoWindowOptions become invalid [6].
public var bjWindow:InfoWindowOptions = new InfoWindowOptions();
var m_content:ptAdd = new ptAdd();
m_tlng = latLng;
bjWindow.customContent = m_content;
marker.openInfoWindow(bjWindow);
Besides, the user must click the same position twice before a line is finally added to the map. The function operates as shown in the following figure [7].
Fig. 2.
Add points and lines on the map
When saving data to the database, the user needs to fill in not only the point or line window but also the dynamically generated window, which contains the properties of the selected type [8].
4.3 Display Points and Lines on the Map
Users read the data from the database to display points and lines on the map. First, the data are read from the database into an XML file; the Google Maps API (Flash version) provides an interface for reading XML. The points and lines stored in the database are then displayed on the map platform by reading this XML file, so we need to implement an interface that operates on the XML document [9]. Five operations are implemented: saving a marker in the XML document, deleting a marker, updating a marker, searching for a marker to check whether it is in the document, and reading data out of the document. The following function reads data from the XML document and displays it on the map [10]:
var xmlString:URLRequest = new URLRequest("markers.xml");
var xmlLoader:URLLoader = new URLLoader(xmlString);
xmlLoader.addEventListener("complete", readXml);
public function readXml(event:Event):void {
    var markersXML:XML = new XML(event.target.data);
    var markers:XMLList = markersXML..marker;
    for (var i:int = 0; i < markers.length(); i++) {
        var marker:XML = markers[i];
        var latlng:LatLng = new LatLng(Number(marker.lat), Number(marker.lng));
        var marker1:Marker = new Marker(latlng);
        map1.addOverlay(marker1);
    }
}
4.4 Custom Map Controls
There are two function buttons that let users hide or display the markers on the map more conveniently; they are custom map controls created by subclassing ControlBase. To create a custom control, you need to override the initControlWithMap() method of ControlBase, calling the superclass's method within the overriding one, and the control's position (ControlPosition) should be specified in the control's constructor.
5 Summary
This platform provides a map for city modeling and simulation.
Users can create a local database to implement the functions they require, such as a bus inquiry system; a detailed description is not given here.
Besides, we can see that the Google Maps API provides a very convenient tool for developing map services. At present the features are simple, but it is already a good beginning: developers can freely insert maps into their pages, and as a way of providing map services this is a big step forward.
The Chinese map data provided by Google are still relatively rough, not as detailed as those for the United States, and there are still many services for which no interface has been provided. With further improvement of the data and more service interfaces being opened, the Google Maps API will receive more attention and the city modeling platform will become more complete.
Acknowledgment. This work is funded by Zhejiang Provincial Education Department Research Project (No. Y200803064) and Hangzhou Science and Technology Project (No. 20080433T01).
References
1. Zhang, X., Shen, Q., Long, Y.: Map evaluation system and its application. Earth Information Science 11(3) (2009)
2. Tian, F.: Application for Digital City Information Platform Based on Google Maps. Computer & Telecommunication 11 (2008)
3. Geng, Q., Miao, L., Duan, Y., Li, J.: Research and application of Web map service system based on Google Maps API. Journal of China Institute of Water Resources and Hydropower Research 7(1) (2009)
4. The Official Google Maps API documentation, /apis/maps/documentation/
5. Du, J., Zhang, Z.: Study on City Information Integration Platform Based on WebGIS. Application Research of Computers 22(6) (2005)
6. Sun, X., Zhao, J.: Applying Google Maps API in WEBGIS. Control & Measurement 22(19) (2006)
7. Google Ride Finder, /ridefinder
8. Google Earth Blog, /
9. Wei, W.: Google Maps and Web application integration, /222/2111222.shtml
10. Yang, Z., Yang, M., Weng, Y.: Flex 3 RIA Illustration and intensive practice of development. Tsinghua University Press (2009)
