Modelling the linear viscoelastic behavior of silicate glasses near the glass transition point
SimMechanics Help File
• Getting Started
• Introducing SimMechanics Software
• Product Overview
  Product Definition
  Mechanical Simulation and Physical Modeling
• Related Products
  Required Products
  Other Related Products
• Running a Demo Model
  What the Demo Represents
  Viewing a Mechanical Drawing of the Conveyor
  What the Demo Illustrates
  Opening the Model
  Running the Model
  Modifying the Model
  Visualizing and Animating the Model
• What You Can Do with SimMechanics Software
  About SimMechanics Software
  Modeling Mechanical Systems
  Bodies, Coordinate Systems, Joints, and Constraints
  Sensors, Actuators, Friction, and Force Elements
  Simulating and Analyzing Mechanical Motion
  Visualizing and Animating Models
  For More Information
• Learning More
  Using the MATLAB Help System for Documentation and Demos
  Finding Special SimMechanics Help
• Modeling, Simulating, and Visualizing Simple Machines
• Introducing the SimMechanics Block Libraries
  About the SimMechanics Block Library
  Accessing the Libraries
  Using the Libraries
• Essential Steps to Building and Running a Mechanical Model
  About Machine Modeling and Simulation
  Essential Steps to Build a Model
  Essential Steps to Configure and Run a Model
• Modeling and Simulating a Simple Machine
  Modeling the Simple Pendulum
  Opening the SimMechanics Block Library
  The World Coordinate System and Gravity
  Configuring the Ground
  Configuring the Body
  Configuring the Joint
  Adding a Sensor and Starting the Simulation
• Visualizing a Simple Machine
  Visualizing and Animating the Simple Pendulum
  Starting Visualization
  Selecting a Body Geometry
  Displaying the Pendulum
  Modeling and Visualizing More Complex Machines
• Modeling and Simulating a Closed-Loop Machine
  Modeling the Four Bar Mechanism
  Viewing a Mechanical Drawing of the Four Bar Mechanism
  Counting the Degrees of Freedom
  Configuring the Mechanical Environment
  Setting Up the Block Diagram
  Configuring the Ground and Joints
  Configuring the Bodies
  Sensing Motion and Running the Model
  For More About the Four Bar Mechanism
• Representing Motion
• Kinematics and Machine Motion State
  About Kinematics
  Degrees of Freedom
  The State of Motion
  Home, Initial, and Assembled Configurations
  For More Information
• Representations of Body Motion
  About Body Motion
  Machine Geometry and Motion
  Reference Frames and Coordinate Systems
  Relating Coordinate Systems in Relative Motion
  Observing Body Motion in Different Coordinate Systems
  Representing Body Translations and Rotations
• Representations of Body Orientation
  About Body Orientation Representations
  Axis-Angle Representation
  Quaternion Representation
  Rotation Matrix Representation
  Euler Angle Representation
  Converting Rotation Representations
  Converting the Angular Velocity
• Orienting a Body and Its Coordinate Systems
  About the Body Orientation Examples
  Setting Up the Test Body
  Rotating the Body and Its CG CS Relative to World
  Rotating the Body Relative to Its Center of Gravity
  Creating and Rotating Body Coordinate Systems
  References
• User's Guide
• Reference
• Blocks
• Functions
• Examples
• Release Notes

Getting Started
If you have limited Simulink® and/or mechanical simulation experience, you will especially benefit from this Getting Started guide.

User's Guide
These chapters compose the SimMechanics™ User's Guide. They introduce you to SimMechanics software, help you to build simple models, and explain the general steps to modeling and simulating mechanical systems.
They also present SimMechanics tools and methods for analyzing mechanical motion.

Modeling Mechanical Systems: How to represent machines with block diagrams
Running Mechanical Models: How to set up and run your simulation, generate and use code, and troubleshoot simulation errors
Analyzing Motion: Advanced methods for analyzing motion
Motion, Control, and Real-Time Simulation: Advanced controls and code generation applications, based on the Stewart platform
Stewart Platform as SimMechanics Plant in Simulink Control Model

Modeling Mechanical Systems
SimMechanics software gives you a complete set of block libraries for modeling machine parts and connecting them into a Simulink® block diagram.
● Representing Machines with Models
● Modeling Grounds and Bodies
● Modeling Degrees of Freedom
● Constraining and Driving Degrees of Freedom
● Cutting Machine Diagram Loops
● Applying Motions and Forces
● Sensing Motions and Forces
● Adding Internal Forces
● Combining One- and Three-Dimensional Mechanical Elements
● Validating Mechanical Models
Consult Representing Motion to review body kinematics. If you need more information on rigid body mechanics, consult the physics and engineering literature, beginning with the Bibliography. Classic engineering mechanics texts include Goodman and Warner [2], [3] and Meriam [8]. The books of Goldstein [1] and José and Saletan [5] are more theoretically oriented.

Running Mechanical Models
SimMechanics software gives you multiple ways to simulate and analyze machine motion in the Simulink environment. Running a mechanical simulation is similar to running a simulation of any other type of Simulink model. It entails setting various simulation options, starting the simulation, interpreting results, and dealing with simulation errors. See the Simulink documentation for a general discussion of these topics. This chapter focuses on aspects of simulation specific to SimMechanics models.
● Configuring SimMechanics Models in Simulink
● Configuring Methods of Solution
● Starting Visualization and Simulation
● How SimMechanics Software Works
● Troubleshooting Simulation Errors
● Improving Performance
● Generating Code
● Limitations
● Reference

Analyzing Motion
SimMechanics analysis modes allow you to study machine motion beyond the simple forward dynamics integration of forces. This chapter explains how to specify machine motion, then deduce the necessary forces and torques, with the Inverse Dynamics and Kinematic analysis modes. You can also specify a machine steady state and analyze perturbations about any machine trajectory by trimming and linearizing your model, respectively.
● Mechanical Dynamics
● Finding Forces from Motions
● Trimming Mechanical Models
● Linearizing Mechanical Models
The Motion, Control, and Real-Time Simulation chapter covers more sophisticated motion analysis and control design techniques applied to more complex systems.

Motion, Control, and Real-Time Simulation
SimMechanics software and Simulink form a powerful basis for advanced controls applications: trimming and linearizing motion, analyzing and designing controllers, generating code from the plant and controller models, and simulating controller and plant on dedicated hardware. This chapter is a connected set of case studies illustrating these methods.
As its example system, the studies use the Stewart platform, a moderately complex, six degree-of-freedom positioning system.
● About the Stewart Platform Case Studies
● About the Stewart Platform
● Modeling the Stewart Platform
● Trimming and Linearizing Through Inverse Dynamics
● About Controllers and Plants
● Analyzing Controllers
● Designing and Improving Controllers
● Generating and Simulating with Code
● Simulating with Hardware in the Loop
● References
Before attempting these intricate case studies, you should understand the simpler motion analysis concepts, methods, and results of Analyzing Motion. Translating a CAD Stewart Platform in the Importing Mechanical Models chapter presents a related example, converting a Stewart platform computer-aided design assembly into a SimMechanics model.

Introducing Visualization and Animation
You can visualize your model's bodies using the SimMechanics visualization window. This overview explains the essentials of starting visualization and choosing body colors and geometries.
● About SimMechanics Visualization
● About Body Color and Geometry
● Hierarchy of Body, Machine, and Model Visualization Settings

Getting Started with the Visualization Window
The SimMechanics visualization window allows you to control how you view your model's bodies in both static display and dynamic simulation-based animation. It also allows you to record animations.
● Introducing the SimMechanics Visualization Window
● Controlling Body and Body Component Display
● Adjusting the Camera View
● Communicating with the Model from the Visualization Window
● Controlling Simulation from the Visualization Window
● Controlling Animation
● Recording Animation
● SimMechanics Visualization Menus and Their Controls

Customizing Visualization and Animation
You can customize the colors and geometries of visualized bodies in the SimMechanics visualization window. Choice of colors is intrinsic to visualization. Specifying a custom body geometry requires an external graphics file for each customized body. As an alternative to the visualization window, you can also visualize your mechanical system with virtual reality.
● About Custom SimMechanics Visualization
● Customizing Visualized Body Colors
● Customizing Visualized Body Geometries
● Visualizing with a Virtual Reality Client
● Reference

Importing Mechanical Models
Using SimMechanics software with computer-aided design (CAD) extends your mechanical modeling and simulation capabilities, allowing you to create SimMechanics models from CAD assemblies. This chapter covers what you need to get started with CAD translation. It assumes some familiarity with SimMechanics modeling, as explained in the SimMechanics Getting Started Guide and SimMechanics User's Guide.
● Introducing Mechanical Import
● Generating New Models from Physical Modeling XML
● Working with Generated Models
● Updating Generated Models Using Physical Modeling XML
● Controlling Model Update at the Block Level
● Troubleshooting Imported and Updated Models

Computer-Aided Design Translation
These case studies illustrate how to translate mechanical systems defined externally, as computer-aided design (CAD) assemblies, into mechanical models.
● About the CAD Translation Case Studies
● Translating a CAD Part into a Body
● Translating CAD Constraints into Joints
● Updating and Retranslating a CAD Pendulum
● Translating a CAD Robot Arm
● Translating a CAD Stewart Platform
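The chapters above describe block-diagram modeling rather than code. For orientation only, the following Python sketch (not part of SimMechanics; the gravity, length, damping, and time-step values are illustrative assumptions) shows what the forward dynamics simulated for the simple pendulum example amounts to: integrating the machine state from the applied torques.

    import math

    # Simple pendulum: theta'' = -(g/L) * sin(theta) - c * theta'
    # Forward dynamics: integrate the state (theta, omega) forward in time.
    g, L, c = 9.81, 1.0, 0.1                 # gravity, rod length, assumed damping
    theta, omega = math.radians(60.0), 0.0   # initial configuration
    dt, t_end = 0.001, 5.0

    t = 0.0
    while t < t_end:
        alpha = -(g / L) * math.sin(theta) - c * omega  # angular acceleration
        # Semi-implicit Euler (update omega first) keeps the oscillation stable.
        omega += alpha * dt
        theta += omega * dt
        t += dt

    print(f"theta(t={t_end}s) = {math.degrees(theta):.2f} deg")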
Amazing Cold Facts (English Essay, Grade 7)
Fascinating Cold Facts

The world around us is filled with countless wonders and intriguing phenomena that often go unnoticed in our daily lives. One such fascinating area is the realm of cold temperatures and their remarkable effects. From the icy landscapes of the Arctic to the chilling depths of the ocean, the power of cold can manifest in truly astonishing ways. Let us delve into some of the most captivating cold facts that may just leave you in awe.

To begin with, did you know that the coldest temperature ever recorded on Earth was an astounding -129°F (-89°C) at Vostok Station in Antarctica? This mind-boggling figure is colder than the surface of Mars and highlights the extreme conditions that can exist on our planet. Imagine the sheer resilience of the hardy researchers and scientists who brave such inhospitable environments in the pursuit of knowledge.

Another fascinating cold fact relates to the unique properties of water. At temperatures below 32°F (0°C), water transforms into a solid state, becoming ice. However, the true wonder lies in the fact that ice is less dense than liquid water. This anomaly is what allows ice to float on the surface of bodies of water, rather than sinking to the bottom. This remarkable characteristic is essential for the survival of aquatic ecosystems, as it prevents the entire water body from freezing solid and ensures the continued existence of life beneath the surface.

Moving on, the power of cold can also be harnessed for practical purposes. One remarkable example is the use of cryogenics, the study and application of extremely low temperatures. Cryogenic technology has enabled remarkable advancements in fields such as medicine, transportation, and energy storage. For instance, certain medical procedures, such as the preservation of human organs for transplantation, rely on cryogenic techniques to maintain the integrity and viability of the tissues. Additionally, the use of cryogenic fuels, such as liquid hydrogen, has revolutionized the aerospace industry, allowing for more efficient and powerful rocket propulsion systems.

Interestingly, the effects of cold can also be observed in the natural world beyond our planet. On the surface of Mars, for example, temperatures can plummet to a staggering -195°F (-125°C) during the winter months. This extreme cold has a profound impact on the Martian landscape, leading to the formation of unique geological features like dry ice glaciers and carbon dioxide ice caps. The study of these extraterrestrial cold environments provides valuable insights into the processes that shape the surfaces of other worlds and the potential for life in the universe.

Furthermore, the impact of cold on the human body is another fascinating area of exploration. At extremely low temperatures, the human body can undergo remarkable adaptations to cope with the harsh conditions. One such adaptation is the phenomenon of cold-induced shivering, where the body generates heat by rapidly contracting and relaxing muscles. This involuntary response helps to maintain the core body temperature and prevent hypothermia. Additionally, certain individuals have been documented to possess genetic traits that allow them to better withstand the effects of cold, such as increased cold tolerance and the ability to conserve body heat more efficiently.

The realm of cold also extends to the microscopic world, where the behavior of molecules and atoms is profoundly influenced by temperature. At extremely low temperatures, matter can exhibit unusual properties, such as superconductivity, where certain materials can conduct electricity without any resistance. This phenomenon has revolutionized technologies like magnetic resonance imaging (MRI) and particle accelerators, which rely on the unique properties of superconducting materials. Furthermore, the study of quantum mechanics, a fundamental branch of physics, has been greatly advanced by experiments conducted at cryogenic temperatures, where the behavior of subatomic particles can be observed and understood in unprecedented detail.

In the realm of the natural world, the impact of cold is also evident in the stunning beauty of ice formations. From the delicate snowflakes that grace our winter landscapes to the awe-inspiring glaciers that carve through mountainous terrain, the intricate and diverse structures of ice are a testament to the power of cold. These icy wonders not only captivate our senses but also provide valuable insights into the complex processes that shape our planet's climate and ecosystems.

Perhaps one of the most intriguing cold facts is the potential for life to thrive in the most extreme cold environments. In the depths of the ocean, where the pressure and darkness are immense, communities of microorganisms have adapted to survive and even thrive in near-freezing temperatures. These extremophiles, as they are called, challenge our understanding of the limits of life and inspire us to explore the vast and mysterious realms of the cryosphere, the frozen regions of our world.

In conclusion, the realm of cold is a treasure trove of fascinating facts and phenomena that continue to captivate and inspire us. From the record-breaking temperatures of the Antarctic to the quantum-level behavior of matter, the power of cold has shaped our world in countless ways. As we delve deeper into the mysteries of the cryosphere, we uncover new insights that not only expand our scientific knowledge but also ignite our sense of wonder and curiosity about the natural world. The captivating cold facts presented here are just a glimpse into the extraordinary realm of low temperatures, and there is much more to explore and discover in this fascinating domain.
Journal of Machine Learning Research 15 (2014) 3183-3186. Submitted 6/12; Revised 6/13; Published 10/14

ooDACE Toolbox: A Flexible Object-Oriented Kriging Implementation

Ivo Couckuyt*, Tom Dhaene, Piet Demeester
Ghent University - iMinds, Department of Information Technology (INTEC), Gaston Crommenlaan 8, 9050 Gent, Belgium

Editor: Mikio Braun

Abstract
When analyzing data from computationally expensive simulation codes, surrogate modeling methods are firmly established as facilitators for design space exploration, sensitivity analysis, visualization and optimization. Kriging is a popular surrogate modeling technique used for the Design and Analysis of Computer Experiments (DACE). Hence, over the past decade Kriging has been the subject of extensive research and many extensions have been proposed, e.g., co-Kriging, stochastic Kriging, blind Kriging, etc. However, few Kriging implementations are publicly available and tailored towards scientists and engineers. Furthermore, no Kriging toolbox exists that unifies several Kriging flavors. This paper addresses this need by presenting an efficient object-oriented Kriging implementation and several Kriging extensions, providing a flexible and easily extendable framework to test and implement new Kriging flavors while reusing as much code as possible.

Keywords: Kriging, Gaussian process, co-Kriging, blind Kriging, surrogate modeling, metamodeling, DACE

1. Introduction
This paper is concerned with efficiently solving complex, computationally expensive problems using surrogate modeling techniques (Gorissen et al., 2010). Surrogate models, also known as metamodels, are cheap approximation models for computationally expensive (black-box) simulations. Surrogate modeling techniques are well-suited to handle, for example, expensive finite element (FE) simulations and computational fluid dynamic (CFD) simulations. Kriging is a popular surrogate model type to approximate deterministic noise-free data.
First conceived by Danie Krige in geostatistics and later introduced for the Design and Analysis of Computer Experiments (DACE) by Sacks et al. (1989), these Gaussian process (Rasmussen and Williams, 2006) based surrogate models are compact and cheap to evaluate, and have proven to be very useful for tasks such as optimization, design space exploration, visualization, prototyping, and sensitivity analysis (Viana et al., 2014). Note that Kriging surrogate models are primarily known as Gaussian processes in the machine learning community. Except for the utilized terminology there is no difference between the terms and associated methodologies.*

While Kriging is a popular surrogate model type, not many publicly available, easy-to-use Kriging implementations exist. Many Kriging implementations are outdated and often limited to one specific type of Kriging. Perhaps the most well-known Kriging toolbox is the DACE toolbox[1] of Lophaven et al. (2002), but, unfortunately, the toolbox has not been updated for some time and only the standard Kriging model is provided. Other freely available Kriging codes include: stochastic Kriging (Staum, 2009),[2] DiceKriging,[3] Gaussian processes for Machine Learning (Rasmussen and Nickisch, 2010) (GPML),[4] demo code provided with Forrester et al. (2008),[5] and the Matlab Krigeage toolbox.[6]

This paper addresses this need by presenting an object-oriented Kriging implementation and several Kriging extensions, providing a flexible and easily extendable framework to test and implement new Kriging flavors while reusing as much code as possible.

* Ivo Couckuyt is a post-doctoral research fellow of FWO-Vlaanderen.

2. ooDACE Toolbox
The ooDACE toolbox is an object-oriented Matlab toolbox implementing a variety of Kriging flavors and extensions. The most important features and Kriging flavors include:
• Simple Kriging, ordinary Kriging, universal Kriging, stochastic Kriging (regression Kriging), blind and co-Kriging.
• Derivatives of the prediction and prediction variance.
• Flexible hyperparameter optimization.
• Useful utilities, including cross-validation, integrated mean squared error, empirical variogram plot, debug plot of the likelihood surface, robustness-criterion value, etc.
• Proper object-oriented design (a compatible interface with the DACE toolbox[1] is available).

Documentation of the ooDACE toolbox is provided in the form of a getting started guide (for users), a wiki[7] and doxygen documentation[8] (for developers and more advanced users).
In addition, the code is well-documented, providing references to research papers where appropriate. A quick-start demo script is provided with five surrogate modeling use cases, as well as a script to run a suite of regression tests.

A simplified UML class diagram of the toolbox, showing only the most important public operations, is shown in Figure 1. The toolbox is designed with efficiency and flexibility in mind. The process of constructing (and predicting) a Kriging model is decomposed into several smaller, logical steps, e.g., constructing the correlation matrix, constructing the regression matrix, updating the model, optimizing the parameters, etc. These steps are linked together by higher-level steps, e.g., fitting the Kriging model and making predictions. The basic steps needed for Kriging are implemented as (protected) operations in the BasicGaussianProcess superclass. Implementing a new Kriging type, or extending an existing one, is now done by subclassing the Kriging class of your choice and inheriting the (protected) methods that need to be reimplemented. Similarly, to implement a new hyperparameter optimization strategy it suffices to create a new class inherited from the Optimizer class.

[1] The DACE toolbox can be downloaded at http://www2.imm.dtu.dk/~hbn/dace/.
[2] The stochastic Kriging toolbox can be downloaded at /.
[3] The DiceKriging toolbox can be downloaded at /web/packages/DiceKriging/index.html.
[4] The GPML toolbox can be downloaded at /software/view/263/.
[5] Demo code of Kriging can be downloaded at //legacy/wileychi/forrester/.
[6] The Krigeage toolbox can be downloaded at /software/kriging/.
[7] The wiki documentation of the ooDACE toolbox is found at http://sumowiki.intec.ugent.be/index.php/ooDACE:ooDACE_toolbox.
[8] The doxygen documentation of the ooDACE toolbox is found at http://sumo.intec.ugent.be/buildbot/ooDACE/doc/.

Figure 1: Class diagram of the ooDACE toolbox.

To assess the performance of the ooDACE toolbox, a comparison between the ooDACE toolbox and the DACE toolbox[1] is performed using the 2D Branin function. To that end, 20 data sets of increasing size are constructed, each drawn from a uniform random distribution. The number of observations ranges from 10 to 200 samples in steps of 10 samples. For each data set, a DACE toolbox[1] model, an ooDACE ordinary Kriging model and an ooDACE blind Kriging model have been constructed and the accuracy is measured on a dense test set using the Average Euclidean Error (AEE). Moreover, each test is repeated 1000 times to remove any random factor, hence the average accuracy over all repetitions is used. Results are shown in Figure 2a. Clearly, the ordinary Kriging model of the ooDACE toolbox consistently outperforms the DACE toolbox for any given sample size, mostly due to a better hyperparameter optimization, while the blind Kriging model is able to improve the accuracy even more.

3. Applications
The ooDACE Toolbox has already been applied successfully to a wide range of problems, e.g., optimization of a textile antenna (Couckuyt et al., 2010), identification of the elasticity of the middle-ear drum (Aernouts et al., 2010), etc. In sum, the ooDACE toolbox aims to provide a modern, up-to-date Kriging framework catered to scientists and engineers. Usage instructions, design documentation, and stable releases can be found at http://sumo.intec.ugent.be/?q=ooDACE.
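To make the decomposition concrete, here is a minimal ordinary Kriging sketch in Python rather than Matlab. It mirrors the correlation-matrix/regression-matrix/fit/predict steps described above but is an independent illustration, not ooDACE code; the Gaussian correlation function, the fixed hyperparameters theta (which ooDACE would optimize), and the small nugget are assumptions.

    import numpy as np

    def corr(A, B, theta):
        # Gaussian correlation: R[i, j] = exp(-sum_k theta_k * (A[i,k] - B[j,k])^2)
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2 * theta).sum(axis=2)
        return np.exp(-d2)

    def fit(X, y, theta):
        # "Construct the correlation matrix" (small nugget added for conditioning)
        R = corr(X, X, theta) + 1e-10 * np.eye(len(X))
        F = np.ones(len(X))                 # regression "matrix" of ordinary Kriging
        Ri_y = np.linalg.solve(R, y)
        Ri_F = np.linalg.solve(R, F)
        beta = (F @ Ri_y) / (F @ Ri_F)      # generalized least-squares mean
        gamma = np.linalg.solve(R, y - beta * F)  # weights of the stochastic part
        return beta, gamma

    def predict(Xnew, X, theta, beta, gamma):
        # BLUP mean: constant trend plus correlation-weighted residuals
        return beta + corr(Xnew, X, theta) @ gamma

    # Usage on a toy 2D function
    rng = np.random.default_rng(0)
    X = rng.random((30, 2))
    y = np.sin(6 * X[:, 0]) + np.cos(4 * X[:, 1])
    theta = np.array([10.0, 10.0])
    beta, gamma = fit(X, y, theta)
    print(predict(rng.random((3, 2)), X, theta, beta, gamma))

Note how the fit is split into the same small steps the toolbox exposes as protected operations; swapping the correlation function or the trend is a local change, which is the point of the ooDACE class design.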
Figure 2: (a) Evolution of the average AEE versus the number of samples (Branin function). (b) Landscape plot of the Branin function.

References
J. Aernouts, I. Couckuyt, K. Crombecq, and J. J. J. Dirckx. Elastic characterization of membranes with a complex shape using point indentation measurements and inverse modelling. International Journal of Engineering Science, 48:599-611, 2010.
I. Couckuyt, F. Declercq, T. Dhaene, and H. Rogier. Surrogate-based infill optimization applied to electromagnetic problems. Journal of RF and Microwave Computer-Aided Engineering: Advances in Design Optimization of Microwave/RF Circuits and Systems, 20(5):492-501, 2010.
A. Forrester, A. Sobester, and A. Keane. Engineering Design via Surrogate Modelling: A Practical Guide. Wiley, Chichester, 2008.
D. Gorissen, K. Crombecq, I. Couckuyt, P. Demeester, and T. Dhaene. A surrogate modeling and adaptive sampling toolbox for computer based design. Journal of Machine Learning Research, 11:2051-2055, 2010. URL http://sumo.intec.ugent.be/.
S. N. Lophaven, H. B. Nielsen, and J. Søndergaard. Aspects of the Matlab toolbox DACE. Technical report, Informatics and Mathematical Modelling, Technical University of Denmark, DTU, Richard Petersens Plads, Building 321, DK-2800 Kgs. Lyngby, 2002.
C. E. Rasmussen and H. Nickisch. Gaussian processes for machine learning (GPML) toolbox. Journal of Machine Learning Research, 11:3011-3015, 2010.
C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
J. Sacks, W. J. Welch, T. J. Mitchell, and H. P. Wynn. Design and analysis of computer experiments. Statistical Science, 4(4):409-435, 1989.
J. Staum. Better simulation metamodeling: The why, what, and how of stochastic Kriging. In Proceedings of the Winter Simulation Conference, 2009.
F. A. C. Viana, T. W. Simpson, V. Balabanov, and V. Toropov. Metamodeling in multidisciplinary design optimization: How far have we really come? AIAA Journal, 52(4):670-690, 2014.
English Linguistics, Chapter 12: Language and Brain
▪ Neurolinguistics studies two related areas: language disorders and the relationship between language and the brain. This includes the brain's role in language development, the way language is stored in the brain, and the effects of brain damage on the ability to use language.
The structure and function of the human brain
3. Syntax: the syntactic parser. The parser is taken to be a system that draws on grammatical knowledge, but it also includes special processes and principles that guide the order in which sentence elements are assembled and the way syntactic structure is built.
Garden path sentences, e.g. "The horse raced past the barn fell."
Sentence ambiguity, e.g. "They all rose."

Autopsy studies

AST (sodium amytal test), CAT (computerized axial tomography), PET (positron emission tomography), MRI (magnetic resonance imaging), fMRI (functional magnetic resonance imaging)
Methods to examine the behavior associated with the brain
Dichotic listening experiments; split-brain studies
Psycholinguistic modeling
1. Broca's aphasia 2. Wernicke's aphasia
IO 06 Spatial models
Lecture 6: Spatial Models. In a spatial model, what a consumer actually pays is not the same as the price the firm receives; the difference is the transport cost, and transport costs differ across consumers.

1. The linear city model. Hotelling (1929) considered a long, narrow city with a single street. Consumers are distributed along this street and have identical unit demand for the product.

A consumer must pay a transport cost to move from one location to another.

Two sellers offer a homogeneous good.

If firms can costlessly adjust both their prices and their locations, Hotelling's "linear city" game has no pure-strategy equilibrium (see D'Aspremont et al. (1979)).

If the firms' locations are fixed, the game has a pure-strategy equilibrium.

This model is often used to discuss oligopoly competition under product differentiation.

If a single price is fixed in advance, a pure-strategy Nash equilibrium of the location game does not necessarily exist.

With exactly two firms there is a pure-strategy equilibrium, in which both firms locate at the midpoint of the city.

With three firms the game has no pure-strategy equilibrium.

With four or more firms a pure-strategy equilibrium exists.

Examples of applications: newspaper sellers on a street, electoral competition, etc.
Consider a linear city [0, 1] with a continuum of consumers of unit mass distributed uniformly along it.

Consumers have unit transport cost t, unit demand, and reservation price v.

Two firms, 1 and 2, are located at the two ends of the city; they produce the same product at marginal costs c1 and c2, respectively.

The two firms choose their prices p1 and p2 to maximize their own profits.

Given the prices p1 and p2, when |p1 - p2| ≤ t, let x be the location of the consumer whose total outlay is the same whether buying from firm 1 or firm 2. It satisfies p1 + tx = p2 + t(1 - x), i.e.

    x = (p2 - p1)/(2t) + 1/2.

If p1 + tx ≤ v, firm 1 chooses p1 to maximize its profit

    π1(p1) = (p1 - c1) [ (p2 - p1)/(2t) + 1/2 ].

The first-order condition of this maximization problem gives p1 = (p2 + c1 + t)/2.

Similarly, firm 2's price satisfies p2 = (p1 + c2 + t)/2.

Hence in equilibrium we have

    p1* = t + (2c1 + c2)/3   and   p2* = t + (c1 + 2c2)/3.
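A quick numerical check of these formulas (an illustration added to the notes; the parameter values are arbitrary):

    # Hotelling linear city: equilibrium prices and the indifferent consumer.
    t, c1, c2 = 1.0, 0.2, 0.5   # transport cost and marginal costs (assumed values)

    p1 = t + (2 * c1 + c2) / 3.0
    p2 = t + (c1 + 2 * c2) / 3.0

    # Location of the indifferent consumer and resulting profits
    x = (p2 - p1) / (2 * t) + 0.5
    profit1 = (p1 - c1) * x
    profit2 = (p2 - c2) * (1 - x)

    # Verify the best-response conditions p_i = (p_j + c_i + t)/2
    assert abs(p1 - (p2 + c1 + t) / 2) < 1e-12
    assert abs(p2 - (p1 + c2 + t) / 2) < 1e-12
    print(f"p1*={p1:.3f}, p2*={p2:.3f}, x={x:.3f}, "
          f"profits=({profit1:.3f}, {profit2:.3f})")

With equal marginal costs (c1 = c2 = c) this reduces to the familiar symmetric result p1* = p2* = c + t and x = 1/2.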
Lean Production: Line Balance Models (Chinese-English edition)

Control: control process changes and controls; develop the control plan; calculate the final financial and process metrics; transition the project to the future project manager; identify opportunities to replicate the project.

Measure

Define

Project tracking number; tools; project definition form; net present value (NPV) analysis; internal rate of return (IRR) analysis; discounted cash flow analysis (cash flows valued at present value); PIP management process; RACI; Quad charts

Process mapping; value analysis; brainstorming; multi-voting techniques; Pareto charts; cause-and-effect/fishbone diagrams; FMEA; check sheets; run charts; control charts; gage R&R
Line Balance Model
Learning Objectives

How to design and implement a process supported by a Line Balance Model, ensuring optimal allocation of: people, space, fixed assets, and materials. Know how to maximize productivity.
Line Balance Model
What’s in It for Me?
Able to design and implement a balanced process line. Understand the issues in a typical process environment and how to impact those issues.
Revised 1-12-02
Line Balance Model
Lean Six Sigma

The process improvement flow

Analyze

Control

Improve

Define: select the topic; list the customers; identify key requirements from the voice of the customer; set the project focus and key metrics; complete the PDF (project definition form).

Measure: map the business process; map the value stream; develop the data collection plan; perform measurement system analysis; collect data; perform process capability analysis.

Analyze: propose candidate key factors; screen the key factors; verify the key factors; assess each key factor's impact on the outcome; quantify the opportunity; rank root causes; identify root causes for the key factors.
Process Mapping; Value Analysis; Brainstorming; Multi-Voting Techniques; Pareto Charts; C&E/Fishbone Diagrams; FMEA; Check Sheets; Run Charts; Control Charts; Gage R&R
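The slides do not include code; as a concrete illustration of what balancing a line against takt time means, here is a minimal Python sketch (the task times, the takt time, and the simple greedy station-filling rule are all assumptions for illustration, not part of the course material):

    # Assign sequential tasks to workstations so no station exceeds takt time.
    task_times = [0.4, 0.7, 0.3, 0.5, 0.6, 0.2, 0.8]  # minutes per task (assumed)
    takt = 1.0  # minutes available per unit, set by customer demand (assumed)

    stations, current, load = [], [], 0.0
    for t in task_times:                 # tasks kept in precedence order
        if load + t > takt:              # station is full: open the next one
            stations.append((current, load))
            current, load = [], 0.0
        current.append(t)
        load += t
    stations.append((current, load))

    total = sum(task_times)
    efficiency = total / (len(stations) * takt)   # balance efficiency
    for i, (tasks, station_load) in enumerate(stations, 1):
        print(f"station {i}: tasks={tasks}, load={station_load:.1f} min")
    print(f"{len(stations)} stations, balance efficiency = {efficiency:.0%}")

Balance efficiency (total work content divided by stations times takt) is the usual headline metric: idle time at underloaded stations is exactly what a line balance model is meant to expose and remove.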
Neurocomputing 44-46 (2002) 735-742

Problem-solving behavior in a system model of the primate neocortex

Alan H. Bond
California Institute of Technology, Mailstop 136-93, Pasadena, CA 91125, USA

Abstract
We show how our previously described system model of the primate neocortex can be extended to allow the modeling of problem-solving behaviors. Specifically, we model different cognitive strategies that have been observed for human subjects solving the Tower of Hanoi problem. These strategies can be given a naturally distributed form on the primate neocortex. Further, the goal stacking used in some strategies can be achieved using an episodic memory module corresponding to the hippocampus. We can give explicit falsifiable predictions for the time sequence of activations of different brain areas for each strategy. © 2002 Published by Elsevier Science B.V.

Keywords: Neocortex; Modular architecture; Perception-action hierarchy; Tower of Hanoi; Problem solving; Episodic memory

1. Our system model of the primate neocortex
Our model [4-6] consists of a set of processing modules, each representing a cortical area. The overall architecture is a perception-action hierarchy. Data stored in each module is represented by logical expressions we call descriptions; processing within each module is represented by sets of rules which are executed in parallel and which construct new descriptions; and communication among modules consists of the transmission of descriptions. Modules are executed in parallel on a discrete time scale, with one cycle corresponding to 20 ms. During one cycle, all rules are executed once and all inter-module transmission of descriptions occurs. Fig. 1 depicts our model, as a set of cortical modules and as a perception-action hierarchy system diagram.

Fig. 1. Our system model shown in correspondence with the neocortex, and as a perception-action hierarchy.

The action of the system is to continuously create goals, prioritize goals, and elaborate the highest-priority goals into plans, then detailed actions, by propagating descriptions down the action hierarchy, resulting in a stream of motor commands. (At the same time, perception of the environment occurs in a flow of descriptions up the perception hierarchy. Perceived descriptions condition plan elaboration, and action descriptions condition perception.) This simple elaboration of stored plans was sufficient to allow us to demonstrate simple socially interactive behaviors using a computer realization of our model.

2. Extending our model to allow solution of the Tower of Hanoi problem

2.1. Tower of Hanoi strategies
The Tower of Hanoi problem is the most studied, and strategies used by human subjects have been captured as production rule systems [9,1]. We will consider the two most frequently observed strategies—the perceptual strategy and the goal recursion strategy. In the general case, reported by Anzai and Simon [3], naive subjects start with an initial strategy and learn a sequence of strategies which improve their performance. Our two strategies were observed by Anzai and Simon as part of this learning sequence.
Starting from Simon's formulation [8], we were able to represent these two strategies in our model, as follows:

2.2. Working goals
Since goals are created dynamically by the planning activity, we needed to extend our plan module to allow working goals as a description type. This mechanism was much better than trying to use the main goal module. We can limit the number of working goals. This would correspond to using a fixed-size store, corresponding to working memory. The module can thus create working goals and use the current working goals as input to rules. Working goals would be held in dorsal prefrontal areas, either as part of or close to the plan module. Main motivating topgoals are held in the main goal module corresponding to anterior cingulate.

2.3. Perceptual tests and mental imagery
The perceptual tests on the external state, i.e. the state of the Tower of Hanoi apparatus, were naturally placed in a separate perception module. This corresponds to Kosslyn's [7] image store. The main perceptual test needed is to determine whether a proposed move is legal. This involves (a) making a change to a stored perceived representation corresponding to making the proposed move, and (b) making a spatial comparison in this image store to determine whether the disk has been placed on a smaller or a larger one. With these two extensions, we were able to develop a representation of the perceptual strategy, depicted in Fig. 2.

Fig. 2. Representation of the perceptual strategy on our brain model.

3. Episodic memory and its use in goal stacking
In order to represent the goal recursion strategy, we need to deal with goal stacking, which is represented by push and pop operations in existing production rule representations. Since we did not believe that a stack with push and pop operations within a module is biologically plausible, we found an equivalent approach using an episodic memory module.

This module creates associations among whatever inputs it receives at any given time, and it sends these associations as descriptions to be stored in contributing modules. In general, it will create episodic representations from events occurring in extended temporal intervals; however, in the current case we only needed simple association. In the Tower of Hanoi case, the episode was simply taken to be an association between the current working goal and the previous, parent, working goal. We assume that these two working goals are always stored in working memory and are available to the plan module. The parent forms a context for the working goal. The episode description is formed in the episodic memory module and transmitted to the plan module where it is stored. The creation of episodic representations can proceed in parallel with the problem-solving process, and it can occur automatically or be requested by the plan module. Rules in the plan module can retrieve episodic descriptions using the current parent working goal, and can replace the current goal with the current parent, and the current parent with its retrieved parent. Thus the working goal context can be popped. This representation is more general than a stack, since any stored episode could be retrieved, including working goals from episodes further in the past. Such effects have, in fact, been reported by Van Lehn et al. [10] for human subjects.
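For reference, the goal recursion strategy is easy to state outside the brain model: solving "move n disks" pushes subgoals and pops them when satisfied. A minimal Python sketch with an explicit goal stack (an added illustration of the push/pop behavior discussed above; it is not the paper's production-rule or episodic-memory implementation):

    def hanoi(n, src, dst, spare):
        # Explicit goal stack; each goal is "move n disks from src to dst".
        moves, stack = [], [(n, src, dst, spare)]
        while stack:
            n, src, dst, spare = stack.pop()        # retrieve a goal (pop)
            if n == 1:
                moves.append((src, dst))            # primitive move
            else:
                # Push subgoals in reverse order so they execute in sequence:
                # move n-1 to spare, move largest to dst, move n-1 onto dst.
                stack.append((n - 1, spare, dst, src))
                stack.append((1, src, dst, spare))
                stack.append((n - 1, src, spare, dst))
        return moves

    for move in hanoi(3, "A", "C", "B"):
        print(f"move disk: {move[0]} -> {move[1]}")

A strict last-in-first-out stack like this is exactly what the authors argue is biologically implausible inside a single module; the episodic-memory mechanism above replaces it with retrievable goal-parent associations.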
With this additional extension, we were able to develop a representation of the goal recursion strategy, depicted in Fig. 3. Descriptions of episodes are of the form context(goal(G), goal_context(C)), goal(G) being the current working goal and goal_context(C) the current parent working goal. The figure shows a slightly more general version, where episodes are stored both in the episodic memory module and the plan module. This allows episodes that have not yet been transferred to the cortex to be used. We are currently working on extending our model to allow the learning of a sequence of strategies as observed by Anzai and Simon. This may result in a different representation of these strategies, and different performance.

Fig. 4. Predictions of brain area activation during Tower of Hanoi solving (the panels show the four brain state types, including activation during perceptual analysis (P) and during movement (M)).

4. Falsifiable predictions of brain area activation
For the two strategies, we can now generate detailed predictions of brain area activation sequences that should be observed during the solution of the Tower of Hanoi problem. Using our computer realization, we can generate detailed predictions of activation levels for each time step. Since there are many adjustable parameters and detailed assumptions in the model, it is difficult to find clearly falsifiable predictions. However, we can also make a simplified and more practical form of prediction by classifying brain states into four types, shown in Fig. 4. Let us call these types of states G, E, P and M, respectively. Then, for example, the predicted temporal sequences of brain state types for 3 disks are:

For the perceptual strategy:
G0; G; E; P; G; E; P; G; E; P; E; M; P; G; E; P; G; E; P; E; M; P; G; E; P; G; E; P; E; M; P; G; E; P; E; M; P; G; E; P; G; E; P; E; M; P; G; E; P; E; M; P; G; E; P; E; M; P; G0.

and for the goal recursion strategy:
G0; G; E; P; G+; E; P; G+; E; P; E; M; P; G∗; E; P; E; M; P; G∗; E; P; G+; E; P; E; M; P; G∗; E; P; E; M; P; G; E; P; G+; E; P; E; M; P; G∗; E; P; E; M; E; G; E; P; E; M; P; G0.

We can generate similar sequences for different numbers of disks and different strategies. The physical moves of disks occur during M steps. The timing is usually about 3.5 s per physical move, but the physical move steps probably take longer than the average cognitive step. If a physical move takes 1.5 s, this would leave about 300 ms per cognitive step.

The perceptual strategy used is an expert strategy where the largest disk is always selected. We assume perfect performance; when wrong moves are made, we need a theory of how mistakes are made, and then predictions can be generated. In the goal recursion strategy, we assume the subject is using perceptual tests for proposed moves, and is not working totally from memory. G indicates the creation of a goal, G+ a goal creation and storing an existing goal (push), and G∗ the retrieval of a goal (pop).
Anderson et al. [2] have shown that pushing a goal takes about 2 s, although we have taken creation of a goal to not necessarily involve pushing. For us, pushing only occurs when a new goal is created and an existing goal has to be stored. G0 is activity relating to the top goal.

It should be noted that there is some redundancy in the model, so that, if a mismatch to experiment is found, it would be possible to make some changes to the model to bring it into better correspondence with the data. For example, the assignment of modules to particular brain areas is tentative and may need to be changed. However, there is a limit to the changes that can be made, and mismatches with data could falsify the model in its present form.

Acknowledgements
This work has been partially supported by the National Science Foundation, Information Technology and Organizations Program managed by Dr. Les Gasser. The author would like to thank Professor Pietro Perona for his support, and Professor Steven Mayo for providing invaluable computer resources.

References
[1] J. R. Anderson, Rules of the Mind, Lawrence Erlbaum Associates, Hillsdale, NJ, 1993.
[2] J. R. Anderson, N. Kushmerick, C. Lebiere, The Tower of Hanoi and goal structures, in: J. R. Anderson (Ed.), Rules of the Mind, Lawrence Erlbaum Associates, Hillsdale, NJ, 1993, pp. 121-142.
[3] Y. Anzai, H. A. Simon, The theory of learning by doing, Psychol. Rev. 86 (1979) 124-140.
[4] A. H. Bond, A computational architecture for social agents, in: Proceedings of Intelligent Systems: A Semiotic Perspective, An International Multidisciplinary Conference, National Institute of Standards and Technology, Gaithersburg, Maryland, USA, October 20-23, 1996.
[5] A. H. Bond, A system model of the primate neocortex, Neurocomputing 26-27 (1999) 617-623.
[6] A. H. Bond, Describing behavioral states using a system model of the primate brain, Am. J. Primatol. 49 (1999) 315-388.
[7] S. Kosslyn, Image and Brain, MIT Press, Cambridge, MA, 1994.
[8] H. A. Simon, The functional equivalence of problem solving skills, Cognitive Psychol. 7 (1975) 268-288.
[9] K. VanLehn, Rule acquisition events in the discovery of problem-solving strategies, Cognitive Sci. 15 (1991) 1-47.
[10] K. VanLehn, W. Ball, B. Kowalski, Non-LIFO execution of cognitive procedures, Cognitive Sci. 13 (1989) 415-465.

Alan H. Bond was born in England and received a Ph.D. degree in theoretical physics in 1966 from Imperial College of Science and Technology, University of London. During the period 1969-1984, he was on the faculty of the Computer Science Department at Queen Mary College, London University, where he founded and directed the Artificial Intelligence and Robotics Laboratory. Since 1996, he has been a Senior Scientist and Lecturer at California Institute of Technology. His main research interest concerns the system modeling of the primate brain.
Advances in Geosciences, 4, 17-22, 2005. SRef-ID: 1680-7359/adgeo/2005-4-17. European Geosciences Union © 2005 Author(s). This work is licensed under a Creative Commons License.

Incorporating level set methods in Geographical Information Systems (GIS) for land-surface process modeling

D. Pullar
Geography Planning and Architecture, The University of Queensland, Brisbane QLD 4072, Australia

Received: 1 August 2004 - Revised: 1 November 2004 - Accepted: 15 November 2004 - Published: 9 August 2005

Abstract. Land-surface processes include a broad class of models that operate at a landscape scale. Current modelling approaches tend to be specialised towards one type of process, yet it is the interaction of processes that is increasingly seen as important to obtain a more integrated approach to land management. This paper presents a technique and a tool that may be applied generically to landscape processes. The technique tracks moving interfaces across landscapes for processes such as water flow, biochemical diffusion, and plant dispersal. Its theoretical development applies a Lagrangian approach to motion over a Eulerian grid space by tracking quantities across a landscape as an evolving front. An algorithm for this technique, called the level set method, is implemented in a geographical information system (GIS). It fits with a field data model in GIS and is implemented as operators in map algebra. The paper describes an implementation of the level set methods in a map algebra programming language, called MapScript, and gives example program scripts for applications in ecology and hydrology.

1 Introduction
Over the past decade there has been an explosion in the application of models to solve environmental issues. Many of these models are specific to one physical process and often require expert knowledge to use. Increasingly, generic modeling frameworks are being sought to provide analytical tools to examine and resolve complex environmental and natural resource problems. These systems consider a variety of land condition characteristics, interactions and driving physical processes. Variables accounted for include climate, topography, soils, geology, land cover, vegetation and hydro-geography (Moore et al., 1993). Physical interactions include processes for climatology, hydrology, topographic land-surface/sub-surface fluxes and biological/ecological systems (Sklar and Costanza, 1991). Progress has been made in linking model-specific systems with tools used by environmental managers, for instance geographical information systems (GIS). While this approach, commonly referred to as loose coupling, provides a practical solution, it still does not improve the scientific foundation of these models nor their integration with other models and related systems, such as decision support systems (Argent, 2003). The alternative approach is tightly coupled systems which build functionality into a system or interface to domain libraries from which a user may build custom solutions using a macro language or program scripts. The approach supports integrated models through interface specifications which articulate the fundamental assumptions and simplifications within these models. The problem is that there are no environmental modelling systems which are widely used by engineers and scientists that offer this level of interoperability, and the more commonly used GIS systems do not currently support space and time representations and operations suitable for modelling environmental processes (Burrough, 1998) (Sui and Magio, 1999).

Providing a generic environmental modeling framework for practical environmental issues is challenging. It does not exist now despite an overwhelming demand because there are deep technical challenges to build integrated modeling frameworks in a scientifically rigorous manner. It is this challenge this research addresses.

1.1 Background for Approach
The paper describes a generic environmental modeling language integrated with a Geographical Information System (GIS) which supports spatial-temporal operators to model physical interactions occurring in two ways: the trivial case where interactions are isolated to a location, and the more common and complex case where interactions propagate spatially across landscape surfaces. The programming language has a strong theoretical and algorithmic basis. Theoretically, it assumes a Eulerian representation of state space, but propagates quantities across landscapes using Lagrangian equations of motion. In physics, a Lagrangian view focuses on how a quantity (water volume or particle) moves through space, whereas an Eulerian view focuses on a local fixed area of space and accounts for quantities moving through it. The benefit of this approach is that an Eulerian perspective is eminently suited to representing the variation of environmental phenomena across space, but it is difficult to conceptualise solutions for the equations of motion and it has computational drawbacks (Press et al., 1992). On the other hand, the Lagrangian view is often not favoured because it requires a global solution that makes it difficult to account for local variations, but it has the advantage of solving equations of motion in an intuitive and numerically direct way. The research will address this dilemma by adopting a novel approach from the image processing discipline that uses a Lagrangian approach over an Eulerian grid. The approach, called level set methods, provides an efficient algorithm for modeling a natural advancing front in a host of settings (Sethian, 1999). The reason the method works well over other approaches is that the advancing front is described by equations of motion (Lagrangian view), but computationally the front propagates over a vector field (Eulerian view). Hence, we have a very generic way to describe the motion of quantities, but can explicitly solve their advancing properties locally as propagating zones. The research work will adapt this technique for modeling the motion of environmental variables across time and space. Specifically, it will add new data models and operators to a geographical information system (GIS) for environmental modeling. This is considered to be a significant research imperative in spatial information science and technology (Goodchild, 2001).

Fig. 1. Shows (a) a propagating interface parameterised by differential equations; (b) interface fronts have variable intensity and may expand or contract based on field gradients and driving process.

The main focus of this paper is to evaluate if the level set method (Sethian, 1999) can:
- provide a theoretically and empirically supportable methodology for modeling a range of integral landscape processes,
- provide an algorithmic solution that is not sensitive to process timing, is computationally stable and efficient as compared to conventional explicit solutions to diffusive process models,
- be developed as part of a generic modelling language in GIS to express integrated models for natural resource and environmental problems?

The outline for the paper is as follows. The next section describes the theory for spatial-temporal processing using level sets. Section 3 describes how this is implemented in a map algebra programming language. Two application examples are given, an ecological and a hydrological example, to demonstrate the use of operators for computing reactive-diffusive interactions in landscapes. Section 4 summarises the contribution of this research.

2 Theory
2.1 Introduction
Level set methods (Sethian, 1999) have been applied in a large collection of applications including physics, chemistry, fluid dynamics, combustion, material science, fabrication of microelectronics, and computer vision. Level set methods compute an advancing interface using an Eulerian grid and the Lagrangian equations of motion. They are similar to cost distance modeling used in GIS (Burrough and McDonnell, 1998) in that they compute the spread of a variable across space, but the motion is based upon partial differential equations related to the physical process. The advancement of the interface is computed through time along a spatial gradient, and it may expand or contract in its extent. See Fig. 1.

2.2 Theory
The advantage of the level set method is that it models motion along a state-space gradient. Level set methods start with the equation of motion, i.e. an advancing front with velocity F is characterised by an arrival surface T(x, y). Note that F is a velocity field in a spatial sense. If F were constant this would result in an expanding series of circular fronts, but for different values in a velocity field the front will have a more contorted appearance, as shown in Fig. 1b. The motion of this interface is always normal to the interface boundary, and its progress is regulated by several factors:

    F = f(L, G, I)                                         (1)

where L = local properties that determine the shape of the advancing front, G = global properties related to governing forces for its motion, and I = independent properties that regulate and influence the motion. If the advancing front is modeled strictly in terms of the movement of entity particles, then a straightforward velocity equation describes its motion:

    |∇T| F = 1    given T_0 = 0                            (2)

where the arrival function T(x, y) is a travel cost surface, and T_0 is the initial position of the interface. Instead we use level sets to describe the interface as a complex function. The level set function φ is an evolving front consistent with the underlying viscosity solution defined by partial differential equations. This is expressed by the equation:

    φ_t + F |∇φ| = 0    given φ(x, y, t = 0)               (3)

where φ_t is a complex interface function over the time period 0..n, i.e. φ(x, y, t) = t_0..t_n, and ∇φ denotes the spatial and temporal derivatives for the viscosity equations. The Eulerian view over a spatial domain imposes a discretisation of space, i.e. the raster grid, which records changes in value z. Hence, the level set function becomes φ(x, y, z, t) to describe an evolving surface over time.
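A compact numerical illustration of Eq. (3): on a raster, the front can be advanced with a first-order upwind scheme. This Python sketch (an added illustration with an assumed constant speed field F; it is not the MapScript implementation described next) propagates a circular front outward:

    import numpy as np

    n, dx, dt = 100, 1.0, 0.4           # grid size, cell size, time step (assumed)
    F = np.ones((n, n))                 # speed field F > 0 (constant here)

    # Signed-distance-like initialisation: negative inside a circular front.
    yy, xx = np.mgrid[0:n, 0:n]
    phi = np.sqrt((xx - n / 2) ** 2 + (yy - n / 2) ** 2) - 10.0

    def step(phi):
        # One-sided differences toward each neighbour.
        dxm = (phi - np.roll(phi, 1, axis=1)) / dx   # backward in x
        dxp = (np.roll(phi, -1, axis=1) - phi) / dx  # forward in x
        dym = (phi - np.roll(phi, 1, axis=0)) / dx
        dyp = (np.roll(phi, -1, axis=0) - phi) / dx
        # Godunov upwind gradient magnitude for an outward-moving front (F > 0).
        grad = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2 +
                       np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)
        return phi - dt * F * grad       # discrete phi_t + F |grad phi| = 0

    for _ in range(50):
        phi = step(phi)
    print("cells inside the front (phi < 0):", int((phi < 0).sum()))

The zero level set of φ is the interface; the upwind choice of differences is what keeps the evolving front stable, which is the property the paper relies on for its GIS operators.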
Further details are given in Sethian (1999) along with efficient algorithms. The next section describes the integration of the level set methods with GIS.

3 Map algebra modelling
3.1 Map algebra
Spatial models are written in a map algebra programming language. Map algebra is a function-oriented language that operates on four implicit spatial data types: point, neighbourhood, zonal and whole landscape surfaces. Surfaces are typically represented as a discrete raster where a point is a cell, a neighbourhood is a kernel centred on a cell, and zones are groups of cells. Common examples of raster data include terrain models, categorical land cover maps, and scalar temperature surfaces. Map algebra is used to program many types of landscape models ranging from land suitability models to mineral exploration in the geosciences (Burrough and McDonnell, 1998; Bonham-Carter, 1994).

The syntax for map algebra follows a mathematical style with statements expressed as equations. These equations use operators to manipulate spatial data types for points and neighbourhoods. Expressions that manipulate a raster surface may use a global operation or alternatively iterate over the cells in a raster. For instance the GRID map algebra (Gao et al., 1993) defines an iteration construct, called docell, to apply equations on a cell-by-cell basis. This is trivially performed on columns and rows in a clockwork manner. However, for environmental phenomena there are situations where the order of computations has a special significance, for instance processes that involve spreading or transport acting along environmental gradients within the landscape. Therefore special control needs to be exercised on the order of execution. Burrough (1998) describes two extra control mechanisms for diffusion and directed topology. Figure 2 shows the three principal types of processing orders, and they are:
- row scan order governed by the clockwork lattice structure,
- spread order governed by the spreading or scattering of a material from a more concentrated region,
- flow order governed by advection, which is the transport of a material due to velocity.

Fig. 2. Spatial processing orders for raster.

Our implementation of map algebra, called MapScript (Pullar, 2001), includes a special iteration construct that supports these processing orders. MapScript is a lightweight language for processing raster-based GIS data using map algebra. The language parser and engine are built as a software component to interoperate with the IDRISI GIS (Eastman, 1997). MapScript is built in C++ with a class hierarchy based upon a value type. Variants for value types include numerical, boolean, template, cells, or a grid. MapScript supports combinations of these data types within equations with basic arithmetic and relational comparison operators. Algebra operations on templates typically result in an aggregate value assigned to a cell (Pullar, 2001); this is similar to the convolution integral in image algebras (Ritter et al., 1990). The language supports iteration to execute a block of statements in three ways: (a) a docell construct to process a raster in row scan order, (b) a dospread construct to process a raster in spread order, (c) a doflow construct to process a raster by flow order. Examples are given in subsequent sections. Process models will also involve a timing loop which may be handled as a general while(<condition>)..end construct in MapScript where the condition expression includes a system time variable. This time variable is used in a specific fashion along with a system time step by certain operators, namely diffuse() and fluxflow() described in the next section, to model diffusion and advection as a time-evolving front. The evolving front represents quantities such as vegetation growth or surface runoff.

3.2 Ecological example
This section presents an ecological example based upon plant dispersal in a landscape. The population of a species follows a controlled growth rate and at the same time spreads across landscapes. The theory of the rate of spread of an organism is given in Tilman and Kareiva (1997). The area occupied by a species grows log-linearly with time. This may be modelled by coupling a spatial diffusion term with an exponential population growth term; the combination produces the familiar reaction-diffusion model.

A simple growth population model is used where the reaction term considers one population controlled by births and mortalities:

    dN/dt = r N (1 - N/K)                                  (4)

where N is the size of the population, r is the rate of change of population given in terms of the difference between birth and mortality rates, and K is the carrying capacity. Further discussion of population models can be found in Jørgensen and Bendoricchio (2001). The diffusive term spreads a quantity through space at a specified rate:

    du/dt = D d²u/dx²                                      (5)

where u is the quantity, which in our case is population size, and D is the diffusive coefficient. The model is operated as a coupled computation. Over a discretized space, or raster, the diffusive term is estimated using a numerical scheme (Press et al., 1992). The distance over which diffusion takes place in time step dt is minimally constrained by the raster resolution. For a stable computational process the following condition must be satisfied:

    2 D dt / dx² ≤ 1                                       (6)

This basically states that, to account for the diffusive process, the term 2D/dx is less than the velocity of the advancing front. This would not be difficult to compute if D were constant, but it is problematic if D is variable with respect to landscape conditions. This problem may be overcome by progressing along a diffusive front over the discrete raster based upon distance rather than being constrained by the cell resolution.

The processing and diffusive operator is implemented in a map algebra programming language. The code fragment in Fig. 3 shows a map algebra script for a single time step for the coupled reactive-diffusion model for population growth:

    while (time < 100)
      dospread
        pop = pop + (diffuse(kernel * pop))
        pop = pop + (r * pop * dt * (1 - (pop / K)))
      enddo
    end

where the diffusive constant is stored in the kernel.

Fig. 3. Map algebra script and convolution kernel for population dispersion. The variable pop is a raster; r, K and D are constants; dt is the model time step; and the kernel is a 3×3 template. It is assumed a time step is defined and the script is run in a simulation. The first line contained in the nested cell processing construct (i.e. dospread) is the diffusive term and the second line is the population growth term.

The operator of interest in the script shown in Fig. 3 is the diffuse operator. It is assumed that the script is run with a given time step. The operator uses a system time step which is computed to balance the effect of process errors with efficient computation. With knowledge of the time step, the iterative construct applies an appropriate distance propagation such that the condition in Eq. (3) is not violated. The level set algorithm (Sethian, 1999) is used to do this in a stable and accurate way. As a diffusive front propagates through the raster, a cost distance kernel assigns the proper time to each raster cell. The time assigned to the cell corresponds to the minimal cost it takes to reach that cell. Hence cell processing is controlled by propagating the kernel outward at a speed adaptive to the local context rather than meeting an arbitrary global constraint.
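For readers without MapScript, the same coupled step can be written directly. A minimal Python sketch of Eqs. (4)-(6) on a raster (an added illustration using a fixed diffusion coefficient and a plain explicit scheme, with dt chosen conservatively to satisfy the stability condition; it is not the adaptive level-set-based diffuse() operator described above):

    import numpy as np

    n, dx = 50, 1.0
    r, K, D = 0.5, 100.0, 0.25          # growth rate, carrying capacity, diffusivity
    dt = 0.25 * dx**2 / (2 * D)         # conservative fraction of 2*D*dt/dx^2 <= 1

    pop = np.zeros((n, n))
    pop[n // 2, n // 2] = 10.0          # seed population in the centre cell

    def laplacian(u):
        # Five-point stencil approximation of d2u/dx2 + d2u/dy2.
        return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2

    t = 0.0
    while t < 100.0:
        pop = pop + dt * D * laplacian(pop)          # diffusive term, Eq. (5)
        pop = pop + dt * r * pop * (1 - pop / K)     # logistic growth, Eq. (4)
        t += dt

    print(f"occupied cells: {int((pop > 1.0).sum())}, max density: {pop.max():.1f}")

The fixed global dt is exactly the limitation the text describes: with spatially variable D the worst cell dictates the step everywhere, which is what the level-set-based diffuse operator avoids.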
3.3 Hydrological example

This section presents a hydrological example based upon surface dispersal of excess rainfall across the terrain. The movement of water is described by the continuity equation:

    ∂h/∂t = e_t − ∇·q_t    (7)

where h is the water depth (m), e_t is the rainfall excess (m/s), and q_t is the discharge (m/hr) at time t. Discharge is assumed to have steady uniform flow conditions, and is determined by Manning's equation:

    q_t = v_t·h_t = (1/n)·h_t^(5/3)·s^(1/2)    (8)

where v_t is the flow velocity (m/s), h_t is the water depth, and s is the surface slope (m/m). An explicit method of calculation is used to compute velocity and depth over raster cells, and the equations are solved at each time step. A conservative form of a finite difference method solves for q_t in the continuity equation, Eq. (7). To simplify the discussion we describe quasi-one-dimensional equations for the flow problem. The actual numerical computations are normally performed on an Eulerian grid (Julien et al., 1995).

Finite-element approximations are made to solve the above partial differential equations for the one-dimensional case of flow along a strip of unit width. This leads to a coupled model with one term to maintain the continuity of flow and another term to compute the flow. In addition, all calculations must progress from an uphill cell to the downslope cell. This is implemented in map algebra by an iteration construct, called doflow, which processes a raster by flow order. Flow distance is measured in cell size Δx per unit length. One strip is processed during a time interval Δt (Fig. 4, which depicts the computation of the current cell from (x+Δx, t) to (x+Δx, t+Δt)). The conservative solution for the continuity term using a first-order approximation of Eq. (7) is derived as:

    h_{x+Δx, t+Δt} = h_{x+Δx, t} − (q_{x+Δx, t} − q_{x, t})·Δt/Δx    (9)

where the inflow q_{x,t} and outflow q_{x+Δx,t} in the second term are calculated using Eq. (8) as:

    q_{x,t} = v_{x,t}·h_{x,t}    (10)

The calculations approximate discharge from the previous time interval. Discharge is dynamically determined within the continuity equation by water depth. The explicit update of Eq. (9) needs to satisfy a stability condition, v·Δt/Δx ≤ 1, to maintain numerical stability. The physical interpretation of this is that a finite volume of water should flow across and out of a cell within the time step Δt. Typically the cell resolution is fixed for the raster, and adjusting the time step requires restarting the simulation cycle.

    while (time < 120)
      doflow(dem)
        fvel = 1/n * pow(depth, m) * sqrt(grade)
        depth = depth + (depth * fluxflow(fvel))
      enddo
    end

Fig. 5. Map algebra script for excess rainfall flow computed over a 120-minute event. The variables depth and grade are rasters, fvel is the flow velocity, and n and m are constants in Manning's equation. It is assumed a time step is defined and the script is run in a simulation. The first line in the nested cell processing (i.e. doflow) computes the flow velocity, and the second line computes the change in depth from the previous value plus any net change (inflow − outflow) due to the velocity flux across the cell.
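A minimal sketch of the explicit update of Eqs. (8)-(10) on a one-dimensional strip is given below. It assumes a fixed time step rather than the adaptive front computed by fluxflow(), and all names and values are hypothetical illustrations rather than the paper's implementation.

    import numpy as np

    def flow_step(depth, slope, n, dx, dt, rain_excess=0.0):
        """One explicit update of Eqs. (8)-(10) on a 1-D strip of unit width.
        Cells are assumed ordered from uphill (index 0) to downslope."""
        vel = (1.0 / n) * depth ** (2.0 / 3.0) * np.sqrt(slope)   # Manning velocity, Eq. (8)
        assert np.all(vel * dt / dx <= 1.0), "Courant condition v*dt/dx <= 1 violated"
        q = vel * depth                                # discharge per unit width, Eq. (10)
        inflow = np.concatenate(([0.0], q[:-1]))       # q_{x,t} entering each cell from uphill
        return depth + rain_excess * dt - (q - inflow) * dt / dx   # continuity, Eq. (9)

    # Usage: 1-hour event on a 100-cell strip with uniform slope.
    depth = np.full(100, 0.001)
    for _ in range(3600):
        depth = flow_step(depth, slope=0.01, n=0.03, dx=1.0, dt=1.0, rain_excess=1e-6)

The assertion makes the paper's point concrete: as velocities rise during a storm, a fixed Δt eventually violates the Courant condition, which is why an adaptive, velocity-driven front is preferable.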
Flow velocities change dramatically over the course of a storm event, and it is problematic to set an appropriate time step which is both efficient and yields a stable result. The hydrological model has been implemented in a map algebra programming language (Pullar, 2003). To overcome the problem mentioned above, we have added high-level operators to compute the flow as an advancing front over a landscape. The time step advances this front adaptively across the landscape based upon the flow velocity. The level set algorithm (Sethian, 1999) is used to do this in a stable and accurate way. The map algebra script is given in Fig. 5. The important operator is the fluxflow operator. It computes the advancing front for water flow across a DEM by hydrological principles, and computes the local drainage flux rate for each cell. The flux rate is used to compute the net change in a cell in terms of flow depth over an adaptive time step.

4 Conclusions

The paper has described an approach to extend the functionality of tightly coupled environmental models in GIS (Argent, 2004). A long-standing criticism of GIS has been its inability to handle dynamic spatial models. Other researchers have also addressed this issue (Burrough, 1998). The contribution of this paper is to describe how level set methods are: i) an appropriate scientific basis, and ii) able to perform stable time-space computations for modelling landscape processes. The level set method provides the following benefits:

– it more directly models the motion of spatial phenomena and may handle both expanding and contracting interfaces,
– it is based upon differential equations related to the spatial dynamics of physical processes.

Despite the potential for using level set methods in GIS and land-surface process modelling, there are no commercial or research systems that use this approach. Commercial systems such as GRID (Gao et al., 1993), and research systems such as PCRaster (Wesseling et al., 1996), offer flexible and powerful map algebra programming languages. But operations that involve reaction-diffusive processing are specific to one context, such as groundwater flow. We believe the level set method offers a more generic approach that allows a user to program flow and diffusive landscape processes for a variety of application contexts. We have shown that it provides an appropriate theoretical underpinning and may be efficiently implemented in a GIS. We have demonstrated its application for two landscape processes, albeit relatively simple examples, but these may be extended to deal with more complex and dynamic circumstances. The validation of improved environmental modelling tools ultimately rests in their uptake and usage by scientists and engineers. The tool may be accessed from the web site .au/projects/mapscript/ (version with enhancements available April 2005) for use with the IDRISI GIS (Eastman, 1997) and in the future with ArcGIS. It is hoped that a larger community of users will make use of the methodology and implementation for a variety of environmental modelling applications.

Edited by: P. Krause, S. Kralisch, and W. Flügel
Reviewed by: anonymous referees

References

Argent, R.: An Overview of Model Integration for Environmental Applications, Environmental Modelling and Software, 19, 219–234, 2004.
Bonham-Carter, G. F.: Geographic Information Systems for Geoscientists, Elsevier Science Inc., New York, 1994.
Burrough, P. A.: Dynamic Modelling and Geocomputation, in: Geocomputation: A Primer, edited by: Longley, P. A., et al., Wiley, England, 165–191, 1998.
Burrough, P. A. and McDonnell, R.: Principles of Geographic Information Systems, Oxford University Press, New York, 1998.
Gao, P., Zhan, C., and Menon, S.: An Overview of Cell-Based Modeling with GIS, in: Environmental Modeling with GIS, edited by: Goodchild, M. F., et al., Oxford University Press, 325–331, 1993.
Goodchild, M.: A Geographer Looks at Spatial Information Theory, in: COSIT – Spatial Information Theory, edited by: Goos, G., Hertmanis, J., and van Leeuwen, J., LNCS 2205, 1–13, 2001.
Jørgensen, S. and Bendoricchio, G.: Fundamentals of Ecological Modelling, Elsevier, New York, 2001.
Julien, P. Y., Saghafian, B., and Ogden, F.: Raster-Based Hydrologic Modelling of Spatially-Varied Surface Runoff, Water Resources Bulletin, 31(3), 523–536, 1995.
Moore, I. D., Turner, A., Wilson, J., Jenson, S., and Band, L.: GIS and Land-Surface-Subsurface Process Modeling, in: Environmental Modeling with GIS, edited by: Goodchild, M. F., et al., Oxford University Press, New York, 1993.
Press, W., Flannery, B., Teukolsky, S., and Vetterling, W.: Numerical Recipes in C: The Art of Scientific Computing, 2nd Ed., Cambridge University Press, Cambridge, 1992.
Pullar, D.: MapScript: A Map Algebra Programming Language Incorporating Neighborhood Analysis, GeoInformatica, 5(2), 145–163, 2001.
Pullar, D.: Simulation Modelling Applied To Runoff Modelling Using MapScript, Transactions in GIS, 7(2), 267–283, 2003.
Ritter, G., Wilson, J., and Davidson, J.: Image Algebra: An Overview, Computer Vision, Graphics, and Image Processing, 4, 297–331, 1990.
Sethian, J. A.: Level Set Methods and Fast Marching Methods, Cambridge University Press, Cambridge, 1999.
Sklar, F. H. and Costanza, R.: The Development of Dynamic Spatial Models for Landscape Ecology: A Review and Progress, in: Quantitative Methods in Ecology, Springer-Verlag, New York, 239–288, 1991.
Sui, D. and Maggio, R.: Integrating GIS with Hydrological Modeling: Practices, Problems, and Prospects, Computers, Environment and Urban Systems, 23(1), 33–51, 1999.
Tilman, D. and Kareiva, P.: Spatial Ecology: The Role of Space in Population Dynamics and Interspecific Interactions, Princeton University Press, Princeton, New Jersey, USA, 1997.
Wesseling, C. G., Karssenberg, D., Burrough, P. A., and van Deursen, W. P.: Integrating Dynamic Environmental Models in GIS: The Development of a Dynamic Modelling Language, Transactions in GIS, 1(1), 40–48, 1996.
Generalized Maxwell Model
1. Introduction

The Maxwell model is a linear viscoelastic model used to describe the rheological behavior of viscoelastic materials. It consists of a spring and a dashpot connected in series, and is commonly used to model the behavior of polymers, gels, and other complex fluids. In this article, we will explore the generalized Maxwell model, which is an extension of the original Maxwell model and provides a more accurate representation of the viscoelastic properties of materials.

2. The Maxwell model

The Maxwell model, first proposed by James Clerk Maxwell in the 19th century, consists of a spring and a dashpot in series. The spring represents the elastic behavior of the material, while the dashpot represents the viscous behavior. Because both elements carry the same stress while their strains add, the constitutive equation of the Maxwell model is given by:

    dε(t)/dt = (1/E)·dσ(t)/dt + σ(t)/η

where σ(t) is the stress, ε(t) is the strain, E is the elastic modulus, and η is the viscosity. The Maxwell model is simple and easy to understand, but its single relaxation time τ = η/E is too crude to capture the broad, time-dependent viscoelastic behavior of many real materials.

3. The generalized Maxwell model

To overcome the limitations of the original Maxwell model, the generalized Maxwell model arranges multiple Maxwell elements (spring-dashpot pairs in series) in parallel, often together with a lone equilibrium spring, each element having its own elastic modulus Ei and viscosity ηi. This allows for a more accurate representation of the complex viscoelastic behavior of materials. For an arbitrary strain history, the stress is given by the hereditary integral

    σ(t) = ∫₀ᵗ E(t − s)·(dε/ds) ds

with the relaxation modulus expressed as a Prony series

    E(t) = E∞ + Σi Ei·exp(−t/τi),   τi = ηi/Ei

where the summation is taken over all the Maxwell elements in the model, E∞ is the long-time (equilibrium) modulus, and the τi are the relaxation times. By including multiple elements with different relaxation times, the generalized Maxwell model can accurately describe materials whose relaxation spans many decades in time.

4. Applications of the generalized Maxwell model

The generalized Maxwell model has found wide application in various fields, including polymer science, biomedical engineering, and materials science. It has been used to study the viscoelastic behavior of polymers, gels, and foams, and to design materials with specific viscoelastic properties. In biomedical engineering, the model has been used to study the mechanical behavior of soft tissues and to develop new biomaterials for tissue engineering. In materials science, the model has been used to characterize the viscoelastic properties of composites and to optimize their performance.

5. Comparison with other viscoelastic models

The generalized Maxwell model is just one of many viscoelastic models used to describe the rheological behavior of materials. Other popular models include the Kelvin-Voigt model, the Burgers model, and the Zener model. Each of these models has its own advantages and limitations, and the choice of model depends on the specific material and the behavior of interest. The generalized Maxwell model is particularly useful for materials with complex viscoelastic behavior, as it allows for a more detailed description of the relaxation processes.

6. Conclusion

In conclusion, the generalized Maxwell model is a powerful tool for describing the viscoelastic behavior of materials. By extending the original Maxwell model to multiple spring-dashpot branches, it provides a more accurate representation of the time-dependent viscoelastic properties of real materials. It has found wide application in various fields and has contributed to our understanding of the mechanical behavior of complex fluids and solids.
As our knowledge of viscoelastic materials continues to grow, the generalized Maxwell model will undoubtedly remain an important tool for researchers and engineers alike.
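Because the relaxation modulus above is a Prony series, the model's response to an arbitrary strain history can be sketched in a few lines. The following Python example is an illustration under assumed branch moduli and relaxation times (all numerical values are hypothetical), not a reference implementation: it evaluates E(t) and approximates the hereditary integral by discrete convolution on a uniform time grid.

    import numpy as np

    def relaxation_modulus(t, E_inf, E, tau):
        """Prony series E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
        t = np.asarray(t, dtype=float)
        return E_inf + np.sum(E[:, None] * np.exp(-t[None, :] / tau[:, None]), axis=0)

    def stress_response(t, strain, E_inf, E, tau):
        """sigma(t) = integral of E(t - s) * d(eps)/ds ds, as a discrete convolution."""
        dt = t[1] - t[0]
        deps = np.gradient(strain, dt)            # strain rate on the grid
        Et = relaxation_modulus(t, E_inf, E, tau)
        # sigma_n = sum_m E(t_n - t_m) * deps_m * dt
        return np.convolve(Et, deps)[: len(t)] * dt

    # Example: three-branch solid under constant strain rate (moduli in MPa, assumed).
    t = np.linspace(0.0, 10.0, 1001)
    E_br = np.array([50.0, 30.0, 20.0])
    tau_br = np.array([0.1, 1.0, 10.0])
    sigma = stress_response(t, 0.01 * t, E_inf=10.0, E=E_br, tau=tau_br)

The widely separated relaxation times (0.1, 1, 10 s) are what a single Maxwell element cannot reproduce; the Prony sum handles them with one term each.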
Mullins effect
Filled elastomers present a rate-dependency: at large strains, hysteresis loops appear when unloading and, under cyclic loading conditions, a loss of stiffness is observed.
article info
Article history: Received 7 May 2008 Received in revised form 18 October 2008 Available online 29 January 2009
Keywords: Elastomers; Hysteresis; Stress softening; Mullins effect; Nonlinear behavior
model is that the reloading path is the same as the unloading path as long as the maximum strain of the first loading is not reached. Fig. 1 shows the behavior of an idealized material corresponding to the Mullins effect only. But actual soft materials have a more complex inelastic behavior. In tests with constant displacement amplitude, the stress drop between successive loading cycles is especially important during the first and second cycles and becomes negligible after about 5–10 cycles. A stationary state with constant stress amplitude and a stabilized hysteresis loop is then reached after several cycles (see Fig. 2).
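The idealized behavior of Fig. 1, in which reloading retraces the unloading path until the previous maximum strain is exceeded, can be captured with a single memory variable storing the maximum strain. The Python sketch below is a minimal illustration assuming a hypothetical primary loading curve and softening exponent; it is not one of the constitutive models of the literature cited here.

    import numpy as np

    def primary(eps):
        """Hypothetical primary (virgin) loading curve, e.g. a stiffening rubber law."""
        return 2.0 * eps + 8.0 * eps ** 3

    def mullins_stress(strain_history, a=1.5):
        """Idealized Mullins response: unloading and reloading share one softened
        branch that rejoins the primary curve once the past maximum strain is exceeded."""
        eps_max = 1e-12            # running maximum strain (damage memory)
        out = []
        for eps in strain_history:
            if eps >= eps_max:     # virgin loading: follow the primary curve
                eps_max = eps
                out.append(primary(eps))
            else:                  # below past maximum: softened un/reloading branch
                out.append(primary(eps) * (eps / eps_max) ** a)
        return np.array(out)

    # Two cycles to strain 0.3, then one to 0.5: the third loading retraces the
    # previous unloading path until 0.3 is exceeded, as in Fig. 1.
    eps = np.concatenate([np.linspace(0, 0.3, 100), np.linspace(0.3, 0, 100),
                          np.linspace(0, 0.5, 100), np.linspace(0.5, 0, 100)])
    sigma = mullins_stress(eps)

By construction this sketch produces no extra stress drop on repeated cycles to the same amplitude; reproducing the transient softening over 5-10 cycles described above would require an additional cycle-dependent variable.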
Using FACT to eliminate titanium-matrix interference on the Nb 292.781 nm line

Analysis and Testing, 2020, No. 7

LUO Feng (Baoji Titanium Valley New Materials Testing Technology Center Co., Ltd., Baoji 721013, Shaanxi, China)

Abstract: Niobium in titanium alloys was determined by ICP-OES. From the spectrum of a single-element Nb standard solution at 292.781 nm, analysis of the line profile showed that the titanium matrix itself interferes at the Nb 292.781 nm line. Using the FACT function, a calibration curve was established together with a blank model, a matrix model, an analyte model, and an interferent model; the selected models were then used to test FACT's removal of the titanium-matrix interference.

FACT is a technique in Agilent's ICP Expert software for removing spectral interferences [1-9]; it can be understood as "fast automated curve fitting". This function effectively subtracts the contribution of interfering elements from the signal of the element being determined. FACT measures standards of the analyte and of the interfering element separately and resolves the recorded spectra graphically, separating a line adjacent to, or overlapping, the analyte line into two distinct lines. Once the fitted spectrum of each model is satisfactory, the model is stored in the method; in subsequent analyses, every measured line is resolved according to the corresponding stored model. When the matrix itself produces spectral interference and no suitable alternative line is available, the FACT function can be used to strip the interference from the line, yielding a usable line for determining the element in the sample.

1 Experimental

1.1 Instrument and operating parameters

An Agilent 725 inductively coupled plasma optical emission spectrometer (Agilent Technologies (China) Co., Ltd.) was used with the following operating parameters: RF power 1.2 kW; plasma gas flow 15 L·min⁻¹; auxiliary gas flow 1.5 L·min⁻¹; nebulizer pressure 200 kPa; observation height 10 mm; peristaltic pump speed 15 r·min⁻¹; integration time 5 s; 3 replicate integrations.

1.2 Reagents and materials

HF (ρ = 1.14 g·mL⁻¹, guaranteed reagent); HNO₃ (ρ = 1.42 g·mL⁻¹, guaranteed reagent); Nb standard solution, 10 000 µg·mL⁻¹ (Merck, Germany; diluted before use); laboratory water (deionized, 18.2 MΩ·cm); high-purity titanium matrix (w(Ti) > 99.99%, w(Nb) < 0.001%).
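The principle behind FACT, decomposing a measured spectral window into stored analyte, interferent, and blank components, can be illustrated with a simple least-squares unmixing. The Python sketch below uses synthetic Gaussian line shapes and hypothetical amplitudes throughout; it is not the Agilent algorithm, only an illustration of the idea.

    import numpy as np

    # Wavelength window (nm) around the Nb 292.781 nm line (illustrative grid).
    wl = np.linspace(292.70, 292.86, 161)

    def gauss(center, width=0.012):
        return np.exp(-0.5 * ((wl - center) / width) ** 2)

    # Component spectra "measured" from pure standards (hypothetical shapes):
    nb_line = gauss(292.781)          # analyte model
    ti_line = gauss(292.798)          # interfering matrix line overlapping the Nb line
    background = np.ones_like(wl)     # blank model (flat background)

    # Synthetic sample spectrum: 0.3x Nb + 2.0x Ti + offset + noise.
    rng = np.random.default_rng(0)
    sample = 0.3 * nb_line + 2.0 * ti_line + 0.05 * background \
             + rng.normal(0, 0.005, wl.size)

    # Least-squares fit of the sample onto the stored component models;
    # the Nb coefficient is the interference-corrected analyte signal.
    A = np.column_stack([nb_line, ti_line, background])
    coef, *_ = np.linalg.lstsq(A, sample, rcond=None)
    print(f"recovered Nb amplitude: {coef[0]:.3f} (true 0.300)")

The essential point mirrors the text: once the pure-component line shapes are stored as models, an overlapping matrix line can be stripped mathematically even when no interference-free alternative line exists.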
Numerical analysis of the seismic mitigation performance of a mass energy-dissipation (MES) system

Structural Engineers, Vol. 39, No. 5, Oct. 2023

ZHAO Ziming, SHI Weixing* (Department of Disaster Mitigation for Structures, Tongji University, Shanghai 200092, China)

Abstract: A novel multi-stage seismic mitigation system consisting of non-structural mass, friction isolator and viscoelastic materials is proposed. The low input seismic energy is mostly dissipated by the hysteretic behavior of the friction isolator, and the damping properties work until the pounding of the isolator and viscoelastic materials occurs. The mechanical model of the MES system is analyzed using the Simscape module in MATLAB software, and the seismic performance of the MES system is compared with a conventional base-isolated system and tuned mass damper systems. The results show that the MES system can not only dissipate energy and effectively control the maximum story displacement and story drift through the mass system, but also mitigate the story's absolute acceleration via the isolation system, thereby reducing the input earthquake energy and structural damage.

Keywords: MES system, friction pendulum isolation, tuned mass damper, viscoelastic materials, Simscape

0 Introduction

In energy-dissipation and seismic mitigation engineering, a mass system is often used to consume energy: the natural frequency of an added mass at the top of the structure is tuned to coincide with that of the main structure below, so that the mass resonates and its amplitude is amplified, while dampers or isolation bearings dissipate the input seismic energy, thereby reducing the energy-dissipation demand on the main structure and its seismic response.
Influence of roughness
Influence of roughness on characteristics of tight interference fit of a shaft and a hub

G. M. Yang (a), J. C. Coquille (b), J. F. Fontaine (c,*), M. Lambertin (d)

(a) Centre de Ressources Technologique, route de Monéteau, 89000 Auxerre, France
(b) Ecole Nationale d'Enseignement Supérieur Agricole de Dijon, 21000 Quetigny, France
(c) Laboratoire de Recherche en Mécanique et Acoustique, IUT, route des plaines de l'Yonne, 89000 Auxerre, France
(d) Laboratoire Bourguignon des Matériaux et Procédés, ENSAM, CER Cluny, 71250, France

Received 2 August 1999

International Journal of Solids and Structures 38 (2001) 7691–7701
* Corresponding author. Tel.: +33-3-8694-2619; fax: +33-3-8694-2616. E-mail address: fontaine@alcyone.u-bourgogne.fr (J.F. Fontaine).

Abstract

Tight interference fits are very widely applied in industry because of their simple manufacturing process. But in all proposed models for determining their characteristics, one supposes that the contact interfaces are perfect. This is not the case in reality. The objectives of this study are to analyse the behaviour of the contact surfaces under the effect of the assembly pressure. We show, for a given surface texture typology, that the roughness has a noticeable influence on the fit strength. Our process uses an experimental approach correlated with a numerical modelling of the assembly. The aim is to justify a tightening definition based on the maximum-matter concept and to introduce, for this particular case, a prediction of the loss due to the deformation of the surface asperities. © 2001 Elsevier Science Ltd. All rights reserved. PII: S0020-7683(01)00035-X

Keywords: Tight interference fit; Assembly; Contact with asperities; Finite element modelling

1. Introduction

1.1. Classical approach for modelling tight interference fits

The tight interference fit, as a way of assembling two mechanical components, is widely applied in industry because it is efficient and, in theory, simple to implement. This technique is used either to add an interface part having good tribological properties to a structure (e.g. fitting a bearing on a gearbox casing), or to increase the strength of a high-pressure enclosure. It can also be used to assemble a shaft and a hub on a large scale (e.g. transmission and braking sets of railway bogies) or for small and very accurate sizes (for example, video recorder drums). The presented study concerns the latter application.

For the calculation of the cylindrical tight fit, we use the classical approach combining thick-walled cylinder theory and Lamé's method of elasticity (Nicolet and Trottet, 1971). It forms the basis of the standards in most industrial countries (NF E22-621, 1980; NF E22-620, 1984). The contact pressure can be written thus:

    p_n = Δ / { d [ (a_m + k²·b_m)/(k² − 1) + a_a ] }    (1)

where Δ is the tightening (the difference between the shaft and hub diameters), k is a geometrical factor, d is the fitting nominal diameter, and a_m, a_a, b_m are material characteristic constants.

Note that the geometrical characteristics taken into account are only dimensional. However, the standards recommend allowing for a loss of tightening due to the smoothing of the asperities; this of course depends on the arithmetic roughness of both contacting surfaces (R_a,s for the shaft, R_a,h for the hub):

    L_Δ = 3(R_a,s + R_a,h)    (2)

This expression is empirical, so our objective is to study the asperity behaviour and its contribution to the fit strength. As Fig. 1 shows, we take the micro-geometry of the contacting surfaces into account. The tightening is not locally constant, so we have two possibilities for introducing the tightening parameter¹:

· the mean tightening (Δ_m), given by the mean-square method;
· the peak-to-peak tightening (Δ_pp), given by the difference between the maximum diameter of the shaft and the minimum diameter of the hub hole.

¹ The standards do not give more precision except for dimensional and form-defect specifications based on the maximum-matter criterion. However, we make an arbitrary choice here to begin the study with a mean tightening definition based on mathematical reasoning.
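For the common special case of a hub fitted on a solid shaft, the Lamé thick-cylinder result behind Eq. (1) can be evaluated directly. The following Python sketch assumes this standard textbook form and illustrative material data (duralumin hub, steel shaft); it is not code from the paper.

    import numpy as np

    def fit_pressure(delta, d, D_ext, E_h, nu_h, E_s, nu_s):
        """Nominal contact pressure of a shaft/hub interference fit from Lame
        thick-cylinder theory (solid shaft, hub of outer diameter D_ext).
        delta: diametral tightening (m), d: nominal fit diameter (m)."""
        k = D_ext / d                                    # geometrical factor
        hub = ((k**2 + 1) / (k**2 - 1) + nu_h) / E_h     # hub compliance term
        shaft = (1 - nu_s) / E_s                         # solid-shaft compliance term
        return delta / (d * (hub + shaft))

    # Example close to the paper's tests: steel shaft in a duralumin hub,
    # d = 16 mm, D_ext = 60 mm, mean tightening 25 um (material data assumed).
    p = fit_pressure(25e-6, 0.016, 0.060, E_h=72e9, nu_h=0.33, E_s=210e9, nu_s=0.30)
    print(f"nominal contact pressure: {p/1e6:.0f} MPa")

With the assumed material data this gives a pressure of the same order as the mean pressure reported in Table 3 for the smoothest sample, which is a useful sanity check on the smooth-interface theory that the rest of the paper refines.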
1.2. Mechanical contact with roughness

The study of the contact between two rough surfaces is a difficult theoretical as well as experimental problem. The objective of several studies has been the analysis of the indentation of an infinite flat rough surface by a flat smooth die: Moore (1948), Bowden and Tabor (1958), Williamson and Hunt (1972) and Childs (1977) have shown that roughness persists in different experiments under high pressure greater than the elastic limit. This phenomenon is due to the hardening of the asperities and to the type of load (compression in all three directions) induced by the indentation.

Other studies concerning the laws of friction in metal forming processes present different macroscopic loads: the material is subjected to transversal tension or compression. The models are based on the slip-line field method (Bay and Wanheim, 1976; Sutcliffe, 1988) or on the finite element method (Ike and Makinouchi, 1990). They show that the boundary conditions play a predominant role in the surface roughness behaviour. A tension in the surface direction speeds up the flattening of the asperities, contrary to a compression, which delays it.

The tight interference fit case is different. The geometry is cylindrical and the radial and ortho-radial stresses in the hub are equal and opposite at a macroscopic scale. In a previous study, we showed the influence of form defect (Fontaine and Siala, 1998). In the case of periodic defects, on the peaks the pressure is maximum and the ortho-radial stress minimum, and vice versa in the hollows. In the case of the surface texture study, where the asperity behaviour is not easily predicted, we first carried out an experimental study which we later correlated with a finite element modelling of the assembly using the ABAQUS® software. The obtained experimental and numerical results show the great influence of the surface roughness. Finally, we justify that the peak-to-peak tightening criterion gives the best correlation with the assembly strength, and we introduce a new tightening-loss calculation based on numerical and experimental observations.

2. Experimental tests

The principle of the experimental tests (Yang, 1998) lies in the comparison of the extraction loads of fitted sets which have the same geometrical characteristics but different roughness values (Fig. 2). The tightening Δ_m is fixed at 25 µm for a fitting diameter equal to 16 mm. The relative tightening Δ/d = 1.56‰ is consistent with the standard data. The shafts used are in fact treated-steel control elements; their cylindrical surfaces are considered perfect. The hubs were manufactured in duralumin in accordance with the simple disk geometry given in Fig. 1 (exterior diameter 60 mm and thickness 10 mm). All the holes of the hubs were machined on a DNC lathe, varying the turning step to give different roughness values (0.2 µm < R_a < 6.8 µm). All of the geometrical characteristics were measured using several methods to ensure the accuracy of the results. The mean tightening was calculated by taking all defects into account.
This step was very delicate because each parameter (dimension, form, surface texture) was measured on a different machine. It was necessary to compile the measurements to restore the topography of the holes. For every class of roughness, a limited number of hubs presenting minimum form defects and similar mean dimensions was selected and paired with a chosen shaft. Every pair was assembled by heating the hub and freely inserting the shaft. After cooling, the strength was measured by extracting the shaft at low speed (0.5 mm/min). This characterization has the advantage of being simple to process, but it is not complete: it would need to be associated with the resistance to sliding torque, which is often a functional characteristic in industrial fit cases.

We can see the evolution of the extraction load versus the relative displacement for different roughness values in Fig. 3. It is important to note that the first stage of every curve corresponds to the elastic rigidity of the assembly, where there is no relative displacement between the contact surfaces. The roughness does not influence the latter. In the second stage, which concerns the relative sliding of the two parts, the load remains constant. At this stage, it is clear that the surface texture greatly influences the strength of the fit. It is difficult to understand the asperity deformation mechanisms because it is impossible to observe the contact surface, so we have modelled the process by the finite element method.

3. Numerical modelling of the tight interference fit process and of the extraction test

3.1. General assumptions

1. The shaft is considered perfect; only the hub bore carries defects (Fig. 4). Table 1 shows the different characteristics of the samples.
2. The thermal dilatation step is not taken into account. It is considered not to affect the properties of the materials or the interfacial micro-geometry. This can be justified by the low temperature used (200 °C).
3. The roughness in the ortho-radial direction is negligible compared to that in the axial direction (see Table 2). The modelling can therefore be carried out under the assumption of axial symmetry.
4. The behaviour is elastoplastic. The formulation is expressed in large deformations and a contact with little sliding is chosen for the fitting step.

The coefficient of friction between steel and duralumin is taken as 0.15, a conventional value. The mesh is made up of four-node quadrangles. This is accomplished by checking that the mesh does not influence the results while providing a good compromise with respect to calculation time. A fine mesh is chosen for a zone 2 mm wide near the interface; the element length is approximately 20 µm, which is less than half of the smallest roughness step studied. Every sample is modelled with the same mesh according to the following procedure:

· Choice of a characteristic sample profile: several measurements are made at different surface locations, and the one with values nearest to the mean parameter values is chosen.
· Digitalization of the profile in 4000 points. The lengths are variable because of the different measuring conditions (the evaluated length depends on the filter), so the profile is completed by translation to obtain the total fitting length.
· Search for the spline curve associated with the profile points.
· Elaboration of the elements with respect to the profile: it is necessary to have a node at every peak and hollow, as Fig. 5 shows.

The boundary conditions are the following:

· the outside surfaces are free,
· symmetry with respect to a plane perpendicular to the cylinder axis is imposed.

The tightening is applied progressively up to the value of the simulated assembly.

Fig. 4. Example of 3D roughness measures of the hub hole.
Fig. 5. Mesh of asperities with respect to peaks and hollows for R_a = 2.4 µm.

Table 1. Comparison between the hub and the shaft geometrical defects

Sample number                                     1      2      3
R_a, arithmetic roughness (µm)   Hub hole         0.24   2.18   6.82
                                 Shaft            0.09   0.071  0.063
Cylindricity defect (µm)         Hub hole         9.48   8.79   8.56
                                 Shaft            1.72   1.87   1.04
Δ_m, mean tightening (µm)                         25.8   26.9   23

Table 2. Comparison of the surface texture characteristics in the axial and ortho-radial directions

Sample number          1      2      3
R_a, axial             0.24   2.18   6.82
R_a, ortho-radial      0.04   0.17   0.13
R_z, axial             1.89   9.56   26.26
R_z, ortho-radial      0.19   0.28   –

3.2. Extraction test simulation

The extraction step is modelled from the results of the first step with the following boundary conditions: the displacements of the shaft extremity are imposed and the opposite hub face is fixed in the axial direction. As the experimental curves present a non-monotonic response, Riks' method is used in order to make the calculations converge.

4. Results and interpretation

4.1. Extraction strength

Fig. 6 gives the evolution of the extraction force versus the shaft displacement. On the one hand, one can see a good correlation between the calculated results and the experimental measures. The difference can come from defects and inaccurate experimental data, such as the value of the friction coefficient; however, it is less than 9%, which permits us to validate our modelling. On the other hand, we notice the great influence of the surface texture on the extraction strength. The modelling permits us to bring some responses to our initial questions. The strength R depends on two parameters: the nominal contact pressure p_n and the friction coefficient f, with:

    R = ∫_{A_n} f·p_n dA_n    (3)

p_n depends on the real pressure p_r and on the ratio between the real contact area A_r and the nominal area A_n:

    p_n = p_r·(A_r / A_n)    (4)

It is essential to study the influence of roughness on both parameters.

4.2. Contact pressure and interface stress state

From the numerical results it is very difficult to determine the real contact area, but the nominal pressure can be calculated from the mean contact pressure. Table 3 indicates that this grows with the roughness, and we can see that the maximum pressure shows a notable increase too. However, the maximum level of Von Mises stress in the hub is less than the maximum pressure. In fact it depends on the peak distribution. If the surface were perfect, the mean pressure and the maximal pressure would be equal to half of the equivalent Von Mises stress.

Table 3. Mean pressure, maximum pressure and maximum Von Mises stress versus the roughness

R_a (µm)   Mean pressure p_n (MPa)   Maximum pressure (MPa)   Maximum Von Mises stress in the hub (MPa)
0.24       70.5                      626                      411
2.18       111.2                     425                      487
6.82       149.1                     654                      505

In fact, the surface roughness changes the contact interface behaviour radically. Fig. 7a and b indicate the evolution of the three principal stresses in a radial direction, at a peak level for a rough surface and for a perfect surface. In the case of a perfect surface, the elastic limit is reached more quickly because of the state of radial compression (σ1 < 0) and ortho-radial tension (σ2 > 0), whereas in the case of a rough surface there is a compressive stress in all three directions (σ1 < 0, σ2 < 0 and σ3 < 0). This state is not systematic, but is due to the roughness distribution, which is not perfectly periodic and depends on the relative heights of the peaks and of their neighbours. Fig. 8 indicates the radial and ortho-radial stress fields. The radial stresses are negative and the ortho-radial stresses are globally positive, but near certain peaks they are negative (something which delays the onset of plasticity). A further study will examine the interaction between the surface asperities. Finally, we observe that the stresses in the hub are not maximal at the interface but under the peaks, as shown in Fig. 9. One can conclude that in spite of stress conditions favourable to crushing the asperities, they tend to persist even under great pressure.

Fig. 8. Stress field near the interface (in grey, compressive stresses; in white, tensile stresses): left, radial stresses; right, ortho-radial stresses.
Fig. 9. Von Mises stress distribution for R_a = 2.18 µm: areas with plastic deformation are represented in black.

4.3. Friction coefficient

It is very difficult to predict the evolution of the coefficient of friction. If it is presumed that the thermal step has not modified the material properties, its evolution would come from the variation of the surface texture. Table 4 indicates that the different surface texture parameters do not change very much. These parameters are calculated from the numerical simulations. We notice that the roughness decreases just a little, which means that the asperities persist. The total height of the profile (P_t) is smaller after assembly, so the lone peaks are very deformed until the contact involves all the peaks (see Fig. 10). The shaft, considered perfect at the beginning, presents a low roughness level due to the great difference in hardness between the two materials. We can estimate that the coefficient of friction does not change in the case of the given problem.

Table 4. Comparison of surface texture parameters before (b) and after (a) assembly

Group        Hub: R   Rx     W     Wt    Pt      Shaft: R   Rx    W     Wt    Pt
1       b    0.6      1.2    0.8   1.9   2.0     –          –     –     –     –
        a    0.5      1.1    0.8   1.2   1.2     0.1        0.2   0.3   –     –
2       b    9.6      12.6   2.8   3.5   13.6    –          –     –     –     –
        a    9.2      11.2   2.8   3.4   11.8    0.2        0.3   0.4   0.4   –
3       b    24.1     26.0   2.4   2.7   26.9    –          –     –     –     –
        a    22.9     24.3   2.4   2.3   24.4    0.2        0.3   0.5   0.5   –

5. A best definition of the tightening

From the elements given previously, it can be concluded that if the mean tightening parameter (using the mean-squares method) is used to define the characteristic parameter of the fit, the surface texture will notably influence the assembly strength. Since the surface roughness persists after assembly, it could be said that the asperities contribute to the tightening. This is the reason why we have introduced the second definition of the tightening (Δ_pp), consistent with the maximum-matter concept (see Fig. 1). The peak-to-peak tightening (Δ_pp) is defined by:

    Δ_pp = d_s,max − d_h,min = Δ_m + 2·R_pb    (5)

where d_s,max is the maximum diameter of the shaft, d_h,min is the minimum diameter of the hub hole, Δ_m is the mean tightening, and R_pb is the maximum height of the peaks of the hub bore profile.

Concerning the tightening loss, it can be limited to the plastification of the isolated peaks. It can be directly eliminated at the time of measurement if the "motifs" method is used (Boulanger, 1992), or it can be determined by subtracting twice the distance between the highest peak and the mean peak line of the hub profile:
    Δ_pp,corrected = Δ_pp − 2(R_ph − R_pmh)

Table 5 shows, for the preceding example, the comparison between the measured extraction forces and the forces calculated from the standard model (I) using the proposed tightening with the different corrections. One can see that the proposed tightening loss performs better than the one given in the standards. To verify the validity of our approach, we have simulated three fits, each with an identical peak-to-peak tightening and different roughness values, as shown in Fig. 11. The extraction forces given in Fig. 12 are identical for the three roughness values, therefore demonstrating the validity of the peak-to-peak tightening definition.

Table 5. Comparison between the measured and calculated extraction forces: (I) without tightening loss, (II) suggested loss L_Δ = 2(R_ph − R_pmh), (III) standard loss L_Δ = 3(R_a,s + R_a,h)

R_a    Measured force   Δ_pp   Force (I)   Δ_pp corrected (II)   Force (II)   Δ_pp corrected (III)   Force (III)
0.24   1570             28.6   1867        27.46                 1793         28.12                  1836
2.18   2690             44.7   2918        42.1                  2749         40.34                  2634
6.82   3720             60     3917        55.4                  3617         46.36                  3027

Coefficient of correlation between the calculated forces and the measured forces: 0.99995 (I), 0.99999 (II) and 0.9856 (III).

6. Conclusions

Based on the results obtained through a series of experimental tests and associated numerical simulations, this paper shows the behaviour of the asperities in the cylindrical tight fit of a shaft and a hub. These analyses confirm that the asperities play an important and positive role in the characteristics of the cylindrical interference fit. On this basis, we have found that the major factor is the mean height of the asperities. This factor must be integrated into the definition of the tightening. We propose the peak-to-peak tightening definition as the right parameter permitting the calculation of the pressure with the standard equation (1). We think that the roughness contributes to the fit strength, and it is advisable to use economical manufacturing processes such as turning or boring to produce the fit surfaces, as opposed to polishing or grinding methods that are much more expensive.

References

Bay, N., Wanheim, T., 1976. Real area of contact and friction stress at high pressure sliding contact. Wear 38, 201–209.
Boulanger, J., 1992. The 'motifs' method: an interesting complement to the ISO parameters for some functional problems. Int. J. Mach. Tools Manufact. 32 (1/2), 203–209.
Bowden, F.P., Tabor, D., 1958. The Friction and Lubrication of Solids. Clarendon Press, Oxford.
Childs, T.H.C., 1977. The persistence of roughness between surfaces in static contact. Proc. R. Soc. Lond. A 353, 35–53.
Fontaine, J.F., Siala, I.E., 1998. Form defect influence on the shrinkage fit characteristics. Eur. J. Mech. A/Solids 17 (1), 107–119.
Ike, H., Makinouchi, A., 1990. Effect of lateral tension and compression on plane strain flattening processes of surface asperities lying over a plastically deformable bulk. Wear 140, 17–38.
Moore, A.J.W., 1948. Proc. R. Soc. Lond. A 195, 213–243.
Nicolet, G., Trottet, J., 1971. Eléments de machines. Lausanne.
NF E22-620, 1984. Assemblages frettés sur portée cylindrique: fonction, réalisation, calcul. AFNOR, Paris La Défense.
NF E22-621, 1980. Assemblages frettés, dimensions, tolérances et états de surface pour assemblages usuels. AFNOR, Paris La Défense.
Sutcliffe, M.P.F., 1988. Surface asperity deformation in metal forming processes. Int. J. Mech. Sci. 30, 847–868.
Williamson, J.B.P., Hunt, 1972. Asperity persistence and the real area of contact between rough surfaces. Proc. R. Soc. Lond. A 327, 147–157.
Yang, G.M., 1998. Influence de l'état de surface sur les caractéristiques d'un assemblage fretté. Thèse de doctorat, ENSAM, France.
Proc. Valencia / ISBA 8th World Meeting on Bayesian Statistics, Benidorm (Alicante, Spain), June 1st–6th, 2006

Bayesian nonparametric latent feature models

Zoubin Ghahramani, University of Cambridge, UK (zoubin@)
Thomas L. Griffiths, University of California at Berkeley, USA (tom)

Zoubin Ghahramani is in the Department of Engineering, University of Cambridge, and the Machine Learning Department, Carnegie Mellon University; Thomas L. Griffiths is in the Psychology Department, University of California at Berkeley; Peter Sollich is in the Department of Mathematics, King's College London.

1. INTRODUCTION

Latent or hidden variables are an important component of many statistical models. The role of these latent variables may be to represent properties of the objects or data points being modelled that have not been directly observed, or to represent hidden causes that explain the observed data.

Most models with latent variables assume a finite number of latent variables per object. At the extreme, mixture models can be represented via a single discrete latent variable, and hidden Markov models (HMMs) via a single latent variable evolving over time. Factor analysis and independent components analysis (ICA) generally use more than one latent variable per object, but this number is usually assumed to be small. The close relationship between latent variable models such as factor analysis, state-space models, finite mixture models, HMMs, and ICA is reviewed in (Roweis and Ghahramani, 1999).

Our goal is to describe a class of latent variable models in which each object is associated with a (potentially unbounded) vector of latent features. Latent feature representations can be found in several widely-used statistical models. In Latent Dirichlet Allocation (LDA; Blei, Ng, & Jordan, 2003) each object is associated with a probability distribution over latent features. LDA has proven very successful for modelling the content of documents, where each feature indicates one of the topics that appears in the document. While using a probability distribution over features may be sensible to model the distribution of topics in a document, it introduces a conservation constraint (the more an object expresses one feature, the less it can express others) which may not be appropriate in other contexts. Other latent feature representations include binary vectors with entries indicating the presence or absence of each feature (e.g., Ueda & Saito, 2003), continuous vectors representing objects as points in a latent space (e.g., Jolliffe, 1986), and factorial models, in which each feature takes on one of a discrete set of values (e.g., Zemel & Hinton, 1994; Ghahramani, 1995).

While it may be computationally convenient to define models with a small finite number of latent variables or latent features per object, it may be statistically inappropriate to constrain the number of latent variables a priori. The problem of finding the number of latent variables in a statistical model has often been treated as a model selection problem, choosing the model with the dimensionality that results in the best performance. However, this treatment of the problem assumes that there is a single, finite-dimensional representation that correctly characterizes the properties of the observed objects. This assumption may be unreasonable. For example, when modelling symptoms in medical patients, the latent variables may include not only the presence or absence of known diseases but also any number of environmental and genetic factors and potentially unknown diseases which relate to the pattern of symptoms the patient exhibited.
The assumption that the observed objects manifest a sparse subset of an unbounded number of latent classes is often used in nonparametric Bayesian statistics. In particular, this assumption is made in Dirichlet process mixture models, which are used for nonparametric density estimation (Antoniak, 1974; Escobar & West, 1995; Ferguson, 1983; Neal, 2000). Under one interpretation of a Dirichlet process mixture model, each object is assigned to a latent class, and each class is associated with a distribution over observable properties. The prior distribution over assignments of objects to classes is specified in such a way that the number of classes used by the model is bounded only by the number of objects, making Dirichlet process mixture models "infinite" mixture models (Rasmussen, 2000). Recent work has extended these methods to models in which each object is represented by a distribution over features (Blei, Griffiths, Jordan, & Tenenbaum, 2004; Teh, Jordan, Beal, & Blei, 2004). However, there are no equivalent methods for dealing with other feature-based representations, be they binary vectors, factorial structures, or vectors of continuous feature values.

In this paper, we take the idea of defining priors over infinite combinatorial structures from nonparametric Bayesian statistics, and use it to develop methods for unsupervised learning in which each object is represented by a sparse subset of an unbounded number of features. These features can be binary, take on multiple discrete values, or have continuous weights. In all of these representations, the difficult problem is deciding which features an object should possess. The set of features possessed by a set of objects can be expressed in the form of a binary matrix, where each row is an object, each column is a feature, and an entry of 1 indicates that a particular object possesses a particular feature. We thus focus on the problem of defining a distribution on infinite sparse binary matrices. Our derivation of this distribution is analogous to the limiting argument in (Neal, 2000; Green and Richardson, 2001) used to derive the Dirichlet process mixture model (Antoniak, 1974; Ferguson, 1983), and the resulting process we obtain is analogous to the Chinese restaurant process (CRP; Aldous, 1985; Pitman, 2002). This distribution over infinite binary matrices can be used to specify probabilistic models that represent objects with infinitely many binary features, and can be combined with priors on feature values to produce factorial and continuous representations.

The plan of the paper is as follows. Section 2 discusses the role of a prior on infinite binary matrices in defining infinite latent feature models. Section 3 describes such a prior, corresponding to a stochastic process we call the Indian buffet process (IBP). Section 4 describes a two-parameter extension of this model which allows additional flexibility in the structure of the infinite binary matrices.
Section 5 illustrates the use of this prior in a simple linear-Gaussian latent feature model. Section 6 illustrates several applications of the IBP prior. Section 7 presents some conclusions.

2. LATENT FEATURE MODELS

Assume we have N objects, represented by an N×D matrix X, where the i-th row of this matrix, x_i, consists of measurements of D observable properties of the i-th object. In a latent feature model, each object is represented by a vector of latent feature values f_i, and the properties x_i are generated from a distribution determined by those latent feature values. Latent feature values can be continuous, as in principal component analysis (PCA; Jolliffe, 1986), or discrete, as in cooperative vector quantization (CVQ; Zemel & Hinton, 1994; Ghahramani, 1995). In the remainder of this section, we will assume that feature values are continuous. Using the matrix F = [f_1^T f_2^T ··· f_N^T]^T to indicate the latent feature values for all N objects, the model is specified by a prior over features, p(F), and a distribution over observed property matrices conditioned on those features, p(X|F). As with latent class models, these distributions can be dealt with separately: p(F) specifies the number of features, their probability, and the distribution over values associated with each feature, while p(X|F) determines how these features relate to the properties of objects. Our focus will be on p(F), showing how such a prior can be defined without placing an upper bound on the number of features.

We can break the matrix F into two components: a binary matrix Z indicating which features are possessed by each object, with z_ik = 1 if object i has feature k, and a second component carrying the value of each feature. Different choices of feature representation relate familiar finite models to their infinite counterparts:

Feature representation              Finite model            Infinite model
f_i ∈ {1, ..., K}                   finite mixture model    DPM
f_i ∈ [0,1]^K, Σ_k f_ik = 1         LDA                     HDP
f_i ∈ {0,1}^K                       factorial models, CVQ   IBP
f_i ∈ ℝ^K                           FA, PCA, ICA            derivable from IBP

3. A DISTRIBUTION ON INFINITE BINARY MATRICES

In this section, we derive a distribution on infinite binary matrices by starting with a simple model that assumes K features, and then taking the limit as K → ∞. The resulting distribution corresponds to a simple generative process, which we term the Indian buffet process.

3.1. A finite feature model

We have N objects and K features, and the possession of feature k by object i is indicated by a binary variable z_ik. Each object can possess multiple features. The z_ik thus form a binary N×K feature matrix, Z. We will assume that each object possesses feature k with probability π_k, and that the features are generated independently. The probabilities π_k can each take on any value in [0,1]. Under this model, the probability of a matrix Z given π = {π_1, π_2, ..., π_K} is

    P(Z|π) = ∏_{k=1}^{K} ∏_{i=1}^{N} P(z_ik | π_k) = ∏_{k=1}^{K} π_k^{m_k} (1 − π_k)^{N − m_k},    (1)

where m_k = Σ_{i=1}^{N} z_ik is the number of objects possessing feature k.

We can define a prior on π by assuming that each π_k follows a beta distribution. The beta distribution has parameters r and s, and is conjugate to the binomial. The probability of any π_k under the Beta(r, s) distribution is given by

    p(π_k) = π_k^{r−1} (1 − π_k)^{s−1} / B(r, s),    (2)

where the beta function is

    B(r, s) = Γ(r)Γ(s) / Γ(r + s).    (3)

We take r = α/K and s = 1, so that

    B(α/K, 1) = Γ(α/K)Γ(1) / Γ(1 + α/K) = K/α,    (4)

exploiting the recursive definition of the gamma function. The effect of varying s is explored in Section 4. The probability model we have defined is

    π_k | α ∼ Beta(α/K, 1),   z_ik | π_k ∼ Bernoulli(π_k),

and the marginal probability of a binary matrix Z is

    P(Z) = ∏_{k=1}^{K} ∫ ( ∏_{i=1}^{N} P(z_ik | π_k) ) p(π_k) dπ_k    (5)
         = ∏_{k=1}^{K} B(m_k + α/K, N − m_k + 1) / B(α/K, 1)    (6)
         = ∏_{k=1}^{K} (α/K) Γ(m_k + α/K) Γ(N − m_k + 1) / Γ(N + 1 + α/K).    (7)

The expected number of non-zero entries in a column is

    E[1^T z_k] = N·(α/K) / (1 + α/K),    (8)

where the result follows from the fact that the expectation of a Beta(r, s) random variable is r/(r + s). Consequently, E[1^T Z 1] = K·E[1^T z_k] = Nα/(1 + α/K).
For any K, the expectation of the number of entries in Z is bounded above by Nα.

3.2. Equivalence classes

In order to find the limit of the distribution specified by Equation (7) as K → ∞, we need to define equivalence classes of binary matrices. Our equivalence classes will be defined with respect to a function on binary matrices, lof(·). This function maps binary matrices to left-ordered binary matrices. lof(Z) is obtained by ordering the columns of the binary matrix Z from left to right by the magnitude of the binary number expressed by that column, taking the first row as the most significant bit. The left-ordering of a binary matrix is shown in Figure 2. In the first row of the left-ordered matrix, the columns for which z_1k = 1 are grouped at the left. In the second row, the columns for which z_2k = 1 are grouped at the left of the sets for which z_1k = 1. This grouping structure persists throughout the matrix.

The history of feature k at object i is defined to be (z_1k, ..., z_(i−1)k). Where no object is specified, we will use history to refer to the full history of feature k, (z_1k, ..., z_Nk). We will individuate the histories of features using the decimal equivalent of the binary numbers corresponding to the column entries. For example, at object 3, features can have one of four histories: 0, corresponding to a feature with no previous assignments; 1, being a feature for which z_2k = 1 but z_1k = 0; 2, being a feature for which z_1k = 1 but z_2k = 0; and 3, being a feature possessed by both previous objects. K_h will denote the number of features possessing the history h, with K_0 being the number of features for which m_k = 0 and K_+ = Σ_{h=1}^{2^N − 1} K_h being the number of features for which m_k > 0, so K = K_0 + K_+. This method of denoting histories also facilitates the process of placing a binary matrix in left-ordered form, as it is used in the definition of lof(·).

The number of matrices in [Z] is reduced if Z contains identical columns, since some re-orderings of the columns of Z result in exactly the same matrix. Taking this into account, the cardinality of [Z] is

    ( K choose K_0 ··· K_{2^N − 1} ) = K! / ∏_{h=0}^{2^N − 1} K_h!,

so that

    P([Z]) = ( K! / ∏_{h=0}^{2^N − 1} K_h! ) ∏_{k=1}^{K} (α/K) Γ(m_k + α/K) Γ(N − m_k + 1) / Γ(N + 1 + α/K).    (10)

In order to take the limit of this expression as K → ∞, we will divide the columns of Z into two subsets, corresponding to the features for which m_k = 0 and the features for which m_k > 0. Re-ordering the columns such that m_k > 0 if k ≤ K_+, and m_k = 0 otherwise, we can break the product in Equation (10) into two parts, corresponding to these two subsets. The product thus becomes

    ∏_{k=1}^{K} (α/K) Γ(m_k + α/K) Γ(N − m_k + 1) / Γ(N + 1 + α/K)
      = ( (α/K) Γ(α/K) Γ(N + 1) / Γ(N + 1 + α/K) )^{K − K_+} ∏_{k=1}^{K_+} (α/K) Γ(m_k + α/K) Γ(N − m_k + 1) / Γ(N + 1 + α/K)    (11)
      = ( (α/K) Γ(α/K) Γ(N + 1) / Γ(N + 1 + α/K) )^{K} ∏_{k=1}^{K_+} Γ(m_k + α/K) Γ(N − m_k + 1) / ( Γ(α/K) Γ(N + 1) )    (12)
      = ( N! / ∏_{j=1}^{N} (j + α/K) )^{K} ∏_{k=1}^{K_+} (α/K) (N − m_k)! ∏_{j=1}^{m_k − 1} (j + α/K) / N!.    (13)

Substituting Equation (13) into Equation (10) and rearranging terms, we can compute our limit

    lim_{K→∞} P([Z]) = ( α^{K_+} / ∏_{h=1}^{2^N − 1} K_h! ) · exp(−α H_N) · ∏_{k=1}^{K_+} (N − m_k)! (m_k − 1)! / N!,    (14)

where H_N is the N-th harmonic number, H_N = Σ_{j=1}^{N} 1/j.

3.4. The Indian buffet process

The probability distribution defined in Equation (14) can be derived from a simple stochastic process: imagine N customers entering a restaurant one after another, each facing a buffet with infinitely many dishes arranged in a line. The first customer starts at the left of the buffet and takes a serving from each dish, stopping after a Poisson(α) number of dishes as his plate becomes overburdened. The i-th customer moves along the buffet, sampling dishes in proportion to their popularity, serving himself with probability m_k/i, where m_k is the number of previous customers who have sampled a dish. Having reached the end of all previously sampled dishes, the i-th customer then tries a Poisson(α/i) number of new dishes. We can indicate which customers chose which dishes using a binary matrix Z with N rows and infinitely many
columns, where z_ik = 1 if the i-th customer sampled the k-th dish. Figure 3 shows a matrix generated using the IBP with α = 10. The first customer tried 17 dishes. The second customer tried 7 of those dishes, and then tried 3 new dishes. The third customer tried 3 dishes tried by both previous customers, 5 dishes tried by only the first customer, and 2 new dishes. Vertically concatenating the choices of the customers produces the binary matrix shown in the figure.

Using K_1^{(i)} to indicate the number of new dishes sampled by the i-th customer, the probability of any particular matrix being produced by this process is

    P(Z) = ( α^{K_+} / ∏_{i=1}^{N} K_1^{(i)}! ) · exp(−α H_N) · ∏_{k=1}^{K_+} (N − m_k)! (m_k − 1)! / N!.    (15)

As can be seen from Figure 3, the matrices produced by this process are generally not in left-ordered form. However, these matrices are also not ordered arbitrarily, because the Poisson draws always result in choices of new dishes that are to the right of the previously sampled dishes. Customers are not exchangeable under this distribution, as the number of dishes counted as K_1^{(i)} depends upon the order in which the customers make their choices. However, if we only pay attention to the lof-equivalence classes of the matrices generated by this process, we obtain the exchangeable distribution P([Z]) given by Equation (14): (∏_{i=1}^{N} K_1^{(i)}!)/(∏_{h=1}^{2^N − 1} K_h!) matrices generated via this process map to the same left-ordered form, and P([Z]) is obtained by multiplying P(Z) from Equation (15) by this quantity. It is also possible to define a similar sequential process that directly produces a distribution on left-ordered binary matrices in which customers are exchangeable, but this requires more effort on the part of the customers. We call this the exchangeable IBP (Griffiths and Ghahramani, 2005).

3.5. Some properties of this distribution

These different views of the distribution specified by Equation (14) make it straightforward to derive some of its properties. First, the effective dimension of the model, K_+, follows a Poisson(αH_N) distribution. This is easily shown using the generative process described in the previous section, since under this process K_+ is the sum of Poisson(α), Poisson(α/2), Poisson(α/3), etc. The sum of a set of Poisson distributions is a Poisson distribution with parameter equal to the sum of the parameters of its components. Using the definition of the N-th harmonic number, this is αH_N.

A second property of this distribution is that the number of features possessed by each object follows a Poisson(α) distribution. This also follows from the definition of the IBP. The first customer chooses a Poisson(α) number of dishes. By exchangeability, all other customers must also choose a Poisson(α) number of dishes, since we can always specify an ordering on customers which begins with a particular customer.

Finally, it is possible to show that Z remains sparse as K → ∞. The simplest way to do this is to exploit the previous result: if the number of features possessed by each object follows a Poisson(α) distribution, then the expected number of entries in Z is Nα. This is consistent with the quantity obtained by taking the limit of this expectation in the finite model, which is given in Equation (8): lim_{K→∞} E[1^T Z 1] = lim_{K→∞} Nα/(1 + α/K) = Nα. More generally, we can use the property of sums of Poisson random variables described above to show that 1^T Z 1 will follow a Poisson(Nα) distribution. Consequently, the probability of values higher than the mean decreases exponentially.
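The buffet process translates directly into a sampler, which also gives a quick empirical check of the properties just derived. The following Python sketch is an illustration of the generative process defined above; the function and variable names are ours, not the paper's.

    import numpy as np

    def sample_ibp(N, alpha, seed=None):
        """Draw Z ~ IBP(alpha): customer i takes existing dish k w.p. m_k/i,
        then tries Poisson(alpha/i) new dishes."""
        rng = np.random.default_rng(seed)
        Z = np.zeros((N, 0), dtype=int)
        for i in range(1, N + 1):
            m = Z.sum(axis=0)                               # popularity of each dish
            Z[i - 1, :] = rng.random(Z.shape[1]) < m / i    # resample popular dishes
            new = rng.poisson(alpha / i)                    # new dishes for customer i
            if new > 0:
                block = np.zeros((N, new), dtype=int)
                block[i - 1, :] = 1
                Z = np.hstack([Z, block])
        return Z

    # Empirical check of two properties from Section 3.5: K+ ~ Poisson(alpha*H_N),
    # and each row contains Poisson(alpha) many ones.
    N, alpha = 100, 10.0
    Z = sample_ibp(N, alpha, seed=0)
    H_N = np.sum(1.0 / np.arange(1, N + 1))
    print("K+ =", Z.shape[1], " expected ~", alpha * H_N)
    print("mean features per object =", Z.sum(axis=1).mean(), " expected ~", alpha)

Because new dishes are always appended on the right, the sampled matrices are in the (non-left-ordered) form described above; mapping them through lof(·) recovers draws from the exchangeable distribution of Equation (14).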
3.6. Inference by Gibbs sampling

We have defined a distribution over infinite binary matrices that satisfies one of our desiderata: objects (the rows of the matrix) are exchangeable under this distribution. It remains to be shown that inference in infinite latent feature models is tractable, as was the case for infinite mixture models. We will derive a Gibbs sampler for latent feature models in which the exchangeable IBP is used as a prior. The critical quantity needed to define the sampling algorithm is the full conditional distribution

    P(z_ik = 1 | Z_−(ik), X) ∝ p(X|Z) P(z_ik = 1 | Z_−(ik)),    (16)

where Z_−(ik) denotes the entries of Z other than z_ik, and we are leaving aside the issue of the feature values V for the moment. The likelihood term, p(X|Z), relates the latent features to the observed data, and will depend on the model chosen for the observed data. The prior on Z contributes to this probability by specifying P(z_ik = 1 | Z_−(ik)).

In the finite model, where P(Z) is given by Equation (7), it is straightforward to compute the full conditional distribution for any z_ik. Integrating over π_k gives

    P(z_ik = 1 | z_−i,k) = ∫₀¹ P(z_ik | π_k) p(π_k | z_−i,k) dπ_k = (m_−i,k + α/K) / (N + α/K),    (17)

where z_−i,k is the set of assignments of other objects, not including i, for feature k, and m_−i,k is the number of objects possessing feature k, not including i. We need only condition on z_−i,k rather than Z_−(ik) because the columns of the matrix are generated independently under this prior.

In the infinite case, we can derive the conditional distribution from the exchangeable IBP. Choosing an ordering on objects such that the i-th object corresponds to the last customer to visit the buffet, we obtain

    P(z_ik = 1 | z_−i,k) = m_−i,k / N    (18)

for any feature with m_−i,k > 0.

4. A TWO-PARAMETER EXTENSION

A prior that can reproduce such extreme cases is very useful, but in general it will be helpful to have a prior where the overall number of features used can be specified. The required generalization is simple: one takes r = (αβ)/K and s = β in Equation (2). Setting β = 1 then recovers the one-parameter IBP, but the calculations go through in basically the same way also for other β. Equation (7), the joint distribution of feature vectors for finite K, becomes

    P(Z) = ∏_{k=1}^{K} B(m_k + αβ/K, N − m_k + β) / B(αβ/K, β),    (19)

and taking the limit of the corresponding distribution on left-ordered matrices as K → ∞ gives

    P([Z]) = ( (αβ)^{K_+} / ∏_{h≥1} K_h! ) · e^{−K̄_+} · ∏_{k=1}^{K_+} B(m_k, N − m_k + β),    (20)

with K̄_+, the expected overall number of features, defined below.

As with the one-parameter model, this two-parameter model also has a sequential generative process. Again, we will use the Indian buffet analogy. Like before, the first customer starts at the left of the buffet and samples a Poisson(α) number of dishes. The i-th customer serves himself from any dish previously sampled by m_k > 0 customers with probability m_k/(β + i − 1), and in addition from a Poisson(αβ/(β + i − 1)) number of new dishes. The customer-dish matrix is a sample from this two-parameter IBP. Two other generative processes for this model are described in the Appendix.

The parameter β is introduced above in such a way as to preserve the average number of features per object, α; this result follows from exchangeability and the fact that the first customer samples a Poisson(α) number of dishes. Thus, the average number of nonzero entries in Z also remains Nα. More interesting is the expected value of the overall number of features, i.e. the number K_+ of k with m_k > 0. One gets directly from the buffet interpretation, or via any of the other routes, that the expected overall number of features is

    K̄_+ = α Σ_{i=1}^{N} β/(β + i − 1),

and that the distribution of K_+ is Poisson with this mean. We can see from this that the total number of features used increases as β increases, so we can interpret β as the feature repulsion, or 1/β as the feature stickiness. In the limit β → ∞ (for fixed N), K̄_+ → α, again as expected: in this limit features are infinitely sticky and all customers sample the same dishes as the first one. For finite β, one sees that the asymptotic behavior is K̄_+ ∼ αβ ln N, because in the relevant terms of the sum one can then approximate β/(β + i − 1) ≈ β/i. If β ≫ 1, on the other hand, the logarithmic regime is preceded by linear growth at small N < β, during which K̄_+ grows roughly as αN. While all matrices sampled from the prior contain roughly the same number of
[Figure: prior samples from the IBP with α = 10 and β = 0.2, 1, and 5.]

Although all of these samples have roughly the same number of 1s, the number of features used varies considerably. We can see that at small values of β features are very sticky, and the feature-vector variance across objects is low. Conversely, at high values of β there is a high degree of feature repulsion, and the probability of two objects possessing the same feature is correspondingly low.

5. AN ILLUSTRATION

The Indian buffet process can be used as the basis of nonparametric Bayesian models in diverse ways. Different models can be obtained by combining the IBP prior over latent features with different generative distributions for the observed data, p(X | Z). We illustrate this using a simple model in which real-valued data X are assumed to be linearly generated from the latent features, with Gaussian noise. This linear-Gaussian model can be thought of as a version of factor analysis with binary, instead of Gaussian, latent factors, or as a factorial model (Zemel and Hinton, 1994; Ghahramani, 1995) with infinitely many factors.

5.1. A linear-Gaussian model

We motivate the linear-Gaussian IBP model with a toy problem of modelling simple images (Griffiths and Ghahramani, 2005; 2006). In the model, greyscale images are generated by linearly superimposing different visual elements (objects) and adding Gaussian noise. Each image is composed of a vector of real-valued pixel intensities. The model assumes that there are some unknown number of visual elements and that each image is generated by choosing, for each visual element, whether the image possesses this element or not. The binary latent variable z_ik indicates whether image i possesses visual element k. The goal of the modelling task is to discover both the identities and the number of visual elements from a set of observed images.

We will start by describing a finite version of the simple linear-Gaussian model with binary latent features used here, and then consider the infinite limit. In the finite model, image i is represented by a D-dimensional vector of pixel intensities, x_i, which is assumed to be generated from a Gaussian distribution with mean z_i A and covariance matrix Σ_X = σ_X² I, where z_i is a K-dimensional binary vector and A is a K×D matrix of weights. In matrix notation, E[X] = ZA. If Z is a feature matrix, this is a form of binary factor analysis. The distribution of X given Z, A, and σ_X is matrix Gaussian:

p(X | Z, A, σ_X) = (2πσ_X²)^(−ND/2) exp{ −(1/(2σ_X²)) tr((X − ZA)ᵀ(X − ZA)) },   (22)

where tr(·) is the trace of a matrix. We need to define a prior on A, which we also take to be matrix Gaussian:

p(A | σ_A) = (2πσ_A²)^(−KD/2) exp{ −(1/(2σ_A²)) tr(AᵀA) },   (23)

where σ_A is a parameter setting the diffuseness of the prior. This prior is conjugate to the likelihood, which makes it possible to integrate out the model parameters A. Using the approach outlined in Section 3.6, it is possible to derive a Gibbs sampler for this finite model in which the parameters A remain marginalized out. To extend this to the infinite model with K → ∞, we need to check that p(X | Z, σ_X, σ_A) remains well-defined if Z has an unbounded number of columns. This is indeed the case (Griffiths and Ghahramani, 2005), and a Gibbs sampler can be defined for this model.

We applied the Gibbs sampler for the infinite binary linear-Gaussian model to a simulated dataset, X, consisting of 100 6×6 images. Each image, x_i, was represented as a 36-dimensional vector of pixel intensity values¹. The images were generated from a representation with
four latent features, corresponding to the image elements shown in Figure 6(a). These image elements correspond to the rows of the matrix A in the model, specifying the pixel intensity values associated with each binary feature. The non-zero elements of A were set to 1.0 and are indicated with white pixels in the figure. A feature vector, z_i, for each image was sampled from a distribution under which each feature was present with probability 0.5. Each image was then generated from a Gaussian distribution with mean z_i A and covariance σ_X² I, where σ_X = 0.5. Some of these images are shown in Figure 6(b), together with the feature vectors, z_i, that were used to generate them.

The Gibbs sampler was initialized with K_+ = 1, choosing the feature assignments for the first column by setting z_i1 = 1 with probability 0.5. σ_A, σ_X, and α were initially set to 1.0 and then sampled by adding Metropolis steps to the MCMC algorithm (see Gilks et al., 1996). Figure 6 shows trace plots for the first 1000 iterations of MCMC for the log joint probability of the data and the latent features, log p(X, Z), the number of features used by at least one object, K_+, and the model parameters σ_A, σ_X, and α. The algorithm reached relatively stable values for all of these quantities after approximately 100 iterations, and our remaining analyses will use only samples taken from that point forward.

The latent feature representation discovered by the model was extremely consistent with that used to generate the data (Griffiths and Ghahramani, 2005). The posterior mean of the feature weights, A, given X and Z is

E[A | X, Z] = (ZᵀZ + (σ_X²/σ_A²) I)⁻¹ ZᵀX.

¹This simple toy example was inspired by the "shapes problem" in (Ghahramani, 1995); a larger-scale example with real images is presented in (Griffiths and Ghahramani, 2006).

[Figure 6: (a) features used to generate the data. (b) Sample images from the dataset. (c) Image elements corresponding to the four features possessed by the most objects in the 1000th iteration of MCMC. (d) Reconstructions of the images in (b) using the output of the algorithm. The lower portion of the figure shows trace plots for the MCMC simulation, which are described in more detail in the text.]

Figure 6(c) shows the posterior mean of a_k for the four most frequent features in the 1000th sample produced by the algorithm, ordered to match the features shown in Figure 6(a). These features pick out the image elements used in generating the data. Figure 6(d) shows the feature vectors z_i from this sample for the four images in Figure 6(b), together with the posterior means of the reconstructions of these images for this sample, E[z_i A | X, Z]. Similar reconstructions are obtained by averaging over all values of Z produced by the Markov chain. The reconstructions provided by the model clearly pick out the relevant features, despite the high level of noise in the original images.

6. APPLICATIONS

We now outline five applications of the IBP, each of which uses the same prior over infinite binary matrices, P(Z), but different choices for the likelihood relating latent features to observed data, p(X | Z). These applications will hopefully provide an indication of the potential uses of this distribution.
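Returning to the linear-Gaussian illustration: the posterior-mean reconstruction used above is a one-line ridge-type solve. The sketch below is our own illustration (names and the NumPy dependency are assumptions, not from the paper).

```python
import numpy as np

def posterior_mean_A(X, Z, sigma_X, sigma_A):
    """E[A | X, Z] = (Z'Z + (sigma_X^2 / sigma_A^2) I)^{-1} Z'X
    for the linear-Gaussian IBP model."""
    K = Z.shape[1]
    ridge = (sigma_X**2 / sigma_A**2) * np.eye(K)
    return np.linalg.solve(Z.T @ Z + ridge, Z.T @ X)

# Reconstruction of image i, E[z_i A | X, Z]:
#   A_hat = posterior_mean_A(X, Z, sigma_X=0.5, sigma_A=1.0)
#   x_hat_i = Z[i] @ A_hat
```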
Variational modeling of triangular Bézier surfaces
Remco C. Veltkamp
Wieger Wesselink
Utrecht University, Department of Computing Science Padualaan 14, 3584 CH Utrecht, The Netherlands Remco.Veltkamp@cs.ruu.nl
2. Surface representation
We have used triangular Bézier patches. Spline surfaces composed of triangular patches have some advantages over rectangular patches (see Farin, '93). For instance, triangular patches are better suited to describe complex geometries, and they can be subdivided locally. A triangular Bézier patch T is defined by Eq. (2) as the weighted sum of a number of control points P_{i,j,k}. Analogous to the univariate Bernstein polynomials over an interval, the Bernstein polynomials of degree n over a non-degenerate triangle V1 V2 V3 are defined by

B^n_{i,j,k}(u, v, w) = (n! / (i! j! k!)) u^i v^j w^k,   i + j + k = n,   i, j, k ∈ N,

where u, v, and w, with u + v + w = 1, are barycentric coordinates with respect to triangle V1 V2 V3. A parametric Bézier triangle is defined as:

T(u, v, w) = Σ_{i+j+k=n} P_{i,j,k} B^n_{i,j,k}(u, v, w).   (2)
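As a sanity check of Eq. (2), direct evaluation of a Bézier triangle takes only a few lines. This is an illustrative sketch under our own conventions (the dictionary representation and NumPy dependency are assumptions, not from the paper):

```python
import math
import numpy as np

def bezier_triangle(ctrl, u, v, w):
    """Evaluate a degree-n Bezier triangle at barycentric (u, v, w), u + v + w = 1.

    ctrl maps index triples (i, j, k) with i + j + k = n to control points P_ijk.
    """
    n = sum(next(iter(ctrl)))  # degree, recovered from any index triple
    point = np.zeros_like(np.asarray(next(iter(ctrl.values())), dtype=float))
    for (i, j, k), P in ctrl.items():
        # Bernstein polynomial B^n_{i,j,k}(u, v, w) = n!/(i! j! k!) u^i v^j w^k
        B = math.factorial(n) / (math.factorial(i) * math.factorial(j) * math.factorial(k))
        point += B * (u**i) * (v**j) * (w**k) * np.asarray(P, dtype=float)
    return point

# Quadratic patch (n = 2): three corner and three mid-edge control points.
ctrl = {(2,0,0): [0,0,0], (0,2,0): [1,0,0], (0,0,2): [0,1,0],
        (1,1,0): [0.5,0,0.3], (1,0,1): [0,0.5,0.3], (0,1,1): [0.5,0.5,0.3]}
print(bezier_triangle(ctrl, 1/3, 1/3, 1/3))
```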
High Quality Compatible Triangulations

Vitaly Surazhsky, Craig Gotsman
Center for Graphics and Geometric Computing
Dept. of Computer Science, Technion – Israel Institute of Technology, Haifa 32000, Israel
{vitus|gotsman}@cs.technion.ac.il

ABSTRACT

Compatible meshes are isomorphic meshings of the interiors of two polygons having a correspondence between their vertices. Compatible meshing may be used for constructing sweeps, suitable for finite element analysis, between two base polygons. They may also be used for meshing a given sequence of polygons forming a sweep. We present a method to compute compatible triangulations of planar polygons with a very small number of Steiner (interior) vertices. Being close to optimal in terms of the number of Steiner vertices, these compatible triangulations are usually not of high quality, i.e., they do not have well-shaped triangles. We show how to increase the quality of these triangulations by adding Steiner vertices in a compatible manner, using several novel techniques for remeshing and mesh smoothing. The total scheme results in high-quality compatible meshes with a small number of triangles. These meshes may then be morphed to obtain the intermediate triangulated sections of a sweep, if needed.

Keywords: mesh generation, compatible triangulations, remeshing

1. INTRODUCTION

In CAE, swept volumes, sometimes called two-and-one-half-dimensional volumes, are frequently constructed between two base polygons given with a correspondence between their vertices. To discretize a swept volume for finite element analysis, it is necessary to mesh the interiors of the sequence of polygonal cross-sections forming the sweep, usually introducing interior (Steiner) vertices, in a manner such that the mesh is isomorphic, valid and well-formed within all the polygons. This mesh is said to be compatible with all the polygons. See Figure 1 for an example. The result is a set of prisms defining the sweep, whose edges are the so-called "ribs" of the sweep [15].

In the case where only the two base polygons of the sweep are given, it is possible to automatically generate the intermediate polygons by a process known as morphing. The morphing problem, in general, is to smoothly transform one given polygon, the source, into another, the target, over time. Constructing the sweep volume may be considered a morphing problem by thinking of the sweep axis as the time axis of the morph. Morphing has been the subject of much research over recent years and has wide practical use in areas such as computer graphics, animation and modeling. The naive approach to the morphing problem is to decide that the polygon vertex trajectories are straight lines, where every feature of the shape travels with a constant velocity along the line towards the corresponding feature of the target during the morph. However, this simple approach can lead to undesirable results. The intermediate shapes can vanish, i.e. degenerate into a single point, or self-intersect even though the source and target are simple. Even if the linear morph is free of self-intersections and degeneracies, its intermediate shapes may have areas or distances between features far from those of the source and target, resulting in a "misbehaved" morph. See the top row of Figure 2.

[Figure 1: The concept of compatible triangulations of corresponding polygons. Vertex correspondence is denoted by the digits. (a), (b) Non-compatible triangulation. (c), (d) Compatible triangulation. (e) The sweep with bases (c) and (d).]
Most of the research on solving the trajectory problem for morphing concentrates on trying to eliminate self-intersections and on preserving the geometrical properties of the intermediate shapes. Numerous existing methods achieve good results for many inputs (e.g. [13] and [14]); yet only the methods that use compatible triangulations are able to guarantee any properties of the resulting morph.

In order to perform finite-element analysis on a swept volume (a sequence of corresponding simple polygonal cross-sections), it is necessary to mesh the polygon interiors in a compatible manner. In this work we concentrate on compatible triangulations. Compatible meshing is not always possible unless Steiner vertices are introduced into the interior of the polygons. The main challenge is then to minimize the number of Steiner vertices to the least needed to achieve compatibility. Unfortunately, this can be as much as Ω(n²), where n is the number of vertices of the polygons. In the first work on this problem, Aronov et al. [2] provided two constructions which result in quite a large number of Steiner vertices. In their work on polygon morphing, Surazhsky and Gotsman [17] improved Aronov et al.'s so-called "spiderweb" method to significantly reduce the number of Steiner vertices required. Kranakis and Urrutia [11] presented a completely different method in which the number of Steiner vertices introduced depends on the number of inflection vertices of the two polygons. Gupta and Wenger [9] described an algorithm which uses minimal-link polylines in the polygon.

While compatible triangulations of polygons with a very small number of Steiner vertices are definitely an advantage from a complexity point of view, it appears that these triangulations are naturally not well-formed. They tend to contain long skinny triangles which cannot be adjusted to improve the triangle shape significantly. Hence a major challenge in our application is to introduce as small a number of Steiner vertices as possible, yet obtain two triangulations of decent quality, and maintain compatibility of the triangulations throughout the process. We call this process compatible remeshing. This was attempted in the work of Alexa et al. [1], who start off with compatible triangulations of polygons and introduce Steiner vertices in order to improve the quality of the triangulation. They, however, start from a large number of Steiner vertices, and thereafter increase this number significantly, in order to achieve triangulation of good quality. This results in compatible triangulations that are overly complex.

Our main contribution in this paper is a new method to compatibly triangulate two planar polygons with a very small number of Steiner vertices, and a new remeshing technique, including a novel smoothing component. This introduces a small number of extra Steiner vertices, yet achieves a high-quality triangulation while maintaining the compatibility of the two triangulations.

Meshing for sweep generation has been attempted before by a variety of authors in the meshing community (e.g. [5], [10], [15]). Their basic approach is to generate a mesh for a subset of the cross-section polygons, usually just one of the sweep bases, and then project this mesh somehow onto the other polygons. Beyond the fact that this certainly does not guarantee that the result will be a valid triangulation, there are also no guarantees for the quality of the triangulation even if it were valid.
Our solution, taking into account both sweep bases (and theoretically all intermediate polygons), solves all these problems.

If only the base polygons of the sweep are given, the intermediate polygons, with their corresponding compatible triangulations, may be generated using the morphing methods of Surazhsky and Gotsman [17]. This is done by reducing the problem to that of morphing compatible planar triangulations with a common convex boundary, in which the polygon is embedded, as described by Floater and Gotsman [8] and Surazhsky and Gotsman [16]. Two corresponding point sets admit a compatible triangulation if there exists a triangulation of one point set which, when induced on the second point set by the correspondence, is a legal triangulation there too. The morphing method of Floater and Gotsman [8] is based on the convex representation of triangulations using barycentric coordinates, first introduced by Tutte [20] for graph drawing purposes and later generalized by Floater [7] for parameterization of 3D meshes. This avoids many of the problems associated with morphing, and basically guarantees that the triangulation remains valid (i.e. compatible with the source and target) throughout the morph.

To embed the two polygons in a triangulation, first compatibly triangulate the polygon interiors. Then circumscribe the two polygons in a common convex enclosure and compatibly triangulate the two resulting annuli between the polygons and the enclosure (possibly requiring Steiner vertices) [3]. This results in two compatible triangulations with a common convex boundary, in which the polygons are embedded. Morphing these triangulations using the methods of [8] or [16] will then result in a valid (compatible) morph of the two polygons. See the bottom row of Figure 2.

2. NEAR-OPTIMAL COMPATIBLE TRIANGULATIONS

2.1 Previous work

As already stated, Kranakis and Urrutia [11] presented two different methods to compatibly triangulate two polygons in which the number of Steiner vertices introduced depends on the number of the polygons' inflection vertices. The first algorithm produces a rather large number, O((k+l)²), of Steiner vertices, where k and l are the numbers of the two polygons' reflex vertices, respectively. The second algorithm introduces at most O(kl) Steiner vertices, but its drawback is that it may add Steiner vertices on the polygon boundaries, which some applications do not allow. Furthermore, enlarging the boundary might prevent this algorithm from being used as a black box in a recursive manner, as the algorithm might not terminate.

Gupta and Wenger [9] described an algorithm, seemingly the best so far, which constructs the compatible triangulation based on minimal-link polylines inside the polygons P and Q. A minimal-link polyline is a path of straight-line segments between two vertices lying entirely in the polygon interior, having a minimal number of segments. The idea behind their algorithm is the following: First, compute an arbitrary triangulation T_P of P. Using edges of T_P it is possible to partition P into sub-polygons such that the number of links in minimal-link polylines in those sub-polygons is no more than a small constant (e.g. 5). Then the corresponding partition of Q is constructed using non-intersecting minimum-link polylines. The vertices of these polylines are the Steiner vertices of the triangulations.
These corresponding sub-polygons are then compatibly triangulated, usually requiring a relatively small number of Steiner vertices due to the properties of the partition. The resulting compatible triangulations have O(M log n + n log² n) triangles, where M is the number of triangles in the optimal solution. Theoretically this is good, except that the constant factor is quite large (approximately 40, according to the authors), so it is not very practical for smaller inputs. Another drawback of this algorithm is that it is not symmetric in P and Q. The choice of the triangulation T_P of P strongly influences the resulting compatible triangulations and the number of Steiner vertices. Thus, overall, in some very simple cases when two polygons may be compatibly triangulated without requiring Steiner vertices, the algorithm can nonetheless introduce a large number of Steiner vertices. From a practical point of view, the algorithm involves implementing many state-of-the-art computational geometry algorithms developed over the last two decades. As a consequence, an implementation of the algorithm is currently not available, and thus it is impossible to compare this algorithm with other algorithms for finding compatible triangulations.

2.2 Our algorithm

Our algorithm is similar in spirit to that of Gupta and Wenger [9]; namely, it is based on the idea of partitioning polygons using minimum-link polylines. However, we believe our algorithm is much simpler. Given two polygons P and Q with a correspondence between their vertices, we find a pair of vertices u and v with a minimal-link polyline between them in one of the polygons, and a corresponding polyline in the second. After the shorter polyline is refined to the same number of vertices as the longer one, the two polylines compatibly partition both of the input polygons into two sub-polygons. The vertices of these polylines are the Steiner vertices. We then apply the algorithm recursively on these two sub-polygons. The process terminates when the input polygons contain only three vertices, namely, the polygons have become triangles.

Note that if it is possible to compatibly triangulate the two polygons without any Steiner vertices, our algorithm will do so, as opposed to most of the other algorithms. Since this is the case for many inputs, our algorithm has a significant advantage.

We still need to show how to find a pair of vertices u and v that minimizes the number of links in the partitioning polylines. To achieve this, we employ the method of Suri [18], who showed how to find the minimum-link path between two given vertices in a simple polygon in O(n) time, where n is the number of polygon vertices. In a subsequent work, Suri [19] showed how a simple polygon can be preprocessed in O(n) time in order to query the number of links of the minimum-link path between two given vertices of the polygon in O(log n) time. Thus, we can query all possible vertex pairs of the polygon in O(n² log n) time using this algorithm. Hence, in this manner we may determine which pair is best to use, and then employ the first algorithm to actually compute the paths.

Accordingly, in order to find the best path for both polygons we query the two polygons for the minimum-link distance and choose the pair that has the best (minimal) value of the maximum of the two distances. Namely, denoting by L_P(u, v) and L_Q(u, v) the minimum-link distances between u and v in P and Q, we choose the pair (u, v) which satisfies:

(u, v) = arg min over all vertex pairs of max{ L_P(u, v), L_Q(u, v) }.

In practice, this pair is not unique.
Therefore, we choose the pair that will partition the polygons into sub-polygons which are as balanced as possible, in order to reduce the overall algorithm complexity. This can easily be done by comparing the indices of the polygon vertices. More formally, if the polygon vertices are v_1, …, v_n and n is the size of the polygon, among the optimal pairs we look for the pair (v_i, v_j) whose cyclic index distance |i − j| is closest to n/2.

We believe that the time complexity for finding the optimal vertex pair(s) of the polygon(s) can be further improved to O(n log n) by exploiting the existing preprocessed data structure for the queries, instead of using the query procedure for a specific pair as a black box.

We must still show that the algorithm terminates, since when the polygon P is partitioned into two sub-polygons P1 and P2, theoretically the size of P1 or P2 (or both) can be identical to that of P, and if this repeats, the algorithm can run indefinitely. In general, to prevent such cases we should check that the size of the partitioned polygon stays the same as that of P only once. If the size stays the same after two iterations, the algorithm should backtrack and choose another vertex pair for the partition polylines. This, theoretically, results in exponential time complexity. However, in practice (we have tested the algorithm on numerous, very complex inputs), even the case when the size of the polygons repeats itself twice does not occur. Thus, although we cannot prove it at this time, we believe that the average total time complexity is less than O(n³ log n).

See Figure 3 for an illustration of the various stages of the compatible triangulation algorithm.

3. MESH IMPROVEMENT

Our compatible triangulation algorithm generates a small number of Steiner vertices, at locations which have not necessarily been optimized for mesh quality. It is possible to improve these meshes by smoothing them (moving the vertices) or remeshing them (changing the connectivity). In this section we describe methods for these two operations which we believe are also of independent interest.

3.1 Weighted angle-based smoothing

Zhou and Shimada [21] presented an effective and easy-to-implement angle-based mesh smoothing scheme. They show that the quality of the mesh after angle-based smoothing is much better than after Laplacian smoothing. Moreover, the chance that the scheme will produce inverted (invalid) faces is much smaller than in Laplacian smoothing. Unfortunately, this is true mostly for meshes whose vertices have degrees close to the average degree, namely, when the mesh connectivity is very regular. When the mesh has more irregular connectivity, the scheme may fail. In applications involving meshes with very distorted (long and skinny) triangles, a more robust smoothing scheme is critical. We propose a very simple improvement to the original angle-smoothing scheme, which significantly reduces the chances of inverted triangles and improves the quality of the resulting mesh. Furthermore, it has almost the same computational cost per iteration and a lower total computational cost due to better convergence.

The original scheme attempts to make each pair of adjacent angles equal. Given a vertex v and its neighbors v_1, …, v_d, where d is the vertex degree, we want to move v in order to improve the angles of the triangles incident on v. Let α_j be the angle of the polygon (v_1, …, v_d) at the vertex v_j. We define p_j to be the point lying on the bisector of α_j such that |v_j p_j| = |v_j v|; namely, the edge (v_j, v) is rotated around v_j to coincide with the bisector of α_j. See Figure 4. The new position of v is defined as the average of the points p_j over all neighbors.
That is:

v' = (1/d) Σ_{j=1}^{d} p_j.   (1)

We improve this scheme by introducing weights into (1). For a small angle α_j it is difficult to guarantee that v will be placed relatively close to the bisector of α_j. Since α_j is itself small, a large deviation of v from the bisector of α_j will create angles much smaller than α_j. Thus, the resulting mesh will be of poor quality. To prevent this, we modify (1) in the following way:

v' = ( Σ_{j=1}^{d} α_j⁻² p_j ) / ( Σ_{j=1}^{d} α_j⁻² ).   (2)

Namely, the p_j for small angles will carry more weight than those for large angles. To demonstrate the robustness of our improvement, see Figure 5, Figure 6 and Table 1.

Despite the superior results of our weighted angle-based scheme, it still cannot guarantee that the new vertex position forms a valid triangulation. Similarly, the convergence of our scheme, as well as of the original scheme, cannot be guaranteed in cases when the given mesh has invalid (inverted) triangles or when the mesh boundary is far from convex. In these cases, both schemes should be applied in a "smart" manner, namely, verifying that the triangles are still valid, or that the minimum angle of the adjacent triangles has been improved, before a vertex is moved. In some rare cases, both schemes may fail to improve the minimum angle when even Laplacian smoothing may improve it. A "combined" scheme that applies Laplacian smoothing when the angle-based method fails has extremely fast convergence and achieves the best of both worlds.

3.2 Area-based remeshing

The idea of using triangle areas as one of the criteria for triangulation optimization is not new. This usually means trying to form triangles with as uniform an area as possible. However, triangle areas alone cannot be used to obtain meshes of reasonable quality. The reason is that when only the areas are optimized, without taking into account the angles, the resulting mesh can (and in most cases will) have many long and skinny triangles. Only when a mesh has an almost regular connectivity may uniform triangle areas imply well-formed triangles. Nevertheless, a mesh containing triangles with uniform area distribution has one important positive property: the spatial distribution of the vertices over the total mesh area is very uniform. If we eliminate the edges of the mesh, leaving only the vertices, we obtain quite a uniform point distribution, as may be seen in Figure 7(b).

We propose a remeshing scheme that utilizes this. Given a mesh, we alternate between the area equalization procedure and a series of angle-improving edge-flips. Edge-flips are performed until improvement is no longer possible. This process results in a mesh that is as close to regular as the ratio between the numbers of boundary and interior vertices, together with the geometry of the boundary, allows. It is far superior to the results of an analogous scheme involving angle-based smoothing instead of area equalization. Figure 7(c) and (d) compare the two schemes.

To equalize the areas of the mesh triangles, a number of iterations are performed over the mesh. Each iteration moves all the mesh interior vertices sequentially to improve the areas locally. Let v = (x, y) be an interior mesh vertex and v_1, …, v_d its neighbors. Denote by S_j the signed area of the triangle (v, v_j, v_{j+1}), where the index j + 1 is taken modulo d:

S_j = ½ ((x_j − x)(y_{j+1} − y) − (x_{j+1} − x)(y_j − y)).   (3)

Let S̄ = S/d, where S is the area of the polygon (v_1, …, v_d), which is actually Σ_{j=1}^{d} S_j. In order to find the position of v that equalizes the areas of the adjacent triangles as much as possible, we minimize the following function:

E(x, y) = Σ_{j=1}^{d} (S_j − S̄)².   (4)

This reduces to solving a system of two linear equations in x and y.
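Since each signed area S_j in (3) is affine in the coordinates of v, minimizing (4) is an ordinary linear least-squares problem. The sketch below is our own illustration of this local solve (the NumPy dependency and names are assumptions, not from the paper):

```python
import numpy as np

def equalize_areas(neighbors):
    """Return the position of an interior vertex v minimizing
    sum_j (S_j - S_mean)^2 over the triangles (v, v_j, v_{j+1}).

    neighbors: (d, 2) array of the star's vertices in cyclic order.
    Expanding (3): S_j(v) = c_j + a_j * x + b_j * y with
      c_j = 0.5*(x_j*y_{j+1} - x_{j+1}*y_j),
      a_j = 0.5*(y_j - y_{j+1}),  b_j = 0.5*(x_{j+1} - x_j),
    so the problem is linear least squares in v = (x, y).
    """
    P = np.asarray(neighbors, dtype=float)
    Q = np.roll(P, -1, axis=0)                   # v_{j+1}, indices modulo d
    c = 0.5 * (P[:, 0] * Q[:, 1] - Q[:, 0] * P[:, 1])
    A = 0.5 * np.column_stack((P[:, 1] - Q[:, 1], Q[:, 0] - P[:, 0]))
    # Rows of A sum to zero, so sum_j S_j = sum_j c_j is the polygon area,
    # independent of v, and S_mean = area / d.
    S_mean = c.sum() / len(P)
    v, *_ = np.linalg.lstsq(A, S_mean - c, rcond=None)
    return v

print(equalize_areas([[0, 0], [2, 0], [2, 2], [0, 2]]))  # center of a square
```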
The computational cost of this unique solution is close to that of traditional Laplacian smoothing.

It turns out that a valid mesh can be obtained by equalizing the areas of the mesh triangles even in difficult cases, such as a highly non-convex boundary. This contrasts with other methods, including the smart Laplacian [6] and both angle-based smoothing methods, which fail. See Figure 7(e)–(h).

4. COMPATIBLE REMESHING

We now show how to combine the two methods introduced in Section 3, along with a refinement procedure (introducing new interior Steiner vertices), to produce high-quality compatible triangulations of two polygons given with a correspondence between their vertices. Compatible triangulations created using the method introduced in Section 2 usually have a small number of Steiner vertices, but their quality is unlikely to be acceptable. Therefore, remeshing techniques must be applied to improve the quality. The main difficulty with using existing remeshing techniques is that remeshing criteria which are suitable for a single mesh may fail when applied to two triangulations in parallel.

Table 1: Quantitative comparison between the quality of the triangulations in Figure 6.

                       Min angle | Triangles < 10° | Triangles < 15° | Triangles < 20°
Laplacian                  0.17° |           2.57% |           5.31% |           8.71%
Angle-based                4.62° |           0.58% |           1.66% |           4.56%
Weighted angle-based       17.2° |           0.00% |           0.00% |           1.82%

Our compatible remeshing technique is similar to that of Alexa et al. [1]. We use a series of simultaneous edge-flips, mesh smoothing and mesh refinement by edge-splitting. In addition, we perform a single iteration of the area equalization technique presented in Section 3.2. The outline of the algorithm appears in Algorithm 1. The parameter k_split dictates the rate at which new Steiner vertices are introduced.

Algorithm 1: Compatible remeshing.
while mesh quality has not been achieved or the number of Steiner vertices does not exceed the threshold:
    Step 1. Alternate between angle-based smoothing and simultaneous angle-improving edge-flips.
    Step 2. Refine both meshes by k_split simultaneous edge-splits.
    Step 3. The same as Step 1.
    Step 4. Perform a single iteration of area equalization (Section 3.2).
    Step 5. The same as Step 1.

While the criteria for the operations in Algorithm 1 are rather straightforward for a single mesh, applying them simultaneously to two triangulations requires more precise control. If care is not exercised, the corresponding properties of triangles within the two meshes may often contradict each other. The following empirical criteria, based on their analogs for a single mesh, have produced the best results on numerous examples:

Edge-flips: Similarly to the construction of Delaunay triangulations, an edge is flipped if the minimum among the angles, in both meshes, of the triangles adjacent to the edge is improved.

Angle-based smoothing: Both meshes are independently smoothed, applying the technique described in Section 3.1 in the "smart" manner, namely, preserving the validity of both meshes.

Edge-split refinement: Our criterion for choosing an edge to be split is based both on the edge length and on the minimum of the four adjacent triangle angles. The edge with the maximal "normalized" length in both triangulations (T_0 and T_1) is refined (5). Note that the refinement is performed simultaneously on both triangulations in order to preserve compatibility. The criterion defined in (5) produces better experimental results than the aspect-ratio-based criterion of [12] or the distortion metric criteria of [4] and [6].
The number of edges to be split in each iteration (k_split) determines the trade-off between the number of Steiner vertices and the algorithm running time.

Area equalization: As noted in Section 3.2, area equalization improves the spatial vertex distribution. Due to the refinement operations, some regions of the mesh may have an excess of vertex density. To smooth this out, we apply a single iteration of area equalization (Step 4). This area equalization can prevent a further increase in the number of Steiner vertices at later stages, but at the price of slowing down the algorithm. See Figure 8. On the one hand, the refinement operations change the meshes locally, and thus Step 1 (or 3) of Algorithm 1 converges quickly. On the other hand, area equalization affects the mesh globally, and thus Step 1 (or 3) takes much longer to improve the mesh globally. If a faster algorithm is required, Step 4 can be applied only every 4–10 iterations.

[Figure 9: 3D sweep generation. (a) Optimal (no Steiner vertices) compatible triangulation of the source and target polygons. Top row: high-quality compatible triangulation and intermediates generated by the morphing procedure. Minimum angles of the source and target triangulations are 27.2° and 25.9°, respectively. (b) 3D visualization of sweeps from a number of different angles.]

5. EXPERIMENTAL RESULTS

We have implemented all the algorithms described in this paper and applied them to numerous example inputs. Our inputs consist of two planar polygons which serve as the source and target (top and bottom) cross-sections of the sweep. These two are compatibly triangulated with sufficient mesh quality (using Algorithm 1), and then morphed to create intermediate compatibly triangulated polygons. Especially challenging inputs are those where the source and target are significantly different. Figures 9–11 show some sample input pairs, the (usually low-quality) compatible triangulations with a small number of Steiner vertices generated by the methods of Section 2, the remeshed high-quality compatible triangulations generated by the methods of Section 4, and the intermediate triangulated cross-sections generated by applying morphing techniques. The latter are shown both as a sequence of 2D cross-sections and as a sliced 3D sweep. For each example, we specify the statistics of the source and target meshes. We found that the angles of the intermediate meshes generated using the techniques of Surazhsky and Gotsman [16], [17] were always in between those two, so the mesh quality is preserved throughout the morph.

In terms of runtimes, all these examples required no more than a second or so to run on an Athlon 1.2 GHz PC with 256 MB RAM. Larger inputs, which ultimately involved hundreds of (interior and exterior) Steiner vertices for the mesh and the morph, required no more than 10 seconds on the same machine.

6. DISCUSSION AND CONCLUSION

We have shown how to generate compatibly triangulated sweeps with quality adequate for finite-element analysis. Our method is fast, robust, and, as opposed to previously published methods, is guaranteed to always produce a valid result.

[Figure: 3D sweep generation. (a) Compatible triangulation of the source and target polygons with three Steiner vertices. Top row: high-quality compatible triangulation and intermediates generated by the morphing procedure. Minimum angles of the source and target triangulations are 15.9° and 15.3°, respectively. (b) 3D visualization of sweeps from a number of different angles.]
Short-Video Script Template for Promoting Logistics
Title: Unlocking the Power of Logistics

[Opening scene: A catchy and energetic background score plays as vibrant visuals of transportation modes like ships, trains, trucks, and planes are shown]
Narrator: Whether it's delivering a package to your doorstep or ensuring goods reach international markets, logistics plays a vital role in connecting businesses and people across the globe.

[Scene transition: A small business owner struggling to manage his inventory]
Narrator: Meet Mike, a small business owner who is facing difficulties managing his inventory and delivering products to his customers on time.

[Scene transition: A solution brought in]
Narrator: But little does he know, there's a solution that can solve all his logistics worries.

[Scene: Introduction of a logistics professional]
Narrator: Introducing Lucy, a highly experienced logistics professional. Let's see how she helps Mike revolutionize his business with the power of logistics.

[Scene: Lucy discussing with Mike about his current challenges]
Lucy: Hey Mike! I see you're facing challenges with inventory management and timely deliveries. Don't worry, I'm here to help you by unlocking the power of logistics.

[Scene: Visual representation of logistics process]
Narrator: But wait, what exactly is logistics?

[Scene transition: Graphics illustrating the three key stages of logistics - procurement, production, and distribution]
Narrator: Logistics encompasses the seamless coordination of procurement, production, and distribution to ensure goods flow smoothly from suppliers to consumers.

[Scene: Lucy explaining the benefits of logistics]
Lucy: By implementing effective logistics practices, you can streamline your operations, save costs, and improve customer satisfaction.

[Scene transition: Lucy and Mike brainstorming]
Lucy: Let's dive into how we can optimize your inventory management by using smart technology and data analysis to forecast demand accurately.

[Scene: Introduction of technology solutions]
Narrator: With advancements in technology, logistics has become more efficient than ever before.

[Scene: Visual representation of technology solutions like inventory management software, tracking systems, etc.]
Narrator: Mike learns how the latest inventory management software and tracking systems can enhance his business operations.

[Scene transition: Lucy explaining international logistics]
Lucy: Mike, expanding your business to international markets can be challenging. But with our international logistics expertise, your products can reach customers worldwide faster and more efficiently.

[Scene: Visual representation of international transportation modes and customs clearance process]
Narrator: From customs clearance to choosing the right transportation modes, Lucy guides Mike through the complexities of international logistics.

[Scene transition: Showcasing the benefits of logistics]
Narrator: By leveraging the power of logistics, Mike witnesses a transformation in his business.

[Scene: Mike's business thriving]
Narrator: His inventory is well-managed, deliveries are on time, and customer satisfaction is at an all-time high.

[Scene transition: Highlighting customer testimonials]
Narrator: But don't just take our word for it. Here's what Mike's delighted customers have to say.

[Scene: Happy customers expressing satisfaction]
Customer 1: The products always arrive on time, and the packaging is impeccable!
Customer 2: I've never had a smoother online shopping experience.
The company's logistics game is on point!

[Scene: Conclusion]
Narrator: The power of logistics has transformed Mike's business, and it can do the same for you too.

[Scene: Lucy and Mike shaking hands]
Lucy: Are you ready to unlock the power of logistics and take your business to new heights?
Mike: Absolutely! Thank you, Lucy, for showing me the way.

[Closing scene: High-energy background music playing as visuals of Mike's thriving business are shown]
Narrator: Unlock the power of logistics and witness the transformation in your business.

[Closing shot: A strong call to action with the company's contact information displayed on the screen]
Narrator: Contact us now to unleash the true potential of your business with logistics.

[Background music fades out as the video ends]
Ford F-150 Series: XL, XLT, Lariat, King Ranch, Platinum and Limited
2021 F-150 REGULAR CAB / SUPERCAB / SUPERCREW® MAJOR PRODUCT CHANGES (08/01/20)

Product Changes and Features Availability
Features, options and package content subject to change. Please check www.media.ford.com for the most current information.
★ = New for this model year

MODEL/SERIES/AVAILABILITY
● XL, XLT, Lariat, King Ranch®, Platinum and Limited

MECHANICAL
★ New/Changed
— 3.5L PowerBoost Full Hybrid V6 Engine (99D)
— Hybrid Electronic Ten-Speed Automatic Transmission (44H)
— Auto Hold
★ Deleted
— Electronic Six-Speed Automatic Transmission

EXTERIOR
★ New/Changed
— LED Box Lighting w/ Zone Lighting (85P) – optional on XLT Mid (301A), standard on XLT High (302A), Lariat, King Ranch® and Platinum
— Power Tailgate (includes Tailgate Step and Tailgate Work Surface) (60T) – optional on Lariat, King Ranch® Base (600A) and Platinum Base (700A); incl. King Ranch® High (601A) and Platinum High (701A); standard on Limited
— Tonneau Pickup Box Cover – Retractable (96J)
★ New Colors
— Antimatter Blue (HX)
— Carbonized Gray (M7)
— Guard (HN)
— Kodiak Brown (J1)
— Smoked Quartz Tinted Clearcoat (TQ)
— Space White (A3)
● Deleted Paint Colors
— Abyss Gray
— Blue Jean
— Magma Red
— Magnetic
— Silver Spruce

INTERIOR/COMFORT
★ New Interior Colors
— Medium Dark Slate – XL, XLT, Lariat
— Baja Tan – XLT, Lariat
— Carmelo – Platinum
— Admiral Blue – Limited
● Deleted Interior Colors
— Medium Earth Gray
— Medium Light Camel
— Dark Earth Gray
— Dark Marsala
— Camelback

INTERIOR/COMFORT (continued)
★ New/Changed
— Pro Power Onboard – 2KW (472) – optional on all series
— Pro Power Onboard – 2.4KW – standard with 3.5L PowerBoost Full Hybrid V6 Engine
— Pro Power Onboard – 7.2KW (477) – optional on all; reqs. 3.5L PowerBoost Full Hybrid V6 Engine
— 6" Extended Chrome Running Boards (18L) (req. 20" Wheels on 4x2)
— 6" Extended Dark Grey Accent Running Boards (18S) (req. XLT Sport Appearance Pkg (862) or Lariat Sport Appearance Pkg (863))
— Max Recline Driver and Passenger Seat (91S) – Late Availability – optional on King Ranch®, Platinum and Limited
— Floor Liner – Tray Style (Less Carpeted Matching Floor Mats) (47W) – optional on XL, XLT and Lariat

FUNCTIONAL
★ New/Changed
— 8" Center Stack Screen – standard
— 12" Center Stack Screen – standard on XLT (302A), Lariat, King Ranch®, Platinum and Limited
— 12" Digital Productivity Screen – standard on Lariat, King Ranch®, Platinum and Limited
— B&O System by Bang & Olufsen HD Radio™ (583) (incl. 8 speakers and subwoofer) – optional on XLT 302A and Lariat 500A; included on Lariat 501A/502A, King Ranch® 600A and Platinum 700A
— B&O System Unleashed by Bang & Olufsen HD Radio™ (588) (incl. 18 speakers and subwoofer) – optional on Lariat (502A); included on King Ranch® (601A), Platinum (701A); standard on Limited
— FordPass Connect™ (4G) – standard
— Individual Trailer TPMS/Customer-Placed Trailer Camera (66T) – optional on XL (101A), XLT, Lariat, King Ranch®, Platinum and Limited
— Rock Crawl Mode (incl. in FX4 Off-Road Pkg (55A))
— Wireless Charging Pad – standard on King Ranch®, Platinum and Limited

SAFETY/SECURITY
★ New/Changed
— None

FORD CO-PILOT360™ TECHNOLOGY
★ New/Changed
— Adaptive Steering – standard on King Ranch®, Platinum and Limited
— Ford Co-Pilot360™ Assist 2.0 (43B) – optional on XLT (301A/302A) and Lariat; std on King Ranch® and Platinum
— Ford Co-Pilot360™ Active 2.0 Prep Package (43C) – optional on Lariat (502A), King Ranch® and Platinum (700A); std on Platinum (701A) and Limited
Note: Active Drive Assist Prep Kit functionality expected 3rd quarter 2021 CY.
Separate payment for featuresoftware required to activate full functionality at that time.08/01/202021 F-150REGULAR CAB / SUPERCAB / SUPERCREW ®MAJOR PRODUCT CHANGESProduct Changes and Features AvailabilityFeatures, options and package content subject to change. Please check www.media.ford .com for the most current information.★= New for this model year-2 -FORD CO-PILOT360™ TECHNOLOGY (continued)●Deleted — NonePACKAGES★ New/Changed— Interior Work Surface (50M) – optional — Tow Technology Package (17T) ●Deleted— XL Sport Appearance PackageOTHER●Changed— Factory Invoiced Accessories (FIA) to Dealer InstalledOptions (DIO). Order codes are no longer used for these features and they are orderable only through WBDO.PACKAGES/OPTIONS DELETED●Interior— Inflatable Rear Safety Belts — Lariat Bed Utility Package— Lariat Black Appearance Package — Lariat Special Edition Package— STX Sport Appearance Special Edition Package — XLT Special Edition Package — XLT Black Appearance PackageLATEST ORDER GUIDE UPDATES●Revised: Exterior paint color footnotes ●Added: Wheel images●Revised: XL National Discounts。
Discrete Element Method Simulation and Analysis of Collisions Between Coarse Particles and a Wall
Zhang He, Li Tianjin, Liu Malin, Huang Zhiyong, Bo Hanliang
Institute of Nuclear and New Energy Technology, Collaborative Innovation Center of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Tsinghua University, Beijing 100084, China

Abstract: Particle-wall collisions widely exist in bulk solids transportation. Investigations of particle-wall collisions are helpful to optimize the transporting system, decrease product attrition or improve transportation economy. Collisions between a single coarse particle (6 mm in diameter) and a wall were investigated with the Hertz-Mindlin no-slip contact model based on the discrete element method (DEM). Effects of impact velocity, impact angle and shear modulus on contact processes and maximum normal contact forces were studied. Results show that the normal contact process described by the Hertz-Mindlin no-slip contact model exhibits self-similarity, and the ratio of unloading to loading duration in the normal direction remains constant. The numerical contact durations agree well with the predictions of the correlation of Thornton et al. The impact velocity and impact angle show obvious effects on maximum contact forces. The normal maximum contact force increases almost linearly with normal impact velocity; for a fixed impact velocity of 2 m/s, the normal maximum contact force decreases with increasing impact angle. The shear modulus is a key factor for the normal contact force, which suggests that speeding up DEM simulations by decreasing the shear modulus should be avoided when particle attrition and/or breakage are under consideration. The results of the present study are important for the investigation of particle attrition and breakage, as well as for the optimization of the absorber sphere pneumatic conveying process in the high temperature gas-cooled reactor.

Journal: Atomic Energy Science and Technology, 2017, Vol. 51, No. 12, pp. 2212–2217.
Keywords: particle collision; particle attrition; particle breakage; discrete element method; pneumatic conveying

The absorber sphere shutdown system is the second shutdown system of the high temperature gas-cooled reactor [1].
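For background on the contact model named above: the normal part of the Hertz-Mindlin model follows Hertz theory, F_n = (4/3) E* √R δ^(3/2), where δ is the overlap, R the effective radius and E* the effective modulus; integrating m δ̈ = −F_n(δ) recovers the maximum force and the contact time. The sketch below is our own illustration with placeholder material values, not the parameters used in the study; for a cross-check, Hertz theory gives t_c ≈ 2.87 (m²/(R E*² v₀))^(1/5).

```python
import numpy as np

def hertz_impact(R, m, E_star, v0, dt=1e-9):
    """Integrate a frictionless Hertzian normal impact of a sphere on a wall.

    F_n = (4/3) * E_star * sqrt(R) * delta**1.5 (purely elastic, no damping).
    Returns (maximum normal force, contact duration).
    """
    delta, v, t, F_max = 0.0, v0, 0.0, 0.0
    while delta >= 0.0:
        F = (4.0 / 3.0) * E_star * np.sqrt(R) * max(delta, 0.0) ** 1.5
        F_max = max(F_max, F)
        v -= F / m * dt        # the contact force opposes penetration
        delta += v * dt        # delta < 0 marks the end of contact
        t += dt
    return F_max, t

# Placeholder values only: 6 mm glass-like sphere hitting a wall at 2 m/s.
R, rho = 3e-3, 2500.0
m = rho * 4.0 / 3.0 * np.pi * R**3
F_max, t_c = hertz_impact(R, m, E_star=5e10, v0=2.0)
print(f"F_max = {F_max:.1f} N, contact time = {t_c*1e6:.2f} us")
```

Because F_max grows with E*^(2/5), artificially lowering the shear (and hence effective) modulus to enlarge the DEM time step directly distorts the contact forces, which is the effect the abstract warns about.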
Modelling the linear viscoelastic behavior of silicate glasses near the glass transition point

Aleksey D. Drozdov and Jesper deC. Christiansen
Department of Production, Aalborg University
Fibigerstraede 16, DK-9220 Aalborg, Denmark

Abstract

A model is derived for the viscoelastic response of glasses at isothermal uniaxial deformation with small strains. A glass is treated as an ensemble of relaxing units with various activation energies for rearrangement. With reference to the energy-landscape concept, the rearrangement process is thought of as a series of hops of relaxing units (trapped in their potential wells on the energy landscape) to higher energy levels. Stress-strain relations are developed by using the laws of thermodynamics. Adjustable parameters are found by fitting experimental data in torsional dynamic tests on a multicomponent silicate glass at several temperatures near the glass transition point.

Key-words: Silicate glasses, Dynamic tests, Viscoelasticity, Glass transition

1 Introduction

This study is concerned with the viscoelastic behavior of silicate glasses at uniaxial deformation with small strains. This subject has been a focus of attention in the past decade because of numerous applications of vitreous silica in industry (ranging from waveguides and optical fibers to semiconductor wafers and tire additives [1]) and the key role of multicomponent silicate glasses in geological processes [2].

Observations in static and dynamic mechanical tests on silicate glasses are conventionally fitted by using such phenomenological approaches as (i) the generalized Maxwell model [3], (ii) the KWW (Kohlrausch-Williams-Watts) formula [4], and (iii) linear stress-strain relations with fractional derivatives [5]. The objective of this paper is to develop constitutive equations for the linear viscoelastic response of an amorphous glass based on a micro-mechanical concept and to validate these relations by matching experimental data in torsional oscillatory tests on a multicomponent silicate glass.

2 A micro-mechanical model

A silicate glass is thought of as an ensemble of cooperatively rearranging regions (CRRs), where mechanical stress relaxes due to rotational rearrangement of small clusters of atoms (consisting of a few SiO4 tetrahedra) that change their configuration as they are agitated by thermal fluctuations.

To describe the viscoelastic response of a glass in dynamic tests with a restricted window of frequencies, we distinguish two types of relaxing units: (i) active CRRs that can rearrange during the experimental time-scale, and (ii) passive CRRs whose characteristic time for rearrangement substantially exceeds the duration of a test. An active CRR is entirely characterized by the energy, v, of thermal fluctuations necessary for rearrangement.

At random instants, the point hops to higher energy levels as it is thermally agitated. If the point does not reach the liquid-like energy level in a hop, it returns to the bottom of its well; a hop that reaches the liquid-like level rearranges the CRR, whose stress-free configuration then serves as its new reference state.

The kinetics of rearrangement is uniquely determined by the temperature-dependent attempt rate Γ0 (the number of hops in a well per unit time) and the distribution function, q(u), for hops with various heights (q(u)du is the probability of a hop whose height belongs to the interval [u, u + du]). We adopt the exponential distribution of intensities of hops [7],

q(u) = (1/V) exp(−u/V)   (u ≥ 0),

where V is the average height to be reached in a hop. For a CRR trapped in a well with depth v, the probability of rearrangement in a hop is

Q(v) = ∫_v^∞ q(u) du = exp(−v/V).

The rate of rearrangement,

Γ(v) = Γ0 Q(v),

equals the number of hops (per unit time) in which a point reaches the liquid-like level,

Γ(v) = Γ0 exp(−v/V).   (1)

Denote by Na and Np the numbers of active and passive CRRs per unit mass, and by p(v) the distribution of potential wells with various energies v. The rearrangement process is described by the function n(t, τ, v) that equals the number (per unit mass) of active CRRs trapped in wells with energy v which have last been rearranged before instant τ ∈ [0, t]. The quantity n(t, t, v) is the current number of active CRRs (per unit mass) with energy v,

n(t, t, v) = Na p(v).   (2)

The quantity

(∂n/∂τ)(t, τ, v) dτ

equals the number (per unit mass) of active CRRs with energy v that have last been rearranged during the interval [τ, τ + dτ] and, afterwards, have not been rearranged within the interval [τ, t]. The number of active CRRs (per unit mass) with energy v that were
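To make the kinetics of Eq. (1) concrete: with Γ(v) = Γ0 exp(−v/V), a distribution of well depths p(v) turns the single-well exponential decay into a broad, non-Debye relaxation, since the fraction of CRRs not yet rearranged at time t is ∫ p(v) exp(−Γ(v) t) dv. The sketch below is our own numerical illustration, not part of the paper; the Gaussian form of p(v) is assumed purely for demonstration, as the paper's p(v) is not specified in this excerpt.

```python
import numpy as np

def survival(t, Gamma0, V, mu, sigma, n_grid=4000):
    """Fraction of active CRRs not yet rearranged at time t:
    integral over v >= 0 of p(v) * exp(-Gamma(v) * t),
    with Gamma(v) = Gamma0 * exp(-v / V) and a demonstration-only
    Gaussian well-depth distribution p(v) ~ N(mu, sigma^2) truncated at 0.
    """
    v = np.linspace(0.0, mu + 8.0 * sigma, n_grid)
    dv = v[1] - v[0]
    p = np.exp(-0.5 * ((v - mu) / sigma) ** 2)
    p /= p.sum() * dv                      # normalize on the truncated grid
    return float(np.sum(p * np.exp(-Gamma0 * np.exp(-v / V) * t)) * dv)

# Decades of time reveal the stretched, non-single-exponential decay
# characteristic of glasses near the transition point:
for t in [1e-2, 1e-1, 1.0, 1e1, 1e2]:
    print(t, survival(t, Gamma0=1.0, V=1.0, mu=3.0, sigma=1.0))
```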