Animation Design and Production Graduation Thesis (Graduation Summary Sample Template)
Note: this template includes two sample theses and a graduation internship report. College: XXXXXXX College. Major: Animation Design and Production. Class: Class. Name: Name. Title: Title. Evaluation: XXXXXXXX College, June 20XX.

Abstract: With the development of computer and network technologies, animation design and production has become a flourishing sunrise industry. Compared with the world's developed countries, China's digital media industry is only just getting started.
This thesis covers the process in three parts: 3D modeling, rendering the model, and compositing the animated short.
Maya, produced by the American company Alias|Wavefront, is a world-class 3D animation package aimed at professional film and television advertising, character animation, movie special effects, and similar work.
Maya is full-featured and flexible, easy to learn and use, highly efficient in production, and its renders are strongly photorealistic; it is high-end, film-grade production software.
Keywords: film and television, animation, modeling, MAYA

Contents

1. Introduction
(1) A study of animation history
  1. The development of animated film
  2. The definition of animation
  3. The history of Chinese animation
(2) The prospects and current state of Chinese animation
  1. The prospects of Chinese animation
  2. The current state of Chinese animation
2. 3D Animation
(1) The definition of 3D animation
(2) 3D animation production software
  1. An introduction to MAYA and the compositing software AE
(3) Other 3D animation software
  1. 3DS MAX
  2. ZBrush
  3. Poser
  4. Softimage XSI
  5. Rhino
3. Modeling and Compositing
(1) The model (built in MAYA)
  1. The model's theme
  2. Building the model
  3. Rendering the model
(2) Compositing (in AE)
  1. Producing the opening titles
  2. Importing and compositing the MAYA renders
  3. Producing the end credits
  4. Rendering the short film
4. Conclusion
Acknowledgements
References

1. Introduction

(1) A study of animation history

1. The development of animated film

2. The definition of animation: Animation is a comprehensive art form and a product of industrial society's search for spiritual release; it is a form of artistic expression that gathers painting, comics, film, digital media, photography, music, literature, and many other art disciplines into one.
A Walkthrough Demonstration of How the AdaptivCRT Algorithm Operates
Day in the Life of an AdaptivCRT™ Patient
Note: AdaptivCRT™ is available in the Viva XT CRT-D device.
*AV conduction checks will be delayed if AV block is suspected.
2. Determine Pacing Method
Determines the pacing method based on the intrinsic conduction assessment and heart rate
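Read as pseudocode, the published descriptions of this step amount to a two-branch rule: when the periodic conduction check finds intact intrinsic AV conduction and the heart rate is at or below a cutoff (around 100 bpm, per Martin et al., reference 1 below), the device delivers adaptive LV-only pacing timed to fuse with intrinsic conduction; otherwise it delivers adaptive BiV pacing with device-optimized delays. The Python sketch below is an illustrative paraphrase only; the function name, threshold parameter, and return labels are assumptions, not the device's implementation.

```python
def select_pacing_method(av_conduction_normal: bool, heart_rate_bpm: float,
                         rate_cutoff_bpm: float = 100.0) -> str:
    """Illustrative paraphrase of the AdaptivCRT decision step.

    The cutoff value, names, and labels are assumptions for illustration;
    they are not taken from the device manual.
    """
    if av_conduction_normal and heart_rate_bpm <= rate_cutoff_bpm:
        # Intrinsic conduction is usable: pace the left ventricle only,
        # timed to fuse with the patient's own right-ventricular activation.
        return "adaptive LV-only pacing"
    # AV block suspected or rate too high: pace both ventricles,
    # with device-optimized AV and VV delays.
    return "adaptive BiV pacing"
```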
1. Martin DO, et al. Heart Rhythm (2012), doi:10.1016/j.hrthm.2012.07.009. 2. Medtronic Viva XT CRT-D manual.
Adaptive LV pacing
What Is Adaptive LV Pacing?
Menu Selection
This slide deck provides an overview of the AdaptivCRT™ feature, as well as animations of the feature in action for different patient scenarios. Click on one of the selections below or advance to the next slide to review the entire presentation:
Dreaming of Adaptive Interface Agents
Copyright is held by the author/owner(s). CHI 2007, April 28 – May 3, 2007, San Jose, USA. ACM 1-xxxxxxxxxxxxxxxxxx.
Abstract: This interactive project uses the metaphor of human sleep and dreaming to present a novel paradigm that helps address problems in adaptive user interface design. Two significant problems in adaptive interfaces are interfaces that adapt when a user does not want them to do so, and interfaces where it is hard to understand how the interface changed during the process of adaptation. In the project described here, the system only adapts when the user allows it to go to sleep long enough to have a dream. In addition, the dream itself is a visualization of the transformation of the interface, so that a person may see what changes have occurred. This project presents an interim stage of this system, in which an autonomous agent collects knowledge about its environment, falls asleep, has dreams, and reconfigures its internal representation of the world while it dreams. People may alter the agent's environment, may prevent it from sleeping by making noise into a microphone, and may observe the dream process that ensues when it is allowed to fall asleep. By drawing on the universal human experience of sleep and dreaming, this project seeks to make adaptive interfaces more effective and comprehensible.
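As a sketch of the interaction loop the abstract describes, and not the authors' implementation, the agent can be modeled as a tiny state machine: microphone noise above a threshold keeps it awake and collecting observations; sustained quiet lets it fall asleep; and after sleeping long enough it dreams, i.e., it reconfigures its internal world model while each change is surfaced for the observer. Every name and threshold below is an assumption for illustration.

```python
NOISE_WAKE_THRESHOLD = 0.2  # assumed: mic level above this keeps the agent awake
TICKS_BEFORE_DREAM = 5      # assumed: quiet ticks of sleep needed before dreaming

class DreamingAgent:
    def __init__(self):
        self.observations = []  # knowledge collected while awake
        self.world_model = {}   # internal representation, rebuilt only by dreams
        self.sleep_ticks = 0

    def tick(self, observation, mic_level):
        if mic_level > NOISE_WAKE_THRESHOLD:
            # Noise prevents sleep: the agent keeps observing but never adapts.
            self.sleep_ticks = 0
            self.observations.append(observation)
        else:
            self.sleep_ticks += 1
            if self.sleep_ticks >= TICKS_BEFORE_DREAM:
                self.dream()
                self.sleep_ticks = 0

    def dream(self):
        # Adaptation happens only during a dream, and each step is reported
        # so an observer can follow how the representation changes.
        for obs in self.observations:
            old = self.world_model.get(obs, 0)
            self.world_model[obs] = old + 1
            print(f"dream: {obs!r} weight {old} -> {old + 1}")
        self.observations.clear()

agent = DreamingAgent()
agent.tick("red block", mic_level=0.8)  # awake: observes, does not adapt
for _ in range(TICKS_BEFORE_DREAM):
    agent.tick(None, mic_level=0.0)     # quiet: agent sleeps, then dreams
```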
SIGGRAPH Conference Papers Through the Years, Part 2
SIGGRAPH 2002 papers on the web

Page maintained by Tim Rowley. If you have additions or changes, send an e-mail. Note that when possible I link to the page containing the link to the actual PDF or PS of the preprint. I prefer this as it gives some context to the paper and avoids possible copyright problems with direct linking. Thus you may need to search on the page to find the actual document.

ACM Digital Library: Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques

Images and Video
- Video Matting of Complex Scenes. Yung-Yu Chuang, Aseem Agarwala, Brian Curless (University of Washington), David H. Salesin (University of Washington and Microsoft Research), Richard Szeliski (Microsoft Research)
- Gradient Domain High Dynamic Range Compression. Raanan Fattal, Dani Lischinski, Michael Werman (The Hebrew University)
- Fast Bilateral Filtering for the Display of High Dynamic Range Images. Frédo Durand, Julie Dorsey (Massachusetts Institute of Technology)
- Photographic Tone Reproduction for Digital Images. Erik Reinhard, Michael Stark, Peter Shirley (University of Utah), Jim Ferwerda (Cornell University)
- Transferring Color to Greyscale Images. Tomihisa Welsh, Michael Ashikhmin, Klaus Mueller (Stony Brook University)

Modeling and Simulation
- CHARMS: A Simple Framework for Adaptive Simulation (PDF). Eitan Grinspun (California Institute of Technology), Petr Krysl (University of California, San Diego), Peter Schröder (California Institute of Technology)
- Graphical Modeling and Animation of Ductile Fracture. James F. O'Brien, Adam W. Bargteil (University of California, Berkeley), Jessica K. Hodgins (Carnegie Mellon University)
- Creating Models of Truss Structures With Optimization. Jeffrey Smith, Jessica K. Hodgins, Irving Oppenheim (Carnegie Mellon University), Andrew Witkin (Pixar Animation Studios)
- A Procedural Approach to Authoring Solid Models. Barbara Cutler, Julie Dorsey, Leonard McMillan, Matthias Mueller, Robert Jagnow (Massachusetts Institute of Technology)

Geometry
- Cut-and-Paste Editing of Multiresolution Surfaces (abstract). Henning Biermann (New York University), Ioana Martin, Fausto Bernardini (IBM T.J. Watson Research Center), Denis Zorin (New York University)
- Pointshop 3D: An Interactive System for Point-Based Surface Editing. Matthias Zwicker, Mark Pauly, Oliver Knoll, Markus Gross (Eidgenössische Technische Hochschule Zürich)
- Level Set Surface Editing Operators. Ken Museth, David E. Breen (California Institute of Technology), Ross T. Whitaker (University of Utah), Alan H. Barr (California Institute of Technology)
- Dual Contouring of Hermite Data. Tao Ju, Frank Losasso, Scott Schaefer, Joe Warren (Rice University)

Parameterization and Meshes
- Interactive Geometry Remeshing. Pierre Alliez (University of Southern California and INRIA), Mark Meyer (California Institute of Technology), Mathieu Desbrun (University of Southern California)
- Geometry Images. Xianfeng Gu, Steven Gortler (Harvard University), Hugues Hoppe (Microsoft Research)
- Least Squares Conformal Maps for Automatic Texture Atlas Generation. Bruno Levy (INRIA Lorraine), Sylvain Petitjean, Nicolas Ray (CNRS), Jerome Maillot (Alias|Wavefront)
- Progressive and Lossless Compression of Arbitrary Simplicial Complexes. Pierre-Marie Gandoin, Olivier Devillers (INRIA Sophia-Antipolis)
- Linear Combination of Transformations. Marc Alexa (Technische Universität Darmstadt)

Character Animation
- Trainable Videorealistic Speech Animation. Tony Ezzat, Gadi Geiger, Tomaso Poggio (Massachusetts Institute of Technology, Center for Biological and Computational Learning)
- Turning to the Masters: Motion Capturing Cartoons. Christoph Bregler, Lorie Loeb, Erika Chuang, Hrishikesh Deshpande (Stanford University)
- Synthesis of Complex Dynamic Character Motion From Simple Animations. C. Karen Liu, Zoran Popovic (University of Washington)
- Integrated Learning for Interactive Synthetic Characters. Bruce Blumberg, Marc Downie, Yuri Ivanov, Matt Berlin, Michael Patrick Johnson, William Tomlinson (Massachusetts Institute of Technology, The Media Laboratory)

3D Acquisition and Image Based Rendering
- Image-Based 3D Photography Using Opacity Hulls. Wojciech Matusik (Massachusetts Institute of Technology), Hanspeter Pfister (Mitsubishi Electric Research Laboratory), Addy Ngan (Massachusetts Institute of Technology), Paul Beardsley (Mitsubishi Electric Research Laboratory), Leonard McMillan (Massachusetts Institute of Technology)
- Real-Time 3D Model Acquisition. Szymon Rusinkiewicz (Princeton University), Olaf Hall-Holt, Marc Levoy (Stanford University)
- Light Field Mapping: Efficient Representation and Hardware Rendering of Surface Light Fields (project page). Wei-Chao Chen (University of North Carolina at Chapel Hill), Radek Grzeszczuk, Jean-Yves Bouguet (Intel Corporation)
- Feature-Based Light Field Morphing (PDF). Baining Guo (Microsoft Research China), Zhunping Zhang (Tsinghua University), Lifeng Wang, Heung-Yeung Shum (Microsoft Research China)

Animation From Motion Capture
- Motion Textures: A Two-Level Statistical Model for Character Motion Synthesis (PDF). Yan Li, Tianshu Wang, Heung-Yeung Shum (Microsoft Research China)
- Motion Graphs. Lucas Kovar, Michael Gleicher (University of Wisconsin-Madison), Fred Pighin (USC Institute for Creative Technologies)
- Interactive Motion Generation From Examples (PDF). Okan Arikan, D.A. Forsyth (University of California, Berkeley)
- Interactive Control of Avatars Animated With Human Motion Data. Jehee Lee, Jinxiang Chai (Carnegie Mellon University), Paul S.A. Reitsma (Brown University), Jessica K. Hodgins (Carnegie Mellon University), Nancy S. Pollard (Brown University)
- Motion Capture Assisted Animation: Texturing and Synthesis. Katherine Pullen, Christoph Bregler (Stanford University)

Lighting and Appearance
- Homomorphic Factorization of BRDF-Based Lighting Computation. Lutz Latta, Andreas Kolb (University of Applied Sciences Wedel)
- Frequency Space Environment Map Rendering. Ravi Ramamoorthi, Pat Hanrahan (Stanford University)
- Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments. Peter-Pike Sloan (Microsoft Research), Jan Kautz (Max-Planck-Institut für Informatik), John Snyder (Microsoft Research)
- Interactive Global Illumination in Dynamic Scenes. Parag Tole, Fabio Pellacini, Bruce Walter, Donald P. Greenberg (Cornell University)
- A Lighting Reproduction Approach to Live-Action Compositing. Paul Debevec, Chris Tchou (USC Institute for Creative Technologies), Andreas Wenger (Brown University), Tim Hawkins, Andy Gardner, Brian Emerson (USC Institute for Creative Technologies), Ansul Panday (University of Southern California)

Shadows, Translucency, and Visibility
- Perspective Shadow Maps. Marc Stamminger, George Drettakis (REVES/INRIA Sophia-Antipolis)
- A User Interface for Interactive Cinematic Shadow Design. Fabio Pellacini, Parag Tole, Donald P. Greenberg (Cornell University)
- Robust Epsilon Visibility. Florent Duguet, George Drettakis (REVES/INRIA Sophia-Antipolis)
- A Rapid Hierarchical Rendering Technique for Translucent Materials. Henrik Wann Jensen (Stanford University), Juan Buhler (PDI/DreamWorks)

Soft Things
- DyRT: Dynamic Response Textures for Real Time Deformation Simulation with Graphics Hardware. Doug L. James, Dinesh K. Pai (University of British Columbia)
- Interactive Skeleton-Driven Dynamic Deformations. Steve Capell, Seth Green, Brian Curless, Tom Duchamp, Zoran Popovic (University of Washington)
- Robust Treatment of Collisions, Contact, and Friction for Cloth Animation. Robert Bridson, Ronald Fedkiw (Stanford University), John Anderson (Industrial Light & Magic)
- Stable but Responsive Cloth. Kwang-Jin Choi, Hyeong-Seok Ko (Seoul National University)

Humans and Animals
- Articulated Body Deformation From Range Scan Data. Brett Allen, Brian Curless, Zoran Popovic (University of Washington)
- Interactive Multi-Resolution Hair Modeling and Editing. Tae-Yong Kim, Ulrich Neumann (University of Southern California)
- Modeling and Rendering of Realistic Feathers (PDF). Yanyun Chen, Yingqing Xu, Baining Guo, Heung-Yeung Shum (Microsoft Research China)
- Eyes Alive. Sooha P. Lee (University of Pennsylvania), Jeremy B. Badler (The Smith-Kettlewell Eye Research Institute), Norman I. Badler (University of Pennsylvania)
- Physiological Measures of Presence in Virtual Environments. Michael Meehan, Brent Insko, Mary Whitton, Frederick P. Brooks, Jr. (University of North Carolina at Chapel Hill)

Texture Synthesis
- Synthesis of Bidirectional Texture Functions on Arbitrary Surfaces (PDF). Xin Tong (Microsoft Research), Jingdan Zhang (Tsinghua University), Ligang Liu (Microsoft Research), Xi Wang (Tsinghua University), Baining Guo, Heung-Yeung Shum (Microsoft Research China)
- Jigsaw Image Mosaics. Junhwan Kim, Fabio Pellacini (Cornell University)
- Self-Similarity Based Texture Editing. Stephen Brooks, Neil Dodgson (University of Cambridge)
- Hierarchical Pattern Mapping. Cyril Soler, Marie-Paule Cani, Alexis Angelidis (IMAGIS-GRAVIR)
- Improving Noise. Ken Perlin (New York University)

Graphics Hardware
- SAGE Graphics Architecture (XVR-4000 White Paper). Michael F. Deering, David Naegle (Sun Microsystems, Inc.)
- Chromium: A Stream Processing Framework for Interactive Rendering on Clusters (project page). Greg Humphreys, Mike Houston, Yi-Ren Ng (Stanford University), Randall Frank, Sean Ahern (Lawrence Livermore National Laboratory), Peter Kirchner, Jim Klosowski (IBM Research)
- Ray Tracing on Programmable Graphics Hardware. Timothy J. Purcell, Ian Buck (Stanford University), William R. Mark (Stanford University, now at NVIDIA), Pat Hanrahan (Stanford University)
- Shader-Driven Compilation of Rendering Assets (PDF hosted locally at author's request). Paul Lalonde, Eric Schenk (Electronic Arts (Canada) Inc.)

Fluids and Fire
- Physically Based Modeling and Animation of Fire. Duc Nguyen, Ronald Fedkiw, Henrik Wann Jensen (Stanford University)
- Structural Modeling of Natural Flames (PDF hosted locally at author's request). Arnauld Lamorlette, Nick Foster (PDI/DreamWorks)
- Animation and Rendering of Complex Water Surfaces. Douglas P. Enright, Steve Marschner, Ronald Fedkiw (Stanford University)
- Image Based Flow Visualization. Jarke J. van Wijk (Technische Universiteit Eindhoven)

Painting and Non-Photorealistic Graphics
- WYSIWYG NPR: Drawing Strokes Directly on 3D Models. Robert D. Kalnins, Lee Markosian (Princeton University), Barbara J. Meier, Michael A. Kowalski, Joseph C. Lee (Brown University), Philip L. Davidson, Matthew Webb (Princeton University), John F. Hughes (Brown University), Adam Finkelstein (Princeton University)
- Octree Textures. David Benson, Joel Davis (Industrial Light & Magic)
- Painting and Rendering Textures on Unparameterized Models (PDF). David (grue) DeBry, Jonathan Gibbs, Devorah DeLeon Petty, Nate Robins (Thrown Clear Productions)
- Stylization and Abstraction of Photographs. Doug DeCarlo, Anthony Santella (Rutgers University)
- Object-Based Image Editing (thesis). William Barrett, Alan Cheney (Brigham Young University)
Native Instruments MASCHINE MIKRO MK3 User Manual
The information in this document is subject to change without notice and does not represent a commitment on the part of Native Instruments GmbH. The software described by this document is subject to a License Agreement and may not be copied to other media. No part of this publication may be copied, reproduced or otherwise transmitted or recorded, for any purpose, without prior written permission by Native Instruments GmbH, hereinafter referred to as Native Instruments.

"Native Instruments", "NI" and associated logos are (registered) trademarks of Native Instruments GmbH. ASIO, VST, HALion and Cubase are registered trademarks of Steinberg Media Technologies GmbH. All other product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them.

Document authored by: David Gover and Nico Sidi.
Software version: 2.8 (02/2019)
Hardware version: MASCHINE MIKRO MK3

Special thanks to the Beta Test Team, who were invaluable not just in tracking down bugs, but in making this a better product.

NATIVE INSTRUMENTS GmbH, Schlesische Str. 29-30, D-10997 Berlin, Germany, www.native-instruments.de
NATIVE INSTRUMENTS North America, Inc., 6725 Sunset Boulevard, 5th Floor, Los Angeles, CA 90028, USA
NATIVE INSTRUMENTS K.K., YO Building 3F, Jingumae 6-7-15, Shibuya-ku, Tokyo 150-0001, Japan, www.native-instruments.co.jp
NATIVE INSTRUMENTS UK Limited, 18 Phipp Street, London EC2A 4NU, UK
NATIVE INSTRUMENTS FRANCE SARL, 113 Rue Saint-Maur, 75011 Paris, France
SHENZHEN NATIVE INSTRUMENTS COMPANY Limited, 5F, Shenzhen Zimao Center, 111 Taizi Road, Nanshan District, Shenzhen, Guangdong, China

© NATIVE INSTRUMENTS GmbH, 2019. All rights reserved.

Table of Contents

1 Welcome to MASCHINE
  1.1 MASCHINE Documentation
  1.2 Document Conventions
  1.3 New Features in MASCHINE 2.8
  1.4 New Features in MASCHINE 2.7.10
  1.5 New Features in MASCHINE 2.7.8
  1.6 New Features in MASCHINE 2.7.7
  1.7 New Features in MASCHINE 2.7.4
  1.8 New Features in MASCHINE 2.7.3
2 Quick Reference
  2.1 MASCHINE Project Overview
  2.2 MASCHINE Hardware Overview
  2.3 MASCHINE Software Overview
3 Basic Concepts
  3.1 Important Names and Concepts
  3.2 Adjusting the MASCHINE User Interface
  3.3 Common Operations
  3.4 Native Kontrol Standard
  3.5 Stand-Alone and Plug-in Mode
  3.6 Preferences
  3.7 Integrating MASCHINE into a MIDI Setup
  3.8 Syncing MASCHINE using Ableton Link
4 Browser
  4.1 Browser Basics
  4.2 Searching and Loading Files from the Library
  4.3 Additional Browsing Tools
  4.4 Using Favorites in the Browser
  4.5 Editing the Files' Tags and Properties
  4.6 Loading and Importing Files from Your File System
  4.7 Locating Missing Samples
  4.8 Using Quick Browse
5 Managing Sounds, Groups, and Your Project
  5.1 Overview of the Sounds, Groups, and Master
  5.2 Managing Sounds
  5.3 Managing Groups
  5.4 Exporting MASCHINE Objects and Audio
  5.5 Importing Third-Party File Formats
6 Playing on the Controller
  6.1 Adjusting the Pads
  6.2 Adjusting the Key, Choke, and Link Parameters for Multiple Sounds
  6.3 Playing Tools
  6.4 Performance Features
  6.5 Using Lock Snapshots
7 Working with Plug-ins
  7.1 Plug-in Overview
  7.2 The Sampler Plug-in
  7.3 Using Native Instruments and External Plug-ins
8 Using the Audio Plug-in
  8.1 Loading a Loop into the Audio Plug-in
  8.2 Editing Audio in the Audio Plug-in
  8.3 Using Loop Mode
  8.4 Using Gate Mode
9 Using the Drumsynths
  9.1 Drumsynths – General Handling
  9.2 The Kicks
  9.3 The Snares
  9.4 The Hi-hats
  9.5 The Toms
  9.6 The Percussions
  9.7 The Cymbals
10 Using the Bass Synth
  10.1 Bass Synth – General Handling
11 Working with Patterns
  11.1 Pattern Basics
  11.2 Recording Patterns in Real Time
  11.3 Recording Patterns with the Step Sequencer
  11.4 Editing Events
  11.5 Recording and Editing Modulation
  11.6 Creating MIDI Tracks from Scratch in MASCHINE
  11.7 Managing Patterns
  11.8 Importing/Exporting Audio and MIDI to/from Patterns
12 Audio Routing, Remote Control, and Macro Controls
  12.1 Audio Routing in MASCHINE
  12.2 Using MIDI Control and Host Automation
  12.3 Creating Custom Sets of Parameters with the Macro Controls
13 Controlling Your Mix
  13.1 Mix View Basics
  13.2 The Mixer
  13.3 The Plug-in Chain
  13.4 The Plug-in Strip
14 Using Effects
  14.1 Applying Effects to a Sound, a Group or the Master
  14.2 Applying Effects to External Audio
  14.3 Creating a Send Effect
  14.4 Creating Multi-Effects
15 Effect Reference
  15.1 Dynamics
  15.2 Filtering Effects
  15.3 Modulation Effects
  15.4 Spatial and Reverb Effects
  15.5 Delays
  15.6 Distortion Effects
  15.7 Perform FX
16 Working with the Arranger
  16.1 Arranger Basics
  16.2 Using Ideas View
  16.3 Using Song View
  16.4 Playing with Sections
  16.5 Triggering Sections or Scenes via MIDI
  16.6 The Arrange Grid
  16.7 Quick Grid
17 Sampling and Sample Mapping
  17.1 Opening the Sample Editor
  17.2 Recording Audio
  17.3 Editing a Sample
  17.4 Slicing a Sample
  17.5 Mapping Samples to Zones
18 Appendix: Tips for Playing Live
  18.1 Preparations
  18.2 Basic Techniques
  18.3 Special Tricks
19 Troubleshooting
  19.1 Knowledge Base
  19.2 Technical Support
  19.3 Registration Support
  19.4 User Forum
20 Glossary
Index

1 Welcome to MASCHINE

Thank you for buying MASCHINE!

MASCHINE is a groove production studio that implements the familiar working style of classical groove boxes along with the advantages of a computer-based system. MASCHINE is ideal for making music live, as well as in the studio. It's the hands-on aspect of a dedicated instrument, the MASCHINE hardware controller, united with the advanced editing features of the MASCHINE software.

Creating beats is often not very intuitive with a computer, but using the MASCHINE hardware controller to do it makes it easy and fun. You can tap in freely with the pads or use Note Repeat to jam along. Alternatively, build your beats using the step sequencer just as in classic drum machines.

Patterns can be intuitively combined and rearranged on the fly to form larger ideas. You can try out several different versions of a song without ever having to stop the music.

Since you can integrate it into any sequencer that supports VST, AU, or AAX plug-ins, you can reap the benefits in almost any software setup, or use it as a stand-alone application.
You can sample your own material, slice loops and rearrange them easily. However, MASCHINE is a lot more than an ordinary groovebox or sampler: it comes with an inspiring 7-gigabyte library, and a sophisticated, yet easy to use tag-based Browser to give you instant access to the sounds you are looking for.

What's more, MASCHINE provides lots of options for manipulating your sounds via internal effects and other sound-shaping possibilities. You can also control external MIDI hardware and 3rd-party software with the MASCHINE hardware controller, while customizing the functions of the pads, knobs and buttons according to your needs utilizing the included Controller Editor application. We hope you enjoy this fantastic instrument as much as we do. Now let's get going!

—The MASCHINE team at Native Instruments.

1.1 MASCHINE Documentation

Native Instruments provide many information sources regarding MASCHINE. The main documents should be read in the following sequence:

1. MASCHINE MIKRO Quick Start Guide: This animated online guide provides a practical approach to help you learn the basics of MASCHINE MIKRO. The guide is available from the Native Instruments website: https:///maschine-mikro-quick-start/
2. MASCHINE Manual (this document): The MASCHINE Manual provides you with a comprehensive description of all MASCHINE software and hardware features.

Additional documentation sources provide you with details on more specific topics:

► Online Support Videos: You can find a number of support videos on The Official Native Instruments Support Channel under the following URL: https:///NIsupport-EN. We recommend that you follow along with these instructions while the respective application is running on your computer.

Other Online Resources: If you are experiencing problems related to your Native Instruments product that the supplied documentation does not cover, there are several ways of getting help:

▪ Knowledge Base
▪ User Forum
▪ Technical Support
▪ Registration Support

You will find more information on these subjects in the chapter Troubleshooting.

1.2 Document Conventions

This section introduces you to the signage and text highlighting used in this manual. This document uses particular formatting to point out special facts and to warn you of potential issues. The icons introducing these notes let you see what kind of information can be expected.

Furthermore, the following formatting is used:

▪ Text appearing in (drop-down) menus (such as Open…, Save as… etc.) in the software and paths to locations on your hard disk or other storage devices is printed in italics.
▪ Text appearing elsewhere (labels of buttons, controls, text next to checkboxes etc.) in the software is printed in blue. Whenever you see this formatting applied, you will find the same text appearing somewhere on the screen.
▪ Text appearing on the displays of the controller is printed in light grey. Whenever you see this formatting applied, you will find the same text on a controller display.
▪ Text appearing on labels of the hardware controller is printed in orange. Whenever you see this formatting applied, you will find the same text on the controller.
▪ Important names and concepts are printed in bold.
▪ References to keys on your computer's keyboard you'll find put in square brackets (e.g., "Press [Shift] + [Enter]").
► Single instructions are introduced by this play button type arrow.
→ Results of actions are introduced by this smaller arrow.

Naming Convention

Throughout the documentation we will refer to the MASCHINE controller (or just controller) as the hardware controller and MASCHINE software as the software installed on your computer. The term "effect" will sometimes be abbreviated as "FX" when referring to elements in the MASCHINE software and hardware. These terms have the same meaning.

Button Combinations and Shortcuts on Your Controller

Most instructions will use the "+" sign to indicate buttons (or buttons and pads) that must be pressed simultaneously, starting with the button indicated first. E.g., an instruction such as "Press SHIFT + PLAY" means:

1. Press and hold SHIFT.
2. While holding SHIFT, press PLAY and release it.
3. Release SHIFT.

1.3 New Features in MASCHINE 2.8

The following new features have been added to MASCHINE:

Sounds.com Integration

▪ Browse on Sounds.com, create your own collections of loops and one-shots and send them directly to the MASCHINE browser.

Improvements to the Browser

▪ Samples are now cataloged in separate Loops and One-shots tabs in the Browser.
▪ Previews of loops selected in the Browser will be played in sync with the current project. When a loop is selected with Prehear turned on, it will begin playing immediately in sync with the project if the transport is running. If a loop preview starts part-way through the loop, the loop will play once more for its full length to ensure you get to hear the entire loop once in context with your project.
▪ Filters and product selections will be remembered when switching between content types and Factory/User Libraries in the Browser.
▪ Browser content synchronization between multiple running instances. When running multiple instances of MASCHINE, either as Standalone and/or as a plug-in, updates to the Library will be synced across the instances. For example, if you delete a sample from your User Library in one instance, the sample will no longer be present in the other instances. Similarly, if you save a preset in one instance, that preset will then be available in the other instances, too.
▪ Edits made to samples in the Factory Libraries will be saved to the Standard User Directory.

For more information on these new features, refer to the following chapter ↑4, Browser.

Improvements to the MASCHINE MIKRO MK3 Controller

▪ You can now set sample Start and End points using the controller. For more information refer to ↑17.3.1, Using the Edit Page.

Improved Support for A-Series Keyboards

▪ When browsing with A-Series keyboards, you can now jump quickly to the results list by holding SHIFT and pushing right on the 4D Encoder.
▪ When browsing with A-Series keyboards, you can fast-scroll through the Browser results list by holding SHIFT and twisting the 4D Encoder.
▪ Mute and Solo Sounds and Groups from A-Series keyboards. Sounds are muted in TRACK mode while Groups are muted in IDEAS.
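The in-sync loop preview described above reduces to simple modular arithmetic: if the project transport has been running for t beats and the loop is L beats long, the preview should enter the loop at phase t mod L, then play one further full pass so the whole loop is heard. A minimal sketch, with invented names and beat-based units assumed (this is not Native Instruments' code):

```python
def preview_entry_point(transport_beats: float, loop_beats: float) -> float:
    """Phase inside the loop at which an in-sync preview should start."""
    return transport_beats % loop_beats

# Example: the transport is at beat 9.5 and we preview a 4-beat loop.
# The preview enters 1.5 beats into the loop, stays locked to the project,
# and then plays one more complete pass so the entire loop is heard once.
print(preview_entry_point(9.5, 4.0))  # 1.5
```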
Title: Trends in Iconography
Icons have become an integral part of modern communication, transcending language barriers and conveying messages efficiently. As society evolves, so do the trends in iconography. In this essay, we will explore the current trends in icon design and their implications.

One prominent trend in iconography is the shift towards simplicity and minimalism. In today's fast-paced digital world, users crave simplicity and clarity in design. This trend is evident in the widespread use of flat design and minimalist icons across various platforms and applications. Flat icons, characterized by clean lines and simple shapes, offer a visually appealing and user-friendly experience. By eliminating unnecessary details, these icons enhance usability and facilitate quick recognition.

Moreover, the rise of responsive design has influenced iconography trends. With the increasing prevalence of mobile devices, designers are prioritizing scalability and adaptability in icon design. Scalable vector graphics (SVGs) have become popular for their ability to maintain clarity and sharpness across different screen sizes. Adaptive icons, which adjust their appearance based on the device and context, are also gaining traction. These trends reflect the importance of flexibility and responsiveness in modern icon design, ensuring a seamless user experience across diverse devices and platforms.

Another trend shaping iconography is the emphasis on inclusivity and diversity. In an increasingly globalized world, it is essential for icons to represent a wide range of cultures, identities, and experiences. Designers are paying more attention to inclusive iconography, incorporating diverse representations of people, genders, and abilities. This trend not only promotes inclusivity but also helps users feel represented and valued in digital spaces. By embracing diversity in icon design, we can foster a more inclusive and equitable online environment.

Furthermore, the integration of motion and animation has emerged as a notable trend in iconography. Animated icons add dynamism and interactivity to user interfaces, enhancing engagement and delighting users. Microinteractions, such as hover effects and subtle animations, provide visual feedback and guide user interactions. Motion graphics also play a crucial role in storytelling and branding, conveying emotions and narratives through animated icons. As technology advances, we can expect to see more sophisticated and immersive animations in icon design, further blurring the lines between static and interactive elements.

In addition to these trends, the use of unconventional shapes and metaphors is gaining popularity in iconography. Designers are experimenting with abstract forms and symbolic imagery to convey complex concepts and emotions. This trend reflects a departure from traditional iconography conventions, allowing for more creative expression and interpretation. By pushing the boundaries of icon design, we can create visually distinctive and memorable experiences for users.

In conclusion, the field of iconography is constantly evolving, driven by technological advancements, cultural shifts, and changing user preferences. The current trends in icon design reflect a focus on simplicity, scalability, inclusivity, motion, and creativity. As designers continue to innovate and experiment, we can expect to see new and exciting developments in iconography that redefine the way we communicate and interact in the digital world.
S1D13305 Datasheet (Chinese Edition)
The information of the product number change

Starting April 1, 2001, the product number will be changed as listed below. To order from April 1, 2001 please use the new product number. For further information, please contact an Epson sales representative.

• S1D1350x Series
  New No.: S5U13503P00C, S5U13504P00C, S5U13505P00C, S5U13506P00C

• S1D1370x Series (S1D13704F00A, S1D13705F00A, S1D13706B00A, S1D13706F00A) and S1D13708 Series
  Previous No. / New No.:
  SDU1374#0C / S5U13704P00C
  SDU1375#0C / S5U13705P00C
  SDU1376#0C / S5U13706P00C
  SDU1376BVR / S5U13706B32R
  SDU1378#0C / S5U13708P00C

• S1D1380x Series
  Previous No. / New No.:
  SDU1386#0C / S5U13806P00C

• S1D13A0x Series
  Previous No.: SDU13A3#0C, SDU13A4#0C

• S1D13305 Series: S1D13305D00A, S1D13305F00A, S1D13305F00B
GMW3136

1.3.2.4 Class D: Plastic Safety Glazing
1.3.2.5 Class E: Acoustic Windshield Glass
1.3.2.6 Class F: Heat Strengthened Laminated Glass
2 References
2.1 Normative
2.2 GM
2.3 Additional
3 Requirements
3.1 General Requirements
3.1.1 Regulatory
3.1.2 Glass Manufacturers
3.1.3 Printed Glazing, Glass Blackout (FRIT)
3.1.3.1 Printing Side
3.1.3.2 Appearance
Hikvision Product Showcase Brochure
Smart Meeting
Interactive Display
Easy projection, convenient sharing and flexible commenting contribute to a more efficient meeting.
Easy Projection
Working Area
Access Control
Authorising Access
With a Hikvision access control system, you can assign permissions for room entry - preventing illegal break-ins, with real-time notifications of unauthorised access attempts. Because the access system can be linked to a camera, you can view video footage of the incident for full situation management.
HikCentral Professional ACS License
MinMoe Access Control Terminal
- Flexible Deployment - Supports wall mounting and floor standing with mounting pole.
- Visualised Temperature Screening - 7-inch touch screen.
- Thermographic Technology - Measures forehead temperature upon face detection.
- High Accuracy of Temperature Screening - Temperature range 30°C to 45°C*.
- Mask Detection - Supports face mask wearing alerts and compulsory mask wearing alerts. Temperature screening with mask.
A Fast Single-Model Multi-Style Transfer Method

In the era of short video, style filter effects of all kinds are much loved, and image style transfer has become widely known. However, in many image style-transfer methods one model can serve only one style, which is inefficient in application.

Gatys et al. [1] first proposed a CNN-based parameterised texture model of statistical distributions in 2015. They found that the higher layers of the VGG (Visual Geometry Group) network express the semantic style information of an image well, while its lower layers represent the content texture features well. A style texture is represented by computing a Gram matrix; then, using image reconstruction, the pixel values of a white-noise image are iteratively optimised so that its Gram matrix approaches the Gram matrix of the style image, finally reconstructing an image that has both the style of the style image and the content of the content image. This work attracted extensive follow-up research and successfully brought deep learning to style transfer.

Reference [2] verified theoretically why the Gram matrix can represent style features, holding that the essence of neural style transfer is matching the feature distributions of the style image and the generated image, and proposed a new way of modelling style with batch-normalisation statistics at different layers: style is represented by the per-channel mean and variance of the feature maps at different VGG layers, providing a reference for later work on style modelling.
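The Gram-matrix representation described above is easy to state concretely. The following C sketch is ours, for illustration only; the channel-major array layout and the 1/(H·W) normalisation are assumptions, not details taken from reference [1] or [2]:

#include <stddef.h>

/* Gram matrix of one CNN feature map, as in Gatys-style texture modelling.
 * feat: C x H x W activations stored channel-major (feat[c*H*W + pixel]).
 * gram: output C x C matrix of channel inner products, normalised by the
 * number of spatial positions. */
void gram_matrix(const float *feat, size_t C, size_t H, size_t W, float *gram)
{
    size_t n = H * W;
    for (size_t i = 0; i < C; ++i) {
        for (size_t j = 0; j <= i; ++j) {
            float s = 0.0f;
            for (size_t p = 0; p < n; ++p)
                s += feat[i * n + p] * feat[j * n + p];
            s /= (float)n;
            gram[i * C + j] = s;   /* Gram matrices are symmetric */
            gram[j * C + i] = s;
        }
    }
}

/* Squared Frobenius distance between two C x C Gram matrices: the
 * per-layer style loss that the iterative reconstruction minimises. */
float style_loss(const float *g1, const float *g2, size_t C)
{
    float loss = 0.0f;
    for (size_t i = 0; i < C * C; ++i) {
        float d = g1[i] - g2[i];
        loss += d * d;
    }
    return loss;
}

Summing such losses over several VGG layers, together with a content loss on lower-layer features, is what drives the iterative optimisation of the white-noise image in Gatys's method.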
All of these methods extract features in the high-level feature space of a CNN (Convolutional Neural Network). The high-level feature space is an abstract representation of the image, and low-level information is easily lost; for example, the loss of edge information causes structural deformation. Reference [3] proposed that style transfer should consider both the pixel space and the feature space: on top of the loss function proposed by Gatys, the content image in pixel space is passed through a Laplacian operator ...

A Fast Single-Model Multi-Style Transfer Method
ZHU Jia-bao, ZHANG Jian-xun, CHEN Hong-ling
(College of Computer Science and Engineering, Chongqing University of Technology, Chongqing 400054, China)

Abstract: In most image style-transfer tasks, one model corresponds to only one style, which is inefficient in practical application scenarios. This paper proposes a fast single-model multi-style transfer method in which a single model adapts to arbitrary styles. A set of linear transformations converts the content features and the style features respectively, and a combined style loss function is used to reconstruct the image.
Philips The Xtra 4K Ambilight TV, 139 cm (55") Ambilight TV
Philips The Xtra 4K Ambilight TV
139 cm (55") Ambilight TV
Dolby Atmos sound
P5 picture engine
Philips Smart TV
55PML9008

Movies to get lost in. Gaming you can feel. 4K Ambilight TV.
Get all the way into the action! Whether you watch or play, the Xtra's big, bright, colourful picture and immersive Ambilight make it a real event. Add richly atmospheric Dolby Atmos sound and you've got everything you need for an epic night in.

The Xtra. 4K MiniLED Ambilight TV.
• Immerse yourself in what you love. Ambilight TV
• Big-screen brilliance. For every dramatic moment
• MiniLED. Intelligent backlight zones. Incredible contrast.
• Whatever the source, always perfection. Philips P5 engine.

Supersize without compromise.
• Cinematic vision and sound. Dolby Vision and Dolby Atmos.
• Supports all major HDR formats
• Premium design. Packed for the future.

Flexible sound. Epic gaming. Smart control.
• Philips Wireless Home System powered by DTS Play-Fi
• Epic gaming. 120 Hz, ultra-low lag, VRR, FreeSync
• Voice control. Works with Google Assistant* and Alexa*

Highlights

Ambilight TV
Ambilight TVs are the only TVs with LED lights behind the screen that react to what you watch, immersing you in a halo of colourful light. It changes everything: your TV seems bigger, and you'll be drawn deeper into your favourite programmes, films and games.

MiniLED technology
No matter what you watch, the Xtra's MiniLED technology gives you a truly impressive big-screen picture with deep blacks, pin-sharp contrast, and lifelike colours. Plus, this 4K UHD Ambilight TV is compatible with all major HDR formats, so you'll see more detail, even in dark and bright areas.

MiniLED TV
MiniLED. Intelligent backlight zones. Incredible contrast.

P5 Perfect Picture engine
The Philips P5 engine delivers a picture as brilliant as the content you love. Details have noticeably more depth. Colours are vivid, while skin tones look natural. Contrast is so crisp you'll feel every detail. Motion is perfectly smooth.

Dolby Vision and Dolby Atmos
With Dolby Vision and Dolby Atmos on board, your films, shows and games look and sound incredible. See the picture the director wanted you to see. No more disappointing scenes that are too dark to make out! Hear every word clearly. Experience sound effects like they're really happening around you.

Supports all major HDR formats
Supports all major HDR formats.

Premium design
The ultra-thin metal bezel and slim metal feet, both in anthracite grey, give this TV a strong, minimalist look. Our packaging uses FSC-certified recycled cardboard and our printed materials use recycled paper.

DTS Play-Fi
Philips Wireless Home System powered by DTS Play-Fi lets you connect to compatible soundbars and wireless speakers around your home in seconds. Listen to films in the kitchen. Play music anywhere.

Epic gaming
Play without limits and immerse yourself in the Xtra's vibrant colours! HDMI 2.1, a blazing-fast 120 Hz native refresh rate and ultra-low input lag get the best out of next-gen gaming gear with fluid, responsive gameplay, super-smooth natural motion and great-looking graphics. Ambilight's gaming mode brings even bigger thrills.

Voice control
If you want to control this TV via a voice assistant, you can. Simply pair your TV with your Google smart speaker and ask Google Assistant to control the TV and find shows and movies.
Or pair with Alexa-enabled devices and ask Alexa!

Ambilight
• Ambilight Features: Wall colour adaptive, Lounge mode, Game Mode, Ambilight Music, AmbiWakeup, AmbiSleep, Works with Philips Wireless Home Speakers, Ambilight Boot-Up Animation
• Ambilight Version: 3 sided

Picture/Display
• Diagonal screen size (inch): 55 inch
• Diagonal screen size (metric): 139 cm
• Display: 4K Ultra HD LED
• Panel resolution: 3840 x 2160
• Native refresh rate: 120 Hz
• Picture engine: P5 Perfect Picture Engine
• Picture enhancement: Perfect Natural Motion, Micro Dimming Premium, HDR10+, Dolby Vision, HLG (Hybrid Log Gamma), CalMAN Ready

Display input resolution
• Resolution-Refresh rate: 576p - 50 Hz; 640 x 480 - 60 Hz; 720p - 50 Hz, 60 Hz; 2560 x 1440 - 60 Hz, 120 Hz; 1920 x 1080p - 24 Hz, 25 Hz, 30 Hz, 50 Hz, 60 Hz, 100 Hz, 120 Hz; 3840 x 2160p - 24 Hz, 25 Hz, 30 Hz, 50 Hz, 60 Hz, 100 Hz, 120 Hz

Tuner/Reception/Transmission
• Digital TV: DVB-T/T2/T2-HD/C/S/S2
• Video Playback: PAL, SECAM
• TV Programme guide*: 8-day Electronic Programme Guide
• Signal strength indication
• Teletext: 1000-page Hypertext
• HEVC support

Smart TV
• OS: Smart TV with improved OS

Smart TV Features
• User Interaction: Screen mirroring, SimplyShare
• Interactive TV: HbbTV
• SmartTV apps*: Amazon Prime Video, Netflix, Philips store, YouTube
• Voice assistant*: Works with Alexa, Works with Google Home

Multimedia Applications
• Video Playback Formats: Containers: AVI, MKV, H264/MPEG-4 AVC, MPEG-1, MPEG-2, MPEG-4, HEVC (H.265), VP9, AV1
• Music Playback Formats: AAC, MP3, WAV, WMA (v2 up to v9.2), WMA-PRO (v9 and v10), FLAC
• Subtitles Formats Support: .SRT, .SUB, .TXT, .SMI, .ASS, .SSA
• Picture Playback Formats: JPEG, BMP, GIF, PNG, 360 photo, HEIF

Processing
• Processing Power: Dual Core

Sound
• Audio: 2.0 Channel, Output power: 40 watts (RMS)
• Speaker configuration: 10 W x 2 mid-high speaker, 10 W x 2 tweeter
• Codec: Dolby Digital MS12 V2.6.1, DTS:X
• Sound Enhancement: A.I. Sound, Clear Dialogue, Dolby Bass Enhancement, Dolby Volume Leveller, Night mode, Dolby Atmos®, A.I. EQ

Connectivity
• Number of HDMI connections: 4
• HDMI features: 4K, Audio Return Channel
• EasyLink (HDMI-CEC): Remote control pass-through, System audio control, System standby, One touch play
• Number of USBs: 2
• Wireless connection: Wi-Fi 802.11ac, 2 x 2, Dual band, Bluetooth 5.0
• Other connections: Common Interface Plus (CI+), Digital audio out (optical), Satellite Connector, Ethernet-LAN RJ-45, Headphone out, Service connector
• HDCP 2.3: Yes, on all HDMI
• HDMI ARC: Yes, on HDMI 1
• HDMI 2.1 features: eARC/VRR/ALLM supported, Max 48 Gbps data rate, eARC on HDMI 1, FreeSync Premium
• EasyLink 2.0: HDMI-CEC for Philips TV/SB, External setting via TV UI

Supported HDMI video features
• HDMI 1/2: HDMI 2.1 full bandwidth 48 Gbps, up to 4K 120 Hz
• Gaming: ALLM, HDMI VRR, AMD FreeSync Premium
• HDR: Dolby Vision, HDR10, HDR10+, HLG

EU Energy card
• EPREL registration number: 1562352
• Energy class for SDR: F
• On mode power demand for SDR: 77 kWh/1000h
• Energy class for HDR: G
• On mode power demand for HDR: 158 kWh/1000h
• Off mode power consumption: n.a.
• Networked standby mode: 2.0 W
• Panel technology used: LED LCD

Power
• Mains power: AC 220-240 V, 50/60 Hz
• Standby power consumption: less than 0.3 W
• Power Saving Features: Auto switch-off timer, Picture mute (for radio), Eco mode, Light sensor

Accessories
• Included accessories: Legal and safety brochure, Power cord, Quick start guide, Remote Control, Tabletop stand, 2 x AAA batteries

Design
• Colours of TV: Anthracite grey bezel
• Stand design: Anthracite grey offset sticks

Dimensions
• Set width: 1231.0 mm; Set height: 721.0 mm; Set depth: 82.0 mm; Product weight: 14.3 kg
• Set width (with stand): 1231.0 mm; Set height (with stand): 739.0 mm; Set depth (with stand): 254.0 mm; Product weight (+stand): 14.6 kg
• Box width: 1360.0 mm; Box height: 840.0 mm; Box depth: 160.0 mm; Weight incl. packaging: 20.2 kg
• Stand width: 735.0 mm; Stand height: 20.0 mm; Stand depth: 256.0 mm
• Wall-mount compatible: 300 x 300 mm

Issue date 2023-10-12, Version 9.9.1, EAN: 87 18863 03802 4
© 2023 Koninklijke Philips N.V. All rights reserved. Specifications are subject to change without notice. Trademarks are the property of Koninklijke Philips N.V. or their respective owners.

* This television contains lead only in certain parts or components where no technology alternatives exist in accordance with existing exemption clauses under the RoHS Directive.
* The TV supports DVB reception for 'Free to air' broadcast. Specific DVB operators may not be supported. An up-to-date list can be found in the FAQ section of the Philips support website. For some operators Conditional Access and subscription are required. Contact your operator for more information.
* EPG and actual visibility (up to 8 days) is country- and operator-dependent.
* Philips TV Remote app and related functionalities vary per TV model, operator and country, as well as smart device model and OS. For more details please visit: /TVRemoteapp.
* Netflix subscription required. Subject to terms on https://
* Rakuten TV is available in selected languages and countries.
* Amazon Prime is available in selected languages and countries.
* Amazon, Alexa and all related logos are trademarks of , Inc. or its affiliates. Amazon Alexa is available in selected languages and countries.
Cambridge International Children's English Smart Reader Package
Cambridge International Children's English Smart Reader Package is a comprehensive learning solution designed to help young learners develop their English language skills effectively. This innovative program, developed by Cambridge Assessment International Education, offers a unique and engaging approach to language acquisition, empowering children to become confident and proficient communicators in English.

At the heart of the Cambridge International Children's English Smart Reader Package lies a meticulously crafted curriculum that aligns with the global standards set by the Common European Framework of Reference for Languages (CEFR). This framework ensures that the learning objectives and content are tailored to the specific needs and abilities of children, allowing them to progress at a pace that suits their individual learning styles and abilities.

One of the key components of the program is the interactive smart reader device. This cutting-edge technology seamlessly combines the physical books with interactive digital content, creating a captivating and immersive learning experience. The smart reader device features a high-resolution touch screen that allows children to engage with the learning materials in a fun and intuitive manner.

The smart reader device is equipped with a vast collection of meticulously curated storybooks, covering a wide range of genres and themes. Each book is accompanied by engaging multimedia elements, such as animation, audio narration, and interactive activities, which help to bring the stories to life and enhance the children's comprehension and retention of the content.

Moreover, the smart reader device is designed to adapt to the individual learning needs of each child. It utilizes advanced algorithms to track the child's progress, identify their strengths and weaknesses, and provide personalized recommendations for further learning. This adaptive approach ensures that the child's learning journey is tailored to their specific requirements, allowing them to progress at a pace that is both challenging and rewarding.

One of the standout features of the Cambridge International Children's English Smart Reader Package is its emphasis on interactive learning. The smart reader device encourages children to actively engage with the content, whether it's by tapping on interactive elements, answering comprehension questions, or participating in language-based games and activities.

This interactive approach not only makes the learning process more enjoyable but also helps to reinforce the language concepts and skills being taught. By actively engaging with the content, children are more likely to retain the information and apply it in their everyday communication.

In addition to the engaging interactive features, the Cambridge International Children's English Smart Reader Package also includes a comprehensive suite of assessment tools. These tools enable parents and educators to monitor the child's progress, identify areas for improvement, and adjust the learning plan accordingly.

The assessment tools range from formative assessments, which provide ongoing feedback and guidance, to summative assessments, which measure the child's overall proficiency in the English language. The data gathered through these assessments is then used to generate detailed reports, allowing parents and educators to make informed decisions about the child's learning journey.

One of the unique aspects of the Cambridge International Children's English Smart Reader Package is its strong emphasis on cultural diversity and global awareness. The program's content includes stories and activities that expose children to different cultures, customs, and perspectives from around the world.

By introducing children to diverse cultural elements, the program helps to foster a better understanding and appreciation of the richness of our global community. This cultural awareness not only enhances the children's language skills but also nurtures their empathy, tolerance, and appreciation for diversity.

Another key feature of the Cambridge International Children's English Smart Reader Package is its seamless integration with the broader Cambridge curriculum. The program's content and learning objectives are closely aligned with the Cambridge Primary English curriculum, ensuring a smooth transition for children as they progress through their educational journey.

This alignment with the Cambridge curriculum means that children who use the Smart Reader Package will be well-prepared for the next stage of their education, whether it's in a Cambridge school or any other educational institution. The language skills and knowledge acquired through the program will provide a solid foundation for their future academic and personal success.

In conclusion, the Cambridge International Children's English Smart Reader Package is a truly remarkable and innovative learning solution that empowers young learners to develop their English language skills in a captivating and effective manner. With its interactive technology, adaptive learning approach, comprehensive assessment tools, and cultural diversity focus, this program offers a transformative learning experience that can unlock the potential of children around the world.

As a parent or educator, investing in the Cambridge International Children's English Smart Reader Package can be a game-changer, providing children with the necessary tools and support to become confident and proficient communicators in English. By embracing this cutting-edge learning solution, we can equip our children with the language skills and global awareness they need to thrive in an increasingly interconnected world.
The Exploration and Implementation of Quickly Building Paper-Cut-Style Scenes Based on Unity

MI Xing-guang, CHEN Wen-juan
(School of Animation and Digital Arts, Communication University of China, Beijing 100024, China)

Abstract: At present, Chinese paper-cut-style games seldom attempt to combine traditional paper-cutting with the technology used in modern 2D games. In order to solve this problem, this paper proposes an algorithm that can quickly and efficiently generate scenes with both the paper-cut style and the visual effects of modern 2D games. In this paper, the parallax-scrolling and screen post-processing techniques of modern 2D games are used for reference; improvements are made to these techniques that are more suitable for expressing the Chinese paper-cut art style, and the efficiency of scene building is improved through an automatic generation algorithm. Finally, by comparing the results achieved by the algorithm in this paper with other paper-cut-style games, the visual effect achieved here has a better sense of depth.

Keywords: layer method; parallax scrolling; screen post-processing; 2D depth of field; sense of depth; automatic generation
CLC number: TP399; Document code: A; Article ID: 1673-4793(2020)05-0024-08

1 Introduction
Today, the rescue and protection of intangible cultural heritage has attracted attention worldwide.
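To make the layer-method parallax scrolling named in the keywords concrete, here is a minimal C sketch. It is illustrative only: the paper's implementation runs inside Unity, and the struct layout and factor convention below are our assumptions.

/* Layer-method parallax scrolling: each background layer follows the
 * camera at a speed proportional to its depth factor, so near layers
 * sweep past quickly while distant layers barely move. This differential
 * motion is what produces the sense of depth in a flat 2D scene. */
typedef struct {
    float x;       /* current horizontal offset of the layer */
    float factor;  /* 0.0 = infinitely far (static), 1.0 = foreground */
} ParallaxLayer;

void scroll_layers(ParallaxLayer *layers, int count, float camera_dx)
{
    for (int i = 0; i < count; ++i)
        layers[i].x -= camera_dx * layers[i].factor;  /* move opposite the camera */
}

A paper-cut scene might, for example, assign factors such as 0.2, 0.5 and 0.9 to the far, middle and near cut-out layers, so that nearer layers scroll faster and the stacked silhouettes read as receding planes.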
A C Programming Method for Updating Frames at a Fixed Frame Rate

Abstract: When implementing computer animation in C, an approximate duration is often used as the delay between frames. This makes the durations of different frames unequal, so the speed of the animation cannot be controlled accurately. To solve this problem, the delay function is improved so that it can delay adaptively according to a specified frame duration. On this basis, a programming method for updating frames at a fixed frame rate is further proposed, suitable for writing animations, games and applications that need to update the display on schedule. Finally, the application of the method is demonstrated with an "English dialogue dynamic demonstration" program written in C for the text interface of a console window.

Keywords: computer animation; frame rate; delay; C language; programming method
CLC number: TP311; Document code: A; Article ID: 1009-3044(2017)01-0046-03

1 Background
Exploiting the persistence of vision of the human visual system [1], computer animation works by "draw a frame", "delay", "draw the next frame", "delay", switching images rapidly within a short time so that the viewer perceives the picture as changing continuously.
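The adaptive delay the abstract describes can be sketched as follows. This is an illustrative C program, not the article's own listing: instead of sleeping for a rough fixed duration after drawing, the loop computes the deadline of the next frame and waits until exactly that moment, so drawing time is absorbed into the frame.

#include <stdio.h>
#include <time.h>

/* Adaptive delay: busy-wait until `deadline`, so each frame ends exactly
 * on schedule no matter how long drawing took. clock() measures processor
 * time, which a busy-wait keeps roughly in step with wall time; a
 * production program could yield the CPU instead of spinning. */
static void wait_until(clock_t deadline)
{
    while (clock() < deadline)
        ;
}

int main(void)
{
    const long frame_ms = 40;                    /* 25 frames per second */
    const clock_t frame_ticks = frame_ms * CLOCKS_PER_SEC / 1000;
    clock_t next = clock() + frame_ticks;

    for (int frame = 0; frame < 250; ++frame) {  /* a 10-second animation */
        printf("\rframe %3d", frame);            /* stand-in for drawing */
        fflush(stdout);
        wait_until(next);
        next += frame_ticks;                     /* accumulate the schedule */
    }
    return 0;
}

Accumulating the deadline (next += frame_ticks) rather than delaying a fixed amount after each frame is what keeps the frame durations equal: a frame that draws slowly simply leaves less time to wait, and the animation speed stays accurately controlled.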
ACADEMIA ROMÂNĂ, ACADEMIA DE ȘTIINȚE TEHNICE
ROMANIAN ACADEMY, ACADEMY OF TECHNICAL SCIENCES
Department of Technical Sciences

The Annual Symposium of the Institute of Solid Mechanics
SISOM 2006
Bucharest, May 17-19, 2006

Organizing Committee:
Co-Chairmen: Acad. Radu Voinea, Romanian Academy; Prof. Cornel Mihai Nicolescu, KTH, Stockholm; Prof. Goran Karlsson, KTH, Stockholm
Vice-chairmen: Dr. Tudor Sireteanu, IMS; Dr. Veturia Chiroiu, IMS
Members: Prof. Charles W. Stammers, Univ. of Bath, UK; Prof. Pier Paolo Delsanto, Politecnico di Torino, Italy; Prof. Boris Rybakin, Rep. Moldova Academy; Prof. Petre P. Teodorescu, Univ. of Bucharest; Prof. Constantin Minciu, Politehnica Univ., Bucharest; Prof. Napoleon Antonescu, Univ. from Ploieşti; Prof. Florin Petrescu, Univ. of Civil Engn., Bucharest; Dr. Marcel Migdalovici, Institute of Solid Mechanics; Dr. ing. Dinu Bratosin, Institute of Solid Mechanics; Dr. ing. Lucian Căpitanu, Institute of Solid Mechanics; Dr. Aron Iarovici, Institute of Solid Mechanics; Dr. Ligia Munteanu, Institute of Solid Mechanics; Dr. ing. Wladimir Racoviţă, Institute of Solid Mechanics; Dr. ing. Luige Vlădăreanu, Institute of Solid Mechanics
Secretary: Dr. Marcel Migdalovici; Dr. eng. Cristian Rugină; Dr. Mărgărit Baubec; eng. Gabriela Vlădeanu; as. Veronica Iovănescu

Special thanks to the sponsors:
- AGIR – General Association of Romanian Engineers
- VTC – SRL (Engineering and Industrial Technologies, Engineering & Dealer – ABB)

Session 1 – System dynamics and continuum mechanics
(1D) A. STROZZI, A. BALDINI, M. GIACOPINI, S. RIVASI, R. ROSI, E. BERTOCCHI: Initial clearance effects on hoop stresses in conrod small ends
(2D) Adrian CARABINEANU: The determination of the reaction force in a certain material point of a system
(3D) Dinu BRATOSIN: Assessment of the necessary period-shift in structural passive control by using the non-linear magnification functions
(4D) O. SIMIONESCU-PANAIT: The coupling of guided plane waves in piezoelectric crystals subject to initial electromechanical fields
(5D) Liliana GRATIE: Linear shell models revisited with the stress resultants and the bending moments as the new unknowns
(6D) Petre P. TEODORESCU, Veturia CHIROIU, Ana Maria MITU: On the bending and torsion of carbon nanotubes
(7D) Polidor BRATU: Rigidity and damping dynamic characteristics in case of composite neoprene systems due to passive vibrations isolation
(8D) Mihai Valentin PREDOI, Adina NEGREA, Catalin George ION: Some considerations on the critical angle at the plane boundary between anisotropic media
(9D) Ştefan SOROHAN, Nicolae CONSTANTIN, Mircea GĂVAN, Viorel ANGHEL: Numerical extraction of dispersion curves used in Lamb wave inspections
(10D) Viorel ANGHEL, Stefan SOROHAN, Nicolae CONSTANTIN: Finite element investigation of the buckling of composite stiffened panels
(11D) Nicolae-Doru STĂNESCU, Nicolae PANDREA: On the stability of the forced damped oscillator with nonlinear cubic stiffness
(12D) Silviu MACUTA, Luminita MORARU, Ioan TARAU: The use of Taguchi method in designing experimental research
(13D) Daniela BARAN, Horia PAUN, Sebastian PETRISOR: Fluid-structure interaction, theoretical and practical aspects
(14D) Dan DUMITRIU, Daniel BALDOVIN, Tudor SIRETEANU: Numerical simulation of the vertical interaction between railway vehicle and track
(15D) Mircea FENCHEA: Optimization of hammer construction for material crushing
(16D) Ioan COSMA, Diana I. POPESCU, I. LUPEA, I.A. COSMA: Experimental method for visualization and dynamical model for study of compressional waves and vibration modes
(17D) Veturia CHIROIU: The damping capacity of the nanostructured materials from carbon nanotubes
(18D) Gabriel MURARIU, Marina Aura DARIESCU, Ciprian DARIESCU: Mass system evaluation for a Cveck space-time case
(19D) Gabriel MURARIU, Luminita MORARU: Evolution equations for a system in a Hoffmann space-time
(20D) Elena MEREUŢĂ: Mechanical structures with geometrical stability – iterative and classical models
(21D) Angela PETRESCU: A possible evaluation of geomaterials damage
(22D) Ligia MUNTEANU: On the evaluation of the Young's modulus for composites based on the auxetic materials
(23D) Ligia MUNTEANU, Oana MARIN, Valeria MOSNEGUŢU: On the nonlinear stick-slip friction. Part II: an example
(24D) Marcel MIGDALOVICI, Justin ONISORU, Emil M. VIDEA, Al. ALBRECHT: Control of vibrations of cables with viscous damping hypothesis
(25D) Amedeu ORĂNESCU, Elena MEREUŢĂ, Silvia BEJENARU, Mădălina RUS: The determination of the Cauchy simultaneous correspondence points for the structures having supraunitary mobility
(26D) Carmen TACHE, Laurentiu PREDESCU, Mirela ZAHARIA, Dumitru DUMITRU: Numerical evaluation for the time response of a vibrating system
(27D) Camelia CERBU: Water and seawater effects on the members made of e-glass composite materials
(28D) Silviu NASTAC: Computational dynamics of the POSEID systems
(29D) Silviu NASTAC: Experimental dynamics of the POSEID systems
(30D) Alexandru VLADEANU, Gabriela VLADEANU: Contributions regarding the monitoring of the concrete preparation process in concrete mixing plants
(31D) Stefania DONESCU: Some aspects of the existence of Coulomb vibrations in a composite bar
(32D) Adrian Ioan NICULESCU: Specific conditions at stroke-damping force benches, used for self-adjustable shock absorbers. Practical solutions for damping chart benches modification
(33D) Petre STIUCA: Applications of CEFIT and LISA approaches in propagation of waves along pipelines
(34D) Grigore-Liviu ODOBESCU: The efficiency analysis in function of working conditions in case of high power ultrasonic systems using piezoelectric transducers which are series compensated
(35D) Rareş Alexandru ODOBESCU, Grigore-Liviu ODOBESCU: The experimental study about series compensation schemes in case of high power piezoelectric transducers
(36D) Radu Dan RUGESCU: Extended Bolza method for discontinuous integrands
(37D) Rodica IOAN, Stefan IOAN: On the asymmetrical strip rolling
(38D) Valerica MOSNEGUTU: On the friction in planar motions
(39D) Vasile MARINA: reserved title
(40D) Cristian FACIU, Mihaela MIHAILESCU-SULICIU: On the dynamics of strings made of shape memory alloys
(41D) Polidor BRATU, Silviu NASTAC: Isolation performances of the polygonal shape elastic isolation device
(42D) Mihai Valentin PREDOI, Cristian PETRE: A simple model for Lamb waves generation using a piezoelectric wafer transducer
(43D) Camelia CERBU, Călin ITU: Analysis of the rigidity in case of the rear plate of a motorboat hull made of e-glass fibres / polymeric resins
(44D) Ion PREDOIU, Florin FRUNZULICĂ: Stress field in thin-walled aircraft structures. Experimental research
(45D) Dan Ioan OLTEAN, Dana Luca MOTOC: Electric properties assessments of particle reinforced polymer composite materials subjected to constant applied loads
(46D) Dana Luca MOTOC: Investigation vs. finite element simulation of particle reinforced polymer composite materials
(47D) Anton SOLOI, Stefan AMADO: Analytical study of orthotropic rectangular plate
(48D) Cristina POPA, Lucian GEORGESCU: Theoretical study of Lamb wave dispersion in aluminum/polymer bilayers
(49D) R.C. PICU, Monica SOARE: Characterization of the dislocation core fields using conservation laws in elastostatics
(50D) Ion PREDOIU, Marius STOIA-DJESKA, Florin FRUNZULICĂ, Ionel POPESCU, Paul SILIŞTEANU: Aeroelastic models and the synthesis of controllers for flutter suppression
(51D) Iuliana OPREA: Zig-zag chaos – a new spatiotemporal pattern in nonlinear dynamics
(52D) Daniel BALDOVIN, Ana-Maria MITU, Tudor SIRETEANU: Experimental analysis of shock absorbing properties for playground surfacing
(53D) Margarit BAUBEC: A uniqueness result in theories of plates with moderate thickness
(54D) Justin ONISORU, Aron IAROVICI, Lucian CAPITANU: Kinematics and contact in total knee prostheses during routine activities
(55D) Cornel BALAS, Tudor SIRETEANU: A study of a semiactive control system using a single parameter
(56D) Viorica BOGDANESCU, Cornel BALAS: Low-frequencies longitudinal vibrations in urban persons transportation
(57D) Emil VIDEA, Cornel BALAS: A study of the dynamical behavior of a mechanical structure equipped with a dry friction damper
(58D) Cristian RUGINA: The use of an Adaline neural network in defect identification
(59D) Gloria COSOVICI, Dan Sorin COMŞA, Dorel BANABIC: Evaluation of the performances of the different yield criteria by using the deep drawing test
(60D) Liana PĂRĂIANU, Dorel BANABIC: Predictive accuracy of different yield criteria

Session 2 – Robotics, mechatronics, biomechanics, wear investigations of materials
(1R) Nicolae POP: Contact boundary conditions and friction laws in mathematical modeling of contact problems
(2R) Cornel BRISAN: Considerations concerning calibration of parallel reconfigurable robots
(3R) Calin RUSU, Cornel BRISAN: Considerations concerning training of the human muscles using robotic methods
(4R) Ana DONIGA, Silviu MĂCUTĂ, Istrate MILTIADE, Vasilescu ELISABETA: Research on the influence of rolling parameters on the properties of bimetallic strips
(5R) Vasile ZAMFIR, Horia Marius VÎRGOLICI: The transmission angle – a criterion for the approximate synthesis of the function generating four-bar mechanism with two cranks
(6R) Niculae MANAFI: The analytical method in kinematics of plane mechanisms modeling and animation
(7R) A. FILIPESCU, Silviu MACUTA: Robotic manipulators control by adaptive gain smooth sliding observer-controller and parameter identification
(8R) Luminita MORARU, Silviu MACUTA: Acoustical degassing of molten aluminium
(9R) Ionel GHERCĂ, Vasile MERTICARU, Eugen MERTICARU, Dana CIOBANU, Silviu MACUTA: Mixer with shock wave
(10R) Radu BĂLAN, Vistrian MĂTIEŞ, Sergiu STAN: A control approach of a 3-RRR planar parallel minirobot
(11R) Radu BALAN, Vistrian MATIES, Sergiu STAN, Ciprian LAPUSAN: Some applications for nonlinear processes of a model predictive control algorithm
(12R) Mircea IGNAT, George ZARNESCU, Sebastian SOLTAN, Victor STOICA: Electromechanical microdrives for robotics using piezoelectric microactuators and micromotors
(13R) Ionel GHERCĂ, Vasile MERTICARU, Eugen MERTICARU, Dana CIOBANU: Directions to improve the construction of mixers
(14R) George BALAN, Alexandru EPUREANU, Viorel VACARUS: The monitoring of a lathe using an artificial neural network – 3rd part (the experimental setup)
(15R) George BALAN: The monitoring of a lathe using an artificial neural network – 4th part (experimental results, data processing)
(16R) Adrian Catalin DRUMEANU, Ioan TUDOR: Experimental determinations concerning the overstrain of the bits bearing sealing "O" rings
(17R) Daniela GHELASE, Luiza DASCHIEVICI, Igor GORLACH: Considerations and contributions regarding the rigidity of the gearing tooth
(18R) Silvia BEJENARU: Restriction of the Lagrange mathematical models for the differential monitored gearings
(19R) Mădălina RUS: The movement's computer simulation of a series of gears having fixed axes with an imposed kinematics after a parabolic law
(20R) Liviu POPA, Diana-Flavia POPA: Contributions to the optimization of passive rotary dampers
(21R) Ion NAE, Marius Gabriel PETRESCU: The particularities of the decisional technological process
(22R) Marius Gabriel PETRESCU, Ion NAE: Programming method of the maintenance activities of the process equipment
(23R) Luiza DASCHIEVICI, Daniela GHELASE, Cristian SIMIONESCU: Experimental results obtained in researching non-coulombian friction
(24R) Boris CONONOVICI, Wladimir RACOVIŢĂ: Particularities of robotic manipulation in micro/nano scale
(25R) L. VLADAREANU, R. MUNTEANU, S. CONONOVICI, V. CHIROIU, ZAR, I. ILIUC: Studies and research of open real-time control systems for nano and micromanipulators working in a cooperative regime
(26R) Radu BALAN, Vistrian MATIES, Olimpiu HANCU, Ciprian LAPUSAN: A nonlinear control algorithm – six applications
(27R) Călin ITU: Design solution in order to increase the energetic and ecology parameters of the engine with internal combustion
(28R) Călin ITU, Camelia CERBU: Structural behaviour evaluation of the piston compressor con-rod based on dynamic stress simulation
(29R) Ion NITU, Cornel SECARA: The kinematics of a redundant planar manipulator with a branched structure
(30R) Ion CRUDU, Alexandru IVĂNESCU, Ioan ŞTEFANESCU: Tribosystems for integrated manufacturing processes in metallurgical industry
(31R) G. IONESCU, O.N. IONESCU: Improvements in control of thermal spray coating
(32R) Nicolae LOBONTIU: Resonant nano/micro mechanical systems
(33R) Nicolae LOBONTIU, Simona NOVEANU, Dan MANDRU: Monolithic hinges and compliant mechanisms: applications to nano/micro systems
(34R) Alexandru NASTASE: A model of the flexible gearing in piezoelectric motors
(35R) Stefan AMADO, Anton SOLOI: A FEM analysis of a mass measurement device by tensometry
(36R) Dorin BADOIU: Research concerning the dynamics of a manipulator robot with elastic links
(37R) Ivan ILIUC, Constantin TIGANESTEANU: Micropitting wear of steel ball sliding against TiN coated steel plate in lubricated conditions – a comparative study
Making a Personalized Sofa Cover

Have you ever looked at your living room and thought about how you could make it more unique and personal? One way to add a touch of individuality to your space is by creating personalized sofa covers. Not only can this project be a fun and creative endeavor, but it also allows you to showcase your personality and style in your home decor.

To start off, gather the materials you will need for this DIY project. Depending on your preferences, you can choose from a variety of fabrics such as cotton, linen, velvet, or even a combination of different textures. Select a fabric that not only complements the existing color scheme of your living room but also reflects your personal taste.

Next, measure your sofa carefully to ensure that the fabric will fit snugly over the cushions and frame. Remember to account for any seams or hem allowances when cutting the fabric. You can choose to create a simple slipcover that drapes over the sofa or a more fitted cover that can be secured in place with zippers or buttons.

Once you have the fabric cut to size, it's time to get creative with embellishments and details. Consider adding decorative elements such as piping, tassels, or embroidery to make your sofa cover truly unique. You can also experiment with different patterns, prints, and colors to match your style preferences.

As you sew the pieces together, pay close attention to the finishing touches to ensure a polished look. Take your time to sew straight and even seams, and make sure to reinforce stress points such as corners and edges. Don't forget to press the fabric as you go along to create crisp, clean lines.

Finally, once your personalized sofa cover is complete, try it on your sofa to see how it fits and looks in your living room. Admire your handiwork and take pride in knowing that you have created a one-of-a-kind piece that reflects your personality and adds a touch of style to your home.

In conclusion, crafting a personalized sofa cover is not only a creative project but also a way to infuse your living space with your unique identity. By selecting the right fabric, adding personal touches, and paying attention to the details, you can create a stunning piece that not only protects your sofa but also serves as a statement piece in your home decor. So why not embark on this DIY journey and transform your living room with a custom-made sofa cover that is truly one-of-a-kind?
Illustrator Certification Exam Questions - 3

1. Which of the following statements about Illustrator is incorrect?
- Illustrator is a large graphic-design application developed by Adobe Systems
- Illustrator is a vector drawing program, and the graphics it draws are unaffected by resolution
- Illustrator can open Photoshop-format files directly and can also save in Photoshop format
- Illustrator can quickly and precisely produce colour and black-and-white graphics

2. While a filter command is executing, the shortcut key to cancel the operation midway is:
- Shift
- Esc (Windows) / Command+. (Mac OS)
- Alt (Windows) / Option (Mac OS)
- Return

3. The Edit > Preferences submenu contains many settings that define Illustrator's working environment. Which of the following is not a Preferences setting?
- Smart Guides
- Type & Auto Tracing
- Units & Undo
- Document Setup

4. Which of the following statements about the Adobe Illustrator CS interface is correct?
- After starting Adobe Illustrator CS, the software automatically creates a new A4-size file in RGB colour mode
- When creating a new file, only the RGB and CMYK colour modes can be set in the New Document dialog
- When creating a new file, the size can be set freely in the New Document dialog, and the number and unit can be entered together, e.g. 12cm
- If a tool icon in the toolbox has a small black triangle at its lower-right corner, hidden tools lie beneath it

5. When setting colours in a graphics file, you should go by:
- the monitor
- feel
- the colour values
- the proof

6. In Illustrator, the main function of the Appearance panel is:
- displaying objects
- viewing an object's appearance attributes
- adjusting the front-to-back order of appearance attributes
- copying and deleting an object's appearance attributes

7. As shown in the figure: when drawing the flower, after a single petal is drawn it is rotate-copied with the Rotate Tool. The shortcut key used is:
- Ctrl+D
- Ctrl+J
- Ctrl+F
- Ctrl+V

8. After executing Object > Lock on a graphic, the operation that can still be performed is:
- the stroke colour can be changed
- the fill colour can be changed
- no operation can be performed
- the locked graphic can be deleted in the Layers panel

9. As shown in the figure: what does the link lock in an opacity mask do?
- separates the opacity mask from the object
- moves the opacity mask and the object together
- gives the opacity mask and the object a transparency effect
- applies the opacity mask

10. Which of the following statements about the use of the Rectangle, Ellipse and Rounded Rectangle tools are correct?
- When drawing a rectangle, with the starting point at the lower right, dragging the mouse toward the upper left draws a rectangle
- Pressing Shift while using the tool draws the rectangle, ellipse or rounded rectangle centred on the click point
- When drawing a rounded rectangle, if the two ends of the rectangle should be symmetric semicircles, set the corner radius in the Rounded Rectangle dialog to more than half the height
- To display a graphic's centre point, first make sure the graphic is selected, then click the Show Center button in the Attributes panel

11. As shown in the figure: the original graphic is a closed path with a green fill and a drop-shadow effect (see figure A). Execute Object > Path > Split Into Grid with both the number of rows and the number of columns set to 2, leaving the other values at their defaults.
The Latest Cloth System: nCloth

nCloth at a Glance

Observe the effect and work interactively. nCloth supports polygon meshes; it may not support NURBS.

Some related menus (see the figure below):
- nMesh: create nCloth
- nConstraint: create constraints
- nCache: create caches
- nSolver: solver operations
- Fields: a collection of fields

One thing to note before creating nCloth: first delete the model's construction history and freeze its transformations. Next, select the flag and click nMesh > Create nCloth.

Tips: Clicking Display Input Mesh displays the object's own shape node, but the cloth effect is lost. Clicking Display Current Mesh switches back to the cloth node.

Tips: Nucleus is the core of nCloth. Conceptually it works like an engine. The parameters in Nucleus are all expressed in metres.

How do you make a cloth object ignore the default gravity and wind? In the Dynamic Properties section of the nClothShape node, check Ignore Solver Gravity and Ignore Solver Wind. Checking these two options makes the cloth ignore the default gravity and wind, as shown in the figure below.

Nucleus's Space Scale: dragging the Space Scale value in the Scale Attributes section of the Nucleus node has a large influence on the final simulation. Its value is related to the size of the model: if your model is built in centimetres, set the value to 0.01; if the model is in decimetres, set it to 0.1; and so on, as shown in the figure below.

When you keyframe the object that the cloth follows, you can temporarily disable the cloth effect to save resources, and look at the cloth again once the animation has been keyed.
MAYA Certification Exam Questions

1. Which statement about Maya Unlimited and Maya Complete is correct?
Answer: Maya Unlimited includes all the features of Maya Complete.

2. The View > Bookmarks command in the panel menu is used to:
Answer: Create or edit view bookmarks.

3. Which description of the Undo command in Maya is incorrect?
Answer: The Undo command can undo the last dolly or track operation performed on a view.

4. To rotate (tumble) the view in Maya, which of the following operations is correct?
Answer: Alt + left mouse button.

5. Which description of hiding an object is incorrect?
Answer: Changing the Visibility attribute of the selected object to 1 in the Channel Box.

6. As shown in the figure, which description of the Grid display options is incorrect?
Answer: The extent of the Grid cannot be changed.

7. Which description of Level of Detail is incorrect?
Answer: Maya automatically generates transition models when converting between models at different levels.

8. Which command in the panel's Lighting menu turns object a in figure 2hx01_008 into the effect shown on object b?
Answer: Use No Lights.

9. Which description of the Dynamics module is incorrect?
Answer: The Dynamics module contains the nCloth menu.

10. Referring to the figure, which command in the View menu brings an object outside the picture to the centre of the view?
Answer: Look at Selection.

11. Which description of deleting an Image Plane node is incorrect?
Answer: Finding the Image Plane node in the UV Texture Editor and deleting it.

12. Which of the following objects is not a NURBS object? (Options A, B, C, D)
Answer: A.

13. When creating NURBS primitives, which of the following objects cannot be created?
Answer: Helix.

14. Which statement about curve degree (Degree) is correct?
Answer: The degree of a curve controls the curve's shape.

15. Which statement about the Create > Arc Tools command menu is correct?
Answer: The Three Point Circular Arc tool can create an arc perpendicular to an orthographic view, but cannot create a complete circle.

16. Which statement about drawing curves with the Add Point Tool is correct?
Answer: The Add Point Tool adds points at the end of the selected curve.
Cloth Animation with Adaptively Refined Meshes

Ling Li and Vasily Volkov
Department of Computing, Curtin University of Technology
GPO Box U1987, Perth, WA 6845
{ling|vasily}@.au

Abstract
Cloth animation is a very expensive process in terms of computational cost, due to the flexible nature of cloth objects. Since wrinkles and smooth areas commonly co-exist in cloth, it is tempting to reduce computational cost by avoiding redundant tessellation in the smooth areas. In this paper we present a method for dynamic adaptation of triangular meshes suitable for cloth simulation. A bottom-up approach is used for mesh refinement, which does not require precomputation and storage of a multiresolution hierarchy. The hierarchy is constructed at runtime and allows the refinement to be reverted locally. Local mesh refinement and simplification are triggered by a curvature-induced criterion, where the curvature is estimated using methods of discrete differential geometry. The results presented are the realistic animation of a garment worn by a walking mannequin, generated with a Baraff-Witkin type cloth solver enhanced with the mesh adaptation scheme.

Keywords: Cloth Animation, Refinement and Simplification, Adaptive Mesh

1. Introduction
Cloth animation has received intensive attention in computer graphics in the last twenty years. Significant progress has been made in realistic cloth animation; still, it remains a computationally demanding task, and it is naturally desirable to improve the performance and efficiency of cloth animation systems. Such improvement is often achieved at the cost of realism, using one or another simplification of the physical model or the integration method. In this paper an approach is proposed to improve the efficiency of a cloth simulation system without degrading the simulation realism. It can be directly applied in the most elaborate cloth simulation techniques as an additional component.

Copyright © 2005, Australian Computer Society, Inc. This paper appeared at the 28th Australasian Computer Science Conference, The University of Newcastle, Australia. Conferences in Research and Practice in Information Technology, Vol. 38. V. Estivill-Castro, Ed. Reproduction for academic, not-for-profit purposes permitted provided this text is included.

The computational cost of cloth simulation directly depends on the mesh resolution, which determines the fineness of the cloth details to be captured, i.e. wrinkles. The majority of existing cloth simulation methods rely on uniform resolution meshes, though geometric details are not at all distributed uniformly across a garment, as shown in Fig. 1. Distributing mesh nodes over the cloth surface according to the local detail level could significantly reduce computational cost.

Figure 1. Curvature (as represented by different colours in (b)) varies notably across a garment.

The approach of using adaptive meshes to improve the performance of cloth simulation has attracted the attention of researchers for more than a decade. Still, all existing adaptive approaches (Hutchinson and Hewitt 1996, Thingvold and Cohen 1987, Villard and Borouchaki 2002) are severely limited compared with the state of the art in non-adaptive simulation (Baraff and Witkin 1998, Choi and Ko 2002, Volino et al. 1995). The limitations include: explicit integration, regular grids, spring-mass physics, and application in only the simplest simulations, such as the draping of a tablecloth. Besides, all the existing adaptive mesh algorithms include only refinement, not simplification, and their refinement criteria are simply tied to angle in an ad hoc manner (Hutchinson and Hewitt 1996, Villard and Borouchaki 2002). Still, significant improvement in system performance was reported (Hutchinson and Hewitt 1996).

Adaptive mesh schemes have been used commonly in the simulation of 3D deformable objects (Debunne et al. 2001, Wu et al. 2001). Notably, mesh refinement algorithms are well developed in the view-dependent visualization area, e.g. terrain visualization (Duchaineau et al. 1997, Lindstrom and Pascucci 2001, Hoppe 1998), where high performance methods capable of maintaining continuous and intensively re-adapted triangular meshes are essential. Wu et al. were probably the first to apply these techniques to the field of deformable object simulation.

In this paper an algorithm is reported that introduces adaptive meshes into the most elaborate cloth simulation models based on irregular triangular meshes (Baraff and Witkin 1998, Eischen et al. 1996, Etzmuß et al. 2003, Volino et al. 1995). Irregular meshes are advantageous in cloth simulation, since they impose less restriction on the mesh boundaries. Our contribution is twofold. Firstly, a high performance method for mesh adaptation is presented. Given a coarse irregular triangular mesh as input, √3-refinement is used to locally adapt mesh resolution following a refinement criterion. The generated semi-regular mesh can be directly used in a standard triangle-based cloth simulation system. A history of refinement operations is maintained in a hierarchic structure to allow reversing the refinement locally. Secondly, a more systematic approach is used to derive the refinement criterion. The measure of adequacy of the current local resolution against the local detail level is related to local curvature. Methods of discrete differential geometry are used to evaluate the mean curvature over the mesh. The mixed finite element / finite volume derivations of curvature estimation by Meyer et al. 2003 are extended to the case of a triangular mesh with boundary.

The proposed adaptive mesh scheme is tested in the most typical, though challenging, cloth simulation setting: animation of a garment worn by a walking mannequin. It is the first time that adaptive cloth simulation is employed in such a complicated scenario, with folding and unfolding of complex wrinkle patterns. The simulation results demonstrate high realism at reduced computational cost.

The rest of the paper is organized as follows. Section 2 reviews issues in cloth simulation, mesh adaptation and refinement criteria. The mesh adaptation algorithm is explained in Section 3. Section 4 discusses the refinement criterion and the evaluation of mean curvature on a triangular mesh with boundary. Results are discussed in Section 5, while Section 6 concludes this paper.

2. Related Work
2.1 Cloth Simulation
Appropriate equations of continuum mechanics have been used for cloth simulation, first in variational form, then reduced to PDEs (Terzopoulos et al. 1987) and then spatially discretized into ODEs. Classical discretization methods are finite differences (Terzopoulos et al. 1987) and finite elements (Eischen et al. 1996, Etzmuß et al. 2003). In practice, ad hoc discretization methods have gained popularity (Baraff and Witkin 1998, DeRose et al. 1998, Provot 1995, Volino et al. 1995). They are not strictly and consistently derived from continuous equations; instead they are stated directly in discrete form. The popular spring-mass networks are reminiscent of finite difference methods: "stretch" springs connecting adjacent points form a five-point stencil for the discrete approximation of the Laplacian, and "flexion" springs form the wider stencil for the fourth derivative approximation. The spring-mass method inherits the restrictions of finite differences, i.e., plausible results are only provided for regular grids. The Baraff-Witkin approach (Baraff and Witkin 1998) is based on irregular triangular meshes and is reminiscent of the finite element approach. Notable issues in the physical models include preventing the "super-elasticity" effect of the linear elastic model (Provot 1995), buckling, etc. (Choi and Ko 2002, Eischen et al. 1996, Feynman 1986).

The resulting ODEs are stiff and it has become standard to solve them using an implicit method, which was first used by Terzopoulos et al. 1987, but became widespread only after the work by Baraff and Witkin 1998. Among recent contributions are precomputing the implicit Euler matrix inverse (Desbrun et al. 1997) and Gear's method (or BDF) (Choi and Ko 2002).
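Why implicit integration tolerates stiff cloth ODEs can be seen in the smallest possible case, a single damped spring. The following C sketch is our illustration, not code from any of the solvers cited above, which assemble and solve large sparse linear systems rather than a scalar one:

/* One backward (implicit) Euler step for a damped spring with a single
 * degree of freedom:
 *     m*v' = -k*x - c*v,   x' = v,
 * with the force evaluated at the END of the step. Substituting
 * x_new = x + h*v_new yields the scalar linear system
 *     (m + h*c + h*h*k) * v_new = m*v - h*k*x.
 * Here the "system" is one division; in a full cloth solver the same
 * construction gives a sparse system solved, e.g., by conjugate
 * gradients. The step remains stable for arbitrarily stiff k. */
void implicit_spring_step(double *x, double *v,
                          double m, double k, double c, double h)
{
    double v_new = (m * (*v) - h * k * (*x)) / (m + h * c + h * h * k);
    *v = v_new;
    *x += h * v_new;
}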
2.2 Mesh Adaptation
Two important considerations for adaptive algorithms are: good mesh quality in terms of triangle aspect ratio, and the ability to locally reverse refinement. Mesh adaptation methods can be categorized into those dealing with regular, irregular and semi-regular (i.e. regularly subdivided irregular) meshes.

Regular meshes are the simplest but the most restrictive solution. An adaptive regular mesh is a collection of regular meshes of different resolution joined together. Topological restrictions produce cracks on the interface between different resolutions – the so-called T-vertices. Another problem is the poor approximation of the domain boundary if it is not rectangular. This approach was used in the previous attempts at adaptive cloth simulation (Hutchinson and Hewitt 1996, Villard and Borouchaki 2002).

Adaptive irregular meshes are less restrictive, and they produce continuous meshes. They are constructed in a top-down approach, precomputing the multiresolution hierarchy via simplification of the finest mesh down to the coarsest state. The precomputed hierarchy requires considerable space for storage. Examples are progressive meshes (Hoppe 1998, Xia and Varshney 1996) and Dobkin-Kirkpatrick meshes (De Berg and Dobrindt 1998, Lee et al. 1998). Progressive meshes are not designed to produce meshes of good quality and they usually do not, though applications to deformable object simulation exist (Wu et al. 2001). Dobkin-Kirkpatrick meshes are Delaunay optimal (De Berg and Dobrindt 1998). The hierarchy in their case is a number of uniform resolution irregular meshes, which are combined at runtime to get the adapted mesh. Debunne et al. (2001) relied on the same idea, but managed to simulate deformable models even without combining the meshes into a single conforming mesh.

Adaptive semi-regular meshes enjoy the simplicity of regular meshes with the robustness of their irregular counterpart. The hierarchy is constructed when necessary in a bottom-up fashion using the refinement rules, hence no precomputation and extensive storage is required. The generated meshes have good mesh quality, provided that the coarsest mesh is good enough. Classical variations include red-green refinement based on 1-to-4 split (Azuma et al. 2003, Bank et al. 1983, Wood et al. 2000) and bintree meshes based on 1-to-2 split (Duchaineau et al. 1997, Velho and Zorin 2001). The coarsest resolution of a bintree mesh is required to consist of pairs of right triangles sharing a hypotenuse, though every triangular mesh can be converted into a bintree mesh by doubling the triangle count (Velho and Zorin 2001). Volkov and Li (2003) describe a general method which can be used with a variety of regular refinement rules, including √3-subdivision (Kobbelt 2000), which is the slowest in terms of resolution change per refinement pass. Moreover, the √3-split is more local than, for example, the next slowest 1-to-4 split – sharp resolution gradients are possible without introducing excessively slivery triangles (Kobbelt 2000). Allez et al. (2003) proposed a hierarchy-less approach to reversible √3-refinement. Naturally, their linear history of refinement operations allows working only in FILO style (first-in last-out). For example, to simplify the first refined triangle, one must simplify all the others and then refine them all again except the first.

A totally different approach to adaptation is to refine the finite element functional space instead of the mesh (Grinspun et al. 2002).

3. Adaptive Mesh
In order to enable the reverting of refinement operations, a history of refinement is maintained. In order to revert in an arbitrary spatial order, the history is stored in a hierarchical fashion. The hierarchy is the core structure for mesh adaptation. In our method, all local refinement and simplification operations affect the hierarchy first, and then the hierarchy is converted into a conforming triangular mesh. The hierarchy update is temporally coherent, i.e. small changes in the geometrical shape of the cloth result in few hierarchy adjustment operations. Export of the hierarchy to a conforming mesh is not temporally coherent, but it is simple and computationally inexpensive.

3.1 Hierarchy Data Structures
The hierarchy nodes are understood as the triangles arising in the refinement process. The root nodes are given as input to the algorithm and form the coarsest triangulation. All other nodes are constructed at runtime using the procedural refinement rule described in 3.2. Nodes of depth i compose the i-th resolution triangulation M_i, as shown in Fig. 2. M_i will be referred to in this paper as the i-th resolution layer; M_0 is the coarsest layer. Parent-child links are defined between triangles at different layers, as specified by the refinement rule.

Since all higher resolution layers can be reconstructed from the coarsest layer at any moment using the refinement rule, it is not necessary to store all of them permanently, which would require abundant space. Instead, only the required parts of the hierarchy and the associated vertices are stored, as dictated by the refinement criterion.

In order to make the reconstruction and deconstruction of the hierarchy efficient, robust data structures are used to store the vertices and triangles of the hierarchy. The standard memory allocation approach is employed, taking into account that the stored elements are small and of the same type (hierarchy triangles in one container and vertices in another). Elements are stored in an array, where some of its cells may not be occupied. An additional array is used to list the unoccupied cells. Insertion/removal operations pop or push cell indices from/to this list. In this way, insertion and removal do not invalidate references by index into the main array, and still work in O(1) time. When there is no free cell and a new element is to be inserted, the arrays are resized by 100% to make such memory reallocations rare. In practice only 30~70% of the allocated memory is wasted. A couple of such dynamic arrays are used: one for the vertices, and one for each resolution layer.
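The free-list container just described might look as follows in C. This is our reconstruction for illustration; the element type below is a stand-in, since the authors' actual record layout is not given in the paper:

#include <stdlib.h>

typedef struct { float x, y, z; } Element;  /* stand-in for a vertex/triangle record */

/* Dynamic array with a free list: removal pushes the cell index, insertion
 * pops one, so both are O(1) and the indices of surviving elements never
 * change -- exactly what the hierarchy's index-based references need. */
typedef struct {
    Element *cells;
    int     *free_list;   /* indices of unoccupied cells */
    int      capacity;
    int      free_count;
} Pool;

void pool_init(Pool *p, int capacity)
{
    p->cells      = malloc(capacity * sizeof *p->cells);
    p->free_list  = malloc(capacity * sizeof *p->free_list);
    p->capacity   = capacity;
    p->free_count = 0;
    for (int i = capacity - 1; i >= 0; --i)   /* all cells start free */
        p->free_list[p->free_count++] = i;
}

int pool_insert(Pool *p, Element e)
{
    if (p->free_count == 0) {                 /* grow by 100%, rarely */
        int old = p->capacity;
        p->capacity *= 2;
        p->cells     = realloc(p->cells, p->capacity * sizeof *p->cells);
        p->free_list = realloc(p->free_list, p->capacity * sizeof *p->free_list);
        for (int i = p->capacity - 1; i >= old; --i)
            p->free_list[p->free_count++] = i;
    }
    int idx = p->free_list[--p->free_count];  /* pop an unoccupied cell */
    p->cells[idx] = e;
    return idx;                               /* stable reference */
}

void pool_remove(Pool *p, int idx)
{
    p->free_list[p->free_count++] = idx;      /* push the cell back */
}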
In practice, 30–70% of the allocated memory goes unused. Several such dynamic arrays are used: one for the vertices and one for each resolution layer.

3.2 √3-refinement rule

The refinement rule is a procedure for reconstructing resolution layer Mi+1 from Mi. In the √3-refinement rule, new vertices are inserted at the face centers. The face centers of two neighboring triangles, together with one of the two vertices they share, produce a higher-resolution triangle. The triangles incident to the child vertex of a triangle T ∈ Mi are considered children of T, as shown in Fig. 2. Note that each generated child triangle has two parents; moreover, neighboring non-root triangles always have one common parent. The refinement rule procedurally generates child vertices, child triangles, and the neighborhood links among them, which are needed for further, finer refinement.

Fig. 2. Construction of the finer resolution layer (layer i+1, from layer i) with the √3-refinement rule.

Whenever a triangle T is refined, it is first ensured that all its neighbors exist. If any do not, they are created by refining their parents at the coarser level; this forced refinement may recursively invoke refinement at even coarser levels. When the triangle in question lies on the boundary, the neighboring triangle cannot exist; this special case is discussed in 3.3.

When the criterion indicates that a triangle should be refined, all its children have to be created. The most recent criterion value is stored for each triangle and is updated whenever refinement or simplification is performed. When the criterion flips to negative, the redundant children are removed, i.e., those children that do not have another parent with a positive criterion. It may happen that some of the children to be removed have children in turn; in this case, the simplification is skipped.

The set of criterion decisions for the triangles of Mi completely determines which part of Mi+1 is reconstructed, as shown in Fig. 3.

Fig. 3. The refinement state of the triangles (marked with color) uniquely defines which part of the higher resolution (dashed) is reconstructed.

3.3 Domain boundary

Consider a boundary triangle at layer Mi. When a new vertex cannot be inserted at the neighbor's center because the neighbor is absent, it is inserted on the edge so as to produce two right-angled child triangles, as shown in Fig. 4. This is better than inserting it at the edge midpoint, which produces an obtuse triangle whenever the mesh is not perfectly regular. In either case, these triangles have poorer aspect ratios than the regular children. To avoid a further decrease in mesh quality, the boundary at Mi+2 is constructed to simulate a 1-to-9 split of Mi, just as in Kobbelt (2000).

Fig. 4. Boundary triangles on even resolution layers are constructed in an alternative fashion.

3.4 Extracting the conforming mesh

Once the hierarchy is updated, the conforming triangulation is built; it is based on the same vertices as used in the resolution layers. The conforming mesh is constructed in a direct and strict manner similar to red-green refinement. Hierarchy triangles with no children, and hence no finer representation, contribute directly. Those with the full set of children have a finer representation at the layers above and do not contribute. The remaining triangles lie at the interface between resolutions and are triangulated conformingly. This triangulation is as simple and strict as the refinement rule, as shown in Fig. 5.
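To make the classification concrete, the following Python sketch mirrors the three cases above. The triangle type, its children list, its full_child_count field, and the stitch routine are hypothetical stand-ins; the actual interface triangulation follows the fixed pattern of Fig. 5.

    def extract_conforming(layers):
        """Build the conforming mesh from the hierarchy; layers[i] holds
        the triangles of resolution layer Mi."""
        mesh = []
        for layer in layers:
            for tri in layer:
                if not tri.children:
                    # Leaf: no finer representation, contributes directly.
                    mesh.append(tri)
                elif len(tri.children) == tri.full_child_count:
                    # Fully refined: represented at the finer layer above,
                    # does not contribute.
                    pass
                else:
                    # Interface between resolutions: triangulate conformingly.
                    mesh.extend(stitch(tri))
        return mesh

    def stitch(tri):
        """Placeholder for the fixed interface-triangulation pattern
        of Fig. 5 (omitted here)."""
        return []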
In practice, the triangle count of the resulting conforming mesh is about 25% below the total number of triangles in the hierarchy.

Fig. 5. Conforming mesh for the two hierarchy layers of Fig. 3. The colored triangles are the contribution of the coarser layer; the white regions are recursively processed on the next layer.

For the conforming triangulation, neighborhood links and the 1-ring triangles of each vertex are extracted. There is no need to store the entire list of 1-ring triangles for each vertex: only one of them is stored, and the rest are obtained by walking around the vertex using the neighborhood links.

3.5 Physical properties

A cloth mesh carries more attributes than geometry alone; which attributes depends on the physical model used. Here we discuss the properties that must be assigned to a mesh used with the Baraff-Witkin model (1998). Being FEM-like, it has attributes common to all solvers based on irregular meshes.

Compared to an ordinary mesh, a cloth mesh is enhanced with velocity vectors, material coordinates, and masses. When a vertex is inserted at the center of triangle (i, j, k), it is assigned the velocity of the triangle center, (vi + vj + vk)/3, and the material coordinates of the center; boundary cases, where the vertex is inserted on an edge, are treated analogously (see the sketch at the end of this subsection).

Each vertex is assigned the mass of the associated surface patch – the Voronoi cell of the vertex. The formula for the area of the Voronoi cell can be found in Meyer et al. (2003); the area is scaled by the cloth density to give the mass. Note that the cloth density is given in material coordinates, which are therefore the appropriate coordinates for calculating the Voronoi cell area; the density in world coordinates changes with stretch, though insignificantly.

All other properties, such as stiffness, are not mesh-specific and are independent of the resolution. Physical models for elastic objects usually rely on real mechanical quantities, such as Young's modulus or the Poisson ratio. Though the Baraff-Witkin model does not use such quantities, it is similar in spirit; hence no adjustments are needed for the stiffness and damping parameters.
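A minimal Python sketch of this attribute assignment follows, assuming numpy vectors and a hypothetical Vertex container; voronoi_area_uv stands in for the Voronoi cell area computation of Meyer et al. (2003), evaluated in material coordinates.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class Vertex:              # hypothetical attribute container
        pos: np.ndarray        # world-space position
        vel: np.ndarray        # velocity
        uv: np.ndarray         # material (rest-state) coordinates
        mass: float = 0.0

    def insert_center_vertex(vi, vj, vk, density, voronoi_area_uv):
        """Attributes of a vertex inserted at the center of triangle (i, j, k)."""
        v = Vertex(pos=(vi.pos + vj.pos + vk.pos) / 3.0,
                   vel=(vi.vel + vj.vel + vk.vel) / 3.0,  # velocity of the center
                   uv=(vi.uv + vj.uv + vk.uv) / 3.0)      # material coordinates
        # Mass = cloth density x Voronoi cell area, measured in material
        # coordinates, where the density is specified.
        v.mass = density * voronoi_area_uv(v)
        return v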
4. Results

The adaptive refinement was incorporated into a cloth simulation system that includes the following components: Baraff-Witkin physics and integration, voxel-based cloth-cloth proximity detection (Eischen and Bigliani 2000), hierarchical bounding boxes for cloth-rigid proximity detection (Eischen and Bigliani 2000), and the collision response of Volino et al. (1995). Adaptation was performed before each simulation step. The performance metrics were measured on a computer with a 2.8 GHz Intel Pentium 4 CPU.

Several 10-second simulations of a dress worn by a walking mannequin were produced to demonstrate the advantage of incorporating adaptation into the cloth simulation system. The coarsest triangulation M0, used as input, consists of 472 triangles. The refinement is restricted to 5 resolution layers; at its finest resolution M4, the mesh consists of 38,232 triangles. Constructing the entire hierarchy from M0 up to the finest layer M4 required 40 ms, while deconstruction (simplification) required only 11 ms; extracting the finest conforming mesh took 13 ms. Adaptation typically constituted only 7-8% of the simulation time: for example, one step of the adaptive simulation with ~12,000 triangles took 1.2 s on average, of which adaptation accounted for only 87 ms.

Fig. 6. Triangle count in the three different adaptive cloth simulations.

Fig. 6 illustrates the change of triangle count in adaptive simulations with different thresholds on the local approximation error. The thresholds were chosen to maintain the average triangle count at around 4000, 8000, and 12,000.

Fig. 7. The mesh is dynamically adapted over the animation, following the changes in deformation.

Fig. 7 shows the mesh adaptation in action: the resolution increases as wrinkles form and sharpen, and decreases later as they unfold.

Fig. 8 compares results generated using uniform and adaptive meshes obtained with different thresholds. The adaptive method produces much sharper creases at a lower computational cost. It was also observed that adaptive refinement inhibits minor wrinkles: regions of cloth holding minor wrinkles are simplified, which prevents their development into sharper ones. At the same time, sharp creases, always represented with the finer mesh, tend to become even sharper.

Fig. 8. Snapshots of the non-adaptive and adaptive simulations (uniform: 4228 and 12,924 triangles; adaptive: 4863 and 12,681 triangles).

5. Conclusion

An elaborate mesh adaptation system designed for cloth simulation has been presented. It operates on semi-regular meshes, which enjoy the benefits of both regular and irregular meshes: they impose very few restrictions on the mesh boundary while remaining simple and computationally efficient.

Two vital components were explained: the estimation of the adequate local resolution, and the subsequent mesh refinement and simplification. The refinement algorithm deals with a hierarchy of resolution layers – uniform meshes of different resolutions. The coarsest layer of the hierarchy is given as input; the rest are reconstructed at runtime using the √3-refinement rule, and redundant parts of the hierarchy are deconstructed to save memory. The resolution layers are converted into a conforming mesh using a straightforward and fast algorithm.

The adaptation is driven by the refinement criterion, which estimates the local approximation error using the discrete mean curvature at the mesh vertices. The formulae for curvature estimation were derived, and the dependence of the approximation error on the triangle size was discussed.
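The curvature formulae themselves are derived earlier in the paper; for reference, criteria of this kind are typically built on the discrete mean-curvature normal of Meyer et al. (2003):

\[
\mathbf{K}(x_i) \;=\; \frac{1}{2A_{\mathrm{mixed}}(x_i)} \sum_{j \in N_1(i)} \left( \cot\alpha_{ij} + \cot\beta_{ij} \right) (x_i - x_j),
\qquad
H(x_i) \;=\; \tfrac{1}{2}\,\lVert \mathbf{K}(x_i) \rVert,
\]

where N_1(i) is the 1-ring of vertex x_i, α_ij and β_ij are the angles opposite edge (i, j) in its two incident triangles, and A_mixed is the mixed (Voronoi-based) vertex area.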
Results include the animation of a dress worn by a walking mannequin, which had not previously been demonstrated in adaptive cloth simulation. The mesh adaptation overhead was only 7-8% of a typical cloth simulation step. The behavior of cloth in the adaptive simulations was natural and realistic, generally exhibiting sharper and more detailed wrinkles than a non-adaptive simulation with a similar triangle count. However, it was observed that the system inhibits some wrinkles, suppressing minor bending deformations, while sharp creases tend to become even sharper. Our approach should therefore be distinctly advantageous where minor bending deformations are uncommon, i.e., when the bending stiffness is high and the cloth is forced into sharp creases by strong external forces, as at the elbows of a jacket. It is unclear at this stage whether this artifact should be attributed to the adaptation technique or to a deficiency of the physical model used; we believe that introducing buckling into the physical model may ameliorate the problem.

The whole idea of adaptive meshes is to reduce computational cost while preserving the quality of the simulation. We have obtained realistic simulations with a reduced triangle count, although, as is common in cloth simulation, there is no quantitative method for measuring realism numerically. Compared to previous adaptive mesh approaches in cloth simulation, ours is significantly more elaborate and consistent. Techniques from several fields, such as discrete differential geometry and view-dependent visualization, have been combined to make the approach as mature as possible.

The proposed approach is transparent to the cloth simulation method; therefore any FEM, FVM, or ad hoc method based on linear triangular elements can be augmented with the presented adaptation system.

REFERENCES

Alliez, P., Laurent, N., Sanson, H. and Schmitt, F. (2003), Efficient view-dependent refinement on 3D meshes using √3-subdivision. The Visual Computer, vol. 19, no. 4, pp. 205–221.

Azuma, D. I., Wood, D. N., Curless, B., Duchamp, T., Salesin, D. H. and Stuetzle, W. (2003), View-dependent refinement of multiresolution meshes with subdivision connectivity. Proceedings of AFRIGRAPH 2003, ACM Press, pp. 69–78.

Bank, R. E., Sherman, A. H. and Weiser, A. (1983), Refinement algorithms and data structures for regular local mesh refinement. In R. Stepleman et al., editors, Scientific Computing, vol. 44, IMACS/North-Holland, Amsterdam, pp. 3–17.

Baraff, D. and Witkin, A. (1998), Large steps in cloth simulation. SIGGRAPH '98 Conference Proceedings, ACM Press, pp. 43–54.

Choi, K.-J. and Ko, H.-S. (2002), Stable but responsive cloth. ACM Transactions on Graphics, vol. 21, no. 3, Proc. ACM SIGGRAPH 2002, pp. 604–611.

De Berg, M. and Dobrindt, K. T. G. (1998), On levels of detail in terrains. Graphical Models and Image Processing, vol. 60, no. 1, pp. 1–12.

Debunne, G., Desbrun, M., Cani, M.-P. and Barr, A. H. (2001), Dynamic real-time deformations using space & time adaptive sampling. SIGGRAPH 2001 Conference Proceedings, pp. 31–36.

DeRose, T., Kass, M. and Truong, T. (1998), Subdivision surfaces in character animation. SIGGRAPH '98 Conference Proceedings, pp. 85–94.

Desbrun, M., Schröder, P. and Barr, A. (1999), Interactive animation of structured deformable objects. Proc. Graphics Interface '99, pp. 1–8.

Duchaineau, M. A., Wolinsky, M., Sigeti, D. E., Miller, M. C., Aldrich, C. and Mineev-Weinstein, M. B. (1997), ROAMing terrain: real-time optimally adapting meshes. Proc. IEEE Visualization '97, pp. 81–88.

Eischen, J. and Bigliani, R. (2000), Collision detection in cloth modelling. In D. House and D. Breen, editors, Cloth Modeling and Animation, A. K. Peters, pp. 196–218.

Eischen, J. W., Deng, S. and Clapp, T. G. (1996), Finite-element modeling and control of flexible fabric parts. IEEE Computer Graphics and Applications, vol. 16, no. 5, pp. 71–80.

Etzmuß, O., Keckeisen, M. and Straßer, W. (2003), A fast finite element solution for cloth modelling. Proc. Pacific Graphics 2003, pp. 244–251.

Feynman, C. R. (1986), Modeling the appearance of cloth. Master's thesis, Massachusetts Institute of Technology.

Grinspun, E., Krysl, P. and Schröder, P. (2002), CHARMS: a simple framework for adaptive simulation. ACM Transactions on Graphics, vol. 21, no. 3, Proc. ACM SIGGRAPH 2002, pp. 281–290.

Hoppe, H. (1998), Smooth view-dependent level-of-detail control and its application to terrain rendering. Proc. IEEE Visualization 1998, pp. 35–42.

Hutchinson, D., Preston, M. and Hewitt, T. (1996), Adaptive refinement for mass/spring simulations. Proceedings of the European Workshop on Computer Animation and Simulation '96, pp. 31–45.
Kobbelt, L. (2000), √3-subdivision. SIGGRAPH 2000 Conference Proceedings, pp. 103–112.

Lee, A. W. F., Sweldens, W., Schröder, P., Cowsar, L. and Dobkin, D. (1998), MAPS: multiresolution adaptive parameterization of surfaces. SIGGRAPH '98 Conference Proceedings, pp. 95–104.

Lindstrom, P. and Pascucci, V. (2001), Visualization of large terrains made easy. Proc. IEEE Visualization 2001, pp. 363–370, 574.

Meyer, M., Desbrun, M., Schröder, P. and Barr, A. H. (2003), Discrete differential-geometry operators for triangulated 2-manifolds. In H.-C. Hege and K. Polthier, editors, Visualization and Mathematics III, Springer-Verlag, Heidelberg, pp. 35–57.

Provot, X. (1995), Deformation constraints in a mass-spring model to describe rigid cloth behaviour. Proc. Graphics Interface '95, pp. 147–154.

Terzopoulos, D., Platt, J., Barr, A. and Fleischer, K. (1987), Elastically deformable models. SIGGRAPH '87 Conference Proceedings, pp. 205–214.

Thingvold, J. and Cohen, E. (1990), Physical modeling with B-spline surfaces for interactive design and animation. Computer Graphics, vol. 24, no. 2, pp. 129–137.

Velho, L. and Zorin, D. (2001), 4-8 subdivision. Computer Aided Geometric Design, vol. 18, no. 5, pp. 397–427.

Villard, J. and Borouchaki, H. (2002), Adaptive meshing for cloth animation. Proc. 11th International Meshing Roundtable, pp. 243–252.

Volino, P., Courchesne, M. and Thalmann, N. M. (1995), Versatile and efficient techniques for simulating cloth and other deformable objects. SIGGRAPH '95 Conference Proceedings, pp. 137–144.

Volkov, V. and Li, L. (2003), Real-time refinement and simplification of adaptive triangular meshes. Proc. IEEE Visualization 2003, pp. 155–162.

Wood, Z. J., Desbrun, M., Schröder, P. and Breen, D. (2000), Semi-regular mesh extraction from volumes. Proc. IEEE Visualization 2000, pp. 275–282.

Wu, X., Downes, M. S., Goktekin, T. and Tendick, F. (2001), Adaptive nonlinear finite elements for deformable body simulation using dynamic progressive meshes. Proc. Eurographics 2001, pp. 349–358.

Xia, J. C. and Varshney, A. (1996), Dynamic view-dependent simplification for polygonal models. Proc. IEEE Visualization '96, pp. 327–334.