Extracting Animated Meshes with Adaptive Motion Estimation


Native Instruments MASCHINE MK3 User Manual


The information in this document is subject to change without notice and does not represent a commitment on the part of Native Instruments GmbH. The software described by this document is subject to a License Agreement and may not be copied to other media. No part of this publication may be copied, reproduced or otherwise transmitted or recorded, for any purpose, without prior written permission by Native Instruments GmbH, hereinafter referred to as Native Instruments.

"Native Instruments", "NI" and associated logos are (registered) trademarks of Native Instruments GmbH. ASIO, VST, HALion and Cubase are registered trademarks of Steinberg Media Technologies GmbH. All other product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them.

Document authored by: David Gover and Nico Sidi.
Software version: 2.8 (02/2019)
Hardware version: MASCHINE MK3

Special thanks to the Beta Test Team, who were invaluable not just in tracking down bugs, but in making this a better product.

Offices: NATIVE INSTRUMENTS GmbH, Schlesische Str. 29-30, D-10997 Berlin, Germany (www.native-instruments.de) · NATIVE INSTRUMENTS North America, Inc., 6725 Sunset Boulevard, 5th Floor, Los Angeles, CA 90028, USA · NATIVE INSTRUMENTS K.K., YO Building 3F, Jingumae 6-7-15, Shibuya-ku, Tokyo 150-0001, Japan (www.native-instruments.co.jp) · NATIVE INSTRUMENTS UK Limited, 18 Phipp Street, London EC2A 4NU, UK · NATIVE INSTRUMENTS FRANCE SARL, 113 Rue Saint-Maur, 75011 Paris, France · SHENZHEN NATIVE INSTRUMENTS COMPANY Limited, 5F, Shenzhen Zimao Center, 111 Taizi Road, Nanshan District, Shenzhen, Guangdong, China

© NATIVE INSTRUMENTS GmbH, 2019. All rights reserved.

Table of Contents (chapter overview; page numbers refer to the printed manual)

1 Welcome to MASCHINE (25)
2 Quick Reference (38)
3 Basic Concepts (76)
4 Browser (163)
5 Managing Sounds, Groups, and Your Project (225)
6 Playing on the Controller (275)
7 Working with Plug-ins (330)
8 Using the Audio Plug-in (380)
9 Using the Drumsynths (390)
10 Using the Bass Synth (474)
11 Working with Patterns (479)
12 Audio Routing, Remote Control, and Macro Controls (589)
13 Controlling Your Mix (646)
14 Using Effects (681)
15 Effect Reference (712)
16 Working with the Arranger (801)
17 Sampling and Sample Mapping (862)
18 Appendix: Tips for Playing Live (937)
19 Troubleshooting (941)
20 Glossary (943)
Index (951)

1 Welcome to MASCHINE

Thank you for buying MASCHINE!

MASCHINE is a groove production studio that implements the familiar working style of classical groove boxes along with the advantages of a computer-based system. MASCHINE is ideal for making music live, as well as in the studio. It's the hands-on aspect of a dedicated instrument, the MASCHINE hardware controller, united with the advanced editing features of the MASCHINE software.

Creating beats is often not very intuitive with a computer, but using the MASCHINE hardware controller to do it makes it easy and fun. You can tap in freely with the pads or use Note Repeat to jam along.
Alternatively, build your beats using the step sequencer just as in classic drum machines.

Patterns can be intuitively combined and rearranged on the fly to form larger ideas. You can try out several different versions of a song without ever having to stop the music.

Since you can integrate it into any sequencer that supports VST, AU, or AAX plug-ins, you can reap the benefits in almost any software setup, or use it as a stand-alone application. You can sample your own material, slice loops and rearrange them easily.

However, MASCHINE is a lot more than an ordinary groovebox or sampler: it comes with an inspiring 7-gigabyte library, and a sophisticated, yet easy to use tag-based Browser to give you instant access to the sounds you are looking for.

What's more, MASCHINE provides lots of options for manipulating your sounds via internal effects and other sound-shaping possibilities. You can also control external MIDI hardware and 3rd-party software with the MASCHINE hardware controller, while customizing the functions of the pads, knobs and buttons according to your needs utilizing the included Controller Editor application. We hope you enjoy this fantastic instrument as much as we do. Now let's get going!

—The MASCHINE team at Native Instruments.

1.1 MASCHINE Documentation

Native Instruments provide many information sources regarding MASCHINE. The main documents should be read in the following sequence:

1. MASCHINE Getting Started: This document provides a practical approach to MASCHINE via a set of tutorials covering easy and more advanced tasks in order to help you familiarize yourself with MASCHINE.
2. MASCHINE Manual (this document): The MASCHINE Manual provides you with a comprehensive description of all MASCHINE software and hardware features.

Additional documentation sources provide you with details on more specific topics:

▪ Controller Editor Manual: Besides using your MASCHINE hardware controller together with its dedicated MASCHINE software, you can also use it as a powerful and highly versatile MIDI controller to pilot any other MIDI-capable application or device. This is made possible by the Controller Editor software, an application that allows you to precisely define all MIDI assignments for your MASCHINE controller. The Controller Editor was installed during the MASCHINE installation procedure. For more information on this, please refer to the Controller Editor Manual available as a PDF file via the Help menu of Controller Editor.
▪ Online Support Videos: You can find a number of support videos on The Official Native Instruments Support Channel under the following URL: https:///NIsupport-EN. We recommend that you follow along with these instructions while the respective application is running on your computer.

Other Online Resources:

If you are experiencing problems related to your Native Instruments product that the supplied documentation does not cover, there are several ways of getting help:

▪ Knowledge Base
▪ User Forum
▪ Technical Support
▪ Registration Support

You will find more information on these subjects in the chapter Troubleshooting.

1.2 Document Conventions

This section introduces you to the signage and text highlighting used in this manual. This document uses particular formatting to point out special facts and to warn you of potential issues. The icons introducing these notes let you see what kind of information is to be expected.

Furthermore, the following formatting is used:

▪ Text appearing in (drop-down) menus (such as Open…, Save as… etc.) in the software and paths to locations on your hard disk or other storage devices is printed in italics.
▪ Text appearing elsewhere (labels of buttons, controls, text next to checkboxes etc.) in the software is printed in blue. Whenever you see this formatting applied, you will find the same text appearing somewhere on the screen.
▪ Text appearing on the displays of the controller is printed in light grey. Whenever you see this formatting applied, you will find the same text on a controller display.
▪ Text appearing on labels of the hardware controller is printed in orange. Whenever you see this formatting applied, you will find the same text on the controller.
▪ Important names and concepts are printed in bold.
▪ References to keys on your computer's keyboard you'll find put in square brackets (e.g., "Press [Shift] + [Enter]").

► Single instructions are introduced by this play button type arrow.
→ Results of actions are introduced by this smaller arrow.

Naming Convention

Throughout the documentation we will refer to MASCHINE controller (or just controller) as the hardware controller and MASCHINE software as the software installed on your computer.

The term "effect" will sometimes be abbreviated as "FX" when referring to elements in the MASCHINE software and hardware. These terms have the same meaning.

Button Combinations and Shortcuts on Your Controller

Most instructions will use the "+" sign to indicate buttons (or buttons and pads) that must be pressed simultaneously, starting with the button indicated first. E.g., an instruction such as "Press SHIFT + PLAY" means:

1. Press and hold SHIFT.
2. While holding SHIFT, press PLAY and release it.
3. Release SHIFT.

Unlabeled Buttons on the Controller

The buttons and knobs above and below the displays on your MASCHINE controller do not have labels.

Unity Temporal AA: How It Works

Unity uses Temporal Anti-Aliasing (TAA) to remove jagged edges and shimmering from the rendered image, producing a smoother and cleaner picture.

The technique works by accumulating several frames over time and compositing them into a new image. In real-time rendering this kind of anti-aliasing noticeably improves image quality, and TAA keeps a small guard band of pixels around the frame so that the algorithm still has valid data near the screen edges.

TAA can be thought of as spreading super-sampling (SSAA) across time: each frame is rendered with a small sub-pixel jitter offset applied to the camera projection, and this jitter offset changes from frame to frame, so successive frames sample slightly different positions inside each pixel. Each jittered frame therefore contributes a new anti-aliasing sample, and blending it with the result accumulated from previous frames produces the anti-aliased image.

Because the camera and the scene move, consecutive frames can differ noticeably, so TAA has to account for how each pixel has moved between the previous frame and the current one. Using per-pixel motion vectors, the algorithm reprojects the previous (history) frame so that it lines up with the current frame, and then blends the aligned history into the current frame's pixels.
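The accumulation loop described above can be sketched in a few lines. The following is a schematic, engine-agnostic illustration in Python/NumPy of per-frame sub-pixel jitter, motion-vector reprojection, and history blending; the function names and parameters are ours for illustration and do not correspond to Unity's actual API or shaders.

import numpy as np

def halton(index, base):
    # Low-discrepancy sequence commonly used to generate sub-pixel jitter offsets.
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def jitter_offset(frame_index, period=8):
    # Sub-pixel camera offset in [-0.5, 0.5) pixels, cycling over a short period;
    # the renderer applies this to the projection matrix before drawing the frame.
    i = (frame_index % period) + 1
    return np.array([halton(i, 2) - 0.5, halton(i, 3) - 0.5])

def taa_resolve(current, history, motion, alpha=0.1):
    # current: (H, W, 3) color rendered this frame with a jittered projection
    # history: (H, W, 3) accumulated color from previous frames
    # motion:  (H, W, 2) per-pixel screen-space motion vectors, in pixels
    # alpha:   weight of the new sample; smaller alpha = more temporal smoothing
    h, w, _ = current.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Reproject: look up where each pixel was located in the previous frame.
    prev_x = np.clip(np.round(xs - motion[..., 0]).astype(int), 0, w - 1)
    prev_y = np.clip(np.round(ys - motion[..., 1]).astype(int), 0, h - 1)
    reprojected = history[prev_y, prev_x]
    # Exponential moving average accumulates the jittered samples over time.
    return alpha * current + (1.0 - alpha) * reprojected

Production implementations additionally clamp the reprojected history against the color range of the current pixel's neighborhood so that stale samples are rejected after disocclusions, which is also why TAA can smear fast-moving content when the history weight is too high.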

The advantage of TAA is that it preserves the visual quality of the image while consuming relatively few resources. Compared with other anti-aliasing techniques, TAA sustains a higher frame rate, which keeps gameplay feeling smooth.

In short, TAA removes jaggies and shimmering by combining information from multiple frames. It relies on jittered intermediate frames and a reprojected history buffer to achieve the anti-aliasing, and because it needs no additional image resources it keeps rendering efficient and is comparatively easy to integrate.

Graphical Abstract Commands in LaTeX


Introduction: In scientific publishing, a graphical abstract is widely used to present a paper's topic and key conclusions in a concise visual form. This article introduces the commands for creating a graphical abstract in LaTeX and explains each step in detail.

Step 1: load the required package. Creating a graphical abstract in LaTeX relies on the graphicx package, which provides the command for inserting images. Load it in the preamble:

\usepackage{graphicx}

If the package is reported as missing, install it through your LaTeX distribution's package manager.

Step 2: define a graphical abstract environment. There is no built-in environment for this, so define a new one, for example graphicalabstract, in the document's preamble:

\newenvironment{graphicalabstract}
  {\par\smallskip\noindent\textbf{Graphical Abstract:}\par\noindent}
  {\par\smallskip}

This defines a new environment named graphicalabstract that adds a little vertical space before and after its contents and prints the label "Graphical Abstract:" in bold.

Step 3: insert the graphical abstract. With the environment defined, the graphical abstract can be placed at a suitable point in the document. The following example inserts an image as the graphical abstract:

\begin{graphicalabstract}
  \includegraphics[width=\textwidth]{graphical_abstract.jpg}
\end{graphicalabstract}

Here the \includegraphics command inserts the image file graphical_abstract.jpg, scaled to the full text width.
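Putting the three steps together, a minimal compilable document could look like the sketch below; graphical_abstract.jpg stands in for your own image file.

\documentclass{article}
\usepackage{graphicx}  % Step 1: image support

% Step 2: a simple graphical-abstract environment defined in the preamble.
\newenvironment{graphicalabstract}
  {\par\smallskip\noindent\textbf{Graphical Abstract:}\par\noindent}
  {\par\smallskip}

\begin{document}

\title{An Example Paper}
\author{Author Name}
\maketitle

\begin{abstract}
A short textual abstract of the paper.
\end{abstract}

% Step 3: the graphical abstract, scaled to the text width.
\begin{graphicalabstract}
  \includegraphics[width=\textwidth]{graphical_abstract.jpg}
\end{graphicalabstract}

\end{document}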

Particle-Based Anisotropic Surface Meshing


Particle-Based Anisotropic Surface Meshing

Zichun Zhong* (University of Texas at Dallas), Xiaohu Guo* (University of Texas at Dallas), Wenping Wang† (The University of Hong Kong), Bruno Lévy‡ (INRIA Nancy-Grand Est), Feng Sun† (The University of Hong Kong), Yang Liu§ (NVIDIA Corporation), Weihua Mao¶ (UT Southwestern Medical Center at Dallas)

*{zichunzhong,xguo}@, †{wenping,fsun}@cs.hku.hk, ‡bruno.levy@inria.fr, §thomasyoung.liu@, ¶weihua.mao@

Figure 1: Anisotropic meshing results generated by our particle-based method.

Abstract

This paper introduces a particle-based approach for anisotropic surface meshing. Given an input polygonal mesh endowed with a Riemannian metric and a specified number of vertices, the method generates a metric-adapted mesh. The main idea consists of mapping the anisotropic space into a higher dimensional isotropic one, called "embedding space". The vertices of the mesh are generated by uniformly sampling the surface in this higher dimensional embedding space, and the sampling is further regularized by optimizing an energy function with a quasi-Newton algorithm. All the computations can be re-expressed in terms of the dot product in the embedding space, and the Jacobian matrices of the mappings that connect different spaces. This transform makes it unnecessary to explicitly represent the coordinates in the embedding space, and also provides all necessary expressions of energy and forces for efficient computations. Through energy optimization, it naturally leads to the desired anisotropic particle distributions in the original space. The triangles are then generated by computing the Restricted Anisotropic Voronoi Diagram and its dual Delaunay triangulation. We compare our results qualitatively and quantitatively with the state-of-the-art in anisotropic surface meshing on several examples, using the standard measurement criteria.

CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling

Keywords: Anisotropic Meshing, Particle, and Gaussian Kernel.

Links: DL PDF

1 Introduction

Anisotropic meshing offers a highly flexible way of controlling mesh generation, by letting the user prescribe a direction and density field that steers the shape, size and alignment of mesh elements. In the simulation of fluid dynamics, it is often desirable to have elongated mesh elements with desired orientation and aspect ratio given by a Riemannian metric tensor field [Alauzet and Loseille 2010]. For surface modeling, it has been proved in approximation theory that the L2 optimal approximation to a smooth surface with a given number of triangles is achieved when the anisotropy of triangles follows the eigenvalue and eigenvectors of the curvature tensors [Simpson 1994; Heckbert and Garland 1999]. This can be easily seen from the example of ellipsoid surface in Fig. 2 where the ratio of the two principal curvatures K_max/K_min is close to 1 near the two ends of the ellipsoid and is as high as 100 in the middle part. Anisotropic triangles stretched along the direction of minimal curvatures in the middle part of the ellipsoid provide best approximation, while isotropic triangles are needed at its two ends.
In this paper,we propose a new method for anisotropic meshing of surfaces endowed with a Riemannian metric.We rely on a particle-based scheme,where each pair of neighboring particles is equipped with a Gaussian energy.It has been shown[Witkin and Heckbert 1994]that minimizing this pair-wise Gaussian energy leads to a u-niform isotropic distribution of particles.To compute the anisotrop-ic meshing on surfaces equipped with Riemannian metric,we uti-lize the concept of a higher dimensional“embedding space”[Nash 1954;Kuiper1955].Our method optimizes the placement of the vertices,or particles,by uniformly sampling the higher dimension-al embedding of the input surface.This embedding is designed in such a way that when projected back into the original space(usual-Figure 2:Isotropic and anisotropic meshing with 1,000output vertices of the ellipsoid surface.The stretching ratio (defined in Sec.2.1)is computed as √K max /K min ,where K max and K min are the two principal curvatures.Note that the “End Part”is ren-dered with orthographic projection along its long-axial direction,to better show the isotropy.ly 2D or 3D),a uniform sampling becomes anisotropic with respect to the input metric.Direct reference to the higher dimensional em-bedding is avoided by re-expressing all computations in terms of the dot product in the high-dimensional space,and the Jacobian matri-ces of the mappings that connect different spaces.Based on this re-expression we derive principled energy and force models for ef-fective computation on the original manifold with a quasi-Newton optimization algorithm.Finally,the triangles are generated by com-puting a Restricted Anisotropic V oronoi Diagram and extracting the dual of its connected components.This paper makes the following contributions for efficiently gener-ating high-quality anisotropic meshes:•It introduces a new particle-based formulation for anisotropic meshing.It defines the pair-wise Gaussian energies and forces between particles,and formulates the energy optimization in a higher dimensional “embedding space”.We show further how anisotropic meshing can be translated into isotropic meshing in this higher dimensional embedding space (Sec.3.1).The energy is designed in such a way that the particles are uni-formly distributed on the surface embedded in this higher di-mensional space.When the energy is optimized,the corre-sponding particles in the original manifold will achieve the anisotropic sampling with the desired input metric.•It presents a computationally feasible and efficient method for our energy optimization (Sec.3.2).The high-dimensional energy function and its gradient is mapped back into the o-riginal space,where the particles can be directly optimized.This computational approach avoids the need of computing the higher dimensional embedding space.Such energy opti-mization strategy shows very fast convergence speed,without any need for the explicit control of particle population (e.g.,inserting or deleting particles to meet the desired anisotropy).2Background and Related Works2.1Definition of AnisotropyAnisotropy denotes the way distances and angles are distorted.Ge-ometrically,distances and angles can be measured by the dot prod-uct:⟨v ,w ⟩,which is a bilinear function mapping a pair of vectors v ,w to R .The dot product is symmetric,positive,and definite (SPD).If the dot product is replaced with another SPD bilinear for-m,then an anisotropic space is defined.We consider that a met-ric M (.),i.e.an SPD bilinear form,is defined over the domain Ω⊂R m .In 
other words,at a given point x ∈Ω,the dot product between two vectors v and w is given by ⟨v ,w ⟩M (x ).In practice,the metric can be represented by a symmetric m ×m matrix M (x ),in which case the dot product becomes:⟨v ,w ⟩M (x )=v T M (x )w .(1)The metric matrix M (x )can be decomposed with Singular Value Decomposition (SVD)into:M (x )=R (x )T S (x )2R (x ),(2)where the diagonal matrix S (x )2contains its ordered eigenvalues,and the orthogonal matrix R (x )contains its eigenvectors.We note that a globally smooth field R (x )may not exist for surfaces of arbitrary topology.For the metric design,we use the following two options:(1)In some of our experiments,we start from designing a smooth scaling field S (x )and a rotation field R (x )that is smooth in re-gions other than those singularities,and compose them to Q (x )=S (x )R (x )and M (x )=Q (x )T Q (x ),which is the same as Du et al.[2005].They are defined on the tangent spaces of the surface.Suppose s 1and s 2are the two diagonal items in S (x )correspond-ing to the two eigenvectors in the tangent space,and s 1≤s 2.Wesimply call s 2s 1as the stretching ratio .This process will play a role later when the user specifies the desired input metric (Sec.5).(2)Note that if M (x )is given by users,the decomposition to Q (x )is non-unique.An equivalent decomposition M (x )=Q O (x )T Q O (x )is given by any matrix Q O (x )=O (x )Q (x ),where O (x )is a m ×m orthogonal matrix.In other words,Q (x )is unique up to a rotation .However,it is easy to show that if a SPD metric M (x )is giv-en,its square root Q ′(x )=√M (x )is also a SPD matrix,and such decomposition is unique (Theorem 7.2.6of [Horn and John-son 1985])and smooth (Theorem 2of [Freidlin 1968]).Q ′(x )is a symmetric affine mapping:Q ′(x )=R (x )T S (x )R (x ),and M (x )=Q ′(x )Q ′(x ).In Sec.5.1,we use the “Mesh Font”ex-ample to show that Q ′(x )can work well in our framework,given a user specified smooth metric field M (x ).It is interesting to note that if the metric tensor field is given as:M (x )=ρ(x )2m ·I ,(3)where ρ(x ):Ω→R and I is the identity matrix,then M (x )defines an isotropic metric graded with the density function ρ(x ).Given the metric field M (x )and an open curve C ⊂Ω,the length of C is defined as the integration of the length of tangent vector along the curve C with metric M (x ).Then,the anisotropic distance d M (x ,y )between two points x and y can be defined as the length of the (possibly non-unique)shortest curve that connects x and y .2.2Previous WorksAnisotropic Voronoi Diagrams:By replacing the dot product with the one defined by the metric, anisotropy can be introduced into the definition of the standard no-tions in computational geometry,e.g.,V oronoi Diagrams and De-launay Triangulations.The most general setting is given by Rie-mannian V oronoi diagrams[Leibon and Letscher2000]that replace the distance with the anisotropic distance d M(x,y)defined above. 
Some theoretical results are known,in particular that Riemannian V oronoi diagrams admit a valid dual only in dimension2[Boisson-nat et al.2012].However,a practical implementation is still beyond reach[Peyre et al.2010].For this reason,two simplifications are used to compute the V oronoi cell of each generator x i:V or Labelle(x i)={y|d xi (x i,y)≤d xj(x j,y),∀j}V or Du(x i)={y|d y(x i,y)≤d y(x j,y),∀j}where:d x(y,z)=√(z−y)T M(x)(z−y).(4)Thefirst definition V or Labelle[Labelle and Shewchuk2003]is eas-ier to analyze theoretically.The bisectors are quadratic surfaces, known in closed form,and a provably-correct Delaunay refinemen-t algorithm can be defined.The so-defined Anisotropic V oronoi Diagram(A VD)may be also thought of as the projection of a higher-dimensional power diagram[Boissonnat et al.2008a].The second definition V or Du[Du and Wang2005]is best suited to a practical implementation of Lloyd relaxation in the computation of Anisotropic Centroidal V oronoi Tessellations.Centroidal Voronoi Tessellation and its Anisotropic Version:A Centroidal V oronoi Tessellation(CVT)is a V oronoi Diagram such that each point x i coincides with the centroid of its V oronoi cell.A CVT can be computed by either the Lloyd relaxation[L-loyd1982]or a quasi-Newton energy optimization solver[Liu et al. 2009].It generates a regular sampling[Du et al.1999],from which a Delaunay triangulation with well-shaped isotropic elements can be extracted.In the case of surface meshing,it is possible to gener-alize this definition by using a geodesic V oronoi diagram over the surface[Peyre and Cohen2004].To make the computations simpler and cheaper,it is possible to replace the geodesic V oronoi diagram with the Restricted V oronoi Diagram(RVD)or Restricted Delau-nay Triangulation(RDT),defined in[Edelsbrunner and Shah1994] and used by several meshing algorithms,see[Dey and Ray2010] and the references herein.Hence a Restricted Centroidal V oronoi Tessellation can be defined[Du et al.2003].With an efficient algo-rithm to compute the Restricted V oronoi Diagram,Restricted CVT can be used for isotropic surface remeshing[Yan et al.2009]. 
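Both simplified Voronoi-cell definitions above reduce to evaluating the point-based distance of Eq. (4) under the metric of Eq. (1). As a reference, here is a minimal NumPy sketch of these two formulas and of the Vor_Du site assignment; it is our own illustration, not code from the paper.

import numpy as np

def aniso_dot(v, w, M):
    # Eq. (1): dot product induced by the SPD metric matrix M.
    return float(v @ M @ w)

def d_x(y, z, M_x):
    # Eq. (4): d_x(y, z) = sqrt((z - y)^T M(x) (z - y)), the distance between
    # y and z measured with the metric evaluated at the point x.
    v = z - y
    return float(np.sqrt(aniso_dot(v, v, M_x)))

def vor_du_site(y, sites, metric):
    # Vor_Du (Du and Wang 2005): assign the query point y to the generator x_i
    # minimizing d_y(x_i, y), i.e. the metric is evaluated at y itself.
    M_y = metric(y)
    return int(np.argmin([d_x(xi, y, M_y) for xi in sites]))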
CVT was further generalized to Anisotropic CVT(ACVT)by Du et al.[2005]using the definition V or Du in Eq.(4).In each Lloyd iteration,an anisotropic Delaunay triangulation with the given Rie-mannian metric needs to be constructed,which is a time-consuming operation.Valette et al.[2008]proposed a discrete approximation of ACVT by clustering the vertices of a dense pre-triangulation of the domain.This discrete version is much faster than Du et al.’s continuous ACVT approach,at the expense of slightly degraded mesh quality.Sun et al.[2011]introduced a hexagonal Minkows-ki metric into ACVT optimization,in order to suppress obtuse pared to these ACVT approaches,our particle-based scheme avoids the construction of A VD in the intermediate itera-tions of energy optimization,thus shows much faster performance as shown in Sec.6.1.Surface Meshing in Higher Dimensional Space:Uniformly meshing surfaces embedded in higher dimensional space has also been studied in the literature[Ca˜n as and Gortler2006;Ko-vacs et al.2010;L´e vy and Bonneel2012].The work of L´e vy and Bonneel[2012]is most related to ours,since both can be considered as using the framework of energy optimization in a higher dimen-sional embedding space.They extended the computation of CVT to a6D space in order to achieve a curvature-adaptation.In partic-ular,the anisotropic meshing on a3D surface is transformed to an isotropic one on the surface embedded in6D space,which can be efficiently computed by CVT equipped with V oronoi Parallel Lin-ear Enumeration[L´e vy and Bonneel2012].However,it does not provide users with theflexibility to control the anisotropy via an in-put metric tensorfield.Our approach is designed to handle the more general anisotropic meshing scenario where a user-desired metric is specified.Refinement-Based Delaunay Triangulation:Anisotropic versions of point insertion in Delaunay triangula-tion has been successfully applied to many practical application-s[Borouchaki et al.1997a;Borouchaki et al.1997b;Dobrzynski and Frey2008].Boissonnat et al.[2008b;2011]introduced a De-launay refinement framework,which is based on the goal to make the star around each vertex x i to be consisting of the triangles that are exactly Delaunay for the metric associated with x i.In order to“stitch”the stars of neighboring vertices,refinement algorithm-s are proposed to add new vertices gradually to achieve thefinal anisotropic meshing.Our approach is different and consists in op-timizing all the vertices of the mesh globally.Another difference is that we compute the dual of the connected components of the RVD[Yan et al.2009]instead of the RDT.The results are com-pared in Sec6.3.Particle-Based Anisotropic Meshing:Turk[1992]introduced repulsive points to sample a mesh for the purpose of polygonal remeshing.It was later extended by Witkin and Heckbert[1994]who used particles equipped with pair-wise Gaussian energy to sample and control implicit surfaces.Meyer et al.[2005]formulated the energy kernel as a modified cotangen-t function withfinite support,and showed the kernel to be near-ly scale-invariant as compared to the Gaussian kernel.It was lat-er extended to handle adaptive,isotropic meshing of CAD mod-els[Bronson et al.2012]with particles moving in the parametric space of each surface patch.All these methods are only targeting isotropic sampling of surfaces.To handle anisotropic meshing,Bossen and Heckbert[1996]incor-porated the metric tensor into the distance function d(x,y),and use f(x,y)=(1−d(x,y)4)·exp(−d(x,y)4)to model the re-pulsion 
and attraction forces between particles.Shimada and his co-workers proposed physics-based relaxation of“bubbles”with a standard second-order system consisting of masses,dampers,and linear springs[Shimada and Gossard1995;Shimada et al.1997; Yamakawa and Shimada2000].They used a bounded cubic func-tion of the distance to model the inter-bubble forces,and further extended it to anisotropic meshing by converting spherical bubbles to ellipsoidal ones.Both Bossen et al.and Shimada et al.’s works require dynamic population control schemes,to adaptively insert or delete particles/bubbles in certain regions.Thus if the initialization does not have a good estimation of the number of particles needed tofill the domain,it will take a long time to converge.The method proposed in this paper is very similar to the idea of Adaptive Smoothed Particle Hydrodynamics(ASPH)[Shapiro et al.1996]which uses inter-particle Gaussian kernels with an anisotropic smoothing tensor.However,as addressed in Sec.3.3, ASPH directly formulates the energy in the original space without using the embedding space concept.To compute the forces between particles,the gradient of the varying metric tensor has to be ignored due to numerical difficulty.This treatment will lead to inaccurateanisotropy in the computed mesh as shown in Fig.4,when there are mild or significant variations in the metric.Relation with the Theory of Approximation:It has been studied in the theory of approximation [D’Azevedo 1991;Shewchuk 2002]that anisotropy is related to the optimal ap-proximation of a function with a given number of piecewise-linear triangular elements.The anisotropy of the optimal mesh can be characterized,and optimization algorithms can be designed to best approximate the given function.The continuous mesh concept in-troduced by Loseille and Alauzet [2011a;2011b]provides a rela-tionship between the linear interpolation error and the mesh pre-scription,which has resulted in highly efficient anisotropic mesh adaptation algorithms.The relationship between anisotropic mesh-es and approximation theory has also been studied for higher-order finite elements [Mirebeau and Cohen 2010;Mirebeau and Cohen 2012],which leads to an efficient greedy bisection algorithm to generate optimal meshes.Other Related Works:This paper only focuses on anisotropic triangular meshing,which is different from other works handling anisotropic quad-dominant remeshing [Alliez et al.2003;Kovacs et al.2010;L´e vy and Liu 2010;Zhang et al.2010].The notion of anisotropy has also been applied to the blue noise sample generation [Li et al.2010].3The Particle ApproachConsidering each vertex as a particle,the potential energy be-tween the particles determines the inter-particle forces.When the forces applied on each particle become equilibrium,the particles reach the optimal balanced state with uniform distribution.To han-dle anisotropic meshing,we utilize the concept of “embedding s-pace”[Nash 1954;Kuiper 1955].In such high-dimensional em-bedding space,the metric is uniform and isotropic.When the forces applied on each particle reach equilibrium in this embedding space,the particle distribution on the original manifold will exhibit the desired anisotropic property.Basic Framework:Given n particles with their positions X ={x i |i =1...n }on the surface Ωwhich is embedded in R m space,we define the inter-particle energy between particles i and j as:Eij=e−∥x i −x j ∥24σ2.(5)Here σ,called kernel width ,is the fixed standard deviation of the Gaussian kernels.In Sec.4.1we will discuss 
how to choose an appropriate size of σ.Clearly,E ij =E ji .The gradient of E ij w.r.t.x j can be considered as the force F ij applied on particle j by particle i :Fij=∂E ij∂x j =(x i −x j )2σ2e−∥x i −x j ∥24σ2.(6)Analogous to Newton’s third law of motion,we have F ij=−F ji.We want to note that the formulation of Eq.(6)is similar to the particle repulsion/attraction idea of Witkin and Heckbert [1994].By minimizing the total energy E =∑i ∑j =i Eijwith L-BFGS [Liu and Nocedal 1989],we can get a uniform isotropic sam-pling,where the forces applied on each particle reach equilibrium.It is shown in the supplementary Appendix that this particle-based energy formulation is fundamentally equivalent to Fattal’s kernel-based formulation [2011],for the uniform isotropic case.However,Fattal’s method does not handle anisotropic case.For non-uniform isotropic case,our analysis in Appendix shows the difference with respect to Fattal’s approach,from both theoretical viewpoints and experimentalresults.Figure 3:A simple example of an embedding function that trans-forms an original 2D anisotropic surface Ω(left)into the surface Ω(right)embedded in a higher dimensional space (3D in this exam-ple)where the metric is uniform and isotropic.In the general case a higher number of dimensions is required for Ω.3.1Anisotropic CaseThe top-left image of Fig.3shows a representation of a 2D metric field M .The figure shows a set of points (black dots)and their associated unit circles (the bean-shaped curves,that correspond to the sets of points equidistant to each black dot).The bottom-left image of Fig.3shows the ideal mesh governed by such metric field:the length of the triangle edges,under the anisotropic distance,are close to be equal.For this simple example of Fig.3,one can see that the top-left im-age can be considered as the surface in the top-right image “seen from above”.In other words,by embedding the flat 2D domain as a curved surface in 3D,one can recast the anisotropic meshing problem as the isotropic meshing of a surface embedded in higher-dimensional space.In general,for an arbitrary metric M ,a higher-dimensional space will be needed [Nash 1954;Kuiper 1955].We now consider that the surface Ωis mapped to Ωthat is embedded in a higher-dimensional space R m .We simply call R m as the embedding space in this pa-per.Suppose the mapping function is ϕ:Ω→Ω,where Ω⊂R m ,Ω⊂R m ,and m ≤m .Let us denote the particle positions on this surface Ωby X ={x i |x i =ϕ(x i ),i =1...n }.A unifor-m sampling on Ωcan be computed by changing the inter-particleenergy function E ij of Eq.(5)as follows,hence defining E ij:Eij=e−∥x i −x j ∥24σ2.(7)The gradient of E ijw.r.t.x j ,i.e.,the force F ijin the embeddingspace,can be defined similarly as:Fij=∂Eij∂x j =(x i −x j )2σ2e −∥x i −x j ∥24σ2.(8)3.2Our Computational ApproachWe show in this subsection how to optimize E ijwithout referringto the coordinates of Ωin the embedding space.From the introduction of Sec.2.1,we have seen that introducing anisotropy means changing the definition of the dot product.If we consider two small displacements v and w from a given lo-cation x∈Ω,then they are transformed into v=J(x)v and w=J(x)w,where J(x)denotes the Jacobian matrix ofϕat x. 
The dot product between v and w is given by:⟨v,w⟩=v T J(x)T J(x)w=v T M(x)w.(9) In other words,given the embedding functionϕ,the anisotropy M corresponds to thefirst fundamental form ofϕ.If we now suppose that the anisotropy M(x)is known but not the embedding function ϕ,it is still possible to compute the dot product between two vectors in embedding space around a given point.3.2.1Computing the Energy FunctionWe now consider the inter-particle energy function in Eq.(7).Con-sider neighboring particles i and j.We use the Jacobian matrix evaluated at their middle point:x i+x j2.In the following we de-note J ij=J(x i+x j2),M ij=M(x i+x j2),Q ij=Q(x i+x j2),andQ′ij=Q′(x i+x j2)(see Sec.2.1),for notational simplicity.Sincethe middle point is close to both x i and x j,it is reasonable to make the following approximation:x i−x j=ϕ(x i)−ϕ(x j)≈J ij(x i−x j).(10) Thus the exponent in the term E ij can be approximated as:∥x i−x j∥2=⟨x i−x j,x i−x j⟩≈(x i−x j)T J T ij J ij(x i−x j)=(x i−x j)T M ij(x i−x j).(11) Our inter-particle energy function can be approximated by:E ij≈e−(x i −x j)T M ij(x i−x j)4σ2.(12)The total energy is simply:E=n∑i=1n∑j=1,j=iE ij(13)3.2.2Computing the Force FunctionUsing Eq.(10)and Eq.(11),the inter-particle forces of Eq.(8)be-comes:F ij≈J ij(x i −x j)2σ2e−(x i−x j)T M ij(x i−x j)4σ2.(14)Here,for a particle i,different neighbors j have different J ij,which essentially encodes the variation of the metric.The total force applied on each particle i is simply:F i=∑j=iF ji.(15)Note that the expression in Eq.(14)still depends on the Jacobian matrix J ij.In our case,neither the embedding functionϕnor its Jacobian is known.Therefore,we propose below an approximation of Eq.(14)that solely depends on the anisotropyfield M(x).We denote the set of particle i’s neighbors as N(i),and denote the vectors v ij=x i−x j,j∈N(i).To better understand J ij,let us explore the relationship between the matrices J ij and Q ij.J ij is am×m matrix,where m is the dimension of the embedding space, and m is either2or3,depending on whetherΩis a2D domain or a3D surface.Consider the QR decomposition:J ij=U ij[P ij], where U ij is a m×m unitary matrix(i.e.a rotation matrix in R m), P ij is a m×m matrix,and0is a(m−m)×m block of zeros. 
Then:M ij=J T ij J ij=P T ij P ij,(16) since U T ij U ij=I.As mentioned in Sec.2.1,if both S ij and R ij are given by users, then we can compose them and define Q ij=S ij R ij;if a smoothmetric M ij is given by users,we can use its square root Q′ij=√M ij.In the following derivation,both Q ij and Q′ij will lead to the same approximation technique.So we simply use Q ij in the following discussion.From Eq.(16)we can see that P ij is exactly Q ij up to a rotation, i.e.,P ij=O ij Q ij where O ij is a m×m rotation matrix.We can simply represent J ij as:J ij=U ij[O ij Q ij]=W ij[Q ij],(17) where W ij is a rotation matrix in R m:W ij=U ij[O ij00I].(18)If the metricfield M(x)is smooth,then it is reasonable to approx-imate the rotation matrix W ij with W i,where W i is the rotation matrix of Eq.(18)evaluated at x i.Thus for j∈N(i),the m-dimensional vectors J ij v ij in Eq.(14)can be approximated by:J ij v ij=W ij[Q ij]v ij≈W i[Q ij v ij].(19) Then the force vector on particle i in Eq.(15)becomes:F i=∑j=iJ ij v ij2σ2e−(x i−x j)T M ij(x i−x j)4σ2≈∑j=i12σ2W i[Q ij v ij]e−(x i−x j)T M ij(x i−x j)4σ2=W i∑j=i12[Q ij v ij]e−(x i−x j)T M ij(x i−x j)4σ2.(20) If we define the m-dimensional forces:F ij=Q ij(x i−x j)2σ2e−(x i−x j)T M ij(x i−x j)4σ2,(21)andF i=∑j=iF ji,(22) then the m-dimensional force F i in Eq.(20)is simply:F i≈W i[F i]=V i F i,(23) where V i=W i[I m×m],and I m×m is a m×m identity matrix. Note that F i is the gradient in the higher-dimensional space R m, while F i is in the original space R m.They are related by the ma-trix V i in Eq.(23),which builds up a bijection between them.We can see that they can guide the optimization to arrive at the same equilibrium,since F i=0⇔ F i=0.Thus for the energy op-timization purpose,we can simply replace Eq.(15)with Eq.(22) which can be computed directly on the original surfaceΩ.The idea behind our force approximation can be interpreted as fol-lows.At a given particle i,different neighboring pairs(i,j1)and(i,j2)may be equipped with different metrics M ij1and M ij2(aswell as different Jacobians J ij1and J ij2).The difference betweenJ ij encodes the variation of the metric locally around particle i. J ij includes both“metric”part(Q ij)and“embedding rotation”part(W ij)(Eq.(19)).W ij transforms the tangent plane at x ij in the original space into the tangent plane in embedding space.Our approach uses the exact variation of neighboring metric Q ij,and approximates the embedding rotation W ij with W i in Eq.(19). 
Thus,the variation of embedding rotation is ignored in each parti-cle’s neighborhood,but the variation of metric is accounted.In summary,we can optimize the uniform isotropic sampling onΩwith the approximated energy of Eq.(12)and force of Eq.(21)us-ing L-BFGS optimizer.They are both computed using the particle positions X onΩ,together with the metric M.If M is given by users,we use its square root Q′instead of Q in Eq.(21).Although we utilize the elegant concept of“embedding space”to help devel-op our formulation for anisotropic meshing,we do NOT need to compute such an embedding space.3.3Importance of the Embedding SpaceAnisotropic meshing is defined by the Riemannian metric M,to lo-cally affine-transform triangles into a“unit”space while enforcing the transformed triangles to be uniformly equilateral.Thus it is nat-ural to directly define the energy optimization problem in this“unit”space.However,the metrics on each point can be different.With-out establishing a coherent“unit”space,we cannot describe how these local affine copies of“unit”spaces can be“stitched”together. Our approach coherently considers all these local“unit”spaces by embedding the surfaceΩinto high-dimensional space.Our energy in Eq.(7)is designed exactly by the definition of“anisotropy”–the affine-transformed triangles inΩshould be uniformly equilateral (the particles should be uniformly distributed).This definition also leads to very efficient computations of forces in Eq.(21).We want to emphasize that:without using this embedding space, the definition of energy function and the corresponding force for-mulation would be inconsistent with the definition of anisotropic mesh and thus lead to incorrect results.If we do not use this high-dimensional embedding space,the most intuitive formulation of en-ergy will be Eq.(12).We elaborate on that and give some compar-isons below.Ignoring the Gradient of Metric(ASPH Method):We need to note that the metric M ij in Eq.(12)is dependent on the positions of particles x i and x j.Therefore,the force formula-tion will involve the gradient of M ij w.r.t.x j,which is numerically very difficult to compute.In the method of Adaptive Smoothed Par-ticle Hydrodynamics(ASPH)[Shapiro et al.1996],they use inter-particle Gaussian kernels and incorporated an anisotropic smooth-ing kernel to define the potential energy between particles,which is similar to Eq.(12).However,it is mentioned in their paper(Sec.2.2.4of[Shapiro et al.1996])that the gradient of metric term is ignored when computing the gradient of such inter-particle energy. Thus it leads to the following ASPH force formulation:F ij≈M ij(x i−x j)2σ2e−(x i−x j)T M ij(x i−x j)4σ2.(24)It is easy to see that Eq.(24)differs to Eq.(21)by only replacing Q ij with M ij.Thus if the metricfield is not constant,these two forces will lead to different local minima.Our method in Eq.(21)only ignores the variation of embedding ro-tation in each particle’s neighborhood,while the variation of metric is accounted.As confirmed in our experiments in Fig.4,this has a measurable influence on the quality of generated meshes. 
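For reference, the approximated pair energy of Eq. (12) and the per-particle force of Eqs. (21)-(22) can be evaluated as in the NumPy sketch below, with the metric factor Q taken at each pair's midpoint so that M = Q^T Q. This is a schematic illustration of the formulas only; the paper's actual implementation restricts the sums to nearby particles, feeds the result to an L-BFGS optimizer, and keeps the particles on the surface.

import numpy as np

def total_energy_and_forces(X, Q_mid, sigma):
    # X      : (n, m) particle positions in the original space (m = 2 or 3)
    # Q_mid  : callable Q_mid(xa, xb) returning the m x m factor of the metric
    #          at the midpoint of xa and xb, so that M_ij = Q^T Q
    # sigma  : fixed Gaussian kernel width
    n, m = X.shape
    E = 0.0
    F = np.zeros((n, m))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            Q = Q_mid(X[i], X[j])
            v = X[i] - X[j]
            Qv = Q @ v
            e = np.exp(-(Qv @ Qv) / (4.0 * sigma ** 2))  # Eq. (12): E_ij
            E += e                                        # Eq. (13): total energy
            # Eqs. (21)-(22): contribution of the pair (j, i) to the force on i.
            F[i] += (Q @ (X[j] - X[i])) / (2.0 * sigma ** 2) * e
    return E, F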
Ignoring the Variation of the Jacobian Matrix: Another approximation is to apply the pseudo-inverse of the Jacobian matrix in the expression of Eq. (14). In Eq. (14), $J_{ij}$ is different for different neighbors $j$. If we approximate $J_{ij}$ with $J_i$ in Eq. (14) and then apply the pseudo-inverse of $J_i$, we arrive at the following formulation (without the leading $M_{ij}$ or $Q_{ij}$):
$$\mathbf{F}_{ij} \approx \frac{\mathbf{x}_i - \mathbf{x}_j}{2\sigma^2}\, e^{-\frac{(\mathbf{x}_i-\mathbf{x}_j)^T M_{ij}(\mathbf{x}_i-\mathbf{x}_j)}{4\sigma^2}}. \qquad (25)$$
We emphasize the difference with our method: this variant approximates $J_{ij}$ with $J_i$ in Eq. (14), while our method approximates $W_{ij}$ with $W_i$ in Eq. (19). As mentioned above, $J_{ij}$ contains both the "metric" part $Q_{ij}$ and the "embedding rotation" part $W_{ij}$, so approximating $J_{ij}$ with $J_i$ will potentially "erase" the variation of the metric between neighboring particles.

To see their different effects on anisotropic mesh generation, we conduct the energy optimization in a 2D square domain using the following three choices of force: (1) our force in Eq. (21); (2) the ASPH force in Eq. (24); and (3) the force in Eq. (25). As shown in Fig. 4, the 2D square domain is equipped with the background tensor field $M(\mathbf{x}) = \mathrm{diag}\{\mathrm{Stretch}(\mathbf{x})^2, 1\}$, where the field $\mathrm{Stretch}(\mathbf{x})$ lies in the range $[0.577, 9]$. In this experiment we use a spatially nonuniform metric field – if $M(\mathbf{x})$ were spatially uniform, all three forces would lead to the same particle configuration.

The comparative quality measurements of the generated anisotropic meshes are shown in Fig. 4, with the triangle area quality $G_{area}$, the angle histogram, $G_{min}$, $G_{avg}$, $\theta_{min}$, $\theta_{avg}$, and $\%<30^{\circ}$, all of which are defined in Sec. 4.5. The color-coded triangle area quality of our method shows that the areas of the triangles computed using our force are uniform (all close to 1), meaning the triangle sizes conform to the desired density defined by the metric tensor. From this experiment we see that performing the energy optimization with our force in Eq. (21) generates the ideal anisotropic mesh, while optimizing the energy with the two alternative forces in Eq. (24) and Eq. (25) does not. This illustrates that formulating the energy optimization in the embedding space with our approximation leads to a principled formulation of the inter-particle forces.

4 Implementation and Algorithm Details

Our particle-based method is summarized in Alg. 1 below. To help reproduce our results, we further detail each component of the algorithm and the implementation issues.

4.1 Kernel Width

The inter-particle energy as defined in Eq. (5) depends on the choice of the fixed kernel width $\sigma$. The slope of this energy peaks at a distance of $\sqrt{2}\sigma$ and is near zero at much smaller or much greater distances. If $\sigma$ is chosen too small, particles nearly stop spreading once their separation is about $5\sqrt{2}\sigma$, because there is almost no force between them. If $\sigma$ is chosen too large, nearby particles cannot repel each other and the resulting sampling pattern is poor. In this work we choose $\sigma$ proportional to the average "radius" of each particle when the particles are uniformly distributed on $\bar\Omega$: $\sigma = c_\sigma \sqrt{|\bar\Omega|/n}$, where $|\bar\Omega|$ denotes the area of the surface $\bar\Omega$ in the embedding space, $n$ is the number of particles, and $c_\sigma$ is a constant coefficient. Note that our goal is to let the particles

Research on the Human Motion Enveloping Solid as a Design Basis for Limited Architectural Space

摘要 (Abstract)

The continuing advance of urbanization keeps subdividing the land in city centers, which has produced many "limited" architectural spaces: spaces that are extremely cramped, subject to harsh environmental constraints, or in which the users' activities are restricted to some degree.

Under such conditions it becomes very important to calculate, on the basis of behavioral-architecture theory and design methods, the minimum architectural space that still satisfies the users' functional needs.

However, the existing design data for limited architectural space are mainly body percentile parameters, which are used to dimension both the space and the fixtures inside it.

To a large extent this design approach suffers from a lack of pertinence, unreasonable space dimensions, low space-use efficiency, and high building energy consumption.

In response, this study first describes in detail how human body movement is simulated by computer programming and how the human motion enveloping solid is computed from the motion trajectories.

Enveloping-solid simulation is a simulation method used in the inference process of behavioral-architecture research.

Compared with the traditional experimental method, it avoids the limitations on sample size and the human factors introduced during experiments, and it moves the study of human body dimensions from the two-dimensional plane into three-dimensional space.

On this basis, the thesis discusses the problems of existing limited architecture and how, in practical design, detailed data can be obtained by computing the motion enveloping solids of a space's users, thereby determining their range of activity in the space as an important reference for the design of limited architectural space.

Designed in this way, the shape and volume of a limited architectural space that satisfies the usage requirements can be calculated, ensuring that the space meets the users' basic functional needs and improving space-use efficiency.

In addition, the enveloping solid can be used to optimize the position, size and shape of fixtures in a limited space, allowing personalized customization based on the measured parameters of a specific user while preserving the fixtures' basic functions.
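The thesis later builds the envelope in Processing with the Toxiclibs and HE_Mesh libraries; purely as an illustration of the idea stated above (sweep a limb through its joint ranges, record the trajectory points, and wrap them in an envelope), here is a small Python sketch. The two-segment arm model, the segment lengths, the joint ranges, and the use of a convex hull are assumptions of this sketch, not the thesis's actual parameters or method.

```python
import numpy as np
from scipy.spatial import ConvexHull

def arm_points(upper=0.30, fore=0.25, n=25):
    """Sample reachable hand positions of a simplified two-segment arm.
    Joint ranges below are illustrative, not measured anthropometric data."""
    pts = []
    for yaw in np.linspace(-np.pi / 2, np.pi / 2, n):        # shoulder rotation
        for pitch in np.linspace(-np.pi / 3, np.pi / 2, n):  # shoulder elevation
            for elbow in np.linspace(0.0, 2.4, n):           # elbow flexion
                # elbow position from the shoulder angles
                dir_upper = np.array([np.cos(pitch) * np.cos(yaw),
                                      np.cos(pitch) * np.sin(yaw),
                                      np.sin(pitch)])
                p_elbow = upper * dir_upper
                # forearm flexed within the vertical plane of the upper arm
                dir_fore = np.array([np.cos(pitch + elbow) * np.cos(yaw),
                                     np.cos(pitch + elbow) * np.sin(yaw),
                                     np.sin(pitch + elbow)])
                pts.append(p_elbow + fore * dir_fore)
    return np.array(pts)

pts = arm_points()
hull = ConvexHull(pts)                 # outer envelope of the sampled trajectories
print("envelope volume (m^3): %.4f" % hull.volume)
```

A full envelope for design purposes would combine such sweeps for all relevant limbs and postures and would typically use a tighter (non-convex) wrapping surface than the convex hull used here for simplicity.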

关键词:运动包络体;极限建筑空间;行为建筑学;模拟法;空间效率AbstractThe land in the center of the city is constantly divided for the sake of urbanization development. As a result, an increasing number of limited architectural space was designed and built. The environment of such kind of space is usually cramped. And the users’ behavior is also limited. In this case, it is of great importance to calculate the minimum size of space which can meet the basic functional needs of the users. However, the existing data for limited architectural extent, leads to an increasing number serious issues, such as lacking pertinence, unreasonable space size, low space efficiency and high energy consumption.In order to solve this issue, this essay will first simulate the movement of human body by computer programming. After that, enveloping solid will be calculated by the trail of human body. Enveloping solid simulation is a basic simulating method in the inference procedure of behavioral architecture. Compared with traditional experiments, there will be no sample quantity limitation and anthropogenic factor in simulating process. And the 2-dimensional human parameter comes to 3 dimensional.Based on which, this essay will explore the existing problems on limited architectural space design and how to use enveloping solid simulation in architecture design. In the first stage, the design data of users can be get from the process of enveloping solid simulation. And the users’ parameter shows the range of activity, which is important reference frame in design procedure. By this method, the functional needs of users can be meet. And space efficiency can also be improved. What’s more, enveloping solid can be used in optimizing the shape and location of fixtures in building as well.Keywords:enveloping solid, limited architectural space,behavioral architecture, simulation, space efficiency目录摘要 (1)Abstract (2)第1章绪论 (1)1.1课题背景及研究的目的和意义 (1)1.1.1 课题的研究背景 (1)1.1.2 课题的研究目的和意义 (2)1.2相关概念概述 (3)1.2.1 极限建筑空间的概念 (3)1.2.2 “包络体”的概念及构成概述 (3)1.3国内外研究现状及分析 (4)1.3.1 行为建筑学 (4)1.3.2 极限建筑空间 (4)1.3.3 包络体的应用及计算方式 (6)1.4研究内容、方法与框架 (11)1.4.1 课题的研究内容 (11)1.4.2 研究方法 (12)1.4.3 课题的研究框架 (14)第2章研究基础 (15)2.1人体运动学、运动解剖学 (15)2.1.1 人体运动形式 (15)2.1.2 人体运动的特性与坐标系建立 (15)2.2人体测量学与程序人体基本参数设定 (17)2.2.1 人体上肢静态尺寸测量 (17)2.2.2 程序人体基本参数设定 (18)2.3计算机编程 (19)2.3.1 模拟软件 (19)2.3.2 Toxiclibs类库引用与运动轨迹的向量表示 (19)2.3.3 HE_Mesh类库引用与包络曲面生成 (20)2.4本章小结 (20)第3章程序模拟 (21)3.1程序逻辑 (21)3.1.1 程序参数设定 (21)3.1.2 上肢运动轨迹模拟 (22)3.1.3 上肢运动包络体生成 (30)3.2不同人体参数对模拟结果的影响 (30)3.2.1 儿童(四肢长度对模拟结果的影响) (30)3.2.2 老年人(活动角度对模拟结果的影响) (33)3.2.3 残疾人(残肢对模拟结果的影响) (34)3.2.4 数据对比 (35)3.3“人体运动包络体”程序对行为建筑学研究方法的扩展 (36)3.3.1 行为建筑学研究的一般方法以及主要存在问题 (36)3.3.2 “人体运动包络体”模拟对行为建筑学研究方法的贡献 (37)3.4本章小结 (39)第4章 (40)4.1计算满足使用需求的极限建筑空间形态与体积 (40)4.1.1 满足功能需求,提高空间使用效率 (40)4.1.2 根据运动轨迹预测使用者所需的三维建筑空间 (45)4.1.3 节约能源 (49)4.2优化极限空间中固定物的位置与尺寸、形状 (50)4.2.1 包络体与极限空间中固定物的位置 (51)4.2.2 包络体与极限空间中固定物的尺寸 (55)4.2.3 包络体与固定物的三维空间组合 (57)4.3本章小结 (58)结论 (59)参考文献 (60)附录 (63) (74)致谢 (75)第1章绪论1.1 课题背景及研究的目的和意义1.1.1 课题的研究背景古代有蜗居的说法,用“蜗舍”比喻“圆舍”“蜗”字描述的是空间的形状,后来逐渐演变为居住空间狭小的意思。

Efficient Feature Extraction for 2D/3D Objects in Mesh Representation

EFFICIENT FEATURE EXTRACTION FOR2D/3D OBJECTSIN MESH REPRESENTATIONCha Zhang and Tsuhan ChenDept.of Electrical and Computer Engineering,Carnegie Mellon University 5000Forbes Avenue,Pittsburgh,PA15213,USA{czhang,tsuhan}@ABSTRACTMeshes are dominantly used to represent3D models as they fit well with graphics rendering hardware.Features such as volume,moments,and Fourier transform coeffi-cients need to be calculated from the mesh representation efficiently.In this paper,we propose an algorithm to cal-culate these features without transforming the mesh into other representations such as the volumetric representa-tion.To calculate a feature for a mesh,we show that we can first compute it for each elementary shape such as a triangle or a tetrahedron,and then add up all the values for the mesh.The algorithm is simple and efficient,with many potential applications.1.INTRODUCTION3D scene/object browsing is becoming more and more popular as it engages people with much richer experience than2D images.The Virtual Reality Modeling Language (VRML)[1],which uses mesh models to represent the3D content,is rapidly becoming the standard file format for the delivery of3D contents across the Internet.Tradition-ally,in order to fit graphics rendering hardware well,a VRML file models the surface of a virtual object or envi-ronment with a collection of3D geometrical entities,such as vertices and polygons.In many applications,there is a high demand to calcu-late some important features for a mesh model,e.g.,the volume of the model,the moments of the model,or even the Fourier transform coefficients of the model.One ex-ample application is the search and retrieval of3D models in a database[2][3][9].Another example is shape analysis and object recognition[4].Intuitively,we may calculate these features by first transforming the3D mesh model into its volumetric representation and then finding these features in the voxel space.However,transforming a3D mesh model into its volumetric representation is a time-consuming task,in addition to a large storage requirement [5][6][7].Work supported in part by NSF Career Award9984858.In this paper,we propose to calculate these features from the mesh representation directly.We calculate a fea-ture for a model by first finding it for the elementary shapes,such as triangles or tetrahedrons,and then add them up.The computational complexity is proportional to the number of elementary shapes,which is typically much smaller than the number of voxels in the equivalent volu-metric representation.Both2D and3D meshes are consid-ered in this paper.The result is general and has many po-tential applications.The paper is organized as follows.In Section2we discuss the calculation of the area/volume of a mesh.Sec-tion3extends this idea and presents the method to com-pute moments and Fourier transform for a mesh.Some applications are provided in Section4.Conclusions and discussions are given in Section5.2.AREA/VOLUME CALCULATIONThe computation of the volume of a3D model is not a trivial work.One can convert the model into a discrete3D binary image.The grid points in the discrete space are called voxels.Each voxel is labeled with‘1’or‘0’to indi-cate whether this point is inside or outside the object.The number of voxels inside the object,or equivalently the summation of all the voxel values in the discrete space, can be an approximation for the volume of the model. 
However, transforming a 3D mesh model into a binary image is very time-consuming. Moreover, in order to improve the accuracy, the resolution of the 3D binary image needs to be very high, which further increases the computational load.

2.1. 2D Mesh Area

We explain our approach starting from the computation of areas for 2D meshes. A 2D mesh is simply a 2D shape with polygonal contours. As shown in Figure 1, suppose we have a 2D mesh with bold lines representing its edges. Although we could discretize the 2D space into a binary image and calculate the area of the mesh by counting the pixels inside the polygon, doing so is very computationally intensive.

Figure 1: The calculation of a 2D polygon area ("positive" and "negative" elementary triangle areas).

To start with our algorithm, let us assume that the polygon is closed. If it is not, a contour-closing process can be performed first [9]. Since we know all the vertices and edges of the polygon, we can easily calculate the normal of each edge. For example, edge AB in Figure 1 has the normal:
$$\hat N_{AB} = \frac{(y_2 - y_1)\,\hat x - (x_2 - x_1)\,\hat y}{\sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}}, \qquad (1)$$
where $(x_1, y_1)$ and $(x_2, y_2)$ are the coordinates of vertices A and B, respectively, and $\hat x$ and $\hat y$ are the unit vectors of the axes. We define the normal here as a normalized vector that is perpendicular to the corresponding edge and points outwards of the mesh. In the computer graphics literature there are different ways to check whether a point is inside or outside a polygon [8], so it is easy to find the correct direction of the normal. Later we will show that even if we only know that all the normals point to the same side of the mesh (either inside or outside, as long as they are consistent), we are still able to find the correct area of the mesh.

After getting the normals, we construct a set of triangles by connecting all the polygon vertices with the origin. Each edge and the origin form an elementary triangle, which is the smallest unit of computation. We define the signed area of each elementary triangle as follows: the magnitude of the value is the area of the triangle, while the sign of the value is determined by checking the position of the origin with respect to the edge and the direction of the normal. Take the triangle OAB in Figure 1 as an example. The area of OAB is:
$$S_{OAB} = \frac{1}{2}(x_2 y_1 - x_1 y_2). \qquad (2)$$
The sign of $S_{OAB}$ is the same as the sign of the inner product $\vec{OA} \cdot \hat N_{AB}$, which is positive in this case. The total area of the polygon can be computed by summing up all the signed areas. That is,
$$S_{total} = \sum_i S_i, \qquad (3)$$
where $i$ goes through all the edges or elementary triangles. Following the above steps, the result of Eq. (3) is guaranteed to be positive, no matter whether the origin is inside or outside the mesh. Note that we make no assumption that the polygon is convex.

In a real implementation we do not need to check the signs of the areas each time. Let:
$$S'_i = \frac{1}{2}(x_{i2} y_{i1} - x_{i1} y_{i2}), \qquad S'_{total} = \sum_i S'_i, \qquad (4)$$
where $i$ stands for the index of the edges or elementary triangles, and $(x_{i1}, y_{i1})$, $(x_{i2}, y_{i2})$ are the coordinates of the starting point and the end point of edge $i$. When we loop through all the edges, we need to keep moving forward so that the inside of the mesh is always kept on the left-hand side or on the right-hand side. The final sign of $S'_{total}$ tells us whether we looped along the right direction (the right direction gives a positive result), and the final result is simply the magnitude of $S'_{total}$.
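A minimal NumPy sketch of the running sum of Eq. (4) is shown below; it assumes the vertex array lists the boundary loop of a closed polygon in a consistent winding order, and it takes the absolute value at the end exactly as the text describes.

```python
import numpy as np

def polygon_area(vertices):
    """Area of a closed 2D polygon via the signed elementary triangles of Eq. (4).
    vertices: (n, 2) array listing the boundary loop in a consistent order."""
    total = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]          # edge from vertex i to i+1 (wraps around)
        total += 0.5 * (x2 * y1 - x1 * y2)      # signed area of triangle (O, v_i, v_{i+1})
    return abs(total)                           # the sign only encodes the loop direction

# Example: unit square
print(polygon_area(np.array([[0, 0], [1, 0], [1, 1], [0, 1]])))  # -> 1.0
```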
2.2. 3D Case

We can extend the above algorithm to the 3D case. In a VRML file, the mesh is represented by a set of vertices and polygons. Before we calculate the volume, we do some preprocessing on the model to make sure that all the polygons are triangles. Such preprocessing, called triangulation, is commonly used in mesh coding, mesh signal processing, and mesh editing. The direction of the normal of a triangle can be determined by the order of its vertices and the right-hand rule, as shown in Figure 2. The consistency condition is very easy to satisfy: for two neighboring triangles, if the common edge has different directions, then the normals of the two triangles are consistent. For example, in Figure 2, AB is the common edge of triangles ACB and ABD. In triangle ACB the direction is from B to A, and in triangle ABD the direction is from A to B, thus $\hat N_{ACB}$ and $\hat N_{ABD}$ are consistent.

Figure 2: Normals and order of vertices.

In the 3D case, the elementary calculation unit is a tetrahedron. For each triangle, we connect each of its vertices with the origin and form a tetrahedron, as shown in Figure 3. As in the 2D case, we define the signed volume of each elementary tetrahedron: the magnitude of the value is the volume of the tetrahedron, and the sign of the value is determined by checking whether the origin is on the same side as the normal with respect to the triangle. In Figure 3, triangle ACB has a normal $\hat N_{ACB}$. The volume of tetrahedron OACB is:
$$V_{OACB} = \frac{1}{6}\big(-x_3 y_2 z_1 + x_3 y_1 z_2 + x_2 y_3 z_1 - x_2 y_1 z_3 - x_1 y_3 z_2 + x_1 y_2 z_3\big). \qquad (5)$$

Figure 3: The calculation of 3D volume.

As the origin O is on the opposite side of $\hat N_{ACB}$, the sign of this tetrahedron is positive. The sign can also be calculated by the inner product $\vec{OA} \cdot \hat N_{ACB}$. In a real implementation, again we only need to compute:
$$V'_i = \frac{1}{6}\big(-x_{i3} y_{i2} z_{i1} + x_{i3} y_{i1} z_{i2} + x_{i2} y_{i3} z_{i1} - x_{i2} y_{i1} z_{i3} - x_{i1} y_{i3} z_{i2} + x_{i1} y_{i2} z_{i3}\big), \qquad V'_{total} = \sum_i V'_i, \qquad (6)$$
where $i$ stands for the index of the triangles or elementary tetrahedra, and $(x_{i1}, y_{i1}, z_{i1})$, $(x_{i2}, y_{i2}, z_{i2})$ and $(x_{i3}, y_{i3}, z_{i3})$ are the coordinates of the vertices of triangle $i$, ordered so that the normal of triangle $i$ is consistent with the others. The volume of a 3D mesh model is always positive, so the final result is obtained by taking the absolute value of $V'_{total}$. In order to compute other 3D model features such as moments or Fourier transform coefficients, we reverse the vertex order of each triangle if $V'_{total}$ turns out to be negative.
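As an illustration of Eq. (6), here is a minimal NumPy sketch that accumulates the signed elementary tetrahedra of a closed, consistently oriented triangle mesh and takes the absolute value at the end; the cube at the bottom is only a sanity check, not data from the paper.

```python
import numpy as np

def mesh_volume(vertices, triangles):
    """Volume of a closed triangle mesh via the signed tetrahedra of Eq. (6).
    vertices: (n, 3) array; triangles: (m, 3) integer array with a consistent
    vertex order (consistent normals), as required by the derivation above."""
    total = 0.0
    for a, b, c in triangles:
        v1, v2, v3 = vertices[a], vertices[b], vertices[c]
        # signed volume of tetrahedron (O, v1, v2, v3) = det([v1; v2; v3]) / 6
        total += np.dot(v1, np.cross(v2, v3)) / 6.0
    return abs(total)

# Example: unit cube as 12 consistently oriented triangles
verts = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],[0,0,1],[1,0,1],[1,1,1],[0,1,1]], float)
tris = np.array([[0,2,1],[0,3,2],[4,5,6],[4,6,7],[0,1,5],[0,5,4],
                 [1,2,6],[1,6,5],[2,3,7],[2,7,6],[3,0,4],[3,4,7]])
print(mesh_volume(verts, tris))  # -> 1.0
```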
3. MOMENTS AND FOURIER TRANSFORM

The above algorithm can be generalized to calculate other features of 2D and 3D mesh models. In fact, whenever the feature to be calculated can be written as a signed sum of features of the elementary shape (a triangle in the 2D case and a tetrahedron in the 3D case), and the feature of the elementary shape can be derived in explicit form, the proposed algorithm applies. Although this seems to be a strong constraint, many commonly used features fall into this category. For example, all features that take the form of an integration over the space inside the object can be calculated with this algorithm. This includes moments, the Fourier transform, the wavelet transform, and many others.

In classical mechanics and statistical theory, the concept of moments is used extensively. In this paper, the moments of a 3D mesh model are defined as:
$$M_{pqr} = \iiint x^p y^q z^r\, \rho(x, y, z)\, dx\, dy\, dz, \qquad (7)$$
where $\rho(x, y, z)$ is an indicator function:
$$\rho(x, y, z) = \begin{cases} 1, & \text{if } (x, y, z) \text{ is inside the mesh}, \\ 0, & \text{otherwise}, \end{cases} \qquad (8)$$
and $p$, $q$, $r$ are the orders of the moment. Central moments can be obtained easily from the result of Eq. (7). The integration can be rewritten as the sum of integrations over each elementary shape:
$$M_{pqr} = \sum_i s_i \iiint x^p y^q z^r\, \rho_i(x, y, z)\, dx\, dy\, dz, \qquad (9)$$
where $\rho_i(x, y, z)$ is the indicator function of elementary shape $i$, and $s_i$ is the sign of the signed volume of shape $i$. We can use the same process as in Section 2 to calculate a number of extensively used low-order moments of triangles and tetrahedra. A few examples of the moments of a tetrahedron are given in the Appendix. More examples can be found in [9].

The Fourier transform is a very powerful tool in many signal processing applications. The Fourier transform of a 2D or 3D mesh model is defined by the Fourier transform of its indicator function:
$$\Theta(u, v, w) = \iiint e^{-i(xu + yv + zw)}\, \rho(x, y, z)\, dx\, dy\, dz. \qquad (10)$$
Since the Fourier transform is also an integration over the space inside the object, it too can be calculated by decomposing the integration into integrations over each elementary shape. The explicit form of the Fourier transform of a tetrahedron is given in the Appendix.

As the moments and Fourier transform coefficients of an elementary shape are explicit, the above computation is very efficient. The computational complexity is O(N), where N is the number of edges or triangles in the mesh. Note that in the volumetric approach, where a 2D or 3D binary image is obtained before computing any of the features, the computational complexity is O(M), where M is the number of grid points inside the model, not counting the cost of transforming the data representation. Obviously M is typically much larger than N, especially when a relatively accurate result is required and the resolution of the binary image has to be large. The storage space required by our algorithm is also much smaller.

Previous work by Lien and Kajiya [10] provides a similar method for calculating the moments of tetrahedra. Our work gives more explicit forms of the moments and extends their work to calculating the Fourier transform.

4. APPLICATIONS

A good application of our algorithm is finding the principal axes of a 3D mesh model. This is useful when we want to compare two 3D models that are not well aligned. In a 3D model retrieval system [2][9] this is required because some of the features may not be invariant to arbitrary rotations. We construct a 3×3 matrix from the second-order moments of the 3D model:
$$S = \begin{bmatrix} M_{200} & M_{110} & M_{101} \\ M_{110} & M_{020} & M_{011} \\ M_{101} & M_{011} & M_{002} \end{bmatrix}. \qquad (11)$$
The principal axes are obtained by computing the eigenvectors of the matrix $S$, which is also known as principal component analysis (PCA). The eigenvector corresponding to the largest eigenvalue is made the first principal axis, the eigenvector corresponding to the second-largest eigenvalue is the second principal axis, and so on. In order to make the final result unique, we further make sure that the third-order moments $M_{300}$ and $M_{030}$ are positive after the transform. Figure 4 shows the results of this algorithm.

Figure 4: 3D models before and after PCA.

The Fourier transform of a 3D mesh model can be used in many applications. For example, the coefficients can be used directly as features in a retrieval system [9]. Other applications are shape analysis, object recognition, and model matching. Note that in our algorithm the resulting Fourier transform is in continuous form. There is no discretization alias, since we can evaluate a Fourier transform coefficient from the continuous form directly.
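To make the PCA alignment concrete, here is a small NumPy sketch that builds the matrix S of Eq. (11) from precomputed second-order central moments and returns the rotation to the principal axes. How the moments are obtained (e.g. with the tetrahedron decomposition above) and the additional sign fix via M300 and M030 are left out; the function and argument names are ours, not the paper's.

```python
import numpy as np

def principal_axes(m200, m020, m002, m110, m101, m011):
    """Rotation matrix aligning a model with its principal axes (Eq. (11)).
    The second-order central moments are assumed to be precomputed."""
    S = np.array([[m200, m110, m101],
                  [m110, m020, m011],
                  [m101, m011, m002]])
    eigvals, eigvecs = np.linalg.eigh(S)        # eigenvalues in ascending order
    R = eigvecs[:, ::-1]                        # columns: axes by decreasing eigenvalue
    if np.linalg.det(R) < 0:                    # keep a right-handed coordinate system
        R[:, 2] *= -1.0
    return R                                    # apply as: aligned_points = points @ R
    # The paper additionally flips axes so that M300 and M030 are positive after the
    # transform, which removes the remaining sign ambiguity; that step is omitted here.
```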
5. CONCLUSIONS AND DISCUSSIONS

In this paper we propose an algorithm for computing features of a 2D or 3D mesh model. Explicit methods to compute the volume, moments and Fourier transform directly from a mesh representation are given. The algorithm is very efficient and has many potential applications.

The proposed algorithm still has some room for improvement. For example, it is still difficult to obtain the explicit form of a high-order moment for triangles and tetrahedra. Also, the Fourier transform may lose its computational efficiency if many coefficients are required simultaneously. More research is in progress to speed this up.

REFERENCES
[1] R. Carey, G. Bell, and C. Marrin, "The Virtual Reality Modeling Language", Apr. 1997, ISO/IEC DIS 14772-1. [Online]: /Specifications/.
[2] E. Paquet and M. Rioux, "A Content-based Search Engine for VRML Database", Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 541-546, 1998.
[3] S. Jeannin, L. Cieplinski, J. R. Ohm, M. Kim, MPEG-7 Visual part of eXperimentation Model Version 7.0, ISO/IEC JTC1/SC29/WG11/N3521, Beijing, July 2000.
[4] A. P. Reeves, R. J. Prokop, S. E. Andrews and F. P. Kuhl, "Three-Dimensional Shape Analysis Using Moments and Fourier Descriptors", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 10, No. 6, pp. 937-943, Nov. 1988.
[5] H. H. Chen, T. S. Huang, "A Survey of Construction and Manipulation of Octrees", Computer Vision, Graphics, and Image Processing, Vol. 43, pp. 409-431, 1988.
[6] S.-N. Yang and T.-W. Lin, "A New Linear Octree Construction by Filling Algorithms", Proc. Tenth Annual International Phoenix Conference on Computers and Communications, pp. 740-746, 1991.
[7] Y. Kitamura and F. Kishino, "A Parallel Algorithm for Octree Generation from Polyhedral Shape Representation", Proc. 13th International Conference on Pattern Recognition, Vol. 3, pp. 303-309, 1996.
[8] J. D. Foley, A. van Dam, S. K. Feiner, and J. F. Hughes, Computer Graphics: Principles and Practice, Second Edition, Addison-Wesley, 1996.
[9] /projects/3DModelRetrieval/.
[10] S.-L. Lien and J. T. Kajiya, "A Symbolic Method for Calculating the Integral Properties of Arbitrary Nonconvex Polyhedra", IEEE Computer Graphics and Applications, pp. 35-41, Oct. 1984.

APPENDIX
$$M_{000} = \frac{1}{6}\big(-x_3 y_2 z_1 + x_3 y_1 z_2 + x_2 y_3 z_1 - x_2 y_1 z_3 - x_1 y_3 z_2 + x_1 y_2 z_3\big).$$
$$M_{100} = \frac{1}{4} M_{000}\,(x_1 + x_2 + x_3).$$
$$M_{200} = \frac{1}{10} M_{000}\,(x_1^2 + x_2^2 + x_3^2 + x_1 x_2 + x_1 x_3 + x_2 x_3).$$
$$M_{300} = \frac{1}{20} M_{000}\,\big(x_1^3 + x_2^3 + x_3^3 + x_1^2(x_2 + x_3) + x_2^2(x_1 + x_3) + x_3^2(x_1 + x_2) + x_1 x_2 x_3\big).$$
The closed form of $\Im(u, v, w)$ for a tetrahedron is a sum over its four vertices of terms $i\,e^{-i(ux_k + vy_k + wz_k)}$, each divided by the products of differences of the phase terms $D_k = ux_k + vy_k + wz_k$ (with $D_0 = 0$ for the vertex at the origin), scaled by $M_{000}$.

Adobe Acrobat SDK Developer Guide

Please remember that existing artwork or images that you may want to include in your project may be protected under copyright law. The unauthorized incorporation of such material into your new work could be a violation of the rights of the copyright owner. Please be sure to obtain any permission required from the copyright owner.
This guide is governed by the Adobe Acrobat SDK License Agreement and may be used or copied only in accordance with the terms of this agreement. Except as permitted by any such agreement, no part of this guide may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, recording, or otherwise, without the prior written permission of Adobe. Please note that the content in this guide is protected under copyright law.

Optimization of Ultrasonic-Assisted Extraction of Total Flavonoids from Citrus Peel

超声波提取柑橘果皮总黄酮优化工艺(湖北省农业科学院农产品加工与核农技术研究所/国家食用菌加工技术研发分中心,武汉430064)摘要:以鄂柑1号为材料,采用Plackett-Burman(PB)和中心组合设计(Central Composite Design)对影响超声波辅助提取柑橘果皮总黄酮工艺操作的8个因素进行了筛选优化&#65377;PB试验设计与统计学分析表明,提取温度&#65380;溶剂浓度和液料比是影响总黄酮得率的3个关键因素,并确定其他5个因素水平为超声波功率100W,超声波处理时间20 min,提取次数2次,溶剂类型为乙醇,分级水平60目&#65377;对3个关键因素进行中心组合设计,并经响应面法优化分析得到影响总黄酮得率的二阶模型&#65377;结果表明,鄂柑1号果皮总黄酮得率大于6.2%的提取条件为提取温度54.8~58℃,溶剂体积分数40%~44.6%,液料比10~12.55 mL∶1g,并进行了验证&#65377;关键词:鄂柑1号;总黄酮;Plackett-Burman设计;中心组合设计;超声波;提取条件Optimization of Extraction Technique of Flavonoids from Citrus Peel with Assistance of UltrasonicAbstract:Using Egan No.1 as material,Plackett-Burman (PB) design and Central Composite Design (CCD) were applied to screen and optimize influential factors for extraction process of total flavonoids from Egan No. 1 peel with assistance of ultrasonic.The extraction temperature,the ethanol concentrarion,the ratio of solvent to material,as three key factors,were found to significantly influence the yields of total flavonoids via PB design and the following statistic analysis,Other factors were as follows:ultrasonic power 100W,the extraction time 20min,extraction times 2,the kind of solvent ethanol,meshes 60.The quadratic model for three significant factors was established with the yields of total flavonoids as the target response by CCD design and response surface analysis.The optimum extraction conditions in which the yields were more than 6.2% were:temperature 54.8~58℃,ethanol concentration 40%~44.6%,and ratio of solvent to material 10~12.55 mL:1g,It was also verified experimentally.Key words:Egan No.1;flavonoids;plackett-burman design;central composite design;ultrasonic,extracting柑橘果皮含有丰富的总黄酮,不仅具有降血压&#65380;降血脂&#65380;扩张冠状动脉等作用,还具有抗氧化&#65380;镇咳&#65380;平喘&#65380;抗菌抗炎&#65380;增强免疫和抗肿瘤等药理活性[1],作为保健品和药品具有广泛的应用前景[2,3]&#65377;2007年全国柑橘种植面积达181.5万hm2,产量突破1 800万t,达到历史新高&#65377;当前及今后急需开发新型有效的提取工艺来满足我国柑橘精深加工发展的需求&#65377;鄂柑1号,又名金水柑,具湖北特色的优良品种,果色橙红光亮,汁多化渣,浓甜爽口,风味醇香,可溶性固形物含量12.4%~13.89%,维生素C含量为34.26~38.27 mg·100g-1,但对其深加工的研究还很少报道&#65377;Plackett-Burman法在筛选试验重要因子方面最为有效和准确[4],可大大减少优化过程考察的因素数和试验次数,节省大量人力&#65380;和时间&#65377;中心组合设计(CCD)用于确定试验因素及其交互作用在工艺过程中对指标响应值的影响,精确地表述因素和响应值之间的关系[5]&#65377;本研究采用响应面法对超声波辅助提取鄂柑1号果皮总黄酮工艺进行优化,首先采用Plackett-Burman设计,筛选出对提取工艺起显著作用的影响因素,再利用CCD响应面分析确定最佳提取条件,通过绘制等高线图可直观地反映出影响因素之间交互作用的显著程度&#65377;1材料与方法1.1材料鄂柑1号果皮(湖北省农业科学院果树茶叶研究所),数控超声波清洗器(KQ -100DB型,昆山市超声仪器有限公司),数显恒温三用水浴锅(金坛国瑞实验仪器厂),紫外可见分光光度计(UV2600型,上海天美科学仪器有限公司),旋转蒸发仪(RE-52型,上海亚荣生化仪器),循环水式真空泵(SHZ-Ш开封宏兴科教仪器厂),植物粉碎机(LG-500A型,瑞安百信药机械厂),离心机(TDL-型,上海安亭科学仪器厂),芦丁(上海生化制品厂),其他试剂为AR级&#65377;1.2方法1.2.1鄂柑1号果皮总黄酮的提取工艺流程鄂柑1号果皮→干燥(60℃,5 h)→粉碎→过筛分级(60&#65380;80目)→乙醇溶液振荡浸泡→超声波辅助回流提取→过滤→旋转蒸发浓缩→定容→定量分析;总黄酮得率的测定采用亚硝酸钠-硝酸铝比色法[6]1.2.2试验设计[7,8]Plackett-Burman设计应用Design Expert软件对提取试验进行Plackett-Burman设计(因素水平设计见表1)&#65377;对8个主要因素进行筛选:即提取时间&#65380;液料比&#65380;超声波功率&#65380;原料粒度的分级水平&#65380;提取次数&#65380;溶剂分数&#65380;溶剂类型&#65380;提取温度,外加3个虚拟变量&#65377;每个变量分别确定(+)和(-)两个水平,以总黄酮得率为响应值,共进行12次试验以确定每个因素的影响因子&#65377;1.2.3响应面试验设计在确定了对响应值具有重要影响的因子后,采用中心组合设计(CCD),对关键因子(提取温度&#65380;溶剂分数&#65380;液料比)进行进一步研究,每个因素取5个水平,根据相应的试验表进行试验后,对数据进行二次回归拟合,得到带交互项和平方项的二次方程:=β0+βixi+βijxixj+βijx2i其中是预测响应值,xi是自变量,β0&#65380;βi&#65380;βii&#65380;βij是待估计参数偏移值&#65377;分析各因素的主效应和交互效应,最后在一定水平范围内求出最佳值&#65377;诸因子水平及编码见表2&#65377;2结果与分析2.1超声波提取工艺中影响因素的确定提取工艺所涉及的原料粉碎度&#65380;所用溶剂以及溶剂分数&#65380;用量&#65380;辅助提取方式&#65380;提取温度及提取时间等因素都对提取效果有影响&#65377;运用Plackett-Burman对诸因素进行分析,依据实际情况选取合适的因素水平,试验设计见表3,方差分析结果见表4&#65377;由表4可以看出,模型P为0.039 
7-1.20AB+0.41AC+0.36BC其中y&#65380;A&#65380;B&#65380;C分别代表总黄酮得率&#65380;提取温度&#65380;溶剂分数&#65380;液料比&#65377;响应面分析中对试验结果进行拟和的二次模型方差分析见表6&#65377;F值为4.93,多元相关系数为R=0.816,说明模型对实际情况拟合较好;P为0.010 1,表明该模型高度显著,可用来进行响应值预测&#65377;二次模型中回归系数的显著性检验表明,溶剂分数对提取得率的线性效应显著,而提取温度和液料比不显著;3因素对提取得率的曲面效应不显著;提取温度和溶剂分数对提取得率的交互影响显著,而提取温度和液料比和溶剂分数和液料比的交互影响不显著&#65377;图1&#65380;图2和图3是由多元回归方程式所做的响应曲面图及其等高线图&#65377;由此可对两因素交互影响提取的总黄酮得率进行分析&#65377;图1显示了液料比在最佳值(10.35 mL∶1g)条件下,提取温度和溶剂分数对总黄酮得率的交互影响&#65377;当溶剂分数处于低水平条件时(40%),随着温度的升高,总黄酮得率出现上升趋势,而在溶剂分数处于高水平条件下(80%),随着温度的升高,总黄酮得率反而出现降低&#65377;这说明溶剂分数与提取温度之间存在显著的交互影响,不同比例的溶剂分数与提取温度组合,将出现不同的响应值变化趋势&#65377;应综合考虑试剂成本和能耗的要求来确定合理的提取温度和液料比,才能获得较高的总黄酮得率&#65377;图2显示了溶剂分数处于最佳值(40.4%)时,提取温度和液料比对提取效果的交互影响&#65377;从三维图中可以看出,两个因素的交互影响并不显著&#65377;当提取温度处于高水平时,可以获得较高的总黄酮得率,且随着液料比的增大,总黄酮得率略有增加,但不明显,因此确定提取温度应为较高水平&#65377;图3显示了提取温度为最佳值(58℃)时,液料比和溶剂分数对提取效果的交互影响&#65377;从图中看出,在试验水平范围内,溶剂分数处于低水平时可获得较高的总黄酮得率,而此时随液料比的增加,总黄酮得率略有下降,因此料液比应为较低水平&#65377;2.3模型对提取工艺参数的优化通过回归模型来预测总黄酮得率高于6.3%的提取工艺参数为:提取温度54.8~58.9℃,溶剂分数40%~44.6%,液料比10~12.55 mL∶1g,优化出5组鄂柑果皮总黄酮提取工艺参数,并进行了验证(表7)&#65377;优化工艺中最佳实测值6.37%,与文献中报道超声波辅助提取的总黄酮得率5.56%相比提高了14.57%[11]&#65377;表明了PB和CCD方法共同优化天然功能性成分提取工艺的可行性,优化后的工艺参数更有利于总黄酮物质的提取&#65377;2.4超声波辅助提取工艺对总黄酮得率的影响提取次数&#65380;溶剂类型&#65380;分级水平相同的条件下,本试验确定的超声波辅助提取工艺(超声波功率100W,超声波处理时间20 min,提取温度58℃,溶剂分数40.4%,液料比10.35 mL∶1g),与回流法优化的提取工艺(提取时间90 min&#65380;提取温度65℃&#65380;溶剂分数70%&#65380;液料比15 mL∶1g)所得总黄酮得率进行比较,所得结果(表8)说明,超声波辅助提取比回流提取的得率5.45%提高了16.8%,超声波辅助提取提高了黄酮物质得率&#65377;3结论研究证明:①运用Plackett-Burman试验设计,从影响超声波辅助提取鄂柑总黄酮工艺过程的众多因子中高效地筛选出关键因素提取温度&#65380;溶剂分数&#65380;液料比&#65377;其他因素水平依次为超声波功率100W,超声波处理时间20min,提取次数2次,溶剂类型为乙醇,分级水平60目&#65377;②通过中心组合设计和响应面分析实现了提取工艺p[1]丁晓雯.柑桔皮提取液抗氧化及其它保健功能研究[D].重庆:西南农业大学,2004.[2]石桂风,朱玉昌.柑桔皮综合利用的实用技术[J].食品工业科技,1989(5):41-45.[3]赵雪梅,朱大元,叶兴乾,等.柑桔属中总黄酮的研究进展[J].天然产物研究与开发,2002,14(l):89-92.[4]WALLS I,CHUYATE R.Alicyclobacillus-historical perspective and preliminary characterization study[J].Dairy Food and Enviro Sanitation,1998,18(8):499-503.[5]MYERS W R.Response surface methodology [A].Encycloedia of biopharmaceutical statistics[C].New York:Marcel Dekker,2003.858-869[6]国家药典委员会.中华人民共和国药典[M].北京:化学工业出版社,2000.[7]王军,王敏,季璐.苦荞麦麸皮总总黄酮提取工艺及其数学模型研究[J].农业工程学报,2006,22(7):223-225.[8]LUQUE-GARCIA J L,LUQUE DE CASTRO M D.Ultrasound-assisted Soxhlet extraction:an expeditive approach for solid sample treatment application to the extraction of total fat from oleaginous seeds [J].Journal of Chromatography,2004,1034 (1-2):237-242.[9]王成章,郁青.银杏叶黄酮浸提工艺的研究[J].天然产物研究与开发,1998,10(2):66-70.[10]胡静丽,陈健初.杨梅叶黄酮类化合物最佳提取工艺研究[J].食品科学,2003,24(1):96-98.[11]王万能,全学军,陆天健,等.超声波协助桔皮总黄酮的提取[J].江苏农业学报,2006,22(2):168-170.。
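The citrus-peel study above fits a three-factor second-order response-surface model, y = β0 + Σ βi·xi + Σ βij·xi·xj + Σ βii·xi², to the central-composite-design runs using Design-Expert. As a rough, library-agnostic illustration of that model form only — not the paper's software or data; the coded factor levels and yields below are random placeholders — an ordinary-least-squares fit looks like this:

```python
import numpy as np

def quadratic_design_matrix(X):
    """Columns: 1, x1..x3, x1*x2, x1*x3, x2*x3, x1^2..x3^2 (three-factor CCD model)."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

# Placeholder coded factor levels (temperature, ethanol fraction, solvent ratio) and
# yields; the real CCD runs are listed in the paper's tables.
X = np.random.uniform(-1, 1, size=(20, 3))
y = 6.0 + 0.3 * X[:, 1] - 1.2 * X[:, 0] * X[:, 1] + np.random.normal(0, 0.05, 20)

beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
print("fitted coefficients:", np.round(beta, 3))
```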

A Survey of Content Based 3D Shape Retrieval Methods

A Survey of Content Based3D Shape Retrieval MethodsJohan W.H.Tangelder and Remco C.VeltkampInstitute of Information and Computing Sciences,Utrecht University hanst@cs.uu.nl,Remco.Veltkamp@cs.uu.nlAbstractRecent developments in techniques for modeling,digitiz-ing and visualizing3D shapes has led to an explosion in the number of available3D models on the Internet and in domain-specific databases.This has led to the development of3D shape retrieval systems that,given a query object, retrieve similar3D objects.For visualization,3D shapes are often represented as a surface,in particular polygo-nal meshes,for example in VRML format.Often these mod-els contain holes,intersecting polygons,are not manifold, and do not enclose a volume unambiguously.On the con-trary,3D volume models,such as solid models produced by CAD systems,or voxels models,enclose a volume prop-erly.This paper surveys the literature on methods for con-tent based3D retrieval,taking into account the applicabil-ity to surface models as well as to volume models.The meth-ods are evaluated with respect to several requirements of content based3D shape retrieval,such as:(1)shape repre-sentation requirements,(2)properties of dissimilarity mea-sures,(3)efficiency,(4)discrimination abilities,(5)ability to perform partial matching,(6)robustness,and(7)neces-sity of pose normalization.Finally,the advantages and lim-its of the several approaches in content based3D shape re-trieval are discussed.1.IntroductionThe advancement of modeling,digitizing and visualizing techniques for3D shapes has led to an increasing amount of3D models,both on the Internet and in domain-specific databases.This has led to the development of thefirst exper-imental search engines for3D shapes,such as the3D model search engine at Princeton university[2,57],the3D model retrieval system at the National Taiwan University[1,17], the Ogden IV system at the National Institute of Multimedia Education,Japan[62,77],the3D retrieval engine at Utrecht University[4,78],and the3D model similarity search en-gine at the University of Konstanz[3,84].Laser scanning has been applied to obtain archives recording cultural heritage like the Digital Michelan-gelo Project[25,48],and the Stanford Digital Formae Urbis Romae Project[75].Furthermore,archives contain-ing domain-specific shape models are now accessible by the Internet.Examples are the National Design Repos-itory,an online repository of CAD models[59,68], and the Protein Data Bank,an online archive of struc-tural data of biological macromolecules[10,80].Unlike text documents,3D models are not easily re-trieved.Attempting tofind a3D model using textual an-notation and a conventional text-based search engine would not work in many cases.The annotations added by human beings depend on language,culture,age,sex,and other fac-tors.They may be too limited or ambiguous.In contrast, content based3D shape retrieval methods,that use shape properties of the3D models to search for similar models, work better than text based methods[58].Matching is the process of determining how similar two shapes are.This is often done by computing a distance.A complementary process is indexing.In this paper,indexing is understood as the process of building a datastructure to speed up the search.Note that the term indexing is also of-ten used for the identification of features in models,or mul-timedia documents in general.Retrieval is the process of searching and delivering the query results.Matching and in-dexing are often part of the retrieval process.Recently,a 
lot of researchers have investigated the spe-cific problem of content based3D shape retrieval.Also,an extensive amount of literature can be found in the related fields of computer vision,object recognition and geomet-ric modelling.Survey papers to this literature have been provided by Besl and Jain[11],Loncaric[50]and Camp-bell and Flynn[16].For an overview of2D shape match-ing methods we refer the reader to the paper by Veltkamp [82].Unfortunately,most2D methods do not generalize di-rectly to3D model matching.Work in progress by Iyer et al.[40]provides an extensive overview of3D shape search-ing techniques.Atmosukarto and Naval[6]describe a num-ber of3D model retrieval systems and methods,but do not provide a categorization and evaluation.In contrast,this paper evaluates3D shape retrieval meth-ods with respect to several requirements on content based 3D shape retrieval,such as:(1)shape representation re-quirements,(2)properties of dissimilarity measures,(3)ef-ficiency,(4)discrimination abilities,(5)ability to perform partial matching,(6)robustness,and(7)necessity of posenormalization.In section2we discuss several aspects of3D shape retrieval.The literature on3D shape matching meth-ods is discussed in section3and evaluated in section4. 2.3D shape retrieval aspectsIn this section we discuss several issues related to3D shape retrieval.2.1.3D shape retrieval frameworkAt a conceptual level,a typical3D shape retrieval frame-work as illustrated byfig.1consists of a database with an index structure created offline and an online query engine. Each3D model has to be identified with a shape descrip-tor,providing a compact overall description of the shape. To efficiently search a large collection online,an indexing data structure and searching algorithm should be available. The online query engine computes the query descriptor,and models similar to the query model are retrieved by match-ing descriptors to the query descriptor from the index struc-ture of the database.The similarity between two descriptors is quantified by a dissimilarity measure.Three approaches can be distinguished to provide a query object:(1)browsing to select a new query object from the obtained results,(2) a direct query by providing a query descriptor,(3)query by example by providing an existing3D model or by creating a3D shape query from scratch using a3D tool or sketch-ing2D projections of the3D model.Finally,the retrieved models can be visualized.2.2.Shape representationsAn important issue is the type of shape representation(s) that a shape retrieval system accepts.Most of the3D models found on the World Wide Web are meshes defined in afile format supporting visual appearance.Currently,the most common format used for this purpose is the Virtual Real-ity Modeling Language(VRML)format.Since these mod-els have been designed for visualization,they often contain only geometry and appearance attributes.In particular,they are represented by“polygon soups”,consisting of unorga-nized sets of polygons.Also,in general these models are not“watertight”meshes,i.e.they do not enclose a volume. 
By contrast,for volume models retrieval methods depend-ing on a properly defined volume can be applied.2.3.Measuring similarityIn order to measure how similar two objects are,it is nec-essary to compute distances between pairs of descriptors us-ing a dissimilarity measure.Although the term similarity is often used,dissimilarity corresponds to the notion of dis-tance:small distances means small dissimilarity,and large similarity.A dissimilarity measure can be formalized by a func-tion defined on pairs of descriptors indicating the degree of their resemblance.Formally speaking,a dissimilarity measure d on a set S is a non-negative valued function d:S×S→R+∪{0}.Function d may have some of the following properties:i.Identity:For all x∈S,d(x,x)=0.ii.Positivity:For all x=y in S,d(x,y)>0.iii.Symmetry:For all x,y∈S,d(x,y)=d(y,x).iv.Triangle inequality:For all x,y,z∈S,d(x,z)≤d(x,y)+d(y,z).v.Transformation invariance:For a chosen transforma-tion group G,for all x,y∈S,g∈G,d(g(x),g(y))= d(x,y).The identity property says that a shape is completely similar to itself,while the positivity property claims that dif-ferent shapes are never completely similar.This property is very strong for a high-level shape descriptor,and is often not satisfied.However,this is not a severe drawback,if the loss of uniqueness depends on negligible details.Symmetry is not always wanted.Indeed,human percep-tion does not alwaysfind that shape x is equally similar to shape y,as y is to x.In particular,a variant x of prototype y,is often found more similar to y then vice versa[81].Dissimilarity measures for partial matching,giving a small distance d(x,y)if a part of x matches a part of y, do not obey the triangle inequality.Transformation invariance has to be satisfied,if the com-parison and the extraction process of shape descriptors have to be independent of the place,orientation and scale of the object in its Cartesian coordinate system.If we want that a dissimilarity measure is not affected by any transforma-tion on x,then we may use as alternative formulation for (v):Transformation invariance:For a chosen transforma-tion group G,for all x,y∈S,g∈G,d(g(x),y)=d(x,y).When all the properties(i)-(iv)hold,the dissimilarity measure is called a metric.Other combinations are possi-ble:a pseudo-metric is a dissimilarity measure that obeys (i),(iii)and(iv)while a semi-metric obeys only(i),(ii)and(iii).If a dissimilarity measure is a pseudo-metric,the tri-angle inequality can be applied to make retrieval more effi-cient[7,83].2.4.EfficiencyFor large shape collections,it is inefficient to sequen-tially match all objects in the database with the query object. 
Because retrieval should be fast,efficient indexing search structures are needed to support efficient retrieval.Since for query by example the shape descriptor is computed online, it is reasonable to require that the shape descriptor compu-tation is fast enough for interactive querying.2.5.Discriminative powerA shape descriptor should capture properties that dis-criminate objects well.However,the judgement of the sim-ilarity of the shapes of two3D objects is somewhat sub-jective,depending on the user preference or the application at hand.E.g.for solid modeling applications often topol-ogy properties such as the numbers of holes in a model are more important than minor differences in shapes.On the contrary,if a user searches for models looking visually sim-ilar the existence of a small hole in the model,may be of no importance to the user.2.6.Partial matchingIn contrast to global shape matching,partial matching finds a shape of which a part is similar to a part of another shape.Partial matching can be applied if3D shape mod-els are not complete,e.g.for objects obtained by laser scan-ning from one or two directions only.Another application is the search for“3D scenes”containing an instance of the query object.Also,this feature can potentially give the user flexibility towards the matching problem,if parts of inter-est of an object can be selected or weighted by the user. 2.7.RobustnessIt is often desirable that a shape descriptor is insensitive to noise and small extra features,and robust against arbi-trary topological degeneracies,e.g.if it is obtained by laser scanning.Also,if a model is given in multiple levels-of-detail,representations of different levels should not differ significantly from the original model.2.8.Pose normalizationIn the absence of prior knowledge,3D models have ar-bitrary scale,orientation and position in the3D space.Be-cause not all dissimilarity measures are invariant under ro-tation and translation,it may be necessary to place the3D models into a canonical coordinate system.This should be the same for a translated,rotated or scaled copy of the model.A natural choice is tofirst translate the center to the ori-gin.For volume models it is natural to translate the cen-ter of mass to the origin.But for meshes this is in gen-eral not possible,because they have not to enclose a vol-ume.For meshes it is an alternative to translate the cen-ter of mass of all the faces to the origin.For example the Principal Component Analysis(PCA)method computes for each model the principal axes of inertia e1,e2and e3 and their eigenvaluesλ1,λ2andλ3,and make the nec-essary conditions to get right-handed coordinate systems. These principal axes define an orthogonal coordinate sys-tem(e1,e2,e3),withλ1≥λ2≥λ3.Next,the polyhe-dral model is rotated around the origin such that the co-ordinate system(e x,e y,e z)coincides with the coordinatesystem(e1,e2,e3).The PCA algorithm for pose estimation is fairly simple and efficient.However,if the eigenvalues are equal,prin-cipal axes may switch,without affecting the eigenvalues. Similar eigenvalues may imply an almost symmetrical mass distribution around an axis(e.g.nearly cylindrical shapes) or around the center of mass(e.g.nearly spherical shapes). Fig.2illustrates the problem.3.Shape matching methodsIn this section we discuss3D shape matching methods. 
We divide shape matching methods in three broad cate-gories:(1)feature based methods,(2)graph based meth-ods and(3)other methods.Fig.3illustrates a more detailed categorization of shape matching methods.Note,that the classes of these methods are not completely disjoined.For instance,a graph-based shape descriptor,in some way,de-scribes also the global feature distribution.By this point of view the taxonomy should be a graph.3.1.Feature based methodsIn the context of3D shape matching,features denote ge-ometric and topological properties of3D shapes.So3D shapes can be discriminated by measuring and comparing their features.Feature based methods can be divided into four categories according to the type of shape features used: (1)global features,(2)global feature distributions,(3)spa-tial maps,and(4)local features.Feature based methods from thefirst three categories represent features of a shape using a single descriptor consisting of a d-dimensional vec-tor of values,where the dimension d isfixed for all shapes.The value of d can easily be a few hundred.The descriptor of a shape is a point in a high dimensional space,and two shapes are considered to be similar if they are close in this space.Retrieving the k best matches for a3D query model is equivalent to solving the k nearest neighbors -ing the Euclidean distance,matching feature descriptors can be done efficiently in practice by searching in multiple1D spaces to solve the approximate k nearest neighbor prob-lem as shown by Indyk and Motwani[36].In contrast with the feature based methods from thefirst three categories,lo-cal feature based methods describe for a number of surface points the3D shape around the point.For this purpose,for each surface point a descriptor is used instead of a single de-scriptor.3.1.1.Global feature based similarityGlobal features characterize the global shape of a3D model. 
Examples of these features are the statistical moments of the boundary or the volume of the model,volume-to-surface ra-tio,or the Fourier transform of the volume or the boundary of the shape.Zhang and Chen[88]describe methods to com-pute global features such as volume,area,statistical mo-ments,and Fourier transform coefficients efficiently.Paquet et al.[67]apply bounding boxes,cords-based, moments-based and wavelets-based descriptors for3D shape matching.Corney et al.[21]introduce convex-hull based indices like hull crumpliness(the ratio of the object surface area and the surface area of its convex hull),hull packing(the percentage of the convex hull volume not occupied by the object),and hull compactness(the ratio of the cubed sur-face area of the hull and the squared volume of the convex hull).Kazhdan et al.[42]describe a reflective symmetry de-scriptor as a2D function associating a measure of reflec-tive symmetry to every plane(specified by2parameters) through the model’s centroid.Every function value provides a measure of global shape,where peaks correspond to the planes near reflective symmetry,and valleys correspond to the planes of near anti-symmetry.Their experimental results show that the combination of the reflective symmetry de-scriptor with existing methods provides better results.Since only global features are used to characterize the overall shape of the objects,these methods are not very dis-criminative about object details,but their implementation is straightforward.Therefore,these methods can be used as an activefilter,after which more detailed comparisons can be made,or they can be used in combination with other meth-ods to improve results.Global feature methods are able to support user feed-back as illustrated by the following research.Zhang and Chen[89]applied features such as volume-surface ratio, moment invariants and Fourier transform coefficients for 3D shape retrieval.They improve the retrieval performance by an active learning phase in which a human annotator as-signs attributes such as airplane,car,body,and so on to a number of sample models.Elad et al.[28]use a moments-based classifier and a weighted Euclidean distance measure. Their method supports iterative and interactive database searching where the user can improve the weights of the distance measure by marking relevant search results.3.1.2.Global feature distribution based similarityThe concept of global feature based similarity has been re-fined recently by comparing distributions of global features instead of the global features directly.Osada et al.[66]introduce and compare shape distribu-tions,which measure properties based on distance,angle, area and volume measurements between random surface points.They evaluate the similarity between the objects us-ing a pseudo-metric that measures distances between distri-butions.In their experiments the D2shape distribution mea-suring distances between random surface points is most ef-fective.Ohbuchi et al.[64]investigate shape histograms that are discretely parameterized along the principal axes of inertia of the model.The shape descriptor consists of three shape histograms:(1)the moment of inertia about the axis,(2) the average distance from the surface to the axis,and(3) the variance of the distance from the surface to the axis. 
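As a concrete illustration of the D2 shape distribution introduced above (Osada et al. [66]), here is a small NumPy sketch that samples random points on a triangle mesh with area weighting, histograms the pairwise distances, and normalizes the histogram. The sample sizes and bin count are arbitrary choices, and a real system would also normalize for scale before comparing two histograms with a dissimilarity measure.

```python
import numpy as np

def sample_surface(vertices, triangles, n_samples=2000, rng=np.random.default_rng(0)):
    """Area-weighted random points on a triangle mesh."""
    v0, v1, v2 = (vertices[triangles[:, k]] for k in range(3))
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    tri = rng.choice(len(triangles), size=n_samples, p=areas / areas.sum())
    r1, r2 = rng.random(n_samples), rng.random(n_samples)
    u = 1.0 - np.sqrt(r1)                     # uniform barycentric coordinates
    v = np.sqrt(r1) * (1.0 - r2)
    w = 1.0 - u - v
    return u[:, None] * v0[tri] + v[:, None] * v1[tri] + w[:, None] * v2[tri]

def d2_distribution(vertices, triangles, bins=64, n_pairs=50000, rng=np.random.default_rng(1)):
    """Histogram of distances between random surface point pairs (the D2 descriptor)."""
    pts = sample_surface(vertices, triangles, rng=rng)
    i = rng.integers(0, len(pts), n_pairs)
    j = rng.integers(0, len(pts), n_pairs)
    d = np.linalg.norm(pts[i] - pts[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max()))
    return hist / hist.sum()                  # normalized so two models can be compared
```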
Their experiments show that the axis-parameterized shape features work only well for shapes having some form of ro-tational symmetry.Ip et al.[37]investigate the application of shape distri-butions in the context of CAD and solid modeling.They re-fined Osada’s D2shape distribution function by classifying2random points as1)IN distances if the line segment con-necting the points lies complete inside the model,2)OUT distances if the line segment connecting the points lies com-plete outside the model,3)MIXED distances if the line seg-ment connecting the points lies passes both inside and out-side the model.Their dissimilarity measure is a weighted distance measure comparing D2,IN,OUT and MIXED dis-tributions.Since their method requires that a line segment can be classified as lying inside or outside the model it is required that the model defines a volume properly.There-fore it can be applied to volume models,but not to polyg-onal soups.Recently,Ip et al.[38]extend this approach with a technique to automatically categorize a large model database,given a categorization on a number of training ex-amples from the database.Ohbuchi et al.[63],investigate another extension of the D2shape distribution function,called the Absolute Angle-Distance histogram,parameterized by a parameter denot-ing the distance between two random points and by a pa-rameter denoting the angle between the surfaces on which two random points are located.The latter parameter is ac-tually computed as an inner product of the surface normal vectors.In their evaluation experiment this shape distribu-tion function outperformed the D2distribution function at about1.5times higher computational costs.Ohbuchi et al.[65]improved this method further by a multi-resolution ap-proach computing a number of alpha-shapes at different scales,and computing for each alpha-shape their Absolute Angle-Distance descriptor.Their experimental results show that this approach outperforms the Angle-Distance descrip-tor at the cost of high processing time needed to compute the alpha-shapes.Shape distributions distinguish models in broad cate-gories very well:aircraft,boats,people,animals,etc.How-ever,they perform often poorly when having to discrimi-nate between shapes that have similar gross shape proper-ties but vastly different detailed shape properties.3.1.3.Spatial map based similaritySpatial maps are representations that capture the spatial lo-cation of an object.The map entries correspond to physi-cal locations or sections of the object,and are arranged in a manner that preserves the relative positions of the features in an object.Spatial maps are in general not invariant to ro-tations,except for specially designed maps.Therefore,typ-ically a pose normalization is donefirst.Ankerst et al.[5]use shape histograms as a means of an-alyzing the similarity of3D molecular surfaces.The his-tograms are not built from volume elements but from uni-formly distributed surface points taken from the molecular surfaces.The shape histograms are defined on concentric shells and sectors around a model’s centroid and compare shapes using a quadratic form distance measure to compare the histograms taking into account the distances between the shape histogram bins.Vrani´c et al.[85]describe a surface by associating to each ray from the origin,the value equal to the distance to the last point of intersection of the model with the ray and compute spherical harmonics for this spherical extent func-tion.Spherical harmonics form a Fourier basis on a sphere much like the 
familiar sine and cosine do on a line or a cir-cle.Their method requires pose normalization to provide rotational invariance.Also,Yu et al.[86]propose a descrip-tor similar to a spherical extent function and a descriptor counting the number of intersections of a ray from the ori-gin with the model.In both cases the dissimilarity between two shapes is computed by the Euclidean distance of the Fourier transforms of the descriptors of the shapes.Their method requires pose normalization to provide rotational in-variance.Kazhdan et al.[43]present a general approach based on spherical harmonics to transform rotation dependent shape descriptors into rotation independent ones.Their method is applicable to a shape descriptor which is defined as either a collection of spherical functions or as a function on a voxel grid.In the latter case a collection of spherical functions is obtained from the function on the voxel grid by restricting the grid to concentric spheres.From the collection of spher-ical functions they compute a rotation invariant descriptor by(1)decomposing the function into its spherical harmon-ics,(2)summing the harmonics within each frequency,and computing the L2-norm for each frequency component.The resulting shape descriptor is a2D histogram indexed by ra-dius and frequency,which is invariant to rotations about the center of the mass.This approach offers an alternative for pose normalization,because their method obtains rotation invariant shape descriptors.Their experimental results show indeed that in general the performance of the obtained ro-tation independent shape descriptors is better than the cor-responding normalized descriptors.Their experiments in-clude the ray-based spherical harmonic descriptor proposed by Vrani´c et al.[85].Finally,note that their approach gen-eralizes the method to compute voxel-based spherical har-monics shape descriptor,described by Funkhouser et al.[30],which is defined as a binary function on the voxel grid, where the value at each voxel is given by the negatively ex-ponentiated Euclidean Distance Transform of the surface of a3D model.Novotni and Klein[61]present a method to compute 3D Zernike descriptors from voxelized models as natural extensions of spherical harmonics based descriptors.3D Zernike descriptors capture object coherence in the radial direction as well as in the direction along a sphere.Both 3D Zernike descriptors and spherical harmonics based de-scriptors achieve rotation invariance.However,by sampling the space only in radial direction the latter descriptors donot capture object coherence in the radial direction,as illus-trated byfig.4.The limited experiments comparing spherical harmonics and3D Zernike moments performed by Novotni and Klein show similar results for a class of planes,but better results for the3D Zernike descriptor for a class of chairs.Vrani´c[84]expects that voxelization is not a good idea, because manyfine details are lost in the voxel grid.There-fore,he compares his ray-based spherical harmonic method [85]and a variation of it using functions defined on concen-tric shells with the voxel-based spherical harmonics shape descriptor proposed by Funkhouser et al.[30].Also,Vrani´c et al.[85]accomplish pose normalization using the so-called continuous PCA algorithm.In the paper it is claimed that the continuous PCA is better as the conventional PCA and better as the weighted PCA,which takes into account the differing sizes of the triangles of a mesh.In contrast with Kazhdan’s experiments[43]the experiments by Vrani´c 
show that for ray-based spherical harmonics using the con-tinuous PCA without voxelization is better than using rota-tion invariant shape descriptors obtained using voxelization. Perhaps,these results are opposite to Kazhdan results,be-cause of the use of different methods to compute the PCA or the use of different databases or both.Kriegel et al.[46,47]investigate similarity for voxelized models.They obtain a spatial map by partitioning a voxel grid into disjoint cells which correspond to the histograms bins.They investigate three different spatial features asso-ciated with the grid cells:(1)volume features recording the fraction of voxels from the volume in each cell,(2) solid-angle features measuring the convexity of the volume boundary in each cell,(3)eigenvalue features estimating the eigenvalues obtained by the PCA applied to the voxels of the model in each cell[47],and a fourth method,using in-stead of grid cells,a moreflexible partition of the voxels by cover sequence features,which approximate the model by unions and differences of cuboids,each containing a number of voxels[46].Their experimental results show that the eigenvalue method and the cover sequence method out-perform the volume and solid-angle feature method.Their method requires pose normalization to provide rotational in-variance.Instead of representing a cover sequence with a single feature vector,Kriegel et al.[46]represent a cover sequence by a set of feature vectors.This approach allows an efficient comparison of two cover sequences,by compar-ing the two sets of feature vectors using a minimal match-ing distance.The spatial map based approaches show good retrieval results.But a drawback of these methods is that partial matching is not supported,because they do not encode the relation between the features and parts of an object.Fur-ther,these methods provide no feedback to the user about why shapes match.3.1.4.Local feature based similarityLocal feature based methods provide various approaches to take into account the surface shape in the neighbourhood of points on the boundary of the shape.Shum et al.[74]use a spherical coordinate system to map the surface curvature of3D objects to the unit sphere. 
3.1.4. Local feature based similarity

Local feature based methods provide various approaches to take into account the surface shape in the neighbourhood of points on the boundary of the shape.

Shum et al. [74] use a spherical coordinate system to map the surface curvature of 3D objects to the unit sphere. By searching over a spherical rotation space, a distance between two curvature distributions is computed and used as a measure for the similarity of two objects. Unfortunately, the method is limited to objects which contain no holes, i.e. have genus zero. Zaharia and Prêteux [87] describe the 3D Shape Spectrum Descriptor, which is defined as the histogram of shape index values calculated over an entire mesh. The shape index, first introduced by Koenderink [44], is defined as a function of the two principal curvatures on continuous surfaces. They present a method to compute these shape indices for meshes by fitting a quadric surface through the centroids of the faces of a mesh. Unfortunately, their method requires a non-trivial preprocessing phase for meshes that are not topologically correct or not orientable.

Chua and Jarvis [18] compute point signatures that accumulate surface information along a 3D curve in the neighbourhood of a point. Johnson and Hebert [41] apply spin images, which are 2D histograms of the surface locations around a point; they apply spin images to recognize models in a cluttered 3D scene. Due to the complexity of their representation [18, 41], these methods are very difficult to apply to 3D shape matching. Also, it is not clear how to define a dissimilarity function that satisfies the triangle inequality.

Körtgen et al. [45] apply 3D shape contexts to 3D shape retrieval and matching. 3D shape contexts are semi-local descriptions of object shape centered at points on the surface of the object, and are a natural extension of the 2D shape contexts introduced by Belongie et al. [9] for recognition in 2D images. The shape context of a point p is defined as a coarse histogram of the relative coordinates of the remaining surface points. The bins of the histogram are de…

A Color Matching System for Film and Animation Scenes Based on Feature Point Extraction

Zhao Shen

[Journal] Journal of Jilin University (Information Science Edition)
[Year (Volume), Issue] 2022 (40) 4
[Abstract] To improve the color matching of film and animation scenes, a design method for a scene color-matching system based on a feature-point extraction algorithm is proposed, and a visual-cognition model of scene color matching is constructed. An auxiliary visual-image sampling model is used to extract the feature-space structure of the auxiliary visual elements in the color-matching graphics, combined with their three-dimensional distribution features; the feature-point extraction algorithm is then used to build an RGB feature-decomposition model of the color-matching visual elements; finally, visual-information parameter fusion and 3D visual reconstruction are applied to develop the software platform of the color-matching system, and 3D reconstruction software is used to implement the system's visual-scene simulation. The simulation results show that the designed color-matching system produces output with high stability and good reliability.
[Pages] 6 pages (P688-693)
[Author] Zhao Shen
[Affiliation] School of Art and Design, Xi'an Fanyi University
[Language of text] Chinese
[CLC classification] TP391

Using the Abstract GUI

The Abstract interface

Menu overview: Verify step, Abstract step, Extract step, Pins step, Logical import, Layout import, Open library.

Data preparation and library-creation flow:
• Tech.lef • GDS • Schematic Library • PDK library

5. Adding a prBoundary to the layout
This layer is used to plan the size of the IP and is purely a marker layer; going forward it can be added already while the layout is being designed. Note that when streaming out the GDS file this layer must be declared in the map file, otherwise it will be lost. The yellow outline along the edge is the prBoundary layer; to work with it, enable its display under Edit in the LSW.

When the switch is off, the entire net is defined as a pin, and the pin name is numbered up each time the net changes direction, e.g. en1, en2, and so on. When the switch is on, pins are created at the edge of the boundary, by default as squares whose side length equals the minimum width of the layer the label sits on. Related options: Adjust Step and Boundary pin max distance to boundary.

These options control the grid analysis function that calculates the best metal1 and metal2 routing grid pitches and offsets for your standard cells. This function is disabled by default.

Animate in Practice: An All-in-One Guide (complete PPT courseware)

3.2.3 Reshaping text
After text has been broken apart into filled shapes, it is very easy to change the shape of the characters. To reshape broken-apart text, use tools such as the Selection tool or the Subselection tool in the Tools panel and apply the various deformation operations to it.

3.2.4 Anti-aliasing text
Text in Animate sometimes looks blurry, usually because the text is too small to be displayed clearly. Tuning the anti-aliasing settings in the text's Properties panel solves this problem well.

2.3.1 Using the Paint Bucket tool
In Animate 2020, the Paint Bucket tool fills the interior of a shape with color; solid colors, gradients, and bitmaps can all be used as the fill.

2.3.2 Using the Ink Bottle tool
In Animate 2020, the Ink Bottle tool changes the color of vector strokes or of a shape's outline, changes the fill color of enclosed regions, and can pick up colors.

2.3.3 Using the Eyedropper tool

2.4.2 Using the Oval tool
The Oval tool and the Primitive Oval tool in the Tools panel draw elliptical shapes. They are similar to the rectangle tools; the main difference is that the oval tools have start/end angle and inner-radius options.

2.4.3 Using the PolyStar tool
The PolyStar tool draws polygons and star shapes, which come up frequently in real animation work. After selecting the PolyStar tool, move the pointer onto the Stage and drag with the left mouse button held down: by default a pentagon is drawn, and other polygons and stars can be drawn by changing the tool settings.

2.2.3 Using the Classic Brush tool
In Animate 2020, the Classic Brush tool is used to paint vector color blocks of various shapes or to create special painted effects.

2.2.3 Using the Pen tool
The Pen tool is commonly used to draw relatively complex, precise curved paths. A "path" consists of one or more straight and curved segments whose start and end points are marked by anchor points. With the Pen tool in the Tools panel you can create and edit paths in order to draw the shapes you need.

Fiber Wire: creating a fiber-wire animation effect

In After Effects (AE), "fiber wire" is a commonly used effect technique that can produce realistic fiber-drawing animations. It is typically used to present fine fibers, thin lines, and similar details, giving a realistic yet almost magical look. A simple way to build such an animation is described below.

1. Import the footage and set up the composition. First prepare a background image or video as the bottom layer of the composition, and drag it into the AE Project panel to import it. Then create a new composition whose size and duration match the footage, and drag the footage into the composition.

2. Create the fiber wire. In the composition, create the layer that will hold the fiber wire: create a new shape layer, choose the line tool, and draw a straight line in the composition. Adjust the line's color, thickness, and opacity so that it harmonizes with the background footage.

3. Add effects. Select the fiber-wire layer and apply the "Curves" effect from the Effects panel. Adjust its parameters so that the fiber takes on curved, bending shapes; the shape and direction of the fiber can be controlled by adjusting the shutter-angle and rotation parameters.

4. Animate the anchor point. The next step is to animate the fiber wire. Select the fiber-wire layer, open the "Transform" group, and click the "add animation" button under the "Anchor Point" option. Use keyframes to control the fiber's motion and shape changes along the timeline; different animations can be achieved by adjusting the anchor-point positions and their interpolation curves.

5. Path animation. If the fiber wire should move along a specific path, use a path animation. Create a new shape layer in the composition and draw the path with the Ellipse tool or the Pen tool. Then drag the path layer above the fiber-wire layer and set the fiber-wire layer's "follow path" property to the path layer.

6. Add light and glow. To make the fiber wire look more realistic, add some lighting effects. Select the fiber-wire layer, apply the "Inner Glow" effect, and adjust its brightness and color so that the fiber appears to emit light. Several stacked glow layers can be used to add depth and a sense of volume.

7. Render and export. Once the animation is finished, render and export it: choose "Composition" in the menu bar and select "Add to Render Queue".

MeshLab: orienting normal vectors

MeshLab is an open-source 3D model-processing application that can be used to work with 3D point-cloud data, mesh models, and so on. In MeshLab, normal-vector orientation can be carried out with the following steps:

1. Open MeshLab and import the 3D model to be processed: choose "File" -> "Import Mesh" in the menu bar, select the model file in the dialog that opens, and set the relevant parameters.
2. After the model has been imported, choose "Filters" -> "Normals, Curvatures and Orientation" -> "Smooth Normals on a Point Set" to compute the model's normals. In the dialog that pops up, set the normal parameters such as the number of iterations and the smoothness, click "Apply", and wait for the computation to finish.
3. When the normals have been computed, choose "Filters" -> "Remeshing, Simplification and Reconstruction" -> "Surface Reconstruction: Ball Pivoting" to reconstruct a surface from the point cloud. In the dialog, set the reconstruction parameters such as the reconstruction accuracy and surface smoothness, click "Apply", and wait for the reconstruction to finish.
4. Once reconstruction is complete, the processed 3D model can be inspected and its normals used for further analysis and processing.

Note that the orientation of the normals can affect the final 3D model, so it needs to be adjusted and tuned to the actual situation. Likewise, when processing large-scale point-cloud data, the computer hardware and the software parameters need to be configured sensibly to keep both processing speed and accuracy acceptable.
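The same sequence of steps can also be scripted instead of clicked through. The sketch below uses the pymeshlab Python bindings; filter and parameter names differ between pymeshlab releases, so the strings used here are assumptions that should be checked against ms.print_filter_list() for the installed version.

```python
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("scan_points.ply")              # File -> Import Mesh

# Estimate and smooth normals on the point set
# (Filters -> Normals, Curvatures and Orientation).
ms.apply_filter("compute_normals_for_point_sets",
                k=10,            # neighbours used for the local plane fit (assumed value)
                smoothiter=2)    # smoothing iterations (assumed value)

# Ball-pivoting surface reconstruction
# (Filters -> Remeshing, Simplification and Reconstruction).
ms.apply_filter("surface_reconstruction_ball_pivoting")

ms.save_current_mesh("reconstructed.ply")
```

As noted above, the resulting normal orientation should still be inspected, since a flipped orientation will affect any later processing.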

Research on a Fuzzy-Controlled Permanent Magnet Synchronous Motor Servo System

Abstract: Permanent magnet synchronous motors (PMSMs) are widely used in all kinds of servo systems thanks to their small size, light weight, reliable operation, high energy-conversion efficiency, wide speed range, and good static and dynamic characteristics. Control accuracy, steady-state performance, and disturbance rejection are the key measures of a servo system's overall quality, and choosing a suitable control strategy is essential for obtaining high accuracy, good stability, and strong disturbance rejection; control strategies for PMSM servo systems are therefore currently an active research topic.

The conventional double closed-loop PID strategy addresses a linear, time-invariant control problem. A PMSM, however, is an inherently nonlinear, strongly coupled, and time-varying "system"; its load is also markedly uncertain and nonlinear, and the drive is exposed to disturbances of varying severity during operation. In addition, the motor parameters change while it runs. Because PID gains are tuned against a fixed, accurate mathematical model established in advance and cannot be adjusted as the controlled plant changes, the system inevitably suffers from limited steady-state accuracy and disturbance rejection.

Fuzzy control theory uses fuzzy set theory to embed mature expert experience and rules directly in the control strategy and to adjust the controller parameters online as the plant parameters change, which yields good control performance. This thesis studies fuzzy control theory, designs two of the most basic and effective fuzzy controllers on that basis, applies them to a PMSM servo system, builds a system model, studies it in simulation, and compares the results with those of a conventional double closed-loop PID control system. The simulation results show that this new fuzzy PID control strategy significantly improves the response speed, greatly improves the static and dynamic performance of the PMSM servo system, strengthens its disturbance rejection, and meets the requirements of a high-performance servo system.

Keywords: permanent magnet synchronous motor; servo system; closed loop; fuzzy control; system simulation

The Research of PMSM Servo System Based on Fuzzy Logic Control

Abstract: Permanent Magnet Synchronous Motors (PMSMs) have good characteristics such as small size, light weight, reliable operation, high energy-conversion efficiency, a wide speed range, and good static and dynamic behavior, so they are widely used in various servo systems. Control precision, stability, and anti-jamming capability are important factors that determine the overall performance of a servo system; to give the system higher precision, stability, and stronger anti-jamming capability, using an appropriate control strategy is essential, and the control strategy of PMSM servo systems is therefore currently a hot research topic. The traditional double closed-loop PID control strategy deals with linear, time-invariant control problems, whereas a PMSM is a nonlinear, strongly coupled, and time-varying "system" whose servo load also shows strong uncertainty and nonlinearity and which is subject to various disturbances during operation; in addition, the electrical parameters of the PMSM change while the motor is operating. Since PID control parameters are tuned on the basis of an accurate mathematical model established in advance and cannot vary in time, the system necessarily suffers from low steady-state precision and poor disturbance rejection. Fuzzy logic control uses fuzzy set theory to put the mature experience and rules of experts into the control strategy and to alter the control parameters in time according to the change of the system parameters, which achieves a good control effect. This dissertation designs two basic and practical fuzzy controllers based on a study of fuzzy logic control theory, applies them to a PMSM servo system, builds a system model, and simulates it, comparing the results with those of a traditional double closed-loop PID control system. The simulation results show that this new type of fuzzy PID control strategy significantly improves the response speed of the system, greatly improves the static and dynamic performance of the PMSM servo system, enhances its anti-interference capability, and meets the requirements of high-performance servo systems.

Keywords: PMSM; servo system; closed loop; fuzzy logic control
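The thesis itself is not reproduced here, but the following sketch shows the general shape of the fuzzy-adapted speed controller the abstract describes: a PI loop whose gains are corrected online by a small Mamdani-style rule base driven by the speed error and its rate of change. The membership functions, rule tables, scaling factors, and base gains are illustrative assumptions, not the author's actual design.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Three linguistic terms (Negative, Zero, Positive) on the normalized range [-1, 1].
TERMS = {"N": (-2.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 2.0)}

# Rule tables: row = term of the error e, column = term of the error change ec.
# Entries are normalized corrections applied to Kp and Ki.
DKP = {"N": {"N": +1.0, "Z": +0.5, "P": 0.0},
       "Z": {"N": +0.5, "Z": 0.0, "P": -0.5},
       "P": {"N": 0.0, "Z": -0.5, "P": -1.0}}
DKI = {"N": {"N": -1.0, "Z": -0.5, "P": 0.0},
       "Z": {"N": -0.5, "Z": 0.0, "P": +0.5},
       "P": {"N": 0.0, "Z": +0.5, "P": +1.0}}

def fuzzy_correction(e, ec, table):
    """Product t-norm inference with weighted-average defuzzification."""
    num = den = 0.0
    for te, mf_e in TERMS.items():
        for tec, mf_ec in TERMS.items():
            w = tri(e, *mf_e) * tri(ec, *mf_ec)
            num += w * table[te][tec]
            den += w
    return num / den if den > 1e-9 else 0.0

class FuzzyPI:
    """Speed-loop PI controller whose gains are retuned online by the fuzzy rules."""
    def __init__(self, kp=0.8, ki=20.0, kp_range=0.4, ki_range=10.0, dt=1e-3):
        self.kp0, self.ki0 = kp, ki
        self.kp_range, self.ki_range = kp_range, ki_range
        self.dt, self.integral, self.prev_e = dt, 0.0, 0.0

    def step(self, speed_ref, speed_meas):
        e = speed_ref - speed_meas
        ec = (e - self.prev_e) / self.dt
        self.prev_e = e
        # Normalize inputs to [-1, 1] before fuzzification (scalings are assumptions).
        en = float(np.clip(e / 100.0, -1.0, 1.0))
        ecn = float(np.clip(ec / 5000.0, -1.0, 1.0))
        kp = self.kp0 + self.kp_range * fuzzy_correction(en, ecn, DKP)
        ki = self.ki0 + self.ki_range * fuzzy_correction(en, ecn, DKI)
        self.integral += e * self.dt
        return kp * e + ki * self.integral   # q-axis current command iq*
```

In a complete drive this would sit in front of the current loops of the usual double closed-loop structure; the point of the fuzzy layer is simply that Kp and Ki are no longer fixed constants tuned against one nominal model.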

Reflections on U3D scene case studies

A short share of U3D art-optimization experience (freeing memory and indirectly tracking down the causes of poor performance).

1. Asset directory structure and naming conventions
I consider this a very basic yet very important step: a good structure greatly improves development efficiency and lays the groundwork for later optimization and hot updates. Put all art assets into an Arts or Res folder created under Assets, and split Res into folders by art discipline: UI (interface assets such as icons and character art), Character (characters), Map and Scenes (level maps or scenes), Fx (effects), Shader (shaders), Other, and so on. Each of these contains further, more specific sub-folders, for example the effects folders Assets/Res/Fx/Prefab, Texture, model, Animation, timeline, etc. The assets themselves also need clear names, for example an effect prefab named Fx_saber(character or scene/UI name)_Excalibur(skill name)_hit_01, which is hit effect 01 of the Excalibur skill in the King Arthur contract; characters follow the same idea, e.g. Character_saber. (The exact convention is up to you; the main goal is that names are understandable at a glance. A small checker of the kind sketched after this section can enforce such rules.)

2. Textures
2.1 If the game stutters when a skill is cast, or an interface is slow to open, check whether the textures are overly large and uncompressed (code efficiency is a separate topic). For iOS, RGBA Compressed PVRTC 4 bits is recommended; on Android, RGBA Compressed ETC2 8 bits is recommended for textures with an alpha channel (support needs to be in place in advance).
2.2 A side note on the Blend mode in shaders: Blend costs more memory than Add because of the extra render work it needs, so effect textures that do not require transparency, or that are used by Add particles, can have their alpha channel removed.
2.3 Generate Mip Maps is enabled on textures by default; turning it off saves roughly 30% of the space.
2.4 All textures should be square (1:1) with power-of-two sizes, and preferably no larger than 1024×1024; otherwise the installed game can crash outright on low-end devices, because older system versions cannot read atlases larger than roughly 1000 pixels.
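As referenced in item 1 above, conventions like these are easy to enforce mechanically. The sketch below is a hypothetical checker, not part of the original notes: the directory layout, the file-name pattern, and the 1024-pixel limit are taken from the text as assumptions, and Pillow is used to read image sizes.

```python
# Walks Assets/Res and reports effect prefabs that break the naming pattern and
# textures that are not square, not power-of-two, or larger than 1024 x 1024.
import os
import re
from PIL import Image   # pip install pillow

RES_ROOT = os.path.join("Assets", "Res")
FX_PREFAB = re.compile(r"^Fx_[A-Za-z0-9]+_[A-Za-z0-9]+_[a-z]+_\d{2}\.prefab$")
TEXTURE_EXTS = {".png", ".jpg", ".tga", ".psd"}

def power_of_two(n):
    return n > 0 and (n & (n - 1)) == 0

for root, _, files in os.walk(RES_ROOT):
    for name in files:
        path = os.path.join(root, name)
        ext = os.path.splitext(name)[1].lower()
        if ext == ".prefab" and (os.sep + "Fx" + os.sep) in path:
            if not FX_PREFAB.match(name):
                print("naming violation:", path)
        elif ext in TEXTURE_EXTS:
            try:
                w, h = Image.open(path).size
            except OSError:
                continue
            if w != h or not power_of_two(w) or w > 1024:
                print("texture size violation:", path, (w, h))
```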

How to create psychedelic distortion art with Adobe Photoshop

Psychedelic distortion art has long been a trend pursued by many art lovers, and in the digital era Adobe Photoshop has become an important tool for creating it: its powerful features and wide range of tools let artists produce striking distortion pieces with ease.

First, a creative concept is needed. Inspiration can come from everyday objects, natural scenery, human figures, and many other elements; studying existing artwork and what is being created on social media also helps spark ideas. Once you have an idea, you can start realizing it in Photoshop.

The transform tools are indispensable during creation: they allow free deformation of an image and thus produce the distorted look. Before choosing a transform tool, open the image you want to work on; then choose "Edit" -> "Transform" -> "Distort" in the menu bar, and a grid-like reference appears. By dragging the grid's nodes you can stretch and distort the image and even create entirely new shapes.

Another useful tool is Liquify. By adjusting the brush size and strength of the Liquify tool you can produce psychedelic distortions on the image; for example, the "fisheye"-style bulge mode makes the center of the image swell outward for a subtle warping effect, and the "rotate right" / "rotate left" modes can be tried to twist the sides of the image.

Besides the transform and Liquify tools, color adjustment is one of the key steps in making psychedelic distortion art. Adjusting brightness, contrast, saturation, and similar parameters creates color effects with a character of their own; try the Levels and Hue/Saturation panels, or use the Curves tool to customize the color response. Adjusted colors make the image more vivid and intense and strengthen the visual impact of the distortion.

Another technique that reinforces the psychedelic feel is adding texture. In Photoshop, texture can be added through pattern fills, texture maps, and similar means; free texture assets can be found online, imported into Photoshop, and blended in using layer blending.
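Everything above is driven through Photoshop's interactive tools, but the same family of effects can also be scripted. The short sketch below is a programmatic analogue of the Edit -> Transform -> Distort step, written with NumPy and Pillow rather than Photoshop; the file names and wave parameters are placeholders.

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("input.jpg").convert("RGB"))
h, w, _ = img.shape
yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")

# Sine-wave displacement field: each output pixel samples a shifted source pixel.
amp, period = 25.0, 120.0          # distortion strength and wavelength in pixels
src_x = (xx + amp * np.sin(2 * np.pi * yy / period)).astype(int) % w
src_y = (yy + amp * np.sin(2 * np.pi * xx / period)).astype(int) % h

Image.fromarray(img[src_y, src_x]).save("warped.jpg")
```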


Abstract

We present an approach for extracting coherently sampled animated meshes from input sequences of incoherently sampled meshes representing a continuously evolving shape. Our approach is based on a multiscale adaptive motion estimation procedure followed by propagation of a template mesh through time. Adaptive signed distance volumes are used as the principal shape representation, and a Bayesian optical flow algorithm is adapted to the surface setting with a modification that diminishes the interference between unrelated surface regions. Additionally, a parametric smoothing step is employed to improve the sampling coherence of the model. The result of the proposed procedure is a single animated mesh. We apply our approach to human motion data.

1 Introduction

Animated meshes are widely employed in character animation, visualization, and computational simulation applications. An animated mesh is a sequence of meshes with the same connectivity whose vertex positions change in time. This is a convenient representation for dynamically changing shapes, with many processing, rendering, and compression tasks easily handled. For instance, modern compression methods can take advantage of the temporal coherence present in animated mesh data, resulting in a very compact shape representation [IR03][KG]. Unfortunately, several recently introduced state-of-the-art dynamic shape acquisition methods [SMP03][ZCS03] do not produce their result in the animated mesh form; rather, a sequence of meshes of varying connectivity is produced. Volumetric morphing and isosurface extraction are also examples of applications that produce evolving surfaces that are not meshed consistently in time, and have changing connectivity and sampling from frame to frame. Reconstruction of animated mesh sequences from such data is an important problem we aim to address in this paper.

Figure 1: Comparison of trajectories for three methods of particle propagation for the jump sequence: fitting, temporal prediction followed by fitting, our estimated flow followed by fitting. The left two propagations fail, the right one succeeds in preserving the surface sampling.

We restrict our effort to the simpler scenario of unchanging topology both in the input data and in the desired output. A single mesh template of the same connectivity is propagated through time and is fitted to the input surface data in every frame of the sequence. The main challenge is to establish correspondence between consecutive shapes. Computing such a correspondence relation constitutes the main contribution of this paper. We assume that the input surfaces are closed and without boundaries; the first step of the algorithm converts each input shape into an adaptive volumetric representation with a signed distance transform. Once a volumetric representation is obtained, we run a Bayesian motion estimation procedure similar to a differential optical flow approach from image processing. The resulting vector field defined on the surface is used for the initial propagation of the mesh template. The further fitting and parametric smoothing steps result in a temporally coherent animated mesh sequence.

Our main contribution is to show that for continuously evolving surface data the propagation by an adaptively estimated flow field can help to establish temporally coherent meshing of the dynamic surface.

1.1 Related work

A similar problem of creating a single coherent geometry representation for a number of shapes has been recently addressed by computer graphics researchers. For instance, when several range scans of the same individual are given, Allen et al. [ACP02] solve the problem of fitting a subdivision surface template to create a posable 3D model. That work was extended to handle the parameterization of whole-body data for multiple humans [ACP03]. In both cases, a sparse set of 3D markers is used for model registration. Related work on finding consistent parameterizations of dissimilar shapes by Praun et al. [PSS01] has also used user-defined feature markers in a multiresolution remeshing procedure. Neither of the studies mentioned above explicitly considers a continuously evolving surface.

In this paper, we concentrate our effort on extracting a single animated mesh from a continuously evolving shape sequence. We do not assume the availability of markers or surface textures in the original data, with the hope that strong shape coherence will give enough information for extracting surface motion. We split our problem into the motion estimation and the mesh propagation parts. The motion estimation problem has long been studied in the computer vision community [BB81]. In particular, several efficient optical flow algorithms were introduced [SS96][Sim93]. We extend the multiscale optical flow algorithm of Simoncelli [Sim93] to work on adaptive signed distance volume representations and modify it to remove unneeded interactions between unrelated surface patches. Our motion estimation and surface fitting procedures require a volumetric shape representation, and an efficient conversion of input polygonal data is necessary for the overall good performance of the algorithm. We employ an adaptive shape representation similar to the ADF representation of Frisken et al. [FPRJ00]; due to some differences and to avoid confusion we generically call it ASDV (adaptive signed distance volume). We convert our input meshes into a sequence of ASDV datasets and use them for both motion estimation and surface fitting.
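To make the two halves of the pipeline more concrete, here is first a deliberately simplified, single-scale stand-in for the motion-estimation step: per-voxel 3D Lucas–Kanade-style normal equations on two consecutive signed distance volumes stored as dense arrays. The actual estimator in the paper is multiscale, Bayesian, and operates on adaptive volumes, so the window size, the regularization, and the dense-grid assumption below are for illustration only.

```python
import numpy as np

def local_flow(sdf_t, sdf_t1, window=2, reg=1e-3):
    """Return a (Z, Y, X, 3) displacement field, in voxel units, estimated from
    two consecutive signed distance volumes by windowed least squares."""
    gz, gy, gx = np.gradient(sdf_t)          # spatial derivatives
    gt = sdf_t1 - sdf_t                      # temporal derivative
    Z, Y, X = sdf_t.shape
    flow = np.zeros((Z, Y, X, 3))
    w = window
    for z in range(w, Z - w):
        for y in range(w, Y - w):
            for x in range(w, X - w):
                sl = (slice(z - w, z + w + 1),
                      slice(y - w, y + w + 1),
                      slice(x - w, x + w + 1))
                A = np.stack([gx[sl].ravel(), gy[sl].ravel(), gz[sl].ravel()], axis=1)
                b = -gt[sl].ravel()
                # Regularized normal equations: (A^T A + reg I) u = A^T b
                flow[z, y, x] = np.linalg.solve(A.T @ A + reg * np.eye(3), A.T @ b)
    return flow
```

And here is a correspondingly simplified version of the propagation-plus-fitting step: template vertices are advected by the estimated flow and then projected onto the zero level set of the next frame's signed distance volume along its gradient. The grid origin and spacing conventions, the world-unit flow field, and the fixed projection iteration count are assumptions, and the parametric smoothing step is omitted.

```python
import numpy as np

def trilinear(vol, p, origin, h):
    """Trilinearly interpolate a dense volume (scalar or per-voxel vector) at point p."""
    g = (p - origin) / h
    i0 = np.floor(g).astype(int)
    t = g - i0
    out = 0.0
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))
                out = out + w * vol[i0[2] + dz, i0[1] + dy, i0[0] + dx]
    return out

def sdf_gradient(sdf, p, origin, h):
    """Central-difference gradient of the signed distance field at p."""
    e = np.eye(3) * h
    return np.array([(trilinear(sdf, p + e[k], origin, h) -
                      trilinear(sdf, p - e[k], origin, h)) / (2 * h) for k in range(3)])

def propagate_vertices(verts, flow, sdf_next, origin, h, project_iters=5):
    """Advect template vertices by the flow field, then snap them to the zero
    level set of the next frame's signed distance volume."""
    out = np.empty_like(verts)
    for i, v in enumerate(verts):
        p = v + trilinear(flow, v, origin, h)        # motion-estimation step
        for _ in range(project_iters):               # simplified fitting step
            d = trilinear(sdf_next, p, origin, h)
            g = sdf_gradient(sdf_next, p, origin, h)
            p = p - d * g / (np.linalg.norm(g) + 1e-12)
        out[i] = p
    return out
```

In the actual method the volumes are adaptive (ASDV) rather than dense, the flow estimate is multiscale, and interactions between unrelated surface patches are explicitly suppressed; but the overall structure — estimate a flow on the distance volume, advect the template, then fit it to the next surface — is the same.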