CMS — Kluckhohn-Strodtbeck Cultural Dimensions (Organizational Culture Dimensions)
Kluckhohn and Strodtbeck's Value Orientations

Kluckhohn and Strodtbeck, after examining hundreds of cultures, reached the conclusion that people turn to their culture for answers to the following questions: (1) What is the character of human nature? (2) What is the relation of humankind to nature? (3) What is the orientation toward time? (4) What is the value placed on activity? And (5) what is the relationship of people to each other? The answers to these crucial questions serve as the bases for the five value orientations at the heart of their approach. These five orientations might best be visualized as points on a continuum. As you move through the five orientations, you will undoubtedly notice some of the same characteristics discussed by Hofstede. This is very understandable, because both approaches are talking about meaningful values found in all cultures. Hence, both sets of researchers were bound to track many of the same patterns.

Human Nature Orientation: 1. evil 2.
Heilongjiang Province 2023 English A-Level Exam: Questions and Answers
Three sample papers are provided below for reference.

Sample 1

Black Dragon River Province 2023 English A-Level Exam Questions and Answers

Section A: Reading Comprehension
Read the following article and answer the questions below.

Title: The Beauty of Black Dragon River

Located in the northeastern part of China, Black Dragon River Province is known for its stunning natural beauty and rich cultural heritage. The province is home to a diverse range of landscapes, including lush forests, majestic mountains, and crystal-clear rivers. One of the most popular attractions in Black Dragon River Province is the Black Dragon River itself, which meanders through the picturesque countryside. The river is famous for its clear blue waters and tranquil atmosphere, making it a favorite spot for locals and tourists alike. In addition to its natural beauty, Black Dragon River Province is also known for its vibrant cultural scene. The province is home to a number of traditional villages where visitors can experience the rich cultural traditions of the region. Local artisans create beautiful handicrafts, such as intricately carved woodwork and brightly colored textiles. Overall, Black Dragon River Province offers visitors a unique blend of natural beauty and cultural richness. Whether you're interested in exploring the great outdoors or immersing yourself in the local culture, this province has something for everyone.

Questions:
1. Where is Black Dragon River Province located?
2. What is the Black Dragon River famous for?
3. What can visitors experience in the traditional villages of Black Dragon River Province?
4. What makes Black Dragon River Province a unique destination for tourists?

Answers:
1. Black Dragon River Province is located in the northeastern part of China.
2. The Black Dragon River is famous for its clear blue waters and tranquil atmosphere.
3.
In the traditional villages of Black Dragon River Province, visitors can experience the rich cultural traditions of the region, including beautifully crafted handicrafts.
4. Black Dragon River Province offers a unique blend of natural beauty and cultural richness, making it an ideal destination for tourists.

Section B: Writing
Write an essay of at least 500 words on the topic "The Importance of Preserving Cultural Heritage." In your essay, be sure to discuss:
- Why is it important to preserve cultural heritage?
- How can we promote the preservation of cultural heritage?
- What are the benefits of preserving cultural heritage for future generations?

Section C: Listening Comprehension
Listen to the following audio clip and answer the questions below.

Audio Clip: The History of Black Dragon River Province

Questions:
1. What is one of the main attractions in Black Dragon River Province?
2. What is the Black Dragon River famous for?
3. How does the speaker describe the cultural scene in Black Dragon River Province?

Answers:
1. One of the main attractions in Black Dragon River Province is the Black Dragon River itself.
2. The Black Dragon River is famous for its clear blue waters.
3. The speaker describes the cultural scene in Black Dragon River Province as vibrant and rich.

Section D: Grammar and Vocabulary
Complete the following sentences with the correct grammar and vocabulary:
1. I __________ (visit) Black Dragon River Province last summer and I __________ (be) amazed by its beauty.
2. The local artisans __________ (create) beautiful handicrafts for generations.
3. It is important to __________ (preserve) cultural heritage for future generations to enjoy.

Answers:
1. visited, was
2. have been creating
3. preserve

End of Exam. We hope you enjoyed this exam on Black Dragon River Province! Good luck with your studies and future travels.

Sample 2

Black Dragon River Province, located in the northeastern part of China, is known for its beautiful scenery, rich history, and diverse culture.
In 2023, the province hosted the A-level English exam, which has become a topic of interest among students and educators. The exam tested students' proficiency in various aspects of the English language, including grammar, vocabulary, reading comprehension, and writing skills. Here are the 2023 A-level English exam questions and answers.

Grammar Section:
1. Complete the sentence: If I ________ more time, I would have finished the project.
Answer: had
2. Choose the correct form of the verb to complete the sentence: She ________ to Japan last summer.
A) goes  B) went  C) is going  D) will go
Answer: B) went
3. Fill in the blank with the appropriate pronoun: My sister and ________ went to the store.
Answer: I

Vocabulary Section:
1. Choose the synonym for "ubiquitous":
A) rare  B) common  C) unique  D) exceptional
Answer: B) common
2. What is the opposite of "vibrant"?
A) dull  B) lively  C) energetic  D) colorful
Answer: A) dull

Reading Comprehension Section:
Read the following passage and answer the questions that follow:
"Mount Changbai, also known as Baekdu Mountain, is a majestic peak located on the border between China and North Korea. It is famous for its stunning natural beauty and rich biodiversity. The mountain is also considered sacred by both Chinese and Korean people, with many myths and legends surrounding it. Visitors can enjoy hiking, skiing, and exploring the volcanic landscape of Mount Changbai."
1. Where is Mount Changbai located?
Answer: Mount Changbai is located on the border between China and North Korea.
2. What is Mount Changbai famous for?
Answer: Mount Changbai is famous for its stunning natural beauty and rich biodiversity.

Writing Section:
Write an essay on the following topic: "Describe your dream vacation destination and why you would like to visit it."
Answer: My dream vacation destination is the Maldives, a tropical paradise in the Indian Ocean. I have always been fascinated by the crystal-clear waters, white sandy beaches, and luxurious resorts of the Maldives.
I would love to spend my days relaxing on the beach, snorkeling in the vibrant coral reefs, and experiencing the warm hospitality of the local people. The Maldives is a place where I can escape from the hustle and bustle of daily life and unwind in a peaceful and tranquil environment. I am drawn to the beauty and serenity of this exotic destination, and I hope to one day make my dream vacation a reality.

Overall, the 2023 A-level English exam in Black Dragon River Province challenged students to demonstrate their English language skills in a variety of areas. The exam questions were designed to assess students' understanding of grammar, vocabulary, reading comprehension, and writing abilities. As students prepare for future exams, they can use these sample questions and answers as a guide to improve their English proficiency. Good luck to all the students taking the A-level English exam in the coming years!

Sample 3

2023 English A-Level Exam Questions and Answers in Heilongjiang Province

Paper 1: Reading Comprehension

Part A: Multiple Choice Questions
Read the following passage and answer the questions that follow:
The English language has become an essential skill in today's globalized world. It is spoken by over a billion people around the globe, and is the official language in many countries. Learning English opens up a world of opportunities, from studying abroad to working in multinational companies.
1. What is the main idea of the passage?
A. English is an important language to learn.
B. English is only spoken in a few countries.
C. Learning English has no benefits.
D. English is a difficult language to learn.
2. How many people speak English worldwide?
A. Half a billion  B. A billion  C. Two billion  D. Three billion
3. What opportunities can learning English provide?
A. Working only in local companies
B. Studying abroad
C. Learning only one language
D. Reading foreign literature

Part B: Matching
Match the words with their definitions:
1. Resilient
2. Conscientious
3. Adaptable
4. Tenacious
a.
Able to adjust easily to new conditions
b. Showing great care and attention to detail
c. Strong and able to recover from difficult situations
d. Holding onto something firmly or persistently

Paper 2: Writing
In no less than 300 words, write an essay on the importance of learning a second language. Include examples and personal experiences to support your argument.

Answer Key:
Part A:
1. A. English is an important language to learn.
2. B. A billion
3. B. Studying abroad
Part B:
1. Resilient: c
2. Conscientious: b
3. Adaptable: a
4. Tenacious: d

We hope that these questions and answers help you in preparing for the English A-Level exam in Heilongjiang Province in 2023. Good luck on your exam!
思途 Travel CMS Tag Reference: Front-End Template Secondary-Development Documentation
思途 CMS Tag Reference. This document describes the function and usage of the system tags. System tags are stored under the include/taglib/smore/ directory and are named in the format tagname.lib.php.

1. attrgrouplist
Purpose: reads the attribute-group lists for tours, hotels, car rentals, scenic spots, articles, photo albums, and group-buy deals. This tag is generally used together with getattrbygroup on search/listing pages, so that the attributes of the corresponding channel can be displayed.
Parameters:
typeid: the ID of the channel whose attributes to read (tour: 1, hotel: 2, car rental: 3, article: 4, scenic spot: 5, photo album: 6, group buy: 13)
filterid: the attribute-group ID(s) to exclude; separate multiple IDs with commas.
row: the number of records to fetch.
Example: this tag is generally used on search/listing pages. In the example, typeid=1 reads the tour attribute groups for display, and filterid='91' excludes the attribute group with ID 91. Attribute-group IDs can be looked up on the attribute-group management page in the back end.
2. getattrbygroup
Purpose: reads the attribute list of a particular attribute group by its group ID or group name. This tag is generally used together with attrgrouplist to quickly read the information of several attribute groups.
Parameters:
groupname: the name of the attribute group, e.g. "旅行方式" (travel mode)
typeid: as above
groupid: the ID of the attribute group.
row: the number of records to fetch.
Template fields:
[field:title/]: the name of the current attribute
[field:id/]: the ID of the current attribute

Examples:
1. To list the attributes of the tour attribute group "交通选择" (transport choice), use the following code:
{sline:getattrbygroup typeid='1' groupname='交通选择'}<a data-id="[field:id/]">[field:title/]</a>{/sline:getattrbygroup}
The same effect can be achieved with groupid:
{sline:getattrbygroup typeid='1' groupid='84'}<a data-id="[field:id/]">[field:title/]</a>{/sline:getattrbygroup}
The groupid can be found in the attribute configuration of the corresponding channel in the back end.
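Since the two tags are described as working together, a combined template might look like the following. Note this nesting, and the use of [field:id/] from the outer tag as the inner tag's groupid, are assumptions based on the descriptions above, not a verified snippet from the 思途 CMS sources:

```
{sline:attrgrouplist typeid='1' filterid='91'}
  <h3>[field:title/]</h3>
  {sline:getattrbygroup typeid='1' groupid='[field:id/]'}
    <a data-id="[field:id/]">[field:title/]</a>
  {/sline:getattrbygroup}
{/sline:attrgrouplist}
```

This would render each tour attribute group (except group 91) as a heading followed by links for its attributes.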
High School English Drama Appreciation: 60 Multiple-Choice Questions
1. In the play "Hamlet", Hamlet is often described as __________.
A. optimistic and brave
B. cautious and hesitant
C. cruel and selfish
D. simple and naive
Answer: B. This question tests understanding of Hamlet's character in Hamlet. Option A, optimistic and brave, does not fit him: his pursuit of revenge is full of hesitation and reflection. Option C, cruel and selfish, does not match his image. Option D, simple and naive, is also inaccurate, for Hamlet is careful and thoughtful. Option B, cautious and hesitant, matches the conflicted state of mind with which he faces his revenge.
2. The main character in the drama "Romeo and Juliet" shows a trait of __________.
A. calm and rational
B. passionate and impulsive
C. cold and indifferent
D. timid and cautious
Answer: B. In Romeo and Juliet, the protagonists are passionate and impulsive. Option A, calm and rational, contradicts the way they risk everything for love. Option C, cold and indifferent, is the opposite of their ardent love. Option D, timid and cautious, is not their character either; they pursue their love bravely.
3. In the classic play, the character who always follows the rules strictly is __________.
A. a rebel
B. a conformist
C. an innovator
D. a free spirit
Answer: B. This question tests understanding of character types in drama. Option A, a rebel, would not usually follow rules strictly. Option C, an innovator, focuses on creating new things and does not necessarily follow the rules strictly.
Kluckhohn and Strodtbeck's Value Orientations
Florence Kluckhohn and Fred Strodtbeck were American anthropologists who proposed one of the earlier theories of culture. During the Pacific War, the late Harvard scholar Florence Kluckhohn served on a team of about thirty experts assembled by the U.S. Office of War Information to study the values, morale, and public sentiment of different cultures. On the basis of its analysis of Japanese psychology and values, the group advised the U.S. government not to attack or abolish the Japanese emperor, and the declaration demanding Japan's unconditional surrender was revised in line with this advice.

Shortly after World War II, Harvard increased its support for research on cultural value dimensions and, together with the Rockefeller Foundation, funded Kluckhohn and her colleagues in a large-scale study of five cultural communities within a forty-mile area of Texas. A major outcome of this research was the Kluckhohn-Strodtbeck model of five value orientations (Kluckhohn & Strodtbeck, 1961), published in Variations in Value Orientations (1961). In that book, Florence Kluckhohn adopted the definition of value orientation proposed by her husband, Clyde Kluckhohn: value orientations are "complex but definitely patterned principles, linked to the solution of common human problems, which give direction and order to human behavior and thought" (Kluckhohn & Strodtbeck, 1961: 4). The model comprises five value orientations: human nature, the relationship of humans to nature, time, activity, and social relations.

The Kluckhohn-Strodtbeck theory of value orientations rests on three basic assumptions: (1) the people of any society in any age must provide solutions to certain common human problems; (2) the solutions to these problems are neither unlimited nor arbitrary, but vary within a range of possible choices, or value orientations; and (3) every value orientation is present in all societies and in all individuals, but societies and individuals differ in the value orientations they prefer.
Intercultural Communication 3
Values
Cultural patterns
Social Practices
Norms
Chapter 3 The Hidden Core of Culture
The Definition of Values
• According to the Concise Oxford Dictionary, values are: one's principles or standards; one's judgment of what is valuable or important in life.
Case study
• Story 1:
During the American Civil War, a very hungry young man fell down in front of a farm gate. The farmer gave him food, but in return he asked the young man to move a pile of wood in his yard; in fact, it was not at all necessary to move the wood. When the young man left, the farmer moved the wood back to its original place. Seeing all this, the farmer's son was confused. Q: Why did the farmer not just give the young man some food? What values underlie the behavior of the old man?
CMUcam3: An Open Programmable Embedded Vision Sensor
Anthony Rowe, Adam Goode, Dhiraj Goel, Illah Nourbakhsh
CMU-RI-TR-07-13, May 2007
Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
© Carnegie Mellon University

Abstract

In this paper we present CMUcam3, a low-cost, open source, embedded computer vision platform. The CMUcam3 is the third generation of the CMUcam system and is designed to provide a flexible and easy to use open source development environment along with a more powerful hardware platform. The goal of the system is to provide simple vision capabilities to small embedded systems in the form of an intelligent sensor that is supported by an open source community. The hardware platform consists of a color CMOS camera, a frame buffer, a low cost 32-bit ARM7TDMI microcontroller, and an MMC memory card slot. The CMUcam3 also includes 4 servo ports, enabling one to create entire, working robots using the CMUcam3 board as the only requisite robot processor. Custom C code can be developed using an optimized GNU toolchain, and executables can be flashed onto the board using a serial port without external downloading hardware. The development platform includes a virtual camera target allowing for rapid application development exclusively on a PC. The software environment comes with numerous open source example applications and libraries, including JPEG compression, frame differencing, color tracking, convolutions, histogramming, edge detection, servo control, connected component analysis, FAT file system support, and a face detector.

1 Introduction

The CMUcam3 is an embedded vision sensor designed to be low cost, fully programmable, and appropriate for realtime processing. It features an open source development environment, enabling customization, code sharing, and community. In the world of embedded sensors, the CMUcam3 occupies a unique niche. In this design exercise we have avoided high-cost components, and therefore do not have many of the luxuries that other systems have: L1 cache, an MMU, DMA, or a large
RAM store. Still, the hardware design of the CMUcam3 provides enough processing power to be useful for many simple vision tasks [1], [2], [3], [4], [5].

An ARM microcontroller provides excellent performance and allows for the execution of a surprisingly broad set of algorithms. A high speed FIFO buffers images from the CIF-resolution color camera. Mass storage is provided by an MMC socket using an implementation of the FAT filesystem, so that the files written by the CMUcam3 are immediately readable by an ordinary PC. User interaction occurs via GPIO, servo outputs, two serial UARTs, a button, and three colored LEDs. We provide a full C99 environment for building firmware, and include libraries such as libjpeg, libpng, and zlib. Additionally, we have developed a library of vision algorithms optimized for embedded processing. Full source is provided for all components under a liberal open source license.

The system described in this paper has been implemented and is fully functional. The system has passed CE testing and is available from multiple international commercial vendors for a cost of approximately US$239 [12].

Figure 1: Photograph of the CMUcam3 mated with the CMOS camera board. An MMC memory card used for mass storage can be seen protruding on the right side of the board. The board is 5.5 cm × 5.5 cm and approximately 3 cm deep, depending on the camera module.

1.1 Embedded Vision Challenges

Embedded vision imposes a unique set of functional requirements upon a computational device meant to serve as a visual sensor. In fact, taken as a general purpose processor, the CMUcam3 is rather underpowered compared to desktop computers or even PDAs.
However, if examined as a self-contained vision subsystem, several benefits become clear. The system excels in IO-constrained environments. The small size and low power of the CMUcam3 enable it to be placed in unique environments, collecting data autonomously for later review. If coupled with a wireless network link (such as 802.15.4 or GPRS), the CMUcam3 can perform sophisticated processing to send data only as needed over a potentially expensive data channel. Its low cost allows the CMUcam3 to be purchased in greater quantities than other solutions. This makes the CMUcam3 more accessible to a larger community of developers. In several applications, for instance surveillance, reduced cost allows for a meaningful tradeoff between high performance from a single sensor node and the distribution of lower-cost nodes to achieve greater coverage.

The CMUcam3 also has benefits when used as a self-contained part of a greater system. Because of its various standard communications ports (RS-232, SPI, I2C), adding vision to an existing system becomes straightforward, particularly because the computational overhead is assumed by the separately dedicated CMUcam3 processor rather than imposed upon the main processor and its I/O system.

Finally, having completely open source firmware allows flexibility and reproducibility: anyone can download and compile the code to run on the hardware or, alternatively, on a desktop computer (using the virtual-cam module).

1.2 Related Work

There have been numerous embedded image processing systems constructed by the computer vision community in the service of research. In this section we will present a selection of systems that have similar design goals to those of the CMUcam3. The Cognachrome [10] system is able to track up to 25 objects at speeds as high as 60 Hz. Its drawbacks include cost (more than US$2000), size (four by two by ten inches) and power (more than 5× that of the CMUcam3), limitations when creating small form factor nodes and robots. The Stanford MeshEye [6] was designed for use in
low power sensor networks. The design uses two different sets of image sensors: a low resolution pair of sensors is used to wake the device in the presence of motion, while the second, VGA CMOS camera performs image processing. The system is primarily focused on sensor networking applications, and less on general purpose image processing. The UCLA Cyclops [7], also designed around sensor networking applications, uses an 8-bit microprocessor and an FPGA to capture and process images. The main drawbacks are low image resolution (128×128) due to limited RAM and slow processing of images (1 to 2 FPS). Specialized DSP based systems like the Bluetechnix [9] Blackfin camera boards provide superior image processing capabilities at the cost of power, price and complexity. They also typically require expensive commercial compilers and external development hardware (i.e. JTAG emulators). In contrast, the CMUcam3's development environment is fully open source, freely available, and has built-in firmware loading using a serial port. Various attempts have been made to use general purpose single board computers, including the Intel Stargate [8] running Linux in combination with a USB webcam, for image processing. Though open source, such systems are quite expensive, large, and demanding of power. Furthermore, USB camera acquired images are typically transmitted to the processor in a compressed format. Compressed data results in lossy and distorted image information, as well as extra CPU overhead required to decompress the data before local processing is possible. The use of slow external serial bus protocols, including USB v1.0, limits image bandwidth, resulting in low frame rates. Finally, a number of systems [1], [2], [3] consist of highly optimized software designed to run on standard desktop machines. The CMUcam3 is unique in that it targets applications where the use of a standard desktop machine would be prohibitive because of size, cost or power requirements.

2 CMUcam3

In the following section, we will describe and justify the design
decisions leading to the hardware and software architecture of the CMUcam3.

Figure 2: CMUcam3 hardware block diagram consisting of three main components: processor, frame buffer and CMOS camera.

2.1 Hardware Architecture

As shown in Figure 2, the hardware architecture of the CMUcam3 consists of three main components: a CMOS camera chip, a frame buffer, and a microcontroller. The microcontroller configures the CMOS sensor using a two-wire serial protocol. The microcontroller then initiates an image transfer directly from the CMOS camera to the frame buffer. The microcontroller must wait for the start of a new frame to be signaled, at which point it configures the system to asynchronously load the image into the frame buffer. Once the CMOS sensor has filled at least 2 blocks of frame buffer memory (128 bytes), the main processor can begin asynchronously clocking data 8 bits at a time out of the image buffer. The end of frame triggers a hardware interrupt, at which point the main processor disables the frame buffer's write control line until further frame dumps are needed.

The CMUcam3 has two serial ports (one level shifted), I2C, SPI, four standard hobby servo outputs, three software controlled LEDs, a button and an MMC slot. A typical operating scenario consists of a microcontroller communicating with the CMUcam3 over a serial connection. Alternatively, I2C and SPI can be used, making the CMUcam3 compatible with most embedded systems without relying solely on RS-232. The SPI bus is also used to communicate with FLASH storage connected to the MMC slot. This allows the CMUcam3 to read and write gigabytes of permanent storage. Unlike previous CMUcam systems, all of these peripherals are now controlled by the processor's hardware and hence do not detract from processing time. The expansion port on the CMUcam3 is compatible with various wireless sensor networking motes, including the Telos [15] motes from Berkeley.

The image input to the system is provided by either an Omnivision OV6620 or OV7620 CMOS camera on a
chip [14]. As in the CMUcam and CMUcam2, the CMOS camera is mounted on a carrier board which includes a lens and supporting passive components. The camera board is free running and will output a stream of 8-bit RGB or YCbCr color pixels. The OV6620 supports a maximum resolution of 352×288 at 50 frames per second. Camera parameters such as color saturation, brightness, contrast, white balance gains, exposure time and output modes are controlled using the two-wire SCCB protocol. Synchronization signals, including a pixel clock (directly connected to the image FIFO), are used to read out data and indicate new frames as well as horizontal rows. The camera also provides a monochrome analog signal.

One major difference between the CMUcam2 and the CMUcam3 is the use of the NXP LPC2106 microcontroller. The LPC2106 is a 32-bit 60 MHz ARM7TDMI with built-in 64 KiB of RAM and 128 KiB of flash memory. The processor is capable of software controlled frequency scaling and has a memory acceleration module (MAM) which provides it with near single cycle fetching of data from FLASH. A built-in boot loader allows downloading of executables over a serial port without external programming hardware. Since the processor uses the ARM instruction set, code can be compiled with the freely available GNU GCC compiler. Built-in downloading hardware and free compiler support make the LPC2106 an ideal processor for open source development.

The frame buffer on the CMUcam3 is a 50 MHz, 1 MB AL4V8M440 video FIFO manufactured by Averlogic. The video FIFO is important because it allows the camera to operate at full speed and decouples processing on the CPU from the camera's pixel clock. Running the camera at full frame rate yields better automatic gain and exposure performance due to factory default tuning of the CMOS sensor. Even though pixels cannot be accessed in a random access fashion, the FIFO does allow for resetting the read pointer, which enables multiple pass image processing. One disadvantage of the LPC2106 is that it has relatively slow
I/O. Reading a single pixel value can take as long as 14 clock cycles, of which 12 are spent waiting on I/O. Software down sampling, operating on a single image channel, or doing software windowing greatly accelerates image processing, since skipping a pixel takes very little time. Using the FIFO, algorithms can be developed that first process a lower resolution image and can later rewind and revisit regions at higher resolutions if more detail is required. For example, frame differencing can be performed on a low resolution gray scale image, while frames of interest containing motion can be saved as high resolution color images. Since processing is decoupled from individual pixel access times, the pixel clock on the camera does not need to be set to the worst case per pixel processing time. This in turn allows for higher frame rates that would not be possible without the frame buffer.

In many embedded applications, such as sensor networks, power consumption is an important factor. To facilitate power savings, we provide three power modes of operation (active, idle and power down), as well as the ability to power down just the camera module. In the active mode of operation, when the CPU, camera and FIFO are all fully operating, the system consumes 500 mW of power. Table 1 shows the distribution of power consumption across the various components. When in an idle state, where RAM is maintained and the camera is disabled, the system consumes around 300 mW. The transition time between idle and active is on the order of 30 us.

Table 1: Breakdown of the power consumption of the various components (CPU core, frame buffer, and MMC) while the camera is fully active; the components total approximately 499.7 mW.
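The low-resolution frame-differencing idea described above reduces to comparing successive gray-scale buffers against a threshold. A minimal, self-contained sketch of that logic follows; the function and parameter names are hypothetical illustrations of the algorithm, not the cc3 API:

```c
#include <stdint.h>
#include <stddef.h>

/* Count pixels whose absolute gray-level change exceeds pix_thresh,
 * and report motion when the count of changed pixels exceeds
 * count_thresh. prev and cur are same-sized grayscale buffers. */
static int motion_detected(const uint8_t *prev, const uint8_t *cur,
                           size_t n, uint8_t pix_thresh, size_t count_thresh)
{
    size_t changed = 0;
    for (size_t i = 0; i < n; i++) {
        int d = (int)cur[i] - (int)prev[i];
        if (d < 0)
            d = -d;                 /* absolute difference */
        if (d > pix_thresh)
            changed++;
    }
    return changed > count_thresh;  /* 1 = motion, 0 = still */
}
```

On the real hardware, a low-resolution pass like this would decide whether the frame is worth rewinding the FIFO for and saving at full resolution.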
For applications where very low duty cycles are required and startup delays of up to 1 second can be tolerated, we provide an external power down pin which gates external power to the board, bringing consumption down to nearly zero (25 uW). In the power down state of operation, processor RAM is not maintained, and hence camera parameters must be restored by the firmware at startup.

2.2 Software Architecture

Standard vision systems assume the availability of PC-class hardware. Systems such as OpenCV [17], LTI-Lib [19], and MATLAB [13] require megabytes of memory address space and are written in runtime-heavy languages such as C++ and Java. The CMUcam3 has only 64 KiB of RAM and thus cannot use any of these standard vision libraries. To solve this problem, we designed and implemented the cc3 vision system as the main software for the CMUcam3. We also implement several components on top of cc3, as described in this section.

2.2.1 The cc3 Software Vision System

The cc3 system is a C API for performing vision and control, optimized for the small environment of the CMUcam3. Features:

• Abstraction layer for interfacing with future hardware systems
• Modern C99 style with consistently named types and functions
• Support of a limited number of image formats for simplicity
• Documentation provided via Doxygen [18]
• Versioned API for future extensibility
• virtual-cam module for PC-based testing and debugging (see below)

cc3 is part of the CMUcam3 distribution, and is openly available at the CMUcam website [12]. Below is an example of cc3 based source code showing how to track a color:

int main(void)
{
  cc3_image_t img;
  cc3_color_track_pkt t_pkt;

  // init filesystem driver
  cc3_filesystem_init();
  // configure uarts
  cc3_uart_init(0, CC3_UART_RATE_115200,
                CC3_UART_MODE_8N1, CC3_UART_BINMODE_TEXT);
  cc3_camera_init();
  cc3_camera_set_colorspace(CC3_COLORSPACE_RGB);
  cc3_camera_set_resolution(CC3_CAMERA_RESOLUTION_LOW);
  cc3_camera_set_auto_white_balance(true);
  cc3_camera_set_auto_exposure(true);

  printf("Enter color bounds to
track:");scanf("%d%d%d%d%d%d\n",&t_pkt.lower_bound.chan[CC3_RED_CHAN],&t_pkt.lower_bound.chan[CC3_GREEN_CHAN],&t_pkt.lower_bound.chan[CC3_BLUE_CHAN], &t_pkt.upper_bound.chan[CC3_RED_CHAN],&t_pkt.upper_bound.chan[CC3_GREEN_CHAN], &t_pkt.upper_bound.chan[CC3_BLUE_CHAN]);img.channels=3;img.width=cc3_g_pixbuf_frame.width;img.height=1;img.pix=cc3_malloc_rows(1);while(1){cc3_pixbuf_load();cc3_track_color_scanline_start(t_pkt);while(cc3_pixbuf_read_rows(img.pix,1)){cc3_track_color_scanline(&img,t_pkt);}cc3_track_color_scanline_finish(t_pkt);printf("Color blob found at%d,%d\n",t_pkt.centroid_x,t_pkt.centroid_y);}}VIIThe next example shows how a developer can access raw pixels.The following code section returns the location of the brightest red pixel found in the image:uint8_t max_red,max_red_y,max_red_x;cc3_pixel_t my_pix;max_red=0;cc3_pixbuf_load();while(cc3_pixbuf_read_rows(img.pix,1)){//read a row into the image//picture memory from the camerafor(uint16_t x=0;x<img.width;x++){//get a pixel from the img row memorycc3_get_pixel(&img,x,0,&my_pix);if(my_pix.chan[CC3_CHAN_RED]>max_red){max_red=my_pix.chan[CC3_CHAN_RED];max_red_x=x;max_red_y=y;}}y++;}printf("Brightest Red Pixel:%d,%d\n",max_red_x,max_red_y);2.2.2virtual-camThe virtual-cam module is part of the cc3system as mentioned above.It provides a simulated environment for testing library and project code on any standard PC by compiling with the system’s native GCC compiler.This allows for full use of the PC’s debugging tools to diagnose problems in user code.Oftentimes,a difficult to understand behavior observed on the CMUcam3will easily manifest itself as a bad pointer dereference or other easily found bug when run on a standard PC with memory protection.While not all of CMUcam3’s functionality is implemented in virtual-cam(miss-ing features include the hardware-specific components of servo control and GPIO), enough functionality is provided to enable off-line diagnostic testing.2.2.3CMUcam2EmulationThe 
CMUcam2 [20] provides a simple human readable ASCII communication protocol allowing for interactive control of the camera from a serial terminal program or a micro-controller. The CMUcam2 is capable of many functions, including in-built color tracking, frame differencing, histogramming, as well as binary image transfers. The CMUcam2 comes with a graphical user interface running on a PC that allows users to experiment with various functions. The CMUcam3 emulates most of the CMUcam2's functions, making it a drop-in replacement for the CMUcam2. The CMUcam2 emulation extends upon the original CMUcam2 with superior noise filtering, HSV color tracking and JPEG compressed image transfers.

Figure 3: The following images show the advantage of color tracking in the HSV color space. Figure (a) shows an RGB image, (b) shows the intensity (V) component of the HSV image, (c) shows the Hue and Saturation components of the image without intensity, and (d) shows the segmented hand with the center of mass in the middle.

2.2.4 Color Tracking

The original CMUcam tracks color blobs using a simple RGB threshold color model.
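A per-channel min/max box of the kind the original CMUcam uses can be sketched in a few lines of self-contained C. This is a generic illustration of the technique with hypothetical names, not code from the cc3 sources:

```c
#include <stdint.h>

/* Per-channel lower/upper bounds of the RGB threshold box. */
typedef struct {
    uint8_t lo[3]; /* minimum R, G, B */
    uint8_t hi[3]; /* maximum R, G, B */
} rgb_bounds_t;

/* Return 1 if a pixel lies inside the box on every channel. */
static int in_bounds(const rgb_bounds_t *b, const uint8_t px[3])
{
    for (int c = 0; c < 3; c++)
        if (px[c] < b->lo[c] || px[c] > b->hi[c])
            return 0;
    return 1;
}

/* Centroid of all in-bounds pixels in a packed RGB image.
 * Returns 0 when no pixel matched. */
static int track_centroid(const rgb_bounds_t *b, const uint8_t *rgb,
                          int width, int height, int *cx, int *cy)
{
    long sx = 0, sy = 0, n = 0;
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            if (in_bounds(b, &rgb[3 * (y * width + x)])) {
                sx += x;
                sy += y;
                n++;
            }
    if (n == 0)
        return 0;
    *cx = (int)(sx / n);
    *cy = (int)(sy / n);
    return 1;
}
```

A fixed box like this is cheap but sensitive to illumination changes, which is what motivates the HSV option the text goes on to describe.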
Though computationally lightweight, it does not adapt well to changing light conditions and can only track a single color at a time. The CMUcam3 improves tracking performance by providing the option to use the Hue Saturation Value (HSV) color space, provisions for connected-component blob filtering, and the ability to track multiple colors. Figure 3 shows how the HSV color space can remove lighting effects, simplifying color segmentation. Since the system is open source, it is simple for end users to further improve color tracking by building more complex color models.

2.2.5 Frame Differencing

As an example program to illustrate frame differencing, we provide a simple security camera application. The camera continuously compares the previous image and the current image. If an image changes by more than a preset threshold, the image is saved as a JPEG on the MMC card.

2.2.6 Convolutions

We provide a general convolution library that allows custom kernels to be convolved across an image. This can be used for various filters that perform tasks like edge detection or blurring.

2.2.7 Compression

New to the CMUcam3 is the ability to compress images with libjpeg. Using different destination managers, one can redirect the output of libjpeg to the MMC, serial output, or any other communication bus. Depending on the quality of the image, libjpeg can produce images as small as 4 KiB.

2.2.8 Face Detection

The CMUcam3 incorporates the ability to detect faces in plain-background environments. The face detector is based on the feature-based approach proposed by Viola and Jones, in which a cascade of classifiers is trained for Haar-like rectangular features selected by AdaBoost [16]. The integral image is a key data structure used in Viola-Jones. Unfortunately, it consumes significant memory: even a low-resolution integral image of 176×144 requires about 76 KiB of memory, far exceeding available memory. Along with memory constraints, the processor lacks floating point hardware. As a result, two unique
customizations were applied to the face detection implementation for the CMUcam3:

• Only a part of the whole image is loaded in main memory at any time. As a consequence, the maximum resolution of a detected face is limited to 60×60 pixels.
• All the classifier thresholds and corresponding compared values are computed using fixed point arithmetic, via a binary scaling method.

A few other optimizations were made to improve performance:

• When scanning sub-windows, neighboring sub-windows are illumination-normalized with an iteratively computed standard deviation (std), instead of being computed independently. This can provide a speedup of approximately 3×.
• Sub-windows that are too homogeneous (std < 14) or too dark or bright (mean < 30 or mean > 200) are discarded immediately, short-circuiting unnecessary computation in regions unlikely to yield positive detection hits.

With these changes, CMUcam3 face detection operates on-board at 1 Hz.

Figure 4: Sample output from a modified Viola-Jones face detector. Faces are denoted with boxes. Image (b) shows how texture in the background can occasionally be detected as a false positive.

2.2.9 Polly

The Polly [3] algorithm provides visual navigation information based on color. This navigation was used on the Polly robot to give tours of the MIT AI laboratory in the early 1990s. The algorithm originally consisted of three steps: blurring the image, edge detection, and generating a free-space map starting from the bottom of the image upward toward any edges. Our implementation applies a 3×3 blur followed by a simple edge detector. We then filter out small edges using our connected component module. As can be seen in Figure 5, the algorithm returns a histogram of the free space in front of the robot. Polly is able to run on-board the CMUcam3 at 4 fps, operating on a 176×144 image.

2.2.10 SpoonBot

SpoonBot is a small mobile robot consisting of a CMUcam3, two continuous-rotation hobby servos mounted to wheels, a four AA battery pack, and a micro-servo connected to a plastic spoon. The two hobby
servos allow SpoonBot to drive forward and backward and rotate left and right. The rear-mounted micro-servo pushes the spoon up and down, acting as a tilt degree of freedom. SpoonBot can use the Polly algorithm described above to drive around a table top, or it can follow colored objects. All control and navigation is run locally on the CMUcam3, since the board can compute and command servo control signals directly, without the need for conventional robot control hardware.

3 Performance

In this section we discuss execution time and memory consumption for various CMUcam3 software components. Depending on the image resolution and the complexity of the algorithm, these values can vary significantly. The goal of this section is to provide some intuition for the various types of image processing that are possible using the CMUcam3.

Figure 5: Sample output of the Polly algorithm. The first column shows the original image. The second column shows the image after a blur filter, edge detection, and a small connected component filter. The final column shows the histogram representing free area in front of the camera.

Figure 6: This figure compares the execution times of loading a frame, copying the image from the frame buffer to the processor, unpacking the pixels, and processing the new frame. JPEG, Track Color (TC), and Track Color in the HSV color space (TC-HSV) are shown at two different resolutions. The numbers in parentheses represent the frame rate of the operation.

The frame load time corresponds to the cc3_pixbuf_read_rows() function. Operating at a lower resolution obviously decreases the execution time because fewer pixels are fetched. Operating on a single channel instead of three channels provides only a 1.625× increase in speed. This increase is due to no longer having to read all of the color pixels; however, since the CMOS camera does not have a monochrome output mode, color information must still be clocked out of the FIFO. The final Pack Pixel column shows the time required to convert the GRGB pattern from the
camera in memory into the local RGB pixel structure. This corresponds to the cc3_get_pixel() function call. It is possible to greatly reduce the pixel construction time by designing algorithms that operate on the raw memory from the camera. This becomes a trade-off between simple, portable code and execution speed. We provide examples of both methodologies for those interested in highly optimized implementations.

Figure 6 shows the relative time consumption of the previously mentioned frame loading operations along with processing times for three different algorithms: JPEG, Track Color, and Track Color HSV. The JPEG algorithm in this example compresses a color image in memory and does not write the output to a storage device. The Track Color (TC) and Track Color HSV (TC-HSV) algorithms are profiled directly from the CMUcam2 emulation code. Each algorithm finds the bounding box, centroid, and density of a particular specified color. For this test we show the worst-case performance by tracking all active pixels. The Track Color HSV benchmark is identical to Track Color except that it performs a software-based conversion from the RGB to the HSV color space for each pixel. The general trend found in these plots is that very simple algorithms such as tracking color are mostly I/O limited. For example, Track Color spends only 17% of its time on processing. A more complex algorithm, JPEG, spends 62% of its time on processing. JPEG also shows an example of where optimized pixel accesses can drastically reduce the pixel packing time. However, as can be seen for JPEG operating on a QCIF image, as resolution decreases these optimizations become less relevant.

As previously mentioned, the LPC2106 has 64 KiB of internal RAM and 128 KiB of ROM. By default, 9 KiB of RAM is reserved for stack space and 9 KiB of RAM is used by the core software libraries (including libc buffers). A 176×144 (QCIF) gray-scale image requires 25 KiB of RAM, while a 100×100 RGB image requires 30 KiB of memory. All processing on larger sized images must be
performed on a section-by-section basis, or using a sliding-window scan-line approach. For example, JPEG requires only eight full rows (8 KiB) of the image in addition to the storage required for the compressed image (less than 12 KiB). The code space consumed by most CMUcam3 applications is quite small. The full CMUcam2 emulation with JPEG compression and the FAT file system requires 96 KiB of ROM. A simple program that loads images and links in the standard library functions requires 52 KiB of ROM. The FAT filesystem and MMC driver require an additional 12 KiB of ROM.

4 Conclusions and Future Work

The goal of this work was to design and publicly release a low-cost, open source, embedded color computer vision platform. The system can provide simple vision capabilities to small embedded systems in the form of an intelligent sensor that is supported by an open source community. Custom C code can be developed using an optimized GNU toolchain and flashed onto the board using the serial port, without external downloading hardware. The development platform includes a virtual camera target and numerous open source example applications and libraries.

The main drawback of the CMUcam3 hardware platform is the lack of RAM and computation speed required for many complex computer vision algorithms. We currently have a prototype system using a 600 MHz Blackfin media processor from Analog Devices. Ideally, we would like to provide a software environment for this new platform that is compatible with our existing environment, to help reduce the learning curve typically associated with high-end DSP systems. Eventually, applications can be prototyped on a PC using our virtual-cam, with various hardware deployment options to support each particular application's needs. Staying true to the spirit of the CMUcam project, we are also developing a simpler and cheaper hardware platform using a lower-cost ARM7 processor without the frame buffer. This device will be compatible with
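The TC-HSV benchmark discussed in the performance section pays for a software RGB-to-HSV conversion on every pixel, and the LPC2106 has no floating point hardware, so such a conversion must be done in integer arithmetic. The sketch below shows one common fixed-point formulation; the function name, the hsv_t struct, and the 0-255 hue scale (43 units per 60-degree sector) are illustrative assumptions, not the cc3 library's actual API:

```c
#include <stdint.h>

/* Illustrative integer-only RGB-to-HSV conversion; all channels 0..255. */
typedef struct { uint8_t h, s, v; } hsv_t;

hsv_t rgb_to_hsv(uint8_t r, uint8_t g, uint8_t b)
{
    hsv_t out;
    uint8_t max = r, min = r;
    if (g > max) max = g;
    if (b > max) max = b;
    if (g < min) min = g;
    if (b < min) min = b;

    uint8_t delta = max - min;
    out.v = max;                                       /* value: brightest channel */
    out.s = max ? (uint8_t)((255u * delta) / max) : 0; /* saturation, 0..255 */

    if (delta == 0) {
        out.h = 0;                                     /* gray pixel: hue undefined */
    } else {
        int h;
        if (max == r)      h = (43 * ((int)g - (int)b)) / delta;       /* red sector   */
        else if (max == g) h = 85 + (43 * ((int)b - (int)r)) / delta;  /* green sector */
        else               h = 171 + (43 * ((int)r - (int)g)) / delta; /* blue sector  */
        if (h < 0) h += 256;                           /* wrap negative hues */
        out.h = (uint8_t)h;
    }
    return out;
}
```

Scaling hue onto a 0-255 wheel keeps every intermediate result within 16-bit range, which is the kind of trade-off the paper's fixed-point face-detection customization also makes on an FPU-less processor.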
Kluckhohn and Strodtbeck's Value Orientations

Florence Kluckhohn and Fred Strodtbeck were American anthropologists who proposed one of the earlier theories of culture. During the Pacific War, Florence Kluckhohn, a late scholar at Harvard University, served on a team of roughly 30 experts assembled by the U.S. Office of War Information to study the values, popular sentiment, and morale of different cultures. Based on its analysis of the psychology and values of the Japanese people, the group advised the U.S. government not to attack or abolish the Japanese emperor, and the declaration demanding Japan's unconditional surrender was revised in line with this advice.

Shortly after World War II, Harvard University strengthened its support for research on cultural value dimensions and, together with the Rockefeller Foundation, funded Kluckhohn and her colleagues in a large-scale study of five different cultural communities in an area of Texas some forty miles across. A major outcome of this research was the Kluckhohn and Strodtbeck (1961) model of five value orientations, published in the book Variations in Value Orientations (1961). In that book, Florence Kluckhohn adopted the definition of value orientations proposed by her husband, Clyde Kluckhohn: value orientations are complex but definitely patterned principles, linked to the solution of common human problems, which give order and direction to human acts and thoughts (Kluckhohn & Strodtbeck, 1961: 4). The model comprises five value orientations: human nature, the relationship of humans to nature, time, activity, and social relationships.

Kluckhohn and Strodtbeck's value orientation theory rests on three basic assumptions: (1) people of all ages and all societies must find solutions to certain common human problems; (2) the solutions to these problems are neither unlimited nor arbitrary, but vary within a range of choices or value orientations; and (3) every value orientation is present in all societies and in all individuals, but societies and individuals differ in which orientations they prefer.
How to replace the Qvod (快播) player with JJVod (吉吉影音) in the 光线cms and MaxCMS video systems

I. Calling JJVod from 光线cms

Most sites used to rely on Qvod; now that Qvod is gone, 光线cms needs to call JJVod instead. This tutorial comes from the official website. The method is based on the standard edition of 光线CMS 1.5; for customized builds, contact the author for a modified version. Back up the relevant files before replacing anything.

1. Open the file /core/Lib/Action/CmsAction.class.php and add the following code at around line 139:

}else if(stripos($currentUrl, 'jjhd://') !== false){ // JJVod (吉吉影音)
    $player .= '<div id="GxInstall"></div><div id="GxPlayer" class="Loading"></div>';
    $player .= '<script language="javascript" type="text/javascript">'."\n";
    $player .= 'var $playlist="'.str_replace(array("\r\n", "\n", "\r"), '+++', $array['playurl']).'"'."\n";
    $player .= '</script>'."\n";
    $player .= '<script language="javascript" src="'.C('web_path').'views/js/jjvod.js" charset="utf-8"></script>';

2. Copy jjvod.js into /views/js/.
2024-2025 Grade 12 First-Semester Final English Exam (SFLEP Edition) with Answers and Explanations

2024-2025 SFLEP Edition Grade 12 First-Semester Final English Mock Exam with Answers and Explanations

I. Listening, Section 1 (5 questions, 1.5 points each, 7.5 points in total)

1. W: Which book are you reading, John?
M: I'm reading Twilight. It's very interesting.
W: Can I have a look at it?
M: Certainly. You can read it now.
Q: What are the two speakers talking about?
• A: A movie. (Wrong: they were not discussing a movie.)
• B: A book. (Correct: John is reading a book called Twilight, and they are discussing this book.)
• C: A TV show. (Wrong: they were not talking about a TV show.)

2. M: I wonder what you think of this new board game. Do you take an interest in it?
W: I haven't tried it, but I think it's not a good one. It is quite difficult and boring.
M: That sounds bad.
Q: What does the woman think of the game?
• A: It's interesting. (Wrong: the woman thinks the game is difficult and boring.)
• B: It's difficult and boring. (Correct: the woman thinks the game is difficult and boring.)
• C: It's not enjoyable. (Correct: the woman considers the game difficult and boring, which means it's not enjoyable.)

The questions are designed to test the students' ability to understand conversations in different contexts and to infer the speakers' thoughts and feelings from the dialogue.

3. W: I can't find my textbook. Do you think it could be in the library?
M: I think you should check your bag first. We left it there this morning.
Question: Where does the conversation hint that the textbook was last seen?
A) In the library.
B) In the speakers' bag.
C) During the morning.
Answer: B) In the speakers' bag.
Explanation: The man suggests that the woman first check her bag, implying that they left the textbook there earlier that morning.

4. M: How was your job interview? Did everything go smoothly?
W: It went okay, but I didn't get the job. The company is looking for someone with more experience than I have.
Question: What is the main concern expressed by the woman about the job interview?
A) She believes the interview didn't go well.
B) She didn't receive any feedback on the interview.
C) She doesn't have the required experience.
Answer: C) She doesn't have the required experience.
Explanation: The woman explicitly states that the company is looking for more experienced candidates, indicating that her lack of experience is what prevented her from getting the job.

5. Listen to the following conversation.
Economic Impacts of the Turfgrass and Lawncare Industry in the United States
FE632
Economic Impacts of the Turfgrass and Lawncare Industry in the United States

John J. Haydu, PhD, Professor, University of Florida, Mid-Florida Research and Education Center, 2725 Binion Rd., Apopka, Florida 32703; email jjh@
Alan W. Hodges, PhD, Associate, University of Florida, Food & Resource Economics Department, PO Box 110240, Gainesville, Florida 32611; email awhodges@
Charles R. Hall, PhD, Professor, University of Tennessee, Department of Agricultural Economics, 2621 Morgan Circle, Room 314B, Knoxville, Tennessee 37996; email crh@

EXECUTIVE SUMMARY

The turfgrass and lawncare industry in the United States continues to grow rapidly due to strong demand for residential and commercial property development, rising affluence, and the environmental and aesthetic benefits of turfgrass in the urban landscape. Economic sectors of the industry include sod farms, lawncare services, lawn and garden retail stores, and lawn equipment manufacturing. Golf courses were included in this study as a major industry that depends upon highly managed turfgrass for golf play. Numerous studies have been conducted on the economic impacts of the turfgrass and lawncare industry for individual states or regions; however, this research is the first to report results for the entire United States.

Economic impacts of the U.S. turfgrass and lawncare industry in 2002 were estimated based upon survey data in conjunction with various published sources of secondary data, and economic multipliers derived from regional input-output models for each state using the Implan software system and associated datasets. Information gathered for each sector included number of establishments, employment, payroll, and sales receipts. Sources included the 2002 Census of Agriculture (sod farms), the 2002 Economic Census Industry Report Series, and County Business Patterns (U.S. Commerce Department).

As defined in this study, the five sectors comprising the U.S.
turfgrass industry in 2002 generated total output (revenue) impacts of $57.9 billion (Bn), employment impacts of 822,849 jobs, value added impacts of $35.1 Bn, labor income of $23.0 Bn, and $2.4 Bn in indirect business taxes to local and state governments. If these values are expressed in 2005 dollars, the total output impact was $62.2 Bn and the total value added impact was $37.7 Bn. The value added impact represents total personal and business net income.Among individual sectors, sod producers created nearly $1.8 Bn in output impacts, $1.3 Bn in value added, and 17,028 jobs. Lawn equipment manufacturers contributed $8.0 Bn in output, $2.5 Bn in value added, and supported nearly 34,000 jobs. The lawncare goods retailing sector produced $9.1 Bn in output impacts, contributed $5.8 Bn in value added, and sustained 114,294 jobs. The lawncare services sector generated nearly $19.8 Bn in output impacts, $13.3 Bn in value added, and 295,841 jobs. Golf courses had $23.3 Bn in output impacts, $14.5 Bn in value added, and 361,690 jobs.Economic impacts were summarized for individual states and seven geographic regions of the United States, with the turfgrass and lawncare industry having significant activity in all areas of the United States. The top ten individual states in terms of employment impacts were California (101,022 jobs), Florida (83,944), Texas (52,784), Ohio (33,154), Illinois (31,625), Pennsylvania (30,845), North Carolina (28,860), Georgia (27,327), South Carolina (25,083), and New York (23,965). 
Regionally, the Southeast was the largest in terms of employment impacts (197,711 jobs), followed by the East-Central (159,358), Western Coastal (130,862), South-Central (112,284), North-Central (100,738), Western-Interior (64,226), and Northeast (57,671).

The Institute of Food and Agricultural Sciences (IFAS) is an Equal Opportunity Institution authorized to provide research, educational information, and other services only to individuals and institutions that function with non-discrimination with respect to race, creed, color, religion, age, disability, sex, sexual orientation, marital status, national origin, political opinions, or affiliations. U.S. Department of Agriculture, Cooperative Extension Service, University of Florida, IFAS, Florida A&M University Cooperative Extension Program, and Boards of County Commissioners. Larry Arrington, Dean.

TABLE OF CONTENTS
Executive Summary
Acknowledgements
List of Tables
List of Figures
Introduction
  Structure of the Turfgrass Industry
  Previous Economic Studies of the Turfgrass Industry
Research Methodology
  Industry Sectors
  Information Sources
  Economic Impact Analysis
Results
  National Results for All Industry Sectors
  State and Regional Impacts
  The Sod Production Sector
  The Lawncare Services Sector
  The Lawncare Goods Retailing Sector
  The Lawn Equipment Manufacturing Sector
  The Golf Course Sector
Conclusions
Literature and Information Sources Cited
Appendices
  Appendix A: Economic Multipliers
  Appendix B: Economic Impacts by State

ACKNOWLEDGEMENTS
This study was sponsored in part by the International Turfgrass Research Foundation, Turfgrass Producers International, Rolling Meadows, IL.

LIST OF TABLES
Table 1. Previous economic impact studies of the turfgrass industry, 1978-2004
Table 2. Classification of sectors associated with the turfgrass and lawncare industry
Table 3. Summary of economic impacts of the turfgrass and lawncare industry in the United States, by sector, 2002
Table 4. Employment and value added impacts of the U.S. turfgrass and lawncare industry, by state and region, 2002
Table 5. U.S. sod farms, production and harvested area, 2002
Table 6. Characteristics of U.S. sod production, by region
Table 7. Value of U.S. turfgrass-related landscape services specialties
Table 8. Value of U.S. lawn equipment manufacturing shipments, 2002
Table A-1. Multipliers for sod farms (nursery and greenhouse sector)
Table A-2. Multipliers for lawncare services (services to buildings sector)
Table A-3. Multipliers for lawncare retailing (building materials and garden supplies stores sector)
Table A-4. Multipliers for lawn and garden equipment manufacturing
Table A-5. Multipliers for golf courses (amusement, gambling and recreation services)
Table B-1. Economic impacts of U.S. sod production, by state, 2002
Table B-2. Economic impacts of U.S. lawncare services, by state, 2002
Table B-3. Economic impacts of U.S. lawncare goods retailing, by state, 2002
Table B-4. Economic impacts of U.S. lawn equipment manufacturing, by state, 2002
Table B-5. Economic impacts of U.S. golf courses, by state, 2002

LIST OF FIGURES
Figure 1. Market structure of the turfgrass industry
Figure 2. Employment impacts of the U.S. turfgrass and lawncare industry, 2002
Figure 3. Top ten states for output impacts of the sod production sector, 2002
Figure 4. Top ten states for employment impacts of the sod production sector, 2002
Figure 5. Top ten states for value added impacts of the sod production sector, 2002
Figure 6. Top ten states for output impacts in the lawncare services sector, 2002
Figure 7. Top ten states for total employment impacts in the lawncare services sector, 2002
Figure 8. Top ten states for economic value added impacts in the lawncare services sector, 2002
Figure 9. Top ten states for output impacts of the lawncare goods retailing sector, 2002
Figure 10. Top ten states for total employment impacts of the lawncare goods retailing sector, 2002
Figure 11. Top ten states for value added impacts of the lawncare goods retailing sector, 2002
Figure 12. Top ten states for output impacts in the lawn equipment manufacturing sector, 2002
Figure 13. Top ten states for employment impacts of the lawn equipment manufacturing sector, 2002
Figure 14. Top ten states for value added impacts of the lawn equipment manufacturing sector, 2002
Figure 15. Top ten states for output impacts of the golf course sector, 2002
Figure 16. Top ten states for employment impacts of the golf course sector, 2002
Figure 17. Top ten states for value added impacts of the golf course sector, 2002

INTRODUCTION
Cultivated turfgrass is a pervasive feature of the urban landscape in the United States and many other developed regions of the world. According to Beard (1973), turfgrass provides at least three major benefits to human activities: functional, recreational, and ornamental. Functional uses include wind and water erosion control, thereby reducing dust and mud problems surrounding homes and businesses. Metropolitan areas and suburban residences profit from the cool, green, pleasant environment afforded by healthy lawns, with landscapes frequently complemented by numerous trees, flowers, and shrubs. Turf is also recognized for reducing glare, noise, air pollution, and heat buildup. Turf is used extensively along roadsides for erosion control and as a stabilized zone for emergency stopping and repairs. Recreational use of turf is extensive throughout the world. Common sports activities played on turf include golf, lawn tennis, soccer, rugby, polo, and football.
Most professional and recreational sports utilize grass surfaces because of their ability to minimize injuries (compared to hard surfaces) and to provide a durable groundcover capable of cost-effective regeneration from season to season. Ornamental or aesthetic attributes of turfgrass are also highly regarded. Properly landscaped homes and businesses may also benefit financially from higher resale values when compared to poorly landscaped residences (Behe et al., 2005; Des Rosiers et al., 2002; Henry, 1999; Orland et al., 1992).

Structure of the Turfgrass Industry

In the United States, a very large industry has rapidly evolved to produce and deliver turfgrass products and services. This industry contributes to the national economy in terms of employment, spending on inputs, income from sales of turfgrass products and services, as well as business taxes generated by its economic activities. Economic activity in the turfgrass industry may be broadly grouped into two categories: 1) production and supply of turfgrass products and related services, and 2) intermediate and final consumption of turf products and services. The supply of turfgrass products includes not only grass but also the many goods necessary for production and maintenance, such as chemicals, fertilizer, and lawn equipment. Turfgrass service activities include landscape planning and design, landscape installation, and the ongoing maintenance of turfgrass areas. Consumption of turfgrass products and services may be subdivided into (a) integral turf-based activities, such as golf courses or athletic fields, which rely heavily on turfgrass as a major driver of their business, and (b) ancillary uses, such as lawns surrounding homes, businesses, and public roads and highways.

Figure 1. Market structure of the turfgrass industry.

The structure of the goods and services among the various sectors of the industry is shown in Figure 1. Central to this
Central to thiseconomic activity are the sodgrowers who create the productthat is directly or indirectlyutilized by the rest of the industry.Manufacturers of turf equipment,fertilizers and chemicals hold asimilar economic role as primaryproducers. Wholesalers, retailers,and service vendors purchase andresell sod and other turfgrassproducts together with theirrelated services to consumers.These market intermediariesprovide value-added services tocustomers includingtransportation, packaging,installation, and product useinformation. In addition, lawnmaintenance service vendors provide complete lawn care services, such as mowing, pest and disease control, irrigation and fertilization. Each of these service activities adds value to turfgrass products for final consumers.The purpose of this study was to document the size, scope and structure of the turfgrass industry and to assess its economic contribution to the United States economy. Input-output (IO) models were employed to generate multipliers that account for the full range of economic activity between industry sectors within each state. I-O models capture what each business or sector must purchase from every other sector to produce its products and services. Variables examined in this analysis include output or total sales impacts, employment (jobs generated), value added (net income after direct costs are subtracted from gross output), labor income, and indirect business taxes. Total impacts include the direct effects, which are changes in economic activity resulting from the sale of a product or service to intermediate or final consumers; indirect effects of economic activity arising from purchases of inputs by the directly affected sectors; and induced effects from household spending as a result of income earned by industry employee. As an example of indirect impacts, sod sales by a producer results in purchases of inputs such as seeds, fertilizer, and chemicals as he replants his harvested fields. 
Previous Economic Studies of the Turfgrass Industry

For many decades the U.S. Department of Agriculture has collected detailed production and financial data on the farm sector. This information has been used by government agencies, universities, and trade associations to track changes in the size and scope of the various agricultural industries over time. However, not all sectors of agriculture were included in the government's early data collection effort. Typically, the sectors covered were limited to large-scale "food & fiber" commodities, such as corn, soybeans, cotton, citrus, dairy, and cattle. This decision to focus on the largest, most common sectors of agriculture was largely cost driven: it was simply too expensive for the government to collect detailed information on the many hundreds of relatively minor "specialty crops" produced in this country. In the past 20 years, the economic significance of specialty crops has grown appreciably and, as a consequence, the USDA now conducts broader studies that include nearly all specialty crops. Additional studies have been conducted that focus on ornamental crops and turfgrass, such as "Floriculture and Nursery Crops Outlook" (USDA/ERS, 2005). While these studies have filled a void in government statistics for "green industry" crops, due to the large number of specialty crops produced and the number of states producing them, the information collected is for the most part limited to area under production (acres or square feet).

Since the early 1970s the economic importance of the green industry has grown substantially, making it the second most important sector in agriculture (USDA/NASS, 2004). This development was spurred primarily by rapid population growth and rising household incomes that began in the early 1990s and continue today. With an expanding economy, more disposable income, and extremely low interest rates, the demand for new home construction rose markedly as well.
A strong upturn in the construction of homes, commercial businesses, and schools translated into a similarly strong upturn in the demand for landscape materials, including turfgrass. This in turn prompted the green industry to ratchet up the supply of products and services, hence the remarkable growth of this industry.

The downside to urban population growth and green industry expansion is the pressure it places on scarce resources, particularly land and water. Competition for these resources is felt in many parts of the country, but is particularly acute in densely populated areas (Carriker, 1993; Campbell & Sargent, 2001; Haydu et al., 2004). As industries struggle for access to more water and land, the incentives to document their economic contributions to society have grown. As a result, an abundance of recent "green industry" studies, funded largely by state trade associations and conducted by university economists and horticulturists, have been published. The scope of these publications and the methodologies employed vary widely, but all have a common theme and purpose of documenting the economic contribution of their respective industries. A list of over 50 of these state-level publications spanning the period 1978 to 2004 is presented in Table 1. The titles of these studies can be grouped into three categories: 1) studies with general titles such as "Green Industry Survey", "Environmental Horticulture", or "Nursery Industry", most of which cover the turfgrass industry as well; 2) studies with titles that identify both nursery and turfgrass explicitly; and 3) studies with turfgrass-only titles.

The present study extends findings from a previous study by the same authors (Hall, Hodges and Haydu, 2005), which estimated economic impacts for the Green Industry in the United States, of which turfgrass-related activity is an important component.

Table 1.
Previous economic impact studies of the turfgrass industry in individual states, 1978-2004.

Year  State            Scope Reported
2004  New England      Environmental Horticulture
2003  New Jersey       Turfgrass Industry
2003  New York         Turfgrass Industry Operations
2002  Nevada           Green Industry
2002  Colorado         Green Industry
2002  Michigan         Turfgrass Industry
2002  Arizona          Green Industry
2002  Georgia          Golf Course and Landscape Maintenance
2001  Iowa             Turfgrass Industry
2001  Idaho            Green Industry
2001  Ohio             Green Industry
2001  Louisiana        Green Industry
2001  Illinois         Green Industry
2001  Florida          Environmental Horticulture Industry
2000  Kansas           Horticulture Industry
2000  Texas            Green Industry
2000  Virginia         Turfgrass Industry
2000  Maryland         Horticulture Industry
2000  Missouri         Nursery Industry
2000  Pennsylvania     Green Industry
1999  South Carolina   Horticulture Industry
1999  North Carolina   Turfgrass Industry
1999  Arizona          Green Industry
1999  Wisconsin        Turfgrass Industry
1998  Missouri         Turfgrass Industry
1998  New England      Environmental Horticulture Industry
1997  Florida          Environmental Horticultural Industry
1997  Louisiana        Nursery and Turfgrass Industry
1996  Maryland         Turfgrass Industry
1996  Mississippi      Turfgrass Industry
1995  New Mexico       Turfgrass Industry
1995  Louisiana        Green Industry
1994  Arizona          Green Industry
1994  Kansas           Turfgrass Industry
1994  North Carolina   Turfgrass Industry
1994  South Carolina   Golf Industry
1994  South Carolina   Ornamental Horticulture and Turfgrass Industry
1994  Kansas           Horticulture Industry
1993  Colorado         Green Industry
1993  Texas            Green Industry
1993  Tennessee        Nursery and Floriculture Industry
1991  Florida          Turfgrass Industry
1990  Michigan         Nursery and Landscape Industry
1989  Ohio             Turfgrass Industry
1989  Kentucky         Turfgrass Industry
1989  Pennsylvania     Turfgrass Industry
1989  Michigan         Turfgrass Industry
1987  Oklahoma         Turfgrass Industry
1986  North Carolina   Turfgrass Industry
1985  New Jersey       Turfgrass Industry
1984  Rhode Island     Turfgrass Industry
1982  Virginia         Turfgrass Industry
1978  Oklahoma         Turfgrass Industry

Source: Hall, Hodges and Haydu, 2005.

RESEARCH METHODOLOGY

Industry Sectors

The economic sectors
associated with the turfgrass and lawncare industry in the United States include sod farms, lawncare services, lawn and garden retail stores, lawn equipment manufacturing, and golf courses, as indicated in Table 2. Definitions of these sectors were based on the North American Industry Classification System (NAICS, Executive Office of the President, Office of Management and Budget) at the five- or six-digit level of detail. The five sectors shown in Table 2 are the major components of the turfgrass industry used in estimating economic impacts. They do not, however, represent all the sectors that contribute to the value of the turf industry: other turf-based recreational activities, such as racetracks and athletic fields, were not included in the analysis because data to estimate their economic impact were lacking. Consequently, the impact values presented in this report for the turfgrass industry are a conservative estimate of the true value. In the same vein, it is important to recognize that this study includes golf courses as part of the turfgrass industry's economic impact. Although this is logical, since turfgrass is a key input in golf operations, other aspects of golf operations, such as restaurants or lodging establishments, are less directly attributable to turfgrass; in these activities the economic role of turfgrass is less clear and less significant. This qualifier is important given the economic significance of this particular sector. Table 2.
Classification of sectors associated with the turfgrass and lawncare industry.

Sector | Industry Sector(s) (NAICS code) | Implan Sector Name (Number)
Sod Farms | Nursery and Floriculture Production (11142)* | Nursery & Greenhouse (6)
Lawncare Services | Landscaping Services (56173)* | Services To Buildings And Dwellings (458)
Lawncare Retail Stores | Lawn and Garden Equipment and Supplies Stores (4442)* and Home Centers (44411)* | Building Material And Garden Supply Stores (404)
Lawn Equipment Manufacturing | Lawn & Garden Tractor and Home Lawn and Garden Equipment Manufacturing (333112)* | Lawn & Garden Equipment Manufacturing (258)
Golf Courses | Golf Courses and Country Clubs (71391) | Amusement, Gambling and Recreation Services (458)
* Turfgrass-related activity in this sector is a portion of the overall industry sector.

Information Sources

Economic information on the turfgrass industry was compiled from a variety of sources. The Census of Agriculture and the Economic Census were considered the most reliable sources available, since they have well-established statistical methodologies, adjust for small or non-responding firms, and publish confidence parameters.

For sod farms, national and state information on the number of farms and production area was taken from the 2002 Census of Agriculture. Area and value of turfgrass harvested were estimated from industry survey data, with harvest value based on regional average prices. In this survey, questionnaires were sent to 581 sod farms, of which 159 were returned, for a response rate of 27 percent.
To determine value, respondents were asked their production area, the percent of area harvested, the average price (farm-gate price, i.e., delivery not included), the share of total sales from sod products, and the share of sales to customers outside their state.

For the lawncare services, retailing, equipment manufacturing, and golf course sectors, information on the number of establishments, employment, and sales (receipts) was taken from the 2002 Economic Census Industry Report Series for U.S. totals (U.S. Census Bureau, 2005). State-level information on the number of firms, employment, and payroll was taken from County Business Patterns (U.S. Department of Commerce) and adjusted to match the U.S. totals. For some states in which employment and wages were not disclosed because of a small number of firms reporting, employment was estimated at the midpoint of the range indicated, and payroll was estimated at the national average annual wage per employee.

Information on specific lawncare-related landscape services was taken from Dun & Bradstreet (Dun and Bradstreet Information Systems, 1997). A total of 18 specialty sectors were delineated, representing over 53 thousand establishments nationwide. For lawn and garden services, garden maintenance and planting services, and landscape contractors, the share of total revenues that was turfgrass-related was estimated at 29.5 percent, based on data from the Economic Census. Retail sales of lawncare goods were taken from the 2002 National Gardening Survey, conducted by Harris Interactive for the National Gardening Association (Butterfield, 2005). Sales of lawncare goods amounted to $11.96 billion in 2002, which represented 30.2 percent of total U.S. household retail lawn and garden expenditures ($39.64 billion). Information on the manufacturing of specific lawn equipment was taken from the Current Industrial Report on Farm Machinery and Lawn and Garden Equipment Manufacturing (U.S. Census Bureau, 2003b).
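The 30.2 percent retail share quoted above follows directly from the two National Gardening Survey totals; a one-line check:

```python
# Sanity check: lawncare goods as a share of total U.S. household retail
# lawn and garden expenditures in 2002 (National Gardening Survey figures).
lawncare_sales = 11.96      # $ billion
total_expenditures = 39.64  # $ billion

share = lawncare_sales / total_expenditures
print(f"{share:.1%}")  # prints "30.2%"
```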
Lawn equipment was segregated into six different categories, accounting for a total of $6.15 billion in sales in 2003.

Economic Impact Analysis

To evaluate the broad regional economic impacts of the turfgrass and lawncare industry in the United States, regional economic models were developed for each state using the Implan software system and associated state datasets (MIG, Inc., 2004). The Implan system includes over 500 distinct industry sectors. The Implan data used for this analysis were based on fiscal year 2001. The information for these models was derived from the U.S. National Income and Product Accounts, together with regional economic data collected by the U.S. Department of Commerce, Bureau of Economic Analysis. Input-output models represent the structure of a regional economy in terms of transactions between industries, employees, households, and government institutions (Miller and Blair, 1985).

Economic multipliers derived from the models were used to estimate the total economic activity generated in each state by sales (output) to final demand or exports. This includes the effects of intermediate purchases by industry firms from other economic sectors (indirect effects) and of industry employees' household consumer spending (induced effects), in addition to direct sales by industry firms. The regional Implan models were constructed as fully closed models, with all household, government, and capital accounts treated as endogenous, to derive Social Accounting Matrix (SAM) type multipliers, which capture transfer payments as well as earned income. Separate multipliers are provided for output (sales), employment, value added, labor income, and business taxes. The sectors used in the Implan models are indicated in Table 2, and the multipliers for each industry sector and state are shown in Appendix A.
The multipliers for output, value added, labor income, and indirect business taxes are expressed in dollars per dollar of output, while the employment multiplier is expressed in jobs per million dollars of output. Differences in multiplier values reflect the structure of industry sectors and the regional mix of supplier industries. The multipliers were applied to estimated industry sales or output to estimate total economic impacts. For the producer, manufacturer, service, and golf course sectors, total economic impacts were estimated as:

I_hij = S_hi x [A_hij + E_hi x (B_hij + C_hij)]

while impacts for the retail trade sectors were estimated as:

I_hij = S_hi x G_i x [A_hij + B_hij + C_hij]

where:
I_hij is the total impact for measure j (output, employment, value added, labor income, or indirect business taxes) in sector i and state h;
S_hi is industry sales in sector i and state h;
E_hi is the proportion of industry sales exported or shipped outside the state by sector i in state h;
A_hij is the direct effects multiplier for measure j in sector i and state h;
B_hij is the indirect effects multiplier for measure j in sector i and state h;
C_hij is the induced effects multiplier for measure j in sector i and state h;
G_i is the gross margin on retail sales for sector i.

The calculation for the producer and service sectors assumes that only the exported portion of output is sold to final demand and is therefore subject to the indirect and induced effects multipliers, while the remaining in-state sales meet intermediate demand from other business sectors and are subject only to the direct effects multiplier. Data on exports were taken from the Implan database for 2001 or 1999, except in the case of the nursery and greenhouse sector, where information for some states was taken from a national nursery industry survey.
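The two impact formulas above are simple enough to sketch in code. The multiplier and export-share values below are hypothetical placeholders, not figures from the study (its actual multipliers are in Appendix A):

```python
# Sketch of the two impact calculations described in the text.
# All numeric inputs in the example are hypothetical.

def producer_impact(sales, export_share, direct, indirect, induced):
    """I = S * [A + E * (B + C)]: producer, manufacturer, service, golf sectors.

    Only the exported share of sales is subject to the indirect and
    induced multipliers; in-state sales receive the direct effect only."""
    return sales * (direct + export_share * (indirect + induced))

def retail_impact(sales, gross_margin, direct, indirect, induced):
    """I = S * G * [A + B + C]: retail trade sectors.

    The gross margin G converts retail sales into the retail sector's
    own output before the multipliers are applied."""
    return sales * gross_margin * (direct + indirect + induced)

# Hypothetical example: $10M in sod-farm sales, 40% shipped out of state,
# output multipliers A = 1.0, B = 0.35, C = 0.35.
total_output = producer_impact(10_000_000, 0.40, 1.0, 0.35, 0.35)
print(round(total_output))  # prints 12800000
```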
Cultural Patterns (lecture slides: cultural patterns in intercultural communication)
3.2.1 High-Context
A high-context (HC) communication or message is one in which most of the information is either in the physical context or internalized in the person, while very little is in the coded, explicit, transmitted part of the message. In high-context cultures, people are very homogeneous with regard to experiences, information networks, and the like. Because of tradition and history, high-context cultures change very little over time.
3.1.3 Power Distance
This dimension deals with the extent to which a society accepts that power in relationships, institutions, and organizations is distributed unequally. People in high-power-distance countries believe that power and authority are facts of life… Social hierarchy is prevalent and institutionalizes inequality. To people in low-power-distance countries, a hierarchy is an inequality of roles established for convenience.
科汛CMS网站通用标签 (General labels for Kesion CMS websites)

1. KesionCMS V6 label list
Original | 2009-07-19 | 9449
This time, all the labels have been rearranged.

KesionCMS 6.0sp1 system directory roles:
Admin: background management directory; the folder can be renamed, and the management URL changes accordingly.
API: integration interface for third-party software.
Ask: question-and-answer system.
Club: built-in small forum / message board.
Company: enterprise space.
Config: configuration files.
Images: default template picture folder.
Item: configuration files.
JS: folder where JS tags are generated.
Ks_cls: core files.
KS_Data: database folder; can be renamed for security.
KS_Editor: editor folder.
KS_Inc: core files; stores script files and is essential.
Plus: plug-ins.
Product: enterprise space.
Space: personal space, space allocation, etc.
Template: template folder; can be renamed.
UploadFiles: upload folder.
User: member folder.
Conn.asp: configuration file; change database settings and the authentication code here.
Index.asp: home page file.
Map.asp: sitemap.

Usage: these tags can be copied and pasted directly into the template at the position where they should be called.

============== Site-wide general labels ==============
{$GetSiteTitle} displays the site title
{$GetSiteName} displays the site name
{$GetSiteLogo} displays the site logo (no arguments)
{=GetLogo(130,90)} displays the site logo (with the logo's width and height as parameters)
{=GetTags(1,50)} displays popular/latest Tags
{$GetSiteCountAll} displays site statistics (total columns, total articles, ...)
{$GetSiteOnline} shows the number of people online (total online: X; members: X; visitors: X)
{=GetTopUser(5, more...)} shows the active-user ranking
{=GetUserDynamic(10)} shows user activity (what everyone is doing)
{$GetSpecial} displays the special-topic entrance
{$GetFriendLink} displays the friendly-links entrance
{$GetSiteUrl} displays the site URL
{$GetInstallDir} shows the site installation path
{$GetManageLogin} displays the management login entrance
{$GetCopyRight} displays copyright information
{$GetMetaKeyWord} displays keywords for search engines
{$GetMetaDescript} displays the description for search engines
{$GetWebmaster} displays the webmaster
{$GetWebmasterEmail} shows the webmaster's email

========== Script labels (special effects) ==========
{=JS_Ad("left picture address", "right picture address", "/images/close.gif", 0.8)} couplet (side-floating) ads
{$JS_Time1} time effect (style: April 8, 2006)
{$JS_Time2} time effect (style: 2006-4-8, Saturday)
{$JS_Time3} time effect (style: Friday June 1, 2007, with the lunar calendar date)
{$JS_Time4} time effect (style: Saturday, April 8, 2006, 11:50:46)
{$JS_Language} simplified/traditional Chinese conversion
{$JS_HomePage} set as home page
{$JS_Collection} add to favorites
{$JS_ContactWebMaster} contact the webmaster
{$JS_GoBack} return to the previous page
{$JS_WindowClose} close the window
{$JS_NoSave} prevents the page from being saved
{$JS_NoIframe} prevents the page from being put into a frame
{$JS_NoCopy} prevents page content from being copied
{$JS_DCRoll} double-click scrolling effect
{=JS_Status1("typing text", 120)} typing effect on the status bar
{=JS_Status2("text is displayed on the status bar from right to left", 120)} text shown right to left on the status bar
{=JS_Status3("text moves on the status bar, then moves away", 150)} text typed on the status bar and then moved away

========== System-function labels (KesionCMS entry labels) ==========
These are labels we define ourselves in background label management: we name them ourselves and build them ourselves. Common calls include:
{LB_ channel navigation} {LB_ site announcement} {LB_ friendly links} {LB_ position navigation}

========== Member-system labels ==========
{$GetTopUserLogin} displays the member login entrance (horizontal)
{$GetUserLogin} displays the member login entrance (vertical)
{$GetAllUserList} displays the list of all registered members (used only on member-list page templates)
{$GetUserRegLicense} displays the registration terms of service and declarations
{$Show_UserNameLimitChar} shows the minimum number of characters for a new member's username
{$Show_UserNameMaxChar} shows the maximum number of characters for a new member's username
{$Show_VerifyCode} displays the verification code at registration
{$GetUserRegResult} shows the registration-success message

============== Search labels ==============
{$GetSearchByDate} advanced calendar search (widget)
{$GetSearch} site-wide search
{$GetArticleSearch} article system search
{$GetPhotoSearch} picture system search
{$GetDownLoadSearch} download system search
{$GetFlashSearch} anime system search
{$GetShopSearch} mall system search
{$GetMovieSearch} video system search
{$GetSupplySearch} supply-and-demand system search

=============== Channel (column) labels ===============
{$GetChannelID} displays the current model ID
{$GetChannelName} displays the current model name
{$GetItemName} displays the project name of the current model
{$GetItemUnit} displays the project unit of the current model
{$GetClassID} displays the current column ID
{$GetClassName} displays the current column name
{$GetClassUrl} displays the link address of the current column
{$GetClassPic} displays the current column picture
{$GetClassIntro} displays the current column introduction
{$GetClass_Meta_KeyWord} keywords for search engines
{$GetClass_Meta_Description} description for search engines
{$GetParentID} displays the parent column ID
{$GetParentUrl} displays the link address of the parent column

================ Content page labels ================
========== Article content page labels ==========
The following tags apply only to the article content page:
{$GetArticleUrl} current article URL
{$ChannelID} current model ID
{$InfoID} current article ID
{$ItemName} current project name
{$ItemUnit} current project unit
{$GetArticleShortTitle} article short title
{=GetPhoto(130,90)} content page picture (width x height)
{$GetArticleTitle} full title
{$GetArticleKeyWord} keyword Tags
{$GetKeyTags} gets the article Tags
{$GetArticleIntro} article introduction
{$GetArticleContent} article content
{$GetArticleAuthor} article author
{$GetArticleOrigin} article source
{$GetArticleDate} add date (format: May 1, 2009)
{$GetDate} add date (direct output)
{$GetHits} article popularity (total views)
{$GetHitsByDay} article views today
{$GetHitsByWeek} article views this week
{$GetHitsByMonth} article views this month
{$GetArticleInput} article entry (with link)
{$GetUserName} article entry (without link)
{$GetRank} article recommendation level
{$GetArticleProperty} displays the article's properties (hot, recommended, scrolled, ...)
{$GetArticleSize} shows article font size [large, medium, small]
{$GetArticleAction} shows [comment] [tell a friend] [print article]
{$GetPrevArticle} displays the previous article
{$GetNextArticle} displays the next article
{$GetPrevUrl} displays the previous article's URL
{$GetNextUrl} displays the next article's URL
{$GetShowComment} displays comments
{$GetWriteComment} posts a comment

Custom field labels: when we need to call special fields such as gender, age, or education on the content page, new fields can be added in model management. For example, add a new gender field and define its name as "Sex".
Then, when output is placed in the template content page, the tag {$KS_Sex} generates the page to display the effect, such as: "female"======================================================== picture content page label blocked{$GetPictureUrl} current picture URL{$ChannelID} current model ID{$InfoID} current picture small ID{$ItemName} current project name{$ItemUnit} current project unit{$GetPictureName} picture name{$GetPictureKeyWord} keyword{$GetKeyTags} get pictures Tags{$GetPictureSrc} get image thumbnails Src{$GetPictureByPlayer} view picture content (player mode){$GetPictureByPage} looks at the contents of the picture (last page, next page){$GetPictureIntro} pictures introduction{$GetPictureAuthor} photo authors{$GetPictureOrigin} photo source{$GetPictureDate} add date (Format: May 1, 2009){$GetDate} add date (direct output){$GetHits} pictures popularity (total number of visitors){$GetHitsByDay} pictures today{$GetHitsByWeek} pictures this week{$GetHitsByMonth} pictures visited this month{$GetPictureInput} picture entry (with links){$GetUserName} picture entry (without links){$GetRank} picture recommendation level{$GetPictureProperty} display picture properties (hot, scroll, recommended){$GetPictureAction} show [I'll comment] [I want to collect] home [close the window]{$GetPictureVote} show vote for it{$GetPictureVoteScore} displays votes for pictures{$GetPrevPicture} displays the previous set of pictures{$GetNextPicture} displays the next set of pictures{$GetPrevUrl} displays the previous set of pictures URL{$GetNextUrl} displays the next set of pictures URL{$GetShowComment} display comments{$GetWriteComment} commentsRegarding the download content page tag blocked{$GetDownUrl} current software URL{$ChannelID} current model ID{$InfoID} current software small ID{$ItemName} current project name{$ItemUnit} current project unit{$GetDownTitle} software name + version number {$GetDownKeyWord} keyword{$GetKeyTags} gets software Tags{$GetDownAddress} download 
address{=GetDownPhoto (130,90)} software picture (width * height) {$GetDownSize} file size +MB (KB) {$GetDownLanguage} software language{$GetDownType} Software Categories{$GetDownSystem} system platform{$GetDownPower} authorization modeIntroduction to {$GetDownIntro} software {$GetDownAuthor} author developers{$GetDownInput} software input (with links) {$GetUserName} software input (without links) {$GetDownOrigin} source{$GetDownDate} add (update) dateTotal number of hits downloaded by {$GetHits}{$GetHitsByDay} today hits{$GetHitsByWeek} hits this week{$GetHitsByMonth} hits this month{$GetDownLink} related links (Demo + registration address) {$GetDownPoint} downloads the required points {$GetDownYSDZ} demo address{$GetDownZCDZ} registered address{$GetDownDecPass} unzip the password{$GetDownProperty} display software properties [favorites, recommendations, etc.]{$GetDownAction} show [I'll comment] [I want to collect] {$GetRank} displays recommended levels {$GetPrevDown} displays the previous software {$GetNextDown} displays the next software {$GetPrevUrl} displays the last software URL{$GetNextUrl} displays the next software URL{$GetShowComment} display comments{$GetWriteComment} comments=============== animation content page label blocked{$GetFlashUrl} current anime URL{$ChannelID} current model ID{$InfoID} current anime little ID{$ItemName} current project name{$ItemUnit} current project unit{$GetFlashName} anime name{$GetFlashKeyWord} current animation keywords{$GetKeyTags} get anime Tags{=GetFlashByPlayer (550380)} viewing anime content (player mode, playback){=GetFlash (550380)} view the anime content (play in plain mode){$GetFlashIntro} anime profiles{$GetFlashAuthor} anime author{$GetFlashOrigin} anime sources{$GetFlashDate} add (update) date{$GetFlashSrc} anime address{$GetFlashFullScreen} full screen viewTotal number of {$GetHits} anime browsing{$GetHitsByDay} anime browsing today{$GetHitsByWeek} anime week visit{$GetHitsByMonth} anime browsing this 
month{$GetFlashInput} animation input{$GetFlashProperty} display animation properties (hot, scroll, recommended level){$GetFlashAction} show [I'll comment] [I'll collect] [close the window]{$GetFlashVote} show vote for it{$GetFlashVoteScore} displays anime votes{$GetPrevFlash} displays the previous animation {$GetNextFlash} displays the next anime {$GetPrevUrl} displays the previous anime URL {$GetNextUrl} displays the next anime URL {$GetShowComment} display comments {$GetWriteComment} commentsRegarding the content page regarding commodity label {$GetProductUrl} current commodity URL {$GetProductID} current commodity editor (ID) {$ChannelID} current model ID{$InfoID} current commodity small ID {$ItemName} current project name{$ItemUnit} current project unit {$GetProductName} commodity name{=GetProductPhoto (130,90)} pictures of goods{=GetGroupPhoto (200200)} displays the product picture group (multi direction display merchandise){$GetProductKeyWord} current commodity keywords{$GetKeyTags} get merchandise Tags{$GetProductIntro} product profiles{$GetProducerName} manufacturers{$GetTrademarkName} brand trademark{$GetProductModel} commodity model{$GetProductSpecificat} commodity specifications{$GetProductDate} on board{$GetServiceTerm} service period{$GetTotalNum} inventory quantity{$GetProductUnit} commodity unit{$GetHits} commodity popularity (total views){$GetHitsByDay} commodity views today{$GetHitsByWeek} week view of goods{$GetHitsByMonth} commodity visits this month{$GetProductType} types of sales (regular sales, discounts, promotions){$GetRank} recommendation level{$GetProductProperty} displays commodity attributes (hot sales, specials, recommendations, etc.){$GetProductInput} display commodity entry{$GetPrice_Original} shows the original retail price {$GetPrice} displays the current retail price{$GetPrice_Member} shows membership prices{$GetPrice_Market} shows market prices{$GetGroupPrice} automatically gets the price of the member group in the current user 
group{$GetDiscount} display discount rate{$GetScore} displays shopping points{$GetAddCar} joins the shopping cart{$GetAddFav} add favorites{$GetPrevProduct} displays the last item {$GetNextProduct} displays the next item {$GetPrevUrl} displays the last item, URL {$GetNextUrl} displays the next item, URL {$GetShowComment} display comments {$GetWriteComment} comments============== video content page label ================ {$GetMovieUrl} current movie URL{$ChannelID} current model ID{$InfoID} current movie ID{$ItemName} current project name{$ItemUnit} current project unit{$GetMovieName} movie name{$GetMovieActor} major actors{$GetMovieDirector} film director{=GetMoviePhoto (250250)} film picture (width * height){$GetMovieKeyWord} current movie keywords{$GetKeyTags} gets the movie Tags{$GetMovieLanguage} movie language{$GetMovieArea} producing areas{$GetMovieIntro} check out the video{$GetMovieTime} movie length (play time) {$GetScreenTime} release time{$GetMovieDate} add (update) date{=GetMoviePlayList (5, /images/movienavi.gif)} play list{=GetMoviePagePlay (450450)} content page player (width * height){=GetMovieDownList (5, /images/movienavi.gif)} download the list{$GetHits} movie popularity{$GetHitsByDay} movie hits today{$GetHitsByWeek} movie visits this week{$GetHitsByMonth} movie visits this month{$GetRank} shows recommended stars{$GetMovieInput} video input{$GetPoint} watch / download volume{$GetMovieProperty} displays movie properties (hot, scrolling, recommended, etc){$GetMovieVote} show vote for it{$GetMovieVoteScore} shows the votes for the film{$GetPrevMovie} displays the previous display{$GetNextMovie} shows the next movie{$GetPrevUrl} displays the last film, URL{$GetNextUrl} shows the next movie, URL{$GetShowComment} display comments{$GetWriteComment} commentsSupply and demand information ============ content page label =============={$GetGQInfoUrl} current information URL{$GetGQInfoID} current information ID{$ChannelID} current model ID{$InfoID} current 
information is small, ID{$ItemName} current project name{$ItemUnit} current project unit{$GetGQTitle} information topics{=GetSupplyPhoto (130,90)} information thumbnail (width * height){$GetGQKeyWord} gets keywords{$GetKeyTags} get information Tags{$GetPrice} price description{$GetInfoType} information categories{$GetTransType} transaction class{$GetValidTime} validityIntroduction to {$GetGQContent} information content{$GetHits} information popularity (total views){$GetHitsByDay} information today {$GetHitsByWeek} information access this week {$GetHitsByMonth} information access this month {$GetAddDate} release time{$GetInput} information publisher (member name) {$GetCompanyName} company name {$GetContactMan} contacts{$GetContactTel} contact number{$GetFax} fax number{$GetAddress} detailed address{$GetEmail} e-mail{$GetPostCode} zip codeThe {$GetProvince} exchange is in the provinces The {$GetCity} exchange is in the city {$GetHomePage} company web site{$GetPrevInfo} displays the last message{$GetNextInfo} displays the next message{$GetPrevUrl} displays the last message, URL{$GetNextUrl} displays the next message, URL{$GetShowComment} displays message messages{$GetWriteComment} releases messages============== music channel special label desolate{ = getmusiclist(0,真的,真的,真的,0,10,25,2)}最新歌曲播放列表(整站通用){ = getmusiclist(0,真的,真的,真的,1,10,25,2)}推荐歌曲播放列表(整站通用){ = getmusiclist(0,真的,真的,真的,2,10,25,2)}热门歌曲播放列表(整站通用){ = getmusicspeciallist(0,10,1,90,80,8,true)}最新专辑列表(整站通用){ = getmusicspeciallist(1,10,1,90,80,8,true)}推荐专辑列表(整站通用){ = getmusicspeciallist(2,10,1,90,80,8,true)}热门专辑列表(整站通用){ $ getmusicnavi }音乐顶部导航(整站通用){ }取得当前类别名称getsingertype美元,如华人男歌手,华人女歌手(适用于歌手模板页调用){ $ getsingerlist }取得当前类别下的所有歌手列表(适用于歌手模板页){ $ getmusicspeciallist }取得当前歌手的专辑列表(歌手专辑模板页调用){ $ getpagelist }取得当前歌手的专辑分页(歌手专辑模板页调用)============以下标签仅适用于最终专辑歌曲页模板=================={ $ getspecialid }取得专辑ID{ }取得专辑名称getspecialname美元{ }演唱歌手getsingername美元{ }发行公司getspecialcompany美元{ }发行日期getspecialdate美元{ 
}语言种类getspeciallanguage美元{ }专辑评论getspecialcomment美元{ }专辑收藏getspecialsave美元{ }专辑介绍getspecialcontent美元{ }专辑图片地址getspecialphoto美元{ }专辑歌曲播放列表getmusicplaylist美元=============公告内容页标签==================== { }公告标题getannouncetitle美元{ }公告作者getannounceauthor美元{ }公告发布时间getannouncedate美元{ }表公告的具体内容getannouncecontent美元===========友情链接页标签============={ }显示查看方式及申请友情链接等getlinkcommoninfo美元{ }显示分类及友情链接站点搜索getclasslink美元{ }分页显示友情链接详细列表getlinkdetail美元============专题页标签=============={ }当前专题名称getspecialname美元{ }当前专题图片getspecialpic美元{ }当前专辑介绍getspecialnote美元{ }当前专辑添加时间getspecialdate美元{ }当前专辑分类名称getspecialclassname美元{ $ getspecialclassurl }当前专辑分类URL=============广告位通用标签============{ = getadvertise(43)}广告位(参数根据不同的调查而变化,点击模板管理-编辑模板-选择更多标签,点击您要调用的调查内容即插入了该标签)=============站内调查标签============{ = getvote(16)}如您最喜欢科汛的哪些产品(参数根据不同的调查而变化,点击模板管理-编辑模板-选择更多标签,点击您要调用的调查内容即插入了该标签)=============== RSS标签=============={ $ RSS RSS标签显示}{ } RSS推荐标签显示rsselite美元{ } RSS热门标签显示rsshot美元===================用户自定义SQL函数标签==============系统含的自定义SQL函数标签,可以根据网站的需求在后台自定义网页效果,调用如{ sql_我要评论(10)} { sql_当前栏目品牌()}小提示:要用{ }开始,标签名称完后要以()结束===================用户自定义静态标签==============A custom static label that we input what call in the label after this tag shows what content, such as web sites are usually a page at the bottom of the head, are the same, in order to facilitate the modification and management, we always create a universal head and bottom of the universal label, which make the template in the HTML code is copied into the custom static the label store, called as {LB_ head at the bottom of the {LB_ general general}}SQL system function label novice must seeThe original | 2009 08 month 04 days on 1971The first article recommended by the 1.SQL_ call ()Condition: at least one article has been recommendedStatement: select, top, 1, ID, Tid, Title, from, KS_Article, where, Recommend = 1, order, by, ID, descCall: the first ()} recommended by the {SQL_ callPS: too simple2.SQL_ calls recommend 2 to 10 () ----------------------PS: why start 
from 2, some people have some BT requirements, such as me, and then there will be any bar to any barCondition: more than 10 are recommendedStatements: select, top,, ID, Tid, Title, Adddate, from,KS_Article, where, ID, not, in (select, top, 1, ID, from, KS_Article, where, Recommend =, order, by, ID, DESC), order, by, Recommend, descCall: {SQL_ calls recommend 2 through 10 ()}3.SQL_ site headlines ()Condition: at least one of the headlines has been setStatement: select, top, 1, ID, Tid, Title, Adddate, from, KS_Article, where, Strip = 1, order, by, Adddate, descCall: {SQL_ site headlines ()}4.SQL_ calls a column TOP10 (Param (0))Condition: there is an article in the column"---------------------PS": nonsense!Statement: select top 10 ID, Tid, Title, Adddate, Hits, Intro, Picurl from KS_Article where Tid ='{$Param (0) order by ID desc}'Argument: Param (0) = column ID -----------------PS: don't tell me you don't knowCall: for example, {SQL_ calls a column TOP10 (200710254)}5.SQL_ a column is currently N (Param (0), Param (1))Condition: forget not to say, I'm afraid. Brick...Statement: select top {$Param (0) Tid, Title, ID}, Adddate from KS_Article where Tid='{$Param (1) order by Adddate desc}'Argument: Param (0) = = several, must be numeric, Param (1) = column ID, e.g. 
200710258: similar (New SQL second pages below)Call: for example, {SQL_, a column is currently N (10200701255)}The following is a personal ------------------------ BT needs to watch ---------------------------------------6., with the use of fifth, for example, the first 10 are divided into 3 and 7, the spell is just 10, and sometimes in order to make different styles, such as up and down width is not the same, different colors (more said)SQL_ a column is currently N1 to N2 (Param (0), Param (1), Param(2))Statement: select top {$Param (0) Tid, Title, ID}, Adddate from KS_Article where Tid ='{$Param (2)}'and ID not in (select top {$Param ID from KS_Article (1)} (2) = where Tid'{$Param order by Adddate desc}') order by Adddate descArgument: the parameters of this statement, please look carefully, otherwise the effect is not necessarily rightN1 to N2Param (0) = N2 - N1 + 1Param (1) = N1 - 1Param (2) = column IDFor example: 2 to 10{SQL_ a column is currently N1 to N2 (9,12007102545)}7., this is suitable for display in the home page, display when the column name and article title together, cool! 
In general, we use labels that pull from a single column, such as a column's TOP10. This label instead pulls the latest article from each column in one call. It is laborious to explain; you will understand it once you use it.

SQL_ each column, latest article ()

Statement: select ID, Tid, Title, Adddate from KS_Article where ID in (select max(ID) from KS_Article group by Tid)

Effect: one article (the newest) from every column.
Parameters: none. Because the number of columns is not fixed, the quantity is controlled flexibly in the LOOP.
How to use: place the label in the template where you want it displayed.

============== General site labels ==============
{$GetSiteTitle} displays the title of the website
{$GetSiteName} displays the website name
{$GetSiteLogo} displays the website logo (without arguments)
{=GetLogo(130,90)} displays the website logo (with parameters: the logo's width and height)
{=GetTags(1,50)} displays popular/latest Tags
{$GetSiteCountAll} displays site statistics (total columns, total articles, ...)
{$GetSiteOnline} shows the number of people online (total online: X; members: X; guests: X)
{=GetTopUser(5, "more...")} shows the active-user ranking
{=GetUserDynamic(10)} shows user activity (what everyone is doing)
{$GetSpecial} displays the special-topic entrance
{$GetFriendLink} displays the friendly-link entrance
{$GetSiteUrl} displays the site URL
{$GetInstallDir} displays the site installation path
{$GetManageLogin} displays the admin login entrance
{$GetCopyRight} displays copyright information
{$GetMetaKeyWord} displays keywords for search engines
{$GetMetaDescript} displays the description for search engines
{$GetWebmaster} displays the webmaster
{$GetWebmasterEmail} shows the webmaster's email

============== JavaScript effect labels ==============
{=JS_Ad("left picture address","right picture address","/images/close.gif",0.8)} floating couplet ads
{$JS_Time1} time effect (style: April 8, 2006)
{$JS_Time2} time effect (style: 2006-4-8, Saturday)
{$JS_Time3} time effect (style: Friday, June 1, 2007, with the lunar calendar date)
{$JS_Time4} time effect (style: Saturday, April 8, 2006, 11:50:46)
{$JS_Language} simplified/traditional Chinese conversion
{$JS_HomePage} set as home page
{$JS_Collection} add to favorites
{$JS_ContactWebMaster} contact the webmaster
{$JS_GoBack} return to the previous page
{$JS_WindowClose} close the window
{$JS_NoSave} prevents the page from being saved by others
{$JS_NoIframe} prevents the page from being placed in a frame by others
{$JS_NoCopy} prevents page content from being copied
{$JS_DCRoll} double-click scrolling effect
{=JS_Status1("status bar typing effect",120)} status bar typing effect
{=JS_Status2("text is displayed on the status bar from right to left",120)} text displayed from right to left on the status bar
{=JS_Status3("text moves on the status bar, then moves away",150)} text typed on the status bar and then moved away

============== System function labels (KesionCMS entry labels) ==============
These are function labels you define yourself in the label management area: you name them and build them yourself. Common calls include:
{LB_ channel navigation} {LB_ website announcement} {LB_ friendly links} {LB_ position navigation}

============== Member system labels ==============
{$GetTopUserLogin} displays the member login entrance (horizontal)
{$GetUserLogin} displays the member login entrance (vertical)
{$GetAllUserList} displays the list of all registered members (used only on member-list page templates)
{$GetUserRegLicense} displays the new-member registration terms of service and declaration
{$Show_UserNameLimitChar} shows the minimum number of characters for a new member's username
{$Show_UserNameMaxChar} shows the maximum number of characters for a new member's username
{$Show_VerifyCode} displays the verification code at new-member registration
{$GetUserRegResult} displays the new-member registration result message

============== Search labels ==============
{$GetSearchByDate} advanced calendar search (widget)
{$GetSearch} site-wide search
{$GetArticleSearch} article system search
{$GetPhotoSearch} picture system search
{$GetDownLoadSearch} download system search
{$GetFlashSearch} anime system search
{$GetShopSearch} mall system search
{$GetMovieSearch} video system search
{$GetSupplySearch} supply-and-demand system search
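The two SQL labels above window a column's articles by date. A minimal sketch of the arithmetic behind the "articles N1 to N2" label, using the Param rules quoted above; the article list is hypothetical sample data standing in for KS_Article rows already sorted newest first:

```python
# Sketch of the windowing math behind the "articles N1 to N2" label.
# Param(0)/Param(1) follow the rules stated in the label documentation.

def params_for_window(n1, n2):
    """Return (Param0, Param1) for articles N1..N2 of a column."""
    return n2 - n1 + 1, n1 - 1

def window(articles_newest_first, n1, n2):
    """Emulate: select top Param(0) ... where ID not in (select top Param(1) ...)."""
    top_count, skip_count = params_for_window(n1, n2)
    skipped = articles_newest_first[:skip_count]            # inner "top Param(1)"
    remaining = [a for a in articles_newest_first if a not in skipped]
    return remaining[:top_count]                            # outer "top Param(0)"

articles = [f"article-{i}" for i in range(1, 21)]  # newest first
print(params_for_window(2, 10))   # (9, 1)
print(window(articles, 2, 10))    # article-2 .. article-10
```

So the label call "(9,1,...)" for articles 2 to 10 follows directly from Param(0) = 10 - 2 + 1 = 9 and Param(1) = 2 - 1 = 1.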
Web Graphics: Made-to-Measure Technologies for an Online Clothing Store
The Internet is a compelling channel for selling garments. Several recent initiatives by companies such as Nordstrom, Macy's, and Lands' End focus on made-to-measure manufacturing and shopping via the Internet. Current Web technologies fuel these initiatives by providing an exciting and aesthetically pleasing interface to the general public.

However, until now, such Web applications have supported only basic functions such as viewing apparel items in 2D or 3D, combining different items together, and mixing and matching colors and textures (and sometimes using a mannequin adjusted to the shopper's proportions). The most common problems customers experience when they try on the clothes are poor fit, an unpleasant feeling while wearing the item, and surprise at the garment's color. As a result, high product return rates persist, and most consumers are still either hesitant to purchase garments online or unsatisfied with their online shopping experience [1].

Here, we present a Web application that provides more powerful access to and manipulation of garments to facilitate garment design, pattern derivation, and sizing. (You can visit the virtual Try-On application at http://virtual-try-on.miralab.unige.ch.) We apply 3D graphics technology to help create and simulate the virtual store. In this article, we discuss various relevant research problems, the creation of the shopper's body and garments, simulation of body and garment movement, and online sizing.

Our system supports many efficient and interactive operations, such as automatically adjusting the 3D mannequin according to the shopper's body measurements, selecting different garment items, online fitting and resizing of the garment to the mannequin, and simulating garment movement in real time.
We ultimately hope to develop and integrate several key technologies into a distributed, interactive virtual clothing store where customers can choose garments, use 3D mannequins that are adjusted to their body measurements, and receive assistance during their online purchase.

System overview

An online clothing store designed as a Web application requires flexible manipulation, fast transmission, and efficient storage of the display content. In particular, because of the relatively huge database of garments and the simulation of complex graphical objects such as skin and cloth needed for such an application, the most critical limitation is the real-time performance constraint. When garment movement is simulated with a physics-based model, the results must be prerecorded to display the animated garment at an interactive rate, because real-time performance is impossible to achieve [2].

Although this approach simplifies the online computation, it requires us to transfer, for each frame, the position data of each vertex constituting the visual representation of a garment, increasing the response time of the client application. In addition, it's time-consuming to simulate each 3D garment item in the database: typically, simulating the garment movement of a 1-minute sequence takes about 4 to 12 hours, depending on the 3D geometry's complexity. In our application, we choose a better alternative: we calculate the garment simulation on the fly while keeping the response time interactive.

Human body modeling and simulation is another key technology enabling automated garment sizing and size selection. Since the advent of 3D image capture technology, we've seen much interest in applying this technology to taking human-body measurements. Today, some systems are optimized either for extracting accurate measurements from parts of the body or for realistic visualization in various fields (including e-commerce applications).
Cyberware Inc.'s DigiSize, for instance, was developed in a joint government project to improve and automate fitting and issuing military clothing (see /products for more information).

[Frederic Cordier, Hyewon Seo, and Nadia Magnenat-Thalmann, MiraLab, University of Geneva, Switzerland. "Made-to-Measure Technologies for an Online Clothing Store," Web Graphics department, IEEE Computer Graphics and Applications, January/February 2003. Published by the IEEE Computer Society, 0272-1716/03/$17.00 © 2003 IEEE.]

Despite recent efforts devoted to using 3D body scanners, limiting factors remain, such as the inability to automatically integrate body scan data into application software. One important feature of our online clothing store is its ability to build a 3D mannequin according to the body measurements the user inputs. This led us to consider another body modeling scheme. We perform the online construction of a 3D mannequin that satisfies the given measurements, avoiding scanners and the downloading of large data models. Despite the apparent difficulty of building the body geometry from a limited amount of information within the real-time constraint, we show how this process is feasible through our robust modeling technique for practical applications. Moreover, with our approach, we can interactively modify the measurements and body motion.

Figure 1 shows the system architecture. Our system has a minimal response time because a major part of the content to be manipulated is generated on the client side rather than on the server. Computing the body and cloth animations on the server and sending this data through the Internet would generate too much traffic and reduce the application's response time. Thus, our solution is to move the body and garment sizing, as well as the cloth and skin animation, to the client side, avoiding the download of large precalculated models.

The server

The online clothing store Web server consists of the following databases and an online database retrieval application module for retrieving data:

- Body database. This database contains two 3D mannequins for each gender, which we refer to as generic models, plus certain statistical information collected from existing models through 3D shape capture technologies. This is essentially the information needed to derive new body geometries from measurement inputs by the body- and garment-sizing module.
- Garment database. We created a set of 3D garment models for the generic models and categorized them. They're available to the user in the garment catalog pages (see Figure 2). Ideally, the online clothing shop will feature many different garments. Moreover, the online store will make it easy to update the database to keep coherence between the clothes shown on the Web application and the clothes available for sale. Therefore, this garment database is located on the server side. Upon user selection, the system downloads the chosen 3D garment model to the client. These garments are saved in the Virtual Reality Modeling Language (VRML) format.
- Motion database. The animation database contains samples of body motion data. Like the garment data, the system can download the selected motion data upon request. We obtain the motion data by prerecording a real person's movement with an optical motion capture system.
- Scene database. Graphical elements that compose the background scene are stored as VRML files in the scene database.

[Figure 1. Web application architecture of our online clothing store.]

...body models, and what data we need for the runtime body and garment sizing.

Use of measurements

The application uses apparel designers' eight primary measurements (listed in Table 1).
We assume that the users are aware of their own measurements or that the measurements have been retrieved from 3D scanners or tapes.

Table 1. Measurement definitions.
  Stature: vertical distance between the crown of the head and the ground.
  Crotch length: vertical distance between the crotch level at the center of the body and the ground.
  Arm length: the distance from the armpit and shoulder line intersection, over the elbow, to the far end of the prominent wrist bone in line with the small finger.
  Neck girth: the girth of the neck base.
  Bust/chest girth: maximum circumference of the trunk measured at bust/chest height.
  Underbust girth: horizontal girth of the body immediately below the breasts.

Generic models

Instead of constructing a new geometry for each model, we assume that the model's topology is known a priori and shared by all resulting models. The idea behind this is to exploit the objects' common structure. Aside from making the problem simpler and the modeling faster, deforming an existing model to obtain a new one makes it easier to immediately animate the newly generated model.

The generic model consists of a standard human skeleton structure (see the Humanoid Animation Working Group, or H-Anim, specification at ) and a textured skin surface of approximately 6,000 vertices and 10,000 triangles. There are two models, one for each gender. The generic models' preparation consists of three successive procedures:

1. We create and properly adjust an H-Anim skeleton to the body mesh using the Discreet 3ds max tool.
2. We then calculate the skin attachment data using a classical method for the basic bones, so that as the skeleton moves, the corresponding skin is transformed to stay aligned with it. In our current implementation, we do this through BonesPro.
3. Once we've defined the attachment, we segment the skin mesh and locate each of the segments in the skeleton hierarchy as a proper child node of the corresponding joint.
We decompose the mesh into a collection of adjacent triangles that share the joint of the highest weight, forming segment nodes of the corresponding joint nodes (Figure 3c). Now we can export the model as an H-Anim-compliant human body. We export the attachment data as XML data in the VRML file.

[Figure 3. Automatic segmentation of the body model: (a) the original mesh, (b) the H-Anim skeleton, and (c) the segmented mesh.]

Example-based approach

Mere geometric consistency of measurements doesn't guarantee a reasonable body shape appearance. Arguably, the captured body geometry of real people provides the best available resource to model and estimate correlations between measurements and the shape. We exploit correlations among measurements and between measurements and the body shape. This permits robust modeling of body geometry even when only a limited amount of information is available.

Figure 4 shows the body modeling process. It involves two preprocessing tasks. First, we establish topological equivalence among these data sets (example models hereafter) so that a complete mapping can be found for each vertex in the mesh and the skeletal structure. Based on the prepared samples and their measurements, the second task implements interpolators for the examples.

Example model preparation

...body deformation. The joint parameters of a person, which will determine the global proportion of the physique, represent the rigid component. Then, we use shape parameters to express the elastic component, which, when added, depicts the body's detail shape. We define the joint and shape parameters as follows:

- Joint parameters. The joint parameters are essentially each joint's degrees of freedom (DOF). A joint has scale (s_x, s_y, s_z) and translation (t_x, t_y, t_z), leading to a vector of 6N dimensions, where N is the number of joints.
- Shape parameters. As Figure 5 shows, we use a set of contours to represent the detail shape of the body geometry, particularly on the torso. Note that many of the primary measurements are based on contours. The elastic deviation of each contour vertex from its initial position through the fitting process represents the deformation's elastic component.

For a compact representation of both parameters, we adopt the principal component analysis (PCA) approach, one of the common techniques for reducing the dimensionality of data in a statistical manner. See Press et al. [4] for an extensive discussion of this technique.

Interpolator construction

Once we prepare the example models, we build interpolators that will be used to evaluate the deformations necessary to obtain new models by the body- and garment-sizing module. We consider two interpolators: joint and shape. Given a particular measurement set, joint interpolators are responsible for the body deformation's rigid component, guiding the skeleton-driven deformation to obtain the appropriate proportion of each body part. The shape interpolators deal with the elastic deformation, adding detail shape to the body. Dealing with high-dimensional data and a relatively small (fewer than 10) number of examples, a scattered-interpolation problem best describes our problem. As a result, we implemented the joint and shape interpolators as functions of the eight primary measurements by using radial basis functions (RBFs).

Interpolator exportation

Once the system has generated the interpolators, we can quickly explore the measurement space to evaluate the deformation's rigid and elastic components, allowing runtime generation of a customized 3D body model. Implemented as an RBF, each interpolator is described by its interpolation nodes and the nodes' corresponding weights.
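The PCA-based compact representation mentioned above can be sketched in a few lines of NumPy. This is a generic illustration under assumed toy data (8 synthetic example models, 30-dimensional parameter vectors), not the paper's actual pipeline:

```python
import numpy as np

# Minimal PCA sketch for compacting shape/joint parameter vectors.
# The data is synthetic; a real system would use parameter vectors
# extracted from the captured example models.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 30))            # 8 example models, 30-dim parameter vectors

mean = X.mean(axis=0)
Xc = X - mean                           # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 3                                   # keep 3 principal components
coeffs = Xc @ Vt[:k].T                  # compact representation (8 x 3)
X_approx = mean + coeffs @ Vt[:k]       # reconstruction from k components

# Relative reconstruction error shrinks as k grows toward the data's rank.
err = np.linalg.norm(X - X_approx) / np.linalg.norm(X)
print(coeffs.shape)                     # (8, 3)
```

With fewer than 10 examples, the centered data has rank at most 7, so a handful of components already reproduces every example exactly.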
To speed up the runtime evaluation, we implemented the basis function using a lookup table.

Garment database

We calculated a set of garments for the generic model, which reside in the garment database. To create clothes usable in the online clothing store, designers must draw the patterns, preprocess the garments, and export them into VRML. We developed several authoring tools to assist designers in their work.

Design of garments

Designers create garments using our in-house software [3]. This tool assists designers in drawing 2D patterns and defining seam lines on the borders of the garment patterns, referring to the polygon edges that are to be joined during the garment construction process, as Figure 6a shows. The system tessellates the patterns into a triangular mesh and places them around the 3D virtual body (see Figure 6b). Next, the system computes the initial shape of the garment through a collision response, as illustrated in Figure 6c. The body model's shape guides the cloth's surface as a result of the collision response. Because the body- and garment-sizing module handles garment sizing online, each garment item needs only one simulation for the generic model.

[Figure 5. Fitting a generic model to an example: (a) the generic model, (b) an example model, and (c) after the fitting.]
[Figure 6. Creating garments. The system (a) constructs a garment pattern and (b) places it on a 3D virtual body. (c) Then it computes the initial shape of the garment through a collision response.]

Garment preprocessing

Simulating garments in real time requires drastically simplifying the simulation process, possibly at the expense of mechanical and geometrical accuracy. Our approach [5] is based on a hybrid method in which the cloth is segmented into various sections and different algorithms are applied.
When observing a garment worn by a moving character, we notice that the movement of the garment can be classified into several categories depending on how the garment is placed, that is, whether it sticks to, or flows on, the body surface. For instance, a tight pair of trousers will mainly follow the movement of the legs, while a skirt will flow around the legs. Thus, we segment the cloth into three layers, defined as follows:

- Layer 1: Stretch clothes. Garment regions that stick to the body with a constant offset. In this case, the cloth exactly follows the movement of the underlying skin surface.
- Layer 2: Loose clothes. Garment regions that move within a certain distance of the body surface. The best examples are shirt sleeves. In this case, we assume that the cloth surface always collides with the same skin surface and that its movement is mainly perpendicular to the body surface.
- Layer 3: Floating cloth. Garment regions that flow around the body. The cloth movement doesn't exactly follow the body movement, and collisions aren't predictable. For example, for a long skirt, the left side of the skirt may collide with the right leg during animation.

These three categories are animated using three different cloth layers. The idea behind the proposed method is to avoid the heavy calculation of physical deformation and collision detection wherever possible. The main interest of our approach is to preprocess the target cloth and body model so that they can be computed efficiently during runtime. The garments are divided with our in-house software into a set of segments, and the associated simulation method is defined for each.

Computing cloth attachment data

Defining attachment data is one step of the preprocessing stage. As we stated in the previous section, the cloth deformation method uses the underlying skin's shape. We use the underlying skin deformation for the simulation of the first two layers of the cloth.
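The three-layer rule above amounts to a distance test against the skin in the resting pose. A minimal sketch, with hypothetical threshold values (the paper associates per-segment distances but does not give numbers):

```python
# Sketch of the three-layer segmentation rule.
# Thresholds are illustrative assumptions; a real implementation measures
# each cloth vertex's distance to the skin mesh in the garment's resting shape.

TIGHT_OFFSET = 0.5   # hypothetical "constant offset" bound for layer 1
LOOSE_LIMIT = 3.0    # hypothetical bound for layer 2 (e.g., sleeves)

def classify_vertex(distance_to_skin):
    """Map a cloth vertex's resting distance from the skin to a layer number."""
    if distance_to_skin is None:          # no colliding skin surface at all
        return 3                          # floating cloth
    if distance_to_skin <= TIGHT_OFFSET:
        return 1                          # stretch clothes
    if distance_to_skin <= LOOSE_LIMIT:
        return 2                          # loose clothes
    return 3

print([classify_vertex(d) for d in (0.2, 2.0, 7.5, None)])  # [1, 2, 3, 3]
```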
Each garment mesh vertex is associated with the closest triangle, edge, or vertex of the skin mesh. In Figure 7, the garment vertex C is in collision with the skin triangle S1S2S3. We define C' as the closest vertex to C located on the triangle S1S2S3. We then define the barycentric coordinates of C' with respect to S1, S2, and S3. These barycentric coordinates are later used for cloth simulation and sizing. They provide an easy way to compute the garment's "resting" shape from the locations of the skin vertices.

Segmentation

With the garment in its resting shape on the initial body, we use the distance between the garment and the skin surface to determine the category to which each cloth triangle belongs. Associated with each segment are distances from the skin surface that are used to determine the category. Each segment falls into one of three categories: tight, loose, and floating clothes. Cloth vertices that are located close to the skin surface belong to the first or second layer. Cloth vertices that don't collide with any skin surface belong to the third layer, as Figure 8 shows.

Exportation

Once the garment is segmented, we can export the 3D cloth models and the associated data into VRML for use in the online clothing store. These models are located on the server, allowing easy update and maintenance.

Motion database

Commercially available motion capture systems offer a relatively easy way of recording a human performer's movement. Our system converts the animation data so that it's immediately applicable to H-Anim models. The converter first computes the correspondence between the two skeleton hierarchies (one from the Vicon [] motion capture system and the other from H-Anim) and performs a transformation on each joint angle for each frame of animation to resolve the difference in their stand-by postures. The converted animation data is exported in VRML format using the Interpolator node and can be applied to the H-Anim body model on a frame-by-frame basis.
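The attachment scheme of Figure 7 can be sketched as follows: project the garment vertex C onto the plane of the skin triangle S1S2S3, take the barycentric coordinates of the projection C', then reuse those weights to recompute the resting position when the skin vertices move. The coordinates are toy values, and the sketch assumes C' falls inside the triangle (the paper's closest-point case):

```python
import numpy as np

def barycentric_of_projection(c, s1, s2, s3):
    """Barycentric coordinates of C projected onto the plane of triangle S1S2S3."""
    n = np.cross(s2 - s1, s3 - s1)
    c_proj = c - n * np.dot(c - s1, n) / np.dot(n, n)    # C' on the plane
    # Signed sub-triangle areas relative to the full triangle area.
    area = np.dot(np.cross(s2 - s1, s3 - s1), n)
    a = np.dot(np.cross(s2 - c_proj, s3 - c_proj), n) / area
    b = np.dot(np.cross(s3 - c_proj, s1 - c_proj), n) / area
    return a, b, 1.0 - a - b

s1, s2, s3 = np.array([0., 0, 0]), np.array([1., 0, 0]), np.array([0., 1, 0])
c = np.array([0.25, 0.25, 0.3])                # garment vertex above the triangle
a, b, g = barycentric_of_projection(c, s1, s2, s3)
print(np.round([a, b, g], 3))                  # [0.5, 0.25, 0.25]

# "Resting" shape: apply the same weights to the moved skin vertices.
moved = [s + np.array([0., 0, 1.]) for s in (s1, s2, s3)]
c_rest = a * moved[0] + b * moved[1] + g * moved[2]
print(np.round(c_rest, 3))                     # [0.25, 0.25, 1.0]
```

Storing only (a, b, g) per garment vertex is what lets the sizing module reposition the first two cloth layers directly from the deformed skin, with no simulation.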
The database consists of several typical motions that people make when trying on clothes, such as walking and turning.

Integrating the client

The client application isn't only involved in the visualization of garments; it also calculates the cloth and body deformation. As Figure 9 shows, the client architecture uses three different layers for the implementation: C++, JavaScript, and HTML.

[Figure 7. Mapping of attachment information. Here the garment vertex C is in collision with the skin triangle S1S2S3.]
[Figure 8. Segmentation of garments. In layers 1 and 2, cloth vertices are located close to the skin surface, while in layer 3, the vertices don't collide with any skin surface.]

The ActiveX control

We developed modules for real-time animation and visualization in C++ and integrated them into an ActiveX control. ActiveX controls are components that can be embedded within Web browsers such as Microsoft Internet Explorer. Because they can be written in any language, they offer the best performance for time-critical applications. Another advantage of ActiveX controls is that their installation on the client machine is transparent to the user: they're automatically downloaded and installed at the first access to the Web page.

Our ActiveX control consists of several modules: the FTP client for downloading data from the server, the VRML loader that's in charge of loading the data into the scene manager, the skeleton animation player for animating the mannequin skeleton, the body and cloth sizing module for fitting the body and clothes to the user measurements, the skin and cloth deformation module for real-time animation, and the 3D viewer for visualization of the scene.

JavaScript and HTML pages

We used JavaScript and HTML to implement the graphical user interface. They provide a good solution for user interaction.
JavaScript allows complex functionalities such as keeping track of the user's choices, managing widgets, and sending the user-defined parameters to the ActiveX control. Figure 10 shows a Web page where the user enters the body measurements and visualizes the 3D mannequin animation.

[Figure 9. Overview of the client architecture.]
[Figure 10. Screen shot of the ActiveX viewer with the HTML and JavaScript user interface.]

Body- and garment-sizing module

The body- and garment-sizing module's main task is to manage the proper sizing of the 3D mannequin in a VRML scene. As we described, the module uses the generic model and the interpolators to evaluate the joint and shape parameters needed to deform the generic model. It first deforms the body model by applying the parameters evaluated from the interpolators as a function of the measurement input. The garments are then deformed accordingly so that they fit the deformed body.

Body sizing

Given the measurement values, or a specific location in the measurement space chosen by the application at runtime, the interpolator evaluates the necessary deformation by efficiently blending the examples with known measurements to produce an interpolated shape.

Contour warping

Given the new set of measurements, the system determines the location in the measurement space. Using this vector as input, the system evaluates the joint parameters from the joint interpolators for each joint in the skeleton. Figure 11b shows the model after the joint parameters are applied to the model shown in Figure 11a. Similarly, the shape interpolators determine the shape of the primary and auxiliary contours consecutively and deform them by adding the evaluated vertex displacements, as Figure 11c shows.

Mesh deformation

To compute a skin surface that smoothly interpolates between the deformed contours, we again use the scattered data interpolation technique.
For each point p_i on the skin mesh, the final location is given by

  p'_i = p_i + W(p_i),    (1)

where W = (s_x, s_y, s_z) denotes the deformation function and W(p_i) = (Δp_ix, Δp_iy, Δp_iz). For n points p_1, p_2, ..., p_n of the contour set, we have W(p_i) from the shape interpolators. Then we can find the coefficients of the Gaussian function interpolants s_x, s_y, s_z by solving a linear system of 3n equations:

  s_x(p_j) = Δp_jx,  s_y(p_j) = Δp_jy,  s_z(p_j) = Δp_jz,  j = 1, ..., n.    (2)

Once the interpolants have been calculated, we deform the mesh by displacing all points on the mesh according to the resulting values.

Refinement

Some joints are directly related to the input measurements (stature, crotch length, and arm length), and the joint parameters evaluated from the interpolators need to be adjusted accordingly. The refinement phase of our approach measures the model obtained from the interpolators and adjusts the skeleton to ensure the user-specified measurement constraints on the model.

Garment sizing

After the body model is properly deformed from the given measurements, the garment-sizing module modifies the garments so that they fit this new body. As we explained earlier, the cloth mesh is segmented into three layers: two layers where the cloth vertices are attached to the body surface and one layer where the garments move freely. Thanks to the cloth attachment data (Figure 7), we can easily compute the positions of vertices belonging to the first two layers from the positions of the underlying skin vertices weighted with the barycentric coordinates. Cloth vertices in these two layers follow the skin surface, even when the skin is deformed by the body-sizing module. For the sizing of garments belonging to the third layer, such as skirts or dresses, the diameter and length of the legs are calculated in the body deformation module. The length and width of the garments are then scaled according to the new measurements of the legs.
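Equations (1) and (2) describe a standard Gaussian radial-basis-function fit: solve for coefficients so the displacement field reproduces the known contour displacements, then move every mesh point by p' = p + W(p). A minimal sketch with synthetic contour points and displacements; the Gaussian width is a hypothetical choice:

```python
import numpy as np

rng = np.random.default_rng(1)
contours = rng.uniform(-1, 1, size=(6, 3))    # n = 6 contour points p_j
disp = rng.normal(0, 0.05, size=(6, 3))       # known W(p_j) from the shape interpolators

def gaussian(r, width=1.0):
    return np.exp(-(r / width) ** 2)

# Solve the 3n linear system of equation (2): one weight column per coordinate.
Phi = gaussian(np.linalg.norm(contours[:, None] - contours[None, :], axis=-1))
weights = np.linalg.solve(Phi, disp)          # (6, 3) RBF coefficients

def W(points):
    """Evaluate the displacement field at arbitrary points."""
    Phi_p = gaussian(np.linalg.norm(points[:, None] - contours[None, :], axis=-1))
    return Phi_p @ weights

# Equation (2) holds: the field reproduces the known displacements exactly.
print(np.allclose(W(contours), disp))         # True

# Equation (1): displace every skin-mesh point.
mesh = rng.uniform(-1, 1, size=(100, 3))
mesh_deformed = mesh + W(mesh)
print(mesh_deformed.shape)                    # (100, 3)
```

The Gaussian kernel matrix is symmetric positive definite for distinct nodes, so the system is always solvable, which is one reason RBFs suit this small scattered-data setting.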
Figure 12 shows the body and garment sizing results.

Skin and garment simulation module

After generating the dressed body according to the user measurements, the motion data applied to the H-Anim skeleton drives the skin and cloth deformation. This section addresses the computation of the real-time simulation of bodies and clothes according to the animation of the underlying skeleton.

Joint-driven deformation of skin

We use the skeletal-driven deformation technique, introduced in the section "Segmentation and exportation," for simulating the skin deformation of the virtual human. At each frame of the animation, we calculate the vertices' positions using the weight values and the transformation matrices of the joints. We also use these weight values to compute the mesh surface's normals. The movement of vertices that belong to a single joint isn't calculated; they're automatically moved as they're attached to the joint. Duplicated vertices that lie on the boundaries have the same position because they share the same attachment information. Thus, the boundaries among segments aren't visible. The result is a segmented body that appears as a seamless body in the rendering viewport. This method combines the speed of deforming segmented bodies with the visual quality of seamless bodies.

[Figure 11. Skin deformation through interpolation: (a) the initial reference model, (b) after the skeletal deformation via joint interpolation, and (c) after adding detailed shape via shape interpolation.]

Garment simulation

We previously proposed techniques for real-time clothing simulation [5] (see Figure 13). Garment vertices are animated with three different methods, depending on the layer they belong to, which is defined during the preprocessing stage.

Layer 1. Tight clothes in layer 1 follow the deformation of the underlying skin.
These deformations are calculated with a geometric method, thanks to the mapping of the skin's attachment data to the garment surface.

Layer 2. For layer 2, which consists of loose clothes, the movement of the clothes relative to the skin remains relatively small, keeping a certain distance from the skin surface. Consider the movement of a sleeve in relation to the arm: for a certain region of the garment, the collision area falls within a fixed region of the skin surface during simulation. With this in mind, the scope of the collision detection can be severely limited. We assume that the movement of the garment largely depends on the underlying skin, yet it shouldn't follow the skin surface rigidly. It's necessary to simulate the local displacement of the garment from the skin surface. We've developed two different methods: one for cloth deformation on the limbs (trousers and sleeves), the other for the deformation of cloth on the trunk. Cloth vertices on the limbs are enclosed in half-spheres that are attached to the skin surface. Vertices inside these spheres are displaced with the equation of rigid body motion. A function defines the diameter of the spheres depending on the relative position of the cloth vertex to the skin surface's normal. We animate cloth vertices located on the trunk with a rough mesh and animate the rough mesh with a physics-based method. The cloth mesh is deformed with the free-form deformation (FFD) method using the positions of the vertices on the rough mesh.

[Figure 12. Simulation of bodies and garments for different sizes: (a) small, (b) medium, and (c) large.]
[Figure 13. A sequence of real-time cloth animation.]
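The joint-driven skin deformation described above, in which each vertex is moved by a weighted blend of its joints' transformation matrices, can be sketched as linear blend skinning. The joints, weights, and vertex below are toy values, not the paper's data:

```python
import numpy as np

def skin_vertex(vertex, joint_transforms, weights):
    """Blend 4x4 joint transforms by per-vertex weights (weights sum to 1)."""
    v = np.append(vertex, 1.0)                       # homogeneous coordinates
    blended = sum(w * T for w, T in zip(weights, joint_transforms))
    return (blended @ v)[:3]

def translation(t):
    T = np.eye(4)
    T[:3, 3] = t
    return T

# Two joints: one static, one translated by (0, 1, 0).
T0 = np.eye(4)
T1 = translation([0.0, 1.0, 0.0])

v = np.array([1.0, 0.0, 0.0])
print(skin_vertex(v, [T0, T1], [1.0, 0.0]))   # [1. 0. 0.]  rigidly bound to joint 0
print(skin_vertex(v, [T0, T1], [0.5, 0.5]))   # [1.  0.5 0. ]  halfway blend
```

Because duplicated boundary vertices carry identical weights, they land at identical positions under this blend, which is why the segment boundaries remain invisible.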