Virtual Reality and Automated Driving, 2019 (translated paper, English with Chinese)
Get Ready for Automated Driving Using Virtual Reality
Daniele Sportillo, Alexis Paljic, Luciano Ojeda

Abstract
In conditionally automated vehicles, drivers can engage in secondary activities while traveling to their destination. However, drivers are required to respond appropriately, within a limited amount of time, to a take-over request when the system reaches its functional boundaries. Interacting with the car in the proper way from the first ride is crucial for car and road safety in general. For this reason, it is necessary to train drivers in a risk-free environment by teaching them the best practices for using these complex systems. In this context, Virtual Reality (VR) systems represent a promising training and learning tool to properly familiarize drivers with the automated vehicle and allow them to interact with the novel equipment involved. In addition, Head-Mounted Display (HMD)-based VR ("light" VR) would allow for the easy deployment of such training systems in driving schools or car dealerships. In this study, the effectiveness of a light Virtual Reality training program for acquiring interaction skills in automated cars was investigated. The effectiveness of this training was compared to a user manual and a fixed-base simulator with respect to both objective and self-reported measures. Sixty subjects were randomly assigned to one of the systems, in which they went through a training phase followed by a test drive in a high-end driving simulator. Results show that the training system affects take-over performance. Moreover, self-reported measures indicate that the light VR training is preferred over the other systems.
Finally, another important outcome of this research is the evidence that VR plays a strategic role in defining the set of metrics for profiling proper driver interaction with the automated vehicle.

Keywords: Conditionally automated vehicles, Virtual Reality, Head-Mounted Display, Take-over request, Training

1. Introduction
Imagine you are reading this article in your car as you drive on the highway. Suddenly, your car asks you to "take over". What would you do? At the time of writing, this scenario breaks numerous laws and is potentially very dangerous. In the future, it would not only be legal and safe, but you would likely know how to react to your car's demand to hand over control, keeping yourself, your passengers, and other vehicles out of harm's way.
In future automated vehicles, the above situation would be fairly common. In particular, conditionally automated vehicles (SAE Level 3; SAE International (2017)) do not require drivers to constantly monitor their driving environment; drivers can therefore engage in secondary activities such as reading, writing emails and watching videos. However, when the automated system encounters unexpected situations, it will assume that drivers who are sufficiently warned will respond adequately to a take-over request.
The reestablishment of the driving context (i.e. rapid onboarding) is one challenge that conditionally automated vehicles pose to the car industry (Casner et al., 2016). The transformation of the driving activity, the complexity of these new systems and the variety of situations that drivers can face require that drivers acquire, before their first ride, the core skills necessary to interact safely with the automated car.
Establishing drivers' role and avoiding confusion (Noy et al., 2018) is crucial for the safety of both the drivers themselves and other road users.
At present, a vehicle's functionalities are demonstrated to customers via an informal presentation by the car dealer during the hand-over process; for further information, customers are required to read the car owner's manual. For an automated vehicle, these traditional procedures would not suffice to familiarize the new car owner with the automated system, primarily because the acquisition of skills by the customer is not ensured. In addition, car dealers themselves must be trained and kept up to date with each new version of the system.
In this context, Virtual Reality (VR) constitutes a potentially valuable learning and skill-assessment tool which would allow drivers to familiarize themselves with the automated vehicle and interact with the novel equipment involved in a risk-free environment. VR allows for the possibility of encountering dangerous driving conditions without putting the driver at physical risk, and enables the controllability and reproducibility of the scenario conditions (De Winter et al., 2012).
VR has usually been associated with high costs and huge computational power. For these reasons, immersive training based on CAVEs or Head-Mounted Displays has until now been prohibitive in mainstream settings. However, in recent years, technological progress and the involvement of dominant technology companies have allowed the development of affordable VR devices.
The objective of this research is to explore the potential of light Virtual Reality systems, in particular for the acquisition of skills for the Transfer of Control (ToC) in highly automated cars. By using the adjective "light", we want to mark the difference between VR systems that are portable and/or easy to set up (HMDs, mobile VR) and systems that are cumbersome and require dedicated space to operate (CAVE systems).
The idea is that, thanks to their portability and cost-effectiveness, light VR systems could be easily deployed in car dealerships to train a large number of people in an immersive environment in a safe and reliable way.
The light VR system proposed in this paper consists of a consumer HMD and a racing wheel. This paper aims to compare the effectiveness of a training program based on this system with a user manual and with a fixed-base driving simulator. To validate the light VR system, user performance is evaluated during a test drive in a high-end driving simulator, and self-reported measures are collected via questionnaires.

1.1. Related work
Virtual Reality has been extensively used to train professionals and non-professionals in various domains. The unique characteristics of learning in the 3D environment provided by immersive VR systems, such as CAVEs or HMDs, can enable learning tasks that are not possible, or not as effective, in the 2D environments provided by traditional desktop monitors. Dalgarno and Lee (2010) highlighted the benefits of this kind of 3D Virtual Learning Environment (3D VLE) by proposing a model based on distinctive features such as representational fidelity and learner interaction.
More specifically, HMD-based VR turns out to be more effective than other training systems across a wide range of applications, such as surgery (Hamilton et al., 2002) (HMD compared to a video trainer), aircraft visual inspection (Vora et al., 2002) (HMD compared to a PC-based training tool), power production (Avveduto et al., 2017) (HMD compared to traditional training), and the mining industry (Zhang, 2017) (HMD compared to screen-based and projector-based training).
When it comes to driving simulation (DS), VR is used to study several aspects of the driving task.
In this context, moving-base simulators (Lee et al., 1998) are preferable to fixed-base simulators (Milleville-Pennel and Charron, 2015; Fisher et al., 2002) for their closer approach to real-world driving (Klüver et al., 2016). By investigating the physical, behavioral and cognitive validity of this kind of simulator with respect to the real driving task (Milleville-Pennel and Charron, 2015), it has also been shown that DS can be a useful tool for the initial resumption of driving, because it helps to avoid stress that may lead to task failure or deterioration in performance.
Although most studies in DS use static screens as the display system, recent studies show that HMD-based DS leads to similar physiological responses and driving performance compared to stereoscopic 3D or 2D screens (Weidner et al., 2017). Taheri et al. (2017) presented a VR DS system composed of an HMD, steering wheel and pedals to analyze drivers' characteristics; Goedicke et al. (2018) instead proposed an implementation of an HMD in a real car to simulate automated driving as the vehicle travels on a road. Even if the steering wheel is the most commonly used driving interface, novel HMD systems usually come with wireless 6-DoF controllers which can be used to control a virtual car. In a pilot study, Sportillo et al. (2017) compared steering-wheel and controller-based interaction in HMD-based driving simulators. The authors conclude that even though objective measures do not provide decisive parameters for determining the most adequate interaction modality, self-reported indicators show a significant difference in favor of the steering wheel.
Among other things, DS provides the opportunity to implement, in a forgiving environment, critical scenarios and hazardous situations which are ethically not possible to evaluate on real roads (Ihemedu-Steinke et al., 2017b).
For this reason, and to overcome the limited availability of physical prototypes for research purposes, DS is extensively used in studies on automated vehicles to design future automotive HMIs (Melcher et al., 2015) for Take-Over Requests (TORs) and to investigate behavioral responses during the transition from automated to manual control (Merat et al., 2014).
A research area that is gaining interest in the automated driving community concerns the impact of non-driving activities on take-over performance. To study drivers' distraction during automated driving, researchers generally use standardized and naturalistic tasks. Standardized tasks (such as the cognitive n-back task (Happee et al., 2017), the SuRT task (Happee et al., 2017; Gold et al., 2013), and the Twenty Questions Task (TQT) (Körber et al., 2016)) provide experimental control, but they do not usually correspond to what the driver will actually do in the vehicle. Naturalistic tasks, instead, provide ecological validity, but they could introduce experimental bias. Important findings were reported by Zeeb et al. (2016), who studied how visual-cognitive load impacts take-over performance by examining engagement in three different naturalistic secondary tasks (writing an email, reading a news text, and watching a video clip). The authors found that the drivers' engagement in secondary tasks only slightly affected the time required to regain control of the vehicle, but non-distracted drivers performed better in the lane-keeping task.
Most of the studies in this domain implement safety-critical take-over scenarios caused by an obstacle (usually a broken-down vehicle) in the current lane (Zeeb et al., 2016; Sportillo et al., 2017; Happee et al., 2017; Navarro et al., 2016; Körber et al., 2016) and non-critical scenarios caused by the absence of lane markings (Zeeb et al., 2016; Payre et al., 2017).
To ensure safety and to succeed in the take-over process, it is important to understand how long before a system boundary a driver who is out of the loop should be warned. Gold et al. (2013) indicate that a shorter TOR-time leads to a faster but worse reaction. However, assessing the quality of the take-over performance remains an open problem. Reaction times (such as gaze reaction time, hands-on-wheel time, and intervention time) are commonly analyzed (Happee et al., 2017). Time To Collision, lateral acceleration and minimum clearance toward the obstacle are objective metrics used in obstacle-avoidance scenarios (Happee et al., 2017). Concerning subjective measures, drivers are usually asked to reply to questionnaires: the Driver Skill Inventory (DSI) (Spolander, 1983) and the Driver Behaviour Questionnaire (DBQ) (Reason et al., 1990) have been widely used over recent decades to evaluate the self-assessment of driving skills (Roy and Liersch, 2013). In recent studies, questionnaires have been used to investigate the importance of initial skilling and to predict deskilling in automated vehicles (Trösterer et al., 2016). In the same field, surveys have also been used to evaluate the usefulness of, and satisfaction with, take-over requests (Bazilinskyy et al., 2013).
In the above studies it is not always clear how participants were taught to use the automated system. Zeeb et al. (2016) used a traditional approach that provided the participants with a description of the system, its functional boundaries and the alert notifications. In the vehicle, participants were also instructed on how to activate and deactivate the automated driving system. This approach could not be transferred to the real-world case because it does not ensure the correct acquisition of knowledge; thus, drivers would not be sufficiently skilled to respond safely to a take-over request. In other studies, participants could practice freely in the high-end driving simulator before the actual test drive (Gold et al., 2013).
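As a sketch of how one such objective metric could be computed, the minimum Time To Collision in an obstacle-avoidance scenario is the remaining gap divided by the closing speed. The function below is illustrative only (a straight-line, constant-speed assumption), not the computation used in the cited studies:

```python
def time_to_collision(gap_m: float, ego_speed_mps: float,
                      obstacle_speed_mps: float = 0.0) -> float:
    """Time To Collision: gap to the obstacle divided by closing speed.

    Returns float('inf') when the ego vehicle is not closing in on the
    obstacle (closing speed <= 0), i.e. no collision is predicted.
    """
    closing = ego_speed_mps - obstacle_speed_mps
    if closing <= 0:
        return float("inf")
    return gap_m / closing
```

For example, an ego vehicle at 25 m/s (90 km/h, the ADS cruise speed used later in this study) approaching a stopped vehicle 50 m ahead has a TTC of 2 s.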
This solution would not be feasible in terms of cost, space and maintenance, because it would require every car dealership to be equipped with a simulator. A lighter VR system, such as the one proposed in this paper, could instead be more easily deployed and used for training purposes at a much lower cost.
Payre et al. (2017) addressed the problem of drivers' training in an automated car by comparing two types of training: a simple training based only on practice in a driving simulator, and an elaborated training which included a text, a tutorial video and more elaborate practice in the simulator. They found that participants in the elaborated training group trusted the automated driving more, and were able to take over faster, than those in the simple training group.
Automated car research also has relevance to the field of aviation (Stanton and Marsden, 1996), and in particular to studies concerning flight simulation for pilot training (Vince, 1993). Although this kind of training is targeted at professionals, important findings from this research include the occurrence of positive transfer and the fact that abstracted-rendering simulators allow people to learn better than with the real thing (Stappers et al., 2003). Pilots trained on a simulator are thus able to co-pilot a craft immediately after their simulation training (Vince, 1993). However, it is crucial that the training practices allow for the generalization of the skills acquired in the virtual environment, and not only for an application of rote-memorized skills specific to the training situation (Casner et al., 2013).
The considerable findings from aviation and the intense scientific production of recent years suggest that the transition of control in automated cars is a valuable research topic, worth investigating from the design stage to the final implementation of the new systems.
Moreover, the compelling need and interest of the car industry to train a large number of people in a reliable and cost-effective way, without compromising safety, make light Virtual Reality systems a promising solution for this purpose.

2. Methods
This study consisted of two parts: training and a test drive. The aim of the training was to introduce the principles of a Level-3 Automated Driving System (ADS)-equipped vehicle, present the novel Human–Machine Interface (HMI), help drivers locate the HMI in the vehicle, and describe the actions to perform in order to respond appropriately to unplanned requests to intervene. The between-subject study, with 60 participants, was designed to compare a light Virtual Reality system to a user manual and a fixed-base driving simulator in terms of training effectiveness, evaluated through a test drive. The test drive required the application of the knowledge and skills acquired during the training.

2.1. The target vehicle
This study considers Level-3 (Conditional Driving Automation) automated vehicles. At this level of automation, the ADS performs the Dynamic Driving Task (DDT) with the expectation that the human driver is receptive to a Take-Over Request (TOR), also known as a request to intervene, and will respond appropriately. The DDT includes (SAE International, 2017): lateral vehicle motion control via steering; longitudinal vehicle motion control via acceleration and deceleration; monitoring the driving environment via object and event detection, recognition, classification, and response preparation; object and event response execution; maneuver planning; and enhancing conspicuity via lighting, signaling, gesturing, etc. For a more detailed taxonomy and description, please refer to the Recommended Practice by SAE (SAE International, 2017). A TOR is a notification by the ADS that the human driver should promptly begin or resume performance of the DDT.
Unplanned TORs are issued by the ADS when it reaches its system boundaries because of unpredictable and potentially hazardous situations that it cannot handle, such as an obstacle on the road, missing road markings, or a system failure. The target vehicle provided two driving modes on highways: Manual Driving and Conditionally Automated Driving. The vehicle was not expected to execute automatic lane changes. In the implementation, the vehicle had five possible states:
(a) Manual driving: the human driver is in charge of all aspects of the dynamic driving task (execution of steering and acceleration/deceleration).
(b) ADS available: the human driver can transfer control to the ADS by operating the HMI.
(c) ADS enabled: the ADS performs all aspects of the dynamic driving task, namely the control of the longitudinal and lateral guidance.
(d) Take-over request: the ADS reaches a system boundary and is thus no longer able to perform the dynamic driving task. The human driver is notified with a visual–auditory alert indicating the time budget s/he has to take over.
(e) Emergency brake: the human driver does not take over in the allotted amount of time, and the vehicle performs an emergency brake in the lane. The alert continues until control is transferred back to the human driver.
When the ADS was activated, the car kept a constant longitudinal speed of 90 km/h, accelerating or decelerating if the speed at activation was respectively lower or higher.

2.1.1. Human–Machine Interface
The Human–Machine Interface in the target vehicle consisted of a head-up display (HUD) and a button on the steering wheel. The HUD showed information about the current speed, speed limit, distance traveled and current state of the vehicle. In the figure, the different symbols representing the states of the system are illustrated; the arrows indicate the possible transitions between states. The symbols are taken from previous studies (Bueno et al., 2016).
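The five vehicle states described above form a small state machine. A minimal sketch follows; the event names are hypothetical, chosen only to illustrate the transitions described in the text:

```python
from enum import Enum, auto

class AdsState(Enum):
    MANUAL = auto()             # human performs the full dynamic driving task
    ADS_AVAILABLE = auto()      # automation may be engaged via the HMI button
    ADS_ENABLED = auto()        # ADS controls longitudinal and lateral guidance
    TAKE_OVER_REQUEST = auto()  # system boundary reached, driver is alerted
    EMERGENCY_BRAKE = auto()    # time budget elapsed without a driver response

# Allowed transitions, as described in the text (hypothetical encoding).
TRANSITIONS = {
    AdsState.MANUAL: {AdsState.ADS_AVAILABLE},
    AdsState.ADS_AVAILABLE: {AdsState.ADS_ENABLED, AdsState.MANUAL},
    AdsState.ADS_ENABLED: {AdsState.TAKE_OVER_REQUEST, AdsState.MANUAL},
    AdsState.TAKE_OVER_REQUEST: {AdsState.MANUAL, AdsState.EMERGENCY_BRAKE},
    AdsState.EMERGENCY_BRAKE: {AdsState.MANUAL},
}

def step(state: AdsState, event: str) -> AdsState:
    """Advance the state machine; disallowed events leave the state unchanged."""
    target = {
        "ads_zone_entered": AdsState.ADS_AVAILABLE,
        "button_pressed": AdsState.ADS_ENABLED,
        "system_boundary": AdsState.TAKE_OVER_REQUEST,
        "driver_takes_over": AdsState.MANUAL,   # button, brake, or throttle+wheel
        "time_budget_elapsed": AdsState.EMERGENCY_BRAKE,
    }.get(event)
    if target is not None and target in TRANSITIONS[state]:
        return target
    return state
```

The transition table makes the safety property explicit: the only way out of a take-over request is a driver action or the emergency brake.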
The background color of the HUD also changed according to the current state of the vehicle. Take-over requests were notified to the human driver with a visual–auditory alert. The visual alert consisted of the corresponding symbol with a countdown indicating the budget of time available to take over. The auditory alert was a 0.7 s beep looped every second.
In the implementation of the automated driving system, the human driver could activate the ADS (when available) by pushing a button on the steering wheel. When the ADS was enabled, the human driver could deactivate it at any time and immediately take back control. This could be done in three ways: (i) pushing the same button on the steering wheel, (ii) using the brake pedal, or (iii) using the accelerator pedal and the steering wheel. Since all the participants were French speakers, all the text in the HMI was displayed in French to avoid language comprehension problems.

2.2. The training
The aim of the training was to teach drivers how to interact with automated cars in three situations: manual mode, automated mode and the take-over request. To do so, the training introduced the participants to the HMI for each situation, the actions they were free to perform during automated driving, and the best practice for responding to a take-over request. For all the participants, the training program started with an introduction video that briefly presented the main functionalities of a Level-3 ADS-equipped car. The video was displayed on a different support according to the display system used during the training. In the study, three different training systems were compared:
• a User Manual (UM) displayed on a laptop;
• a Fixed-Base driving simulator (FB) with a real cockpit and controls (pedals and steering wheel);
• a Light Virtual Reality (LVR) system consisting of a Head-Mounted Display (HMD) and a game racing wheel.
These systems differed in terms of the level of immersion and interaction they provided.
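The take-over alert described above (a 0.7 s beep looped every second, running for the whole time budget) can be sketched as a schedule of beep intervals. This is an illustrative reconstruction, not the authors' implementation:

```python
def tor_alert_timeline(time_budget_s: float, period_s: float = 1.0,
                       beep_len_s: float = 0.7):
    """Return (beep_start, beep_end) intervals until the time budget elapses.

    Models the HMI's auditory alert: a 0.7 s beep repeated every second.
    The HUD countdown would display the remaining seconds alongside it.
    """
    t, intervals = 0.0, []
    while t < time_budget_s:
        end = min(t + beep_len_s, time_budget_s)  # clip the final beep
        intervals.append((round(t, 3), round(end, 3)))
        t += period_s
    return intervals
```

For a 3-second time budget this yields three beeps starting at 0 s, 1 s and 2 s, each 0.7 s long.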
"Immersion" refers to the technological capabilities a system is able to deliver from an objective point of view (Slater, 2003). "Interaction" refers to the modality through which the user can perform actions in the virtual environment. Immersion and interaction do not apply to the user manual group. The fixed-base driving simulator and the LVR system shared the same interaction modalities, but their levels of immersion differed. In what follows, the three systems are described.

2.2.1. User manual training
The user manual (UM) consisted of a slide presentation displayed on the 13.3″ screen of a laptop computer. First, the introduction video was played. Then, the participants were asked to read each of the 8 slides carefully and to go to the next one when they felt ready. They did not have any time limit. The slides used text and images to present the actions to be performed during manual driving, automated driving and take-over requests. For each situation, the corresponding icons were also presented. An animated slide was included to show how to activate the automated driving.
This system represented the non-immersive and non-interactive training environment. The participants could browse the slides forward and backward with no time limit; however, they were not involved in a driving situation and could not practice the required actions with the real equipment.

2.2.2. Fixed-base simulator
The fixed-base simulator (FB) consisted of an actual car cockpit, including a driving seat, a dashboard, a force-feedback steering wheel and a set of pedals. All of these were real components of a Citroën C3; this allowed participants to have a more natural interaction with the driving controls. A 9.7″ tablet, used by the driver to perform the secondary activity, was placed in the center console.
To display the virtual environment, a 65″ plasma screen was positioned behind the cockpit, 1.5 m from the driver.
This simulator represented the low-immersion training environment. The limited size of the screen did not allow the implementation of a 1:1 scale between the virtual and the real world. Another implication of the reduced field of view was the lack of isolation for the participant, who remained surrounded by the experimental room during the training.

2.2.3. Light Virtual Reality system
The light VR system (LVR) included an HMD as the display system and a Logitech G25 Racing Wheel as the driving system. The HMD was an HTC Vive, which provides stereoscopic vision at 90 FPS, 2160 × 1200 resolution (1080 × 1200 per eye), a field of view of 110 degrees and low-latency positional tracking. Spatial sound was presented via headphones. Thanks to these features, the LVR system represented the high-immersion training system. The trainee was totally surrounded by the virtual environment but, while wearing the headset, could not see any part of his/her own body. Although the field of view of the HTC Vive is not comparable to human vision, the design choices for the training scenario (no traffic, straight lane) helped to reduce the stimuli in the peripheral vision, which are one of the causes of simulator sickness (Stoner et al., 2011).
At the beginning, the participants were immersed in a virtual room with white walls. This room represented a transitional environment from the real world to the virtual learning activity. A transparency effect was applied to the car to ease the transition to the virtual world. The introduction video was displayed on the front wall. We hypothesized that, at the beginning of the experiment, a simpler environment with few visual elements could help participants better accept the system (Sisto et al., 2017). The purpose of this environment was twofold.
First, novices to Virtual Reality and participants who were using an HMD for the first time could become familiar with the new system by experiencing the effects of their actions (head rotation, head movement) on the system. Second, since the participants could not see their hands, they could become aware of the car controls, identifying the position of the steering wheel, the button on the steering wheel, and the pedals.

2.2.4. The Virtual Learning Environment
For the training using the LVR system and the fixed-base driving simulator, a step-by-step tutorial was developed in the form of a Virtual Learning Environment (VLE). The VLE provided the same information and stimuli to the two groups of participants, except for the differences due to the nature and limits of the two systems involved.
The characteristics of the target vehicle described in Section 2.1 were implemented in the VLE. The task of the participants consisted of interactions with the car following the instructions of a virtual vocal assistant. The messages announced by the assistant were also displayed on a yellow panel in front of the trainee. The panel appeared when user intervention was required, and disappeared as soon as the trainee performed the required actions. No actions other than the required one were possible.
The driving scenario was a straight two-lane road delimited by guardrails. No traffic was implemented; only trees were placed on the roadside. A simple environment was specifically chosen to focus participants on the training task without distractions, and to reduce the peripheral optical flow which can contribute to simulation sickness (Hettinger and Riccio, 1992). The training steps are described in the table. Before the driving scenario, an acclimatization virtual environment was offered to the participants to help them locate and identify the controls of the car.
Secondary activity.
This training also included a secondary activity that required the use of a tablet (a real one in the case of the fixed-base simulator, a virtual one in the case of the LVR system). The tablet was used to distract the human driver from the driving task during automated driving. The distraction task was the same for all the participants and consisted of a video of a TEDx Talk in French. The participants were asked, but not forced, to look at the tablet. The video played automatically when the automated system was enabled and paused during manual driving and take-over requests.

2.3. The test drive
After the training, the participants performed a test drive designed to evaluate their performance in a more realistic driving scenario. The system used for this purpose was a high-end driving simulator consisting of the front part of a real car surrounded by a panoramic display. The display was placed 2.5 m from the driver and covered a field of view of 170°. Three three-chip DLP projectors displayed the scene. The rear part of the car was replaced by a monitor that displayed the virtual environment seen through the rear window. The lateral mirrors consisted of two LCD displays as well. The cockpit was also equipped with a microphone for communicating with the experimenter and four cameras to record the scene inside the car. Data including the position, speed and acceleration of the car, and the current driving mode, were recorded.
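A minimal sketch of the kind of record such a simulator could log, together with one derived measure (the time from a take-over request to the return to manual mode), follows. The field names and mode labels are assumptions for illustration, not the study's actual logging format:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DriveSample:
    t: float      # simulation time [s]
    x: float      # car position, longitudinal [m]
    y: float      # car position, lateral [m]
    speed: float  # [km/h]
    accel: float  # [m/s^2]
    mode: str     # "manual" | "ads" | "tor" | "emergency_brake"

def takeover_time(log: List[DriveSample], tor_t: float) -> Optional[float]:
    """Seconds from the TOR until the driver is back in manual mode,
    or None if the driver never took over."""
    for s in log:
        if s.t >= tor_t and s.mode == "manual":
            return s.t - tor_t
    return None
```

A log where the TOR fires at t = 5.0 s and the first manual-mode sample appears at t = 7.5 s yields a take-over time of 2.5 s.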
Virtual Reality (presentation slides)
Applications of Virtual Reality Technology
Work in hazardous environments
• VR technology can be used for monitoring and teleoperation in radioactive or toxic environments, or in space, and for handling dangerous materials, without exposing the operator to danger.
Scientific research
• VR gives researchers timely graphical feedback on computational processes, making the entire course of a solution observable.
Medical experimentation and teaching
• Until now, medical research and teaching have been practice-based; computer-generated 3D human-body models offer a new approach to both.
In traditional 3D animation, users can only view scenes along a route fixed in advance by the designer, passively receiving information. Virtual reality technology grew out of the desire for free interaction with 3D animation; although the two are similar in form, VR may eventually replace 3D animation technology.
Distributed virtual reality
Multiple users connected over a network join the same virtual space simultaneously and experience virtual reality together.
3. Basic Characteristics of Virtual Reality
Multi-sensory: beyond the visual channel of conventional computing, VR engages hearing, force feedback, touch and motion perception, and can even include taste and smell.
Immersion: also called presence; the degree to which users feel they genuinely exist as actors within the simulated environment.
Applications of virtual reality: education
Education and training: the most common example is combat simulation, which can enrich the training environment and stage dangerous exercises that cannot be conducted in reality.
By educational setting, VR applications in education can be divided into the virtual campus, virtual classroom, virtual laboratory and virtual library.
Translated paper: Research on Robot Motion Simulation Based on Virtual Reality Technology
With the rapid development of computer technology and network technology, virtual manufacturing (VM) has become an emerging technology that performs computerized manufacturing activities with models, simulations and artificial intelligence instead of objects and their operations in the real world. It optimizes products in the manufacturing process by predicting the manufacturing cycle and promptly modifying the design [1][2]. Virtual reality is an important supporting technology for virtual manufacturing because it is an important means of virtual design and production. With the appearance of Java and VRML technology, robot motion simulation in a web browser has become feasible [3][6]. Robot motion simulation is a necessary research direction in the robotics field. Guo [4] adopted ADAMS software for 3-RPC robot motion simulation, but the method is not suitable for motion simulation over the Internet. Yang [3] and Zhao [5] researched the motion simulation of robots using VRML, but did not optimize the VRML files. Bo [8] provided environmental simulation within a virtual environment through Java and VRML interaction via the External Authoring Interface. Qin [9] researched a novel 3D simulation modeling system for distributed manufacturing. In this paper, the CINCINNATI robot is modeled and analyzed with VRML and Java, and the VRML file is optimized for transmission over the Internet. The 3D visualization model of the robot motion simulation is realized with VRML and Java, based on virtual reality. Research on Internet-based robot motion simulation is therefore of both scientific significance and application value.
VR and Panoramic Video (presentation slides)
Part 03: Future Outlook. The virtual planet.
[Diagram: the disciplines underpinning VR, namely computer vision, artificial intelligence, computer networks, human-computer interaction, scientific computing visualization, input/output devices, automation control, physiology and psychology, computer graphics, pattern recognition, and digital image processing.]
Once, Zhuang Zhou dreamed he was a butterfly, a butterfly fluttering happily, pleased with itself, knowing nothing of Zhou. Suddenly he awoke, and there he was, unmistakably Zhou. But he did not know whether he was Zhou who had dreamed of being a butterfly, or a butterfly dreaming of being Zhou. — Zhuangzi, "On the Equality of Things"
VR video offers a free viewing angle: viewers can look around 360 degrees from any position within the scene. The video must be 3D; it must be watched through a head-mounted display; and it produces a strong sense of visual immersion.
If VR video is understood as "3D panoramic video plus free movement", then in a VR film the viewer decides the camera position, which may replace traditional camera movement and scene cutting.
1. Cables and space requirements are the biggest obstacle. Because of the huge data-transfer and power-supply demands, a wired connection is unavoidable for now. As a result, users cannot move about freely and must constantly mind the thick cable, which breaks the in-scene experience and is very frustrating. The same goes for space: sports games or anything requiring room to stretch out demands a very large play area. A bedroom is not enough; a space with many obstacles or passing people will not do either; and if the space is too large, safety becomes hard to guarantee.
2. Current storage and hardware are expensive. Because VR scenes cover 360 degrees, storage requirements grow almost geometrically, and the demands on displays, processors and other hardware are also very high. In the VR era, a single complete, large piece of content may consume hundreds of gigabytes, or even terabytes, of storage. How to handle this volume of data, and how to provide huge storage capacity while lowering resource consumption, remain open problems. These hardware requirements further raise the cost of the host machine; adding the cost of the space and of the VR headset itself, the total is not something ordinary consumers can accept.
3. VR video is not sharp enough. If a VR video has 2K resolution, that 2K is what we actually perceive, spread not over a small phone-sized area but across the entire field of view, so even 4K per eye does not achieve a crisp image. There is also the problem of refresh rate: a low refresh rate degrades the experience and can even cause dizziness and other adverse reactions.
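The claim about near-geometric growth in storage can be made concrete with a back-of-the-envelope calculation; the bitrates below are assumed, illustrative figures, not measurements:

```python
def video_storage_gb(minutes: float, mbps: float) -> float:
    """Storage for a video stream: duration and bitrate (Mbit/s) -> gigabytes.

    GB = minutes * 60 s/min * Mbit/s / 8 bit/byte / 1000 MB/GB
    """
    return minutes * 60 * mbps / 8 / 1000

# Assumed bitrates: ~8 Mbit/s for a flat HD stream vs. ~500 Mbit/s for
# high-quality stereo 360-degree video.
flat_movie_gb = video_storage_gb(120, 8)    # ~7 GB for a 2-hour HD film
vr_movie_gb = video_storage_gb(120, 500)    # ~450 GB for the same runtime in VR
```

Even under these rough assumptions, a feature-length VR video lands in the hundreds-of-gigabytes range, consistent with the point above.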
Introduction to VR
Virtual reality technology is an important branch of simulation technology: a synthesis of simulation with computer graphics, human-machine interface technology, multimedia, sensor technology and networking, and a challenging interdisciplinary research frontier. Virtual reality (VR) mainly involves the simulated environment, perception, natural skills and sensing devices. The simulated environment consists of computer-generated, real-time, dynamic, three-dimensional realistic imagery. Perception means that an ideal VR system should provide every sense a person has: in addition to the visual perception generated by computer graphics, there are auditory, tactile, force and motion perception, and even smell and taste, which is known as multi-perception. Natural skills refer to head rotation, eye movement, gestures and other human behaviors; the computer processes data corresponding to the participant's actions, responds to the user's input in real time, and feeds the results back to the user's senses. Sensing devices are three-dimensional interactive devices.
Virtual Reality and Facilities Management: Foreign Literature Translation (Chinese–English), 2018
Virtual reality as integration environments for facilities management
Abstract
Purpose – The purpose of this paper is to explore the use of virtual reality environments (VRE) for maintenance activities by augmenting a virtual facility representation and integrating relevant information regarding the status of systems and the space itself, while providing simple ways to control them.
Design/methodology/approach – The research focuses on the implementation of a VRE prototype of a building management system using game engine technologies. To evaluate the prototype, a usability study was conducted that contrasts the virtual reality interface with a corresponding legacy application, showing the users' perception of the productivity improvement in facilities management (FM) tasks.
Findings – The usability tests conducted indicated that VREs have the potential to increase productivity in maintenance tasks. Users without training demonstrated a high degree of engagement and performance operating a VRE interface, compared with a legacy application. The potential drop in user time and increase in engagement with a VRE will eventually translate into lower cost and an increase in quality.
Originality/value – To date, no commonly accepted data model has been proposed to serve as the integrated data model supporting facility operation. Although BIM models have gained increased acceptance in architecture, engineering and construction activities, they are not fully adequate to support data exchange in the post-handover (operation) phase.
The presented research developed and tested a prototype able to handle and integrate data in a flexible and dynamic way, which is essential in the management activities underlying FM.
Keywords: Information systems, Simulation, Integration, Decision support systems, Information and communication technology (ICT) application
Introduction
Facilities management (FM) aims at creating and maintaining an effective built environment in order to support the successful business operation of an organization (Cotts et al., 2010). The complexity and professionalism underlying modern FM compel practitioners to adopt distinct computerized tools, helpful in automating routine tasks, managing information, monitoring the building's performance and assisting in decision-making processes (Abel and Lennerts). Currently, it is amply recognized that information technology (IT) plays a critical role in the efficiency, both managerial and operational, of FM (Madritsch et al., 2008; Elmualim and Pelumi-Johnson, 2009; Lewis and Riley, 2010; Svensson, 1998; Love et al., 2014; Wetzel and Thabet, 2015). In its essence, FM is a multidisciplinary subject that requires the collaboration of actors with expertise from different fields (Cotts et al., 2010; Lewis and Riley, 2010). Within their specific areas of responsibility, they have to interact with distinct IT tools. Managerial roles are likely to interact with computer-aided facility management (CAFM) systems and computerized maintenance management systems (CMMS), employed to manage the characteristics of space and equipment, while operational roles are more likely to interact with building management systems (BMS) and energy management systems (EMS), used to manage live, i.e. real-time, information regarding the space and equipment (Lewis and Riley, 2010). Issues in FM also have to be analyzed from different perspectives.
Therefore, this arrangement requires that information from different tools be brought together to enable a systematic and thorough analysis through data visualization and dashboards (Chong et al., 2014). It has been observed that the costs inherent to the lack of communication and data integration for an existing buildings portfolio (information verification costs, delays, and operation and maintenance staff productivity loss) are very significant for an organization (Shi et al., 2016). Better IT support for integrating information translates into faster, more effective and just-in-time FM (Svensson, 1998). The subject of integration in IT for FM is not new (IAI, 1996; Howard and Björk, 2008) and has historically been difficult (IAI, 1996; Elmualim and Pelumi-Johnson, 2008). Research and industry practice have developed standards for information exchange to address the interoperability of IT tools in FM (IAI, 1996) and suggested maintaining integrated databases of facility-related information (Yu et al., 2000), thus creating a framework where different IT tools become components of the same information system – a facilities management information system (FMIS) (Dennis, 2003; Mozaffari et al., 2005). Moreover, it has been acknowledged that the data required to perform certain actions are not up to date and are delivered in different formats (Migilinskas et al., 2013; Nicał and Wodyński, 2016). More recently, advanced interfaces such as virtual reality environments (VREs) have been emerging as sophisticated and effective ways of rendering spatial information. Such has been the case for tools used in architecture, engineering and construction (AEC) (Campbell, 2007) and in other specific activities such as manufacturing planning (Doil et al., 2003) and industrial maintenance (Sampaio et al., 2009; Fumarola and Poelman, 2011; Siltanen et al., 2007). However, the user interfaces of IT tools for FM are not yet taking advantage of these recent advances in VREs.
As we will discuss, the typical user interface of CAFM, CMMS and BMS tools lacks the spatial dynamism and natural interaction offered by a VRE, with noticeable impacts on user productivity. In this paper we argue that VREs, due to their characteristics, could improve both the visualization of and interaction with integrated information within an FMIS and are, therefore, beneficial for performing FM tasks. To validate this argument, we developed a prototype implementation of a VRE for assisting maintenance activities in the building automation and energy domains. The FM3D prototype augments a virtual facility with information regarding the space characteristics as well as the location, status and energy consumption of equipment, while providing simple ways to control them. Unlike previous applications of VR to FM that rely on CAD (Coomans and Timmermans, 1997) or VRML (Sampaio et al., 2009; Fu et al., 2006) for scene generation, we take advantage of recent game engine technologies for fast, real-time rendering of feature-rich representations of the facility, along with space information, equipment conditions and device statuses. A user evaluation study was conducted to determine the adequacy of a VRE approach for visualizing and interacting with integrated FM information toward a responsive intervention. This study compares a VRE interface applied to a building management system with a corresponding legacy application. The remainder of the text is organized as follows. Section 2 discusses advanced visualization for FM data integration challenges and highlights the opportunities for VREs and new IT tools for FM. Section 3 describes the research methodology. Sections 4 and 5 describe the prototype development and evaluation procedures. Section 6 presents the results.
Finally, Section 7 presents the conclusions.
Advanced visualization for FM data integration
Integrated rendering of spatial information is crucial for perceiving the complex aspects that arise from combining data from multiple sources, and for creating new insights – for example, combining cost data with occupancy information and energy consumption. In FM, integrating information is crucial to create a more complete and faithful model of reality toward an accurate diagnosis and effective response. The integration of spatial information should not be left to the users' ability to mentally combine different models, so that decisions are not hindered by that inability. However, integrated visualization is quite limited in FM. As mentioned before, integration between tools is limited, and the few that support it do not offer effective means of managing overlaid information. Presently, tools from different vendors display information using different visual elements and layouts, causing users acquainted with one tool to find it difficult to interpret data in another. The problem of creating an advanced data visualization solution for FM data integration is twofold: it is necessary first to correctly integrate data from multiple sources into a unified model, and second to create an environment offering 3D data visualization and real-time interaction with the built environment.
Limitations of data integration in FM
FM requires integrating large quantities of data. It has been argued that CAFM systems greatly benefit from integrating data from CMMS, BMS and EMS systems, both at the data level and at the graphical level (May and Williams, 2012). Yet, despite a few localized integration possibilities (Malinowsky and Kastner, 2010), current BMS and EMS do not adequately integrate data regarding space characteristics.
Notably, some tools support space layout concepts of floor and room in 2D static plans (Lowry, 2002), or even details regarding equipment, but these data live isolated in each tool's database without any relationship (i.e. integration) with the CAFM system. Such a connection is important for exploring further characteristics of the space, such as which areas are technical or circulation areas. On the other hand, CAFM systems would greatly benefit from real-time information regarding energy utilization, the status of environment variables and equipment status, enabling an understanding of how space and equipment are being used. To date, no commonly accepted data model has been proposed that is comprehensive enough to serve as the integrated data model supporting facility operation. Indeed, it has been noted that interoperability among tools from different vendors is still very ad hoc (Yu et al., 2000). Although BIM models have gained increased acceptance in AEC, they are not adequate to support data exchange in the post-handover (operation) phase. For example, BIM models offer no provision for handling trend data, which is essential in the management activities underlying FM. Moreover, BIM standards do not handle well the data models used by BMS and EMS tools (Yu et al., 2000; Gursel et al., 2007). Another aspect is that querying data in these models is often quite complex for the average user, given the large number of entities that must be taken into account (Weise et al., 2009). Therefore, querying must be mapped into seamless graphics operations to be performed using different metaphors (e.g.
for aggregation, filtering and ranking operations).
Advanced facility management interfaces
The idea of extending the functionality of a standard management tool to handle both facility management and building control networks is essential in practice and can be achieved by integrating CAFM systems with BMS to obtain a unified control software utility (Malinowsky and Kastner, 2010; Himanen, 2003). This integration grants the ability to automatically monitor and visualize all building areas by illumination, occupation or other spatially located variables, and to manage them accordingly. For instance, one could visualize the electrical power consumption of the different building areas and improve efficiency, consequently reducing power costs. From an interaction perspective, such software should also be complemented with CAD representations; in fact, CAFM systems integrated with CAD have proven most effective (Elmualim and Pelumi-Johnson, 2009). Autodesk has recently announced the Dasher project, which aims at using 3D to explore energy data (Autodesk, 2013). The project proposes to build on Revit BIM to integrate energy data, which must be stored elsewhere, in a proprietary BIM model.
3D interfaces for facility management
Activities such as inspecting the space for the location of an asset, inspecting the status of equipment, or analyzing the energy consumption profile of the space along with its cost and occupancy information are examples of queries that have an underlying spatial dimension. Overall, most IT tools for FM have to manage spatial information, which can be visualized more effectively when rendered in a graphical representation (Karan and Irizarry, 2014; Zhou et al., 2015). Graphical rendering accomplishes instantaneous identification of the space reality along with the relationships of the elements therein, encouraging a fast response.
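The kind of integration described in this section — joining live BMS readings to CAFM space records so that spatially grounded queries such as per-area power consumption can be answered directly — might be sketched as a simple keyed join. All records and field names below are hypothetical illustrations, not the paper's data model:

```python
# Hypothetical static (CAFM-style) space records and live (BMS-style) readings.
cafm_rooms = {
    "R101": {"type": "office", "area_m2": 24.0},
    "R102": {"type": "technical", "area_m2": 12.5},
}
bms_readings = [
    {"room": "R101", "device": "HVAC-1", "power_w": 850.0},
    {"room": "R101", "device": "LIGHT-3", "power_w": 40.0},
    {"room": "R102", "device": "HVAC-2", "power_w": 1200.0},
]

def integrate(rooms: dict, readings: list) -> dict:
    """Attach live readings to each room record so spatial queries see both."""
    model = {rid: {**attrs, "devices": []} for rid, attrs in rooms.items()}
    for r in readings:
        model[r["room"]]["devices"].append(r)
    return model

unified = integrate(cafm_rooms, bms_readings)

# A query with an underlying spatial dimension: power density per room (W/m2).
density = {rid: sum(d["power_w"] for d in rec["devices"]) / rec["area_m2"]
           for rid, rec in unified.items()}
```

The point of the sketch is that once both sources share a spatial key, questions that neither tool can answer alone (here, watts per square meter per room) become one-line queries.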
Historically, planimetric CAD drawings and geographical information systems (GIS) have been used as an effective way to display and manage spatial information related to facilities (Schürle and Boy, 1998; Rich and Davis, 2010). GIS systems are especially effective at presenting visual representations of spatial data, aiming at a more efficient analysis (Rivest et al., 2005). In building control, a GIS application can be used to better manage a building by improving information access and bringing clarity of planning to the decision-making process (Alesheikh et al., 2002). There are some well-known cases of successful GIS implementations in large facilities, such as university campuses[1]. One advantage of a GIS with 3D modeling for building control is enabling 3D information query, spatial analysis, dynamic interaction and spatial management of a building (Keke and Xiaojun).
Problem definition
The problem definition stage encompasses a literature review and an exploratory study toward the definition of the problem to be solved and the scenarios to be tested. Since dealing with the full complexity of FM is infeasible in practice, this stage was particularly relevant in supporting the definition of a conceptual model for the prototype tool developed to validate our hypothesis.
Prototype development
In the prototype development stage, attention must be given to the data integration architecture, the user interface and the interaction layer, which are implemented using a modular approach that allows easy adaptation to different technologies. The approach, depicted in Figure 1, uses a web-based interface, thus providing access to FM information on a multitude of platforms, from mobile devices to desktop computers. Since interaction can be performed online, visualization and interaction can be achieved using different platforms.
This interface is supplied through a 3D visualization engine developed in Unity 3D that relies on data visualization and integration micro-services supplied by the FM3D application components.
Evaluation
The evaluation stage compares an existing legacy system with the VRE approach embodied in the prototype. The evaluation process consists of a comparative study that contrasts the 3D interface applied to the centralized control of a building automation system with a corresponding legacy application. The main goal of the evaluation is to investigate the reliability and possible benefits of 3D virtual environments for building automation by performing a quantitative as well as a qualitative analysis of both systems through user interaction test sessions. In this sense, several tests were run with distinct types of participants, comparing the prototype against an existing legacy application for centralized control and monitoring, which features a traditional 2D window-icon-menu-pointer interface. To this aim, the legacy application interface depicted in Figure 3 is used, which is already installed and working at the pilot test building. The comparison proceeded along two testing stages, the early prototype stage and the final prototype stage. With the early prototype stage we intended to get a first perspective of how users would react to our 3D interface. At this time, all main functionalities were already implemented; therefore, the feedback gathered in this phase not only suggested possible adjustments to our final prototype but also provided good preliminary quantitative and qualitative results on the prototype's main functionalities.
Both stages of evaluation are structured by the following steps: a pre-test questionnaire to establish the user profile; a briefing about test purposes and task description, preceded by a short training session where users freely explored each application for three minutes; and a post-test questionnaire after completing a set of pre-determined tasks in each application. This structure is meant to ensure an even test distribution across the applications. It should be mentioned that in the second phase, two more tasks were included to be tested only with the FM3D prototype; these new tasks are intended to evaluate functionalities not currently available in the legacy application. During task execution we measure the time that each user takes to complete each task in each application. If a task is not completed after three minutes, the task is considered incomplete. From these data we are able to perform a quantitative comparison between the two applications. The post-test questionnaire contains direct questions related to the user experience, with special emphasis on the difficulties users faced during task execution, to enable a qualitative analysis.
User interface
While the lower layers are certainly important, in this paper our main concern is the system's user interface (Figure 4); therefore, we will focus our attention on the upper layer of the architecture. As a consequence of using a VRE within a web browser, our solution offers a powerful yet easy way to supervise and control small, medium and large facilities. The user interface was developed in the Unity 3D game engine, which enables users to interact with a VRE from within an internet browser. Using simple controls, the user can explore the building, inspecting and commanding several devices. To assist the user in navigating through the 3D model, our interface offers two distinct views of the building simultaneously: the main view and the mini-map view. The main view is where most interaction will occur.
This view allows the user to navigate in the building, from a global viewpoint down to detailed local exploration. Navigation in the scene is controlled by the navigation widget, located in the rightmost part of the view; this widget offers rotate, pan and zoom functionality. The left-hand side of the main view presents the control area, which consists of a set of controls offering important filtering functionality. Through these controls, the user is able to select which types of devices and sensors should be shown or hidden in the visualization, as well as enable or disable navigation aids such as the orientation guidelines. Additionally, through a text box the user is able to search for a given room just by typing its name. The mini-map consists of a small view of the complete building, located at the bottom-left corner of the screen. It allows users to have a complete view of the building and perceive which part is displayed in the main view. Most importantly, it offers additional navigation control: dragging the mouse over the mini-map rotates the miniature view of the building around the vertical axis, and if the user chooses to lock it with the main view, changes in either will be reflected in both. The mini-map view also offers a fast and easy way to change the active floor.
Interaction details
To minimize visual complexity, only one floor at a time is rendered in the main view, the so-called active floor. The user selects which floor should be activated through the mini-map or from a specific control in the navigation widget. The selected floor is initially rendered in the main view with only the walls, and no devices or sensors shown. The user can then select which categories of sensors and devices should be displayed; in the current version of our prototype the available categories are lighting, HVAC, temperature and doors. Using the navigation widget, the user can navigate to the desired space in the building to inspect it.
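The filtering behaviour described in this section — rendering only devices on the active floor whose category toggle is enabled, plus room search by typed name — can be sketched as follows. The device records and identifiers are hypothetical; only the four category names come from the text:

```python
# Hypothetical device records; the four categories follow the prototype's list.
devices = [
    {"id": "L-2F-01", "category": "lighting",    "floor": 2, "room": "R201"},
    {"id": "H-2F-01", "category": "hvac",        "floor": 2, "room": "R202"},
    {"id": "T-1F-01", "category": "temperature", "floor": 1, "room": "R101"},
    {"id": "D-2F-01", "category": "doors",       "floor": 2, "room": "R201"},
]

def visible_devices(devices, active_floor, enabled_categories):
    """Only devices on the active floor whose category toggle is on get rendered."""
    return [d for d in devices
            if d["floor"] == active_floor and d["category"] in enabled_categories]

def find_room(devices, typed_name):
    """Room search as typed into the text box: case-insensitive substring match."""
    return sorted({d["room"] for d in devices
                   if typed_name.lower() in d["room"].lower()})
```

In the actual prototype this selection would drive which Unity scene objects are activated; the sketch only captures the filtering logic itself.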
When the view gets closer to a room, additional information is depicted, ensuring that the user will not be overloaded with unnecessary information. When possible, the information is shown pictorially, such as the HVAC information represented as gas clouds whose color, size and speed convey the current status of the device, as illustrated in Figure 5. When the user clicks on a device, a pop-up appears to show additional information and allow the user to control the device. The content of this pop-up window depends on the category of the device itself: obviously, the information and controls associated with an HVAC device are distinct from those associated with a lighting system. Figure 6 shows the information window for a light. In this case, besides the on/off state of the light, which can be changed by clicking the corresponding button, additional information is shown: at a glance the user will grasp relevant details such as the lamp type, its power, the number of starts, the total operating hours and its estimated lifetime. If necessary, the user can mark the lamp for replacement or consult informative notes associated with it. The FM3D interface was designed to be easy to use while offering a complete set of functionalities, thus allowing even inexperienced users to operate it for maintenance activities. To verify this assumption we organized a formal evaluation of the FM3D system, involving real users.
Discussion
Regarding building operators' perception of the FM3D prototype, results show that users found that a 3D representation of the built environment facilitates both navigation and relating information to the location to which it pertains. Although this is an encouraging result, the participants' familiarity with the built environment should be taken into account; 3D representations of very large built environments may not show these advantages.
In this case, it might be necessary to take into account other design considerations to assist users in clearly identifying areas and spaces; one example is overlaying the 3D representation with a photo-realistic view (following the analogy of the well-known street-view perspective in Google Maps). In terms of ease of use, although users found FM3D easier to use than the legacy application, they had some difficulties locating some of the information. Specifically, because information is shown according to the aggregation level, it was not always easy to understand where to find a particular piece of information. This can be an obstacle in buildings that have many layers and/or integrated spaces, which require users to zoom in and out many times to access the information. Regarding learnability, users found the FM3D interface easier to learn with respect to navigation, command and information-retrieval functionality; this is highlighted further by the quantitative results, especially those of the advanced participants. In terms of satisfaction, participants found the FM3D prototype superior both in usefulness and in its ability to improve task performance compared to the legacy application. Through the usability tests that we have conducted, we have reason to believe that 3D interactive environments have the potential to significantly increase productivity in maintenance tasks. In these tests, users without training demonstrated a high degree of engagement and performance operating our 3D interface prototype, especially when compared with the legacy application.
The potential decrease in user time and increase in engagement with a 3D environment could eventually translate into lower cost and an increase in quality, potentially making 3D-based interfaces the option of choice in future IT tools for BMS.
Conclusions
FM activities are increasingly supported by IT tools, and their effective usage ultimately determines the performance of the FM practitioner. In this paper, we argued that the usability of IT tools for FM suffers from a number of limitations, mostly related to the lack of true integration at the interface level and inadequate handling of spatial information; moreover, their steep learning curve makes them unsuited for inexperienced or non-technical users. We then proposed VREs as a solution to the problem and validated our hypothesis by implementing FM3D, a prototype VRE for the monitoring and control of buildings, centered around the requirements of FM activities with respect to integration, visualization and interaction with spatial information. This work validates literature reports pointing to an increase in performance of VREs over traditional interfaces and shows that new approaches to interacting with spatial information are not only feasible but also desirable. The usability tests we have conducted indicate that VREs have the potential to greatly increase productivity in maintenance tasks. Users without training demonstrated a high degree of engagement and performance while operating a VRE interface, compared with a legacy application. The potential drop in user time and increase in engagement with a VRE will eventually translate into lower cost and an increase in quality, potentially making VRE-based interfaces the option of choice in future IT tools for FM. The major contribution of this paper is to demonstrate that VREs have a low barrier to entry and the potential to replace existing legacy BMS user interfaces.
Additionally, it showed that users regard VREs as a natural next step in interaction with FM systems. In our approach, it remains unclear to what extent integration at the interface level contributes to the increase in user productivity. Presumably, not all maintenance activities benefit in the same way from an approach such as the one we propose. Therefore, as future developments, additional studies should aim at gaining insight into which aspects of a VRE interface contribute to which maintenance activities, considering different interfaces (web-based and mobile). These studies should begin by mapping the information needs of each activity and, thereafter, assessing existing FM tool interfaces and the VRE prototype against them. Moreover, since the evaluation is based on a narrow mix of FM tasks, further studies are required to establish the causal relationship between the employment of VREs in FM and increases in productivity, especially for tasks involving multiple tools.
Foreign Literature Translation: Designing an Immersive Virtual Reality Interface for Layout Planning
Designing an Immersive Virtual Reality Interface for Layout Planning
Abstract
This paper discusses why production layout planning is considered a suitable new field for the application of virtual reality, develops a framework for a virtual layout planning tool, and reports a study comparing an immersive virtual reality system with a monitor-based system for detecting layout design problems.
The proposed framework has been evaluated in a study, although that study did not yet include a provision for interactively changing the spatial layout.
The main aim of the study was to compare an immersive system with a monitor-based virtual reality system for factory layout analysis.
Study participants investigated a shop-floor environment containing three main layout problems (equipment arrangement, space utilization, and equipment location) and gave an assessment of feasible improvements. © 2000 Elsevier Science B.V. All rights reserved.
Keywords: virtual reality, HMD, layout design, manufacturing cell
1. Introduction
Virtual reality (VR), like few other new technologies, has been surrounded by hype.
Ever since the BBC briefly featured VR, including an HMD-based VR model of a jet engine, in its nine o'clock news on 19 January 1993, the technology has attracted enormous enthusiasm and attention.
Now, however, the situation has changed somewhat, and VR jargon is no longer impenetrable.
VR has instead acquired a negative image, as a technology that promised much but delivered little.
This is partly the result of exaggerated expectations, and partly of the lack of thorough investigation into where VR can be applied to achieve significant, quantifiable benefits.
Current VR research is therefore directed at finding quality engineering applications in which the technology's superior visualization and interaction capabilities outweigh its drawbacks.
2. Background
Shop-floor layout is not a new topic in manufacturing environments.
Traditionally, the choice of a layout scenario is based on user-defined characteristics such as travel frequency, travel distance, and the physical attributes of parts, equipment and operators [1].
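Criteria such as travel frequency and travel distance are classically combined into a weighted material-travel objective used to compare candidate layouts. A minimal sketch, with made-up coordinates and trip frequencies rather than anything from the paper:

```python
# Minimal weighted-travel-distance objective for comparing candidate layouts.
# Workstation coordinates (meters) and daily trip frequencies are illustrative.
layout = {"lathe": (0.0, 0.0), "grinder": (10.0, 0.0), "assembly": (10.0, 8.0)}
trips_per_day = {("lathe", "grinder"): 40, ("grinder", "assembly"): 25}

def travel_cost(layout, trips):
    """Sum of (frequency x rectilinear distance) over all material flows;
    a lower score indicates a better arrangement under these criteria."""
    total = 0.0
    for (a, b), freq in trips.items():
        (xa, ya), (xb, yb) = layout[a], layout[b]
        total += freq * (abs(xa - xb) + abs(ya - yb))
    return total
```

A planner would evaluate this objective for each candidate block layout and keep the lowest-scoring one; the detailed-layout stage discussed next then fixes exact positions and orientations that such a coarse score cannot capture.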
As research shows, the first planning stage is the design of the overall plan, the so-called block layout [2].
However, such data are of limited use when the exact position and orientation of equipment are fixed in the detailed layout.
Current approaches to factory organization, such as the formation of manufacturing cells, have exposed problems that call for a new kind of design tool [3,4].
The aim of manufacturing cells is to combine, within a cell or module, the efficiency of flow lines (typically automotive assembly lines) with the flexibility of functional layouts (turning sections, grinding sections, assembly sections, and so on).
Virtual Reality and Medical Education: Foreign Literature Translation (Chinese–English), 2019
Medical Student Perspectives on the Use of Immersive Virtual Reality for Clinical Assessment Training
Matthew Zackoff, Francis Real, Bradley Cruse, David Davis, Melissa Klein
What's New?
Medical students reported an immersive virtual reality (VR) curriculum on respiratory distress as clinically accurate and likely to impact future patient assessment. VR training was rated as equally or more effective than high-fidelity mannequins and standardized patients, but less effective than bedside teaching.
Keywords: Clinical assessment, respiratory distress, virtual reality
Background
The practice of medicine has traditionally relied on an apprenticeship model for clinical training – an approach in which bedside teaching was the primary source of knowledge transfer. However, the frequency of bedside teaching is declining due to duty-hour restrictions, increased patient turnover, and competing demands on physicians' time. Alternatives to bedside teaching have emerged, including simulation-based medical education, though current approaches are limited in their applicability to, and functionality for, pediatric training. For instance, standardized patients are not available for many pediatric conditions, especially diseases that predominantly affect infants. Moreover, patient simulators often cannot display the critical physical exam findings needed to discriminate between sick and healthy patients (e.g. mental status, work of breathing, perfusion changes). An emerging educational modality, immersive virtual reality (VR), could potentially fill this gap. Immersive VR utilizes a three-dimensional, computer-generated environment in which users interact with graphical characters (avatars). While screen-based simulation training has been demonstrated to enhance learning outcomes, immersive VR has the potential for broader impact through increased learner engagement and improved spatial representation and learning contextualization.
To date, this technology has demonstrated effectiveness in communication skills training; however, it has not been investigated for clinical assessment training. To evaluate the role of immersive VR in medical student clinical assessment training, we created a VR curriculum focused on respiratory distress in infants. Our pilot study explored medical student attitudes toward VR and perceptions of VR compared to other common medical education methods.
Educational Approach and Innovation
Setting and Study Population
An IRB-approved prospective pilot study was conducted at Cincinnati Children's Hospital Medical Center, a large academic children's hospital, during the 2017 to 2018 academic year. A randomized sample of third-year medical students, based upon predetermined clinical team assignment during their pediatric rotation, was invited to participate in a VR curriculum.
Curriculum Design
The curricular goal, to improve third-year medical students' ability to appropriately categorize a pediatric patient's respiratory status, aligns with an Association of American Medical Colleges Core Entrustable Professional Activity for entering residency: the ability to recognize a patient who requires an urgent or emergent escalation of care. To address this goal, an immersive VR curriculum using the clinical scenario of an admitted infant with bronchiolitis was developed collaboratively by clinicians, educators, and simulation developers. A virtual Cincinnati Children's Hospital Medical Center inpatient hospital room was created using the Unity development platform and was experienced through an Oculus Rift headset. The environment included a vital signs monitor, a virtual stethoscope, and avatars for the patient and preceptor. The patient avatar could demonstrate key exam findings (i.e. mental status, work of breathing, and breath sounds) that correlated with three clinical scenarios: 1) no distress, 2) respiratory distress, and 3) impending respiratory failure.
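The three scenarios, each defined by a bundle of exam findings, and the standardized prompting they drive can be sketched as a small rule table. The finding values and generated prompt wording below are illustrative placeholders, not the study's actual clinical content:

```python
# Illustrative sketch only: finding values are placeholders, not clinical guidance.
SCENARIOS = {
    "no_distress": {
        "mental_status": "alert",
        "work_of_breathing": "normal",
        "breath_sounds": "clear",
    },
    "respiratory_distress": {
        "mental_status": "irritable",
        "work_of_breathing": "retractions",
        "breath_sounds": "wheezing",
    },
    "impending_respiratory_failure": {
        "mental_status": "lethargic",
        "work_of_breathing": "severe retractions",
        "breath_sounds": "diminished",
    },
}

REQUIRED_OBSERVATIONS = {"mental_status", "work_of_breathing", "breath_sounds"}

def next_prompt(commented_on):
    """Return a standardized preceptor prompt for a finding the learner has not
    yet commented on, or None once all required observations are covered."""
    for finding in sorted(REQUIRED_OBSERVATIONS - commented_on):
        return f"What do you think of his {finding.replace('_', ' ')}?"
    return None
```

Encoding the prompts as a deterministic function of what the learner has and has not observed is one way to obtain the standardization across facilitators that the curriculum design aims for.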
The displayed vital signs and auscultatory findings matched the clinical status of the patient. Learners received feedback on their performance immediately following each simulated case. The preceptor avatar, controlled by a physician facilitator (M.Z., F.R.), guided the student through the VR simulation. Learners were expected to recognize and interpret the vital signs, physical exam, and auscultatory findings and come to an overall assessment of the patient's respiratory status. Detailed algorithms correlating learner input to avatar responses allowed for standardization of the avatar preceptor prompts. For example, if a student did not comment on the patient's lung sounds, the facilitator was guided to select the avatar prompt, "What do you think of his lung sounds?" Facilitator-provided feedback for each scenario was standardized to ensure consistent learner experiences. Scenarios were piloted on four critical care attending physicians, two hospitalists, two general pediatricians, four critical care fellows, four senior pediatric residents, and four medical students to assess the accuracy of the findings portrayed in the clinical scenarios as well as the feasibility of the planned facilitation. Iterative changes were made to the VR simulation based upon this feedback.
Survey Design and Implementation
Immediately following the VR curriculum, students completed a survey to assess immersion within the VR environment using questions derived from a validated instrument.15 Demographic data and attitudes toward the VR curriculum, including its perceived effectiveness compared to other education methods, were assessed on a 5-point Likert scale via a survey created de novo and piloted prior to use. Survey results were analyzed with binomial testing.
Results
All eligible students consented to participate in the research study (n = 78). Ages ranged from 20 to 39, with an equal distribution between male and female.
Students self-identified as White (51.3%), Asian (28.2%), Black (7.7%), Hispanic/Latino (3.9%), or other (9.0%). Most students reported a strong sense of presence in the VR environment (85%), and the vast majority noted that the scenarios captured their attention and senses (96% and 91%, respectively). A majority of students agreed or strongly agreed that the simulations were clinically accurate (97.4%), reinforced key learning objectives (100%), and would impact future care provision (98.7%). In addition, students reported VR training as more effective (P < .001) than reading, didactic teaching, online learning, and low-fidelity mannequins. VR training was rated as equally or more effective (P < .001) than high-fidelity mannequins and standardized patients. The only modality that VR was rated less effective than was bedside teaching.

Figure. Binomial testing demonstrates that a statistical majority of students found virtual reality training more effective than reading, didactic teaching, online learning, and low-fidelity mannequins, and equally or more effective than high-fidelity mannequins and standardized patients.

Discussion and Next Steps

This study represents a novel application of immersive VR for medical student training. The majority of student participants reported a sense of presence within the VR environment and identified the modality as equal or superior in perceived effectiveness to other training options, such as standardized patients and high-fidelity mannequin simulations, while rating it less effective than bedside teaching. These findings are consistent with those of Real et al13 that learners perceived VR as equally effective to standardized patients for communication training.
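The binomial testing used for the survey analysis can be sketched with a short script. The counts below are hypothetical, chosen only to illustrate the method; the paper reports percentages and p-values, not raw per-item counts.

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """One-sided binomial test: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: suppose 70 of the 78 students rated VR as more
# effective than didactic teaching. Under the null hypothesis that a
# student is equally likely to rate VR better or worse (p = 0.5), the
# chance of observing 70 or more such ratings is vanishingly small.
p_value = binom_sf(70, 78)
print(p_value < 0.001)  # True: a "statistical majority" at P < .001
```

The same test, applied per comparison modality, is what the figure caption summarizes.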
Our learners expressed similar perceptions regarding the use of VR for clinical assessment training, expanding the potential applications for VR-based education. The assessment of a patient's respiratory status, and importantly the recognition of the need for emergent escalation of care, is a core clinical competency that directly relates to patient safety. The ability of immersive VR to convey specific critical exam findings could help accelerate junior learners' competence in identifying impending respiratory failure and potentially impact future care provision. The learnings from this pilot could be applied to other clinical scenarios (e.g., sepsis) given immersive VR's ability to accurately simulate key exam findings.

This study has several limitations. First, it was conducted at a single site with only third-year medical students. Second, the evaluation focused on students' perceptions toward the effectiveness of VR-based education in general rather than specifically on VR-based education on pediatric respiratory distress. Though we could not standardize students' exposure to the comparison education modalities, all students underwent a high-fidelity simulation focused on respiratory distress as part of their pediatric rotation. This high-fidelity simulation occurred prior to the VR curriculum and thus represented a consistent reference for all of the students who completed the study survey.

A final significant consideration for this study is the generalizability of the approach. With each passing year and iteration of available equipment, the cost of VR-compatible headsets and computers continues to fall. We utilized the Oculus Rift headset and a VR-capable computer, which together cost on the order of $2000. The development platform, Unity, is available at no cost.
We are fortunate to have VR developers as employees of our simulation center, facilitating the development of new scenarios; this represents a resource that may currently be unavailable at many other institutions.

Next steps include establishing response process validity through assessment of learner application of knowledge gained during the VR curriculum. Additional research goals include exploring the effectiveness of immersive VR at additional sites to assess generalizability, directly comparing VR head-to-head with other educational modalities (e.g., standardized patients, high-fidelity simulations), and evaluating change in actual clinical practice as well as the costs associated with these modalities to explore the feasibility of broader implementation of VR training. The findings from this pilot study suggest that immersive VR may be an effective supplement to bedside teaching due to its ability to accurately represent real-life environments and clinical scenarios in a standardized format that is safe for learners and patients.

Chinese: A Study on Immersive Virtual Reality for Medical Clinical Training. What is new? Medical students reported that an immersive virtual reality (VR) curriculum on respiratory distress was clinically accurate and effective, and may influence future patient care outcomes.
Virtual Reality (VR)

Overview. The term "virtual reality" comes from the English "Virtual Reality", abbreviated VR. The concept was first proposed by Jaron Lanier of the United States in the early 1980s. Virtual reality technology (VR) is a comprehensive technology integrating computer technology, sensor technology, and human psychology and physiology. It uses a computer simulation system to model the external world, chiefly environments, skills, sensing devices, and perception, providing users with a multi-sensory, three-dimensional, dynamic, and interactive simulated experience.
Characteristics. Virtual reality has three main characteristics: immersion (Immersive), interactivity (Interactive), and imagination (Imagination). Immersion means that the external environment simulated by the computer system is highly realistic: the user becomes fully absorbed in the three-dimensional virtual environment and can hardly tell the simulation from reality. Everything in the virtual environment looks real, sounds real, and even smells real, feeling just like the real world, so that the user becomes immersed in it. Interactivity means that users can operate on objects in the virtual world and receive feedback. For example, a user can reach out and grasp an object in the virtual world: the eyes perceive its shape, the hand perceives its weight, and the object moves along with the hand's manipulation. Imagination means that the virtual world greatly extends human imagination beyond the real world: one can imagine not only scenes that actually exist in reality but also situations that do not exist, or cannot occur, in the objective world. Depending on the user's degree of immersion and mode of participation, virtual reality can be divided into four categories: non-immersive VR, immersive VR, distributed VR systems, and augmented VR systems.
Applications. 1. Kindergarten teaching. (1) Realistic experiential teaching. The greatest advantage of VR technology lies in its open and free teaching space, which addresses classroom interaction, question answering, and hands-on practice. For example, VR occupational simulations let children experience being a chef, an office worker, or a convenience store clerk. Children not only have to complete the work tasks as they would in real life; more importantly, as a kind of occupational role-playing simulation game, it lets them experience some of the confusion they may face in adapting to future life, so that they can gain real insights through play. At the same time, when head-mounted VR equipment is applied to kindergarten teaching, with teachers preparing the lessons in advance, scenario-based teaching content gives children an immersive learning experience. Learning becomes like watching a film, which to some extent increases the fun of learning and enriches the teaching materials.
Graduation Thesis Foreign-Language Translation: The Development Process and Research Status of Virtual Reality Technology (for graduation thesis translation, Chinese-English)
The Development Process and Research Status of Virtual Reality Technology. Virtual reality is one of the fastest-developing technologies of recent years; together with multimedia technology and network technology, it is regarded as one of the three most promising computer technologies. As with other high technologies, objective demand is the driving force behind the development of virtual reality. In recent years, fields such as simulation modeling, computer-aided design, visualization computing, and telerobotics have raised a common requirement: to build an input/output system more intuitive than existing computer systems, a friendlier human-machine interface that can connect to various sensors, a multidimensional information environment in which people can immerse themselves, rise above, move in and out freely, and interact. VR technology is an integration of artificial intelligence, computer graphics, human-machine interface technology, multimedia technology, network technology, parallel computing, and other technologies. It is an advanced human-computer interaction technology that effectively simulates human seeing, hearing, movement, and other behaviors in natural environments.
Virtual Reality (VR) is an advanced human-computer interaction technology that most effectively simulates human seeing, hearing, movement, and other behaviors in natural environments. It emerged in the 1990s as the newest technology in the computer field, developed by combining computer graphics, multimedia technology, parallel real-time computing, artificial intelligence, simulation technology, and other disciplines. Through simulation, VR creates for the user a three-dimensional image world that reflects, in real time, the changes in and interactions among physical objects. Through a lifelike experience of seeing, hearing, touching, and smelling, participants can directly explore the role and changes of virtual objects in their environment, as if placed in a virtual real world, producing immersion (immersive), imagination (imaginative), and interactivity (interactive). Every step in the advance of VR technology has revolved around these three characteristics: the immersion characteristic, the interaction characteristic, and the imagination characteristic. These three key characteristics distinguish VR from neighboring technologies such as multimedia and computer visualization. The immersion characteristic means that in the virtual world provided by VR, users feel that they have truly entered an objective world. The interaction characteristic requires that users can observe and manipulate entities in the virtual environment in ways familiar to humans. The imagination characteristic means "obtaining perceptual and rational knowledge from a qualitatively and quantitatively integrated environment, thereby deepening concepts and sprouting new ideas."

1. Three stages in the development of VR technology. The development of VR technology can be roughly divided into three stages: the 1950s to the 1970s, a preparatory stage; the early to mid-1980s, when VR technology became systematized and began to move out of the laboratory into practical application; and the late 1980s to the early 1990s, a stage of rapid development.
Virtual Reality Games: Foreign-Language Translation (Chinese-English), 2019-2020
English

Virtual reality games on accommodation and convergence
Zulekha Elias, Uma Batumalai, Azam Azmi

Abstract
The increasing popularity of virtual reality (VR) gaming is causing growing concern, as prolonged use induces visual adaptation effects that disturb normal vision. The effects of VR gaming on the accommodation and convergence of young adults were measured by recording accommodative response and phoria before and after exposure to virtual reality. An increase in accommodative response and a decrease in convergence were observed after immersion in VR games. Visual symptoms were apparent among the subjects after VR exposure.

Keywords: Virtual reality, Accommodation, Accommodative response, VAC, Phoria

1. Introduction
Virtual reality (VR) is a simulated environment in which the visual content, and optionally other senses, are entirely computer-generated, and the participant's actions alter the state of the environment. The visual stimulus and other sensory channels such as touch, smell, sound, and taste are presented by a combination of virtual and augmented reality systems (Rebenitsch and Owen, 2016). Virtual reality has developed rapidly in recent years, particularly VR headsets, which are used by attaching a smartphone containing the VR game and mounting it on the head, thus providing users with an immersive virtual experience (Desai et al., 2014). The current study uses a VR game as the stimulus, as it is perceived to be more appealing to the user, enhancing immersion; furthermore, players show a higher anxiety level, which would enhance their post-VR-gaming response (Pallavicini et al., 2018).
VR gaming blocks out the external environment while promoting sensory immersion through the enlarged field of view (FOV) of the VR headset, providing users with a greater sense of immersion (Martel and Muldner, 2017).

The accommodation and vergence systems are reflexively linked, interacting with each other through accommodative vergence and vergence accommodation; accommodation is stimulated by retinal blur, whereas vergence is stimulated by depth (Hung, 2001). Accommodation and convergence are simultaneously operating ocular systems that enable normal binocular vision; a disruption in one system can affect the other (Shiomi et al., 2013). The demand exerted on the accommodation and vergence systems by VR results in a reduction in visual performance due to the ocular discomfort experienced (Barnes, 2016). Moreover, discomfort in stereoscopic viewing is caused by the need for quick adaptation by the vergence system despite the conflicting accommodation system (Hoffman et al., 2008; Lambooij et al., 2009). Studies have found a significant effect of VR on accommodation and convergence (Mon-Williams et al., 1993; Kooi and Toet, 2004; Rebenitsch and Owen, 2016), caused by a disruption in how these two systems work together. Shiomi and colleagues found that a mismatch between accommodation and convergence resulted in complaints of visual fatigue after users were immersed in the VR world for a period of time (Shiomi et al., 2013). This paper presents investigations of how the accommodative and convergence systems are affected after using a VR headset for a period of time.

2. Methods and materials

2.1. Subjects
Thirty-four subjects participated in this study, of whom 21 were male and 13 female, with ages ranging from 18 to 28 years and a mean age of 23.
All subjects had distance visual acuity of 6/6 or better (21 were spectacle wearers); normal color vision (correct identification of all plates of the Ishihara 24 Plates Edition©); stereoacuity of 50 seconds of arc or better on the TNO plates; a near point of accommodation within the estimated range of at least 12.5 cm; a near point of convergence with break (5–7 cm) and recovery (7–9 cm); and horizontal phoria ranging from 1Δ esophoria to 3Δ exophoria at distance and 0 to 6Δ exophoria at near.

2.2. Instrumentation
The accessory used in this research was the VR Shinecon headset with adjustable inter-pupillary distance, as shown in the figure. The headset provided a field of view of 90–110° with a 360° panoramic view. The focal power of the VR Shinecon® lenses on both sides was approximately 16 D, and disparity was achieved by the offset of the display on the phone. The focal distance of the VR setup was in a given range of approximately 55–75 cm. A smartphone, a Lenovo K6 Power with dimensions 141.9 × 70.3 × 9.3 mm and a screen size of 5.0 inches, was attached to the headset, which was then mounted on the subject's head. The screen was set to 50% brightness. The VR game Galaxy Wars, available on the Google Play Store, was used as the game simulator, as it offers an intense and continuous motion gaming experience in combat. The content varies significantly, from the nearest virtual plane at 3 m up to 500 m. The skybox (larger content) had the furthest virtual plane at about 3000 m. The illumination of the game display was in the range of 0.4–3.9 lux.

2.3. Procedure
All subjects played the game Galaxy Wars for 30 min. The lights in the test room were switched off (approx. 2.5 lux) to avoid reflections, and the subjects were seated on a rotating stool to aid movement. Prior to the VR simulation, accommodative response and horizontal and vertical phoria measurements at distance and near were taken. A phoropter, under good illumination (approx.
572 lux), was used to conduct the Fused Cross Cylinder (FCC) test to measure accommodative response. The target was a cross-hatch chart set at 40 cm. Cross cylinder lenses of ±0.50 D with the minus axis at the vertical meridian were presented binocularly in front of the subject's eyes. Initially, if the horizontal lines were reported clearer, spherical lenses of +0.25 D were added binocularly until the vertical lines became clearer or the lines in both meridians were equally clear; this indicates a lag of accommodation. If, however, the vertical lines were reported clearer when the FCC was first presented, spherical lenses of −0.25 D were added binocularly until the horizontal lines became clearer or both meridional lines were equally clear; this indicates a lead of accommodation.

Vergence stability was measured using the horizontal and vertical phoria tests at 6 m and at 40 cm. The tests were carried out using a Maddox rod, a high-powered cylindrical lens that prevents fusion of the eyes by turning a white point light source into a thin red line. Subjects report esophoria for convergent visual axes and exophoria for divergent ones. The test was carried out in darkness: the Maddox rod was placed in front of the right eye, and the white point light source shone into the left eye at 40 cm. Distance phoria was measured by placing the Maddox rod in front of the right eye and shining the point light source onto a mirror situated at 3 m. Subjects reported the position of the red line relative to the white point of light. If the line and the dot coincide, there is no phoria; if the line is to the right of the dot, it is esophoria; and if the line is to the left of the dot, it is exophoria. A prism bar was added in front of the eye until the line and the dot coincided, giving the phoria value. The test compared the pre and post phoria values to determine any changes in convergence.
The sequence of measuring accommodative response and phoria was randomized to avoid bias. The accommodative convergence to accommodation (AC/A) ratio was then calculated to observe the relationship between the two systems (accommodation and vergence). Immediately after the 30 min of VR exposure, the accommodative response and the change in vergence status were re-measured using the same FCC and Maddox rod methods, maintaining the same measuring procedure. It took approximately 5 min to take the accommodative response and phoria measurements after the VR immersion. The AC/A ratio was also recalculated for each subject. Subjects were asked to report any feelings of discomfort, such as nausea, headache, and dizziness.

3. Results and discussion
A paired t-test was used to independently analyze the mean pre and post accommodative response, horizontal and vertical phoria at distance and near, and the AC/A ratio. There was a significant difference between the pre and post mean values of accommodative response [t (33) = 2.72, p < 0.05] (Table 1). The pre and post mean values of horizontal phoria differed significantly, more so at near [t (33) = 4.42, p < 0.05] than at distance [t (33) = 5.17, p < 0.05] (Table 2). A Wilcoxon Signed-Rank test revealed no statistically significant difference in the median errors of vertical phoria at distance [z = −1.73, p > 0.05] or near [z = 0.81, p > 0.05] (Table 3). There was a significant difference in the mean AC/A ratio between the pre and post VR gaming sessions [t (33) = 2.489, p < 0.05] (Table 4). Fig. 3 shows the frequency of participants reporting visual symptoms after playing VR for 30 min. This paper investigated the mean errors in accommodative response and the status of vergence, through phoria and the AC/A ratio, after using the VR headset for 30 min. The findings demonstrate an increase in accommodation and changes in vergence status.
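The paper does not state which AC/A formula or statistics package was used. As one plausible sketch, the calculated (heterophoria-method) AC/A ratio and the paired t statistic can be computed as follows; the subject values below are hypothetical, for illustration only.

```python
from math import sqrt
from statistics import mean, stdev

def calculated_ac_a(pd_cm, near_phoria, dist_phoria, near_demand_d=2.5):
    """Heterophoria-method AC/A ratio, in prism diopters per diopter.
    Phorias are signed: esophoria positive, exophoria negative.
    near_demand_d is the accommodative stimulus at 40 cm (1 / 0.4 m = 2.5 D)."""
    return pd_cm + (near_phoria - dist_phoria) / near_demand_d

def paired_t(pre, post):
    """t statistic for a paired t-test on pre/post measurements."""
    d = [a - b for a, b in zip(pre, post)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

# Hypothetical subject: 6 cm inter-pupillary distance, 4 prism diopters
# exophoria at near, 1 prism diopter exophoria at distance:
print(calculated_ac_a(6.0, -4, -1))  # 4.8

# Toy pre/post accommodative responses (diopters) for three subjects:
print(round(paired_t([1.0, 2.0, 4.0], [0.0, 1.5, 2.5]), 3))  # 3.464
```

With 34 subjects (df = 33), the resulting |t| would be compared against the two-tailed 5% critical value of roughly 2.03, matching the paper's significance reporting.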
The accommodative response values indicate an increase in the lead of accommodation after VR exposure, suggesting that, after a short period of VR gaming, the accommodative response of the eyes to accommodative targets was greater. In humans, a lag of accommodation at near is more typical, indicating that the eyes do not accommodate fully to a stimulus presented at a near distance. However, as found in this study, the disparity of the stereoscopic images on the VR unit increased binocular disparity, inducing accommodative convergence that exceeded the physiological accommodation lag and resulted in a lead of accommodation, similar to the findings of Iwasaki et al. (2009). Turnbull and Phillips (2017) reported minimal effects on the binocular vision system after 40 min of exposure to a VR HMD compared with the real-world equivalent task; the dissociated position of the eyes was not affected by the accommodative demand at either distance or near, implying no accommodative fatigue. This could be due to the stimulus: an outdoor island environment in which participants were required to find treasures around the island, and an indoor cabin with a documentary playing on a wall-mounted television. Both of these tasks are less intense than the combat game used in the current study. However, a thought-provoking finding of Turnbull and Phillips (2017) may indirectly agree with the current study, namely the choroidal thickness changes. The significant increase in choroidal thickness after VR exposure suggests that a lead of accommodation did occur even with a non-intense VR experience, though not to the point of visual discomfort, since accommodative errors were not among their major findings suggesting a direct effect of VR immersion. Our results agree with a study by Roberts et al. (2018), which showed that accommodative lag decreases (accommodative lead increases) during near-viewing tasks that require more cognitive effort.
Nevertheless, our VR gaming task primarily involved distances farther than typical near-viewing tasks, and the presence of accommodative stimuli at approximately 3–6 m cannot be discounted. Moreover, their findings suggested a significant difference in accommodative response in a child population but not in an adult population, which raises a question about the susceptibility of the accommodative system to visual cognitive demands. One plausible explanation for our findings is accommodative hysteresis. Sustained exposure to near tasks via the VR headset may trigger accommodative hysteresis: the constant changes of apparent viewing distance in VR may cause the level of accommodative response to be altered according to the apparent stimulus distance. This leads to adaptive accommodative hysteresis, which provokes a negative shift (lead) of the accommodative response (Hasebe et al., 2001).

The first notable vergence change seen in this study was an exo-shift of the horizontal phoria. The horizontal phoria values indicated a shift towards exophoria at both far and near distances. Previous research reported a shift towards exophoria when playing games in 3D, suggesting that it is due to the cross-link between accommodation and convergence: a lead of accommodation induces exodeviation (Pölönen et al., 2013). This dynamic relationship between the accommodation and vergence systems is represented by the AC/A ratio. Our study showed that the AC/A ratio reduced significantly after 30 min of VR exposure. The decrement of the cross-link gain between accommodation and vergence may be explained by the fact that, during exposure to the VR games, the subjects were viewing images moving backwards and forwards in depth.
This type of viewing has been found to decrease the gains of the cross-links (Mon-Williams and Wann, 1998), leading to an exo-shift of the horizontal phoria. This study also indicated that the phoria at near was affected more than at distance. A probable explanation is that near responses are dominated by vergence movements due to the short latency period and smaller fixation disparity. Binocular disparity ought to be constrained to certain values to enable comfortable stereoscopic viewing (Bando et al., 2012). As for the vertical phoria, there was no indication of change at either distance, as there was no misalignment in the vertical plane during the use of the VR headsets (Kalich et al., 2004). Vertical vergence adaptation is usually the result of a convergence-dependent gain alteration of the extraocular muscles in the vertical plane, irrespective of the position of the eye in the orbit. A shift of vertical phoria requires a prolonged period of adaptation, as shown experimentally by Schor (2009): after 1 h of exposure to alternating fixation on targets separated horizontally as well as vertically, the vertical phoria changed by only 0.5Δ. This shows an underlying adaptive vertical vergence mechanism that maintains the degree of disconjugacy of vertical saccades, and the change may only be observable if a longer period of adaptation is allowed (Ygge and Zee, 1996).

The accommodation and vergence changes found in this study raise an interesting discussion of the vergence-accommodation conflict (VAC) while using virtual reality devices. The VAC caused by VR gaming is due to a conflict of depth cues, in which the depth cues for the accommodation and vergence systems do not match (Reichelt et al., 2010). As explained by Takatalo et al. (2011), user experiences during 3D gaming differ from those with normal stereoscopic displays.
The concepts of immersion, fun, presence, involvement, engagement, and flow accumulate during the experience. Presence, also referred to as spatial presence (IJsselsteijn et al., 2000), which produces perceived realness and engages attention, keeps changing during the gaming experience. Thus, the virtual image plane distance cannot be measured in a straightforward manner. Instead, one must recognize that during stereoscopic gaming the stimulus' apparent position keeps changing, leading to possible VAC conflicts. Apparent distances are deemed comfortable in the context of a virtual reality display when the content zone of the apparent images falls within 0.5 m to 20 m in a 70° field of view (Alger, 2015). However, Shibata et al. (2011) assumed that the maximum and minimum relative widths of the comfort zone were 0.8 D (1.28 m) and 0.3 D (3.33 m). Presumably, VR games use distances different from the assumed comfortable viewing distances, in our case ranging from 3 m to 3000 m. Thus, VAC conflicts might be aggravated by VR gaming compared to other VR tasks. In addition, the VAC conflict seems to be further aggravated by the nature of the viewing, in our case the gaming experience. VAC caused more difficulty for visual performance when the conflicts changed rapidly, according to Kim et al. (2014). When the fixation distance changes rapidly, especially in gaming, the offset between the vergence and accommodation stimuli constantly changes, presumably stimulating the phasic component whenever a step change occurs. While the accommodation depth cue remains static (constant distance to the screen), the vergence depth cues change. The change in angular distance and the differing convergence demands (moving images) create a difference in vergence depth cues, contributing to the conflict between the accommodation and vergence systems.
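The comfort-zone figures above mix diopters and meters; the two are related by D = 1/d, with d the viewing distance in meters. A minimal sketch of the conversion, using distances quoted in the text:

```python
def to_diopters(distance_m):
    """Vergence/accommodation demand in diopters for a distance in meters."""
    return 1.0 / distance_m

def to_meters(demand_d):
    """Viewing distance in meters for a demand in diopters."""
    return 1.0 / demand_d

# The game's content planes, 3 m out to 3000 m, expressed as demand:
print(round(to_diopters(3.0), 3))     # 0.333 D at the nearest plane
print(round(to_diopters(3000.0), 5))  # 0.00033 D: optically near infinity
# A 0.3 D demand corresponds to a viewing distance of about 3.33 m:
print(round(to_meters(0.3), 2))       # 3.33
```

This is why the 3000 m skybox and the 3 m near plane place very different, rapidly alternating demands on the accommodation-vergence cross-link.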
Our results show that both systems changed after use of the VR headset, indicating a conflicting depth stimulus to both systems while maintaining single and sharp binocular vision. However, as observed in our study, this conflict appeared to be resolved by the dynamic relationship between the accommodation and vergence systems (the AC/A ratio), counteracting the cross-links that attempt to drive vergence to be consistent with accommodation and vice versa (Kim et al., 2014). As the accommodative response changes (an increase in accommodation lead), the vergence response is reduced (by about 1 prism diopter). The amplitudes of relative accommodation and vergence cannot act independently; however, each system can be slightly out of phase under normal conditions (Rushton and Riddell, 1999). The mismatch in binocular fusion cues contributes to the perceived quality of the VR experience, and inconsistent accommodation and vergence cues are known to cause visual discomfort in VR headset users (Bando et al., 2012). In our study, the majority of subjects complained of symptoms of motion sickness, such as nausea, headache, and dizziness, after the experiment. As explained by Kennedy and colleagues, such symptoms arise from visually perceived motion in the absence of inertial motion; furthermore, the diversity of symptoms from VR use results from variations in individual responses to motion environments (Kennedy et al., 2010). These results correspond with a study on a motion sickness measurement index in which nausea was the least common complaint, whereas disorientation was the most common visual symptom experienced by subjects, based on the Virtual Reality Sickness Questionnaire, which was modified from the Simulator Sickness Questionnaire (Kim et al., 2018). The sensory conflict theory states that motion sickness can occur when there are paradoxical cues from the vestibular and visual systems (Hasegawa et al., 2009).
Furthermore, these symptoms occur due to the conflict caused by the impression that the world is moving visually while there is minimal physical movement of the body, and due to the time lag before the virtual scene is updated after head movement (Falahee et al., 2000). Visual fatigue can be caused by the large amount of motion and parallax during stereoscopic viewing, as the constant motion exerts an increasing demand on the accommodative and vergence systems to maintain a clear and single image, and also when the stereoscopic images are perceived outside the range of depth of focus (Yano et al., 2002, 2004).

4. Conclusion
The results illustrate that exposure to virtual reality gaming did affect the accommodation and convergence systems. After immersion in virtual reality, subjects exhibited a lead in accommodation, tending to focus more than required, whereas convergence receded, with a shift towards exophoria, due to the loss of gain in the AC/A ratio. These errors in accommodation and convergence in turn lead to visual symptoms and discomfort among young adults. Given these adverse effects of the VAC, a correct setup of VR headsets is important for comfortable and more pleasant experiences. Future work could measure the effect of VR gaming on accommodation and convergence over longer periods of use, without limiting the duration to 30 min. A wider range of stimuli, rather than a single game each time, could be used to measure the extent of changes in accommodation and convergence errors. Further investigations could also be conducted in a child population to observe the effect of VR gaming on accommodation and convergence.

Chinese: Virtual Reality Games on Accommodation and Convergence. Abstract: The growing popularity of virtual reality (VR) games is causing increasing concern, because prolonged use induces visual adaptation effects that disturb normal vision.
(VR Virtual Reality) Foreign-Language Translation: A Virtual Laboratory Based on Virtual Reality
(VR虚拟现实)基于虚拟现实的虚拟实验室外文翻译原文1:VRMLDurchdieimmerbessereHardwareistesheutenichtmehrnötig,füranspruchsvolle3D-Gr afikenspezielleGrafik-Workstationszuverwenden.AufmodernenPCskannjederdurch dreid imensionaleWelten fliegen.UmsolcheWeltenzudefinierenundsieüberdasInternetzuverbi nden,wurdedieSpracheVRMLentwickelt.IndiesemBeitraggebenwireinenÜberblicküberdie grundlegendenKonzeptederVersion2.0vonVRML.GeschichtevonVRMLImFrühling1994diskutierteaufdererstenWWW-KonferenzinGenfeineArbeitsgruppeüb erVirtualReality-SchnittstellenfürdasWWW.Esstelltesichheraus,daßmaneinestandard isierteSprachezurBeschreibungvon3D-SzenenmitHyperlinksbrauchte.DieseSpracheerhi eltinAnlehnunganHTMLzuerstdenNamenVirtualRealityMarkupLanguage.Späterwurdesiein VirtualRealityModelingLanguageumbenannt.DieVRML-GemeindesprichtdieAbkürzunggern e…Wörml“aus.BasierendaufderSpracheOpenInventorvonSiliconGraphics(SGI)wurdeunt erderFederführungvonMarkPescedieVersion1.0vonVRMLentworfen.ImLaufedesJahres1995 entstandeneineVielzahlvonVRMLBrowsern(u.a.WebSpacevonSGI)undNetscapebotschonseh rfrüheinehervorragendeErweiterung,einsogenanntesPlugIn,fürseinenNavigatoran.Die virtuellenWelten,diemanmitVRML1.0spezifizierenkann,sindzustatisch.Zwarkannmansi chmiteinemgutenVRML-BrowserflottundkomfortabeldurchdieseWeltenbewegen,aberdieIn teraktionistaufdasAnklickenvonHyperlinksbeschränkt.ImAugust’96,anderthalbJahre nachderEinführungvonVRML1.0,wurdeaufderSIGGraph’96dieVersionVRML2.0vorgestellt.S iebasiertaufderSpracheMovingWorldsvonSiliconGraphics.SieermöglichtAnimationenun dsichselbständigbewegendeObjekte.DazumußtedieSpracheumKonzeptewieZeitundEventse rweitertwerden.Außerdemistesmöglich,ProgrammesowohlineinerneuenSprachenamensVRM LScriptoderindenSprachenJavaScriptoderJavaeinzubinden.WasistVRML?DieEntwicklerderSpracheVRMLsprechengernevonvirtuellerRealitätundvirtuellenW elten.DieseBegriffescheinenmiraberzuhochgegriffenfürdas,washeutetechnischmachba rist:einegrafischeSimulationdreidimensionalerRäumeundObjektemiteingeschränktenI 
…nteraction possibilities. The idea of VRML is to link such spaces over the WWW and to allow several users to act in these spaces simultaneously. VRML is meant to be architecture-independent and extensible. In addition, it should also work at low transmission rates. Thanks to HTML, the data and services of the Internet appear in the World Wide Web as one gigantic interwoven document through which the user can browse. With VRML, the data and services of the Internet are to appear as one huge space, a huge universe in which the user moves - as cyberspace.

●Basic concepts of VRML 2.0
VRML 2.0 is a file format with which interactive, dynamic, three-dimensional objects and scenes can be described, specifically for the World Wide Web. Let us now look at how the properties mentioned in this definition of VRML were realized in the language.

●3D objects
Three-dimensional worlds consist of three-dimensional objects, which in turn are composed of more primitive objects such as spheres, boxes and cones. When objects are composed, they can be transformed, i.e. for example enlarged or reduced. Mathematically, such transformations can be described by matrices, and the composition of transformations can then be expressed by multiplying the corresponding matrices. The pivot of a VRML world is the coordinate system. The position and extent of an object can be defined in a local coordinate system. The object can then be placed into another coordinate system by specifying the position, orientation and scale of the object's local coordinate system within the other coordinate system. This coordinate system and the objects it contains can in turn be embedded into yet another coordinate system. Besides placing and transforming objects in space, VRML offers the possibility of specifying properties of these objects, such as the appearance of their surfaces. Such properties can be the color, shininess and transparency of the surface, or the use of a texture, given e.g. by a graphics file, as the surface. It is even possible to use MPEG animations as surfaces of bodies, i.e. instead of being displayed as usual in a window like on a cinema screen, an MPEG video can be projected, for example, onto the surface of a sphere.

Fig. 1: VRML 2.0 specification of an arrow

    #VRML V2.0 utf8
    Shape {
      appearance DEF APP Appearance {
        material Material { diffuseColor 1 0 0 }
      }
      geometry Cylinder { radius 1 height 5 }
    }
    Anchor {
      children Transform {
        translation 0 4 0
        children Shape {
          appearance USE APP
          geometry Cone { bottomRadius 2 height 3 }
        }
      }
      url "anotherWorld.wrl"
    }

●VRML and the WWW
What distinguishes VRML from other object description languages is the existence of hyperlinks: by clicking on objects one can reach other worlds or load documents such as HTML pages into the WWW browser. It is also possible to include graphics files, e.g. for textures, as well as sound files or other VRML files, by giving their URL, i.e. the address of the file in the WWW.

●Interactivity
Besides clicking on hyperlinks, VRML worlds can react to a number of further events. For this purpose, so-called sensors were introduced. Sensors generate output events in response to external events such as user actions or the expiry of a time interval. Events can be sent to other objects; to do so, the output events of objects are connected to the input events of other objects by so-called ROUTEs. A sphere sensor, for example, converts mouse movements into 3D rotation values. A 3D rotation value consists of three numerical values that give the rotation angles about the three coordinate axes. Such a 3D rotation value can be sent to another object, which then changes its orientation in space accordingly. Another example of a sensor is the time sensor. It can, for instance, periodically send an event to an interpolator. An interpolator defines a piecewise-linear function, i.e. the function is given by support points and the function values in between are interpolated linearly. The interpolator thus receives an input event e from the time sensor, computes the function value f(e), and forwards f(e) to another node. In this way an interpolator can, for example, define the position of an object in space as a function of time. This is the basic mechanism for animations in VRML.

Fig. 2: Browser renderings of the arrow

●Dynamics
The pioneer in combining Java and JavaScript programs with VRML worlds was Netscape's Live3D, in which VRML 1.0 worlds can be controlled by Java applets or JavaScript functions within an HTML page via Netscape's LiveConnect interface. In VRML 2.0 a new construct, the so-called script node, was added to the language. Within this node, Java and JavaScript code can be given that, for example, processes events. The VRML 2.0 standard defines programming interfaces (Application Programming Interface, API) that allow access to VRML objects from programming languages, namely the Java API and the JavaScript API. The API makes it possible for programs to delete or add routes and to read or change objects and their properties. With these programming possibilities there are now hardly any limits to the imagination.

●VRML - and then?
One of the original development goals of VRML remains unsolved even in VRML 2.0: there is still no standard for the interaction of several users in one 3D scene. Products that make virtual spaces accessible to several users simultaneously are, however, already on the market (Cybergate from BlackSun, CyberPassage from Sony). Furthermore, a binary format is missing, such as Apple's QuickDraw 3D metafile format, which would reduce the amount of data that has to be sent over the network when a scene is loaded. Especially in multi-user worlds, the so-called avatar plays a major role. An avatar is the virtual representation of the user. It is located at the viewpoint from which the user sees the scene. If the user moves through the scene alone, the avatar only serves to detect collisions of the user with objects of the world. In a multi-user world, however, the avatar also determines how a user is seen by other users. Standards for these and similar problems are currently being worked out in working groups of the VRML Consortium, founded at the end of 1996.

References
1. San Diego Super Computing Center: The VRML Repository. http:///VRML2.0/FINAL/

Received 1 September 1997
Author: Stephan Diehl
Nationality: Germany
Source: Informatik-Spektrum 20:294-295 (1997) © Springer-Verlag 1997

Translation 1: Virtual Reality Modeling Language
This article presents the basic concepts of VRML 2.0.
History of VRML: In the spring of 1994 the first WWW conference was held in Geneva, where VRML was discussed.
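The sensor/ROUTE/interpolator event flow described above can be sketched in plain Python. This is a hypothetical illustration of the mechanism, not the actual VRML API: the class and method names (PositionInterpolator, Transform, set_translation) merely mirror VRML 2.0 concepts.

```python
# Sketch of VRML-style animation plumbing: a TimeSensor-like clock emits a
# fraction event e, an interpolator evaluates a piecewise-linear function
# f(e) given by support points, and the result is "routed" to a target node.

from bisect import bisect_right

class PositionInterpolator:
    """Piecewise-linear function given by support points (keys/values)."""
    def __init__(self, keys, values):
        self.keys, self.values = list(keys), list(values)

    def __call__(self, e):
        # Clamp outside the defined range, interpolate linearly inside.
        if e <= self.keys[0]:
            return self.values[0]
        if e >= self.keys[-1]:
            return self.values[-1]
        i = bisect_right(self.keys, e)
        k0, k1 = self.keys[i - 1], self.keys[i]
        t = (e - k0) / (k1 - k0)
        v0, v1 = self.values[i - 1], self.values[i]
        return tuple(a + t * (b - a) for a, b in zip(v0, v1))

class Transform:
    """Target node whose translation is driven by incoming events."""
    def __init__(self):
        self.translation = (0.0, 0.0, 0.0)
    def set_translation(self, value):
        self.translation = value

# The "ROUTE": interpolator output events feed the Transform's input event.
interp = PositionInterpolator(
    keys=[0.0, 0.5, 1.0],
    values=[(0, 0, 0), (0, 4, 0), (0, 0, 0)],  # object rises, then returns
)
node = Transform()

for fraction in (0.0, 0.25, 0.5, 1.0):  # events from a time sensor
    node.set_translation(interp(fraction))

print(node.translation)
```

At fraction 0.25 the interpolator returns the midpoint between the first two support points, exactly the linear interpolation the article describes; a real VRML browser performs the same computation inside its event cascade.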
虚拟现实外文文献翻译最新译文资料
This document is a compilation of recently translated foreign-language literature in the field of Virtual Reality (VR). The following recent translated references on virtual reality are provided for your reference:

1. Title: "Virtual Reality: Past, Present, and Future"
   Author: John Smith
   Abstract: This article reviews the history of virtual reality, surveys its current state, and offers an outlook on its future. It discusses applications of VR in education, entertainment, medicine, and other fields, and raises a number of challenges and opportunities related to VR.

2. Title: "Virtual Reality and Its Impact on Society"
   Author: Emily Johnson
   Abstract: This article examines the impact of VR technology on society. It discusses applications of VR in social interaction, immersive experiences, and mental health, and raises questions of social ethics and law. The author argues that VR will have a profound impact on our daily life, work, and culture.

3. Title: "Virtual Reality in Education: Enhancing Learning Experiences"
   Author: Sarah Davis
   Abstract: This article examines applications of VR technology in education. It presents cases from the field, such as virtual laboratories and virtual field trips, and explains how VR can provide more immersive, interactive, and personalized learning experiences.

Note: the material above is for reference only; for the specific content and views, please consult the original texts.
VR English Essay (with Chinese Translation)
Virtual reality (VR for short) is a new high technology of recent years, also known as artificial environment. Virtual reality is the use of computer simulation to create a three-dimensional virtual world.

During a holiday, my mother took me to 360 Business, where there was a VR experience hall. At my request, my mother bought a ticket and let me play there for more than an hour. From then on, I fell in love with VR. I like its virtual pictures, which can give people the feeling of reproducing scenes from real life. In the future, as long as conditions allow, I will go to the VR experience hall to play.

After returning home, I searched Baidu for materials related to VR software and gained a further understanding of it. VR software can not only be used for games; it can also be used in daily life, education, the military, scientific research, architecture, cars, and other areas. If the school holds a fire drill again in the future, students can wear VR glasses, so that we will have the feeling of escaping from a real fire. When the teacher talks about a certain scene in class, we can put on VR glasses and enter the immersive scene with the teacher's narration to deepen our learning impression. The simulation and interactive features of VR software technology can present abstract and difficult knowledge in a more vivid, intuitive, and comprehensive way, and enhance students' sense of immersion through first-hand experience. It frees students from the boring cramming style of learning and stimulates their interest in learning and exploration. I look forward to more applications of VR software in education. I love VR!
Virtual Reality and the Tourism Industry: Foreign Literature Translation (English-Chinese), 2019-2020
DESCRIBING THE VIRTUAL REALITY AND VIRTUAL TOURIST COMMUNITY (APPLICATIONS AND IMPLICATIONS FOR TOURISM INDUSTRY)
Najafipour Amir Abbas; Heidari Majid; Foroozanfar Mohammad Hossein

Abstract
Advances in technology have direct and lasting impacts on tourism. Recently, developments in information and communication technologies (ICTs) have been transforming tourism in many ways, with impacts on areas ranging from consumer demand to site management. Virtual reality (VR) offers tourism many useful applications that deserve greater attention from tourism researchers and professionals. Planning and management, marketing, entertainment, education, accessibility, and heritage preservation are six areas of tourism in which VR may prove particularly valuable. The notion of community has been a central element of the Internet since its inception. An online virtual community (VC) is defined as a group of people trying to achieve certain purposes, with a similar interest, under certain rules, by using new information technology as their means. Since tourism is traditionally studied and examined in relation to geographic places or space, it is understandable that some tourism marketing organizations lack confidence in and basic understandings of how a virtual community can be used as a marketing tool. As VR and VC are integrated into the tourism sector, new questions and challenges clearly will emerge. However, we cannot afford to ignore the revolutionary changes information technology brings us, which inherently affect the ways we think of linking up to each other. In the current paper, a theoretical framework for the concept of a virtual tourist community, based upon the core characteristics of virtual communities and virtual reality and concerning the fundamental needs of community members, is defined.
Perspectives on how one can define and interpret virtual communities within the tourism industry are discussed, and issues regarding the functions and implications of virtual communities in the travel industry are explored.

Keywords: Virtual Reality, Virtual Tourist Community, Marketing and Planning, Tourism.

Introduction
Research into ICTs and tourism - the union of which can be referred to as 'e-tourism' - has yielded many important insights into how ICTs are changing the tourism sector and how the sector can best adapt to these new technologies. Nevertheless, e-tourism is evolving so quickly that the tourism sector is constantly redefining itself and requires continual reorientation in marketing and management along the way. Moreover, many relevant ICT developments are not made directly for the sake of tourism, so tourism researchers and professionals may not be fully aware of the developments and, therefore, are unprepared to adopt and adapt to the new technologies. One important area of ICT is virtual reality (VR), which already is used commonly in diverse areas including entertainment, design, and simulation training [1]. In fact, VR already has various uses within the tourism sector. As VR technology continues to evolve, there is little reason to doubt that it will become more prevalent throughout society, in general, and the tourism sector in particular. However, for travel organizations, including travel suppliers and intermediaries, establishing and maintaining such communities offer both special opportunities and challenges. On the one hand, such a community erases boundaries created by time and distance and makes it dramatically easier for people to obtain information, maintain connections, deepen relationships, and meet like-minded souls that they would otherwise never have met.
On the other hand, the successful operation of a virtual community depends largely on whether these organizations have a comprehensive understanding of the essence of a virtual community and how well they know their members: who they are and what their fundamental needs are in the context of virtual communities. A basic understanding of the essence of a virtual community is a prerequisite for any organization operating one to be clear about its mission, purpose, and the right direction to take to achieve its goal. VCs can be used by businesses, including tourist firms, to create new types of services, enhance existing products and create new divisions and capabilities, strengthen their positive image, establish relationships with their customers, and contribute to customer loyalty and sales. Business potential is mainly used to create increased trust among a VC's members, combined with quality services that may improve customer loyalty [2]. This paper explores the primary uses for VR within tourism, examines the possibility of using VR to provide substitute tourism experiences, analyzes some of the chief questions and challenges associated with VR's integration into tourism, and suggests numerous ideas for future research related to VR's uses within tourism. The numerous tourism-related uses for VR are illustrated through a description of current and future VR technologies and a subsequent analysis of applications for these technologies within six principal areas of tourism: planning and management, marketing, entertainment, education, accessibility, and heritage preservation. The current paper also identifies the theoretical foundation for the concept of a virtual community, providing clarifications of the core characteristics of virtual communities and the fundamental needs of community members.
Perspectives on how to define and interpret virtual communities are discussed, and issues related to the functions of virtual communities are explored from the member's viewpoint.

Literature review
Virtual communities have been the object of research since their emergence. Major research topics on them include social networking and the reasons for, and consequences of, participating in VCs. The links between VC members may be maintained via a common mailing list (listserv), a blog, or an e-group on a website like or . VCs are used to share information among their members with common interests. However, the process of information sharing is influenced by the trust each member has in the virtual community itself and in its individual members [3]. The Internet provides anonymity, and a person can create different virtual identities and present him/herself as another person. In the field of social sciences, web- and e-mail-based research communities (e.g., TRINet, Atlas) facilitate knowledge dissemination and exchange of ideas among members, although recent research concludes that Internet use increases international cooperation and productivity of academics only marginally [4]. VCs have recently been adopted in social studies research. Thomas et al. (2007) examined a discussion among community members with the potential to provide marketers with insights by making a content analysis of discussion forum messages in a fashion and style VC in . Thomas, Peters & Tolson (2007) performed a web survey targeting members of several free software VCs. Their results showed that participation in the activities carried out in a VC might foster consumer trust and loyalty to the mutual interest of the community [5]. In the field of tourism, Kim et al.
(2004) distributed online questionnaires among Korean members of established, travel-related online VCs in portal sites in order to determine whether loyalty to an online VC would lead members to purchase products. The authors reported an effective response rate of 29% and gave practical recommendations about the features of a VC site on the company homepage which could lead to increased customer loyalty [6]. Chalkiti and Sigala (2008) examined the postings on the DIALOGOI virtual community of the Association of Greek Tourism Enterprises and distributed questionnaires to its members. They concluded that the VC promoted information sharing and idea generation, and that geographically dispersed members working for different sectors managed to communicate asynchronously, thus initiating a social network and yielding usable information potentially developing into knowledge once applied in a business context [3].

Virtual Reality
Notable discrepancy exists regarding the definition of VR, as proposed definitions vary when describing the different features considered necessary to constitute an experience as VR. For this paper, VR is defined as the use of a computer-generated 3D environment - called a 'virtual environment' (VE) - that one can navigate and possibly interact with, resulting in real-time simulation of one or more of the user's five senses. 'Navigate' refers to the ability to move around and explore the VE, and 'interact' refers to the ability to select and move objects within the VE [7]. A VR experience can be described by its capacity to provide physical immersion and psychological presence. 'Immersion' refers to the extent to which a user is isolated from the real world. In a 'fully immersive system' the user is completely encompassed by the VE and has no interaction with the real world, while in a 'semi-immersive' or 'non-immersive system' the user retains some contact with the real world [6].
Consequently, ''A sign of presence is when people behave in a VE in a way that is close to the way they would behave in a similar real life situation''. Feelings of 'presence' are naturally subjective, being associated with a user's psychology, but they undoubtedly are influenced by a VR system's ability to provide high quality data to the user's senses. Not surprisingly, VR systems' capacity to provide such high quality sensory data has improved dramatically since the emergence of VR-type technologies in the 1960s, and modern VR systems are already quite sophisticated [7].

Applications of VR within the Tourism Industry
VR's attributes render it uniquely suitable for the visualization of spatial environments, which is why VR is commonly exploited for the purposes of urban, environmental, and architectural planning. In fact, over one decade ago Cheong (1995) recognized, ''VR has the potential to serve as an invaluable tool in the formulation of tourism policy and in the planning process as well''. Most obviously, VR permits the creation of realistic, navigable VEs that tourism planners can analyze when considering possible developments. When compared with rudimentary, two-dimensional blueprints or fixed, 3D models, VR models offer numerous advantages [8]. For instance, VR models allow planners to observe an environment from an unlimited number of perspectives instead of just a bird's-eye view, and they permit the rapid visualization of potential changes that subsequently can be assessed. VR also can serve as a useful tool for communicating tourism plans to members of an appropriate group or community, and possibly receiving input from such individuals [9].

Marketing
Just as VR can be used to plan and manage a destination, it also can be used to market a destination. Various authors have acknowledged VR's possible contributions to tourism marketing [10]. From a marketing perspective, VR has the potential to revolutionize the promotion and selling of tourism.
VR's tourism marketing potential lies primarily in its ability to provide extensive sensory information to prospective tourists [11]. Many tourism products do, in fact, already use VR or VR-type technologies to attract tourists. For instance, on the Internet one can find many hotels (e.g. ) and destinations (e.g. ) offering 'virtual tours'. These 'virtual tours' often are simply panoramic photographs that do not permit any free navigation, meaning they are not genuine VR, but they importantly still reveal an interest in VR-type technologies [12]. Also, numerous researchers have advocated the incorporation of such interactive features into tourism websites, and these recommendations are supported by evidence from various studies.

Entertainment
In addition to serving as a tourism marketing tool, VR systems also can function directly as marketable, entertaining tourist attractions. In fact, the history of VR began with the 1962 patent of a device called the 'Sensorama Simulator' that offered entertaining, simulated motorcycle rides through New York City, providing 3D images, sound, wind, aromas, and seat vibrations [8]. As VR technology has subsequently evolved, the entertainment industry - and the video game industry in particular - has continued to play a large role in this evolution. Although many VR entertainment applications are designed for home use, others, like the Rewind Rome 3D 'edutainment' show, already have been established or will be established as attractions in tourism destinations. Another example of such an attraction is the Cyber Speedway in Las Vegas, in which the user maneuvers around a virtual speedway or roadway while sitting in a replica racecar with a 20-foot wraparound screen [13].

Education
Aside from simply being entertaining, VR also offers tremendous potential as an educational tool.
The teaching potential of VR has been recognized by educators for many years, and research already has found VR to be useful for educating students of different ages in a variety of subjects, including history [12], science, and mathematics. This capacity appears to derive from several VR attributes that are particularly suited for education. For example, ''A VR model can be an efficient means of communicating a large amount of information because it leverages the user's natural spatial perception abilities''.

Accessibility
The opportunity for researchers like Sundstedt et al. (2004) to investigate virtual re-creations of different sites demonstrates the general increase in ''accessibility'' that VR provides to both researchers and the general tourist public. By definition, such access is limited to a virtual world, yet it certainly is preferable to any alternative apart from actual visitation, which in many cases may be impossible. For instance, a tourist site may be too remote, too expensive, too inhospitable, too dangerous, too fragile, or may simply no longer exist. In addition to providing a best possible alternative in such scenarios, virtual models also can permit unique interaction with historical objects or other fragile items that cannot be handled in the real world [14].

Heritage preservation
The list of heritage sites and objects that can be accessed virtually is constantly expanding, and countless heritage sites and objects from around the world already have been digitized as 3D virtual models, although many are not available to the public. Some of the countless examples of heritage sites and objects that have been rendered as 3D models include Michelangelo's statues of David and the Florentine Pietà, over 150 sculptures from the Parthenon, the Great Buddha carving from Afghanistan, and assorted Angkor temples in Cambodia [15].
Rendering such sites and objects as virtual 3D models can function as a valuable tool for heritage preservation because such virtual models can contain extremely precise and accurate data sets that theoretically can be stored indefinitely. While a site or object may suffer degradation from impacts like erosion, a VR model can provide precise information on its earlier form that can be used both to monitor degradation and to offer a blueprint for restoration [16].

Virtual reality as a tourism substitute
Although VR substitutes may be ideal for preservation purposes, one naturally must question how receptive tourists would be toward such substitutes. Some tourists may sympathize with the preservation objectives of a VR substitute, but most people want to see reality, not only a virtual version of it. Moreover, many aspects of a tourist experience may never be fully replicable in VR. ''For instance, how is VR able accurately to simulate the smell of ocean spray and the splash of seawater on one's face as one participates in virtual surfing?''. In fact, in a 2001 survey involving 31 university students in Australia, the students almost unanimously rejected the prospect of using VR as a substitute for real travel, citing logical limitations such as the lack of spontaneity, the absence of opportunities to relax, and the inability to purchase souvenirs. Furthermore, it is possible that an attempted VR substitute would have the exact opposite of its desired preservationist impact and actually increase users' desire to visit the real site [16]. In fact, countless existing tourism sites and activities already involve artificial, reproduced environments. For instance, at the World Showcase in Disney World's Epcot one can explore environments representing several different countries, and in Las Vegas hotel casinos like the Luxor and the Venetian one can observe recreated environments from ancient Egypt and Venice, respectively.
However, the existence and popularity of such attractions in no way indicate that tourists view them as acceptable substitutes; few tourists, for example, would accept a visit to the Venetian as a substitute for visiting Venice. Nevertheless, other replicated environments seem to function better as genuine substitutes [14].

Travelers' motivations and constraints
An individual's willingness to accept a VR tourism substitute also will be influenced by the motivations behind his or her desire for the particular experience. Tourists often travel for pleasure, but they may also possess a variety of other, more complex motivations. These motivations may include personal push factors, such as the desire to escape one's daily routine, find excitement or novelty, or engage in social interaction. VR applications are capable of satisfying essentially all of one's push factors - yet only to a limited degree. For instance, VR can provide a form of 'escape', but this is a mental rather than physical 'escape'. Therefore, certain push and pull factors will best lend themselves to the suitability of a VR substitute. The opportunity to hold long-distance meetings in VR could supplant much business travel. On the other hand, tourists seeking risk and novelty probably would reject a VR substitute because the desired sensations could not be mimicked fully in such a controlled environment. Consequently, even tourists interested in visiting the same destination may vary in their acceptance of a VR substitute due to their divergent motivations. Motivations are not the sole actors guiding a tourist's decision-making process, however, as they rather moderate this process in conjunction with constraints [15].

Virtual Community
People have different understandings of a virtual community, depending on their specific needs and the context in which they visit one.
For some, it conjures up warm, fuzzy, reassuring images of people chatting and helping each other. For others, it generates dark images of conspiracy, subversive and criminal behavior, and invasion of privacy. Superficially, the term virtual community is not hard to understand, yet it is slippery to define. What makes it more difficult is the fact that in a multidisciplinary field such as tourism, many definitions take a relatively narrow disciplinary perspective [16]. Researchers in this field have been trying to abstract the essence of the virtual community and define it in a way that is acceptable to the majority of the people, if not all of them. The most often cited definition of a virtual community was first given by Rheingold (1994): ''social aggregations that emerge from the Net when enough people carry on those public discussions long enough, with sufficient human feelings, to form webs of personal relationships in cyberspace'' [17]. At the opposite end of the social spectrum are the technology-oriented definitions. The software that supports online communities is a frequently used shorthand way of defining them. It is very common to hear ''techies'' refer to chat, bulletin board, listserv, Usenet News, or Web-based communities. E-commerce entrepreneurs anticipate that online communities not only will keep people at their sites, but will also have an important role in marketing, as people tell each other about their purchases, discuss banner ads, and help and advise each other [16].
But it is still debatable whether this highly commercial perspective of online communities complements or devalues the concept of virtual community. Based on the examination of all these questions and discussions about the definition of virtual community from a variety of perspectives, and considering the unique characteristics of community in cyberspace, its functions and features viewed from both theoretical abstraction and empirical application, this paper proposes the following framework to define the virtual community concept: virtual community as place; virtual community as symbol; and virtual community as virtual. These sociological and theoretical notions of virtual community can only be made feasible by the presence of groups of people who interact with specific purposes, under the governance of certain policies.

Virtual community as place
For the understanding of online community, people often make it analogous to physical community. In the latter, people group themselves into aggregated physical villages that they call communities - urban, rural, or suburban; people also group themselves into symbolic subdivisions based on lifestyle, identity, or character that they call communities - religious, professional, or philosophical. The community ideology has been deeply rooted in our society, and we have historically associated community with place. Analogously, a virtual community can be conceived as a place where people can develop and maintain social and economic relationships and explore new opportunities [13].

Virtual community as symbol
Community, like other social constructs, embodies a symbolic dimension. In the process of community creation, we tend to symbolically attach meaning to the community we belong to, regardless of the social or geographical characteristics of the community. In such an entity of community laden with symbolic meaning, we seek substance rather than form.
One standard for measuring a virtual community is to see whether the community constructed can provide meaning and identity to its members. In this sense, virtual community is a very personal thing, and only the individual can tell if he or she feels a part of the community [18].

Virtual community as virtual
Being virtual is one of the most important defining characteristics that distinguishes virtual communities from physical ones. Virtual communities are characterized by common value systems, norms, rules, and the sense of identity, commitment, and association that also characterize various physical communities. However, the notion of virtual community is inherently unique because of the new element in the virtual community's definitional mix - computers - which affects the way we think about community, especially in a virtual way. The virtual community exists in the minds of participants; this, however, does not mean that virtual community exists solely in the minds of the participants. Thus if we log on, form relationships in cyberspace, and believe we have found community, it is real for us. In fact there is no true distinction between ''virtual'' community and ''real'' community, since the term ''virtual'' means something akin to ''unreal'', and so the entailments of calling online communities ''virtual'' include spreading and reinforcing a belief that what happens online is like a community, but isn't really a community [19].

Operational components of virtual tourist community
It can be seen from the above discussion that a virtual community is place in manifestation, symbolic in nature, and virtual in form. Virtual community is not an entity but rather a process defined by its members. It possesses many of the same essential traits as physical communities, and the substance that allows for common experience and meaning among members. Judging by these criteria, not all virtual social gatherings are virtual communities.
Without the personal investment, intimacy, and commitment that characterize our ideal sense of community, some online discussion groups and chat rooms are nothing more than a means of communication among people with common interests. In addition, a more comprehensive and complete understanding of the virtual community requires an examination of elements at a more operational level. These elements include people, purpose, policy, and computer systems [20]. People are the heart of the community; without them, there is no community. Vibrant discussions, new ideas, and continually changing content distinguish online communities from Web pages. People in online communities play different roles, and such roles can have a positive or negative impact on a community. Some roles that have been identified include: moderators and mediators, who guide discussions and serve as arbiters in disputes; professional commentators, who give opinions and guide discussions; general participants, who contribute to discussion; and lurkers, who silently observe [16]. The purpose of a virtual community helps one understand what it wants to accomplish, who the target audience is, and how participating in the community would benefit the members. The purpose of the community also helps to define both its structure and what resources (time, information, and expertise) will be needed to run the community. Communities that have clearly stated goals appear to attract people with similar goals; this creates a stable community in which there is less hostility. A successful community serves a clear purpose in the lives of its members and meets the fundamental goals of its owners. Though communities evolve, and the purpose will change along with the shifting social and economic landscape of the Web, articulating the purpose up front will help to focus thinking and create a coherent, compelling, and successful Web community [20]. Community needs policy to direct online behavior.
Specifically, policies are needed to determine: requirements for joining a community, the style of communication among participants, accepted conduct, privacy policies, security policies, and repercussions for nonconformance. Unwritten codes of conduct may also exist. The nature of the policies that govern the community, and how they are presented, can strongly influence who joins the community and its character [18]. It is computer systems that make online community a new phenomenon, by supporting and mediating social interaction and facilitating a sense of togetherness. The Internet has two particularly important roles: to enable millions of people to access vast quantities of information, and to enable them to communicate with each other. Both are important to the success of online communities.

Functions of virtual communities from the users' perspective
A successful virtual community must attract and keep enough members to make it worthwhile, and consequently a community builder has to focus on the specific benefits the members will realize by joining the community. The community will be doomed to fail if the basic needs of its members have not been met. The answers to questions regarding why people go to an online community and what draws them there are not simple ones, and the reasons usually vary. Some may want information or support, or to interact with others; others may want to have fun, meet new people, voice their own ideas, or make transactions [21]. Thus, this paper proposes a model that relates three fundamental needs of virtual community members in their online activities: functional needs, social needs, and psychological needs (Fig. 2).

Functional needs
Functional needs are met when community members go online to fulfill specific activities. This can be a transaction in which members buy and sell products or services [22]. It also can support information gathering and seeking, both for learning purposes and for facilitating decision-making.
It can be entertainment and fantasy, or the convenience and value the virtual community provides to its members, where information can be accessed without concern for time and geographical limits.

Social needs

Virtual communities are socially structured, convey social meaning, and meet social needs. These social needs may include relationships and interactivity among members, since virtual communities give people with similar experiences the opportunity to come together, form meaningful personal relationships, and communicate with each other in an interactive way. They may include trust, between members and community owners as well as among community members, which is the starting point of online communication. They may also include the fundamental function of any virtual community: communication.

Psychological needs

Besides fulfilling their functional and social needs, another basic contention of …
Foreign-Language Translation
Project title: Research on a Virtual Laboratory Based on Virtual Reality
Source text 1: VRML — Translation 1: Virtual Reality
Source text 2: VR-LAB — Translation 2: Virtual Reality Laboratory

Source text 1: VRML

Thanks to ever-improving hardware, special graphics workstations are no longer needed for demanding 3D graphics; on a modern PC, anyone can fly through three-dimensional worlds. The language VRML was developed to define such worlds and to link them over the Internet. In this article we give an overview of the basic concepts of version 2.0 of VRML.

● The history of VRML

In spring 1994, at the first WWW conference in Geneva, a working group discussed virtual-reality interfaces for the Web. It became clear that a standardized language for describing 3D scenes with hyperlinks was needed. By analogy with HTML, this language was first named the Virtual Reality Markup Language; it was later renamed the Virtual Reality Modeling Language. The VRML community likes to pronounce the abbreviation "Wörml". Version 1.0 of VRML was designed under the lead of Mark Pesce, based on the Open Inventor language from Silicon Graphics (SGI). In the course of 1995 a number of VRML browsers appeared (among them WebSpace from SGI), and Netscape offered an excellent extension, a so-called plug-in, for its Navigator very early on. The virtual worlds that can be specified with VRML 1.0 are, however, too static: with a good VRML browser one can move through them quickly and comfortably, but interaction is limited to clicking hyperlinks. In August 1996, a year and a half after the introduction of VRML 1.0, version 2.0 was presented at SIGGraph '96. It is based on the Moving Worlds language from Silicon Graphics.
It enables animations and autonomously moving objects; to this end, the language had to be extended with concepts such as time and events. In addition, programs can be embedded, written either in a new language called VRMLScript or in JavaScript or Java.

● What is VRML?

The developers of VRML like to speak of virtual reality and virtual worlds. To me, these terms seem overblown for what is technically feasible today: a graphical simulation of three-dimensional spaces and objects with limited possibilities for interaction. The idea of VRML is to link such spaces over the Internet and to allow several users to act in these spaces simultaneously. VRML is meant to be architecture-independent and extensible, and it should also work at low transmission rates. Thanks to HTML, the data and services of the Internet appear in the World Wide Web as one gigantic interwoven document through which the user can browse. With VRML, the data and services of the Internet are meant to appear as one huge space, a huge universe, in which the user moves: cyberspace.

● Basic concepts of VRML 2.0

VRML 2.0 is a file format for describing interactive, dynamic, three-dimensional objects and scenes, especially for the World Wide Web. Let us now look at how the properties mentioned in this definition are realized in VRML.

● 3D objects

Three-dimensional worlds consist of three-dimensional objects, which in turn are composed of more primitive objects such as spheres, boxes, and cones. When objects are composed, they can be transformed, e.g. enlarged or shrunk. Mathematically, such transformations can be described by matrices, and the composition of transformations can then be expressed by multiplying the corresponding matrices. The pivot of a VRML world is the coordinate system. The position and extent of an object can be defined in a local coordinate system. The object can then be placed into another coordinate system by specifying the position, orientation, and scale of the object's local coordinate system within the other one. That coordinate system and the objects it contains can in turn be embedded into yet another coordinate system. Besides placing and transforming objects in space, VRML offers the possibility of specifying properties of these objects, such as the appearance of their surfaces. Such properties can be the color, shininess, and transparency of the surface, or the use of a texture, given e.g. by an image file, as the surface. It is even possible to use MPEG animations as surfaces of bodies, i.e. instead of being shown in a window like on a cinema screen, an MPEG video can be projected, for example, onto the surface of a sphere.

Fig. 1: VRML 2.0 specification of an arrow (the listing was garbled in the source; it is reconstructed here with standard VRML 2.0 field names, with the arrowhead as a Cone, since bottomRadius is a Cone field, not a Cylinder field):

  #VRML V2.0 utf8
  DEF APP Appearance {
    material Material { diffuseColor 1 0 0 }
  }
  Shape {
    appearance USE APP
    geometry Cylinder { radius 1 height 5 }
  }
  Anchor {
    children [
      Transform {
        translation 0 4 0
        children [
          Shape {
            appearance USE APP
            geometry Cone { bottomRadius 2 height 3 }
          }
        ]
      }
    ]
    url "anotherWorld.wrl"
  }

● VRML and the Web

What distinguishes VRML from other object-description languages is the existence of hyperlinks: by clicking on objects, one can reach other worlds or load documents such as HTML pages into the Web browser. It is also possible to include image files (e.g. for textures), sound files, or other VRML files by giving their URL, i.e. the address of the file on the Web.

● Interactivity

Besides reacting to clicks on hyperlinks, VRML worlds can react to a number of further events. For this purpose, so-called sensors were introduced. Sensors generate output events in response to external events such as user actions, or after a time interval has elapsed. Events can be sent to other objects; to this end, the output events of objects are connected to the input events of other objects via so-called ROUTEs. A SphereSensor, for example, converts mouse movements into 3D rotation values. A 3D rotation value consists of three numbers giving the rotation angles about the three coordinate axes. Such a rotation value can be sent to another object, which then changes its orientation in space accordingly. Another example of a sensor is the TimeSensor. It can, for instance, periodically send an event to an interpolator. An interpolator defines a piecewise linear function, i.e. the function is given by a set of key points, and the function values in between are interpolated linearly. The interpolator thus receives an input event e from the TimeSensor, computes the function value f(e), and forwards f(e) to another node. In this way an interpolator can, for example, define the position of an object in space as a function of time. This is the basic mechanism for animations in VRML.

Fig. 2: Browser renderings of the arrow

● Dynamics

The pioneer in combining Java and JavaScript programs with VRML worlds was Netscape's Live3D, in which VRML 1.0 worlds could be controlled from Java applets or JavaScript functions within an HTML page via Netscape's LiveConnect interface. In VRML 2.0, a new construct, the so-called Script node, was added to the language. Inside this node, Java or JavaScript code can be given which, for example, processes events.
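The TimeSensor → interpolator → target-node chain described above is, at its core, a piecewise linear function evaluated on a stream of time events. The following is a minimal, illustrative Python sketch of that mechanism; the key/keyValue vocabulary is borrowed from VRML 2.0's interpolator nodes, while the function names are ours:

```python
from bisect import bisect_right

def interpolate(key, key_value, t):
    """Piecewise linear interpolation as performed by a VRML interpolator
    node: 'key' holds increasing fraction values, 'key_value' the
    corresponding output values (here: 3D position tuples)."""
    if t <= key[0]:
        return key_value[0]
    if t >= key[-1]:
        return key_value[-1]
    i = bisect_right(key, t) - 1                 # segment containing t
    frac = (t - key[i]) / (key[i + 1] - key[i])  # position inside segment
    a, b = key_value[i], key_value[i + 1]
    return tuple(x + frac * (y - x) for x, y in zip(a, b))

# A TimeSensor would feed fractions in [0, 1] into this function; the
# result would then be routed on to, e.g., a Transform's translation.
keys = [0.0, 0.5, 1.0]
positions = [(0, 0, 0), (0, 4, 0), (0, 0, 0)]    # rise, then sink back
print(interpolate(keys, positions, 0.25))        # → (0.0, 2.0, 0.0)
```

In a real VRML file, the equivalent wiring would be a TimeSensor routed into a PositionInterpolator, whose value_changed output is routed into a Transform's set_translation input.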
The VRML 2.0 standard defines programming interfaces (Application Programming Interfaces, APIs) that allow VRML objects to be accessed from programming languages, namely the Java API and the JavaScript API. These APIs make it possible for programs to add or delete routes and to read or modify objects and their properties. With these programming facilities, there are now hardly any limits to the imagination.

● VRML, and then?

One of the original development goals of VRML remains unmet even in VRML 2.0: there is still no standard for the interaction of several users in one 3D scene. Products that make virtual spaces accessible to several users simultaneously are, however, already on the market (Cybergate by Black Sun, CyberPassage by Sony). Also still missing is a binary format, such as Apple's QuickDraw 3D metafile format, which would reduce the amount of data that must be sent over the network when a scene is loaded. Especially in multi-user worlds, the so-called avatar plays a major role. An avatar is the virtual representation of the user. It is located at the viewpoint from which the user sees the scene. If the user moves through the scene alone, the avatar serves only to detect collisions between the user and objects in the world. In a multi-user world, however, the avatar also determines how a user is seen by other users. Standards for these and similar problems are currently being worked out in working groups of the VRML Consortium, founded at the end of 1996.

Received 1 September 1997. Author: Stephan Diehl (Germany). Source: Informatik-Spektrum 20: 294–295 (1997). © Springer-Verlag 1997

Translation 1: The Virtual Reality Modeling Language

This article presents the basic concepts of VRML 2.0.

● The history of VRML

In spring 1994, the first World Wide Web conference was held in Geneva, where VRML was discussed.
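The 3D-objects section above states that nesting an object's local coordinate system inside a parent one amounts to multiplying transformation matrices. As a worked illustration of that claim (not part of VRML itself; all helper names here are ours), the following Python sketch composes a uniform scale with a translation using 4×4 homogeneous matrices:

```python
def identity():
    """4x4 identity matrix as nested lists."""
    return [[float(i == j) for j in range(4)] for i in range(4)]

def translation(tx, ty, tz):
    m = identity()
    m[0][3], m[1][3], m[2][3] = tx, ty, tz
    return m

def scale(sx, sy, sz):
    m = identity()
    m[0][0], m[1][1], m[2][2] = sx, sy, sz
    return m

def matmul(a, b):
    """Compose two transformations: (a ∘ b), i.e. apply b first."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(m, p):
    """Transform a 3D point by a 4x4 matrix (homogeneous w = 1)."""
    v = [p[0], p[1], p[2], 1.0]
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))

# A child coordinate system scaled by 2 and then moved up by 4 units,
# analogous to the Transform { translation 0 4 0 } in Fig. 1:
local_to_world = matmul(translation(0, 4, 0), scale(2, 2, 2))
print(apply(local_to_world, (1, 0, 0)))   # → (2.0, 4.0, 0.0)
```

A VRML browser performs exactly this kind of matrix composition when it walks the scene graph: each nested Transform contributes one matrix, and a point in the innermost local coordinate system is mapped to world coordinates by the product of all enclosing transforms.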