Selected Foreign Literature Translations (VR / Virtual Reality): Virtual Laboratories Based on Virtual Reality

Foreign Literature Translation
Design topic: Research on a Virtual Laboratory Based on Virtual Reality
Original 1: VRML – Translation 1: Virtual Reality
Original 2: VR-LAB – Translation 2: Virtual Reality Laboratory

Original 1: VRML
Thanks to ever-improving hardware, special graphics workstations are no longer needed for demanding 3D graphics. On modern PCs, anyone can fly through three-dimensional worlds. The language VRML was developed in order to define such worlds and to link them over the Internet. In this article we give an overview of the basic concepts of version 2.0 of VRML.

● History of VRML
In spring 1994, at the first WWW conference in Geneva, a working group discussed virtual reality interfaces for the Web. It turned out that a standardized language for describing 3D scenes with hyperlinks was needed. Following the model of HTML, this language was first named Virtual Reality Markup Language; later it was renamed Virtual Reality Modeling Language. The VRML community likes to pronounce the abbreviation "Wörml". Version 1.0 of VRML was designed under the leadership of Mark Pesce, based on the Open Inventor language from Silicon Graphics (SGI). In the course of 1995 a large number of VRML browsers appeared (among them WebSpace from SGI), and Netscape offered an excellent extension, a so-called plug-in, for its Navigator very early on. The virtual worlds that can be specified with VRML 1.0 are too static: with a good VRML browser one can move through these worlds quickly and comfortably, but interaction is limited to clicking on hyperlinks. In August 1996, one and a half years after the introduction of VRML 1.0, version 2.0 of VRML was presented at SIGGraph '96. It is based on the Moving Worlds language from Silicon Graphics.
It enables animations and autonomously moving objects. For this, the language had to be extended with concepts such as time and events. In addition, it is possible to embed programs written either in a new language called VRMLScript or in the languages JavaScript or Java.

● What is VRML?
The developers of the VRML language like to speak of virtual reality and virtual worlds. To me, however, these terms seem too lofty for what is technically feasible today: a graphical simulation of three-dimensional spaces and objects with limited possibilities for interaction. The idea of VRML is to link such spaces over the Web and to allow several users to act in these spaces at the same time. VRML is meant to be architecture-independent and extensible. It should also work at low transmission rates. Thanks to HTML, the data and services of the Internet appear in the World Wide Web as one gigantic interwoven document through which the user can browse. With VRML, the data and services of the Internet are to appear as one huge space, a huge universe in which the user moves: cyberspace.

● Basic concepts of VRML 2.0
VRML 2.0 is a file format for describing interactive, dynamic, three-dimensional objects and scenes specifically for the World Wide Web. Let us now look at how the properties mentioned in this definition have been realized in VRML.

● 3D objects
Three-dimensional worlds consist of three-dimensional objects, which in turn are composed of more primitive objects such as spheres, boxes and cones. When objects are composed, they can be transformed, i.e.
e.g. enlarged or reduced. Mathematically, such transformations can be described by matrices, and the composition of transformations can then be expressed by multiplying the corresponding matrices. The pivot of a VRML world is the coordinate system. The position and extent of an object can be defined in a local coordinate system. The object can then be placed in another coordinate system by specifying the position, orientation and scale of the object's local coordinate system within the other coordinate system. This coordinate system and the objects contained in it can in turn be embedded in yet another coordinate system. Besides placing and transforming objects in space, VRML offers the possibility of specifying properties of these objects, for instance the appearance of their surfaces. Such properties can be the color, shininess and transparency of the surface, or the use of a texture, given e.g. by a graphics file, as the surface. It is even possible to use MPEG animations as the surfaces of bodies; i.e., instead of being displayed in a window as on a cinema screen, an MPEG video can, for example, be projected onto the surface of a sphere.

Fig. 1: VRML 2.0 specification of an arrow

  #VRML V2.0 utf8
  Shape {
    appearance DEF APP Appearance {
      material Material { diffuseColor 1 0 0 }
    }
    geometry Cylinder { radius 1 height 5 }
  }
  Anchor {
    children Transform {
      translation 0 4 0
      children Shape {
        appearance USE APP
        geometry Cone { bottomRadius 2 height 3 }
      }
    }
    url "anotherWorld.wrl"
  }

● VRML and the Web
What distinguishes VRML from other object description languages is the existence of hyperlinks: by clicking on objects, one can enter other worlds or load documents such as HTML pages into the Web browser. It is also possible to include graphics files, e.g. for textures, or sound files or other VRML files, by specifying their URL, i.e.
the address of the file on the Web.

● Interactivity
Besides clicking on hyperlinks, VRML worlds can react to a number of further events. So-called sensors were introduced for this purpose. Sensors generate output events in response to external events such as user actions, or after a time interval has elapsed. Events can be sent to other objects; for this, the output events of objects are connected to the input events of other objects by so-called ROUTEs. A sphere sensor, for example, converts mouse movements into 3D rotation values. A 3D rotation value consists of three numbers giving the rotation angles about the three coordinate axes. Such a 3D rotation value can be sent to another object, which then changes its orientation in space accordingly. Another example of a sensor is the time sensor. It can, for instance, periodically send an event to an interpolator. An interpolator defines a piecewise linear function, i.e. the function is given by sample points, and the values in between are interpolated linearly. The interpolator thus receives an input event e from the time sensor, computes the function value f(e), and then forwards f(e) to another node. In this way an interpolator can, for example, determine the position of an object in space as a function of time. This is the basic mechanism for animations in VRML.

Fig. 2: Browser renderings of the arrow

● Dynamics
The pioneer in combining Java and JavaScript programs with VRML worlds was Netscape's Live3D, in which VRML 1.0 worlds can be controlled via Netscape's LiveConnect interface by Java applets or JavaScript functions within an HTML page. In VRML 2.0, a new construct, the so-called script node, was added to the language. Within this node, Java and JavaScript code can be specified that, for example, processes events.
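The sensor, ROUTE, interpolator and script-node mechanisms described above can be illustrated in one small, self-contained world. This is an illustrative sketch, not taken from the article: the node and field names follow the VRML 2.0 specification, while the geometry, the DEF names and the concrete values are invented for the example.

```
#VRML V2.0 utf8
# Animation: a TimeSensor periodically emits fraction events (0..1),
# which an interpolator maps to rotation values for the box.
DEF BOX Transform {
  children [
    DEF TOUCH TouchSensor { }
    Shape {
      appearance Appearance {
        material DEF MAT Material { diffuseColor 0 0 1 }
      }
      geometry Box { size 2 2 2 }
    }
  ]
}
DEF CLOCK TimeSensor { cycleInterval 4 loop TRUE }
DEF SPIN OrientationInterpolator {
  # Piecewise linear function: sample points (key) and values (keyValue).
  key      [ 0, 0.5, 1 ]
  keyValue [ 0 1 0 0,  0 1 0 3.1416,  0 1 0 6.2832 ]
}
# Script node: inline VRMLScript that toggles the color on each click.
DEF TOGGLE Script {
  eventIn  SFTime  clicked
  eventOut SFColor color_changed
  field    SFBool  isBlue TRUE
  url "vrmlscript:
    function clicked(t) {
      isBlue = !isBlue;
      if (isBlue) { color_changed = new SFColor(0, 0, 1); }
      else        { color_changed = new SFColor(1, 0, 0); }
    }"
}
# ROUTEs connect output events to input events of other nodes.
ROUTE CLOCK.fraction_changed TO SPIN.set_fraction
ROUTE SPIN.value_changed     TO BOX.set_rotation
ROUTE TOUCH.touchTime        TO TOGGLE.clicked
ROUTE TOGGLE.color_changed   TO MAT.set_diffuseColor
```

Loaded in a VRML 2.0 browser, the box should rotate once every four seconds, and clicking it should toggle its color; the time sensor plays the role of the periodic event source, the interpolator that of the piecewise linear function f(e) described above.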
In the VRML 2.0 standard, programming interfaces (application programming interfaces, APIs) were defined that allow access to VRML objects from programming languages, namely the Java API and the JavaScript API. The API makes it possible for programs to delete or add routes and to read or change objects and their properties. With these programming possibilities, there are now hardly any limits to the imagination.

● VRML – and then?
One of the original development goals of VRML remains unresolved even in VRML 2.0: there is still no standard for the interaction of several users in a 3D scene. However, products that make virtual spaces accessible to several users simultaneously are already on the market (Cybergate from Black Sun, CyberPassage from Sony). Furthermore, a binary format is missing, such as Apple's QuickDraw 3D metafile format, which would reduce the amount of data that must be sent over the network when a scene is loaded. Especially in multi-user worlds, the so-called avatar plays a major role. An avatar is the virtual representation of the user. It is located at the viewpoint from which the user sees the scene. If the user moves through the scene alone, the avatar serves only to detect collisions of the user with objects of the world. In a multi-user world, however, the avatar also determines how a user is seen by other users. Standards for these and similar problems are currently being worked out in working groups of the VRML Consortium, founded at the end of 1996.

● References
1. San Diego Supercomputing Center: The VRML Repository. /vrml/. Contains links to tutorials, specifications, tools and browsers on the Web.
2. Diehl, S.: Java & Co. Addison-Wesley, Bonn, 1997
3. Hartman, J.; Wernecke, J.: The VRML 2.0 Handbook – Building Moving Worlds on the Web. Addison-Wesley, 1996
4.
VAG (VRML Architecture Group): The Virtual Reality Modeling Language Specification – Version 2.0, 1996. /VRML2.0/FINAL/

Received 1 September 1997.
Author: Stephan Diehl. Nationality: Germany. Source: Informatik-Spektrum 20: 294–295 (1997) © Springer-Verlag 1997.

Translation 1: Virtual Reality Modeling Language
This article presents the basic concepts of VRML 2.0.
● History of VRML
In the spring of 1994, the first World Wide Web conference was held in Geneva, where VRML was discussed.

Research on the Application of Virtual Reality (VR) Technology in Public Libraries

Virtual reality (VR) is a technology that simulates a real environment and delivers it to users through tools such as head-mounted display devices.

Through three-dimensional computer graphics and human-computer interaction, it allows users to immerse themselves in a virtual environment and interact with it in real time.

The development of VR technology offers public libraries new fields of application; this article studies the application of VR technology in public libraries.

VR technology can be applied in public libraries in many ways, for example in exhibitions, education and training, and reading experiences.

Through VR technology, public libraries can offer readers richer and more interactive reading and learning experiences, pushing the traditional library service model toward modernization and digitization.

1. Virtual museum: Through VR technology, a public library can digitize precious museum artifacts and offer readers an experience comparable to an actual museum exhibition.

In the virtual space, readers can choose to visit different exhibitions, view 3D presentations of the artifacts, learn about their cultural background and historical stories, and interact with them.

2. Virtual art gallery: A public library can cooperate with partners, or on its own design a virtual art gallery, digitizing the works of famous artists and exhibiting them through VR technology.

Readers can appreciate famous paintings as they would in a real gallery and examine them at close range through a VR device.

With the support of VR technology, the library can also offer readers interaction with artists and immersive learning experiences.

1. Virtual laboratory: VR technology can help public libraries provide a more realistic laboratory experience.

By simulating experimental scenes in a virtual environment, readers can carry out simulated experiments with real-time interaction, improving their experimental skills and their ability to apply theoretical knowledge.

2. Virtual lectures: A public library can invite well-known scholars and experts to give virtual lectures and present the lecture content to readers through VR technology.

In the virtual space, readers can watch the speaker's presentation in a lifelike way and interact with the speaker, improving the learning effect.

1. Virtual library: A public library can design a virtual library, digitizing physical books and presenting them in virtual form.

Through a VR device, readers can choose the books they want to read and leaf through their contents in the virtual environment.

In the virtual library, readers can also communicate with other readers and share their reading experiences.

2. Virtual reading space: A public library can design a virtual reading space that provides a more comfortable and private reading environment.

1. Introduction: Virtual Reality (VR)

1. Introduction: Virtual reality (VR) is an entirely new way for people to visually operate on and interact with computer-simulated environments; compared with traditional human-machine interfaces, it represents a qualitative leap in technical thinking.

A virtual environment is generated by computer and simulated and interacted with in real time through multiple channels: sight, hearing, touch, and even taste [1].

Virtual reality technology fuses the latest achievements of many branches of information technology, including computer graphics, digital image processing, artificial intelligence, sensors, multimedia technology, networking and parallel computing. It has greatly advanced the development of computer technology and has been widely applied in fields such as military simulation, visual simulation, aircraft and automobile manufacturing, and scientific visualization [2].

Virtual walkthrough is an important application of virtual reality technology, realizing the digitization and virtualization of three-dimensional landscapes [3]. Roaming in a virtual scene is real-time and interactive, giving users the feeling of being personally on the scene.

2. Building a three-dimensional virtual scene for walkthrough. Constructing the virtual scene is the foundation of the whole walkthrough system; the quality of the models directly affects the realism of the scene and the performance at run time.

This work uses the Maya software for modeling, and the resulting models are highly realistic.

Scene construction in a virtual walkthrough system mainly relies on geometric modeling; according to the requirements, several methods such as polygon modeling and surface modeling are combined.

Before formal modeling begins, map data for the whole scene must be obtained to determine which buildings are needed and where each building is located.

Here the data were collected mainly from photographs and video. Photographs, being high in resolution and static, are usually used to capture the detail of the scene and also serve as the main reference for texture mapping.

Video covers a wider range and is better suited to recording the relative positions of buildings.

During modeling, the scene can be divided into several modules with a clear hierarchy of importance: key buildings are modeled in fine detail, while secondary buildings can be modeled coarsely, proceeding layer by layer and block by block with the powerful modeling and editing tools provided by Maya.

Parts that require fine modeling should use precise geometry as far as possible, while coarse parts can be built from geometry with fewer faces, striving to achieve the desired effect with the fewest polygons.

However, redundant polygons often appear in the finished models; they not only increase the face count but also cause the image to flicker during the walkthrough.

Foreign Literature Translation: Research on Robot Motion Simulation Based on Virtual Reality Technology

Index Terms – Robot; VRML and Java; Kinematics; Motion Simulation
With the rapid development of computer technology and network technology, virtual manufacturing (VM) has become an emerging technology that carries out manufacturing activities with computerized models, simulations and artificial intelligence instead of real objects and their operations. It optimizes products in the manufacturing process by predicting the manufacturing cycle and promptly modifying the design [1][2]. Virtual reality is a very important supporting technology for virtual manufacturing because it is an important means of virtual design and production. With the appearance of Java and VRML technology, robot motion simulation in a Web browser has become feasible [3][6]. Robot motion simulation is a necessary research direction in the robotics field. Guo [4] adopted the ADAMS software for 3-RPC robot motion simulation, but that method is not suitable for motion simulation on the Internet. Yang [3] and Zhao [5] studied robot motion simulation with VRML, but did not optimize the VRML file. Bo [8] provided environmental simulation within a virtual environment through Java and VRML interaction via the External Authoring Interface. Qin [9] studied a novel 3D simulation modeling system for distributed manufacturing. In this paper, the CINCINNATI robot is modeled and analysed with VRML and Java, and the VRML file is optimized for transmission over the Internet. The 3D visualization model of the robot motion simulation is realized with VRML and Java based on virtual reality. Research on Internet-based robot motion simulation is therefore significant and of practical value.
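The VRML-and-Java approach summarized above typically works by giving the robot's joints named (DEF'd) Transform nodes in the VRML scene; a Java applet can then obtain these nodes through the External Authoring Interface and send rotation events computed from the kinematics. The following is a minimal sketch of the VRML side of such a model; the joint names, dimensions and geometry are hypothetical and not taken from the paper.

```
#VRML V2.0 utf8
# Two-link arm: an external Java applet can look up JOINT1 and JOINT2
# via the EAI and send set_rotation events to animate the joints.
DEF JOINT1 Transform {
  children [
    Shape {  # upper arm (illustrative proportions)
      appearance Appearance { material Material { diffuseColor 0.7 0.7 0.7 } }
      geometry Cylinder { radius 0.1 height 1 }
    }
    DEF JOINT2 Transform {
      translation 0 1 0   # forearm attached at the end of the upper arm
      children Shape {
        appearance Appearance { material Material { diffuseColor 0.4 0.4 0.4 } }
        geometry Cylinder { radius 0.08 height 0.8 }
      }
    }
  ]
}
```

Because each link is nested in its parent joint's coordinate system, rotating JOINT1 moves the whole arm while rotating JOINT2 moves only the forearm, which is exactly the kinematic chain a motion-simulation applet needs to drive. Optimizing the file for transmission then largely amounts to reducing polygon counts and reusing DEF'd nodes.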

Virtual Reality and Facilities Management: Foreign Literature Translation (Chinese and English), 2018


Virtual reality as integration environments for facilities management

Abstract
Purpose – The purpose of this paper is to explore the use of virtual reality environments (VRE) for maintenance activities by augmenting a virtual facility representation and integrating relevant information regarding the status of systems and the space itself, while providing simple ways to control them.
Design/methodology/approach – The research focuses on the implementation of a VRE prototype of a building management system using game engine technologies. To evaluate the prototype, a usability study was conducted that contrasts the virtual reality interface with a corresponding legacy application, showing the users' perception in terms of productivity improvement of facilities management (FM) tasks.
Findings – The usability tests conducted indicated that VREs have the potential to increase productivity in maintenance tasks. Users without training demonstrated a high degree of engagement and performance operating a VRE interface, compared with a legacy application. The potential drop in user time and increase in engagement with a VRE will eventually translate into lower cost and an increase in quality.
Originality/value – To date, no commonly accepted data model has been proposed to serve as the integrated data model to support facility operation. Although BIM models have gained increased acceptance in architecture, engineering and construction activities, they are not fully adequate to support data exchange in the post-handover (operation) phase.
The presented research developed and tested a prototype able to handle and integrate data in a flexible and dynamic way, which is essential in the management activities underlying FM.
Keywords: Information systems, Simulation, Integration, Decision support systems, Information and communication technology (ICT) application

Introduction
Facilities management (FM) aims at creating and maintaining an effective built environment in order to support the successful business operation of an organization (Cotts et al., 2010). The complexity and professionalism underlying modern FM compel practitioners to adopt distinct computerized tools, helpful in automating routine tasks, managing information, monitoring the building's performance and assisting in decision-making processes (Abel and Lennerts). Currently, it is amply recognized that information technology (IT) plays a critical role in the efficiency, both managerial and operational, of FM (Madritsch et al., 2008; Elmualim and Pelumi-Johnson, 2009; Lewis and Riley, 2010; Svensson, 1998; Love et al., 2014; Wetzel and Thabet, 2015).
In its essence, FM is a multidisciplinary subject that requires the collaboration of actors with expertise from different fields (Cotts et al., 2010; Lewis and Riley, 2010). Within their specific areas of responsibility, they have to interact with distinct IT tools. Managerial roles are likely to interact with computer-aided facility management (CAFM) systems and computerized maintenance management systems (CMMS), employed to manage the characteristics of space and equipment, while operational roles are more likely to interact with building management systems (BMS) and energy management systems (EMS) used to manage live, i.e. real-time, information regarding the space and equipment (Lewis and Riley, 2010). Issues in FM also have to be analyzed from different perspectives.
Therefore, this arrangement requires that information from different tools be brought together to enable a systematic and thorough analysis through data visualization and dashboards (Chong et al., 2014). It has been observed that the costs inherent to the lack of communication and data integration for an existing buildings portfolio (information verification costs, delays, and operation and maintenance staff productivity loss) are very significant for an organization (Shi et al., 2016). Better IT support for integrating information translates into faster, more effective and just-in-time FM (Svensson, 1998).
The subject of integration in IT for FM is not new (IAI, 1996; Howard and Björk, 2008) and has historically been difficult (IAI, 1996; Elmualim and Pelumi-Johnson, 2008). Research and industry practice have developed standards for information exchange to address the interoperability of IT tools in FM (IAI, 1996) and suggested maintaining integrated databases of facility-related information (Yu et al., 2000), thus creating a framework where different IT tools become components of the same information system – a facilities management information system (FMIS) (Dennis, 2003; Mozaffari et al., 2005). Moreover, it has been acknowledged that the data required to perform certain actions are not up to date and are delivered in different formats (Migilinskas et al., 2013; Nicał and Wodyński, 2016).
More recently, advanced interfaces such as virtual reality environments (VREs) are emerging as a sophisticated and effective way of rendering spatial information. Such has been the case of the tools used in architecture, engineering and construction (AEC) (Campbell, 2007), and in other specific activities like manufacturing planning (Doil et al., 2003) and industrial maintenance (Sampaio et al., 2009; Fumarola and Poelman, 2011; Siltanen et al., 2007). However, the user interfaces of IT tools for FM are not yet taking advantage of the recent advances in VREs.
As we will discuss, the typical user interface of CAFM, CMMS and BMS tools lacks the spatial dynamism and natural interaction offered by a VRE, with noticeable impacts on user productivity.
In this paper we argue that VREs, due to their characteristics, could improve both the visualization of and interaction with integrated information within an FMIS and are, therefore, beneficial for performing FM tasks. To validate this argument, we developed a prototype implementation of a VRE for assisting maintenance activities in the building automation and energy domains. The FM3D prototype augments a virtual facility with information regarding the space characteristics as well as the location, status and energy consumption of equipment, while providing simple ways to control it. Unlike previous applications of VR to FM that rely on CAD (Coomans and Timmermans, 1997) or VRML (Sampaio et al., 2009; Fu et al., 2006) for scene generation, we take advantage of recent game engine technologies for fast, real-time rendering of feature-rich representations of the facility, along with space information, equipment conditions and device statuses. A user evaluation study was conducted to determine the adequacy of a VRE approach to visualize and interact with integrated FM information toward a responsive intervention. This study compares a VRE interface applied to a building management system with a corresponding legacy application.
The remainder of our text is organized as follows. Section 2 discusses advanced visualization for FM data integration challenges and emphasizes the opportunities for VREs and new IT tools for FM. Section 3 describes the research methodology. Sections 4 and 5 describe the prototype development and evaluation procedures. Section 6 presents the results of the paper.
Finally, Section 7 presents the conclusions.

Advanced visualization for FM data integration
Integrated rendering of spatial information is crucial for perceiving complex aspects that arise from combining data from multiple sources, and for creating new insights – for example, combining cost data with occupancy information and energy consumption. In FM, integrating information is crucial to create a more complete and faithful model of reality toward an accurate diagnosis and effective response. The integration of spatial information should not be left to the users' ability to mentally integrate different models, lest decisions be hindered by that inability. However, integrated visualization is quite limited in FM. As mentioned before, integration between tools is limited, and the few tools that support it do not offer effective means to manage the overlaying of information. Presently, tools from different vendors display information using different visual elements and layouts, causing users acquainted with one tool to find it difficult to interpret data on another.
The problem of creating an advanced data visualization solution for FM data integration is twofold. It is necessary first to correctly integrate data from multiple sources into a unified model, and second to create an environment that provides 3D data visualization and real-time interaction with the built environment.

Limitations of data integration in FM
FM requires integrating large quantities of data. It has been argued that CAFM systems greatly benefit from integrating data from CMMS, BMS and EMS systems both at a data level and at a graphical level (May and Williams, 2012). Yet, despite a few localized integration possibilities (Malinowsky and Kastner, 2010), current BMS and EMS do not adequately integrate data regarding space characteristics.
Notably, some tools support space layout concepts of floor and room in 2D static plans (Lowry, 2002), or even details regarding equipment, but these data live isolated in each tool's database without any relationship (i.e. integration) with the CAFM system. Such a connection is important to explore further characteristics of the space, such as which areas are technical areas or circulation areas. On the other hand, CAFM systems would greatly benefit from real-time information regarding energy utilization, the status of environment variables and equipment status, enabling an understanding of how space and equipment are being used.
To date, no commonly accepted data model has been proposed that is comprehensive enough to serve as the integrated data model to support facility operation. Indeed, it has been noted that interoperability among tools from different vendors is still very ad hoc (Yu et al., 2000). Although BIM models have gained increased acceptance in AEC, they are not adequate to support data exchange in the post-handover (operation) phase. For example, BIM models offer no provision to handle trend data, which is essential in the management activities underlying FM. Moreover, BIM standards do not handle well the data models used by BMS and EMS tools (Yu et al., 2000; Gursel et al., 2007). Another aspect is that querying data in these models is often quite complex for the average user, given the large number of entities that must be taken into account (Weise et al., 2009). Therefore, querying must be mapped onto seamless graphics operations to be performed using different metaphors (e.g.
for aggregation, filtering and ranking operations).

Advanced facility management interfaces
The idea of extending the functionality of a standard management tool to handle both facility management and building control networks is essential in practice and can be achieved by integrating CAFM systems with BMS to obtain a unified control software utility (Malinowsky and Kastner, 2010; Himanen, 2003). This integration grants the ability to automatically monitor and visualize all building areas by illumination, occupation or other spatially located variables and manage them accordingly. For instance, one could visualize the electrical power consumption of the different building areas and improve efficiency, consequently reducing power costs.
Also, from an interaction perspective, such software should be complemented with CAD representations. In fact, CAFM systems integrated with CAD have proven most effective (Elmualim and Pelumi-Johnson, 2009). Autodesk has recently announced the Dasher project, which aims at using 3D to explore energy data (Autodesk, 2013). The project proposes to build on Revit BIM to integrate energy data; the energy data must be stored elsewhere, in what must be a proprietary BIM model.

3D interfaces for facility management
Activities such as inspecting the space for the location of an asset, inspecting the status of equipment, or analyzing the energy consumption profile of the space along with its cost and occupancy information are examples of queries that have an underlying spatial dimension. Overall, most IT tools for FM have to manage spatial information, which can be visualized more effectively when rendered in a graphical representation (Karan and Irizarry, 2014; Zhou et al., 2015). Graphical rendering accomplishes instantaneous identification of the space reality along with the relationships of the elements therein, encouraging a fast response.
Historically, planimetric CAD drawings and geographical information systems (GIS) have been used as an effective way to display and manage spatial information related to facilities (Schürle and Boy, 1998; Rich and Davis, 2010). GIS are especially effective at presenting visual representations of spatial data, aiming at a more efficient analysis (Rivest et al., 2005). In building control, a GIS application can be used to better manage a building by improving information access and bringing clarity of planning to the decision-making process (Alesheikh et al., 2002). There are some well-known cases of successful GIS implementations in large facilities, such as university campuses [1]. One advantage of a GIS with 3D modeling for building control is that it enables 3D information queries, spatial analysis, dynamic interaction and spatial management of a building (Keke and Xiaojun).

Problem definition
The problem definition stage encompasses a literature review and an exploratory study toward the definition of the problem to be solved and the scenarios to be tested. Since dealing with the full complexity of FM is infeasible in practice, this stage was particularly relevant to support the definition of a conceptual model of the prototype tool developed to validate our hypothesis.

Prototype development
In the prototype development stage, attention must be given to the data integration architecture, the user interface and the interaction layer, which are implemented using a modular approach that allows easy adaptation to different technologies. The approach, depicted in Figure 1, uses a web-based interface, thus providing access to FM information on a multitude of platforms, from mobile devices to desktop computers. Since interaction can be performed online, visualization and interaction can be achieved on different platforms.
This interface is supplied through a 3D visualization engine developed in Unity 3D that relies on data visualization and integration micro-services supplied by the FM3D application components.

Evaluation
The evaluation stage compares an existing legacy system with the VRE approach embodied in the prototype. The evaluation process consists of a comparative study that contrasts the 3D interface applied to the centralized control of a building automation system with a corresponding legacy application. The main goal of the evaluation is to investigate the reliability and possible benefits of 3D virtual environments for automated buildings by performing a quantitative as well as a qualitative analysis of both systems through user interaction test sessions.
In this sense, several tests are run with distinct types of participants, comparing the prototype with an existing legacy application for centralized control and monitoring that features a traditional 2D window-icon-menu-pointer interface. To this aim, the legacy application interface depicted in Figure 3 is used, which is already installed and working in the test pilot building. The comparison proceeded along two testing stages, the early prototype stage and the final prototype stage. With the early prototype stage we intended to get a first perspective of how users would react to our 3D interface. At this time, all main functionalities were already implemented; therefore, the feedback gathered from this phase not only helped infer possible adjustments to our final prototype, but also provided good preliminary quantitative and qualitative results on the prototype's main functionalities.
Both stages of evaluation are structured by the following steps: a pre-test questionnaire to establish the user profile; a briefing about test purposes and the task description, preceded by a short training session where users freely explored each application for three minutes; and a post-test questionnaire after completing a set of pre-determined tasks in each application. This structure is meant to ensure an even test distribution across the applications. It should be mentioned that in the second phase two more tasks were included, to be tested only with the FM3D prototype. These new tasks evaluate functionalities not currently available in the legacy application.
During task execution, we measure the time each user takes to complete each task in each application. If a task is not completed after three minutes, it is considered incomplete. From these data we are able to perform a quantitative comparison between the two applications. The post-test questionnaire contains direct questions related to the user experience, with special emphasis on the difficulties users faced during task execution, to enable a qualitative analysis.

User interface
While the lower layers are generally important, in this paper our main concern is the system's user interface (Figure 4). Therefore, we will focus our attention on the upper layer of the architecture. As a consequence of using a VRE in a web browser, our solution offers a powerful, yet easy, way to supervise and control small, medium and large facilities. The user interface was developed in the Unity 3D game engine, which enables users to interact with a VRE from within an internet browser.
Using simple controls, the user can explore the building, inspecting and commanding several devices. To assist the user in navigating through the 3D model, our interface offers two distinct views of the building simultaneously: the main view and the mini-map view. The main view is where most interaction will occur.
This view allows the user to navigate in the building, from a global viewpoint to detailed local exploration. Navigation in the scene is controlled by the navigation widget, located in the rightmost part of the view. This widget offers rotate, pan and zoom functionalities. The left-hand side of the main view presents the control area, which consists of a set of controls offering important filtering functionalities. Through these controls, the user is able to select which types of devices and sensors should be shown or hidden in the visualization, as well as enable or disable navigation aids, such as the orientation guidelines. Additionally, through a text box the user is able to search for a given room just by typing its name.
The mini-map consists of a small view of the complete building, located in the bottom left corner of the screen. It allows users to have a complete view of the building and perceive which part is displayed in the main view. Most important, it offers additional navigation control. Dragging the mouse over the mini-map rotates the miniature view of the building around the vertical axis. If the user chooses to lock this with the main view, changes in either will be reflected in both. The mini-map view also offers a fast and easy way to change the active floor.

Interaction details
To minimize visual complexity, only one floor at a time is rendered in the main view, the so-called active floor. The user selects which floor should be activated through the mini-map or from a specific control in the navigation widget. The selected floor is initially rendered in the main view with only the walls, and no devices or sensors shown. The user can then select which categories of sensors and devices should be displayed. In the current version of our prototype, the available categories are lighting, HVAC, temperature and doors.
Using the navigation widget, the user can navigate to the desired space in the building to inspect it.
When the view gets closer to a room, additional information is depicted, ensuring that the user is not overloaded with unnecessary information. Where possible, the information is shown pictorially: for example, HVAC information is represented as gas clouds whose color, size and speed convey the current status of the device, as illustrated in Figure 5. When the user clicks on a device, a pop-up appears showing additional information and allowing the user to control the device. The content of this pop-up window depends on the category of the device; obviously, the information and controls associated with an HVAC device differ from those associated with a lighting system. Figure 6 shows the information window for a light. In this case, besides the on/off state of the light, which can be changed by clicking the corresponding button, additional information is shown: at a glance the user can grasp the lamp type, its power, the number of starts, the total operating hours and its estimated lifetime. If necessary, the user can mark the lamp for replacement or consult informative notes associated with it.

The FM3D interface was designed to be easy to use while offering a complete set of functionality, thus allowing even inexperienced users to perform maintenance activities with it. To verify this assumption we organized a formal evaluation of the FM3D system involving real users.

Discussion

Regarding building operators' perception of the FM3D prototype, the results show that users found that a 3D representation of the built environment facilitates both navigation and relating information to the location it pertains to. Although this is an encouraging result, the participants' familiarity with the built environment should be taken into account; 3D representations of very large built environments may not have these advantages.
In that case it might be necessary to consider other design measures to help users clearly identify areas and spaces, for example overlaying the 3D representation with a photo-realistic view (by analogy with the well-known Google Maps Street View perspective).

In terms of ease of use, although users found FM3D easier to use than the legacy application, they had some difficulty locating certain information. Specifically, because information is shown according to the aggregation level, it was not always easy to see where to find a particular piece of information. This can be an obstacle in buildings with many layers or integrated spaces, which require users to perform many zoom operations to reach the information. Regarding learnability, users found the FM3D interface easier to learn for navigation, command and information-retrieval functionality; this is underlined by the quantitative results, especially those of the advanced participant. In terms of satisfaction, participants found the FM3D prototype superior to the legacy application both in usefulness and in its ability to improve task performance.

Through the usability tests we conducted, we have reason to believe that 3D interactive environments have the potential to significantly increase productivity in maintenance tasks. In these tests, users without training demonstrated a high degree of engagement and performance operating our 3D interface prototype, especially compared with the legacy application.
The potential decrease in task time and increase in engagement with a 3D environment could eventually translate into lower cost and higher quality, potentially making 3D-based interfaces the option of choice in future IT tools for BMS.

Conclusions

FM activities are increasingly supported by IT tools, and their effective usage ultimately determines the performance of the FM practitioner. In this paper we argued that the usability of IT tools for FM suffers from a number of limitations, mostly related to the lack of true integration at the interface level and to inadequate handling of spatial information. Moreover, their steep learning curve makes them unsuited for inexperienced or non-technical users. We then proposed VREs as a solution and validated our hypothesis by implementing FM3D, a prototype VRE for the monitoring and control of buildings, centered around the requirements of FM activities with respect to integration, visualization and interaction with spatial information. This work corroborates literature reports of a performance advantage of VREs over traditional interfaces and shows that new approaches to interacting with spatial information are not only feasible but also desirable. The usability tests we conducted indicate that VREs have the potential to greatly increase productivity in maintenance tasks: users without training demonstrated a high degree of engagement and performance while operating a VRE interface, compared with a legacy application. The potential drop in task time and increase in engagement with a VRE may eventually translate into lower cost and higher quality, potentially making VRE-based interfaces the option of choice in future IT tools for FM. The major contribution of this paper is to demonstrate that VREs have a low barrier to entry and the potential to replace existing legacy BMS user interfaces.
Additionally, it showed that users regard VREs as a natural next step for interaction with FM systems. In our approach it remains unclear to what extent integration at the interface level contributes to the increase in user productivity; presumably, not all maintenance activities benefit in the same way from an approach such as the one we propose. As future developments, additional studies should therefore aim to establish which aspects of a VRE interface benefit which maintenance activities, considering different interfaces (web-based and on mobile devices). These studies should begin by mapping the information needs of each activity and then assess existing FM tool interfaces and the VRE prototype against them. Moreover, since our evaluation is based on a narrow mix of FM tasks, further studies are required to establish a causal relationship between the use of VREs in FM and increases in productivity, especially for tasks involving multiple tools.

[Chinese abstract, translated] Virtual reality as an integrated environment for facility management. Purpose – The purpose of this study is to explore the use of virtual reality environments (VRE) for facility maintenance activities, by augmenting a virtual facility with relevant information about the state of its systems and the space itself, while providing simple means to control them.
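The per-task timing rule used in the evaluation above (each task timed per user per application, with a three-minute cutoff after which the task counts as incomplete) can be sketched as follows. The `summarize` helper and all the numbers are illustrative, not data from the study.

```python
TIMEOUT_S = 180  # the three-minute cutoff used in the evaluation

def summarize(times_s):
    """Summarize raw task durations (seconds) for one application:
    tasks over the cutoff are counted as incomplete and excluded
    from the mean completion time."""
    completed = [t for t in times_s if t <= TIMEOUT_S]
    return {
        "n": len(times_s),
        "completed": len(completed),
        "incomplete": len(times_s) - len(completed),
        "mean_time_s": sum(completed) / len(completed) if completed else None,
    }

# Hypothetical timings for the same five tasks in both applications:
fm3d = summarize([42, 65, 51, 200, 38])     # 200 s exceeds the cutoff
legacy = summarize([95, 130, 185, 210, 77])  # two tasks timed out

print(fm3d["incomplete"], legacy["incomplete"])  # 1 2
```

Comparing the resulting means and incomplete counts per application is exactly the quantitative comparison the protocol describes; the qualitative side comes from the post-test questionnaire.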

Graduation-thesis translation: the development process and research status of virtual reality technology (suitable for thesis translation, with Chinese-English comparison)

The development process and research status of virtual reality technology. Virtual reality is one of the fastest-developing technologies of recent years; together with multimedia technology and network technology it is counted among the three most promising computer technologies.

As with other high technologies, objective demand is the driving force behind the development of virtual reality technology.

In recent years, fields such as simulation modeling, computer-aided design, visual computing and teleoperated robotics have voiced a common requirement: to build input/output systems more intuitive than today's computer systems — friendlier human-machine interfaces connected to all kinds of sensors, multidimensional information environments in which people can immerse themselves, rise above, enter and leave freely, and interact.

VR technology is an integration of artificial intelligence, computer graphics, human-machine interface technology, multimedia technology, network technology, parallel computing and several other technologies.

It is an advanced human-computer interaction technology that effectively simulates how people see, hear and move in natural environments.

Virtual Reality is the most effective advanced human-computer interaction technology for simulating human visual, auditory and motor behavior in natural environments. It emerged in the 1990s as the newest technology in computing, synthesizing computer graphics, multimedia technology, parallel real-time computing, artificial intelligence, simulation technology and several other disciplines.

Through simulation, VR creates for the user a three-dimensional image world that reflects in real time the changes in, and interactions between, physical objects. Through lifelike visual, auditory, tactile and olfactory experiences, participants can directly explore the role and evolution of virtual objects in their environment, as if placed inside a virtual real world, producing immersion, imagination and interactivity.

Every step in the development of VR technology has advanced around these three characteristics.

These three characteristics are immersion, interactivity and imagination.

These three key characteristics distinguish VR from neighboring technologies such as multimedia and computer visualization. Immersion means that in the virtual world provided by VR, the user feels as if genuinely entering an objective world. Interactivity requires that the user can observe and manipulate the entities of the virtual environment in ways familiar to humans. Imagination means "obtaining perceptual and rational knowledge from a qualitatively and quantitatively integrated environment, thereby deepening concepts and giving rise to new ideas."

1. Three stages in the development of VR technology. The development of VR can be divided roughly into three stages: the 1950s to the 1970s, the preparatory stage; the early to mid 1980s, when VR technology became systematized and began to leave the laboratory for practical application; and the late 1980s to early 1990s, a stage of rapid development.

Foreign-language translation on virtual reality games (Chinese-English, 2019-2020)

Virtual reality games on accommodation and convergence. Zulekha Elias, Uma Batumalai, Azam Azmi.

Abstract: The increasing popularity of virtual reality (VR) gaming is causing growing concern, as prolonged use induces visual adaptation effects which disturb normal vision. The effects of VR gaming on the accommodation and convergence of young adults were measured by recording accommodative response and phoria before and after the virtual reality experience. An increase in accommodative response and a decrease in convergence were observed after immersion in VR games, and visual symptoms were apparent among the subjects after VR exposure.

Keywords: Virtual reality, Accommodation, Accommodative response, VAC, Phoria

1. Introduction

Virtual reality (VR) is a simulated environment in which the visual content, and possibly other senses, are entirely computer-generated, and the participant's actions alter the state of the environment. The visual stimulus and other sensory channels such as touch, smell, sound and taste are presented by a combination of virtual and augmented reality systems (Rebenitsch and Owen, 2016). Virtual reality has developed rapidly in recent years, particularly VR headsets, which are used by attaching a smartphone containing the VR game and mounting it on the head, thus providing users with a virtually immersive experience (Desai et al., 2014). The current study uses a VR game as the stimulus, as games are perceived to be more appealing to the user and enhance immersion; furthermore, players show a higher anxiety level, which would enhance their post-VR-gaming response (Pallavicini et al., 2018).
VR gaming blocks out the external environment while promoting sensory immersion through the enlarged field of view (FOV) of the VR headset, providing users with a greater immersion experience (Martel and Muldner, 2017). The accommodation and vergence systems are reflexively linked, interacting through accommodative vergence and vergence accommodation: accommodation is stimulated by retinal blur, whereas vergence is stimulated by depth (Hung, 2001). Accommodation and convergence operate simultaneously to enable normal binocular vision, and a disruption in one system can affect the other (Shiomi et al., 2013). The demand that VR exerts on the accommodation and vergence systems results in a reduction in visual performance due to the ocular discomfort experienced (Barnes, 2016). Moreover, discomfort in stereoscopic viewing is caused by the need for quick adaptation of the vergence system despite the conflicting accommodation system (Hoffman et al., 2008; Lambooij et al., 2009). Studies have found a significant effect of VR on accommodation and convergence (Mon-Williams et al., 1993; Kooi and Toet, 2004; Rebenitsch and Owen, 2016), caused by a disruption of how these two systems work together. Shiomi and his colleagues found a mismatch between accommodation and convergence that resulted in complaints of visual fatigue after users had been immersed in the VR world for a period of time (Shiomi et al., 2013). This paper investigates how the accommodative and convergence systems are affected by using a VR headset for a period of time.

2. Methods and materials

2.1. Subjects

Thirty-four subjects participated in this study, 21 male and 13 female, with ages ranging from 18 to 28 years and a mean age of 23.
All subjects had a distance visual acuity of 6/6 or better (21 were spectacle wearers); normal color vision (correct identification of all plates of the Ishihara 24 Plates Edition©); stereo acuity of 50 seconds of arc or better on the TNO plates; a near point of accommodation within an estimated range of at least 12.5 cm and a near point of convergence with break at 5–7 cm and recovery at 7–9 cm; and horizontal phoria ranging from 1Δ esophoria to 3Δ exophoria at distance and from 0 to 6Δ exophoria at near.

2.2. Instrumentation

The accessory used in this research was the VR Shinecon headset with adjustable inter-pupillary distance, as shown in Fig. The headset provided a field of view of 90–110° with a 360° panoramic view. The focal power of the VR Shinecon® lenses was approx. 16 D on both sides, and the disparity was achieved by the offset of the display on the phone. The focal distance of the VR setup lay in a range of approx. 55–75 cm. A smartphone, a Lenovo K6 Power with dimensions 141.9 × 70.3 × 9.3 mm and a 5.0-inch screen, was attached to the headset, which was then mounted on the subject's head. The screen was set to 50% brightness. The VR game Galaxy Wars, available on the Google Play Store, was used as the game simulator, as it offers an intense, continuous-motion combat gaming experience. The content varies significantly, with the nearest virtual plane at 3 m and planes up to 500 m away; the sky box (the largest content) placed the furthest virtual plane at about 3000 m. The illumination of the game display was in the range of 0.4–3.9 lux.

2.3. Procedure

All subjects played the game Galaxy Wars for 30 min. The lights in the test room were switched off (approx. 2.5 lux) to avoid reflections, and the subjects were seated on a rotating stool to aid movement. Prior to the VR simulation, accommodative response and horizontal and vertical phoria at distance and near were measured. A phoropter, under good illumination (approx.
572 lux), was used to conduct the Fused Cross Cylinder (FCC) test to measure the accommodative response. The target was a cross-hatch chart at 40 cm. Cross-cylinder lenses of ±0.50 D, with the minus axis set at the vertical meridian, were presented binocularly in front of the subject's eyes. If the horizontal lines were initially reported clearer, spherical lenses of +0.25 D were added binocularly until the vertical lines became clearer or the lines of both meridians were equally clear; this indicates a lag of accommodation. If, instead, the vertical lines were reported clearer when the FCC was first presented, spherical lenses of −0.25 D were added binocularly until the horizontal lines became clearer or both meridional lines were equally clear; this indicates a lead of accommodation. Vergence stability was measured using the horizontal and vertical phoria tests at 6 m and at 40 cm. The tests were carried out with a Maddox rod, a high-powered cylindrical lens that prevents fusion by turning a white point source of light into a thin red line. Convergent visual axes produce esophoria and divergent axes exophoria. The test was carried out in darkness, with the Maddox rod in front of the right eye and the white point source shone into the left eye at 40 cm. Distance phoria was measured by placing the Maddox rod in front of the right eye and shining the point source onto a mirror at 3 m. Subjects reported the position of the red line relative to the white point source: if the line and the dot of light coincide there is no phoria; if the line lies to the right of the dot it is esophoria; and if it lies to the left it is exophoria. A prism bar was then added in front of the eye until the line and the dot coincided, giving the phoria value. Pre and post phoria values were compared to determine any change in convergence.
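Phoria values measured this way feed into the calculated AC/A ratio. The paper does not state which AC/A method was used; the sketch below assumes the common "calculated AC/A" formula, with esophoria counted positive and exophoria negative, and all numbers are illustrative.

```python
def calculated_ac_a(pd_cm, near_dist_m, dist_phoria, near_phoria):
    """Calculated AC/A ratio (prism diopters per diopter):
    AC/A = PD (cm) + near fixation distance (m) * (near phoria - distance phoria),
    with esophoria positive and exophoria negative."""
    return pd_cm + near_dist_m * (near_phoria - dist_phoria)

# e.g. a 6 cm pupillary distance, a 40 cm near target,
# 1 prism diopter exophoria at distance and 4 at near:
print(calculated_ac_a(6.0, 0.4, -1.0, -4.0))  # 4.8
```

An exo-shift at near (a more negative near phoria) lowers the calculated ratio, which is the direction of change the study reports after VR exposure.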
The sequence of measuring accommodative response and phoria was randomized to avoid bias. The Accommodative Convergence to Accommodation (AC/A) ratio was then calculated to observe the relationship between the two systems (accommodation and vergence). Immediately after the 30 min of VR exposure, the accommodative response and the change in vergence status were re-measured with the same FCC and Maddox-rod methods, following the same procedure; taking the accommodative response and phoria measurements after the VR immersion took approx. 5 min. The AC/A ratio was also recalculated for each subject. Subjects were asked to report any feelings of discomfort, such as nausea, headache or dizziness.

3. Results and discussion

A paired t-test was used to independently analyze the mean pre and post accommodative response, the horizontal and vertical phoria at distance and near, and the AC/A ratio. There was a significant difference between the pre and post mean values of accommodative response [t(33) = 2.72, p < 0.05] (Table 1). The pre-post difference in horizontal phoria was significant both at near [t(33) = 4.42, p < 0.05] and at distance [t(33) = 5.17, p < 0.05] (Table 2). A Wilcoxon signed-rank test revealed no statistically significant difference in the median errors of vertical phoria at distance [z = −1.73, p > 0.05] or near [z = 0.81, p > 0.05] (Table 3). There was a significant change in the mean AC/A ratio between the pre and post VR gaming session [t(33) = 2.489, p < 0.05] (Table 4). Fig. 3 shows the frequency of participants experiencing visual symptoms after playing VR for 30 min. This paper investigated the mean errors in accommodative response and the status of vergence, through phoria and the AC/A ratio, after using the VR headset for 30 min. The findings demonstrate an increase in accommodation and changes in vergence status.
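The paired pre/post comparisons reported above can be reproduced with a basic paired t-test: the statistic is the mean of the per-subject differences divided by its standard error, with df = n − 1 (33 in the study's 34-subject sample). The data below are synthetic, since the study's raw measurements are not given; only the procedure matches.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t-test: t = mean(d) / (sd(d) / sqrt(n)), df = n - 1,
    where d are the per-subject pre-post differences."""
    d = [a - b for a, b in zip(pre, post)]
    n = len(d)
    return mean(d) / (stdev(d) / sqrt(n)), n - 1

# Hypothetical pre/post accommodative responses (diopters) for 8 subjects:
pre = [0.50, 0.25, 0.75, 0.50, 0.25, 0.50, 0.75, 0.25]
post = [0.25, 0.00, 0.50, 0.50, 0.00, 0.25, 0.50, 0.25]
t, df = paired_t(pre, post)
print(f"t({df}) = {t:.2f}")  # t(7) = 4.58
```

The non-normally distributed measure (vertical phoria) instead used the non-parametric Wilcoxon signed-rank test, which ranks the same per-subject differences rather than averaging them.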
The accommodative response values indicate an increased lead of accommodation after VR exposure, suggesting that after a short period of VR gaming the eyes' response to accommodative targets was greater. Human accommodative response more commonly shows a lag at near, meaning the eyes do not accommodate fully to a stimulus presented at a near distance. As found in this study, however, the disparity of the stereoscopic images on the VR unit increased binocular disparity, inducing accommodative convergence that exceeds the physiological accommodation lag and results in a lead of accommodation, similar to the findings of Iwasaki et al. (2009). Turnbull and Phillips (2017) report a minimal effect on the binocular vision system after 40 min of exposure to a VR HMD compared with the real-world equivalent task: the dissociated position of the eyes was not affected by the accommodative demand at either distance or near, implying no accommodative fatigue. This could be due to the stimulus — an outdoor island environment in which participants searched for treasure, and an indoor cabin with a documentary playing on a wall-mounted television — both less intense tasks than the combat game used in the current study. However, a thought-provoking finding of Turnbull and Phillips (2017) may indirectly agree with the current study: the changes in choroidal thickness. The significant increase in choroidal thickness after VR exposure suggests that a lead of accommodation did occur even in a non-intense VR experience, though not to the point of visual discomfort, since accommodative errors were not among their major findings suggesting a direct effect of VR immersion. Our results agree with a study by Roberts et al. (2018), which showed that accommodative lag decreases (accommodative lead increases) during near-viewing tasks that require more cognitive effort.
Nevertheless, the fact that our VR gaming task primarily involved distances farther than normal near-viewing tasks, with accommodative stimuli at approximately 3–6 m, cannot be discounted. Their findings, moreover, suggested a significant difference in accommodative response in a child population but not in an adult population, which raises a question about the susceptibility of the accommodative system to visual-cognitive demands. One plausible explanation for our findings is accommodation hysteresis. Sustained exposure to near tasks via the VR headset may trigger accommodative hysteresis: the constant changes of apparent viewing distance in VR may cause the level of accommodative response to be altered according to the apparent stimulus distance, leading to adaptive accommodative hysteresis, which provokes a negative shift (lead) of the accommodative response (Hasebe et al., 2001). The first notable vergence change seen in this study was an exo-shift of the horizontal phoria, with values shifting towards exophoria at both far and near distances. Previous research reported a shift towards exophoria when playing games in 3D and attributed it to the cross-link between accommodation and convergence: a lead of accommodation induces exodeviation (Pölönen et al., 2013). This dynamic relationship between the accommodation and vergence systems is represented by the AC/A ratio, and our study showed that the AC/A ratio was significantly reduced after 30 min of VR exposure. The decrease in the gain of the cross-links between accommodation and vergence may be explained by the fact that, during exposure to the VR games, the subjects were viewing images moving backwards and forwards in depth.
This type of viewing has been found to decrease the gains of the cross-links (Mon-Williams and Wann, 1998), leading to an exo-shift of the horizontal phoria. This study also indicated that the phoria at near was affected more than at distance. A probable explanation is that near responses are dominated by vergence movements due to the short latency period and smaller fixation disparity; the amount of binocular disparity ought to be constrained to certain values to enable comfortable stereoscopic viewing (Bando et al., 2012). As for the vertical phoria, there was no indication of change at either distance, as there was no misalignment in the vertical plane during the use of the VR headsets (Kalich et al., 2004). Vertical vergence adaptation is usually the result of a convergence-dependent gain alteration of the extraocular muscles in the vertical plane, irrespective of the position of the eye in the orbit. A shift of vertical phoria requires a prolonged period of adaptation, as shown experimentally by Schor (2009): after 1 h of alternating fixation between targets separated horizontally as well as vertically, the vertical phoria changed by only 0.5Δ. This points to an underlying adaptive vertical vergence mechanism that maintains the degree of disconjugacy of vertical saccades, whose change may only become observable over a longer period of adaptation (Ygge and Zee, 1996). The accommodation and vergence changes found in this study raise an interesting discussion of the vergence-accommodation conflict (VAC) while using virtual reality devices. The VAC caused by VR gaming arises from conflicting depth cues: the depth cues for the accommodation and vergence systems do not match (Reichelt et al., 2010). As explained by Takatalo et al. (2011), user experiences during 3D gaming differ from those with normal stereoscopic displays.
The feelings of immersion, fun, presence, involvement, engagement and flow accumulate during the experience. Presence, also referred to as spatial presence (IJsselsteijn et al., 2000), which results in perceived realness and governs attention, keeps changing during the gaming experience. Thus the virtual image-plane distance cannot be measured in a straightforward manner; instead one must recognize that during stereoscopic gaming the stimulus's apparent position keeps changing, leading to possible VAC conflicts. Apparent distances are deemed comfortable in a virtual reality display when the content zone of the apparent images falls within 0.5 m–20 m in a 70° field of view (Alger, 2015). However, Shibata et al. (2011) assumed that the maximum and minimum relative widths of the comfort zone were 0.8 D (1.28 m) and 0.3 D (3.33 m). Presumably, VR games use distances different from the assumed comfortable viewing distances — in our case ranging from 3 m to 3000 m — so the VAC may have been aggravated by VR gaming compared with other VR tasks. In addition, the VAC appears to be aggravated by the nature of the viewing, in our case the gaming experience: according to Kim et al. (2014), the VAC caused more difficulty for visual performance when the conflict changed rapidly. When the fixation distance changes rapidly, as in gaming, the offset between the vergence and accommodation stimuli changes constantly, presumably stimulating the phasic component at each step change. While the accommodation depth cue remains static (constant distance to the screen), the vergence depth cues change: the changing angular distance and the varying convergence demands of moving images create a difference in vergence depth cues, contributing to the conflict between the accommodation and vergence systems.
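Diopters and meters in the comfort-zone figures quoted above are reciprocals (D = 1 / distance in m). The sketch below flags apparent distances outside Shibata et al.'s quoted bounds, treated as a flat band — a simplification, since their actual comfort zone varies with viewing distance.

```python
def to_diopters(distance_m):
    """Convert a viewing distance in meters to diopters."""
    return 1.0 / distance_m

def in_comfort_zone(distance_m, near_bound_d=0.8, far_bound_d=0.3):
    """True if the apparent distance falls inside the assumed
    dioptric comfort band (defaults from Shibata et al., 2011)."""
    d = to_diopters(distance_m)
    return far_bound_d <= d <= near_bound_d

print(in_comfort_zone(2.0))     # True: 0.5 D is inside the band
print(in_comfort_zone(3000.0))  # False: the game's far sky box
print(in_comfort_zone(1.0))     # False: closer than the near bound
```

With the game's content spanning roughly 3 m to 3000 m, much of it sits beyond the 0.3 D far bound, which is consistent with the aggravated VAC the discussion describes.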
Our results show that both systems changed after use of the VR headset, indicating a conflicting depth stimulus to both systems in maintaining single, sharp binocular vision. As observed in our study, however, this conflict appeared to be resolved by the dynamic relationship between the accommodation and vergence systems (the AC/A ratio), counteracting the cross-links that attempt to drive vergence to be consistent with accommodation and vice versa (Kim et al., 2014): as the accommodative response changed (an increased lead of accommodation), the vergence response was reduced by about 1 prism diopter. The amplitudes of relative accommodation and vergence cannot act independently, although each system can be slightly out of phase under normal conditions (Rushton and Riddell, 1999). The mismatch in binocular fusion cues contributes to the perceived quality of the VR experience, and inconsistent accommodation and vergence cues are known to cause visual discomfort to VR headset users (Bando et al., 2012). In our study, the majority of the subjects complained of symptoms of motion sickness, such as nausea, headache and dizziness, after the experiment. As explained by Kennedy and his colleagues, such symptoms arise from visually perceived motion in the absence of inertial motion, and the diversity of symptoms of VR use results from individual variation in responses to motion environments (Kennedy et al., 2010). These results correspond with a motion-sickness measurement study in which nausea was the least common complaint and disorientation the most common visual symptom, based on the Virtual Reality Sickness Questionnaire modified from the Simulator Sickness Questionnaire (Kim et al., 2018). The sensory conflict theory states that motion sickness can occur when there are paradoxical cues from the vestibular and visual systems (Hasegawa et al., 2009).
Furthermore, these symptoms occur because of the conflict created by the impression that the world is moving visually while the body hardly moves, and by the time lag in updating the virtual scene after head movements (Falahee et al., 2000). Visual fatigue can be caused by the large amount of motion and parallax during stereoscopic viewing, since the constant motion places an increasing demand on the accommodative and vergence systems to maintain a clear, single image, as well as by stereoscopic images perceived outside the range of the depth of focus (Yano et al., 2002, 2004).

4. Conclusion

The results show that exposure to virtual reality gaming did affect the accommodation and convergence systems. After immersion in virtual reality, subjects exhibited a lead of accommodation — they tended to focus more than required — while convergence receded, with a shift towards exophoria, due to the loss of gain in the AC/A ratio. These errors in accommodation and convergence in turn lead to visual symptoms and discomfort in young adults. Because of these adverse effects of the VAC, a correct setup of VR headsets is important for a comfortable and more pleasurable experience. Future investigations could measure the effect of VR gaming on accommodation and convergence over longer periods of use, rather than limiting the duration to 30 min. They could also involve a wider range of stimuli, instead of a single game each time, to measure the extent of the changes in accommodation and convergence errors, and further investigations could study a child population to observe the effect of VR gaming on accommodation and convergence there.

[Chinese abstract, translated] Virtual reality games on accommodation and convergence. Abstract: The growing popularity of virtual reality (VR) gaming is causing increasing concern, because prolonged use induces visual adaptation effects that disturb normal vision.

Latest translated foreign-language literature on virtual reality
This document contains the latest translated foreign-language literature in the field of Virtual Reality (VR).

Below are some recent translated foreign-language materials on virtual reality, for your reference:
1. Title: "Virtual Reality: Past, Present, and Future"
Author: John Smith
Abstract: This article reviews the history of virtual reality, describes its current state, and offers an outlook on its future.

It discusses VR applications in fields such as education, entertainment and healthcare, and raises some challenges and opportunities related to virtual reality.

2. Title: "Virtual Reality and Its Impact on Society"
Author: Emily Johnson
Abstract: This article examines the impact of virtual reality technology on society.

It discusses VR applications in social interaction, immersive experiences and mental health, and raises some social, ethical and legal questions.

The author argues that virtual reality will profoundly affect our daily life, work and culture.

3. Title: "Virtual Reality in Education: Enhancing Learning Experiences"
Author: Sarah Davis
Abstract: This article examines applications of virtual reality technology in education.

It presents cases from education such as virtual laboratories and virtual field trips, and explains how virtual reality can provide more immersive, interactive and personalized learning experiences.

Please note that the materials above are for reference only; for the specific content and views, consult the original texts.

Simulation experiment report on virtual-reality-based architectural design

I. Experimental background. With the continuing development of technology, virtual reality (VR) is applied more and more widely in architectural design.

VR technology can give designers a more intuitive, immersive design experience and help them understand and evaluate design proposals better.

This experiment aims to explore the effects and advantages of applying VR technology in architectural design, providing a reference for innovation and optimization in architectural design.

II. Experimental objectives. 1. Study how VR technology is applied in the architectural design process and with what effect.

2. Assess the influence of VR technology on designers' creative inspiration and design decisions.

3. Analyze the potential of VR technology to improve the quality and efficiency of architectural design.

III. Equipment and environment. 1. Hardware: a high-performance computer for running the VR software and handling complex graphics computations.

A VR head-mounted display (HTC Vive, Oculus Rift, etc.) providing the immersive visual experience.

Handheld controllers for interacting within the virtual environment.

2. Software: 3D modeling software (such as Autodesk Revit or SketchUp) for creating the building models.

A virtual reality engine (such as Unreal Engine or Unity) for turning the building models into VR scenes.

3. Environment: a dedicated virtual reality laboratory with good lighting and ventilation to ensure the comfort and safety of the experiment.

IV. Procedure. 1. Building model creation: designers use the 3D modeling software to create a three-dimensional model of the building according to the design requirements and concept.

The model includes details such as the building's exterior, its structure and the layout of its interior spaces.

2. Model import and optimization: the finished 3D model is imported into the VR engine and optimized to improve its runtime efficiency and visual quality in the VR environment.

Optimization covers the model's textures, materials, polygon count and so on.

3. VR scene setup: lighting, environmental effects and sound are configured in the VR engine to create a realistic building environment.

At the same time, interactive elements are created, such as doors and windows that open and close and furniture that can be moved, so that designers can operate and experience the scene in the virtual environment.

4. Designer experience and evaluation: designers put on the VR head-mounted display and controllers and enter the virtual building scene.

During the experience, designers can walk freely, inspect every corner of the building and evaluate the soundness and aesthetics of the design from different viewpoints.

Research on a virtual laboratory based on virtual reality technology in a networked environment

…multiple users interact with each other through computers in a single virtual reality environment. Such a system should have the following characteristics: a shared virtual space; realistic behavior of the simulated entities; support for real-time interaction; communication between multiple users; sharing of resources and information; and freedom for users to manipulate the objects in the environment. Distributed virtual reality systems are an important technical foundation for building virtual laboratories.

1.2 Distributed virtual reality development systems

The second layer consists of the tools for generating the various virtual reality objects. The top layer of the virtual development system is the management layer, which — in single-user or networked operation — can start and connect, on demand, the objects described by the object models, forming a virtual reality under given environmental conditions. Having defined the appearance and motion characteristics of a virtual object, one should also define its mass, weight, inertia, surface smoothness or roughness, hardness and shape-deformation modes; these constitute the object's physical model.

2 Building the virtual laboratory

2.1 The concept of a virtual laboratory. A virtual laboratory is an experimental system generated with virtual reality technology and suited to virtual experiments; a virtual laboratory may reproduce an existing real laboratory…

[Source: Journal of Qiqihar University, Vol. 23, No. 1, January 2007]

…reality application toolbox. The virtual laboratory is built on the WTK function library, connected to a supporting platform with a C++ integrated development environment; the system design is shown in Fig. 1.

Fig. 1: Architecture of the virtual experiment system

3 Conclusion

With the reform of the teaching system, laboratory teaching has become an important component of university instruction, but various constraints make it difficult for the experimental teaching environment to meet the requirements of research and laboratory teaching. Building virtual laboratories based on virtual reality technology can support the research work of universities.

How to write a bilingual (Chinese-English) introduction to VR

Virtual reality technology is a computer simulation system that can create, and let its users experience, virtual worlds, using a computer to generate a simulated environment. Below is a brief VR introduction, given in the original in both English and Chinese.

VR introduction (English): Virtual reality technology is an important direction of simulation technology. It is a collection of various technologies, such as simulation technology, computer graphics, human-machine interface technology, multimedia technology, sensor technology and network technology, and a challenging interdisciplinary frontier of research. Virtual reality technology (VR) mainly comprises the simulated environment, perception, natural skills and sensing equipment. The simulated environment consists of computer-generated, real-time dynamic, three-dimensional realistic images. Perception means that an ideal VR system should offer all the senses a person has: besides the visual perception generated by computer graphics there are auditory, tactile, force and motion perception, and even smell and taste — hence the term multi-perception. Natural skills refer to head rotation, eye movements, gestures and other human behaviors: the computer processes data matching the participant's actions, responds to the user's input in real time, and feeds the results back to the user's senses. The sensing devices are three-dimensional interaction devices.

VR introduction (Chinese, translated): Virtual reality technology is an important direction of simulation technology; it is a collection of simulation technology, computer graphics, human-machine interface technology, multimedia technology, sensor technology, network technology and other technologies, and a challenging interdisciplinary frontier discipline and research field.

Indoor navigation and augmented-reality application design based on virtual-reality technology

Virtual reality (VR) is a computer simulation technology that immerses users in a virtual environment created by simulation, giving them the feeling of being physically present.

Augmented reality (AR), in contrast, combines virtual elements with the real world, so that users can see virtual objects in their real surroundings.

Indoor navigation and AR application design based on VR technology is an innovative scheme that uses VR and AR to implement indoor navigation and improve the user experience.

Traditional indoor navigation systems usually rely on floor plans and written directions, but this approach is often unintuitive and of limited effectiveness.

Using virtual-reality technology combined with augmented reality can give users more immersive and intuitive navigation and experience.

First, VR-based indoor navigation offers users a more intuitive way to navigate.

Traditional indoor navigation depends mainly on floor plans and written directions, but for people unfamiliar with a building's layout, understanding and following such directions is not always easy.

VR-based indoor navigation, by contrast, can create a virtual scene in which users intuitively grasp the building's layout and routes, and it can display navigation directions according to their real-time position.

For example, a user can navigate while wearing a VR headset: based on the user's current position and destination, the system shows real-time directions in the headset, making it easier and more convenient to find the target location.
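The routing behavior described above (directions computed from the user's current position and destination) can be sketched as shortest-path search over a graph of rooms and corridors. This is a generic illustration, not any particular product's algorithm; the room names and distances are made up:

```python
from heapq import heappush, heappop

# Indoor map as a weighted graph: nodes are rooms or corridor junctions,
# edge weights are walking distances in meters (illustrative values).
floor = {
    "entrance": {"hall": 5},
    "hall":     {"entrance": 5, "lab_101": 12, "stairs": 8},
    "stairs":   {"hall": 8, "lab_102": 10},
    "lab_101":  {"hall": 12},
    "lab_102":  {"stairs": 10},
}

def route(graph, start, goal):
    """Dijkstra shortest path; returns (distance, list of nodes)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen:
                heappush(queue, (dist + w, nxt, path + [nxt]))
    return float("inf"), []

print(route(floor, "entrance", "lab_102"))
# → (23, ['entrance', 'hall', 'stairs', 'lab_102'])
```

A headset client would re-run such a query whenever the positioning system reports that the user has moved, then render the next leg of the path.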

Second, VR-based indoor navigation can provide a more personalized and customizable navigation experience.

Traditional indoor navigation is usually one-size-fits-all and cannot meet each user's individual needs.

Virtual-reality technology, however, can tailor the content and presentation of navigation to the user's preferences and needs.

Users can choose different navigation modes, styles, and interfaces according to their own tastes, making the navigation process better fit their personal habits and preferences.

For example, users can add background music or voice interaction during navigation to make it more engaging and convenient.

In addition, VR-based indoor navigation can be combined with augmented-reality technology to provide richer navigation and experience features.

Augmented reality can overlay virtual elements on the real scene, so that users see navigation directions and related information directly in their real environment.

Using AR glasses, a phone, or a similar device, the camera captures the real environment and virtual directions and information are displayed on the screen.
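Overlaying a virtual marker on the camera image, as described above, amounts to projecting a 3D point in the camera's coordinate frame onto 2D pixel coordinates. A minimal pinhole-camera sketch follows; the focal lengths and principal point are made-up values for a 1280x720 image:

```python
def project(point_3d, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
    """Pinhole projection of a camera-frame point (x, y, z) to pixels.

    fx, fy: focal lengths in pixels; cx, cy: principal point.
    z must be positive (the point lies in front of the camera).
    """
    x, y, z = point_3d
    if z <= 0:
        raise ValueError("point is behind the camera")
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# A waypoint marker 2 m ahead and 0.5 m to the right of the camera:
print(project((0.5, 0.0, 2.0)))  # → (840.0, 360.0)
```

A real AR pipeline would also apply the device's pose (from visual-inertial tracking) to move world-frame waypoints into the camera frame before projecting, and would correct for lens distortion.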

Developing virtual-reality educational applications with Unity

Virtual reality (VR) is a three-dimensional virtual environment simulated by computer technology in which users can interact and have immersive experiences.

With the continuous development of technology, VR is being applied ever more widely in education.

This article introduces the development of VR educational applications based on the Unity engine, covering the technical principles, the development workflow, and application scenarios.

I. The significance and value of VR educational applications. VR technology can provide an immersive learning experience, helping students understand abstract concepts better and strengthening their interest in learning and their initiative.

Through VR educational applications, students can practice operations in a safe environment, improving learning efficiency and depth of retention.

At the same time, VR can break the time-and-space limits of traditional teaching, letting students cross geographic barriers to access high-quality educational resources.

II. Advantages of the Unity engine for VR application development. Unity is a cross-platform game engine with powerful 3D rendering capabilities and a rich asset library, well suited to developing VR applications.

Unity supports many VR devices, such as the Oculus Rift and HTC Vive, and developers can easily publish applications to different platforms.

In addition, Unity provides a friendly visual editing interface and powerful scripting, letting developers build complex virtual scenes quickly.

III. The development workflow for a Unity-based VR educational application. 1. Define the requirements. Before development starts, clarify the requirements and goals of the VR educational application.

These include the teaching content, the interaction style, and the target users, laying the foundation for the subsequent development work.

2. Design the scenes and models. Use Unity's modeling tools and asset library to design the virtual scenes and models.

Based on the teaching content and requirements, create realistic 3D environments and interactive elements to enhance the user experience.

3. Add interaction. Use Unity's scripting facilities to add interactive features to the application.

Examples are gesture recognition, object grabbing, and scene switching, so that users can interact with the virtual environment.

4. Optimize performance and experience. During development, keep optimizing application performance and user experience.

This includes reducing model polygon counts, optimizing texture maps, and adjusting lighting effects, to ensure the application runs smoothly and provides good audiovisual quality.

5. Test and release. After development is complete, carry out system testing and user-experience testing.
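The interaction logic from step 3 above can be sketched in an engine-agnostic way. A real Unity project would implement this in C# MonoBehaviour scripts; the classes and names below are purely illustrative, not Unity APIs:

```python
class SceneManager:
    """Tracks the active teaching scene and switches on user commands."""
    def __init__(self, scenes):
        self.scenes = list(scenes)
        self.active = self.scenes[0]

    def switch_to(self, name):
        if name not in self.scenes:
            raise ValueError(f"unknown scene: {name}")
        self.active = name
        return self.active

class GrabController:
    """Lets the user pick up and release one object at a time."""
    def __init__(self):
        self.held = None

    def grab(self, obj):
        if self.held is None:  # ignore grabs while already holding something
            self.held = obj
        return self.held

    def release(self):
        obj, self.held = self.held, None
        return obj

# A student switches to the chemistry lab and picks up a beaker:
scenes = SceneManager(["lobby", "chemistry_lab", "physics_lab"])
scenes.switch_to("chemistry_lab")
grabber = GrabController()
grabber.grab("beaker")
print(scenes.active, grabber.release())  # → chemistry_lab beaker
```

In Unity the same pattern appears as event handlers wired to controller input: a grab handler parented to the held object, and scene loads triggered from UI or gesture events.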

Virtual-reality technology

1. Definition. Virtual reality (VR) has also been rendered in Chinese as 虚拟实在, 灵境 ("spirit realm"), and 临境 ("immersive presence").

It is a high technology that has emerged in recent years, also called spirit-realm technology or artificial-environment technology.

Virtual reality uses a computer to generate a three-dimensional virtual world and supplies the user with simulated visual, auditory, tactile, and other sensory input, so that the user feels present in the scene and can observe things in the three-dimensional space in real time and without restriction.

2. Basic characteristics. Immersion: the user can be immersed in the environment created by the computer system, turning from an observer into a fully engaged participant and becoming part of the VR system; the virtual scene can move in every direction with the user's viewpoint.

Interaction: the user can interact with the multidimensional information environment through the keyboard, the mouse, and various sensors, engaging with objects in the virtual environment as if in the real one.

Achieving this requires high-speed computation and processing.

Imagination: immersed in a "real" virtual environment and interacting with it in various ways, the user gains perceptual and rational knowledge from a qualitatively and quantitatively integrated environment, which can deepen concepts, spark new ideas, and produce leaps in understanding.

Virtual reality is therefore not merely an interface between user and terminal; it lets users immerse themselves in the environment to acquire new knowledge and improve their perceptual and rational understanding, and thus to generate new ideas.

The results of these ideas are fed back into the system, which displays the processed state in real time or returns it to the user through sensing devices.

Repeated over and over, this is a process of learning, creating, learning again, and creating again; virtual reality can therefore be said to be an activity that stimulates creative thinking.

3. The participant's activities and experience in the virtual environment. First-person activities: the participant is the center of the whole experience, and everything revolves around the participant.

Activities using a desktop computer or a head-mounted display belong to this type.

Second-person activities: in objective participation, participants can see themselves interacting with other objects in the virtual environment.

4. Building an effective virtual environment: (1) an accurate state model of the objects in the virtual environment; (2) a visual representation of the environment and the rendered scenery. 6. The development and applications of VR technology. (1) Development: the United States is the birthplace of VR research. The first virtual device was the "all-sensory simulator" (the Sensorama) designed by Morton Heilig in 1962; it simulated a ride through New York City, in which the "rider" could feel the wind and the bumps of the road, and could even smell food when passing a restaurant.


When objects are composed they can be transformed, e.g. scaled up or down. Mathematically, such transformations can be described by matrices, and the composition of transformations is then expressed by multiplying the corresponding matrices. The linchpin of a VRML world is the coordinate system. The position and extent of an object can be defined in a local coordinate system. The object can then be placed in another coordinate system by specifying the position, orientation, and scale of the object's local coordinate system within the other one. That coordinate system and the objects it contains can in turn be embedded in yet another coordinate system. Besides placing and transforming objects in space, VRML offers the possibility of specifying properties of these objects, such as the appearance of their surfaces. Such properties can be the color, shininess, and transparency of the surface, or the use of a texture, given for example by an image file, as the surface. It is even possible to use MPEG animations as the surfaces of bodies, i.e. instead of being shown in a window as on a cinema screen, an MPEG video can be projected onto, say, the surface of a sphere.

Fig. 1 VRML 2.0 specification of an arrow

    #VRML V2.0 utf8
    DEF APP Appearance {
      material Material { diffuseColor 1 0 0 }
    }
    Shape {
      appearance USE APP
      geometry Cylinder { radius 1 height 5 }
    }
    Anchor {
      children Transform {
        translation 0 4 0
        children Shape {
          appearance USE APP
          geometry Cone { bottomRadius 2 height 3 }
        }
      }
      url "anotherWorld.wrl"
    }

● VRML and the WWW

What distinguishes VRML from other object-description languages is the existence of hyperlinks: by clicking on objects, one can enter other worlds or load documents such as HTML pages into the WWW browser.
It is also possible to embed image files (e.g. for textures), sound files, or other VRML files by giving their URL, i.e. the address of the file on the WWW.

● Interactivity

Besides clicks on hyperlinks, VRML worlds can react to a series of further events. For this, so-called sensors were introduced. Sensors generate output events in response to external events such as user actions or the expiry of a time interval. Events can be sent to other objects: the output events of objects are connected to the input events of other objects by so-called ROUTEs. A sphere sensor, for example, converts mouse movements into 3D rotation values. A 3D rotation value consists of three numbers giving the rotation angles about the three coordinate axes. Such a 3D rotation value can be sent to another object, which then changes its orientation in space accordingly. Another example of a sensor is the time sensor. It can, for instance, periodically send an event to an interpolator. An interpolator defines a piecewise-linear function, i.e. the function is given by sample points and the values in between are linearly interpolated. The interpolator thus receives an input event e from the time sensor, computes the function value f(e), and forwards f(e) to another node. In this way an interpolator can, for example, define an object's position in space as a function of time. This is the basic mechanism for animations in VRML.

Fig. 2 Browser renderings of the arrow

● Dynamics

The pioneer in combining Java and JavaScript programs with VRML worlds was Netscape's Live3D, in which VRML 1.0 worlds could be controlled via Netscape's LiveConnect interface by Java applets or JavaScript functions inside an HTML page.
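The interpolator just described (a piecewise-linear function driven by time-sensor events) can be sketched as follows. The scalar keyframe track is a made-up illustration of the idea, not VRML syntax:

```python
from bisect import bisect_right

def interpolate(keys, key_values, t):
    """Piecewise-linear interpolation, as a VRML interpolator node does.

    keys: ascending sample times; key_values: values at those times.
    Outside the key range, the boundary value is held (clamped).
    """
    if t <= keys[0]:
        return key_values[0]
    if t >= keys[-1]:
        return key_values[-1]
    i = bisect_right(keys, t)
    t0, t1 = keys[i - 1], keys[i]
    v0, v1 = key_values[i - 1], key_values[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# A TimeSensor-style event at t = 0.25 over keyframes at t = 0, 0.5, 1:
print(interpolate([0.0, 0.5, 1.0], [0.0, 4.0, 0.0], 0.25))  # → 2.0
```

In VRML this value f(e) would then be ROUTEd on to, say, a Transform node's translation field, animating the object.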
In VRML 2.0 a new construct, the so-called script node, was added to the language. Within this node, Java and JavaScript code can be given that, for example, processes events. The VRML 2.0 standard defines application programming interfaces (APIs) that allow access to VRML objects from programming languages, namely the Java API and the JavaScript API. The API makes it possible for programs to delete or add routes and to read or change objects and their properties. With these programming facilities, imagination is now almost the only limit.

● VRML, and then?

One of VRML's original development goals remains unsolved even in VRML 2.0: there is still no standard for the interaction of multiple users in a 3D scene. Products that make virtual spaces accessible to several users at once are, however, already on the market (Black Sun's CyberGate, Sony's CyberPassage). Also missing is a binary format, such as Apple's QuickDraw 3D metafile format, which would reduce the amount of data that must be sent over the network when a scene is loaded. Especially in multi-user worlds, the so-called avatar plays a major role. An avatar is the virtual representation of the user. It is located at the viewpoint from which the user sees the scene. If the user moves through the scene alone, the avatar serves only to detect collisions between the user and objects in the world. In a multi-user world, however, the avatar also determines how a user is seen by other users. Standards for these and similar problems are currently being worked out in working groups of the VRML Consortium, founded at the end of 1996.

References
San Diego Supercomputer Center: The VRML Repository. http:///VRML2.0/FINAL/

Received 1 September 1997. Author: Stephan Diehl. Country: Germany. Source: Informatik-Spektrum 20: 294–295 (1997) © Springer-Verlag 1997.

Translation 1: The Virtual Reality Modeling Language

This article presents the basic concepts of VRML 2.0.

● History of VRML

In spring 1994, at the first WWW conference, held in Geneva, VRML was discussed.
