NVidia Grid 3D桌面虚拟化方案
桌面虚拟化解决方案(纯方案,25页)
桌面虚拟化解决方案目录
1 概述
1.1 项目背景
1.2 用户当前的问题
1.3 用户需求分析
2 系统总体设计
2.1 设计原则
2.2 系统设计目标
2.3 红山解决方案
2.4 红山方案优势
3 具体方案建议
3.1 方案设计
3.2 方案拓扑图
3.3 方案说明
3.4 方案分析
4 部署与实施
4.1 TurboGate安装
4.2 NComputing安装
5 产品介绍
5.1 红山TurboGate介绍
5.2 NComputing产品说明

1 概述
1.1 项目背景
Xx公司,是集研发、生产、贸易、服务于一体的技术创新型高新技术企业。
目前研发软件部分主要的岗位分为开发类、配置管理类、集成编译类、QA、软件代表及测试类等。
随着企业研发办公规模扩大,办公环境的管理越来越复杂。
如何利用现有硬件资源,建立一个简单、易用、安全的统一接入平台,以有效进行办公环境的规范管理,支持可控的远程访问,同时保证重要数据和代码的安全,是企业面临的一个重大难题。
在传统的IT系统架构中,桌面即功能齐全的PC。
随着IT应用的日益强大,业务对IT 的依赖也越来越大,为每个用户提供安全高效的桌面环境成为业务开展的基本要求。
传统的PC桌面系统越来越显示出其缺点和局限性,主要表现在以下几个方面:
■ 管理困难:用户要求能在任何地方访问其桌面环境,但PC硬件分布广泛,很难实现集中式PC管理。
另外,由于 PC 硬件种类繁多,而用户修改桌面环境的需求各异,因此PC 桌面标准化也是一个难题。
■ 数据的安全性无法保证:一方面,数据能否成功备份、在PC故障或文件丢失时能否成功恢复难以保证;另一方面,如果PC丢失,则PC上所有的数据也会丢失。
用户的数据安全面临巨大的挑战。
■ 资源利用率低:随着硬件运算能力的高速发展,PC的硬件配置通常都远超过了业务应用系统的使用需求,大多数PC都运行在极低的负载状态,利用率在5%以下。
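把大量低负载的PC桌面集中到服务器资源池,是提升利用率的直接手段。下面给出一个极简的估算示意(其中的核数、目标利用率均为举例假设,并非文中实测数据),说明在PC平均利用率很低时,单台虚拟化服务器大致可承载的桌面数量:

```python
# 假设性示例:按 CPU 粗略估算桌面整合比(数字仅作演示,未考虑内存、磁盘 IO 等因素)
pc_cpu_cores = 4                 # 单台 PC 的 CPU 核数(假设值)
pc_avg_utilization = 0.05        # 文中提到的典型利用率:5% 以下

host_cpu_cores = 32              # 一台虚拟化服务器的物理核数(假设值)
host_target_utilization = 0.7    # 服务器资源池的目标利用率(假设值)

# 每台 PC 实际消耗的"有效核数"
effective_cores_per_pc = pc_cpu_cores * pc_avg_utilization

# 单台服务器可承载的桌面数
desktops_per_host = int(host_cpu_cores * host_target_utilization / effective_cores_per_pc)
print(f"按 CPU 粗略估算,单台服务器约可承载 {desktops_per_host} 个桌面")
```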
基于调度大厅应用场景的国产化云桌面系统研究
第24卷第1期2024年1月交 通 工 程Vol.24No.1Jan.2024DOI:10.13986/ki.jote.2024.01.012基于调度大厅应用场景的国产化云桌面系统研究高 洋,王彩虹,耿鸿雁(中铁信弘远(北京)软件科技有限责任公司,北京 100844)摘 要:基于国产技术的铁路云桌面系统改造办公桌面系统,将用户桌面数据集中在数据中心,通过虚拟化技术组建资源池,提供业务用户使用瘦终端接入,办公数据由终端从办公云上调取,每位用户拥有独立的虚拟桌面,精确资源分配,有效实现对用户桌面的集中管理,根据应用场景差异化需求,按需配置㊁发放㊁回收删除桌面,支持用户个性化设置,同时实现集中存储,集中运算,确保数据安全.有效解决传统PC 分布广,维护工作量大的弊端.关键词:国产云桌面;虚拟化;调度大厅中图分类号:U 491文献标志码:A文章编号:2096⁃3432(2024)01⁃067⁃08收稿日期:2023⁃11⁃20.基金项目:中国铁路信息科技集团有限公司科技研究开发计划重大课题(WJZG⁃CKY⁃2022009(2022A08)).作者简介:高洋(1982 ),女,硕士,研究方向为国产化云桌面系统.E⁃mail:gaoyangbj@.Research on Localized Cloud Desktop System Based on Dispatching Hall Application ScenariosGAO Yang,WANG Caihong,GENG Hongyan(SinoRail Hongyuan (Beijing)Software Technology Co.,Ltd.,Beijing 100844,China)Abstract :Based on domestic technology,the railway cloud desktop system is transformed into an officedesktop system.The user desktop data is centralized in the data center,and a resource pool is formed through virtualization technology to provide business users with thin terminal access.Office data isretrieved from the office cloud by the terminal,and each user has an independent virtual desktop with precise resource allocation,effectively achieving centralized management of user desktops.According to the differentiated needs of application scenarios,it is configured,distributed,and recycle and delete desktops,support user personalized settings,and achieve centralized storage and calculation to ensure data security.Effectively solving the drawbacks of widespread distribution and heavy maintenance workload of traditional PCs.Key words :domestic cloud desktop;virtualization;dispatching hall0 引言国产云桌面系统以虚拟化技术为主要基础,计算虚拟化㊁存储虚拟化和网络虚拟化等关键技术,形成1个统一的资源池,减少基础硬件设施的投入;采用自研的云桌面传输协议,国产服务器通过该传输协议和铁路行业信创终端设备通讯,不同应用场景按需配置发放不同桌面,给铁路行业提供高安全㊁高性能㊁高可靠的云桌面环境.提供图形化界面和自助运维能力,降低运维复杂度,以及通过铁路专用应用管理商店及安全防护管理系统实现铁路信创终端统一运维管理㊁统一安全防护.1 现状分析近年来,面对严峻的国际形势,为维护国家安全,需要加速科技自主可控的发展,特别是在网络安交 通 工 程2024年全㊁数据安全和信息系统防护领域.国家已颁布并实施了一系列法律法规和政策文件,包括‘网络安全法“‘GJ信息基础设施安全保护条例“‘中华人民共和国数据安全法“等.与此同时,云桌面技术在政府㊁企业㊁金融㊁教育和医疗等多个领域已得到广泛应用.然而,在云桌面的使用过程中,仍然存在一系列挑战,如庞大的硬件设备库存㊁终端用户多㊁复杂的运维要求以及多样的外设类型,这些问题使得精确管控变得复杂.并且之前很多政府企业在办公领域大多采用国外的云桌面软件,自主可控性差,用户数据安全无法保证.铁路行业是我国经济命脉型行业,其行业内的办公㊁生产中使用超过70万台PC终端,但是很多业务场景下都采用国外的桌面云技术,例如调度大厅使用的是VMware桌面云软件,以及大多数铁路终端设备都采用国外的软件,自主可控能力差,不能兼容国产操作系统及插件,安全存在风险.如果未进行统一管控,那么任何1个点都可能存在数据丢失:1)数据传输中容易被非法截获PC需要经常与外部进行数据交互,铁路调度大厅汇集了列车行车线路㊁行车计划㊁防灾预警等各类运输信息,这些数据可能会被缓存在本地或者在传输中被截获.2)办公和上网桌面安全有隐患调度大厅的办公桌面和上网桌面没有隔离,在办公桌面环境上网,很容易出现未知病毒感染㊁信息被恶意盗取等安全事件,容易对调度系统产生影响,影响范围极广.如果断开互联网,又会降低员工工作效率.3)运维管理安全调度台使用的各业务系统及终端是由各业务部门建设的,为了确保PC终端的安全性,通常需要在计算机上安装各种安全软件,使电脑负担过重,增加维护成本,员工使用也不便捷,影响办公效率.国产云桌面系统成功地应对了上述难题,它能将用户的物理设备㊁操作系统㊁应用程序和数据逻辑上分离,而对终端用户而言,这一切都是完全透明的.2 国产云桌面系统架构国产云桌面系统整体架构分为资源层㊁服务层㊁协议层㊁国产终端设备和应用场景.如图1所示.图1 国产云桌面系统架构2.1 资源层国产云桌面平台的硬件基础设施包括国产服务器㊁存储设备㊁网络设备以及显卡等硬件,为该平台提供计算㊁存储㊁网络和显卡资源支持.一是实现对硬件资源的虚拟化,形成逻辑资源池㊁多副本存储㊁调度策略㊁负载均衡等底层逻辑功能;二是对虚拟资源㊁桌面㊁应用㊁安全㊁资源监控㊁高可用㊁用户和权限等集中管理,包含多个功能模块,通过策略选择分配给不同的用户使用,满足不同用户的管理需求.2.2 管理平台层管理平台层提供虚拟机管理㊁资源管理㊁终端管理㊁用户管理㊁认证管理㊁策略管控㊁账号控制等功能.虚拟机管理模块包括虚拟桌面Agent管理㊁虚拟机设置㊁虚拟机运维操作等功能;资源管理模块支持资源分组管理㊁远程资源管理等功能;终端管理模块包括终端配置㊁实现合法性认证对终端进行支持, 86 第1期高 洋,等:基于调度大厅应用场景的国产化云桌面系统研究例如终端与用户或用户组的关联㊁802.1X认证(密码或证书方式)以及CA认证等方式,以确保终端安全,避免非法终端的接入[4];用户管理支持创建㊁删除㊁编辑㊁分类等用户的全生命周期管理;认证管理支持密码认证设置㊁防暴力破解等;策略管理支持对策略组的全生命周期管理㊁等功能.2.3 协议层通过自研的国产云桌面传输协议实现桌面云平台与国产终端设备的通讯,传输虚拟桌面图像数据㊁流数据㊁文件数据和管理数据等.2.4 国产终端设备层通常是通过不同类型的客户端访问桌面云,这些客户端包括桌面云专用的国产瘦客户端㊁个人电脑㊁笔记本电脑以及各种智能设备,也可是非国产终端产品,兼容各类国产芯片计算机㊁笔记本和利旧PC[5].3 国产云桌面系统关键技术3.1 国产云桌面传输协议技术3.1.1 2D图形显示技术X窗口系统是一种以位图方式显示的软件窗口系统.如图2所示,鼠标和键盘信息通过Linux内核中的鼠标和键盘驱动,将鼠标和键盘事件发送给X 
服务(),X服务通过X协议将鼠标和键盘事件转发给对应的X客户端(各应用程序,如GNOME/KDE等),X客户端负责事件处理,并将处理结果传达给X服务,X服务调用显示驱动进行结果显示处理,将最终结果通过显示驱动,展现给用户.图2 国产桌面2D图形显示技术3.1.2 3D图形显示显示技术1)GPU直通国产服务器的GPU以直通方式分配给虚拟机,并通过远程协议,用户可远程访问.单个GPU物理核只能给单个虚拟机使用,单GPU卡并发数比较低,成本较高.2)vGPU采用虚拟化技术,将物理显卡的显存分配给多个虚拟机,这些虚拟机安装了NVIDIA显卡驱动,共享物理显卡的计算资源.但是,只有国产X86芯片+NVIDIA显卡的模式可实现vGPU.3.1.3 语音技术如图3所示,在资源虚拟化层模拟1个音频设备给虚拟机,虚拟机直接使用标准的音频驱动,音频APP调用系统音频处理接口(录音㊁放音API),虚拟声卡设备进行交互.图3 语音技术实现原理96交 通 工 程2024年3.1.4 视频技术目前在国产云桌面系统中,由于视频帧率高㊁变化区域较大,消耗带宽也较大,普通图形处理流程无法满足视频场景,主要的视频优化方案有3种:1)视频非重定向方案如图4所示,虚拟机内部播放视频时直接解码并显示视频画面,云桌面服务端识别视频区域,随后,对视频区域的图像进行重新编码处理,然后将重新编码后的视频数据传输至客户端,以便进行解码㊁播放和显示.图4 视频非重定向流程图 2)视频重定向方案如图5所示,虚拟机内部播放视频时,云桌面插件把播放器解析后的音㊁视频编码流拦截下来,音频和视频编码流可直接发送至桌面客户端以进行解码㊁播放和显示.图5 视频重定向流程图 3)网页重定向方案虚拟机内部播放视频时,云桌面插件把播放器播放视频文件(或视频网页)的URL 路径发送给桌面云客户端,客户端拉起同样的播放器或者定制的播放器来访问该URL.图6 网页重定向流程图7 第1期高 洋,等:基于调度大厅应用场景的国产化云桌面系统研究3.1.5 外设重定向技术外设重定向技术指的是在国产云桌面环境中,将终端侧的国产外设设备通过云桌面协议映射到远程桌面,并实现远程桌面对这些外设设备的使用.根据外设技术实现的原理,可分为2种类型: 1)端口重定向:是指在远程桌面操作系统中,针对端口底层协议进行重定向;如图7所示的USB 端口重定向.2)设备重定向:是指在远程桌面的操作系统中,针对设备应用协议进行重定向;如图8所示的摄像头设备重定向.图7 USB端口重定向图8 摄像头设备重定向3.1.6 小结国产云桌面传输协议中,国产桌面采用 来创建操作系统所用的图形用户界面;后端采用国产X86芯片的服务器,配置NVIDIA显卡,通过vGPU虚拟化的方式共享物理显卡,实现铁路实际业务中的3D应用;在不同应用场景下采用不同的方案满足视频场景,分别为非重定向场景㊁视频重定向场景和网页重定向场景;外设重定向技术中,大多数外设采用端口重定向和设备重定向两种方法. 3.2 终端管控关键技术3.2.1 终端发现国产云桌面平台管理员通过定义网络IP段分组,周期性地使用多种协议和机制来发现特定网络分组中的终端,并统计这些网络中的终端数量和类型.3.2.2 终端安全防护1)安全威胁管理:国产云桌面系统用系统防护㊁入侵防护㊁IP黑白名单㊁主机防火墙和微隔离㊁文件保护与病毒防护等多种方式全面保护国产终端安全.2)威胁检测:在威胁攻击发生前,国产云桌面系统通过基线核查㊁系统加固㊁漏洞加固㊁网络控制㊁预防攻击与文件保护等方式进行威胁预测,提前防御可能出现的威胁.3)系统安全管理:在威胁攻击前,国产云桌面系统提前对国产终端安全进行加固,通过国产终端安全基线核查机制为国产终端建立其特有的安全基线;对国产终端进行漏洞检测,根据危害等级展示漏洞情况,如有漏洞将下发补丁修复指令,实现系统加固.3.2.3 终端运维管理1)补丁管理:国产云桌面系统主动检测国产终端计算机的操作系统类型,随后自动下载所需的补丁㊁进行自动安装,并提供相关提示.2)软件管理:国产云桌面系统中软件管理模块提供一站式下载及安装软件功能.3)移动存储管理:针对国产终端接入的移动存储设备,系统提供拦截㊁认证㊁授权和审计功能,以确保只有经过认证的移动存储设备可被授权访问,防止非法设备接入和未经授权的数据外泄.4)违规外联管理:迅速检测和管理网络中的非法外联活动,对涉及发送外联行为的设备进行控制和警示.5)网络行为监控:包括网络访问控制㊁网络流量控制㊁僵尸网络检测等.17交 通 工 程2024年3.3 应用虚拟化由于UOS 和麒麟操作系统的推广和Windows7停服,目前还存在很多Windows 端的应用没有国产操作系统客户端的应用可替代,比如IE㊁Chrome 等,在国产化操作系统上使用这些应用,不能看到Windows 系统,于是衍生出操作系统上的远程应用的需求,后端使用Windows7或Windows10的操作系统承载应用,将应用发布给国产化终端接入使用.如图9所示,Windows 服务器端向系统注册窗口事件钩子,在回调接口中将窗口相关的信息通过主通道传递给客户端.图9 应用虚拟化实现方式国产客户端原理:1)显示原理:在建立连接过程中,先将客户端窗口隐藏,在打开应用的过程中,会接收到Windows 端发送过来窗口位置信息,对整个桌面进行裁减,将对应窗口的位置保留显示,其他位置裁减掉(其他位置是透明且能鼠标穿透),这样就只显示了打开的应用,多窗口的情况类似.2)跟踪服务端窗口:客户端本地会创建1个透明属性的窗口,这个窗口的属性跟服务端对应的应用窗口属性完全保持一致.3.4 安全防护关键技术根据国产云桌面系统面临的安全威胁和挑战,国产云桌面系统各部分安全功能如下:3.4.1 终端安全包括终端自身安全㊁接入认证安全以及相关安全策略配置,保障接入的终端一定是合法授权过的.3.4.2 网络安全主要围绕网络隔离与管控来构建整个云桌面的网络安全体系,包括分布式防火墙㊁安全域划分及VPN 隧道技术等方面.3.4.3 虚拟化安全包含计算㊁存储㊁网络虚拟化安全㊁资源隔离以及虚拟机内部安全等.3.4.4 数据安全通过剪切板审计㊁水印㊁磁盘数据加密等措施,保障用户数据免受侵害.3.4.5 运维管理安全通过桌面统一管理㊁远程协助运维㊁用户自助备份还原㊁用户自助申请开户㊁日志管理㊁管理员分级分权等功能来保证系统的安全.4 应用场景分析为促进铁路行业终端向自主可控方向迁移,国产云桌面在铁路行业中适用于以下场景:生产运营㊁综合办公㊁研发运维场景.4.1 综合办公场景综合办公场景中,传统的个人计算机数据安全性差㊁灵活性[1].国产云桌面系统建立了统一的云桌面管理平台[2],对国产终端㊁原有PC㊁桌面和铁路应用进行统一管理与维护,减轻运维压力;同时,后端国产服务器部署在数据中心,数据集中上云管理,解决数据容易丢失和数据泄密的安全难题[3];利用应用虚拟化的方式,实现已改造应用和未改造完成的应用 同时运行”,保障铁路行业业务正常运行.4.2 生产运营场景生产运营场景中,国产终端主要访问的应用系统包括客运㊁货运㊁工电㊁机辆等生产业务涉及的规划设计㊁运行流转㊁监控分析等应用系统.终端在使用办公类终端常用的办公软件㊁通讯软件㊁辅助软件和信息安全软件的基础上,还会使用业务专用软件,例如CTC㊁STP 等列车运行㊁通信系统生产专用软件㊁动环监测专用软件.此类场景特点是终端承担关键业务,稳定性要求极高,终端故障会影响工作.国产云桌面可通过镜像编辑和静默更新再不影响正常工作的同时对终端桌面进行部署更新,简化运维难度,依托故障回退等措施快速处理终端故障,保障业务连续.4.3 研发运维场景研发运维场景要访问的应用系统包括各类综合信息系统㊁业务应用系统㊁安全管理系统㊁监控分析27 第1期高 洋,等:基于调度大厅应用场景的国产化云桌面系统研究系统和仿真培训系统.在使用办公类终端常用软件的基础上,技术类终端还会使用secureCRT 远程登录软件,AutoCAD㊁3ds㊁MAX 等设计软件以及Visual studio㊁Pycharm 等编程开发软件.此类场景对国产终端性能和数据安全有很高的要求,通过国产云桌面平台资源弹性分配功能满足不同岗位桌面算力的差异化需求,提高资源的利用率;对资源文档集中云端存储,根据权限进行精细化管控,有效保障信息的安全性.5 典型应用场景:调度大厅终端国产改造5.1 
背景介绍
调度指挥中心设备主要分为调度指挥中心大屏与调度台设备,在国产化改造的大背景下,业务要求实现国产改造的目标,同时保证业务稳定运行.调度台使用的各业务系统及终端由各业务部门建设,缺乏统一规划、设计和部署,不利于后期维护管理[6],并且调度台已出现工作站数量不足、服务器性能不足、存储空间不足等现象.例如,因性能、空间不足造成虚拟桌面非正常退出后,无法正常登录系统等现象出现多次,影响了调度值班人员工作.同时,调度大厅业务系统非常重要,软件开发、改造、升级、部署和更新工作需要简单化,目前系统的运维工作占据了IT人员的大量精力和时间.调度大厅业务系统非常重要,短期内全部系统改造难度巨大,同时存在兼容性问题,越来越多的研发人员专注于信息化架构,包括存储分离、桌面分离以及应用分离[7].
5.2 国产云桌面系统在调度大厅场景下的应用
为了完成国产化改造,国产云桌面系统采用X86架构芯片的国产服务器、国产终端及相关设备,通过国产云桌面软件,为终端用户提供Windows或Linux云桌面,满足调度业务应用使用需求.后端采用X86国产服务器部署云桌面系统,分为2个集群,1个集群作为云桌面资源集群(每台国产服务器配置支持GPU虚拟化的显卡,采用NVIDIA GPU虚拟化技术为每个用户提供相应的GPU资源)[8],1个集群作为云桌面和虚拟化管理集群;在调度大厅配置瘦客户端,供调度人员使用.(另外后端配备多余的X86国产服务器,提供虚拟化应用服务.)存储方面,计算资源和分布式存储资源以分离方式进行部署,由于国产化服务器性能略低,采用多台分布式存储服务器分离部署模式,避免计算节点资源和存储资源进行性能争抢.将已经完成国产化改造的业务运行在国产桌面,未完成改造的业务系统在Windows系统上运行,通过国产终端接入办公,实现无缝切换.当调度大厅的业务系统全部改造完成后,回收Windows桌面,将国产桌面统一分发给用户使用.
图10 调度大厅国产云桌面解决方案
6 结束语
作为铁路日常运输组织的核心指挥中心,调度大厅在确保铁路运输的安全和高效运行方面扮演着至关重要的角色.本文基于国产云桌面系统,研究了国产云桌面系统的关键技术,分析了国产云桌面系统在铁路行业里的实际应用场景,并针对调度大厅国产化改造提出了完整的国产终端解决方案,包括服务器软硬件部署、网络架构、后台管理等.帮助改善调度大厅工作人员的工作环境,增强运输数据的安全性以及系统运行可靠性,提高调度大厅信息化基础设施运维管理水平,为铁路信息化不断完善创新夯实基础.
参考文献:
[1] 李楠. 桌面云在企业智慧办公中的应用研究[J]. 网络安全技术与应用, 2021(12): 108-109.
[2] 段然, 周来新. 智慧云桌面系统的设计与实现[J]. 数字通信世界, 2021(4): 94-101.
[3] 戴媛媛, 刘树峰. 桌面云在企业智慧办公中的应用研究[J]. 现代工业经济和信息化, 2021, 11(2): 130-132.
[4] 高伟. 桌面云技术在企业办公生产中的应用[J]. 电子技术与软件工程, 2021(1): 182-183.
[5] 刘龙庚, 李安伦, 庄金鑫. 新型桌面云应用发展研究[J]. 信息技术与标准化, 2022(Z1): 31-34, 49.
[6] 柯文, 李攀科. 铁路调度大厅桌面虚拟化方案的研究与设计[J]. 长江信息通信, 2022, 35(12): 90-93.
[7] 程燕, 贾宁, 李巧意. 陕西省地震局应急指挥大厅桌面虚拟化设计与研究[J]. 长江信息通信, 2021, 34(1): 131-134.
[8] 韩栋梁, 贺霄琛, 李少飞. 基于VGPU的桌面虚拟化在三维设计中的应用[J]. 电子工业专用设备, 2021, 50(1): 52-56.
gpu虚拟化解决方案
《GPU虚拟化解决方案:实现多用户共享GPU计算资源》
随着人工智能、大数据分析和深度学习等领域的不断发展,对图形处理单元(GPU)的需求也越来越大。
然而,传统上GPU资源的分配是以物理设备为单位的,这就导致了在多用户环境中存在着资源浪费和冗余的问题。
为了解决这一难题,GPU虚拟化技术应运而生。
GPU虚拟化解决方案是指通过虚拟化技术将一台物理GPU资源划分成多个虚拟GPU资源,实现多用户共享GPU计算资源的一种方法。
这种解决方案可以更有效地利用GPU设备,降低成本并提高资源利用率。
在企业中,这种技术能够满足不同部门或团队对GPU资源的需求,同时保护数据的安全性和隔离性。
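下面用一小段 Python 代码示意“按显存规格把一块物理 GPU 划分成多个 vGPU”的基本思路(仅为概念性示意:profile 名称与显存大小参考 NVIDIA GRID K1/K2 的公开资料,实际划分由 Hypervisor 中的 vGPU Manager 完成,并非由此处代码实现):

```python
# 概念性示意:按 vGPU profile 的显存规格,计算一块物理 GPU 可承载的 vGPU 数量
# (profile 显存取自 NVIDIA GRID K1/K2 公开资料;实际切分由 Hypervisor 中的 vGPU Manager 完成)
PROFILE_FRAMEBUFFER_MB = {
    "K100": 256,
    "K140Q": 1024,
    "K200": 256,
    "K240Q": 1024,
    "K260Q": 2048,
}

def vgpus_per_physical_gpu(total_fb_mb: int, profile: str) -> int:
    """一块显存为 total_fb_mb 的物理 GPU,按指定 profile 切分后可承载的 vGPU 数量。"""
    return total_fb_mb // PROFILE_FRAMEBUFFER_MB[profile]

# 一块 4GB 显存的 GRID GPU 按 K240Q(1GB/vGPU)切分:
print(vgpus_per_physical_gpu(4096, "K240Q"))  # 输出 4
```

例如一块 4GB 显存的 GRID K2 GPU 按 K240Q 切分可得到 4 个 vGPU,与单板(2 颗 GPU)最多承载 8 个 K240Q 虚拟机的规格相吻合。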
GPU虚拟化解决方案的实现需要结合硬件和软件两方面的技术。
在硬件方面,需要GPU设备本身支持虚拟化技术,如NVIDIA的vGPU技术。
而在软件方面,需要虚拟化管理平台和GPU虚拟化软件的支持,如VMware的vSphere和NVIDIA 的GRID软件。
通过GPU虚拟化解决方案,用户可以在虚拟机上运行图形密集型应用程序,享受和使用独立的GPU资源,而不会受到物理GPU资源的限制。
这对于需要大量图形计算或并行计算的场景非常有用,比如科学计算、虚拟桌面基础设施(VDI)、云计算等领域。
总的来说,GPU虚拟化解决方案是一种能够有效提高GPU资源利用率、降低成本、增强数据安全性的技术。
随着GPU虚拟化技术的发展,相信它将会在越来越多的场景中得到广泛应用并发挥出更大的价值。
美的桌面虚拟化解决方案案例
一、引言
桌面虚拟化是一种将用户的桌面环境从物理设备中解耦,将其移至虚拟服务器上的技术。
美的集团作为一家全球领先的家电制造商,为了提升员工的工作效率和数据安全性,决定采用桌面虚拟化解决方案来统一管理和部署员工的工作环境。
本文将介绍美的桌面虚拟化解决方案的具体实施情况和取得的成果。
二、解决方案概述
1. 技术选型
美的选择了一家知名的虚拟化解决方案提供商作为合作伙伴,该解决方案基于VMware Horizon View平台,结合了VMware vSphere虚拟化技术和NVIDIA GRID图形加速卡,为用户提供高性能的桌面虚拟化体验。
2. 系统架构
美的桌面虚拟化解决方案采用了典型的三层架构,包括用户终端设备、虚拟桌面服务器和数据中心服务器。
用户通过终端设备访问虚拟桌面服务器,而虚拟桌面服务器则连接到数据中心服务器,实现桌面环境的虚拟化和集中管理。
3. 功能特性
美的桌面虚拟化解决方案提供了以下功能特性:
- 高性能图形加速:通过NVIDIA GRID图形加速卡,实现对图形密集型应用的加速,保证用户在虚拟桌面环境下的流畅体验。
- 统一管理:管理员可以通过集中的管理控制台对所有虚拟桌面进行集中管理和配置,大大减轻了管理工作的负担。
- 快速部署:虚拟桌面可以根据需要快速部署和调整,节省了时间和资源成本。
- 数据安全:所有用户数据都存储在数据中心服务器中,减少了数据泄露和丢失的风险。
三、实施过程
1. 环境准备
在实施桌面虚拟化解决方案之前,美的进行了详细的规划和准备工作。
他们建立了一个专门的项目团队,负责解决方案的设计、实施和测试。
此外,他们还进行了现有硬件和网络环境的评估,确保能够满足虚拟化解决方案的需求。
2. 系统部署
美的首先部署了虚拟桌面服务器和数据中心服务器。
他们采用了高性能的服务器硬件,并使用VMware vSphere进行虚拟化配置。
然后,他们安装和配置了VMware Horizon View平台,并将用户的桌面环境进行虚拟化。
南京高等职业技术学校借助NVIDIA虚拟GPU技术打造虚拟教学平台
借助于NVIDIA GRID™ 桌面虚拟化技术,南京高等职业技术学校打造了虚拟教学平台,为学生提供浸入式的高质量用户体验,同时降低成本并简化IT管理。
虚拟化技术的引入不仅改变了教育行业传统的教学方式,同时提升了学生对教学资源的访问便捷性。
在众多虚拟化技术中,NVIDIA 以其特有的虚拟GPU技术帮助每一个虚拟化桌面实现和物理PC及工作站类似的用户体验。
从而使得用户可以平滑地从传统的桌面环境迁移到虚拟化的环境中,同时获取所有虚拟化带来的技术优势。
NVIDIA Virtual GPU在对GPU 实现虚拟化切割的基础上,能够对用户虚拟化后的所有工作负载进行加速(包括图形、计算以及人工智能应用),并最大限度地实现GPU资源的复用,完善GPU资源的可视化监控和管理功能。
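关于“GPU资源的可视化监控”,下面给出一个简单的采集示意(假设宿主机已安装NVIDIA驱动并可执行 nvidia-smi;所用查询字段为 nvidia-smi 的标准查询项,此处仅演示数据采集方式,并非GRID管理平台的实际实现):

```python
import csv
import subprocess

def sample_gpu_metrics() -> None:
    """调用 nvidia-smi 采集每块物理 GPU 的显存与利用率(需宿主机已安装 NVIDIA 驱动)。"""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,name,memory.total,memory.used,utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    for row in csv.reader(out.strip().splitlines()):
        idx, name, mem_total, mem_used, util = [c.strip() for c in row]
        print(f"GPU{idx} {name}: 显存 {mem_used}/{mem_total} MiB, 利用率 {util}%")

if __name__ == "__main__":
    sample_gpu_metrics()
```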
南京高等职业技术学校是一所高水平、高起点、现代化的重点综合性职业院校,是国家教育部在职教领域和德国合作的第一所项目学校,是全国中等职业教育改革发展示范单位。
学校现拥有70个设备先进的现代化实验室、28个配套齐全的校内技能实训车间、130多个多媒体教室以及1200台计算机,融合有线、无线、多媒体、安防等多种应用系统。
FusionSphere服务器虚拟化规划和最佳实践
组网介绍 - LAN-Base
LAN-Base组网
备份介质
组网介绍-LAN-Free
LAN-Free组网
LAN
SAN
LAN-free Backup
主控服务器
生产系统
备份客户端业务节点
生产存储
备份介质
组网介绍 - Server-Free
Server-Free组网
备份代理客户端由它产生快照
快照
生产存储
高级功能
在基本功能的实现之后,系统设计时可以根据需要提供以下高级功能,如:安全性及可用性、虚拟机及存储的热迁移、GPU虚拟化、NUMA相关技术、DRS&DPM、VxLAN&vApp。
虚拟机热迁移
技术特点:基于内存压缩传输技术,虚拟机热迁移效率提升1倍;虚拟机磁盘数据位置不变,只更改映射关系。
适用场景:可容忍短时间中断、但必须快速恢复业务的场景,比如轻量级数据库业务、桌面云业务。
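FusionSphere 的热迁移由其管理平台封装,具体接口此处不展开;作为通用原理示意,下面用开源 libvirt 的 Python 绑定演示“把运行中的虚拟机在线迁移到另一台宿主机”的基本调用方式(假设两台 KVM 宿主机共享存储,URI 与虚拟机名称均为示例):

```python
import libvirt  # 需安装 libvirt-python

SRC_URI = "qemu+ssh://host-a/system"   # 源宿主机(示例地址)
DST_URI = "qemu+ssh://host-b/system"   # 目标宿主机(示例地址)
VM_NAME = "desktop-vm-01"              # 待迁移虚拟机(示例名称)

src = libvirt.open(SRC_URI)
dst = libvirt.open(DST_URI)
dom = src.lookupByName(VM_NAME)

# VIR_MIGRATE_LIVE:在线迁移;VIR_MIGRATE_COMPRESSED:迁移内存数据压缩传输,
# 与文中"基于内存压缩传输提升热迁移效率"的思路一致
flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_COMPRESSED
dom.migrate(dst, flags, None, None, 0)  # 共享存储场景:磁盘数据位置不变,只改映射关系

src.close()
dst.close()
```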
eBackup
备份管理简单易用
满足海量虚拟机备份场景需求
性价比极高的数据保护方案
备份服务器与备份代理各司其职,分布式可扩展架构;虚拟机智能选择,自动保护新增虚拟机,减轻维护工作量;任务负载均衡、故障切换,保证可靠性。
通过备份数据重复删除、压缩,永久增量备份等技术,降低35%用户备份存储购置成本;支持块级增量备份/恢复,缩减95%备份恢复窗口。
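块级增量备份的核心思想是只传输自上次备份以来发生变化的数据块。下面是一个与具体产品无关的极简示意,仅演示“按固定块大小做哈希比对、找出变化块”的原理(块大小为示例值):

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB 数据块(示例块大小)

def block_hashes(path: str) -> list[str]:
    """按固定块大小计算磁盘镜像文件中每个数据块的哈希值。"""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def changed_blocks(old: list[str], new: list[str]) -> list[int]:
    """比对两次备份的块哈希,返回需要增量传输的块编号。"""
    return [i for i, h in enumerate(new) if i >= len(old) or h != old[i]]

# 用法示意:首次备份记录全部块哈希,之后每次只传输 changed_blocks() 返回的数据块
```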
使用GPU配置Horizon虚拟桌面
NVIDIA GRID技术允许在多个虚拟机上共享这一GPU。
此举可以为现有应用程序和桌面环境提升性能。根据用户计划部署的任务类型的不同,有两种不同的驱动程序类型。
图1 传统应用程序和桌面虚拟化工作原理
图2 NVIDIA GRID工作原理
虚拟机获取的许可证会一直保留,直到将其关闭。
然后,它将许可证释放回许可证服务器。
许可设置在重新启动后仍然存在,并且仅在许可证服务器地址发生变化时才需要修改。
(5)然后使用Client登录Server,修改为直接共享。
图3 试用许可信息
图4 创建License Server
NVIDIA License服务器注意事项如下:
(1)可以配置单台物理机或虚拟机用作License Server;为了避免单个许可证服务器故障影响授权,需确保使用的每台许可证服务器主机上已经安装了必需的软件。
在Windows上,需要安装.Net Framework 4.5或更高版本,还要安装Java运行环境,并对许可证服务器网卡的MAC地址进行记录。
(6)许可证服务器的日期和时间必须正确配置。
图5 正式许可
图6 管理服务器
图7 虚拟桌面服务器
图8 配置好的ESXi主机
在本示例中,在数据中心中创建2个群集(HA),每个群集添加1台主机。
其中HA01这个群集添加DELL R720的服务器(承载Active Directory、vCenter Server、Horizon连接服务器和Composer服务器等信息),HA02这个群集添加DELL R940xa的服务器,只用来承载虚拟桌面,如图8所示。
并且在配置好vCenter Server及ESXi主机之后,创建Windows Server 2016 Datacenter的模板虚拟机,从此模板虚拟机部署2台Active Directory虚拟机、1台Composer服务器和1台Horizon连接服务器。
图9 受限制的组
将这些服务器的相关组添加到这两个组中,如图9所示。
(3)在DHCP服务器中创建作用域,为虚拟桌面网段(本示例规划使用192.168.8.0/24)分配IP 地址。
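在开始部署前,可以先用一小段脚本确认各管理组件的网络可达性(主机名均为示例;443 为 vCenter/连接服务器常见默认端口,7070 为 NVIDIA GRID License Server 的默认端口,实际以环境为准):

```python
import socket

# 主机名均为示例;443 为 vCenter/Horizon 连接服务器常见端口,7070 为 GRID License Server 默认端口
CHECKS = [
    ("vcenter.example.local", 443),
    ("horizon-cs.example.local", 443),
    ("grid-license.example.local", 7070),
]

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """TCP 连接探测:能建立连接即认为端口可达。"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in CHECKS:
    print(f"{host}:{port} -> {'可达' if reachable(host, port) else '不可达'}")
```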
GPU虚拟化的应用实践
本文根据Xen 虚拟化架构下GPU 工作原理,结合太原广播电视台云制作、云媒资项目,介绍了GPU 虚拟化的使用经验,依据调试得出的各项数据结果,阐述不同应用场景GPU 的调优方法。
GPU虚拟化 调优
一 Xen虚拟化
太原广播电视台2018年投入使用的节目制作和媒资系统使用华为FusionSphere云桌面解决方案,其虚拟化底层使用Xen Server。
Xen 的虚拟化层(Xen Hypervisor )位于操作系统和硬件之间,负责为上层运行的操作系统内核提供虚拟化的硬件资源,负责管理和分配这些资源,并确保上层虚拟机(称为域,Domain )之间的相互隔离。
Xen在特权域中建立后端设备(Backend),所有的用户域操作系统像使用普通设备一样向前端设备发送请求,而前端设备通过IO请求描述符(IO descriptor ring)和设备通道(Device Channel)将这些请求以及用户域的身份信息发送到处于特权域中的后端设备。
所有的真实硬件访问都由特权域的后端设备调用本地设备驱动(Native Device Driver)发起。
二 GPU虚拟化
广电行业的电视节目制作系统对图形处理能力的需求较高,GPU资源如何配置以及使用何种应用模式,对节目生产效率有着重要的影响。
GPU虚拟化在太原台应用场景中有两种模式,一是GPU硬件虚拟化(Mediated Pass-through),二是GPU直通(Pass-through)。
1. GPU硬件虚拟化
GPU硬件虚拟化就是使用单个显卡为多个图形桌面提供显卡能力,虚拟化平台将一块物理显卡虚拟化成多个虚拟显卡,每个GPU划分成多个vGPU,每个VM(虚拟机)绑定一个vGPU。
2. GPU直通
GPU直通是将整块物理显卡直接分配给单个虚拟机独占使用;而显卡虚拟化则是将显卡进行切片,并将这些显卡时间片分配给虚拟机使用的过程。
我们现在使用的3D桌面虚拟化解决方案中,大部分使用nVIDIA公司提供的显卡虚拟化技术,即vCUDA(virtual CUDA)技术。
VMware VGPU图形虚拟化解决方案
设计类桌面虚拟化中3D图形加速的重要性
NVIDIA grid vGPU虚拟化技术
• GPU虚拟化v2.0 – vGPU技术
安装GRID GPU的服务器
Hypervisor
GRID Virtual GPU Manager
Hypervisor
数据中心
新病毒木马层出不穷,
防护软件不仅永远落后,自己还会捅娄子!
一直在救火却总也扑不完
图形工作站让汽车企业面临特殊挑战
• 数据安全问题 数据放在工作站本地或从后端存储调用,导致数据安全的不确定性
• 数据传输问题 一次CAD/CAE完整的计算过程中,数据需要在网络中进行两次转移, 网络压力大,工作效率降低
DX9,DX10,DX11
OGL4.4或更新
NVIDIA驱动控制面板 应用程序在vGPU上运行和物 理GPU上完全一样
Nvidia Grid vGPU (演示)
• VMware Horizon View 6.1+ NVIDIA GRID vGPU
Apps
Citrix XenServer VMware vSphere
Hypervisor
NVIDIA Driver
NVIDIA Driver
Direct GPU access from
guest VM
Dedicated GPU per user
NVIDIA GPU
远程图形协议
客户端
Decision
思杰云桌面解决方案汇报
Document
Database
(图:PCoIP、RGS、RDP等远程协议随应用复杂度递增的适用性对比)
微软对协议的评测(1)
网络
用户设备
数据中心
TCP
HDX 远程图形协议
Citrix Receiver
运行于全球约30亿设备上
桌面在数据中心运行的画面,依赖于桌面连接协议传输到客户端侧,并由客户端进行呈现。桌面连接协议的优劣直接决定用户使用体验的好坏。衡量桌面连接协议的五个方面:占用带宽、还原能力、容错能力、外设支持、控制力度。
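以“占用带宽”为例,评估时可以在客户端侧对网卡流量做周期采样,粗略估算一次桌面会话的平均带宽。下面是一个通用的测量示意(与任何特定桌面协议无关;网卡名称为假设值,需按实际环境修改):

```python
import time
import psutil  # 需安装 psutil

NIC = "eth0"        # 客户端侧承载桌面会话的网卡(假设名称,按实际环境修改)
INTERVAL = 1.0      # 采样间隔(秒)
SAMPLES = 60        # 采样次数

recv0 = psutil.net_io_counters(pernic=True)[NIC].bytes_recv
time.sleep(INTERVAL * SAMPLES)
recv1 = psutil.net_io_counters(pernic=True)[NIC].bytes_recv

# 平均下行带宽(kbit/s),可作为"占用带宽"这一指标的粗略度量
avg_kbps = (recv1 - recv0) * 8 / 1000 / (INTERVAL * SAMPLES)
print(f"会话期间平均下行带宽约 {avg_kbps:.1f} kbit/s")
```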
桌面连接协议
桌面连接协议——占用带宽
Driver
Driver
vGPU
Driver
NVIDIA GRID K1 与 NVIDIA GRID K2 规格对比:
GPU模块数量:K1 为 4 Kepler GPUs;K2 为 2 High End Kepler GPUs
CUDA 处理器数量:K1 为 768 (192/GPU);K2 为 3072 (1536/GPU)
显存总量:K1 为 16GB DDR3 (4GB/GPU);K2 为 8GB GDDR5 (4GB/GPU)
兰州市统计局
数据中心详细说明
云计算硬件组成
大规模部署
轻代理杀毒
多场景桌面
数据 中心
网络方案:(1)三网合一;(2)整网IP互联;(3)整网热备
网络方案(1)——三网合一
组网方式:三网合一 / 三网隔离
安全性:三网合一安全性较高,三种业务流通过VLAN逻辑隔离;三网隔离安全性高,三种业务流物理隔离
流量干扰
设计应用VDI解决方案VMware+nVidia
•工作站资源无法动态分配,且占用 大量空间
•设备生命周期短,需频繁升级
• 通过提供高度整合的虚拟化平台,减少图形工作站的数量, 简化IT环境,降低维护工作量,结合vGPU技术,提供一虚多 的3D设计工作站,充分利用现有资源。 • 统一的管理平台,模板化的系统部署方式,几分钟内即可 创建或移除一台虚拟工作站,轻松适应瞬息万变的市场需求
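“模板化的系统部署方式,几分钟内即可创建一台虚拟工作站”在 vSphere 环境下通常对应“从模板克隆虚拟机”。下面给出一个基于 pyVmomi 的简化示意(假设已通过 vCenter 连接取得模板虚拟机、目标文件夹和资源池对象,变量名均为示例;生产环境中还需在克隆规格里配置网络、vGPU 等定制项):

```python
from pyVmomi import vim  # 需安装 pyvmomi;连接 vCenter 可使用 pyVim.connect.SmartConnect

def clone_workstation_from_template(template_vm, folder, resource_pool, new_name):
    """从模板克隆一台虚拟工作站(三个对象参数均为事先通过 vCenter 连接取得的 pyVmomi 对象)。"""
    relocate = vim.vm.RelocateSpec(pool=resource_pool)
    clone_spec = vim.vm.CloneSpec(location=relocate, powerOn=True, template=False)
    # Clone 对应 vSphere 的 CloneVM_Task 异步任务,可轮询 task.info.state 获取进度
    task = template_vm.Clone(folder=folder, name=new_name, spec=clone_spec)
    return task
```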
5.安全远程访问:使远程工作人员和加班人员能够采用其它设备从公司防火墙之外的地点访问其虚拟桌面
➢ 桌面优化和支持
1. 桌面性能监测:通过对实时和历史监测数据进行跟踪,主动确保用户始终获得最佳的性能
2. 网络优化:采用策略机制和UDP机制来提升网络性能
3. 桌面支持:使技术支持人员能够查看用户屏幕、开展对话、传输文件,从而快速解决问题
VDI优势2
➢ 可靠的桌面访问管理
1. 桌面分配:为用户群创建虚拟桌面池,或为特定用户提供个性化桌面
2. 会话管理:虚拟桌面连接和会话状态
3. 会话可靠性:确保用户即使通过高延时或低带宽的网络连接也可继续工作
4. 高可用性/故障恢复:在避免产生单点故障的情况下让用户能够访问其虚拟桌面
➢ 广泛的桌面交付生态系统
1. 桌面设备:新型终端设备可提供最佳的用户体验和立即可用的互操作性
2. 广泛的系统支持:提供第三方硬件厂商的互操作性和集成能力
3. 支持刀片PC:采用基于刀片PC的虚拟桌面,为用户提供高性能的专用计算资源
4.支持异构客户端:使用户在终端设备选择上具有更大的灵活性,支持Windows、Linux、Mac和智能手机等操作系统
Server
Hardware
gpu虚拟化方案
引言
随着人工智能、大数据分析和图形渲染等应用的广泛发展,对图形处理单元(Graphics Processing Unit,简称GPU)的需求也日益增加。
然而,由于GPU的稀缺性和高昂的成本,如何充分利用和满足多个用户对GPU的需求成为一个重要的课题。
为了解决这个问题,GPU虚拟化方案应运而生。
本文将介绍GPU虚拟化的概念、优势和一些常见的虚拟化方案。
什么是GPU虚拟化
GPU虚拟化是指将一个或多个物理GPU分割为多个虚拟GPU并供多个用户共享的技术。
通过虚拟化,每个用户可以独享一部分GPU资源,实现高性能的并发计算和图形渲染。
GPU虚拟化可以在物理GPU上运行多个虚拟机实例,每个实例可以独立访问和管理自己的虚拟GPU。
GPU虚拟化的优势
1. 提高资源利用率
通过虚拟化技术,可以将一台物理GPU分割为多个虚拟GPU并供多个用户共享。
这样可以充分利用GPU的计算能力,提高资源利用率。
即使某些用户的任务较轻,也可以通过虚拟化将他们分配到同一块物理GPU上,以减少资源浪费。
2. 提高系统灵活性
虚拟化技术可以实现虚拟GPU的动态分配和调度,根据任务的需求进行动态调整。
这样可以根据实际情况合理分配GPU资源,提高整个系统的灵活性。
即使某个用户的任务突然变得更加复杂,系统也可以根据需求分配更多的GPU资源。
3. 提供安全隔离
通过GPU虚拟化,每个用户可以独立使用自己的虚拟GPU,并与其他用户的虚拟GPU相互隔离。
这样可以确保每个用户的数据和计算任务不被其他用户访问或干扰,保护用户的隐私和安全。
常见的GPU虚拟化方案
1. NVIDIA GRID
NVIDIA GRID是一种基于硬件的GPU虚拟化方案。
它使用专用的硬件加速器和虚拟化软件,将一块物理GPU分割为多个虚拟GPU。
NVIDIA GRID可以在多个虚拟机实例上同时运行,每个实例可以独立访问和管理自己的虚拟GPU。
NVIDIA GRID具有高性能和低延迟的特点,适用于图形密集型应用和虚拟桌面环境。
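为直观说明“多块物理GPU被多个用户共享”的分配过程,下面给出一个纯概念性的分配器示意(与 NVIDIA GRID 的实际调度实现无关;实际产品中通常还要求同一块物理 GPU 上的 vGPU profile 规格一致,此处从简):

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalGPU:
    name: str
    free_mb: int                        # 剩余可分配显存(MB)
    vgpus: list = field(default_factory=list)

def allocate_vgpu(gpus: list, user: str, profile_mb: int) -> bool:
    """为用户分配一个 profile_mb 显存的 vGPU:选剩余显存最多的 GPU(简单负载均衡)。"""
    candidates = [g for g in gpus if g.free_mb >= profile_mb]
    if not candidates:
        return False                    # 资源不足,分配失败
    target = max(candidates, key=lambda g: g.free_mb)
    target.free_mb -= profile_mb
    target.vgpus.append((user, profile_mb))
    return True

gpus = [PhysicalGPU("GPU0", 4096), PhysicalGPU("GPU1", 4096)]
for u in ["user1", "user2", "user3"]:
    allocate_vgpu(gpus, u, 1024)        # 每个用户分配一个 1GB 显存的 vGPU
print([(g.name, g.free_mb, len(g.vgpus)) for g in gpus])
```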
NVIDIA GRID虚拟GPU和虚拟工作站许可证用户指南说明书
DU-07757-001 | September 2015 User GuideDOCUMENT CHANGE HISTORY DU-07757-001Chapter 1.Introduction (1)1.1How licensing works (1)1.2License editions (2)Chapter 2.GRID Virtual GPU (3)2.1vGPU License requirements (3)2.2Licensing on Windows (4)2.3Licensing on Linux (7)Chapter 3.GRID Virtual Workstation (8)3.1GRID Virtual Workstation features (8)3.2Licensing on Windows (8)3.2.1Disabling GRID Virtual Workstation (11)3.3Licensing on Linux (13)Chapter 4.Advanced topics (14)4.1Licenses obtained after boot (14)4.2Operating with intermittent connectivity to the license server (14)4.3Applying Windows license settings via registry (15)Chapter 5.Troubleshooting (17)5.1Known issues (17)5.2Troubleshooting steps (17)LIST OF FIGURESFigure 1 GRID licensing architecture (1)Figure 2 GRID license editions (2)Figure 3 Managing vGPU licensing in NVIDIA Control Panel (5)Figure 4 Successful acquire of vGPU license (6)Figure 5 Sample gridd.conf for GRID vGPU (7)Figure 6 Managing Virtual Workstation Licensing in NVIDIA Control Panel (9)Figure 7 Applying GRID Virtual Workstation license (10)Figure 8 Success acquire of Virtual Workstation license (11)Figure 9 Disabling GRID Virtual Workstation (12)Figure 10 Sample gridd.conf for GRID Virtual Workstation (13)Figure 11 Configuring vGPU licensing via registry settings (16)LIST OF TABLESTable 1 Virtual GPUs licensed on Tesla M6, M60 (4)Table 2 Licensing registry settings (14)Chapter 1. INTRODUCTIONCertain NVIDIA GRID TM products, such as GRID vGPU TM and GRID VirtualWorkstation, are available as licensed features on NVIDIA Tesla TM GPUs. This guide describes the licensed features and how to enable and use them on supported hardware.1.1 HOW LICENSING WORKSFigure 1 provides an overview of GRID licensing:Figure 1 GRID licensing architectureGRID ServerGRID License ServerVMNVIDIA Tesla GPUNVIDIATesla GPUGRID Virtual Workstation GraphicsVMGRID vGPUVMGRID vGPULicensesLicense borrow, returnIntroduction When enabled on Tesla GPUs, licensed features such as Virtual GPU are activated by obtaining a license over the network from an NVIDIA GRID License Server. The licensed is “checked out” or “borrowed” at the time the Virtual Machine (VM) is booted, and returned when the VM is shut down.1.2LICENSE EDITIONSGRID licenses come in three editions that enable different classes of GRID features. The GRID software automatically selects the right license edition based on the features being used:Figure 2 GRID license editionsThe remainder of this guide is organized as follows:④Chapter 2 describes licensing of GRID Virtual GPU.④Chapter 3 describes licensing of GRID Virtual Workstation features with GPUpassthrough.④Chapter 4 discusses advanced licensing settings.④Chapter 5 provides guidance on troubleshooting.Chapter 2.This chapter describes licensing of NVIDIA GRID vGPU.2.1VGPU LICENSE REQUIREMENTSNVIDIA GRID Virtual GPU (vGPU) is offered as a licensable feature on Tesla M6 and M60 GPUs. When a vGPU is booted on these GPUs, a license must be obtained by the Virtual Machine (VM) in order to enable the full features of the vGPU. 
The VM retains the license until it is shut down; it then releases the license back to the license server.Table 1 lists the vGPU types that are supported on Tesla M6 / M60, and the license edition that each vGPU type requires.Table 1 Virtual GPUs licensed on Tesla M6, M60The higher-end GRID license editions are inclusive of lower editions: for example virtual GPUs that require a GRID Virtual PC license are also usable with a GRID Virtual Workstation or GRID Virtual Workstation Extended license.2.2LICENSING ON WINDOWSTo license vGPU, open NVIDIA Control Panel by right-clicking on the Windows desktop and selecting NVIDIA Control Panel from the menu, or by opening Windows Control Panel and double-clicking the NVIDIA Control Panel icon.In NVIDIA Control Panel, select Manage License task in the Licensing section of the navigation pane, as shown in Figure 3.Figure 3 Managing vGPU licensing in NVIDIA Control PanelThe Manage License task pane shows that GRID vGPU is currently unlicensed. Enter the address of your local GRID License Server in the License Server field. The address can be a fully-qualified domain name such as , or an IP address such as 10.31.20.45.The Port Number field can be left unset and will default to 7070, which is the default port number used by NVIDIA GRID License Server.Select Apply to assign the settings. The system will request the appropriate license for the current vGPU from the configured license server and, if successful, vGPU’s full capabilities are enabled (see Figure 4). If the system fails to obtain a license, refer to 4.3 for guidance on troubleshooting.Once configured in NVIDIA Control Panel, licensing settings persist across reboots.Figure 4 Successful acquire of vGPU licenseGRID Virtual GPU2.3LICENSING ON LINUXTo license GRID vGPU, edit /etc/nvidia/gridd.conf:[nvidia@localhost ~]$ sudo vi /etc/nvidia/gridd.confSet ServerURL to the address and port number of your local NVIDIA GRID License Server. The address can be a fully-qualified domain name such as, or an IP address such as 10.31.20.45. The port number is appended to the address with a colon, for example :7070.Set FeatureType to 1, to license vGPU:Figure 5 Sample gridd.conf for GRID vGPURestart the nvidia-gridd service:[nvidia@localhost ~]$ sudo service nvidia-gridd restartThe service should automatically obtain a license. This can be confirmed with log messages written to /var/log/messages, and the vGPU within the VM should now exhibit full framerate, resolution, and display output capabilities:[nvidia@localhost ~]$ sudo grep gridd /var/log/messages…Sep 13 15:40:06 localhost nvidia-gridd: Started (10430)Sep 13 15:40:24 localhost nvidia-gridd: License acquired successfully.Once configured in gridd.conf, licensing settings persist across reboots and need only be modified if the license server address changes, or the VM is switched to running GPU passthrough.Chapter 3.This chapter describes how to enable the GRID Virtual Workstation feature on supported Tesla GPUs.3.1GRID VIRTUAL WORKSTATION FEATURES GRID Virtual Workstation is available on Tesla GPUs running in GPU passthrough mode to Windows and Linux VMs. 
Virtual Workstation requires a GRID Virtual Workstation – Extended license edition, and provides these features:④Up to four virtual display heads at 4k resolution (unlicensed Tesla GPUs support asingle virtual display head with maximum resolution of 2560x1600).④Workstation-specific graphics features and accelerations.④Certified drivers for professional applications3.2LICENSING ON WINDOWSTo enable GRID Virtual Workstation, open NVIDIA Control Panel by right-clicking on the Windows desktop and selecting NVIDIA Control Panel from the menu, or by opening Windows Control Panel and double-clicking the NVIDIA Control Panel icon. In NVIDIA Control Panel, select Manage License task in the Licensing section of the navigation pane, as shown in Figure 6.Figure 6 Managing Virtual Workstation Licensing in NVIDIA Control PanelThe Manage License task pane shows the current License Edition being used, and defaults to unlicensed.Select GRID Virtual Workstation, and enter the address of your local GRID License Server in the License Server field (see Figure 7). The address can be a fully-qualified domain name such as , or an IP address such as10.31.20.45.The Port Number field can be left unset and will default to 7070, which is the default port number used by NVIDIA GRID License Server.Figure 7 Applying GRID Virtual Workstation licenseSelect Apply to assign the settings. The system will request a license from the configured license server and, if successful, GRID Virtual Workstation features are enabled (see Figure 8). If the system fails to obtain a license, refer to 4.3 for guidance on troubleshooting.Once configured in NVIDIA Control Panel, licensing settings persist across reboots.Figure 8 Success acquire of Virtual Workstation license3.2.1Disabling GRID Virtual WorkstationTo disable the GRID Virtual Workstation licensed feature, open NVIDIA Control Panel; in the Manage License task, select Tesla (unlicensed) and select Apply (see Figure 9). The setting does not take effect until the next time the system is shutdown or rebooted; GRID Virtual Workstation features remain available until then.Figure 9 Disabling GRID Virtual Workstation3.3LICENSING ON LINUXTo license GRID Virtual Workstation, edit /etc/nvidia/gridd.conf:[nvidia@localhost ~]$ sudo vi /etc/nvidia/gridd.confSet ServerURL to the address and port number of your local NVIDIA GRID License Server. The address can be a fully-qualified domain name such as, or an IP address such as 10.31.20.45. The port number is appended to the address with a colon, for example :7070.Set FeatureType to 2, to license GRID Virtual Workstation:Figure 10 Sample gridd.conf for GRID Virtual WorkstationRestart the nvidia-gridd service:[nvidia@localhost ~]$ sudo service nvidia-gridd restartThe service should automatically obtain a license. 
This can be confirmed with log messages written to /var/log/messages, and the GPU should now exhibit Virtual Workstation display output and resolution capabilities:[nvidia@localhost ~]$ sudo grep gridd /var/log/messages…Sep 13 15:40:06 localhost nvidia-gridd: Started (10430)Sep 13 15:40:24 localhost nvidia-gridd: License acquired successfully.Once configured in gridd.conf, licensing settings persist across reboots and need only be modified if the license server address changes, or the VM is switched to running GPU passthrough.Chapter 4.This chapter discusses advanced topics and settings for GRID licensing.4.1LICENSES OBTAINED AFTER BOOTUnder normal operation, a GRID license is obtained by a platform during boot, prior to user login and launch of applications. If a license is not available, indicated by a popup on Windows or log messages on Linux, the system will periodically retry its license request to the license server. During this time, GRID vGPU runs at reduced capability described in section 2.1; similarly, GRID Virtual Workstation features described in section 3.1 are not available.When a license is obtained, the licensed features are dynamically enabled and become available for immediate use. However, any application software launched prior to the license becoming available may need to be restarted in order to recognize and utilize the licensed features.4.2OPERATING WITH INTERMITTENTCONNECTIVITY TO THE LICENSE SERVERGRID vGPU and Virtual Workstation clients require connectivity to a license server when booting, in order to check out a license. Once booted, clients may operate without connectivity to the license server for a period of up to 7 days, after which time the client will warn of license expiration.4.3APPLYING WINDOWS LICENSE SETTINGS VIAREGISTRYGRID licensing settings can be controlled via the Windows Registry, removing the need for manual interaction with NVIDIA Control Panel. Settings are stored in this registry key:HKEY_LOCAL_MACHINE\SOFTWARE\NVIDIA Corporation\Global\GridLicensingRegistry values are summarized in Table 2.Table 2 Licensing registry settingsFigure 11 shows an example of configuring virtual GPU licensing settings in the registry. Note it is sufficient to simply configure FeatureType = 1 (GRID vGPU) and set the license server address in ServerAddress.Figure 11 Configuring vGPU licensing via registry settingsChapter 5.This chapter describes basic troubleshooting steps.5.1KNOWN ISSUESBefore troubleshooting or filing a bug report, review the release notes that accompany each driver release, for information about known issues with the current release, and potential workarounds.5.2TROUBLESHOOTING STEPSIf a GRID system fails to obtain a license, investigate the following as potential causes for the failure:④Check that the license server address and port number are correctly configured.④Run a network ping test from the GRID system to the license server address to verifythat the system has network connectivity to the license server.④Verify that the date and time are configured correctly on the GRID system. 
If the timeis set inaccurately or is adjusted backwards by a large amount, the system may fail to obtain a license.④Verify that the license server in use has available licenses of the type required by theGRID feature the GRID system is configured to use.GRID LICENSING DU-07757-001| 17 NoticeALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FORA PARTICULAR PURPOSE.Information furnished is believed to be accurate and reliable. However, NVIDIA Corporation assumes no responsibility for the consequences of use of such information or for any infringement of patents or other rights of third parties that may result from its use. No license is granted by implication of otherwise under any patent rights of NVIDIA Corporation. Specifications mentioned in this publication are subject to change without notice. This publication supersedes and replaces all other information previously supplied. NVIDIA Corporation products are not authorized as critical components in life support devices or systems without express written approval of NVIDIA Corporation.HDMIHDMI, the HDMI logo, and High-Definition Multimedia Interface are trademarks or registered trademarks of HDMI Licensing LLC.OpenCLOpenCL is a trademark of Apple Inc. used under license to the Khronos Group Inc.TrademarksNVIDIA and the NVIDIA logo are trademarks or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.Copyright© 2015 NVIDIA Corporation. All rights reserved.。
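The registry-based configuration described above (the FeatureType and ServerAddress values under HKEY_LOCAL_MACHINE\SOFTWARE\NVIDIA Corporation\Global\GridLicensing) can also be scripted rather than set by hand. A minimal sketch, assuming it runs inside the Windows guest with Administrator rights; the server address below is the example value used in this guide:

```python
import winreg  # Windows-only module; run inside the guest with Administrator rights

KEY_PATH = r"SOFTWARE\NVIDIA Corporation\Global\GridLicensing"

def configure_grid_licensing(server_address: str, feature_type: int) -> None:
    """Write GRID licensing settings to the registry, as described in the guide above."""
    key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE)
    with key:
        # FeatureType: 1 = GRID vGPU, 2 = GRID Virtual Workstation (per this guide)
        winreg.SetValueEx(key, "FeatureType", 0, winreg.REG_DWORD, feature_type)
        winreg.SetValueEx(key, "ServerAddress", 0, winreg.REG_SZ, server_address)

if __name__ == "__main__":
    configure_grid_licensing("10.31.20.45", 1)  # example license server address from the guide
```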
桌面云扩容项目需求
一、设备清单
二、投标供应商资格要求
1、具有独立承担民事责任的能力;具有良好的商业信誉和健全的财务会计制度;具有履行合同所必需的设备和专业技术能力;有依法缴纳税收和社会保障资金的良好记录;
2、投标时提供所投设备的原厂质保服务承诺函,原件在签订合同时提供;
三、付款方式
验收合格后,在30个工作日内支付全部货款。
1、桌面云服务器
2、桌面云超融合操作系统
3、桌面云终端
4、桌面云控制器
5、显卡虚拟化
运维功能:支持删除用户时,可选择是否删除关联的虚拟机;如果不删除,虚拟机处于闲置状态,可重新关联给其他用户。
★支持与上网行为管理系统联动认证,用户成功登录VDI后,上网行为管理自动同步用户认证信息,后台记录用户ID,后续上网无需再次认证登录。
★支持虚拟机自由分组、支持修改计算机名称、支持根据虚拟机名称、关联用户、用户描述、IP、MAC、所属资源、所属平台等信息搜索虚拟机。
★支持批量删除虚拟机、批量设置虚拟机IP地址、批量关联/解关联虚拟机用户。
支持开机状态下编辑虚拟机配置,重启生效。
★支持设置虚拟机开关机计划,一方面可避免并发开机IO风暴问题,另一方面可释放服务器资源,支持关机前会弹框提醒用户,如果用户正在使用不想关机可手动选择"取消"。
★支持数据中心报表功能,能够查询一段时间的并发用户会话、用户使用时长、服务器负载情况、存储利用率、存储性能情况等。
支持报表导出。
(提供功能截图)★支持升级前和升级后的健康检查,确保升级不出问题。
注:★项为必须满足项,须与现有桌面系统无缝结合。
五、验收要求
在接到供应商以书面形式提出验收申请后,在5个工作日内及时组织相关专业技术人员,必要时邀请采购中心、质检等部门共同参与验收,并出具验收报告,作为支付货款的依据。
六、售后服务及其他(含安装、调试、培训、维护等)
1、质保期为3年(自交货并验收合格之日起计);
2、其他售后服务要求:中标人必须负责所采购设备的运输,并负责进场卸货、安装、调试、检测、培训,合格交付使用,并提供标书要求年限的免费保修。
虚拟化技术有哪些
虚拟化技术是一种将物理硬件资源抽象化并作为虚拟实例进行管理的技术。
通过虚拟化,计算机系统可以利用硬件资源的高度利用率,提高应用程序的可靠性、性能和灵活性。
在云计算和大数据时代,虚拟化技术得到了广泛的应用。
1. 服务器虚拟化
服务器虚拟化是最常见的虚拟化技术之一。
它将一台物理服务器分割成多个虚拟机,在每个虚拟机中运行不同的操作系统和应用程序。
通过这种方式,可以更有效地利用硬件资源,并简化服务器的管理和维护。
常见的服务器虚拟化软件包括VMware ESXi、Microsoft Hyper-V和KVM等。
2. 桌面虚拟化
桌面虚拟化技术将用户的桌面计算环境从物理设备中解耦,使其能够在虚拟机中运行。
这种技术可以提供更安全、更灵活的工作环境,同时降低维护和管理成本。
桌面虚拟化可以通过远程桌面协议或虚拟化客户端软件来实现。
常见的桌面虚拟化软件包括VMware Horizon、Microsoft Remote Desktop和Citrix XenDesktop等。
3. 网络虚拟化
网络虚拟化是一种将网络资源进行抽象化和隔离的技术。
通过网络虚拟化,可以将一台物理网络划分为多个独立的虚拟网络,每个虚拟网络具有独立的拓扑结构、IP地址和访问控制策略。
这样可以提高网络资源的利用率,并简化网络管理和配置。
常见的网络虚拟化技术包括虚拟局域网(VLAN)、虚拟路由器和软件定义网络(SDN)等。
4. 存储虚拟化
存储虚拟化技术将多个存储设备抽象化为一个逻辑的存储池,并为虚拟机提供统一的存储接口。
这种技术可以简化存储管理,提高存储资源的利用率,并增加存储的灵活性和可伸缩性。
常见的存储虚拟化技术包括虚拟存储区域网络(SAN)、网络文件系统(NFS)和存储虚拟化器(Storage Virtualization Appliance)等。
5. 数据库虚拟化
数据库虚拟化是一种将多个数据库实例抽象化为一个逻辑的数据库环境的技术。
通过数据库虚拟化,可以简化数据库管理和配置,实现数据的集中管理和共享,并提供更高的可用性和可扩展性。
基于HPE DL380的超融合系统介绍
HPE HC380 每节点存储配置
8个驱动器一组,单节点最多支持3组,即每节点可配置8、16或24个驱动器。
存储
存储模块说明:每模块可用容量(TB,8*SFF)
3年HPE Hyper Converged 380方案支持(必需)
HPE HC380 配置选件
机箱:DL380 Gen9 24SFF;2-16 同构节点
处理器:2 x E5-2600 v3;每节点12-36个CPU内核
内存:每节点128GB到1.5TB内存
存储:SAS硬盘或混合硬盘组合;每节点3.5T-25.2TB存储容量
网络:10Gb和1GB;SFP+ 和RJ45
图形加速:K1或K2,根据虚拟负载要求
电源:220V;冗余;HPE Power Advisor
网络
网卡端口类型:10GbE SFP+、10GbE Base-T、1GbE RJ45
网卡描述:当选择三个应用场景之一,将包括相应的网络配置
NVIDIA GRID Virtual GPU Technology (vGPU) 用户指南说明书
Solution GuideBalancing Graphics Performance, User Density & Concurrency with NVIDIA GRID™ Virtual GPU Technology (vGPU™) for Autodesk AutoCAD Power UsersV1.0Table of ContentsThe GRID vGPU benefit (3)Understanding vGPU Profiles (3)Benchmarking as a proxy for real world workflows (5)Methodology (6)Fully Engaged Graphics Workloads? (6)Analyzing the Performance Data to Understand How User Density Affects Overall Performance (7)Server Configuration (12)The GRID vGPU BenefitThe inclusion of vGPU™ support in XenDesktop 7.1 allows businesses to leverage the power of NVI DIA’s GRID™ technology to create a whole new class of virtual machines designed to provide end users with a rich, interactive graphics experience. By allowing multiple virtual machines to access the power of a single GPU within the virtualization server, enterprises can now maximize the number of users with access to true GPU based graphics acceleration in their virtual machines. Because each physical GPU within the server can be configured with a specific vGPU profile organizations have a great deal of flexibility in how to best configure their server to meet the needs of various types of end users.Up to 8 VMs can connect to the physical GRID GPU via vGPU profiles controlled by the NVIDIA vGPU Manager.While the flexibility and power of vGPU system implementations provide improved end user experience and productivity benefits, they also provide server administrators with direct control of GPU resource allocation for multiple users. Administrators can balance user density and performance, maintaining high GPU performance for all users. While user density requirements can vary from installation to installation based on specific application usage, concurrency of usage, vGPU profile characteristics, and hardware variation, it’s possible to run standardized benchmarking procedures to establish user density and performance baselines for new vGPU installations.Understanding vGPU ProfilesWithin any given enterprise the needs of individual users varies widely, a one size fits all approach to graphics virtualization doesn’t take these differences into account. One of the key benefits of NVIDIA GRID vGPU is the flexibility to utilize various vGPU profiles designed to serve the needs of different classes of end users. While the needs of end users can be quite diverse, for simplicity we can group them into the following categories: Knowledge Workers, Designers and Power Users.For knowledge workers key areas of importance include office productivityapplications, a rich web experience, and fluid video playback. Graphically knowledge workers have the least graphics demands, but they expect a similarly smooth, fluid experience that exists natively on today’s graphic accelerated devices such as desktop PCs, notebooks, tablets and smart phones.Power Users are those users with the need to run more demanding officeapplications; examples include office productivity software, image editing software like Adobe Photoshop, mainstream CAD software like Autodesk AutoCAD and product lifecycle management (PLM) applications. These applications are more demanding and require additional graphics resources with full support for APIs such as OpenGL and Direct3D.Designers are those users within an organization running demanding professional applications such as high end CAD software and professional digital contentcreation (DCC) tools. Examples include Autodesk Inventor, PTC Creo, Autodesk Revit and Adobe Premiere. 
Historically designers have utilized desktop workstations and have been a difficult group to incorporate into virtual deployments due to the need for high end graphics, and the certification requirements of professional CAD and DCC software.The various NVIDIA GRID vGPU profiles are designed to serve the needs of these three categories of users:Each GPU within a system must be configured to provide a single vGPU profile, however separate GPU’s on the same GRID board can each be configured separately. For example a single K2 board could be configured to serve eight K200 enabled VM’s on one GPU and two K260Q enabled VM’s on the other GPU.The key to efficien t utilization of a system’s GRID resources requires understanding the correct end user workload to properly configure the installed GRID cards with the ideal vGPU profiles maximizing both end user productivity and vGPU user density.The vGPU profiles with the “Q” suffix (K140Q, K240Qand K260Q), offer additional benefits not available inthe non-Q profiles, the primary of which is that Qbased vGPU profiles will be certified for professionalapplications. These profiles offer additional supportfor professional applications by optimizing thegraphics driver settings for each application usingNVIDIA’s Application Configuration Engine (ACE),ACE offers dedicated profiles for most professionalworkstation applications, once ACE detects thelaunch of a supported application it verifies that thedriver is optimally tuned for the best userexperience in the application. Benchmarking as a Proxy for Real World WorkflowsIn order to provide data that offers a positive correlation to the workloads we can expect to see in actual use, benchmarking test case should serve as a reasonable proxy for the type of work we want to measure. A benchmark test workload will be different based on the end user category we are looking to characterize. For knowledge worker workloads a reasonable benchmark is the Windows Experience Index, and for Power Users we can use the CADALYST benchmark for AutoCAD. The SPEC Viewperf benchmark is a good proxy for Designer use cases.To illustrate how we can use benchmark testing to help determine the correct ratio between total user density and workload performance we’ll look at a Power User workload using t he CADALYST benchmark, which tests performance within Autodesk AutoCAD 2014. The benchmark tests various aspects of AutoCAD performance by loading a variety of models an interacting with them within the application viewports in real-time.CADALYST offers many advantages for use as a proxy for end user workloads, it is designed to test actual real world models using various graphic display styles and return a benchmark score dedicated to graphics performance. Because the benchmark runs without user interaction once started it is an ideal candidate for multi-instance testing. As an industry standard benchmark, it has the benefit of being a credible test case, and since the benchmark shows positive scaling with higher end GPU’s it allows us to test various vGPU profiles to understand how profile selection affects both performance and density.MethodologyBy utilizing test automation scripting tools, we can automate launching the benchmark on the target VM’s. We can then automate launching the VM’s so that the benchmark is running on the target number of VM’s concurrently. Starting with a single active user per physical GPU, the benchmark is launched by the client VM and the results of the test are recorded. 
This same procedure is repeated by simultaneously launching the benchmark on additional VM’s and continuing to repeat these steps until the maximum number of vGPU accelerated VMs per GRID card (K1 or K2) is reached for that particular vGPU profile.Fully Engaged Graphics Workloads?When running benchmark tests, we need to determine whether our test nodes should be fully engaged with a graphics load or not. In typical real-world configurations the number of provisioned VM’s actively engaged in performing graphically intensive tasks will vary based on need within the enterprise environment. While possible, it is highly unlikely that every single provisioned VM is going to be under a high demand workload at any given moment in time.In setting up our benchmarking framework we have elected to utilize a scenario that assumes that every available node is fully engaged. While such heavy loading is unlikely to occur in a real world environment, it allows us to use a “worst case scenario” to plot our density vs. performance data.Analyzing the Performance Data to Understand How User Density Affects Overall PerformanceTo analyze the benchmark result data we take the sum of the CADALYST 3D result score from each VM and total them. The total is then divided by the total number of active VM’s to obtain an Average Score Per VM. In determining the impacts of density on overall benchmarking performance we plot thebenchmark as seen in the graphs below. For each plot we record the average CADALYST 3D score result, and indicate the percentage drop in performance compared to the same profile with a single active VM. In general the results below show that vGPU profiles with are targeted for Power Users and Designers, experience less performance falloff than profiles which are intended for use by Knowledge workers. In Example 1 below we analyze the data for the K240Q vGPU profile, one of the professional profiles available on the K2 GRID board. The performance trend for the K240Q profile shows that performance in the CADALYST benchmark when running the maximum number of VMs on a single K2 board (8), is only 10 percent slower than the performance of a single K240Q accelerated VM active on the server. Overall, adding additional CADALYST workloads to the system shows that performance is minimally impacted by scaling up the number of users on the system. While the blue line on the chart shows the averageCADALYST benchmark scores as measured across all active VMs, the Minimum and Maximum scores are also plotted on the graph. For the K240Q profile we see extremely little deviation between the average score and the min and max.Example 1 – Single K2 board allocated with K240Q vGPU profile (1024MB Framebuffer), each K1 board can support up to 8K240Q vGPU accelerated VMs.501001502002503003504004505000123456789A v e r a g e C A D A L Y S TB e n c h m a r k 3D G r a p h i c s I n d e x S c o r eNumber of Active VMsK240Q (1024MB Framebuffer) vGPU Profile Scaling Data Based on Performance of Multiple VM's Running CADALYST BenchmarkMeasured on a Single GRID K2 Graphics Boardk240Q AVGMin Max-4%-10%In Example 2 below is the CADALYST performance profile for the K140Q the professional profile for the K1 GRID board. The K140Q profile is configured with 1024MB of framebuffer per accelerated VM, the same as the K240Q. On a single K1 GRID board the performance profile is extremely similar between the K140Q and the K240Q profiles up to 8 active VMs, which is the maximum number of VMs supported on the K240Q. 
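The averaging described above (sum the per-VM CADALYST scores, divide by the number of active VMs, and report the percentage drop versus a single active VM) is straightforward to reproduce when post-processing benchmark logs. A small sketch, assuming the per-VM scores have already been collected into one list per concurrency level; the numbers shown are placeholders, not measured data:

```python
def summarize(scores_by_vm_count: dict) -> None:
    """Print average/min/max CADALYST score per active-VM count and the drop vs. one VM."""
    baseline = sum(scores_by_vm_count[1]) / len(scores_by_vm_count[1])
    for n in sorted(scores_by_vm_count):
        scores = scores_by_vm_count[n]
        avg = sum(scores) / len(scores)
        drop_pct = (1 - avg / baseline) * 100
        print(f"{n:2d} active VMs: avg={avg:.1f} min={min(scores):.1f} "
              f"max={max(scores):.1f} drop={drop_pct:.0f}%")

# Placeholder data (one list of per-VM scores per concurrency level), for illustration only
summarize({
    1: [450.0],
    4: [440.0, 436.0, 442.0, 438.0],
    8: [405.0, 410.0, 400.0, 408.0, 402.0, 407.0, 404.0, 406.0],
})
```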
Moving beyond 8 VM’s we see that although the average benchm ark scores continue to decline even with the maximum number of K140Q profiles running average scores only drop by 25% compared to a single K140Q accelerated VM running the benchmark on the server. The deviation between the average score and the min/max starts low but increase as more active VMs are added.Example 2 Single K1 board allocated with K140Q vGPU profile (1024MB Framebuffer), each K1 board can support up to 16K140Q vGPU accelerated VMs.Example 3 below shows the performance profile for a single K2 GRID board using K200 vGPUs. The chart shows that adding additional K200 accelerated VMs has minimal impact on performance up to VMs. At the maximum number of VMs supported on a single K2 board (16) there is a more pronounced falloff between 12 and 16 active VMs. Although the total percentage of performance drop at 16 VMs is similar between K200 and K140Q, the professional based K140Q profile offers higher average performance with the maximum number of VMs actively running the benchmark.010020030040050060001234567891011121314151617A v e r a g e C A D A L Y S TB e n c h m a r k 3D G r a p h i c s I n d e x S c o r eNumber of Active VMsK140Q (1024MB Framebuffer) vGPU Profile Scaling Data Based on Performance of Multiple VM's Running CADALYST Benchmark Measured on aSingle GRID K1 Graphics Boardk140Q AVGMinMax-4%-8%-19%-24%Example 3 – Single K2 board allocated with K200 vGPU profile (256MB Framebuffer), each K2 board can support up to 16K200 vGPU accelerated VMs.In Example 4 we see the performance profile for a single GRID K1 board configured with K100 vGPUs. The performance trend for the K100 profile shows that when loaded with eight actively engaged VMs (25% of a K1 board’s maximum capacity when allocated as a K100 vGPU profile), there is a noticeable 31% percent drop in average performance. However, after the initial performance drop, adding additional engaged VM’s up to the maximum of 32 only results in minimal additional falloff. Thisindicates that if application performance is acceptable with 8 engaged users, that adding more users to the system with similar workloads shouldn’t negatively affect the overall system perf ormance in a noticeable manner.In addition to the average score for all active VM’s, the graph also indicates the minimum and maximum scores recorded by all VM’s generating benchmark results. 
We can see that while the averageperformance drop between a single active VM and the maximum of 32 supported by a single K1 board is 39%, in some cases the performance drop was as little as 24%05010015020025030035040045001234567891011121314151617A v e r a g e C A D A L Y S TB e n c h m a r k 3D G r a p h i c s I n d e x S c o r eNumber of Active VMsK200 (256MB Framebuffer) vGPU Profile Scaling Data Based on Performance of Multiple VM's Running CADALYST BenchmarkMeasured on a Single GRID K2 Graphics Boardk200 avg min max-1%-5% -6%-23%Example 4 – Single K1 board allocated with K100 vGPU profile (256MB Framebuffer), each K1 board can support up to 32K100 vGPU accelerated VMs.Board Profile Maximum VMs per BoardRecommended range of VM'sper Board K1 K100 32 20 - 32K1 K140Q 16 12 - 16 K2 K200 16 10 - 16 K2 K240Q 8 7 - 8 K2K260Q43 - 4Table 1 – Maximum and recommended VM’s per GRID board by profile501001502002503003504004500123456789101112131415161718192021222324252627282930313233A v e r a g e C A D A L Y S TB e n c h m a r k 3D G r a p h i c s I n d e x S c o r eNumber of Active VMs K100 (256MB Framebuffer) vGPU Profile Scaling Data Based on Performance of Multiple VM's Running CADALYST BenchmarkMeasured on a Single GRID K1 Graphics Boardk100 AVGMinMax-31%-32%-33%-36%-39%Chart 1 – Average 3D benchmark score at recommended densities.Server ConfigurationDell R720Intel® Xeon® CPU E5-2670 2.6GHz, Dual Socket (16 Physical CPU, 32 vCPU with HT)Memory 384GBXenServer 6.2 Tech Preview Build 74074cVirtual Machine ConfigurationVM Vcpu : 4 Virtual CPUMemory : 11GBXenDesktop 7.1 RTM HDX 3D ProAutoCAD 2014CADALYST C2012 BenchmarkNVIDIA Driver: vGPU Manager : 331.24Guest driver : 331.82Additional GRID ResourcesWebsite –/vdiCertified Platform List –/wheretobuyISV Application Certification –/gridcertificationsGRID YouTube Playlist –/gridvideosHave issues or questions setting up or viewing demos? Contact the GRID demo team via email at ******************* or @NVIDIAGRID on Twitter.。
NVIDIA GRID GPU
TECH PREVIEW: vGPU ON VMware VSPHERE
vGPU running on vSphere
vSphere
NVIDIA GRID GPUs
GRID K1, K2
GRID certified server platforms
NVIDIA GRID K2
PROCESS Many deliveries to many houses
All Users Expect a Great Visual Experience!
Every notebook, tablet and smartphone has a GPU
GPUs deliver a better visual experience by offloading work that the CPU is not efficient at processing (DirectX, OpenGL, video)
8GB GDDR5 (4GB/GPU) 225 W Quadro K5000 (high end)
KNOWLEDGE WORKER
Memory Size Max Power Equivalent Quadro with Pass-through
DESIGNER
Market Size 25M
IMPORTANCE OF GPU
Why virtualize?
Awesome performance!
High cost
Hard to fully utilize, limited mobility
Challenging to manage
Data security can be a problem
(A superset of the drivers for VDI!)
True PC Graphics Anywhere On Any Device for Anyone
VDI/CLOUD
VIRTUAL MACHINE VIRTUAL DESKTOPS
NVIDIA GRID Enabled Virtual Desktop
NVIDIA Driver
NVIDIA GRID ENABLED Hypervisor
CPU
Optimized for Many Parallel Tasks
CPU Pizza Delivery
PROCESS Delivery truck delivers one pizza and then moves to next house
NVIDIA GPU Pizza Delivery
NVIDIA® GRID™
GPU 3D 桌面虚拟化方案
VMWare Horizon View
THE VISUAL COMPUTING COMPANY
NVIDIA
What is a GPU?
GPU Accelerator
Optimized for Serial Tasks
DESIGNER
Designer/Engineer
Profile: • View, create, manipulate and render complex 2D/3D graphics • Traditionally an NVIDIA Quadro® user on a desktop workstation • Have been a difficult group to incorporate into virtual deployments due to the need for high-end graphics and the certification requirements of professional CAD and digital content creation (DCC) software Typical Applications: • Autodesk Inventor, PTC Creo, Autodesk Revit and Adobe Premiere GRID Benefits: • GRID enables Designers to use those applications in a VDI environment without compromising graphics performance for GPU acceleration of Direct3D, OpenGL, CUDA applications
POWER USER
Power User
Profile: • View, edit and interface with 2D/3D graphics Typical Applications: • Office productivity software, image editing software like Adobe Photoshop, mainstream CAD software like Autodesk AutoCAD and product lifecycle management (PLM) applications • May also run the same applications as Designers but in view-only mode GRID Benefits: • GRID supports these applications in a VDI environment with full support for APIs such as OpenGL and Direct3D
Nice to Have Must Have
3D Engineering & Design Apps
POWER USER
Office Productivity
PLM & Volume Design
200M
KNOWLEDGE WORKER
Windows 7
Web
400M
Target Markets
WANT benefits of VDI…remoting, flexibility, security, manageability but NEED high-end graphics applications Solution: VDI + NVIDIA GRID
DESIGNER
NVIDIA GRID K1
POWER USER
GPU
4 Kepler GPUs
2 High End Kepler GPUs
CUDA Cores
768 (192/GPU)
16GB DDR3 (4GB/GPU) 130 W Quadro K600 (entry)
3072 (1536/GPU)
Desktop workstation Quadro GPU
Opportunity: Deliver Graphics in a Virtual Environment
Without NVIDIA GRID
Limited graphics in traditional VDI
With NVIDIA GRID