VMware vSphere ESXi 5 – What's New in vCenter Server
VMware Training and Certification
Agenda
– Introduction to VMware training services
– VMware certification
– VMware training paths and course development
– Getting the latest training schedule and registering
– 2010 Q2 partner DV and BC certification training offers
– Partner conference VCP4 exam offer
Introduction to VMware Training Services
2010 Q2 Partner DV and BC Certification Training Offers
Competency | Course name | Date/days | Time | Location
– | VMware View™: Install, Configure, Manage | 5/11–5/14 (4 days) | 9:30–17:30 | Beijing
– | VMware View™: Install, Configure, Manage | 5/17–5/20 (4 days) | 9:30–17:30 | Shanghai
– | VMware View: Design Best Practices | | |
How to Obtain the VCP4 Certification
1. Complete one of the designated courses. 2. Register with VUE and pass the dedicated VCP exam.
VMware Certification
Introduction to the VCDX4 (VMware Certified Design Expert) certification: the advanced certification for designing and deploying VMware enterprise-class virtualization architectures. The VCDX certification demonstrates the ability to:
– assess customer requirements and plan and design a virtualized infrastructure environment;
– design and plan enterprise-class, large-scale data centers;
– implement, test, document, and present a complete virtualization design;
– design an enterprise virtualization strategy that addresses the full set of business requirements.
It also greatly increases potential career advancement opportunities.
(Update in Q3/Q4)
VMware Training Paths and Course Development
vSphere Troubleshooting – V4 (4 days)
New!
• Intensive hands-on labs
• Builds the ability to handle real-world problems
• Focuses on diagnosing and correcting configuration problems generated by VMware® ESX™/ESXi hosts and VMware vCenter™ Server systems
• Like the vSphere: Install, Configure, Manage course, it satisfies the course requirement for VCP4 certification
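The course's diagnostic focus maps to a small set of host-side habits. A minimal sketch of the usual first steps on an ESXi host (standard ESXi shell commands; the log path shown is the ESXi 5.x location):

# Follow VMkernel messages while reproducing the problem
tail -f /var/log/vmkernel.log

# Collect a complete diagnostic bundle for offline analysis or a support case
vm-support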
HPE OmniStack 3.7.1 Interoperability Guide
HPE OmniStack 3.7.1 Interoperability Guide

Contents
- HPE SimpliVity 380 Gen9 and Gen10 Platform Firmware Interoperability
- Hypervisor Interoperability (vSphere)
  - HPE OmniStack Hypervisor Interoperability with vSphere ESXi
  - VMware vCenter Interoperability
  - vSphere License Interoperability
- HPE OmniStack Compatibility for Deployment and Upgrades
  - Supported Deployment Paths
  - Supported Upgrade Paths
- HPE OmniStack Host Interoperability Within and Across Clusters
- HPE OmniStack Component Build Versions

HPE SimpliVity 380 Gen9 and Gen10 Platform Firmware Interoperability
The following system firmware packages are supported by the HPE SimpliVity 380 platform.

FIRMWARE | 3.6.2 | 3.7.0 | 3.7.1
871790_001_spp-2016.10.0-SPP2016100.2016_1015.191.iso | Y | Y | N*
790-000107-lsi-a.iso | Y | Y | N*
790-000107-lsi-b.iso | Y | Y | N
790-000107-lsi-c.iso | Y | Y | Y
SVT380SP-2017_0810.03.iso | Y | Y | Y

1. Firmware is available for download from Hewlett Packard Enterprise Customer Support (/support/). * Indicates the ISO is no longer available for download.
2. Firmware packages do not contain firmware for the HPE OmniStack Virtual Accelerator card (OVA). OVA firmware is installed/updated automatically as part of the HPE OmniStack software installation process.

Note: The LSI Array Controller firmware required by the HPE OmniStack platform is listed below (it can be found after the specified HPE Server firmware version).

FIRMWARE PACKAGE: 871790_001_spp-2016.10.0-SPP2016100.2016_1015.191.iso
RELEASE DATE: 2017/04/07
NOTES: HPE SimpliVity 380 Gen9 support only
FIRMWARE DETAILS: System ROM P89 v2.30; Smart HBA H240 4.52; iLO 2.50; Power Mgmt Controller 1.0.9; Broadcom NIC 2.17.6; Intel NIC 1.10.8

FIRMWARE PACKAGE: SVT380SP-2017_0810.03.iso
RELEASE DATE: 2017/09/25
NOTES: HPE SimpliVity 380 Gen9 & Gen10 support
FIRMWARE DETAILS: Gen9 System ROM P89 v2.42; Gen10 System ROM U30 1.02; Gen9 Smart HBA H240 5.52; Gen10 HPE Smart Array E208i-p SR 1.06; Gen10 HPE Smart Array P408i-a SR 1.06; Gen9 iLO4 2.54; Gen10 iLO5 1.10; Gen9 Power Mgmt Controller 1.0.9; Gen10 Power Mgmt Controller 1.0.2; Broadcom NIC 2.18.15; Intel NIC 1.12.18

LSI FIRMWARE: *790-000107-lsi-a.iso
RELEASE DATE: 2017/04/07
NOTES: HPE SimpliVity 380 Gen9 support only
FIRMWARE DETAILS: LSI Package 24.18.0-0021; LSI Gas Gauge 6071-04A
* Indicates the ISO is no longer available for download.

LSI FIRMWARE: *790-000107-lsi-b.iso
RELEASE DATE: 2017/06/12
NOTES: HPE SimpliVity 380 Gen9 support only
FIRMWARE DETAILS: LSI Package 24.18.0-0021; LSI Gas Gauge 6071-04A; SAS Expander 3.01
* Indicates the ISO is no longer available for download.

LSI FIRMWARE: 790-000107-lsi-c.iso
RELEASE DATE: 2017/09/25
NOTES: HPE SimpliVity 380 Gen9 support only
FIRMWARE DETAILS: LSI Package 24.21.0-0012; LSI Gas Gauge 6071-04A; SAS Expander 3.14

Hypervisor Interoperability (vSphere)

HPE OmniStack Hypervisor Interoperability with vSphere ESXi

VSPHERE ESXI | BUILD | 3.6.2 | 3.7.0 | 3.7.1 | SOURCE¹
ESXi 6.0 Patch 4 | 4600944 | Y² | Y² | Y² | HPE
ESXi 6.0 Express Patch 7a | 5224934 | Y² | Y² | Y² | HPE
ESXi 6.0 Update 3a (ESXi 6.0 Patch 5) | 5572656 | Y | Y | Y | HPE
ESXi 6.5 | 4564106 | Y² | Y² | Y² | HPE
ESXi 6.5 Express Patch 1a | 5224529 | Y² | Y² | Y² | HPE
ESXi 6.5 U1 | 5969303 | Y | Y | Y | HPE

1. Source = location from which to obtain the ESXi package.
2. HPE SimpliVity 380 Gen9 support only.
3. All federated compute nodes must run a vSphere ESXi version of the same major release as the HPE OmniStack platform.
4. For supported ESXi upgrade paths, refer to the VMware Interoperability Matrix.

VMware vCenter Interoperability

VMWARE VCENTER | BUILD | 3.6.2 | 3.7.0 | 3.7.1
vCenter Server 6.0b | 2776511 | Y | Y | Y
vCenter Server 6.0 Update 1 | 3018524 | Y | Y | Y
vCenter Server 6.0 Update 2a | 4541947 | Y | Y | Y
vCenter Server 6.0 Update 3a | 5183549 | Y | Y | Y
vCenter Server 6.0 Update 3b | 5318200 | Y¹ | Y¹ | Y¹
vCenter Server 6.5 | 4602587 | Y | Y | Y
vCenter Server 6.5.0b | 5178943 | Y | Y | Y
vCenter Server 6.5 Update 1 | 5973321 | Y | Y | Y
vCenter Server Appliance 6.0b | 2776510 | Y | Y | Y
vCenter Server Appliance 6.0 Update 1 | 3018523 | Y | Y | Y
vCenter Server Appliance 6.0 Update 2a | 4541948 | Y | Y | Y
vCenter Server Appliance 6.0 Update 3a | 5183552 | Y | Y | Y
vCenter Server Appliance 6.0 Update 3b | 5326079 | Y¹ | Y¹ | Y¹
vCenter Server Appliance 6.5 | 4602587 | Y | Y | Y
vCenter Server Appliance 6.5.0b | 5178943 | Y | Y | Y
vCenter Server Appliance 6.5 U1 | 5973321 | Y | Y | Y

1. Requires vSphere Web Client Plug-in version 13.1.90 or higher.
2. For supported vCenter upgrade paths, refer to the VMware Interoperability Matrix.

vSphere License Interoperability

VSPHERE LICENSE | 3.6.2 | 3.7.0 | 3.7.1
vSphere Standard | Y | Y | Y
vSphere Enterprise Plus | Y | Y | Y
vSphere with Operations Management Enterprise Plus | Y | Y | Y
vSphere Essentials¹ | Y | Y | Y
vSphere Essentials Plus¹ | Y | Y | Y
vSphere Remote Office Branch Office | Y | Y | Y
vSphere Remote Office Branch Office Advanced | Y | Y | Y

1. Essentials/Essentials Plus licenses are restricted to 3 hosts maximum, do not support linked-mode vCenter, and do not support VMware Storage Accelerated (VAAI) clones.

HPE OmniStack Compatibility for Deployment and Upgrades

Supported Deployment Paths

RELEASE | DL380 Gen9 Small Flash | DL380 Gen9 Medium Flash | DL380 Gen9 Large Flash | DL380 Gen10 X-Small | DL380 Gen10 Small | DL380 Gen10 Medium | DL380 Gen10 Large
3.6.2 | Y | Y | Y | N | N | N | N
3.7.0 | Y | Y | Y | N | N | N | N
3.7.1 | Y | Y | Y | Y | Y | Y | Y

Supported Upgrade Paths

FROM \ TO | 3.7.0 | 3.7.1
3.6.2 | Y | Y
3.7.0 | | Y

HPE OmniStack Host Interoperability Within and Across Clusters

HPE OMNISTACK MODEL | WITHIN THE SAME CLUSTER | ACROSS ALL CLUSTERS
HPE OmniStack Small Flash | All HPE OmniStack Small Flash; all HPE OmniStack Small⁵; OmniCube CN-2400-F; OmniCube CN-2400⁵ | All HPE OmniStack models; all OmniCube models
HPE OmniStack Medium Flash | All HPE OmniStack Medium Flash; OmniCube CN-3400-F | All HPE OmniStack models; all OmniCube models
HPE OmniStack Large Flash | All HPE OmniStack Large Flash; all HPE OmniStack Large⁴; all HPE OmniStack Medium⁴; OmniCube CN-5400-F; OmniCube CN-5400⁴; OmniCube CN-3400⁴ | All HPE OmniStack models; all OmniCube models

1. Only models of equal socket count are supported in the same cluster.
2. As a best practice, use the same CPU model within a single cluster; for example, avoid mixing E5-2697v2 with E5-2697v3.
3. All hosts in a cluster should contain equal amounts of CPU and memory.
4. Hybrid Medium and Large can be mixed with new Large Flash in the same cluster in the same federation.
5. Hybrid Small can be mixed with new Small Flash in the same cluster in the same federation. (HPE OmniStack 3.7.1 only supports HPE SimpliVity 380 Gen9 and Gen10 platforms.)

HPE OmniStack Component Build Versions

VERSION | HPE OMNISTACK SOFTWARE | ARBITER SOFTWARE | VSPHERE WEB CLIENT PLUGIN | UPGRADE MANAGER | DEPLOYMENT MANAGER | DEPLOY INSTALLER
3.6.2 | 3.6.2.239 / 3.6.2.245 | 3.6.2.21 | 10.23.6 / 10.23.8 | 3.6.2.138 / 3.6.2.145 | 3.6.2.494 / 3.6.2.503 | 8.17.15 / 8.18.3
3.7.0 | 3.7.0.260 | 3.7.0.37 | 13.1.90 | 3.7.0.226 | 3.7.0.513 | 8.34.8
3.7.1 | 3.7.1.60 | 3.7.1.23 | 13.2.100 | 3.7.1.72 | 3.7.1.134 | 8.36.37
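When validating a host or vCenter Server against the matrices above, the build number is the value to compare. A minimal sketch, assuming shell access to both systems (the vpxd binary path can vary by appliance version):

# On an ESXi host: prints the version and build,
# e.g. "VMware ESXi 6.5.0 build-5969303" (the ESXi 6.5 U1 row above)
vmware -vl

# On a vCenter Server Appliance: prints the vCenter version and build
/usr/lib/vmware-vpx/vpxd -v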
HPE OmniStack 3.7.1, November 2017
© Copyright 2017 Hewlett Packard Enterprise Development LP.

The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

This document contains confidential and/or legally privileged information. It is intended for Hewlett Packard Enterprise and Channel Partner internal use only. If you are not an intended recipient as identified on the front cover of this document, you are strictly prohibited from reviewing, redistributing, disseminating, or in any other way using or relying on the contents of this document.

November 2017
VMware vSphere 6 Foundations Beta Exam 2V0-620 Question Bank
VMware 2V0-620 Exam

QUESTION NO: 1
By default, each ESXi 6.x host is provisioned with a certificate from which root certificate authority?
A. RedHat Certificate Authority
B. VMware Certificate Authority
C. DigiCert Certificate Authority
D. Verisign Certificate Authority
Answer: B

QUESTION NO: 2
Which vSphere 6 Enterprise Edition feature will allow an organization to ensure that critical multi-threaded applications have the maximum possible uptime?
A. Fault Tolerance
B. High Availability
C. Distributed Resource Scheduler
D. App HA
Answer: A

QUESTION NO: 3
Which vSphere 6 Standard Edition feature will allow an organization to ensure that critical multi-threaded applications have the maximum possible uptime?
A. Fault Tolerance
B. High Availability
C. Distributed Resource Scheduler
D. App HA
Answer: B

QUESTION NO: 4
Which vSphere 6.x feature will allow an organization to utilize native snapshots?
A. Virtual Volumes
B. Virtual SAN
C. VMFS3
D. VMFS5
Answer: A

QUESTION NO: 5
Which two components can be used when configuring Enhanced Linked Mode? (Choose two.)
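For Question 1, the answer can be checked on a live host: a default ESXi 6.x installation keeps its machine certificate at /etc/vmware/ssl/rui.crt, and on an unmodified host the issuer points back to the VMware Certificate Authority (VMCA). A minimal sketch:

# Print the issuer and subject of the ESXi host certificate
openssl x509 -in /etc/vmware/ssl/rui.crt -noout -issuer -subject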
Virtualization Certification
DRS Cluster Settings: EVC
EVC (Enhanced vMotion Compatibility) is a cluster feature that prevents vMotion migrations from failing because of incompatible CPUs.
The DRS migration threshold determines how aggressively DRS recommends (and, in fully automated mode, performs) virtual machine migrations.
DRS Cluster Settings: DRS Groups
A DRS group is either:
- a group of virtual machines, or
- a group of hosts.
A virtual machine can belong to multiple virtual machine DRS groups. A host can belong to multiple host DRS groups.
DRS Cluster Prerequisites
CommandCenter Secure Gateway: Quick Setup Guide for the CC-SG Virtual Appliance (No License Server). This quick setup guide explains how to install and configure CommandCenter Secure Gateway.
For details on any aspect of CommandCenter Secure Gateway, see the CommandCenter Secure Gateway User Guide, which you can download from the "Firmware and Documentation" section of the Raritan website (/support/firmware-and-documentation/).
This installation covers new deployments of the virtual CC-SG appliance with local licenses, in a configuration without a license server. Existing users who want to eliminate their license server should first upgrade to the latest version, then follow the instructions beginning at Get Your License (see "Get Your License", p. 3), noting where procedures differ when migrating from served to not-served licenses. You must contact Raritan Technical Support to rehost your CC-SG licenses before you can migrate to the not-served configuration.

Requirements
1. ESX/ESXi 4.1/5.0/5.1 to deploy the CommandCenter Secure Gateway virtual appliance:
   - Must have a datastore with 40GB minimum available
   - Must have 2GB memory available
   - 2 physical NICs in the server (ESX/ESXi networking refers to these as "vmnic")
   - A high-availability cluster with access to shared storage is recommended. Fault tolerance may also be used. See CC-SG Administrators Help, "Using VMware High Availability or Fault Tolerance with a CC-SG Virtual Appliance": /help/ccsg/v5.4.0/en/#25713
2. A client computer running vSphere Client 4.1/5.0/5.1.
3. The virtual appliance .OVF file, which is available at /support/commandcenter-secure-gateway. See Download Installation Files for details.
   - CommandCenter Secure Gateway Virtual Appliance link: you must log in to the Raritan Software License Key Management site to view this link. See Get Your License.

Download Installation Files
You can obtain the complete set of installation files from the "CommandCenter Secure Gateway Virtual Appliance" link at /support/CommandCenter-Secure-Gateway/.
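As an alternative to deploying the .OVF through the vSphere Client, VMware's OVF Tool can push it from the command line. A minimal sketch; the datastore, network, host, and file names below are placeholders, not values from this guide:

# Deploy the CC-SG virtual appliance OVF to an ESX/ESXi host with ovftool
ovftool \
  --datastore=datastore1 \
  --network="VM Network" \
  CommandCenter-Secure-Gateway.ovf \
  vi://root@esxi-host.example.com/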
vSphere 5.x Storage Space Reclamation, Using Dell EqualLogic (EQL) as an Example
Recently, many internal cases have raised the topic of space reclamation. The trigger: customers see one free-space figure in the EQL management interface (other arrays behave the same) and a different one inside VMware. The reason: EQL is block-level storage, not a file system. When a file is deleted, ESXi does not tell the underlying array to release the blocks that the file occupied. The relevant term (new relative to ESXi 4) is "unmap", and starting with ESXi 5.x a tool can reclaim this space manually.
So the free space seen in ESXi differs from what the array shows.
In general, the free space ESXi reports is larger than the free space the array reports. The workaround, or rather the tool, is at the end of this post.

ARRAY: GUI space usage differs from what the OS shows

Q: Why is there a difference between what my file system shows as space used and what the PS array GUI shows as in-use for the volume?

A: The PS array is block storage, and only knows about areas of a volume that have ever been written. The PS Series GUI reports this information for each volume. Volume allocation grows automatically due to application data writes. If the application later frees up space, the space is not marked as unused in the PS Series GUI. Hence the difference in views between the OS/file system and the PS Series GUI. With thin-provisioned volumes, this perception can be more pronounced.

Thin provisioning is a storage virtualization and provisioning model that allows administrators to logically allocate large addressable storage space to a volume, yet not physically commit storage resources to this space until it is used by an application. For example, using thin provisioning you can create a volume that an application views as 3 TB, while only allocating 300 GB of physical storage to it. As the operating system writes to the volume, physical disk space is allocated to the volume by the storage array. This physical disk space is taken from the available free space in the pool automatically and transparently. As a result, less physical storage is needed over time, and the stranded-storage problem is eliminated. The administrator enjoys management benefits similar to over-provisioning, yet maintains the operational efficiencies of improved physical storage utilization. This more efficient use of physical storage resources typically allows an organization to defer or reduce storage purchases.

So thin provisioning is a forward-planning tool for storage allocation in which all the storage an application will need is allocated up front, eliminating the trauma of expanding available storage in systems that do not support online expansion. Because the administrator initially provisions the application with all the storage it will need, repeated data-growth operations are avoided. Most important, because of the difference between reality and perception, anyone involved with thin-provisioned storage must be aware of the duality in play. If all players are not vigilant, someone could start drawing on the un-provisioned storage, exceeding capacity, disrupting operations, or requiring additional unplanned capital investments.

A thin-provisioned volume also grows automatically due to application data writes; the space is drawn from the pool free space (rather than having been pre-allocated as in a normal volume). If the application later frees up space, the space is free in the file system but is not returned to the free space in the PS Series pool. The only way to reduce the physical allocation in the SAN is to create a new volume, copy the application data from the old volume to the new one, and then delete the old volume.

A similar problem is when the initiator OS reports significantly more space in use than the array does. This can be pronounced in systems like VMware that create large, sparse files. In VMware, if you create a 10GB disk for a VM as a VMDK file, VMware does not write 10GB of zeros to the file. It creates an empty (sparse) 10GB file and subtracts 10GB from free space. The act of creating the empty file only touches a few MB of actual sectors on the disk.
So VMware says 10GB is missing, but the array says, perhaps, only 2MB has been written. Since the minimum volume reserve for any volume is 10%, the file system has a long way to go before the MB-scale writes catch up with the minimum reservation of a volume. For instance, a customer with a 100GB volume might create 5 VMs with 10GB disks. That's 50GB used according to VMware, but perhaps only 5 x 2MB (10MB) written to the array. Until the customer starts filling the VMDK files with actual data, the array won't know anything is there. It has no idea what VMFS is; it only knows what has been written to the volume.

• Example: A file share is thin-provisioned with 1 TB logical size. Data is placed into the volume so that the physical allocation grows to 500 GB. Files are then deleted from the file system, reducing the reported file-system usage to 100 GB. The remaining 400 GB of physical storage remains allocated to this volume in the SAN.
• This issue can also occur with maintenance operations, including defragmentation, database re-organization, and other application operations.

In most environments, file systems do not dramatically reduce in size, so this issue occurs infrequently. Also, some file systems will not make efficient re-use of previously allocated space, and may not reuse deleted space until they run out of unused space (this is not an issue for NTFS or VMFS).

A related question is how this previously used, but now not-in-use, space can be reclaimed in the SAN. Today this requires creating a new volume and copying the volume data from old to new, which likely requires the applications/users of the file system to be offline (a planned maintenance window). In the future, file systems such as NTFS in Windows Server 2008 will allow online shrink in addition to the existing online grow operations. With that procedure, free space could be reclaimed online (perhaps during non-peak times): the file system is shrunk online, then re-grown online to its original size. This would reclaim the volume space; however, there may be a delay in returning the freed space to the pool if snapshots are present, as they hold the released space until the snapshots age (and are automatically deleted).

In some cases the amount of space used on the array will show LESS than what is shown by the OS, for example with VMware ESX. When ESX 3.x creates a 20GB VMDK, it doesn't actually write 20GB of data to the volume. A small file is created, then ESX tells the file allocation table that 20GB has been allocated. Over time, as data is actually written and deleted, this swings back the other way, and ESX reports less space allocated than what the array GUI indicates.
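The tool promised at the top of the post is the VMFS dead-space reclamation (unmap) support added in ESXi 5.x. A minimal sketch, using a placeholder datastore name:

# ESXi 5.0/5.1: run vmkfstools from the root of the datastore;
# the argument is the percentage of free space to use for reclamation.
cd /vmfs/volumes/datastore1
vmkfstools -y 60

# ESXi 5.5 and later: the same operation moved to esxcli.
esxcli storage vmfs unmap -l datastore1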
Server Virtualization System Implementation Plan V1.0
Contents
I. Project Background and Objectives
II. System Architecture Design
III. Virtualization Platform Construction Plan
IV. Implementation Schedule
V. Risk Management
VI. Project Budget
VII. Project Management

Project Background and Objectives
This project aims to build an efficient, stable, and scalable server virtualization platform to improve server resource utilization and management efficiency, and to reduce IT and maintenance costs.
System Architecture Design
Built on the XXX vSphere platform, the system adopts a centralized management architecture with three layers: vCenter Server, ESXi hosts, and virtual machines. vCenter Server, the core component of the virtualization platform, manages and monitors all ESXi hosts and virtual machines. The ESXi hosts carry the virtual machines and provide compute, storage, and network resources. The virtual machines are the entities on which users run applications; they can be created, deleted, and migrated at any time.
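To make the three layers concrete: from the ESXi shell, a host can enumerate the virtual machines it is carrying. A minimal sketch using standard host commands:

# List all VMs registered on this host (ID, name, datastore path)
vim-cmd vmsvc/getallvms

# List the VM processes currently running on the host
esxcli vm process list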
Virtualization Platform Construction Plan

1. Hardware selection
To guarantee the performance and reliability of the virtualization platform, we selected brand-name servers and storage devices and configured them according to the construction plan: servers with dual Intel Xeon processors, 128GB of memory, and multiple SAS disks; storage built on high-reliability SAS disks with a RAID 5 scheme.

2. Virtualization software
We chose the XXX vSphere platform as the virtualization software; its mature technology and broad adoption provide a solid foundation. We have also customized the vSphere configuration to meet our actual requirements.

3. Network architecture
To ensure virtual machine network performance and reliability, we designed a dual-NIC architecture: one NIC carries internal virtual machine traffic, and the other carries traffic between the virtual machines and the external network. We also use VLANs to isolate virtual machine network traffic, as sketched below.
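On a standard vSwitch, the dual-NIC/VLAN design above reduces to a handful of esxcli calls. A minimal sketch with placeholder uplink, port group, and VLAN ID values:

# Attach the two physical NICs as uplinks
esxcli network vswitch standard uplink add -v vSwitch0 -u vmnic0
esxcli network vswitch standard uplink add -v vSwitch0 -u vmnic1

# One port group per traffic class, isolated by VLAN ID
esxcli network vswitch standard portgroup add -v vSwitch0 -p VM-Internal
esxcli network vswitch standard portgroup set -p VM-Internal --vlan-id 100
esxcli network vswitch standard portgroup add -v vSwitch0 -p VM-External
esxcli network vswitch standard portgroup set -p VM-External --vlan-id 200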
Implementation Schedule
The implementation is divided into three phases: planning and design; hardware procurement and system deployment; and system testing and optimization. The entire project is expected to be completed within six months.

Risk Management
To avoid risks during implementation, we have developed a detailed risk management plan covering risk identification, risk assessment, risk response, and risk monitoring. We will also perform regular risk assessment and monitoring to keep the implementation on track.

Project Budget
The total budget is 3,000,000 RMB: 2,000,000 RMB for hardware procurement and system deployment, 800,000 RMB for software and services, and 200,000 RMB for project management and other costs.
VCP550 Question Bank
QUESTION NO: 1
An administrator is planning a vSphere infrastructure with the following specific networking requirements:
- The ability to shape inbound (RX) traffic
- Support for Private VLANs (PVLANs)
- Support for LLDP (Link Layer Discovery Protocol)
What is the minimum vSphere edition that will support these requirements?
A. vSphere Essentials Plus
B. vSphere Standard
C. vSphere Enterprise
D. vSphere Enterprise Plus
Answer: D
(These features require the vSphere Distributed Switch, which is included only with Enterprise Plus.)

QUESTION NO: 2
Which two IT infrastructure components are virtualized by vSphere Essentials? (Choose two.)
A. Networks
B. Applications
C. Storage
D. Management
Answer: A, C

QUESTION NO: 3
An administrator attempts to install the vCenter Single Sign-On server. The installer returns an error message indicating that the installation failed, although all setup prerequisites were met. The administrator has generated a vCenter Server Single Sign-On support bundle. Which two files should the administrator analyze to determine the cause of the failure? (Choose two.)
A. Server\utils\logs\imsTrace.log
B. Server\utils\logs\install.txt
C. %TEMP%\utils\logs\vminst.log
D. %TEMP%\vminst.log
Emulex CN4052S and CN4054S 10Gb VFA5.2 Adapters Product Guide
Emulex CN4052S and CN4054S 10Gb VFA5.2 Adapters for Flex System
Product Guide

The CN4054S 4-port and CN4052S 2-port 10Gb Virtual Fabric Adapters are VFA5.2 adapters that are supported on ThinkSystem and Flex System compute nodes.

The CN4052S can be divided into up to eight virtual NIC (vNIC) devices per port (for a total of 16 vNICs), and the CN4054S can be divided into four vNICs per port (for a total of 16 vNICs). Each vNIC can have flexible bandwidth allocation. These adapters also feature RDMA over Converged Ethernet (RoCE) capability and support the iSCSI and FCoE protocols, either as standard or with the addition of a Features on Demand (FoD) license upgrade.

The adapters are shown in the following figure. The CN4054S and CN4052S look the same.

Figure 1. Flex System CN4054S and CN4052S 10Gb Virtual Fabric Adapters

Did you know?
The CN4054S and CN4052S are based on the new Emulex XE100-P2 "Skyhawk P2" ASIC, which enables better performance, especially with the new RDMA over Converged Ethernet v2 (RoCE v2) support. In addition, these adapters are supported by Lenovo XClarity Administrator, which allows you to deploy adapter settings more easily and incorporate the adapters into configuration patterns.

The CN4052S adapter supports 8 vNICs per port using UFP or vNIC2 with adapter firmware 10.6 or later, for a total of 16 vNICs. The CN4054S supports 4 vNICs per port.

In pNIC mode, an adapter with the FoD upgrade applied operates in traditional Converged Network Adapter (CNA) mode, with four ports (CN4054S) or two ports (CN4052S) of Ethernet and four ports (CN4054S) or two ports (CN4052S) of iSCSI or FCoE available to the operating system.

Server support
The following table lists the ThinkSystem and Flex System compute nodes that support the adapters. Columns, left to right: x240 (8737, E5 v2) | x240 (7162) | x240 M5 (9532, E5 v3) | x240 M5 (9532, E5 v4) | x440 (7167) | x880/x480/x280 X6 (7903) | x280/x480/x880 X6 (7196) | SN550 (7X16) | SN850 (7X15) | SN550 V2 (7Z69).

Table 2. Support for Flex System compute nodes

Adapters (ThinkSystem and Flex System compute nodes):
01CV780 Flex System CN4052S 2-port 10Gb Virtual Fabric Adapter Advanced: N N Y Y Y Y Y Y Y Y
00AG540 Flex System CN4052S 2-port 10Gb Virtual Fabric Adapter: N N Y Y Y N Y Y Y Y
00AG590 Flex System CN4054S 4-port 10Gb Virtual Fabric Adapter: Y Y Y Y Y Y Y Y Y Y

Features on Demand upgrades (Flex System compute nodes only):
00JY804 Flex System CN4052 Virtual Fabric Adapter SW Upgrade (FoD): Y Y Y Y Y Y Y N N N
00AG594 Flex System CN4054S 4-port 10Gb Virtual Fabric Adapter SW Upgrade (FoD): Y Y Y Y Y Y Y N N N

I/O module support
These adapters can be installed in any I/O adapter slot of a supported Flex System compute node. One or two compatible 1 Gb or 10 Gb I/O modules must be installed in the corresponding I/O bays in the chassis. The following table lists the switches that are supported. When connected to the 1 Gb switch, the adapter operates at 1 Gb speeds. When connected to the 40 Gb switch, the adapter operates at 10 Gb speeds.

To maximize the number of usable adapter ports, you may also need to order switch upgrades to enable additional ports. Alternatively, for CN4093, EN4093R, and SI4093 switches, you can use Flexible Port Mapping (FPM), a feature of Networking OS 7.8 or later that allows you to minimize the number of upgrades needed. See the Product Guides for the Flex System switches for more details about switch upgrades and FPM: https:///servers/blades/networkmodule
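Once an adapter is installed and partitioned, each enabled port or vNIC appears to ESXi as a separate vmnic. A minimal way to verify the enumeration from the host (generic esxcli, not Lenovo-specific tooling):

# List physical NICs as seen by ESXi: name, driver, link state, speed, MAC
esxcli network nic list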
The table below specifies how many ports of each adapter are enabled. For the CN4054S, to enable all 4 adapter ports, either upgrade the switch or use Flexible Port Mapping. Switches should be installed in pairs to maximize the number of ports enabled and to provide redundant network connections.

Table 3. I/O modules supported (part number, description: CN4052S ports† | CN4054S ports†)
4SG7A08868 Lenovo ThinkSystem NE2552E Flex Switch: 2 | 4
00FM514 Lenovo Flex System Fabric EN4093R 10Gb Scalable Switch: 2 | 4**
00FM510 Lenovo Flex System Fabric CN4093 10Gb Converged Scalable Switch: 2 | 4**
00FE327 Lenovo Flex System SI4091 10Gb System Interconnect Module: 2 | 2
00FM518 Lenovo Flex System Fabric SI4093 System Interconnect Module: 2 | 4**
90Y9346 Flex System EN6131 40Gb Ethernet Switch: 2 | 2
88Y6043 Flex System EN4091 10Gb Ethernet Pass-thru: 2 | 2
49Y4294 Flex System EN2092 1Gb Ethernet Scalable Switch: 2 | 4**
94Y5350 Cisco Nexus B22 Fabric Extender for Flex System: 2 | 2
00D5823* Flex System Fabric CN4093 10Gb Converged Scalable Switch: 2 | 4**
95Y3309* Flex System Fabric EN4093R 10Gb Scalable Switch: 2 | 4**
49Y4270* Flex System Fabric EN4093 10Gb Scalable Switch: 2 | 4**
95Y3313* Flex System Fabric SI4093 System Interconnect Module: 2 | 4**
94Y5212* Flex System EN4023 10Gb Scalable Switch: 2 | 4**

* Withdrawn from marketing.
† The number of adapter ports that will be enabled per adapter; requires that two switches be installed in the chassis.
** The use of 4 ports requires either a switch upgrade to enable additional ports or the use of Flexible Port Mapping to reconfigure the active ports.

The following table shows the connections between adapters installed in the compute nodes and the switch bays in the chassis.

Table 4. Adapter-to-I/O-bay correspondence
Slot 1: Port 1 → module bay 1; Port 2 → module bay 2; Port 3* → module bay 1; Port 4* → module bay 2
Slot 2: Port 1 → module bay 3; Port 2 → module bay 4; Port 3* → module bay 3; Port 4* → module bay 4
Slot 3 (full-wide compute nodes only): Port 1 → module bay 1; Port 2 → module bay 2; Port 3* → module bay 1; Port 4* → module bay 2
Slot 4 (full-wide compute nodes only): Port 1 → module bay 3; Port 2 → module bay 4; Port 3* → module bay 3; Port 4* → module bay 4

* Ports 3 and 4 (CN4054S only) require Upgrade 1 of the selected switch, where applicable. 14-port modules such as the EN4091 Pass-thru, the SI4091 switch, and the Cisco B22 only support ports 1 and 2 (and only when two I/O modules are installed).

The following figure shows the internal layout of the CN4054S and how the adapter ports are routed to the I/O module internal ports.
Note: INTD1 is not available on any currently shipping Flex System I/O modules.
Figure 2. Internal layout of the CN4054S adapter ports

The following figure shows the internal layout of the CN4052S and how the adapter ports are routed to the I/O module internal ports.
Note: INTD1 is not available on any currently shipping Flex System I/O modules.
Figure 3. Internal layout of the CN4052S adapter ports

The connections between the adapters installed in the compute nodes and the switch bays in the chassis are shown diagrammatically in the following figure, for half-wide servers (such as the x240 M5 with two adapters) and full-wide servers (such as the x440 with four adapters).
Figure 4. Logical layout of the interconnects between I/O adapters and I/O modules

Operating system support (continuation of the preceding support table). Columns, left to right: SN550 V2 | SN550 (Xeon Gen 2) | SN850 (Xeon Gen 2) | SN550 (Xeon Gen 1) | SN850 (Xeon Gen 1) | x240 M5 (9532) | x280/x480/x880 X6 (7196) | x440 (7167).

SUSE Linux Enterprise Server 12 SP2: N N N Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP2 with Xen: N N N Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP3: N N N Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP3 with Xen: N N N Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP4: N Y Y Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP4 with Xen: N Y Y Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP5: Y Y Y Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP5 with Xen: Y Y Y Y Y Y Y Y
SUSE Linux Enterprise Server 15: N Y Y Y Y Y Y N
SUSE Linux Enterprise Server 15 SP1: N Y Y Y Y Y Y N
SUSE Linux Enterprise Server 15 SP1 with Xen: N Y Y Y Y Y Y N
SUSE Linux Enterprise Server 15 SP2: Y Y Y Y Y Y Y N
SUSE Linux Enterprise Server 15 SP2 with Xen: Y Y Y Y Y Y Y N
SUSE Linux Enterprise Server 15 SP3: Y Y Y Y Y N N N
SUSE Linux Enterprise Server 15 SP3 with Xen: Y Y Y Y Y N N N
SUSE Linux Enterprise Server 15 SP4: Y Y Y Y Y N N N
SUSE Linux Enterprise Server 15 SP4 with Xen: Y Y Y Y Y N N N
SUSE Linux Enterprise Server 15 SP5: Y Y Y Y Y N N N
SUSE Linux Enterprise Server 15 SP5 with Xen: Y Y Y Y Y N N N
SUSE Linux Enterprise Server 15 with Xen: N Y Y Y Y Y Y N
Ubuntu 18.04.5 LTS: Y N N N N N N N
VMware vSphere Hypervisor (ESXi) 5.5: N N N N N Y N Y
VMware vSphere Hypervisor (ESXi) 6.0 U3: N N N Y Y Y N Y
VMware vSphere Hypervisor (ESXi) 6.5: N N N Y Y Y Y N
VMware vSphere Hypervisor (ESXi) 6.5 U1: N N N Y Y Y Y N
VMware vSphere Hypervisor (ESXi) 6.5 U2: N Y Y Y Y Y Y N
VMware vSphere Hypervisor (ESXi) 6.5 U3: N Y Y Y Y Y Y N
VMware vSphere Hypervisor (ESXi) 6.7: N N N Y Y Y N N
VMware vSphere Hypervisor (ESXi) 6.7 U1: N Y Y Y Y Y N N
VMware vSphere Hypervisor (ESXi) 6.7 U2: N Y Y Y Y Y N N
VMware vSphere Hypervisor (ESXi) 6.7 U3: Y Y Y Y Y Y N N
VMware vSphere Hypervisor (ESXi) 7.0: N Y Y Y Y N N N
VMware vSphere Hypervisor (ESXi) 7.0 U1: N Y Y Y Y N N N
VMware vSphere Hypervisor (ESXi) 7.0 U2: Y Y Y Y Y N N N
VMware vSphere Hypervisor (ESXi) 7.0 U3: Y Y Y Y Y N N N
Table 6. Operating system support for Flex System CN4054S 4-port 10Gb Virtual Fabric Adapter, 00AG590. Columns, left to right: SN550 V2 | SN550 (Xeon Gen 2) | SN850 (Xeon Gen 2) | SN550 (Xeon Gen 1) | SN850 (Xeon Gen 1) | x240 (8737, E5 v2) | x240 (7162) | x240 M5 (9532) | x280/x480/x880 X6 (7196) | x280/x480/x880 X6 (7903) | x440 (7167) | x440 (7917).

Microsoft Windows Server 2012: N N N N N Y Y Y N Y Y Y
Microsoft Windows Server 2012 R2: N N N Y Y Y Y Y N Y Y Y
Microsoft Windows Server 2016: Y Y Y Y Y Y N Y N Y Y N
Microsoft Windows Server 2019: Y Y Y Y Y N N Y N N N N
Microsoft Windows Server 2022: Y Y Y Y Y N N N N N N N
Microsoft Windows Server version 1709: N N N Y Y N N Y Y N Y N
Microsoft Windows Server version 1803: N N N Y N N N N N N N N
Red Hat Enterprise Linux 6.10: N N N Y Y Y Y Y Y Y Y Y
Red Hat Enterprise Linux 6.9: N N N Y Y Y Y Y Y Y Y Y
Red Hat Enterprise Linux 7.3: N N N Y Y Y Y Y Y Y Y Y
Red Hat Enterprise Linux 7.4: N N N Y Y Y Y Y Y Y Y Y
Red Hat Enterprise Linux 7.5: N N N Y Y Y Y Y Y Y Y Y
Red Hat Enterprise Linux 7.6: N Y Y Y Y Y Y Y Y Y Y Y
Red Hat Enterprise Linux 7.7: N Y Y Y Y Y Y Y Y Y Y Y
Red Hat Enterprise Linux 7.8: N Y Y Y Y Y Y Y Y Y Y Y
Red Hat Enterprise Linux 7.9: Y Y Y Y Y Y Y Y Y Y Y Y
Red Hat Enterprise Linux 8.0: N Y Y Y Y N N N N N N N
Red Hat Enterprise Linux 8.1: N Y Y Y Y N N N N N N N
Red Hat Enterprise Linux 8.2: Y Y Y Y Y N N N N N N N
Red Hat Enterprise Linux 8.3: Y Y Y Y Y N N N N N N N
Red Hat Enterprise Linux 8.4: Y Y Y Y Y N N N N N N N
Red Hat Enterprise Linux 8.5: Y Y Y Y Y N N N N N N N
Red Hat Enterprise Linux 8.6: Y Y Y Y Y N N N N N N N
Red Hat Enterprise Linux 8.7: Y Y Y Y Y N N N N N N N
SUSE Linux Enterprise Server 11 SP4: N N N Y Y Y Y Y N Y Y Y
SUSE Linux Enterprise Server 11 SP4 with Xen: N N N Y Y Y Y Y N Y Y Y
SUSE Linux Enterprise Server 11 for x86: N N N N N N Y N N N Y N
SUSE Linux Enterprise Server 12 SP2: N N N Y Y Y Y Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP2 with Xen: N N N Y Y Y Y Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP3: N N N Y Y Y Y Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP3 with Xen: N N N Y Y Y Y Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP4: N Y Y Y Y Y Y Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP4 with Xen: N Y Y Y Y Y Y Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP5: Y Y Y Y Y Y Y Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP5 with Xen: Y Y Y Y Y Y Y Y Y Y Y Y
SUSE Linux Enterprise Server 15: N Y Y Y Y N N Y Y N N N
SUSE Linux Enterprise Server 15 SP1: N Y Y Y Y N N Y Y N N N
SUSE Linux Enterprise Server 15 SP1 with Xen: N Y Y Y Y N N Y Y N N N
SUSE Linux Enterprise Server 15 SP2: Y Y Y Y Y N N Y Y N N N
SUSE Linux Enterprise Server 15 SP2 with Xen: Y Y Y Y Y N N Y Y N N N
SUSE Linux Enterprise Server 15 SP3: Y Y Y Y Y N N N N N N N
SUSE Linux Enterprise Server 15 SP3 with Xen: Y Y Y Y Y N N N N N N N
SUSE Linux Enterprise Server 15 SP4: Y Y Y Y Y N N N N N N N
SUSE Linux Enterprise Server 15 SP4 with Xen: Y Y Y Y Y N N N N N N N
SUSE Linux Enterprise Server 15 SP5: Y Y Y Y Y N N N N N N N
SUSE Linux Enterprise Server 15 SP5 with Xen: Y Y Y Y Y N N N N N N N
SUSE Linux Enterprise Server 15 with Xen: N Y Y Y Y N N Y Y N N N
Ubuntu 18.04.5 LTS: Y N N N N N N N N N N N
VMware vSphere Hypervisor (ESXi) 5.5: N N N N N Y Y Y N Y Y Y
VMware vSphere Hypervisor (ESXi) 6.0 U3: N N N Y Y Y Y Y Y Y Y Y
VMware vSphere Hypervisor (ESXi) 6.5: N N N Y Y Y N Y Y Y N N
VMware vSphere Hypervisor (ESXi) 6.5 U1: N N N Y Y Y N Y Y Y N N
VMware vSphere Hypervisor (ESXi) 6.5 U2: N Y Y Y Y Y N Y Y Y N N
VMware vSphere Hypervisor (ESXi) 6.5 U3: N Y Y Y Y Y N Y Y Y N N
VMware vSphere Hypervisor (ESXi) 6.7: N N N Y Y N N Y N N N N
VMware vSphere Hypervisor (ESXi) 6.7 U1: N Y Y Y Y N N Y N N N N
VMware vSphere Hypervisor (ESXi) 6.7 U2: N Y Y Y Y N N Y N N N N
VMware vSphere Hypervisor (ESXi) 6.7 U3: Y Y Y Y Y N N Y N N N N
VMware vSphere Hypervisor (ESXi) 7.0: N Y Y Y Y N N N N N N N
VMware vSphere Hypervisor (ESXi) 7.0 U1: N Y Y Y Y N N N N N N N
VMware vSphere Hypervisor (ESXi) 7.0 U2: Y Y Y Y Y N N N N N N N

Table 7. Operating system support for Flex System CN4052S 2-port 10Gb Virtual Fabric Adapter Advanced, 01CV780. Columns, left to right: SN550 V2 | SN550 (Xeon Gen 2) | SN850 (Xeon Gen 2) | SN550 (Xeon Gen 1) | SN850 (Xeon Gen 1) | x240 M5 (9532) | x280/x480/x880 X6 (7196) | x280/x480/x880 X6 (7903) | x440 (7917).

Microsoft Windows Server 2012: N N N N N Y Y Y Y
Microsoft Windows Server 2012 R2: N N N Y Y Y Y Y Y
Microsoft Windows Server 2016: Y Y Y Y Y Y Y Y N
Microsoft Windows Server 2019: Y Y Y Y Y Y N N N
Microsoft Windows Server 2022: Y Y Y Y Y N N N N
Microsoft Windows Server version 1709: N N N Y Y Y Y N N
Microsoft Windows Server version 1803: N N N Y N N N N N
Red Hat Enterprise Linux 6.10: N N N Y Y Y Y Y Y
Red Hat Enterprise Linux 6.9: N N N Y Y Y Y Y Y
Red Hat Enterprise Linux 7.3: N N N Y Y Y Y Y Y
Red Hat Enterprise Linux 7.4: N N N Y Y Y Y Y Y
Red Hat Enterprise Linux 7.5: N N N Y Y Y Y Y Y
Red Hat Enterprise Linux 7.6: N Y Y Y Y Y Y Y Y
Red Hat Enterprise Linux 7.7: N Y Y Y Y Y Y Y Y
Red Hat Enterprise Linux 7.8: N Y Y Y Y Y Y Y Y
Red Hat Enterprise Linux 7.9: Y Y Y Y Y Y Y Y Y
Red Hat Enterprise Linux 8.0: N Y Y Y Y N N N N
Red Hat Enterprise Linux 8.1: N Y Y Y Y N N N N
Red Hat Enterprise Linux 8.2: Y Y Y Y Y N N N N
Red Hat Enterprise Linux 8.3: Y Y Y Y Y N N N N
Red Hat Enterprise Linux 8.4: Y Y Y Y Y N N N N
Red Hat Enterprise Linux 8.5: Y Y Y Y Y N N N N
Red Hat Enterprise Linux 8.6: Y Y Y Y Y N N N N
Red Hat Enterprise Linux 8.7: Y Y Y Y Y N N N N
SUSE Linux Enterprise Server 11 SP4: N N N Y Y Y Y Y Y
SUSE Linux Enterprise Server 11 SP4 with Xen: N N N Y Y Y Y Y Y
SUSE Linux Enterprise Server 11 for x86: N N N N N N Y N N
SUSE Linux Enterprise Server 12 SP2: N N N Y Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP2 with Xen: N N N Y Y N Y Y Y
SUSE Linux Enterprise Server 12 SP3: N N N Y Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP3 with Xen: N N N Y Y N Y Y Y
SUSE Linux Enterprise Server 12 SP4: N Y Y Y Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP4 with Xen: N Y Y Y Y N Y Y Y
SUSE Linux Enterprise Server 12 SP5: Y Y Y Y Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP5 with Xen: Y Y Y Y Y N Y Y Y
SUSE Linux Enterprise Server 15: N Y Y Y Y Y Y N N
SUSE Linux Enterprise Server 15 SP1: N Y Y Y Y Y Y N N
SUSE Linux Enterprise Server 15 SP1 with Xen: N Y Y Y Y Y Y N N
SUSE Linux Enterprise Server 15 SP2: Y Y Y Y Y Y Y N N
SUSE Linux Enterprise Server 15 SP2 with Xen: Y Y Y Y Y Y Y N N
SUSE Linux Enterprise Server 15 SP3: Y Y Y Y Y N N N N
SUSE Linux Enterprise Server 15 SP3 with Xen: Y Y Y Y Y N N N N
SUSE Linux Enterprise Server 15 SP4: Y Y Y Y Y N N N N
SUSE Linux Enterprise Server 15 SP4 with Xen: Y Y Y Y Y N N N N
SUSE Linux Enterprise Server 15 SP5: Y Y Y Y Y N N N N
SUSE Linux Enterprise Server 15 SP5 with Xen: Y Y Y Y Y N N N N
SUSE Linux Enterprise Server 15 with Xen: N Y Y Y Y Y Y N N
Ubuntu 18.04.5 LTS: Y N N N N N N N N
VMware vSphere Hypervisor (ESXi) 5.5: N N N N N Y Y Y Y
VMware vSphere Hypervisor (ESXi) 6.0 U3: N N N Y Y Y Y Y Y
VMware vSphere Hypervisor (ESXi) 6.5: N N N Y Y Y Y Y N
VMware vSphere Hypervisor (ESXi) 6.5 U1: N N N Y Y Y Y Y N
VMware vSphere Hypervisor (ESXi) 6.5 U2: N Y Y Y Y Y Y Y N
VMware vSphere Hypervisor (ESXi) 6.5 U3: N Y Y Y Y Y Y Y N

Trademarks
Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. A current list of Lenovo trademarks is available on the Web at https:///us/en/legal/copytrade/.
The following terms are trademarks of Lenovo in the United States, other countries, or both: Lenovo®, Flex System, ServerProven®, System x®, ThinkSystem®, VMready®, XClarity®.
The following terms are trademarks of other companies: Xeon® is a trademark of Intel Corporation or its subsidiaries. Linux® is the trademark of Linus Torvalds in the U.S. and other countries. Microsoft®, Hyper-V®, SQL Server®, SharePoint®, Windows Server®, and Windows® are trademarks of Microsoft Corporation in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.
Installing Hyper-V Inside a VMware Virtual Machine
解决在VMware虚拟机系统中安装hyper解决在VMware虚拟机系统中安装hyper-V(2012-12-21 04:55:38)标签:虚拟化hyper-vvmitSo, following are the steps to create a Microsoft Hyper-V VM running in VMware Workstation 8, but later I’ll show you how to do it in ESXi 5 as well:1. Create a New VM with version 8 hardware2. Give it 4 GB RAM and 2 x vCPUs with about 80-100 GB disk space, depending upon how many VMs you wanted nested underneath Hyper-V.3. The instructions lead you to believe that you should pick a VMware ESX option as the guest OS... STOP! DON’T! Select Windows 2008 R2 x64.4. When you are finished, make sure you add another NIC to the VM used as the Hyper-V virtual network,5. Under the settings of the VM > CPU, make sure you have the option to pass-through the Intel VT-x/EPT feature.6. Make sure you have set the VM to boot from Windows 2008 R2 x64 media ISO.7. Before booting, you should edit the config file .vmx and add the parameter: hypervisor.cpuid.v0 = “FALS E”8. Now Boot and Install Windows 2008 R2 x64.9. Once finished, open up Server Manager and click ―Add Role‖.10. Select and install the Hyper-V option. At this point, you will know if your system is working correctly and passing the Intel EPT feature, because if it doesn’t, you won’t be able to go past this point.11. You’ll also have to select the network adapter used for the virtual network.12. Now install Hyper-V, which will need a reboot.13. After it is completed, open Server Manager drill down to Hyper-V and connect to the local server.14. Now create and install a virtual machine.Once done, you should be able to use it as normal, albeit slow.Nesting Hyper-VM running E S Xi 5Now, doing the same thing on ESXi 5 is a little trickier although some of the steps are the same.1. Before anything you need to place an entry in the /etc/vmware/config file found in the tech support mode on your ESXi 5. I enabled SSH through the security profile in the vSphere Client. Then used putty SSH into the ESXi system.2. From there I executed the following command which is needed to allow nested hypervisors :# echo 'vhv.allow = "TRUE" ' >> /etc/vmware/configNotice the use of single and double quotes in the command-line3. Now create a virtual machine using version 8 hardware, 4GB (or as much as you can spare), 2 x vCPUs, 2 or more vNICs anda 100GB virtual disk.4. Before booting up the VM and installing Hyper-V we need to add two lines the virtual machines config file .vmxYou can try this through the vSphere Client in the settings of the virtual machine > Configuration Parameters, whereas I had better luck doing it from command-lineTo add them using command-line move back in SSH > change into the directory where you Hyper-V VM is installed# echo 'vhv.allow = "TRUE" ' >> /etc/vmware/configIn my example the config file is called Hyper-V.vmx. Type the following commands:# echo 'monitor.virtual_exec = "hardware" ' >> Hyper-V.vmx# echo 'hypervisor.cpuid.v0 = "FALSE" ' >> Hyper-V.vmx根据新建虚拟机的保存⽬录找到Windows 2012 x64.vmx⽂件,⽤记事本打开并在后⾯添加下⾯两句然后保存退出:hypervisor.cpuid.v0 = "FALSE"mce.enable = "TRUE"5. Now back in the VM settings > Options > CPU/MMU Virtualization make sure you have the option to pass the Intel EPT feature.6. Now in the Options area > CPUID Mask click on Advanced7. Add the following CPU mask Level ECX: ---- ---- ---- ---- ---- ---- --H- ----8. Now Install Hyper-V or W indows 2008 R2 and enable the Hyper-V role.9. 
9. You are ready to roll.

How to Run Hyper-V in vSphere 5.1
By David Davis, January 2, 2013

Today, Server 2012 Hyper-V cannot be virtualized inside a Microsoft virtualization product. In other words, you can't run Hyper-V as a virtual machine inside Hyper-V on Windows 2012, or as a virtual machine inside Client Hyper-V in Windows 8.

You can, however, virtualize Hyper-V as a virtual machine inside VMware products like Fusion, Workstation, free ESXi, or the commercial vSphere / vCloud Suite; just a few tweaks are required.

Typically, the first time admins have trouble running Hyper-V in one of these virtualization platforms is after they have created the VM and installed Windows. They try to go ahead and add the Hyper-V role, and they get the message:

"Hyper-V cannot be installed. The processor on this computer is not compatible with Hyper-V. To install this role, the processor must have a supported version of hardware-assisted virtualization, and that feature must be turned on in the BIOS."

If you are running Hyper-V on a physical server (with no virtualization hypervisor in between), then either your CPU really is not compatible or you need to enable Intel VT or AMD-V in your server's BIOS. However, if you are trying to run Hyper-V in a virtual machine (and your physical server has Intel VT / AMD-V enabled) and you are getting this message in a VMware product, then you need to make the tweaks below.

Step 1
If you are using vSphere 5 or greater, modify the /etc/vmware/config file on the ESXi host that will run the Hyper-V virtual machine. To edit this file, SSH to the server and open /etc/vmware/config in vi. Add the line vhv.allow = "TRUE" to the file and save it with :wq.

Step 2
Next, power off the Hyper-V VM and remove it from inventory. Browse the datastore that the VM is stored on and download the VMX configuration file to your local computer. Use WordPad, or a similar text editor, to edit the VMX configuration file that you downloaded. Change the guestos line to read:

guestOS = "winhyperv"

Save the file and upload it back to the datastore using the datastore browser. Right-click the virtual machine's VMX configuration and use "Add to Inventory" to add the VM back to the ESXi host it was on (where you added the vhv.allow line).

Step 3
Finally, upgrade the virtual machine hardware to hardware version 9 by right-clicking the VM and clicking Upgrade Virtual Hardware.

Now power on the virtual machine and you shouldn't have any trouble adding the Hyper-V role. With this configuration, you'll also be able to run 64-bit nested virtual machines inside this virtualized Hyper-V host: for example, Windows Server 2012 running inside Windows 2008 R2 as a Hyper-V VM, which is itself running inside vSphere 5.1 as a virtual machine (a VM in a VM, all running inside vSphere).

Note that you don't have to do all this work to run Hyper-V virtual machines in Workstation or Fusion; it's much easier there.

For another great post on running Hyper-V in vSphere, see this post: /doc/9ca6e38d6c85ec3a86c2c51d.html
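After either procedure, a quick sanity check from the ESXi shell confirms that the settings landed (the datastore and VM directory names here are placeholders):

# Host-level switch for nested hypervisors
grep vhv /etc/vmware/config

# Per-VM settings added earlier
grep -E 'guestOS|hypervisor.cpuid|monitor.virtual_exec' /vmfs/volumes/datastore1/Hyper-V/Hyper-V.vmx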
HP VMware vSphere 5.1 U1 Customized Image Release Notes
HP VMware vSphere 5.1 U1 Customized Image Release Notes for September 2013 Updates

Abstract
This document provides the latest release information for the HP VMware vSphere 5.1 U1 Customized Image.

HP Part Number: 749859-001
Published: September 2013
Edition: 9

© Copyright 2008, 2013 Hewlett-Packard Development Company, L.P.

Notices
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Intel is a trademark or registered trademark of Intel Corporation in the U.S. and other countries. AMD is a trademark of Advanced Micro Devices, Inc.

Contents
1 Navigation tips
2 HP VMware vSphere 5.1 U1 Customized Image
3 New and updated in this release
4 Important notes and recommendations

1 Navigation tips
Navigating to documentation on the HP website:
• From the URLs in this guide, you may need to make several selections to get to your specific server documentation.
• For online access to technical documentation, self-help resources, live chat assistance, community forums of IT experts, a technical knowledge base, and remote monitoring and diagnostic tools, go to /support.
• For the latest versions of selected technical documentation, go to /go/bizsupport.

2 HP VMware vSphere 5.1 U1 Customized Image
The HP vSphere 5.1 U1 Customized Image includes the following:
• HP Management Tools:
  ◦ HP CIM Providers
  ◦ HP NMI Driver
  ◦ HP iLO Driver
  ◦ HP CRU Driver
  ◦ HPONCFG Utility
  ◦ HPBOOTCFG Utility
  ◦ HPSSACLI Utility
  ◦ HPTESTEVENT Utility
• VMware IOVP certified device drivers, added to HP vSphere 5.1 U1 installation images for HP device enablement.
• HP Agentless Management Service (AMS) to support Agentless Management and Active Health.
• HP CIM Providers and HP Agentless Management Service (AMS) are now available in this version of the HP Customized Image. This version of the HP Customized Image supersedes version 5.60, which does not include these products. The version is found in the name of the HP Custom Image download file, for example: VMware-ESXi-5.1.0-Update1-1065491-HP-5.60.40-Sep2013.iso

3 New and updated in this release
The new and updated features for the HP vSphere 5.1 U1 Customized Image for September 2013 include:
• Provider features:
  ◦ Report Smart Array driver name and version.
  ◦ Report SAS driver name and version.
  ◦ Report SCSI driver name and version.
  ◦ Report firmware version of the System Programmable Logic Device.
  ◦ Report SPS/ME firmware.
  ◦ Added SCSI HBA Provider.
  ◦ Report IdentityInfoType and IdentityInfoValue for the PowerControllerFirmware class.
  ◦ IPv6 support for OA and iLO.
  ◦ Report memory DIMM part number for HP Smart Memory.
  ◦ Added a new 'Test SNMP Trap'.
  ◦ Updated reporting of memory configuration to align with iLO and the health driver.
• SR-IOV support:
  ◦ Updated the Emulex 10Gb network driver to enable SR-IOV for HP. For additional details on supported servers, guest operating systems, SR-IOV limitations, configuration steps, and troubleshooting, see the HP white paper "Implementing SR-IOV on HP ProLiant Servers
with VMware vSphere 5.1".
• Utilities features:
  ◦ HPTESTEVENT: new utility to generate a test WBEM indication and a test SNMP trap.
  ◦ HPSSACLI: new utility to replace hpacucli.
  ◦ HPONCFG: the HPONCFG utility displays the server serial number along with the server name when the hponcfg -g switch is used to extract the host system information.

4 Important notes and recommendations
The drivers and firmware recipe information for HP ProLiant servers and options can be found at: /hpq/recipes.

HP CIM Providers and HP Agentless Management Service (AMS) are now available in this version of the HP Customized Image. This version supersedes version 5.60, which does not include these products. The version is found in the name of the HP Custom Image download file, for example: VMware-ESXi-5.1.0-Update1-1065491-HP-5.60.40-Sep2013.iso.

AMS and Active Health installation
AMS and Active Health ship as part of the HP Custom Image, the HP ESXi Offline Bundle, and the new Agentless Management Service Offline Bundle. The HP Custom Image and the HP ESXi Offline Bundle also include the HP Insight Management WBEM Providers. The AMS Offline Bundle only includes the providers required to support online firmware update and Smart Array, DIMM status, and iLO reporting in vCenter. Customers can manage their ProLiant servers using Agentless Management without installing the AMS Offline Bundle. Only customers who intend to manage their ProLiant servers using only Agentless Management, and who do not need the full set of HP Insight Management WBEM Providers installed, should install the AMS Offline Bundle. See the "Deploying and updating vSphere 5.0 on ProLiant servers" white paper for more information.

If the customer has vSphere 5.1 U1, is currently using the HP Insight Management WBEM Providers for ESXi, requires Agentless Management, and does not need the full set of providers installed, they need to perform the following step:
• Install the AMS Offline Bundle to replace the HP ESXi Offline Bundle.

If the customer has vSphere 5.1 U1, is currently using Agentless Management without the full providers installed, and wants to move to using the HP Insight Management WBEM Providers for vSphere, they need to perform the following step:
• Install the HP vSphere Offline Bundle to replace the AMS Offline Bundle.

AMS fixes
• Fix for file logging, which generated an error in 5.1 Auto Deploy with stateful install environments.
• Fixed an AMS excessive vCenter logging issue.
• Fixed AMS empty and malformed MAC addresses in the cpqNicIfLogMapMACAddress MIB.

System memory required for installing the vSphere 5.1 U1 Customized Image
VMware vSphere will not install on an HP Gen8 server with 2GB or less system memory. Although VMware's stated minimum installation requirement is 2GB of system memory, HP's Gen8 platforms do not expose enough of the 2GB of system memory to the operating system, causing the installation to fail. This issue is resolved by populating the system with more than 2GB of memory.

HP Dynamic Smart Array Controllers (B120i and B320i) cannot be used as the target for a coredump.

SAS WWIDs for B120i controllers will be non-unique.

vCenter/vSphere Client not reporting the degraded status for a Smart Array disk drive
Fixed an issue where the HP Insight Management WBEM Providers were not reporting a degraded status when a physical drive connected to a Smart Array controller is pulled out. This caused the VMware vSphere Management Console, Insight Control for vCenter, and other clients of the HP Insight Management WBEM Providers to report incorrect status when the physical drive is
pulled out. The Customer Advisory for this issue can be found at: /bizsupport/TechSupport/Document.jsp?objectID=c03748151&lang=en&cc=us&taskId=101&prodSeriesId=3924066&prodTypeId=329290.

Fixed an issue in the HPONCFG utility when upgrading the iLO 3 firmware from a 1.2x version to 1.50 and later versions.

Varbind data mismatch in SNMP traps
Fixed an issue where the HP Insight Management WBEM Providers sent SNMP traps with mismatched SNMP trap varbind values. The Customer Advisory for this issue can be found at: /portal/site/hpsc/public/kb/docDisplay/?docId=emr_na-c03835179.

Smart Array license version not reported
Fixed an issue where the HP Insight Management WBEM Providers were not reporting the Smart Array license version. The Customer Advisory can be found at: /portal/site/hpsc/public/kb/docDisplay/?docId=emr_na-c03893745.

Broadcom FCoE
Broadcom FCoE does not have any HP management support enabled in this release. HP management support will be available in a future release.
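The bundle swaps described above are ordinary VIB depot operations. A minimal sketch with a placeholder depot path; the actual bundle file name is whatever you downloaded from HP:

# Install (or replace) an offline bundle on the ESXi 5.x host
esxcli software vib install -d /vmfs/volumes/datastore1/hp-ams-esxi-bundle.zip

# Verify what is installed; reboot if the install summary requested it
esxcli software vib list | grep -i hp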
Data Protector 10.80 Virtualization Support Matrix
Table of Contents
- Introduction
- What's New
- Table 1: Supported components inside virtual machines
- Table 2: Data Protector Virtual Environment Agent platform support (VDDK)
- Table 3: VMware vCenter support
- Table 4: Data Protector Virtual Environment Agent platform support
- Table 5: Granular Recovery Extension for VMware vSphere
- Table 6: Microsoft Hyper-V Virtualization application integration support
- Table 7: Supported configurations – HPE SimpliVity Storage
- Table 8: Supported configurations – H3C CAS
- Table 9: Supported configurations – Red Hat KVM
- Table 10: Supported configurations – Nutanix vCenter
- Table 11: Supported configurations – Nutanix AHV

Note: Combinations of Data Protector components with operating system and/or application versions are supported by Data Protector if the associated operating system and/or application versions are supported by the respective vendors. All guest operating systems supported by the respective vendors are supported with Data Protector if they are listed as supported on physical hosts in the Data Protector 10.80 Platform and Integration Support Matrix. For information about the specific Windows versions supported by Data Protector, refer to the Platform and Integration Support Matrix. The quiescent state of Microsoft file systems and applications within a VMware virtual machine is handled and supported by VMware Tools. Updates/changes to individual fields within the matrix will be highlighted in RED.

What's New
1. Removed Table 5, "Supported Microsoft applications for quiescence-enabled backups".
2. Added support for Microsoft Hyper-V Server 2019.

Table 1: Supported components inside virtual machines
The following table lists the Data Protector components supported inside guest operating systems:

Virtualization application | Supported Data Protector components
VMware¹ | Cell Manager/Installation Server; Manager of Managers; Disk Agent; Media Agent²; Graphical User Interface; Online Extension Agents³; StoreOnce Software Deduplication; HPE P9000 XP SSEA Agent⁴; HPE 3PAR SMI-S Agent⁵; VSS Agent⁶
Microsoft Hyper-V⁷ | Cell Manager/Installation Server; Manager of Managers; Disk Agent; Media Agent; Graphical User Interface; Online Extension Agents³; StoreOnce Software Deduplication; HPE P9000 XP SSEA Agent⁴; VSS Agent⁷
HPE Integrity Virtual Machines (IVM) | Cell Manager/Installation Server; Disk Agent; Media Agent⁸; Manager of Managers; Online Extension Agents³; HPE P9000 XP SSEA Agent⁴
Solaris Zones | Disk Agent (global and local zones); Media Agent (global zone); Oracle Online Agent (global and local zones)
Oracle VM | Cell Manager/Installation Server; Manager of Managers; Disk Agent; Media Agent; Graphical User Interface; Online Extension Agents³; StoreOnce Software Deduplication; HPE P9000 XP SSEA Agent⁴; VSS Agent⁶; Oracle Online Agent
Red Hat KVM | Cell Manager/Installation Server; Manager of Managers; Disk Agent; Media Agent; Online Extension Agents³; StoreOnce Software Deduplication; HPE P9000 XP SSEA Agent⁴

1. Includes support for Virtual Infrastructure and vSphere components such as vMotion, HA, and DRS.
2. Supported for HPE StoreOnce Backup Systems with iSCSI and Catalyst, file devices, HPE StoreOnce Backup System using Catalyst, HPE StoreOnce Software, and file libraries only.
3. Valid for all applications that are listed as supported in the Data Protector 10.80 Platform and Integration Support Matrix and are supported by the respective vendors inside a virtual machine.
4. Includes application integrations listed in the Data Protector 10.80 HPE Storage Support Matrix for the HPE P9000 XP Disk Array Family.
5. Includes
application integrations listed in the Data Protector 10.80 HPE Storage Support Matrix for HPE 3PAR Disk Array Family Using SMI-SAgent. For more information on instant recovery, see Data Protector Zero Downtime Backup Integration Guide.6 In case of VSS backups, the application host can be a virtual host, but the backup host for FC based arrays (3PAR, XP, etc) must be a physicalserver. For details of the supported VSS configurations, see the Data Protector 10.80 VSS Integration Support Matrix7 Individual disk restores are only supported for Windows Hyper-V Server 2012 or later.8 Support includes attached AVIO Devices.Table 2: Data Protector Virtual Environment Agent platform support (VDDK)Data Protector versions VMware VDDK component Supported backup / mount proxy operating systems10.00 •VDDK 6.0 U2 Windows Server 2008 R2 (x64)Windows Server 2012, 2012 R2 (x64)RHEL 6.64, 7.0 (x64) 5,6,7SLES 11.3, 12 (x64) 5,6,710.01,10.02,10.03,10.04 and 10.10 •VDDK 6.5 U1 Windows Server 2008 R2 (x64)Windows Server 2012, 2012 R2 (x64)Windows Server 2016 (x64)RHEL 6.7, 6.8, 7.2, 7.3 (x64)5,6,7SLES 11.4, 12.1 (x64) 5,6,710.20, 10.30, 10.40, 10.50 •VDDK 6.7 U1 Windows Server 2008 R2 (x64)Windows Server 2012, 2012 R2 (x64)Windows Server 2016CentOS 7.4 (x64)5,6,7RHEL 6.7, 6.8, 6.9, 7.2, 7.3 (x64)5,6,7SLES 11.4, 12.1, 15 (x64)5,6,710.60, 10.70, 10.80 •VDDK 6.7 U3 Windows Server 2008 R2 (x64)Windows Server 2012, 2012 R2 (x64)Windows Server 2016CentOS 7.4 (x64)5,6,7RHEL 6.7, 6.8, 6.9, 7.2, 7.3, 7.6 (x64)5,6,7SLES 11.4, 12.1, 12.4, 15 (x64)5,6,71 Data Protector supports guest operating systems that are supported by the respective operating system vendor and are supported as aguest operating system by VMware.2 GPT disks are supported for Backup and Restore.3 Data Protector does not support backup of SATA disks.4 RHEL 6.6 does not support Power On and Live Migrate operation.5 Supported partition type for GRE: Linux partition (ID 83), Linux LVM partition (ID 8E).6 Linux mount proxies do not support granular recovery of ownership, ACLs, file attributes, and alternate data streams for files and folders in78Table 3: VMware vCenter supportVMware vCenter support for Data Protector 10.80 1,2,3,4,5,6VMware vCenter Server 6.0, 6.0 U1, 6.0 U2, 6.0 U3, 6.5, 6.5 U1, 6.5 U2, 6.5 U3, 6.7, 6.7 U1, 6.7 U2, 6.7 U3, 7.07VMware Virtual Server Appliance 6.0, 6.0 U1, 6.0 U2, 6.0 U3, 6.5, 6.5 U1, 6.5 U2, 6.5 U3, 6.7, 6.7 U1, 6.7 U2, 6.7 U3, 7.071 Data Protector supports only the above mentioned VMware vCenter versions.The ESXi Servers supported by these VMware vCenter versions are supported as Data Protector Application clients.2 For the respective ESX Server support, refer to the VMware Product Interoperability Matrix using the following link:https:///comp_guide2/sim/interop_matrix.php.3 Data Protector does not support free ESXi licenses.4 Raw Disk Mappings is supported with VADP based backups in virtual mode but not supported in physical mode.5 VMware VVol (Virtual Volumes) are supported for VMs that are hosted on 3PAR VVol only.6 Data Protector supports the backup of encrypted VMs (non-ZDB mode only). Such VMs will be restored in an unencrypted manner. Advancedoperations such as Granular Recovery, Power On and Live Migrate are currently not supported. Encrypted VMs backup is not supported withvCenter 6.7, 6.7 U1, 6.7 U2. 
6.7 U3, 7.07 VMware VVol (Virtual Volumes) is not supported.Table 4: Data Protector Virtual Environment Agent platform supportData Protector components Platforms Supported backup / mount proxy operating systemsVirtual Environment Agent(vStorage API support for Data Protection) •VMware vCloud Director 5.5.0 Windows Server 2008 (x64)Windows Server 2008 R2 (x64)Windows Server 2012 (x64)Virtual Environment Agent •H3C CAS 5.0(E0522, E0526 andE05503)Windows Server 2008 (x64) Windows Server 2008 R2 (x64) Windows Server 2012 (x64) Windows Server 2012 R2 (x64) RHEL 6, 7 (x64) Cent OS 6, 7(x64) SLES 11, 12 (x64)1 Data Protector supports guest operating systems that are supported by the respective operating system vendor and are supported as a guestoperating system by VMware.2 GPT disks are supported for Backup and Restore.3 Only Cent OS 7.5 (x64) is supported as a Backup Proxy host for Cached method.Table5: Granular Recovery Extension for VMware vSphereData Protector component VMware component Supported VMware versions Granular Recovery Extension forVMware vSphere Client (HTML5)VMware vCenter Server 6.5 U2, 6.5 U3, 6.7, 6.7 U1, 6.7 U2, 6.7 U3, 7.0Granular Recovery Extension forVMware vSphere Client (HTML5)VMware Server Appliance (VSA) 6.5 U2, 6.5 U3, 6.7, 6.7 U1, 6.7 U2, 6.7 U3, 7.01 For Cached GRE using Smart Cache, mount proxy and backup server should be the same host (for NAS devices).2 Granular recovery of data is supported for VMs hosted on vSAN datastores. vSAN versions 6.6.1 and 6.7 are supported for this operationData Protector supports the following virtualization application-specific features, which enable VSS snapshots for instantrecovery without an agent inside the VMs.Table 6: Microsoft Hyper-V Virtualization application integration supportVirtualization application Data Protector component Supported application componentsMicrosoft Hyper-V Server 2008,2008 R21,20121, 2012 R21, 20161, 20191Microsoft Volume ShadowCopy IntegrationVSS based snapshots of VMsMicrosoft Hyper-V Server 2008, 2008 R21Virtual Environment Agent VSS based snapshots of VMs (cluster aware) Microsoft Hyper-V Server 20121,3, 2012R21,3Virtual Environment Agent VSS based snapshots of VMs (cluster aware)Microsoft Hyper-V Server 20161,3Virtual Environment Agent VSS based snapshots of VMs (cluster aware) Microsoft Hyper-V Server 20191,2,3Virtual Environment Agent VSS based snapshots of VMs (cluster aware)1 Instant recovery for Hyper-V VSS snapshots is done using the SMIS-A agent.2 Restore to target storage path not supported due to known Microsoft Limitation3 Scale-Out File Server with cluster storage volume recommended by Microsoft for SMB on Hyper V ClusterTable 7: Supported configurations – HPE SimpliVity Storage1Integration Backup Restore Power On and Live Migrate GRE VMware 6.5, 6.5 U1 Supported Supported Supported Supported1. Supported HPE SimpliVity Storage version is 3.7.0 and aboveTable 8: Supported configurations – H3C CASIntegration Backup Restore Power On and Live Migrate GREH3C CAS 5.01,2Supported Supported Not Supported Not Supported1. Supported CAS Server versions are E0522, E0526 and E05502 E0550 supports only the Cached method. E0522 and E0526 supports only Non-Cached methodTable 9: Supported configurations – Red Hat KVM1Integration Backup Restore Power On and Live Migrate GREKVM 1.4.x Supported Supported Not Supported Not Supported1. 
This is a scripted solution which processes data using Filesystem Backup and Restore.Table 10: Supported configurations – Nutanix vCenterIntegration Backup Restore Power On and Live Migrate GRE Nutanix 5.10.1 LTS Supported Supported Not Supported Not SupportedTable 11: Supported configurations – Nutanix AHVIntegration Backup Restore Power On and Live Migrate GRE Nutanix 5.10.3 LTS Supported Supported Not Supported Supported21. This is a scripted solution (File-level, Image-level) which processes data using Filesystem/Raw-Image Backup and Restore.2 This is supported for virtual machines backed up using the File-level scripted solution3 Cell manager and backup proxy host must be Linux。
StarWind Case Study: Veeam Gets Shared Storage for VMware
VEEAM GETS SHARED STORAGE FOR VMWARE WITH DISASTER RECOVERY FEATURES

Columbus, Ohio-based Veeam Software, Inc. is a premier-level VMware Technology Alliance Partner and member of the VMware Ready Management program. It provides software for managing VMware infrastructures, and is best known for its Veeam Backup & Replication software, generally considered the most innovative disaster recovery solution for VMware Infrastructure 3 (VI3) and VMware vSphere 4 environments.
Bradley Barnes, Manager of Technology Resources, manages the IT needs of these organizations with six full-time staff and three part-time administrators. The group is tasked with data protection for application servers, web servers and the hospital's back office. The facilities' servers are comprised of VMware ESX and ESXi servers and some Microsoft Virtual Servers on Dell platforms. Over time they have reduced their physical server count from 40 to 20 as the virtual server count has reached 60.

PROBLEM
Veeam was spending more time than it wanted to build and manage the shared storage for VMware in the U.S. and Europe. In each location, applications running on Dell® servers were tied to direct-attached storage (DAS), giving little flexibility in configuring high-availability server clusters and little prospect of easily and quickly recovering stored data in the event of a storage failure. In addition, the existing storage setup lacked the scalability, cost efficiency and ease of use the company required in its daily operations.
StarWind, among four other vendors, was asked to bid on a shared storage solution that combined high performance, scalability and centralized management. It also needed to be easy to use. Final constraints required the shared storage solution to be affordable without requiring extensive training to manage, which was a typical issue that Veeam saw with procuring Fibre Channel SANs.

SOLUTION
StarWind was an easy choice. It allowed Veeam to convert several new and several existing, repurposed Dell PowerEdge servers into SANs that were asynchronously mirrored in the same server room as well as configured for remote replication across a WAN to match the disaster recovery and business continuity requirements. Veeam's IT manager, Vladimir Varfolomeev, chose the StarWind solution because it offered true enterprise-level features at an SMB price.
StarWind system software was quickly loaded on Dell servers and took advantage of the IP network and CAT5 Ethernet cabling already in place. Asked to list the benefits of the decision, Vladimir Varfolomeev pointed to the ease of use, Veeam's ability to avoid spending on a far more costly Fibre Channel solution, and the fact that the all-inclusive cost for an enterprise-level SAN priced out at 50% of the cost of the next closest rival solution.
As Vladimir Varfolomeev puts it, "All we did was download StarWind, installed it on a Dell server and we had a SAN for VMware in less than half an hour. I did not even read a manual. It was really that easy."

CASE STUDY
ORGANIZATION
Veeam, Inc.
WEB
INDUSTRY
High-Tech, Software
KEY CHALLENGES
Existing storage for VMware was difficult to scale and not conducive to high-availability configuration. Additionally, it was hard to set it up for disaster recovery and business continuity. There was a need for affordable, scalable and enterprise-level storage that was easy to manage.
ENVIRONMENT
Dell servers, Ethernet network
SOLUTION
StarWind Enterprise Server
BUSINESS BENEFITS
Achieved business continuity, rapid installation, ease of use and centralized management, all at a low cost.
ABOUT STARWIND
Enterprise Features. SMB Price.™
Batch-Automating Virtual Machine Creation with PowerCLI
Deploying virtual machines in bulk with a vSphere PowerCLI script

1. Download and install PowerCLI.

2. Change the PowerCLI script execution policy so that scripts are allowed to run. On first run, PowerCLI reports a PSSecurityException/UnauthorizedAccess error because script execution is disabled. Check the local execution policy with get-executionpolicy; a result of Restricted means no scripts may be executed. In an administrator session, change the local policy with set-executionpolicy RemoteSigned, confirm with Y, then close the window and run PowerCLI again.

3. Create the script:

$vc = '10.0.66.7'                               # vCenter IP
Connect-VIServer -Server $vc -Username "***************************" -Password "vmware"
$vmhost = "192.168.1.10"                        # ESXi host
$namestart = "test"                             # VM name prefix
$template = "win2012_temp"                      # VM template
$datastore = "64.170"                           # datastore LUN
$custsysprep = Get-OSCustomizationSpec Win      # customization specification file
$ipstart = "192.168.1."                         # IP prefix
$endipscope = 100..150                          # IP suffix range

# Loop to generate the 50 virtual machines
foreach ($endip in $endipscope)
{
    $ip = $ipstart + $endip
    $name = $namestart + $endip
    $custsysprep | Set-OSCustomizationSpec -NamingScheme fixed -NamingPrefix $name
    $custsysprep | Get-OSCustomizationNicMapping |
        Set-OSCustomizationNicMapping -IpMode UseStaticIP -IpAddress $ip -SubnetMask 255.255.255.0 -Dns 192.168.1.1 -DefaultGateway 192.168.1.1
    New-VM -VMHost $vmhost -Name $name -Template $template -Datastore $datastore -OSCustomizationSpec $custsysprep
}

4. Run the script. Save the script above as "test.ps1", open VMware vSphere PowerCLI, change to the directory holding the script, type ".\test.ps1" and press Enter to run it.
VMware Review Questions and Answers
There are 45 review questions in total (35 drawn from this set + 5 additional = 40 questions on the final exam).

QUESTION 1
Which memory conservation technique collaborates with the server to reclaim pages that are redundant in a virtual machine or virtual machines?
A. Memory Balloon Driver
B. Transparent Page Sharing
C. Redundant Memory Driver
D. VMkernel Swap

QUESTION 2
Which network settings are only available with vSphere Distributed Switches?
A. Jumbo Frames
B. PVLAN
C. Load Balancing
D. Promiscuous Mode

QUESTION 3
What are three valid objects to place in a vApp? (Choose three.)
A. Folders
B. Hosts
C. Resource pools
D. vApps
E. Virtual machines

QUESTION 4
Which two conditions will prevent a virtual machine from being successfully migrated using Storage vMotion? (Choose two.)
A. The virtual machine has an RDM.
B. The virtual machine has Fault Tolerance enabled.
C. The virtual machine is running on a vSphere Standard host.
D. The virtual machine has a disk stored on an NFS datastore.

QUESTION 5
An administrator is working to update the hosts and virtual machines in a vSphere deployment using Update Manager Baselines. Other than host patches, which three items require a separate procedure or process to update? (Choose three.)
A. Operating system patches
B. Virtual Appliance updates
C. Virtual Machine Virtual Hardware upgrades
D. VMware Tools on machines without VMware Tools already installed
E. Application patches within the virtual machine

QUESTION 6
An ESXi host displays a warning icon in the vSphere console and its summary page lists a configuration issue: "SSH for the host has been enabled." What are two ways to clear this warning? (Choose two.)
A. Using the Security Profile pane of the Configuration tab in the vSphere Client
B. Using the Direct Console User Interface (DCUI)
C. Using the Advanced Settings pane of the Configuration tab in the vSphere Client
D. Using the Networking pane of the Configuration tab in the vSphere Client

QUESTION 7
An administrator wants to monitor the health status of an ESXi host. However, when the administrator clicks the Hardware Status tab the following error is displayed: "This program cannot display the webpage." What might cause this problem?
A. The VMware VirtualCenter Management Webservices Service is not started.
B. The required plug-in is not enabled.
C. The VMware ESXi Management Webservices Service is not started.
D. The name of the vCenter Server system could not be resolved.

QUESTION 8
An iSCSI array is being added to a vSphere implementation. An administrator wants to verify that all IP storage VMkernel interfaces are configured for jumbo frames. What are two methods to verify this configuration? (Choose two.)
A. Run the esxcli network ip interface list command.
B. Run the esxcfg-vswif -l command.
C. Ensure that the Enable Jumbo Frames box is checked in the VMkernel interface properties in the vSphere Client.
D. Ensure that the MTU size is set correctly in the VMkernel interface properties in the vSphere Client.
QUESTION 9
An administrator is setting up vMotion in a vSphere environment. A migration is run to test the configuration, but fails. What could the administrator do to resolve the issue?
A. Ensure the VMkernel port group is configured for Management Traffic.
B. Enable the VMkernel port group for the vSphere cluster.
C. Ensure the vMotion-enabled adapters are configured for the same subnet.
D. Use a vDS instead of a standard switch.

QUESTION 10
An administrator is unable to connect a vSphere Client to an ESXi host. Which option can be selected from the direct console to restore connectivity without disrupting running virtual machines?
A. Restore the Standard Switch from the ESXi host.
B. Restart the Management Agents from the vSphere Client.
C. Restart the Management Agents from the ESXi host.
D. Disable the Management Network from the vSphere Client.

QUESTION 11
An administrator receives a complaint that a virtual machine is performing poorly. The user attributes the issue to poor storage performance. If the storage array is the bottleneck, which two counters would be higher than normal? (Choose two.)
A. Average ESXi VMkernel latency per command, in milliseconds
B. Average ESXi highest latency, in milliseconds
C. Average virtual machine operating system latency per command, in milliseconds
D. Number of SCSI reservation conflicts per second

QUESTION 12
An administrator must decommission a datastore. Before unmounting the datastore, which three requirements must be fulfilled? (Choose three.)
A. No virtual machines reside on the datastore.
B. The datastore is not used for vSphere HA heartbeat.
C. No registered virtual machines reside on the datastore.
D. The datastore must not have any extents.
E. The datastore must not be part of a datastore cluster.

QUESTION 13
A company wants to increase disk capacity for their vSphere environment. Management mandates that:
1. vMotion must work in this environment.
2. The existing LAN infrastructure must be used.
Which storage option best meets the company objectives?
A. iSCSI
B. Fibre Channel
C. SATA
D. NFS

QUESTION 14
Which resource management technique can be used to relieve a network bottleneck caused by a virtual machine with occasional high outbound network activity?
A. Convert the switch from a vSphere Standard Switch to a vSphere Distributed Switch.
B. Create a new port group for the virtual machine and enable traffic shaping.
C. Apply traffic shaping to the other virtual machines in the same port group.
D. Apply traffic shaping to the virtual machine with high activity.

QUESTION 15
Which of these factors indicates a high likelihood that the performance of a virtual machine disk is being constrained by disk I/O?
A. A large number of commands issued
B. A high device latency value
C. A high disk used value
D. A large number of kilobytes read and written

QUESTION 16
Click the Exhibit button. An administrator is configuring vMotion in their environment. As part of the implementation, an administrator is examining resource mapping for virtual machines. What is a likely cause of the error shown in the exhibit?
A. vMotion has been disabled because there are no other hosts in the cluster with vMotion enabled.
B. vMotion has not been enabled on the Production port group.
C. The administrator has not created a Management network.
D. vMotion has not been enabled on a VMkernel port group.
QUESTION 17
Which two circumstances might cause a DRS cluster to become invalid? (Choose two.)
A. DRS has been disabled on one or more ESXi hosts.
B. An ESXi host has been removed from the cluster.
C. A migration on a virtual machine is attempted while the virtual machine is failing over.
D. Virtual machines have been powered on from a vSphere Client connected directly to an ESXi host.

QUESTION 18
Click the Exhibit button. An administrator is applying patches to a batch of ESXi hosts in an under-allocated HA/DRS cluster. An attempt is made to place the host into maintenance mode, but the progress has stalled at 2%. DRS is configured as shown in the exhibit. Of the four choices below, two would effectively resolve this issue. Which two steps could be taken to correct the problem? (Choose two.)
A. Enable VMware EVC.
B. Set the virtual machines on the host to Fully Automated.
C. Shut down the virtual machines on the host.
D. Move the virtual machine swap file off the local datastore.

QUESTION 19
A company has converted several physical machines to virtual machines but is seeing significant performance issues on the converted machines. The host is configured with sufficient memory and storage does not appear to be a bottleneck. Which metric can be checked to determine if CPU contention exists on an ESXi host?
A. %RUN
B. %WAIT
C. %USED
D. %RDY

QUESTION 20
A group of virtual machines has been deployed using thin disks because of limited storage space availability. The storage team has expressed concern about extensive use of this type of provisioning. At which level can the administrator set an alarm to notify the storage team?
A. Datastore
B. Virtual Machine
C. Host
D. Resource Pool

QUESTION 21
Virtual machine VM1 is unable to communicate with virtual machine VM2. Both virtual machines are connected to a port group named Production on vSwitch1 on host ESXi01. Which statement could explain why?
A. The only vmnic connected to vSwitch1 on ESXi01 is set to unused.
B. The only vmnic connected to vSwitch1 on ESXi01 is set to standby.
C. Load balancing settings on Production do not match vSwitch1.
D. VM1 is configured for VGT, VM2 is configured for VST.

QUESTION 22
What are two methods of maximizing VMFS performance for virtual machines across all the hosts in a cluster? (Choose two.)
A. Use disk shares to prioritize virtual machine disk I/O.
B. Enable Storage I/O Control.
C. Enable Storage DRS with I/O load balancing.
D. Enable Host Cache using local SSD drives.

QUESTION 23
Which two statements are true regarding vSphere Standard Switches? (Choose two.)
A. Beaconing requires at least three uplinks to be considered useful.
B. Virtual machines on different vSwitches require the vSwitches to share an uplink to communicate.
C. vSphere virtual switches require at least one uplink adapter.
D. Setting the number of ports to the maximum on a vSwitch will exhaust the total ports on a host.

QUESTION 24
Which types of traffic can a VMkernel port be enabled to carry on an ESXi host? (Choose three.)
A. NFS
B. Fault Tolerant
C. Management
D. iSCSI
E. vMotion

QUESTION 25
Networking policies for a vSphere Standard Switch, such as traffic shaping and security, can be overridden on which vSphere elements?
A. On the virtual machine
B. On the physical switch
C. On the physical network interface
D. On the port group

QUESTION 26
When would a license server be configured for vCenter Server?
A. When managing ESX servers
B. When the vCenter Server Virtual Appliance (vCSA) is used
C. Within the first 60 days
D. A standalone license server is installed by default

QUESTION 27
What are two valid Resource settings that can be set at the vApp level? (Choose two.)
A. Network
B. CPU
C. Memory
D. Disk
QUESTION 28
An administrator is investigating a report of slow disk performance. Where is the most efficient place for the administrator to check?
A. The performance tab of the vApp.
B. The performance tab of the virtual machine.
C. The performance tab of the vApp resource pool.
D. The performance tab of the cluster.

QUESTION 29
What is a benefit of vCenter Linked Mode?
A. Allows the vCenter Server Virtual Appliance (vCSA) to manage multiple sites
B. Pools vRAM entitlement
C. Increases vCenter security
D. Increases vCenter reliability

QUESTION 30
Which virtual machine action listed below can be performed on a template?
A. Power on
B. Clone
C. Edit Settings
D. Migrate

QUESTION 31
An administrator is attempting to clone a running virtual machine, but receives an error indicating that the virtual machine is using a device that prevents the operation. Which two device types listed below could be causing this error? (Choose two.)
A. An independent mode virtual disk
B. A physical compatibility mode RDM
C. An LSI Logic SAS adapter
D. A BusLogic Parallel adapter

QUESTION 32
When cloning a virtual machine to a template, which two attributes from the list below can be changed for the destination virtual disk? (Choose two.)
A. Virtual disk format
B. Number of network adapters
C. IP Address
D. Location for virtual machine files

QUESTION 33
Which types of devices can be connected to a virtual machine during a vMotion migration? (Choose two.)
A. SCSI pass-through devices connected to the ESXi 5 host.
B. NFS mounts inside of the guest.
C. USB pass-through devices connected to the ESXi 5 host.
D. ISO images connected using the vSphere 5 Client.

QUESTION 34
What are three of the steps recalled patches automatically go through in vSphere Update Manager? (Choose three.)
A. Hosts with recalled patches are placed in maintenance mode.
B. Hosts with recalled patches are remediated.
C. The recalled patch binary is deleted from the repository.
D. The recalled patch is flagged in the database.
E. A notification is generated in the notification tab.

QUESTION 35
Assuming that ballooning is possible, under which three circumstances might the VMkernel use a swap file for a running virtual machine? (Choose three.)
A. The value is between 10 and 25 percent.
B. Memory cannot be reclaimed quickly enough.
C. The virtual machine is starting up.
D. VMware Tools is not installed.
E. 50% of the configured memory has already been ballooned.

QUESTION 36
Which two options will be presented during the Migrate wizard for a powered-on virtual machine? (Choose two.)
A. Change Host
B. Change Host and Datastore
C. Change Network
D. Change Datastore
QUESTION 37
While performing a security check the vSphere administrator finds unassigned AD accounts with vSphere permissions. If the accounts are removed from Active Directory, what will happen to any user logged into vCenter with those accounts?
A. The vSphere Client warns the user they will be logged out in 1 hour.
B. The user can remain logged in indefinitely.
C. The user is immediately disconnected from vCenter Server and cannot log back in.
D. The user can remain logged into vCenter for up to 24 hours.

QUESTION 38
The VMware vCenter Server Virtual Appliance (vCSA) offers many features of the Windows application version. Which of the following features is only available on the Windows application version of vCenter?
A. Host Profiles
B. Template and clone customization
C. Active Directory authentication
D. IPv6

QUESTION 39
When deploying an OVF template, the resulting virtual disk is created in what file format?
A. OVF
B. VMDK
C. VMX
D. VSWP

QUESTION 40
Assuming that VLANs are not configured, what is true about traffic from a virtual machine connected to a port group on a vSphere Standard Switch with no uplinks?
A. The virtual switch will drop the packets if no uplink is present.
B. Virtual machines on any vSphere Standard Switch on the same ESXi host can receive the traffic.
C. Virtual machines in any port group on the virtual switch can monitor all of the traffic.
D. vMotion will not migrate any virtual machines connected to a port group on the virtual switch.

QUESTION 41
What is true about the HA agent on ESXi hosts?
A. HA agent logs and entries use the prefix aam.
B. HA agents are set to start by default.
C. HA agents can store configuration information locally.
D. The vSphere Client cannot determine which HA agent is master.

QUESTION 42
Which are the correct parameters to specify when adding an iSCSI target to an ESXi host using the Static Discovery tab?
A. iSCSI server IP address, port 902
B. iSCSI server IP address, port 3260
C. iSCSI server IP address, port 902, iSCSI target name
D. iSCSI server IP address, port 3260, iSCSI target name

QUESTION 43
Which two migration techniques can be used together to move a running virtual machine to a local datastore on a different server? (Choose two.)
A. Cold migration
B. Storage vMotion
C. vMotion
D. vSphere Clone

QUESTION 44
Which three services will continue to function when vCenter Server is unavailable? (Choose three.)
A. Storage DRS
B. Fault Tolerance (FT)
C. vMotion
D. Thin Provisioning
E. High Availability

QUESTION 45
A user wants to receive an email notification when the virtual machine CPU usage enters a warning state and again when the condition enters an alarm state. Which two state changes must be selected to receive the appropriate notifications? (Choose two.)
A. Yellow-red state change
B. Red-green state change
C. Yellow-green state change
D. Green-yellow state change
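Several of the questions above reference command-line checks that can be tried directly on a host. As a hedged illustration of the jumbo-frame verification from QUESTION 8, run on an ESXi 5.x shell (the interface name vmk1 is an example, not from the questions):

# List VMkernel interfaces; the MTU field should read 9000 on jumbo-frame interfaces
esxcli network ip interface list

# If needed, set the MTU on a specific VMkernel interface
esxcli network ip interface set -i vmk1 -m 9000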
HPE Smart Array P841 Controller Overview
QuickSpecs: HPE Smart Array P841 Controller Overview

HPE Smart Array P841 Controller
The HPE Smart Array P841 Controller is a full-height, PCIe3 x8, 12Gb/s Serial Attached SCSI (SAS) RAID controller that provides enterprise-class storage performance, increased external scalability, and data protection for select HPE ProLiant Gen9 rack and tower servers. It features sixteen external physical links and delivers increased server uptime by providing advanced storage functionality, including online RAID level migration (between any RAID levels) with 4GB flash-backed write cache (FBWC), global online spare, and pre-failure warning. This controller includes transportable FBWC, allowing the data in the cache to be migrated to a new controller.
The Gen9 family of controllers supports the HPE Smart Storage Battery. With the HPE Smart Storage Battery, multiple Smart Array controllers are supported without needing additional backup power sources for each controller, resulting in a simple upgrade process.

What's New
• Supports HPE D6020 Disk Enclosures

Models
HPE Smart Array P841: HP Smart Array P841/4GB FBWC 12Gb 4-ports Ext SAS Controller (726903-B21)
NOTE: The HPE Smart Array P841/4GB FBWC controller option kit does not include the HPE Smart Storage Battery, the backup power source necessary to protect the data on the flash-backed write cache. The HPE Smart Storage Battery has to be purchased separately if this is the first P-series Smart Array controller on your Gen9 server.

Key Features
• Storage interface (SAS/SATA)
  - 16 physical links across 4 x4 external ports
  - 12Gb/s SAS, 6Gb/s SATA technology
  - Mix-and-match SAS and SATA drives on the same controller
  - Support for SAS tape drives, SAS tape autoloaders and SAS tape libraries
• 4 GiBytes Flash-Backed Write Cache (FBWC)
• PCI Express Gen3 x8 link
• RAID 0, 1, 10, 5, 50, 6, 60, 10 ADM (Advanced Data Mirroring)
• RAID or HBA mode
• Legacy and UEFI boot operation
• Up to 200 physical drives
• Up to 64 logical drives
• Up to 8 daisy-chained HPE D3600 Disk Enclosures and HPE D3700 Disk Enclosures in dual-domain configuration
• HPE SmartCache (license included)
• HPE Secure Encryption (optional license)
• HPE SSD Smart Path
• VMware Virtual SAN certified
• Rapid Parity Initialization (RPI)
• Rapid rebuild
• Drive Erase
• Performance Optimization - Degraded Reads and Read Coalescing
• Power Efficiency
• Seamless upgrades to and from other HPE Smart Array controllers
• Recovery ROM to protect against controller ROM corruption
• Full-height, half-length standard PCI Express plug-in card
  - Dimensions (excluding bracket): 7.5 in x 9.5 in x 2.25 in (19.05 cm x 24.13 cm x 5.72 cm)

Ports
• External: 16 SAS/SATA physical links across 4 x4 ports

Performance
• 12Gb/s SAS (1200 MB/s theoretical bandwidth per physical lane)
• 6Gb/s SATA (600 MB/s theoretical bandwidth per physical lane)
• PCI Express Gen3 x8 link width
• 4 GiBytes 72-bit-wide DDR3-1866 Flash-Backed Write Cache provides up to 14.9GB/s maximum cache bandwidth
• Read-ahead caching
• Write-back caching

Online Management Features
• Online array expansion
• Online capacity expansion
• Online logical drive extension
• Online RAID level migration
• Online stripe size migration
• Online mirror split, recombine, and rollback
• Online active drive replacement
• Online drive firmware upgrade
• Online and high-performance offline Rapid Parity Initialization (RPI)
• Unlimited global online spares assignment
• User-selectable expand and rebuild priority
• User-selectable RAID level and stripe size
• User-selectable read and write cache sizes
• Supports Predictive Spare Activation

Fault Prevention
The following features offer detection of possible failures before they occur, allowing preventive action to be taken:
• S.M.A.R.T.: Self-Monitoring Analysis and Reporting Technology, first developed at HPE, detects possible hard disk failure before it occurs, allowing replacement of the component before failure occurs.
• Drive Parameter Tracking monitors drive operational parameters, predicting failure and notifying the administrator.
• Dynamic Sector Repairing continually performs background surface scans on the hard disk drives during inactive periods and automatically remaps bad sectors, ensuring data integrity.
• Smart Array Cache Tracking monitors integrity of the controller cache, allowing pre-failure preventative maintenance.

Fault Recovery
Minimizes downtime, reconstructs data, and facilitates a quick recovery from drive failure:
• Recovery ROM: This feature provides unique redundancy that protects from ROM image corruption. A new version of firmware can be flashed to the ROM while the controller maintains the last known working version of firmware. If the firmware becomes corrupt, the controller will revert back to the previous version of firmware and continue operating. This reduces the risk of flashing firmware to the controller.
• On-Line Spares: There is no limit to the number of spare drives that can be installed prior to drive failure. If a failure occurs, recovery begins with an On-Line Spare and data is reconstructed automatically.
• DRAM ECC corrects single-bit data and address corruption.

HPE SmartCache
The HPE SmartCache feature is a controller-based read and write caching solution in a DAS environment that caches the most frequently accessed data ("hot" data) onto lower-latency SSDs to dynamically accelerate application workloads. The HPE SmartCache architecture is flexible and supports any HPE ProLiant Gen9 supported HDD for bulk storage and any HPE ProLiant Gen9 supported SSD as an accelerator. HPE SmartCache is deployed and managed using the HPE Smart Storage Administrator (HPE SSA). The HPE SmartCache license comes standard with the P841 controller. For more information please visit: /servers/smartcache.

HPE Secure Encryption
HPE Secure Encryption is a Smart Array controller-based data encryption solution for ProLiant Gen9 servers that protects sensitive, mission-critical data. This is an enterprise-class encryption solution for data at rest on any bulk storage attached to the HPE Smart Array controllers, including data on the cache memory of the controller. HPE Secure Encryption is an optional license per server requiring encryption enablement (see Related Options for more information on the license).
The solution is available for both local and remote key management mode deployments. Local Key Management Mode is focused on single-server deployment, where there is one Master key per controller that is managed by the user. Remote Key Management Mode is for enterprise-wide deployments from just a few servers to thousands of servers.
For more information please visit: /servers/secureencryption

HPE SSD Smart Path
The HPE SSD Smart Path feature included in the Smart Array software stack improves Solid State Disk (SSD) read performance. With up to 4x better SSD read performance, HPE SSD Smart Path chooses the optimum path to the SSD and accelerates reads for all RAID levels and RAID 0 writes. HPE SSD Smart Path requires updated firmware, drivers, and configuration utility available at: /servers/ssa. HPE SSD Smart Path is ideal for read-intensive workloads and is included as a base feature on HPE Smart Array P-series controllers.
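The array, SmartCache, and SSD Smart Path features above are all managed through HPE SSA, which also ships a command-line front end. A minimal sketch of common operations with the ssacli utility, assuming the package is installed on the host OS and the P841 sits in PCIe slot 1; the slot number and drive IDs are illustrative assumptions, not values from this QuickSpecs:

# Show the controller and everything attached to it
ssacli ctrl slot=1 show config detail

# Create a RAID 6 logical drive from four external drives
ssacli ctrl slot=1 create type=ld drives=1E:1:1,1E:1:2,1E:1:3,1E:1:4 raid=6

# Assign a global online spare to the new array
ssacli ctrl slot=1 array A add spares=1E:1:5

# Toggle HPE SSD Smart Path per array
ssacli ctrl slot=1 array A modify ssdsmartpath=enable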
Warranty
The warranty for this device is 3 years, parts only.
Pre-Failure Warranty: Drives attached to the Smart Array Controller and monitored under Insight Manager are supported by a Pre-Failure (replacement) Warranty. For complete details, consult the HPE Support Center or refer to your HPE Server Documentation.
Warranty Upgrade Options:
• Response: Upgrade on-site response from next business day to same day 4 hours
• Coverage: Extend hours of coverage from 9 hours x 5 days to 24 hours x 7 days
• Duration: Select duration of coverage for a period of 1, 3, or 5 years
• Warranty upgrade options can come in the form of Care Packs, which are sold at the HPE System level to which this product attaches.

Compatibility
Server Support: HPE ProLiant DL20 Gen9, DL80 Gen9, DL180 Gen9, DL360 Gen9, DL380 Gen9, DL560 Gen9, ML110 Gen9, ML150 Gen9, ML350 Gen9, Apollo 4200, Apollo 4500
Disk Enclosure Support: HPE D2600, D2700, D3600, D3700, D6000, and D6020 Disk Enclosures
Operating Systems: Microsoft Windows Server, Microsoft Windows Hyper-V Server, VMware vSphere ESXi, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), Oracle Solaris, Oracle Linux, Canonical Ubuntu, CentOS, Wind River, Citrix XenServer
NOTE: For a complete and up-to-date list of certified and supported OS versions for HPE Smart Array controllers, please refer to the HPE Smart Array Operating System Support Matrix at: /h20195/v2/GetPDF.aspx/4AA6-6550ENW.pdf
NOTE: For more information on HPE's Certified and Supported ProLiant Servers for OS and Virtualization Software, please visit our Support Matrix at: /info/ossupport

Storage Management Software Suite
HPE Smart Storage Administrator (HPE SSA): Comprehensive management for HPE Smart Storage products with advanced scripting and diagnostic features and a simplified, intuitive interface. For more information please visit: /servers/ssa
HPE Systems Insight Manager: Powerful server and server options/storage manager tool with configuration/diagnostic utilities
HPE Storage Management Utility: Offers the simplest method for configuring the storage system via the Initial System Configuration Wizard

Service and Support
NOTE: HPE Smart Array controllers are supported as a part of the HPE Server Infrastructure. No separate Care Packs need to be purchased.
HPE Technology Services for Industry Standard Servers: HPE Technology Services delivers confidence, reduces risk and helps customers realize agility and stability. Connect to HPE to help prevent problems and solve issues faster. Our support technology lets you tap into the knowledge of millions of devices and thousands of experts to stay informed and in control, anywhere, any time.
Protect your business beyond warranty with HPE Care Pack Services: HPE Care Pack Services enable you to order the right service level, length of coverage and response time as you purchase your new server, giving you full entitlement for the term you select.
Get connected to HPE to improve your support experience: Connecting products to Hewlett Packard Enterprise will help prevent problems with 24x7 monitoring, pre-failure alerts, automatic call logging, and parts dispatch.
With Connected products, you can have a dashboard to manage your IT anywhere, anytime, from any device.
HPE Support Center: A personalized online support portal with access to information, tools and experts to support Hewlett Packard Enterprise business products. Submit support cases online, chat with HPE experts, access support resources or collaborate with peers. Learn more: /go/hpsc
The HPE Support Center Mobile App allows you to resolve issues yourself or quickly connect to an agent for live support. Now, you can get access to personalized IT support anywhere, anytime. HPE Insight Remote Support and HPE Support Center are available at no additional cost with an HPE warranty, HPE Care Pack or Hewlett Packard Enterprise contractual support agreement.
NOTE: The HPE Support Center Mobile App is subject to local availability.
Parts and materials: Hewlett Packard Enterprise will provide HPE-supported replacement parts and materials necessary to maintain the covered hardware product in operating condition, including parts and materials for available and recommended engineering improvements. Parts and components that have reached their maximum supported lifetime and/or the maximum usage limitations as set forth in the manufacturer's operating manual, product QuickSpecs, or the technical product data sheet will not be provided, repaired, or replaced as part of these services. The defective media retention service feature option applies only to Disk or eligible SSD/Flash Drives replaced by HPE due to malfunction.
For more information: To learn more about services for HPE ProLiant servers, please contact your Hewlett Packard Enterprise sales representative or Hewlett Packard Enterprise Authorized ServiceOne Channel Partner, or visit: /services/proliant

Related Options
HPE Secure Encryption: HP Secure Encryption per Svr Entitlement (C9A82AAE)
NOTE: HPE Secure Encryption licensing is based on the number of servers requiring encryption for direct-attached storage. For more information visit: /go/hpsecureencryption
HPE Smart Storage Battery: HP 96W Smart Storage Battery with 145mm Cable for DL/ML/SL Servers (727258-B21)
NOTE: The HPE 96W Smart Storage Battery provides backup power for up to 16 HPE Smart Array controllers or other devices.
HPE External Cable Options
Cable options to be used with HP D3600 and D3700 Disk Enclosures:
• HP External 0.5m (1ft) Mini-SAS HD 4x to Mini-SAS HD 4x Cable (691968-B21)
• HP External 1.0m (3ft) Mini-SAS HD 4x to Mini-SAS HD 4x Cable (716195-B21)
• HP External 2.0m (6ft) Mini-SAS HD 4x to Mini-SAS HD 4x Cable (716197-B21)
• HP External 4.0m (13ft) Mini-SAS HD 4x to Mini-SAS HD 4x Cable (716199-B21)
Cable options to be used with D6000 and D6020 Disk Enclosures:
• HP 0.5m External Mini SAS High Density to Mini SAS Cable (691971-B21)
• HP 1.0m External Mini SAS High Density to Mini SAS Cable (716189-B21)
• HP 2.0m External Mini SAS High Density to Mini SAS Cable (716191-B21)
• HP 4.0m External Mini SAS High Density to Mini SAS Cable (716193-B21)
• HP 6.0m External Mini SAS High Density to Mini SAS Cable (733045-B21)

Summary of Changes
19-Aug-2016, Version 1 to 2: Added support for D6020. Overview, Standard Features, Compatibility, and Related Options were revised.

© Copyright 2016 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Microsoft and Windows are registered trademarks or trademarks of Microsoft Corporation in the U.S. and/or other countries.
c04545470 - 15200 - Worldwide - V2 - 19-August-2016
StarWind Virtual SAN Virtual Machine Storage Solution
StarWind Virtual SAN: Compute and Storage Separated Multi-Node Cluster. Scale-Out Existing Deployments for VMware vSphere

TRADEMARKS
"StarWind", "StarWind Software" and the StarWind and StarWind Software logos are trademarks of StarWind Software which may be registered in some jurisdictions. All other trademarks are owned by their respective owners.

CHANGES
The material in this document is for information only and is subject to change without notice. While reasonable efforts have been made in the preparation of this document to assure its accuracy, StarWind Software assumes no liability resulting from errors or omissions in this document, or from the use of the information contained herein. StarWind Software reserves the right to make changes in the product design without reservation and without notification to its users.

TECHNICAL SUPPORT AND SERVICES
If you have questions about installing or using this software, check this and other documents first - you will find answers to most of your questions on the Technical Papers webpage or in the StarWind Forum. If you need further assistance, please contact us.

COPYRIGHT ©2009-2014 STARWIND SOFTWARE INC.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written consent of StarWind Software.

CONTENTS
Introduction
Solution Diagram
Replacing Partner for (DS2)
Creating Virtual Disk (DS3)
Configuring ESXi Cluster
- Configuring the iSCSI Initiator
- Setting up a Datastore
Contacts

INTRODUCTION
Traditionally, VMware requires some sort of shared storage to guarantee data safety, allow virtual machine migration, enable continuous application availability and eliminate any single point of failure within the IT environment. VMware users have to choose between two options for shared storage:
• Hyper-converged solutions, which share the same hardware resources for the application (i.e. hypervisor, database) and the shared storage, thus decreasing the TCO and achieving outstanding performance results
• Compute and storage separated solutions, which keep the compute and storage layers separate from each other, thus making maintenance easier, increasing hardware usage efficiency and allowing the system to be built accurately for the task at hand
This guide is intended for experienced VMware and Windows system administrators and IT professionals who would like to add a scale-out node to the StarWind Virtual SAN cluster. It provides step-by-step guidance on scaling out the hyper-converged 2-node StarWind Virtual SAN that converts the storage resources of separated general-purpose servers into a fault-tolerant shared storage resource for ESXi.
A full set of up-to-date technical documentation can always be found here, or by pressing the Help button in the StarWind Management Console.
For any technical inquiries please visit our online community, the Frequently Asked Questions page, or use the support form to contact our technical support department.

SOLUTION DIAGRAM
The diagram below illustrates the network and storage configuration of the resulting solution described in the guide. [Figure: Physical storage sizing in Scale-Out model]

REPLACING PARTNER FOR (DS2)
1. Add a third StarWind node (SW3).
2. Open the Replication Manager for DS2 on the second StarWind node.
3. Click Remove Replica.
4. Click Add Replica.
5. Select Synchronous "Two-Way" Replication and click Next.
6. Enter the name or IP address of the third StarWind node.
7. Select Create new Partner device.
8. Click Change Network Settings.
9. Select the Heartbeat and Synchronization channels and click OK. You will return to Network Options for Synchronization Replication. Click Next.
10. Click Create Replica. After creation, click Finish to close the Replication Wizard.

CREATING VIRTUAL DISK (DS3)
1. Open the Add Device Wizard in one of the following ways:
• Right-click a StarWind server and select Add Device (advanced) from the shortcut menu.
• Select a StarWind server and click the Add Device (advanced) button on the toolbar.
2. As the Add Device Wizard appears, follow the instructions to complete the creation of a new disk.
3. Select Hard disk device as the type of device to be created. Click Next to continue.
4. Select Virtual disk. Click Next to continue.
5. Specify a virtual disk location and size.
6. Click Next.
7. Specify virtual disk options.
8. Click Next to continue.
9. Define the caching policy and specify the cache size (in MB). You can also set the maximum available cache size by selecting the appropriate checkbox.
10. Click Next to continue.
11. Optionally, define the L2 caching policy and the cache size. Click Next to continue.
12. Specify target parameters. Select the Target Name checkbox to enter a custom name of a target. Otherwise, the name is generated automatically in accordance with the specified target alias.
13. Click Next to continue.
14. Click Create to add a new device and attach it to the target.
15. Click Finish to close the wizard.
16. Right-click the recently created device and select Replication Manager from the shortcut menu. Then, click Add Replica.
17. Select Synchronous two-way replication as the replication mode.
18. Click Next to proceed.
19. Specify a partner hostname, IP address and port number.
20. Click Next.
21. Choose Create new Partner Device and click Next.
22. Click Change network settings...
23. Specify interfaces for the Synchronization and Heartbeat channels. Click OK. Then click Next.
24. Click Create Replica.
25. Click Finish to close the wizard.
26. The successfully added devices appear in the StarWind Console.

CONFIGURING ESXI CLUSTER
Configuring the iSCSI Initiator
1. Select a host.
2. Click the Manage tab and select the Storage inset and the Storage Adapters item.
3. The list of available storage adapters appears. Select iSCSI Software Adapter. Open Targets.
4. Click the Add… button. Enter the IP address of the new StarWind node. Click OK.
5. Do the same for each iSCSI_Data network by clicking Add and specifying the server IP address.
6. Click Rescan. In the Rescan dialog, click OK.
7. Repeat the same procedure for the other cluster host.

Setting up a Datastore
1. Right-click on the host and select New Datastore.
2. The New Datastore wizard appears.
3. Select VMFS.
4. Click Next.
5. Enter the name of the datastore (DS3) and the device for the datastore.
6. Click Next.
7. Select VMFS 5.
8. Click Next.
9. Enter the datastore size.
10. Click Next.
11. Verify the settings. Click Finish.
12. Check the other host for the new datastore. If the new datastore is not listed among the existing datastores, click Rescan All.

CONTACTS
Customer Support Portal: /support
Support Forum: /forums
Sales / General Information: ***************************************************
US Headquarters: Phone 1-617-449-7717, Fax 1-617-507-5845
EMEA and APAC: Phone +44-0-2071936727, Voice Mail +44-0-2071936350
Russian Federation and CIS: Phone 1-866-790-2646, +7 (495) 505-63-61
StarWind Software Inc. 301 Edgewater Place, Suite 100, Wakefield, MA 01880, USA
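For administrators who prefer scripting to the vSphere Client, the "Configuring the iSCSI Initiator" and "Setting up a Datastore" sections of the StarWind guide above map onto a few PowerCLI calls. A minimal sketch under stated assumptions: PowerCLI is already connected to vCenter or the host, the host name and target address are placeholders, and the StarWind device is assumed to surface as an eui.* LUN:

# Point the software iSCSI adapter at a StarWind node (dynamic / Send Targets discovery)
$vmhost = Get-VMHost "esxi01.example.local"          # hypothetical host name
$hba = Get-VMHostHba -VMHost $vmhost -Type IScsi
New-IScsiHbaTarget -IScsiHba $hba -Address "172.16.10.1" -Port 3260 -Type Send

# Rescan, then create the VMFS-5 datastore on the discovered LUN
Get-VMHostStorage -VMHost $vmhost -RescanAllHba -RescanVmfs | Out-Null
$lun = Get-ScsiLun -VmHost $vmhost -LunType disk |
    Where-Object { $_.CanonicalName -like "eui.*" } | Select-Object -First 1
New-Datastore -VMHost $vmhost -Name "DS3" -Path $lun.CanonicalName -Vmfs -FileSystemVersion 5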
Sophos XG Firewall Virtual Appliance Deployment Guide
Contents
Introduction
Installation procedure
Configuring XG Firewall
- Activation and Registration
- Basic Configuration
Legal notices

1 Introduction
Welcome to the Getting Started guide for Sophos XG Firewall Virtual Appliance (referred to in this document as "XG Firewall") for the VMware ESX/ESXi platform. This guide describes how you can download, deploy and run XG Firewall as a virtual machine on VMware ESX/ESXi.
Minimum hardware requirements:
1. One vCPU
2. 2GB vRAM
3. 2 vNICs
4. Primary disk with a minimum of 4 GB space
5. Report disk with a minimum of 80 GB space
Note: SFOS 17 supports hard drives with a maximum of 512 GB. XG Firewall will go into fail-safe mode if the minimum requirements are not met.
Note: To optimize the performance of your XG Firewall, configure vCPU and vRAM according to the license you have purchased. When configuring the number of vCPUs, make sure that you do not exceed the maximum number specified in your license.

2 Installation procedure
Make sure that VMware ESX/ESXi version 5.0 or later is installed in your network. For VMware ESX/ESXi installation instructions, refer to the VMware documentation at /support/pubs/vsphere-esxi-vcenter-server-pubs.html.
You need to:
1. Download and extract the OVF image
2. Access the ESX/ESXi host via the vSphere Client
3. Deploy the OVF template
4. Power on
1. Download the .zip file containing the OVF image from https://secure/en-us/products/next-gen-firewall/free-trial.aspx and save it.
2. Log in to the ESX/ESXi host server on which you want to deploy the OVF template through the VMware vSphere Client.
Note: In this guide, we are using the VMware vSphere Client to connect to the ESX/ESXi host server on which the OVF template is to be deployed.
a) Go to File > Deploy OVF Template to open the downloaded .ovf file in the vSphere Client.
b) Select the sf_virtual file and click Open.
3. To deploy the OVF template:
a) Select the location of the .ovf file for XG Firewall and click Next to continue.
b) Verify the OVF template details and click Next to continue.
c) Specify a name and location for the OVF template to be deployed and click Next to continue.
d) Select the host/cluster within which you want to deploy the OVF template and click Next to continue.
Note: Here, we are deploying the OVF template on a single/standalone server. The configuration may be different in a cluster environment.
e) Select the format in which you want to store the virtual disks from the available options:
Thin Provision: Uses the minimum required space for the OVF template, saving the rest for other use.
Thick Provision: Uses the entire allotted virtual disk for OVF template installation, wiping out additional data on the disk.
In the case of VMware ESXi 5.0 or later, three storage options are available: Thin Provision, Thick Provision Lazy Zeroed and Thick Provision Eager Zeroed.
For more information,refer to /.f)Click Next to continue.g)Select the networks to be used by the OVF template and click Next to continue.h)Verify the deployment settings for the OVF Template and click Finish to initiate the deploymentprocess of XG Firewall.This installs XG Firewall on your machine.4.Right-click the deployed XG Firewall and go to Power > Power On.a)Enter the administrator password: ‘admin’ to continue to the Main Menu.Sophos XG Firewall Virtual Appliance3 Configuring XG Firewall1.Browse to "https://172.16.16.16" from the management computer.2.Click Start to begin the wizard and follow the on-screen instructions.NoteThe wizard will not start if you have changed the default administrator password from theconsole.3.1 Activation and Registration1.Review and accept the License Agreement. You must accept the Sophos End User LicenseAgreement (EULA) to proceed further.2.Register Your Firewall. Enter the serial number, if you have it. You can also use your UTM 9license if you are migrating.Otherwise, you can skip registration for 30 days or start a free trial.a)You will be redirected to the MySophos portal website. If you already have a MySophosaccount, specify your sign-in credentials under “Login”. If you are a new user, sign up for aMySophos account by filling in the details under “Create Sophos ID”.b)Complete the registration process.Post successful registration of the device, the license is synchronized and the basic setup is done.3.Finish the basic setup. Click Continue and complete the configurations through the wizard. Whenyou finish the process, the Network Security Control Center appears.You can now use the navigation pane to the left to navigate and configure further settings.3.2 Basic ConfigurationYou can:1.Set up Interfaces2.Create Zones3.Create Firewall Rules4.Set up a Wireless Network1.To set up interfaces:a)You can add network interfaces and RED connections in the Configure > Network >Interfaces menu.b)You can add wireless networks in the Protect > Wireless > Wireless Networks menu.SSIDs will also be shown in the interfaces menu once created.c)You can add access points in Protect > Wireless > Access Points.Sophos XG Firewall Virtual ApplianceSophos XG Firewall Virtual ApplianceYou can see both these wireless networks in Protect > Network > Wireless Networks.e)Go to Protect > Wireless > Access Point Groups.f)Click Add to add a new access point group.g)Add both the wireless networks, and the new access point.If new APs have been installed, you can view these in Control Center.h)Click the pending APs to accept the new access points.i)Configure the settings of the new APs as shown in the image.Sophos XG Firewall Virtual Appliancej)Click Save.Sophos XG Firewall Virtual Appliance4 Legal noticesCopyright © 2020 Sophos Limited. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise unless you are either a valid licensee where the documentation can be reproduced in accordance with the license terms or you otherwise have the prior permission in writing of the copyright owner.Sophos, Sophos Anti-Virus and SafeGuard are registered trademarks of Sophos Limited, Sophos Group and Utimaco Safeware AG, as applicable. All other product and company names mentioned are trademarks or registered trademarks of their respective owners.Copyright © 2020 Sophos Limited. All rights reserved. 
NVIDIA Data Center GPU Driver 515.86.01 (Linux)
NVIDIA Data Center GPU Driver version 515.86.01 (Linux) / 517.88 (Windows)
Release Notes

Table of Contents
Chapter 1. Version Highlights
  1.1. Software Versions
  1.2. Fixed Issues
  1.3. Known Issues
Chapter 2. Virtualization
Chapter 3. Hardware and Software Support

Chapter 1. Version Highlights
This section provides highlights of the NVIDIA Data Center GPU R515 Driver, version 515.86.01 (Linux) / 517.88 (Windows).
For changes related to the 515 release of the NVIDIA display driver, review the file "NVIDIA_Changelog" available in the .run installer packages.
‣ Windows driver release date: 12/20/2022

1.1. Software Versions
For this release, the software versions are as follows:
‣ CUDA Toolkit 11: 11.7
  Note that starting with CUDA 11, individual components of the toolkit are versioned independently. For a full list of the individual versioned components (for example, nvcc, CUDA libraries, and so on), see the CUDA Toolkit Release Notes.
‣ NVIDIA Data Center GPU Driver: 517.88 (Windows)
‣ Fabric Manager: 515.86.01 (use nv-fabricmanager -v)
‣ GPU VBIOS:
  ‣ HGX A100 PG506
    ‣ 92.00.45.00.03 SKU200 40GB air cooling (lidless)
    ‣ 92.00.45.00.04 SKU202 40GB hybrid cooling (lidded)
    ‣ 92.00.45.00.05 SKU210 80GB air cooling (lidless)
    ‣ 92.00.45.00.06 SKU212 80GB hybrid cooling (lidded)
  ‣ HGX A100 PG510
    ‣ 92.00.81.00.01 SKU200 40GB air cooling (lidless)
    ‣ 92.00.81.00.02 SKU202 40GB hybrid cooling (lidded)
    ‣ 92.00.81.00.04 SKU210 80GB air cooling (lidless)
    ‣ 92.00.81.00.05 SKU212 80GB hybrid cooling (lidded)
  ‣ HGX A800 PG506
    ‣ 92.00.A4.00.01 SKU215 80GB air cooling (lidless)
  ‣ HGX A800 PG510
    ‣ 92.00.A4.00.05 SKU215 80GB air cooling (lidless)
  ‣ A100 PCIe P1001 SKU230
    ‣ 92.00.90.00.04 (NVIDIA A100 PCIe)
  ‣ A800 PCIe P1001
    ‣ 92.00.A4.00.0C 40 GB SKU203 PCIe
    ‣ 92.00.A4.00.0D 80 GB SKU235 PCIe
‣ NVSwitch VBIOS: 92.10.14.00.01
‣ NVFlash: 5.791

Due to a revision lock between the VBIOS and driver, VBIOS versions >= 92.00.18.00.00 must use corresponding drivers >= 450.36.01. Older VBIOS versions will work with newer drivers. For more information on getting started with the NVIDIA Fabric Manager on NVSwitch-based systems (for example, NVIDIA HGX A100), refer to the Fabric Manager User Guide.

1.2. Fixed Issues
‣ Security updates: see Security Bulletin: NVIDIA GPU Display Driver - November 2022, which is listed on the NVIDIA Product Security page.

1.3. Known Issues
General
‣ A large number of call traces are seen while peer-to-peer between GPUs is torn down. This is expected and does not indicate any functional issues.
‣ The GPU driver build system might not pick up the Module.symvers file, produced when building the ofa_kernel module from MLNX_OFED, from the right subdirectory. Because of that, nvidia_peermem.ko does not have the right kernel symbol versions for the APIs exported by the IB core driver, and therefore it does not load correctly. This happens when using MLNX_OFED 5.5 or newer on a Linux Arm64 or ppc64le platform.
  To work around this issue, perform the following:
  1. Verify that nvidia_peermem.ko does not load correctly.
  2. Uninstall old MLNX_OFED if one was installed.
  3. Manually remove /usr/src/ofa_kernel/default if one exists.
  4. Install MLNX_OFED 5.5 or newer.
  5. Manually create a soft link:
     /usr/src/ofa_kernel/default -> /usr/src/ofa_kernel/$(uname -m)/$(uname -r)
  6. Reinstall the GPU driver.
‣ On HGX A800 8-GPU systems, the nvswitch-audit tool will report 12 NVLinks per GPU.
  This is a switch configuration report and does not reflect the true number of NVLink interfaces available per GPU, which remains 8.
‣ Combining A800 and A100 SXM modules in a single server is not currently supported with this driver version.
‣ Combining A800 and A100 PCIe with NVLink is not fully tested.
‣ When switching between the open and the legacy kernel modules on Ubuntu, use the following commands.
  To switch from open to legacy:
    sudo apt-get remove --purge nvidia-kernel-open-515
    sudo apt-get install cuda-drivers-515
  To switch from legacy to open:
    sudo apt-get remove --purge nvidia-kernel-source-515
    sudo apt-get install nvidia-kernel-open-515
    sudo apt-get install cuda-drivers-515
‣ If you encounter an error on RHEL7 when installing with cuda-drivers-fabricmanager packages, use the following alternate instructions. For example, if you are upgrading from a different branch to driver 515.65.01:
    new_version=515.65.01
    sudo yum swap nvidia-driver-latest-dkms nvidia-driver-latest-dkms-${new_version}
    sudo yum install nvidia-fabric-manager-${new_version}
‣ When installing a driver on SLES15 or openSUSE15 that previously had an R515 driver installed, users need to run the following command afterwards to finalize the installation:
    sudo zypper install --force nvidia-gfxG05-kmp-default
  Without doing this, users may see the kernel objects as missing.
‣ nvidia-release-upgrade may report that not all updates have been installed and exit. When running the nvidia-release-upgrade command on DGX systems running DGX OS 4.99.x, it may exit and tell users "Please install all available updates for your release before upgrading" even though all upgrades have been installed. Users who see this can run the following command:
    sudo apt install -y nvidia-fabricmanager-450/bionic-updates --allow-downgrades
  After running this, proceed with the regular upgrade steps:
    sudo apt update
    sudo apt full-upgrade -y
    sudo apt install -y nvidia-release-upgrade
    sudo nvidia-release-upgrade
‣ By default, Fabric Manager runs as a systemd service. If using DAEMONIZE=0 in the Fabric Manager configuration file, then the following steps may be required.
  1. Disable the FM service from auto-starting:
     systemctl disable nvidia-fabricmanager
  2. Once the system is booted, manually start the FM process:
     /usr/bin/nv-fabricmanager -c /usr/share/nvidia/nvswitch/fabricmanager.cfg
  Note that since the process is not a daemon, the SSH/shell prompt will not be returned (use another SSH shell for other activities or run FM as a background task).

GPU Performance Counters
The use of developer tools from NVIDIA that access various performance counters requires administrator privileges. See this note for more details. For example, reading NVLink utilization metrics from nvidia-smi (nvidia-smi nvlink -g 0) would require administrator privileges.

NoScanout Mode
NoScanout mode is no longer supported on NVIDIA Data Center GPU products. If NoScanout mode was previously used, then the following line in the "screen" section of /etc/X11/xorg.conf should be removed to ensure that X server starts on data center products:
    Option "UseDisplayDevice" "None"
NVIDIA Data Center GPU products now support one display of up to 4K resolution.

Unified Memory Support
CUDA and unified memory is not supported when used with Linux power management states S3/S4.

IPMI FRU for Volta GPUs
The driver does not support the IPMI FRU multi-record information structure for NVLink.
See the Design Guide for Tesla P100 and Tesla V100-SXM2 for more information.

OpenCL 3.0 Known Issues
Device-side enqueue
‣ Device-Side-Enqueue related queries may return 0 values, although corresponding built-ins can be safely used by the kernel. This is in accordance with conformance requirements described at https:///registry/OpenCL/specs/3.0-unified/html/OpenCL_API.html#opencl-3.0-backwardscompatibility
‣ Shared virtual memory: the current implementation of shared virtual memory is limited to 64-bit platforms only.

Chapter 2. Virtualization
To make use of GPU passthrough with virtual machines running Windows and Linux, the hardware platform must support the following features:
‣ A CPU with hardware-assisted instruction set virtualization: Intel VT-x or AMD-V.
‣ Platform support for I/O DMA remapping.
  ‣ On Intel platforms, the DMA remapper technology is called Intel VT-d.
  ‣ On AMD platforms, it is called AMD IOMMU.
Support for these features varies by processor family, product, and system, and should be verified at the manufacturer's website.

Supported Hypervisors
The following hypervisors are supported:
[Table: supported hypervisors]

Supported Graphics Cards
The following GPUs are supported for device passthrough:
[Table: GPUs supported for device passthrough]

Chapter 3. Hardware and Software Support

Supported Operating Systems for NVIDIA Data Center GPUs
The Release 515 driver is supported on the following operating systems:
‣ Windows x86_64 operating systems:
  ‣ Microsoft Windows® Server 2022
  ‣ Microsoft Windows® Server 2019
  ‣ Microsoft Windows® Server 2016
  ‣ Microsoft Windows® 11 21H2
  ‣ Microsoft Windows® 10

Supported Operating Systems and CPU Configurations for NVIDIA HGX A100
The Release 515 driver is validated with NVIDIA HGX A100 on the following operating systems and CPU configurations:
‣ Windows 64-bit distributions:
  ‣ Windows Server 2019 (in 1/2/4/8-GPU configurations; 16-GPU configurations are currently not supported)
    Windows is supported only in shared NVSwitch virtualization configurations.
‣ CPU configurations:
  ‣ AMD Rome in PCIe Gen4 mode
  ‣ Intel Skylake/Cascade Lake (4-socket) in PCIe Gen3 mode

API Support
This release supports the following APIs:
‣ NVIDIA® CUDA® 11.7 for NVIDIA® Maxwell™, Pascal™, Volta™, Turing™, and NVIDIA Ampere architecture GPUs
‣ OpenGL® 4.6
‣ Vulkan® 1.3
‣ DirectX 11
‣ DirectX 12 (Windows 10)
‣ Open Computing Language (OpenCL™ software) 3.0
Note that for using graphics APIs on Windows (such as OpenGL, Vulkan, DirectX 11, and DirectX 12) or any WDDM 2.0+ based functionality on Data Center GPUs, vGPU is required. See the vGPU documentation for more information.

Supported NVIDIA Data Center GPUs
The NVIDIA Data Center GPU driver package is designed for systems that have one or more Data Center GPU products installed. This release of the driver supports CUDA C/C++ applications and libraries that rely on the CUDA C Runtime and/or CUDA Driver API.
Attention: Release 470 was the last driver branch to support Data Center GPUs based on the NVIDIA Kepler architecture.
This includes discontinued support for the following compute capabilities:
‣ sm_30 (NVIDIA Kepler)
‣ sm_32 (NVIDIA Kepler)
‣ sm_35 (NVIDIA Kepler)
‣ sm_37 (NVIDIA Kepler)
For more information on GPU products and compute capability, see https:///cuda-gpus.

Notice
This document is provided for information purposes only and shall not be regarded as a warranty of a certain functionality, condition, or quality of a product. NVIDIA Corporation ("NVIDIA") makes no representations or warranties, expressed or implied, as to the accuracy or completeness of the information contained in this document and assumes no responsibility for any errors contained herein. NVIDIA shall have no liability for the consequences or use of such information or for any infringement of patents or other rights of third parties that may result from its use. This document is not a commitment to develop, release, or deliver any Material (defined below), code, or functionality.

NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any other changes to this document, at any time without notice. Customer should obtain the latest relevant information before placing orders and should verify that such information is current and complete.

NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the time of order acknowledgement, unless otherwise agreed in an individual sales agreement signed by authorized representatives of NVIDIA and customer ("Terms of Sale"). NVIDIA hereby expressly objects to applying any customer general terms and conditions with regards to the purchase of the NVIDIA product referenced in this document. No contractual obligations are formed either directly or indirectly by this document.

NVIDIA products are not designed, authorized, or warranted to be suitable for use in medical, military, aircraft, space, or life support equipment, nor in applications where failure or malfunction of the NVIDIA product can reasonably be expected to result in personal injury, death, or property or environmental damage. NVIDIA accepts no liability for inclusion and/or use of NVIDIA products in such equipment or applications and therefore such inclusion and/or use is at customer's own risk.

NVIDIA makes no representation or warranty that products based on this document will be suitable for any specified use. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer's sole responsibility to evaluate and determine the applicability of any information contained in this document, ensure the product is suitable and fit for the application planned by customer, and perform the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer's product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this document. NVIDIA accepts no liability related to any default, damage, costs, or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this document or (ii) customer product designs.

No license, either expressed or implied, is granted under any NVIDIA patent right, copyright, or other NVIDIA intellectual property right under this document.
Information published by NVIDIA regarding third-party products or services does not constitute a license from NVIDIA to use such products or services or a warranty or endorsement thereof. Use of such information may require a license from a third party under the patents or other intellectual property rights of the third party, or a license from NVIDIA under the patents or other intellectual property rights of NVIDIA.

Reproduction of information in this document is permissible only if approved in advance by NVIDIA in writing, reproduced without alteration and in full compliance with all applicable export laws and regulations, and accompanied by all associated conditions, limitations, and notices.

THIS DOCUMENT AND ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, "MATERIALS") ARE BEING PROVIDED "AS IS." NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT WILL NVIDIA BE LIABLE FOR ANY DAMAGES, INCLUDING WITHOUT LIMITATION ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF ANY USE OF THIS DOCUMENT, EVEN IF NVIDIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA's aggregate and cumulative liability towards customer for the products described herein shall be limited in accordance with the Terms of Sale for the product.

Trademarks
NVIDIA and the NVIDIA logo are trademarks and/or registered trademarks of NVIDIA Corporation in the United States and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.

Copyright
© 2022 NVIDIA Corporation & affiliates. All rights reserved.
VMware PowerCLI Common Scripts
Contents
1.1 Change the ESXi host root password
1.2 Copy local files to a VM
1.3 Change the VM hardware version
1.4 View VM snapshots
1.5 Change a VM's network adapter type
1.6 Power VMs on or off in bulk
1.7 Create VMs from a template in bulk
1.8 Remove alarms
1.9 Update a cluster in vCenter
1.10 Add a host to vCenter
1.11 Add and configure iSCSI ports for a host
1.12 Add and configure an ESXi host
1.13 Create a VMFS datastore
1.14 Retrieve ESXi network information
1.15 Change the port group a VM is bound to
1.16 Create a new vDS
1.17 Create a new distributed switch port group
1.18 View the hosts bound to a distributed switch
1.19 Add hosts to a distributed switch
1.20 Configure the VLAN of a distributed switch port group
1.21 Create a new virtual machine
1.22 Add a VMXNET3 network adapter to a VM
1.23 Add disks to a VM
1.24 Deploy a VM from a template
1.25 Deploy a VM from a template and customization spec
1.26 Deploy a VM from a template and customization spec, checking for sufficient free space
1.27 Re-register virtual machines
1.28 Import a VM information sheet
1.29 Create VMs in bulk
1.30 Change a VM's default gateway
1.31 Modify VM IP settings in bulk
1.32 Silently install VMware Tools on Windows
1.33 Silently install VMware Tools on Linux
1.34 Install VMware Tools in bulk
1.35 Update VMware Tools
1.36 Convert a VM to a template
1.37 Clone a VM to a template
1.38 Bind a static IP address for Windows via a customization spec
1.39 Upgrade the VM hardware version
1.40 Change VM memory and vCPU while powered off
1.41 Convert a disk from thick to thin provisioning via disk copy
1.42 Migrate all VMs from one datastore to another
1.43 Find snapshots older than two weeks and their creators

1.1 Change the ESXi host root password

$strOldRootPassword = "vmware1!"
$strNewRootPassword = "vmware2!"
$arrHostsWithErrors = @()
Get-VMHost | ForEach-Object {
    $ConnectVIServer = Connect-VIServer -Server $_.Name -User root -Password $strOldRootPassword
    $VMHostAccount = $null
    $VMHostAccount = Set-VMHostAccount -Server $_.Name -UserAccount (Get-VMHostAccount -Server $_.Name -User root) -Password $strNewRootPassword
    if (($VMHostAccount -eq $null) -or ($VMHostAccount.GetType().Name -ne "HostUserAccountImpl")) {
        $arrHostsWithErrors += $_.Name
    }
    Disconnect-VIServer -Server $_.Name -Confirm:$false
}

1.2 Copy local files to a VM

$vm = Get-VM -Name vmname
Get-Item "C:\Temp\*.*" | Copy-VMGuestFile -Destination "C:\Temp\" -VM $vm -LocalToGuest -HostUser root -HostPassword password -GuestUser administrator -GuestPassword guestpassword

1.3 Change the VM hardware version

Set-VM -VM vm01 -Version v9 -Confirm:$false

1.4 View VM snapshots

Get-VM | Get-Snapshot                # list snapshots
Get-VM | Get-Snapshot | Format-List  # show snapshot details

1.5 Change a VM's network adapter type

Get-VM vmname | Get-NetworkAdapter | Where {$_.Type -eq "E1000"} | Set-NetworkAdapter -Type vmxnet3

1.6 Power VMs on or off in bulk

$scope = 1..50
$namestart = "vm"
# Power on 50 virtual machines (vm1 through vm50)
foreach ($v in $scope) {
    $name = $namestart + $v
    Get-VM -Name $name | Start-VM -Confirm:$false
}

1.7 Create VMs from a template in bulk

# $vmhost: the target ESXi host object
New-VM -VMHost $vmhost -Name SVR02 -Template WIN2008R2_Template -Datastore datastore1 -OSCustomizationSpec WIN2008R2_Template

Parameter notes:
-VMHost: the target ESXi host on which the VM is created
-Name: the name of the new VM
-Template: the template used to create the VM
-Datastore: the datastore where the new VM is placed
-OSCustomizationSpec: the customization specification used for guest customization during deployment

1.8 Remove alarms

function Remove-Alarm {
<#
.SYNOPSIS
  Removes one or more alarms
.DESCRIPTION
  The function will remove all the alarms whose name matches
.NOTES
  Source: Automating vSphere Administration
  Authors: Luc Dekens, Arnim van Lieshout, Jonathan Medd, Alan Renouf, Glenn Sizemore
.EXAMPLE
  PS> Remove-Alarm -Name "Book: My Alarm"
.EXAMPLE
  PS> Remove-Alarm -Name "Book:*"
#>
    param([string]$Name)

    process {
        $alarmMgr = Get-View AlarmManager
        $alarmMgr.GetAlarm($null) | %{
            $alarm = Get-View $_
            if ($alarm.Info.Name -like $Name) {
                $alarm.RemoveAlarm()
            }
        }
    }
}
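Before pointing Remove-Alarm at a wildcard, it can be worth previewing which alarm definitions would match. A small sketch using the same AlarmManager view the function uses; the "Book:*" pattern is only an example:

# Sketch: preview which alarm definitions a pattern would match
$alarmMgr = Get-View AlarmManager
$alarmMgr.GetAlarm($null) |
    ForEach-Object { (Get-View $_).Info.Name } |
    Where-Object { $_ -like "Book:*" }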
1.9 Update a cluster in vCenter

function Update-vCenterCluster {
<#
.SYNOPSIS
  Patch a cluster that contains vCenter or VUM VMs
.DESCRIPTION
  Patch a cluster that contains vCenter or VUM VMs
.NOTES
  Authors: Luc Dekens & Jonathan Medd
.PARAMETER ClusterName
  Name of cluster to patch
.PARAMETER BaselineName
  Name of baseline to use for patching
.EXAMPLE
  Update-vCenterCluster -ClusterName Cluster01 -BaselineName 'ESXi 4.0 U2 - Current'
#>
    [CmdletBinding()]
    Param(
        [parameter(Mandatory=$True, HelpMessage='Name of cluster to patch')]
        [String]$ClusterName,
        [parameter(Mandatory=$True, HelpMessage='Name of baseline to use for patching')]
        [String]$BaselineName
    )

    $baseline = Get-Baseline -Name $BaselineName

    # Find VUM server
    $extMgr = Get-View ExtensionManager
    $vumExt = $extMgr.ExtensionList | where {$_.Key -eq "com.vmware.vcIntegrity"}
    $vumURL = ($vumExt.Server | where {$_.Type -eq "SOAP"}).Url
    $vumSrv = ($vumUrl.Split("/")[2]).Split(":")[0]
    $vumSrvShort = $vumSrv.Split(".")[0]
    $vumVM = Get-VM -Name $vumSrvShort

    # Find VC server
    $vcSrvShort = $extMgr.Client.ServiceUrl.Split("/")[2].Split(".")[0]
    $vcVM = Get-VM -Name $vcSrvShort

    # Patch the cluster nodes
    $hostTab = @{}
    Get-Cluster -Name $ClusterName | Get-VMHost | %{
        $hostTab[$_.Name] = $_
    }
    $hostTab.Values | %{
        $vm = $null
        # Host names compared against the VUM/VC VMs' hosts; the property paths
        # below are reconstructed, since the source text dropped them.
        if ($_.Name -eq $vumVM.Host.Name) {
            $vm = $vumVM
        }
        if ($_.Name -eq $vcVM.Host.Name) {
            $vm = $vcVM
        }
        if ($vm) {
            $oldNode = $_
            $newNode = $hostTab.Keys | where {$_ -ne $oldNode.Name} | Select -First 1
            $vumVM = $vumVM | Move-VM -Destination $newNode -Confirm:$false
        }
        Remediate-Inventory -Entity $_ -Baseline $baseline
    }
}

1.10 Add a host to vCenter

# Add our host to vCenter, and immediately enable lockdown mode!
$VMhost = Add-VMHost -Name vSphere03.vSphere.local `
    -User root `
    -Password pa22word `
    -Location (Get-Datacenter) `
    -Force |
  Set-VMHostLockdown -Enable

1.11 Add and configure iSCSI ports for a host

# Add iSCSI VMkernel vNIC
$vSwitch = Get-VirtualSwitch -VMHost $VMHost -Name 'vSwitch0'
# We have to first create a portgroup to bind our vNIC to.
$vPG = New-VirtualPortGroup -Name iSCSI `
    -VirtualSwitch $vSwitch `
    -VLanId 55
# Create our new vNIC in the iSCSI PG we just created
$vNIC = New-VMHostNetworkAdapter -VMHost $VMHost `
    -PortGroup iSCSI `
    -VirtualSwitch $vSwitch `
    -IP 10.10.55.3 `
    -SubnetMask 255.255.255.0
# Enable the software iSCSI adapter if not already enabled.
$VMHostStorage = Get-VMHostStorage -VMHost $VMhost | Set-VMHostStorage -SoftwareIScsiEnabled $True
# Sleep while iSCSI starts up
Start-Sleep -Seconds 30
# By default vSphere will set the Target Node name to
# iqn.1998-01.com.vmware:<HostName>-<random number>. The
# following cmd will remove everything after the hostname, set
# Chap auth, and add a send Target.
# (The "iqn.1998-01.com" prefix is reconstructed; the source text
# dropped it and showed only ".vmware".)
#
# Example: iqn.1998-01.com.vmware:esx01-165435 becomes
# iqn.1998-01.com.vmware:esx01
#
# Note that if your hostname has dashes in it, you'll
# need to change the regex below.
$pattern = "iqn.1998-01.com.vmware\:\w*"
Get-VMHostHba -VMHost $VMHost -Type IScsi |
  Where-Object { $_.IScsiName -match $pattern } |
  Set-VMHostHba -IScsiName $Matches[0] |
  Set-VMHostHba -ChapName 'vmware' `
      -ChapPassword 'password' `
      -ChapType "Required" |
  New-IScsiHbaTarget -Address '192.168.1.1' -Port "3260" | Out-Null
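To confirm the rename took effect, the adapter's node name can be listed afterwards. A quick sketch:

# Sketch: confirm the software iSCSI adapter's node name after the rename
Get-VMHostHba -VMHost $VMHost -Type IScsi | Select-Object Device, IScsiName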
1.12 Add and configure an ESXi host

Function ConfigureVMHost {
<#
.SYNOPSIS
  Get-Admin standard vSphere post-configuration script.
  Should only be run against a fresh host.
.DESCRIPTION
  Get-Admin standard vSphere post-configuration script.
  Should only be run against a fresh host.
.PARAMETER IPAddress
  IPAddress of the host to configure
.PARAMETER Cluster
  Name of the cluster to add our host to
.PARAMETER User
  User to log in as; default is root
.PARAMETER Password
  Password to log in with, if needed
.EXAMPLE
  ConfigureVMHost -IPAddress 10.10.1.40 `
      -Cluster DC04_PROD_06
#>
    [cmdletbinding()]
    Param(
        [Parameter(Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
        [String]$IPAddress,
        [Parameter(Mandatory=$true, ValueFromPipelineByPropertyName=$True)]
        [String]$Cluster,
        [Parameter(ValueFromPipelineByPropertyName=$True)]
        [string]$User = 'root',
        [Parameter(ValueFromPipelineByPropertyName=$True)]
        [string]$password
    )
    # While static enough to not be parameterized, we'll still
    # define our advanced iSCSI configuration up front, thereby
    # simplifying any future modifications.
    $ChapName = 'vmware'
    $ChapPassword = 'password'
    $ChapType = 'Required'
    $IScsiHbaTargetAddress = '10.10.11.200','10.10.11.201'
    $IScsiHbaTargetPort = '3260'
    # We'll use the last octet of the IPAddress as the ID for the host.
    $ESXID = $IPaddress.split(".")[3]
    # Get the actual cluster object for our targeted cluster.
    $ClusterImpl = Get-Cluster -Name $Cluster
    # Get the parent folder our cluster resides in.
    $Folder = Get-VIObjectByVIView $ClusterImpl.ExtensionData.Parent
    Write-Verbose "Adding $($IPAddress) to vCenter"
    # Add our host to vCenter, and immediately enable lockdown mode!
    $VMHost = Add-VMHost -Name $IPAddress `
        -User $user `
        -Password $Password `
        -Location $Folder `
        -Force `
        -EA 'STOP' |
      Set-VMHostLockdown -Enable
    # Enter maintenance mode
    $VMHost = Set-VMHost -State 'Maintenance' -VMHost $VMHost |
      Move-VMHost -Destination $Cluster
    #$VMHost = Get-VMHost -Name $IPAddress
    # Get the host profile attached to that cluster
    $Hostprofile = Get-VMHostProfile -Entity $Cluster
    # Attach the profile to our new host
    Apply-VMHostProfile -Entity $VMHost `
        -Profile $HostProfile `
        -AssociateOnly `
        -Confirm:$false |
      Out-Null
    # Apply our host profile to gather any required values
    $AdditionConfiguration = Apply-VMHostProfile -Entity $VMHost `
        -Profile $HostProfile `
        -ApplyOnly `
        -Confirm:$false
    # If we have a hashtable then there are additional config
    # items that need to be defined. Loop through and attempt
    # to fill them in, prompting if we come across something
    # we're not prepared for.
    if ($AdditionConfiguration.gettype().name -eq 'Hashtable') {
        # Create a new hashtable to hold our information
        $Var = @{}
        # Loop through the collection
        switch ($AdditionConfiguration.GetEnumerator()) {
            {$_.name -like '*iSCSI*.address'} {
                $var += @{$_.Name = $('10.10.10.{0}' -f $ESXID)}
            }
            {$_.name -like '*iSCSI*.subnetmask'} {
                $var += @{$_.Name = '255.255.255.0'}
            }
            {$_.name -like '*vMotion*.address'} {
                $var += @{$_.Name = $('10.10.11.{0}' -f $ESXID)}
            }
            {$_.name -like '*vMotion*.subnetmask'} {
                $var += @{$_.Name = '255.255.255.0'}
            }
            Default {
                $value = Read-Host "Please provide a value for $($_.Name)"
                $var += @{$_.Name = $value}
            }
        }
        # Apply our profile with the additional config info
        $VMHost = Apply-VMHostProfile -Entity $VMHost `
            -Confirm:$false `
            -Variable $var
    }
    Else {
        # Apply our profile.
        $VMHost = Apply-VMHostProfile -Entity $VMHost -Confirm:$false
    }
    # Update vCenter with our new profile compliance status
    Test-VMHostProfileCompliance -VMHost $VMHost | Out-Null
    # Enable the software iSCSI adapter if not already enabled.
    $VMHostStorage = Get-VMHostStorage -VMHost $VMhost |
      Set-VMHostStorage -SoftwareIScsiEnabled $True
    # Sleep while iSCSI starts up
    Start-Sleep -Seconds 30
    # By default vSphere will set the Target Node name to
    # iqn.1998-01.com.vmware:<HostName>-<random number>. This
    # script will remove everything after the hostname, set Chap
    # auth, and add a send Target.
    # (The "iqn.1998-01.com" prefix is reconstructed, as in 1.11.)
    #
    # Note that if your hostname has dashes in it, you'll
    # need to change the regex below.
    $pattern = "iqn.1998-01.com.vmware\:\w*"
    $HBA = Get-VMHostHba -VMHost $VMHost -Type 'IScsi' |
      Where { $_.IScsiName -match $pattern }
    If ($HBA.IScsiName -ne $Matches[0]) {
        $HBA = Set-VMHostHba -IScsiHba $HBA -IScsiName $Matches[0]
    }
    Set-VMHostHba -IScsiHba $HBA `
        -ChapName $ChapName `
        -ChapPassword $ChapPassword `
        -ChapType $ChapType |
      New-IScsiHbaTarget -Address $IScsiHbaTargetAddress `
          -Port $IScsiHbaTargetPort | Out-Null
}
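Because ConfigureVMHost takes its parameters from the pipeline by property name, a batch of new hosts can be driven from a CSV file. A minimal sketch; hosts.csv (with IPAddress and Cluster columns) is a hypothetical input file:

# Sketch: configure several new hosts from a CSV (hypothetical file)
Import-Csv .\hosts.csv | ConfigureVMHost -Verbose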
1.13 Create a VMFS datastore

function New-PartitionDatastore {
<#
.SYNOPSIS
  Create a VMFS datastore on a free disk partition
.DESCRIPTION
  Create a VMFS datastore on a free disk partition
.NOTES
  Source: Automating vSphere Administration
  Authors: Luc Dekens, Arnim van Lieshout, Jonathan Medd, Alan Renouf, Glenn Sizemore
.PARAMETER VMHost
  ESX(i) host
.PARAMETER Partition
  Free disk partition from Get-ScsiFreePartition
.PARAMETER Name
  Name of the new VMFS datastore
.EXAMPLE
  $esxName = "esx4i.test.local"
  $esxImpl = Get-VMHost -Name $esxName
  $partition = $esxImpl | Get-ScsiFreePartition | Where {!$_.FullDisk} | Select -First 1
  $esxImpl | New-PartitionDatastore -Partition $partition -Name "MyDS"
#>
    param (
        [parameter(ValueFromPipeline = $true, Position=1)]
        [ValidateNotNullOrEmpty()]
        [VMware.VimAutomation.ViCore.Impl.V1.Inventory.VMHostImpl]$VMHost,
        [parameter(Position=2)]
        [ValidateNotNullOrEmpty()]
        [PSObject]$Partition,
        [parameter(Position=3)]
        [ValidateNotNullOrEmpty()]
        [String]$Name
    )

    process {
        $esx = $VMHost | Get-View
        $storMgr = Get-View $esx.ConfigManager.DatastoreSystem
        $lunExt = $storMgr.QueryAvailableDisksForVmfs($null)
        $device = $lunExt | where {$_.DeviceName -eq $Partition.DeviceName}
        $dsOpt = $storMgr.QueryVmfsDatastoreCreateOptions($Partition.DeviceName) |
          where {$_.Info.VmfsExtent.Partition -eq $Partition.Partition}
        $spec = $dsOpt.Spec
        $spec.Vmfs.VolumeName = $Name
        $spec.Extent += $spec.Vmfs.Extent
        $dsMoRef = $storMgr.CreateVmfsDatastore($spec)
        Get-Datastore (Get-View $dsMoRef).Name
    }
}

1.14 Retrieve ESXi network information

function Get-HostDetailedNetworkInfo {
<#
.SYNOPSIS
  Retrieve ESX(i) host networking info
.DESCRIPTION
  Retrieve ESX(i) host networking info using CDP
.NOTES
  Source: Automating vSphere Administration
  Authors: Luc Dekens, Arnim van Lieshout, Jonathan Medd, Alan Renouf, Glenn Sizemore
.PARAMETER VMHost
  Name of host to query
.PARAMETER Cluster
  Name of cluster to query
.PARAMETER Filename
  Name of file to export
.EXAMPLE
  Get-HostDetailedNetworkInfo -Cluster Cluster01 -Filename C:\Scripts\CDP.csv
#>
    [CmdletBinding()]
    param(
        [String]$VMHost,
        [String]$Cluster,
        [parameter(Mandatory=$True, HelpMessage='Name of File to Export')]
        [String]$filename
    )

    Write-Host "Gathering VMHost objects"
    if ($Cluster) {
        $vmhosts = Get-Cluster $Cluster | Get-VMHost |
          Where-Object {$_.State -eq "Connected"} | Get-View
    }
    else {
        $vmhosts = Get-VMHost $VMHost | Get-View
    }
    $MyCol = @()
    # The "$networkSystem..." property paths below are reconstructed; the
    # source text truncated them to "$work...".
    foreach ($vmwarehost in $vmhosts) {
        $ESXHost = $vmwarehost.Name
        Write-Host "Collating information for $ESXHost"
        $networkSystem = Get-View $vmwarehost.ConfigManager.NetworkSystem
        foreach ($pnic in $networkSystem.NetworkConfig.Pnic) {
            $pnicInfo = $networkSystem.QueryNetworkHint($pnic.Device)
            foreach ($Hint in $pnicInfo) {
                $NetworkInfo = "" | Select-Object Host, PNic, Speed, MAC, DeviceID, PortID, Observed, VLAN
                $NetworkInfo.Host = $ESXHost
                $NetworkInfo.PNic = $Hint.Device
                $NetworkInfo.DeviceID = $Hint.connectedSwitchPort.DevId
                $NetworkInfo.PortID = $Hint.connectedSwitchPort.PortId
                $record = 0
                Do {
                    If ($Hint.Device -eq $networkSystem.NetworkInfo.Pnic[$record].Device) {
                        $NetworkInfo.Speed = $networkSystem.NetworkInfo.Pnic[$record].LinkSpeed.SpeedMb
                        $NetworkInfo.MAC = $networkSystem.NetworkInfo.Pnic[$record].Mac
                    }
                    $record++
                }
                Until ($record -eq ($networkSystem.NetworkInfo.Pnic.Length))
                foreach ($obs in $Hint.Subnet) {
                    $NetworkInfo.Observed += $obs.IpSubnet + " "
                    Foreach ($VLAN in $obs.VlanId) {
                        If ($VLAN -eq $null) {
                        }
                        Else {
                            $strVLAN = $VLAN.ToString()
                            $NetworkInfo.VLAN += $strVLAN + " "
                        }
                    }
                }
                $MyCol += $NetworkInfo
            }
        }
    }
    $Mycol | Sort-Object Host, PNic | Export-Csv $filename -NoTypeInformation
}
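For a quick sanity check of the CDP data, the function can be run against a single host and the CSV reviewed directly. A sketch with a hypothetical host name and output path:

# Sketch: collect CDP details for one host and review the result
Get-HostDetailedNetworkInfo -VMHost esx01.example.com -Filename C:\Scripts\cdp-esx01.csv
Import-Csv C:\Scripts\cdp-esx01.csv | Format-Table Host, PNic, DeviceID, PortID, VLAN -AutoSize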
1.15 Change the port group a VM is bound to

function Move-ToNewPortGroup {
<#
.SYNOPSIS
  Move VMs from one port group to another
.DESCRIPTION
  Move VMs from one port group to another
.NOTES
  Source: Automating vSphere Administration
  Authors: Luc Dekens, Arnim van Lieshout, Jonathan Medd, Alan Renouf, Glenn Sizemore
.PARAMETER Source
  Name of port group to move from
.PARAMETER Target
  Name of port group to move to
.PARAMETER Cluster
  Name of cluster containing VMs
.EXAMPLE
  Move-ToNewPortGroup -Source PortGroup01 -Target PortGroup02 -Cluster Cluster01
#>
    [CmdletBinding()]
    Param(
        [parameter(Mandatory=$True, HelpMessage='Name of Port Group to move from')]
        [String]$Source,
        [parameter(Mandatory=$True, HelpMessage='Name of Port Group to move to')]
        [String]$Target,
        [String]$Cluster
    )

    $SourceNetwork = $Source
    $TargetNetwork = $Target
    if ($Cluster) {
        Get-Cluster $Cluster | Get-VM | Get-NetworkAdapter |
          Where-Object {$_.NetworkName -eq $SourceNetwork} |
          Set-NetworkAdapter -NetworkName $TargetNetwork -Confirm:$false
    }
    else {
        Get-VM | Get-NetworkAdapter |
          Where-Object {$_.NetworkName -eq $SourceNetwork} |
          Set-NetworkAdapter -NetworkName $TargetNetwork -Confirm:$false
    }
}

1.16 Create a new vDS

$Datacenter = Get-Datacenter -Name PROD01
New-DistributedSwitch -Name PROD01-vDS01 `
    -Datacenter $Datacenter `
    -NumberOfUplinks 4

1.17 Create a new distributed switch port group

New-DistributedSwitchPortGroup -Name dvPG01 `
    -NumberOfPorts 128 `
    -VLAN 42 `
    -DistributedSwitch 'PROD01-vDS01'

1.18 View the hosts bound to a distributed switch

Get-Datacenter -Name PROD01 |
  Get-DistributedSwitchCandidate -DistributedSwitch vDS01

1.19 Add hosts to a distributed switch

Get-VMHost |
  ForEach-Object {
    Add-DistributedSwitchVMHost -VMhost $_ `
        -DistributedSwitch vDS01 `
        -Pnic vmnic2,vmnic3
  }

1.20 Configure the VLAN of a distributed switch port group

$vDS | New-DistributedSwitchPrivateVLAN -PrimaryVLanID 108 |
  New-DistributedSwitchPortGroup -Name 'vDS01-10.10.10.0' `
      -PrivateVLAN 108 |
  Set-DistributedSwitchPortGroup -NumberOfPorts 128 `
      -ActiveDVUplinks DVUplink1,DVUplink4

1.21 Create a new virtual machine

New-VM -Name REL5_01 `
    -NumCpu 4 `
    -DiskMB 10240 `
    -DiskStorageFormat 'thin' `
    -MemoryMB 1024 `
    -GuestId rhel5Guest `
    -NetworkName vSwitch0_VLAN22 `
    -CD |
  Get-CDDrive |
  Set-CDDrive -IsoPath "[datastore0] /REHL5.2_x86.iso" `
      -StartConnected:$true `
      -Confirm:$False

1.22 Add a VMXNET3 network adapter to a VM

# Add VMXNET3 network adapters
New-NetworkAdapter -NetworkName 'dvSwitch0_VLAN22' `
    -StartConnected `
    -Type 'Vmxnet3' `
    -VM $VM

1.23 Add disks to a VM

# Add additional hard drives
New-HardDisk -CapacityKB (100GB/1KB) -VM $vm
New-HardDisk -CapacityKB (10GB/1KB) -VM $vm

1.24 Deploy a VM from a template

$Template = Get-Template -Name 'W2K8R2'
$VMHost = Get-VMHost -Name 'vSphere1'
New-VM -Template $Template -Name 'WEB001' -VMHost $VMHost

1.25 Deploy a VM from a template and customization spec

# Get source template
$Template = Get-Template -Name 'REHL5.5'
# Get a host within the development cluster
$VMHost = Get-Cluster 'dev01' | Get-VMHost | Get-Random
# Get the OS customization spec
$Spec = Get-OSCustomizationSpec -Name 'REHL5.5'
# Deploy our new VM
New-VM -Template $Template -Name 'REHL_01' -VMHost $VMHost -OSCustomizationSpec $Spec

1.26 Deploy a VM from a template and customization spec, checking for sufficient free space

# Get source template
$Template = Get-Template -Name 'REHL5.5'
# Get the OS customization spec
$OSCustomizationSpec = Get-OSCustomizationSpec -Name 'REHL5.5'
# Get a host within the development cluster
$VMHost = Get-Cluster 'dev01' | Get-VMHost | Get-Random
# Determine the capacity requirements of this VM
$CapacityKB = Get-HardDisk -Template $Template |
  Select-Object -ExpandProperty CapacityKB |
  Measure-Object -Sum |
  Select-Object -ExpandProperty Sum
# Find a datastore with enough room (10% headroom)
$Datastore = Get-Datastore -VMHost $VMHost |
  Where-Object {($_.FreeSpaceMB * 1mb) -gt (($CapacityKB * 1kb) * 1.1)} |
  Select-Object -First 1
# Deploy our virtual machine
$VM = New-VM -Name 'REHL_01' `
    -Template $Template `
    -VMHost $VMHost `
    -Datastore $Datastore `
    -OSCustomizationSpec $OSCustomizationSpec
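The 1.1 multiplier above leaves roughly 10% of headroom beyond the template's summed disk capacity. A small sketch that reports the requirement up front, reusing the same calculation:

# Sketch: report the space a template needs, including the 10% headroom used above
$Template = Get-Template -Name 'REHL5.5'
$CapacityKB = Get-HardDisk -Template $Template |
  Measure-Object -Property CapacityKB -Sum |
  Select-Object -ExpandProperty Sum
"{0:N1} GB required" -f ((($CapacityKB * 1KB) * 1.1) / 1GB)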
1.27 Re-register virtual machines

# Get every VM registered in vCenter
$RegisteredVMs = Get-VM |
  Select-Object -ExpandProperty ExtensionData |
  Select-Object -ExpandProperty Summary |
  Select-Object -ExpandProperty Config |
  Select-Object -ExpandProperty VmPathName
# Now find every .vmx on every datastore. If it's not part of
# vCenter, then add it back in.
Get-Datastore |
  Search-Datastore -Pattern *.vmx |
  Where-Object { $RegisteredVMs -notcontains $_.path } |
  Where-Object { $_.Path -match "(?<Name>\w+).vmx$" } |
  ForEach-Object {
    $VMHost = Get-Datastore -Name $_.Datastore | Get-VMHost | Get-Random
    New-VM -Name $Matches.Name `
        -VMHost $VMHost `
        -VMFilePath $_.Path
  }

1.28 Import a VM information sheet

Import-Csv .\massVM.txt |
  Foreach-Object {
    New-VM -Name $_.Name `
        -Host $VMhost `
        -Datastore $datastore `
        -NumCpu $_.CPU `
        -MemoryMB $_.Memory `
        -DiskMB $_.HardDisk `
        -NetworkName $_.NIC
  }

1.29 Create VMs in bulk

$Datastores = Get-Cluster -Name 'PROD01' |
  Get-VMHost |
  Get-Datastore
$i = 1
While ($i -le 4) {
  Foreach ($Datastore in $Datastores) {
    New-VM -Name "VM0$i" `
        -Host ($Datastore | Get-VMHost | Get-Random) `
        -Datastore $Datastore
    $i++  # increment the counter (missing in the source text; added so the loop terminates)
  }
}

1.30 Change a VM's default gateway

$GuestCreds = Get-Credential
$HostCreds = Get-Credential
Get-VM |
  Get-VMGuestRoute -GuestCredential $GuestCreds -HostCredential $HostCreds |
  Where-Object { $_.Destination -eq "default" -AND $_.Gateway -ne "10.10.10.1" } |
  Set-VMGuestRoute -Gateway 10.10.10.1 `
      -GuestCredential $GuestCreds `
      -HostCredential $HostCreds

1.31 Modify VM IP settings in bulk

Get-Cluster SQL_DR |
  Get-VM |
  Get-VMGuestNetworkInterface -GuestCredential $guestCreds `
      -HostCredential $hostcreds |
  Where-Object { $_.ip -match "192.168.145.(?<IP>\d{1,3})" } |
  Set-VMGuestNetworkInterface -Ip 192.168.145.$($Matches.IP) `
      -Netmask 255.255.255.0 `
      -Gateway 192.168.145.2 `
      -GuestCredential $guestCreds `
      -HostCredential $hostcreds

1.32 Silently install VMware Tools on Windows

$GuestCred = Get-Credential Administrator
$VM = Get-VM 'Win2k8R2'
# Mount VMware Tools media
Mount-Tools -VM $VM
# Find the drive letter of the mounted media; the "$VM.Guest.HostName"
# computer-name arguments below are reconstructed, since the source text
# dropped them.
$DrvLetter = Get-WmiObject -Class 'Win32_CDROMDrive' `
    -ComputerName $VM.Guest.HostName `
    -Credential $GuestCred |
  Where-Object { $_.VolumeName -match "VMware Tools" } |
  Select-Object -ExpandProperty Drive
# Build our cmd line
$cmd = "$($DrvLetter)\setup.exe /S /v`"/qn REBOOT=ReallySuppress ADDLOCAL=ALL`""
# Spawn a new process on the remote VM, and execute setup
$go = Invoke-WmiMethod -Path win32_process `
    -Name Create `
    -Credential $GuestCred `
    -ComputerName $VM.Guest.HostName `
    -ArgumentList $cmd
if ($go.ReturnValue -ne 0) {
  Write-Warning "Installer returned code $($go.ReturnValue) unmounting media!"
  Dismount-Tools -VM $VM
}
Else {
  Write-Verbose "Tool installation successfully triggered on $($VM.Name); media will be ejected upon completion."
}
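To verify the result, the tools status the guest reports can be checked once the installer finishes. A quick sketch using the VM from above:

# Sketch: check the reported VMware Tools status after the install
Get-VM 'Win2k8R2' |
  Select-Object Name, @{N='ToolsStatus'; E={$_.ExtensionData.Guest.ToolsStatus}}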
1.33 Silently install VMware Tools on Linux

#!/bin/bash
echo -n "Executing preflight checks "
# make sure we are root
if [ `id -u` -ne 0 ]; then
    echo "You must be root to install tools!"
    exit 1;
fi
# make sure we are in RHEL, CentOS or some reasonable facsimile
if [ ! -s /etc/redhat-release ]; then
    echo "You must be using RHEL or CentOS for this script to work!"
    exit 1;
fi
echo "[ OK ]"
echo -n "Mounting Media "
# check for the presence of a directory to mount the CD to
if [ ! -d /media/cdrom ]; then
    mkdir -p /media/cdrom
fi
# mount the cdrom, if necessary...this is rudimentary
if [ `mount | grep -c iso9660` -eq 0 ]; then
    mount -o loop /dev/cdrom /media/cdrom
fi
# make sure the cdrom that is mounted is vmware tools
MOUNT=`mount | grep iso9660 | awk '{ print $3 }'`
if [ `ls -l $MOUNT/VMwareTools* | wc -l` -ne 1 ]; then
    # there are no tools here
    echo "No tools found on CD-ROM!"
    exit 1;
fi
echo "[ OK ]"
echo -n "Installing VMware Tools "
# extract the installer to a temporary location
tar xzf $MOUNT/VMwareTools*.tar.gz -C /var/tmp
# install the tools, accepting defaults, capture output to a file
( /var/tmp/vmware-tools-distrib/vmware-install.pl --default ) > ~/vmware-tools_install.log
# remove the unpackaging directory
rm -rf /var/tmp/vmware-tools-distrib
echo "[ OK ]"
echo -n "Restarting Network:"
# the vmxnet kernel module may need to be loaded/reloaded...
service network stop
rmmod pcnet32
rmmod vmxnet
modprobe vmxnet
service network start
# or just reboot after tools install
# shutdown -r now

1.34 Install VMware Tools in bulk

Get-View -ViewType "VirtualMachine" `
    -Property Guest,Name `
    -Filter @{
      "Guest.GuestFamily" = "windowsGuest";
      "Guest.ToolsStatus" = "ToolsOld";
      "Guest.GuestState"  = "running"
    } |
  Get-VIObjectByVIView |
  Update-Tools -NoReboot

1.35 Update VMware Tools

$CMD = Get-Content .\installTools.sh | Out-String
Invoke-VMScript -VM $VM `
    -GuestCredential $guestCreds `
    -HostCredential $hostcreds `
    -ScriptText $CMD

1.36 Convert a VM to a template

Get-VM WEBXX | Set-VM -ToTemplate

1.37 Clone a VM to a template

$VM = Get-VM WEB07
$Folder = Get-Folder WEB
New-Template -Name 'W2k8R2' -VM $VM -Location $Folder

1.38 Bind a static IP address for Windows via a customization spec

# Update spec with our desired IP information
Get-OSCustomizationSpec -Name 'Win2k8R2' |
  Get-OSCustomizationNicMapping |
  Set-OSCustomizationNicMapping -IPmode UseStaticIP `
      -IpAddress '192.168.145.78' `
      -SubnetMask '255.255.255.0' `
      -DefaultGateway '192.168.145.2' `
      -Dns '192.168.145.6','192.168.145.2'
# Get updated spec object
$Spec = Get-OSCustomizationSpec -Name 'Win2k8R2'
# Get template to deploy from
$Template = Get-Template -Name 'W2K8R2'
# Get VMHost to deploy new VM on
$VMHost = Get-VMHost -Name 'vSphere1'
# Deploy VM
New-VM -Name 'WEB001' `
    -VMHost $VMHost `
    -Template $Template `
    -OSCustomizationSpec $Spec |
  Start-VM

1.39 Upgrade the VM hardware version

$VM = Get-Template 'W2K8R2' | Set-Template -ToVM
$vm.ExtensionData.UpgradeVM("vmx-07")
Set-VM -VM $VM -ToTemplate
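Before calling UpgradeVM, it can help to confirm the hardware version a VM currently runs. A quick sketch:

# Sketch: inspect the current hardware version before upgrading
Get-VM 'W2K8R2' | Select-Object Name, Version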
1.40 Change VM memory and vCPU while powered off

function Set-VMOffline {
<#
.SYNOPSIS
  Changes the vCPU and memory configuration of the virtual machine offline
.DESCRIPTION
  This function changes the vCPU and memory configuration of the virtual machine offline
.NOTES
  Source: Automating vSphere Administration
  Authors: Luc Dekens, Arnim van Lieshout, Jonathan Medd, Alan Renouf, Glenn Sizemore
.PARAMETER VM
  Specify the virtual machine
.PARAMETER MemoryMB
  Specify the memory size in MB
.PARAMETER NumCpu
  Specify the number of virtual CPUs
.PARAMETER TimeOut
  Specify the number of seconds to wait for the vm to shut down gracefully. Default timeout is 300 seconds.
.PARAMETER Force
  Switch parameter to forcibly shut down the virtual machine after timeout
.EXAMPLE
  PS> Get-VM VM001 | Set-VMOffline -memoryMB 4096 -numCpu 2 -timeOut 60
#>
    Param (
        [parameter(ValueFromPipeline = $true, Mandatory = $true, HelpMessage = "Enter a vm entity")]
        [VMware.VimAutomation.ViCore.Impl.V1.Inventory.VirtualMachineImpl]$VM,
        [int64]$memoryMB,
        [int32]$numCpu,
        [Int32]$timeOut = 300,
        [switch]$force
    )

    Process {
        if ($memoryMB -or $numCpu) {
            if ((Get-VM $vm).PowerState -eq "PoweredOn") {
                $powerState = "On"
                Shutdown-VMGuest $vm -Confirm:$false | Out-Null
            }
            $startTime = Get-Date
            While (((Get-VM $vm).PowerState -eq "PoweredOn") -and (((Get-Date) - $startTime).totalseconds -lt $timeOut)) {
                Sleep -Seconds 2
            }
            if ((Get-VM $vm).PowerState -eq "PoweredOff" -or $force) {
                if ((Get-VM $vm).PowerState -eq "PoweredOn") {
                    Write-Warning "The shutdown guest operation timed out"
                    Write-Warning "Forcing shutdown"
                    Stop-VM $VM -Confirm:$false | Out-Null
                }
                # NOTE: the source text is truncated at this point; the remainder
                # below is a reconstruction that applies the requested changes and
                # restores the original power state.
                if ($memoryMB -and $numCpu) {
                    Set-VM -VM $vm -MemoryMB $memoryMB -NumCpu $numCpu -Confirm:$false | Out-Null
                }
                elseif ($memoryMB) {
                    Set-VM -VM $vm -MemoryMB $memoryMB -Confirm:$false | Out-Null
                }
                else {
                    Set-VM -VM $vm -NumCpu $numCpu -Confirm:$false | Out-Null
                }
                if ($powerState -eq "On") {
                    Start-VM -VM $vm -Confirm:$false | Out-Null
                }
            }
        }
    }
}
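A minimal usage sketch for the function above; the VM name is a placeholder, and -Force caps the wait at the timeout before powering the VM off hard:

# Sketch: resize a lab VM offline, forcing power-off if the guest shutdown stalls
Get-VM 'LAB01' | Set-VMOffline -MemoryMB 2048 -NumCpu 2 -TimeOut 120 -Force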
Feature Overview
vCenter Server Appliance supports:
• The vSphere Web Client
• Authentication through AD and NIS
• Feature parity with vCenter Server on Windows
VMware vCenter Server
Automation • Scalability • Visibility
Component Overview
vCenter Server Appliance (VCSA) consists of:
• A pre-packaged 64-bit application running on SLES 11
• Upgrades/Patching
• Standardization
Reduce Deployment Costs
• Deployment overhead
• Licensing
Introducing vCenter Server Appliance
The vCenter Server Appliance is the answer!
• Customize the GUI
• Ready Access to Common Actions
• Support interrupt-driven workflows
• Advanced Search Functionality
• Scales to Cloud levels
• Easily Extensible
Features of the vSphere Web Client
Ready Access to Common Actions
• Quick access to common tasks provided out of the box
Features of the vSphere Web Client
• Basic Health Monitoring
• Viewing the VM console remotely
• Search through large, complex environments
• Save search queries, and quickly run them to find detailed information
The vSphere Web Client runs within a browser
Application Server that provides a scalable back end
[Diagram: Flex client and back-end application server]
The Query Service obtains live data from the core vCenter Server process
• Search to find information quickly – even across multiple vCenters!
Features of the vSphere Web Client
Extendable Functionality
• Possible for partners and end users to add features and functionality
• Except –
  • Linked Mode support
    • Requires ADAM (AD LDS)
  • IPv6 support
  • External DB support
    • Oracle is the only supported external DB for the first release
  • No vCenter Heartbeat support
    • HA is provided through vSphere HA
vCenter in either single or Linked mode operation
[Diagram: Query Service and vCenter]
Why Flex?
Flex provides us with the richest and fullest-featured development platform available.
• Extensive amount of libraries to use
• Technologies such as HTML5 and others are still in development
• Provides the best performance
• Scales to the web
Evolving Data Centers
Data Centers continue to evolve and present challenges to IT
Scalability
• Number of objects
• Heterogeneous nature of the data center
Responding to Business Faster
Enhanced User Experience
Scale to the Cloud
Extensibility
Multiple Platform Support
vSphere Web Client Architecture
What’s New in vCenter Server
Tom Stephens, Senior Technical Marketing Architect
Justin King, Senior Technical Marketing Manager
Agenda
vSphere Web Client
• Overview
• Architecture
• New Functionality
• Summary
vCenter Server Appliance
• Introduction
• Components and Features
• Deployment/Management
• Summary
vCenter Heartbeat
• Supports embedded and external databases
• Limits are the same for VC and VCSA
  • Embedded DB: 5 hosts / 50 VMs
  • External DB: <300 hosts / <3000 VMs (64-bit)
• A web-based configuration interface
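Feature parity also means existing tooling connects to the appliance unchanged; for example, a PowerCLI session is opened exactly as it would be against vCenter on Windows (a sketch with a hypothetical appliance name):

# Sketch: PowerCLI connects to the appliance just like a Windows vCenter
Connect-VIServer -Server vcsa01.example.com -User root -Password 'vmware'
Get-VMHost | Select-Object Name, ConnectionState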
Web Client vs. C# Client
• Scalability: Web Client 50 VCs / 100,000 VMs; C# Client 10 VCs / 10,000 VMs
• Platform independence: Web Client runs on Windows and Linux; C# Client on Windows only
• Extensibility: Web Client offers native rich extension points; C# Client one HTML plug-in
Manageability
• Multiple tools
• Reduced personnel
• Ready access to information
• Geographically separated data centers
• Multiple admins in differing environments
Dispersed Operations
Extensibility
• Tools are not flexible
• Inability to customize
Leads to:
• Frustration
• Complexity
• Time Waste
The vSphere Web Client tackles these challenges head on!
• Simplifies Deployment and Configuration
• Streamlines patching and upgrades
• Reduces the TCO for vCenter
Enables companies to respond to business faster!
Features of the vSphere Web Client
Customize the GUI
• Create custom views to reflect the information you need to see, the way you like to see it
• vApp Management
• vApp Provisioning, vApp Editing, vApp Power Operations
Summary
The vSphere Web Client enables you to respond to business faster:
• Provides a common, cross-platform capable user experience
• Enables admins to accomplish tasks more effectively
vCenter Server Appliance