EMC VxRail 超融合平台-性能测试报告


EMC现代数据中心和VxRail超融合系统介绍


易于管理
• 让应用装置保持在健康、可用和优化状态
• 将 VxRail 应用装置作为单个群集进行管理
• 用 VCE Vision 发现 VxRail 应用装置

（原幻灯片此处为各型号节点配置对照表，可辨识的信息包括：CPU 核心数 6 至 28 核、内存 64GB 至 512GB、原始存储容量 3.6TB 至 19TB，以及 RJ45 / SFP+ 两种存储网络接口选项。）
同一个VxRail应用设备不能混插不同类型节点, 同一个群集可混搭不同的VxRail型号设备
连续复制确保业务连续性
面向 VMware 的简单、高效且经过验证的灾难恢复，内含 EMC RecoverPoint for VMs：
• 无缝集成 VMware vCenter
• 恢复到任意时间点
• 连续数据保护
• 保护精细到 VM 级别
VxRail超融合系统软件架构
云整合：EMC CloudArray、EMC Hybrid Cloud/EHC（可选）
数据保护及容灾：VDP（EMC Avamar）、EMC RecoverPoint for VM
VxRail Manager Extension
VxRail管理
VCE Vision (可选)
轻松扩展 VxRAIL 系统功能
内置 VxRAIL 应用商店
一键式访问 — 通过 VxRail manager

EMC VxRail超融合平台-管理手册

本文档是EMC VxRail超融合平台的管理手册，旨在帮助用户了解和使用该平台。

以下是该平台的管理手册内容。

管理平台介绍
EMC VxRail超融合平台是一种全新的数据中心架构,将计算、网络和存储集成到一起,以提供最佳的IT基础设施。

环境配置和配置要求
在使用EMC VxRail超融合平台之前,用户需要检查并满足以
下配置要求:
- 主机应至少有64GB的RAM。

- 操作系统需要安装VMware vSphere ESXi 5或更高版本。

- 存储至少需要2TB的空间。

- 网络要求为10GbE的网络适配器,可选配InfiniBand HCA。
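针对上述配置要求，下面给出一个假设性的检查示例（均为 ESXi 自带命令，输出仅作示意），可在每台主机的 ESXi Shell 中快速确认版本、内存和网卡是否达标：

vmware -v                      # 查看 ESXi 版本，应不低于文中要求的版本
esxcli hardware memory get     # 查看物理内存总量，应不低于 64GB
esxcli network nic list        # 列出网卡，确认存在 10GbE 网络适配器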

管理步骤和操作指南
在使用EMC VxRail超融合平台过程中,我们应该按照以下步骤操作:
1. 登录EMC VxRail管理页面。

2. 点击“虚拟机管理器”进入虚拟机管理页面。

3. 在虚拟机管理页面中,您可以创建、修改和删除虚拟机,以及查看虚拟机性能。

4. 在EMC VxRail超融合平台的管理页面中,您可以查看和管理整个集群、主机和存储的状态。
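文中的管理操作均通过图形界面完成；如果希望用命令行核对同样的集群和虚拟机信息，下面是一个假设性的示例，借助开源命令行工具 govc（并非 VxRail 自带，需另行安装；vCenter 地址与账号均为占位符）：

export GOVC_URL=https://<vCenter 地址>        # 占位：vCenter 访问地址
export GOVC_USERNAME=<用户名>                 # 占位：vCenter 登录用户
export GOVC_PASSWORD='<密码>'                 # 占位：对应密码
export GOVC_INSECURE=1                        # 实验环境下跳过证书校验
govc about                                    # 确认可连接 vCenter
govc find / -type m                           # 列出集群中的所有虚拟机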

故障排除及支持
在EMC VxRail超融合平台的使用过程中,如果出现故障,您可以尝试以下措施:
- 测试网络连通性。

- 检查存储设备的状态。

- 检查集群节点的状态。

如果您仍然无法解决问题,您可以联系EMC VxRail超融合平台的技术支持。
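作为上述检查的补充，下面是一个假设性的命令行示例（需通过 SSH 登录某个 ESXi 节点执行，IP 为占位符），分别对应网络、磁盘和集群节点三项检查：

vmkping -I vmk0 <对端节点管理IP>          # 检查管理/存储网络连通性
esxcli storage core device list           # 查看本节点磁盘设备状态
esxcli vsan cluster get                   # 查看本节点在 vSAN 集群中的成员状态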

结论
本文档介绍了EMC VxRail超融合平台的管理手册,希望能帮助用户更好地使用这一平台。

VMWare与EMC的vSAN及VxRail虚拟化存储解决方案


继ESX之后,VSAN是VMware发展最快的产品
更多客户愿意借助 VMware HCS而不是竞争产品来部署 HCI
3500+
不到24个月，已有超过3500家客户选择
VMware HCS (VSAN)
据我的经验来看，VMware 的解决方案非常可靠……我们已准备好将 Virtual SAN 部署的规模扩大将近两倍。
虚拟机内核
vSphere
...
Virtual SAN 已嵌入 vSphere 内核
• CPU 占用少于 10% • 内存占用不到6GB
便于管理
• 无需安装和管理单独的虚拟设备 • 无单点故障 • 提供最短的 I/O 路径
与 vSphere 和 VMware 产品体系无缝集成
• VSAN集成了High Availability, 远距离vMotion, Fault Tolerance(多虚拟CPU), Data Protection, vSphere Replication等功能
企业级 Server SAN 预计将达到 44.2% 的年复合增长率；到 2021 年，企业级 Server SAN 的市场规模将是传统外置磁盘阵列的 1.24 倍。
预计5年左右,Server SAN(SDS中主要的种类),将占据整个全球存储市场份额的半壁江山 !
2. The Rise of Server SAN, Jul 16, 2015 Source: /wiki/v/The_Rise_of_Server_SAN
部署VSAN最多的四种场景/用例
虚拟桌面 (VDI) • Low upfront costs based on commodity x86 servers

VxRail超融合平台技术规格


PRODUCT OVERVIEWVCE VXRAIL™ APPLIANCEThe VCE VxRail™ Appliance, the exclusive hyper-converged infrastructure appliance from VCE|EMC and VMware, is the easiest and fastest way to stand up a fully virtualized Software-Defined Data Center (SDDC) environment. With the power of a whole Storage Area Network (SAN) in just two rack units, it provides a simple, cost-effective hyper-converged solution that delivers compute, network, storage,virtualization, and management for a wide variety of applications and workloads. Built on the foundation of VMware Hyper-Converged software and managed through the familiar vCenter interface, the VxRail Appliance provides existing VMware customers an experience they are already familiar with. Seamless integration with existing VMware tools, such as vRealize Operations, lets customers leverage and extend their existing IT tools and processes. Additionally, the VxRail Appliance is discoverable and visible in VCE Vision™ Intelligent Operations for a comprehensive IT core to edge management ecosystem.The VxRail Appliance is fully loaded with integrated mission-critical data services—including replication, backup, and cloud tiering—all at no additional charge. The VxRail Appliance incorporates data protection technology, including EMC RecoverPoint for VMs and VMware vSphere Data Protection. Integrated EMC CloudArray seamlessly extends the VxRail Appliance to public and private clouds to securely expand storage capacity without limits, providing an additional 10 TB of on-demand cloud tiering included per appliance.The VxRail Appliance architecture is a distributed system consisting of common modular building blocks that scale linearly from 1 to 16 2U/4 node appliances, up to 64 nodes in a cluster. Multiple compute, memory, and storage options deliver configurations to match any use case.A fully populated all-flash appliance supports up to 112 cores and up to 76 TB of raw storage. A 64-node all-flash cluster delivers 1,792 cores and 1,216 TB of raw storage, making it the industry’s most powerful HCIA to date to maximize performance and scale for applications that demand low latency.The VxRail Appliance is backed by a single point of world-class support for both hardware and software. 
The VxRail Appliance is available with EMC Enhanced and Premium support options, both of which include EMC ESRS for call home and proactive two-way remote connection for remote monitoring, diagnosis, and repair to ensure maximum availability. Detailed specifications and a comparison of the VxRail Appliances follow.

VXRAIL APPLIANCE SPECIFICATIONS - HYBRID NODES
Models: VxRail Appliance 60 / 120 / 160 / 200
• Processor cores (per node): 6 / 12 / 16 / 20
• Processor (per node): 1 x Intel Xeon E5-2603 v3 1.6 GHz / 2 x Intel Xeon E5-2620 v3 2.4 GHz / 2 x Intel Xeon E5-2630 v3 2.4 GHz / 2 x Intel Xeon E5-2660 v3 2.6 GHz
• Memory/RAM (per node): 64 GB (4 x 16 GB) / 128 GB (8 x 16 GB), 192 GB (12 x 16 GB) or 256 GB (16 x 16 GB) / 256 GB (16 x 16 GB) or 512 GB (16 x 32 GB) / 256 GB (16 x 16 GB) or 512 GB (16 x 32 GB)
• Caching SSD (per node): 200 GB / 400 GB or 800 GB / 400 GB or 800 GB / 400 GB or 800 GB
• Storage, raw (per node): 3.6-10 TB / 3.6-10 TB / 4.8-10 TB / 4.8-10 TB
• Minimum nodes per cluster: 4 (all models)
• Maximum nodes per cluster: 64 (all models; scale to 64 nodes via approved RPQ only)
• Scaling increments (in nodes): 1 (all models)
• Chassis: 2U, 19" rack-mounted chassis supporting 4 hot-swappable nodes and 2 hot-swappable power supplies
• Power supplies: 2 x 1200 W high-efficiency redundant PSUs, 110/220 V AC 50/60 Hz (Appliance 60); 2 x 1600 W high-efficiency redundant PSUs, 220 V AC 50/60 Hz (Appliance 120/160/200)
• Cooling: dedicated cooling per node (no single point of failure), 4 variable-speed fans
• Max total power consumption (fully loaded appliance, VA): 1003 / 1337 / 1337 / 1486
• Max heat dissipation (fully loaded appliance, BTU/hr): 3422.236 / 4561.844 / 4561.844 / 5070.232
• Network connection: 4 x 1 GbE RJ45 (Appliance 60); 2 x 10 GbE SFP+ or 2 x RJ45 ports (Appliance 120/160/200)
• Management port (optional, per node): 1 x 100 Mbps RJ45 port (all models)

VXRAIL APPLIANCE SPECIFICATIONS - ALL-FLASH NODES
Models: VxRail 120F / 160F / 200F / 240F / 280F
• Cores (per node): 12 / 16 / 20 / 24 / 28
• Processor (per node): 2 x Intel Xeon E5-2620 v3 2.4 GHz/15M cache / 2 x E5-2630 v3 2.4 GHz/20M cache / 2 x E5-2660 v3 2.6 GHz/25M cache / 2 x E5-2680 v3 2.5 GHz/30M cache / 2 x E5-2683 v3 2.0 GHz/35M cache
• Memory/RAM (per node): up to 512 GB (16 x 32 GB) for all models
• Minimum nodes per cluster: 4 (all models)
• Maximum nodes per cluster: 64 (all models; scale to 64 nodes via approved RPQ only)
• Chassis: 2U, 19" rack-mounted chassis supporting 4 hot-swappable nodes and 2 hot-swappable power supplies
• Power supplies: 2 x 1600 W high-efficiency redundant PSUs, 220 V AC 50/60 Hz (all models)
• Cooling: dedicated cooling per node (no single point of failure), 4 variable-speed fans
• Max total power consumption (fully loaded appliance, VA): 1240 / 1240 / 1389 / 1500 / 1500
• Max heat dissipation (fully loaded appliance, BTU/hr): 4230.88 / 4230.88 / 4739.268 / 5118 / 5118
• Network connection: 2 x 10 GbE SFP+ or RJ45 ports (120F/160F/200F); 2 x 10 GbE SFP+ (240F/280F)
• Management port (optional, per node): 1 port per node (all models)

PHYSICAL SPECIFICATIONS
• Appliance dimensions: 87.3 mm / 3.44 in (height) x 447 mm / 17.6 in (width) x 774.7 mm / 30.5 in (depth); max weight 41.42 kg / 91.31 lb

OPERATING RANGE
• Ambient operating temperature: 0°C to 40°C
• Operating and storage relative humidity: 10% to 85% (non-condensing)
• Storage temperature range: -40°C to +65°C
• Transportation temperature range: -40°C to +70°C (short-term storage)
• Operating altitude with no deratings: 3200 m (about 10,656 ft)

ABOUT VCE
VCE, an EMC Federation Company, is the world market leader in converged infrastructure and converged solutions. VCE accelerates the adoption of converged infrastructure and cloud-based computing models that reduce IT costs while improving time to market. VCE delivers the industry's only fully integrated and virtualized cloud infrastructure systems. VCE solutions are available through an extensive partner network and cover horizontal applications, vertical industry offerings, and application development environments, allowing customers to focus on business innovation instead of integrating, validating, and managing IT infrastructure.

VxRail超融合产品功能详解


超融合架构凭借在效率、灵活性、规模、成本和数据保护方面的优势，已经成为最近几年的热门词汇。

超融合是指在同一套单元设备中不仅仅具备计算、网络、存储和服务器虚拟化等资源和技术,同时拥有横向扩展能力,可以为用户提供一个统一的资源池部署应用程序。

VxRail系列设备就是一款基于VMware vSphere、vSAN 和EMC软件的超融合解决方案,旨在帮助用户实现一体化IT基础架构的转型。

它采用模块化的分布式存储设计,实现了计算和存储的融合,用户可以从小规模起步,按需扩展,同时提供多种计算、内存和存储选项以满足各种使用情形的配置。

本文将主要介绍VxRail的硬件、网络和功能软件。

VxRail是Dell EMC于2016年推出的超融合产品，定位的目标市场包括虚拟桌面架构（VDI）、远程办公室、私有云和中小企业。

用户通过VxRail和vBlock或者VxRack就可以搭建一套完整的企业级解决方案,适合所有类型的工作负载,通过VMware vCenter实现统一管理,同时还拥有企业级的复制和保护能力。

VxRail 以集群的形式存在，集群由多个节点组成；系统最多支持16台应用装置（机箱）、64个节点和3000台以上的虚拟机。

单个节点拥有独立的CPU、内存、存储和网络资源,采用X86架构,支持横向扩展功能。

硬件配置部分
VxRail提供混合模式和全闪存模式两种硬件配置：全闪存模式单节点最多支持28个CPU内核、512GB内存和19TB存储容量；混合模式单节点最多支持20个CPU内核、512GB内存和10TB存储容量。

具体配置见下图。VxRail全闪存模式推出了5种型号，混合模式推出了4种型号，具体配置见下表。

VxRail硬件配置规则汇总：
- 在3.5版本中，单个集群最低配置为4个节点；在4.0版本中，单个集群最低配置为3个节点；
- 单个逻辑集群最多支持64个节点，超过32个节点需要进行RPQ；
- 集群中前4个节点必须使用相同硬件型号；
- 单个集群中不支持混合模式和全闪存模式两种节点共存；
- 单个集群中不支持10G模块和1G模块混合使用。

VxRail硬件扩展规则汇总：
- 节点可以无中断地、增量地添加到VxRail集群；
- 每个Quanta节点扩展都需要一个磁盘包。
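按上述单节点上限可以做一个简单换算：全闪存节点最多28核、19TB原始容量，一台2U/4节点应用装置即 28 × 4 = 112 核、19TB × 4 = 76TB；扩展到64个节点（16台应用装置）即 112 × 16 = 1792 核、76TB × 16 = 1216TB，与本合集中英文规格概述给出的数字一致。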

VxRail 超融合平台初始化及更新手册


VxRail 超融合平台初始化及更新手册目录Preliminary Activity Tasks (3)Read, understand, and perform these tasks (3)Service Provider, Project Information and Customer Contact form (4)VxRail Reset Procedure (4)Attention: (4)Warnings and Disclaimers (4)Intended Audience: (5)Materials Required: (5)Reset Procedure (5)Prerequisites (5)Preparing for Reset (5)Running Reset (8)Example using Reset on Windows (8)After Reset (8)Reset Command Options (9)Best Practices and Troubleshooting (10)Resetting Clusters with Multiple Appliances (10)If you want to restore a multi-appliance VxRail cluster: (10)If you want to separate the appliances into multiple VxRail clusters: (10)Reset Output: Example output of Successful reset: (10)Reset Failure (11)Known Issues (12)Appendix A:VxRail Appliances with a Custom Initial IP address (13)Preliminary Activity TasksThis section may contain tasks that you must perform before going to the customer site. Make sure these tasks are complete before performing the procedure.Read, understand, and perform these tasks1. [ ] Table 1 lists tasks, cautions, warnings, notes, knowledgebase (KB) solutions, and Top 10 ServiceTopics that you need to be aware of before performing this activity. Read, understand, and when necessary perform any tasks contained in this table and any tasks contained in any associatedknowledgebase solution.Table 1 List of cautions, warnings, notes, KB solutions, and Top 10 Service Topics for thisactivityService Provider, Project Information and Customer Contact formThis form is automatically populated based on the information that is provided in the VxRail® Procedure Generator questionnaire prior to generating the procedure. This information is collected and used for reference purposes when referring back to this procedure at a later date and time.Section 2:Section 3:VxRail Reset ProcedureThese instructions describe how to reset the VxRail Appliance using the Factory Reset Script which returns the appliance to a first run state but leaves the underlying software load unchanged. The reset script can be executed in the field by a VxRail Appliance reseller, distributor, or EMC CSE.Use this process to reset between demonstrations or moves between customer sites for evaluation.The reset script takes approximately 30 minutes to complete, however if there are environmental problems (network connectivity, i.e.) it can fail.The unit must be in an otherwise healthy state, connected to the network and functioning for the reset script to be used.Attention:The reset script tool will wipe all data and defined User VMs from the appliance. If the customer wants to keep their data or VMs, they must back them up prior to running the tool.Always use the latest VxRail Reset tool for the factory reset function based on your model of VxRail. Warnings and Disclaimers•Reset destroys all virtual machines and datastores in your VxRail Cluster. 
VMs on mounted datastores that you don’t want the reset to delete should be unregistered.prior to executing the reset.The process cannot be interrupted and there are no data recovery options available once the Reset starts.•The Reset script and documentation may only be utilized by properly trained personnel.•The Reset script and documentation are only available to properly trained engineering, support, and sales organizations.•The Reset script and documentation must not be provided to anyone who is identified as a direct end-user of a VxRail product.•The Reset script and documentation must be removed from the customer’s appliance and premises after Reset is performed by qualified personnel.Note: Reset script 4.0.100 is required for all VxRail appliances with Release 4.0.100 software. Intended Audience:This procedure is intended to be used by QEP Distribution Partners and VARs performing POC demos. This Procedure is not to be used on production systems!Materials Required:From the website, download the reset package:•File Name: v4.0.100_reset.zip•MD5Sum: 966d86f7ef0b9b02445bcb06eb263dcbWARNING: This procedure deletes all data on the system. It should never be used on a production system.Reset ProcedureReset restores an appliance to factory settings. It uses the original software images and the factory-created backup file to restore your appliance:/vmfs/volumes/<service_datastore>/images,/vmfs/volumes/<service_datastore>/reset/configBundle.tgzPrerequisites• A workstation/laptop with a web browser is required. It must be either plugged into the 10GbE switch or able to logically reach the VxRail management VLAN on the 10GbE switch.Note: This PC must be able to access both the pre-reset and post-reset VLANs and IP Addresses. •The VxRail appliance must have all software images and the configBundle.tgz file on the service datastore.•The document assumes you are familiar with the VxRail Setup Guide, the VxRail Product Guide, vSphere Web Client usage, and ESXi usage (DCUI/CLI).•All ESXi hosts must have the same version of both the ESXi and VxRail VIB on them. Preparing for ResetWARNING: To prevent a possible issue with the decompression of the software bundle, EMC recommends that the upgrade bundle is decompressed using a 3rd party decompression utilitysuch as WinZip. Windows Explorer decompression may have difficulty handling large files resulting in file corruption.1. [ ] Download reset bundle v4.0.100_reset.zip from the EMC Support Zone website to VxRailworkstation/laptop.Note: All files can be found in bundle with name v4.0.100_reset.zip from the VxRail Support page on https:///downloads. EMC Highly recommends that all files be verified using the md5 sum for each ova file to ensure integrity.2. [ ] On the VxRail workstation/laptop, uncompress it into a folder that you can browse, such asC:\reset or /tmp/reset. This archive contains:•Reset script: consisting of two python scripts: reset.pyc and gen_ovfenv.pyc •reset.pyc is in the package reset_4.0.100-4907696.zip.•You should uncompress it first in order to get the reset script.•The v4.0.100_reset.zip bundle contains three files:•reset_4.0.100-4907696.zip•Python-2.7.8.msi•VMware-viclient-all-6.0.0-3562874.exe•Python libraries: pyVim and pyVMomiPython 2.7 is required for Reset. Python 2.6 will fail. 
If necessary, python 2.7.8 is recommended and includes the install exe for installation on your VxRail Management workstation/laptop.The Reset script is backwards compatible with all versions of VxRail so users should update to the latest version of the reset script. If previous versions of Python are installed on the managementworkstation, those versions should be uninstalled first and Python v2.7 installed.3. [ ] For VxRail appliances connected to an External vCenter, manually remove thecluster/VDS/management accounts in the External vCenter.4. [ ] (Optional) In the vSphere Web Client, go to the Marvin-Virtual-SAN-Datastore. Click on Manageand then Files. Delete the ISO and UPGRADE directories. DO NOT delete any directories named witha UUID.5. [ ] For each VxRail host in the VxRail cluster, identify the IP address assigned to the factory-createdVMkernel management interface (vmk0) in one of the following ways:a. The IP address is displayed in the vSphere Web Client (use vcserver to locate the networkinformation for the VMkernel adapter on each ESXi host)orb. The IP address is displayed in DCUI (console user interface for ESXi)orc. ssh into each ESXi host (or connect a console or monitor via BMC) and run the esxcli networkip interface ipv4 get command on each host. The username is root. Use the password for the ESXi hosts that was set during VxRail Initial Configuration. (For an un-configured VxRail appliance, the password is Passw0rd!) The output will be similar to what is shown below:esxcli network ip interface ipv4 get | grep "vmk0"vmk0 MARVIN Management IPv4 10.10.10.5 255.255.255.0 10.10.10.25500:25:90:eb:bc:0a 1500 65535 true STATICOn an un-configured VxRail appliance, ESXi IP addresses are either assigned by DHCP (if it was available in your network) or configured as IPv4 link-local addresses (starting with 169.254.x.y). On a configured VxRail appliance, ESXi IP addresses are specified by the user who set up the appliance.6. [ ] Fill in the IPv4 address of each ESXi host in the cluster (use additional space if there is more thanone appliance in a cluster):Host IPv4 AddressESXi host on node 1ESXi host on node 2ESXi host on node 3ESXi host on node 47. [ ] From the VxRail workstation/laptop, verify that you can ping each IPv4 addressNote: If you cannot access a host, try power-cycling the node first.8. [ ] Use vSphere Client to login vCenter Server to check current Management VLAN as below:Figure 1 Example: the management VLAN ID in this appliance is 117Note: If you cannot access vCenter Server, you will have to ask the customer for the management VLAN ID.Running ResetReset is run from the VxRail workstation/laptop, using the options described in the Reset Command Options section. Before running Reset, be sure to read the section on Best Practices and Troubleshooting so that you can use the Reset script correctly.Note: Using the --vlan option for Reset is recommended even if current management VLAN is not set ( in this case, use 0 for VLAN ID). It will modify the management VLAN ID saved in configBundle first before actually using the configBundle. It simplifies initial setup because the management VLAN ID is configured on all 4 ESXi hosts. Likewise, the --initialIpAddress,--initialSubnet, and --initialGateway options also simplify initial setup.Another benefit comes when a VxRail appliance is reset and remains in the same environment (I.e. It is not relocated to another network). 
In this case, retaining the management VLAN ID and initial IP information during a reset permits the VxRail Manager workstation/laptop to remain set for the correct VLAN and IP. If these options are not used, the instructions in the VxRail SetupGuide can be followed at a later time.If the VLAN option is not used, no changes will be made to the Management VLAN settings. It will remain exactly as previously set.All appliances and nodes in a VxRail cluster should be reset if one appliance or node needs to be reset. Use the following procedure:•If you have more than one appliance in your VxRail cluster, reset each secondary appliance, one at a time. We recommend that the secondary appliances be reset in the order in which they were added to the cluster. Power off all nodes after each appliance is reset.•If you only have one appliance in the VxRail cluster, or when you have finished resetting and shutting down all secondary appliances, reset the primary appliance.Example using Reset on Windows1. [ ] To reset the appliance use the following command:Example:C:\Python27\python.exe c:\reset\reset.pyc -d --destroyAllVMs -f <IPv4_node1> -a <IPv4_node2> -a <IPv4_node3> -a <IPv4_node4> --vlan <vlan_id>[ --initialIpAddress=<init_IPaddr> --initialSubnet=<init_netmask>--initialGateway=<init_gateway> ]After ResetNote: The password is now Passw0rd! for all ESXi hosts, vCenter Server Appliance, and VxRail Manager.1. [ ] For VxRail appliances connected to an External vCenter, manually remove thecluster/VDS/management accounts in the External vCenter.2. [ ] Point a browser to the VxRail Manager user interface to begin VxRail initial configuration followingthe instructions in “VxRail Initial Configuration” in the VxRail Setup Guide. The location is one of the following:<init_IPaddr>specified in the ‘--initialIpAddress’ option supplied to Resetorhttps://192.168.10.200, the default value for ResetIf you cannot reach VxRail at one of these IP addresses after Reset, follow the instructions in AppendixA to configure another initial IP address.Reset Command OptionsBest Practices and TroubleshootingResetting Clusters with Multiple AppliancesIf you want to restore a multi-appliance VxRail cluster:After performing initial configuration on the primary appliance in the VxRail cluster, power on and configure each secondary appliance, one at a time. The primary appliance will discover each secondary appliance when it is powered on. Then you will add it to the cluster before continuing with the next secondary appliance. For more information, including configuring the Management VLAN, see the VxRail Setup Guide, “Adding Appli ances to a VxRail Cluster”.If you want to separate the appliances into multiple VxRail clusters:When you run Reset on each appliance, assign a unique <init_IPaddr> with the ‘--initialIpAddress’option. 
Then point your browser to each <init_IPaddr> and run VxRail initial configuration to create separate VxRail clusters.Reset Output: Example output of Successful reset:Reset succeeds with INFO and WARNING output similar to the following:c:\Python27\python.exe reset.py -d --destroyAllVMs -f 10.10.10.11 --initialIpAddress=10.10.10.250 --initialSubnet=255.255.254.0--initialGateway=10.10.10.253 -a 10.10.10.12 -a 10.10.10.13 -a 10.10.10.14 Post-configuration Password:INFO Primary node supplied: 10.10.10.11 INFO Secondary node supplied:10.10.10.12 INFO Secondary node supplied: 10.10.10.13 INFO Secondary nodesupplied: 10.10.10.14INFO MARVIN VIB 3.x discovered on hostsVxRail 3.x WARNING================================================WARNING THE FOLLOWING ARE THE CURRENTLY REGISTERED VIRTUAL MACHINES:WARNING ['VMware vCenter Server Appliance', 'VxRail Manager', 'VMware vRealize Log Insight'] WARNING CONTINUING WILL DESTROY ALL VIRTUAL MACHINES REGISTERED WITH THE APPLIANCE.CONFIRM (YES/NO): YESWARNING DESTROYING ALL VMs ON 10.10.10.11: ['VMware vCenter Server Appliance', 'VxRail Manager', 'VMware vRealize Log Insight']…INFO Host 10.10.10.11 has VMs [] remaining. INFO Removing hosts from VSAN cluster.INFO Putting hosts in maintenance mode. INFO Resetting and rebooting hosts.INFO Waiting for hosts to finish rebooting to confirm reset ...…INFO Trying to connect to 10.10.10.14 ...INFO Host 10.10.10.14 not yet available, still waiting ... INFO Trying to connect to 10.10.10.14 ...INFO Connection to host 10.10.10.14 successful.INFO Regenerating VSAN and VMs on primary node (10.10.10.11). Assuming default password. INFO Deploying VM: VMware vCenter Server Appliance fromimages/vcsa-restore-original/.INFO Deploying VM: VxRail Manager from images/evorail-restore-original/. INFO Upgrading hardware on VM: VMware vCenter Server ApplianceINFO Enabling Ivy Bridge EVC for VM: VMware vCenter Server Appliance INFO Upgrading hardware on VM: VxRail ManagerINFO Enabling Ivy Bridge EVC for VM: VxRail ManagerINFO Setting VM AutostartInfo for: VxRail manager on 10.10.10.11. INFOstartAction: powerOn; stopAction: guestShutdownINFO Powering on VM: VxRail ManagerINFO Deploying VM: VMware vRealize Log Insight fromimages/log-insight-original/. INFO Reconfiguring VM 'VMware vRealize Log Insight' OVF environment: success.INFO Upgrading hardware on VM: VMware vRealize Log InsightINFO Enabling Ivy Bridge EVC for VM: VMware vRealize Log Insight INFO Restarting 'loudmouth'.INFO Finalizing settings on Shamu.Reset FailureReset can fail for a variety of reasons. If it does not complete successfully on all nodes in an appliance, it can be rerun just on the node(s) that did not reset properly.Infrequently, Reset can report a failure but eventually succeed. This can happen if an ESXi host takes longer to reboot than the 720-second timeout in Reset. The ESXi node can still continue to boot and it may finally succeed. You can determine whether or not an ESXi host was successfully reset by attempting to login in via ssh or DCUI. The password should have been reset to Passw0rd!If you cannot login with this default password, the host was NOT reset properly. Run Reset again only on the host that did not boot successfully.Another reason that Reset could fail is if the IPv4 address for vmk0 does not match the information saved in configBundle.tgz. This would happen if ESXi was assigned a static IP address instead of link-local DHCP after configBundle.tgz was created, but before initial configuration. 
The failure message displayed would indicate failure to connect to all hosts. Using the '--onlyRegeneratePrimary' option will quickly redeploy the service VMs.
Note: Since you have already reset the appliance, the default password, Passw0rd!, must be used instead of your original password.
C:\Python27\python.exe c:\reset\reset.pyc -f <IPv4_node1> --onlyRegeneratePrimary --initialIpAddress=<init_IPaddr> --initialSubnet=<init_netmask> --initialGateway=<init_gateway>
The reasons that Reset can fail include the following:
• Failing to connect to or locate a specified ESXi host
• Using an invalid IPv4 address as Reset command options
• Having a proxy server configured on the VxRail workstation/laptop
• Failing to deploy the Log Insight VM or VxRail Manager
• Still failing after several automatic retries
Known Issues
If you have a typo on the Reset command line, such as using a space instead of '=' in the option/value pairs that require one (--initialIpAddress, --initialSubnet, --initialGateway), all 3 options will be ignored. (e.g. --initialIpAddress=10.10.10.200 is correct; --initialIpAddress 10.10.10.200 is incorrect.) If Reset otherwise succeeds, you can follow the instructions in Appendix A to set the initial IP address.
Appendix A: VxRail Appliances with a Custom Initial IP address
WARNING: The following steps are to configure a custom IP address for initial access to VxRail if you cannot use the <init_IPaddr> specified in the '--initialIpAddress' option or the default IP address, 192.168.10.200.
Note: You are setting the IP address for the VxRail Manager in Release 4.0.
1. [ ] Connect the vSphere (C#) Client to the IP address of ESXi host 1 using the root user and the password that Reset configured during ESXi software installation, Passw0rd!
2. [ ] Click the Virtual Machines tab and select "VxRail Manager". The VM should already be powered on. If not, click the green play button to power it on and wait for it to boot.
3. [ ] Open the Console and login as root with the default password Passw0rd!
4. [ ] Stop vmware-marvin: /usr/bin/systemctl stop vmware-marvin.service
5. [ ] Using the vami_set_network command, change the default IP address to a custom IP address, netmask, and gateway using the following syntax (all arguments are required): /opt/vmware/share/vami/vami_set_network eth0 STATICV4 <init_IPaddr> <init_netmask> <init_gateway>
6. [ ] Start vmware-marvin and restart loudmouth on VxRail Manager: /usr/bin/systemctl start vmware-marvin.service ; /usr/bin/systemctl restart vmware-loudmouth.service
7. [ ] Restart loudmouth on the primary ESXi host: /etc/init.d/loudmouth restart
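As an illustrative sketch only (not part of the official procedure), the Windows batch fragment below wraps the Reset invocation documented above in a simple pre-check that pings each node first; all IP addresses, the netmask, and the gateway are placeholders modeled on the examples in this manual and must be replaced with real values:

REM check_and_reset.bat (hedged example; values are placeholders)
for %%H in (10.10.10.11 10.10.10.12 10.10.10.13 10.10.10.14) do ping -n 2 %%H
REM If all nodes respond, run Reset using the syntax documented above (--vlan 0 means no management VLAN is set)
C:\Python27\python.exe c:\reset\reset.pyc -d --destroyAllVMs -f 10.10.10.11 -a 10.10.10.12 -a 10.10.10.13 -a 10.10.10.14 --vlan 0 --initialIpAddress=192.168.10.200 --initialSubnet=255.255.255.0 --initialGateway=192.168.10.1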


超融合产品测试报告书


1. 测试背景和目的
超融合（Hyper-Converged Infrastructure，HCI）是一种将计算、存储和网络等硬件资源整合在一起，并通过软件定义的方式提供高度集成的虚拟化解决方案。

本次测试旨在评估超融合产品在不同场景下的性能、可用性和扩展性等关键指标,为选择和优化超融合产品提供参考。

2. 测试环境
2.1 硬件配置
- CPU:***************************
- 内存:128GB
- 存储:1TB SSD x 4
- 网络:千兆以太网接口 x 2
2.2 软件配置
- 超融合产品版本:v2.0.1
- 操作系统:CentOS 7.6
- 虚拟化平台:VMware ESXi 6.7
- 存储协议:iSCSI
3. 测试内容和方法
3.1 性能测试
- 测试场景1:模拟虚拟机大量读写操作，评估存储性能和响应时间。

- 测试场景2:模拟网络负载,评估网络性能和带宽利用率。

- 测试场景3:模拟虚拟机迁移和故障恢复,评估超融合产品的可用性和恢复速度。
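报告未说明所用的测试工具；下面给出一个假设性的示例，用常见的开源工具 fio 和 iperf3 分别模拟场景1的随机读写负载与场景2的网络负载（参数仅为示意）：

# 场景1：在测试虚拟机内模拟 4KB 随机读写（70% 读 / 30% 写），持续 300 秒
fio --name=randrw70 --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 --numjobs=4 --size=10G --runtime=300 --time_based --direct=1 --ioengine=libaio --group_reporting
# 场景2：在两台虚拟机之间打流，观察带宽利用率（<服务端IP> 为占位符）
iperf3 -s                        # 在服务端虚拟机运行
iperf3 -c <服务端IP> -P 8 -t 60   # 在客户端虚拟机运行，8 个并行流，持续 60 秒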

3.2 可扩展性测试
- 测试场景1:逐步添加虚拟机节点，评估超融合产品的水平扩展性。

- 测试场景2:增加存储容量和计算资源,评估超融合产品的垂直扩展性。

3.3 其他测试
- 安全性测试:评估系统的安全性和权限管理能力。

- 系统稳定性测试:长时间运行虚拟机和进行负载测试,评估系统的稳定性和资源管理能力。

- 灾难恢复测试:测试系统在硬件故障和断电等非预期情况下的恢复能力。

4. 测试结果和分析
4.1 性能测试结果
- 测试场景1:平均读写响应时间为5ms，满足高性能要求。

- 测试场景2:网络带宽利用率达到90%,保证大规模并发访问时的高效率。

- 测试场景3:虚拟机迁移时间平均为10s,故障恢复时间平均为30s,满足业务连续性要求。

4.2 可扩展性测试结果
- 测试场景1:添加虚拟机节点后，集群吞吐量线性增加，展现出良好的水平扩展性。

- 测试场景2:增加存储容量和计算节点后,系统性能线性增加,具备良好的垂直扩展性。


EMC VxRail超融合平台-测试报告


目录
1. 前言
  1.1. VxRail介绍
  1.2. VxRail配置灵活性和高可扩展性
  1.3. 软件环境
  1.4. 配置
2. VxRail 功能测试
  2.1. VSAN配置
  2.2. 管理界面VxRail Manager
3. VxRail系统冗余测试
  3.1. 网络冗余测试
  3.2. 磁盘冗余测试
  3.3. SSD 缓存冗余测试
  3.4. 节点冗余测试
  3.5. 电源冗余测试
4. 测试记录

1. 前言
1.1. VxRail介绍
由EMC和VMware联合开发的VxRail，是一款基于VMware虚拟化架构、真正实现系统核心层全面集成、预先配置和测试的超融合系统。

VCE VxRail 应用装置是由VCE和VMware提供的超融合基础架构应用装置。它在仅占用两个机架单元的空间内提供了相当于一套完整存储区域网络（SAN）的能力，是一种简单、经济高效的超融合解决方案，可为范围广泛的一系列应用程序和工作负载提供计算、网络、存储、虚拟化和管理。

VxRail基于VMware超融合软件而构建，并通过熟悉的vCenter Server界面进行管理，为用户提供了一致且熟悉的管理体验。

VxRail应用装置装有全套集成的任务关键型数据服务，包括复制、备份和云分层。

VxRail应用装置配备了数据保护技术，包括EMC RecoverPoint for VMs和VMware vSphere Data Protection。

VxRail 应用装置体系结构是一个由通用模块化构造块组成的分布式系统,可从1个2U/4节点应用装置线性扩展到16个,从而在单个群集中达到最多64个节点。

多个计算、内存和存储选项提供了可适合任何使用情形的配置。

一台满配的混合应用装置可支持多达80个核心和多达24TB的原始存储容量。

一个64节点群集可提供1280个核心和384TB的原始存储容量。
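上述集群规模可以由单台应用装置的数字直接推出：一台2U/4节点的混合应用装置为80核、24TB原始容量，扩展到64个节点即16台应用装置，因此 80 × 16 = 1280 核、24TB × 16 = 384TB，与上文给出的数字一致。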

VxRail超融合技术培训-节点硬件介绍


VCE VXRAIL APPLIANCE BUILD PREPARATION

OBJECTIVES
•Upon completion of this module, you should be able to:
–Discuss the build-to-deployment process for the VxRail Appliance
–Describe the appliance hardware and the available configuration options
–Explain the build process that happens at the server vendor's manufacturing facility
–Discuss the steps required to prepare the appliance for operations
–Complete the pre-site checklist survey

VCE VXRAIL APPLIANCE – SIMPLIFYING IT
•Delivered as a pre-built unit of IT infrastructure
–Software installation performed at supplier manufacturing
•Requires site-specific customizations before use
–Wizard guides through the configuration process
–From box to provisioning virtual machines in minutes

MAY & JUNE 2016 MYQUOTES ORDERING PATHS
•VLP Licensing Only / VxRail 3.5 Software Only* / Hybrid & All Flash Models
•Two use cases: New Solution and Upgrade Solution
–New Solution path (MyQuotes: May 7): fully populated chassis (4 nodes) or partially populated chassis (3, 2, 1 nodes)
–Upgrade Solution path (MyQuotes: June 11): single node add, single drive add
•Rules:
–The first appliance in a cluster must contain four nodes
–Hybrid nodes should not be mixed with All-Flash nodes in the same cluster
–Homogeneous nodes and drives within an appliance
–Drives are evenly balanced across all the nodes on an appliance
*Ship VxRail 3.0 SW with Q1 HW configs

NODE PORTS
•Rear-panel callouts include: VGA port, serial port, BMC dedicated management port, 10G host port, power LED, power button

NODE MAINBOARD VIEW
•Memory configurations from 128 GB (minimum small) to 512 GB (maximum medium and large)
•Board callouts: DIMM slots C0/C1, D0/D1, E0/E1, F0/F1 (Quanta 3008 board), network card, SATADOM connection

SATA DISK ON MODULE (SATADOM)
•Small SATA3 (6 Gb/s) flash memory module
•Simulates a hard disk drive (HDD)
•ESXi is installed on the SATADOM
•Designed to be inserted into a server board SATA connector

VXRAIL APPLIANCE PHYSICAL DRIVES – UP TO 6 PER NODE, 24 DIRECT-ATTACHED DRIVES PER ENCLOSURE
•Solid State Disks (SSD)
–Single Ultrastar SSD1600MM-series drive per node
•Hard Disk Drives (HDD)
–Three to five Ultrastar enterprise-class HDDs per node

VXRAIL APPLIANCE NETWORK SWITCHES
•Customers can bundle a Connectrix VDX 6740 switch with their order or purchase their own top-of-rack network switches, except for the VxRail 60
–Two 10 Gb ports per node
•IP protocol considerations:
–Enable IPv4 and IPv6 multicast on all ports connected to VxRail
–The management VLAN requires IPv6 multicast
–The VSAN VLAN requires IPv4 multicast
•Connectrix VDX 6740 switch:
–Up to 64 10 GbE ports: 48 standard plus 16 via 4x10 40 GbE QSFPs
–IPv4/IPv6 management

BUILDING THE SYSTEM – APPLIANCE INITIALIZATION
•A build server is used to install the baseline image onto the VxRail Appliance
–The build network connects the VxRail Appliance and the build system to automatically install system software
•Build server connectivity topology: each node's 1G NIC port and 1G BMC port connect to the build server through the switch network
*Mobile Build software can be downloaded

APPLIANCE IMAGING PREPARATION
•Separate the ToR switch ports into VLAN groups and connect the appliance's Ethernet ports to the switch
–Ports 1-8: VLAN 100. Appliance 1's four BMC ports and four 1G NIC ports are connected to ports 1-8
–Ports 9-16: VLAN 200. Appliance 2's four BMC ports and four 1G NIC ports are connected to ports 9-16
–Port 41: trunk mode, allowing access to VLANs 1, 100 and 200
•Connect to the standalone server:
–eth0 to the outside network for golden image access
–eth1 to ToR switch port 41 for internal software provisioning
•Power on all the hardware:
–ToR switch
–Four nodes in the enclosure

VXRAIL APPLIANCE BUILD PROCESS DEMONSTRATION VIDEO
Now let's watch a short video on the VxRail build process.

APPLIANCE CONFIGURATION
•The system boots from the internal SATADOM (32 GB SLC SATADOM with reservation pool)
•VxRail Node01 acts as the bootstrap node for the appliance:
–Initially holds the VCSA (DRS and vMotion can change this)
–Contains the Log Insight appliance
–Contains the VxRail orchestration appliance
–Inside the VxRail Appliance are the vmware-marvin and vmware-loudmouth services
–VxRail VMs are set to power on with the appliance start-up/shutdown settings
–VMs are held on a pre-configured Virtual SAN datastore containing just Node01

APPLIANCE SOFTWARE CONFIGURATION
•Each node runs vSphere 6.0 as the base platform
•Running on ESXi (Node 1) as virtual machines:
–vCenter (single instance per VxRail cluster)
–vCenter Server Platform Services Controller
–VxRail Manager
–Log Insight
•The VSAN health check plug-in is installed by default and runs on vCenter

VIRTUAL INFRASTRUCTURE
•vCenter is installed and configured to manage the VxRail Appliance virtual environment
–The virtual network is configured with a vSphere Distributed Switch for appliance network traffic
–Virtual SAN provides storage capacity across all appliance nodes

NETWORK CONFIGURATION – VIRTUAL NETWORKING
•The configuration process creates a vSphere Distributed Switch (vDS)
–vmnic1 is assigned as uplink1 and becomes the primary NIC for management traffic, shared with all other traffic
–The second uplink is configured for VSAN
•VLAN tagging can be used to isolate traffic

VXRAIL VIRTUAL NETWORK CONFIGURATION
•Management Network – used for ESXi node management
•Virtual SAN – used for Virtual SAN traffic
•vCenter Service Network – used for service VMs
•vSphere vMotion – used for vSphere vMotion
•Marvin Management port group – used for internal VxRail communications

VXRAIL CLUSTER DRS AND HA
•vSphere DRS is configured to be fully automated by default
•vSphere HA is enabled by default

VMWARE VIRTUAL SAN
•The initialization process aggregates the locally attached disks of the hosts in a vSphere cluster
–Creates a distributed shared storage solution with a VSAN datastore
•Enables rapid provisioning of storage within VMware vCenter
•Simplifies and streamlines storage provisioning and management
–VM-level storage policies automatically match requirements with the underlying storage resources
•Virtual SAN is managed under vCenter

VXRAIL VIRTUAL SAN CONFIGURATION – VSAN BASIC RULES
•Minimum number of ESXi nodes: 3
•Minimum amount of memory per node: 6 GB
•Number of SSDs: 1
•Number of HDDs: 3-5
•Minimum number of disk groups: 1
•Maximum number of disk groups: 5

VSAN DATASTORE
•Single VSAN datastore
•All VMs use the VSAN datastore

DEFAULT STORAGE POLICY
•The MARVIN-STORAGE-PROFILE policy is created during installation
–This policy uses VSAN-based rules and is able to survive the failure of one node
•The MARVIN-SYSTEM-STORAGE-PROFILE policy is used exclusively for VxRail internal operations

VXRAIL MANAGER CONFIGURATION DAEMON
•The Marvin daemon is an Apache Tomcat instance that runs on the VxRail Manager virtual machine
–Provides the VxRail management interface
•Other options: /etc/init.d/vmware-marvin <stop> <start> <restart>
–The service must be cycled for configuration changes, for example after config-static.json has been modified

VXRAIL DISCOVERY DAEMON – LOUDMOUTH DAEMON
•Loudmouth is the service that discovers and configures nodes on the network during initial configuration and appliance expansion
•Assists in replacing failed nodes
•Runs on both the VxRail Manager VM and each ESXi server

SUMMARY
•The VxRail Appliance is prebuilt and delivered ready to be configured with customer site specifications
•Once received, the appliance must be added to the network following VCE's strict network requirements
•A guided process readies the appliance for operations
•The operational state of the VxRail Appliance includes:
–vCenter to manage the virtual infrastructure and to create guest VMs
–VxRail Manager to monitor appliance health
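The VSAN basic rules listed above lend themselves to a quick pre-deployment sanity check. The sketch below is illustrative only and is not part of the VxRail or VMware tooling; the node inventory structure is invented for the example, and the SSD/HDD counts are read as per-disk-group values, which is an assumption on top of the table above.

```python
# Illustrative pre-deployment check of the VSAN basic rules listed above.
# Not VxRail/VMware tooling; the inventory format is a local assumption.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Node:
    name: str
    memory_gb: int
    disk_groups: List[Dict[str, int]]  # each: {"ssd": n, "hdd": n}


def check_vsan_rules(nodes: List[Node]) -> List[str]:
    problems = []
    if len(nodes) < 3:
        problems.append("cluster needs at least 3 ESXi nodes")
    for n in nodes:
        if n.memory_gb < 6:
            problems.append(f"{n.name}: less than 6 GB memory")
        if not (1 <= len(n.disk_groups) <= 5):
            problems.append(f"{n.name}: disk group count must be 1-5")
        for i, dg in enumerate(n.disk_groups, 1):
            if dg.get("ssd", 0) != 1:
                problems.append(f"{n.name} DG{i}: exactly 1 SSD expected")
            if not (3 <= dg.get("hdd", 0) <= 5):
                problems.append(f"{n.name} DG{i}: 3-5 HDDs expected")
    return problems


if __name__ == "__main__":
    # A four-node hybrid appliance, one disk group per node (1 SSD + 5 HDDs).
    cluster = [Node(f"node{i:02d}", 256, [{"ssd": 1, "hdd": 5}]) for i in range(1, 5)]
    print(check_vsan_rules(cluster) or "VSAN basic rules satisfied")
```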

EMC VxRail超融合平台-安装配置手册

目录
1. 客户信息
1.1. 客户联系信息
1.2. 产品清单
1.3. 售后支持
2. 地址与用户名规划
2.1. IP地址信息
2.2. 用户登录信息
2.3. DNS配置
3. VxRail设备安装
3.1. VxRail初始化准备
3.1.1. 环境准备
3.1.2. 网络连接
3.1.3. 启动VxRail
3.2. VxRail初始化安装
3.2.1. 初始化配置
3.2.2. 登录VxRail Manager
3.2.3. 登录vCenter
4. VxRail管理
4.1. 设备开机
4.2. 设备关机

1. 客户信息
1.1. 客户联系信息
客户名称: 地址: 邮编:
第一联系人: 职务: 电话: Email:
第二联系人: 职务: 电话: Email:
1.2. 产品清单
产品名称:VxRail,设备型号:VxRail 120,设备序列号:QCFVR164098001
1.3. 售后支持
设备在使用中有任何问题,请联系EMC 24小时支持热线:
固话拨打:800-819-0009
手机拨打:400-670-0009
请提供故障设备的序列号,并记录case号码。

2. 地址与用户名规划
2.1. IP地址信息
vCenter:192.168.17.19,子网掩码 255.255.255.0,网关 192.168.17.1,VLAN 17
Platform Services Controller:192.168.17.22,子网掩码 255.255.255.0,网关 192.168.17.1,VLAN 17
VxRail Manager:192.168.17.20,子网掩码 255.255.255.0,网关 192.168.17.1,VLAN 17
vRealize Log Insight:192.168.17.21,子网掩码 255.255.255.0,网关 192.168.17.1,VLAN 17
ESXi01-ESXi04:192.168.17.15-18,子网掩码 255.255.255.0,网关 192.168.17.1,VLAN 17
vMotion(ESXi01-ESXi04):192.168.13.11-14,子网掩码 255.255.255.0,网关 N/A,VLAN 13
vSAN(ESXi01-ESXi04):192.168.14.11-14,子网掩码 255.255.255.0,网关 N/A,VLAN 14
BMC地址:Node 1:192.168.17.11,Node 2:192.168.17.12,Node 3:192.168.17.13,Node 4:192.168.17.14;子网掩码 255.255.255.0,网关 192.168.17.1,交换机端口access到vlan17
2.2. 用户登录信息
vCenter:用户名 ***************************
Platform Services Controller:N/A
VxRail Manager:用户名 ***************************
vRealize Log Insight:用户名 admin
ESXi01-ESXi04:用户名 root
BMC:用户名 admin
2.3. DNS配置
请在DNS服务器上增加如下域名解析(IP地址/域名):
192.168.17.15
192.168.17.16
192.168.17.17
192.168.17.18
192.168.17.19
192.168.17.20
192.168.17.21
192.168.17.22
3. VxRail设备安装
3.1. VxRail初始化准备
3.1.1. 环境准备
1. 本次安装的为1台VxRail Appliance 120,网卡为10GB SFP+网卡
2. 一台带有电口网卡的PC Server,用于初始化VxRail
3.1.2. 网络连接(本次环境为10GB SFP+网络)
3.1.3. 启动VxRail
1. 按指定顺序启动VxRail的每个Node,间隔至少30s以上,顺序为:Node4→Node3→Node2→Node1
3.2. VxRail初始化安装
3.2.1. 初始化配置
1. 使用IE登录VxRail默认IP:https://192.168.10.200,并点击"开始"(笔记本应提前配置IP,例如192.168.10.201/24)
2. 点击"接受"
3. 选择"分步"
4. 填写系统相关信息
5. 设置Hostname、网络、用户密码
6. 设置vMotion网络
7. 设置vSAN信息
8. 设置虚拟机网络
9. 设置Log Insight信息
10. 点击验证
11. 点击"构建VxRail",进行VxRail初始化。
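针对前文 2.1/2.3 节的地址规划,下面给出一个示意性的 Python 片段,用于生成 DNS 正向解析记录草稿,便于交给 DNS 管理员核对。其中的主机名(esxi01 等)和域名后缀仅为假设示例,原文档并未给出实际域名,请以客户环境为准。

```python
# 示意脚本:根据上面的 IP 规划生成 DNS A 记录草稿。
# 注意:主机名与域名后缀均为假设值,原文档未给出实际域名。
DOMAIN = "example.local"  # 假设的域名后缀

hosts = {
    "esxi01": "192.168.17.15",
    "esxi02": "192.168.17.16",
    "esxi03": "192.168.17.17",
    "esxi04": "192.168.17.18",
    "vcenter": "192.168.17.19",
    "vxrail-manager": "192.168.17.20",
    "loginsight": "192.168.17.21",
    "psc": "192.168.17.22",
}


def zone_records(domain: str, mapping: dict) -> str:
    """生成 BIND 风格的 A 记录文本,每行一条,仅作草稿。"""
    width = max(len(name) for name in mapping)
    lines = [f"{name:<{width}}.{domain}.  IN  A  {ip}" for name, ip in mapping.items()]
    return "\n".join(lines)


if __name__ == "__main__":
    print(zone_records(DOMAIN, hosts))
```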

超融合测试报告

1、测试背景
目前传统数据中心仍然依靠服务器、存储、备份等标准设施来对业务进行支撑。

传统数据中心依赖硬件和物理服务器来存储数据,同时也受到容纳这些硬件的物理空间大小的限制。

为了增加数据中心的存储容量,企业需要购买更多的物理服务器和其他硬件。

因此,传统数据中心受制于物理空间的限制,这使得任何形式的业务扩展都成为一项复杂的工作,并带来潜在的隐患。

这就是超融合系统可以发挥重要作用的地方。

在超融合存储系统技术越发成熟的今天,企业可以依靠超融合技术获得更低的业务成本,以改善其业务功能。

2、超融合系统
超融合基础设施是一种软件解决方案,将存储、计算和网络连接到一个系统中,可以最大程度地减少数据中心的复杂性,并提高可扩展性。

多个节点组合在一起以创建共享的计算和存储系统。

3、测试过程
3.1 物理环境
硬件环境:
3.2 测试过程
3.2.1 概览界面
说明:
集群状态:显示当前集群物理主机数据及健康状态。

集群负载:显示当前集群IOPS/MBps实时监控数据。

系统服务:显示云管、计算、存储、网络服务主机数据及健康状态。

虚拟资源统计:显示集群云服务器、云容器统计信息,包括总数、运行状态,比例图。

物理设备统计:显示集群物理设备的统计信息,包括CPU、内存、存储、网络的总数、已使用、使用率数据及比例图。

存储统计:显示集群存储卷总数、健康状态数量、存储池、共享存储数量。

云服务器top信息:显示集群top5使用率云服务器的CPU、内存、存储、网络信息。

宿主机top信息:显示集群top3使用率宿主机的CPU、内存、存储、网络信息。

存储卷top信息:显示集群top5存储卷读写速率信息。

网络安全设备:显示集群交换机、路由器、虚拟出口、安全规则、安全组、虚拟IP、虚拟防火墙、VPN等网络安全设备数量。

租户统计:显示集群租户、用户组、角色、用户数量;未处理工单数量,点击【未处理工单】可跳转到工单管理界面。

资源配置:显示集群CPU、内存、存储已分配和总数信息。
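概览界面中的"top信息"与"资源配置"本质上是对各资源使用率做排序和汇总。下面给出一个与具体产品无关的 Python 示意片段,演示如何从一组云服务器监控数据中计算 CPU 使用率 top5,以及集群整体的已分配/总量比例;字段名与数值均为假设示例,并非该平台的实际接口。

```python
# 示意:计算云服务器 CPU 使用率 top5 与集群资源分配比例(数据为假设值)。
vms = [
    {"name": "vm-web-1", "cpu_pct": 72.5, "mem_pct": 60.1},
    {"name": "vm-db-1",  "cpu_pct": 88.0, "mem_pct": 75.4},
    {"name": "vm-app-1", "cpu_pct": 35.2, "mem_pct": 40.0},
    {"name": "vm-app-2", "cpu_pct": 64.9, "mem_pct": 52.3},
    {"name": "vm-test",  "cpu_pct": 12.1, "mem_pct": 20.8},
    {"name": "vm-bak",   "cpu_pct": 55.6, "mem_pct": 48.7},
]

# 云服务器 CPU 使用率 top5
top5_cpu = sorted(vms, key=lambda v: v["cpu_pct"], reverse=True)[:5]
for v in top5_cpu:
    print(f'{v["name"]:<10} CPU {v["cpu_pct"]:.1f}%')

# 集群资源配置:已分配 / 总量(示例数值)
cluster = {"cpu_cores": (96, 224), "memory_gb": (1536, 2048), "storage_tb": (28.5, 40.0)}
for res, (used, total) in cluster.items():
    print(f"{res}: {used}/{total} ({used / total:.0%})")
```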

EMC VxRail超融合方案介绍

移除旧应用装置
无任何中断
满载 — 应有尽有
…而且通过现有工具进行管理
复制 备份 云 市场
VxRail超融合系统软件架构
云整合 数据保护及容灾 EMC CloudArray EMC Hybrid Cloud /EHC ( 可选 ) VDP ( EMC Avamar )
EMC RecoverPoint for VM
虚拟化
vCenter Server vSphere ESXi
VxRail Manager
硬件
EMC|VCE专用硬件设备 万兆网络交换机 (客户提供)
IPMI / BMC
复制确保业务连续性
面向 VMware 的简单、高效且经过验证的灾难恢复
EMC VxRail 超融合方案介绍
自动化一切
尽可能消除人工任务 服务目录
vRealize
自助服务和自动化 自动化架构服务
加速部署 融合架构
传统架构
为何选择 HCI?— 没有时间去自行构建!
超级融合系统设计
超级融合
服务器
+
存储
传统 软件定义的
为何选择超级融合?
复杂性
成本
风险
标准很重要
自行构建
收效时间加快 8 倍
即时问题诊断
超过 2 小时的升级时间节约
12:1 的占用空间缩减
“EMC 支持服务的质量让我们充满信心,我们相信, 他们实现的超级融合基础架构将相当可靠。”— CIO
再没有整体式迁移
常青树群集简化了技术更新
10GbE
VM
添加新应用装置到群集中,通过 vMotion 将工作负载移至新应用装置
1,600 个 VM

EMC VxRail超融合数据中心-方案建议书

EMC VxRail超融合数据中心方案建议书
EMC VXRAIL 超融合平台:超级融合基础架构的跨越腾飞
日期
客户联系人全名
客户公司名称
客户公司地址
尊敬的客户联系人称呼:
您好!事物需要变革,而直面变革的时机已到。

IT 基础架构的作用不再仅仅是为业务提供支持,而是主动地推动业务发展。

压力在肩,您的基础架构必须利用有限的预算和资源来满足新应用程序与服务等不断浮现的业务需求。

当前,各类需求不断演化发展,要对硬件、软件和网络解决方案进行持续评估和集成,时间十分有限。

您需要打造一种能适应您业务需求的自动化动态基础架构。

如果您的 IT 基础架构妨碍到您调动资源、适应各种新的业务开展方式,那么,贵组织将面临在竞争中落后甚至被淘汰的风险。

为了满足层出不穷的业务需求,您曾经尝试过推进您的 IT 基础架构现代化进程,然而,不断的维护需求、长时间的集成以及漫长的技术评估周期统统成了阻挡您前进的拦路虎。

倘若无法让基础架构生命周期得到简化和重复利用,贵组织必然难以开展创新工作,及时响应市场需求。

您很清楚,您需要打破被动响应的恶性循环,找到能推动贵组织业务水平更上一层楼的 IT 基础架构解决方案。

您是锐意进取的革新者,您知道软件定义的数据中心 (SDDC) 这种作为服务交付的全自动敏捷型虚拟基础架构就是您未来要实现的目标。

但问题在于,如何才能通过经济高效的简单方式达成这一目标。

VCE VxRail™应用装置由 EMC 和 VMware 联合开发,是市面上唯一一款采用 VMware 超级融合软件进行了全面集成、预配置以及测试的超级融合基础架构应用装置,是部署完全虚拟化 SDDC 环境最简单、快捷的方式。

该应用装置将始终为您提供最新的 VMware 技术,通过简单、经济高效的超级融合解决方案为您实现数据中心转型,适用于多种任务关键型工作负载。

在其帮助下,您可以胸有成竹地实现可预测的发展,持续实现基础架构现代化目标。

VxRail超融合平台性能测试方案

HCIBench
目录
1 HCIBench前言
1.1 使用HCIBench必要性
1.2 获得HCIBench
1.3 HCI工具架构
2 部署和配置HCIBench
2.1 创建端口组
2.2 部署HCIBench
2.3 配置HCIBench
2.3.1 vSphere环境信息
2.3.2 Virtual SAN集群主机信息
2.3.3 Vdbench客户虚拟机设置
2.3.4 Vdbench参数配置
2.3.5 验证配置
3 运行HCIBench测试
3.1 启动测试
3.2 收集测试结果
3.3 清理测试
4 HCIBench最佳实践

1 HCIBench前言
1.1 使用HCIBench必要性
在超融合架构中,每个服务器既用来支持许多应用程序虚拟机,也为供应用程序使用的存储池贡献存储。

建模此架构的用例时,最好调用大量的测试虚拟机,而且每个虚拟机访问多个存储的 VMDK。

目标是模拟非常繁忙的群集。

遗憾的是,常见存储性能测试工具不直接支持此模型。

使用IOMeter、FIO等传统测试工具时,不得不手动创建多个测试虚拟机,在虚拟机上配置VMDK、安装并配置测试工具以生成工作负载,再监控并采集相关数据,才能完成性能测试。

这需要花费很多时间,并且可能引入测试误差。

因此,测试超融合架构(例如Virtual SAN)的性能会带来许多不同的挑战。

为正确模拟生产群集的工作负载,最好跨主机部署多个虚拟机,且每个虚拟机有多个磁盘。

此外还需要同时对每个虚拟机和磁盘运行工作负载测试。

为解决在超融合环境中正确运行性能测试的挑战,VMware 设计了一个叫作HCIbench的存储性能测试自动化工具,它能自动运行常见的Vdbench测试工具。

用户只需指定他们想要运行的测试的参数,HCIbench即会指示Vdbench在群集中的每个节点上做什么。
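作为示意,下面的 Python 片段按"每台测试虚拟机若干数据盘"的思路生成一份 Vdbench 风格的参数文件草稿(sd/wd/rd 定义)。它并不是 HCIBench 的内部实现,设备路径和默认参数只是常见写法,实际语法请以 Vdbench 手册和 HCIBench 生成的配置为准。

```python
# 示意:生成 Vdbench 风格参数文件草稿(并非 HCIBench 内部实现,语法请以官方手册为准)。
def make_vdbench_profile(disks, xfersize="4k", rdpct=100, seekpct=0,
                         threads=2, elapsed=300, interval=5):
    """disks: 客户虚拟机内数据盘设备路径列表(示例值)。"""
    lines = []
    # 每块数据盘定义一个存储设备(sd)
    for i, dev in enumerate(disks, 1):
        lines.append(f"sd=sd{i},lun={dev},openflags=o_direct,threads={threads}")
    sds = ",".join(f"sd{i}" for i in range(1, len(disks) + 1))
    # 工作负载定义(wd):块大小、读比例、随机比例
    lines.append(f"wd=wd1,sd=({sds}),xfersize={xfersize},rdpct={rdpct},seekpct={seekpct}")
    # 运行定义(rd):最大 IO 速率、运行时长、采样间隔
    lines.append(f"rd=rd1,wd=wd1,iorate=max,elapsed={elapsed},interval={interval}")
    return "\n".join(lines)


if __name__ == "__main__":
    # 假设每台测试虚拟机挂载两块数据盘,模拟 8KB、67% 读、100% 随机的数据库负载
    print(make_vdbench_profile(["/dev/sdb", "/dev/sdc"],
                               xfersize="8k", rdpct=67, seekpct=100))
```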

EMC VxRail 超融合平台-性能测试报告
1.
1.1.
本次测试选用Iometer进行测试。Iometer是一个单机或者集群的I/O子系统测量和描述工具,最初由英特尔公司于1998年2月17日在英特尔开发者论坛(IDF)上发布,此后在行业内广泛传播,成为了标准。
Iometer包括2个程序:Iometer.exe和Dynamo.exe。其中Iometer是控制程序,提供图形界面,便于调节参数和显示测试结果;Dynamo则是对测试盘产生压力的主程序,由Iometer控制。在Windows下运行Iometer时,会自动打开Dynamo程序。
1.3.
测试项目(传输块大小、随机/顺序比例、读/写分发比例):
测试一:最大IO处理能力(512B,100%顺序,100%读)
测试二:最大带宽能力(64KB,100%顺序,100%读)
测试三:特定应用_数据库(8KB,100%随机,67%读33%写)
测试四:常规应用顺序读(4KB,100%顺序,100%读)
测试五:常规应用随机读(4KB,100%随机,100%读)
测试六:常规应用顺序写(4KB,100%顺序,100%写)
测试结果(节选):
测试四:常规应用顺序读:IOPS≈24521,带宽≈95 MBps,时延≈0.325s
测试五:常规应用随机读:IOPS≈17881,带宽≈69 MBps,时延≈0.446s
测试六:常规应用顺序写:IOPS≈6885,带宽≈27 MBps,时延≈1.161s
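为便于在 Iometer 中逐项核对 Access Specification,下面用 Python 把本报告的七个测试项整理成数据结构,并换算出 Iometer 需要的三个关键参数(传输块大小、随机百分比、读百分比)。该片段仅作示意,并非 Iometer 的配置文件格式。

```python
# 示意:将测试矩阵换算为 Iometer Access Specification 的关键参数(非 Iometer 配置文件格式)。
TESTS = [
    ("测试一:最大IO处理能力", "512B", "顺序", 100),
    ("测试二:最大带宽能力",   "64KB", "顺序", 100),
    ("测试三:特定应用_数据库", "8KB",  "随机", 67),
    ("测试四:常规应用顺序读", "4KB",  "顺序", 100),
    ("测试五:常规应用随机读", "4KB",  "随机", 100),
    ("测试六:常规应用顺序写", "4KB",  "顺序", 0),
    ("测试七:常规应用随机写", "4KB",  "随机", 0),
]


def to_iometer_params(block, pattern, read_pct):
    """换算为 Iometer 中的 Transfer Request Size / % Random / % Read。"""
    num = int(block.rstrip("BK"))                 # "512B" -> 512, "64KB" -> 64
    size_bytes = num * (1024 if block.endswith("KB") else 1)
    random_pct = 100 if pattern == "随机" else 0
    return size_bytes, random_pct, read_pct


for name, block, pattern, read_pct in TESTS:
    size, random_pct, read = to_iometer_params(block, pattern, read_pct)
    print(f"{name}: 传输块 {size} 字节, 随机 {random_pct}%, 读 {read}%")
```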

EMC VxRail超融合平台-运维管理手册

目录
1. 产品信息
1.1. 产品清单
1.2. 售后支持
1.2.1. 登录VxRail Manager
1.2.2. 查看系统状态
1.2.3. 登录vCenter
2. VxRail管理信息
2.1. IP地址信息
2.2. 用户登录信息
2.3. 设备开机
2.4. 设备关机

1. 产品信息
1.1. 产品清单
1.2. 售后支持
设备在使用中有任何问题,请联系EMC 24小时支持热线:
固话拨打:800-819-0009
手机拨打:400-670-0009
请提供故障设备的序列号,并记录case号码。

1.2.1. 登录VxRail Manager
1. 单击"管理VxRail",登录至管理界面,输入用户名及之前设置的密码:***************************
1.2.2. 查看系统状态
进入Dashboard主界面
进入Support支持页面,查看系统告警事件
进入系统健康状态查看页面,查看节点逻辑状态和节点硬件状态
进入系统配置界面
1.2.3. 登录vCenter
登录vCenter地址192.168.105.19,输入用户名及之前设置的密码:***************************
2. VxRail管理信息
2.1. IP地址信息
2.2. 用户登录信息
2.3. 设备开机
按指定顺序启动VxRail的每个Node,间隔至少30s以上,顺序为:Node4→Node3→Node2→Node1
2.4. 设备关机
登录VxRail Manager,点击如下按钮,关闭整个集群。

VxRail Manager->配置->关闭。
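设备开机要求按 Node4→Node3→Node2→Node1 的顺序、间隔至少 30 秒逐台上电。若节点 BMC 支持标准 IPMI,可以用类似下面的 Python 片段调用 ipmitool 做顺序上电;其中的 BMC 地址取自前文规划表,用户名/密码为占位符,实际命令与权限请以现场环境为准,该片段仅为示意,并非官方运维工具。

```python
# 示意:按 Node4 -> Node3 -> Node2 -> Node1 顺序、间隔 30 秒通过 IPMI 上电。
# BMC 地址来自前文规划表;用户名/密码为占位符,请按实际环境修改。
import subprocess
import time

BMC_NODES = [  # 按开机顺序排列
    ("Node4", "192.168.17.14"),
    ("Node3", "192.168.17.13"),
    ("Node2", "192.168.17.12"),
    ("Node1", "192.168.17.11"),
]
USER, PASSWORD = "admin", "<BMC密码>"  # 占位符


def power_on(name, bmc_ip):
    cmd = ["ipmitool", "-I", "lanplus", "-H", bmc_ip,
           "-U", USER, "-P", PASSWORD, "chassis", "power", "on"]
    print(f"正在上电 {name} ({bmc_ip}) ...")
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    for i, (name, ip) in enumerate(BMC_NODES):
        power_on(name, ip)
        if i < len(BMC_NODES) - 1:
            time.sleep(30)  # 两台节点之间至少间隔 30 秒
```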

VxRail超融合桌面虚拟化解决方案简述

解决方案简述
VCE VXRAIL™ 应用装置:解决州和地方政府机构的虚拟化难题
难题
对于州和地方政府机构来说,新技术带来的美好前景同时也会给他们带来独特的新难题。

新应用程序,例如那些用于远程和移动计算的应用程序,会大大提升用户工作效率,但它们会给陈旧的 IT 基础架构带来原规划者未曾预料到的性能和数据存储要求。

然而,州和地方政府的系统几乎无法跟上其五花八门的硬件和软件组件的更新周期,远远无法适应当前针对更严格的服务级别协议、更短的项目最后期限和压缩预算的要求。

由于面临保持低成本的压力,政府机构很难申请配备专业化 IT 技术人员—甚至是新的基础设施—来支持 IT 基础架构的升级。

解决方案:用 VCE VXRAIL 应用装置虚拟化您的州/地方政府机构的 IT 基础架构
对于许多州和地方政府机构(而且此数量还在迅速增加)来说,计算资源虚拟化是解决公众期望值高而预算更低这一难题的有效方案。

州政府机构已频频表达了通过虚拟化减少物理服务器数量的这一愿望。

按照最近的统计数据,州和地方政府机构在给定的某一年通过虚拟化将其服务器硬件数量减少了 68% 之多。

然而,若能再进一步,利用针对虚拟化的超级融合解决方案,则他们可以做到的将远不止是削减硬件开支。

他们可以消除相互冲突的系统更新周期和冗余软件,并可大大减少对专业化IT 技术人员的需要。

例如,通过在一个超级融合基础架构应用装置(HCIA) 上部署一个虚拟化桌面基础架构(VDI),您将能够在任何时候都将计算机资源放在最需要它们的地方。

虚拟化和HCIA 可为您省去诸多成本,包括:过度调配IT 系统、雇用高薪酬IT 顾问,和为更大的服务器、存储或网络设备建造新的办公空间。

现在,随着VCE VxRail 应用装置的问世,利用虚拟化以及软件定义的数据中心(SDDC) 的优势对于州和地方政府机构来说已成为力所能及之事。

VCE VXRAIL 应用装置简介
VCE VxRail 应用装置跨一系列功能强大的系统提供了数百种配置选项,可单独使用或配合使用,以便部署和管理几十或数百个虚拟服务器或虚拟桌面。

Dell EMC VxRail超融合平台技术白皮书

产品介绍
Dell EMC VxRail 应用装置
不要只刷新您的服务器;采用第 14 代 PowerEdge 服务器上的 VxRail 实现基础架构转型
敏捷性是加速 IT 转型的关键要素。

超融合是实现敏捷性的关键所在。

行业分析公司 ESG 的最近调查显示,绝大多数 (87%) 已采用 HCI 的组织表示 HCI 使他们的组织更敏捷。

其结果是他们为更广泛的工作负载部署 HCI ,大部分在 HCI 上运行 20% 甚至更多的应用程序。

Dell EMC VxRail™ 可通过标准化和自动化加速和简化 IT 。

采用第 14 代PowerEdge 上的 VxRail ,您不但能够刷新服务器,而且还能实现基础架构转型。

VxRail 应用装置优势
Dell EMC VxRail 在完整生命周期管理方面(包括高级自动化)投入了巨大力量,您从第一天开始就可以轻松使用它,以便进一步简化 IT 基础架构和运营。

新一代 PowerEdge 服务器上的 VxRail 应用装置是完善、优化和强大的平台,有助于简化从部署、管理、扩展到维护的整个生命周期。

据 Silverton Consulting 的一项研究,经验证,与自行进行 HCI 系统维护相比,VxRail 可带来多达 30% 的 TCO 优势。

此外,借助集成的端到端支持,VxRail 还可将维护成本降低 43%。

简而言之,VxRail 能够让您事半功倍,使您的 IT 人员把精力放在更具战略性的项目上,而不仅仅是维持正常运转。

VxRail 是唯一一个由 VMware vSAN™ 提供支持、完全集成、预配置且经过测试的 HCI 应用装置,是 VMware 基础架构转型的标准。

VxRail 提供了一种简单、经济高效的超融合解决方案,可解决您的众多难题,并支持几乎任何使用情形,包括第一层应用程序和混合工作负载。

通过 DellEMC 可以更快、更好且更简单地提供虚拟桌面、业务关键型应用程序和远程办公室基础架构。

EMC VxRail超融合平台
性能测试报告

1.
1.1.
本次测试选用Iometer进行测试。Iometer是一个单机或者集群的I/O子系统测量和描述工具,最初由英特尔公司于1998年2月17日在英特尔开发者论坛(IDF)上发布,此后在行业内广泛传播,成为了标准。
Iometer包括2个程序:Iometer.exe和Dynamo.exe。其中Iometer是控制程序,提供图形界面,便于调节参数和显示测试结果;Dynamo则是对测试盘产生压力的主程序,由Iometer控制。在Windows下运行Iometer时,会自动打开Dynamo程序。
测试工具版本为iometer-1.1.0-win64.x86_64-bin。
1.2.
环境中数据库以虚拟机方式运行在VMware虚拟化环境中,拓扑结构如下:
其中VxRail超融合设备内置4个服务器节点,每个节点配置1块800GB SSD盘和5块2TB SATA盘,共4块800GB SSD盘和20块2TB SATA盘。
1.3.
测试项目(传输块大小、随机/顺序比例、读/写分发比例):
测试一:最大IO处理能力(512B,100%顺序,100%读)
测试二:最大带宽能力(64KB,100%顺序,100%读)
测试三:特定应用_数据库(8KB,100%随机,67%读33%写)
测试四:常规应用顺序读(4KB,100%顺序,100%读)
测试五:常规应用随机读(4KB,100%随机,100%读)
测试六:常规应用顺序写(4KB,100%顺序,100%写)
测试七:常规应用随机写(4KB,100%随机,100%写)
1.4.
针对超融合架构,我们在每个节点部署2个Windows 2008 R2虚机,并安装IOmeter软件。测试的虚拟机配置2×4颗处理器核心、32GB内存、80GB系统盘和100GB数据盘。本次测试针对该虚拟机数据盘磁盘文件进行性能测试。利用IOmeter生成特定负载,在VxRail Manager观察系统负载情况,并收集IOmeter详细结果。
本次测试使用8个CPU核心,对应图中Topology栏内Worker1-8,对所有Worker均设置相同参数;
Disk Targets栏中选择相应磁盘(这里选择E:,图中标示有误),Maximum Disk Size保持默认为0(图中标示有误),其他参数均保持为默认;
Access Specifications栏中指定IO类型,其中测试一和测试二已有现成的对应规则,测试三需要编辑修改Default规则;
点击编辑进入规则编辑页面,将Transfer Request Size设置为8KB,其他保持不变,该IO类型即为Oracle数据库IO类型;
Results Display栏中Update Frequency保持默认5s采集一次数据;
Test Setup栏中Run Time设置为5分钟。
2.
(详细结果见《EMC VxRail 性能测试详细结果》)
各测试项结果如下:
测试一:最大IO处理能力(512B,100%顺序,100%读):IOPS≈123656,带宽≈60 MBps,时延≈0.064s
测试二:最大带宽能力(64KB,100%顺序,100%读):IOPS≈8887,带宽≈555 MBps,时延≈0.899s
测试三:特定应用_数据库(8KB,100%随机,67%读33%写):IOPS≈13823,带宽≈108 MBps,时延≈0.578s
测试四:常规应用顺序读(4KB,100%顺序,100%读):IOPS≈24521,带宽≈95 MBps,时延≈0.325s
测试五:常规应用随机读(4KB,100%随机,100%读):IOPS≈17881,带宽≈69 MBps,时延≈0.446s
测试六:常规应用顺序写(4KB,100%顺序,100%写):IOPS≈6885,带宽≈27 MBps,时延≈1.161s
测试七:常规应用随机写(4KB,100%随机,100%写):IOPS≈7155,带宽≈28 MBps,时延≈1.117s
2.1.
测试结果汇总(测试项目、传输块大小、随机/顺序比例、读/写分发比例、测试结果):
测试一:最大IO处理能力,512B,100%顺序,100%读,IOPS≈123656,带宽≈60 MBps,时延≈0.064s
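上表中的带宽与 IOPS 满足"带宽 ≈ IOPS × 传输块大小"的关系,可以用下面的 Python 片段快速复核(数值取自上表,仅作校验示意,不属于原测试工具的输出);时延按原文单位原样引用,不做换算。

```python
# 复核:带宽(MB/s) ≈ IOPS × 传输块大小;数值取自上面的测试结果汇总。
RESULTS = [
    # (测试项, 块大小字节, IOPS, 报告带宽 MBps)
    ("测试一:最大IO处理能力", 512,       123656, 60),
    ("测试二:最大带宽能力",   64 * 1024, 8887,   555),
    ("测试三:特定应用_数据库", 8 * 1024,  13823,  108),
    ("测试四:常规应用顺序读", 4 * 1024,  24521,  95),
    ("测试五:常规应用随机读", 4 * 1024,  17881,  69),
    ("测试六:常规应用顺序写", 4 * 1024,  6885,   27),
    ("测试七:常规应用随机写", 4 * 1024,  7155,   28),
]

for name, block, iops, reported_mbps in RESULTS:
    calc_mbps = iops * block / (1024 * 1024)  # 按 1 MB = 1024*1024 字节计算
    print(f"{name}: 计算带宽 {calc_mbps:.1f} MB/s, 报告带宽 {reported_mbps} MB/s")
```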