Mellanox Interconnect Solutions - v1.2
HP Insight CMU - Mellanox UFM Connector Management Software Documentation
Management software is a critical component in today's clusters. As clusters become larger, more complex and business critical, they require a proper end-to-end means to monitor, provision and control them. Traditionally, cluster administrators have had to manage the server and network sides separately, without visibility into network performance and health. This results in manual, time-consuming root cause analysis of events and a relatively long time to resolution.

The CMU-UFM Connector combines HP's Insight CMU server information with Mellanox's Unified Fabric Manager (UFM) fabric information. This enables the cluster administrator to view, in one location, both server and network information, which greatly reduces operational effort and time to resolution. The CMU-UFM Connector is an add-on software package installed on the HP-CMU management node.

Figure 1. New fabric actions now available with CMU-UFM Connector

The CMU-UFM Connector synchronizes host names and IPs, creating a consistent cluster database for the cluster administrator. This enables correlation between server and network monitored data and events. Furthermore, information from both environments is readily correlated to the same cluster entities.

The CMU-UFM Connector makes congestion information available to CMU without the need for a local agent, indicating to the cluster administrator which nodes suffer from traffic congestion. The information can now be easily correlated to server behavior metrics, which significantly improves the ability to identify application bottlenecks and to perform root cause analysis of poor cluster performance incidents. Node congestion is displayed in CMU via a new congestion counter named "IB_Congestion". The counter can be monitored in near real time or historically.

The CMU-UFM Connector generates a fabric-related node status in CMU, indicating to the cluster administrator that the node has fabric alerts or issues. This introduces a whole new dimension of visibility on the same central CMU console.

The CMU-UFM Connector imports the fabric topology from UFM into CMU. The fabric connectivity is represented as CMU user groups; each group contains the nodes connected to a specific leaf switch in the fabric. This helps to quickly identify cabling or CMU setup errors, as well as non-optimized job distribution.

When fabric alerts appear or cluster configuration inconsistencies are detected, the cluster administrator may wish to drill down further into the fabric as part of debug and diagnostics. With the CMU-UFM Connector the administrator can launch UFM diagnostic tools from CMU. Two new CMU menu entries have been added: "Run UFM Fabric Health Report" and "Launch UFM Fabric Health Report". When selected, a detailed report of the underlying fabric is generated and displayed to the user.
Further fabric analysis and configuration can be conducted using native UFM access.

A simple and affordable cluster toolkit, HP CMU is used to configure and install cluster operating environments, to monitor cluster and node metrics, and to remotely manage resources. HP CMU serves as a powerful tool for installing Linux software images, including middleware such as the Message-Passing Interface (MPI) and job schedulers. HP CMU can be used anywhere to manage a number of standalone systems similar in hardware and software configuration. Mellanox's Unified Fabric Manager (UFM) is a powerful platform for managing scale-out computing environments.

Figure 2. UFM's fabric data displayed in CMU
Mellanox vSAN Networking Solution Documentation
vSAN™ Networking Done Right
Increase vSAN Efficiency with Mellanox Ethernet Interconnects

Efficient Hardware Offloads
A variety of new workloads and technologies are increasing the load on CPU utilization. Overlay network protocols, OVS processing, access to storage and others are placing a strain on VMware environments. High performance workloads require intensive processing, which can waste CPU cycles and choke networks. The end result is that application efficiency is limited and virtual environments as a whole become inefficient. Because of these challenges, data center administrators now look to alleviate CPU loads by implementing intelligent network components that can ease CPU strain, increase network bandwidth, and enable scale and efficiency in virtual environments.

Mellanox interconnects can reduce this burden by offloading many networking tasks, thereby freeing CPU resources to serve more VMs and process more data. A side-by-side comparison shows over a 70% reduction in CPU resources and a 40% improvement in bandwidth with Mellanox offloads versus without offloads.

RoCE Certified
vSphere 6.5 introduced Remote Direct Memory Access over Converged Ethernet (RoCE). RoCE allows direct memory access from one computer to another without involving the operating system or CPU. The transfer of data is offloaded to a RoCE-capable adapter, freeing the CPU from the data transfer process and reducing latencies. For virtual machines, a PVRDMA (para-virtualized RDMA) network adapter is used to communicate with other virtual machines. Mellanox adapters are certified for both in vSphere.

Reduce CPU Overhead
RoCE dramatically accelerates communication between two network endpoints but also requires a switch that is configured for lossless traffic. RoCE v1 operates over lossless layer 2, while RoCE v2 supports layer 2 and layer 3. To ensure a lossless environment, you must be able to control the traffic flows. Mellanox Spectrum switches support Priority Flow Control (PFC) and Explicit Congestion Notification (ECN), which enable a global pause across the network to support RDMA. Once RoCE is set up on vSphere, close-to-local, predictable latency can be gained from networked storage, along with line-rate throughput and linear scalability. This helps to accommodate dynamic, agile data movement between nodes.

VMware Virtual SAN
VMware's Virtual SAN (vSAN) brings performance, low cost and scalability to virtual cloud deployments. An issue that the cloud deployment model raises is the problem of adequate storage performance for virtual instances. Spinning disks and limited bandwidth networks lower IO rates compared to local drives. VMware's solution to this is vSAN, which adds a temporary local storage "instance" in the form of a solid-state drive to each server. vSAN extends the concept of local instance storage to a shareable storage unit in each server, where, additionally, the data can be accessed by other servers over a LAN.
The benefits of vSAN include:
• Increased performance due to local server access to flash storage
• Lower infrastructure cost by removing the need for networked storage appliances
• Highly scalable: simply add more servers to increase storage
• Eliminates boot storms since data is stored locally
• Unified management: no storage silo versus server silo separation problems

Mellanox 10/25GbE Ethernet interconnect solutions enable unmatched competitive advantages in VMware environments by increasing the efficiency of overall server utilization and eliminating I/O bottlenecks, enabling more virtual machines per server, faster migrations, and faster access to storage. Explore this reference guide to learn more about how Mellanox key technologies can help improve efficiencies in your vSAN environment.

Deployments are scalable from a half rack to multiple racks with pay-as-you-grow switching: half rack (12 nodes), full rack (24 nodes), and up to 10 racks (240 nodes). Deployment configuration link types: (1) 1GbE link: 1GbE transceiver; (2) 25/10GbE link: QSFP to SFP+; (3) 100GbE link: QSFP to QSFP; (4) 100/40GbE link: QSFP to QSFP.

Automated Network
Provisioning & Orchestration:
▪ Zero-touch provisioning
▪ VLAN auto-provisioning
▪ Migrate VMs without manual configuration
▪ VXLAN/DCI support for VM migration across multiple datacenters for DR
Monitoring:
▪ Performance monitoring
▪ Health monitoring
▪ Detailed telemetry
▪ Alerts and notifications
Half-width switch form factor:
▪ ½ 19" width, 1U height
▪ 18x10/25GbE + 4x40/100GbE
▪ 57W typical (ATIS)

Mellanox Interconnects
iSER
Storage virtualization requires an agile and responsive network. iSER accelerates workloads by using the iSCSI Extensions for RDMA. Using the iSER extension lowers latencies and CPU utilization to help keep pace with I/O requirements, and it provides a 70% improvement in throughput and a 70% reduction in latency through Mellanox Ethernet interconnects (Deliver 3X Efficiency). A usage sketch follows at the end of this section.

Hyper-Converged: Reduce CapEx Expense
Hyper-Converged Infrastructure (HCI) is a demanding environment for networking interconnects. HCI consists of three software components: compute virtualization, storage virtualization and management, all three of which require an agile and responsive network. Deploying on 10GbE, or better, 25GbE network pipes helps, as do network adapters and switches with offload capabilities that optimize the performance and availability of synchronization and replication of virtualized workloads (CapEx Analysis: 10G vs. 25G).

Mellanox adapters and switches accelerate VM resources to improve performance, enhance efficiency and provide high availability, and are a must-have for any VMware environment.

Ethernet Adapters
Mellanox ConnectX adapters:
▪ Enable near-native performance for VMs through stateless offloads
▪ Extend hardware resources to 64 PFs, 512 VFs with SR-IOV and RoCE
▪ Accelerate virtualized networks with VXLAN, GENEVE and NVGRE
▪ Align network services with compute services for multi-tenant network support
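Returning to the iSER point above: on the VMware side, turning on the iSER initiator is a short host-level operation. The following is a minimal, hedged sketch assuming vSphere 6.7 or later with a RoCE-capable uplink already configured; the adapter and VMkernel names (vmhba65, vmk1) are placeholders for whatever your host reports.

# Enable the VMware iSER initiator (creates a new iSER vmhba)
esxcli rdma iser add
# List iSCSI/iSER adapters to find the name of the newly created vmhba
esxcli iscsi adapter list
# Bind the iSER adapter to the VMkernel port that owns the RDMA-capable uplink
esxcli iscsi networkportal add -A vmhba65 -n vmk1

After binding, iSER targets are added and discovered the same way as ordinary software iSCSI targets.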
Mellanox High-Speed, Low-Cost DAC Cable Documentation
MCP1600-E0xxEyy 100Gb/s QSFP28 Direct Attach Copper Cable - Interconnect Product Brief

Mellanox® MCP1600-E0xxEyy DAC cables are high speed, cost-effective alternatives to fiber optics in InfiniBand 100Gb/s EDR applications. Mellanox QSFP28 passive copper cables(1) contain four high-speed copper pairs, each operating at data rates of up to 25Gb/s. Each QSFP28 port comprises an EEPROM providing product information, which can be read by the host system.

Mellanox's unique quality passive copper cable solutions provide power-efficient connectivity for short distance interconnects. They enable higher port bandwidth, density and configurability at a low cost and reduced power requirement in data centers. Rigorous cable production testing ensures the best out-of-the-box installation experience, performance and durability.

(1) Raw cables are provided from different sources to ensure supply chain robustness.

Table 1 - Absolute Maximum Ratings
Table 2 - Operational Specifications
Table 3 - Electrical Specifications
Table 4 - Cable Mechanical Specifications
Note 1. The minimum assembly bending radius (close to the connector) is 10x the cable's outer diameter. The repeated bend (far from the connector) is also 10x the cable's outer diameter. The single bend (far from the connector) is 5x the cable's outer diameter.
Table 5 - Part Numbers and Descriptions
Note 1. See Figure 2 for the cable length definition.
Note 2. xx = reach; yy = wire gauge.
Figure 1. Assembly Bending Radius
Figure 2. Cable Length Definition

Warranty Information
Mellanox LinkX direct attach copper cables include a 1-year limited hardware warranty, which covers parts repair or replacement.
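As noted above, each QSFP28 connector carries an EEPROM that the host system can read. On a Linux server this is typically exposed through ethtool's module EEPROM query; the sketch below assumes a NIC driver that supports the operation, and the interface name is a placeholder.

# Print the decoded cable/module EEPROM (vendor, part number, cable length, etc.)
ethtool -m ens2f0
# Add a raw hex dump of the EEPROM contents
ethtool -m ens2f0 hex on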
Mellanox MLNX-OS User Manual Documentation
Mellanox MLNX-OS User Manual for VPI, Rev 4.90
Software Version 3.6.6000

Table of Contents
Document Revision History
About this Manual
Chapter 1 Introduction
  1.1 System Features
  1.2 Ethernet Features
  1.3 InfiniBand Features
  1.4 Gateway Features
Chapter 2 Getting Started
  2.1 Configuring the Switch for the First Time
    2.1.1 Configuring the Switch with ZTP
    2.1.2 Rerunning the Wizard
  2.2 Starting the Command Line (CLI)
  2.3 Starting the Web User Interface (WebUI)
  2.4 Zero-touch Provisioning
    2.4.1 Running DHCP-ZTP
    2.4.2 ZTP on Director Switches
    2.4.3 ZTP and MLNX-OS Software Upgrade
    2.4.4 DHCPv4 Configuration Example
    2.4.5 DHCPv6 Configuration Example
    2.4.6 Commands
  2.5 Licenses
    2.5.1 Installing MLNX-OS License (CLI)
    2.5.2 Installing MLNX-OS License (Web)
    2.5.3 Retrieving a Lost License Key
    2.5.4 Commands
Chapter 3 User Interfaces
  3.1 LED Indicators
  3.2 Command Line Interface Overview
    3.2.1 CLI Modes
    3.2.2 Syntax Conventions
    3.2.3 Getting Help
    3.2.4 Prompt and Response Conventions
    3.2.5 Using the "no" Form
    3.2.6 Parameter Key
    3.2.7 CLI Pipeline Operator Commands
      3.2.7.1 "include" and "exclude" CLI Filtration Options
      3.2.7.2 "watch" CLI Monitoring Option
      3.2.7.3 "json-print" CLI Option
    3.2.8 CLI Shortcuts
  3.3 Web Interface Overview
    3.3.1 Setup Menu
    3.3.2 System Menu
    3.3.3 Security Menu
    3.3.4 Ports Menu
    3.3.5 Status Menu
    3.3.6 IB SM Mgmt
    3.3.7 Fabric Inspector
    3.3.8 ETH Mgmt
    3.3.9 IP Route
    3.3.10 IB Router
    3.3.11 Gateway
  3.4 Secure Shell (SSH)
    3.4.1 Adding a Host and Providing an SSH Key
    3.4.2 Retrieving Return Codes when Executing Remote Commands
  3.5 Management Information Bases (MIBs)
  3.6 Commands
    3.6.1 CLI Session
    3.6.2 Banner
    3.6.3 SSH
    3.6.4 Remote Login
    3.6.5 Web Interface
Chapter 4 System Management
  4.1 Management Interface
    4.1.1 Configuring Management Interfaces with Static IP Addresses
    4.1.2 Configuring IPv6 Address on the Management Interface
    4.1.3 Dynamic Host Configuration Protocol (DHCP)
    4.1.4 Default Gateway
    4.1.5 In-Band Management
    4.1.6 Configuring Hostname via DHCP (DHCP Client Option 12)
    4.1.7 Commands
      4.1.7.1 Interface
      4.1.7.2 Hostname Resolution
      4.1.7.3 Routing
      4.1.7.4 Network to Media Resolution (ARP & NDP)
      4.1.7.5 DHCP
      4.1.7.6 General IPv6 Commands
      4.1.7.7 IP Diagnostic Tools
  4.2 NTP, Clock & Time Zones
    4.2.1 NTP Authenticate
    4.2.2 NTP Authentication Key
    4.2.3 Commands
  4.3 Unbreakable Links
    4.3.1 Link Level Retransmission (LLR)
    4.3.2 Configuring Phy Profile & LLR
    4.3.3 Commands
  4.4 Virtual Protocol Interconnect (VPI)
    4.4.1 Commands
  4.5 System Profile
    4.5.1 Commands
  4.6 Software Management
    4.6.1 Upgrading MLNX-OS Software
    4.6.2 Upgrading MLNX-OS Software on Director Switches
    4.6.3 Upgrading MLNX-OS HA Groups
    4.6.4 Deleting Unused Images
    4.6.5 Downgrading MLNX-OS Software
      4.6.5.1 Downloading Image
      4.6.5.2 Downgrading Image
      4.6.5.3 Switching to Partition with Older Software Version
    4.6.6 Upgrading System Firmware
      4.6.6.1 After Updating MLNX-OS Software
      4.6.6.2 After Inserting a Switch Spine or Leaf
      4.6.6.3 Importing Firmware and Changing the Default Firmware
    4.6.7 Image Maintenance via Mellanox ONIE
    4.6.8 Commands
  4.7 Configuration Management
    4.7.1 Saving a Configuration File
    4.7.2 Loading a Configuration File
    4.7.3 Restoring Factory Default Configuration
    4.7.4 Managing Configuration Files
      4.7.4.1 BIN Configuration Files
      4.7.4.2 Text Configuration Files
    4.7.5 Commands
      4.7.5.1 File System
      4.7.5.2 Configuration Files
  4.8 Logging
    4.8.1 Monitor
    4.8.2 Remote Logging
    4.8.3 Commands
  4.9 Debugging
    4.9.1 Commands
  4.10 Link Diagnostic Per Port
    4.10.1 General
    4.10.2 List of Possible Output Messages
    4.10.3 Commands
  4.11 Event Notifications
    4.11.1 Supported Events
    4.11.2 SNMP Trap Notifications
    4.11.3 Terminal Notifications
    4.11.4 Email Notifications
    4.11.5 Commands
      4.11.5.1 Email Notification
  4.12 Telemetry
    4.12.1 Commands
  4.13 mDNS
    4.13.1 Commands
  4.14 User Management and Security
    4.14.1 User Accounts
    4.14.2 Authentication, Authorization and Accounting (AAA)
      4.14.2.1 User Re-authentication
      4.14.2.2 RADIUS
      4.14.2.3 TACACS+
      4.14.2.4 LDAP
    4.14.3 System Secure Mode
    4.14.4 Commands
      4.14.4.1 User Accounts
      4.14.4.2 AAA Methods
      4.14.4.3 RADIUS
      4.14.4.4 TACACS+
      4.14.4.5 LDAP
      4.14.4.6 System Secure Mode
  4.15 Cryptographic (X.509, IPSec) and Encryption
    4.15.1 System File Encryption
      4.15.1.1 Commands
  4.16 Scheduled Jobs
    4.16.1 Commands
  4.17 Statistics and Alarms
    4.17.1 Commands
  4.18 Chassis Management
    4.18.1 System Health Monitor
      4.18.1.1 Re-Notification on Errors
      4.18.1.2 System Health Monitor Alerts Scenarios
    4.18.2 Power Management
      4.18.2.1 Power Supply Options
      4.18.2.2 Width Reduction Power Saving
      4.18.2.3 Managing Chassis Power
    4.18.3 Monitoring Environmental Conditions
    4.18.4 USB Access
    4.18.5 Unit Identification LED
    4.18.6 High Availability (HA)
      4.18.6.1 Chassis High Availability Nodes Roles
      4.18.6.2 Malfunctioned CPU Behavior
      4.18.6.3 Box IP Centralized Location
      4.18.6.4 System Configuration
      4.18.6.5 Takeover Functionally
    4.18.7 System Reboot
      4.18.7.1 Rebooting 1U Switches
      4.18.7.2 Rebooting Director Switches
    4.18.8 Commands
      4.18.8.1 Chassis Management
      4.18.8.2 Chassis High Availability
  4.19 Network Management Interfaces
    4.19.1 SNMP
      4.19.1.1 Standard MIBs
      4.19.1.2 Private MIB
      4.19.1.3 Proprietary Traps
      4.19.1.4 Configuring SNMP
      4.19.1.5 Configuring an SNMPv3 User
      4.19.1.6 Configuring an SNMP Notification
      4.19.1.7 SNMP SET Operations
      4.19.1.8 IF-MIB and Interface Information
    4.19.2 JSON API
      4.19.2.1 Authentication
      4.19.2.2 Sending the Request
      4.19.2.3 JSON Request Format
      4.19.2.4 JSON Response Format
      4.19.2.5 Supported Commands
      4.19.2.6 JSON Examples
      4.19.2.7 JSON Request Using WebUI
    4.19.3 XML API
    4.19.4 Commands
      4.19.4.1 SNMP Commands
      4.19.4.2 XML API Commands
      4.19.4.3 JSON API Commands
  4.20 Puppet Agent
    4.20.1 Setting the Puppet Server
    4.20.2 Accepting the Switch Request
    4.20.3 Installing Modules on the Puppet Server
    4.20.4 Writing Configuration Classes
    4.20.5 Supported Configuration Capabilities
      4.20.5.1 Ethernet, InfiniBand, and Port-Channel Interface Capabilities
      4.20.5.2 VLAN Capabilities
      4.20.5.3 Layer 2 Ethernet Interface Capabilities
      4.20.5.4 LAG (Port-Channel) Capabilities
      4.20.5.5 Layer 3 Interface Capabilities
      4.20.5.6 OSPF Interface Capabilities
      4.20.5.7 OSPF Area Capabilities
      4.20.5.8 Router OSPF Capabilities
      4.20.5.9 Protocol LLDP, SNMP, IP Routing and Spanning Tree Capabilities
      4.20.5.10 Fetched Image Capabilities
      4.20.5.11 Installed Image Capabilities
    4.20.6 Supported Resources for Each Type
    4.20.7 Troubleshooting
      4.20.7.1 Switch and Server Clocks are not Synchronized
      4.20.7.2 Outdated or Invalid SSL Certificates Either on the Switch or the Server
      4.20.7.3 Communications Issue
    4.20.8 Commands
  4.21 Virtual Machine
    4.21.1 Virtual Machine Configuration
    4.21.2 Commands
      4.21.2.1 Config
      4.21.2.2 Show
  4.22 Back-Up Battery Units
    4.22.1 BBU Calibration Procedure
    4.22.2 BBU Self-Test
    4.22.3 BBU Shut-Off Timer
    4.22.4 Commands
  4.23 Control Plane Policing
    4.23.1 IP Table Filtering
      4.23.1.1 Configuring IP Table Filtering
      4.23.1.2 Modifying IP Table Filtering
      4.23.1.3 Rate-limit Rule Configuration
    4.23.2 Commands
  4.24 Resource Scale
    4.24.1 Ethernet Resources
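The manual body is not reproduced here, but as a flavor of the CLI covered by sections 4.1.1 (static management addresses) and 4.7 (configuration management) above, the following is a minimal, hedged sketch of assigning mgmt0 a static IPv4 address and saving the configuration. The address and netmask are placeholders, and exact command forms should be verified against the command references listed in the table of contents.

switch > enable
switch # configure terminal
switch (config) # no interface mgmt0 dhcp
switch (config) # interface mgmt0 ip address 192.168.10.2 255.255.255.0
switch (config) # configuration write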
Mellanox QSFP+ Direct Attach Copper (DAC) Cable Product Introduction
56Gb/s and 40GbE QSFP+ Direct Attach Copper Cables - Interconnect Product Brief
MC22061xx-00x | MC22071xx-0xx | MC22101xx-00x | MCP170L-F0xx | MCP1700-B0xxx

Mellanox® QSFP+ direct attach copper (DAC) cables provide robust connections for leading edge 56Gb/s InfiniBand or 40GbE Ethernet systems. The cables consist of four lanes, each operating at data rates of up to 14.0625Gb/s.

Passive copper cables require no additional power to ensure quality connectivity. The cables are compliant with SFF-8685 specifications and provide connectivity between devices using QSFP+ ports. Each QSFP+ port comprises an EEPROM providing product information, which can be read by the host system.

This integrated cable solution provides reliable transport for aggregate data rates of up to 56Gb/s Virtual Protocol Interconnect (VPI). Optimizing systems to operate with Mellanox's 56Gb/s passive copper cables significantly reduces power consumption and EMI emission, eliminating the use of enterprise data center (EDC) hosts.

Mellanox's unique quality passive copper cable solutions provide power-efficient connectivity for short distance interconnects. They enable higher port bandwidth, density and configurability at a low cost and reduced power requirement in data centers. Rigorous cable production testing ensures the best out-of-the-box installation experience, performance and durability.

Table 1 - Absolute Maximum Ratings
Table 2 - Operational Specifications
Table 3 - Electrical Specifications
Table 4 - Cable Mechanical Specifications
Note 1. See Figure 1 for the cable length definition.
Note 2. *The pulltab color is black, instead of the standard blue pulltab.
Note 3. The minimum bending space is the distance from the system panel to the edge of the cable bend with the minimum assembly bending radius. This corresponds to the distance from the switch panel to the rack door.
Note 4. The minimum assembly bending radius (close to the connector) is 10x the cable's outer diameter. The repeated bend (far from the connector) is also 10x the cable's outer diameter. The single bend (far from the connector) is 5x the cable's outer diameter.
Figure 1. Cable Length Definition
Table 5 - Part Numbers and Descriptions
Note. The standard pulltab color is blue; cables with a black pulltab are indicated accordingly.

Warranty Information
Mellanox LinkX direct attach copper cables include a 1-year limited hardware warranty, which covers parts repair or replacement.
Mellanox SN2100 Switch Datasheet
SN2100 Open Ethernet Switch - Product Brief
Half-width 16-port Non-blocking 100GbE Open Ethernet Switch System

The SN2100 switch provides a high density, side-by-side 100GbE switching solution which scales up to 128 ports in 1RU for the growing demands of today's database, storage and data center environments.

The SN2100 switch is an ideal spine and top-of-rack (ToR) solution, allowing maximum flexibility, with port speeds spanning from 10Gb/s to 100Gb/s per port and port density that enables full rack connectivity to any server at any speed. The uplink ports allow a variety of blocking ratios to suit any application requirement.

Powered by the Mellanox Spectrum® ASIC and packed with 16 ports running at 100GbE, the SN2100 carries a whopping throughput of 3.2Tb/s with a landmark 2.38Bpps processing capacity in a compact 1RU form factor.

Following in the footsteps of the SwitchX®-2 based systems, the SN2100 enjoys the legacy of the field-proven Mellanox Onyx™ operating system (successor to MLNX-OS Ethernet), with a wide installed base and a robust implementation of data, control and management planes that drives the world's most powerful data centers.

Keeping with the Mellanox tradition of setting performance records in switch systems, the SN2100 introduces the world's lowest latency for a 100GbE switching and routing element, and does so while having the lowest power consumption in the market. With the SN2100, the use of 25, 40, 50 and 100GbE at large scale is enabled without changing power infrastructure facilities.

The SN2100 is part of Mellanox's complete end-to-end solution which provides 10GbE through 100GbE interconnectivity within the data center. Other devices in this solution include ConnectX®-4 based network interface cards and LinkX® copper or fiber cabling. This end-to-end solution is topped with Mellanox NEO®, a management application that relieves some of the major obstacles encountered when deploying a network. NEO enables a fully certified and interoperable design, speeds up time to service, and eventually speeds up ROI.

The SN2100 carries a unique design to accommodate the highest rack performance. Its design allows side-by-side placement of two switches in a single 1RU slot of a 19" rack, delivering high availability to the hosts.

Database solutions require high availability and the ability to scale out in an active-active configuration. For example, DB2 pureScale or Oracle RAC require high bandwidth and low latency to the caching facility, the disk storage system, etc., with connectivity to the application servers. The SN2100 is the best fit, providing the highest network throughput, resilience, and a mix of 25GbE and 100GbE ports.

The SN2100 introduces hardware capabilities for multiple tunneling protocols that enable increased reachability and scalability for today's data centers. Implementing MPLS, NVGRE and VXLAN tunneling encapsulations in the network layer of the data center allows more flexibility for terminating a tunnel in the network, in addition to termination on the server endpoint.

While Mellanox Spectrum provides the thrust and acceleration that powers the SN2100, an integrated, powerful x86-based processor allows this system to not only be the highest performing switch fabric element, but also gives the ability to incorporate a Linux-running server into the same device.
This opens up multiple application possibilities for utilizing the high CPU processing power together with the best switching fabric, creating a powerful machine with unique appliance capabilities that can improve numerous network implementation paradigms.

SPECIFICATIONS
Power Specifications
–Typical power with passive cables (ATIS): 94.3W
–Input voltage range: 100-240VAC
Physical Characteristics
–Dimensions: 1.72" (43.8mm) H x 7.87" (200mm) W x 20" (508mm) D
–Weight: 4.540kg (10lb)
Supported Modules and Cables
–QSFP28, SFP28 short and long range optics
–QSFP28 to QSFP28 DAC cable
–QSFP breakout cables 100GbE to 4x25GbE and 40GbE to 4x10GbE, DAC or optical
–QSFP breakout cables 100GbE to 2x50GbE, DAC or optical
–QSFP AOC
–1000BASE-T and 1000BASE-SX/LX/ZX modules

FEATURES*
* This section describes hardware features and capabilities. Please refer to the driver and firmware release notes for feature availability.

Layer 2 Feature Set
–Multi Chassis LAG (MLAG)
–IGMP V2/V3, Snooping, Querier
–VLAN 802.1Q (4K)
–Q-In-Q
–802.1W Rapid Spanning Tree
–BPDU Filter, Root Guard
–Loop Guard, BPDU Guard
–802.1Q Multiple STP
–PVRST+ (Rapid Per VLAN STP+)
–802.3ad Link Aggregation (LAG) & LACP
–32 Ports/Channel, 64 Groups Per System
–Port Isolation
–LLDP
–Store & Forward / Cut-through mode of work
–HLL
–10/25/40/50/56/100GbE
–Jumbo Frames (9216 bytes)

Layer 3 Feature Set
–64 VRFs
–IPv4 & IPv6 routing, including route maps
–BGP4, OSPFv2
–PIM-SM & PIM-SSM (including PIM-SM over MLAG)
–BFD (BGP, OSPF, static routes)
–VRRP
–DHCPv4/v6 Relay
–Router port, interface VLAN, NULL interface for routing
–ECMP, 64-way
–IGMPv2/v3 Snooping Querier

Synchronization
–PTP IEEE-1588 (SMPTE profile)
–NTP

Quality of Service
–802.3X Flow Control
–WRED, Fast ECN & PFC
–802.1Qbb Priority Flow Control
–802.1Qaz ETS
–DCBX, App TLV support
–Advanced QoS: qualification, rewrite, policers
–802.1AB
–Shared buffer management

Management & Automation
–ZTP
–Ansible, SaltStack, Puppet
–FTP / TFTP / SCP
–AAA, RADIUS / TACACS+ / LDAP
–JSON & CLI, enhanced Web UI
–SNMP v1, v2, v3
–In-band management
–DHCP, SSHv2, Telnet
–SYSLOG
–10/100/1000 ETH RJ45 management ports
–USB console port for management
–Dual SW image
–Events history
–ONIE

Network Virtualization
–VXLAN EVPN, L2 stretch use case
–VXLAN Hardware VTEP, L2 GW
–Integration with VMware NSX, OpenStack, etc.

Software Defined Network (SDN)
–OpenFlow 1.3:
  • Hybrid
  • Supported controllers: ODL, ONOS, FloodLight, RYU

Docker Container
–Full SDK access through the container
–Persistent container & shared storage

Monitoring & Telemetry
–What Just Happened (WJH)
–sFlow
–Real-time queue depth histograms & thresholds
–Port mirroring (SPAN & RSPAN)
–Enhanced link & PHY monitoring
–BER degradation monitor
–Enhanced health mechanism
–3rd party integration (Splunk, etc.)

Security
–USA Department of Defense certification (UC APL)
–System secure mode, FIPS 140-2 compliance
–Storm Control
–Access Control Lists (ACLs, L2-L4 & user defined)
–802.1X Port Based Network Access Control
–SSH server strict mode, NIST 800-181A
–CoPP (IP filter)
–Port isolation

Table 2 - SN2100B Series Part Numbers and Descriptions
Table 3 - Rail Kit Part Number and Description
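The supported modules and cables list above includes 100GbE-to-4x25GbE QSFP breakout cables; using one on the SN2100 requires splitting the physical port into four logical interfaces first. The sketch below is a hedged example of that workflow under Mellanox Onyx, with an arbitrary port number; verify the exact command syntax against the Onyx user manual for your release.

switch > enable
switch # configure terminal
# Split port 1/9 into four lanes; it is then addressed as ethernet 1/9/1 through 1/9/4
switch (config) # interface ethernet 1/9 module-type qsfp-split-4 force
switch (config) # show interfaces ethernet 1/9/1 status
switch (config) # configuration write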
Mellanox Application Recognition Reference Guide
Reference Guide

Table of Contents
Chapter 1. Introduction
Chapter 2. System Design
Chapter 3. Application Architecture
Chapter 4. Configuration Flow
Chapter 5. Running Application
Chapter 6. References

Chapter 1. Introduction
Application Recognition (AR) allows identifying applications that are in use on a monitored networking node. AR enables the security administrator to generate consolidated reports that show usage patterns from the application perspective. AR is also used as a cornerstone of many security applications such as L7-based firewalls.
Due to the massive growth in the number of applications that communicate over Layer 7 (HTTP), effective monitoring of network activity requires looking deeper into Layer 7 traffic so individual applications can be identified. Different applications may require different levels of security and service.
This document describes how to build AR using the deep packet inspection (DPI) engine, which leverages NVIDIA® BlueField®-2 DPU capabilities such as the regular expression (RXP) acceleration engine, hardware-based connection tracking, and more.

Chapter 2. System Design
The AR application is designed to run as a "bump-on-the-wire" on the BlueField-2 instance: it intercepts the traffic coming from the wire and passes it to the physical function (PF) representor connected to the host.

Chapter 3. Application Architecture
AR runs on top of Data Plane Development Kit (DPDK) based Stateful Flow Tracking (SFT) to identify the flow that each packet belongs to, then uses DPI to perform L7 classification.
1. Signatures are compiled by the DPI compiler and then loaded to the DPI engine.
2. Ingress traffic is identified using the connection tracking module.
3. Traffic is scanned against the DPI engine's compiled signature DB.
4. Post-processing is performed for the match decision.
5. Matched flows are identified and actions can be executed (allow/deny).

Chapter 4. Configuration Flow
1. DPDK initialization:
   dpdk_init(&argc, &argv, &nb_queues);
2. AR initialization:
   ar_init(argc, argv, cdo_filename, csv_filename);
   a. Initialize NetFlow using the default configuration /etc/doca_netflow.conf.
   b. Initialize the signature database.
3. Stateful Flow Table (SFT) and port initialization:
   flow_offload_init(nb_queues);
   a. SFT initialization.
   b. Mempool allocation.
   c. Port initialization.
4. DPI initialization:
   dpi_ctx = doca_dpi_init(&doca_dpi_config, &err);
   a. Configure the RegEx engine.
   b. Configure the DPI queues.
5. Load compiled signatures to the RegEx engine:
   doca_dpi_load_signatures(dpi_ctx, ar_config.cdo_filename);
6. Configure DPI packet processing:
   dpi_worker_lcores_run(nb_queues, CLIENT_ID, ar_worker_attr);
   a. Configure DPI enqueue packets.
   b. Send jobs to the RegEx engine.
   c. Configure DPI dequeue packets.
7. Send statistics and write the database:
   sig_database_write_to_csv(ar_config.csv_filename);
   send_netflow();
   a. Send statistics to the collector.
   b. Write a CSV file with signature statistics.
8. AR destroy:
   ar_destroy(cmdline_thread, ar_config);
   Clears the command-line thread.
9. DPI destroy:
   doca_dpi_destroy(dpi_ctx);
   Frees DPI resources.
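Read together, steps 1 through 9 form a single initialization-to-teardown sequence. The fragment below is a condensed sketch of how they might be combined in the application's entry point; it only reuses the helper and DOCA calls quoted above, while the surrounding declarations (cdo_filename, ar_config, ar_worker_attr, doca_dpi_config, err, cmdline_thread, CLIENT_ID) are placeholders rather than the exact definitions found in ar.c.

/* Condensed sketch of the flow in steps 1-9 above.
 * Headers and variable declarations are omitted; see ar.c for the full code. */
int main(int argc, char **argv)
{
        int nb_queues = 0;
        struct doca_dpi_ctx *dpi_ctx;

        dpdk_init(&argc, &argv, &nb_queues);                         /* 1. EAL / DPDK setup */
        ar_init(argc, argv, cdo_filename, csv_filename);             /* 2. NetFlow + signature DB init */
        flow_offload_init(nb_queues);                                /* 3. SFT, mempools and ports */
        dpi_ctx = doca_dpi_init(&doca_dpi_config, &err);             /* 4. RegEx engine and DPI queues */
        doca_dpi_load_signatures(dpi_ctx, ar_config.cdo_filename);   /* 5. push compiled CDO to RegEx */
        dpi_worker_lcores_run(nb_queues, CLIENT_ID, ar_worker_attr); /* 6. per-lcore enqueue/dequeue loop */
        sig_database_write_to_csv(ar_config.csv_filename);           /* 7. dump per-signature statistics */
        send_netflow();                                              /*    and export NetFlow records */
        ar_destroy(cmdline_thread, ar_config);                       /* 8. stop CLI thread, free AR state */
        doca_dpi_destroy(dpi_ctx);                                   /* 9. release DPI resources */
        return 0;
}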
Chapter 5. Running Application

Prerequisites
Please refer to the DOCA Installation Guide for details on how to install BlueField related software.
‣ BlueField software image 3.6
‣ DOCA software package 1.0

To build the application:
1. Prepare the environment variables (DPDK is installed under /opt/mellanox). Run:
   For Ubuntu:
   export LD_LIBRARY_PATH=/opt/mellanox/dpdk/lib/aarch64-linux-gnu/
   For CentOS:
   export LD_LIBRARY_PATH=/opt/mellanox/dpdk/lib64
2. The application recognition example is installed as part of the doca-dpi-lib package; the binary is located under /opt/mellanox/doca/examples/ar/bin/doca_app_rec. To re-build the application recognition sample, run:
   For Ubuntu:
   export PKG_CONFIG_PATH=/opt/mellanox/dpdk/lib/aarch64-linux-gnu/pkgconfig/
   cd /opt/mellanox/doca/examples/ar/src
   meson /tmp/build
   ninja -C /tmp/build
   For CentOS:
   export PKG_CONFIG_PATH=/opt/mellanox/dpdk/lib64/pkgconfig/
   cd /opt/mellanox/doca/examples/ar/src
   meson /tmp/build
   ninja -C /tmp/build
   doca_app_rec will be created under build/app.
3. The application recognition example is a DPDK application. Therefore, the user is required to provide DPDK flags and allocate huge pages. Run:
   echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

To run the application:
doca_app_rec [DPDK flags] -- --cdo [cdo_file] --output_csv [output_csv_file] --print_match
For example:
/opt/mellanox/doca/examples/ar/bin/doca_app_rec -a 0000:03:00.0,class=regex:eth,representor=[65535],sft_en=0 -- -c /usr/etc/doca/dpi/ar/ar.cdo -o ar_stats.csv -p

Note: The flag -a 0000:03:00.0,class=regex:eth,representor=[65535],sft_en=0 is a must for proper usage of the application. Modifying this flag will result in unexpected behavior, as only 2 ports (an uplink and a representor) are supported.

To print the output when the DPI engine finds a match, use --print_match. For additional information about the available DPDK flags, use -h before the -- separator; for the application flags, use -h after the --.

The application will periodically dump a .csv file with the recognition results, containing statistics about the recognized apps in the format SIG_ID, APP_NAME, MATCHING_FIDS, and DROP. As per the example above, a file called ar_stats.csv will be created.

Additional features can be triggered by using the shell interaction. This allows blocking and unblocking specific signature IDs using the following commands:
‣ block <sig_id>
‣ unblock <sig_id>
The TAB key allows autocompletion, while the quit command terminates the application.

Application flags:
‣ -c or --cdo <path> – path to the CDO file compiled from a valid PDD
‣ -o or --output_csv <path> – path to the output CSV file
‣ -p or --print_match – prints the FID when matched in the DPI engine
‣ -i or --interactive – adds interactive mode for blocking signatures
‣ -n or --netflow – exports data from BlueField to a remote NetFlow collector

NetFlow collector UI example: (screenshot omitted; see the original guide)

Chapter 6. References
‣ /opt/mellanox/doca/examples/ar/src/ar.c
Mellanox Ethernet Network Equipment User Manual
SOLUTION BRIEF
Delivering In-Memory Computing Using Mellanox Ethernet Infrastructure and MinIO's Object Storage Solution

KEY BUSINESS BENEFITS
MinIO and Mellanox: Better Together
High performance object storage requires the right server and networking components. With industry-leading performance combined with the best innovation to accelerate data infrastructure, Mellanox provides the networking foundation needed to connect in-memory computing applications with MinIO high performance object storage. Together, they allow in-memory compute applications to access and process large amounts of data to provide high speed business insights.
Simple to Deploy, Simpler to Manage
MinIO can be installed and configured within minutes simply by downloading a single binary and executing it. The number of configuration options and variations has been kept to a minimum, resulting in near-zero system administration tasks and few paths to failure. Upgrading MinIO is done with a single command which is non-disruptive and incurs zero downtime. MinIO is distributed under the terms of the Apache* License Version 2.0 and is actively developed on GitHub. MinIO's development community starts with the MinIO engineering team and includes all of the 4,500 members of MinIO's Slack Workspace. Since 2015 MinIO has gathered over 16K stars on GitHub, making it one of the top 25 Golang* projects based on number of stars.

EXECUTIVE SUMMARY
Analytic tools such as Spark, Presto and Hive are transforming how enterprises interact with and derive value from their data. Designed to be in memory, these computing and analytical frameworks process volumes of data 100x faster than Hadoop Map/Reduce and HDFS, transforming batch processing tasks into real-time analysis. These advancements have created new business models while accelerating the process of digital transformation for existing enterprises.
A critical component in this revolution is the performance of the networking and storage infrastructure that is deployed in support of these modern computing applications. Considering the volumes of data that must be ingested, stored, and analyzed, it quickly becomes evident that the storage architecture must be both highly performant and massively scalable.
This solution brief outlines how the promise of in-memory computing can be delivered using high-speed Mellanox Ethernet infrastructure and MinIO's ultra-high performance object storage solution.

IN MEMORY COMPUTING
With data constantly flowing from multiple sources – logfiles, time series data, vehicles, sensors, and instruments – the compute infrastructure must constantly improve to analyze data in real time. In-memory computing applications, which load data into the memory of a cluster of servers thereby enabling parallel processing, are achieving speeds up to 100x faster than traditional Hadoop clusters that use MapReduce to analyze and HDFS to store data.
Although Hadoop was critical to helping enterprises understand the art of the possible in big data analytics, other applications such as Spark, Presto, Hive, H2O.ai, and Kafka have proven to be more effective and efficient tools for analyzing data. The reality of running large Hadoop clusters is one of immense complexity, requiring expensive administrators and a highly inefficient aggregation of compute and storage. This has driven the adoption of tools like Spark, which are simpler to use and take advantage of the massive benefits afforded by disaggregating storage and compute.
These solutions, based on low cost, memory-dense compute nodes, allow developers to move analytic workloads into memory where they execute faster, thereby enabling a new class of real-time, analytical use cases. These modern applications are built using cloud-native technologies and, in turn, use cloud-native storage. The emerging standard for both the public and private cloud, object storage is prized for its near-infinite scalability and simplicity, storing data in its native format while offering many of the same features as block or file. By pairing object storage with high speed, high bandwidth networking and robust compute, enterprises can achieve remarkable price/performance results.

DISAGGREGATE COMPUTE AND STORAGE
Designed in an era of slow 1GbE networks, Hadoop (MapReduce and HDFS) achieved its performance by moving compute tasks closer to the data. A Hadoop cluster often consists of many hundreds or thousands of server nodes that combine both compute and storage. The YARN scheduler first identifies where the data resides, then distributes the jobs to the specific HDFS nodes. This architecture can deliver performance, but at a high price, measured in low compute utilization, costs to manage, and costs associated with its complexity at scale. Also, in practice, enterprises do not experience high levels of data locality, with the result being suboptimal performance.
Due to improvements in storage and interconnect technology speeds, it has become possible to send and receive data remotely at high speeds with little (less than 1 microsecond) to no latency difference compared to storage that is local to the compute. As a result, it is now possible to separate storage from compute with no performance penalty. Data analysis is still possible in near real time because the interconnect between the storage and the compute is fast enough to support such demands.
By combining dense compute nodes, large amounts of RAM, ultra-high-speed networks and fast object storage, enterprises are able to disaggregate storage from compute, creating the flexibility to upgrade, replace, or add individual resources independently. This also allows for better planning for future growth, as compute and storage can be added independently and when necessary, improving utilization and budget control.
Multiple processing clusters can now share high performance object storage so that different types of processing, such as advanced queries, AI model training, and streaming data analysis, can run on their own independent clusters while sharing the same data stored on the object storage. The result is superior performance and vastly improved economics.

HIGH PERFORMANCE OBJECT STORAGE
With in-memory computing, it is now possible to process volumes of data much faster than with Hadoop Map/Reduce and HDFS. Supporting these applications requires a modern data infrastructure with a storage foundation that is able to provide both the performance required by these applications and the scalability to handle the immense volume of data created by the modern enterprise.
Building large clusters of storage is best done by combining simple building blocks together, an approach proven out by the hyper-scalers. By joining one cluster with many other clusters, MinIO can grow to provide a single, planet-wide global namespace.
MinIO's object storage server has a wide range of optimized, enterprise-grade features including erasure code and bitrot protection for data integrity, identity management, access management, WORM and encryption for data security, and continuous replication and lambda compute for dynamic, distributed data. MinIO object storage is the only solution that provides throughput rates over 100GB/sec and scales easily to store thousands of petabytes of data under a single namespace. MinIO runs Spark queries faster, captures streaming data more effectively, and shortens the time needed to test, train and deploy AI algorithms.

LATENCY AND THROUGHPUT
Industry-leading performance and IT efficiency combined with the best of open innovation assist in accelerating big data analytics workloads which require intensive processing. The Mellanox ConnectX® adapters reduce CPU overhead through advanced hardware-based stateless offloads and flow steering engines. This allows big data applications utilizing TCP or UDP over IP transport to achieve the highest throughput, allowing completion of heavier analytic workloads in less time for big data clusters, so organizations can unlock and efficiently scale data-driven insights while increasing application densities for their business.
Mellanox Spectrum® Open Ethernet switches feature consistently low latency and can support a variety of non-blocking, lossless fabric designs while delivering data at line-rate speeds. Spectrum switches can be deployed in a modern spine-leaf topology to efficiently and easily scale for future needs. Spectrum also delivers packet processing without buffer fairness concerns. The single shared buffer in Mellanox switches eliminates the need to manage port mapping and greatly simplifies deployment. In an object storage environment, fluid resource pools will greatly benefit from fair load balancing. As a result, Mellanox switches are able to deliver optimal and predictable network performance for data analytics workloads.
The Mellanox 25, 50 or 100G Ethernet adapters along with Spectrum switches result in an industry-leading end-to-end, high bandwidth, low latency Ethernet fabric. The combination of in-memory processing for applications and high-performance object storage from MinIO, along with the reduced latency and throughput improvements made possible by Mellanox interconnects, creates a modern data center infrastructure that provides a simple yet highly performant and scalable foundation for AI, ML, and Big Data workloads.

CONCLUSION
Advanced applications that use in-memory computing, such as Spark, Presto and Hive, are revealing business opportunities to act in real time on information pulled from large volumes of data. These applications are cloud native, which means they are designed to run on the computing resources in the cloud, a place where Hadoop HDFS is being replaced in favor of data infrastructures that disaggregate storage from compute. These applications now use object storage as the primary storage vehicle whether running in the cloud or on-premises.
Employing Mellanox networking and MinIO object storage allows enterprises to disaggregate compute from storage, achieving both performance and scalability.
By connecting dense processing nodes to MinIO object storage nodes with high performance Mellanox networking, enterprises can deploy object storage solutions that provide throughput rates over 100GB/sec and scale easily to store thousands of petabytes of data under a single namespace. The joint solution allows queries to run faster, captures streaming data more effectively, and shortens the time needed to test, train and deploy AI algorithms, effectively replacing existing Hadoop clusters with a data infrastructure solution, based on in-memory computing, that consumes a smaller data center footprint yet provides significantly more performance.

WANT TO LEARN MORE?
To learn more about object storage from MinIO, visit: https://min.io/
To learn more about the Mellanox end-to-end Ethernet storage fabric, see: /ethernet-storage-fabric/
Mellanox vMotion Acceleration Product Introduction
Accelerate vMotion: Migrate 23X Faster
Hyper-Converged vSphere® Networking Done Right
The Most Efficient vSphere, NSX and VSAN-based Data Center Solutions

Mellanox Key Functionality
• Virtualization: High Bandwidth, Low Latency, Virtual Functions, Certified SR-IOV, Inbox drivers
• SDN: Physical Function, Virtual Functions, Overlay Networks, Proven deployments
• Storage: RDMA Enabled, iSER Certified, Support for All-Flash devices, Inbox RoCE and iSER drivers

VMware
VMware provides a powerful, flexible, and secure foundation that adds agility and accelerates the transformation to hybrid cloud and software defined data centers (SDDC). VMware helps end-users run, manage, connect and secure applications and new workloads through virtualization, Software Defined Networks (SDN), and virtualized storage solutions that help with the growing needs and complexity of modern infrastructure.
Mellanox 10/25G Ethernet interconnect solutions enable unmatched competitive advantages in VMware environments by increasing the efficiency of overall server utilization and eliminating I/O bottlenecks to enable more virtual machines per server, faster migrations, and speedier access to storage. Explore this reference guide to learn more about how Mellanox key technologies can help improve efficiencies in your VMware environment.

Accelerate vMotion
VMware vMotion allows the movement of virtual machines from one host to another. Deploying Mellanox high-speed Ethernet can help decrease the amount of time it takes to migrate virtual machines. The results clearly show that increasing from 10 to 25G in an ESXi 6 environment reduced the time to transfer a VM: migration time dropped from 55 minutes to a little over 2 minutes, a 96% improvement. Mellanox Ethernet, RDMA, and iSER drivers are certified and ship in the box with vSphere.

VDI Deployments: Increase Virtual Desktops, Reduce CapEx Expense
Virtual desktop infrastructure (VDI) deployments have similar characteristics to cloud deployments, such as high virtual machine consolidation ratios, with typically hundreds of small to medium sized desktops consolidated on a single host. Network performance is important for user experience as well as yielding direct CapEx and OpEx savings. Savings can grow as you move to higher-performing networks. Below is an example of CapEx savings when moving from 10 to 25GE.

Hyper-Converged
Hyper-Converged Infrastructure (HCI) is a demanding environment for networking components. HCI consists of three software components (compute virtualization, storage virtualization and management), all of which require an agile and responsive network. Deploying larger network pipes of 10, or better, 25G helps, as do network adapters with offload capabilities, to optimize the performance and availability of synchronization and replication of virtualized workloads.

Figure: CapEx Analysis, 10G vs. 25G
Scalable from a Half Rack to Multiple Racks
▪ Half Rack: 12 nodes
▪ Full Rack: 24 nodes
▪ Pay As You Grow: 10 racks, up to 240 nodes

Deployment Config (cabling)
▪ 1GbE link: 1GbE Transceiver
▪ 25/10GbE link: QSFP to SFP+
▪ 100GbE link: QSFP to QSFP
▪ 100/40GbE link: QSFP to QSFP

Why Spectrum
▪ 2 switches in 1U
▪ Ideal storage/HCI port counts
▪ Zero packet loss
▪ Low latency
▪ RoCE optimized (NVMe-oF, Spark, SMB Direct, etc.)
▪ NEO for network automation/visibility
▪ Native SDK for containers
▪ Cost optimized
▪ Network OS alternatives

Spectrum Switches
▪ ½ 19" width, 1U height
▪ 18x10/25GbE + 4x40/100GbE
▪ 57W typical (ATIS)

Automated Network
Provisioning & Orchestration
▪ Zero-touch provisioning
▪ VLAN auto-provisioning
▪ Migrate VMs without manual configuration
▪ VXLAN/DCI support for VM migration across multiple datacenters for DR
Monitoring
▪ Performance monitoring
▪ Health monitoring
▪ Detailed telemetry
▪ Alerts and notifications

Increasing VMware Efficiency: Proven Higher Efficiency

Accelerate NSX
NSX services enable east-west routing within the SDDC and north-south routing to external networks, and require VXLAN segmentation, which can consume CPU cycles and diminish overall server efficiency. Mellanox supports VXLAN offloads to handle this processing, resulting in higher throughput and an over 50% reduction in CPU utilization.

Deliver 3X Efficiency with iSER
Storage virtualization requires an agile and responsive network. iSER accelerates workloads by using the iSCSI Extensions for RDMA. Using the iSER extension lowers latencies and CPU utilization to help keep pace with I/O requirements, providing a 70% improvement in throughput and a 70% reduction in latencies.

Fully Certified with EVO
VMware EVO SDDC provides a validated suite of interoperable, tested components to deliver a completely integrated system. This comprises fully qualified hardware components, including Mellanox switches and adapters, that are pre-built and pre-racked, providing an appliance-like experience that makes it easy for customers to deploy, operate and support. Mellanox leverages its relationship with Cumulus Linux to extend access from Layer 2 across Layer 3 network topologies.

Figure: Average CPU% per 1GbE of VXLAN Traffic
Mellanox Technologies Hadoop Solutions White Paper
WHITE PAPER: Hadoop® with Dell and Mellanox VPI Solutions
In collaboration with Dell

Storing and analyzing rapidly growing amounts of data via traditional tools introduces new levels of challenges to businesses, government and academic research organizations.
The Hadoop framework is a popular tool for analyzing large structured and unstructured data sets. Using Java-based tools to process data, a data scientist can infer users' churn pattern in retail banking, better recommend a new service to users of social media, optimize production lines based on sensor data, and detect a security breach in computer networks. Hadoop is supported by the Apache Software Foundation.
Hadoop workloads vary based on the target implementation and even within the same implementation. Designing networks to sustain this variety of workloads introduces challenges to legacy network designs in terms of bandwidth and latency requirements. Moving a terabyte of information can take several minutes using a 1 Gigabit network. Minutes-long operations are not acceptable in on-line user experience, fraud detection and risk management tools. A better solution is required.
Building a Hadoop cluster requires taking into consideration many factors such as disk capacity, CPU utilization, memory usage and networking capabilities.
Using legacy networks creates bottlenecks in the data flow. State-of-the-art CPUs can drive over 50 Gigabits-per-second, while disk controllers capable of driving 12 Gigabits-per-second are entering the market, and the result is more data trying to flow out of the compute node.
Using 40Gb Ethernet and FDR InfiniBand satisfies the dataflow requirements of high speed SAS controllers and Solid State Drives (SSDs); 10Gb Ethernet is becoming the entry-level requirement to handle the dataflow requirements of common spindle disk drives.
Scaling and capacity planning should be another point of consideration. While businesses grow linearly, their data grows exponentially at the same time. Adding more servers and storage should not require a complete re-do of the network; using edge switches and an easy-to-balance, flat network is a better approach.

Contents: Background; Mellanox Solutions for Apache Hadoop; Mellanox Unstructured Data Accelerator (UDA); Ethernet Performance; UDA Performance; Hardware; Software Requirements; Installation; Scaling the Cluster Size; High Availability; Appendix A: Setup Scripts; References

Mellanox Solutions for Apache Hadoop
Figure 1: Hadoop, 5 Nodes Deployment
In the above example, where nodes are connected with a FDR InfiniBand 56Gb/s fabric, the All-to-All available bandwidth will be 18.6Gb/s. Scaling to larger clusters is done in the same fashion, connecting ToR switches with enough bandwidth to satisfy node throughputs.
Figure 2: Mellanox FDR InfiniBand and/or 40Gb Ethernet Adapter
Figure 3: Mellanox QSFP Copper Cable
Figure 4: Mellanox 10Gb Ethernet Adapter
Figure 5: Mellanox SFP+ Copper Cable
Mellanox InfiniBand Product Guide
BROCHURE
InfiniBand Product Guide: End-to-End InfiniBand Solutions at HDR / EDR / FDR Speeds

InfiniBand Switches
From Switch-IB®-2 100Gb/s EDR to Mellanox Quantum™ 200Gb/s HDR InfiniBand, the Mellanox family of 1RU and modular switches delivers the highest density and performance. Featuring the in-network computing Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ technology, Mellanox's extensive switch portfolio enables compute clusters to operate at any scale, while reducing operational costs and infrastructure complexity. InfiniBand switches also leverage Mellanox's SHIELD™ (Self-Healing Interconnect Enhancement for Intelligent Datacenters), which increases data center network resiliency by 5000 times compared to other software-based solution options.

InfiniBand Adapters
Leveraging faster speeds and innovative In-Network Computing, Mellanox's award-winning HDR (200Gb/s), EDR (100Gb/s) and FDR (56Gb/s) InfiniBand adapters deliver the highest throughput and message rate in the industry. Providing best-in-class network performance, scale and efficiency, InfiniBand adapters enable extremely low latency and advanced application acceleration engines for High-Performance Computing, Machine Learning, Cloud, Storage, Databases and Embedded applications, reducing cost per operation and increasing ROI. Mellanox smart adapters deliver the highest performance and scalable connectivity for Intel, AMD, IBM Power, NVIDIA, Arm and FPGA-based compute and storage platforms.

InfiniBand Long Haul
Mellanox's MetroX®-2 systems extend the reach of InfiniBand to up to 40 kilometers, enabling native InfiniBand connectivity between remote data centers, between remote data center and storage infrastructures, or for disaster recovery. Delivering up to 100Gb/s data throughput, MetroX-2 enables native RDMA connectivity, high data throughput, advanced routing, and more, across distributed compute or storage platforms. MetroX-2 enables users to easily migrate application jobs from one InfiniBand center to another, or to combine the compute power of multiple remote data centers together for higher overall performance and scalability.

InfiniBand Gateway to Ethernet
The Mellanox Skyway™ 200 Gigabit HDR InfiniBand to Ethernet gateway appliance enables scalable and efficient connectivity from high performance, low-latency InfiniBand data centers to external Ethernet networks and infrastructures. Mellanox Skyway™ empowers InfiniBand-based high performance and cloud data centers to achieve the lowest interconnect latency, while providing a simple and cost-effective option to connect to remote Ethernet networks.

InfiniBand Cables and Transceivers
Mellanox LinkX® InfiniBand cables and transceivers are designed to maximize the performance of high-performance InfiniBand networks, delivering high-bandwidth, low-latency, and highly reliable and robust connectivity. To provide superior system performance, Mellanox ensures the highest quality in all LinkX products.

InfiniBand Telemetry and Software Management
Mellanox's comprehensive suite of network telemetry and management software provides an innovative application-centric approach to bridge the gap between servers, applications and fabric elements. Mellanox's UFM® (Unified Fabric Management) software includes fabric diagnostics, monitoring, alerting, provisioning and advanced features such as congestion monitoring and fabric segmentation and isolation.
Users can manage small to extremely large fabrics as a set of inter-related business entities, while also performing fabric monitoring and optimizing performance at the application-logical level rather than only at the individual port or device level.

InfiniBand Acceleration Software
The Mellanox HPC-X® ScalableHPC Toolkit is a comprehensive MPI and SHMEM/PGAS software suite for high performance computing environments. For scientific research and engineering simulations, the complete Mellanox HPC-X software toolkit, including the Mellanox UCX and FCA acceleration engines, provides enhancements that significantly increase the scalability and performance of message communications in the network. HPC-X enables the rapid deployment and delivery of maximum application performance without the complexity and costs of licensed third-party tools and libraries.
Mellanox Technologies and DataON Partner to Build Microsoft-Certified Hyper-Converged Cluster Appliances
SOLUTION BRIEF
DataON™ and Mellanox Empower Next-Gen Hyper-Converged Infrastructure

Hyper-converged infrastructure with these promises:
• Consolidating existing data center silos to streamline operations
• Simplifying the IT infrastructure for quick deployment and scale-out
• Improving agility to accommodate constantly changing business needs

• Industry Leading Application Performance – The S2D-3110 HCCA with four (4x) cluster nodes is capable of providing over 2.4 million IOPS (running VMFleet) using the latest all-flash NVMe™ based SSD technology to scale I/O intensive workloads.
• Storage and Network with SMB 3.0 over RDMA – Delivers the highest throughput, lowest latency and increased CPU efficiency.
• Hyper-Converged Scalability – Incremental compute, networking, and storage resources provide near-linear scalability. Each HCCA can also be expanded in capacity via 12G SAS JBODs for further storage expansion.
Mellanox BlueField Reference Platform
NVIDIA® Mellanox® BlueField® Reference Platform
DATA PROCESSOR PRODUCT BRIEF
Powerful & Flexible Reference Platform for a Wide Range of Applications Including Storage, Networking and Machine Learning
† For illustration only. Actual products may vary.

Today's network technologies drive OEMs to seek innovative, scalable and cost effective designs for dealing with the exponential growth of data. The Mellanox BlueField Reference Platform provides a multi-purpose, fully-programmable hardware environment for evaluation, development and running of software solutions, reducing time-to-market and increasing product development and runtime efficiency.
The reference platform delivers all the features of the BlueField Data Processing Unit (DPU) in convenient form factors, making it ideal for a range of software solutions for the most demanding markets. Features include two 100Gb/s Ethernet or InfiniBand interfaces, a 16-core BlueField processor, up to 512GB of RDIMM DDR4 memory, two PCIe x16 slots, and an NVMe-ready midplane for SSD connectivity.

BlueField Platform for Storage Appliances
Today's fast storage technologies drive storage OEMs to seek innovative, scalable and cost-effective designs for their applications. Powered by the BlueField DPU, the BlueField 2U Reference Platform offers a unique combination of on-board capabilities and NVMe-readiness, creating an ideal environment for storage appliance development.
Platform Highlights
• Leverages the processing power of Arm® cores for storage applications such as All-Flash Arrays using NVMe-oF, Ceph, Lustre, iSCSI/TCP offload, Flash Translation Layer, RAID/Erasure coding, data compression/decompression, and deduplication.
• In high-performance storage arrays, BlueField serves as the system's main CPU, handling storage controller tasks and traffic termination.
• Provides up to 16 front-mounted 2.5" disk drive bays that are routed to an NVMe-ready midplane within the enclosure. The system can be configured as a storage JBOF with 16 drives using PCIe Gen 3.0 x2, or 8 drives with PCIe Gen 3.0 x4 lanes.

BlueField Platform for Machine Learning
The BlueField 2U Reference Platform supports connectivity of up to 2 GPUs via its PCIe x16 Gen 3.0 interface, providing cost effective and integrative solutions for Machine Learning appliances. By utilizing RDMA and RoCE technology, the BlueField network controller data path hardware delivers low latency and high throughput with near-zero CPU cycles. The platform also offers GPUDirect® RDMA technology, enabling the most efficient data exchange between GPUs and with the Mellanox high speed interconnect, optimizing real-time analytics and data insights.

Support
For information about Mellanox support packages, please contact your Mellanox Technologies sales representative or visit our Support Index page.

Table 1 – Part Numbers and Descriptions
Product Family: BF1200
OPN: MBE1200A-BN1
Description: 2U BlueField Reference Platform, BlueField E-Series, Crypto disabled.
A storage controller platform with the option for up to 16 SSDs. (SSDs are not included.)

NVMe-Ready Midplane
A modular chassis midplane supports up to eight 2.5" SSDs, which can be duplicated to 16 SSDs. The midplane also supports hot swappable SSD cards, an I2C switch to enable connectivity of the SMBus from the platform Baseboard Management Controller (BMC) to each SSD, and an on-board clock generator/buffer.

Software Support
The BlueField Reference Platform comes pre-installed with a UEFI-based bootloader and BlueOS, a Linux reference distribution targeted at BlueField-based embedded systems. Based on the Yocto Project Poky distribution, BlueOS is highly customizable for meeting specific Linux distribution requirements through the OpenEmbedded Build System. Yocto produces an SDK with an extremely flexible cross-build environment, ideal for building and running applications seamlessly for the Arm BlueField target system on any x86 server running any Linux distribution. Mellanox OFED and NVMe-oF support is installed by default. The reference platform also provides a BMC running OpenBMC to manage the entire system.
Note: Reference platform customers can run the Linux distribution of their choice.

2U Reference Platform Features
Enclosure Specifications
– 2U 19"
– ATX form factor motherboard
– BlueField DPU with 16 Armv8 A72 cores (64-bit)
– Two internal x16 PCIe Gen3.0/4.0 expansion connectors
– Dual-port ConnectX-5 Virtual Protocol Interconnect® (VPI) interface
  • Ethernet: 40/50/100GbE QSFP ports
  • InfiniBand: FDR/EDR QSFP ports
  • 10/25Gb/s available with QSA28
– Two PCIe risers enabling 2.5" NVMe SSD disk support
  • 8 x PCIe Gen3 x4 lanes
  • 16 x PCIe Gen3 x2 lanes
– 1 x 850W FRU power supply
– Integrated BMC
– 32GB eMMC Flash memory for software
– 3 x 80mm fan cartridges
DRAM DIMM Support
– 4 sockets for DRAM DIMMs
– Up to 512GB total memory
– NVDIMM-N support

1U Reference Platform Features
Enclosure Specifications
– 1U 19"
– ATX form factor motherboard
– BlueField DPU with 16 Armv8 A72 cores (64-bit)
– One internal x16 PCIe Gen3.0 expansion connector
– Dual-port ConnectX-5 Virtual Protocol Interconnect® (VPI) interface
  • Ethernet: 40/50/100GbE QSFP ports
  • InfiniBand: FDR/EDR QSFP ports
  • 10/25Gb/s available with QSA28
– 1 x 400W power supply
– Integrated BMC
– 32GB eMMC Flash memory for software
– 3 x 80mm fan cartridges
DRAM DIMM Support
– 4 sockets for DRAM DIMMs
– Up to 512GB total memory
– NVDIMM-N support

Figure 1. 8 SSD Configuration (2U platform)
Figure 2. 16 SSD Configuration (2U platform)
†Figure 3. 2U Reference Platform
Mellanox SX1012 ToR Switch Product Introduction
– IPv6 Ready
– IPv6 IPsec

KEY FEATURES
– High Density
  • 12 40/56GbE ports in 1RU
  • Up to 48 10GbE ports
– Lowest Latency
  • 220nsec for 40GbE
  • 270nsec for 10GbE
– Lowest Power
  • Maximum power consumption: <100 Watts
SX1012 12-Port 40/56GbE, 48-Port 10GbE, High Performance and SDN in Small Scale
FEATURES
LAYER 2 FEATURE SET
– 1GbE, 10GbE, 40GbE, 56GbE
– 48K L2 Forwarding Entries
– Static MAC
– Jumbo Frames (9216 bytes)
– VLAN 802.1Q (4K)
– 802.1w Rapid Spanning Tree Protocol
End-to-End
Mellanox is a leading supplier of end-to-end connectivity solutions. The SX1012, together with Mellanox cables and adapters, brings the networking industry's first end-to-end 40/56GbE.
Mellanox WinOF for Windows Installation Guide
Mellanox Technologies
Mellanox WinOF for Windows Installation Guide
Rev 2.00 GA
Document Number: 2956

1 Overview
This document describes how to install and test Mellanox WinOF for Windows on a single host machine with Mellanox InfiniBand hardware installed. The document includes the following sections:
• This overview
• "Web Page and Documentation"
• "Hardware and Software Requirements"
• "Identifying Mellanox HCAs on Your Machine"
• "Downloading Mellanox WinOF"
• "Installing Mellanox WinOF"
• "Installation Results"
• "Assigning IP to IPoIB Adapters After Installation"
• "Modifying Configuration After Installation"
• "Updating Firmware After Installation"
• "Uninstalling Mellanox WinOF"

2 Web Page and Documentation
Please visit the Mellanox WinOF Web page at /products/MLNX_WinOF.php to download the package and to reference documentation such as release notes, user manuals, FAQ, troubleshooting, and archive. After installing Mellanox WinOF (see "Installing Mellanox WinOF" below), you will find release notes and user manuals under "Program Files\Mellanox\MLNX_WinOF\Documentation".

3 Hardware and Software Requirements
3.1 Hardware Requirements
Required Disk Space for Installation
• 100MB
Platforms
Any computer system with an x86 or x64 CPU architecture, and with a PCI adapter card based on one of the following Mellanox Technologies InfiniBand HCA devices:
• ConnectX® (firmware: fw-25408 v2.5.000 or later)
• InfiniHost® III Ex (firmware: fw-25218 v5.3.000 or later for Mem-Free cards, and fw-25208 v4.8.200 or later for cards with memory)
• InfiniHost® III Lx (firmware: fw-25204 v1.2.000 or later)
• InfiniHost® (firmware: fw-23108 v3.5.000 or later)
Note: For the list of supported architecture platforms, please refer to the Mellanox WinOF for Windows Release Notes file under the "Documentation" folder.
3.2 Software Requirements
Installer Privileges
The installation requires administrator privileges on the target machine.
Operating Systems
• Windows XP
• Windows Server 2003
• Windows Server 2008
Note: For the list of supported operating system distributions and kernels, please refer to the Mellanox WinOF for Windows Release Notes file under the "Documentation" folder.

4 Identifying Mellanox HCAs on Your Machine
Step 1. Check the Device Manager display for PCI devices. If the device driver has not been installed, check under Other Devices.
Note: If you cannot find any PCI device, make sure that the HCA card(s) is correctly installed in the PCI slot.
If no PCI device is identified after your check, try installing the HCA card into a different PCI slot.
Step 2. Select a PCI Device entry, right-click and select Properties to display the PCI Device Properties window.
Step 3. Click the Details tab and select Device Instance ID from the Property pull-down menu.
Step 4. In the Value display box, check the fields VEN and DEV (fields are separated by '&'). In the display example above, notice the sub-string "PCI\VEN_15B3&DEV_6340": VEN is equal to 0x15B3 – this is the Vendor ID of Mellanox Technologies; and DEV is equal to 6340 – this is a valid Mellanox Technologies PCI Device ID.
Note: The list of Mellanox Technologies PCI Device IDs can be found in the PCI ID repository at http://pci-ids.ucw.cz/read/PC/15b3.
Step 5. If the PCI device does not have a Mellanox HCA ID, return to Step 2 to check another device.
Note: If you cannot find any Mellanox HCA device listed as a PCI device, make sure that the HCA card(s) is correctly installed in the PCI slot. If the HCA device remains unidentified, try installing the adapter card into a different PCI slot.

5 Downloading Mellanox WinOF
Download the appropriate MSI to your host from the Mellanox WinOF Web page at /products/MLNX_WinOF.php. The MSI's name has the format MLNX_WinOF_<arch>_<version>.msi, where arch can be either x86 or x64.

6 Installing Mellanox WinOF
This section provides instructions for the following types of installation:
• "Attended Installation" – an installation procedure that requires frequent user intervention.
• "Unattended Installation" – an automated installation procedure that requires no user intervention.
• "WDS Installation" – intended for Windows HPC Server 2008 clusters.

6.1 Attended Installation
Note: The installation requires administrator privileges on the target machine.
For operating systems other than Windows 2008, double-click the MSI and follow the GUI instructions to install Mellanox WinOF.
For Windows 2008, install the MSI by opening a CMD console (click Start --> Run and enter 'cmd') and entering the following command:
> msiexec.exe /i MLNX_WinOF_<arch>_<version>.msi
The following is an example of a Mellanox WinOF x64 installation session.
Step 1. Click Next in the Welcome screen.
Step 2. Select the "accept" radio button and click Next.
Step 3. Select the destination folder for Mellanox WinOF and click Next.
Step 4. Select the type of installation: Typical or Custom.
If you select Typical, click Next and advance to the next step.
If you select Custom, click Next and you will get the screen below. To install/remove a component, left-click the component and enable/disable it for installation. To continue, click Next and advance to the next step.
Step 5. To install the chosen components, click Install.
Step 6. In the following window, enable the components you need (if any). To complete the installation, click Finish.
(See the figure below.)
Note: Even if you do not enable any of the displayed components in this step, you will be able to enable components after the installation completes – see the file MLNX_WinOF_README.txt.

6.2 Unattended Installation
Note: The installation requires administrator privileges on the target machine.
To perform a silent/unattended installation, open a CMD console (click Start -> Run and enter 'cmd') and enter the following command:
> msiexec.exe /i MLNX_WinOF_<arch>_<version>.msi /qn [Parameter]
where Parameter is:
ADDLOCAL – Provides the list of components (separated by commas) to install. Available components are: Driver, IPoIB, ND, WSD, SDP, OpenSM, SRP, Tools, Docs, and SDK. You can also provide ADDLOCAL=ALL to install all components.
Note: If you do not provide the ADDLOCAL parameter, the script will install the following list of default components: Driver, IPoIB, ND, WSD, OpenSM, Tools, Docs, and SDK.
WSDENABLE – Enable WSD by adding the parameter WSDENABLE=1. (The installation procedure installs WSD but does not enable it.)
NDENABLE – Enable ND by adding the parameter NDENABLE=1. (The installation procedure installs ND as part of the IPoIB component but does not enable it.)
Note: For all command options, enter 'msiexec.exe /?'.
Usage Examples
• The following command installs MLNX_WinOF in the default configuration:
> msiexec /i MLNX_WinOF_x86_<ver>.msi /qn
• The following command installs MLNX_WinOF with all the available components:
> msiexec /i MLNX_WinOF_x86_<ver>.msi /qn ADDLOCAL=ALL
• The following command installs MLNX_WinOF with the default components plus SRP:
> msiexec /i MLNX_WinOF_x86_<ver>.msi /qn ADDLOCAL=Driver,IPoIB,ND,WSD,OpenSM,SRP,Tools,Docs,SDK
• The following command installs MLNX_WinOF in the default configuration and enables WSD:
> msiexec /i MLNX_WinOF_x86_<ver>.msi /qn WSDENABLE=1

6.3 WDS Installation
To perform a WDS installation for a Windows HPC Server 2008 cluster, follow the steps below.
Step 1. Extract the package Mellanox_WinOF_x64_<ver>_INF.zip to a directory on the head node.
Step 2. On the head node, click Start --> All Programs --> Microsoft HPC Pack --> HPC Cluster Manager. Select Configuration in the navigation pane and then select To-do List. Next, click "Manage drivers" and the following dialog will be displayed.
Step 3. Click "Add" and navigate in the Open dialog to the directory chosen in Step 1. Then go to the INF directory.
Step 4. Select the listed INF files and click "Open" to add the files.
Step 5. Click Close in the "Manage drivers" dialog.
Step 6. To enable ND, perform the following steps. Otherwise, skip to the next step.
a. Select Node Templates in the Configuration pane.
b. Right-click the desired Node Template and select "edit". An editor window is displayed (see below).
c. Click Add Task --> Deployment --> Run OS command.
d. Locate the new Run OS command listed under the Deployment section in the editor. Next, in the Optional pane set ContinueOnFailure to True and enter the following text in the Description field: "NetworkDirect registration command".
e. In the Required pane of the editor, enter the following text in the Command field: "ndinstall -i".
f. Click Save. The editor window will close.
Step 7. Select "Node Management" in the navigation pane of HPC Cluster Manager.
Step 8. Right-click the desired node and select "Assign Node Template".
The following dialog will be displayed.
Step 9. Select the correct node template and click OK to start MLNX_WinOF installation on the node.

7 Installation Results
Hardware
Displaying the Device Manager will show the Mellanox HCA devices, the Mellanox InfiniBand fabric, and an IPoIB (network) device for each InfiniBand port.
Software
• The MLNX_WinOF package is installed under the directory selected in Step 3 of Section 6.1.
• OpenSM is installed as a disabled Windows service. To enable it, enter at the command line:
> sc start opensm

8 Assigning IP to IPoIB Adapters After Installation
By default, your machine is configured to obtain an automatic IP address via a DHCP server. In some cases, the DHCP server may require the MAC address of the network adapter installed in your machine. To obtain the MAC address, open a CMD console and enter the command 'ipconfig /all'; the MAC address is displayed as "Physical Address".
To assign a static IP address to an IPoIB adapter after installation, perform the following steps:
Step 1. Open the Network Connections window. Locate the Local Area Connections named Mellanox IPoIB Adapter <#>. Each InfiniBand port has one IPoIB Adapter.
Step 2. Right-click a Mellanox Local Area Connection and left-click Properties.
Step 3. Select Internet Protocol (TCP/IP) from the scroll list and click Properties.
Step 4. Select the "Use the following IP address:" radio button and enter the desired IP information. Click OK when you are done.
Step 5. Close the Local Area Connection dialog.
Step 6. Verify the IP configuration by running 'ipconfig' from a CMD console.
> ipconfig
...
Ethernet adapter Local Area Connection 3:
   Connection-specific DNS Suffix . :
   IP Address. . . . . . . . . . . . : 11.4.3.204
   Subnet Mask . . . . . . . . . . . : 255.0.0.0
   Default Gateway . . . . . . . . . :
...

9 Modifying Configuration After Installation
9.1 Modifying Mellanox HCA Configuration
To modify the HCA configuration after installation, perform the following steps:
a. Open the Registry editor by clicking Start -> Run and entering 'regedit'.
b. In the navigation pane, expand HKEY_LOCAL_MACHINE -> SYSTEM -> CurrentControlSet -> Services.
c. Expand (in the navigation pane) the HCA driver service entry:
   - 'mthca' for the InfiniHost family
   - 'mlx4_hca' and 'mlx4_bus' for the ConnectX family
d. Click the Parameters entry in the expanded driver service entry to display HCA parameters.
e. Double-click the desired HCA parameter and modify it. Repeat this step for all the parameters you wish to modify.
f. Close the Registry editor after completing all modifications.
g. Open Device Manager and expand the correct InfiniBand Channel Adapters entry (i.e., the adapter with modified parameters).
h. Right-click the expanded HCA entry and left-click Disable. This disables the device.
i. Right-click the expanded HCA entry and left-click Enable.
This re-enables the device.
Note: For the changes to take effect, you must disable and re-enable the HCA (steps h and i above).
9.2 Modifying IPoIB Configuration
To modify the IPoIB configuration after installation, perform the following steps:
a. Open Device Manager and expand Network Adapters in the device display pane.
b. Right-click the Mellanox IPoIB Adapter entry and left-click Properties.
c. Click the Advanced tab and modify the desired properties.
Note: The IPoIB network interface is automatically restarted once you finish modifying IPoIB parameters.
Note: You need to restart opensm after modifying the IPoIB configuration.

10 Updating Firmware After Installation
The following steps describe how to burn new firmware downloaded from Mellanox Technologies' Web pages under /support/firmware_download.php.
Step 1. Install the firmware tools package, MFT for Windows (WinMFT), on your machine. You can download it from /products/management_tools.php. Please also check the documentation on the same Web page.
Step 2. Open a CMD console. (Click Start --> Run and enter 'cmd'.)
Step 3. Start mst:
> sc start mst
SERVICE_NAME: mst
        TYPE               : 1  KERNEL_DRIVER
        STATE              : 4  RUNNING (STOPPABLE, NOT_PAUSABLE, IGNORES_SHUTDOWN)
        WIN32_EXIT_CODE    : 0  (0x0)
        SERVICE_EXIT_CODE  : 0  (0x0)
        CHECKPOINT         : 0x0
        WAIT_HINT          : 0x0
        PID                : 0
        FLAGS              :
Step 4. Identify your target InfiniBand device for firmware update.
a. Get the list of InfiniBand device names on your machine.
> mst status
Found 2 devices:
mt25418_pciconf0
mt25418_pci_cr0
b. Your InfiniBand device is the one with the postfix "_pci_cr0". In the example listed above, this will be mt25418_pci_cr0. Use the string "mtXXXXX" to identify the device type by checking the Web page http://pci-ids.ucw.cz/read/PC/15b3. In the example above, mtXXXXX=mt25418, and the device is a ConnectX IB.
c. Now go to the Mellanox firmware download page at /support/firmware_download.php.
d. Go to the correct firmware download page according to the device type identified in step (b) above.
e. Run 'vstat' to obtain the PSID of your HCA card. The PSID is a unique identifier for the HCA card.
Step 5. Using the PSID obtained in Step 4(e), download the appropriate firmware binary image (*.bin.zip) and unzip it.
Step 6. Burn the firmware image using the flint utility (part of your installed WinMFT).
Note: Make sure that you burn the correct binary image to your HCA card. Burning the wrong image may cause severe firmware corruption. Please review Step 4 and Step 5 above.
> flint -d mt25418_pci_cr0 -image <image>.bin burn
Note: You may need to run 'unzip' on the downloaded firmware image prior to the burn operation.
Step 7. Reboot your machine after the firmware burning is completed.

11 Uninstalling Mellanox WinOF
Attended Uninstall
To uninstall MLNX_WinOF on a single node, perform one of the following options:
a. Click Start -> Programs -> Mellanox -> MLNX_WinOF -> Uninstall MLNX_WinOF
b. Click Start -> Control Panel -> Add Remove Programs -> MLNX_WinOF -> Remove
Unattended Uninstall
To uninstall MLNX_WinOF in unattended mode, open a CMD console and perform one of the following procedures:
a. If the MSI that was used to install MLNX_WinOF is available on your machine, enter the following command:
> msiexec.exe /x MLNX_WinOF_<arch>_<version>.msi /qn /forcerestart
b. Obtain the MLNX_WinOF GUID (product code) by left-clicking Start -> Programs -> Mellanox -> MLNX_WinOF. Now right-click Uninstall MLNX_WinOF and select Properties, then copy the GUID from the "Target" entry.
The GUID is the hexadecimal string that appears after '/x'. To uninstall the MSI, enter the following command:
> msiexec.exe /x <GUID> /qn /forcerestart
Note: The '/forcerestart' parameter forces the machine to restart after uninstalling the MLNX_WinOF MSI. This is the recommended action for a complete uninstall procedure.
Note: For all command options, enter 'msiexec.exe /?'.
Mellanox Announces the New ConnectX-4 Adapter
November 20, 2014 - Mellanox (NASDAQ: MLNX), the world's leading supplier of end-to-end interconnect solutions for high-performance computing and data centers, today announced the ConnectX-4 single/dual-port 100Gb/s Virtual Protocol Interconnect (VPI) network adapter. The adapter is the final component of the industry's first complete end-to-end 100Gb/s InfiniBand interconnect solution (Mellanox previously announced 100Gb switches and cables).
The ConnectX-4 adapter doubles the throughput of the previous generation, improving the performance and reliability of HPC, cloud, Web 2.0 and enterprise applications while lowering latency, making it better suited to real-time processing requirements.
Mellanox's ConnectX-4 VPI adapter delivers 10, 20, 25, 40, 50, 56 and 100Gb/s throughput, supports the InfiniBand and Ethernet standard protocols, and offers the flexibility to work with a variety of CPU and GPU architectures, such as x86, GPU, POWER, ARM and FPGA platforms.
ConnectX-4 can process 150 million messages per second with a latency of only 0.7 microseconds, and integrates intelligent acceleration engines such as RDMA, GPUDirect and SR-IOV. These capabilities enable the most efficient compute and storage platforms.
"Large-scale clusters demand extremely low latency and extremely high bandwidth, so the requirements are very high," said Jorge Vinals, director of the Minnesota Supercomputing Institute at the University of Minnesota.
"Mellanox's ConnectX-4 provides us with node-to-node communication and real-time data retrieval capabilities, allowing us to build the institute's EDR InfiniBand cluster into one of the top computing clusters in the United States.
"With 100Gb/s performance, the large-scale EDR InfiniBand cluster will make a major contribution to research at the University of Minnesota."
"IDC expects adoption of 100Gb/s interconnect solutions to take off in 2015," said Steve Conway, IDC research vice president for high performance computing. "Data plays an increasingly important role in today's world, and to remain competitive most HPC data centers need powerful interconnects with high bandwidth and low latency."
Performance of Mellanox ConnectX InfiniBand Adapters on Multi-core Architectures
Performance of Mellanox ConnectX Adapter on Multi-core Architectures Using InfiniBand

Contents: Abstract; Introduction; Overview of ConnectX Architecture; Performance Results; Acknowledgments; For more information

Abstract
Multi-core platforms are becoming standard in compute cluster environments. The performance of the next generation host channel adapter (HCA) from Mellanox, called ConnectX, achieves more than six times better latency over the previous generation InfiniBand HCA when utilizing all cores in a quad-core dual-socket node. This performance improvement demonstrates that ConnectX is well suited for multi-core systems across all HP product lines.

Introduction
Multi-core, where two or more computational engines (cores) are contained within a single processor, is the dominant architecture in compute cluster environments. Multi-core allows more computational processing power per server (when measured in GFLOP/S); however, it can also create bottlenecks in a parallel application running across multiple servers by placing additional load on the interconnect adapters. Assuming one process per core, a single-core dual-socket server would only create interconnect traffic for two processes, whereas the same dual-socket server with quad-cores would create interconnect traffic for eight processes. If processes primarily communicate off host, one can easily see how the interconnect adapter could become a bottleneck.
Early in 2007, the HP High Performance Computing (HPC) Division launched its Multi-core Optimization Program. The program's goal is to investigate and implement performance improvement techniques for HPC applications on HP servers that use multi-core processors. This paper looks at the multi-core performance of the next generation interconnect adapter from Mellanox called ConnectX, an adapter that is part of the program.

Overview of ConnectX Architecture
ConnectX is the fourth generation server and storage adapter architecture from Mellanox Technologies. It supports eight lane PCI Express v1.1 and 2.0 interfaces in a dual port package. Either port can be configured as a 4x InfiniBand (IB) port or a 10 Gigabit Ethernet port, but this report only discusses IB performance.
ConnectX implements a layered architecture to provide maximum flexibility and offers up to 16 million flow interfaces for traffic class assignment. Hardware-based forward and backward congestion notification mechanisms avoid congestion via flow control.
Figure 1. The ConnectX HCA architecture.
ConnectX also uses the same silicon in adapters regardless of whether they are used in blades or rack-mounted servers, providing uniform interconnect performance across product lines.

Performance Results
In order to isolate the impact of multi-core architecture on HCA performance, we used the Multiple Latency (multilat.c) benchmark from the Network-Based Computing Laboratory at Ohio State University. This benchmark was designed to evaluate aggregate performance between multiple pairs of processes. The processes execute a number of simultaneous "pingpongs" via matching MPI_Send/MPI_Recv pairs.
We ran this benchmark on the following two configurations:
1) an HP BL460c dual-socket quad-core Clovertown system with Infinihost III Lx 4x DDR IB adapters
2) an HP BL460c dual-socket quad-core Clovertown system with ConnectX using 4x DDR IB.
In both tests the blades were each in a single enclosure, resulting in a single switch hop.
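For reference, the following is a minimal sketch of the kind of paired ping-pong measurement that multilat.c performs: half of the ranks exchange small messages with matching ranks in the other half, all pairs at once, and the one-way latency is taken as half of the average round-trip time. This is not the OSU benchmark source; the message size, iteration count, pairing scheme, and launch command shown are illustrative assumptions.

/* pairwise_pingpong.c -- minimal multi-pair MPI ping-pong latency sketch.
 * Illustrative only: message size, iteration count, and pairing are assumptions.
 * Launch with an even number of ranks, half on each node, for example:
 *   mpirun -np 16 -host node1,node2 ./pairwise_pingpong     (hostnames hypothetical) */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_SIZE   8        /* bytes per message (small, latency-bound) */
#define WARMUP     100
#define ITERATIONS 10000

int main(int argc, char **argv)
{
    int rank, size;
    char buf[MSG_SIZE] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size % 2) {
        if (rank == 0) fprintf(stderr, "run with an even number of ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* Rank r in the lower half pairs with rank r + size/2 in the upper half,
     * so all pairs exchange messages across the interconnect simultaneously. */
    int half = size / 2;
    int peer = (rank < half) ? rank + half : rank - half;

    double t_start = 0.0;
    for (int i = 0; i < WARMUP + ITERATIONS; i++) {
        if (i == WARMUP) {                 /* start timing after warm-up */
            MPI_Barrier(MPI_COMM_WORLD);
            t_start = MPI_Wtime();
        }
        if (rank < half) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
        }
    }
    /* One-way latency in microseconds: half of the average round-trip time. */
    double latency = (MPI_Wtime() - t_start) / ITERATIONS / 2.0 * 1e6;

    double max_latency;
    MPI_Reduce(&latency, &max_latency, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("%d simultaneous pairs, worst one-way latency: %.2f usec\n", half, max_latency);

    MPI_Finalize();
    return 0;
}

As more cores (and therefore more simultaneous pairs) are used per node, any serialization in the adapter shows up directly as growth in the reported latency, which is the effect the figures below compare between the two adapters.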
Figure 2. Multi-core performance on Infinihost III Lx
Figure 2 shows the measured latency on the Infinihost III Lx configuration when running simultaneous pingpongs between two nodes as the number of cores utilized per node increases. The plot clearly shows that although latency is less than 4 μsec when only using one core per node, performance degrades to over 12 μsec when using all cores in a node.
Figure 3. Multi-core performance on ConnectX
In marked contrast to this behavior, Figure 3 shows the same benchmark results when using ConnectX. Note the scale for the latency axis is drastically reduced, and the measured latency remains well under 2 μsec regardless of the number of cores used. This shows that the architectural enhancements in ConnectX directly result in vastly improved multi-core performance.
To show the improvement between the previous generation adapter and ConnectX, the eight-core results are plotted using the same scale in Figure 4. It is clear ConnectX provides over six times improvement in short message latency when utilizing all cores in a quad-core dual-socket node.
Figure 4. Performance comparison of Infinihost III Lx and ConnectX using all eight cores in the blade.
Although these results are quite dramatic, the question remains how this translates into real application performance on multi-core systems. Figure 5 contains results from the two popular ISV applications LS-DYNA and ABAQUS. The benchmark configuration consisted of 64 BL460c dual-socket Intel Xeon 5160 nodes. The benchmark was first run using Infinihost III Lx adapters. These adapters were then removed and replaced with ConnectX adapters. The benchmark cases were rerun on the exact same system, allowing complete isolation of HCA effects. All benchmark cases were run utilizing all cores on a node.
The LS-DYNA benchmark was LSTC LS-DYNA V971.7600. This is the 3-vehicle model of 791,780 elements, simulating the crash of 3 vehicles for 150 msec, run in single precision floating point. The input data set is provided on the TOPCRUNCH public performance site.
The ABAQUS benchmark was SIMULIA Abaqus Explicit V6.6, running three of the ISV standard performance benchmarks (/support/v66/v66_performance.html). Data set E2 is a simplified 45,785-element model of a cell phone impacting a fixed rigid floor. Data set E3 consists of forming a sheet metal part by the deep drawing process with 34,540 elements. Data set E5 consists of a stiffened steel plate subjected to a high intensity blast load with 50,000 elements.
According to Figure 5, all cases except Abaqus E5 show a significant speedup when moving to ConnectX, even though the benchmark platform was only dual-core. One expects further increases when running on a quad-core system.
Figure 5. Performance comparison of Infinihost III Lx and ConnectX for LS-DYNA and Abaqus

Acknowledgments
The idea for this project originated in HP's High Performance Computing Division. It is one of the results of HP's Multi-Core Optimization Program, which seeks ways to improve total application performance and per-core application performance on servers using multi-core processors.

For more information
ConnectX Architecture Brief, Mellanox Technologies: /pdf/products/connectx_architecture.pdf
Shainer, Gilad: "Connectivity in the Multi-Core Environment", HPC Forum: /tech_gilad_shainer_1.htm
MVAPICH: MPI over InfiniBand and iWARP, Network-Based Computing Laboratory, Department of Computer Science and Engineering, Ohio State University: /benchmarks/
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
AMD and AMD Opteron are trademarks of Advanced Micro Devices, Inc. Intel and Xeon are registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Itanium is a trademark or registered trademark of Intel Corporation or its subsidiaries in the United States and other countries. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Linux is a U.S. registered trademark of Linus Torvalds.
Mellanox OFED for FreeBSD for ConnectX-4/ConnectX-4 Lx/ConnectX-5/ConnectX-6 Release Notes, Rev 3.5.1

Mellanox Technologies, 350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085, U.S.A.
Tel: (408) 970-3400  Fax: (408) 970-3403
© Copyright 2019. Mellanox Technologies Ltd. All Rights Reserved.
Mellanox®, Mellanox logo, Connect-IB®, ConnectX®, CORE-Direct®, GPUDirect®, LinkX®, Mellanox Multi-Host®, Mellanox Socket Direct®, UFM®, and Virtual Protocol Interconnect® are registered trademarks of Mellanox Technologies, Ltd. For the complete and most updated list of Mellanox trademarks, visit /page/trademarks. All other trademarks are property of their respective owners.
NOTE: THIS HARDWARE, SOFTWARE OR TEST SUITE PRODUCT ("PRODUCT(S)") AND ITS RELATED DOCUMENTATION ARE PROVIDED BY MELLANOX TECHNOLOGIES "AS-IS" WITH ALL FAULTS OF ANY KIND AND SOLELY FOR THE PURPOSE OF AIDING THE CUSTOMER IN TESTING APPLICATIONS THAT USE THE PRODUCTS IN DESIGNATED SOLUTIONS. THE CUSTOMER'S MANUFACTURING TEST ENVIRONMENT HAS NOT MET THE STANDARDS SET BY MELLANOX TECHNOLOGIES TO FULLY QUALIFY THE PRODUCT(S) AND/OR THE SYSTEM USING IT. THEREFORE, MELLANOX TECHNOLOGIES CANNOT AND DOES NOT GUARANTEE OR WARRANT THAT THE PRODUCTS WILL OPERATE WITH THE HIGHEST QUALITY. ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT ARE DISCLAIMED. IN NO EVENT SHALL MELLANOX BE LIABLE TO CUSTOMER OR ANY THIRD PARTIES FOR ANY DIRECT, INDIRECT, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES OF ANY KIND (INCLUDING, BUT NOT LIMITED TO, PAYMENT FOR PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY FROM THE USE OF THE PRODUCT(S) AND RELATED DOCUMENTATION EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Contents: Release Update History; 1. Introduction (1.1 Supported Platforms and Operating Systems, 1.2 Supported Adapters Firmware Versions); 2. Changes and New Features in Rev 3.5.1; 3. Known Issues; 4. Bug Fixes; 5. Change Log History.

Release Update History (Table 1)
Rev 3.5.1 - May 2, 2019 - Initial release of this version.

1 Introduction
These are the release notes for Mellanox Technologies' FreeBSD Rev 3.5.1 driver kit for Mellanox ConnectX®-4, ConnectX®-4 Lx, ConnectX®-5, and ConnectX®-5 Ex adapter cards, supporting the following uplinks to servers (driver name: mlx5):
• ConnectX®-4 - InfiniBand: SDR, QDR, FDR, FDR10, EDR; Ethernet: 1GigE, 10GigE, 25GigE, 40GigE, 50GigE, 56GigE*, and 100GigE
• ConnectX®-4 Lx - Ethernet: 1GigE, 10GigE, 25GigE, 40GigE, and 50GigE
• ConnectX®-5 / ConnectX®-5 Ex - InfiniBand: SDR, QDR, FDR, FDR10, EDR; Ethernet: 1GigE, 10GigE, 25GigE, 40GigE, 50GigE, and 100GigE
• ConnectX®-6 [beta] - InfiniBand: SDR, EDR, HDR; Ethernet: 1GigE, 10GigE, 25GigE, 40GigE, 50GigE, and 100GigE
* 56GbE is a Mellanox proprietary link speed and can be achieved only when connecting a Mellanox adapter card to a Mellanox SX10XX series switch or to another Mellanox adapter card.

1.1 Supported Platforms and Operating Systems (Table 2)
The following OS is supported in Mellanox OFED for FreeBSD for ConnectX-4/ConnectX-4 Lx/ConnectX-5/ConnectX-6 Rev 3.5.1:
• FreeBSD 12 - AMD64/x86_64

1.2 Supported Adapters Firmware Versions
Mellanox OFED for FreeBSD Rev 3.5.1 supports the following Mellanox network adapter cards (current firmware revision):
• ConnectX®-4 - 12.25.1020
• ConnectX®-4 Lx - 14.25.1020
• ConnectX®-5 / ConnectX-5 Ex - 16.25.1020
• ConnectX®-6 [beta] - 20.25.1500

2 Changes and New Features in Rev 3.5.1 (Table 3)
For additional information on the new features, please refer to the User Manual.
• Firmware Upgrade Using mlx5tool: Added the ability to burn firmware of MFA2 format using mlx5tool and the kernel module.
• Dynamic Interrupt Moderation (DIM): Added the ability to adaptively configure interrupt moderation based on network traffic.
• Bug Fixes: See Section 4, "Bug Fixes".

3 Known Issues (Table 4)
The following is a list of general limitations and known issues of the various components of this Mellanox OFED for FreeBSD release.
1320335 - Description: When Witness is enabled, the following message may appear in logs: "lock order reversal in mlx5_en_rx and in_pcb/tcp_input". Workaround: N/A. Keywords: Witness, LOR. Discovered in Release: 3.5.0.
1578093 - Description: The ibstat tool shows the wrong value of "rate" after unplugging the cable from the HCA. Workaround: N/A. Keywords: ibstat, rate. Discovered in Release: 3.5.0.
1439351 - Description: Link-local GIDs are dysfunctional when an IPv6 address is configured for the first time. Workaround: Set the net device state to "up", for example: # ifconfig mce0 up. Keywords: RoCE, IPv6. Discovered in Release: 3.4.2.
1435021 - Description: All Rx priority pause counter values increase when Rx global pause is enabled. Workaround: Ignore Rx priority pause counters when Rx global pause is enabled. Keywords: Rx pause counters, priority. Discovered in Release: 3.4.2.
1434034 - Description: RDMA-CM applications do not work when PCP is configured on only one side of the connection. Workaround: Make sure PCP is configured on both sides of the connection. Keywords: RDMA-CM, PCP. Discovered in Release: 3.4.2.
1428828 - Description: The extended join multicast API is not supported. Workaround: N/A. Keywords: RDMA, Multicast. Discovered in Release: 3.4.2.
1313461 - Description: When Packet Pacing is enabled in firmware, only one traffic class is supported by the firmware. Workaround: Disable Packet Pacing in the firmware configuration, for example:
# cat /tmp/disable_pp.txt
MLNX_RAW_TLV_FILE
0x00000004 0x0000010c 0x00000000 0x00000000
# mlxconfig -d pci0:4:0:0 -f /tmp/disable_pp.txt set_raw
Keywords: Firmware, Packet Pacing. Discovered in Release: 3.4.2.
1227471 - Description: When loading and unloading the linuxkpi module, the following error message appears in dmesg, indicating that a memory leak has occurred: "Warning: memory type linux leaked memory on destroy (2 allocations, 64 bytes leaked). Warning: memory type linuxcurrent leaked memory on destroy (7 allocations, 896 bytes leaked)." Workaround: N/A. Keywords: linuxkpi. Discovered in Release: 3.4.1.
(no ref.) - Description: The following error message may be printed to dmesg when using static configuration via rc.conf: "loopback_route: deletion failed". This is a kernel-related issue. Workaround: N/A. Keywords: Static Configuration.
(no ref.) - Description: Choosing a wrong interface media type will cause a "no carrier" status, and the physical port will not be active. Workaround: N/A. Keywords: Media Type.
(no ref.) - Description: There is no TCP traffic when configuring an MTU in the range of 72-100 bytes on ConnectX®-4 Lx. Workaround: N/A. Keywords: MTU.

4 Bug Fixes (Table 5)
The table below lists the bugs fixed in this release.
1243940 - Fixed the issue where RDMA applications (user space and kernel space) might hang when restarting the driver during traffic. Keywords: RDMA, driver restart. Discovered in Release: 3.4.1. Fixed in Release: 3.5.1.
1402958 - Fixed the issue where interfaces were not loaded after a firmware software reset while RDMA traffic was running in the background. Keywords: Self healing, RDMA. Discovered in Release: 3.4.2. Fixed in Release: 3.5.1.
1581628 - Fixed the issue where driver unload used to hang while an RDMA user-space application was running. Keywords: RDMA, driver unload. Discovered in Release: 3.5.0. Fixed in Release: 3.5.1.
1554671 - Fixed the issue where mlx5ib unload used to fail while OpenSM was running in the background. Keywords: mlx5ib, OpenSM, RDMA. Discovered in Release: 3.5.0. Fixed in Release: 3.5.1.
1498467 - Added support for 10G-ER and 10G-LR module recognition. Keywords: SFP module. Discovered in Release: 3.4.2. Fixed in Release: 3.5.0.
1175757 - Added support for running RDMA CM with IPoIB. Keywords: RDMA CM, IPoIB. Discovered in Release: 3.4.1. Fixed in Release: 3.5.0.
1337448/1485155/1470374 - Fixed the issue where, when rebooting a virtual machine (VM), the following log message may appear: "warning: event(0) on port 0". Keywords: Virtualization, RDMA. Discovered in Release: 3.4.2. Fixed in Release: 3.5.0.
1297834 - Fixed the issue where, when running over VLAN, RDMA loopback traffic used to fail. Keywords: RDMA, loopback, VLAN. Discovered in Release: 3.4.1. Fixed in Release: 3.4.2.
1258718 - Fixed the issue where, when working in RoCE mode using ConnectX-4 HCAs only, a bandwidth performance degradation used to occur when sending/receiving a message of any size larger than 16K. Keywords: RoCE, performance, ConnectX-4. Discovered in Release: 3.4.1. Fixed in Release: 3.4.2.
1273118/1399014 - Added support for RDMA multicast traffic. Keywords: RDMA, multicast. Discovered in Release: 3.4.1. Fixed in Release: 3.4.2.
765775 - Suppressed EEPROM error messages that used to be received when SFP cages were empty. Keywords: EEPROM, SFP. Discovered in Release: 3.0.0. Fixed in Release: 3.3.0.
854565 - Allowed setting a software MTU size below the value of 1500. Keywords: MTU. Discovered in Release: 3.0.0. Fixed in Release: 3.3.0.

5 Change Log History (Table 6)
3.5.0
• Relaxed Ordering: Added support for configuring PCIe packet write ordering via sysctl.
• Enhanced Transmission Selection (ETS): Added support for setting the bandwidth limit as a ratio rather than in bits per second. The ratio must be an integer between 1 and 100, inclusive. This feature also enables setting a minimal bandwidth guarantee on traffic classes (TCs).
• Ethernet Counters: Added support for the following new counters: tx_jumbo_packets, rxstat0.bytes, txstat0tc0.bytes.
3.4.2
• RoCE Packet Sniffing: Added support for RoCE packet sniffing using the tcpdump tool.
• VLAN 0 Priority Tagging: Added support for transmitting 802.1Q Ethernet frames with the VLAN ID set to zero in RoCE mode.
• Differentiated Services Code Point (DSCP): Added support for classifying and managing network traffic and providing quality of service (QoS) on IP and RoCE networks.
• Trust State: Added support for prioritizing sent/received packets based on packet fields.
• Reset Flow: Added support for a reset mechanism to recover from fatal failures. Upon such failures, a firmware dump of all relevant registers is triggered, followed by a firmware and driver reset.
• RDMA Multicast Support: Added support for sending and receiving RDMA multicast packets.
3.4.1
• Explicit Congestion Notification (ECN): Added support for ECN, which enables end-to-end congestion notifications between two end-points when congestion occurs.
• Rate Limiting: Added support for rate limiting a specific traffic class.
• Priority Flow Control (PFC): Added the ability to apply pause functionality to specific classes of traffic on the Ethernet link. Note: Currently, only layer 2 PFC (PCP) is supported.
• Rx Hardware Time-Stamping: Added support for high-quality hardware time-stamping of incoming packets.
• Firmware Dump: Added the ability to dump hardware register data upon demand.
3.3.0
• Packet Pacing: Also known as "rate limit", this feature is now supported at GA level. Note: This feature is supported in firmware v12.17.1016 and above.
3.0.0
• Hardware LRO: Added support for Large Receive Offload (LRO) in hardware. It increases the inbound throughput of high-bandwidth network connections by reducing CPU overhead. Hardware LRO is only supported on ConnectX®-4.
• Completion-Based Moderation: Added the option to reset the timer for generating interrupts upon completion generation.
• EEPROM Cable Reading: Added support for EEPROM cable reading via ifconfig and sysctl. EEPROM is only supported on ConnectX®-4.
• Interface Name: Changed the interface name from mlx5en<X> to mce<X>.
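The Rev 3.4.2 entry above notes support for sending and receiving RDMA multicast packets, while known issue 1428828 states that the extended join API is not supported. As a rough illustration of the basic (non-extended) join path an application would take through librdmacm, the sketch below resolves a group address and joins it with rdma_join_multicast(); the group address is a placeholder, and the snippet is illustrative only, not taken from this release or its user manual.

#include <stdio.h>
#include <stdlib.h>
#include <netdb.h>
#include <rdma/rdma_cma.h>

/*
 * Minimal sketch of joining an RDMA multicast group with the basic
 * (non-extended) librdmacm join API. Error handling is reduced to
 * aborting on failure; the multicast address below is a placeholder.
 */
static void die(const char *msg) { perror(msg); exit(1); }

int main(void)
{
    const char *mcast_ip = "239.1.1.1";        /* placeholder group address */
    struct rdma_event_channel *ch;
    struct rdma_cm_id *id;
    struct rdma_cm_event *ev;
    struct addrinfo *res;

    if ((ch = rdma_create_event_channel()) == NULL)
        die("rdma_create_event_channel");
    if (rdma_create_id(ch, &id, NULL, RDMA_PS_UDP))
        die("rdma_create_id");
    if (getaddrinfo(mcast_ip, NULL, NULL, &res))
        die("getaddrinfo");

    /* Resolve the group address; this also binds the cm_id to an RDMA device. */
    if (rdma_resolve_addr(id, NULL, res->ai_addr, 2000))
        die("rdma_resolve_addr");
    if (rdma_get_cm_event(ch, &ev) || ev->event != RDMA_CM_EVENT_ADDR_RESOLVED)
        die("waiting for ADDR_RESOLVED");
    rdma_ack_cm_event(ev);

    /* Basic join: a UD QP created on this cm_id would be attached automatically. */
    if (rdma_join_multicast(id, res->ai_addr, NULL))
        die("rdma_join_multicast");
    if (rdma_get_cm_event(ch, &ev) || ev->event != RDMA_CM_EVENT_MULTICAST_JOIN)
        die("waiting for MULTICAST_JOIN");
    printf("joined group, qkey 0x%x\n", (unsigned)ev->param.ud.qkey);
    rdma_ack_cm_event(ev);

    rdma_leave_multicast(id, res->ai_addr);
    freeaddrinfo(res);
    rdma_destroy_id(id);
    rdma_destroy_event_channel(ch);
    return 0;
}

The program links against librdmacm (-lrdmacm); actual datagram send/receive would additionally require a UD queue pair and completion queue, which are omitted here to keep the join flow visible.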
Mellanox Switch-IB™ 2 Firmware Release Notes, Rev 15.1610.0196

Mellanox Technologies, 350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085, U.S.A.
Tel: (408) 970-3400  Fax: (408) 970-3403
© Copyright 2018. Mellanox Technologies Ltd. All Rights Reserved.
Mellanox®, Mellanox logo, Accelio®, BridgeX®, CloudX logo, CompustorX®, Connect-IB®, ConnectX®, CoolBox®, CORE-Direct®, EZchip®, EZchip logo, EZappliance®, EZdesign®, EZdriver®, EZsystem®, GPUDirect®, InfiniHost®, InfiniBridge®, InfiniScale®, Kotura®, Kotura logo, Mellanox CloudRack®, Mellanox CloudX®, Mellanox Federal Systems®, Mellanox HostDirect®, Mellanox Multi-Host®, Mellanox Open Ethernet®, Mellanox OpenCloud®, Mellanox OpenCloud Logo®, Mellanox PeerDirect®, Mellanox ScalableHPC®, Mellanox StorageX®, Mellanox TuneX®, Mellanox Connect Accelerate Outperform logo, Mellanox Virtual Modular Switch®, MetroDX®, MetroX®, MLNX-OS®, NP-1c®, NP-2®, NP-3®, NPS®, Open Ethernet logo, PhyX®, PlatformX®, PSIPHY®, SiPhy®, StoreX®, SwitchX®, Tilera®, Tilera logo, TestX®, The Generation of Open Ethernet logo, UFM®, Unbreakable Link®, Virtual Protocol Interconnect®, Voltaire® and Voltaire logo are registered trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners. For the most updated list of Mellanox trademarks, visit /page/trademarks.
NOTE: THIS HARDWARE, SOFTWARE OR TEST SUITE PRODUCT ("PRODUCT(S)") AND ITS RELATED DOCUMENTATION ARE PROVIDED BY MELLANOX TECHNOLOGIES "AS-IS" WITH ALL FAULTS OF ANY KIND AND SOLELY FOR THE PURPOSE OF AIDING THE CUSTOMER IN TESTING APPLICATIONS THAT USE THE PRODUCTS IN DESIGNATED SOLUTIONS. THE CUSTOMER'S MANUFACTURING TEST ENVIRONMENT HAS NOT MET THE STANDARDS SET BY MELLANOX TECHNOLOGIES TO FULLY QUALIFY THE PRODUCT(S) AND/OR THE SYSTEM USING IT. THEREFORE, MELLANOX TECHNOLOGIES CANNOT AND DOES NOT GUARANTEE OR WARRANT THAT THE PRODUCTS WILL OPERATE WITH THE HIGHEST QUALITY. ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT ARE DISCLAIMED. IN NO EVENT SHALL MELLANOX BE LIABLE TO CUSTOMER OR ANY THIRD PARTIES FOR ANY DIRECT, INDIRECT, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES OF ANY KIND (INCLUDING, BUT NOT LIMITED TO, PAYMENT FOR PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY FROM THE USE OF THE PRODUCT(S) AND RELATED DOCUMENTATION EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Contents: 1. Overview (1.1 Supported Systems, 1.2 Firmware Interoperability, 1.3 Supported EDR Cables and Modules, 1.4 Firmware Upgrade, 1.5 PRM Revision Compatibility); 2. Changes and New Features in Rev 15.1610.0196; 3. Known Issues; 4. Bug Fixes History; 5. Firmware Changes and New Feature History.

Release Update History (Table 1)
February 28, 2017 - First release.

1 Overview
These are the release notes for the Switch-IB™ 2 firmware, Rev 15.1610.0196. This firmware complements the Switch-IB™ 2 silicon architecture with a set of advanced features, allowing easy and remote management of the switch.

1.1 Supported Systems
This firmware supports the devices and protocols listed in Table 2. For the most updated list of supported switches, visit the Firmware Download pages on the Mellanox website.
Table 2 - Supported Systems
• MSB7890 - Switch-IB™ 2 based EDR InfiniBand switch; 36 QSFP28 ports; externally managed

1.2 Firmware Interoperability
To raise links with platforms based on the following ICs, these minimum firmware versions must be met (Table 3):
• Switch-IB™ - 11.1200.0102
• SwitchX®-2 - 9.2.8000
• ConnectX-5 / ConnectX-5 Ex - 16.21.2010
• ConnectX-4 - 12.18.2000
• ConnectX-4 Lx - 14.18.1000
• ConnectX®-3 (Pro) - 2.40.7000
• ConnectX®-2 - 2.9.1200
• Connect-IB® - 10.16.1020

1.3 Supported EDR Cables and Modules
Table 4 presents the qualified EDR cables for externally managed EDR switch systems. Please refer to the LinkX™ Cables and Transceivers webpage (/products/interconnect/cables-configurator.php) for the full list of supported cables and transceivers.
Table 4 - Qualified EDR Cables (Mellanox P/N - Description - Cable Length)
• MMS1C00-C500 - Mellanox® transceiver, 100GbE, QSFP28, MPO, 1550nm PSM4, up to 2km
• MMA1L10-CR - Mellanox® optical transceiver, 100GbE, 100Gb/s, QSFP28, LC-LC, 1310nm, LR4, up to 10km
• MCP1600-E00A* - Mellanox® passive copper cable, up to 100Gb/s, QSFP28, LSZH - 0.5m
• MCP1600-E001* - Mellanox® passive copper cable, up to 100Gb/s, QSFP28, LSZH - 1m
• MCP1600-E01A* - Mellanox® passive copper cable, up to 100Gb/s, QSFP28, LSZH - 1.5m
• MCP1600-E002* - Mellanox® passive copper cable, up to 100Gb/s, QSFP28, LSZH - 2m
• MCP1600-E02A - Mellanox® passive copper cable, up to 100Gb/s, QSFP28, LSZH - 2.5m
• MCP1600-E003 - Mellanox® passive copper cable, up to 100Gb/s, QSFP28, LSZH - 3m
• MCP1600-E004 - Mellanox® passive copper cable, up to 100Gb/s, QSFP28, LSZH - 4m
• MFA1A00-E005* - Mellanox® active fiber cable, up to 100Gb/s, QSFP28 - 5m
• MFA1A00-E010* - Mellanox® active fiber cable, up to 100Gb/s, QSFP28 - 10m
• MFA1A00-E015* - Mellanox® active fiber cable, up to 100Gb/s, QSFP28 - 15m
• MFA1A00-E050 - Mellanox® active fiber cable, up to 100Gb/s, QSFP28 - 50m
• MFA1A00-E100 - Mellanox® active fiber cable, up to 100Gb/s, QSFP28 - 100m
* Forward Error Correction (FEC) is deactivated on this cable.

1.4 Firmware Upgrade
Firmware upgrade may be performed directly from any previous version to this version. To upgrade firmware, please refer to the Mellanox Firmware Tools (MFT) package at /page/management_tools. For highest compatibility, it is recommended to burn the firmware image using the latest available MFT tools package. Firmware Rev 15.1610.0196 has been tested with MFT v4.8.0-26.

1.5 PRM Revision Compatibility
Firmware Rev 15.1610.0196 complies with the Mellanox Switches Programmer's Reference Manual (PRM), Rev 1.45 or later.

2 Changes and New Features in Rev 15.1610.0196 (Table 5)
• General: Added support for congestion control log 1.3, as described in the IBTA IB specification release 1.3, Annex A10.
• General: Added additional information (PDDR pages, as described in the Switches PRM, section 8.15.50, PDDR - Port Diagnostics Database Register) to the diagnostics data VS-MAD, as described in the Mellanox Vendor Specific MAD Specification 1.4, section 3.33 - DiagnosticData.
• SHArP: Added support for group join optimization using root GID, as described in the Mellanox Vendor Specific MAD Specification 1.4, section 4.10 - Aggregation Group Join.

3 Known Issues (Table 6)
Table 6 describes known issues in this firmware release and possible workarounds.
982005 - Description: When connecting 6 m and 7 m cables, the link may come up at DDR instead of QDR against GD4000/IS5000 switches. Workaround: N/A. Keywords: Link.
(no ref.) - Description: Congestion control is not supported. Workaround: N/A. Keywords: General.
(no ref.) - Description: VL2VL mode is not supported from an aggregation port to an egress port. Workaround: N/A. Keywords: SHArP.
(no ref.) - Description: An FDR link may come up with symbol errors on an optical EDR cable longer than 30 m. Workaround: N/A. Keywords: Link.
(no ref.) - Description: Port LEDs do not flash on system boot. Workaround: N/A. Keywords: LEDs.
(no ref.) - Description: Link width reduction is not supported in this release. Workaround: N/A. Keywords: Power Management.
(no ref.) - Description: If QDR is not enabled for the switch's InfiniBand port speed while connected to ConnectX-3/Pro or Connect-IB® FDR adapters, or to SwitchX®/SwitchX®-2 FDR switches, links will come up at SDR or DDR (even if FDR is enabled). Workaround: Enable QDR (in addition to FDR) when connecting to peer ports running at FDR. Keywords: Interoperability.
(no ref.) - Description: Force FDR10 is not supported on EDR products. Workaround: To raise a link with an FDR10 device, make sure all speeds, including EDR, are configured on Switch-IB. Keywords: Interoperability.
(no ref.) - Description: Fallback Routing is not supported for DF+ topology. Fallback Routing notifications and Adaptive Routing notifications are not supported for topologies other than trees. Workaround: N/A. Keywords: Network.
(no ref.) - Description: The module info page in the Diagnostics Data VS-MAD is not supported. Workaround: N/A. Keywords: Diagnostics Data VS-MAD.

4 Bug Fixes History (Table 7)
1092005 - Description: Enable SDR speed regardless of cable-supported speeds. Keywords: Link. Discovered in Release: 15.1400.0102. Fixed in Release: 15.1500.0106.
1089528 - Description: SHArP not functional in case of groups larger than 14 members. Keywords: SHArP. Discovered in Release: 15.1430.0160. Fixed in Release: 15.1500.0106.
964972 - Description: In info block 29 (thermal algorithm values), DELTA TEMP REPORTING > '4' is treated as '1'; DELTA TEMP REPORTING = 1, 2, 3 returns no issues. Keywords: Thermal Management. Discovered in Release: 15.1310.0138. Fixed in Release: 15.1310.0150.
(no ref.) - Description: VL arbitration does not distribute traffic as expected in case of multiple VLs. Keywords: General. Discovered in Release: 15.1200.0102. Fixed in Release: 15.1300.0100.
(no ref.) - Description: In rare cases, FDR links may come up with errors (improved BER performance). Keywords: Link. Discovered in Release: 15.1.1002. Fixed in Release: 15.1300.0092.

5 Firmware Changes and New Feature History (Table 8)
15.1500.0106
• General: Added support for IB telemetry, Top Talkers. For more details, refer to the section "Congestion Telemetry" in the Mellanox Switches Programmer's Reference Manual.
• Module: Added support for 100GbE PSM4/LR4 modules (for more details, see Section 1.3, "Supported EDR Cables and Modules").
15.1430.0160
• General: Added support for Adaptive Routing (AR) optimizations with ConnectX-5 (RC mode).
• Link: Added support for Force EDR on Switch-IB systems, as described in the Mellanox Switches Programmer's Reference Manual (PRM) under the PTYS register.
15.1400.0102
• General: Added support for IB telemetry, Congestion Monitoring Thresholds (see the Mellanox Switches PRM, section 9.7, Congestion Telemetry).
• General: Added support for Additional Port Counters Extended (see IB Specification Vol 1, Release 1.3, MgtWG 1.3 Errata).
• General: Added support for IB Router Port (Port 37) counters (see IB Specification Vol 1, Release 1.3).
15.1300.0126
• General: Added support for burst/traffic histograms (described in the Vendor Specific MAD PRM Rev 1.3, Section 3.33 - Mellanox Performance Histograms).
• Link: Added support for the Port PHY Link Mode (PPLM) register (for the register description, see the Switches PRM, PPLM - Port Phy Link Mode).
• Link: Added support for QSFP copper cables that do not publish attenuation in the memory map.
15.1200.0102
• General: Added support for SHArP performance improvements (UD, group trimming).
• General: Added support for fast flash burn with the new register MFMC and updates to the current flash burn register MFPA (according to Section 3.9 of the Switches PRM).
• Link: Added support for the PRBS generation tool (according to the PPTT and PPRT registers in Section 7.14 of the Switches PRM).
• Link: Added support for a new PHY statistical counters group in register PPCNT (according to Section 7.14 of the Switches PRM).
15.1100.0072
• General: Added support for SHArP.
• System Management: Added system MKey support.
• Chassis Management: Added support for IB NodeDescription Set.
• Modules: Added support for reading from pages with a password through the cable info MAD. For more information, please refer to register MCIA in the Switches PRM and the CableInfo VS-MAD.
15.0400.0064
• General: First beta-level release.
• General: Added support for port mirroring.
• General: Added support for SHArP.
• General: Improved support for adaptive routing, adaptive routing notification, fault routing, and fault routing notification.
• Link: Removed out-of-the-box Forward Error Correction (FEC), reaching 90 ns latency, on Mellanox GA-level AOCs equal to or shorter than 30 m (MFA1A00-EXXX: 3, 5, 10, 15, 20, 30).
• Link: Added support for FDR10 speed.
• Chassis Management: Added support for power supply monitoring (for more information, please refer to the MSPS register in the SwitchX Family Programmer's Reference Manual).
Mellanox OFED for FreeBSD for ConnectX-4/ConnectX-4 Lx/ConnectX-5/ConnectX-6 Release Notes, Rev 3.5.2

Mellanox Technologies, 350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085, U.S.A.
Tel: (408) 970-3400  Fax: (408) 970-3403
© Copyright 2019. Mellanox Technologies Ltd. All Rights Reserved.
Mellanox®, Mellanox logo, Connect-IB®, ConnectX®, CORE-Direct®, GPUDirect®, LinkX®, Mellanox Multi-Host®, Mellanox Socket Direct®, UFM®, and Virtual Protocol Interconnect® are registered trademarks of Mellanox Technologies, Ltd. For the complete and most updated list of Mellanox trademarks, visit /page/trademarks. All other trademarks are property of their respective owners.

Contents: Release Update History; 1. Introduction (1.1 Supported Platforms and Operating Systems, 1.2 Supported Adapters Firmware Versions); 2. Changes and New Features in Rev 3.5.2; 3. Known Issues; 4. Bug Fixes; 5. Change Log History.

Release Update History (Table 1)
Rev 3.5.2 - September 29, 2019 - Initial release of this version.

1 Introduction
These are the release notes for Mellanox Technologies' FreeBSD Rev 3.5.2 driver kit for Mellanox ConnectX®-4, ConnectX®-4 Lx, ConnectX®-5, and ConnectX®-5 Ex adapter cards, supporting the following uplinks to servers (driver name: mlx5):
• ConnectX®-4 - InfiniBand: SDR, QDR, FDR, FDR10, EDR; Ethernet: 1GigE, 10GigE, 25GigE, 40GigE, 50GigE, 56GigE*, and 100GigE
• ConnectX®-4 Lx - Ethernet: 1GigE, 10GigE, 25GigE, 40GigE, and 50GigE
• ConnectX®-5 / ConnectX®-5 Ex - InfiniBand: SDR, QDR, FDR, FDR10, EDR; Ethernet: 1GigE, 10GigE, 25GigE, 40GigE, 50GigE, and 100GigE
• ConnectX®-6 - InfiniBand: SDR, EDR, HDR; Ethernet: 10GigE, 25GigE, 40GigE, 50GigE, 100GigE**, and 200GigE (alpha)
* 56GbE is a Mellanox proprietary link speed and can be achieved only when connecting a Mellanox adapter card to a Mellanox SX10XX series switch or to another Mellanox adapter card.
** ConnectX-6 Ethernet adapter cards currently support Force mode only. Auto-Negotiation mode is not supported.

1.1 Supported Platforms and Operating Systems (Table 2)
The following OS is supported in Mellanox OFED for FreeBSD for ConnectX-4/ConnectX-4 Lx/ConnectX-5/ConnectX-6 Rev 3.5.2:
• FreeBSD 13 - AMD64/x86_64

1.2 Supported Adapters Firmware Versions
Mellanox OFED for FreeBSD Rev 3.5.2 supports the following Mellanox network adapter cards (current firmware revision):
• ConnectX®-4 - 12.26.1040
• ConnectX®-4 Lx - 14.26.1040
• ConnectX®-5 / ConnectX-5 Ex - 16.26.1040
• ConnectX®-6 - 20.26.1040

2 Changes and New Features in Rev 3.5.2 (Table 3)
For additional information on the new features, please refer to the User Manual.
• SR-IOV over ESXi: Added support for SR-IOV Guest over ESXi. NOTE: Certain earlier versions of VMware prevented proper function of MSI/MSI-X emulation on FreeBSD guests. Since these versions are not accurately accounted for, MSI is blacklisted by default, preventing the mlx5(4) driver from loading properly. If MSI/MSI-X emulation does not function properly on your current hypervisor, you may set the loader tunables hw.pci.enable_msi and hw.pci.enable_msix to 1, and hw.pci.honor_msi_blacklist to 0.
• Forward Error Correction (FEC) Configuration: Added support for FEC configuration via the FreeBSD driver.
• IRQ Labeling: IRQ IDs are now labeled according to their Mellanox device functionality.
• Priority Flow Control (PFC) Hardware Buffer Configuration [Beta]: Added support for configuring hardware buffers for priority flow control (PFC).
• Port Module Events: Added counters for various port module events.
• ConnectX-6 Firmware Dump: Expanded firmware dump support for ConnectX-6 adapter cards.
• Bug Fixes: See Section 4, "Bug Fixes".

3 Known Issues (Table 4)
The following is a list of general limitations and known issues of the various components of this Mellanox OFED for FreeBSD release.
(no ref.) - Description: When reloading the module during heavy traffic, the FreeBSD driver may become unstable. Workaround: N/A. Keywords: Reload, stress, heavy traffic. Discovered in Release: 3.5.2.
1320335 - Description: When Witness is enabled, the following message may appear in logs: "lock order reversal in mlx5_en_rx and in_pcb/tcp_input". Workaround: N/A. Keywords: Witness, LOR. Discovered in Release: 3.5.0.
1578093 - Description: The ibstat tool shows the wrong value of "rate" after unplugging the cable from the HCA. Workaround: N/A. Keywords: ibstat, rate. Discovered in Release: 3.5.0.
1439351 - Description: Link-local GIDs are dysfunctional when an IPv6 address is configured for the first time. Workaround: Set the net device state to "up", for example: # ifconfig mce0 up. Keywords: RoCE, IPv6. Discovered in Release: 3.4.2.
1435021 - Description: All Rx priority pause counter values increase when Rx global pause is enabled. Workaround: Ignore Rx priority pause counters when Rx global pause is enabled. Keywords: Rx pause counters, priority. Discovered in Release: 3.4.2.
1434034 - Description: RDMA-CM applications do not work when PCP is configured on only one side of the connection. Workaround: Make sure PCP is configured on both sides of the connection. Keywords: RDMA-CM, PCP. Discovered in Release: 3.4.2.
1428828 - Description: The extended join multicast API is not supported. Workaround: N/A. Keywords: RDMA, Multicast. Discovered in Release: 3.4.2.
1313461 - Description: When Packet Pacing is enabled in firmware, only one traffic class is supported by the firmware. Workaround: Disable Packet Pacing in the firmware configuration, for example:
# cat /tmp/disable_pp.txt
MLNX_RAW_TLV_FILE
0x00000004 0x0000010c 0x00000000 0x00000000
# mlxconfig -d pci0:4:0:0 -f /tmp/disable_pp.txt set_raw
Keywords: Firmware, Packet Pacing. Discovered in Release: 3.4.2.
1227471 - Description: When loading and unloading the linuxkpi module, the following error message appears in dmesg, indicating that a memory leak has occurred: "Warning: memory type linux leaked memory on destroy (2 allocations, 64 bytes leaked). Warning: memory type linuxcurrent leaked memory on destroy (7 allocations, 896 bytes leaked)." Workaround: N/A. Keywords: linuxkpi. Discovered in Release: 3.4.1.
(no ref.) - Description: The following error message may be printed to dmesg when using static configuration via rc.conf: "loopback_route: deletion failed". This is a kernel-related issue. Workaround: N/A. Keywords: Static Configuration.
(no ref.) - Description: Choosing a wrong interface media type will cause a "no carrier" status, and the physical port will not be active. Workaround: N/A. Keywords: Media Type.
(no ref.) - Description: There is no TCP traffic when configuring an MTU in the range of 72-100 bytes on ConnectX®-4 Lx. Workaround: N/A. Keywords: MTU.

4 Bug Fixes (Table 5)
The table below lists the bugs fixed in this release.
1243940 - Fixed the issue where RDMA applications (user space and kernel space) might hang when restarting the driver during traffic. Keywords: RDMA, driver restart. Discovered in Release: 3.4.1. Fixed in Release: 3.5.1.
1402958 - Fixed the issue where interfaces were not loaded after a firmware software reset while RDMA traffic was running in the background. Keywords: Self healing, RDMA. Discovered in Release: 3.4.2. Fixed in Release: 3.5.1.
1581628 - Fixed the issue where driver unload used to hang while an RDMA user-space application was running. Keywords: RDMA, driver unload. Discovered in Release: 3.5.0. Fixed in Release: 3.5.1.
1554671 - Fixed the issue where mlx5ib unload used to fail while OpenSM was running in the background. Keywords: mlx5ib, OpenSM, RDMA. Discovered in Release: 3.5.0. Fixed in Release: 3.5.1.
1498467 - Added support for 10G-ER and 10G-LR module recognition. Keywords: SFP module. Discovered in Release: 3.4.2. Fixed in Release: 3.5.0.
1175757 - Added support for running RDMA CM with IPoIB. Keywords: RDMA CM, IPoIB. Discovered in Release: 3.4.1. Fixed in Release: 3.5.0.
1337448/1485155/1470374 - Fixed the issue where, when rebooting a virtual machine (VM), the following log message may appear: "warning: event(0) on port 0". Keywords: Virtualization, RDMA. Discovered in Release: 3.4.2. Fixed in Release: 3.5.0.
1297834 - Fixed the issue where, when running over VLAN, RDMA loopback traffic used to fail. Keywords: RDMA, loopback, VLAN. Discovered in Release: 3.4.1. Fixed in Release: 3.4.2.
1258718 - Fixed the issue where, when working in RoCE mode using ConnectX-4 HCAs only, a bandwidth performance degradation used to occur when sending/receiving a message of any size larger than 16K. Keywords: RoCE, performance, ConnectX-4. Discovered in Release: 3.4.1. Fixed in Release: 3.4.2.
1273118/1399014 - Added support for RDMA multicast traffic. Keywords: RDMA, multicast. Discovered in Release: 3.4.1. Fixed in Release: 3.4.2.
765775 - Suppressed EEPROM error messages that used to be received when SFP cages were empty. Keywords: EEPROM, SFP. Discovered in Release: 3.0.0. Fixed in Release: 3.3.0.
854565 - Allowed setting a software MTU size below the value of 1500. Keywords: MTU. Discovered in Release: 3.0.0. Fixed in Release: 3.3.0.

5 Change Log History (Table 6)
3.5.1
• Firmware Upgrade Using mlx5tool: Added the ability to burn firmware of MFA2 format using mlx5tool and the kernel module.
• Dynamic Interrupt Moderation (DIM): Added the ability to adaptively configure interrupt moderation based on network traffic.
3.5.0
• Relaxed Ordering: Added support for configuring PCIe packet write ordering via sysctl.
• Enhanced Transmission Selection (ETS): Added support for setting the bandwidth limit as a ratio rather than in bits per second. The ratio must be an integer between 1 and 100, inclusive. This feature also enables setting a minimal bandwidth guarantee on traffic classes (TCs).
• Ethernet Counters: Added support for the following new counters: tx_jumbo_packets, rxstat0.bytes, txstat0tc0.bytes.
3.4.2
• RoCE Packet Sniffing: Added support for RoCE packet sniffing using the tcpdump tool.
• VLAN 0 Priority Tagging: Added support for transmitting 802.1Q Ethernet frames with the VLAN ID set to zero in RoCE mode.
• Differentiated Services Code Point (DSCP): Added support for classifying and managing network traffic and providing quality of service (QoS) on IP and RoCE networks.
• Trust State: Added support for prioritizing sent/received packets based on packet fields.
• Reset Flow: Added support for a reset mechanism to recover from fatal failures. Upon such failures, a firmware dump of all relevant registers is triggered, followed by a firmware and driver reset.
• RDMA Multicast Support: Added support for sending and receiving RDMA multicast packets.
3.4.1
• Explicit Congestion Notification (ECN): Added support for ECN, which enables end-to-end congestion notifications between two end-points when congestion occurs.
• Rate Limiting: Added support for rate limiting a specific traffic class.
• Priority Flow Control (PFC): Added the ability to apply pause functionality to specific classes of traffic on the Ethernet link. Note: Currently, only layer 2 PFC (PCP) is supported.
• Rx Hardware Time-Stamping: Added support for high-quality hardware time-stamping of incoming packets.
• Firmware Dump: Added the ability to dump hardware register data upon demand.
3.3.0
• Packet Pacing: Also known as "rate limit", this feature is now supported at GA level. Note: This feature is supported in firmware v12.17.1016 and above.
3.0.0
• Hardware LRO: Added support for Large Receive Offload (LRO) in hardware. It increases the inbound throughput of high-bandwidth network connections by reducing CPU overhead. Hardware LRO is only supported on ConnectX®-4.
• Completion-Based Moderation: Added the option to reset the timer for generating interrupts upon completion generation.
• EEPROM Cable Reading: Added support for EEPROM cable reading via ifconfig and sysctl. EEPROM is only supported on ConnectX®-4.
• Interface Name: Changed the interface name from mlx5en<X> to mce<X>.
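The 3.5.0 entry above lists new per-interface Ethernet counters (tx_jumbo_packets, rxstat0.bytes, txstat0tc0.bytes) exposed through sysctl, and the 3.0.0 entry notes that interfaces are named mce<X>. As a rough illustration of reading such a counter programmatically on FreeBSD, the sketch below uses sysctlbyname(3); the exact OID string is an assumption made for illustration only and should be checked against the names reported by sysctl on a system running this driver.

#include <sys/types.h>
#include <sys/sysctl.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Read a single mlx5en(4) statistics counter through the sysctl tree.
 * The OID below is a guess at the naming scheme (dev.mce.<unit>....);
 * list the real names with "sysctl dev.mce.0" before relying on one.
 */
int main(void)
{
    const char *oid = "dev.mce.0.stats.tx_jumbo_packets";  /* hypothetical OID */
    uint64_t value = 0;
    size_t len = sizeof(value);

    if (sysctlbyname(oid, &value, &len, NULL, 0) != 0) {
        fprintf(stderr, "sysctlbyname(%s): %s\n", oid, strerror(errno));
        return 1;
    }
    printf("%s = %llu\n", oid, (unsigned long long)value);
    return 0;
}

The same sysctlbyname() call, with the value passed in through the newp/newlen arguments instead, is the usual way driver tunables of this kind (for example the ETS ratios mentioned above) are written from a program rather than from the command line.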
Mellanox Interconnect breaks through storage performance bottlenecks and increases storage throughput
Each server can support more virtual desktops, and the solution scales linearly
Efficient interconnect networks for cloud computing
Building the most efficient cloud computing environment
Efficient Virtualized Network (EVN): industry-leading OpenStack and SDN interconnect solutions
Application performance improved by up to 10x
3x more virtual machines per physical host; consolidated network I/O and storage I/O
32% lower cost per application; 694% higher network performance
UDA accelerates Hadoop performance
[Chart labels: 54%, Disk Access]
Topology diagram of the Southwest Petroleum 256-node cluster
Topology diagram of the 256-node cluster at the Exploration Research Institute, Hangzhou Branch
[Figure: storage access latency and IOPS with different interconnects - with SSDs (~0.5 msec, ~3,000 IOPS), with a fast network (~0.2 msec, ~4,300 IOPS), and with RDMA (~0.05 msec); the per-stage latencies shown range from 6,000 usec down to about 1 usec.]
Mellanox High-Speed Network Interconnect and High-Performance Solutions
Luo Yunfei (罗云飞), Market Development Manager, China
Mellanox company overview
• FDR 56Gb/s InfiniBand and 10/40 Gigabit Ethernet • Reduces the time applications spend waiting for data • Significantly improves data center return on investment
Stock ticker: MLNX
The leading vendor of high-bandwidth, low-latency networking connecting servers and storage
Company headquarters
Advantages of Mellanox interconnect solutions: low latency, high utilization, offload engines, InfiniBand and Ethernet, RDMA (Remote Direct Memory Access), industry-proven, unlimited scalability, compute and storage, high bandwidth
Best return on investment
The most cost-effective compute and storage interconnect solutions
Acceleration, convergence, virtualization
Efficient cloud computing depends on an efficient virtualized network
Mellanox significantly improves cloud platform ROI
Microsoft Windows Azure
Cloud platform efficiency improved by 90.2%; cost per application reduced by 33%
[Chart: end-to-end latency across Mellanox adapter generations, decreasing from 5 usec to 2.5, 1.3, 0.7, 0.5, and below 0.5 usec.]
Mellanox efficient data center solutions
[Diagram: Virtual Protocol Interconnect data center - switches/gateways; storage front-end/back-end; 56G IB & FCoIB; 56G InfiniBand, 10/40/56GbE; 10/40/56GbE & FCoE]
Complete end-to-end InfiniBand and Ethernet product line: ICs, adapter cards, switches/gateways, management and acceleration software, Metro/WAN, cables and modules
Consolidation of network and storage I/O for lower OPEX
Leading interconnect technology, outstanding performance
[Chart: bandwidth per adapter generation - 10Gb/s, 20Gb/s, 40Gb/s, 56Gb/s]
• high throughput, low latency server and storage interconnect
Database performance improved up to 10X
• Cases: Oracle, Teradata, IBM, Microsoft
2X faster data analytics = expose your data value!
Mellanox petroleum industry reference cases
Southwest Petroleum network upgrade
BGP 258-node PC-Cluster parallel computing system
Mellanox Interconnect - industry-leading storage interconnect solutions
SMB Direct
Market Leading Performance with RDMA Interconnects
EMC desktop virtualization appliance (VMware Horizon View with 40GbE networking)
Mellanox end-to-end 40 Gigabit Ethernet solution
ESXi 5.5 inbox ConnectX-3 driver, SX1036 40GbE switch, ConnectX-3 40GbE adapters and cables
1,000 virtual desktops with an average user response time of less than 1 second
[Diagram: to stream ~24 GB/s from 24 x 2.5-inch SAS 12G SSDs, a server needs 15 x 16Gb/s Fibre Channel ports, or 20 x 10Gb/s iSCSI ports (with offload), or only 4 x 40-56Gb/s InfiniBand/Ethernet ports (with RDMA), or 2 x 100G ports.]
Mellanox - VMware certified 40GbE partner
The first and only 40GbE networking product with inbox support in ESXi 5.5*
* Over ConnectX-3
Higher I/O bandwidth delivers higher efficiency
Reference cases
• GD4200 GPU cluster - BGP Zhuozhou headquarters
• GD4700 258-node GPU cluster - BGP Beijing
• GD4700 258-node GPU cluster - BGP Houston
• GD4700 Pack A GPU cluster + BX5020 + UFM - BGP Zhuozhou headquarters
• GD4200 Pack B GPU cluster + BX5020 + UFM - BGP Zhuozhou headquarters
• MIS5025 - BGP Zhuozhou headquarters
• GD4200 CPU/GPU cluster + UFM - Southwest Petroleum Exploration Research Institute
• GD4700 CPU/GPU cluster + UFM - PetroChina Research Institute of Petroleum Exploration and Development (RIPED), Hangzhou Branch
• MIS5200 CPU cluster + UFM - PetroChina RIPED, Northwest Branch
• MIS5030 + BX5020 GPU cluster - PetroChina RIPED, Northwest Branch
• MIS5100 + BX5020 + SwitchX 1016 - PetroChina RIPED, Langfang Branch
• MIS5100 + BX5020 - PetroChina RIPED headquarters
• MIS5100 + BX5020 - Changqing Oilfield
• MIS5100 + BX5020 - Tarim Oilfield
• GD4036E + GD4036 - Tuha Oilfield
• MIS5100 + BX5020 - Qinghai Oilfield
• GD4036E + GD4036, MIS5300 - CNOOC Tanggu
• SwitchX 1036 - Sinopec Beijing Exploration Institute
• MSX6512 - BGP Research Institute
• MSX6512 - Shengli Oilfield Geophysical Research Institute
• MSX6512 + VPI 1036G - Sinopec Nanjing Geophysical Research Institute
Mellanox Interconnect maximizes data center return on investment (ROI)
From 7 Days to 4 Hours!
97% reduction in database recovery time
• Case: Tier-1 Fortune100 Web 2.0 company
Big Data needs big pipes
The era of interconnect technology
HPC (high-performance computing)
Interconnect technology
Web 2.0