Mellanox 4036 QDR Switch White Paper


Mellanox CS7510 Smart Switch Product Brief


CS7510 InfiniBand Switch: 324-Port EDR 100Gb/s InfiniBand Smart Director Switch (Product Brief)

Mellanox provides the world's first smart switch, enabling in-network computing through the Co-Design Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ technology. The CS7510 system provides the highest-performing fabric solution in a 16U form factor, delivering 64Tb/s of full bi-directional bandwidth with 400ns port latency.

SCALING OUT DATA CENTERS WITH EXTENDED DATA RATE (EDR) INFINIBAND
Faster servers based on PCIe 3.0, combined with high-performance storage and applications that use increasingly complex computations, are causing data bandwidth requirements to spiral upward. As servers are deployed with next-generation processors, High-Performance Computing (HPC) environments and Enterprise Data Centers (EDC) will need every last bit of bandwidth delivered with Mellanox's next generation of EDR InfiniBand high-speed smart switches.

SUSTAINED NETWORK PERFORMANCE
Built with Mellanox's latest Switch-IB™ InfiniBand switch devices, the CS7510 provides up to 324 ports of 100Gb/s full bi-directional bandwidth. The CS7510 modular chassis switch provides an excellent price-performance ratio for medium to extremely large clusters, along with the reliability and manageability expected from a director-class switch.

CS7510 is the world's first smart network switch, designed to enable in-network computing through the Co-Design SHARP technology. The Co-Design architecture enables all active data center devices to be used to accelerate communication frameworks, resulting in order-of-magnitude application performance improvements.

WORLD-CLASS DESIGN
CS7510 is an elegant director switch designed for performance, serviceability, energy savings and high availability. The CS7510 comes with highly efficient, 80 PLUS Gold and Energy Star certified AC power supplies. The leaf and spine blades and management modules, as well as the power supplies and fan units, are all hot-swappable to help eliminate downtime.

COLLECTIVE COMMUNICATION ACCELERATION
Collective is a term used to describe communication patterns in which all members of a group of communication endpoints participate. Collectives have implications on overall application performance and scale. CS7510 introduces the Co-Design SHARP technology, which enables the switch to manage collective communications using embedded hardware. Switch-IB 2 improves the performance of selected collective operations by processing the data as it traverses the network, eliminating the need to send data multiple times between endpoints.
This decreases the amount of data traversing the network and frees up CPU resources for computation rather than using them to process communication.

MANAGEMENT
The CS7510, with a dual-core x86 CPU, comes with an onboard subnet manager, enabling simple out-of-the-box fabric bring-up for up to 2048 nodes. The CS7510 runs the same MLNX-OS® software package as Mellanox FDR products to deliver complete chassis management of the firmware, power supplies, fans and ports.

FEATURES
Mellanox CS7510
- 16U modular chassis
- 36 QSFP28 EDR 100Gb/s InfiniBand ports per dual-IC leaf blade

Switch Specifications
- Compliant with IBTA 1.21 and 1.3
- 9 virtual lanes: 8 data + 1 management
- 256B to 4KB MTU
- 4x 48K-entry linear forwarding database

Management Ports
- DHCP
- Familiar industry-standard CLI
- Management over IPv6
- Management IP
- SNMP v1, v2, v3
- Web UI

Fabric Management
- Onboard Subnet Manager supporting fabrics of up to 2048 nodes
- Unified Fabric Manager™ (UFM™) agent

Connectors and Cabling
- QSFP28 connectors
- Passive copper or active fiber cables
- Optical modules

Indicators
- Per-port status LEDs: link, activity
- System status LEDs: system, fans, power supplies
- Port error LED
- Unit ID LED

Physical Characteristics
- Dimensions: 28''H x 17.64''W x 30.3''D
- Weight (fully populated): 275kg (606lb)

Power Supply
- Hot-swappable with N+N redundancy
- Input range: 180-265VAC
- Frequency: 47-63Hz, single-phase AC

Cooling
- Hot-swappable fan trays
- Front-to-rear air flow
- Auto heat-sensing fans

Power Consumption (typical, fully populated)
- Passive cables: 4939W
- Active cables: 6543W

COMPLIANCE
Safety: CB, cTUVus, CE, CU
EMC (Emissions): CE, FCC, VCCI, ICES, RCM
Operating Conditions: operating 0ºC to 40ºC; non-operating -40ºC to 70ºC; humidity (operating) 10% to 85%, non-condensing; altitude (operating) -60 to 3200m
Acoustic: ISO 7779, ETS 300 753
Others: RoHS compliant; 1-year warranty
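To ground the collective-acceleration discussion, the sketch below runs the kind of operation SHARP offloads: an MPI allreduce, in which every endpoint contributes data and receives the reduced result. It is a minimal illustration assuming Python with the mpi4py package on top of an MPI installation; on a SHARP-capable fabric the offload is applied by the MPI stack transparently, with no change to application code.

    # Minimal MPI allreduce, the collective that in-network computing
    # accelerates. Run with e.g.: mpirun -np 8 python allreduce.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    local_value = rank + 1                            # each rank contributes one value
    total = comm.allreduce(local_value, op=MPI.SUM)   # reduction across all ranks

    print(f"rank {rank}: sum over all ranks = {total}")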

Cloud Computing Materials


Southwest Petroleum Exploration Research Institute: Accelerating Applications with a Mellanox InfiniBand High-Speed Network (Solution Overview). Oil plays a key role in the modern industrial system, and because it is non-renewable and hard to replace with other energy sources in the short term, oil companies regard exploration as their primary task.

The goal of this business is to locate oil reservoirs in strata thousands of meters below the surface. Strong technical support is the key to improving exploration efficiency and return on investment, and constantly improving high-performance computing systems are an indispensable part of that technology.

Using high-performance computing to achieve more accurate and efficient oil exploration is now a consensus across the global oil industry.

Compute clusters built on standard technologies are increasingly favored for their excellent price-performance ratio and complete ecosystem support.

As the highest-performing standard network interconnect for today's compute clusters, Mellanox InfiniBand, with its high bandwidth, low latency, high scalability and low CPU utilization, can further accelerate oil exploration application software.

The system newly built by the Southwest Petroleum Exploration Research Institute in the second quarter of 2012 on Mellanox InfiniBand QDR (40Gb/s) brings fresh momentum to the institute's oil exploration, greatly increasing the precision and complexity of seismic data processing computations.

This article introduces the optimized system architecture and how an InfiniBand cluster accelerates oil exploration.

Introduction to InfiniBand: InfiniBand is a standard defined by the InfiniBand Trade Association (IBTA).

It is a new I/O bus technology intended to replace the current PCI bus.

InfiniBand is used mainly in enterprise networks and data centers, and can also be applied in high-speed wire-rate routers, switches and large telecom equipment.

InfiniBand's design concept is to establish a single connection link among remote storage, network and server devices through a central fabric, namely a central InfiniBand switch, with the switch directing the traffic.

On the TOP500 list of the world's fastest supercomputers published in November 2011, systems interconnected with InfiniBand accounted for 42%, including 55 of the top 100 systems, and the share is growing year over year.

Huawei CloudEngine Series Switches VXLAN Technology White Paper

About This Chapter
Describes how VXLAN is implemented: 2.1 Basic Concepts; 2.2 Packet Format; 2.3 Tunnel Establishment and Maintenance; 2.4 Data Packet Forwarding; 2.5 VXLAN QoS.
2 Principles
Table 2-1 Controller-related concepts

- Controller: the control-plane server of the OpenFlow protocol; all path computation and management is performed by the standalone controller. Typically, a blade server can serve as the controller.
- Forwarder: a forwarding-plane device of the OpenFlow protocol; it handles only data forwarding.
- OpenFlow protocol: a key protocol in SDN and the communication channel between the controller and the forwarders. The controller delivers information to the forwarders through OpenFlow.
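To make the VXLAN encapsulation described in section 2.2 concrete, here is a minimal sketch that builds a VXLAN-encapsulated frame with the Python scapy library. The addresses and VNI are hypothetical placeholders; UDP destination port 4789 is the IANA-assigned VXLAN port.

    # Sketch of VXLAN encapsulation: an inner Ethernet frame wrapped in
    # outer Ethernet/IP/UDP headers plus a VXLAN header carrying the VNI.
    from scapy.layers.l2 import Ether
    from scapy.layers.inet import IP, UDP
    from scapy.layers.vxlan import VXLAN

    inner = (Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb") /
             IP(src="10.1.1.1", dst="10.1.1.2"))        # tenant traffic

    frame = (Ether() /
             IP(src="192.168.0.1", dst="192.168.0.2") / # VTEP tunnel endpoints
             UDP(sport=49152, dport=4789) /             # 4789 = VXLAN UDP port
             VXLAN(vni=5000) /                          # 24-bit VXLAN network ID
             inner)

    frame.show()  # print the layered encapsulation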

NSFOCUS Network Intrusion Prevention System Product White Paper


© 2011 NSFOCUS. Copyright notice: unless otherwise noted, the copyright in all text, document formats, illustrations, photographs, methods and processes appearing in this document belongs to NSFOCUS and is protected by applicable intellectual property and copyright law.

No individual or organization may copy or quote any part of this document in any form without NSFOCUS's written authorization.

Contents
I. Foreword
II. Why an Intrusion Prevention System Is Needed
  2.1 Limitations of firewalls
  2.2 Shortcomings of intrusion detection systems
  2.3 Characteristics of intrusion prevention systems
III. How to Evaluate an Intrusion Prevention System
IV. NSFOCUS Network Intrusion Prevention System
  4.1 Architecture
  4.2 Main functions
  4.3 Product features
    4.3.1 Intrusion detection combining multiple techniques
    4.3.2 Layer 2-7 deep intrusion prevention
    4.3.3 Powerful firewall functions
    4.3.4 Advanced web threat defense
    4.3.5 Flexible and efficient virus defense
    4.3.6 Object-based virtual systems
    4.3.7 Application-based traffic management
    4.3.8 Practical Internet behavior management
    4.3.9 Flexible deployment options
    4.3.10 Powerful management capabilities
    4.3.11 Comprehensive reporting system
    4.3.12 Complete high availability
    4.3.13 Rich response methods
    4.3.14 Highly reliable self-security
  4.4 Solutions
    4.4.1 Multi-link protection solution
    4.4.2 Switched protection solution
    4.4.3 Routed protection solution
    4.4.4 Hybrid protection solution
V. Conclusion

I. Foreword
With the development of network and information technology, and especially the wide spread and application of the Internet, networks are gradually changing how people live and work.

More and more government and enterprise organizations have built network-dependent business information systems, such as e-government, e-commerce, online banking and online office systems, with a huge and far-reaching impact on every industry, and the importance of information security keeps rising.

In recent years the security problems facing enterprises have become increasingly complex. Security threats are growing rapidly, especially the risk of blended threats such as hacker attacks, worms, trojans and backdoors, spyware, botnets, DDoS attacks, spam, and network resource abuse (P2P downloads, instant messaging, online games, video), which greatly trouble users and severely damage enterprise information networks.

Mellanox High-Speed, Low-Cost DAC Cable Product Brief


MCP1600-E0xxEyy: 100Gb/s QSFP28 Direct Attach Copper Cable (Product Brief)

Mellanox® MCP1600-E0xxEyy DAC cables are high-speed, cost-effective alternatives to fiber optics in InfiniBand 100Gb/s EDR applications. Mellanox QSFP28 passive copper cables(1) contain four high-speed copper pairs, each operating at data rates of up to 25Gb/s. Each QSFP28 port comprises an EEPROM providing product information, which can be read by the host system.

Mellanox's unique-quality passive copper cable solutions provide power-efficient connectivity for short-distance interconnects. They enable higher port bandwidth, density and configurability at low cost and reduced power requirements in the data center. Rigorous cable production testing ensures the best out-of-the-box installation experience, performance and durability.

(1) Raw cables are provided from different sources to ensure supply chain robustness.

Table 1 - Absolute Maximum Ratings
Table 2 - Operational Specifications
Table 3 - Electrical Specifications
Table 4 - Cable Mechanical Specifications
Table 5 - Part Numbers and Descriptions

Note 1: The minimum assembly bending radius (close to the connector) is 10x the cable's outer diameter. The repeated bend (far from the connector) is also 10x the cable's outer diameter. The single bend (far from the connector) is 5x the cable's outer diameter.
Note 2: See Figure 2 for the cable length definition; xx = reach, yy = wire gauge.

Figure 1: Assembly Bending Radius
Figure 2: Cable Length Definition

Warranty Information: Mellanox LinkX direct attach copper cables include a 1-year limited hardware warranty, which covers parts repair or replacement.

Mellanox Switch Study Notes


I. Basic terms: 1. IPoIB: runs the TCP/IP protocol stack over an IB network. The IB network is used and IP addresses are configured, but both ends are IB cards (see the socket sketch at the end of this terminology section).

2. EoIB: runs the Ethernet protocol over an IB network, with IP addresses configured, but one end is an IB card (upstream) and the other end an Ethernet card (downstream). Simply put, IPoIB needs only InfiniBand switches, while EoIB requires a BridgeX. BridgeX's two ports can both connect to the IB network, both connect to Ethernet, or port 1 to IB and port 2 to Ethernet; but port 1 cannot connect to Ethernet while port 2 connects to IB.

BridgeX's three Ethernet ports do not communicate with one another. Open question: what kind of product is BridgeX exactly, and where are ports 1 and 2? 3. SRP: SCSI RDMA Protocol, an IB SAN protocol also known as the SCSI Remote Protocol. Its main role is to carry SCSI commands and data over an InfiniBand network via RDMA, similar to iSCSI.

The protocol is mainly storage-oriented, providing high-bandwidth, high-performance storage.

The SCSI protocol mainly transfers commands, status and block data between hosts and storage devices.

Note: RDMA (Remote Direct Memory Access) was created to eliminate the server-side data-processing latency in network transfers.

RDMA transfers data over the network directly into a computer's memory region, moving data quickly from one system into remote system memory without involving the operating system, so very little of the computer's processing power is needed.

It eliminates external memory copies and context switches, freeing memory bandwidth and CPU cycles to improve application performance. 4. MPI: mainly aimed at computing applications, such as heavy computation and encryption/decryption that require parallel computing; it is widely used in HPC.

MPI is an open tool that helps users do parallel computing, with its own standards (MPI-1, MPI-2, etc.).

It does two things: 1) start a process on each node to load and run the program; 2) handle the communication among those processes while the program runs. Parallel computing can be understood this way: with 8 nodes, starting an MPI job produces 8 outputs; with 4 nodes, 4 outputs.
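Two illustrative sketches for the terms above, both minimal and using hypothetical addresses. First, IPoIB: because the IB port appears to the host as an ordinary IP interface (for example ib0), unmodified socket code works over it; only the address configured on the IB interface is specific to the fabric.

    import socket

    # Ordinary TCP over an IPoIB interface: nothing IB-specific in the code.
    # 192.168.100.2 is a hypothetical address configured on the peer's ib0.
    s = socket.create_connection(("192.168.100.2", 5000))
    s.sendall(b"hello over IPoIB")
    s.close()

Second, the "N nodes, N outputs" behavior of MPI described above, assuming Python with the mpi4py package; launched with e.g. mpirun -np 8 python hello.py, it prints eight lines, one per rank.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID within the job
    size = comm.Get_size()   # total number of processes MPI started

    # Each launched process runs this same program and produces its own
    # output line, which is the N-results behavior described above.
    print(f"hello from rank {rank} of {size}")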

Mellanox Ethernet Network Equipment User Manual


SOLUTION BRIEF: Delivering In-Memory Computing Using Mellanox Ethernet Infrastructure and MinIO's Object Storage Solution

EXECUTIVE SUMMARY
Analytic tools such as Spark, Presto and Hive are transforming how enterprises interact with and derive value from their data. Designed to be in-memory, these computing and analytical frameworks process volumes of data 100x faster than Hadoop MapReduce and HDFS, transforming batch processing tasks into real-time analysis. These advancements have created new business models while accelerating the process of digital transformation for existing enterprises.

A critical component in this revolution is the performance of the networking and storage infrastructure deployed in support of these modern computing applications. Considering the volumes of data that must be ingested, stored, and analyzed, it quickly becomes evident that the storage architecture must be both highly performant and massively scalable. This solution brief outlines how the promise of in-memory computing can be delivered using high-speed Mellanox Ethernet infrastructure and MinIO's ultra-high-performance object storage solution.

KEY BUSINESS BENEFITS
MinIO and Mellanox: Better Together. High-performance object storage requires the right server and networking components. With industry-leading performance combined with the best innovation to accelerate data infrastructure, Mellanox provides the networking foundation needed to connect in-memory computing applications with MinIO high-performance object storage. Together, they allow in-memory compute applications to access and process large amounts of data to provide high-speed business insights.

Simple to Deploy, Simpler to Manage. MinIO can be installed and configured within minutes simply by downloading a single binary and executing it. The number of configuration options and variations has been kept to a minimum, resulting in near-zero system administration tasks and few paths to failure. Upgrading MinIO is done with a single command which is non-disruptive and incurs zero downtime. MinIO is distributed under the terms of the Apache* License Version 2.0 and is actively developed on GitHub. MinIO's development community starts with the MinIO engineering team and includes all of the 4,500 members of MinIO's Slack workspace. Since 2015 MinIO has gathered over 16K stars on GitHub, making it one of the top 25 Golang* projects by number of stars.

IN-MEMORY COMPUTING
With data constantly flowing from multiple sources - logfiles, time series data, vehicles, sensors, and instruments - the compute infrastructure must constantly improve to analyze data in real time. In-memory computing applications, which load data into the memory of a cluster of servers thereby enabling parallel processing, are achieving speeds up to 100x faster than traditional Hadoop clusters that use MapReduce to analyze and HDFS to store data. Although Hadoop was critical to helping enterprises understand the art of the possible in big data analytics, other applications such as Spark, Presto, Hive, H2O.ai, and Kafka have proven to be more effective and efficient tools for analyzing data. The reality of running large Hadoop clusters is one of immense complexity, requiring expensive administrators and a highly inefficient aggregation of compute and storage. This has driven the adoption of tools like Spark, which are simpler to use and take advantage of the massive benefits afforded by disaggregating storage and compute.
These solutions, based on low-cost, memory-dense compute nodes, allow developers to move analytic workloads into memory where they execute faster, thereby enabling a new class of real-time analytical use cases.

These modern applications are built using cloud-native technologies and, in turn, use cloud-native storage. The emerging standard for both the public and private cloud, object storage is prized for its near-infinite scalability and simplicity, storing data in its native format while offering many of the same features as block or file. By pairing object storage with high-speed, high-bandwidth networking and robust compute, enterprises can achieve remarkable price/performance results.

DISAGGREGATE COMPUTE AND STORAGE
Designed in an era of slow 1GbE networks, Hadoop (MapReduce and HDFS) achieved its performance by moving compute tasks closer to the data. A Hadoop cluster often consists of many hundreds or thousands of server nodes that combine both compute and storage. The YARN scheduler first identifies where the data resides, then distributes the jobs to the specific HDFS nodes. This architecture can deliver performance, but at a high price, measured in low compute utilization, cost to manage, and cost associated with its complexity at scale. Also, in practice, enterprises don't experience high levels of data locality, with the result being suboptimal performance.

Due to improvements in storage and interconnect speeds, it has become possible to send and receive data remotely at high speed with little (less than 1 microsecond) to no latency difference versus local storage. As a result, it is now possible to separate storage from compute with no performance penalty. Data analysis is still possible in near real time because the interconnect between the storage and the compute is fast enough to support such demands.

By combining dense compute nodes, large amounts of RAM, ultra-high-speed networks and fast object storage, enterprises are able to disaggregate storage from compute, creating the flexibility to upgrade, replace, or add individual resources independently. This also allows better planning for future growth, as compute and storage can be added independently and when necessary, improving utilization and budget control. Multiple processing clusters can now share high-performance object storage so that different types of processing, such as advanced queries, AI model training, and streaming data analysis, can run on their own independent clusters while sharing the same data stored on the object storage. The result is superior performance and vastly improved economics.

HIGH PERFORMANCE OBJECT STORAGE
With in-memory computing, it is now possible to process volumes of data much faster than with Hadoop MapReduce and HDFS. Supporting these applications requires a modern data infrastructure with a storage foundation able to provide both the performance required by these applications and the scalability to handle the immense volume of data created by the modern enterprise. Building large clusters of storage is best done by combining simple building blocks, an approach proven out by the hyper-scalers. By joining one cluster with many other clusters, MinIO can grow to provide a single, planet-wide global namespace.
MinIO's object storage server has a wide range of optimized, enterprise-grade features, including erasure code and bitrot protection for data integrity; identity management, access management, WORM and encryption for data security; and continuous replication and lambda compute for dynamic, distributed data. MinIO object storage is the only solution that provides throughput rates over 100GB/sec and scales easily to store thousands of petabytes of data under a single namespace. MinIO runs Spark queries faster, captures streaming data more effectively, and shortens the time needed to test, train and deploy AI algorithms.

LATENCY AND THROUGHPUT
Industry-leading performance and IT efficiency combined with the best of open innovation help accelerate big data analytics workloads that require intensive processing. Mellanox ConnectX® adapters reduce CPU overhead through advanced hardware-based stateless offloads and flow-steering engines. This allows big data applications utilizing TCP or UDP over IP transport to achieve the highest throughput, completing heavier analytic workloads in less time on big data clusters, so organizations can unlock and efficiently scale data-driven insights while increasing application densities for their business.

Mellanox Spectrum® Open Ethernet switches feature consistently low latency and can support a variety of non-blocking, lossless fabric designs while delivering data at line-rate speeds. Spectrum switches can be deployed in a modern spine-leaf topology to efficiently and easily scale for future needs. Spectrum also delivers packet processing without buffer fairness concerns. The single shared buffer in Mellanox switches eliminates the need to manage port mapping and greatly simplifies deployment. In an object storage environment, fluid resource pools greatly benefit from fair load balancing. As a result, Mellanox switches are able to deliver optimal and predictable network performance for data analytics workloads.

The Mellanox 25, 50 or 100G Ethernet adapters along with Spectrum switches result in an industry-leading end-to-end, high-bandwidth, low-latency Ethernet fabric. The combination of in-memory processing for applications and high-performance object storage from MinIO, along with the reduced latency and throughput improvements made possible by Mellanox interconnects, creates a modern data center infrastructure that provides a simple yet highly performant and scalable foundation for AI, ML, and big data workloads.

CONCLUSION
Advanced applications that use in-memory computing, such as Spark, Presto and Hive, are revealing business opportunities to act in real time on information pulled from large volumes of data. These applications are cloud-native, which means they are designed to run on the computing resources in the cloud, a place where Hadoop HDFS is being replaced in favor of data infrastructures that disaggregate storage from compute. These applications now use object storage as the primary storage vehicle whether running in the cloud or on-premises. Employing Mellanox networking and MinIO object storage allows enterprises to disaggregate compute from storage, achieving both performance and scalability.
By connecting dense processing nodes to MinIO object storage nodes with high-performance Mellanox networking, enterprises can deploy object storage solutions that provide throughput rates over 100GB/sec and scale easily to store thousands of petabytes of data under a single namespace. The joint solution runs queries faster, captures streaming data more effectively, and shortens the time needed to test, train and deploy AI algorithms, effectively replacing existing Hadoop clusters with a data infrastructure solution, based on in-memory computing, that consumes a smaller data center footprint yet provides significantly more performance.

WANT TO LEARN MORE?
Learn more about object storage from MinIO: https://min.io/
Learn more about the Mellanox end-to-end Ethernet storage fabric: /ethernet-storage-fabric/
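As a small illustration of the S3-style object API these analytics stacks consume, the sketch below uploads a file with the MinIO Python SDK. The endpoint, credentials, bucket and object names are hypothetical placeholders, not values from this brief.

    from minio import Minio

    # Hypothetical endpoint and credentials, for illustration only.
    client = Minio(
        "minio.example.com:9000",
        access_key="ACCESS_KEY",
        secret_key="SECRET_KEY",
        secure=False,              # set True when TLS is configured
    )

    # Create the bucket once, then upload an object from a local file.
    if not client.bucket_exists("analytics"):
        client.make_bucket("analytics")

    client.fput_object("analytics", "events/part-0001.parquet",
                       "/tmp/part-0001.parquet")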

NSFOCUS Operations Security Management System


NSFOCUS Operations Security Management System Product White Paper. © 2018 NSFOCUS. Copyright notice: unless otherwise noted, the copyright in all text, document formats, illustrations, photographs, methods and processes appearing in this document belongs to NSFOCUS and is protected by applicable intellectual property and copyright law.

No individual or organization may copy or quote any part of this document in any form without NSFOCUS's written authorization.

Contents
I. Background
  1.1 Shared operations accounts and coarse permission management
  1.2 Coarse-grained audit logs, easily lost, hard to trace
  1.3 Regulatory compliance pressure
  1.4 Heavy and tedious operations work
  1.5 Rapid growth of virtualization and cloud technology
II. Product Overview
  2.1 Operations security management system
  2.2 Goals
  2.3 Application scenarios
    2.3.1 Administrators define operations management policies
    2.3.2 Ordinary operations users access target devices
  2.4 System value
III. Product Introduction
  3.1 System functions
  3.2 System architecture
IV. Product Features
  4.1 Multi-dimensional, fine-grained authentication and authorization
    4.1.1 Flexible user authentication methods
    4.1.2 Fine-grained operations access control
    4.1.3 Multi-dimensional operations access authorization
  4.2 Efficient, intelligent asset management
    4.2.1 Intelligent inspection of managed devices and device accounts
    4.2.2 Efficient management of devices and device accounts
  4.3 Rich operations channels
    4.3.1 Web access (B/S)
    4.3.2 Client access (C/S)
    4.3.3 Seamless cross-platform management
    4.3.4 Strong application extensibility
  4.4 High-fidelity, easy-to-understand, quick-to-locate auditing
    4.4.1 Dual-layer auditing of database operations at the GUI and command-line levels
    4.4.2 Auditing based on unique identities
    4.4.3 Full operations behavior auditing
    4.4.4 "Zero management" of audit information
    4.4.5 Text search to locate recording playback
  4.5 Stable and reliable system security assurance
    4.5.1 System security assurance
    4.5.2 Data security assurance
  4.6 Fast deployment, easy to use
    4.6.1 Physically bypassed, logically in-line
    4.6.2 Configuration wizard
V. Customer Benefits

I. Background
With the development of informatization, enterprise IT systems keep growing: network scale expands rapidly and the number of devices surges. The focus of construction is gradually shifting from building the network platform to an operations phase characterized by deepening applications and improving returns, and IT operations and security management are steadily converging.

Mellanox 1000BASE-T Cable SFP Module Product Brief

The 1000BASE-T physical layer IC (PHY), including all settings and features, can be accessed via I2C.
Rigorous production testing ensures the best out-of-the-box installation experience, performance, and durability.
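For a sense of what I2C access to such a module looks like, here is a minimal sketch that reads the module's standard identification EEPROM (the A0h page defined by SFF-8472, at I2C address 0x50) using the Python smbus2 package. The bus number is platform-specific, and access to the PHY's own register set would go through vendor-documented addresses rather than this ID page; treat this as an illustration under those assumptions, not a Mellanox-specified procedure.

    from smbus2 import SMBus

    SFP_ID_EEPROM = 0x50              # SFF-8472 A0h identification page
    VENDOR_NAME_REGS = range(20, 36)  # bytes 20-35 hold the vendor name

    # /dev/i2c-1 is an assumed bus number; use the bus your SFP cage sits on.
    with SMBus(1) as bus:
        raw = bytes(bus.read_byte_data(SFP_ID_EEPROM, reg)
                    for reg in VENDOR_NAME_REGS)

    print("Vendor name:", raw.decode("ascii", errors="replace").strip())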
Table 1 - Absolute Maximum Ratings

Parameter                            Minimum   Typical   Maximum   Units
Supply voltage                       --        320       375       mA
Input voltage (referenced to GND)    3.13      3.3       3.47      Vcc
Maximum voltage                      --        --        4         Vmax
Surge current                        --        --        30 (A)    mA
Storage ambient temperature          -40       --        85        °C

Table 2 - Operational Specifications
Parameters: power consumption; single-ended data input swing; single-ended data output swing; rise/fall time (20%-80%); Tx input impedance (single-ended); Rx output impedance (single-ended); operating case temperature.

Mellanox Switch-IB Firmware Release Notes


Mellanox Switch-IB Firmware Release Notes, Rev 11.1701.0010
Mellanox Technologies, 350 Oakmead Parkway Suite 100, Sunnyvale, CA 94085, U.S.A. Tel: (408) 970-3400, Fax: (408) 970-3403. © Copyright 2018. Mellanox Technologies Ltd. All Rights Reserved.

Table of Contents
Chapter 1 Overview
  1.1 Supported Systems
  1.2 Firmware Interoperability
  1.3 Supported Cables and Modules
  1.4 Firmware Upgrade
  1.5 PRM Revision Compatibility
Chapter 2 Changes and New Features in Rev 11.1701.0010
Chapter 3 Known Issues
Chapter 4 Bug Fixes History
Chapter 5 Firmware Changes and New Feature History

Release Update History (Table 1)
- June 30, 2018: first release
- July 31, 2018: updated "Mellanox Vendor Specific MAD Specification" from 1.4 to 1.3

1 Overview
These are the release notes for the Switch-IB™ firmware, Rev 11.1701.0010. This firmware complements the Switch-IB™ silicon architecture with a set of advanced features, allowing easy and remote management of the switch.

1.1 Supported Systems
This firmware supports the devices and protocols listed in Table 2. For the most updated list of supported switches, visit the Firmware Download pages.

Table 2 - Supported Systems
MSB7790: Switch-IB™ based EDR InfiniBand switch; 36 QSFP28 ports; externally managed

1.2 Firmware Interoperability
This firmware version has been validated to work against platforms with the following software versions.

Table 3 - Firmware Interoperability
Switch-IB™ 2: 15.1700.0162
SwitchX®-2: 9.4.2000
ConnectX®-5 (Ex): 16.22.1002
ConnectX-4 Lx: 14.22.1002
ConnectX-4: 12.22.1002
Connect-IB®: 10.16.1200
ConnectX-3 (Pro): 2.40.7000
MFT: 4.9.0-38

1.3 Supported Cables and Modules
For the full list of supported cables and transceivers, please refer to the LinkX™ Cables and Transceivers page on the Mellanox website: /products/interconnect/cables-configurator.php
When using Mellanox AOC cables longer than 50m, use one VL to achieve full wire speed.

1.4 Firmware Upgrade
Firmware upgrade may be performed directly from any previous version to this version.
To upgrade firmware, please refer to the Mellanox Firmware Tools (MFT) package at /page/management_tools

1.5 PRM Revision Compatibility
Firmware Rev 11.1701.0010 complies with the Mellanox Switches Programmer's Reference Manual (PRM), Rev 1.45 or later.

2 Changes and New Features in Rev 11.1701.0010 (Table 4)
- General: added support for congestion control log 1.3 as described in IBTA IB specification release 1.3, Annex A10.
- General: added additional information (PDDR pages as described in the Switches PRM, section 8.15.50, PDDR - Port Diagnostics Database Register) to the diagnostics data VS-MAD as described in Mellanox Vendor Specific MAD Specification 1.4, section 3.33 - DiagnosticData.
- Chassis Management: added the ability to read part numbers and serial numbers for fans (via the MFNR register) and the power supply (via the MSPS register).

3 Known Issues (Table 5)
- 955641: VL_HIGH_LIMIT does not affect the VL arbiter as expected. Workaround: set the arbitration table using the low-priority VL arbitration table only. (VL Arbitration)
- 1249608: configuring weight "0" for a VL results in unexpected behavior. Workaround: configure the arbitration table with weights other than "0". (VL Arbitration)
- 982005: when connecting 6m and 7m cables, the link may come up at DDR instead of QDR against GD4000/IS5000 switches. No workaround. (Link)
- Congestion control 1.3 supports the congestion log only. No workaround. (QoS)
- Port LEDs do not flash on system boot. No workaround. (LEDs)
- Link width reduction is not supported in this release. No workaround. (Power Management)
- If QDR is not enabled for the switch's InfiniBand port speed while connected to ConnectX-3/Pro or Connect-IB® FDR adapters or to SwitchX®/SwitchX®-2 FDR switches, links will come up at SDR or DDR (even if FDR is enabled). Workaround: enable QDR (in addition to FDR) when connecting to peer ports running at FDR. (Interoperability)
- Force FDR10 is not supported on EDR products. Workaround: to raise a link with an FDR10 device, make sure all speeds, including EDR, are configured on Switch-IB. (Interoperability)
- Fallback routing is not supported for DF+ topology. Fallback routing notifications and adaptive routing notifications are not supported for topologies other than trees. No workaround. (Network)
- 697149: links rise at DDR speed instead of FDR10 when using 100m QDR/FDR10 optical cables. No workaround. (Link)
- An FDR link may rise with symbol errors on an optical EDR cable longer than 30m. No workaround. (Link)
- Fan LEDs may behave unexpectedly in the first 5 seconds of system boot.
  No workaround. (LEDs)
- The module info page in the Diagnostics Data VS-MAD is not supported. No workaround. (Diagnostics Data VS-MAD)

4 Bug Fixes History (Table 6)
- 1337469: in rare cases, when a receiver's electrical eye is narrow, a link might rise with BER higher (worse) than 10^-12. (Link; discovered in 11.1500.0034, fixed in 11.1630.0206)
- 1092005: enable SDR speed regardless of cable supported speeds. (Link; discovered in 11.1400.0102, fixed in 11.1500.0106)
- VL arbitration did not distribute traffic as expected with multiple VLs. (General; discovered in 11.1200.0102, fixed in 11.1300.0100)
- In rare cases, FDR links could rise with errors; improved BER performance. (Link; discovered in 11.1.1002, fixed in 11.1200.0102)
- Insertion of QDR cables into a Switch-IB™ based switch overwrote non-volatile fields (rx_output_amp/emp). (System Management; discovered in 11.1100.0072, fixed in 11.1200.0102)
- Bubbles appeared as symbol errors when a link rose FDR 1x. (Link; discovered in 11.1.1002, fixed in 11.1200.0102)
- PSU fans were set to work at 60% max speed by default. (Chassis Management; discovered in 11.0350.0394, fixed in 11.1100.0072)
- The command "show interfaces ib * transceiver" showed no cable connected while the link was up. (Chassis Management; discovered in 11.0350.0394, fixed in 11.1100.0072)
- 690231: fixed MSGI data reading. (General; discovered in 11.1.1002, fixed in 11.0350.0394)
- In rare cases, a link could degrade from EDR or FDR to a lower speed; in other cases physical errors could increment. (Link; discovered in 11.0350.0372, fixed in 11.0350.0394)
- Minimum fan speed could drop. (Temperature; discovered in 11.0200.0120, fixed in 11.0204.0124)
- Bit error rate was not optimal on QDR links. (Link; discovered in 11.1.1002, fixed in 11.0200.0118)
- Only connections to Switch-IB™, ConnectX®-3, ConnectX-3 Pro, Connect-IB™, and SwitchX® family devices were supported. (Interoperability; discovered in 11.1.1002, fixed in 11.0100.0112)
- Connecting a cable longer than 30m to ConnectX-3, ConnectX-3 Pro or Connect-IB platforms caused interoperability issues. (Interoperability; discovered in 11.1.1002, fixed in 11.0100.0112)
- Packets were lost on the private linear forwarding table (pLFT). (pLFT; discovered in 11.1.1002, fixed in 11.0100.0112)
- Port LEDs could continue to blink even after a bad cable was removed.
  (Chassis Management; discovered in 11.1.1002, fixed in 11.0100.0112)

5 Firmware Changes and New Feature History (Table 7)
- 11.1630.0206: General: bug fixes.
- 11.1610.0196: General: added additional information (PDDR pages as described in the Switches PRM, section 8.15.50, PDDR - Port Diagnostics Database Register) to the diagnostics data VS-MAD as described in Mellanox Vendor Specific MAD Specification 1.4, section 3.33 - DiagnosticData. General: added support for congestion control log 1.3 as described in IBTA IB specification release 1.3, Annex A10.
- 11.1500.0106: General: added support for IB telemetry, Top Talkers (see the "Congestion Telemetry" section in the Mellanox Switches PRM). Modules: added support for 100GbE PSM4/LR4 modules.
- 11.1430.0160: added support for Adaptive Routing (AR) optimizations with ConnectX-5 (RC mode); added support for Force EDR on Switch-IB systems, as described in the Mellanox Switches PRM under the PTYS register.
- 11.1400.0102: added support for IB telemetry, congestion monitoring thresholds (see Mellanox Switches PRM, section 9.7, Congestion Telemetry); added support for Additional Port Counters Extended (see IB Specification Vol 1 Release 1.3, MgtWG 1.3 errata); added support for IB router port (port 37) counters (see IB Specification Vol 1 Release 1.3).
- 11.1300.0126: added support for burst/traffic histograms (described in Vendor Specific MAD PRM Rev 1.3, section 3.33, Mellanox Performance Histograms); added support for the Port PHY Link Mode (PPLM) register (for the register description, see the Switch PRM, PPLM - Port Phy Link Mode); added support for QSFP copper cables which do not publish attenuation in the memory map.

NSFOCUS Remote Security Assessment System: Security Baseline Management Series Product White Paper


© 2011 NSFOCUS. Copyright notice: unless otherwise noted, the copyright in all text, document formats, illustrations, photographs, methods and processes appearing in this document belongs to NSFOCUS and is protected by applicable intellectual property and copyright law.

No individual or organization may copy or quote any part of this document in any form without NSFOCUS's written authorization.

Contents
I. The Harm of Vulnerabilities
  1.1 Vulnerabilities are doing ever more damage
  1.2 Frequent misconfigurations and difficult compliance checks
  1.3 Risks from unnecessary processes and ports
II. Problems Facing Information Security Managers
  2.1 The current state of vulnerability management
  2.2 Operations headaches
  2.3 Thinking through security requirements
III. The NSFOCUS Baseline-Based Security Management Tool
  3.1 Establishing a security baseline
    3.1.1 Security configuration
    3.1.2 Security vulnerabilities
    3.1.3 Important information
  3.2 Automated risk control with the security baseline
  3.3 Product features
    3.3.1 Practice-based security baseline management and presentation
    3.3.2 Management architecture based on user behavior patterns
    3.3.3 Authoritative, complete baseline knowledge base
    3.3.4 Efficient, intelligent weakness identification
    3.3.5 Integrated professional web application scanning module
    3.3.6 Multi-dimensional, fine-grained statistical analysis
  3.4 Typical applications
    3.4.1 Deployment modes
    3.4.2 Application scenarios
  3.5 Product value
    3.5.1 The security baseline model helps grasp the full risk posture of information systems
    3.5.2 Higher work efficiency for operations staff
    3.5.3 Global insight for managers
IV. Conclusion

I. The Harm of Vulnerabilities
A vulnerability is a defect in the concrete implementation of hardware, software or protocols, or in a system's security policy, which can allow an attacker to access or damage the system without authorization. In the computer security field, a security hole is also commonly called a vulnerability.

This is only a general description; professionals have given many different definitions.

Mellanox Technologies Hadoop Solution White Paper


WHITE PAPER: Apache™ Hadoop® with Dell and Mellanox VPI Solutions (in collaboration with Dell)

Contents: Background; Mellanox Solutions for Apache Hadoop; Mellanox Unstructured Data Accelerator (UDA); Ethernet Performance; UDA Performance; Hardware; Software Requirements; Installation; Scaling the Cluster Size; High Availability; Appendix A: Setup Scripts; References.

Background
Storing and analyzing rapidly growing amounts of data via traditional tools introduces new levels of challenges to businesses, government and academic research organizations. The Hadoop framework is a popular tool for analyzing large structured and unstructured data sets. Using Java-based tools to process data, a data scientist can infer users' churn patterns in retail banking, better recommend a new service to users of social media, optimize production lines based on sensor data, and detect a security breach in computer networks. Hadoop is supported by the Apache Software Foundation.

Hadoop workloads vary based on target implementation and even within the same implementation. Designing networks to sustain this variety of workloads challenges legacy network designs in terms of bandwidth and latency requirements. Moving a terabyte of information can take several minutes over a 1 Gigabit network. Minutes-long operations are not acceptable in on-line user experience, fraud detection and risk management tools. A better solution is required.

Building a Hadoop cluster requires taking into consideration many factors, such as disk capacity, CPU utilization, memory usage and networking capabilities. Using legacy networks creates bottlenecks in the data flow. State-of-the-art CPUs can drive over 50 Gigabits per second, while disk controllers capable of driving 12 Gigabits per second are entering the market; the result is more data trying to flow out of the compute node. Using 40Gb Ethernet and FDR InfiniBand satisfies the dataflow requirements of high-speed SAS controllers and Solid State Drives (SSDs); 10Gb Ethernet is becoming the entry-level requirement to handle the dataflow of common spindle disk drives.

Scaling and capacity planning should be another point of consideration. While businesses grow linearly, their data grows exponentially at the same time. Adding more servers and storage should not require a complete re-do of the network; using edge switches and an easy-to-balance, flat network is a better approach.

Mellanox Solutions for Apache Hadoop

Figure 1: Hadoop, 5-node deployment
In the above example, where nodes are connected with an FDR InfiniBand 56Gb/s fabric, the all-to-all available bandwidth will be 18.6Gb/s. Scaling to larger clusters is done in the same fashion: connect ToR switches with enough bandwidth to satisfy node throughputs.

Figure 2: Mellanox FDR InfiniBand and/or 40Gb Ethernet adapter
Figure 3: Mellanox QSFP copper cable
Figure 4: Mellanox 10Gb Ethernet adapter
Figure 5: Mellanox SFP+ copper cable

Inspur NOS Security Technology White Paper


Inspur NOS Security Technology White Paper, document version V1.0, released 2022-12-16. Copyright © 2022 Inspur Electronic Information Industry Co., Ltd.

All rights reserved.

Without the company's prior written permission, no organization or individual may copy or distribute any part or all of this manual in any form.

Trademark statement: Inspur浪潮, Inspur, 浪潮 and Inspur NOS are registered trademarks of Inspur Group Co., Ltd.

All other trademarks or registered trademarks mentioned in this manual are owned by their respective owners.

Technical support: service hotline 400-860-0011; address: Inspur Electronic Information Industry Co., Ltd., 1036 Inspur Road, Jinan, China; email: ***************; postal code: 250101.

Preface. Purpose: this document describes the security capabilities and technical principles of Inspur NOS, the operating system of Inspur switch products.

Note: due to product version upgrades or other reasons, the contents of this document are updated from time to time.

Unless otherwise agreed, this document serves only as usage guidance; all statements, information and recommendations in it constitute no warranty of any kind, express or implied.

Intended audience: this document is intended for product managers, operations engineers, pre-sales engineers, and LMT and after-sales engineers.

Change history. Contents:
1 Overview
2 Abbreviations and Terms
3 Threats and Challenges
4 Security Architecture
5 Security Design
  5.1 Account security
  5.2 Permission control
  5.3 Access control
  5.4 Secure protocols
  5.5 Data protection
  5.6 Security hardening
  5.7 Log auditing
  5.8 Forwarding-plane protection
  5.9 Control-plane protection
6 Security Guidelines and Policies
  6.1 Version security maintenance
  6.2 Strengthening account and permission management
  6.3 TACACS+ service authorization
  6.4 Hardening system security
    6.4.1 Disable unused services and ports
    6.4.2 Deprecate insecure channels
    6.4.3 Make good use of security configuration
  6.5 Attention to data security
  6.6 Ensuring network isolation
  6.7 Security-zone-based access control
  6.8 Attack protection
  6.9 Reliability protection
7 Secure Release

With the rapid development of open networking, white-box switches, as open network devices that decouple software from hardware, are being used ever more widely.

Mellanox ConnectX-2 VPI Single-port and Dual-port QDR InfiniBand Host Channel Adapter Product Guide


Mellanox ConnectX-2 VPI Single-port and Dual-port QDR InfiniBand Host Channel Adapters: Product Guide (withdrawn product)

High-performance computing (HPC) solutions require high-bandwidth, low-latency components with CPU offloads to get the highest server efficiency and application productivity. The Mellanox ConnectX-2 VPI Single-port and Dual-port Quad Data Rate (QDR) InfiniBand host channel adapters (HCAs) deliver the I/O performance that meets these requirements. Data centers and cloud computing also require I/O services such as bandwidth, consolidation and unification, and flexibility, and the Mellanox HCAs support the necessary LAN and SAN traffic consolidation.

Figure 1 shows the Mellanox ConnectX-2 VPI Dual-port QDR InfiniBand host channel adapter.

Did you know?
Mellanox ConnectX-2 VPI Single-port and Dual-port QDR InfiniBand host channel adapters make it possible for any standard networking, clustering, storage, and management protocol to seamlessly operate over any converged network by using a consolidated software stack. With auto-sense capability, each ConnectX-2 port can identify and operate on InfiniBand, Ethernet, or Data Center Bridging (DCB) fabrics. ConnectX-2 with Virtual Protocol Interconnect (VPI) simplifies I/O system design and makes it easier for IT managers to deploy an infrastructure that meets the challenges of a dynamic data center.

Part number information
Table 1 shows the part numbers and feature codes for the Mellanox ConnectX-2 VPI QDR InfiniBand HCAs.

Table 1. Ordering part numbers and feature codes
Part number   Feature code   Description
81Y1531*      5446           Mellanox ConnectX-2 VPI Single-port QSFP QDR IB/10GbE PCI-E 2.0 HCA
81Y1535*      5447           Mellanox ConnectX-2 VPI Dual-port QSFP QDR IB/10GbE PCI-E 2.0 HCA
* Withdrawn from marketing

The adapters support the transceivers and direct-attach copper (DAC) twin-ax cables listed in Table 2.

Table 2. Supported transceivers and DAC cables
Part number   Feature code   Description
59Y1920       3731           3m QLogic Optical QDR InfiniBand QSFP Cable
59Y1924       3732           10m QLogic Optical QDR InfiniBand QSFP Cable
59Y1928       3733           30m QLogic Optical QDR InfiniBand QSFP Cable
59Y1892       3725           0.5m QLogic Copper QDR InfiniBand QSFP 30AWG Cable
59Y1896       3726           1m QLogic Copper QDR InfiniBand QSFP 30AWG Cable
59Y1900       3727           3m QLogic Copper QDR InfiniBand QSFP 28AWG Cable
49Y0488       5989           3m Optical QDR InfiniBand QSFP Cable
49Y0491       5990           10m Optical QDR InfiniBand QSFP Cable
49Y0494       5991           30m Optical QDR InfiniBand QSFP Cable

Figure 2 shows the Mellanox ConnectX-2 VPI Single-port QDR InfiniBand host channel adapter.
Figure 2. Mellanox ConnectX-2 VPI Single-port QDR InfiniBand host channel adapter

Features and benefits
Server compatibility, part 2 (M4 systems with v1 processors and M3 systems)M4 and X5 systems (v1 processors)M3 systemsPartnumberDescription81Y1531MellanoxConnectX-2VPI Single-port QSFPQDRIB/10GbEPCI-E 2.0HCAN N N N N N N N Y N Y N N N N N Y N N N Y N81Y1535MellanoxConnectX-2VPI Dual-port QSFPQDRIB/10GbEPCI-E 2.0HCAN N N N N N N N N N N N N N N N N N N N N N Supported operating systemsTrademarksLenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. A current list of Lenovo trademarks is available on the Web athttps:///us/en/legal/copytrade/.The following terms are trademarks of Lenovo in the United States, other countries, or both:Lenovo®System x®X5The following terms are trademarks of other companies:Intel® is a trademark of Intel Corporation or its subsidiaries.Linux® is the trademark of Linus Torvalds in the U.S. and other countries.Microsoft®, Windows Server®, and Windows® are trademarks of Microsoft Corporation in the United States, other countries, or both.Other company, product, or service names may be trademarks or service marks of others.。

Mellanox 4036 QDR Switch White Paper


Mellanox GD4036 High-Performance QDR InfiniBand Switch. Mellanox is a global leader in grid backbone solutions, committed to providing the most comprehensive solutions for next-generation data center networked computing.

Mellanox's products span servers, network switching and storage interconnects. With advanced architectural design and rigorous quality control, Mellanox's hardware and software products have earned wide acclaim in the global market. Through OEM partnerships with IBM, HP, SGI, NEC, SUN and others, Mellanox products and solutions are being deployed across many industries.

As an industry high-performance InfiniBand switching solution, the Mellanox GD4036 delivers unprecedented performance and scalability for high-performance computing clusters and grids.

The GD4036 switch lets high-performance applications run on distributed server, storage and network resources.

The Mellanox GD4036 provides QDR 40Gb/s communication bandwidth to 36 nodes in a single chassis. Multiple GD4036 units, alone or combined with other Mellanox products, can build larger cluster computing systems, with configurable node counts ranging from a dozen or so to several thousand; the internally non-blocking switch design provides a reliable and efficient communication environment for cluster computing.

I. Mellanox GD4036 modules
The Mellanox GD4036 consists mainly of the chassis, sPSU power modules and the corresponding system cooling modules. The whole unit uses a modular, cable-free press-fit connector design, which greatly improves reliability while easing installation and later maintenance.

GD4036 chassis and components. The Mellanox GD4036 chassis follows industry-standard design and fully supports 19-inch rack mounting, in both network and server cabinets. At 1U high, multiple GD4036 units can easily be deployed in a standard 42U cabinet; rack rails are provided to simplify cabinet installation.

1. External ports: the Mellanox GD4036 chassis provides 36 QDR 40Gb/s ports, for a total throughput of 2.88Tb/s.

2. Management module: the Mellanox GD4036 has a built-in subnet manager, so no subnet manager software needs to be installed on servers. The management module provides a standard DB9 serial port and an Ethernet port for remote management.
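The 2.88Tb/s aggregate figure follows from simple arithmetic over the port count, as this small Python check illustrates (counting both directions of each full-duplex QDR port):

    ports = 36
    qdr_gbps = 40    # QDR line rate per direction, in Gb/s
    duplex = 2       # full bi-directional: count both directions

    total_tbps = ports * qdr_gbps * duplex / 1000
    print(f"{total_tbps:.2f} Tb/s")   # -> 2.88 Tb/s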

ONU Product White Paper


Product White Paper. Contents:
Chapter 1 Overview
Chapter 2 Product Functions
Chapter 3 Product Features
Chapter 4 Key Technologies
Chapter 5 Typical Deployment
Chapter 6 Product Specifications
Chapter 7 Product Cases
Chapter 8 Product Qualifications

Chapter 1 Overview
1. Background
Structured cabling systems based on traditional three-tier switched Ethernet increasingly show their "backwardness": complex network structure, high investment cost, many active devices, difficult construction, difficult maintenance, poor performance, and so on.

The two-tier, flattened network structure of the Passive Optical LAN (POL) solution offers longer networking distances, higher transmission bandwidth, stronger service-carrying capability, more economical construction costs, simpler maintenance, better energy savings, and freedom from oxidation and electromagnetic interference, so it may become the upgrade path for traditional three-tier switched Ethernet. However, some pain points in POL deployment urgently need solving. For example, ordinary or drop optical cable is complex to install and hard to maintain, and when a fiber faceplate connects to an ordinary ONU (commonly called an "optical modem") through a pigtail, the exposed pigtail is easily damaged.

2. The all-fiber network (CNFTTD) solution
The all-fiber network (CNFTTD, fiber to the desktop) solution inherits and innovates on the POL network. Through innovations such as "POL + micro-duct micro-cable", "single-stage splitting in the equipment room" and "ONU faceplates", it simplifies the information network topology into a very clean "flattened star": every information point (broadband, telephone, surveillance, AP, information display, access control, DDC, etc.) runs "one fiber to the desktop" from the equipment room, greatly reducing network design and construction work.

With the CNFTTD solution, a large or very large intelligent building needs no aggregation switches, no access switches and no telecom closets. Micro-duct micro-cable air-blowing technology makes network construction fast and convenient, saves cable tray and duct space, and conveniently reserves fiber routes for 5G. With single-stage splitting in the equipment room, simply changing the splitter's split ratio conveniently sets the access bandwidth of each information point, and network upgrades and maintenance become very simple.

The CNFTTD solution greatly improves the POL user experience, genuinely achieves fiber to the desktop (FTTD), and enables large-scale adoption of FTTD technology.

Mellanox Spectrum High-Density 1GbE to 400GbE Ethernet Switch Family Product Brief


The Mellanox Spectrum® Open Ethernet family includes a broad portfolio of fixed-form-factor switches, ranging from 16 through 128 ports and with speeds from 1Gb/s to 400Gb/s, allowing the construction of purpose-built data centers at any scale with any desired blocking ratio. This enables network and data center managers to design and implement cost-effective switch fabrics based on the "pay-as-you-grow" principle: a fabric consisting of a few servers can gradually expand to include hundreds of thousands of servers.

Incorporating SDN attributes, the Mellanox Ethernet solution rewards the data center administrator with tools that provide a clean, simple and flexible view, and orchestration capabilities, for the infrastructure. The result is an easily accessible framework that provides data center applications with utmost elasticity. Accompanied by the Mellanox NEO® networking orchestrator, as well as the world's fastest network interface cards, interconnect modules and cables, Mellanox provides a complete end-to-end Ethernet solution that scales to perform at the highest level.

Cloud Native Infrastructure
- Leaf/spine architectures that easily scale up to 10K+ nodes in 2 tiers
- Best-in-class VXLAN
- Automation with best-of-breed tools including Ansible, Chef, Puppet, and SaltStack
- OpenStack Neutron integration
- Hyperconverged infrastructure integration (Nutanix)
- Turnkey data center interconnect solutions with VXLAN

Storage or Machine Learning Interconnect
- Fair, high-bandwidth, low-latency and bottleneck-free data path
- Robust RDMA over Converged Ethernet (RoCE) transport for NVMe-oF or GPUDirect®
- Mellanox NEO based orchestration and seamless integration with various storage solutions
- Built-in telemetry with What Just Happened (WJH)™

Choice of Software - Open Ethernet
- Mellanox Onyx®, Cumulus® Linux, DENT, Microsoft SONiC, and more

Benefits
Mellanox provides the highest-performing Open Ethernet switch systems at port speeds ranging from 1GbE through 400GbE, enabling data center, cloud computing, storage, Web 2.0 and high-performance computing applications to operate with maximum functionality at scale.

PERFORMANCE WITHOUT COMPROMISE
- Fully shared packet buffer provides a fair, high-bandwidth datapath
- Intelligent congestion management that enables robust RoCE transport
- Adaptive flowlet routing to maximize link utilization

FEATURES WITHOUT COMPROMISE
- Single-pass VXLAN routing and bridging
- 10x better VXLAN scale
- Hardware-based NAT
- MPLS/IPv6 segment routing

VISIBILITY WITHOUT COMPROMISE
- Hardware-based sub-microsecond buffer tracking and data summarization
- Granular and contextual visibility with What Just Happened (WJH)
- Streaming and in-band telemetry

SCALE WITHOUT COMPROMISE
- Massive Layer-2/Layer-3 and ACL scale
- Large-scale flow counters

A CLOUD WITHOUT COMPROMISE
Mellanox specializes in designing advanced silicon and systems to accelerate software-defined data centers (SDDC).
Mellanox Spectrum switches support rich features while concurrently delivering the highest performance for the most demanding workloads.

The SN4000, SN3000 and SN2000 series offer three modes of operation:
- Preinstalled with Mellanox Onyx®, a home-grown operating system utilizing common networking user experiences and an industry-standard CLI.
- Preinstalled with Cumulus® Linux, a revolutionary operating system taking the Linux user experience from servers to switches and providing rich routing functionality for large-scale applications.
- Bare metal, including an ONIE image ready to be installed with the aforementioned or other ONIE-mounted operating systems.

SN2000 SERIES
                              SN2010                            SN2100            SN2700            SN2410
Connectors                    18 SFP28 25GbE + 4 QSFP28 100GbE  16 QSFP28 100GbE  32 QSFP28 100GbE  48 SFP28 25GbE + 8 QSFP28 100GbE
100GbE ports                  4                                 16                32                8
50GbE ports                   8                                 32                64                16
40GbE ports                   4                                 16                32                8
25GbE ports                   34                                64                64                64
10GbE ports                   34                                64                64                64
Height                        1RU                               1RU               1RU               1RU
Switching capacity [Tb/s]     1.7                               3.2               6.4               4
FRUs                          --                                --                PS and fans       PS and fans
PSU redundancy                yes                               yes               yes               yes
Fan redundancy                yes                               yes               yes               yes
CPU                           x86                               x86               x86               x86
Power consumption [W]         57                                94.3              150               165
Wire-speed switching [Bpps]   1.26                              2.38              4.76              2.97

SN3000 SERIES
                              SN3700            SN3700C           SN3510**                           SN3420
Connectors                    32 QSFP56 200GbE  32 QSFP56 100GbE  48 SFP56 50GbE + 6 QSFP-DD 400GbE  48 SFP28 25GbE + 12 QSFP28 100GbE
400GbE ports                  --                --                6                                  --
200GbE ports                  32                --                12                                 --
100GbE ports                  64+               32                24+                                12
50GbE ports                   128*              64                48+48*                             24
40GbE ports                   32                32                12                                 12
10GbE/25GbE ports             128               128               48+48                              48+48
1GbE ports                    128               128               48+48                              48+48
Height                        1U                1U                1U                                 1U
Switching capacity [Tb/s]     12.8              6.4               9.6                                4.8
FRUs                          yes               yes               yes                                yes
PSU redundancy                yes               yes               yes                                yes
Fan redundancy                yes               yes               yes                                yes
CPU                           x86               x86               x86                                x86
Wire-speed switching [Bpps]   8.33              4.76              7.16                               3.58
* 50G PAM-4    ** Available H2 2020    + 2x50G PAM-4

SN4000 SERIES
                              SN4800*                       SN4700             SN4600*           SN4600C
Connectors                    Modular, based on line cards  32 QSFP-DD 400GbE  64 QSFP56 200GbE  64 QSFP28 100GbE
400GbE ports                  Up to 32 in full chassis      32                 --                --
200GbE ports                  Up to 64 in full chassis      64                 64                --
100GbE ports                  Up to 128 in full chassis     128                128               64
50GbE ports                   Up to 128 in full chassis     128                128               128
40GbE ports                   Up to 128 in full chassis     64                 64                64
10GbE/25GbE ports             Up to 128 in full chassis     128                128               128
1GbE ports                    Up to 128 in full chassis     128                128               128
Height                        4U                            1U                 2U                2U
Switching capacity [Tb/s]     25.6                          25.6               25.6              12.8
FRUs                          yes                           yes                yes               yes
PSU redundancy                yes                           yes                yes               yes
Fan redundancy                yes                           yes                yes               yes
CPU                           x86                           x86                x86               x86
Wire-speed switching [Bpps]   8.4                           8.4                8.4               8.4
* Available H2 2020

This brochure describes hardware features and capabilities; please refer to the driver release notes for feature availability. Actual product may differ from the images.

WARRANTY INFORMATION
Mellanox switches come with a one-year limited hardware return-and-repair warranty, with a 14-business-day turnaround after the unit is received. For more information, please visit the Mellanox Technical Support User Guide.

ADDITIONAL INFORMATION
Support services including next-business-day and 4-hour technician dispatch are available. For more information, please visit the Mellanox Technical Support User Guide. Mellanox offers installation, configuration, troubleshooting and monitoring services, available on-site or remotely delivered. For more information, please visit the Mellanox Global Services website.

NSFOCUS Content Security Management System Technical Whitepaper

NSFOCUS Content Security Management System Product Whitepaper © 2011 NSFOCUS

■ Copyright Notice
Unless otherwise noted, the copyright in all text, document formats, illustrations, photographs, methods, processes and other content appearing in this document belongs to NSFOCUS and is protected by applicable intellectual property and copyright laws.

No individual or organization may reproduce or quote any part of this document in any way without the written authorization of NSFOCUS.

■ Trademarks
绿盟科技, NSFOCUS and 绿盟 are trademarks of NSFOCUS.

Contents
I. Preface
II. Why a Content Security Management System Is Needed
  2.1 The necessity of content security management
  2.2 Characteristics of content security management systems
III. How to Evaluate a Content Security Management System
IV. The NSFOCUS Content Security Management System
  4.1 Main functions
  4.2 Architecture
  4.3 Product features
    4.3.1 Efficient and accurate data processing
    4.3.2 Deep web content filtering
    4.3.3 Intelligent web reputation management
    4.3.4 End-to-end network behavior management
    4.3.5 Comprehensive outbound information management
    4.3.6 Multi-dimensional fine-grained traffic management
    4.3.7 High-performance network virus protection
    4.3.8 Comprehensive anti-spam protection
    4.3.9 Integrated high-performance firewall
    4.3.10 Object-based virtual systems
    4.3.11 Powerful, rich management capabilities
    4.3.12 Convenient and flexible extensibility
    4.3.13 "Zero administration" of event information
    4.3.14 Highly reliable self-protection
  4.4 Solutions
    4.4.1 Multi-link content security management solution
    4.4.2 Hybrid content security management solution
V. Conclusion

Figures
Figure 4.1 NSFOCUS content security management system functions
Figure 4.2 Typical NSFOCUS SCM deployment
Figure 4.3 NSFOCUS content security management system architecture
Figure 4.4 Multi-dimensional fine-grained traffic management
Figure 4.5 Virtual content management systems
Figure 4.6 Standalone multi-link SCM
Figure 4.7 Multi-link protection solution
Figure 4.8 Hybrid content security management solution

I. Preface
With the rapid development of Internet applications, computer networks are spreading quickly through every sector of the economy and daily life, making it ever easier to obtain, share and distribute information.

InfiniBand Switches

Mellanox's InfiniBand switch family provides users with best-in-class performance, high port density and a complete fabric management solution, enabling compute clusters and data centers of any size to reduce both operating costs and infrastructure complexity.

Mellanox InfiniBand switches comprise the Edge series and the core (director) series, supporting port speeds of 20, 40 and 56Gb/s and a wide range of configurations from 8 to 648 ports.

These switches let IT managers build the most cost-effective and scalable switch fabrics; from small clusters to large networks with thousands of nodes, they deliver messaging with guaranteed bandwidth and quality of service.
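To make the scaling claim concrete, here is a small sketch (my own illustration using the standard two-tier fat-tree formula, not figures from this datasheet) of how many end nodes a non-blocking two-tier fabric built from fixed-radix switches can reach; with 36-port switches it yields the 648-node figure that recurs in the subnet-manager specs below.

```python
# Illustrative two-tier fat-tree sizing (standard formula, not vendor data):
# each leaf switch uses half its ports for hosts and half for uplinks, so a
# non-blocking two-tier fabric reaches (radix/2) * radix end nodes.

def two_tier_max_nodes(radix: int) -> int:
    """Max end nodes in a non-blocking two-tier fat tree of radix-port switches."""
    return (radix // 2) * radix

for radix in (8, 18, 36):
    print(radix, "->", two_tier_max_nodes(radix))
# 36 -> 648, matching the 648-node subnet-manager limit quoted below for the
# managed 36-port switches and director-class systems.
```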

Mellanox InfiniBand switches feature high performance, good serviceability, energy efficiency and high availability.

Combining its leadership in InfiniBand with integrated Ethernet gateway technology, the Mellanox InfiniBand switch family provides scalable interconnect solutions for the world's largest and fastest high-performance computing systems and next-generation data centers.

Edge-series switches: Mellanox Edge-series switches provide a high-performance interconnect fabric solution.

In a 1U form factor they deliver up to 4Tb/s of total non-blocking bandwidth with 100-165ns port-to-port latency.

Each port supports up to 56Gb/s (QSFP) of full bidirectional bandwidth.

Edge switches are the ideal choice for interconnecting equipment within a single rack or for small to mid-size cluster interconnects.

Mellanox Edge-series switches come in unmanaged and managed variants to suit a variety of deployment requirements.

Core-series switches (high-density chassis switch systems): Mellanox core-series switches provide the highest-density switching solutions, scaling from 8.64Tb/s to 72.5Tb/s of bandwidth, with low latency and port speeds of up to 56Gb/s.

The modular switch design provides cluster scalability, allowing customer investment to grow with cluster size.

Benefits:
● Built on Mellanox's fourth- and fifth-generation InfiniScale® and SwitchX® switch silicon
● From the industry leader in low-power, high-density, low-cost switch manufacturing
● Ultra-low latency
● Fine-grained QoS guarantees, whether for cluster, LAN or SAN traffic
● Fast, easy setup and management
● Maximum sustained performance with no congestion
● Fabric management for cluster and I/O convergence applications

EDGE SWITCHES
|                      | IS5022  | IS5023   | IS5024   | IS5025   | IS5030                  | IS5035                  | Grid Director 4036      | Grid Director 4036E     | SX6025    | SX6036                  |
| Ports                | 8       | 18       | 36       | 36       | 36                      | 36                      | 36                      | 34 QDR + 2 1/10GbE      | 36        | 36                      |
| Port speed           | 40Gb/s  | 40Gb/s   | 40Gb/s   | 40Gb/s   | 40Gb/s                  | 40Gb/s                  | 40Gb/s                  | 40Gb/s                  | 56Gb/s    | 56Gb/s                  |
| Switching capacity   | 640Gb/s | 1.44Tb/s | 2.88Tb/s | 2.88Tb/s | 2.88Tb/s                | 2.88Tb/s                | 2.88Tb/s                | 2.72Tb/s                | 4.032Tb/s | 4.032Tb/s               |
| Subnet manager (SM)  | unmanaged | unmanaged | unmanaged | unmanaged | manages up to 108 nodes | manages up to 648 nodes | manages up to 648 nodes | manages up to 648 nodes | unmanaged | manages up to 648 nodes |
Port-to-port latency: <100ns on all models. Performance: non-blocking on all models.
Key features: IBTA 1.21 compliant (SX6025/SX6036: IBTA 1.21 and 1.3); 9 virtual lanes (8 data + 1 management); adaptive routing; congestion control; port mirroring.

CORE SWITCHES
|                      | IS5100    | IS5200    | IS5300    | IS5600    | Grid Director 4200 | Grid Director 4700 | SX6536    |
| Ports                | 108       | 216       | 324       | 648       | 144/162            | 324/648            | 648       |
| Port speed           | 40Gb/s    | 40Gb/s    | 40Gb/s    | 40Gb/s    | 40Gb/s             | 40Gb/s             | 56Gb/s    |
| Switching capacity   | 8.64Tb/s  | 17.28Tb/s | 25.9Tb/s  | 51.8Tb/s  | 11.52Tb/s          | 25.92Tb/s          | 72.52Tb/s |
| Port-to-port latency | 100-300ns | 100-300ns | 100-300ns | 100-300ns | 100-300ns          | 100-300ns          | 165-495ns |
Subnet manager (SM): manages up to 648 nodes on all models. Performance: non-blocking on all models.
Key features: IBTA 1.21 compliant; 9 virtual lanes (8 data + 1 management); adaptive routing; congestion control; port mirroring.

General distributor: 南京斯坦德股份有限公司, Tel: 025-********-219, Fax: 025-********

iMultiLink Whitepaper

Contents
• Overview
• Terminology
• Product introduction
• Typical deployments
• Success stories
• Architecture
• System architecture
• Main modules
• Function description
• Product features
• Complete functionality
• Open system with strong extensibility
• Support for multiple platforms
• High system efficiency
• Stable and reliable
• Easy to use
• Application environment
• Hardware environment
• Software environment
• Development tools
• Statement

Mellanox GD4036 High-Performance QDR InfiniBand Switch
Mellanox is a global leader in grid backbone solutions, committed to providing the most comprehensive solutions for next-generation networked data center computing.

Mellanox's products span servers, network switching and storage interconnects. Thanks to advanced architectural design and rigorous quality control, Mellanox hardware and software have won wide acclaim among users worldwide; OEM partnerships have been established with IBM, HP, SGI, NEC, SUN and other partners, and Mellanox products and solutions are deployed across a broad range of industries.

As an industry-leading high-performance InfiniBand switching solution, the Mellanox GD4036 delivers an unprecedented level of performance and scalability for high-performance computing clusters and grids.

The GD4036 switch lets high-performance applications run on distributed server, storage and network resources.

The Mellanox GD4036 is carefully engineered to provide QDR 40Gb/s communication bandwidth to 36 nodes in a single chassis. By combining multiple GD4036 units, or pairing them with other Mellanox products, larger cluster computing systems can be built, with configurations ranging from a dozen to several thousand nodes; the internally non-blocking switch design provides the cluster with a maximally reliable and efficient communication environment.
I. Mellanox GD4036 Module Overview
The Mellanox GD4036 consists mainly of the chassis, sPSU power modules and the corresponding cooling modules. The whole unit uses a modular, cable-free connector design, which greatly improves reliability while simplifying installation and subsequent maintenance.

GD4036 chassis and components
The Mellanox GD4036 chassis follows industry-standard design and is fully compatible with 19-inch racks, supporting installation in both network and server cabinets. At 1U high, multiple GD4036 units can easily be deployed in a standard 42U cabinet, and rack rails are supplied to simplify mounting.
1. External ports
The Mellanox GD4036 chassis provides 36 QDR 40Gb/s ports, for a total throughput of 2.88Tb/s.
2. Management module
The Mellanox GD4036 has a built-in subnet manager, so no subnet manager software needs to be installed on the servers. The management module provides a standard DB9 serial port and an Ethernet port for remote management.

It also provides a standard USB port for software and firmware upgrades.

Besides the standard 36-port switch silicon, the GD4036 includes an on-board monitoring system built from a low-power CPU and associated memory. Running the appropriate firmware, it monitors in real time the power modules, the fan and cooling status, the system operating temperature and other details; switch ports can be enabled, disabled and speed-adjusted.
3. Fan module
The Mellanox GD4036 chassis comes standard with one cooling module containing two redundant fans that can be hot-swapped, improving availability and serviceability.

4. Power modules
The chassis is configured with two power modules of up to 350W each, enabling N+1 and N+N redundant power. Each power module can feed every module in the chassis, so users need not worry about the partitioned-power failures seen in some competing products.

All power modules sit on the sides of the chassis and are hot-swappable, on the opposite side from the external cable connections. This design anticipates the challenge that cabling poses for future power-system maintenance and makes servicing much easier.

II. Mellanox GD4036 Reliability
The Mellanox GD4036 is fully modular, with all components joined through advanced, reliable connector modules, fundamentally guaranteeing the reliability of the system's internal communication links.
On the management side, the Mellanox GD4036 has a built-in subnet manager and an integrated Device Manager for hardware monitoring, and it fully supports the Unified Fabric Manager (UFM) software.
On the power side, the Mellanox GD4036 supports N+1 and N+N redundant power configurations. All power modules feed the entire chassis and share the load evenly, and they can be hot-swapped, keeping the system running without interruption.
The Mellanox GD4036 provides one cooling module with two fans that back each other up; the advanced chassis design also maximizes cooling efficiency.
The Mellanox GD4036 has passed FCC, UL, CB, VCCI and other international certifications, earning recognition from authoritative bodies for its design, operational reliability and electromagnetic stability.
III. Mellanox GD4036 Performance
The Mellanox GD4036 uses a fully interconnected, non-blocking architecture with a total throughput of 2.88Tb/s; the latency between two ports of the same internal switch element is under 100 nanoseconds.
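As a rough worked example (my own estimate, not a vendor measurement): end-to-end fabric latency across multiple GD4036 hops can be budgeted from the per-hop figure above plus cable propagation delay; the ~5 ns per meter propagation figure is a common rule of thumb and an assumption here.

```python
# Rough end-to-end latency budget across a QDR fabric (illustrative only).
# Assumes <100 ns per GD4036 hop (from this document) and ~5 ns/m of cable,
# a commonly used propagation figure (assumption, not from this document).

SWITCH_HOP_NS = 100   # upper bound per switch hop, per this document
CABLE_NS_PER_M = 5    # assumed propagation delay per meter

def fabric_latency_ns(hops: int, total_cable_m: float) -> float:
    """Switch plus cable latency; excludes host adapter and software time."""
    return hops * SWITCH_HOP_NS + total_cable_m * CABLE_NS_PER_M

# Three switch hops (leaf-spine-leaf) over ~12 m of cable:
print(fabric_latency_ns(hops=3, total_cable_m=12))  # 360.0 ns
```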
IV. Mellanox GD4036 Management
The Mellanox GD4036 offers a command-line interface as well as the advanced UFM management software. With Mellanox UFM, the InfiniBand network is no longer a mysterious black box: monitoring and managing the whole fabric becomes transparent and systematic.

1. Key features of Mellanox UFM
● Application-centric network management;
● Unlimited scalability with seamless support for applications, databases and storage systems;
● Intuitive display of network traffic and device health, giving users a clear, in-depth view of network operation;
● Advanced congestion detection, analysis and optimization;
● Communication routing optimization based on application workflow and network topology;
● Configurable fault alerting, keeping users fully aware of network conditions;
● Network partitioning with multiple service levels, easy to define and adjust;
● Multiple independent, application-based communication zones within one shared fabric;
● Centralized management of InfiniBand devices, simplifying device management in large fabrics;
● A secure and reliable HA architecture ensuring high availability of the UFM management system;
● An API for integrating UFM into an existing umbrella management system (see the sketch after this list).
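This document does not specify the API itself, so the following is only a hedged sketch of what polling such a fabric-manager REST API might look like; the host, base path, endpoint and JSON field names here are assumptions for illustration, not confirmed UFM interfaces.

```python
# Hypothetical sketch of polling a fabric-manager REST API (illustrative).
# The base URL, endpoint path and JSON fields below are assumptions, not a
# documented UFM interface; consult the UFM API reference for the real names.
import requests

UFM_BASE = "https://ufm.example.com/ufmRest"   # hypothetical host and base path

def list_switches(session: requests.Session) -> list[dict]:
    """Fetch managed systems and keep only switches (assumed schema)."""
    resp = session.get(f"{UFM_BASE}/resources/systems", timeout=10)
    resp.raise_for_status()
    return [s for s in resp.json() if s.get("type") == "switch"]

if __name__ == "__main__":
    with requests.Session() as s:
        s.auth = ("admin", "password")  # placeholder credentials
        for switch in list_switches(s):
            print(switch.get("system_name"), switch.get("state"))
```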

2. Mellanox UFM network discovery and control
Mellanox UFM integrates an advanced monitoring engine that provides real-time monitoring of InfiniBand switches and of the hosts attached to the InfiniBand network.

Mellanox UFM provides a customizable dashboard presenting network health together with host CPU, memory and disk utilization. From the dashboard it is easy to see the top 10 servers by network bandwidth consumption (the count is configurable), the top 10 points of congestion in the network, a real-time list of fault alerts, where the congestion hotspots are, and more.
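As an illustration of the kind of ranking such a dashboard performs (my own sketch over made-up counter data, not UFM code): given per-port byte counters sampled at two points in time, the top talkers fall out of a simple sort.

```python
# Illustrative Top-N ranking over port byte counters (made-up data, not UFM).
# Two counter snapshots taken dt_s seconds apart give per-port throughput.

def top_talkers(before: dict[str, int], after: dict[str, int],
                dt_s: float, n: int = 10) -> list[tuple[str, float]]:
    """Return the n ports with the highest average Gb/s between snapshots."""
    rates = {
        port: (after[port] - before[port]) * 8 / dt_s / 1e9  # bytes -> Gb/s
        for port in after if port in before
    }
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)[:n]

before = {"node01/p1": 0, "node02/p1": 0, "node03/p1": 0}
after = {"node01/p1": 60_000_000_000, "node02/p1": 9_000_000_000,
         "node03/p1": 42_000_000_000}
for port, gbps in top_talkers(before, after, dt_s=30, n=10):
    print(f"{port}: {gbps:.1f} Gb/s")
```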
3. Automatic topology discovery with real-time bottleneck display
Mellanox UFM automatically discovers the network topology and draws the corresponding topology map. Through real-time monitoring of the communication links, it automatically detects congestion hotspots and presents them graphically, helping users pinpoint network congestion precisely and providing reference data for the next round of performance tuning.

4. Partition and routing optimization
Mellanox UFM provides advanced traffic optimization. For compute groups with different requirements (low latency, high bandwidth, and so on), logical compute resource groups can be created; traffic between nodes within a group is automatically optimized for the configured requirement type, so communication is tiered and overall efficiency improves markedly.
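InfiniBand implements this kind of isolation with partition keys (P_Keys). As a hedged sketch of the idea: the file format below is the one used by the OpenSM subnet manager's partitions.conf; whether UFM emits exactly this format is an assumption, and all P_Key values and GUIDs are made up.

```python
# Sketch: generating an OpenSM-style partitions.conf for two compute groups
# (illustrative; P_Key values and port GUIDs are invented, and whether UFM
# writes exactly this format is an assumption -- OpenSM documents the syntax).

groups = {
    # partition name -> (pkey, member port GUIDs)
    "lowlat": (0x8001, ["0x0002c9030000aa01", "0x0002c9030000aa02"]),
    "highbw": (0x8002, ["0x0002c9030000bb01", "0x0002c9030000bb02"]),
}

lines = ["Default=0x7fff, ipoib: ALL=full;"]  # default partition, all members
for name, (pkey, guids) in groups.items():
    members = ", ".join(f"{g}=full" for g in guids)
    lines.append(f"{name}=0x{pkey:04x}: {members};")

print("\n".join(lines))
# Writing this to the subnet manager's partitions.conf and restarting it
# would enforce the isolation (the file path is distribution-dependent).
```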
Mellanox UFM also provides its own Traffic Optimized Routing (TOR) algorithm: after optimization, congestion hotspots are automatically rebalanced, greatly reducing the overall loss of compute efficiency caused by contention for network bandwidth.

5. Fabric-wide communication log collection and retention
Mellanox UFM automatically collects and stores communication logs for the entire network, including switches, compute nodes and I/O nodes, providing strong support for traffic analysis and troubleshooting.

6. InfiniBand devices and host platforms supported by Mellanox UFM
Switch platforms supported by Mellanox UFM:
Mellanox GD2004/2012 series
Mellanox Vantage series
Mellanox GD4000 series
Mellanox IS5000 series
Host platforms supported by Mellanox UFM:
Red Hat 5.1/5.2/5.3/5.4/5.5/5.6/6.0
CentOS 5.1/5.2/5.3/5.4/5.5/5.6/6.0
Windows 2003/2008
V. Mellanox GD4036 at OEM Vendors
The Mellanox GD4036 has passed the rigorous OEM qualification tests of IBM, HP, Fujitsu, Sugon, Inspur and other server vendors; it is now on the market and has already won a considerable number of orders, further showing that the GD4036's reliability, performance and scalability have earned the full recognition of international vendors and users alike.
