Network Processor Architecture for Next Generation Packet Format


NetLogic Microsystems Powers China's Next-Generation Internet Networks


...The unique combination of its processor cores and its scalability from 1 to 128 nxCPUs enables it to deliver more than 20 Mpps of intelligent application processing performance, making it the industry's highest-performance multicore communications processor for Layer 4-7 intelligent networking, services and application processing. Behrooz Abdi, Executive Vice President and General Manager at NetLogic Microsystems, said: "For us, China is an exciting and very important ..."
...improving network quality and reliability. However, the MSC Servers and MGWs must be given the necessary software and hardware upgrades, which makes the network data more complex. In addition, the NRI and the TMSI that identifies the subscriber share 25 bits (of which the NRI takes at most 10 bits). The more bits the NRI uses, the more MSC Servers a single MSC Server pool can accommodate, but the fewer subscribers each MSC Server can serve. An NRI of 4 to 5 bits is therefore generally appropriate, in which case each MSC Server serves at most roughly 1 to 2 million subscribers. Because of this limit, the maximum capacity of each MSC Pool is around 32 million subscribers. Since Iu-Flex uses the NRI algorithm for distribution, the NRI values used in two adjacent MSC Server pools must not overlap, so the pools need to be partitioned sensibly along the lines of the four-color principle (Figure 9).

In Mini-Flex networking, the Iu and A interfaces of the existing network are carried mainly over E1 and ATM. A large amount of redundant circuit capacity must be reserved so that capacity is unaffected when an MGW is withdrawn, so circuit utilization is low and the network structure is relatively complex. As market competition becomes increasingly fierce, the importance of network stability becomes ever more prominent. This article has analyzed the advantages of the MSC POOL + Mini-Flex networking approach and its impact on the network, which is of practical value in keeping the network stable.
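To make the NRI sizing trade-off described above concrete, the short sketch below reruns the arithmetic. The 25 shared TMSI bits and the 4-5 bit NRI come from the text; the 2-million-subscriber ceiling per MSC Server is an assumed illustrative figure taken from the 1-2 million range quoted above.

```python
# Illustrative sketch of the MSC Pool sizing trade-off described above.
# Assumption (not a vendor figure): each MSC Server handles at most
# MAX_SUBS_PER_SERVER subscribers regardless of how the TMSI is split.

TMSI_SHARED_BITS = 25            # bits shared by the NRI and the per-server subscriber id
MAX_SUBS_PER_SERVER = 2_000_000  # upper end of the 1-2 million range quoted in the text

def pool_capacity(nri_bits: int) -> dict:
    """Show how NRI width trades servers-per-pool against subscribers-per-server."""
    servers_per_pool = 2 ** nri_bits                        # distinct NRI values
    ids_per_server = 2 ** (TMSI_SHARED_BITS - nri_bits)     # TMSIs left per server
    subs_per_server = min(ids_per_server, MAX_SUBS_PER_SERVER)
    return {
        "nri_bits": nri_bits,
        "servers_per_pool": servers_per_pool,
        "subs_per_server": subs_per_server,
        "pool_capacity": servers_per_pool * subs_per_server,
    }

if __name__ == "__main__":
    for bits in (4, 5, 6):
        print(pool_capacity(bits))
    # With 4-5 NRI bits, 16-32 servers x roughly 1-2 M subscribers each gives a
    # pool capacity on the order of 32 million subscribers, matching the text.
```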

Dell EMC S5148F-ON 25GbE Top-of-Rack (ToR) Open Networking Switch Datasheet


The Dell EMC S5148 switch is an innovative, future-ready Top-of-Rack (ToR) open networking switch providing excellent capabilities and cost-effectiveness for enterprise, mid-market, Tier 2 cloud and NFV service providers with demanding compute and storage traffic environments. The S5148F-ON 25GbE switch is Dell EMC's latest disaggregated hardware and software data center networking solution, providing state-of-the-art data plane programmability, backward-compatible 25GbE server port connections, 100GbE uplinks, a storage-optimized architecture, and a broad range of functionality to meet the growing demands of today's data center environment now and in the future.

The compact S5148F-ON design provides industry-leading density with up to 72 ports of 25GbE, or up to 48 ports of 25GbE and 6 ports of 100GbE, in a 1RU form factor. Using industry-leading hardware and a choice of Dell EMC's OS10 or select third-party network operating systems and tools, the S5148F-ON series offers flexibility through configuration profiles and delivers non-blocking performance for workloads sensitive to packet loss. The compact S5148F-ON model provides multi-rate speeds, enabling denser footprints and simplifying migration to 25GbE server connections and 100GbE fabrics.

Data plane programmability allows the S5148F-ON to meet the demands of the converged software-defined data center by offering support for future or emerging protocols, including hardware-based VXLAN (Layer 2 and Layer 3 gateway) support. Priority-based flow control (PFC), data center bridging exchange (DCBX) and enhanced transmission selection (ETS) make the S5148F-ON an excellent choice for DCB environments. The Dell EMC S5148F-ON model supports the open source Open Network Install Environment (ONIE) for zero-touch installation of alternate network operating systems.

Maximum performance and functionality
The Dell EMC Networking S-Series S5148F-ON is a high-performance, multi-function, 10/25/40/50/100 GbE ToR switch purpose-built for applications in high-performance data center, cloud and computing environments. In addition, the S5148F-ON incorporates multiple architectural features that optimize data center network flexibility, efficiency and availability, including IO panel to PSU airflow or PSU to IO panel airflow for hot/cold aisle environments.

Key applications
• Organizations looking to enter the software-defined data center era with a choice of networking technologies designed to deliver the flexibility they need
• Use cases that require customization of any packet processing steps or support for new protocols
• Native high-density 25 GbE ToR server access in high-performance data center environments
• 25 GbE backward compatible to 10G and 1G for future proofing and data center server migration to faster uplink speeds.
• Capability to support mixed 25G and 10G servers on front panel ports without any limitations• iSCSI storage deployment including DCB converged lossless transactions• Suitable as a T oR or Leaf switch in 100G Active Fabric implementations• As a high speed VXLAN L2/L3 gateway that connects the hypervisor-based overlay networks with non-virtualized • infrastructure•Emerging applications requiring hardware support for new protocolsKey features •1RU high-density 25/10/1 GbE T oR switch with up to forty eight ports of native 25 GbE (SFP28) ports supporting 25 GbE without breakout cables• Multi-rate 100GbE ports support 10/25/40/50 GbE• 3.6 Tbps (full-duplex) non-blocking, cut-through switching fabric delivers line-rate performance under full load**• Programmable packet modification and forwarding • Programmable packet mirroring and multi-pathing • Converged network support for DCB and ECN capability • IO panel to PSU airflow or PSU to IO panel airflow • Redundant, hot-swappable power supplies and fans • IEEE 1588v2 PTP hardware supportDELL EMC NETWORKING S5148F-ON SERIES SWITCHProgrammable high-performance open networking top-of-rack switch with native 25Gserver ports and 100G network fabric connectivity• FCoE transit (FIP Snooping)• Full data center bridging (DCB) support for lossless iSCSI SANs, RoCE and converged network.• Redundant, hot-swappable power supplies and fans• I/O panel to PSU airflow or PSU to I/O panel airflow(reversable airflow)• VRF-lite enables sharing of networking infrastructure and provides L3 traffic isolation across tenants• 16, 28, 40, 52, 64 10GbE ports availableKey features with Dell EMC Networking OS10• Consistent DevOps framework across compute, storage and networking elements• Standard networking features, interfaces and scripting functions for legacy network operations integration• Standards-based switching hardware abstraction via Switch Abstraction Interface (SAI)• Pervasive, unrestricted developer environment via Control Plane Services (CPS)• Open and programmatic management interface via Common Management Services (CMS)• OS10 Premium Edition software enables Dell EMC layer 2 and 3 switching and routing protocols with integrated IP Services,Quality of Service, Manageability and Automation features• Platform agnostic via standard hardware abstraction layer (OCP-SAI)• Unmodified Linux kernel and unmodified Linux distribution• OS10 Open Edition software decoupled from L2/L3 protocol stack and services• Leverage common open source tools and best-practices (data models, commit rollbacks)• Increase VM Mobility region by stretching L2 VLAN within or across two DCs with unique VLT capabilities• Scalable L2 and L3 Ethernet Switching with QoS, ACL and a full complement of standards based IPv4 and IPv6 features including OSPF, BGP and PBR• Enhanced mirroring capabilities including local mirroring, Remote Port Mirroring (RPM), and Encapsulated Remote Port Mirroring(ERPM).• Converged network support for DCB, with priority flow control (802.1Qbb), ETS (802.1Qaz), DCBx and iSCSI TLV• Rogue NIC control provides hardware-based protection from NICS sending out excessive pause frames48 line-rate 25 Gigabit Ethernet SFP28 ports6 line-rate 100 Gigabit Ethernet QSFP28 ports1 RJ45 console/management port with RS232signaling1 Micro-USB type B optional console port1 10/100/1000 Base-T Ethernet port used asmanagement port1 USB type A port for the external mass storage Size: 1 RU, 1.72 h x 17.1 w x 18.1” d (4.4 h x 43.4 w x46 cm d)Weight: 22lbs (9.97kg)ISO 7779 A-weighted sound 
pressure level: 59.6 dBA at 73.4°F (23°C)Power supply: 100–240 VAC 50/60 HzMax. thermal output: 1956 BTU/hMax. current draw per system:5.73A/4.8A at 100/120V AC2.87A/2.4A at 200/240V ACMax. power consumption: 516 Watts (AC)T yp. power consumption: 421 Watts (AC) with all optics loadedMax. operating specifications:Operating temperature: 32° to 113°F (0° to 45°C) Operating humidity: 5 to 90% (RH), non-condensingFresh Air Compliant to 45CMax. non-operating specifications:Storage temperature: –40° to 158°F (–40° to70°C)Storage humidity: 5 to 95% (RH), non-condensingRedundancyHot swappable redundant power suppliesHot swappable redundant fansPerformanceSwitch fabric capacity: 3.6TbpsPacket buffer memory: 16MBCPU memory: 16GBMAC addresses: Up to 512KARP table: Up to 256KIPv4 routes: Up to 128KIPv6 routes: Up to 64KMulticast hosts: Up to 64KLink aggregation: Unlimited links per group, up to 36 groupsLayer 2 VLANs: 4KMSTP: 64 instancesLAG Load Balancing: User Configurable (MAC, IP, TCP/UDPport)IEEE Compliance802.1AB LLDPTIA-1057 LLDP-MED802.1s MSTP802.1w RSTP 802.3ad Link Aggregation with LACP802.3ae 10 Gigabit Ethernet (10GBase-X)802.3ba 40 Gigabit Ethernet (40GBase-X)802.3i Ethernet (10Base-T)802.3u Fast Ethernet (100Base-TX)802.3z Gigabit Ethernet (1000BaseX)802.1D Bridging, STP802.1p L2 Prioritization802.1Q VLAN T agging, Double VLAN T agging,GVRP802.1Qbb PFC802.1Qaz ETS802.1s MSTP802.1w RSTPPVST+802.1X Network Access Control802.3ab Gigabit Ethernet (1000BASE-T) orbreakout802.3ac Frame Extensions for VLAN T agging802.3ad Link Aggregation with LACP802.3ae 10 Gigabit Ethernet (10GBase-X)802.3ba 40 Gigabit Ethernet (40GBase-SR4,40GBase-CR4, 40GBase-LR4, 100GBase-SR10,100GBase-LR4, 100GBase-ER4) on optical ports802.3bj 100 Gigabit Ethernet802.3u Fast Ethernet (100Base-TX) on mgmtports802.3x Flow Control802.3z Gigabit Ethernet (1000Base-X) with QSAANSI/TIA-1057 LLDP-MEDJumbo MTU support 9,416 bytesLayer2 Protocols4301 Security Architecture for IPSec*4302 I PSec Authentication Header*4303 E SP Protocol*802.1D Compatible802.1p L2 Prioritization802.1Q VLAN T agging802.1s MSTP802.1w RSTP802.1t RPVST+802.3ad Link Aggregation with LACPVLT Virtual Link TrunkingRFC Compliance768 UDP793 TCP854 T elnet959 FTP1321 MD51350 TFTP2474 Differentiated Services2698 T wo Rate Three Color Marker3164 Syslog4254 SSHv2791 I Pv4792 ICMP826 ARP1027 Proxy ARP1035 DNS (client)1042 Ethernet Transmission1191 Path MTU Discovery1305 NTPv41519 CIDR1812 Routers1858 IP Fragment Filtering2131 DHCP (server and relay)5798 VRRP3021 31-bit Prefixes3046 DHCP Option 82 (Relay)1812 Requirements for IPv4 Routers1918 Address Allocation for Private Internets2474 Diffserv Field in IPv4 and Ipv6 Headers2596 Assured Forwarding PHB Group3195 Reliable Delivery for Syslog3246 Expedited Assured Forwarding4364 VRF-lite (IPv4 VRF with OSPF andBGP)*General IPv6 Protocols1981 Path MTU Discovery*2460 I Pv62461 Neighbor Discovery*2462 Stateless Address AutoConfig2463 I CMPv62464 Ethernet Transmission2675 Jumbo grams3587 Global Unicast Address Format4291 IPv6 Addressing2464 Transmission of IPv6 Packets overEthernet Networks2711 IPv6 Router Alert Option4007 IPv6 Scoped Address Architecture4213 Basic Transition Mechanisms for IPv6Hosts and Routers4291 IPv6 Addressing Architecture5095 Deprecation of T ype 0 Routing Headers inI Pv6IPv6 Management support (telnet, FTP, TACACS,RADIUS, SSH, NTP)OSPF (v2/v3)1587 NSSA1745 OSPF/BGP interaction1765 OSPF Database overflow2154 MD52328 OSPFv22370 Opaque LSA3101 OSPF NSSA3623 OSPF Graceful Restart (Helper mode)*BGP 1997 
Communities 2385 MD52439 Route Flap Damping 2796 Route Reflection 2842 Capabilities 2918 Route Refresh 3065 Confederations 4271 BGP-44360 Extended Communities 4893 4-byte ASN5396 4-byte ASN Representation 5492Capabilities AdvertisementLinux Distribution Debian Linux version 8.4Linux Kernel 3.16MIBSIP MIB– Net SNMPIP Forward MIB– Net SNMPHost Resources MIB– Net SNMP IF MIB – Net SNMP LLDP MIB Entity MIB LAG MIBDell-Vendor MIBTCP MIB – Net SNMP UDP MIB – Net SNMP SNMPv2 MIB – Net SNMP Network Management SNMPv1/2SSHv2FTP, TFTP, SCP SyslogPort Mirroring RADIUS 802.1XSupport Assist (Phone Home)Netconf APIs XML SchemaCLI Commit (Scratchpad)AutomationControl Plane Services APIs Linux Utilities and Scripting Tools Quality of Service Access Control Lists Prefix List Route-MapRate Shaping (Egress)Rate Policing (Ingress)Scheduling Algorithms Round RobinWeighted Round Robin Deficit Round Robin Strict PriorityWeighted Random Early Detect Security 2865 RADIUS 3162 Radius and IPv64250, 4251, 4252, 4253, 4254 SSHv2Data center bridging802.1QbbPriority-Based Flow Control802.1Qaz Enhanced Transmission Selection (ETS)*Data Center Bridging eXchange(DCBx) DCBx Application TLV (iSCSI, FCoE*)Regulatory compliance SafetyUL/CSA 60950-1, Second Edition EN 60950-1, Second EditionIEC 60950-1, Second Edition Including All National Deviations and Group DifferencesEN 60825-1 Safety of Laser Products Part 1: EquipmentClassification Requirements and User’s GuideEN 60825-2 Safety of Laser Products Part 2: Safety of Optical Fibre Communication Systems FDA Regulation 21 CFR 1040.10 and 1040.11Emissions & Immunity EMC complianceFCC Part 15 (CFR 47) (USA) Class A ICES-003 (Canada) Class AEN55032: 2015 (Europe) Class A CISPR32 (International) Class AAS/NZS CISPR32 (Australia and New Zealand) Class AVCCI (Japan) Class A KN32 (Korea) Class ACNS13438 (T aiwan) Class A CISPR22EN55022EN61000-3-2EN61000-3-3EN61000-6-1EN300 386EN 61000-4-2 ESDEN 61000-4-3 Radiated Immunity EN 61000-4-4 EFT EN 61000-4-5 SurgeEN 61000-4-6 Low Frequency Conducted Immunity NEBSGR-63-Core GR-1089-Core ATT -TP-76200VZ.TPR.9305RoHSRoHS 6 and China RoHS compliantCertificationsJapan: VCCI V3/2009 Class AUSA: FCC CFR 47 Part 15, Subpart B:2009, Class A Warranty1 Year Return to DepotLearn more at /Networking*Future release**Packet sizes over 147 BytesIT Lifecycle Services for NetworkingExperts, insights and easeOur highly trained experts, withinnovative tools and proven processes, help you transform your IT investments into strategic advantages.Plan & Design Let us analyze yourmultivendor environment and deliver a comprehensive report and action plan to build upon the existing network and improve performance.Deploy & IntegrateGet new wired or wireless network technology installed and configured with ProDeploy. Reduce costs, save time, and get up and running cateEnsure your staff builds the right skills for long-termsuccess. Get certified on Dell EMC Networking technology and learn how to increase performance and optimize infrastructure.Manage & SupportGain access to technical experts and quickly resolve multivendor networking challenges with ProSupport. Spend less time resolving network issues and more time innovating.OptimizeMaximize performance for dynamic IT environments with Dell EMC Optimize. 
Benefit from in-depth predictive analysis, remote monitoring and a dedicated systems analyst for your network.

Retire
We can help you resell or retire excess hardware while meeting local regulatory guidelines and acting in an environmentally responsible way.

Learn more at /Services
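As a quick sanity check of the headline numbers quoted earlier in this datasheet (48 x 25GbE plus 6 x 100GbE in 1RU against a 3.6 Tbps full-duplex fabric, and up to 72 ports of 25GbE with breakouts), the back-of-the-envelope sketch below redoes the arithmetic; it is only an illustration, not vendor tooling.

```python
# Back-of-the-envelope check of the S5148F-ON figures quoted in the datasheet text.

PORTS_25G = 48      # SFP28 server-facing ports
PORTS_100G = 6      # QSFP28 uplink ports

one_way_gbps = PORTS_25G * 25 + PORTS_100G * 100       # 1200 + 600 = 1800 Gbps
full_duplex_tbps = one_way_gbps * 2 / 1000              # 3.6 Tbps

# Breakout view: each 100GbE port can be split into 4 x 25GbE,
# which gives the "up to 72 ports of 25GbE" figure.
ports_25g_with_breakout = PORTS_25G + PORTS_100G * 4    # 72

print(f"Aggregate one-way port bandwidth : {one_way_gbps} Gbps")
print(f"Full-duplex switching capacity   : {full_duplex_tbps} Tbps")
print(f"25GbE ports with 100G breakouts  : {ports_25g_with_breakout}")
```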

Dell EMC Networking S4048T-ON Switch Datasheet


The Dell EMC Networking S4048T-ON switch is the industry's latest data center networking solution, empowering organizations to deploy modern workloads and applications designed for the open networking era. Businesses that have made the transition away from monolithic proprietary mainframe systems to industry-standard server platforms can now enjoy even greater benefits from Dell EMC open networking platforms. By using industry-leading hardware and a choice of leading network operating systems to simplify data center fabric orchestration and automation, organizations can tailor their network to their unique requirements and accelerate innovation.

These new offerings provide the needed flexibility to transform data centers. High-capacity network fabrics are cost-effective and easy to deploy, providing a clear path to the software-defined data center of the future with no vendor lock-in. The S4048T-ON supports the open source Open Network Install Environment (ONIE) for zero-touch installation of alternate network operating systems, including the feature-rich Dell Networking OS.

High density 1/10G BASE-T switch
The Dell EMC Networking S-Series S4048T-ON is a high-density 100M/1G/10G/40GbE top-of-rack (ToR) switch purpose-built for applications in high-performance data center and computing environments. Leveraging a non-blocking switching architecture, the S4048T-ON delivers line-rate L2 and L3 forwarding capacity within a conservative power budget. The compact S4048T-ON design provides industry-leading density of 48 dual-speed 1/10G BASE-T (RJ45) ports, as well as six 40GbE QSFP+ uplinks, to conserve valuable rack space and simplify the migration to 40Gbps in the data center core. Each 40GbE QSFP+ uplink can also support four 10GbE (SFP+) ports with a breakout cable.

In addition, the S4048T-ON incorporates multiple architectural features that optimize data center network flexibility, efficiency and availability, including I/O panel to PSU airflow or PSU to I/O panel airflow for hot/cold aisle environments, and redundant, hot-swappable power supplies and fans. The S4048T-ON supports the feature-rich Dell Networking OS, VLT, and network virtualization features such as VRF-lite, VXLAN Gateway and support for the Dell Embedded Open Automation Framework.
• The S4048T-ON is the only switch in the industry that supports traditional network-centric virtualization (VRF) and hypervisor-centric virtualization (VXLAN).
The switch fully supports L2 VXLAN gateway functionality.
• The S4048T-ON also supports Dell EMC Networking's Embedded Open Automation Framework, which provides enhanced network automation and virtualization capabilities for virtual data center environments.
• The Open Automation Framework comprises a suite of interrelated network management tools that can be used together or independently to provide a network that is flexible, available and manageable while helping to reduce operational expenses.

Key applications
Dynamic data centers ready to make the transition to software-defined environments:
• High-density 10GBASE-T ToR server access in high-performance data center environments
• Lossless iSCSI storage deployments that can benefit from innovative iSCSI and DCB optimizations that are unique to Dell Networking switches

When running the Dell Networking OS9, Active Fabric™ implementation for large deployments in conjunction with the Dell EMC Z-Series, creating a flat, two-tier, non-blocking 10/40GbE data center network design:
• High-performance SDN/OpenFlow 1.3 enabled, with the ability to interoperate with industry-standard OpenFlow controllers
• As a high-speed VXLAN Layer 2 gateway that connects hypervisor-based overlay networks with non-virtualized infrastructure

Key features - general
• 48 dual-speed 1/10GBASE-T (RJ45) ports and six 40GbE (QSFP+) uplinks (totaling 72 10GbE ports with breakout cables) with OS support
• 1.44Tbps (full-duplex) non-blocking switching fabric delivers line-rate performance under full load with sub-600ns latency
• I/O panel to PSU airflow or PSU to I/O panel airflow
• Supports the open source ONIE for zero-touch installation of alternate network operating systems
• Redundant, hot-swappable power supplies and fans

DELL EMC NETWORKING S4048T-ON SWITCH
Energy-efficient 10GBASE-T top-of-rack switch optimized for data center efficiency

Key features with Dell EMC Networking OS9
Rate shaping combined with flow based mirroringenables the user to analyze fine grained flows• Jumbo frame support for large data transfers• 128 link aggregation groups with up to 16 members per group, usingenhanced hashing• Converged network support for DCB, with priority flow control(802.1Qbb), ETS (802.1Qaz), DCBx and iSCSI TLV• S4048T-ON supports RoCE and Routable RoCE to enable convergence of compute and storage on Active FabricUser port stacking support for up to six units and unique mixed mode stacking that allows stacking of S4048-ON with S4048T-ON to providecombination of 10G SFP+ and RJ45 ports in a stack.Physical48 fixed 10GBase-T ports supporting 100M/1G/10G speeds6 fixed 40 Gigabit Ethernet QSFP+ ports1 RJ45 console/management port with RS232signaling1 USB 2.0 type A to support mass storage device1 Micro-USB 2.0 type B Serial Console Port1 8 GB SSD ModuleSize: 1RU, 1.71 x 17.09 x 18.11”(4.35 x 43.4 x 46 cm (H x W x D)Weight: 23 lbs (10.43kg)ISO 7779 A-weighted sound pressure level: 65 dB at 77°F (25°C)Power supply: 100–240V AC 50/60HzMax. thermal output: 1568 BTU/hMax. current draw per system:4.6 A at 460W/100VAC,2.3 A at 460W/200VACMax. power consumption: 460 WattsT ypical power consumption: 338 WattsMax. operating specifications:Operating temperature: 32°F to 113°F (0°C to45°C)Operating humidity: 5 to 90% (RH), non-condensing Max. non-operating specifications:Storage temperature: –40°F to 158°F (–40°C to70°C)Storage humidity: 5 to 95% (RH), non-condensingRedundancyHot swappable redundant powerHot swappable redundant fansPerformance GeneralSwitch fabric capacity:1.44Tbps (full-duplex)720Gbps (half-duplex)Forwarding Capacity: 1080 MppsLatency: 2.8 usPacket buffer memory: 16MBCPU memory: 4GBOS9 Performance:MAC addresses: 160KARP table 128KIPv4 routes: 128KIPv6 hosts: 64KIPv6 routes: 64KMulticast routes: 8KLink aggregation: 16 links per group, 128 groupsLayer 2 VLANs: 4KMSTP: 64 instancesVRF-Lite: 511 instancesLAG load balancing: Based on layer 2, IPv4 or IPv6headers Latency: Sub 3usQOS data queues: 8QOS control queues: 12Ingress ACL: 16KEgress ACL: 1KQoS: Default 3K entries scalable to 12KIEEE compliance with Dell Networking OS9802.1AB LLDP802.1D Bridging, STP802.1p L2 Prioritization802.1Q VLAN T agging, Double VLAN T agging,GVRP802.1Qbb PFC802.1Qaz ETS802.1s MSTP802.1w RSTP802.1X Network Access Control802.3ab Gigabit Ethernet (1000BASE-T)802.3ac Frame Extensions for VLAN T agging802.3ad Link Aggregation with LACP802.3ae 10 Gigabit Ethernet (10GBase-X) withQSA802.3ba 40 Gigabit Ethernet (40GBase-SR4,40GBase-CR4, 40GBase-LR4) on opticalports802.3u Fast Ethernet (100Base-TX)802.3x Flow Control802.3z Gigabit Ethernet (1000Base-X) with QSA 802.3az Energy Efficient EthernetANSI/TIA-1057 LLDP-MEDForce10 PVST+Max MTU 9216 bytesRFC and I-D compliance with Dell Networking OS9General Internet protocols768 UDP793 TCP854 T elnet959 FTPGeneral IPv4 protocols791 IPv4792 ICMP826 ARP1027 Proxy ARP1035 DNS (client)1042 Ethernet Transmission1305 NTPv31519 CIDR1542 BOOTP (relay)1812 Requirements for IPv4 Routers1918 Address Allocation for Private Internets 2474 Diffserv Field in IPv4 and Ipv6 Headers 2596 Assured Forwarding PHB Group3164 BSD Syslog3195 Reliable Delivery for Syslog3246 Expedited Assured Forwarding4364 VRF-lite (IPv4 VRF with OSPF, BGP,IS-IS and V4 multicast)5798 VRRPGeneral IPv6 protocols1981 Path MTU Discovery Features2460 Internet Protocol, Version 6 (IPv6)Specification2464 Transmission of IPv6 Packets overEthernet Networks2711 IPv6 Router Alert Option4007 IPv6 Scoped Address 
Architecture4213 Basic Transition Mechanisms for IPv6Hosts and Routers4291 IPv6 Addressing Architecture4443 ICMP for IPv64861 Neighbor Discovery for IPv64862 IPv6 Stateless Address Autoconfiguration 5095 Deprecation of T ype 0 Routing Headers in IPv6IPv6 Management support (telnet, FTP, TACACS, RADIUS, SSH, NTP)VRF-Lite (IPv6 VRF with OSPFv3, BGPv6, IS-IS) RIP1058 RIPv1 2453 RIPv2OSPF (v2/v3)1587 NSSA 4552 Authentication/2154 OSPF Digital Signatures Confidentiality for 2328 OSPFv2 OSPFv32370 Opaque LSA 5340 OSPF for IPv6IS-IS1142 Base IS-IS Protocol1195 IPv4 Routing5301 Dynamic hostname exchangemechanism for IS-IS5302 Domain-wide prefix distribution withtwo-level IS-IS5303 3-way handshake for IS-IS pt-to-ptadjacencies5304 IS-IS MD5 Authentication5306 Restart signaling for IS-IS5308 IS-IS for IPv65309 IS-IS point to point operation over LANdraft-isis-igp-p2p-over-lan-06draft-kaplan-isis-ext-eth-02BGP1997 Communities2385 MD52545 BGP-4 Multiprotocol Extensions for IPv6Inter-Domain Routing2439 Route Flap Damping2796 Route Reflection2842 Capabilities2858 Multiprotocol Extensions2918 Route Refresh3065 Confederations4360 Extended Communities4893 4-byte ASN5396 4-byte ASN representationsdraft-ietf-idr-bgp4-20 BGPv4draft-michaelson-4byte-as-representation-054-byte ASN Representation (partial)draft-ietf-idr-add-paths-04.txt ADD PATHMulticast1112 IGMPv12236 IGMPv23376 IGMPv3MSDP, PIM-SM, PIM-SSMSecurity2404 The Use of HMACSHA- 1-96 within ESPand AH2865 RADIUS3162 Radius and IPv63579 Radius support for EAP3580 802.1X with RADIUS3768 EAP3826 AES Cipher Algorithm in the SNMP UserBase Security Model4250, 4251, 4252, 4253, 4254 SSHv24301 Security Architecture for IPSec4302 IPSec Authentication Header4303 ESP Protocol4807 IPsecv Security Policy DB MIBdraft-ietf-pim-sm-v2-new-05 PIM-SMwData center bridging802.1Qbb Priority-Based Flow Control802.1Qaz Enhanced Transmission Selection (ETS)Data Center Bridging eXchange (DCBx)DCBx Application TLV (iSCSI, FCoE)Network management1155 SMIv11157 SNMPv11212 Concise MIB Definitions1215 SNMP Traps1493 Bridges MIB1850 OSPFv2 MIB1901 Community-Based SNMPv22011 IP MIB2096 IP Forwarding T able MIB2578 SMIv22579 T extual Conventions for SMIv22580 Conformance Statements for SMIv22618 RADIUS Authentication MIB2665 Ethernet-Like Interfaces MIB2674 Extended Bridge MIB2787 VRRP MIB2819 RMON MIB (groups 1, 2, 3, 9)2863 Interfaces MIB3273 RMON High Capacity MIB3410 SNMPv33411 SNMPv3 Management Framework3412 Message Processing and Dispatching forthe Simple Network ManagementProtocol (SNMP)3413 SNMP Applications3414 User-based Security Model (USM) forSNMPv33415 VACM for SNMP3416 SNMPv23417 Transport mappings for SNMP3418 SNMP MIB3434 RMON High Capacity Alarm MIB3584 Coexistance between SNMP v1, v2 andv34022 IP MIB4087 IP Tunnel MIB4113 UDP MIB4133 Entity MIB4292 MIB for IP4293 MIB for IPv6 T extual Conventions4502 RMONv2 (groups 1,2,3,9)5060 PIM MIBANSI/TIA-1057 LLDP-MED MIBDell_ITA.Rev_1_1 MIBdraft-grant-tacacs-02 TACACS+draft-ietf-idr-bgp4-mib-06 BGP MIBv1IEEE 802.1AB LLDP MIBIEEE 802.1AB LLDP DOT1 MIBIEEE 802.1AB LLDP DOT3 MIB sFlowv5 sFlowv5 MIB (version 1.3)DELL-NETWORKING-SMIDELL-NETWORKING-TCDELL-NETWORKING-CHASSIS-MIBDELL-NETWORKING-PRODUCTS-MIBDELL-NETWORKING-SYSTEM-COMPONENT-MIBDELL-NETWORKING-TRAP-EVENT-MIBDELL-NETWORKING-COPY-CONFIG-MIBDELL-NETWORKING-IF-EXTENSION-MIBDELL-NETWORKING-FIB-MIBIT Lifecycle Services for NetworkingExperts, insights and easeOur highly trained experts, withinnovative tools and proven processes, help you transform your IT investments into 
strategic advantages.Plan & Design Let us analyze yourmultivendor environment and deliver a comprehensive report and action plan to build upon the existing network and improve performance.Deploy & IntegrateGet new wired or wireless network technology installed and configured with ProDeploy. Reduce costs, save time, and get up and running cateEnsure your staff builds the right skills for long-termsuccess. Get certified on Dell EMC Networking technology and learn how to increase performance and optimize infrastructure.Manage & SupportGain access to technical experts and quickly resolve multivendor networking challenges with ProSupport. Spend less time resolving network issues and more time innovating.OptimizeMaximize performance for dynamic IT environments with Dell EMC Optimize. Benefit from in-depth predictive analysis, remote monitoring and a dedicated systems analyst for your network.RetireWe can help you resell or retire excess hardware while meeting local regulatory guidelines and acting in an environmentally responsible way.Learn more at/lifecycleservicesLearn more at /NetworkingDELL-NETWORKING-FPSTATS-MIBDELL-NETWORKING-LINK-AGGREGATION-MIB DELL-NETWORKING-MSTP-MIB DELL-NETWORKING-BGP4-V2-MIB DELL-NETWORKING-ISIS-MIBDELL-NETWORKING-FIPSNOOPING-MIBDELL-NETWORKING-VIRTUAL-LINK-TRUNK-MIB DELL-NETWORKING-DCB-MIBDELL-NETWORKING-OPENFLOW-MIB DELL-NETWORKING-BMP-MIBDELL-NETWORKING-BPSTATS-MIBRegulatory compliance SafetyCUS UL 60950-1, Second Edition CSA 60950-1-03, Second Edition EN 60950-1, Second EditionIEC 60950-1, Second Edition Including All National Deviations and Group Differences EN 60825-1, 1st EditionEN 60825-1 Safety of Laser Products Part 1:Equipment Classification Requirements and User’s GuideEN 60825-2 Safety of Laser Products Part 2: Safety of Optical Fibre Communication Systems FDA Regulation 21 CFR 1040.10 and 1040.11EmissionsInternational: CISPR 22, Class AAustralia/New Zealand: AS/NZS CISPR 22: 2009, Class ACanada: ICES-003:2016 Issue 6, Class AEurope: EN 55022: 2010+AC:2011 / CISPR 22: 2008, Class AJapan: VCCI V-3/2014.04, Class A & V4/2012.04USA: FCC CFR 47 Part 15, Subpart B:2009, Class A RoHSAll S-Series components are EU RoHS compliant.CertificationsJapan: VCCI V3/2009 Class AUSA: FCC CFR 47 Part 15, Subpart B:2009, Class A Available with US Trade Agreements Act (TAA) complianceUSGv6 Host and Router Certified on Dell Networking OS 9.5 and greater IPv6 Ready for both Host and RouterUCR DoD APL (core and distribution ALSAN switch ImmunityEN 300 386 V1.6.1 (2012-09) EMC for Network Equipment\EN 55022, Class AEN 55024: 2010 / CISPR 24: 2010EN 61000-3-2: Harmonic Current Emissions EN 61000-3-3: Voltage Fluctuations and Flicker EN 61000-4-2: ESDEN 61000-4-3: Radiated Immunity EN 61000-4-4: EFT EN 61000-4-5: SurgeEN 61000-4-6: Low Frequency Conducted Immunity。
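The OS9 performance table earlier in this datasheet states that LAG load balancing is based on Layer 2, IPv4 or IPv6 headers. The sketch below shows the general idea of hash-based member selection, so that every packet of a given flow takes the same member link; the field choice and CRC-style hash are assumptions for illustration, not Dell's actual algorithm.

```python
# Generic illustration of hash-based LAG member selection (not Dell's algorithm).
# A flow's header fields are hashed so that all packets of the flow use the same
# member link, while different flows spread across the LAG.

import zlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_mac: str
    dst_mac: str
    src_ip: str = ""
    dst_ip: str = ""

def lag_member(flow: Flow, num_links: int, use_l3: bool = True) -> int:
    """Pick a LAG member index from L2 (and optionally L3) header fields."""
    key = f"{flow.src_mac}|{flow.dst_mac}"
    if use_l3 and flow.src_ip and flow.dst_ip:
        key += f"|{flow.src_ip}|{flow.dst_ip}"
    return zlib.crc32(key.encode()) % num_links

if __name__ == "__main__":
    f1 = Flow("00:aa:bb:cc:dd:01", "00:aa:bb:cc:dd:02", "10.0.0.1", "10.0.0.2")
    f2 = Flow("00:aa:bb:cc:dd:01", "00:aa:bb:cc:dd:02", "10.0.0.1", "10.0.0.3")
    # Up to 16 members per group on this platform per the spec table above.
    print(lag_member(f1, 16), lag_member(f2, 16))
```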

Network architecture capabilities - Ericsson


Growing the network’s cognitive capabilities for all growing ecosystems #AutomationIntent-driven management using cognitive technologiesIntent can be defined as a “formal specification of all expectations including requirements, goals and constraints given to a technical system”. It states which goals to achieve rather than how to achieve them. Intent enables the creation of autonomous sub-systems rather than creating tightly coupled management workflows.Cognition is a psychology term referring to an “action or process of acquiring knowledge, by reasoning or by intuition or through the senses“ [Oxford]. Using cognitive technologies makes it possible to implement a technical system with cognitive capabilities using e.g. AI techniques including Machine Learning (ML) and Machine Reasoning (MR).Standardization in the areas of Autonomous Networks and Intent-driven management is ongoing in several Standardization Organizations (e.g. TMF, 3GPP, ETSI) and cover separate aspects of automation which include use of cognitive technologies such as AI, intent-driven management, digital twins, data-driven management, MLOps, and others.As a step towards a fully autonomous network and achieving an intent-based management of a network, its architecture must be prepared by raising the level of abstraction in management with e.g., strong separation of concerns.Each instance of an Intent Management Function, IMF then has a clear and non-overlapping scope of responsibility for a functional domain in the autonomous network architecture as shown in Figure 1.Figure 1. IMFs within an autonomous network architectureIMFs receive intents from customers and other functions, and exchange intent with each other, managing the life cycle of an intent, and coordinate, within its domain of responsibility, the needed actions with other management functions.The internal control loop of an IMF has a cognitive loop of five logical phases: Measurements, Assurance, Proposal, Evaluation and Actuation.Collaboration with, for example, service assurance and service orchestration are also required to ensure fulfilment.Related articles/Additional reading:Creating autonomous networks with intent-based closed loopsMulti domain orchestration business opportunitiesArtificial Intelligence and MLOpsMLOps is a set of processes and technology capabilities for building, deploying, and operationalizing Machine Learning (ML) systems, including how data is refined and transformed to serve the ML system, aiming to unify ML system development and ML system operation with DevOps targeting the introduction of software in a repeatable/reproducible and fault tolerant workflow.Thus MLOps advocates automation and monitoring in all steps of ML system construction and deployment with a main goal to achieve shorter TTM with high confidence level of addressing challenges in the automated processes of development, verification, etc.Certain additional challenges of adopting MLOps to highly reliable live telecom networks exist such as the need to handle lifecycle management or automatic re-training of the many instances of ML models.Adoption of MLOps enables a more expedient handling of artifacts like models, pipelines, datasets, etc. in a uniform way across the different stages of the process.Targeting products and services, both internal and external, will require MLOps to be able to be deployed for several scenarios, e.g. 
provided as-a-Service (aaS) or licensed SW/product oncustomer site, deployed on cloud infrastructure or deployed on dedicated HW, but likely in several more.CSPs’ realities vary depending on selection of cloud infrastructure with a clear divider of whether the CSP selects to use a particular HCP or use private cloud which can be done for various reasons, like applications execution, licensed SW or data storage, etc.Spreading MLOps over several large HCPs (e.g., AWS, Azure, GCP), with the limited compatibility between their APIs for AI services requires a certain level of adoption for vendors’ products/services to adopt to each of these HCPs. Although there may be benefits of using HCP tools/services, it will require certain efforts – efforts for transferring data, efforts for data refinement, efforts for consumption, etc.A few abstraction layer initiatives exist that may help to provide an abstraction layer for different HCP services. None of these alternatives, however, provide a complete solution to the problem and AI/ML is not at the top of their priority list.Figure 2. Ericsson AI architecture blueprintRelated articles/Additional reading:Defining AI native: A key enabler for advanced intelligent telecom networksAI-powered RAN: An energy efficiency breakthroughNetwork Reliability, Availability and Resilience (NRAR)Mobile broadband has become a society-critical service in recent years, with enterprises, governments and private citizens alike relying on its availability, reliability and resilience around the clock. Living up to continuously rising expectations while simultaneously evolvingnetworks to meet the requirements of emerging use cases beyond MBB will require the ability to deliver increasingly higher levels of network robustness.5GS (5G System) has been designed to provide the robustness required to support the growth of conventional MBB services, while also offering network support to new business segments and use cases with more advanced requirements in terms of NRAR. 5GS delivers new capabilities that enable enterprises with business-critical use cases in segments such as manufacturing, ports and automotive to take a major step forward in their digitalization journeys by replacing older means of communication with the 5GS. These new capabilities are also beneficial for mission-critical networks like national security and public safety deployments being modernized.It is important to consider all parts of the network in the definition of robustness (as illustrated by the green part in Figure 3), as the weakest link in the E2E chain sets the limits for the network service characteristics. In addition, network-level design must include consideration of both sunny day scenarios and different disaster/failure cases in all parts of the network. The large orange section represents both new critical use cases and society-critical use cases with new and tougher requirements. The orange line between the application client and the server, highlights the significance of the E2E perspective.Figure 3. Shifting focus from node/NF-level to network robustness for demanding E2E applicationsWhile both 4G and 5G can provide the high level of robustness required to deliver such services today, new and emerging use cases require the addition of new features and mechanisms in the network robustness toolbox. 5GS has been designed to meet even the most challenging network robustness requirements. 
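The robustness discussion above stresses that the weakest link in the end-to-end chain sets the limit for the overall service. The sketch below works through that reasoning with made-up per-domain availability figures; the numbers are purely illustrative, not Ericsson or 3GPP targets.

```python
# Illustration of the "weakest link" point above: the availability of a serial
# end-to-end chain is the product of its parts, so the worst element dominates.
# The per-domain figures below are invented for the example.

E2E_CHAIN = {
    "device": 0.9995,
    "ran": 0.9999,
    "transport": 0.999,      # weakest link in this example
    "core": 0.99999,
    "application_server": 0.9999,
}

def chain_availability(parts: dict) -> float:
    availability = 1.0
    for a in parts.values():
        availability *= a
    return availability

if __name__ == "__main__":
    a = chain_availability(E2E_CHAIN)
    downtime_min_per_year = (1 - a) * 365 * 24 * 60
    print(f"End-to-end availability: {a:.5f}")
    print(f"Expected downtime: ~{downtime_min_per_year:.0f} minutes/year")
    # Improving any single domain beyond the weakest one barely moves the
    # end-to-end figure; the transport link dominates here.
```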
Beyond that, the creation of robust networks also requires careful network planning and deployment.The 5GS robustness toolbox consists of both standardized and vendor-specific network features and mechanisms. Highly flexible, it gives CSPs the power to activate the most appropriate mechanisms depending on the use cases and the deployment variants. The toolbox also enables CSPs to activate different mechanisms for different user equipment within a single network.Related articles/Additional reading:Robustness evolution: Building robust critical networks with the 5G System PDFTraffic Classification and QoSTraffic classification is about mapping of different applications and application flows from a specific UE to different network resources (e.g. network slices, PDU sessions and Radio Bearers) in both uplink (UL) and downlink (DL) and is based on mechanisms such as: NI-QoS (Network Initiated-Quality of Service) is standardized in 3GPP and based onestablishment of radio bearers and QoS Flows (shortly bearers)L4S (Low Latency Low Loss Scalable Throughput) is an IETF-defined solution for time critical communications to ensure that latency-critical high-rate apps using built on L4S information in the IP-headerURSP (UE Route Selection Policy) is standardized by 3GPP for a UE using multiple slices and/or PDU SessionsThese network resources may have different QoS levels associated to them (see Figure 4). NI-QoS and URSP are examples of traffic classification mechanisms with different control points that can be used for QoS support in mobile networks.Additional functionality is needed to support a network with deployed QoS support. One such example is SLA and SLA assurance support. Most applications use multiple application flows with different requirements.Figure 4. Traffic ClassificationExisting 3GPP standards and products designed based on these standards are not fully prepared to support QoS for data applications beyond VoLTE/IMS and particular care needs tobe taken in the RAN parts where there are limitations in the number of radio bearers.Another area of concern is how to handle Net Neutrality and Open Internet (NN/OI) which impact how a CSP can monetize QoS. One way of working with this could be to offer several subscriptions on a single device.Future direction will require a Traffic Classification Toolbox addressing a wide set of needs to be able to handle the ongoing alignment, settlement and potential standardization initiatives existing in the market.Service exposureSee also chapter on the Global Network Platform.As CSPs seek to expand outside telecom to explore the exposure of network capabilities, e.g. to address enterprises, the network resources exposed must be made easy to consume and shaped to fit the needs and desired use cases of enterprises and their partners.To be successful, CSPs need to expand their service portfolio and turn their network into a programmable platform with the capability to onboard new applications while leveraging their existing connectivity offerings and combine them with cloud and edge offerings from different players.Exposure can be applied in different places, both in the network and in the device as illustrated in Figure 5 below which is based on the High Level Network Architecture further below in Figure 5.Figure 5. 
Exposure Interfaces

The Z interface layer represents higher-level, domain-specific abstractions, interfaces and services within environments that developers trust, encapsulating or wrapping the C layer as needed. The C interface layer contains a collection of northbound exposed capabilities and services of the network, reachable via Service Exposure Frameworks and their APIs, protocols and SDKs, covering domains such as BSS, OSS, Packet Core and Communication Services. The Y interface layer is a collection of exposed abstractions of the capabilities and services in Z and C from the device side. The X interface layer is a collection of network services exposed via the modem/UNI interface, typically AT commands; many are standardized, but a large set are proprietary to modem vendors.

Although the Z and C layers are drawn as thin lines in Figure 5, they can contain a set of functions that are common to all exposed services, e.g. discovery, access control, identity management and throttling. This drives a consistent experience for the different consumers of the APIs (developers, integrators, enterprises, etc.), enabling scale, and eliminates the need for a consumer to go through a proxy in the Management, Orchestration and Monetization layer.

Related articles/Additional reading:
Programmable 5G for the Industrial Internet of Things
Monetizing API exposure for enterprises with evolved BSS
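Relating back to the Traffic Classification and QoS section above, which describes mapping application flows to network resources with different QoS levels (NI-QoS flows, L4S marking, URSP rules), the sketch below illustrates such a classification step as an ordered rule table. The rule format and class names are illustrative assumptions, not 3GPP-defined structures.

```python
# Illustrative traffic-classification sketch in the spirit of the section above.
# Rule and class names are assumptions, not 3GPP or Ericsson data structures.

from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    app_id: str          # e.g. from packet inspection or UE-side URSP matching
    dst_port: int
    l4s_capable: bool    # L4S marking present in the IP header, per the text above

# Ordered rule table: first match wins (similar in spirit to URSP rule evaluation).
RULES = [
    (lambda f: f.app_id == "volte",           "conversational_voice"),
    (lambda f: f.l4s_capable,                 "low_latency_low_loss"),
    (lambda f: f.dst_port in (443, 8443),     "interactive_default"),
]
DEFAULT_CLASS = "best_effort"

def classify(flow: FlowKey) -> str:
    for predicate, qos_class in RULES:
        if predicate(flow):
            return qos_class
    return DEFAULT_CLASS

if __name__ == "__main__":
    print(classify(FlowKey("volte", 5060, False)))       # conversational_voice
    print(classify(FlowKey("cloud_game", 443, True)))    # low_latency_low_loss
    print(classify(FlowKey("web", 443, False)))          # interactive_default
    print(classify(FlowKey("bulk_sync", 2049, False)))   # best_effort
```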

VMware vRealize Network Insight 5.3 Installation Guide


Installing vRealize Network Insight
VMware vRealize Network Insight 5.3
You can download the latest technical documentation from the VMware website: https:///cn/


Contents
About the vRealize Network Insight Installation Guide
1 System Recommendations and Requirements
2 The vRealize Network Insight Installer
    Installation Workflow
    Deploying the vRealize Network Insight Platform OVA
        Deploying with the vSphere Web Client
        Deploying with the vSphere Windows Native Client
    Activating the License
    Generating the Shared Secret
    Setting Up the Network Insight Collector (OVA)
        Deploying with the vSphere Web Client
        Deploying with the vSphere Windows Native Client
    Setting Up the Network Insight Collector (AMI) in AWS for VMware SD-WAN
    Deploying Additional Collectors in an Existing Setup
3 Accessing vRealize Network Insight with an Evaluation License
    Adding a vCenter Server
    Analyzing Traffic Flows
    Generating Reports
4 Planning a Scale-Out Deployment
    Planning to Scale Out the Platform Cluster
    Planning to Scale Out the Collectors
    Increasing the Brick Size of the Setup
5 Upgrading vRealize Network Insight
    Online Upgrade
    One-Click Offline Upgrade
    CLI Upgrade
6 Uninstalling vRealize Network Insight
    Removing the Collector IP When NetFlow Is Enabled in vCenter
    Removing the Collector IP When NetFlow Is Enabled in NSX

About the vRealize Network Insight Installation Guide
The vRealize Network Insight Installation Guide is intended for administrators or specialists responsible for installing vRealize Network Insight.

Wind River and Cavium Jointly Promote Digital Home Networking

... scalability and seamless interoperability across networks and network protocols, while its powerful services platform delivers breakthrough performance ... Cavium's energy-efficient ECONA processors. Cavium Networks is a globally leading ... Wind River will further build on its long-standing, successful business and engineering strategic collaboration with Cavium in the network infrastructure product market, and the two companies will jointly provide customers with highly optimized solutions to meet the needs of the rapidly growing home Internet market, leveraging Wind River's proven VxWorks and Wind River Linux ...

Blue Coat Systems, a leading vendor of application delivery networking technology, recently announced the Blue Coat Data Loss Prevention (DLP) family of appliances, which provide comprehensive data loss prevention in a single integrated device and enable regulatory compliance without added complexity. By adding the DLP appliances to its secure gateway solutions, Blue Coat can now defend against inbound malicious threats and outbound data loss at the same time. The Blue Coat DLP appliances integrate data loss prevention for network traffic, including e-mail and web content, data in the data center, and files, on a single consolidated platform with a unified management system.

Quantum launches a new deduplication appliance
Quantum recently announced a new deduplication and replication appliance that uses a VTL interface to bring unmatched performance, simplicity and value to the Fibre Channel SAN environments of midrange and enterprise customers. The DXi6700 appliance delivers multi-terabyte-per-hour backup performance and up to 56 TB of usable capacity. This turnkey ...

Extreme Networks Summit X460-G2 Datasheet


The Summit® X460-G2 series is based on Extreme Networks® revolutionary ExtremeXOS®, a highly resilient OS that provides continuous uptime, manageability and operational efficiency. Each switch offers the same high-performance, non-blocking hardware technology, in the Extreme Networks tradition of simplifying network deployments through the use of common hardware and software throughout the network.

The Summit X460-G2 switches are effective campus edge switches that support Energy Efficient Ethernet (EEE - IEEE 802.3az) with IEEE 802.3at PoE-plus and can also serve as aggregation switches for traditional enterprise networks. The Summit X460-G2 series is also an option for DSLAM or CMTS aggregation, or for active Ethernet access.

The Summit X460-G2 can also be used as a top-of-rack switch for many data center environments, with features such as high-density Gigabit Ethernet for concentrated data center environments; XNV™ (ExtremeXOS Network Virtualization) for centralized network-based Virtual Machine (VM) inventory, VM location history and VM provisioning; Direct Attach™ to offload VM switching from servers, thereby improving performance; high-capacity Layer 2/Layer 3 scalability for highly virtualized data centers; and intra-rack and cross-rack stacking with industry-leading flexibility.

Comprehensive Security Management
• User policy and host integrity enforcement, and identity management
• Universal Port Dynamic Security Profiles to provide fine granular security policies in the network
• Threat detection and response instrumentation to react to network intrusion with CLEAR-Flow Security Rules Engine
• Denial of Service (DoS) protection and IP security against man-in-the-middle and DoS attacks to harden the network infrastructure

Flexible Port Configuration
The Summit X460-G2 offers flexible port configurations. For Summit X460-G2 24-port copper models with 10Gb uplinks, which have four dedicated Gigabit Ethernet fiber ports and four shared Gigabit Ethernet fiber ports, the switch can have up to 8 fiber GbE ports while still providing 20 Gigabit Ethernet copper ports (PoE-plus or non-PoE). The Summit X460-G2 24-port copper models with 1Gb uplinks can provide up to 12 SFP ports with 20 Gigabit Ethernet ports, or eight SFP ports with 24 copper GbE ports.

All models come equipped with either 4 ports of SFP+ 10 GbE or 4 ports of SFP 1GbE resident on the faceplate of each model.
Through an optional VIM slot, Summit X460-G2 switches can be equipped with an additional 2 ports of 10 GbE for a total of six 10 Gigabit Ethernet ports on the 10Gb uplink models.As another option, each unit can be equipped with 2 ports of QSFP+ 40 Gigabit Ethernet for uplinks or stacking.High-Performance StackingUp to eight Summit X460-G2 switches can be stacked using three different methods of stacking: SummitStack, SummitStack-V, and SummitStack-V160.SUMMITSTACK — STACKING USING COPPER CX4 CONNECTIONSThe Summit X460-G2 supports SummitStack by using the Summit X460-G2-VIM-2ss module, which offers high-speed 40 Gbps stacking performance and provides compatibility with the Summit X440, X460, X460-G2 and X480 stackable switches running the same version of ExtremeXOS.SUMMITSTACK-V — FLEXIBLE STACKING OVER 10GbEExtremeXOS supports the SummitStack-V capability using 2 of the native 10 GbE ports on the faceplate as stacking ports, enabling the use of standard cabling and optics technologies used for 10 GbE SFP+, SummitStack-V provides long-distance 40 Gbps stacking connectivity of up to 40 km while reducing the cable complexity of implementing a stacking solution. SummitStack-V is compatible with SummitX440, X460, X460-G2, X480, X670, X670V, X670-G2 and X770 switches running the same version of ExtremeXOS. SummitStack-V enabled 10 GbE ports must be physically direct-connected.Note: Stacking will NOT be supported on the 10GbE fiber VIM and the 10GbE copperVIM with initial X460-G2 shipments.Note: SummitStack-V is NOT supported on the 1GbE (SFP) front panel faceplateports of non-10Gb X460-G2 models.SUMMITSTACK-V160 — FLEXIBLE STACKING OVER 40GbEThe Summit X460-G2 also supports high-speed 160 Gbps stacking, which is idealfor demanding applications where a high volume of traffic traverses through the stacking links, yet bandwidth is not compromised through stacking.SummitStack-V160 can support passive copper cable (up to 3m), active multi-mode fiber cable (up to 100m), and QSFP+ optical transceivers for 40 GbE up to 10km. With SummitStack-V160, the Summit X460-G2 provides a flexible stacking solution inside the data center or central office to create a virtualized switching infrastructure across rows of racks. SummitStack-V160 is compatible with Summit X460-G2, X480, X670V, X670-G2 and X770 switches running the same version of ExtremeXOS.Intelligent Switching and MPLS/H-VPLS SupportSummit X460-G2 supports sophisticated and intelligent Layer 2 switching, as well as Layer 3 IPv4/IPv6 routing including policy-based switching/routing, Provider Bridges, bidirectional ingress and egress Access Control Lists, and bandwidth control by 8 Kbps granularity both for ingress and egress.T o provide scalable network architectures used mainly for Carrier Ethernet network deployment, Summit X460-G2 supports MPLS LSP-based Layer 3 forwarding and Hierarchical VPLS (H-VPLS) for transparent LAN services. WithH-VPLS, transparent Layer 3 networks can be extended throughout the Layer 3 network cloud by using a VPLS tunnel between the regional transparent LAN services typically built by Provider Bridges (IEEE 802.1ad) technologyIEEE 802.3at PoE-plusIEEE 802.3af Power over Ethernet has been widely used in the campus enterprise edge network for Ethernet-powered devices such as wireless access points, Voice over IP phones, and security cameras. Ethernet port extenders such as Extreme Networks ReachNXT™ 100-8t can also utilize PoE, making installation and management easier and reducing maintenance costs. 
The newer IEEE 802.3at PoE-plus standard expands upon Power over Ethernet by increasing the power limitup to 30 watts, and by standardizing power negotiation by using LLDP. SummitX460-G2 supports IEEE 802.3at PoE-plus and supports standards-compliant PoE devices today and into the future.1588 Precision Time Protocol (PTP)Summit X460-G2 offers Boundary Clock (BC), Transparent Clock (TC), and Ordinary Clock (OC) for synchronizing phase and frequency and allowing the network and the connected devices to be synchronized down to microseconds of accuracy over Ethernet connection.Audio Video Bridging (AVB)The X460-G2 series supports IEEE 802.1 Audio Video Bridging to enable reliable, real-time audio/video transmission over Ethernet. AVB technology delivers the quality of service required for today’s high-definition and time-sensitive multimedia streams.Ordering NotesThe X460-G2 base switches do not ship with fan trays or power supplies. The fan tray and power supplies must be ordered separately as well as any of the optional VIMS. There is only one optional VIM slot on each X460-G2 switch. The optional Timing Module has a separate dedicated slot on the back of the X460-G2 switch.CPU/MEMORY• 64-bit MIPS Processor, 1 GHz clock• 1GB ECC DDR3 DRAM• 4GB eMMC Flash• 4MB packet bufferLED INDICATORS• Per port status LED including power status• System Status LEDs: management, fan and powerENVIRONMENTAL SPECIFICATIONS• EN/ETSI 300 019-2-1 v2.1.2 - Class 1.2 Storage• EN/ETSI 300 019-2-2 v2.1.2 - Class 2.3 Transportation • EN/ETSI 300 019-2-3 v2.1.2 - Class 3.1e Operational• EN/ETSI 300 753 (1997-10) - Acoustic Noise• ASTM D3580 Random Vibration Unpackaged 1.5 G OPERATING CONDITIONS• T emp: 0° C to 50° C (32° F to 122° F)• Humidity: 10% to 95% relative humidity, non-condensing • Altitude: 0 to 3,000 meters (9,850 feet)• Shock (half sine): 30 m/s2 (3 G), 11 ms, 60 shocks• Random vibration: 3 to 500 Hz at 1.5 G rms PACKAGING AND STORAGE SPECIFICATIONS • T emp: -40° C to 70° C (-40° F to 158° F)• Humidity: 10% to 95% relative humidity, non-condensing• Packaged Shock (half sine): 180 m/s2 (18 G), 6 ms, 600shocks• Packaged Vibration: 5 to 62 Hz at velocity 5 mm/s, 62 to 500 Hz at 0.2 G• Packaged Random Vibration: 5 to 20 Hz at 1.0 ASD w/–3 dB/oct. from 20 to 200 Hz• Packaged Drop Height: 14 drops minimum on sides and corners at 42 inches (<15 kg box)REGULATORY AND SAFETYNorth American ITE• UL 60950-1 2nd Ed., Listed Device (U.S.)• CSA 22.2 #60950-1-03 2nd Ed. (Canada)• Complies with FCC 21CFR 1040.10 (U.S. Laser Safety)• CDRH Letter of Approval (US FDA Approval) European ITE• EN 60950-1:2007 2nd Ed.• EN 60825-1+A2:2001 (Lasers Safety)• TUV-R GS Mark by German Notified Body• 2006/95/EC Low Voltage DirectiveInternational ITE• CB Report & Certificate per IEC 60950-1 2nd Ed. 
+National Differences• AS/NZX 60950-1 (Australia /New Zealand)EMI/EMC STANDARDSNorth American EMC for ITE• FCC CFR 47 part 15 Class A (USA)• ICES-003 Class A (Canada)European EMC Standards• EN 55022:2006+A1:2007 Class A• EN 55024:A2-2003 Class A includes IEC 61000-4-2, 3, 4, 5, 6, 11• EN 61000-3-2,8-2006 (Harmonics)• EN 61000-3-3 2008 (Flicker)• ETSI EN 300 386 v1.4.1, 2008-04 (EMC T elecommunications)• 2004/108/EC EMC DirectiveInternational EMC Certifications• CISPR 22: 2006 Ed 5.2, Class A (International Emissions)• CISPR 24:A2:2003 Class A (International Immunity)• IEC 61000-4-2:2008/EN 61000-4-2:2009 ElectrostaticDischarge, 8kV Contact, 15 kV Air, Criteria A• IEC 61000-4-3:2008/EN 61000-4-3:2006+A1:2008 Radiated Immunity 10V/m, Criteria A• IEC 61000-4-4:2004 am1 ed.2./EN 61000-4-4:2004/A1:2010 Transient Burst, 1 kV, Criteria A• IEC 61000-4-5:2005 /EN 61000-4-5:2006 Surge, 2 kV L-L, 2 kV L-G, Level 3, Criteria A• IEC 61000-4-6:2008/EN 61000-4-6:2009 ConductedImmunity, 0.15-80 MHz, 10V/m unmod. RMS, Criteria A• IEC/EN 61000-4-11:2004 Power Dips & Interruptions, >30%,25 periods, Criteria CCOUNTRY SPECIFIC• VCCI Class A (Japan Emissions)• ACMA (C-Tick) (Australia Emissions)• CCC Mark• KCC Mark, EMC Approval (Korea)TELECOM STANDARDS• ETSI EN 300 386:2001 (EMC T elecommunications)• ETSI EN 300 019 (Environmental for T elecommunications)• NEBS Level 3 compliant to portions of GR-1089 Issue 4 &GR-63 Issue 3 as defined in SR3580 with exception to filter requirement• CE 2.0 CompliantIEEE 802.3 MEDIA ACCESS STANDARDS• IEEE 802.3ab 1000BASE-T• IEEE 802.3z 1000BASE-X• IEEE 802.3ae 10GBASE-X• IEEE 802.3at PoE Plus• IEEE 802.3az (EEE)* Bystander Sound Pressure is presented for comparison to other products measured using Bystander Sound Pressure. **Declared Sound Power is presented in accordance with ISO-7779:2010(E), ISO 9296:2010 per ETSI/EN 300 753:2012-01SUMMIT X460-G2 VIM-2T2-port 10 Gigabit Ethernet module, provides two 10GBase-T copper ports. SUMMIT X460-G2 VIM-2SSSummitStack module has two SummitStack stacking ports, and provides a 40 Gigabit stacking solution. This stacking module offers compatibility with other Extreme Networks stackable switches, which are Summit X440, Summit X460, and SummitX480.Ordered EmptyRequired: First Power Supply with Air Flow Direction ordered separatelyOptional:Redundant/Additive Power Supply with Air Flow Direction ordered separatelyOptional: Timing Module for SyncE and 1588 PTP ordered separatelyRequired: Fan Tray with Air Flow Direction ordered separatelyOptional: VIM Cardsordered separately* = data networking, not stacking/contact Phone +1-408-579-2800©2014 Extreme Networks, Inc. All rights reserved. Extreme Networks and the Extreme Networks logo are trademarks or registered trademarks of Extreme Networks, Inc. in the United States and/or other countries. All other names are the property of their respective owners. For additional information on Extreme Networks Trademarks。
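Since the PoE discussion above notes that IEEE 802.3at PoE-plus raises the per-port limit to 30 W with LLDP-based power negotiation, the sketch below shows a simple PoE power-budget check of the kind a deployment plan might use. The per-class wattages follow the IEEE 802.3af/at classes; the total budget value is a placeholder, not an X460-G2 power-supply specification.

```python
# Simple PoE budget check illustrating the 802.3af/at power classes mentioned
# above. PSE_BUDGET_W is a placeholder, not an actual X460-G2 PSU rating.

PSE_CLASS_W = {0: 15.4, 1: 4.0, 2: 7.0, 3: 15.4, 4: 30.0}  # PSE output per IEEE 802.3af/at class
PSE_BUDGET_W = 740.0  # hypothetical total PoE budget for the example

def remaining_budget(powered_devices: list[int], budget_w: float = PSE_BUDGET_W) -> float:
    """Subtract the worst-case allocation for each PD class from the budget."""
    allocated = sum(PSE_CLASS_W[c] for c in powered_devices)
    return budget_w - allocated

if __name__ == "__main__":
    # e.g. 20 class-3 phones and 8 class-4 (PoE-plus) access points
    devices = [3] * 20 + [4] * 8
    left = remaining_budget(devices)
    print(f"Allocated: {PSE_BUDGET_W - left:.1f} W, remaining: {left:.1f} W")
    # 20*15.4 + 8*30 = 548 W allocated, 192 W remaining in this example.
```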



Parallel Processing Letters,f c World Scientific Publishing CompanyAN EFFICIENT IMPLEMENTATION OFTHE BSP PROGRAMMING LIBRARY FOR VIAYANGSUK KEE and SOONHOI HA∗School of Electrical Engineering and Computer Science,Seoul National UniversitySeoul,151-742,KoreaReceived(received date)Revised(revised date)Communicated by(Name of Editor)ABSTRACTVirtual Interface Architecture(VIA)is a light-weight protocol for protected user-level zero-copy communication.In spite of the promised high performance of VIA,previousMPI implementations for GigaNet’s cLAN revealed low communication performance.Two main sources of such low performance are the discrepancy in the communicationmodel between MPI and VIA and the multi-threading overhead.In this paper,wepropose a new implementation of the Bulk Synchronous Parallel(BSP)programminglibrary for VIA called xBSP to overcome such problems.To the best of our knowledge,xBSP is thefirst implementation of the BSP library for VIA.xBSP demonstrates thatthe selection of a proper library is important to exploit the features of light-weightprotocols.Intensive use of Remote Direct Memory Access(RDMA)operations leads tohigh performance close to the native VIA performance with respect to round trip delayand bandwidth.Considering the effects of multi-threading,memory registration,andcompletion policy on performance,we could obtain an efficient BSP implementation forcLAN,which was confirmed by experimental results.Keywords:Bulk Synchronous Parallel,Virtual Interface Architecture,parallel program-ming library,light-weight protocol,cluster1.IntroductionEven though the peak bandwidth of networks has increased rapidly over the years,the latency experienced by applications using these networks has decreased only modestly.The main reason of this disappointing performance is the high software overhead[1,2,3],which mainly results from context switch and data copy between the user and the kernel spaces.To overcome these problems,many light-weight protocols have been proposed to move the protocol stacks from the kernel to the user space[4,5,6,7,8,9,10].One of these protocols is Virtual Interface Architecture(VIA)[6]which was jointly proposed by Intel,Compaq,and Microsoft.The VIA specifications describe a net-work architecture for protected user-level zero-copy communication.For applica-∗Correspondence Address:School of Electrical Engineering and Computer Science,Seoul National University,Shinlim-Dong,Kwanak-Gu,Seoul,151-742,Korea.Tel.82-2-880-7292.Fax.82-2-879-1532.Email:{enigma,sha}@iris.snu.ac.kr.12Parallel Processing Letterstion developers,VIA provides an interface called the Virtual Interface Provider Layer(VIPL).Even though the VIPL can be directly used to develop applications,it is de-sirable to build various popular programming libraries such as PVM[11],MPI[12], and BSPlib[13]for portability of the programs.Two previous works,for example, are the MPI implementations for cLAN by MPI Software Technology(MPI/Pro)[14] and by Rice University[15].Parallel programming library based on other commu-nication protocols can be found in[16,17,18].The authors of[14]described many implementation issues such as threading,long message,asynchronous incoming mes-sage,etc.In particular,they paid attention to the pre-posting constraint of VIA in implementing asynchronous operations of MPI.The zero-copy strategy of VIA enforces that the receiver is ready before the sender initiates its operation,which defines the pre-posting constraint.The results of these studies,however,are some-what disappointing.Even 
though the half round trip time (RTT) of cLAN using VIPL is 8.21 µs in our system, that of MPI/Pro is delayed by more than five times. Furthermore, MPI/Pro achieved only 81.7 percent of the peak bandwidth of VIPL. This means that the MPI library could not be efficiently integrated with VIA.

There are two main causes for such low performance. The primary one is the discrepancy in the communication model between MPI and VIA. VIA does not assume any intermediate buffers due to the zero-copy policy, while various asynchronous operations of MPI require receiving queues. Therefore, the authors suggested the use of "unexpected queues" on the receiver side to handle asynchronous incoming messages. Then, the implementation experiences more than one copying overhead on the receiver side and requires flow control for the queue. Moreover, they did not use the Remote Direct Memory Access (RDMA) operation for small messages, because only large messages can amortize the overhead of exchanging the address of RDMA buffers. The second cause is the overhead due to multi-threading. Although delegating the message handling task to a separate thread from the computation thread seems a good way of structural implementation, it suffers significant overhead due to thread switching. The overhead due to multi-threading in our system is over ten micro-seconds: this is indeed comparable to the round trip delay at the application level. This means that the multi-threading overhead negates the gain obtained by reducing the latency at the hardware level.

These two problems motivate us to implement another VIA-based parallel library. In this paper, we implement the BSPlib standard of the Bulk Synchronous Parallel (BSP) programming library. The BSP model [19] was first proposed as a computing model to bridge the gap between software and hardware for parallel processing. Afterwards, it became a viable programming model with BSPlib. The performance of the BSPlib library was shown to be better than MPICH with respect to throughput and predictability [20], which means that BSPlib is not only theoretically but also practically useful. Moreover, the study on BSP clusters [21] has demonstrated that the BSPlib library can be accelerated by rewriting the Fast Ethernet device driver to be optimized for the BSPlib operations. One of the main lessons of the study was that optimization with global knowledge about the transport layer and the parallel library promises higher performance. This perspective is also applicable to implementing parallel libraries using light-weight protocols.
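The superstep structure of the BSP model referred to above can be made concrete with a small example. The following is a minimal sketch of a BSPlib-style program, assuming the conventional bsp.h header and the standard BSPlib primitives bsp_begin, bsp_end, bsp_pid, bsp_nprocs, and bsp_pop_reg in addition to the registration and put operations described later in this paper. It illustrates the programming model only; it is not code from xBSP.

    #include <stdio.h>
    #include "bsp.h"   /* BSPlib header; name assumed from the BSPlib standard */

    #define N 4

    int main(void) {
        bsp_begin(bsp_nprocs());          /* start the SPMD section on all processes */

        int pid = bsp_pid();
        int p   = bsp_nprocs();
        int local[N], remote[N];

        /* Superstep 0: make 'remote' globally visible under one name. */
        bsp_push_reg(remote, N * sizeof(int));
        bsp_sync();                       /* the registration takes effect here */

        /* Superstep 1: one-sided put of 'local' into the right neighbour. */
        for (int i = 0; i < N; i++) local[i] = pid * 100 + i;
        bsp_hpput((pid + 1) % p, local, remote, 0, N * sizeof(int));
        bsp_sync();                       /* the written data is valid from here on */

        printf("process %d received %d ... %d\n", pid, remote[0], remote[N - 1]);

        bsp_pop_reg(remote);
        bsp_end();
        return 0;
    }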
Indeed, BSPlib has a strong operational resemblance with VIA in memory registration, message passing communication, and direct remote memory access. Our new implementation of BSPlib for cLAN is called express BSP (xBSP). To the best of our knowledge, xBSP is the first implementation of BSPlib for VIA. xBSP demonstrates that selecting a proper library is important in exploiting the features of light-weight protocols. Furthermore, we achieved performance close to the native VIPL by significant efforts to reduce the overheads due to multi-threading, memory registration, and flow control. xBSP also supports reliable communication by using the reliable delivery mode of VIA.

In the following two sections, we address key features of VIA required to implement the BSPlib library and discuss how well the library is matched with VIA. After that, we present experimental implementation alternatives to achieve the full performance of VIA. In sections 4 and 5, several benchmarks demonstrate the efficiency of xBSP, and we conclude our discussion in section 6.

2. VIA Features

In this section, we discuss VIA features that should be carefully considered for efficient implementation of BSPlib. They concern memory registration, communication mode, and descriptor processing.

2.1. Memory Registration

Table 1. Costs of memory registration and copying (µs)

message length (byte)   1    1K   2K   4K   8K   16K
registration            3    3    3    4    4    5
copying                 1    2    2    10   18   35

Communication buffers of the user space should be registered in order to eliminate data copying between the user space and the kernel space and to provide memory protection. The memory registration cost, however, is not negligible. For example, the Windows NT system experienced over 15 µs latency for messages smaller than 16 Kbytes [15], while the overhead in our Linux system ranged from 3 to 5 µs as shown in Table 1. Considering communication delay and copying overhead, it is important to reduce the registration overhead, especially for small messages.

2.2. Communication Mode

After communication buffers are registered, processes can transfer data between the registered buffers. VIA supports two communication modes. One is the traditional message passing mode, in which both the sender and the receiver participate in communication, satisfying the pre-posting constraint. The other is the one-sided communication called RDMA, which is an extension of the local DMA operations that allows a user process to transparently access buffers of a user process on another machine connected to the VIA network.

[Fig. 1. Procedure of RDMA write operation]

The procedure of the RDMA write operation is illustrated in Fig. 1.
First, both processes register their buffers to their VIA device drivers, and process B informs process A of the address of its buffer by explicit message passing to avoid a protection violation. After that, process A initiates its operation by posting descriptors, and the device driver moves data from the user buffer to the network through DMA. When packets arrive at the target machine, the device driver of the target machine moves the data in the reverse way of the sender.

This RDMA operation has several advantages. First, the RDMA operation can avoid the descriptor processing overhead in the target process, since it does not require any descriptor in the target process except when the initiator uses the immediate data field of the descriptor. Second, since only the VI-NIC of the target machine is involved in communication, the target process can continue without interruption. Finally, the initiator does not have to worry about flow control for the resources of the target machine. Therefore, we prefer the RDMA mode to the message passing mode.

2.3. Descriptor Processing Mode

When there are multiple VI connections to a process, mechanisms like select() in the socket interface are needed. We can implement such mechanisms using the Completion Queue. Notifications of descriptor completion from multiple Receive Queues are directed to a single Completion Queue. The Completion Queue can be managed by a dedicated communication thread or by the user thread itself. When a thread is dedicated to managing the Completion Queue, it prevents the interruption of user threads in a clustered SMP environment. However, this introduces the extra latency of thread switching. On the other hand, the user thread can directly receive messages at the expense of CPU time to avoid this multi-threading overhead. Since we aim at low latency communication, the user thread itself takes the role of managing the Completion Queue.

3. BSPlib Implementation

Based on the previous discussion, we explain in this section how well the BSPlib library is matched with VIA and how the library is realized.

3.1. BSP-Registration

In a BSP program, a user can access data in a remote memory block after registering it with bsp_push_reg(void *ident, int nbytes). The registrations within a superstep take effect after the subsequent barrier synchronization identified by bsp_sync(). In the Oxford implementation [13], each node keeps track of the sequence of registrations and maintains a mapping table between the unique block number and the associated local address: it does not require any explicit message exchange. When a process initiates a one-sided operation with this block number, the target process translates the number into its local address for the block. The main objective of this mechanism is to reduce unnecessary network traffic in the registration step.
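To make the bookkeeping just described more tangible, the following is a hedged sketch of the kind of per-process registration table such a scheme could maintain. The struct and function names are hypothetical and not taken from the Oxford BSPlib or xBSP sources: in the Oxford scheme only the local address per block number is needed, while an RDMA-based scheme like the one described for xBSP below additionally has to record the remote virtual addresses collected during the barrier synchronization.

    #include <stddef.h>

    #define MAX_PROCS 64
    #define MAX_REGS  256

    /* One entry per bsp_push_reg() call, in registration order. */
    typedef struct {
        void   *local;                  /* locally registered base address    */
        size_t  nbytes;                 /* registered length                  */
        void   *remote[MAX_PROCS];      /* same block on every other process,
                                           filled in at the next bsp_sync()   */
    } reg_entry_t;

    static reg_entry_t reg_table[MAX_REGS];
    static int         reg_count;

    /* Record a registration locally; the block number is simply the
       position in the registration sequence.                             */
    static int reg_push(void *ident, size_t nbytes) {
        reg_table[reg_count].local  = ident;
        reg_table[reg_count].nbytes = nbytes;
        return reg_count++;
    }

    /* Translate (block number, destination pid) into the remote address
       that a one-sided operation can target directly.                    */
    static void *reg_remote_addr(int block, int pid) {
        return reg_table[block].remote[pid];
    }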
This low-cost dynamic registration is beneficial to implementing user-level libraries and applications with recursion. Since the registration typically appears at the beginning of a program and rarely afterwards, it may be preferable to speed up ordinary communication operations at the expense of the registration.

As discussed in section 2.2, the initiator of RDMA operations should know the address of the remote buffer. In xBSP, each node registers its local buffer to the VI-NIC in bsp_push_reg() and exchanges the address in the barrier synchronization step. At the end of the synchronization, each node builds a mapping table between the local address and the corresponding remote addresses. Since each node knows the actual address of the global memory block, it can transfer data to the remote buffer directly using the RDMA operation, unlike the Oxford implementation.

3.2. One-Sided Operation

A process can initiate a one-sided operation on the registered memory block. For example, bsp_hpput(int pid, void *src, void *dst, int offset, int nbytes) writes nbytes of data in the src buffer to the dst+offset address on the pid node; the written data is valid in the next superstep. The bsp_hpput() function is exactly matched to the RDMA write operation. As the initiator has the address information of the dst buffer after the registration step, it can transfer data to the dst buffer directly. The target process does not have to consider flow control, descriptor posting, or incoming message handling. Furthermore, it is free from multi-threading overhead. Consequently, the bsp_hpput() function is able to pull delay and bandwidth performance close to those of VIPL.
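Building on the registration table sketched in the previous example, the mapping from bsp_hpput() onto a single RDMA write could look roughly as follows. The post_rdma_write() wrapper stands in for the VI provider's descriptor posting with the RDMA-write control bits set; both its name and its signature are illustrative assumptions rather than VIPL or xBSP symbols.

    /* Hypothetical provider wrapper: RDMA-write 'nbytes' from the registered
       buffer 'src' to the remote virtual address 'raddr' on the connection
       leading to process 'pid'.                                             */
    extern int post_rdma_write(int pid, const void *src, void *raddr, size_t nbytes);

    /* Sketch of mapping a high-performance put onto one RDMA write, once the
       destination address is known from the registration table.             */
    static void sketch_hpput(int pid, const void *src, void *dst, int offset, int nbytes)
    {
        /* find the registration entry whose local address matches 'dst' */
        for (int i = 0; i < reg_count; i++) {
            if (reg_table[i].local == dst) {
                char *raddr = (char *)reg_table[i].remote[pid] + offset;
                post_rdma_write(pid, src, raddr, (size_t)nbytes);
                return;
            }
        }
        /* no matching registration: a real library would report an error */
    }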
One problem related to the RDMA write operation is how the target process knows about the arrival of a message. There are two possible solutions to this problem. One is to enforce the use of a descriptor notifying the end of a message (EOM). An RDMA write operation consumes a descriptor in the Receive Queue only when there is immediate data in the source descriptor. Hence, we can use this feature to mark the end of an RDMA message. When a message consists of n packets, the sender transfers n-1 packets and finishes the nth packet transfer with the EOM tag, while the receiver checks whether a descriptor is consumed and the returned value is EOM. This approach requires one descriptor per message, while the traditional message passing requires n descriptors.

The other approach is to send an additional control message to mark the end of a message. Even though this approach has more overhead than the first, in the case of BSPlib this approach is preferable. As the transferred messages in a superstep are available in the next superstep, there is no need to handle incoming messages immediately. Since cLAN supports reliable in-order delivery, the arrival of a packet means the successful arrival of the preceding packets. Therefore, a series of EOM control messages in a superstep can be replaced by the last EOM control message, and the EOM message can be piggybacked with the barrier synchronization packet. After all, in place of EOM control messages, barrier synchronization can be used implicitly to mark the end of transfers.

3.3. Other Issues

The accumulated start-up costs of communication are significant if many small messages are outstanding to the network. This problem has already been discussed in other studies [20,22] and can be overcome with a combining scheme. xBSP also combines small messages into a temporary buffer, since the copying overhead of small messages is smaller than the memory registration cost of VIA. This combining method contributes to increasing the communication bandwidth while sacrificing little round trip time.

Table 2. Total exchange time with eight nodes for cLAN (µs)

message length (byte)   latin square   naive ordering   factor
8K                      1572           2358             1.5
16K                     2719           4871             1.8
32K                     4930           10308            2.1
64K                     9340           21535            2.3

Besides, reordering messages is helpful to avoid serialization of message delivery [22], and we use a latin square indexing order to schedule the destination of messages. A latin square is a p x p square in which all rows and columns are permutations of the integers 1 to p. In comparison, naive ordering distributes messages in the fixed index order implied in the code, like for(j=0;j<p;j++). As presented in Table 2, the reordering affects the performance for large messages; the speed-up factor increases with the message size. This result indicates that poor destination scheduling can decrease the performance of total exchange significantly.
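The two destination orderings compared in Table 2 can be sketched as follows. send_to() is a placeholder for whatever transfer primitive is used (for example, a put into the buffer registered by the destination process); it is an assumption for illustration, not a BSPlib call. The row of the Latin square used by process pid is simply (pid + step) mod p, so in every step each destination is targeted by exactly one sender.

    extern void send_to(int dst);   /* placeholder transfer primitive */

    /* Naive ordering: every process walks the destinations in the same fixed
       order, so in step j all p processes target process j at once.          */
    static void total_exchange_naive(int pid, int p) {
        for (int j = 0; j < p; j++)
            if (j != pid)
                send_to(j);
    }

    /* Latin-square ordering: the p-by-p matrix (pid + step) mod p has each
       value exactly once per row and per column, which avoids serializing
       the exchange on a single receiver.                                     */
    static void total_exchange_latin(int pid, int p) {
        for (int step = 1; step < p; step++)
            send_to((pid + step) % p);
    }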
4. Micro-benchmark Experiments

In this section, we demonstrate that BSPlib could be efficiently implemented on VIA through the experimental results with two micro-benchmarks: half round trip time and bandwidth. Our Linux cluster consists of eight nodes connected by an 8-port cLAN switch. Each node has dual Pentium III 550 MHz processors with 256-Mbyte SDRAM and runs the Redhat Linux 6.2 SMP version.

4.1. Preliminary Experiments

We tested a few implementation alternatives to achieve the full performance of VIA and observed the effects of completion policies and threading on the round trip delay.

[Fig. 2. Effects of threading and completion policy]

With polling, each process repeatedly checks whether the transaction is completed, while with blocking it waits for the completion of the transaction. Meanwhile, in the threaded version, a communication thread is dedicated to receiving incoming messages while a user thread continues its computation. Fig. 2 shows that the single threaded version using polling achieves a significant reduction of delay. However, it is wasteful to dedicate all of the CPU resources to polling, especially in the case of long message transfers. A tradeoff can be made by mixing both schemes: xBSP polls for a certain number of iterations anticipating the completion of short message transfers and is blocked eventually. Based on these experiments, we chose the single threaded version using the mixed policy.
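The mixed completion policy just described can be summarized by the following sketch: spin on a completion check for a bounded number of iterations, which is cheap for short transfers that finish quickly, and fall back to a blocking wait otherwise. check_done() and wait_done() are placeholders for a provider's non-blocking and blocking completion calls (for example, polling versus waiting on the Completion Queue); they are assumptions, not VIPL names, and the polling budget is an arbitrary tuning knob.

    #define POLL_BUDGET 2000    /* iterations to spin before giving up the CPU */

    extern int  check_done(void);   /* returns nonzero once the transfer completed */
    extern void wait_done(void);    /* blocks until the transfer has completed     */

    static void wait_for_completion(void)
    {
        for (int i = 0; i < POLL_BUDGET; i++) {
            if (check_done())
                return;             /* completed while polling: no context switch  */
        }
        wait_done();                /* long transfer: block instead of busy-waiting */
    }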
4.2. Half Round Trip Time and Bandwidth

With micro-benchmarks, we measured the half RTT and the bandwidth. To measure the half RTT, two processes send equal amounts of data back and forth repeatedly. We vary the message size from 4 bytes to 64 Kbytes and take the average value over 1000 execution results. Also, the bandwidth is computed after measuring the latency to transfer 1 Mbyte of data, varying the message size. The baseline is the performance of xBSP using the traditional message passing with a single thread. We change the communication mode of VIA from message passing to RDMA and compare xBSP to VIPL and MPI/Pro. These benchmarks use the following configurations:

• VIPL-MP: VIPL using message passing (polling)
• VIPL-RDMA: VIPL using RDMA (polling)
• xBSP-MP: xBSP using message passing (mixed)
• xBSP-RDMA: xBSP using RDMA (mixed)
• MPI/Pro: MPI of MPI Software Technology

[Fig. 3. Half round trip time (with combining overhead)]
[Fig. 4. Bandwidth (without combining advantage)]

Fig. 3 and Fig. 4 show the experimental results of the round trip delay and bandwidth for the various configurations. For comparison, the results of MPI/Pro [14] are also presented. The VIPL versions reveal the minimum application level latency since they do not include any supplementary jobs for communication, such as registration, and use the polling mechanism for the completion policy.

Comparing the two VIPL versions, we can estimate the overhead due to the pre-posting constraint, which includes descriptor posting and flow control. Even though the performance gap is not significant, the RDMA version consistently outperforms the MP version, and the experiments with xBSP also show similar results.

According to Fig. 3, xBSP-RDMA is two times slower than VIPL-RDMA with 4-byte packets. The extra latency of xBSP-RDMA mainly results from the copying overhead of the message combining and the blocking overhead of the mixed completion policy. In contrast, MPI/Pro is 8.8 times slower than VIPL-RDMA: on average, xBSP shows at least twice lower latency than MPI/Pro in the case of small messages. In terms of the peak bandwidth, xBSP-RDMA achieves about 94% of the VIPL bandwidth, while MPI/Pro achieved only 82%. Consequently, these results demonstrate that xBSP exploits VIA features more effectively than MPI/Pro.

5. Benchmark Experiments

Even though micro-benchmarks can be used for measuring the basic link properties, high performance on micro-benchmarks does not ensure the same performance benefit in real applications. To rigorously evaluate the performance, we measure the BSP cost parameters and then the execution times of several real applications.

5.1. BSP Cost Model

The BSP model simplifies a parallel machine to three components, a set of processors, an interconnection network, and a barrier synchronizer, which are parameterized as {p, g, l}. Parameter p represents the number of processors in the cluster; parameter g, the gap between continuous message sending operations; and parameter l, the barrier synchronization latency. A BSP program consists of a sequence of supersteps separated by barrier synchronizations. In every superstep, each process performs local computation or exchanges messages which are available in the next superstep. Hence, the execution time for superstep i is modeled by w_i + g*h_i + l, where w_i is the longest duration of local computation in the ith superstep and h_i is the largest amount of packets exchanged by a process during this superstep.

Table 3. BSP cost parameters, s (Mflop/s) = 121

         xBSP-RDMA                      xBSP-MP                        BSPlib-UDP/IP
P    L (µs)      g (µs/word)      L (µs)      g (µs/word)      L (µs)      g (µs/word)
     min   max   shift   total    min   max   shift   total    min   max   shift   total
2    17    23    0.077   0.103    19    52    0.086   0.110    136   320   0.40    0.42
4    37    50    0.077   0.086    42    71    0.092   0.102    271   441   0.37    0.40
6    73    89    0.079   0.083    80    112   0.109   0.115    406   687   0.46    0.49
8    109   123   0.079   0.084    108   145   0.105   0.110    433   764   0.48    0.53

In Table 3, the cost parameters of xBSP and the Oxford BSPlib implementation using UDP/IP for Fast Ethernet are compared. These parameters serve as a measure of the entire system under some non-trivial workload. The s parameter represents the instruction execution rate of each processor, taken from the average execution time of matrix multiplication and dot products. The minimum L value is taken as the average latency of a long sequence of bsp_sync(), while the maximum value is taken as the average latency of a long sequence of the pair of bsp_hpput() and bsp_sync() with a one-word message. The g parameter is a measure of the global network bandwidth, not the point-to-point bandwidth: a smaller g value means higher global bandwidth. With the shift communication pattern, each process sends data to its neighbor, and with total exchange it broadcasts.

xBSP-RDMA experiences much lower synchronization latency and higher bandwidth (a shorter time interval) than the others. xBSP-RDMA achieves a constant global bandwidth of about 381 Mbps and xBSP-MP achieves about 291 Mbps, while BSPlib-UDP/IP's performance decreases when the number of nodes exceeds four: xBSP shows good scalability characteristics, and the RDMA operations are well matched with the BSPlib interfaces.
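As a numerical illustration of how the cost model and Table 3 fit together, the small program below plugs example parameters into w + g*h + l for a single superstep. Only g and the maximum L are taken from the xBSP-RDMA column of Table 3 (eight nodes); the computation time w and relation size h are made-up inputs chosen purely for the example, not additional measurements.

    #include <stdio.h>

    /* predicted superstep time in microseconds:
       w - longest local computation in the superstep (us)
       h - largest number of words sent or received by one process
       g - gap, us per word
       l - barrier synchronization latency, us                          */
    static double superstep_cost(double w, double h, double g, double l) {
        return w + g * h + l;
    }

    int main(void) {
        /* example: 8 nodes, total-exchange g = 0.084 us/word, max L = 123 us */
        double t = superstep_cost(500.0 /* w */, 10000.0 /* h */, 0.084, 123.0);
        printf("predicted superstep time: %.1f us\n", t);   /* 500 + 840 + 123 = 1463 us */
        return 0;
    }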
5.2. Applications

In this section, we compare the BSPlib libraries with the following two applications:

• ES: application to solve a grid problem with a 300 x 300 matrix [23]
• LU: application to solve a linear equation using LU decomposition [24]

The execution time of the grid solver is presented in Fig. 5. The values above the bars represent the ratio of the sum of communication and synchronization times compared with xBSP-RDMA. In the grid solver program, each process exchanges data with its neighbors: the communication pattern is similar to the shift communication. Since ES spends most of its time (about 5.9 sec) in computation in the case of two nodes, the performance gap between xBSP-RDMA and xBSP-MP is not so great. In contrast, since the packet size transferred in a superstep is 2400 bytes, the latency reduction of xBSP-RDMA over BSPlib/UDP is about 6.2. Fig. 5 coincides with the expected result, where the sum of the global communication time and the synchronization time is reduced to about 18%. As the number of nodes increases, the portion of computation decreases and the communication and synchronization costs become significant. xBSP-RDMA always outperforms both xBSP-MP and BSPlib/UDP.

[Fig. 5. Execution time of ES with a 300 by 300 matrix]
[Fig. 6. LU decomposition on two by two nodes]

Fig. 6 shows the execution time of LU decomposition measured on four processors, varying the size of the input matrix. In the LU decomposition program, broadcast operations with small h-relations and barrier synchronizations are repeated. Therefore, it provides a good measure of the communication latency of a system. xBSP-RDMA shows about 1.4 times better communication performance than xBSP-MP, which is expected from the cost model.

6. Conclusions

In this paper, we presented an efficient implementation of BSPlib for VIA called xBSP. xBSP demonstrates that BSPlib is more appropriate than MPI to exploit the features of VIA. Furthermore, we achieved application performance similar to the native performance of VIPL by reducing the overheads associated with multi-threading, memory registration, and flow control.

Even though we paid attention only to implementing BSPlib, there are many possibilities for improving performance by relaxing the BSPlib semantics. In particular, we should reduce barrier synchronization costs by adopting such mechanisms as relaxed barrier synchronization [25] and zero-cost synchronization [26]. Currently, we are building a programming environment based on xBSP-RDMA for heterogeneous cluster systems adopting a dynamic load balancing scheme.
Acknowledgements

This work was supported by the National Research Laboratory program (number M1-0104-00-0015). The RIACT at Seoul National University provides research facilities for this study.

References

[1] R. Caceres, P. B. Danzig, S. Jamin, and D. J. Mitzel, Characteristics of Wide-Area TCP/IP Conversations, ACM SIGCOMM Computer Communication Review 21(4) (1991), 101–112.
[2] J. Kay and J. Pasquale, The Importance of Non-Data Touching Processing Overheads in TCP/IP, ACM SIGCOMM Computer Communication Review 23(4) (1993), 259–268.
[3] J. Kay and J. Pasquale, Profiling and Reducing Processing Overheads in TCP/IP, IEEE/ACM Transactions on Networking 4(6) (1996), 817–828.
[4] R. A. F. Bhoedjang, T. Rühl, and H. E. Bal, User-Level Network Interface Protocols, IEEE Computer 31(11) (1998), 53–60.
[5] G. Chiola and G. Ciaccio, GAMMA: A Low Cost Network of Workstations Based on Active Messages, In Proc. PDP'97, 1997.
[6] D. Dunning, G. Regnier, G. McAlpine, D. Cameron, B. Shubert, F. Berry, A. Marie Merritt, E. Gronke, and C. Dodd, The Virtual Interface Architecture, IEEE Micro 18(2) (1998), 66–76.
[7] S. Pakin, V. Karamcheti, and A. A. Chien, Fast Messages: Efficient, Portable Communication for Workstation Clusters and MPPs, IEEE Concurrency 5(2) (1997), 60–72.
[8] L. Prylli and B. Tourancheau, BIP: A New Protocol Designed for High Performance Networking on Myrinet, PC-NOW'98, Vol. 1388 of Lect. Notes in Comp. Science, April 1998, 472–485.
[9] T. von Eicken, A. Basu, V. Buch, and W. Vogels, U-Net: A User-Level Network Interface for Parallel and Distributed Computing, Operating Systems Review 29(5) (1995), 40–53.
[10] T. von Eicken, D. E. Culler, S. C. Goldstein, and K. E. Schauser, Active Messages: A Mechanism for Integrated Communications and Computation, Proc. 19th Symp. Computer Architecture, May 1992, 256–266.
[11] V. S. Sunderam, PVM: A Framework for Parallel Distributed Computing, Concurrency: Practice and Experience 2(4) (1990), 315–339.
[12] Message Passing Interface Forum, MPI: A Message Passing Interface Standard, Tech. Report Version 1.1, Univ. of Tennessee, Knoxville, Tenn., 1995.
[13] J. M. D. Hill, B. McColl, D. C. Stefanescu, M. W. Goudreau, K. Lang, S. B. Rao, T. Suel, T. Tsantilas, and R. Bisseling, BSPlib: The BSP Programming Library, Parallel Computing 24(14) (1998), 1947–1980.
[14] R. Dimitrov and A. Skjellum, An Efficient MPI Implementation for Virtual Interface (VI) Architecture-Enabled Cluster Computing, MPI Software Technology, Inc.
[15] E. Speight, H. Abdel-Shafi, and J. K. Bennett, Realizing the Performance Potential of the Virtual Interface Architecture, ICS'99, June 1999, 184–192.
[16] M. Lauria and A. Chien, MPI-FM: High Performance MPI on Workstation Clusters, Journal of Parallel and Distributed Computing 40(1) (1997), 4–18.
[17] L. Prylli, B. Tourancheau, and R. Westrelin, The Design for a High Performance MPI Implementation on the Myrinet Network, EuroPVM/MPI'99, Vol. 1697 of Lect. Notes in Comp. Science, September 1999, 223–230.
[18] J. Worringen and T. Bemmerl, MPICH for SCI-connected Clusters, SCI Europe'99, September 1999, 3–11.
[19] Leslie G. Valiant, A Bridging Model for Parallel Computation, Comm. ACM 33(8) (1990), 103–111.
[20] S. R. Donaldson, J. M. D. Hill, and D. B. Skillicorn, Predictable Communication on Unpredictable Networks: Implementing BSP over TCP/IP, Concurrency: Practice and Experience 11(11) (1999), 687–700.
[21] S. R. Donaldson, J. M. D. Hill, and D. B. Skillicorn, BSP Clusters: High Performance, Reliable, and Very Low Cost, Parallel Computing 26(2-3) (2000), 199–242.
[22] J. M. D. Hill and D. B. Skillicorn, Lessons Learned from Implementing BSP, Journal of Future Generation Computer Systems 13(4-5) (1998), 327–335.
[23] D. E. Culler and J. Pal Singh, Parallel Computer Architecture, Morgan Kaufmann Publishers, Inc., 1999, 92–116.
[24] R. Bisseling, BSPEDUpack, /implmnts/oxtool.htm.
[25] J. S. Kim, S. Ha, and C. S. Jhon, Relaxed Barrier Synchronization for the BSP Model of Computation on Message-Passing Architectures, Information Processing Letters 66(5) (1998), 247–253.
[26] O. Bonorden, B. Juurlink, I. von Otte, and I. Rieping, The Paderborn University BSP (PUB) Library - Design, Implementation and Performance, IPPS/SPDP'99, April 1999, 99–104.

SJA1110 Automotive Ethernet Switch Fact Sheet

FACT SHEET — SJA1110

The SJA1110 automotive Ethernet switch family offers innovative and dedicated safety and security features designed for optimal integration in automotive ECUs. The four switch variants enable modular ECU designs and platforms and support different automotive applications such as gateways, ADAS boxes, and infotainment ECUs.

KEY FEATURES
• Integrated 100BASE-T1 and 100BASE-TX PHYs
• Integrated Arm® Cortex®-M7 based core
• Best-in-class packet inspection and DoS prevention capabilities
• Advanced secure boot capabilities
• Purpose-built functional safety features
• Support for Wake-over-Ethernet (OPEN TC10)
• Rich set of Time-Sensitive Networking (TSN) standards
• Rich set of NXP original AVB and AUTOSAR® software
• System solution with S32G Vehicle Networking Processor and VR5510 power management unit

[SJA1110 Ethernet switch block diagram]

ENABLEMENT
• Production-grade Software Development Kit (SDK)
• Native integration with NXP Design Studio IDE
• Production-grade AUTOSAR drivers
• Production-grade AVB/802.1AS synchronization protocol middleware
• Evaluation board compatible with NXP's Smart Application Blueprint for Rapid Engineering (SABRE)
• Linux® drivers

NETWORKING APPLICATIONS
• Optimized NXP chipset solution with the S32G processor enables unmatched routing, firewalling, and intrusion detection/prevention capabilities
• Best-in-class TCAM-based frame inspection for IDPS support, DoS prevention, and advanced frame management
• BOM optimization features include compatibility with the VR5510 PMIC, four pin-compatible variants, and optimized cascaded configuration

ADAS APPLICATIONS
• Functional-safety-dedicated features improving ECU safety design
• Safety manual enables optimized safety design up to ASIL-D ECUs
• Automotive Grade 1 (-40 / +125 °C) capability for optimized PCB design
• High SGMII count for EMC-friendly design
• Production-grade AUTOSAR drivers
• Compatible with TTTech® MotionWise® middleware

INFOTAINMENT/CLUSTER APPLICATIONS
• Multi-gigabit SGMII for external Gigabit and multi-gigabit PHYs
• Autonomous operation support avoids dependency on an untrusted external host
• Avnu®-Certified* AVB/gPTP stack for the integrated controller
• Support for Wake over Ethernet (OPEN TC10)
• Integrated controller with programmable GPIOs

SJA1110 TSN ETHERNET SWITCH /SJA1110
NXP and the NXP logo are trademarks of NXP B.V. All other product or service names are the property of their respective owners. Arm and Cortex are trademarks or registered trademarks of Arm Limited (or its subsidiaries) in the US and/or elsewhere. The related technology may be protected by any or all of patents, copyrights, designs and trade secrets. All rights reserved. © 2022 NXP B.V. Document Number: SJA1110AUTESFS REV 1

Organized Notes on Xen Fundamentals

XEN Virtualization Technology Features — Notes on Xen Fundamentals

Introduction to XEN: XEN is an open-source virtualization technology based on the x86 architecture that is among the fastest-developing, most stable, and least resource-hungry options available. Xen can securely run multiple virtual machines on a single set of physical hardware and forms an ideal open-source combination with Linux; Novell SUSE Linux Enterprise Server was the first to adopt XEN virtualization technology. It is particularly suitable for server application consolidation, effectively reducing operating costs, improving equipment utilization, and maximizing use of the data center's IT infrastructure.

XEN is an open-source virtualization project developed by the Computer Laboratory of the University of Cambridge. XEN can securely run multiple virtual machines on one set of physical hardware; it integrates very closely with the operating platform and consumes minimal resources. The current stable version is XEN 3.0. It supports both full virtualization and paravirtualization. Known for high performance and low resource usage, it has won strong recognition and support from world-class hardware and software vendors such as IBM, AMD, HP, Red Hat, and Novell, and has been used by many enterprise and institutional users at home and abroad to build high-performance virtualization platforms.

Comparison of VMware and XEN; the XEN architecture is shown in the figure. Xen is currently the highest-performing hypervisor in the industry, with roughly one tenth the overhead of comparable proprietary products. Xen's unique performance value comes from its use of paravirtualization. Paravirtualization lets hosted virtual servers cooperate with the hypervisor, enabling enterprise applications to reach optimal performance. Other vendors (such as Microsoft) are rushing to implement their own hypervisors, but they are at least three years behind the Xen project. In addition, Xen also takes advantage of the hardware virtualization capabilities of Intel VT and AMD virtualization-enabled processors.

The main features of XEN virtualization technology are as follows:
◆ Virtual machine performance is closer to that of the real hardware platform;
◆ Free switching between physical and virtual platforms;
◆ Up to 32 virtual CPUs per guest virtual machine, with VCPU hot-plug;
◆ Support for x86/32 platforms with the PAE instruction set, as well as x86/64;
◆ Unmodified guest operating systems can be run through hardware-assisted virtualization, including Microsoft Windows guests;
◆ Strong backing from a wide range of hardware vendors, with support for almost all Linux device drivers.

Quality, Environmental, and Occupational Health & Safety Three-System Manual, 2016 Edition (Chinese-English)

************有限公司CHONGQING TONGYAO CASTING & FORGING CO., LTD.质量、环境、职业健康安全手册Quality,Environmental,Occupational Healthand Safety Manual编号:TY-QEO-2018Document No: TY-QEO-2018版本:Revision:制订:Prepared by:审核:Verified by:批准:Approved by:受控状态:Controlled Condition:分发号:Distribution No.:2018-3-01发布Released on 01/03/20182018-3-01 实施Effected on 01/03/2018体系更改记录表History of RevisionsTY-QR-JZ-021A0.1目录 (3)0.1Content (3)0.2公司概况 (6)0.2Company Profile (6)0.3.1质量管理者代表和质量保证负责人任命书 (7)0.3.1Letter of Appointment for Quality Management Representative & Supervisor (7)0.3.2环境、职业健康安全管理者代表任命书 (9)0.3.2Letter of Appointment for Environmental and Occupational Health and SafetyManagement Representative (9)0.3.3员工职业健康安全事务代表任命书 (10)0.3.3Letter of Appointment for Occupational Health and Safety Representative of Employees100.3.4认证联络工程师任命 (11)0.3.4Appointment of Certification Liaison Engineer (11)0.4手册颁布令 (12)0.4Issue Order of Manual (12)0.6.1组织机构图 (13)0.6.1Organization chart (13)0.6.2检验机构图 (15)0.6.2Inspection organization chart (15)0.7公司质量管理体系运行图 (17)0.7Structure Chart of Company’s Quality Management System (17)0.8.1质量管理体系过程适用范围及职责分配表 (18)0.8.1Quality Management System Process Applicable Scope & Responsibility AssignmentTable (18)0.8.2环境、职业健康安全管理体系过程适用范围及职责分配表 (21)0.8.2Quality Management System Process Applicable Scope & Responsibility AssignmentTable (21)0.9质量管理原则 (24)0.9Quality management principles (24)1范围 (25)1Scope (25)2规范性引用文件 (27)2Normative reference (27)3术语和定义 (28)3Terms and definitions (28)4.1理解组织及其环境 (29)4.1Understanding the organization and its environment; (29)4.2理解相关方的需求和期望 (29)4.2Understanding the needs and expectations of interested parties; (29)4.3确定管理体系的范围 (30)4.3Determine the scope of management system (30)4.4管理体系及其过程 (30)4.4Management system and its process (30)5.1领导作用和承诺 (33)5.1Leadership and commitment (33)5.2方针 (34)5.2Policy (34)5.3组织的岗位、职责和权限 (36)5.3Organizational positions, responsibilities, and authorities (36)6.1应对风险和机遇的措施 (44)6.1Countermeasures for risks and opportunities (44)6.2质量、环境、职业健康安全目标及其实现的策划 (49)6.2Quality, environment, occupational health and safety objectives and planning for theirimplementation (49)6.3变更的策划 (50)6.3Planning of change (50)7.1资源 (51)7.1Resources (51)7.2能力 (54)7.2Competence (54)7.3意识 (55)7.3Awareness (55)7.4沟通 (56)7.4Communication (56)7.5形成文件的信息 (59)7.5Documented information (59)8.1运行策划和控制 (62)8.1Operation planning and control (62)8.2产品和服务的要求 (63)8.2Product and service requirements (63)8.3工艺设计和开发 (66)8.3Process design and development (66)8.4外部提供的过程、产品和服务的控制 (69)8.4Control of externally provided processes, products and services (69)8.5生产和服务提供 (72)8.5Production and service provision (72)8.6产品和服务的放行 (76)8.6Release of products and services (76)8.7不合格输出的控制 (77)8.7Control of nonconforming outputs (77)8.8应急准备和响应 (78)8.8Emergency preparedness and response (78)9.1监视、测量、分析和评价 (82)9.1Monitoring, measurement, analysis and evaluation (82)9.2内部审核 (88)9.2Internal audit (88)9.3管理评审 (89)9.3Management review (89)10持续改进 (92)10Continuous improvement (92)11认证产品的一致性 (95)11Consistency of certified products (95)12安全文明生产 (97)12Safe and well-managed production (97)13对照检索:AAR质量体系对照通耀质量体系 (98)13Cross Reference–AAR Elements to TongYao Manual (98)14程序文件目录 (99)14Procedure Document Content (99)************有限公司是一家大型民营企业,成立于2011年1月7日,注册资金10600万元。

HP Network Node Manager i-series 8.01 System Requirements and Supported Devices Matrix

HP Network Node Manager i-series Support MatrixSoftware Version:8.01This document provides an overview of the system requirements and supported devices for HP Network Node Manager Software version8.01.For the latest updates to the system requirements and device support,see sg-pro-/nnm/NNM8.01/SupportMatrix/supportmatrix.htmThis document is intended to augment the Release Notes.You can find both the Support Matrix(supportmatrix_en.html) and the Release Notes(releasenotes_en.html)at the root directory of the installation media.Installation GuideHardware and Software RequirementsHardwareOperating SystemDatabaseWeb BrowserTuning the JBoss Memory SizeLocalized Product SupportDeployment GuideIntegration and Coexistence with Other ProductsSupported Network DevicesInstallation GuidePre-installation requirements,as well as instructions for installing NNM,are documented in the installation guide provided in Adobe Acrobat(.pdf)format.The document file is included on the product's installation media as:install-guide_en.pdf. After installation,this document can be found from the NNM User Interface by selecting Help → Documentation Library → Installation Guide.Hardware and Software RequirementsBefore installing Network Node Manager make sure that your system meets the following minimum requirements: Hardware∙Intel64-bit(x86-64)or AMD64-bit(AMD64)▪Caution:Intel32-bit(x86)hardware is not supported.Verify your computer architecture by looking at the%PROCESSOR_ARCHITECTURE%variable or System Properties.∙Itanium Processor Family(IPF,formerly IA-64)▪Caution:IPF hardware running the Windows operating system is not supported∙Sun SPARC∙VMWare ESX Server3.x▪Virtual environment must meet the Intel or AMD hardware requirements listed here∙Virtual Memory/Swap Space▪Recommend2times physical memory and at least12GB▪Verify virtual memory via the swapinfo command on HP-UX,the swap command on Solaris,the cat /proc/meminfo command on Linux,or System Properties on Windows∙CPU RAM and Disk Space RequirementsManagement Requirements NNM Minimum System RequirementsNumberof discovered nodes Number ofpolledinterfacesNumber ofconcurrentusersCPU(64-bit)IPFx86-64AMD64SPARCRAM Java heap size(see Tuningthe JBossMemory size,below)Disk space forApplicationinstallation(<NnmInstallDir>)*Disk space fordatabase and dataduring execution(<NnmDataDir>)**Up to3K Up to10K Up to104CPU or2x dual core(>1GHZprocessorspeed)4GB2GB5GB20GB3K–8K Up to20K Up to254CPU or2x dual core(>1GHZprocessorspeed)8GB4GB5GB30GB8K–15K Up to50K Up to408CPU or4x dual core(>1GHZprocessorspeed)16GB8GB5GB60GB*<NnmInstallDir>is configured during installation on Windows or by creating a symlink of/opt/OV on UNIX.NNM7.x NOTE:/etc/opt/OV is no longer used on UNIX.**<NnmDataDir>is configured during installation on Windows or by creating a symlink of/var/opt/OV on UNIX.Operating System∙Windows▪Windows Server2003Enterprise x64with Service Pack2▪Windows Server2003Enterprise x64R2with Service Pack2▪Caution:Windows operating systems on Itanium Processor Family(IPF)are not supported▪Caution:Windows32-bit operating systems are not supported▪Other Windows Softwareo Microsoft Simple Network Management Protocol must be installed(see Install Guide)∙HP-UX▪HP-UX11iv3▪Kernel configuration(verify with/usr/sbin/smh)o Verify kernel parameters in the"Kernel Configuration/Tunables"section:o nproc:add50o max_threads_proc=2048o nkthreads=10000▪System Configurationo Verify using swapinfo that the system has a sufficient amount of swap.The minimum requirement is12 GB.This is the sum of the RAM and 
swap space available.▪Operating System Kernel PatchesThe following HP-UX11iv3operating system patches are required(or newer if the patch has beensuperseded).You can verify patches on HP-UX by running/usr/sbin/swlist-l fileset-a patch_state*.*,c=patch|grep-v superseded This list does not include Java patches(see next bullet),but only the list of OS-level patches.The following patches are required:o PHKL_36054o PHKL_36261o PHKL_36872o PHKL_37184▪Run HPjconfig HP-UX11i system configuration tool to validate the system configuration.HPjconfig can be downloaded from /go/java.To install:o On your HP-UX system,gunzip and untar the.tar.gz file as follows:gunzip HPjconfig-3.1.00.tar.gztar-xvf HPjconfig-3.1.00.taro To start HPJconfig:Change to the directory you installed the HPjconfig files.There are two ways you can run HPjconfig,GUI and non-GUI mode.Enter one of the following commands: java-jar./HPjconfig.jar(The default HPjconfig GUI)java-jar./HPjconfig.jar-nogui-help(The-help command lists options that you can use in non-GUI mode)o To list missing patches in non-GUI mode:java-jar./HPjconfig.jar-nogui-patches-listmisThis will validate kernel configuration and patch levels∙Solaris▪Sun Solaris10SPARC▪Caution:Solaris on Intel Architecture is not supported▪The shared memory must be updated.Update the/etc/system entry using an editor as follows: set shmsys:shminfo_shmmax=1073741824∙Linux▪RedHat Enterprise Server AS4.0▪RedHat Enterprise Server ES4.0▪The default size of kernel.shmmax may be too small for the embedded database to operate after a reboot.To validate,run/sbin/sysctl–a|/bin/grep kernel.shmmax.If this is less than300Meg(300000000),then it must be modified.To change the value,run:/sbin/sysctl–w kernel.shmmax=300000000To make this change permanent(after a reboot),one must edit the/etc/sysctl.conf file and add the following entry:kernel.shmmax=300000000▪See the installation guide for the dependency on the64-bit libstdc++libraries.DatabaseNNM can store its data using an embedded database that is automatically installed,or in an Oracle database.Oracle as a database must be chosen at installation time.NOTE:you cannot migrate from an embedded database to Oracle or back.∙Embedded database on the management system▪The embedded database is automatically installed and automatically initialized and maintained by NNM▪The embedded database comes with tools for re-initialization,online backup,and restore▪The embedded database performs well for most deployments∙Oracle10g Release2(10.2.0.x)installed on a remote system▪Recommend at least a1GB network connection between the NNM management server and the database server▪Database user must be created before install(see Install Guide)with at least4GB of tablespaceWeb Browser∙General Web Browser Requirements▪Any Window Popup Blockers must be disabled for the browser(see instructions on the console sign-in page or Install Guide)▪Cookies must be enabled for the browser(see instructions on the console sign-in page or Install Guide)▪Client display should have a resolution of at least1024x768∙Web Browser Running on a Remote Client System(for operational use)▪Microsoft Internet Explorer version7.0.5730.11or newer with October2007or later Cumulative Patch for Internet Explorer7.This patch increases the number of Internet Explorer cookies from20to50,allowing for savingof more NNM console table configurations.▪Mozilla Firefox version2.0.0.11or newer from a Windows or Linux client.The Firefox browser may be downloaded from /firefox▪Caution:Microsoft Internet Explorer 
version6is not supported▪Caution:Apple Safari is not supported∙Web Browser Running on the Local Management Server System(for initial installation and configuration use)▪Any browser supported for operational use(see above)when running on the management server▪Mozilla Firefox version2.0.0.4or newer for HP-UX11.31on IPF server.The Firefox browser may be downloaded from /go/firefox▪Mozilla Firefox version2.0.0.9or newer for Solaris SPARC10.The Firefox browser may be downloaded from /pub//firefox/releases/2.0.0.9/contrib/solaris_pkgadd/Tuning the JBoss Memory SizeDuring installation,the recommended default maximum memory size of the JBoss application server is configured inovjboss.jvm.properties.For larger environments this value can be increased to improve performance.The current value is displayed in the NNM console via Help → About.It is recommended that this value not exceed one-half of the amount of physical RAM.To change the JBoss Maximum Java Heap Size:1.ovstop–c ovjboss2.Edit the ovjboss.jvm.properties file and change the Maximum Java Heap Size to the required amount.∙Windows:C:\Documents and Settings\All Users\Application Data\HP\HP BTO Software\\shared\nnm\conf\ovjboss\ovjboss.jvm.properties∙HP-UX:/var/opt/OV/shared/nnm/conf/ovjboss/ovjboss.jvm.properties1.Modify the-Xmx and optionally-Xms valuesA snippet of the file looks like this:##JVM Memory parameters#-Xms:Initial Java Heap Size#-Xmx:Maximum Java Heap Size#-Xms128m-Xmx2048m2.ovstart–c ovjbossLocalized Product SupportNNM8.01is internationalized and can be used on operating systems configured for non-US-English locales that are supported by the operating systems.Those locales include variants of Japanese,Korean,Simplified Chinese,and Traditional Chinese,and Western and Central European locales,and Russian.NNM has been localized to Japanese.Under other locales,NNM will produce English strings,while accepting non-English characters as input.NNM uses UTF-8based locales on Linux only.When running on HP-UX,Solaris,and Windows,NNM uses non-UTF-8based locales supported by that operating system.Due to these character set differences,NNM is not supported from a Linux browser client to an HP-UX,Solaris,or Windows server running in a non-English locale.Deployment GuideTo get the latest version of the NNM8.00deployment guide,go to the following web site and request the HP Network Node Manager Software Deployment Guide:/lpe/doc_serv/Integration and Coexistence with Other ProductsThe following products have been tested to co-exist on the same system as NNMi8.01:∙HP Operations Agent(OMW64bit https Agent)Version8.x(Windows Server2003Enterprise x64R2Service Pack2 only)∙HP Operations Agent(OMU64bit https Agent)Version8.x(HP-UX11.31IPF,Solaris10SPARC)∙HP Performance Insight Version5.3(HP-UX11.31IPF,Solaris10SPARC)∙HP Performance Agent Version4.7(Windows Server2003Enterprise x64SP2,Windows Server2003Enterprise x64 R2Service Pack2)∙HP Performance Manager Version8.0(HP-UX11.31IPF,Solaris10SPARC)Caution:Installation of HP Performance Manager followed by NNMi8.01is supported.Installation of NNMi8.01followed by HP Performance Manager is not supported.Caution:If HP Performance Manager is installed,followed by NNMi8.01,then HP Performance Manager is uninstalled, the HPOvPerlA package must be reinstalled using the appropriate OS command:▪Solaris:pkgadd–d<full path to HPOvPerlA sparc package>/HPOvPerlA-05.08.081-SunOS5.7-release.sparc ▪HP-UX:swinstall–s<full path to HPOvPerlA depot package>/HPOvPerlA-05.08.081-HPUX11.22_IPF32-release.depot\*∙HP Extensible SNMP 
Agent Version4.21(HP-UX11.31IPF,Solaris10SPARC)The following products have an NNMi8.01integration available:∙HP Network Node Manager iSPI for Performance version8.01∙HP Network Node Manager Versions6.x and7.x(Integration built into NNMi.See"NNM6.x/7.x Management Stations"in the online help)∙HP Network Automation Server(NAS)version7.01∙NetScout nGenius version4.3∙AlarmPoint Systems AlarmPoint3.2.1Supported Network DevicesFor the list of supported network devices and MIB requirements,refer to the NNMi Device Support Matrix.。

Cisco Nexus 3048 Switch Data Sheet

Data SheetCisco Nexus 3048 SwitchProduct OverviewThe Cisco Nexus® 3048 Switch (Figure 1) is a line-rate Gigabit Ethernet top-of-rack (ToR) switch and is part of the Cisco Nexus 3000 Series Switches portfolio. The Cisco Nexus 3048, with its compact one-rack-unit (1RU) form factor and integrated Layer 2 and 3 switching, complements the existing Cisco Nexus family of switches. This switch runs the industry-leading Cisco® NX-OS Software operating system, providing customers with robust features and functions that are deployed in thousands of data centers worldwide. The Cisco Nexus 3048 is ideal for big data customers that require a Gigabit Ethernet ToR switch with local switching that connects transparently to upstream Cisco Nexus switches, providing an end-to-end Cisco Nexus fabric in their data centers. This switch supports both forward and reversed airflow schemes with AC and DC power inputs.Figure 1. Cisco Nexus 3048 SwitchMain BenefitsThe Cisco Nexus 3048 provides the following main benefits:●Wire-rate Layer 2 and 3 switching◦Layer 2 and 3 switching of up to 176 Gigabit per second (Gbps) and more than 132 million packets per second (mpps) in a compact 1RU form-factor switch●Robust and purpose-built Cisco NX-OS operating system for end-to-end Cisco Nexus fabric◦Transparent integration with the Cisco Nexus family of switches to provide a consistent end-to-end Cisco Nexus fabric◦Modular operating system built for resiliency◦Integration with Cisco Data Center Network Manager (DCNM) and XML management tools●Comprehensive feature set and innovations for next-generation data centers◦Virtual PortChannel (vPC) provides Layer 2 multipathing through the elimination of Spanning Tree Protocol and enables fully utilized bisectional bandwidth and simplified Layer 2 logical topologies without the need to change the existing management and deployment models.◦Power On Auto Provisioning (POAP) enables touchless bootup and configuration of the switch, drastically reducing provisioning time.◦Cisco Embedded Event Manager (EEM) and Python scripting enable automation and remote operations in the data center.◦Advanced buffer monitoring reports real-time buffer utilization per port and per queue, which allows organizations to monitor traffic bursts and application traffic patterns.◦The 64-way equal-cost multipath (ECMP) routing enables Layer 3 fat tree designs and allows organizations to prevent network bottlenecks, increase resiliency, and add capacity with little networkdisruption.◦EtherAnalyzer is a built-in packet analyzer for monitoring and troubleshooting control-plane traffic and is based on the popular Wireshark open source network protocol analyzer.◦Precision Time Protocol (PTP; IEEE 1588) provides accurate clock synchronization and improved data correlation with network captures and system events.◦Full Layer 3 unicast and multicast routing protocol suites are supported, including Border Gateway Protocol (BGP), Open Shortest Path First (OSPF), Enhanced Interior Gateway Routing Protocol(EIGRP), Routing Information Protocol Version 2 (RIPv2), Protocol Independent Multicast sparse mode (PIM-SM), Source-Specific Multicast (SSM), and Multicast Source Discovery Protocol (MSDP).●Network traffic monitoring with Cisco Nexus Data Broker◦Build simple, scalable and cost-effective network tap or Cisco Switched Port Analyzer (SPAN) aggregation for network traffic monitoring and analysis.Configuration●48 fixed 10/100/1000-Mbps Ethernet ports● 4 fixed Enhanced Small Form-Factor Pluggable (SFP+) ports●Locator LED●Dual 
redundant power supplies●Fan tray with redundant fans●Two 10/100/1000-Mbps management ports●One RS-232 serial console port●One USB port●Locator LED and buttonSupport for both forward (port-side exhaust) and reversed (port-side intake) airflow schemes is available.Transceiver and Cabling OptionsFor uplink connectivity, the Cisco Nexus 3048 supports SFP+ direct-attach 10 Gigabit Ethernet copper, an innovative solution that integrates transceivers with Twinax cables into an energy-efficient and low-cost solution. For longer cable runs, multimode and single-mode optical SFP+ transceivers are supported. Table 1 lists the supported 10 Gigabit Ethernet transceiver options.Table 1. Cisco Nexus 3048 10 Gigabit Transceiver Support MatrixFor more information about the transceiver types, see/en/US/products/hw/modules/ps5455/prod_module_series_home.html.Cisco NX-OS Software OverviewCisco NX-OS is a data center-class operating system built with modularity, resiliency, and serviceability at its foundation. Cisco NX-OS helps ensure continuous availability and sets the standard for mission-critical data center environments. The self-healing and highly modular design of Cisco NX-OS makes zero-impact operations a reality and enables exceptional operation flexibility.Focused on the requirements of the data center, Cisco NX-OS provides a robust and comprehensive feature set that meets the networking requirements of present and future data centers. With an XML interface and a command-line interface (CLI) like that of Cisco IOS® Software, Cisco NX-OS provides state-of-the-art implementations of relevant networking standards as well as a variety of true data center-class Cisco innovations.Cisco NX-OS Software BenefitsTable 2 summarizes the benefits that Cisco NX-OS offers.Table 2. Benefits of Cisco NX-OS SoftwareCommon software throughout the data center: Cisco NX-OS runs on all Cisco data center switch platforms: Cisco Nexus 7000, 5000, 4000, 3000, 2000, and 1000V Series. ●Simplification of data center operating environment●End-to-end Cisco Nexus and Cisco NX-OS fabric ●No retraining necessary for data center engineering and operations teamsSoftware compatibility: Cisco NX-OS interoperates with Cisco products running any variant of Cisco IOS Software and also with any networking OS that conforms to the networking standards listed as supported in this data sheet. ●Transparent operation with existing network infrastructure●Open standards●No compatibility concernsModular software design: Cisco NX-OS is designed to support distributed multithreaded processing. Cisco NX-OS modular processes are instantiated on demand, each in a separate protected memory space. Thus, processes are started and system resources allocated only when a feature is enabled. The modular processes are governed by a real-time preemptive scheduler that helps ensure timely processing of critical functions. ●Robust software●Fault tolerance●Increased scalability●Increased network availabilityTroubleshooting and diagnostics: Cisco NX-OS is built with unique serviceability functions to enable network operators to take early action based on network trends and events, enhancing network planning and improving network operations center (NOC) and vendor response times. Cisco Smart Call Home and Cisco Online Health Management System (OHMS) are some of the features that enhance the serviceability of Cisco NX-OS. 
●Quick problem isolation and resolution●Continuous system monitoring and proactive notifications●Improved productivity of operations teamsEase of management: Cisco NX-OS provides a programmatic XML interface based on the NETCONF industry standard. The Cisco NX-OS XML interface provides a consistent API for devices. Cisco NX-OS also supports Simple Network Management Protocol (SNMP) Versions 1, 2, and 3 MIBs. ●Rapid development and creation of tools for enhanced management●Comprehensive SNMP MIB support for efficient remote monitoringUsing the Cisco Nexus Data Broker software and Cisco Plug-in for OpenFlow agent, the Cisco Nexus 3048 Switch can be used to build a scalable, cost-effective, and programmable tap or SPAN aggregation infrastructure. This approach replaces the traditional purpose-built matrix switches with these switches. You can interconnect these switches to build a multilayer topology for tap or SPAN aggregation infrastructure. ●Scalable and cost effective●Robust traffic filtering capabilities●Traffic aggregation from multiple input ports across different switches●Traffic replication and forwarding to multiple monitoring toolsRole-based access control (RBAC): With RBAC, Cisco NX-OS enables administrators to limit access to switch operations by assigning roles to users. Administrators can customize access and restrict it to the users who require it. ●Tight access control mechanism based on user roles●Improved network device security●Reduction in network problems arising from human errorsCisco NX-OS Software Packages for Cisco Nexus 3048The Cisco NX-OS Software package for the Cisco Nexus 3048 offers flexibility and a comprehensive feature set along with consistency with Cisco Nexus access switches. The default system software has a comprehensive Layer 2 feature set with extensive security and management features. To enable Layer 3 IP unicast and multicast routing functions, additional licenses need to be installed. Table 3 lists the software licensing details.Table 3. Cisco NX-OS Software Package in the Cisco Nexus 3048* The Base license (N3K-C3048-BAS1K9) is required to take advantage of LAN Enterprise license (N3K-C3048-LAN1K9) features. Table 5 later in this document provides a complete feature list.Cisco Data Center Network ManagerThe Cisco Nexus 3048 is supported in Cisco DCNM. Cisco DCNM is designed for hardware platforms enabled for Cisco NX-OS, which consist of the Cisco Nexus Family of products. Cisco DCNM is a Cisco management solution that increases overall data center infrastructure uptime and reliability, hence improving business continuity. Focused on the management requirements of the data center network, Cisco DCNM provides a robust framework and comprehensive feature set that meets the routing, switching, and storage administration needs of present and future data centers. In particular, Cisco DCNM automates the provisioning process, proactively monitors the LAN by detecting performance degradation, secures the network, and streamlines the diagnosis of dysfunctional network elements.Cisco Nexus Data BrokerThe Cisco Nexus 3048 Switch with Cisco Nexus Data Broker can be used to build a scalable and cost-effective traffic monitoring infrastructure using network taps and SPAN. This approach replaces the traditional purpose-built matrix switches with one or more OpenFlow-enabled Cisco Nexus switches. You can interconnect these switches to build a scalable tap or SPAN aggregation infrastructure. 
You also can combine tap and SPAN sources to bring the copy of the production traffic to this tap or SPAN aggregation infrastructure. In addition, you can distribute these sources and traffic monitoring and analysis tools across multiple Cisco Nexus switches. For more details, visit /go/nexusdatabroker.Product SpecificationsTable 4 lists the specifications for the Cisco Nexus 3048, Table 5 lists software features, and Table 6 lists management standards and support.Table 4. Specifications* Please refer to Cisco Nexus 3000 Series Verified Scalability Guide for scalability numbers validated for specific software releases: /en/US/products/ps11541/products_installation_and_configuration_guides_list.html.Table 5. Software FeaturesPort-based CoS assignmentModular QoS CLI (MQC) complianceACL-based QoS classification (Layers 2, 3, and 4)MQC CoS markingDifferentiated services code point (DSCP) markingWeighted Random Early Detection (WRED)CoS-based egress queuingEgress strict-priority queuingEgress port-based scheduling: Weighted Round-Robin (WRR)Explicit Congestion Notification (ECN)Security ●Ingress ACLs (standard and extended) on Ethernet●Standard and extended Layer 3 to 4 ACLs: IPv4, Internet Control Message Protocol (ICMP), TCP, UserDatagram Protocol (UDP), etc.●VLAN-based ACLs (VACLs)●Port-based ACLs (PACLs)●Named ACLs●ACLs on virtual terminals (vtys)●DHCP snooping with Option 82●Port number in DHCP Option 82●DHCP relay●Dynamic Address Resolution Protocol (ARP) inspection●CoPPCisco Nexus Data Broker ●Topology support for tap and SPAN aggregation●Support for QinQ to tag input source tap and SPAN ports●Traffic load balancing to multiple monitoring tools●Traffic filtering based on Layer 1 through Layer 4 header information●Traffic replication and forwarding to multiple monitoring tools●Robust RBAC●Northbound Representational State Transfer (REST) API for all programmability support Management ●Switch management using 10/100/1000-Mbps management or console ports●CLI-based console to provide detailed out-of-band management●In-band switch management●Locator and beacon LEDs●Port-based locator and beacon LEDs●Configuration rollback●SSHv2●Telnet●AAA●AAA with RBAC●RADIUS●TACACS+●Syslog●Syslog generation on system resources (for example, FIB tables)●Embedded packet analyzer●SNMP v1, v2, and v3●Enhanced SNMP MIB support●XML (NETCONF) support●Remote monitoring (RMON)●Advanced Encryption Standard (AES) for management traffic●Unified username and passwords across CLI and SNMP●Microsoft Challenge Handshake Authentication Protocol (MS-CHAP)●Digital certificates for management between switch and RADIUS server●Cisco Discovery Protocol Versions 1 and 2●RBAC●Cisco Switched Port Analyzer (SPAN) on physical, PortChannel and VLAN interfacesTable 6. 
Management and Standards Support Description SpecificationMIB support Generic MIBs●SNMPv2-SMI●CISCO-SMI●SNMPv2-TM●SNMPv2-TC●IANA-ADDRESS-FAMILY-NUMBERS-MIB●IANAifType-MIB●IANAiprouteprotocol-MIB●HCNUM-TC●CISCO-TC●SNMPv2-MIB●SNMP-COMMUNITY-MIB●SNMP-FRAMEWORK-MIB●SNMP-NOTIFICATION-MIB●SNMP-TARGET-MIB●SNMP-USER-BASED-SM-MIB●SNMP-VIEW-BASED-ACM-MIB●CISCO-SNMP-VACM-EXT-MIBEthernet MIBs●CISCO-VLAN-MEMBERSHIP-MIB●LLDP-MIB●IP-MULTICAST-MIBConfiguration MIBs●ENTITY-MIB●IF-MIB●CISCO-ENTITY-EXT-MIB●CISCO-ENTITY-FRU-CONTROL-MIB●CISCO-ENTITY-SENSOR-MIB●CISCO-SYSTEM-MIB●CISCO-SYSTEM-EXT-MIB●CISCO-IP-IF-MIB●CISCO-IF-EXTENSION-MIB●CISCO-NTP-MIB●CISCO-IMAGE-MIB●CISCO-IMAGE-UPGRADE-MIB Monitoring MIBs●NOTIFICATION-LOG-MIB●CISCO-SYSLOG-EXT-MIB●CISCO-PROCESS-MIB●RMON-MIB●CISCO-RMON-CONFIG-MIB●CISCO-HC-ALARM-MIBSecurity MIBs●CISCO-AAA-SERVER-MIB●CISCO-AAA-SERVER-EXT-MIB ●CISCO-COMMON-ROLES-MIB●CISCO-COMMON-MGMT-MIB●CISCO-SECURE-SHELL-MIB Miscellaneous MIBs●CISCO-LICENSE-MGR-MIB●CISCO-FEATURE-CONTROL-MIB ●CISCO-CDP-MIB●CISCO-RF-MIBLayer 3 and Routing MIBs●UDP-MIB●TCP-MIB●OSPF-MIB●BGP4-MIB●CISCO-HSRP-MIBStandards ●IEEE 802.1D: Spanning Tree Protocol●IEEE 802.1p: CoS Prioritization●IEEE 802.1Q: VLAN Tagging●IEEE 802.1s: Multiple VLAN Instances of Spanning Tree Protocol●IEEE 802.1w: Rapid Reconfiguration of Spanning Tree Protocol●IEEE 802.3z: Gigabit Ethernet●IEEE 802.3ad: Link Aggregation Control Protocol (LACP)●IEEE 802.3ae: 10 Gigabit Ethernet●IEEE 802.1ab: LLDP●IEEE 1588-2008: Precision Time Protocol (Boundary Clock)RFC BGP●RFC 1997: BGP Communities Attribute●RFC 2385: Protection of BGP Sessions with the TCP MD5 Signature Option●RFC 2439: BGP Route Flap Damping●RFC 2519: A Framework for Inter-Domain Route Aggregation●RFC 2545: Use of BGPv4 Multiprotocol Extensions●RFC 2858: Multiprotocol Extensions for BGPv4●RFC 3065: Autonomous System Confederations for BGP●RFC 3392: Capabilities Advertisement with BGPv4●RFC 4271: BGPv4●RFC 4273: BGPv4 MIB: Definitions of Managed Objects for BGPv4●RFC 4456: BGP Route Reflection●RFC 4486: Subcodes for BGP Cease Notification Message●RFC 4724: Graceful Restart Mechanism for BGP●RFC 4893: BGP Support for Four-Octet AS Number SpaceOSPF●RFC 2328: OSPF Version 2●8431RFC 3101: OSPF Not-So-Stubby-Area (NSSA) Option●RFC 3137: OSPF Stub Router Advertisement●RFC 3509: Alternative Implementations of OSPF Area Border Routers●RFC 3623: Graceful OSPF Restart●RFC 4750: OSPF Version 2 MIBRIP●RFC 1724: RIPv2 MIB Extension●RFC 2082: RIPv2 MD5 Authentication●RFC 2453: RIP Version 2●IP Services●RFC 768: User Datagram Protocol (UDP)●RFC 783: Trivial File Transfer Protocol (TFTP)●RFC 791: IP●RFC 792: Internet Control Message Protocol (ICMP)●RFC 793: TCP●RFC 826: ARP●RFC 854: Telnet●RFC 959: FTP●RFC 1027: Proxy ARP●RFC 1305: Network Time Protocol (NTP) Version 3●RFC 1519: Classless Interdomain Routing (CIDR)●RFC 1542: BootP Relay●RFC 1591: Domain Name System (DNS) Client●RFC 1812: IPv4 Routers●RFC 2131: DHCP Helper●RFC 2338: VRRPIP Multicast●RFC 2236: Internet Group Management Protocol, version 2●RFC 3376: Internet Group Management Protocol, Version 3●RFC 3446: Anycast Rendezvous Point Mechanism Using PIM and MSDP●RFC 3569: An Overview of SSM●RFC 3618: Multicast Source Discovery Protocol (MSDP)●RFC 4601: Protocol Independent Multicast - Sparse Mode (PIM-SM): Protocol Specification (Revised)●RFC 4607: Source-Specific Multicast for IP●RFC 4610: Anycast-RP using PIM●RFC 5132: IP Multicast MIBSoftware RequirementsCisco Nexus 3000 Series Switches are supported by Cisco NX-OS Software Release 5.0 and 
later. Cisco NX-OS interoperates with any networking OS, including Cisco IOS Software, that conforms to the networking standards mentioned in this data sheet.
Regulatory Standards Compliance
Table 7 summarizes regulatory standards compliance for the Cisco Nexus 3000 Series.
Table 7. Regulatory Standards Compliance: Safety and EMC
Ordering Information
Table 8 provides ordering information for the Cisco Nexus 3048.
Table 8. Ordering Information
Service and Support
Cisco offers a wide range of services to help accelerate your success in deploying and optimizing the Cisco Nexus 3000 Series in your data center. The innovative Cisco Services offerings are delivered through a unique combination of people, processes, tools, and partners and are focused on helping you increase operational efficiency and improve your data center network. Cisco Advanced Services uses an architecture-led approach to help you align your data center infrastructure with your business goals and achieve long-term value. Cisco SMARTnet® Service helps you resolve mission-critical problems with direct access at any time to Cisco network experts and award-winning resources. With this service, you can take advantage of the Cisco Smart Call Home service capability, which offers proactive diagnostics and real-time alerts on your Cisco Nexus 3000 Series Switches. Spanning the entire network lifecycle, Cisco Services helps increase investment protection, optimize network operations, support migration operations, and strengthen your IT expertise.
For More Information
For more information, please visit /go/nexus3000. For information about Cisco Nexus Data Broker, please visit /go/nexusdatabroker.

Focus on next-generation Internet networks: NetLogic Microsystems' breakthrough multi-core processors power the rapid growth of China's Internet networks


...is being able to use the Internet more conveniently, quickly, and efficiently. This requires the communication infrastructure to deliver the high performance, scalability, and stability needed by advanced video, IPv6, and backhaul services, which puts the performance of next-generation network systems to the test and helps drive the next generation of Internet networks.
It is predicted that over the next four to five years the processing capability required for each IP packet in network communications will need to increase roughly 500-fold. Moving Internet network systems from traditional single-core processors to multi-core processors has therefore become an inevitable trend. As a leader in this field, NetLogic Microsystems has launched new multi-core processors. Forecasts put the figure at US$3.83 billion by 2010, driven by enormous consumer demand at every node of the Internet. The company was also recently honored by industry peers and the Global Semiconductor Alliance as the "Most Respected Emerging Public Semiconductor Company".
Abid said: "For us, China is an exciting and very important market. Since announcing the merger with RMI last year, we have been committed to executing the technology roadmaps laid out by the two companies, and the success we have achieved in China and in global markets validates our strategy."

System and method for deep memory networks [invention patent]


Patent title: System and method for deep memory networks
Patent type: Invention patent
Inventors: Yilin Shen, Yue Deng, Avik Ray, Hongxia Jin
Application number: CN201980044458.8
Filing date: 2019-08-09
Publication number: CN112368718A
Publication date: 2021-02-12
Patent text provided by the Intellectual Property Publishing House
Abstract: An electronic device including a deep memory model includes at least one memory and at least one processor coupled to the at least one memory. The at least one processor is configured to receive input data for the deep memory model, to extract a history state of an external memory coupled to the deep memory model based on the input data, and to update the history state of the external memory based on the input data. In addition, the at least one processor is configured to output a prediction based on the extracted history state of the external memory.
Applicant: Samsung Electronics Co., Ltd.
Address: Gyeonggi-do, Republic of Korea
Country: KR
Agency: Beijing Lifang Law Firm
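The abstract describes the claimed steps (receive input, extract and update an external memory's history state, output a prediction) only at a high level. The following is a generic, hypothetical Python sketch of a memory-augmented predictor along those lines; the similarity-based read/write and the linear output head are assumptions made purely for illustration, not the patented design.

```python
# Hypothetical sketch of an external-memory ("deep memory") predictor.
# The addressing scheme and prediction head are illustrative assumptions.
import numpy as np

class ExternalMemoryModel:
    def __init__(self, slots=16, dim=8, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.memory = np.zeros((slots, dim))      # external memory of history states
        self.w_out = self.rng.normal(size=dim)    # toy prediction head

    def _weights(self, x):
        # Similarity-based addressing: softmax over dot products with each slot.
        scores = self.memory @ x
        e = np.exp(scores - scores.max())
        return e / e.sum()

    def extract(self, x):
        """Read a history state conditioned on the input (the claimed 'extract' step)."""
        return self._weights(x) @ self.memory

    def update(self, x):
        """Blend the input into the addressed slots (the claimed 'update' step)."""
        w = self._weights(x)
        self.memory += np.outer(w, x - self.extract(x))

    def predict(self, x):
        """Output a prediction based on the extracted history state."""
        state = self.extract(x)
        self.update(x)
        return float(self.w_out @ state)

model = ExternalMemoryModel()
print(model.predict(np.random.default_rng(1).normal(size=8)))
```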

Definition of "native Ethernet service"


"Native Ethernet service" (Native以太业务) most likely refers to a "Native Ethernet Service", described here as an Ethernet service implemented in a language other than Java.
Ethernet's success in local area networks owes much to its high degree of standardization, and the wide deployment of Ethernet services in telecom networks likewise depends on standardization support.
In some scenarios, in order to perform low-level operations on the operating system, the Java Native Interface (JNI) is used to call code written in other languages for low-level access; the Ethernet service implemented in this way is what is meant by a "Native Ethernet Service".
Different industries and fields may define and interpret "native Ethernet service" differently; if you need more detail, please ask a more specific question.

CCIE EI exam outline


CCIE Enterprise Infrastructure is an expert-level certification in the Cisco certification program. Its outline covers the following topics and the hands-on lab exam:
1. Network architecture and protocols: understand and apply fundamental network architectures and protocols, including the TCP/IP protocol suite and routing protocols such as OSPF and BGP.
2. Network security: understand and apply network security principles and protocols, including access control lists (ACLs), firewall configuration, IPsec, and SSL/TLS.
3. Routing and switching technologies: understand and apply routing and switching technologies, including static routing, dynamic routing, VLANs, and STP.
4. Wide area network (WAN) technologies: understand and apply WAN technologies, including PPP, HDLC, and Frame Relay.
5. Voice and video technologies: understand and apply voice and video technologies, including VoIP and video conferencing.
6. Data center technologies: understand and apply data center technologies, including virtualization and cloud computing.
7. Application service technologies: understand and apply application service technologies, including DNS, DHCP, and FTP.
8. Lab exam: a practical exam that tests the candidate's mastery and application of the topics above.
The above is only part of the outline; the complete version is available on Cisco's official website.

Research on surface settlement prediction for deep excavation based on LSTM


Figure 5. Comparison of training results
As Figure 5 shows, for predicting ground settlement around the deep excavation the LSTM neural network's predictions are closer to the measured values than those of the BP neural network, giving good predictive performance; the predicted settlement trend follows the measured values more closely and can serve as a reference for future prediction of surface settlement around deep excavations.
To verify the reliability of the LSTM network and to rule out chance results, monitoring point C14 was additionally selected for prediction; the results are given in Table 1. Mean absolute percentage error, mean squared error, and mean absolute error were used as evaluation indices of prediction accuracy, as shown in Table 2; the LSTM model's predictions were more accurate than the BP network's at both monitoring points.
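The extract above does not specify the network configuration, so the following is only a minimal sketch of the kind of LSTM settlement predictor it describes, assuming TensorFlow/Keras is available; the window size, layer width, and training settings are illustrative guesses, and the synthetic series merely stands in for a monitoring point's readings.

```python
# Minimal LSTM time-series predictor for settlement data (illustrative values only).
import numpy as np
import tensorflow as tf

WINDOW = 10  # number of past settlement readings used to predict the next one (assumed)

def make_windows(series, window=WINDOW):
    """Turn a 1-D settlement series (mm) into (samples, window, 1) inputs and targets."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., np.newaxis], np.array(y)

def build_model(window=WINDOW):
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(window, 1)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

if __name__ == "__main__":
    series = np.cumsum(np.random.normal(0.05, 0.02, size=200))  # fake cumulative settlement
    X, y = make_windows(series)
    model = build_model()
    model.fit(X, y, epochs=20, batch_size=16, verbose=0)
    print("next-step prediction:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```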
Table 1. Settlement prediction results at the monitoring points (mm): LSTM vs. BP predictions
Source: Wang Yongjun (Guangdong Zhongmei Jiangnan Engineering Survey and Design Co., Ltd., Guangzhou 510440), "Research on surface settlement prediction for deep excavation based on LSTM", Shanxi Architecture (山西建筑), 2021.

Extreme Networks product introduction and company profile


Enabling All of Us to AdvanceWhat We Sell Infrastructure Products• ExtremeSwitching• ExtremeRouting• ExtremeWireless Applications• Management and Automation • Analytics and Visibility• Security and Access ControlWho We Are • Founded in 1996• Revenue: $1.1 Billion• 2800 employees• NASDAQ: EXTR• Headquartered in San Jose, CaliforniaWhat We Do• 20+ years of networking innovation• Cloud-driven end-to-end networking solutions• 1100+ active patents• Award winning services and support• Leader in Gartner MQsProgress is achieved when we connect – allowing us to learn, understand, create, and grow. Extreme Networks make connecting simple and easy with effortless networking experiences that enable all of us to advance how we live, work, and share. We push the boundaries of technology leveraging the powers of machine learning, artificial intelligence, analytics, and automation. With a culture of agility, we anticipate the needs of our clients and their end-users as they develop. More than 50,000 customers globally trust our end-to-end, cloud-driven networking solutions andrely on our top-rated services and support to accelerate digital transformation efforts and deliver progress like never before.At Extreme Networks, we are a culture of innovation, committed to improving the experiencesof our customers along with our own. From building the very first 1 Gbps ethernet port to the industry’s only 4th generation cloud architecture – we are committed to creating effortless networking experiences that enable all of us to advance.End-to-End Effortless NetworkingExtreme successfully delivers Cloud-driven end-to-end enterprise networking for the emerging data-driven economy. Cloud is the ultimate enabler for collecting the data over time that is needed to develop highly accurate knowledge and then act upon it intelligently.When you organize data it becomes information, when you learn from information it becomes knowledge, and when you apply knowledge it becomes intelligence. This data paradigm is intrinsically linked to all ML/AIinitiatives at Extreme and elsewhere.Cloud management is critical to enterprise digital transformation efforts.At its heart, Cloud Networking is a SaaS based application that deliversa Simpler way to deploy and manage complex networks and increasedeployment scale while reducing costs. In short, Cloud Networking is:FLEXIBLE: Right sized for skills, scale, and business objectives.AGILE: Speed and continuous delivery of new features and capabilitiesSECURE: Proven In the most risk-sensitive environmentsTECHNOLOGY: Access to best of breed technologies in the worldExtreme is delivering powerful and proven enterprise solutions that workacross the wireless and wired network, providing comprehensive visibility, control, and automation dramatically simplifying IT operations.CLOUD MANAGEMENT TOOLS Cloud-driven networking gives the ability to collect, correlate, and visualizelarge sets of data to provide simple, smart, and secure effortless networks.Centralized management delivers a simple way to deploy and scalecomplex networks. Visibility and Analytics offer intelligent insights andanalytics using ML and AI. 
New Automation T ools assist in assurance of user,device, and IoT access and remediation.Benefits of Extreme Cloud-DrivenExtreme offers advanced hardware that spans from the IoT Edge to the Data Center, and then builds in software-driven solutions that deliveradvanced analytics, unprecedented insights, and full network assurance.The ability to do real-time data transfers, and automate where we didn’thave automation before, gives us the ability to leverage state-of-the-arthardware to advance any outcome.When you add it all up, there are 5 key reasons Extreme delivers EffortlessNetworking better than anyone else:ML/AI Paradigm “If our network can't provide a highlyreliable, seamless experience throughoutthe game, our fans will stay home.”Vice President of Facilities and Game Day OperationsT ennessee Titans• A full End-to-End solution spanning from the Edge to the Data Center,with advanced hardware built on Broadcom technology to deliver thevery best connectivity and edge computing;• The highest quality cloud - first and only cloud management solution to be certified as ISO/IEC 27001 compliant to reflect our commitment to information security management systems best practices and controls by the International Standards Organization (ISO), as well as integrated Global Data Privacy Regulations (GDPR) support and continuousinnovation of new features and functionality• Cloud Choice: Extreme’s cloud can be deployed in the public cloud,private cloud, or on-premises, providing organizations with unmatchedflexibility to accomplish their business objectives. Whatever theselected deployment type, Extreme’s cloud-driven infrastructureprovides continuous operation even if connectivity to the cloud isimpacted, adding even more flexibility to environments regardless ofdependable internet connectivity.• Depth of capability: Our cloud leverages the powers of machinelearning, artificial intelligence, analytics, and automation to helpconnect, secure, and optimize network infrastructure. This goes waybeyond basic management, but delivers intelligence and assurance tobuild the foundation for next generation experiences.• Huge savings: when it’s all said and done, the efficiencies of cloud-based management make effortless networking cost less too. Thisextends beyond just the cost of the subscription, but also for serverinfrastructure cabling, cooling, and maintenance and continuousoperational savings for training and troubleshooting“It’s very rare to find a solution that drivesautomation, minimizes security risk, andreduces costs without sacrificing one foranother.”Director of Network Services OSF HealthcareEffortless Networking ArchitectureExtreme’s Cloud, whether public, private, or local, powers our hardwareand software technologies with ML and AI to deliver a tailored networkingsolution for your needs. Our wireless, switching, and routing portfolioprovides a broad range of end-to-end connectivity solutions, utilizingindustry leading technologies to deliver speed, scale, reliability, and securityfor all of your users, device, apps, and things, no matter their location. Oursoftware solutions streamline and automate the management of our wiredand wireless technologies, protect the edge to the DC, fuel business and ITsystem integrations, and unlock new insights and analytics.Effortless Portfolio SummaryWe have streamlined our products so it’s never been easier to makenetworking effortless and advancing how we live, work and share. 
TheExtremeSwitching products provide the nervous system of an autonomousenterprise by delivering simple, secure and automated networks.ExtremeRouting products focus on highly scalable internet peeringand routing for campus, data center operators, and service providers.ExtremeWireless products are high efficiency, high capacity Wi-Fi productsthat utilize the latest technologies top optimize and protect bandwidth in“Our network infrastructure is thelifeblood of our organization andrepresents an important enabler for digitaltransformation.”Director of IT Services Leeds Beckett UniversityCLICK IMAGE TO VIEWall environments. And ExtremeApplications enable you to gain actionable insights, visibility, and control with advanced security and superior quality of user experience. For more details, visit our Extreme Product Catalog .Effortless Networking in Action HEALTHCARE GOVERNMENT EDUCATION LARGE PUBLIC VENUES MANUFACTURING RETAILTRANSPORTATIONAND LOGISTICS T oday, our technology enables us to share magical moments right from the game and stream live TV in-flight. It gives businesses more control and the ability to adapt to new ways of working. It ensures the most private data is secure within a healthcare system and schools and colleges have the bandwidth to meet the needs of the most demanding students – wherever they are on campus. Extreme ServicesIn the midst of global digital transformation, Extreme Networks is atrusted partner you can count on to ensure that your network becomesautonomous and effortless, and meets the changing needs of yourbusiness. With industry leading support and services, and 100% in-sourcedengineering experts, you can be confident you will have the resources andexpertise needed to manage your software-driven solutions and built-insecurity from the edge to the cloud.It’s time to transform your network into a source of ongoing business value.By partnering with Extreme Networks, you will have the robust services andsupport, starting with the initial consultation to day-to-day managementto be sure you are executing against a successful network managementstrategy and to advance with us.We are Number OneAs the number one provider of end-to-end, cloud-driven networkingsolutions, we are committed to making the next generation of networkingeasier, faster, and more secure.We believe that progress is achieved when we connect – allowing us tolearn, understand, create, and grow. We make connecting simple and easyFun Facts of ExtremeCloudAvg 40 yearsof HD video every day 4MClients/Day 2x Facebookphotos/Day4.2B Management Events/Day 4K AdminLogins/Day 1M Managed Devices “We needed anew mission critical networksolution that would help us identifynew efficiencies and automate manualprocesses.”VP, Information Systems STIB-MIVB Brusselswith effortless networking experiences that enable all of us to advance howwe live, work, and share.About ExtremeExtreme Networks, Inc. (EXTR) creates effortless networking experiencesthat enable all of us to advance. We push the boundaries of technologyleveraging the powers of machine learning, artificial intelligence, analytics,and automation. Over 50,000 customers globally trust our end-to-end,cloud-driven networking solutions and rely on our top-rated servicesand support to accelerate their digital transformation efforts and deliverprogress like never before. For more information, visit Extreme's website orfollow us on Twitter, LinkedIn, and Facebook./contact ©2020 Extreme Networks, Inc. All rights reserved. 

Indonesia's largest Internet service provider adopts Nortel metro Ethernet solution to deliver high-bandwidth services


Author: Anonymous
Journal: Telecommunications Technology (《电信技术》)
Year (volume), issue: 2007, (0) 9
Abstract: Nortel announced that Cyberindo Aditama (CBN), Indonesia's largest Internet service provider, has adopted Nortel's solution to deliver metro Ethernet services. Branch offices of CBN's enterprise customers can share high-bandwidth, real-time services such as VoIP and multimedia at high speed and with great convenience, as if they were all in the same office. Nortel's metro Ethernet solution uses innovative Provider Backbone Bridging (PBB) technology, helping CBN meet rapidly growing demand for high-bandwidth services and offer new services to Indonesian enterprises.
Pages: 1 (p. 75)
Keywords: metro Ethernet; service provider; bandwidth services; Internet; Indonesia; enterprise customers; VoIP
Language of text: Chinese
CLC classification: TP393.11
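The Provider Backbone Bridging mentioned in the abstract is the IEEE 802.1ah "MAC-in-MAC" scheme, in which a customer frame is wrapped in a backbone MAC header, a B-TAG, and an I-TAG carrying a 24-bit service identifier (I-SID). The Python sketch below is only a rough illustration of that framing; the addresses, VLAN ID, and I-SID are made-up example values, and the I-TAG flag bits are left at zero for simplicity.

```python
# Rough illustration of 802.1ah (PBB, "MAC-in-MAC") encapsulation using struct.
import struct

def mac(addr: str) -> bytes:
    return bytes(int(b, 16) for b in addr.split(":"))

def pbb_encapsulate(customer_frame: bytes, b_da: str, b_sa: str,
                    b_vid: int = 100, i_sid: int = 0x12345) -> bytes:
    b_tag = struct.pack("!HH", 0x88A8, b_vid & 0x0FFF)      # B-TAG: S-TAG TPID + VLAN ID (PCP/DEI = 0)
    i_tag = struct.pack("!HI", 0x88E7, i_sid & 0x00FFFFFF)  # I-TAG: TPID + zero flags + 24-bit I-SID
    return mac(b_da) + mac(b_sa) + b_tag + i_tag + customer_frame

# The inner frame would normally be the complete customer Ethernet frame.
inner = mac("00:11:22:33:44:55") + mac("66:77:88:99:aa:bb") + struct.pack("!H", 0x0800) + b"payload"
frame = pbb_encapsulate(inner, b_da="02:00:00:00:00:01", b_sa="02:00:00:00:00:02")
print(len(frame), frame[:16].hex())
```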


Network Processor Architecture for Next Generation Packet Format
Ankush Garg, Department of Computer Science, University of California, Davis, Davis, California 95616, garg@
Prantik Bhattacharyya, Department of Computer Science, University of California, Davis, Davis, California 95616, pbhattacharyya@

ABSTRACT
Over the past few years, network processors have become an important component in packet delivery over the internet, with their packet processing rate determining how much of the transmission medium's bandwidth can actually be used. Network processor architectures have mainly focused on achieving a particular line rate for IPv4 packets. As usage of the new IPv6 format increases, changes in architecture models become one of the primary concerns. In this paper, we propose to modify an existing network processor architecture to improve performance by reducing the overhead of pre-processing a packet header. We also introduce a cache for the forwarding address lookup that finds a packet's destination, increasing the efficiency of the lookup system.

1 Introduction
One of the most vital components of any network system is the path taken by a packet as it travels from host to destination machines. The current internet infrastructure is based on best-effort delivery: the network makes its best effort to deliver a packet to its destination but never promises the host or the destination machine that the packet will arrive with one hundred percent certainty. This model simplifies building the components, since service providers make fewer guarantees to customers. With this model in focus, network processors sit in the path and carry out data transmission over the infrastructure. We have so far used the term 'data' instead of packet to describe internet data delivery without referring to packet switching or any other technology. From now on we use the term packet, as packet switching is the predominant technology and all the components being built satisfy its demands. The main function of a network processor [1][2] is to accept packets on its incoming ports and, using the processors built inside it, 'route' these packets appropriately, i.e. forward them through the appropriate output ports.
Thus, the architecture of the router becomes vital to the packet transmission rate. As more and more bandwidth becomes available in the physical path, the performance of the router becomes the bottleneck to the packet forwarding rate [3]. A lot of research has been done to build better router architectures [4] so that routers can do justice to the bandwidth that is otherwise available. One of the primary focuses of this research has been analyzing architectures that can exhaustively use the packet formats to take forwarding decisions. As the packet format changes from IPv4 to IPv6, we observe that a major change in the header format is taking place. The change between these versions is not merely a four-fold upgrade from 32-bit IPv4 addresses to 128-bit IPv6 addresses but a major change in the header fields. In other words, we are concerned with the packet formats, and especially the header formats, designed for the new version. As this transition takes place, one of the key challenges has been upgrading or installing new network processors in the data path so that the new format can be identified and processed accordingly. Current technology solutions have mainly focused on integrating IPv6 packets with IPv4 packets, as the restricted deployment of the newer version has not provided companies enough motivation to deploy IPv6-only network processors. As the upgraded version finds widespread popularity, a key research issue has thus become designing newer network processor architectures so that the new characteristics of IPv6 can be properly exploited for higher performance rates.
In this paper, we present our research on possible router architectures suitable for next generation packet formats. The paper is organized as follows. In Section 2, we study existing network processors and their architectures; we also discuss how IPv4 is currently processed in these network processors and how IPv6 might perform on the current architectures, which motivates this research. In Section 3, we give details of our model. In Section 4 we mathematically analyse the performance enhancement that our model can provide, and Section 5 deals with the performance evaluation of the proposed model. These are followed by the conclusion and possible future work in Section 6.

2 Background
In this section we study why network processors are required and the architectures of existing network processors. Due to the advent of optical fibres, the data transfer rate across transmission lines has increased to the order of Gbps (gigabits per second).
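Before looking at specific processors, it is worth quantifying the per-packet time budget these line rates imply. The short calculation below is only a back-of-the-envelope sketch using the 40-byte minimum packet size cited next in this section; it ignores preamble and inter-frame gap, so the figures are slightly optimistic approximations of the ones quoted from Intel and Sherwood et al.

```python
# Rough per-packet time budget at 10 Gbps and 40 Gbps, assuming 40-byte minimum packets.
MIN_PACKET_BITS = 40 * 8

for rate_gbps in (10, 40):
    rate_bps = rate_gbps * 1e9
    ns_per_packet = MIN_PACKET_BITS / rate_bps * 1e9
    mpps = rate_bps / MIN_PACKET_BITS / 1e6
    print(f"{rate_gbps} Gbps: one packet every {ns_per_packet:.0f} ns, about {mpps:.0f} Mpackets/s")
# 10 Gbps -> roughly 32 ns per packet (~31 Mpps); 40 Gbps -> roughly 8 ns per packet
```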
According to Intel [5], a packet arrives every 35 ns on a 10 Gbps connection and every 8 ns on a 40 Gbps connection. Assuming a minimum packet size of 40 bytes, Sherwood et al. [6] observe that a line card operating at 10 Gbps may have to process up to 32 million packets every second. General purpose processors are incapable of handling data at such high rates, which has led to the use of Network Processors (NP). An excellent starting place for understanding what a network processor is is given by Shah [1]. Extending that description, we can define a network processor as a processing unit targeted at networking applications: a device with architectural features and/or special circuitry for packet processing. Network processors are otherwise quite similar to the general purpose central processing units used in many types of equipment and products. Below we discuss the architecture of a network processor.

Figure 1. A General Network Processor

A network processor mainly has a central processing unit where the data is processed. It also has two types of memory units, SRAM (Static Random Access Memory) and DRAM (Dynamic Random Access Memory). DRAM is generally used to store the large forwarding table(s), while SRAM is used to store the small incoming packets. A table lookup determines the next forwarding address, other instructions are executed, and the packets are then placed on the output line. Figure 1 shows the environment in which a general network processor works. The core is equipped with special instructions suited to the demands of networking applications, i.e. instructions that speed up packet processing. These instructions are generally optimized to yield better performance for the specialized demands of packet transmission over any network.

Figure 2. Multi Core Network Processor

Network processors have evolved over time, and most processors currently available have multiple cores, each capable of running multiple threads. Figure 2 shows a high-level diagram of this kind of processor. We have based our work on the Intel IXP2800 processor, which has a multi-core architecture capable of running threads and handling more than one packet at a time. A more detailed discussion of this processor is given later. The following paragraphs of this section discuss the motivation for our project. We have NPs with special instructions to handle packets. It is also interesting to note that in packet processing it is mainly the header of a packet that is processed; the data part of the packet is ignored for most of the computation. It therefore becomes essential to have NPs with architectures and instruction sets suited specifically to header formats. Let us look at the packet formats of IPv4 and IPv6 (shown in Figures 3 and 4) to understand what adaptations are required to make NPs better suited to the new packet format, and why such a revision of the packet format is required.

Figure 3. IPv4 Packet
Figure 4. IPv6 Packet

The IP address in IPv4 is 32 bits long, implying that the maximum number of distinct IP addresses that can be assigned is 2^32. The phenomenal growth of computers and their connectivity to the internet has led to an IP address exhaustion problem [7].
This inspired research into a new addressing mode that can provide enough IP addresses for everyone asking for them and yet remain un-exhausted over the years. IPv6 has emerged as the solution, and large scale deployment of IPv6 has already started. An IP address in IPv6 is 128 bits long and can thus represent 2^128 addresses, a number so huge that exhaustion of the available IP addresses is virtually impossible. As the packet header format changes, it naturally becomes important to look into other issues that may be affected by this change; correspondingly, network processors also need to adapt to these changes in the near future. With the new IPv6 header format, the network processor needs to read the 128-bit IP address(es) present in the header and then determine the next hop address for the packet from its forwarding table. We also observe that in IPv6 the header size is fixed (40 bytes), which means preprocessing is not required to separate the header and the data of the packet, and that a number of instructions no longer need to be carried out compared with the previous packet format. For example, since the checksum field has been entirely removed from IPv6, the processor need not carry out the complex computation over the data bits and match the result against a CRC field. The packet size for IPv6 is also fixed, implying that an incoming packet need not be broken down into further packets, as was sometimes the case with IPv4. Thus, one may think of separating the header and the data from the packet at the input line itself so that faster processing can be done. This provided us with the possibility of doing research on this topic. In the next sections, we describe our proposed model and present our simulation results.
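To make the fixed-header observation concrete, the sketch below shows how cheaply an IPv6 header can be split from its payload when every field sits at a known offset. It is written with Python's standard struct module purely for illustration and is not part of the IXP2800 data path.

```python
# Splitting a fixed 40-byte IPv6 header from its payload: all offsets are known in advance,
# so no per-packet option parsing or checksum pass is needed.
import struct

IPV6_HEADER_LEN = 40  # bytes, fixed by the protocol

def split_ipv6(packet: bytes):
    """Return (header_fields, payload) for a raw IPv6 packet."""
    vtf, payload_len, next_header, hop_limit = struct.unpack("!IHBB", packet[:8])
    header = {
        "version": vtf >> 28,
        "traffic_class": (vtf >> 20) & 0xFF,
        "flow_label": vtf & 0xFFFFF,
        "payload_length": payload_len,
        "next_header": next_header,
        "hop_limit": hop_limit,
        "src": packet[8:24],
        "dst": packet[24:40],
    }
    return header, packet[IPV6_HEADER_LEN:]

# Example: a 44-byte dummy packet (version 6, 4-byte payload, next header 17, hop limit 64).
pkt = struct.pack("!IHBB", (6 << 28), 4, 17, 64) + bytes(16) + bytes(16) + b"\x00" * 4
hdr, payload = split_ipv6(pkt)
print(hdr["version"], hdr["payload_length"], len(payload))
```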
3 Proposed Model
We decided to work on the Intel IXP2800 network processor as it has been widely used in industry. The Intel product brief [8] says that the IXP2800 is based on the same model as the earlier IXP1200. Basically, the chip has one XScale core that runs at 700 MHz and 16 microengines, each working at a frequency of 1.4 GHz. The microengines are capable of handling multiple threads. The SRAM unit in the IXP2800 supports 800 Mbytes/sec reads and 800 Mbytes/sec writes, and the SRAM interface has a peak bandwidth of 1.6 GBytes/sec per channel using 200 MHz SRAMs.

Figure 5. Existing Model of Processing in IXP2800

In the existing model, shown in Figure 5, an incoming packet first comes to the XScale core. The core checks the version of the packet (IPv4 or IPv6) and, after doing the CRC check and other operations, forwards the packet information to a microengine that is free and puts the packet in SRAM. If the input buffer is full, the packet is dropped. The information provided to the microengine is the location in the SRAM where the packet is stored. The microengine operates on the packet and finds the output line on which it needs to be sent by comparing the source and destination addresses of the packet with the forwarding table stored in the DRAM. Also notable is the fact that the microengine has to fetch the source and destination addresses of the packet from the SRAM. This introduces latency, as many cycles are wasted due to the slow speed of the SRAM. The packet is then sent on the selected output line.

3.1 Reducing Latency
The IPv6 packet has a fixed header size, and no CRC check needs to be done on it. This suggests that the XScale core can be replaced by a fast and less complex processing unit. As no extra checks need to be done on the packet, the only work the core is left with is directing packets to a free microengine.

Figure 6. Proposed Model of Processing in IXP2800

As shown in Figure 6, we propose to replace the core with another microengine operating at 1.4 GHz. Now the header of the packet is forwarded to a microengine and the rest of the data is stored in the SRAM. Along with the header, the microengine also gets the location of the packet in the SRAM. Since the microengine does not need to fetch the source and destination addresses from the SRAM, it can start processing the header immediately, reducing the idle time. After finding the output line for a packet, the microengine fetches the rest of the packet data from the SRAM in the last step, reassembles it, and sends it on the output line. Clearly, the microengine can start working on the packet without wasting any time, thus reducing the latency.

3.2 Introducing Cache
Another important point, noted by Hu et al. [9], is that the microengines have no caches. An important observation made by Harai et al. [10], however, is that 40% of packets follow the same path on the network. This is very significant, because the microengine has to read the forwarding table from the DRAM again and again. The DRAM is even slower than the SRAM, implying that for each packet many cycles are wasted getting the forwarding table data. The lookup time can therefore be reduced drastically by introducing a cache. As the microengine needs the same data from the DRAM 40% of the time, the hit rate can be taken to be 0.4. This means a cache can be incorporated to reduce the memory access time; a toy illustration of such a cached lookup is sketched after the summary below. The architectural modifications can be summarized as follows:

1. Modification 1 (M1) - Replace the XScale core with a microengine to exploit the change in the header from IPv4 to IPv6, and forward the header to the microengine, reducing the SRAM access time.
2. Modification 2 (M2) - Provide the microengines with a cache to exploit the inherent property of the internet that many packets follow a similar path.
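The sketch below illustrates the idea behind M2 with a toy cached forwarding lookup in Python. The LRU policy and cache size are illustrative assumptions rather than IXP2800 parameters; the 54-cycle and 6-cycle figures are the memory latencies used in the analysis of Section 4.

```python
# Toy next-hop lookup with a small cache in front of a slow forwarding-table access.
from collections import OrderedDict

DRAM_LATENCY = 54   # microengine cycles per DRAM-backed lookup (figure used in Section 4)
CACHE_LATENCY = 6   # assumed cycles when the entry is already cached

class CachedFib:
    def __init__(self, fib, capacity=1024):
        self.fib = fib                      # full forwarding table (destination -> output port)
        self.cache = OrderedDict()
        self.capacity = capacity
        self.cycles = 0

    def lookup(self, dst):
        if dst in self.cache:
            self.cache.move_to_end(dst)     # keep recently used entries
            self.cycles += CACHE_LATENCY
            return self.cache[dst]
        self.cycles += DRAM_LATENCY
        port = self.fib[dst]
        self.cache[dst] = port
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used entry
        return port

# With roughly 40% of packets repeating destinations, the average lookup cost drops accordingly.
fib = {d: d % 16 for d in range(10000)}
cf = CachedFib(fib)
for dst in [1, 2, 1, 3, 1, 2]:              # repeated destinations hit the cache
    cf.lookup(dst)
print("total lookup cycles:", cf.cycles)
```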
4 Theoretical Analysis
In this section we compute the theoretical improvement that might be achieved with the proposed changes. Park et al. [11] report that an average program written for the IXP2800 executes around 654 instructions, of which 105 are executed on the core and the rest on the microengines. Tan et al. [12] note that both the core and the microengine have 6 pipeline stages, and Tan et al. [13] report that around 20% of the instructions executed on a microengine are memory accesses, with a latency of 54 microengine clock cycles. Thus, the time taken by the processor for one packet can be computed as follows:

time_old = (105 × 6)/(700 × 10^6) + (0.80 × 549 × 6)/(1.4 × 10^9) + (0.20 × 549 × 54)/(1.4 × 10^9) = 7.017 × 10^-6 s

In the above equation the first term is for the core, the second for the microengine instructions that are not memory accesses, and the third for the microengine memory accesses. The average packet size over the internet is 550 bytes, and as there are 16 microengines in the IXP2800, the throughput is:

throughput_old = (1/time_old) × 550 × 8 × 16 bits/s = 1.0032 × 10^10 bits/s = 10.032 Gbps

We can see that the original IXP2800 is able to achieve a line speed of just over 10 Gbps. Next we compute the throughput for M1 (modification 1). The only change occurs in the first term of the time equation, so the time for the processor with a microengine in place of the core is:

time_M1 = (105 × 6)/(1.4 × 10^9) + (0.80 × 549 × 6)/(1.4 × 10^9) + (0.20 × 549 × 54)/(1.4 × 10^9) = 6.567 × 10^-6 s

This gives a throughput of

throughput_M1 = (1/time_M1) × 550 × 8 × 16 bits/s = 1.0719 × 10^10 bits/s = 10.719 Gbps

Thus, introducing M1 alone yields a performance enhancement of throughput_M1/throughput_old = 1.07. If we also introduce M2, the memory access term of the time equation changes. As noted earlier, around 40% of packets follow a similar path on the network, so 40% of the time there is a cache hit, reducing the latency from 54 to 6 cycles for those instructions. The time equation becomes:

time_M1+M2 = (105 × 6)/(1.4 × 10^9) + (0.80 × 549 × 6)/(1.4 × 10^9) + (0.20 × 549 × 54 × 0.6)/(1.4 × 10^9) + (0.20 × 549 × 6 × 0.4)/(1.4 × 10^9) = 5.0616 × 10^-6 s

The throughput of this machine is then:

throughput_M1+M2 = (1/time_M1+M2) × 550 × 8 × 16 bits/s = 1.3908 × 10^10 bits/s = 13.908 Gbps

Thus, the total enhancement achieved by introducing both M1 and M2 is throughput_M1+M2/throughput_old = 1.39. This means that our model can, in theory, give an enhancement of 1.39 over the original IXP2800.
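For convenience, the arithmetic above can be re-derived with a short script; the constants are the figures used in the derivation (the instruction counts and latencies quoted from [11]-[13], the 0.4 hit rate, the 550-byte average packet, and 16 microengines), so this is only a transcription of Section 4, not new modelling.

```python
# Re-derivation of the Section 4 throughput figures.
CORE_HZ, UENGINE_HZ = 700e6, 1.4e9
CORE_INSNS, UENGINE_INSNS = 105, 549          # 654 instructions per packet in total
PIPE_DEPTH, MEM_LATENCY, CACHED_LATENCY = 6, 54, 6
MEM_FRACTION, HIT_RATE = 0.20, 0.40
PACKET_BITS, ENGINES = 550 * 8, 16

def throughput(pre_hz, cached=False):
    pre = CORE_INSNS * PIPE_DEPTH / pre_hz                                   # pre-processing stage
    compute = (1 - MEM_FRACTION) * UENGINE_INSNS * PIPE_DEPTH / UENGINE_HZ   # non-memory instructions
    if cached:
        mem = MEM_FRACTION * UENGINE_INSNS * (
            (1 - HIT_RATE) * MEM_LATENCY + HIT_RATE * CACHED_LATENCY) / UENGINE_HZ
    else:
        mem = MEM_FRACTION * UENGINE_INSNS * MEM_LATENCY / UENGINE_HZ
    time_per_packet = pre + compute + mem
    return PACKET_BITS * ENGINES / time_per_packet

base = throughput(CORE_HZ)                    # about 10.03 Gbps
m1 = throughput(UENGINE_HZ)                   # about 10.72 Gbps
m1_m2 = throughput(UENGINE_HZ, cached=True)   # about 13.91 Gbps
print(base / 1e9, m1 / 1e9, m1_m2 / 1e9, m1_m2 / base)
```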
The following section illustrates the simulation results obtained for the proposed model.

5 Simulation Results
We wrote a discrete event simulator to simulate the architecture. The IXP2800 product brief [8] says that the size of the input buffer is 8192 bytes; assuming an average packet size of 550 bytes, we get a buffer size of 15 packets. The core checks an incoming packet and then directs it to a microengine that is free, so the core has been modelled as an M/M/1 queue with a buffer size of 15. The microengines have also been modelled as M/M/1 queues. As soon as a packet is processed, the microengine becomes free and is ready for another packet to be sent to it.

To simulate M1 (modification 1), the speed of the core was increased to make it comparable to that of a microengine, so that it processes incoming packets faster. To model M2 (modification 2) on top of M1, the speed of the microengines was increased to account for the lower latency of the memory accesses. The incoming rate (λ) was changed in steps of 0.2 (starting from 0.6) of the 10 Gbps packet rate (2272727.3 packets/s).

Packets dropped:
λ     Original    M1         M1+M2
0.6   609         233        6
0.8   28038       12126      352
1.0   296904      163124     7438
1.2   942287      669316     71784
1.4   1659431     1346105    321491
1.6   2297719     1980310    776349
1.8   2841874     2535799    1302008
2.0   3320071     3021577    1801040

Packets processed:
λ     Original    M1         M1+M2
0.6   4999697     4999885    4999999
0.8   4985987     4993939    4999826
1.0   4851553     4918446    4996283
1.2   4528865     4665351    4964111
1.4   4170293     4326955    4839263
1.6   3851144     4009853    4611828
1.8   3579071     3732109    4349005
2.0   3339972     3489220    4099489

The first table shows the number of packets dropped by the original processor and by the processors with the proposed modifications, and the second table shows the number of packets processed by each. For example, the first row of the upper table tells us that for an incoming packet rate of 6 Gbps the numbers of packets dropped by the three processors are 609, 233 and 6 respectively. We can see from the data that the processor with both M1 and M2 performs better than the other two. For λ = 1.4 the number of packets dropped with M1+M2 is several times lower than for the other two processors. This shows that the results match the theoretical analysis. Figure 7 shows the number of packets dropped by the three processors, and Figure 8 shows the number of packets processed. The blue line marks the theoretical limit of 1.39 derived in the previous section. It can be seen from the graphs that the number of packets dropped with both M1 and M2 is the lowest, and the explosion in the number of packets dropped occurs close to the blue line, showing that the simulation results are not very far off from the analytical results.
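The listing below is a heavily compressed sketch of this kind of discrete-event model: Poisson arrivals feeding a finite 15-packet buffer in front of a pool of exponential servers, counting completions and drops. It collapses the separate core and microengine stages of our simulator into a single stage and uses placeholder service rates, so it reproduces the shape of the experiment rather than the exact numbers above.

```python
# Compressed discrete-event sketch: finite-buffer queue with a pool of exponential servers.
import heapq, random

def simulate(arrival_rate, service_rate, servers=16, buffer_size=15, n_packets=100_000, seed=1):
    rng = random.Random(seed)
    queue, busy = 0, 0
    dropped = processed = 0
    events = [(rng.expovariate(arrival_rate), "arrival")]
    arrivals = 1
    while events:
        t, kind = heapq.heappop(events)
        if kind == "arrival":
            if arrivals < n_packets:
                heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrival"))
                arrivals += 1
            if busy < servers:                      # a microengine is free
                busy += 1
                heapq.heappush(events, (t + rng.expovariate(service_rate), "done"))
            elif queue < buffer_size:               # wait in the input buffer
                queue += 1
            else:                                   # buffer full -> packet dropped
                dropped += 1
        else:                                       # a service completion
            processed += 1
            if queue > 0:
                queue -= 1
                heapq.heappush(events, (t + rng.expovariate(service_rate), "done"))
            else:
                busy -= 1
    return processed, dropped

print(simulate(arrival_rate=1.4 * 16, service_rate=1.0))   # offered load of 1.4x capacity
```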
We believe that a smaller and less complex proces-sor can be used in place of our proposed micro-engine model to achieve a better performance ratio as the sim-ple version of IPv6requires a small number of instruc-tions for it pre-processing.We leave this topic as part of future work.REFERENCES[1]Niraj Shah.Understanding network processors.UCB.[2]Patrick Crowley,Marc E.Fiuczynski,Jean-LoupBaer,and Brian N.Bershad.Characterizing pro-cessor architectures for programmable network interfaces.Proceedings of the2000International Conference on Supercomputing,2000.[3]Xiaoning Nie,Lajos Gazsi,Frank Engel,andGerhard Fettweis.A new network processor ar-chitecture for high-speed communications.In Proceedings of the IEEE Workshop on Signal Processing Systems(SIPS’99),1999.[4]Mel Tsai,Chidamber Kulkarni,Christian Sauer,Niraj Shah,and Kurt Keutzer.A benchmark-ing methodology for network processors.1st Network Processor Workshop,8th Int.Symp.on High Performance Computer Architectures (HPCA),2002.[5]Intel.Next generation network processor tech-nologies.Technical report,Network Processor Division,Intel Corporation,2001.[6]Timothy Sherwood,George Varghese,and BradCalder.A pipelined memory architecture for high throughput network processors.In Proceed-ings of30th Annual International Symposium on Computer Architecture(ISCA’03),2003. [7]Expert Research Team on Number Re-sources Utilization.Analysis and recom-mendations on the exhaustion of ipv4address space.Technical report,Japan NetworkInformation Center(JPNIC),2006.[8]Intel.Intel ixp2800network processor.Techni-cal report,2004.[9]Xianghui Hu,Xinan Tang,and Bei Hua.High-performance ipv6forwarding algorithm for multi-core and multithreaded network processor.In PPoPP’06:Proceedings of the eleventh ACM SIGPLAN symposium on Principles and practice of parallel programming,pages168–177,New York,NY,USA,2006.ACM.[10]Hiroaki Harai and Masayuki Murata.High-speed buffer management for40gb/s-based pho-tonic packet switches.IEEE/ACM w., 14(1):191–204,2006.[11]Jaehyung Park,Myoung Hee Jung,SujeongChang,Su il Choi,Min Young Chung,and Byung Jun Ahn.Performance evaluation of the flow-based router using intel ixp2800network processors.Workshop on Information Systems Information Technologies,2006.[12]Zhangxi Tan,Chuang Lin,Hao Yin,and Bo Li.Optimization and benchmark of cryptographic algorithms on network processors.IEEE Micro, 24(5):55–69,2004.[13]Yao Yue,Chuang Lin,and Zhangxi Tan.Npcryptbench:a cryptographic benchmark suite for network processors.SIGARCH Comput.Ar-chit.News,34(1):49–56,2006.[14]Intel.Intel ixp2855network processor.Techni-cal report,Intel.[15]Cheng Sheng,Zhang Xu,Cao Yingxin,andDing Wei.Implementation of10gigabit packet switching using ixp network processors.Interna-tional Conference on Communication Technol-ogy(ICCT’2003),2003.。
