A Study Summary of OpenFlow Protocol Version 1.3.5
[Personal Summary Series 11] Understanding the OpenFlow Protocol: Notes and Summary
Understanding the OpenFlow protocol: notes and summary. Overview: the OpenFlow protocol was proposed by Nick McKeown of Stanford University, and with it SDN (Software-Defined Networking) gradually drew the attention of both researchers and network equipment vendors.
The core idea of OpenFlow is to split up the packet-forwarding process of a traditional switch: the forwarding that a switch used to perform in a single step is now carried out cooperatively by two kinds of devices, an OpenFlow-capable switch (assumed here to be OVS, Open vSwitch) and a controller.
OVS maintains a structure called the flow table, which specifies the forwarding policy. The flow table is populated under the controller's direction, and the controller can maintain it at any time (deleting, adding, or modifying flow entries).
Network administrators can therefore easily define the network's behaviour themselves, which is exactly what makes the network "software defined".
Packet forwarding in an OpenFlow network works as follows: when OVS receives a packet, it matches the packet against its flow entries to decide what happens to it (forward it, drop it, pass it to the controller, and so on). The flow table is thus the control policy, and defining the network's behaviour only requires operating on the flow table from the controller side.
This raises the question of how the controller and OVS communicate, and the format, content, and types of the messages exchanged between them are the main substance of the OpenFlow protocol.
OVS communicates with the controller over a secure channel whose traffic is encrypted with TLS. The messages sent and received over the secure channel are different from the data packets arriving at the switch: they are not processed by the flow table. The relationship between OVS and the controller is shown in the figure below [1]. Figure 2-1-1: the relationship between OVS and the controller. The most important parts of an OpenFlow network are therefore the flow table (its structure and the matching process) and the communication between the controller and OVS.
The following presents my understanding of these two areas.
Flow tables: each switch contains a number of flow tables and one group table, and the flow tables are numbered starting from 0.
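The lookup-and-forward behaviour just described can be sketched in C. This is a toy model, not the spec's wire format: the types and names (`sim_flow_entry`, `sim_table`, `pipeline_lookup`) are invented for illustration, only a single in-port match field is modelled, and a table miss is signalled by handing the packet to the controller.

```c
#include <stdint.h>
#include <stddef.h>

#define TABLE_MAX      4
#define GOTO_NONE     -1   /* no goto-table: stop in this table          */
#define OUT_CONTROLLER -2  /* table miss: send packet to the controller  */

struct sim_flow_entry {
    uint32_t in_port;      /* match field: ingress port                  */
    int      wildcard;     /* 1 = match any in_port                      */
    uint16_t priority;     /* higher number wins                         */
    int      goto_table;   /* next table id, or GOTO_NONE                */
    int      out_port;     /* output action applied when we stop         */
};

struct sim_table {
    struct sim_flow_entry *entries;
    size_t n;
};

/* Walk tables starting at table 0, as the text describes; pick the
 * highest-priority matching entry in each table, and on a miss hand
 * the packet to the controller. */
int pipeline_lookup(struct sim_table *tables, uint32_t in_port)
{
    int t = 0;
    while (t >= 0 && t < TABLE_MAX) {
        struct sim_flow_entry *best = NULL;
        for (size_t i = 0; i < tables[t].n; i++) {
            struct sim_flow_entry *e = &tables[t].entries[i];
            if ((e->wildcard || e->in_port == in_port) &&
                (!best || e->priority > best->priority))
                best = e;
        }
        if (!best)
            return OUT_CONTROLLER;   /* table miss */
        if (best->goto_table == GOTO_NONE)
            return best->out_port;   /* stop and forward */
        t = best->goto_table;        /* keep matching downstream */
    }
    return OUT_CONTROLLER;
}

/* Tiny demo: table 0 sends port-1 traffic to table 1; table 1 outputs
 * on port 7; anything else misses and goes to the controller. */
int pipeline_demo(uint32_t in_port)
{
    static struct sim_flow_entry t0[] = {
        { .in_port = 1, .wildcard = 0, .priority = 10,
          .goto_table = 1, .out_port = 0 },
    };
    static struct sim_flow_entry t1[] = {
        { .wildcard = 1, .priority = 1,
          .goto_table = GOTO_NONE, .out_port = 7 },
    };
    struct sim_table tables[TABLE_MAX] = { { t0, 1 }, { t1, 1 } };
    return pipeline_lookup(tables, in_port);
}
```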
An Introduction to the OpenFlow Protocol
Part 1: a walkthrough of OpenFlow protocol communication (category: OpenFlow protocol analysis, posted 2013-12-30).
Part 2: a brief introduction to OpenFlow and SDN. 1. Network virtualization, SDN, and OpenFlow: the growth of cloud computing is built on virtualization technology.
Cloud providers allocate resources on demand, offering customers compute, storage, network, and other IT resources with high availability and high scalability.
Virtualization abstracts physical resources of all kinds into logical ones, hiding physical constraints and making it possible to manage and use resources at a finer granularity.
In recent years compute virtualization (chiefly of the x86 platform) has made great strides; by comparison, storage and network virtualization have also advanced, but many problems remain to be solved, especially in cloud computing environments.
Although OpenFlow and SDN were not created specifically for network virtualization, the standardization and flexibility they bring open up boundless possibilities for its development.
OpenFlow originated in Stanford University's Clean Slate program [1].
The ultimate goal of Clean Slate was to reinvent the Internet, changing an existing network infrastructure whose design had begun to look dated and which was difficult to evolve.
In 2006, the Stanford student Martin Casado led Ethane [2], a project on network security and management. It attempted to use a centralized controller to let network administrators conveniently define flow-based security policies and apply them to all kinds of network devices, thereby securing the communications of the entire network.
Inspired by this project (and by Ethane's predecessor, SANE [3]), Martin and his advisor, Professor Nick McKeown (then Faculty Director of the Clean Slate program), realized that if Ethane's design were generalized, separating the data-forwarding function (data plane) of traditional network devices from the routing-control function (control plane) and managing and configuring all kinds of devices through a centralized controller with standardized interfaces, then many more possibilities would open up for designing, managing, and using network resources, making it much easier to drive innovation and development in networking.
An Introduction to Key OpenFlow Technologies
A brief overview of OpenFlow: the OpenFlow standard is formally titled the OpenFlow Switch Specification, so it is at heart a device specification. It lays down the basic components and functional requirements of the OpenFlow switch, the forwarding device of the SDN infrastructure layer, as well as the OpenFlow protocol through which a remote controller controls such switches.
Figure 1-1: the SDN (OpenFlow) architecture. An OpenFlow switch communicates with a remote controller using the OpenFlow protocol over a secure connection.
The flow table (Flow Table) is the key component of an OpenFlow switch, responsible for high-speed packet lookup and forwarding.
Flow table concepts: in OpenFlow v1.1 and v1.2 a flow entry still consists of three parts, but the names of those parts changed.
The header fields (Header Fields) and actions (Actions) defined in v1.0 were renamed match fields (Match Fields) and instructions (Instructions), respectively.
The header fields were renamed because tuple information such as the ingress port in a flow entry is not actually part of the packet header, so "match fields" is the more accurate term.
The word "instructions" replaced "actions" mainly because of the introduction of multiple flow tables in the OpenFlow switch.
In a multi-table scenario, even when a packet matches in one table, the switch's next step may still be to pass it to the next table for further matching, rather than immediately performing a concrete operation on the packet as in v1.0.
The newer versions of OpenFlow therefore uniformly renamed the related actions "instructions".
The flow entry structure defined in OpenFlow v1.1 and v1.2 is shown in Figure 1-2.
Figure 1-2: the flow entry structure of OpenFlow v1.1 and v1.2. From OpenFlow v1.3 onward the flow entry structure changed once again, adding a priority (Priority), timeouts (Timeouts), and a cookie, expanding the original three parts to six, as shown in Figure 1-3.
Figure 1-3: the flow entry structure of OpenFlow v1.3. The fields of the v1.3 flow entry are described below.
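The six parts listed above can be captured in an illustrative C struct. This mirrors only the logical layout of a v1.3 flow entry: the real protocol encodes matches as OXM TLVs and instructions as their own structures, and the field names and the helper `flow_entry_expired` are invented here to show how the pieces, especially the two timeouts, fit together.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative layout of the six parts of an OpenFlow v1.3 flow entry. */
struct flow_entry_v13 {
    /* 1. match fields: ingress port plus packet headers (simplified) */
    uint32_t match_in_port;
    uint8_t  match_eth_dst[6];
    /* 2. priority: matching precedence of this entry */
    uint16_t priority;
    /* 3. counters: updated when packets match */
    uint64_t packet_count;
    uint64_t byte_count;
    /* 4. instructions: what to do on a match (apply/write actions,
     *    goto-table, ...); kept opaque in this sketch */
    void    *instructions;
    /* 5. timeouts: max idle time and hard lifetime of the entry */
    uint16_t idle_timeout;
    uint16_t hard_timeout;
    /* 6. cookie: opaque value chosen by the controller to tag the flow */
    uint64_t cookie;
};

/* A flow entry is evicted when either timeout (if nonzero) has elapsed:
 * idle_timeout counts seconds since the last matching packet, while
 * hard_timeout counts seconds since the entry was installed. */
int flow_entry_expired(const struct flow_entry_v13 *f,
                       uint16_t idle_secs, uint16_t life_secs)
{
    if (f->idle_timeout && idle_secs >= f->idle_timeout) return 1;
    if (f->hard_timeout && life_secs >= f->hard_timeout) return 1;
    return 0;
}
```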
Detailed Reference Notes on OpenFlow and NOX
These are notes I made while studying, offered for reference only.

Contents
Chapter 1: Background
Chapter 2: Theoretical foundations
  2.1 Software-defined networking (SDN)
  2.2 The OpenFlow network architecture: 2.2.1 OpenFlow switches; 2.2.2 OpenFlow controllers; 2.2.3 OpenFlow virtualization
  2.3 The secure channel: 2.3.1 The OF protocol; 2.3.2 Connection setup; 2.3.3 Connection interruption; 2.3.4 Encryption; 2.3.5 Spanning tree
Chapter 3: Building the experimental environment
  3.1 Installing Open vSwitch: 3.1.1 Installing KVM; 3.1.2 Installing Open vSwitch; 3.1.3 Configuring the bridge
  3.2 Installing the NOX network operating system and GUI: 3.2.1 Installing NOX; 3.2.2 Installing NOX-GUI
  3.3 Testing the environment: 3.3.1 Overall topology; 3.3.2 Running the controller; 3.3.3 Configuring Open vSwitch; 3.3.4 Testing whether Open vSwitch and the controller are connected; 3.3.5 Starting GUI monitoring
Chapter 4: OpenFlow analysis
  4.1 Important data structures: 4.1.1 The OF protocol header; 4.1.2 Switch port state; 4.1.3 The flow match structure; 4.1.4 The action structure; 4.1.5 Flow table operations; 4.1.6 Table statistics; 4.1.7 Port statistics; 4.1.8 Packet-in; 4.1.9 Sending packets; 4.1.10 Flow removal
  4.2 OpenFlow device definition and basic operations
  4.3 OpenFlow datapath analysis
Chapter 5: NOX analysis
  5.1 Events: 5.1.1 The event concept; 5.1.2 Core event list
  5.2 Components: 5.2.1 The component concept; 5.2.2 How Python-based components work; 5.2.3 How flow table creation works; 5.2.4 Basic component architecture
Chapter 6: Python component examples
  6.1 Example 1: parsing a packet_in message
  6.2 Example 2: datapath redirection
Chapter 7: GUI component examples
  7.1 GUI overview
  7.2 How NOX-GUI works: 7.2.1 SNMP overview; 7.2.2 The Open vSwitch SNMP implementation; 7.2.3 The NOX SNMP implementation

Chapter 1: Background. Researchers at Stanford University proposed OpenFlow in 2008 and gradually promoted the SDN concept.
Reading Notes: The OpenFlow Protocol
VLANs and virtual ports. Actions can rewrite a packet's VLAN or MPLS tags or its QoS. Priority is relevant only to flow entries whose fields have wildcards set: the Priority field states the entry's priority, and a larger number means a higher priority. Higher-priority wildcarded entries must therefore be placed in lower-numbered flow tables, and the switch is responsible for keeping the matching order correct, so that lower-priority rules never take effect ahead of higher-priority ones.
Metadata: a maskable register value that is used to carry information from one table to the next. The biggest change from OpenFlow 1.0 to the 1.1 specification is the revision of the match fields: the number of match tuples grew, allowing a wider variety of forwarding rules to be expressed and enriching the functionality, and version 1.1 also introduced the concept of the group table.
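The masked-register behaviour of metadata can be shown in a one-line C helper. This is the standard write-metadata semantics (only the bits selected by the mask are overwritten, so successive tables can each own a slice of the 64-bit register); the function name is chosen here for illustration.

```c
#include <stdint.h>

/* Sketch of the OpenFlow write-metadata instruction: bits selected by
 * the mask take the new value, all other bits are preserved. */
uint64_t write_metadata(uint64_t metadata, uint64_t value, uint64_t mask)
{
    return (metadata & ~mask) | (value & mask);
}
```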
OpenFlow is a switching technology that began in 2008 as a research project at Stanford University.
Building a software-defined network with the OpenFlow protocol makes it possible to manage the network as a whole rather than as countless independent devices.
Traditional switches use the Spanning Tree Protocol, or newer standards such as TRILL (Transparent Interconnection of Lots of Links), to determine packet forwarding paths.
OpenFlow instead moves forwarding decisions off the individual switches and onto a controller, which is typically a server or workstation.
A management application acts as the controller: it interacts with every switch in the network and configures the data forwarding paths, improving bandwidth utilization.
This application can also interact with cloud-management software to ensure there is enough bandwidth to support workloads as they are created and as they change.
OpenFlow protocol operation: the OpenFlow standard defines the interaction protocol between the controller and the switches, together with a set of switch operations.
This controller-to-switch protocol runs on top of Transport Layer Security (TLS) or an unprotected TCP connection.
The controller sends the switch instructions that control how packets are forwarded, along with configuration parameters such as VLAN priorities.
The switch sends messages to notify the controller when a link goes down or when a packet arrives for which no forwarding instruction exists.
Forwarding instructions are based on flows, where a flow consists of the characteristics shared by all of its packets.
Defining a flow means specifying a number of parameters, which may include the switch port on which packets arrive, the source Ethernet and IP addresses, the VLAN tag, the destination Ethernet or IP address, and many other packet characteristics.
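A flow definition of this kind can be sketched as a match structure plus wildcard flags, loosely in the spirit of OpenFlow 1.0's `ofp_match`. All names and the bit layout below are invented for illustration; the real structure carries many more fields.

```c
#include <stdint.h>
#include <string.h>

/* A wildcard bit set to 1 means "ignore this field when matching". */
#define W_IN_PORT (1u << 0)
#define W_ETH_SRC (1u << 1)
#define W_IP_SRC  (1u << 2)
#define W_VLAN    (1u << 3)

struct pkt_key {
    uint32_t in_port;
    uint8_t  eth_src[6];
    uint32_t ip_src;
    uint16_t vlan_id;
};

struct flow_def {
    uint32_t wildcards;   /* W_* bits: which fields are "don't care" */
    struct pkt_key key;
};

/* A packet belongs to the flow if every non-wildcarded field matches. */
int flow_matches(const struct flow_def *f, const struct pkt_key *p)
{
    if (!(f->wildcards & W_IN_PORT) && f->key.in_port != p->in_port) return 0;
    if (!(f->wildcards & W_ETH_SRC) &&
        memcmp(f->key.eth_src, p->eth_src, 6) != 0) return 0;
    if (!(f->wildcards & W_IP_SRC) && f->key.ip_src != p->ip_src) return 0;
    if (!(f->wildcards & W_VLAN) && f->key.vlan_id != p->vlan_id) return 0;
    return 1;
}
```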
Differences Between OpenFlow Versions
Differences between OpenFlow versions (1.0 versus 1.1, 1.2, and 1.3).

1. OpenFlow version 1.1
Release date: February 28, 2011

1 Multiple tables
Prior versions of the OpenFlow specification exposed to the controller the abstraction of a single table. The OpenFlow pipeline could internally be mapped to multiple tables, such as a separate wildcard table and exact-match table, but those tables always acted logically as a single table. OpenFlow 1.1 introduces a more flexible pipeline with multiple tables. Exposing multiple tables has many advantages. First, much hardware has multiple tables internally (for example an L2 table, an L3 table, and multiple TCAM lookups), and OpenFlow's multiple-table support makes it possible to expose this hardware with greater efficiency and flexibility. Second, many network deployments combine orthogonal kinds of packet processing (for example ACLs, QoS, and routing); forcing all of this processing into a single table creates a huge ruleset due to the cross-product of the individual rules, while multiple tables can decouple the processing properly.
The new OpenFlow pipeline with multiple tables is quite different from the simple pipeline of prior OpenFlow versions. It exposes a set of completely generic tables, supporting the full match and the full set of actions. It is difficult to build a pipeline abstraction that accurately represents all possible hardware, so OpenFlow 1.1 is based on a generic, flexible pipeline that may be mapped onto the hardware. Some limited table capabilities are available to describe what each table can support.
Packets are processed through the pipeline: they are matched and processed in the first table, and may be matched and processed in further tables. As it goes through the pipeline, a packet is associated with an action set, which accumulates actions, and a generic metadata register. The action set is resolved at the end of the pipeline and applied to the packet. The metadata can be matched and written at each table, and it carries state between tables. OpenFlow introduces a new protocol object called an instruction to control pipeline processing. Actions, which were attached directly to flows in previous versions, are now encapsulated in instructions; instructions may apply those actions between tables or accumulate them in the packet's action set. Instructions can also change the metadata or direct the packet to another table.
• The switch now exposes a pipeline with multiple tables.
• Flow entries carry instructions to control pipeline processing.
• The controller can choose the packet's traversal of tables via the goto instruction.
• The metadata field (64 bits) can be set and matched in tables.
• Packet actions can be merged into the packet action set.
• The packet action set is executed at the end of the pipeline.
• Packet actions can also be applied between table stages.
• A table miss can send the packet to the controller, continue to the next table, or drop it.
• Rudimentary table capabilities and configuration.

2 Groups
The new group abstraction enables OpenFlow to represent a set of ports as a single entity for forwarding packets. Different types of groups are provided to represent different abstractions such as multicasting or multipathing. Each group is composed of a set of group buckets, and each bucket contains the set of actions to be applied before forwarding to the port. Group buckets can also forward to other groups, so groups can be chained together.
• Group indirection to represent a set of ports.
• A group table with four types of groups:
- All: used for multicast and flooding
- Select: used for multipath
- Indirect: simple indirection
- Fast Failover: use the first live port
• A group action to direct a flow to a group.
• Group buckets contain the actions related to the individual port.

3 Tags: MPLS and VLAN
Prior versions of the OpenFlow specification had limited VLAN support: only a single level of VLAN tagging, with ambiguous semantics. The new tagging support has explicit actions to add, modify, and remove VLAN tags, and can support multiple levels of VLAN tagging. It also adds similar support for MPLS shim headers.
• Support for VLAN and QinQ: adding, modifying, and removing VLAN headers.
• Support for MPLS: adding, modifying, and removing MPLS shim headers.

4 Virtual ports
Prior versions of the OpenFlow specification assumed that all the ports of the OpenFlow switch were physical ports. This version of the specification adds support for virtual ports, which can represent complex forwarding abstractions such as LAGs or tunnels.
• Port numbers widened to 32 bits, enabling a larger number of ports.
• A switch can expose virtual ports as OpenFlow ports.
• Packet-in is augmented to report both the virtual and the physical port.

5 Controller connection failure
Prior versions of the specification introduced the emergency flow cache as a way to deal with loss of connectivity to the controller. The emergency flow cache was removed in this version due to lack of adoption, the complexity of implementing it, and other issues with its semantics. This version adds two simpler modes for dealing with loss of controller connectivity. In fail-secure mode, the switch continues operating in OpenFlow mode until it reconnects to a controller. In fail-standalone mode, the switch reverts to normal processing (Ethernet switching).
• The emergency flow cache is removed from the spec.
• A connection interruption triggers fail-secure or fail-standalone mode.

6 Other changes
• Remove 802.1D-specific text from the specification.
• Cookie Enhancements Proposal: a cookie mask for filtering.
• Set-queue action (unbundled from the output-port action).
• Maskable DL and NW address match fields.
• TTL decrement, set, and copy actions for IPv4 and MPLS.
• SCTP header matching and rewriting support.
• Set-ECN action.
• Defined message handling: no loss; may reorder if no barrier.
• VENDOR APIs renamed to EXPERIMENTER APIs.
• Many other bug fixes, rewordings, and clarifications.

2. OpenFlow version 1.2
Release date: December 5, 2011
Please refer to the bug-tracking IDs for more details on each change.

1 Extensible match support
Prior versions of the OpenFlow specification used a static, fixed-length structure to specify ofp_match, which prevented flexible expression of matches and the inclusion of new match fields. ofp_match has been changed to a TLV structure, called OpenFlow Extensible Match (OXM), which dramatically increases flexibility. The match fields themselves have been reorganised. In the previous static structure, many fields were overloaded; for example tcp.src_port, udp.src_port, and icmp.code used the same field entry.
Now, every logical field has its own unique type.
Features of OpenFlow Extensible Match:
• A flexible and compact TLV structure called OXM (EXT-1).
• Flexible expression of matches and flexible bitmasking (EXT-1).
• A prerequisite system to ensure consistency of matches (EXT-1).
• Every match field gets a unique type; overloading is removed (EXT-1).
• VLAN matching is modified to be more flexible (EXT-26).
• Vendor classes and experimenter matches are added (EXT-42).
• Switches are allowed to override match requirements (EXT-56, EXT-33).

2 Extensible "set-field" packet rewriting support
Prior versions of the OpenFlow specification used hand-crafted actions to rewrite header fields. The extensible set_field action reuses the OXM encoding defined for matches and can rewrite any header field in a single action (EXT-13). This allows any new match field, including experimenter fields, to be available for rewrite, makes the specification cleaner, and lowers the cost of introducing new fields.
• Most header-rewrite actions are deprecated.
• A generic set-field action is introduced (EXT-13).
• The match TLV structure (OXM) is reused in the set-field action.

3 Extensible context expression in packet-in
The packet-in message included some of the packet's context (the ingress port) but not all of it (the metadata), preventing the controller from working out how the match happened in the table and which flow entries would or would not match. Rather than introducing another hard-coded field in the packet-in message, the flexible OXM encoding is used to carry the packet context.
• The match TLV structure (OXM) is reused to describe metadata in packet-in (EXT-6).
• The metadata field is included in packet-in.
• The ingress port and physical port move from static fields to the OXM encoding.
• Packet header fields may optionally be included in the TLV structure.

4 Extensible error messages via an experimenter error type
An experimenter error code has been added, enabling experimenter functionality to generate custom error messages (EXT-2). The format is identical to the other experimenter APIs.

5 IPv6 support added
Basic support for IPv6 match and header rewrite has been added via the flexible match support.
• Support for matching on the IPv6 source address, destination address, protocol number, traffic class, ICMPv6 type, ICMPv6 code, and IPv6 neighbor-discovery header fields (EXT-1).
• Support for matching on the IPv6 flow label (EXT-36).

6 Simplified behaviour of the flow-mod request
The behaviour of the flow-mod request has been simplified (EXT-30).
• The MODIFY and MODIFY STRICT commands never insert new flows into the table.
• A new flag, OFPFF_RESET_COUNTS, controls counter reset.
• The quirky behaviour of the cookie field is removed.

7 Removed packet parsing specification
The OpenFlow specification no longer attempts to define how to parse packets (EXT-3); the match fields are defined only logically.
• OpenFlow does not mandate how to parse packets.
• Parsing consistency is achieved via OXM prerequisites.

8 Controller role change mechanism
The controller role change mechanism is a simple mechanism to support multiple controllers for failover (EXT-39). The scheme is driven entirely by the controllers; the switch only needs to remember the role of each controller to help the controller election mechanism.
• A simple mechanism to support multiple controllers for failover.
• Switches may now connect to multiple controllers in parallel.
• Each controller can change its role to equal, master, or slave.

9 Other changes
• Per-table metadata bitmask capabilities (EXT-34).
• Rudimentary group capabilities (EXT-61).
• A hard-timeout field in flow-removed messages (OFP-283).
• The ability for the controller to detect STP support (OFP-285).
• Packet buffering can be turned off with OFPCML_NO_BUFFER (EXT-45).
• The ability to query all queues (EXT-15).
• An experimenter queue property (EXT-16).
• A max-rate queue property (EXT-21).
• Flows can be deleted in all tables (EXT-10).
• A switch can check chaining when deleting groups (EXT-12).
• The controller can disable buffering (EXT-45).
• Virtual ports are renamed logical ports (EXT-78).
• New error messages (EXT-1, EXT-2, EXT-12, EXT-13, EXT-39, EXT-74, and EXT-82).
• Release notes are included in the specification document.
• Many other bug fixes, rewordings, and clarifications.

3. OpenFlow version 1.3
Release date: April 13, 2012
Please refer to the bug-tracking IDs for more details on each change.

1 Refactored capabilities negotiation
Prior versions of the OpenFlow specification allowed only limited expression of the capabilities of an OpenFlow switch. OpenFlow 1.3 includes a more flexible framework to express capabilities (EXT-123). The main change is the improved description of table capabilities. Those capabilities have been moved out of the table statistics structure into their own request/reply message, and are encoded using a flexible TLV format.
This enables the addition of next-table capabilities, table-miss flow entry capabilities, and experimenter capabilities. Other changes include renaming the "stats" framework to the "multipart" framework, reflecting the fact that it is now used for both statistics and capabilities, and moving port descriptions into their own multipart message to support a greater number of ports.
Features of the refactored capabilities negotiation:
• The "stats" framework is renamed the "multipart" framework.
• "Multipart" requests (requests spanning multiple messages) are enabled.
• The port-list description moves to its own multipart request/reply.
• Table capabilities move to their own multipart request/reply.
• A flexible property structure expresses table capabilities.
• Experimenter capabilities can be expressed.
• Capabilities for table-miss flow entries are added.
• Next-table (i.e. goto) capabilities are added.

2 More flexible table-miss support
Prior versions of the OpenFlow specification included table configuration flags selecting one of three behaviours for handling table misses (packets not matching any flow in the table). OpenFlow 1.3 replaces those limited flags with the table-miss flow entry, a special flow entry describing the behaviour on a table miss (EXT-108). The table-miss flow entry uses standard OpenFlow instructions and actions to process table-miss packets, bringing the full flexibility of OpenFlow to bear on them. All behaviour previously expressed with the table-miss config flags can be expressed with the table-miss flow entry. Many new ways of handling a table miss, such as processing it with the normal action, can now be described trivially in the OpenFlow protocol.
• The table-miss config flags are removed (EXT-108).
• The table-miss flow entry is defined as the all-wildcard, lowest-priority flow entry (EXT-108).
• Support for the table-miss flow entry is mandated in every table to process table-miss packets (EXT-108).
• Capabilities describe the table-miss flow entry (EXT-123).
• The table-miss default changes to dropping packets (EXT-119).

3 IPv6 extension header handling support
Adds the ability to match the presence of common IPv6 extension headers, and some anomalous conditions in IPv6 extension headers (EXT-38). A new OXM pseudo-header field, OXM_OF_IPV6_EXTHDR, can match the following conditions:
• A hop-by-hop IPv6 extension header is present.
• A router IPv6 extension header is present.
• A fragmentation IPv6 extension header is present.
• A destination-options IPv6 extension header is present.
• An authentication IPv6 extension header is present.
• An Encrypted Security Payload IPv6 extension header is present.
• A No Next Header IPv6 extension header is present.
• IPv6 extension headers are out of the preferred order.
• An unexpected IPv6 extension header was encountered.

4 Per-flow meters
Adds support for per-flow meters (EXT-14). Per-flow meters can be attached to flow entries and can measure and control the rate of packets. One of the main applications of per-flow meters is rate-limiting packets sent to the controller. The per-flow meter feature is based on a new flexible meter framework, which can describe complex meters through multiple metering bands, metering statistics, and capabilities. Currently only simple rate-limiter meters are defined over this framework. Support for color-aware meters, which support DiffServ-style operation and are tightly integrated into the pipeline, was postponed to a later release.
• A flexible meter framework based on per-flow meters and meter bands.
• Meter statistics, including per-band statistics.
• Meters can be attached flexibly to flow entries.
• Simple rate-limiter support (drop packets).

5 Per-connection event filtering
A previous version of the specification introduced the ability for a switch to connect to multiple controllers for fault tolerance and load balancing. Per-connection event filtering improves multi-controller support by letting each controller filter out events from the switch that it does not want (EXT-120). A new set of OpenFlow messages lets a controller configure an event filter on its own connection to the switch. Asynchronous messages can be filtered by type and reason. This event filter comes in addition to other existing mechanisms that enable or disable asynchronous messages; for example, the generation of flow-removed events can be configured per flow. Each controller can have separate filters for the slave role and the master/equal role.
• An asynchronous-message filter for each controller connection.
• Controller messages to set and get the asynchronous-message filter.
• The default filter value matches OpenFlow 1.2 behaviour.
• The OFPC_INVALID_TTL_TO_CONTROLLER config flag is removed.

6 Auxiliary connections
In previous versions of the specification, the channel between the switch and the controller was exclusively a single TCP connection, which could not exploit the parallelism available in most switch implementations. OpenFlow 1.3 lets a switch create auxiliary connections to supplement the main connection between the switch and the controller (EXT-114). Auxiliary connections are mostly useful for carrying packet-in and packet-out messages.
• A switch may create auxiliary connections to the controller.
• An auxiliary connection may not exist when the main connection is not alive.
• An auxiliary-id is added to the protocol to disambiguate the type of connection.
• Auxiliary connections are possible over UDP and DTLS.

7 MPLS BoS matching
A new OXM field, OXM_OF_MPLS_BOS, matches the Bottom of Stack (BoS) bit of the MPLS header (EXT-85). The BoS bit indicates whether other MPLS shim headers are present in the payload of the current MPLS packet, and matching it can help disambiguate cases where an MPLS label is reused across levels of MPLS encapsulation.

8 Provider Backbone Bridging tagging
Adds support for tagging packets with Provider Backbone Bridging (PBB) encapsulation (EXT-105). This enables OpenFlow to support various network deployments based on PBB, such as regular PBB and PBB-TE.
• Push and pop operations to add a PBB header as a tag.
• A new OXM field to match the I-SID of the PBB header.

9 Reworked tag order
In previous versions of the specification, the final order of tags in a packet was statically specified; for example, an MPLS shim header was always inserted after all VLAN tags in the packet. OpenFlow 1.3 removes this restriction: the final order of tags in a packet is dictated by the order of the tagging operations, and each tagging operation adds its tag in the outermost position (EXT-121).
• The defined order of tags in a packet is removed from the specification.
• Tags are now always added in the outermost possible position.
• An action list can add tags in arbitrary order.
• Tag order remains predefined for tagging in the action set.

10 Tunnel-ID metadata
The logical-port abstraction lets OpenFlow support a wide variety of encapsulations. The tunnel-id metadata, OXM_OF_TUNNEL_ID, is a new OXM field that exposes to the OpenFlow pipeline metadata associated with the logical port, most commonly the demultiplexing field of the encapsulation header (EXT-107). For example, if the logical port performs GRE encapsulation, the tunnel-id field maps to the GRE key field of the GRE header: after decapsulation, OpenFlow can match the GRE key in the tunnel-id match field, and by setting the tunnel-id, OpenFlow can set the GRE key of an encapsulated packet.

11 Cookies in packet-in
A cookie field was added to the packet-in message (EXT-7). The field takes its value from the flow that sends the packet to the controller; if the packet was not sent by a flow, the field is set to 0xffffffffffffffff. Having the cookie in the packet-in lets the controller classify packet-ins more efficiently, rather than having to match the packet against the full flow table.

12 Duration for stats
A duration field was added to most statistics, including port statistics, group statistics, queue statistics, and meter statistics (EXT-102). The duration field allows packet and byte rates to be calculated more accurately from the counters included in those statistics.

13 On-demand flow counters
New flow-mod flags have been added to disable packet and byte counters on a per-flow basis. Disabling such counters may improve flow-handling performance in the switch.

14 Other changes
• A bug describing VLAN matching is fixed (EXT-145).
• The flow entry description now mentions priority (EXT-115).
• The flow entry description now mentions timeouts and cookies (EXT-147).
• Unavailable counters must now be set to all ones (EXT-130).
• "Flow entry" is used correctly instead of "rule" (EXT-132).
• Many other bug fixes, rewordings, and clarifications.
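One concrete detail from the OXM changes above is the 32-bit OXM header, which packs the class, field, has-mask bit, and payload length together. The helper below follows the layout used from OpenFlow 1.2 onward (class in the top 16 bits, then a 7-bit field, a 1-bit has-mask flag, and an 8-bit length); the function name itself is illustrative.

```c
#include <stdint.h>

/* Build an OXM TLV header: class(16) | field(7) | hasmask(1) | length(8).
 * With class 0x8000 (OFPXMC_OPENFLOW_BASIC), field 0 (IN_PORT) and
 * length 4, this yields the well-known constant 0x80000004. */
uint32_t oxm_header(uint16_t oxm_class, uint8_t field, int hasmask,
                    uint8_t length)
{
    return ((uint32_t)oxm_class << 16)
         | ((uint32_t)(field & 0x7F) << 9)
         | ((uint32_t)(hasmask ? 1 : 0) << 8)
         | length;
}
```

Note that when the has-mask bit is set, the payload length doubles, because the mask follows the value (e.g. a masked VLAN_VID carries 2 value bytes plus 2 mask bytes).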
SDN Southbound Protocols: The OpenFlow Protocol
I. Preface. SDN (Software-Defined Networking) is an emerging network architecture that separates the network's control plane from its data forwarding plane and achieves flexibility and programmability through a centralized controller.
A southbound protocol is the communication protocol between the controller and the network devices in an SDN architecture; the controller uses it to push instructions down to the devices and thereby configure and manage the network.
OpenFlow is one of the most widely used southbound protocols in SDN, and this document aims to describe the OpenFlow protocol's standard format and functions in detail.
II. Protocol version. This document follows the most recent version of the OpenFlow protocol, which at the time of writing is OpenFlow 1.5.
III. Protocol structure. The OpenFlow protocol consists of multiple message types, each with its own specific structure and function.
The main OpenFlow message types are the following:
1. Hello: establishes the connection between the controller and a network device and exchanges protocol version information.
2. Echo: tests the state of the connection between the controller and a device.
3. Features request/reply: the controller asks a device for its basic information, including the OpenFlow versions it supports, its port information, and so on.
4. Configuration request/reply: sets or queries a device's configuration, such as flow table capacity and timeout values.
5. Packet-In: a device sends a packet it cannot handle to the controller and asks the controller to process it.
6. Flow-Removed: sent to the controller when a flow entry on a device is deleted, reporting the reason for the removal.
7. Port-Status: sent to the controller when the state of a port on a device changes, for example when a port comes up or goes down.
8. Barrier request/reply: synchronizes the controller and a device, ensuring that the processing of all preceding messages has completed.
9. Flow-Mod: the controller pushes flow entries to a device to configure its packet-forwarding behaviour.
10. Group-Mod: the controller pushes group entries to a device to implement advanced functions such as multicast and multipath.
11. Table-Mod: the controller configures the properties of a device's flow tables.
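As a small worked example of the Hello exchange described in item 1, the basic (pre-version-bitmap) negotiation rule is simply to settle on the lower of the two advertised versions; the function name below is illustrative.

```c
#include <stdint.h>

/* Each side's Hello advertises its highest supported OpenFlow version
 * in the header version byte (0x01 = OF1.0, 0x04 = OF1.3, 0x05 = OF1.4);
 * without version bitmaps, both sides settle on the smaller value. */
uint8_t negotiate_version(uint8_t mine, uint8_t theirs)
{
    return mine < theirs ? mine : theirs;
}
```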
OpenFlow 1.3 Protocol Summary
Contents
1 Introduction
2 Switch components
3 Terminology
4 Ports
5 OpenFlow tables: 5.1 Pipeline processing; 5.2 Flow tables; 5.3 Matching; 5.4 Table-miss; 5.5 Flow entry removal; 5.6 Group tables; 5.7 Meter tables; 5.8 Counters; 5.9 Instructions
6 Messages added in the OpenFlow 1.3 protocol
7 The OpenFlow protocol
  7.1 The OpenFlow header: ofp_header
  7.2 Common structures: 7.2.1 Ports (ofp_port); 7.2.2 Queues (ofp_packet_queue); 7.2.3 Match fields (struct ofp_match, ofp_oxm_experimenter_header); 7.2.4 Flow instruction structures (ofp_instruction, ofp_instruction_goto_table, ofp_instruction_write_metadata, ofp_instruction_actions, ofp_instruction_meter, ofp_instruction_experimenter); 7.2.5 Action structures (ofp_action_header, ofp_action_output, ofp_action_group, ofp_action_set_queue, ofp_action_mpls_ttl, ofp_action_pop_mpls, ofp_action_set_field, ofp_action_experimenter_header)
  7.3 Controller-to-switch messages: 7.3.1 Handshake (ofp_switch_features); 7.3.2 Switch configuration (ofp_switch_config); 7.3.3 Flow table configuration (ofp_table_mod); 7.3.4 Modify-state messages (ofp_flow_mod, ofp_group_mod, ofp_bucket, ofp_port_mod, ofp_meter_mod, ofp_meter_band_header, ofp_meter_band_drop, ofp_meter_band_dscp_remark, ofp_meter_band_experimenter); 7.3.5 Multipart messages (ofp_multipart_request, ofp_multipart_reply, ofp_flow_stats_request, ofp_flow_stats, ofp_aggregate_stats_request, ofp_aggregate_stats_reply, ofp_table_stats, ofp_table_feature, ofp_table_feature_prop_type, ofp_table_feature_prop_header, ofp_table_feature_prop_instructions, ofp_table_feature_prop_next_tables, ofp_table_feature_prop_actions, ofp_table_feature_prop_oxm, ofp_port_stats_request, ofp_port_stats, ofp_port, ofp_queue_stats_request, ofp_queue_stats, ofp_group_stats_request, ofp_group_stats, ofp_group_desc, ofp_group_features, ofp_meter_multipart_stats, ofp_meter_band_stats, ofp_meter_multipart_request, ofp_meter_config, ofp_meter_features, ofp_experimenter_multipart_header); 7.3.6 Queue configuration (ofp_queue_get_config_request, ofp_queue_get_config_reply); 7.3.7 The Packet-Out message (ofp_packet_out); 7.3.8 The Barrier message; 7.3.9 The Role Request message (ofp_role_request); 7.3.10 The Set Asynchronous Configuration message
  7.4 Asynchronous messages: 7.4.1 Packet-In; 7.4.2 Flow Removed; 7.4.3 Port Status; 7.4.4 Error
  7.5 Symmetric messages: 7.5.1 Hello; 7.5.2 Echo Request; 7.5.3 Echo Reply; 7.5.4 Experimenter

1 Introduction
2 Switch components
An OpenFlow switch consists of one or more flow tables and a group table, which perform packet lookup and forwarding, and an OpenFlow channel to an external controller.
A Brief Look at OpenFlow Technology
OpenFlow's full name is the OpenFlow Switch Specification; taken literally, it is the set of rules for OpenFlow switches.
In the SDN architecture, the switches of the infrastructure layer are mainly responsible for data forwarding, and OpenFlow technology sets out the structural and functional rules that let an OpenFlow switch perform this role; it also defines and standardizes the OpenFlow protocol through which the control-plane controller manages the switches.
From its role it can be seen that the technical core comprises the OpenFlow switch, the OpenFlow controller, and the OpenFlow protocol; in addition, the flow table information pushed down through the controller is also one of the core technologies.
I. OpenFlow switches and controllers. 1.1 The OpenFlow switch. The OpenFlow switch is the heart of an OpenFlow network. It is a Layer 2 switch whose main function is data forwarding: delivering data-plane traffic to the appropriate network port as the network requires.
Its forwarding relies mainly on the flow tables pushed down by the OpenFlow controller; in other words, the controller directs the switch's forwarding. The advantage of this approach is that the OpenFlow switch needs no storage, memorization, or learning process, which improves the accuracy and efficiency of forwarding.
The flow table information pushed down by the controller is generated automatically or manually in response to network state, topology changes, and application demands, and it is delivered to the switch over the SDN control-to-data-plane interface to supply the switch's forwarding mechanism.
Figure 1.1.1 shows how an OpenFlow switch processes a packet. When a packet is received on a port, the switch first analyses the packet headers, then matches the packet against its existing flow entries in order of priority from high to low. If entries match, the best-matching entry is used and the operations associated with it are carried out, completing the forwarding of the packet, after which the corresponding statistics are updated.
If no matching flow entry is found, the packet is sent to the controller over the secure channel, and the controller takes charge of the packet and processes it further.
The most widely used OpenFlow switch today is the Open vSwitch software switch; experimental scenarios and topologies can be built from the open-source Open vSwitch and a controller.
How OpenFlow Builds a Software-Defined Network

With the OpenFlow protocol, a software-defined network can be managed as a whole rather than as countless independent devices. Traditional switches use the spanning tree protocol, or newer standards such as TRILL (Transparent Interconnection of Lots of Links), to determine packet forwarding paths. OpenFlow instead moves the forwarding decision from the individual switches to a controller, typically a server or workstation.
A management application runs on the controller and is responsible for interacting with all the network's switches, configuring data forwarding paths to improve bandwidth utilization. This application also interacts with cloud management software to ensure there is enough bandwidth to support workloads as they are created and change.
The OpenFlow standard defines the interaction protocol between the controller and switches, together with a set of switch operations. This controller-switch protocol runs over Transport Layer Security (TLS) or an unprotected TCP connection. The controller sends switches the instructions that control how packets are forwarded, as well as configuration parameters such as VLAN priorities. A switch sends messages to notify the controller when a link goes down or a packet arrives for which no forwarding instruction has been specified.
Flow table instructions modify the set of actions attached to each packet. At the start, a packet is processed with an empty action set. Actions may require the packet to be forwarded through a specified port, or modify the packet's TTL, VLAN tags, MPLS labels, or QoS fields.
The instructions of the first flow table may execute actions on the packet or add actions to be executed later. They also direct the packet to be compared against the entries of further flow tables, controlling its subsequent processing. Instructions of entries in later tables may in turn add further actions, remove or modify previously added actions, or execute other operations.
Flow tables are numbered from 0, and an arriving packet is first compared against the entries of table 0. On a match, the entry's counters are incremented and its instruction set is executed. If the arriving packet matches no flow table entry, a new flow must be created. Some switches may simply drop packets of undefined flows, but in most cases the packet is forwarded to the controller. The controller then defines a new flow for the packet, creates one or more flow table entries, and sends them to the switch to be added to its tables. Finally, the packet is sent back to the switch and processed with the newly created entries.
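The multi-table pipeline described above can be modelled as a small loop: processing starts at table 0, a matched entry may merge actions into the action set and hand the packet to a later table, and the accumulated set runs only when the pipeline ends. This is a hedged sketch; the table layout and key names are made up for illustration.

```python
def run_pipeline(tables, packet):
    """Walk the pipeline starting at table 0, accumulating an action set."""
    action_set = {}          # at most one action per type
    table_id = 0
    while table_id is not None:
        entry = tables[table_id].get(packet['ip_dst'])
        if entry is None:
            return 'packet-in'          # no matching entry: hand to controller
        # Write-Actions semantics: merge into the current action set.
        action_set.update(entry.get('write_actions', {}))
        table_id = entry.get('goto')    # Goto-Table; None ends the pipeline
    return action_set                   # executed once processing finishes

tables = {
    0: {'10.0.0.2': {'write_actions': {'set_vlan': 100}, 'goto': 1}},
    1: {'10.0.0.2': {'write_actions': {'output': 2}, 'goto': None}},
}
print(run_pipeline(tables, {'ip_dst': '10.0.0.2'}))
# {'set_vlan': 100, 'output': 2}
```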
OpenFlow Protocol: Versions 1.0 and 1.3 Analysed

OpenFlow is the protocol spoken between an SDN controller and the switches, and it occupies a very important place in SDN.
The protocol has by now been through versions 1.0, 1.3, 1.4, and others.
Of these, 1.0 and 1.3 are the most widely used.
This post mainly analyses the interaction flow between controller and switch under the 1.0 and 1.3 versions of the OpenFlow protocol.
OpenFlow 1.0 interaction. The OpenFlow 1.0 exchange proceeds as follows: the switch or the controller first sends a hello message to determine the OpenFlow version to use.
On receiving the hello, the peer replies with a hello of its own, negotiating the version.
The controller sends a features_request message to query the switch's specifics.
On receiving the features_request, the switch replies with a features_reply, reporting its details to the controller.
During operation, the controller keeps sending echo_request messages to the switch, and the switch answers each with an echo_reply, confirming that the connection is alive.
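Every message in this handshake starts with the same 8-byte header. As a sketch, the hello that opens the exchange can be built with Python's struct packing: for OpenFlow 1.0 the version byte is 0x01, OFPT_HELLO is type 0, and all fields are big-endian; the helper name here is ours, not from the spec.

```python
import struct

OFP10_VERSION = 0x01
OFPT_HELLO = 0

def ofp_header(version, msg_type, length, xid):
    """Pack the common 8-byte OpenFlow header (big-endian)."""
    return struct.pack('!BBHI', version, msg_type, length, xid)

# A bare hello carries nothing beyond the header, so length == 8.
hello = ofp_header(OFP10_VERSION, OFPT_HELLO, 8, xid=1)
print(hello.hex())  # '0100000800000001'
```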
During the exchange between switch and controller, OpenFlow 1.0 defines the following message types:

    enum ofp_type {
        /* Immutable messages. */
        OFPT_HELLO,                    /* Symmetric message */
        OFPT_ERROR,                    /* Symmetric message */
        OFPT_ECHO_REQUEST,             /* Symmetric message */
        OFPT_ECHO_REPLY,               /* Symmetric message */
        OFPT_VENDOR,                   /* Symmetric message */

        /* Switch configuration messages. */
        OFPT_FEATURES_REQUEST,         /* Controller/switch message */
        OFPT_FEATURES_REPLY,           /* Controller/switch message */
        OFPT_GET_CONFIG_REQUEST,       /* Controller/switch message */
        OFPT_GET_CONFIG_REPLY,         /* Controller/switch message */
        OFPT_SET_CONFIG,               /* Controller/switch message */

        /* Asynchronous messages. */
        OFPT_PACKET_IN,                /* Async message */
        OFPT_FLOW_REMOVED,             /* Async message */
        OFPT_PORT_STATUS,              /* Async message */

        /* Controller command messages. */
        OFPT_PACKET_OUT,               /* Controller/switch message */
        OFPT_FLOW_MOD,                 /* Controller/switch message */
        OFPT_PORT_MOD,                 /* Controller/switch message */

        /* Statistics messages. */
        OFPT_STATS_REQUEST,            /* Controller/switch message */
        OFPT_STATS_REPLY,              /* Controller/switch message */

        /* Barrier messages. */
        OFPT_BARRIER_REQUEST,          /* Controller/switch message */
        OFPT_BARRIER_REPLY,            /* Controller/switch message */

        /* Queue Configuration messages. */
        OFPT_QUEUE_GET_CONFIG_REQUEST, /* Controller/switch message */
        OFPT_QUEUE_GET_CONFIG_REPLY    /* Controller/switch message */
    };

The most commonly used of these are: Hello, Features_request, Features_reply, Stats_request, Stats_reply, Flow_mod, Set_config, Packet_in, Packet_out. Each of these messages is described below.
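The enum above assigns consecutive values starting at 0, so the type byte of a captured message can be decoded back to its name. A small sketch reproducing the OpenFlow 1.0 numbering:

```python
# OpenFlow 1.0 message types in enum order (values 0..21).
OFP10_TYPES = [
    'OFPT_HELLO', 'OFPT_ERROR', 'OFPT_ECHO_REQUEST', 'OFPT_ECHO_REPLY',
    'OFPT_VENDOR', 'OFPT_FEATURES_REQUEST', 'OFPT_FEATURES_REPLY',
    'OFPT_GET_CONFIG_REQUEST', 'OFPT_GET_CONFIG_REPLY', 'OFPT_SET_CONFIG',
    'OFPT_PACKET_IN', 'OFPT_FLOW_REMOVED', 'OFPT_PORT_STATUS',
    'OFPT_PACKET_OUT', 'OFPT_FLOW_MOD', 'OFPT_PORT_MOD',
    'OFPT_STATS_REQUEST', 'OFPT_STATS_REPLY',
    'OFPT_BARRIER_REQUEST', 'OFPT_BARRIER_REPLY',
    'OFPT_QUEUE_GET_CONFIG_REQUEST', 'OFPT_QUEUE_GET_CONFIG_REPLY',
]

def type_name(type_byte):
    """Map a header type byte to its OpenFlow 1.0 message name."""
    return OFP10_TYPES[type_byte]

print(type_name(10))  # 'OFPT_PACKET_IN'
```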
OpenFlow Protocol 1.3.5 Study Notes
OpenFlow Switch Components
OpenFlow Ports
• Standard Ports
– can be used as ingress ports
– can be used as output ports
– can be used in groups
– have port counters
– have state and configuration
Action Set
• An action set is associated with each packet; the set is empty by default.
• The action set is executed after pipeline processing is finished.
• An action set contains a maximum of one action of each type.
• An action of the same type is overwritten by the later one.
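The "one action per type, later writes win" rule above maps naturally onto a dictionary keyed by action type. A hedged sketch (the type names are illustrative, not spec identifiers):

```python
def write_actions(action_set, actions):
    """Merge actions into the set; same type is overwritten by the later one."""
    for a_type, a_value in actions:
        action_set[a_type] = a_value
    return action_set

s = write_actions({}, [('output', 1), ('set_ttl', 64)])
s = write_actions(s, [('output', 2)])  # a later table overrides the output port
print(s)  # {'output': 2, 'set_ttl': 64}
```

Only when pipeline processing finishes would this accumulated set actually be executed on the packet.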
OpenFlow Protocol Family
• OpenFlow Spec
– 1.0.2
– 1.3.4, 1.3.5
– 1.4.1
– 1.5.1
• OpenFlow Table Type Patterns
– 1.0
• OF-Config
– 1.2
• OpenFlow Test Specification
– 1.3.4
OpenFlow Protocol Walkthrough

A reading of the OpenFlow communication flow. Preface: I have been working with SDN for quite a while and have read the OpenFlow specification front to back several times, but never had the time to write up my own take on it.
Now that I have the time, here is my understanding of the OpenFlow communication flow.
Switch and controller in SDN. The two most important entities in SDN are the switch and the controller.
The controller is like the god of the network: it can know every message in the network and can issue instructions to the switches.
The switch is simply an entity that carries out the controller's instructions; unlike a traditional switch, though, its forwarding rules are specified by flow tables, and the flow tables are sent down by the controller.
Switch composition and differences from a traditional switch. The switch consists of a secure channel and flow tables; from OpenFlow 1.3 onward the single table becomes a multi-table pipeline with up to 256 tables.
In OpenFlow 1.0, by contrast, all entries live in the single table 0.
The secure channel is the module that communicates with the controller; the connection between switch and controller is implemented as a socket connection.
The flow tables hold the data forwarding rules and form the switch's forwarding module.
When data enters the switch, it is looked up against the flows in the tables; a matching flow's actions are executed, and if no flow matches, a packet_in is generated (discussed later). Differences from a traditional switch: an OpenFlow switch can match up through layer 4 and down to the port level, whereas a traditional switch is a layer-2 device.
Running the OpenFlow protocol, it can also realize many router functions, such as multicast.
(Additions welcome! If you know more, please tell me, many thanks!) An OpenFlow switch can be obtained in the following ways: a hardware OpenFlow switch; some vendors already manufacture them, but they are generally reported to be expensive, and their performance is the best.
Installing OVS on a physical machine; OVS can turn a computer into an OpenFlow switch.
Its performance is relatively stable.
Using the mininet simulated environment.
It can create many switches in arbitrary topologies; this blog has a post with a step-by-step tutorial on building topologies.
OpenFlow Packet Capture Analysis

Document name: OpenFlow packet capture analysis guide. 1. Introduction. This document describes OpenFlow packet capture analysis in detail, including the choice and configuration of capture tools and the interpretation and analysis of the captured data.
Its purpose is to provide a standardized operating guide for OpenFlow capture analysis, so that all parties can accurately understand and analyse OpenFlow network traffic.
2. Choosing and configuring a capture tool. 1. Tool choice. For OpenFlow capture analysis the following tools are recommended: Wireshark, a widely used network capture tool that supports capturing and analysing many protocols;
tcpdump, a command-line capture tool suited to Linux environments.
2. Tool configuration. When using a capture tool for OpenFlow analysis, configure it as follows: set a capture filter as needed, so that only packets related to the OpenFlow protocol are captured;
select the network interface connected to the OpenFlow switch as the capture interface;
set the capture duration as needed, so that enough packets are collected for analysis.
3. Interpreting and analysing the captured data. 1. Basic structure of the captured data. A captured OpenFlow message is normally composed of the following fields:
- version: the version number of the OpenFlow protocol;
- length: the total length of the message;
- type: the type of the OpenFlow message, such as a control message or a data-packet message;
- XID: the transaction ID of the message;
- data: the concrete content of the OpenFlow message.
2. Interpreting the captured data. From the captured data the following key information can be obtained:
- the OpenFlow protocol version in use, determined from the version field;
- control messages, identified from the type field and interpreted according to their specific type;
- data-packet messages, likewise identified from the type field and interpreted according to their specific type.
3. Analysing the captured data. Analysis of the captured data yields key information such as the OpenFlow message interaction flow: by following the XID field you can trace the exchange of OpenFlow messages, including which requests pair with which responses.
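The decode steps above can be sketched directly on raw bytes: pull the common header out of a captured message and read its fields. The sample bytes are hand-built (an OpenFlow 1.0 echo request, type 2), not from a real capture.

```python
import struct

def parse_header(data):
    """Unpack the common 8-byte OpenFlow header from captured bytes."""
    version, msg_type, length, xid = struct.unpack('!BBHI', data[:8])
    return {'version': version, 'type': msg_type, 'length': length, 'xid': xid}

pkt = bytes.fromhex('0102000800000007')
h = parse_header(pkt)
print(h)  # {'version': 1, 'type': 2, 'length': 8, 'xid': 7}
```

Pairing this with a dict from XID to the pending request is enough to reconstruct request/response exchanges from a capture.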
Table-miss
• Every flow table must support a table-miss flow entry to process table misses.
• The table-miss flow entry is identified by its match and its priority: it wildcards all match fields (all fields omitted) and has the lowest priority (0).
• The table-miss flow entry behaves in most ways like any other flow entry.
• If the table-miss flow entry does not exist, packets unmatched by flow entries are dropped (discarded) by default.
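The drop-versus-table-miss distinction above can be shown in a few lines. A hedged sketch (entry layout invented): with the wildcard priority-0 entry installed, an unmatched packet goes to the controller; without it, the default is to drop.

```python
def process(table, packet):
    """Pick the highest-priority matching entry; no match at all means drop."""
    matches = [e for e in table
               if all(packet.get(f) == v for f, v in e['match'].items())]
    if not matches:
        return 'drop'                       # no table-miss entry installed
    return max(matches, key=lambda e: e['priority'])['action']

miss = {'priority': 0, 'match': {}, 'action': 'to-controller'}   # table-miss
flow = {'priority': 100, 'match': {'eth_dst': 'aa:bb'}, 'action': 'output:1'}

print(process([flow, miss], {'eth_dst': 'cc:dd'}))  # 'to-controller'
print(process([flow], {'eth_dst': 'cc:dd'}))        # 'drop'
```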
Meter Table
• Meters enable various simple QoS operations.
• Any flow entry can specify a meter in its instruction set.
A meter consists of:
• meter identifier: a unique 32-bit unsigned integer
• meter bands: an unordered list of meter bands; each band specifies the rate of the band and the way to process the packet
• counters: updated when packets are processed by the meter
Reserved Ports (cont.)
• Optional:
– LOCAL: the switch's local networking stack / management stack
– NORMAL: forwarding using the traditional non-OpenFlow pipeline
– FLOOD: flooding using the traditional non-OpenFlow pipeline
Meter Bands
• Each meter may have one or more meter bands.
• band type: defines how packets are processed
– drop: drop the packet
– dscp remark: increase the drop precedence of the DSCP field in the IP header of the packet
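Band selection can be sketched as follows, under the (spec-derived) rule that the applied band is the one with the highest configured rate that the measured traffic still exceeds; rates and band layout here are invented for the example.

```python
def apply_meter(bands, measured_kbps):
    """Return the action of the highest-rate exceeded band,
    or 'forward' when traffic stays under every band rate."""
    exceeded = [b for b in bands if measured_kbps > b['rate']]
    if not exceeded:
        return 'forward'
    band = max(exceeded, key=lambda b: b['rate'])
    return band['type']          # e.g. 'drop' or 'dscp_remark'

bands = [{'rate': 1000, 'type': 'dscp_remark'}, {'rate': 5000, 'type': 'drop'}]
print(apply_meter(bands, 800))    # 'forward'
print(apply_meter(bands, 2000))   # 'dscp_remark'
print(apply_meter(bands, 9000))   # 'drop'
```

This gives the usual two-stage policer shape: mild overload gets remarked, heavy overload gets dropped.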
Instruction
• Each flow entry contains at most one instruction of each type.
• Instruction types, in execution order (some types are optional for switches to support):
– Meter: direct the packet to the specified meter.
– Apply-Actions: applies the list of action(s) immediately.
– Clear-Actions: clears all the actions in the action set immediately.
– Write-Actions: merges the specified set of action(s) into the current action set.
– Write-Metadata: writes the masked metadata value into the metadata field.
– Goto-Table: indicates the next table in the processing pipeline.
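Because the execution order above is fixed regardless of how the controller listed the instructions in a flow-mod, a switch can simply sort an entry's instructions by that order before running them. A minimal sketch:

```python
# Fixed instruction execution order from the list above.
ORDER = ['meter', 'apply_actions', 'clear_actions',
         'write_actions', 'write_metadata', 'goto_table']

def execution_order(instructions):
    """Return the entry's instructions sorted into execution order."""
    return sorted(instructions, key=ORDER.index)

print(execution_order(['goto_table', 'write_actions', 'meter']))
# ['meter', 'write_actions', 'goto_table']
```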
Action Set Type 1/2
• Action set action order (11 action types in total; the first 8, per the OpenFlow 1.3 specification):
1. copy TTL inwards
2. pop
3. push-MPLS
4. push-PBB
5. push-VLAN
6. copy TTL outwards
7. decrement TTL
8. set
Flow Removal
• Flow entries are removed in two ways:
– at the request of the controller
– via the switch flow expiry mechanism
• Two timeouts (both are in effect; whichever expires first applies):
– hard_timeout: absolute timeout counted from the flow entry's arrival (installation) time
– idle_timeout: relative timeout counted from the last packet matched by the entry
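The interplay of the two timeouts is easiest to see in a small expiry check; the function and timestamps are illustrative, and a value of 0 disables the corresponding timeout as in the spec.

```python
def is_expired(now, installed, last_hit, hard_timeout, idle_timeout):
    """True once either timeout has fired (0 disables a timeout)."""
    if hard_timeout and now - installed >= hard_timeout:
        return True                      # absolute age limit reached
    if idle_timeout and now - last_hit >= idle_timeout:
        return True                      # no matching packet for too long
    return False

# Installed at t=0, last matched at t=50, hard=100s, idle=30s.
print(is_expired(70, 0, 50, 100, 30))   # False: 20s idle, 70s old
print(is_expired(85, 0, 50, 100, 30))   # True:  idle 35s >= 30s
print(is_expired(100, 0, 95, 100, 30))  # True:  hard timeout reached
```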
• If the OFPFF_SEND_FLOW_REM flag is set, the switch sends a flow removed message to the controller when the entry is removed.
OpenFlow Ports (cont.)
• Physical Ports
– correspond to a hardware interface
• Logical Ports
– do not correspond directly to a hardware interface
– examples: link aggregation groups, tunnels, loopback interfaces
Agenda
• OpenFlow Overview
• OpenFlow Flow Table
• OpenFlow Protocol
Pipeline Processing
Matching
Match fields:
• packet header fields: various protocol header fields (e.g. Ethernet source address, IPv4 destination address)
• the ingress port, the metadata field, and other pipeline fields
Meter Bands (cont.)
• rate: used to select the meter band
• burst: defines the granularity of the meter band
• counters: updated when packets are processed by the meter band
• band type specific arguments: optional arguments of some band types
Reserved Ports
• Required:
– ALL: all ports the switch can use to forward a packet
– CONTROLLER: the control channel with OpenFlow controllers
– TABLE: the start of the OpenFlow pipeline
– IN_PORT: the packet's ingress port
– ANY: special value used when no port is specified
Flow Table

A flow table entry contains:
• match fields: to match against packets (ingress port, packet headers, and optionally metadata specified by a previous table)
• priority: matching precedence of the flow entry
• counters: updated when packets are matched
• instructions: to modify the action set or pipeline processing
• timeouts: maximum idle or hard time before the flow is expired by the switch
• cookie: opaque data value chosen by the controller, used to filter flow statistics, flow modification, and flow deletion requests
• flags: alter the way flow entries are managed (e.g. OFPFF_SEND_FLOW_REM)
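The cookie-based filtering mentioned above pairs a cookie with a cookie mask in controller requests: an entry is selected when the masked bits agree. A small sketch with invented cookie values:

```python
def cookie_matches(entry_cookie, cookie, cookie_mask):
    """True when the entry's cookie agrees with the request under the mask."""
    return (entry_cookie & cookie_mask) == (cookie & cookie_mask)

entries = {0x1001: 'flow-a', 0x1002: 'flow-b', 0x2001: 'flow-c'}
# Select all entries tagged 0x1xxx, i.e. cookie 0x1000 under mask 0xF000.
selected = [name for c, name in entries.items()
            if cookie_matches(c, 0x1000, 0xF000)]
print(selected)  # ['flow-a', 'flow-b']
```

This is how a controller can delete or query a whole family of its own entries in one request without listing each match.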