Why We Need Network Abstraction

The move to highly virtualized data centers and cloud models is straining the network. Traditional data center networks were not designed to support the dynamic nature of today’s workloads, but the emergence of highly virtualized environments is merely exposing issues that have always existed within network constructs. VLANs, VRFs, subnets, routing, security, etc. have been stretched well beyond their original intent, and the way these constructs are currently used limits scale as well as application expansion, contraction and mobility.  To read the full article visit: http://www.networkcomputing.com/next-gen-network-tech-center/why-we-need-network-abstraction/240142588


Data Center Overlays 101

I’ve been playing around with Show Me (www.showme.com) as a tool to add some whiteboarding to the blog.  Here’s my first crack at it, covering data center network overlays.


NVGRE

The most viable competitor to VXLAN is NVGRE, which was proposed by Microsoft, Intel, HP and Dell.  It is another encapsulation technique intended to allow virtual network overlays across the physical network.  Both techniques also remove the scalability issues with VLANs, which are capped at a maximum of 4096 IDs.  NVGRE uses Generic Routing Encapsulation (GRE) as the encapsulation method and carries a 24-bit Tenant Network Identifier (TNI) in the GRE header.  Like VXLAN, this 24-bit space allows for 16 million virtual networks.

[Diagram]
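To make the encapsulation concrete, here is a minimal Python sketch that packs the 24-bit TNI ahead of the original Ethernet frame.  The function name and the exact bit placement within the GRE key field are my own assumptions; the draft defines the authoritative layout.

import struct

def nvgre_encapsulate(inner_frame: bytes, tni: int) -> bytes:
    # Illustrative only: prepend a GRE header carrying a 24-bit TNI.
    if not 0 <= tni < 2 ** 24:
        raise ValueError("TNI must fit in 24 bits (16 million networks)")
    flags = 0x2000                  # K bit set: a key field follows
    proto = 0x6558                  # Transparent Ethernet Bridging (Ethernet payload)
    key = tni << 8                  # assume the TNI sits in the GRE key field
    gre_header = struct.pack("!HHI", flags, proto, key)
    return gre_header + inner_frame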

While NVGRE provides optional support for broadcast via IP multicast, it does not rely on it for address learning as VXLAN does.  It instead leaves that up to an as-yet-undefined control plane protocol.  This control plane protocol will handle the mappings between the “provider” address used in the outer header to designate the remote NVGRE end-point and the “customer” address of the destination.  The lack of reliance on flood-and-learn behavior replicated over IP multicast potentially makes NVGRE a more scalable solution, though this will depend on implementation and the underlying hardware.
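Whatever form that control plane eventually takes, it needs to distribute mappings like the hypothetical table sketched below, which ties a tenant (“customer”) MAC within a TNI to the “provider” IP of the NVGRE end-point behind which it sits.  The names and structure are illustrative, not from the draft.

endpoint_map = {}   # (TNI, customer MAC) -> provider IP of the remote NVGRE end-point

def learn_mapping(tni, customer_mac, provider_ip):
    endpoint_map[(tni, customer_mac)] = provider_ip

def lookup_endpoint(tni, customer_mac):
    # A miss would be handled by the (undefined) control plane,
    # or optionally by flooding over IP multicast.
    return endpoint_map.get((tni, customer_mac))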

Another difference between VXLAN and NVGRE lies in multi-pathing capability.  In its current form NVGRE provides little ability to be properly load-balanced by Equal-Cost Multi-Path (ECMP) routing.  In order to enhance load balancing, the draft suggests the use of multiple IP addresses per NVGRE host, which allows for more distinguishable flows.  This is a common issue with tunneling mechanisms and is solved in VXLAN by using a hash of the inner frame as the UDP source port, which provides for efficient load balancing by devices capable of 5-tuple balancing decisions.  There are other possible solutions proposed for NVGRE load balancing; we’ll have to wait and see how they pan out.
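For comparison, the sketch below shows the VXLAN approach in Python.  The hash function and port range are my own assumptions; the point is that a per-flow outer UDP source port gives 5-tuple ECMP something to work with.

import zlib

def vxlan_source_port(inner_src_mac: bytes, inner_dst_mac: bytes,
                      low: int = 49152, high: int = 65535) -> int:
    # Derive the outer UDP source port from the inner frame's addresses so
    # different inner flows hash to different outer 5-tuples on the underlay.
    entropy = zlib.crc32(inner_src_mac + inner_dst_mac)
    return low + (entropy % (high - low + 1))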

The last major difference between the two protocols is the handling of jumbo frames.  VXLAN is intended to stay within a data center, where jumbo frame support is nearly ubiquitous, so it assumes that support is present and utilizes it.  NVGRE is intended to also be usable between data centers and therefore includes provisions to avoid fragmentation.

Summary:

While NVGRE still needs much clarification, it is backed by some of the biggest companies in IT and has some potential benefits.  With the world of VXLAN-capable hardware expanding quickly, you can expect to see more support for NVGRE as well.  Layer 3 encapsulation techniques as a whole solve the scalability issues inherent in bridging.  Additionally, due to their routed nature they provide for loop-free, multi-pathed environments without the need for techniques such as TRILL and technologies based on it.  In order to reach the scale and performance required by tomorrow’s data centers our networks need to change, and overlays such as these are one tool toward that goal.


Stateless Transport Tunneling (STT)

STT is another tunneling protocol along the lines of the VXLAN and NVGRE proposals.  As with both of those, the intent of STT is to provide a network overlay, or virtual network, running on top of a physical network.  STT was proposed by Nicira and is, not surprisingly, written from a software-centric view, whereas the other proposals are written from a network-centric view.  The main advantage of the STT proposal is its ability to be implemented in a software switch while still benefiting from NIC hardware acceleration.  The other advantage of STT is its use of a 64-bit network ID rather than the 24-bit IDs used by NVGRE and VXLAN.

The hardware offload STT grants relieves the server CPU of a significant workload in high-bandwidth systems (10G+).  This separates it from its peers, which use IP encapsulation in the soft switch and thereby negate the NIC’s Large Send Offload (LSO) and Large Receive Offload (LRO) functions.  STT accomplishes this by having the software switch insert header information into the packet that makes it look like a TCP packet, along with the required network virtualization fields.  This allows the guest OS to send frames of up to 64K to the hypervisor, which are encapsulated and handed to the NIC for segmentation.  While this does allow the hardware offload to be utilized, its use of valid TCP headers without actual TCP behavior causes issues for many network appliances, or “middle boxes.”
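The rough arithmetic below (my own illustrative numbers, not from the draft) shows why that matters: the hypervisor encapsulates one large frame and the NIC’s segmentation offload cuts it into wire-size segments, rather than the CPU encapsulating every segment itself.

import math

def segments_for(payload_bytes: int, mss: int = 1460) -> int:
    return math.ceil(payload_bytes / mss)

# Guest hands the hypervisor a 64 KB frame; the soft switch adds the
# TCP-like STT header once, and the NIC's LSO splits it on the wire.
big_frame = 64 * 1024
print(segments_for(big_frame))    # ~45 wire segments from a single encapsulation
# Without offload (as with an IP-encapsulated soft switch), the CPU would
# touch and encapsulate each of those ~45 packets individually.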

STT is not expected to be ratified and is considered by some to have been proposed for informational purposes rather than with the end goal of a ratified standard.  With its misuse of a valid TCP header it would be hard pressed for ratification.  STT does, however, bring up the interesting issue of hardware offload.  The IP tunneling protocols mentioned above create extra overhead on host CPUs due to their inability to benefit from NIC acceleration techniques, which is why VXLAN and NVGRE are intended to be implemented in hardware.  Both use a 24-bit network ID because they are intended for hardware implementation; this space provides for 16 million tenants.  Hardware implementation is coming quickly in the case of VXLAN, with vendors announcing VXLAN-capable switches and NICs.


VXLAN Deep Dive – Part II

In part one of this post I covered the basic theory of operations and functionality of VXLAN (http://www.definethecloud.net/vxlan-deep-dive.)  This post will dive deeper into how VXLAN operates on the network.

Let’s start with the basic concept that VXLAN is an encapsulation technique: the Ethernet frame sent by a VXLAN-connected device is encapsulated in an IP/UDP packet.  The most important thing here is that the result can be carried by any IP-capable device.  The only place added intelligence is required is at the network bridges known as VXLAN Tunnel End-Points (VTEPs), which perform the encapsulation/de-encapsulation.  This is not to say that benefit can’t be gained by adding VXLAN functionality elsewhere, just that it’s not required.

[Diagram]

Providing Ethernet Functionality on IP Networks:

As discussed in Part 1, the source and destination IP addresses used for VXLAN are those of the source and destination VTEPs.  This means that the source VTEP must know the destination VTEP in order to encapsulate the frame.  One method for this would be a centralized controller/database.  That being said, VXLAN is implemented in a decentralized fashion and does not require a controller.  There are advantages and drawbacks to this.  While utilizing a centralized controller would provide methods for address learning and sharing, it would also potentially increase latency, require large software-driven mapping tables and add network management points.  We will dig deeper into the current decentralized VXLAN deployment model.

VXLAN maintains backward compatibility with traditional Ethernet and therefore must maintain some key Ethernet capabilities.  One of these is flooding (broadcast) and ‘flood and learn’ behavior.  I cover some of this behavior here (http://www.definethecloud.net/data-center-101-local-area-network-switching), but the summary is that when a switch receives a frame for an unknown destination (a MAC not in its table) it will flood the frame to all ports except the one on which it was received.  Eventually the frame will reach the intended device, and the reply that device sends allows the switch to learn the MAC’s location.  When switches see source MACs that are not in their table they will ‘learn,’ or add, them.
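A minimal sketch of that flood-and-learn logic (illustrative Python, not any particular switch implementation):

mac_table = {}   # MAC address -> port it was learned on

def handle_frame(src_mac, dst_mac, in_port, all_ports):
    # Learn: remember which port the source MAC arrived on.
    mac_table[src_mac] = in_port
    # Known destination: forward out exactly one port.
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]
    # Unknown destination: flood out every port except the ingress port.
    return [p for p in all_ports if p != in_port]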

VXLAN encapsulates over IP, and IP networks are typically designed for unicast traffic (one-to-one).  This means there is no inherent flood capability.  In order to mimic flood and learn on an IP network, VXLAN uses IP multicast, which provides a method for distributing a packet to a group.  This use of IP multicast can be a contentious point within VXLAN discussions because most networks aren’t designed for IP multicast, multicast support can be limited, and multicast itself can be complex depending on the implementation.

Within VXLAN, each VXLAN segment ID is subscribed to a multicast group.  Multiple VXLAN segments can subscribe to the same group; this minimizes configuration but increases unneeded network traffic.  When a device attaches to a VXLAN segment on a VTEP where that segment was not previously in use, the VTEP will join the IP multicast group assigned to that segment and start receiving its messages.
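The sketch below (illustrative Python; the segment IDs and group addresses are made up) shows the bookkeeping involved: the VTEP joins a segment’s multicast group when the first local device attaches, and two segments may map to the same group.

# Hypothetical segment-to-group assignments; note two segments sharing a group.
segment_to_group = {5000: "239.1.1.1", 5001: "239.1.1.1", 5002: "239.1.1.2"}
local_devices = {}        # VXLAN segment ID -> count of locally attached devices
joined_groups = set()     # multicast groups this VTEP has joined

def device_attached(segment_id):
    local_devices[segment_id] = local_devices.get(segment_id, 0) + 1
    group = segment_to_group[segment_id]
    if group not in joined_groups:
        joined_groups.add(group)   # in practice: send an IGMP join for this group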

[Diagram]

In the diagram above we see normal operation, in which the destination MAC is known and the frame is encapsulated in IP using the source and destination VTEP addresses.  The frame is encapsulated by the source VTEP, de-encapsulated at the destination VTEP and forwarded based on bridging rules from that point.  In this operation only the destination VTEP will receive the frame (with the exception of any devices in the physical path, such as the core IP switch in this example).

[Diagram]

In the example above we see an unknown MAC address (the MAC-to-VTEP mapping does not exist in the table).  In this case the source VTEP encapsulates the original frame in an IP multicast packet with the destination IP of the associated multicast group.  This packet will be delivered to all VTEPs participating in the group.  Ideally, the VTEPs participating in the group will only be those with connected devices attached to that VXLAN segment, but because multiple VXLAN segments can use the same IP multicast group this is not always the case.  The VTEP with the connected device will de-encapsulate and forward normally, adding the mapping for the source VTEP if required.  Any other VTEP that receives the packet can learn the source VTEP/MAC mapping if required and then discard the packet.  This process is the same for other traditionally flooded frames, such as ARP.  The diagram below shows the logical topologies for both traffic types discussed.

[Diagram]
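A simplified sketch of that receive-side behavior (illustrative Python): every VTEP that receives the multicast copy may learn where the inner source MAC lives, but only a VTEP with locally attached devices on that segment forwards the frame.

remote_mac_table = {}   # (VXLAN segment ID, inner source MAC) -> remote VTEP IP

def receive_flooded(segment_id, inner_src_mac, src_vtep_ip, local_segments):
    # Any VTEP receiving the multicast copy may cache where the source MAC lives.
    remote_mac_table[(segment_id, inner_src_mac)] = src_vtep_ip
    # Only a VTEP with devices attached to this segment forwards the frame;
    # everyone else simply discards it after (optionally) learning.
    return "forward" if segment_id in local_segments else "discard"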

As discussed in Part 1, VTEP functionality can be placed in a traditional Ethernet bridge by implementing a logical VTEP construct within the bridge hardware/software.  With this in place, VXLANs can bridge between virtual and physical devices.  This is necessary for physical server connectivity, as well as for adding network services provided by physical appliances.  Putting it all together, the diagram below shows physical servers communicating with virtual servers in a VXLAN environment.  The blue links are traditional IP links and the switch shown at the bottom is a standard L3 switch or router.  All traffic on these links is encapsulated as IP/UDP and broken out by the VTEPs.

[Diagram]

Summary:

VXLAN provides backward compatibility with traditional VLANs by mimicking broadcast and multicast behavior through IP multicast groups.  This functionality provides for decentralized learning by the VTEPs and negates the need for a VXLAN controller.


VXLAN Deep Dive

I’ve been spending my free time digging into network virtualization and network overlays.  This is part 1 of a 2-part series; part 2 can be found here: http://www.definethecloud.net/vxlan-deep-divepart-2.  By far the most popular virtualization technique in the data center is VXLAN, which has as much to do with Cisco and VMware backing the technology as with the tech itself.  That being said, VXLAN is targeted specifically at the data center and is one of several similar solutions, such as NVGRE and STT.  VXLAN’s goal is to allow dynamic, large-scale, isolated virtual L2 networks to be created for virtualized and multi-tenant environments.  It does this by encapsulating frames in VXLAN packets.  The standard for VXLAN is under the scope of the IETF NVO3 working group.

 

VXLAN Frame

The VXLAN encapsulation method is IP based and provides for a virtual L2 network.  With VXLAN, the full Ethernet frame (with the exception of the Frame Check Sequence, or FCS) is carried as the payload of a UDP packet.  VXLAN utilizes a 24-bit network identifier, carried in the VXLAN header shown in the diagram, to identify virtual networks.  This identifier provides for up to 16 million virtual L2 networks.
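As a sketch of the framing, the Python below builds the 8-byte VXLAN header with its 24-bit network identifier and prepends it to the original frame (minus FCS).  Treat the exact flag and reserved-bit layout as my reading of the draft rather than a definitive implementation.

import struct

def vxlan_header(vni: int) -> bytes:
    # 8-byte VXLAN header: flags word, then the 24-bit VNI shifted over a reserved byte.
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08 << 24              # 'I' flag set: a valid VNI is present
    return struct.pack("!II", flags, vni << 8)

def encapsulate(inner_frame_without_fcs: bytes, vni: int) -> bytes:
    # The result becomes the payload of a UDP packet between VTEP IPs;
    # the outer MAC/IP/UDP headers are added by the sending VTEP's stack.
    return vxlan_header(vni) + inner_frame_without_fcs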

Frame encapsulation is done by an entity known as a VXLAN Tunnel Endpoint (VTEP).  A VTEP has two logical interfaces: an uplink and a downlink.  The uplink is responsible for receiving VXLAN frames and acts as a tunnel endpoint with an IP address used for routing VXLAN-encapsulated frames.  These IP addresses are infrastructure addresses and are separate from the tenant IP addressing used by the nodes on the VXLAN fabric.  VTEP functionality can be implemented in software, such as a virtual switch, or in the form of a physical switch.

VXLAN frames are sent to the IP address assigned to the destination VTEP; this IP is placed in the Outer IP DA.  The IP of the VTEP sending the frame resides in the Outer IP SA.  Packets received on the uplink are mapped from the VXLAN ID to a VLAN, and the Ethernet frame payload is sent as an 802.1Q Ethernet frame on the downlink.  During this process the inner MAC SA and VXLAN ID are learned in a local table.  Packets received on the downlink are mapped to a VXLAN ID using the VLAN of the frame.  A lookup is then performed within the VTEP L2 table using the VXLAN ID and destination MAC; this lookup provides the IP address of the destination VTEP.  The frame is then encapsulated and sent out the uplink interface.

[Diagram]

Using the diagram above for reference, a frame entering the downlink on VLAN 100 with a destination MAC of 11:11:11:11:11:11 will be encapsulated in a VXLAN packet with an outer destination address of 10.1.1.1.  The outer source address will be the IP of this VTEP (not shown) and the VXLAN ID will be 1001.
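A toy lookup for that example (illustrative Python; the VLAN, MAC, VNI and VTEP address come from the diagram, everything else is assumed):

vlan_to_vni = {100: 1001}                                   # downlink VLAN -> VXLAN ID
l2_table = {(1001, "11:11:11:11:11:11"): "10.1.1.1"}        # (VXLAN ID, dst MAC) -> dest VTEP IP

def forward_from_downlink(vlan, dst_mac):
    vni = vlan_to_vni[vlan]
    dst_vtep = l2_table.get((vni, dst_mac))
    if dst_vtep is None:
        return None                 # miss: flood via the segment's multicast group instead
    return vni, dst_vtep            # encapsulate with this VNI; outer IP DA = dst_vtep

# Frame entering the downlink on VLAN 100 for 11:11:11:11:11:11:
print(forward_from_downlink(100, "11:11:11:11:11:11"))      # (1001, '10.1.1.1')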

In a traditional L2 switch a behavior known as flood and learn is used for unknown destinations (i.e., a MAC not stored in the MAC table).  This means that if there is a miss when looking up the MAC, the frame is flooded out all ports except the one on which it was received.  When a response is sent, the MAC is learned and written to the table.  The next frame for the same MAC will not incur a miss because the table will reflect the port it exists on.  VXLAN preserves this behavior over an IP network using IP multicast groups.

Each VXLAN ID has an assigned IP multicast group to use for traffic flooding (the same multicast group can be shared across VXLAN IDs).  When a frame is received on the downlink bound for an unknown destination, it is encapsulated using the IP of the assigned multicast group as the Outer DA and then sent out the uplink.  Any VTEP with nodes on that VXLAN ID will have joined the multicast group and will therefore receive the frame.  This maintains the traditional Ethernet flood-and-learn behavior.

VTEPs are designed to be implemented as a logical device on an L2 switch.  The L2 switch connects to the VTEP via a logical 802.1Q VLAN trunk.  This trunk contains a VXLAN infrastructure VLAN in addition to the production VLANs.  The infrastructure VLAN is used to carry VXLAN-encapsulated traffic to the VXLAN fabric.  The only member interfaces of this VLAN are the VTEP’s logical connection to the bridge itself and the uplink to the VXLAN fabric.  This interface is the ‘uplink’ described above, while the logical 802.1Q trunk is the downlink.

[Diagram]

Summary

VXLAN is a network overlay technology designed for data center networks.  It provides massively increased scalability over VLAN IDs alone while allowing for L2 adjacency over L3 networks.  The VXLAN VTEP can be implemented in both virtual and physical switches, allowing the virtual network to map to physical resources and network services.  VXLAN currently has both wide support and hardware adoption in switching ASICs and NICs, as well as in virtualization software.


Something up Brocade’s Sleeve, and it looks Good

Brocade’s got some new tricks up their sleeve, and they look good.  For far too long Brocade fought against convergence to protect its FC install base while it caught up.  This bled over into their Ethernet messaging and hindered market growth and comfort levels there.  Overall they appeared to be a company missing the next technology waves and clinging desperately to the remnants of a fading requirement: pure storage networks.  That has all changed: Brocade is embracing Ethernet and focusing on technology innovation that is relevant to today’s trends and business.

The Hardware:

Brocade’s VDX 8770 (http://www.brocade.com/downloads/documents/data_sheets/product_data_sheets/vdx-8770-ds.pdf) is their flagship modular switch for Brocade VCS fabrics.  While at first I scoffed at the idea of bigger chassis switches for fabrics, it turns out I was wrong (happens often); I forgot about scale.  These fabrics will typically be built in core/edge or spine/leaf designs, often using End of Row (EoR) rather than Top of Rack (ToR) switching to reduce infrastructure.  This leaves maximum scalability bound by a combination of port count and switch count, dependent on several factors such as interconnect ports.  Switch count will typically be limited by fabric software limits, either real or imposed by testing and certification processes.  Having high-density, modular, fabric-capable switches helps solve these scalability issues.

Some of the more interesting features:

  • Line-rate 40GE
  • “Auto-trunking” ISLs (multiple links between switches bond automatically)
  • Multi-pathing at Layers 1, 2 and 3
  • Dynamic port-profile configuration and migration for VM mobility
  • 100GE ready
  • 4µs latency with 4Tbps switching capacity
  • Support for 384,000 MAC addresses per fabric for massive L2 scalability
  • Support for up to 8000 ports in a VCS fabric
  • 4- and 8-slot chassis options
  • Multiple default gateways for load-balanced routing

The Software:

The real magic is Brocade’s fabric software.  Brocade looks at the fabric as the base on which to build an intelligent network, SDN or otherwise.  As such, the fabric should be resilient, scalable and easy to manage.  In several conversations with people at Brocade it was pointed out that SDN actually adds a management layer: no matter how you slice it, the SDN software overlays a physical network that must still be managed.  Minimizing configuration requirements at the fabric level simplifies the network overall.  Additionally, the fabric should provide multi-pathing without link blocking for maximum network throughput.

Brocade executes on this with VCS fabric.  VCS provides a fabric model that is easy to set up and manage.  Operations like adding a link for bandwidth are done with minimal configuration through tools like “auto-trunking”: ports identified as fabric ports are built into the network topology automatically.  VCS also offers impressive scalability numbers, with support for 384,000 MACs, 352,000 IPv4 routes, 88,000 IPv6 routes, and 8000 ports.

One surprise to me was that Brocade is doing this using custom silicon.  With companies like Arista and Nicira (now part of VMware) touting commodity hardware as the future, why is Brocade spending money on silicon?  The answer is latency: if you want to do something at line rate, it must be implemented in hardware.  Merchant silicon is adept at staying on the cutting edge of things like switching latency and buffering, but it is slow to implement new features.  This is due to the addressable market: merchant silicon manufacturers want to ensure that the cost of hardware design and manufacturing will be recouped through bulk sales to multiple manufacturers, which means features must have wide applicability and typically be standards-driven before being implemented.

Brocade saw the ability to innovate with features while maintaining line rate as an advantage worth the additional cost.  This allows Brocade to differentiate themselves, and their fabric, from vendors relying solely on merchant silicon.  Additionally, they position their fabric as enough of an advantage to be worth the additional cost when implementing SDN, for the reasons listed above.

Summary:

Brocade is making some very smart moves and coming out from under the FC rock.  The technology is relevant and timely, but they will still have an uphill battle gaining the confidence of network teams.  They will have to rely on their FC data center heritage to build that confidence and expand their customer base.  The key now will be execution; it will be an exciting ride.
