It’s Our Time Down Here– “Underlays”

Recently while winding down from a long day I flipped the channel and “The Goonies” was on.  I left it there thinking an old movie I’d seen a dozen times would put me to sleep quickly.  As it turns out I got pulled right back into it.  By the time the gang hit the wishing well and Mikey gave his speech I was inspired to write a blog post, this one in particular.  “Cause it’s their time – their time up there.  Down here it’s our time, it’s our time down here.”

This got me thinking about data center network overlays, and the physical networks that actually move the packets some Network Virtualization proponents have dubbed “underlays.”  The more I think about it, the more I realize that it truly is our time down here in the “lowly underlay.”  I don’t think there’s much argument around the need for change in data center networking, but there is a lot of debate on how.  Let’s start with their time up there “Network Virtualization.”

Network Virtualization

Unlike server virtualization, network virtualization doesn’t partition the hardware and separate out resources.  Instead, it uses server virtualization to virtualize network devices such as switches, routers, firewalls and load-balancers.  From there it creates virtual tunnels across the physical infrastructure using encapsulation techniques such as VXLAN, NVGRE and STT.  The end result is a virtualized instantiation of the current data center network running in x86 servers, with packets moving in tunnels on physical networking gear which segregate them from other traffic on that gear.  The graphic below shows this relationship.

image

Network virtualization in this fashion can provide some benefits in the form of faster provisioning and automation.  It also introduces some new challenges, discussed in more detail here: What Network Virtualization Isn’t (be sure to read the comments for alternate viewpoints.)  What network virtualization doesn’t provide, in any form, is a change to the model we use to deploy networks and support applications.  The constructs and deployment methods for designing applications and applying policy are not changed or enhanced.  All of the same broken or misused methodologies are carried forward.  When working with customers beginning to virtualize servers I would always recommend against automated physical-to-virtual server migration, suggesting a rebuild in a virtual machine instead.

The reason for that is twofold.  First, server virtualization was a chance to re-architect based on lessons learned.  Second, simply virtualizing existing constructs is like hiring movers to pack your house along with the dirt/cobwebs/etc., then move it all to the new place and unpack.  The smart way to move a house is to donate/yard-sale what you won’t need, pack the things you do, move into a clean place and arrange optimally for the space.  The same applies to server and network virtualization.

Faithful replication of today’s networking challenges as virtual machines with encapsulation tunnels doesn’t move the bar for deploying applications.  At best it speeds up, and automates, bad practices.  Server virtualization hit the same challenges.  I discuss what’s needed from the ground up here: Network Abstraction and Virtualization: Where to Start?.  Software-only network virtualization approaches are challenged both by the restrictions of the hardware that moves their packets and by a methodology that misses where the pain points really are.  It’s their time up there.

Underlays

The physical transport network, which is minimized by some as the “underlay,” is actually the more important piece in making a shift to network programmability, automation and flexibility.  Even network virtualization vendors will agree on this, to some extent, if you dig deep enough.  Once you cut through the marketecture of “the underlay doesn’t matter” you’ll find recommendations for a non-blocking fabric of 10G access ports and 40G aggregation in one design or another.  This is because they have no visibility into congestion and no control of delivery prioritization such as QoS.

Additionally, network virtualization has no ability to abstract the constructs of VLAN, subnet, security, logging and QoS from one another, as described in the link above.  To truly move the network forward in a way that provides automation and programmability in a model that’s cohesive with application deployment, you need to include the physical network with the software that will drive it.  It’s our time down here.

By marrying physical innovations that provide a means for abstraction of constructs at the ground floor with software that can drive those capabilities, you end up with a platform that can be defined by the architecture of the applications that will utilize it.  This puts the infrastructure as a whole in a position to be  deployed in lock-step with the applications that create differentiation and drive revenue.  This focus on the application is discussed here: Focus on the Ball: The Application.  The figure below, from that post, depicts this.

image

 

The advantage of this ground-up approach is the ability to look at applications as they exist, groups of interconnected services, rather than taking the application-as-a-VM approach.  This holistic view can then be applied down to an infrastructure designed for automation and programmability.  Like constructing a building, your structure will only be as sound as the foundation it sits on.

For a little humor (nothing more) here’s my comic depiction of Network Virtualization.

image


Network Abstraction and Virtualization: Where to Start?

With the growth of server virtualization, network designs and the associated network management constructs have been stretched beyond their intended uses. This has brought about data center networks that are unmanageable and slow to adapt to change. While servers and storage can be rapidly provisioned to bring on new services, the network itself has become a bottleneck of required administrative changes and inflexible constructs that limit scalability and speed of adoption.

These constraints of modern data center networks have motivated network architects to look for workarounds; one current proposal is ‘network virtualization’, which looks to apply the benefits of server virtualization to the network. Conceptually, network virtualization is the use of encapsulation techniques to create virtual overlays on existing network infrastructure. These methods use technologies such as VXLAN, STT, NVGRE and others to wrap machine traffic in virtual IP overlays that can be transported across any Layer 3 infrastructure.

1. A primary benefit of these overlay techniques is the ability to scale beyond the limits of VLANs for network segmentation. Virtualization and multi-tenancy have caused an explosion of network segments that strains traditional isolation techniques. With VLANs we are limited to 4096 segments or fewer, depending on implementation. Other methods exist, such as placing ACLs within the hypervisor, but these also suffer limits in configuration and CPU overhead. The purpose of these techniques is to create application/tenant segmentation without security implications between segments. As the number of services and tenants grows these limits quickly become restrictive (the short sketch after the next point puts the ID-space numbers side by side).

2. Another advantage of the network virtualization overlay is the ability to place workloads independent of physical locality and underlying topology. As long as IP connectivity is available the encapsulation handles delivery to end-point workloads. This provides greater flexibility in deployment, especially for virtualized workloads which receive encapsulation within the hypervisor switch. The operational benefit of this effect is the ability to place workloads where there is available capacity without restrictions from underlying network constructs.
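To make the scale difference behind point 1 concrete, here is a minimal sketch in plain Python (nothing vendor-specific) comparing the 12-bit VLAN ID space with the 24-bit segment ID space used by encapsulations like VXLAN:

```python
# 802.1Q VLAN IDs are 12 bits; VXLAN-style segment IDs (VNIs) are 24 bits.
VLAN_ID_BITS = 12
VNI_BITS = 24

vlan_segments = 2 ** VLAN_ID_BITS      # 4096 (fewer usable in practice)
overlay_segments = 2 ** VNI_BITS       # 16,777,216

print(f"VLAN segments:    {vlan_segments}")
print(f"Overlay segments: {overlay_segments}")
print(f"Ratio:            {overlay_segments // vlan_segments}x")   # 4096x
```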

Network virtualization does not come without drawbacks. The act of layering virtual networks over existing infrastructure puts an opaque barrier between the virtual workloads and the operation of the underlying infrastructure. This brings on issues with performance, quality of service (QoS) and network troubleshooting. This limitation is not seen with server virtualization, where compute hypervisors are tightly coupled with the hardware and maintain visibility at both levels. The diagram below shows the relationship between network and server virtualization.

image

1. This lack of cross-visibility between the logical networks carrying production application traffic, and the physical network providing the packet delivery, leads to issues with application performance and system troubleshooting. With SDN techniques based on network virtualization through encapsulation, the packet delivery infrastructure is completely obfuscated by the encapsulation. This can lead to performance issues arising from lack of quality of service, altered multi-pathing ability, and others within the underlying network. This separation is shown in the diagram below.

image

2. Additionally these logical networks add a point of management to the network architecture. While they can hide the complexity of the underlying network for the purposes of application deployment, the network underneath still exists. The switching infrastructure must still be configured, managed and deployed as usual. All of the constructs shown above must still be architected and pushed into device configuration. Network virtualization provides perceived independence from the infrastructure but does not provide a means to manage the network as a whole.

3. The last challenge for network virtualization techniques is the ability to tie overlays back to traditional networking constructs understood by the network switches below. Switch hardware and software are designed to use VLANs, which are tied to IP subnets, and to stitch security and services to these constructs. The overlay created by encapsulation does not alleviate these issues.

For example, encapsulation techniques such as VXLAN provide far greater logical network scalability, upwards of 16 million virtual networks. This logical scalability does not currently stitch into traditional switching equipment, which assumes VLANs are global. Tighter cohesion will be required between the physical switching infrastructure and hypervisor-based access layers to provide robust services to real-world heterogeneous environments.

While overlay techniques provide separate namespaces, and therefore a means for overlapping IP addressing, there will still be a need to architect the routing that handles this. To accomplish this, network functionality such as Virtual Routing and Forwarding (VRF) must be configured on the switching infrastructure, or virtual routers deployed in the hypervisor. VRF scalability is greatly limited by hardware implementation and will be far less than VXLAN scalability, while virtualized routers consume CPU overhead and require additional architectural considerations. Without such techniques in place, network tenants will require non-overlapping IP space.

Making a case for true abstraction

Overlaying network virtualization alone onto existing infrastructure just adds layers of complexity. It does nothing to correct the issues that have arisen in traditional networking constructs; adding network virtualization by itself will do no more than amplify existing problems. A parallel can be drawn to server virtualization, where the more rapid pace of server provisioning quickly brought out problems in the underlying architecture and processes.

The underlying network consists of hardware, cabling, and Layer 2 / Layer 3 topologies that dictate traffic flow and potential application throughput. These layers have their own limitations and stability issues which are not addressed by network virtualization. Think of the OSI model in terms of building a house: the bottom layers (1-3) create a foundation, a frame, and a structure. Issues in those foundational layers will be exacerbated by each additional layer added on top.

Rather than applying an overlay technique such as a virtualization layer on top of the existing architecture, IT architects will benefit greatly from abstracting the network constructs from the ground up first. Separating out logical and physical constructs, security, services, etc. prior to layering on overlays will provide a clean canvas on which to paint the future’s scalable, feature-rich networks. Virtualization must be built into the network from the ground up rather than layered on top. Again this parallels server virtualization, where the greatest success has been seen in full virtualization of the hardware platform and tight integration down to bare metal. The end goal is addressing the underlying network issues rather than masking them with a virtualization layer.

The ties between network constructs such as VLAN, IP subnet, security, load-balancing, etc. have placed constraints on the scalability and agility of the network. Each VLAN is provided an IP subnet; security and network services are then tied to these constructs. Addressing and location become the identifying characteristics of the network rather than the application requirements. This is not optimal behavior for a network responsible for elastic business services, workload-anywhere designs, and ever increasing connectivity needs. These attributes and capabilities of connectivity must be abstracted in a new way to allow us to move beyond the constraints we have imposed by overloading or misusing these basic network constructs.

image

Rather than starting with a new coat of paint on a peeling building, abstraction takes a ground-up approach. By looking at the purpose of each construct (VLAN = broadcast domain, IP = addressing mechanism, etc.) we can redesign with the goal of alleviating the unnecessary constraints that have been placed on today’s networks. With these constructs separated we can provide a transport capable of maximizing the performance, security and scalability of the applications using it.

Take a step back from traditional network thinking and think in terms of application needs without consideration of current deployment methodology. Think through the following questions, leaving out concepts like VLAN, subnet and IP addressing (a small sketch after the list shows one way such requirements might be expressed):

  • How would you tie application tiers together?
  • How would you group like services?
  • What policies would be required between application tiers?
  • What services are required for a given application?
  • How does that application connect to the intranet and internet?
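As one purely illustrative way to capture answers to those questions, here is a hypothetical sketch of an application-centric policy description that names tiers and the relationships between them with no reference to VLANs, subnets or IP addresses. The structure and field names are invented for this example, not drawn from any product:

```python
# Hypothetical, vendor-neutral description of an application's connectivity
# needs expressed as tiers and policies rather than VLANs and subnets.
app_policy = {
    "application": "web-store",
    "tiers": {
        "web": {"members": "web servers", "services": ["load-balancer"]},
        "app": {"members": "app servers", "services": []},
        "db":  {"members": "db servers",  "services": ["logging"]},
    },
    "contracts": [
        # who may talk to whom, and on what terms
        {"from": "internet", "to": "web", "ports": [443],  "qos": "standard"},
        {"from": "web",      "to": "app", "ports": [8080], "qos": "standard"},
        {"from": "app",      "to": "db",  "ports": [5432], "qos": "low-latency"},
    ],
}

# Addressing, VLAN assignment and service insertion would be derived from this
# intent by the infrastructure, rather than defined up front by hand.
for c in app_policy["contracts"]:
    print(f'{c["from"]:>8} -> {c["to"]:<4} ports={c["ports"]} qos={c["qos"]}')
```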

Separating out the applications and services required from the underlying architecture is not possible with today’s networks, virtualized or not. Overlay network virtualization alone may hide some of the complexities but does not provide tools for optimizing the delivery and holistic design. The conversation must include addressing, VLAN construct, location and service insertion. If these constructs are instead abstracted from one another, and the architecture, the conversation can revolve around application requirements rather than network restrictions.

Summary:

While network virtualization provides a set of tools for gaining greater network scale and application deployment flexibility, it is not a complete solution. Without true network abstraction, and tools for visibility between the logical and physical layers, network virtualization does no more than add complexity to existing problems. As was seen with server virtualization, layering virtualization on top of infrastructure issues and bad processes exponentially increases the complexity and room for error.

In order to truly scale networks in a sustainably manageable fashion we need to remove the ties between disparate network constructs by abstracting them out. Once these constructs operate independently of one another we’re left with a flexible architecture that removes the inherent complexity rather than leaving the problems in place and compounding them through layers of virtualization.

To build networks that meet current demands while being able to support the rapid scale and emerging requirements we need to rethink network design as a whole. Taking a top down look at what we need from the network without tying ourselves to the way in which we use the constructs today allows us to design towards the future and apply layers of abstraction down the stack to meet those goals.

Thinking about your network today, is virtualization alone solving the problems or adding a layer?

Network virtualization without network abstraction – results in short term patching with limited control of longer term operational complexity.

Network virtualization based on an abstracted network – results in effective control of both capital and operational expenses.


Taking a Good Hard Look at SDN

SDN is sitting at the peak of its hype cycle (at least I hope it’s the peak.)  Every vendor has a definition and a plan.  Most of those definitions and plans focus on protecting their existing offerings and morphing those into some type of SDN vision.  Products and entire companies have changed their branding from whatever they were to SDN, and the market is flooded with SDN solutions that solve very different problems.  This post will take a deep dive into the concepts around SDN and the considerations of a complete solution.  As always with my posts this is focused on the data center network, because I can barely spell WAN, have never spent time on a campus and have no idea what magic it is that service providers do.

The first question anyone considering SDN solutions needs to ask is: what problem(s) am I trying to solve?  Start with the business drivers for the decision.  There are many problems that SDN solutions look to solve; a few examples are:

  • Faster response to business demands for new tenants, services and applications.
  • More intelligent configuration of network services such as load balancers, firewalls etc.  The ability to dynamically map application tiers to required services.
  • Reductions in cost i.e. CapEx via enabling purchase of lower cost infrastructure and OpEx via reducing administrative overhead of device centric configuration.
  • Ability to create new revenue streams via more intelligent network service offerings.
  • Reduction in lock-in from proprietary systems.
  • Better network integration with cloud management systems and orchestration tools.
  • Better network efficiency through closer match of network resources to application demands.

That leaves a lot of areas with room for improvement in order to accomplish those tasks.  That’s one of the reasons the definition is so loose and applied to such disparate technologies.  In order to keep the definition generic enough to encompass a complete solution there are three major characteristics I prefer for defining an SDN architecture:

  • Flow Management – The ability to define flows across the network based on characteristics of the flow in a centralized fashion.
  • Dynamic Scalability – Providing a network that can scale beyond the capabilities of traditional tools and do so in a fluid fashion.
  • Programmability – The ability for the functionality provided by the network to be configured programmatically typically via APIs.

The Complete Picture:

In looking for a complete solution for a software defined data center network it’s important to assess all aspects required to deliver cohesive network services and packet delivery:

  • Packet delivery – routing/switching as required.  Considerations such as requirements for bridging semantics (flooding, broadcast), bandwidth, multi-pathing etc.
  • L4-L7 service integration – The ability to map application tiers to required network services such as load-balancers and firewalls.
  • Virtual network integration – Virtual switching support for your chosen hypervisor(s).  This will be more complex in multi-hypervisor environments.
  • Physical network integration – Integration with bare-metal servers, standalone appliances, network storage and existing infrastructure.
  • Physical management – The management of the physical network nodes, required configuration of ports, VLANs, routes, etc.
  • Scalability – Ability to scale application or customer tenancy beyond the 4000 VLAN limit.
  • Flow management – The ability to program network policy from a global perspective.

Depending on your overall goals you may not have requirements in each of these areas but you’ll want to analyze that carefully based on growth expectations.  Don’t run your data center like congress kicking the can (problem) down the road.  The graphic below shows the various layers to be considered when looking at SDN solutions.

image

Current Options:

The current options for SDN typically provide solutions for one or more of these issues but not all.  The chart below takes a look at some popular options.

 

 

| Solution | VLAN Scale | L4-7 | Bare Metal Support | Physical Network Node MGMT | KVM | VMware | Xen | HyperV | L3 | Flow MGMT |
| Nicira/VMware | X | 3rd Party | * |  | X | * | X |  | 3rd Party | X |
| Overlays | X |  |  |  | X | X | X | X |  |  |
| OpenFlow |  |  | X |  | X | X | X | X | X | X |
| Midokura | X | X |  |  | X |  | X |  | X |  |

X = Support

* = Future Support

This chart is not intended to be all encompassing or to compare all features of equal products (obviously an overlay doesn’t compete with a Nicira or Midokura solution, and each of those relies on overlays of some type.)  Instead it’s intended to show that the various solutions lumped into SDN address different areas of the data center network.  One or more tools may be necessary to deploy a full SDN architecture, and even then there may be gaps in areas like bare metal support, integration of standalone network appliances and provisioning/monitoring/troubleshooting of physical switch nodes (yes, that all still matters.)

API Model:

Another model lumped into SDN is northbound APIs for network devices.  Several networking vendors are in various stages of support for this model.  This model does provide programmability, but I would argue against its scalability.  Using this model requires top-down management systems that understand each device, its capabilities and its API.  Scaling this type of management system and programming network flows this way is not easy and will be error prone.  This model also does not provide any additional functionality, visibility or holistic programmability; it is simply a better way to configure individual devices.  That being said, managing via APIs is light years ahead of screen scrapes and CLI scripting.

Hardware Matters:

Let me preface with what I’m not saying: I’m not saying that hardware will/won’t be commoditized, and I’m not saying that custom silicon or merchant silicon is better or worse.

I am saying that the network hardware you choose will matter.  Table sizes, buffer space and TCAM size will all factor in, and depending on your deployment model will be a major factor.  The hardware will also need to provide maximum available bandwidth and efficient ECMP load-balancing for network throughput.  This load-balancing can be greatly affected by the overlay method chosen, based on the header information available for hashing algorithms.  Additionally your hardware must support the options of the SDN model you choose.  For example, in a Nicira/VMware deployment you’ll have future support for management of switches running OVS, which you may want in order to tie in physical servers, etc.  The same applies if you choose OpenFlow: you’ll need switch hardware that provides OpenFlow support, and it will need to support your deployment model, hybrid or pure OpenFlow.

The hardware also matters in configuration, management, and troubleshooting.  While there is a lot of talk of “we just need any IP connectivity,” that IP network still has to be configured and managed.  Layer 2/3 constructs must be put in place and ports must be configured.  This hardware will also have to be monitored, and troubleshot when things fail.  This will be more difficult in cases where the overlay is unknown to the L3 infrastructure, at which point two separate independent networks will be involved: physical and logical.
Management Model:

There are several management models to choose from, and two examples appear in the choices compared above.  OpenFlow uses a centralized top-down approach, with the controller pushing flows to all network elements and handling policy for new flows forwarded from those devices.  The Nicira/VMware solution uses the same model as OpenFlow.  Midokura, on the other hand, takes a play from distributed systems and pushes intelligence to the edges.  Each model offers various pros/cons and will play a major role in the scale and resiliency of your SDN deployment.

Northbound API:

The northbound API is different from the device APIs mentioned above.  This API opens the management of your SDN solution as a whole up to higher-level systems.  Chances are you’re planning to plug your infrastructure into an automation/orchestration solution or cloud platform.  In order to do this you’ll want a robust northbound API for your infrastructure components, in this case your SDN architecture.  If you have these systems in place, or have already picked your horse, you’ll want to ensure compatibility with the SDN architectures you consider.  Not all APIs are created equal, and they are far from standardized, so you’ll want to know exactly what you’re getting from a functionality perspective and ensure the claims match your upper-layer systems’ needs.
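As a purely hypothetical illustration of what consuming a northbound API might look like from an orchestration layer, here is a sketch using an invented REST endpoint, payload and auth token; real controllers each define their own (and, as noted, non-standardized) APIs:

```python
# Hypothetical example of consuming a northbound REST API from an
# orchestration layer. The endpoint, payload and token are invented for
# illustration only.
import json
import urllib.request

CONTROLLER = "https://sdn-controller.example.com/api/v1"   # assumed URL
TOKEN = "changeme"                                          # assumed auth model

def create_logical_network(name: str, segment_id: int) -> dict:
    """Ask the controller to create a logical network for a new tenant."""
    body = json.dumps({"name": name, "segment_id": segment_id}).encode()
    req = urllib.request.Request(
        f"{CONTROLLER}/logical-networks",
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# An orchestration workflow would call this as one step of tenant onboarding:
# net = create_logical_network("tenant-42-web", 5001)
```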

Additional Considerations:

There are several other considerations which will affect both the options chosen and the architecture used; some of those include:

  • How are flows distributed?
  • How are unknown flows handled?
  • How are new end points discovered?
  • How are required behaviors of bridging handled?
  • How are bad behaviors of bridging minimized (BUM traffic)?
  • What happens during controller failure scenarios?
  • What is the max theoretical/practical scalability?
    • Does that scale apply globally, i.e. physical and virtual switches etc.?
  • What new security concerns (if any) may be introduced?
  • What are the requirements of the IP network (multicast, etc.)?
  • How is multi-tenancy handled?
  • What is the feature disparity between virtualized and physical implementation?
  • How does it integrate with existing systems/services?
  • How is traffic load balanced?
  • How is QoS provided?
  • How are software/firmware upgrades handled?
  • What is the disparity between the software implementation and the hardware capabilities, for example OpenFlow on physical switches?
  • Etc.

Summary:

SDN should be putting the application back in focus and providing tools for more robust and rapid application deployment/change.  In order to effectively do this an SDN architecture should provide functionality for the full life of the packet on the data center network.  The architecture should also provide tools for the scale you forecast as you grow.  Because of the nature of the ecosystem you may find more robust deployment options the more standardized your environment is (I’ve written about standardization several times in the past, for example: http://www.networkcomputing.com/private-cloud-tech-center/private-cloud-success-factor-standardiza/231500532 .)  You can see examples of this in the hypervisor support shown in the chart above.

While solutions exist for specific business use cases the market is far from mature.  Products will evolve and as lessons are learned and roadmaps executed we’ll see more robust solutions emerge.  In the interim choose technologies that meet your specific business drivers and deploy them in environments with the largest chance of success, low hanging fruit.  It’s prudent to move into network virtualization in the same fashion you moved into server virtualization, with a staged approach.


VXLAN Deep Dive – Part II

In part one of this post I covered the basic theory of operations and functionality of VXLAN (http://www.definethecloud.net/vxlan-deep-dive.)  This post will dive deeper into how VXLAN operates on the network.

Let’s start with the basic concept that VXLAN is an encapsulation technique: the Ethernet frame sent by a VXLAN-connected device is encapsulated in an IP/UDP packet.  The most important thing here is that it can be carried by any IP-capable device.  The only place added intelligence is required is at the network bridges known as VXLAN Tunnel End-Points (VTEPs), which perform the encapsulation/de-encapsulation.  This is not to say that benefit can’t be gained by adding VXLAN functionality elsewhere, just that it’s not required.

image

Providing Ethernet Functionality on IP Networks:

As discussed in Part 1, the source and destination IP addresses used for VXLAN are the Source VTEP and destination VTEP.  This means that the VTEP must know the destination VTEP in order to encapsulate the frame.  One method for this would be a centralized controller/database.  That being said VXLAN is implemented in a decentralized fashion, not requiring a controller.  There are advantages and drawbacks to this.  While utilizing a centralized controller would provide methods for address learning and sharing, it would also potentially increase latency, require large software driven mapping tables and add network management points.  We will dig deeper into the current decentralized VXLAN deployment model.

VXLAN maintains backward compatibility with traditional Ethernet and therefore must maintain some key Ethernet capabilities.  One of these is flooding (broadcast) and ‘flood and learn’ behavior.  I cover some of this behavior here (http://www.definethecloud.net/data-center-101-local-area-network-switching), but the summary is that when a switch receives a frame for an unknown destination (a MAC not in its table) it will flood the frame to all ports except the one on which it was received.  Eventually the frame will get to the intended device and a reply will be sent, which allows the switch to learn the MAC’s location.  When switches see source MACs that are not in their table they will ‘learn’, or add, them.
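For reference, here is a minimal toy sketch of that flood-and-learn behavior (an illustration only, not any product’s implementation): learn the source MAC on ingress, forward directly when the destination is known, flood otherwise:

```python
# Toy model of Ethernet flood-and-learn on a single switch.
class ToySwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}           # MAC address -> port it was learned on

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port         # learn (or refresh) the source
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]      # known: forward out one port
        return sorted(self.ports - {in_port})     # unknown: flood everywhere else

sw = ToySwitch(ports=[1, 2, 3, 4])
print(sw.receive(1, "aa:aa", "bb:bb"))   # bb:bb unknown -> flood to ports 2, 3, 4
print(sw.receive(2, "bb:bb", "aa:aa"))   # reply: aa:aa already learned -> [1]
print(sw.receive(1, "aa:aa", "bb:bb"))   # bb:bb now learned -> [2]
```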

VXLAN encapsulates over IP, and IP networks are typically designed for unicast traffic (one-to-one).  This means there is no inherent flood capability.  In order to mimic flood and learn on an IP network VXLAN uses IP multicast.  IP multicast provides a method for distributing a packet to a group.  This use of IP multicast can be a contentious point within VXLAN discussions because most networks aren’t designed for IP multicast, IP multicast support can be limited, and multicast itself can be complex depending on implementation.

Within VXLAN, each VXLAN segment ID is subscribed to a multicast group.  Multiple VXLAN segments can subscribe to the same group; this minimizes configuration but increases unneeded network traffic.  When a device attaches to a VXLAN segment on a VTEP where that segment was not previously in use, the VTEP will join the IP multicast group assigned to that segment and start receiving messages.

image

In the diagram above we see the normal operation in which the destination MAC is known and the frame is encapsulated in IP using the source and destination VTEP address.  The frame is encapsulated by the source VTEP, de-encapsulated at the destination VTEP and forwarded based on bridging rules from that point.  In this operation only the destination VTEP will receive the frame (with the exception of any devices in the physical path, such as the core IP switch in this example.)

image

In the example above we see an unknown MAC address (the MAC to VTEP mapping does not exist in the table.)  In this case the source VTEP encapsulates the original frame in an IP multi-cast packet with the destination IP of the associated multicast group.  This frame will be delivered to all VTEPs participating in the group.  VTEPs participating in the group will ideally only be VTEPs with connected devices attached to that VXLAN segment.  Because multiple VXLAN segments can use the same IP multicast group this is not always the case.  The VTEP with the connected device will de-encapsulate and forward normally, adding the mapping from the source VTEP if required.  Any other VTEP that receives the packet can then learn the source VTEP/MAC mapping if required and discard it. This process will be the same for other traditionally flooded frames such as ARP, etc.  The diagram below shows the logical topologies for both traffic types discussed.

image
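A minimal sketch of the decision logic just described, with invented table contents used purely for illustration: known inner MACs are sent unicast to their VTEP, unknown ones are flooded to the segment’s multicast group, and receiving VTEPs learn the source mapping:

```python
# Toy model of a VTEP choosing the outer destination IP for an inner frame.
vni_to_mcast = {5001: "239.1.1.1"}                          # segment ID -> multicast group
mac_to_vtep = {(5001, "aa:aa:aa:aa:aa:aa"): "10.1.1.1"}     # learned (VNI, MAC) -> VTEP IP

def outer_destination(vni, dst_mac):
    """Known MAC: unicast to its VTEP. Unknown MAC: flood via the multicast group."""
    return mac_to_vtep.get((vni, dst_mac), vni_to_mcast[vni])

def learn(vni, src_mac, src_vtep_ip):
    """Receiving VTEPs learn the inner source MAC -> source VTEP mapping."""
    mac_to_vtep.setdefault((vni, src_mac), src_vtep_ip)

print(outer_destination(5001, "aa:aa:aa:aa:aa:aa"))   # 10.1.1.1 (unicast)
print(outer_destination(5001, "bb:bb:bb:bb:bb:bb"))   # 239.1.1.1 (flood)
learn(5001, "bb:bb:bb:bb:bb:bb", "10.1.1.2")          # learned from the reply
print(outer_destination(5001, "bb:bb:bb:bb:bb:bb"))   # 10.1.1.2 (unicast from now on)
```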

As discussed in Part 1 VTEP functionality can be placed in a traditional Ethernet bridge.  This is done by placing a logical VTEP construct within the bridge hardware/software.  With this in place VXLANs can bridge between virtual and physical devices.  This is necessary for physical server connectivity, as well as to add network services provided by physical appliances.  Putting it all together the diagram below shows physical servers communicating with virtual servers in a VXLAN environment.  The blue links are traditional IP links and the switch shown at the bottom is a standard L3 switch or router.  All traffic on these links is encapsulated as IP/UDP and broken out by the VTEPs.

image

Summary:

VXLAN provides backward compatibility with traditional VLANs by mimicking broadcast and multicast behavior through IP multicast groups.  This functionality provides for decentralized learning by the VTEPs and negates the need for a VXLAN controller.


VXLAN Deep Dive

I’ve been spending my free time digging into network virtualization and network overlays.  This is part 1 of a 2-part series; part 2 can be found here: http://www.definethecloud.net/vxlan-deep-divepart-2.  By far the most popular virtualization technique in the data center is VXLAN.  This has as much to do with Cisco and VMware backing the technology as the tech itself.  That being said, VXLAN is targeted specifically at the data center and is one of many similar solutions, such as NVGRE and STT.  VXLAN’s goal is to allow dynamic, large-scale, isolated virtual L2 networks to be created for virtualized and multi-tenant environments.  It does this by encapsulating frames in VXLAN packets.  The standard for VXLAN is under the scope of the IETF NVO3 working group.

 

VXLAN Frame

The VXLAN encapsulation method is IP based and provides for a virtual L2 network.  With VXLAN the full Ethernet Frame (with the exception of the Frame Check Sequence: FCS) is carried as the payload of a UDP packet.  VXLAN utilizes a 24-bit VXLAN header, shown in the diagram, to identify virtual networks.  This header provides for up to 16 million virtual L2 networks.
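As a rough illustration of that layout, the sketch below packs the 8-byte VXLAN header (8 bits of flags, reserved bits, and the 24-bit VNI) as later standardized in RFC 7348; the UDP destination port shown is the IANA-assigned 4789, which some early implementations predate, so treat the specifics as indicative rather than authoritative:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags(8) | reserved(24) | VNI(24) | reserved(8)."""
    flags = 0x08                     # 'I' bit set: the VNI field is valid
    word0 = flags << 24              # flags in the top byte, 24 reserved bits below
    word1 = (vni & 0xFFFFFF) << 8    # 24-bit VNI, low 8 bits reserved
    return struct.pack("!II", word0, word1)

hdr = vxlan_header(1001)
print(hdr.hex())          # 080000000003e900
VXLAN_UDP_PORT = 4789     # IANA-assigned; the full packet is
                          # outer Ethernet/IP/UDP + this header + original frame (no FCS)
```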

Frame encapsulation is done by an entity known as a VXLAN Tunnel Endpoint (VTEP.)  A VTEP has two logical interfaces: an uplink and a downlink.  The uplink is responsible for receiving VXLAN frames and acts as a tunnel endpoint with an IP address used for routing VXLAN encapsulated frames.  These IP addresses are infrastructure addresses and are separate from the tenant IP addressing for the nodes using the VXLAN fabric.  VTEP functionality can be implemented in software, such as a virtual switch, or in the form of a physical switch.

VXLAN frames are sent to the IP address assigned to the destination VTEP; this IP is placed in the Outer IP DA.  The IP of the VTEP sending the frame resides in the Outer IP SA.  Packets received on the uplink are mapped from the VXLAN ID to a VLAN and the Ethernet frame payload is sent as an 802.1Q Ethernet frame on the downlink.  During this process the inner MAC SA and VXLAN ID are learned in a local table.  Packets received on the downlink are mapped to a VXLAN ID using the VLAN of the frame.  A lookup is then performed within the VTEP L2 table using the VXLAN ID and destination MAC; this lookup provides the IP address of the destination VTEP.  The frame is then encapsulated and sent out the uplink interface.

image

Using the diagram above for reference a frame entering the downlink on VLAN 100 with a destination MAC of 11:11:11:11:11:11 will be encapsulated in a VXLAN packet with an outer destination address of 10.1.1.1.  The outer source address will be the IP of this VTEP (not shown) and the VXLAN ID will be 1001.
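The same walk-through expressed as a tiny sketch, using the values from the example above (the table contents are illustrative placeholders):

```python
# Downlink-to-uplink lookup for the example above.
vlan_to_vni = {100: 1001}                                   # downlink VLAN -> VXLAN ID
l2_table = {(1001, "11:11:11:11:11:11"): "10.1.1.1"}        # (VNI, dest MAC) -> dest VTEP IP

def encapsulate(vlan: int, dst_mac: str):
    vni = vlan_to_vni[vlan]                   # map the 802.1Q VLAN to a VXLAN ID
    outer_dst_ip = l2_table[(vni, dst_mac)]   # look up the destination VTEP
    return vni, outer_dst_ip                  # outer source IP is this VTEP's own address

print(encapsulate(100, "11:11:11:11:11:11"))   # (1001, '10.1.1.1')
```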

In a traditional L2 switch a behavior known as flood and learn is used for unknown destinations (i.e. a MAC not stored in the MAC table).  This means that if there is a miss when looking up the MAC the frame is flooded out all ports except the one on which it was received.  When a response is sent the MAC is then learned and written to the table.  The next frame for the same MAC will not incur a miss because the table will reflect the port it exists on.  VXLAN preserves this behavior over an IP network using IP multicast groups.

Each VXLAN ID has an assigned IP multicast group to use for traffic flooding (the same multicast group can be shared across VXLAN IDs.)  When a frame is received on the downlink bound for an unknown destination it is encapsulated using the IP of the assigned multicast group as the Outer DA; it’s then sent out the uplink.  Any VTEP with nodes on that VXLAN ID will have joined the multicast group and therefore receive the frame.  This maintains the traditional Ethernet flood and learn behavior.

VTEPs are designed to be implemented as a logical device on an L2 switch.  The L2 switch connects to the VTEP via a logical 802.1Q VLAN trunk.  This trunk contains a VXLAN infrastructure VLAN in addition to the production VLANs.  The infrastructure VLAN is used to carry VXLAN encapsulated traffic to the VXLAN fabric.  The only member interfaces of this VLAN will be the VTEP’s logical connection to the bridge itself and the uplink to the VXLAN fabric.  This interface is the ‘uplink’ described above, while the logical 802.1Q trunk is the downlink.

image

Summary

VXLAN is a network overlay technology designed for data center networks.  It provides massively increased scalability over VLAN IDs alone while allowing for L2 adjacency over L3 networks.  The VXLAN VTEP can be implemented in both virtual and physical switches, allowing the virtual network to map to physical resources and network services.  VXLAN currently has both wide support and hardware adoption in switching ASICs and hardware NICs, as well as virtualization software.


Digging Into the Software Defined Data Center

The software defined data center is a relatively new buzzword embraced by the likes of EMC and VMware.  For an introduction to the concept see my article over at Network Computing (http://www.networkcomputing.com/data-center/the-software-defined-data-center-dissect/240006848.)  This post is intended to take it a step deeper, as I seem to be stuck at 30,000 feet for the next five hours with no internet access and no other decent ideas.  For the purpose of brevity (read: laziness) I’ll use the acronym SDDC for Software Defined Data Center, whether or not this is being used elsewhere.

First let’s look at what you get out of a SDDC:

Legacy Process:

In a traditional legacy data center the workflow for implementing a new service would look something like this:

  1. Approval of the service and budget
  2. Procurement of hardware
  3. Delivery of hardware
  4. Rack and stack of new hardware
  5. Configuration of hardware
  6. Installation of software
  7. Configuration of software
  8. Testing
  9. Production deployment

This process would vary greatly in overall time but 30-90 days is probably a good ballpark (I know, I know, some of you are wishing it happened that fast.)

Not only is this process complex and slow but it has inherent risk.  Your users are accustomed to on-demand IT services in their personal life.  They know where to go to get it and how to work with it.  If you tell a business unit it will take 90 days to deploy an approved service they may source it from outside of IT.  This type of shadow IT poses issues for security, compliance, backup/recovery etc. 

SDDC Process:

As described in the link above an SDDC provides a complete decoupling of the hardware from the services deployed on it.  This provides a more fluid system for IT service change: growing, shrinking, adding and deleting services.  Conceptually the overall infrastructure would maintain an agreed upon level of spare capacity and would be added to as thresholds were crossed.  This would provide an ability to add services and grow existing services on the fly in all but the most extreme cases.  Additionally the management and deployment of new services would be software driven through intuitive interfaces rather than hardware driven and based on disparate CLIs.

The process would look something like this:

  1. Approval of the service and budget
  2. Installation of software
  3. Configuration of software
  4. Testing
  5. Production deployment

The removal of four steps is not the only benefit.  The remaining five steps are streamlined into automated processes rather than manual configurations.  Change management systems and trackback/chargeback are incorporated into the overall software management system providing a fluid workflow in a centralized location.  These processes will be initiated by authorized IT users through self-service portals.  The speed at which business applications can be deployed is greatly increased providing both flexibility and agility.

Isn’t that cloud?

Yes, no and maybe.  Or as we say in the IT world: ‘It depends.’  SDDC can be cloud; with on-demand self-service, flexible resource pooling, metered service, etc., it fits the cloud model.  The difference is really in where and how it’s used.  A public cloud based IaaS model, or any given PaaS/SaaS model, does not lend itself to legacy enterprise applications.  For instance you’re not migrating your Microsoft Exchange environment onto Amazon’s cloud.  Those legacy applications and systems still need a home.  Additionally those existing hardware systems still have value.  SDDC offers an evolutionary approach to enterprise IT that can support both legacy applications and new applications written to take advantage of cloud systems.  This provides a migration approach as well as investment protection for traditional IT infrastructure.

How it works:

The term ‘cloud operating system’ is thrown around frequently in the same conversation as SDDC.  The idea is that compute, network and storage are raw resources that are consumed by the applications and services we run to drive our businesses.  Rather than look at these resources individually, and manage them as such, we plug them into a management infrastructure that understands them and can utilize them as services require them.  Forget the hardware underneath and imagine a dashboard of your infrastructure something like the following graphic.

image

 

The hardware resources become raw resources to be consumed by the IT services.  For legacy applications this can be very traditional virtualization or even physical server deployments.  New applications and services may be deployed in a PaaS model on the same infrastructure allowing for greater application scale and redundancy and even less tie to the hardware underneath.

Lifting the kimono:

Taking a peek underneath the top level reveals a series of technologies both new and old.  Additionally there are some requirements that may or may not be met by current technology offerings.  We’ll take a look through the compute, storage and network requirements of SDDC one at a time, starting with compute and working our way up.

Compute is the layer that requires the least change.  Years ago we moved to the commodity x86 hardware which will be the base of these systems.  The compute platform itself will be differentiated by CPU and memory density, platform flexibility and cost. Differentiators traditionally built into the hardware such as availability and serviceability features will lose value.  Features that will continue to add value will be related to infrastructure reduction and enablement of upper level management and virtualization systems.  Hardware that provides flexibility and programmability will be king here and at other layers as we’ll discuss.

Other considerations at the compute layer will tie closely into storage.  As compute power itself has grown by leaps and bounds  our networks and storage systems have become the bottleneck.  Our systems can process our data faster than we can feed it to them.  This causes issues for power, cooling efficiency and overall optimization.  Dialing down performance for power savings is not the right answer.  Instead we want to fuel our processors with data rather than starve them.  This means having fast local data in the form of SSD, flash and cache.

Storage will require significant change, but changes that are already taking place or foreshadowed in roadmaps and startups.  The traditional storage array will become more and more niche as it has limited capacities of both performance and space.  In its place we’ll see new options including, but not limited to, migration back to local disk and scale-out options.  Much of the migration to centralized storage arrays was fueled by VMware’s vMotion, DRS, FT etc.  These advanced features required multiple servers to have access to the same disk, hence the need for shared storage.  VMware has recently announced a combination of storage vMotion and traditional vMotion that allows live migration without shared storage.  This is available in other hypervisor platforms and makes local storage a much more viable option in more environments.

Scale-out systems on the storage side are nothing new.  Lefthand and Equalogic pioneered much of this market before being bought by HP and Dell respectively.  The market continues to grow with products like Isilon (acquired by EMC) making a big splash in the enterprise as well as plays in the Big Data market.  NetApp’s cluster mode is now in full effect with OnTap 8.1, allowing their systems to scale out.  In the SMB market new players with fantastic offerings like Scale Computing are making headway and bringing innovation to the market.  Scale-out provides a more linear growth path as both I/O and capacity increase with each additional node.  This is contrary to traditional systems, which are always bottlenecked by the storage controller(s).

We will also see moves to central control, backup and tiering of distributed storage, such as storage blades and server cache.  Having fast data at the server level is a necessity but solves only part of the problem.  That data must also be made fault tolerant as well as available to other systems outside the server or blade enclosure.  EMC’s VFcache is one technology poised to help with this by adding the server as a storage tier for software tiering.  Software such as this places the hottest data directly next to the processor, with tier options all the way back to SAS, SATA, and even tape.

By now you should be seeing the trend of software based feature and control.  The last stage is within the network which will require the most change.  Network has held strong to proprietary hardware and high margins for years while the rest of the market has moved to commodity.  Companies like Arista look to challenge the status quo by providing software feature sets, or open programmability layered onto fast commodity hardware.  Additionally Software Defined Networking (http://www.definethecloud.net/sdn-centralized-network-command-and-control) has been validated by both VMware’s acquisition of Nicira and Cisco’s spin-off of Insieme which by most accounts will expand upon the CiscoOne concept with a Cisco flavored SDN offering.  In any event the race is on to build networks based on software flows that are centrally managed rather than the port-to-port configuration nightmare of today’s data centers. 

This move is not only for ease of administration, but is also required to push our systems to the levels required by cloud and SDDC.  These multi-tenant systems running disparate applications at various service tiers require tighter quality of service controls and bandwidth guarantees, as well as more intelligent routes.  Today’s physically configured networks can’t provide these controls.  Additionally applications will benefit from network visibility, allowing them to request specific flow characteristics from the network based on application or user requirements.  Multiple service levels can be configured on the same physical network, allowing traffic to take appropriate paths based on type rather than physical topology.  These network changes are required to truly enable SDDC and cloud architectures.

Further up the stack from the Layer 2 and Layer 3 transport networks comes a series of other network services that will be layered in via software.  Features such as load-balancing, access control and firewall services will be required for the services running on these shared infrastructures.  These network services will need to be deployed with new applications and tiered to the specific requirements of each.  As with the L2/L3 services, manual configuration will not suffice and a ‘big picture’ view will be required to ensure that network services match application requirements.  These services can be layered in from both physical and virtual appliances but will require configurability via the centralized software platform.

Summary:

By combining current technology trends, emerging technologies and layering in future concepts the software defined data center will emerge in evolutionary fashion.  Today’s highly virtualized data centers will layer on technologies such as SDN while incorporating new storage models bringing their data centers to the next level.  Conceptually picture a mainframe pooling underlying resources across a shared application environment.  Now remove the frame.


SDN – Centralized Network Command and Control

Software Defined Networking (SDN) is a hot topic in the data center and cloud community.  The geniuses <sarcasm> over at IDC predict a $2 billion market by 2016 (expect this number to change often between now and then, and look closely at what they count in the cost.) The concept has the potential to shake up the networking business as a whole (http://www.networkcomputing.com/next-gen-network-tech-center/240001372) and has both commercial and open source products being developed and shipping, but what is it, and why?

Let’s start with the why by taking a look at how traditional networking occurs.

 

Traditional Network Architecture:

 

image

 

The most important thing to notice in the graphic above is the separate control and data planes.  Each plane has separate tasks that provide the overall switching/routing functionality.  The control plane is responsible for configuration of the device and programming the paths that will be used for data flows.  When you are managing a switch you are interacting with the control plane.  Things like route tables and Spanning-Tree Protocol (STP) are calculated in the control plane.  This is done by accepting information frames such as BPDUs or Hello messages and processing them to determine available paths.  Once these paths have been determined they are pushed down to the data plane and typically stored in hardware.  The data plane then typically makes path decisions in hardware based on the latest information provided by the control plane.  This has traditionally been a very effective method.  The hardware decision making process is very fast, reducing overall latency, while the control plane itself can handle the heavier processing and configuration requirements.

This method is not without problems; the one we will focus on is scalability.  In order to demonstrate the scalability issue I find it easiest to use Quality of Service (QoS) as an example.  QoS allows forwarding priority to be given to specific frames for scheduling purposes based on characteristics in those frames.  This allows network traffic to receive appropriate treatment in times of congestion.  For instance latency sensitive voice and video traffic is typically engineered for high priority to ensure the best user experience.  Traffic prioritization is typically based on tags in the frame known as Class of Service (CoS) and/or Differentiated Services Code Point (DSCP).  These tags must be marked consistently for frames entering the network and rules must then be applied consistently for their treatment on the network.  This becomes cumbersome in a traditional multi-switch network because the configuration must be duplicated in some fashion on each individual switching device.
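To make that duplication concrete, here is a small illustrative sketch (abstract data structures only, no vendor CLI) of one logical classification policy having to be restated on every device in a box-by-box model:

```python
# One logical QoS intent...
qos_policy = {"voice": {"dscp": 46, "priority": "high"},
              "video": {"dscp": 34, "priority": "medium"},
              "bulk":  {"dscp": 0,  "priority": "best-effort"}}

switches = ["access-1", "access-2", "agg-1", "agg-2", "core-1"]

# ...that a traditional network forces you to restate on every device.
per_device_config = {sw: dict(qos_policy) for sw in switches}
print(len(per_device_config) * len(qos_policy), "classification rules to keep in sync")

# Any change (e.g., re-marking video) must then be repeated and verified per
# switch, which is where drift and misconfiguration creep in.
```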

For an easier example of the current administrative challenges, consider that each port in the network is a management point, meaning each port must be individually configured.  This is both time consuming and cumbersome.

Additional challenges exist in properly classifying data and routing traffic.  A fantastic example of this would be two different traffic types, iSCSI and voice.  iSCSI is storage traffic and typically a full size packet or even jumbo frame, while voice data is typically transmitted in a very small packet.  Additionally they have different requirements: voice is very latency sensitive in order to maintain call quality, while iSCSI is less latency sensitive but will benefit from more bandwidth.  Traditional networks have few if any tools to differentiate these traffic types and send them down separate paths which are beneficial to both types.

These types of issues are what SDN looks to solve.

The Three Key Elements of SDN:

  • Ability to manage the forwarding of frames/packets and apply policy
  • Ability to perform this at scale in a dynamic fashion
  • Ability to be programmed

Note: In order to qualify as SDN an architecture does not have to be Open, standard, interoperable, etc.  A proprietary architecture can meet the definition and provide the same benefits.  This blog does not argue for or against either open or proprietary architectures.

An SDN architecture must be able to manipulate frame and packet flows through the network at large scale, and do so in a programmable fashion.  The hardware plumbing of an SDN will typically be designed as a converged (capable of carrying all data types including desired forms of storage traffic) mesh of large, low-latency pipes commonly called a fabric.  The SDN architecture itself will in turn provide a network-wide view and the ability to manage the network and network flows centrally.

This architecture is accomplished by separating the control plane from the data plane devices and providing a programmable interface for that separated control plane.  The data plane devices receive forwarding rules from the separated control plane and apply those rules in hardware ASICs.  These ASICs can be either commodity switching ASICs or customized silicon depending on the functionality and performance aspects required.  The diagram below depicts this relationship:

image

In this model the SDN controller provides the control plane and the data plane is comprised of hardware switching devices.  These devices can either be new hardware devices or existing hardware devices with specialized firmware.  This will depend on vendor and deployment model.  One major advantage that is clearly shown in this example is the visibility provided to the control plane.  Rather than each individual data plane device relying on advertisements from other devices to build its view of the network topology, a single control plane device has a view of the entire network.  This provides a platform from which advanced routing, security, and quality decisions can be made, hence the need for programmability.  Another major capability that can be drawn from this centralized control is visibility.  With a centralized controller device it is much easier to gain usable data about real time flows on the network, and make decisions (automated or manual) based on that data.
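A toy sketch of that relationship, with invented structures standing in for the controller’s global view and the rules it pushes down (no particular controller or protocol implied):

```python
# Toy model: a central control plane computing and pushing flow rules.
class ToyController:
    def __init__(self, switches):
        self.switches = switches          # name -> list of flow rules (the "data plane")
        self.topology = {}                # global view built from all devices

    def learn_link(self, sw_a, sw_b):
        self.topology.setdefault(sw_a, set()).add(sw_b)
        self.topology.setdefault(sw_b, set()).add(sw_a)

    def push_flow(self, switch, match, action):
        """Program a match/action rule into a device's forwarding table."""
        self.switches[switch].append({"match": match, "action": action})

ctrl = ToyController({"leaf-1": [], "leaf-2": [], "spine-1": []})
ctrl.learn_link("leaf-1", "spine-1")
ctrl.learn_link("leaf-2", "spine-1")

# With a network-wide view, one decision can be programmed end to end:
for sw in ("leaf-1", "spine-1", "leaf-2"):
    ctrl.push_flow(sw, match={"dst": "10.0.2.5"}, action={"out_port": 1})
print(ctrl.switches["spine-1"])
```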

This diagram only shows a portion of the picture as it is focused on physical infrastructure and servers.  Another major benefit is the integration of virtual server environments into SDN networks.  This allows centralized management of consistent policies for both virtual and physical resources.  Integrating a virtual network is done by having a Virtual Ethernet Bridge (VEB) in the hypervisor that can be controlled by an SDN controller.  The diagram below depicts this:

image

This diagram more clearly depicts the integration between virtual networking systems and physical networking systems in order to have cohesive, consistent control of the network.  This plays a more important role as virtual workloads migrate.  Because both the virtual and physical data planes are managed centrally by the control plane, when a VM migration happens its network configuration can move with it regardless of destination in the fabric.  This is a key benefit for policy enforcement in virtualized environments because more granular controls can be placed on the VM itself as an individual port and those controls stick with the VM throughout the environment.
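An illustrative sketch of that point: the network profile is keyed to the VM (its virtual port) rather than to a physical switch port, so a lookup after migration returns the same policy. The names and fields here are invented for illustration:

```python
# Policy attached to the VM's virtual port rather than a physical switch port.
vm_profiles = {
    "vm-web-01": {"segment": 5001, "acl": "web-tier", "qos": "standard"},
}
vm_location = {"vm-web-01": "host-A"}        # tracked by the central control plane

def apply_policy(vm_name):
    host = vm_location[vm_name]
    profile = vm_profiles[vm_name]           # same profile wherever the VM lands
    return f"{vm_name} on {host}: segment={profile['segment']} acl={profile['acl']}"

print(apply_policy("vm-web-01"))             # policy applied on host-A
vm_location["vm-web-01"] = "host-B"          # live migration
print(apply_policy("vm-web-01"))             # identical policy follows it to host-B
```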

Note: These diagrams are a generalized depiction of an SDN architecture.  Methods other than a single separated controller could be used, but this is the more common concept.

With centralized command and control of the network in place through SDN, and a programmable interface into it, more intelligent processes can now be layered on to handle complex systems.  Real-time decisions can be made for the purposes of traffic optimization, security, outages, or maintenance.  Separate traffic types can run side by side while receiving different paths and forwarding behavior that responds dynamically to network changes.
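To make the real-time decision making concrete, here is a sketch of an automated loop that polls link utilization from a controller's network-wide view and steers a bulk traffic class onto an alternate pre-computed path when a link runs hot.  The endpoints, traffic class and threshold are made-up placeholders, not a real product API.

    import time
    import requests  # assumes the third-party 'requests' package is available

    CONTROLLER = "http://sdn-controller.example.com:8080"  # hypothetical controller address
    UTILIZATION_LIMIT = 0.8                                # steer traffic off links above 80% load

    def link_utilization():
        """Fetch per-link utilization (0.0 to 1.0) from the controller's global view."""
        return requests.get(f"{CONTROLLER}/stats/links", timeout=5).json()

    def steer(traffic_class, path):
        """Ask the controller to move a traffic class onto an alternate path."""
        requests.post(f"{CONTROLLER}/paths",
                      json={"class": traffic_class, "path": path}, timeout=5)

    def monitor(poll_interval=10):
        """Simple optimization loop: react to congestion using the centralized view."""
        while True:
            for link, load in link_utilization().items():
                if load > UTILIZATION_LIMIT:
                    # Because the controller sees the whole topology it can pick a path that
                    # avoids the hot link, rather than each switch reacting locally.
                    steer("bulk-backup", path=["leaf1", "spine2", "leaf4"])
            time.sleep(poll_interval)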

Summary:

Software Defined Networking has the potential to disrupt the networking market and move us past the days of the switch/router jockey.  This shift will provide major benefits in the form of flexibility, scalability and traffic performance for datacenter networks.  While all of the aspects are not yet defined, SDN projects such as OpenFlow (www.openflow.org) provide the tools to begin testing and developing SDN architectures on supported hardware.  Expect to see lots of changes in this ecosystem and many flavors of vendor offerings.


Server Networking With gen 2 UCS Hardware

** this post has been slightly edited thanks to feedback from Sean McGee**

In previous posts I’ve outlined:

If you’re not familiar with UCS networking I suggest you start with those for background.  This post is an update to those focused on UCS B-Series server to Fabric Interconnect communication using the new hardware options announced at Cisco Live 2011.  First a recap of the new hardware:

The UCS 6248UP Fabric Interconnect

The 6248 is a 1RU model that provides 48 universal ports (1G/10G Ethernet or 1/2/4/8G FC.)  This provides 20 additional ports over the 6120 in the same 1RU form factor.  Additionally the 6248 lowers latency to 2.0us from the previous 3.2us.

The UCS 2208XP I/O Module

The 2208 doubles the uplink bandwidth per I/O module, providing 160Gbps of total throughput per 8-blade chassis.  It also quadruples the number of internal 10G connections to the blades, allowing for 80Gbps per half-width blade.

UCS 1280 VIC

The 1280 VIC provides 8x10GE ports total, 4x to each IOM, for a total of 80Gbps per half-width slot (160Gbps with 2x in a full-width blade.)  It also doubles the VIF numbers of the previous VIC, allowing for 256 (theoretical) vNICs or vHBAs.  The new VIC also supports port-channeling to the UCS 2208 IOM and iSCSI boot.

The other addition that affects this conversation is the ability to port-channel the uplinks from the 2208 IOM which could not be done before (each link on a 2104 IOM operated independently.)  All of the new hardware is backward compatible with all existing UCS hardware.  For more detailed information on the hardware and software announcements visit Sean McGee’s blog where I stole these graphics: http://www.mseanmcgee.com/2011/07/ucs-2-0-cisco-stacks-the-deck-in-las-vegas/.
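Before digging into connectivity, a quick sanity check on the bandwidth math in the recap above; a simple sketch using the port counts from the hardware descriptions (per-flow limits are covered later in the post):

    # Gen 2 UCS bandwidth arithmetic from the hardware recap above.
    LINK_GBPS = 10               # all fabric links are 10G Ethernet
    IOM_UPLINKS = 8              # UCS 2208XP uplinks to the Fabric Interconnect, per IOM
    IOMS_PER_CHASSIS = 2         # A side and B side
    VIC1280_LINKS_PER_IOM = 4    # 1280 VIC mid-plane traces to each IOM

    chassis_uplink_bw = LINK_GBPS * IOM_UPLINKS * IOMS_PER_CHASSIS               # 160 Gbps per chassis
    half_width_blade_bw = LINK_GBPS * VIC1280_LINKS_PER_IOM * IOMS_PER_CHASSIS   # 80 Gbps per half-width slot
    full_width_blade_bw = half_width_blade_bw * 2                                # 160 Gbps with 2x VIC 1280

    print(chassis_uplink_bw, half_width_blade_bw, full_width_blade_bw)           # 160 80 160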

Let’s start by discussing the connectivity options from the Fabric Interconnects to the IOMs in the chassis focusing on all gen 2 hardware.

There are two modes of operation for the IOM uplinks: Discrete (non-bundled) and Port-Channel (bundled.)  In either mode it is possible to configure 1, 2, 4, or 8 uplinks from each IOM.

UCS 2208 Fabric Interconnect Failover

image

Discrete Mode:

In Discrete mode a static pinning mechanism maps each blade to a given uplink port, dependent on the number of uplinks in use.  This means each blade has an assigned uplink on each IOM for inbound and outbound traffic.  In this mode, if a link fails the blade will not 're-pin' on the side of the failure, but will instead rely on NIC teaming/bonding or Fabric Failover to fail over to the redundant IOM/fabric.  The pinning behavior is as follows, with the exception of 1-Uplink (not shown) in which all blades use the only available port:

  • 2 Uplinks: Blades 1, 3, 5 and 7 pin to Port 1; blades 2, 4, 6 and 8 pin to Port 2.
  • 4 Uplinks: Blades 1 and 5 pin to Port 1, blades 2 and 6 to Port 2, blades 3 and 7 to Port 3, and blades 4 and 8 to Port 4.
  • 8 Uplinks: Each blade pins to the port matching its slot number (Blade 1 to Port 1, Blade 2 to Port 2, and so on through Blade 8 to Port 8.)

The same port-pinning is used on both IOMs, so in a redundant configuration each blade is uplinked via the same port number on separate IOMs to redundant fabrics.  The draw of Discrete mode is that bandwidth is predictable in link failure scenarios.  If a link fails on one IOM that server fails over to the other fabric, rather than placing additional bandwidth demands on the remaining active links of the failed side.  In summary, it forces NIC teaming/bonding or Fabric Failover to handle failure events rather than network-based load-balancing.  The following diagram depicts the failover behavior for server three in an 8-uplink scenario.

Discrete Mode Failover

image

In the previous diagram port 3 on IOM A has failed.  With the system in discrete mode NIC-teaming/bonding or Fabric Failover handles failover to the secondary path on IOM B (which is the same port (3) based on static-pinning.)
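The static pinning described above boils down to a simple modulo on the blade slot number.  A small sketch of that logic (derived from the pinning behavior above, not from any UCS API):

    def pinned_uplink(blade_slot, active_uplinks):
        """Return the IOM uplink a blade pins to in Discrete mode.

        blade_slot: 1-8; active_uplinks: 1, 2, 4 or 8 (the valid Discrete configurations).
        """
        if active_uplinks not in (1, 2, 4, 8):
            raise ValueError("Discrete mode supports 1, 2, 4 or 8 uplinks")
        # Blades wrap around the available uplinks: slot 1 -> port 1, slot 2 -> port 2, ...
        return ((blade_slot - 1) % active_uplinks) + 1

    # With 4 uplinks, blades 3 and 7 both land on Port 3.
    assert pinned_uplink(3, 4) == 3 and pinned_uplink(7, 4) == 3
    # With 8 uplinks each blade gets its own port (blade 3 -> Port 3, as in the failover example above).
    assert pinned_uplink(3, 8) == 3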

Port-Channel Mode:

In Port-Channel mode all available links are bundled and a port-channel hashing algorithm (based on TCP/UDP port + VLAN, non-configurable) is used for load-balancing server traffic.  In this mode all server links are still 'pinned,' but they are pinned to the logical bundle rather than to individual IOM uplinks.  The following diagram depicts this mode.

Port-Channel Mode

image

In this scenario, when a port fails on an IOM the port-channel load-balancing algorithm handles failing the server traffic over to another available port in the channel.  This failover will typically be faster than NIC teaming/bonding failover.  It decreases the potential throughput for all flows on the side with the failure, but will only affect performance if the links are saturated.  The following diagram depicts this behavior.

image

In the diagram above Blade 3 was pinned to Port 1 on the A side.  When Port 1 failed, Port 4 was selected (depicted in green), while Fabric B Port 6 remains active, leaving a potential of 20 Gbps.

Note: Actual used ports will vary dependent on port-channel load-balancing.  These are used for example purposes only.

As you can see the port-channel mode enables additional redundancy and potential per-server bandwidth as it leaves two paths open.  In high utilization situations where the links are fully saturated this will degrade throughput of all blades on the side experiencing the failure.  This is not necessarily a bad thing (happens with all port-channel mechanisms), but it is a design consideration.  Additionally port-channeling in all forms can only provide the bandwidth of a single link per flow (think of a flow as a conversation.)  This means that each flow can only utilize 10Gbps max even though 8x10Gbps links are bundled.  For example a single FTP transfer would max at 10Gbps bandwidth, while 8xFTP transfers could potentially use 80Gbps (10 per link) dependent on load-balancing.
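The single-flow limit is easier to see with a toy hash: every frame in a given conversation hashes to the same member link, so one flow can never exceed a single link's bandwidth no matter how many links are bundled.  The hash below is purely illustrative; the real UCS hash uses the L4 ports and VLAN as noted above and is not configurable.

    def member_link(src_port, dst_port, vlan, active_links):
        """Toy port-channel hash: pick one member link per flow (illustrative, not Cisco's algorithm)."""
        return hash((src_port, dst_port, vlan)) % active_links

    # A single FTP transfer is one flow, so every frame rides the same 10G link:
    ftp_link = member_link(50321, 21, 100, active_links=8)

    # Eight parallel transfers are eight flows and can spread across the bundle,
    # approaching 80Gbps in aggregate (subject to how evenly the hash distributes them):
    links_used = {member_link(50000 + i, 21, 100, active_links=8) for i in range(8)}
    print(ftp_link, links_used)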

Next let's discuss server to IOM connectivity (yes, I use 'discuss' to describe me monologuing in print, get over it, and yes, I know monologuing isn't a word.)  I'll focus on the new UCS 1280 VIC because all other current cards maintain the same connectivity as before.  The following diagram depicts the 1280 VIC connectivity.

image

The 1280 VIC utilizes 4x10Gbps links across the mid-plane to each IOM to form two 40Gbps port-channels.  This provides 80Gbps of total potential throughput per card.  This means a half-width blade has a total potential of 80Gbps using this card and a full-width blade can receive 160Gbps (of course this is dependent upon design.)  As with any port-channel, link bonding, trunking or whatever you may call it, any single flow (conversation) can only utilize the bandwidth of one physical link (or backplane trace.)  This means every flow from any given UCS server has a max potential bandwidth of 10Gbps, but with 8 total uplinks 8 different flows could potentially utilize 80Gbps.

This becomes very important with things like NFS-based storage within hypervisors.  Typically a virtualization hypervisor will handle storage connectivity for all VMs.  This means that only one flow (conversation) will occur between host and storage.  In these typical configurations only 10Gbps will be available for all VM NFS data traffic even though the host may have a potential 80Gbps bandwidth.  Again this is not necessarily a concern, but a design consideration as most current/near-future hosts will never use more than 10Gbps of storage I/O.

Summary:

The new UCS hardware packs a major punch when it comes to bandwidth, port-density and failover options.  That being said it’s important to understand the frame flow, port-usage and potential bandwidth in order to properly design solutions for maximum efficiency.  As always comments, complaints and corrections are quite welcome!


How to Boost Cloud Reliability

Clouds fail. That’s a fact. But if your company uses business apps that are tied to the availability of public cloud services, you can—and must—take steps to mitigate these failures by getting schooled on a few key factors:  service-level agreements (SLAs), redundancy options, application design, and the type of service being used. We’ll outline how these factors affect the availability of your applications in the cloud…

 

Read my full article in the August issue of Network Computing (For IT by IT) (Requires a free registration, my apologies.)

http://www.informationweek.com/nwcdigital/nwcaug11?k=nwchp&cid=onedit_ds_nwchp


Why NetApp is my ‘A-Game’ Storage Architecture

One of my most popular blog posts to date, if not the most popular, has been 'Why Cisco UCS is my 'A-Game' Server Architecture' (http://www.definethecloud.net/why-cisco-ucs-is-my-a-game-server-architecture.)  In that post I describe why I lead with Cisco UCS for most consultative engagements.  This follow-up for storage has been a long time coming, and thanks to some 'gentle' nudging and a random coincidence combined with an extended airport wait I've decided to get it posted.

If you haven’t read my previous post I take the time to define my ‘A-Game’ architectures as such:

“The rule in regards to my A-Game is that it’s not a rule, it’s a launching point. I start with a specific hardware set in mind in order to visualize the customer need and analyze the best way to meet that need. If I hit a point of contention that negates the use of my A-Game I’ll fluidly adapt my thinking and proposed architecture to one that better fits the customer. These points of contention may be either technical, political, or business related:

  • Technical: My A-Game doesn’t fit the customers requirement due to some technical factor, support, feature, etc.
  • Political: My A-Game doesn’t fit the customer because they don’t want Vendor X (previous bad experience, hype, understanding, etc.)
  • Business: My A-Game isn’t on an approved vendor list, or something similar.

If I hit one of these roadblocks I’ll shift my vendor strategy for the particular engagement without a second thought. The exception to this is if one of these roadblocks isn’t actually a roadblock and my A-Game definitely provides the best fit for the customer I’ll work with the customer to analyze actual requirements and attempt to find ways around the roadblock.

Basically my A-Game is a product or product line that I’ve personally tested, worked with and trust above the others that is my starting point for any consultative engagement.

In my A-Game Server post I run through my hate then love relationship that brought me around to trust, support, and evangelize UCS; I cannot express the same for NetApp.  My relationship with NetApp fell more along the lines of love at first sight.

NetApp – Love at first sight:

I began working with NetApp storage at the same time I was diving headfirst into the datacenter as a whole.  I was moving from server admin/engineer to architect and drinking from the SAN, virtualization, and storage firehose.  I had a fantastic boss, who to this day is a mentor and friend, who pushed me to learn quickly and execute rapidly and accurately, thanks Mike!  The main products our team handled at the time were: IBM blades/servers, VMware, SAN (Brocade and Cisco) and IBM/NetApp storage.  I was never a fan of the IBM storage.  It performed solidly but was a bear to configure, lacked a rich feature set and typically got put in place and left untouched until refresh.  At the same time I was coming up to speed on IBM storage I was learning more and more about NetApp.

From the non-technical perspective NetApp had accessible training and experts, clear value-proposition messaging and a firm grasp on VMware, where virtualization was heading and how/why it should be executed on.  This hit right on with what my team was focused on.  Additionally NetApp worked hard to maintain an excellent partner channel relationship, make information accessible, and put the experts a phone call or flight away.  This made me WANT to learn more about their technology.

The lasting bonds:

Breakfast food, yep, breakfast food is what made NetApp stick for me, and what keeps them my A-Game four years later.  Not just any breakfast food, but a personal favorite of mine: beer and waffles, err, umm… WAFL (second only to chicken and waffles and missing only bacon.)  Data ONTAP (the beer) and NetApp's Write Anywhere File Layout (WAFL) are at the heart of why they are my A-Game.  While you can find dozens of blogs, competitive papers, etc. attacking the use of WAFL for primary block storage, what WAFL enables is amazing from a feature perspective, and the performance numbers NetApp can put up speak for themselves.  Because, unlike a traditional block-based array, NetApp owns the underlying file system, they can not only do more with the data, they can also more rapidly adapt to market needs with software enhancements.  Don't take my word for it, do some research: look at the latest announcements from other storage leaders and check what year NetApp announced their version of those same features; with few exceptions you'll be surprised.  The second piece of my love for NetApp is Data ONTAP.  NetApp has several storage controller systems ranging from the lower end to Tier-1 high-capacity, high-availability systems.  Regardless of which one you use, you're always using the same operating/management system, Data ONTAP.  This means that as you scale, change, refresh, upgrade, downgrade, you name it, you never have to retrain AND you keep a common feature set.

My love for breakfast is not the only draw to NetApp, and in fact without a bacon offering I would have strayed if there weren’t more (note to NetApp: Incorporate fatty pork the way politicians do.) 

Other features that keep NetApp top of my list are:

  • Primary block-level storage Deduplication with real world savings at 70+ % with minimal performance hit (and no license fee to boot)
  • Ease of upgrade/downgrade (keep the shelves of disks, replace the controllers, data stays)
  • Read/Write ‘0’ space/cost clones (the ability to clone various data sets in a read/write status using only pointers and storing only the change ‘delta’) and FlexClone capabilities as a whole
  • Highly optimized snapshots for point-in-time rollback, test/dev, etc.
  • VMware plugins to enable VMware admins to manage and monitor their own storage allotments
  • Storage virtualization, the ability to carve out storage and the management of that storage to multiple tenants in a similar fashion to what VMware does for servers
  • Ability to get 80% of the performance benefits of a shelf of SSD drives by adding Flash Cache (PAM II) cards 

Add to that more recent features, such as being first to market with FCoE-based storage, and you've got a winner in my book.  All that being said, I still haven't covered the real reason NetApp is the first storage vendor in my head anytime I talk about storage.

Unification:

Anytime I’m talking about servers I’m talking about virtualization as well.  Because I don’t work in the Unix or Mainframe worlds I’m most likely talking about VMware (90% market share has that effect.)  When dealing with virtualization my primary goals are consolidation/optimization and flexibility.  In my opinion nobody can touch NetApp storage for this.  I’m a fan of choice and options, I also like particular features/protocols for particular use cases.  On most storage platforms I have to choose my hardware based on the features and protocols my customers require, and most likely use more than one platform to get them all.  This isn’t the case with NetApp.  With few exceptions every protocol/feature is available simultaneously with any given hardware platform.  This means I can run iSCSI, FC, FCoE or all of the above for block based needs at the same time I run CIFS natively to replace Windows file servers, and NFS for my VMware data stores.  All of that from the same box or even the same ports!  This lets me tier my protocols and features to the application requirements instead of to my hardware limitations.

I’ve been working on VMware deployments in some fashion for four years, and have seen dozens of unique deployments but personally never deployed or worked with a VMware environment that ran off a single protocol, typically at a minimum NFS is used for ISO datastores and CIFS can be used to eliminate Windows file servers rather than virtualize them, with a possible block based protocol involved for boot or databases.

Additionally NetApp offers features and functionality to allow multiple storage functions to be consolidated on a single system.  You no longer require separate hardware for primary, secondary, backup, DR, and archive.  All of this can then be easily setup and managed for replication across any of NetApp’s platforms, or many 3rd party systems front-ended with V-series.  These two pieces combined create a truly ‘unified’ platform.

When do I bring out my B-Game?

NetApp, like any solution I've ever come across, is not the right tool for every job.  For me they hit or exceed the 80/20 rule perfectly.  A few places where I don't see NetApp as a current fit:

  • Small to Medium Business (SMB) – At the SMB level a single protocol solution may work and you can find lower cost solutions that fit the bill, but if you scale faster than expected you’re stuck with a single protocol platform and may end up having to purchase and manage additional devices if/when needs change
  • Massive scalability – Here I’m talking public cloud petabytes upon petabytes where systems like Isilon from EMC and its competitors have the lead
  • Top-Tier performance and enterprise class reliability for Tier-1 applications –  Here at the very high end typically EMC or Hitachi are the players, and IBM using SVC may also play
  • Mainframes, NetApp don’t play that and Big Blue don’t support it  

Summary:

While I maintain that there are no 'one-size-fits-all' IT solutions, and that my A-Game is a starting point rather than a rule, I find NetApp hits the bullseye for 80+ percent of the market I work with.  Not only do they fit up front, but they back it up with support, continued innovation, and product advancement.  NetApp isn't 'The Growth Company' and #2 in storage by luck or chance (although I could argue they did luck out quite a bit with the timing of the industry move to converged storage on 10GE.)

Another reason NetApp still reigns as my A-Game is the way it marries to my A-Game server architecture.  Cisco UCS enables unification, protocol choice and cable consolidation as well as virtualization acceleration, etc.  All of these are further amplified when used alongside NetApp storage, which allows rapid provisioning, protocol options, storage consolidation and storage virtualization, etc.  Do you want to pre-provision 50 (or 250) VMware hosts with 25 GB read/write boot LUNs ready to go at the click of a template?  Do you want to do this without utilizing any space up front?  UCS and NetApp have the toolset for you.  You can then rapidly bring up new customers, or stay at dinner with your family while a Network Operations Center (NOC) administrator deploys a pre-architected, pre-secured, pre-tested and pre-provisioned server from a template to meet a capacity burst.

If you’re considering a storage decision, a private cloud migration, or a converged infrastructure pod make sure you’re taking a look at NetApp as an option and see it for yourself.  For some more information on NetApp’s virtualization story see the links below:

TR3856: Quantifying the Value of Running VMware on NetApp 

TR3808: VMware vSphere and ESX 3.5 Multiprotocol Performance Comparison Using FC, iSCSI, and NFS
