True Software Defined Networking (SDN)

The world is, and has been, buzzing about software defined networking. It’s going to revolutionize the entire industry, commoditize hardware, and disrupt all the major players. It’s going to do all that… some day. To date it hasn’t done much beyond sparking great conversation and, more importantly, identifying the need for change in networking.

In its first generation SDN is a lot of sizzle and little substance. The IT world is still trying to truly define it, much as we did with ‘Cloud’ years ago. What’s beginning to emerge is that SDN is more of a methodology than an implementation, and like cloud there are several implementations: OpenFlow, Network Virtualization and Programmable Network Infrastructure.

 

image

OpenFlow

OpenFlow focuses on separating the control plane from the data plane. This provides a centralized method to route traffic based on a 5-tuple match of packet header information. One area where OpenFlow falls short is its dependence on the independent advancement of both the protocol itself and the hardware support below it. Switching and routing hardware is built on Application Specific Integrated Circuits (ASICs), and those ASICs typically take three years to refresh. This means the OpenFlow protocol itself must advance and stabilize before silicon vendors can begin building new ASICs, which then become available roughly three years later.
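
To make the 5-tuple idea concrete, here’s a minimal sketch of a flow rule keyed on those five header fields. It is illustrative only: the dictionary layout and push_to_switch() are invented placeholders, not the OpenFlow wire protocol or any controller’s API.

```python
# Illustrative only: a flow rule keyed on the classic 5-tuple.
# The dictionary fields and push_to_switch() are hypothetical placeholders,
# not a specific OpenFlow controller API.
flow_rule = {
    "match": {
        "src_ip": "10.1.1.10",
        "dst_ip": "10.2.2.20",
        "protocol": "tcp",      # IP protocol
        "src_port": 49152,      # TCP source port
        "dst_port": 443,        # TCP destination port
    },
    "action": "output:port2",   # forward matching packets out port 2
    "priority": 100,
}

def push_to_switch(switch_id, rule):
    """Placeholder for the controller-to-switch programming step."""
    print(f"Programming {switch_id} with rule: {rule}")

push_to_switch("switch-01", flow_rule)
```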

Network Virtualization

Network virtualization is a faithful reproduction of networking functionality into the hypervisor. This method is intended to provide advanced automation and speed application deployment. The problem here arises in the new tools required to manage and monitor the network, the additional management layer, and the replication of the same underlying complexity.

Programmable Network Infrastructure

Programmable network infrastructure moves device configuration from human-oriented CLI/GUI interfaces to APIs and programming agents. This allows faster, more powerful and less error-prone device configuration from automation, orchestration and cloud operating system tools. These advance the configuration of multiple disparate systems, but they are still designed around network operating system constructs intended for human use, and the same underlying network complexities remain, such as artificial ties between addressing and policy.
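
As a rough sketch of the difference, the snippet below pushes an interface change to a device’s REST API from a script instead of a human at a CLI. The URL, payload fields and token are hypothetical; the point is the pattern, a machine-friendly API call that automation tools can drive.

```python
# Hypothetical example: pushing an interface change to a device's REST API
# instead of typing it at a CLI. The endpoint and payload shape are invented
# for illustration; real devices each expose their own schema.
import json
import urllib.request

payload = {"interface": "Ethernet1/1", "description": "web-tier", "vlan": 110}
req = urllib.request.Request(
    "https://switch-01.example.com/api/v1/interfaces",   # placeholder URL
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <token>"},          # placeholder credential
    method="POST",
)
# urllib.request.urlopen(req)  # left commented out: the device above doesn't exist
```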

All of these generation 1 SDN solutions simply move the management of the underlying complexity around. They are software designed to operate in the same model, trying to configure existing hardware. They’re simply adding another protocol, or protocols, to the pile of existing complexity.

image

Truly software defined networks

To truly define the network via software you have to look at the entire solution, not just a single piece. Simply adding a software or hardware layer doesn’t fix the problem; you must look at them in tandem, starting with the requirements for today’s networks: automation, application agility, visibility (virtual/physical), security, scale and L4-7 services (virtual/physical.)

If you start with those requirements and think in terms of a blank slate you now have the ability to build things correctly for today and tomorrow’s applications while ensuring backwards compatibility. The place to start is in the software itself, or the logical model. Begin with questions:

1. What’s the purpose of the network?

2. What’s most relevant to the business?

3. What dictates the requirements?

The answer to all three is the application, so that’s the natural starting point. Next you ask who owns, deploys and handles day two operations for an application? The answer is the development team. So you start with a view of applications in a format they would understand.

image

That format is simple provider/consumer relationships between tiers or components of an application. Each tier may provide and consume services from the next to create the application, which is a group of tiers or components, not a single physical server or VM.

You take that idea a step further and understand that the provider/consumer relationships are truly just policy. Policy can describe many things, but here it would be focused on permit/deny, redirect, SLAs, QoS, logging and L4-7 service chaining for security and user experience.

image

Now you’ve designed a policy model that focuses on the application connectivity and any requirements for those connections, including L4-7 services. With this concept you can instantiate that policy in a reusable format so that policy definition can be repeated for like connections, such as users connecting to a web tier. Additionally the application connectivity definition as a whole could be instantiated as a template or profile for reuse.
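
Here’s a minimal sketch of that idea in code form, with invented names throughout: a contract defined once and reused by any tier pair that shares the same provider/consumer relationship.

```python
# A sketch of reusable provider/consumer policy between application tiers.
# All names are illustrative; the point is that a contract ("web-access")
# is defined once and reused wherever the same relationship exists.
from dataclasses import dataclass, field

@dataclass
class Contract:
    name: str
    allow_ports: list          # e.g. [("tcp", 443)]
    qos_class: str = "default"
    redirect_to: str = None    # optional L4-7 service chain, e.g. "waf"

@dataclass
class Tier:
    name: str
    provides: dict = field(default_factory=dict)   # contract name -> Contract
    consumes: list = field(default_factory=list)   # contract names

web_access = Contract("web-access", allow_ports=[("tcp", 443)], redirect_to="waf")
db_access  = Contract("db-access",  allow_ports=[("tcp", 1433)])

web = Tier("web", provides={"web-access": web_access}, consumes=["db-access"])
db  = Tier("db",  provides={"db-access": db_access})
app_profile = {"name": "order-entry", "tiers": [web, db]}   # reusable template
```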

You’ve now defined a logical model, based on policy, for how applications should be deployed. With this model in place you can work your way down. Next you’ll need network equipment that can support your new model. Before thinking about the hardware, remember there is an operating system (OS) that will have to interface with your policy model.

Traditional network operating systems are not designed for this type of object oriented policy model. Even highly programmable or Linux based operating systems have not been designed for the object programmability that would fully support this model.  You’ll need an OS that’s capable of representing tiers or components of an application as objects, with configurable attributes. Additionally it must be able to represent physical resources like ports as objects abstracted from the applications that will run on them.  An OS that can be provisioned in terms of policy constructs rather than configuration lines such as switch ports, QoS and ACLs. You’ll need to rewrite the OS.

As you’re writing your OS you’ll need to rethink the switching and routing hardware that will deliver all of those packets and frames. Of course you’ll need: density, bandwidth, low-latency, etc. More importantly you’ll need hardware that can define, interpret and enforce policy based on your new logical model. You’ll need to build hardware tailored to the way you define applications, connectivity and policy.  Hardware that can enforce policy based on logical groupings free of VLAN and subnet based policy instantiation.

If you build these out together, starting with the logical model then defining the OS and hardware to support it, you’ll have built a solution that surpasses the software shims of generation 1 SDN. You’ll have built a solution that focuses on removing the complexity first, then automating, then applying rapid deployment through tools usable by development and operations, better yet DevOps.

If you do that you’ll have truly defined networking based on software. You’ll have defined it from the top all the way down to the ASICs. If you do all that and get it right, you’ll have built Cisco’s Application Centric Infrastructure (ACI.)

For more information on the next generation of data center networking check out www.cisco.com/go/aci.

 

Disclaimer: ACI is where I’ve been focused for the last year or so, and where my paycheck comes from.  You can feel free to assume I’m biased and this article has no value due to that.  I won’t hate you for it.


It’s Our Time Down Here– “Underlays”

Recently while winding down from a long day I flipped the channel and “The Goonies” was on.  I left it there thinking an old movie I’d seen a dozen times would put me to sleep quickly.  As it turns out I quickly got back into it.  By the time the gang hit the wishing well and Mikey gave his speech I was inspired to write a blog, this one in particular.  “Cause it’s their time – their time up there.  Down here it’s our time, it’s our time down here.” 

This got me thinking about data center network overlays, and the physical networks that actually move the packets some Network Virtualization proponents have dubbed “underlays.”  The more I think about it, the more I realize that it truly is our time down here in the “lowly underlay.”  I don’t think there’s much argument around the need for change in data center networking, but there is a lot of debate on how.  Let’s start with their time up there “Network Virtualization.”

Network Virtualization

Unlike server virtualization, Network Virtualization doesn’t partition out the hardware and separate out resources.  Network Virtualization uses server virtualization to virtualize network devices such as: switches, routers, firewalls and load-balancers.  From there it creates virtual tunnels across the physical infrastructure using encapsulation techniques such as: VxLAN, NVGRE and STT.  The end result is a virtualized instantiation of the current data center network in x86 servers with packets moving in tunnels on physical networking gear which segregate them from other traffic on that gear.  The graphic below shows this relationship.

image

Network Virtualization in this fashion can provide some benefits in the form of provisioning time and automation.  It also introduces some new challenges, discussed in more detail here: What Network Virtualization Isn’t (be sure to read the comments for alternate viewpoints.)  What network virtualization doesn’t provide, in any form, is a change to the model we use to deploy networks and support applications.  The constructs and deployment methods for designing applications and applying policy are not changed or enhanced.  All of the same broken or misused methodologies are carried forward.  When working with customers beginning to virtualize servers I would always recommend against automated physical-to-virtual server migration, suggesting a rebuild in a virtual machine instead.

The reason for that is twofold.  First, server virtualization was a chance to re-architect based on lessons learned.  Second, simply virtualizing existing constructs is like hiring movers to pack your house along with the dirt/cobwebs/etc., then move it all to the new place and unpack.  The smart way to move a house is to donate or yard-sale what you won’t need, pack the things you do, move into a clean place and arrange optimally for the space.  The same applies to server and network virtualization.

Faithful replication of today’s networking challenges as virtual machines with encapsulation tunnels doesn’t move the bar for deploying applications.  At best it speeds up, and automates, bad practices.  Server virtualization hit the same challenges.  I discuss what’s needed from the ground up here: Network Abstraction and Virtualization: Where to Start?.  Software only network virtualization approaches are challenged by both restrictions of the hardware that moves their packets and issues with methodology of where the pain points really are.  It’s their time up there.

Underlays

The physical transport network, which is marginalized by some as the “underlay,” is actually more important in making a shift to network programmability, automation and flexibility.  Even network virtualization vendors will agree, to some extent, on this if you dig deep enough.  Once you cut through the marketecture of “the underlay doesn’t matter” you’ll find recommendations for a non-blocking fabric of 10G access ports and 40G aggregation in one design or another.  This is because they have no visibility into congestion and no control of delivery prioritization such as QoS.

Additionally Network Virtualization has no ability to abstract the constructs of VLAN, Subnet, Security, Logging, QoS from one another as described in the link above.  To truly move the network forward in a way that provides automation and programmability in a model that’s cohesive with application deployment, you need to include the physical network with the software that will drive it.  It’s our time down here.

By marrying physical innovations that provide a means for abstraction of constructs at the ground floor with software that can drive those capabilities, you end up with a platform that can be defined by the architecture of the applications that will utilize it.  This puts the infrastructure as a whole in a position to be  deployed in lock-step with the applications that create differentiation and drive revenue.  This focus on the application is discussed here: Focus on the Ball: The Application.  The figure below, from that post, depicts this.

image

 

The advantage to this ground up approach is the ability to look at applications as they exist, groups of interconnected services, rather than the application as a VM approach.  This holistic view can then be applied down to an infrastructure designed for automation and programmability.  Like constructing a building, your structure will only be as sound as the foundation it sits on.

For a little humor (nothing more) here’s my comic depiction of Network Virtualization.

image


Focus on the Ball: The Application

With the industry talking about Software Defined Networking (SDN) at full hype levels, there is one thing missing from many discussions: the application. SDN promises to rein in the complexity of network infrastructure and provide better tools for deploying services at scale. What often seems to be forgotten are the applications, which are the reason those networks exist. While application focus in itself is not a new concept, it seems lost in the noise around SDN as a whole, with a few exceptions such as Plexxi, which focuses on Application Affinity.

Current SDN approaches provide tools to solve issues in one portion or the other of network infrastructure. Flow control mechanisms look to centralize the distribution and configuration of routing and forwarding. Overlays look to build virtual networks on existing IP infrastructure. Virtualized L4-7 services provide solutions to configure, stitch in and control network services more closely to virtual machines themselves. None of these current approaches looks to tackle the whole picture from an application centric point of view. These solutions also take the myopic view that the VM is the network, which is far from the case.  The closest models fall into dev-ops or orchestration categories, but these require a deep understanding of the details and intricacies of the network.

In traditional networking environments there is a disconnect in communication between application and network teams. The languages and concepts are disparate enough that they don’t translate; there is no logical continuation from application developer or owner to network designer. Application teams speak in OS instances, application tiers and components, tooling, language, end-user demands, etc. while network teams speak in switch ports, VLANs, QoS, IP addressing and Access Control Lists (ACLs). The lack of common understanding and vocabulary causes architectures and implementations to suffer. The graphic below illustrates this relationship:

image

Building the flexible, scalable, manageable and programmable networks of the future requires a change in focus. The application needs to take center stage; it’s the apps that solve business problems. From this focus, logical and physical topology become secondary and are only designed once application requirements have been mapped out. Application centric policies must be designed first. Policies such as: security, load-balancing, QoS can all be designed based on application requirements, rather than network restrictions. Application developers define these requirements without the need to speak a network language.

Traditional networks begin with a physical topology that is layered with L2 and L3 logical topologies and assumed application mobility and service domains such as a services tier in the aggregation level. Once these topologies are architected and implemented applications are built and deployed on them. This method limits the capabilities available to the application and the services deployed on them.

Application security is an excellent example of a system that suffers from traditional architectures. Network security constructs are implemented in the form of ACLs on switches, routers and firewalls. These entries suffer from two major drawbacks: complexity of design/implementation and scale of the TCAM that stores the entries. This means that application policies must be communicated effectively to network engineers who must translate those requirements into implementable ACLs across multiple devices in the network. This is then defined manually device-by-device. This is a system ripe for PEBKAC errors (Problem Exists Between Keyboard and Chair.)
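
As a toy illustration of that fan-out (generic pseudo-ACL syntax, not any vendor’s CLI), a single application rule becomes an entry on every enforcement point, each one typed or pasted by hand:

```python
# Toy illustration of why device-by-device ACLs scale badly: one application
# rule fans out into an entry on every enforcement point, each a chance for
# drift or typos. The "ACL" syntax is generic pseudo-config, not a vendor CLI.
app_rule = {"src": "10.1.10.0/24", "dst": "10.1.20.0/24", "proto": "tcp", "port": 443}
enforcement_points = ["fw-edge-1", "agg-switch-1", "agg-switch-2", "tor-12", "tor-13"]

for device in enforcement_points:
    entry = (f"permit {app_rule['proto']} {app_rule['src']} "
             f"{app_rule['dst']} eq {app_rule['port']}")
    # In practice each of these lines is entered per device, and has to be
    # found and removed again when the application is retired.
    print(f"{device}: {entry}")
```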

The complexity and room for error in this system increases exponentially as networks scale, applications move and new services are needed. Additionally this leads to bad practice based on design limitations. Far too often outdated policy entries are left in place due to the complexity and risk of removing entries. This leads to residual entries in place consuming space long after an application is gone. Just as often policies are written more loosely than would be optimal in order to reduce required entries, and optimize space, through wild card summarization.

To break this cycle networking systems need to take an application centric approach which models actual application requirements onto the network in a top down fashion. Systems need to take into account the structure of the application, its components, and how those components interact then provide tools for designing logical policy maps of these relationships. From there these policy maps can be programmatically applied to the networking infrastructure.

An application is not a single software instance running on a server. Applications are made up of the end-points required in a given tier, the tiers required for the service delivered and the policies that define how those tiers communicate, and their unique requirements. The application as a whole must be taken into account in order to provide robust, scalable service delivery.

The illustration below shows this relationship in contrast to the diagram above:

image

In this model network and application teams develop the systems of policies that define application behavior and push them to the network. Taking the application as a whole into focus instead of the myopic view of VMs, switch ports or IP addresses allows cohesive deployment and manageability at scale. The application is the purpose of having a network; therefore the application should define the network.

This definition of the network by the application should be done in a language that the developers understand, and the network can interpret and implement. For example an app owner labels application traffic as ‘video’ and the network implements policies for bandwidth, QoS, etc. that video requires. These policies are predefined by the network engineers.
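
A minimal sketch of that hand-off, with placeholder values: the developer supplies only the label, and the network side resolves it to policy the engineers defined ahead of time.

```python
# Sketch: the app owner supplies only a label ("video"); the network side
# maps that label to policies engineers defined in advance. Values are
# illustrative placeholders.
PREDEFINED_POLICIES = {
    "video": {"dscp": "AF41", "min_bandwidth_mbps": 5, "burst": True},
    "bulk":  {"dscp": "AF11", "min_bandwidth_mbps": 1, "burst": False},
}

def policy_for(label: str) -> dict:
    """Return the network policy behind a developer-facing traffic label."""
    return PREDEFINED_POLICIES.get(label, {"dscp": "BE"})  # default: best effort

print(policy_for("video"))   # {'dscp': 'AF41', 'min_bandwidth_mbps': 5, 'burst': True}
```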

An application is more than an IP address and a set of rules; it is an ecosystem of interconnected devices and the policies that define their relationship. Traditional networking techniques anchor application deployment by defining applications in networking terms. In order to accelerate the application deployment (and re-deployment throughout its lifecycle) networks need to provide an application centric view and deployment model.


Network Abstraction and Virtualization: Where to Start?


With the growth of server virtualization, network designs and the associated network management constructs have been stretched beyond their intended uses. This has brought about data center networks that are unmanageable and slow to adapt to change. While servers and storage can be rapidly provisioned to bring on new services, the network itself has become a bottleneck of required administrative changes and inflexible constructs, limiting scalability and speed of adoption.

These constraints of modern data center networks have motivated network architects to look for workarounds, one current proposal being ‘network virtualization,’ which looks to apply the benefits of server virtualization to the network. Conceptually, network virtualization is the use of encapsulation techniques to create virtual overlays on existing network infrastructure. These methods use technologies such as VxLAN, STT, NVGRE, and others to wrap machine traffic in virtual IP overlays which can be transported across any Layer 3 infrastructure.

1. A primary benefit of these overlay techniques is the ability to scale beyond the limits of VLANs for network segmentation. Virtualization and multi-tenancy caused an explosion of network segments that strains traditional isolation techniques. With VLANs we are limited to 4,096 segments or fewer depending on implementation; the quick calculation after this list shows how that compares to overlay segment IDs. Other methods exist, such as placing ACLs within the hypervisor, but these also suffer limits in configuration and CPU overhead. The purpose of these techniques is creating application/tenant segmentation without security implications between segments. As the number of services and tenants grows these limits quickly become restrictive.

2. Another advantage of the network virtualization overlay is the ability to place workloads independent of physical locality and underlying topology. As long as IP connectivity is available the encapsulation handles delivery to end-point workloads. This provides greater flexibility in deployment, especially for virtualized workloads which receive encapsulation within the hypervisor switch. The operational benefit of this effect is the ability to place workloads where there is available capacity without restrictions from underlying network constructs.
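
To put the numbers from item 1 in context, the scale gap comes straight from header field sizes: the 802.1Q VLAN ID is a 12-bit field while the VXLAN network identifier is 24 bits.

```python
# Segment-count math behind the VLAN limit versus overlay segment IDs:
# the 802.1Q VLAN ID is a 12-bit field, the VXLAN VNI is 24 bits.
vlan_segments = 2 ** 12    # 4,096 (a few reserved in practice)
vxlan_segments = 2 ** 24   # 16,777,216
print(vlan_segments, vxlan_segments)
```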

Network virtualization does not come without drawbacks. The act of layering virtual networks over existing infrastructure puts an opaque barrier between the virtual workloads and the operation of the underlying infrastructure. This brings on issues with performance, quality of service (QoS) and network troubleshooting. This limitation is not seen with server virtualization, where the compute hypervisor is tightly coupled with the hardware and maintains visibility at both levels. The diagram below shows the relationship between network and server virtualization.

image

1. This lack of cross-visibility between the logical networks carrying production application traffic, and the physical network providing the packet delivery, leads to issues with application performance and system troubleshooting. With SDN techniques based on network virtualization through encapsulation, the packet delivery infrastructure is completely obfuscated by the encapsulation. This can lead to performance issues arising from lack of quality of service, altered multi-pathing ability, and others within the underlying network. This separation is shown in the diagram below.

image

2. Additionally these logical networks add a point of management to the network architecture. While they can hide the complexity of the underlying network for the purposes of application deployment, the network underneath still exists. The switching infrastructure must still be configured, managed and deployed as usual. All of the constructs shown above must still be architected and pushed into device configuration. Network virtualization provides perceived independence from the infrastructure but does not provide a means to manage the network as a whole.

3. The last challenge for network virtualization techniques is the ability to tie overlays back to traditional networking constructs understood by the network switches below. Switch hardware and software is designed to use VLANs which are tied to IP subnets and stitch security and services to these constructs. The overlay created by encapsulation does not alleviate these issues.

For example encapsulation techniques such as VxLAN provide far greater logical network scalability upwards of 16 million virtual networks. This logical scalability does not currently stitch into traditional switching equipment that assumes VLANs are global. Tighter cohesion will be required between physical switching infrastructure and hypervisor based access layers to provide robust services to real-world heterogeneous environments.

While overlay techniques provide a separate namespace and therefore a means for overlapping IP addressing, there will still be a need to architect the routing that handles this. In order to accomplish this, network functionality such as Virtual Routing and Forwarding (VRF) must be configured on the switching infrastructure, or virtual routers deployed in the hypervisor. VRF scalability is greatly limited by hardware implementation and will be far less than VxLAN scalability, while virtualized routers will consume CPU overhead and require additional architectural considerations. Without these techniques in place network tenants will require non-overlapping IP space.

Making a case for true abstraction

With network virtualization alone being overlaid onto existing infrastructure we just add layers of complexity. This occurs without correcting the issues that have arisen in traditional networking constructs; just adding network virtualization will do no more than amplify existing problems. A parallel can be drawn to server virtualization where the more rapid pace of server provisioning quickly brought out problems in underlying architecture and processes.

The underlying network consists of hardware, cabling, and Layer 2 / Layer 3 topologies that dictate traffic flow and potential application throughput. These layers have their own limitations and stability issues which are not addressed by network virtualization. Think of the OSI model in terms of building a house: the bottom layers (1-3) create a foundation, a frame, and a structure. Issues in those foundational layers will be exacerbated by each additional layer added on top.

Rather than applying an overlay technique such as a virtualization layer on top of existing architecture, IT architects will benefit greatly from abstracting the network constructs from the ground up first. Separating out logical and physical constructs, security, services, etc. prior to layering on overlays will provide a clean canvas on which to paint the future’s scalable feature rich networks. Virtualization must be built into the network from the ground up rather than layered on top. Again this parallels server virtualization where the greatest success has been seen in full virtualization of the hardware platform and tight integration down to bare metal. The end goal is addressing the underlying network issues rather than mask them with a virtualization layer.

The ties between network constructs such as VLAN, IP subnet, security, load-balancing etc. have placed constraints on the scalability and agility of the network. Each VLAN is provided an IP subnet, security and network services are then tied to these constructs. Addressing and location become the identifying characteristics of the network rather than the application requirements. This is not optimal behavior for a network responsible for elastic business services, workload anywhere designs, and ever increasing connectivity needs. These attributes and capabilities of connectivity must be abstracted in a new way to allow us to move beyond the constraints we have imposed by overloading or misusing these basic network constructs.

image

Rather than starting with a new coat of paint on a peeling building, abstraction takes a ground up approach. By looking at the purpose of each construct: VLAN = Broadcast domain, IP = addressing mechanism, etc. we can redesign with a goal to alleviate the unnecessary constraints that have been placed on today’s networks. With these constructs separated we can provide a transport capable of maximizing the performance, security and scalability of the applications using it.

Take a step back from traditional network thinking and think in terms of application needs without consideration of current deployment methodology. Think through the following questions leaving out concepts like: VLAN, Subnet, IP addressing, etc.:

  • How would you tie application tiers together?
  • How would you group like services?
  • What policies would be required between application tiers?
  • What services are required for a given application?
  • How does that application connect to the intranet and internet?

Separating out the applications and services required from the underlying architecture is not possible with today’s networks, virtualized or not. Overlay network virtualization alone may hide some of the complexities but does not provide tools for optimizing the delivery and holistic design. The conversation must include addressing, VLAN construct, location and service insertion. If these constructs are instead abstracted from one another, and the architecture, the conversation can revolve around application requirements rather than network restrictions.

Summary:

While network virtualization provides a set of tools for gaining greater network scale and application deployment flexibility, it is not a complete solution. Without true network abstraction and tools for visibility between the logical and physical network virtualization does no more than add complexity to existing problems. As was seen with server virtualization, layering virtualization on infrastructure issues and bad processes exponentially increases the complexity and room for error.

In order to truly scale networks in a sustainably manageable fashion we need to remove the ties of disparate network constructs by abstracting them out. Once these constructs operate independently of one another we’re provided a flexible architecture that removes the inherent complexity rather than leaving the problems and compounding them through layers of virtualization.

To build networks that meet current demands while being able to support the rapid scale and emerging requirements we need to rethink network design as a whole. Taking a top down look at what we need from the network without tying ourselves to the way in which we use the constructs today allows us to design towards the future and apply layers of abstraction down the stack to meet those goals.

Thinking about your network today, is virtualization alone solving the problems or adding a layer?

Network virtualization without network abstraction – results in short term patching with limited control of longer term operational complexity.

Network virtualization based on an abstracted network – results in effective control of both capital and operational expenses.


What Network Virtualization Isn’t

Brad Hedlund recently posted an excellent blog on Network Virtualization.  Network Virtualization is the label used by Brad’s employer VMware/Nicira for their implementation of SDN.  Brad’s article does a great job of outlining the need for changes in networking in order to support current and evolving application deployment models.  He also correctly points out that networking has lagged behind the rest of the data center as technical and operational advancements have been made. 

Network configuration today is laughably archaic when compared to storage, compute and even facilities.  It is still the domain of CLI wizards hacking away on keyboards to configure individual devices.  VMware brought advancements like resource utilization based automatic workload migration to the compute environment.  In order to support this behavior on the network an admin must ensure the appropriate configuration is manually defined on each port that workload may access and every port connecting the two.  This is time consuming, costly and error prone. Brad is right, this is broken.

Brad also correctly points out that network speeds, feeds and packet delivery are adequately evolving and that the friction lies in configuration, policy and service delivery.  These essential network components are still far too static to keep pace with application deployments.  The network needs to evolve, and rapidly, in order to catch up with the rest of the data center.

Brad and I do not differ on the problem(s), or better stated: we do not differ on the drivers for change.  We do however differ on the solution.  Let me preface in advance that Brad and I both work for HW/SW vendors with differing solutions to the problem and differing visions of the future.  Feel free to write the rest of this off as mindless drivel or vendor Kool-Aid; I ain’t gonna hate you for it.

Brad makes the case that Network Virtualization is equivalent to server virtualization, and from this simple assumption he poses it as the solution to current network problems.

Let’s start at the top: don’t be fooled by emphatic statements such as Brad’s stating that network virtualization is analogous to server virtualization.  It is not; this is an apples and oranges discussion.  Network virtualization being the orange, where you must peel the rind to get to the fruit.  Don’t take my word for it, one of Brad’s colleagues, Scott Lowe, a man much smarter than I, says it best:

image

The issue is that these two concepts are implemented in a very different fashion.  Where server virtualization provides full visibility and partitioning of the underlying hardware, network virtualization simply provides a packet encapsulation technique for frames on the wire.  The diagram below better illustrates our two fruits: apples and oranges.

image

As the diagram illustrates we are not working with equivalent approaches.  Network virtualization would require partitioning of switch CPU, TCAM, ASIC forwarding, bandwidth etc. to be a true apples-to-apples comparison.  Instead it provides a simple wrapper to encapsulate traffic on the underlying Layer 3 infrastructure.  These are two very different virtualization approaches.

Brad makes his next giant leap in the “What is the Network” section.  Here he makes the assumption that the network consists of only virtual workloads, “The ‘network’ we want to virtualize is the complete L2-L7 services viewed by the virtual machines,” and the rest of his blog focuses there.  This is fine for those data center environments that are 100% virtualized, including servers, services and WAN connectivity, and use server virtualization for all of those purposes.  Those environments must also lack PaaS and SaaS systems that aren’t built on virtual servers, as those are also non-applicable to the remaining discussion.  So anyone in those environments described will benefit from the discussion, anyone <crickets>.

So Brad and, presumably, VMware/Nicira (since network virtualization is their term) define the goal as taking “all of the network services, features, and configuration necessary to provision the application’s virtual network (VLANs, VRFs, Firewall rules, Load Balancer pools & VIPs, IPAM, Routing, isolation, multi-tenancy, etc.) – take all of those features, decouple it from the physical network, and move it into a virtualization software layer for the express purpose of automation.”  So if you’re looking to build 100% virtualized server environments with no plans to advance up the stack into PaaS, etc. it seems you have found your Huckleberry.

What we really need is not a virtualized network overlay running on top of an L3 infrastructure with no communication or correlation between the two.  What we really need is something another guy much smarter than me (Greg Ferro) described:

image

Abstraction, independence and isolation, that’s the key to moving the network forward.  This is not provided by network virtualization.  Network virtualization is a coat of paint on the existing building.  Furthermore, that coat of paint is applied without stripping, priming, or removing that floral wallpaper your grandmother loved.  The diagram below is how I think of it.

Network Virtualization

With a network virtualization solution you’re placing your applications on a house of cards built on a non-isolated infrastructure of legacy design and thinking.  Without modifying the underlying infrastructure, network virtualization solutions are only as good as the original foundation.  Of course you could replace the data center network with a non-blocking fabric and apply QoS consistently across that underlying fabric (most likely manually) as Brad Hedlund suggests below.

image

If this is the route you take, to rebuild the foundation before applying network virtualization paint, is network virtualization still the color you want?  If a refresh and reconfigure is required anyway, is this the best method for doing so? 

The network has become complex and unmanageable due to things far older than VMware and server virtualization.  We’ve clung to device-centric CLI configuration and the realm of keyboard wizards.  Furthermore we’ve bastardized originally abstracted network constructs such as VLAN, VRF, addressing, routing, and security, tying them together and creating a Frankenstein of a data center network.  Are we surprised the villagers are coming with torches and pitchforks?

So overall I agree with Brad, the network needs to be fixed.  We just differ on the solution, I’d like to see more than a coat of paint.  Put lipstick on a pig and all you get is a pretty pig.

lipstick pig


CloudStack Graduates to Top-Level Apache Project

The Apache Software Foundation announced in late March that CloudStack is now a top-level project. This is a promotion from CloudStack’s incubator status, where it had lived after being released as open source by Citrix.

This promotion provides additional encouragement to companies and developers looking to contribute to the project, because it validates the CloudStack community and demonstrates ongoing support under the Apache Software Foundation. To read more visit the full article.


VXLAN Deep Dive – Part II

In part one of this post I covered the basic theory of operations and functionality of VXLAN (http://www.definethecloud.net/vxlan-deep-dive.)  This post will dive deeper into how VXLAN operates on the network.

Let’s start with the basic concept that VXLAN is an encapsulation technique.  Basically the Ethernet frame sent by a VXLAN connected device is encapsulated in an IP/UDP packet.  The most important thing here is that it can be carried by any IP capable device.  The only time added intelligence is required in a device is at the network bridges known as VXLAN Tunnel End-Points (VTEP) which perform the encapsulation/de-encapsulation.  This is not to say that benefit can’t be gained by adding VXLAN functionality elsewhere, just that it’s not required.
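
As a rough sketch of that encapsulation (not a production implementation), the 8-byte VXLAN header carries a flags byte and the 24-bit segment ID, and the whole thing rides in UDP (IANA-assigned destination port 4789) between VTEP IP addresses:

```python
# Sketch of the encapsulation itself: the original Ethernet frame rides inside
# UDP (IANA port 4789 for VXLAN) behind an 8-byte VXLAN header carrying the
# 24-bit segment ID (VNI). Outer IP source/destination are the two VTEPs.
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags (I bit set), reserved, 24-bit VNI, reserved."""
    flags = 0x08 << 24                     # "VNI present" flag in the first byte
    return struct.pack("!II", flags, vni << 8)

inner_frame = b"\x00" * 64                 # stand-in for the original Ethernet frame
vxlan_payload = vxlan_header(5000) + inner_frame
# A real VTEP would now prepend UDP (dst port 4789), outer IP (VTEP to VTEP)
# and outer Ethernet headers before putting the packet on the wire.
print(len(vxlan_payload))                  # 8-byte header + inner frame
```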

image

Providing Ethernet Functionality on IP Networks:

As discussed in Part 1, the source and destination IP addresses used for VXLAN are the Source VTEP and destination VTEP.  This means that the VTEP must know the destination VTEP in order to encapsulate the frame.  One method for this would be a centralized controller/database.  That being said VXLAN is implemented in a decentralized fashion, not requiring a controller.  There are advantages and drawbacks to this.  While utilizing a centralized controller would provide methods for address learning and sharing, it would also potentially increase latency, require large software driven mapping tables and add network management points.  We will dig deeper into the current decentralized VXLAN deployment model.

VXLAN maintains backward compatibility with traditional Ethernet and therefore must maintain some key Ethernet capabilities.  One of these is flooding (broadcast) and ‘flood and learn’ behavior.  I cover some of this behavior here (http://www.definethecloud.net/data-center-101-local-area-network-switching)  but the summary is that when a switch receives a frame for an unknown destination (a MAC not in its table) it will flood the frame to all ports except the one on which it was received.  Eventually the frame will get to the intended device and a reply will be sent by that device, which allows the switch to learn the MAC’s location.  When switches see source MACs that are not in their tables they will ‘learn’ them by adding them.

VXLAN encapsulates over IP, and IP networks are typically designed for unicast traffic (one-to-one.)  This means there is no inherent flood capability.  In order to mimic flood and learn on an IP network VXLAN uses IP multicast.  IP multicast provides a method for distributing a packet to a group.  This use of IP multicast can be a contentious point within VXLAN discussions because most networks aren’t designed for IP multicast, IP multicast support can be limited, and multicast itself can be complex depending on implementation.

Within VXLAN, each VXLAN segment ID is subscribed to a multicast group.  Multiple VXLAN segments can subscribe to the same group; this minimizes configuration but increases unneeded network traffic.  When a device attaches to a VXLAN segment on a VTEP that was not previously using it, the VTEP will join the IP multicast group assigned to that segment and start receiving messages.
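
A toy sketch of that mapping and join-on-first-use behavior, with made-up group addresses and no real IGMP/PIM calls:

```python
# Sketch of segment-to-group mapping and join-on-first-use. The group
# addresses and join behavior are placeholders, not a real IGMP/PIM API.
VNI_TO_GROUP = {5000: "239.1.1.100", 5001: "239.1.1.100", 5002: "239.1.1.101"}
joined_groups = set()   # groups this VTEP already receives

def attach_device(vni: int):
    """First device on a segment triggers the VTEP to join that segment's group."""
    group = VNI_TO_GROUP[vni]
    if group not in joined_groups:
        joined_groups.add(group)
        print(f"VTEP joins multicast group {group} for VNI {vni}")

attach_device(5000)   # joins 239.1.1.100
attach_device(5001)   # shares the group: no new join, but extra flooded traffic
```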

image

In the diagram above we see the normal operation in which the destination MAC is known and the frame is encapsulated in IP using the source and destination VTEP address.  The frame is encapsulated by the source VTEP, de-encapsulated at the destination VTEP and forwarded based on bridging rules from that point.  In this operation only the destination VTEP will receive the frame (with the exception of any devices in the physical path, such as the core IP switch in this example.)

image

In the example above we see an unknown MAC address (the MAC to VTEP mapping does not exist in the table.)  In this case the source VTEP encapsulates the original frame in an IP multi-cast packet with the destination IP of the associated multicast group.  This frame will be delivered to all VTEPs participating in the group.  VTEPs participating in the group will ideally only be VTEPs with connected devices attached to that VXLAN segment.  Because multiple VXLAN segments can use the same IP multicast group this is not always the case.  The VTEP with the connected device will de-encapsulate and forward normally, adding the mapping from the source VTEP if required.  Any other VTEP that receives the packet can then learn the source VTEP/MAC mapping if required and discard it. This process will be the same for other traditionally flooded frames such as ARP, etc.  The diagram below shows the logical topologies for both traffic types discussed.

image
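
Pulling the two cases together, here’s a toy model of the VTEP’s forwarding choice for a single segment, with illustrative addresses: known MACs go unicast to the mapped VTEP, unknown MACs go to the segment’s multicast group, and mappings are learned from the source fields of received packets.

```python
# Toy model of a VTEP's forwarding choice for one VXLAN segment: unicast to a
# known remote VTEP, or flood via the segment's multicast group when the
# destination MAC is unknown. Addresses are illustrative.
mac_to_vtep = {"00:25:b5:aa:aa:01": "10.0.0.1"}   # learned MAC -> remote VTEP IP
SEGMENT_GROUP = "239.1.1.100"

def outer_destination(dst_mac: str) -> str:
    """Pick the outer IP destination for an encapsulated frame."""
    return mac_to_vtep.get(dst_mac, SEGMENT_GROUP)

def learn(src_mac: str, src_vtep: str):
    """Populate the table from the source fields of received VXLAN packets."""
    mac_to_vtep.setdefault(src_mac, src_vtep)

print(outer_destination("00:25:b5:aa:aa:01"))  # 10.0.0.1 (unicast)
print(outer_destination("00:25:b5:bb:bb:02"))  # 239.1.1.100 (flood via multicast)
learn("00:25:b5:bb:bb:02", "10.0.0.2")         # reply teaches us the mapping
print(outer_destination("00:25:b5:bb:bb:02"))  # now unicast to 10.0.0.2
```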

As discussed in Part 1 VTEP functionality can be placed in a traditional Ethernet bridge.  This is done by placing a logical VTEP construct within the bridge hardware/software.  With this in place VXLANs can bridge between virtual and physical devices.  This is necessary for physical server connectivity, as well as to add network services provided by physical appliances.  Putting it all together the diagram below shows physical servers communicating with virtual servers in a VXLAN environment.  The blue links are traditional IP links and the switch shown at the bottom is a standard L3 switch or router.  All traffic on these links is encapsulated as IP/UDP and broken out by the VTEPs.

image

Summary:

VXLAN provides backward compatibility with traditional VLANs by mimicking broadcast and multicast behavior through IP multicast groups.  This functionality provides for decentralized learning by the VTEPs and negates the need for a VXLAN controller.


Digging Into the Software Defined Data Center

The software defined data center is a relatively new buzzword embraced by the likes of EMC and VMware.  For an introduction to the concept see my article over at Network Computing (http://www.networkcomputing.com/data-center/the-software-defined-data-center-dissect/240006848.)  This post is intended to take it a step deeper as I seem to be stuck at 30,000 feet for the next five hours with no internet access and no other decent ideas.  For the purpose of brevity (read: laziness) I’ll use the acronym SDDC for Software Defined Data Center whether or not this is being used elsewhere.

First let’s look at what you get out of a SDDC:

Legacy Process:

In a traditional legacy data center the workflow for implementing a new service would look something like this:

  1. Approval of the service and budget
  2. Procurement of hardware
  3. Delivery of hardware
  4. Rack and stack of new hardware
  5. Configuration of hardware
  6. Installation of software
  7. Configuration of software
  8. Testing
  9. Production deployment

This process would vary greatly in overall time but 30-90 days is probably a good ballpark (I know, I know, some of you are wishing it happened that fast.)

Not only is this process complex and slow but it has inherent risk.  Your users are accustomed to on-demand IT services in their personal life.  They know where to go to get it and how to work with it.  If you tell a business unit it will take 90 days to deploy an approved service they may source it from outside of IT.  This type of shadow IT poses issues for security, compliance, backup/recovery etc. 

SDDC Process:

As described in the link above an SDDC provides a complete decoupling of the hardware from the services deployed on it.  This provides a more fluid system for IT service change: growing, shrinking, adding and deleting services.  Conceptually the overall infrastructure would maintain an agreed upon level of spare capacity and would be added to as thresholds were crossed.  This would provide an ability to add services and grow existing services on the fly in all but the most extreme cases.  Additionally the management and deployment of new services would be software driven through intuitive interfaces rather than hardware driven and disparate CLI based.

The process would look something like this:

  1. Approval of the service and budget
  2. Installation of software
  3. Configuration of software
  4. Testing
  5. Production deployment

The removal of four steps is not the only benefit.  The remaining five steps are streamlined into automated processes rather than manual configurations.  Change management systems and trackback/chargeback are incorporated into the overall software management system providing a fluid workflow in a centralized location.  These processes will be initiated by authorized IT users through self-service portals.  The speed at which business applications can be deployed is greatly increased providing both flexibility and agility.
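
A deliberately simplified sketch of that streamlined flow, with every name and threshold invented for illustration: a self-service request is checked against pooled capacity, deployed, and recorded for chargeback, with a flag raised when spare capacity crosses the agreed threshold.

```python
# Deliberately simplified sketch of the SDDC flow described above: a
# self-service request is checked against pooled spare capacity, deployed,
# and recorded for chargeback. Every name and value is a placeholder.
CAPACITY = {"cpu_cores": 128, "memory_gb": 1024}
SPARE_THRESHOLD = 0.2   # keep 20% headroom before ordering more hardware

def record_chargeback(owner, cpu, mem):
    print(f"chargeback: {owner} consumed {cpu} cores / {mem} GB")

def request_service(name: str, cpu: int, mem: int, owner: str) -> str:
    if cpu > CAPACITY["cpu_cores"] or mem > CAPACITY["memory_gb"]:
        return f"{name}: queued - hardware expansion required"
    CAPACITY["cpu_cores"] -= cpu
    CAPACITY["memory_gb"] -= mem
    record_chargeback(owner, cpu, mem)          # placeholder bookkeeping
    if CAPACITY["cpu_cores"] < 128 * SPARE_THRESHOLD:
        print("threshold crossed: order more capacity")
    return f"{name}: deployed for {owner}"

print(request_service("order-entry", cpu=16, mem=64, owner="sales-bu"))
```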

Isn’t that cloud?

Yes, no and maybe.  Or as we say in the IT world: ‘It depends.’  SDDC can be cloud; with on-demand self-service, flexible resource pooling, metered service etc. it fits the cloud model.  The difference is really in where and how it’s used.  A public cloud based IaaS model, or any given PaaS/SaaS model, does not lend itself to legacy enterprise applications.  For instance you’re not migrating your Microsoft Exchange environment onto Amazon’s cloud.  Those legacy applications and systems still need a home.  Additionally those existing hardware systems still have value.  SDDC offers an evolutionary approach to enterprise IT that can support both legacy applications and new applications written to take advantage of cloud systems.  This provides a migration approach as well as investment protection for traditional IT infrastructure.

How it works:

The term ‘Cloud Operating System’ is thrown around frequently in the same conversation as SDDC.  The idea is that compute, network and storage are raw resources consumed by the applications and services we run to drive our businesses.  Rather than look at these resources individually, and manage them as such, we plug them into a management infrastructure that understands them and can utilize them as services require them.  Forget the hardware underneath and imagine a dashboard of your infrastructure something like the following graphic.

image

 

The hardware resources become raw resources to be consumed by the IT services.  For legacy applications this can be very traditional virtualization or even physical server deployments.  New applications and services may be deployed in a PaaS model on the same infrastructure allowing for greater application scale and redundancy and even less tie to the hardware underneath.

Lifting the kimono:

Taking a peek underneath the top level reveals a series of technologies both new and old.  Additionally there are some requirements that may or may not be met by current technology offerings. We’ll take a look through the compute, storage and network requirements of SDDC one at a time, starting with compute and working our way up.

Compute is the layer that requires the least change.  Years ago we moved to the commodity x86 hardware which will be the base of these systems.  The compute platform itself will be differentiated by CPU and memory density, platform flexibility and cost. Differentiators traditionally built into the hardware such as availability and serviceability features will lose value.  Features that will continue to add value will be related to infrastructure reduction and enablement of upper level management and virtualization systems.  Hardware that provides flexibility and programmability will be king here and at other layers as we’ll discuss.

Other considerations at the compute layer will tie closely into storage.  As compute power itself has grown by leaps and bounds  our networks and storage systems have become the bottleneck.  Our systems can process our data faster than we can feed it to them.  This causes issues for power, cooling efficiency and overall optimization.  Dialing down performance for power savings is not the right answer.  Instead we want to fuel our processors with data rather than starve them.  This means having fast local data in the form of SSD, flash and cache.

Storage will require significant change, but changes that are already taking place or foreshadowed in roadmaps and startups.  The traditional storage array will become more and more niche as it has limited capacities of both performance and space.  In its place we’ll see new options including, but not limited to migration back to local disk, and scale-out options.  Much of the migration to centralized storage arrays was fueled by VMware’s vMotion, DRS, FT etc.  These advanced features required multiple servers to have access to the same disk, hence the need for shared storage.  VMware has recently announced a combination of storage vMotion and traditional vMotion that allows live migration without shared storage.  This is available in other hypervisor platforms and makes local storage a much more viable option in more environments.

Scale-out systems on the storage side are nothing new.  Lefthand and Equalogic pioneered much of this market before being bought by HP and Dell respectively.  The market continues to grow with products like Isilon (acquired by EMC) making a big splash in the enterprise as well as plays in the Big Data market.  NetApp’s cluster mode is now in full effect with OnTap 8.1 allowing their systems to scale out.  In the SMB market new players with fantastic offerings like Scale Computing are making headway and bringing innovation to the market.  Scale-out provides a more linear growth path as both I/O and capacity increase with each additional node.  This is contrary to traditional systems, which are always bottlenecked by the storage controller(s).

We will also see moves to central control, backup and tiering of distributed storage, such as storage blades and server cache.  Having fast data at the server level is a necessity but solves only part of the problem.  That data must also be made fault tolerant as well as available to other systems outside the server or blade enclosure.  EMC’s VFcache is one technology poised to help with this by adding the server as a storage tier for software tiering.  Software such as this places the hottest data directly next to the processor, with tier options all the way back to SAS, SATA, and even tape.

By now you should be seeing the trend of software based feature and control.  The last stage is within the network which will require the most change.  Network has held strong to proprietary hardware and high margins for years while the rest of the market has moved to commodity.  Companies like Arista look to challenge the status quo by providing software feature sets, or open programmability layered onto fast commodity hardware.  Additionally Software Defined Networking (http://www.definethecloud.net/sdn-centralized-network-command-and-control) has been validated by both VMware’s acquisition of Nicira and Cisco’s spin-off of Insieme which by most accounts will expand upon the CiscoOne concept with a Cisco flavored SDN offering.  In any event the race is on to build networks based on software flows that are centrally managed rather than the port-to-port configuration nightmare of today’s data centers. 

This move is not only for ease of administration, but is also required to push our systems to the levels required by cloud and SDDC.  These multi-tenant systems running disparate applications at various service tiers require tighter quality of service controls and bandwidth guarantees, as well as more intelligent routes.  Today’s physically configured networks can’t provide these controls.  Additionally applications will benefit from network visibility, allowing them to request specific flow characteristics from the network based on application or user requirements.  Multiple service levels can be configured on the same physical network allowing traffic to take appropriate paths based on type rather than physical topology.  These network changes are required to truly enable SDDC and cloud architectures.

Further up the stack from the Layer 2 and Layer 3 transport networks comes a series of other network services that will be layered in via software.  Features such as: load-balancing, access-control and firewall services will be required for the services running on these shared infrastructures.  These network services will need to be deployed with new applications and tiered to the specific requirements of each.  As with the L2/L3 services manual configuration will not suffice and a ‘big picture’ view will be required to ensure that network services match application requirements.  These services can be layered in from both physical and virtual appliances  but will require configurability via the centralized software platform.

Summary:

By combining current technology trends, emerging technologies and layering in future concepts the software defined data center will emerge in evolutionary fashion.  Today’s highly virtualized data centers will layer on technologies such as SDN while incorporating new storage models bringing their data centers to the next level.  Conceptually picture a mainframe pooling underlying resources across a shared application environment.  Now remove the frame.


SDN – Centralized Network Command and Control

Software Defined Networking (SDN) is a hot topic in the data center and cloud community.  The geniuses <sarcasm> over at IDC predict a $2 billion market by 2016 (expect this number to change often between now and then, and look closely at what they count in the cost.) The concept has the potential to shake up the networking business as a whole (http://www.networkcomputing.com/next-gen-network-tech-center/240001372) and has both commercial and open source products being developed and shipping, but what is it, and why?

Let’s start with the why by taking a look at how traditional networking occurs.

 

Traditional Network Architecture:

 

image

 

The most important thing to notice in the graphic above is the separate control and data planes.  Each plane has separate tasks that provide the overall switching/routing functionality.  The control plane is responsible for configuration of the device and programming the paths that will be used for data flows.  When you are managing a switch you are interacting with the control plane.  Things like route tables and Spanning-Tree Protocol (STP) are calculated in the control plane.   This is done by accepting information frames such as BPDUs or Hello messages and processing them to determine available paths.  Once these paths have been determined they are pushed down to the data plane and typically stored in hardware.  The data plane then typically makes path decisions in hardware based on the latest information provided by the control plane.  This has traditionally been a very effective method.  The hardware decision making process is very fast, reducing overall latency, while the control plane itself handles the heavier processing and configuration requirements.

This method is not without problems; the one we will focus on is scalability. To demonstrate the scalability issue I find it easiest to use Quality of Service (QoS) as an example. QoS allows forwarding priority to be given to specific frames for scheduling purposes based on characteristics in those frames. This allows network traffic to receive appropriate treatment in times of congestion. For instance, latency-sensitive voice and video traffic is typically engineered for high priority to ensure the best user experience. Traffic prioritization is typically based on tags in the frame known as Class of Service (CoS) and/or Differentiated Services Code Point (DSCP). These tags must be marked consistently for frames entering the network, and rules must then be applied consistently for their treatment on the network. This becomes cumbersome in a traditional multi-switch network because the configuration must be duplicated in some fashion on each individual switching device.
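
A short sketch of why this duplication hurts: the same marking and queueing policy has to be rendered and applied on every switch, one device at a time. The apply_config helper below is a placeholder for whatever per-device mechanism (CLI scripting, SNMP, a vendor tool) is actually in use, and the IOS-style commands are shown purely as an example.

# Illustration of the per-device duplication problem: the identical QoS policy
# must be applied to every switch individually.
switches = ["switch01", "switch02", "switch03"]  # in practice, dozens or hundreds

qos_policy = [
    "class-map match-any VOICE",
    " match dscp ef",
    "policy-map EDGE-QOS",
    " class VOICE",
    "  priority percent 20",
]

def apply_config(switch, commands):
    """Placeholder: a real tool would push these lines to the device."""
    print(f"--- applying to {switch} ---")
    for line in commands:
        print(line)

for switch in switches:  # one management point per device (and per port)
    apply_config(switch, qos_policy)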

For a simpler example of the current administrative challenges, consider that each port in the network is a management point, meaning each port must be individually configured. This is both time consuming and cumbersome.

Additional challenges exist in properly classifying data and routing traffic. A fantastic example of this is two different traffic types, iSCSI and voice. iSCSI is storage traffic, typically carried in full-size or even jumbo frames, while voice data is typically transmitted in very small packets. They also have different requirements: voice is very latency sensitive in order to maintain call quality, while iSCSI is less latency sensitive but benefits from more bandwidth. Traditional networks have few if any tools to differentiate these traffic types and send them down separate paths that suit them both.

These types of issues are what SDN looks to solve.

The Three Key Elements of SDN:

  • Ability to manage the forwarding of frames/packets and apply policy
  • Ability to perform this at scale in a dynamic fashion
  • Ability to be programmed

Note: In order to qualify as SDN an architecture does not have to be Open, standard, interoperable, etc.  A proprietary architecture can meet the definition and provide the same benefits.  This blog does not argue for or against either open or proprietary architectures.

An SDN architecture must be able to manipulate frame and packet flows through the network at large scale, and do so in a programmable fashion. The hardware plumbing of an SDN will typically be designed as a converged (capable of carrying all data types, including desired forms of storage traffic) mesh of large, low-latency pipes commonly called a fabric. The SDN architecture itself will in turn provide a network-wide view and the ability to manage the network and network flows centrally.

This architecture is accomplished by separating the control plane from the data plane devices and providing a programmable interface for that separated control plane. The data plane devices receive forwarding rules from the separated control plane and apply those rules in hardware ASICs. These ASICs can be either commodity switching ASICs or customized silicon, depending on the functionality and performance required. The diagram below depicts this relationship:

image

In this model the SDN controller provides the control plane, and the data plane is comprised of hardware switching devices. These can be either new hardware devices or existing hardware devices with specialized firmware, depending on vendor and deployment model. One major advantage clearly shown in this example is the visibility provided to the control plane. Rather than each individual data plane device relying on advertisements from other devices to build its view of the network topology, a single control plane device has a view of the entire network. This provides a platform from which advanced routing, security, and quality decisions can be made, hence the need for programmability. Another major capability that can be drawn from this centralized control is visibility: with a centralized controller it is much easier to gain usable data about real-time flows on the network and make decisions (automated or manual) based on that data.
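
As a sketch of what 'programmable' means in practice, the snippet below pushes a match/action flow rule to a controller's northbound interface. The URL and JSON layout are assumptions loosely modeled on OpenFlow-style match/action semantics, not any specific controller's API.

import requests

# Hypothetical northbound call: install a match/action rule on the edge
# switches via the central controller. Endpoint and field names are assumed.
flow_rule = {
    "switches": "all-edge",
    "match": {"eth_type": "ipv4", "ip_proto": "tcp", "tcp_dst": 3260},  # iSCSI
    "actions": [{"type": "set_queue", "queue_id": 2},
                {"type": "output", "port": "fabric-uplink-1"}],
    "priority": 100,
}

resp = requests.post("https://sdn-controller.example.com/api/flows",
                     json=flow_rule, timeout=5)
resp.raise_for_status()
print("Installed flow:", resp.json().get("flow_id"))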

This diagram only shows a portion of the picture, as it is focused on physical infrastructure and servers. Another major benefit is the integration of virtual server environments into SDN networks, which allows centralized management of consistent policies for both virtual and physical resources. Integrating a virtual network is done by having a Virtual Ethernet Bridge (VEB) in the hypervisor that can be controlled by an SDN controller. The diagram below depicts this:

image

This diagram more clearly depicts the integration between virtual and physical networking systems needed for cohesive, consistent control of the network. This plays an even more important role as virtual workloads migrate. Because both the virtual and physical data planes are managed centrally by the control plane, when a VM migration happens its network configuration can move with it regardless of destination in the fabric. This is a key benefit for policy enforcement in virtualized environments, because more granular controls can be placed on the VM itself as an individual port, and those controls stick with the VM throughout the environment.
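
A minimal sketch of the 'policy follows the VM' idea: when the controller learns a VM has moved, it re-binds the same per-VM policy at the new attachment point. The event format and the bind_policy call are hypothetical.

# Hypothetical controller-side handler: when a VM migrates, re-apply its
# existing port policy at the new attachment point so policy follows the VM.
vm_policies = {
    "vm-web-01": {"vlan": 110, "acl": "web-tier", "rate_limit_mbps": 200},
}

def bind_policy(switch, port, policy):
    print(f"binding {policy} to {switch}:{port}")  # placeholder for a real southbound call

def on_vm_migrated(event):
    """event is assumed to look like:
    {"vm": ..., "old": (switch, port), "new": (switch, port)}"""
    policy = vm_policies[event["vm"]]
    old_switch, old_port = event["old"]
    new_switch, new_port = event["new"]
    bind_policy(new_switch, new_port, policy)   # same policy, new location
    print(f"released {old_switch}:{old_port}")  # clean up the old binding

on_vm_migrated({"vm": "vm-web-01",
                "old": ("vswitch-h1", "veth3"),
                "new": ("vswitch-h7", "veth12")})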

Note: These diagrams are a generalized depiction of an SDN architecture.  Methods other than a single separated controller could be used, but this is the more common concept.

With a system in place for centralized command and control of the network through SDN, and a programmable interface on top of it, more intelligent processes can be added to handle complex requirements. Real-time decisions can be made for traffic optimization, security, outages, or maintenance. Separate traffic types can run side by side while receiving different paths and forwarding behavior that responds dynamically to network changes.
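
Tying back to the earlier iSCSI-versus-voice example, here is a rough sketch of the kind of decision loop a centralized controller enables: read network-wide link statistics, then steer latency-sensitive and bandwidth-hungry classes onto different paths. The get_link_stats and install_path helpers are hypothetical stand-ins for real controller calls.

# Sketch of a controller-side decision loop: steer traffic classes onto the
# paths that suit them, using statistics only a centralized control plane sees.
def get_link_stats():
    # Placeholder data; a real controller would poll the data plane devices.
    return {"path-a": {"latency_ms": 2, "utilization": 0.35},
            "path-b": {"latency_ms": 9, "utilization": 0.10}}

def install_path(traffic_class, path):
    print(f"steering {traffic_class} onto {path}")  # placeholder southbound call

def rebalance():
    stats = get_link_stats()
    lowest_latency = min(stats, key=lambda p: stats[p]["latency_ms"])
    most_headroom = min(stats, key=lambda p: stats[p]["utilization"])
    install_path("voice", lowest_latency)  # latency-sensitive traffic
    install_path("iscsi", most_headroom)   # bandwidth-hungry storage traffic

rebalance()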

Summary:

Software Defined Networking has the potential to disrupt the networking market and move us past the days of the switch/router jockey. This shift will provide significant benefits in the form of flexibility, scalability and traffic performance for data center networks. While all of the aspects are not yet defined, SDN projects such as OpenFlow (www.openflow.org) provide the tools to begin testing and developing SDN architectures on supported hardware. Expect to see lots of changes in this ecosystem and many flavors in the vendor offerings.


Private Cloud Infrastructure Design: Go Beyond Best Practices

Of all of the possible benefits of a private cloud infrastructure, one of the most valuable is flexibility. With a properly designed private cloud infrastructure, the data center environment can fluidly shift with the business. This allows new applications to be deployed to meet business demands as they’re identified, and legacy applications to be removed when the value is no longer recognized. To see the full article visit: http://www.networkcomputing.com/private-cloud/240002196
