It’s Our Time Down Here – “Underlays”

Recently, while winding down from a long day, I flipped the channel and “The Goonies” was on. I left it there, thinking an old movie I’d seen a dozen times would put me to sleep quickly. As it turns out, I quickly got back into it. By the time the gang hit the wishing well and Mikey gave his speech, I was inspired to write a blog post, this one in particular. “Cause it’s their time – their time up there. Down here it’s our time, it’s our time down here.”

This got me thinking about data center network overlays, and the physical networks that actually move the packets, which some Network Virtualization proponents have dubbed “underlays.” The more I think about it, the more I realize that it truly is our time down here in the “lowly underlay.” I don’t think there’s much argument about the need for change in data center networking, but there is plenty of debate on how. Let’s start with their time up there: “Network Virtualization.”

Network Virtualization

Unlike server virtualization, Network Virtualization doesn’t partition hardware and carve out resources. Instead, it uses server virtualization to virtualize network devices such as switches, routers, firewalls and load-balancers. From there it creates virtual tunnels across the physical infrastructure using encapsulation techniques such as VxLAN, NVGRE and STT. The end result is a virtualized instantiation of the current data center network running in x86 servers, with packets moving through tunnels on physical networking gear that segregate them from other traffic on that gear. The graphic below shows this relationship.

[Figure: virtual network devices and tunnels riding over the physical infrastructure]
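
To make the encapsulation piece concrete, here’s a rough Python sketch of what a VxLAN-style tunnel header looks like on the wire. The UDP port and flag values are the real RFC 7348 ones; the helper function and frame contents are purely illustrative, not any vendor’s implementation:

```python
import struct

VXLAN_UDP_PORT = 4789   # IANA-assigned UDP port for VxLAN
VXLAN_FLAG_VNI = 0x08   # "I" flag: the VNI field is valid

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VxLAN header (RFC 7348) to an Ethernet frame.

    Layout: flags (1 byte), reserved (3), VNI (3), reserved (1).
    The result rides inside an outer UDP/IP packet; the physical
    "underlay" forwards on the outer headers only and never sees
    the tenant traffic inside the tunnel.
    """
    header = struct.pack("!B3s3sB",
                         VXLAN_FLAG_VNI,
                         b"\x00\x00\x00",          # reserved
                         vni.to_bytes(3, "big"),   # 24-bit segment ID
                         0)                        # reserved
    return header + inner_frame

# A hypothetical inner frame (in practice, a full Ethernet frame from
# a VM's vNIC); VNI 5000 identifies the tenant segment.
packet = vxlan_encap(b"\x00" * 64, vni=5000)
assert len(packet) == 8 + 64
```

That 8 bytes of header is the whole trick: the underlay just sees UDP packets between hypervisors.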

Network Virtualization in this fashion can provide some benefits in the form of faster provisioning and automation. It also introduces some new challenges, discussed in more detail here: What Network Virtualization Isn’t (be sure to read the comments for alternate viewpoints). What network virtualization doesn’t provide, in any form, is a change to the model we use to deploy networks and support applications. The constructs and deployment methods for designing applications and applying policy are not changed or enhanced. All of the same broken or misused methodologies are carried forward. When working with customers beginning to virtualize servers, I would always recommend against automated physical-to-virtual server migration, suggesting a rebuild in a virtual machine instead.

The reason for that is twofold. First, server virtualization was a chance to re-architect based on lessons learned. Second, simply virtualizing existing constructs is like hiring movers to pack your house, dirt, cobwebs and all, then move everything to the new place and unpack it. The smart way to move is to donate or sell what you won’t need, pack the things you do, move into a clean place and arrange optimally for the space. The same applies to server and network virtualization.

Faithful replication of today’s networking challenges as virtual machines with encapsulation tunnels doesn’t move the bar for deploying applications. At best it speeds up, and automates, bad practices. Server virtualization hit the same challenges. I discuss what’s needed from the ground up here: Network Abstraction and Virtualization: Where to Start? Software-only network virtualization approaches are constrained both by the hardware that moves their packets and by a methodology that misses where the pain points really are. It’s their time up there.

Underlays

The physical transport network, minimized by some as the “underlay,” is actually the more important piece in making a shift to network programmability, automation and flexibility. Even network virtualization vendors will agree on this, to some extent, if you dig deep enough. Once you cut through the marketecture of “the underlay doesn’t matter,” you’ll find recommendations for a non-blocking fabric of 10G access ports and 40G aggregation in one design or another. This is because the overlay has no visibility into congestion and no control over delivery prioritization, such as QoS.
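
To put rough numbers on that recommendation, here’s a quick back-of-the-envelope sketch. The port counts are hypothetical, not any particular switch:

```python
def oversubscription(access_ports, access_gbps, uplink_ports, uplink_gbps):
    """Ratio of access bandwidth to uplink bandwidth on a switch.

    1.0 is non-blocking: every access port can run at line rate.
    Anything higher means congestion is possible in the fabric,
    congestion an overlay riding on top can't see or prioritize around.
    """
    return (access_ports * access_gbps) / (uplink_ports * uplink_gbps)

# Hypothetical leaf switch: 48 x 10G access ports with 4 x 40G uplinks
print(oversubscription(48, 10, 4, 40))   # 3.0, i.e. 3:1 oversubscribed
# Non-blocking requires uplink capacity equal to access capacity
print(oversubscription(48, 10, 12, 40))  # 1.0 with 12 x 40G uplinks
```

The “underlay doesn’t matter” pitch quietly depends on that second line.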

Additionally, Network Virtualization has no ability to abstract the constructs of VLAN, subnet, security, logging and QoS from one another, as described in the link above. To truly move the network forward in a way that provides automation and programmability in a model that’s cohesive with application deployment, you need to include the physical network with the software that will drive it. It’s our time down here.

By marrying physical innovations that provide a means for abstraction of constructs at the ground floor with software that can drive those capabilities, you end up with a platform that can be defined by the architecture of the applications that will utilize it. This puts the infrastructure as a whole in a position to be deployed in lockstep with the applications that create differentiation and drive revenue. This focus on the application is discussed here: Focus on the Ball: The Application. The figure below, from that post, depicts this.

[Figure: application-focused infrastructure deployment, from “Focus on the Ball: The Application”]

The advantage of this ground-up approach is the ability to look at applications as they exist, groups of interconnected services, rather than taking the application-as-a-VM approach. This holistic view can then be applied down to an infrastructure designed for automation and programmability. Like constructing a building, your structure will only be as sound as the foundation it sits on.

For a little humor (nothing more), here’s my comic depiction of Network Virtualization:

[Comic: Network Virtualization]


Focus on the Ball: The Application

With the industry talking about Software Defined Networking (SDN) at full hype levels, one thing is missing from many discussions: the application. SDN promises to rein in the complexity of network infrastructure and provide better tools for deploying services at scale. What often seems to be forgotten are the applications, which are the reason those networks exist. While application focus is not in itself a new concept, it seems lost in the noise around SDN as a whole, with a few exceptions such as Plexxi, which focuses on Application Affinity.

Current SDN approaches provide tools to solve issues in one portion or another of the network infrastructure. Flow-control mechanisms look to centralize the distribution and configuration of routing and forwarding. Overlays look to build virtual networks on existing IP infrastructure. Virtualized L4-7 services provide ways to configure, stitch in and control network services closer to the virtual machines themselves. None of these approaches tackles the whole picture from an application-centric point of view. These solutions also take the myopic view that the VM is the network, which is far from the case. The closest models fall into the DevOps or orchestration categories, but these require a deep understanding of the details and intricacies of the network.

In traditional networking environments there is a disconnect in communication between application and network teams. The languages and concepts are disparate enough that they don’t translate; there is no logical continuation from application developer or owner to network designer. Application teams speak in OS instances, application tiers and components, tooling, languages and end-user demands, while network teams speak in switch ports, VLANs, QoS, IP addressing and Access Control Lists (ACLs). The lack of common understanding and vocabulary causes architectures and implementations to suffer. The graphic below illustrates this relationship:

[Figure: the disconnect between application-team and network-team vocabularies]

Building the flexible, scalable, manageable and programmable networks of the future requires a change in focus. The application needs to take center stage; it’s the apps that solve business problems. With this focus, logical and physical topology become secondary and are only designed once application requirements have been mapped out. Application-centric policies must be designed first. Policies such as security, load-balancing and QoS can all be designed based on application requirements rather than network restrictions. Application developers define these requirements without needing to speak a network language.

Traditional networks begin with a physical topology that is layered with L2 and L3 logical topologies and assumed application mobility and service domains, such as a services tier at the aggregation layer. Once these topologies are architected and implemented, applications are built and deployed on them. This method limits the capabilities available to the applications and the services deployed on them.

Application security is an excellent example of a system that suffers from traditional architectures. Network security constructs are implemented in the form of ACLs on switches, routers and firewalls. These entries suffer from two major drawbacks: the complexity of design and implementation, and the scale limits of the TCAM that stores the entries. Application policies must be communicated effectively to network engineers, who must translate those requirements into implementable ACLs across multiple devices in the network, then define them manually, device by device. This is a system ripe for PEBKAC errors (Problem Exists Between Keyboard And Chair).
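
To illustrate the scale problem, here’s a rough sketch of how a single tier-to-tier application rule fans out into TCAM entries under this traditional model. The addresses, ports and device counts are hypothetical:

```python
from itertools import product

def expand_acl(src_hosts, dst_hosts, ports):
    """Expand one application-level rule ("web may reach app on these
    ports") into the individual permit entries a traditional device
    stores in TCAM. Entry count is the product of the three lists."""
    return [f"permit tcp host {s} host {d} eq {p}"
            for s, d, p in product(src_hosts, dst_hosts, ports)]

# A hypothetical two-tier slice: 20 web servers talking to 10 app
# servers on two ports.
web = [f"10.1.1.{i}" for i in range(1, 21)]
app = [f"10.1.2.{i}" for i in range(1, 11)]
entries = expand_acl(web, app, [8080, 8443])
print(len(entries))      # 400 TCAM entries for one tier-to-tier rule
print(len(entries) * 4)  # 1600 if maintained by hand on 4 devices
```

One business rule, hundreds of hand-typed lines: that’s the PEBKAC surface area.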

The complexity and room for error in this system increase exponentially as networks scale, applications move and new services are needed. Additionally, this leads to bad practice driven by design limitations. Far too often, outdated policy entries are left in place because of the complexity and risk of removing them, leaving residual entries that consume space long after an application is gone. Just as often, policies are written more loosely than optimal in order to reduce the number of required entries and conserve space through wildcard summarization.

To break this cycle, networking systems need to take an application-centric approach that models actual application requirements onto the network in a top-down fashion. Systems need to take into account the structure of the application, its components and how those components interact, then provide tools for designing logical policy maps of these relationships. From there, these policy maps can be programmatically applied to the networking infrastructure.

An application is not a single software instance running on a server. Applications are made up of the endpoints required in a given tier, the tiers required for the service delivered, and the policies that define how those tiers communicate, along with their unique requirements. The application as a whole must be taken into account in order to provide robust, scalable service delivery.
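
As a rough sketch of what “the application as a whole” could look like as a model: tiers of endpoints plus the policies that connect them, defined once in application terms and then rendered onto the infrastructure. All names here are hypothetical; this illustrates the concept, not any product’s schema:

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """How two tiers may talk: ports plus service requirements
    (QoS class, load-balancing, logging, and so on)."""
    src_tier: str
    dst_tier: str
    ports: list[int]
    qos_class: str = "best-effort"
    load_balanced: bool = False

@dataclass
class Application:
    """An application as described above: tiers of endpoints and the
    policies that define how those tiers communicate."""
    name: str
    tiers: dict[str, list[str]] = field(default_factory=dict)
    policies: list[Policy] = field(default_factory=list)

# A hypothetical three-tier app, described purely in application
# terms: no VLANs, switch ports or device-by-device ACLs in sight.
shop = Application(
    name="storefront",
    tiers={"web": ["web-01", "web-02"],
           "app": ["app-01"],
           "db":  ["db-01"]},
    policies=[
        Policy("web", "app", ports=[8080], load_balanced=True),
        Policy("app", "db",  ports=[5432], qos_class="gold"),
    ],
)
```

A system that understands this structure can render it into whatever the infrastructure needs, rather than asking humans to do the translation entry by entry.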

The illustration below shows this relationship in contrast to the diagram above:

[Figure: application-defined policy model, in contrast to the diagram above]

In this model, network and application teams develop the systems of policies that define application behavior and push them to the network. Taking the application as a whole into focus, instead of the myopic view of VMs, switch ports or IP addresses, allows cohesive deployment and manageability at scale. The application is the purpose of having a network; therefore the application should define the network.

This definition of the network by the application should be done in a language that the developers understand and the network can interpret and implement. For example, an app owner labels application traffic as ‘video’ and the network implements the bandwidth, QoS and other policies that video requires. These policies are predefined by the network engineers.
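
A minimal sketch of that translation layer might look like this; the labels and policy values are hypothetical stand-ins for whatever the network engineers predefine:

```python
# Predefined by network engineers: what each application-facing label
# means in network terms. The values here are made up for illustration.
LABEL_POLICIES = {
    "video": {"dscp": "AF41", "min_bandwidth_mbps": 25, "queue": "priority"},
    "bulk":  {"dscp": "AF11", "min_bandwidth_mbps": 0,  "queue": "scavenger"},
}

def policy_for(label: str) -> dict:
    """The only 'network language' the app owner speaks is the label;
    the network supplies the implementation behind it."""
    try:
        return LABEL_POLICIES[label]
    except KeyError:
        raise ValueError(f"no predefined policy for label {label!r}")

# The app owner tags traffic 'video'; the network applies the
# bandwidth and QoS treatment engineers defined for that label.
print(policy_for("video"))
```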

An application is more than an IP address and a set of rules; it is an ecosystem of interconnected devices and the policies that define their relationships. Traditional networking techniques anchor application deployment by defining applications in networking terms. To accelerate application deployment (and re-deployment throughout the lifecycle), networks need to provide an application-centric view and deployment model.
