A Few Good Apps

Developer: Network team, did you order the Code upgrade?!

Operations Manager: You don’t have to answer that question!

Network Engineer: I’ll answer the question. You want answers?

Developer: I think I’m entitled!

Network Engineer: You want answers?!

Developer: I want the truth!

Network Engineer: You can’t handle the truth! Son, we live in a world that has VLANs, and those VLANs have to be plumbed by people with CLIs. Who’s gonna do it? You? You, Database Admin? I have a greater responsibility than you can possibly fathom. You weep for app agility and you curse the network. You have that luxury. You have the luxury of not knowing what I know, that network plumbing, while tragically complex, delivers apps. And my existence, while grotesque and incomprehensible to you, delivers apps! You don’t want the truth, because deep down in places you don’t talk about at parties, you want me on that CLI. You need me on that CLI. We use words like “routing”, “subnets”, “L4 Ports”. We use these words as the backbone of a life spent building networks. You use them as a punch line. I have neither the time nor the inclination to explain myself to a man who rises and sleeps under the blanket of infrastructure that I provide, and then questions the manner in which I provide it! I would rather you just said “thank you”, and went on your way. Otherwise, I suggest you pick up a putty session, and configure a switch. Either way, I don’t give a damn what you think you are entitled to!

Developer: Did you order the Code upgrade?

Network Engineer: I did the job that—

Developer: Did you order the Code upgrade?!!

Network Engineer: YOU’RE GODDAMN RIGHT I DID!!

 

In many IT environments today there is a distinct line between the application developers/owners and the infrastructure teams responsible for deploying those applications. These organizational silos lead to tension, lack of agility and other issues, and much of that is caused by the translation required between the teams. Application teams speak in terms like objects, attributes, providers and consumers; infrastructure teams speak in memory, CPU, VLANs, subnets and ports. This is exacerbated when delivering apps over the network, which requires connectivity, security, load-balancing, etc. On today's network devices (virtual or physical) the application must be identified based on Layer 3 addressing and L4 information, which means the app team must be able to describe the components or tiers of an app in those terms (terms that are foreign to them). This slows down application deployment and makes tight controls and security harder to get right. I've tried to describe this in the graphic below (for people who don't read good and want to learn to do networking things good too).

image

As shown in the graphic, the definition of an application and its actual instantiation onto networking devices (virtual and physical) are very different. This gap accounts for much of the slow pace of application deployment and much of networking's complexity. Today's networks don't have an application centric methodology for describing applications and their requirements, and the same can be said for emerging SDN solutions. The two most common examples of SDN today are OpenFlow and Network Virtualization. OpenFlow simply attempts to centralize a control plane that was designed to be distributed for scale and flexibility. In doing so it uses 5-tuple matches of IP and TCP/UDP headers to attempt to identify applications as network flows, which is no different from the model in use today. Network virtualization faithfully replicates today's network constructs into a hypervisor, shifting management and adding software layers without solving any of the underlying problem.
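
To make the translation problem concrete, here's a rough Python sketch (every name, address and port is made up for illustration) of how a single application-level statement, "the web tier may talk to the app tier on its service port," has to be flattened into the 5-tuple style rules that today's devices, and first-generation SDN controllers, actually operate on:

```python
# Hypothetical example: one application-level intent, expressed the way an
# app team thinks about it, and the 5-tuple rules it must be translated into
# before a switch, firewall, or flow-based controller can act on it.

app_intent = {
    "consumer": "web-tier",   # the group of web servers
    "provider": "app-tier",   # the group of application servers
    "service":  "tcp/8080",   # the service the app tier provides
}

# The infrastructure team has to restate that intent per subnet/port pair.
web_subnets = ["10.1.10.0/24", "10.1.11.0/24"]
app_subnets = ["10.1.20.0/24"]

flow_rules = [
    {"src_ip": src, "dst_ip": dst, "proto": "tcp",
     "src_port": "any", "dst_port": 8080, "action": "permit"}
    for src in web_subnets
    for dst in app_subnets
]

for rule in flow_rules:
    print(rule)
```

Notice that nothing in those rules says "web tier" or "app tier" anymore; the application's identity is gone by the time the configuration hits the network.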

What's needed is a common language for the infrastructure and development teams to use. That common language can describe application connectivity and policy requirements in a way that makes sense to the separate parts of the organization and business. Cisco Application Centric Infrastructure (ACI) uses policy as this common language, and deploys the logical definition of that policy onto the network automatically.

Cisco ACI bases network provisioning on the application and the two things required for application delivery: connectivity and policy. By connectivity we're describing which groups of objects are allowed to connect to which other groups. We are not defining forwarding, as forwarding is handled separately using proven methods, in this case IS-IS with a distributed control plane. When we describe connectivity we simply mean allowing the connection. Policy is a broader term, and very important to the discussion. Policy is all of the requirements for an application: SLAs, QoS, security, L4-7 services, etc. Policy within ACI is designed using reusable 'contracts.' This way policy can be designed in advance by the experts and architects with that skill set and then reused whenever required for a new application roll-out.
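
Here's a rough sketch of that "reusable contract" idea in plain Python, not the ACI object model itself (the class names and fields are mine, invented for illustration): the policy is defined once by the people with that skill set, then attached to as many provider/consumer relationships as needed.

```python
from dataclasses import dataclass, field

@dataclass
class Contract:
    """A reusable bundle of policy: what is allowed, plus how it is treated."""
    name: str
    allowed_ports: list                                # e.g. ["tcp/443"]
    qos_class: str = "default"
    l4_7_services: list = field(default_factory=list)  # e.g. ["firewall", "ips"]

@dataclass
class Relationship:
    """One consumer group using one provider group under a given contract."""
    consumer: str
    provider: str
    contract: Contract

# Designed once by the architects with that skill set...
secure_web = Contract("secure-web", ["tcp/443"], qos_class="gold",
                      l4_7_services=["firewall", "load-balancer"])

# ...then reused for every new application roll-out that needs it.
relationships = [
    Relationship("users", "hr-portal-web", secure_web),
    Relationship("users", "expenses-web", secure_web),
]

for r in relationships:
    print(f"{r.consumer} -> {r.provider} via contract '{r.contract.name}'")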

Applications are deployed on the ACI fabric using an Application Network Profile. An Application Network Profile is simply a logical template for the design and deployment of an application's end-to-end connectivity and policy requirements. If you're familiar with Cisco UCS, it's a concept very similar to the UCS Service Profile. One of the biggest benefits of an Application Network Profile is its portability. Profiles can be built through the API or GUI, downloaded from the Cisco Developer Network (CDN) or the ACI GitHub community, or provided by the application vendor itself. They're simply an XML or JSON representation of the end-to-end requirements for delivering an application. The graphic below shows an application network profile.

image
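
Because a profile is just an XML or JSON document, a stripped-down illustration might look something like the sketch below. The structure and key names here are invented for readability; the actual ACI schema differs.

```python
import json

# Hypothetical, simplified "application network profile" for a classic
# three-tier app: the tiers (groups of endpoints) plus the contracts
# that connect them, in plain JSON.
app_profile = {
    "application": "expenses",
    "tiers": [
        {"name": "web", "members": "vm-tag:expenses-web"},
        {"name": "app", "members": "vm-tag:expenses-app"},
        {"name": "db",  "members": "vm-tag:expenses-db"},
    ],
    "contracts": [
        {"consumer": "web", "provider": "app",
         "allow": ["tcp/8080"], "qos": "gold", "services": ["ips"]},
        {"consumer": "app", "provider": "db",
         "allow": ["tcp/1433"], "qos": "gold", "services": []},
    ],
}

# Plain text means it can live in version control, be shared on GitHub,
# or be handed over by an application vendor along with the app itself.
print(json.dumps(app_profile, indent=2))
```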

This model provides that common language for developer teams and operations/infrastructure teams. To tie this back to the tongue-in-cheek start of this post, based on dialogue from "A Few Good Men": we don't want to replace the network engineer, but we do want to get them off the CLI. Rather than hacking away at repeatable tasks on the command line, we want them using the policy model to define the policy 'contracts' used when deploying applications. At the same time we want to give them better visibility into what the application requires and what it's doing on the network. Rather than troubleshooting devices and flows, why not look at application health? Rather than manually configuring QoS device by device, why not set it per application or tier? Rather than focusing on VLANs and subnets as policy boundaries, why not abstract that and group things based on the policy requirements themselves? Think about it: why should every aspect of a server's policy change because you changed its IP? That's what happens on today's networks.

Call it a DevOps tool, call it automation, call it what you will: ACI uses the language of applications to provision the network dynamically and automatically. Rather than simply providing better management tools for overloaded, 15-year-old concepts, it focuses on a new model: application connectivity and policy.

**Disclaimer: I work as a Technical Marketing Engineer for the Cisco BU responsible for Nexus 9000 and ACI.  Feel free to disregard this post as my biased opinion.**


Video: Cisco ACI Overview


Oh, the Places You’ll Go! (A Cisco ACI Story)

In the fashion of my two previous Dr. Seuss style stories I thought I'd take a crack at Cisco Application Centric Infrastructure (ACI). Check out the previous two if you haven't read them and have time to waste:

 

Horton Hears Hadoop: http://www.definethecloud.net/horton-hears-hadoop/

The App on the Crap (An SDN Story): http://www.definethecloud.net/the-app-on-the-crap-an-sdn-story/


Congratulations!

This is the time.

The network is changing!

The future is here!

 

With software controllers.

And virtualized widgets.

You can steer traffic

any direction you choose.

Packets are moving. They’ll flow where they flow.

And YOU are the gal who’ll decide where they’ll go.

 

You’ll look up and down paths.  Look ‘em over with care.

About some you’ll say, “No VOIP will go there.”

With an overlay net, and central control,

No packet will flow, down a not-so-good path.

 

And when packets travel

on suboptimal paths.

You’ll reroute those flows,

based on 5-tuple match.

 

Net’s opened wide

With central control.

 

Now net change can happen

and rapidly too

with net as central

and virtual too.

 

And when things start to happen,

don’t panic.  Don’t stew.

Just go troubleshoot.

All layers old, and the new.

 

OH!

THE PLACES YOU’LL GO!

 

You’ll be on your way up!

Packet’s moving in flight!

You’ll be the rock star

who set network right.

 

The network won’t lag, because of central control.

You’ll provision the pipes, avoid traffic black holes.

The packets will fly, you’ll be best of the best.

Wherever they fly, be faster than the rest.

 

Except when they don’t.

Because sometimes they won’t.

 

I’m sorry to say so

but, sadly it’s true

that Bang-ups

and Hang-ups

will happen to you.

 

You can get all hung up

in congestion / jitter.

And packets won’t travel.

Some will just flitter.

 

Applications will fail

with unpleasant time-outs.

And the chances are, then,

that you’ll start hearing shouts.

 

And when applications fail,

you’re not in for much fun.

Getting them back up

is not easily done.

 

You’ll need the app team, spreadsheets, security rules.

You’ll have to troubleshoot through disparate tools.

Find a way to translate from app language to net.

Map L3/L4 to app names, not done yet.

There are services too, that’s a safe bet.

 

Which route did it take, and which networks the problem?

Overlay, underlay, this network has goblins.

Congestion, and drops, latency jitter

Check with the software, then break out the splitter.

You’ll sort this out, you’re no kind of quitter!

 

It can get so confused

two networks to trace.

The process is slow, not what you want for a pace.

You must sort it out, this is business, a race.

What happened here, what’s going on in this place?

 

NO!

That’s not for you!

Those duct tape based fixes.

You’ll choose better methods.

Not hodge-podge tech mixes.

 

Look first at the problem,

what’s causing the issues?

What is it that net, is trying to do?

The app is the answer, in front of you.

 

The data center’s there to run applications!

To serve them to users, move data ‘cross nations.

To drive revenue, open up business models.

To push out new services, all at full throttle.

The application’s what matters.

Place it on a platter.

 

You’ll put the app into focus,

With some abstraction hocus-pocus.

 

You’ll use the language of apps.

To describe connectivity.

Building application maps,

to increase productivity.

 

Use a system focused on policy,

not new-fangled virtual novelty.

Look at apps end-to-end,

Not with the “app is a VM” trend.

 

Whether virtual or physical, you’ll treat things the same.

From L2 to L3, or L4-7,

use of uniform policy, will be your new game.

Well on your way to networking heaven.

 

Start with a logical model, a connectivity graph.

One that the system, deploys on your behalf.

A single controller for policy enforcement.

Sure to receive security’s cheering endorsement.

Forget about VLANs, routes and frame formats,

no longer will networking be the app-deploy doormat.

 

You see to build networks for today and tomorrow,

don’t use band-aids stacked high as Kilimanjaro.

You’ll want to start with REMOVING complexity.

Anything else, just adds to perplexity.

 

Start at the top, in an app centric fashion.

on a system that knows to treat apps as its passion.

 

And will you succeed?

Yes! you will, indeed!

(98 and 3/4 percent guaranteed.)*

KID, YOU’LL MOVE MOUNTAINS!

 

So…

be your app virtual, physical or cloud

with services, simple, complex or astray,

you’re off to Great Places!

Today is your day!

ACI is waiting.

So…get on your way!

 

 

*This is intended as whimsical nonsense.  Any guarantees are null and void based on the complete insanity of the author.

**Disclaimer: I work for Cisco Systems with the group responsible for Nexus 9000 and ACI.  Please feel free to consider this post random vendor rhetoric.**

For more information on Cisco ACI visit www.cisco.com/go/aci


True Software Defined Networking (SDN)

The world is, and has been, buzzing about software defined networking. It’s going to revolutionize the entire industry, commoditize hardware, and disrupt all the major players. It’s going to do all that… some day. To date it hasn’t done much but be a great conversation, and more importantly identify the need for change in networking.

In its first generation SDN is a lot of sizzle with no flash. The IT world is trying to truly define it, much as we did with 'Cloud' years ago. What's beginning to emerge is that SDN is more of a methodology than an implementation, and like cloud there are several implementations: OpenFlow, Network Virtualization and Programmable Network Infrastructure.

 

image

OpenFlow

OpenFlow focuses on a separation of the control plane and the data plane. This provides a centralized method to route traffic based on a 5-tuple match of packet header information. One area where OpenFlow falls short is its dependence on the independent advancement of both the protocol itself and the hardware beneath it. Switching and routing hardware is based on Application Specific Integrated Circuits (ASICs), and those ASICs typically take three years to refresh. This means the OpenFlow protocol must first advance and stabilize; only then can silicon vendors begin building new ASICs, which become available roughly three years later.
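
As a loose illustration of that model (this is not the OpenFlow wire protocol or any real controller API, just the idea in Python), a centralized controller ends up programming entries that match on header fields and apply a forwarding action; anything the match fields can't express, including the application itself, simply isn't visible:

```python
import ipaddress

# Illustrative only: a flow table as a centralized controller might populate it.
# Each entry matches on 5-tuple style header fields and applies an action.
flow_table = [
    {"match": {"src_ip": "10.1.10.0/24", "dst_ip": "10.1.20.5",
               "proto": "tcp", "dst_port": 8080},
     "action": {"output_port": 12}},
    {"match": {},                      # table-miss entry: punt to controller
     "action": {"send_to_controller": True}},
]

def matches(packet, match):
    """True if every match field agrees with the packet's header fields."""
    for field, value in match.items():
        if field in ("src_ip", "dst_ip"):
            if ipaddress.ip_address(packet[field]) not in ipaddress.ip_network(value):
                return False
        elif packet.get(field) != value:
            return False
    return True

def lookup(packet, table):
    """Return the action of the first matching entry (first match wins)."""
    for entry in table:
        if matches(packet, entry["match"]):
            return entry["action"]
    return None

pkt = {"src_ip": "10.1.10.7", "dst_ip": "10.1.20.5", "proto": "tcp", "dst_port": 8080}
print(lookup(pkt, flow_table))   # -> {'output_port': 12}
```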

Network Virtualization

Network virtualization is a faithful reproduction of networking functionality into the hypervisor. This method is intended to provide advanced automation and speed application deployment. The problem here arises in the new tools required to manage and monitor the network, the additional management layer, and the replication of the same underlying complexity.

Programmable Network Infrastructure

Programmable network infrastructure moves device configuration from human-driven CLI/GUI interfaces to APIs and programming agents. This allows faster, more powerful and less error-prone device configuration from automation, orchestration and cloud operating system tools. These tools speed the configuration of multiple disparate systems, but they are still built around network operating system constructs intended for human use, and they inherit the same underlying network complexities, such as the artificial ties between addressing and policy.
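
A quick sketch of the difference (the URL, payload shape and token are placeholders I've invented, not any particular vendor's API): the same change is pushed to many devices as structured data instead of being typed line by line. Note that the object being pushed is still a VLAN, the same human-era construct.

```python
# Sketch only: pushing one configuration change to many devices through a
# REST API instead of an SSH/console session. Endpoint and payload are
# hypothetical placeholders, not a real product's API.
import requests

DEVICES = ["leaf-101.example.net", "leaf-102.example.net"]

vlan_payload = {"vlan_id": 210, "name": "web-tier", "state": "active"}

def push_vlan(device, payload, token):
    """POST one configuration object to one device; raise on failure."""
    url = f"https://{device}/api/v1/vlans"   # hypothetical endpoint
    resp = requests.post(url, json=payload,
                         headers={"Authorization": f"Bearer {token}"},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for device in DEVICES:
        push_vlan(device, vlan_payload, token="EXAMPLE-TOKEN")
```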

All of these generation 1 SDN solutions simply move the management of the underlying complexity around. They are software designed to operate in the same model, trying to configure existing hardware. They’re simply adding another protocol, or protocols, to the pile of existing complexity.

image

Truly software defined networks

To truly define the network via software you have to look at the entire solution, not just a single piece. Simply adding a software or hardware layer doesn't fix the problem; you must look at them in tandem, starting with the requirements for today's networks: automation, application agility, visibility (virtual and physical), security, scale and L4-7 services (virtual and physical).

If you start with those requirements and think in terms of a blank slate, you have the ability to build things correctly for today's and tomorrow's applications while ensuring backwards compatibility. The place to start is the software itself, or the logical model. Begin with questions:

1. What’s the purpose of the network?

2. What’s most relevant to the business?

3. What dictates the requirements?

The answer to all three is the application, so that's the natural starting point. Next, ask who owns, deploys and handles day-two operations for an application. The answer is the development team. So you start with a view of applications in a format they would understand.

image

That format is simple provider/consumer relationships between the tiers or components of an application. Each tier may provide services to and consume services from the next; the application is the group of tiers or components as a whole, not a single physical server or VM.
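
A minimal way to picture that in code (tier names and services are illustrative, nothing more): each tier is a node that provides some services and consumes others, and the application is the whole graph.

```python
# Illustrative model: an application as a set of tiers linked by
# provider/consumer relationships, rather than a single server or VM.

application = {
    "name": "storefront",
    "tiers": {
        "web": {"provides": ["http/80", "https/443"], "consumes": ["app:api/8080"]},
        "app": {"provides": ["api/8080"],             "consumes": ["db:sql/5432"]},
        "db":  {"provides": ["sql/5432"],             "consumes": []},
    },
}

def edges(app):
    """Yield (consumer, provider, service) for every relationship."""
    for tier, spec in app["tiers"].items():
        for dependency in spec["consumes"]:
            provider, service = dependency.split(":", 1)
            yield tier, provider, service

for consumer, provider, service in edges(application):
    print(f"{consumer} consumes {service} from {provider}")
```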

You take that idea a step further and understand that the provider/consumer relationships are truly just policy. Policy can describe many things, but here it would be focused on permit/deny, redirect, SLAs, QoS, logging and L4-7 service chaining for security and user experience.

image

Now you’ve designed a policy model that focuses on the application connectivity and any requirements for those connections, including L4-7 services. With this concept you can instantiate that policy in a reusable format so that policy definition can be repeated for like connections, such as users connecting to a web tier. Additionally the application connectivity definition as a whole could be instantiated as a template or profile for reuse.

You’ve now defined a logical model, based on policy, for how applications should be deployed. With this model in place you can work your way down. Next you’ll need network equipment that can support your new model. Before thinking about the hardware, remember there is an operating system (OS) that will have to interface with your policy model.

Traditional network operating systems are not designed for this type of object-oriented policy model. Even highly programmable or Linux-based operating systems have not been designed for the object programmability that would fully support this model. You'll need an OS that's capable of representing tiers or components of an application as objects with configurable attributes. Additionally it must be able to represent physical resources such as ports as objects, abstracted from the applications that will run on them. An OS that can be provisioned in terms of policy constructs rather than configuration lines such as switch ports, QoS and ACLs. You'll need to rewrite the OS.
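
Here's a very rough sketch of what "objects with configurable attributes" means in practice; the classes and attribute names are invented for illustration, not drawn from any real network OS. Tiers and even physical ports become addressable objects, and provisioning becomes a change to an object's attributes rather than lines of switch configuration.

```python
from dataclasses import dataclass, field

@dataclass
class EndpointGroup:
    """A tier or component of an application, described by attributes."""
    name: str
    attributes: dict = field(default_factory=dict)   # e.g. {"qos": "gold"}

@dataclass
class FabricPort:
    """A physical port as an object, abstracted from the workloads on it."""
    node: str
    port: str
    mode: str = "trunk"

class PolicyModel:
    """A toy object store: provisioning means changing object attributes."""
    def __init__(self):
        self.objects = {}

    def add(self, key, obj):
        self.objects[key] = obj
        return obj

    def set_attr(self, key, **attrs):
        # An attribute change on an object, not a line of CLI on a box.
        self.objects[key].attributes.update(attrs)

model = PolicyModel()
model.add("epg/web", EndpointGroup("web"))
model.add("port/leaf-101/eth1-1", FabricPort("leaf-101", "eth1/1"))
model.set_attr("epg/web", qos="gold", logging=True)
print(model.objects["epg/web"])
```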

As you're writing your OS you'll need to rethink the switching and routing hardware that will deliver all of those packets and frames. Of course you'll need density, bandwidth, low latency, etc. More importantly you'll need hardware that can define, interpret and enforce policy based on your new logical model. You'll need to build hardware tailored to the way you define applications, connectivity and policy: hardware that can enforce policy based on logical groupings, free of VLAN- and subnet-based policy instantiation.

If you build these out together, starting with the logical model then defining the OS and hardware to support it, you’ll have built a solution that surpasses the software shims of generation 1 SDN. You’ll have built a solution that focuses on removing the complexity first, then automating, then applying rapid deployment through tools usable by development and operations, better yet DevOps.

If you do that you'll have truly defined networking based on software. You'll have defined it from the top all the way down to the ASICs. If you do all that and get it right, you'll have built Cisco's Application Centric Infrastructure (ACI).

For more information on the next generation of data center networking check out www.cisco.com/go/aci.

 

Disclaimer: ACI is where I’ve been focused for the last year or so, and where my paycheck comes from.  You can feel free to assume I’m biased and this article has no value due to that.  I won’t hate you for it.


Engineers Unplugged Episode 14: Application Affinity

I had the pleasure of speaking with Nils Swart (@nlnils) of Plexxi about applications and the network. You can watch the quick Engineers Unplugged episode below.


Software Defined Networking: The Role of SDN on Compute Infrastructure Administration

#vBrownBag Follow-up Software Defined Networking SDN with Joe Onisick (@jonisick) from ProfessionalVMware on Vimeo.


It’s Our Time Down Here – “Underlays”

Recently while winding down from a long day I flipped the channel and “The Goonies” was on.  I left it there thinking an old movie I’d seen a dozen times would put me to sleep quickly.  As it turns out I quickly got back into it.  By the time the gang hit the wishing well and Mikey gave his speech I was inspired to write a blog, this one in particular.  “Cause it’s their time – their time up there.  Down here it’s our time, it’s our time down here.” 

This got me thinking about data center network overlays, and the physical networks that actually move the packets some Network Virtualization proponents have dubbed “underlays.”  The more I think about it, the more I realize that it truly is our time down here in the “lowly underlay.”  I don’t think there’s much argument around the need for change in data center networking, but there is a lot of debate on how.  Let’s start with their time up there “Network Virtualization.”

Network Virtualization

Unlike server virtualization, Network Virtualization doesn't partition hardware and separate out resources. Network Virtualization uses server virtualization to virtualize network devices such as switches, routers, firewalls and load-balancers. From there it creates virtual tunnels across the physical infrastructure using encapsulation techniques such as VxLAN, NVGRE and STT. The end result is a virtualized instantiation of the current data center network running in x86 servers, with packets moving through tunnels on physical networking gear that segregate them from other traffic on that gear. The graphic below shows this relationship.

image

Network Virtualization in this fashion can provide some benefits in the form of provisioning time and automation. It also introduces some new challenges, discussed in more detail here: What Network Virtualization Isn't (be sure to read the comments for alternate viewpoints). What network virtualization doesn't provide, in any form, is a change to the model we use to deploy networks and support applications. The constructs and deployment methods for designing applications and applying policy are not changed or enhanced. All of the same broken or misused methodologies are carried forward. When working with customers beginning to virtualize servers, I would always recommend against automated physical-to-virtual server migration, suggesting a rebuild in a virtual machine instead.

The reason for that is twofold. First, server virtualization was a chance to re-architect based on lessons learned. Second, simply virtualizing existing constructs is like hiring movers to pack your house along with the dirt, cobwebs and all, then move it to the new place and unpack. The smart way to move a house is to donate or yard-sale what you won't need, pack the things you do, move into a clean place and arrange optimally for the space. The same applies to server and network virtualization.

Faithful replication of today's networking challenges as virtual machines with encapsulation tunnels doesn't move the bar for deploying applications. At best it speeds up, and automates, bad practices. Server virtualization hit the same challenges. I discuss what's needed from the ground up here: Network Abstraction and Virtualization: Where to Start?. Software-only network virtualization approaches are challenged both by the restrictions of the hardware that moves their packets and by a methodology that misses where the pain points really are. It's their time up there.

Underlays

The physical transport network, which some minimize as the "underlay," is actually more important in making the shift to network programmability, automation and flexibility. Even network virtualization vendors will agree on this, to some extent, if you dig deep enough. Once you cut through the marketecture of "the underlay doesn't matter" you'll find recommendations for a non-blocking fabric of 10G access ports and 40G aggregation in one design or another. This is because they have no visibility into congestion and no control of delivery prioritization such as QoS.

Additionally Network Virtualization has no ability to abstract the constructs of VLAN, Subnet, Security, Logging, QoS from one another as described in the link above.  To truly move the network forward in a way that provides automation and programmability in a model that’s cohesive with application deployment, you need to include the physical network with the software that will drive it.  It’s our time down here.

By marrying physical innovations that provide a means for abstraction of constructs at the ground floor with software that can drive those capabilities, you end up with a platform that can be defined by the architecture of the applications that will utilize it.  This puts the infrastructure as a whole in a position to be  deployed in lock-step with the applications that create differentiation and drive revenue.  This focus on the application is discussed here: Focus on the Ball: The Application.  The figure below, from that post, depicts this.

image

 

The advantage to this ground up approach is the ability to look at applications as they exist, groups of interconnected services, rather than the application as a VM approach.  This holistic view can then be applied down to an infrastructure designed for automation and programmability.  Like constructing a building, your structure will only be as sound as the foundation it sits on.

For a little humor (nothing more) here's my comic depiction of Network Virtualization.

image


Focus on the Ball: The Application

With the industry talking about Software Defined Networking (SDN) at full hype levels, there is one thing missing from many discussions: the application. SDN promises to rein in the complexity of network infrastructure and provide better tools for deploying services at scale. What often seems to be forgotten are the applications, which are the reason those networks exist. While application focus is not in itself a new concept, it seems lost in the noise around SDN as a whole, with a few exceptions such as Plexxi, which focuses on Application Affinity.

Current SDN approaches provide tools to solve issues in one portion or another of the network infrastructure. Flow control mechanisms look to centralize the distribution and configuration of routing and forwarding. Overlays look to build virtual networks on existing IP infrastructure. Virtualized L4-7 services provide solutions to configure, stitch in and control network services closer to the virtual machines themselves. None of these approaches looks to tackle the whole picture from an application centric point of view. These solutions also take the myopic view that the VM is the network, which is far from the case. The closest models fall into DevOps or orchestration categories, but these require a deep understanding of the details and intricacies of the network.

In traditional networking environments there is a disconnect in communication between application and network teams. The languages and concepts are disparate enough that they don't translate; there is no logical continuation from application developer or owner to network designer. Application teams speak in OS instances, application tiers and components, tooling, language, end-user demands, etc., while network teams speak in switch ports, VLANs, QoS, IP addressing and Access Control Lists (ACLs). The lack of common understanding and vocabulary causes architectures and implementations to suffer. The graphic below illustrates this relationship:

image

Building the flexible, scalable, manageable and programmable networks of the future requires a change in focus. The application needs to take center stage; it’s the apps that solve business problems. From this focus, logical and physical topology become secondary and are only designed once application requirements have been mapped out. Application centric policies must be designed first. Policies such as: security, load-balancing, QoS can all be designed based on application requirements, rather than network restrictions. Application developers define these requirements without the need to speak a network language.

Traditional networks begin with a physical topology that is layered with L2 and L3 logical topologies, assumed application mobility and service domains, such as a services tier at the aggregation level. Once these topologies are architected and implemented, applications are built and deployed on top of them. This method limits the capabilities available to the applications and the services deployed on them.

Application security is an excellent example of a system that suffers from traditional architectures. Network security constructs are implemented in the form of ACLs on switches, routers and firewalls. These entries suffer from two major drawbacks: the complexity of design and implementation, and the scale of the TCAM that stores the entries. This means that application policies must be communicated effectively to network engineers, who must translate those requirements into implementable ACLs across multiple devices in the network. This is then defined manually, device by device. It is a system ripe for PEBKAC errors (Problem Exists Between Keyboard and Chair).
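
The scale of the problem is easier to see in a quick sketch (addresses and device names are made up): one application-level rule fans out into dozens of manually keyed ACL entries, each one an opportunity for a typo, and each one something that has to be found and removed when the app goes away.

```python
# Illustration: one application requirement ("web tier may reach app tier
# on tcp/8080") expanded into the per-device ACL entries someone has to
# type, review, and later remember to clean up.

web_subnets = ["10.1.10.0/24", "10.1.11.0/24", "10.1.12.0/24"]
app_subnets = ["10.1.20.0/24", "10.1.21.0/24"]
devices = ["core-fw-1", "agg-sw-1", "agg-sw-2"]

def acl_entries():
    for device in devices:
        for src in web_subnets:
            for dst in app_subnets:
                yield f"{device}: permit tcp {src} {dst} eq 8080"

entries = list(acl_entries())
print(f"{len(entries)} entries for a single app-level rule")
for line in entries[:3]:
    print(line)
```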

The complexity and room for error in this system increase exponentially as networks scale, applications move and new services are needed. Additionally, this leads to bad practice driven by design limitations. Far too often outdated policy entries are left in place due to the complexity and risk of removing them, leaving residual entries consuming space long after an application is gone. Just as often policies are written more loosely than would be optimal in order to reduce the number of required entries, and conserve space, through wildcard summarization.

To break this cycle networking systems need to take an application centric approach which models actual application requirements onto the network in a top down fashion. Systems need to take into account the structure of the application, its components, and how those components interact then provide tools for designing logical policy maps of these relationships. From there these policy maps can be programmatically applied to the networking infrastructure.

An application is not a single software instance running on a server. Applications are made up of the end-points required in a given tier, the tiers required for the service delivered and the policies that define how those tiers communicate, and their unique requirements. The application as a whole must be taken into account in order to provide robust, scalable service delivery.

The illustration below shows this relationship in contrast to the diagram above:

image

In this model network and application teams develop the systems of policies that define application behavior and push them to the network. Taking the application as a whole into focus instead of the myopic view of VMs, switch ports or IP addresses allows cohesive deployment and manageability at scale. The application is the purpose of having a network; therefore the application should define the network.

This definition of the network by the application should be done in a language that the developers understand and the network can interpret and implement. For example, an app owner labels application traffic as 'video,' and the network implements the bandwidth, QoS and other policies that video requires. These policies are predefined by the network engineers.
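
In sketch form (the labels and values are invented for illustration), the developer supplies only the label; the mapping from label to concrete network treatment is authored once by the network team.

```python
# Illustration: developers tag traffic with an application-level label;
# network engineers predefine what each label means on the wire.

PREDEFINED_POLICIES = {
    "video": {"qos_class": "priority",  "min_bandwidth_mbps": 20, "dscp": "AF41"},
    "bulk":  {"qos_class": "scavenger", "min_bandwidth_mbps": 0,  "dscp": "CS1"},
}

def policy_for(label):
    """Resolve a developer-facing label to the network policy behind it."""
    try:
        return PREDEFINED_POLICIES[label]
    except KeyError:
        raise ValueError(f"no predefined policy for label '{label}'") from None

# The app owner only ever says "this is video".
print(policy_for("video"))
```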

An application is more than an IP address and a set of rules; it is an ecosystem of interconnected devices and the policies that define their relationship. Traditional networking techniques anchor application deployment by defining applications in networking terms. In order to accelerate the application deployment (and re-deployment throughout its lifecycle) networks need to provide an application centric view and deployment model.


Network Management Needs New Ideas

As networks have grown, the industry has sought better ways in which to manage them at scale. Traditional network management systems are typically device-centric, particularly for network infrastructure. These systems take a top-down management approach and use a central server to push configuration into devices and to manage device state. With few exceptions, this approach provides no additional abstraction or functionality and fundamentally becomes a GUI representation of CLI configuration…

To see the full post visit:  http://www.networkcomputing.com/data-networking-management/network-management-needs-new-ideas/240157120


Network Abstraction and Virtualization: Where to Start?


With the growth of server virtualization, network designs and the associated network management constructs have been stretched beyond their intended uses. This has brought about data center networks that are unmanageable and slow to adapt to change. While servers and storage can be rapidly provisioned to bring on new services, the network itself has become a bottleneck of required administrative changes and inflexible constructs that limit scalability and speed of adoption.

These constraints of modern data center networks have motivated network architects to look for workarounds, one current proposal being 'network virtualization,' which looks to apply the benefits of server virtualization to the network. Conceptually, network virtualization is the use of encapsulation techniques to create virtual overlays on existing network infrastructure. These methods use technologies such as VxLAN, STT, NVGRE and others to wrap machine traffic in virtual IP overlays, which can be transported across any Layer 3 infrastructure.
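
Conceptually (this is a simplification for illustration, not the actual frame formats), the encapsulation wraps the original frame in a new outer IP/UDP header plus a virtual network identifier, and the physical network only ever forwards on the outer header:

```python
# Simplified picture of an overlay encapsulation (VxLAN-style): the original
# frame rides inside an outer IP/UDP packet between tunnel endpoints, tagged
# with a virtual network identifier (VxLAN uses a 24-bit VNID).

original_frame = {
    "src_mac": "aa:aa:aa:00:00:01", "dst_mac": "aa:aa:aa:00:00:02",
    "payload": "application data",
}

encapsulated = {
    "outer_src_ip": "192.0.2.11",    # source tunnel endpoint (e.g. a hypervisor)
    "outer_dst_ip": "192.0.2.42",    # destination tunnel endpoint
    "outer_proto": "udp",
    "vnid": 90210,                   # virtual network segment, 0 .. 2**24 - 1
    "inner_frame": original_frame,   # the untouched original traffic
}

# The transport ("underlay") routes purely on the outer header; the inner
# frame and its virtual network are opaque to it.
print(f"VxLAN-style ID space: {2**24:,} segments vs {2**12:,} VLANs")
```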

1. A primary benefit of these overlay techniques is the ability to scale beyond the limits of VLANs for network segmentation. Virtualization and multi-tenancy caused an explosion of network segments that strains traditional isolation techniques. With VLANs we are limited to 4,096 segments or fewer, depending on implementation. Other methods exist, such as placing ACLs within the hypervisor, but these also suffer configuration limits and CPU overhead. The purpose of these techniques is to create application/tenant segmentation without security implications between segments. As the number of services and tenants grows, these limits quickly become restrictive.

2. Another advantage of the network virtualization overlay is the ability to place workloads independent of physical locality and underlying topology. As long as IP connectivity is available the encapsulation handles delivery to end-point workloads. This provides greater flexibility in deployment, especially for virtualized workloads which receive encapsulation within the hypervisor switch. The operational benefit of this effect is the ability to place workloads where there is available capacity without restrictions from underlying network constructs.

Network virtualization does not come without drawbacks. The act of layering virtual networks over existing infrastructure puts an opaque barrier between the virtual workloads and the operation of the underlying infrastructure. This brings on issues with performance, quality of service (QoS) and network troubleshooting. This limitation is not seen with server virtualization, where compute hypervisors are tightly coupled with the hardware and maintain visibility at both levels. The diagram below shows the relationship between network and server virtualization.

image

1. This lack of cross-visibility between the logical networks carrying production application traffic, and the physical network providing the packet delivery, leads to issues with application performance and system troubleshooting. With SDN techniques based on network virtualization through encapsulation, the packet delivery infrastructure is completely obfuscated by the encapsulation. This can lead to performance issues arising from lack of quality of service, altered multi-pathing ability, and others within the underlying network. This separation is shown in the diagram below.

image

2. Additionally these logical networks add a point of management to the network architecture. While they can hide the complexity of the underlying network for the purposes of application deployment, the network underneath still exists. The switching infrastructure must still be configured, managed and deployed as usual. All of the constructs shown above must still be architected and pushed into device configuration. Network virtualization provides perceived independence from the infrastructure but does not provide a means to manage the network as a whole.

3. The last challenge for network virtualization techniques is the ability to tie overlays back to traditional networking constructs understood by the network switches below. Switch hardware and software is designed to use VLANs which are tied to IP subnets and stitch security and services to these constructs. The overlay created by encapsulation does not alleviate these issues.

For example, encapsulation techniques such as VxLAN provide far greater logical network scalability: upwards of 16 million virtual networks, thanks to a 24-bit segment identifier versus the 12-bit VLAN ID. This logical scalability does not currently stitch into traditional switching equipment, which assumes VLANs are global. Tighter cohesion will be required between the physical switching infrastructure and hypervisor-based access layers to provide robust services to real-world heterogeneous environments.

While overlay techniques provide separate namespaces, and therefore a means for overlapping IP addressing, the routing that handles this must still be architected. To accomplish it, network functionality such as Virtual Routing and Forwarding (VRF) must be configured on the switching infrastructure, or virtual routers must be deployed in the hypervisor. VRF scalability is greatly limited by hardware implementation and will be far less than VxLAN scalability, while virtualized routers consume CPU overhead and require additional architectural considerations. Without such techniques in place, network tenants will require non-overlapping IP space.

Making a case for true abstraction

With network virtualization alone being overlaid onto existing infrastructure we just add layers of complexity. This occurs without correcting the issues that have arisen in traditional networking constructs; just adding network virtualization will do no more than amplify existing problems. A parallel can be drawn to server virtualization where the more rapid pace of server provisioning quickly brought out problems in underlying architecture and processes.

The underlying network consists of hardware, cabling, and the Layer 2 / Layer 3 topologies that dictate traffic flow and potential application throughput. These layers have their own limitations and stability issues which are not addressed by network virtualization. Think of the OSI model in terms of building a house: the bottom layers (1-3) create the foundation, the frame and the structure. Issues in those foundational layers will be exacerbated by each additional layer added on top.

Rather than applying an overlay technique such as a virtualization layer on top of existing architecture, IT architects will benefit greatly from abstracting the network constructs from the ground up first. Separating out logical and physical constructs, security, services, etc. prior to layering on overlays will provide a clean canvas on which to paint the future’s scalable feature rich networks. Virtualization must be built into the network from the ground up rather than layered on top. Again this parallels server virtualization where the greatest success has been seen in full virtualization of the hardware platform and tight integration down to bare metal. The end goal is addressing the underlying network issues rather than mask them with a virtualization layer.

The ties between network constructs such as VLAN, IP subnet, security, load-balancing etc. have placed constraints on the scalability and agility of the network. Each VLAN is provided an IP subnet, security and network services are then tied to these constructs. Addressing and location become the identifying characteristics of the network rather than the application requirements. This is not optimal behavior for a network responsible for elastic business services, workload anywhere designs, and ever increasing connectivity needs. These attributes and capabilities of connectivity must be abstracted in a new way to allow us to move beyond the constraints we have imposed by overloading or misusing these basic network constructs.

image

Rather than starting with a new coat of paint on a peeling building, abstraction takes a ground-up approach. By looking at the purpose of each construct (VLAN = broadcast domain, IP = addressing mechanism, etc.) we can redesign with a goal of alleviating the unnecessary constraints that have been placed on today's networks. With these constructs separated we can provide a transport capable of maximizing the performance, security and scalability of the applications using it.

Take a step back from traditional network thinking and think in terms of application needs without consideration of current deployment methodology. Think through the following questions leaving out concepts like: VLAN, Subnet, IP addressing, etc.:

  • How would you tie application tiers together?
  • How would you group like services?
  • What policies would be required between application tiers?
  • What services are required for a given application?
  • How does that application connect to the intranet and internet?

Separating out the applications and services required from the underlying architecture is not possible with today’s networks, virtualized or not. Overlay network virtualization alone may hide some of the complexities but does not provide tools for optimizing the delivery and holistic design. The conversation must include addressing, VLAN construct, location and service insertion. If these constructs are instead abstracted from one another, and the architecture, the conversation can revolve around application requirements rather than network restrictions.

Summary:

While network virtualization provides a set of tools for gaining greater network scale and application deployment flexibility, it is not a complete solution. Without true network abstraction and tools for visibility between the logical and physical networks, virtualization does no more than add complexity to existing problems. As was seen with server virtualization, layering virtualization on top of infrastructure issues and bad processes exponentially increases the complexity and room for error.

In order to truly scale networks in a sustainably manageable fashion we need to remove the ties of disparate network constructs by abstracting them out. Once these constructs operate independently of one another we’re provided a flexible architecture that removes the inherent complexity rather than leaving the problems and compounding them through layers of virtualization.

To build networks that meet current demands while being able to support the rapid scale and emerging requirements we need to rethink network design as a whole. Taking a top down look at what we need from the network without tying ourselves to the way in which we use the constructs today allows us to design towards the future and apply layers of abstraction down the stack to meet those goals.

Thinking about your network today, is virtualization alone solving the problems or adding a layer?

Network virtualization without network abstraction – results in short term patching with limited control of longer term operational complexity.

Network virtualization based on an abstracted network – results in effective control of both capital and operational expenses.
