A Lesson on Infrastructure from Nigeria – SDN and Networking

I recently took an amazing trip focused on launching Cisco Application Centric Infrastructure (ACI) across Africa (I work as a Technical Marketing Engineer for the Cisco BU responsible for ACI.)  During the trip I learned as much as I was there to share.  One of the more interesting lessons was about the importance of infrastructure, and the parallels that can be drawn to networking.  Lagos, Nigeria was the inspiration for this lesson.  Before beginning, let me state for the record that I enjoyed my trip, the people I had the pleasure to work with, and the parts of the culture I was able to experience.  This is simply an observation of the infrastructure and its parallels to data center networks.

Nigeria is known as the ‘Giant of Africa’ because of its population and economy.  Explosive birth rates have quickly brought it to 174 million inhabitants, and its GDP has become the largest in Africa at $500 billion.  That GDP is primarily oil based (40%) and surpasses that of South Africa, with its mining, banking, trade, and agricultural industries.  Nigeria also has a large and quickly growing telecommunications sector, and a highly developed financial services sector.  Even more important is that Nigeria is expected to be one of the world’s top 20 economies by 2050.  (Source: https://en.wikipedia.org/wiki/Nigeria.)

With this and several other industries and natural resources, Nigeria has the potential to quickly become a dominant player on the global stage.  The issue the country faces is that all of this industry is dependent on one thing: infrastructure.  Government, transportation, electrical, telecommunications, water, security, and other infrastructure is required to deliver on the value of these industries.  The Nigerian infrastructure is abysmal.

Corruption is rampant at all stages, instantly apparent before even passing through immigration at the airport.  Once outside the airport, if you travel the roads, especially at night, you can expect to be stopped at roadside checkpoints and asked for a ‘gift’ by armed military or police forces.  This is definitely not a problem unique to Nigeria, but having travelled to many similar places I found it to be much more in your face, and more ingrained in the overall system.

Those same roads that require gifts to travel on are commonly hard-packed dirt, or deteriorating pavement.  Large potholes filled with water are scattered across the roadways, making travel difficult.  Intersections are typically unmarked, with no traffic signals, stop signs, or yield signs.  Traffic chokes the streets, and even short trips can take hours; travel times are unpredictable.

... THOUGH I WALK AND DRIVE THROUGH THE VALLEY OF LAGOS ROADS

The electrical grid is fragile and unstable, with frequent brownouts throughout the day.  In some areas power is on for a day or two at a time, followed by days of darkness.  In the nicer complexes generators are used to fill the gaps.  The hotel we stayed at was a very nice global chain, and the power still went out several times a day for a few moments while the generator kicked in.

The overall security infrastructure of Nigeria has issues of its own.  Because of the weaknesses in central security, almost any business establishment you enter will have its own security.  This means you’ll go through metal detectors, x-rays, pat-downs, car searches, etc. before entering most places.

Additionally, you may be required to hire private security while in country, depending on your employer.  Private security is always a catch-22: to be secure you hire security, but by having security you become a more prominent target.  As a base example of this, one can assume that someone who can afford private security guards must be important enough, to someone, to be worth a ransom.

All of these aspects pose significant challenges to doing business in Nigeria.  The roads and security issues mean that you’ll spend far more time than necessary getting between meetings.  You’ll face unpredictable travel times, the added time of going through security at each end, parking challenges, etc.  Along the way you may hit checkpoints that demand gifts.  The power may pose a problem depending on the generator capabilities of the locations you’re visiting.

All of these issues choke the profitability of doing business in countries like this, and they make doing business there more difficult.  Simple examples would be companies that choose not to send staff for security reasons, or individual employees who are not comfortable travelling to these types of locations.  It’s far easier to find someone who’s willing to travel the expanse of the European Union, with its solid infrastructure, relative safety, etc., than it is to find people willing to travel to such locations.

All of this quickly drew a parallel in my mind to the current change going on within data center networks, specifically Software Defined Networking (SDN.)  SDN has the potential to drive new revenue streams in commercial business, and more quickly/efficiently accomplish the mission at hand for non-commercial organizations.  That being said, SDN will always be limited by the infrastructure that supports it.

A lot of talk around SDN focuses on software solutions that ride on top of existing networking equipment in order to provide features x, y and z.  Very little attention is given to the networking equipment below.  This will quickly become an issue for organizations looking to improve the application and service delivery of their data center.  Like Nigeria, these business services will be hindered by the infrastructure that supports them.

Looking at today’s networks, many are not far off from the roads pictured above.  We have 20+ years of quick fixes, protocol band-aids, and duct tape layered on to fix point problems before moving on to the next.  The physical transport of the network has become extremely complex.

Beyond these issues there are new physical requirements for today’s data center traffic.  1 Gig server links are saturated and quickly transitioning to 10 Gig.  10 Gig adoption at the access layer is driving demand for higher speeds at the aggregation and core layers, including 40 Gig and above.  These speeds-and-feeds increases cannot be delivered by software alone.  A congested network with additional overlay headers will simply become a more congested network.

A more systemic problem is the network designs themselves.  The most prominent network design in the data center is the 3-tier design.  This design consists of some combination of logical and physical Access, Aggregation and Core tiers.  In some cases one or more tiers are collapsed, based on size and scale, but the logical topology remains the same.  These designs are based on traditional North/South traffic patterns. With these traffic patterns, data is primarily coming into the data center through the core (north) and being sent south to the server for processing, then back out.  Today, the majority of the data center traffic travels East/West between servers.  This can be multi-tier applications, distributed applications, etc.  The change in traffic pattern puts constraints on the traditional designs.

The first constraint is the traffic flow itself.  As shown in the diagram below, traffic is typically sent to the aggregation tier for policy enforcement (security, user experience, etc.)  This pattern causes a ping-pong effect for traffic moving between server ports.

image

Equally important is the design of the hardware in place today.  Networking hardware in the data center is typically oversubscribed to reduce cost.  This means that while a switch may offer 48x 10 Gig ports, its hardware design may only offer a portion of that total bandwidth.  This is done with two assumptions:

1) the traffic will eventually be egressing the data center network on slower WAN links

2) not all ports will be attempting to send packets at full-rate at the same time.

With the way modern applications are built and used, this is no longer the case.  Due to the distribution of applications we more often have 1 Gig or 10 Gig server ports communicating with other 1 Gig or 10 Gig ports.  Additionally, many applications will actually attempt to push all ports at line rate at the same time.  Big data applications are a common example of this.
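To put rough numbers on that oversubscription assumption, here's a minimal Python sketch of the math for a hypothetical leaf switch with 48x 10 Gig server ports and 4x 40 Gig uplinks (the port counts are illustrative, not a specific product, and this only covers uplink oversubscription, not internal fabric limits):

```python
# Hypothetical oversubscription math for a top-of-rack switch.
# Port counts are illustrative, not a specific product.

server_ports = 48          # 10 Gig server-facing ports
server_port_speed = 10     # Gbps each
uplinks = 4                # 40 Gig uplinks toward aggregation/spine
uplink_speed = 40          # Gbps each

southbound = server_ports * server_port_speed   # 480 Gbps of possible server traffic
northbound = uplinks * uplink_speed             # 160 Gbps of uplink capacity

print(f"Oversubscription ratio: {southbound / northbound:.0f}:1")  # 3:1 here
```

At 3:1 the design is betting that no more than a third of the server ports push traffic off-switch at full rate at once, which is exactly the assumption distributed and big data workloads break.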

The new traffic demands in the data center require new hardware designs and new network topologies.  Most modern network hardware solutions are designed for full-rate, non-blocking traffic, or as close to it as possible.  Additionally, the designs being recommended by most vendors today are flatter two-tier architectures known as spine/leaf or Clos architectures.  These designs lend themselves well to scalability and consistent latency between servers, service appliances (virtual/physical) and WAN or data center interconnect links.
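To illustrate why the flatter design gives consistent latency, here's a small Python sketch of a hypothetical two-tier Clos fabric (sizes are arbitrary), showing that any leaf reaches any other leaf in exactly two hops, with one equal-cost path per spine:

```python
# Minimal sketch of a two-tier spine/leaf (Clos) fabric; sizes are arbitrary.

spines = [f"spine{i}" for i in range(4)]
leaves = [f"leaf{i}" for i in range(8)]

# Full mesh between tiers: every leaf has one link to every spine.
links = {(leaf, spine) for leaf in leaves for spine in spines}

def paths(src_leaf, dst_leaf):
    """All equal-cost paths between two leaves: leaf -> spine -> leaf."""
    return [(src_leaf, spine, dst_leaf)
            for spine in spines
            if (src_leaf, spine) in links and (dst_leaf, spine) in links]

for path in paths("leaf0", "leaf5"):
    print(" -> ".join(path))
# Every leaf pair is two hops apart, and adding spines adds bandwidth, not hops.
```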

Like Nigeria, our business solutions will only be as effective as the infrastructure that supports them.  We can, of course, move forward and grow at some rate, for some time, by layering over top of the existing infrastructure, but we’ll be limited.  At some point we’ll need to overhaul the infrastructure itself to support the full potential of the services that ride on top of it.


Next Generation Networking Panel With Brad Hedlund – Cisco ACI vs. VMware NSX


Seeing the Big Picture – Job Rotation

A mini-rant I went on this evening prompted Jason Edelman (@jedelman) to suggest I write a blog on the topic.  My rant was in regard to job rotation, specifically, in the IT vendor world, rotating from the field (sales engineering, professional services, technical support) to the business units building the products, and vice versa.  This is all about perspective.

In the past I’ve written about pre-sales engineering:

The Art of Pre-Sales: http://www.definethecloud.net/the-art-of-pre-sales/

The Art of Pre-Sales Part II – Showing Value: http://www.definethecloud.net/the-art-of-pre-sales-part-ii-showing-value/

Pre-sales engineering (at Value Added Resellers) has been my background for quite a while.  About a year and a half ago I moved over to the vendor side, and specifically to a business unit bringing brand new products to market.  This has been an eye-opening experience to say the least.

What I’ve Learned:

  1. Building a product, and bringing it to market is completely different from using it to architect solutions and selling it.
    • This one almost goes without saying.  When you’re selling a product or architecting solutions you are focused on using a solution to solve a problem.  Both of these should be knowns.
    • If you’re a good architect/engineer you’re using the two ears and one mouth you’re given in proportion to identify the problem.  Then you select the best-fit solution for that problem from your bag of tricks.
    • When you’re building a product you’re trying to identify a broad problem, market shift, or industry gap and create a solution for that.  The technology is only one small piece of this.  The other focuses include:
      • The Total Addressable Market (TAM)
      • Business impact/disruption to current products
      • Marketing
      • Training/education
      • Adoption
      • Sales ramp
  2. A lot of those “WTF were they thinking” questions have valid answers.
    • Ever sat back and asked yourself ‘What were they thinking?’  9 times out of 10, they were. 
      • 9 out of 10 of those times they were thinking TAM. 
        • We tend to work in specific verticals (as customers or vendors): health care, government, service-provider, etc.  What seems like a big requirement in our view, may not be significant in the big picture being addressed.
        • Shifting markets and therefore shifting TAM.  Where you may be selling a lot of feature x today, that may be a shrinking market/requirement in the big picture.
      • Complexity/down-the-road costs.  In many cases implementing feature x will complicate features A, B, C, and D.  It may complicate QA processes, slow feature adoption, add cost, etc.
  3. Everything has trade-offs.
    • Nothing is free, engineering resources are limited, time is limited, budget is limited.  This means that tough decisions have to be made.  Tough decisions are made in every roadmap meeting and most of those end with features on the chopping block, or pushed out.
    • Anything added typically means something removed or delayed.  In cases where it doesn’t, it probably means a date will slip.

The Flip Side:

This is not a one-way street.  The flip side is true as well.  My lifetime of experience on the other side gives me a far different perspective than many of my colleagues.  Some of my colleagues have always lived in the ‘ivory tower’ I now share with them.  Without experience in the field it’s hard to empathize with specific requests, needs, complaints or concerns.  It’s hard to really be in touch with the day-to-day requirements and problems.  Having the other perspective is beneficial all around.

So What?

  • If you’re an IT vendor, find ways to open up job rotation practices.  3-6 month rotations every 2-3 years would be ideal, but anything is a start.  Advertise the options, encourage managers to support it, promote it.
  • If you’re an individual, suggest the program.  Beyond suggesting the program, search out opportunities.  It’s always beneficial to have a broader understanding of the company you work for, and trying different roles will help with this.
  • Even if neither of these things can happen, find ways to engage often with your counterparts on the other side of the fence and listen.  The more you understand their point of view the easier it will be to find win/win solutions.

A Few Good Apps

Developer: Network team, did you order the Code upgrade?!

Operations Manager: You don’t have to answer that question!

Network Engineer: I’ll answer the question. You want answers?

Developer: I think I’m entitled!

Network Engineer: You want answers?!

Developer: I want the truth!

Network Engineer: You can’t handle the truth! Son, we live in a world that has VLANs, and those VLANs have to be plumbed by people with CLIs. Who’s gonna do it? You? You, Database Admin? I have a greater responsibility than you can possibly fathom. You weep for app agility and you curse the network. You have that luxury. You have the luxury of not knowing what I know, that network plumbing, while tragically complex, delivers apps. And my existence, while grotesque and incomprehensible to you, delivers apps! You don’t want the truth, because deep down in places you don’t talk about at parties, you want me on that CLI. You need me on that CLI. We use words like “routing”, “subnets”, “L4 Ports”. We use these words as the backbone of a life spent building networks. You use them as a punch line. I have neither the time nor the inclination to explain myself to a man who rises and sleeps under the blanket of infrastructure that I provide, and then questions the manner in which I provide it! I would rather you just said “thank you”, and went on your way. Otherwise, I suggest you pick up a putty session, and configure a switch. Either way, I don’t give a damn what you think you are entitled to!

Developer: Did you order the Code upgrade?

Network Engineer: I did the job that—

Developer: Did you order the Code upgrade?!!

Network Engineer: YOU’RE GODDAMN RIGHT I DID!!

 

In many IT environments today there is a distinct line between the application developers/owners and the infrastructure teams that are responsible for deploying those applications.  These organizational silos lead to tension, lack of agility and other issues.  Much of this is caused by the translation required between these teams.  Application teams speak in terms like objects, attributes, provider, consumer, etc.  Infrastructure teams speak in terms of memory, CPU, VLANs, subnets, and ports.  This is exacerbated when delivering apps over the network, which requires connectivity, security, load balancing, etc.  On today’s network devices (virtual or physical) the application must be identified based on Layer 3 addressing and L4 information.  This means the app team must be able to describe components or tiers of an app in those terms (which are foreign to them.)  This slows down the deployment of applications and induces problems with tight controls, security, etc.  I’ve tried to describe this in the graphic below (for people who don’t read good and want to learn to do networking things good too.)

image

As shown in the graphic, the definition of an application and its actual instantiation onto networking devices (virtual and physical) is very different.  This is responsible for much of the slow pace of application deployment and the complexity of networking.  Today’s networks don’t have an application centric methodology for describing applications and their requirements.  The same can be said for emerging SDN solutions.  The two most common examples of SDN today are OpenFlow and Network Virtualization.  OpenFlow simply attempts to centralize a control plane that was designed to be distributed for scale and flexibility.  In doing so it uses 5-tuple matches of IP and TCP/UDP headers to attempt to identify applications as network flows.  This is no different from the model in use today.  Network virtualization faithfully replicates today’s network constructs into a hypervisor, shifting management and adding software layers without solving any of the underlying problem.
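To make the translation gap concrete, here's a hypothetical sketch of the same web tier described in the application team's terms and in the network team's terms (all names, VLANs and addresses are invented for illustration):

```python
# Hypothetical example of one application tier seen from both sides.
# All names, VLAN IDs and addresses are invented for illustration.

app_team_view = {
    "tier": "web",
    "provides": {"service": "http", "to": "users"},
    "consumes": {"service": "database", "from": "db-tier"},
}

network_team_view = {
    "vlan": 210,
    "subnet": "10.1.210.0/24",
    "acl": [
        "permit tcp any 10.1.210.0/24 eq 80",               # users -> web
        "permit tcp 10.1.210.0/24 10.1.220.0/24 eq 1433",   # web -> db
    ],
}

# Today someone has to translate the first view into the second by hand,
# and re-translate it every time an address or VLAN changes.
```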

What’s needed is a common language for the infrastructure teams and development teams to use.  That common language can be used to describe application connectivity and policy requirements in a way that makes sense to separate parts of the organization and business.  Cisco Application Centric Infrastructure (ACI) uses policy as this common language, and deploys the logical definition of policy onto the network automatically.

Cisco ACI bases network provisioning on the application and the two things required for application delivery: connectivity and policy.  By connectivity we’re describing what group of objects is allowed to connect to other groups of objects.  We are not defining forwarding, as forwarding is handled separately using proven methods, in this case ISIS with a distributed control plane.  When we describe connectivity we simply mean allowing the connection.  Policy is a broader term, and very important to the discussion.  Policy is all of the requirements for an application: SLAs, QoS, Security, L4-7 services etc.  Policy within ACI is designed using reusable ‘contracts.’  This way policy can be designed in advance by the experts and architects with that skill set and then reused whenever required for a new application roll-out.

Applications are deployed on the ACI fabric using an Application Network Profile.  An application network profile is simply a logical template for the design and deployment of an application’s end-to-end connectivity and policy requirements.  If you’re familiar with Cisco UCS it’s a very similar concept to the UCS Service Profile.  One of the biggest benefits of an Application Network Profile is its portability.  They can be built through the API or GUI, downloaded from Cisco Developer Network (CDN) or the ACI Github community, or provided by the application vendor itself.  They’re simply an XML or JSON representation of the end-to-end requirements for delivering an application.  The graphic below shows an application network profile.

image
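As a rough sketch only (the keys and names below are invented for illustration and are not the actual ACI object model or schema), the kind of information an application network profile carries might look something like this:

```python
# Hypothetical, simplified shape of an application network profile.
# Key names are invented for illustration; this is NOT the real ACI schema.

app_profile = {
    "application": "online-store",
    "tiers": [
        {"name": "web", "provides": ["web-contract"], "consumes": ["app-contract"]},
        {"name": "app", "provides": ["app-contract"], "consumes": ["db-contract"]},
        {"name": "db",  "provides": ["db-contract"]},
    ],
    "contracts": {
        "web-contract": {"allow": ["tcp/80", "tcp/443"], "qos": "gold",
                         "services": ["firewall", "load-balancer"]},
        "app-contract": {"allow": ["tcp/8080"]},
        "db-contract":  {"allow": ["tcp/1433"], "logging": True},
    },
}
```

The point is that the whole definition lives in one portable document: the tiers, who talks to whom, and the policy ‘contracts’ that get reused wherever the same kind of connection appears.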

This model provides that common language that can be used by developer teams and operations/infrastructure teams.  To tie this back to the tongue-in-cheek start to this post based on dialogue from “A Few Good Men”, we don’t want to replace the network engineer, but we do want to get them off of the CLI.  Rather than hacking away at repeatable tasks on the command line, we want them using the policy model to define the policy ‘contracts’ for use when deploying applications.  At the same time we want to give them better visibility into what the application requires and what it’s doing on the network.  Rather than troubleshooting devices and flows, why not look at application health?  Rather than manually configuring QoS based on devices, why not set it per application or tier?  Rather than focusing on VLANs and subnets as policy boundaries, why not abstract that and group things based on policy requirements?  Think about it: why should every aspect of a server’s policy change because you changed the IP?  That’s what happens on today’s networks.

Call it a DevOps tool, call it automation, call it what you will, ACI looks to use the language of applications to provision the network dynamically and automatically.  Rather than simply providing better management tools for 15 year old concepts that have been overloaded we focus on a new model: application connectivity and policy.

**Disclaimer: I work as a Technical Marketing Engineer for the Cisco BU responsible for Nexus 9000 and ACI.  Feel free to disregard this post as my biased opinion.**


Video: Cisco ACI Overview


Oh, the Places You’ll Go! (A Cisco ACI Story)

In the fashion of my two previous Dr. Seuss style stories I thought I’d take a crack at Cisco Application Centric Infrastructure (ACI.)  Check out the previous two if you haven’t read them and have time to waste:

 

Horton Hears Hadoop: http://www.definethecloud.net/horton-hears-hadoop/

The App on the Crap (An SDN Story) http://www.definethecloud.net/the-app-on-the-crap-an-sdn-story/

 

Congratulations!

This is the time.

The network is changing!

The future is here!

 

With software controllers.

And virtualized widgets.

You can steer traffic

any direction you choose.

Packets are moving. They’ll flow where they flow.

And YOU are the gal who’ll decide where they’ll go.

 

You’ll look up and down paths.  Look ‘em over with care.

About some you’ll say, “No VOIP will go there.”

With an overlay net, and central control,

No packet will flow, down a not-so-good path.

 

And when packets travel

on suboptimal paths.

You’ll reroute those flows,

based on 5-tuple match.

 

Net’s opened wide

With central control.

 

Now net change can happen

and rapidly too

with net as central

and virtual too.

 

And when things start to happen,

don’t panic.  Don’t stew.

Just go troubleshoot.

All layers old, and the new.

 

OH!

THE PLACES YOU’LL GO!

 

You’ll be on your way up!

Packet’s moving in flight!

You’ll be the rock star

who set network right.

 

The network won’t lag, because of central control.

You’ll provision the pipes, avoid traffic black holes.

The packets will fly, you’ll be best of the best.

Wherever they fly, be faster than the rest.

 

Except when they don’t.

Because sometimes they won’t.

 

I’m sorry to say so

but, sadly it’s true

that Bang-ups

and Hang-ups

will happen to you.

 

You can get all hung up

in congestion / jitter.

And packets won’t travel.

Some will just flitter.

 

Applications will fail

with unpleasant time-outs.

And the chances are, then,

that you’ll start hearing shouts.

 

And when applications fail,

you’re not in for much fun.

Getting them back up

is not easily done.

 

You’ll need the app team, spreadsheets, security rules.

You’ll have to troubleshoot through disparate tools.

Find a way to translate from app language to net.

Map L3/L4 to app names, not done yet.

There are services too, that’s a safe bet.

 

Which route did it take, and which networks the problem?

Overlay, underlay, this network has goblins.

Congestion, and drops, latency jitter

Check with the software, then break out the splitter.

You’ll sort this out, you’re no kind of quitter!

 

It can get so confused

two networks to trace.

The process is slow, not what you want for a pace.

You must sort it out, this is business, a race.

What happened here, what’s going on in this place?

 

NO!

That’s not for you!

Those duct tape based fixes.

You’ll choose better methods.

Not hodge-podge tech mixes.

 

Look first at the problem,

what’s causing the issues?

What is it that net, is trying to do?

The app is the answer, in front of you.

 

The data center’s there to run applications!

To serve them to users, move data ‘cross nations.

To drive revenue, open up business models.

To push out new services, all at full throttle.

The application’s what matters.

Place it on a platter.

 

You’ll put the app into focus,

With some abstraction hocus-pocus.

 

You’ll use the language of apps.

To describe connectivity.

Building application maps,

to increase productivity.

 

Use a system focused on policy,

not new-fangled virtual novelty.

Look at apps end-to-end,

Not with the ‘app is VM’ trend.

 

Whether virtual or physical, you’ll treat things the same.

From L2 to L3, or L4-7,

use of uniform policy, will be your new game.

Well on your way to networking heaven.

 

Start with a logical model, a connectivity graph.

One that the system, deploys on your behalf.

A single controller for policy enforcement.

Sure to receive security’s cheering endorsement.

Forget about VLANs, routes and frame formats,

no longer will networking be the app-deploy doormat.

 

You see to build networks for today and tomorrow,

don’t use band-aids stacked high as Kilimanjaro.

You’ll want to start with REMOVING complexity.

Anything else, just adds to perplexity.

 

Start at the top, in an app centric fashion.

on a system that knows to treat apps as its passion.

 

And will you succeed?

Yes! you will, indeed!

(98 and 3/4 percent guaranteed.)*

KID, YOU’LL MOVE MOUNTAINS!

 

So…

be your app virtual, physical or cloud

with services, simple, complex or astray,

you’re off to Great Places!

Today is your day!

ACI is waiting.

So…get on your way!

 

 

*This is intended as whimsical nonsense.  Any guarantees are null and void based on the complete insanity of the author.

**Disclaimer: I work for Cisco Systems with the group responsible for Nexus 9000 and ACI.  Please feel free to consider this post random vendor rhetoric.**

For more information on Cisco ACI visit www.cisco.com/go/aci


True Software Defined Networking (SDN)

The world is, and has been, buzzing about software defined networking. It’s going to revolutionize the entire industry, commoditize hardware, and disrupt all the major players. It’s going to do all that… someday. To date it hasn’t done much but be a great conversation and, more importantly, identify the need for change in networking.

In its first generation SDN is a lot of sizzle with no substance. The IT world is still trying to truly define it, much like we were with ‘Cloud’ years ago. What’s beginning to emerge is that SDN is more of a methodology than an implementation, and like cloud there are several implementations: OpenFlow, Network Virtualization and Programmable Network Infrastructure.

 

image

OpenFlow

OpenFlow focuses on a separation of control plane and data plane. This provides a centralized method to route traffic based on a 5-tuple match of packet header information. One area where OpenFlow falls short is its dependence on the independent advancement of the protocol itself and the hardware support below. Hardware in the world of switching and routing is Application Specific Integrated Circuit (ASIC) based, and those ASICs typically take three years to refresh. This means that the OpenFlow protocol itself must advance, and then, once it has stabilized, silicon vendors can begin building new ASICs that become available three years later.
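For a simple illustration of what a 5-tuple match is (a generic sketch of the idea, not the OpenFlow wire protocol or any controller's actual API), a flow rule keyed on IP and L4 header fields might be modeled like this:

```python
# Generic sketch of a 5-tuple flow match; not the OpenFlow wire format.

from dataclasses import dataclass

@dataclass(frozen=True)
class FiveTuple:
    src_ip: str
    dst_ip: str
    protocol: str   # "tcp" or "udp"
    src_port: int
    dst_port: int

# A flow table keyed on the 5-tuple, mapping each flow to a forwarding action.
flow_table = {
    FiveTuple("10.0.0.5", "10.0.1.9", "tcp", 51512, 80): "output:port3",
}

def lookup(pkt: FiveTuple) -> str:
    # Anything the controller hasn't programmed yet gets punted to it.
    return flow_table.get(pkt, "send-to-controller")

print(lookup(FiveTuple("10.0.0.5", "10.0.1.9", "tcp", 51512, 80)))  # output:port3
```

Note that nothing in that key says anything about the application; it's still just addresses and ports, which is the same identification model networks use today.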

Network Virtualization

Network virtualization is a faithful reproduction of networking functionality into the hypervisor. This method is intended to provide advanced automation and speed application deployment. The problem here arises in the new tools required to manage and monitor the network, the additional management layer, and the replication of the same underlying complexity.

Programmable Network Infrastructure

Programmable network infrastructure moves device configuration from human-oriented CLI/GUI interfaces to APIs and programming agents. This allows for faster, more powerful and less error-prone device configuration from automation, orchestration and cloud operating system tools. These advance the configuration of multiple disparate systems, but they are still designed around network operating system constructs intended for human use, and they carry the same underlying network complexities, such as artificial ties between addressing and policy.
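For a flavor of what that looks like, here's a minimal sketch of pushing a VLAN to a switch through a REST API; the endpoint URL, payload shape and token are hypothetical and not the API of any particular network operating system:

```python
# Minimal sketch of API-driven device configuration.
# The endpoint URL, payload format and token below are hypothetical.

import json
import urllib.request

payload = {"vlan": {"id": 210, "name": "web-tier"}}

req = urllib.request.Request(
    url="https://switch.example.com/api/v1/vlans",   # hypothetical endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <token>"},      # placeholder credential
    method="POST",
)

with urllib.request.urlopen(req) as resp:             # machine talks to machine, no CLI scraping
    print(resp.status)
```

Notice, though, that the payload is still a VLAN: the object being programmed is the same human-era construct, just reached through an API.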

All of these generation 1 SDN solutions simply move the management of the underlying complexity around. They are software designed to operate in the same model, trying to configure existing hardware. They’re simply adding another protocol, or protocols, to the pile of existing complexity.

image

Truly software defined networks

To truly define the network via software you have to look at the entire solution, not just a single piece. Simply adding a software or hardware layer doesn’t fix the problem; you must look at them in tandem, starting with the requirements for today’s networks: automation, application agility, visibility (virtual/physical), security, scale and L4-7 services (virtual/physical.)

If you start with those requirements and think in terms of a blank slate you now have the ability to build things correctly for today and tomorrow’s applications while ensuring backwards compatibility. The place to start is in the software itself, or the logical model. Begin with questions:

1. What’s the purpose of the network?

2. What’s most relevant to the business?

3. What dictates the requirements?

The answer to all three is the application, so that’s the natural starting point. Next you ask who owns, deploys and handles day two operations for an application? The answer is the development team. So you start with a view of applications in a format they would understand.

image

That format is simple provider/consumer relationships between tiers or components of an application. Each tier may provide and consume services from the next to create the application, which is a group of tiers or components, not a single physical server or VM.

You take that idea a step further and understand that the provider/consumer relationships are truly just policy. Policy can describe many things, but here it would be focused on permit/deny, redirect, SLAs, QoS, logging and L4-7 service chaining for security and user experience.

image

Now you’ve designed a policy model that focuses on the application connectivity and any requirements for those connections, including L4-7 services. With this concept you can instantiate that policy in a reusable format so that policy definition can be repeated for like connections, such as users connecting to a web tier. Additionally the application connectivity definition as a whole could be instantiated as a template or profile for reuse.
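A minimal sketch of that reuse idea, with invented names, might look like the following; the point is that the policy is defined once by the people with that expertise and then attached wherever the same kind of connection shows up:

```python
# Sketch of reusable policy objects; all names are invented for illustration.

web_policy = {
    "allow": ["tcp/443"],
    "qos": "gold",
    "log": True,
    "service_chain": ["firewall", "load-balancer"],
}

# The same policy object is reused for every consumer of the web tier,
# and the whole set of relationships can be saved as a reusable profile.
relationships = [
    {"consumer": "users",    "provider": "web", "policy": web_policy},
    {"consumer": "partners", "provider": "web", "policy": web_policy},
]
```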

You’ve now defined a logical model, based on policy, for how applications should be deployed. With this model in place you can work your way down. Next you’ll need network equipment that can support your new model. Before thinking about the hardware, remember there is an operating system (OS) that will have to interface with your policy model.

Traditional network operating systems are not designed for this type of object-oriented policy model. Even highly programmable or Linux-based operating systems have not been designed for the object programmability that would fully support this model.  You’ll need an OS that’s capable of representing tiers or components of an application as objects, with configurable attributes.  Additionally it must be able to represent physical resources like ports as objects abstracted from the applications that will run on them.  An OS that can be provisioned in terms of policy constructs rather than configuration lines such as switch ports, QoS and ACLs.  You’ll need to rewrite the OS.
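As a hedged sketch of what ‘objects with configurable attributes’ could mean (the class names are invented, not a real network OS API):

```python
# Invented class names; a sketch of an object model, not a real network OS API.

from dataclasses import dataclass, field

@dataclass
class EndpointGroup:
    """A tier or component of an application, modeled as an object."""
    name: str
    attributes: dict = field(default_factory=dict)   # e.g. {"qos": "gold"}

@dataclass
class PhysicalPort:
    """A physical resource, abstracted from whatever application lands on it."""
    slot: int
    port: int
    speed_gbps: int

web = EndpointGroup("web", {"qos": "gold", "logging": "on"})
uplink = PhysicalPort(slot=1, port=49, speed_gbps=40)
print(web, uplink)
```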

As you’re writing your OS you’ll need to rethink the switching and routing hardware that will deliver all of those packets and frames. Of course you’ll need density, bandwidth, low latency, etc. More importantly, you’ll need hardware that can define, interpret and enforce policy based on your new logical model. You’ll need to build hardware tailored to the way you define applications, connectivity and policy.  Hardware that can enforce policy based on logical groupings, free of VLAN and subnet based policy instantiation.

If you build these out together, starting with the logical model then defining the OS and hardware to support it, you’ll have built a solution that surpasses the software shims of generation 1 SDN. You’ll have built a solution that focuses on removing the complexity first, then automating, then applying rapid deployment through tools usable by development and operations, better yet DevOps.

If you do that you’ll have truly defined networking based on software. You’ll have defined it from the top all the way down to the ASICs. If you do all that and get it right, you’ll have built Cisco’s Application Centric Infrastructure (ACI.)

For more information on the next generation of data center networking check out www.cisco.com/go/aci.

 

Disclaimer: ACI is where I’ve been focused for the last year or so, and where my paycheck comes from.  You can feel free to assume I’m biased and this article has no value due to that.  I won’t hate you for it.


Engineers Unplugged Episode 14: Application Affinity

I had the pleasure of speaking with Nils Swart (@nlnils) of Plexxi about applications and the network.  You can watch the quick Engineers Unplugged episode below.


Software Defined Networking: The Role of SDN on Compute Infrastructure Administration

#vBrownBag Follow-up Software Defined Networking SDN with Joe Onisick (@jonisick) from ProfessionalVMware on Vimeo.


It’s Our Time Down Here – “Underlays”

Recently while winding down from a long day I flipped the channel and “The Goonies” was on.  I left it there thinking an old movie I’d seen a dozen times would put me to sleep quickly.  As it turns out I quickly got back into it.  By the time the gang hit the wishing well and Mikey gave his speech I was inspired to write a blog, this one in particular.  “Cause it’s their time – their time up there.  Down here it’s our time, it’s our time down here.” 

This got me thinking about data center network overlays, and the physical networks that actually move the packets some Network Virtualization proponents have dubbed “underlays.”  The more I think about it, the more I realize that it truly is our time down here in the “lowly underlay.”  I don’t think there’s much argument around the need for change in data center networking, but there is a lot of debate on how.  Let’s start with their time up there “Network Virtualization.”

Network Virtualization

Unlike server virtualization, Network Virtualization doesn’t partition out the hardware and separate out resources.  Network Virtualization uses server virtualization to virtualize network devices such as switches, routers, firewalls and load-balancers.  From there it creates virtual tunnels across the physical infrastructure using encapsulation techniques such as VxLAN, NVGRE and STT.  The end result is a virtualized instantiation of the current data center network in x86 servers, with packets moving in tunnels on physical networking gear that segregate them from other traffic on that gear.  The graphic below shows this relationship.

image
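To ground the encapsulation idea, here's a minimal Python sketch of a VXLAN-style header wrapping an inner frame (only a sketch of the concept, not a working VTEP; the header layout follows RFC 7348):

```python
# Sketch of VXLAN-style encapsulation: the original L2 frame rides inside UDP.
# Header layout per RFC 7348: 8 bits flags, 24 reserved, 24-bit VNI, 8 reserved.

import struct

def vxlan_header(vni: int) -> bytes:
    flags = 0x08  # "I" flag set: the VNI field is valid
    return struct.pack("!II", flags << 24, vni << 8)

inner_frame = b"\x00" * 64          # placeholder for the original Ethernet frame
encapsulated = vxlan_header(vni=5001) + inner_frame

# Roughly 50 bytes of outer Ethernet/IP/UDP/VXLAN headers are added per packet,
# so the physical "underlay" still carries, and must have capacity for, every byte.
print(len(vxlan_header(5001)), "byte VXLAN header,", len(encapsulated), "bytes handed to UDP")
```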

Network Virtualization in this fashion can provide some benefits in the form of provisioning time and automation.  It also induces some new challenges, discussed in more detail here: What Network Virtualization Isn’t (be sure to read the comments for alternate viewpoints.)  What network virtualization doesn’t provide, in any form, is a change to the model we use to deploy networks and support applications.  The constructs and deployment methods for designing applications and applying policy are not changed or enhanced.  All of the same broken or misused methodologies are carried forward.  When working with customers beginning to virtualize servers I would always recommend against automated physical-to-virtual server migration, suggesting a rebuild in a virtual machine instead.

The reason for that is twofold.  First, server virtualization was a chance to re-architect based on lessons learned.  Second, simply virtualizing existing constructs is like hiring movers to pack your house along with the dirt, cobwebs, etc., then move it all to the new place and unpack.  The smart way to move a house is to donate or yard-sale what you won’t need, pack the things you do, move into a clean place and arrange optimally for the space.  The same applies to server and network virtualization.

Faithful replication of today’s networking challenges as virtual machines with encapsulation tunnels doesn’t move the bar for deploying applications.  At best it speeds up, and automates, bad practices.  Server virtualization hit the same challenges.  I discuss what’s needed from the ground up here: Network Abstraction and Virtualization: Where to Start?.  Software-only network virtualization approaches are challenged both by the restrictions of the hardware that moves their packets and by a methodology that misses where the pain points really are.  It’s their time up there.

Underlays

The physical transport network, which some minimize as the “underlay,” is actually the more important piece in making a shift to network programmability, automation and flexibility.  Even network virtualization vendors will agree on this, to some extent, if you dig deep enough.  Once you cut through the marketecture of “the underlay doesn’t matter” you’ll find recommendations for a non-blocking fabric of 10G access ports and 40G aggregation in one design or another.  This is because the overlay has no visibility into congestion and no control of delivery prioritization such as QoS.

Additionally, Network Virtualization has no ability to abstract the constructs of VLAN, subnet, security, logging and QoS from one another, as described in the link above.  To truly move the network forward in a way that provides automation and programmability in a model that’s cohesive with application deployment, you need to include the physical network with the software that will drive it.  It’s our time down here.

By marrying physical innovations that provide a means for abstraction of constructs at the ground floor with software that can drive those capabilities, you end up with a platform that can be defined by the architecture of the applications that will utilize it.  This puts the infrastructure as a whole in a position to be deployed in lock-step with the applications that create differentiation and drive revenue.  This focus on the application is discussed here: Focus on the Ball: The Application.  The figure below, from that post, depicts this.

image

 

The advantage to this ground up approach is the ability to look at applications as they exist, groups of interconnected services, rather than the application as a VM approach.  This holistic view can then be applied down to an infrastructure designed for automation and programmability.  Like constructing a building, your structure will only be as sound as the foundation it sits on.

For a little humor (nothing more), here’s my comic depiction of Network Virtualization.

image
