A Few Good Apps

Developer: Network team, did you order the Code upgrade?!

Operations Manager: You don’t have to answer that question!

Network Engineer: I’ll answer the question. You want answers?

Developer: I think I’m entitled!

Network Engineer: You want answers?!

Developer: I want the truth!

Network Engineer: You can’t handle the truth! Son, we live in a world that has VLANs, and those VLANs have to be plumbed by people with CLIs. Who’s gonna do it? You? You, Database Admin? I have a greater responsibility than you can possibly fathom. You weep for app agility and you curse the network. You have that luxury. You have the luxury of not knowing what I know, that network plumbing, while tragically complex, delivers apps. And my existence, while grotesque and incomprehensible to you, delivers apps! You don’t want the truth, because deep down in places you don’t talk about at parties, you want me on that CLI. You need me on that CLI. We use words like “routing”, “subnets”, “L4 Ports”. We use these words as the backbone of a life spent building networks. You use them as a punch line. I have neither the time nor the inclination to explain myself to a man who rises and sleeps under the blanket of infrastructure that I provide, and then questions the manner in which I provide it! I would rather you just said “thank you”, and went on your way. Otherwise, I suggest you pick up a PuTTY session, and configure a switch. Either way, I don’t give a damn what you think you are entitled to!

Developer: Did you order the Code upgrade?

Network Engineer: I did the job that—

Developer: Did you order the Code upgrade?!!

Network Engineer: YOU’RE GODDAMN RIGHT I DID!!

 

In many IT environments today there is a distinct line between the application developers/owners and the infrastructure teams responsible for deploying those applications. These organizational silos lead to tension, lack of agility and other issues, much of it caused by the translation required between teams. Application teams speak in terms like objects, attributes, provider and consumer; infrastructure teams speak in memory, CPU, VLANs, subnets and ports. This is exacerbated when delivering apps over the network, which requires connectivity, security, load-balancing, etc. On today’s network devices (virtual or physical) the application must be identified based on Layer 3 addressing and L4 information. This means the app team must be able to describe the components or tiers of an app in those terms (which are foreign to them). This slows down the deployment of applications and creates problems for tight controls, security, etc. I’ve tried to describe this in the graphic below (for people who don’t read good and want to learn to do networking things good too.)

image

As shown in the graphic, the definition of an application and its actual instantiation onto networking devices (virtual and physical) are very different. This disconnect accounts for much of the slow pace of application deployment and much of networking’s complexity. Today’s networks don’t have an application-centric methodology for describing applications and their requirements, and the same can be said for emerging SDN solutions. The two most common examples of SDN today are OpenFlow and Network Virtualization. OpenFlow simply attempts to centralize a control plane that was designed to be distributed for scale and flexibility. In doing so it uses 5-tuple matches of IP and TCP/UDP headers to attempt to identify applications as network flows. This is no different from the model in use today. Network virtualization faithfully replicates today’s network constructs into a hypervisor, shifting management and adding software layers without solving any of the underlying problems.

What’s needed is a common language for the infrastructure and development teams to use. That common language can describe application connectivity and policy requirements in a way that makes sense to the separate parts of the organization and business. Cisco Application Centric Infrastructure (ACI) uses policy as this common language, and deploys the logical definition of that policy onto the network automatically.

Cisco ACI bases network provisioning on the application and the two things required for application delivery: connectivity and policy. By connectivity we’re describing which groups of objects are allowed to connect to which other groups. We are not defining forwarding; forwarding is handled separately using proven methods, in this case IS-IS with a distributed control plane. When we describe connectivity we simply mean allowing the connection. Policy is a broader term, and very important to the discussion. Policy is all of the requirements for an application: SLAs, QoS, security, L4-7 services, etc. Policy within ACI is designed using reusable ‘contracts.’ This way policy can be designed in advance by the experts and architects with that skill set, then reused whenever required for a new application roll-out.

Applications are deployed on the ACI fabric using an Application Network Profile. An Application Network Profile is simply a logical template for the design and deployment of an application’s end-to-end connectivity and policy requirements. If you’re familiar with Cisco UCS, it’s a very similar concept to the UCS Service Profile. One of the biggest benefits of an Application Network Profile is its portability. Profiles can be built through the API or GUI, downloaded from the Cisco Developer Network (CDN) or the ACI GitHub community, or provided by the application vendor itself. They’re simply an XML or JSON representation of the end-to-end requirements for delivering an application. The graphic below shows an application network profile.

image
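Since a profile is just XML or JSON, it can be version-controlled, shared and pushed through the API. Below is a minimal sketch of what that could look like in Python. The payload shape follows ACI’s published object model (fvTenant/fvAp/fvAEPg class names), but the controller address, credentials, tenant and tier names are illustrative assumptions rather than a drop-in example.

```python
import requests

APIC = "https://apic.example.com"  # hypothetical APIC address
session = requests.Session()

# Log in to the APIC REST API (aaaLogin is its documented auth endpoint);
# the session keeps the returned auth cookie for later calls.
session.post(f"{APIC}/api/aaaLogin.json", verify=False,
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}})

# A three-tier app expressed as an Application Network Profile: one EPG
# (endpoint group) per tier. Contracts defining who may talk to whom
# would hang off these objects; all names here are illustrative.
app_profile = {
    "fvAp": {
        "attributes": {"name": "ecommerce"},
        "children": [{"fvAEPg": {"attributes": {"name": tier}}}
                     for tier in ("web", "app", "db")],
    }
}

# Post the profile into an assumed pre-existing tenant.
resp = session.post(f"{APIC}/api/mo/uni/tn-ExampleTenant.json",
                    verify=False, json=app_profile)
resp.raise_for_status()
```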

This model provides that common language that can be used by developer teams and operations/infrastructure teams. To tie this back to the tongue-in-cheek start of this post, based on dialogue from “A Few Good Men”: we don’t want to replace the network engineer, we want to get them off of the CLI. Rather than hacking away at repeatable tasks on the command line, we want them using the policy model to define the policy ‘contracts’ used when deploying applications. At the same time we want to give them better visibility into what the application requires and what it’s doing on the network. Rather than troubleshooting devices and flows, why not look at application health? Rather than manually configuring QoS based on devices, why not set it per application or tier? Rather than focusing on VLANs and subnets as policy boundaries, why not abstract that and group things based on policy requirements? Think about it: why should every aspect of a server’s policy change because you changed its IP? That’s what happens on today’s networks.

Call it a DevOps tool, call it automation, call it what you will: ACI looks to use the language of applications to provision the network dynamically and automatically. Rather than simply providing better management tools for 15-year-old concepts that have been overloaded, we focus on a new model: application connectivity and policy.

**Disclaimer: I work as a Technical Marketing Engineer for the Cisco BU responsible for Nexus 9000 and ACI.  Feel free to disregard this post as my biased opinion.**


Video: Cisco ACI Overview


Oh, the Places You’ll Go! (A Cisco ACI Story)

In the fashion of my two previous Dr. Seuss-style stories I thought I’d take a crack at Cisco Application Centric Infrastructure (ACI). Check out the previous two if you haven’t read them and have time to waste:

 

Horton Hears Hadoop: http://www.definethecloud.net/horton-hears-hadoop/

The App on the Crap (An SDN Story): http://www.definethecloud.net/the-app-on-the-crap-an-sdn-story/

Congratulations!

This is the time.

The network is changing!

The future is here!

 

With software controllers.

And virtualized widgets.

You can steer traffic

any direction you choose.

Packets are moving. They’ll flow where they flow.

And YOU are the gal who’ll decide where they’ll go.

 

You’ll look up and down paths.  Look ‘em over with care.

About some you’ll say, “No VoIP will go there.”

With an overlay net, and central control,

No packet will flow, down a not-so-good path.

 

And when packets travel

on suboptimal paths.

You’ll reroute those flows,

based on 5-tuple match.

 

Net’s opened wide

With central control.

 

Now net change can happen

and rapidly too

with net as central

and virtual too.

 

And when things start to happen,

don’t panic.  Don’t stew.

Just go troubleshoot.

All layers old, and the new.

 

OH!

THE PLACES YOU’LL GO!

 

You’ll be on your way up!

Packet’s moving in flight!

You’ll be the rock star

who set network right.

 

The network won’t lag, because of central control.

You’ll provision the pipes, avoid traffic black holes.

The packets will fly, you’ll be best of the best.

Wherever they fly, be faster than the rest.

 

Except when they don’t.

Because sometimes they won’t.

 

I’m sorry to say so

but, sadly it’s true

that Bang-ups

and Hang-ups

will happen to you.

 

You can get all hung up

in congestion / jitter.

And packets won’t travel.

Some will just flitter.

 

Applications will fail

with unpleasant time-outs.

And the chances are, then,

that you’ll start hearing shouts.

 

And when applications fail,

you’re not in for much fun.

Getting them back up

is not easily done.

 

You’ll need the app team, spreadsheets, security rules.

You’ll have to troubleshoot through disparate tools.

Find a way to translate from app language to net.

Map L3/L4 to app names, not done yet.

There are services too, that’s a safe bet.

 

Which route did it take, and which networks the problem?

Overlay, underlay, this network has goblins.

Congestion, and drops, latency jitter

Check with the software, then break out the splitter.

You’ll sort this out, you’re no kind of quitter!

 

It can get so confused

two networks to trace.

The process is slow, not what you want for a pace.

You must sort it out, this is business, a race.

What happened here, what’s going on in this place?

 

NO!

That’s not for you!

Those duct tape based fixes.

You’ll choose better methods.

Not hodge-podge tech mixes.

 

Look first at the problem,

what’s causing the issues?

What is it that net, is trying to do?

The app is the answer, in front of you.

 

The data center’s there to run applications!

To serve them to users, move data ‘cross nations.

To drive revenue, open up business models.

To push out new services, all at full throttle.

The application’s what matters.

Place it on a platter.

 

You’ll put the app into focus,

With some abstraction hocus-pocus.

 

You’ll use the language of apps.

To describe connectivity.

Building application maps,

to increase productivity.

 

Use a system focused on policy,

not new-fangled virtual novelty.

Look at apps end-to-end,

Not with the “app is VM” trend.

 

Whether virtual or physical, you’ll treat things the same.

From L2 to L3, or L4-7,

use of uniform policy, will be your new game.

Well on your way to networking heaven.

 

Start with a logical model, a connectivity graph.

One that the system, deploys on your behalf.

A single controller for policy enforcement.

Sure to receive security’s cheering endorsement.

Forget about VLANs, routes and frame formats,

no longer will networking be the app-deploy doormat.

 

You see to build networks for today and tomorrow,

don’t use band-aids stacked high as Kilimanjaro.

You’ll want to start with REMOVING complexity.

Anything else, just adds to perplexity.

 

Start at the top, in an app centric fashion.

on a system that knows to treat apps as its passion.

 

And will you succeed?

Yes! you will, indeed!

(98 and 3/4 percent guaranteed.)*

KID, YOU’LL MOVE MOUNTAINS!

 

So…

be your app virtual, physical or cloud

with services, simple, complex or astray,

you’re off to Great Places!

Today is your day!

ACI is waiting.

So…get on your way!

 

 

*This is intended as whimsical nonsense.  Any guarantees are null and void based on the complete insanity of the author.

**Disclaimer: I work for Cisco Systems with the group responsible for Nexus 9000 and ACI.  Please feel free to consider this post random vendor rhetoric.**

For more information on Cisco ACI visit www.cisco.com/go/aci


True Software Defined Networking (SDN)

The world is, and has been, buzzing about software defined networking. It’s going to revolutionize the entire industry, commoditize hardware, and disrupt all the major players. It’s going to do all that… some day. To date it hasn’t done much beyond spark great conversation and, more importantly, identify the need for change in networking.

In its first generation, SDN is a lot of sizzle and no steak. The IT world is trying to truly define it, much as we were with ‘Cloud’ years ago. What’s beginning to emerge is that SDN is more of a methodology than an implementation, and like cloud there are several implementations: OpenFlow, Network Virtualization and Programmable Network Infrastructure.

 

image

OpenFlow

OpenFlow focuses on a separation of the control plane and data plane. This provides a centralized method to route traffic based on a 5-tuple match of packet header information. One area where OpenFlow falls short is its dependence on the independent advancement of the protocol itself and the hardware support beneath it. Hardware in the world of switching and routing is based on Application-Specific Integrated Circuits (ASICs), and those ASICs typically take three years to refresh. This means the OpenFlow protocol must advance first, and only once it stabilizes can silicon vendors begin building new ASICs, which arrive three years later.
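As a sketch of the limitation (the class below is illustrative, not an actual OpenFlow binding): however the rules are distributed, the controller can only “see” an application as header fields, so “the web tier” degenerates into addresses and ports.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class FiveTupleRule:
    """A flow identified purely by packet headers, never by application."""
    src_ip: str              # e.g. "0.0.0.0/0"
    dst_ip: str              # e.g. "10.2.2.10/32"
    protocol: int            # 6 = TCP, 17 = UDP
    src_port: Optional[int]  # None = wildcard
    dst_port: Optional[int]
    action: str              # e.g. "output:3" or "drop"

# The closest a flow table gets to "the web tier of the ecommerce app":
web_tier = FiveTupleRule("0.0.0.0/0", "10.2.2.10/32", 6, None, 443, "output:3")
```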

Network Virtualization

Network virtualization is a faithful reproduction of networking functionality into the hypervisor. This method is intended to provide advanced automation and speed application deployment. The problem here arises in the new tools required to manage and monitor the network, the additional management layer, and the replication of the same underlying complexity.

Programmable Network Infrastructure

Programmable network infrastructure moves device configuration from human-oriented CLI and GUI interfaces to APIs and programming agents. This allows faster, more powerful and less error-prone device configuration from automation, orchestration and cloud operating system tools. These advance the configuration of multiple disparate systems, but they are still designed around network operating system constructs intended for human use, and they carry the same underlying network complexities, such as the artificial tie between addressing and policy.
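A rough sketch of the pattern, assuming a JSON-RPC style device API along the lines of NX-API (the switch address, credentials and VLAN are invented): note that the payload is still CLI, so the transport has changed but the constructs haven’t.

```python
import json
import requests

SWITCH = "https://switch1.example.com/ins"  # hypothetical device API endpoint

# The API wraps the same human-oriented CLI an engineer would have typed;
# we've automated the keystrokes, not changed the model.
commands = ["configure terminal", "vlan 110", "name web-tier"]
payload = [{"jsonrpc": "2.0", "method": "cli",
            "params": {"cmd": cmd, "version": 1}, "id": i}
           for i, cmd in enumerate(commands, start=1)]

resp = requests.post(SWITCH, data=json.dumps(payload),
                     headers={"content-type": "application/json-rpc"},
                     auth=("admin", "secret"), verify=False)
resp.raise_for_status()
```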

All of these generation 1 SDN solutions simply move the management of the underlying complexity around. They are software designed to operate in the same model, trying to configure existing hardware. They’re simply adding another protocol, or protocols, to the pile of existing complexity.

image

Truly Software Defined Networks

To truly define the network via software you have to look at the entire solution, not just a single piece. Simply adding a software or hardware layer doesn’t fix the problem; you must look at them in tandem, starting with the requirements for today’s networks: automation, application agility, visibility (virtual and physical), security, scale and L4-7 services (virtual and physical).

If you start with those requirements and think in terms of a blank slate you now have the ability to build things correctly for today and tomorrow’s applications while ensuring backwards compatibility. The place to start is in the software itself, or the logical model. Begin with questions:

1. What’s the purpose of the network?

2. What’s most relevant to the business?

3. What dictates the requirements?

The answer to all three is the application, so that’s the natural starting point. Next, ask who owns, deploys and handles day-two operations for an application: the development team. So you start with a view of applications in a format they would understand.

image

That format is simple provider/consumer relationships between tiers or components of an application. Each tier may provide and consume services from the next to create the application, which is a group of tiers or components, not a single physical server or VM.

You take that idea a step further and understand that the provider/consumer relationships are truly just policy. Policy can describe many things, but here it would be focused on permit/deny, redirect, SLAs, QoS, logging and L4-7 service chaining for security and user experience.

image

Now you’ve designed a policy model that focuses on the application connectivity and any requirements for those connections, including L4-7 services. With this concept you can instantiate that policy in a reusable format so that policy definition can be repeated for like connections, such as users connecting to a web tier. Additionally the application connectivity definition as a whole could be instantiated as a template or profile for reuse.
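As a minimal sketch of that model in code (all class and field names are invented for illustration): policy lives in a reusable contract object, and tiers are wired together as provider/consumer pairs rather than as subnets and ACL lines.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Contract:
    """Reusable policy: what a connection permits and requires."""
    name: str
    allow_ports: tuple         # e.g. (443,)
    qos_class: str = "default"
    service_chain: tuple = ()  # e.g. ("firewall", "load-balancer")

@dataclass
class Tier:
    name: str
    provided: list = field(default_factory=list)  # (contract, consumer) pairs

    def provide(self, contract: Contract, consumer: "Tier") -> None:
        self.provided.append((contract, consumer.name))

# Policy designed once by the experts...
SECURE_WEB = Contract("secure-web", allow_ports=(443,), qos_class="gold",
                      service_chain=("firewall", "load-balancer"))

# ...and reused for every application with a web tier:
users, web = Tier("users"), Tier("web")
web.provide(SECURE_WEB, users)  # users consume 'secure-web' from the web tier
```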

You’ve now defined a logical model, based on policy, for how applications should be deployed. With this model in place you can work your way down. Next you’ll need network equipment that can support your new model. Before thinking about the hardware, remember there is an operating system (OS) that will have to interface with your policy model.

Traditional network operating systems are not designed for this type of object-oriented policy model. Even highly programmable or Linux-based operating systems have not been designed for the object programmability that would fully support it. You’ll need an OS capable of representing tiers or components of an application as objects with configurable attributes. Additionally, it must be able to represent physical resources like ports as objects abstracted from the applications that will run on them: an OS that can be provisioned in terms of policy constructs rather than configuration lines such as switch ports, QoS and ACLs. You’ll need to rewrite the OS.

As you’re writing your OS you’ll need to rethink the switching and routing hardware that will deliver all of those packets and frames. Of course you’ll need density, bandwidth, low latency, etc. More importantly you’ll need hardware that can define, interpret and enforce policy based on your new logical model. You’ll need to build hardware tailored to the way you define applications, connectivity and policy: hardware that can enforce policy based on logical groupings, free of VLAN- and subnet-based policy instantiation.

If you build these out together, starting with the logical model then defining the OS and hardware to support it, you’ll have built a solution that surpasses the software shims of generation 1 SDN. You’ll have built a solution that focuses on removing the complexity first, then automating, then applying rapid deployment through tools usable by development and operations, better yet DevOps.

If you do that you’ll have truly defined networking based on software. You’ll have defined it from the top all the way down to the ASICs. If you do all that and get it right, you’ll have built Cisco’s Application Centric Infrastructure (ACI).

For more information on the next generation of data center networking check out www.cisco.com/go/aci.

 

Disclaimer: ACI is where I’ve been focused for the last year or so, and where my paycheck comes from.  You can feel free to assume I’m biased and this article has no value due to that.  I won’t hate you for it.


Engineers Unplugged Episode 14: Application Affinity

I had the pleasure of speaking with Nils Swart (@nlnils) of Plexxi about applications and the network. You can watch the quick Engineers Unplugged episode below.


Software Defined networking: The Role of SDN on Compute Infrastructure Administration

#vBrownBag Follow-up Software Defined Networking SDN with Joe Onisick (@jonisick) from ProfessionalVMware on Vimeo.


Focus on the Ball: The Application

With the industry talking about Software Defined Networking (SDN) at full hype levels, there is one thing missing from many discussions: the application. SDN promises to rein in the complexity of network infrastructure and provide better tools for deploying services at scale. What often seems to be forgotten are the applications, which are the reason those networks exist. While application focus is not in itself a new concept, it seems lost in the noise around SDN as a whole, with a few exceptions such as Plexxi, which focuses on Application Affinity.

Current SDN approaches provide tools to solve issues in one portion or another of the network infrastructure. Flow control mechanisms look to centralize the distribution and configuration of routing and forwarding. Overlays look to build virtual networks on existing IP infrastructure. Virtualized L4-7 services provide ways to configure, stitch in and control network services closer to the virtual machines themselves. None of these approaches tackles the whole picture from an application-centric point of view. These solutions also take the myopic view that the VM is the network, which is far from the case. The closest models fall into DevOps or orchestration categories, but these require a deep understanding of the details and intricacies of the network.

In traditional networking environments there is a disconnect in communication between application and network teams. The languages and concepts are disparate enough that they don’t translate; there is no logical continuation from application developer or owner to network designer. Application teams speak in OS instances, application tiers and components, tooling, language, end-user demands, etc., while network teams speak in switch ports, VLANs, QoS, IP addressing and Access Control Lists (ACLs). The lack of common understanding and vocabulary causes architectures and implementations to suffer. The graphic below illustrates this relationship:

image

Building the flexible, scalable, manageable and programmable networks of the future requires a change in focus. The application needs to take center stage; it’s the apps that solve business problems. From this focus, logical and physical topology become secondary and are only designed once application requirements have been mapped out. Application centric policies must be designed first. Policies such as: security, load-balancing, QoS can all be designed based on application requirements, rather than network restrictions. Application developers define these requirements without the need to speak a network language.

Traditional networks begin with a physical topology that is layered with L2 and L3 logical topologies and assumed application mobility and service domains, such as a services tier at the aggregation layer. Once these topologies are architected and implemented, applications are built and deployed on them. This method limits the capabilities available to the applications and the services deployed on them.

Application security is an excellent example of a system that suffers from traditional architectures. Network security constructs are implemented in the form of ACLs on switches, routers and firewalls. These entries suffer from two major drawbacks: the complexity of design and implementation, and the scale of the TCAM that stores the entries. Application policies must be communicated effectively to network engineers, who must translate those requirements into implementable ACLs across multiple devices in the network, defined manually, device by device. This is a system ripe for PEBKAC errors (Problem Exists Between Keyboard and Chair).
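A toy illustration of that translation burden (device names, subnets and syntax are all made up): one application-level rule fans out into per-device ACL lines that someone must type, verify, and later remember to remove.

```python
# One application-level requirement...
rule = {"app": "ecommerce", "src_tier": "web", "dst_tier": "db", "port": 3306}

# ...translated by hand into addressing, then repeated on every device.
tier_subnets = {"web": "10.1.10.0/24", "db": "10.1.20.0/24"}
enforcement_points = ["agg-sw-1", "agg-sw-2", "fw-1"]  # hypothetical devices

for device in enforcement_points:
    acl_line = (f"permit tcp {tier_subnets[rule['src_tier']]} "
                f"{tier_subnets[rule['dst_tier']]} eq {rule['port']}")
    print(f"{device}: {acl_line}")

# Re-IP the db tier and every line above is stale; miss one device and
# the app breaks (or the entry stays open long after the app is gone).
```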

The complexity and room for error in this system increases exponentially as networks scale, applications move and new services are needed. Additionally this leads to bad practice based on design limitations. Far too often outdated policy entries are left in place due to the complexity and risk of removing entries. This leads to residual entries in place consuming space long after an application is gone. Just as often policies are written more loosely than would be optimal in order to reduce required entries, and optimize space, through wild card summarization.

To break this cycle networking systems need to take an application centric approach which models actual application requirements onto the network in a top down fashion. Systems need to take into account the structure of the application, its components, and how those components interact then provide tools for designing logical policy maps of these relationships. From there these policy maps can be programmatically applied to the networking infrastructure.

An application is not a single software instance running on a server. Applications are made up of the end-points required in a given tier, the tiers required for the service delivered and the policies that define how those tiers communicate, and their unique requirements. The application as a whole must be taken into account in order to provide robust, scalable service delivery.

The illustration below shows this relationship in contrast to the diagram above:

image

In this model network and application teams develop the systems of policies that define application behavior and push them to the network. Taking the application as a whole into focus instead of the myopic view of VMs, switch ports or IP addresses allows cohesive deployment and manageability at scale. The application is the purpose of having a network; therefore the application should define the network.

This definition of the network by the application should be done in a language that the developers understand and the network can interpret and implement. For example, an app owner labels application traffic as ‘video’ and the network implements the policies for bandwidth, QoS, etc. that video requires. These policies are predefined by the network engineers.
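In code, that hand-off can be as simple as a lookup table: the developer picks a label, and the label resolves to policy the network team defined in advance. The labels and values below are invented for illustration (DSCP 34 is the standard AF41 marking often used for video).

```python
# Defined once by network engineers:
POLICIES = {
    "video": {"dscp": 34, "min_bandwidth_mbps": 50},
    "bulk":  {"dscp": 10, "min_bandwidth_mbps": 0},
}

def network_policy(label: str) -> dict:
    """Developers speak in labels; the network interprets them as policy."""
    return POLICIES[label]

# The app owner never mentions a VLAN, subnet or DSCP value:
print(network_policy("video"))  # {'dscp': 34, 'min_bandwidth_mbps': 50}
```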

An application is more than an IP address and a set of rules; it is an ecosystem of interconnected devices and the policies that define their relationship. Traditional networking techniques anchor application deployment by defining applications in networking terms. In order to accelerate the application deployment (and re-deployment throughout its lifecycle) networks need to provide an application centric view and deployment model.


Network Management Needs New Ideas

As networks have grown, the industry has sought better ways in which to manage them at scale. Traditional network management systems are typically device-centric, particularly for network infrastructure. These systems take a top-down management approach and use a central server to push configuration into devices and to manage device state. With few exceptions, this approach provides no additional abstraction or functionality and fundamentally becomes a GUI representation of CLI configuration…

To see the full post visit:  http://www.networkcomputing.com/data-networking-management/network-management-needs-new-ideas/240157120


What Network Virtualization Isn’t

Brad Hedlund recently posted an excellent blog on Network Virtualization. Network Virtualization is the label used by Brad’s employer, VMware/Nicira, for their implementation of SDN. Brad’s article does a great job of outlining the need for changes in networking in order to support current and evolving application deployment models. He also correctly points out that networking has lagged behind the rest of the data center as technical and operational advancements have been made.

Network configuration today is laughably archaic when compared to storage, compute and even facilities. It is still the domain of CLI wizards hacking away on keyboards to configure individual devices. VMware brought advancements like automatic, utilization-based workload migration to the compute environment. To support this behavior on the network, an admin must ensure the appropriate configuration is manually defined on each port the workload may reach and on every port in between. This is time consuming, costly and error prone. Brad is right, this is broken.

Brad also correctly points out that network speeds, feeds and packet delivery are adequately evolving and that the friction lies in configuration, policy and service delivery.  These essential network components are still far too static to keep pace with application deployments.  The network needs to evolve, and rapidly, in order to catch up with the rest of the data center.

Brad and I do not differ on the problem(s), or better stated: we do not differ on the drivers for change. We do, however, differ on the solution. Let me note in advance that Brad and I both work for HW/SW vendors with differing solutions to the problem and differing visions of the future. Feel free to write the rest of this off as mindless drivel or vendor Kool-Aid, I ain’t gonna hate you for it.

Brad makes the case that Network Virtualization is equivalent to server virtualization, and from this simple assumption he poses it as the solution to current network problems.

Let’s start at the top: don’t be fooled by emphatic statements such as Brad’s stating that network virtualization is analogous to server virtualization. It is not; this is an apples-and-oranges discussion, network virtualization being the orange where you must peel the rind to get to the fruit. Don’t take my word for it; one of Brad’s colleagues, Scott Lowe, a man much smarter than I, says it best:

image

The issue is that these two concepts are implemented in a very different fashion.  Where server virtualization provides full visibility and partitioning of the underlying hardware, network virtualization simply provides a packet encapsulation technique for frames on the wire.  The diagram below better illustrates our two fruits: apples and oranges.

image

As the diagram illustrates, we are not working with equivalent approaches. Network virtualization would require partitioning of switch CPU, TCAM, ASIC forwarding, bandwidth, etc. to be a true apples-to-apples comparison. Instead it provides a simple wrapper to encapsulate traffic on the underlying Layer 3 infrastructure. These are two very different virtualization approaches.
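To see how thin that wrapper is, here’s a sketch of the VXLAN-style encapsulation header that typically underlies these overlays: eight bytes carrying a 24-bit virtual network identifier, bolted onto an ordinary UDP/IP packet (field layout per RFC 7348; the code is purely illustrative).

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags (0x08 = 'VNI present'),
    24 reserved bits, 24-bit VNI, 8 reserved bits (RFC 7348)."""
    assert 0 <= vni < 2**24
    return struct.pack("!II", 0x08 << 24, vni << 8)

# The "virtual network" is just this wrapper around the original frame;
# the physical network underneath still sees ordinary UDP/IP and is
# none the wiser about the tenant or application inside.
overlay_packet = vxlan_header(vni=5001) + b"<original Ethernet frame>"
```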

Brad makes his next giant leap in the “What is the Network” section. Here he assumes the network consists of only virtual workloads (“The ‘network’ we want to virtualize is the complete L2-L7 services viewed by the virtual machines”) and the rest of his blog focuses there. This is fine for those data center environments that are 100% virtualized, including servers, services and WAN connectivity, and that use server virtualization for all of those purposes. Those environments must also lack PaaS and SaaS systems that aren’t built on virtual servers, as those are also non-applicable to the remaining discussion. So anyone in those environments described will benefit from the discussion, anyone <crickets>.

So Brad, and presumably VMware/Nicira (since network virtualization is their term), define the goal as taking “all of the network services, features, and configuration necessary to provision the application’s virtual network (VLANs, VRFs, Firewall rules, Load Balancer pools & VIPs, IPAM, Routing, isolation, multi-tenancy, etc.) – take all of those features, decouple it from the physical network, and move it into a virtualization software layer for the express purpose of automation.” So if you’re looking to build 100% virtualized server environments with no plans to advance up the stack into PaaS, etc., it seems you have found your Huckleberry.

What we really need is not a virtualized network overlay running on top of an L3 infrastructure with no communication or correlation between the two.  What we really need is something another guy much smarter than me (Greg Ferro) described:

image

Abstraction, independence and isolation: that’s the key to moving the network forward. This is not provided by network virtualization. Network virtualization is a coat of paint on the existing building. Furthermore, that coat of paint is applied without stripping, priming, or removing that floral wallpaper your grandmother loved. The diagram below is how I think of it.

Network Virtualization

With a network virtualization solution you’re placing your applications on a house of cards built on a non-isolated infrastructure of legacy design and thinking.  Without modifying the underlying infrastructure, network virtualization solutions are only as good as the original foundation.  Of course you could replace the data center network with a non-blocking fabric and apply QoS consistently across that underlying fabric (most likely manually) as Brad Hedlund suggests below.

image

If this is the route you take, to rebuild the foundation before applying network virtualization paint, is network virtualization still the color you want?  If a refresh and reconfigure is required anyway, is this the best method for doing so? 

The network has become complex and unmanageable due to things far older than VMware and server virtualization. We’ve clung to device-centric CLI configuration and the realm of keyboard wizards. Furthermore, we’ve bastardized originally abstracted network constructs such as VLANs, VRFs, addressing, routing and security, tying them together and creating a Frankenstein of a data center network. Are we surprised the villagers are coming with torches and pitchforks?

So overall I agree with Brad, the network needs to be fixed.  We just differ on the solution, I’d like to see more than a coat of paint.  Put lipstick on a pig and all you get is a pretty pig.

lipstick pig


CloudStack Graduates to Top-Level Apache Project

The Apache Software Foundation announced in late March that CloudStack is now a top-level project. This is a promotion from CloudStack’s incubator status, where it had lived after being released as open source by Citrix.

This promotion provides additional encouragement to companies and developers looking to contribute to the project, because it validates the CloudStack community and demonstrates ongoing support under the Apache Software Foundation. To read more, visit the full article.
