Your Technology Sunk Cost is KILLING you

I recently bought a Nest Hello to replace my perfectly good, nearly new Ring Video Doorbell. The experience got me thinking about sunk cost in IT: how significantly it strangles businesses and costs companies ridiculous amounts of money.

When I first saw the Nest Hello, I had no interest. I had recently purchased and installed my Ring. I was happy with it, and the Amazon Alexa integration was great. I had no need to change. A few weeks later I decided to replace my home security system because it’s a cable provider system, and like everything from a cable provider it’s a shit service at caviar pricing because ‘Hey, you have no choice, you sad F’er.’ That’s the beauty of the monopoly our government happily built and sustains for them. I chose to go with a system from Nest, because I already have two of their thermostats, several of their smoke detectors, and a couple of their indoor cameras. I ordered the security system components I needed, and a few cameras to complement it, then I looked back into the Nest Hello.

The Nest Hello is a much better camera and a more feature-rich device. More importantly, it integrates seamlessly with my new security system and existing devices, eliminating yet another single-use app on my phone (the Ring app.) The counterargument for purchasing the device was my sunk cost. I’d spent money on the Ring, and I’d also spent time and hassle installing it. The Nest might require me to get back in the attic and change out the transformer for my doorbell as well as wire in a new line conditioner. Not things I enjoy doing. The sunk cost nearly stopped my purchase. Why throw away a good device I just installed to get a feature or two and a better picture?

I then stepped back and looked at it from a different point of view. What’s my business case? What’s the outcome I’m purchasing this technology to achieve? The answer is a little bit of security, but a lot of peace of mind for my home. I live alone, and I travel a lot. While I’m gone I need to manage packages, service people, and my pets. I also need to do this quickly and easily. This means that seamless integration is a top priority for me, and video quality is another big concern. Nest’s Hello feature set fits my use case far better, especially when adding their IQ cameras. Lastly, for video recording and monitoring service I would now need only one provider and one manageable bill, rather than one for Nest and one for Ring. From that perspective the answer became clear: the cost I sunk wasn’t providing any value based on my use-cases, therefore it was irrelevant. It was actually irrelevant in the first place, but we’ll get back to that.

I went ahead and bought the Nest Hello. Next came another sunk cost problem. My house is covered in Amazon Alexa devices, which integrate quite well with Ring. I have no fewer than 8 Alexa-enabled devices around the home, garage, etc. Nest is a Google product, so its best integration is with Google Home. Do I replace my beloved Amazon devices with Google Home to get the best integration?

First a rant: The fact that I should even have to consider this is ludicrous, and it shows that both products are run by shit heads who won’t even feign the semblance of looking out for their customers’ interests. Because they have competing products, they forcibly degrade any integration between the systems rather than integrating well and differentiating on product quality instead of engineered lock-in. I despise this; it’s bad business, and completely unnecessary. I’d guess it actually stalls potential sales of both because people want to ‘sit back and see how it plays out’ before investing in one or the other.

I have a lot of sunk financial cost in my Alexa devices. There’s also some cost in the time spent setting them up and integrating them with my other home-automation tools. With that in mind I went back to the outcome I’m trying to achieve. My Alexa/Ring integration allowed me to see who was at the front door and talk to them. My Alexa/Hello integration will only let me view the video. What’s my use-case? I use the integration to see the door and decide if I should walk to the front door to answer. If it’s a package delivery, I can grab it later. If it needs a signature, I’ll see them waiting. If it’s something else, I walk to the door for a conversation. Basically I only use the integration to view the video and decide whether to go to the door. This means that Alexa/Hello integration, while not ideal, meets my needs perfectly. I easily chose to keep Alexa, which has the side benefit of not giving the evil behemoth that is Google any more access to my life than it already has. The last thing I need is my Gmail recommending male potency remedies after the Google device in my bedroom listens in on a night with my girlfriend. I’m picturing Microsoft Clippy here for some reason.

[Image: Microsoft Clippy]

 

I’m much more comfortable with Amazon listening in and craftily adding some books on love making for dummies to my Kindle recommendations while using price discrimination to charge me more for marital aid purchases because they know I need them.

Ok, enough TMI, back to the point. Your technology sunk cost is killing you, mmkay? When making technology decisions for your company you should ignore sunk costs. Your rational brain knows this, but you don’t do it.

“Rational thinking dictates that we should ignore sunk costs when making a decision. The goal of a decision is to alter the course of the future. And since sunk costs cannot be changed, you should avoid taking those costs into account when deciding how to proceed.” (Source: https://blog.fastfedora.com/2011/01/the-sunk-cost-dilemma.html)

You have sunk cost in hardware, software, people-hours, consulting, and everywhere else under the sun. If you’re like most, these sunk costs hinder every decision you make. “I just refreshed my network, I can’t buy new equipment.” “My servers are only two years old, I won’t swap them out.” “I have an enterprise ELA with them, I should use their version.” These are all bad reasons to make a decision. The cost is already spent; it’s gone, it can’t be changed. Future costs and capabilities can. Maybe:

  • That sparkly $400,000 SDN rip-and-replace will plug far more cohesively into the VP of Applications’ ongoing DevOps project, allowing them to launch features faster and resulting in millions of dollars in potential profit to the company over the next 24 months.
  • The new servers increase compute density, lowering your overall footprint and saving you on power, cooling, management, and licensing over time, starting a quarter or two down the road.
  • Maybe that feature that’s included for free with your ELA will end up costing you thousands in unforeseen integration challenges while only solving 10% of your existing problem.
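
To make the rule concrete, here’s a minimal sketch in Python with invented numbers. The only inputs are future costs and future value; what you’ve already spent is deliberately absent from the math, because it’s identical across options and unrecoverable either way.

```python
# Hypothetical numbers for illustration. The sunk cost never appears:
# it cannot be recovered under any option, so it cannot change which
# option is best going forward.

options = {
    "keep_existing":   {"future_cost": 200_000, "future_value": 400_000},
    "rip_and_replace": {"future_cost": 900_000, "future_value": 2_000_000},
}

def net_future_value(opt):
    """Value minus cost from this point forward only."""
    return opt["future_value"] - opt["future_cost"]

for name, opt in options.items():
    print(f"{name}: net future value = ${net_future_value(opt):,}")

best = max(options, key=lambda n: net_future_value(options[n]))
print(f"Decision: {best}")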

This issue becomes insanely more relevant as you try to modernize for more agile IT delivery. Regardless of the buzzword you’re shooting for, DevOps, Cloud, UnicornRainbowDeliverySystems, the shift will be difficult. It will be exponentially more difficult if you anchor it with the sunk cost of every bad decision ever made in your environment.

“Of course your tool sounds great, and we need something exactly like it, but we already have so many tools, I can’t justify another one.” I’ve heard that verbatim from a customer, and it’s bat-shit-freaking-crazy. If your other tools suck, get rid of them; don’t let those bad decisions stop you from purchasing something that does what you need. Maybe it’s your vetting process, or um, eh, that thing you see when you look in the mirror that needs changing. That’s like saying ‘My wife needs a car to get to work, but I already have these two project cars I can’t get running, so I can’t justify buying her a commuter car.’

Most of our data centers are built using the same methodology Dr. Frankenstein used to reanimate the dead. He grabbed a cart and a wheelbarrow and set off for his local graveyard. He dug up graves grabbing the things he needed, a torso, a couple of legs, a head, etc. and carted them back to his lab. Once safely back at the lab he happily stitched them together and applied power.

Data centers have been built buying the piece needed at the time from the favored vendor of the moment. A smattering of HP here, a dash of Cisco there, some EMC, a touch of NetApp, oh this Arista thing is shiny… Then up through the software stack, a teaspoon of Oracle makes the profits go down, the profits go down… some SalesForce, some VMware, and on, and on. We’ve stitched these things together with Ethernet and applied power.

Now you want to ‘DevOps that’, or ‘cloudify the thing’? Really, are you sure you REALLY want to do that? Fine go ahead, I won’t call you crazy, I’ll just think… never mind, yes I will call you crazy… crazy. DevOps, Cloud, etc. are all like virtualization before them, if you put them on a shit foundation, you get shit results.

Now don’t get me wrong. You can protect your sunk costs, sweat your assets, and still achieve buzzword greatness. It’s possible. The question is should you, and would it actually save you money? The answer is no, and ‘hell no.’ The cost of additional tools, customization, integration and lost time will quickly, and exponentially, outweigh any perceived ‘investment protection’ savings, except in the most extreme of corner-cases.

I’m not promoting throwing the baby out with the bathwater, or rip-and-replace every step of the way. I am recommending you consider those options. Look at the big picture and ignore sunk-cost as much as you can.

Maybe you replace $500,000 in hardware and software you bought last year with $750,000 worth of new-fangled shit today, plus $250,000 in services to build and launch it. Crap, you wasted the sunk $500K and sunk $1 million more! How do you explain that? Maybe you’ll be explaining it as the cost of moving your company from 4 software releases per year to 1 software release per week. Maybe that release schedule is what just allowed your dev team to ‘dark test’ and then roll out the next killer feature on your customer platform. Maybe customer attrition is down 50% while the cost of customer acquisition is 30% of what it was a year ago. Maybe you’ll be explaining the tough calls it takes to be the hero.


Intent Driven Architecture Part III: Policy Assurance

Here I am finally getting around to the third part of my blog series on Intent Driven Architectures, but hey, what’s a year between friends? If you missed or forgot parts I and II, the links are below:

Intent Driven Architectures: WTF is Intent

Intent Driven Architectures Part II: Policy Analytics

Intent Driven Data Center: A Brief Overview Video

Now on to part III and a discussion of how assurance systems finalize the architecture.

What gap does assurance fill?

‘Intent’ and ‘Policy’ can be used interchangeably for the purposes of this discussion. Intent is what I want to do; policy is a description of that intent. The tougher question is what intent assurance is. Using the network as an example, let’s assume you have a proper intent driven system that can automatically translate a business-level intent into infrastructure-level configuration.

An intent like deploying a financial application beholden to PCI compliance will boil down into a myriad of config-level objects: connectivity, security, quality, etc. At the lowest level this translates to things like Access Control Lists (ACLs), VLANs, firewall (FW) rules, and Quality of Service (QoS) settings. The diagram below shows this mapping.

Note: In an intent driven system the high level business intent is automatically translated down into the low-level constructs based on pre-defined rules and resource pools. Basically, the mapping below should happen automatically.

[Figure: a business-level intent mapped automatically to low-level network constructs (ACLs, VLANs, FW rules, QoS)]
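
As a rough sketch of what such an automated translation might produce (the intent object, rule names, and outputs below are all invented for illustration, not any product’s actual schema):

```python
# Invented intent-to-config translation for illustration only.
intent = {"app": "payments", "compliance": ["PCI"], "tiers": ["web", "app", "db"]}

def translate(intent):
    """Expand a business-level intent into low-level config objects."""
    config = {"vlans": {}, "acls": [], "fw_rules": [], "qos": {}}
    for tier in intent["tiers"]:
        # VLANs would be allocated from a pre-defined resource pool.
        config["vlans"][tier] = f"{intent['app']}-{tier}"
    if "PCI" in intent["compliance"]:
        # Compliance expands into default-deny plus explicit permits.
        config["acls"].append({"src": "any", "dst": "db", "action": "deny"})
        config["fw_rules"].append(
            {"src": "app", "dst": "db", "port": 1521, "action": "permit"})
    config["qos"][intent["app"]] = "business-critical"
    return config

print(translate(intent))
```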

This translation is one of the biggest challenges in traditional architectures, where the entire process is manual and human driven. Automating it through intent creates an exponential speed increase while reducing risk and providing the ability to apply tighter security. That being said, it doesn’t get us all the way there. We still need to deploy this intent. Staying with the networking example, the intent driven system should have a network capable of deploying this policy automatically, but how do you know it can accept these changes, and what they will affect?

In steps assurance…

The purpose of an assurance system is to guarantee that the proposed changes (policy modifications based on intent) can be consumed by the infrastructure. Let’s take one small example to get an idea of how important this is. This example will sound technical, but the technical bits are irrelevant. We’ll call this example F’ing TCAM.

F’ing TCAM:

  • TCAM (Ternary Content Addressable Memory) is the piece of hardware that stores Access Control Entries (ACEs).
  • TCAM is very expensive, therefore you have a finite amount in any given switch.
  • TCAM is how ACLs get enforced at ‘line-rate’ (as fast as the wire).
  • ACLs can be/are used along with other tools to enforce things like PCI compliance.
  • An individual DC switch can theoretically run out of TCAM space, and therefore be unable to enforce a new policy.
  • Troubleshooting and verifying that across all the switches in a data center is hard.

That’s only one example of verification that needs to happen before a new intent can be pushed out. Things like VLAN and route availability, hardware/bandwidth utilization, etc. are also important. In the traditional world two terrible choices are available: verify everything manually per device, or ‘spray and pray’ (push the configuration and hope.)

This is where the assurance engine fits in. An assurance engine verifies the ability of the infrastructure to consume new policy before that policy is pushed out. This allows the policy to be modified if necessary prior to changes on the system, and reduces troubleshooting required after a change.

Advanced assurance systems take this one step further. They perform step 1 as outlined above, which verifies that the change can be made. Step 2 verifies whether the change should be made. By this I mean that step 2 checks compliance, IT policy, and other guidelines to ensure that the change will not violate them. Many times a change will be possible even though it violates some other policy; step 2 ensures that administrators are aware of this before a change is made.
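
A minimal sketch of that two-step flow, with invented switch inventory, TCAM numbers, and compliance rules (this is not any real assurance product’s API):

```python
# Hypothetical two-step assurance check. Switch inventory, TCAM numbers,
# and the compliance rule are invented for illustration.

switches = [
    {"name": "leaf-101", "tcam_free": 120},
    {"name": "leaf-102", "tcam_free": 4},   # nearly out of TCAM space
]

def step1_can_consume(policy, switches):
    """Step 1: CAN the change be made? Check capacity on every switch."""
    return [sw["name"] for sw in switches if sw["tcam_free"] < policy["acl_entries"]]

def step2_should_deploy(policy, rules):
    """Step 2: SHOULD the change be made? Check compliance/IT policy."""
    return [msg for msg in (rule(policy) for rule in rules) if msg]

policy = {"name": "pci-app-update", "acl_entries": 10, "opens_port": 23}
rules = [lambda p: "telnet (tcp/23) forbidden by IT policy" if p["opens_port"] == 23 else None]

capacity_issues = step1_can_consume(policy, switches)
violations = step2_should_deploy(policy, rules)
if capacity_issues or violations:
    print("Hold the change:", capacity_issues + violations)
else:
    print("Safe to push.")
```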

This combination of features is crucial for the infrastructure agility required by modern business. It also greatly reduces the risk of change, allowing maintenance windows to be shortened or eliminated. Assurance is a critical piece of achieving true intent driven architectures.


A Few Good Apps

Developer: Network team, did you order the Code upgrade?!

Operations Manager: You don’t have to answer that question!

Network Engineer: I’ll answer the question. You want answers?

Developer: I think I’m entitled!

Network Engineer: You want answers?!

Developer: I want the truth!

Network Engineer: You can’t handle the truth! Son, we live in a world that has VLANs, and those VLANs have to be plumbed by people with CLIs. Who’s gonna do it? You? You, Database Admin? I have a greater responsibility than you can possibly fathom. You weep for app agility and you curse the network. You have that luxury. You have the luxury of not knowing what I know, that network plumbing, while tragically complex, delivers apps. And my existence, while grotesque and incomprehensible to you, delivers apps! You don’t want the truth, because deep down in places you don’t talk about at parties, you want me on that CLI. You need me on that CLI. We use words like “routing”, “subnets”, “L4 Ports”. We use these words as the backbone of a life spent building networks. You use them as a punch line. I have neither the time nor the inclination to explain myself to a man who rises and sleeps under the blanket of infrastructure that I provide, and then questions the manner in which I provide it! I would rather you just said “thank you”, and went on your way. Otherwise, I suggest you pick up a PuTTY session, and configure a switch. Either way, I don’t give a damn what you think you are entitled to!

Developer: Did you order the Code upgrade?

Network Engineer: I did the job that—

Developer: Did you order the Code upgrade?!!

Network Engineer: YOU’RE GODDAMN RIGHT I DID!!

 

In many IT environments today there is a distinct line between the application developers/owners and the infrastructure teams that are responsible for deploying those applications.  These organizational silos lead to tension, lack of agility and other issues.  Much of this is caused by the translation between these teams.  Application teams speak in terms like: objects, attributes, provider, consumer, etc.  Infrastructure teams speak in memory, CPU, VLAN, subnets, ports.  This is exacerbated when delivering apps over the network, which requires connectivity, security, load-balancing etc.  On today’s network devices (virtual or physical) the application must be identified based on Layer 3 addressing and L4 information.  This means the app team must be able to describe components or tiers of an app in those terms (which are foreign to them.)  This slows down the deployment of applications and induces problems with tight controls, security, etc.  I’ve tried to describe this in the graphic below (for people who don’t read good and want to learn to do networking things good too.)

[Figure: the translation gap between how app teams describe an application and the network constructs used to deploy it]

As shown in the graphic, the definition of an application and its actual instantiation onto networking devices (virtual and physical) are very different.  This causes a great deal of the slowness in application deployment and the complexity of networking.  Today’s networks don’t have an application centric methodology for describing applications and their requirements.  The same can be said for emerging SDN solutions.  The two most common examples of SDN today are OpenFlow and network virtualization.  OpenFlow simply attempts to centralize a control plane that was designed to be distributed for scale and flexibility.  In doing so it uses 5-tuple matches of IP and TCP/UDP headers to attempt to identify applications as network flows.  This is no different from the model in use today.  Network virtualization faithfully replicates today’s network constructs into a hypervisor, shifting management and adding software layers without solving any of the underlying problem.

What’s needed is a common language for the infrastructure teams and development teams to use.  That common language can be used to describe application connectivity and policy requirements in a way that makes sense to separate parts of the organization and business.  Cisco Application Centric Infrastructure (ACI) uses policy as this common language, and deploys the logical definition of policy onto the network automatically.

Cisco ACI bases network provisioning on the application and the two things required for application delivery: connectivity and policy.  By connectivity we’re describing which groups of objects are allowed to connect to other groups of objects.  We are not defining forwarding, as forwarding is handled separately using proven methods, in this case IS-IS with a distributed control plane.  When we describe connectivity we simply mean allowing the connection.  Policy is a broader term, and very important to the discussion.  Policy is all of the requirements for an application: SLAs, QoS, security, L4-7 services, etc.  Policy within ACI is designed using reusable ‘contracts.’  This way policy can be designed in advance by the experts and architects with that skill set and then reused whenever required for a new application roll-out.

Applications are deployed on the ACI fabric using an Application Network Profile. An Application Network Profile is simply a logical template for the design and deployment of an application’s end-to-end connectivity and policy requirements.  If you’re familiar with Cisco UCS, it’s a very similar concept to the UCS Service Profile.  One of the biggest benefits of an Application Network Profile is its portability.  They can be built through the API or GUI, downloaded from Cisco Developer Network (CDN) or the ACI GitHub community, or provided by the application vendor itself.  They’re simply an XML or JSON representation of the end-to-end requirements for delivering an application.  The graphic below shows an application network profile.

[Figure: an Application Network Profile]
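
As a hand-waved illustration of the idea (this is not the actual ACI profile schema; every field below is invented), such a portable JSON representation might look like:

```python
import json

# Invented structure for illustration; the real Application Network
# Profile schema differs. The point: connectivity and policy for the
# whole application travel together as one portable document.
app_profile = {
    "application": "web-store",
    "tiers": ["web", "app", "db"],
    "contracts": [
        {"provider": "web", "consumer": "outside", "ports": [443], "qos": "gold"},
        {"provider": "app", "consumer": "web", "ports": [8080], "services": ["firewall"]},
        {"provider": "db",  "consumer": "app", "ports": [1521], "logging": True},
    ],
}

print(json.dumps(app_profile, indent=2))
```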

This model provides the common language that can be used by developer teams and operations/infrastructure teams.  To tie this back to the tongue-in-cheek start to this post, based on dialogue from “A Few Good Men”: we don’t want to replace the network engineer, but we do want to get them off of the CLI.  Rather than hacking away at repeatable tasks on the command line, we want them using the policy model to define the policy ‘contracts’ for use when deploying applications.  At the same time we want to give them better visibility into what the application requires and what it’s doing on the network.  Rather than troubleshooting devices and flows, why not look at application health?  Rather than manually configuring QoS based on devices, why not set it per application or tier?  Rather than focusing on VLANs and subnets as policy boundaries, why not abstract that and group things based on policy requirements?  Think about it: why should every aspect of a server’s policy change because you changed the IP?  That’s what happens on today’s networks.

Call it a DevOps tool, call it automation, call it what you will, ACI looks to use the language of applications to provision the network dynamically and automatically.  Rather than simply providing better management tools for 15 year old concepts that have been overloaded we focus on a new model: application connectivity and policy.

**Disclaimer: I work as a Technical Marketing Engineer for the Cisco BU responsible for Nexus 9000 and ACI.  Feel free to disregard this post as my biased opinion.**


Video: Cisco ACI Overview


Oh, the Places You’ll Go! (A Cisco ACI Story)

In the fashion of my two previous Dr. Seuss style stories I thought I’d take a crack at Cisco Application Centric Infrastructure (ACI.)  Check out the previous two if you haven’t read them and have time to waste:

 

Horton Hears Hadoop: http://www.definethecloud.net/horton-hears-hadoop/

The App on the Crap (An SDN Story): http://www.definethecloud.net/the-app-on-the-crap-an-sdn-story/


Congratulations!

This is the time.

The network is changing!

The future is here!

 

With software controllers.

And virtualized widgets.

You can steer traffic

any direction you choose.

Packets are moving. They’ll flow where they flow.

And YOU are the gal who’ll decide where they’ll go.

 

You’ll look up and down paths.  Look ‘em over with care.

About some you’ll say, “No VOIP will go there.”

With an overlay net, and central control,

No packet will flow, down a not-so-good path.

 

And when packets travel

on suboptimal paths.

You’ll reroute those flows,

based on 5-tuple match.

 

Net’s opened wide

With central control.

 

Now net change can happen

and rapidly too

with net as central

and virtual too.

 

And when things start to happen,

don’t panic.  Don’t stew.

Just go troubleshoot.

All layers old, and the new.

 

OH!

THE PLACES YOU’LL GO!

 

You’ll be on your way up!

Packet’s moving in flight!

You’ll be the rock star

who set network right.

 

The network won’t lag, because of central control.

You’ll provision the pipes, avoid traffic black holes.

The packets will fly, you’ll be best of the best.

Wherever they fly, be faster than the rest.

 

Except when they don’t.

Because sometimes they won’t.

 

I’m sorry to say so

but, sadly it’s true

that Bang-ups

and Hang-ups

will happen to you.

 

You can get all hung up

in congestion / jitter.

And packets won’t travel.

Some will just flitter.

 

Applications will fail

with unpleasant time-outs.

And the chances are, then,

that you’ll start hearing shouts.

 

And when applications fail,

you’re not in for much fun.

Getting them back up

is not easily done.

 

You’ll need the app team, spreadsheets, security rules.

You’ll have to troubleshoot through disparate tools.

Find a way to translate from app language to net.

Map L3/L4 to app names, not done yet.

There are services too, that’s a safe bet.

 

Which route did it take, and which network’s the problem?

Overlay, underlay, this network has goblins.

Congestion, and drops, latency jitter

Check with the software, then break out the splitter.

You’ll sort this out, you’re no kind of quitter!

 

It can get so confused

two networks to trace.

The process is slow, not what you want for a pace.

You must sort it out, this is business, a race.

What happened here, what’s going on in this place?

 

NO!

That’s not for you!

Those duct tape based fixes.

You’ll choose better methods.

Not hodge-podge tech mixes.

 

Look first at the problem,

what’s causing the issues?

What is it that net, is trying to do?

The app is the answer, in front of you.

 

The data center’s there to run applications!

To serve them to users, move data ‘cross nations.

To drive revenue, open up business models.

To push out new services, all at full throttle.

The application’s what matters.

Place it on a platter.

 

You’ll put the app into focus,

With some abstraction hocus-pocus.

 

You’ll use the language of apps.

To describe connectivity.

Building application maps,

to increase productivity.

 

Use a system focused on policy,

not new-fangled virtual novelty.

Look at apps end-to-end,

Not with the ‘app is VM’ trend.

 

Whether virtual or physical, you’ll treat things the same.

From L2 to L3, or L4-7,

use of uniform policy, will be your new game.

Well on your way to networking heaven.

 

Start with a logical model, a connectivity graph.

One that the system, deploys on your behalf.

A single controller for policy enforcement.

Sure to receive security’s cheering endorsement.

Forget about VLANs, routes and frame formats,

no longer will networking be the app-deploy doormat.

 

You see to build networks for today and tomorrow,

don’t use band-aids stacked high as Kilimanjaro.

You’ll want to start with REMOVING complexity.

Anything else, just adds to perplexity.

 

Start at the top, in an app centric fashion.

on a system that knows to treat apps as its passion.

 

And will you succeed?

Yes! you will, indeed!

(98 and 3/4 percent guaranteed.)*

KID, YOU’LL MOVE MOUNTAINS!

 

So…

be your app virtual, physical or cloud

with services, simple, complex or astray,

you’re off to Great Places!

Today is your day!

ACI is waiting.

So…get on your way!

 

 

*This is intended as whimsical nonsense.  Any guarantees are null and void based on the complete insanity of the author.

**Disclaimer: I work for Cisco Systems with the group responsible for Nexus 9000 and ACI.  Please feel free to consider this post random vendor rhetoric.**

For more information on Cisco ACI visit www.cisco.com/go/aci


True Software Defined Networking (SDN)

The world is, and has been, buzzing about software defined networking. It’s going to revolutionize the entire industry, commoditize hardware, and disrupt all the major players. It’s going to do all that… some day. To date it hasn’t done much except start a great conversation and, more importantly, identify the need for change in networking.

In its first generation SDN is a lot of sizzle with no steak. The IT world is trying to truly define it, much as we were with ‘Cloud’ years ago. What’s beginning to emerge is that SDN is more of a methodology than an implementation, and like cloud there are several implementations: OpenFlow, network virtualization, and programmable network infrastructure.

 

[Figure: first-generation SDN approaches: OpenFlow, network virtualization, and programmable network infrastructure]

OpenFlow

OpenFlow focuses on a separation of the control plane and the data plane. This provides a centralized method to route traffic based on a 5-tuple match of packet header information. One area where OpenFlow falls short is its dependence on the independent advancement of the protocol itself and the hardware support below it. Hardware in the world of switching and routing is based on Application Specific Integrated Circuits (ASICs), and those ASICs typically take three years to refresh. This means that the OpenFlow protocol itself must advance, and only once it stabilizes can silicon vendors begin building new ASICs, which become available three years later.
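
For reference, a 5-tuple match looks roughly like the sketch below (illustrative only, not actual OpenFlow syntax). Note that nothing in those five fields names the application; the controller has to infer it:

```python
# The five fields OpenFlow-style rules use to identify a "flow".
# Illustrative only; real OpenFlow matches carry many more fields.
flow_match = {
    "src_ip": "10.1.1.10",
    "dst_ip": "10.2.2.20",
    "protocol": "tcp",
    "src_port": 49152,
    "dst_port": 443,
}

# Nothing here says "payments app" or "web tier"; application identity
# must be inferred from addresses and ports, same as today.
print(flow_match)
```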

Network Virtualization

Network virtualization is a faithful reproduction of networking functionality in the hypervisor. This method is intended to provide advanced automation and speed application deployment. The problems here arise from the new tools required to manage and monitor the network, the additional management layer, and the replication of the same underlying complexity.

Programmable Network Infrastructure

Programmable network infrastructure moves the configuration of devices from human-driven CLI/GUI interfaces to APIs and programming agents. This allows for faster, more powerful, and less error-prone device configuration from automation, orchestration, and cloud operating system tools. These advance the configuration of multiple disparate systems, but they are still designed around network operating system constructs intended for human use, and the same underlying network complexities, such as artificial ties between addressing and policy.

All of these generation 1 SDN solutions simply move the management of the underlying complexity around. They are software designed to operate in the same model, trying to configure existing hardware. They simply add another protocol, or protocols, to the pile of existing complexity.

[Figure: generation 1 SDN layering new protocols on top of existing complexity]

Truly software defined networks

To truly define the network via software you have to look at the entire solution, not just a single piece. Simply adding a software or hardware layer doesn’t fix the problem; you must look at them in tandem, starting with the requirements for today’s networks: automation, application agility, visibility (virtual/physical), security, scale, and L4-7 services (virtual/physical.)

If you start with those requirements and think in terms of a blank slate, you have the ability to build things correctly for today’s and tomorrow’s applications while ensuring backwards compatibility. The place to start is the software itself, or the logical model. Begin with questions:

1. What’s the purpose of the network?

2. What’s most relevant to the business?

3. What dictates the requirements?

The answer to all three is the application, so that’s the natural starting point. Next you ask: who owns, deploys, and handles day-two operations for an application? The answer is the development team. So you start with a view of applications in a format they would understand.

[Figure: an application described as tiers with provider/consumer relationships]

That format is simple provider/consumer relationships between tiers or components of an application. Each tier may provide and consume services from the next to create the application, which is a group of tiers or components, not a single physical server or VM.

You take that idea a step further and understand that the provider/consumer relationships are really just policy. Policy can describe many things, but here it’s focused on permit/deny, redirect, SLAs, QoS, logging, and L4-7 service chaining for security and user experience.

[Figure: provider/consumer relationships expressed as policy]

Now you’ve designed a policy model that focuses on application connectivity and any requirements for those connections, including L4-7 services. With this concept you can instantiate that policy in a reusable format, so the policy definition can be repeated for like connections, such as users connecting to a web tier. Additionally, the application connectivity definition as a whole can be instantiated as a template or profile for reuse.
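
A minimal sketch of that reuse, with invented names: one policy ‘contract’ defined once and applied to multiple provider/consumer relationships:

```python
# Invented example: a reusable policy contract applied to several
# provider/consumer relationships.
web_contract = {
    "action": "permit",
    "ports": [443],
    "qos": "gold",
    "logging": True,
    "service_chain": ["firewall", "load-balancer"],
}

relationships = [
    {"provider": "web-tier", "consumer": "users",    "policy": web_contract},
    {"provider": "web-tier", "consumer": "partners", "policy": web_contract},  # reused
]

for rel in relationships:
    chain = " -> ".join(rel["policy"]["service_chain"])
    print(f"{rel['consumer']} -> {rel['provider']} via {chain}")
```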

You’ve now defined a logical model, based on policy, for how applications should be deployed. With this model in place you can work your way down. Next you’ll need network equipment that can support your new model. Before thinking about the hardware, remember there is an operating system (OS) that will have to interface with your policy model.

Traditional network operating systems are not designed for this type of object-oriented policy model. Even highly programmable or Linux-based operating systems have not been designed for the object programmability that would fully support this model.  You’ll need an OS that’s capable of representing tiers or components of an application as objects with configurable attributes. Additionally, it must be able to represent physical resources like ports as objects abstracted from the applications that will run on them.  An OS that can be provisioned in terms of policy constructs rather than configuration lines such as switch ports, QoS, and ACLs. You’ll need to rewrite the OS.

As you’re writing your OS you’ll need to rethink the switching and routing hardware that will deliver all of those packets and frames. Of course you’ll need density, bandwidth, low latency, etc. More importantly you’ll need hardware that can define, interpret, and enforce policy based on your new logical model. You’ll need to build hardware tailored to the way you define applications, connectivity, and policy.  Hardware that can enforce policy based on logical groupings, free of VLAN- and subnet-based policy instantiation.

If you build these out together, starting with the logical model then defining the OS and hardware to support it, you’ll have built a solution that surpasses the software shims of generation 1 SDN. You’ll have built a solution that focuses on removing the complexity first, then automating, then applying rapid deployment through tools usable by development and operations, better yet DevOps.

If you do that you’ll have truly defined networking based on software. You’ll have defined it from the top all the way down to the ASICs. If you do all that and get it right, you’ll have built Cisco’s Application Centric Infrastructure (ACI.)

For more information on the next generation of data center networking check out www.cisco.com/go/aci.

 

Disclaimer: ACI is where I’ve been focused for the last year or so, and where my paycheck comes from.  You can feel free to assume I’m biased and this article has no value due to that.  I won’t hate you for it.


Engineers Unplugged Episode 14: Application Affinity

I had the pleasure of speaking with Nils Swart (@nlnils) of Plexxi about applications and the network.  You can watch the quick Engineers Unplugged episode below.


Software Defined Networking: The Role of SDN on Compute Infrastructure Administration

#vBrownBag Follow-up Software Defined Networking SDN with Joe Onisick (@jonisick) from ProfessionalVMware on Vimeo.


Focus on the Ball: The Application

With the industry talking about Software Defined Networking (SDN) at full hype levels, one thing is missing from many discussions: the application. SDN promises to rein in the complexity of network infrastructure and provide better tools for deploying services at scale. What often seems to be forgotten are the applications, which are the reason those networks exist. While application focus is not in itself a new concept, it seems lost in the noise around SDN as a whole, with a few exceptions such as Plexxi, which focuses on Application Affinity.

Current SDN approaches provide tools to solve issues in one portion or another of the network infrastructure. Flow control mechanisms look to centralize the distribution and configuration of routing and forwarding. Overlays look to build virtual networks on existing IP infrastructure. Virtualized L4-7 services provide solutions to configure, stitch in, and control network services more closely to virtual machines themselves. None of these approaches tackles the whole picture from an application centric point of view. These solutions also take the myopic view that the VM is the network, which is far from the case.  The closest models fall into DevOps or orchestration categories, but these require a deep understanding of the details and intricacies of the network.

In traditional networking environments there is a disconnect in communication between application and network teams. The languages and concepts are disparate enough that they don’t translate; there is no logical continuation from application developer or owner to network designer. Application teams speak in OS instances, application tiers and components, tooling, languages, and end-user demands, while network teams speak in switch ports, VLANs, QoS, IP addressing, and Access Control Lists (ACLs). The lack of common understanding and vocabulary causes architectures and implementations to suffer. The graphic below illustrates this relationship:

[Figure: the communication gap between application teams and network teams]

Building the flexible, scalable, manageable, and programmable networks of the future requires a change in focus. The application needs to take center stage; it’s the apps that solve business problems. With this focus, logical and physical topology become secondary and are designed only once application requirements have been mapped out. Application centric policies must be designed first. Policies such as security, load-balancing, and QoS can all be designed based on application requirements rather than network restrictions. Application developers define these requirements without the need to speak a network language.

Traditional networks begin with a physical topology that is layered with L2 and L3 logical topologies and assumed application mobility and service domains such as a services tier in the aggregation level. Once these topologies are architected and implemented applications are built and deployed on them. This method limits the capabilities available to the application and the services deployed on them.

Application security is an excellent example of a system that suffers from traditional architectures. Network security constructs are implemented in the form of ACLs on switches, routers and firewalls. These entries suffer from two major drawbacks: complexity of design/implementation and scale of the TCAM that stores the entries. This means that application policies must be communicated effectively to network engineers who must translate those requirements into implementable ACLs across multiple devices in the network. This is then defined manually device-by-device. This is a system ripe for PEBKAC errors (Problem Exists Between Keyboard and Chair.)

The complexity and room for error in this system increase exponentially as networks scale, applications move, and new services are needed. Additionally, this leads to bad practice driven by design limitations. Far too often outdated policy entries are left in place due to the complexity and risk of removing them, leaving residual entries consuming space long after an application is gone. Just as often policies are written more loosely than would be optimal in order to reduce required entries and conserve space through wild-card summarization.
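
A back-of-the-envelope calculation (numbers invented) shows why endpoint-by-endpoint ACLs scale so badly compared to expressing the same intent as a handful of group-level policies:

```python
# Invented numbers: pairwise, per-endpoint ACL entries vs. group policy.
web, app, db = 40, 20, 10     # endpoints per tier
ports_per_rule = 2            # e.g., tcp/80 and tcp/443
enforcement_points = 12       # devices that each need the entries

pairwise_entries = (web * app + app * db) * ports_per_rule
print(f"Entries per device: {pairwise_entries}")                       # 2000
print(f"Entries fleet-wide: {pairwise_entries * enforcement_points}")  # 24000

# The same intent expressed as group policy: web->app and app->db.
group_rules = 2
print(f"Group-policy rules: {group_rules}")
```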

To break this cycle networking systems need to take an application centric approach which models actual application requirements onto the network in a top down fashion. Systems need to take into account the structure of the application, its components, and how those components interact then provide tools for designing logical policy maps of these relationships. From there these policy maps can be programmatically applied to the networking infrastructure.

An application is not a single software instance running on a server. Applications are made up of the endpoints required in a given tier, the tiers required for the service delivered, and the policies that define how those tiers communicate along with their unique requirements. The application as a whole must be taken into account in order to provide robust, scalable service delivery.

The illustration below shows this relationship in contrast to the diagram above:

[Figure: the application as a whole defining the network, in contrast to the diagram above]

In this model network and application teams develop the systems of policies that define application behavior and push them to the network. Taking the application as a whole into focus instead of the myopic view of VMs, switch ports or IP addresses allows cohesive deployment and manageability at scale. The application is the purpose of having a network; therefore the application should define the network.

This definition of the network by the application should be done in a language that the developers understand and the network can interpret and implement. For example, an app owner labels application traffic as ‘video’ and the network implements the policies for bandwidth, QoS, etc. that video requires. These policies are predefined by the network engineers.
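
A tiny sketch of that hand-off (names invented): the network team predefines what each label means, and the app owner only ever supplies the label:

```python
# Invented example: labels predefined by network engineers.
predefined = {
    "video": {"min_bandwidth_mbps": 25, "qos_class": "af41", "latency": "low"},
    "bulk":  {"min_bandwidth_mbps": 5,  "qos_class": "best-effort", "latency": "any"},
}

def policy_for(label: str) -> dict:
    """Return the network policy behind an application-supplied label."""
    return predefined[label]

# The app owner says 'video'; the network knows what that requires.
print(policy_for("video"))
```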

An application is more than an IP address and a set of rules; it is an ecosystem of interconnected devices and the policies that define their relationship. Traditional networking techniques anchor application deployment by defining applications in networking terms. In order to accelerate the application deployment (and re-deployment throughout its lifecycle) networks need to provide an application centric view and deployment model.


Network Management Needs New Ideas

As networks have grown, the industry has sought better ways to manage them at scale. Traditional network management systems are typically device-centric, particularly for network infrastructure. These systems take a top-down management approach and use a central server to push configuration into devices and to manage device state. With few exceptions, this approach provides no additional abstraction or functionality and fundamentally becomes a GUI representation of CLI configuration…

To see the full post visit:  http://www.networkcomputing.com/data-networking-management/network-management-needs-new-ideas/240157120
