Cloudy with a 100% Chance of Cloud

I recently remembered that my site and blog are called Define the Cloud. That realization led me to the understanding that I should probably write a cloudy blog from time to time. The time is now.

It's 2018 and most, if not all, of the early cloud predictions have proven to be wrong. The battle of public vs. private, on-premises vs. off, etc. has died. The world as it sits now uses both, with no signs of that changing anytime soon. Cloud proved not to be universally cheaper; in fact, it's more expensive in many cases, depending on a plethora of factors. That being said, public cloud adoption grew, and continues to grow, anyway. That's because the value isn't in the cost, it's in the technical agility. We're knee deep in a transition from IT as a cost center back to its original position as a business and innovation enabler. The 13.76 white guys who sit in a Silicon Valley ivory tower making up buzzwords all day call this Digitization, or Digital Transformation.

Goonies


Down here, it's our time. It's our time down here…

It’s also our time. Our time! Up there!


<rant>

This is a very good thing. When we started buying servers and installing them in closets, literal closets, we did so as a business enabler. That email server replaced typewritten memos. That web server differentiated me from every company in my category still reliant solely on the Yellow Pages. In my first tech job I assisted with a conversion of analog hospital dictation systems to an amazing, brand-new technology capable of storing voice recordings digitally; today, every time you pick up a phone, your voice is transmitted digitally.
Over the next few years the innovation leveled out for the most part. Everyone had the basics: email, web, etc. That's when the shift occurred. IT moved from a budget we invested in for innovation and differentiation to a bucket we bled money into to keep the lights on. This was no good for anyone involved. CIOs started buying solely based on ROI metrics. Boil ROI down to its base level and what you get is 'I know I'm not going to move the needle, so how much money can you save me on what I'm doing right now?'
The shift back is needed, and good for basically everyone: IT practitioners, IT sales, vendors who can innovate, etc. Technology departments are getting new investment to innovate, and if they can't, the lines of business simply move around them. That's still additional money going into technology innovation.

</rant>

One of the more interesting things that's played out is not just that it isn't all-or-nothing, private vs. public; it's also not all-in on one public cloud. The majority of companies are utilizing more than one public cloud in addition to their private resources. Here are some numbers from the RightScale State of the Cloud 2018 report (feel free to choose your own numbers; this is simply an example from a reasonable source). Original source: https://www.rightscale.com/lp/state-of-the-cloud.

So the question becomes: why multi-cloud? The answer is fairly simple; it's the same answer that brought us to today's version of hybrid cloud, with companies running apps on both private and public infrastructure. Different tasks need different tools. In this case those tasks are apps, and those tools are infrastructure options.

Cloud Bursting

As an industry we chased our tails for quite a while around the crazy concept of 'cloud bursting' as the primary use case for hybrid cloud. That use case distracted us from looking at the problem more realistically: different apps have different requirements, and different infrastructures might offer advantages based on those requirements. For more of my thoughts on cloud bursting see this old post: https://www.networkcomputing.com/cloud-infrastructure/hybrid-clouds-burst-bubble/2082848167.

Once we let that craptastic idea go, we moved on to a few new and equally dumb concepts. The cloud doomsayers used a couple of public cloud outages to build FUD, and people started fearing the stability of cloud. People of course jumped to the least rational, completely impractical solution they could: stretching apps across cloud providers for resiliency. Meanwhile the companies using cloud that stayed up right through those outages laughed, and wondered why people didn't just build stability into their apps using the tools the cloud provides. Things like multiple regions and zones are there for a reason. So some chased their tails a little on that idea. Startups started, got funded, and failed, etc., etc.

Finally we got to today. I rather like today. Today is a place where we can choose to use as many clouds as we want, and we're smart enough to make that decision based on the app itself, typically keeping that app in the place we chose, and only that place. Yay us!


Quick disclaimer: not all multi-cloud came from brilliant planning. I'd take a guess that a solid majority of multi-cloud happened by accident. When cloud hit the market, IT departments were sitting on static, legacy, siloed infrastructure with slow, manual change management. Getting new apps and services online could be measured in geological time. As organizations scrambled to figure out if/how/when to use cloud, their business departments went around IT and started using cloud. They started generating revenue and building innovation in the public cloud. Because they went out on their own, they picked the cloud or service that made sense for them. I think many organizations were simply handed a multi-cloud environment, but that doesn't make the concept bad.

Now for the fun part: how do you choose which clouds to use? Some of this will simply be dictated by what's already being used, so that part's easy. Beyond that, you probably already get that it won't be smart to open the floodgates and allow any and every cloud. So what we need is some sort of defined catalogue of services. Hmm, we could call that a service catalogue! Someone should start building that Service Now.

Luckily this is not a new concept; we've been assessing and offering multiple infrastructure services since way back in the way back. Airlines and banks often run applications on a mix of mainframe, UNIX, Linux, and Windows systems. Each of these has pros and cons, but they've built them into the set of infrastructure services they offer. Theoretically, software to accomplish all of their computing needs could be built on one standardized operating system, but they've chosen not to based on the unique advantages and disadvantages for their organization.

The same thinking can be applied to multi-cloud offerings. In the simplest terms, your goal should be to get as close to one offering (meaning one cloud, public or private) as possible. For the most part only startups will achieve an absolute one-infrastructure goal, at least in the near term. They'll build everything in their cloud of choice until they hit some serious scale and have decisions to make. If you want to get super nit-picky, even they won't be at one, because they'll be consuming several SaaS offerings for things like web conferencing, collaboration, payroll, CRM, etc.

There's no need to stress if your existing app sprawl and diversity force you to offer a half-dozen or more clouds for now. What you want to focus on is picking two very important numbers:

  1. How many clouds will I offer to my organization now?
  2. How many clouds will I offer to my organization in five years?

It should go without saying, but the answer to #1 should be >= the answer to #2 for all but the most remote use cases.

With these answers in place, the most important thing is sticking to your guns. Let's say you choose to deliver five clouds now (private in DC 1 and DC 2, plus Azure, AWS, and GCP). You also decide that the five-year plan is bringing that down to three clouds. Let's take a look at next steps with that example in mind.

You'll first want to be religious about maintaining a max of five offerings in the near term, without being so rigid you miss opportunities. One way to accomplish this is to put in place a set of quantifiable metrics for assessing requests for an additional cloud service offering. Do this up front. You can even add a subjective weight to this metric by having an assigned assessment group, letting each member provide a numeric personal rating, and using the average of those ratings along with the other quantifiable metrics to come up with a score. Weigh that score against a pre-set minimum bar and you have your decision right there. In my current role we use a system just like this to assess new product offerings brought to us by our team or customers.
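To make the idea concrete, here's a minimal sketch of how such a scoring system might look. Every metric name, weight, and the minimum bar below are hypothetical placeholders, not the actual system my team uses; tune them to your own criteria.

```python
# Sketch of a weighted assessment score for a new cloud-offering request.
# Metric names, weights, and MINIMUM_BAR are illustrative placeholders.

def assessment_score(quantitative_metrics, panel_ratings, weights):
    """Combine quantifiable metrics with the average panel rating.

    quantitative_metrics: dict of metric name -> score (0-10)
    panel_ratings: one personal rating (0-10) per assessment-group member
    weights: dict of metric name -> weight; 'panel' weights the panel average
    """
    panel_avg = sum(panel_ratings) / len(panel_ratings)
    score = weights["panel"] * panel_avg
    for name, value in quantitative_metrics.items():
        score += weights[name] * value
    return score

MINIMUM_BAR = 6.5  # set up front, before any request arrives

metrics = {"projected_demand": 7, "ops_readiness": 4, "cost_fit": 6}
ratings = [5, 7, 6, 8]  # subjective scores from the assessment group
weights = {"panel": 0.4, "projected_demand": 0.3,
           "ops_readiness": 0.2, "cost_fit": 0.1}

score = assessment_score(metrics, ratings, weights)
approved = score >= MINIMUM_BAR  # this request falls short of the bar
```

Because the weights and bar are fixed before any request shows up, the decision is mostly removed from the politics of the moment, which is the whole point.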

The next step is deciding how you'll whittle the existing offerings down to three over time. The lowest-hanging fruit there is deciding whether you can exist with only one privately operated DC. The big factor here will be disaster recovery. If you still plan on running some business-critical apps in-house five years down the road, this is probably a negating factor. Off the bat that means private cloud stays as two of your three. Let's assume that's the case.

That leaves you with the need to pick two of your existing public cloud offerings to phase out over time. This is a harder decision. Here are several factors I'd weigh:

In the real world no decision will be perfect, but indecision itself is a decision, and the furthest from perfect. If you build a plan that makes the transition as smooth as possible over time, gather stakeholder buy-in, provide training, etc., you'll silence a lot of the grumbling. One way to do this is identifying internal champions for the offerings you choose. You'll typically have naturally occurring champions, people who love the offering and want to talk about it. Use them, arm them, enable them to help spread the word. The odds are that, when properly incentivized and motivated, your developers and app teams can work with any robust cloud offering you choose, public or private. Humans have habits, and favorites, but we can learn and change. Well, not me, but I've heard that most humans can.

If you want to see some thoughts on how you can use intent-based automation to provide more multi-cloud portability, check out my last blog: http://www.definethecloud.net/intent-all-of-the-things-the-power-of-end-to-end-intent/.

Intent all of the things: The Power of end-to-end Intent

Intent

The tech world is buzzing with talk of intent. Intent-based this, intent-driven that. Let's take a look at intent, and where we as an industry want to go with it.

First, and briefly: what's intent? Intent is fairly simple if you let it be; it's what you want from the app or service you're deploying. Think business requirements of the app. Some examples are the app's requirements for governance, security, compliance, risk, geo-dependency, up-time, user experience, etc. Eventually these will all get translated into the technical garbage language of infrastructure, but at some point they're business decisions, and that's exactly where we want to capture them. For the purpose of this post I'll focus on using intent for new apps or app redesigns; discovering intent for existing apps is a conversation, or ten, in itself.

There are several reasons we want to capture them at this level. Here are a few:

Conceptually, any intent-based (or intent-driven) system will capture this intent at the highest level, in a format abstracted from implementation detail. For example, financial regulations such as PCI compliance would be captured in intent. That intent will then be implemented by a number of different devices that can be interchanged and acquired from different vendors. The intent, PCI compliance, must be captured in a way that separates it from the underlying configuration requirements of devices such as firewalls.
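As a purely illustrative sketch, and not any real product's schema, an intent record captured at this level might look something like this. Every key and value is a made-up placeholder.

```python
# Hypothetical intent record: business requirements only, no device config.
# Keys and values are invented for illustration.
app_intent = {
    "app": "payments-frontend",
    "compliance": ["PCI-DSS"],     # a business requirement, not a firewall rule
    "data_residency": "EU",
    "availability_slo": "99.95%",
    "exposure": "internet-facing",
}

# Device-level detail deliberately stays out of the intent. It lives in
# per-vendor translations instead, so the firewall (or its vendor) can be
# swapped without touching the intent itself.
assert "firewall_rules" not in app_intent
```

The point of the shape is the separation: audit or change the business requirement in one place, and let the translation layer worry about what that means for each box underneath.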

The actual language used to define the intent is somewhat arbitrary but should be as universally usable as possible. What this means is that you'd ideally want one intent repository that could be used to provision intent for any application on any infrastructure, end-to-end. We obviously don't live in an ideal world so this is not fully possible with current products, but we should continue to move towards this goal.

The next step of the process is the deployment of the intent onto infrastructure. The infrastructure itself is irrelevant; it can be on-premises, hosted, cloud, unicorn-powered, or any combination. In most, if not all, products available today, the intent repository and the automation engine responsible for this deployment are one and the same. The intent engine is responsible for translating the intent description stored in the repository down onto the chosen supported infrastructure. In the real world the engine may have multiple parts, or the intent may be translated through more than one abstraction, but this should be transparent from an operational perspective. This process is shown in the following graphic.
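A minimal, hypothetical sketch of that translation step follows. The renderer functions and their outputs are invented stand-ins for real infrastructure integrations; the thing to notice is that one abstract intent fans out to different target-specific provisioning.

```python
# Sketch of an intent engine dispatching one abstract intent to
# per-infrastructure renderers. All outputs are made-up placeholders.

def render_aws(intent):
    # Would emit e.g. security-group and WAF provisioning for AWS
    return {"security_groups": ["pci-segmented"], "waf": True}

def render_private_cloud(intent):
    # Would emit e.g. firewall contracts for the on-premises stack
    return {"fw_contracts": ["pci-zone"], "ips": True}

RENDERERS = {"aws": render_aws, "private": render_private_cloud}

def deploy(intent, target):
    """Translate abstract intent into target-specific provisioning."""
    return RENDERERS[target](intent)

intent = {"compliance": ["PCI-DSS"]}
plan = deploy(intent, "aws")  # same intent, different target, different plan
```

Operationally you only ever touch `intent`; whether the engine is one product or several chained abstractions stays hidden behind `deploy`.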

Intent System

Now's where things start to get really sexy, and the true value of intent starts to shine. If I have an intent based system with a robust enough abstraction layer, my infrastructure becomes completely irrelevant. I define the intent for my application, and the system is responsible for translating that intent into the specific provisioning instructions required by the underlying infrastructure whatever that infrastructure is, and wherever that infrastructure lives (private, public, hosted, Alpha Centauri.)

The only restriction is that the infrastructure itself must have the software, hardware, and feature set required to implement the intent. Using the PCI compliance example again: if the infrastructure isn't capable of providing your compliance requirements, the intent system can't make that happen. Where the intent system can help is in preventing the deployment of the application if doing so would violate intent. For example, say you have three underlying infrastructure options: Amazon EC2, Google Cloud Platform, and an on-premises private cloud. If only your private cloud meets your defined intent, the system prevents deployment to the two public cloud options. This type of thing is handled by the intent assurance engine, which may be a separate product or a component of one of the pieces discussed above. For more on intent assurance see my blog http://www.definethecloud.net/intent-driven-architecture-part-iii-policy-assurance/.
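Here's a hedged sketch of that assurance gate. The capability lists are invented for illustration; a real assurance engine would discover or verify them rather than hard-code them.

```python
# Sketch of intent assurance as a pre-deployment gate: only infrastructures
# whose advertised capabilities cover the app's requirements are allowed.
# Capability sets below are invented placeholders.

CAPABILITIES = {
    "aws-ec2": {"encryption-at-rest", "network-segmentation"},
    "gcp":     {"encryption-at-rest"},
    "private": {"encryption-at-rest", "network-segmentation", "pci-audited"},
}

def allowed_targets(required, capabilities=CAPABILITIES):
    """Return infrastructures satisfying every required capability."""
    return sorted(name for name, caps in capabilities.items()
                  if required <= caps)  # set subset check

pci_requirements = {"encryption-at-rest", "network-segmentation",
                    "pci-audited"}
targets = allowed_targets(pci_requirements)  # only the private cloud passes
```

With made-up numbers like these, an app tagged with the PCI intent simply never gets offered the two public cloud options, which is exactly the "prevent the violating deployment" behavior described above.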

This is where I see the most potential as this space matures. The intent for your application is defined once, and the application can be deployed or moved anywhere, with the intent assured and continuously monitored. You can build and test in one environment with your full intent model enforced, then move to another environment for production. You can move your app from one cloud to another without redefining requirements. Equally important, you can perform your audits on the central intent repository instead of on individual apps, as long as the infrastructures you run them on have been verified to properly consume the intent. Imagine auditing your PCI compliance intent definition one time, then simply deploying apps tagged with that intent without the need for additional audits. Here's a visual on that.

Multi-Cloud Intent

Now let's take this one step further: end-to-end intent. The umbrella of intent applies from the initial point of a user accessing the network, and should be carried consistently through to any resource they touch: data center, cloud, or otherwise. We need systems that can provide identity at initial access, carry that identity throughout the network, and enforce intent consistently along with the traffic.

This, unsurprisingly, is a very complex task. The challenges include:

The bright side here is that products to handle each portion of this exist. At least one vendor has a portfolio that can provide this end-to-end architecture using several products. The downside, or fine print, is that these all still exist as separate domain products with little to no integration. I would consider that more of a consideration than an adoption roadblock. Custom integration can be built or bought, and you can expect product integration to be a top roadmap priority. The graphic below shows the full picture of an intent-driven architecture.

End-to-end Intent

Utilizing intent as the end-to-end policy deployment method provides an immense amount of value to the IT organization. The use of intent:

While intent-driven architectures are at early maturity levels, they are already being deployed fairly widely in various fashions. As the maturity continues to grow, here are some things I'd like to see from the industry: