How to Boost Cloud Reliability

Clouds fail. That’s a fact. But if your company uses business apps that are tied to the availability of public cloud services, you can—and must—take steps to mitigate these failures by getting schooled on a few key factors:  service-level agreements (SLAs), redundancy options, application design, and the type of service being used. We’ll outline how these factors affect the availability of your applications in the cloud…
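As a quick illustration of how SLA math and redundancy interact, here is a minimal sketch. The 99.95%/99.9% figures are hypothetical examples, not any provider's actual SLA numbers:

```python
def serial_availability(*components):
    """Availability of a chain where every component must be up."""
    a = 1.0
    for c in components:
        a *= c
    return a

def parallel_availability(*replicas):
    """Availability when any one redundant replica is sufficient."""
    downtime = 1.0
    for r in replicas:
        downtime *= (1.0 - r)
    return 1.0 - downtime

# A chain of a 99.95% compute SLA and a 99.9% storage SLA (hypothetical numbers):
combined = serial_availability(0.9995, 0.999)    # roughly 99.85%
# Two independent deployments of that same chain, either of which can serve:
redundant = parallel_availability(combined, combined)
```

The takeaway: chaining services always lowers the composite availability below the weakest SLA, while redundancy multiplies failure probabilities together and raises it.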


Read my full article in the August issue of Network Computing (For IT by IT) (Requires a free registration, my apologies.)


The Reality of Cloud Bursting

Recently, while researching the concept of ‘Cloud Bursting,’ I received a history lesson in cloud computing after a misguided tweet at Chris Hoff (@Beaker).  My snarky comment suggested Chris needed a lesson in cloud history, but as it turns out I received the lesson: my reference turned out to be a long-debunked myth of Amazon cloud origins (S3 storage followed by EC2 compute).  The silver lining of my self-induced public Twitter thrashing was twofold: I learned yet again that the best preventative measure for foot-in-mouth disease is proper research, and I got some great background and info from Chris, Brian Gracely (@bgracely), Matt Davis (@da5is), Roman Tarnavski (@romant), Denis Guyadeen (@dguyadeen) and others.  This all began when I read Chris’s ‘Incomplete Thought: Cloudbursting Your Bubble – I call Bullshit’ post.  Chris takes the stance ‘TODAY cloud bursting is BS…’, to quote the man himself.  The ‘today’ is the part I didn’t infer from his blog post (lack of cloud history knowledge aside).

Before we kick off let’s look at the concept of Cloud Bursting:

Cloud Bursting:

In broad strokes, cloud bursting is the idea that an application normally runs in one type of cloud and is capable of utilizing additional resources from another cloud type during peak periods, or ‘bursting.’  The most common example would be a retail company using a private cloud for day-to-day operations and bursting to the public cloud during peak periods such as the holiday season.


At first glance cloud bursting looks like a great way to have your cake and eat it too.  You get the comfort and security blanket of hosting your own applications with the knowledge that if your capacity spikes you’ve got excess available in the public cloud, on-demand, with a pay-for-use model.
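The decision logic behind bursting can be sketched in a few lines. The capacity number and threshold below are made up for illustration; a real implementation would also have to call a provider API to actually provision public-cloud capacity:

```python
# Hypothetical sketch of a burst-placement decision. The capacity and
# threshold values are illustrative, not from any real deployment.
PRIVATE_CAPACITY = 100   # requests/sec the private cloud can absorb
BURST_THRESHOLD = 0.8    # start bursting at 80% of private capacity

def placement(current_load):
    """Decide whether new work lands in the private or public cloud."""
    if current_load < PRIVATE_CAPACITY * BURST_THRESHOLD:
        return "private"
    # Above the threshold a real system would provision public-cloud
    # instances on demand; here we only report the placement decision.
    return "public"
```

Even this toy version hints at the hard parts: the threshold has to leave headroom for provisioning lag, and the "public" branch assumes the application and its data are already portable.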

The issue:

The issue is in the reality of this system, as several problems come into play:

  1. If you’ve designed the application to be public cloud compatible why wouldn’t you just run it there in the first place?
  2. Building a new private cloud infrastructure that doesn’t support your capacity demands is short-sighted.
  3. Designing an application for cloud-bursting capability is no easy task and would probably require some portion (data?) to exist in the public cloud constantly, skewing the benefits of the ‘on-demand’ concept of cloud bursting.
  4. A complicated cost model for any given application, in which infrastructure is purchased up front and depreciated over time alongside pay-for-use costs as the application bursts.

After carefully weighing these and other issues, I believe cloud bursting will most likely not become a reality for most enterprises and applications; it is currently a very rare cloud use case.

Note: Chris Hoff draws a distinction which I wholeheartedly echo: Cloud bursting is separate from Hybrid cloud approaches where specific apps are run in public or private clouds based on application/business requirements.  The issue above is specifically directed at individual applications bursting between clouds.

The Reality:

For the average enterprise, cloud bursting is not an option today and probably will not be in the future.  While hybrid models can thrive (some applications running privately and some publicly, or a private cloud designed to fail over to a public cloud), individual applications bursting back and forth between clouds will not be a reality.  Exceptions exist and there will still be use cases for cloud bursting, but they will be corner cases.  High-Performance Computing (HPC), for instance, can lend itself well to cloud bursting due to its dynamic and distributed nature.

Another possible use case for cloud bursting is environments that heavily utilize development and test systems but must keep production on-premises due to requirements such as security.  In these cases the dev/test systems may be capable of running in the cloud but can more cost-effectively reside locally in the private cloud during off-peak production hours.  The dev/test systems could be designed to burst to the cloud when production peaks and spare cycles are sparse.


Dell, Backing the Right Horse in the Wrong Race

With Dell’s announced acquisition of 3PAR I’ve been pondering the question of what it is they’re thinking.  I’ve been scouring the blogs looking for an answer, and none resonates well with me.  Most of what I find states that they picked a good horse and that the business case for buying a horse to race makes sense, but nobody asks whether they’re in the right race.  The separate races I’m talking about are private and public clouds.

Dell bid on 3PAR, a small high-end storage company with a product line positioned to compete with EMC and Hitachi for some use cases.  This complements Dell’s own storage offering, which was built upon the EqualLogic iSCSI storage acquisition and is geared toward the SMB space.  Dell has also had a traditionally strong partnership with EMC and resold a great deal of EMC storage where EqualLogic was not a good fit.  The EqualLogic acquisition did not appear to damage the Dell-EMC partnership significantly, but adding 3PAR to the mix may change that.  On the other hand, EMC is heavily backing Cisco UCS, so this may very well be a defensive play.

So what is Dell’s play in expanding their internal storage capabilities and risking damage to a profitable partnership with EMC?  Most of the analysis I find states that Dell is looking to grow data center revenue to regain profit they are losing to HP in the desktop/laptop space.  To do this they are putting together more of the key hardware components of private cloud architectures, the thinking being that they will try to assemble an offering to compete with vBlock, Matrix, SMT, CloudBurst, etc.

At first glance this all makes sense: Dell doesn’t want to be left without a horse in the private cloud race, so they make some moves and acquisitions and get their offering in place (late, but maybe not too late).  On the flip side, they can utilize 3PAR’s small market share as an avenue for Dell server sales, and conversely use Dell server sales to boost 3PAR’s struggling sales.  With any luck Dell will have the same success with 3PAR that they did with EqualLogic.  That’s what I see at first glance; upon further thought there are more concerns:

  • Can Dell’s sales force handle the addition, especially while they’re battling a new server vendor (Cisco) with a lot of marketing dollars to spend and a strong partner/manufacturer ecosystem?  Will the Dell account reps and engineers be able to incorporate 3PAR into their offering without cannibalizing EqualLogic business or muddying the waters enough for a competitor to move in?
  • Will a customer requiring top-tier storage that they traditionally turned to Hitachi or EMC for be willing to accept a Dell solution?  If they do, will they be willing to put those top-tier applications on Dell servers?  Dell is not traditionally seen as a high-end server vendor, and they only recently (in the last six months) started adding real innovation to their server product line.
  • Will the customers buying Dell servers now have any interest in an upper tier storage array?

The most important question in my mind: Is Dell putting their horse in the right race?

Dell is looking to attack the enterprise and federal data center, where private cloud will be a big play.  This is the home of solid, high-performance, feature-rich, innovative platforms.  It’s also a place where trust means everything, i.e. ‘Nobody gets fired for buying vendor X.’  Dell is not vendor X; they’ve typically competed solely on price.  Moving heavily into this market, they will be in constant battle with HP, IBM, EMC, NetApp, Cisco and others.

I think Dell is missing an opportunity to execute on their traditional strengths and attack the public cloud market with a unique offering.  Public cloud is all about massive scale, with the intelligence, redundancy, etc. built into the software layers.  This means that a company who can effectively deliver bulk, reliable, low-cost servers, storage and networking will have a very strong offering.  The HPs, Ciscos, IBMs, etc. will have a much harder time selling into this space due to cost.  Their products have traditionally been more about performance and usability features, which may not carry as strong a message in the public cloud.


Dell solidly executed on the acquisition of EqualLogic and has had great success there, providing a low-end, low-cost storage system paired perfectly with their server offering.  The 3PAR acquisition and recent Dell innovations in their server line preview a new model for Dell.  Whether this model succeeds remains to be seen.  From my point of view, successful or not, Dell would be better suited to pairing their traditional business with public cloud solutions and creating a new market for themselves with less competition.


Cloud Types

Within the discussion of cloud computing there are several concepts that get tossed around and mixed up.  Part of the reason for this is that there are several cloud architecture types.  While there are tons of types and sub-types discussed, I’ll focus on four major deployment models here: Public Cloud, Private Cloud, Community Cloud and Hybrid Cloud.  Each cloud type can be used to deliver any combination of XaaS (anything as a service).  The key requirements for an architecture to be defined as cloud are:

  • The ability to measure the service being provided
  • Broad Network Access
  • Self-Service
  • Resource Pooling
  • Elasticity

I’ve discussed the business drivers for a transition to cloud, and the technical drivers, in previous posts.

Public Clouds:

According to NIST, with Public Clouds ‘The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.’  This is the service model for cloud computing: a company owns the resources that provide a service and sells that service to other users/companies.  This is similar to the utility model; companies pay for the amount of infrastructure, processing, etc. that is used.  Examples of Public Cloud providers are:

  • Amazon Web Services
  • Rackspace
  • Newservers
  • Verizon
  • Savvis

These and more can be found on Search Cloud Computing’s Top 10 list.


Private Clouds:

NIST defines the Private Cloud as: ‘The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.’

Private clouds are data center architectures owned by a single company that provide flexibility, scalability, provisioning, automation and monitoring.  The goal of a private cloud is not to sell XaaS to external customers but instead to gain the benefits of a cloud architecture without giving up the control of maintaining your own data center.  Typical private cloud architectures will be built on a foundation of end-to-end virtualization, with automation, monitoring, and provisioning tools layered on top.  While not part of the definition of Private Clouds, bear in mind that security should be a primary concern at every level of design.

There are several complete Private Cloud offerings from industry-leading vendors.  These solutions typically have the advantages of joint testing and joint support, among others.  That being said, Private Clouds can be built on any architecture you choose.


Community Clouds:

Community Clouds are when an ‘infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise’ according to NIST.

A community cloud is a cloud service shared between multiple organizations with a common tie.  These types of clouds are traditionally thought of as farther out in the timeline of adoption.


Hybrid Clouds:

So while you can probably guess what a hybrid cloud is, I’ll give you the official NIST definition first: ‘The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds).’

Using a hybrid approach, companies can maintain control of an internally managed private cloud while relying on the public cloud as needed.  For instance, during peak periods individual applications, or portions of applications, can be migrated to the Public Cloud.  This will also be beneficial during predictable outages: hurricane warnings, scheduled maintenance windows, rolling brownouts/blackouts.
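The outage scenario above reduces to a simple preference order: serve from the private cloud when it is healthy, fall back to the public cloud otherwise. A minimal sketch (the deployment names are placeholders, and a real system would feed this from actual health checks and replicated data):

```python
# Minimal sketch of a hybrid-cloud failover preference. Health flags
# would come from real health checks; the names are placeholders.
def active_deployment(private_healthy, public_healthy):
    """Prefer the private cloud; fail over to public during an outage."""
    if private_healthy:
        return "private-cloud"
    if public_healthy:
        return "public-cloud"
    raise RuntimeError("no healthy deployment available")
```

The hard work in practice is not this selection logic but everything it assumes: data already replicated to the public side, DNS or load-balancer cutover, and a tested path back once the private cloud recovers.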



When defining a cloud strategy for your organization or a customer’s organization, it is important to understand the different models and the advantages each can have for a given workload.  The cloud models are not mutually exclusive, and many organizations will be able to benefit from more than one model at the same time.

Defining a long-term vision now and developing a staged migration path with set timelines will help ease the transition into cloud-based architectures and allow a faster ROI.

When Cloud Goes Bad:

