Technology Passion

The May 24th IDC report on server market share validated a technology I’ve been passionate about for some time: Cisco’s Unified Computing System (UCS). For the first time since UCS’s launch two years ago, Cisco reported server earnings to IDC, with amazing results: #3 in global blade server market share and 1.6% factory revenue share for servers as a whole. Find Kevin Houston’s summary of the blade numbers here: http://bladesmadesimple.com/2011/05/q1-2011-idc-worldwide-server-market-shows-blade-server-leader-as/ and the IDC report here: http://www.idc.com/getdoc.jsp?containerId=prUS22841411

This report shows that in two years Cisco has either taken significant market share from incumbents, driven new demand, or both.  Regardless of where the numbers came from, they are impressive; as far as servers go it’s a story of David and Goliath proportions, and it’s still playing out, with Cisco about 1% behind IBM for the #2 spot.  I have been a ‘cheerleader’ for UCS for nearly its entire existence, but I didn’t start that way.  I describe the transition here: http://www.definethecloud.net/why-cisco-ucs-is-my-a-game-server-architecture

Prior to Cisco UCS I was a passionate IBM BladeCenter advocate: great technology, reliable hardware, and a go-to brand.  I was passionate about IBM.  When IBM launched the BladeCenter H they worked hard to ensure customer investment protection, and in doing so anchored the H chassis to the older architecture.  They hindered technical enhancements and accepted added complexity to ensure that the majority of components customers had purchased for BladeCenter E would be forward compatible.  At the time I liked this approach, and IBM built in several great engineering concepts that provided real value.

In the same time frame HP released the C-Class blade chassis, which had no forward/backward compatibility with previous HP blade architectures but used that clean slate to build a world-class platform with the right technology for the time and the scalability to move far into the future.  At that point, from a technical perspective, I had no choice but to concede HP the technical victory, but I still whole-heartedly recommended IBM: the technical difference was small enough that IBM’s customer investment protection model made them the right big-picture choice in my eyes.

I always work with a default preference, what I call an ‘A-Game,’ as described in the link above, but my A-Game is constantly evolving.  As I discover a new technology that fits the spaces I work in, I assess it against my A-Game and decide whether it can provide better value to 80% or more of the customer base I work with.  When a technology is capable of displacing my A-Game, I replace it.

Sean McGee (http://www.mseanmcgee.com/) says it better than I can, so I’ll paraphrase him: ‘I’m a technologist; I work with and promote the best technology I’m aware of, and I can’t support a product once I know a better one exists.’

In the same fashion I’ll support and promote Cisco UCS until a better competitor proves itself, and I’m happy to see that customers agree based on the IDC reporting.

For some added fun, here are some great Twitter comments from before the IDC announcement, served with a side of crow:

[Image: screenshot of the pre-announcement Twitter comments]


The Power of Innovative Datacenter Stacks

With the industry drive toward cloud computing models there has been a lot of talk and a stream of announcements around ‘converged infrastructure’ and ‘integrated stack’ solutions. An integrated stack is a pre-packaged offering typically containing some amount of network, storage, and server infrastructure bundled with some level of virtualization, automation, and orchestration software. The purpose of these stacks is to simplify infrastructure purchasing and accelerate the migration to virtualized or cloud computing models by reducing risk and time to deployment. That simplification and acceleration is achieved through heavy testing and certification by the vendor or vendors to ensure various levels of compatibility, stability, and performance.

In broad strokes there are two types of integrated stack solution:

Single Vendor – All stack components are developed, manufactured and bundled by a single vendor.

Multi-Vendor – Products from two or more parent vendors are bundled together to create the stack.

Of these two approaches the true value and power typically come from the multi-vendor approach, or Innovative Stack, as long as some key processes are handled correctly, specifically infrastructure pre-integration/delivery and support. With an innovative stack the certification and integration testing is done jointly by the vendors, allowing more time to be spent tailoring the solution to specific needs rather than ensuring component compatibility and design validity. The innovative stack provides a cookie-cutter approach at the infrastructure level.

The reason the innovative stack holds sway is the ability to package ‘best-of-breed’ technologies into a holistic top-tier package rather than relying solely on products and software from a single vendor, some of which may fall lower in the rankings. The large data center hardware vendors all have several disparate product lines, each in various stages of advancement and adoption. While one or two of these product lines may be best-of-breed or close to it, you’d be hard-pressed to argue that any one vendor can provide the best storage, server, and network hardware along with the best automation and orchestration software.

A prime example of this is VMware. It’s difficult to argue that VMware is not the best-of-breed for server virtualization; with a robust feature set, an outstanding history, and approximately 90% market share, they are typically the obvious choice. That said, VMware does not sell hardware, which means that if you’re virtualizing servers and want best-of-breed you’ll need two vendors right out of the gate. VMware also has an excellent desktop virtualization platform, but in that arena Citrix could easily be argued best-of-breed, and both have pros and cons depending on the specific technical and business requirements. For a desktop virtualization architecture it’s not uncommon to have three best-of-breed vendors before even discussing storage or network hardware (Vendor X servers, the VMware hypervisor, and Citrix desktop virtualization).

With the innovative stack approach a collaborative multi-vendor team can analyze, assess, bundle, test, and certify an integration of best-of-breed hardware and software to provide the highest levels of performance, features, and stability. Once the architectures are defined, and if an appropriate support and delivery model is put in place jointly by the vendors, a best-of-breed innovative stack can accelerate your successful adoption of converged infrastructure and cloud-model services. An excellent example of this type of multi-vendor certified Innovative Stack is the FlexPod for VMware by NetApp, Cisco, and VMware, which is backed by a joint support model and delivered through certified expert channel partners.

To participate in a live webcast on the subject and learn more, please register here: http://www.definethecloud.net/innovative-versus-integration-cloud-stacks.


Is Private Cloud a Unicorn?

With all of the discussion, adoption, and expansion of cloud offerings, one debate continues to rear its head: public vs. private, or more bluntly, ‘Is there even such a thing as a private cloud?’  You typically have two sides of this debate coming from two different camps:

Public Cloud Proponents:  There is no such thing as private cloud, and/or you won’t gain the economies of scale and benefits of a cloud model when building it privately.

Private Cloud Proponents: Building a cloud IT delivery model in-house provides greater resource control, accountability, security and can leverage existing infrastructure investment.

Before we begin, let’s start with the basics: the National Institute of Standards and Technology (NIST) definition of cloud:

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.

Essential Characteristics:

On-demand self-service: A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service’s provider.

Broad network access: Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).

Resource pooling: The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.

Rapid elasticity: Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

Measured Service: Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

Service Models:

Cloud Software as a Service (SaaS): The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Cloud Platform as a Service (PaaS): The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.

Cloud Infrastructure as a Service (IaaS): The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

Deployment Models:

Private cloud: The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.

Community cloud: The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.

Public cloud: The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.

Hybrid cloud: The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).

http://csrc.nist.gov/publications/drafts/800-145/Draft-SP-800-145_cloud-definition.pdf

Obviously NIST believes there is a place for private cloud, as do several others, so where does the issue arise?

The argument against private cloud:

Public cloud proponents believe in another defining characteristic of cloud computing: utility pricing.  They believe that the ‘pay for only what you use’ component of public cloud should be required for all clouds, which would negate the concept of private cloud, where the infrastructure is paid for up front and has a cost whether or not it’s used.  The driver for this is cloud’s benefit of moving CapEx (capital expenditure) to OpEx (operating expenditure).  Because you aren’t buying infrastructure, you have no upfront costs and pay as you go for use.  This has obvious advantages, and this type of utility model makes sense (think of the power grid in big-picture terms: you have metered use).
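To make the trade-off concrete, here’s a toy comparison in Python. Every number in it (hardware cost, metered rate, utilization) is a hypothetical assumption of mine, not a figure from any report; the point is simply that metered OpEx pricing favors lightly used capacity, while an amortized CapEx purchase wins once utilization climbs:

```python
# Toy CapEx-vs-OpEx comparison. Every number below is a made-up
# assumption for illustration, not real pricing.

capex = 500_000.0                 # hypothetical upfront infrastructure cost
lifetime_years = 5                # straight-line depreciation period
private_per_year = capex / lifetime_years   # $100,000/year, used or not

metered_rate = 20.0               # hypothetical $/hour for equivalent capacity
hours_per_year = 24 * 365

for utilization in (0.10, 0.40, 0.80):
    public_per_year = metered_rate * hours_per_year * utilization
    winner = "public (OpEx)" if public_per_year < private_per_year else "private (CapEx)"
    print(f"{utilization:.0%} busy: public ${public_per_year:,.0f}/yr -> {winner} wins")

# Utilization above which the upfront purchase becomes cheaper:
breakeven = private_per_year / (metered_rate * hours_per_year)
print(f"Break-even utilization: {breakeven:.0%}")   # ~57% with these numbers
```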

So public cloud it is?

Not so fast!  There are several key concerns for public cloud that may drive the decision to utilize a private cloud:

  • Data Security – Will my data be secure?  Can I entrust it to another entity?  The best example of this is the Department of Defense (DoD) and the intelligence community.  That level of sensitive data cannot be entrusted to a 3rd party.
  • Performance – Will my business applications deliver the same level of performance running in a public offsite cloud?
  • Up-time – On average a properly designed enterprise data center provides 99.99% (4×9’s) uptime or above, whereas a public cloud is typically guaranteed at 3 to 4×9’s.  This means relying on a single public cloud infrastructure will most likely provide less availability for enterprise customers.  To put that in perspective, 3×9’s is 8.76 hours of downtime per year, where 4×9’s is only 52.56 minutes.  An enterprise data center operating at 5×9’s experiences only 5.26 minutes of downtime per year (see the short sketch after this list).
  • Exit/Migration strategy – In the event it were necessary, how would the applications and data be moved back in-house or to another cloud?
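For reference, the downtime figures above fall out of a one-line calculation. A minimal Python sketch (the function name and table are my own, purely illustrative; real SLAs also define measurement windows and exclusions):

```python
# Downtime per year implied by an availability percentage.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def downtime_hours_per_year(availability_pct: float) -> float:
    """Hours of expected downtime per year at a given availability %."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("3x9's", 99.9), ("4x9's", 99.99), ("5x9's", 99.999)]:
    hours = downtime_hours_per_year(pct)
    if hours >= 1:
        print(f"{label} ({pct}%): {hours:.2f} hours/year")
    else:
        print(f"{label} ({pct}%): {hours * 60:.2f} minutes/year")

# 3x9's: 8.76 hours; 4x9's: 52.56 minutes; 5x9's: 5.26 minutes
```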

These factors must be considered when making a decision to utilize a public cloud.  For most organizations they’re typically not roadblocks, but speed bumps that must be navigated carefully.

So which is it?

That question will be answered differently by every organization; it’s based on what you want to do and how you want to do it.  Chris Hoff uses laundry to explain this: http://www.rationalsurvivability.com/blog/?p=2384.  Cost will also be a major factor; Wikibon has an excellent post arguing that private cloud is more cost effective for organizations over $1 billion: http://wikibon.org/wiki/v/Private_Cloud_is_more_Cost_Effective_than_Public_Cloud_for_Organizations_over_$1B.  Additionally, in many cases a hybrid model may work best, either as a permanent solution or as a migration path.

Summary:

Private cloud is no unicorn, and it’s here to stay.  For some it will be a stepping stone to a fully public IT model; for others it will be the solution.  Organizations like the federal government have the data security needs that require a private cloud and the size/scale to gain the benefits of the model.  Other large organizations may find that private makes more monetary sense.  Availability, security, compliance, etc. may drive still other companies to look at a private cloud model.

Cloud is about cost, but more importantly it’s about accelerating the business.  When IT can respond immediately to new demands, the business can execute more quickly.  Both public and private models provide this benefit; each organization will have to decide for itself which model fits its demands.


The Cloud Rules

Cloud Computing Concepts:

These are Twitter-sized quick thoughts. If you’d like more elaboration, or have a comment, participation is highly encouraged.  As I’ve run out of steam on this I’ve decided to move it into a blog post rather than a page.

  • 01: Cloud is a fad like computers, the Internet and social networking were before it.
  • 02: It’s not all or nothing; it’s pick and choose.
  • 03: It’s as secure as YOU make it.
  • 04: There’s no point arguing semantics, argue features.
  • 05: You have at least one application today that’s a great candidate for cloud computing.
  • 06: Cloud requires a migration strategy, not a fork-lift.
  • 07: Virtualization and automation are the building blocks of private cloud.
  • 08: Encrypt locally, store globally.
  • 09: Open portability is key to public cloud.
  • 10: Elasticity means scale-up AND scale-down.
  • 11: Security should not be an afterthought.
  • 12: Multi-Tenancy is your friend.
  • 13: Silo’d organizations breed silo’d architectures.
  • 14: IT should support the business, not the other way around.
  • 15: Performance isn’t about highest/lowest; it’s about application requirements.
  • 16: Cloud pushes IT from CapEx to OpEx, without financing hardware.
  • 17: Features only matter if you need them now or will need them later.
  • 18: Address organizational challenges before technical challenges.
  • 19: The way you do things today should not dictate the way you do things tomorrow.
  • 20: Latency operates independently of bandwidth; low-latency apps require low-latency links.
  • 21: Build a 5-Year plan and incorporate staged migration to cloud architectures/services.
  • 22: Bad budget processes should not force bad IT decisions.
  • 23: If you do things the way you’ve always done them, you get the results you’ve always had.
  • 24: Integration and support are top considerations for private-cloud architectures.
  • 25: Cloud computing provides business agility.
  • 26: Getting applications out of the cloud is as important a consideration as getting them in.
  • 27: There are no ‘one-size-fits-all’ solutions in IT; cloud is no different.

The Reality of Cloud Bursting

Recently, while researching the concept of ‘Cloud Bursting,’ I received a history lesson in cloud computing after a misguided tweet at Chris Hoff (@Beaker).  My snarky comment suggested Chris needed a lesson in cloud history, but as it turns out I received the lesson.  My reference turned out to be a long-debunked myth about the origins of Amazon’s cloud (S3 storage followed by EC2 compute), the details of which can be found here: http://www.quora.com/How-and-why-did-Amazon-get-into-the-cloud-computing-business.  The silver lining of my self-induced public Twitter thrashing was twofold: I learned yet again that the best preventative measure for Foot-In-Mouth Disease is proper research, and I got some great background and info from Chris, Brian Gracely (@bgracely), Matt Davis (@da5is), Roman Tarnavski (@romant), Denis Guyadeen (@dguyadeen), and others.  This all began when I read Chris’s ‘Incomplete Thought: Cloudbursting Your Bubble – I call Bullshit’ (http://www.rationalsurvivability.com/blog/?p=3016).  Chris takes the stance ‘TODAY cloud bursting is BS…’ to quote the man himself.  The ‘today’ is the part I didn’t infer from his blog post (lack of cloud history knowledge aside).

Before we kick off let’s look at the concept of Cloud Bursting:

Cloud Bursting:

In broad strokes, cloud bursting is the idea that an application normally runs in one type of cloud and is capable of utilizing additional resources from another cloud type during peak periods, or ‘bursting.’  The most common example would be a retail company utilizing a private cloud for day-to-day operations and bursting to the public cloud for peak periods such as the holiday season.

[Diagram: a private cloud bursting overflow capacity to a public cloud during peak demand]

At first glance cloud bursting looks like a great way to have your cake and eat it too.  You get the comfort and security blanket of hosting your own applications, with the knowledge that if your capacity spikes you’ve got excess capacity available in the public cloud, on-demand, with a pay-for-use model.
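On paper the mechanics sound trivial. Here’s a minimal Python sketch of the naive placement logic (the Cloud class and its methods are hypothetical stand-ins of mine, not a real orchestration API); the hard parts, covered below, are everything this sketch ignores:

```python
# Naive burst-placement sketch. The Cloud class is a hypothetical
# stand-in; a real system would face data locality, image portability,
# networking, and cost accounting -- the issues discussed below.

from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    capacity: int          # total instance slots
    in_use: int = 0

    def provision(self, count: int) -> None:
        self.in_use += count
        print(f"provisioned {count} instances on {self.name}")

def place(workload: int, private: Cloud, public: Cloud) -> None:
    """Run privately by default; burst only the overflow to public cloud."""
    local = min(workload, private.capacity - private.in_use)
    if local > 0:
        private.provision(local)
    overflow = workload - local
    if overflow > 0:                  # peak demand exceeds private capacity
        public.provision(overflow)    # pay-for-use burst

# Example: holiday traffic exceeds a 100-instance private footprint.
place(120, Cloud("private", 100), Cloud("public", 10_000))
```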

The issue:

The issue is in the reality of this system, as several problems come into play:

  1. If you’ve designed the application to be public cloud compatible, why wouldn’t you just run it there in the first place?
  2. Building a new private cloud infrastructure that doesn’t support your capacity demands is short-sighted.
  3. Designing an application for cloud bursting capability is no easy task and would probably require some portion (data?) to exist in the public cloud constantly, skewing the benefits of the ‘on-demand’ concept of cloud bursting.
  4. The cost model becomes complicated for any given application when infrastructure is purchased up front and depreciated over time alongside pay-for-use costs as the application bursts.

After carefully looking at these and other issues, cloud bursting will most likely not be a reality for most enterprises and applications; it is currently a very rare cloud use case.

Note: Chris Hoff draws a distinction which I wholeheartedly echo: cloud bursting is separate from hybrid cloud approaches, where specific apps run in public or private clouds based on application/business requirements.  The issue above is specifically directed at individual applications bursting between clouds.

The Reality:

For the average enterprise, cloud bursting is not an option today and probably will not be in the future.  While hybrid models can thrive, i.e. some applications run privately and some publicly, or a private cloud is designed to fail over to public cloud, etc., individual applications bursting back and forth between clouds will not be a reality.  Exceptions exist, and there will still be use cases for cloud bursting, but they will be corner cases.  High Performance Computing (HPC) can lend itself well to cloud bursting due to its dynamic and distributed nature.

Another possible use case for cloud bursting is environments that heavily utilize development and test systems but must use on-premise resources for production due to requirements such as security.  In these cases dev/test may be capable of running in the cloud but can more cost-effectively reside locally in the private cloud during off-peak production hours.  The dev/test systems could be designed to burst to the public cloud when production peaks and spare cycles are sparse.


Innovative Versus Integration Cloud Stacks

The live webcast with NetApp and Kingman Tang went quite well, with good discussion on private cloud and data center stacks.  Check out the recording below.

[Embedded recording: A BrightTALK Channel]
