Private Cloud: It’s Not About ROI

Most private cloud discussions revolve around the return on investment (ROI) of the architecture; many begin and quickly end there. The reason is that ROI is very difficult to show in real numbers for any IT investment, and more so when the majority of the costs are soft costs.

ROI is an important factor and can’t be left out of the discussion, but it’s not the only factor, and likely not the most important one.

To read the rest see the blog on Network Computing (no registration required): http://www.networkcomputing.com/private-cloud/231601280

How to Boost Cloud Reliability

Clouds fail. That’s a fact. But if your company uses business apps that are tied to the availability of public cloud services, you can—and must—take steps to mitigate these failures by getting schooled on a few key factors: service-level agreements (SLAs), redundancy options, application design, and the type of service being used. We’ll outline how these factors affect the availability of your applications in the cloud…


Read my full article in the August issue of Network Computing (For IT by IT). (Requires a free registration, my apologies.)

http://www.informationweek.com/nwcdigital/nwcaug11?k=nwchp&cid=onedit_ds_nwchp

Thoughts From a Global Technology Leadership Forum

I recently had the privilege of attending and participating in a global technology leadership forum.  The forum consisted of technology investors, vendors, and thought leaders and was an excellent event.  The tracks I focused on were VDI, Big Data, Data Center Infrastructure, Data Center Networks, Cloud, and Collaboration.  The following are my notes from the event:

VDI:

There was a lot of discussion around VDI and a track dedicated to it.  The overall feeling was that VDI has not lived up to its hype over the last few years; while it continues to grow market share, it never reaches the predicted numbers or hits the bubble predicted for it.  For the most part, the technical experts were in agreement on the fundamentals.

There was some disagreement on whether VDI is the right next step for the enterprise.  The split I saw was nearly 50/50: half thought it is the way forward and will be deployed at greater and greater scale, and the other half thought it is one of many viable current solutions and may not be the right 3-5 year goal.  I’ve expressed my thoughts previously: http://www.definethecloud.net/vdi-the-next-generation-or-the-final-frontier. Lastly, we agreed that the key leaders in this space are still VMware and Citrix.  While each has pros and cons, it was believed that both solutions are close enough that either is viable, and that VMware’s market share and muscle make it very possible for it to pull into a dominant lead.  Other players in this space were complete afterthoughts.

Big Data:

Let me start by saying I know nothing about big data.  I sat in these expert sessions to understand more about it, and they were quite interesting.  Big data sets are being built, stored, and analyzed: customer data, click traffic, etc. are being housed to gather all types of information and insight.  Hadoop clusters are being used to process the data, and both cloud storage such as Amazon S3 and on-premises solutions are being used to house it.  The main questions were where the data should be stored and where it should be processed, as well as the compliance issues that may arise with both.  Another interesting question was the ability to leave the public cloud if your startup grows big enough to beat the costs of public cloud with a private one.  For example, if you have a lot of data you can mail Amazon physical disks to get it into S3 faster than WAN speeds allow, but to our knowledge they can’t/won’t mail your disks back if you want to leave.
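To see why mailing disks can beat the WAN for a large data set, a quick back-of-the-envelope calculation helps. The Python sketch below is illustrative only; the data size, link speed, and efficiency factor are assumptions I picked for the example, not figures from the session.

# Rough comparison: moving a large data set to S3 over the WAN
# versus shipping physical disks. All inputs are hypothetical.

def wan_transfer_days(data_tb, wan_mbps, efficiency=0.8):
    """Days to move data_tb terabytes over a wan_mbps link,
    assuming only `efficiency` of the line rate is achievable."""
    bits = data_tb * 8 * 10**12                   # decimal TB -> bits
    seconds = bits / (wan_mbps * 10**6 * efficiency)
    return seconds / 86400

# 50 TB over a 100 Mbps link vs. roughly 3 days of overnight shipping
print(f"{wan_transfer_days(50, 100):.0f} days over the WAN vs. ~3 days by courier")
# -> 58 days over the WAN vs. ~3 days by courier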

Data Center Infrastructure:

Overall there was agreement that very few data center infrastructure (defined here as compute, network, and storage) conversations occur without talk of cloud.  Cloud is a consideration for IT leaders from the SMB to the large global enterprise.  That being said, while cloud may frame the discussion, the majority of current purchases are still focused on consolidation and virtualization, with some automation sprinkled in.  Private cloud stacks from the major vendors also come into play, helping to accelerate the journey, but many are still not true private clouds (see: http://www.definethecloud.net/the-difference-between-private-cloud-and-converged-infrastructure.)

Data Center Networks:

I moderated a session on flattening data center networks, currently referred to as building ‘fabrics.’  The majority of the large network players have announced or are shipping ‘fabric’ solutions.  These solutions build multiple active paths at Layer 2, alleviating the blocked links traditional Spanning Tree requires.  This is necessary as we converge our data and ask more of our networks.  The panel agreed that these tools are necessary, but that standards are required to push this forward and avoid vendor lock-in.  As an industry we don’t want to trade away our vendor independence to move to a fabric concept.  That being said, most agreed that pre-standard proprietary deployments are acceptable as long as the vendor is committed to the standard and the hardware is intended to be standards compliant.
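The blocked-link point is easy to quantify. Here is a minimal sketch of usable uplink bandwidth under classic Spanning Tree versus a multipath fabric; the link counts and speeds are assumed values for illustration, not vendor data.

# Illustrative only: assumed link counts and speeds.
# Spanning Tree leaves one forwarding uplink and blocks the rest;
# a Layer 2 fabric (TRILL/SPB-style ECMP) keeps every uplink active.

uplinks = 4        # redundant uplinks from an access switch
link_gbps = 10     # speed of each uplink

stp_usable = 1 * link_gbps           # one active path per tree
fabric_usable = uplinks * link_gbps  # all paths forwarding

print(f"Spanning Tree: {stp_usable} Gbps usable")
print(f"Fabric (ECMP): {fabric_usable} Gbps usable ({fabric_usable // stp_usable}x)")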

Cloud:

One of the main conversations I had was in regard to PaaS.  While many agree that PaaS and SaaS are the end goals of public and private clouds, the PaaS market is not yet fully mature (see: http://www.networkcomputing.com/private-cloud/231300278.)  Compatibility, interoperability, and lock-in were the major concerns for PaaS overall.  Additionally, while there are many PaaS leaders, the market is so immature that leadership could change at any time, making it hard to pick which horse to back.

Another big topic was ‘open’ and open source: OpenStack, OpenFlow, and open source players like Red Hat.  With Red Hat’s impressive year-over-year growth they are tough to ignore, and there is a lot of push for open source solutions as we move to larger and larger cloud systems.  The feeling is that larger and more technically adept IT shops will look to these solutions first when building private clouds.

Collaboration:

Yet another subject I’m not an expert on but wanted to learn more about.  The first part of the discussion entailed deciding what we were actually discussing, i.e. ‘What is collaboration?’  This was needed because, depending on who you talk to, the term encompasses voice, video, IM, conferencing, messaging, social media, and more.  We settled into a focus on enterprise productivity tools, messaging, information repositories, etc.  The overall feeling was that there are more questions than answers in this space: great tools exist, but there are no clear leaders.  Integration between enterprise tools and public tools was also a topic, particularly the idea of ensuring compliance.  One of the major discussions was building internal adoption and maintaining momentum.  The concern with a collaboration tool rollout is an initial boom of interest followed by a lull and the eventual death of the tool as users get bored with the novelty before finding any ‘stickiness.’

VDI, the Next Generation or the Final Frontier?

After sitting through a virtualization sales pitch focused on Virtual Desktop Infrastructure (VDI) this afternoon, I had several thoughts on the topic that I thought might be blog-worthy.

VDI has been a constant buzzword for a few years now, riding the coattails of server virtualization.  For the majority of those years you can search back and find predictions from the likes of Gartner touting ‘This is the year for VDI’ or similar statements, typically with projected growth rates that never materialize.  What you won’t see is those same analyst organizations reaching back the year after and answering for why they overhyped it or were blatantly incorrect. (Great idea for a yearly blog here: analyzing the previous year’s failed predictions.)

The reasons they’ve been incorrect vary over the years, starting with the technical inadequacy of the infrastructures and a lack of real understanding as an industry.  When VDI first hit the forefront, many of us (myself included) assumed desktops could be virtualized the same as servers (Windows is Windows, right?).  What we neglected to account for was the plethora of varying user applications, the difficulty of video and voice, and other factors such as boot storms, which are unique to, or more amplified within, VDI environments than their server counterparts.  From there, the VDI rollout horror stories and memories of failed proofs of concept slowed adoption and interest for a short period.

Now we’re at a point where the technology can overcome the challenges and the experts are battle-hardened with knowledge of successes and failures in various environments; yet still adoption is slow.  Users are bringing new devices into the workplace and expecting them to interface with enterprise services; yet still adoption is slow.  We supposedly have a more demanding influx of younger-generation employees who demand remote access from their chosen devices; yet still adoption is slow.  This doesn’t mean that VDI isn’t being adopted, nor that the market share numbers aren’t increasing across the board; it’s just slow.

The reason for this is that our thinking and capabilities for service delivery have surpassed the need for VDI in many environments. VDI wasn’t an end goal but an improvement over individually managed, monitored, and secured local end-user OS environments.  The end goal isn’t removing the OS’s tie to the hardware on the endpoint (which is what VDI does) but removing the application’s tie to the OS; or more simply put, removing any local requirements for access to the services.  Starting to sound like cloud?

Cloud is the reason enterprise IT hasn’t been diving into VDI head first; the movement to cloud services has shown that, for many, we may have passed the point where VDI could show true return on investment (ROI) before being obsoleted.  Cloud is about delivering the service to any web-connected endpoint on demand, regardless of platform (OS).  If you can push the service to my iOS, Android, Windows, Linux, etc. device without the requirement for a particular OS, then what’s the need for VDI?

To use a real-world example: I am a Microsoft zealot.  I use Windows 7, Bing for search, and only IE for browsing on my work and personal computers (call me retro).  I also own an iPad, mainly due to the novelty and the fact that I got addicted to ‘Flight Control’ on a friend’s iPad at the release of the original.  I occasionally use the iPad for what I’d call ‘productivity work’ related to my primary role or side projects.  Using my iPad I do things like access corporate email for the company I work for and my own, review files, access Salesforce and Salesforce Chatter, and even perform some remote equipment demos, with my files seamlessly synched between my various other computers.  I do all of this without a Windows 7 virtual desktop running on my iPad; it’s all done through apps connected to these services directly.  In fact, the only reason I have VDI client applications on my iPad is to demo VDI, not to actually work.

Now, an iPad is not a perfect example.  I’d never use it for developing content (slides, reports, spreadsheets, etc.), but I do use it for consuming content, email, and the like.  To develop, I turn to a laptop with a full keyboard, screen, and monitor outputs.  This laptop may be a case for VDI, but in reality, why?  If the services I use are cloud based, public or private, and the data I utilize is as well, then the OS is irrelevant again.  With office applications moving to the cloud (Microsoft Office 365, Google Docs, etc.) along with many others, and many services and applications already there, what is the need for a VDI infrastructure?

VDI, like server virtualization, is really a band-aid for an outdated application deployment process that uses local applications tied to a local OS and hardware.  Virtualizing the hardware doesn’t change that model, but it can provide significant benefits in the interim.

Once the wound of our current application deployment model has fully healed, the band-aid comes off and we have service delivery from cloud computing environments free of any OS or hardware ties.

So friends don’t let friends virtualize desktops, right?

Not necessarily.  As shown above, VDI can have significant advantages over standard desktop deployment.  Those advantages can drive business flexibility and reduce costs.  The difficult question becomes whether VDI will pay for itself before a pure service delivery model makes it unnecessary.

Many organizations will still see benefits from deploying VDI today, because the ROI of VDI will occur more quickly than the ability to deliver all business apps as a service.  Additionally, VDI is an excellent way to begin getting your feet wet with the concepts of supporting any device with organizational controls and delivering services remotely.  Coupling VDI with things like thin apps will put you one step closer while providing additional flexibility to your IT environment.

When assessing a VDI project, you’ll want to take a close look at the time it will take your organization to hit ROI with the deployment, and weigh that against the time it would take to move to a pure service delivery model (if your organization is capable of such a move).  VDI is a fantastic tool in the data center tool bag, but like all others it’s not the right tool for every job.  VDI is definitely the Next Generation, but it is not The Final Frontier.

Additional fun:

Here are some sales statements that are commonly used when pitching VDI, all of which I consider to be total hogwash.  Try out or modify a few of my one-line answers next time your vendor’s there telling you about the wonderful world of VDI and why you need it now.

Vendor: ‘<Insert analyst here (Gartner, etc.)> says that 2011 is the year for VDI.’  Alternatively: ‘<Insert analyst here (Gartner, etc.)> predicts VDI will grow X amount this year.’

My answer: ‘That’s quite interesting; let’s adjourn for now and reconvene when you’ve got data on <Insert analyst here (Gartner, etc.)>’s VDI predictions for the previous 5 years.’

Vendor: ‘The next generation of workers coming from college demand to use the devices and services they are used to, to do their job.’

My answer: ‘Excellent, they’ll enjoy working somewhere that allows that; we have corporate policies and rules to protect our data and network.’  This won’t work in every case: as Mike Stanley (@mikestanley) pointed out to me, universities, for example, have student IT consumers who are the paying customers, and this answer would be much more difficult there.

Vendor: ‘People want a Bring Your Own (BYO) device model.’

My answer: ‘If I bring my own device and the fact that I want to matters, what makes you think I’ll want your desktop?  Just give me the application or service.’

Additional Private Cloud Blogs

For those who are interested and unaware, I’ve been blogging for Network Computing for about a month on their Private Cloud Tech Center.  You can find those blogs here: http://www.networkcomputing.com/private-cloud-tech-center.  You should see a new one there every week or so.  I will continue to publish content here as regularly as possible, and I’m always seeking new contributors for guest posts or regular contributions.  Contact me via the About page if you’re interested.

Server/Desktop Virtualization–A Best of Breed Band-Aid

Virtualization is a buzzword that has moved beyond buzz into mainstream use and enterprise deployment.  A few years back vendors were ‘virtualization-washing’ their products and services the way many ‘cloud-wash’ them today.  Now a good majority of enterprises are well into their server virtualization efforts and moving into Virtual Desktop Infrastructure (VDI) and cloud deployments.  This is not by accident; hardware virtualization comes with a myriad of advantages: resource optimization, power and cooling savings, flexibility, rapid deployment, etc.  That being said, we dove into server/desktop virtualization with the same blinders on we’ve worn as an industry since we broke away from big iron.  We effectively fix point problems while ignoring the big picture, and create new problems in the process.

The underlying issue is the way in which we design our applications.  When we moved to commodity servers we built an application model with a foundation of one application, one operating system (OS), one server, and we’ve maintained that model ever since.  Server/desktop virtualization provides benefits but does not change this model; it just virtualizes the underlying server and places more silos on a single piece of hardware to increase utilization.  Our applications and the services they deliver are locked into this model and suffer from it when we look at scale, flexibility, and business continuance.

This is not a sustainable model, or at best not the most efficient model for service delivery.  Don’t take my word for it; jump on Bing and do a search for recent VMware acquisitions and partnerships.  The dominant giant in virtualization is acquiring or partnering with companies poised to make it the dominant giant in PaaS and SaaS.  Cloud computing as a whole offers the opportunity to rethink service delivery, or, possibly more importantly, brings the issues of service delivery and IT costing to the front of our minds.

Moving applications and services to robust, highly available, flexible architectures is the first step in transforming IT into a department that enables the business.  The second step is removing the application/OS silo and building services that can scale up and down independent of the underlying OS stack.  When you talk about zero-downtime business continuance, massively scalable applications, global accessibility, and other such goals, the current model is an anchor.

That being said, transforming these services is no small task.  Redesigning applications to new architectures can be monumental.  Redesigning organizations and processes and retraining people can be even more difficult.  The technical considerations for designing global, highly available services touch on every aspect of application and architecture design: storage, network, web access, processing, etc.  Even so, the tools are either available or rapidly emerging.

Any organization looking to make significant IT purchases or changes should be considering all of the options and looking at the big picture as much as possible.  The technology is available to transform the way we do business.  It may not be right for every organization or application, but it’s not an all-or-nothing proposition.  There’s no fault in virtualizing servers and desktops today, but the end goal on the roadmap should be efficient service delivery optimized to the way you do business.

For more of my thoughts on this see my post on www.networkcomputing.com: http://www.networkcomputing.com/private-cloud/230600012.

My Recent Guest Spot on The Cloudcast (.NET) Podcast


Brian Gracely, Aaron Delp, and I discuss converged infrastructure stacks, tech news, and industry direction: http://www.thecloudcast.net/2011/06/cloudcast.html.  It was a lot of fun to chat with them, and we covered some great topics.

The Power of Innovative Datacenter Stacks

With the industry drive towards cloud computing models there has been a lot of talk and many announcements around ‘converged infrastructure’ and ‘integrated stack’ solutions. An integrated stack is a pre-packaged offering typically containing some amount of network, storage, and server infrastructure bundled with some level of virtualization, automation, and orchestration software. The purpose of these stacks is to simplify infrastructure purchasing and accelerate the migration to virtualized or cloud computing models by reducing risk and time to deployment. This simplification and acceleration is achieved through heavy testing and certification by the vendor or vendors to ensure various levels of compatibility, stability, and performance.

In broad strokes there are two types of integrated stack solution:

Single Vendor - All stack components are developed, manufactured and bundled by a single vendor.

Multi-Vendor - Products from two or more parent vendors are bundled together to create the stack.

Of these two approaches, the true value and power typically come from the multi-vendor approach, or Innovative Stack, as long as some key processes are handled correctly, specifically infrastructure pre-integration/delivery and support. With an innovative stack, the certification and integration testing is done jointly by the vendors, allowing more time to be spent tailoring the solution to specific needs rather than ensuring component compatibility and design validity. The innovative stack provides a cookie-cutter approach at the infrastructure level.

The reason the innovative stack holds sway is the ability to package ‘best-of-breed’ technologies into a holistic top-tier package rather than relying solely on products and software from a single vendor, some of which may fall lower in the rankings. The large data center hardware vendors all have several disparate product lines, each in various stages of advancement and adoption. While one or two of these product lines may be best-of-breed or close to it, you’d be hard-pressed to argue that any one vendor can provide the best storage, server, and network hardware along with the best automation and orchestration software.

A prime example of this is VMware. It’s difficult to argue that VMware is not the best-of-breed for server virtualization; with a robust feature set, an outstanding history, and approximately 90% market share, they are typically the obvious choice. That being said, VMware does not sell hardware, which means if you’re virtualizing servers and want best-of-breed you’ll need two vendors right out of the gate. VMware also has an excellent desktop virtualization platform, but in that arena Citrix could easily be argued best-of-breed, and both have pros and cons depending on the specific technical and business requirements. For a desktop virtualization architecture it’s not uncommon to have three best-of-breed vendors before even discussing storage or network hardware (Vendor X server, VMware hypervisor, and Citrix desktop virtualization).

With the innovative stack approach, a collaborative multi-vendor team can analyze, assess, bundle, test, and certify an integration of best-of-breed hardware and software to provide the highest levels of performance, feature set, and stability. Once the architectures are defined, if an appropriate support and delivery model is put in place jointly by the vendors, a best-of-breed innovative stack can accelerate your successful adoption of converged infrastructure and cloud-model services. An excellent example of this type of multi-vendor certified Innovative Stack is the FlexPod for VMware by NetApp, Cisco, and VMware, which is backed by a joint support model and delivery packaging through certified expert channel partners.

To participate in a live WebCast on the subject and learn more please register here: http://www.definethecloud.net/innovative-versus-integration-cloud-stacks.

Is Private Cloud a Unicorn?

With all of the discussion, adoption, and expansion of cloud offerings, there is a constant debate that continues to rear its head: public vs. private, or more bluntly, ‘Is there even such a thing as a private cloud?’  You typically have two sides of this debate coming from two different camps:

Public Cloud Proponents:  There is no such thing as private cloud, and/or you won’t gain the economies of scale and benefits of a cloud model when building it privately.

Private Cloud Proponents: Building a cloud IT delivery model in-house provides greater resource control, accountability, and security, and can leverage existing infrastructure investment.

Before we begin, let’s start with the basics: the National Institute of Standards and Technology (NIST) definition of cloud:

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.

Essential Characteristics:

On-demand self-service: A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service’s provider.

Broad network access: Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).

Resource pooling: The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.

Rapid elasticity: Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

Measured Service: Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

Service Models:

Cloud Software as a Service (SaaS): The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Cloud Platform as a Service (PaaS): The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.

Cloud Infrastructure as a Service (IaaS): The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

Deployment Models:

Private cloud: The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.

Community cloud: The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.

Public cloud: The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.

Hybrid cloud: The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).

http://csrc.nist.gov/publications/drafts/800-145/Draft-SP-800-145_cloud-definition.pdf

Obviously NIST believes there is a place for private cloud, as do several others, so where does the issue arise?

The argument against private cloud:

Public cloud proponents believe in another defining characteristic of cloud computing: utility pricing.  They believe that the ‘pay for only what you use’ component of public cloud should be required for all clouds, which would negate the concept of private cloud, where the infrastructure is paid for up front and has a cost whether or not it’s used.  The driver for this is cloud’s benefit of moving CapEx (capital expenditure) to OpEx (operating expenditure): because you aren’t buying infrastructure, you have no upfront costs and pay as you go for use.  This has obvious advantages, and this type of utility model makes sense (think of the power grid in big-picture terms: you have metered use).
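To make the CapEx-versus-OpEx trade-off concrete, here is a deliberately simplified break-even sketch. Every number in it (hardware cost, operating cost, lifetime, hourly rate) is a placeholder assumption, not a real price; the point is the shape of the comparison, not the figures.

# Hypothetical numbers throughout; adjust for your own environment.
server_capex = 10_000        # upfront hardware cost per server ($)
opex_per_year = 2_000        # power/cooling/admin per server per year ($)
lifetime_years = 3           # amortization period
public_rate_hr = 0.70        # pay-as-you-go instance price ($/hour)

# Private cloud cost is fixed whether the server is busy or idle
private_total = server_capex + opex_per_year * lifetime_years
hours = lifetime_years * 365 * 24

# Utilization at which public cloud spend matches the fixed private cost
breakeven = private_total / (public_rate_hr * hours)
print(f"Private total over {lifetime_years} years: ${private_total:,}")
print(f"Break-even utilization: {breakeven:.0%}")
# Below this utilization pay-per-use wins; above it, owning wins.

In this toy model the private option only pays off when the capacity is busy most of the time, which is exactly the argument utility-pricing proponents make.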

So public cloud it is?

Not so fast!  There are several key concerns with public cloud (security, compliance, availability, and control of data among them) that may drive the decision to utilize a private cloud.

These factors must be considered when making a decision to utilize a public cloud.  For most organizations they are not roadblocks, but speed bumps that must be navigated carefully.

So which is it?

That question will be answered differently for every organization; it’s based on what you want to do and how you want to do it.  Chris Hoff uses laundry to explain this: http://www.rationalsurvivability.com/blog/?p=2384.  Cost will also be a major factor; Wikibon has an excellent post arguing that private cloud is more cost effective for organizations over $1 billion: http://wikibon.org/wiki/v/Private_Cloud_is_more_Cost_Effective_than_Public_Cloud_for_Organizations_over_$1B.  Finally, in many cases a hybrid model may work best, either as a permanent solution or as a migration path.

Summary:

Private cloud is no unicorn, and it is here to stay.  For some it will be a stepping stone to a fully public IT model, and for others it will be the solution.  Organizations like the federal government have the data security needs to require a private cloud and the size/scale to gain the benefits of the model.  Other large organizations may find that private makes more monetary sense.  Availability, security, compliance, etc. may drive still other companies to look at a private cloud model.

Cloud is about cost, but more importantly it’s about accelerating the business.  When IT can respond immediately to new demands, the business can execute more quickly.  Both public and private models provide this benefit; each organization will have to decide for itself which model fits its demands.

The Cloud Rules

Cloud Computing Concepts:

These are Twitter-sized quick thoughts. If you’d like more elaboration or have a comment, participation is highly encouraged.  As I’ve run out of steam on this, I’ve decided to move it into a blog rather than a page.