Thoughts From a Global Technology Leadership Forum

I recently had the privilege of attending and participating in a global technology leadership forum. The forum consisted of technology investors, vendors and thought leaders and was an excellent event. The tracks I focused on were VDI, Big Data, Data Center Infrastructure, Data Center Networks, Cloud and Collaboration. The following are my notes from the event:

VDI:

There was a lot of discussion around VDI and a track dedicated to it. The overall feeling was that VDI has not lived up to its hype over the last few years: while it continues to grow market share, it never reaches the predicted numbers or the boom that keeps being forecast for it. For the most part the technical experts agreed on the following:

  • VDI has had several hang-ups (technical, cost and image related) that have held it back from mass-scale adoption.
  • The technical challenges have largely been solved: storage features such as caching, tiering and SSD can resolve IOPS contention and help reduce costs, and storage optimization products like Atlantis Computing exist to lower cost per seat by reducing the storage required to deliver acceptable IOPS.
  • The cost model is getting better but is still not at a place where VDI is a no-brainer. The consensus was that until a complete VDI solution can be rolled out for a cost per seat equal to or lower than a typical enterprise desktop/laptop, it will remain a tough decision. Currently VDI is still a soft-cost ROI: it provides added features and benefits at a slightly higher cost (see the rough sizing sketch after this list).
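
To make the cost-per-seat and IOPS points concrete, here is a minimal back-of-the-envelope sketch. Every number in it (IOPS per desktop, boot-storm multiplier, infrastructure and licensing costs, desktop price) is a hypothetical placeholder, not a figure from the forum; substitute your own quotes and assessment data.

```python
# Rough, illustrative VDI sizing/cost sketch; all figures are hypothetical.

def aggregate_iops(seats: int, steady_iops_per_seat: float, storm_multiplier: float) -> float:
    """Worst-case IOPS the shared storage must sustain (e.g. login/boot storms)."""
    return seats * steady_iops_per_seat * storm_multiplier

def vdi_cost_per_seat(shared_infra_cost: float, licensing_per_seat: float, seats: int) -> float:
    """Fully loaded cost per seat: shared infrastructure amortized plus per-seat licensing."""
    return shared_infra_cost / seats + licensing_per_seat

seats = 500
print("Peak IOPS to design for:", aggregate_iops(seats, steady_iops_per_seat=10, storm_multiplier=3))
vdi = vdi_cost_per_seat(shared_infra_cost=400_000, licensing_per_seat=150, seats=seats)
print(f"VDI per seat: ${vdi:,.0f} vs. a hypothetical $900 enterprise desktop")
```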

There was some disagreement on whether VDI is the right next step for the enterprise. The split I saw was nearly 50/50, with half thinking it is the way forward and will be deployed at greater and greater scale, and the other half thinking it is one of many viable current solutions and may not be the right 3-5 year goal. I've expressed my thoughts previously: http://www.definethecloud.net/vdi-the-next-generation-or-the-final-frontier. Lastly we agreed that the key leaders in this space are still VMware and Citrix. While each has pros and cons, the feeling was that both solutions are close enough to be viable, and that VMware's market share and muscle make it very possible for it to pull into a dominant lead. Other players in this space were complete afterthoughts.

Big Data:

Let me start by saying I know nothing about big data. I sat in these expert sessions to understand more about it, and they were quite interesting. Big data sets are being built, stored and analyzed: customer data, click traffic, etc. are being housed to gather all types of information and insight. Hadoop clusters are being used to process the data, and cloud storage such as Amazon S3 is being utilized alongside on-premises solutions. The main questions were where the data should be stored and where it should be processed, as well as the compliance issues that may arise with both. Another interesting question was the ability to leave the public cloud if your startup grows big enough to beat the costs of public cloud with a private one. For example, if you have a lot of data you can mail Amazon disks to get it into S3 faster than WAN speed, but to our knowledge they can't/won't mail your disks back if you want to leave (a rough transfer-time calculation is sketched below).
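
To put the "mail Amazon disks" comment in context, here is a quick, hypothetical transfer-time calculation; the dataset size, link speed and efficiency are made-up examples, not anything quoted in the session.

```python
# Back-of-the-envelope: days to move a dataset over a WAN link.

def transfer_days(dataset_tb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Days to push a dataset over a link at a given sustained efficiency."""
    bits = dataset_tb * 8 * 10**12                      # decimal terabytes to bits
    seconds = bits / (link_mbps * 10**6 * efficiency)   # sustained throughput
    return seconds / 86_400

print(f"{transfer_days(50, 100):.0f} days to move 50 TB over a 100 Mbps link")
# Shipping disks takes days regardless of size, which is why the apparently
# one-way nature of the disk-import option matters if you ever want to leave.
```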

Data Center Infrastructure:

Overall there was agreement that very few data center infrastructure (defined here as compute, network and storage) conversations occur without talk of cloud. Cloud is a consideration for IT leaders from the SMB to the large global enterprise. That being said, while cloud may frame the discussion, the majority of current purchases are still focused on consolidation and virtualization, with some automation sprinkled in. Private-cloud stacks from the major vendors also come into play, helping to accelerate the journey, but many are still not true private clouds (see: http://www.definethecloud.net/the-difference-between-private-cloud-and-converged-infrastructure).

Data Center Networks:

I moderated a session on flattening data center networks, currently referred to as building 'fabrics.' The majority of the large network players have announced or are shipping 'fabric' solutions. These solutions build multiple active paths at Layer 2, alleviating the blocked links that traditional Spanning Tree requires. This is necessary as we converge our data and ask more of our networks (a quick comparison is sketched below). The panel agreed that these tools are necessary but that standards are required to push this forward and avoid vendor lock-in. As an industry we don't want to downgrade our vendor independence to move to a fabric concept. That being said, most agreed that pre-standard proprietary deployments are acceptable as long as the vendor is committed to the standard and the hardware is intended to be standards compliant.
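
As a simple illustration of why blocked links matter, the sketch below compares usable uplink bandwidth in a basic topology with classic Spanning Tree (redundant uplinks blocked) versus a Layer 2 multipath fabric (all uplinks active). The link counts and speeds are arbitrary examples.

```python
# Hypothetical example: four 10 Gbps uplinks from an access switch.

def usable_uplink_gbps(uplinks: int, link_gbps: float, fabric: bool) -> float:
    """In a simple classic STP topology one uplink forwards and the rest are
    blocked; a Layer 2 multipath fabric keeps every uplink active."""
    active_links = uplinks if fabric else 1
    return active_links * link_gbps

print("STP   :", usable_uplink_gbps(4, 10, fabric=False), "Gbps usable")
print("Fabric:", usable_uplink_gbps(4, 10, fabric=True), "Gbps usable")
```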

Cloud:

One of the main conversations I had was in regard to PaaS. While many agree that PaaS and SaaS are the end goals of public and private clouds, the PaaS market is not yet fully mature (see: http://www.networkcomputing.com/private-cloud/231300278). Compatibility, interoperability and lock-in were major concerns overall for PaaS. Additionally, while there are many PaaS leaders, the market is so immature that leadership could change at any time, making it hard to pick which horse to back.

Another big topic was open standards and open source: OpenStack, OpenFlow and open source players like Red Hat. With Red Hat's impressive YoY growth they are tough to ignore, and there is a lot of push for open source solutions as we move to larger and larger cloud systems. The feeling is that larger and more technically adept IT shops will look to these solutions first when building private clouds.

Collaboration:

Yet another subject I'm not an expert on but wanted to learn more about. The first part of the discussion entailed deciding what we were actually discussing, i.e. 'What is collaboration?' This was needed because, depending on who you talk to, the term encompasses voice, video, IM, conferencing, messaging, social media, etc. We settled into a focus on enterprise productivity tools, messaging, information repositories, etc. The overall feeling was that there are more questions than answers in this space: great tools exist but there are no clear leaders. Additionally, integration between enterprise tools and public tools was a topic, along with the need to ensure compliance. One of the major discussions was building internal adoption and maintaining momentum. The concern with a collaboration tool rollout is the initial boom of interest followed by a lull and the eventual death of the tool as users get bored with the novelty before finding any 'stickiness.'


My Recent Guest Spot on The Cloudcast (.NET) Podcast


Brian Gracely, Aaron Delp, and I discuss converged infrastructure stack, tech news and industry direction: http://www.thecloudcast.net/2011/06/cloudcast.html.  It was a lot of fun to chat with them and we covered some great topics.


The Difference Between Private Cloud and Converged Infrastructure

With all of the hype around private clouds and manufacturer private cloud infrastructure stacks, I thought I'd take some time to differentiate between 'private cloud' and 'converged infrastructure.' For some background on private cloud see two of my previous posts: http://www.definethecloud.net/building-a-private-cloud and http://www.definethecloud.net/is-private-cloud-a-unicorn.

Private clouds typically consist of four architectural stages: consolidation, virtualization, automation and orchestration (I describe these here: http://www.definethecloud.net/smt-matrix-and-vblock-architectures-for-private-cloud).

To build a true private cloud, hardware/platform consolidation is layered with virtualization, automation and orchestration (without which the 'On-Demand Self-Service' requirement of NIST's definition is not met). The end result is an IT model and infrastructure that moves at the pace of business.

Converged infrastructure, on the other hand, is a subset of this: typically consolidation and virtualization, possibly with some automation. With all major vendors selling some form of 'integrated stack' marketed as private cloud, I thought I'd take a look at where four of the most popular actually fall along the path.

 

[Diagram: the four vendor stacks positioned along the private cloud pyramid]

Starting from the bottom (as in the bottom of the pyramid, rather than the bottom in quality, value, etc.):

FlexPod: FlexPod is an architecture designed using NetApp storage and Cisco compute and networking components. The FlexPod architectures address various business and application needs but do not include automation/orchestration software, the idea being that customers have the flexibility to choose the level and type of automation/orchestration suite they require.

Vblock: Vblocks consist of EMC storage coupled with VMware virtualization and Cisco network/compute. Additionally, Vblock incorporates EMC's Unified Infrastructure Manager (UIM), which enables automation and a single point of management for most of the infrastructure components. An orchestration suite would still be required for true private cloud.

Exalogic: Oracle’s stack offering is Exalogic which combines Oracle hardware with their middleware and software to provide a private cloud platform tailored toward Java environments.  The provisioning tools included offer the promise of private cloud ‘on-demand self-service.’

BladeSystem Matrix: HP's offering is built upon HP BladeSystem, storage, network and software components and is managed by HP's Cloud Service Automation. The automation and orchestration tools included in that software suite put HP's offering in the private cloud arena.

Bottom Line:

Depending on the drivers, requirements and individual environment, all of these stacks can offer customers a platform from which to rapidly build cloud services. The key is deciding what you want and which tool is best to get you there, a decision based on both ROI and business agility, as cost is not the only reason for a migration to cloud.

For a deeper look at private cloud stacks check out my post at Network Computing (http://www.networkcomputing.com/private-cloud/229900081).


Is Private Cloud a Unicorn?

With all of the discussion, adoption and expansion of cloud offerings, a constant debate continues to rear its head: public vs. private, or more bluntly, 'Is there even such a thing as a private cloud?' You typically have two sides of this debate coming from two different camps:

Public Cloud Proponents: There is no such thing as private cloud, and/or you won't gain the economies of scale and benefits of a cloud model when building it privately.

Private Cloud Proponents: Building a cloud IT delivery model in-house provides greater resource control, accountability, security and can leverage existing infrastructure investment.

Before we begin, let's start with the basics: the National Institute of Standards and Technology (NIST) definition of cloud:

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.

Essential Characteristics:

On-demand self-service: A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service's provider.

Broad network access: Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).

Resource pooling: The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.

Rapid elasticity: Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

Measured Service: Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

Service Models:

Cloud Software as a Service (SaaS): The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Cloud Platform as a Service (PaaS): The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.

Cloud Infrastructure as a Service (IaaS): The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

 

Deployment Models:

Private cloud: The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.

Community cloud: The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.

Public cloud: The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.

Hybrid cloud: The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).

http://csrc.nist.gov/publications/drafts/800-145/Draft-SP-800-145_cloud-definition.pdf

Obviously NIST believes there is a place for private cloud, as do several others, so where does the issue arise?

The argument against private cloud:

Public cloud proponents believe in another defining characteristic of cloud computing: utility pricing. They believe that the 'pay for only what you use' component of public cloud should be required for all clouds, which would negate the concept of private cloud, where the infrastructure is paid for up front and has a cost whether or not it's used. The driver for this is cloud's benefit of moving CapEx (capital expenditure) to OpEx (operating expenditure): because you aren't buying infrastructure, you have no upfront costs and pay as you go for use. This has obvious advantages, and this type of utility model makes sense (think of the power grid in big-picture terms: you have metered use).

So public cloud it is?

Not so fast!  There are several key concerns for public cloud that may drive the decision to utilize a private cloud:

  • Data Security – Will my data be secure, and can I entrust it to another entity? The best example of this would be the Department of Defense (DoD) and intelligence community. That level of sensitive data cannot be entrusted to a third party.
  • Performance – Will my business applications have the same level of performance when running in a public offsite cloud?
  • Up-time – On average a properly designed enterprise data center provides 99.99% (four nines) uptime or above, whereas a public cloud is typically guaranteed at three to four nines. This means relying on a single public cloud infrastructure will most likely provide less availability for enterprise customers. To put that in perspective, three nines is 8.76 hours of downtime per year while four nines is only 52.56 minutes; an enterprise data center operating at five nines experiences only 5.26 minutes of downtime per year (see the quick calculation after this list).
  • Exit/Migration strategy – In the event it were necessary, how would the applications and data be moved back in-house or to another cloud?
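
The downtime figures above are just arithmetic on the availability percentages; here is a quick sketch that reproduces them.

```python
# Yearly downtime for a given availability ('nines'); matches the figures above.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def downtime_per_year(availability: float) -> str:
    hours = HOURS_PER_YEAR * (1 - availability)
    return f"{hours:.2f} hours" if hours >= 1 else f"{hours * 60:.2f} minutes"

for label, availability in [("3 nines", 0.999), ("4 nines", 0.9999), ("5 nines", 0.99999)]:
    print(label, "=", downtime_per_year(availability))
# 3 nines = 8.76 hours, 4 nines = 52.56 minutes, 5 nines = 5.26 minutes
```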

These factors must be considered when making a decision to utilize a public cloud.  For most organizations they’re typically not roadblocks, but speed bumps that must be navigated carefully.

So which is it?

That question will be answered differently for every organization; it's based on what you want to do and how you want to do it. Chris Hoff uses laundry to explain this: http://www.rationalsurvivability.com/blog/?p=2384. Cost will also be a major factor: Wikibon has an excellent post arguing that private cloud is more cost effective for organizations over $1 billion: http://wikibon.org/wiki/v/Private_Cloud_is_more_Cost_Effective_than_Public_Cloud_for_Organizations_over_$1B. Additionally, in many cases a hybrid model may work best, either as a permanent solution or as a migration path.

Summary:

Private cloud is no unicorn and will be here to stay.  For some it will be a stepping stone to a fully public IT model, and for others it will be the solution.  Organizations like the federal government have the data security needs to require a private cloud and the size/scale to gain the benefits of that model.  Other large organizations may find that private makes more monetary sense.  Availability, security, compliance etc. may drive other companies to look at a private cloud model.

Cloud is about cost, but it's more importantly about accelerating the business. When IT can respond immediately to new demands, the business can execute more quickly. Both public and private models provide this benefit; each organization will have to decide for itself which model fits its demands.


The Cloud Rules

Cloud Computing Concepts:

These are Twitter-sized quick thoughts. If you'd like more elaboration or have a comment, participation is highly encouraged. As I've run out of steam on this I've decided to move it into a blog post rather than a page.

  • 01: Cloud is a fad like computers, the Internet and social networking were before it.
  • 02: It's not all or nothing, it's pick and choose.
  • 03: It’s as secure as YOU make it
  • 04: There’s no point arguing semantics, argue features.
  • 05: You have at least one application today that’s a great candidate for cloud computing
  • 06: Cloud requires a migration strategy, not a fork-lift.
  • 07: Virtualization and automation are the building blocks of private cloud.
  • 08: Encrypt locally store globally.
  • 09: Open portability is key to public cloud.
  • 10: Elasticity means scale-up AND scale-down.
  • 11: Security should not be an afterthought.
  • 12: Multi-Tenancy is your friend.
  • 13: Silo’d organizations breed silo’d architectures.
  • 14: IT should support the business, not the other way around.
  • 15: Performance isn’t about highest/lowest it’s about application requirements.
  • 16: Cloud pushes IT from CapEx to OpEx, without financing hardware.
  • 17: Features only matter if you need them now or will need them later.
  • 18: Address organizational challenges before technical challenges.
  • 19: The way you do things today should not dictate the way you do things tomorrow.
  • 20: Latency operates independent of bandwidth, low-latency apps require low latency links.
  • 21: Build a 5-Year plan and incorporate staged migration to cloud architectures/services.
  • 22: Bad budget processes should not force bad IT decisions.
  • 23: If you do things the way you’ve always done them, you get the results you’ve always had.
  • 24: Integration and support are top considerations for private-cloud architectures.
  • 25: Cloud computing provides business agility.
  • 26: Getting applications out of the cloud is as important a consideration as getting them in.
  • 27: There are no ‘One-size-fits-all’ solutions in IT, cloud is no different.

SMT, Matrix and Vblock: Architectures for Private Cloud

Cloud computing environments provide enhanced scalability and flexibility to IT organizations.  Many options exist for building cloud strategies, public, private etc.  For many companies private cloud is an attractive option because it allows them to maintain full visibility and control of their IT systems.  Private clouds can also be further enhanced by merging private cloud systems with public cloud systems in a hybrid cloud.  This allows some systems to gain the economies of scale offered by public cloud while others are maintained internally.  Some great examples of hybrid strategies would be:

  • Utilizing private cloud for mission critical applications such as SAP while relying on public cloud for email systems, web hosting, etc.
  • Maintaining all systems internally during normal periods and relying on the cloud for peaks.  This is known as Cloud Bursting and is excellent for workloads that cycle throughout the day, week, month or year.
  • Utilizing private cloud for all systems and capacity while relying on cloud based Disaster Recovery (DR) solutions.

Many more options exist and any combination of options is possible.  If private cloud is part of the cloud strategy for a company there is a common set of building blocks required to design the computing environment.

[Diagram: the private cloud model, layered from consolidated hardware through virtualization, automation and monitoring, and a provisioning portal, with security spanning every layer]

In the diagram above we see that each component builds upon the one below it. Starting at the bottom we utilize consolidated hardware to minimize power, cooling and space requirements as well as the number of underlying managed components. At the second tier of the private cloud model we layer on virtualization to maximize utilization of the underlying hardware while providing logical separation for individual applications.

If we stop at this point we have what most of today’s data centers are using to some extent or moving to.  This is a virtualized data center.  Without the next two layers we do not have a cloud/utility computing model.  The next two layers provide the real operational flexibility and organizational benefits of a cloud model.

To move our virtualized data center to a cloud architecture we next layer on automation and monitoring. This layer provides the management and reporting functionality for the underlying architecture; it could include monitoring systems, troubleshooting tools, chargeback software, hardware provisioning components, etc. Next we add a provisioning portal to allow end users or IT staff to provision new applications, decommission systems no longer in use, and add/remove capacity from a single tool. Depending on the level of automation in place below, some things like capacity management may be handled without user/staff intervention (a minimal sketch of this hand-off follows).
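
To make the portal-to-automation hand-off more concrete, here is a minimal, hypothetical sketch. The class and method names are illustrative only; real private cloud suites expose far richer catalogs, approval workflows and chargeback hooks.

```python
from dataclasses import dataclass

@dataclass
class ProvisionRequest:
    requester: str
    app_name: str
    vcpus: int
    memory_gb: int
    storage_gb: int

class AutomationLayer:
    """Stands in for the automation/monitoring tier: in a real suite this would
    drive the virtualization and hardware layers and feed chargeback/monitoring."""
    def fulfill(self, req: ProvisionRequest) -> str:
        # ...allocate capacity from the virtualized resource pools here...
        return f"{req.app_name}-vm-01"

class ProvisioningPortal:
    """The self-service front end used by end users or IT staff."""
    def __init__(self, automation: AutomationLayer):
        self.automation = automation

    def submit(self, req: ProvisionRequest) -> str:
        # Policy/approval checks would sit here before the hand-off to automation.
        return self.automation.fulfill(req)

portal = ProvisioningPortal(AutomationLayer())
print(portal.submit(ProvisionRequest("jdoe", "crm-test", vcpus=2, memory_gb=8, storage_gb=100)))
```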

The last piece of the diagram above is security. While many private cloud discussions leave security out or minimize its importance, it is actually a key component of any cloud design. When moving to private cloud, customers are typically building a new compute environment or totally redesigning an existing one. This is the key time to design robust security in from end to end, because you're not tied to previous mistakes (we all make them) or legacy design. Security should be part of the initial discussion for each layer of the private cloud architecture and for the solution as a whole.

Private cloud systems can be built with many different tools from various vendors. Many of the software tools exist in both open source and licensed versions. Additionally, several vendors offer end-to-end stacks upon which to design and build a private cloud system. The remainder of this post will cover three of the leading private cloud offerings:

Scope: This post is an overview of three excellent solutions for private cloud. It is not a pro/con discussion or a feature comparison. I would personally position any of the three architectures for a given customer depending on customer requirements, existing environment, cloud strategy, business objectives and comfort level. As always, please feel free to leave comments, concerns or corrections using the comment form at the bottom of the post.

Secure Multi-Tenancy (SMT):

Vendor positioning:  ‘This includes the industry’s first end-to-end secure multi-tenancy solution that helps transform IT silos into shared infrastructure.’


SMT is a pairing of VMware vSphere, Cisco Nexus, UCS and MDS, and NetApp storage systems. SMT has been jointly validated and tested by the three companies, and a Cisco Validated Design (CVD) exists as a reference architecture. Additionally, a joint support network exists for customers building or using SMT solutions.

Unlike the other two systems SMT is a reference architecture a customer can build internally or along with a trusted partner.  This provides one of the two unique benefits of this solution.

Unique Benefits:

  • Because SMT is a reference architecture it can be built in stages married to existing refresh and budget cycles.  Existing equipment can be reutilized or phased out as needed.
  • SMT is designed to provide end-to-end security for multiple tenants (customers, departments, or applications.)

HP Matrix:

Vendor positioning:  ‘The industry’s first integrated infrastructure platform that enables you to reduce capital costs and energy consumption and more efficiently utilize the talent of your server administration teams for business innovation rather than operations and maintenance.’


Matrix is an integration of HP blades, HP storage, HP networking and HP provisioning/management software. HP has tested the interoperability of the proven components and software and integrated them into a single offering.

Unique benefits:

  • Of the three solutions Matrix is the only one that is a complete solution provided by a single vendor.
  • Matrix provides the greatest physical server scalability of any of the three solutions with architectural limits of thousands of servers.

Vblock:

Vendor positioning:  ‘The industry’s first completely integrated IT offering that combines best-in-class virtualization, networking, computing, storage, security, and management technologies with end-to-end vendor accountability.’


Vblocks are a combination of EMC software and storage, Cisco UCS, MDS and Nexus, and VMware virtualization. Vblocks are complete infrastructure packages sold in one of three sizes based on the number of virtual machines. Vblocks offer a thoroughly tested and jointly supported infrastructure with proven performance levels based on a maximum number of VMs.

Unique Benefits:

  • Vblocks offer a tightly integrated best-of-breed solution that is purchased as a single product.  This provides very predictable scalability costs when looked at from a C-level perspective (i.e. x dollars buys y scalability, when needs increase x dollars will be required for the next block.)
  • Vblock is supported by a unique partnership between Cisco, EMC and VMware as well as their ecosystem of channel partners. This provides robust front-end and back-end support for customers before, during and after install.

Summary:

Private cloud can provide a great deal of benefits when implemented properly, but like any major IT project the benefits are greatly reduced by mistakes and improper design.  Pre-designed and tested infrastructure solutions such as the ones above provide customers a proven platform on which they can build a private cloud.


Why You’re Ready to Create a Private Cloud

I'm catching up on my reading and ran into David Linthicum's 'Why you're not ready to create a private cloud' (http://www.infoworld.com/d/cloud-computing/why-youre-not-ready-create-private-cloud-458). It's a great article and points out a major issue with private-cloud adoption: internal expertise. The majority of data center teams don't have the internal expertise required to execute effectively on private-cloud architectures. This isn't a knock on these teams; it's a near impossibility to have and maintain that expertise. Remember back when VMware was gaining adoption: nobody had virtualization knowledge, so they learned it on the fly. As people became experts, many left the nest where they learned it in search of bigger, better worms. More importantly, because it was a learn-as-you-go process, the environments were inherently problematic and were typically redesigned several times to maximize performance and benefit.

Looking at the flip side of that coin, what is the value to the average enterprise or federal data center in retaining a private cloud architect?  If they’re good at their job they only do it once.  Yes there will be optimization and performance assessments to maintain it, but that’s not typically a full time job. The question becomes:  Because you don’t have the internal expertise to build a private cloud should you ignore the idea or concept?  I would answer a firm no.

The company I work for has the ability, reseller agreements, service offerings and expertise to execute on private clouds. We're capable of designing and deploying these solutions from the data center walls to the provisioning portals, with experts on hand who have experience in each aspect and enough overlap to tie it all together. To put our internal capabilities in perspective, one of my company's offerings is private cloud containers and ruggedized, deployable private cloud racks. These aren't throw-some-stuff-in-a-box solutions; they are custom designed containers outfitted with shock absorption, right-sized power/cooling, custom rack rails providing full equipment serviceability, and private cloud IT architectures built on several industry leading integrated platforms. That's a very unique home grown offering for a systems integrator (typically DC containers are the space of IBM, Sun, etc.). I accepted this position for these reasons, among others.

This is not an advertisement for my company but instead an example of why you're ready to build private cloud infrastructures. You should not expect to have the internal expertise to architect and build a private cloud infrastructure; you should utilize industry experts to assist with your transition. There are two major methods of utilizing experts to assess, design and deploy a private cloud: a capable reseller/solutions provider or a capable consultant/consulting firm. Both methods have pros and cons.

Reseller/Systems Integrator:

Utilizing a reseller or systems integrator has some major advantages in terms of what is provided at no cost and having a one-stop shop for design, purchase and deployment. Typically when working with a reseller much of the upfront consulting and design is provided free, because it is considered pre-sales and the hardware sale is where they make their money. With complex systems and architectural designs such as Virtual Desktop Infrastructure (VDI) and cloud architectures, don't expect everything to be cost free, but good portions will be. These types of deployments require in-depth assessment and planning sessions, some of which will be paid engagements but are typically low overall cost and vital to success. For example, you won't deploy VDI successfully without first understanding your applications in depth, and application assessments are extended technical engagements.

Another advantage of using a reseller is that the hardware design, purchase and installation can all be provided by the same company. This simplifies the overall process and provides the ever so important 'single neck to choke.' If something isn't right before, during or after the deployment, a good reseller will be able to help you coordinate actions to repair the issue without you having to call 10 separate vendors.

Lastly, a reseller of sufficient size to handle private cloud design and migration will have an extensive pool of technical resources to draw upon during the process, both internally and through vendor relationships, which means the team you're working with has back-end support in several disciplines and product areas.

There are also some potential downsides to using a reseller that you'll want to fully understand. First, a reseller typically partners with a select group of vendors they've chosen, which means the architectural design will tend to revolve around those vendors. This is not necessarily a bad thing as long as:

  • The reseller takes an ‘independent view’ and has multiple vendors to choose from for the solutions they provide.  This allows them to pick the right fit for your environment.
  • You ensure the reseller explains to your satisfaction the reasoning behind the hardware choices they’ve positioned and the advantages of that hardware.

Obviously a reseller is in the business of making a sale, but a good reseller will leverage their industry knowledge and vendor relationships to build the right solution for the customer. Another note is even if your reseller doesn’t partner with a specific vendor, they should be able to make appropriate arrangements to include anything you choose in your design.

Consultant/Consulting Firm:

Utilizing a consultant is another good option for designing and deploying a private cloud. A good consultant can help assess the existing environment/organization and begin to map out an architecture and road map to build the private cloud. One advantage of a consultant is the vendor independence you'll have with an independent consultant or firm. Once they've helped you map out the architecture and roadmap, they can typically work with you during the purchase process through vendors or resellers.

One potential drawback of independent consultants is identifying a reliable individual or team with the proper capabilities to outline a cloud strategy. The best bet to minimize risk is to use references from colleagues who have made the transition, trusted vendors, etc. Excellent cloud architecture consultants exist; you'll just need to find the right fit.

Hybrid Strategy:

These two options are not mutually exclusive. In many cases I'd recommend working with a trusted reseller and utilizing an independent consultant as well. There are benefits to this approach: the consultant can help 'keep the reseller honest' and should also be able to provide alternative opinions and design considerations.

Summary:

Migrating to cloud is not an overnight process and is most likely not something that can be planned, designed and implemented using only internal resources. When making the decision to move to cloud, utilize the external resources available to you. As one last word of caution: don't even bother looking at cloud architectures until you're ready to align your organization to the flexibility provided by cloud; a cloud architecture is of no value to a silo-driven organization (see my post 'The Organizational Challenge' for more detail: http://www.definethecloud.net/?p=122).


Why Cloud is as ‘Green’ As It Gets

I stumbled across a document from Greenpeace citing cloud for additional power draws and the need for more renewable energy (http://www.greenpeace.org/international/en/publications/reports/make-it-green-cloud-computing/). This is one of a series I've been noticing from the organization blaming IT for its effect on the environment and chastising companies for new data centers. These articles all strike a chord with me because they show a complete lack of understanding of what cloud is, does and will do on the whole, especially where it concerns energy consumption and 'green' computing.

Greenpeace seems to be looking at cloud as additional hardware and data centers being built to serve more and more data. While cloud is driving new equipment, new data centers and larger computing infrastructures, it is doing so to consolidate computing overall. Speaking of public cloud specifically, there is nothing more green than moving to a fully cloud-based infrastructure. It's not about a company adding new services; it's about moving those services from underutilized internal systems onto highly optimized and highly utilized shared public infrastructure.

Another point they seem to be missing is the speed at which technology moves. A state-of-the-art data center built 5-6 years ago would be lucky to reach a 1.5:1 Power Usage Effectiveness (PUE), whereas today's state-of-the-art data centers can get to 1.2:1 or below. This means that for every kW of processing, a new data center can potentially save 0.3 kW or more of overhead compared to one built 5-6 years ago (a quick calculation follows). Whether that's renewable energy or not is irrelevant; it's a good thing.
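
The 0.3 kW figure is just PUE arithmetic (overhead per kW of IT load is PUE minus one); a quick sketch:

```python
def overhead_kw_per_it_kw(pue: float) -> float:
    """kW of facility overhead (cooling, power distribution, etc.) per kW of IT load."""
    return pue - 1.0

older, newer = 1.5, 1.2
print(f"Overhead saved per IT kW: {overhead_kw_per_it_kw(older) - overhead_kw_per_it_kw(newer):.1f} kW")
```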

The most efficient privately owned data centers moving forward will be ones built as private-cloud infrastructures that can utilize resources on demand, scale-up/scale-down instantly and automatically shift workloads during non-peak times to power off unneeded equipment.  Even the best of these won’t come close to the potential efficiency of public cloud offerings which can leverage the same advantages and gain exponential benefits by spreading them across hundreds of global customers maintaining high utilization rates around the clock and calendar year.

Greenpeace lashing out at cloud and focusing on pushes for renewable energy is naive and short-sighted. Several other factors go into thinking green with the data center. Power and cooling are definitely key, but what about utilization? Turning a server off during off-peak times is great for saving power, but it still means the components of the computer had to be mined, shipped, assembled, packaged and delivered to me in order to sit powered off a third of the day when I don't need the cycles. That hardware will still be refreshed the same way, at which point some of the components may be recycled and the rest will become non-biodegradable and sometimes harmful waste.

Large data centers housing public clouds have the promise of overall reduced power and cooling with maximum utilization.  You have to look at the whole picture to really go green.

Greenpeace: while you're out there casting stones at big data centers, how about publishing some of your own numbers? Let's see the power, cooling and utilization numbers for your computing and data centers: actual numbers, not what you offset by sending a check to Al Gore's bank account. While you're at it, throw in the costs and damage created by your print advertising (paper, ink, power), etc. Give us a chance to see how green you are.


Building a Hybrid Cloud

At a recent data center architect summit I attended, cloud computing was a key focus. Of the concepts that were discussed, one recurring theme was hybrid clouds. Conceptually a hybrid cloud is a mix of any two cloud types, typically thought of as a mix of private cloud and public cloud services. For more information on the cloud types see my previous post on the subject (http://www.definethecloud.net/?p=109). There are several great use cases for this type of architecture; the two that resonate most with me are:

Cloud Bursting: 

Not to be confused with the psychokinesis exercise from "The Men Who Stare at Goats," cloud bursting is the ability to utilize public cloud resources for application burst requirements during peak periods. This allows a company to maintain performance during expected or unexpected peaks without maintaining additional hardware: simply said, on-demand capacity. Companies with varying workloads can maintain core processing in house and burst into the cloud for peaks (a minimal sketch of the decision logic follows).
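
Here is a minimal, hypothetical sketch of the bursting decision; the capacity numbers and the "provision in public cloud" step are placeholders for whatever automation your private and public clouds actually expose.

```python
# Hypothetical cloud bursting decision: keep work in house while there is
# headroom, send the overflow to public cloud only for the peak.

PRIVATE_CAPACITY = 1000   # units of work the in-house cloud can absorb

def place_workload(current_private_load: int, incoming: int) -> str:
    headroom = PRIVATE_CAPACITY - current_private_load
    if incoming <= headroom:
        return "private"
    # Overflow beyond in-house headroom is provisioned in the public cloud for
    # the duration of the peak, then released (you pay only for the burst).
    return "burst-to-public"

print(place_workload(current_private_load=800, incoming=150))  # private
print(place_workload(current_private_load=800, incoming=400))  # burst-to-public
```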

Disaster Recovery / Business Continuity:

Business continuity is a concern for customers of all shapes and sizes but can be extremely costly to implement well. For companies that don't have the budget of a major oil company, bank, etc., maintaining a DR site is typically out of the question. Lower cost solutions and alternatives exist, but the further down the spectrum you move, the less data/capability you'll recover and the longer it will take to do so. Enter cloud-based DR/business continuity services: rather than maintaining your own DR capabilities, you contract the failover out to a company that maintains multi-tenant infrastructure for that purpose and specializes in getting your business back online quickly.

Overall I’m an advocate for properly designed hybrid clouds as they provide the ability to utilize cloud resources while still maintaining control of the services/data you don’t want in the cloud.  Even more appealing from my perspective is the ability to use private and hybrid-clouds as a migration strategy for a shift to fully public cloud based IT infrastructure.  If you begin building your applications for in-house cloud infrastructures you’ll be able to migrate them more easily to public clouds.  There are also tools available to use your private cloud exactly as some major public cloud providers do to make that transition even easier.

We also thoroughly covered barriers to adoption for hybrid cloud architectures.  Most of the considerations were the usual concerns:

  • Compliance
  • Security
  • Performance
  • Standardization
  • Service Level Agreements (SLA)

There were two others discussed that I see as the key challenges: Organizational and cloud lock-in.

Organizational:

In my opinion organizational challenges are typically the most difficult to overcome when adopting new technology. If the organization is on board and properly aligned to the shift, it will find solutions for the technical challenges. Cloud architectures of all types require a change in organizational structure. Companies that attempt to move to a cloud architecture without first considering the organizational structure will at best have a painful transition and at worst fail and pull back to siloed data centers. I have a post covering some of the organizational challenges in more detail (http://www.definethecloud.net/?p=122).

Cloud Lock-In:

Even more interesting was the concept of not just moving applications and services into the cloud, but also being able to move them back out. This is a very interesting concern because it means that cloud computing has progressed through the acceptance stages: customers and architects have moved past whether migration to cloud will happen and how applications will be migrated, on to 'how do I get them back if I want them?' There are several times when this may become important:

  • Service does not perform as expected
  • Cloud provider runs into business problems (potential for bankruptcy, etc.)
  • Your application outgrows what your provider can deliver
  • Compliance or regulations change
  • etc.

In order for the end-user of cloud services to be comfortable migrating applications into the cloud they need to be confident that they can get them back if/when they want them.  Cloud service providers who make their services portable to other vendor offerings will gain customers more quickly and most likely maintain them longer.

Summary:

Cloud computing still has several concerns but none of them are road-blocks.  A sound strategy with a focus on analyzing and planning for problems will help ensure a successful migration.  The one major thing I gained from the discussion this week was that cloud has moved from an argument of should we/shouldn’t we to how do we make it happen and ensure it’s a smooth transition.


Building a Private Cloud

Private clouds are currently one of the most popular concepts in cloud computing.  They promise the flexibility of cloud infrastructures without sacrificing the control of owning and maintaining your own data center.  For a definition of cloud architectures see my previous blog on Cloud Types  (http://www.definethecloud.net/?p=109.)

Private clouds are architectures owned by an individual company, typically for internal use. In order to be considered a true cloud architecture, a private cloud must layer automation and orchestration over a robust, scalable architecture. The intent of private clouds is an infrastructure that reacts fluidly to business changes by scaling up and scaling down as applications and requirements change. Typically consolidation and virtualization are the foundation of these architectures, with advanced management, monitoring and automation systems layered on top. In some cases this can be taken a step further by loading cloud management software suites on the underlying infrastructure to provide an internal self-service Software as a Service (SaaS) or Platform as a Service (PaaS) environment.

Private cloud architectures provide the additional benefit of being an excellent way to test a company's ability to migrate to a public cloud architecture. Additionally, if designed correctly, private clouds act as a migration step to public clouds by moving applications onto cloud-based platforms without exporting them to a cloud service host. Private clouds can also be used in conjunction with public clouds in order to leverage public cloud resources for extra capacity, failover or disaster recovery purposes. This use is known as a hybrid cloud.

Private cloud architectures can be built in a roll-your-own fashion, selecting best-of-breed hardware, software and services to build the appropriate architecture. This can maximize the reuse of existing equipment while providing a custom tailored solution to achieve specific goals. The drawback with roll-your-own solutions is that they require extensive in-house knowledge in order to architect the solution properly.

A more common practice for migrating to private cloud is to use packaged solutions offered by the major IT vendors; companies like IBM, Sun, Cisco and HP have announced cloud or cloud-like architecture solutions and initiatives. These provide a more tightly coupled solution and, in some cases, a single point of contact and expertise for the complete solution. These types of solutions can expedite your migration to the private cloud.

When selecting the hardware and software for private cloud infrastructures, ensure you do your homework. Work with a trusted integrator or reseller with expertise in the area, gather multiple vendor proposals and read the fine print. These solutions are not all created equal: some of the offerings are no more than vaporware, and a good number are just repackagings of old junk under a shiny new part number. Some will support a staged migration and others will require rip-and-replace, or at least a new build-out.

There are several key factors I would focus on when selecting a solution:

Compatibility and Support:

Tested compatibility and simplified support are key factors that should be considered when choosing a solution.  If you use products from multiple vendors that don’t work together you’ll need to tie the support pieces together in-house and may need to open and maintain several support tickets when things go awry.  Additionally if compatibility hasn’t been tested or support isn’t in place for a specific configuration you may be up a creek without a paddle when something comes up.

Flexibility vs. Guaranteed Performance:

Some of the available solutions are very strict on hardware types and quantities but in return provide performance guarantees that have been thoroughly tested. This is a trade off that must be considered.

Hardware subcomponents of the solution:

Building a private cloud is a large commitment to both architectural and organizational changes.  Real Return On Investment (ROI) won’t be seen without both.  When making that kind of investment you don’t want to end up with a subpar component of your infrastructure (software or hardware) because your vendor tried to bundle their best of breed X and best of breed Y with their so so Z.  Getting everything under one corporate logo has its pros and cons.

Hardware Virtualization and Abstraction:

A great statement I've heard about cloud computing is that when defining it, if you start talking about hardware you're already wrong (I don't remember the source, so if you know it please comment). This is because cloud is more about process and people than equipment. When choosing hardware and software for private cloud, keep this in mind: you don't want to end up with a private cloud that can't flex because your software and processes are tied to the architecture or equipment underneath.

Summary:

Private cloud architectures provide a fantastic set of tools to regain control of the data center and turn it back into a competitive advantage rather than a hole to throw money into.  Many options and technologies exist to accelerate your journey to private cloud but they must be carefully assessed.  If you don’t have the in-house expertise but are serious about cloud there are lots of consultant and integrator options out there to help walk you through the process.
