VDI, the Next Generation or the Final Frontier?

After sitting through a virtualization sales pitch focused on Virtual Desktop Infrastructure (VDI) this afternoon, I had several thoughts on the topic that seemed blog worthy.

VDI has been a constant buzzword for a few years now, riding the coattails of server virtualization.  For the majority of those years you can search back and find predictions from the likes of Gartner touting ‘This is the year for VDI’ or making similar statements, typically with projected growth rates that never materialize.  What you won’t see is those same analyst organizations reaching back the year after and answering for why they over-hyped it, or were blatantly incorrect. (Great idea for a yearly blog here: analyzing the previous year’s failed predictions.)

The reasons they’ve been incorrect vary over the years, starting with the technical inadequacy of the infrastructures and a lack of real understanding as an industry.  When VDI first hit the forefront many of us (myself included) made the assumption that desktops could be virtualized the same as servers (Windows is Windows, right?)  What we neglected to account for is the plethora of varying user applications, the difficulty of video and voice, and other factors such as boot storms, which are unique to, or more amplified within, VDI environments than their server counterparts.  From there, VDI rollout horror stories and memories of failed Proofs of Concept slowed adoption and interest for a short period.

Now we’re at a point where the technology can overcome the challenges and the experts are battle hardened with knowledge of successes and failures in various environments; yet still adoption is slow.  Users are bringing new devices into the workplace and expecting them to interface with enterprise services; yet still adoption is slow.  We supposedly have a more demanding influx of younger-generation employees who demand remote access from their chosen devices; yet still adoption is slow.  This doesn’t mean that VDI isn’t being adopted, nor that the market share numbers aren’t increasing across the board; it’s just slow.

The reason for this is that our thinking and capabilities for service delivery have surpassed the need for VDI in many environments.  VDI wasn’t an end-goal but instead an improvement over individually managed, monitored, and secured local end-user OS environments.  The end-goal isn’t removing the OS’s tie to the hardware on the end-point (which is what VDI does) but instead removing the application’s tie to the OS; or more simply put: removing any local requirements for access to the services.  Starting to sound like cloud?

Cloud is the reason enterprise IT hasn’t been diving into VDI head first.  The movement to cloud services has shown that, for many, we may have passed the point where VDI could show a true Return On Investment (ROI) before being obsoleted.  Cloud is about delivering the service on-demand to any web-connected end-point regardless of platform (OS).  If you can push the service to my iOS, Android, Windows, Linux, etc. device without requiring a particular OS, then what’s the need for VDI?

To use a real-world example: I am a Microsoft zealot; I use Windows 7, Bing for search, and only IE for browsing on my work and personal computers (call me retro.)  I also own an iPad, mainly due to the novelty and the fact that I got addicted to ‘Flight Control’ on a friend’s iPad at the release of the original.  I occasionally use the iPad for what I’d call ‘productivity work’ related to my primary role or side projects.  Using my iPad I do things like access corporate email for the company I work for and my own, review files, access Salesforce and Salesforce Chatter, and even perform some remote equipment demos, and my files are seamlessly synced between my various other computers.  I do all of this without a Windows 7 virtual desktop running on my iPad; it’s all done through apps connected to these services directly.  In fact the only reason I have VDI client applications on my iPad is to demo VDI, not to actually work.

Now an iPad is not a perfect example; I’d never use it for developing content (slides, reports, spreadsheets, etc.) but I do use it for consuming content, email, etc.  To develop I turn to a laptop with a full keyboard, screen, and monitor outputs.  This laptop may be a case for VDI, but in reality why?  If the services I use are cloud based, public or private, and the data I utilize is as well, then the OS is irrelevant again.  With office applications moving to the cloud (Microsoft Office 365, Google Docs, etc.) along with many others, and many services and applications already there, what is the need for a VDI infrastructure?

VDI, like server virtualization, is really a band-aid for an outdated application deployment process which ties local applications to a local OS and hardware.  Virtualizing the hardware doesn’t change that model but can provide benefits such as:

  • Centralized control
  • Added security
  • More efficient backup
  • Support staff reduction/repurposing
  • Broader device support
  • Reduced administrative overhead
  • etc.

Once the wound of our current application deployment model has fully healed, the band-aid comes off and we have service delivery from cloud computing environments free of any OS or hardware ties.

So friends don’t let friends virtualize desktops, right?

Not necessarily.  As shown above, VDI can have significant advantages over standard desktop deployment.  Those advantages can drive business flexibility and reduce costs.  The difficult questions become:

  • Whether your organization can utilize a pure service delivery model based on security needs, organizational readiness, application/service readiness, etc.
  • Whether the VDI gains will be seen before the infrastructure can be replaced with a fully service based model

Many organizations will still see benefits from deploying VDI today because the ROI of VDI will occur more quickly than the ability to deliver all business apps as a service.  Additionally, VDI is an excellent way to begin getting your feet wet with the concepts of supporting any device with organizational controls and delivering services remotely.  Coupling VDI with things like thin apps will put you one step closer while providing additional flexibility to your IT environment.

When assessing a VDI project you’ll want to take a close look at the time it will take your organization to hit ROI with the deployment and weigh that against the time it would take to move to a pure service delivery model (if your organization is capable of such).  VDI is a fantastic tool in the data center tool bag, but like all others it’s not the right tool for every job.  VDI is definitely the Next Generation, but it is not The Final Frontier.
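
To make that assessment more concrete, here is a minimal, hypothetical break-even sketch in Python.  Every figure in it (upfront cost, monthly savings, migration timeline) is a placeholder I made up for illustration; plug in your own estimates rather than treating these as real numbers.

```python
# Hypothetical back-of-the-envelope comparison: months until a VDI rollout
# pays for itself versus the months needed to reach pure service delivery.
# All figures are illustrative placeholders -- substitute your own estimates.

vdi_upfront_cost = 500_000        # hardware, licensing, deployment services
vdi_monthly_savings = 20_000      # reduced desktop support, power, refresh costs

months_to_vdi_roi = vdi_upfront_cost / vdi_monthly_savings

# Estimated time before all business apps could be consumed as services
months_to_pure_service_delivery = 36

print(f"VDI breaks even after ~{months_to_vdi_roi:.0f} months")
if months_to_vdi_roi < months_to_pure_service_delivery:
    print("VDI likely pays off before a pure service model is viable.")
else:
    print("The service-delivery transition may overtake VDI before it shows ROI.")
```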

Additional fun:

Here are some sales statements that are commonly used when pitching VDI, all of which I consider to be total hogwash.  Try out or modify a few of my one-line answers next time your vendor is there telling you about the wonderful world of VDI and why you need it now.

Vendor: ‘<Insert analyst here (Gartner, etc.)> says that 2011 is the year for VDI.’  Alternatively ‘<Insert analyst here (Gartner, etc.)> predicts VDI to grow X amount this year.’

My answer: ‘That’s quite interesting, let’s adjourn for now and reconvene when you’ve got data on <Insert analyst here (Gartner, etc.)>’s VDI predictions for the previous 5 years.’

Vendor: ‘The next generation of workers coming from college demand to use the devices and services they are used to, to do their job.’

My answer: ‘Excellent, they’ll enjoy working somewhere that allows that; we have corporate policies and rules to protect our data and network.’  This won’t work in every case, as Mike Stanley (@mikestanley) pointed out to me; universities, for example, have student IT consumers who are the paying customers, and this would be much more difficult in such cases.

Vendor: ‘People want a Bring Your Own (BYO) device model.’

My Answer: ‘If I bring my own device and the fact that I want to matters, what makes you think I’ll want your desktop?  Just give me the application or service.’

Server/Desktop Virtualization–A Best of Breed Band-Aid

Virtualization is a buzzword that has moved beyond hype into mainstream use and enterprise deployment.  A few years back vendors were ‘virtualization-washing’ their products and services the way many ‘cloud-wash’ them today.  Now a good majority of enterprises are well into their server virtualization efforts and moving into Virtual Desktop Infrastructures (VDI) and cloud deployments.  This is not by accident; hardware virtualization comes with a myriad of advantages such as resource optimization, power and cooling savings, flexibility, rapid deployment, etc.  That being said, we dove into server/desktop virtualization with the same blinders on we’ve worn as an industry since we broke away from big iron.  We effectively fix point problems while ignoring the big picture, and create new problems in the process:

  • Fix cost/support of the mainframe with commodity servers, end up with scalability and management issues.
  • Consolidate servers and storage to combat scalability, end up with density issues and re-encounter scalability problems with growth.
  • Move to blades and end up with ‘Mini-Racks.’ (See Sean McGee’s post: http://www.mseanmcgee.com/2010/05/the-mini-rack-approach-to-blade-server-design/)
  • Virtualize and end up with management complexity, sprawl, and other issues.

The underlying issue is the way in which we design our applications.  When we moved to commodity servers we built an application model with a foundation of one application, one operating system (OS), one server.  We’ve maintained that model ever since.  Server/desktop virtualization provides benefits but does not change this model; it just virtualizes the underlying server and places more silos on a single piece of hardware to increase utilization.  Our applications and the services they deliver are locked into this model and suffer from it when we look at scale, flexibility, and business continuance.

This is not a sustainable model, or at best not the most efficient model for service delivery.  Don’t take my word for it; jump on Bing and do a search for recent VMware acquisitions/partnerships.  The dominant giant in virtualization is acquiring or partnering with companies poised to make it the dominant giant in PaaS and SaaS.  Cloud computing as a whole offers the opportunity to rethink service delivery, or, possibly more importantly, brings the issue of service delivery and IT costing to the front of our minds.

Moving applications and services to robust, highly available, flexible architectures is the first step in transforming IT into a department that enables the business.  The second step is removing the application/OS silo and building services that can scale up and down independent of the underlying OS stack.  When you talk about zero-downtime business continuance, massively scalable applications, global accessibility, and other issues, the current model is an anchor.

That being said, transforming these services is no small task.  Redesigning applications to new architectures can be monumental.  Redesigning organizations/processes and retraining people can be even more difficult.  The technical considerations for designing global, highly available services touch on every aspect of application and architecture design: storage, network, web access, processing, etc.  Fortunately the tools are either available or rapidly emerging.

Any organization looking to make significant IT purchases or changes should be considering all of the options and looking at the big picture as much as possible.  The technology is available to transform the way we do business.  It may not be right for every organization or application, but it’s not an all-or-nothing proposition.  There’s no fault in virtualizing servers and desktops today, but the end goal on the road map should be efficient service delivery optimized to the way you do business.

For more of my thoughts on this see my post on www.networkcomputing.com: http://www.networkcomputing.com/private-cloud/230600012.

The Power of Innovative Datacenter Stacks

With the industry drive towards cloud computing models there has been a lot of talk and announcements around ‘converged infrastructure’ and ‘integrated stack’ solutions. An integrated stack is a pre-packaged offering typically containing some amount of network, storage, and server infrastructure bundled with some level of virtualization, automation, and orchestration software. The purpose of these stacks is to simplify the infrastructure purchasing requirements and accelerate the migration to virtualized or cloud computing models by reducing risk and time to deployment. This simplification and acceleration is accomplished through heavy testing and certification by the vendor or vendors in order to ensure various levels of compatibility, stability, and performance.

In broad strokes there are two types of integrated stack solutions:

Single Vendor – All stack components are developed, manufactured and bundled by a single vendor.

Multi-Vendor – Products from two or more parent vendors are bundled together to create the stack.

Of these two approaches the true value and power typically come from the multi-vendor approach, or Innovative Stack, as long as some key processes are handled correctly, specifically infrastructure pre-integration/delivery and support. With an innovative stack the certification and integration testing is done by the joint vendors, allowing more time to be spent tailoring the solution to specific needs rather than ensuring component compatibility and design validity. The innovative stack provides a cookie-cutter approach at the infrastructure level.

The reason the innovative stack holds sway is the ability to package ‘best-of-breed’ technologies into a holistic top-tier package rather than relying solely on products and software from a single vendor, some of which may fall lower in the rankings. The large data center hardware vendors all have several disparate product lines, each of which is in various stages of advancement and adoption. While one or two of these product lines may be best-of-breed or close, you’d be hard-pressed to argue that any one vendor can provide the best storage, server, and network hardware along with automation and orchestration software.

A prime example of this would be VMware. It’s difficult to argue that VMware is not the best-of-breed for server virtualization; with a robust feature set, an outstanding history, and approximately 90% market share they are typically the obvious choice. That being said, VMware does not sell hardware, which means if you’re virtualizing servers and want best-of-breed you’ll need two vendors right out of the gate. VMware also has an excellent desktop virtualization platform, but in that arena Citrix could easily be argued best-of-breed, and both have pros/cons depending on the specific technical/business requirements. For a desktop virtualization architecture it’s not uncommon to have three best-of-breed vendors before even discussing storage or network hardware (Vendor X servers, the VMware hypervisor, and Citrix desktop virtualization.)

With the innovative stack approach a collaborative multi-vendor team can analyze, assess, bundle, test, and certify an integration of best-of-breed hardware and software to provide the highest levels of performance, feature set, and stability. Once the architectures are defined, if an appropriate support and delivery model is put in place jointly by the vendors, a best-of-breed innovative stack can accelerate your successful adoption of converged infrastructure and cloud-model services. An excellent example of this type of multi-vendor certified Innovative Stack is the FlexPod for VMware by NetApp, Cisco, and VMware, which is backed by a joint support model and delivery packaging through certified expert channel partners.

To participate in a live WebCast on the subject and learn more please register here: http://www.definethecloud.net/innovative-versus-integration-cloud-stacks.

Where Are You?

Joe wrote an excellent guest blog on my website called To Blade Or Not To Blade and offered me the same opportunity. Being a huge fan of Joe’s, I’m honored. One of my favorite blog posts is his Data Center 101: Server Virtualization, in which Joe explained the benefits of server virtualization in the data center. I feel this post is appropriate because Joe showed us that virtualization is “supposed” to make life easier for Customers.  However, a lot of vendors have yet to come up with management tools that facilitate that concept.

It’s a known fact that I’m a huge Underdog fan. However, what people don’t know is that Scooby-Doo is my second favorite cartoon dog. As a kid I always stayed current with the latest Underdog and Scooby-Doo after-school episodes. This probably explains why my mother was always upset with me for not doing my homework first. I always got a kick out of the fact that no matter how many times Mystery Inc. would split up to find the ghost, it was always Scooby-Doo and Shaggy who managed (accidentally) to come face-to-face with the ghost while looking for food. Customers face the same issues that Scooby and Shaggy faced in ghost hunting. If a Customer was in VMware vCenter doing administrative tasks, there was no way to effectively manage HBA settings (the ghost) without hunting around or opening a different management interface. Emulex has solved that issue with the new OneCommand Manager Plug-in for vCenter (OCM-VCp).

Being a former Systems Administrator in a previous life, I understand the frustration of opening multiple management interfaces to do a task. Emulex has already simplified infrastructure management with OneCommand. In OneCommand, Customers already have the capability to manage HBAs across all protocols and generations, see/change CEE settings, and do batch firmware/driver parameter updates (amongst a myriad of other capabilities).

Not convinced? No problem. Let me introduce you to the OCM-VCp interface. Take a look: you now have the opportunity to centrally discover, monitor, and manage HBAs across the infrastructure from within vCenter, including vPort-to-VM associations. How cool is that? Very.

You get all the functions of the OneCommand HBA management application. No more looking for the elusive ghost called HBA settings. No more going back and forth between management interfaces, which only increases the probability of messing up the settings. Out of all the cool capabilities, here are the top 4 functions that I feel stand out for vCenter:

  • Diagnostic tools tab. This allows you to run PCI/Internal/External loopback and POST tests on a specific port on a specific VM.
  • Driver Parameters Tab. This tab is important to SAN/Network Administrators; this is where you can update/change network parameters. The cool thing is that you can make changes temporary or save them to a file for batch infrastructure updates/changes.
  • Maintenance Tab. Allows you to update firmware (single host or batch file) without rebooting the host.
  • CEE settings tab. Very important for Datacenter Bridging Capability Exchange Protocol (DCBX). 

In my opinion this couldn’t have come soon enough. As more organizations look to do more with less (the virtualization principle), OCM-VCp will be the cornerstone of easing infrastructure management within VMware vCenter.  There is no learning curve because the plug-in has the same look and feel as the standalone management interface; in other words, it’s very intuitive. So if you or your Customer(s) are expanding your adoption of virtualization, take a serious look at this plug-in, because it’s going to make your life so much easier.

http://www.niketown588.com/

Post defining VN-Link

For the Cisco fans, or those curious about Cisco’s VN-Link, see my post on my colleague’s Unified Computing Blog: http://bit.ly/dqIIQK.

Virtualization

While not a new concept, virtualization has hit the mainstream over the last few years and become an uncontrollable buzzword driven by VMware and other server virtualization platforms.  Virtualization has been around in many forms for much longer than some realize; things like Logical Partitions (LPARs) on IBM mainframes have been around since the 80’s and have been extended to other non-mainframe platforms.  Networks have been virtualized by creating VLANs for years.  The virtualization term now gets used for all sorts of things in the data center.  Like it or love it, the term doesn’t look like it’s going away anytime soon.

Virtualization in all of its forms is a pillar of cloud computing, especially in private/internal cloud architectures.  To define it loosely for the purpose of this discussion, let’s use ‘The ability to divide a single hardware device or infrastructure into separate logical components.’

Virtualization is key to building cloud based architectures because it allows greater flexibility and utilization of the underlying equipment.  Rather than requiring separate physical equipment for each ‘tenant,’ multiple tenants can be separated logically on a single underlying infrastructure.  This concept is also known as ‘multi-tenancy.’  Depending on the infrastructure being designed, a tenant can be an individual application, an internal team/department, or an external customer.  There are three areas to focus on when discussing a migration to cloud computing: servers, network, and storage.

Server Virtualization:

Within the x86 server platform (typically the Windows/Linux environment), VMware is the current server virtualization leader.  Many competitors exist, such as Microsoft’s Hyper-V and Xen for Linux, and they are continually gaining market share.  The most common form of server virtualization allows a single physical server to be divided into logical subsets by creating virtual hardware; this virtual hardware can then have an operating system and application suite installed and will operate as if it were an independent server.  Server virtualization comes in two major flavors: bare-metal virtualization and OS-based virtualization.

Bare-metal virtualization means that a lightweight, virtualization-capable operating system is installed directly on the server hardware and provides the functionality to create virtual servers.  OS-based virtualization operates as an application or service within an OS, such as Microsoft Windows, that provides the ability to create virtual servers.  While both methods are commonly used, bare-metal virtualization is typically preferred for production use due to the reduced overhead involved.
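
As a small illustration of the bare-metal model, the sketch below uses the open-source libvirt Python bindings to ask a KVM hypervisor what physical capacity it exposes and which virtual servers have been carved out of it.  This is a read-only example that assumes libvirt-python is installed and a local qemu:///system hypervisor is reachable; it is not tied to any particular vendor’s platform.

```python
# Minimal read-only sketch: ask a bare-metal (KVM/libvirt) hypervisor what
# physical capacity it has and which virtual servers are running on top of it.
# Assumes the libvirt Python bindings and a local qemu:///system hypervisor.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")

# Host (physical) resources exposed by the hypervisor
model, mem_mb, cpus, _mhz, _nodes, _sockets, _cores, _threads = conn.getInfo()
print(f"Host: {cpus} x {model} CPUs, {mem_mb} MB RAM")

# Virtual servers defined on this host and the resources allocated to each
for dom in conn.listAllDomains():
    _state, max_mem_kb, _mem_kb, vcpus, _cpu_time = dom.info()
    print(f"  Guest {dom.name()}: {vcpus} vCPU, {max_mem_kb // 1024} MB")

conn.close()
```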

Server virtualization provides many benefits, but the key benefits for cloud environments are increased server utilization and operational flexibility.  Increased utilization means that less hardware is required to perform the same computing tasks, which reduces overall cost.  The increased flexibility of virtual environments is key to cloud architectures.  When a new application needs to be brought online it can be done without procuring new hardware, and, equally important, when an application is decommissioned the physical resources are automatically available for use without server repurposing.  Physical servers can be added seamlessly when capacity requirements increase.
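
As a rough illustration of the utilization benefit, the arithmetic below estimates how many virtualization hosts could absorb a fleet of lightly loaded physical servers.  The 10% average utilization and 70% target are hypothetical planning numbers, not measurements from any real environment.

```python
# Illustrative consolidation math: how many virtualization hosts are needed
# to absorb a fleet of lightly loaded physical servers. Numbers are examples.
import math

physical_servers = 100
avg_utilization = 0.10            # typical lightly loaded dedicated server
target_host_utilization = 0.70    # leave headroom for spikes and failover

# Total work expressed in "fully busy server" equivalents
total_load = physical_servers * avg_utilization

# Hosts required to carry that load at the target utilization (round up)
hosts_needed = math.ceil(total_load / target_host_utilization)

print(f"{physical_servers} servers -> ~{hosts_needed} virtualization hosts")
print(f"Consolidation ratio of roughly {physical_servers / hosts_needed:.0f}:1")
```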

Network Virtualization:

Network virtualization comes in many forms.  VLANs, LSANs, and VSANs allow a single physical LAN or SAN architecture to be carved up into separate networks without dependence on the physical connection.  Virtual Routing and Forwarding (VRF) allows separate routing tables to be used on a single piece of hardware to support different routes for different purposes.  Additionally, technologies exist which allow single network hardware components to be virtualized in a similar fashion to what VMware does on servers.  All of these tools can be used together to provide the proper underlying architecture for cloud computing.  The benefits of network virtualization are very similar to those of server virtualization: increased utilization and flexibility.
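
To show the isolation idea in the simplest possible terms, here is a toy sketch of tenants mapped onto VLANs sharing one physical network.  The tenant names and VLAN IDs are made up for illustration; a real deployment would of course configure this on the switches themselves rather than in Python.

```python
# Toy model of multi-tenancy on a shared physical network: each tenant is
# assigned its own VLAN, and only endpoints in the same VLAN share a
# logical LAN.  Tenant names and VLAN IDs are purely illustrative.

tenant_vlans = {"engineering": 10, "finance": 20, "customer_a": 30}

def same_logical_lan(tenant_a: str, tenant_b: str) -> bool:
    """Two tenants share a broadcast domain only if their VLANs match."""
    return tenant_vlans[tenant_a] == tenant_vlans[tenant_b]

print(same_logical_lan("engineering", "engineering"))  # True:  same VLAN
print(same_logical_lan("engineering", "finance"))      # False: isolated even
                                                       # though hardware is shared
```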

Storage Virtualization:

Storage virtualization encompasses a broad range of topics and features.  The term has been used to describe anything from underlying RAID configuration and partitioning of the disk to things like IBM’s SVC and NetApp’s V-Series, both used for managing heterogeneous storage.  Without getting into what’s right and wrong when talking about storage virtualization, let’s look at what is required for cloud.

First, consolidated storage itself is a big part of cloud infrastructures in most applications.  Having the data in one place to manage can simplify the infrastructure, but it also increases the feature set, especially when virtualizing servers.  At a top level, looking at storage for cloud environments there are two major considerations: flexibility and cost.  The storage should have the right feature set and protocol options to support the initial design goals, and it should also offer the flexibility to adapt as the business requirements change.  Several vendors offer great storage platforms for cloud environments depending on the design goals and requirements.  Features that are typically useful for the cloud (and sometimes lumped into virtualization) are:

De-Duplication – Maintaining a single copy of duplicate data, reducing overall disk usage (see the sketch following this list).

Thin-provisioning – Optimizes disk usage by allowing disks to be assigned to servers/applications based on predicted growth while consuming only the used space.  Allows for applications to grow without pre-consuming disk.

Snapshots – Low-disk-use, point-in-time records which can be used in operations like point-in-time restores.
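
To make the de-duplication idea concrete, here is a minimal sketch of block-level de-duplication using content hashing.  It only illustrates the principle (store each unique block once, reference it many times); real arrays implement the feature very differently under the hood.

```python
# Minimal illustration of block-level de-duplication: identical blocks are
# stored once and referenced by their content hash, shrinking on-disk usage.
import hashlib

BLOCK_SIZE = 4096

def dedupe(data: bytes):
    store = {}       # content hash -> unique block actually kept on disk
    references = []  # the "file" as an ordered list of block hashes
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # keep only the first copy
        references.append(digest)
    return store, references

# Example: 100 copies of the same 4 KB pattern collapse to one stored block
data = (b"A" * BLOCK_SIZE) * 100
store, refs = dedupe(data)
print(f"Logical blocks: {len(refs)}, unique blocks stored: {len(store)}")
```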

Overall, virtualization from end to end is the foundation of cloud environments, allowing for flexible, high-utilization infrastructures.
