Data Center 101: Server Virtualization

Virtualization is a key piece of modern data center design.  Virtualization occurs on many devices within the data center; conceptually, it is the ability to create multiple logical devices from one physical device.  We’ve been virtualizing hardware for years: VLANs and VRFs on the network, volumes and LUNs on storage, and even our servers were virtualized as far back as the 1970s with LPARs.  Server virtualization hit the mainstream in the data center when VMware began effectively partitioning clock cycles on x86 hardware, allowing virtualization to move from big iron to commodity servers. 

This post is the next segment of my Data Center 101 series and will focus on server virtualization, specifically virtualizing x86/x64 server architectures.  If you’re not familiar with the basics of server hardware, take a look at ‘Data Center 101: Server Architecture’ (http://www.definethecloud.net/?p=376) before diving in here.

What is server virtualization:

Server virtualization is the ability to take a single physical server system and carve it up like a pie (mmmm pie) into multiple virtual hardware subsets. 

Each Virtual Machine (VM), once created, or carved out, operates in a similar fashion to an independent physical server.  Typically each VM is provided with a set of virtual hardware on which an operating system and a set of applications can be installed as if it were a physical server.
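
To make the ‘carving’ idea a bit more concrete, here is a minimal Python sketch of a physical host being split into VM-sized slices.  It’s purely illustrative bookkeeping; the names (PhysicalHost, carve_vm) and numbers are made up, and this is not how any real hypervisor actually allocates hardware.

    # Illustrative sketch only: a host's capacity being "carved" into VMs.
    class PhysicalHost:
        def __init__(self, cores, memory_gb, disk_gb):
            self.cores = cores          # total physical CPU cores
            self.memory_gb = memory_gb  # total RAM
            self.disk_gb = disk_gb      # total local disk
            self.vms = []

        def carve_vm(self, name, vcpus, memory_gb, disk_gb):
            """Allocate a slice of the host's hardware as a virtual machine."""
            used_mem = sum(vm["memory_gb"] for vm in self.vms)
            used_disk = sum(vm["disk_gb"] for vm in self.vms)
            if used_mem + memory_gb > self.memory_gb or used_disk + disk_gb > self.disk_gb:
                raise ValueError("not enough physical capacity left on this host")
            vm = {"name": name, "vcpus": vcpus, "memory_gb": memory_gb, "disk_gb": disk_gb}
            self.vms.append(vm)
            return vm

    host = PhysicalHost(cores=16, memory_gb=64, disk_gb=1000)
    host.carve_vm("mail01", vcpus=4, memory_gb=16, disk_gb=200)
    host.carve_vm("dhcp01", vcpus=1, memory_gb=1, disk_gb=20)
    print(f"{len(host.vms)} VMs sharing one physical server")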

Why virtualize servers:

Virtualization has several benefits when done correctly:

  • Reduction in infrastructure costs, because less server hardware is required:
    • Power
    • Cooling
    • Cabling (dependent upon design)
    • Space
  • Availability and management benefits
    • Many server virtualization platforms provide automated failover for virtual machines.
    • Centralized management and monitoring tools exist for most virtualization platforms.
  • Increased hardware utilization
    • Standalone servers traditionally suffer from utilization rates as low as 10%.  By placing multiple virtual machines with separate workloads on the same physical server, much higher utilization rates can be achieved.  This means you’re actually using the hardware you purchased and are powering/cooling (a rough consolidation sketch follows this list.)
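
To put rough numbers on that last point, here is a back-of-the-envelope consolidation sketch.  The 10% figure comes from the bullet above; the 60% target utilization and the server count are assumptions chosen just to show the math.

    import math

    # Assumed figures for illustration only.
    standalone_servers = 20
    avg_utilization = 0.10       # ~10% utilization on standalone hardware (from the bullet above)
    target_utilization = 0.60    # assumed comfortable target for a virtualized host

    # Total useful work, expressed in whole-server equivalents.
    useful_work = standalone_servers * avg_utilization            # 2.0 servers' worth of work
    hosts_needed = math.ceil(useful_work / target_utilization)    # 4 virtualized hosts

    print(f"{standalone_servers} standalone servers -> {hosts_needed} virtualized hosts")
    print(f"servers removed from the power/cooling/space budget: {standalone_servers - hosts_needed}")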

How does virtualization work?

Typically, within an enterprise data center, servers are virtualized using a bare-metal hypervisor: a virtualization operating system that installs directly on the server without the need for a supporting operating system.  In this model the hypervisor is the operating system and the virtual machine is the application. 


Each virtual machine is presented a set of virtual hardware upon which an operating system can be installed.  The fact that the hardware is virtual is transparent to the operating system.  The key components of a physical server that are virtualized are:

  • CPU cycles
  • Memory
  • I/O connectivity
  • Disk


At a very basic level, memory and disk capacity, I/O bandwidth, and CPU cycles are shared amongst the virtual machines.  This allows multiple virtual servers to utilize a single physical server’s capacity while maintaining a traditional OS-to-application relationship.  The reason this does such a good job of increasing utilization is that you’re spreading several applications across one set of hardware.  Applications typically peak at different times, allowing for a more constant state of utilization.

For example, imagine an email server: typically it will peak at 9am, possibly again after lunch, and once more before quitting time.  The rest of the day it’s greatly underutilized (that’s why marketing email is typically sent late at night.)  Now picture a traditional backup server; these historically run at night, when other servers are idle, to prevent performance degradation.  In a physical model each of these servers would have been architected for peak capacity to support the maximum load, but most of the day they would be underutilized.  In a virtual model they can both run on the same physical server and complement one another due to their varying peak times.
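
Here is a small sketch of that email/backup example with invented hourly numbers, expressed as fractions of one physical server’s capacity.  The exact figures are made up; the point is simply that the combined peak stays below the capacity of a single host.

    # Hourly CPU demand for two workloads with complementary peaks
    # (purely illustrative numbers, as fractions of one physical server).
    email  = [0.05]*8 + [0.60, 0.30, 0.20, 0.25, 0.55, 0.30, 0.20, 0.20, 0.50] + [0.05]*7
    backup = [0.70]*5 + [0.05]*14 + [0.70]*5

    combined = [e + b for e, b in zip(email, backup)]
    print(f"email peak alone:  {max(email):.2f}")
    print(f"backup peak alone: {max(backup):.2f}")
    print(f"combined peak:     {max(combined):.2f}")   # still under 1.0 physical server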

Another example of the uses of virtualization is hardware refresh.  DHCP servers are a great example: they provide an automatic IP addressing system by leasing IP addresses to requesting hosts, and these leases are typically held for 30 days.  DHCP is not an intensive workload.  In a physical server environment it wouldn’t be uncommon to have two or more physical DHCP servers for redundancy.  Because of the light workload, these servers would be using minimal hardware, for instance:

  • 800MHz processor
  • 512MB RAM
  • 1x 10/100 Ethernet port
  • 16GB internal disk

If this physical server were 3-5 years old, replacement parts and service contracts would be hard to come by; additionally, because of hardware advancements, the server may be more expensive to keep than to replace.  When looking for a refresh for this server, the same hardware would not be available today.  A typical minimal server today would be:

  • 1+ GHz dual- or quad-core processor
  • 1GB or more of RAM
  • 2x onboard 1GE ports
  • 136GB internal disk

The application requirements haven’t changed, but hardware has moved on.  Therefore refreshing the same DHCP server with new hardware results in even greater underutilization than before.  Virtualization solves this by placing the same DHCP server on a virtualized host and tuning the virtual hardware to the application’s requirements while sharing the physical resources with other applications.
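
As a rough sketch of that rightsizing idea, the snippet below assumes a hypothetical DHCP-sized VM and a hypothetical modern host, and simply counts how many such VMs would fit.  It ignores CPU overcommit and hypervisor overhead, and none of the figures are sizing guidance.

    # Rightsizing sketch: how many lightly loaded DHCP-class VMs fit on one modern host?
    dhcp_vm = {"vcpus": 1, "memory_gb": 0.5, "disk_gb": 16}    # tuned to the app, not the hardware
    modern_host = {"cores": 8, "memory_gb": 32, "disk_gb": 500}

    # Memory and disk are the simple constraints here; CPU overcommit and
    # hypervisor overhead are ignored for simplicity.
    vms_by_memory = int(modern_host["memory_gb"] // dhcp_vm["memory_gb"])
    vms_by_disk = int(modern_host["disk_gb"] // dhcp_vm["disk_gb"])
    print(f"one host could carry roughly {min(vms_by_memory, vms_by_disk)} VMs of this size")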

Summary:

Server virtualization has a great many benefits in the data center, and as such companies are adopting more and more virtualization every day.  The overall reduction in overhead costs such as power, cooling, and space, coupled with the increased hardware utilization, makes virtualization a no-brainer for most workloads.  Depending on the virtualization platform chosen, there are additional benefits of increased uptime, distributed resource utilization, and increased manageability.


What's Stopping Cloud?

So with everyone talking about cloud and all of the benefits of cloud computing, why isn’t everyone diving in?  The barriers to adoption can be classified into three major categories: Personal, Technical, and Business.

Personal Reasons:

Personal barriers to cloud adoption are broad ranging and can be quite difficult to overcome.  Many IT professionals fear job loss and staff reduction as things become more centralized or are moved to a service provider.  In some ways this may very well be true: if more and more companies increase IT efficiency and outsource applications and infrastructure, there will be a reduction in the necessary workforce.  That being said, it won’t be as quick or extreme as some predict in the ‘sky is falling’ books and blogs.  The IT professionals who learn and adapt will always have a place to work, and quite possibly a bigger paycheck for their more specialized or broader-scope jobs. 

Additionally, human beings tend to have a natural fear of the unknown and a desire to stay within a certain comfort zone.  If you’ve built your career in a siloed data center, cloud can be a scary proposition.  You can see this in the differences between IT professionals who have been in the industry for varying amounts of time: many of those who started their careers in mainframes were tough to push into distributed computing, those who started in distributed computing had issues with server virtualization, and those who built a career in the virtualized server world may still have issues with pushing more virtualization into the network and storage.  Overall we tend to have a level of complacency that is hard to break.  To that point I always think of a phrase we used in the Marines: ‘Complacency kills.’  In that environment it was meant quite literally; in a business environment it still rings true: complacency kills business.

A great example of this from the U.S. market is Kroger’s catapult to greatness.  When grocery stores were moving from small stores to ‘super stores,’ the retail giant at the time, A&P, opened a super store which was far more successful than its other stores.  Rather than embracing this change and expanding on it, A&P closed the store for fear of ruining its existing market.  Kroger, on the other hand, saw the market shift and converted existing stores to superstores while closing down stores that couldn’t fit the model.  This ended up launching Kroger to the number one grocery store position in the U.S.

Technical Reasons:

Technical barriers to cloud adoption also come in many flavors, but on the whole the technical challenges are easier to solve.  Issues such as performance, security, reliability, and recovery are major concerns for cloud adoption.  Can I trust someone else to protect and secure my data?  The answer to that question in most cases is yes; the tools all exist to provide secure environments to multiple tenants, at all sizes from the small enterprise (or smaller) to the global giants.  When thinking about the technical challenges, take a step back from how good you are at your job, how good the IT department is, or how great your data center is, and think about how much better you’d be if that were your business’s primary focus.  If I made widgets as a byproduct of my primary business, my widgets would probably not be as good, or as low cost, as those of a company that only made widgets.  If your focus was the data center and your company was built on providing data centers, you’d be better at data center.  Co-location data centers are a great example of this.  The majority of companies can’t afford to run a Tier III or IV data center but could definitely benefit from the extra uptime.  Rather than building their own, they can host equipment at a colo with the appropriate rating and in most cases save money over the TCO of data center ownership.

Business Barriers:

Business barriers can be the most difficult to solve.  The majority of the business challenges revolve around the idea of guarantees.  What guarantees do I have that my data is safe, secure, and accessible?  Issues like the Microsoft/T-Mobile Sidekick fiasco bring this type of concern into the spotlight.  If I pay to have my business applications hosted, what happens when things fail?  Currently, hosted services typically provide Service Level Agreements (SLAs) that guarantee uptime.  The issue is that if the uptime isn’t met, the repercussion is typically a pro-rated monthly fee or repayment by the provider of the service cost for the affected time.  To put that in perspective, say you pay $100K a month to host your SAP deployment, which directly affects 20 million dollars per month in sales.  If that hosted SAP deployment fails for a week it costs you 5 million dollars; receiving a refund of the $100K monthly fee doesn’t even begin to make up for the loss, and that 5 million dollar figure doesn’t account for the residual losses of customer satisfaction and confidence.
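
Spelling that math out, using the figures from the example above and a simple four-weeks-per-month approximation:

    # The SLA math from the paragraph above, spelled out.
    monthly_hosting_fee = 100_000      # what you pay the provider per month
    monthly_revenue = 20_000_000       # sales that depend on the hosted application
    outage_weeks = 1

    lost_revenue = monthly_revenue * (outage_weeks / 4)   # ~5 million for a one-week outage
    typical_sla_credit = monthly_hosting_fee              # a refund of the monthly fee, at best

    print(f"lost revenue:   ${lost_revenue:,.0f}")
    print(f"SLA credit:     ${typical_sla_credit:,.0f}")
    print(f"uncovered loss: ${lost_revenue - typical_sla_credit:,.0f}")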

The guarantee issue is one that will definitely need to be worked out, tested, and retested.  The current SLA model will not cut it for full-scale cloud deployments.  One concept that might ease this process would be insurance, that’s right, cloud insurance.  For example, as a cloud service provider you hire an insurance company to rate your data center’s risk and insure your SLAs against the per-minute value of each customer’s hosted services/infrastructure.  This allows you to guarantee each customer an actual return on the lost revenue in the event of a failure.  Not only is the cloud provider protected, but the customer now has the confidence that their actual costs will be covered if the SLA is not met.
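
A rough sketch of what that per-minute valuation could look like, reusing the numbers from the SAP example.  This is entirely hypothetical; no real insurance product or formula is being described.

    # Hypothetical "cloud insurance" math: value the customer's downtime per minute
    # rather than refunding the hosting fee.
    monthly_revenue = 20_000_000
    minutes_per_month = 30 * 24 * 60

    per_minute_value = monthly_revenue / minutes_per_month   # ~$463 per minute
    outage_minutes = 7 * 24 * 60                             # a one-week outage

    insured_payout = per_minute_value * outage_minutes
    print(f"per-minute value: ${per_minute_value:,.2f}")
    print(f"insured payout:   ${insured_payout:,.0f}")       # roughly the revenue at risk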

Overall, none of the challenges of cloud adoption are show stoppers, but adoption will not be immediate.  The best path for a company to take is to start with the ‘low-hanging fruit.’  For instance, if you’re looking at using cloud services, try starting with something like email.  Rather than run your own email servers, use a hosted service.  Email is a commoditized application and runs basically the same in-house or out; most companies are spending unneeded money running their own email infrastructure because that’s what they’re used to.  The next step up may be to use hosted infrastructure, co-location, or services for disaster recovery (DR) purposes.

Another approach, and my personal favorite, is to start realizing the power of cloud infrastructures in your own data center.  Many companies are already heavily utilizing the benefits of server virtualization but missing the mark on network and storage virtualization.  Use end-to-end virtualization to collapse silos and increase flexibility; this will allow the IT infrastructure to adapt more rapidly to immediate business needs.  From there, start looking at automation technologies that can further reduce administrative costs.  The overall concept here is the ‘Private Cloud,’ also sometimes called the ‘Internal Cloud.’  This not only has immediate business impact but also provides a test bed for the value of cloud computing.  Additionally, once the data center workload is virtualized on a standard platform, it’s easier to replicate it or move it into hosted cloud environments.

Most importantly, remember that moving to a cloud architecture is not an instantaneous rip-and-replace operation; it’s a staged, gradual shift.  Make the strategic business decision to move towards cloud computing, and move down that path over an allotted time frame with tactical decisions that coincide with your purchasing practices and refresh cycles.  Cloud doesn’t have to be an unattainable goal or a mystery; utilize independent consultants or strong vendor ties to define the long-term vision and then execute.
