EMC recently announced VSPEX (http://www.emc.com/about/news/press/2012/20120412-01.htm), a series of reference architectures designed with Cisco, Brocade, Citrix, Intel, Microsoft, and VMware.  The intent of these architectures is to provide proven, flexible designs built from best-of-breed components for cloud computing while preserving customer choice.

The VSPEX solutions are focused on virtualized infrastructure for private cloud and end-user computing environments.  Current options provide VMware vSphere 5.0 and Microsoft Hyper-V server virtualization from 50 – 250 VMs as well as VMware View and Citrix XenDesktop solutions from 50 – 2000 desktops.  Additionally, VSPEX architectures factor in unified management and backup/recovery.  The initial launch solutions are: VMware View (250, 500, 1000, 2000 users), Citrix XenDesktop (250, 500, 1000, 2000 users), VMware Private Cloud (125 & 250 virtual machines), VMware Private Cloud (50 & 100 virtual machines), and Microsoft Private Cloud (50 & 100 virtual machines).  Full details can be found at: http://www.emc.com/platform/virtualizing-information-infrastructure/vspex.htm#!resources.

The reference architectures are further supported through VSPEX Labs from EMC for testing and configuration, which enables partners to validate specific configurations.  The model also enables partners to drive new functionality into VSPEX based on their customer base.  First-level support will be provided by the EMC channel partner and backed by EMC.

VSPEX is different from the Vblocks offered by VCE (the Virtual Computing Environment Company) and is more along the lines of FlexPod, a collaboration of NetApp and Cisco with flavors for VMware, Citrix, and several other applications/deployments.  The VSPEX reference architectures offer more choice and flexibility while sacrificing some in the way of acquisition and operational support.  This gap again presents an opportunity for EMC channel partners to differentiate themselves with custom offerings.

Overall, VSPEX is an excellent offering for both customers and EMC channel partners.  It provides additional options for deploying reliable, tested, integrated hardware stacks for private cloud and end-user computing environments.  It also provides a framework and foundation from which partners can build a custom solution set.


The Power of Innovative Datacenter Stacks

With the industry drive towards cloud computing models there has been a lot of talk and announcements around ‘converged infrastructure’ or ‘integrated stack’ solutions. An integrated stack is a pre-packaged offering, typically containing some amount of network, storage, and server infrastructure bundled with some level of virtualization, automation, and orchestration software. The purpose of these stacks is to simplify infrastructure purchasing and accelerate the migration to virtualized or cloud computing models by reducing risk and time to deployment. This simplification and acceleration is accomplished through heavy testing and certification by the vendor or vendors to ensure compatibility, stability, and performance.

In broad strokes there are two types of integrated stack solution:

Single Vendor – All stack components are developed, manufactured and bundled by a single vendor.

Multi-Vendor – Products from two or more parent vendors are bundled together to create the stack.

Of these two approaches the true value and power typically comes from the multi-vendor approach, or Innovative Stack, as long as some key processes are handled correctly, specifically infrastructure pre-integration/delivery and support. With an innovative stack the certification and integration testing is done by the joint vendors, allowing more time to be spent tailoring the solution to specific needs rather than ensuring component compatibility and design validity. The innovative stack provides a cookie-cutter approach at the infrastructure level.

The reason the innovative stack holds sway is the ability to package ‘best-of-breed’ technologies into a holistic top-tier package rather than relying solely on products and software from a single vendor, some of which may fall lower in the rankings. The large data center hardware vendors all have several disparate product lines, each in various stages of advancement and adoption. While one or two of these product lines may be best-of-breed or close to it, you’d be hard-pressed to argue that any one vendor can provide the best storage, server, and network hardware along with automation and orchestration software.

A prime example of this is VMware. It’s difficult to argue that VMware is not the best-of-breed for server virtualization; with a robust feature set, an outstanding history, and approximately 90% market share it is typically the obvious choice. That being said, VMware does not sell hardware, which means if you’re virtualizing servers and want best-of-breed you’ll need two vendors right out of the gate. VMware also has an excellent desktop virtualization platform, but in that arena Citrix could easily be argued best-of-breed, and both have pros and cons depending on the specific technical and business requirements. For a desktop virtualization architecture it’s not uncommon to have three best-of-breed vendors before even discussing storage or network hardware (vendor X servers, the VMware hypervisor, and Citrix desktop virtualization).

With the innovative stack approach a collaborative multi-vendor team can analyze, assess, bundle, test, and certify an integration of best-of-breed hardware and software to provide the highest levels of performance, features, and stability. Once the architectures are defined, if an appropriate support and delivery model is put in place jointly by the vendors, a best-of-breed innovative stack can accelerate your successful adoption of converged infrastructure and cloud-model services. An excellent example of this type of multi-vendor certified Innovative Stack is the FlexPod for VMware by NetApp, Cisco, and VMware, which is backed by a joint support model and delivery packaging through certified expert channel partners.

To participate in a live WebCast on the subject and learn more please register here: http://www.definethecloud.net/innovative-versus-integration-cloud-stacks.


OTV and Vplex: Plumbing for Disaster Avoidance

High availability, disaster recovery, business continuity, etc. are all key concerns of any data center design. Each describes a separate component of one big question: ‘When something does go wrong, how do I keep doing business?’

Very public real-world disasters have taught us as an industry valuable lessons in what real business continuity requires. The emphasis on off-site archives and Disaster Recovery (DR) can be at least partially attributed to the Oklahoma City bombing. Prior to that, having only local or off-site tape archives was commonly acceptable: data gets lost, I get the tape and restore it. That worked well until we saw what happens when you have all the data and no data center to restore to.

September 11th, 2001 taught us another lesson about distance. There were companies with primary data centers in one tower and the DR data center in the other. While that may seem laughable now it wasn’t unreasonable then. There were latency and locality gains from the setup, and the idea that both world class engineering marvels could come down was far-fetched.

With lessons learned we’re now all experts in the needs of DR, right up until the next unthinkable happens ;-). Sarcasm aside, we now have a better set of recommended practices for DR solutions to provide Business Continuity (BC). It’s commonly accepted that sites be a minimum of 50 km apart. 50 km will protect from an explosion, a power outage, and several other events, but it probably won’t protect from a major natural disaster such as an earthquake or hurricane. If those are concerns the distance increases, and you may end up with more than two data centers.

There are obviously significant costs involved in running a DR data center. Due to these costs the concept of running a ‘dark’ standby data center has gone away. If we pay for compute, storage, and network, we want to be utilizing it. Running test/dev systems or other non-frontline mission-critical applications is one option, but ideally both data centers could be used in an active fashion for production workloads with the ability to fail over for disaster recovery or avoidance.

While solutions for this exist on high-end UNIX platforms and mainframes, it has been a tough cookie to crack in the x86/x64 commodity server market. The reason is that we’ve designed our commodity server environments as individual application silos directly tied to the operating system and underlying hardware. This makes it extremely complex to decouple the application and allow it to live resident in two physical locations, or at least migrate non-disruptively between the two.

In steps VMware and server virtualization.  VMware decouples the operating system and application from the hardware they reside on.  With the direct hardware tie removed, applications running in operating systems on virtual hardware can be migrated live (without disruption) between physical servers; this is known as vMotion.  This application mobility puts us one step closer to active/active data centers from a Disaster Avoidance (DA) perspective, but it doesn’t come without some catches: bandwidth, latency, Layer 2 adjacency, and shared storage.
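As a rough illustration, those four catches can be expressed as a simple pre-flight check. The threshold numbers below are hypothetical placeholders for the sake of the sketch, not VMware's published requirements:

```python
# Illustrative sketch only: the four vMotion enablers expressed as a
# pre-flight check. Thresholds are invented placeholders, not VMware's
# published requirements.

def can_vmotion(bandwidth_mbps: float, latency_ms: float,
                same_vlan: bool, shared_storage: bool) -> bool:
    return (bandwidth_mbps >= 622    # sample WAN bandwidth floor
            and latency_ms <= 5      # sample round-trip latency ceiling
            and same_vlan            # Layer 2 adjacency between hosts
            and shared_storage)      # both hosts see the VM's disk

print(can_vmotion(1000, 3, True, True))   # all four conditions met -> True
print(can_vmotion(1000, 3, True, False))  # no shared storage -> False
```

Fail any one of the four and a live migration between sites is off the table, which is why the rest of this post walks through solving them in turn.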

The first two challenges can be addressed between data centers using two tools: distance and money.  You can always spend more money to buy more WAN/MAN bandwidth, but you can’t beat physics, so latency depends on the speed of light and therefore on distance.  Even with those two problems solved, there has traditionally been no good way to solve the Layer 2 adjacency problem.  By Layer 2 adjacency I mean same VLAN/broadcast domain, i.e. MAC-based forwarding.  Solutions have existed and still exist to provide this adjacency across MAN and WAN boundaries (EoMPLS and VPLS) but they are typically complex and difficult to manage at scale.  Additionally these protocols tend to be cumbersome due to L2 flooding behaviors.
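A quick back-of-the-envelope sketch shows why distance dominates latency. Assuming roughly 200,000 km/s propagation in fiber (about two-thirds the speed of light in a vacuum; real links add serialization, queuing, and equipment delay on top):

```python
# Best-case propagation latency between data centers, bounded by the
# speed of light in fiber (~200,000 km/s). Real-world RTT is higher.

FIBER_KM_PER_SEC = 200_000  # approximate propagation speed in fiber

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds for a given path length."""
    one_way_sec = distance_km / FIBER_KM_PER_SEC
    return one_way_sec * 2 * 1000

for km in (50, 200, 1000):
    print(f"{km:>5} km -> at least {min_rtt_ms(km):.2f} ms RTT")
```

At the recommended 50 km separation the physics floor is well under a millisecond, but no amount of money lowers it further; only moving the sites closer does.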

Up next is Cisco with Overlay Transport Virtualization (OTV).  OTV is a Layer 2 extension technology that utilizes MAC routing to extend Layer 2 boundaries between physically separate data centers.  OTV offers both simplicity and efficiency in an L2 extension technology by pushing routing behavior down to Layer 2 and negating the flooding of unknown unicast.  With OTV in place a VLAN can safely span a MAN or WAN boundary, providing Layer 2 adjacency to hosts in separate data centers.  This leaves us with one last problem to solve.
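Conceptually, MAC routing replaces flooding with a lookup in a table learned through the control plane. This toy sketch (with invented MAC addresses and site names) shows the difference in behavior for unknown unicast:

```python
# Conceptual sketch of MAC routing as described above: a control-plane-
# learned table maps MAC addresses to sites, and frames for unknown MACs
# are dropped rather than flooded across the WAN. Entries are invented.

mac_table = {
    "00:1b:54:aa:01:01": "DC-1",
    "00:1b:54:bb:02:02": "DC-2",
}

def forward(dst_mac: str) -> str:
    # Classic L2 extension would flood unknown unicast to every remote
    # site; MAC routing suppresses that behavior and drops instead.
    return mac_table.get(dst_mac, "drop (unknown unicast not flooded)")

print(forward("00:1b:54:bb:02:02"))  # known MAC -> forwarded to DC-2
print(forward("00:1b:54:cc:03:03"))  # unknown MAC -> dropped, WAN spared
```

The real protocol learns these entries via its control plane rather than data-plane flooding, which is what keeps the WAN link clean as the L2 domain grows.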

The last step in putting the plumbing together for long-distance vMotion is shared storage.  In order for the magic of vMotion to work, both the server the Virtual Machine (VM) is currently running on and the server the VM will be moved to must see the same disk.  Regardless of protocol or disk type, both servers need to see the files that comprise a VM.  This can be accomplished in many ways depending on the storage protocol you’re using, but traditionally what you end up with is one of the following two scenarios:

In the diagram above we see that both servers can access the same disk, but the server in DC 2 must access the disk across the MAN or WAN boundary, increasing latency and decreasing performance.  The second option is:


In the next diagram shown above we see storage replication at work.  At first glance it looks like this would solve our problem, as the data would be available in both data centers; however, this is not the case.  With existing replication technologies the data is only active, or primary, in one location, meaning it can only be read from and written to on a single array.  The replicated copy is available only for failover scenarios, as depicted by the P in the diagram.  While each controller/array may own active disk as shown, it’s only accessible on a single side at a single time, that is, until Vplex.

EMC’s Vplex provides the ability to have active/active read/write copies of the same data in two places at the same time.  This solves our problem of having to cross the MAN/WAN boundary for every disk operation.  Using Vplex the virtual machine data can be accessed locally within each data center.


Putting both pieces together we have the infrastructure necessary to perform a Long Distance vMotion as shown above.


OTV and Vplex provide an excellent and unique infrastructure for enabling long-distance vMotion.  They are the best available ‘plumbing’ for use with VMware for disaster avoidance.  I use the term plumbing because they are just part of the picture, the pipes.  Many other factors come into play, such as rerouting incoming traffic, backup, and disaster recovery.  When properly designed and implemented for the correct use cases, OTV and Vplex provide a powerful tool for increasing the productivity of active/active data center designs.


SMT, Matrix and Vblock: Architectures for Private Cloud

Cloud computing environments provide enhanced scalability and flexibility to IT organizations.  Many options exist for building a cloud strategy: public, private, etc.  For many companies private cloud is an attractive option because it allows them to maintain full visibility and control of their IT systems.  Private clouds can be further enhanced by merging them with public cloud systems in a hybrid cloud.  This allows some systems to gain the economies of scale offered by public cloud while others are maintained internally.  Some great examples of hybrid strategies would be:

  • Utilizing private cloud for mission critical applications such as SAP while relying on public cloud for email systems, web hosting, etc.
  • Maintaining all systems internally during normal periods and relying on the cloud for peaks.  This is known as Cloud Bursting and is excellent for workloads that cycle throughout the day, week, month or year.
  • Utilizing private cloud for all systems and capacity while relying on cloud based Disaster Recovery (DR) solutions.

Many more options exist and any combination of options is possible.  If private cloud is part of the cloud strategy for a company there is a common set of building blocks required to design the computing environment.


In the diagram above we see that each component builds upon the one below it.  Starting at the bottom, we utilize consolidated hardware to minimize power, cooling, and space as well as the number of underlying components to be managed.  At the second tier of the private cloud model we layer on virtualization to maximize utilization of the underlying hardware while providing logical separation for individual applications.

If we stop at this point we have what most of today’s data centers are using to some extent or moving to.  This is a virtualized data center.  Without the next two layers we do not have a cloud/utility computing model.  The next two layers provide the real operational flexibility and organizational benefits of a cloud model.

To move our virtualized data center to a cloud architecture we next layer on automation and monitoring.  This layer provides the management and reporting functionality for the underlying architecture.  It could include monitoring systems, troubleshooting tools, chargeback software, hardware provisioning components, etc.  Next we add a provisioning portal to allow end-users or IT staff to provision new applications, decommission systems no longer in use, and add/remove capacity from a single tool.  Depending on the level of automation in place below, some things like capacity management may be handled without user/staff intervention.
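As an illustrative sketch of how the portal and automation layers interact, consider a provisioning request flowing into an automation layer that grants or rejects it based on remaining capacity. All names and capacity numbers here are invented for illustration:

```python
# Hypothetical sketch of the provisioning-portal flow described above.
# A request travels from the portal to an automation layer that checks
# and deducts capacity. All names and sizes are invented.

from dataclasses import dataclass

@dataclass
class Request:
    tenant: str
    vcpus: int
    ram_gb: int

class AutomationLayer:
    def __init__(self, free_vcpus: int, free_ram_gb: int):
        self.free_vcpus = free_vcpus
        self.free_ram_gb = free_ram_gb

    def provision(self, req: Request) -> bool:
        """Grant the request only if capacity remains, then deduct it."""
        if req.vcpus > self.free_vcpus or req.ram_gb > self.free_ram_gb:
            return False  # would instead trigger a capacity-expansion workflow
        self.free_vcpus -= req.vcpus
        self.free_ram_gb -= req.ram_gb
        return True

cloud = AutomationLayer(free_vcpus=32, free_ram_gb=128)
print(cloud.provision(Request("finance", 8, 32)))   # fits -> True
print(cloud.provision(Request("hr", 64, 256)))      # exceeds -> False
```

In a real deployment the rejected request would kick off the capacity-management automation mentioned above rather than simply failing, and chargeback software would record the deduction against the tenant.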

The last piece of the diagram above is security.  While many private cloud discussions leave security out or minimize its importance, it is actually a key component of any cloud design.  When moving to private cloud, customers are typically building a new compute environment or totally redesigning an existing one.  This is the key time to design robust security in from end to end, because you’re not tied to previous mistakes (we all make them) or legacy design.  Security should be part of the initial discussion for each layer of the private cloud architecture and for the solution as a whole.

Private cloud systems can be built with many different tools from various vendors.  Many of the software tools exist in both open source and licensed versions.  Additionally, several vendors offer an end-to-end stack upon which to design a private cloud system.  The remainder of this post will cover three of the leading private cloud offerings:

Scope: This post is an overview of three excellent solutions for private cloud.  It is not a pro/con discussion or a feature comparison.  I would personally position any of the three architectures for a given customer depending on customer requirements, existing environment, cloud strategy, business objectives, and comfort level.  As always, please feel free to leave comments, concerns, or corrections using the comment form at the bottom of the post.

Secure Multi-Tenancy (SMT):

Vendor positioning:  ‘This includes the industry’s first end-to-end secure multi-tenancy solution that helps transform IT silos into shared infrastructure.’


SMT is a pairing of VMware vSphere, Cisco Nexus, UCS, and MDS, and NetApp storage systems.  SMT has been jointly validated and tested by the three companies, and a Cisco Validated Design (CVD) exists as a reference architecture.  Additionally, a joint support network exists for customers building or using SMT solutions.

Unlike the other two systems SMT is a reference architecture a customer can build internally or along with a trusted partner.  This provides one of the two unique benefits of this solution.

Unique Benefits:

  • Because SMT is a reference architecture it can be built in stages married to existing refresh and budget cycles.  Existing equipment can be reutilized or phased out as needed.
  • SMT is designed to provide end-to-end security for multiple tenants (customers, departments, or applications).

HP Matrix:

Vendor positioning:  ‘The industry’s first integrated infrastructure platform that enables you to reduce capital costs and energy consumption and more efficiently utilize the talent of your server administration teams for business innovation rather than operations and maintenance.’


Matrix is an integration of HP blades, HP storage, HP networking, and HP provisioning/management software.  HP has tested the interoperability of the proven components and software and integrated them into a single offering.

Unique benefits:

  • Of the three solutions Matrix is the only one that is a complete solution provided by a single vendor.
  • Matrix provides the greatest physical server scalability of any of the three solutions with architectural limits of thousands of servers.


Vblock:

Vendor positioning:  ‘The industry’s first completely integrated IT offering that combines best-in-class virtualization, networking, computing, storage, security, and management technologies with end-to-end vendor accountability.’


Vblocks are a combination of EMC software and storage, Cisco UCS, MDS, and Nexus, and VMware virtualization.  Vblocks are complete infrastructure packages sold in one of three sizes based on the number of virtual machines.  Vblocks offer a thoroughly tested and jointly supported infrastructure with proven performance levels based on a maximum number of VMs.

Unique Benefits:

  • Vblocks offer a tightly integrated best-of-breed solution that is purchased as a single product.  This provides very predictable scalability costs when looked at from a C-level perspective (i.e. x dollars buys y scalability; when needs increase, x dollars will be required for the next block).
  • Vblock is supported by a unique partnership between Cisco, EMC, and VMware as well as their ecosystem of channel partners.  This provides robust front- and back-end support for customers before, during, and after install.


Private cloud can provide a great deal of benefit when implemented properly, but like any major IT project the benefits are greatly reduced by mistakes and improper design.  Pre-designed and tested infrastructure solutions such as the ones above provide customers a proven platform on which to build a private cloud.
