Digging Into the Software Defined Data Center

The software defined data center is a relatively new buzzword embraced by the likes of EMC and VMware.  For an introduction to the concept see my article over at Network Computing (http://www.networkcomputing.com/data-center/the-software-defined-data-center-dissect/240006848).  This post is intended to take it a step deeper, as I seem to be stuck at 30,000 feet for the next five hours with no internet access and no other decent ideas.  For the purpose of brevity (read: laziness) I'll use the acronym SDDC for software defined data center, whether or not that term is being used elsewhere.

First let's look at what you get out of an SDDC:

Legacy Process:

In a traditional legacy data center the workflow for implementing a new service would look something like this:

  1. Approval of the service and budget
  2. Procurement of hardware
  3. Delivery of hardware
  4. Rack and stack of new hardware
  5. Configuration of hardware
  6. Installation of software
  7. Configuration of software
  8. Testing
  9. Production deployment

This process can vary greatly in overall time, but 30-90 days is probably a good ballpark (I know, I know, some of you are wishing it happened that fast).

Not only is this process complex and slow, but it also carries inherent risk.  Your users are accustomed to on-demand IT services in their personal lives.  They know where to go to get them and how to work with them.  If you tell a business unit it will take 90 days to deploy an approved service, they may source it from outside of IT.  This type of shadow IT poses issues for security, compliance, backup/recovery, etc.

SDDC Process:

As described in the link above, an SDDC provides a complete decoupling of the hardware from the services deployed on it.  This provides a more fluid system for IT service change: growing, shrinking, adding and deleting services.  Conceptually, the overall infrastructure would maintain an agreed-upon level of spare capacity and would be added to as thresholds were crossed.  This would provide the ability to add services and grow existing services on the fly in all but the most extreme cases.  Additionally, the management and deployment of new services would be software driven through intuitive interfaces, rather than hardware driven and configured through disparate CLIs.
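The spare-capacity threshold idea above can be sketched in a few lines.  This is a minimal illustration only; the function name, the 20% target and the capacity units are all invented:

```python
# Hypothetical sketch: maintain an agreed-upon spare-capacity level across
# a resource pool and flag when it's time to add hardware.

def needs_expansion(total, used, spare_target=0.2):
    """Return True when free capacity falls below the agreed spare level."""
    free_ratio = (total - used) / total
    return free_ratio < spare_target

# 100 units of capacity with 85 in use leaves 15% headroom, which is
# below a 20% spare-capacity target, so the pool should grow.
print(needs_expansion(100, 85))   # True
print(needs_expansion(100, 70))   # False: 30% free
```

In practice the check would run per resource (compute, storage, network bandwidth) and feed a procurement or orchestration workflow rather than a print statement.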

The process would look something like this:

  1. Approval of the service and budget
  2. Installation of software
  3. Configuration of software
  4. Testing
  5. Production deployment

The removal of four steps is not the only benefit.  The remaining five steps are streamlined into automated processes rather than manual configurations.  Change management systems and trackback/chargeback are incorporated into the overall software management system providing a fluid workflow in a centralized location.  These processes will be initiated by authorized IT users through self-service portals.  The speed at which business applications can be deployed is greatly increased providing both flexibility and agility.
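The streamlined workflow above lends itself to automation.  Here is a minimal sketch, with hypothetical step and service names, of the software-side steps run as one pipeline with an audit trail rather than as manual configurations:

```python
# Hypothetical sketch of an automated deployment pipeline. Step functions,
# the service name and the audit format are all invented for illustration.

def install_software(name):
    return f"{name}: software installed"

def configure_software(name):
    return f"{name}: software configured"

def run_tests(name):
    return f"{name}: tests passed"

def promote_to_production(name):
    return f"{name}: live in production"

def deploy_service(name):
    """Run each step in order, keeping an audit trail that a change
    management or trackback/chargeback system could consume."""
    steps = [install_software, configure_software,
             run_tests, promote_to_production]
    return [step(name) for step in steps]

trail = deploy_service("crm-portal")
```

The point is not the trivial functions but the shape: one entry point a self-service portal can call, with every step recorded centrally.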

Isn’t that cloud?

Yes, no and maybe.  Or as we say in the IT world: 'It depends.'  SDDC can be cloud: with on-demand self-service, flexible resource pooling, metered service, etc., it fits the cloud model.  The difference is really in where and how it's used.  A public cloud IaaS model, or any given PaaS/SaaS model, does not lend itself to legacy enterprise applications.  For instance, you're not migrating your Microsoft Exchange environment onto Amazon's cloud.  Those legacy applications and systems still need a home.  Additionally, those existing hardware systems still have value.  SDDC offers an evolutionary approach to enterprise IT that can support both legacy applications and new applications written to take advantage of cloud systems.  This provides a migration path as well as investment protection for traditional IT infrastructure.

How it works:

The term 'cloud operating system' is thrown around frequently in the same conversation as SDDC.  The idea is that compute, network and storage are raw resources consumed by the applications and services we run to drive our businesses.  Rather than look at these resources individually, and manage them as such, we plug them into a management infrastructure that understands them and can utilize them as services require.  Forget the hardware underneath and imagine a dashboard of your infrastructure, something like the following graphic.

[Image: infrastructure dashboard]

The hardware resources become raw resources to be consumed by the IT services.  For legacy applications this can be very traditional virtualization or even physical server deployments.  New applications and services may be deployed in a PaaS model on the same infrastructure allowing for greater application scale and redundancy and even less tie to the hardware underneath.
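As a toy illustration of the raw-resource idea, here is a sketch of a shared pool that services draw from without caring which box provides the capacity.  All names and capacities are invented:

```python
# Hypothetical model: hardware as pooled raw resources consumed by services,
# whether legacy VMs or new PaaS-style applications.

pool = {"cpu_cores": 512, "memory_gb": 4096, "storage_tb": 200}

def place_service(pool, demand):
    """Carve a service's demand out of the shared pool, or refuse it
    when any single resource would be exhausted."""
    if all(pool[res] >= need for res, need in demand.items()):
        for res, need in demand.items():
            pool[res] -= need
        return True
    return False

placed = place_service(pool, {"cpu_cores": 32, "memory_gb": 128,
                              "storage_tb": 5})
```

A real cloud operating system adds scheduling, affinity and failure handling on top, but the consumption model is the same: demand against a pool, not configuration of individual boxes.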

Lifting the kimono:

Taking a peek underneath the top level reveals a series of technologies, both new and old.  Additionally, there are some requirements that may or may not be met by current technology offerings.  We'll take a look through the compute, storage and network requirements of SDDC one at a time, starting with compute and working our way up.

Compute is the layer that requires the least change.  Years ago we moved to commodity x86 hardware, which will be the base of these systems.  The compute platform itself will be differentiated by CPU and memory density, platform flexibility and cost.  Differentiators traditionally built into the hardware, such as availability and serviceability features, will lose value.  Features that will continue to add value will be related to infrastructure reduction and enablement of upper-level management and virtualization systems.  Hardware that provides flexibility and programmability will be king here, and at other layers, as we'll discuss.

Other considerations at the compute layer will tie closely into storage.  As compute power itself has grown by leaps and bounds, our networks and storage systems have become the bottleneck.  Our systems can process our data faster than we can feed it to them.  This causes issues for power, cooling efficiency and overall optimization.  Dialing down performance for power savings is not the right answer.  Instead we want to fuel our processors with data rather than starve them.  This means having fast local data in the form of SSD, flash and cache.

Storage will require significant change, but changes that are already taking place or foreshadowed in roadmaps and startups.  The traditional storage array will become more and more niche, as it is limited in both performance and capacity.  In its place we'll see new options including, but not limited to, a migration back to local disk and scale-out options.  Much of the migration to centralized storage arrays was fueled by VMware's vMotion, DRS, FT, etc.  These advanced features required multiple servers to have access to the same disk, hence the need for shared storage.  VMware has recently announced a combination of Storage vMotion and traditional vMotion that allows live migration without shared storage.  This is already available on other hypervisor platforms and makes local storage a much more viable option in many environments.

Scale-out systems on the storage side are nothing new.  LeftHand and EqualLogic pioneered much of this market before being bought by HP and Dell, respectively.  The market continues to grow with products like Isilon (acquired by EMC) making a big splash in the enterprise as well as plays in the Big Data market.  NetApp's cluster mode is now in full effect with ONTAP 8.1, allowing their systems to scale out.  In the SMB market, new players with fantastic offerings like Scale Computing are making headway and bringing innovation to the market.  Scale-out provides a more linear growth path, as both I/O and capacity increase with each additional node.  This is contrary to traditional systems, which are always bottlenecked by the storage controller(s).
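The linear-growth argument comes down to back-of-the-envelope arithmetic.  The per-node and controller numbers below are invented purely for illustration:

```python
# Hypothetical comparison: scale-out growth vs. a controller-bound array.
# All IOPS figures are made up to show the shape of the curves, not to
# represent any real product.

def scale_out_iops(nodes, iops_per_node=20000):
    """Each added node brings its own controller, so I/O grows with capacity."""
    return nodes * iops_per_node

def scale_up_iops(shelves, controller_limit=60000, iops_per_shelf=20000):
    """Adding disk shelves behind a fixed controller pair hits a ceiling."""
    return min(shelves * iops_per_shelf, controller_limit)

# Four scale-out nodes keep climbing (80,000), while a fourth shelf behind
# one controller pair stalls at the controller's limit (60,000).
```

The same logic applies to capacity, cache and network ports: in scale-out, the bottleneck grows with the system instead of sitting fixed in front of it.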

We will also see moves to central control, backup and tiering of distributed storage, such as storage blades and server cache.  Having fast data at the server level is a necessity, but it solves only part of the problem.  That data must also be made fault tolerant, as well as available to other systems outside the server or blade enclosure.  EMC's VFCache is one technology poised to help with this by adding the server as a storage tier for software tiering.  Software such as this places the hottest data directly next to the processor, with tier options all the way back to SAS, SATA and even tape.
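The hot-data placement described above amounts to a tiering policy.  Here is a hypothetical sketch; the access thresholds and tier names are invented and do not represent any real product's logic:

```python
# Hypothetical software-tiering policy: route blocks to a tier based on
# how hot they are. Thresholds and tier names are illustrative only.

def assign_tier(reads_per_hour):
    """Map a block's access rate to a storage tier, hottest first."""
    if reads_per_hour > 1000:
        return "server-flash"   # hottest data sits next to the CPU
    if reads_per_hour > 100:
        return "array-ssd"
    if reads_per_hour > 1:
        return "sas"
    return "sata"               # cold data lands on cheap capacity disk
```

Real tiering engines track access history over sliding windows and move data asynchronously, but the decision at the core is this simple mapping of heat to placement.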

By now you should be seeing the trend of software-based features and control.  The last stage is the network, which will require the most change.  Networking has held strong to proprietary hardware and high margins for years while the rest of the market has moved to commodity.  Companies like Arista look to challenge the status quo by providing software feature sets, or open programmability, layered onto fast commodity hardware.  Additionally, Software Defined Networking (http://www.definethecloud.net/sdn-centralized-network-command-and-control) has been validated by both VMware's acquisition of Nicira and Cisco's Insieme spin-in, which by most accounts will expand upon the Cisco ONE concept with a Cisco-flavored SDN offering.  In any event the race is on to build networks based on software flows that are centrally managed, rather than the port-to-port configuration nightmare of today's data centers.

This move is not only for ease of administration, but is also required to push our systems to the levels demanded by cloud and SDDC.  These multi-tenant systems, running disparate applications at various service tiers, require tighter quality-of-service controls and bandwidth guarantees, as well as more intelligent routes.  Today's physically configured networks can't provide these controls.  Additionally, applications will benefit from network visibility, allowing them to request specific flow characteristics from the network based on application or user requirements.  Multiple service levels can be configured on the same physical network, allowing traffic to take appropriate paths based on type rather than physical topology.  These network changes are required to truly enable SDDC and cloud architectures.
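Conceptually, centrally managed flows with per-service QoS might look like the toy controller below.  The class, method and field names are invented for illustration and do not represent any real SDN API:

```python
# Toy illustration of central flow management: policy is pushed once to a
# controller instead of being configured port by port on each device.

class Controller:
    """A stand-in for a centralized SDN control plane (hypothetical)."""

    def __init__(self):
        self.flows = []

    def add_flow(self, match, qos_class, min_bw_mbps):
        """Record one network-wide policy: which traffic it matches,
        its service tier, and its bandwidth guarantee."""
        self.flows.append({"match": match,
                           "qos": qos_class,
                           "min_bw_mbps": min_bw_mbps})

ctl = Controller()
ctl.add_flow({"app": "voip"},   qos_class="gold",   min_bw_mbps=50)
ctl.add_flow({"app": "backup"}, qos_class="bronze", min_bw_mbps=0)
```

The contrast with today's model is the unit of work: one policy per application, pushed centrally, versus one CLI session per switch port.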

Further up the stack from the Layer 2 and Layer 3 transport networks comes a series of other network services that will be layered in via software.  Features such as load balancing, access control and firewall services will be required for the services running on these shared infrastructures.  These network services will need to be deployed with new applications and tiered to the specific requirements of each.  As with the L2/L3 services, manual configuration will not suffice, and a 'big picture' view will be required to ensure that network services match application requirements.  These services can be layered in from both physical and virtual appliances, but will require configurability via the centralized software platform.

Summary:

By combining current technology trends and emerging technologies, and layering in future concepts, the software defined data center will emerge in evolutionary fashion.  Today's highly virtualized data centers will layer on technologies such as SDN while incorporating new storage models, bringing their data centers to the next level.  Conceptually, picture a mainframe pooling underlying resources across a shared application environment.  Now remove the frame.

Thoughts From a Tech Leadership Summit

This week I attended a tech leadership summit in Vail, Colorado for the second time.  The event is always a fantastic series of discussions and brings together some of the top minds in the technology industry.  Here are some thoughts on the trends and thinking that were common at the event.

Virtualization and VDI:

There was a lot less talk of VDI and virtualization than in 2011.  These conversations were replaced with more discussion of cloud and app delivery.  Overall the consensus seemed to be that getting the application to the right native environment on a given device is a far better approach than getting the desktop there.

Hypervisors were barely mentioned, except in a recurring theme that the hypervisor itself has become a commodity.  This means that management and upper-layer feature sets are the differentiators.  Parallel to this thought was that VMware no longer has the best hypervisor, yet their management system is still far superior to the competition (KVM was touted as the best hypervisor several times).

The last piece of the virtualization discussion was around VMware’s acquisition of Nicira.  Some bullet points on that:

Storage:

There was a lot of talk about both the vision and execution of EMC over the past year or more.  I personally used 'execution machine' more than once to describe them (coming from a typically non-EMC-Kool-Aid guy).  Some key points that resonated over the past few days:

I also participated in several discussions around flash and flash storage.  Some highlights:

The last point that struck me was a potential move away from shared storage as a whole.  Microsoft would rather have you use local storage; clusters and big data apps like Hadoop thrive on local storage; and one last big shared storage draw is going away: vMotion.  Once shared storage is no longer needed for live virtual machine migration, there will be far less draw for expensive systems.

Cloud:

The major cloud discussion I was a part of (mainly as an observer) involved OpenStack.  Overall, OpenStack has a ton of buzz and a plethora of developers.  What it's lacking is customers, leadership and someone driving it who can lead a revolution.  Additionally, it's suffering from politics and bureaucracy.  It was described as impossible to support by one individual who would definitely know one way or another.  My thinking is that if you have CloudStack sitting there with real customers, an easily deployed system, support and leadership, why waste cycles continuing down the OpenStack path?  The best answer I heard for that: ego.  Everyone wants to build the next Amazon, and CloudStack is too fully baked for newcomers to make as much of a mark.

Overall it's an interesting topic, but my thought is this: with limited developers, the industry should get behind the best horse and work together.

Big Data:

Big Data was obviously another fun topic.  The quote of the week was: 'There are ten people, not companies, that understand Big Data.  6 of them are at Cloudera and the other 4 are locked in Google writing their own checks.'  Basically, Big Data knowledge is rare, and hiring consultants is not typically a viable option because you need people holding three things: knowledge of big data processing, knowledge of your data, and knowledge of your business.  These data scientists aren't easy to come by.  Additionally, contrary to popular hype, Hadoop is not the end-all be-all of big data; it's a tool in a large tool chest.  Especially when talking about real-time you'll need to look elsewhere.  The consensus was that we are with big data where we were with cloud 2-3 years ago.  That being said, CIOs may still need to show big data initiatives (read: spend), so you should see $$ thrown at well-packaged big data solutions geared toward plug-n-play in the enterprise.

All in all it was an excellent event, and I was humbled as usual to participate in great conversations with so many smart people who are out there driving the future of technology.  What I've written here is a summary, from my perspective, of the one summit portion I had time to participate in.  There is always a good chance I misquoted/misunderstood something, so feel free to call me out.  As always I'd love your feedback, contradictions or hate mail.

Forget Multiple Hypervisors

The concept of managing multiple hypervisors in the data center isn't new--companies have been doing so or thinking about doing so for some time. Changes in licensing schemes and other events bring this issue to the forefront as customers look to avoid new costs. VMware recently acquired DynamicOps, a cloud automation/orchestration company with support for multiple hypervisors, as well as for Amazon Web Services. A hypervisor vendor investing in multihypervisor support brings the topic back to the forefront.  To see the full article visit: http://www.networkcomputing.com/virtualization/240003355

Private Cloud: An IT Staffer's Guide To Success

Recently I wrote The Biggest Threat to Your Private-Cloud Deployment: Your IT Staff as a call to management to understand the importance of their IT staff and the changes that will be required to move to a cloud model. That post received some strong criticism from readers who took it as an attack on IT, which was not its intent. In this post I'll cover the flipside of the coin, the IT staff perspective. To see the full article visit: http://www.networkcomputing.com/private-cloud/240003623.

Chargeback/Trackback: Yes You Need It

You can't fix, manage or justify what you don't understand. IT chargeback/trackback not only helps end users understand their service utilization, but it also helps IT justify and prioritize spend. Measured service is a requirement of NIST's cloud definition… To read the full article visit: http://www.networkcomputing.com/private-cloud/240003313

Private Cloud Infrastructure Design: Go Beyond Best Practices

Of all of the possible benefits of a private cloud infrastructure, one of the most valuable is flexibility. With a properly designed private cloud infrastructure, the data center environment can fluidly shift with the business. This allows new applications to be deployed to meet business demands as they're identified, and legacy applications to be removed when the value is no longer recognized. To see the full article visit: http://www.networkcomputing.com/private-cloud/240002196

Why Software-Defined Networking Could Revolutionize Networking

Software-defined networking (SDN) is a hot topic in the network community. Vendors such as Big Switch, Brocade, Cisco and HP are getting into the mix, and just about anyone in networking is making announcements. Additionally, OpenFlow is an open-source option for building an SDN control plane. To read the full article visit: http://www.networkcomputing.com/next-gen-network-tech-center/240001372

EMC VSPEX

EMC recently announced VSPEX (http://www.emc.com/about/news/press/2012/20120412-01.htm), a series of reference architectures designed with Cisco, Brocade, Citrix, Intel, Microsoft and VMware.  The intent of these architectures is to provide proven, flexible designs built from best-of-breed components for cloud computing while preserving customer choice.

The VSPEX solutions are focused on virtualized infrastructure for private cloud and end-user computing environments.  Current options provide VMware vSphere 5.0 and Microsoft Hyper-V server virtualization from 50 to 250 VMs, as well as VMware View and Citrix XenDesktop solutions from 50 to 2,000 desktops.  Additionally, VSPEX architectures factor in unified management and backup/recovery.  The initial launch solutions are: VMware View (250, 500, 1000, 2000 users), Citrix XenDesktop (250, 500, 1000, 2000 users), VMware Private Cloud (125 & 250 virtual machines), VMware Private Cloud (50 & 100 virtual machines), and Microsoft Private Cloud (50 & 100 virtual machines).  Full details can be found at: http://www.emc.com/platform/virtualizing-information-infrastructure/vspex.htm#!resources.

The reference architectures are further supported through VSPEX Labs from EMC for testing and configuration, which enables partners to validate specific configurations.  The model also enables partners to further drive new functionality into VSPEX based on their customer base.  First-level support will be provided by the EMC channel partner and backed by EMC.

VSPEX is different from the Vblocks offered by VCE (The Virtual Computing Environment Company) and is more along the lines of FlexPod, a collaboration of NetApp and Cisco with flavors for VMware, Citrix and several other applications/deployments.  The VSPEX reference architectures offer more choice and flexibility while sacrificing some in the way of acquisition and operational support.  This gap again presents an opportunity for EMC channel partners to differentiate themselves with custom offerings.

Overall, VSPEX is an excellent offering for both customers and EMC channel partners.  It provides additional options for deploying reliable, tested, integrated hardware stacks for private cloud and end-user computing environments.  It also provides a framework and foundation for partners to build a custom solution set upon.

The Stack Wars: OpenStack vs. CloudStack

The industry is moving up the stack. We're migrating from a focus on Layer 1 physical to Layer 7, applications. Software is king in the future of IT--applications and services are what matter. Cloud is what we've termed it for now, and when discussing cloud, open and standard are key discussions.  To see the full article visit: http://www.networkcomputing.com/cloud-computing/240000933

Your IT is Broken

Right this minute your data center(s) are broken.  You, your monitoring systems or your NOC may or may not know it.  All of the blinky lights may be green, all systems reporting normal, but the data center is broken, and it's costing you money and business agility…

Full article at: http://www.networkcomputing.com/data-center/232602496