Thoughts From a Tech Leadership Summit

This week I attended a tech leadership summit in Vail, Colorado for the second time. The event is always a fantastic series of discussions and brings together some of the top minds in the technology industry. Here are some thoughts on the trends and thinking that were common at the event.

Virtualization and VDI:

There was a lot less talk of VDI and virtualization than in 2011. These conversations were replaced with more conversations about cloud and app delivery. Overall, the consensus seemed to be that getting the application to the right native environment on a given device is a far better approach than getting the whole desktop there.

Hypervisors were barely mentioned, except in a recurring theme that the hypervisor itself has become a commodity. This means that management and upper-layer feature sets are the differentiators. Parallel to this thought was the view that VMware no longer has the best hypervisor, yet their management system is still far superior to the competition (KVM was touted as the best hypervisor several times).

The last piece of the virtualization discussion was around VMware’s acquisition of Nicira.  Some bullet points on that:

  • VMware paid too much for Nicira, but that was unavoidable for a sought-after Valley startup, and it’s a great acquisition overall.
  • It’s no surprise VMware moved into networking; everyone is moving that way.
  • While this is direct competition with Cisco, it is currently confined to a small niche of service provider business. Nicira’s product requires significant custom integration to deploy, and it will take time for VMware to productize it in a fashion usable by the enterprise. Best guess: two years to a real product.
  • Overall the Cisco–VMware partnership is very lucrative for both sides and should not be affected by this in the near term.
  • A seldom-discussed portion of this involves the development expertise that comes with the acquisition. With the hypervisor being a commodity, and differentiation moving into the layers above it, we’ll see more and more variety in hypervisors. This means multi-hypervisor support will be a key component of the upper-level management products where virtualization vendors will compete. Nicira’s team has proven capabilities in this space and can accelerate VMware’s multi-hypervisor strategy.


EMC:

There was a lot of talk about both the vision and execution of EMC over the past year or more. I personally used ‘execution machine’ more than once to describe them (coming from a typically non-EMC Kool-Aid guy). Some key points that resonated over the past few days:

  • EMC’s execution on the VNX/VNXe product lines is astounding. EMC launched a product and went on direct attack into a portion of NetApp’s business that nobody could really touch. Through both sales and marketing excellence, they’ve taken an increasingly large chunk out of this portion of the market. This shores up a breach in their product line that NetApp was using to gain share.
  • EMC’s Isilon acquisition was not only a fantastic choice, it was also integrated quickly and well. Isilon is an excellent product with big data potential, and big data is definitely a market that will generate big revenue in the coming years.
  • EMC’s cloud vision is sound and they are executing well on it.  Additionally they were ahead of their pack of hardware vendor peers in this regard. EMC is embracing a software defined future.

Flash Storage:

I also participated in several discussions around flash and flash storage. Some highlights:

  • PCIe-based flash storage is definitely increasing in sales and enterprise consumption. This market is expected to continue to grow as we strive to move data closer to the processor. There are two methods for this: storage in the server, and servers in the storage. PCIe flash plays on the server side, and EMC Isilon will eventually play on the storage side. Also look for an announcement in the SMB storage space around this during VMworld.
  • One issue in this space is that expensive, fast server-based flash becomes trapped capacity if a server can’t drive enough I/O to it. Additionally, there are data loss concerns with data trapped in the server.
  • Both of these issues are looking to be solved by EMC and IBM who intend to add server based flash into the tiering of shared storage.
  • Most traditional storage vendors’ flash options are ‘bolt-ons’ to a traditional array architecture. This can leave the expensive flash I/O-starved, limiting its performance benefit. Several all-flash startups intend to use this as an inflection point, with flash-based systems designed from the ground up for the performance the media offers.
  • Flash is still not an answer to every problem, and never will be.

The last point that struck me was a potential move away from shared storage as a whole. Microsoft would rather have you use local storage; clusters and big data apps like Hadoop thrive on local storage; and one of the last big draws of shared storage is going away: vMotion. Once shared storage is no longer needed for live virtual machine migration, there will be far less demand for expensive shared systems.


Cloud:

The major cloud discussion I was a part of (mainly as an observer) involved OpenStack. Overall OpenStack has a ton of buzz and a plethora of developers. What it’s lacking is customers, leadership, and someone driving it who can lead a revolution. Additionally, it’s suffering from politics and bureaucracy. One individual who would definitely know described it as impossible to support. My thinking: if CloudStack is sitting there with real customers, an easily deployed system, support, and leadership, why waste cycles continuing down the OpenStack path? The best answer I heard: ego. Everyone wants to build the next Amazon, and CloudStack is too baked to let them make as much of a mark.

Overall it’s an interesting topic, but my thought is this: with limited developers, the industry should be getting behind the best horse and working together.

Big Data:

Big Data was obviously another fun topic. The quote of the week was ‘There are ten people, not companies, that understand Big Data. 6 of them are at Cloudera and the other 4 are locked in Google writing their own checks.’ Basically, big data knowledge is rare, and hiring consultants is not typically a viable option because you need people holding three things: knowledge of big data processing, knowledge of your data, and knowledge of your business. These data scientists aren’t easy to come by. Additionally, contrary to popular hype, Hadoop is not the be-all and end-all of big data; it’s one tool in a large tool chest. Especially when talking about real-time processing, you’ll need to look elsewhere. The consensus was that we are with big data where we were with cloud 2–3 years ago. That being said, CIOs may still need to show big data initiatives (read: spend), so you should see $$ thrown at well-packaged big data solutions geared toward plug-n-play use in the enterprise.

All in all it was an excellent event, and I was humbled as usual to participate in great conversations with so many smart people who are out there driving the future of technology. What I’ve written here is a summary, from my perspective, of the one summit portion I had time to participate in. There is always a good chance I misquoted or misunderstood something, so feel free to call me out. As always, I’d love your feedback, contradictions, or hate mail in the comments.


Horton Hears Hadoop

I’m feeling Seuss-ish, so here goes (lines 1 and 2 by Ken Oestreich, @fountnhead).


Of this poem you should first realize, of course,

Is based on Big Data, and code open-source.

On disk that was spinning… sat data quite large…

So much that in fact it would fill up a barge.


This data had value.  To realize it hard.

The data named Horton.  His contents were barred.

You see to run queries, we needed some help,

Then one day from Yahoo came a very faint yelp.

I’ve got it said Yahoo, we call it Hadoop!

Just give us a minute, we’ll give you the scoop.

With this new fangled tool, value we’ll recoup.


So Horton sat patient, while Yahoo did tell.

Of a man named Doug Cutting, here we will dwell.

Horton, you are so large your values obtuse.

But we can fix that, with a tool MapReduce.

This tool comes from Google, it’s really quite great.

With it and Apache, your value awaits.


We’ll take your large size, distribute it broadly.

Place it on servers, with scale of an army.

Each will have data that sits there quite local.

Data divided and sent as a parcel.


You see with this method my very large friend.

We’ll run great queries watch your value transcend.

Task Trackers / Data Nodes will do all the work.

You’ll be the big hero, no longer the jerk.


With Name Node in charge of tracking the data.

Job Tracker oversees slaves alpha to zeta.

The workload is spread, we parallel process.

To make some sense of this big data nonsense.


With the power of scale, the smallest of all,

Can still have a seat at the processing ball.

They’ll all work in tandem to help sort you out.

And this my friend, is what Hadoop is about.
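For the less poetically inclined, the flow the poem describes — a large dataset split into blocks across Data Nodes, each node counting over its local block (map), and the partial results merged into a final answer (reduce) — can be sketched in a few lines of plain Python. This is an illustrative single-machine analogy, not Hadoop itself; the sample data and partitioning are made up:

```python
from collections import Counter
from functools import reduce

# Hypothetical input, pre-split into "blocks" the way HDFS would
# distribute pieces of a large file across Data Nodes.
partitions = [
    ["the cat sat", "the hat"],
    ["the cat ran"],
]

def map_phase(lines):
    # Each node counts words in its own local block (the map step).
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def reduce_phase(a, b):
    # Partial counts from each node are merged (the reduce step).
    return a + b

# On a real cluster the map calls run in parallel, one per block.
local_counts = [map_phase(p) for p in partitions]
total = reduce(reduce_phase, local_counts)
print(total["the"])  # 3
```

The point the poem makes holds here too: no single worker ever sees the whole dataset, yet the merged result is the same as if one machine had processed it all.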


The Idle Cycle Conundrum

One of the advantages of a private cloud architecture is the flexible pooling of resources that allows rapid change to match business demands. These resource pools adapt to the changing demands of existing services and allow new services to be deployed rapidly. For these pools to maintain adequate performance, they must be designed to handle peak periods, which also results in periods with idle cycles… To see the full article, visit Network Computing.
