My Recent Guest Spot on The Cloudcast (.NET) Podcast

image

Brian Gracely, Aaron Delp, and I discuss converged infrastructure stacks, tech news, and industry direction: http://www.thecloudcast.net/2011/06/cloudcast.html.  It was a lot of fun to chat with them and we covered some great topics.

Technology Passion

The May 24th IDC report on server market share validated a technology I’ve been passionate about for some time: the Cisco Unified Computing System (UCS).  For the first time since UCS’s launch two years ago, Cisco reported server earnings to IDC, with amazing results – #3 in global blade server market share and 1.6% factory revenue share for servers as a whole.  Find Kevin Houston’s summary of the blade numbers here: http://bladesmadesimple.com/2011/05/q1-2011-idc-worldwide-server-market-shows-blade-server-leader-as/ and the IDC report here: http://www.idc.com/getdoc.jsp?containerId=prUS22841411

This report shows that in two years Cisco has either taken significant market share from incumbents, driven new demand, or both.  Regardless of where the numbers came from, they are impressive; as far as servers go it’s a story of David and Goliath proportions, and it’s still playing out, with Cisco about 1% behind IBM in the #2 spot.  I have been a ‘cheerleader’ for UCS for nearly its entire existence, but I didn’t start that way.  I describe the transition here: http://www.definethecloud.net/why-cisco-ucs-is-my-a-game-server-architecture

Prior to Cisco UCS I was a passionate IBM BladeCenter advocate: great technology, reliable hardware, and a go-to brand.  I was passionate about IBM.  When IBM launched the BladeCenter H they worked hard to ensure customer investment protection, and in doing so anchored the H chassis as a whole.  They hindered technical enhancements and created complexity to ensure that the majority of components customers had purchased for BladeCenter E would be forward compatible.  At the time I liked this concept, and IBM had several great engineering ideas built in that provided real value.

In the same time frame HP released the c-Class blade chassis, which had no forward/backward compatibility with previous HP blade architectures but used that clean slate to build a world-class platform with the right technology for the time and the scalability to move far into the future.  At that point, from a technical perspective, I had no choice but to concede that HP was the technical victor, but I still whole-heartedly recommended IBM, because the technical difference was small enough that IBM’s customer investment protection model made them the right big-picture choice in my eyes.

I always work with a default preference, or what I call an ‘A-Game,’ as described in the link above, but my A-Game is constantly evolving.  As I discover a new technology in the spaces where I work, I assess it against my A-Game and decide whether it can provide better value to 80% or more of the customer base I work with.  When a technology is capable of displacing my A-Game, I replace it.

Sean McGee (http://www.mseanmcgee.com/) says it better than I can, so I’ll paraphrase him: ‘I’m a technologist; I work with and promote the best technology I’m aware of, and I can’t support a product once I know a better one exists.’

In the same fashion, I’ll support and promote Cisco UCS until a better competitor proves itself, and I’m happy to see that customers agree, based on the IDC reporting.

For some added fun here are some great Twitter comments from before the IDC announcement served with a side of crow:

image

The Cloud Rules

Cloud Computing Concepts:

These are Twitter-sized quick thoughts.  If you’d like more elaboration or have a comment, participation is highly encouraged.  As I’ve run out of steam on this, I’ve decided to move it into a blog post rather than a page.

World Wide Technology’s Upcoming Geek Day

Coming up very quickly is World Wide Technology’s (www.wwt.com) annual Geek Day on March 10th, 2011 (http://www.wwt.com/geekday/).  I’m very much looking forward to the event for two reasons:

  1. It’s free to customers.
  2. It’s totally focused on geeks interacting with geeks.

The event is focused on live interactive demos from sponsor technology companies, with breakout sessions chosen by the attendees via online voting.  My favorite part is that the sponsors aren’t allowed to do lead collecting (the badge scanning you know from conferences), gimmicky swag giveaways, or stock their booths with gobs of marketing fluff.  Its true focus is the demos and engineer-to-engineer discussion.  See the link above for more information, and the video below for some customer feedback on the events.  I hope to see you here in St. Louis in March!

Cisco Unified Computing System (UCS) High-Level Overview

I’ve been looking for tools to supplement PowerPoint, whiteboards, etc., and Brian Gracely (@bgracely) suggested I try Prezi (www.prezi.com).  Prezi is a very slick tool for non-slide-based presentations.  I don’t think it will replace slides or the whiteboard for me, but it’s a great supplement.  It has a fairly quick learning curve if you watch the short tutorials.  Additionally, it works quite well for mind-mapping: I just throw all of my thoughts on the canvas and then start tying them together, whereas slides are very linear and take more planning.  My favorite feature of Prezi is the ability to break out of the flow and quickly return to it at any time during a presentation.  I love this because real-world discussions never go the way you mapped them out in advance.  To start learning the tool I created the following high-level overview of the Cisco Unified Computing System (UCS).  This content is fully usable and recyclable, so do with it what you want!

An End User’s Cloud Security Question

I recently received an email with a question about the security of cloud computing environments.  The question comes from a knowledgeable user and boils down to ‘Isn’t my data safer on my own systems?’  I thought this would be a great question to open up to the wider community.  Does anyone have any thoughts or feedback on the question from ‘Gramps’ below?

Joe, I'm not a college grad, but a 70 yr old grandfather, that began programming on a Color Computer using an audio tape recorder for storage.  I've written some corporate code for Owens Corning Fiberglas before I retired, so I've been around the keyboard for a while. <grin>  To make a point, notice how you've told me what your email address is, on your blog (see the about page.)  Hackers, and scammers are so efficient, you and I can't even put our actual email out there.  Now, You are in high gear with putting almost your heart and soul on servers that can be anywhere on the planet... even where there are little or no laws (enforced) governing data piracy.  Joe, I'm not trying to pick a fight, no need to, but look at the Wikileaks > etc.  I guess I could cope with using cloud software for doing my things... but can you tell me you are willing to even leave your emails or data files out there too? Somehow, I just feel a whole lot safer having my critical stuff on my flash drive... Talk to me buddy... 

Jim 'Gramps' , Hillsboro OH

Virtualizing the PCIe bus with Aprius

One of the vendors that presented during Gestalt IT’s Tech Field Day 2010 in San Jose was Aprius (http://gestaltit.com/field-day/) (http://www.aprius.com/).  Aprius’s product virtualizes the PCIe I/O bus and pushes that PCIe traffic over 10GE to the server.  In Aprius’s model, an Aprius appliance houses multiple off-the-shelf PCIe cards, and a proprietary Aprius initiator resides in the server.  The concept is not only to share PCIe devices among multiple servers but also to allow the use of multiple types of PCIe cards on servers with limited slots.  Additionally, there would be some implications for VMware virtualized servers, as you could potentially utilize VMware DirectPath I/O to present these cards directly to a VM.  Aprius’s main competitor is Xsigo, which provides a similar benefit using a PCIe appliance containing proprietary PCIe cards and pushing the I/O over standard 10G Ethernet or InfiniBand to the server NIC.  I look at the PCIe I/O virtualization space as a very small niche with limited use cases; let’s take a look at this in reference to Aprius.

With the industry moving more and more toward x64 server virtualization using VMware, Hyper-V, and Xen, hardware compatibility lists come very much into play.  If a card is not on the list, it most likely won’t work and is definitely not supported.  Aprius skates around this issue by using a card that appears transparent to the operating system and instead presents only the I/O devices assigned to a given server via the appliance.  This means the Aprius appliance should work with any given virtualization platform, but support will be another issue.  Until Aprius is on the Hardware Compatibility List (HCL) for a given hypervisor, I wouldn’t recommend it to my customers for virtualization.  Additionally, the biggest benefit I’d see for using Aprius in a virtualization environment would be passing VMs PCIe devices that aren’t traditionally virtualized – think fax-modems, etc.  This still wouldn’t be possible with the Aprius device, because those cards aren’t on the virtualization HCL.

The next problem with these types of products is that the industry is moving to consolidate storage, network, and HPC traffic on the same wire.  This can be done with FCoE, iSCSI, NFS, CIFS, etc., or any combination you choose.  That move is minimizing the I/O card requirements in the server, and the need for specialized PCIe devices is getting smaller every day.  With fewer PCIe devices needed for any given server, what is the purpose of a PCIe aggregator?

Another use case of Aprius's technology they shared with us was sharing a single card, for example a 10GE NIC, among several servers as a failover path rather than buying redundant cards per server.  This seems like a major stretch: it adds an Aprius appliance as a point of failure in your redundant path, and it still requires an Aprius adapter in each server instead of the redundant NIC.

My main issue with both Aprius and Xsigo is that they both require me to put their boxes in my data path as an additional single point of failure.  You're purchasing their appliance and their cards and using them to aggregate all of your server I/O, leaving the appliance as a single point of failure for multiple servers' I/O requirements.  I just can't swallow that unless I have some one-off type of need that can’t be solved any other way.

The question I neglected to ask Aprius's CEO during the short period he joined us is whether the company was started with the intent to sell a product or the intent to sell a company.  My guess is that they're only interested in selling enough appliances to get the company as a whole noticed and purchased.  The downside of that is they don't seem to have enough secret sauce that can't be easily copied to be valuable as an acquisition.

The technology both Aprius and Xsigo market would really only be of use if purchased by a larger server vendor with a big R&D budget and some weight in the standards community.  It could then be used to push a PCIe-over-Ethernet standard to drive adoption.  Additionally, the appliances may have a play within that vendor's blade architecture as a way of minimizing required blade components and increasing I/O flexibility, i.e. a PCIe slot blade/module that could be shared across the chassis.

Summary:

Aprius seems to be a fantastic product with a tiny market that will continue to shrink.  This will never be a mainstream data center product, but it will fit the bill for niche issues and one-off deployments.  In their shoes my goal would be to court the server vendors and find a buyer before the technology becomes irrelevant or copied.  Their only competition I’m aware of in this space is Xsigo, and I think Xsigo has a better shot based on deployment model: their proprietary card in each server becomes a non-issue if a server vendor buys them and builds it into the system board.

Promote Your Strategy to Boost Your Cloud Execution

Sitting on yet another flight during takeoff, I was forced to read print because my Kindle could obviously disable the autopilot system and force us to crash-land on a secret government island and start a horrible soap opera with a four-letter title.  Since Harvard Business Review isn’t available for the Kindle, it’s typically my takeoff and landing material.  At $17.00 US per issue it’s only barely worth the price, but the summary before each article puts it over the top, because it allows me to quickly separate the garbage and filler from the articles that aren’t common sense (uncommon virtue or not).  One of the articles that caught my eye was ‘How Hierarchy Can Hurt Strategy Execution’ (HBR, July–August 2010).  It’s the one or two articles like this per issue that keep me occasionally buying HBR.

The key findings in the article are:

Overall, the premise of the article is that the findings suggest a more bottom-up approach to strategy development and more transparent communication of overall strategy through the ranks.  Before I continue, I highly suggest you go find and read the article; my summary doesn’t do it justice.

This article resonated deeply with me for two reasons:

1) I’ve worked for companies in the past in which strategy and vision were never discussed and input from below was never sought out; the negative effects were openly apparent.  I also currently work for a company that clearly understands the importance of promoting strategy and vision through the ranks, accepting input from all levels, and ensuring that the entire company is operating toward a common set of goals.  Ask anyone within the company, from a receptionist to the CEO, and they will be able to tell you the company’s values, vision, and year-to-year goals, as well as why they matter.  The difference it makes in both morale and execution is amazing.

2) This is information that should be taken extremely seriously by any company engaging in a cloud strategy.  Moving an IT computing model to a cloud-based model will be a disruptive change, both technically and organizationally, and there are many pitfalls that can occur if everyone involved is not working toward a common set of goals.

Whether you’re moving to a public, private, or hybrid cloud model, there will be a lot of change.  The decision to make that move is typically going to happen at an executive level, but it will be carried out by the IT team and affect them the most directly.  If those teams don’t understand the goal, have a chance to provide input into the execution, and have a clear definition of what their role will be in the cloud model, you will have a much harder time with the move, or fail completely.

How helpful is a system administrator going to be in moving your applications to the cloud if they think that once the applications get there, they’re out of a job?  Whether that fear is realistic or not isn’t going to matter if it’s not addressed.  The other side of that communication coin is the knowledge gathered from each level of your IT team.  There may be snags or beneficial ideas that get missed if everyone isn’t involved in the process.

Once a decision has been made to migrate to a cloud architecture, clearly define the goals and benefits, then work with the entire team to develop the strategy and roadmap for the migration, as well as defining what the individual contributors’ roles will be after the migration.  If various positions within the IT department will not be required after the migration is complete, analyze the individuals in those roles and see where they may fit in other parts of the organization.  Involving them in that discussion is key; they may have career goals and skill sets that management teams aren’t aware of.  I’m a big believer that if you have the right people, you can find or create the right fit.

Where Are You?

Joe wrote an excellent guest blog on my website called To Blade Or Not To Blade and offered me the same opportunity.  Being a huge fan of Joe’s, I'm honored.  One of my favorite blog posts is his Data Center 101: Server Virtualization, in which Joe explained the benefits of server virtualization in the data center.  I feel this post is appropriate because Joe showed us that virtualization is “supposed” to make life easier for Customers.  However, a lot of vendors have yet to come up with management tools that facilitate that concept.

It’s a known fact that I’m a huge Underdog fan.  However, what people don’t know is that Scooby-Doo is my second favorite cartoon dog.  As a kid I always stayed current with the latest Underdog and Scooby-Doo after-school episodes.  This probably explains why my mother was always upset with me for not doing my homework first.  I always got a kick out of the fact that no matter how many times Mystery Inc. would split up to find the ghost, it was always Scooby-Doo and Shaggy that managed (accidentally) to come face-to-face with the ghost while looking for food.  Customers face the same issues that Scooby and Shaggy faced in ghost hunting.  If a Customer was in VMware vCenter doing administrative tasks, there was no way to effectively manage HBA settings (the ghost) without hunting around or opening a different management interface.  Emulex has solved that issue with the new OneCommand Manager Plug-in for vCenter (OCM-VCp).

Being a former Systems Administrator in a previous life, I understand the frustration of opening multiple management interfaces to do a task.  Emulex has already simplified infrastructure management with OneCommand.  In OneCommand, Customers already have the capability to manage HBAs across all protocols and generations, see/change CEE settings, and do batch firmware/driver parameter updates (among a myriad of other capabilities).

 

Not convinced? No problem.  Let me introduce you to the OCM-VCp interface.  Take a look: you now have the opportunity to centrally discover, monitor, and manage HBAs across the infrastructure from within vCenter, including vPort-to-VM associations.  How cool is that?  Very.

 

You get all the functions of the OneCommand HBA management application.  No more looking for the elusive ghost called HBA settings.  No more going back and forth between management interfaces, which increases the probability of messing up the settings.  Out of all the cool capabilities, here are the top 4 functions that I feel stand out for vCenter:

 

In my opinion this couldn't have come any sooner.  As more organizations look to do more with less (the virtualization principle), OCM-VCp will be the cornerstone of easing infrastructure management within VMware vCenter.  There is no learning curve, because the plug-in has the same look and feel as the standalone management interface; in other words, it's very intuitive.  So if you or your Customers are expanding their adoption of virtualization, take a serious look at this plug-in, because it's going to make your life so much easier.

http://www.niketown588.com/

My First Podcast: ‘Coffee With Thomas’

I had the pleasure of joining Thomas Jones on his new podcast ‘Coffee With Thomas.’  His podcast is always good, well put together, and about 30 minutes long.  It’s done in a very refreshing conversational style, as if you’re having a cup of coffee.  If you’re interested in listening to us talk technology, UCS, Apple, UFC, and other topics, check it out: http://www.niketown588.com/2010/09/coffee-with-thomas-episode-5-wwts.html.

 

Thanks for the opportunity Thomas, that was a lot of fun!