Where Are You?

Joe wrote an excellent guest blog on my website called To Blade Or Not To Blade and offered me the same opportunity. Being a huge fan of Joe’s, I’m honored. One of my favorite blog posts is his Data Center 101: Server Virtualization, in which Joe explained the benefits of server virtualization in the data center. I feel this post is appropriate because Joe showed us that virtualization is “supposed” to make life easier for Customers. However, a lot of vendors have yet to come up with management tools that deliver on that concept.

It’s a known fact that I’m a huge Underdog fan. However, what people don’t know is that Scooby-Doo is my second favorite cartoon dog. As a kid I always stayed current with the latest Underdog and Scooby-Doo after-school episodes. This probably explains why my mother was always upset with me for not doing my homework first. I always got a kick out of the fact that no matter how many times Mystery Inc. would split up to find the ghost, it was always Scooby-Doo and Shaggy that managed (accidentally) to come face-to-face with the ghost while looking for food. Customers face the same issues that Scooby and Shaggy faced in ghost hunting. If a Customer was in VMware vCenter doing administrative tasks, there was no way to effectively manage HBA settings (the ghost) without hunting around or opening a different management interface. Emulex has solved that issue with the new OneCommand Manager Plug-in for vCenter (OCM-VCp).

Being a former Systems Administrator in a previous life, I understand the frustration of opening multiple management interfaces to accomplish a single task. Emulex has already simplified infrastructure management with OneCommand. With OneCommand, Customers already have the capability to manage HBAs across all protocols and generations, see/change CEE settings, and do batch firmware/driver parameter updates (among a myriad of other capabilities).


Not convinced? No problem. Let me introduce you to the OCM-VCp interface. Take a look: you now have the opportunity to centrally discover, monitor, and manage HBAs across the infrastructure from within vCenter, including vPort-to-VM associations. How cool is that? Very.


You get all the functions of the OneCommand HBA management application. No more looking for the elusive ghost called HBA settings. No more going back and forth between management interfaces, which only increases the probability of messing up the settings. Out of all the cool capabilities, here are the top four functions that I feel stand out for vCenter:

  • Diagnostic Tools Tab. This allows you to run PCI/internal/external loopback and POST tests on a specific port on a specific VM.
  • Driver Parameters Tab. This tab is important to SAN/Network Administrators; it’s where you can update/change driver parameters. The cool thing is that you can make changes temporarily or save them to a file for batch infrastructure updates/changes.
  • Maintenance Tab. Allows you to update firmware (on a single host or via batch file) without rebooting the host.
  • CEE Settings Tab. Very important for the Datacenter Bridging Capability Exchange Protocol (DCBX).
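The “save to a file for batch updates” idea above is worth a concrete illustration. Here’s a purely hypothetical Python sketch of the concept — the parameter-file format, host names, and function names are all my invention for illustration, not Emulex’s actual OneCommand batch format or API:

```python
# Illustrative sketch of "save driver parameters once, apply to many hosts."
# The name=value file format and these helpers are hypothetical, not the
# real OneCommand tooling.

def parse_param_file(text):
    """Parse simple 'name=value' lines into a dict of driver parameters."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        name, _, value = line.partition("=")
        params[name.strip()] = value.strip()
    return params

def plan_batch_update(hosts, params):
    """Pair every host with the same parameter set: one change plan per host."""
    return [{"host": h, "params": dict(params)} for h in hosts]

# One saved file drives the change across the whole infrastructure.
saved = "queue_depth=32\nlink_speed=auto\n"
plan = plan_batch_update(["esx01", "esx02"], parse_param_file(saved))
```

The point is the workflow, not the code: capture the settings once, then fan the same change out to every host instead of clicking through each one.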


In my opinion this couldn’t have come soon enough. As more organizations look to do more with less (the virtualization principle), OCM-VCp will be the cornerstone of easing infrastructure management within VMware vCenter. There is no learning curve because the plug-in has the same look and feel as the standalone management interface; in other words, it’s very intuitive. So if you or your Customer(s) are expanding your adoption of virtualization, take a serious look at this plug-in, because it’s going to make your life so much easier.



How Emulex Broke Out of the ‘Card Pusher’ Box

A few years back, when my primary responsibility was architecting server, blade, SAN, and virtualization solutions for customers, I selected the appropriate HBA based on the following rule: whichever (QLogic or Emulex) is less expensive today through the server OEM I’m using.  I had no technical or personal preference for one or the other.  They were both stable, performed well, and allowed my customers to do what they needed to do.  On any given day one might show higher performance than the other, but that’s always subject to the testing criteria and will be fairly irrelevant for a great many customers.  At that point I considered them both ‘Card Pushers.’

Last year I had the opportunity to speak at two Emulex Partner product launch events in the UK and Germany.  My presentation was a vendor independent technical discussion on the drivers for consolidating disparate networks on 10GE and above.  I had no prior knowledge of the exact nature of the product being launched, and didn’t expect anything more than a Gen 2 single chip CNA, nothing to get excited over.  I was wrong.

Sitting through the keynote presentations by Emulex executives, I quickly realized OneConnect was something totally different, and that with it Emulex was doing two things:

  1. Betting the farm on Ethernet
  2. Rebranding themselves as more than just a card pusher.

Now, just to get this out of the way: Emulex did not, has not, and to my knowledge will not stop pursuing better and faster FC technology; their 4Gb and 8Gb FC HBAs are still rock-solid, high-performance pure FC cards.  What they are doing, however, is obviously placing a large bet (and R&D investment) on Ethernet as a whole.


The Emulex OneConnect is a Generation 2 Converged Network Adapter (CNA), but it’s a lot more than that.  It also does TCP offload, operates as an iSCSI HBA, and handles FCoE, including the full suite of DCB standards.  It’s the Baskin-Robbins of I/O interface cards, although admittedly with no FCoTR support 😉 (http://www.definethecloud.net/?p=380)  The technology behind the card impressed me, but the licensing model is what makes it matter.  With all that technology built into the hardware you’d expect a nice hefty price tag to go with it.  That’s not the case with the OneConnect card: the licensing options allow you to buy the card at a cost equivalent to competing 10GE NICs and license iSCSI or FCoE if/when desired (licensing models may vary with OEMs.)  This means Emulex, a Fibre Channel HBA vendor, is happy to sell you a high-performance 10GE NIC.  In IT there is never one tool for every job, but as far as I/O cards go this one comes close.

You don’t have to take my word for it when it comes to how good this card is; HP’s decision to integrate it into blade and rack-mount system boards speaks volumes.  Take a look at Thomas Jones’ post on the Emulex Federal Blog for more info (http://www.emulex.com/blogs/federal/2010/07/13/the-little-trophy-that-meant-a-lot/.)  Additionally, Cisco is shipping OneConnect options for UCS blades and rack mounts, and IBM also OEMs the product.

In addition to the OneConnect launch, Emulex has also pushed to expand into other markets. Products like OneCommand Vision promise better network I/O monitoring and management tools, and are uniquely positioned to deliver them through the eyes of the OneConnect adapter, which can see all networks connected to the server.


Overall, Emulex has truly moved outside of the ‘Card Pusher’ box and uniquely positioned themselves above their peers.  In a data center market where many traditional Fibre Channel vendors are clinging to pure FC like a sinking ship, Emulex has embraced 10GE and offers a product that lets the customer choose the consolidation method or methods that work for them.


HP’s FlexFabric

There were quite a few announcements this week at the HP Technology Forum in Vegas.  Several of these announcements were extremely interesting; the ones that resonated the most with me were:

Superdome 2:

I’m not familiar with the Superdome 1, nor am I in any way an expert on non-x86 architectures.  In fact, that’s exactly what struck me as excellent about this product announcement.  It allows the mission-critical servers that a company chooses to, or must, run on non-x86 hardware to run right alongside the more common x86 architecture in the same chassis.  This further consolidates the data center and reduces infrastructure for customers with mixed environments, of which there are many.  While there is a current push among some customers to migrate all data center applications onto x86-based platforms, this is not fast, cheap, or good for every use case.  Superdome 2 provides a common infrastructure for both the mission-critical applications and the x86-based applications.

For a more technical description see Kevin Houston’s Superdome 2 blog: http://bladesmadesimple.com/2010/04/its-a-bird-its-a-plane-its-superdome-2-on-a-blade-server/.

Note: As stated, I’m no expert in this space and I have no technical knowledge of the Superdome platform; conceptually it makes a lot of sense and seems like a move in the right direction.

Common Infrastructure:

There was a lot of talk in some of the keynotes about a common look, feel, and infrastructure across the separate HP systems (storage, servers, etc.)  At first I laughed this off as a ‘who cares,’ but then I started to think about it.  If HP takes this message seriously and standardizes rail kits, cable management, components (where possible), etc., this will have big benefits for administration and deployment of equipment.

If you’ve never done a good deal of racking/stacking of data center gear you may not see the value here, but I spent a lot of time on the integration side with this as part of my job.  Within a single vendor (or sometimes product line), rail kits for servers/storage, rack-mounting hardware, etc. can all be different.  This adds time and complexity to integrating systems and can sometimes lead to less-than-ideal builds.  For example, the first vBlock I helped a partner configure (for demo purposes only) had the two UCS systems stacked on top of one another at the bottom of the rack with no mounting hardware.  The reason for this was that the EMC racks being used had different rail mounts than the UCS system was designed for.  Issues like this can cause problems and delays, especially when the people in charge of infrastructure aren’t properly engaged during purchasing (very common.)

Overall I can see this as a very good thing for the end user.

HP FlexFabric

This is the piece that really grabbed my attention while watching the constant Twitter stream of HP announcements.  HP FlexFabric brings network consolidation to the HP blade chassis.  I specifically say network consolidation, because HP got this piece right.  Yes, it does FCoE, but that doesn’t mean you have to use it.  FlexFabric provides the converged network tools to carry any protocol you want over 10GE to the blades and split that out into separate networks at the chassis level.  Here’s a picture of the switch from Kevin Houston’s blog: http://bladesmadesimple.com/2010/06/first-look-hps-new-blade-servers-and-converged-switch-hptf/.

HP Virtual Connect FlexFabric 10Gb/24-Port Module

The first thing to note when looking at this device is that all the front-end uplink ports look the same, so how do they split out Fibre Channel and Ethernet?  The answer is that QLogic (the manufacturer of the switch) has been doing some heavy lifting on the engineering side.  They’ve designed the front-end ports to support the optics for either Fibre Channel or 10GE.  This means you’ve got flexibility in how you use your bandwidth.  The ability to do this is an industry first; although the Cisco Nexus 5000 hardware ASIC has been capable of this since FCS, there it was implemented on a per-module basis rather than the per-port basis of this switch.

The next piece that was quite interesting, and really provides flexibility and choice in the HP FlexFabric concept, is their decision to use Emulex’s OneConnect adapter as the LAN on Motherboard (LOM.)  This was a very smart decision by HP.  Emulex’s OneConnect is a product that has impressed me from square one; it shows a traditionally Fibre Channel company embracing the fact that Ethernet is the future of storage without locking the decision into an upper-layer protocol (ULP.)  OneConnect provides 10GE connectivity, TCP offload, iSCSI offload/boot, and FCoE capability all on the same card; now that’s a converged network!  HP seems to have seen the value there as well and built this into the system board.

Take a step back and soak that in: LOM has been owned by Intel, Broadcom, and other traditional NIC vendors since the beginning.  Until last year, Emulex was looked at as one of two solid FC HBA vendors.  As of this week, HP announced the ousting of a traditional NIC vendor for a traditional FC vendor on their system board.  That’s a big win for Emulex.  Kudos to Emulex for the technology (and the business decisions behind it) and to HP for recognizing that value.

Looking a little deeper, the next big piece of this overall architecture is that the whole FlexFabric system supports HP’s FlexConnect technology, which allows a server admin to carve up a single physical 10GE link into four logical links that are presented to the OS as individual NICs.
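The carving concept is easy to picture in code. Here’s an illustrative Python sketch of the constraint involved — the logical NIC names and bandwidth numbers are made up, and the real allocation is configured in Virtual Connect, not in server-side code like this:

```python
# Illustrative sketch of carving one physical 10GE link into logical NICs,
# as FlexConnect presents them to the OS. Names and numbers are hypothetical.

PHYSICAL_GBPS = 10.0  # one physical 10GE port

def carve(allocations):
    """Validate that logical-NIC bandwidth shares fit the physical link.

    allocations: dict mapping logical NIC name -> guaranteed bandwidth (Gbps).
    Returns leftover (unallocated) bandwidth, or raises if the plan asks for
    more than four logical NICs or oversubscribes the physical link.
    """
    if len(allocations) > 4:
        raise ValueError("at most four logical NICs per physical port")
    total = sum(allocations.values())
    if total > PHYSICAL_GBPS:
        raise ValueError(f"over-allocated: {total} Gbps on a {PHYSICAL_GBPS} Gbps link")
    return PHYSICAL_GBPS - total

# e.g. management, vMotion, storage, and VM traffic sharing one port:
leftover = carve({"mgmt": 0.5, "vmotion": 2.0, "storage": 4.0, "vm_traffic": 3.0})
```

The OS sees four independent NICs, but they are all slices of one 10GE pipe, so the shares have to fit within it — that is the whole trick.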

The only drawback I see to the FlexFabric picture is that FCoE is only used within the chassis and split out into separate networks from there.  This can definitely increase the required infrastructure depending on the architecture.  I’ll wait to go too deep into that until I hear a few good lines of thinking on why that direction was taken.


HP had a strong week in Vegas.  These were only a few of the announcements; several others, including mind-blowing stuff from HP Labs (start protecting John Connor now), can be found on blogs and HP’s website.  Of all of the announcements, FlexFabric was the one that really caught my attention.  It embraces the idea of I/O consolidation without clinging to FCoE as the only way to do it, and it greatly increases the competitive landscape in that market, which always benefits the end-user/customer.

Comments, corrections, bitches, moans, gripes, and complaints all welcome.
