FlexFabric – Small Step, Right Direction

Note: I’ve added a couple of corrections below thanks to Stuart Miniman at Wikibon (http://wikibon.org/wiki/v/FCoE_Standards). See the comments for more.

I’ve been digging a little more into the HP FlexFabric announcements in order to wrap my head around the benefits and positioning. I’m a big endorser of a single network for all applications (LAN, SAN, IPT, HPC, etc.), and FCoE is my tool of choice for that right now. While I don’t see FCoE as the end goal, mainly due to the limitations of SCSI itself, which is at the heart of FC, FCoE, and iSCSI, I do see FCoE as the way to go for convergence today. FCoE provides a seamless migration path for customers with an investment in Fibre Channel infrastructure and runs alongside other current converged models such as iSCSI, NFS, HTTP, you name it. As such, any vendor support for FCoE devices is a step in the right direction and provides options to customers looking to reduce infrastructure and cost.

FCoE is quickly moving beyond the access layer, where it has been available for two years now. That being said, the access layer (server connections) is where it provides the strongest benefits: infrastructure consolidation, cabling reduction, and reduced power/cooling. A properly designed FCoE architecture provides a large reduction in the overall components required for server I/O. Let’s take a look at a very simple example using standalone servers (rack mount or tower).

In the diagram we see traditional Top-of-Rack (ToR) cabling on the left vs. FCoE ToR cabling on the right. This is for the access layer connections only. The infrastructure and cabling reduction is immediately apparent for server connectivity: 4 switches, 4 cables, and 2-4 I/O cards reduced to 2, 2, and 2. This assumes only 2 networking ports are being used, which is not the case in many environments, including virtualized servers. For servers connected using multiple 1GE ports the savings are even greater.
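To make the arithmetic concrete, here is a minimal back-of-the-napkin sketch in Python. The per-server port counts and the 20-server rack are illustrative assumptions on my part, not vendor specifications; it simply counts what the diagram is showing.

```python
# Rough per-rack component count for the access layer: separate LAN + SAN cabling
# vs. converged FCoE. Per-server port counts are illustrative assumptions, not
# vendor specifications.

def access_layer_components(servers, lan_ports_per_server=2, san_ports_per_server=2,
                            cna_ports_per_server=2):
    """Return component counts for the traditional and converged models."""
    traditional = {
        "tor_switches": 4,  # 2x Ethernet + 2x FC for redundancy
        "cables": servers * (lan_ports_per_server + san_ports_per_server),
        "adapter_ports": servers * (lan_ports_per_server + san_ports_per_server),
    }
    converged = {
        "tor_switches": 2,  # 2x FCoE/DCB switches carry both traffic types
        "cables": servers * cna_ports_per_server,
        "adapter_ports": servers * cna_ports_per_server,
    }
    return traditional, converged

if __name__ == "__main__":
    trad, conv = access_layer_components(servers=20)
    print("Traditional LAN + SAN:", trad)
    print("Converged FCoE:      ", conv)
```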

Two major vendor options exist for this type of cabling today:

Brocade:

  • Brocade 8000 – This is a 1RU ToR CEE/FCoE switch with 24x 10GE fixed ports and 8x 1/2/4/8G fixed FC ports.  Supports directly connected FCoE servers. 
    • This can be purchased as an HP OEM product.
  • Brocade FCoE 10-24 Blade – This is a blade for the Brocade DCX Fibre Channel chassis with 24x 10GE ports supporting CEE/FCoE.  Supports directly connected FCoE servers.

Note: Both Brocade data sheets list support for CEE, a proprietary pre-standard implementation of DCB, which is in the process of being standardized with some parts ratified by the IEEE and others pending. The terms do get used interchangeably, so whether this is a typo or an actual pre-standard implementation is something to discuss with your Brocade account team during the design phase. Additionally, Brocade specifically states use for Tier 3 and ‘some Tier 2’ applications, which suggests a lack of confidence in the protocol and may suggest a lack of commitment to support and future products. (This is what I would read from it based on the data sheets and Brocade’s overall positioning on FCoE from the start.)

Cisco:

  • Nexus 5000 – There are two versions of the Nexus 5000:
    • 1RU 5010 with 20x 10GE ports and 1 expansion module slot, which can be used to add 6x 1/2/4/8G FC, 6x 10GE, 8x 1/2/4G FC, or 4x 1/2/4G FC and 4x 10GE
    • 2RU 5020 with 40x 10GE ports and 2 expansion module slots, which can be used to add 6x 1/2/4/8G FC, 6x 10GE, 8x 1/2/4G FC, or 4x 1/2/4G FC and 4x 10GE
    • Both can be purchased as HP OEM products.
  • Nexus 7000 – There are two versions of the Nexus 7000, both core/aggregation layer data center switches. The latest announced 32x 1/10GE line card supports the DCB standards, along with Cisco FabricPath, which is based on the pre-ratification TRILL standard.

Note: The Nexus 7000 currently only supports the DCB standard, not FCoE.  FCoE support is planned for Q3CY10 and will allow for multi-hop consolidated fabrics.

Taking the noted considerations into account, any of the above options will provide the infrastructure reduction shown in the diagram above for standalone server solutions.

When we move into blade servers the options are different. This is because blade chassis have built-in I/O components, which are typically switches. Let’s look at the options for IBM and Dell, then take a look at what HP and FlexFabric bring to the table for HP C-Class systems.

IBM:

  • BNT Virtual Fabric 10G Switch Module – This module provides 1/10GE connectivity and will support FCoE within the chassis when paired with the Qlogic switch discussed below.
  • Qlogic Virtual Fabric Extension Module – This module provides 6x 8Gb FC ports and, when paired with the BNT switch above, will provide FCoE connectivity to CNA cards in the blades.
  • Cisco Nexus 4000 – This module is a DCB switch providing FCoE frame delivery while enforcing the DCB standards for proper FCoE handling. This device needs to be connected to an upstream Nexus 5000 for Fibre Channel Forwarder functionality. Using the Nexus 5000 in conjunction with one or more Nexus 4000s provides multi-hop FCoE for blade server deployments.
  • IBM 10GE Pass-Through – This acts as a 1-to-1 pass-through for 10GE connectivity to IBM blades.  Rather than providing switching functionality this device provides a single 10GE direct link for each blade.  Using this device IBM blades can be connected via FCoE to any of the same solutions mentioned under standalone servers.

Note: Versions of the Nexus 4000 also exist for HP and Dell blades but have not been certified by those vendors; currently only IBM supports the device. Additionally, the Nexus 4000 is a standards-compliant DCB switch without FCF capabilities: it provides the lossless delivery and bandwidth management required for FCoE frames, along with FIP snooping for FC security on Ethernet networks, but it does not handle functions such as encapsulation and de-encapsulation. This means the Nexus 4000 can be used with any vendor’s FCoE forwarder (currently Nexus or Brocade), pending joint support from both companies.
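To illustrate that division of labor, here is a toy Python sketch of the roles described above. The classes, method names, and frame structure are purely my own illustration of the concept, not any vendor’s API or the actual FC-BB-5 frame format: the DCB bridge only polices and forwards lossless FCoE traffic, while the Fibre Channel Forwarder upstream owns encapsulation and de-encapsulation.

```python
# Conceptual sketch only: a FIP-snooping DCB bridge (Nexus 4000 class device)
# versus a Fibre Channel Forwarder (e.g. an upstream Nexus 5000 or Brocade 8000).
# Names and frame fields are illustrative, not a real implementation.

class CNA:
    """Server-side converged adapter: wraps FC frames for Ethernet transport."""
    def __init__(self, mac):
        self.mac = mac

    def encapsulate(self, fc_frame, fcf_mac):
        return {"dst_mac": fcf_mac, "src_mac": self.mac, "payload": fc_frame}


class DCBBridge:
    """FCoE pass-through: lossless forwarding plus FIP snooping, no FCF functions."""
    def __init__(self):
        self.known_fcf_macs = set()   # learned by snooping FIP exchanges

    def snoop_fip(self, fcf_mac):
        # FIP snooping lets the bridge permit only traffic to/from legitimate FCFs.
        self.known_fcf_macs.add(fcf_mac)

    def forward(self, frame):
        if frame["dst_mac"] in self.known_fcf_macs:
            return frame              # forwarded untouched; no (de)encapsulation here
        raise ValueError("blocked: destination is not a known FCF")


class FCF:
    """Fibre Channel Forwarder: terminates FCoE and hands FC frames to the SAN."""
    def __init__(self, mac):
        self.mac = mac

    def de_encapsulate(self, fcoe_frame):
        return fcoe_frame["payload"]


if __name__ == "__main__":
    fcf = FCF(mac="0e:fc:00:01:02:03")          # illustrative MAC address
    bridge = DCBBridge()
    bridge.snoop_fip(fcf.mac)
    cna = CNA(mac="00:11:22:33:44:55")
    frame = cna.encapsulate(fc_frame=b"SCSI command", fcf_mac=fcf.mac)
    print(fcf.de_encapsulate(bridge.forward(frame)))
```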

Dell:

  • Dell 10GE Pass-Through – Like the IBM pass-through, the Dell pass-through allows connectivity from a blade to any of the rack mount solutions listed above.

Both Dell and IBM offer pass-through technology, which allows blades to be directly connected just as a rack mount server would be. IBM additionally offers two other options: using the Qlogic and BNT switches to provide FCoE capability to blades, and using the Nexus 4000 to provide FCoE to blades.

Let’s take a look at the HP options for FCoE capability and how they fit into the blade ecosystem.

HP:

  • 10GE Pass-Through – HP also offers a 10GE pass-through providing the same functionality as both IBM and Dell.
  • HP FlexFabric – The FlexFabric switch is a Qlogic FCoE switch OEM’d by HP which provides a configurable combination of FC and 10GE ports upstream and FCoE connectivity across the chassis mid-plane. This solution requires only two switches for redundancy, as opposed to four with separate FC and Ethernet configurations. Additionally, this solution works with HP FlexConnect, providing 4 logical server ports for each physical 10GE link on a blade, and is part of the VirtualConnect solution, which reduces the management overhead of traditional blade systems through software.

On the surface FlexFabric sounds like the way to go with HP blades, and it very well may be, but let’s take a look at what it’s doing for our infrastructure/cable consolidation.


With the FlexFabric solution FCoE exists only within the chassis and is split to native FC and Ethernet moving up to the Access or Aggregation layer switches. This means that while the number of required chassis switch components and blade I/O cards is reduced from four to two, there has been no reduction in cabling. Additionally, HP has no announced roadmap for a multi-hop FCoE device, and its current offerings for ToR multi-hop are OEM Cisco or Brocade switches. Because the HP FlexFabric switch is a Qlogic switch, any FC or FCoE implementation using FlexFabric connected to an existing SAN will be a mixed vendor SAN, which can pose challenges with compatibility, feature/firmware disparity, and separate management models.
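To put a rough number on the cabling point, here’s a small Python sketch comparing per-chassis uplinks when FCoE is split back out at the chassis edge versus carried end-to-end to a converged ToR switch. The uplink counts are illustrative assumptions of mine, not HP-published figures; the point is simply that splitting at the chassis keeps the separate FC and Ethernet cable runs.

```python
# Rough per-chassis uplink count: FlexFabric splitting to native FC + Ethernet at
# the chassis edge vs. carrying converged FCoE all the way to a ToR FCF.
# All counts are illustrative assumptions, not HP-published figures.

def chassis_uplinks(eth_uplinks_per_module=4, fc_uplinks_per_module=2,
                    converged_uplinks_per_module=4, redundant_modules=2):
    split_at_chassis = redundant_modules * (eth_uplinks_per_module + fc_uplinks_per_module)
    end_to_end_fcoe = redundant_modules * converged_uplinks_per_module
    return split_at_chassis, end_to_end_fcoe

if __name__ == "__main__":
    split, converged = chassis_uplinks()
    print(f"FlexFabric (split at chassis edge): {split} uplink cables per chassis")
    print(f"End-to-end FCoE to a converged ToR: {converged} uplink cables per chassis")
```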

HP’s announcement to utilize the Emulex OneConnect adapter as the LAN on motherboard (LOM) adapter makes FlexFabric more attractive, but the benefits of that LOM would also be realized using the 10GE Pass-Through connected to a 3rd party FCoE switch, or a native Nexus 4000 in the chassis if HP were to approve and begin to OEM the product.

Summary:

As the title states, FlexFabric is definitely a step in the right direction, but it’s only a small one. It shows real FCoE commitment, which is fantastic and should reduce the FCoE FUD flinging. The main limitations are the lack of cable reduction and the overall FCoE portfolio. For customers using, or planning to use, VirtualConnect to reduce the management overhead of the traditional blade architecture, this is a great solution to reduce chassis infrastructure. For other customers it would be prudent to seriously consider the benefits and drawbacks of the pass-through module connected to one of the HP OEM ToR FCoE switches.


HP’s FlexFabric

There were quite a few announcements this week at the HP Technology Forum in Vegas. Several of these announcements were extremely interesting; the ones that resonated the most with me were:

Superdome 2:

I’m not familiar with the Superdome 1, nor am I in any way an expert on non-x86 architectures. In fact, that’s exactly what struck me as excellent about this product announcement. It allows the mission-critical workloads that a company chooses to, or must, run on non-x86 hardware to run right alongside the more common x86 architecture in the same chassis. This further consolidates the data center and reduces infrastructure for customers with mixed environments, of which there are many. While some customers are currently pushing to migrate all data center applications onto x86-based platforms, this is not fast, cheap, or good for every use case. Superdome 2 provides a common infrastructure for both the mission-critical applications and the x86-based applications.

For a more technical description see Kevin Houston’s Superdome 2 blog: http://bladesmadesimple.com/2010/04/its-a-bird-its-a-plane-its-superdome-2-on-a-blade-server/.

Note: As stated, I’m no expert in this space and have no deep technical knowledge of the Superdome platform, but conceptually it makes a lot of sense and seems like a move in the right direction.

Common Infrastructure:

There was a lot of talk in some of the keynotes about a common look, feel, and infrastructure across the separate HP systems (storage, servers, etc.). At first I laughed this off as a ‘who cares,’ but then I started to think about it. If HP takes this message seriously and standardizes rail kits, cable management, components (where possible), etc., this will have big benefits for administration and deployment of equipment.

If you’ve never done a good deal of racking/stacking of data center gear you may not see the value here, but I spent a lot of time on the integration side with this as part of my job. Within a single vendor (or sometimes a single product line) rail kits for servers/storage, rack mounting hardware, etc. can all be different. This adds time and complexity to integrating systems and can sometimes lead to less than ideal results. For example, the first vBlock I helped a partner configure (for demo purposes only) had the two UCS systems stacked on top of one another at the bottom of the rack with no mounting hardware. The reason for this was that the EMC racks being used had different rail mounts than the UCS system was designed for. Issues like this can cause problems and delays, especially when the people in charge of infrastructure aren’t properly engaged during purchasing (which is very common).

Overall I can see this as a very good thing for the end user.

HP FlexFabric

This is the piece that really grabbed my attention while watching the constant Twitter stream of HP announcements. HP FlexFabric brings network consolidation to the HP blade chassis. I specifically say network consolidation, because HP got this piece right. Yes, it does FCoE, but that doesn’t mean you have to. FlexFabric provides the converged network tools to carry any protocol you want over 10GE to the blades and split that traffic out to separate networks at the chassis level. Here’s a picture of the switch from Kevin Houston’s blog: http://bladesmadesimple.com/2010/06/first-look-hps-new-blade-servers-and-converged-switch-hptf/.

HP Virtual Connect FlexFabric 10Gb/24-Port Module

The first thing to note when looking at this device is that all the front-end uplink ports look the same, so how do they split out Fibre Channel and Ethernet? The answer is that Qlogic (the manufacturer of the switch) has been doing some heavy lifting on the engineering side. They’ve designed the front-end ports to support the optics for either Fibre Channel or 10GE, which means you’ve got flexibility in how you use your bandwidth. Doing this on a per-port basis is an industry first; the Cisco Nexus 5000 hardware ASIC has been capable of it since FCS, but there it was implemented on a per-module basis rather than per-port like this switch.

The next piece that was quite interesting, and really provides flexibility and choice to the HP FlexFabric concept, is the decision to use Emulex’s OneConnect adapter as the LAN on Motherboard (LOM). This was a very smart decision by HP. Emulex’s OneConnect is a product that has impressed me from square one; it shows a traditionally Fibre Channel company embracing the fact that Ethernet is the future of storage without locking the decision into an upper layer protocol (ULP). OneConnect provides 10GE connectivity, TCP offload, iSCSI offload/boot, and FCoE capability all on the same card; now that’s a converged network! HP seems to have seen the value there as well and built this into the system board.

Take a step back and soak that in: LOM has been owned by Intel, Broadcom, and other traditional NIC vendors since the beginning. Until last year, Emulex was looked at as one of two solid FC HBA vendors. As of this week, HP has announced the ousting of a traditional NIC vendor in favor of a traditional FC vendor on its system board. That’s a big win for Emulex. Kudos to Emulex for the technology (and the business decisions behind it) and to HP for recognizing that value.

Looking a little deeper, the next big piece of this overall architecture is that the whole FlexFabric system supports HP’s FlexConnect technology, which allows a server admin to carve a single physical 10GE link into four logical links that are presented to the OS as individual NICs.
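Conceptually that carving looks something like the following Python sketch. The four-NIC limit comes from the description above; the names, validation rules, and example allocations are my own assumptions for illustration, not HP’s implementation.

```python
# Toy model of carving one physical 10GE port into four logical NICs, roughly the
# idea behind FlexConnect as described above. Names, allocations, and validation
# rules are illustrative assumptions, not HP's implementation.

PHYSICAL_LINK_GBPS = 10.0

def carve_logical_nics(allocations_gbps):
    """Validate and return four logical NIC definitions for one 10GE port."""
    if len(allocations_gbps) != 4:
        raise ValueError("this sketch models exactly four logical NICs per port")
    if sum(allocations_gbps) > PHYSICAL_LINK_GBPS:
        raise ValueError("allocations exceed the 10GE physical link")
    return [{"name": f"flexnic{i}", "max_gbps": bw}
            for i, bw in enumerate(allocations_gbps)]

if __name__ == "__main__":
    # e.g. management, live-migration traffic, production LAN, and FCoE storage
    for nic in carve_logical_nics([0.5, 2.0, 3.5, 4.0]):
        print(nic)
```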

The only drawback I see to the FlexFabric picture is the fact that FCoE is only used within the chassis and split into separate networks from there. This can definitely increase the required infrastructure depending on the architecture. I’ll wait to go too deep into that until I hear a few good lines of thinking on why that direction was taken.

Summary:

HP had a strong week in Vegas, and these were only a few of the announcements; several others, including mind-blowing stuff from HP Labs (start protecting John Connor now), can be found on blogs and HP’s website. Of all the announcements, FlexFabric was the one that really caught my attention. It embraces the idea of I/O consolidation without clinging to FCoE as the only way to do it, and it greatly increases the competitive landscape in that market, which always benefits the end user/customer.

Comments, corrections, bitches, moans, gripes, and complaints all welcome.
