The Brocade FCoE Proposition

I recently realized that I, like much of the data center industry, have largely forgotten about Brocade.  There has been little discussion of their FCoE products, Fibre Channel switching, or CNAs.  Cisco and HP have been dominating social media with blade and FCoE battles, but I haven't seen much coming from Brocade.  I thought it was time to take a good look.

The Brocade Portfolio:

Brocade 1010 and 1020 CNAs The Brocade 1010 (single port) and Brocade 1020 (dual port) Converged Network Adapters (CNAs) integrate 10 Gbps Ethernet Network Interface Card (NIC) functionality with Fibre Channel technology—enabling transport over a 10 Gigabit Ethernet (GbE) connection through the new Data Center Bridging (DCB) and Fibre Channel over Ethernet (FCoE) protocols, providing best-in-class LAN connectivity and I/O consolidation to help reduce cost and complexity in next-generation data center environments.
Brocade 8000 Switch The Brocade 8000 is a top-of-rack link layer (Layer 2) CEE/FCoE switch with 24 10 Gigabit Ethernet (GbE) ports for LAN connections and eight Fibre Channel ports (with up to 8 Gbps speed) for Fibre Channel SAN connections. This reliable, high-performance switch provides advanced Fibre Channel services, supports Ethernet and CEE capabilities, and is managed by Brocade DCFM.
Brocade FCOE10-24 Blade The Brocade FCOE10-24 Blade is a Layer 2 blade with cut-through non-blocking architecture designed for use with Brocade DCX and DCX-4S Backbones. It features 24 10 Gbps CEE ports and extends CEE/FCoE capabilities to Brocade DCX Backbones, enabling end-of-row CEE/FCoE deployment. By providing first-hop connectivity for access layer servers, the Brocade FCOE10-24 also enables server I/O consolidation for servers with Tier 3 and some Tier 2 applications.

Source: http://www.brocade.com/products-solutions/products/index.page?dropType=Connectivity&name=FCOE

The breadth of Brocade's FCoE portfolio is impressive when compared to the other major players: Emulex and QLogic with CNAs, HP with FlexFabric for C-Class and the H3C S5820X-28C Series ToR, and only Cisco providing a wider portfolio with an FCoE and virtualization-aware I/O card (VIC/Palo), blade switches (Nexus 4000), ToR/MoR switches (Nexus 5000), and an FCoE blade for the Nexus 7000.  This shows a strong commitment to the FCoE protocol on Brocade's part, as does their participation in the standards bodies.

Brocade also provides a unique ability to standardize on one vendor from the server I/O card, through the FCoE network, to the Fibre Channel (FC) core switching.  Additionally, using the FCOE10-24 blade, customers can collapse the FCoE edge into their FC core, providing a single-hop, collapsed-core, mixed FCoE/FC SAN.  That's a solid proposition for a data center with a heavy investment in FC and a port count low enough to stay within a single chassis per fabric.

But What Does the Future Hold?

Before we take a look at where Brocade's product line is headed, let's look at the purpose of FCoE.  FCoE is designed as another tool in the data center arsenal for network consolidation.  We're moving away from the cost, complexity, and waste of separate networks and placing our storage and traditional LAN data on the same infrastructure.  This is similar to what we've done in the past in several areas: on mainframes we went from ESCON to FICON to leverage FC, and our telephones went from separate infrastructures to IP-based telephony.  We're just repeating the same success story with storage.  The end goal is everything on Ethernet.  That end goal may come sooner for some than for others; it all depends on comfort level, refresh cycle, and the individual environment.

If FCoE is a tool for I/O consolidation and Ethernet is the end-goal of that, then where is Brocade heading?

This has been my question since I started researching and working with FCoE about three years ago.  As FCoE began hitting the mainstream media, Cisco was out front pushing the benefits and announcing products; they were the first to market with an FCoE switch, the Nexus 5000.  Meanwhile, Brocade and others were releasing statements attempting to put the brakes on.  They were not saying FCoE was bad, just working to hold it off.

This makes a lot of sense from both perspectives.  The core of Cisco's business is routing and switching, so FCoE is a great business proposition for them.  They're also one of only two options for enterprise FC switching (Brocade and Cisco), so they have the FC knowledge.  Lastly, they had a series of products already in development.

From Brocade's and others' perspectives, they didn't have products ready to ship, and they didn't have the breadth and depth in Ethernet, so they needed time.  Their marketing releases tended to become more and more positive toward FCoE as their products launched.

This also shows in Brocade's product offering: two of the three products listed above are designed to maintain the tie to FC.

Brocade 8000:

This switch has 24x 10GbE ports and 8x 8 Gbps FC ports.  These ports are fixed onboard, which means this switch is not for you if your environment needs a different mix of Ethernet and FC ports than that static layout provides.

In comparison, the competing product is the Nexus 5000, which has a modular design allowing customers to use all Ethernet/DCB ports or several combinations of Ethernet and FC at 1/2/4/8 Gbps.
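
As a rough illustration of what the fixed port layout implies, here is a back-of-the-envelope sketch in Python.  It assumes all storage traffic from the Ethernet ports would funnel through the eight FC ports, which won't hold for every traffic mix; only the port counts come from the product specs above.

```python
# Back-of-the-envelope bandwidth math for the Brocade 8000's fixed port layout.
# Assumption for illustration: all FCoE storage traffic exits via the FC ports.

ethernet_ports, ethernet_gbps = 24, 10   # 24x 10GbE CEE/FCoE ports
fc_ports, fc_gbps = 8, 8                 # 8x 8 Gbps Fibre Channel ports

ethernet_aggregate = ethernet_ports * ethernet_gbps   # server-facing bandwidth
fc_aggregate = fc_ports * fc_gbps                     # bandwidth toward the FC SAN

print(f"Ethernet aggregate: {ethernet_aggregate} Gbps")
print(f"FC aggregate:       {fc_aggregate} Gbps")
print(f"Ratio if every bit went to FC: {ethernet_aggregate / fc_aggregate:.2f}:1")
```

In practice only a portion of the Ethernet traffic is storage bound, so the ratio isn't a problem by itself; the point is simply that the split is fixed in hardware, while a modular design lets you change it as needs change.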

Brocade FCOE10-24 Blade:

This is an Ethernet blade for the DCX Fibre Channel director.  It ties Brocade's FCoE director capabilities to an FC switch rather than an Ethernet switch.  Additionally, this blade only supports directly connected FCoE devices, which will limit overall scalability.

In comparison, the Cisco FCoE blade for the Nexus 7000 is a DCB-capable line card, with FCoE capability expected by year's end.  This merges FCoE onto the network backbone, where it's intended to go.

Summary:

If your purpose in assessing FCoE is to provide a consolidated edge topology for server connectivity that ties back to a traditional FC SAN, then Brocade has a strong product suite for you.  If your end goal is consolidating the network as a whole, then think carefully before purchasing FC-based FCoE products.  That's not to say don't buy them; just understand what you're getting and why you're getting it.  For instance, if you need to tie into a Fibre Channel core now and don't intend to replace it for 3-5 years, then the Brocade 8000 may work for you because it can be refreshed at the same time.

Several options exist for FCoE today, and most if not all of them have a good fit.  First assess what you're trying to accomplish and when, then look at the available products and decide what fits best.

Consolidated I/O

Consolidated I/O (input/output) is a hot topic and has been for the last two years, but it's not a new concept.  We've already consolidated I/O once in the data center and forgotten about it; remember those phone PBXs before we replaced them with IP telephony?  The next step in consolidating I/O is getting management traffic, backup traffic, and storage traffic from centralized storage arrays to the servers on the same network that carries our IP data.  In the most general terms the concept is 'one wire.'  'Cable Once' or 'One Wire' allows a flexible I/O infrastructure with a greatly reduced cable count and a single network to power, cool, and administer.
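
To make the cable-count point concrete, here's a minimal sketch in Python.  The per-server link counts and rack size are purely illustrative assumptions for the example, not figures from any particular environment.

```python
# Illustrative cable-count comparison for one rack of servers.
# All per-server link counts below are assumptions, not measurements.

servers_per_rack = 20

# Traditional: separate LAN, SAN, and management cabling per server.
traditional_per_server = {
    "1GbE LAN NICs": 4,
    "FC HBA ports": 2,
    "management NIC": 1,
}

# Consolidated 'cable once': a pair of 10GbE CNA ports carrying everything.
consolidated_per_server = {"10GbE CNA ports": 2}

traditional_total = servers_per_rack * sum(traditional_per_server.values())
consolidated_total = servers_per_rack * sum(consolidated_per_server.values())

print(f"Traditional cabling:  {traditional_total} cables per rack")
print(f"Consolidated cabling: {consolidated_total} cables per rack")
```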

Solutions for this have existed and been used for years; iSCSI (SCSI storage data over IP networks) is one tool that has been commonly used.  The reason the topic has hit the mainstream over the last two years is that 10 Gigabit Ethernet was ratified, and we now have a common protocol with the bandwidth to support this type of consolidation.  Prior to 10GE we simply didn't have the bandwidth to effectively put everything down the same pipe.
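
A quick sanity check on the bandwidth argument, again as a sketch: the link speeds are standard, but the traffic figures below are illustrative assumptions.

```python
# Why 10GbE is the tipping point for consolidation: does one converged link
# have room for the traffic the separate links used to carry?
# Traffic figures are illustrative assumptions.

storage_gbps = 4.0      # e.g., a 4Gb FC HBA's worth of storage traffic
lan_gbps = 2.0          # general LAN data
backup_mgmt_gbps = 1.0  # backup and management traffic

total_needed = storage_gbps + lan_gbps + backup_mgmt_gbps

for link_speed in (1, 10):
    verdict = "fits" if total_needed <= link_speed else "does not fit"
    print(f"{total_needed} Gbps of consolidated traffic {verdict} on a {link_speed}GbE link")
```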

The first thing to remember when discussing I/O consolidation is that, contrary to popular belief, I/O consolidation does not mean Fibre Channel over Ethernet (FCoE).  I/O consolidation is all about using a single infrastructure and underlying protocol to carry any and all traffic types required in the data center.  The underlying protocol of choice is 10G Ethernet because it's lightweight and high bandwidth, and Ethernet itself is the most widely used data center protocol today.  Using 10GE and the IEEE standards for Data Center Bridging (DCB) as the underlying data center network, any and all protocols can be layered on top as needed on a per-application basis.  See my post on DCB for more information (http://www.definethecloud.net/?p=31).  These protocols can be FCoE, iSCSI, UDP, TCP, NFS, CIFS, etc., or any combination of them all.
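
As a sketch of how those protocols share one 10GbE pipe, the snippet below models ETS-style (IEEE 802.1Qaz, part of DCB) bandwidth groups.  The class names and percentages are illustrative assumptions, not a recommended configuration; the mechanism shown, guaranteed minimum shares under congestion, is what the standard provides.

```python
# Illustrative ETS-style bandwidth allocation on a single 10GbE DCB link.
# Each traffic class is guaranteed a share under congestion; bandwidth left
# idle by one class can be used by the others.  Percentages are example values.

link_gbps = 10

ets_groups = {
    "FCoE (lossless, PFC-enabled)": 40,
    "iSCSI / NFS / CIFS storage":   30,
    "General LAN (TCP/UDP)":        30,
}

assert sum(ets_groups.values()) == 100, "ETS shares should total 100%"

for traffic_class, percent in ets_groups.items():
    guaranteed = link_gbps * percent / 100
    print(f"{traffic_class}: {percent}% -> {guaranteed:.1f} Gbps guaranteed minimum")
```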

If you look at data centers today, most are already using a combination of these protocols, but they typically have two or more separate infrastructures to support them.  A data center that uses Fibre Channel heavily has two Fibre Channel networks (for redundancy) and one or more LAN networks.  These 'Fibre Channel shops' are typically still using additional storage protocols such as NFS/CIFS for file-based storage.  The cost of administering, powering, cooling, and eventually upgrading/refreshing these separate networks continues to grow.

Consolidating onto a single infrastructure not only provides obvious cost benefits but also delivers the flexibility required for a cloud infrastructure.  Having a 'Cable Once' infrastructure allows you to provide the right protocol at the right time on a per-application basis, without the need for hardware changes.

Call it what you will: I/O consolidation, network convergence, or network virtualization.  A cable-once topology that can support the right protocol at the right time is one of the pillars of cloud architecture in the data center.