Why FCoE Standards Matter

Mike Fratto at Network Computing recently wrote an article titled ‘FCoE: Standards Don’t Matter; Vendor Choice Does’ (http://www.networkcomputing.com/storage-networking-management/231002706).

I definitely differ from Mike’s opinion on the subject.  While I’m no fan of the process of making standards (it puts sausage making to shame), or of the idea of slowing progress to wait on standards, I do feel they are an absolutely necessary part of FCoE’s future.  It’s all about the timing at which we expect them, the way in which they’re written, and most importantly the way in which they’re adhered to.

Mike bases his opinion on Fibre Channel history and accurately describes the stranglehold the storage vendors have had on the customer.  The vendor’s Hardware Compatibility List (HCL) dictates which vendors you can connect to, which models, and which firmware you can use.  Slip off the list and you lose support.  This means that in the FC world customers typically went with the Storage Area Network (SAN) their VAR or storage vendor recommended, and stuck with it.  While not ideal, this worked fine in the small network environment of the SAN, with its specialized and dedicated purpose of delivering block data from array to server.  These extreme restrictions based on storage vendors and protocol compatibility will not fly as we converge networks.

As worried as storage/SAN admins may be about moving their block data onto Ethernet networks, the traditional network admins may be more worried because of the interoperability concept.  For years network admins have been able to intermix disparate vendors’ technology to build the networks they desired, best-of-breed or not.  A load balancer here, a firewall there, a data center switch here, and presto: everything works.  They may have had to sacrifice some features (proprietary value-add that isn’t compatible) but they could safely connect the devices.  More importantly, they didn’t have to answer to an HCL dictated by some endpoint (storage disk) or another on their network.

For converged networking to work, this freedom must remain.  Adding FCoE to consolidate infrastructure cannot lock network admins into storage HCLs and extreme hardware incompatibility.  This means that the standards must exist, be agreed upon, be specific enough, and be adhered to.  While Mike is correct that you probably won’t want to build multi-vendor networks on day one, you will want the opportunity to incorporate other services and products, migrate from one vendor to another, etc.  You’ll want an interoperable standard that allows you to buy third-party FCoE appliances for things like deduplication, compression, encryption or whatever you may need down the road.  We’re not talking about building an Ethernet network dedicated to FCoE, we’re talking about building one network to rule them all (hopefully we never have to take it to Mordor and toss it into molten lava).  To run one network we need the standards and compatibility that provide us flexibility.

There is no reason for storage vendors to hold the keys to what you can deploy any longer.  Hardware is stable, and if standards are in place the network will properly transport the blocks.  Customers and resellers shouldn’t accept lock-in and HCL dictation just because that has been the status quo.  We’re moving the technology forward; move your thinking forward.  The issue in the past has been the looseness with which T11’s FC-BB-5 standard has been written in some aspects since its inception.  This leaves room for interpretation, which is where interoperability issues arise between vendors who are both ‘standards based.’  The onus is on us as customers, resellers and an IT community to demand that the standards be well defined, and that the vendors adhere to them in an interoperable fashion.

Do not accept incompatibility and lack of interoperability in your FCoE switching just because we made the mistake of allowing that to happen with pure FC SANs.  Next time your storage vendor wants a few hundred thousand dollars for your next disk array, tell them it isn’t happening unless you can plug it into any standards-compliant network without fear of their HCL and loss of support.


Why NetApp is my ‘A-Game’ Storage Architecture

One of, if not the, most popular of my blog posts to date has been ‘Why Cisco UCS is my ‘A-Game’ Server Architecture’ (http://www.definethecloud.net/why-cisco-ucs-is-my-a-game-server-architecture).  In that post I describe why I lead with Cisco UCS for most consultative engagements.  This follow-up for storage has been a long time coming, and thanks to some ‘gentle’ nudging and random coincidence combined with an extended airport wait I’ve decided to get it posted.

If you haven’t read my previous post, in it I take the time to define my ‘A-Game’ architectures as follows:

“The rule in regards to my A-Game is that it’s not a rule, it’s a launching point. I start with a specific hardware set in mind in order to visualize the customer need and analyze the best way to meet that need. If I hit a point of contention that negates the use of my A-Game I’ll fluidly adapt my thinking and proposed architecture to one that better fits the customer. These points of contention may be technical, political, or business related:

  • Technical: My A-Game doesn’t fit the customer’s requirements due to some technical factor, support, feature, etc.
  • Political: My A-Game doesn’t fit the customer because they don’t want Vendor X (previous bad experience, hype, understanding, etc.)
  • Business: My A-Game isn’t on an approved vendor list, or something similar.

If I hit one of these roadblocks I’ll shift my vendor strategy for the particular engagement without a second thought. The exception is when one of these roadblocks isn’t actually a roadblock and my A-Game definitely provides the best fit for the customer; in that case I’ll work with the customer to analyze actual requirements and attempt to find ways around the roadblock.

Basically my A-Game is a product or product line that I’ve personally tested, worked with and trust above the others, and that is my starting point for any consultative engagement.”

In my A-Game Server post I run through the hate-then-love relationship that brought me around to trust, support, and evangelize UCS; I cannot say the same for NetApp.  My relationship with NetApp fell more along the lines of love at first sight.

NetApp – Love at first sight:

I began working with NetApp storage at the same time I was diving headfirst into the data center as a whole.  I was moving from server admin/engineer to architect and drinking from the SAN, virtualization, and storage firehose.  I had a fantastic boss, who to this day is a mentor and friend, and who pushed me to learn quickly and execute rapidly and accurately; thanks Mike!  The main products our team handled at the time were IBM blades/servers, VMware, SAN (Brocade and Cisco) and IBM/NetApp storage.  I was never a fan of the IBM storage.  It performed solidly but was a bear to configure, lacked a rich feature set and typically got put in place and left there untouched until refresh.  At the same time I was coming up to speed on IBM storage I was learning more and more about NetApp.

From the non-technical perspective NetApp had accessible training and experts, clear value-proposition messaging and a firm grasp of VMware, of where virtualization was heading and of how/why it should be executed on.  This hit right on with what my team was focused on.  Additionally NetApp worked hard to maintain an excellent partner channel relationship, make information accessible, and put the experts a phone call or flight away.  This made me WANT to learn more about their technology.

The lasting bonds:

Breakfast food, yep, breakfast food is what made NetApp stick for me, and what keeps it my A-Game four years later.  Not just any breakfast food, but a personal favorite of mine: beer and waffles, err, umm… WAFL (second only to chicken and waffles, and missing only bacon).  Data ONTAP (the beer) and NetApp’s Write Anywhere File Layout (WAFL) are at the heart of why they are my A-Game.  While you can find dozens of blogs, competitive papers, etc. attacking the use of WAFL for primary block storage, what WAFL enables is amazing from a feature perspective, and the performance numbers NetApp can put up speak for themselves.  Because, unlike a traditional block-based array, NetApp owns the underlying file system, they can not only do more with the data, but also more rapidly adapt to market needs with software enhancements.  Don’t take my word for it: do some research, look at the latest announcements from other storage leaders and check what year NetApp announced their version of those same features; with few exceptions you’ll be surprised.  The second piece of my love for NetApp is Data ONTAP.  NetApp has several storage controller systems ranging from the lower end to Tier-1 high-capacity, high-availability systems.  Regardless of which one you use, you’re always using the same operating/management system, Data ONTAP.  This means that as you scale, change, refresh, upgrade, downgrade, you name it, you never have to retrain AND you keep a common feature set.

My love for breakfast is not the only draw to NetApp, and in fact without a bacon offering I would have strayed if there weren’t more (note to NetApp: incorporate fatty pork the way politicians do).

Other features that keep NetApp top of my list are:

  • Primary block-level storage deduplication with real-world savings of 70+% and minimal performance hit (and no license fee to boot)
  • Ease of upgrade/downgrade (keep the shelves of disks, replace the controllers, data stays)
  • Read/Write ‘0’ space/cost clones (the ability to clone various data sets in a read/write status using only pointers, storing only the changed ‘delta’) and FlexClone capabilities as a whole
  • Highly optimized snapshots for point-in-time rollback, test/dev, etc.
  • VMware plugins to enable VMware admins to manage and monitor their own storage allotments
  • Storage virtualization, the ability to carve out storage and the management of that storage to multiple tenants in a similar fashion to what VMware does for servers
  • Ability to get 80% of the performance benefits of a shelf of SSDs by adding Flash Cache (PAM II) cards

Add to that more recent features, such as being first to market with FCoE-based storage, and you’ve got a winner in my book.  All that being said, I still haven’t covered the real reason NetApp is the first storage vendor in my head anytime I talk about storage.


Anytime I’m talking about servers I’m talking about virtualization as well.  Because I don’t work in the Unix or mainframe worlds I’m most likely talking about VMware (90% market share has that effect).  When dealing with virtualization my primary goals are consolidation/optimization and flexibility.  In my opinion nobody can touch NetApp storage for this.  I’m a fan of choice and options, and I also like particular features/protocols for particular use cases.  On most storage platforms I have to choose my hardware based on the features and protocols my customers require, and most likely use more than one platform to get them all.  This isn’t the case with NetApp.  With few exceptions, every protocol/feature is available simultaneously on any given hardware platform.  This means I can run iSCSI, FC, FCoE or all of the above for block-based needs at the same time I run CIFS natively to replace Windows file servers, and NFS for my VMware datastores.  All of that from the same box, or even the same ports!  This lets me tier my protocols and features to the application requirements instead of to my hardware limitations.

I’ve been working on VMware deployments in some fashion for four years and have seen dozens of unique deployments, but I’ve personally never deployed or worked with a VMware environment that ran off a single protocol.  Typically, at a minimum, NFS is used for ISO datastores, CIFS can be used to eliminate Windows file servers rather than virtualize them, and a block-based protocol may be involved for boot or databases.

Additionally, NetApp offers features and functionality that allow multiple storage functions to be consolidated on a single system.  You no longer require separate hardware for primary, secondary, backup, DR, and archive.  All of this can then be easily set up and managed for replication across any of NetApp’s platforms, or across many third-party systems front-ended with V-Series.  These two pieces combined create a truly ‘unified’ platform.

When do I bring out my B-Game?

NetApp, like any solution I’ve ever come across, is not the right tool for every job.  For me they hit or exceed the 80/20 rule perfectly.  A few places where I don’t see NetApp as a current fit:

  • Small to Medium Business (SMB) – At the SMB level a single-protocol solution may work, and you can find lower-cost solutions that fit the bill; but if you scale faster than expected you’re stuck with a single-protocol platform and may end up having to purchase and manage additional devices if/when needs change
  • Massive scalability – Here I’m talking public cloud, petabytes upon petabytes, where systems like EMC’s Isilon and its competitors have the lead
  • Top-tier performance and enterprise-class reliability for Tier-1 applications – Here at the very high end EMC or Hitachi are typically the players, and IBM using SVC may also play
  • Mainframes – NetApp don’t play that, and Big Blue don’t support it


While I stand by there being no ‘one-size-fits-all’ IT solutions, and by my A-Game being a starting point not a rule, I find NetApp hits the bullseye for 80+ percent of the market I work with.  Not only do they fit up front, but they back it up with support, continued innovation, and product advancement.  NetApp isn’t ‘The Growth Company’ and #2 in storage by luck or chance (although I could argue they did luck out quite a bit with the timing of the industry move to converged storage on 10GE).

Another reason NetApp still reigns as my A-Game is the way in which it marries to my A-Game server architecture.  Cisco UCS enables unification, protocol choice and cable consolidation as well as virtualization acceleration, etc.  All of these are further amplified when used alongside NetApp storage, which allows rapid provisioning, protocol options, storage consolidation and storage virtualization, etc.  Do you want to pre-provision 50 (or 250) VMware hosts with 25 GB read/write boot LUNs ready to go at the click of a template?  Do you want to do this without using any space up front?  UCS and NetApp have the toolset for you.  You can then rapidly bring up new customers, or stay at dinner with your family while a Network Operations Center (NOC) administrator deploys a pre-architected, pre-secured, pre-tested and pre-provisioned server from a template to meet a capacity burst.

If you’re considering a storage decision, a private cloud migration, or a converged infrastructure pod make sure you’re taking a look at NetApp as an option and see it for yourself.  For some more information on NetApp’s virtualization story see the links below:

TR3856: Quantifying the Value of Running VMware on NetApp 

TR3808: VMware vSphere and ESX 3.5 Multiprotocol Performance Comparison Using FC, iSCSI, and NFS


Storage Protocols

Storage is a major consideration for cloud initiatives; what type of disk, which vendor, and as importantly which protocol?  Experts will tout one over the other based on cost, performance, throughput, etc.  Let’s take a look at the major storage protocols at play in the data center:

Small Computer System Interface (SCSI):

SCSI is the dominant block-level access method for disk in the data center.  Blocks are typically the smallest unit that can be read from or written to on a disk; they exist in various sizes depending on disk type and usage.  Block-level access means that the server can directly access the disk blocks without the need for a file system in place on top of them; this is the opposite of the file-based storage discussed later.

SCSI has been in use since the early 1980s and was originally used to move data within a single server.  The operating system writes data using the SCSI protocol to a SCSI drive controller, which manages one or more devices on a SCSI cable within a system chassis.  The SCSI controller ensures that only one device is active on the cable at any time, which prevents contention on the SCSI bus.  Because SCSI was managed by a single controller and contained within a system, the chances of data loss or contention were minimal; this meant that SCSI did not require control mechanisms to handle data loss or contention as networked protocols do.  SCSI itself is still widely used in its native format, but it has also been encapsulated into other protocols for use within storage networks for consolidated storage.
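To make block-level access concrete, here’s a minimal sketch in Python (my illustration, not anything SCSI-specific: it assumes a Linux host, a hypothetical /dev/sdb device, 512-byte logical blocks, and root privileges) of reading a raw block directly from a disk with no file system involved:

    # Block-level access: read one raw block straight from a disk device,
    # bypassing any file system.  Device path and sizes are hypothetical.
    BLOCK_SIZE = 512        # logical block size; many modern disks use 4096
    BLOCK_NUMBER = 2048     # arbitrary block chosen for illustration

    with open("/dev/sdb", "rb") as disk:
        disk.seek(BLOCK_NUMBER * BLOCK_SIZE)   # jump to the block boundary
        block = disk.read(BLOCK_SIZE)          # read exactly one block

    print(f"Read {len(block)} bytes from block {BLOCK_NUMBER}")

The operating system issues the actual SCSI READ commands on our behalf; the point is that the application addresses raw blocks, not files.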

Fibre Channel (FC):

Fibre Channel was designed to extend the functionality of SCSI into point-to-point, loop, and switched topologies.  This allows for longer distances as well as storage consolidation.  FC encapsulates SCSI data and Command Descriptor Blocks (CDBs) into the payload of Fibre Channel frames.  Fibre Channel networks provide the addressing, routing, and flow control required to support SCSI data.  Additionally, Fibre Channel networks are designed to meet the needs of SCSI by providing ‘lossless,’ in-order delivery.  This means that in a stable network FC frames will not be dropped, and they are delivered in order, ensuring that the Upper Layer Protocols (ULPs) will not be forced to reorder or resend frames.

Fibre Channel networks are typically carried over fiber-optic links on dedicated infrastructures.  These infrastructures are traditionally built in pairs as exact mirrors of one another.  This provides complete physical redundancy end-to-end.  Additionally, these networks provide high bandwidth and low latency.  FC networks come in 1/2/4/8 Gbps speeds with 16/32 Gbps in the works.  Additionally, 10 Gbps FC links are typically available on a proprietary basis for links between switches.
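To illustrate the encapsulation described above, here’s a rough Python sketch (heavily simplified and illustrative only: real FCP wraps the CDB in an FCP_CMND information unit, and frames carry SOF/EOF delimiters and a CRC) that packs a SCSI READ(10) CDB into the payload behind a skeletal 24-byte FC frame header:

    import struct

    # SCSI READ(10) CDB: opcode 0x28, a 4-byte LBA, a 2-byte transfer length.
    def scsi_read10_cdb(lba: int, num_blocks: int) -> bytes:
        return struct.pack(">BBIBHB", 0x28, 0, lba, 0, num_blocks, 0)

    # Skeletal 24-byte FC frame header in front of the SCSI payload.
    def fc_frame(d_id: bytes, s_id: bytes, payload: bytes) -> bytes:
        header = struct.pack(
            ">B3sB3sB3sBBHHHI",
            0x06,              # R_CTL: unsolicited command
            d_id,              # D_ID: 24-bit destination fabric address
            0x00,              # CS_CTL
            s_id,              # S_ID: 24-bit source fabric address
            0x08,              # TYPE: 0x08 = SCSI-FCP
            b"\x29\x00\x00",   # F_CTL: exchange/sequence bits (illustrative)
            0x00, 0x00,        # SEQ_ID, DF_CTL
            0x0000,            # SEQ_CNT
            0xFFFF, 0xFFFF,    # OX_ID, RX_ID
            0x00000000,        # Parameter
        )
        return header + payload

    frame = fc_frame(b"\x01\x02\x03", b"\x04\x05\x06",
                     scsi_read10_cdb(lba=2048, num_blocks=8))
    print(f"{len(frame)}-byte FC frame carrying a {len(frame) - 24}-byte CDB")

The takeaway is that the SCSI command rides unchanged inside the FC frame; the FC header supplies the fabric addressing and sequence/exchange control that SCSI itself lacks.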

Internet Small Computer System Interface (iSCSI):

iSCSI takes SCSI data and CDBs and places them in the payload of TCP/IP packets.  This allows the SCSI protocol to be extended across existing IP infrastructures.  While IP is routable within the data center and across the WAN, iSCSI is not traditionally used/supported across routed boundaries (exceptions do exist).  The draw of iSCSI has been that storage data can be extended across the existing infrastructure with minimal additional cost.

iSCSI has not gained the market share many have predicted over the years, due to flaws in the protocol and limitations of traditional Ethernet-based data center networks.  Until the standardization of 10 Gigabit Ethernet, most data centers relied on 1GE links which were typically saturated already.  This meant implementing iSCSI required new switching infrastructure.  10GE has changed the bandwidth limits but still has not catapulted iSCSI into the mainstream.  There are several reasons for this, one being the large existing investment in Fibre Channel, and another being the iSCSI protocol itself.

The problem with iSCSI from a protocol standpoint is that it takes the SCSI protocol, which expects lossless, in-order delivery, and places it in TCP/IP packets, which are designed to support heterogeneous WAN networks and frequently experience packet loss and out-of-order delivery.  This is done without providing any additional tools to either SCSI or TCP/IP for handling the SCSI payloads in the expected fashion.  This in no way means iSCSI is unusable or should be written off; it just means that additional considerations must be made when designing iSCSI, especially in the enterprise or larger environment.

In order to provide proper performance for iSCSI on shared networks, Quality of Service (QoS), physical architecture, and jumbo frame support must be taken into account.  Because of these considerations many iSCSI networks have traditionally been placed on network hardware separate from the data center LAN (isolated iSCSI networks).  This has minimized some of the benefits of consolidating on a single protocol.  With 10 Gigabit Ethernet and the standardization of Data Center Bridging (DCB), iSCSI looks more promising for a greater audience.  For more information on DCB see my previous post (http://www.definethecloud.net/?p=31).
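For a sense of what an iSCSI deployment looks like from the host side, here’s a hedged sketch driving the open-iscsi iscsiadm utility from Python (the portal address and target IQN are hypothetical; packaging and defaults vary by distribution):

    import subprocess

    PORTAL = "192.168.50.10:3260"                      # hypothetical portal
    TARGET = "iqn.2001-04.com.example:storage.disk1"   # hypothetical IQN

    # Discover the targets the portal advertises (SendTargets discovery).
    subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
        check=True,
    )

    # Log in to the target; its LUNs then appear as local block devices
    # (e.g. /dev/sdX) that the OS treats like any other SCSI disk.
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"],
        check=True,
    )

Note that none of this addresses the QoS, jumbo frame, or physical design considerations above; those live in the network configuration, not the initiator.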

Fibre Channel over Ethernet (FCoE):

FCoE was ratified in 2009 and provides the functionality for moving native Fibre Channel across consolidated Ethernet networks.  FCoE relies on the DCB standards referenced above.  FCoE encapsulates full Fibre Channel frames inside Ethernet jumbo frame payloads.  Utilizing jumbo frames ensures that the FC frame is not fragmented or changed in any way.  The FCoE and DCB standards provide a robust tool set for consolidating existing Fibre Channel workloads on shared 10GE networks while providing the lossless, in-order delivery SCSI expects.  FCoE does not modify the existing Fibre Channel protocol suite and allows for the same management model, including zoning, LUN masking, etc.  FCoE has gained ground over the last two years, pushed by several large hardware vendors in the storage, network, and server markets.  For more information on FCoE see my post (http://www.definethecloud.net/?p=80).
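As a rough sketch of that encapsulation (simplified: the real FCoE header carries version and reserved fields ahead of the SOF delimiter, and the delimiter byte values below are illustrative), the snippet wraps an unmodified FC frame in an Ethernet frame with the FCoE EtherType 0x8906; a maximum-sized 2,148-byte FC frame is exactly why FCoE requires ‘baby jumbo’ frames of roughly 2.5 KB:

    import struct

    FCOE_ETHERTYPE = 0x8906   # EtherType assigned to FCoE

    def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
        # Wrap a complete, unmodified FC frame for the Ethernet wire.
        eth_header = dst_mac + src_mac + struct.pack(">H", FCOE_ETHERTYPE)
        fcoe_header = bytes(13) + b"\x2e"   # version/reserved, then SOF byte
        fcoe_trailer = b"\x41" + bytes(3)   # EOF byte, then reserved padding
        return eth_header + fcoe_header + fc_frame + fcoe_trailer

    # Destination uses an FPMA-style MAC built from the default FC-MAP
    # (0E:FC:00) plus the FC_ID; both addresses here are hypothetical.
    frame = fcoe_frame(b"\x0e\xfc\x00\x01\x02\x03",
                       b"\x00\x11\x22\x33\x44\x55",
                       fc_frame=bytes(2148))  # placeholder max-size FC frame
    print(f"{len(frame)}-byte FCoE frame (plus Ethernet FCS on the wire)")

The resulting 2,180-byte frame won’t fit a classic 1,500-byte MTU, which is why DCB-capable switches carrying FCoE are configured for mini-jumbo MTUs.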

Common Internet File System (CIFS):

CIFS is a file-based storage protocol based on Server Message Block (SMB).  This is a shared storage protocol typically used in Microsoft environments for file sharing.  Windows-based file shares rely on CIFS as the transfer protocol for file-level data.  File-based storage relies on an underlying file system such as FAT32, XFS, NTFS or otherwise, which differs from block-based storage, which does not.  File-level storage is an excellent medium for some applications but is not traditionally effective for others.  When an application needs direct block access to disk, file-based storage is not appropriate.  Deployments that fall into this category include some databases and most operating systems.
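To contrast with the raw block example in the SCSI section, a file-level client simply names a path on a share and lets the server’s file system own the blocks; a minimal sketch (hypothetical server and share, assumed already accessible from a Windows host):

    # File-level access: the client names a file on an SMB/CIFS share and
    # the server's file system maps it to disk blocks behind the scenes.
    # Server name and share path are hypothetical.
    UNC_PATH = r"\\fileserver01\projects\report.txt"

    with open(UNC_PATH, "w", encoding="utf-8") as f:
        f.write("Written over CIFS/SMB; the filer owns the blocks.\n")

    with open(UNC_PATH, encoding="utf-8") as f:
        print(f.read())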

Network File System (NFS):

NFS is another file-based storage protocol.  NFS is traditionally used in Linux and Unix environments.  NFS is also a widely used protocol for VMware environments and can offer several benefits for virtual machine storage.  As a file-based storage protocol, NFS experiences many of the same limitations as those stated for CIFS above.
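The NFS equivalent is a mount followed by ordinary file I/O; a hedged sketch (hypothetical export name, and it assumes the NFS client utilities and root privileges on a Linux host):

    import subprocess

    EXPORT = "nas01.example.com:/vol/vm_datastore"   # hypothetical NFS export
    MOUNTPOINT = "/mnt/datastore"

    # Mount the export; from here on it's normal file I/O, exactly as in
    # the CIFS example above.
    subprocess.run(["mount", "-t", "nfs", EXPORT, MOUNTPOINT], check=True)

    with open(f"{MOUNTPOINT}/hello.txt", "w") as f:
        f.write("File-level storage over NFS\n")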

Hypertext Transfer Protocol (HTTP) and others:

When the cloud discussion leaves the data center (private/internal cloud) and moves up to the service provider level, such as Google, Amazon, or the telcos, the protocols listed above may not have the necessary scalability.  When you begin talking about supporting thousands of customers with multiple terabytes each, traditional storage protocols may not suffice.  This has to do with both the scalability of the systems and the administration of the disk.  iSCSI and FC both require a fair amount of management for the RAID groups, volumes, and LUNs, whereas CIFS and NFS require a fair amount for security and volumes.  HTTP-based storage protocols are being used to simplify storage configuration and increase its scalability.
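A minimal sketch of the model (standard library only; the endpoint and object naming are hypothetical, and the authentication a real service such as Amazon S3 requires is omitted): objects are simply PUT and GET by name over HTTP, with no RAID groups, volumes, or LUNs for the client to manage:

    import http.client

    # Hypothetical object-storage endpoint speaking plain HTTP.
    conn = http.client.HTTPConnection("storage.example.com", 80)

    # Store an object: disk administration collapses to a PUT by name.
    conn.request("PUT", "/mybucket/backups/vm01.img",
                 body=b"...object bytes...")
    resp = conn.getresponse()
    resp.read()                      # drain before reusing the connection
    print("PUT status:", resp.status)

    # Retrieve it with a GET; the provider handles placement and scaling.
    conn.request("GET", "/mybucket/backups/vm01.img")
    resp = conn.getresponse()
    print("GET status:", resp.status, "bytes:", len(resp.read()))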

Which is the right protocol to use when moving to the cloud?  Obviously there is only one answer!  As always in IT: ‘it depends.’  Each protocol has its uses, benefits and drawbacks.  The most important thing to remember is that most environments can benefit from more than one, or all, of these protocols.  Every application is different and any given protocol may have advantages for a particular app.  The only universal truth in cloud storage is that protocol flexibility will be key.
