Why NetApp is my ‘A-Game’ Storage Architecture

One of, if not the, most popular of my blog posts to date has been 'Why Cisco UCS is my 'A-Game' Server Architecture' (http://www.definethecloud.net/why-cisco-ucs-is-my-a-game-server-architecture.)  In that post I describe why I lead with Cisco UCS for most consultative engagements.  This follow-up for storage has been a long time coming, and thanks to some 'gentle' nudging and random coincidence combined with an extended airport wait, I've decided to get this posted.

If you haven't read my previous post, in it I define my 'A-Game' architectures as follows:

“The rule in regards to my A-Game is that it’s not a rule, it’s a launching point. I start with a specific hardware set in mind in order to visualize the customer need and analyze the best way to meet that need. If I hit a point of contention that negates the use of my A-Game I’ll fluidly adapt my thinking and proposed architecture to one that better fits the customer. These points of contention may be either technical, political, or business related:

  • Technical: My A-Game doesn’t fit the customer’s requirements due to some technical factor, support, feature, etc.
  • Political: My A-Game doesn’t fit the customer because they don’t want Vendor X (previous bad experience, hype, understanding, etc.)
  • Business: My A-Game isn’t on an approved vendor list, or something similar.

If I hit one of these roadblocks I’ll shift my vendor strategy for the particular engagement without a second thought. The exception to this is if one of these roadblocks isn’t actually a roadblock and my A-Game definitely provides the best fit for the customer I’ll work with the customer to analyze actual requirements and attempt to find ways around the roadblock.

Basically my A-Game is a product or product line that I’ve personally tested, worked with and trust above the others that is my starting point for any consultative engagement.

In my A-Game server post I run through the hate-then-love relationship that brought me around to trust, support, and evangelize UCS; I can’t say the same for NetApp.  My relationship with NetApp fell more along the lines of love at first sight.

NetApp – Love at first sight:

I began working with NetApp storage at the same time I was diving headfirst into the data center as a whole.  I was moving from server admin/engineer to architect and drinking from the SAN, virtualization, and storage firehose.  I had a fantastic boss, who to this day is a mentor and friend, and who pushed me to learn quickly and execute rapidly and accurately; thanks Mike!  The main products our team handled at the time were IBM blades/servers, VMware, SAN (Brocade and Cisco) and IBM/NetApp storage.  I was never a fan of the IBM storage.  It performed solidly but was a bear to configure, lacked a rich feature set and typically got put in place and left there untouched until refresh.  At the same time I was coming up to speed on IBM storage I was learning more and more about NetApp.

From the non-technical perspective NetApp had accessible training and experts, clear value-proposition messaging and a firm grasp on VMware, where virtualization was heading and how/why it should be executed.  This aligned squarely with what my team was focused on.  Additionally NetApp worked hard to maintain an excellent partner channel relationship, make information accessible, and put the experts a phone call or flight away.  This made me WANT to learn more about their technology.

The lasting bonds:

Breakfast food, yep, breakfast food is what made NetApp stick for me, and still be my A-Game four years later.  Not just any breakfast food, but a personal favorite of mine: beer and waffles, err, umm… WAFL (second only to chicken and waffles and missing only bacon.)  Data ONTAP (the beer) and NetApp’s Write Anywhere File Layout (WAFL) are at the heart of why they are my A-Game.  While you can find dozens of blogs, competitive papers, etc. attacking the use of WAFL for primary block storage, what WAFL enables is amazing from a feature perspective, and the performance numbers NetApp can put up speak for themselves.  Because NetApp, unlike a traditional block-based array, owns the underlying file system, they can not only do more with the data but also adapt more rapidly to market needs with software enhancements.  Don’t take my word for it; do some research, look at the latest announcements from other storage leaders and check what year NetApp announced their version of those same features.  With few exceptions you’ll be surprised.

The second piece of my love for NetApp is Data ONTAP.  NetApp has several storage controller systems ranging from the lower end to the Tier-1 high-capacity, high-availability systems.  Regardless of which one you use, you’re always using the same operating/management system, Data ONTAP.  This means that as you scale, change, refresh, upgrade, downgrade, you name it, you never have to retrain AND you keep a common feature set.

My love for breakfast is not the only draw to NetApp, and in fact without a bacon offering I would have strayed if there weren’t more (note to NetApp: Incorporate fatty pork the way politicians do.) 

Other features that keep NetApp top of my list are:

  • Primary block-level storage deduplication with real-world savings of 70+% and a minimal performance hit (and no license fee to boot)
  • Ease of upgrade/downgrade (keep the shelves of disks, replace the controllers, data stays)
  • Read/write ‘0’ space/cost clones (the ability to clone various data sets in a read/write status using only pointers and storing only the change ‘delta’; see the sketch after this list) and FlexClone capabilities as a whole
  • Highly optimized snapshots for point-in-time rollback, test/dev, etc.
  • VMware plugins to enable VMware admins to manage and monitor their own storage allotments
  • Storage virtualization, the ability to carve out storage and the management of that storage to multiple tenants in a similar fashion to what VMware does for servers
  • Ability to get 80% of the performance benefits of a shelf of SSD drives by adding Flash Cache (PAM II) cards 
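
To make the zero-space clone bullet above a bit more concrete, here is a minimal Python sketch of the general pointer-based, copy-on-write idea (purely illustrative, not how WAFL is actually implemented): the clone starts out sharing every block with its parent, and only changed blocks consume new space.

```python
# Minimal copy-on-write clone sketch (illustrative only, not WAFL internals).
class Volume:
    def __init__(self, blocks=None):
        # block_map: logical block number -> block contents (shared by reference)
        self.block_map = dict(blocks or {})

    def clone(self):
        # A clone starts as nothing but pointers to the parent's blocks: zero new space.
        return Volume(self.block_map)

    def write(self, lbn, data):
        # A write allocates a new block for this volume only; the parent is untouched.
        self.block_map[lbn] = data


parent = Volume({i: f"base-{i}" for i in range(1000)})
clone = parent.clone()              # consumes (almost) no additional space
clone.write(42, "patched block")    # only the delta is stored

delta = sum(1 for lbn, blk in clone.block_map.items()
            if parent.block_map.get(lbn) is not blk)
print(f"blocks unique to clone: {delta}")   # -> 1
```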

Add to that more recent features, such as being first to market with FCoE-based storage, and you’ve got a winner in my book.  All that being said, I still haven’t covered the real reason NetApp is the first storage vendor in my head any time I talk about storage.

Unification:

Anytime I’m talking about servers I’m talking about virtualization as well.  Because I don’t work in the Unix or mainframe worlds I’m most likely talking about VMware (90% market share has that effect.)  When dealing with virtualization my primary goals are consolidation/optimization and flexibility.  In my opinion nobody can touch NetApp storage for this.  I’m a fan of choice and options, and I also like particular features/protocols for particular use cases.  On most storage platforms I have to choose my hardware based on the features and protocols my customers require, and most likely use more than one platform to get them all.  This isn’t the case with NetApp.  With few exceptions every protocol/feature is available simultaneously on any given hardware platform.  This means I can run iSCSI, FC, FCoE or all of the above for block-based needs, at the same time I run CIFS natively to replace Windows file servers and NFS for my VMware datastores.  All of that from the same box or even the same ports!  This lets me tier my protocols and features to the application requirements instead of to my hardware limitations.

I’ve been working on VMware deployments in some fashion for four years and have seen dozens of unique deployments, but I’ve personally never deployed or worked with a VMware environment that ran off a single protocol.  Typically, at a minimum, NFS is used for ISO datastores and CIFS can be used to eliminate Windows file servers rather than virtualize them, with a possible block-based protocol involved for boot or databases.

Additionally NetApp offers features and functionality that allow multiple storage functions to be consolidated on a single system.  You no longer require separate hardware for primary, secondary, backup, DR, and archive.  All of this can then be easily set up and managed for replication across any of NetApp’s platforms, or many 3rd-party systems front-ended with V-Series.  These two pieces combined create a truly ‘unified’ platform.

When do I bring out my B-Game?

NetApp, like any solution I’ve ever come across, is not the right tool for every job.  For me they hit or exceed the 80/20 rule perfectly.  A few places where I don’t see NetApp as a current fit:

  • Small to Medium Business (SMB) – At the SMB level a single-protocol solution may work and you can find lower-cost solutions that fit the bill, but if you scale faster than expected you’re stuck with a single-protocol platform and may end up having to purchase and manage additional devices if/when needs change
  • Massive scalability – Here I’m talking public-cloud scale, petabytes upon petabytes, where systems like Isilon from EMC and its competitors have the lead
  • Top-tier performance and enterprise-class reliability for Tier-1 applications – Here at the very high end typically EMC or Hitachi are the players, and IBM using SVC may also play
  • Mainframes, NetApp don’t play that and Big Blue don’t support it  

Summary:

While I maintain that there are no ‘one-size-fits-all’ IT solutions, and that my A-Game is a starting point rather than a rule, I find NetApp hits the bullseye for 80+ percent of the market I work with.  Not only do they fit upfront, but they back it up with support, continued innovation, and product advancement.  NetApp isn’t ‘The Growth Company’ and #2 in storage by luck or chance (although I could argue they did luck out quite a bit with the timing of the industry move to converged storage on 10GE.)

Another reason NetApp still reigns king as my A-Game is the way in which it marries to my A-Game server architecture.  Cisco UCS enables unification, protocol choice and cable consolidation as well as virtualization acceleration, etc.  All of these are further amplified when used alongside NetApp storage, which allows rapid provisioning, protocol options, storage consolidation and storage virtualization, etc.  Do you want to pre-provision 50 (or 250) VMware hosts with 25 GB read/write boot LUNs ready to go at the click of a template?  Do you want to do this without utilizing any space up front?  UCS and NetApp have the toolset for you.  You can then rapidly bring up new customers, or stay at dinner with your family while a Network Operations Center (NOC) administrator deploys a pre-architected, pre-secured, pre-tested and pre-provisioned server from a template to meet a capacity burst.
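
As a rough illustration of that template-driven workflow, here is a short Python sketch; the storage client and its clone_lun/map_lun methods are hypothetical placeholders rather than a real NetApp or UCS API, but they show why pointer-based clones make pre-provisioning hundreds of boot LUNs practically free up front.

```python
# Hypothetical sketch of template-driven boot-LUN provisioning (placeholder API,
# not an actual NetApp/UCS SDK). Each clone is pointer-based, so 50 (or 250)
# 25 GB boot LUNs consume essentially no space until the hosts start writing.
GOLDEN_LUN = "/vol/boot/esx_golden"   # pre-built, pre-patched 25 GB boot image

def provision_boot_luns(storage, hosts):
    for host in hosts:
        clone_path = f"/vol/boot/{host}_boot"
        storage.clone_lun(source=GOLDEN_LUN, dest=clone_path)   # zero-space clone
        storage.map_lun(lun=clone_path, initiator=host)         # expose to the host
        print(f"boot LUN ready for {host}")

# Example usage (assuming a 'storage_client' object exists):
# provision_boot_luns(storage_client, [f"esx{i:02d}" for i in range(1, 51)])
```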

If you’re considering a storage decision, a private cloud migration, or a converged infrastructure pod make sure you’re taking a look at NetApp as an option and see it for yourself.  For some more information on NetApp’s virtualization story see the links below:

TR3856: Quantifying the Value of Running VMware on NetApp 

TR3808: VMware vSphere and ESX 3.5 Multiprotocol Performance Comparison Using FC, iSCSI, and NFS


Intel’s Betting the Storage I/O Farm on the CPU

 

I had the privilege of attending Tech Field Day 4 in San Jose this week as a delegate thanks to Stephen Foskett and Gestalt IT.  It was a great event and a lot of information was covered in two days of presentations.  I’ll be discussing the products and vendors that sponsored the event over the next few blogs starting with this one on Intel.  Check out the official page to view all of the delegates and find links to the recordings etc. http://gestaltit.com/field-day/2010-san-jose/.

Intel presented both their Ethernet NIC and storage I/O strategy as well as a processor update and public road map; this post will focus on the Ethernet and I/O presentation.

Intel began the presentation with an overview of the data center landscape and a description of the move towards converged I/O infrastructure, meaning storage, traditional LAN and potentially High Performance Computing (HPC) on the same switches and cables.  Anyone familiar with me or this site knows that I am a fan and supporter of converging the network infrastructure to reduce overall cost and complexity as well as provide more flexibility for data center I/O, so I definitely liked this messaging.  Next was a discussion of iSCSI and its tradition of being used as a consolidation tool.

iSCSI:

iSCSI has been used for years to provide a mechanism for consolidating block storage data without the need for a separate physical network.  Most commonly iSCSI has been deployed as a low-cost alternative to Fibre Channel.  It’s typically been used in the SMB space and for select applications in larger data centers.  iSCSI was previously limited to 1 Gigabit pipes (prior to the 10GE ratification) and it also suffers from higher latency and lower throughput than Fibre Channel.  The beauty of iSCSI is the ability to use existing LAN infrastructure and traditional NICs to provide block access to shared disk; the Achilles heel is performance.  Because of this, cost has always been the primary deciding factor for using iSCSI. For more information on iSCSI see my post on storage protocols: http://www.definethecloud.net/storage-protocols.

In order to increase the performance of iSCSI and decrease the overhead on the system processor(s), the industry developed iSCSI Host Bus Adapters (HBAs) which offload the protocol overhead to the I/O card hardware.  These were not widely adopted due to the cost of the cards, which means that a great deal of iSCSI implementations rely on a protocol stack in the operating system (OS.)

Intel then drew parallels to doing the same with FCoE via the FCoE software stack available for Windows and included in current Linux kernels.  The issue with drawing this parallel is that iSCSI is a mid-market technology that sacrifices some performance and reliability for cost, whereas FCoE is intended to match or exceed the performance and reliability of FC while utilizing Ethernet as the transport.  This means that when looking at FCoE implementations, the additional cost of specialized I/O hardware makes sense to gain the additional performance and reduce the CPU overhead.

Intel also showed some performance testing of the FCoE software stack versus hardware offload using a CNA.  The IOPS they showed were quite impressive for a software stack, but IOPS isn’t the only issue.  The other issue is protocol overhead on the processor.  Their testing showed an average of about 6% overhead for the software stack.  6% is low, but we were only being shown one set of test criteria for a specific workload.  Additionally we were not provided the details of the testing criteria.  Other tests I’ve seen of the software stack are about 2 years old and show very comparable CPU utilization for the FCoE software stack and Generation 1 CNAs for 8 KB reads, but a large disparity as the block size increased (CPU overhead became worse and worse for the software stack.)  In order to really understand the implications of utilizing a software stack Intel will need to publish test numbers under multiple test conditions:

  • Sequential and random
  • Various read and write combinations
  • Various block sizes
  • Mixed workloads of FCoE and other Ethernet based traffic

I’ve since located the test Intel referenced from Demartek.  It can be obtained here (http://www.demartek.com/Reports_Free/Demartek_Intel_10GbE_FCoE_iSCSI_Adapter_Performance_Evaluation_2010-09.pdf.)  Notice that in the foreword Demartek states the importance of CPU utilization data and stresses that they don’t cherry-pick data, but then provides CPU utilization data only for the Microsoft Exchange simulation through JetStress, not for the SQLIO simulation at various block sizes.  I find that you can learn more from the data not shown in vendor-sponsored testing than from the data shown.

Even if we were to make two big assumptions, that software-stack IOPS are comparable to CNA hardware and that the additional CPU utilization is less than or equal to 6%, would you want to add an additional 6% CPU overhead to your virtual hosts?  The purpose of virtualization is to come as close as possible to full hardware utilization by placing multiple workloads on a single server.  In that scenario adding additional processor overhead seems short-sighted.
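
To put that 6% in concrete terms, here is a quick back-of-the-envelope calculation; the host size and per-VM sizing are assumptions for illustration, not figures from Intel’s testing.

```python
# Back-of-the-envelope cost of a 6% protocol-processing tax on a virtualization host.
cores = 16            # assumed host size (illustrative, not from Intel's test)
overhead = 0.06       # CPU fraction consumed by the software FCoE stack
cores_lost = cores * overhead
print(f"~{cores_lost:.1f} cores of a {cores}-core host spent on protocol handling")

# Assuming roughly half a core per VM, that's the consolidation you give up per host:
print(f"~{cores_lost / 0.5:.0f} VM slots lost per host")
```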

The technical argument for doing this is two fold:

  • Saving cost on specialized I/O hardware
  • Processing capacity evolves faster than I/O offload capacity and speeds, mainly due to economies of scale; therefore your I/O performance will increase with each processor refresh when using a software stack

If you’re looking to save cost and are comfortable with the processor and performance overhead, then there is no major issue with using the software stack.  That being said, if you’re really trying to maximize performance and/or virtualization ratios, you want to squeeze every drop you can out of the processor for the virtual machines.  As far as the second point of processor capacity goes, it most definitely rings true, but with each newer, faster processor you buy you’re losing that assumed 6% off the top for protocol overhead.  That isn’t acceptable to me.

The Other Problem:

FC and FCoE have been designed to carry native SCSI commands and data and treat them as SCSI expects; most importantly, frames are not dropped (lossless network.)  The flow-control mechanism FC uses for this is called buffer-to-buffer credits (B2B.)  This is a hop-by-hop mechanism implemented in hardware on HBAs/CNAs and FC switches.  In this mechanism, when two ports initialize a link they exchange the number of buffer spaces they have dedicated to the device on the other side of the link, based on an agreed frame size. When any device sends a frame it is responsible for keeping track of the buffer space available on the receiving device based on these credits.  When a device receives a frame and has processed it (removing it from the buffer) it returns an R_RDY, similar to a TCP ACK, which lets the sending device know that a buffer has been freed.  For more information on this see the buffer credits section of my previous post: http://www.definethecloud.net/whats-the-deal-with-quantized-congestion-notification-qcn.  This mechanism ensures that a device never sends a frame that the receiving device does not have sufficient buffer space for, and it is implemented in hardware.
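
A minimal Python simulation of the credit mechanism described above (greatly simplified; real HBAs and switches do this in hardware as part of link login):

```python
# Simplified buffer-to-buffer credit flow control: the sender may only transmit
# while it holds credits; each R_RDY from the receiver returns one credit.
class Sender:
    def __init__(self, credits):
        self.credits = credits          # advertised at link initialization

    def send_frame(self, receiver, frame):
        if self.credits == 0:
            return False                # must wait; never overruns the receiver
        self.credits -= 1
        receiver.receive(frame, self)
        return True

class Receiver:
    def receive(self, frame, sender):
        # Process the frame, free the buffer, then signal R_RDY (similar to a TCP ACK,
        # but acknowledging freed buffer space rather than received data).
        self.process(frame)
        sender.credits += 1             # R_RDY returns one credit

    def process(self, frame):
        pass                            # hand the payload up to SCSI

sender = Sender(credits=8)
rx = Receiver()
sent = sum(sender.send_frame(rx, f"frame-{i}") for i in range(10))
print(f"frames sent: {sent}")           # all 10, since each frame is processed promptly
```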

On FCoE networks we’re relying on Ethernet as the transport, so B2B credits don’t exist.  Instead we utilize Priority Flow Control (PFC), which is a priority-based implementation of 802.3x pause.  For more information on DCB see my previous post: http://www.definethecloud.net/data-center-bridging-exchange.  PFC is handled by DCB-capable NICs, which send a pause before the NIC buffers overflow.  This provides a lossless mechanism that can be translated back into B2B credits at the FC edge.

The issue with the software stack is that while the DCB-capable NIC ensures the frame is not dropped on the wire via PFC, it has to pass processing across the PCIe bus to the processor and allow the protocol to be handled by the OS kernel.  This adds layers in which the data could be lost or corrupted that don’t exist with a traditional HBA or CNA.

Summary:

The FCoE software stack is not a sufficient replacement for a CNA.  Emulex, Broadcom, Qlogic and Brocade are all offloading protocol to the card to decrease CPU utilization and increase performance.  HP has recently announced embedding Emulex OneConnect adapters, which offload iSCSI, TCP and FCoE, on the system board.  That’s a lot of backing for protocol offload, with only Intel standing on the other side of the fence.  My guess is that Intel’s end goal is to sell more processors, and utilizing more cycles for protocol processing makes sense.  Additionally Intel doesn’t have a proven FC stack to embed on a card and the R&D costs would be significant, so throwing it in the kernel and selling their standard NIC makes sense to the business.  Lastly, don’t forget storage vendor qualification; Intel has an uphill battle getting an FCoE software stack on the approved list for the major storage vendors.

Full Disclosure:  Tech Field Day is organized by the folks at Gestalt IT and paid for by the presenters of the event.  My travel, meals and accommodations were paid for by the event, but my opinions, negative or positive, are all mine.


How Emulex Broke Out of the ‘Card Pusher’ Box

A few years back, when my primary responsibility was architecting server, blade, SAN, and virtualization solutions for customers, I selected the appropriate HBA based on the following rule: whichever (Qlogic or Emulex) is less expensive today through the server OEM I’m using.  I had no technical or personal preference for one or the other.  They were both stable, performed well, and allowed my customers to do what they needed to do.  On any given day one might show higher performance than the other, but that’s always subject to the testing criteria and will be fairly irrelevant for a great deal of customers.  At that point I considered them both ‘Card Pushers.’

Last year I had the opportunity to speak at two Emulex partner product launch events in the UK and Germany.  My presentation was a vendor-independent technical discussion of the drivers for consolidating disparate networks on 10GE and above.  I had no prior knowledge of the exact nature of the product being launched, and didn’t expect anything more than a Gen 2 single-chip CNA, nothing to get excited over.  I was wrong.

Sitting through the keynote presentations by Emulex executives I quickly realized OneConnect was something totally different, and with it Emulex was doing two things:

  1. Betting the farm on Ethernet
  2. Rebranding themselves as more than just a card pusher.

Now just to get this out of the way: Emulex did not, has not, and to my knowledge will not stop pursuing better and faster FC technology; their 4Gb and 8Gb FC HBAs are still rock-solid, high-performance pure FC cards.  What they were doing, however, was obviously placing a large bet (and R&D investment) on Ethernet as a whole.

OneConnect:

The Emulex OneConnect is a Generation 2 Converged Network Adapter (CNA), but it’s a lot more than that.  It also does TCP offload, operates as an iSCSI HBA, and handles FCoE including the full suite of DCB standards.  It’s the Baskin-Robbins of I/O interface cards, although admittedly no FCoTR support 😉 (http://www.definethecloud.net/?p=380)  The technology behind the card impressed me, but the licensing model is what makes it matter.  With all that technology built into the hardware you’d expect a nice hefty price tag to go with it.  That’s not the case with the OneConnect card; the licensing options allow you to buy the card at a cost equivalent to competing 10GE NICs and license iSCSI or FCoE if/when desired (licensing models may vary with OEMs.)  This means Emulex, a Fibre Channel HBA vendor, is happy to sell you a high-performance 10GE NIC.  In IT there is never one tool for every job, but as far as I/O cards go this one comes close.

You don’t have to take my word for it when it comes to how good this card is; HP’s decision to integrate it into blade and rack-mount system boards speaks volumes.  Take a look at Thomas Jones’ post on the Emulex Federal Blog for more info (http://www.emulex.com/blogs/federal/2010/07/13/the-little-trophy-that-meant-a-lot/.)  Additionally Cisco is shipping OneConnect options for UCS blades and rack mounts, and IBM also OEMs the product.

In addition to the OneConnect launch Emulex has also driven to expand their market into other areas, products like OneCommand Vision promise to provide better network I/O monitoring and management tools, and are uniquely positioned to do this through the eyes of the OneConnect adapter which can see all networks connected to the server.

Summary:

Overall Emulex has truly moved outside of the ‘Card Pusher’ box and uniquely positioned themselves above their peers.  In a data center market where many traditional Fibre Channel vendors are clinging to pure FC like a sinking ship, Emulex has embraced 10GE and offers a product that lets the customer choose the consolidation method or methods that work for them.


The Cloud Storage Argument

The argument over the right type of storage for data center applications is an ongoing battle.  This argument gets amplified when discussing cloud architectures both private and public.  Part of the reason for this disparity in thinking is that there is no ‘one size fits all solution.’  The other part of the problem is that there may not be a current right solution at all.

When we discuss modern enterprise data center storage options there are typically five major choices:

  • Fibre Channel (FC)
  • Fibre Channel over Ethernet (FCoE)
  • Internet Small Computer System Interface (iSCSI)
  • Network File System (NFS)
  • Direct Attached Storage (DAS)

In a Windows server environment these will typically be coupled with Common Internet File System (CIFS) for file sharing.  Behind these protocols there are a series of storage arrays and disk types that can be used to meet the application’s I/O requirements.

As people move from traditional server architectures to virtualized servers, and from static physical silos to cloud-based architectures, they will typically move away from DAS to one of the other protocols listed above to gain the advantages, features and savings associated with shared storage.  For the purpose of this discussion we will focus on these four: FC, FCoE, iSCSI, NFS.

The issue then becomes which storage protocol to use for transporting your data from the server to the disk.  I’ve discussed the protocol differences in a previous post (http://www.definethecloud.net/?p=43) so I won’t go into the details here.  Depending on who you’re talking to, it’s not uncommon to find extremely passionate opinions.  There are quite a few consultants and engineers that are hard-coded to one protocol or another.  That being said, most end-users just want something that works, performs adequately and isn’t a headache to manage.

Most environments currently work on a combination of these protocols; plenty of FC data centers rely on DAS to boot the operating system and NFS/CIFS for file sharing.  The same can be said for iSCSI.  With current options a combination of these protocols is probably always going to be best: iSCSI, FCoE, and NFS/CIFS can be used side by side to provide the right performance at the right price on an application-by-application basis.

The one definite fact in all of the opinions is that running separate parallel networks, as we do today with FC and Ethernet, is not the way to move forward; it adds cost, complexity, management, power, cooling and infrastructure that isn’t needed.  Combining protocols down to one wire is key to the flexibility and cost savings promised by end-to-end virtualization and cloud architectures.  If that’s the case, which wire do we choose, and which protocol rides directly on top to transport the rest?

10 Gigabit Ethernet is currently the industry’s push for a single wire, and with good reason:

  • It’s currently got enough bandwidth/throughput to do it (10 gigabits using 64b/66b encoding, as opposed to FC/InfiniBand which currently use 8b/10b with 20% overhead; see the quick math after this list)
  • It’s scaling fast: 40GE and 100GE are well on their way to standardization (as opposed to 16G and 32G FC)
  • Everyone already knows and uses it (yes, that includes you)
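
The encoding point from the first bullet is easy to quantify; the quick math below uses the commonly quoted 8.5 Gbaud line rate for 8G FC and ignores protocol headers entirely.

```python
# Usable bits per line-rate bit, ignoring protocol headers.
tenge_usable = 10.0 * (64 / 66)   # 10GE, 64b/66b encoding -> ~9.70 Gb/s of usable bits
fc8_usable = 8.5 * (8 / 10)       # 8G FC, 8.5 Gbaud line rate with 8b/10b -> 6.8 Gb/s
print(f"10GE usable: ~{tenge_usable:.2f} Gb/s   8G FC usable: ~{fc8_usable:.1f} Gb/s")
```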

For the sake of argument let’s assume we all agree on 10GE as the right wire/protocol to carry all of our traffic; what do we layer on top?  FCoE, iSCSI, NFS, something else?  Well, that is a tough question.  The first part of the answer is that you don’t have to decide, which is very important because none of these protocols is mutually exclusive.  The second part of the answer is that maybe none of these is the end-all-be-all long-term solution.  Each current protocol has benefits and drawbacks, so let’s take a quick look:

  • iSCSI: Block-level protocol carrying SCSI over IP.  It works with standard Ethernet but can have performance issues on congested networks and also incurs IP protocol overhead.  iSCSI is great on standard Ethernet networks until congestion occurs; once the network becomes fully utilized, iSCSI performance will tend to drop.
  • FCoE: Block-level protocol which maintains Fibre Channel reliability and security while using underlying Ethernet.  Requires 10GE or above and DCB-capable switches (http://www.definethecloud.net/?p=31).  FCoE is currently well proven and reliable at the access layer and a fantastic option there, but no current solutions exist to move it further up into the network.  Products are on the road map to push FCoE further into the network, but that may not necessarily be the best way forward.
  • NFS: File-level protocol which runs on top of UDP or TCP and IP.

And a quick look at comparative performance:

[Figure: Protocol performance comparison]

While the above performance model is subjective, and network tuning and specific equipment will play a big role, the general idea holds sound.

One of the biggest factors that needs to be considered when choosing among these protocols is block vs. file.  Some applications require direct block access to disk; many databases fall into this category.  As importantly, if you want to boot an operating system from disk, a block-level protocol (iSCSI, FCoE) is required.  This means that for most diskless configurations you’ll need to make a choice between FCoE and iSCSI (still within the assumption of consolidating on 10GE.)  Diskless configurations have major benefits in large-scale deployments, including power, cooling, administration, and flexibility, so you should at least be considering them.

If you’ve chosen a diskless configuration and settled on iSCSI or FCoE for your boot disks, you still need to figure out what to do about file shares.  CIFS or NFS is your next decision; CIFS is typically the choice for Windows, and NFS for Linux/UNIX environments.  Now you’ve wound up with 2-3 protocols running to get your storage settled, and you’re stacking those alongside the rest of your typical LAN data.

Now, to look at management, step back and take a look at block data as a whole.  If you’re using enterprise-class storage you’ve got several steps of management to configure the disk in that array.  It varies with vendor, but it’s typically something to the effect of the following (sketched in code after the list):

  1. Configure the RAID for groups of disks
  2. Pool multiple RAID groups
  3. Logically sub divide the pool
  4. Assign the logical disks to the initiators/servers
  5. Configure required network security (FC zoning/ IP security/ACL, etc)
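
Sketched against a hypothetical array and fabric API (the object and method names below are placeholders, not any vendor’s actual SDK), those five steps look roughly like this for a single tenant, which makes the repetition at cloud scale obvious:

```python
# Hypothetical provisioning steps for one tenant against a block array
# (placeholder API; the method names are illustrative, not a real vendor SDK).
def provision_tenant(array, fabric, tenant, servers, size_gb):
    raid_group = array.create_raid_group(disks=8, level="RAID5")          # 1. RAID
    pool = array.create_pool([raid_group])                                # 2. pool RAID groups
    lun = array.create_lun(pool, name=f"{tenant}-data", size_gb=size_gb)  # 3. carve a LUN
    for server in servers:                                                # 4. assign to initiators
        array.map_lun(lun, initiator=server.wwpn)
        fabric.add_zone(members=[server.wwpn, array.target_wwpn])         # 5. zoning/ACLs
    fabric.activate_zoneset()

# Multiply this by hundreds of tenants and frequent moves, adds, and changes and the
# management overhead described in the next paragraph becomes clear.
```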

While this is easy stuff for storage and SAN administrators, it’s time-consuming, especially when you start talking about cloud infrastructures with lots and lots of moves, adds and changes.  It becomes way too cumbersome to scale into petabytes with hundreds or thousands of customers.  NFS has more streamlined management, but it can’t be used to boot an OS.  This makes for extremely tough decisions when looking to scale into large virtualized data center architectures or cloud infrastructure.

There is a current option that allows you to consolidate on 10GE, reduce storage protocols and still get diskless servers.  It’s definitely not the solution for every use case (there isn’t one), and it’s only a great option because there aren’t a whole lot of other great options.

In a fully virtualized environment NFS is a great low-management-overhead protocol for virtual machine disks.  Because it can’t boot, we need another way to get the operating system to server memory.  That’s where PXE boot comes in.  Preboot eXecution Environment (PXE) is a network OS boot mechanism that works well for small operating systems, typically terminal clients or Linux images.  It allows a single instance of the operating system to be stored on a PXE server attached to the network, and a diskless server to retrieve that OS at boot time.  Because some virtualization operating systems (hypervisors) are lightweight, they are great candidates for PXE boot.  This allows the architecture below.

[Figure: PXE/NFS 100% Virtualized Environment]

Summary:

While there are several options for data center storage, none of them solves every need.  Current options increase in complexity and management as the scale of the implementation increases.  Looking to the future we need to be looking for better ways to handle storage.  Maybe block-based storage has run its course, maybe SCSI has run its course; either way we need more scalable storage solutions available to the enterprise in order to meet the growing needs of the data center and maintain manageability and flexibility.  New deployments should take all current options into account and never write off the advantages of using more than one, or all of them where they fit.


Storage Protocols

Storage is a major consideration for cloud initiatives: what type of disk, which vendor, and, as importantly, which protocol?  Experts will tout one over the other based on cost, performance, throughput, etc.  Let’s take a look at the major storage protocols at play in the data center:

Small Computer System Interface (SCSI):

SCSI is the dominant block-level access method for disk in the data center.  Blocks are typically the smallest unit that can be read from or written to on a disk; they exist in various sizes depending on disk type and usage.  Block-level access means that the server can directly access the disk blocks without the need for a file system in place on top of them; this is the opposite of the file-based storage discussed later.

SCSI has been in use since the early 1980s and was originally used to move data within a single server.  The operating system writes data using the SCSI protocol to a SCSI controller, which manages one or more devices on a SCSI cable within a system chassis.  The SCSI controller ensures that only one device is active on the cable at any time, which prevents contention on the SCSI bus.  Because SCSI was managed by a single controller and contained within a system, the chances of data loss or contention were minimal; this meant that SCSI did not require control mechanisms to handle data loss or contention as networked protocols do. SCSI itself is still widely used in its native format, but it has also been encapsulated into other protocols for use within storage networks for consolidated storage.
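
In practical terms, block-level access just means reading or writing fixed-size blocks at offsets on a device with no file system in the way.  A minimal Python illustration (it assumes a real block device path and root privileges, so treat it as a sketch):

```python
import os

# Read one 512-byte block straight from a block device: no file system involved.
# Requires root and an actual device path; purely illustrative.
BLOCK_SIZE = 512
block_number = 2048

fd = os.open("/dev/sda", os.O_RDONLY)
try:
    data = os.pread(fd, BLOCK_SIZE, block_number * BLOCK_SIZE)   # (fd, length, offset)
finally:
    os.close(fd)

print(f"read {len(data)} bytes from block {block_number}")
```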

Fibre Channel (FC):

Fibre Channel was designed to extend the functionality of SCSI into point-to-point, loop, and switched topologies.  This allows for longer distances as well as storage consolidation.  FC encapsulates SCSI data and Command Descriptor Blocks (CDBs) into the payload of Fibre Channel frames.  Fibre Channel networks provide the addressing, routing, and flow control required to support SCSI data.  Additionally, Fibre Channel networks are designed to meet the needs of SCSI by providing ‘lossless,’ in-order delivery.  This means that in a stable network FC frames will not be dropped, and are delivered in order, ensuring that the Upper Layer Protocols (ULPs) will not be forced to reorder or resend frames.
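
A heavily simplified sketch of that encapsulation idea in Python; a real FC header is 24 bytes with routing control, source/destination IDs, and sequence and exchange fields, so the two-field 'header' below is only a stand-in.

```python
import struct

# Toy encapsulation: wrap a SCSI CDB in a (drastically simplified) frame header.
def build_frame(src_id, dst_id, scsi_cdb):
    header = struct.pack(">II", src_id, dst_id)   # stand-in for the 24-byte FC header
    return header + scsi_cdb                      # the SCSI CDB rides as the payload

# READ(10) CDB: opcode 0x28, LBA 8, transfer length 1 block.
READ_10 = bytes([0x28, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x01, 0x00])
frame = build_frame(src_id=0x010203, dst_id=0x040506, scsi_cdb=READ_10)
print(f"{len(frame)} bytes in the toy frame")     # 8-byte toy header + 10-byte CDB
```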

Fibre Channel networks are typically carried over fiber-optic links on dedicated infrastructures.  These infrastructures are traditionally built in pairs as exact mirrors of one another.  This provides complete physical redundancy end-to-end.  Additionally, these networks provide high bandwidth and low latency.  FC networks come in 1/2/4/8 Gbps speeds with 16/32 Gbps in the works.  Additionally, 10Gbps FC links are typically available on a proprietary basis for links between switches.

Internet Small Computer System Interface (iSCSI):

iSCSI takes SCSI data and CDBs and places them in the payload of TCP/IP packets.  This allows the SCSI protocol to be extended across existing IP infrastructures.  While IP is routable within the data center and across the WAN, iSCSI is not traditionally used/supported across routed boundaries (exceptions do exist.)  The draw of iSCSI has been that storage data can be extended across the existing infrastructure with minimal additional cost.

iSCSI has not gained the market share many have predicted over the years, due to flaws in the protocol and limitations of traditional Ethernet-based data center networks.  Until the standardization of 10 Gigabit Ethernet, most data centers relied on 1GE links which were typically saturated already.  This meant implementing iSCSI required new switching infrastructure.  10GE has changed the bandwidth limits but still not catapulted iSCSI into the mainstream.  There are several reasons for this, one being that there is a large existing investment in Fibre Channel, and two being the iSCSI protocol itself.

The problem with iSCSI from a protocol standpoint is that it takes the SCSI protocol, which expects lossless, in-order delivery, and places it in TCP/IP packets, which are designed to support heterogeneous WAN networks and experience packet loss and out-of-order delivery frequently.  This is done without providing any additional tools to either SCSI or TCP/IP for handling the SCSI payloads in the expected fashion.  This in no way means iSCSI is unusable or should be written off; it just means that additional considerations must be made when designing iSCSI, especially in enterprise or larger environments.

In order to provide proper performance for iSCSI on shared networks, Quality of Service (QoS), physical architecture, and jumbo frame support must be taken into account.  Because of these considerations many iSCSI networks have traditionally been placed on separate network hardware from the data center LAN (isolated iSCSI networks.)  This has minimized some of the benefits of consolidating on a single protocol.  With 10 Gigabit Ethernet and the standardization of Data Center Bridging (DCB), iSCSI looks more promising for a greater audience.  For more information on DCB see my previous post (http://www.definethecloud.net/?p=31.)

Fibre Channel over Ethernet (FCoE):

FCoE was ratified in 2009 and provides the functionality for moving native Fibre Channel across consolidated Ethernet networks.  FCoE relies on the DCB standards referenced above.  FCoE encapsulates full Fibre Channel frames inside Ethernet jumbo frame payloads.  Utilizing jumbo frames ensures that the FC frame is not fragmented or changed in any way.  The FCoE and DCB standards provide a robust tool set for consolidating existing Fibre Channel workloads on shared 10GE networks while providing the lossless, in-order delivery SCSI expects.  FCoE does not modify the existing Fibre Channel protocol suite and allows for the same management model, including zoning, LUN masking, etc.  FCoE has started gaining ground over the last two years, pushed by several large hardware vendors in the storage, network, and server markets.  For more information on FCoE see my post (http://www.definethecloud.net/?p=80.)
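
One practical consequence of carrying full, unmodified FC frames is the jumbo frame requirement: a maximum-size FC frame simply doesn’t fit in a standard 1500-byte Ethernet payload.  Rough numbers below, using the commonly cited FC maximum data field size.

```python
# Why FCoE needs (baby) jumbo frames: a full FC frame exceeds the standard Ethernet MTU.
fc_max_payload = 2112                  # maximum FC data field, in bytes
fc_frame = 24 + fc_max_payload + 4     # FC header + payload + CRC
standard_mtu = 1500
print(f"max FC frame ~{fc_frame} bytes vs a {standard_mtu}-byte standard Ethernet payload")
# -> the encapsulated frame needs an MTU of roughly 2.2 KB or more, hence jumbo frames
```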

Common Internet File System (CIFS):

CIFS is a file-based storage protocol based on Server Message Block (SMB.)  It is a shared storage protocol typically used in Microsoft environments for file sharing.  Windows-based file shares rely on CIFS as the transfer protocol for the file-level data.  File-based storage relies on an underlying file system such as FAT32, XFS, NTFS or otherwise, which differs from block-based storage, which does not.  File-level storage is an excellent medium for some applications but is not traditionally effective for others.  When an application needs direct block access to disk, file-based storage is not appropriate.  Deployments that fall into this category include some databases and most operating systems.

Network File System (NFS):

NFS is another file-based storage protocol.  NFS is traditionally used in Linux and Unix environments.  NFS is also a widely used protocol for VMware environments and can offer several benefits for virtual machine storage.  As a file-based storage protocol, NFS experiences many of the same limitations as stated for CIFS above.

Hyper Text Transfer Protocol (HTTP) and others:

When the cloud discussion leaves the data center (private/internal cloud) and moves up to the service-provider level, such as Google, Amazon, or the telcos, the protocols listed above may not have the necessary scalability.  When you begin talking about supporting thousands of customers with multiple terabytes each, traditional storage protocols may not suffice.  It has to do with both the scalability of the systems and the administration of the disk.  iSCSI and FC both require a fair amount of management for the RAID, volumes, and LUNs, whereas CIFS and NFS require a fair amount for the security and volumes.  Protocols such as HTTP-based storage are being used to simplify storage configuration and increase its scalability.

Which is the right protocol to use when moving to the cloud?  Obviously there is only one answer!  As always in IT, ‘it depends.’  Each protocol has its uses, benefits and drawbacks.  The most important thing to remember is that most environments can benefit from more than one, or all, of these protocols.  Every application is different and any given protocol may have advantages for a particular app.  The only universal truth in cloud storage is that protocol flexibility will be key.


Data Center Bridging

Data Center Bridging (DCB) is a group of IEEE standard protocols designed to support I/O consolidation.  DCB enables multiple protocols with very different requirements to run over the same Layer 2 10 Gigabit Ethernet infrastructure.  Because DCB is currently discussed along with Fibre Channel over Ethernet (FCoE), it’s not uncommon for people to think of DCB as part of FCoE.  This is not the case; while FCoE relies on DCB for proper treatment on a shared network, DCB enhancements can be applied to any protocol on the network.  DCB support is being built into data center hardware and software from multiple vendors and is fully backwards compatible with legacy systems (no forklift upgrades.)  For more information on FCoE see my post on the subject (http://www.definethecloud.net/?p=80.)

Network protocols typically have unique requirements in regards to latency, packet/frame loss, bandwidth, etc.  These differences have a large impact on the performance of the protocol in a shared environment.  Differences such as flow-control and frame loss are the reason Fibre Channel networks have traditionally been separate physical infrastructures from Ethernet networks.  DCB is the set of tools that allows us to converge these networks without sacrificing performance or reliability.

Let’s take a look at the DCB suite:

Priority Flow Control (PFC) 802.1Qbb:

PFC is a flow-control mechanism designed to eliminate frame loss for specific traffic types on Ethernet networks.  Protocols such as Small Computer System Interface (SCSI), which is used for block data storage, are very sensitive to data loss.  The SCSI protocol is the heart of Fibre Channel, which is a tool used to extend SCSI from internal disk to centralized storage across a network.  In its native form on dedicated networks, Fibre Channel has tools to ensure that frames are not lost as long as the network is stable.  In order to move Fibre Channel across Ethernet networks, that same ‘lossless’ behavior must be guaranteed; PFC is the tool to do that.

PFC uses a pause mechanism to allow a receiving device to signal a pause to the directly connected sending device prior to buffer overflow and packet loss.  While Ethernet has had a tool to do this for some time (802.3x pause), it has always been at the link level.  This means that all traffic on the link would be paused, rather than just a selected traffic type.  Pausing a link carrying various I/O types would be a bad thing, especially for traffic such as IP telephony and streaming video.  Rather than pause an entire link, PFC sends a pause signal for a single Class of Service (CoS), which is part of the 802.1Q Ethernet header.  This allows up to 8 classes to be defined and paused independently of one another.
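
A toy Python model of the difference between link-level pause and per-priority pause (greatly simplified; PFC is implemented in NIC and switch hardware, not in software like this):

```python
# Toy contrast between 802.3x link pause and per-priority (PFC) pause.
link_paused = False
paused_classes = set()          # CoS values 0-7 currently paused via PFC

def can_send(cos, use_pfc=True):
    if use_pfc:
        return cos not in paused_classes   # only the paused class stops
    return not link_paused                 # 802.3x: everything stops together

# Storage traffic (say CoS 3) backs up, so the receiver pauses that class only:
paused_classes.add(3)
print(can_send(3))   # False - FCoE on CoS 3 waits instead of being dropped
print(can_send(5))   # True  - e.g. voice on CoS 5 keeps flowing
```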

Congestion Management (802.1Qau):

When we begin pausing traffic in a network we have the potential to spread network congestion by creating choke points.  Imagine trying to drive past a football stadium (football or American football, pick your flavor) when the game is about to start.  You’re stuck in gridlocked traffic even though you’re not going to the game; if you’ve got that image, you’re on the right track.  Congestion management is a set of signaling tools used to push that congestion out of the network core to the network edge (if you’re thinking old-school FECN and BECN, you’re not far off.)

Bandwidth Management (802.1Qaz):

Bandwidth management is a tool for simple, consistent application of bandwidth controls at Layer 2 on a DCB network.  It allows a specific traffic type to be guaranteed a percentage of available bandwidth based on its CoS.  For instance, on a 10GE network access port utilizing FCoE you could guarantee 40% of the bandwidth to FCoE.  This provides a 4Gb tunnel for FCoE when needed, but allows other traffic types to utilize that bandwidth when it’s not in use for FCoE.
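
In sketch form the guarantee behaves like a weighted minimum rather than a hard cap; the Python below is a simplified illustration of that behavior, not the actual 802.1Qaz scheduler.

```python
# Simplified view of the 802.1Qaz bandwidth guarantee on a 10GE port:
# each class gets at least its share, and idle bandwidth is lent to active classes.
LINK_GBPS = 10
shares = {"fcoe": 0.4, "lan": 0.6}        # 40% guaranteed to FCoE, 60% to everything else

def allocate(demand_gbps):
    guaranteed = {c: LINK_GBPS * s for c, s in shares.items()}
    alloc = {c: min(demand_gbps[c], guaranteed[c]) for c in shares}
    spare = LINK_GBPS - sum(alloc.values())
    for c in shares:                       # lend idle bandwidth to classes that want more
        extra = min(spare, demand_gbps[c] - alloc[c])
        alloc[c] += extra
        spare -= extra
    return alloc

print(allocate({"fcoe": 2, "lan": 9}))     # {'fcoe': 2, 'lan': 8} - FCoE's idle share is lent out
```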

Data Center Bridging Exchange (DCBX):

DCBX is a Layer 2 communication protocol that allows DCB-capable devices to communicate and discover the edge of the DCB network, i.e. legacy devices.  DCBX not only allows passing of information but also provides tools for passing configuration.  This is key to the consistent configuration of DCB networks.  For instance, a DCB switch acting as a Fibre Channel over Ethernet Forwarder (FCF) can let an attached Converged Network Adapter (CNA) on a server know to tag FCoE frames with a specific CoS and enable pause for that traffic type.

All in all the DCB features are key enablers for true consolidated I/O.  They provide a tool set for each traffic type to be handled properly independent of other protocols on the wire.  For more information on Consolidated I/O see my previous post Consolidated IO (http://www.definethecloud.net/?p=67.)


Consolidated I/O

Consolidated I/O (input/output) is a hot topic and has been for the last two years, but it’s not a new concept.  We’ve already consolidated I/O once in the data center and forgotten about it; remember those phone PBXs before we replaced them with IP telephony?  The next step in consolidating I/O comes in the form of getting management traffic, backup traffic and storage traffic from centralized storage arrays to the servers on the same network that carries our IP data.  In the most general terms the concept is ‘one wire.’  ‘Cable Once’ or ‘One Wire’ allows a flexible I/O infrastructure with a greatly reduced cable count and a single network to power, cool and administer.

Solutions have existed and been used for years to do this; iSCSI (SCSI storage data over IP networks) is one tool that has been commonly used.  The reason the topic has hit the mainstream over the last 2 years is that 10 Gigabit Ethernet was ratified and we now have a common protocol with the proper bandwidth to support this type of consolidation.  Prior to 10GE we simply didn’t have the right bandwidth to effectively put everything down the same pipe.

The first thing to remember when discussing I/O consolidation is that, contrary to popular belief, I/O consolidation does not mean Fibre Channel over Ethernet (FCoE.)  I/O consolidation is all about using a single infrastructure and underlying protocol to carry any and all traffic types required in the data center.  The underlying protocol of choice is 10G Ethernet because it’s lightweight and high-bandwidth, and Ethernet itself is the most widely used data center protocol today.  Using 10GE and the IEEE standards for Data Center Bridging (DCB) as the underlying data center network, any and all protocols can be layered on top as needed on a per-application basis.  See my post on DCB for more information (http://www.definethecloud.net/?p=31.)  These protocols can be FCoE, iSCSI, UDP, TCP, NFS, CIFS, etc. or any combination of them all.

If you look at the data center today, most are already using a combination of these protocols, but typically have 2 or more separate infrastructures to support them.  A data center that uses Fibre Channel heavily has two Fibre Channel networks (for redundancy) and one or more LAN networks. These ‘Fibre Channel shops’ are typically still using additional storage protocols such as NFS/CIFS for file-based storage.  The cost of administering, powering, cooling, and eventually upgrading/refreshing these separate networks continues to grow.

Consolidating onto a single infrastructure not only provides obvious cost benefits but also provides the flexibility required for a cloud infrastructure.  Having a ‘Cable Once’ infrastructure allows you to provide the right protocol at the right time on an application basis, without the need for hardware changes.

Call it what you will: I/O consolidation, network convergence, or network virtualization.  A cable-once topology that can support the right protocol at the right time is one of the pillars of cloud architectures in the data center.
