The Power of Innovative Datacenter Stacks

With the industry drive towards cloud computing models there has been a lot of talk and plenty of announcements around ‘converged infrastructure’ and ‘integrated stack’ solutions. An integrated stack is a pre-packaged offering typically containing some amount of network, storage, and server infrastructure bundled with some level of virtualization, automation, and orchestration software. The purpose of these stacks is to simplify infrastructure purchasing and accelerate the migration to virtualized or cloud computing models by reducing risk and time to deployment. This simplification and acceleration is achieved through heavy testing and certification by the vendor or vendors to ensure compatibility, stability, and performance.

In broad strokes there are two types of integrated stack solution:

Single Vendor – All stack components are developed, manufactured and bundled by a single vendor.

Multi-Vendor – Products from two or more parent vendors are bundled together to create the stack.

Of these two approaches, the true value and power typically comes from the multi-vendor approach, or Innovative Stack, as long as some key processes are handled correctly, specifically infrastructure pre-integration/delivery and support. With an innovative stack the certification and integration testing is done jointly by the vendors, allowing more time to be spent tailoring the solution to specific needs rather than ensuring component compatibility and design validity. The innovative stack provides a cookie-cutter approach at the infrastructure level.

The reason the innovative stack holds sway is the ability to package ‘best-of-breed’ technologies into a holistic top-tier package rather than relying solely on products and software from a single vendor, some of which may fall lower in the rankings. The large data center hardware vendors all have several disparate product lines, each in various stages of advancement and adoption. While one or two of these product lines may be best-of-breed or close to it, you’d be hard-pressed to argue that any one vendor can provide the best storage, server, and network hardware along with the best automation and orchestration software.

A prime example of this is VMware. It’s difficult to argue that VMware is not the best-of-breed for server virtualization; with a robust feature set, an outstanding history, and approximately 90% market share they are typically the obvious choice. That being said, VMware does not sell hardware, which means if you’re virtualizing servers and want best-of-breed you’ll need two vendors right out of the gate. VMware also has an excellent desktop virtualization platform, but in that arena Citrix could easily be argued best-of-breed, and both have pros/cons depending on the specific technical/business requirements. For a desktop virtualization architecture it’s not uncommon to have three best-of-breed vendors before even discussing storage or network hardware (Vendor X servers, the VMware hypervisor, and Citrix desktop virtualization).

With the innovative stack approach a collaborative multi-vendor team can analyze, assess, bundle, test, and certify an integration of best-of-breed hardware and software to provide the highest levels of performance, feature set, and stability. Once the architectures are defined, and if an appropriate support and delivery model is put in place jointly by the vendors, a best-of-breed innovative stack can accelerate your successful adoption of converged infrastructure and cloud-model services. An excellent example of this type of multi-vendor certified Innovative Stack is the FlexPod for VMware by NetApp, Cisco, and VMware, which is backed by a joint support model and delivery packaging through certified expert channel partners.

To participate in a live WebCast on the subject and learn more please register here: http://www.definethecloud.net/innovative-versus-integration-cloud-stacks.

Innovative Versus Integration Cloud Stacks

The live webcast with NetApp and Kingman Tang went quite well, with good discussion on private cloud and data center stacks. Check out the recording on the BrightTALK channel.

Why NetApp is my ‘A-Game’ Storage Architecture

One of, if not the, most popular of my blog posts to date has been ‘Why Cisco UCS is my ‘A-Game’ Server Architecture’ (http://www.definethecloud.net/why-cisco-ucs-is-my-a-game-server-architecture). In that post I describe why I lead with Cisco UCS for most consultative engagements. This follow-up for storage has been a long time coming, and thanks to some ‘gentle’ nudging and a random coincidence combined with an extended airport wait I’ve decided to get it posted.

If you haven’t read my previous post, here is how I defined my ‘A-Game’ architectures there:

“The rule in regards to my A-Game is that it’s not a rule, it’s a launching point. I start with a specific hardware set in mind in order to visualize the customer need and analyze the best way to meet that need. If I hit a point of contention that negates the use of my A-Game I’ll fluidly adapt my thinking and proposed architecture to one that better fits the customer. These points of contention may be either technical, political, or business related:

  • Technical: My A-Game doesn’t fit the customer’s requirements due to some technical factor, support, feature, etc.
  • Political: My A-Game doesn’t fit the customer because they don’t want Vendor X (previous bad experience, hype, understanding, etc.)
  • Business: My A-Game isn’t on an approved vendor list, or something similar.

If I hit one of these roadblocks I’ll shift my vendor strategy for the particular engagement without a second thought. The exception to this is if one of these roadblocks isn’t actually a roadblock and my A-Game definitely provides the best fit for the customer; in that case I’ll work with the customer to analyze the actual requirements and attempt to find ways around the roadblock.

Basically my A-Game is a product or product line that I’ve personally tested, worked with, and trust above the others, and it is my starting point for any consultative engagement.”

In my A-Game Server post I run through the hate-then-love relationship that brought me around to trusting, supporting, and evangelizing UCS; I can’t say the same for NetApp.  My relationship with NetApp fell more along the lines of love at first sight.

NetApp – Love at first sight:

I began working with NetApp storage at the same time I was diving headfirst into the datacenter as a whole.  I was moving from server admin/engineer to architect and drinking from the SAN, virtualization, and storage firehose.  I had a fantastic boss who to this day is a mentor and friend, and who pushed me to learn quickly and execute rapidly and accurately; thanks Mike!  The main products our team handled at the time were IBM blades/servers, VMware, SAN (Brocade and Cisco), and IBM/NetApp storage.  I was never a fan of the IBM storage.  It performed solidly but was a bear to configure, lacked a rich feature set, and typically got put in place and left untouched until refresh.  At the same time I was coming up to speed on IBM storage I was learning more and more about NetApp.

From the non-technical perspective NetApp had accessible training and experts, clear value-proposition messaging, and a firm grasp on VMware, on where virtualization was heading, and on how/why it should be executed.  This aligned directly with what my team was focused on.  Additionally, NetApp worked hard to maintain an excellent partner channel relationship, make information accessible, and put the experts a phone call or flight away.  This made me WANT to learn more about their technology.

The lasting bonds:

Breakfast food, yep, breakfast food is what made NetApp stick for me, and what keeps it my A-Game four years later. Not just any breakfast food, but a personal favorite of mine: beer and waffles, err, umm… WAFL (second only to chicken and waffles, and missing only bacon).  Data ONTAP (the beer) and NetApp’s Write Anywhere File Layout (WAFL) are at the heart of why they are my A-Game.  While you can find dozens of blogs, competitive papers, etc. attacking the use of WAFL for primary block storage, what WAFL enables is amazing from a feature perspective, and the performance numbers NetApp can put up speak for themselves.  Because, unlike a traditional block-based array, NetApp owns the underlying file system, they can not only do more with the data, but they can also more rapidly adapt to market needs with software enhancements.  Don’t take my word for it; do some research, look at the latest announcements from other storage leaders, and check to see what year NetApp announced their version of those same features. With few exceptions you’ll be surprised.  The second piece of my love for NetApp is Data ONTAP.  NetApp has several storage controller systems ranging from the lower end to Tier-1 high-capacity, high-availability systems.  Regardless of which one you use, you’re always using the same operating/management system, Data ONTAP.  This means that as you scale, change, refresh, upgrade, downgrade, you name it, you never have to retrain AND you keep a common feature set.

My love for breakfast is not the only draw to NetApp, and in fact without a bacon offering I would have strayed if there weren’t more (note to NetApp: Incorporate fatty pork the way politicians do.) 

Other features that keep NetApp top of my list are:

  • Primary block-level storage deduplication with real-world savings of 70+% and a minimal performance hit (and no license fee to boot)
  • Ease of upgrade/downgrade (keep the shelves of disks, replace the controllers, data stays)
  • Read/Write ‘0’ space/cost clones (the ability to clone various data sets in a read/write status using only pointers and storing only the change ‘delta’) and FlexClone capabilities as a whole
  • Highly optimized snapshots for point-in-time rollback, test/dev, etc.
  • VMware plugins to enable VMware admins to manage and monitor their own storage allotments
  • Storage virtualization, the ability to carve out storage and the management of that storage to multiple tenants in a similar fashion to what VMware does for servers
  • Ability to get 80% of the performance benefits of a shelf of SSD drives by adding Flash Cache (PAM II) cards 
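
Below is a minimal sketch of what a few of these features look like from the command line, assuming a 7-Mode Data ONTAP controller reachable over SSH. The controller hostname, volume name, and snapshot name are hypothetical; the commands shown are the 7-Mode forms of deduplication (sis), snapshots, and FlexClone.

```python
#!/usr/bin/env python3
"""Sketch: driving dedup, snapshots, and FlexClone on a 7-Mode Data ONTAP
controller over SSH. Hostname, volume, and snapshot names are hypothetical."""
import subprocess

FILER = "filer01.example.com"  # hypothetical controller hostname


def ontap(*args: str) -> None:
    """Run a single Data ONTAP CLI command on the controller via SSH."""
    subprocess.run(["ssh", f"admin@{FILER}", *args], check=True)


# 1. Enable block-level deduplication on a volume and scan the existing data.
ontap("sis", "on", "/vol/vmware_datastore")
ontap("sis", "start", "-s", "/vol/vmware_datastore")

# 2. Take a point-in-time snapshot (consumes near-zero space at creation).
ontap("snap", "create", "vmware_datastore", "pre_upgrade")

# 3. Create a read/write FlexClone of the volume backed by that snapshot;
#    '-s none' drops the space guarantee, so the clone only consumes space
#    as its contents diverge from the parent.
ontap("vol", "clone", "create", "vmware_datastore_clone", "-s", "none",
      "-b", "vmware_datastore", "pre_upgrade")
```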

Add to that more recent features, such as being first to market with FCoE-based storage, and you’ve got a winner in my book.  All that being said, I still haven’t covered the real reason NetApp is the first storage vendor in my head anytime I talk about storage.

Unification:

Anytime I’m talking about servers I’m talking about virtualization as well.  Because I don’t work in the Unix or mainframe worlds, I’m most likely talking about VMware (90% market share has that effect).  When dealing with virtualization my primary goals are consolidation/optimization and flexibility, and in my opinion nobody can touch NetApp storage for this.  I’m a fan of choice and options, and I like particular features/protocols for particular use cases.  On most storage platforms I have to choose my hardware based on the features and protocols my customers require, and most likely use more than one platform to get them all.  This isn’t the case with NetApp.  With few exceptions every protocol/feature is available simultaneously on any given hardware platform.  This means I can run iSCSI, FC, FCoE, or all of the above for block-based needs at the same time I run CIFS natively to replace Windows file servers, and NFS for my VMware datastores.  All of that from the same box or even the same ports!  This lets me tier my protocols and features to the application requirements instead of to my hardware limitations.

I’ve been working on VMware deployments in some fashion for four years and have seen dozens of unique deployments, but I’ve personally never deployed or worked with a VMware environment that ran off a single protocol.  Typically, at a minimum, NFS is used for ISO datastores and CIFS can be used to eliminate Windows file servers rather than virtualize them, with a block-based protocol possibly involved for boot or databases.
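
To make the multi-protocol point concrete, here is a minimal sketch, again assuming a 7-Mode Data ONTAP controller managed over SSH, of a single controller serving an NFS datastore, a CIFS share, and a block LUN at the same time. The hostname, volume paths, share name, and subnet are all hypothetical.

```python
#!/usr/bin/env python3
"""Sketch: one 7-Mode Data ONTAP controller serving NFS, CIFS, and block
storage simultaneously. Hostname, paths, share name, and subnet are
hypothetical."""
import subprocess

FILER = "filer01.example.com"  # hypothetical controller hostname


def ontap(*args: str) -> None:
    """Run a single Data ONTAP CLI command on the controller via SSH."""
    subprocess.run(["ssh", f"admin@{FILER}", *args], check=True)


# NFS export for a VMware datastore (read/write and root for the ESX subnet).
ontap("exportfs", "-p", "rw=192.168.10.0/24,root=192.168.10.0/24",
      "/vol/nfs_ds01")

# CIFS share replacing a Windows file server.
ontap("cifs", "shares", "-add", "userdata", "/vol/cifs_userdata")

# Block LUN, presented over FC, FCoE, or iSCSI depending on licensed protocols.
ontap("lun", "create", "-s", "500g", "-t", "vmware", "/vol/block_vols/sql_lun0")
```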

Additionally, NetApp offers features and functionality that allow multiple storage functions to be consolidated on a single system.  You no longer require separate hardware for primary, secondary, backup, DR, and archive.  All of this can then be easily set up and managed for replication across any of NetApp’s platforms, or across many third-party systems front-ended with V-Series.  These two pieces combined create a truly ‘unified’ platform.

When do I bring out my B-Game?

NetApp, like any solution I’ve ever come across, is not the right tool for every job.  For me they hit or exceed the 80/20 rule perfectly.  A few places where I don’t see NetApp as a current fit:

  • Small to Medium Business (SMB) – At the SMB level a single-protocol solution may work and you can find lower-cost solutions that fit the bill; but if you scale faster than expected you’re stuck with a single-protocol platform and may end up having to purchase and manage additional devices if/when needs change
  • Massive scalability – Here I’m talking public-cloud scale, petabytes upon petabytes, where systems like EMC’s Isilon and its competitors have the lead
  • Top-tier performance and enterprise-class reliability for Tier-1 applications – Here at the very high end EMC or Hitachi are typically the players, and IBM using SVC may also play
  • Mainframes – NetApp don’t play that and Big Blue don’t support it

Summary:

While I maintain that there are no ‘one-size-fits-all’ IT solutions, and that my A-Game is a starting point, not a rule, I find NetApp hits the bullseye for 80+ percent of the market I work with.  Not only do they fit upfront, but they back it up with support, continued innovation, and product advancement.  NetApp isn’t ‘The Growth Company’ and #2 in storage by luck or chance (although I could argue they did luck out quite a bit with the timing of the industry move to converged storage on 10GE).

Another reason NetApp still reigns as my A-Game is the way in which it marries to my A-Game server architecture.  Cisco UCS enables unification, protocol choice, and cable consolidation as well as virtualization acceleration, etc.  All of these are further amplified when used alongside NetApp storage, which allows rapid provisioning, protocol options, storage consolidation, and storage virtualization, etc.  Do you want to pre-provision 50 (or 250) VMware hosts with 25 GB read/write boot LUNs ready to go at the click of a template?  Do you want to do this without utilizing any space up front?  UCS and NetApp have the toolset for you.  You can then rapidly bring up new customers, or stay at dinner with your family while a Network Operations Center (NOC) administrator deploys a pre-architected, pre-secured, pre-tested, and provisioned server from a template to meet a capacity burst.
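
As a rough illustration of that boot-LUN template idea, here is a sketch, assuming a 7-Mode controller and a golden ESXi boot LUN already captured in a snapshot, that stamps out one zero-space, read/write boot LUN per pre-provisioned UCS service profile. The controller name, LUN paths, igroup names, and WWPNs are all hypothetical.

```python
#!/usr/bin/env python3
"""Sketch: clone a golden 25 GB boot LUN once per UCS service profile so each
host gets a read/write boot LUN that consumes no space until it diverges.
Controller, paths, igroup names, and WWPNs are hypothetical."""
import subprocess

FILER = "filer01.example.com"          # hypothetical controller hostname
GOLDEN_LUN = "/vol/boot/esxi_golden"   # hypothetical golden boot-image LUN
SNAPSHOT = "golden_base"               # snapshot backing the clones


def ontap(*args: str) -> None:
    """Run a single Data ONTAP CLI command on the controller via SSH."""
    subprocess.run(["ssh", f"admin@{FILER}", *args], check=True)


# One boot LUN per pre-provisioned host; WWPNs follow a made-up UCS-style pool.
hosts = {f"esx{i:02d}": f"20:00:00:25:b5:00:00:{i:02x}" for i in range(1, 51)}

for host, wwpn in hosts.items():
    clone_path = f"/vol/boot/{host}_boot"
    # Clone the golden LUN; blocks are shared with the parent until written.
    ontap("lun", "clone", "create", clone_path, "-b", GOLDEN_LUN, SNAPSHOT)
    # Create an initiator group for the host's vHBA and map the boot LUN to it.
    ontap("igroup", "create", "-f", "-t", "vmware", host, wwpn)
    ontap("lun", "map", clone_path, host)
```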

If you’re considering a storage decision, a private cloud migration, or a converged infrastructure pod make sure you’re taking a look at NetApp as an option and see it for yourself.  For some more information on NetApp’s virtualization story see the links below:

TR3856: Quantifying the Value of Running VMware on NetApp 

TR3808: VMware vSphere and ESX 3.5 Multiprotocol Performance Comparison Using FC, iSCSI, and NFS
