I enjoyed a great conversation with NetApp’s Vaughn Stewart and Cisco’s Abhinav Joshi about FlexPod last week during Cisco Live 2011. Check out the video below.
With the industry’s drive toward cloud computing models there has been a lot of talk, and many announcements, around ‘converged infrastructure’ and ‘integrated stack’ solutions. An integrated stack is a pre-packaged offering typically containing some amount of network, storage, and server infrastructure bundled with some level of virtualization, automation, and orchestration software. The purpose of these stacks is to simplify infrastructure purchasing and to accelerate the migration to virtualized or cloud computing models by reducing risk and time to deployment. This simplification and acceleration is achieved through heavy testing and certification by the vendor or vendors to ensure compatibility, stability, and performance.
In broad strokes there are two types of integrated stack solutions:
Single Vendor – All stack components are developed, manufactured and bundled by a single vendor.
Multi-Vendor – Products from two or more parent vendors are bundled together to create the stack.
Of these two approaches, the real value and power typically come from the multi-vendor approach, or Innovative Stack, as long as some key processes are handled correctly: specifically infrastructure pre-integration/delivery and support. With an innovative stack the certification and integration testing is done by the joint vendors, allowing more time to be spent tailoring the solution to specific needs rather than ensuring component compatibility and design validity. The innovative stack provides a cookie-cutter approach at the infrastructure level.
The reason the innovative stack holds sway is its ability to package ‘best-of-breed’ technologies into a holistic top-tier package, rather than relying solely on products and software from a single vendor, some of which may fall lower in the rankings. The large data center hardware vendors all have several disparate product lines, each in various stages of advancement and adoption. While one or two of these product lines may be best-of-breed or close to it, you’d be hard-pressed to argue that any one vendor can provide the best storage, server, and network hardware along with the best automation and orchestration software.
A prime example of this is VMware. It’s difficult to argue that VMware is not the best-of-breed for server virtualization; with a robust feature set, an outstanding history, and approximately 90% market share, it is typically the obvious choice. That said, VMware does not sell hardware, which means that if you’re virtualizing servers and want best-of-breed you’ll need two vendors right out of the gate. VMware also has an excellent desktop virtualization platform, but in that arena Citrix could easily be argued best-of-breed, and both have pros and cons depending on the specific technical and business requirements. For a desktop virtualization architecture it’s not uncommon to have three best-of-breed vendors before even discussing storage or network hardware (Vendor X servers, the VMware hypervisor, and Citrix desktop virtualization).
With the innovative stack approach, a collaborative multi-vendor team can analyze, assess, bundle, test, and certify an integration of best-of-breed hardware and software to provide the highest levels of performance, features, and stability. Once the architectures are defined, and if an appropriate support and delivery model is put in place jointly by the vendors, a best-of-breed innovative stack can accelerate your successful adoption of converged infrastructure and cloud-model services. An excellent example of this type of multi-vendor certified Innovative Stack is the FlexPod for VMware by NetApp, Cisco, and VMware, which is backed by a joint support model and delivery packaging through certified expert channel partners.
To participate in a live webcast on the subject and learn more, please register here: http://www.definethecloud.net/innovative-versus-integration-cloud-stacks.
The live webcast with NetApp and Kingman Tang went quite well, with good discussion of private cloud and data center stacks. Check out the recording below.
One of, if not the, most popular of my blog posts to date has been ‘Why Cisco UCS is my ‘A-Game’ Server Architecture’ (http://www.definethecloud.net/why-cisco-ucs-is-my-a-game-server-architecture). In that post I describe why I lead with Cisco UCS for most consultative engagements. This follow-up for storage has been a long time coming, and thanks to some ‘gentle’ nudging, random coincidence, and an extended airport wait, I’ve decided to get it posted.
If you haven’t read my previous post, in it I define my ‘A-Game’ architectures as follows:
“The rule in regards to my A-Game is that it’s not a rule, it’s a launching point. I start with a specific hardware set in mind in order to visualize the customer need and analyze the best way to meet that need. If I hit a point of contention that negates the use of my A-Game I’ll fluidly adapt my thinking and proposed architecture to one that better fits the customer. These points of contention may be either technical, political, or business related:
- Technical: My A-Game doesn’t fit the customer’s requirements due to some technical factor: support, a feature, etc.
- Political: My A-Game doesn’t fit the customer because they don’t want Vendor X (previous bad experience, hype, understanding, etc.)
- Business: My A-Game isn’t on an approved vendor list, or something similar.
If I hit one of these roadblocks I’ll shift my vendor strategy for the particular engagement without a second thought. The exception to this is if one of these roadblocks isn’t actually a roadblock and my A-Game definitely provides the best fit for the customer I’ll work with the customer to analyze actual requirements and attempt to find ways around the roadblock.
Basically my A-Game is a product or product line that I’ve personally tested, worked with and trust above the others that is my starting point for any consultative engagement.”
In my A-Game server post I run through the hate-then-love relationship that brought me around to trust, support, and evangelize UCS; I cannot say the same for NetApp. My relationship with NetApp fell more along the lines of love at first sight.
NetApp – Love at first sight:
I began working with NetApp storage at the same time I was diving headfirst into the data center as a whole. I was moving from server admin/engineer to architect and drinking from the SAN, virtualization, and storage firehose. I had a fantastic boss, who to this day is a mentor and friend, who pushed me to learn quickly and execute rapidly and accurately (thanks, Mike!). The main products our team handled at the time were IBM blades/servers, VMware, SAN (Brocade and Cisco), and IBM/NetApp storage. I was never a fan of the IBM storage. It performed solidly but was a bear to configure, lacked a rich feature set, and typically got put in place and left untouched until refresh. At the same time I was coming up to speed on IBM storage, I was learning more and more about NetApp.
From the non-technical perspective, NetApp had accessible training and experts, clear value-proposition messaging, and a firm grasp on VMware: where virtualization was heading and how and why to execute on it. This aligned squarely with what my team was focused on. Additionally, NetApp worked hard to maintain an excellent partner channel relationship, make information accessible, and put the experts a phone call or flight away. This made me WANT to learn more about their technology.
The lasting bonds:
Breakfast food. Yep, breakfast food is what made NetApp stick for me, and it’s still my A-Game four years later. Not just any breakfast food, but a personal favorite of mine: beer and waffles, err, umm… WAFL (second only to chicken and waffles, and missing only bacon). Data ONTAP (the beer) and NetApp’s Write Anywhere File Layout (WAFL) are at the heart of why they are my A-Game. While you can find dozens of blogs, competitive papers, etc. attacking the use of WAFL for primary block storage, what WAFL enables is amazing from a feature perspective, and the performance numbers NetApp can put up speak for themselves. Because NetApp, unlike a traditional block-based array vendor, owns the underlying file system, they can not only do more with the data but also more rapidly adapt to market needs with software enhancements. Don’t take my word for it: do some research, look at the latest announcements from other storage leaders, and check what year NetApp announced their version of those same features. With few exceptions you’ll be surprised.

The second piece of my love for NetApp is Data ONTAP. NetApp has several storage controller systems, ranging from the lower end to Tier-1 high-capacity, high-availability systems. Regardless of which one you use, you’re always using the same operating/management system, Data ONTAP. This means that as you scale, change, refresh, upgrade, downgrade, you name it, you never have to retrain AND you keep a common feature set.
My love for breakfast is not the only draw to NetApp; in fact, without a bacon offering I would have strayed if there weren’t more (note to NetApp: incorporate fatty pork the way politicians do).
Other features that keep NetApp top of my list are:
Add to that more recent features, such as being first to market with FCoE-based storage, and you’ve got a winner in my book. All that being said, I still haven’t covered the real reason NetApp is the first storage vendor in my head anytime I talk about storage.
Anytime I’m talking about servers I’m talking about virtualization as well, and because I don’t work in the Unix or mainframe worlds I’m most likely talking about VMware (90% market share has that effect). When dealing with virtualization my primary goals are consolidation/optimization and flexibility, and in my opinion nobody can touch NetApp storage here. I’m a fan of choice and options; I also like particular features and protocols for particular use cases. On most storage platforms I have to choose my hardware based on the features and protocols my customers require, and most likely use more than one platform to get them all. This isn’t the case with NetApp. With few exceptions, every protocol and feature is available simultaneously on any given hardware platform. This means I can run iSCSI, FC, FCoE, or all of the above for block-based needs, at the same time I run CIFS natively to replace Windows file servers and NFS for my VMware datastores. All of that from the same box, or even the same ports! This lets me tier my protocols and features to the application requirements instead of to my hardware limitations.
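To make the idea of protocol tiering concrete, here is a minimal sketch of what a protocol-per-workload policy might look like. The specific workload-to-protocol pairings below are illustrative assumptions of mine, not NetApp guidance; the point is simply that one multiprotocol array can serve all of them at once.

```python
# Illustrative only: maps workload classes to the storage protocol that
# might serve them, all from a single multiprotocol array. The pairings
# are example choices for discussion, not vendor recommendations.
PROTOCOL_POLICY = {
    "vmware_datastore": "NFS",     # file-based datastores simplify VM management
    "windows_file_share": "CIFS",  # replaces standalone Windows file servers
    "database_lun": "FC",          # block protocol for latency-sensitive databases
    "boot_lun": "FCoE",            # block boot over a converged network
    "test_dev": "iSCSI",           # low-cost block access over plain Ethernet
}

def protocol_for(workload: str) -> str:
    """Return the protocol assigned to a workload class by the policy."""
    try:
        return PROTOCOL_POLICY[workload]
    except KeyError:
        raise ValueError(f"no protocol policy defined for {workload!r}")

if __name__ == "__main__":
    for w in ("vmware_datastore", "windows_file_share", "database_lun"):
        print(f"{w} -> {protocol_for(w)}")
```

The design point is that the policy is driven by application requirements, not by which protocols a given array happens to license or support.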
I’ve been working on VMware deployments in some fashion for four years and have seen dozens of unique deployments, but I’ve personally never deployed or worked with a VMware environment that ran off a single protocol. Typically, at a minimum, NFS is used for ISO datastores, CIFS can be used to eliminate Windows file servers rather than virtualize them, and a block-based protocol may be involved for boot or databases.
Additionally, NetApp offers features and functionality that allow multiple storage functions to be consolidated on a single system. You no longer require separate hardware for primary, secondary, backup, DR, and archive. All of this can then be easily set up and managed for replication across any of NetApp’s platforms, or across many third-party systems front-ended with the V-Series. These two pieces combined create a truly ‘unified’ platform.
When do I bring out my B-Game?
NetApp, like any solution I’ve ever come across, is not the right tool for every job. For me they hit or exceed the 80/20 rule perfectly. A few places where I don’t see NetApp as a current fit:
While I maintain that there are no one-size-fits-all IT solutions, and that my A-Game is a starting point rather than a rule, I find NetApp hits the bull’s-eye for 80+ percent of the market I work with. Not only do they fit up front, but they back it up with support, continued innovation, and product advancement. NetApp isn’t ‘The Growth Company’ and #2 in storage by luck or chance (although I could argue they did luck out quite a bit with the timing of the industry move to converged storage on 10GbE).
Another reason NetApp still reigns as my A-Game is the way it marries to my A-Game server architecture. Cisco UCS enables unification, protocol choice, and cable consolidation, as well as virtualization acceleration. All of these are further amplified when used alongside NetApp storage, which provides rapid provisioning, protocol options, storage consolidation, and storage virtualization. Do you want to pre-provision 50 (or 250) VMware hosts with 25 GB read/write boot LUNs, ready to go at the click of a template? Do you want to do this without consuming any space up front? UCS and NetApp have the toolset for you. You can then rapidly bring up new customers, or stay at dinner with your family while a Network Operations Center (NOC) administrator deploys a pre-architected, pre-secured, pre-tested, and pre-provisioned server from a template to meet a capacity burst.
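The "no space up front" claim rests on copy-on-write cloning: a clone of a golden boot LUN starts as metadata pointing at the template, and new blocks are consumed only when a host diverges from it. The toy model below is my own sketch of that accounting idea; real array-side cloning (performed through the storage system's management interfaces) works very differently under the hood, and the class and method names here are hypothetical.

```python
class GoldenBootLun:
    """Toy copy-on-write model of cloning a golden boot LUN template.

    Illustrative sketch of the concept only; not an actual storage API.
    """

    def __init__(self, size_gb: int):
        self.size_gb = size_gb
        self.clones = []           # clones are metadata records, not block copies

    def clone(self, host: str) -> dict:
        # Cloning records a pointer to the template; no data is copied,
        # so the space consumed up front is zero regardless of LUN size.
        c = {"host": host, "size_gb": self.size_gb, "unique_gb": 0.0}
        self.clones.append(c)
        return c

    def write(self, clone: dict, gb: float) -> None:
        # Only divergent writes allocate new blocks against a clone.
        clone["unique_gb"] += gb

    def space_consumed_gb(self) -> float:
        return sum(c["unique_gb"] for c in self.clones)

if __name__ == "__main__":
    template = GoldenBootLun(size_gb=25)
    hosts = [template.clone(f"esx-host-{i:03d}") for i in range(50)]
    print("boot LUNs provisioned:", len(hosts))
    print("space used up front (GB):", template.space_consumed_gb())
    template.write(hosts[0], 1.5)   # first host writes 1.5 GB of its own data
    print("space used after divergence (GB):", template.space_consumed_gb())
```

Fifty 25 GB boot LUNs are "provisioned" instantly, yet consumed capacity grows only as individual hosts write unique data.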
If you’re considering a storage decision, a private cloud migration, or a converged infrastructure pod, make sure you take a look at NetApp and see for yourself. For some more information on NetApp’s virtualization story, see the links below:
Cloud computing environments provide enhanced scalability and flexibility to IT organizations. Many options exist for building cloud strategies: public, private, etc. For many companies private cloud is an attractive option because it allows them to maintain full visibility and control of their IT systems. Private clouds can also be further enhanced by merging private cloud systems with public cloud systems in a hybrid cloud. This allows some systems to gain the economies of scale offered by public cloud while others are maintained internally. Some great examples of hybrid strategies would be:
Many more options exist, and any combination of them is possible. If private cloud is part of a company’s cloud strategy, there is a common set of building blocks required to design the computing environment.
In the diagram above we see that each component builds upon the one below it. Starting at the bottom, we utilize consolidated hardware to minimize power, cooling, and space, as well as the number of underlying managed components. At the second tier of the private cloud model we layer on virtualization to maximize utilization of the underlying hardware while providing logical separation for individual applications.
If we stop at this point we have what most of today’s data centers are using to some extent, or moving to: a virtualized data center. Without the next two layers we do not have a cloud/utility computing model; those layers provide the real operational flexibility and organizational benefits of a cloud model.
To move our virtualized data center to a cloud architecture, we next layer on automation and monitoring. This layer provides the management and reporting functionality for the underlying architecture and could include monitoring systems, troubleshooting tools, chargeback software, hardware provisioning components, etc. Next we add a provisioning portal to allow end users or IT staff to provision new applications, decommission systems no longer in use, and add or remove capacity from a single tool. Depending on the level of automation in the layers below, some things, like capacity management, may be handled without user or staff intervention.
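As a minimal sketch of how these layers interact, consider the toy model below. All class and method names are hypothetical illustrations of the layered model described above, not any real product: the portal exposes provision/decommission operations, and when the automation layer permits, capacity management happens without staff intervention.

```python
class PrivateCloudStack:
    """Toy model of the layered private-cloud stack: consolidated hardware ->
    virtualization -> automation/monitoring -> provisioning portal.
    Names and behavior are illustrative assumptions only."""

    def __init__(self, capacity_vms: int, auto_capacity: bool = True):
        self.capacity_vms = capacity_vms      # hardware layer: available VM slots
        self.auto_capacity = auto_capacity    # automation layer: manage capacity?
        self.deployed = []                    # virtualization layer: running apps

    def provision(self, app: str) -> str:
        """Portal entry point: deploy an app, auto-expanding capacity if allowed."""
        if len(self.deployed) >= self.capacity_vms:
            if not self.auto_capacity:
                raise RuntimeError("capacity exhausted; staff intervention required")
            self.capacity_vms += 10           # automation adds a capacity block
        self.deployed.append(app)
        return f"{app} provisioned ({len(self.deployed)}/{self.capacity_vms} VMs)"

    def decommission(self, app: str) -> None:
        """Portal entry point: reclaim resources from a system no longer in use."""
        self.deployed.remove(app)
```

With `auto_capacity` enabled, a request that exceeds the current footprint is absorbed automatically; with it disabled, the same request surfaces to staff, which is exactly the distinction the paragraph above draws.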
The last piece of the diagram above is security. While many private cloud discussions leave security out or minimize its importance, it is actually a key component of any cloud design. When moving to private cloud, customers are typically building a new compute environment or totally redesigning an existing one. This is the ideal time to design robust security in from end to end, because you’re not tied to previous mistakes (we all make them) or legacy design. Security should be part of the initial discussion for each layer of the private cloud architecture and for the solution as a whole.
Private cloud systems can be built with many different tools from various vendors. Many of the software tools exist in both open source and licensed versions. Additionally, several vendors offer end-to-end private cloud stacks upon which to design and build a private cloud system. The remainder of this post covers three of the leading private cloud offerings:
Scope: This post is an overview of three excellent solutions for private cloud. It is not a pro/con discussion or a feature comparison. I would personally position any of the three architectures for a given customer depending on customer requirements, existing environment, cloud strategy, business objectives, and comfort level. As always, please feel free to leave comments, concerns, or corrections using the comment form at the bottom of the post.
Secure Multi-Tenancy (SMT):
Vendor positioning: ‘This includes the industry’s first end-to-end secure multi-tenancy solution that helps transform IT silos into shared infrastructure.’
SMT is a pairing of: VMware vSphere, Cisco Nexus, UCS, MDS, and NetApp storage systems. SMT has been jointly validated and tested by the three companies, and a Cisco Validated Design (CVD) exists as a reference architecture. Additionally a joint support network exists for customers building or using SMT solutions.
Unlike the other two systems, SMT is a reference architecture that a customer can build internally or along with a trusted partner. This flexibility in how, and by whom, the stack is built is one of the solution’s unique benefits.
HP Matrix:
Vendor positioning: ‘The industry’s first integrated infrastructure platform that enables you to reduce capital costs and energy consumption and more efficiently utilize the talent of your server administration teams for business innovation rather than operations and maintenance.’
Matrix is an integration of HP blades, HP storage, HP networking, and HP provisioning/management software. HP has tested the interoperability of these proven components and integrated them into a single offering.
Vblock:
Vendor positioning: ‘The industry’s first completely integrated IT offering that combines best-in-class virtualization, networking, computing, storage, security, and management technologies with end-to-end vendor accountability.’
Vblocks are a combination of EMC software and storage, Cisco UCS, MDS, and Nexus, and VMware virtualization. Vblocks are complete infrastructure packages sold in one of three sizes based on the number of virtual machines. They offer a thoroughly tested and jointly supported infrastructure with proven performance levels based on a maximum number of VMs.
Private cloud can provide a great deal of benefit when implemented properly, but like any major IT project, those benefits are greatly reduced by mistakes and improper design. Pre-designed and tested infrastructure solutions such as the ones above give customers a proven platform on which to build a private cloud.