The Difference Between Private Cloud and Converged Infrastructure

With all of the hype around private clouds and manufacturer private cloud infrastructure stacks, I thought I’d take some time to differentiate between ‘private cloud’ and ‘converged infrastructure.’  For some background on private cloud see two of my previous posts: http://www.definethecloud.net/building-a-private-cloud and http://www.definethecloud.net/is-private-cloud-a-unicorn.

Private clouds typically consist of four architectural stages: consolidation, virtualization, automation and orchestration (I describe these here: http://www.definethecloud.net/smt-matrix-and-vblock-architectures-for-private-cloud).

To build a true private cloud, hardware/platform consolidation is layered with virtualization, automation and orchestration (without which the ‘On-Demand Self-Service’ requirement of NIST’s definition is not met).  The end result is an IT model and infrastructure that moves at the pace of the business.

Converged infrastructure, on the other hand, is a subset of this: typically consolidation and virtualization, possibly with some automation.  With all major vendors selling some form of ‘integrated stack’ marketed as private cloud, I thought I’d take a look at where four of the most popular actually fall along the path.

 

[Figure: pyramid of the private cloud architectural stages, showing where each of the four stacks falls along the path]

Starting from the bottom (as in bottom of the pyramid, rather than bottom in quality, value, etc.):

FlexPod: FlexPod is an architecture designed using NetApp storage and Cisco compute and networking components.  The FlexPod architectures address various business and application needs but do not include automation/orchestration software.  The idea is that customers have the flexibility to choose the level and type of automation/orchestration suite they require.

Vblock: Vblocks consist of EMC storage coupled with VMware virtualization and Cisco network/compute.  Additionally, Vblock incorporates EMC’s Unified Infrastructure Manager (UIM), which enables automation and a single point of management for most of the infrastructure components.  An orchestration suite would still be required for a true private cloud.

Exalogic: Oracle’s stack offering is Exalogic which combines Oracle hardware with their middleware and software to provide a private cloud platform tailored toward Java environments.  The provisioning tools included offer the promise of private cloud ‘on-demand self-service.’

BladeSystem Matrix: HP’s BladeSystem Matrix is built upon HP BladeSystem, storage, network and software components and is managed by HP’s Cloud Service Automation.  The automation and orchestration tools included in that software suite put HP’s offering in the private cloud arena.

Bottom Line:

Depending on the drivers, requirements, and individual environment, all of these stacks can offer customers a platform from which to rapidly build cloud services.  The key is deciding what you want and which tool is best suited to get you there.  That decision should be based on both ROI and business agility, as cost is not the only reason for a migration to cloud.

For a deeper look at private cloud stacks check out my post at Network Computing (http://www.networkcomputing.com/private-cloud/229900081).


Fibre Channel over Ethernet

Fibre Channel over Ethernet (FCoE) is a protocol standard ratified in June of 2009.  FCoE provides the tools for encapsulation of Fibre Channel (FC) in 10 Gigabit Ethernet frames.  The purpose of FCoE is to allow consolidation of low-latency, high performance FC networks onto 10GE infrastructures.  This allows for a single network/cable infrastructure which greatly reduces switch and cable count, lowering the power, cooling, and administrative requirements for server I/O.

FCoE is designed to be fully interoperable with current FC networks and requires little to no additional training for storage and IP administrators.  FCoE operates by encapsulating native FC into Ethernet frames.  Native FC is considered a ‘lossless’ protocol, meaning frames are not dropped during periods of congestion.  This is by design, in order to ensure the behavior expected by the SCSI payloads.  Traditional Ethernet does not provide the tools for lossless delivery on shared networks, so enhancements were defined by the IEEE to provide appropriate transport of encapsulated Fibre Channel on Ethernet networks.  These standards are known as Data Center Bridging (DCB), which I’ve discussed in a previous post (http://www.definethecloud.net/?p=31).  These Ethernet enhancements are fully backward compatible with traditional Ethernet devices, meaning DCB-capable devices can exchange standard Ethernet frames seamlessly with legacy devices.  The full 2148-byte FC frame is encapsulated in an Ethernet jumbo frame, avoiding any modification/fragmentation of the FC frame.
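As a rough illustration of the encapsulation, here is a minimal sketch in Python (the build_fcoe_frame helper, the zeroed placeholder header bytes and the dummy MAC addresses are mine, not part of any FCoE stack) showing a full-size FC frame being wrapped in an Ethernet jumbo frame using the FCoE EtherType 0x8906:

```python
import struct

FCOE_ETHERTYPE = 0x8906   # EtherType assigned to FCoE
MAX_FC_FRAME   = 2148     # full FC frame: SOF + header + 2112-byte payload + CRC + EOF

def build_fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Conceptual sketch: wrap a native FC frame in an Ethernet jumbo frame.

    A real FCoE header carries a version field, reserved bits and SOF/EOF
    delimiters; they are collapsed into zeroed placeholders here.
    """
    if len(fc_frame) > MAX_FC_FRAME:
        raise ValueError("FC frame exceeds maximum size")

    eth_header   = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header  = b"\x00" * 14       # placeholder: version + reserved + SOF
    fcoe_trailer = b"\x00" * 4        # placeholder: EOF + reserved

    # The encapsulated FC frame alone can be 2148 bytes, so the link must
    # support 'baby jumbo' frames of roughly 2.5 KB.
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

# Example: a dummy 2148-byte FC frame fits in a single Ethernet frame, untouched.
frame = build_fcoe_frame(b"\x0e\xfc\x00\x00\x00\x01",
                         b"\x00\x11\x22\x33\x44\x55",
                         b"\x00" * MAX_FC_FRAME)
print(len(frame), "bytes on the wire (before the Ethernet FCS)")
```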

FCoE itself takes FC layers 2-4 and maps them onto Ethernet layers 1-2, replacing the FC-0 physical layer and the FC-1 encoding layer.  This mapping between Ethernet and Fibre Channel is done through a Logical End-Point (LEP), which can be thought of as a translator between the two protocols.  The LEP is responsible for providing the appropriate encoding and physical access for frames traveling from FC nodes to Ethernet nodes and vice versa.  There are two devices that typically act as FCoE LEPs: Fibre Channel Forwarders (FCF), which are switches capable of both Ethernet and Fibre Channel, and Converged Network Adapters (CNA), which provide the server-side connection to an FCoE network.  Additionally, the LEP operation can be done using a software initiator and a traditional 10GE NIC, but this places extra workload on the server processor rather than offloading it to adapter hardware.
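The layer replacement is easier to see side by side.  The labels below are descriptive only (loosely following the FC-BB-5 layering), showing which native FC layers survive in FCoE and which are replaced by Ethernet:

```python
# Which native Fibre Channel layers carry over to FCoE, and which are replaced.
# Descriptive labels only; see FC-BB-5 for the normative layering.
fc_layers = {
    "FC-4 (protocol mapping, e.g. SCSI)": "kept as-is",
    "FC-3 (common services)":             "kept as-is",
    "FC-2 (framing, flow control)":       "kept as-is",
    "FC-1 (8b/10b encoding)":             "replaced by Ethernet encoding (64b/66b on 10GE)",
    "FC-0 (physical)":                    "replaced by the 10GE physical layer",
}

for layer, fate in fc_layers.items():
    print(f"{layer:38s} -> {fate}")
```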

One of the major advantages of replacing FC layers 0-1 when mapping onto 10GE is the encoding overhead.  8Gb Fibre Channel uses 8b/10b encoding, which adds 25% protocol overhead; 10GE uses 64b/66b encoding, which adds roughly 3%, dramatically reducing the protocol overhead and increasing usable throughput.  The second major advantage is that FCoE maintains FC layers 2-4, which allows seamless integration with existing FC devices and preserves the Fibre Channel tool set such as zoning, LUN masking, etc.  In order to provide FC login capabilities, multi-hop FCoE networks, and FC zoning enforcement on 10GE networks, FCoE relies on another standard known as the FCoE Initialization Protocol (FIP), which I will discuss in a later post.
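A quick back-of-the-envelope comparison makes the encoding point concrete.  The line rates below are the published signaling rates for 8GFC and 10GE; treat the output as an illustration of encoding efficiency, not measured throughput:

```python
# Usable data rate after encoding overhead, using published signaling rates.
# 8GFC signals at 8.5 Gbaud with 8b/10b encoding (8 data bits per 10 line bits).
# 10GE signals at 10.3125 Gbaud with 64b/66b encoding (64 data bits per 66 line bits).
links = {
    "8G Fibre Channel": (8.5,     8 / 10),    # (line rate in Gbaud, encoding efficiency)
    "10G Ethernet":     (10.3125, 64 / 66),
}

for name, (line_rate, efficiency) in links.items():
    usable = line_rate * efficiency
    overhead = (1 / efficiency - 1) * 100     # extra line bits per data bit, in percent
    print(f"{name}: {usable:.2f} Gbps usable, {overhead:.1f}% encoding overhead")

# 8G Fibre Channel: 6.80 Gbps usable, 25.0% encoding overhead
# 10G Ethernet:     10.00 Gbps usable, 3.1% encoding overhead
```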

Overall, FCoE is one protocol to choose from when designing converged networks, or cable-once architectures.  The most important thing to remember is that a true cable-once architecture doesn’t force you to choose your Upper Layer Protocol (ULP), such as FCoE, only your underlying transport infrastructure.  If you choose 10GE, the tools are now in place to layer any protocol of your choice on top, when and if you require it.

Thanks to my colleagues who recently provided a great discussion on protocol overhead and frame encoding…


Consolidated I/O

Consolidated I/O (input/output) is a hot topic and has been for the last two years, but it’s not a new concept.  We’ve already consolidated I/O once in the data center and forgotten about it: remember those phone PBXs before we replaced them with IP telephony?  The next step in consolidating I/O is getting management traffic, backup traffic and storage traffic from centralized storage arrays to the servers on the same network that carries our IP data.  In the most general terms the concept is ‘one wire.’  ‘Cable once’ or ‘one wire’ allows a flexible I/O infrastructure with a greatly reduced cable count and a single network to power, cool and administer.

Solutions to do this have existed and been used for years; iSCSI (SCSI storage data over IP networks) is one tool that has been commonly used.  The reason the topic has hit the mainstream over the last two years is that 10 Gigabit Ethernet was ratified, and we now have a common protocol with the bandwidth to support this type of consolidation.  Prior to 10GE we simply didn’t have the bandwidth to effectively put everything down the same pipe.

The first thing to remember when discussing I/O consolidation is that, contrary to popular belief, I/O consolidation does not mean Fibre Channel over Ethernet (FCoE).  I/O consolidation is all about using a single infrastructure and underlying protocol to carry any and all traffic types required in the data center.  The underlying protocol of choice is 10G Ethernet because it’s lightweight, high bandwidth, and Ethernet itself is the most widely used data center protocol today.  Using 10GE and the IEEE standards for Data Center Bridging (DCB) as the underlying data center network, any and all protocols can be layered on top as needed on a per-application basis; see my post on DCB for more information (http://www.definethecloud.net/?p=31).  These protocols can be FCoE, iSCSI, UDP, TCP, NFS, CIFS, etc., or any combination of them all.
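To make ‘one wire, many protocols’ concrete, here is a minimal sketch in the spirit of DCB Enhanced Transmission Selection (ETS).  The traffic classes, priority numbers and bandwidth percentages are illustrative assumptions, and real DCB configuration lives on switches and CNAs, not in application code:

```python
# Hypothetical per-priority bandwidth guarantees on a single 10GE link,
# modeled loosely on DCB Enhanced Transmission Selection (ETS).
LINK_GBPS = 10

traffic_classes = {
    "FCoE (lossless, PFC enabled)": {"priority": 3, "min_bandwidth_pct": 40},
    "iSCSI / NFS / CIFS storage":   {"priority": 4, "min_bandwidth_pct": 30},
    "General LAN (TCP/UDP)":        {"priority": 0, "min_bandwidth_pct": 20},
    "Management and backup":        {"priority": 1, "min_bandwidth_pct": 10},
}

# ETS-style allocations should account for the whole link.
assert sum(tc["min_bandwidth_pct"] for tc in traffic_classes.values()) == 100

for name, tc in traffic_classes.items():
    guaranteed = LINK_GBPS * tc["min_bandwidth_pct"] / 100
    print(f"priority {tc['priority']}: {name:30s} >= {guaranteed:.0f} Gbps under congestion")
```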

If you look at the data center today, most are already using a combination of these protocols, but typically have two or more separate infrastructures to support them.  A data center that uses Fibre Channel heavily has two Fibre Channel networks (for redundancy) and one or more LAN networks.  These ‘Fibre Channel shops’ are typically still using additional storage protocols such as NFS/CIFS for file-based storage.  The cost of administering, powering, cooling, and eventually upgrading/refreshing these separate networks continues to grow.
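For a rough sense of what those parallel infrastructures cost in cabling alone, the comparison below uses hypothetical but representative per-server adapter counts for a dual-fabric FC shop versus a converged 10GE/DCB design:

```python
# Hypothetical per-server connection counts: separate LAN/SAN vs. converged 10GE.
servers = 200

separate  = {"1GE LAN NICs": 4, "FC HBAs (dual fabric)": 2, "management NIC": 1}
converged = {"10GE CNAs (dual fabric)": 2}

separate_cables  = servers * sum(separate.values())
converged_cables = servers * sum(converged.values())

print(f"separate infrastructures: {separate_cables} cables")
print(f"converged 10GE/DCB:       {converged_cables} cables")
print(f"reduction:                {100 * (1 - converged_cables / separate_cables):.0f}%")
```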

Consolidating onto a single infrastructure not only delivers obvious cost benefits but also provides the flexibility required for a cloud infrastructure.  Having a ‘cable once’ infrastructure allows you to provide the right protocol at the right time on a per-application basis, without the need for hardware changes.

Call it what you will: I/O consolidation, network convergence, or network virtualization.  A cable-once topology that can support the right protocol at the right time is one of the pillars of cloud architectures in the data center.
