CloudStack Graduates to Top-Level Apache Project

The Apache Software Foundation announced in late March that CloudStack is now a top-level project. This is a promotion from CloudStack’s incubator status, where it had lived after being released as open source by Citrix.

This promotion provides additional encouragement to companies and developers looking to contribute to the project because it validates the CloudStack community and demonstrates ongoing support under the Apache Software Foundation.  To read more, visit the full article.


NVGRE

The most viable competitor to VXLAN is NVGRE, which was proposed by Microsoft, Intel, HP and Dell.  It is another encapsulation technique intended to allow virtual network overlays across the physical network.  Both techniques also remove the scalability limit of VLANs, which are capped at 4096 IDs.  NVGRE uses Generic Routing Encapsulation (GRE) as the encapsulation method.  It uses 24 bits of the GRE key field to represent the Tenant Network Identifier (TNI).  Like VXLAN, this 24-bit space allows for 16 million virtual networks.
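To make the encapsulation concrete, here is a minimal sketch of building an NVGRE header in Python, following the draft's layout (GRE with the Key bit set, protocol type 0x6558 for Transparent Ethernet Bridging, and the 24-bit VSID/TNI in the upper bits of the key field).  This is illustrative only, not a reference implementation:

```python
import struct

def nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Build an 8-byte NVGRE header: GRE with the Key (K) bit set and
    protocol type 0x6558 (Transparent Ethernet Bridging).  The 32-bit
    key carries the 24-bit VSID/TNI plus an 8-bit FlowID."""
    assert 0 <= vsid < 2**24          # 24 bits -> 16 million virtual networks
    flags_version = 0x2000            # K bit set, all other flags 0, version 0
    proto = 0x6558                    # inner payload is an Ethernet frame
    key = (vsid << 8) | (flow_id & 0xFF)
    return struct.pack("!HHI", flags_version, proto, key)

hdr = nvgre_header(vsid=0x123456, flow_id=7)
# the encapsulated Ethernet frame would follow this 8-byte header
```

The outer Ethernet/IP headers that precede this in a real packet are omitted for brevity.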

image

While NVGRE provides optional support for broadcast via IP multicast, it does not rely on it for address learning as VXLAN does.  It instead leaves that up to an as-yet-undefined control plane protocol.  This control plane protocol will handle the mappings between the “provider” address used in the outer header to designate the remote NVGRE endpoint and the “customer” address of the destination.  The lack of reliance on flood-and-learn behavior replicated over IP multicast potentially makes NVGRE a more scalable solution, though this will depend on implementation and underlying hardware.

Another difference between VXLAN and NVGRE lies in multi-pathing capabilities.  In its current form NVGRE provides little ability to be properly load-balanced by ECMP.  In order to enhance load balancing the draft suggests the use of multiple IP addresses per NVGRE host, which allows for more flows.  This is a common issue with tunneling mechanisms and is solved in VXLAN by using a hash of the inner frame as the UDP source port.  This provides for efficient load balancing by devices capable of 5-tuple balancing decisions.  There are other possible solutions proposed for NVGRE load balancing; we’ll have to wait and see how they pan out.
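A rough sketch of the VXLAN source-port trick, assuming a simple CRC32 over the inner flow identifiers (real implementations hash the inner headers in silicon; the function name and hash choice here are illustrative):

```python
import zlib

def vxlan_src_port(inner_5tuple: tuple) -> int:
    """Derive the outer UDP source port from a hash of the inner frame's
    flow identifiers, so ECMP devices balancing on the outer 5-tuple
    spread tunneled flows across paths.  Port stays in the dynamic range."""
    h = zlib.crc32(repr(inner_5tuple).encode())
    return 49152 + (h % 16384)        # 49152-65535

# same inner flow always yields the same outer source port
port = vxlan_src_port(("10.0.0.1", "10.0.0.2", 6, 80, 12345))
```

Because the outer source port varies per inner flow, ordinary 5-tuple ECMP hashing on transit routers distributes tunnel traffic without any knowledge of VXLAN.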

The last major difference between the two protocols is the use of jumbo frames.  VXLAN is intended to stay within a data center, where jumbo frame support is nearly ubiquitous, so it assumes that support is present and utilizes it.  NVGRE is intended to be usable inter-data-center and therefore includes provisions to avoid fragmentation.

Summary:

While NVGRE still needs much clarification, it is backed by some of the biggest companies in IT and has some potential benefits.  With the world of VXLAN-capable hardware expanding quickly, you can expect to see hardware support for NVGRE grow as well.  Layer 3 encapsulation techniques as a whole solve the scalability issues inherent in bridging.  Additionally, due to their routed nature, they provide for loop-free multi-pathed environments without the need for techniques such as TRILL and technologies based on it.  In order to reach the scale and performance required by tomorrow’s data centers our networks need to change; overlays such as these are one tool toward that goal.


Stateless Transport Tunneling (STT)

STT is another tunneling protocol along the lines of the VXLAN and NVGRE proposals.  As with both of those, the intent of STT is to provide a network overlay, or virtual network running on top of a physical network.  STT was proposed by Nicira and is therefore, not surprisingly, written from a software-centric view rather than the network-centric view of the other proposals.  The main advantage of the STT proposal is its ability to be implemented in a software switch while still benefiting from NIC hardware acceleration.  The other advantage of STT is its use of a 64-bit context ID rather than the 24-bit network IDs used by NVGRE and VXLAN.

The hardware offload STT grants relieves the server CPU of a significant workload in high-bandwidth systems (10G+).  This separates it from its peers, whose IP encapsulation in the soft switch negates the NIC’s LSO and LRO functions.  STT goes about this by having the software switch insert header information that makes the packet look like a TCP packet, alongside the required network virtualization fields.  This allows the guest OS to send frames of up to 64K to the hypervisor, which are encapsulated and sent to the NIC for segmentation.  While this does allow the hardware offload to be utilized, STT’s use of valid-looking TCP headers causes issues for many network appliances or “middle boxes.”

STT is not expected to be ratified and is considered by some to have been proposed for informational purposes rather than with the end goal of a ratified standard.  With its misuse of a valid TCP header it would be hard-pressed for ratification.  STT does bring up the interesting issue of hardware offload.  The IP tunneling protocols mentioned above create extra overhead on host CPUs due to their inability to benefit from NIC acceleration techniques.  VXLAN and NVGRE are intended to be implemented in hardware to solve this problem; both use a 24-bit network ID, which provides for 16 million tenants.  Hardware implementation is coming quickly in the case of VXLAN, with vendors announcing VXLAN-capable switches and NICs.


VXLAN Deep Dive – Part II

In part one of this post I covered the basic theory of operations and functionality of VXLAN (http://www.definethecloud.net/vxlan-deep-dive.)  This post will dive deeper into how VXLAN operates on the network.

Let’s start with the basic concept that VXLAN is an encapsulation technique.  The Ethernet frame sent by a VXLAN-connected device is encapsulated in an IP/UDP packet.  The most important thing here is that the result can be carried by any IP-capable device.  The only place added intelligence is required is at the network bridges known as VXLAN Tunnel End-Points (VTEPs), which perform the encapsulation/de-encapsulation.  This is not to say that benefit can’t be gained by adding VXLAN functionality elsewhere, just that it’s not required.

image

Providing Ethernet Functionality on IP Networks:

As discussed in Part 1, the source and destination IP addresses used for VXLAN are those of the source and destination VTEPs.  This means that the VTEP must know the destination VTEP in order to encapsulate the frame.  One method for this would be a centralized controller/database.  That being said, VXLAN is implemented in a decentralized fashion, not requiring a controller.  There are advantages and drawbacks to this.  While utilizing a centralized controller would provide methods for address learning and sharing, it would also potentially increase latency, require large software-driven mapping tables and add network management points.  We will dig deeper into the current decentralized VXLAN deployment model.

VXLAN maintains backward compatibility with traditional Ethernet and therefore must maintain some key Ethernet capabilities, one of which is flooding (broadcast) and ‘flood and learn’ behavior.  I cover some of this behavior here (http://www.definethecloud.net/data-center-101-local-area-network-switching) but the summary is that when a switch receives a frame for an unknown destination (a MAC not in its table) it will flood the frame to all ports except the one on which it was received.  Eventually the frame will get to the intended device and a reply will be sent, which allows the switch to learn the MAC’s location.  When switches see source MACs that are not in their table they will ‘learn,’ or add, them.

VXLAN encapsulates over IP, and IP networks are typically designed for unicast traffic (one-to-one).  This means there is no inherent flood capability.  In order to mimic flood-and-learn on an IP network, VXLAN uses IP multicast, which provides a method for distributing a packet to a group.  This use of IP multicast can be a contentious point within VXLAN discussions because most networks aren’t designed for it, support can be limited, and multicast itself can be complex depending on implementation.

Within VXLAN, each VXLAN segment ID is subscribed to a multicast group.  Multiple VXLAN segments can subscribe to the same group; this minimizes configuration but increases unneeded network traffic.  When a device attaches to a VXLAN segment on a VTEP where that segment was not previously in use, the VTEP will join the IP multicast group assigned to that segment and start receiving messages.
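The join behavior can be sketched as follows; the class and names are hypothetical and only illustrate the segment-to-group mapping and the first-attach join:

```python
from collections import defaultdict

class Vtep:
    """Toy model of a VTEP tracking which multicast groups it has joined."""
    def __init__(self, group_for_vni):
        self.group_for_vni = group_for_vni   # VXLAN segment ID -> group IP
        self.joined = set()                  # groups currently joined
        self.vnis_in_use = defaultdict(int)  # attached devices per segment

    def attach(self, vni):
        """First device attaching to a segment triggers an IGMP join."""
        self.vnis_in_use[vni] += 1
        group = self.group_for_vni[vni]
        if group not in self.joined:
            self.joined.add(group)           # real VTEP sends an IGMP join here

# two segments sharing one group: less config, more unneeded traffic
mapping = {5001: "239.1.1.1", 5002: "239.1.1.1"}
v = Vtep(mapping)
v.attach(5001)
```

Because segments 5001 and 5002 share a group, a VTEP joined for one will also receive floods for the other and must discard them.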

image

In the diagram above we see the normal operation in which the destination MAC is known and the frame is encapsulated in IP using the source and destination VTEP address.  The frame is encapsulated by the source VTEP, de-encapsulated at the destination VTEP and forwarded based on bridging rules from that point.  In this operation only the destination VTEP will receive the frame (with the exception of any devices in the physical path, such as the core IP switch in this example.)

image

In the example above we see an unknown MAC address (the MAC-to-VTEP mapping does not exist in the table).  In this case the source VTEP encapsulates the original frame in an IP multicast packet with the destination IP of the associated multicast group.  This packet will be delivered to all VTEPs participating in the group.  Ideally, the VTEPs participating in the group will only be those with devices attached to that VXLAN segment, but because multiple VXLAN segments can use the same IP multicast group this is not always the case.  The VTEP with the connected device will de-encapsulate and forward normally, adding the source VTEP mapping if required.  Any other VTEP that receives the packet can learn the source VTEP/MAC mapping if required and discard the packet.  This process is the same for other traditionally flooded frames such as ARP.  The diagram below shows the logical topologies for both traffic types discussed.

image
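The forwarding decision behind both diagrams can be sketched roughly as follows (table layout and names are hypothetical):

```python
def forward(vtep_table, dst_mac, vni, group_for_vni):
    """Choose the outer destination IP for an encapsulated frame:
    the destination VTEP when the MAC is known, otherwise the
    segment's multicast group (the flood case)."""
    key = (vni, dst_mac)
    if key in vtep_table:
        return vtep_table[key]        # unicast to the known destination VTEP
    return group_for_vni[vni]         # unknown MAC: flood via multicast

table = {(5001, "00:11:22:33:44:55"): "10.1.1.1"}   # learned mapping
groups = {5001: "239.1.1.1"}

assert forward(table, "00:11:22:33:44:55", 5001, groups) == "10.1.1.1"
assert forward(table, "aa:bb:cc:dd:ee:ff", 5001, groups) == "239.1.1.1"
```

Receiving VTEPs would populate `table` from the source VTEP/MAC pair of packets they de-encapsulate, which is how the flood case converges to the unicast case.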

As discussed in Part 1 VTEP functionality can be placed in a traditional Ethernet bridge.  This is done by placing a logical VTEP construct within the bridge hardware/software.  With this in place VXLANs can bridge between virtual and physical devices.  This is necessary for physical server connectivity, as well as to add network services provided by physical appliances.  Putting it all together the diagram below shows physical servers communicating with virtual servers in a VXLAN environment.  The blue links are traditional IP links and the switch shown at the bottom is a standard L3 switch or router.  All traffic on these links is encapsulated as IP/UDP and broken out by the VTEPs.

image

Summary:

VXLAN provides backward compatibility with traditional VLANs by mimicking broadcast and multicast behavior through IP multicast groups.  This functionality provides for decentralized learning by the VTEPs and negates the need for a VXLAN controller.


VXLAN Deep Dive

I’ve been spending my free time digging into network virtualization and network overlays.  This is part 1 of a 2-part series; part 2 can be found here: http://www.definethecloud.net/vxlan-deep-divepart-2.  By far the most popular virtualization technique in the data center is VXLAN.  This has as much to do with Cisco and VMware backing the technology as the tech itself.  That being said, VXLAN is targeted specifically at the data center and is one of several similar solutions, such as NVGRE and STT.  VXLAN’s goal is to allow dynamic, large-scale, isolated virtual L2 networks to be created for virtualized and multi-tenant environments.  It does this by encapsulating frames in VXLAN packets.  The standard for VXLAN is under the scope of the IETF NVO3 working group.


VXLAN Frame

The VXLAN encapsulation method is IP-based and provides for a virtual L2 network.  With VXLAN, the full Ethernet frame (with the exception of the Frame Check Sequence: FCS) is carried as the payload of a UDP packet.  VXLAN utilizes a 24-bit network identifier in its header, shown in the diagram, to identify virtual networks.  This provides for up to 16 million virtual L2 networks.
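As a rough illustration, the 8-byte VXLAN header can be built with Python’s struct module (flag value per the VXLAN draft; this is a sketch of the header only, not a full encapsulation):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header: flags byte with the I (valid VNI) bit set,
    24 reserved bits, the 24-bit VNI, and 8 reserved bits.  It is carried
    as a UDP payload, followed by the original Ethernet frame minus FCS."""
    assert 0 <= vni < 2**24           # 24-bit ID -> 16 million networks
    return struct.pack("!II", 0x08000000, vni << 8)

hdr = vxlan_header(5001)
```

The outer IP/UDP headers supply the VTEP addressing; this header only names the virtual network.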

Frame encapsulation is done by an entity known as a VXLAN Tunnel Endpoint (VTEP).  A VTEP has two logical interfaces: an uplink and a downlink.  The uplink is responsible for receiving VXLAN frames and acts as a tunnel endpoint with an IP address used for routing VXLAN-encapsulated frames.  These IP addresses are infrastructure addresses and are separate from the tenant IP addressing for the nodes using the VXLAN fabric.  VTEP functionality can be implemented in software, such as a virtual switch, or in the form of a physical switch.

VXLAN frames are sent to the IP address assigned to the destination VTEP; this IP is placed in the outer IP DA.  The IP of the VTEP sending the frame resides in the outer IP SA.  Packets received on the uplink are mapped from the VXLAN ID to a VLAN, and the Ethernet frame payload is sent as an 802.1Q Ethernet frame on the downlink.  During this process the inner MAC SA and VXLAN ID are learned in a local table.  Packets received on the downlink are mapped to a VXLAN ID using the VLAN of the frame.  A lookup is then performed within the VTEP L2 table using the VXLAN ID and destination MAC; this lookup provides the IP address of the destination VTEP.  The frame is then encapsulated and sent out the uplink interface.

image

Using the diagram above for reference a frame entering the downlink on VLAN 100 with a destination MAC of 11:11:11:11:11:11 will be encapsulated in a VXLAN packet with an outer destination address of 10.1.1.1.  The outer source address will be the IP of this VTEP (not shown) and the VXLAN ID will be 1001.
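The downlink lookup just described can be sketched as follows, using the values from the diagram (table names are hypothetical):

```python
# hypothetical tables modeling the VTEP's mappings
vlan_to_vni = {100: 1001}                              # VLAN -> VXLAN ID
l2_table = {(1001, "11:11:11:11:11:11"): "10.1.1.1"}   # (VNI, MAC) -> VTEP IP

def encap_downlink(vlan, dst_mac):
    """Map the frame's VLAN to a VXLAN ID, then look up the destination
    VTEP by (VNI, destination MAC).  A miss means flood via the
    segment's multicast group, covered below."""
    vni = vlan_to_vni[vlan]
    dst_vtep = l2_table.get((vni, dst_mac))
    return vni, dst_vtep

# the diagram's case: VLAN 100, MAC 11:11:11:11:11:11 -> VTEP 10.1.1.1
assert encap_downlink(100, "11:11:11:11:11:11") == (1001, "10.1.1.1")
```

A real VTEP would then build the outer IP/UDP headers toward 10.1.1.1 and transmit on the uplink.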

In a traditional L2 switch a behavior known as flood and learn is used for unknown destinations (i.e., a MAC not stored in the MAC table).  This means that if there is a miss when looking up the MAC, the frame is flooded out all ports except the one on which it was received.  When a response is sent the MAC is learned and written to the table.  The next frame for the same MAC will not incur a miss because the table will reflect the port it exists on.  VXLAN preserves this behavior over an IP network using IP multicast groups.

Each VXLAN ID has an assigned IP multicast group to use for traffic flooding (the same multicast group can be shared across VXLAN IDs).  When a frame is received on the downlink bound for an unknown destination, it is encapsulated using the IP of the assigned multicast group as the outer DA and sent out the uplink.  Any VTEP with nodes on that VXLAN ID will have joined the multicast group and will therefore receive the frame.  This maintains the traditional Ethernet flood-and-learn behavior.

VTEPs are designed to be implemented as a logical device on an L2 switch.  The L2 switch connects to the VTEP via a logical 802.1Q VLAN trunk.  This trunk contains a VXLAN infrastructure VLAN in addition to the production VLANs.  The infrastructure VLAN is used to carry VXLAN-encapsulated traffic to the VXLAN fabric.  The only member interfaces of this VLAN will be the VTEP’s logical connection to the bridge itself and the uplink to the VXLAN fabric.  This interface is the ‘uplink’ described above, while the logical 802.1Q trunk is the downlink.

image

Summary

VXLAN is a network overlay technology designed for data center networks.  It provides massively increased scalability over VLAN IDs alone while allowing for L2 adjacency over L3 networks.  The VXLAN VTEP can be implemented in both virtual and physical switches, allowing the virtual network to map to physical resources and network services.  VXLAN currently has both wide support and hardware adoption in switching ASICs and NICs, as well as virtualization software.


Something up Brocade’s Sleeve, and it looks Good

Brocade has some new tricks up its sleeve, and they look good.  For far too long Brocade fought against convergence to protect its FC install base while it caught up.  This bled over into their Ethernet messaging and hindered market growth and comfort levels there.  Overall they appeared to be a company missing the next technology waves and clinging desperately to the remnants of a fading requirement: pure storage networks.  That has all changed; Brocade is embracing Ethernet and focusing on technology innovation that is relevant to today’s trends and business.

The Hardware:

Brocade’s VDX 8770 (http://www.brocade.com/downloads/documents/data_sheets/product_data_sheets/vdx-8770-ds.pdf) is their flagship modular switch for Brocade VCS fabrics.  While at first I scoffed at the idea of bigger chassis switches for fabrics, it turns out I was wrong (happens often.)  I forgot about scale.  These fabrics will typically be built in core/edge or spine/leaf designs, often using End of Row (EoR) rather than Top of Rack (ToR) designs to reduce infrastructure.  This leaves maximum scalability bound by a combination of port count and switch count, dependent on several factors such as interconnect ports.  Switch count will typically be limited by fabric software limitations, either real or due to testing and certification processes.  Having high-density, modular, fabric-capable switches helps solve scalability issues.

Some of the more interesting features:

  • Line-rate 40GE
  • “Auto-trunking” ISLs (multiple links between switches will bond automatically.)
  • Multi-pathing at layers 1, 2 and 3
  • Dynamic port-profile configuration and migration for VM mobility
  • 100GE ready
  • 4us latency with 4Tbps switching capacity
  • Support for 384,000 MAC addresses per fabric for massive L2 scalability
  • Support for up to 8000 ports in a VCS fabric
  • 4 and 8 slot chassis options
  • Multiple default gateways for load-balancing routing

The Software:

The real magic is Brocade’s fabric software.  Brocade looks at the fabric as the base on which to build an intelligent network, SDN or otherwise.  As such the fabric should be: resilient, scalable and easy to manage.  In several conversations with people at Brocade it was pointed out that SDN actually adds a management layer.  No matter how you slice it the SDN software overlays a physical network that must be managed.  Minimizing configuration requirements at this level simplifies the network overall.  Additionally the fabric should provide multi-pathing without link blocking for maximum network throughput. 

Brocade executes on this with VCS fabric.  VCS provides an easy-to-set-up and easy-to-manage fabric model.  Operations like adding a link for bandwidth are done with minimal configuration through tools like ‘auto-trunking.’  Basically, ports identified as fabric ports will be built into the network topology automatically.  They also provide impressive scalability numbers, with support for 384,000 MACs, 352,000 IPv4 routes, 88,000 IPv6 routes, and 8000 ports.

One surprise to me was that Brocade is doing this using custom silicon.  With companies like Arista and Nicira (now part of VMware) touting commodity hardware as the future, why is Brocade spending money on silicon?  The answer is in latency.  If you want to do something at line rate it must be implemented in hardware.  Merchant silicon is adept at keeping pace on things like switching latency and buffering but is slow to implement new features.  This is due to addressable market: merchant silicon manufacturers want to ensure that the cost of hardware design and manufacturing will be recouped through bulk sale to multiple manufacturers.  This means features must have wide applicability and typically be standards-driven before being implemented.

Brocade saw the ability to innovate with features while maintaining line rate as an advantage worth the additional cost.  This allows Brocade to differentiate themselves, and their fabric, from vendors relying solely on merchant silicon.  Additionally, they position their fabric as enough of an advantage to be worth the additional cost when implementing SDN, for the reasons listed above.

Summary:

Brocade is making some very smart moves and coming out from under the FC rock.  The technology is relevant and timely, but they will still have an uphill battle gaining the confidence of network teams.  They will have to rely on their FC data center heritage to build that confidence and expand their customer base.  The key now will be execution; it will be an exciting ride.


Much Ado About Something: Brocade’s Tech Day

Yesterday I had the privilege of attending Brocade’s Tech Day for analysts and press.  Brocade announced the new VDX 8770 and discussed some VMware announcements, as well as strategy, vision and direction.  I’m going to dig into a few of the topics that interested me; this is in no way a complete recap.

First, in regards to the event itself: my kudos to the staff that put the event together; it was excellent from both a pre-event coordination and event staff perspective.  The Brocade corporate campus is beautiful and the EBC building was extremely well suited to such an event.  The sessions went smoothly, the food was excellent and overall it was a great experience.  I also want to thank Lisa Caywood (@thereallisac) for pointing out that my tweets during the event were more inflammatory than productive and outside the lines of ‘guest etiquette.’  She’s definitely correct, and hopefully I can clear up some of my skepticism here in a format left open for debate, and avoid the same mistake in the future.  That being said, I had thought I was quite clear going in on who I was and how I write.  To clear up any future confusion from anyone: if you’re not interested in my unfiltered, typically cynical, honest opinion, don’t invite me; I won’t take offense.  Even if you’re a vendor with products I like, I’ve probably got a box full of cynicism for your other product lines.

During the opening sessions I observed several things that struck me negatively:

  • A theme (intended or not) that Brocade was being led into new technologies by their customers.  Don’t get me wrong: listening to your customers and keeping your product in line with their needs is key to success.  That being said, if your customers are leading you into new technology you’ve probably missed the boat.  In most cases they’re being led there by someone else and dragging you along for the ride; that’s not sustainable.  IT vendors shouldn’t need to be dragged kicking and screaming into new technologies by customers.  This doesn’t mean chase every shiny object (squirrel!), but major trends should be investigated and invested in before you’re hearing enough customer buzz to warrant it.  Remember, business isn’t just about maintaining current customers; it’s about growing by adopting new ones.  Especially for public companies, stagnant is as good as dead.
  • The term “Ethernet Fabric,” which is only used by Brocade; everyone else just calls it fabric.  This ties in closely with the next bullet.
  • A continued need to discuss commitment to pure Fibre Channel (FC) storage.  I don’t deny that FC will be around for quite some time and may even see some growth as customers with it embedded expand.  That being said, customers with no FC investment should be avoiding it like the plague, and as vendors and consultants we should be pushing more intelligent options to those customers.  You can pick apart technical details about FC vs. anything all day long (enjoy that on your own); the fact is twofold: running two separate networks is expensive and complex, and the differences in reliability, performance, etc. are fading if not gone.  Additionally, applications are being written in more intelligent ways that don’t require the highly available, low-latency, silo’d architecture of yesteryear.  Rather than clinging to FC like a sinking ship, vendors should be protecting customer investment while building and positioning the next evolution.  Quote of the day during a conversation in the hall: “Fibre Channel is just a slightly slower-melting ice cube than we expected.”
  • An insistence that Ethernet fabric is a required building block of SDN.  I’d argue that while it can be a component it is far from required, and as SDN progresses it will become completely irrelevant.  More on this to come.
  • A stance, common throughout the day, that the network will not be commoditized.  I’d say that’s either A) naïve or B) posturing to protect core revenue.  I’d say we’ll see network commoditization occur en masse over the next five years.  I’m specifically talking about the data center and a move away from specialized custom-built ASICs, not the core routers, and not the campus.  Custom silicon is expensive and time-consuming to develop, but provides performance/latency benefits and arguably some security benefits.  As processors and off-the-shelf chips continue to improve exponentially, this differentiator becomes less and less important.  What becomes more important is rapid adaptation to new needs.  SDN as a whole won’t rip and replace networking in the next five years, but its growth and the concepts around it will drive commoditization.  It happened with servers, then storage, while people made the same arguments.  Cheaper, faster to produce and ‘good enough’ consistently wins out.

On the positive side Brocade has some vision that’s quite interesting as well as some areas where they are leading by filling gaps in industry offerings.

  • Brocade is embracing the concept of SDN and understands a concept I tweeted about recently: ‘Revolutions don’t sell.’  Customers want evolutionary steps to new technology.  Few if any customers will rip and replace current infrastructure to dive head first into SDN.  SDN is a complete departure from the way we network today, and will therefore require evolutionary steps to get there.  This is shown in their support of ‘hybrid’ OpenFlow implementations on some devices, meaning OpenFlow can run segregated alongside traditional network deployments.  This allows for test/dev or roll-out of new services without an impact on production traffic.  This is a great approach where other vendors are offering ‘either/or’ options.
  • There was discussion of Brocade’s VXLAN gateway which was announced at VMworld.  To my knowledge this is the first offering in this much needed space.  Without a gateway VXLAN is limited to virtual only environments. This includes segregation from services provided by physical devices.  The Brocade VXLAN gateway will allow the virtual and physical networks to be bridged. (http://newsroom.brocade.com/press-releases/brocade-adx-series-to-unveil-vxlan-gateway-and-app-nasdaq-brcd-0923542) To dig deeper on why this is needed check out Ivan’s article: http://blog.ioshints.info/2011/10/vxlan-termination-on-physical-devices.html.
  • The new Brocade VDX 8770 is one bad ass mamma jamma.  With industry leading latency and MAC table capacity, along with TRILL based fabric functionality, it’s built for large scalable high-density fabrics.  I originally tweeted “The #BRCD #VDX8770 is a bigger badder chassis in a world with less need for big bad chassis.” After reading Ivan’s post on it I stand corrected (this happens frequently.)  For some great perspective and a look at specs take a read: http://blog.ioshints.info/2012/09/building-large-l3-fabrics-with-brocade.html.

On the financial side Brocade has been looking good and climbed over $6.00 a share.  There are plenty of conversations stating some of this may be due to upcoming shifts at the CEO level.  They’ve reported two great quarters and are applying some new focus towards federal government and other areas lacking in recent past. I didn’t dig further into this discussion.

During lunch I was introduced to one of the most interesting Brocade offerings I’d never heard of, ‘Brocade Network Subscription’: http://www.brocade.com/company/how-to-buy/capital-solutions/index.page.  Basically, you can lease your on-prem network from Brocade Capital.  This is a great idea for customers looking to shift CapEx to OpEx, which can be extremely useful.  I also received a great explanation of the value of a fabric underneath an SDN network from Jason Nolet (VP of Data Center Networking Group).  Jason’s position (summarized) is that implementing SDN adds a network management layer rather than removing one.  With that in mind, the more complexity we remove from the physical network the better off we are.  What we’ll want for our SDN networks is fast, plug-and-play functionality with maximum usable links and minimal management.  Brocade VCS fabric fits this nicely.  While I agree with that completely, I’d also say it’s not the only way to skin that particular cat.  More to come on that.

For the last few years I’ve looked at Brocade as a company lacking innovation and direction.  They clung furiously to FC while the market began shifting to Ethernet, ignored cloud for quite a while, etc.  Meanwhile they burned down deals to purchase them and ended up where they’ve been.  The overall messaging, while nothing new, did have undertones of change as a whole and new direction.  That’s refreshing to hear.  Brocade is embracing virtualization and cloud architectures without tying their cart to a single hypervisor horse.  They are positioning well for SDN and the network market shifts.  Most impressively they are identifying gaps in the spaces they operate and executing on them both from a business and technology perspective.  Examples of this are Brocade Network Subscription and the VXLAN gateway functionality respectively.

Things are looking up and there is definitely something good happening at Brocade.  That being said they aren’t out of the woods yet.  For them, as a company, purchase is far fetched as the vendors that would buy them already have networking plays and would lose half of Brocade’s value by burning OEM relationships with the purchase.  The only real option from a sale perspective is for investors looking to carve them up and sell off pieces individually.  A scenario like this wouldn’t bode well for customers.  Brocade has some work to do but they’ve got a solid set of products and great direction.  We’ll see how it pans out.  Execution is paramount for them at this point.

Final note: this blog was intended to stop there, but this morning I received an angry, accusatory email from Brocade’s head of corporate communications, who was unhappy with my tweets.  I thought about posting the email in full but have decided against it for the sake of professionalism.  Overall his email was an attack based on my tweets.  As stated, my tweets were not professional, but this type of email from someone in charge of corporate communications is well over the top as a response.  I forwarded the email to several analyst and blogger colleagues, a handful of whom had had similar issues with this individual.  One common theme in social media is that lashing out at bad press never does any good; a senior director in this position should know that, but instead continues to slander and attack.  His team and colleagues seem to understand social media use, as they’ve engaged in healthy debate with me in regards to my tweets; it’s a shame they are not led from the front.


Digging Into the Software Defined Data Center

The software defined data center is a relatively new buzzword embraced by the likes of EMC and VMware.  For an introduction to the concept see my article over at Network Computing (http://www.networkcomputing.com/data-center/the-software-defined-data-center-dissect/240006848.)  This post is intended to take it a step deeper, as I seem to be stuck at 30,000 feet for the next five hours with no internet access and no other decent ideas.  For the purpose of brevity (read: laziness) I’ll use the acronym SDDC for Software Defined Data Center, whether or not it is being used elsewhere.

First let's look at what you get out of an SDDC:

Legacy Process:

In a traditional legacy data center the workflow for implementing a new service would look something like this:

  1. Approval of the service and budget
  2. Procurement of hardware
  3. Delivery of hardware
  4. Rack and stack of new hardware
  5. Configuration of hardware
  6. Installation of software
  7. Configuration of software
  8. Testing
  9. Production deployment

This process will vary greatly in overall time, but 30-90 days is probably a good ballpark (I know, I know, some of you are wishing it happened that fast).

Not only is this process complex and slow, but it carries inherent risk.  Your users are accustomed to on-demand IT services in their personal lives.  They know where to go to get them and how to work with them.  If you tell a business unit it will take 90 days to deploy an approved service, they may source it from outside of IT.  This type of shadow IT poses issues for security, compliance, backup/recovery, etc.

SDDC Process:

As described in the link above, an SDDC provides a complete decoupling of the hardware from the services deployed on it.  This provides a more fluid system for IT service change: growing, shrinking, adding and deleting services.  Conceptually, the overall infrastructure would maintain an agreed-upon level of spare capacity and would be added to as thresholds were crossed.  This would provide the ability to add services and grow existing services on the fly in all but the most extreme cases.  Additionally, the management and deployment of new services would be software driven through intuitive interfaces, rather than hardware driven and dependent on disparate CLI configuration.

The process would look something like this:

  1. Approval of the service and budget
  2. Installation of software
  3. Configuration of software
  4. Testing
  5. Production deployment

The removal of four steps is not the only benefit.  The remaining five steps are streamlined into automated processes rather than manual configurations.  Change management and chargeback/showback systems are incorporated into the overall software management system, providing a fluid workflow in a centralized location.  These processes are initiated by authorized IT users through self-service portals.  The speed at which business applications can be deployed is greatly increased, providing both flexibility and agility.
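As a rough illustration, the streamlined workflow above could be modeled as a small automation routine that runs each step and keeps an audit trail.  This is a conceptual sketch only; the `ServiceRequest` fields and step functions are hypothetical, not any vendor's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceRequest:
    # All fields are illustrative placeholders
    name: str
    owner: str
    cpu_cores: int
    memory_gb: int
    approved: bool = False
    log: list = field(default_factory=list)

def deploy(request, install, configure, test):
    """Run the streamlined SDDC workflow (install, configure, test,
    production) as automated steps, logging each for change management."""
    if not request.approved:
        raise PermissionError("service and budget not yet approved")
    for step, action in (("install", install),
                        ("configure", configure),
                        ("test", test)):
        action(request)
        request.log.append(step)      # audit trail for change management
    request.log.append("production")  # final cut-over
    return request

req = ServiceRequest("crm-portal", "sales", cpu_cores=8,
                     memory_gb=32, approved=True)
deploy(req, install=lambda r: None, configure=lambda r: None,
       test=lambda r: None)
print(req.log)  # ['install', 'configure', 'test', 'production']
```

The point is not the code itself but that every step is a function call, so the whole pipeline can sit behind a self-service portal.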

Isn’t that cloud?

Yes, no and maybe.  Or, as we say in the IT world: 'It depends.'  SDDC can be cloud; with on-demand self-service, flexible resource pooling, metered service, etc., it fits the cloud model.  The difference is really in where and how it's used.  A public cloud IaaS model, or any given PaaS/SaaS model, does not lend itself to legacy enterprise applications.  For instance, you're not migrating your Microsoft Exchange environment onto Amazon's cloud.  Those legacy applications and systems still need a home, and those existing hardware systems still have value.  SDDC offers an evolutionary approach to enterprise IT that can support both legacy applications and new applications written to take advantage of cloud systems.  This provides a migration path as well as investment protection for traditional IT infrastructure.

How it works:

The term 'Cloud Operating System' is thrown around frequently in the same conversation as SDDC.  The idea is that compute, network and storage are raw resources consumed by the applications and services we run to drive our businesses.  Rather than look at these resources individually, and manage them as such, we plug them into a management infrastructure that understands them and can allocate them as services require.  Forget the hardware underneath and imagine a dashboard of your infrastructure something like the following graphic.

[Figure: conceptual dashboard view of pooled compute, storage and network resources]

The hardware resources become raw resources to be consumed by the IT services.  For legacy applications this can be very traditional virtualization or even physical server deployments.  New applications and services may be deployed in a PaaS model on the same infrastructure allowing for greater application scale and redundancy and even less tie to the hardware underneath.
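To make the pooled-resource idea concrete, here is a toy model of a resource pool with the agreed-upon spare-capacity threshold described above.  All class names, resource units and numbers are invented for illustration:

```python
class ResourcePool:
    """Raw compute/memory/storage capacity consumed by IT services,
    with a spare-capacity threshold signaling when hardware should
    be added -- the 'dashboard' view in code form."""
    def __init__(self, cpu_ghz, memory_gb, storage_tb, spare_pct=20):
        self.capacity = {"cpu_ghz": cpu_ghz, "memory_gb": memory_gb,
                         "storage_tb": storage_tb}
        self.used = {k: 0 for k in self.capacity}
        self.spare_pct = spare_pct  # agreed level of spare capacity

    def allocate(self, **demand):
        # Reject the request if any resource would be oversubscribed
        for k, v in demand.items():
            if self.used[k] + v > self.capacity[k]:
                raise RuntimeError(f"pool exhausted: {k}")
        for k, v in demand.items():
            self.used[k] += v

    def needs_growth(self):
        # True once any resource dips below the spare threshold
        return any((cap - self.used[k]) / cap * 100 < self.spare_pct
                   for k, cap in self.capacity.items())

pool = ResourcePool(cpu_ghz=100, memory_gb=512, storage_tb=50)
pool.allocate(cpu_ghz=85, memory_gb=200, storage_tb=10)
print(pool.needs_growth())  # True -- CPU spare is down to 15%
```

A real cloud operating system would of course track far more (placement, affinity, fault domains), but the threshold-driven growth signal is the core idea.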

Lifting the kimono:

Taking a peek underneath the top level reveals a series of technologies, both new and old.  Additionally, there are some requirements that may or may not be met by current technology offerings.  We'll take a look through the compute, storage and network requirements of SDDC one at a time, starting with compute and working our way up.

Compute is the layer that requires the least change.  Years ago we moved to commodity x86 hardware, which will be the base of these systems.  The compute platform itself will be differentiated by CPU and memory density, platform flexibility and cost.  Differentiators traditionally built into the hardware, such as availability and serviceability features, will lose value.  Features that will continue to add value will be those related to infrastructure reduction and enablement of upper-level management and virtualization systems.  Hardware that provides flexibility and programmability will be king here, and at other layers, as we'll discuss.

Other considerations at the compute layer will tie closely into storage.  As compute power has grown by leaps and bounds, our networks and storage systems have become the bottleneck: our systems can process data faster than we can feed it to them.  This causes issues for power, cooling efficiency and overall optimization.  Dialing down performance for power savings is not the right answer.  Instead we want to fuel our processors with data rather than starve them, which means having fast local data in the form of SSD, flash and cache.

Storage will require significant change, but changes that are already taking place or foreshadowed in roadmaps and startups.  The traditional storage array will become more and more niche, as it has limited capacity for both performance and space.  In its place we'll see new options including, but not limited to, migration back to local disk and scale-out designs.  Much of the migration to centralized storage arrays was fueled by VMware's vMotion, DRS, FT, etc.  These advanced features required multiple servers to have access to the same disk, hence the need for shared storage.  VMware has recently announced a combination of Storage vMotion and traditional vMotion that allows live migration without shared storage.  This capability is already available on other hypervisor platforms and makes local storage a much more viable option in many environments.

Scale-out systems on the storage side are nothing new.  LeftHand and EqualLogic pioneered much of this market before being bought by HP and Dell, respectively.  The market continues to grow, with products like Isilon (acquired by EMC) making a big splash in the enterprise as well as plays in the big data market.  NetApp's cluster mode is now in full effect with Data ONTAP 8.1, allowing their systems to scale out.  In the SMB market, new players with fantastic offerings like Scale Computing are making headway and bringing innovation to the market.  Scale-out provides a more linear growth path, as both I/O and capacity increase with each additional node.  This is contrary to traditional systems, which are always bottlenecked by the storage controller(s).
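The linear-growth argument can be shown with back-of-the-envelope arithmetic.  The IOPS figures below are purely illustrative, not benchmarks of any product:

```python
def scale_out_iops(nodes, iops_per_node):
    # Scale-out: each added node brings its own controller,
    # so aggregate I/O grows with capacity
    return nodes * iops_per_node

def scale_up_iops(shelves, controller_limit):
    # Traditional array: adding shelves grows capacity,
    # but I/O stays capped at the controller pair's limit
    return controller_limit

for n in (2, 4, 8):
    print(f"{n} nodes/shelves: scale-out {scale_out_iops(n, 20_000):>7} IOPS,"
          f" scale-up {scale_up_iops(n, 50_000):>7} IOPS")
```

With these made-up numbers the scale-out design passes the fixed controller ceiling at three nodes and keeps climbing, which is the whole appeal.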

We will also see moves to central control, backup and tiering of distributed storage, such as storage blades and server-side cache.  Having fast data at the server level is a necessity, but it solves only part of the problem.  That data must also be made fault tolerant as well as available to systems outside the server or blade enclosure.  EMC's VFCache is one technology poised to help here by adding the server as a tier for software-based storage tiering.  Software such as this places the hottest data directly next to the processor, with tier options all the way back to SAS, SATA and even tape.
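A minimal sketch of the promote-on-access idea behind server-side tiering: blocks that get hot enough move into flash next to the CPU, and the coldest flash resident ages back toward the array.  The policy, thresholds and block names here are hypothetical, not how VFCache actually works:

```python
from collections import Counter

class TieringCache:
    """Toy tiering policy: blocks read more than `threshold` times are
    promoted into limited server-side flash; when flash is full, the
    coldest resident is demoted back toward the array tier."""
    def __init__(self, flash_slots, threshold=3):
        self.flash_slots = flash_slots
        self.threshold = threshold
        self.heat = Counter()   # per-block access counts
        self.flash = set()      # blocks currently in server flash

    def read(self, block):
        self.heat[block] += 1
        if block in self.flash:
            return "flash"      # hot path, data right next to the CPU
        if self.heat[block] >= self.threshold:
            if len(self.flash) >= self.flash_slots:
                coldest = min(self.flash, key=self.heat.__getitem__)
                self.flash.discard(coldest)  # demote toward SAS/SATA
            self.flash.add(block)
        return "array"          # served from the back-end tier

cache = TieringCache(flash_slots=2)
for _ in range(3):
    cache.read("blk-7")         # three reads cross the heat threshold
print(cache.read("blk-7"))      # flash
```

Real implementations also have to handle writes, coherency and failure of the server tier, which is exactly why the central control mentioned above matters.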

By now you should be seeing the trend of software-based features and control.  The last stage is the network, which will require the most change.  Networking has held strong to proprietary hardware and high margins for years while the rest of the market has moved to commodity.  Companies like Arista look to challenge the status quo by providing software feature sets, or open programmability, layered onto fast commodity hardware.  Additionally, Software Defined Networking (http://www.definethecloud.net/sdn-centralized-network-command-and-control) has been validated by both VMware's acquisition of Nicira and Cisco's spin-in of Insieme, which by most accounts will expand upon the Cisco ONE concept with a Cisco-flavored SDN offering.  In any event, the race is on to build networks based on software flows that are centrally managed, rather than the port-to-port configuration nightmare of today's data centers.

This move is not only for ease of administration; it is also required to push our systems to the levels demanded by cloud and SDDC.  These multi-tenant systems, running disparate applications at various service tiers, require tighter quality-of-service controls and bandwidth guarantees, as well as more intelligent routing.  Today's physically configured networks can't provide these controls.  Additionally, applications will benefit from network visibility, allowing them to request specific flow characteristics from the network based on application or user requirements.  Multiple service levels can be configured on the same physical network, allowing traffic to take appropriate paths based on type rather than physical topology.  These network changes are required to truly enable SDDC and cloud architectures.
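One way to picture an application requesting flow characteristics is a service-tier table that a central controller consults to place each flow.  The tier names, numbers and matching rule below are invented for illustration, not any SDN controller's real API:

```python
# Hypothetical service-tier table mapping requested flow
# characteristics onto a path policy on a shared physical network
SERVICE_TIERS = {
    "gold":   {"max_latency_ms": 5,   "min_bandwidth_mbps": 1000, "path": "low-latency"},
    "silver": {"max_latency_ms": 20,  "min_bandwidth_mbps": 100,  "path": "default"},
    "bronze": {"max_latency_ms": 100, "min_bandwidth_mbps": 10,   "path": "best-effort"},
}

def classify_flow(app_profile):
    """Pick the lowest (cheapest) tier that still satisfies the
    application's stated latency tolerance and bandwidth need --
    the kind of request an app could make of a controller instead
    of someone hand-configuring ports."""
    for tier in ("bronze", "silver", "gold"):
        spec = SERVICE_TIERS[tier]
        if (spec["max_latency_ms"] <= app_profile["latency_ms"]
                and spec["min_bandwidth_mbps"] >= app_profile["bandwidth_mbps"]):
            return tier
    return "gold"  # fall back to the strictest tier

print(classify_flow({"latency_ms": 25, "bandwidth_mbps": 80}))  # silver
```

The same physical network carries all three tiers; only the software-selected path and queuing policy differ per flow.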

Further up the stack from the Layer 2 and Layer 3 transport networks comes a series of other network services that will be layered in via software.  Features such as load balancing, access control and firewall services will be required for the applications running on these shared infrastructures.  These network services will need to be deployed alongside new applications and tiered to the specific requirements of each.  As with the L2/L3 services, manual configuration will not suffice, and a 'big picture' view will be required to ensure that network services match application requirements.  These services can be layered in from both physical and virtual appliances, but will require configurability via the centralized software platform.

Summary:

By combining current technology trends, emerging technologies and layering in future concepts the software defined data center will emerge in evolutionary fashion.  Today’s highly virtualized data centers will layer on technologies such as SDN while incorporating new storage models bringing their data centers to the next level.  Conceptually picture a mainframe pooling underlying resources across a shared application environment.  Now remove the frame.


Forget Multiple Hypervisors

The concept of managing multiple hypervisors in the data center isn’t new–companies have been doing so or thinking about doing so for some time. Changes in licensing schemes and other events bring this issue to the forefront as customers look to avoid new costs. VMware recently acquired DynamicOps, a cloud automation/orchestration company with support for multiple hypervisors, as well as for Amazon Web Services. A hypervisor vendor investing in multihypervisor support brings the topic back to the forefront.  To see the full article visit: http://www.networkcomputing.com/virtualization/240003355


Private Cloud: An IT Staffer’s Guide To Success

Recently I wrote The Biggest Threat to Your Private-Cloud Deployment: Your IT Staff as a call to management to understand the importance of their IT staff and the changes that will be required to move to a cloud model. That post received some strong criticism from readers who took it as an attack on IT, which was not its intent. In this post I’ll cover the flipside of the coin, the IT staff perspective. To see the full article visit: http://www.networkcomputing.com/private-cloud/240003623.
