The Brocade FCoE Proposition

I recently realized that I, like much of the data center industry, have largely forgotten about Brocade lately.  There has been little talk about their FCoE products, their Fibre Channel line, or their CNAs.  Cisco and HP have been dominating social media with blade and FCoE battles, but I haven't seen much coming from Brocade.  I thought it was time to take a good look.

The Brocade Portfolio:

Brocade 1010 and 1020 CNAs: The Brocade 1010 (single port) and Brocade 1020 (dual port) Converged Network Adapters (CNAs) integrate 10 Gbps Ethernet Network Interface Card (NIC) functionality with Fibre Channel technology—enabling transport over a 10 Gigabit Ethernet (GbE) connection through the new Data Center Bridging (DCB) and Fibre Channel over Ethernet (FCoE) protocols, providing best-in-class LAN connectivity and I/O consolidation to help reduce cost and complexity in next-generation data center environments.
Brocade 8000 Switch: The Brocade 8000 is a top-of-rack link layer (Layer 2) CEE/FCoE switch with 24 10 Gigabit Ethernet (GbE) ports for LAN connections and eight Fibre Channel ports (with up to 8 Gbps speed) for Fibre Channel SAN connections. This reliable, high-performance switch provides advanced Fibre Channel services, supports Ethernet and CEE capabilities, and is managed by Brocade DCFM.
Brocade FCOE10-24 Blade: The Brocade FCOE10-24 Blade is a Layer 2 blade with cut-through, non-blocking architecture designed for use with Brocade DCX and DCX-4S Backbones. It features 24 10 Gbps CEE ports and extends CEE/FCoE capabilities to Brocade DCX Backbones, enabling end-of-row CEE/FCoE deployment. By providing first-hop connectivity for access layer servers, the Brocade FCOE10-24 also enables server I/O consolidation for servers with Tier 3 and some Tier 2 applications.

Source: http://www.brocade.com/products-solutions/products/index.page?dropType=Connectivity&name=FCOE

The breadth of Brocade's FCoE portfolio is impressive when compared to the other major players: Emulex and QLogic with CNAs, HP with FlexFabric for C-Class and the H3C S5820X-28C Series ToR, and only Cisco providing a wider portfolio with an FCoE and virtualization-aware I/O card (VIC/Palo), blade switches (Nexus 4000), ToR/MoR switches (Nexus 5000), and an FCoE blade for the Nexus 7000.  This shows a strong commitment to the FCoE protocol on Brocade's part, as does their participation in the standards bodies.

Brocade also provides a unique ability to standardize on one vendor from the server I/O card, through the FCoE network, to the Fibre Channel (FC) core switching.  Additionally, using the FCOE10-24 blade, customers can collapse the FCoE edge into their FC core, providing a single-hop collapsed-core mixed FCoE/FC SAN.  That's a solid proposition for a data center with a heavy investment in FC and a port count low enough to stay within a single chassis per fabric.

But What Does the Future Hold?

Before we take a look at where Brocade's product line is headed, let's look at the purpose of FCoE.  FCoE is designed as another tool in the data center arsenal for network consolidation.  We're moving away from the cost, complexity and waste of separate networks and placing our storage and traditional LAN data on the same infrastructure.  This is similar to what we've done in the past in several areas: on mainframes we went from ESCON to FICON to leverage FC, and our telephones went from separate infrastructures to IP-based; we're just repeating the same success story with storage.  The end goal is everything on Ethernet.  That end goal may come sooner for some than others; it all depends on comfort level, refresh cycle, and individual environment.

If FCoE is a tool for I/O consolidation and Ethernet is the end-goal of that, then where is Brocade heading?

This has been my question since I started researching and working with FCoE about three years ago.  As FCoE began hitting the mainstream media, Cisco was out front pushing the benefits and announcing products; they were the first on the market with an FCoE switch, the Nexus 5000.  Meanwhile Brocade and others were releasing statements attempting to put the brakes on.  They were not saying FCoE was bad, just working to hold it off.

This makes a lot of sense from both perspectives.  The core of Cisco's business is routing and switching, so FCoE is a great business proposition.  They're also one of the only two options for FC switching in the enterprise (Brocade and Cisco), so they have the FC knowledge.  Lastly, they had a series of products already in development.

From Brocade's and others' perspectives, they didn't have products ready to ship, and they didn't have the breadth and depth in Ethernet, so they needed time.  The marketing releases tended to become more and more positive toward FCoE as their products launched.

This also shows in Brocade's product offering: two of the three products listed above are designed to maintain the tie to FC.

Brocade 8000:

This switch has 24x 10GE ports and 8x 8Gbps FC ports.  These ports are fixed onboard, which means this switch is not for you if you need a different mix of Ethernet and FC ports, or no FC ports at all.

In comparison, the competing product, the Nexus 5000, has a modular design allowing customers to use all Ethernet/DCB ports or several combinations of Ethernet and FC at 1/2/4/8 Gbps.

Brocade FCOE10-24 Blade:

This is an Ethernet blade for the DCX Fibre Channel director.  It ties Brocade's FCoE director capabilities to an FC switch rather than an Ethernet switch.  Additionally, this blade only supports directly connected FCoE devices, which will limit overall scalability.

In comparison, the Cisco FCoE blade for the Nexus 7000 is a DCB-capable line card gaining FCoE capability by year's end.  This merges FCoE onto the network backbone, where it's intended to go.

Summary:

If your purpose in assessing FCoE is to provide a consolidated edge topology for server connectivity, tying it back to a traditional FC SAN, then Brocade has a strong product suite for you.  If your end goal is consolidating the network as a whole, then it's important to seriously consider any purchase of FC-based FCoE products.  That's not to say don't buy them; just understand what you're getting, and why you're getting it.  For instance, if you need to tie into a Fibre Channel core now and don't intend to replace it for 3-5 years, then the Brocade 8000 may work for you because it can be refreshed at the same time.

Several options exist for FCoE today and most if not all of them have a good fit.  Assess first what you're trying to accomplish and when, then look at the available products and decide what fits best.

Geekgasm


So far this week at VMworld has been fantastic.  VMware definitely throws one of the best industry events for us geeks who really want to talk shop.  There's definitely a fair share of marketing fluff, but if you want to talk bits and bytes, the sessions and people are here for you.  I've had the pleasure of meeting several people I respect in the industry and hanging out with several others I know.  That's really been the highlight of my time here; some of the offline conversations I've had make the trip worth it all on their own.

The best part of the conference for me so far has been hanging out and answering questions at the World Wide Technology booth.  Shameless plug or not, it's been an awesome experience.  We have three private cloud architectures up and running on the Solutions Exchange floor.

It's phenomenal to have these three great solutions side by side and have the opportunity to talk to people about what each has to offer.  We've had a lot of great traffic and great questions at the booth and I've really enjoyed the chats I've had with everyone.  With all of that, the part that has really made this a geekgasm for me is the experts who have stopped by to say hello and discuss the technology.  The picture below says it all: I had Brad Hedlund (@bradhedlund / www.bradhedlund.com) from Cisco and Ken Henault (@bladeguy / http://www.hp.com/go/bladeblog) from HP having a conversation with me in front of the booth.

[Photo: Brad Hedlund and Ken Henault talking at the WWT booth]

If you're not familiar with them, these guys are both at the top of their game within their respective companies and battle it out back and forth in the world of social media.  The conversation was great, and it's good to see a couple of competitors get together, shake hands, and have a good discussion in front of a technical showcase of some of their top gear.  If you're at the show and haven't stopped by to say hello and see the gear, you're missing out; get over to the booth and I'll throw in a beer coozie!

The Difference Between ‘Foothold’ and ‘Lock-In’

There is always a lot of talk within IT marketing around vendor 'lock-in'.  This is most commonly found within competitive marketing, i.e. 'Product X from Company Y creates lock-in, causing you to purchase all future equipment from them.'  In some cases lock-in definitely exists; in other cases what you really have is better defined as 'foothold.'  Foothold is an entirely different thing.

Any given IT vendor wants to sell as much product as possible to their customers and gain new customers as quickly as possible; that's business.  One way to do this is to use one product offering as a way in the door (foothold) and sell additional products later on.  Another way is to sell a product that forces the sale of additional products.  There are other methods, including the 'build a better mousetrap' method, but these are the two I'll discuss.

Foothold:

Foothold is like the beachhead at Normandy during WWII: it's not necessarily easy to get, but once held it gives a strategic position from which to gain more territory.

Great examples of foothold products exist throughout IT.  My favorite example is NetApp's NFS/CIFS storage, which did the file-based storage job so well that in many cases NetApp was able to convert their customers' block storage as well.  There are currently two major examples of the use of foothold in IT: HP and Cisco.

HP is using its leader position in servers to begin seriously pursuing the network equipment market.  They've had ProCurve for some time but recently started pushing it hard, and they acquired 3Com to significantly boost their networking capabilities (among other advantages).  This is proper use of foothold and makes strategic sense; we'll see how it pans out.

Cisco is using its dominant position in networking to attack the market transition to denser virtualization and cloud computing with its own server line.  From a strategic perspective this could be looked at either offensively or defensively: either Cisco is on offense, attacking a formerly strong vendor partner's territory to grow revenue, or Cisco is on defense, having realized HP was leveraging its foothold in servers to take network market share.  In either event it makes a lot of strategic sense.  By placing servers in the data center they have a foothold to sell more networking gear, and they also block HP's traditional foothold.

From my perspective both are strong moves; to continue to grow revenue you eventually need to branch into adjacent markets.  You'll hear people cry and whine about stretching too thin, trying to do too much, etc., but it's a reality.  For a publicly traded company, a stagnant revenue stream is nearly as bad as a negative one.

If you look closely, both companies are executing in very complementary adjacent markets.  Networks move the data in and out of HP's core server business, so why not own them?  Servers (and Flip cameras, for that matter) create the data Cisco's networks move, so why not own them?

Lock-In:

You'll typically hear more about vendor lock-in than you will actually experience.  That's not to say there isn't plenty of it out there, but it usually gets more publicity than is warranted.

Lock-in is when a product you purchase and use forces you to buy another product/service from the same vendor, or replace the first.  To use my previous Cisco and HP example: both companies are using adjacent markets as foothold, but neither locks you in.  For example, both HP and Cisco servers can be connected to any vendor's switching, and their network systems interoperate as well.  Of course you may not get every feature when connecting to a 3rd party device, but that's part of foothold and the fact that they add proprietary value.

The best real example of lock-in is blades.  Don't be fooled: every blade system on the market has inherent vendor lock-in.  Current blade architecture couldn't provide the advantages it does without lock-in.  To give you an example, let's say you decide to migrate to blades and you purchase 7 IBM blades and a chassis, 4 Cisco blades and a chassis, or 8 HP blades and a chassis.  You now have a chassis half full of blades.  When you need to expand by one server, who you gonna call?  (Ghostbusters can't help.)  You're obviously going to buy from the chassis vendor, because blades themselves don't interoperate and you've got empty chassis slots.  That is definite lock-in, up to the max capacity of that chassis.

When you scale past the first blade system you'll probably purchase another from the same vendor because you know and understand its unique architecture; that's not lock-in, that's foothold.

Summary:

Lock-in happens, but foothold is more common.  When you hear a vendor, partner, etc. say product X will lock you in to vendor Y, make that person explain in detail what they mean.  Chances are you're not getting locked in to anything.  If you are getting locked in, know the limits of that lock-in and make an intelligent decision on whether it's worth the advantages that made you consider the product in the first place; it very well might be.

Bacon And Eggs as a Service (BAEaaS) at VMworld

Yeah, I know the 'and' between bacon and eggs should be lower case, but that just looks silly; let's move on 😉


BAEaaS is a recovery tweetup following the previous night's festivities.  It was originally scheduled for Tuesday but due to popular demand has been moved to Wednesday (mainly because the number of large parties going on Monday night may negate people's desire to get out of bed any earlier than sessions require).

Breakfast will go from 7:00am – 9:00am, with the intent that people will filter in and out throughout the two-hour period.  There are no reservations and Mel's is a diner, so it's first come, first served; your best bet is to come by 7:00 or at 8:00 (breakfast shift work).

Come eat bacon #vmworld3word


Note: BAEaaS is a BYOB event (Buy Your Own Breakfast).  That being said, with plenty of partners and vendors around, mention an interest and a budget and you may make it onto somebody's expense report.

Details:

twtvite: http://twtvite.com/BAEaaS

Wednesday 9/1, 7:00am – 9:00am Pacific

http://www.melsdrive-in.com/menu/breakfast.html

801 Mission St San Francisco CA 94103

Overflow:

For any overflow there are two nearby options where groups can go, Denny's and Starbucks, depending on your preference.  See all locations on the map below:

Green Pin: Moscone Center

Red Pin: Mel’s Drive-In

Blue Pin: Denny’s

Orange Pin: Starbucks

Map: http://bit.ly/8YWTvD

Special Thanks:

@juiceLSU009 the brains behind the idea

@tscalzott for naming the location, which looks to have fantastic food

Extra Special Thanks:

@crystal_lowe Crystal provided me with wonderful suggestions and contacts for properly planning an event such as this.  Due to time constraints, budget, personal laziness, etc., I ignored all of her suggestions.  If any part of BAEaaS is not up to snuff, please ensure you retweet Crystal's well-deserved 'I told you so.'

If you’re not familiar with Crystal and have a spouse that attends industry events with you, it’s time to familiarize yourself.  Here are links to what she’s got planned this year for the spouses.

Special note: you cannot exchange your actual VMworld pass for a Spousetivities pass, although you'll want to.

http://spousetivities.com/

http://blog.scottlowe.org/2010/08/22/vmworld-2010-spouse-activities-calendar/

http://spousetivities.eventbrite.com/

Dell, Backing the Right Horse in the Wrong Race

With Dell's announced acquisition of 3PAR, I've been pondering the question of what it is they're thinking.  I've been scouring the blogs looking for an answer, and none of what I've found resonates well with me.  Most of it states they picked a good horse, and that the business behind buying a horse to race makes sense, but nobody asks whether they're in the right race.  The separate races I'm talking about are private and public clouds.

Dell bid on 3PAR, a small high-end storage company with a product line positioned to compete with EMC and Hitachi for some use cases.  This complements Dell's own storage offering, which was built upon the EqualLogic iSCSI storage acquisition and geared toward the SMB space.  Dell also has had a traditionally strong partnership with EMC and resold a great deal of EMC storage where EqualLogic was not a good fit.  The EqualLogic acquisition did not appear to damage the Dell/EMC partnership significantly, but adding 3PAR to the mix may change that.  On the other hand, EMC is heavily backing Cisco UCS, so this may very well be a defensive play.

So what is Dell's play in expanding their internal storage capabilities and risking damage to a profitable partnership with EMC?  Most of the analysis I find states that Dell is looking to grow data center revenue to regain profit they are losing to HP in the desktop/laptop space.  In order to do this they are putting together more of the key hardware components of private cloud architectures, the thinking being that they will try to put together an offering to compete with vBlock, Matrix, SMT, CloudBurst, etc.

At first glance this all makes sense: Dell doesn't want to be left without a horse in the private cloud race, so they make some moves and acquisitions and get their offering in place, late, but maybe not too late.  On the flip side they can utilize 3PAR's small market share as an avenue for Dell server sales, and conversely use Dell server sales to boost 3PAR's struggling sales.  With any luck Dell will have the same success with 3PAR that they did with EqualLogic.  That's what I see at first glance; upon further thought there are more concerns.

The most important question in my mind: Is Dell putting their horse in the right race?

Dell is looking to attack the enterprise and federal data center, where private cloud will be a big play.  This is the home of solid, high-performance, feature-rich, innovative platforms.  It's also a place where trust means everything, i.e. 'Nobody gets fired for buying vendor X.'  Dell is not vendor X; they've typically competed solely on price.  Moving heavily into this market, they will be in constant battle with HP, IBM, EMC, NetApp, Cisco and others.

I think Dell is missing an opportunity to execute on their traditional strengths and attack the public cloud market with a unique offering.  Public cloud is all about massive scale, where the intelligence, redundancy, etc. are built into the software layers.  This means that a company who can effectively deliver bulk, reliable, low-cost servers, storage and networking will have a very strong offering.  The HPs, Ciscos, and IBMs of the world will have a much harder time selling into this space due to cost.  Their products have traditionally been more about performance and usability features, which may not have as strong a message in the public cloud.

Summary:

Dell solidly executed on the acquisition of EqualLogic and has had great success there, providing a low-end, low-cost storage system paired perfectly with their server offering.  The 3PAR acquisition and recent Dell innovations in their server offering show a preview of a new model for Dell.  Whether this is a successful model or not is yet to be seen.  From my point of view, successful or not, Dell would be better suited pairing their traditional business with public cloud solutions and creating a new market for themselves with less competition.

10 Things to Know About Cisco UCS

Bob Olwig, VP of Corporate Business Development for World Wide Technology (www.wwt.com), asked me to provide 10 things to know about UCS for his blog, http://bobolwig.wordpress.com.  See the post 10 Things to Know About Cisco UCS here: http://bobolwig.wordpress.com/2010/08/04/10-things-to-know-about-cisco-ucs/.

Why Oracle’s 72 port 10GE switch doesn’t matter

I recently ran into some internal buzz about Oracle's 72-port 'top-of-rack' switch announcement and it piqued my interest, so I started taking a look.  Oracle selling a switch is definitely interesting on the surface, but then again they did just purchase Sun for a bargain basement price, and Sun does make hardware, pretty good hardware at that.  Here is a quick breakdown of the switch:

Size: 1RU
Port count: 72x 10GE or 16x 40GE
Oversubscription: None (fully non-blocking)
L3 routing: Yes
DCB: No
FCoE: No
Price: $79,200 list

Two three-letter words came to mind when I saw this: wow, and why.  Wow is definitely in order, I mean wow!  Packing 72 non-blocking 10GE ports into a 1RU switch chassis is impressive, very impressive.  I'm dying to get a look at the hardware.  Now for the why:

Why does Oracle think they can call a 72-port switch a top-of-rack switch?  1RU form factor doth not a ToR make.  Do you have 72 10GE ports in a rack in your data center?  This switch is really a middle-of-row or end-of-row switch.  Once you move it into that position you've got some cabling to think about: $1,000 or so per optic, times two per link, another couple hundred for that nice long cable, times 72 links, plus the cost of running and maintaining those cables… think 'Holy shit Batman, my $79,200 ToR switch just became a $200,000+ EoR switch with a different management model from the rest of my shop.'
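Here's that back-of-envelope math as a quick sketch.  The optic and cable prices are the ballpark figures quoted above, not vendor list prices:

```python
# Rough EoR cost math from the paragraph above; prices are ballpark
# assumptions from the text, not vendor quotes.
switch_list = 79_200     # Oracle's 72-port switch, list price
ports = 72
optic_cost = 1_000       # ~$1,000 per optic
optics_per_link = 2      # one optic at each end of the run
cable_cost = 200         # "a couple hundred" per long cable run

cabling = ports * (optics_per_link * optic_cost + cable_cost)
print(f"Cabling and optics: ${cabling:,}")   # -> $158,400
print(f"Total: ${switch_list + cabling:,}")  # -> $237,600
```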

Why does Oracle think there is a need for full non-blocking bandwidth on every access layer port?  Is anyone seriously driving sustained 10GE on multiple devices at once, anyone?  You've got two options in switching and only one actually makes sense.  You either reduce cost and implement oversubscription in hardware, or you pay for full-rate hardware that is still oversubscribed in your network designs because you aren't running 1:1 server-to-inter-switch links.  Before deciding how much you really need line-rate bandwidth, do yourself a favor and take a look at your I/O profile across a few servers for a week or two.  If you're like the majority of data centers, you'll find that you'll be quite fine with 8:1 or higher oversubscription with 10GE at the access layer.
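If you want to check your own environment, the math is simple: total access bandwidth divided by total uplink bandwidth.  A minimal sketch with illustrative numbers (the uplink count here is hypothetical, not a spec of any particular switch):

```python
# Oversubscription ratio = access bandwidth / uplink bandwidth.
# Numbers are illustrative; substitute your own port counts and speeds.
access_ports, access_gbps = 72, 10
uplink_ports, uplink_gbps = 9, 10    # hypothetical uplink configuration

ratio = (access_ports * access_gbps) / (uplink_ports * uplink_gbps)
print(f"Oversubscription: {ratio:.0f}:1")   # -> Oversubscription: 8:1
```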

Why would I want to buy a 10GE switch today that has no support for DCB or FCoE?  Whether you like it or not FCoE is here; both Cisco and HP are backing it strongly, with products shipping and more on the way.  Emulex and QLogic are both in their second generation of Converged Network Adapters (CNAs); see my take on Emulex's OneConnect adapter (http://www.definethecloud.net/?p=382).  The standards are all ratified, and even TRILL is soon to follow, promising that beautiful Spanning Tree-free utopian network you've dreamed of since childhood.  If I'm an all-NFS or iSCSI shop maybe this doesn't bother me, but if I'm running Fibre Channel there is no way I'm locking myself into 10GE at the access layer without IEEE-standard DCB and FCoE capabilities in the hardware.

What it really comes down to is that this switch is meaningless in the average enterprise data center.  Where this switch fits and has purpose is in specialized multi-rack appliances and clusters.  If you buy a multi-rack system or cluster from Oracle, this will be one option for connectivity.  With any luck they won't force you into this switch, because there are better options.

Thanks to my colleague for helping me out with some of this info.

Kudos: I do want to give Oracle kudos on the QSFP, which is the heart of how they were able to put 72 10GE ports in a 1RU design.  The QSFP is a 40GE port that can optionally be split into four individual 10GE links. It's definitely a very cool concept and will hopefully see greater industry adoption.

How to build the 10GE network of your dreams:

One of the things I love about the Oracle 10GE switch is that it highlights exactly what Cisco is working to fix in data center networking with the Nexus 5000 and 2000.

Note: Full disclosure and all that jazz: I work for a Cisco reseller, and as part of my role I work closely with Cisco Nexus products.  That being said, I chose the role I'm in (and the role chose me) because I'm a big fan and endorser of those products, not the other way around.  To put it simply, I love the Nexus product line because I love the Nexus product line; I just so happen to be lucky enough to have a job doing what I love.

So now, stepping off my soapbox and out of disclosure mode, let's get to the 'what the hell is Joe talking about' portion of this post.

[Diagram: two Nexus 5020s at the middle of row, with 10 pairs of Nexus 2232s at the top of each rack]

In the diagram above I'm showing two Nexus 5020s in green at the top and 10 pairs of Nexus 2232s connected to them.  What this creates is a redundant 320-port 10GE fabric with two points of management, because the Nexus 2000 is just a remote line card of the Nexus 5000.  All of this comes with two other great features: latency under 5us and FCoE support.  Additionally this puts a 2K at the top of each rack, allowing ToR cabling while keeping all management and administration at the 5K in the middle of row.  Because the system also supports Twinax cabling, there is a cost savings of thousands of dollars per rack over fiber cabling to a ToR or EoR switch.  There is not another solution on the market that comes close to this today.  All of this at a 4:1 oversubscription rate at the access layer.  If you're willing to oversubscribe a little more, you could actually add two more redundant Nexus 2232 pairs for another 64 ports, capping at 384 ports.
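The port and oversubscription math behind that design falls out directly, assuming the Nexus 2232 layout described above of 32 10GE host ports and 8 10GE fabric uplinks per module:

```python
# Fabric math for the design above: 10 redundant pairs of Nexus 2232s,
# each module with 32 10GE host ports and 8 10GE fabric uplinks.
host_ports, fabric_uplinks, pairs = 32, 8, 10

print(f"Redundant 10GE ports: {pairs * host_ports}")          # -> 320
print(f"Oversubscription: {host_ports // fabric_uplinks}:1")  # -> 4:1

# Trading a bit more oversubscription buys two more pairs:
print(f"Max redundant ports: {(pairs + 2) * host_ports}")     # -> 384
```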

This entire solution comes in at or below the price of 2 of Oracle’s switches before considering the cost savings on cabling.

Summary:

I don't believe Oracle's 72-port switch has a market in the average data center.  It will have specialized use cases, and it is quite an interesting play.  The best thing it has to offer is the QSFP, which will hopefully gain some buzz and vendor support thanks to Oracle.

How Emulex Broke Out of the ‘Card Pusher’ Box

A few years back, when my primary responsibility was architecting server, blade, SAN, and virtualization solutions for customers, I selected the appropriate HBA based on the following rule: whichever (QLogic or Emulex) is less expensive today through the server OEM I'm using.  I had no technical or personal preference for one or the other.  They were both stable, performed well, and allowed my customers to do what they needed to do.  On any given day one might show higher performance than the other, but that's always subject to the testing criteria and will be fairly irrelevant for a great many customers.  At that point I considered them both 'Card Pushers.'

Last year I had the opportunity to speak at two Emulex partner product launch events in the UK and Germany.  My presentation was a vendor-independent technical discussion of the drivers for consolidating disparate networks onto 10GE and above.  I had no prior knowledge of the exact nature of the product being launched, and didn't expect anything more than a Gen 2 single-chip CNA; nothing to get excited over.  I was wrong.

Sitting through the keynote presentations by Emulex executives, I quickly realized OneConnect was something totally different, and that with it Emulex was doing two things:

  1. Betting the farm on Ethernet
  2. Rebranding themselves as more than just a card pusher.

Now, just to get this out of the way: Emulex did not, has not, and to my knowledge will not stop pursuing better and faster FC technology; their 4Gb and 8Gb FC HBAs are still rock-solid, high-performance, pure FC cards.  What they are doing, however, is obviously placing a large bet (and R&D investment) on Ethernet as a whole.

OneConnect:

The Emulex OneConnect is a Generation 2 Converged Network Adapter (CNA), but it's a lot more than that.  It also does TCP offload, operates as an iSCSI HBA, and handles FCoE, including the full suite of DCB standards.  It's the Baskin-Robbins of I/O interface cards, although admittedly no FCoTR support ;-) (http://www.definethecloud.net/?p=380).  The technology behind the card impressed me, but the licensing model is what makes it matter.  With all that technology built into the hardware you'd expect a nice hefty price tag to go with it.  That's not the case with the OneConnect card: the licensing options allow you to buy the card at a cost equivalent to competing 10GE NICs and license iSCSI or FCoE if/when desired (licensing models may vary with OEMs).  This means Emulex, a Fibre Channel HBA vendor, is happy to sell you a high-performance 10GE NIC.  In IT there is never one tool for every job, but as far as I/O cards go this one comes close.

You don't have to take my word for it when it comes to how good this card is; HP's decision to integrate it into blade and rack mount system boards speaks volumes.  Take a look at Thomas Jones' post on the Emulex Federal Blog for more info (http://www.emulex.com/blogs/federal/2010/07/13/the-little-trophy-that-meant-a-lot/).  Additionally Cisco is shipping OneConnect options for UCS blades and rack mounts, and IBM also OEMs the product.

In addition to the OneConnect launch, Emulex has also driven to expand their market into other areas.  Products like OneCommand Vision promise to provide better network I/O monitoring and management tools, and are uniquely positioned to do so through the eyes of the OneConnect adapter, which can see all networks connected to the server.

Summary:

Overall Emulex has truly moved outside of the 'Card Pusher' box and uniquely positioned themselves above their peers.  In a data center market where many traditional Fibre Channel vendors are clinging to pure FC like a sinking ship, Emulex has embraced 10GE and offers a product that lets the customer choose the consolidation method or methods that work for them.

FCoTR: A Storage Revolution

As the industry has rapidly standardized and pushed adoption of Fibre Channel over Ethernet (FCoE), there continue to be many skeptics.  Many Fibre Channel gurus balk at the idea of Ethernet being capable of guaranteeing the level of lossless delivery and performance required for the SCSI data their disks need.  IP junkies like Greg Ferro (http://etherealmind.com/) balk at the idea of changing Ethernet in any way and insist that IP can solve all the world's problems, including world hunger (Sally Struthers over IP, SSoIP).  Additionally there is a fear from some storage professionals of having to learn Ethernet networks, or of being displaced by their network counterparts.

In steps Fibre Channel over Token Ring (FCoTR).  FCoTR promises to provide collisionless delivery using proven Token Ring networks.  FCoTR is proposed by industry-recognized experts E. Banks, K. Houston, S. Foskett, R. Plankers and W. C. Preston to solve the issues mentioned above, providing a network that can converge Fibre Channel onto Token Ring while maintaining the purity of IP and providing job protection to storage administrators.  FCoTR is synergistic network convergence for Data Center 3.0 and Cloud Computing.

FCoTR has taken the fast track into the public eye and will be interesting to watch as it evolves.  If IBM plays their cards right they may be able to ride this wave into displacing Cisco and regaining their dominance in that space.  For more information on FCoTR:

Why Cloud is as ‘Green’ As It Gets

I stumbled across a document from Greenpeace citing cloud for additional power draw and the need for more renewable energy (http://www.greenpeace.org/international/en/publications/reports/make-it-green-cloud-computing/).  This is one of a series I've been noticing from the organization lambasting IT for its effect on the environment and chastising companies for new data centers.  These articles all strike a chord with me because they show a complete lack of understanding of what cloud is, does, and will do on the whole, especially where it concerns energy consumption and 'green' computing.

Greenpeace seems to be looking at cloud as additional hardware and data centers being built to serve more and more data.  While cloud is driving new equipment, new data centers and larger computing infrastructures, it is doing so to consolidate computing overall.  Speaking of public cloud specifically, there is nothing greener than moving to a fully cloud infrastructure.  It's not about a company adding new services; it's about moving those services from underutilized internal systems onto highly optimized and utilized shared public infrastructure.

Another point they seem to be missing is the speed at which technology moves.  A state-of-the-art data center built 5-6 years ago would be lucky to reach a 1.5:1 Power Usage Effectiveness (PUE), whereas today's state-of-the-art data centers can get to 1.2:1 or below.  This means a new data center can potentially shave 0.3 kW or more of overhead per processing kW compared to one built 5-6 years ago.  Whether that's renewable energy or not is irrelevant; it's a good thing.
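Since PUE is just total facility power divided by IT load, the overhead saved per processing kW falls straight out of the two figures above; a quick sketch:

```python
# PUE = total facility power / IT load, so the overhead per kW of IT
# load is (PUE - 1). Figures from the paragraph above.
old_pue, new_pue = 1.5, 1.2
saved = (old_pue - 1.0) - (new_pue - 1.0)
print(f"{saved:.1f} kW less overhead per processing kW")  # -> 0.3 kW
```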

The most efficient privately owned data centers moving forward will be ones built as private cloud infrastructures that can utilize resources on demand, scale up and down instantly, and automatically shift workloads during non-peak times to power off unneeded equipment.  Even the best of these won't come close to the potential efficiency of public cloud offerings, which can leverage the same advantages and gain exponential benefits by spreading them across hundreds of global customers, maintaining high utilization rates around the clock and calendar year.

Greenpeace lashing out at cloud and focusing on pushes for renewable energy is naive and short-sighted.  Several other factors go into thinking green with the data center.  Power and cooling are definitely key, but what about utilization?  Turning a server off during off-peak times is great for saving power, but that still means the components of the computer had to be mined, shipped, assembled, packaged, and delivered to me in order to sit powered off a third of the day when I don't need the cycles.  That hardware will still be refreshed the same way, at which point some of the components may be recycled and the rest will become non-biodegradable and sometimes harmful waste.

Large data centers housing public clouds have the promise of overall reduced power and cooling with maximum utilization.  You have to look at the whole picture to really go green.

Greenpeace: while you're out there casting stones at big data centers, how about you publish some of your numbers?  Let's see the power, cooling, and utilization numbers for your computing and data centers: actual numbers, not what you offset by sending a check to Al Gore's bank account.  While you're at it, throw in the costs and damage created by your print advertising (paper, ink, power), etc.  Give us a chance to see how green you are.