The Art of Pre-Sales Part II: Showing Value

Part I of this post (http://www.definethecloud.net/the-art-of-pre-sales) received quite a few page views and positive feedback, so I thought I'd expand on it.  Last week on the Twitters I made a comment re: sales engineers showing value via revenue ($$) and got a lot of feedback, so that's the topic here.  While I will touch on a couple of points briefly, this post is not intended as a philosophical discussion of how engineers 'should be judged.'  Quite frankly, if you're an engineer the only thing that matters is how you are judged (for the time being at least.)  This is about understanding and showing your value.  Don't get wrapped around the axle on right and wrong or principles.  While I don't always follow my own advice, I've often found that the best way to change the system is by playing by its rules and becoming a respected participant.

A move to pre-sales is often a hard transition for an engineer to make; I discuss some of the thought process in the first post linked above.  This post focuses on transitioning the way in which you show your value, providing some tools to assist in career and salary growth rather than job performance itself.  In a traditional engineering role you are typically graded on performance of duties, engineering acumen and possibly certifications showing your knowledge and growth.  When transitioning to a sales engineer role those metrics can and will change.  There are several key concepts that will assist in showing your value and reaping rewards such as salary increases and promotion:

  1. Understand the metrics
  2. Adapt to the metrics
  3. Gather the data
  4. Sell yourself

Understand the Metrics

The first key is to understand the metrics on which you are graded.  While this seems like a straightforward concept, it is often missed.  It is best discussed up front when accepting the new role; prior to acceptance you often have more of a say in how these things are decided.  Each company, organization and even team often uses different metrics.  I've had hybrid pre-sales/delivery roles where upper management judged my performance primarily on billable hours.  This meant that the work I did up front (pre-sale) held little to no value, no matter how influential it may have been in closing the deal.  I've also held roles that based value primarily on sales influence, basically revenue.  In most cases you will find a combination of metrics used, and you want to be aware of them.  If you are not focused on the right areas, the value you provide may go unnoticed.  In the first example above, if I'd spent all of my time in front of customers selling deals but never implementing, my value would have been minimized.

Understanding the metrics is the first step; it lets you know what you'll be measured on.  In some cases those metrics are black and white and therefore easy.  For instance, when I was an active-duty Marine, E1-E5 promotion was about 70-80% based on physical fitness test (PFT) and rifle marksmanship qualification scores.  These not only counted on their own but were also factored into the proficiency and conduct marks that made up the rest of the promotion score.  This meant a Marine could move up much more easily by focusing on shooting and pull-ups than on job proficiency.  This post is not about gaming the system, but that example shows that knowing the system is important.

Adapt to the metrics

Let me preface by saying I do not advocate gaming the system, or focusing solely on one area you know is thoroughly prized while ignoring the others.  That is nothing more than brown nosing, and you'll quickly lose the respect of your peers.  Instead adapt, where needed, to the metrics you're measured on.  It's not about dropping everything to focus on one area; it's about ensuring you are covering all areas used to assess your performance.  Maybe certifications weren't important where you were but are now required: get on it.  Additionally, remember that anything that can be easily measured probably is.  Intangibles, or items of a subjective nature, are difficult tools to measure performance with.  That doesn't mean they aren't or shouldn't be used; it's just a fact.  Because of that, understand the tangibles and ensure you are showing value there.

Gather the data

In a sales organization, sales numbers are always going to be key.  Every company will use them differently but they always factor in.  Every sales engineer, at a high level, is there to assist in the sale of equipment, so those numbers matter.  They are also very tangible, meaning you can show value easily.  Most organizations will use some form of CRM, such as salesforce.com, to track sales dollars and customers.  Engineering access to this tool varies, but the more you learn to use the system the better.  Showing the value of the deals you spend your time on is enormous, especially if it sets you apart from your peers.  Take the time to use these systems in the way your organization intends so that you are tied to the revenue you generate.
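If your CRM is Salesforce, even a short script can pull the revenue you're attached to before a review.  Below is a minimal sketch using the simple_salesforce Python library; the Sales_Engineer__c field is a hypothetical custom field standing in for however your org ties engineers to opportunities (many use opportunity teams instead), so adjust to your own schema.

```python
# A minimal sketch, not production code. Assumes API access and a
# hypothetical custom field (Sales_Engineer__c) attributing an SE to
# each opportunity; your org's schema will differ.
from simple_salesforce import Salesforce

sf = Salesforce(username="me@example.com", password="...",
                security_token="...")  # placeholder credentials

soql = (
    "SELECT Name, Amount, CloseDate FROM Opportunity "
    "WHERE Sales_Engineer__c = 'Joe Engineer' "
    "AND IsWon = true AND CloseDate = THIS_FISCAL_YEAR"
)
deals = sf.query_all(soql)["records"]

total = sum(d["Amount"] or 0 for d in deals)
print(f"Closed-won deals influenced: {len(deals)}, revenue: ${total:,.0f}")
```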

Sales numbers are a great example but there are many others.  If you participate in a standards body, contribute frequently to internal wikis or email aliases, etc., gather that data.  These are parts of what you contribute and may go unnoticed; you need to ensure you have that data at your disposal.  Having the right data on hand is key to step four: selling yourself.

Sell yourself

This may be the most unnatural part of the entire process.  Most people don't enjoy, and aren't comfortable with, presenting their own value.  That said, it is also possibly the most important piece; if you don't sell yourself you can't count on anyone else to do it.  When discussing compensation, initial or raise, and promotion, always look at it from a pure business perspective.  The person you're having the discussion with has the ultimate goal of keeping the right people on board for the lowest cost; your goal is securing the highest price possible for the value you provide.  Think of it as bargaining for a car: regardless of how much you may like your salesperson, you want to drive away with as much money in your pocket as possible.

If you've followed the first three steps, this part should be easier.  You'll have documentation to support your value along the metrics evaluated; bring it.  Don't expect your manager to have looked at everything or to have it handy.  Having these things ready helps you frame the discussion around your value and puts you in charge.  Additionally, it shows that you know your own value.  Don't be afraid to present who you are and what you bring to the table.  Also don't be afraid to push back.  It can be nerve-wracking to hear a 3% raise and ask for 6%, or to push back on a salary offer for another 10K, but that doesn't mean you shouldn't do it.  Remember, you don't have to make demands, and as long as you don't, there is no harm in asking.

Phrasing is key here and practice is always best.  Remember, you are not saying you'll leave; you're asking for your value.  Think in phrases like, "I really appreciate what you're offering but I'd be much more comfortable at $x and I think my proven value warrants it."  I'm not saying to use that line specifically, but it strikes the right tone.  In these discussions you want to show three things:

  1. That you are appreciative of the position/opportunity
  2. That you know your value
  3. That your value is tangible and proven

Intangibles

There are several other factors I always recommend focusing on:

  • Teamwork – This is not only easily recognizable as value, it is real value.  A team that works together and supports one another will always be more successful than a group of rock stars.  Share knowledge freely and help your peers wherever possible, even if they are not on your direct team.
  • Leadership – You don't need a title to lead.  Set an example and exemplify what you'd like to see in others.  This is one I must constantly remind myself of and often fail at, but it's key.  Lead from the front and people will follow.
  • Professionalism – As a Marine we had a saying to the effect of "Act at the rank you want to be."  Your dress, appearance and professionalism should always be at the level you want to reach, not where you are.  This not only assists in getting there, but also eases the transition once you arrive.  Have you ever seen an engineer come in wearing jeans and a polo one day, then a shirt and slacks the next after a promotion?  Looks pretty unnatural, doesn't it?  If that engineer had already been acting the part it would have been a natural and expected transition.
  • Commend excellence – When one of your colleagues in any realm does something above and beyond, commend it.  Send a thank you and brief description to them and cc their manager, or to their manager and cc them.  This helps them with steps three and four, and it also shows that you noticed.
  • Technical knowledge – While it should go without saying, I won’t let that be.  Always maintain your knowledge and stay sharp. 
  • Know your market value – This can be difficult but there are tools available.  One suggestion is using a recruiter.  A good recruiter wants you to command top dollar because it increases their commission; this, combined with their market knowledge, will help you place yourself.

Do’s and don’ts

  • Do – Self-assessments.  I never like to walk into a review and be surprised.  I do a thorough self-assessment, in the format my employer uses, prior to a review.  When possible I present my assessment rather than the other way around.  I always expect to have more areas of improvement listed than they do.
  • Don't – Use ultimatums.  The best example of this is receiving another offer and using it to strong-arm your employer into more money.  If you have an offer you intend to use to negotiate, make sure it's one you intend to take.  Also know that this is a one-time tactic; you won't ever be able to use it again with your employer.
  • Do – Strive for improvement.  Recognize where you can improve.  Apply as much honesty as possible to self-reviews and assessments.
  • Don't – Blame.  Look for the common denominator: if you've been passed over multiple times for promotion, ask why.  Don't get stuck in the rut of blaming others for things you can improve.  Even if it was someone else's fault, you may find something you can do better.

Summary

In any professional environment, knowing and showing your value is important.  Most of this is specific to a pre-sales role but can be applied more widely.  The short version: know how to show your value, and show it.  Remember, you work to get paid, even if you love what you do.


A Salute to Greatness

There are two things I've spent my life doing: being a class clown (laughed at or with is your choice) and building my career.  Since I was 16 I've worked no less than 40-hour weeks, and have more often been immersed in IT upwards of 80.  I have rarely taken time off; I typically watch PTO disappear on a spreadsheet January first of each year.  If you count my five years of proud service to my country as a Marine, you can do the math on the fact that being a Marine is a 24/7 occupation, scratch that, life.  I've striven to learn, to advance and to grow both personally and professionally.  I've also caught many lucky breaks, more than I deserved.  Most of those breaks were in the form of mentors who saw something better than I was in me and helped me mold myself into it (if you're not aware, the best mentors are merely guides that help you see the path; the work is always yours.)  The luckiest break I've had has been my employment with World Wide Technology (www.wwt.com).

WWT is a highly awarded $5 billion systems integrator and VAR that has been included in Fortune's list of the 100 best companies to work for.  While impressive in and of itself, that does not scratch the surface of what makes WWT amazing.  WWT's culture is the core of both its success and its position on Fortune's list.  It is a culture of excellence, intelligence and talent, but more importantly of integrity, teamwork and value in its people.  In the nearly two and a half years I have been with WWT, I have built both professional relationships and friendships with some of the best of the best in all aspects of the IT business.  Every day I am impressed by someone, something or the company as a whole.  The knowledge of the engineers, the dedication of the teams, the loyalty and camaraderie are unmatched.  But still, that's not everything that makes WWT such a great place.

I've tried to find the words to describe how WWT treats its people, the dedication the company, the executives and the management provide to them.  I cannot.  Instead I have one example of many that go unannounced, are not done for publicity and in many cases are not even widely known about internally.  Doug Kung was a WWT engineer I never had the pleasure of meeting.  He was well respected and liked by everyone who knew or worked with him.  Doug passed away in October of 2010 after a battle with cancer.  WWT as a company, at the direction of the executive team and directly in line with the company's core values, supported Doug, his wife and his two children through the entire process.  This went well above and beyond what was legally required, and more so above what would be reasonably expected.  The support did not stop with his passing; WWT annually arranges events to raise money for Doug's family and matches the donations made.  While the story itself is a tragedy, the loss of a great person, this brief piece is an example of WWT's character as a company.  As I said, this is one example.

The friends and connections I've made, the opportunities I've had, and the support I've been given at WWT are unmatched.  I thank WWT and the people who make it great for those opportunities.  With that said, it is with great regret that I've come to the decision to part ways with WWT.  Events in my personal life have brought me to this decision and I will be taking some time for myself.  Over the next couple of months I will be spending some much needed time with family and friends.  It is long overdue and that is the silver lining in everything.  I will do my best to stay abreast of technology trends and intend to immerse myself in technology areas that stretch my abilities (one can't remain completely idle.)  As a note, this is not an issue of health; I am as healthy as I've ever been (mmm, bacon.)

If anyone is interested in contributing here and "Defining the Cloud," the SDN, the Big Data or any other buzzword, please contact me.  I'd hate to see a good search ranking go to waste ;)


Support St. Jude and the Fight Against Childhood Cancer

For some time I've been looking for a charity that Define the Cloud could support.  I have no desire to try and monetize my traffic through ads and clutter the content.  I also get plenty of benefit from running the site and wouldn't ask for help with that.  That being said, I do generate decent traffic and would like to use it to give back.  I definitely don't do enough personally to give back and this is a start.  I've finally settled on a charity I can stand behind.  Being a lover of the underdog and a hater of cancer, I couldn't pick a charity I'd rather support than St. Jude Children's Research Hospital (www.stjude.org).  With that, the only banner you'll ever see on Define The Cloud is that of St. Jude.  If you like my content and prefer it free and ad free, you've got it.  If instead you'd like to support the site, do so by supporting St. Jude.  If you prefer donating time to donating money you can find plenty of ways to do so here: http://www.stjude.org/volunteers.

In addition to your donations, Define the Cloud will match dollar for dollar all donations made by 10/31/2012, up to $1,000.00 USD (we're on a shoestring budget here.)  If you donate, please leave a comment here with the amount so that I can track it.  I'm trusting the honor system on this one.

 

Meet Grace

Disclaimer: My support of St. Jude Children’s Research Hospital in no way implies their support of me or my content.  Let’s not be silly.


Much Ado About Something: Brocade’s Tech Day

Yesterday I had the privilege of attending Brocade's Tech Day for analysts and press.  Brocade announced the new VDX 8770, discussed some VMware announcements, and talked strategy, vision and direction.  I'm going to dig into a few of the topics that interested me; this is in no way a complete recap.

First, in regard to the event itself: my kudos to the staff who put it together.  It was excellent from both a pre-event coordination and event staff perspective.  The Brocade corporate campus is beautiful and the EBC building was extremely well suited to such an event.  The sessions went smoothly, the food was excellent and overall it was a great experience.  I also want to thank Lisa Caywood (@thereallisac) for pointing out that my tweets during the event were more inflammatory than productive and outside the lines of 'guest etiquette.'  She's definitely correct, and hopefully I can clear up some of my skepticism here in a format left open for debate, and avoid the same mistake in the future.  That being said, I had thought I was quite clear going in on who I was and how I write.  To clear up any future confusion from anyone: if you're not interested in my unfiltered, typically cynical, honest opinion, don't invite me; I won't take offense.  Even if you're a vendor with products I like, I've probably got a box full of cynicism for your other product lines.

During the opening sessions I observed several things that struck me negatively:

  • A theme (intended or not) that Brocade was being led into new technologies by its customers.  Don't get me wrong, listening to your customers and keeping your product in line with their needs is key to success.  That being said, if your customers are leading you into new technology you've probably missed the boat.  In most cases they're being led there by someone else and dragging you along for the ride, and that's not sustainable.  IT vendors shouldn't need to be dragged kicking and screaming into new technologies by customers.  This doesn't mean chase every shiny object (squirrel!), but major trends should be investigated and invested in before you're hearing enough customer buzz to warrant it.  Remember, business isn't just about maintaining current customers; it's about growing by adding new ones.  Especially for public companies, stagnant is as good as dead.
  • The term "Ethernet fabric," which is only used by Brocade; everyone else just calls it fabric.  This ties in closely with the next bullet.
  • A continued need to discuss commitment to pure Fibre Channel (FC) storage.  I don't deny that FC will be around for quite some time and may even see some growth as customers with it embedded expand.  That being said, customers with no FC investment should be avoiding it like the plague, and as vendors and consultants we should be pushing more intelligent options to those customers.  You can pick apart technical details about FC vs. anything all day long, enjoy that on your own; the fact is twofold: running two separate networks is expensive and complex, and the differences in reliability, performance, etc. are fading if not gone.  Additionally, applications are being written in more intelligent ways that don't require the highly available, low-latency, siloed architecture of yesteryear.  Rather than clinging to FC like a sinking ship, vendors should be protecting customer investment while building and positioning the next evolution.  Quote of the day during a conversation in the hall: "Fibre Channel is just a slightly slower melting ice cube than we expected."
  • An insistence that Ethernet fabric is a required building block of SDN.  I'd argue that while it can be a component, it is far from required, and as SDN progresses it will become completely irrelevant.  More on this to come.
  • A stance, common throughout the day, that the network will not be commoditized.  I'd say that's either A) naïve or B) posturing to protect core revenue.  I expect we'll see network commoditization occur en masse over the next five years.  I'm specifically talking about the data center and a move away from specialized custom-built ASICs, not the core routers and not the campus.  Custom silicon is expensive and time-consuming to develop, but provides performance/latency benefits and arguably some security benefits.  As processors and off-the-shelf chips continue to improve exponentially, this differentiator becomes less and less important.  What becomes more important is rapid adaptation to new needs.  SDN as a whole won't rip and replace networking in the next five years, but its growth and the concepts around it will drive commoditization.  It happened with servers, then storage, while people made the same arguments.  Cheaper, faster to produce and 'good enough' consistently wins out.

On the positive side Brocade has some vision that’s quite interesting as well as some areas where they are leading by filling gaps in industry offerings.

  • Brocade is embracing the concept of SDN and understands a concept I tweeted about recently: 'Revolutions don't sell.'  Customers want evolutionary steps to new technology.  Few if any customers will rip and replace current infrastructure to dive head first into SDN.  SDN is a complete departure from the way we network today, and will therefore require evolutionary steps to get there.  This is shown in Brocade's support of 'hybrid' OpenFlow implementations on some devices, meaning OpenFlow can run segregated alongside traditional network deployments.  This allows for test/dev or roll-out of new services without an impact on production traffic.  It's a great approach where other vendors are offering either/or options (a minimal controller sketch of the hybrid idea follows this list).
  • There was discussion of Brocade's VXLAN gateway, which was announced at VMworld.  To my knowledge this is the first offering in this much-needed space.  Without a gateway, VXLAN is limited to virtual-only environments, including segregation from services provided by physical devices.  The Brocade VXLAN gateway will allow the virtual and physical networks to be bridged (http://newsroom.brocade.com/press-releases/brocade-adx-series-to-unveil-vxlan-gateway-and-app-nasdaq-brcd-0923542); see the packet-level sketch after this list.  To dig deeper on why this is needed, check out Ivan's article: http://blog.ioshints.info/2011/10/vxlan-termination-on-physical-devices.html.
  • The new Brocade VDX 8770 is one bad-ass mamma jamma.  With industry-leading latency and MAC table capacity, along with TRILL-based fabric functionality, it's built for large, scalable, high-density fabrics.  I originally tweeted "The #BRCD #VDX8770 is a bigger badder chassis in a world with less need for big bad chassis."  After reading Ivan's post on it I stand corrected (this happens frequently.)  For some great perspective and a look at the specs, take a read: http://blog.ioshints.info/2012/09/building-large-l3-fabrics-with-brocade.html.
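To make the 'hybrid' approach concrete, here's a minimal controller sketch using the open-source Ryu framework (my own illustration, nothing Brocade-specific): one high-priority rule steers a single experimental flow, while a catch-all rule hands everything else back to the switch's traditional forwarding pipeline via OFPP_NORMAL.  The output port for the test flow is an assumption about the lab topology.

```python
# A toy hybrid-OpenFlow app for the Ryu controller framework.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class HybridDemo(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_ready(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        def add_flow(priority, match, actions):
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=priority,
                                          match=match, instructions=inst))

        # Steer one experimental flow (TCP/8080) out a test port.
        # Port 2 is an assumption about the lab topology.
        add_flow(100,
                 parser.OFPMatch(eth_type=0x0800, ip_proto=6, tcp_dst=8080),
                 [parser.OFPActionOutput(2)])

        # Everything else: fall back to the switch's normal L2/L3 pipeline.
        add_flow(0, parser.OFPMatch(),
                 [parser.OFPActionOutput(ofp.OFPP_NORMAL)])
```

It's only a toy, but it captures the model: experiment on one flow while the switch keeps doing its day job for production traffic.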

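As for why a VXLAN gateway matters: a VXLAN frame is just the original Ethernet frame wrapped in UDP with a 24-bit VNI, so a physical device that only speaks VLANs never sees the inner frame until something terminates the tunnel and bridges it onto the physical segment.  A quick scapy sketch of the encapsulation (MACs, IPs and the VNI are arbitrary example values):

```python
# A sketch of VXLAN encapsulation using scapy.
from scapy.layers.l2 import Ether
from scapy.layers.inet import IP, UDP
from scapy.layers.vxlan import VXLAN

# The tenant's original (inner) frame, as a VM would send it.
inner = Ether(src="00:00:00:aa:bb:cc", dst="00:00:00:dd:ee:ff") / \
        IP(src="10.0.0.1", dst="10.0.0.2")

# Outer headers added by the sending VTEP; 4789 is the IANA VXLAN port.
frame = (Ether() /
         IP(src="192.168.1.10", dst="192.168.1.20") /
         UDP(dport=4789) /
         VXLAN(vni=5001) /
         inner)

frame.show()  # the whole inner Ethernet frame rides inside UDP
```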
On the financial side Brocade has been looking good, with the stock climbing over $6.00 a share.  There are plenty of conversations suggesting some of this may be due to upcoming shifts at the CEO level.  They've reported two great quarters and are applying new focus toward the federal government and other areas where focus has been lacking in the recent past.  I didn't dig further into this discussion.

During lunch I was introduced to one of the most interesting Brocade offerings I'd never heard of: "Brocade Network Subscription" (http://www.brocade.com/company/how-to-buy/capital-solutions/index.page).  Basically, you can lease your on-prem network from Brocade Capital.  This is a great idea for customers looking to shift CapEx to OpEx, which can be extremely useful.  I also received a great explanation of the value of a fabric underneath an SDN network from Jason Nolet (VP of Data Center Networking Group).  Jason's position (summarized) is that implementing SDN adds a network management layer rather than removing one.  With that in mind, the more complexity we remove from the physical network the better off we are.  What we'll want for our SDN networks is fast, plug-and-play functionality with maximum usable links and minimal management.  Brocade VCS fabric fits this nicely.  While I agree with that completely, I'd also say it's not the only way to skin that particular cat.  More to come on that.
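On the Network Subscription point, here's a back-of-the-napkin sketch of the CapEx-to-OpEx trade.  All numbers are invented for illustration; real lease terms, rates and residuals will differ.

```python
# Toy CapEx vs. OpEx comparison; every figure here is hypothetical.
upfront_purchase = 500_000     # CapEx: hits the capital budget in year one
monthly_subscription = 11_000  # OpEx: hypothetical lease payment
term_months = 48

lease_total = monthly_subscription * term_months
print(f"Purchase: ${upfront_purchase:,} up front")
print(f"Subscription: ${lease_total:,} spread over {term_months} months")
# The subscription may cost more in nominal dollars, but nothing hits the
# capital budget and the spend is paced over the life of the gear.
```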

For the last few years I've looked at Brocade as a company lacking innovation and direction.  They clung furiously to FC while the market began shifting to Ethernet, ignored cloud for quite a while, etc.  Meanwhile they burned down deals to purchase them and ended up where they've been.  The overall messaging, while nothing new, did have undertones of change and new direction.  That's refreshing to hear.  Brocade is embracing virtualization and cloud architectures without tying their cart to a single hypervisor horse.  They are positioning well for SDN and the network market shifts.  Most impressively, they are identifying gaps in the spaces they operate in and executing on them from both a business and a technology perspective, Brocade Network Subscription and the VXLAN gateway functionality being respective examples.

Things are looking up and there is definitely something good happening at Brocade.  That being said, they aren't out of the woods yet.  As a company, purchase is far-fetched: the vendors that would buy them already have networking plays and would lose half of Brocade's value by burning OEM relationships with the purchase.  The only real option from a sale perspective is investors looking to carve them up and sell off the pieces individually, and a scenario like that wouldn't bode well for customers.  Brocade has some work to do, but they've got a solid set of products and great direction.  We'll see how it pans out.  Execution is paramount for them at this point.

Final note: this blog was intended to stop there, but this morning I received an angry, accusatory email from Brocade's head of corporate communications, who was unhappy with my tweets.  I thought about posting the email in full but have decided against it for the sake of professionalism.  Overall his email was an attack based on my tweets.  As stated, my tweets were not professional, but this type of email from someone in charge of corporate communications is well over the top as a response.  I forwarded the email to several analyst and blogger colleagues, a handful of whom have had similar issues with this individual.  One common theme in social media is that lashing out at bad press never does any good; a senior director in this position should know as much, but instead continues to slander and attack.  His team and colleagues seem to understand social media use, as they've engaged in healthy debate with me regarding my tweets; it's a shame they are not led from the front.


Digging Into the Software Defined Data Center

The software defined data center (SDDC) is a relatively new buzzword embraced by the likes of EMC and VMware.  For an introduction to the concept see my article over at Network Computing (http://www.networkcomputing.com/data-center/the-software-defined-data-center-dissect/240006848).  This post is intended to take it a step deeper, as I seem to be stuck at 30,000 feet for the next five hours with no internet access and no other decent ideas.  For the purpose of brevity (read: laziness) I'll use the acronym SDDC whether or not it's being used elsewhere.

First let's look at what you get out of an SDDC:

Legacy Process:

In a traditional legacy data center the workflow for implementing a new service would look something like this:

  1. Approval of the service and budget
  2. Procurement of hardware
  3. Delivery of hardware
  4. Rack and stack of new hardware
  5. Configuration of hardware
  6. Installation of software
  7. Configuration of software
  8. Testing
  9. Production deployment

This process would vary greatly in overall time, but 30-90 days is probably a good ballpark (I know, I know, some of you are wishing it happened that fast.)

Not only is this process complex and slow, it carries inherent risk.  Your users are accustomed to on-demand IT services in their personal lives.  They know where to go to get them and how to work with them.  If you tell a business unit it will take 90 days to deploy an approved service, they may source it from outside of IT.  This type of shadow IT poses issues for security, compliance, backup/recovery, etc.

SDDC Process:

As described in the link above, an SDDC provides a complete decoupling of the hardware from the services deployed on it.  This provides a more fluid system for IT service change: growing, shrinking, adding and deleting services.  Conceptually, the overall infrastructure would maintain an agreed-upon level of spare capacity and would be added to as thresholds were crossed.  This would provide the ability to add services and grow existing services on the fly in all but the most extreme cases.  Additionally, the management and deployment of new services would be software driven through intuitive interfaces, rather than hardware driven and based on disparate CLIs.
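That threshold-driven capacity management might look like the minimal sketch below; the Pool class and the 20% figure are illustrative assumptions, not any particular product's API.

```python
# A toy model of spare-capacity monitoring in a resource pool.
from dataclasses import dataclass


@dataclass
class Pool:
    name: str
    total: float  # e.g. GB of RAM, TB of disk, Gbps of bandwidth
    used: float

    @property
    def spare(self) -> float:
        return 1 - self.used / self.total


SPARE_THRESHOLD = 0.20  # the agreed-upon level of spare capacity


def check_pools(pools: list[Pool]) -> None:
    for p in pools:
        if p.spare < SPARE_THRESHOLD:
            # In a real SDDC this would kick off a procurement workflow.
            print(f"{p.name}: only {p.spare:.0%} spare, order more capacity")


check_pools([Pool("compute", 1024, 900), Pool("storage", 500, 300)])
```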

The process would look something like this:

  1. Approval of the service and budget
  2. Installation of software
  3. Configuration of software
  4. Testing
  5. Production deployment

The removal of four steps is not the only benefit.  The remaining five steps are streamlined into automated processes rather than manual configurations.  Change management and trackback/chargeback are incorporated into the overall software management system, providing a fluid workflow in a centralized location.  These processes are initiated by authorized IT users through self-service portals.  The speed at which business applications can be deployed is greatly increased, providing both flexibility and agility.
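From the requester's side, 'software driven' might mean a single authenticated API call to a self-service portal instead of nine manual steps.  A sketch with a hypothetical portal URL and payload schema; substitute your orchestration tool's actual API.

```python
# A sketch of a self-service provisioning request. The endpoint, token
# and payload fields are hypothetical placeholders.
import requests

payload = {
    "service": "crm-app",
    "tier": "production",
    "vms": 4,
    "storage_gb": 500,
    "network_profile": "standard-web",
    "chargeback_code": "BU-1234",  # ties the request into chargeback/trackback
}

resp = requests.post(
    "https://portal.example.com/api/v1/services",  # hypothetical endpoint
    json=payload,
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
resp.raise_for_status()
print("Request accepted, tracking id:", resp.json().get("id"))
```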

Isn’t that cloud?

Yes, no and maybe.  Or as we say in the IT world: 'It depends.'  SDDC can be cloud; with on-demand self-service, flexible resource pooling, metered service, etc., it fits the cloud model.  The difference is really in where and how it's used.  A public cloud IaaS model, or any given PaaS/SaaS model, does not lend itself to legacy enterprise applications.  For instance, you're not migrating your Microsoft Exchange environment onto Amazon's cloud.  Those legacy applications and systems still need a home, and those existing hardware systems still have value.  SDDC offers an evolutionary approach to enterprise IT that can support both legacy applications and new applications written to take advantage of cloud systems.  This provides a migration approach as well as investment protection for traditional IT infrastructure.

How it works:

The term 'cloud operating system' is thrown around frequently in the same conversation as SDDC.  The idea is that compute, network and storage are raw resources consumed by the applications and services we run to drive our businesses.  Rather than look at these resources individually, and manage them as such, we plug them into a management infrastructure that understands them and can allocate them as services require.  Forget the hardware underneath and imagine a dashboard of your infrastructure, something like the following graphic.

[Figure: dashboard view of pooled compute, network and storage resources consumed by IT services]

The hardware resources become raw resources to be consumed by IT services.  For legacy applications this can be very traditional virtualization or even physical server deployments.  New applications and services may be deployed in a PaaS model on the same infrastructure, allowing for greater application scale and redundancy and even less tie to the underlying hardware.

Lifting the kimono:

Taking a peek underneath the top level reveals a series of technologies both new and old, along with some requirements that may or may not be met by current technology offerings.  We'll take a look through the compute, storage and network requirements of SDDC one at a time, starting with compute and working our way up.

Compute is the layer that requires the least change.  Years ago we moved to commodity x86 hardware, which will be the base of these systems.  The compute platform itself will be differentiated by CPU and memory density, platform flexibility and cost.  Differentiators traditionally built into the hardware, such as availability and serviceability features, will lose value.  Features that will continue to add value will be related to infrastructure reduction and enablement of upper-level management and virtualization systems.  Hardware that provides flexibility and programmability will be king here, and at other layers as we'll discuss.

Other considerations at the compute layer tie closely into storage.  As compute power has grown by leaps and bounds, our networks and storage systems have become the bottleneck.  Our systems can process data faster than we can feed it to them.  This causes issues for power, cooling efficiency and overall optimization.  Dialing down performance for power savings is not the right answer.  Instead we want to fuel our processors with data rather than starve them, which means having fast local data in the form of SSD, flash and cache.

Storage will require significant change, but changes that are already taking place or foreshadowed in roadmaps and startups.  The traditional storage array will become more and more niche as it has limited capacities of both performance and space.  In its place we'll see new options including, but not limited to, a migration back to local disk and scale-out options.  Much of the migration to centralized storage arrays was fueled by VMware's vMotion, DRS, FT, etc.  These advanced features required multiple servers to have access to the same disk, hence the need for shared storage.  VMware has recently announced a combination of Storage vMotion and traditional vMotion that allows live migration without shared storage.  This is available in other hypervisor platforms and makes local storage a much more viable option in many more environments.

Scale-out systems on the storage side are nothing new.  LeftHand and EqualLogic pioneered much of this market before being bought by HP and Dell respectively.  The market continues to grow with products like Isilon (acquired by EMC) making a big splash in the enterprise as well as plays in the Big Data market.  NetApp's cluster mode is now in full effect with ONTAP 8.1, allowing their systems to scale out.  In the SMB market, new players with fantastic offerings like Scale Computing are making headway and bringing innovation to the market.  Scale-out provides a more linear growth path, as both I/O and capacity increase with each additional node.  This is contrary to traditional systems, which are always bottlenecked by the storage controller(s); the toy comparison below shows the difference in growth curves.
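Here's that toy comparison.  Per-node and controller numbers are invented for illustration; the point is the shape of the curves, not the values.

```python
# Scale-out: each node adds both performance and capacity.
# Scale-up: shelves add capacity, but the controller caps performance.
NODE_IOPS, NODE_TB = 20_000, 24           # hypothetical per-node figures
CONTROLLER_IOPS, SHELF_TB = 80_000, 24    # hypothetical scale-up figures

for n in (1, 4, 8, 16):
    out_iops, out_tb = n * NODE_IOPS, n * NODE_TB
    up_iops, up_tb = CONTROLLER_IOPS, n * SHELF_TB  # controller is the ceiling
    print(f"{n:>2} units: scale-out {out_iops:,} IOPS / {out_tb} TB"
          f" | scale-up {up_iops:,} IOPS / {up_tb} TB")
```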

We will also see moves to centralized control, backup and tiering of distributed storage, such as storage blades and server cache.  Having fast data at the server level is a necessity but solves only part of the problem.  That data must also be made fault tolerant, as well as available to systems outside the server or blade enclosure.  EMC's VFCache is one technology poised to help with this by adding the server as a storage tier for software tiering.  Software such as this places the hottest data directly next to the processor, with tier options all the way back to SAS, SATA and even tape.

By now you should be seeing the trend of software-based features and control.  The last stage is the network, which will require the most change.  Networking has held strong to proprietary hardware and high margins for years while the rest of the market has moved to commodity.  Companies like Arista look to challenge the status quo by providing software feature sets, or open programmability, layered onto fast commodity hardware.  Additionally, Software Defined Networking (http://www.definethecloud.net/sdn-centralized-network-command-and-control) has been validated by both VMware's acquisition of Nicira and Cisco's spin-in of Insieme, which by most accounts will expand upon the Cisco ONE concept with a Cisco-flavored SDN offering.  In any event, the race is on to build networks based on software flows that are centrally managed, rather than the port-to-port configuration nightmare of today's data centers.

This move is not only for ease of administration; it's also required to push our systems to the levels required by cloud and SDDC.  These multi-tenant systems, running disparate applications at various service tiers, require tighter quality-of-service controls and bandwidth guarantees, as well as more intelligent routing.  Today's physically configured networks can't provide these controls.  Additionally, applications will benefit from network visibility, allowing them to request specific flow characteristics from the network based on application or user requirements.  Multiple service levels can be configured on the same physical network, allowing traffic to take appropriate paths based on type rather than physical topology.  These network changes are required to truly enable SDDC and cloud architectures.

Further up the stack from the Layer 2 and Layer 3 transport networks comes a series of other network services that will be layered in via software.  Features such as load balancing, access control and firewall services will be required for the services running on these shared infrastructures.  These network services will need to be deployed with new applications and tiered to the specific requirements of each.  As with the L2/L3 services, manual configuration will not suffice and a 'big picture' view will be required to ensure that network services match application requirements.  These services can be layered in from both physical and virtual appliances, but will require configurability via the centralized software platform.

Summary:

By combining current technology trends and emerging technologies, and layering in future concepts, the software defined data center will emerge in evolutionary fashion.  Today's highly virtualized data centers will layer on technologies such as SDN while incorporating new storage models, bringing data centers to the next level.  Conceptually, picture a mainframe pooling underlying resources across a shared application environment.  Now remove the frame.


Thoughts From a Tech Leadership Summit

This week I attended a tech leadership summit in Vail, Colorado for the second time.  The event is always a fantastic series of discussions and brings together some of the top minds in the technology industry.  Here are some thoughts on the trends and thinking that were common at the event.

Virtualization and VDI:

There was a lot less talk of VDI and virtualization than in 2011.  These conversations were replaced with more discussion of cloud and app delivery.  Overall the consensus seemed to be that getting the application to the right native environment on a given device was a far better approach than getting the desktop there.

Hypervisors were barely mentioned, except in a recurring theme that the hypervisor itself has become commodity.  This means that management and the upper-layer feature set are the differentiators.  Parallel to this thought was that VMware no longer has the best hypervisor, yet their management system is still far superior to the competition (KVM was touted as the best hypervisor several times.)

The last piece of the virtualization discussion was around VMware’s acquisition of Nicira.  Some bullet points on that:

  • VMware paid too much for Nicira, but that was unavoidable for a startup-to-be in the valley, and it's a great acquisition overall.
  • It's no surprise VMware moved into networking; everyone is moving that way.
  • While this is direct competition with Cisco, it is currently in a small niche of service provider business.  Nicira's product requires significant custom integration to deploy, and it will take time for VMware to productize it in a fashion usable for the enterprise.  Best guess: two years to real product.
  • Overall the Cisco/VMware partnership is very lucrative on both sides and should not be affected by this in the near term.
  • A seldom discussed portion of this involves the development expertise that comes with the acquisition.  With the hypervisor being commodity, and differentiation moving into the layers above it, we'll see more and more variety in hypervisors.  This means multi-hypervisor support will be a key component of the upper-level management products where virtualization vendors will compete.  Nicira's team has proven capabilities in this space and can accelerate VMware's multi-hypervisor strategy.

Storage:

There was a lot of talk about both the vision and execution of EMC over the past year or more.  I personally used 'execution machine' more than once to describe them (coming from someone who typically doesn't drink the EMC Kool-Aid.)  Some key points that resonated over the past few days:

  • EMC's execution on the VNX/VNXe product lines is astounding.  EMC launched a product and went on direct attack into a portion of NetApp's business that nobody else could really touch.  Through both sales and marketing excellence they've taken an increasingly large chunk out of this portion of the market.  This shores up a breach in the product line that NetApp had been using to gain share.
  • EMC's Isilon acquisition was not only a fantastic choice, it was also integrated quickly and well.  Isilon is a fantastic product and has big data potential, which is definitely a market that will generate big revenue in coming years.
  • EMC’s cloud vision is sound and they are executing well on it.  Additionally they were ahead of their pack of hardware vendor peers in this regard. EMC is embracing a software defined future.

I also participated in several discussions around flash and flash storage.  Some highlights:

  • PCIe-based flash storage is definitely increasing in sales and enterprise consumption.  This market is expected to continue to grow as we strive to move the data closer to the processor.  There are two methods for this: storage in the server, or servers in the storage.  PCIe flash plays on the server side and EMC Isilon will eventually play on the storage side.  Also look for an announcement in the SMB storage space around this during VMworld.
  • One issue in this space is that expensive, fast, server-based flash becomes trapped capacity if a server can't drive enough I/O to it.  Additionally there are data-loss concerns with this data trapped in the server.
  • Both of these issues are looking to be solved by EMC and IBM, who intend to add server-based flash into the tiering of shared storage.
  • Most traditional storage vendors' flash options are 'bolt-ons' to traditional array architectures.  This can leave the expensive flash I/O-starved, limiting its performance benefit.  Several all-flash startups intend to use this as an inflection point, with flash-based systems designed from the ground up for the performance the disks offer.
  • Flash is still not an answer to every problem, and never will be.

The last point that struck me was a potential move away from shared storage as a whole.  Microsoft would rather have you use local storage; clusters and big data apps like Hadoop thrive on local storage; and one last big shared-storage draw is going away: vMotion.  Once shared storage is no longer needed for live virtual machine migration, there will be far less draw toward expensive systems.

Cloud:

The major cloud discussion I was a part of (mainly as an observer) involved OpenStack.  Overall OpenStack has a ton of buzz and a plethora of developers.  What it's lacking is customers, leadership and someone driving it who can lead a revolution.  Additionally, it's suffering from politics and bureaucracy.  It was described as impossible to support by one individual who would definitely know one way or another.  My thinking is that if you have CloudStack sitting there with real customers, an easily deployed system, support and leadership, why waste cycles continuing down the OpenStack path?  The best answer I heard for that: ego.  Everyone wants to build the next Amazon, and CloudStack is too baked to make as much of a mark.

Overall it’s an interesting topic but my thought is: with limited developers the industry should be getting behind the best horse and working together.

Big Data:

Big Data was obviously another fun topic.  The quote of the week was 'There are ten people, not companies, that understand Big Data.  6 of them are at Cloudera and the other 4 are locked in Google writing their own checks.'  Basically, Big Data knowledge is rare, and hiring consultants is not typically a viable option because you need people with three things: knowledge of big data processing, knowledge of your data, and knowledge of your business.  These data scientists aren't easy to come by.  Additionally, contrary to popular hype, Hadoop is not the end-all be-all of big data; it's one tool in a large tool chest.  Especially when talking about real time you'll need to look elsewhere.  The consensus was that we are with big data where we were with cloud 2-3 years ago.  That being said, CIOs may still need to show big data initiatives (read: spend), so you should see $$ thrown at well-packaged big data solutions geared toward plug-and-play in the enterprise.

All in all it was an excellent event and I was humbled as usual to participate in great conversations with so many smart people who are out there driving the future of technology.  What I've written here is a summary, from my perspective, of the one summit portion I had time to participate in.  There is always a good chance I misquoted or misunderstood something, so feel free to call me out.  As always I'd love your feedback, contradictions or hate mail comments.


Forget Multiple Hypervisors

The concept of managing multiple hypervisors in the data center isn’t new–companies have been doing so or thinking about doing so for some time. Changes in licensing schemes and other events bring this issue to the forefront as customers look to avoid new costs. VMware recently acquired DynamicOps, a cloud automation/orchestration company with support for multiple hypervisors, as well as for Amazon Web Services. A hypervisor vendor investing in multihypervisor support brings the topic back to the forefront.  To see the full article visit: http://www.networkcomputing.com/virtualization/240003355


Private Cloud: An IT Staffer’s Guide To Success

Recently I wrote The Biggest Threat to Your Private-Cloud Deployment: Your IT Staff as a call to management to understand the importance of their IT staff and the changes that will be required to move to a cloud model. That post received some strong criticism from readers who took it as an attack on IT, which was not its intent. In this post I’ll cover the flipside of the coin, the IT staff perspective. To see the full article visit: http://www.networkcomputing.com/private-cloud/240003623.


Chargeback/Trackback: Yes You Need It

You can’t fix, manage or justify what you don’t understand. IT chargeback/trackback not only helps end users understand their service utilization, but it also helps IT justify and prioritize spend. Measured service is a requirement of NIST’s cloud definition… To read the full article visit: http://www.networkcomputing.com/private-cloud/240003313


The Biggest Threat to Your Private-Cloud Deployment: Your IT Staff

People are the No. 1 reason why private clouds fail. The traditional IT staff is a tactically driven, deeply technical group of hardware and software problem solvers who aren’t familiar with strategic IT thinking and don’t have time for it. They aren’t accustomed to aligning IT processes with business drivers. They’re more comfortable with explaining why something can’t be done than finding a way to make it happen. And they will be the downfall of your private cloud deployment.  To see the full article visit: http://www.networkcomputing.com/private-cloud/240002902.
