Best Practices of Women in Tech

The following is a guest post by Sara (Ms. Digital Diva)

Today’s tech industry has a new face, and that face is female. Though traditionally male dominated, more and more women are making their mark as leaders in the tech field. Contributing not only to the continuous advancements we’re seeing in technology, these women are making a point to build up one another and the young women who look up to them. Progress has been made, but there’s still work to be done. Here are some of the ways these women are doing it.

Hit the Ground Running
Just as important as the women already working in the tech field are the young girls who aspire to be like them. Supporting these young women and girls as they follow their passion, and providing them with the resources to reach their goals, is key to the future of tech. One example of these efforts comes from Reshma Saujani, founder of Girls Who Code, who aims to close the gender gap by giving girls an outlet to explore their abilities and pursue interests in computer science. Similarly, through Women Who Code, Alaina Percival empowers women by offering services that help them build successful careers in technology. Breaking out of stereotypical boxes and using these sorts of programs not only builds confidence, but helps those just starting out find their niche. This can have an important impact on professional development when it comes time to specialize.

Pursue What’s Most Beneficial to You
There’s no stopping a woman with goals. Once you have that goal set, it’s up to you to do everything it takes to get it done. In this industry, technology is constantly advancing. To stay current, you must maintain a hunger for learning. Staying up to date with trends, and with the skills most in demand by employers, will keep you ahead of the game and closer to reaching your goals. Fortunately, this quality seems to come naturally to women. According to HackerRank’s Women in Tech Report, women are incredibly practical in this sense, and tend to pursue proficiency in whichever languages are most valued at the moment.

Succeed Together
It’s tough to admit, but getting more women in tech is still a work in progress, and in order to keep progressing we must work together. Rarely does anyone succeed in life without mentorship, guidance or at least support from others. There’s nothing wrong with asking for help. Taking the time to network with women who have earned a position you hope to achieve someday is essential to overcoming workplace challenges and answering your questions. Even if you can’t get in direct contact with a role model of yours, keeping up with what they’re writing, saying and working on can help you expand your own interests and continue learning. The process of working towards your ultimate potential is a long one, but embracing advice can help you get there efficiently.

Lessons Learned
Like anything in life, developing your professional career comes with lots of trial and error. You’ll succeed and you’ll fail, you’ll try things you like and try things you hate. It’s all a part of the process. When you’re the only woman in an office full of men, it can be difficult to speak up or put yourself out there for fear of making a mistake. But if I’ve learned anything in my career, it’s that staying silent signifies acceptance, and not involving yourself in situations that can help you grow only hurts you. Getting involved in groups, committees, projects, anything that interests you, is the biggest piece of advice I can give. Not only will you expand your knowledge and experience, but it’s a great way to get to know others in the tech community. Building relationships is a key part of any profession, but especially in environments where you want to build confidence.

A final thought to take with you: always be advancing. So much of the technology industry is self-development and striving to discover the next best thing. Curiosity is what will keep you afloat. Using programs, and keeping up with verticals that interest you, can help you develop strong points of view on emerging technologies. This is crucial as you grow in your career, as people generally listen to those who have something to say. What you don’t want to do is get swept up in the crowd and lose your voice. If tech is what you’re interested in, then it’s where you belong, whether you’ve been studying it your whole life or are just getting started. Never underestimate yourself, and don’t confuse experience with ability. There are so many incredible women doing incredible things in the tech industry. All they need to be even greater is you.


Intent-Driven Data Center: A Brief Video Overview

Here’s a brief video overview of the Intent-Driven data center. More blogs to come.


Intent Driven Architecture Part II: Policy Analytics

*** Disclaimer: Yes I work for a company that sells products in this category. You are welcome to assume that biases me and disregard this article completely. ***

In my first post on Intent-Driven Architectures, I attempted to explain the basics of an Intent-Based, or Intent-Driven, approach. I also explained Intent-Driven architecture from a network perspective. The next piece of building a fully Intent-Driven architecture is analytics. This post will focus there.

Let’s assume you intend to deploy, or have deployed, a network, server, storage, etc. system that can consume intent and automate provisioning based on it. How do you identify the policy, or intent, of your existing workloads? This is a tough question, and a common place for policy automation, micro-segmentation, and other projects to stall or fail. It’s less challenging for that shiny new app you’re about to deploy (because you’re defining the requirements, i.e. the policy/intent); it’s all of those existing apps that create the nightmare. How do you automate the infrastructure based on an application’s intent if you don’t know the application’s intent?

This is one of the places where analytics becomes a key piece of an intent-driven architecture. You not only need a tool to discover the existing policy, but one that can keep that up-to-date as things change. Was policy implemented correctly on day 0? Is policy still being adhered to on day 5, 50, 500? This is where real-time, or near real-time analytics will come into play for intent-driven architectures.

I’m going to go back to the network and security as my primary example; I’m a one-trick pony that way. The same concepts are applicable to compute, storage and other parts of the architecture. Using the network example, the diagram below shows a very generalized version of typical policy enforcement in traditional architectures.

Network Policy


Using the example above, we see that most policy is pushed to the distribution layer of the network and enforced in the routing, firewalls, load-balancers, etc. The other thing to note is that most policy consists of very broad deny rules. This is what’s known as a blacklist model: anything is allowed unless explicitly denied. This loose level of policy creates large security gaps, and is very rigid and fragile. Additionally, because the intent or policy is described so loosely, it’s nearly impossible to use the existing infrastructure to discover application intent.
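To make the blacklist model concrete, here is a toy Python evaluator: traffic is permitted unless it matches an explicit deny rule. The rule format (subnet-prefix pairs) is invented purely for illustration, not taken from any real firewall syntax.

```python
# Toy blacklist evaluator: anything is allowed unless explicitly denied.
# Rules are (src_subnet_prefix, dst_subnet_prefix) pairs -- an invented format.
DENY_RULES = [
    ("10.1.", "10.2."),  # e.g. deny the dev subnet -> prod subnet
]

def is_allowed(src_ip: str, dst_ip: str) -> bool:
    for src_prefix, dst_prefix in DENY_RULES:
        if src_ip.startswith(src_prefix) and dst_ip.startswith(dst_prefix):
            return False
    # Default-allow: this is exactly the security gap described above.
    return True

print(is_allowed("10.1.0.5", "10.2.0.9"))  # False: explicitly denied
print(is_allowed("10.3.0.5", "10.2.0.9"))  # True: nothing denied it
```

Note that the rules say nothing about *why* the dev subnet is denied, which is why you cannot recover application intent from them.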

In order to gather intent and automate the policy requirements based on that intent, we need to look at the actual traffic, not the existing rules. We need a granular look at how the applications communicate; this shows us what needs to be allowed, and can be used to infer what should be blocked. It can also reveal policies that enforce user-experience, app-priority, traffic-load requirements, etc. Generally this information can be gathered from one of two locations, the operating system/app-stack or the network; even better is using both. With this data we can see much more detail. The figure below shows the move from a broad subnet allow rule to granular knowledge of the TCP/UDP ports that need to be open between specific points.

Old policy vs new policy
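One way to picture that move from broad subnet rules to flow-derived granular rules is a sketch like the following. The flow records and host names are invented examples; a real analytics engine would collect these from OS agents or network telemetry.

```python
# Derive granular allow rules from observed traffic, rather than from the
# existing broad subnet rules. Flow records here are invented examples.
observed_flows = [
    {"src": "web-01", "dst": "app-01", "proto": "TCP", "port": 8443},
    {"src": "app-01", "dst": "db-01",  "proto": "TCP", "port": 5432},
    {"src": "web-01", "dst": "app-01", "proto": "TCP", "port": 8443},  # repeat observation
]

def derive_allow_rules(flows):
    # Collapse repeated observations into a unique, granular whitelist.
    return sorted({(f["src"], f["dst"], f["proto"], f["port"]) for f in flows})

for rule in derive_allow_rules(observed_flows):
    print("ALLOW", *rule)
```

The output is the infrastructure-level rule set; everything not derived this way becomes a candidate for blocking.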

These granular rule-sets are definitely not intent, but they are the infrastructure’s implementation of that intent. This first step of analytics assists with tightening security through micro-segmentation, but it also allows agility within that tightened security. For example, if you waved a magic wand and it implemented perfect micro-segmentation, that micro-segmentation would quickly start to create problems without analytics. Developers open a new port? A software patch changes an app’s connection ports? Downtime and slow remediation would be unavoidable. With real-time or near-real-time analytics, the change can be detected immediately, and possibly remediated with a click.
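That detect-and-remediate loop can be sketched as a comparison between today’s observed flows and the previously derived whitelist. As before, the flow shapes and rule format are assumptions made up for illustration:

```python
# Detect policy drift: flag observed flows that the current micro-segmentation
# whitelist does not cover (e.g. a patched app moved to a new port).
allowed = {("app-01", "db-01", "TCP", 5432)}

def find_drift(flows, allowed_rules):
    seen = {(f["src"], f["dst"], f["proto"], f["port"]) for f in flows}
    return seen - allowed_rules  # flows with no matching allow rule

todays_flows = [
    {"src": "app-01", "dst": "db-01", "proto": "TCP", "port": 5432},
    {"src": "app-01", "dst": "db-01", "proto": "TCP", "port": 5433},  # new port after a patch
]

for flow in sorted(find_drift(todays_flows, allowed)):
    print("DRIFT:", flow)  # candidate for one-click remediation or an alert
```

Run continuously, this is the day-5/50/500 verification the post describes, as opposed to a one-time day-0 check.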

Analytics plays a much bigger role than just policy/intent discovery. The analytics engine of an Intent-based system should also provide visibility into the policy enforcement. Some examples:

  • Was intent correctly deployed, and enforced on day 0?
  • Is intent still being correctly enforced on day 5, 50, 500?
  • What-if scenarios (if I change or add policy x, what would be affected?)

All of this should be done by looking at the actual communication between apps or devices, not by looking at infrastructure configuration. For example, I can look at a firewall rule and determine that it is properly configured to segment traffic A from traffic B. There is nothing in the firewall config to show me that the rest of the network is properly configured to ensure all traffic passes through that firewall. If traffic is somehow bypassing the firewall, all the rules in the world make no difference.

Analytics engines designed for, or as part of, an intent-based networking system provide two critical things: policy discovery and policy verification. Even with a completely greenfield environment where the policy can be designed fresh, you’ll want analytics to ensure it is deployed correctly and to keep you up to date on changes.

There are three major components of an intent-driven architecture. I’ve discussed intent-based automation in the previous post, and analytics in this post. I’ll discuss the third piece in the near future: assurance, knowing your system can consume the new intent.

*** Disclaimer: See disclaimer above. ***


Intent Driven Architectures: WTF is Intent?

*** Disclaimer: I work for a vendor who has several offerings in the world of intent-based infrastructure. If you choose to assume that makes my opinion biased and irrelevant, that’s your mistake to make, and you can save time by skipping the rest of this post. ***

** Update at the end of the blog (10/20/2017)**

In the ever evolving world of data center and cloud buzzwords, the word ‘intent’ is slowly gaining momentum: Intent-based x, intent-driven y, etc. What is ‘intent’ and how does that apply to networks, storage, servers, or infrastructure as a whole, or better yet to automation? Let’s take a look.

First, let’s peek at status quo automation. Traditional automation systems for technology infrastructure (switches, servers, storage, etc.) utilize low-level commands to configure multiple points at once. For example, the diagram below shows a network management system being used to provision VLAN 20 onto 15 switches from a single point of control.

Basic Automation
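This style of low-level automation amounts to little more than looping a raw configuration command over devices. A minimal Python sketch, where `push_config` is a hypothetical helper standing in for whatever SSH or API transport the management system actually uses:

```python
# Hypothetical low-level automation: push the same raw command to every switch.
# push_config stands in for the management system's transport (SSH, API, etc.).
def push_config(switch: str, command: str) -> str:
    return f"{switch}: applied '{command}'"

def provision_vlan(switches: list[str], vlan_id: int) -> list[str]:
    # The operator still has to know the low-level construct (a VLAN ID);
    # nothing here captures *why* the VLAN exists.
    return [push_config(sw, f"vlan {vlan_id}") for sw in switches]

results = provision_vlan([f"switch-{n:02d}" for n in range(1, 16)], 20)
print(len(results))  # 15 switches provisioned from one point of control
```

The single point of control is real, but the input is still a VLAN number, not a business policy; that gap is what the rest of the post is about.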

The issue here is the requirement for low-level policy rendering: getting down to the VLAN, RAID pool, or firewall rule level to automate the deployment of a higher-level business policy. That higher-level business policy is the ‘intent,’ and it can be defined in terms of security, SLA, compliance, geo-dependency, user-experience, etc. With a traditional automation method, a lot of human interaction is required to translate between an application’s business requirements (its intent) and the infrastructure configuration. Worse, this communication typically occurs between groups that speak very different languages: engineers, developers, lines-of-business. The picture below depicts this.

App Deployment Chain

This ‘telephone game’ of passing app requirements is not only slow, it is also risk-prone, because a lot gets lost in the multiple layers of communication.

Hopefully you now have a slight grasp on the way traditional automation works, basically the overall problem statement. Now let’s take a dive into using intent to alleviate this issue.

I’m going to use the network as my example for the remainder of this post. The same concepts are applicable to any infrastructure, or the whole infrastructure; I just want to simplify the explanation. Starting at the top, a network construct like a VLAN is a low-level representation of some type of business policy. A great example might be compliance regulations: an app processes financial data that is regulated to be segmented from all other data. A VLAN is a Layer 2 segment that, in part, helps support this. The idea of an intent-driven architecture is to automate the infrastructure based on the high-level business policy and skip the middle layers of translation. Ideally you’d define how you implement policy/intent for something like financial data one time. From then on, simply tagging an app as financial data ensures the system provisions that policy. The diagram below shows this process.

Intent Driven Workflow
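The reusable tag-to-policy idea above can be sketched as a simple lookup: define how a business tag renders to infrastructure policy once, then any app carrying that tag gets the same provisioning. The tag names and policy contents below are invented for illustration:

```python
# Intent as a reusable object: define the rendering from a business tag to
# infrastructure policy once, then apply it by tag. All values are invented.
INTENT_CATALOG = {
    "financial-data": {"segment": "pci-vlan", "firewall": "strict", "encryption": True},
    "general":        {"segment": "default",  "firewall": "standard", "encryption": False},
}

def provision(app_name: str, tags: list[str]) -> dict:
    # The infrastructure consumes intent (the tag); it never has to guess it.
    policy = {}
    for tag in tags:
        policy.update(INTENT_CATALOG[tag])
    return {"app": app_name, "policy": policy}

print(provision("ledger-api", ["financial-data"]))
```

The catalog is the one-time definition; tagging `ledger-api` as financial data is the only per-app step, which is the whole point of skipping the middle layers of translation.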

One common misconception is that the network, or infrastructure, must be intelligent enough to interpret intent. This is absolutely false. The infrastructure needs to be able to consume intent, not interpret or define it. Intent is already understood in business logic. The infrastructure should be able to consume that and automate configuration based on that business-logic intent. In the example in the diagram, business logic has already been defined for the given organization’s compliance requirements. Once it has been defined, it is a reusable object, allowing automation of that policy for any app tagged as requiring it. Another note: the example uses a ‘dev,’ referencing custom-built software, but the same methodology can be used with off-the-shelf software.

There are many reasons not to try to build intent-based systems that can automatically detect and interpret intent. One non-trivial reason is the cost of those systems. More important is the ability to actually execute on that vision. Using a network example, it would be fairly simple to build a network that can automatically detect an Oracle application using standard ports and connectivity. What the network alone could not detect is whether that workload is a dev, test, or production environment. Each environment would require different policies, or intent. Another example is differences in policy enforcement: one company may consider a VLAN adequate segmentation for different traffic types, another would require a firewall, and a third might require an ‘air gap.’ These differences cannot be automatically understood by the infrastructure. Intent-based systems should instead consume the existing business logic and automate provisioning based on it, not attempt to reinterpret that business logic themselves.

The other major misconception regarding intent-based systems is that they must be ‘open’ and able to incorporate any underlying hardware and software. This is definitely not a requirement of intent-based systems. There are pros and cons to open portability across hardware and software platforms, and those should always be weighed when purchasing a system, intent-based or otherwise. One pro of an open system supporting heterogeneity might be the avoidance of ‘vendor lock-in.’ The opposing con would be the additional engineering and QA costs, as well as the fragility of the system. There are many more pros and cons to both. To see some of my old, yet still relevant, thoughts on ‘lock-in’ see this post:

Overall, intent-based systems are emerging and creating a lot of buzz, both within the vendor space and the analyst space. There are examples of intent-based automation for networking in products like Cisco’s Application Centric Infrastructure (ACI). Systems like these are one piece of a fully intent-driven architecture. I’ll discuss the other two pieces, assurance and analytics, in future posts, if I’m not simply too lazy to care.

** Update: Out of ignorance I neglected to mention another Intent-Based Networking system. Doug Gourlay was kind enough to point out Apstra to me. After taking a look, I wanted to mention that they offer a vendor-agnostic Intent-Based networking solution. The omission was unintentional and I’m happy to add other examples brought to my attention. **

*** These thoughts are mine, not sponsored, paid for, or influenced by a paycheck. Take them as you will. ***




The Art of Pre-Sales Part II: Showing Value

Part I of this post received quite a few page views and positive feedback, so I thought I’d expand on it.  Last week on the Twitters I made a comment re: sales engineers showing value via revenue ($$) and got a lot of feedback on the topic.  While I will touch on a couple of points briefly, this post is not intended as a philosophical discussion of how engineers ‘should be judged.’  Quite frankly, if you’re an engineer, the only thing that matters is how you are judged (for the time being at least.)  This is about understanding and showing your value.  Don’t get wrapped around the axle on right and wrong or principles.  While I don’t always follow my own advice, I’ve often found that the best way to change the system is by playing by its rules and becoming a respected participant. 

A move to pre-sales is often a hard transition for an engineer to make.  I discuss some of the thought process in the first blog linked above.  This post focuses on transitioning the way in which you show your value, and on providing some tools to assist in career and salary growth rather than job performance itself.  In a traditional engineering role you are typically graded on performance of duties, engineering acumen and possibly certifications showing your knowledge and growth.  When transitioning to a sales engineer role those metrics can and will change.  There are several key concepts that will assist in showing your value and reaping rewards such as salary increases and promotion. 

  1. Understand the metrics
  2. Adapt to the metrics
  3. Gather the data
  4. Sell yourself

Understand the Metrics

The first key is to understand the metrics on which you are graded.  While this seems like a straightforward concept, it is often missed.  It is best discussed up front when accepting the new role; prior to acceptance you often have more say in how these things are set.  Each company, organization and even team often uses different metrics.  I’ve had hybrid pre-sales/delivery roles where upper management judged my performance primarily on billable hours.  This meant that the work I did up front (pre-sale) held little to no value, no matter how influential it may have been in closing the deal.  I’ve also held roles that based value primarily on sales influence, basically on revenue.  In most cases you will find a combination of metrics used, and you want to be aware of them.  If you are not focused on the right areas, the value you provide may go unnoticed.  In the first example mentioned above, if I’d spent all of my time in front of customers selling deals but never implementing, my value would have been minimized.

Understanding the metrics is the first step; it lets you know what you’ll be measured on.  In some cases those metrics are black and white and therefore easy.  For instance, when I was an active duty Marine, E1-E5 promotion was about 70-80% based on physical fitness test (PFT) and rifle marksmanship qualification scores.  These not only counted on their own but were also factored into various portions of proficiency and conduct marks, which counted for the other portion of promotion.  This meant that a Marine could move up much more easily by focusing on shooting and pull-ups than on job proficiency.  This post is not about gaming the system, but that example shows that knowing the system is important.   

Adapt to the metrics

Let me preface by saying I do not advocate gaming the system, or focusing solely on one area you know is thoroughly prized while ignoring the others.  That is nothing more than brown-nosing, and you’ll quickly lose the respect of your peers.  Instead, adapt, where needed, to the metrics you’re measured on.  It’s not about dropping everything to focus on one area; it’s about ensuring you are focusing on all areas used to assess your performance.  Maybe certifications weren’t important where you were but they’re now required: get on it.  Additionally, remember that anything that can be easily measured probably is.  Intangibles, or items of a subjective nature, are difficult tools to measure performance with.  That doesn’t mean they aren’t or shouldn’t be used; it’s just a fact.  Because of that, understand the tangibles and ensure you are showing value there.

Gather the data

In a sales organization, sales numbers are always going to be key.  Every company will use them differently but they always factor in.  Every sales engineer, at a high level, is there to assist in the sale of equipment, therefore those numbers matter.  Additionally, those numbers are very tangible, meaning you can show value easily.  Most organizations will use some form of CRM to track sales dollars and customers.  Engineering access to this tool varies, but the more you learn to use the system the better.  Showing the value of the deals you spend your time on is enormous, especially if it sets you apart from your peers.  Take the time to use these systems in the way your organization intends, so that you can ensure you are tied to the revenue you generate.

Sales numbers are a great example but there are many others.  If you participate in a standards body, contribute frequently to internal wikis or email aliases, etc. gather that data.  These are parts of what you contribute and may go unnoticed, you need to ensure you have that data at your disposal.  Having the right data on hand is key to step four; selling yourself.

Sell yourself

This may be the most unnatural part of the entire process.  Most people don’t enjoy, and aren’t comfortable, presenting their own value.  That being said, this is also possibly the most important piece.  If you don’t sell yourself, you can’t count on anyone else to do it.  When discussing compensation, whether initial salary or a raise, and promotion, always look at it from a pure business perspective.  The person you’re having the discussion with has an ultimate goal of keeping the right people on board for the lowest cost; you have a goal of commanding the highest compensation possible for the value you provide.  Think of it as bargaining for a car: regardless of how much you may like your salesperson, you want to drive away with as much money in your pocket as possible.

If you’ve followed the first three steps, this part should be easier.  You’ll have documentation to support your value along the metrics evaluated; bring it.  Don’t expect your manager to have looked at everything or to have it handy.  Having these things ready helps you frame the discussion around your value, and puts you in charge.  Additionally, it shows that you know your own value.  Don’t be afraid to present who you are and what you bring to the table.  Also don’t be afraid to push back.  It can be nerve-wracking to hear a 3% raise and ask for 6%, or to push back on a salary offer for another 10K, but that doesn’t mean you shouldn’t do it.  Remember, you don’t have to make demands, and there is no harm in asking.

Phrasing is key here, and practice is always best.  Remember you are not saying you’ll leave; you’re asking for your value.  Think in phrases like, “I really appreciate what you’re offering, but I’d be much more comfortable at $x and I think my proven value warrants it.”  I’m not saying to use that line specifically, but it strikes the right tone.  In these discussions you want to show three things:

  1. That you are appreciative of the position/opportunity
  2. That you know your value
  3. That your value is tangible and proven


There are several other factors I always recommend focusing on:

  • Teamwork – this is not only easily recognizable as value; it is real value.  A team that works together and supports one another will always be more successful than a group of rock stars.  Share knowledge freely and help your peers wherever possible, even if they are not on the same direct team.
  • Leadership -  You don’t need a title to lead.  Set an example and exemplify what you’d like to see in others.  This is one I must constantly remind myself of and fail at often, but it’s key.  Lead from the front, people will follow.
  • Professionalism – As a Marine we had a saying to the effect of “Act at the rank you want to be.”  Your dress, appearance and professionalism should always be at the level you want to reach, not where you are.  This not only assists in getting there, but also in the transition once you arrive.  Have you ever seen an engineer come in wearing jeans and a polo one day, then a shirt and slacks the next after a promotion?  Looks pretty unnatural, doesn’t it?  If that engineer had already been acting the part, it would have been a natural and expected transition.
  • Commend excellence – When one of your colleagues in any realm does something above and beyond, commend it.  Send a thank-you and brief description to them and cc their manager, or to their manager and cc them.  This helps them with steps three and four, but also shows that you noticed.
  • Technical knowledge – While it should go without saying, I won’t let that be.  Always maintain your knowledge and stay sharp. 
  • Know your market value – This can be difficult but there are tools available.  One suggestion for this is using a recruiter.  A good recruiter wants you to command top dollar because it increases their commission, this combined with their market knowledge will help you place yourself.

Do’s and don’ts

  • Do – Self-assessments.  I never like to walk into a review and be surprised.  I do thorough self-assessments in the format my employer uses prior to a review.  When possible I present my assessment rather than allow the opposite.  I always expect to have more areas of improvement listed than they do.
  • Don’t – Use ultimatums.  The best example of this is receiving another offer and using it to strong-arm your employer into more money.  If you have an offer you intend to use to negotiate, make sure it’s one you intend to take.  Also know that this is a one-time tactic; you won’t ever be able to use it again with your employer.
  • Do -  Strive for improvement.  Recognize where you can improve.  Apply as much honesty as possible to self-reviews and assessments. 
  • Don’t – Blame.  Look for the common denominator: if you’ve been passed over multiple times for promotion, ask why.  Don’t get stuck in the rut of blaming others for things you can improve.  Even if it was someone else’s fault, you may find something you can do better.


In any professional environment, knowing and showing your value is important.  Most of this is specific to a pre-sales role but can be used more widely.  The short version is knowing how to show your value and showing it.  Remember you work to get paid, even if you love what you do.


A Salute to Greatness

There are two things I’ve spent my life doing: being a class clown (laughed at or with is your choice) and building my career.  Since I was 16 I’ve worked no less than 40-hour weeks, and more consistently been immersed in IT upwards of 80.  I have rarely taken time off; I typically watch PTO disappear on a spreadsheet on January first of each year.  If you count my five years of proud service to my country as a Marine, you can do the math on the fact that being a Marine is a 24/7 occupation, scratch that, a life.  I’ve striven to learn, to advance and to grow both personally and professionally.  I’ve also caught many lucky breaks, more than I deserved.  Most of those breaks were in the form of mentors who saw something better than I was in me and helped me mold myself into it (if you’re not aware, the best mentors are merely guides who help you see the path; the work is always yours.)  The luckiest break I’ve had has been my employment with World Wide Technology (WWT).  

WWT is a highly awarded $5 billion systems integrator and VAR that has been included in Fortune’s Top 100 Great Places to Work.  While impressive in and of itself, that does not scratch the surface of what makes WWT amazing.  WWT’s culture is the core of both its success and its position on Fortune’s list.  It is a culture of excellence, intelligence and talent, but more importantly of integrity, teamwork and valuing its people.  In the nearly two and a half years I have been with WWT, I have built both professional relationships and friendships with some of the best of the best in all aspects of the IT business.  Every day I am impressed by someone, something or the company as a whole.  The knowledge of the engineers, the dedication of the teams, the loyalty and camaraderie are unmatched.  But still, that’s not everything that makes WWT such a great place.

I’ve tried to find the words to describe how WWT treats its people, the dedication the company, the executives, and the management provide to them.  I cannot.  Instead I have one example of many that go unannounced, are not done for publicity, and in many cases are not even widely known about internally.  Doug Kung was a WWT engineer I never had the pleasure of meeting.  He was well respected and liked by everyone who knew or worked with him.  Doug passed away in October of 2010 after a battle with cancer.  WWT as a company, at the direction of the executive team and directly in line with the company’s core values, supported Doug, his wife, and his two children through the entire process.  This went well above and beyond what was legally required, and more so, beyond what would be reasonably expected.  The support did not stop with his passing; WWT annually arranges events to raise money for Doug’s family and matches the donations made.  While the story itself is a tragedy, the loss of a great person, this brief piece is an example of WWT’s character as a company.  As I said, this is one example. 

The friends and connections I’ve made, the opportunities I’ve had, and the support I’ve been given at WWT are unmatched.  I thank WWT and the people that make it great for those opportunities.  With that being said it is with great regret that I’ve come to the decision to part ways with WWT.  Events in my personal life have brought me to this decision and I will be taking some time for myself.  Over the next couple of months I will be spending some much needed time with family and friends.  It is long overdue and that is the silver lining in everything.  I will do my best to stay abreast of technology trends and intend to immerse myself in technology areas that stretch my abilities (one can’t remain completely idle.)  As a note this is not an issue of health, I am as healthy as I’ve ever been (mmm bacon.)

If anyone is interested in contributing here and “Defining the Cloud,” the SDN, the Big Data or any other buzzword, please contact me.  I’d hate to see a good search ranking go to waste.


Support St. Jude and the Fight Against Childhood Cancer

For some time I’ve been looking for a charity that Define the Cloud could support.  I have no desire to try to monetize my traffic through ads and clutter the content.  I also get plenty of benefits from running the site and wouldn’t ask for help with that.  That being said, I do generate decent traffic and would like to use that traffic to give back.  I definitely don’t do enough personally to give back, and this is a start.  I’ve finally settled on a charity I can stand behind.  Being a lover of the underdog and a hater of cancer, I couldn’t pick a charity I’d rather support than St. Jude Children’s Research Hospital.  With that, the only banner you’ll ever see on Define The Cloud is that of St. Jude.  If you like my content and prefer it free and ad-free, you’ve got it.  If instead you’d like to support the site, do so by supporting St. Jude.  If you prefer donating time to donating money, you can find plenty of ways to do so here:

In addition to your donations, Define the Cloud will match dollar for dollar all donations made by 10/31/2012 up to $1,000.00 USD (we’re on a shoestring budget here).  If you donate, please leave a comment here with the amount so that I can track it.  I’m trusting the honor system on this one.


Meet Grace

Disclaimer: My support of St. Jude Children’s Research Hospital in no way implies their support of me or my content.  Let’s not be silly.


Much Ado About Something: Brocade’s Tech Day

Yesterday I had the privilege of attending Brocade’s Tech Day for analysts and press.  Brocade announced the new VDX 8770, discussed some VMware announcements, and covered strategy, vision and direction.  I’m going to dig into a few of the topics that interested me; this is in no way a complete recap.

First, in regards to the event itself: my kudos to the staff that put the event together; it was excellent from both a pre-event coordination and event staff perspective.  The Brocade corporate campus is beautiful, and the EBC building was extremely well suited to such an event.  The sessions went smoothly, the food was excellent, and overall it was a great experience.  I also want to thank Lisa Caywood (@thereallisac) for pointing out that my tweets during the event were more inflammatory than productive and outside the lines of ‘guest etiquette.’  She’s definitely correct, and hopefully I can clear up some of my skepticism here in a format left open for debate, and avoid the same mistake in the future.  That being said, I had thought I was quite clear going in on who I was and how I write.  To clear up any future confusion from anyone: if you’re not interested in my unfiltered, typically cynical, honest opinion, don’t invite me; I won’t take offense.  Even if you’re a vendor with products I like, I’ve probably got a box full of cynicism for your other product lines.

During the opening sessions I observed several things that struck me negatively:

  • A theme (intended or not) that Brocade was being led into new technologies by their customers.  Don’t get me wrong: listening to your customers and keeping your product in line with their needs is key to success.  That being said, if your customers are leading you into new technology you’ve probably missed the boat.  In most cases they’re being led there by someone else and dragging you along for the ride; that’s not sustainable.  IT vendors shouldn’t need to be dragged kicking and screaming into new technologies by customers.  This doesn’t mean chase every shiny object (squirrel!), but major trends should be investigated and invested in before you’re hearing enough customer buzz to warrant it.  Remember, business isn’t just about maintaining current customers; it’s about growing by adopting new ones.  Especially for public companies, stagnant is as good as dead.
  • The term “Ethernet Fabric,” which is only used by Brocade; everyone else just calls it fabric.  This ties in closely with the next bullet.
  • A continued need to discuss commitment to pure Fibre Channel (FC) storage.  I don’t deny that FC will be around for quite some time and may even see some growth as customers with it embedded expand.  That being said, customers with no FC investment should be avoiding it like the plague, and as vendors and consultants we should be pushing more intelligent options to those customers.  You can pick apart technical details about FC vs. anything all day long (enjoy that on your own); the fact is twofold: running two separate networks is expensive and complex, and the differences in reliability, performance, etc. are fading if not gone.  Additionally, applications are being written in more intelligent ways that don’t require the high-availability, low-latency, siloed architecture of yesteryear.  Rather than clinging to FC like a sinking ship, vendors should be protecting customer investment while building and positioning the next evolution.  Quote of the day during a conversation in the hall: “Fibre Channel is just a slightly slower melting ice cube than we expected.”
  • An insistence that Ethernet fabric is a required building block of SDN.  I’d argue that while it can be a component it is far from required, and as SDN progresses it will become irrelevant completely.  More on this to come.
  • A stance that the network will not be commoditized was common throughout the day.  I’d say that’s either A) naïve or B) posturing to protect core revenue.  I’d say we’ll see network commoditization occur en masse over the next five years.  I’m specifically talking about the data center and a move away from specialized custom-built ASICs, not the core routers, and not the campus.  Custom silicon is expensive and time-consuming to develop, but provides performance/latency benefits and arguably some security benefits.  As processors and off-the-shelf chips continue to improve exponentially, this differentiator becomes less and less important.  What becomes more important is rapid adaptation to new needs.  SDN as a whole won’t rip and replace networking in the next five years, but its growth and the concepts around it will drive commoditization.  It happened with servers, then storage, while people made the same arguments.  Cheaper, faster to produce, and ‘good enough’ consistently wins out.

On the positive side Brocade has some vision that’s quite interesting as well as some areas where they are leading by filling gaps in industry offerings.

  • Brocade is embracing the concept of SDN and understands a concept I tweeted about recently: ‘Revolutions don’t sell.’  Customers want evolutionary steps to new technology.  Few if any customers will rip and replace current infrastructure to dive head first into SDN.  SDN is a complete departure from the way we network today, and will therefore require evolutionary steps to get there.  This is shown in their support of ‘hybrid’ OpenFlow implementations on some devices.  This means that OpenFlow implementations can run segregated alongside traditional network deployments.  This allows for test/dev or roll-out of new services without an impact on production traffic.  This is a great approach where other vendors are offering ‘either/or’ options.
  • There was discussion of Brocade’s VXLAN gateway, which was announced at VMworld.  To my knowledge this is the first offering in this much-needed space.  Without a gateway, VXLAN is limited to virtual-only environments.  This includes segregation from services provided by physical devices.  The Brocade VXLAN gateway will allow the virtual and physical networks to be bridged.  To dig deeper on why this is needed, check out Ivan’s article:
  • The new Brocade VDX 8770 is one bad-ass mamma jamma.  With industry-leading latency and MAC table capacity, along with TRILL-based fabric functionality, it’s built for large, scalable, high-density fabrics.  I originally tweeted “The #BRCD #VDX8770 is a bigger badder chassis in a world with less need for big bad chassis.”  After reading Ivan’s post on it I stand corrected (this happens frequently).  For some great perspective and a look at specs, take a read:
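To make the VXLAN gateway bullet above more concrete, here is a minimal sketch of what VXLAN encapsulation itself looks like per RFC 7348: each inner Ethernet frame is wrapped in an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI), transported in UDP to the IANA-assigned port 4789. A gateway’s job is to perform this encap/decap between the virtual overlay and the physical network; the outer IP/UDP headers are left to the sending stack and omitted here.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port
VXLAN_FLAG_VNI = 0x08  # "I" flag: the VNI field is valid

def vxlan_encap(vni, inner_frame):
    """Wrap an inner Ethernet frame in an 8-byte VXLAN header.

    Header layout (RFC 7348): 1 byte flags, 3 reserved bytes,
    a 3-byte VNI, and 1 more reserved byte.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack("!B3xI", VXLAN_FLAG_VNI, vni << 8)
    return header + inner_frame

def vxlan_decap(packet):
    """Strip the VXLAN header, returning (vni, inner_frame)."""
    flags, word = struct.unpack("!B3xI", packet[:8])
    if not flags & VXLAN_FLAG_VNI:
        raise ValueError("VNI flag not set")
    return word >> 8, packet[8:]

frame = b"\x00" * 14 + b"payload"   # dummy inner Ethernet frame
pkt = vxlan_encap(5000, frame)
assert vxlan_decap(pkt) == (5000, frame)
```

Anything on VNI 5000 in the overlay can then be bridged by the gateway onto a matching physical segment, which is exactly the virtual-to-physical stitching the bullet describes.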

On the financial side, Brocade has been looking good and climbed over $6.00 a share.  There are plenty of conversations stating some of this may be due to upcoming shifts at the CEO level.  They’ve reported two great quarters and are applying some new focus towards the federal government and other areas lacking in the recent past.  I didn’t dig further into this discussion.

During lunch I was introduced to one of the most interesting Brocade offerings I’d never heard of: ‘Brocade Network Subscription.’  Basically, you can lease your on-prem network from Brocade Capital.  This is a great idea for customers looking to shift CapEx to OpEx, which can be extremely useful.  I also received a great explanation of the value of a fabric underneath an SDN network from Jason Nolet (VP of Data Center Networking Group).  Jason’s position (summarized) is that implementing SDN adds a network management layer, rather than removing one.  With that in mind, the more complexity we remove from the physical network the better off we are.  What we’ll want for our SDN networks is fast, plug-and-play functionality with maximum usable links and minimal management.  Brocade VCS fabric fits this nicely.  While I agree with that completely, I’d also say it’s not the only way to skin that particular cat.  More to come on that.

For the last few years I’ve looked at Brocade as a company lacking innovation and direction.  They clung furiously to FC while the market began shifting to Ethernet, ignored cloud for quite a while, etc.  Meanwhile they burned down deals to purchase them and ended up where they’ve been.  The overall messaging, while nothing new, did have undertones of change and new direction.  That’s refreshing to hear.  Brocade is embracing virtualization and cloud architectures without tying their cart to a single hypervisor horse.  They are positioning well for SDN and the network market shifts.  Most impressively, they are identifying gaps in the spaces they operate in and executing on them from both a business and a technology perspective; examples are Brocade Network Subscription and the VXLAN gateway functionality, respectively.

Things are looking up, and there is definitely something good happening at Brocade.  That being said, they aren’t out of the woods yet.  For them, as a company, purchase is far-fetched, as the vendors that would buy them already have networking plays and would lose half of Brocade’s value by burning OEM relationships with the purchase.  The only real option from a sale perspective is for investors looking to carve them up and sell off the pieces individually.  A scenario like this wouldn’t bode well for customers.  Brocade has some work to do, but they’ve got a solid set of products and great direction.  We’ll see how it pans out.  Execution is paramount for them at this point.

Final Note:  This blog was intended to stop there, but this morning I received an angry, accusatory email from Brocade’s head of corporate communications, who was unhappy with my tweets.  I thought about posting the email in full, but have decided against it for the sake of professionalism.  Overall his email was an attack based on my tweets.  As stated, my tweets were not professional, but this type of email from someone in charge of corporate communications is well over the top as a response.  I forwarded the email to several analyst and blogger colleagues, a handful of whom had had similar issues with this individual.  One common theme in social media is that lashing out at bad press never does any good; a senior director in this position should know as much, but instead continues to slander and attack.  His team and colleagues seem to understand social media use, as they’ve engaged in healthy debate with me in regards to my tweets; it’s a shame they are not led from the front.


Digging Into the Software Defined Data Center

The software defined data center is a relatively new buzzword embraced by the likes of EMC and VMware.  For an introduction to the concept, see my article over at Network Computing.  This post is intended to take it a step deeper, as I seem to be stuck at 30,000 feet for the next five hours with no internet access and no other decent ideas.  For the purpose of brevity (read: laziness) I’ll use the acronym SDDC for Software Defined Data Center, whether or not this is being used elsewhere.

First let’s look at what you get out of a SDDC:

Legacy Process:

In a traditional legacy data center the workflow for implementing a new service would look something like this:

  1. Approval of the service and budget
  2. Procurement of hardware
  3. Delivery of hardware
  4. Rack and stack of new hardware
  5. Configuration of hardware
  6. Installation of software
  7. Configuration of software
  8. Testing
  9. Production deployment

This process varies greatly in overall time, but 30-90 days is probably a good ballpark (I know, I know, some of you are wishing it happened that fast).

Not only is this process complex and slow but it has inherent risk.  Your users are accustomed to on-demand IT services in their personal life.  They know where to go to get it and how to work with it.  If you tell a business unit it will take 90 days to deploy an approved service they may source it from outside of IT.  This type of shadow IT poses issues for security, compliance, backup/recovery etc. 

SDDC Process:

As described in the link above, an SDDC provides a complete decoupling of the hardware from the services deployed on it.  This provides a more fluid system for IT service change: growing, shrinking, adding and deleting services.  Conceptually, the overall infrastructure would maintain an agreed-upon level of spare capacity and would be added to as thresholds were crossed.  This would provide the ability to add services and grow existing services on the fly in all but the most extreme cases.  Additionally, the management and deployment of new services would be software driven through intuitive interfaces, rather than hardware driven and based on disparate CLIs.

The process would look something like this:

  1. Approval of the service and budget
  2. Installation of software
  3. Configuration of software
  4. Testing
  5. Production deployment

The removal of four steps is not the only benefit.  The remaining five steps are streamlined into automated processes rather than manual configurations.  Change management systems and trackback/chargeback are incorporated into the overall software management system, providing a fluid workflow in a centralized location.  These processes will be initiated by authorized IT users through self-service portals.  The speed at which business applications can be deployed is greatly increased, providing both flexibility and agility.
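As a sketch of that streamlined flow, the five remaining steps might be automated roughly like this: a capacity check against the agreed-upon spare pool, stubbed-out install/configuration, a test gate, and a production deployment that claims capacity and logs everything for change management and chargeback. All names here are illustrative, not any vendor’s API.

```python
SPARE_CAPACITY = {"cpu": 64, "ram_gb": 512, "disk_gb": 8000}

def deploy_service(request, capacity, audit_log):
    """Drive one approved service request through the SDDC steps."""
    # Threshold check: the pool maintains agreed-upon spare capacity
    for res in ("cpu", "ram_gb", "disk_gb"):
        if request[res] > capacity[res]:
            raise RuntimeError(f"threshold crossed on {res}: grow the pool")
    # Install + configure would call the orchestration layer here
    audit_log.append(("configured", request["name"]))
    # Test gate before anything touches production
    if not request.get("tests_pass", True):
        raise RuntimeError("test gate failed")
    # Deploy: claim capacity and record it for chargeback
    for res in ("cpu", "ram_gb", "disk_gb"):
        capacity[res] -= request[res]
    audit_log.append(("deployed", request["name"]))
    return {"service": request["name"], "status": "in production"}

log = []
result = deploy_service(
    {"name": "crm-portal", "cpu": 8, "ram_gb": 32, "disk_gb": 200},
    SPARE_CAPACITY, log)
print(result["service"], "->", result["status"])
```

The point is not the particular checks but that every step is code: initiated from a portal, consistent every time, and automatically recorded, which is what collapses the 30-90 day cycle.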

Isn’t that cloud?

Yes, no and maybe.  Or as we say in the IT world: ‘It depends.’  SDDC can be cloud; with on-demand self-service, flexible resource pooling, metered service, etc., it fits the cloud model.  The difference is really in where and how it’s used.  A public cloud IaaS model, or any given PaaS/SaaS model, does not lend itself to legacy enterprise applications.  For instance, you’re not migrating your Microsoft Exchange environment onto Amazon’s cloud.  Those legacy applications and systems still need a home.  Additionally, those existing hardware systems still have value.  SDDC offers an evolutionary approach to enterprise IT that can support both legacy applications and new applications written to take advantage of cloud systems.  This provides a migration approach as well as investment protection for traditional IT infrastructure.

How it works:

The term ‘cloud operating system’ is thrown around frequently in the same conversation as SDDC.  The idea is that compute, network and storage are raw resources that are consumed by the applications and services we run to drive our businesses.  Rather than look at these resources individually, and manage them as such, we plug them into a management infrastructure that understands them and can utilize them as services require.  Forget the hardware underneath and imagine a dashboard of your infrastructure, something like the following graphic.



The hardware resources become raw resources to be consumed by the IT services.  For legacy applications this can be very traditional virtualization or even physical server deployments.  New applications and services may be deployed in a PaaS model on the same infrastructure, allowing for greater application scale and redundancy and even less of a tie to the hardware underneath.

Lifting the kimono:

Taking a peek underneath the top level reveals a series of technologies, both new and old.  Additionally, there are some requirements that may or may not be met by current technology offerings.  We’ll take a look through the compute, storage and network requirements of SDDC one at a time, starting with compute and working our way up.

Compute is the layer that requires the least change.  Years ago we moved to the commodity x86 hardware which will be the base of these systems.  The compute platform itself will be differentiated by CPU and memory density, platform flexibility and cost. Differentiators traditionally built into the hardware such as availability and serviceability features will lose value.  Features that will continue to add value will be related to infrastructure reduction and enablement of upper level management and virtualization systems.  Hardware that provides flexibility and programmability will be king here and at other layers as we’ll discuss.

Other considerations at the compute layer will tie closely into storage.  As compute power itself has grown by leaps and bounds  our networks and storage systems have become the bottleneck.  Our systems can process our data faster than we can feed it to them.  This causes issues for power, cooling efficiency and overall optimization.  Dialing down performance for power savings is not the right answer.  Instead we want to fuel our processors with data rather than starve them.  This means having fast local data in the form of SSD, flash and cache.

Storage will require significant change, but changes that are already taking place or foreshadowed in roadmaps and startups.  The traditional storage array will become more and more niche, as it has limited capacities of both performance and space.  In its place we’ll see new options including, but not limited to, a migration back to local disk and scale-out options.  Much of the migration to centralized storage arrays was fueled by VMware’s vMotion, DRS, FT, etc.  These advanced features required multiple servers to have access to the same disk, hence the need for shared storage.  VMware has recently announced a combination of Storage vMotion and traditional vMotion that allows live migration without shared storage.  This is available in other hypervisor platforms and makes local storage a much more viable option in more environments.

Scale-out systems on the storage side are nothing new.  LeftHand and EqualLogic pioneered much of this market before being bought by HP and Dell respectively.  The market continues to grow, with products like Isilon (acquired by EMC) making a big splash in the enterprise as well as plays in the Big Data market.  NetApp’s cluster mode is now in full effect with Data ONTAP 8.1, allowing their systems to scale out.  In the SMB market, new players with fantastic offerings like Scale Computing are making headway and bringing innovation to the market.  Scale-out provides a more linear growth path, as both I/O and capacity increase with each additional node.  This is contrary to traditional systems, which are always bottlenecked by the storage controller(s).
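A back-of-the-envelope model shows why scale-out grows more linearly: each node adds both capacity and I/O, while a traditional array’s aggregate IOPS plateau at whatever the controller pair can push, no matter how many shelves are added. The numbers below are purely illustrative, not any vendor’s specs.

```python
NODE_IOPS = 20_000       # I/O each scale-out node contributes
SHELF_IOPS = 20_000      # raw disk I/O each added shelf could offer
CONTROLLER_CAP = 80_000  # fixed ceiling of the array's controllers

def scale_out_iops(nodes):
    # Every node brings its own controller, cache and disks,
    # so aggregate I/O grows with the node count.
    return nodes * NODE_IOPS

def traditional_iops(shelves):
    # Shelves add capacity, but all I/O funnels through the
    # same controllers, capping aggregate performance.
    return min(shelves * SHELF_IOPS, CONTROLLER_CAP)

for n in (2, 4, 8, 16):
    print(f"{n:2d} units: scale-out {scale_out_iops(n):>7,} IOPS, "
          f"traditional {traditional_iops(n):>7,} IOPS")
```

Past four units the traditional curve goes flat while scale-out keeps climbing, which is the linear-growth argument in a nutshell.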

We will also see moves to central control, backup and tiering of distributed storage, such as storage blades and server cache.  Having fast data at the server level is a necessity, but it solves only part of the problem.  That data must also be made fault tolerant as well as available to other systems outside the server or blade enclosure.  EMC’s VFCache is one technology poised to help with this by adding the server as a storage tier for software tiering.  Software such as this places the hottest data directly next to the processor, with tier options all the way back to SAS, SATA and even tape.
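As a rough sketch of the software-tiering idea, promote the hottest blocks into a small server-side flash tier and serve everything else from the array. The toy promotion policy below ranks blocks by access count; it is a hypothetical illustration of the concept, not EMC’s actual algorithm.

```python
from collections import Counter

class TieringCache:
    """Toy server-flash tier: keep only the hottest blocks local."""

    def __init__(self, flash_slots):
        self.flash_slots = flash_slots  # how many blocks fit in flash
        self.hits = Counter()           # access counts per block
        self.flash = set()              # blocks currently in flash

    def read(self, block):
        self.hits[block] += 1
        # Serve from flash if the block was already promoted,
        # otherwise it comes from the backing array.
        tier = "server flash" if block in self.flash else "array"
        # Re-rank: the flash tier always holds the hottest blocks;
        # everything else falls back to the array's SAS/SATA tiers.
        self.flash = {b for b, _ in self.hits.most_common(self.flash_slots)}
        return tier

cache = TieringCache(flash_slots=2)
for b in ["a", "a", "a", "b", "b", "c"]:
    cache.read(b)
print(cache.flash)  # the two hottest blocks stay next to the CPU
```

Real implementations add write-back protection and coordination with the array (the fault-tolerance concern raised above), but the core promote/demote loop looks like this.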

By now you should be seeing the trend of software-based features and control.  The last stage is the network, which will require the most change.  Networking has held strong to proprietary hardware and high margins for years while the rest of the market has moved to commodity.  Companies like Arista look to challenge the status quo by providing software feature sets, or open programmability, layered onto fast commodity hardware.  Additionally, Software Defined Networking has been validated by both VMware’s acquisition of Nicira and Cisco’s spin-off of Insieme, which by most accounts will expand upon the Cisco ONE concept with a Cisco-flavored SDN offering.  In any event, the race is on to build networks based on software flows that are centrally managed, rather than the port-to-port configuration nightmare of today’s data centers.

This move is not only for ease of administration, but is also required to push our systems to the levels required by cloud and SDDC.  These multi-tenant systems running disparate applications at various service tiers require tighter quality-of-service controls and bandwidth guarantees, as well as more intelligent routes.  Today’s physically configured networks can’t provide these controls.  Additionally, applications will benefit from network visibility, allowing them to request specific flow characteristics from the network based on application or user requirements.  Multiple service levels can be configured on the same physical network, allowing traffic to take appropriate paths based on type rather than physical topology.  These network changes are required to truly enable SDDC and cloud architectures.
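To illustrate applications requesting flow characteristics from a central controller, here is a toy sketch: the controller maps a latency requirement onto a service class and would, conceptually, program the matching flows across the fabric. The classes and API are invented for illustration, not any real controller’s interface.

```python
# Service classes the (hypothetical) controller can program.
SERVICE_CLASSES = {
    "voice":   {"max_latency_ms": 20,  "min_bandwidth_mbps": 1},
    "default": {"max_latency_ms": 100, "min_bandwidth_mbps": 10},
    "bulk":    {"max_latency_ms": 500, "min_bandwidth_mbps": 100},
}

def request_flow(app, max_latency_ms):
    """Return the loosest service class that still meets the bound."""
    candidates = [
        (name, spec) for name, spec in SERVICE_CLASSES.items()
        if spec["max_latency_ms"] <= max_latency_ms
    ]
    if not candidates:
        raise ValueError("no service class meets the latency bound")
    # Pick the least strict class that satisfies the request,
    # leaving tighter classes free for apps that truly need them.
    name, spec = max(candidates, key=lambda kv: kv[1]["max_latency_ms"])
    return {"app": app, "class": name, **spec}

print(request_flow("voip-gateway", max_latency_ms=25))
```

The physical network stays one fabric; it is the class assignment, not the cabling, that decides which path and queue a flow gets.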

Further up the stack from the Layer 2 and Layer 3 transport networks comes a series of other network services that will be layered in via software.  Features such as load balancing, access control and firewall services will be required for the services running on these shared infrastructures.  These network services will need to be deployed with new applications and tiered to the specific requirements of each.  As with the L2/L3 services, manual configuration will not suffice, and a ‘big picture’ view will be required to ensure that network services match application requirements.  These services can be layered in from both physical and virtual appliances, but will require configurability via the centralized software platform.
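That ‘big picture’ view might boil down to a check like the following: each application declares the L4-7 services it requires, and the central platform validates the deployed chain against that declaration before go-live. Application names and service lists are illustrative only.

```python
# Per-application L4-7 service requirements (illustrative).
APP_REQUIREMENTS = {
    "web-store": ["firewall", "load-balancer", "access-control"],
    "batch-etl": ["firewall"],
}

def validate_service_chain(app, deployed_chain):
    """Flag any required service missing from the deployed chain.

    The chain itself may be built from physical or virtual
    appliances; this check only cares that every declared
    requirement is present.
    """
    missing = [s for s in APP_REQUIREMENTS[app] if s not in deployed_chain]
    return {"app": app, "ok": not missing, "missing": missing}

print(validate_service_chain("web-store", ["firewall", "load-balancer"]))
```

A real platform would go on to remediate (deploy the missing virtual appliance, say) rather than just report, but the declarative requirement-versus-deployment comparison is the heart of it.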


By combining current technology trends and emerging technologies, and layering in future concepts, the software defined data center will emerge in evolutionary fashion.  Today’s highly virtualized data centers will layer on technologies such as SDN while incorporating new storage models, bringing their data centers to the next level.  Conceptually, picture a mainframe pooling underlying resources across a shared application environment.  Now remove the frame.


Thoughts From a Tech Leadership Summit

This week I attended a tech leadership summit in Vail, Colorado for the second time.  The event is always a fantastic series of discussions and brings together some of the top minds in the technology industry.  Here are some thoughts on the trends and thinking that were common at the event.

Virtualization and VDI:

There was a lot less talk of VDI and virtualization than in 2011.  These conversations were replaced with more conversations about cloud and app delivery.  Overall, the consensus seemed to be that getting the application to the right native environment on a given device was a far better approach than getting the desktop there.

Hypervisors were barely mentioned, except in a recurring theme that the hypervisor itself has become commodity.  This means that management and the upper-layer feature set are the differentiators.  Parallel to this thought was that VMware no longer has the best hypervisor, yet their management system is still far superior to the competition (KVM was touted as the best hypervisor several times).

The last piece of the virtualization discussion was around VMware’s acquisition of Nicira.  Some bullet points on that:

  • VMware paid too much for Nicira but that was unavoidable for the startup-to-be in the valley and it’s a great acquisition overall.
  • It’s no surprise VMware moved into networking; everyone is moving that way.
  • While this is direct competition with Cisco, it is currently in a small niche of service provider business.  Nicira’s product requires significant custom integration to deploy, and it will take time for VMware to productize it in a fashion usable for the enterprise.  Best guess: two years to a real product.
  • Overall, the Cisco-VMware partnership is very lucrative on both sides and should not be affected by this in the near term.
  • A seldom discussed portion of this involves the development expertise that comes with the acquisition.  With the hypervisor being commodity, and differentiation moving into the layers above that, we’ll see more and more variety in hypervisors.  This means multi-hypervisor support will be a key component of the upper level management products where virtualization vendors will compete.  Nicira’s team has proven capabilities in this space and can accelerate VMware’s multi-hypervisor strategy.


EMC:

There was a lot of talk about both the vision and execution of EMC over the past year or more.  I personally used ‘execution machine’ more than once to describe them (coming from a typically non-EMC Kool-Aid guy).  Some key points that resonated over the past few days:

  • EMC’s execution on the VNX/VNXe product lines is astounding.  EMC launched a product and went on direct attack into a portion of NetApp’s business that nobody could really touch.  Through both sales and marketing excellence they’ve taken an increasingly large chunk out of this portion of the market.  This shores up a breach in their product line NetApp was using to gain share.
  • EMC’s Isilon acquisition was not only a fantastic choice, but was also integrated quickly and well.  Isilon is a fantastic product and has big data potential, which is definitely a market that will generate big revenue in coming years.
  • EMC’s cloud vision is sound and they are executing well on it.  Additionally they were ahead of their pack of hardware vendor peers in this regard. EMC is embracing a software defined future.

I also participated in several discussions around flash and flash storage.  Some highlights:

  • PCIe based flash storage is definitely increasing in sales and enterprise consumption.  This market is expected to continue to grow as we strive to move the data closer to the processor.  There are two methods for this: storage in the server, servers in the storage.  PCIe flash plays in the server side and EMC Isilon will eventually play on the storage side.  Also look for an announcement in the SMB storage space around this during VMworld.
  • One issue in this space is that expensive, fast, server-based flash becomes trapped capacity if a server can’t drive enough I/O to it.  Additionally, there are data loss concerns with this data trapped in the server.
  • Both of these issues are looking to be solved by EMC and IBM who intend to add server based flash into the tiering of shared storage.
  • Most traditional storage vendors’ flash options are ‘bolt-ons’ to traditional array architecture.  This can leave the expensive flash I/O starved, limiting its performance benefit.  Several all-flash startups intend to use this as an inflection point, with flash-based systems designed from the ground up for the performance flash offers.
  • Flash is still not an answer to every problem, and never will be.

The last point that struck me was a potential move away from shared storage as a whole.  Microsoft would rather have you use local storage; clusters and big data apps like Hadoop thrive on local storage; and one last big shared storage draw is going away: vMotion.  Once shared storage is no longer needed for live virtual machine migration, there will be far less draw for expensive systems.


Cloud:

The major cloud discussion I was a part of (mainly as an observer) involved OpenStack.  Overall, OpenStack has a ton of buzz and a plethora of developers.  What it’s lacking is customers, leadership and someone driving it who can lead a revolution.  Additionally, it’s suffering from politics and bureaucracy.  It was described as impossible to support by one individual who would definitely know one way or another.  My thinking is that if you have CloudStack sitting there with real customers, an easily deployed system, support and leadership, why waste cycles continuing down the OpenStack path?  The best answer I heard for that: ego.  Everyone wants to build the next Amazon, and CloudStack is too baked to make as much of a mark.

Overall it’s an interesting topic but my thought is: with limited developers the industry should be getting behind the best horse and working together.

Big Data:

Big Data was obviously another fun topic.  The quote of the week was ‘There are ten people, not companies, that understand Big Data.  Six of them are at Cloudera and the other four are locked in Google writing their own checks.’  Basically, Big Data knowledge is rare, and hiring consultants is not typically a viable option because you need people holding three things: knowledge of big data processing, knowledge of your data, and knowledge of your business.  These data scientists aren’t easy to come by.  Additionally, contrary to popular hype, Hadoop is not the end-all be-all of big data; it’s a tool in a large tool chest.  Especially when talking about real time you’ll need to look elsewhere.  The consensus was that we are with big data where we were with cloud 2-3 years ago.  That being said, CIOs may still need to show big data initiatives (read: spend), so you should see $$ thrown at well-packaged big data solutions geared toward plug-n-play in the enterprise.

All in all it was an excellent event, and I was humbled as usual to participate in great conversations with so many smart people who are out there driving the future of technology.  What I’ve written here is a summary from my perspective of the one summit portion I had time to participate in.  There is always a good chance I misquoted/misunderstood something, so feel free to call me out.  As always, I’d love your feedback, contradictions or hate mail comments.
