Your Technology Sunk Cost is KILLING you

I recently bought a Nest Hello to replace my perfectly good, nearly new Ring Video Doorbell. The experience got me thinking about sunk cost in IT and how significantly it strangles the business and costs companies ridiculous amounts of money.

When I first saw the Nest Hello, I had no interest. I had recently purchased and installed my Ring. I was happy with it, and the Amazon Alexa integration was great. I had no need to change. A few weeks later I decided to replace my home security system because it’s a cable provider system, and like everything from a cable provider it’s a shit service at caviar pricing because ‘Hey, you have no choice, you sad F’er.’ That’s the beauty of the monopoly our government happily built and sustains for them. I chose to go with a system from Nest, because I already have two of their thermostats, several of their smoke detectors, and a couple of their indoor cameras. I ordered the security system components I needed, and a few cameras to complement it, then I looked back into the Nest Hello.

The Nest Hello is a much better camera and a more feature-rich device. More importantly, it will integrate seamlessly with my new security system and existing devices, eliminating yet another single-use app on my phone (the Ring app). The counter argument for purchasing the device was my sunk cost. I’d spent money on the Ring, and I’d also spent time and hassle installing it. The Nest might require me to get back in the attic and change out the transformer for my doorbell, as well as wire in a new line conditioner. Not things I enjoy doing. The sunk cost nearly stopped my purchase. Why throw away a good device I just installed to get a feature or two and a better picture?

I then stepped back and looked at it from a different point of view. What’s my business case? What’s the outcome I’m purchasing this technology to achieve? The answer is a little bit of security, but a lot of peace of mind for my home. I live alone, and I travel a lot. While I’m gone I need to manage packages, service people, and my pets. I also need to do this quickly and easily. This means that seamless integration is a top priority for me, and video quality, etc. is another big concern. Nest’s Hello camera feature set fits my use case far better, especially when adding their IQ cameras. Lastly, for video recording and monitoring service, I would now only need one provider and one manageable bill, rather than one for Nest and one for Ring. From that perspective the answer became clear: the cost I sunk wasn’t providing any value based on my use-cases, therefore it was irrelevant. It was actually irrelevant in the first place, but we’ll get back to that.

I went ahead and bought the Nest Hello. Next came another sunk cost problem. My house is covered in Amazon Alexa devices, which integrate quite well with Ring. I have no fewer than 8 Alexa enabled devices around the home, garage, etc. Nest is a Google product, so its best integration is with Google Home. Do I replace my beloved Amazon devices with Google Home to get the best integration?

First a rant: The fact that I should even have to consider this is ludicrous, and shows that both products are run by shit heads that won’t even feign the semblance of looking out for their customers’ interests. Because they have competing products, they deliberately degrade any integration between the systems rather than integrating and differentiating on product quality instead of engineered lock-in. I despise this, it’s bad business, and it’s completely unnecessary. I’d guess it actually stalls potential sales of both because people want to ‘sit back and see how it plays out’ before investing in one or the other.

I have a lot of sunk financial cost in my Alexa devices. There’s also some cost in time setting them up and integrating them with my other home-automation tools. With that in mind I went back to the outcome I’m trying to achieve. My Alexa/Ring integration allowed me to see who was at the front door and talk to them. My Alexa/Hello integration will only let me view the video. What’s my use-case? I use the integration to see the door and decide if I should walk to the front door to answer. If it’s a package delivery, I can grab it later. If it needs a signature, I’ll see them waiting. If it’s something else, I walk to the door for a conversation. Basically I only use the integration to view the video and decide if I should go to the door or not. This means that Alexa/Hello integration, while not ideal, meets my needs perfectly. I easily chose to keep Alexa, which provides the side benefit of not giving the evil behemoth that is Google any more access to my life than it already has. The last thing I need is my Gmail recommending male potency remedies after the Google device in my bedroom listens in on a night with my girlfriend. I’m picturing Microsoft Clippy here for some reason.

[Image: Microsoft Clippy]

 

I’m much more comfortable with Amazon listening in and craftily adding some books on love making for dummies to my Kindle recommendations while using price discrimination to charge me more for marital aid purchases because they know I need them.

Ok, enough TMI, back to the point. Your technology sunk cost is killing you, mmkay? When making technology decisions for your company you should ignore sunk costs. Your rational brain knows this, but you don’t do it.

“Rational thinking dictates that we should ignore sunk costs when making a decision. The goal of a decision is to alter the course of the future. And since sunk costs cannot be changed, you should avoid taking those costs into account when deciding how to proceed.” (https://blog.fastfedora.com/2011/01/the-sunk-cost-dilemma.html)

You have sunk cost in hardware, software, people-hours, consulting, and everywhere else under the sun. If you’re like most, these sunk costs hinder every decision you make. “I just refreshed my network, I can’t buy new equipment.” “My servers are only two years old, I won’t swap them out.” “I have an enterprise ELA with them, I should use their version.” These are all bad reasons to make a decision. The cost is already spent, it’s gone, it can’t be changed, but future costs and capabilities can be. Maybe:

  • That sparkly $400,000 SDN rip and replace will plug far more cohesively into the VP of Applications’ ongoing DevOps project, allowing them to launch features faster and resulting in millions of dollars in potential profit to the company over the next 24 months.
  • The new servers increase compute density, lowering your overall footprint and saving you on power, cooling, management, and licensing over time, starting a quarter or two down the road.
  • Maybe that feature that’s included for free with your ELA will end up costing you thousands in unforeseen integration challenges while only solving 10% of your existing problem.

This issue becomes insanely more relevant as you try to modernize for more agile IT delivery. Regardless of the buzzword you’re shooting for (DevOps, Cloud, UnicornRainbowDeliverySystems), the shift will be difficult. It will be exponentially more difficult if you anchor it with the sunk cost of every bad decision ever made in your environment.

“Of course your tool sounds great, and we need something exactly like it, but we already have so many tools, I can’t justify another one.” I’ve heard that verbatim from a customer, and it’s bat-shit-freaking-crazy. If your other tools suck, get rid of them; don’t let those bad decisions stop you from purchasing something that does what you need. Maybe it’s your vetting process, or um, eh, that thing you see when you look in the mirror that needs changing. That’s like saying ‘My wife needs a car to get to work, but I already have these two project cars I can’t get running, I can’t justify buying her a commuter car.’

Most of our data centers are built using the same methodology Dr. Frankenstein used to reanimate the dead. He grabbed a cart and a wheelbarrow and set off for his local graveyard. He dug up graves grabbing the things he needed, a torso, a couple of legs, a head, etc. and carted them back to his lab. Once safely back at the lab he happily stitched them together and applied power.

Data centers have been built buying the piece needed at the time from the favored vendor of the moment. A smattering of HP here, a dash of Cisco there, some EMC, a touch of NetApp, oh this Arista thing is shiny… Then up through the software stack, a teaspoon of Oracle makes the profits go down, the profits go down… some SalesForce, some VMware, and on, and on. We’ve stitched these things together with Ethernet and applied power.

Now you want to ‘DevOps that’, or ‘cloudify the thing’? Really, are you sure you REALLY want to do that? Fine go ahead, I won’t call you crazy, I’ll just think… never mind, yes I will call you crazy… crazy. DevOps, Cloud, etc. are all like virtualization before them, if you put them on a shit foundation, you get shit results.

Now don’t get me wrong. You can protect your sunk costs, sweat your assets, and still achieve buzzword greatness. It’s possible. The question is should you, and would it actually save you money? The answer is no, and ‘hell no.’ The cost of additional tools, customization, integration and lost time will quickly, and exponentially, outweigh any perceived ‘investment protection’ savings, except in the most extreme of corner-cases.

I’m not promoting throwing the baby out with the bathwater, or rip-and-replace every step of the way. I am recommending you consider those options. Look at the big picture and ignore sunk-cost as much as you can.

Maybe you replace $500,000 in hardware and software you bought last year with $750,000 worth of new-fangled shit today, and $250,000 in services to build and launch it. Crap, you wasted the sunk $500K and sunk $1 million more! How do you explain that? Maybe you’ll be explaining it as the cost of moving your company from 4 software releases per year to 1 software release per week. Maybe that release schedule is what just allowed your Dev team to ‘dark test’ then rolling-release the next killer feature on your customer platform. Maybe customer attrition is down 50% while the cost of customer acquisition is 30% of what it was a year ago. Maybe you’ll be explaining the tough calls it takes to be the hero.

 

 

 


Intent Driven Architecture Part III: Policy Assurance

Here I am finally getting around to the third part of my blog on Intent Driven Architectures, but hey, what’s a year between friends? If you missed or forgot parts I and II the links are below:

Intent Driven Architectures: WTF is Intent

Intent Driven Architectures Part II: Policy Analytics

Intent Driven Data Center: A Brief Overview Video

Now on to part III and a discussion of how assurance systems finalize the architecture.

What gap does assurance fill?

‘Intent’ and ‘Policy’ can be used interchangeably for the purposes of this discussion. Intent is what I want to do, policy is a description of that intent. The tougher question is what intent assurance is. Using the network as an example, let’s assume you have a proper intent driven system that can automatically translate a business level intent into infrastructure level configuration.

An intent like deploying a financial application beholden to PCI compliance will boil down into a myriad of config-level objects: connectivity, security, quality, etc. At the lowest level this will translate to things like Access Control Lists (ACLs), VLANs, firewall (FW) rules, and Quality of Service (QoS) settings. The diagram below shows this mapping.

Note: In an intent driven system the high level business intent is automatically translated down into the low-level constructs based on pre-defined rules and resource pools. Basically, the mapping below should happen automatically.

[Figure: mapping of a high-level business intent to low-level configuration constructs]
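To make that mapping concrete, here is a minimal sketch of the kind of rendering an intent-driven system performs automatically. The tag, addresses, VLAN number, and QoS values are all hypothetical, invented purely for illustration of the concept.

```python
# Hypothetical sketch: rendering a business-level intent into low-level
# network constructs. Tags, addresses, VLAN numbers, and QoS values are
# invented for illustration only.

PCI_FINANCIAL_APP = {
    "intent": "pci-financial-app",
    "requirements": ["segment from all non-PCI traffic",
                     "inspect traffic entering the zone",
                     "prioritize payment transactions"],
}

def render_intent(intent):
    """Translate a business intent into config-level objects using
    pre-defined rules and resource pools (the system would do this)."""
    return {
        "intent": intent["intent"],
        "vlan": 210,  # L2 segment pulled from a reserved PCI pool
        "acls": [
            {"action": "permit", "src": "10.2.10.0/24",
             "dst": "10.2.20.0/24", "port": 1433},  # app tier -> DB tier only
            {"action": "deny", "src": "any", "dst": "10.2.0.0/16"},
        ],
        "fw_rules": ["send all inter-zone traffic through the PCI firewall"],
        "qos": {"class": "transactional", "dscp": 26},
    }

print(render_intent(PCI_FINANCIAL_APP))
```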

This translation is one of the biggest challenges in traditional architectures. In those architectures the entire process is manual and human driven. Automating this process through intent creates an exponential speed increase while reducing risk and providing the ability to apply tighter security. That being said, it doesn’t get us all the way there. We still need to deploy this intent. Staying within the networking example, the intent driven system should have a network capable of deploying this policy automatically, but how do you know it can accept these changes, and what effect they will have?

In steps assurance…

The purpose of an assurance system is to guarantee that the proposed changes (policy modifications based on intent) can be consumed by the infrastructure. Let’s take one small example to get an idea of how important this is. This example will sound technical, but the technical bits are irrelevant. We’ll call this example F’ing TCAM.

F’ing TCAM:

  • TCAM (Ternary Content Addressable Memory) is the piece of hardware that stores Access Control Entries (ACEs).
  • TCAM is very expensive, therefore you have a finite amount in any given switch.
  • TCAM is how ACLs get enforced at ‘line-rate’ (as fast as the wire).
  • ACLs can be/are used along with other tools to enforce things like PCI compliance.
  • An individual DC switch can theoretically be out of TCAM space, therefore unable to enforce a new policy.
  • Troubleshooting and verifying that across all the switches in a data center is hard.

That’s only one example of verification that needs to happen before a new intent can be pushed out. Things like VLAN and route availability, hardware/bandwidth utilization, etc. are also important. In the traditional world two terrible choices are available: verify everything manually per device, or ‘spray and pray’ (push the configuration and hope.)

This is where the assurance engine fits in. An assurance engine verifies the ability of the infrastructure to consume new policy before that policy is pushed out. This allows the policy to be modified if necessary prior to changes on the system, and reduces troubleshooting required after a change.

Advanced assurance systems take this one step further. Step 1, as outlined above, verifies that the change can be made. Step 2 verifies whether the change should be made. What I mean by this is that step 2 checks compliance, IT policy, and other guidelines to ensure that the change will not violate them. Many times a change will be possible even though it violates some other policy; step 2 ensures that administrators are aware of this before a change is made.
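Here is a minimal sketch of that two-step check. The switch inventory, TCAM numbers, and compliance rule are all hypothetical; a real assurance engine would pull this state from the fabric itself. The point is that both answers come back before anything is pushed to the network.

```python
# Hypothetical sketch of a two-step assurance check. Inventory numbers,
# thresholds, and the compliance rule are made up for illustration.

SWITCHES = {
    "leaf-101": {"tcam_free_entries": 1200},
    "leaf-102": {"tcam_free_entries": 40},
}

COMPLIANCE_RULES = [
    # (description, predicate over the proposed change)
    ("PCI zones must not permit traffic from 'any' source",
     lambda change: not any(ace["src"] == "any" and ace["action"] == "permit"
                            for ace in change["aces"])),
]

def can_deploy(change):
    """Step 1: can the fabric physically consume the change (e.g. TCAM headroom)?"""
    needed = len(change["aces"])
    return [name for name, sw in SWITCHES.items()
            if sw["tcam_free_entries"] < needed]

def should_deploy(change):
    """Step 2: would the change violate compliance or IT policy, even if it fits?"""
    return [desc for desc, rule_ok in COMPLIANCE_RULES if not rule_ok(change)]

proposed = {"aces": [{"action": "permit", "src": "10.2.10.0/24",
                      "dst": "10.2.20.0/24", "port": 1433}] * 60}

print("Switches lacking TCAM space:", can_deploy(proposed))
print("Policy violations:", should_deploy(proposed))
```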

This combination of features is crucial for the infrastructure agility required by modern business. It also greatly reduces the risk of change allowing maintenance windows to be reduced greatly or eliminated. Assurance is a critical piece of achieving true intent driven architectures.


Best Practices of Women in Tech

The following is a guest post by Sara (Ms. Digital Diva)

Today’s tech industry has a new face, and that face is female. Though the field has traditionally been male dominated, more and more women are making their mark as leaders in tech. These women are contributing not only to the continuous advancements we’re seeing in technology, but are also making a point to build up one another and the young women who look up to them. Progress has been made, but there’s still work to be done. Here are some of the ways these women are doing it.

Hit the Ground Running
Just as important as the women who are already working in the tech field are the young girls who aspire to be like them. Supporting these young women and girls to follow their passion, and providing them with the necessary resources to reach their goals, is key to the future of tech. An example of these efforts comes from the founder of Girls Who Code, Reshma Saujani, who aims to close the gender gap by providing an outlet for girls to explore their abilities and pursue interests in computer science. Similarly, with Women Who Code, Alaina Percival empowers women by offering services to assist in building successful careers in technology. Breaking out of the stereotypical boxes and utilizing these sorts of programs not only builds confidence, but helps those just starting out find their niche. This can have an important impact on professional development when it becomes time to specialize.

Pursue What’s Most Beneficial to You
There’s no stopping a woman with goals. Once you have that goal set, it’s up to you to do everything it takes to get it done. In this industry, technology is constantly advancing. To stay current, you must maintain a hunger for learning. Staying up-to-date with trends, and with the qualities that are most in-demand by employers, will keep you ahead of the game and closer to reaching your goals. This quality fortunately seems to come naturally to women. According to HackerRank’s Women in Tech Report, women are incredibly practical in this sense, and tend to pursue proficiency in whichever languages are most valued at the moment.

Succeed Together
It’s tough to admit, but getting more women in tech is still a work in progress, and in order to continue progressing we must work together. Rarely does anyone succeed in life without mentorship, guidance, or at least support from others. There’s nothing wrong with asking for help. Taking the time to network with women who have earned a position you hope to achieve someday is essential in overcoming workplace challenges and clarifying questions. Even if you can’t get in physical contact with a role model of yours, keeping up with what they’re writing, saying, and working on can help you expand your own interests and continue learning. The process of working towards your ultimate potential is a long one, but embracing advice can help you get there efficiently.

Lessons Learned
Like anything in life, developing your professional career comes with lots of trial and error. You’ll succeed and you’ll fail, you’ll try things you like and try things you hate. It’s all a part of the process. When you’re the only woman in an office full of men it can be difficult to speak up or put yourself out there for fear of making a mistake. But if I’ve learned anything in my career, it’s that staying silent signifies acceptance, and not involving yourself in situations that can help you grow only hurts you. Getting involved in groups, committees, projects, anything that interests you is the biggest piece of advice I can give. Not only will you expand your knowledge and experience, but it’s a great way to get to know others in the tech community. Building relationships is a key part of any profession, but especially in environments where you want to build confidence.

A final thought to take with you: always be advancing. So much of the technology industry is self-development and striving to discover the next best thing. Curiosity is what will keep you afloat. Utilizing programs and keeping up with verticals that interest you can help you develop strong points of view on emerging technologies. This is crucial as you grow in your career, as people generally listen to those who have something to say. What you don’t want to do is get swept up in the crowd and lose your voice. If tech is what you’re interested in then it’s where you belong, whether you’ve been studying it your whole life or are just getting started. Never underestimate yourself and don’t confuse experience with ability. There are so many incredible women doing incredible things in the tech industry. All they need to be even greater is you.


Intent-Driven Data Center: A Brief Video Overview

Here’s a brief video overview of Intent-Driven data center. More blogs to come.


Intent Driven Architecture Part II: Policy Analytics

*** Disclaimer: Yes I work for a company that sells products in this category. You are welcome to assume that biases me and disregard this article completely. ***

In my first post on Intent-Driven Architectures (http://www.definethecloud.net/intent-driven-architectures-wtf-is-intent/) I attempted to explain the basics of an Intent-Based, or Intent-Driven, approach. I also explained the use of Intent-Driven architecture from a network perspective. The next piece of building a fully Intent-Driven architecture is analytics. This post will focus there.

Let’s assume you intend to deploy, or have deployed, a network, server, storage, etc. system that can consume intent and automate provisioning based on it. How do you identify your policy, or intent, for your existing workloads? This is a tough question, and a common place for policy automation, micro-segmentation, and other projects to stall or fail. This is less challenging for that shiny new app you’re about to deploy (because you’re defining the requirements, the policy/intent); it’s all of those existing apps that create the nightmare. How do you automate the infrastructure based on the application’s intent if you don’t know the application’s intent?

This is one of the places where analytics becomes a key piece of an intent-driven architecture. You not only need a tool to discover the existing policy, but one that can keep that up-to-date as things change. Was policy implemented correctly on day 0? Is policy still being adhered to on day 5, 50, 500? This is where real-time, or near real-time analytics will come into play for intent-driven architectures.

I’m going to go back to the network and security as my primary example, I’m a one-trick pony that way. These same concepts are applicable to compute, storage and other parts of the architecture. Using the network example the diagram below shows a very generalized version of a typical policy enforcement example in traditional architectures.

[Figure: typical policy enforcement in a traditional network architecture]

 

Using the example above we see that most policy is pushed to the distribution layer of the network and enforced in the routing, firewalls, load-balancers etc. The other thing to note is that most policy is very broad deny rules. This is what’s known as a blacklist model; anything is allowed unless explicitly denied. This loose level of policy creates large security gaps, and is very rigid and fragile. Additionally, because the intent or policy is described so loosely it’s nearly impossible to use existing infrastructure to discover application intent.

In order to gather intent and automate the policy requirements based on that intent, we need to look at the actual traffic, not the existing rules. We need a granular look at how the applications communicate; this shows us what needs to be allowed, and can be used to infer what should be blocked. It can also show us policies that enforce user-experience, app-priority, traffic-load requirements, etc. Generally this information can be gathered from one of two locations: the operating system/app stack or the network; even better would be using both. With this data we can see much more detail. The figure below shows moving from a broad subnet allow rule to granular knowledge of the TCP/UDP ports that need to be open between specific points.

[Figure: broad subnet allow rule vs. granular port-level policy]
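As a rough sketch of what that discovery step looks like, granular allow rules can be aggregated from observed flows. The flow records, tier names, and ports below are invented; a real system would collect them from host agents or network telemetry.

```python
# Hypothetical sketch: deriving granular allow rules from observed flows.
# The flow records below are invented for illustration.

observed_flows = [
    {"src": "web-tier", "dst": "app-tier", "proto": "tcp", "port": 8443},
    {"src": "app-tier", "dst": "db-tier",  "proto": "tcp", "port": 1433},
    {"src": "app-tier", "dst": "db-tier",  "proto": "tcp", "port": 1433},
    {"src": "web-tier", "dst": "app-tier", "proto": "tcp", "port": 8443},
]

def discover_policy(flows):
    """Collapse observed flows into a de-duplicated set of granular allow rules."""
    rules = {(f["src"], f["dst"], f["proto"], f["port"]) for f in flows}
    return [{"action": "permit", "src": s, "dst": d, "proto": p, "port": port}
            for s, d, p, port in sorted(rules)]

for rule in discover_policy(observed_flows):
    print(rule)
```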

These granular rule-sets are definitely not intent, but they are the infrastructure’s implementation of that intent. This first step of analytics assists with tightening security through micro-segmentation, but it also allows agility within that tightened security. For example, if you waved a magic wand and it implemented perfect micro-segmentation, that micro-segmentation would quickly start to create problems without analytics. Developers open a new port? A software patch changes the connection ports for an app? Downtime and slow remediation will be unavoidable. With real-time, or near-real-time, analytics the change can be detected immediately, and possibly remediated with a click.

Analytics plays a much bigger role than just policy/intent discovery. The analytics engine of an Intent-based system should also provide visibility into the policy enforcement. Some examples:

  • Was intent correctly deployed, and enforced on day 0?
  • Is intent still being correctly enforced on day 5, 50, 500?
  • What-if scenarios (if I change/add policy x, what would be affected?)

All of this should be done by looking at the actual communication between apps or devices, not by looking at infrastructure configuration. For example, I can look at a firewall rule and determine that it is properly configured to segment traffic A from traffic B. There is nothing in the firewall config to show me that the rest of the network is properly configured to ensure all traffic passes through that firewall. If traffic is somehow bypassing the firewall, all the rules in the world make no difference.
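A minimal sketch of that verification idea, assuming you already have flow records that note whether each flow traversed the firewall. The intended policy and the flows below are invented for illustration.

```python
# Hypothetical sketch: verifying intent against observed traffic rather than
# device configuration. The intended policy and flow records are invented.

intended_policy = {
    ("app-tier", "db-tier"): {1433},   # only SQL traffic allowed between these tiers
}

observed_flows = [
    {"src": "app-tier", "dst": "db-tier", "port": 1433, "via_firewall": True},
    {"src": "app-tier", "dst": "db-tier", "port": 22,   "via_firewall": False},
]

def verify_intent(policy, flows):
    """Flag flows that violate intent: unexpected ports or traffic bypassing the firewall."""
    violations = []
    for f in flows:
        allowed = policy.get((f["src"], f["dst"]), set())
        if f["port"] not in allowed:
            violations.append(f"unexpected port {f['port']} between {f['src']} and {f['dst']}")
        if not f["via_firewall"]:
            violations.append(f"flow {f['src']} -> {f['dst']}:{f['port']} bypassed the firewall")
    return violations

for v in verify_intent(intended_policy, observed_flows):
    print(v)
```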

Analytics engines designed for, or as part of, an intent-based networking system provide two critical things: policy discovery and policy verification. Even with a completely green-field environment where the policy can be designed fresh, you’ll want analytics to ensure it is deployed correctly and to keep you up-to-date on changes.

There are three major components of an intent-driven architecture. I’ve discussed intent-based automation in the previous post, and analytics in this post. I’ll discuss the third piece in the near future: assurance, knowing your system can consume the new intent.

*** Disclaimer: See disclaimer above. ***


Intent Driven Architectures: WTF is Intent?

*** Disclaimer: I work for a vendor who has several offerings in the world of intent-based infrastructure. If you choose to assume that makes my opinion biased and irrelevant, that’s your mistake to make, and you can save time by skipping the rest of this post. ***

** Update at the end of the blog (10/20/2017)**

In the ever evolving world of data center and cloud buzzwords, the word ‘intent’ is slowly gaining momentum: Intent-based x, intent-driven y, etc. What is ‘intent’ and how does that apply to networks, storage, servers, or infrastructure as a whole, or better yet to automation? Let’s take a look.

First, let’s peek at status quo automation. Traditional automation systems for technology infrastructure (switches, servers, storage, etc.) utilize low level commands to configure multiple points at once. For example the diagram below shows a network management system being used to provision VLAN 20 onto 15 switches from a single point of control.

[Figure: basic automation: a network management system provisioning VLAN 20 across 15 switches]
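For a concrete feel of what that low-level automation looks like, here is a minimal sketch. The device names and commands are illustrative, not any particular vendor’s CLI or API. Note that a human still had to decide the business requirement meant ‘VLAN 20’ before the tool could do anything.

```python
# Hypothetical sketch of status quo automation: pushing the same low-level
# construct (VLAN 20) to every switch. Device names and commands are
# illustrative, not a real vendor CLI or API.

switches = [f"switch-{i:02d}" for i in range(1, 16)]   # 15 switches

def provision_vlan(device, vlan_id, name):
    """Send the low-level config for one VLAN to one device."""
    commands = [f"vlan {vlan_id}", f"  name {name}"]
    print(f"{device}: {commands}")   # a real system would push these over SSH/NETCONF

for sw in switches:
    provision_vlan(sw, 20, "finance-app")
```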

The issue here is the requirement for low-level policy rendering, meaning getting down to the VLAN, RAID pool, or firewall rule level to automate the deployment of a higher level business policy. The higher level business policy is the ‘intent’ and it can be defined in terms of security, SLA, compliance, geo-dependency, user-experience, etc. With a traditional automation method a lot of human interaction is required to translate between an application’s business requirements (the intent) and the infrastructure configuration. Worse, this communication typically occurs between groups that speak very different languages: engineers, developers, lines-of-business. The picture below depicts this.

[Figure: app deployment chain: requirements translated across lines-of-business, developers, and engineers]

This ‘telephone game’ of passing app requirements is not only slow, it is also risk prone because a lot gets lost in the multiple layers of communication.

Hopefully you now have a slight grasp on the way traditional automation works, basically the overall problem statement. Now let’s take a dive into using intent to alleviate this issue.

I’m going to use the network as my example for the remainder of this post. The same concepts are applicable to any infrastructure, or the whole infrastructure; I just want to simplify the explanation. Starting at the top, a network construct like a VLAN is a low-level representation of some type of business policy. A great example might be compliance regulations. An app processes financial data that is regulated to be segmented from all other data. A VLAN is a Layer 2 segment that, in part, helps to support this. The idea of an intent-driven architecture is to automate the infrastructure based on the high level business policy, and skip the middle layers of translation. Ideally you’d define how you implement policy/intent for something like financial data one time. From then on, simply tagging an app as financial data ensures the system provisions that policy. The diagram below shows this process.

[Figure: intent-driven workflow: tagging an app triggers automatic policy provisioning]
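A minimal sketch of the ‘define once, tag forever’ idea. The tag name and the constructs it maps to are hypothetical; the point is only that the policy definition is a reusable object and each app deployment references it by tag.

```python
# Hypothetical sketch of consuming intent: the policy for a tag is defined once,
# then reused for every app carrying that tag. Tag names and constructs are
# invented for illustration.

POLICY_DEFINITIONS = {
    # defined once by the organization, reused forever
    "financial-data": {
        "segmentation": "dedicated L2 segment",
        "inspection": "all traffic through firewall",
        "qos_class": "transactional",
    },
}

def deploy_app(app_name, tags):
    """Render and 'provision' policy for an app purely from its business tags."""
    for tag in tags:
        policy = POLICY_DEFINITIONS.get(tag)
        if policy is None:
            raise ValueError(f"no policy defined for tag '{tag}'")
        print(f"{app_name}: applying '{tag}' policy -> {policy}")

deploy_app("payments-v2", ["financial-data"])
```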

One common misconception is that the network, or infrastructure, must be intelligent enough to interpret intent. This is absolutely false. The infrastructure needs to be able to consume intent, not interpret or define it. Intent is already understood in business logic. The infrastructure should be able to consume that, and automate configuration based on that business logic intent. In the example in the diagram, business logic has already been defined for the given organization’s compliance requirements. Once it has been defined, it is a reusable object allowing automation of that policy for any app tagged as requiring it. Another note is that while the example uses a ‘dev’ referencing custom built software, the same methodology can be used with off-the-shelf software.

There are many reasons for not trying to build intent based systems that attempt to automatically detect and interpret intent. One non-trivial reason is the cost of those systems. More important is the ability to actually execute on that vision. Using a network example, it would be fairly simple to build a network that can automatically detect an Oracle application using standard ports and connectivity. What the network alone would not be able to detect is whether that workload was a dev, test, or production environment. Each environment would require different policies or intent. Another example would be differences in policy enforcement. One company may consider a VLAN to be adequate segmentation for different traffic types, another would require a firewall, and a third might require ‘air-gap.’ These differences would not be able to be automatically understood by the infrastructure. Intent based systems should instead consume the existing business logic, and automate provisioning based on that, not attempt to reinterpret that business logic themselves.

The other major misconception regarding intent based systems is that they must be ‘open’ and able to incorporate any underlying hardware and software. This is definitely not a requirement of intent based systems. There are pros and cons to open portability across hardware and software platforms. Those should always be weighed when purchasing a system, intent-based or otherwise. One pro for an open system supporting heterogeneity might be the avoidance of ‘vendor lock-in.’ The opposing con would be the additional engineering and QA costs, as well as the fragility of the system. There are many more pros/cons to both. To see some of my old, yet still relevant, thoughts on ‘lock-in’ see this post: http://www.definethecloud.net/the-difference-between-foothold-and-lock-in/.

Overall, intent-based systems are emerging and creating a lot of buzz, both within the vendor space and the analyst space. There are examples of intent-based automation for networking in products like Cisco’s Application Centric Infrastructure (ACI). Systems like these are one piece of a fully intent-driven architecture. I’ll discuss the other two pieces, assurance and analytics, in future posts, if I’m not simply too lazy to care.

** Update: Out of ignorance I neglected to mention another Intent-Based Networking system. Doug Gourlay was kind enough to point out Apstra to me (http://www.apstra.com/). After taking a look, I wanted to mention that they offer a vendor agnostic Intent-based networking solution. The omission was unintentional and I’m happy to add other examples brought to my attention. **

*** These thoughts are mine, not sponsored, paid for, or influenced by a paycheck. Take them as you will. ***

 

 


The Art of Pre-Sales Part II: Showing Value

Part I of this post http://www.definethecloud.net/the-art-of-pre-sales received quite a few page views and positive feedback so I thought I’d expand on it.  Last week on the Twitters I made a comment re sales engineers showing value via revenue ($$) and got a lot of feedback.  I thought I’d expand on the topic.  While I will touch on a couple of points briefly this post is not intended as a philosophical discussion of how engineers ‘should be judged.’  Quite frankly if you’re an engineer the only thing that matters is how you are judged (for the time being at least.)  This is about understanding and showing your value.  Don’t get wrapped around the axle on right and wrong or principles.  While I don’t always follow my own advice I’ve often found that the best way to change the system is by playing by its rules and becoming a respected participant. 

A move to pre-sales is often a hard transition for an engineer to make.  I discuss some of the thought process in the first blog linked above.  This post focuses on transitioning the way in which you show your value, providing some tools to assist in career and salary growth rather than job performance itself.  In a traditional engineering role you are typically graded on performance of duties, engineering acumen and possibly certifications showing your knowledge and growth.  When transitioning to a sales engineer role those metrics can and will change.  There are several key concepts that will assist in showing your value and reaping rewards such as salary increases and promotion. 

  1. Understand the metrics
  2. Adapt to the metrics
  3. Gather the data
  4. Sell yourself

Understand the Metrics

The first key is to understand the metrics on which you are graded.  While this seems to be a straightforward concept, it is often missed.  This is best discussed up front when accepting the new role.  Prior to acceptance you often have more of a say in how those things occur.  Each company, organization and even team often uses different metrics.  I’ve had hybrid pre-sales/delivery roles where upper management judged my performance primarily on billable hours.  This means that the work I did up front (pre-sale) held little to no value, no matter how influential it may have been on closing the deal.  I’ve also held roles that focused value primarily on sales influence, basically on revenue.  In most cases you will find a combination of metrics used, and you want to be aware of these.  If you are not focused on the right areas the value you provide may go unnoticed.  In the first example mentioned above, if I’d spent all of my time in front of customers selling deals but never implementing, my value would have been minimized.

Understanding the metrics is the first step; it allows you to know what you’ll be measured on.  In some cases those metrics are black and white and therefore easy.  For instance, when I was an active duty Marine, E1-E5 promotion was about 70-80% based on physical fitness test (PFT) and rifle marksmanship qualification scores.  These not only counted on their own but were also factored into various portions of proficiency and conduct marks, which counted for the other portion of promotion.  This meant that a Marine could much more easily move up by focusing on shooting and pull-ups than on job proficiency.  This post is not about gaming the system, but that example shows that knowing the system is important.   

Adapt to the metrics

Let me preface by saying I do not advocate gaming the system, or focusing solely on one area that you know is thoroughly prized while ignoring the others.  That is nothing more than brown nosing, and you’ll quickly lose the respect of your peers.  Instead adapt, where needed, to the metrics you’re measured on.  It’s not about dropping everything to focus on one area, it’s about ensuring you are focusing on all areas that are used to assess your performance.  Maybe certifications weren’t important where you were but they’re now required; get on it.  Additionally, remember that anything that can be easily measured probably is.  Intangibles or items of a subjective nature are difficult tools to measure performance on.  That doesn’t mean they aren’t or shouldn’t be used; it’s just a fact.  Because of that, understand the tangibles and ensure you are showing value there.

Gather the data

In a sales organization sales numbers are always going to be key.  Every company will use them differently but they always factor in.  Every sales engineer at a high level is there to assist in the sale of equipment, therefore those numbers matter.  Additionally those numbers are very tangible, meaning you can show value easily.  Most organizations will use some form of CRM, such as salesforce.com, to track sales dollars and customers.  Engineering access to this tool varies, but the more you learn to use the system the better.  Showing the value of the deals you spend your time on is enormous, especially if it sets you apart from your peers.  Take the time to use these systems in the way your organization intends so that you can ensure you are tied to the revenue you generate.

Sales numbers are a great example but there are many others.  If you participate in a standards body, contribute frequently to internal wikis or email aliases, etc. gather that data.  These are parts of what you contribute and may go unnoticed, you need to ensure you have that data at your disposal.  Having the right data on hand is key to step four; selling yourself.

Sell yourself

This may be the most unnatural part of the entire process.  Most people don’t enjoy, and aren’t comfortable with, presenting their own value.  That being said this is also possibly the most important piece.  If you don’t sell yourself you can’t count on anyone else to do it.  When discussing compensation (initial or a raise) and promotion, always look at it from a pure business perspective.  The person that you’re having the discussion with has an ultimate goal of keeping the right people on board for the lowest cost; you have a goal of commanding the highest compensation possible for the value you provide.  Think of it as bargaining for a car: regardless of how much you may like your sales person, you want to drive away with as much money in your pocket as possible.

If you’ve followed the first three steps this part should be easier.  You’ll have documentation to support your value along the metrics evaluated; bring it.  Don’t expect your manager to have looked at everything or to have it handy.  Having these things ready helps you frame the discussion around your value, and puts you in charge.  Additionally it shows that you know your own value.  Don’t be afraid to present who you are and what you bring to the table.  Also don’t be afraid to push back.  It can be nerve-wracking to hear a 3% raise and ask for 6%, or to push back on a salary offer for another $10K, but that doesn’t mean you shouldn’t do it.  Remember, you don’t have to make demands, and there is no harm in asking.

Phrasing is key here and practice is always best.  Remember you are not saying you’ll leave, you’re asking for your value.  Think in phrases like, “I really appreciate what you’re offering but I’d be much more comfortable at $x and I think my proven value warrants it.”  I’m not saying to use that line specifically but it does ring in the right light.  In these discussions you want to show three things:

  1. That you are appreciative of the position/opportunity
  2. That you know your value
  3. That your value is tangible and proven

Intangibles

There are several other factors I always recommend focusing on:

  • Teamwork – this is not only easily recognizable as value,  it is real value.  A team that works together and supports one another will always be more successful than a group of rock stars.  Share knowledge freely and help your peers wherever possible, even if they are not tied to the same direct team.
  • Leadership -  You don’t need a title to lead.  Set an example and exemplify what you’d like to see in others.  This is one I must constantly remind myself of and fail at often, but it’s key.  Lead from the front, people will follow.
  • Professionalism – As a Marine we had a saying to the effect of “Act at the rank you want to be.”  Your dress, appearance and professionalism should always be at the level you want to be at, not where you are.  This not only assists in getting there, but also in the transition once it’s acquired.  Have you ever seen an engineer come in wearing jeans and a polo one day, then a shirt and slacks the next after a promotion?  Appears pretty unnatural, doesn’t it?  If that engineer had already been acting the part it would have been a natural and expected transition.
  • Commend excellence – When one of your colleagues in any realm does something above and beyond, commend it.  Send a thank you and brief description to them and cc their manager, or to their manager and cc them.  This helps them with steps three and four, but also shows that you noticed.
  • Technical knowledge – While it should go without saying, I won’t let that be.  Always maintain your knowledge and stay sharp. 
  • Know your market value – This can be difficult but there are tools available.  One suggestion for this is using a recruiter.  A good recruiter wants you to command top dollar because it increases their commission, this combined with their market knowledge will help you place yourself.

Do’s and don’ts

  • Do – Self assessments.  I never like to walk into a review and be surprised.  I do a thorough self assessment in the format my employer uses prior to a review.  When possible I present my assessment rather than allow the opposite.  I always expect to have more areas of improvement listed than they do.
  • Don’t – Use ultimatums.  The best example of this is receiving another offer and using it to strong-arm your employer into more money.  If you have an offer you intend to use to negotiate, make sure it’s one you intend to take.  Also know that this is a one-time tactic; you won’t ever be able to use it again with your employer.
  • Do -  Strive for improvement.  Recognize where you can improve.  Apply as much honesty as possible to self-reviews and assessments. 
  • Don’t – Blame.  Look for the common denominator: if you’ve been passed over multiple times for promotion, ask why.  Don’t get stuck in the rut of blaming others for things you can improve.  Even if it was someone else’s fault you may find something you can do better.

Summary

In any professional environment, knowing and showing your value is important.  Most of this is specific to a pre-sales role but can be used more widely.  The short version is knowing how to show your value and showing it.  Remember you work to get paid, even if you love what you do.


A Salute to Greatness

There are two things I’ve spent my life doing: being a class clown (laughed at or with is your choice) and building my career.  Since I was 16 I’ve worked no less than 40-hour weeks, and more consistently been immersed in IT upwards of 80.  I have rarely taken time off; I typically watch PTO disappear on a spreadsheet January first of each year.  If you count my five years of proud service to my country as a Marine, you can do the math on the fact that being a Marine is a 24/7 occupation, scratch that, life.  I’ve striven to learn, to advance and to grow both personally and professionally.  I’ve also caught many lucky breaks, more than I deserved.  Most of those breaks were in the form of mentors who saw something better than I was in me and helped me to mold myself into it (if you’re not aware, the best mentors are merely guides that help you see the path.  The work is always yours.)  The luckiest break I’ve had has been my employment with World Wide Technology (www.wwt.com).  

WWT is a highly awarded $5 billion systems integrator and VAR that has been included in Fortune’s list of Top 100 Great Places to Work.  While impressive in and of itself, that does not scratch the surface of what makes WWT amazing.  WWT’s culture is the core of both its success and its position on Fortune’s list.  WWT is a culture of excellence, intelligence and talent, but more importantly of integrity, teamwork and value in its people.  In the nearly two and a half years I have been with WWT, I have built both professional relationships and friendships with some of the best of the best in all aspects of IT business.  Every day I am impressed by someone, something or the company as a whole.  The knowledge of the engineers, the dedication of the teams, the loyalty and camaraderie are unmatched.  But still that’s not everything that makes WWT such a great place.

I’ve tried to find the words to describe how WWT treats its people, the dedication the company, the executives, and the management provide to them.  I cannot.  Instead I have one example of many that go unannounced, are not done for publicity and in many cases are not even widely known about internally.  Doug Kung was a WWT engineer I never had the pleasure of meeting.  He was well respected and liked by everyone that knew or worked with him.  Doug passed away in October of 2010 after losing a battle with cancer.  WWT as a company, at the direction of the executive team and directly in line with the company core values, supported Doug, his wife, and his two children through the entire process.  This went well above and beyond what was legally required, and more so beyond what would be reasonably expected.  The support did not stop with his passing; WWT annually arranges events to raise money for Doug’s family and matches the donations made.  While the story itself is a tragedy, the loss of a great person, this brief piece is an example of WWT’s character as a company.  As I said, this is one example. 

The friends and connections I’ve made, the opportunities I’ve had, and the support I’ve been given at WWT are unmatched.  I thank WWT and the people that make it great for those opportunities.  With that being said it is with great regret that I’ve come to the decision to part ways with WWT.  Events in my personal life have brought me to this decision and I will be taking some time for myself.  Over the next couple of months I will be spending some much needed time with family and friends.  It is long overdue and that is the silver lining in everything.  I will do my best to stay abreast of technology trends and intend to immerse myself in technology areas that stretch my abilities (one can’t remain completely idle.)  As a note this is not an issue of health, I am as healthy as I’ve ever been (mmm bacon.)

If anyone is interested in contributing here and “Defining the Cloud,” the SDN, the Big Data, or any other buzzword, please contact me.  I’d hate to see a good search ranking go to waste ;)


Support St. Jude and the Fight Against Childhood Cancer

For some time I’ve been looking for a charity that Define the Cloud could support.  I have no desire to try and monetize my traffic through ads and clutter the content.  I also get plenty of benefits from running the site and wouldn’t ask for help with that.  That being said, I do generate decent traffic and would like to use that traffic to give back.  I definitely don’t do enough personally to give back and this is a start.  I’ve finally settled on a charity I can stand behind.  Being a lover of the underdog and a hater of cancer, I couldn’t pick a charity I’d rather support than St. Jude Children’s Research Hospital (www.stjude.org).  With that, the only banner you’ll ever see on Define The Cloud is that of St. Jude.  If you like my content and prefer it free and ad-free, you’ve got it.  If instead you’d like to support the site, do so by supporting St. Jude.  If you prefer donating time to donating money you can find plenty of ways to do so here: http://www.stjude.org/volunteers.

In addition to your donations Define the Cloud will match dollar for dollar all donations made by 10/31/2012 up to $1,000.00 USD (we’re on a shoe string budget here.)  If you donate please leave a comment here with the amount so that I can track.  I’m trusting the honor system on this one. 

 

Meet Grace

Disclaimer: My support of St. Jude Children’s Research Hospital in no way implies their support of me or my content.  Let’s not be silly.


Much Ado About Something: Brocade’s Tech Day

Yesterday I had the privilege of attending Brocade’s Tech Day for analysts and press.  Brocade announced the new VDX 8770, discussed some VMware announcements, and discussed strategy, vision and direction.  I’m going to dig into a few of the topics that interested me; this is in no way a complete recap.

First, in regards to the event itself: my kudos to the staff that put the event together, it was excellent from both a pre-event coordination and event staff perspective.  The Brocade corporate campus is beautiful and the EBC building was extremely well suited to such an event.  The sessions went on smoothly, the food was excellent and overall it was a great experience.  I also want to thank Lisa Caywood (@thereallisac) for pointing out that my tweets during the event were more inflammatory than productive and outside the lines of ‘guest etiquette.’  She’s definitely correct, and hopefully I can clear up some of my skepticism here in a format left open for debate, and avoid the same mistake in the future.  That being said I had thought I was quite clear going in on who I was and how I write.  To clear up any future confusion from anyone: if you’re not interested in my unfiltered, typically cynical, honest opinion don’t invite me, I won’t take offense.  Even if you’re a vendor with products I like, I’ve probably got a box full of cynicism for your other product lines.

During the opening sessions I observed several things that struck me negatively:

  • A theme (intended or not) that Brocade was being led into new technologies by their customers.  Don’t get me wrong, listening to your customers and keeping your product in line with their needs is key to success.  That being said, if your customers are leading you into new technology you’ve probably missed the boat.  In most cases they’re being led there by someone else and dragging you along for the ride; that’s not sustainable.  IT vendors shouldn’t need to be dragged kicking and screaming into new technologies by customers.  This doesn’t mean chase every shiny object (squirrel!), but major trends should be investigated and invested in before you’re hearing enough customer buzz to warrant it.  Remember business isn’t just about maintaining current customers, it’s about growing by adopting new ones.  Especially for public companies, stagnant is as good as dead.
  • The term “Ethernet Fabric,” which is only used by Brocade; everyone else just calls it fabric.  This ties in closely with the next bullet.
  • A continued need to discuss commitment to pure Fibre Channel (FC) storage.  I don’t deny that FC will be around for quite some time and may even see some growth as customers with it embedded expand.  That being said, customers with no FC investment should be avoiding it like the plague, and as vendors and consultants we should be pushing more intelligent options to those customers.  You can pick apart technical details about FC vs. anything all day long, enjoy that on your own; the fact is twofold: running two separate networks is expensive and complex, and the differences in reliability, performance, etc. are fading if not gone.  Additionally, applications are being written in more intelligent ways that don’t require the high availability, low latency, siloed architecture of yesteryear.  Rather than clinging to FC like a sinking ship, vendors should be protecting customer investment while building and positioning the next evolution.  Quote of the day during a conversation in the hall: “Fibre Channel is just a slightly slower melting ice cube than we expected.”
  • An insistence that Ethernet fabric was a required building block of SDN.  I’d argue that while it can be a component it is far from required, and as SDN progresses it will be irrelevant completely.  More on this to come.
  • A stance that the network will not be commoditized was common throughout the day.  I’d say that’s either A) naïve or B) posturing to protect core revenue.  I’d say we’ll see network commoditization occur en masse over the next five years.  I’m specifically talking about the data center and a move away from specialized custom built ASICs, not the core routers, and not the campus.  Custom silicon is expensive and time-consuming to develop, but provides performance/latency benefits and arguably some security benefits.  As processor and off-the-shelf chip performance continues to increase exponentially this differentiator becomes less and less important.  What becomes more important is rapid adaptation to new needs.  SDN as a whole won’t rip and replace networking in the next five years, but its growth and the concepts around it will drive commoditization.  It happened with servers, then storage, while people made the same arguments.  Cheaper, faster to produce and ‘good-enough’ consistently wins out.

On the positive side Brocade has some vision that’s quite interesting as well as some areas where they are leading by filling gaps in industry offerings.

  • Brocade is embracing the concept of SDN and understands a concept I tweeted about recently: ‘Revolutions don’t sell.’  Customers want evolutionary steps to new technology.  Few if any customers will rip and replace current infrastructure to dive head first into SDN.  SDN is a complete departure from the way we network today, and will therefore require evolutionary steps to get there.  This is shown in their support of ‘hybrid’ OpenFlow implementations on some devices.  This means that OpenFlow implementations can run segregated alongside traditional network deployments.  This allows for test/dev or roll-out of new services without an impact on production traffic.  This is a great approach where other vendors are offering ‘either or’ options.
  • There was discussion of Brocade’s VXLAN gateway which was announced at VMworld.  To my knowledge this is the first offering in this much needed space.  Without a gateway VXLAN is limited to virtual only environments. This includes segregation from services provided by physical devices.  The Brocade VXLAN gateway will allow the virtual and physical networks to be bridged. (http://newsroom.brocade.com/press-releases/brocade-adx-series-to-unveil-vxlan-gateway-and-app-nasdaq-brcd-0923542) To dig deeper on why this is needed check out Ivan’s article: http://blog.ioshints.info/2011/10/vxlan-termination-on-physical-devices.html.
  • The new Brocade VDX 8770 is one bad ass mamma jamma.  With industry leading latency and MAC table capacity, along with TRILL based fabric functionality, it’s built for large scalable high-density fabrics.  I originally tweeted “The #BRCD #VDX8770 is a bigger badder chassis in a world with less need for big bad chassis.” After reading Ivan’s post on it I stand corrected (this happens frequently.)  For some great perspective and a look at specs take a read: http://blog.ioshints.info/2012/09/building-large-l3-fabrics-with-brocade.html.

On the financial side Brocade has been looking good and climbed over $6.00 a share.  There are plenty of conversations stating some of this may be due to upcoming shifts at the CEO level.  They’ve reported two great quarters and are applying some new focus towards federal government and other areas lacking in recent past. I didn’t dig further into this discussion.

During lunch I was introduced to one of the most interesting Brocade offerings I’d never heard of: ‘Brocade Network Subscription’ (http://www.brocade.com/company/how-to-buy/capital-solutions/index.page).  Basically you can lease your on-prem network from Brocade Capital.  This is a great idea for customers looking to shift CapEx to OpEx, which can be extremely useful.  I also received a great explanation of the value of a fabric underneath an SDN network from Jason Nolet (VP of Data Center Networking Group).  Jason’s position (summarized) is that implementing SDN adds a network management layer, rather than removing one.  With that in mind, the more complexity we remove from the physical network the better off we are.  What we’ll want for our SDN networks is fast, plug-and-play functionality with maximum usable links and minimal management.  Brocade VCS fabric fits this nicely.  While I agree with that completely, I’d also say it’s not the only way to skin that particular cat.  More to come on that.

For the last few years I’ve looked at Brocade as a company lacking innovation and direction.  They clung furiously to FC while the market began shifting to Ethernet, ignored cloud for quite a while, etc.  Meanwhile they burned down deals to purchase them and ended up where they are.  The overall messaging, while nothing new, did have undertones of change and new direction.  That’s refreshing to hear.  Brocade is embracing virtualization and cloud architectures without hitching their cart to a single hypervisor horse.  They are positioning well for SDN and the network market shifts.  Most impressively they are identifying gaps in the spaces they operate in and executing on them, both from a business and a technology perspective.  Examples of this are Brocade Network Subscription and the VXLAN gateway functionality, respectively.

Things are looking up and there is definitely something good happening at Brocade.  That being said they aren’t out of the woods yet.  For them, as a company, purchase is far-fetched, as the vendors that would buy them already have networking plays and would lose half of Brocade’s value by burning OEM relationships with the purchase.  The only real option from a sale perspective is for investors looking to carve them up and sell off the pieces individually.  A scenario like this wouldn’t bode well for customers.  Brocade has some work to do but they’ve got a solid set of products and great direction.  We’ll see how it pans out.  Execution is paramount for them at this point.

Final Note:  This blog was intended to stop there, but this morning I received an angry, accusatory email from Brocade’s head of corporate communications who was unhappy with my tweets.  I thought about posting the email in full, but have decided against it for the sake of professionalism.  Overall his email was an attack based on my tweets.  As stated, my tweets were not professional, but this type of email from someone in charge of corporate communications is well over the top as a response.  I forwarded the email to several analyst and blogger colleagues, a handful of whom have had similar issues with this individual.  One common theme in social media is that lashing out at bad press never does any good; a senior director in this position should know as much, but instead continues to slander and attack.  His team and colleagues seem to understand social media use, as they’ve engaged in healthy debate with me in regards to my tweets; it’s a shame they are not led from the front.
