Network Overlays: An Introduction

While network overlays are not a new concept, they have come back into the limelight, thanks to drivers brought on by large-scale virtualization. Several standards have been proposed to enable virtual networks to be layered over a physical network infrastructure: VXLAN, NVGRE, and STT. While each proposed standard uses different encapsulation techniques to solve current network limitations, they share some similarities. Let's look at how network overlays work in general.


Why We Need Network Abstraction

The move to highly virtualized data centers and cloud models is straining the network. Traditional data center networks were not designed to support the dynamic nature of today's workloads, but the emergence of highly virtualized environments is merely exposing issues that have always existed within network constructs. VLANs, VRFs, subnets, routing, security, and the like have been stretched well beyond their original intent. The way these constructs are currently used limits scale, application expansion, contraction, and mobility.


Data Center Overlays 101

I've been playing around with Show Me as a tool to add some whiteboarding to the blog.  Here's my first crack at it, covering data center network overlays.



The most viable competitor to VXLAN is NVGRE, which was proposed by Microsoft, Intel, HP, and Dell.  It is another encapsulation technique intended to allow virtual network overlays across the physical network.  Both techniques also remove the scalability issues with VLANs, which are bound at a max of 4096.  NVGRE uses Generic Routing Encapsulation (GRE) as the encapsulation method.  It uses 24 bits of the GRE key field to represent the Tenant Network Identifier (TNI).  Like VXLAN, this 24-bit space allows for 16 million virtual networks.
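To make the header layout concrete, here is a minimal sketch (Python, standard library only) of packing the 24-bit TNI into the GRE key field. The constants follow the NVGRE draft, but this is an illustration rather than a production encoder.

```python
import struct

GRE_FLAGS_KEY_PRESENT = 0x2000       # K bit set: a 4-byte Key field follows
PROTO_TRANS_ETH_BRIDGING = 0x6558    # GRE protocol type for bridged Ethernet

def nvgre_header(tni: int, flow_id: int = 0) -> bytes:
    """Build the 8-byte GRE header NVGRE uses; the 24-bit Tenant
    Network Identifier shares the 32-bit Key field with an 8-bit
    flow ID."""
    if not 0 <= tni < 2 ** 24:
        raise ValueError("TNI must fit in 24 bits")
    return struct.pack("!HHI", GRE_FLAGS_KEY_PRESENT,
                       PROTO_TRANS_ETH_BRIDGING, (tni << 8) | flow_id)

# 2**24 distinct TNIs = the "16 million virtual networks" in the text
assert len(nvgre_header(0xABCDEF)) == 8
```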


While NVGRE provides optional support for broadcast via IP multicast, it does not rely on it for address learning as VXLAN does.  It instead leaves that up to an as-yet-undefined control plane protocol.  This control plane protocol will handle the mappings between the "provider" address used in the outer header to designate the remote NVGRE endpoint and the "customer" address of the destination.  The lack of reliance on flood-and-learn behavior replicated over IP multicast potentially makes NVGRE a more scalable solution, though this will depend on implementation and underlying hardware.

Another difference between VXLAN and NVGRE is in multi-pathing capability.  In its current form NVGRE provides little ability to be properly load-balanced by ECMP.  To enhance load balancing, the draft suggests the use of multiple IP addresses per NVGRE host, which allows for more flows.  This is a common issue with tunneling mechanisms, and it is solved in VXLAN by using a hash of the inner frame as the UDP source port, which provides for efficient load balancing by devices capable of 5-tuple balancing decisions.  There are other possible solutions proposed for NVGRE load balancing; we'll have to wait and see how they pan out.
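A sketch of the VXLAN trick mentioned above: hashing the inner frame's 5-tuple into the outer UDP source port so that ordinary 5-tuple ECMP spreads tunneled flows across paths. The CRC32 hash and the exact port range are illustrative choices here, not mandated by the spec.

```python
import zlib

def outer_udp_source_port(src_ip: str, dst_ip: str, proto: int,
                          sport: int, dport: int) -> int:
    """Fold the inner 5-tuple into the ephemeral port range so ECMP
    devices see per-flow entropy in the outer header."""
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    return 49152 + (zlib.crc32(key) % 16384)   # ports 49152-65535

# The same inner flow always hashes to the same outer port, so packets
# of one flow stay on one path and are not reordered:
p = outer_udp_source_port("10.0.0.1", "10.0.0.2", 6, 12345, 80)
assert p == outer_udp_source_port("10.0.0.1", "10.0.0.2", 6, 12345, 80)
```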

The last major difference between the two protocols is the use of jumbo frames.  VXLAN is intended to stay within a data center, where jumbo frame support is nearly ubiquitous, so it assumes that support is present and utilizes it.  NVGRE is intended to also be usable between data centers and therefore includes provisions to avoid fragmentation.


While NVGRE still needs much clarification, it is backed by some of the biggest companies in IT and has some potential benefits.  With the VXLAN-capable hardware ecosystem expanding quickly, you can expect to see more support for NVGRE as well.  Layer 3 encapsulation techniques as a whole solve the scalability issues inherent in bridging.  Additionally, due to their routed nature they provide for loop-free multi-pathed environments without the need for techniques such as TRILL and technologies based on it.  To reach the scale and performance required by tomorrow's data centers our networks need to change, and overlays such as these are one tool toward that goal.


Stateless Transport Tunneling (STT)

STT is another tunneling protocol along the lines of the VXLAN and NVGRE proposals.  As with both of those, the intent of STT is to provide a network overlay, or virtual network, running on top of a physical network.  STT was proposed by Nicira and is therefore, not surprisingly, written from a software-centric view rather than the network-centric view of the other proposals.  The main advantage of the STT proposal is its ability to be implemented in a software switch while still benefiting from NIC hardware acceleration.  The other advantage of STT is its use of a 64-bit network ID rather than the 24-bit IDs used by NVGRE and VXLAN.

The hardware offload STT grants relieves the server CPU of a significant workload in high-bandwidth systems (10G+).  This separates it from its peers, whose IP encapsulation in the soft switch negates the NIC's LSO and LRO functions.  STT accomplishes this by having the software switch insert header information that makes the packet look like a TCP packet, along with the required network virtualization fields.  This allows the guest OS to send frames of up to 64K to the hypervisor, which are encapsulated and sent to the NIC for segmentation.  While this does allow the hardware offload to be utilized, the reuse of a valid TCP header causes issues for many network appliances, or "middle boxes."
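As a rough illustration of why the offload matters, the arithmetic below shows how many wire packets the NIC's segmentation engine emits for one 64K frame handed down by the software switch. The 1500-byte MTU and 20-byte IP/TCP header sizes are assumptions for a plain TCP/IPv4 path.

```python
import math

def tso_segment_count(payload_len: int, mtu: int = 1500,
                      ip_hdr: int = 20, tcp_hdr: int = 20) -> int:
    """Number of segments the NIC produces when it splits one large
    TCP payload down to MSS-sized packets."""
    mss = mtu - ip_hdr - tcp_hdr        # 1460 bytes for a standard MTU
    return math.ceil(payload_len / mss)

# One 64 KB frame from the guest becomes 45 packets on the wire, with
# the per-packet work done in NIC hardware instead of on the host CPU:
assert tso_segment_count(64 * 1024) == 45
```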

STT is not expected to be ratified and is considered by some to have been proposed for informational purposes rather than with the end goal of a ratified standard.  With its misuse of a valid TCP header it would be hard pressed to win ratification.  STT does bring up the interesting issue of hardware offload.  The IP tunneling protocols mentioned above create extra overhead on host CPUs due to their inability to benefit from NIC acceleration techniques.  VXLAN and NVGRE are intended to be implemented in hardware to solve this problem, which is one reason both stick to a 24-bit network ID; that space still provides for 16 million tenants.  Hardware implementation is coming quickly in the case of VXLAN, with vendors announcing VXLAN-capable switches and NICs.


VXLAN Deep Dive – Part II

In part one of this post I covered the basic theory of operations and functionality of VXLAN.  This post will dive deeper into how VXLAN operates on the network.

Let’s start with the basic concept that VXLAN is an encapsulation technique.  Basically the Ethernet frame sent by a VXLAN connected device is encapsulated in an IP/UDP packet.  The most important thing here is that it can be carried by any IP capable device.  The only time added intelligence is required in a device is at the network bridges known as VXLAN Tunnel End-Points (VTEP) which perform the encapsulation/de-encapsulation.  This is not to say that benefit can’t be gained by adding VXLAN functionality elsewhere, just that it’s not required.
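The encapsulation cost is easy to quantify. Assuming an untagged outer Ethernet frame and IPv4, the commonly cited figure is 50 bytes of added headers per inner frame, which is why underlay MTUs are usually raised:

```python
# Bytes added to every inner Ethernet frame by VXLAN encapsulation
OUTER_ETHERNET = 14   # outer MAC header (no 802.1Q tag assumed)
OUTER_IPV4 = 20       # outer IP header, no options
OUTER_UDP = 8
VXLAN_HEADER = 8

overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER
assert overhead == 50   # underlay MTU must exceed the inner MTU by this much
```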


Providing Ethernet Functionality on IP Networks:

As discussed in Part 1, the source and destination IP addresses used for VXLAN are the Source VTEP and destination VTEP.  This means that the VTEP must know the destination VTEP in order to encapsulate the frame.  One method for this would be a centralized controller/database.  That being said VXLAN is implemented in a decentralized fashion, not requiring a controller.  There are advantages and drawbacks to this.  While utilizing a centralized controller would provide methods for address learning and sharing, it would also potentially increase latency, require large software driven mapping tables and add network management points.  We will dig deeper into the current decentralized VXLAN deployment model.

VXLAN maintains backward compatibility with traditional Ethernet and therefore must maintain some key Ethernet capabilities.  One of these is flooding (broadcast) and 'flood and learn' behavior.  I've covered some of this behavior before, but the summary is that when a switch receives a frame for an unknown destination (a MAC not in its table) it will flood the frame to all ports except the one on which it was received.  Eventually the frame will reach the intended device, and the reply that device sends will allow the switch to learn the MAC's location.  When switches see source MACs that are not in their table they will 'learn,' or add, them.
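The flood-and-learn behavior described above can be sketched in a few lines of Python. This toy bridge is an illustration of the logic, not how any particular switch implements it:

```python
class LearningBridge:
    """Toy flood-and-learn bridge: learn source MACs on arrival,
    flood frames for unknown destinations, unicast known ones."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                      # MAC -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port        # learn the source
        out_port = self.mac_table.get(dst_mac)
        if out_port is None:                     # miss: flood
            return sorted(self.ports - {in_port})
        return [out_port]                        # hit: forward

bridge = LearningBridge([1, 2, 3])
assert bridge.receive(1, "A", "B") == [2, 3]    # B unknown: flood
assert bridge.receive(2, "B", "A") == [1]       # A was learned on port 1
assert bridge.receive(1, "A", "B") == [2]       # B now known on port 2
```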

VXLAN encapsulates over IP, and IP networks are typically designed for unicast traffic (one-to-one), meaning there is no inherent flood capability.  To mimic flood-and-learn on an IP network, VXLAN uses IP multicast, which provides a method for distributing a packet to a group.  This use of IP multicast can be a contentious point within VXLAN discussions because most networks aren't designed for IP multicast, IP multicast support can be limited, and multicast itself can be complex depending on implementation.

Within VXLAN, each VXLAN segment ID is subscribed to a multicast group.  Multiple VXLAN segments can subscribe to the same group; this minimizes configuration but increases unneeded network traffic.  When a device attaches to a VXLAN segment on a VTEP where that segment was not previously in use, the VTEP joins the IP multicast group assigned to that segment and starts receiving messages.
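One hypothetical way to express the segment-to-group assignment: with fewer multicast groups than VXLAN segments, segments share groups, trading extra flooded traffic for less multicast state on the underlay. The modulo policy below is purely illustrative.

```python
def multicast_group_for(segment_id: int, group_pool: list) -> str:
    """Map a VXLAN segment ID onto a configured pool of multicast
    groups; segments that land on the same group will receive each
    other's flooded traffic."""
    return group_pool[segment_id % len(group_pool)]

pool = ["239.1.1.1", "239.1.1.2"]
# Two different segments sharing one group (the trade-off in the text):
assert multicast_group_for(1001, pool) == multicast_group_for(1003, pool)
assert multicast_group_for(1001, pool) != multicast_group_for(1002, pool)
```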


In the diagram above we see the normal operation in which the destination MAC is known and the frame is encapsulated in IP using the source and destination VTEP address.  The frame is encapsulated by the source VTEP, de-encapsulated at the destination VTEP and forwarded based on bridging rules from that point.  In this operation only the destination VTEP will receive the frame (with the exception of any devices in the physical path, such as the core IP switch in this example.)


In the example above we see an unknown MAC address (the MAC-to-VTEP mapping does not exist in the table).  In this case the source VTEP encapsulates the original frame in an IP multicast packet with the destination IP of the associated multicast group.  This frame is delivered to all VTEPs participating in the group.  Ideally the participating VTEPs will only be those with devices attached to that VXLAN segment, but because multiple VXLAN segments can share the same IP multicast group this is not always the case.  The VTEP with the connected device de-encapsulates and forwards normally, adding the mapping for the source VTEP if required.  Any other VTEP that receives the packet can learn the source VTEP/MAC mapping if required and then discard the frame.  This process is the same for other traditionally flooded frames such as ARP.  The diagram below shows the logical topologies for both traffic types discussed.


As discussed in Part 1 VTEP functionality can be placed in a traditional Ethernet bridge.  This is done by placing a logical VTEP construct within the bridge hardware/software.  With this in place VXLANs can bridge between virtual and physical devices.  This is necessary for physical server connectivity, as well as to add network services provided by physical appliances.  Putting it all together the diagram below shows physical servers communicating with virtual servers in a VXLAN environment.  The blue links are traditional IP links and the switch shown at the bottom is a standard L3 switch or router.  All traffic on these links is encapsulated as IP/UDP and broken out by the VTEPs.



VXLAN provides backward compatibility with traditional VLANs by mimicking broadcast and multicast behavior through IP multicast groups.  This functionality provides for decentralized learning by the VTEPs and negates the need for a VXLAN controller.


VXLAN Deep Dive

I've been spending my free time digging into network virtualization and network overlays.  This is part 1 of a two-part series.  By far the most popular virtualization technique in the data center is VXLAN.  This has as much to do with Cisco and VMware backing the technology as with the tech itself.  That being said, VXLAN is targeted specifically at the data center and is one of several similar solutions, such as NVGRE and STT.  VXLAN's goal is to allow dynamic, large-scale, isolated virtual L2 networks to be created for virtualized and multi-tenant environments.  It does this by encapsulating frames in VXLAN packets.  The standard for VXLAN is under the scope of the IETF NVO3 working group.


VXLAN Frame

The VXLAN encapsulation method is IP based and provides for a virtual L2 network.  With VXLAN the full Ethernet Frame (with the exception of the Frame Check Sequence: FCS) is carried as the payload of a UDP packet.  VXLAN utilizes a 24-bit VXLAN header, shown in the diagram, to identify virtual networks.  This header provides for up to 16 million virtual L2 networks.
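For reference, the 8-byte VXLAN header can be packed and unpacked with nothing but the standard library. The layout (a flags word with the "I" bit set, then the 24-bit VNI shifted into the upper bits of the second word) follows RFC 7348, though this sketch skips the reserved-field checks a real implementation would do.

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08000000     # the "I" bit in the first 32-bit word

def pack_vxlan_header(vni: int) -> bytes:
    """Pack the 8-byte VXLAN header for a 24-bit network ID."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", VXLAN_FLAG_VNI_VALID, vni << 8)

def unpack_vni(header: bytes) -> int:
    """Recover the VNI from a packed header."""
    _flags, word2 = struct.unpack("!II", header)
    return word2 >> 8

assert unpack_vni(pack_vxlan_header(1001)) == 1001
assert len(pack_vxlan_header(0)) == 8
```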

Frame encapsulation is done by an entity known as a VXLAN Tunnel Endpoint (VTEP).  A VTEP has two logical interfaces: an uplink and a downlink.  The uplink is responsible for receiving VXLAN frames and acts as a tunnel endpoint with an IP address used for routing VXLAN encapsulated frames.  These IP addresses are infrastructure addresses and are separate from the tenant IP addressing for the nodes using the VXLAN fabric.  VTEP functionality can be implemented in software, such as a virtual switch, or in the form of a physical switch.

VXLAN frames are sent to the IP address assigned to the destination VTEP; this IP is placed in the Outer IP DA.  The IP of the VTEP sending the frame resides in the Outer IP SA.  Packets received on the uplink are mapped from the VXLAN ID to a VLAN, and the Ethernet frame payload is sent as an 802.1Q Ethernet frame on the downlink.  During this process the inner MAC SA and VXLAN ID are learned in a local table.  Packets received on the downlink are mapped to a VXLAN ID using the VLAN of the frame.  A lookup is then performed within the VTEP L2 table using the VXLAN ID and destination MAC; this lookup provides the IP address of the destination VTEP.  The frame is then encapsulated and sent out the uplink interface.
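The lookup and learning described above amount to a table keyed on (VXLAN ID, MAC). A minimal sketch, with hypothetical method names and a documentation-range example address:

```python
class VtepTable:
    """Sketch of the VTEP L2 table: maps (VXLAN ID, inner MAC) to the
    remote VTEP IP, populated by learning on the uplink and consulted
    before encapsulating on the downlink."""

    def __init__(self):
        self.entries = {}                    # (vni, mac) -> remote VTEP IP

    def learn(self, vni, inner_src_mac, remote_vtep_ip):
        """Called for frames received on the uplink."""
        self.entries[(vni, inner_src_mac)] = remote_vtep_ip

    def lookup(self, vni, inner_dst_mac):
        """Destination VTEP IP, or None (flood via the multicast group)."""
        return self.entries.get((vni, inner_dst_mac))

table = VtepTable()
table.learn(1001, "11:11:11:11:11:11", "192.0.2.10")    # example address
assert table.lookup(1001, "11:11:11:11:11:11") == "192.0.2.10"
assert table.lookup(1001, "22:22:22:22:22:22") is None  # unknown: flood
```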


Using the diagram above for reference, a frame entering the downlink on VLAN 100 with a destination MAC of 11:11:11:11:11:11 will be encapsulated in a VXLAN packet whose outer destination address is the IP of the destination VTEP.  The outer source address will be the IP of this VTEP (not shown) and the VXLAN ID will be 1001.

In a traditional L2 switch a behavior known as flood and learn is used for unknown destinations (i.e., a MAC not stored in the MAC table).  This means that if there is a miss when looking up the MAC, the frame is flooded out all ports except the one on which it was received.  When a response is sent, the MAC is learned and written to the table.  The next frame for the same MAC will not incur a miss because the table will reflect the port it exists on.  VXLAN preserves this behavior over an IP network using IP multicast groups.

Each VXLAN ID has an assigned IP multicast group to use for traffic flooding (the same multicast group can be shared across VXLAN IDs.)  When a frame is received on the downlink bound for an unknown destination it is encapsulated using the IP of the assigned multicast group as the Outer DA; it’s then sent out the uplink.  Any VTEP with nodes on that VXLAN ID will have joined the multicast group and therefore receive the frame.  This maintains the traditional Ethernet flood and learn behavior.

VTEPs are designed to be implemented as a logical device on an L2 switch.  The L2 switch connects to the VTEP via a logical 802.1Q VLAN trunk.  This trunk contains a VXLAN infrastructure VLAN in addition to the production VLANs.  The infrastructure VLAN is used to carry VXLAN encapsulated traffic to the VXLAN fabric.  The only member interfaces of this VLAN will be the VTEP's logical connection to the bridge itself and the uplink to the VXLAN fabric.  This interface is the 'uplink' described above, while the logical 802.1Q trunk is the downlink.



VXLAN is a network overlay technology designed for data center networks.  It provides massively increased scalability over VLAN IDs alone while allowing for L2 adjacency over L3 networks.  The VXLAN VTEP can be implemented in both virtual and physical switches, allowing the virtual network to map to physical resources and network services.  VXLAN currently has both wide support and hardware adoption in switching ASICs and NICs, as well as virtualization software.


Something up Brocade’s Sleeve, and it looks Good

Brocade's got some new tricks up their sleeve and they look good.  For far too long Brocade fought against convergence to protect its FC install base while it caught up.  This bled over into their Ethernet messaging and hindered market growth and comfort levels there.  Overall they appeared as a company missing the next technology waves and clinging desperately to the remnants of a fading requirement: pure storage networks.  That has all changed; Brocade is embracing Ethernet and focusing on technology innovation that is relevant to today's trends and business.

The Hardware:

Brocade's VDX 8770 is their flagship modular switch for Brocade VCS fabrics.  While at first I scoffed at the idea of bigger chassis switches for fabrics, it turns out I was wrong (happens often): I forgot about scale.  These fabrics will typically be built in core/edge or spine/leaf designs, often using End of Row (EoR) rather than Top of Rack (ToR) designs to reduce infrastructure.  This leaves max scalability bound by a combination of port count and switch count, dependent on several factors such as interconnect ports.  Switch count will typically be limited by fabric software limitations, either real or due to testing and certification processes.  Having high-density, modular, fabric-capable switches helps solve scalability issues.

Some of the more interesting features:

  • Line-rate 40GE
  • “Auto-trunking” ISLs (multiple links between switches will bond automatically.)
  • Multi-pathing at layers 1, 2 and 3
  • Dynamic port-profile configuration and migration for VM mobility
  • 100GE ready
  • 4 µs latency with 4 Tbps switching capacity
  • Support for 384,000 MAC addresses per fabric for massive L2 scalability
  • Support for up to 8000 ports in a VCS fabric
  • 4 and 8 slot chassis options
  • Multiple default gateways for load-balancing routing

The Software:

The real magic is Brocade’s fabric software.  Brocade looks at the fabric as the base on which to build an intelligent network, SDN or otherwise.  As such the fabric should be: resilient, scalable and easy to manage.  In several conversations with people at Brocade it was pointed out that SDN actually adds a management layer.  No matter how you slice it the SDN software overlays a physical network that must be managed.  Minimizing configuration requirements at this level simplifies the network overall.  Additionally the fabric should provide multi-pathing without link blocking for maximum network throughput. 

Brocade executes on this with VCS fabric.  VCS provides an easy-to-set-up and easy-to-manage fabric model.  Operations like adding a link for bandwidth are done with minimal configuration through tools like 'auto-trunking': ports identified as fabric ports are built into the network topology automatically.  They also provide impressive scalability numbers, with support for 384,000 MACs, 352,000 IPv4 routes, 88,000 IPv6 routes, and 8000 ports.

One surprise to me was that Brocade is doing this using custom silicon.  With companies like Arista and Nicira (now part of VMware) touting commodity hardware as the future, why is Brocade spending money on silicon?  The answer is latency.  If you want to do something at line rate it must be implemented in hardware.  Merchant silicon is adept at keeping pace with the cutting edge in things like switching latency and buffering but is slow to implement new features.  This is due to addressable market: merchant silicon manufacturers want to ensure that the cost of hardware design and manufacturing will be recouped through bulk sales to multiple manufacturers.  This means features must have wide applicability and typically be standards-driven before being implemented.

Brocade saw the ability to innovate with features while maintaining line rate as an advantage worth the additional cost.  This allows Brocade to differentiate themselves, and their fabric, from vendors relying solely on merchant silicon.  Additionally, they position their fabric as enough of an advantage to be worth the additional cost when implementing SDN, for the reasons listed above.


Brocade is making some very smart moves and coming out from under the FC rock.  The technology is relevant and timely, but they will still have an uphill battle gaining the confidence of network teams.  They will have to rely on their FC data center heritage to build that confidence and expand their customer base.  The key now will be in execution; it will be an exciting ride.


The Art of Pre-Sales Part II: Showing Value

Part I of this post received quite a few page views and positive feedback, so I thought I'd expand on it.  Last week on the Twitters I made a comment regarding sales engineers showing value via revenue ($$) and got a lot of feedback.  While I will touch on a couple of points briefly, this post is not intended as a philosophical discussion of how engineers 'should be judged.'  Quite frankly, if you're an engineer the only thing that matters is how you are judged (for the time being at least).  This is about understanding and showing your value.  Don't get wrapped around the axle on right and wrong or principles.  While I don't always follow my own advice, I've often found that the best way to change the system is by playing by its rules and becoming a respected participant.

A move to pre-sales is often a hard transition for an engineer to make; I discuss some of the thought process in the first post linked above.  This post focuses on transitioning the way in which you show your value, providing some tools to assist in career and salary growth rather than job performance itself.  In a traditional engineering role you are typically graded on performance of duties, engineering acumen, and possibly certifications showing your knowledge and growth.  When transitioning to a sales engineer role those metrics can and will change.  There are several key concepts that will assist in showing your value and reaping the rewards, such as salary increases and promotion.

  1. Understand the metrics
  2. Adapt to the metrics
  3. Gather the data
  4. Sell yourself

Understand the Metrics

The first key is to understand the metrics on which you are graded.  While this seems to be a straightforward concept, it is often missed.  This is best discussed up front when accepting the new role; prior to acceptance you often have more of a say in how those things occur.  Each company, organization, and even team often uses different metrics.  I've had hybrid pre-sales/delivery roles where upper management judged my performance primarily on billable hours.  This meant that the work I did up front (pre-sale) held little to no value, no matter how influential it may have been on closing the deal.  I've also held roles that focused primarily on sales influence, basically on revenue.  In most cases you will find a combination of metrics used, and you want to be aware of these.  If you are not focused on the right areas, the value you provide may go unnoticed.  In the first example above, if I'd spent all of my time in front of customers selling deals but never implementing, my value would have been minimized.

Understanding the metrics is the first step; it lets you know what you'll be measured on.  In some cases those metrics are black and white and therefore easy.  For instance, when I was an active duty Marine, E1-E5 promotion was about 70-80% based on physical fitness test (PFT) and rifle marksmanship qualification scores.  These not only counted on their own but were also factored again into portions of proficiency and conduct marks, which counted for the other portion of promotion.  This meant that a Marine could much more easily move up by focusing on shooting and pull-ups than on job proficiency.  This post is not about gaming the system, but that example shows that knowing the system is important.

Adapt to the metrics

Let me preface by saying I do not advocate gaming the system, or focusing solely on one area that you know is thoroughly prized while ignoring the others.  That is nothing more than brown-nosing, and you'll quickly lose the respect of your peers.  Instead, adapt, where needed, to the metrics you're measured on.  It's not about dropping everything to focus on one area; it's ensuring you are focusing on all areas that are used to assess your performance.  Maybe certifications weren't important where you were but are now required: get on it.  Additionally, remember that anything that can be easily measured probably is.  Intangibles, or items of a subjective nature, are difficult tools to measure performance on.  That doesn't mean they aren't or shouldn't be used; it's just a fact.  Because of that, understand the tangibles and ensure you are showing value there.

Gather the data

In a sales organization, sales numbers are always going to be key.  Every company will use them differently but they always factor in.  Every sales engineer, at a high level, is there to assist in the sale of equipment, therefore those numbers matter.  Additionally, those numbers are very tangible, meaning you can show value easily.  Most organizations will use some form of CRM to track sales dollars and customers.  Engineering access to this tool varies, but the more you learn to use the system the better.  Showing the value of the deals you spend your time on is enormous, especially if it sets you apart from your peers.  Take the time to use these systems in the way your organization intends so that you can ensure you are tied to the revenue you generate.

Sales numbers are a great example but there are many others.  If you participate in a standards body, contribute frequently to internal wikis or email aliases, etc., gather that data.  These are parts of what you contribute and may go unnoticed; you need to ensure you have that data at your disposal.  Having the right data on hand is key to step four: selling yourself.

Sell yourself

This may be the most unnatural part of the entire process.  Most people don't enjoy, and aren't comfortable, presenting their own value.  That being said, this is also possibly the most important piece.  If you don't sell yourself you can't count on anyone else to do it.  When discussing compensation, whether initial salary or a raise, and promotion, always look at it from a pure business perspective.  The person you're having the discussion with has an ultimate goal of keeping the right people on board for the lowest cost; you have a goal of commanding the highest compensation possible for the value you provide.  Think of it as bargaining for a car: regardless of how much you may like your salesperson, you want to drive away with as much money in your pocket as possible.

If you've followed the first three steps this part should be easier.  You'll have documentation to support your value along the metrics evaluated; bring it.  Don't expect your manager to have looked at everything or to have it handy.  Having these things ready helps you frame the discussion around your value, and puts you in charge.  Additionally, it shows that you know your own value.  Don't be afraid to present who you are and what you bring to the table.  Also don't be afraid to push back.  It can be nerve-racking to hear of a 3% raise and ask for 6%, or to push back on a salary offer for another 10K, but that doesn't mean you shouldn't do it.  Remember, you don't have to make demands, and there is no harm in asking.

Phrasing is key here and practice is always best.  Remember you are not saying you’ll leave, you’re asking for your value.  Think in phrases like, “I really appreciate what you’re offering but I’d be much more comfortable at $x and I think my proven value warrants it.”  I’m not saying to use that line specifically but it does ring in the right light.  In these discussions you want to show three things:

  1. That you are appreciative of the position/opportunity
  2. That you know your value
  3. That your value is tangible and proven


There are several other factors I always recommend focusing on:

  • Teamwork – this is not only easily recognizable as value,  it is real value.  A team that works together and supports one another will always be more successful than a group of rock stars.  Share knowledge freely and help your peers wherever possible, even if they are not tied to the same direct team.
  • Leadership – You don't need a title to lead.  Set an example and exemplify what you'd like to see in others.  This is one I must constantly remind myself of and fail at often, but it's key.  Lead from the front and people will follow.
  • Professionalism – As a Marine we had a saying to the effect of "Act at the rank you want to be."  Your dress, appearance and professionalism should always be at the level you want to reach, not where you are.  This not only assists in getting there, but also in the transition once acquired.  Have you ever seen an engineer come in wearing jeans and a polo one day, then a shirt and slacks the next after a promotion?  Looks pretty unnatural, doesn't it?  If that engineer had already been acting the part it would have been a natural and expected transition.
  • Commend excellence – When one of your colleagues in any realm does something above and beyond, commend it.  Send a thank-you and brief description to them and cc their manager, or to their manager and cc them.  This helps them with steps three and four, but also shows that you noticed.
  • Technical knowledge – While it should go without saying, I won’t let that be.  Always maintain your knowledge and stay sharp. 
  • Know your market value – This can be difficult but there are tools available.  One suggestion for this is using a recruiter.  A good recruiter wants you to command top dollar because it increases their commission, this combined with their market knowledge will help you place yourself.

Do’s and don’ts

  • Do – Self-assessments.  I never like to walk into a review and be surprised.  I do thorough self-assessments, in the format my employer uses, prior to a review.  When possible I present my assessment rather than allow the opposite.  I always expect to have more areas of improvement listed than they do.
  • Don't – Use ultimatums.  The best example of this is receiving another offer and using it to strong-arm your employer into more money.  If you have an offer you intend to use to negotiate, make sure it's one you intend to take.  Also know that this is a one-time tactic; you won't ever be able to use it again with your employer.
  • Do – Strive for improvement.  Recognize where you can improve.  Apply as much honesty as possible to self-reviews and assessments.
  • Don't – Blame.  Look for the common denominator: if you've been passed over multiple times for promotion, ask why.  Don't get stuck in the rut of blaming others for things you can improve.  Even if it was someone else's fault you may find something you can do better.


In any professional environment, knowing and showing your value is important.  Most of this is specific to a pre-sales role but can be used more widely.  The short version is knowing how to show your value and showing it.  Remember you work to get paid, even if you love what you do.


A Salute to Greatness

There are two things I’ve spent my life doing: being a class clown (laughed at or with is your choice) and building my career.  Since I was 16 I’ve worked no less than 40-hour weeks, and more consistently been immersed in IT upwards of 80.  I have rarely taken time off; I typically watch my PTO disappear on a spreadsheet on January first of each year.  If you count my five years of proud service to my country as a Marine, you can do the math on the fact that being a Marine is a 24/7 occupation — scratch that, life.  I’ve striven to learn, to advance, and to grow both personally and professionally.  I’ve also caught many lucky breaks, more than I deserved.  Most of those breaks came in the form of mentors who saw something better than I was in me and helped me mold myself into it (if you’re not aware, the best mentors are merely guides who help you see the path.  The work is always yours.)  The luckiest break I’ve had has been my employment with World Wide Technology.

WWT is a highly awarded $5 billion systems integrator and VAR that has been included in Fortune’s 100 Best Companies to Work For.  While impressive in and of itself, that does not scratch the surface of what makes WWT amazing.  WWT’s culture is the core of both its success and its position on Fortune’s list.  It is a culture of excellence, intelligence, and talent, but more importantly of integrity, teamwork, and valuing its people.  In the nearly two and a half years I have been with WWT, I have built both professional relationships and friendships with some of the best of the best in all aspects of the IT business.  Every day I am impressed by someone, something, or the company as a whole.  The knowledge of the engineers, the dedication of the teams, the loyalty and camaraderie are unmatched.  But still, that’s not everything that makes WWT such a great place.

I’ve tried to find the words to describe how WWT treats its people — the dedication the company, the executives, and the management show to them.  I cannot.  Instead I have one example of many that go unannounced, are not done for publicity, and in many cases are not even widely known internally.  Doug Kung was a WWT engineer I never had the pleasure of meeting.  He was well respected and liked by everyone who knew or worked with him.  Doug passed away in October of 2010 after a battle with cancer.  WWT as a company, at the direction of the executive team and directly in line with the company core values, supported Doug, his wife, and his two children through the entire process.  This went well beyond what was legally required, and beyond what could reasonably be expected.  The support did not stop with his passing: WWT annually arranges events to raise money for Doug’s family and matches the donations made.  While the story itself is a tragedy, the loss of a great person, this brief piece is an example of WWT’s character as a company.  As I said, this is one example of many.

The friends and connections I’ve made, the opportunities I’ve had, and the support I’ve been given at WWT are unmatched.  I thank WWT and the people who make it great for those opportunities.  With that being said, it is with great regret that I’ve come to the decision to part ways with WWT.  Events in my personal life have brought me to this decision, and I will be taking some time for myself.  Over the next couple of months I will be spending some much-needed time with family and friends.  It is long overdue, and that is the silver lining in everything.  I will do my best to stay abreast of technology trends and intend to immerse myself in technology areas that stretch my abilities (one can’t remain completely idle.)  As a note, this is not an issue of health; I am as healthy as I’ve ever been (mmm, bacon.)

If anyone is interested in contributing here and “Defining the Cloud,” the SDN, the Big Data, or any other buzzword, please contact me.  I’d hate to see a good search ranking go to waste ;)
