Where Are You?

Joe wrote an excellent guest blog on my website called To Blade Or Not To Blade and offered me the same opportunity. Being a huge fan of Joe’s, I’m honored. One of my favorite blog posts is his Data Center 101: Server Virtualization, in which Joe explained the benefits of server virtualization in the data center. I felt this post was appropriate because Joe showed us that virtualization is “supposed” to make life easier for Customers. However, many vendors have yet to deliver management tools that live up to that promise.

It’s a known fact that I’m a huge Underdog fan. However, what people don’t know is that Scooby-Doo is my second favorite cartoon dog. As a kid I always stayed current with the latest Underdog and Scooby-Doo after-school episodes, which probably explains why my mother was always upset with me for not doing my homework first. I always got a kick out of the fact that no matter how many times Mystery Inc. split up to find the ghost, it was always Scooby-Doo and Shaggy who managed (accidentally) to come face-to-face with the ghost while looking for food. Customers face the same issues that Scooby and Shaggy faced in ghost hunting: if a Customer was in VMware vCenter performing administrative tasks, there was no way to effectively manage HBA settings (the ghost) without hunting around or opening a separate management interface. Emulex has solved that issue with the new OneCommand Manager Plug-in for vCenter (OCM-VCp).

As a former Systems Administrator in a previous life, I understand the frustration of opening multiple management interfaces to complete a task. Emulex has already simplified infrastructure management with OneCommand. In OneCommand, Customers already have the ability to manage HBAs across all protocols and generations, see and change CEE settings, and perform batch firmware/driver parameter updates (among a myriad of other capabilities).


Not convinced? No problem. Let me introduce you to the OCM-VCp interface. Take a look: you now have the opportunity to centrally discover, monitor, and manage HBAs across the infrastructure from within vCenter, including vPort-to-VM associations. How cool is that? Very.


You get all the functions of the OneCommand HBA management application. No more looking for the elusive ghost called HBA settings. No more going back and forth between management interfaces, which only increases the probability of messing up the settings. Out of all the cool capabilities, here are the top four functions that I feel stand out for vCenter:

  • Diagnostic tools tab. Allows you to run PCI/internal/external loopback and POST tests on a specific port for a specific VM.
  • Driver Parameters tab. This tab is important to SAN/network administrators: it’s where you can update or change driver parameters. The cool thing is that you can make changes temporarily or save them to a file for batch infrastructure updates.
  • Maintenance tab. Allows you to update firmware (on a single host or via batch file) without rebooting the host.
  • CEE Settings tab. Very important for the Data Center Bridging Capability Exchange protocol (DCBX).


In my opinion this couldn’t have come soon enough. As more organizations look to do more with less (the virtualization principle), OCM-VCp will be the cornerstone of easing infrastructure management within VMware vCenter. There is no learning curve because the plug-in has the same look and feel as the standalone management interface; in other words, it’s very intuitive. So if you or your Customers are expanding their adoption of virtualization, take a serious look at this plug-in, because it’s going to make your life so much easier.



Access Layer Network Virtualization: VN-Tag and VEPA

One of the highlights of my trip to lovely San Francisco for VMworld was getting to join Scott Lowe and Brad Hedlund for an off-the-cuff whiteboard session.  I use the term ‘join’ loosely because I contributed nothing other than a set of ears.  We discussed a few things, all revolving around virtualization (imagine that, at VMworld.)  One of the things we discussed was virtual switching, and Scott mentioned a total lack of good documentation on VEPA, VN-Tag, and the differences between the two.  I’ve also found this to be true; the documentation that is readily available is:

  • Marketing fluff
  • Vendor FUD
  • Standards body documents which might as well be written in a Klingon/Hieroglyphics slang manifestation

This blog is my attempt to demystify VEPA and VN-Tag and place them both alongside their applicable standards (and by that I mean contribute to the extensive garbage info revolving around them both.)  Before we get into either, we’ll need to understand some history and the problems they are trying to solve.

First let’s get physical.  Looking at the physical access layer, we traditionally have two options for LAN connectivity: Top-of-Rack (ToR) and End-of-Row (EoR) switching topologies.  Both have advantages and disadvantages.


EoR topologies rely on larger switches placed on the end of each row for server connectivity.


Pros:

  • Fewer management points
  • Smaller Spanning-Tree Protocol (STP) domain
  • Less equipment to purchase, power, and cool


Cons:

  • More above/below-rack cable runs
  • More difficult cable modification, troubleshooting, and replacement
  • More expensive cabling


ToR utilizes a switch at the top of each rack (or close to it.)


Pros:

  • Less cabling distance/complexity
  • Lower cabling costs
  • Faster move/add/change for server connectivity


Cons:

  • Larger STP domain
  • More management points
  • More switches to purchase, power, and cool

Now let’s virtualize.  In a virtual server environment the most common way to provide Virtual Machine (VM) switching connectivity is a Virtual Ethernet Bridge (VEB); in VMware we call this a vSwitch.  A VEB is basically software that acts similarly to a Layer 2 hardware switch, providing inbound/outbound and inter-VM communication.  A VEB works well to aggregate multiple VMs’ traffic across a set of links as well as provide frame delivery between VMs based on MAC address.  Where a VEB is lacking is network management, monitoring, and security.  Typically a VEB is invisible and not configurable from the network team’s perspective.  Additionally, any traffic handled by the VEB internally cannot be monitored or secured by the network team.


Pros:

  • Local switching within a host (physical server)
    • Less network traffic
    • Possibly faster switching speeds
  • Common, well-understood deployment
  • Implemented in software within the hypervisor with no external hardware requirements


Cons:

  • Typically configured and managed within the virtualization tools by the server team
  • Lacks the monitoring and security tools commonly used within the physical access layer
  • Creates a separate management/policy model for VMs and physical servers
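The core VEB behavior described above can be sketched in a few lines. This is an illustrative model only (not any hypervisor’s actual API), showing MAC learning and the local inter-VM delivery that never reaches the physical network:

```python
# Minimal sketch of a Virtual Ethernet Bridge (VEB): a software Layer 2
# switch that learns source MACs and forwards frames between VM vNICs
# (or the uplink) based on destination MAC.

class VEB:
    def __init__(self):
        self.mac_table = {}  # MAC address -> port (VM vNIC or uplink)

    def receive(self, frame, in_port):
        # Learn the source MAC so replies can be switched locally
        self.mac_table[frame["src"]] = in_port
        out_port = self.mac_table.get(frame["dst"])
        if out_port is not None and out_port != in_port:
            return out_port   # known unicast: deliver directly
        return "flood"        # unknown destination or broadcast: flood

veb = VEB()
veb.receive({"src": "aa:aa", "dst": "bb:bb"}, in_port="vm1")         # floods (bb:bb unknown)
print(veb.receive({"src": "bb:bb", "dst": "aa:aa"}, in_port="vm2"))  # "vm1"
```

Note that the vm1-to-vm2 traffic never leaves the host, which is exactly the traffic the network team can’t see, monitor, or secure.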

These are the two issues that VEPA and VN-Tag look to address in some way.  Now let’s look at the two individually and what each tries to solve.

Virtual Ethernet Port Aggregator (VEPA):

VEPA is a standard being led by HP to provide consistent network control and monitoring for Virtual Machines (of any type.)  VEPA has been used by the IEEE as the basis for 802.1Qbg ‘Edge Virtual Bridging.’  VEPA comes in two major forms: a standard mode, which requires minor software updates to the VEB functionality as well as upstream switch firmware updates, and a multi-channel mode, which requires additional intelligence on the upstream switch.

Standard Mode:

The beauty of VEPA in its standard mode is its simplicity; if you’ve worked with me you know I hate complex designs and systems, they just lead to problems.  In standard mode the software upgrade to the VEB in the hypervisor simply forces each VM frame out to the external switch regardless of destination.  This causes no change for destination MAC addresses external to the host, but for destinations within the host (another VM in the same VLAN) it forces that traffic to the upstream switch, which forwards it back instead of handling it internally (called a hairpin turn.)  It’s this hairpin turn that creates the requirement for the upstream switch to have updated firmware: typical STP behavior prevents a switch from forwarding a frame back down the port it was received on (like the saying goes, don’t egress where you ingress.)  The firmware update allows the physical host and the upstream switch to negotiate a VEPA port, which then allows this hairpin turn.  Let’s step through some diagrams to visualize this.
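As a rough sketch (names and structures are illustrative, not the 802.1Qbg wire behavior), the change is small on both sides: the hypervisor bridge stops making local forwarding decisions, and the upstream switch relaxes the “never egress where you ingress” rule on VEPA-negotiated ports:

```python
# Behavioral sketch of VEPA standard mode. Two pieces change relative to
# a plain VEB/switch pair: (1) the host sends every frame upstream, and
# (2) the upstream switch performs the hairpin turn (reflective relay).

def vepa_host_forward(frame):
    # Unlike a VEB, there is no local MAC lookup: all frames go upstream,
    # even frames destined for a VM on the same host.
    return "uplink"

def upstream_switch(frame, in_port, mac_table):
    out_port = mac_table.get(frame["dst"], "flood")
    # Reflective relay: on a VEPA-enabled port the switch is allowed to
    # forward a frame back out the port it arrived on (out_port may equal
    # in_port), which standard behavior would normally prohibit.
    return out_port

# vm1 -> vm2, both behind the same host port ("port1") on the physical switch:
mac_table = {"aa:aa": "port1", "bb:bb": "port1"}
frame = {"src": "aa:aa", "dst": "bb:bb"}
assert vepa_host_forward(frame) == "uplink"
print(upstream_switch(frame, "port1", mac_table))  # "port1" (hairpinned back down)
```

The payoff is that the inter-VM frame now transits a physical port where ACLs, flow statistics, and monitoring can be applied.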


Again, the beauty of this VEPA mode is its simplicity.  VEPA simply forces VM traffic to be handled by an external switch.  This allows each VM frame flow to be monitored, managed, and secured with all of the tools available to the physical switch.  This does not provide any type of individual tunnel for the VM, or a configurable switchport, but it does allow for things like flow statistic gathering, ACL enforcement, etc.  Basically we’re just pushing the MAC forwarding decision to the physical switch and allowing that switch to perform whatever functions it has available on each transaction.  The drawback is that we are now performing one ingress and one egress for each frame that was previously handled internally.  This means there are bandwidth and latency considerations to be made.  Functions like Single Root I/O Virtualization (SR-IOV) and DirectPath I/O can alleviate some of the latency when implementing this.  Like any technology there are trade-offs that must be weighed.  In this case the added control and functionality should outweigh the added bandwidth and latency.

Multi-Channel VEPA:

Multi-Channel VEPA is an optional enhancement to VEPA that also comes with additional requirements.  Multi-Channel VEPA allows a single Ethernet connection (switchport/NIC port) to be divided into multiple independent channels or tunnels.  Each channel or tunnel acts as a unique connection to the network.  Within the virtual host these channels or tunnels can be assigned to a VM, a VEB, or a VEB operating in standard VEPA mode.  In order to achieve this, Multi-Channel VEPA utilizes a tagging mechanism commonly known as Q-in-Q (defined in 802.1ad), which adds a service tag (‘S-Tag’) in addition to the standard 802.1q VLAN tag.  This provides tunneling within a single pipe without affecting the 802.1q VLAN.  This method requires Q-in-Q capability in both the NICs and the upstream switches, which may require hardware changes.
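The S-Tag/C-Tag stacking can be illustrated by packing the two tag headers directly.  This is a simplified sketch (no MAC addresses, FCS, or priority bits) of the standard 802.1ad outer tag wrapped around an 802.1Q inner tag:

```python
import struct

# Q-in-Q tag stacking as used by Multi-Channel VEPA: an outer S-Tag
# (TPID 0x88a8) carries the channel, and the original 802.1Q C-Tag
# (TPID 0x8100) is preserved inside it, untouched.

def qinq_header(s_vid, c_vid, ethertype=0x0800):
    # Each tag is 16-bit TPID + 16-bit TCI (PCP/DEI omitted: priority 0)
    s_tag = struct.pack("!HH", 0x88A8, s_vid & 0x0FFF)  # outer service tag
    c_tag = struct.pack("!HH", 0x8100, c_vid & 0x0FFF)  # inner customer tag
    return s_tag + c_tag + struct.pack("!H", ethertype)

hdr = qinq_header(s_vid=5, c_vid=100)
print(hdr.hex())  # '88a80005810000640800' (S-Tag VID 5, C-Tag VID 100, IPv4)
```

Because the inner C-Tag is carried through unchanged, the VM’s own VLAN assignment survives the trip through the channel.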

VN-Tag:

The VN-Tag standard was proposed by Cisco and others as a potential solution to both of the problems discussed above: network awareness and control of VMs, and access layer extension without extending management and STP domains.  VN-Tag is the basis of 802.1Qbh ‘Bridge Port Extension.’  Using VN-Tag, an additional header is added to the Ethernet frame which allows individual identification of virtual interfaces (VIFs.)


The tag contents perform the following functions:

  • Ethertype: identifies the VN-Tag.
  • d (direction): 1 indicates that the frame is traveling from the bridge to the interface virtualizer (IV.)
  • p (pointer): 1 indicates that a vif_list_id is included in the tag.
  • vif_list_id: a list of downlink ports to which this frame is to be forwarded (replicated) for multicast/broadcast operation.
  • dvif_id: destination vif_id of the port to which this frame is to be forwarded.
  • l (looped): 1 indicates that this is a multicast frame that was forwarded out the bridge port on which it was received. In this case, the IV must check the svif_id and filter the frame from the corresponding port.
  • ver: version of the tag.
  • svif_id: the vif_id of the source of the frame.

The most important components of the tag are the source and destination VIF IDs which allow a VN-Tag aware device to identify multiple individual virtual interfaces on a single physical port.

VN-Tag can be used to uniquely identify and provide frame forwarding for any type of virtual interface (VIF.)  A VIF is any individual interface that should be treated independently on the network but shares a physical port with other interfaces.  Using a VN-Tag capable NIC or software driver these interfaces could potentially be individual virtual servers.  These interfaces can also be virtualized interfaces on an I/O card (i.e. 10 virtual 10G ports on a single 10G NIC), or a switch/bridge extension device that aggregates multiple physical interfaces onto a set of uplinks and relies on an upstream VN-tag aware device for management and switching.
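The tag fields listed above can be packed into a small header for illustration.  To be clear, the exact 802.1Qbh bit layout is more involved than this; the field packing below is an assumption made purely to show how a handful of bits plus two VIF IDs ride in front of the original frame (the 0x8926 EtherType is the one Cisco uses for VN-Tag):

```python
import struct

# Hedged sketch of a VN-Tag style header: packs the fields described
# above (d, p, l, ver, dvif_id, svif_id) into a 6-byte tag. The bit
# positions here are illustrative, not the normative wire format.

VNTAG_ETHERTYPE = 0x8926  # EtherType registered by Cisco for VN-Tag

def pack_vntag(d, p, dvif_id, looped, ver, svif_id):
    # Illustrative packing: d(1) p(1) dvif_id(14) | l(1) r(1) ver(2) svif_id(12)
    hi = (d << 15) | (p << 14) | (dvif_id & 0x3FFF)
    lo = (looped << 15) | ((ver & 0x3) << 12) | (svif_id & 0xFFF)
    return struct.pack("!HHH", VNTAG_ETHERTYPE, hi, lo)

# A frame heading from the bridge down to VIF 7, sourced from VIF 42:
tag = pack_vntag(d=1, p=0, dvif_id=7, looped=0, ver=0, svif_id=42)
print(tag.hex())  # '89268007002a'
```

The takeaway is that one physical port can now carry traffic for many uniquely addressed virtual interfaces, with the upstream VN-Tag aware device doing all the switching between them.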


Because of VN-Tag’s versatility, it’s possible to utilize it for both bridge extension and virtual networking awareness.  It also has the advantage of allowing individual configuration of each virtual interface as if it were a physical port.  The disadvantage of VN-Tag is that, because it adds to the Ethernet frame itself, the hardware must typically be modified to work with it.  VN-Tag aware switch devices are still fully compatible with traditional Ethernet switching devices because the VN-Tag is only used within the local system.  For instance, in the diagram above, VN-Tags would be used between the VN-Tag aware switch at the top of the diagram and the VIF, but the VN-Tag aware switch could be attached to any standard Ethernet switch.  VN-Tags are written on ingress to the VN-Tag aware switch for frames destined for a VIF, and stripped on egress for frames destined for the traditional network.

Where does that leave us?

We are still very early in the standards process for both 802.1Qbh and 802.1Qbg, and things are subject to change.  From what it looks like right now, the standards bodies will utilize VEPA as the basis for providing physical-type network controls to virtual machines, and VN-Tag to provide bridge extension.  Because of the way each is handled, they will be compatible with one another, meaning a VN-Tag based bridge extender would be able to support VEPA aware hypervisor switches.

Equally as important is what this means for today and today’s hardware.  There is plenty of Fear, Uncertainty, and Doubt (FUD) material out there intended to prevent product purchase because the standards process isn’t completed.  The question becomes what’s true and what isn’t; let’s take care of the answers FAQ style:

Will I need new hardware to utilize VEPA for VM networking?

No.  For standard VEPA mode only a software change will be required on the switch and within the hypervisor.  For Multi-Channel VEPA you may require new hardware, as it utilizes Q-in-Q tagging, which is not typically an access layer switch feature.

Will I need new hardware to utilize VN-Tag for bridge extension?

Yes.  VN-Tag bridge extension will typically be implemented in hardware, so you will require a VN-Tag aware switch as well as VN-Tag based port extenders.

Will hardware I buy today support the standards?

That really depends on how much change occurs with the standards before finalization and which tool you’re looking to use:

  • Standard VEPA – Yes
  • Multi-Channel VEPA – Possibly (if Q-in-Q is supported)
  • VN-Tag – Possibly

Are there products available today that use VEPA or VN-Tag?

Yes.  Cisco has several products that utilize VN-Tag: the Virtual Interface Card (VIC), the Nexus 2000, and the UCS I/O Module (IOM.)  Additionally, HP’s FlexConnect technology is the basis for Multi-Channel VEPA.


VEPA and VN-Tag both look to address common access layer network concerns, and both are well on their way to standardization.  VEPA looks to be the chosen method for VM-aware networking and VN-Tag for bridge extension.  Devices purchased today that rely on pre-standards versions of either protocol should maintain compatibility with the standards as they progress, but it’s not guaranteed.  That being said, standards are not required for operation and effectiveness; most standards start as unique features which are then submitted to a standards body.


Shakespearean Guest Post

I got all Hamlet with my guest post on Thomas Jones’ blog; check it out to address ‘To blade or not to blade.’



My First Podcast: ‘Coffee With Thomas’

I had the pleasure of joining Thomas Jones on his new podcast ‘Coffee With Thomas.’  His podcast is always good, well put together, and about 30 minutes long.  It’s done in a very refreshing conversational style, as if you’re having a cup of coffee.  If you’re interested in listening to us talk technology, UCS, Apple, UFC, and other topics, check it out: http://www.niketown588.com/2010/09/coffee-with-thomas-episode-5-wwts.html.


Thanks for the opportunity Thomas, that was a lot of fun!


Data Center 101: Server Virtualization

Virtualization is a key piece of modern data center design.  Virtualization occurs on many devices within the data center; conceptually, virtualization is the ability to create multiple logical devices from one physical device.  We’ve been virtualizing hardware for years: VLANs and VRFs on the network, volumes and LUNs on storage, and even our servers were virtualized as far back as the 1970s with LPARs.  Server virtualization hit the mainstream in the data center when VMware began effectively partitioning clock cycles on x86 hardware, allowing virtualization to move from big iron to commodity servers.

This post is the next segment of my Data Center 101 series and will focus on server virtualization, specifically virtualizing x86/x64 server architectures.  If you’re not familiar with the basics of server hardware take a look at ‘Data Center 101: Server Architecture’ (http://www.definethecloud.net/?p=376) before diving in here.

What is server virtualization:

Server virtualization is the ability to take a single physical server system and carve it up like a pie (mmmm pie) into multiple virtual hardware subsets. 

Each Virtual Machine (VM), once created (or carved out), will operate in a similar fashion to an independent physical server.  Typically each VM is provided with a set of virtual hardware on which an operating system and set of applications can be installed as if it were a physical server.

Why virtualize servers:

Virtualization has several benefits when done correctly:

  • Reduction in infrastructure costs, due to less required server hardware.
    • Power
    • Cooling
    • Cabling (dependent upon design)
    • Space
  • Availability and management benefits
    • Many server virtualization platforms provide automated failover for virtual machines.
    • Centralized management and monitoring tools exist for most virtualization platforms.
  • Increased hardware utilization
    • Standalone servers traditionally suffer from utilization rates as low as 10%.  By placing multiple virtual machines with separate workloads on the same physical server, much higher utilization rates can be achieved.  This means you’re actually using the hardware you purchased and are powering/cooling.
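The utilization math behind that last bullet is easy to sanity check.  The numbers below are illustrative assumptions, not measurements from any real environment:

```python
# Back-of-the-envelope consolidation math for the utilization claim above.

standalone_util = 0.10   # ~10% average utilization per standalone server
num_workloads = 8        # workloads consolidated onto one physical host
headroom = 0.20          # fraction of capacity reserved for peaks/failover

host_util = round(num_workloads * standalone_util, 2)
# Verify the consolidated host still leaves room for peak demand
assert host_util <= 1 - headroom, "too many workloads for the headroom target"

print(f"Host utilization after consolidation: {host_util:.0%}")  # 80%
print(f"Physical servers eliminated: {num_workloads - 1}")       # 7
```

Seven fewer servers to buy, power, and cool, while the remaining host finally earns its keep.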

How does virtualization work?

Typically, within an enterprise data center, servers are virtualized using a bare-metal hypervisor: a virtualization operating system that installs directly on the server without the need for a supporting operating system.  In this model the hypervisor is the operating system and the virtual machine is the application.


Each virtual machine is presented a set of virtual hardware upon which an operating system can be installed.  The fact that the hardware is virtual is transparent to the operating system.  The key components of a physical server that are virtualized are:

  • CPU cycles
  • Memory
  • I/O connectivity
  • Disk


At a very basic level, memory and disk capacity, I/O bandwidth, and CPU cycles are shared amongst the virtual machines.  This allows multiple virtual servers to utilize a single physical server’s capacity while maintaining a traditional OS-to-application relationship.  The reason this does such a good job of increasing utilization is that you’re spreading several applications across one set of hardware.  Applications typically peak at different times, allowing for a more constant state of utilization.

For example, imagine an email server: typically an email server is going to peak at 9am, possibly again after lunch, and once more before quitting time.  The rest of the day it’s greatly underutilized (that’s why marketing email is typically sent late at night.)  Now picture a traditional backup server; these historically run at night, when other servers are idle, to prevent performance degradation.  In a physical model each of these servers would have been architected for peak capacity to support the max load, but most of the day they would be underutilized.  In a virtual model they can both run on the same physical server and complement one another due to varying peak times.
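The complementary-peaks argument can be shown with two made-up hourly utilization profiles (percent of one host, 24 hourly samples each; purely illustrative numbers):

```python
# Combine an email server's daytime peaks with a backup server's
# overnight run and check that the combined load never exceeds one host.

email  = [10]*8 + [70, 40, 30, 35, 60, 40, 30, 30, 55, 20] + [10]*6  # peaks at 9am/1pm/5pm
backup = [80]*5 + [10]*14 + [60, 80, 80, 80, 80]                     # runs overnight

combined = [e + b for e, b in zip(email, backup)]
print(max(email), max(backup), max(combined))  # 70 80 90
```

Each workload would have needed its own peak-sized server, yet together they never pass 90% of a single host, because their peaks land in different hours.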

Another example of the use of virtualization is hardware refresh.  DHCP servers are a great example: they provide an automatic IP addressing system by leasing IP addresses to requesting hosts, and these leases are typically held for 30 days.  DHCP is not an intensive workload.  In a physical server environment it wouldn’t be uncommon to have two or more physical DHCP servers for redundancy.  Because of the light workload, these servers would use minimal hardware, for instance:

  • 800Mhz processor
  • 512MB RAM
  • 1x 10/100 Ethernet port
  • 16GB internal disk

If this physical server were 3-5 years old, replacement parts and service contracts would be hard to come by; additionally, because of hardware advancements, the server may be more expensive to keep than to replace.  When looking to refresh this server, the same hardware would not be available today; a typical minimal server would be:

  • 1+ Ghz Dual or Quad core processor
  • 1GB or more of RAM
  • 2x onboard 1GE ports
  • 136GB internal disk

The application requirements haven’t changed but hardware has moved on.  Therefore refreshing the same DHCP server with new hardware results in even greater underutilization than before.  Virtualization solves this by placing the same DHCP server on a virtualized host and tuning the hardware to the application requirements while sharing the resources with other applications.


Server virtualization has a great deal of benefits in the data center, and as such companies are adopting more and more virtualization every day.  The overall reduction in overhead costs such as power, cooling, and space, coupled with the increased hardware utilization, makes virtualization a no-brainer for most workloads.  Depending on the virtualization platform that’s chosen, there are additional benefits of increased uptime, distributed resource utilization, and increased manageability.


The Brocade FCoE Proposition

I recently realized that I, like much of the data center industry, have completely forgotten about Brocade lately.  There has been little talk about their FCoE front, Fibre Channel front, or CNAs.  Cisco and HP have been dominating social media with blade and FCoE battles, but I haven’t seen much coming from Brocade.  I thought it was time to take a good look.

The Brocade Portfolio:

Brocade 1010 and 1020 CNAs The Brocade 1010 (single port) and Brocade 1020 (dual port) Converged Network Adapters (CNAs) integrate 10 Gbps Ethernet Network Interface Card (NIC) functionality with Fibre Channel technology—enabling transport over a 10 Gigabit Ethernet (GbE) connection through the new Data Center Bridging (DCB) and Fibre Channel over Ethernet (FCoE) protocols, providing best-in-class LAN connectivity and I/O consolidation to help reduce cost and complexity in next-generation data center environments.
Brocade 8000 Switch The Brocade 8000 is a top-of-rack link layer (Layer 2) CEE/FCoE switch with 24 10 Gigabit Ethernet (GbE) ports for LAN connections and eight Fibre Channel ports (with up to 8 Gbps speed) for Fibre Channel SAN connections. This reliable, high-performance switch provides advanced Fibre Channel services, supports Ethernet and CEE capabilities, and is managed by Brocade DCFM.
Brocade FCOE10-24 Blade The Brocade FCOE10-24 Blade is a Layer 2 blade with cut-through non-blocking architecture designed for use with Brocade DCX and DCX-4S Backbones. It features 24 10 Gbps CEE ports and extends CEE/FCoE capabilities to Brocade DCX Backbones, enabling end-of-row CEE/FCoE deployment. By providing first-hop connectivity for access layer servers, the Brocade FCOE10-24 also enables server I/O consolidation for servers with Tier 3 and some Tier 2 applications.

Source: http://www.brocade.com/products-solutions/products/index.page?dropType=Connectivity&name=FCOE

The breadth of Brocade’s FCoE portfolio is impressive when compared to the other major players: Emulex and QLogic with CNAs, HP with FlexFabric for C-Class and the H3C S5820X-28C Series ToR, with only Cisco providing a wider portfolio: an FCoE and virtualization aware I/O card (VIC/Palo), blade switches (Nexus 4000), ToR/MoR switches (Nexus 5000), and an FCoE blade for the Nexus 7000.  This shows a strong commitment to the FCoE protocol on Brocade’s part, as does their participation in the standards body.

Brocade also provides a unique ability to standardize on one vendor from the server I/O card, through the FCoE network, to the Fibre Channel (FC) core switching.  Additionally, using the 10-24 blade, customers can collapse the FCoE edge into their FC core, providing a single-hop collapsed-core mixed FCoE/FC SAN.  That’s a solid proposition for a data center with a heavy investment in FC and a port count low enough to stay within a single chassis per fabric.

But What Does the Future Hold?

Before we take a look at where Brocade’s product line is headed, let’s look at the purpose of FCoE.  FCoE is designed as another tool in the data center arsenal for network consolidation.  We’re moving away from the cost, complexity and waste of separate networks and placing our storage and traditional LAN data on the same infrastructure.  This is similar to what we’ve done in the past in several areas, on mainframes we went from ESCON to FICON to leverage FC, our telephones went from separate infrastructures to IP based, we’re just repeating the same success story with storage.  The end goal is everything on Ethernet.  That end goal may be sooner for some than others, it all depends on comfort level, refresh cycle, and individual environment.

If FCoE is a tool for I/O consolidation and Ethernet is the end-goal of that, then where is Brocade heading?

This has been my question since I started researching and working with FCoE about three years ago.  As FCoE began hitting the mainstream media Cisco was out front pushing the benefits and announcing products, they were the first on the market with an FCoE switch, the Nexus 5000.  Meanwhile Brocade and others were releasing statements attempting to put the brakes on.  They were not saying FCoE was bad, just working to hold it off.

This makes a lot of sense from both perspectives.  The core of Cisco’s business is routing and switching, therefore FCoE is a great business proposition.  They’re also one of only two options for FC switching in the enterprise (Brocade and Cisco), so they have the FC knowledge.  Lastly, they had a series of products already in development.

From the perspective of Brocade and others, they didn’t have products ready to ship, and they didn’t have the breadth and depth in Ethernet, so they needed time.  Their marketing releases tended to become more and more positive towards FCoE as their products launched.

This also shows in Brocade’s product offering, two of the three products listed above are designed to maintain the tie to FC.

Brocade 8000:

This switch has 24x 10GE ports and 8x 8Gbps FC ports.  These ports are static onboard which means that this switch is not for you if:

  • You just need 10GE (iSCSI, NFS, RDMA, TCP, UDP, etc.)
  • You plan to fully migrate to FCoE (The FC ports then go unused.)
  • You only need FCoE (a small deployment using FCoE-based storage, which is available today.)

In comparison the competing product is the Nexus 5000 which has a modular design allowing customers to use all Ethernet/DCB or several combinations of Ethernet and FC at 1/2/4/8 Gbps.

Brocade FCoE 10/24 Blade:

This is an Ethernet blade for the DCX Fibre Channel director.  It ties Brocade’s FCoE director capabilities to an FC switch rather than an Ethernet switch.  Additionally, this blade only supports directly connected FCoE devices, which will limit overall scalability.

In comparison, the Cisco FCoE blade for the Nexus 7000 is a DCB-capable line card with FCoE capability coming by year’s end.  This merges FCoE onto the network backbone, where it’s intended to go.


If your purpose in assessing FCoE is to provide a consolidated edge topology for server connectivity, tying it back to a traditional FC SAN, then Brocade has a strong product suite for you.  If your end goal is consolidating the network as a whole, then it’s important to seriously consider any purchase of FC-based FCoE products.  That’s not to say don’t buy them; just understand what you’re getting and why you’re getting it.  For instance, if you need to tie to a Fibre Channel core now and don’t intend to replace it for 3-5 years, then the Brocade 8000 may work for you because it can be refreshed at the same time.

Several options exist for FCoE today, and most if not all of them have a good fit.  Assess first what you’re trying to accomplish and when, then look at the available products and decide what fits best.




So far this week VMworld has been fantastic.  VMware definitely throws one of the best industry events for us geeks who really want to talk shop.  There’s definitely a fair share of marketing fluff, but if you want to talk bits and bytes, the sessions and people are here for you.  I’ve had the pleasure of meeting several people I respect in the industry and hanging out with several others I know.  That’s really been the highlight of my time here; some of the offline conversations I’ve had make the trip worth it all on their own.

The best part of the conference for me so far has been hanging out answering questions at the World Wide Technology booth.  Shameless plug or not, it’s been an awesome experience.  We have three private cloud architectures on the Solutions Exchange floor up and running:

  • Secure Multi-Tenancy from NetApp, Cisco, and VMware
  • HP Matrix
  • vBlock from EMC, Cisco, and VMware

It’s phenomenal to have these three great solutions side by side and have the opportunity to talk to people about what each has to offer.  We’ve had a lot of great traffic and great questions at the booth, and I’ve really enjoyed the chats I’ve had with everyone.  With all of that, the part that has really made this a geekgasm for me is the experts who have stopped by to say hello and discuss the technology.  The picture below says it all: I had Brad Hedlund (@bradhedlund / www.bradhedlund.com) from Cisco and Ken Henault (@bladeguy / http://www.hp.com/go/bladeblog) from HP having a conversation with me in front of the booth.


If you’re not familiar, these guys are both at the top of their game within their respective companies and battle it out back and forth in the world of social media.  The conversation was great, and it’s good to see a couple of competitors get together, shake hands, and have a good discussion in front of a technical showcase of some of their top gear.  If you’re at the show and haven’t stopped by to say hello and see the gear, you’re missing out; get over to the booth and I’ll throw in a beer coozie!
