Define The Cloud

The Intersection of Technology and Reality

Virtualizing the PCIe bus with Aprius

Joe Onisick (@JoeOnisick), December 6, 2010

One of the vendors that presented during Gestalt IT’s Tech Field Day 2010 in San Jose was Aprius (http://gestaltit.com/field-day/) (http://www.aprius.com/).  Aprius’s product virtualizes the PCIe I/O bus and pushes that PCIe traffic over 10GE to the server.  In Aprius’s model, an Aprius appliance houses multiple off-the-shelf PCIe cards and a proprietary Aprius initiator resides in each server.  The concept is to not only share PCIe devices among multiple servers but also allow the use of multiple types of PCIe cards on servers with limited slots.  Additionally there would be some implications for VMware virtualized servers, as you could potentially use VMware DirectPath I/O to present these cards directly to a VM.  Aprius’s main competitor is Xsigo, which provides a similar benefit using a PCIe appliance containing proprietary PCIe cards and pushing the I/O over standard 10G Ethernet or InfiniBand to the server NIC.  I look at the PCIe I/O virtualization space as very niche with limited use cases; let’s take a look at this in reference to Aprius.
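To make the model a bit more concrete, here is a minimal sketch of the bookkeeping this kind of I/O appliance implies: a pool of physical PCIe cards in the appliance, with individual cards presented to specific server initiators over 10GE. This is purely illustrative; the class and function names are hypothetical and are not Aprius’s actual software, and the hard part (tunneling PCIe transactions over Ethernet at low latency in hardware) isn’t shown here.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class PcieDevice:
        """A physical card installed in the I/O appliance (hypothetical model)."""
        slot: int
        description: str
        assigned_to: Optional[str] = None  # server initiator ID, or None if unassigned

    @dataclass
    class IoAppliance:
        """Tracks which appliance-resident PCIe cards each server initiator sees."""
        devices: List[PcieDevice] = field(default_factory=list)

        def assign(self, slot: int, initiator: str) -> None:
            # Present the card in `slot` to one server; the initiator card in that
            # server then exposes it to the local OS as if it were a local PCIe device.
            device = next(d for d in self.devices if d.slot == slot)
            if device.assigned_to is not None:
                raise ValueError(f"slot {slot} already assigned to {device.assigned_to}")
            device.assigned_to = initiator

        def view_for(self, initiator: str) -> List[PcieDevice]:
            # The subset of pooled cards a given server's OS would enumerate.
            return [d for d in self.devices if d.assigned_to == initiator]

    # Example: two cards in the appliance, the flash card handed to one server.
    appliance = IoAppliance(devices=[
        PcieDevice(slot=1, description="10GE NIC"),
        PcieDevice(slot=2, description="PCIe flash card"),
    ])
    appliance.assign(slot=2, initiator="server-42")
    print([d.description for d in appliance.view_for("server-42")])  # ['PCIe flash card']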

With the industry moving more and more toward x64 server virtualization using VMware, Hyper-V, and Xen, hardware compatibility lists come very much into play.  If a card is not on the list it most likely won’t work and is definitely not supported.  Aprius skates around this issue by using a card that appears transparent to the operating system and instead presents only the I/O devices assigned to a given server via the appliance.  This means the Aprius appliance should work with any given virtualization platform, but support will be another issue.  Until Aprius is on the Hardware Compatibility List (HCL) for a given hypervisor, I wouldn’t recommend it to my customers for virtualization.  Additionally, the biggest benefit I’d see for using Aprius in a virtualization environment would be passing VMs PCIe devices that aren’t traditionally virtualized (think fax modems, etc.).  This still wouldn’t be possible with the Aprius device, because those cards aren’t on the virtualization HCL either.

The next problem with these types of products is that the industry is moving to consolidate storage, network, and HPC traffic on the same wire.  This can be done with FCoE, iSCSI, NFS, CIFS, etc., or any combination you choose.  That move is minimizing the I/O card requirements in the server, and the need for specialized PCIe devices is getting smaller every day.  With fewer PCIe devices needed for any given server, what is the purpose of a PCIe aggregator?

Another use case of Aprius’s technology they shared with us was sharing a single card, for example a 10GE NIC, among several servers as a failover path rather than buying redundant cards per server.  This seems like a major stretch: it adds an Aprius appliance as a point of failure in your redundant path, and still requires an Aprius adapter in each server instead of the redundant NIC.

My main issue with both Aprius and Xsigo is that they both require me to put their boxes in my data path as an additional single point of failure.  You’re purchasing their appliance and their cards and using them to aggregate all of your server I/O, leaving their appliance as a single point of failure for multiple servers’ I/O requirements.  I just can’t swallow that unless I have some one-off type of need that can’t be solved any other way.

The question I neglected to ask Aprius’s CEO during the short period he joined us is whether the company was started with the intent to sell a product or the intent to sell a company.  My thinking is that the real answer is they’re only interested in selling enough appliances to get the company as a whole noticed and purchased.  The downside is that they don’t seem to have enough secret sauce that can’t be easily copied to be valuable as an acquisition.

The technology both Aprius and Xsigo market would really only be of use if purchased by a larger server vendor with a big R&D budget and some weight with the standards community, which could then push a PCIeoE standard to drive adoption.  Additionally, the appliances may have a play within that vendor’s blade architecture as a way of minimizing required blade components and increasing I/O flexibility, i.e. a PCIe slot blade/module that could be shared across the chassis.

Summary:

Aprius seems to be a fantastic product with a tiny market that will continue to shrink. This will never be a mainstream data center product, but it will fit the bill for niche issues and one-off deployments.  In their shoes my goal would be to court the server vendors and find a buyer before the technology becomes irrelevant or copied. The only competition I’m aware of in this space is Xsigo, and I think Aprius has the better shot based on deployment model: their proprietary card in each server becomes a non-issue if a server vendor buys them and builds it into the system board.


Quick Thoughts | Tags: Aprius, I/O Virtualization, PCIeoE, Tech Field Day



Comments

  1. Pingback: Tweets that mention Virtualizing the PCIe bus with Aprius — Define The Cloud -- Topsy.com
  2. Craig Thompson says:
    December 6, 2010 at 5:18 pm

    Joe – thanks for the post. You raise some good points, some of which we should discuss further. I’ll make a couple of points here and follow this up in more detail in other forums.

    We think of our technology as a means to reach devices that should be accessed via the PCIe protocol, but would benefit from being ‘network attached’ for scalability, sharing, wire-once, dynamic reassignment, etc. We have chosen 10GbE (regular or CEE) as the converged fabric of choice to transport PCIe.

    The devices you might want to access via Ethernet tend to fall into a couple of buckets:

    1) Devices that use PCIe for performance, mainly low latency, such as flash storage cards (think FusionIO, Virident, LSI Warhawk and the new PCIe SSD standards group), possibly GPUs, that can be scaled, pooled and shared when network attached. Our ‘secret sauce’ is a hardware implementation of PCIe tunneling over Ethernet that maintains low latency and scales bandwidth through multiple 10GbE pipes or eventually 40GbE. This is the no. 1 reason customers want to test our product.

    2) Devices that use a common protocol that isn’t converged on Ethernet (think SAS) but could be accessed via Ethernet through their PCIe interface and presented as ‘local devices’ that are ‘network attached’. Put another way, a server customer already uses FCoE or iSCSI via CEE but would like to add some ‘local’ drives via the same converged interface with close to native performance. LSI’s MegaRAID card with SR-IOV support is a great example of this type of resource. This is probably the second most common request from customers.

    3) Devices that could be converged on Ethernet, but the customer chooses not to because of a long history with the card, the driver or the vendor. A legacy FC HBA or a specific iSCSI initiator with HW offload or a qualified driver could be placed in the I/O appliance and presented to the servers as a local HBA/NIC via Ethernet. This provides a converged access layer with support for legacy I/O cards and standard Ethernet switches.

    Of course all this makes a ton of sense when the server 10G NIC or CNA has the ‘PCIe over Ethernet’ capability built in, is ‘free’ and can be a true single converged wire to run TCP/IP, FCoE, iSCSI, NAS, or PCIe. We’re working on this.

    Lastly, the hypervisor support issues you raise are valid ones. That’s why we focus our time and effort on getting this technology into the hands of OEMs that can solve those problems for us, rather than pushing product to end users in the short term, only to find a lack of support.

    Hope this addresses some of the points you raised. We’d be happy to talk further in the very near future.

    1. Joe Onisick says:
      December 6, 2010 at 5:46 pm

      Craig,

      Thanks for the reply and information. The use cases you describe definitely make sense, especially when you’re able to deliver a PCIeoE-capable 10G NIC/CNA built into the system board. If that card is able to provide both native DCB/10GE and PCIeoE access, that would have tremendous benefits. Overall, with the correct OEM support for the hardware there is major potential for the product, and in its current form it does fill a need, but the market is small.

      If you’d like to drop some links to more information or white papers, please feel free; I definitely want people to understand what you have to offer.

      Joe

  3. Pingback: Back From the Pile: Interesting Links, December 10, 2010 – Stephen Foskett, Pack Rat


Creative Commons License
This work by Joe Onisick and Define the Cloud, LLC is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License

Disclaimer

All brand and company names are used for identification purposes only. These pages are not sponsored or sanctioned by any of the companies mentioned; they are the sole work and property of the authors. While the author(s) may have professional connections to some of the companies mentioned, all opinions are that of the individuals and may differ from official positions of those companies. This is a personal blog of the author, and does not necessarily represent the opinions and positions of his employer or their partners.
©2025 Define The Cloud