Virtualizing the PCIe bus with Aprius

One of the vendors that presented during Gestalt IT’s Tech Field Day 2010 in San Jose (http://gestaltit.com/field-day/) was Aprius (http://www.aprius.com/).  Aprius’s product virtualizes the PCIe I/O bus and pushes that PCIe traffic over 10GE to the server.  In Aprius’s model an Aprius appliance houses multiple off-the-shelf PCIe cards, and a proprietary Aprius initiator resides in each server.  The concept is not only to share PCIe devices among multiple servers but also to allow the use of multiple types of PCIe cards on servers with limited slots.  There would also be implications for VMware virtualized servers, as you could potentially use VMware DirectPath I/O to present these cards directly to a VM.  Aprius’s main competitor is Xsigo, which provides a similar benefit using a PCIe appliance containing proprietary PCIe cards and pushing the I/O over standard 10G Ethernet or InfiniBand to the server NIC.  I view the PCIe I/O virtualization space as very niche with limited use cases; let’s take a look at why, in reference to Aprius.
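To give a rough sense of what “pushing PCIe traffic over 10GE” involves, here is a toy sketch of wrapping a PCIe memory-write TLP in an Ethernet frame. Aprius’s actual wire format is proprietary and undocumented, so the EtherType, addresses, and simplified TLP header below are illustrative assumptions only:

```python
import struct

# Hypothetical EtherType chosen for illustration; Aprius's real
# encapsulation is proprietary and not publicly documented.
PCIE_OE_ETHERTYPE = 0x88FF

def build_mem_write_tlp(address, payload):
    """Build a simplified PCIe Memory Write TLP (3-DW header, 32-bit address).

    Follows the general TLP layout (fmt/type, length in DWs, requester ID,
    address) but omits fields such as traffic class and attributes for brevity.
    """
    length_dw = (len(payload) + 3) // 4
    dw0 = (0b010 << 29) | (0b00000 << 24) | (length_dw & 0x3FF)  # MWr, 3DW with data
    dw1 = (0x0100 << 16) | 0x0F   # requester ID 01:00.0, tag 0, byte enables
    dw2 = address & 0xFFFFFFFC    # DW-aligned target address
    return struct.pack(">III", dw0, dw1, dw2) + payload.ljust(length_dw * 4, b"\x00")

def encapsulate(dst_mac, src_mac, tlp):
    """Wrap the TLP in a plain Ethernet II frame."""
    return dst_mac + src_mac + struct.pack(">H", PCIE_OE_ETHERTYPE) + tlp

tlp = build_mem_write_tlp(0x90000000, b"\xde\xad\xbe\xef")
frame = encapsulate(b"\x02\x00\x00\x00\x00\x01", b"\x02\x00\x00\x00\x00\x02", tlp)
print(len(frame))  # 14-byte Ethernet header + 12-byte TLP header + 4-byte payload
```

The hard part, of course, isn’t the framing; it’s doing this in hardware at latencies low enough that the remote device still behaves like a local PCIe card, which is where vendors in this space claim their value.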

With the industry moving more and more toward x64 server virtualization using VMware, Hyper-V, and Xen, hardware compatibility lists come very much into play.  If a card is not on the list it most likely won’t work and is definitely not supported.  Aprius skates around this issue by using a card that appears transparent to the operating system and instead presents only the I/O devices assigned to a given server via the appliance.  This means the Aprius appliance should work with any given virtualization platform, but support will be another issue.  Until Aprius is on the hardware compatibility list (HCL) for a given hypervisor, I wouldn’t recommend it to my customers for virtualization.  Additionally, the biggest benefit I’d see for using Aprius in a virtualization environment would be passing VMs PCIe devices that aren’t traditionally virtualized (think fax modems, etc.).  Even that wouldn’t be possible with the Aprius device, because those cards aren’t on the virtualization HCL.

The next problem with these types of products is that the industry is moving to consolidate storage, network, and HPC traffic on the same wire.  This can be done with FCoE, iSCSI, NFS, CIFS, etc., or any combination you choose.  That move is minimizing the I/O card requirements in the server, and the need for specialized PCIe devices is getting smaller every day.  With fewer PCIe devices needed in any given server, what is the purpose of a PCIe aggregator?

Another use case Aprius shared with us was sharing a single card, for example a 10GE NIC, among several servers as a failover path rather than buying redundant cards per server.  This seems like a major stretch: it adds the Aprius appliance as a point of failure in your redundant path, and it still requires an Aprius adapter in each server instead of the redundant NIC.

My main issue with both Aprius and Xsigo is that they both require me to put their boxes in my data path as an additional single point of failure.  You’re purchasing their appliance and their cards and using them to aggregate all of your server I/O, leaving the appliance as a single point of failure for multiple servers’ I/O requirements.  I just can’t swallow that unless I have some one-off need that can’t be solved any other way.

The question I neglected to ask Aprius’s CEO during the short period he joined us is whether the company was started with the intent to sell a product or the intent to sell a company.  My suspicion is that they’re only interested in selling enough appliances to get the company as a whole noticed and purchased.  The downside is that they don’t seem to have enough secret sauce that can’t be easily copied to be valuable as an acquisition.

The technology both Aprius and Xsigo market would really only be of use if purchased by a larger server vendor with a big R&D budget and some weight with the standards community.  It could then be used to push a PCIe-over-Ethernet (PCIeoE) standard to drive adoption.  Additionally, the appliances may have a play within that vendor’s blade architecture as a way of minimizing required blade components and increasing I/O flexibility, i.e. a PCIe slot blade/module that could be shared across the chassis.

Summary:

Aprius seems to be a fantastic product with a tiny market that will continue to shrink.  This will never be a mainstream data center product, but it will fit the bill for niche issues and one-off deployments.  In their shoes my goal would be to court the server vendors and find a buyer before the technology becomes irrelevant or copied.  The only competition I’m aware of in this space is Xsigo, and I think Xsigo has the better shot based on its deployment model: its proprietary card in each server becomes a non-issue if a server vendor buys the company and builds the card into the system board.


Comments

  1. Joe – thanks for the post. You raise some good points, some of which we should discuss further. I’ll make a couple of points here and follow up in more detail in other forums.

    We think of our technology as a means to reach devices that should be accessed via the PCIe protocol but would benefit from being ‘network attached’ for scalability, sharing, wire-once, dynamic reassignment, etc. We have chosen 10GbE (regular or CEE) as the converged fabric of choice to transport PCIe.

    The devices you might want to access via Ethernet tend to fall into a couple of buckets:

    1) Devices that use PCIe for performance, mainly low latency, such as flash storage cards (think FusionIO, Virident, LSI Warhawk and the new PCIe SSD standards group), possibly GPUs, that can be scaled, pooled and shared when network attached. Our ‘secret sauce’ is a hardware implementation of PCIe tunneling over Ethernet that maintains low latency and scales bandwidth through multiple 10GbE pipes or eventually 40GbE. This is the no. 1 reason customers want to test our product.

    2) Devices that use a common protocol that isn’t converged on Ethernet (think SAS) but could be accessed via Ethernet through their PCIe interface and presented as ‘local devices’ that are ‘network attached’. Put another way, a server customer already uses FCoE or iSCSI via CEE but would like to add some ‘local’ drives via the same converged interface with close-to-native performance. LSI’s MegaRAID card with SR-IOV support is a great example of this type of resource. This is probably the 2nd most common request from customers.

    3) Devices that could be converged on Ethernet, but the customer chooses not to because of a long history with the card, the driver or the vendor. A legacy FC HBA, a specific iSCSI initiator with HW offload, or a qualified driver could be placed in the I/O appliance and presented to the servers as a local HBA/NIC via Ethernet. This provides a converged access layer with support for legacy I/O cards and standard Ethernet switches.

    Of course all this makes a ton of sense when the server 10G NIC or CNA has the ‘PCIe over Ethernet’ capability built in, is ‘free’ and can be a true single converged wire to run TCP/IP, FCoE, iSCSI, NAS, or PCIe. We’re working on this.

    Lastly, the hypervisor support issues you raise are valid ones. That’s why we focus our time and effort on getting this technology into the hands of OEMs that can solve those problems for us, rather than push product to end users in the short term, only to find a lack of support.

    Hope this addresses some of the points you raised. We’d be happy to talk further in the very near future.

    • Craig,

      Thanks for the reply and information. The use cases you describe definitely make sense, especially when you’re able to deliver a PCIeoE-capable 10G NIC/CNA built into the system board. If that card can provide both native DCB/10GE and PCIeoE access, that would have tremendous benefits. Overall, with the correct OEM support for the hardware there is major potential for the product, and even in its current form it does fill a need, but the market is small.

      If you’d like to drop some links to more information or white papers please feel free, definitely want people to understand what you have to offer.

      Joe


