Data Center Bridging

Data Center Bridging (DCB) is a group of IEEE standard protocols designed to support I/O consolidation.  DCB enables multiple protocols with very different requirements to run over the same Layer 2 10 Gigabit Ethernet infrastructure.  Because DCB is currently discussed alongside Fibre Channel over Ethernet (FCoE), it's not uncommon for people to think of DCB as part of FCoE.  This is not the case: while FCoE relies on DCB for proper treatment on a shared network, the DCB enhancements can be applied to any protocol on the network.  DCB support is being built into data center hardware and software from multiple vendors and is fully backward compatible with legacy systems (no forklift upgrades).  For more information on FCoE see my post on the subject (http://www.definethecloud.net/?p=80).

Network protocols typically have unique requirements with regard to latency, packet/frame loss, bandwidth, etc.  These differences have a large impact on the performance of each protocol in a shared environment.  Differences in flow control and tolerance for frame loss are the reason Fibre Channel networks have traditionally run on physical infrastructure separate from Ethernet networks.  DCB is the set of tools that allows us to converge these networks without sacrificing performance or reliability.

Let's take a look at the DCB suite:

Priority Flow Control (PFC) 802.1Qbb:

PFC is a flow control mechanism designed to eliminate frame loss for specific traffic types on Ethernet networks.  Protocols such as Small Computer System Interface (SCSI), which is used for block data storage, are very sensitive to data loss.  The SCSI protocol is the heart of Fibre Channel, which extends SCSI from internal disk to centralized storage across a network.  In its native form on dedicated networks, Fibre Channel has tools to ensure that frames are not lost as long as the network is stable.  In order to move Fibre Channel across Ethernet networks that same 'lossless' behavior must be guaranteed, and PFC is the tool that does it.

PFC uses a pause mechanism that allows a receiving device to signal a pause to the directly connected sending device before its buffers overflow and packets are lost.  While Ethernet has had a tool to do this for some time (802.3x pause), it has always operated at the link level, meaning all traffic on the link is paused rather than just a selected traffic type.  Pausing an entire link carrying various I/O types would be a bad thing, especially for traffic such as IP telephony and streaming video.  Rather than pause the whole link, PFC sends a pause signal for a single Class of Service (CoS), carried in the priority field of the 802.1Q Ethernet header.  This allows up to 8 classes to be defined and paused independently of one another.
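To make that concrete, here's a rough Python sketch of what a PFC pause frame looks like on the wire, based on the 802.1Qbb layout (reserved multicast destination address, MAC Control EtherType 0x8808, opcode 0x0101, a priority-enable vector, and eight per-class pause timers).  The source MAC and pause values are made up for illustration; this is a sketch of the idea, not a reference implementation.

```python
import struct

PFC_DMAC = bytes.fromhex("0180c2000001")   # reserved MAC Control multicast address
MAC_CONTROL_ETHERTYPE = 0x8808             # MAC Control frames (802.3x / 802.1Qbb)
PFC_OPCODE = 0x0101                        # Priority-based Flow Control opcode

def build_pfc_frame(src_mac: bytes, pause_quanta: dict) -> bytes:
    """Build a per-priority pause frame.

    pause_quanta maps a CoS priority (0-7) to a pause time in quanta
    (one quantum = 512 bit times).  A value of 0 un-pauses that priority.
    """
    enable_vector = 0
    times = [0] * 8
    for priority, quanta in pause_quanta.items():
        enable_vector |= 1 << priority     # mark this priority's time field as valid
        times[priority] = quanta

    payload = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *times)
    frame = PFC_DMAC + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload
    return frame.ljust(60, b"\x00")        # pad to minimum Ethernet frame size (pre-FCS)

# Example: pause only CoS 3 (a common choice for FCoE) for 0xFFFF quanta,
# leaving the other seven classes untouched.  The source MAC is hypothetical.
frame = build_pfc_frame(bytes.fromhex("00259c000001"), {3: 0xFFFF})
print(frame.hex())
```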

Congestion Management (802.1Qau):

When we begin pausing traffic in a network we have the potential to spread congestion by creating choke points.  Imagine trying to drive past a football stadium (football or American football, pick your flavor) when the game is about to start.  You're stuck in gridlock even though you're not going to the game; if you've got that image, you're on the right track.  Congestion management is a set of signaling tools used to push that congestion out of the network core to the network edge (if you're thinking old-school FECN and BECN, you're not far off).
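802.1Qau is better known as Quantized Congestion Notification (QCN).  As a rough illustration of the idea, the toy Python sketch below models a congested switch queue (the congestion point) sending feedback to the sending edge port (the reaction point), which backs off its rate in response.  The constants (queue setpoint, weight, and gain) are values I've picked for the example, not numbers from the standard.

```python
# Toy model of a QCN-style feedback loop: the congestion point computes a
# feedback value from its queue depth and growth, and the reaction point at the
# network edge multiplicatively decreases its sending rate when told to.

Q_EQ = 26      # desired queue operating point, in frames (illustrative)
W = 2.0        # weight given to how fast the queue is growing (illustrative)
GD = 1 / 128   # multiplicative-decrease gain at the reaction point (illustrative)

def congestion_feedback(q_len: int, q_old: int) -> float:
    """Congestion point: a negative value means 'slow down'."""
    q_off = q_len - Q_EQ          # how far above the setpoint we are
    q_delta = q_len - q_old       # how fast the queue is growing
    return -(q_off + W * q_delta)

def react(rate_gbps: float, fb: float) -> float:
    """Reaction point: cut the rate when a congestion notification arrives."""
    if fb >= 0:
        return rate_gbps          # no congestion message, keep the current rate
    return rate_gbps * max(0.1, 1 - GD * abs(fb))

rate = 10.0
for q_len, q_old in [(40, 30), (55, 40), (60, 55)]:
    rate = react(rate, congestion_feedback(q_len, q_old))
    print(f"queue={q_len:3d}  sender rate ~{rate:.2f} Gbps")
```

The point is simply that the rate reduction happens at the edge, where the traffic enters the network, rather than letting pause frames pile congestion up in the core.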

Bandwidth Management (802.1Qaz):

Bandwidth management is a tool for simple, consistent application of bandwidth controls at Layer 2 on a DCB network.  It allows a specific traffic type to be guaranteed a percentage of available bandwidth based on its CoS.  For instance, on a 10GE network access port carrying FCoE you could guarantee 40% of the bandwidth to FCoE.  This provides a 4Gbps guarantee for FCoE when it's needed, while allowing other traffic types to utilize that bandwidth when FCoE isn't using it.
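The sketch below illustrates that sharing behavior with some assumed numbers (a 10GE link split 40/40/20 between FCoE, LAN, and management traffic).  It isn't a real 802.1Qaz scheduler, just the arithmetic of "guarantee a share, lend out whatever isn't used."

```python
# Minimal sketch of the bandwidth-sharing behavior described above, with assumed
# class names, percentages, and demands: each class is guaranteed its share of
# the 10GE link, and anything a class doesn't use is lent to classes that still
# have demand.

LINK_GBPS = 10.0
guarantees = {"FCoE": 0.40, "LAN": 0.40, "Management": 0.20}   # must sum to 1.0

def allocate(demands_gbps: dict) -> dict:
    # First pass: every class gets at most its guaranteed share.
    alloc = {c: min(demands_gbps.get(c, 0.0), guarantees[c] * LINK_GBPS)
             for c in guarantees}
    # Second pass: hand unused bandwidth to classes that still want more,
    # in proportion to their guarantees (a real scheduler does this per frame).
    spare = LINK_GBPS - sum(alloc.values())
    hungry = {c: demands_gbps.get(c, 0.0) - alloc[c]
              for c in guarantees if demands_gbps.get(c, 0.0) > alloc[c]}
    total_weight = sum(guarantees[c] for c in hungry) or 1.0
    for c, want in hungry.items():
        alloc[c] += min(want, spare * guarantees[c] / total_weight)
    return alloc

print(allocate({"FCoE": 4.0, "LAN": 8.0, "Management": 1.0}))  # FCoE gets its 4Gbps, LAN absorbs the slack
print(allocate({"FCoE": 0.0, "LAN": 12.0, "Management": 1.0})) # LAN borrows the idle FCoE share
```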

Data Center Bridging Exchange (DCBX):

DCBX is a Layer 2 communication protocol that allows DCB-capable devices to communicate and to discover the edge of the DCB network, i.e. legacy devices.  DCBX not only passes information but also provides tools for passing configuration, which is key to the consistent configuration of DCB networks.  For instance, a DCB switch acting as a Fibre Channel over Ethernet Forwarder (FCF) can let an attached Converged Network Adapter (CNA) on a server know to tag FCoE frames with a specific CoS and to enable pause for that traffic type.
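As a simplified picture of that exchange, the Python sketch below models what an FCF might advertise to a "willing" CNA: FCoE frames (EtherType 0x8906) mapped to CoS 3, PFC enabled on that CoS, and a 40% bandwidth guarantee.  Real DCBX carries this information in LLDP TLVs; the data structures and handshake here are an illustration of the idea, not the wire format.

```python
# Simplified picture of a DCBX exchange between a switch (FCF) and a CNA.
from dataclasses import dataclass, field

@dataclass
class DcbConfig:
    pfc_enabled_priorities: set = field(default_factory=set)   # CoS values to run lossless
    ets_bandwidth_pct: dict = field(default_factory=dict)      # CoS -> % of link
    app_priority: dict = field(default_factory=dict)           # EtherType -> CoS

# What the switch pushes: tag FCoE (EtherType 0x8906) with CoS 3, make CoS 3
# lossless via PFC, and guarantee it 40% of the link.
fcf_advertised = DcbConfig(
    pfc_enabled_priorities={3},
    ets_bandwidth_pct={3: 40, 0: 60},
    app_priority={0x8906: 3},
)

@dataclass
class Cna:
    willing: bool = True            # a "willing" CNA accepts the peer's configuration
    running: DcbConfig = field(default_factory=DcbConfig)

    def receive_dcbx(self, peer_cfg: DcbConfig) -> None:
        if self.willing:
            self.running = peer_cfg  # adopt the switch's settings, no manual config needed

cna = Cna()
cna.receive_dcbx(fcf_advertised)
print(cna.running.app_priority)     # FCoE EtherType mapped to CoS 3
```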

All in all, the DCB features are key enablers for true consolidated I/O.  They provide a tool set that lets each traffic type be handled properly, independent of the other protocols on the wire.  For more information on consolidated I/O see my previous post Consolidated IO (http://www.definethecloud.net/?p=67).
