When discussing the underlying technologies for cloud computing topologies, virtualization is typically a key building block. Virtualization can be applied to any portion of the data center architecture, from load balancers to routers and from servers to storage. Server virtualization is one of the most widely adopted virtualization technologies and provides a great deal of benefit to the server architecture.
One of the most common challenges with server virtualization is the networking. Virtualized servers typically consist of networks of virtual machines configured by the server team, with little to no management or monitoring available to the network and security teams. This causes inconsistent policy enforcement between physical and virtual servers, as well as limited network functionality for virtual machines.
The separate network management models for virtual and physical servers present challenges to policy enforcement, compliance, and security, and add complexity to the configuration and architecture of virtual server environments. For this reason, many vendors are designing products and solutions to help draw these networks closer together.
The following is a discussion of three products that can be used for this: HP's Flex-10 adapters, Cisco's Nexus 1000v, and Cisco's Virtual Interface Card (VIC).
This is not a pro/con comparison or a discussion of which is better, just an overview of each technology and how it relates to VMware.
HP Flex-10 for Virtual Connect:
Using HP's Virtual Connect switching modules for C-Class blades and either Flex-10 adapters or LAN-on-Motherboard (LOM), administrators can 'partition the bandwidth of a single 10Gb pipeline into multiple "FlexNICs." In addition, customers can regulate the bandwidth for each partition by setting it to a user-defined portion of the total 10Gb connection. Speed can be set from 100 Megabits per second to 10 Gigabits per second in 100 Megabit increments.' (http://bit.ly/boRsiY)
This allows a single 10GE uplink to be presented to any operating system as four physical Network Interface Cards (NICs).
To perform this interface virtualization, FlexConnect uses internal VLAN mappings for traffic segregation within the 10GE Flex-10 port (the mid-plane blade chassis connection between the Virtual Connect Flex-10 10GbE interconnect module and the Flex-10 NIC device). Each FlexNIC can present one or more VLANs to the installed operating system.
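As an illustration of the partitioning rules in the HP quote above, here is a minimal sketch that checks whether a proposed set of FlexNIC speeds is a valid partition of one 10Gb port. The function and constant names are my own, not any HP API, and the assumption that the partitions must sum to no more than the 10Gb line rate is mine.

```python
# Sketch only: validate a Flex-10 style bandwidth partition.
# One 10Gb port splits into up to 4 FlexNICs; each FlexNIC speed is
# user-defined in 100 Mb increments (per the HP description quoted above).

FLEXNICS_PER_PORT = 4
INCREMENT_MBPS = 100
PORT_CAPACITY_MBPS = 10_000

def validate_partition(speeds_mbps):
    """Return True if the list of FlexNIC speeds is a valid partition."""
    if len(speeds_mbps) > FLEXNICS_PER_PORT:
        return False
    for speed in speeds_mbps:
        # Each FlexNIC runs between 100 Mb/s and 10 Gb/s,
        # in 100 Mb/s increments.
        if not (INCREMENT_MBPS <= speed <= PORT_CAPACITY_MBPS):
            return False
        if speed % INCREMENT_MBPS != 0:
            return False
    # Assumption: the FlexNICs share one 10GE pipe, so the
    # allocations cannot exceed the port capacity.
    return sum(speeds_mbps) <= PORT_CAPACITY_MBPS

# Example: vMotion 2 Gb, FT 1 Gb, management 500 Mb, VM data 6.5 Gb
print(validate_partition([2_000, 1_000, 500, 6_500]))  # True
print(validate_partition([4_000, 4_000, 4_000]))       # False: exceeds 10 Gb
```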
Some of the advantages with this architecture are:
- A single 10GE link can be divided into 4 separate logical links each with a defined portion of the bandwidth.
- More interfaces can be presented from fewer physical adapters, which is extremely advantageous given the limited space available on blade servers.
When the installed operating system is VMware, this allows 2x10GE links to be presented to VMware as eight separate NICs, used for different purposes such as vMotion, Fault Tolerance (FT), Service Console, VMkernel, and VM data.
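For illustration only, here is one plausible way those eight FlexNICs could be laid out across the traffic types named above. The "LOM1:a"-style names and the redundant-pairing scheme are my assumptions, not an HP or VMware convention.

```python
# Illustrative only: one possible assignment of eight FlexNICs
# (two Flex-10 ports x four FlexNICs each) to vSphere traffic types.
# Each role is paired across the two ports for redundancy.
flexnic_roles = {
    "LOM1:a": "Service Console",  "LOM2:a": "Service Console (standby)",
    "LOM1:b": "vMotion",          "LOM2:b": "vMotion (standby)",
    "LOM1:c": "Fault Tolerance",  "LOM2:c": "Fault Tolerance (standby)",
    "LOM1:d": "VM data",          "LOM2:d": "VM data",
}

# Eight NICs total, as described above for a 2x10GE Flex-10 host.
print(len(flexnic_roles))  # 8
```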
The requirements for Flex-10 as described here are:
- HP C-Class blade chassis
- VC Flex-10 10GE interconnect module (HP blade switches)
- Flex-10 LOM and/or mezzanine cards
Cisco Nexus 1000v:
'Cisco Nexus® 1000V Series Switches are virtual machine access switches that are an intelligent software switch implementation for VMware vSphere environments running the Cisco® NX-OS operating system. Operating inside the VMware ESX hypervisor, the Cisco Nexus 1000V Series supports Cisco VN-Link server virtualization technology to provide:
• Policy-based virtual machine (VM) connectivity
• Mobile VM security and network policy, and
• Non-disruptive operational model for your server virtualization, and networking teams' (http://bit.ly/b4JJX5).
The Nexus 1000v is a Cisco software switch placed in the VMware environment that provides physical-style network control and monitoring for VMware virtual networks. The Nexus 1000v comprises two components: the Virtual Supervisor Module (VSM) and the Virtual Ethernet Module (VEM). The Nexus 1000v has no hardware requirements and can be used with any standards-compliant physical switching infrastructure; specifically, the upstream switch should support 802.1q trunks and LACP.
Cisco Nexus 1000v
Using the Nexus 1000v, network teams have complete control over the virtual network and manage it using the same tools and policies used on the physical network.
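As a sketch of what that management looks like, a port profile defined on the VSM (which then appears in vCenter as a port group for VMs to attach to) might resemble the following. The profile name "WebVMs" and VLAN 100 are placeholder values, and the exact commands may vary by Nexus 1000v release; this follows Cisco's published port-profile configuration style rather than any specific deployment.

```
! Sketch of a Nexus 1000v port profile on the VSM (placeholder values).
port-profile type vethernet WebVMs
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
```

Because the policy lives in the port profile rather than on the host, it follows the VM during vMotion, which is how the "vMotion-aware policies" advantage below is delivered.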
Some advantages of the 1000v are:
- Consistent policy enforcement for physical and virtual servers
- vMotion-aware policies that migrate with the VM
- Increased security, visibility, and control of virtual networks
The requirements for Cisco Nexus 1000v are:
- vSphere 4.0 or higher
- VMware Enterprise Plus license
- A VEM license per physical host CPU
- Virtual Center Server
Cisco Virtual Interface Card (VIC):
The Cisco VIC provides interface virtualization similar to the Flex-10 adapter. A single 10GE port can be presented to an operating system as up to 128 virtual interfaces, depending on the infrastructure. 'The Cisco UCS M81KR presents up to 128 virtual interfaces to the operating system on a given blade. The virtual interfaces can be dynamically configured by Cisco UCS Manager as either Fibre Channel or Ethernet devices' (http://bit.ly/9RT7kk).
Fibre Channel interfaces are known as vFC and Ethernet interfaces as vEth; they can be used in any combination up to the architectural limits. Currently the VIC is only available for Cisco UCS blades, but it will also be supported on UCS rack-mount servers by the end of 2010. Interfaces are segregated using an internal tagging mechanism known as VN-Tag, which does not use VLAN tags and operates independently of VLAN operation.
Virtual Interface Card
Each virtual interface acts as if directly connected to a physical switch port and can be configured in access or trunk mode using standard 802.1q trunking. These interfaces can then be used by any operating system or by VMware. For more information on their use, see my post Defining VN-Link (http://bit.ly/ddxGU7).
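To illustrate why VN-Tag operates independently of VLANs, here is a conceptual sketch of the header ordering: the VN-Tag identifies the virtual interface, and any 802.1q tag for trunk-mode traffic rides inside the VN-Tagged frame. This is a simplified model of field ordering, not a byte-accurate frame builder, and the header names are my own.

```python
# Conceptual sketch: VN-Tag vs. 802.1q independence.
# The VN-Tag identifies the virtual interface (vif); a VLAN tag, if
# present, is carried separately, so access and trunk modes both work.

def frame_headers(vif_id, vlan_id=None):
    """Return the ordered header list for a frame from virtual interface vif_id."""
    headers = ["dst_mac", "src_mac", f"vntag(vif={vif_id})"]
    if vlan_id is not None:
        # Trunk-mode traffic still carries its normal 802.1q tag,
        # nested inside the VN-Tagged frame.
        headers.append(f"dot1q(vlan={vlan_id})")
    headers.append("ethertype/payload")
    return headers

print(frame_headers(7))              # access-mode frame, no VLAN tag
print(frame_headers(7, vlan_id=42))  # trunk-mode frame with 802.1q tag
```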
Some advantages of the VIC are:
- Granular configuration of multiple Fibre Channel and Ethernet ports on one 10GE link.
- A single point of network configuration handled by the network team rather than the server team.
The requirements for the Cisco VIC are:
- Cisco UCS B-Series blades (until C-Series support is released)
- Cisco Fabric Interconnect access-layer switches/managers
Each of these products has benefits in specific use cases and can reduce overhead and/or administration for server networks. When combining one or more of these products, you should carefully analyze the benefits of each and identify features that may be sacrificed by the combination. For instance, using the Nexus 1000v along with FlexConnect adds a server-administered network management layer between the physical network and the virtual network.
Nexus 1000v with Flex-10
Comments and corrections are always welcome.