Microsoft* Hyper-V* Overview

Microsoft* Hyper-V* makes it possible for one or more operating systems to run simultaneously on the same physical system as virtual machines. This allows you to consolidate several servers onto one system, even if they are running different operating systems. Intel® Network Adapters work with, and within, Microsoft Hyper-V virtual machines with their standard drivers and software.

Using Intel® Network Adapters in a Hyper-V Environment

When a Hyper-V Virtual NIC (VNIC) interface is created in the parent partition, the VNIC takes on the MAC address of the underlying physical NIC. The same is true when a VNIC is created on a team or VLAN. Because the VNIC uses the MAC address of the underlying interface, any operation that changes that MAC address (for example, setting a locally administered address on the interface or changing the primary adapter on a team) will cause the VNIC to lose connectivity. To prevent this loss of connectivity, Intel® PROSet will not allow you to change settings that change the MAC address.
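
On Windows Server 2012 and later, where the built-in NetAdapter cmdlets are available, you can confirm this behavior from the parent partition by comparing MAC addresses; the adapter names below are examples only:

      # List the MAC addresses of the physical adapter and the Hyper-V VNIC in the parent partition.
      Get-NetAdapter | Format-Table Name, InterfaceDescription, MacAddress
      # The VNIC should report the same MAC address as its underlying physical NIC
      # ("vEthernet (External)" and "Ethernet 2" are example names; substitute your own).
      (Get-NetAdapter "vEthernet (External)").MacAddress -eq (Get-NetAdapter "Ethernet 2").MacAddress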

Notes:

  • If Fibre Channel over Ethernet (FCoE)/Data Center Bridging (DCB) is present on the port, configuring the device in Virtual Machine Queue (VMQ) + DCB mode reduces the number of VMQs available for guest OSes.
  • When sent from inside a virtual machine, LLDP and LACP packets may be a security risk. The Intel® Virtual Function driver blocks the transmission of such packets.
  • The Virtualization setting on the Advanced tab of the adapter's Device Manager property sheet is not available if the Hyper-V role is not installed.
  • While Microsoft supports Hyper-V on the Windows* 8 client OS, Intel® Ethernet adapters do not support virtualization settings (VMQ, SR-IOV) on Windows 8 client.
  • ANS teaming of VF devices inside a Windows 2008 R2 guest running on an open source hypervisor is supported.

The Virtual Machine Switch

The virtual machine switch is part of the network I/O data path. It sits between the physical NIC and the virtual machine NICs and routes packets to the correct MAC address. Enabling Virtual Machine Queue (VMQ) offloading in Intel® PROSet will automatically enable VMQ in the virtual machine switch. For driver-only installations, you must manually enable VMQ in the virtual machine switch.
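
On Windows Server 2012 and later, VMQ on the underlying adapter can also be checked and toggled with the operating system's NetAdapter cmdlets (a general sketch, not an Intel PROSet feature; the adapter name is an example):

      # Show the current VMQ state of all adapters that support it.
      Get-NetAdapterVmq
      # Enable VMQ on a specific adapter ("Ethernet 2" is an example name).
      Enable-NetAdapterVmq -Name "Ethernet 2"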

Using ANS VLANs

If you create ANS VLANs in the parent partition and then create a Hyper-V Virtual NIC interface on an ANS VLAN, the Virtual NIC interface must have the same VLAN ID as the ANS VLAN. Using a different VLAN ID, or not setting a VLAN ID on the Virtual NIC interface, will result in loss of communication on that interface.
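
On Windows Server 2012 and later, the VLAN ID of a host Virtual NIC can be inspected and set from PowerShell. The sketch below assumes a management OS virtual adapter named "VLAN100" and a VLAN ID of 100; both are examples:

      # Show the VLAN settings of the virtual adapters in the parent partition.
      Get-VMNetworkAdapterVlan -ManagementOS
      # Set the Virtual NIC's VLAN ID to match the ANS VLAN (100 is an example ID).
      Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "VLAN100" -Access -VlanId 100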

Virtual Switches bound to an ANS VLAN will have the same MAC address as the VLAN, which will have the same address as the underlying NIC or team. If you have several VLANs bound to a team and bind a virtual switch to each VLAN, all of the virtual switches will have the same MAC address. Clustering the virtual switches together will cause a network error in Microsoft’s cluster validation tool. In some cases, ignoring this error will not impact the performance of the cluster. However, such a cluster is not supported by Microsoft. Using Device Manager to give each of the virtual switches a unique address will resolve the issue. See the Microsoft Technet article Configure MAC Address Spoofing for Virtual Network Adapters for more information.
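
As an alternative to Device Manager on Windows Server 2012 and later, a unique static MAC address can be assigned to each host Virtual NIC from PowerShell. The adapter name and address below are examples only and must be replaced with values that are unique on your network:

      # Give the management OS virtual adapter its own static MAC address
      # ("VLAN100" and 00155D010203 are placeholders).
      Set-VMNetworkAdapter -ManagementOS -Name "VLAN100" -StaticMacAddress "00155D010203"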

Virtual Machine Queues (VMQ) and SR-IOV cannot be enabled on a Hyper-V Virtual NIC interface bound to a VLAN configured using the VLANs tab in Windows Device Manager.

Using an ANS Team or VLAN as a Virtual NIC

If you want to use a team or VLAN as a virtual NIC, you must follow these steps (a PowerShell alternative to the protocol binding steps is sketched after the list):

Note: This applies only to virtual NICs created on a team or VLAN. Virtual NICs created on a physical adapter do not require these steps.
  1. Use Intel® PROSet to create the team or VLAN.
  2. Open the Network Control Panel.
  3. Open the team or VLAN.
  4. On the General Tab, uncheck all of the protocol bindings and click OK.
  5. Create the virtual NIC. (If you check the "Allow management operating system to share the network adapter" box, you can do the following step in the parent partition.)
  6. Open the Network Control Panel for the Virtual NIC.
  7. On the General Tab, check the protocol bindings that you desire.
    Note: This step is not required for the team. When the Virtual NIC is created, its protocols are correctly bound.
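
On Windows Server 2012 and later, steps 4 and 7 can also be performed with the operating system's NetAdapterBinding cmdlets instead of the Network Control Panel. This is a general sketch, not an Intel PROSet feature; the interface names are examples:

      # Step 4 equivalent: clear all protocol bindings on the team or VLAN interface.
      Disable-NetAdapterBinding -Name "TeamNew" -AllBindings
      # Step 7 equivalent: re-enable the protocols you need on the Virtual NIC.
      Enable-NetAdapterBinding -Name "vEthernet (TeamNew)" -ComponentID ms_tcpip, ms_tcpip6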

Command Line for Microsoft Windows Server* Core

Microsoft Windows Server* Core does not have a GUI interface. If you want to use an ANS Team or VLAN as a Virtual NIC, you must use the prosetcl.exe utility, and may need the nvspbind.exe utility, to set up the configuration. Use the prosetcl.exe utility to create the team or VLAN; see the prosetcl.txt file for installation and usage details. Use the nvspbind.exe utility to unbind the protocols on the team or VLAN. The following is an example of the steps necessary to set up the configuration; a PowerShell alternative for creating the virtual switch on newer systems is sketched after the example.

Note: The nvspbind.exe utility is not needed in Windows Server 2008 R2 or later.
  1. Use prosetcl.exe to create a team.
      prosetcl.exe Team_Create 1,2,3 TeamNew VMLB
    (VMLB is a dedicated teaming mode for load balancing under Hyper-V.)
  2. Use nvspbind to get the team’s GUID.
      nvspbind.exe -n
  3. Use nvspbind to disable the team’s bindings.
      nvspbind.exe -d aaaaaaaa-bbbb-cccc-dddd-dddddddddddd *
  4. Create the virtual NIC by running a remote Hyper-V manager on a different machine. Please see Microsoft's documentation for instructions on how to do this.
  5. Use nvspbind to get the Virtual NIC’s GUID.
  6. Use nvspbind to enable protocol bindings on the Virtual NIC.
      nvspbind.exe -e tttttttt-uuuu-vvvv-wwww-xxxxxxxxxxxx ms_netbios
      nvspbind.exe -e tttttttt-uuuu-vvvv-wwww-xxxxxxxxxxxx ms_tcpip
      nvspbind.exe -e tttttttt-uuuu-vvvv-wwww-xxxxxxxxxxxx ms_server
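
On Windows Server 2012 and later Server Core installations, step 4 can also be performed locally with PowerShell instead of a remote Hyper-V Manager. The team name below matches the example above; the switch name is an example:

      # Create an external virtual switch bound to the team and share it with the parent partition.
      New-VMSwitch -Name "TeamSwitch" -NetAdapterName "TeamNew" -AllowManagementOS $true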
     

Virtual Machine Queue Offloading

Enabling VMQ offloading increases receive and transmit performance, as the adapter hardware is able to perform these tasks faster than the operating system. Offloading also frees up CPU resources. Filtering is based on MAC and/or VLAN filters. For devices that support it, VMQ is enabled in the host partition on the adapter's Device Manager property sheet, under Virtualization on the Advanced Tab.

Each Intel® Ethernet Adapter has a pool of queues that are split between the various features, such as VMQ Offloading, SR-IOV, Data Center Bridging (DCB), and Fibre Channel over Ethernet (FCoE). Increasing the number of queues used for one feature decreases the number available for other features. On devices that support it, enabling DCB reduces the total pool available for other features to 32. Enabling FCoE further reduces the total pool to 24. Intel PROSet displays the number of queues available for virtual functions under Virtualization properties on the device's Advanced Tab. It also allows you to set how the available queues are distributed between VMQ and SR-IOV.
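
On Windows Server 2012 and later, the VMQ queues actually allocated on a device can be listed from PowerShell, which can help when judging how the pool is split between features (the adapter name is an example):

      # List the VMQ queues currently allocated on an adapter.
      Get-NetAdapterVmqQueue -Name "Ethernet 2"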

Teaming Considerations

SR-IOV (Single Root I/O Virtualization)

SR-IOV lets a single network port appear to be several virtual functions in a virtualized environment. If you have an SR-IOV capable NIC, each port on that NIC can assign virtual functions to multiple guest partitions. The virtual functions bypass the Virtual Machine Manager (VMM), allowing packet data to move directly to a guest partition's memory, which results in higher throughput and lower CPU utilization. SR-IOV support was added in Microsoft Windows Server 2012. See your operating system documentation for system requirements.

For devices that support it, SR-IOV is enabled in the host partition on the adapter's Device Manager property sheet, under Virtualization on the Advanced Tab.

 

Notes:

  • SR-IOV is not supported with ANS teams.
  • You must enable VMQ for SR-IOV to function.
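
On Windows Server 2012 and later, SR-IOV can also be inspected and assigned to a guest from PowerShell once it is enabled on the adapter; as the notes above indicate, VMQ must also be enabled. The adapter, switch, and virtual machine names below are examples (a general sketch, not an Intel PROSet procedure):

      # Confirm that the adapter and platform report SR-IOV support.
      Get-NetAdapterSriov -Name "Ethernet 2"
      # Create an IOV-enabled external virtual switch on that adapter.
      New-VMSwitch -Name "IovSwitch" -NetAdapterName "Ethernet 2" -EnableIov $true
      # Give a guest's virtual network adapter an SR-IOV virtual function.
      Set-VMNetworkAdapter -VMName "Guest01" -IovWeight 100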

 

