Microsoft* Hyper-V* makes it possible for one or more operating systems to run simultaneously on the same physical system as virtual machines. This allows you to consolidate several servers onto one system, even if they are running different operating systems. Intel® Network Adapters work with, and within, Microsoft Hyper-V virtual machines with their standard drivers and software.
Note:
When a Hyper-V Virtual NIC (VNIC) interface is created in the parent partition, the VNIC takes on the MAC address of the underlying physical NIC. The same is true when a VNIC is created on a team or VLAN. Since the VNIC uses the MAC address of the underlying interface, any operation that changes the MAC address of the interface (for example, setting LAA on the interface, changing the primary adapter on a team, etc.), will cause the VNIC to lose connectivity. In order to prevent this loss of connectivity, Intel® PROSet will not allow you to change settings that change the MAC address.
Notes:
The virtual machine switch is part of the network I/O data path. It sits between the physical NIC and the virtual machine NICs and routes packets to the correct MAC address. Enabling Virtual Machine Queue (VMQ) offloading in Intel® PROSet will automatically enable VMQ in the virtual machine switch. For driver-only installations, you must manually enable VMQ in the virtual machine switch.
If you create ANS VLANs in the parent partition, and you then create a Hyper-V Virtual NIC interface on an ANS VLAN, then the Virtual NIC interface *must* have the same VLAN ID as the ANS VLAN. Using a different VLAN ID, or not setting a VLAN ID on the Virtual NIC interface, will result in loss of communication on that interface (see the sketch after these notes).
Virtual Switches bound to an ANS VLAN will have the same MAC address as the VLAN, which will have the same address as the underlying NIC or team. If you have several VLANs bound to a team and bind a virtual switch to each VLAN, all of the virtual switches will have the same MAC address. Clustering the virtual switches together will cause a network error in Microsoft's cluster validation tool. In some cases, ignoring this error will not impact the performance of the cluster. However, such a cluster is not supported by Microsoft. Using Device Manager to give each of the virtual switches a unique address will resolve the issue. See the Microsoft TechNet article Configure MAC Address Spoofing for Virtual Network Adapters for more information.
Virtual Machine Queues (VMQ) and SR-IOV cannot be enabled on a Hyper-V Virtual NIC interface bound to a VLAN configured using the VLANs tab in Windows Device Manager.
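As a rough illustration of the VLAN ID and MAC address notes above, the following PowerShell sketch (Windows Server 2012 or later) sets a matching VLAN ID on a management OS virtual NIC and gives it a unique static MAC address. The virtual NIC name "ANS VLAN 10", the VLAN ID 10, and the MAC address are placeholders, not values from this guide.

    # Give the management OS virtual NIC the same VLAN ID as the underlying ANS VLAN.
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "ANS VLAN 10" -Access -VlanId 10

    # Assign a unique static MAC address so virtual switches bound to different VLANs do not share one MAC.
    Set-VMNetworkAdapter -ManagementOS -Name "ANS VLAN 10" -StaticMacAddress "00155D010203"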
If you want to use a team or VLAN as a virtual NIC you must follow these steps:
Note: This applies only to virtual NICs created on a team or VLAN. Virtual NICs created on a physical adapter do not require these steps.
Note: This step is not required for the team. When the Virtual NIC is created, its protocols are correctly bound.
Microsoft Windows Server* Core does not have a GUI interface. If you want to use an ANS Team or VLAN as a Virtual NIC, you must use the prosetcl.exe utility, and may need the nvspbind.exe utility, to set up the configuration. Use the prosetcl.exe utility to create the team or VLAN. See the prosetcl.txt file for installation and usage details. Use the nvspbind.exe utility to unbind the protocols on the team or VLAN. The following is an example of the steps necessary to set up the configuration.
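The exact commands depend on your installation; the sketch below is only illustrative. The adapter indexes, the team name "TeamNew", the VMLB teaming mode, and the nvspbind.exe switches and ms_tcpip component name are assumptions, not values taken from this guide; see prosetcl.txt and the nvspbind.exe usage output for the authoritative syntax.

    # List the adapters, then create a team from the adapters you want to use.
    prosetcl.exe Adapter_Enumerate
    prosetcl.exe Team_Create 1,2 TeamNew VMLB

    # List the current protocol bindings, then disable each protocol bound to the new team interface.
    nvspbind.exe /n
    nvspbind.exe /d TeamNew ms_tcpip

Repeat the last command for each remaining bound protocol, then create the virtual NIC on the team from your Hyper-V management tools.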
Note: The nvspbind.exe utility is not needed in Windows Server 2008 R2 or later.
Enabling VMQ offloading increases receive and transmit performance, as the adapter hardware is able to perform these tasks faster than the operating system. Offloading also frees up CPU resources. Filtering is based on MAC and/or VLAN filters. For devices that support it, VMQ is enabled in the host partition on the adapter's Device Manager property sheet, under Virtualization on the Advanced Tab.
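On Windows Server 2012 or later, VMQ can also be enabled with the inbox PowerShell cmdlets. The following is a minimal sketch; the adapter name "Ethernet 2" and virtual machine name "VM1" are placeholders.

    # Enable VMQ on the physical adapter (roughly equivalent to the Virtualization setting on the Advanced tab).
    Enable-NetAdapterVmq -Name "Ethernet 2"

    # Give a virtual machine's network adapter a non-zero VMQ weight so the virtual switch will assign it a queue.
    Set-VMNetworkAdapter -VMName "VM1" -VmqWeight 100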
Each Intel® Ethernet Adapter has a pool of queues that are split between the various features, such as VMQ Offloading, SR-IOV, Data Center Bridging (DCB), and Fibre Channel over Ethernet (FCoE). Increasing the number of queues used for one feature decreases the number available for other features. On devices that support it, enabling DCB reduces the total pool available for other features to 32. Enabling FCoE further reduces the total pool to 24. Intel PROSet displays the number of queues available for virtual functions under Virtualization properties on the device's Advanced Tab. It also allows you to set how the available queues are distributed between VMQ and SR-IOV.
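Intel PROSet is the authoritative place to view and change how the queue pool is distributed. As a rough cross-check, the inbox cmdlets below (a sketch; the adapter name "Ethernet 2" is a placeholder) report the queues and virtual functions the adapter currently exposes to the operating system.

    # Show whether VMQ is enabled and how many receive queues the adapter exposes.
    Get-NetAdapterVmq -Name "Ethernet 2"

    # Show whether SR-IOV is enabled and how many virtual functions are supported and in use.
    Get-NetAdapterSriov -Name "Ethernet 2"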
Teaming Considerations
SR-IOV (Single Root I/O Virtualization)
SR-IOV lets a single network port appear as multiple virtual functions in a virtualized environment. If you have an SR-IOV capable NIC, each port on that NIC can present virtual functions to multiple guest partitions. The virtual functions bypass the Virtual Machine Manager (VMM), allowing packet data to move directly into a guest partition's memory, resulting in higher throughput and lower CPU utilization. SR-IOV support was added in Microsoft Windows Server 2012. See your operating system documentation for system requirements.
For devices that support it, SR-IOV is enabled in the host partition on the adapter's Device Manager property sheet, under Virtualization on the Advanced Tab.
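On Windows Server 2012 or later, SR-IOV can also be configured with the inbox PowerShell cmdlets. This is a minimal sketch; the adapter name "Ethernet 2", the switch name "IovSwitch", and the virtual machine name "VM1" are placeholders.

    # Enable SR-IOV on the physical adapter.
    Enable-NetAdapterSriov -Name "Ethernet 2"

    # Create an external virtual switch with SR-IOV enabled; IOV cannot be turned on after the switch is created.
    New-VMSwitch -Name "IovSwitch" -NetAdapterName "Ethernet 2" -EnableIov $true

    # Give the virtual machine's network adapter an IOV weight so it is assigned a virtual function when one is available.
    Set-VMNetworkAdapter -VMName "VM1" -IovWeight 100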