In Windows Server 2008 with Hyper-V, the storage miniport driver may not automatically load after adding or removing a DCB/FCoE adapter as a shared external virtual device. To load the storage miniport driver, reset the adapter.
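The reset described above can also be done from an elevated command prompt; a sketch using Microsoft's DevCon utility, assuming DevCon is installed and in the path. The hardware ID shown is a placeholder and must be replaced with the ID reported for the DCB/FCoE adapter.

```shell
:: List network-class devices to find the DCB/FCoE adapter's hardware ID
devcon find =net

:: Restart the adapter so the storage miniport driver loads
:: "PCI\VEN_8086&DEV_XXXX" is a placeholder -- substitute the actual ID
devcon restart "PCI\VEN_8086&DEV_XXXX"
```

The same reset is available in Device Manager by right-clicking the adapter and selecting Disable, then Enable.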
If the user installs ANS, creates an AFT team, and then installs FCoE/DCB, DCB is off by default. If the user then enables DCB on one port, the OS detects new Storport devices, and the user must manually complete the New Hardware Wizard prompts for each of them to install the drivers. If the user does not, the DCB status is non-operational and the reason given is "no peer."
When the user disables FCoE via the Ctrl-D menu, the Intel PROSet for Windows Device Manager user interface states that the flash contains an FCoE image, but that the flash needs to be updated. Updating the flash with the FCoE image again re-enables FCoE and returns the user to the state where all the FCoE settings are available.
If the user used the Ctrl-D menu to disable FCoE, they must use the Ctrl-D menu to enable it, because Intel PROSet for Windows Device Manager does not support enabling or disabling FCoE.
Because the FCoE initiator is a virtualized device, it does not have its own unique hardware ID and thus is not displayed as an SPC-3 compliant device in the Windows MPIO configuration.
For ANS teaming to work with Microsoft Network Load Balancing (NLB) in unicast mode, the team's LAA must be set to the cluster node IP. For ALB mode, Receive Load Balancing must be disabled. For further configuration details, refer to http://support.microsoft.com/?id=278431
ANS teaming also works when NLB is in multicast mode. For proper configuration of the adapter in this mode, refer to http://technet.microsoft.com/en-ca/library/cc726473(WS.10).aspx
This is a known switch design and configuration issue.
Intel® Ethernet Virtual Storage Miniport Driver for FCoE disappears from Device Manager after Virtual Network removal
The Intel® Ethernet Virtual Storage Miniport Driver for FCoE may disappear when the corresponding Intel adapter is virtualized to create a new Virtual Network, or when an existing Virtual Network is deleted or modified.
As a workaround, the user should remove all resource dependencies on the Intel® Ethernet Virtual Storage Miniport Driver for FCoE that are currently in use by the system before making any changes to the Intel adapter for virtualization.
For example, in one use case scenario, the user may have assigned the FCoE disk(s) from the FCoE storage driver to one of the Virtual Machines, and at the same time may want to alter the configuration of the same Intel adapter for virtualization. In this scenario, the user must remove the FCoE disk(s) from the Virtual Machine before altering the Intel adapter configuration.
The FCoE Option ROM may not discover the desired VLAN when performing VLAN discovery from the Discover Targets function. If the Discover VLAN box is populated with the wrong VLAN, then enter the desired VLAN before executing Discover Targets.
Intel® Ethernet FCoE Boot does not support Brocade switches in Release 16.4. If necessary, please use Release 16.2.
After imaging, if the local disk is not removed before booting from the FCoE disk, Windows may use the paging file from the local disk.
The following scenarios are not supported:
o Crash dump to an FCoE disk if the Windows directory is not on the FCoE Boot LUN.
o Use of the DedicatedDumpFile registry value to direct crash dump to another FCoE LUN.
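For reference, the DedicatedDumpFile value mentioned above lives under the CrashControl registry key. A sketch of how it is normally set, assuming an elevated command prompt; the target path is a placeholder, and per the limitation above it must not point at another FCoE LUN.

```shell
:: DedicatedDumpFile is a REG_SZ value under the CrashControl key.
:: "D:\dedicateddump.sys" is a placeholder path -- do not direct it
:: at a second FCoE LUN; that configuration is not supported.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" ^
    /v DedicatedDumpFile /t REG_SZ /d "D:\dedicateddump.sys" /f
```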
When the FCoE Option ROM connects to an FCoE disk during boot, the Windows installer may be unable to determine whether the system was booted from FCoE and will block the FCoE uninstall. To uninstall, configure the Option ROM so that it does not connect to an FCoE disk.
When booted with FCoE, a user cannot create VLANs and/or Teams for other traffic types. This prevents converged functionality for non-FCoE traffic.
If a port is set as a boot port and the user installs the Hyper-V role on the system and then opens the Hyper-V Virtual Network Manager to select which port to externally virtualize, the boot port is displayed, although it should not be.
When setting the port to a boot port in Intel PROSet for Windows Device Manager (DMIX), a message states that the user should restart the system for the changes to take effect, but it does not force a restart. As a result, the user-level applications are in boot mode (i.e., the Data Center tab is grayed out), but the kernel-level drivers have not been restarted to indicate to the OS that the port is a boot port. When the user then adds the Hyper-V role to the system, the OS takes a snapshot of the available ports, and this is the snapshot it uses after the Hyper-V role is added, the system is restarted, and the user opens the Hyper-V Virtual Network Manager to virtualize the ports. As a result, the boot port also shows up.
Solutions:
o Restart the system after setting a port to a boot port and before adding the Hyper-V role. The port does not appear in the list of virtualizable ports in the Hyper-V Virtual Network Manager.
o Disable and re-enable the port in Device Manager after setting it to a boot port and before adding the Hyper-V role. The port does not appear in the list of virtualizable ports in the Hyper-V Virtual Network Manager.
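The disable/enable step above can also be scripted; a sketch using Microsoft's DevCon utility from an elevated prompt, assuming DevCon is available. The hardware ID is a placeholder for the boot port's actual ID.

```shell
:: Disable, then re-enable the boot port before adding the Hyper-V role.
:: Replace the placeholder hardware ID with the one reported by "devcon find =net".
devcon disable "PCI\VEN_8086&DEV_XXXX"
devcon enable "PCI\VEN_8086&DEV_XXXX"
```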
FCoE Linkdown Timeout fails prematurely when Remote Booted
If an FCoE-booted port loses link for longer than the time specified in the Linkdown Timeout advanced setting in the Intel® Ethernet Virtual Storage Miniport Driver for FCoE, the system will crash. Linkdown Timeout values greater than 30 seconds may not provide extra time before a system crash.
Last modified on 9/06/11 3:48p Revision