Details
- Intel® 82575EB Gigabit Network Connection
- Intel® 82575EB Gigabit Backplane Connection
- Intel® Gigabit VT Quad Port Server Adapter
- Intel® 82576 Gigabit Network Connection
- Intel® Gigabit ET Dual Port Server Adapter
- Intel® Gigabit EF Dual Port Server Adapter
These limitations do not apply to products based on the Intel 82575 Gigabit Ethernet Controller.
Solution
Ensure the correct version of the igb driver is loaded before performing these steps.
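One quick way to check which driver (and driver version) backs a given interface is ethtool -i; vmnic2 below is only an illustrative interface name:
# ethtool -i vmnic2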
Enabling VMDq
- Enable NetQueue in the VMkernel using the VMware Infrastructure (VI) Client:
- Choose Configuration > Advanced Settings > VMkernel.
- Select VMkernel.Boot.netNetqueueEnabled.
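If you prefer the service console, the same VMkernel boot option can typically be set with esxcfg-advcfg (a sketch assuming ESX 3.5/4.x semantics, where -k sets a VMkernel load-time option; verify on your version):
# esxcfg-advcfg -k TRUE netNetqueueEnabled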
- Enable the igb module in the service console of the ESX host:
# esxcfg-module -e igb
- Set the required load option for igb to turn on VMDq:
The IntMode=3 option must be present to load the driver in VMDq mode. A value of 3 for the IntMode parameter selects MSI-X and automatically sets the number of receive queues to the maximum supported (devices based on the 82575 Controller enable 4 receive queues per port; devices based on the 82576 Controller enable 8 receive queues per port). The number of receive queues used by the igb driver in VMDq mode cannot be changed.
For a single port, use the command:
# esxcfg-module -s "IntMode=3" igb
For two or more ports, use a comma-separated list of values as shown in the following example (the parameter is applied to the igb-supported interfaces in the order they are enumerated on the PCI bus):
# esxcfg-module -s "IntMode=3,3, ... 3" igb
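For example, on a host with four igb-supported ports the command would be:
# esxcfg-module -s "IntMode=3,3,3,3" igb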
- Reboot the ESX host system.
Note: If you are using jumbo frames, you also need to change the value of netPktHeapMinSize to 32 and netPktHeapMaxSize to 128.
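If you make these changes from the service console instead of the VI Client, the corresponding VMkernel boot options can likewise be set with esxcfg-advcfg; the option names below mirror the VMkernel.Boot.* settings and are given as a sketch to verify against your ESX version:
# esxcfg-advcfg -k 32 netPktHeapMinSize
# esxcfg-advcfg -k 128 netPktHeapMaxSize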
Verifying VMDq is enabled
To verify that VMDq has been successfully enabled:
- Verify NetQueue has been enabled:
# cat /etc/vmware/esx.conf
Confirm the following line has been added into the file:
/vmkernel/netNetqueueEnabled = "TRUE"
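As a shortcut, you can filter for the entry directly instead of reading the whole file:
# grep netNetqueueEnabled /etc/vmware/esx.conf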
Note: NetQueue is enabled by default on ESX 4.0.
- Verify the options configured for the igb module:
# esxcfg-module -g igb
The output appears similar to:
igb enabled = 1 options = 'IntMode=3,3,3,3,3,3'
The enabled value must be equal to 1, indicating that the igb module is set to load automatically. IntMode must be equal to 3, with one entry for each port on which VMDq is desired (the output above shows a six-port example).
- Query which ports have loaded the igb driver using esxcfg-nics -l. Confirm the driver successfully claimed all supported devices present in the system (enumerate them with lspci and compare the list against the output of esxcfg-nics -l). Then query the statistics with ethtool: if VMDq is enabled, statistics for multiple receive queues are shown (rx_queue_0 through rx_queue_7 in the example below).
# esxcfg-nics -l
Name PCI Driver Link Speed Duplex MTU Description
vmnic0 04:00.00 bnx2 Up 1000Mbps Full 1500 Broadcom Corporation Broadcom NetXtreme II BCM5708 1000Base-T
vmnic1 08:00.00 bnx2 Down 0Mbps Half 1500 Broadcom Corporation Broadcom NetXtreme II BCM5708 1000Base-T
vmnic2 0d:00.00 igb Up 1000Mbps Full 1500 Intel Corporation 82575GB Gigabit Network Connection
vmnic3 0d:00.01 igb Up 1000Mbps Full 1500 Intel Corporation 82575GB Gigabit Network Connection
vmnic4 0e:00.00 igb Up 1000Mbps Full 1500 Intel Corporation 82575GB Gigabit Network Connection
vmnic5 0e:00.01 igb Up 1000Mbps Full 1500 Intel Corporation 82575GB Gigabit Network Connection
vmnic6 10:00.00 igb Up 1000Mbps Full 1500 Intel Corporation 82576 Gigabit Network Connection
vmnic7 10:00.01 igb Up 1000Mbps Full 1500 Intel Corporation 82576 Gigabit Network Connection
# lspci | grep 8257[5-6]
0d:00.0 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
0d:00.1 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
0e:00.0 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
0e:00.1 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
10:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
10:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
# ethtool -S vmnic6
NIC statistics:
rx_packets: 0
tx_packets: 0
rx_bytes: 0
tx_bytes: 0
rx_broadcast: 0
tx_broadcast: 0
rx_multicast: 0
tx_multicast: 0
rx_errors: 0
tx_errors: 0
tx_dropped: 0
multicast: 0
collisions: 0
rx_length_errors: 0
rx_over_errors: 0
rx_crc_errors: 0
rx_frame_errors: 0
rx_no_buffer_count: 0
rx_missed_errors: 0
tx_aborted_errors: 0
tx_carrier_errors: 0
tx_fifo_errors: 0
tx_heartbeat_errors: 0
tx_window_errors: 0
tx_abort_late_coll: 0
tx_deferred_ok: 0
tx_single_coll_ok: 0
tx_multi_coll_ok: 0
tx_timeout_count: 0
tx_restart_queue: 0
rx_long_length_errors: 0
rx_short_length_errors: 0
rx_align_errors: 0
tx_tcp_seg_good: 0
tx_tcp_seg_failed: 0
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
tx_flow_control_xon: 0
tx_flow_control_xoff: 0
rx_long_byte_count: 0
rx_csum_offload_good: 0
rx_csum_offload_errors: 0
rx_header_split: 0
low_latency_interrupt: 0
alloc_rx_buff_failed: 0
tx_smbus: 0
rx_smbus: 0
dropped_smbus: 0
tx_queue_0_packets: 0
tx_queue_0_bytes: 0
rx_queue_0_packets: 0
rx_queue_0_bytes: 0
rx_queue_1_packets: 0
rx_queue_1_bytes: 0
rx_queue_2_packets: 0
rx_queue_2_bytes: 0
rx_queue_3_packets: 0
rx_queue_3_bytes: 0
rx_queue_4_packets: 0
rx_queue_4_bytes: 0
rx_queue_5_packets: 0
rx_queue_5_bytes: 0
rx_queue_6_packets: 0
rx_queue_6_bytes: 0
rx_queue_7_packets: 0
rx_queue_7_bytes: 0
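To focus on the per-queue receive counters alone, the ethtool output can be filtered through grep in the service console:
# ethtool -S vmnic6 | grep rx_queue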
Based on VMware KB 1022899