Enabling Support for NetQueue on Intel Gigabit adapters using the igb driver in ESX 3.5

Details

Version 1.3.8.6.3 of the igb driver supports two families of Intel Gigabit adapters:
 
Networking devices based on the Intel® 82575 Gigabit Ethernet Controller:
  • Intel® 82575EB Gigabit Network Connection
  • Intel® 82575EB Gigabit Backplane Connection
  • Intel® Gigabit VT Quad Port Server Adapter 
Networking devices based on the Intel® 82576 Gigabit Ethernet Controller:
  • Intel® 82576 Gigabit Network Connection
  • Intel® Gigabit ET Dual Port Server Adapter
  • Intel® Gigabit EF Dual Port Server Adapter 
This version of the driver uses VMware's NetQueue technology to enable Intel Virtual Machine Device Queues (VMDq). The main distinction between the two families is the number of receive queues supported: adapters based on the Intel® 82575 Gigabit Ethernet Controller provide 4 receive queues per port, while adapters based on the Intel® 82576 Gigabit Ethernet Controller provide 8 receive queues per port.
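
Once the driver is loaded in VMDq mode, a quick way to see how many receive queues a port exposes is to count the per-queue counters that ethtool reports. This sketch assumes vmnic6 is an igb-claimed port, as in the verification output later in this article:

  # ethtool -S vmnic6 | grep -c "rx_queue_.*_packets"

The count is 8 for an 82576-based port and 4 for an 82575-based port.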
 
For products based on the Intel 82576 Gigabit Ethernet Controller, version 1.3.8.6.3 of the igb driver does not support VLAN tagging or jumbo frames when in NetQueue mode. Disable NetQueue when using VLAN tagging or jumbo frames.

These limitations do not apply to products based on the Intel 82575 Gigabit Ethernet Controller.
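
If you use VLAN tagging or jumbo frames on an 82576-based port with this driver version, NetQueue therefore has to be switched off. One way to do this is from the service console, followed by a reboot of the host; this is a sketch that assumes your ESX build exposes the setting as the VMkernel load-time option netNetqueueEnabled shown later in this article (deselecting VMkernel.Boot.netNetqueueEnabled in the VI Client has the same effect):

  # esxcfg-advcfg -k FALSE netNetqueueEnabled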

Solution

Ensure the correct version of the igb driver is loaded before performing these steps.
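
One way to check which igb version is currently loaded is to query an igb-claimed port with ethtool; this sketch assumes vmnic6 is such a port, as in the examples below, and the reported driver version should be 1.3.8.6.3:

  # ethtool -i vmnic6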

Enabling VMDq

To enable VMDq:
  1. Enable NetQueue in VMkernel using VMware Infrastructure Client:

    1. Choose Configuration > Advanced Settings > VMkernel.
    2. Select VMkernel.Boot.netNetqueueEnabled. (This option can also be set from the service console; see the sketch after this procedure.)
       
  2. Enable the igb module in the service console of the ESX host:

    # esxcfg-module -e igb 
     
  3. Set the required load option for igb to turn on VMDq:

    The IntMode=3 option must be present to load the driver in VMDq mode. A value of 3 for the IntMode parameter selects MSI-X and automatically sets the number of receive queues to the maximum supported (4 receive queues per port for devices based on the 82575 Controller, 8 receive queues per port for devices based on the 82576 Controller). The number of receive queues used by the igb driver in VMDq mode cannot be changed.

    For a single port, use the command:

    # esxcfg-module -s "IntMode=3" igb


    For two or more ports, use a comma-separated list of values as shown in the following example (the parameter is applied to the igb-supported interfaces in the order they are enumerated on the PCI bus; a concrete six-port example appears after this procedure):

    # esxcfg-module -s "IntMode=3,3, ... 3" igb
     
  4. Reboot the ESX host system.

    Note: If you are using jumbo frames, you also need to change the value of
    netPktHeapMinSize to 32 and the value of netPktHeapMaxSize to 128 (the sketch
    below shows the equivalent service console commands).
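
    For example, on the six-port host shown in the verification section below, the step 3 command would carry one entry per igb port:

    # esxcfg-module -s "IntMode=3,3,3,3,3,3" igb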
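
    The VMkernel options referenced in step 1 and in the note above can also be set from the service console with esxcfg-advcfg. This is only a sketch: it assumes netNetqueueEnabled, netPktHeapMinSize, and netPktHeapMaxSize are all exposed as VMkernel load-time options (the -k flag) on your ESX build, and the new values take effect after the reboot in step 4:

    # esxcfg-advcfg -k TRUE netNetqueueEnabled
    # esxcfg-advcfg -k 32 netPktHeapMinSize
    # esxcfg-advcfg -k 128 netPktHeapMaxSize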

Verifying VMDq is enabled

To verify that VMDq has been successfully enabled:
  1. Verify NetQueue has been enabled:

      # cat /etc/vmware/esx.conf

      Confirm that the following line has been added to the file (a grep shortcut is sketched after this procedure):

      /vmkernel/netNetqueueEnabled = "TRUE"

      Note: NetQueue is enabled by default on ESX 4.0.

       
  2. Verify the options configured for the igb module:

      # esxcfg-module -g igb

      The output appears similar to:

      igb enabled = 1 options = 'IntMode=3,3,3,3,3,3'

      The enabled value must be equal to 1, which indicates the igb module is set to load automatically. IntMode must be equal to 3 and include one entry for each port on which VMDq is to be enabled; the output above is from a six-port host.
       
  3. Query which ports have loaded the igb driver using esxcfg-nics -l. Confirm the driver successfully claimed all supported devices present in the system (enumerate them using lspci and compare the list with the output of esxcfg-nics -l). Query the statistics using ethtool. If VMDq is enabled, statistics for multiple receive queues are shown (rx_queue_0 through rx_queue_7 in the example below).

      # esxcfg-nics -l

      Name PCI Driver Link Speed Duplex MTU Description
      vmnic0 04:00.00 bnx2 Up 1000Mbps Full 1500 Broadcom Corporation Broadcom NetXtreme II BCM5708 1000Base-T
      vmnic1 08:00.00 bnx2 Down 0Mbps Half 1500 Broadcom Corporation Broadcom NetXtreme II BCM5708 1000Base-T
      vmnic2 0d:00.00 igb Up 1000Mbps Full 1500 Intel Corporation 82575GB Gigabit Network Connection
      vmnic3 0d:00.01 igb Up 1000Mbps Full 1500 Intel Corporation 82575GB Gigabit Network Connection
      vmnic4 0e:00.00 igb Up 1000Mbps Full 1500 Intel Corporation 82575GB Gigabit Network Connection
      vmnic5 0e:00.01 igb Up 1000Mbps Full 1500 Intel Corporation 82575GB Gigabit Network Connection
      vmnic6 10:00.00 igb Up 1000Mbps Full 1500 Intel Corporation 82576 Gigabit Network Connection
      vmnic7 10:00.01 igb Up 1000Mbps Full 1500 Intel Corporation 82576 Gigabit Network Connection

      # lspci | grep 8257[5-6]

      0d:00.0 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
      0d:00.1 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
      0e:00.0 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
      0e:00.1 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
      10:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
      10:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)

      # ethtool -S vmnic6

      NIC statistics:
      rx_packets: 0
      tx_packets: 0
      rx_bytes: 0
      tx_bytes: 0
      rx_broadcast: 0
      tx_broadcast: 0
      rx_multicast: 0
      tx_multicast: 0
      rx_errors: 0
      tx_errors: 0
      tx_dropped: 0
      multicast: 0
      collisions: 0
      rx_length_errors: 0
      rx_over_errors: 0
      rx_crc_errors: 0
      rx_frame_errors: 0
      rx_no_buffer_count: 0
      rx_missed_errors: 0
      tx_aborted_errors: 0
      tx_carrier_errors: 0
      tx_fifo_errors: 0
      tx_heartbeat_errors: 0
      tx_window_errors: 0
      tx_abort_late_coll: 0
      tx_deferred_ok: 0
      tx_single_coll_ok: 0
      tx_multi_coll_ok: 0
      tx_timeout_count: 0
      tx_restart_queue: 0
      rx_long_length_errors: 0
      rx_short_length_errors: 0
      rx_align_errors: 0
      tx_tcp_seg_good: 0
      tx_tcp_seg_failed: 0
      rx_flow_control_xon: 0
      rx_flow_control_xoff: 0
      tx_flow_control_xon: 0
      tx_flow_control_xoff: 0
      rx_long_byte_count: 0
      rx_csum_offload_good: 0
      rx_csum_offload_errors: 0
      rx_header_split: 0
      low_latency_interrupt: 0
      alloc_rx_buff_failed: 0
      tx_smbus: 0
      rx_smbus: 0
      dropped_smbus: 0
      tx_queue_0_packets: 0
      tx_queue_0_bytes: 0
      rx_queue_0_packets: 0
      rx_queue_0_bytes: 0
      rx_queue_1_packets: 0
      rx_queue_1_bytes: 0
      rx_queue_2_packets: 0
      rx_queue_2_bytes: 0
      rx_queue_3_packets: 0
      rx_queue_3_bytes: 0
      rx_queue_4_packets: 0
      rx_queue_4_bytes: 0
      rx_queue_5_packets: 0
      rx_queue_5_bytes: 0
      rx_queue_6_packets: 0
      rx_queue_6_bytes: 0
      rx_queue_7_packets: 0
      rx_queue_7_bytes: 0
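
  A quick way to perform the check in step 1 without reading the whole configuration file is to grep /etc/vmware/esx.conf for the option name; if NetQueue is enabled, the same line shown in step 1 is returned:

      # grep netNetqueueEnabled /etc/vmware/esx.conf

      /vmkernel/netNetqueueEnabled = "TRUE"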
       
     
    Based on VMware KB 1022899