Configuration maximums for NIC ports on ESXi/ESX 4.x and ESXi 5.x

Solution

VMware Tested Maximum Configurations

VMware limit testing is performed with these settings:
  • 1500-byte MTU (no jumbo frames)
  • The default number of queues for the NIC
These maximums were tested by VMware on systems where all NICs were of the same type. The testing does not cover mixed configurations where more than one kind of NIC is present in the system at the same time; that is, adapters from one row of the table below cannot be combined with adapters from another row.

Manufacturer and NIC model   NIC Driver   Maximum Supported NIC Ports   Number of CPUs   Memory
Intel PCI-x NIC              e1000        32                            12               5.2 GB
Intel PCI-e NIC              e1000e       24                            16               32 GB
Intel Zoar 1GigE             igb          16                            16               32 GB
Broadcom 1GigE               tg3          32                            16               32 GB
Broadcom 1GigE               bnx2         16                            16               32 GB
NVIDIA 1GigE                 forcedeth    2                             4                8 GB
Neterion 10GigE              s2io         4                             8                32 GB
Netxen 10GigE                nx_nic       4 (8 in ESXi 5.0)             8                32 GB
Intel 10GigE                 ixgbe        4 (8 in ESXi 5.0)             4                16 GB
Broadcom 10GigE 5771x        bnx2x        4 (8 in ESXi 5.0)             8                72 GB
HP Flex-10                   bnx2x        4 (physical NICs)             -                -

The maximum configurations in the table above have been verified using ESX 4.0 build 164009 with the drivers provided with that release. The configurations above have not been tested under any of these conditions:
  • With drivers released on separate CDs outside of the base ESXi/ESX installation
  • With jumbo frames (MTU > 1500 bytes)
  • With any NetQueue settings other than the defaults
Note: Some hardware vendors re-brand existing cards, but the underlying hardware will be the same. Refer to your hardware information to identify the correct hardware model.
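
To match a re-branded card to the driver rows above, check which driver each physical NIC is actually using. The following Python sketch uses pyVmomi to list every physical NIC and its driver per host; the vCenter hostname and credentials are placeholders, and certificate validation is disabled purely for illustration:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details -- replace with your own.
    ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            print(host.name)
            # Each PhysicalNic carries the device name and the bound driver.
            for pnic in host.config.network.pnic:
                speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else None
                print("  %-8s driver=%-10s link=%s" %
                      (pnic.device, pnic.driver,
                       "%s Mb/s" % speed if speed else "down"))
    finally:
        Disconnect(si)

Compare the driver column of the output (for example, bnx2x or ixgbe) against the table to find the row that applies to your hardware.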

Maximum Configurations

You can model the behavior of multiple adapter/driver classes running concurrently on a single system based on the estimated CPU and memory requirements of each NIC type. When using a large number of NICs on an ESXi/ESX system, make sure that the system has adequate memory and an appropriate number of CPUs. Thoroughly qualify any such configuration before deployment.
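
As a starting point for such modeling, the sketch below derives rough per-port CPU and memory figures by dividing the tested values in the table above by the tested port count, then sums them for a proposed mix of drivers. The even per-port split is an assumption for illustration, not a supported sizing rule:

    # Per-driver tested maxima from the table above:
    # driver: (tested ports, CPUs, memory in GB)
    TESTED = {
        "e1000": (32, 12, 5.2),  "e1000e": (24, 16, 32.0),
        "igb":   (16, 16, 32.0), "tg3":    (32, 16, 32.0),
        "bnx2":  (16, 16, 32.0), "forcedeth": (2, 4, 8.0),
        "s2io":  (4, 8, 32.0),   "nx_nic": (4, 8, 32.0),
        "ixgbe": (4, 4, 16.0),   "bnx2x":  (4, 8, 72.0),
    }

    def estimate(mix):
        """mix: {driver: port count} -> (estimated CPUs, estimated memory GB)."""
        cpu = mem = 0.0
        for driver, ports in mix.items():
            tested_ports, cpus, mem_gb = TESTED[driver]
            cpu += ports * cpus / tested_ports    # per-port CPU share
            mem += ports * mem_gb / tested_ports  # per-port memory share
        return cpu, mem

    # Example: 2 x ixgbe 10G ports combined with 8 x tg3 1G ports.
    cpu, mem = estimate({"ixgbe": 2, "tg3": 8})
    print("Estimated requirement: %.1f CPUs, %.1f GB memory" % (cpu, mem))

Any configuration sized this way still has to be qualified on the actual hardware before deployment.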

Based on this modeling, VMware believes that you should be able to deploy these untested maximum configurations:


ESXi/ESX 4.x

For 1500 MTU configurations:
  • 4 x 10G ports OR
  • 16 x 1G ports OR
  • When combining different speeds: 2 x 10G ports + 8 x 1G ports
For jumbo frame (MTU up to 9000 bytes) configurations:
  • 4 x 10G ports (only if the system has more than 8 cores) OR
  • 12 x 1G ports OR
  • When combining different speeds: 2 x 10G ports + 4 x 1G ports (a validation sketch covering both versions follows the ESXi 5.x list below)
ESXi 5.x

For 1500 MTU and jumbo frame (MTU up to 9000 bytes) configurations:
  • 8 x 10G ports OR
  • 32 x 1G ports OR
  • When combining different speeds: 6 x 10G ports + 4 x 1G ports (a quick check for mixed-speed configurations follows this list)
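
These mixed-speed ceilings can be sanity-checked by weighting each 10G port as a number of 1G-equivalent ports. The weights and budgets below are inferred from the figures in this article (an assumption on my part, not VMware guidance); the combined-speed examples above all fall within them:

    # (version, jumbo frames): (budget in 1G-equivalent ports, weight of one 10G port)
    LIMITS = {
        ("4.x", False): (16, 4),  # 4 x 10G, or 16 x 1G, or 2 x 10G + 8 x 1G
        ("4.x", True):  (12, 3),  # 4 x 10G (needs > 8 cores), 12 x 1G, or 2 x 10G + 4 x 1G
        ("5.x", False): (32, 4),  # 8 x 10G, or 32 x 1G, or 6 x 10G + 4 x 1G
        ("5.x", True):  (32, 4),  # ESXi 5.x uses the same budget for jumbo frames
    }

    def within_guidance(version, jumbo, ports_10g, ports_1g):
        budget, weight = LIMITS[(version, jumbo)]
        return ports_10g * weight + ports_1g <= budget

    print(within_guidance("4.x", False, 2, 8))  # True:  2 x 10G + 8 x 1G
    print(within_guidance("5.x", True, 6, 4))   # True:  6 x 10G + 4 x 1G
    print(within_guidance("4.x", True, 4, 2))   # False: over the jumbo-frame budget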

Notes:
  • VMware recommends using a mix of NIC types so that a single driver issue does not become a single point of failure.
  • These estimates do not take platform-specific limitations into account. The achievable configurations depend on the hardware, the number of cores, overall system memory, and other factors.

Based on VMware KB 1020808
