VMware ESX 4.1 Patch ESX410-201104401-SG: Updates VMkernel, VMX, CIM

Details

Release date: April 28, 2011

Patch Classification: Security
Build: For build information, see KB 1035110.
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
PRs Fixed: 671984, 674903, 665217, 652611, 630472, 588298, 606441, 611518, 635393, 615869, 630852, 669678, 638481, 637280, 653672, 655807, 653337, 610630, 620291, and 610075
Affected Hardware: N/A
Affected Software: N/A
VIBs Included: vmware-esx-apps, vmware-esx-cim, vmware-esx-likewise-ad-provider, vmware-esx-likewise-krb5, vmware-esx-likewise-krb5-64, vmware-esx-likewise-krb5-workstation, vmware-esx-likewise-open, vmware-esx-likewise-open-64, vmware-esx-likewise-openldap, vmware-esx-likewise-openldap-64, vmware-esx-lnxcfg, vmware-esx-pam-krb5, vmware-esx-pam-krb5-64, vmware-esx-perftools, vmware-esx-scripts, vmware-esx-srvrmgmt, vmware-esx-tools, vmware-esx-vmkctl, vmware-esx-vmkernel64, vmware-esx-vmnixmod, vmware-esx-vmwauth, vmware-esx-vmx, vmware-hostd-esx, kernel, and omc
Related CVE numbers: CVE-2010-2240, CVE-2011-1786, CVE-2010-1324, CVE-2010-1323, CVE-2010-4020, CVE-2010-4021, and CVE-2011-1785


Solution

Summaries and Symptoms

This patch resolves the following security issues:

  • Updates the vmware-esx-likewise-openldap and vmware-esx-likewise-krb5 packages to address several security issues.
    One of the vulnerabilities is specific to Likewise, while the others are present in the MIT version of krb5. The Likewise vulnerability might cause the Likewise-open lsassd service to terminate if a username with an illegal byte sequence is entered for user authentication when logging in to the Active Directory domain of the ESXi/ESX host. The MIT krb5 vulnerabilities are detailed in MITKRB5-SA-2010-007.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2011-1786 (Likewise only), CVE-2010-1324, CVE-2010-1323, CVE-2010-4020, and CVE-2010-4021 to these issues.

  • Updates the service console kernel RPM to resolve a security issue.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2010-2240 to this issue.

  • Resolves an issue where an ESX host could intermittently lose its connection to vCenter Server due to socket exhaustion.
    By sending malicious network traffic to an ESX host, an attacker could exhaust the available sockets, which would prevent further connections to the host. If a host becomes inaccessible, its virtual machines continue to run and retain network connectivity, but a reboot of the ESX host might be required before you can connect to the host again. ESX hosts might also intermittently lose connectivity because of applications that do not close sockets correctly. If this occurs, an error message similar to the following might be written to the vpxa log:
    socket() returns -1 (Cannot allocate memory)
    An error message similar to the following may be written to the vmkernel logs:
    socreate(type=2, proto=17) failed with error 55
    The Common Vulnerabilities and Exposures Project (cve.mitre.org) has assigned the name CVE-2011-1785 to this issue.
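    To check whether a host is affected, you can search the service console logs for these messages. This is a diagnostic sketch; the log locations shown are typical for ESX 4.x and might differ on your system:
    # Search the vpxa log for socket allocation failures
    grep "Cannot allocate memory" /var/log/vmware/vpx/vpxa.log
    # Search the VMkernel log for failed socket creation
    grep "socreate" /var/log/vmkernel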

This patch also resolves the following issues:

  • If the service console descriptor file esxconsole.vmdk is deleted, the ESX 4.1 host might fail to boot, displaying an error message similar to the following:
    vsd-mount
    You have entered the recovery shell. The situation you are in may be recoverable. If you are able to fix this situation the boot process will continue normally after you exit this terminal
    /bin/sh:can't access TTY; job control turned off.

  • During PXE installation of Linux virtual machines that are configured with multiple vCPUs and the VMXNET3 network adapter, the adapter might fail to obtain a DHCP IP address on a vNetwork Distributed Switch (vDS), and a message similar to the following is displayed:
    DHCPv4 eth0 - Timed Out.
    dhcp: DHCP Configuration failed.
    The /var/log/messages log file inside the guest might contain entries similar to the following:
    localhost kernel: eth0: intr type 2, mode 0, 1 vectors allocated
    localhost kernel: eth0: NIC Link is Up 10000 Mbps
    localhost kernel: eth0: NIC Link is Down

  • If you configure NIC teaming port group policies such as load balancing, network failover detection, notify switches, or failback, and then restart the ESX host, the ESX host might send traffic through only one physical NIC.

  • Virtual machines configured with CPU limits might experience a drop in performance when the CPU limit is reached (%MLMTD greater than 0). For more information, see KB 1030955.
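    To confirm that a virtual machine is being throttled by its CPU limit, you can check the %MLMTD counter in esxtop. This is a generic diagnostic sketch, not part of the patch:
    # Run esxtop from the service console, then press 'c' for the CPU view;
    # a %MLMTD value greater than 0 means the world was ready to run but was
    # descheduled because of its CPU limit.
    esxtop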

  • Linux virtual machines with a VMXNET2 virtual NIC might fail when they use an MTU greater than the standard 1500 bytes (jumbo frames).
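    For reference, jumbo frames must be enabled end to end. The following is a typical ESX 4.x configuration sketch that exercises this path; the vSwitch name and guest interface are examples:
    # On the ESX host, set the MTU of the virtual switch to 9000
    esxcfg-vswitch -m 9000 vSwitch1
    # Inside the Linux guest, raise the interface MTU to match
    ifconfig eth0 mtu 9000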

  • If you are using a backup application that utilizes Changed Block Tracking (CBT) and the ctkEnabled option for the virtual machine is set to true, the virtual machine becomes unresponsive for up to 30 seconds when you remove snapshots of the virtual machine residing on NFS storage.
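    You can verify whether CBT is enabled by inspecting the virtual machine's configuration file; the path below is a placeholder:
    # Look for ctkEnabled entries in the VM configuration file
    grep -i ctkEnabled /vmfs/volumes/<datastore>/<vm>/<vm>.vmx
    # When CBT is on, this typically returns entries such as:
    #   ctkEnabled = "TRUE"
    #   scsi0:0.ctkEnabled = "TRUE"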

  • An ESX host connected to an NFS datastore might fail with a purple diagnostic screen if it receives a corrupted response from the NFS server for a read operation on the NFS datastore. The screen displays error messages similar to the following:
    Saved backtrace from: pcpu 16 SpinLock spin out NMI
    0x4100c00875f8:[0x41801d228ac8]ProcessReply+0x223 stack: 0x4100c008761c
    0x4100c0087648:[0x41801d18163c]vmk_receive_rpc_callback+0x327 stack: 0x4100c0087678
    0x4100c0087678:[0x41801d228141]RPCReceiveCallback+0x60 stack: 0x4100a00ac940
    0x4100c00876b8:[0x41801d174b93]sowakeup+0x10e stack: 0x4100a004b510
    0x4100c00877d8:[0x41801d167be6]tcp_input+0x24b1 stack: 0x1
    0x4100c00878d8:[0x41801d16097d]ip_input+0xb24 stack: 0x4100a05b9e00
    0x4100c0087918:[0x41801d14bd56]ether_demux+0x25d stack: 0x4100a05b9e00
    0x4100c0087948:[0x41801d14c0e7]ether_input+0x2a6 stack: 0x2336
    0x4100c0087978:[0x41801d17df3d]recv_callback+0xe8 stack: 0x4100c0087a58
    0x4100c0087a08:[0x41801d141abc]TcpipRxDataCB+0x2d7 stack: 0x41000f03ae80
    0x4100c0087a28:[0x41801d13fcc1]TcpipRxDispatch+0x20 stack: 0x4100c0087a58 

  • If there are read-only LUNs with valid VMFS metadata, rescanning of VMFS volumes might take a long time to complete because ESX keeps trying to mount the read-only LUNs until the mount operation times out.
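    To observe the delay from the service console, you can time a rescan of a storage adapter; the adapter name is an example:
    # Rescan a storage adapter; with read-only LUNs presented, this can take noticeably longer
    time esxcfg-rescan vmhba1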

  • When you simultaneously start several hundred virtual machines that are configured to use the e1000 virtual NIC, ESX hosts might stop responding and display a purple diagnostic screen. 

  • When VMware Fault Tolerance is enabled on a virtual machine and the ESX host that runs the secondary virtual machine is powered off unexpectedly, the primary virtual machine might become unavailable for about a minute.

  • When a Storage Virtual Appliance (SVA) presents an iSCSI LUN to the ESX host on which it runs, a VMFS lockup might cause I/O timeouts in the SVA, resulting in messages similar to the following in the /var/log/messages file of the SVA virtual machine:
    sva1 kernel: [ 5817.054354] mptscsih: ioc0: attempting task abort! (sc=ffff88001f95b580)
    sva1 kernel: [ 5817.054360] sd 0:0:1:0: [sdb] CDB: Write(10): 2a 00 00 15 30 40 00 00 40 00
    sva1 kernel: [ 5817.182134] mptscsih: ioc0: task abort: SUCCESS (sc=ffff88001f95b580)

  • If you configure a primary or secondary private VLAN on vNetwork Distributed Switches while a virtual machine is migrating, the destination ESX host might stop responding and display a purple diagnostic screen with messages similar to the following:
    #PF Exception 14 in world 4808:hostd-worker IP 0x41801c96964a addr 0x38
    VLAN_PortsetLookupVID@esx:nover+0x59 stack: 0x417f823d77a8
    PVLAN_DVSUpdate@esx:nover+0x5cb stack: 0x0
    DVSPropESSetPVlanMap@esx:nover+0x81 stack: 0x1
    DVSClient_PortsetDataWrite@vmkernel:nover+0x78 stack: 0x417f823d7898
    DVS_PortsetDataSet@vmkernel:nover+0x6c stack: 0x1823d78f8

  • If the NFS volume hosting a virtual machine encounters errors, the NVRAM file of the virtual machine might become corrupted and grow from the default 8KB to a few gigabytes. If you then perform a vMotion or suspend operation, the virtual machine fails with an error message similar to the following:
    unrecoverable memory allocation failures at bora/lib/snapshot/snapshotUtil.c:856
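    A quick way to spot an affected virtual machine is to check the size of its NVRAM file, which is normally about 8KB; the paths below are placeholders:
    # List the NVRAM file size; a file that has grown to gigabytes indicates corruption
    ls -lh /vmfs/volumes/<datastore>/<vm>/*.nvram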

  • When you migrate a virtual machine or restore a snapshot, you might notice a loss of application monitoring heartbeats. This issue occurs due to internal timing and synchronization issues. As a consequence, you might see a red application monitoring event warning, followed by the immediate reset of the virtual machine if the application monitoring sensitivity is set to High. In addition, application monitoring events that are triggered during the migration might contain outdated host information.

  • When SCO OpenServer 5.0.7 virtual machines with multiple vCPUs are installed on an ESX server with virtual SMP enabled, the virtual machines start with only one vCPU instead of all configured vCPUs.

  • The CPU usage of sfcbd becomes higher than normal, reaching around 40% to 60%. The /var/log/sdr_content.raw and /var/log/sel.raw log files might contain the efefefefefefefefefefefef text, and /var/log might contain an SDR response buffer was wrong size message. This issue occurs because IpmiProvider might use the CPU for a long time to process meaningless text such as efefef.
    This issue is seen particularly on Fujitsu PRIMERGY servers, but might occur on any other system.
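    To confirm the symptom, you can check sfcbd CPU usage and search the raw sensor logs from the service console. This is a diagnostic sketch only:
    # Show the CPU usage of the sfcbd processes
    ps aux | grep '[s]fcbd'
    # Count occurrences of the filler bytes; -a treats the binary log as text
    grep -a -c efefef /var/log/sdr_content.raw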

  • Improves the way shared folders are handled.

In addition, this patch updates the Certificate Revocation List (CRL) to revoke an RSA key that HP uses for code-signing certain software components. HP has created a new key pair and re-signed the affected software components with the new key.
If you restart an HP system that is running ESX, you must update the software components to the version signed with the new key. You can download the HP Management Agent for VMware ESX 4.x (hpmgmt-8.7.0-vmware4x.tgz) from the HP Web site.
If you do not restart the system, it continues to work with the currently installed and loaded software. However, the ESX system rejects software signed with the revoked key and logs a warning if the system loads any kernel module signed with the revoked key. This might cause certain HP features to fail.
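Before updating, you can list the HP management components currently installed on the ESX service console; this is a generic RPM query shown as an example, not an HP-documented procedure:
    # List installed HP management packages on the service console
    rpm -qa | grep -i '^hp'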

Deployment Considerations

None beyond the required patch bundles and reboot information listed in the table above.

Patch Download and Installation

See the VMware vCenter Update Manager Administration Guide for instructions on using Update Manager to download and install patches to automatically update ESX 4.1 hosts.

To update ESX 4.1 hosts without using Update Manager, download the patch ZIP file from http://support.vmware.com/selfsupport/download/ and install the bulletin by using esxupdate from the command line of the host. For more information, see the ESX 4.1 Patch Management Guide.
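For example, from the service console of the host; the ZIP file name below follows this patch's naming and should be adjusted to match the file you downloaded:
    # Install the patch bundle with esxupdate, then verify that it is installed
    esxupdate --bundle=ESX410-201104401-SG.zip update
    esxupdate query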

Based on VMware KB 1035097

