Configuring Coraid EtherDrive SAN appliances and deploying with ESXi 5.x (Partner Support)

Details

This article provides information about Partner Support for configuring Coraid EtherDrive SAN appliances and deploying with ESXi 5.x.
 
Note: The Partner Verified and Supported Products (PVSP) policy implies that the solution is not directly supported by VMware. For issues with this configuration, contact Coraid directly. See the Support Workflow on how partners can engage with VMware. It is the partner's responsibility to verify that the configuration functions with future vSphere major and minor releases, as VMware does not guarantee that compatibility with future releases is maintained.
 
Disclaimer: The partner product referenced in this article is a software module that is developed and supported by a partner. Use of this product is also governed by the end user license agreement of the partner. You must obtain the application, support, and licensing for using this product from the partner. For more information, see: 
  • http://www.coraid.com/products
  • http://www.coraid.com/support/customer_support
  • http://www.coraid.com/support/coraid_support_plans

To contact Coraid support directly, email [email protected].

Solution

Introduction to Coraid technology

The Coraid EtherDrive Host Bus Adapter and driver for VMware ESXi enable your server with AoE technology to deliver affordable, fast EtherDrive SAN solutions for your virtualization environment. Enabling ESXi hosts to work natively with EtherDrive storage is a highly effective way to take full advantage of VMware vSphere features including vMotion and VMFS. Coraid EtherDrive SAN products deliver Fibre Channel speeds at Ethernet prices in an easily scalable, reliable, and simply elegant solution.

An EtherDrive SAN is composed of one or more LUNs exported as storage targets for VMFS datastores. Installed in the ESXi host, the EtherDrive HBA presents the LUN on the EtherDrive SAN as a locally attached standard SCSI device to ESXi. The software driver and HBA translate SCSI disk requests into AoE requests and transmit them to the EtherDrive SAN. As responses return from the EtherDrive SAN, the reverse translation occurs in the HBA software driver.
 
 

Terminology

  • LUN (Logical Unit Number) – A LUN is a grouping of uniquely numbered blocks of storage with each block containing 512 bytes of data. LUNs can be disk drives, disk partitions, groups of disks operating as one (a RAID array), or abstracted “virtual” LUNs made from other LUNs. EtherDrive storage appears as one or more LUNs to an ESXi host. VMware refers to LUNs as Volumes and/or Datastores.

  • AoE – ATA over Ethernet (AoE) is the storage protocol that Coraid EtherDrive devices use to send block data between target and host.

  • HBA (Host Bus Adapter) – An HBA is used to connect an ESXi host to network storage. These EtherDrive HBAs are available from Coraid:
    • PCIe HBA with (2) RJ-45 GbE ports

    • PCIe HBA with (2) RJ-45 10G Base-T ports

    • PCIe HBA with (2) CX-4 10 GbE ports

    • PCIe HBA with (2) SFP+ 10 GbE ports

    • Mezzanine HBAs for HP, IBM and Dell Blade Servers 

Basic Configuration of RAID LUNs on the EtherDrive SAN

Before Coraid EtherDrive SAN storage can be used with an ESXi host, one or more LUNs must be configured on the EtherDrive SAN storage appliance or through the EtherDrive VSX storage virtualization appliance. The SRX Admin Guide and VSX Admin Guide should be used as the complete references for EtherDrive SAN appliance configuration.

To set up a LUN on an EtherDrive Storage Appliance, follow these sample steps:

  1. Boot the unit and set the shelf address:

    SR shelf unset> shelf 1

  2. Create a RAID5 LUN from disks 1.0 through 1.9:

    SR shelf 1> make 0 raid5 1.0-9

  3. Enable the LUN:

    SR shelf 1> online 0

At this point, LUN 1.0 is online and visible to any EtherDrive HBA on the same Layer 2 network.
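The shelf.LUN target notation used here (for example, 1.0 for shelf 1, LUN 0) recurs in every esxcli example later in this article. As a quick illustration, a target ID can be split into its shelf and LUN components as below; the split_target helper is hypothetical, not part of the Coraid or VMware tooling.

```shell
#!/bin/sh
# Hypothetical helper: split a Coraid AoE target ID ("shelf.LUN")
# into its shelf (AoE major) and LUN (AoE minor) components.
split_target() {
    target="$1"
    shelf="${target%%.*}"   # text before the first dot
    lun="${target#*.}"      # text after the first dot
    echo "shelf=$shelf lun=$lun"
}

split_target 1.0    # prints: shelf=1 lun=0 (the LUN created above)
split_target 26.0   # prints: shelf=26 lun=0
```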

Installing the Coraid HBA and Driver in ESXi

The Coraid EtherDrive HBA utilizes a PCIe interface within the ESXi Host. A full set of documentation on the ESXi HBA options and drivers is located at http://support.coraid.com/esx.

When the HBA has been physically installed in the ESXi host, the software driver must be installed for ESXi to recognize and utilize the card. You can download the software driver from http://support.coraid.com/support/esx. You can also access the software driver directly from the ESXi console using the wget command.

To install the driver onto the ESXi host:
 
Important Notes:
  • To SCP the driver to the ESXi host, the ESXi host must be configured to support SSH. For more information, see Using Tech Support Mode in ESXi 4.1 and ESXi 5.x (1017910).
  • The following steps provide an example with a generic driver version. Change the file name referenced in the steps as appropriate for the desired release version. For more information, see the Installation and Configuration Guide for the recommended product version.
  1. Access the ESXi console as the root user.
  2. Copy the EtherDrive HBA driver to the ESX/ESXi host with the command:

    wget http://support.coraid.com/support/esx/etherdrive-hba-esxi5-x.x.x-Rx.zip

  3. From the console on the ESXi host, install the EtherDrive HBA driver.

    • If the user is logged in to the ESXi host directly, enter:

      esxcli software vib install -d file:///etherdrive-hba-esxi5-x.x.x-Rx.zip

    • If using the vSphere CLI, enter:

      esxcli -u root -s hostname software vib install -d file:///etherdrive-hba-esxi5-x.x.x-Rx.zip

  4. Reboot the ESXi host by typing reboot at the command prompt.
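The download-and-install sequence above can be sketched as a single script. This is an illustrative sketch only: the DRYRUN guard is our own addition (so the script can be read and exercised off an ESXi host), and the x.x.x-Rx placeholder file name must be changed to match the actual driver release, as noted earlier.

```shell
#!/bin/sh
# Sketch of the driver install sequence from the steps above.
# DRYRUN=1 (the default here, an assumption of this sketch) prints
# each command instead of executing it.
DRYRUN="${DRYRUN:-1}"
DRIVER_ZIP="etherdrive-hba-esxi5-x.x.x-Rx.zip"   # placeholder version

run() {
    if [ "$DRYRUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run wget "http://support.coraid.com/support/esx/$DRIVER_ZIP"
run esxcli software vib install -d "file:///$DRIVER_ZIP"
run reboot
```

With DRYRUN=1 the script only prints the three commands; set DRYRUN=0 on the ESXi host itself to execute them.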

Note: Multipathing across all EtherDrive HBA interfaces is automatic. Every EtherDrive HBA has two Ethernet ports. VMware recommends that each port be connected to the SAN network. Having two connections to the SAN network offers two advantages: network redundancy and higher bandwidth capacity. A higher level of redundancy is achieved when the two ports of the EtherDrive HBA are connected to separate switches. Furthermore, more than one EtherDrive HBA may be installed in an ESXi host; multiple HBAs offer further redundancy and data throughput. Multipathing is built into the EtherDrive HBA driver and requires no configuration: the driver automatically detects all network paths to EtherDrive SAN storage and uses each path to load balance all data packets bound for EtherDrive SAN storage.

Accessing and using EtherDrive LUNs in vSphere Client

Claiming a LUN

The EtherDrive HBA sees all EtherDrive LUNs that are available on the SAN. To allow the ESXi host visibility of the LUN, it must be "claimed". Only LUNs that have been claimed by an ESXi host are presented to ESXi and assigned a SCSI ID. When a LUN is claimed by an ESXi host, all other hosts in the ESXi cluster view it as claimed and also obtain a SCSI ID for access to this shared storage device. The act of claiming is only required by one ESXi host.

Note: One of the changes between ESXi 4.x and 5.x is in the VMFS file system. If a datastore was created on an ESXi 4.x system, that datastore and its LUN are considered "legacy" LUNs. For instructions on claiming legacy LUNs on an ESXi 5.x host, see Claiming a Legacy LUN.

To claim an ESXi 5.0 LUN, run the command:  
esxcli ethdrv claim -t targetid

Where targetid is the shelf number and LUN number in the form shelf.LUN. For example, with shelf 1, LUN 0, the command is:

esxcli ethdrv claim -t 1.0
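When several LUNs need to be claimed, the command can be wrapped in a loop. The esxcli ethdrv claim invocation is the one shown above; the target list, the loop, and the DRYRUN echo guard are our own sketch so the script can be read without an ESXi host at hand.

```shell
#!/bin/sh
# Claim a list of EtherDrive targets (shelf.LUN) one after another.
# DRYRUN=1 (an assumption of this sketch) prints the commands instead
# of running them, since esxcli exists only on an ESXi host.
DRYRUN="${DRYRUN:-1}"
TARGETS="1.0 1.1 2.0"   # example shelf.LUN targets

claim() {
    if [ "$DRYRUN" = "1" ]; then
        echo "would run: esxcli ethdrv claim -t $1"
    else
        esxcli ethdrv claim -t "$1"
    fi
}

for t in $TARGETS; do
    claim "$t"
done
```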

Claiming a Legacy LUN

When claiming a datastore or LUN that was created on an ESX 4.1 or earlier host, you must identify the legacy LUN status in the claiming command. A standard (non-legacy) LUN is presented in full capacity as a single SCSI device.

Because of the maximum 2TB LUN capacity limit in VMFS 3 and the underlying SCSI-2 addressing limitations, a legacy LUN is presented as one or more 2TB SCSI devices, each using a unique 8-byte NAA identifier. As part of the legacy claim function, the EtherDrive HBA automatically presents a Coraid LUN greater than 2TB to the ESXi host as multiple LUNs segmented at 2TB boundaries.
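The 2TB segmentation can be reproduced with simple arithmetic. The sketch below assumes the 2TB boundary is 2^32 sectors of 512 bytes (2199023255552 bytes, consistent with the 2199.023GB extent sizes that appear in the device listings later in this article); the script itself is our illustration, not a Coraid tool.

```shell
#!/bin/sh
# Sketch of how a legacy LUN larger than 2TB is presented: several
# full 2TB SCSI devices plus one smaller remainder device.
# Assumption: the 2TB boundary is 2^32 sectors of 512 bytes.
SEG=$((4294967296 * 512))   # 2199023255552 bytes per segment

segments() {
    total="$1"                # LUN size in bytes
    full=$((total / SEG))     # number of full 2TB devices
    rem=$((total % SEG))      # size of the final, smaller device
    echo "$full full segments, remainder $rem bytes"
}

# A LUN holding 4.5 segments' worth of bytes splits into 4 + remainder:
segments $((4 * SEG + SEG / 2))   # prints: 4 full segments, remainder 1099511627776 bytes
```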

In the example below, the 9TB LUN (1.0) is legacy claimed and presented to the ESXi server as four 2TB LUNs and one ~1TB LUN.

For example, run the command:

[root@remo ~]# cat /proc/ethdrv/devices

You see this output:
vmhba2:C0:T0:L0 1.0 2.0TB
vmhba2:C0:T0:L1 1.0 2.0TB
vmhba2:C0:T0:L2 1.0 2.0TB
vmhba2:C0:T0:L3 1.0 2.0TB
vmhba2:C0:T0:L4 1.0 1105.955GB

The result of this legacy claim is (4) 2TB LUNs and (1) additional ~1TB LUN. If desired, the segmented LUNs can be reassembled into a single datastore by using the Extend Datastore feature.
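The device listing can also be summarized programmatically, for example to count how many SCSI extents each EtherDrive target was split into. The snippet below feeds the five-extent legacy-claim listing from above into awk via a here-document; the parsing is our own sketch and assumes the three-column runtime-name / target / size format shown.

```shell
#!/bin/sh
# Summarize a /proc/ethdrv/devices style listing: count how many SCSI
# extents each EtherDrive target (column 2, shelf.LUN) was split into.
awk '{ count[$2]++ }
     END { for (t in count) printf "target %s: %d extents\n", t, count[t] }' <<'EOF'
vmhba2:C0:T0:L0 1.0 2.0TB
vmhba2:C0:T0:L1 1.0 2.0TB
vmhba2:C0:T0:L2 1.0 2.0TB
vmhba2:C0:T0:L3 1.0 2.0TB
vmhba2:C0:T0:L4 1.0 1105.955GB
EOF
```

For the sample data this prints "target 1.0: 5 extents", matching the four 2TB extents plus the ~1TB remainder.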

To use the ESXi CLI to claim a legacy LUN:

  1. Launch the vSphere CLI or VMware Management Assistant (VMA).
  2. Log in with root permissions.
  3. At the prompt, run:

    esxcli ethdrv claim -l -t targetid

    For example:

    ~ # esxcli ethdrv claim -l -t 1.0

    Note: -l indicates that this is a legacy target.

Repeat the claim command for each Legacy LUN/datastore that needs to be recognized on an ESXi 5.0 host.

To access and use EtherDrive LUNs in vSphere Client:
  1. Open the vSphere Client and select the ESXi host where the EtherDrive HBA was installed.
  2. Click the Configuration tab.
  3. Click Storage Adapters (under Hardware).
  4. Click Rescan All to refresh the list of available storage adapters.
  5. Review the information about the datastores in the host’s storage inventory and the virtual machines in the host’s virtual machine inventory.

    The Coraid EtherDrive HBA should now be displayed in the list of available storage adapters, and the available LUNs (targets) should be listed under the Details section.

Renaming the EtherDrive LUNs

vSphere assigns a name to each LUN available to the ESXi host. These vSphere-assigned names can be changed to reflect EtherDrive shelf.slot names. Renaming the LUNs in this way is recommended because it keeps datastore-to-LUN mappings organized and easy to maintain.

To name/organize EtherDrive LUNs:
  1. Claim the LUN. For more information, see Claiming a LUN.
  2. Launch vSphere Client.
  3. Click the Configuration tab.
  4. Click Storage Adapters.
  5. Select the vmhba beneath Coraid EtherDrive HBA in the Storage Adapters menu.

    A list of available storage devices displays in the Details field.

  6. Note the vSphere‐assigned LUN name in the Runtime Name column. For example:




    Compare it to the output of the ESX CLI command:

    ~ # cat /proc/ethdrv/devices

    For example:

    vmhba2:C0:T3:L0 26.0 8000.000GB
    vmhba2:C0:T8:L0 28.1 2199.023GB
    vmhba2:C0:T8:L1 28.1 2199.023GB

    The Runtime Name vmhba2:C0:T3:L0 is assigned to LUN 26.0.

  7. Note the name that the ESXi host has assigned to the LUN:

     

    In this example, it is Local Coraid Disk (naa.600100408f7239d9f03fa00700000000).

  8. Right-click the LUN and click Rename.

  9. Name the LUN with the shelf:LUN:Extent number used by the EtherDrive SAN.



    In this case, it is EtherDrive 26.0.

Creating a datastore on the EtherDrive SAN

Creating a datastore on the EtherDrive SAN follows the same steps as configuring any SCSI storage.

To set up a volume:
  1. Launch vSphere Client.
  2. Click the Configuration tab.
  3. Under Hardware, click Storage.
  4. Click Add Storage.
  5. Follow the Add Storage Wizard to select Disk/LUN and the datastore options for storage initialization.
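In principle the same datastore creation can be scripted from the ESXi shell with vmkfstools instead of the Add Storage wizard. This is a heavily hedged sketch: the NAA device path reuses the identifier from the renaming example, the datastore label is made up, partition 1 is assumed to exist already, and the DRYRUN echo guard is our own so the script stays inert outside an ESXi host.

```shell
#!/bin/sh
# Sketch: create a VMFS5 datastore on a claimed EtherDrive LUN from
# the ESXi shell. Assumptions: the NAA ID below is the one from the
# renaming example, partition 1 already exists, and the label is ours.
DRYRUN="${DRYRUN:-1}"
DISK="/vmfs/devices/disks/naa.600100408f7239d9f03fa00700000000"
LABEL="EtherDrive-26.0"

mkfs_cmd() {
    echo "vmkfstools -C vmfs5 -S $LABEL ${DISK}:1"
}

if [ "$DRYRUN" = "1" ]; then
    echo "would run: $(mkfs_cmd)"
else
    $(mkfs_cmd)
fi
```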

    Based on VMware KB 1031322
