Configuring an NVMe/TCP Client on VMware vSphere for VAST Cluster Block Storage


Obtaining the Host NQN

To map volumes to the host, you must add the host's properties to the cluster (see Provisioning Block Storage with VMS). For this you need the host NQN, which you can retrieve with the esxcli nvme info get command:

esxcli nvme info get

For example:

[root@demohost:~] esxcli nvme info get
   Host NQN: nqn.2014-08.com.vastdata:nvme:demohost

Connecting the Host to Subsystems and Mapped Volumes

This procedure assumes that your ESXi hosts are managed using vCenter.

Repeat the procedure separately for each host.

  1. Identify which network cards (devices) are connected to the cluster's data network:

    1. In the vCenter graphical user interface, navigate to the ESXi host.

    2. Select the Configure tab.

    3. From the Networking menu, select Physical Adapters.

    4. Check the observed IP addresses to see which devices are connected to the cluster's data network. In this example, vmnic0 and vmnic1 are connected to the data network (172.21.x.x); a CLI alternative follows the screenshot:

      [Screenshot: ESXi Physical Adapters list showing the observed IP ranges]
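
    If you prefer to work from the CLI, you can also list the host's physical adapters over SSH. This shows each vmnic's name, link state, and speed (though not the observed IP ranges, which come from the CDP/LLDP information displayed in the UI):

      esxcli network nic list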
  2. Log on to the host using SSH and run the following commands, adjusting the vmnic numbers to match your configuration.

    Caution

    These commands do not prompt for confirmation. Run them with caution.

    # Set up the vSwitch (may already be done; included for completeness)
    esxcli network vswitch standard add -v DataNet
    esxcli network vswitch standard set -m 9000 -v DataNet
    esxcli network vswitch standard uplink add -u vmnic0 -v DataNet
    # Create the port group (may already be done; included for completeness)
    esxcli network vswitch standard portgroup add -p DataNet -v DataNet
    # Add a VMkernel interface (vmk) to the port group
    esxcli network ip interface add -i vmk1 -p DataNet
    # Set the interface to obtain its IPv4 address via DHCP
    esxcli network ip interface ipv4 set -i vmk1 -t dhcp
    # Tag the interface for NVMe/TCP and vMotion traffic
    esxcli network ip interface tag add -i vmk1 -t NVMeTCP
    esxcli network ip interface tag add -i vmk1 -t VMotion
    # Set MTU to 9000 on the VMkernel interface
    esxcli network ip interface set -i vmk1 -m 9000
    # Ensure vmnic0 is set as an active uplink
    esxcli network vswitch standard policy failover set -v DataNet --standby-uplinks= --active-uplinks=vmnic0
    # Verify the configuration
    esxcli network vswitch standard policy failover get -v DataNet
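
    After running these commands, you can optionally verify the VMkernel interface and the jumbo-frame path. This is a sanity check, not part of the procedure itself; <data-network-IP> is a placeholder for a reachable address on the cluster's data network:

    # Confirm the VMkernel interface obtained an IPv4 address
    esxcli network ip interface ipv4 get -i vmk1
    # Test the jumbo-frame path without fragmentation
    # (8972-byte payload + 28 bytes of ICMP/IP headers = 9000)
    vmkping -I vmk1 -s 8972 -d <data-network-IP>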
    
  3. Run the following commands to configure a claim rule.

    VMware claim rules control how the ESXi host handles storage devices. They dictate which multipathing plugin (such as NMP or HPP) or driver manages specific devices.

    Caution

    Running these commands deletes the existing claim rule (rule 102) before recreating it.

    # Remove any existing rule 102 and reload the rule set
    esxcli storage core claimrule remove --rule 102
    esxcli storage core claimrule load
    # Assign VAST NVMe controllers to the HPP with latency-based path selection
    esxcli storage core claimrule add --rule 102 -t vendor --nvme-controller-model "VASTData" -P HPP -g "pss=LB-Latency,latency-eval-time=30000,sampling-ios-per-path=16"
    esxcli storage core claimrule load
    esxcli storage core claimrule run
    esxcli storage core claimrule add -u --type vendor --nvme-controller-model "VASTData" --plugin HPP
    esxcli storage core claimrule load
    esxcli storage core claimrule run
    # List the devices now claimed by the HPP
    esxcli storage hpp device list

    Where:

    • pss=LB-Latency: Uses latency-based load balancing for I/O paths.

    • latency-eval-time=30000: Evaluates latency over 30 seconds to make path selection decisions.

    • sampling-ios-per-path=16: Collects latency samples after every 16 I/O operations per path.

    This rule optimizes storage performance by dynamically selecting the lowest-latency path for I/O operations.

    Note

    Prior to version 5.3.2, the NVMe controller model name for the VAST cluster was VastData. Following an upgrade to 5.3.2, complete these steps:

    • Delete and recreate the claim rules with the new value of VASTData.

    • Run these commands to load the new rules:

      esxcli storage core claimrule load
      esxcli storage core claimrule run
    • Reboot the hosts that were running with the old rules.
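
    To confirm that the rules were stored and loaded (an optional check using standard esxcli commands), list the multipathing claim rules; the VASTData rule should appear in both the file (configured) and runtime (loaded) classes:

      esxcli storage core claimrule list --claimrule-class=MP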

  4. Create a storage adapter for NVMe over TCP:

    1. In the vCenter graphical user interface, navigate to the host.

    2. Select the Configure tab.

    3. Under Storage, select Storage Adapters.

    4. In the Add software adapter dropdown, select Add NVMe over TCP adapter.

    5. In the Physical Network Adapter dropdown, select the NIC to add.

      A new storage adapter is created.
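
    Alternatively, on ESXi releases that support NVMe/TCP, you can create the software adapter from the CLI (adjust the vmnic to match your configuration):

      # Enable a software NVMe/TCP storage adapter bound to vmnic0
      esxcli nvme fabrics enable --protocol TCP --device vmnic0
      # List adapters to find the name of the new vmhba
      esxcli nvme adapter list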

  5. Discover available controllers and connect paths:

    1. Select the new storage adapter and then select the Controllers tab.

    2. Click Add Controller and complete the following details:

      IP

      Enter the first IP address in the virtual IP pool on the cluster that provides access to the subsystem.

      The virtual IP pool should be configured with the protocol role and should not be excluded by the subsystem view's view policy. (A view policy can restrict access to the view to specific virtual IP pools.)

      Port Number

      Use 4420. If that port is in use, you can use 8009 instead.

    3. Click Discover Controllers and select all discovered paths.

    4. Click OK.

      All mapped volumes are now accessible to the host.
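
    Discovery and connection can also be performed from the CLI if you prefer. In this sketch, vmhbaXX, <cluster-virtual-IP>, and <subsystem-NQN> are placeholders for your adapter name, a virtual IP from the pool, and the subsystem NQN reported by discovery:

      # Discover subsystems behind a cluster virtual IP (follow the port guidance above)
      esxcli nvme fabrics discover -a vmhbaXX -i <cluster-virtual-IP> -p 4420
      # Connect to a discovered subsystem
      esxcli nvme fabrics connect -a vmhbaXX -i <cluster-virtual-IP> -p 4420 -s <subsystem-NQN>
      # Verify the controllers and the namespaces (mapped volumes) visible to the host
      esxcli nvme controller list
      esxcli nvme namespace list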