Obtaining the Host NQN
To map volumes to the host, you need to add the host properties to the cluster (see Provisioning Block Storage with VMS). For this, you need the host NQN, which you can obtain using the esxcli nvme info get command:
esxcli nvme info get
For example:
[root@demohost:~] esxcli nvme info get
   Host NQN: nqn.2014-08.com.vastdata:nvme:demohost
Connecting the Host to Subsystems and Mapped Volumes
This procedure assumes that your ESXi hosts are managed using vCenter.
The procedure needs to be done for each host separately.
Identify which network cards (devices) are connected to the cluster's data network:
In the vCenter graphic user interface, navigate to the ESXi host.
Select the Configure tab.
From the Networking menu, select Physical Adapters.
Check the observed IP addresses to see which devices are connected to the cluster's data network. In this example,
vmnic0 and vmnic1 are connected to the data network (172.21.x.x).
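If you also have shell access to the host, you can cross-check the adapters from the command line. This is a sketch only; the observed IP ranges themselves are reported by vCenter, but the following commands list the physical adapters and any VMkernel addresses already configured:
esxcli network nic list
esxcli network ip interface ipv4 get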
Log on to the host using SSH and run the following commands, adjusting the vmnic numbers to match your configuration.
Caution
These commands do not prompt for confirmation, so run them with caution.
# Set up vSwitches (already done, but included for completeness)
esxcli network vswitch standard add -v DataNet
esxcli network vswitch standard set -m 9000 -v DataNet
esxcli network vswitch standard uplink add -u vmnic0 -v DataNet
# Create port groups (already done, but included for completeness)
esxcli network vswitch standard portgroup add -p DataNet -v DataNet
# Add a VMkernel interface (vmk) to the port group
esxcli network ip interface add -i vmk1 -p DataNet
# Set the interface to use DHCP and tag it for NVMe/TCP and vMotion
esxcli network ip interface ipv4 set -i vmk1 -t dhcp
esxcli network ip interface tag add -i vmk1 -t NVMeTCP
esxcli network ip interface tag add -i vmk1 -t VMotion
# Set MTU to 9000 on the VMkernel interface
esxcli network ip interface set -i vmk1 -m 9000
# Ensure vmnic0 is set as an active uplink
esxcli network vswitch standard policy failover set -v DataNet --standby-uplinks= --active-uplinks=vmnic0
# Verify the configuration
esxcli network vswitch standard policy failover get -v DataNet
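The commands above assume DHCP is available on the data network. If a static address is needed, or you want to verify jumbo-frame connectivity to the cluster, a sketch such as the following can be used (the address, netmask, and target virtual IP are placeholders; substitute values from your environment):
# Assign a static address to the VMkernel interface instead of DHCP
esxcli network ip interface ipv4 set -i vmk1 -t static -I 172.21.100.10 -N 255.255.0.0
# Confirm the address and test jumbo frames end to end (8972 = 9000 minus IP/ICMP headers)
esxcli network ip interface ipv4 get -i vmk1
vmkping -I vmk1 -d -s 8972 172.21.0.1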
Run the following commands to configure a claim rule:
Note
VMware claim rules are used to control how storage devices are handled by the ESXi host. These rules dictate which multipathing plugin (such as NMP, HPP, or others) or driver is responsible for managing specific devices.
Caution
Running these commands deletes any existing claim rule with ID 102 before adding the new rule.
esxcli storage core claimrule remove --rule 102
esxcli storage core claimrule load
esxcli storage core claimrule add --rule 102 -t vendor --nvme-controller-model "VASTData" -P HPP -g "pss=LB-Latency,latency-eval-time=30000,sampling-ios-per-path=16"
esxcli storage core claimrule load
esxcli storage core claimrule run
esxcli storage core claimrule add -u --type vendor --nvme-controller-model "VASTData" --plugin HPP
esxcli storage core claimrule load
esxcli storage core claimrule run
esxcli storage hpp device list
Note
pss=LB-Latency: Uses latency-based load balancing for I/O paths.
latency-eval-time=30000: Evaluates latency over 30 seconds to make path selection decisions.
sampling-ios-per-path=16: Collects latency samples after every 16 I/O operations per path.
This rule optimizes storage performance by dynamically selecting the lowest-latency path for I/O operations.
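To confirm that the rule is in place, you can inspect the claim rules and, once VAST namespaces are connected, the paths claimed by HPP. For example:
# List multipathing claim rules and confirm rule 102 is present with the HPP options
esxcli storage core claimrule list --claimrule-class=MP
# List paths claimed by the high-performance plugin (populated after namespaces are connected)
esxcli storage hpp path list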
Create a storage adapter for NVMe over TCP:
In the vCenter graphic user interface, navigate to the host.
Select the Configure tab.
Under Storage, select Storage Adapters.
In the Add software adapter dropdown, select Add NVMe over TCP adapter.
In the Physical Network Adapter dropdown, select the NIC to add.
A new storage adapter is created.
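If you prefer to create the adapter from the ESXi shell rather than through vCenter, roughly equivalent commands are shown below (adjust vmnic0 to the NIC identified earlier, and repeat for each uplink on the data network):
# Enable an NVMe over TCP software adapter bound to the chosen uplink
esxcli nvme fabrics enable --protocol TCP --device vmnic0
# Confirm that the new adapter (typically vmhbaNN) was created
esxcli nvme adapter list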
Discover available controllers and connect paths:
Select the new storage adapter and then select the Controllers tab.
Click Add Controller and complete the following details:
IP
Enter the first IP address in the virtual IP pool on the cluster that can provide access to the subsystem.
The virtual IP pool should be configured with the protocol role and should not be excluded by the subsystem view's view policy. (A view policy can restrict access to the view to specific virtual IP pools.)
Port Number
Use 4420. If that port is already in use, you can use 8009 instead.
Click Discover Controllers and select all discovered paths.
Click OK.
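Discovery and connection can also be performed from the ESXi shell. The following is a sketch; the adapter name, virtual IP address, and subsystem NQN are placeholders for your environment:
# Discover subsystems behind a cluster virtual IP
esxcli nvme fabrics discover -a vmhba65 -i 172.21.0.1 -p 4420
# Connect to a discovered subsystem by its NQN
esxcli nvme fabrics connect -a vmhba65 -i 172.21.0.1 -p 4420 -s <subsystem-NQN>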
All mapped volumes are now accessible to the host.
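To verify from the shell that the volumes are visible, you can, for example, list the connected controllers and namespaces:
esxcli nvme controller list
esxcli nvme namespace list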