This guide describes how to connect VAST Data Block Volumes to a Linux system over NVMe/TCP.
As the name implies, NVMe/TCP operates on top of TCP, so no specific network-level configuration is required. As long as the Linux host can communicate with the VAST ‘data’ network, you will be able to run NVMe/TCP over it.
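Before installing anything, you can optionally sanity-check basic reachability from the host to a VIP on the data network; NVMe/TCP listens on TCP port 4420 by default. A minimal sketch using nc (flags vary slightly between netcat implementations; replace x.x.x.x with a VIP from your cluster):

```shell
# Verify the host can reach the VAST data network on the default
# NVMe/TCP port (4420). Replace x.x.x.x with a VIP address.
nc -vz x.x.x.x 4420
```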
The steps below cover Ubuntu 24.04 and Red Hat Enterprise Linux version 9.6 (including RHEL-based distributions such as Rocky Linux). Other Linux distributions will likely be similar but may require additional steps.
Install NVMe CLI
The NVMe CLI tool is used for all NVMe device management, including NVMe/TCP devices. Install it with the command for your distribution.
On Ubuntu:
sudo apt-get install nvme-cli
On RHEL/Rocky:
sudo yum install nvme-cli -y
Confirm nvme-cli is installed by running the nvme list command, which will show all NVMe devices currently visible to the host. If your host has an internal NVMe device, it will be shown here; otherwise it should return an empty list:
user@ubuntu1:~$ sudo nvme list
Node Generic SN Model Namespace Usage Format FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
user@ubuntu1:~$
Load NVMe/TCP Kernel Module
Many Linux distributions will automatically load the kernel module required for NVMe/TCP. To make sure it is loaded, run the following command:
sudo modprobe nvme-tcp
Map VAST Volumes to Host
Mapping of VAST Volumes to a host is done using the Host’s NQN, which is automatically generated when the nvme-cli package is installed. This NQN is specific to a host and must be configured in the VAST cluster.
The host’s NQN is contained within the file /etc/nvme/hostnqn:
user@ubuntu1:~$ cat /etc/nvme/hostnqn
nqn.2014-08.org.nvmexpress:uuid:9e6b1d42-4bad-9fca-939c-4de9d5c73d0e
Using this NQN, create a Host within VAST, and map one or more Volumes to that host. Details on doing this can be found in the VAST Administrators Guide.
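If /etc/nvme/hostnqn does not exist on your system (it is normally created by the nvme-cli package, but some minimal images omit it), nvme-cli can generate one. A sketch:

```shell
# Generate a host NQN only if the file is missing; skip this step
# if /etc/nvme/hostnqn already exists.
sudo sh -c 'nvme gen-hostnqn > /etc/nvme/hostnqn'
cat /etc/nvme/hostnqn
```

Keep this NQN stable once Volumes are mapped to it in the VAST cluster; regenerating it would break the existing host mapping.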
Discovering/Connecting NVMe/TCP Volumes
Once one or more volumes have been mapped to the host in the VAST GUI, they must be connected to the Linux system.
First, we can use the nvme discover command to query the VAST cluster and determine the IP addresses available to connect to. This command requires two options: one to tell it to use NVMe/TCP for the connection (--transport=tcp) and a second to provide a single IP address from a VAST VIP Pool (--traddr=x.x.x.x). Additional IP addresses to use will be discovered automatically by the Linux system.
When run, the nvme discover command should return details of the NVMe “subsystems” being presented by the VAST cluster, and up to 16 IP addresses (“traddr”) for the cluster. Note that there is no need to record this information; as long as it is displayed, we know that things are configured correctly.
Run the following command, replacing x.x.x.x with an IP address from the VAST VIP Pool:
sudo nvme discover --transport=tcp --traddr=x.x.x.x
You should see output similar to this, showing the VAST cluster's subsystem NQN (subnqn) and the discovered IP addresses (traddr):
user@ubuntu1:~$ sudo nvme discover --transport=tcp --traddr=172.31.1.1
Discovery Log Number of Records 6, Generation counter 216
=====Discovery Log Entry 0======
trtype: tcp
adrfam: ipv4
subtype: nvme subsystem
treq: not required
portid: 1
trsvcid: 4420
subnqn: nqn.2024-08.com.vastdata:127db70c-0197-5f4f-8af8-44bead61cda2:default:mysubsys
traddr: 172.31.1.6
eflags: none
sectype: none
=====Discovery Log Entry 1======
trtype: tcp
adrfam: ipv4
subtype: nvme subsystem
treq: not required
portid: 1
trsvcid: 4420
[...etc...]
Having confirmed we have proper connectivity to the array, we can add the options passed above to the /etc/nvme/discovery.conf file. This removes the need to pass them on subsequent commands and, more importantly, allows them to be used when auto-connecting NVMe devices after an OS reboot.
These two options should be added to the file on a new line, with nothing else on that line:
user@ubuntu1:~$ echo '--transport=tcp --traddr=172.31.1.1' | sudo tee -a /etc/nvme/discovery.conf
--transport=tcp --traddr=172.31.1.1
user@ubuntu1:~$
Finally, we can tell the system to connect all of the discoverable NVMe devices, using the connect-all command. There is no need to provide the transport and traddr options to this command, as they will be read automatically from the discovery.conf file above.
sudo nvme connect-all
Confirm that the device(s) are now visible by running nvme list. You should see one entry for each VAST Volume mapped to the host.
user@ubuntu1:~$ nvme list
Node Generic SN Model Namespace Usage Format FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
/dev/nvme1n1 /dev/ng1n1 VastData VastData 0x3 0.00 B / 250.00 GB 512 B + 0 B 24.05
/dev/nvme1n2 /dev/ng1n2 VastData VastData 0x3 0.00 B / 500.00 GB 512 B + 0 B 24.05
user@ubuntu1:~$
The Linux NVMe drivers include built-in support for multi-pathing. The nvme list-subsys command allows you to view the paths being used to the array, and confirm they are healthy:
user@ubuntu1:~$ sudo nvme list-subsys
nvme-subsys1 - NQN=nqn.2024-08.com.vastdata:127db70c-0197-5f4f-8af8-44bead61cda2:default:mysubsys
hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e6b1d42-4bad-9fca-939c-4de9d5c73d0e
iopolicy=numa
\
+- nvme1 tcp traddr=172.31.1.6,trsvcid=4420,src_addr=172.31.2.25 live
+- nvme2 tcp traddr=172.31.1.2,trsvcid=4420,src_addr=172.31.2.25 live
+- nvme3 tcp traddr=172.31.1.4,trsvcid=4420,src_addr=172.31.2.25 live
+- nvme4 tcp traddr=172.31.1.1,trsvcid=4420,src_addr=172.31.2.25 live
+- nvme5 tcp traddr=172.31.1.3,trsvcid=4420,src_addr=172.31.2.25 live
+- nvme6 tcp traddr=172.31.1.5,trsvcid=4420,src_addr=172.31.2.25 live
In this case, we can see that there are a total of 6 paths, corresponding to the 6 IP addresses in the VIP Pool on the cluster (172.31.1.1 to .6). The “live” indication at the end of each line confirms that the path is active and in use.
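For scripted health checks, the path states can be tested mechanically by filtering this output. A small sketch that counts live paths, fed two sample lines here for illustration (in practice, pipe sudo nvme list-subsys into the filter instead of printf):

```shell
# Count paths reported as "live". In real use, replace the printf
# sample data with: sudo nvme list-subsys
printf '%s\n' \
  ' +- nvme1 tcp traddr=172.31.1.6,trsvcid=4420,src_addr=172.31.2.25 live' \
  ' +- nvme2 tcp traddr=172.31.1.2,trsvcid=4420,src_addr=172.31.2.25 live' |
grep -c ' live$'
# → 2
```

Comparing the count against the expected number of VIPs (6 in the example above) makes a simple monitoring check.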
The volume(s) can now be accessed using the path listed under the “Node” heading in the nvme list output:
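At this point a volume behaves like any local NVMe disk. As a minimal sketch, it can be formatted and mounted; this assumes the device /dev/nvme1n1 from the listing above and a hypothetical mount point /mnt/vast-vol, and note that mkfs destroys any existing data on the volume:

```shell
# Create an ext4 filesystem on the volume and mount it.
# WARNING: mkfs erases any existing data on the device.
sudo mkfs.ext4 /dev/nvme1n1
sudo mkdir -p /mnt/vast-vol
sudo mount /dev/nvme1n1 /mnt/vast-vol
df -h /mnt/vast-vol
```

If you later add the volume to /etc/fstab, consider the _netdev mount option so the mount is deferred until networking (and the NVMe/TCP connection) is up at boot.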
user@ubuntu1:~$ sudo fdisk /dev/nvme1n1
Welcome to fdisk (util-linux 2.37.4).
Command (m for help): p
Disk /dev/nvme1n1: 232.83 GiB, 250000000000 bytes, 488281250 sectors
Disk model: VastData
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 1048576 bytes
Optimizing Multi-Pathing for Performance
As mentioned above, the Linux NVMe drivers include native multi-pathing support, which defaults to the “numa” iopolicy. This policy is sub-optimal for VAST clusters and should be changed to “round-robin” for all VAST devices.
The current iopolicy for a subsystem is normally shown in the output of the nvme list-subsys command:
user@ubuntu1:~$ sudo nvme list-subsys
nvme-subsys1 - NQN=nqn.2024-08.com.vastdata:127db70c-0197-5f4f-8af8-44bead61cda2:default:mysubsys
hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e6b1d42-4bad-9fca-939c-4de9d5c73d0e
iopolicy=numa
[...]
If the iopolicy is not shown in this output, it can also be found using the command below, replacing “nvme-subsys1” with the relevant identifier for the subsystem from the output above:
user@ubuntu1:~$ cat /sys/class/nvme-subsystem/nvme-subsys1/iopolicy
numa
To configure the default iopolicy used for VAST subsystems, create or edit the file /etc/udev/rules.d/71-nvmf-vastdata.rules and place the following lines in it:
ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="VASTData", ATTR{iopolicy}="round-robin"
ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="VastData", ATTR{iopolicy}="round-robin"
(For VAST clusters installed on 5.3.2 or later, only the first line is required; there is no harm in including both.)
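If you are unsure which model string your cluster presents (and therefore which of the two rule lines will match), it can be read from sysfs; a sketch, assuming the subsystem identifier nvme-subsys1 from the earlier output:

```shell
# Show the model string the subsystem reports; this is the value
# the udev rules above match with ATTR{model}.
cat /sys/class/nvme-subsystem/nvme-subsys1/model
```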
After saving, run the following two commands to have the system load and apply the new settings:
sudo udevadm control --reload-rules
sudo udevadm trigger
Confirm the correct iopolicy is now in use, using the same method as described previously:
user@ubuntu1:~$ cat /sys/class/nvme-subsystem/nvme-subsys1/iopolicy
round-robin
Auto-Connecting Devices on Reboot
NVMe/TCP devices will not be automatically discovered/connected on reboot unless the nvmf-autoconnect service is enabled. Depending on the distribution, this may be enabled by default, but confirm that it is by running the following:
sudo systemctl enable nvmf-autoconnect
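To inspect the service state without changing it, the standard systemctl queries can be used:

```shell
# Show whether the service is enabled to run at boot:
systemctl is-enabled nvmf-autoconnect
# And whether its last run completed successfully:
systemctl status nvmf-autoconnect --no-pager
```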