VAST supports Block storage via the NVMe over TCP protocol, suitable for virtualized, containerized, and bare-metal environments. Block storage on the VAST platform is configured through subsystems (block-enabled views) that contain one or more volumes (NVMe namespaces). These volumes are mapped to initiator hosts using their NQN and accessed via a VIP pool, supporting multipath I/O for high availability and load balancing, just as VIPs are used for NFS in a multi-tenant environment.
Block protocol configuration on the VAST cluster can be done via the Web GUI, as shown below, or via the CSI driver, as described in the ‘Kubernetes—CSI Driver for VAST’ section. Connecting a host also requires nvme-cli, as described in the host-side configuration section below.
Notes:
Only Cluster Managers can create or manage block-related resources.
Tenant Managers can view block subsystems and volumes assigned to their tenant (e.g., through tags or naming conventions), but cannot create, map, or modify block resources.
Multipath Support
VAST supports native NVMe multipathing on Linux:
Enabled by default on RHEL 9+, optional in RHEL 8 (nvme multipath).
Clients should connect to all available VIPs in the pool.
VAST returns a maximum of 16 VIPs per subsystem on discovery.
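To check whether native NVMe multipathing is active on a Linux host, you can read the nvme_core module parameter; this is a quick sanity check and assumes a kernel built with NVMe multipath support:
cat /sys/module/nvme_core/parameters/multipath
A value of Y means native multipath is on; on RHEL 8 it can typically be enabled by adding nvme_core.multipath=Y to the kernel command line and rebooting.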
Configuring Block Subsystem and Volumes (Web UI)
Step 1: Create Subsystem (Block View)
This view defines the NVMe/TCP subsystem and acts as the access point for the corresponding volumes.
Go to Element Store → Views → Create View
Set:
Path (e.g., /myblock)
Policy
Protocol = Block
In the Block section:
Enter a Subsystem Name (e.g., subsys1)
Leave NQN blank (auto-generated)
Enable ✅ Define as default view
Click Create

Create a Block protocol View
The Element Store menu expands to include Volumes and Hosts options.
Step 2: Create Volumes
Each volume becomes a thin-provisioned NVMe namespace within the subsystem.
Go to Element Store → Block → Volumes, then click Create Volume.
Select the Block View (subsystem) from Step 1.
Enter a Volume Name and Size (min. 1 GB; max. 1 PB).
Optional: Enable encryption.
Click Create.
Note: You can view the Subsystem NQN by reopening the Block View and navigating to the Block tab. Hosts require this for NVMe connections.

Create a Block volume
Step 3: Add Hosts
Initiators must be registered using their NQN (NVMe Qualified Name).
Go to Element Store → Block → Hosts, then click Create Host.
Enter a Host Name and the host's NQN.
(Host-side commands: Linux: cat /etc/nvme/hostnqn; ESXi: esxcli nvme info get)
Optionally assign the host to a tenant or apply tags.
Click Create.

Add the host to the View to access the block volume
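If a Linux host does not already have an NQN file, one way to generate and record one with nvme-cli before registering the host is sketched below (the path and use of sudo are assumptions about a typical setup):
nvme gen-hostnqn | sudo tee /etc/nvme/hostnqn
The value written to /etc/nvme/hostnqn is what you enter as the host's NQN in the Create Host dialog.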
Step 4: Map Volumes to Hosts
Only mapped volumes will be visible to connected initiators.
Go to Element Store → Block → Volumes.
Select the target Volume, then click Manage Mappings.
Select one or more Hosts, and click Map.

Map Volumes to Hosts
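On a host that is already connected to the subsystem, a newly mapped volume normally appears automatically; if it does not, you can trigger a namespace rescan per controller with nvme-cli (the controller device /dev/nvme0 below is an example; use the controller shown by nvme list-subsys on your host):
nvme ns-rescan /dev/nvme0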
Host-Side Configuration (Linux with nvme-cli)
To connect a Linux client to VAST block storage over NVMe/TCP, follow these steps using nvme-cli, which is available in the standard repositories of most distributions.
1. Install Tools and Load Drivers
yum install nvme-cli
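The NVMe/TCP transport module usually loads automatically on first use, but you can load and verify it explicitly; this is a minimal sketch using the standard in-tree module name:
modprobe nvme-tcp
lsmod | grep nvme_tcp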
2. Retrieve the Host NQN
cat /etc/nvme/hostnqn
Use this NQN when registering the host in Element Store → Block → Hosts on the VAST system.
3. Discover the Subsystem
nvme discover --transport=tcp --traddr=<VIP>
<VIP>: Any IP in the configured VAST VIP Pool
4. Connect to All Controllers
nvme connect-all
Connects to all discovered paths for the given subsystem.
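If you prefer not to rely on a discovery configuration, the target can be passed explicitly, and a single subsystem can be connected by its NQN; the placeholders below are examples to substitute with your VIP and the Subsystem NQN from the Block View:
nvme connect-all --transport=tcp --traddr=<VIP>
nvme connect --transport=tcp --traddr=<VIP> --nqn=<subsystem NQN>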
5. Validate Connection
List connected block devices:
nvme list
Check multipath and subsystems:
nvme list-subsys
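Once the namespace appears in nvme list, it behaves like any other block device. A minimal sketch of putting it to use follows; the device name /dev/nvme0n1, the XFS filesystem, and the mount point are assumptions to adapt to your host:
mkfs.xfs /dev/nvme0n1
mkdir -p /mnt/vastblock
mount /dev/nvme0n1 /mnt/vastblock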
For complete details and advanced options, refer to: VAST Cluster 5.3 Administrator’s Guide – Block Storage Protocol