10. Block Protocol Considerations


VAST supports Block storage via the NVMe over TCP protocol, suitable for virtualized, containerized, and bare-metal environments. Block storage on the VAST platform is configured through subsystems (block-enabled views) that contain one or more volumes (NVMe namespaces). These volumes are mapped to initiator hosts using their NQN and accessed via a VIP pool, supporting multipath I/O for high availability and load balancing, just as VIPs are used for NFS in a multi-tenant environment.

Block protocol configuration on the VAST cluster can be performed through the Web GUI, as shown below, or through the CSI driver, as described in the ‘Kubernetes—CSI Driver for VAST’ section. Connecting a host also requires nvme-cli, as described in the Host-Side Configuration section below.
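For reference, when block volumes are provisioned through Kubernetes rather than the Web GUI, the usual entry point is a raw-block PersistentVolumeClaim. The sketch below is generic Kubernetes YAML applied with kubectl, not VAST-specific syntax; the storage class name vast-block is an assumed placeholder for whatever class your VAST CSI driver deployment defines.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc-example
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block            # request a raw block device rather than a filesystem
  storageClassName: vast-block # placeholder; use the class created for the VAST CSI driver
  resources:
    requests:
      storage: 10Gi
EOF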

Notes:

  1. Only Cluster Managers can create or manage block-related resources.

  2. Tenant Managers can view block subsystems and volumes assigned to their tenant (e.g., through tags or naming conventions), but cannot create, map, or modify block resources.

Multipath Support

VAST supports native NVMe multipathing on Linux:

  • Enabled by default on RHEL 9+; optional in RHEL 8 (native nvme multipath; see the check after this list).

  • Clients should connect to all available VIPs in the pool.

  • VAST returns a maximum of 16 VIPs per subsystem on discovery.
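
To confirm whether native NVMe multipath is active on a Linux host, check the nvme_core module parameter; on RHEL 8 it can be turned on with a kernel boot argument. A minimal sketch, assuming a RHEL-family host where grubby manages kernel arguments:

# Prints Y when native NVMe multipath is enabled, N otherwise
cat /sys/module/nvme_core/parameters/multipath

# RHEL 8: enable it persistently via a kernel argument, then reboot
grubby --update-kernel=ALL --args="nvme_core.multipath=Y"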

Configuring Block Subsystem and Volumes (Web UI)

Step 1: Create Subsystem (Block View)

This view defines the NVMe/TCP subsystem and acts as the access point for the corresponding volumes.

  • Go to Element Store → Views → Create View

  • Set:

    • Path (e.g., /myblock)

    • Policy

    • Protocol = Block

  • In the Block section:

    • Enter a Subsystem Name (e.g., subsys1)

    • Leave NQN blank (auto-generated)

    • Enable ✅ Define as default view

  • Click Create

The image shows the "Add View" configuration interface in a storage management tool, where users define settings such as Path and Protocol (Block) when creating block-based storage views.

Create a Block protocol View

The Element Store menu expands to include Volumes and Hosts options.

Step 2: Create Volumes

Each volume becomes a thin-provisioned NVMe namespace within the subsystem.

  • Go to Element Store → Block → Volumes, then click Create Volume.

  • Select the Block View (subsystem) from Step 1.

  • Enter a Volume Name and Size (min. 1 GB; max. 1 PB).

  • Optional: Enable encryption.

  • Click Create.

Note: You can view the Subsystem NQN by reopening the Block View and navigating to the Block tab. Hosts require this for NVMe connections.

The screenshot displays a volume creation interface, where users can configure options such as volume name, capacity in GB, and a suffix for naming multiple volumes at once. The interface also features tags to organize and identify volumes easily.

Create a Block volume

Step 3: Add Hosts

Initiators must be registered using their NQN (NVMe Qualified Name).

  • Go to Element Store → Block → Hosts, then click Create Host.

  • Enter a Host Name and the host's NQN.
    (Host-side commands - Linux: cat /etc/nvme/hostnqn, ESXi: esxcli nvme info get; see the note after these steps if the Linux file does not exist yet.)

  • Optionally assign the host to a tenant or apply tags.

  • Click Create.
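
If a Linux host does not yet have /etc/nvme/hostnqn, nvme-cli can generate one before the host is registered. A minimal sketch (run as root):

# Generate a host NQN and persist it where nvme-cli expects to find it
nvme gen-hostnqn | tee /etc/nvme/hostnqn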

The screenshot shows the "Add Host" interface, where users can configure single or multiple hosts by specifying parameters such as name and NQN under the "Host Creation" section. Below the host configuration fields, there is an option to define tags for better organization and easy identification of the hosts.

Add the host that will access the block volume

Step 4: Map Volumes to Hosts

Only mapped volumes will be visible to connected initiators.

  • Go to Element Store → Block → Volumes.

  • Select the target Volume, then click Manage Mappings.

  • Select one or more Hosts, and click Map.

The screenshot displays a mapping interface where "MyHost1" is being configured, with "MyVol1" and "MyVol2" (100 GB each) selected for mapping. The highlighted options let users choose which volumes will be associated with this host.

Map Volumes to Hosts

Host-Side Configuration (Linux with nvme-cli)

To connect a Linux client to VAST block storage over NVMe/TCP, follow these steps using nvme-cli, which is available in the package repositories of most distributions.

1. Install Tools and Load Drivers

yum install nvme-cli
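
The heading above also mentions loading drivers: the NVMe/TCP initiator lives in the nvme-tcp kernel module, which is not always loaded by default. A minimal sketch for loading it now and at every boot (the modules-load.d drop-in is the standard systemd mechanism):

# Load the NVMe/TCP initiator module for the current session
modprobe nvme-tcp

# Ensure it is loaded automatically after a reboot
echo nvme-tcp > /etc/modules-load.d/nvme-tcp.conf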

2. Retrieve the Host NQN

cat /etc/nvme/hostnqn 

Use this NQN when registering the host in Element Store → Block → Hosts on the VAST system.

3. Discover the Subsystem

nvme discover --transport=tcp --traddr=<VIP>
  • <VIP>: Any IP in the configured VAST VIP Pool

4. Connect to All Controllers

nvme connect-all --transport=tcp --traddr=<VIP>
  • Connects to all discovered paths for the given subsystem.
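
If you prefer to connect to a specific subsystem rather than everything returned by discovery, nvme connect takes the subsystem NQN explicitly; the placeholders below are a VIP from the pool and the Subsystem NQN shown in the Block View (see the note under Step 2):

# Connect one controller path; repeat with other VIPs for additional paths
nvme connect --transport=tcp --traddr=<VIP> --nqn=<Subsystem NQN>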

5. Validate Connection

nvme list
  • Lists the connected NVMe block devices (namespaces).

nvme list-subsys
  • Shows the connected subsystems and their multipath state.
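
Once a namespace shows up in nvme list, it can be used like any other block device. A minimal usage sketch, assuming the volume enumerated as /dev/nvme1n1 and an illustrative mount point (the actual device name will vary per host):

# Create a filesystem on the new namespace and mount it
mkfs.xfs /dev/nvme1n1
mkdir -p /mnt/vastblock
mount /dev/nvme1n1 /mnt/vastblock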


For complete details and advanced options, refer to:  VAST Cluster 5.3 Administrator’s Guide – Block Storage Protocol