Seamless Virtualization: Integrating VAST Block CSI with KubeVirt


Intro

This document focuses on using the VAST CSI driver for block storage in conjunction with KubeVirt and virtual machines (VMs).


It covers the following topics:

  • Deploying the VAST CSI driver for block storage on Kubernetes

  • Creating images

  • Creating PersistentVolumeClaims (PVCs)

  • Uploading images to PVCs

  • Attaching PVCs to VMs

  • Managing snapshots and clones

  • Resizing volumes

We’ll begin by installing the CSI driver to enable block-based storage classes, following the official VAST CSI Driver Administrator's Guide.
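
Once the driver is installed per the guide, a quick sanity check is to confirm that the block CSI driver and its storage class are registered (the class name vastdata-block matches the examples used later in this document):

# List the CSI drivers registered in the cluster
kubectl get csidrivers

# Confirm the block storage class used throughout this guide exists
kubectl get storageclass vastdata-block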

Prerequisites

  • VAST Cluster: Version 5.3 or higher must be deployed.

  • KubeVirt: The virtualization platform must be installed.

  • Containerized Data Importer (CDI): Installation is optional (we will not utilize it in this guide).

  • VAST CSI Driver: Block provisioner version 2.6.3 must be installed.

For full functionality, including volume snapshots and VM live migration, ensure the following feature gates are enabled in the KubeVirt configuration (kubectl edit kubevirt -n kubevirt kubevirt):

spec:
  certificateRotateStrategy: {}
  configuration:
    developerConfiguration:
      featureGates:
      - HotplugVolumes
      - DeclarativeHotplugVolumes
      - DataVolumes
      - LiveMigration
      - Snapshot

Preparing multiple images

In this scenario, we’ll provision several golden images by converting cloud images to raw format and uploading them to block PVCs.

To achieve this, the .img and .qcow2 files are first converted to .raw format as follows:

# Converting to raw
qemu-img convert -f qcow2 -O raw jammy-server-cloudimg-amd64.img ./raw/ubuntu22.raw
qemu-img convert -f qcow2 -O raw noble-server-cloudimg-amd64.img ./raw/ubuntu24.raw
qemu-img convert -f qcow2 -O raw Rocky-8-GenericCloud-Base.latest.x86_64.qcow2 ./raw/rocky8.raw
qemu-img convert -f qcow2 -O raw Rocky-9-GenericCloud-Base.latest.x86_64.qcow2 ./raw/rocky9.raw
qemu-img convert -f qcow2 -O raw Rocky-10-GenericCloud-Base.latest.x86_64.qcow2 ./raw/rocky10.raw
qemu-img convert -f qcow2 -O raw Fedora-Cloud-Base-Generic-42-1.1.x86_64.qcow2 ./raw/Fedora42.raw

(base) [root@slurm-kfs]# ls -l /mnt/images/iso/raw/
total 9607100
-rw-r--r--. 1 root root  5368709120 Oct 27 16:58 Fedora42.raw
-rw-r--r--. 1 root root 10737418240 Oct 27 16:58 rocky10.raw
-rw-r--r--. 1 root root 10737418240 Oct 27 16:58 rocky8.raw
-rw-r--r--. 1 root root 10737418240 Oct 27 16:58 rocky9.raw
-rw-r--r--. 1 root root  2361393152 Oct 26 13:24 ubuntu22.raw
-rw-r--r--. 1 root root  3758096384 Oct 27 16:57 ubuntu24.raw
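
Optionally, verify each converted image with qemu-img info, which should report the raw format and the virtual size (shown here for one image):

# Confirm the image format and virtual size after conversion
qemu-img info ./raw/rocky10.raw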

For KubeVirt deployments using a block PVC, the raw disk image format is generally preferred for the base image because it avoids the metadata overhead and indirection layers of QCOW2, giving better performance. Raw images don't support snapshots on their own, but that isn't a limitation in this setup: VAST handles snapshots, data reduction, and other advanced data services directly on the raw volume, providing both high performance and the flexibility your virtual machines need.

Creating Block PVC

cat block_pvc_golden.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rocky10-golden
spec:
  volumeMode: Block
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 42Gi
  storageClassName: vastdata-block
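
Apply the manifest to create the PVC:

kubectl apply -f block_pvc_golden.yaml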

The PVC rocky10-golden will appear under the correct block storage class:

(base) [root@slurm-kfs]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     VOLUMEATTRIBUTESCLASS   AGE
rocky10-golden   Bound    pvc-081a269a-a7db-477f-8c19-300623ef9b83   42Gi       RWX            vastdata-block   <unset>                 3s

In the VAST UI, the block PVC is available under Element Store → Volumes:

The screenshot displays the Element Store interface, specifically within the "Volumes" section showing volume details including Name, UUID, Subsystem, and Size with one entry recorded as pvc-081a269a-a7db-477f-8c19-300623ef9b83 sized at 45.097 GB.

Next, upload your .raw image to the destination PVC using the virtctl image-upload command. Since the PVC was pre-created, use the --no-create flag to ensure the command only handles the upload. If you were creating the PVC and uploading simultaneously, you'd also need to specify the --size and the appropriate --storage-class.

(base) [root@slurm-kfs]# virtctl image-upload pvc rocky10-golden --no-create --storage-class=vastdata-block --image-path=/mnt/images/iso/raw/rocky10.raw --insecure --uploadproxy-url https://10.98.203.183:443
Using existing PVC default/rocky10-golden
Waiting for PVC rocky10-golden upload pod to be ready...
Pod now ready
Uploading data to https://10.98.203.183:443

10.00 GiB / 10.00 GiB [------------------------------------------------------------------------------------------] 100.00% 44.92 MiB p/s 3m48s

Uploading data completed successfully, waiting for processing to complete, you can hit ctrl-c without interrupting the progress
Processing completed successfully
Uploading /mnt/images/iso/raw/rocky10.raw completed successfully

The provisioned Block PVC is now ready for its primary uses: cloning, snapshots, and attachment to a Virtual Machine (VM). Attaching the PVC to a VM allows you to perform modifications, such as installing packages and system configurations, enabling you to create a perfect "golden image" state for subsequent cloning.

For example, a VM configuration (vm.yaml) uses this PVC as its root disk:

volumes:
- name: disk0
  persistentVolumeClaim:
    claimName: rocky10-golden


Additional disks, each backed by its own dedicated PVC, can be added using the virtio bus. Important Note: To ensure the VM can be successfully migrated between worker nodes, the underlying Block PVC must be configured with ReadWriteMany (RWX) access mode.
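For example, with the HotplugVolumes feature gate enabled, an additional block PVC can be attached to a running VM with virtctl addvolume. This is a sketch; extra-disk is a hypothetical PVC created the same way as the examples above:

# Hot-plug an additional block PVC into the running VM; --persist keeps it in the VM spec
virtctl addvolume rocky10-golden --volume-name=extra-disk --persist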

Attach PVC to VM

An example VM template:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  creationTimestamp: 2018-07-04T15:03:08Z
  generation: 1
  labels:
    kubevirt.io/os: linux
  name: rocky10-golden
spec:
  running: true
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/domain: vm1
    spec:
      domain:
        cpu:
          cores: 8
        ioThreadsPolicy: auto
        devices:
          networkInterfaceMultiqueue: true
          interfaces:
          - name: default
            model: virtio
            masquerade: {}
          disks:
          - disk:
              bus: virtio
            name: disk0
          - cdrom:
              bus: sata
              readonly: true
            name: cloudinitdisk
        machine:
          type: q35
        resources:
          requests:
            memory: 8G
      networks:
      - name: default
        pod: {}
      volumes:
      - name: disk0
        persistentVolumeClaim:
          claimName: rocky10-golden
      - cloudInitNoCloud:
          userData: |
            #cloud-config
            hostname: rocky10-golden
            ssh_pwauth: True
            disable_root: false
            chpasswd:
              list: |
                root:cluster
              expire: False
            ssh_authorized_keys:
              - ssh-rsa AAAAB3NzaC1yc2EAAAAD....
        name: cloudinitdisk

Start the VM by applying the YAML and wait for it to reach the Running state.
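
Assuming the template above was saved as vm.yaml (the file name referenced earlier):

kubectl apply -f vm.yaml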

(base) [root@slurm-kfs]# kubectl get vmis
NAME             AGE   PHASE     IP             NODENAME   READY
rocky10-golden   75s   Running   10.244.3.183   v6lg3      True

The VM can be accessed with virtctl console <vm-name> or over SSH.
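
For example (the exact virtctl ssh target syntax varies between virtctl versions, so treat the second command as an assumption to adapt):

# Serial console access
virtctl console rocky10-golden

# SSH as the root user configured via cloud-init above
virtctl ssh root@rocky10-golden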

Snapshots

When deploying the VAST CSI driver, you must install the VolumeSnapshot CRDs (Custom Resource Definitions) to enable the use of CSI features like snapshots and volume clones. Additionally, verify that a corresponding VolumeSnapshotClass is defined and available for each of your CSI-backed StorageClasses.

(base) [root@slurm-kfs]# kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
NAME                      DRIVER                   DELETIONPOLICY   AGE
vastdata-snapshot         csi.vastdata.com         Delete           39d
vastdata-snapshot-block   block.csi.vastdata.com   Delete           46d
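
For reference, a minimal block VolumeSnapshotClass matching the listing above might look like this (a sketch based on the driver name and deletion policy shown):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: vastdata-snapshot-block
driver: block.csi.vastdata.com
deletionPolicy: Delete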

Take a snapshot of the existing block PVC; in this case, the source is rocky10-golden and the resulting snapshot is named rocky10-daily171125.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: rocky10-daily171125
spec:
  volumeSnapshotClassName: vastdata-snapshot-block
  source:
    persistentVolumeClaimName: rocky10-golden

Apply the manifest and list the volume snapshots:

(base) [root@slurm-kfs]# kubectl apply -f block_snapshot.yaml
volumesnapshot.snapshot.storage.k8s.io/rocky10-daily171125 created
(base) [root@slurm-kfs]# kubectl get volumesnapshot
NAME                  READYTOUSE   SOURCEPVC        SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS             SNAPSHOTCONTENT                                    CREATIONTIME   AGE
rocky10-daily171125   false        rocky10-golden                                         vastdata-snapshot-block   snapcontent-04b63e8a-dd97-4ef1-86c4-65f07748f515                  5s

In the VAST UI, snapshots appear under Data Protection → Snapshots:

The screenshot displays the "Snapshots" tab under the Data Protection section, showing details such as snapshot names and paths, with options to filter results further based on name, path, or policy.

Restore from snapshot

Restoring a PVC from a snapshot follows the same logic as cloning a PVC from a source; the difference is that the data source is a VolumeSnapshot referenced by name. The following example demonstrates how to restore the snapshot to a new PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rocky10-restored-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  storageClassName: vastdata-block
  dataSource:
    name: rocky10-daily171125
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io

Apply the manifest and list the PVC newly created from the snapshot:

(base) [root@slurm-kfs]# kubectl apply -f block_restore_from_snapshot.yaml
persistentvolumeclaim/rocky10-restored-pvc created
(base) [root@slurm-kfs]# kubectl get pvc
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     VOLUMEATTRIBUTESCLASS   AGE
rocky10-golden         Bound    pvc-081a269a-a7db-477f-8c19-300623ef9b83   42Gi       RWX            vastdata-block   <unset>                 83m
rocky10-restored-pvc   Bound    pvc-c8a8a2f7-5276-4569-9e50-0c480445e647   30Gi       RWO            vastdata-block   <unset>                 10s

Clone PVC

Cloning a Block PVC is an efficient method to instantly duplicate an existing PVC to one or more new PVCs. This is valuable for two main reasons: it saves time by bypassing the need to create new PVCs from the original image file, and it allows you to quickly create multiple identical versions from a specific, modified VM state (a "golden image").

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rocky10-clone
spec:
  volumeMode: Block
  accessModes:
  - ReadWriteMany
  storageClassName: vastdata-block
  resources:
    requests:
      storage: 42Gi
  dataSource:
    kind: PersistentVolumeClaim
    name: rocky10-golden
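
To boot a new VM from the clone, reuse the VM template above with the volumes section pointing at the cloned PVC:

volumes:
- name: disk0
  persistentVolumeClaim:
    claimName: rocky10-clone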

Cloned PVCs are listed in the VAST UI under Data Protection → Global Snapshot Clones.

The image displays a screenshot from a data protection dashboard, specifically focusing on "Global Snapshot Clones." It shows two completed snapshot clones with details such as the name, direction, target cluster (vast6-kfs), and tenant (default).

Resize Block PVC

To resize a VM disk, first power off the VM. Then, using the VAST UI, navigate to Element Store → Volumes, select the appropriate PVC, and edit its size.

The screenshot displays an interface for updating a volume configuration, where users can modify settings such as capacity and view within specified fields. The current selection shows a tenant set to "default," with specific details like a view path "/block" and a name identifier "pvc-081a269a-a7db-477f-8c19-300623e", indicating it's a persistent volume claim (PVC) with a capacity close to 42 GB.
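
After powering the VM back on, the guest should see the new capacity. As a quick check (assuming the root disk is exposed as /dev/vda on the virtio bus):

# Inside the VM: confirm the enlarged block device
lsblk /dev/vda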

Monitor

Block performance monitoring is available under Analytics and includes block IOPS, latency, bandwidth, metadata operations, and more.

The graph displays real-time read and write bandwidth performance over a 10-minute interval, with significant peaks observed during specific time periods within that timeframe. The visualization provides insights into cluster block bandwidth usage on the 'vast6-kfs' cluster as part of predefined analytics.

The VM and PVC can also be monitored via Analytics → Data Flow.

The image depicts a data flow visualization in an advanced storage management interface, highlighting connections between users, hosts, VIPs (Virtual IPs), CNodes (Cluster Nodes), views, and volumes within a block subsystem. The interface provides detailed performance metrics such as read/write bandwidth, IOPS, and latency, along with connections to a specific volume named 'pvc-081a269a-a7db-477f-8c'.

Appendix A

NFS

When using VAST CSI with file system storage class over NFS, the general handling of creating, cloning, and snapshotting Persistent Volume Claims (PVCs) remains the same, with two critical differences:

  1. The PVC must reference the appropriate file system storage class.

  2. The volumeMode: Block parameter is not applicable and must be omitted (see the PVC example at the end of this appendix).

The vastcsi Helm values allow NFS mount options to be specified on a per-storage-class basis. These can include standard NFS options such as proto=rdma, port, and nconnect=N, as well as VAST NFS driver-specific options such as remoteports, spread_reads, and spread_writes, as detailed in the VAST NFS Documentation and in VAST Client NFS Multipathing.

secretName: "vast-mgm"
endpoint: "10.27.200.6"
deletionVipPool: "vippool-1"
deletionViewPolicy: "default"
verifySsl: false

StorageClassDefaults:
  volumeNameFormat: "csi:{namespace}:{name}:{id}"
  ephemeralVolumeNameFormat: "eph:{namespace}:{name}:{id}"
  vipPool: "vippool-1"

storageClasses:
  vastdata-filesystem:
    vipPool: "vippool-1"
    storagePath: "/k8s"
    viewPolicy: "default"
  vastdata-images:
    vipPool: "vippool-1"
    storagePath: "/kubevirt-images"
    viewPolicy: "root_squash"
    mountOptions:
    - proto=rdma
    - port=20049
    - vers=3
    - remoteports=172.21.6.1-172.21.6.4
    - nconnect=8
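
For example, a filesystem-mode PVC against the vastdata-filesystem storage class defined above differs from the block examples only in the storage class and the omitted volumeMode: Block (data-nfs is a hypothetical PVC name):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-nfs
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 42Gi
  storageClassName: vastdata-filesystem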