Creating Block Storage Classes

VAST Block CSI Driver supports multiple Kubernetes storage classes. During the initial deployment of VAST Block CSI Driver, you define one or more storage classes in the driver's Helm chart configuration file. Later, you can add more storage classes by creating and applying a Kubernetes YAML configuration file.

Adding a Storage Class

To add a storage class using a Kubernetes YAML configuration file:

  1. Create a YAML configuration file that defines a new storage class with the following required parameters:

    Tip

    For a complete list of storage class options, including detailed information about required and optional parameters, see Storage Class Option Reference.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <storage class name>
    provisioner: block.csi.vastdata.com
    parameters:
      subsystem: <NVMe subsystem name>
      vip_pool_fqdn: <virtual IP pool FQDN> | vip_pool_name: <virtual IP pool name>
      <optional: pairs of secrets and secret namespaces for each volume processing stage>
    

    For example:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: block-on-vast
    provisioner: block.csi.vastdata.com
    parameters:
      subsystem: BlockView
      vip_pool_fqdn: 'MyDomain'
      csi.storage.k8s.io/controller-expand-secret-name: vast-mgmt
      csi.storage.k8s.io/controller-expand-secret-namespace: default
      csi.storage.k8s.io/controller-publish-secret-name: vast-mgmt
      csi.storage.k8s.io/controller-publish-secret-namespace: default
      csi.storage.k8s.io/node-publish-secret-name: vast-mgmt
      csi.storage.k8s.io/node-publish-secret-namespace: default
      csi.storage.k8s.io/node-stage-secret-name: vast-mgmt
      csi.storage.k8s.io/node-stage-secret-namespace: default
      csi.storage.k8s.io/provisioner-secret-name: vast-mgmt
      csi.storage.k8s.io/provisioner-secret-namespace: default
    
  2. Deploy the YAML configuration file:

    kubectl apply -f <filename>.yaml
  3. Verify that the storage class has been added:

    kubectl get storageclasses

    The output is similar to the following:

    NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  4d1h
    block-on-vast        block.csi.vastdata.com     Delete          Immediate           true                   58m
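
    To inspect an individual storage class in more detail, including its parameters, you can also run:

    kubectl describe storageclass block-on-vast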
    

Storage Class Option Reference

You can specify storage class options as follows:

  • In the Helm chart configuration file created for VAST Block CSI Driver during initial deployment.

  • In a Kubernetes YAML configuration file deployed at a later stage.

  • In VAST CSI Operator's VastStorage custom resource definition, when deploying in an OpenShift environment using VAST CSI Operator.
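
As a hedged illustration of the first of these: a storage class defined in the Helm chart configuration file might look similar to the following sketch. The storageClasses map and its exact key layout are assumptions here, so consult the chart shipped with your driver version for the authoritative schema.

  # Hypothetical values.yaml fragment; the key layout may differ between chart versions.
  storageClasses:
    block-on-vast:
      subsystem: BlockView          # required: NVMe subsystem name
      vipPoolFQDN: MyDomain         # or vipPool: <virtual IP pool name>
      allowVolumeExpansion: true
      reclaimPolicy: "Delete"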

Each option below is listed in two forms: first as it appears in the Helm chart configuration file and in VAST CSI Operator's VastStorage custom resource, then as it appears in a Kubernetes YAML configuration file, followed by a description.

allowVolumeExpansion: true|false

allow_volume_expansion: true|false

(Optional) Determines whether volume expansion is allowed (true, the default) or not (false).

blockingClones: true|false

blocking_clones: true|false

(Optional) Determines whether VAST Block CSI Driver waits for VAST snapshot or clone operations to complete before allowing Kubernetes to proceed with volume provisioning:

  • If true, the driver waits for all snapshot and clone operations to complete before the PVC can be attached to a pod. This ensures that the volume is fully usable and consistent at the time of provisioning, but may add significant latency.

  • If false (default), the driver proceeds with provisioning right after the snapshot or clone operation is initiated.
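
For example, a storage class that waits for clone operations to complete would set the parameter as follows (StorageClass parameter values are strings, hence the quotes):

  parameters:
    subsystem: BlockView
    vip_pool_fqdn: 'MyDomain'
    blocking_clones: "true"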

ephemeralVolumeNameFormat: "<format>"

eph_volume_name_fmt: "<format>"

(Optional) A format string that controls naming of Kubernetes ephemeral volumes created through VAST Block CSI Driver.

If not specified, the default format csi:{namespace}:{name}:{id} is used, where:

  • namespace is the Kubernetes namespace where the volume's parent workload is located.

  • name is the volume name.

  • id is the volume ID set by Kubernetes.
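
For example, a custom format could add a fixed prefix (the team-a prefix is illustrative):

  eph_volume_name_fmt: "team-a:{namespace}:{name}:{id}"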

fsType: "ext4" | "ext3" | "xfs"

fs_type: "ext4" | "ext3" | "xfs"

(Optional) The filesystem type used to format the volume. The default is ext4.

This option is applicable only when the PVC has volumeMode set to Filesystem. For more information about volume modes, see Volume Modes.
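
For example, fs_type takes effect for a PVC like the following, which requests a filesystem-mode volume from the block-on-vast storage class defined earlier:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: block-pvc
  spec:
    accessModes:
      - ReadWriteOnce
    volumeMode: Filesystem      # fs_type applies only in this mode
    storageClassName: block-on-vast
    resources:
      requests:
        storage: 10Gi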

hostEncryption:
  luksType: "<type>"
  cipher: "<cipher>"
  keySize: "<bits>"
  hash: "<algorithm>"
  pbkdfMemory: "<kilobytes>"
  perf-same_cpu_crypt: {true|false}
  perf-submit_from_crypt_cpus: {true|false}
  perf-no_read_workqueue: {true|false}
  perf-no_write_workqueue: {true|false}

host_encryption: '{"luks_type":"<type>","cipher":"<cipher>","key_size":"<bits>","hash":"<algorithm>","pbkdf_memory":"<kilobytes>","perf_same_cpu_crypt":true|false,"perf_submit_from_crypt_cpus":true|false,"perf_no_read_workqueue":true|false,"perf_no_write_workqueue":true|false}'

(Optional) Sets parameters for LUKS-based host encryption:

  • luksType: "<type>" sets the LUKS version. Valid values: luks2 (default) or luks1.

  • cipher: "<cipher>" specifies the cipher to be used for host encryption. Valid values:

    • aes-xts-plain64 (default)

    • aes-cbc-essiv:sha256

    • serpent-xts-plain64

    • twofish-cbc-essiv:sha256

  • keySize: "<bits>" sets the length of the key for the selected cipher, in bits. Default is 512.

  • hash: "<algorithm>" specifies the hashing algorithm. Valid values:

    • sha1

    • sha256 (default)

    • sha512

    • ripemd160

    • whirlpool

  • pbkdfMemory: "<kilobytes>" sets the memory cost for PBKDF, in kilobytes. Default is 65536.

  • Performance parameters, all true by default:

    • perf-same_cpu_crypt: {true|false} - Perform encryption on the same CPU that submitted the I/O.

    • perf-submit_from_crypt_cpus: {true|false} - Submit write I/O directly from the encryption CPUs.

    • perf-no_read_workqueue: {true|false} - Bypass the dm-crypt read workqueue.

    • perf-no_write_workqueue: {true|false} - Bypass the dm-crypt write workqueue.

    Notice

    Performance parameters are available starting with VAST Block CSI Driver 2.6.4.
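
As an illustrative sketch based on the option format shown above, a Kubernetes YAML storage class could enable host encryption with the defaults spelled out explicitly:

  parameters:
    subsystem: BlockView
    vip_pool_fqdn: 'MyDomain'
    host_encryption: '{"luks_type":"luks2","cipher":"aes-xts-plain64","key_size":"512","hash":"sha256","pbkdf_memory":"65536"}'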

qosPolicy: "<QoS policy name>"

qos_policy: "<QoS policy name>"

The name of a Quality of Service (QoS) policy to be associated with automatically created views. The QoS policy can be passed either by its name or by its ID (in qos_policy_id).

A QoS policy sets performance limits per view. For more information, see Configure a QoS Policy.

N/A

qos_policy_id: "<QoS policy ID>"

The ID of a Quality of Service (QoS) policy to be associated with automatically created views. The QoS policy can be passed either by its ID or by its name (in qos_policy).

A QoS policy sets performance limits per view. For more information, see Configure a QoS Policy.
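
For example, a Kubernetes YAML storage class might reference a QoS policy by name as follows (the policy name gold is illustrative):

  parameters:
    subsystem: BlockView
    vip_pool_fqdn: 'MyDomain'
    qos_policy: "gold"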

reclaimPolicy: "Delete"|"Retain"

reclaim_policy: "Delete"|"Retain"

(Optional) Determines whether dynamically provisioned volumes are deleted (Delete, the default) or retained (Retain) when their PVCs are deleted.

secretName: "<secret name>"

secretNamespace: "<secret's namespace>"

<pairs of secrets and secret namespaces for each provisioning stage>

These options let you supply information for communicating with the VAST cluster:

  • <secret name> is the name of the Kubernetes secret that contains information about the VAST cluster on which to provision volumes for this particular storage class: the corresponding VMS user credentials or authentication token and, optionally, the SSL certificate. For more information, see Provisioning Block Volumes on Multiple VAST Clusters.

  • <secret namespace>: if the storage class's Kubernetes secret was created in a namespace different from the one used to install the driver's Helm chart, add this parameter to specify the namespace of the Kubernetes secret.

If the secret and its namespace are defined as global options in the Helm chart configuration file, they are automatically propagated to each storage class and each provisioning stage therein. In this case, you do not need to explicitly include the per-stage secrets with their corresponding namespaces in the storage class definition.

If no global settings exist for the secret and its namespace, you need to specify them directly in the storage class definition in the following format. Note that you can specify a different value for each stage:

  csi.storage.k8s.io/controller-expand-secret-name: <secret name>
  csi.storage.k8s.io/controller-expand-secret-namespace: <secret namespace>
  csi.storage.k8s.io/controller-publish-secret-name: <secret name>
  csi.storage.k8s.io/controller-publish-secret-namespace: <secret namespace>
  csi.storage.k8s.io/node-publish-secret-name: <secret name>
  csi.storage.k8s.io/node-publish-secret-namespace: <secret namespace>
  csi.storage.k8s.io/node-stage-secret-name: <secret name>
  csi.storage.k8s.io/node-stage-secret-namespace: <secret namespace>
  csi.storage.k8s.io/provisioner-secret-name: <secret name>
  csi.storage.k8s.io/provisioner-secret-namespace: <secret namespace>
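
The secret itself can be created with kubectl. As a sketch only: the key names below (username, password) are placeholders, not the driver's confirmed schema; see Provisioning Block Volumes on Multiple VAST Clusters for the exact keys expected:

  # Placeholder key names; consult the linked topic for the exact secret schema.
  kubectl create secret generic vast-mgmt \
    --namespace default \
    --from-literal=username=<VMS user> \
    --from-literal=password=<VMS password>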

setDefaultStorageClass: true|false

set_default_storage_class: true|false

(Optional) If set to true, this storage class becomes the default storage class for the Kubernetes cluster, so PVCs that do not name a storage class use it. The default value is false.
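
For reference, Kubernetes itself marks a default storage class with the standard storageclass.kubernetes.io/is-default-class annotation, so an equivalent plain Kubernetes YAML definition might look like this:

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: block-on-vast
    annotations:
      storageclass.kubernetes.io/is-default-class: "true"
  provisioner: block.csi.vastdata.com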

subsystem: "<subsystem name>"

subsystem: "<subsystem name>"

(Required) The name of the NVMe subsystem where block volumes will be created. This is the subsystem exposed through the VAST cluster view preconfigured for block storage.

tenantName: "<tenant name>"

tenant_name: "<tenant name>"

(Optional) The name of the VAST tenant associated with the subsystem. This option is used to determine the correct tenant when multiple subsystems on the VAST cluster share the same name across tenants.

vipPool: "<virtual IP pool name>"

vip_pool_name: "<virtual IP pool name>"

The name of the virtual IP pool to be used by VAST Block CSI Driver.

The virtual IP pool that you specify for a storage class must belong to the same VAST Cluster tenant as the VAST Cluster view specified in the subsystem parameter of the storage class.

Either the vipPool or the vipPoolFQDN option is required; the two are mutually exclusive. When vipPool is used, VAST Block CSI Driver makes an additional call to the VMS to obtain the IP, which may impact performance when mounting volumes. For more information, see Configure VAST Cluster for VAST Block CSI Driver and Setting Up DNS-Based Virtual IP Resolution for Block Volumes.

vipPoolFQDN: "<virtual IP pool's domain name>"

vip_pool_fqdn: "<virtual IP pool's domain name>"

The domain name of the virtual IP pool to be used by VAST Block CSI Driver.

The virtual IP pool that you specify for a storage class must belong to the same VAST Cluster tenant as the VAST Cluster view specified in the subsystem parameter of the storage class.

Either the vipPoolFQDN or the vipPool option is required; the two are mutually exclusive. With vipPoolFQDN, the IP for volume mounting is obtained through DNS, which improves mounting times. For more information, see Configure VAST Cluster for VAST Block CSI Driver and Setting Up DNS-Based Virtual IP Resolution for Block Volumes.
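
In a Kubernetes YAML definition, the two alternatives look as follows; specify exactly one of them (the pool name vippool-1 is illustrative):

  # DNS-based resolution (faster volume mounting):
  vip_pool_fqdn: 'MyDomain'

  # Or, mutually exclusive with the above, name-based resolution via an extra VMS call:
  vip_pool_name: 'vippool-1'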

volumeGroup: "<format>"

volume_group: "<format>"

(Optional) A format string that controls naming of volumes created through VAST Block CSI Driver. If not specified, the default format csi:{namespace}:{name}:{id} is used, where:

  • namespace is the Kubernetes namespace where the volume's parent workload is located.

  • name is the volume name.

  • id is the volume ID set by Kubernetes.

This parameter can be used to provide a nested directory structure for the volumes, for example: /folder1/folder2/block-{namespace}-{id}
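
For example, the nested layout mentioned above could be configured as follows (the folder names are illustrative):

  volume_group: "/folder1/folder2/block-{namespace}-{id}"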