Using Storage Classes


VAST CSI Driver supports multiple Kubernetes storage classes, enabling you to provision multiple storage paths within VAST Cluster, each configured (via VMS) with its own access, path protection, and replication policies. Each storage class can have its own path, virtual IP pool, set of mount options, and other parameters.

A storage path is a path within VAST Cluster where VAST CSI Driver will create volume directories. The storage path is specified in the root_export parameter in the YAML configuration file; for example: /a/b/c. (Note that '/' cannot be used as a storage path.) The storage path must be mountable by VAST CSI Driver.

During initial deployment of VAST CSI Driver, you define one or more storage classes in the VAST CSI Driver Helm chart configuration file. Later you can add more storage classes by creating and applying a Kubernetes YAML configuration file.
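A storage class defined at deployment time in the Helm chart configuration file might look like the following sketch. The top-level key and nesting shown here are illustrative assumptions; the option names (storagePath, viewPolicy, vipPool, allowVolumeExpansion) follow the Storage Class Option Reference below, and the exact layout is defined by the chart's values file.

    # Illustrative Helm chart configuration fragment. The storageClasses key
    # and nesting are assumptions; option names follow the reference table.
    storageClasses:
      vastdata-filesystem:
        storagePath: "/k8s"          # required; '/' is not allowed
        viewPolicy: "default"        # required
        vipPool: "vippool-1"         # or vipPoolFQDN (mutually exclusive)
        allowVolumeExpansion: true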

Adding a Storage Class

To add a storage class using a Kubernetes YAML configuration file:

  1. Create a YAML configuration file that defines a new storage class with the following required parameters:

    Note

    For a complete list of options that can be specified for a storage class, see Storage Class Option Reference.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <storage class name>
    provisioner: csi.vastdata.com
    parameters:
      vip_pool_fqdn: <virtual IP pool FQDN>  # or: vip_pool_name: <virtual IP pool name>
      root_export: '/k8s-2'
      view_policy: 'default'
      <optional: pairs of secrets and secret namespaces for each volume processing stage>
    

    The following example creates a storage class named vastdata-filesystem-2 that uses the path /k8s-2, the view policy default, and the virtual IP pool domain name MyDomain. In the example, no custom mount options are set (mountOptions contains a single empty string).

    Tip

    For more examples, see https://github.com/vast-data/vast-csi/tree/v2.6/examples/csi.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: vastdata-filesystem-2
    provisioner: csi.vastdata.com
    parameters:
      csi.storage.k8s.io/controller-expand-secret-name: vast-mgmt
      csi.storage.k8s.io/controller-expand-secret-namespace: default
      csi.storage.k8s.io/controller-publish-secret-name: vast-mgmt
      csi.storage.k8s.io/controller-publish-secret-namespace: default
      csi.storage.k8s.io/node-publish-secret-name: vast-mgmt
      csi.storage.k8s.io/node-publish-secret-namespace: default
      csi.storage.k8s.io/node-stage-secret-name: vast-mgmt
      csi.storage.k8s.io/node-stage-secret-namespace: default
      csi.storage.k8s.io/provisioner-secret-name: vast-mgmt
      csi.storage.k8s.io/provisioner-secret-namespace: default
      vip_pool_fqdn: 'MyDomain'
      root_export: '/k8s-2'
      view_policy: 'default'
      volume_name_fmt: csi:{namespace}:{name}:{id}
    mountOptions:
      - ''
    allowVolumeExpansion: true
    
  2. Deploy the YAML configuration file:

    kubectl apply -f <filename>.yaml
  3. Verify that the storage class has been added:

    kubectl get storageclasses

    The output is similar to the following:

    NAME                    PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    standard (default)      k8s.io/minikube-hostpath   Delete          Immediate           false                  4d1h
    vastdata-filesystem     csi.vastdata.com           Delete          Immediate           true                   58m
    vastdata-filesystem-2   csi.vastdata.com           Delete          Immediate           true                   56m
    vastdata-filesystem-3   csi.vastdata.com           Delete          Immediate           true                   54m
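After the storage class is registered, a PersistentVolumeClaim can reference it by name. For example (the claim name, access mode, and size below are illustrative):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-claim
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: vastdata-filesystem-2
      resources:
        requests:
          storage: 10Gi

VAST CSI Driver then provisions a volume directory under the class's root_export path and binds the claim to it.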

Storage Class Option Reference

You can specify storage class options as follows:

  • In the Helm chart configuration file created for VAST CSI Driver during initial deployment.

  • In a Kubernetes YAML configuration file deployed at a later stage.

  • In VAST CSI Operator's VastStorage custom resource definition, when deploying in an OpenShift environment using VAST CSI Operator.

Each entry below lists, in order: the option name as used in the Helm chart configuration file and in VAST CSI Operator's VastStorage resource, the equivalent option name in a Kubernetes YAML configuration file, and a description.

allowVolumeExpansion: true|false

allow_volume_expansion: true|false

(Optional) Determines whether volume expansion is allowed (true, the default) or not (false).

blockingClones: true|false

blocking_clones: true|false

(Optional) Determines whether VAST Block CSI Driver waits for VAST snapshot or clone operations to complete before allowing Kubernetes to proceed with volume provisioning:

  • If true, the driver waits for all snapshot and clone operations to complete before the PVC can be attached to a pod. This ensures that the volume is fully usable and consistent at provisioning time, but may add significant latency.

  • If false (default), the driver proceeds with provisioning right after the snapshot or clone operation is initiated.
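In a Kubernetes YAML storage class definition, this option would go under parameters alongside the other driver-specific options; since StorageClass parameter values are strings, the boolean is quoted. A minimal fragment:

    parameters:
      blocking_clones: "true"   # wait for snapshot/clone completion before binding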

ephemeralVolumeNameFormat: "<format>"

eph_volume_name_fmt: "<format>"

A format string that controls naming of Kubernetes ephemeral volumes created through VAST CSI Driver.

If not specified, the default format csi:{namespace}:{name}:{id} is used.
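Ephemeral volumes can be declared inline in a pod spec. Assuming the driver is used with Kubernetes generic ephemeral volumes, a pod requesting one might look like the following sketch (pod, image, and class names are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: scratch-pod
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: scratch
              mountPath: /scratch
      volumes:
        - name: scratch
          ephemeral:
            volumeClaimTemplate:
              spec:
                accessModes: ["ReadWriteMany"]
                storageClassName: vastdata-filesystem-2
                resources:
                  requests:
                    storage: 1Gi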

mountOptions: "<options>"

mount_options: "<options>"

Specifies NFS mount options for VAST CSI Driver to use when mounting a volume for a PVC with this storage class.

Examples:

mountOptions:
  - "proto=rdma"
  - "port=20049"
mountOptions:
  - debug
  - nosuid
  - soft
mountOptions:
  - nfsvers=4

Note

These mount options override host-specific mount options defined through /etc/nfsmount.conf.d.

qosPolicy: "<QoS policy name>"

qos_policy: "<QoS policy name>"

The name of a Quality of Service (QoS) policy to be associated with automatically created views. The QoS policy can be passed either by its name or by its ID (in qos_policy_id).

A QoS policy sets performance limits per view. For more information, see Configure a QoS Policy.

N/A

qos_policy_id: "<QoS policy ID>"

The ID of a Quality of Service (QoS) policy to be associated with automatically created views. The QoS policy can be passed either by its ID or by its name (in qos_policy).

A QoS policy sets various performance limits per view. For more information, see Configure a QoS Policy.

secretName: "<secret name>"

secretNamespace: "<secret's namespace>"

<pairs of secrets and secret namespaces for each provisioning stage>

These options let you supply information for communicating with the VAST cluster:

  • <secret name> is the name of the Kubernetes secret that contains information about the VAST cluster on which to provision volumes for this particular storage class, the corresponding VMS user credentials or authentication token and, optionally, the SSL certificate. For more information, see Provisioning Volumes on Multiple VAST Clusters.

  • <secret namespace> is the namespace of that Kubernetes secret. Add this parameter if the secret was created in a namespace different from the one used to install the driver's Helm chart.

If the secret and its namespace are defined as global options in the Helm chart configuration file, they are automatically propagated to each storage class and each provisioning stage therein. In this case, you do not need to explicitly include the per-stage secrets with their corresponding namespaces in the storage class definition.

If no global settings exist for the secret and its namespace, you need to specify them directly in the storage class definition in the following format. Note that you can specify a different value for each stage:

  csi.storage.k8s.io/controller-expand-secret-name: <secret name>
  csi.storage.k8s.io/controller-expand-secret-namespace: <secret namespace>
  csi.storage.k8s.io/controller-publish-secret-name: <secret name>
  csi.storage.k8s.io/controller-publish-secret-namespace: <secret namespace>
  csi.storage.k8s.io/node-publish-secret-name: <secret name>
  csi.storage.k8s.io/node-publish-secret-namespace: <secret namespace>
  csi.storage.k8s.io/node-stage-secret-name: <secret name>
  csi.storage.k8s.io/node-stage-secret-namespace: <secret namespace>
  csi.storage.k8s.io/provisioner-secret-name: <secret name>
  csi.storage.k8s.io/provisioner-secret-namespace: <secret namespace>

storagePath: "<path>"

root_export: "<path>"

The storage path within VAST Cluster to be used when dynamically provisioning Kubernetes volumes. VAST CSI Driver will automatically create a VAST Cluster view for each volume being provisioned.

Caution

Do not specify '/' as the <path>.

This option is required when defining a storage class in the Helm chart configuration file.

viewPolicy: "<policy name>"

view_policy: "<policy name>"

The name of the VAST Cluster view policy to be assigned to VAST Cluster views created by VAST CSI Driver.

A view policy defines access settings for storage exposed through a VAST Cluster view. For more information, see Configure View Policies.

All view policies used with VAST CSI Driver must have the same security flavor.

If you are going to use VAST CSI Driver with VAST Cluster 4.6 or later, a view policy set for a storage class must belong to the same VAST Cluster tenant as the virtual IP pool(s) specified for that storage class.

This option is required when defining a storage class in the Helm chart configuration file.

vipPool: "<virtual IP pool name>"

vip_pool_name: "<virtual IP pool name>"

The name of the virtual IP pool to be used by VAST CSI Driver. For more information, see Set up Virtual IP Pools.

If you are going to use VAST CSI Driver with VAST Cluster 4.6 or later, a virtual IP pool that you specify for a storage class must belong to the same VAST Cluster tenant as the view policy specified for this storage class.

Either the vipPool or the vipPoolFQDN option is required when defining a storage class in the Helm chart configuration file. These options are mutually exclusive. When vipPool is used, VAST CSI Driver makes an additional call to VMS to obtain an IP address, which may slow down volume mounting.

vipPoolFQDN: "<virtual IP pool FQDN>"

vip_pool_fqdn: "<virtual IP pool FQDN>"

The domain name of the virtual IP pool to be used by VAST CSI Driver. For more information, see Set up Virtual IP Pools.

Either the vipPoolFQDN or the vipPool option is required when defining a storage class in the Helm chart configuration file. These options are mutually exclusive. With vipPoolFQDN, the IP address used for volume mounting is obtained through DNS, which improves mount times.

volumeNameFormat: "<format>"

volume_name_fmt: "<format>"

A format string that controls naming of volumes created through VAST CSI Driver. If not specified, the default format csi:{namespace}:{name}:{id} is used.
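For example, to add an environment prefix while keeping the {id} placeholder (the prefix below is illustrative; the placeholders presumably expand to the PVC's namespace, the PVC's name, and a driver-assigned identifier):

    parameters:
      volume_name_fmt: "prod-csi:{namespace}:{name}:{id}"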