VAST on Cloud for GCP

VAST on Cloud (VoC) clusters in GCP are created using Terraform. Terraform files supplied by VAST create the required resources in a GCP project, and the cluster is then installed on them.

Notice

This feature is available starting with VAST Cluster 5.2.0-SP3.

Prerequisites

  • A GCP account with a GCP project into which the VAST on Cloud cluster will be deployed.

  • Terraform v1.5.4 or later

  • Google gcloud SDK

  • An SSH key pair (a generation sketch follows this list)
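
If you do not already have an SSH key pair, you can generate one with ssh-keygen. The file name and comment below are only examples.

ssh-keygen -t rsa -b 4096 -f ~/.ssh/voc_gcp_key -C "voc-admin"

The public key (~/.ssh/voc_gcp_key.pub in this example) is the value you will later paste into the ssh_public_key Terraform variable.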

Configuring GCP for VoC

Configure the following in the GCP project from the GCP Console.

Enable Compute API

In Compute/VM Instances, enable the Compute API.
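
If you prefer to work from the command line, the same API can be enabled with gcloud. The project ID below is a placeholder for your own project.

gcloud services enable compute.googleapis.com --project=<project_id>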

Set up Private Networking

In the VPC Networks page, configure Private services access to your VPC by allocating an IP range for services and creating a private connection to services.
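
The same configuration can also be sketched with gcloud. The range name, prefix length, network, and project ID below are examples or placeholders; adjust them to your environment.

# Allocate an IP range for Google-managed services
gcloud compute addresses create voc-psa-range \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=16 \
    --network=<vpc_network> \
    --project=<project_id>

# Create the private connection to the service networking service
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=voc-psa-range \
    --network=<vpc_network> \
    --project=<project_id>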

Set up NAT per Region

In Network Services/Cloud NAT, create a Cloud NAT gateway with these details for each region that has a VoC cluster (a gcloud sketch follows this list):

  • Region: the region containing the cluster

  • Router: Create New Router

  • Network Service Tier: Premium
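
As an alternative to the Console, a Cloud NAT gateway can be created with gcloud. The router and gateway names are examples, the region and project ID are placeholders, and flag names may vary slightly between gcloud versions (check gcloud compute routers nats create --help).

# Create a Cloud Router in the cluster's region
gcloud compute routers create voc-nat-router \
    --network=<vpc_network> \
    --region=<region> \
    --project=<project_id>

# Create the NAT gateway on that router
gcloud compute routers nats create voc-nat-gateway \
    --router=voc-nat-router \
    --router-region=<region> \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges \
    --project=<project_id>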

Enable Secret Manager API

In Security/Secret Manager API, enable the Secret Manager API.
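
This API can also be enabled from the command line; the project ID is a placeholder.

gcloud services enable secretmanager.googleapis.com --project=<project_id>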

Configure Firewall Rules

In Network Security/Firewall, configure the firewall rules as follows. A gcloud sketch of the health check rule follows these steps.

  1. Create or enable a rule that allows TCP and ICMP traffic within the cluster (for example, the default-allow-internal rule, if present).

  2. Create a new firewall rule with these details:

    • Direction: ingress

    • Action on match: allow

    • Target tags: voc-health-check (this tag is created when the VoC cluster is deployed)

    • Source: type IPv4, ranges 130.211.0.0/22 and 35.191.0.0/16 (these are set by Google for health checks).

    • Protocols and ports: TCP on port 22

    Leave all other rule settings as the defaults.
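
The health check rule in step 2 can also be created with gcloud. The rule name, network, and project ID below are examples or placeholders; the target tag and source ranges are those listed above.

gcloud compute firewall-rules create voc-health-check-ssh \
    --network=<vpc_network> \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=voc-health-check \
    --project=<project_id>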

Quotas and Policy Constraints

Your GCP project should have these quotas:

  • Quota for local SSD per VM. This is set per region and must allow for at least 9 TB of local SSD per VM. The default quota is sufficient for only three VMs.

    Note

    Increasing the default quota to a sufficient level for a VAST cluster deployment can take some time, and is not done instantly using the GCP Console UI.

  • Quota for N2 CPUs. This is set per region. VMs used for VAST clusters require 48 N2 CPUs per VM. The default quota per region is 5000, which is sufficient for about 100 VMs. Increase the quota if your cluster has more VMs than this.

  • Quota for static routes per VPC network. This is set per VPC network and should allow for any IPs you use to connect to the cluster.

  • Quota for static routes per peering group. This is set per peering group (that is, for all peered projects). A peering group contains VPCs, within a common project, that can be connected to each other; these connections require static routes. The quota should allow for all the connection routes between VPCs in the peering group.

To avoid problems when creating the cluster in GCP, make sure that organization-level policy constraints do not conflict with the cluster requirements (for example, policies that restrict the creation of N2 VMs).
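
One way to review the current quota limits and usage for a region is with gcloud. The region and project ID below are placeholders, and the flatten/format flags are a sketch that may need adjusting to your gcloud version.

gcloud compute regions describe <region> \
    --project=<project_id> \
    --flatten="quotas[]" \
    --format="table(quotas.metric,quotas.limit,quotas.usage)"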

Installing the gcloud SDK

  1. Download the Google Cloud CLI from https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-469.0.0-darwin-x86_64.tar.gz (this example is the macOS x86_64 archive; download the archive that matches your platform).

  2. Extract the file.

    tar -xzvf google-cloud-cli-469.0.0-darwin-x86_64.tar.gz
  3. Run the installation script.

    ./google-cloud-sdk/install.sh
  4. Run this command to authenticate the SDK.

    gcloud auth application-default login

    A browser window opens. Log in with Google SSO using your work account and accept all requested permissions.

  5. Optionally, run this command to be able to run gcloud commands directly.

    gcloud auth login
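
You may also want to set a default project so that later gcloud commands (such as those in the configuration steps above) do not need an explicit --project flag; the project ID is a placeholder.

gcloud config set project <project_id>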

Installing Terraform

Download the latest version of Terraform from the Terraform website and follow its installation instructions.

Configuring Terraform

You will receive a zip file from VAST that contains Terraform files that are used to create the VAST Cluster.

Extract the contents of the file into a folder. If you are creating more than one cluster, extract the contents of each zip file into a separate folder.

Create a file named voc.auto.tfvars (you can use the file example.tfvars from the zip file as a template) with this content:

## required variables
name           = "<name>"
zone           = "<zone>"
subnetwork     = "<subnetwork>"
project_id     = "<project_id>"
nodes_count    = 8 # Minimum 8 - Maximum 14
ssh_public_key = "<public ssh key content>"
customer_name  = "<customer_name>"
## variables with defaults, when not provided, these defaults will be used
# network_tags                = []
# labels                      = {}
# ignore_nfs_permissions      = false
# enable_similarity           = false
# enable_callhome             = false
# replication_vips_per_node   = 0
# protocol_vips_per_node      = 0

where name is the name of the cluster, and zone, subnetwork, and project_id are values from the GCP project.
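
If you are not sure which values to use for zone, subnetwork, and project_id, you can list the available options with gcloud. These are standard gcloud commands and the project ID is a placeholder.

gcloud projects list
gcloud compute zones list --project=<project_id>
gcloud compute networks subnets list --project=<project_id>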

In ssh_public_key, enter your SSH public key, similar to this:

ssh-rsa AAAAB*****************************************************************************zrysUvp0EkI5YWm+lmiQP4edfNKo0G3udxeAGdrD9dZSlzqmtdvo7CTW7Qhh3v2T3t3tvTEQnnNx8CkQOFDuU3Eje7NiN1XTp5C14dcGfaZeJnRnwaKhyD710ZHTeRyzjoXhNoAOuPT4qrT4MZ4jUUjr8Fx3ozByPlLco7qHsXurZHdTFWmdR52PlWRZA++9uyjz/sPYO+HcHxtIT5yS7DVfQz8zFQTyL0Rk82v6S0HNlG31mMlA2cPt0/r2vpY0U2zfijHdZEGxu+XeR/xRmVhPFImxN0rl

Optionally, set these variables, or use the default settings:

  • network_tags: (Optional) Add GCP network tags to the cluster.

  • labels: (Optional) Add GCP labels to the cluster.

  • ignore_nfs_permissions: If enabled, the VoC cluster ignores file permissions and allows NFS and S3 clients to access data without checking permissions. Default: disabled.

  • enable_similarity: Enable this setting to turn on similarity-based data reduction on the cluster. Default: disabled. See Similarity-Based Data Reduction.

  • enable_callhome: Enable this setting to turn on the sending of callhome logs from the cluster. Default: disabled. See Configuring Call Home Settings.

  • replication_vips_per_node: The number of VIPs allocated for replication, per node. Use this parameter to configure VIPs for the cluster from Terraform. Default: 0 (VIPs are not configured from Terraform). See Configuring VIPs for the Cluster.

  • protocol_vips_per_node: The number of VIPs allocated for protocol access (NFS, SMB, and so on), per node. Use this parameter to configure VIPs for the cluster from Terraform. Default: 0 (VIPs are not configured from Terraform). See Configuring VIPs for the Cluster.

Configuring VIPs for the Cluster

You can allocate VIPs for the cluster on GCP in the following ways.

  • In the VAST Web UI, in the VIP Pools section of the Network Access page. Using this method, VIPs are configured after the cluster is provisioned and running. The VIPs added here must be routable to GCP and must not belong to any GCP subnet or to any CIDR range assigned to a GCP subnet.

  • In the Terraform file, using the replication_vips_per_node and protocol_vips_per_node parameters. If non-zero, VIPs are created when the cluster is provisioned by Terraform. The VIPs that are created appear in the Terraform output (see below), in the protocol_vips and replication_vips blocks.
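
After terraform apply completes, you can read the VIPs that Terraform created directly from the Terraform outputs instead of scrolling through the full apply output. terraform output is a standard Terraform command; run it in the folder that contains the Terraform files.

terraform output protocol_vips
terraform output replication_vips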

Creating the Cluster using Terraform

  1. Run the following command in the folder into which the zip file was extracted. This initializes the Terraform working directory.

    ~/voc/gcp/gcp-new-deploy > terraform init

    When complete, the following is shown:

    Terraform has been successfully initialized!
  2. Run the following command to deploy the VAST on Cloud cluster.

    ~/voc/gcp/gcp-new-deploy > terraform apply

    When the Terraform action is complete, something similar to the following is shown:

    Apply complete! Resources: 2 added, 0 changed, 9 destroyed.
    
    Outputs:
    
    availability_zone = "us-central1-a"
    cloud_logging = "https://console.cloud.google.com/logs/viewer?project=vast-on-cloud&advancedFilter=resource.type%3D%22gce_instance%22%0Alabels.cluster_id%3Ddc66387e-c8bb-5bd8-97db-469392f6bdba"
    cluster_mgmt = "https://10.120.9.243:443"
    instance_group_manager_id = "dotan-test-3n-instance-manager"
    instance_ids = tolist([
      "1315258176142165158",
      "5926902481847174310",
      "4477224983631873190",
    ])
    instance_type = "n2-highmem-48"
    private_ips = tolist([
      "10.120.7.254",
      "10.120.8.0",
      "10.120.8.2",
    ])
    protocol_vips = tolist([
      "10.120.9.231",
      "10.120.9.232",
      "10.120.9.233",
      "10.120.9.234",
      "10.120.9.235",
      "10.120.9.236",
    ])
    replication_vips = tolist([
      "10.120.9.237",
      "10.120.9.238",
      "10.120.9.239",
      "10.120.9.240",
      "10.120.9.241",
      "10.120.9.242",
    ])
    serial_consoles = [
      "https://console.cloud.google.com/compute/instancesDetail/zones/us-central1-b/instances/dotan-test-3n-instance-6873/console?port=1&project=vast-on-cloud",
      "https://console.cloud.google.com/compute/instancesDetail/zones/us-central1-b/instances/dotan-test-3n-instance-955w/console?port=1&project=vast-on-cloud",
      "https://console.cloud.google.com/compute/instancesDetail/zones/us-central1-b/instances/dotan-test-3n-instance-trl6/console?port=1&project=vast-on-cloud",
    ]
    vms_ip = "10.120.9.243"
    vms_monitor = "http://10.120.7.254:5551"
    voc_cluster_id = "dc66387e-c8bb-5bd8-97db-469392f6bdba"
    vpc_network = "vast-on-cloud"

    At this point, the cluster installation starts on the resources created by Terraform in the GCP project.

    Monitor the progress of the installation at the vms_monitor URL. The installation can take several minutes.

Accessing the Cluster in GCP

Access the VAST on Cloud cluster VMS Web UI from a browser, using the cluster_mgmt URL (from the terraform apply step, above).
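
The cluster_mgmt URL (and any other Terraform output) can be printed again at any time with the standard terraform output command, run from the folder that contains the Terraform files.

terraform output cluster_mgmt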

Destroying or Changing the Cluster Configuration

To destroy the cluster, run this command:

terraform destroy

If you want to change the cluster settings in the voc.auto.tfvars file, you must first destroy and then rebuild the cluster using Terraform. Do not run terraform apply directly after making changes to the file; this will corrupt the cluster.

Warning

Data in the cluster is not preserved when the cluster is destroyed using Terraform (including when destroying it in order to rebuild it).

Run the following commands to rebuild the cluster after making changes to the file.

terraform destroy
terraform apply

Best Practices for Terraform Files

The Terraform files contained in the zip file hold important information that Terraform uses to create your cluster. Take care that these files are not deleted or corrupted, and back them up.

Use a separate folder, with its own set of files, for each cluster that you provision on GCP using Terraform.
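
For example, a simple way to back up a cluster's Terraform folder, including its state files, is to archive the whole folder. The folder name below is hypothetical.

tar -czf voc-cluster1-terraform-backup-$(date +%F).tar.gz voc-cluster1/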