Prerequisites for Global Access
All participating clusters must be running VAST Cluster 5.1 as a minimum version.
Note
Global Access can be configured on the source path of an asynchronous replication group provided participating clusters are running VAST Cluster 5.3 as a minimum version. For details of this configuration, see Configuring Async Replication and Global Access on Shared Paths.
Workflows for Configuring a Global Access Path
Using the DataSpace Page in the VAST Web UI (Requires Connectivity through Management Network)
DataSpace provides a simplified flow for configuring global access. This option is available specifically where the source and destination cluster(s) have interconnectivity through their management networks. For details of this procedure, see Configuring Asynchronous Replication from DataSpace.
If the clusters are not connected over their management networks, proceed with Alternate Workflow (Management Network Connectivity not Required).
Alternate Workflow (Management Network Connectivity not Required)
Make sure there is a virtual IP pool for replication on the source cluster and on each of the destination clusters. The virtual IP pool's role must be set to replication. You can optionally use this virtual IP pool to control which CNodes are used to service replication and global access.
Optional: If you want to configure secure mode, with mTLS encryption, on the connection between the clusters used for Global Access, make sure that mTLS certificates are installed on every participating cluster.
Optional: Make sure bucket replication is enabled so that an S3 bucket view of the global folder on the origin will be replicated to the satellite.
On the source cluster, create a replication peer for each of the destination clusters.
Create a global access protected path on the source cluster.
Procedures for Workflow Steps
Creating a Replication Peer
This step involves establishing a connection to a remote cluster that will be the destination peer. The replication peer configuration is mirrored on the remote cluster as well.
Creating a Replication Peer via the VAST Web UI
From the left navigation menu, select Data Protection and then Replication Peers.
Click Create Peer.
Complete the fields:
Field
Description
Peer Name
Enter a name for the peer configuration. The peer configuration will be mirrored on the remote cluster and have the same name on both clusters.
For example: VASTmain-VASTbackup
Remote VIP
Enter any one of the virtual IPs belonging to the remote cluster's replication VIP pool to use as the leading remote virtual IP.
The remote virtual IP is used to establish an initial connection between the peers. Once the connection is established, the peers share their external network topology and form multiple connections between the VIPs.
If the remote peer's replication virtual IP pool is changed after the initial peer configuration, the new virtual IPs are learned automatically if the new range of IPs in the modified virtual IP pool intersects with the previous IP range. However, if the new IP range does not intersect with the old range, the remote virtual IP must be modified on the local peer.
Local VIP Pool
From the drop-down, select the replication VIP pool configured on the local cluster.
For example: vippool_rep
Secure Mode
Select a secure mode for the peer:
Secure. Replication to this peer will be encrypted over the wire with mTLS.
Secure mode requires a certificate, key and root certificate to be uploaded to VMS for mTLS encryption.
None. Replication to this peer will not be encrypted over the wire.
Caution
This setting cannot be changed after creating the replication peer.
Click Create.
The replication peer is created and mirrored to the remote cluster. The details are displayed in the Replication Peers page on both the local cluster and the remote cluster.
Creating a Replication Peer via the VAST CLI
To create a replication peer via the VAST CLI, run replicationpeer create.
For example:
vcli: admin> replicationpeer create --name vastnativebackup --remote-leading-vip 198.51.100.200 --local-vip-pool-id 3
Creating a Global Access Protected Path
Creating a Protected Path for Global Access via the VAST Web UI
In the left navigation menu, select Data Protection and then Protected Paths.
On the Protected Paths tab, click Create Protected Path and select Create Remote Protected Path.
In the Add Source/Primary dialog, complete the fields:
Mode
Select the replication mode. For global access coexisting with async replication, select Async Replication.
Activate Global Access
Select to create global access and async replication on the destination/secondary.
Activate on source as well
Select to create global access and async replication on the source/primary side as well. If not selected, global access is not available in case of a failover, and the data reverts to the last available replication snapshot.
Protection Policy
Select an existing protection policy or create a new one.
Name
Enter a name for the protected path.
Tenant
Select the tenant under which the source path resides.
Path
The path you want to replicate. A snapshot of this directory will be taken periodically according to the protection policy.
Note
If you specify '/' (the root directory), this includes data written via S3.
To specify the path to a specific S3 bucket named bucket, enter /bucket.
Click Next.
In the Add Destination/Secondary dialog, select Global Access from the Mode dropdown and complete these fields:
Cluster
Select the remote peer cluster on which you want to create a destination path at which to provide access to the source path's data.
Remote tenant
This field appears only if the remote peer has more than one tenant. If it appears, select a tenant on the remote peer from the dropdown. The remote path will be created on the selected tenant.
The selection of tenant on the remote peer is subject to restriction. See Creating Protected Paths between Tenants.
Remote Path
Specify the directory on the remote peer where the data should be shared for access. This must be a directory that does not yet exist on the remote peer.
Click Add.
In the Add the Lease Expiration Time dialog, set the lease expiration time.
Lease expiration time is the duration for which data that was already requested at the destination path can be read locally from cache without the destination peer requesting it from the source peer. When the lease expires, the cache is invalidated and the next read of the data is requested again from the source peer.
Enter a number in the field provided and select the time units from the dropdown. For example, enter 20 and select Minutes for a 20-minute lease.
Click Add.
Check the details of the configuration. If needed, click the edit buttons to edit the source, the destination, and/or the lease expiration time.
Click Create.
The protected path is added to the Protected Paths tab. Initially, the state Initializing is displayed in the State column for the protected path.
When the protected path status changes to Active, you can add another destination:
Right-click the protected path and select Edit.
In the Edit Remote Protected Path dialog, click Add Another.
Complete the fields and click Save.
Click Update.
Creating a Protected Path for Global Access from the VAST CLI
Use the protectedpath create command to create the protected path with one destination.
For example:
vcli: admin> protectedpath create --name ga1 --protection-policy-id 5 --capabilities SHARED_GLOBAL_NAMESPACE --source-dir /gasource --local-tenant-id 1 --remote-target-id 4 --target-exported-dir /ga_destination --remote-tenant-name default --lease-expiry-time 1200
Use the protectedpath add-stream command to add each additional destination, one at a time.
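As a hedged sketch, adding a second destination with protectedpath add-stream might look like the following. The flags shown are an assumption, mirroring the destination options of protectedpath create; verify the actual parameters with the CLI help.
vcli: admin> protectedpath add-stream --id 7 --remote-target-id 5 --target-exported-dir /ga_destination2 --remote-tenant-name default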
Creating Protected Paths Between Tenants
You can configure protected paths between tenants on different clusters, subject to the following restriction:
If Tenant A replicates a protected path to Tenant B on a remote cluster, it cannot then replicate another path to Tenant C on the same remote cluster (that is, Tenant A cannot have replicated protected paths to more than one tenant on the same remote cluster). It can, however, replicate protected paths to Tenant C (or any other tenant) on a different remote cluster. Similarly, Tenant A can replicate additional protected paths to Tenant B on the same remote cluster.
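For example, suppose Tenant A on the local cluster already replicates a protected path to Tenant B on remote Cluster X (the tenant and cluster names here are illustrative):
Tenant A to Tenant B on Cluster X (another path): allowed, since it is the same tenant pair on the same remote cluster.
Tenant A to Tenant C on Cluster X: not allowed, since it would be a second tenant on the same remote cluster.
Tenant A to Tenant C on Cluster Y: allowed, since it is a different remote cluster.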
Prefetching Data to Global Access Peers
You can prefetch folders in protected paths on a Global Access source cluster to the cache on a destination cluster. You configure this on the protected path of the destination cluster.
The operation to prefetch information is called a prefetch task.
Prefetch Data using the VAST Web UI
In the navigation menu on the destination peer, select Data Protection, and then Protected Paths.
Right-click on the path for the prefetch, and select Global Access/Prefetch Data.
Enter the name of the folder to prefetch, relative to the path on the destination cluster. The equivalent folder on the source cluster is fetched.
Select Get full directory data to fetch the entire contents of the folder (metadata and files), or Get directory structure only to fetch only the metadata, without file data. Subfolders are fetched in both cases.
Fetching full directory data may involve a large amount of data. Check whether there is sufficient cache memory. If there is not, the prefetched data overwrites earlier parts of the cache as it is loaded.
Click Prefetch. A new task is created to fetch the selected data. The data is copied from the equivalent folder in the path on the source cluster.
For example, if protected path /x on the source is shared by Global Access to a destination as path /x-dest, and folder /x-dest/y is prefetched on the destination, the data from /x/y is the source for this.
Prefetching Data using the VAST CLI
Use the protectedpath start-prefetch command to prefetch a specific folder. For example:
vcli: admin> protectedpath start-prefetch --id 123 --folder /test --prefetch-type FULL
Showing Status of Prefetch Tasks using the VAST Web UI
You can see the status of prefetch tasks on the destination peer. The peer must be a destination for Global Access.
In the navigation menu, select Data Protection, and then Protected Paths.
Open the inspection pane for a path. The panel shows details for all prefetch tasks for the path, including status and size.
Click on Prefetch tasks to see a detailed list of all prefetch tasks for this path. The list shows the path that was prefetched, the status, and the amount of data copied.
You can stop a prefetch task in progress. Right-click on the prefetch task in the list, and select Abort.
You can rerun a prefetch task, to copy the data again. Right-click on the task in the list, and select Fetch Again.
Getting Status of Prefetch Tasks using the VAST CLI
Use the protectedpath get-prefetch-status command to get details for prefetch tasks.
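For example (the --id parameter here is an assumption, mirroring protectedpath start-prefetch; verify the actual parameters with the CLI help):
vcli: admin> protectedpath get-prefetch-status --id 123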
Enabling Bucket Replication
Note
Once enabled, bucket replication cannot be disabled.
To enable bucket replication from the VAST Web UI
From the left navigation menu, select Settings and then S3.
Click the Enable button for Bucket replication.
If you want to enable bucket replication for VAST Database buckets, click the Enable button for Bucket DB replication.
Click Save.
To enable bucket replication from the VAST CLI, run the cluster modify command with the --enable-bucket-replication option. If you want to enable bucket replication for VAST Database buckets, specify the --enable-bucket-db-replication option on the command.
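For example, assuming both options can be combined in a single invocation:
vcli: admin> cluster modify --enable-bucket-replication --enable-bucket-db-replication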