Async Replication Configuration Workflows


If the Clusters are Connected through the Management Network

DataSpace provides a simplified flow for configuring async replication. This option is available only when the source and destination clusters are connected through their management networks. For details of this procedure, see Configuring Asynchronous Replication from DataSpace.

After you configure replication from DataSpace, if you want to enable seamless failover for NFSv3 clients, create a new view on each of the clusters participating in the replication, on the path that you want to make mountable for clients. That can be the replicated path or a path contained within the replicated path. For each view that you create, enable global sync. See Creating Views.

If the clusters are not connected over their management networks, proceed with Configuration Workflow for Async Replication with a Single Destination or Configuration Workflow for Async Replication with Multiple Destinations, depending on whether you are replicating to one destination or to several.

Configuration Workflow for Async Replication with a Single Destination

These steps complete the configuration of replication between two peers. To configure a replication group with more than two peers, see Configuration Workflow for Async Replication with Multiple Destinations.

Further steps are needed to prepare for smooth failover or to perform failover. See Deploying a Failed-Over Replication Peer as a Working Cluster.

Follow this workflow to configure async replication between a source cluster and a single destination cluster (an example CLI sequence follows the steps):

  1. Configure a dedicated virtual IP pool for replication on each of the peer clusters. The VIP pool role must be set to replication. You can use this VIP pool to control which CNodes are used to replicate data between the peers, although this is not mandatory.

  2. If you want to configure replication in secure mode, with mTLS encryption, make sure that mTLS certificates are installed on both participating clusters.

  3. Create a replication peer. This is the configuration of a replication relationship with another cluster. The peer configuration is mirrored on the destination peer.

  4. Create a protection policy. This is a policy governing the schedule and retention parameters for replicating data to the configured peer.

  5. Create a remote protected path. This defines a data path on the cluster to replicate, the destination path on the peer, and the protection policy to apply. You can create multiple protected paths using the same protection policy and replication peer to replicate different paths. On the remote peer, you can also set up multiple protected paths with the local peer as the destination. In other words, replication can be set up in both directions between a pair of peers.

  6. If you want to enable seamless failover for NFSv3 clients, create a new view on the replicated path on the source peer and on the destination peer, enabling global sync on each view. See Creating Views.

    Note

    Global synchronization cannot be enabled on an existing view. It must be enabled when the view is created.
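
For reference, steps 3 through 5 can also be performed with the VAST CLI commands documented under Procedures for Workflow Steps. The following is a minimal sketch run on the source cluster; the names, IP address, and IDs are placeholder example values, and the --peer-id and --protection-policy-id arguments must reference the peer and policy created in the preceding commands:

vcli: admin> replicationpeer create --name VASTmain-VASTbackup --remote-leading-vip 198.51.100.200 --local-vip-pool-id 3
vcli: admin> protectionpolicy create --schedule every 90m start at 2025-07-27 20:10:35 keep-local 10h keep-remote 10d --prefix Snapdir1 --clone-type CLOUD_REPLICATION --name protect-pol1 --peer-id 1
vcli: admin> protectedpath create --name backupthisdir --protection-policy-id 1 --source-dir / --target-exported-dir /backup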

Configuration Workflow for Async Replication with Multiple Destinations

Note

All clusters that will participate in a replication configuration with more than one destination must be running VAST Cluster 4.7 or later.

Note

You cannot convert a protected path that was created prior to VAST Cluster 4.7 to support group replication. If you want to expand a replication configuration that was created with an earlier version to include a third peer, you must delete and then recreate the protected path.

A replication configuration with more than two peers requires a peer relationship and a protection policy between every pair of peers in the group, including between destination peers. The configurations between destination peers remain on standby while data is replicated from the source to each destination. If a failover event requires a change of source, one of the destinations becomes the new replication source, and the configuration is already in place to replicate from that new source to each of the other destinations. The configuration that enables replication between any two peers is sometimes referred to as a replication stream, for example in the VAST Web UI of previous versions.

Follow the steps below to complete a replication configuration in which a source peer replicates to multiple destination peers (a brief CLI sketch follows the steps).

Further steps are needed to prepare for smooth failover or to perform failover. See Deploying a Failed-Over Replication Peer as a Working Cluster.

  1. Configure a dedicated VIP pool for replication on each of the peer clusters. The VIP pool role must be set to replication. You can use this VIP pool to control which CNodes are used to replicate data between the peers, although this is not mandatory.

  2. If you want to configure replication in secure mode, with mTLS encryption, make sure that mTLS certificates are installed on every participating cluster.

  3. On the primary cluster:

    1. Create a replication peer for each of the other clusters in the group.  Each of these replication peer configurations is mirrored on the destination cluster.  

    2. Create a protection policy for each replication peer.  This is also mirrored on the replication peer.

    3. Create a remote protected path for replicating from the primary cluster to all of the destination clusters. The protected path is mirrored on each destination peer.

      Note

      At this point, if you wish to enable seamless failover for NFSv3 clients that will access the replicated path, you can create a new view with global sync enabled on the local path and on the remote path. Otherwise, you can create these views after completing the replication group. See Creating Views.

  4. On one of the other clusters in the group:

    1. Create a replication peer for each of the other destination peers.

    2. Create a protection policy for replicating to each of the other destination peers.

    3. Edit the protected path and add each of the other destination peers to the protected path as a destination.

  5. Repeat the creation of replication peers, protection policies, and destinations on destination peers as needed until all destination peers have standby replication streams to all other destination peers.

  6. At this point, if you want to enable seamless failover for NFSv3 clients and you did not yet create views with global sync on the replication paths, create a new view on each replication peer in the replication group, on the path that you want to make mountable for clients. That can be the replicated path or a path contained within the replicated path. For each view that you create, enable global synchronization. See Creating Views.

    Note

    Global synchronization cannot be enabled on an existing view. It must be enabled when the view is created.
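
For reference, the standby relationships between destination peers (steps 4 and 5) use the same CLI commands as on the primary. For example, on one destination cluster, a peer and policy toward another destination could be created as follows; the names and values are placeholders, and the --peer-id must match the peer just created on that cluster:

vcli: admin> replicationpeer create --name VASTbackup1-VASTbackup2 --remote-leading-vip 198.51.100.201 --local-vip-pool-id 3
vcli: admin> protectionpolicy create --schedule every 90m start at 2025-07-27 20:10:35 keep-local 10h keep-remote 10d --prefix Snapdir1 --clone-type CLOUD_REPLICATION --name protect-pol-b1-b2 --peer-id 2

The other destination is then added to the group's protected path, as described in Adding Destinations to a Protected Path.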

Adding a Member to a Replication Group

Note

You cannot convert a protected path that was created prior to VAST Cluster 4.7 to support group replication. If you want to expand a replication configuration that was created with an earlier version to include a third peer, you must delete and then recreate the protected path.

To add another peer to an existing replication group:

  1. On the primary cluster:

    1. Create a replication peer and a protection policy for the new member.

    2. Edit the protected path for the group and add a destination with the protection policy that you created for replicating to the new member.

  2. On the new member peer:

    1. Create a replication VIP pool if needed.

    2. Create a replication peer and protection policy for each of the other destination peers.

    3. Open the group's protected path to edit it, and add each of the other destination peers as a destination, specifying the protection policies created in the previous step.

  3. If you want to enable seamless failover for NFSv3 clients, create a new view on the replicated path on the new peer, and enable global sync on the view. See Creating Views.

    Note

    Global synchronization cannot be enabled on an existing view. It must be enabled when the view is created.

Procedures for Workflow Steps

Configuring Replication Virtual IP Pools

A replication VIP pool must be configured on each cluster that will participate in async replication.

A replication VIP pool is used exclusively for routing replication traffic between the peers and not for serving data to clients. The CNodes that are assigned VIPs from the replication VIP pool are used to communicate directly with the remote peer, while other CNodes can communicate only indirectly with the remote peer.

When you configure a replication VIP pool, you can optionally restrict it to specific CNodes.

Creating a Replication VIP Pool

On each replication peer, create a virtual IP pool dedicated to replication as follows (a CLI sketch follows this list):

  • Set the VIP pool's role to Replication.

  • You can configure multiple non-consecutive VIP ranges in a replication VIP pool.

  • Do not specify a domain name.

  • You can dedicate one or more CNodes to the replication VIP pool.

  • You can tag the replication VIP pool with a VLAN.
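
If you prefer the VAST CLI, a replication VIP pool can typically be created with the vippool create command. The parameter names below are hypothetical and may differ between versions; verify them with vippool create --help before running the command:

# Hypothetical parameter names - verify with 'vippool create --help'
vcli: admin> vippool create --name vippool_rep --role REPLICATION --start-ip 198.51.100.10 --end-ip 198.51.100.13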

Encrypting Replication with mTLS

VAST Cluster supports securing the replication connection with mutual TLS (mTLS) encryption, in which each replication peer cluster authenticates the other side. mTLS encryption requires certificates installed on each of the peer clusters and is used for replication peer configurations that have secure mode enabled.

To configure mTLS encryption, do the following:

  1. Obtain Certificates for mTLS Encryption

  2. Install mTLS Certificates on each Participating VAST Cluster

  3. When you create a replication peer, set the secure mode setting to Secure.

Obtain Certificates for mTLS Encryption

Obtain an RSA-type TLS certificate from a Certification Authority (CA) for each of the peers in the replication configuration. Each certificate consists of a certificate file and a private key file. Obtain both files in PEM format.

Also obtain a copy of the CA's root certificate, which each peer uses to verify the certificates presented by the other peers. Use the same root certificate on every peer.
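
For example, if you generate the private key and certificate signing request (CSR) yourself before submitting the CSR to your CA, a typical OpenSSL sequence looks like the following. The file names and subject are placeholders, and your CA may have its own submission procedure:

# Generate an RSA private key and a CSR to submit to the CA (both in PEM format)
openssl req -new -newkey rsa:2048 -nodes -keyout vastmain.key -out vastmain.csr -subj "/CN=vastmain.example.com"

# After the CA returns the signed certificate, confirm that the PEM files are valid
openssl x509 -in vastmain.pem -text -noout
openssl rsa -in vastmain.key -check -noout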

Install mTLS Certificates on each Participating VAST Cluster

Installing mTLS Certificates from the VAST Web UI
  1. From the left navigation menu, select Settings and then Certificates to open the Certificates tab. 

  2. From the Certificate for dropdown, select replication.

  3. Paste the certificate file contents into the Certificate field or use the Upload button to upload the file. In the same way, paste or upload the key file contents into the Key field and the root certificate contents into the Root Certificate field.

    When pasting the file content, include the BEGIN CERTIFICATE / BEGIN PRIVATE KEY and END CERTIFICATE / END PRIVATE KEY lines, like this:

    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE----- 
  4. Click Update.

Installing mTLS Certificates from the VAST CLI

To install the certificates using the VAST CLI, use the cluster modify command with the following parameters: --cluster-certificate, --cluster-private-key, and --root-certificate.
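
For example, assuming each parameter accepts the corresponding PEM file (check cluster modify --help to confirm whether it expects a file path or the pasted file contents):

vcli: admin> cluster modify --cluster-certificate vastmain.pem --cluster-private-key vastmain.key --root-certificate ca-root.pem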

Creating a Replication Peer

This step involves establishing a connection to a remote cluster that will be the destination peer. The replication peer configuration is mirrored on the remote cluster as well.

Creating a Replication Peer via the VAST Web UI

  1. From the left navigation menu, select Data Protection and then Replication Peers.

  2. Click Create Peer.

  3. Complete the fields:

    Field

    Description

    Peer Name

    Enter a name for the peer configuration. The peer configuration will be mirrored on the remote cluster and have the same name on both clusters.

    For example: VASTmain-VASTbackup

    Remote VIP

    Enter any one of the virtual IPs belonging to the replication VIP pool to use as the leading remote virtual IP.

    The remote virtual IP is used to establish an initial connection between the peers. Once the connection is established, the peers share their external network topology and form multiple connections between the VIPs.

    If the remote peer's replication virtual IP pool is changed after the initial peer configuration, the new virtual IPs are learned automatically if the new range of IPs in the modified virtual IP pool intersects with the previous IP range. However, if the new IP range does not intersect with the old range, the remote virtual IP must be modified on the local peer.

    Local VIP Pool

    From the drop-down, select the replication VIP pool configured on the local cluster.

    For example: vippool_rep

    Secure Mode

    Select a secure mode for the peer:

    • Secure. Replication to this peer will be encrypted over the wire with mTLS.

      Secure mode requires a certificate, key, and root certificate to be uploaded to VMS for mTLS encryption. See Encrypting Replication with mTLS.

    • None. Replication to this peer will not be encrypted over the wire.

    Caution

    This setting cannot be changed after creating the replication peer.

  4. Click Create.

    The replication peer is created and mirrored to the remote cluster. The details are displayed in the Replication Peers page on both the local cluster and the remote cluster.

Creating a Replication Peer via the VAST CLI

To create a replication peer via the VAST CLI, run replicationpeer create.

For example:

vcli: admin> replicationpeer create --name vastnativebackup --remote-leading-vip 198.51.100.200 --local-vip-pool-id 3

Creating Protection Policies for Asynchronous Replication

This step creates a protection policy for scheduling snapshots on a cluster and transferring them to a remote replication peer. Optionally, the policy can also retain the snapshots on the local cluster in addition to transferring them. The protection policy is mirrored to the replication peer, where it can be used for replicating in the reverse direction in the event of a failover.

Creating a Protection Policy via VAST Web UI

  1. From the left navigation menu, select Data Protection and then Protection Policies.

  2. Click Create Protection Policy.

  3. In the Add Protection Policy dialog, complete the fields:

    Field

    Description

    Policy name

    Enter a name for the protection policy.

    Peer

    Select the replication peer from the dropdown. This defines the replication peer as a destination to which snapshots can be copied.

    Snapshot prefix

    Enter a prefix for the snapshot names.

    The name of each snapshot will be <prefix>_<timestamp>, where <prefix> is the prefix specified here and <timestamp> is the time the snapshot is created, in the format yyyy-mm-ddTHH:MM:SS.SSSSSSzzz (T denotes time and doesn't represent a value, zzz is the timezone, and the time is accurate to the microsecond). For example, if the prefix is dev, a snapshot taken at 8:15 pm UTC on 20th November 2024 would be named:

    dev_2024-11-20T20:15:06.144783UTC

  4. If you want to make the protection policy indestructible, enable the Indestructible setting. This setting protects the policy and its snapshots from accidental or malicious deletion. For more information about indestructibility, see Managing the Indestructibility Mechanism.

    Caution

    After saving the protection policy, you won't be able to delete the policy or disable its indestructibility without performing a procedure for authorized unlocking of the cluster's indestructibility mechanism.

    Note

    If a replication peer is configured, the indestructibility setting will be replicated to the peer.

  5. Set up one or more replication schedules:

    Note

    If you want to set up multiple schedules, click the Add Schedule button to display more scheduling fields in the dialog.

    • To set the start time, click in the Start at field. In the calendar that appears, click the start date you want and adjust the start time:

      Set_start_time.png

      Note

      When a protected path is active, it performs an initial data sync to the replication peer immediately after being created. The initial sync creates the first restore point. Therefore, the restore point created on the start date is in fact the second restore point.

    • To set a period, select a time unit from the Period dropdown and enter the number of time units in the Every field.

      Note

      The minimum interval is 15 seconds.

  6. Set the Keep local copy for period. This is the amount of time for which snapshots are retained on the local cluster.

    Select a time unit from the Period dropdown and enter the number of time units in the Keep local copy for field.

  7. Set the Keep remote copy for period. This is the amount of time restore points are retained on the remote peer.

    Select a time unit from the Period dropdown and enter the number of time units in the Keep remote copy for field.

  8. Click Create.

    The protection policy is created and listed in the Protection Policies page.

Creating a Protection Policy via VAST CLI

To create a protection policy via the VAST CLI, use the protectionpolicy create command.

For example:

vcli: admin> protectionpolicy create --schedule every 90m start at 2025-07-27 20:10:35 keep-local 10h keep-remote 10d --prefix Snapdir1 --clone-type CLOUD_REPLICATION --name protect-pol1 --peer-id 1

Creating a Protected Path for Async Replication

When you have defined a protection policy for async replication to a remote peer, you can define a protected path to start replicating data from a local path.

Limitations

  • Data cannot be moved into or out of a path that is protected by either async replication or S3 replication. This applies to moving files or directories from a protected path to a non-protected path, from a non-protected path to a protected path or from one protected path to another protected path.

  • Protected paths with async replication cannot be nested.

Creating Protected Paths between Tenants

You can configure protected paths between tenants on different clusters, subject to the following restriction:

If Tenant A replicates a protected path to Tenant B on a remote cluster, it cannot then replicate another path to Tenant C on the same remote cluster (that is, Tenant A cannot have replicated protected paths to more than one tenant on the same remote cluster). It can, however, replicate protected paths to Tenant C (or any other tenant) on a different remote cluster. Similarly, Tenant A can replicate additional protected paths to Tenant B on the same remote cluster.

Creating a Protected Path via the VAST Web UI

  1. In the left navigation menu, select Data Protection and then Protected Paths.

  2. On the Protected Paths tab, click Create Protected Path and then New Remote Protected Path.

  3. In the Add Source/Primary dialog, complete the fields:

    Name

    Enter a name for the protected path.

    Tenant

    Select the tenant under which the source path resides.

    Note

    Paths on different tenants can share identical names.

    Path

    The path you want to replicate. A snapshot of this directory will be taken periodically according to the protection policy.

    Note

    • If you specify '/' (the root directory), this includes data written via S3.

    • To specify a path to a specific S3 bucket with name bucket, enter /bucket.

  4. Click Next.

  5. In the Add Destination/Secondary dialog, complete these fields:

    Capability

    Select Replication from the dropdown.

    Protection policy

    Select a protection policy from the dropdown, or select the option to create a new one, configure it in the dialog provided, and save it.

    Warning

    After adding a destination to a protected path, it is not possible to change which policy is associated with the destination. All changes to a stream's snapshot schedule, replication schedule, and snapshot expiration must be made by modifying the protection policy. Those modifications affect all destinations that use the same protection policy. To work around this limitation, use one protection policy per destination.

    (Cluster)

    This field is filled automatically with the cluster specified as the peer in the protection policy.  

    Remote tenant

    This field appears only if the remote peer has more than one tenant. If it appears, select a tenant on the remote peer from the dropdown. The remote path will be created on the selected tenant.

    The selection of tenant on the remote peer is subject to the restriction in Creating Protected Paths between Tenants.

    Path

    Specify the directory on the remote peer where the data should be replicated. This must be a directory that does not yet exist on the remote peer.

    Tip

    You cannot use "/" as the remote path because the root directory always exists on the remote peer. If you want to replicate all data under the root directory, replicate it to a subdirectory, for example, a path on the peer of "mirror/".

  6. Click Add.

  7. Click Create.

    The protected path is created and listed in the Protected Paths tab.

Creating a Protected Path via the VAST CLI

Use the protectedpath create command.

For example:

vcli: admin> protectedpath create --name backupthisdir --protection-policy-id 1 --source-dir / --target-exported-dir /backup

Adding Destinations to a Protected Path

When you first create a protected path, you can add one destination. After creating the protected path, you can add additional destinations.

Note

The tenant restriction described in Creating Protected Paths between Tenants also applies when adding destinations.

  1. In the Protected paths page, open the Actions menu for the protected path and select Edit.

  2. Select a Sync Point Guarantee using the dropdowns provided. This limits how much time can elapse since the last sync point shared between the destination peers in the group. A sync point is a snapshot that is shared between the peers in the replication group.

  3. In the Update Protected Path dialog, under Add Replication Stream, enter the following:

    Protection policy

    Select the protection policy that is configured for the remote peer that you want to add.  The Remote peer field is filled with the remote peer from the protection policy.

    Remote path

    Specify the path on the remote peer to which you want the stream to replicate the data from the protected path. The path you specify must be to a directory that does not yet exist on the remote peer.

    Remote tenant

    This field appears only if the remote peer has more than one tenant. Select the tenant on the remote peer where you want to create the remote path.  

    The selection of tenant on the remote peer is subject to the restriction in Creating Protected Paths between Tenants.

  4. Click Update.

Removing a Destination from a Protected Path

When you remove a destination from a protected path on the source cluster, VMS removes any associated standby stream(s) on destination clusters.

Removing a Destination from the VAST Web UI

  1. On the source cluster, right-click the protected path and select Edit.

  2. Click the trash_icon.png icon for the destination you want to remove.

  3. Click Yes to confirm the removal.

Removing a Destination from the VAST CLI

Use the protectedpath remove-stream command.
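
For example, assuming the stream is identified by the protected path name and the destination's protection policy (hypothetical arguments; check protectedpath remove-stream --help for the exact parameters):

# Hypothetical arguments - verify with 'protectedpath remove-stream --help'
vcli: admin> protectedpath remove-stream --name backupthisdir --protection-policy-id 2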