S3 Asynchronous Replication with Local Users and Identity Policy

ℹ️ Info

This document provides clear instructions for configuring asynchronous S3 replication using local users and identity policies.
The steps listed in this procedure have been tested on VAST 5.2 and above.

 

Create an identity policy

  • Log in to the VAST UI.

  • Go to User Management.

  • Click on Identity Policy.

  • Click on Create Policy, type a name for the new policy, and set the policy definition. You can define the policy using the Action and Resource drop-down menus or the JSON code box.

You can use the JSON example below to set the policy.

The screenshot shows the "Add Policy" interface with a focus on configuring an S3 replication policy that allows all actions on any resource in the default tenant, as indicated by the JSON template and form fields for policy definition.

JSON policy definition example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::bucket-1",
        "arn:aws:s3:::bucket-1/*"
      ]
    }
  ]
}
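
If, as in the screenshot above, you want the policy to allow all actions on any resource rather than on a single bucket, a broader variant such as the sketch below could be used. Adjust the wildcard to match your own security requirements; the exact resource form your cluster accepts may differ.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}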

ℹ️ Info

NOTE - The identity policy is automatically replicated to the DR (remote) cluster, but in Disabled mode; enable it manually on the remote cluster.

Create a local user (On the production cluster)

  • From the VAST UI, navigate to User Management.

  • Click on Users.

  • Click the Create user button, then provide a username and a UID.

  • Use the Identity Policy drop-down menu to select the required policy (in this example, we’ve used the policy created in the previous step).

  • Click on the Create button.

The image shows an interface for adding a new user in a cloud storage management system, where users can specify details like a unique identifier (UID), permissions to create and delete buckets, select group memberships, and apply identity policies such as "S3-Perlication-Allow-All-Policy."

Add User

  • Click Edit on the newly created user and click the Create new keys button.

  • Make sure to save a copy of the keys before clicking the Update button.

The image displays the update user interface, where users can modify settings such as name, UID, group memberships, and S3 identity policies. It also shows access keys in an enabled state with options to create new or copy them before closing the window.

Update User
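
Before moving on, you may want to confirm that the cluster accepts the new keys. The boto3 sketch below lists the contents of a bucket covered by the identity policy; the endpoint name s3.prod.vast.local is borrowed from the client example at the end of this document, and bucket-1 is the bucket referenced in the policy example above (it must already exist), so replace both with your own values.

import boto3

# Sanity check for the newly created access/secret keys (illustrative endpoint and bucket).
s3_client = boto3.client(
    's3',
    endpoint_url='http://s3.prod.vast.local',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)

# If the keys and the identity policy are valid, this call succeeds.
response = s3_client.list_objects_v2(Bucket='bucket-1')
print(f"Access verified, {response.get('KeyCount', 0)} objects found")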

ℹ️ Info

Note that to enable access in case of a failover, you need to create a user with the same keys on the remote cluster.

 

Create a replication peer

  • From the VAST UI, go to Data Protection.

  • Click Replication Peers, then click the Create Peer button.

  • Name the new peer, provide the Remote VIP address, choose the Local IP Pool to use, and click on Create.

The image shows the interface for adding a Native Replication Peer, where users can input parameters such as Peer name, Remote VIP address, Local Virtual IP pool, and Secure mode to configure replication settings securely.

Add native replication peer

  • Wait a few moments and verify that the peer is connected before moving on.

The screenshot displays the replication peers section in a data protection management interface, showing details such as peer status, remote and local virtual IP ranges, available space, heartbeats time, and version information. The selected peer named "V151-To-V54-Peer" is connected to another instance with v54 version.

Data Protection

Enable S3 Bucket replication

Production Cluster

  • From the VAST UI, go to Settings.

  • Click S3, then enable Bucket Replication.

The configuration interface displays options to enable S3 HTTP, HTTPS, and CORS features, alongside other S3 security and replication settings.

Enable bucket replication

 

ℹ️ Info

You will be prompted to confirm enabling bucket replication. Note that this action is not reversible.

The dialog box in the image alerts users that enabling bucket replication is irreversible and will overwrite existing buckets, requiring careful consideration before proceeding. It also notes that replication can only occur with default S3 policies to ensure data consistency across regions.

Enable bucket replication confirmation

 

Configure Protected Path

  • From the VAST UI, navigate to Data Protection.

  • Click on Protected Path.

  • Click Create Protected Path, then choose New Remote Protected Path.

The image shows a user interface where users can create protected paths, with options to either establish a new local protected path or a new remote protected path.

Create Protected Path

  • Name the new protected path and fill in the Path field.

  • Note that you can set the path to a specific bucket or to an endpoint. In this example, we’ve pointed to an endpoint, so every bucket created under this endpoint will be included in the replication.

The screenshot shows the first step in creating a remote protected path, where users need to specify the name and path on their source cluster that will be made globally accessible, as part of defining a protected path setup process.

Create new protected path

  • Click Next to continue.

  • Select Replication Type = Async Replication.

  • Specify which Protection Policy to use; if none exists, click on Create new policy and fill in the needed parameters that best meet your operational requirements.

The image shows the configuration screen for adding a protection policy in a backup and recovery system, where users can define details such as the policy name, peer connection, snapshot prefix, and scheduling options for local and remote copies.

Add protection policy

  • In this example, we’ve selected a preconfigured policy.

  • Specify the Path parameter; note that the path must not already be configured on the remote system.

  • Click Add.

The screenshot shows the process to create a remote protected path with asynchronous replication mode and an S3 protection policy, where the source cluster is specified as "v54" and the target path is defined under the selected tenant in the secondary/destination section.

Set protected path parameters

 

  • Click the Create button, then wait for the new path to complete initialization and become Active.

The screenshot illustrates the configuration interface to create a remote protected path, where local snapshots from the source/primary (v3115/default) will be asynchronously replicated and saved every 15 seconds for up to 30 minutes in the destination/secondary (v54/default). The replication process endpoint policy is applied for this setup.

Create Remote Protected Path

Configure “Remote” system local user (On the DR cluster)

  • On the remote system, from the VAST UI, navigate to User Management => Users.

  • Click the Create User button, then provide a username and a UID.

  • Use the Identity Policy drop-down menu to select the required policy (in the example, we’ve used the policy created in the previous step).

  • Click on the Create button.

The image displays an "Add User" interface where users can configure permissions, such as allowing create and delete bucket actions, within Amazon S3. The user has specified the user identity policies to be "S3-Perlication-Allow-All-Policy," although the leading group is currently set to "Select Group."

Add User

  • Click Edit on the newly created user and select the Provide access and secret keys option.

  • Enter the access and secret keys of the user created on the “Local” system.

The screenshot displays an interface for updating user permissions, including fields to modify the user name, select groups, and manage access keys. The interface also includes options to enable identity policies, and toggle controls to allow or deny bucket creation and deletion.

Update User

  • Click the Update button to complete the operation.

 

ℹ️ Info

At this point, each bucket you create on the “Local” system will be replicated to the “Remote” system.
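
To see this behavior in practice, the sketch below creates a bucket through the production endpoint and then checks for it on the DR endpoint. The endpoint names and bucket name are illustrative, the production user must be allowed to create buckets, and the bucket only appears on the DR side after the next replication cycle defined by your protection policy.

import boto3

def make_client(endpoint):
    # Both clusters use the same access/secret keys, as configured above (illustrative values).
    return boto3.client(
        's3',
        endpoint_url=endpoint,
        aws_access_key_id='YOUR_ACCESS_KEY',
        aws_secret_access_key='YOUR_SECRET_KEY'
    )

prod = make_client('http://s3.prod.vast.local')  # "Local" / production cluster
dr = make_client('http://s3.dr.vast.local')      # "Remote" / DR cluster

# Create a bucket under the protected endpoint on the production cluster.
prod.create_bucket(Bucket='replication-test-bucket')

# After the next replication cycle, the bucket should be visible (read-only) on the DR cluster.
dr.head_bucket(Bucket='replication-test-bucket')
print("Bucket is visible on the DR cluster")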

 

Initiate FailOver

  • To initiate FailOver, go to the UI of the remote (DR) cluster.

  • Go to Data Protection.

  • Go to Protected Paths.

  • Right-click on the newly created protected path and select:
    Replication => Modify Replication State.

The provided screenshot illustrates the Data Protection interface in a cloud storage management system, showcasing options to manage protected paths including replication settings and operational controls such as replicating now or modify replication state.

Modify Replication State

  • Select the FailOver mode based on your preference:

  • Stand alone: Any updates done afterwards at <your path name> on <your “local” cluster> are not available at <your “remote” cluster>.

  • Source: The configured replication interval is x. The estimated Read-Only time is xxx. To reduce the Read-Only time, you may reduce the replication interval.

  • Click on the FailOver button.

The image displays a configuration overlay for modifying replication state, where it proposes to change the writing source/standalone status from the current setup (v54 reading, v3115 writable) to an updated flow where both endpoints will be in read-only mode.

Modify replication state

  • Click Yes on the confirmation box.

The image shows a confirmation dialog asking whether to modify the replication state, where the system will change its writing source/standalone according to the specified definitions. The updated flow diagram illustrates that the S3-AsyncRep-EndPath is being set as a writeable source while another endpoint becomes read-only.

Confirm modification

  • Monitor the Role of the protected path and wait until it changes from Destination to Source.

The screenshot displays the "Data Protection" section, specifically under the "Protected Paths" tab, with various columns such as Role, State, Health, Local Path, Tenant, Destination Path, Replication Peer, and Protection Policy, indicating the setup options available for managing protected paths in a replication or snapshot environment.

Observe changes

  • Once the Role changes to Source, the failover is complete.

ℹ️ Info

While it is the replication Destination, the remote (DR) bucket is in ReadOnly mode; attempting to write to it returns an “(AccessDenied) when calling the PutObject operation: Access Denied” error message.
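
If your application may write to the DR endpoint before a failover, it can be useful to handle this error explicitly. Below is a minimal sketch that catches the AccessDenied response from a PutObject attempt; the endpoint, bucket, and key names are illustrative.

import boto3
from botocore.exceptions import ClientError

# DR cluster client (illustrative endpoint and credentials).
s3_client = boto3.client(
    's3',
    endpoint_url='http://s3.dr.vast.local',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)

try:
    # Writes to the DR bucket are rejected while it is the replication Destination.
    s3_client.put_object(Bucket='prod-bucket', Key='test.txt', Body=b'hello')
except ClientError as err:
    if err.response['Error']['Code'] == 'AccessDenied':
        print("DR bucket is read-only; fail over before writing to it")
    else:
        raise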

ℹ️ Info

Note that FailOver and FailBack are always performed from the “Remote/DR” cluster.

Client configuration notes

ℹ️ Info

Although the above procedure covers the system behavior, client adjustments are required for operational continuity. Follow the baseline instructions below for configuring the client.

Assume the production cluster is accessible via the DNS name s3.prod.vast.local, the bucket is prod-bucket, and the remote (DR) cluster is accessible via s3.dr.vast.local. Provided prod-bucket is already included in the protected path (or was simply created under the protected endpoint), all we need to do to establish continuity is point the application to the new (active) system by updating the S3 endpoint.

Example (Python with Boto3):

Before (Production):

import boto3

# Connect to production S3 cluster
s3_client = boto3.client(
    's3',
    endpoint_url='http://s3.prod.vast.local',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)

# Verify bucket access by listing objects
response = s3_client.list_objects_v2(Bucket='prod-bucket')
for item in response.get('Contents', []):
    print(f"Object found: {item['Key']}")

After Failover (DR cluster becomes active):

import boto3

# Connect to the DR S3 cluster
s3_client = boto3.client(
    's3',
    endpoint_url='http://s3.dr.vast.local',  # Update only this line
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)

# Verify bucket access by listing objects
response = s3_client.list_objects_v2(Bucket='prod-bucket')
for item in response.get('Contents', []):
    print(f"Object found: {item['Key']}")