S3 Asynchronous Replication with Local Users and Bucket Policy


ℹ️ Info

This document provides instructions for configuring S3 asynchronous replication using a local user and a bucket policy.
The steps in this procedure have been tested on VAST 5.2 and above.


Create a Bucket Policy

Important notes:

  • Policy create/attach/delete operations can be performed only with a third-party tool such as aws-cli.

  • Only the bucket owner can attach or modify bucket policies.

  • Supported starting from VAST Cluster version 5.2.


Bucket policies can only be managed from the CLI. First, log in to a machine with access to the S3 endpoint and create a JSON file with the desired policy. The example below is a policy template that allows all actions on a specific bucket and its subdirectories; see the bucket policy wiki or the AWS documentation for more details on policy permutations and tuning.

Policy JSON (allow-all-s3.json):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "allow-all-s3",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::managed-by-b-policy",
        "arn:aws:s3:::managed-by-b-policy/*"
      ]
    }
  ]
}
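When scripting against more than one bucket, the template above can be generated programmatically rather than edited by hand. A minimal sketch using only the Python standard library (the bucket name managed-by-b-policy is simply the example used throughout this document):

```python
import json

def build_allow_all_policy(bucket: str) -> str:
    """Generate an allow-all bucket policy JSON document for `bucket`.

    Mirrors the allow-all-s3.json template above: one Allow statement
    covering the bucket ARN and every object under it.
    """
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "allow-all-s3",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }
    return json.dumps(policy, indent=2)

# Write the example policy file used in the commands below.
with open("allow-all-s3.json", "w") as f:
    f.write(build_allow_all_policy("managed-by-b-policy"))
```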

Attach Policy to Bucket

  • Use the following command to attach the policy file (allow-all-s3.json) to the managed-by-b-policy bucket.

aws s3api put-bucket-policy \
  --bucket managed-by-b-policy \
  --policy file://allow-all-s3.json \
  --endpoint-url http://172.27.115.1

  • You can (and should) validate the attachment by running the command below.

aws s3api get-bucket-policy \
  --bucket managed-by-b-policy \
  --endpoint-url http://172.27.115.1
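Note that get-bucket-policy returns the policy as a JSON string whose whitespace and key order may differ from the file you uploaded, so a byte-for-byte comparison can fail even when the attachment succeeded. A small helper sketch for comparing the two structurally (standard library only; the sample strings are illustrative):

```python
import json

def policies_match(returned: str, expected: str) -> bool:
    """Return True if two policy JSON documents are structurally
    identical, ignoring whitespace and key order."""
    return json.loads(returned) == json.loads(expected)

# Example: key order and whitespace differ, but the content is the same.
a = '{"Version": "2012-10-17", "Statement": []}'
b = '{"Statement": [],   "Version": "2012-10-17"}'
print(policies_match(a, b))  # True
```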

Create a local user (On the production cluster)

  • From the VAST UI, navigate to User Management.

  • Click on Users.

  • Click on the Create user button, provide a user name and a UID.

  • Click on the Create button.

The image depicts an interface where a user is in the process of adding a new user, with fields to input the name and UID, select leading groups, choose specific groups, manage permissions by toggling settings (Allow Create Bucket and Allow Delete Bucket), and apply identity policies from a dropdown list.

Add user

  • Click Edit on the newly created user and hit the Create new keys button.

  • Make sure to save a copy of the key before hitting the Update button.

The screenshot displays an interface for updating user details, including fields such as name, UID, and access keys status. The interface also includes options to manage access permissions through identity policies and group selections, with a critical notice about the temporary nature of the secret key displayed after closing the window.

Update User

ℹ️ Info

Note that to enable access on the remote cluster in case of a failover, you need to set a user with the same keys on the remote cluster.

 

Create a Replication Peer

  • From the VAST UI, go to Data Protection.

  • Click on Replication Peers, then click on the Create Peer button.

  • Name the new peer and provide the Remote VIP address, choose the Local IP Pool to use, and click on Create.

The image shows a form for configuring a native replication peer, requiring input fields such as "Peer name," "Remote VIP," and "Local Virtual IP pool." Secure mode selection is also required before creating or dismissing changes.

Add native replication peer

  • Wait a few moments and verify that the peer is connected before moving on.

The screenshot displays the "Replication Peers" section within a data protection interface, showing a connected replication peer named V151-To-V54-Peer with details such as its local and remote virtual IP pools, space left, last heartbeat timestamp, and version number.

Verify replication setup

 

Enable S3 Bucket Replication

Production Cluster

  • From the VAST UI, go to Settings.

  • Click on S3 and Enable the Bucket Replication.

 

The interface displays configuration options related to Amazon S3, including enabling HTTP and HTTPS protocols, CORS settings, and signature verification methods such as S3 Signature V2 and V4. Additionally, bucket replication is enabled, with an advisory about the security implications of using SigV2.

This configuration page allows users to manage various aspects of their AWS S3 environment, ensuring both functionality and security in accordance with specific use cases and security best practices.

Enable the Bucket Replication

 

ℹ️ Info

You will be prompted to enable the replication. Note that this option is not reversible.

 

The image shows a confirmation dialog in an S3 management interface, warning users about the irreversible nature of enabling bucket replication and its only compatibility with default S3 policies, which will overwrite existing buckets if enabled.

Verify that Bucket Replication is enabled

 

Configure Protected Path

  • From the VAST UI, navigate to Data Protection.

  • Click on Protected Path.

  • Click on Create Protected Path and choose New Remote Protected Path.

The image displays an interface where users can create protected paths, with options to either generate a new local or remote path. This feature is likely part of a system designed for secure data management and access control.

New Remote Protected Path

  • Name the new protected path and fill in the Path field.

  • Note that you can set the path to a specific bucket or to an endpoint. In this example, we’ve pointed to an endpoint, so every bucket that will be created under this endpoint will be included within the replication.

The image shows the initial step in creating a remote protected path, where users need to specify the source cluster's name and tenant along with the globally accessible path on that cluster. The next step involves adding the destination/secondary component.

Name the new protected path

 

  • Click Next to continue.

  • Select Replication Type = Async Replication.

  • Specify which Protection Policy to use; if none exists, click on Create new policy and fill in the needed parameters that best meet your operational requirements.

The screenshot displays the interface to create a protection policy, where users can set parameters such as the policy name, peer connection details, and schedule for snapshot creation and retention periods both locally and remotely.

Create new policy

 

  • In this example, I’ve selected a preconfigured policy.

  • Specify the Path parameter and note that the path must not be already configured on the remote system.

  • Click Add.

The screenshot displays the "Create Remote Protected Path" interface, where users can configure settings for asynchronous replication between clusters, selecting protection policies and specifying remote paths to manage data access and synchronization effectively.

Specify the Path parameter

 

  • Click the Create button and wait for the new path to complete the initialization process and to become Active.

The screenshot illustrates the setup interface for creating a remote protected path, where local snapshots will be asynchronously replicated to an S3 Async Replication Endpoint destination every 15 seconds and saved for 30 minutes. The primary source and secondary destination endpoints along with their respective policies are clearly defined in the interface.

Create a remote protected path

 

Configure “Remote” System Local User (On the DR cluster)

  • On the remote system from the VAST UI, navigate to “User Management” => “Users”.

  • Click on the Create user button, provide a user name, and a UID.

  • Click on the Create button.

The image shows an interface for adding a new user in an AWS IAM management console, where users can specify details such as name and groups, enabling or disabling permission to create and delete S3 buckets, and applying identity policies like "S3-Perlication-Allow-All-Policy."

Add user

  • Click Edit on the newly created user and enable the Provide access and secret keys option.

  • Enter the access and secret keys of the user created on the “Local” system.

The image depicts an interface for updating AWS IAM user permissions, where users can modify access keys, select groups and leading groups, adjust bucket creation and deletion permissions, and apply identity policies to control access.

Update user

  • Click the Update button to complete the operation.

 

ℹ️ Info

At this point, each bucket you create on the “Local” system will be replicated to the “Remote” system.
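To confirm that a newly created bucket has appeared on the remote system, you can poll the DR endpoint, for example by wrapping a boto3 head_bucket call. A generic polling sketch (the check callable, timeout, and interval are placeholders; exception handling around the actual S3 call is left to the caller):

```python
import time

def wait_for(check, timeout=60.0, interval=2.0):
    """Poll check() until it returns True or `timeout` seconds elapse.

    `check` might wrap e.g. a head_bucket call against the DR endpoint
    (hypothetical usage); returns False if the deadline passes first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```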

 

Initiate FailOver

  • To initiate FailOver, go to the web UI of the remote (DR) cluster.

  • Go to Data Protection.

  • Go to Protected Paths.

  • Right click on the newly created protected path and select:
    Replication => Modify Replication State

The screenshot displays the "Data Protection" interface, focusing on protected paths with options to manage replication states and perform operations like activating, deactivating, or modifying replication settings directly from the context menu dropdown.

Modify Replication State

  • Select the FailOver mode based on your preference:

  • Stand alone: Any updates done afterwards at <your path name> on <your “local” cluster> are not available at <your “remote” cluster>.

  • Source: The configured replication interval is x. The estimated Read-Only time is xxx. To reduce the Read-Only time, you may reduce the replication interval.

  • Click on the FailOver button.

The image illustrates the process to modify replication state, transferring the writeable status from S3-AsyncRep-EndPoint-v54-dest to S3-AyncRep-EndPoint and vice versa, with an option to either dismiss or fail over the configuration changes.

Modify Replication State - failover

  • Click Yes on the confirmation box.

The image displays a confirmation dialog asking if the user wishes to modify the replication state, with options to confirm ('Yes') or cancel action ('No'). Below this dialog, it outlines an updated flow from 'v54' (Writeable) to 'v3115' (Read), indicating a change in the system's writing source/standalone status according to specified definitions.

Modify failover

 

  • Monitor the Role of the protected path and wait until it changes from Destination to Source.

The screenshot displays the "Data Protection" section in an application, focusing on the "Protected Paths" tab, where attributes such as Role (Destination), State, Health, Local Path, Tenant, Destination Path, Replication Peer, and Protection Policy can be reviewed for the S3 Async Replication configuration.

Monitor status

  • Once the Role changes, the failover is complete.

ℹ️ Info

The remote (DR) bucket is always in ReadOnly mode; attempting to write to it returns an “(AccessDenied) when calling the PutObject operation: Access Denied” error message.
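On the client side, this surfaces in boto3 as a botocore ClientError whose .response dict carries the error code. A hedged sketch for recognizing it without importing botocore (the dict shape shown follows the botocore error-response convention):

```python
def is_access_denied(err_response: dict) -> bool:
    """Return True if a botocore-style error response dict reports
    AccessDenied (e.g. a PutObject against the read-only DR bucket)."""
    return err_response.get("Error", {}).get("Code") == "AccessDenied"

# Illustrative shape of ClientError.response for the error quoted above:
resp = {"Error": {"Code": "AccessDenied", "Message": "Access Denied"}}
print(is_access_denied(resp))  # True
```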

ℹ️ Info

Note that FailOver and FailBack are always performed from the “Remote/DR” cluster.

 

Client Configuration Notes

ℹ️ Info

Although the above procedure covers the system behavior, client adjustments are required for operational continuity. Follow the baseline instructions below for configuring the client.

 

Assume the production cluster is accessible via the DNS name s3.prod.vast.local, the bucket is prod-bucket, and the remote (DR) cluster is accessible via s3.dr.vast.local. Provided prod-bucket is already included in the protected path (or was simply created under the protected endpoint), all that is needed to establish continuity is to point the application at the new (active) system by updating the S3 endpoint.

Example (Python with Boto3):

Before (Production):

import boto3
# Connect to production S3 cluster
s3_client = boto3.client(
    's3',
    endpoint_url='http://s3.prod.vast.local',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)
# Verify bucket access by listing objects
response = s3_client.list_objects_v2(Bucket='prod-bucket')
for item in response.get('Contents', []):
    print(f"Object found: {item['Key']}")

After Failover (DR cluster becomes active):

import boto3
# Connect to the DR S3 cluster
s3_client = boto3.client(
    's3',
    endpoint_url='http://s3.dr.vast.local',  # Update only this line
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)
# Verify bucket access by listing objects
response = s3_client.list_objects_v2(Bucket='prod-bucket')
for item in response.get('Contents', []):
    print(f"Object found: {item['Key']}")
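Rather than editing application code during a failover, the endpoint can be resolved from configuration. A minimal sketch under stated assumptions: the S3_ENDPOINT_URL environment variable name is hypothetical, and the URLs are the DNS names from the example above.

```python
import os

# Production endpoint from the example above; S3_ENDPOINT_URL is a
# hypothetical environment variable chosen for this sketch.
PRIMARY_ENDPOINT = "http://s3.prod.vast.local"

def active_endpoint() -> str:
    """Resolve the S3 endpoint from the environment, falling back to
    the production endpoint; a failover then only needs an env change."""
    return os.environ.get("S3_ENDPOINT_URL", PRIMARY_ENDPOINT)
```

Constructing the client with boto3.client('s3', endpoint_url=active_endpoint(), ...) then picks up the DR endpoint as soon as S3_ENDPOINT_URL is set to http://s3.dr.vast.local, with no code change.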