ℹ️ Info
This document is intended to provide clear instructions for configuring S3 Asynchronous replication using a local user and bucket policy.
The steps listed in this procedure have been tested on VAST 5.2 and above.
Create a Bucket Policy
Important notes:
Policy create/attach/delete operations can be performed only with a third-party tool such as aws-cli.
Only the bucket owner can attach or modify bucket policies.
Supported starting from VAST Cluster version 5.2.
Bucket policies can only be managed from the CLI. First, log in to a machine with access to the S3 endpoint and create a JSON file containing the desired policy. The example below is a policy template that allows all actions on a specific bucket and its subdirectories; for more details on policy permutations and tuning, review the bucket policy wiki or the AWS documentation.
Policy JSON (allow-all-s3.json):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "allow-all-s3",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::managed-by-b-policy",
        "arn:aws:s3:::managed-by-b-policy/*"
      ]
    }
  ]
}
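Before attaching the policy, it is worth sanity-checking the file, since a malformed JSON body will be rejected. Below is a small sketch using only the Python standard library; the field checks are illustrative, not an exhaustive policy validation:

```python
import json

def validate_policy(text):
    """Parse a bucket-policy document and run a few basic sanity
    checks before attaching it with aws-cli."""
    policy = json.loads(text)  # raises json.JSONDecodeError on bad JSON
    assert policy.get('Version') == '2012-10-17', 'unexpected policy version'
    for stmt in policy['Statement']:
        # Every statement needs at least these fields.
        missing = {'Effect', 'Action', 'Resource'} - set(stmt)
        assert not missing, f'statement missing fields: {missing}'
    return policy

# Usage: validate_policy(open('allow-all-s3.json').read())
```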
Attach Policy to Bucket
Use the command below to attach the policy file (allow-all-s3.json) to the managed-by-b-policy bucket.
aws s3api put-bucket-policy \
--bucket managed-by-b-policy \
--policy file://allow-all-s3.json \
--endpoint-url http://172.27.115.1
You can (and should) validate the attachment by running the command below.
aws s3api get-bucket-policy \
--bucket managed-by-b-policy \
--endpoint-url http://172.27.115.1
Create a local user (On the production cluster)
From the VAST UI, navigate to User Management.
Click on Users.
Click on the Create user button, provide a user name and a UID.
Click on the Create button.

Add user
Click Edit on the newly created user and hit the Create new keys button.
Make sure to save a copy of the key before hitting the Update button.

Update User
ℹ️ Info
Note that to enable access on the remote cluster in case of a failover, you need to set a user with the same keys on the remote cluster.
Create a Replication Peer
From the VAST UI, go to Data Protection => Replication Peers and click on the Create Peer button.
Name the new peer and provide the Remote VIP address, choose the Local IP Pool to use, and click on Create.

Add native replication peer
Wait a few moments and verify that the peer is connected before moving on.

Verify replication setup
Enable S3 Bucket Replication
Production Cluster
From the VAST UI, go to Settings.
Click on S3 and Enable the Bucket Replication.

Enable the Bucket Replication
ℹ️ Info
You will be prompted to confirm enabling bucket replication. Note that this option is not reversible.

Verify that Bucket Replication is enabled
Configure Protected Path
From the VAST UI, navigate to Data Protection.
Click on Protected Path.
Click on Create Protected Path and choose New Remote Protected Path.

New Remote Protected Path
Name the new protected path and fill in the Path field.
Note that you can set the path to a specific bucket or to an endpoint. In this example, we’ve pointed to an endpoint, so every bucket created under this endpoint will be included in the replication.

Name the new protected path
Click Next to continue.
Select Replication Type = Async Replication.
Specify which Protection Policy to use; if none exists, click on Create new policy and fill in the needed parameters that best meet your operational requirements.

Create new policy
In this example, I’ve selected a preconfigured policy.
Specify the Path parameter; note that the path must not already be configured on the remote system.
Click Add.

Specify the Path parameter
Click the Create button and wait for the new path to complete the initialization process and to become Active.

Create a remote protected path
Configure “Remote” System Local User (On the DR cluster)
On the remote system from the VAST UI, navigate to “User Management” => “Users”.
Click on the Create user button, provide a user name, and a UID.
Click on the Create button.

Add user
Click Edit on the newly created user and select the Provide access and secret keys option.
Enter the access and secret keys of the user created on the “Local” system.

Update user
Click the Update button to complete the operation.
ℹ️ Info
At this point, each bucket you create on the “Local” system will be replicated to the “Remote” system.
Initiate FailOver
To initiate FailOver, go to the web UI of the remote (DR) cluster.
Go to Data Protection.
Go to Protected Paths.
Right click on the newly created protected path and select:
Replication => Modify Replication State

Modify Replication State
Select FailOver based on your preference.
Stand alone: Any updates done afterwards at <your path name> on <your “local” cluster> are not available at <your “remote” cluster>.
Source: The configured replication interval is x. The estimated Read-Only time is xxx. To reduce the Read-Only time, you may reduce the replication interval.
Click on the FailOver button.

Modify Replication State - failover
Click Yes on the confirmation box.

Modify failover
Monitor the Role of the protected path and wait until it changes from Destination to Source.

Monitor status
Once the role changes, the failover operation is complete.
ℹ️ Info
The remote (DR) bucket is always in read-only mode; attempting to write to it will return an “(AccessDenied) when calling the PutObject operation: Access Denied” error message.
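This error surfaces in boto3 as a botocore ClientError whose error code is AccessDenied. Below is a minimal sketch of a check a client could run to recognize this state; the helper and its use are illustrative, and it only inspects the standard error-response shape (no network calls):

```python
def is_dr_read_only(error_response):
    """Return True when an S3 error response indicates the DR bucket's
    read-only (replication destination) state, i.e. AccessDenied."""
    return error_response.get('Error', {}).get('Code') == 'AccessDenied'

# With a boto3 client this would be used roughly as:
# try:
#     s3_client.put_object(Bucket='prod-bucket', Key='probe', Body=b'')
# except botocore.exceptions.ClientError as e:
#     if is_dr_read_only(e.response):
#         print('Bucket is still a replication destination; fail over first.')
```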
ℹ️ Info
Note that FailOver and FailBack are always performed from the “Remote/DR” cluster.
Client Configuration Notes
ℹ️ Info
Although the above procedure covers the system behavior, client adjustments are required for operational continuity. Follow the baseline instructions below for configuring the client.
In this example, the production cluster is accessible via the DNS name s3.prod.vast.local, the bucket is prod-bucket, and the remote (DR) cluster is accessible via s3.dr.vast.local. Assuming prod-bucket is already included in the protected path (or was simply created under the protected endpoint), all we need to do to establish continuity is point the application at the new (active) system by updating the S3 endpoint.
Example (Python with Boto3):
Before (Production):
import boto3

# Connect to production S3 cluster
s3_client = boto3.client(
    's3',
    endpoint_url='http://s3.prod.vast.local',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)

# Verify bucket access by listing objects
response = s3_client.list_objects_v2(Bucket='prod-bucket')
for item in response.get('Contents', []):
    print(f"Object found: {item['Key']}")
After Failover (DR cluster becomes active):
import boto3

# Connect to the DR S3 cluster
s3_client = boto3.client(
    's3',
    endpoint_url='http://s3.dr.vast.local',  # Update only this line
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)

# Verify bucket access by listing objects
response = s3_client.list_objects_v2(Bucket='prod-bucket')
for item in response.get('Contents', []):
    print(f"Object found: {item['Key']}")
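To avoid editing application code at failover time, the endpoint switch can also be wrapped in a small helper that tries each endpoint in order. The sketch below reuses this example's names; the injectable client_factory and the broad exception handling are simplifications for illustration (in practice you would narrow the except clause to botocore's ClientError and EndpointConnectionError):

```python
ENDPOINTS = ['http://s3.prod.vast.local', 'http://s3.dr.vast.local']

def connect_with_failover(endpoints, bucket, access_key, secret_key,
                          client_factory=None):
    """Return (client, endpoint) for the first endpoint that can reach
    the bucket. client_factory is injectable so the selection logic is
    testable without a live cluster."""
    if client_factory is None:
        import boto3  # imported lazily; only needed for real connections
        client_factory = lambda url: boto3.client(
            's3', endpoint_url=url,
            aws_access_key_id=access_key,
            aws_secret_access_key=secret_key)
    last_err = None
    for url in endpoints:
        client = client_factory(url)
        try:
            client.head_bucket(Bucket=bucket)  # cheap reachability probe
            return client, url
        except Exception as e:  # simplification: narrow this in production
            last_err = e
    raise RuntimeError(f'no endpoint reachable, last error: {last_err}')
```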