VAST supports replication of S3 objects to another VAST cluster via both synchronous and asynchronous replication. To synchronise S3 users and Identity Policies across clusters, it is strongly recommended to use VAST users that map to users in an Active Directory domain accessible from both VAST clusters. If this is not possible, local VAST users can be used; however, there are some limitations, and some additional configuration is required.
The following features can NOT be used when replicating S3 buckets using local users:
ACLs
Access/Secret Key Replication
Identity Policy Replication
Bucket Replication
The following features CAN be used when replicating S3 buckets using local users:
Identity Policies (however, they must be set up manually on each cluster)
Bucket Policies
To map users between clusters being used for replication, VAST normally relies on an external directory service (or “Provider”) to provide a common source of reference for user accounts. When an S3 Access Key is assigned to a user residing in an external provider, such as Active Directory or LDAP, details of that user (including policies and access/secret keys) will be replicated to the remote VAST cluster, where they will be mapped to the same external directory user.
When local users are used, the clusters have no way to correlate accounts with each other. This means that a number of extra steps are required when using local users and replication together.
User Creation
As local users are not replicated between clusters, the user must be created as a local user on both clusters. However, it is important to understand that even if the user is created with the same username and the same UID on each cluster, VAST still treats them as two separate users. This is because, internally, VAST assigns each user an identifier known as a “VID”, which is unique for every user, even across clusters. When an external directory service is being used, details from that directory service (e.g., the user's SID in Active Directory) can be used to map between these VIDs on multiple clusters so the two users can be treated as one; without a common external directory service, this is not possible.
Create a user on each cluster with the same username and UID. (Technically, the username and UID do not need to match; however, management will be much easier if they do.)
S3 Access Keys
Recent versions of VAST allow specifying an S3 key for a user rather than having one randomly generated. To give the user the same S3 key on each cluster, use the “Create S3 Access Key” option on the first cluster to generate a random key, then use “Manually Add S3 Access Key” on the second cluster to enter the same access/secret key pair that was generated on the first cluster.
Identity Policies
Although we have created a user with the same name and keys on each cluster, the two VAST clusters still consider them to be different users (due to their unique VIDs). This means that any objects replicated from one cluster to the other will be inaccessible to the user on the target cluster, as that user will not have permission to access objects owned by a different VID.
To work around this, we need to create an Identity Policy on BOTH clusters that gives the bucket owner full access to all objects in the bucket. This would normally not be required, as the Bucket Owner generally has access to the bucket and its contents; here, however, it is needed because the replicated files have a different owner/VID. Identity Policies are needed on both clusters because replication can occur in either direction (e.g., after a failover).
Create an identity policy similar to the following on BOTH clusters, replacing BUCKETNAME with the name of the bucket being replicated. Note that the cluster will replicate the created policy to the remote system; however, as mentioned above, replicated identity policies are not supported when using local users, so the same policy must be created on each cluster independently. For this reason, it’s a good idea to give each policy a unique name, for example by prefixing the policy name with the cluster name.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::BUCKETNAME",
        "arn:aws:s3:::BUCKETNAME/*"
      ]
    }
  ]
}

Assign the policy created above to the local user on each cluster by editing the user and selecting the local version of the policy in the “Identity Policies” dropdown.
The example above shows a policy for a single bucket. You can either create one policy per bucket and assign it to the bucket owner, or, if the same user owns multiple buckets, create a single policy that lists each bucket in the Resource section (two entries are required per bucket: one for BUCKETNAME and one for BUCKETNAME/*).
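If many buckets are involved, generating the policy document is less error-prone than editing it by hand. The helper below is a minimal sketch (the function name is our own) that emits the two required Resource entries per bucket in the same shape as the policy above.

```python
# Sketch: build the full-access identity-policy document for one or more
# buckets, producing the two Resource entries (bucket and bucket/*) that
# each bucket requires. The function name is illustrative, not a VAST API.
import json


def identity_policy(bucket_names):
    """Return a policy dict granting s3:* on every listed bucket."""
    resources = []
    for name in bucket_names:
        resources.append(f"arn:aws:s3:::{name}")    # the bucket itself
        resources.append(f"arn:aws:s3:::{name}/*")  # all objects within it
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Action": "s3:*", "Effect": "Allow", "Resource": resources}
        ],
    }


# Print the JSON to paste into the identity-policy editor on each cluster.
print(json.dumps(identity_policy(["bucket1", "bucket2"]), indent=2))
```

Paste the printed JSON into the identity-policy editor on each cluster; remember that each cluster still needs its own uniquely named copy.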
Creating the View on the Target System
After replication has been set up, you will need to create the View (bucket) on the target system. Note that this cannot be done before replication has completed the “Initial Sync” phase, as the correct directory structure will not exist until then.
Once replication has completed the initial sync phase, create the view with the same settings as on the source cluster, including View Policy and Bucket Owner.
At this point, your user should be able to access the bucket on both clusters. On the source, the bucket will be fully read-write for that user; on the target, it will be read-only when using asynchronous replication, or read-write when using synchronous replication.