Prepare your VAST cluster as a migration destination for SyncEngine. This guide covers creating the required NFS views (for POSIX migrations) and S3 buckets (for S3 migrations) using vastpy-cli.
Note: All of these steps can also be performed using the VAST VMS Web UI. This guide focuses on the CLI approach. For UI instructions, refer to the VAST Support Documentation.
Prerequisites
- Network connectivity from your host to the VAST VMS at https://<VMS_IP>:443
- Python 3 installed
- Admin credentials for the VAST cluster
Install vastpy-cli
This can be done on the SyncEngine control plane host or any host with network access to the VAST VMS.
# Install vastpy package
pip3 install vastpy
# Set up environment variables
export VMS_USER=admin
export VMS_PASSWORD=<your-vast-password>
export VMS_ADDRESS=<vast-vms-address>
# Verify connectivity to your VAST cluster
vastpy-cli --json get clusters | jq -r '.[0].name'
You should see your cluster name in the output.
NFS (POSIX) Destination Setup
Follow this section if you are migrating from POSIX filesystems (NFS) to VAST NFS.
Step 1: Create NFS View Policy
SyncEngine workers need nosquash root access so they can write data with root privileges during migration. You should also set both path length and allowed characters to Native Protocol Limit (NPL), so that long paths and special characters from the source are not rejected by the destination.
# Check existing view policies
vastpy-cli --json get viewpolicies | jq '.[] | {id: .id, name: .name, nfs_no_squash: .nfs_no_squash}'
Create a new policy with nosquash enabled for your worker hosts:
vastpy-cli --json post viewpolicies \
name='nfs-migration-policy' \
flavor='NFS' \
path_length='NPL' \
allowed_characters='NPL' \
"nfs_no_squash"='["<worker-host-1-ip>", "<worker-host-2-ip>"]'
Replace <worker-host-1-ip> and <worker-host-2-ip> with the actual IP addresses of your worker hosts. CIDR notation is also supported (e.g., 172.200.0.0/16).
Verify the policy:
vastpy-cli --json get viewpolicies name='nfs-migration-policy' | \
  jq '.[] | {
    id: .id,
    name: .name,
    path_length: .path_length,
    allowed_characters: .allowed_characters,
    nfs_no_squash: .nfs_no_squash
  }'
Expected output:
{
  "id": 48,
  "name": "nfs-migration-policy",
  "path_length": "NPL",
  "allowed_characters": "NPL",
  "nfs_no_squash": [
    "172.200.0.0/16",
    "10.10.10.10"
  ]
}
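If you script this setup, the same verification can be turned into a hard check with jq -e, which exits non-zero when any condition fails. The sketch below uses the expected output above as sample input; pipe the live vastpy-cli output in its place:

```shell
# Sample input: the expected policy JSON from above. In a live run, replace
# it with: vastpy-cli --json get viewpolicies name='nfs-migration-policy' | jq '.[0]'
POLICY_JSON='{"id":48,"name":"nfs-migration-policy","path_length":"NPL","allowed_characters":"NPL","nfs_no_squash":["172.200.0.0/16","10.10.10.10"]}'

# jq -e exits non-zero if any condition is false, so a setup script can stop
# early instead of proceeding with a root-squashed view policy.
echo "$POLICY_JSON" | jq -e '
  .path_length == "NPL"
  and .allowed_characters == "NPL"
  and (.nfs_no_squash | length > 0)' > /dev/null && echo "policy OK"
```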
Step 2: Create NFS View
# Get the policy ID from previous step
POLICY_ID=$(vastpy-cli get viewpolicies name='nfs-migration-policy' --json | jq -r '.[0].id')
echo $POLICY_ID
# Create NFS view
vastpy-cli --json post views \
name='nfs data migration dest' \
path='/data-migration-dest' \
policy_id=$POLICY_ID \
create_dir=true \
protocols='["NFS"]'
# Verify view creation
vastpy-cli --json get views path='/data-migration-dest' | \
  jq '.[] | {
    id: .id,
    name: .name,
    path: .path,
    viewpolicy: .policy
  }'
Step 3: Mount on Worker Hosts
Mount the source and destination filesystems on each worker host.
Mount the source NFS server:
sudo mkdir -p /mnt/source
sudo mount -t nfs -o vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
<source-nfs-server>:/migration-source /mnt/source
Mount VAST NFS destination (standard NFS):
sudo mkdir -p /mnt/vast-dest
sudo mount -t nfs -o vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
<vast-dns-name>:/data-migration-dest /mnt/vast-dest
Mount VAST NFS destination (with vast-nfs client for load balancing):
sudo mkdir -p /mnt/vast-dest
sudo mount -t nfs -o vers=3,proto=tcp,nconnect=8,remoteports=<vast-vip-range>,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
<vast-dns-name>:/data-migration-dest /mnt/vast-dest
The nconnect=8 and remoteports=<vast-vip-range> options enable load balancing across multiple VAST VIPs. Replace <vast-vip-range> with your VAST VIP pool range (e.g., 172.25.1.1-172.25.1.32). For more details on VAST NFS mount options, refer to the VAST NFS Documentation.
Verify mounts:
df -h /mnt/source /mnt/vast-dest
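Beyond df, you can confirm which options the kernel actually negotiated (rsize/wsize, and with the vast-nfs client nconnect/remoteports, can be silently adjusted by the server). A sketch using findmnt that degrades gracefully on hosts where the mount is absent:

```shell
# Print the negotiated options for the destination mount; fall back to a
# message instead of a non-zero exit on hosts that are not mounted yet.
findmnt -no OPTIONS /mnt/vast-dest 2>/dev/null \
  | grep -E 'nconnect|remoteports|rsize|wsize' \
  || echo "no NFS options found for /mnt/vast-dest (not mounted?)"
```

nfsstat -m shows the same information alongside per-mount statistics if you prefer it.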
Tip: Make sure the worker's VOLUMES variable includes the mount paths. If your data lives outside /mnt, set VOLUMES='/mnt,/data' when deploying the worker. See the Worker Tuning section.
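For example, if the worker also needs a tree outside /mnt (the /data path below is purely illustrative; the VOLUMES name and comma-separated format follow the Worker Tuning section):

```shell
# Illustrative values: list every directory tree the worker must see,
# comma-separated. '/mnt' covers the NFS mounts created above.
export VOLUMES='/mnt,/data'

# Quick sanity check before deploying: flag any listed path that does not
# exist on this host.
for p in $(echo "$VOLUMES" | tr ',' ' '); do
  [ -d "$p" ] && echo "ok: $p" || echo "missing: $p"
done
```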
S3 Destination Setup
Follow this section if you are migrating from S3-compatible storage to VAST S3.
Step 1: Create S3 User and Access Key
Note: These instructions create a Local user on the VAST cluster. If you plan to use an AD/LDAP user for migration, consult VAST Documentation.
# Create S3 user
vastpy-cli --json post users name='s3-migration' user_type='LOCAL' | \
  jq '{
    name: .name,
    id: .id
  }'
# Get user ID
USER_ID=$(vastpy-cli get users name='s3-migration' --json | jq -r '.[0].id')
echo $USER_ID
# Generate S3 access key
vastpy-cli --json post users/$USER_ID/access_keys \
description='S3 migration'
Important: Save the access_key and secret_key from the output -- the secret is shown only once.
Example output:
{
  "access_key": "9NTPT0P7QBDQTXNUBQ8Z",
  "secret_key": "NDQbgCzLTk9fruK0UezcbM1M4wovlhrstJFCpcYU"
}
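If you are scripting the setup, the keys can be captured in the same step; the sketch below reuses the example JSON above in place of the live response from the post users/$USER_ID/access_keys call:

```shell
# Example response stands in for the live call; the secret is shown only
# once, so capture it at creation time.
KEY_JSON='{"access_key":"9NTPT0P7QBDQTXNUBQ8Z","secret_key":"NDQbgCzLTk9fruK0UezcbM1M4wovlhrstJFCpcYU"}'

# Standard AWS variable names, so aws-cli and other S3 tools pick them up.
export AWS_ACCESS_KEY_ID=$(echo "$KEY_JSON" | jq -r '.access_key')
export AWS_SECRET_ACCESS_KEY=$(echo "$KEY_JSON" | jq -r '.secret_key')
echo "captured key: $AWS_ACCESS_KEY_ID"
```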
Step 2: Create Identity Policy
This grants the S3 user full access to the destination bucket. Because VAST expects the policy document embedded as a JSON-encoded string, creating it is a two-step process:
Create the policy JSON file:
Adjust the Resource to match your destination bucket name. The tenant_id of 1 is for the default tenant -- adjust if you have a multi-tenant setup.
jq -n '{
  "name": "s3-migration-policy",
  "tenant_id": 1,
  "enabled": true,
  "policy": ({
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "Stmt1",
        "Action": ["s3:*"],
        "Effect": "Allow",
        "Resource": [
          "s3-data-dest",
          "s3-data-dest/*"
        ]
      }
    ]
  } | tostring)
}' > /tmp/s3policy_create.json
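A common failure mode here is posting the policy document as a nested JSON object instead of a serialized string (hence the tostring in the jq command). A self-contained sanity check, using a minimal stand-in file so it can run anywhere:

```shell
# Regenerate a minimal stand-in so this check is self-contained; in practice
# run the jq -e line against the real /tmp/s3policy_create.json from above.
jq -n '{
  "name": "s3-migration-policy",
  "tenant_id": 1,
  "enabled": true,
  "policy": ({"Version": "2012-10-17", "Statement": []} | tostring)
}' > /tmp/s3policy_check.json

# The embedded policy must be a JSON *string*, not a nested object;
# jq -e exits non-zero otherwise, which stops a scripted setup early.
jq -e '.policy | type == "string"' /tmp/s3policy_check.json > /dev/null \
  && echo "policy payload OK"
```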
Create the policy and assign it to the user:
# Create the identity policy
vastpy-cli --json post s3policies -i /tmp/s3policy_create.json
# Get the policy ID
S3_POLICY_ID=$(vastpy-cli --json get s3policies name='s3-migration-policy' | jq -r '.[0].id')
echo $S3_POLICY_ID
# Assign policy to user (tenant_id=1 for default tenant)
vastpy-cli patch users/$USER_ID/tenant_data tenant_id=1 s3_policies_ids="[$S3_POLICY_ID]"
Step 3: Create S3 View Policy
vastpy-cli --json post viewpolicies \
name='s3-migration-policy' \
flavor='S3_NATIVE' \
path_length='NPL' \
allowed_characters='NPL'
# Verify policy creation
vastpy-cli --json get viewpolicies name='s3-migration-policy' | \
  jq '.[] | {
    id: .id,
    name: .name,
    flavor: .flavor,
    path_length: .path_length,
    allowed_characters: .allowed_characters
  }'
Step 4: Create S3 View (Bucket)
The bucket name must match what was specified in the identity policy. The bucket_owner should match the S3 user created earlier.
# Get the S3 view policy ID
S3_VIEWPOLICY_ID=$(vastpy-cli get viewpolicies name='s3-migration-policy' --json | jq -r '.[0].id')
# Create S3 view (bucket)
vastpy-cli --json post views \
path='/s3-data-dest' \
policy_id=$S3_VIEWPOLICY_ID \
create_dir=true \
protocols='["S3"]' \
bucket='s3-data-dest' \
bucket_owner='s3-migration'
# Verify view creation
vastpy-cli --json get views path='/s3-data-dest' | \
  jq '.[] | {
    id: .id,
    name: .name,
    bucket: .bucket,
    bucket_owner: .bucket_owner
  }'
Step 5: Test S3 Connectivity
# Test HTTP connectivity to VAST S3
curl -v http://<vast-dns-name>:80
# Test basic S3 endpoint response
curl -v http://<vast-dns-name>:80/s3-data-dest/
# Test HTTPS connectivity
curl -v https://<vast-dns-name>:443
VAST S3 automatically load balances across multiple VIPs. Ensure your DNS resolution or load balancer distributes requests across your VAST VIP pool for optimal performance.
QoS (Optional)
VAST QoS policies can throttle migration bandwidth to prevent overwhelming the cluster during migration. Configuration varies by VAST version. Refer to the VAST Documentation for your specific version for QoS setup instructions.
Next Steps
- POSIX migration: See POSIX Migration Guide
- S3 migration: See S3 Migration Guide
- Metadata indexing: See Indexing Guide