This guide walks you through creating a POSIX filesystem migration using `syncengine-cli` -- from any NFS source to a VAST NFS destination.
## Prerequisites

- `syncengine-cli` installed (the RPM provides both the CLI and the web UI)
- CLI config in place (`/etc/default/syncengine-ui` or `~/.syncengine-cli.yaml`)
- Source path (e.g., `/mnt/source`) mounted and accessible on all worker hosts
- Destination path (e.g., `/mnt/vast/dest`) mounted and accessible on all worker hosts
- VAST cluster prepared per the VAST Cluster Setup Guide (NFS view with nosquash policy)
## Verify the Installation
Before creating connectors or migrations, confirm that the SyncEngine backend is healthy.
### Backend Health Check

Run this on the control plane host:

```shell
curl http://localhost:5009/api/v1/healthz
```

A healthy response returns the status and server version:

```json
{"status":"healthy","version":"Vastdata SyncEngine v3.1.0"}
```
The endpoint is also available at:
### Web UI Health Check

```shell
curl http://localhost:8888/api/webuihealth
```

A healthy response:

```json
{
  "status": "healthy",
  "redis": true,
  "database": true,
  "syncengine": true,
  "message": "All services are operational"
}
```
If any service shows `false`, resolve the connectivity issue before proceeding. Check the configuration in `/etc/default/syncengine-ui` and the service logs for details.
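The health checks above can be scripted, for example as a gate in an automation pipeline. A minimal sketch, using the response shape shown above (the canned `resp` value stands in for a live `curl` call, and the `grep`-based parsing is an assumption chosen to avoid a `jq` dependency):

```shell
#!/bin/sh
# In practice: resp=$(curl -s http://localhost:8888/api/webuihealth)
resp='{"status":"healthy","redis":true,"database":true,"syncengine":true,"message":"All services are operational"}'

# Fail fast if any component is not reporting true.
for svc in redis database syncengine; do
  if echo "$resp" | grep -q "\"$svc\": *true"; then
    echo "$svc: up"
  else
    echo "$svc: DOWN" >&2
    exit 1
  fi
done
```

A non-zero exit from a script like this is a convenient signal to halt provisioning before creating connectors.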
## Step 1: Create the Source Connector

```shell
syncengine-cli connector create --type=filesystem --name="source-nfs"
```

Note the connector ID returned (e.g., `Connector created with ID: 1`).
## Step 2: Create the Destination Connector

```shell
syncengine-cli connector create --type=filesystem --name="vast-nfs"
```

Note this connector ID as well (e.g., `Connector created with ID: 2`).
## Step 3: Verify Connectors

```shell
syncengine-cli connector list --type=filesystem
```

You should see both connectors listed with their IDs.
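If you are scripting the setup, the connector ID can be captured from the confirmation message rather than read by eye. A sketch, assuming the `Connector created with ID: N` output format shown above (the canned `out` value stands in for a live CLI call):

```shell
#!/bin/sh
# In practice:
#   out=$(syncengine-cli connector create --type=filesystem --name="source-nfs")
out="Connector created with ID: 1"

# Strip everything up to the last ": ", leaving just the numeric ID.
src_id="${out##*: }"
echo "source connector id: $src_id"
```

The captured ID can then be passed to `--source-connector-id` in the next step.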
## Step 4: Create the Migration
Choose either automatic or manual sync mode.
### Option A: Automatic Sync

Automatic mode waits one interval before running the initial sync, then re-syncs at the specified interval (in hours).
```shell
syncengine-cli migration create \
  --source-connector-id=1 \
  --destination-connector-id=2 \
  --source-path=/mnt/source \
  --destination-path=/mnt/vast/dest \
  --label=posix-to-vast \
  --sync-mode=automatic \
  --sync-interval=24
```
This creates the migration and kicks off the first sync after 24 hours. Subsequent syncs run every 24 hours.
### Option B: Manual Sync
Manual mode creates the migration but does not start syncing until you trigger it.
```shell
syncengine-cli migration create \
  --source-connector-id=1 \
  --destination-connector-id=2 \
  --source-path=/mnt/source \
  --destination-path=/mnt/vast/dest \
  --label=posix-to-vast \
  --sync-mode=manual \
  --enable-deletes
```
To start a sync manually:

```shell
syncengine-cli migration start --migration-id=1
```
### About `--enable-deletes`
When enabled, file deletions propagate from source to destination -- if a file is deleted on the source, it will be removed from the destination on the next sync. Omit this flag to keep destination files even when they are deleted from the source.
WARNING: With `--enable-deletes` enabled, any file on the destination that does not exist on the source will be deleted on the next sync. Do not write directly to the destination while delete propagation is active. The source is treated as the single source of truth.
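Before enabling delete propagation against a destination that already holds data, it can help to preview which destination files have no counterpart on the source. A generic shell sketch (this is plain `comm`/`find`, not a SyncEngine feature; the temp directories stand in for `/mnt/source` and `/mnt/vast/dest`):

```shell
#!/bin/bash
# Stand-in directories; substitute your real mounted paths in practice.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/kept.txt" "$dst/kept.txt" "$dst/orphan.txt"

# Files present only on the destination -- these are the files that
# would be removed on the next sync if delete propagation were active.
comm -13 <(cd "$src" && find . -type f | sort) \
         <(cd "$dst" && find . -type f | sort)
```

An empty result means the destination holds nothing the source would delete.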
## Step 5: Deploy a Worker
Workers connect to the control plane, authenticate, and start processing jobs.
Run these commands on the worker host, not on the control plane host.
1. Copy the `ms` script and worker bundle to the worker host (use `scp`, a USB drive, or your preferred transfer method).

2. Make `ms` executable:

   ```shell
   chmod +x ms
   ```

3. Install the worker images:

   ```shell
   ./ms install worker /path/to/Vastdata_SyncEngine_v3.1.0.tar.gz /opt/syncengine
   ```

4. Deploy the worker, pointing it at your control plane:

   ```shell
   cd /opt/syncengine
   WORKER_LABEL=worker1 META_CONTROL_IP=<control_plane_ip> ./ms deploy worker
   ```
Replace `<control_plane_ip>` with the actual IP address of the host running the control plane (e.g., `172.31.31.55`).
### Verify the worker is running

Check the container:

```shell
podman ps | grep worker
```

Watch the logs:

```shell
podman logs -f syncengine-worker
```
You should see the worker register with the control plane and start polling for jobs.
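Registration can also be checked non-interactively, e.g. from a provisioning script. A sketch, assuming the log message contains the word "registered" (the exact wording of the worker's log lines is an assumption; the canned `logline` stands in for live `podman logs` output):

```shell
#!/bin/sh
# In practice:
#   logline=$(podman logs syncengine-worker 2>&1 | grep -i register | tail -1)
logline="worker1 registered with control plane at 172.31.31.55"

if echo "$logline" | grep -qi "registered"; then
  echo "worker registered"
else
  echo "worker not registered yet" >&2
  exit 1
fi
```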
## Step 6: Monitor the Migration

### View current status

```shell
syncengine-cli migration status --migration-id=1
```
Status values:

- `init` -- Migration created, no sync has run yet
- `syncing` -- Sync is in progress
- `synced` -- Sync completed successfully
- `locked` -- Migration is locked, no new syncs allowed
### View migration statistics

```shell
syncengine-cli migration stats --migration-id=1
```
### Monitor in real time

Watch progress with automatic refresh (default 5 seconds):

```shell
syncengine-cli migration monitor --migration-id=1
```

Set a custom refresh interval (in seconds):

```shell
syncengine-cli migration monitor --migration-id=1 --interval=2
```
### List all migrations

```shell
syncengine-cli migration list
```

Filter by status:

```shell
syncengine-cli migration list --status=syncing
```
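For unattended runs, the status command can be polled until the migration settles instead of watching the interactive monitor. A sketch, assuming the status output contains one of the state words listed above (`wait_for_sync` is a hypothetical helper, not part of the CLI):

```shell
#!/bin/bash
# Poll `migration status` until the migration reaches a terminal state.
wait_for_sync() {
  local id=$1
  while :; do
    state=$(syncengine-cli migration status --migration-id="$id" \
              | grep -oE 'init|syncing|synced|locked' | head -1)
    case "$state" in
      synced) echo "migration $id synced"; return 0 ;;
      locked) echo "migration $id is locked" >&2; return 1 ;;
      *)      sleep 30 ;;
    esac
  done
}
```

Usage: `wait_for_sync 1 && echo "safe to cut over"`.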
## Common Management Commands

| Action | Command |
|---|---|
| Start a manual sync | `syncengine-cli migration start --migration-id=1` |
| Stop an active sync | `syncengine-cli migration stop --migration-id=1` |
| Pause a sync job | `syncengine-cli migration pause --sync-job-id=<ID>` |
| Resume a paused sync | `syncengine-cli migration resume --sync-job-id=<ID>` |
| Force full resync | `syncengine-cli migration resync --migration-id=1` |
| Replay failed files | `syncengine-cli migration replay-failures --migration-id=1` |
| Switch to automatic | `syncengine-cli migration auto --migration-id=1 --interval=24` |
| Switch to manual | `syncengine-cli migration manual --migration-id=1` |
| Lock migration | `syncengine-cli migration lock --migration-id=1` |
| Unlock migration | `syncengine-cli migration unlock --migration-id=1` |
## Next Steps
- S3 migration: See S3 Migration Guide
- Metadata indexing: See Indexing Guide
- Worker tuning: See Worker Tuning in the Install Guide