SyncEngine Installation Guide


A step-by-step guide to get SyncEngine and its worker up and running.


What You're Installing

SyncEngine has three pieces to install:

  1. Control Plane -- the brain. Runs the API, database, scheduler, and monitoring stack.
  2. Worker -- the muscle. Connects to the control plane, picks up data migration jobs, and executes them.
  3. CLI/UI -- the interface. An RPM that provides the web UI and command-line tools.

You install the control plane first, then the CLI/UI, then deploy workers.


Hardware Requirements

Resource   Control Plane         Worker
CPU        4 cores               4 cores
Memory     128 GB                256 GB
Disk       50 GB free            50 GB free
Network    10+ GbE recommended   100 GbE recommended

Note on networking: SyncEngine will work with whatever network is available, but
100 GbE is recommended for production data migration workloads.
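To sanity-check a host against these figures, a quick look at core count, memory, and free disk is enough (standard Linux tools; /opt is just an example mount point, adjust to your install path):

```shell
# Quick sanity check against the requirements above.
echo "CPU cores: $(nproc)"
echo "Memory:    $(free -g | awk '/^Mem:/ {print $2 " GB"}')"
echo "Free disk: $(df -h /opt | awk 'NR==2 {print $4}')"
```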


Prerequisites

Before you start, make sure you have:

  • Linux host (CentOS/RHEL/Rocky Linux 9 recommended)
  • Podman and podman-compose installed and running
  • curl and tar available
  • Hardware that meets the requirements above
  • 50 GB free at /opt/syncengine (or your chosen install path)
  • These ports available:
Port    Service
5009    SyncEngine API
5432    PostgreSQL
6379    Redis
8888    SyncEngine UI (HTTP)
8443    SyncEngine UI (HTTPS/TLS)
3009    Grafana
19091   Prometheus
5050    pgAdmin
5540    RedisInsight
2049    NFS (workers to source/dest)
80      S3 HTTP (workers to source/dest)
443     S3 HTTPS (workers to source/dest)
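Before installing, you can confirm that none of the control plane ports are already taken. A small sketch using ss, with the port numbers copied from the table above:

```shell
# Report whether each control plane port is free or already bound.
for port in 5009 5432 6379 8888 8443 3009 19091 5050 5540; do
  if ss -tln | grep -q ":${port} "; then
    echo "Port ${port}: in use"
  else
    echo "Port ${port}: free"
  fi
done
```

Any port reported "in use" must be freed (or reconfigured) before the install.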

Verify Podman is ready

podman --version
podman-compose --version

If either is missing, install them before continuing.

SELinux

SELinux can interfere with Podman container operations. Check and adjust if needed:

# Check current SELinux status
getenforce

If SELinux is in Enforcing mode and causing issues:

# Set permissive (takes effect immediately, does not survive reboot)
sudo setenforce 0

# Make permanent (survives reboot) by updating /etc/selinux/config:
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

Firewall

Ensure the required ports are open on the control plane host.

RHEL / Rocky Linux / CentOS (firewalld):

# Check if firewalld is running
sudo firewall-cmd --state

# Open required ports
sudo firewall-cmd --permanent --add-port=5009/tcp
sudo firewall-cmd --permanent --add-port=8888/tcp
sudo firewall-cmd --permanent --add-port=8443/tcp
sudo firewall-cmd --permanent --add-port=3009/tcp
sudo firewall-cmd --permanent --add-port=5432/tcp
sudo firewall-cmd --permanent --add-port=6379/tcp

# Reload and verify
sudo firewall-cmd --reload
sudo firewall-cmd --list-ports

# Also check for iptables rules that may block traffic
sudo iptables -L -n

Debian / Ubuntu (ufw):

# Check if ufw is active
sudo ufw status

# Allow required ports
sudo ufw allow 5009/tcp
sudo ufw allow 8888/tcp
sudo ufw allow 8443/tcp
sudo ufw allow 3009/tcp

# Verify
sudo ufw status

Step 1: Download the Release Artifacts

You need two files. Download them to your control plane host.

Note: These URLs can be long. If you received them via email, make sure the URL
didn't get broken across multiple lines before pasting it into your terminal.

Download the SyncEngine bundle

curl -o 'syncengine-3.1.0.tar.gz' -L 'https://vast-support.s3.eu-west-1.amazonaws.com/syncengine-3.1.0.tar.gz?<...>'

Download the CLI/UI RPM

curl -o syncengine-ui-2.0.0-20260303.1609.el9.x86_64.rpm -L 'https://vast-support.s3.eu-west-1.amazonaws.com/syncengine-ui..<..>...rpm?<....>'

Tip: Presigned S3 URLs expire after 7 days. If the download fails, request fresh URLs from your team.


Step 2: Extract the Bundle

Untar the downloaded bundle:

tar xzf syncengine-3.1.0.tar.gz

This produces two files:

File                                What it is
ms                                  The install/management script
Vastdata_SyncEngine_v3.1.0.tar.gz   The container images and configs
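To confirm the outer archive really contains both files, you can list it without re-extracting:

```shell
# List the bundle contents (filename from Step 1).
tar tzf syncengine-3.1.0.tar.gz
```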

Make the ms script executable:

chmod +x ms

Step 3: Install the Control Plane

Run the install command, providing the path to the inner bundle and your target install directory:

./ms install control /path/to/Vastdata_SyncEngine_v3.1.0.tar.gz /opt/syncengine

Make sure /opt/syncengine has at least 50 GB of free space.
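A quick way to verify the free space before running the install (checking /opt, since /opt/syncengine may not exist yet; adjust if you chose another path):

```shell
# Warn if less than 50 GB (in KB units) is available at the install location.
avail_kb=$(df --output=avail /opt | tail -1 | tr -d ' ')
if [ "$avail_kb" -ge $((50 * 1024 * 1024)) ]; then
  echo "Enough free space for the install"
else
  echo "WARNING: less than 50 GB free -- choose another path"
fi
```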

What happens during install

  1. Checks that curl, tar, and Podman are available
  2. Extracts the bundle contents to /opt/syncengine
  3. Validates and loads all container images (PostgreSQL, Redis, Grafana, etc.)
  4. Starts all control plane services via podman-compose

When it's done you'll see:

✅ Installation completed successfully!
✅ Control services started successfully!

Verify the control plane is running

# Ping the control plane
./ms ping

# List SyncEngine containers
podman ps --filter label=project=syncengine

You should see these containers running:

Container            Port    What it does
syncengine-control   5009    API server
postgres             5432    Database
redis                6379    Job coordination & queuing
prometheus           19091   Metrics collection
grafana              3009    Dashboards
pgadmin              5050    Database management UI
redisinsight         5540    Redis visualization
otel-collector       4318    Telemetry aggregation
loki                 3100    Log aggregation
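A small sketch to check every expected container in one pass (container names as listed above):

```shell
# Report running/not-running for each expected control plane container.
for c in syncengine-control postgres redis prometheus grafana pgadmin redisinsight otel-collector loki; do
  if podman ps --format '{{.Names}}' | grep -qx "$c"; then
    echo "$c: running"
  else
    echo "$c: NOT running"
  fi
done
```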

If any container isn't running, check its logs:

podman logs <container_name>

If the container still fails to start and the logs offer no obvious clue, open a Support case and send the logs to VAST Support.


Step 4: Install the CLI/UI

On the control plane host, install the RPM:

sudo rpm -Uvh syncengine-ui-2.0.0-20260303.1609.el9.x86_64.rpm

Once installed, enable and start the service:

sudo systemctl enable --now syncengine-ui

If there's trouble, check firewall ports and the logs:

sudo journalctl -fu syncengine-ui

If the logs offer no obvious clue, open a Support case and send them to VAST Support.

Step 5: Log In

Open your browser to:

http://<control_plane_ip>:8888

Default credentials

Field      Value
Username   Admin123#
Password   Admin123#

API docs

Interactive Swagger docs are also available at:

http://<control_plane_ip>:5009/docs

Step 6: Deploy a Worker

Workers connect to the control plane, authenticate, and start processing jobs.

Run these commands on the worker host, not on the control plane host.

# 1. Copy the ms script and worker bundle to the worker host
# (use scp, USB drive, or your preferred transfer method)

# 2. Make ms executable
chmod +x ms

# 3. Install worker images
./ms install worker /path/to/Vastdata_SyncEngine_v3.1.0.tar.gz /opt/syncengine

# 4. Deploy the worker, pointing it at your control plane
cd /opt/syncengine
WORKER_LABEL=worker1 META_CONTROL_IP=<control_plane_ip> ./ms deploy worker

Replace <control_plane_ip> with the actual IP address of the host running the control plane (e.g., 172.31.31.55).

Verify the worker is running

# Check the container
podman ps | grep worker

# Watch the logs
podman logs -f syncengine-worker

You should see the worker register with the control plane and start polling for jobs.


Quick Reference: Environment Variables

Worker deployment variables

Variable           Required?             Default       Description
WORKER_LABEL       Yes                   --            Unique name for this worker
META_CONTROL_IP    Yes (separate host)   --            IP of the control plane
CONTROL_PORT       No                    5009          Control plane API port
PROCESS_COUNT      No                    cpu_count*2   Number of worker processes to spawn
VOLUMES            No                    /mnt          Comma-separated mount paths (e.g., /srv,/mnt,/home)
CONTROL_USERNAME   No                    Admin123#     Username for auto token creation
CONTROL_PASSWORD   No                    Admin123#     Password for auto token creation
LOG_LEVEL          No                    INFO          Logging level (DEBUG, INFO, AUDIT)

Monitoring Dashboards

Once everything is running, these UIs are available:

Tool            URL                   Purpose
SyncEngine UI   http://<host>:8888    Main web interface
Grafana         http://<host>:3009    Performance dashboards
Prometheus      http://<host>:19091   Raw metrics

Safe Restart

If you need to reboot the control plane host or restart all SyncEngine services:

Stop all containers

From the install directory (/opt/syncengine/):

cd /opt/syncengine
podman-compose -f podman-compose.yaml stop

Reboot (if needed)

sudo reboot

Bring services back up

After the host is back online, from the install directory:

cd /opt/syncengine
./ms deploy control

Verify everything is running

# Ping the control plane API
./ms ping

# Verify all containers are up
podman ps --filter label=project=syncengine

You should see all control plane containers running (syncengine-control, postgres, redis, prometheus, grafana, etc.).


Worker Tuning

Two environment variables let you tune worker behavior for different workloads:

PROCESS_COUNT

Controls how many parallel transfer processes the worker spawns. Default: cpu_count * 2.

  • Many small files -- increase PROCESS_COUNT to keep all processes busy (e.g., 80-128)
  • Large files -- decrease PROCESS_COUNT to avoid memory pressure and let each process move data efficiently (e.g., 16-32)

# High process count for small-file workloads
PROCESS_COUNT=80 WORKER_LABEL=smallfile-test META_CONTROL_IP=10.143.14.105 ./ms deploy worker

VOLUMES

Controls which host paths the worker container mounts, making them accessible for migration jobs. Default: /mnt.

If your source or destination data lives outside /mnt, add those paths:

# Mount multiple paths
VOLUMES='/mnt,/data,/home,/cluster,/srv' PROCESS_COUNT=64 WORKER_LABEL=homedir_migration META_CONTROL_IP=10.10.10.1 ./ms deploy worker

Combined example

# Worker tuned for a large small-file migration across multiple mount points
VOLUMES='/mnt,/data' PROCESS_COUNT=96 WORKER_LABEL=bulk-migration META_CONTROL_IP=10.10.10.1 ./ms deploy worker

Viewing Logs

Use podman logs to view logs from any SyncEngine container. These examples work for both control plane and worker containers.

Tail logs (follow mode)

# Follow worker logs in real-time
podman logs -f syncengine-worker

# Follow control plane logs
podman logs -f syncengine-control

Last N lines

# Last 100 lines from the worker
podman logs --tail 100 syncengine-worker

# Last 500 lines from the control plane
podman logs --tail 500 syncengine-control

Time-based filtering

# Last 2 hours
podman logs --since 2h syncengine-worker

# Last 24 hours
podman logs --since 24h syncengine-worker

# Since a specific timestamp
podman logs --since "2026-03-20T08:00:00" syncengine-worker

Save logs to a file

# Dump full worker logs to a file
podman logs syncengine-worker > worker1-logs.txt 2>&1

# Last 24 hours to a file
podman logs --since 24h syncengine-worker > worker1-last24h.txt 2>&1

Common container names

Container            Where it runs
syncengine-control   Control plane
postgres             Control plane
redis                Control plane
prometheus           Control plane
grafana              Control plane
loki                 Control plane
otel-collector       Control plane
syncengine-worker    Worker host

Uninstall

Uninstall the control plane

From the install directory (/opt/syncengine/):

cd /opt/syncengine
./ms uninstall

This removes all SyncEngine containers, images, volumes, and networks, and verifies the cleanup.

Uninstall a worker

On the worker host, stop and remove the worker container:

podman stop syncengine-worker
podman rm syncengine-worker

To also remove the worker image and install directory:

cd /opt/syncengine
./ms uninstall

Uninstall the CLI/UI RPM

sudo rpm -e syncengine-ui

Troubleshooting

Control plane won't start

# Check container logs
podman logs syncengine-control

# Make sure ports aren't already in use
ss -tlnp | grep -E '5009|5432|6379'

Worker can't connect to control plane

# From the worker host, verify network connectivity
curl http://<control_plane_ip>:5009/api/v1/healthz | jq '.'

# Check firewall rules on the control plane host
sudo firewall-cmd --list-ports

If the health check fails, make sure port 5009 is open on the control plane host.

Podman containers stop when you log out

Enable "linger" so containers keep running after your SSH session ends:

sudo loginctl enable-linger $USER

The ms script will prompt you about this during install, but you can run it manually if needed.
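To confirm linger is already enabled for your user:

```shell
# Show the Linger property for the current user; Linger=yes means enabled.
loginctl show-user "$USER" | grep Linger
```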

Worker auth token issues

If the worker fails to authenticate, you can manually create a token:

# From any host that can reach the control plane
curl -X POST http://<control_plane_ip>:5009/auth/token/for_worker \
  -H "Content-Type: application/json" \
  -d '{"username": "Admin123#", "password": "Admin123#"}'

Then pass it to the worker:

SE_TOKEN=<token_from_above> WORKER_LABEL=worker1 META_CONTROL_IP=<control_plane_ip> ./ms deploy worker
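If jq is available, the two steps can be combined into one sketch. Note that the .token field name is an assumption about the response shape -- inspect the actual JSON returned by the curl call above before relying on it:

```shell
# Fetch a token (field name ".token" is an assumption) and deploy in one step.
SE_TOKEN=$(curl -s -X POST http://<control_plane_ip>:5009/auth/token/for_worker \
  -H "Content-Type: application/json" \
  -d '{"username": "Admin123#", "password": "Admin123#"}' | jq -r '.token')
SE_TOKEN="$SE_TOKEN" WORKER_LABEL=worker1 META_CONTROL_IP=<control_plane_ip> ./ms deploy worker
```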

Reset everything and start over

./ms uninstall 
./ms install control /path/to/Vastdata_SyncEngine_v3.1.0.tar.gz /opt/syncengine