How to control where SyncEngine container logs are stored, prevent logs from filling your root filesystem, and set up log rotation.
Why This Matters
By default, both Docker and Podman store container logs on the root filesystem (/). A busy SyncEngine worker can generate gigabytes of logs per hour. If / fills up, the host becomes unresponsive and all containers stop.
This guide covers how to redirect logs to a dedicated volume (e.g., /data/syncengine_logs/) for both fresh installs and existing deployments.
How SyncEngine Containers Are Deployed
SyncEngine has two deployment types with different logging behavior:
| Component | Deployed Via | Logging Config | Container Names |
|---|---|---|---|
| Control plane | ./ms deploy control (podman-compose / docker-compose) | max-size: 100m in compose file | syncengine-control, postgres, redis, prometheus, grafana, pgadmin, redisinsight, loki, otel-collector, syncengine-ui |
| Worker | ./ms deploy worker (podman run / docker run) | None -- unlimited growth | syncengine-worker |
Note: Container names are set by the .env file (CONTROL_HOSTNAME, etc.) and by the ms script (syncengine-worker is hardcoded in the deploy_worker function). Your deployment may use different names -- substitute accordingly in the commands below.
The worker is the primary concern. It generates the most log volume and has no rotation configured by default. The control plane containers have max-size: 100m set in their compose files (podman-compose.yaml / docker-compose.yaml), but their logs still land on / unless you move the storage location.
Note: Podman does not support the max-file option. The max-file: "5" setting in podman-compose.yaml is silently ignored. When max-size is reached, Podman truncates the log file in place rather than rotating to a new file.
Prerequisites
- A dedicated volume mounted at your target path (e.g., /data) with sufficient space
- Root or sudo access (for rootful containers) or ownership of the Podman storage directory (for rootless)
- Docker 17.05+ or Podman 3.0+
- The ./ms script and .env file in your SyncEngine install directory
Quick Reference
| Method | Downtime Required | Scope | Docker | Podman |
|---|---|---|---|---|
| Fresh install -- configure storage before deploying | None | All containers | Yes | Yes |
| Per-container log path (worker) | Worker restart | Single container | No | Yes |
| Move all container storage | Full restart | All containers | Yes | Yes |
| Symlink storage directory | Full restart | All containers | Yes | Yes |
| Emergency: free space now | None | Immediate relief | Yes | Yes |
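Before choosing a method, it helps to see how full / is and how much of it is container data. The paths below are the runtime defaults; adjust them if your storage has already been moved:

```shell
# How full is the root filesystem?
df -h /
# How much of that is container data? (default storage locations;
# errors are suppressed for runtimes you don't use)
sudo du -sh /var/lib/docker 2>/dev/null
sudo du -sh /var/lib/containers/storage 2>/dev/null
du -sh ~/.local/share/containers/storage 2>/dev/null
```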
Method 1: Fresh Install -- Configure Storage Before Deploying
Use this when deploying SyncEngine for the first time. Configure the container runtime to store all data (images, containers, logs) on your target volume before running ./ms deploy.
Docker
# Create the storage directory
sudo mkdir -p /data/docker
# Create or edit the daemon configuration
sudo tee /etc/docker/daemon.json <<'EOF'
{
"data-root": "/data/docker",
"log-driver": "json-file",
"log-opts": {
"max-size": "500m",
"max-file": "5"
}
}
EOF
# Reload and restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker
Verify:
docker info -f '{{ .DockerRootDir }}'
# Expected: /data/docker
Now deploy SyncEngine normally:
./ms deploy control
CONTROL_IP=<ip> WORKER_LABEL=<label> ./ms deploy worker
All container logs, images, and volumes will be stored under /data/docker/.
Note: The log-opts section configures global log rotation. Every new container will automatically rotate its log file at 500 MB, keeping at most 5 files per container (the active file plus rotated ones).
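To confirm that new containers actually inherit the daemon-level settings, you can start a throwaway container and inspect its effective log configuration. The container name logtest and the alpine image here are just placeholders:

```shell
# Start a short-lived test container
docker run -d --name logtest alpine sleep 30
# Its LogConfig should reflect the driver and log-opts from daemon.json
docker inspect --format '{{.HostConfig.LogConfig.Type}} {{.HostConfig.LogConfig.Config}}' logtest
# Clean up
docker rm -f logtest
```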
Podman (Rootful)
# Create the storage directories
sudo mkdir -p /data/containers/storage
sudo mkdir -p /data/containers/run
# Configure storage location
sudo tee /etc/containers/storage.conf <<'EOF'
[storage]
driver = "overlay"
graphroot = "/data/containers/storage"
runroot = "/data/containers/run"
EOF
# Configure default log rotation
sudo mkdir -p /etc/containers
sudo tee -a /etc/containers/containers.conf <<'EOF'
[containers]
log_size_max = 524288000
EOF
Note: log_size_max is specified in bytes. 524288000 = 500 MB. When the log reaches this size, it is truncated. Set to -1 (the default) for unlimited.
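Since log_size_max takes bytes, a quick shell computation avoids unit mistakes (500 here is just the example limit in megabytes):

```shell
# Convert a megabyte limit to the byte value containers.conf expects
limit_mb=500
echo $((limit_mb * 1024 * 1024))
# → 524288000
```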
Verify:
podman info --format '{{.Store.GraphRoot}}'
# Expected: /data/containers/storage
Now deploy SyncEngine normally:
./ms deploy control
CONTROL_IP=<ip> WORKER_LABEL=<label> ./ms deploy worker
Podman (Rootless)
# Create the storage directories
mkdir -p /data/containers/storage
mkdir -p /data/containers/run
# Configure storage location
mkdir -p ~/.config/containers
cat <<'EOF' > ~/.config/containers/storage.conf
[storage]
driver = "overlay"
graphroot = "/data/containers/storage"
runroot = "/data/containers/run"
EOF
# Configure default log rotation
cat <<'EOF' >> ~/.config/containers/containers.conf
[containers]
log_size_max = 524288000
EOF
Verify:
podman info --format '{{.Store.GraphRoot}}'
# Expected: /data/containers/storage
Important: If you have already created containers or pulled images before changing storage.conf, you must run podman system reset --force first. This deletes all existing containers, images, and volumes. Only run this if you have nothing to preserve.
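Rootless Podman also needs the new storage directories to be owned by your user, not root. A quick check, assuming the /data paths created above:

```shell
# Verify the rootless user owns the new storage path
owner=$(stat -c '%U' /data/containers/storage 2>/dev/null)
if [ "$owner" != "$(id -un)" ]; then
  # Trailing colon sets the group to the user's login group
  echo "Fix ownership first: sudo chown -R $(id -un): /data/containers"
fi
```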
Method 2: Redirect a Single Container's Logs (Podman Only)
Podman supports setting a custom log file path per container. Docker does not support this option.
This is useful when you only need to move the worker's logs off / without changing the entire storage backend. The worker is the highest-volume log producer.
Redeploy the worker with a custom log path
# Create the log directory
mkdir -p /data/syncengine_logs
# Stop and remove the existing worker
podman stop syncengine-worker
podman rm syncengine-worker
# Redeploy -- the log options themselves come from the ms script edit described in the note below
CONTROL_IP=<ip> WORKER_LABEL=<label> ./ms deploy worker
Note: The ./ms deploy worker command builds a podman run command that does not include log options. To add them, edit the deploy_worker function in the ms script and add --log-driver=k8s-file --log-opt path=/data/syncengine_logs/syncengine-worker.log --log-opt max-size=500mb to the run command.
Verify
podman inspect --format '{{.LogPath}}' syncengine-worker
# Expected: /data/syncengine_logs/syncengine-worker.log
ls -lh /data/syncengine_logs/
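Because Podman truncates rather than rotates, you can layer host-side logrotate on top of the custom path to keep some history. A sketch, assuming logrotate is installed and the log path used above; copytruncate is required because the container keeps the file open:

```
# /etc/logrotate.d/syncengine-worker
/data/syncengine_logs/syncengine-worker.log {
    size 100M
    rotate 5
    copytruncate
    compress
    missingok
}
```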
Method 3: Move All Container Storage to /data
Use this when you have a running deployment and need to move everything off /.
Option A: Symlink (Least Disruptive)
This moves all container data to /data by replacing the original directory with a symbolic link. No configuration files change, so all container definitions, compose files, and the ms script continue to work as-is.
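One pitfall worth checking first: the symlink only helps if /data is actually backed by a separate filesystem. A quick check using only df and awk:

```shell
# Compare the device backing / and /data
root_dev=$(df -P / | awk 'NR==2 {print $1}')
data_dev=$(df -P /data 2>/dev/null | awk 'NR==2 {print $1}')
if [ -z "$data_dev" ] || [ "$root_dev" = "$data_dev" ]; then
  echo "WARNING: /data is missing or on the same filesystem as /"
fi
```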
Docker:
# 1. Stop all SyncEngine containers and Docker
sudo docker stop $(sudo docker ps -q) 2>/dev/null
sudo systemctl stop docker
# 2. Move existing data
sudo mv /var/lib/docker /data/docker
# 3. Create symlink
sudo ln -s /data/docker /var/lib/docker
# 4. Start Docker
sudo systemctl start docker
# 5. Verify
docker ps
ls -la /var/lib/docker
# Expected: /var/lib/docker -> /data/docker
Podman (Rootful):
# 1. Stop all containers
sudo podman stop --all
# 2. Move existing data
sudo mv /var/lib/containers /data/containers
# 3. Create symlink
sudo ln -s /data/containers /var/lib/containers
# 4. Verify and restart containers
ls -la /var/lib/containers
# Expected: /var/lib/containers -> /data/containers
sudo podman start --all
Podman (Rootless):
# 1. Stop all containers
podman stop --all
# 2. Move existing data
mv ~/.local/share/containers /data/containers
# 3. Create symlink
ln -s /data/containers ~/.local/share/containers
# 4. Verify and restart containers
ls -la ~/.local/share/containers
# Expected: /home/<user>/.local/share/containers -> /data/containers
podman start --all
Option B: Change the Storage Root
This changes the container runtime's configuration to point at a new location.
Docker:
# 1. Stop all containers and Docker
sudo docker stop $(sudo docker ps -q) 2>/dev/null
sudo systemctl stop docker
# 2. Copy existing data to the new location
sudo rsync -aP /var/lib/docker/ /data/docker/
# 3. Configure the new data root
sudo tee /etc/docker/daemon.json <<'EOF'
{
"data-root": "/data/docker",
"log-driver": "json-file",
"log-opts": {
"max-size": "500m",
"max-file": "5"
}
}
EOF
# 4. Restart Docker
sudo systemctl daemon-reload
sudo systemctl start docker
# 5. Verify
docker ps
docker info -f '{{ .DockerRootDir }}'
# Expected: /data/docker
Important: Do not remove /var/lib/docker until you have verified that all containers and images are working from the new location. Keep it as a backup for at least 24 hours.
Podman (Rootful):
# 1. Stop all containers
sudo podman stop --all
# 2. Copy existing data to the new location
sudo rsync -aP /var/lib/containers/storage/ /data/containers/storage/
# 3. Update storage.conf
sudo tee /etc/containers/storage.conf <<'EOF'
[storage]
driver = "overlay"
graphroot = "/data/containers/storage"
runroot = "/data/containers/run"
EOF
# 4. Start containers
sudo podman start --all
Podman (Rootless):
# 1. Stop all containers
podman stop --all
# 2. Copy existing data
rsync -aP ~/.local/share/containers/storage/ /data/containers/storage/
# 3. Update storage.conf
mkdir -p ~/.config/containers
cat <<'EOF' > ~/.config/containers/storage.conf
[storage]
driver = "overlay"
graphroot = "/data/containers/storage"
runroot = "/data/containers/run"
EOF
# 4. Start containers
podman start --all
Important: If Podman reports storage errors after the move, you may need to run podman system reset --force and re-pull images. This is a last resort -- try restarting first.
Method 4: Emergency -- Free Disk Space Now
If / is critically full and you cannot restart containers, use these steps to buy time.
Check current log sizes
# List log sizes for all SyncEngine containers
# Docker
for c in $(docker ps --filter label=project=syncengine -q); do
echo "$(docker inspect --format '{{.Name}}' $c): $(ls -lh $(docker inspect --format '{{.LogPath}}' $c) 2>/dev/null | awk '{print $5}')"
done
# Podman
for c in $(podman ps --filter label=project=syncengine -q); do
echo "$(podman inspect --format '{{.Name}}' $c): $(ls -lh $(podman inspect --format '{{.LogPath}}' $c) 2>/dev/null | awk '{print $5}')"
done
Truncate the largest log files
# Docker
truncate -s 0 $(docker inspect --format '{{.LogPath}}' syncengine-worker)
# Podman
truncate -s 0 $(podman inspect --format '{{.LogPath}}' syncengine-worker)
Warning: After truncation, docker logs / podman logs will not show historical entries from before the truncation. The container continues running and writing new log entries normally. Use truncate rather than shell redirection (: >) as it is safer when the container is actively writing.
Truncate all SyncEngine container logs at once
# Docker
for c in $(docker ps --filter label=project=syncengine -q); do
truncate -s 0 $(docker inspect --format '{{.LogPath}}' $c)
done
# Podman
for c in $(podman ps --filter label=project=syncengine -q); do
truncate -s 0 $(podman inspect --format '{{.LogPath}}' $c)
done
Prevent regrowth while you plan a permanent fix
Set up a cron job to truncate the worker log periodically:
# Truncate every 30 minutes (adjust schedule as needed)
# Docker
(crontab -l 2>/dev/null; echo "*/30 * * * * truncate -s 0 \$(docker inspect --format '{{.LogPath}}' syncengine-worker 2>/dev/null) 2>/dev/null") | crontab -
# Podman
(crontab -l 2>/dev/null; echo "*/30 * * * * truncate -s 0 \$(podman inspect --format '{{.LogPath}}' syncengine-worker 2>/dev/null) 2>/dev/null") | crontab -
Important: This is a temporary measure. Remove the cron entry once you have applied
one of the permanent solutions above. Cron-based truncation discards log data on every run.
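When you are ready to remove the temporary job, filtering the crontab works. This assumes the entry still contains the string syncengine-worker and that no other jobs match it:

```shell
# Drop the temporary truncation job from the crontab
crontab -l 2>/dev/null | grep -v 'syncengine-worker' | crontab -
```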
Log Rotation Reference
Docker
Docker supports automatic log rotation via the json-file and local drivers.
| Option | Description | Default |
|---|---|---|
| max-size | Maximum size of a single log file before rotation | -1 (unlimited) |
| max-file | Number of log files to keep | 1 |
| compress | Compress rotated log files | disabled |
Set globally in /etc/docker/daemon.json:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "500m",
"max-file": "5"
}
}
Or per container at run time:
docker run --log-opt max-size=500m --log-opt max-file=5 <image>
Tip: Docker's local log driver enables rotation by default (20 MB max-size, 5 files) and uses a more efficient binary format. Consider using it if you do not need JSON-formatted logs: --log-driver=local.
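For example, a daemon.json that switches the default driver to local (the limits shown are illustrative, not required -- the local driver rotates by default):

```
{
  "log-driver": "local",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
```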
Podman
Podman's default k8s-file driver supports max-size but does not support max-file (multiple rotated files). When the log reaches max-size, it is truncated in place -- older log entries are lost.
| Option | Description | Default |
|---|---|---|
| max-size | Maximum log file size before truncation | -1 (unlimited) |
| path | Custom log file path | Container storage directory |
Set globally in /etc/containers/containers.conf (or ~/.config/containers/containers.conf for rootless):
[containers]
log_size_max = 524288000
Or per container at run time:
podman run --log-opt max-size=500mb <image>
Note: Podman does not support the max-file option. When the size limit is reached, the log file is truncated rather than rotated to a new file. For production deployments that require log retention, SyncEngine includes a Loki instance for centralized log aggregation. Logs shipped to Loki are retained for 7 days by default, independent of local log rotation.
SyncEngine Compose Files
The control plane compose files already include logging configuration for all containers:
podman-compose.yaml:
logging:
driver: "k8s-file"
options:
max-size: "100m"
max-file: "5" # Note: ignored by Podman
docker-compose.yaml:
logging:
driver: "json-file"
options:
max-size: "100m"
max-file: "5"
The worker container (syncengine-worker) is deployed via ./ms deploy worker using a podman run / docker run command and does not include any logging options by default.
Troubleshooting
"Error: cannot open storage: the storage lock is held by another process"
Another Podman process is running. Stop all containers and retry:
podman stop --all
podman system migrate
"Error: storage configured differently"
This occurs when you change storage.conf while existing containers reference the old location. Reset and re-pull:
# WARNING: this deletes all containers, images, and volumes
podman system reset --force
Then re-deploy SyncEngine:
./ms deploy control
CONTROL_IP=<ip> WORKER_LABEL=<label> ./ms deploy worker
Docker containers missing after changing data-root
The old containers are still at the original path. Either copy them:
sudo rsync -aP /var/lib/docker/ /data/docker/
Or re-pull images and recreate containers using ./ms deploy control.
Logs still growing on / after making changes
Verify the log path actually changed:
# Docker
docker inspect --format '{{.LogPath}}' syncengine-worker
# Podman
podman inspect --format '{{.LogPath}}' syncengine-worker
If the path still points to /var/lib/... or ~/.local/share/..., the container was created before the configuration change. You must recreate the container for the new log path to take effect:
# Worker
podman stop syncengine-worker && podman rm syncengine-worker
CONTROL_IP=<ip> WORKER_LABEL=<label> ./ms deploy worker
# Control plane
./ms deploy control
How do I check disk usage by container logs?
# Docker
for c in $(docker ps --filter label=project=syncengine -q); do
echo "$(docker inspect --format '{{.Name}}' $c): $(ls -lh $(docker inspect --format '{{.LogPath}}' $c) 2>/dev/null | awk '{print $5}')"
done
# Podman
for c in $(podman ps --filter label=project=syncengine -q); do
echo "$(podman inspect --format '{{.Name}}' $c): $(ls -lh $(podman inspect --format '{{.LogPath}}' $c) 2>/dev/null | awk '{print $5}')"
done
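If the runtime itself is unresponsive, you can also scan the storage directories directly. The paths assume default storage locations, and the 100M threshold is arbitrary:

```shell
# Locate oversized container log files without going through the runtime
sudo find /var/lib/docker/containers -name '*-json.log' -size +100M -exec ls -lh {} \; 2>/dev/null
sudo find /var/lib/containers/storage -name '*.log' -size +100M -exec ls -lh {} \; 2>/dev/null
```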