Migrating a Docker image to an air-gapped VM

One of my projects needs a web-facing VM that’s totally locked down: no apt-get, no outbound pulls from ghcr.io/docker.io. So the workflow is: build the image somewhere else, then ship it to the VM for updates.

The simple path: docker save → copy → docker load

On the build machine:

```bash
# build or pull normally
docker build -t myapp:2026-03-04 .

# export to a tarball
docker save -o myapp-2026-03-04.tar myapp:2026-03-04
```

Copy to the VM (scp/rsync works even if the VM can’t pull images):

```bash
scp myapp-2026-03-04.tar admin@vm:/tmp/
```
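These tarballs can run to multiple gigabytes, so it's worth verifying the copy wasn't truncated before loading it. A small sketch of that check (the `mktemp` file stands in for the real `myapp-2026-03-04.tar` so the snippet is self-contained):

```shell
# Demo file standing in for the real tarball -- swap in myapp-2026-03-04.tar.
TARBALL=$(mktemp)
echo "fake image data" > "$TARBALL"

# On the build machine: record a checksum next to the tarball.
sha256sum "$TARBALL" > "$TARBALL.sha256"

# Copy both files to the VM, then verify before running docker load.
sha256sum -c "$TARBALL.sha256" && echo "tarball intact"
```

`sha256sum -c` exits non-zero on a mismatch, so it's easy to gate the load step on it.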

On the locked-down VM:

```bash
docker load -i myapp-2026-03-04.tar
```

IMPORTANT: BACK UP & DOUBLE CHECK PERSISTENT PATHS BEFORE NEXT STEP

I did this once without backing up and without checking the Dockerfile (and compose file) for persistent paths, and my entire installation got overwritten. So: here's your reminder to do both before running any of these steps.
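What the backup step looks like depends on where your data actually lives; assuming a bind-mounted directory, a minimal sketch is below (the `mktemp` demo directory and `app.db` file are placeholders standing in for your real persistent path):

```shell
# Demo stand-in for the app's persistent bind mount -- use your real path here.
DATA_DIR=$(mktemp -d)/myapp-data
mkdir -p "$DATA_DIR"
echo "precious state" > "$DATA_DIR/app.db"

# Archive it under a dated name before loading the new image.
BACKUP=/tmp/myapp-data-$(date +%F).tar.gz
tar -czf "$BACKUP" -C "$(dirname "$DATA_DIR")" "$(basename "$DATA_DIR")"

# Sanity-check that the archive actually contains the data.
tar -tzf "$BACKUP" | grep -q app.db && echo "backup OK: $BACKUP"
```

The `tar -tzf` listing at the end is the part I'd never skip again: it proves the archive is readable and non-empty before anything gets overwritten.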

Then update your compose to reference the new tag:

```bash
# Example: in docker-compose.yml set image: myapp:2026-03-04
docker compose up -d
```
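For reference, the relevant compose fragment might look like this (the service name, and the volume path are illustrative assumptions, not from my actual setup):

```yaml
# Hypothetical docker-compose.yml -- service name and volume path
# are placeholders.
services:
  web:
    image: myapp:2026-03-04   # bump this tag on each shipped tarball
    volumes:
      - /srv/myapp/data:/data # persistent path: back this up before updates
```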

A small improvement: keep tags + rollback easy

I tag images with a date (or git SHA) and never “overwrite latest” on the VM. That way rollback is instant:

```bash
# rollback
docker compose down
# change the tag back in docker-compose.yml
docker compose up -d
```
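The date-or-SHA tagging can be scripted so the tag is only derived once (a sketch: the docker commands are commented out here and just mirror the earlier steps, and the fallback to a date covers running outside a git checkout):

```shell
# Derive a tag from the short git SHA, falling back to today's date.
TAG=$(git rev-parse --short HEAD 2>/dev/null || date +%F)
echo "shipping myapp:$TAG"

# Then build and export exactly as before, with the derived tag:
# docker build -t "myapp:$TAG" .
# docker save -o "myapp-$TAG.tar" "myapp:$TAG"
```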

Note: If your build host and VM architectures differ

If your build host and VM are different architectures (say, an Apple Silicon laptop and an x86_64 server), build for the VM’s architecture (or multi-arch) before you export. Otherwise docker load will succeed, but the container will fail to start with an exec format error.
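A sketch of checking this up front: map the VM’s `uname -m` output to Docker’s platform string, then pass it to the build. The two-way mapping below covers the common x86_64/aarch64 cases; `--platform` is the standard `docker buildx build` flag, but the docker command itself is left as an echoed suggestion here:

```shell
# On the VM: discover its architecture.
VM_ARCH=$(uname -m)   # typically x86_64 or aarch64

# Map the kernel's name to Docker's platform string.
case "$VM_ARCH" in
  x86_64)  PLATFORM=linux/amd64 ;;
  aarch64) PLATFORM=linux/arm64 ;;
  *)       PLATFORM=linux/$VM_ARCH ;;
esac
echo "build with: docker buildx build --platform $PLATFORM -t myapp:2026-03-04 ."
```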