An air-gap deployment checklist for AI platforms
Concrete things to verify before you stand up an LLM platform on a network that has no outbound internet.
By The Bastion AI team
Most LLM platform projects assume the public internet exists. Most regulated networks assume the opposite. The gap between those two assumptions is where these deployments tend to die. Here's the checklist we walk through with customers before kickoff.
Mirror everything you depend on
- Container registry (Harbor, JFrog Artifactory, or a local ECR mirror).
- Python package index (devpi, Artifactory).
- Model weights — pinned by hash, signed, mirrored to local object store.
- OS package mirrors covering the GPU drivers and CUDA toolkit — driver/toolkit version skew is a common first-boot failure.
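Pinning "by hash" is worth making concrete. A minimal sketch of a mirror audit, assuming a hypothetical pin manifest (the artifact names and hashes below are illustrative, not real releases):

```python
import hashlib
from pathlib import Path

# Hypothetical pin manifest: path inside the local mirror -> approved sha256.
# In practice this file would itself be signed and version-controlled.
PINS = {
    "models/example-model.safetensors": "a" * 64,
    "wheels/example_pkg-1.0-py3-none-any.whl": "b" * 64,
}

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so multi-GB weights never load into RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_mirror(root: Path) -> list[str]:
    """Return every artifact that is missing from the mirror or fails its pin."""
    failures = []
    for rel, expected in PINS.items():
        p = root / rel
        if not p.exists():
            failures.append(f"MISSING {rel}")
        elif sha256_of(p) != expected:
            failures.append(f"HASH MISMATCH {rel}")
    return failures
```

Run this against the mirror on every sync, not just at install time; a mirror that drifts silently is worse than no mirror.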
Pin and sign everything you ship
Every artifact that crosses into the network gets a hash, a signature, and an SBOM. We use Sigstore where we can. The point isn't bureaucracy — it's being able to answer “is the model that's currently serving prod the one we approved?” in seconds rather than days.
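The "answer in seconds" property comes from keeping an approval ledger keyed by artifact digest. This is a sketch of the lookup shape only — in a real deployment the ledger would be backed by signed attestations in your registry (e.g. via Sigstore), not an in-memory dict, and the digest, names, and SBOM path below are made up for illustration:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Approval:
    artifact: str      # human-readable name and version
    approved_by: str   # reviewing team or pipeline identity
    approved_on: date
    sbom_ref: str      # pointer to the SBOM captured at approval time

# Hypothetical ledger: serving digest -> approval record.
LEDGER = {
    "sha256:" + "a" * 64: Approval(
        "example-model v1.2", "sec-review", date(2024, 5, 1),
        "sboms/example-model-1.2.json",
    ),
}

def audit(serving_digest: str) -> str:
    """Answer 'is the digest in prod the one we approved?' with one lookup."""
    rec = LEDGER.get(serving_digest)
    if rec is None:
        return f"UNAPPROVED artifact serving: {serving_digest}"
    return f"OK: {rec.artifact}, approved by {rec.approved_by} on {rec.approved_on}"
```

The key design choice is indexing by digest rather than by tag or model name: tags are mutable, digests are not, so the lookup answers the question the auditor actually asked.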
Plan for upgrades on day zero
The first model upgrade is harder than the first install. Before you go live, dry-run the path: how does a new model version land in the mirror, get scanned, get approved, get rolled out behind LiteLLM, and get rolled back if it regresses? If the answer is “we'll figure it out later,” you'll figure it out at 2 AM.
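The dry run is easier to enforce if the path is written down as explicit stages with no shortcuts. A minimal sketch — the stage names are ours, not LiteLLM's, and a real pipeline would attach scan results and approvals to each transition:

```python
from enum import Enum

class Stage(Enum):
    LANDED = "landed in mirror"
    SCANNED = "scanned"
    APPROVED = "approved"
    LIVE = "serving behind the gateway"
    ROLLED_BACK = "rolled back"

# Legal transitions: the single promotion path plus the rollback escape hatch.
TRANSITIONS = {
    Stage.LANDED: {Stage.SCANNED},
    Stage.SCANNED: {Stage.APPROVED},
    Stage.APPROVED: {Stage.LIVE},
    Stage.LIVE: {Stage.ROLLED_BACK},
    Stage.ROLLED_BACK: set(),
}

def promote(current: Stage, target: Stage) -> Stage:
    """Refuse shortcuts (e.g. LANDED -> LIVE) so a dry run exercises every gate."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.name} -> {target.name}")
    return target
```

Walking a new model version through every transition — including LIVE to ROLLED_BACK — before go-live is exactly the dry run described above, minus the 2 AM part.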