isol8:<runtime> images. Optionally build custom images with pre-installed dependencies.
## What It Does
- Verify Docker — Pings the Docker daemon to confirm it’s running and accessible.
- Build base images — Builds all 5 runtime images from the multi-stage docker/Dockerfile. Each runtime is a separate build target (python, node, bun, deno, bash).
- Build custom images — If package flags are provided (via CLI flags or config.dependencies), builds custom images with dependencies pre-installed.
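Since each runtime is a separate target in the multi-stage Dockerfile, the base-image step is roughly equivalent to one tagged build per target (the exact docker flags isol8 passes are an assumption; target names come from the list above):

```shell
docker build -f docker/Dockerfile --target python -t isol8:python .
docker build -f docker/Dockerfile --target node   -t isol8:node   .
docker build -f docker/Dockerfile --target bun    -t isol8:bun    .
docker build -f docker/Dockerfile --target deno   -t isol8:deno   .
docker build -f docker/Dockerfile --target bash   -t isol8:bash   .
```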
## Options
- Comma-separated Python packages to install via pip.
- Comma-separated Node.js packages to install via npm.
- Comma-separated Bun packages to install via bun.
- Comma-separated Deno module URLs to pre-cache via deno cache.
- Comma-separated Alpine apk packages to install.
## Custom Images
When you provide package flags (or have dependencies in your config file), isol8 builds a custom image tagged isol8:<runtime>-custom. These custom images are automatically preferred over base images by the resolveImage() logic — if isol8:python-custom exists, all Python executions use it without any additional flags.
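A minimal sketch of that preference order, assuming resolveImage() is given some way to check whether an image tag exists (the real implementation queries Docker; the signature here is an assumption):

```typescript
// Sketch of the custom-over-base preference described above.
// imageExists stands in for a Docker image lookup (an assumption, not the real API).
type Runtime = "python" | "node" | "bun" | "deno" | "bash";

function resolveImage(
  runtime: Runtime,
  imageExists: (tag: string) => boolean,
): string {
  const custom = `isol8:${runtime}-custom`;
  // The custom image wins whenever it exists; otherwise fall back to the base image.
  return imageExists(custom) ? custom : `isol8:${runtime}`;
}

// Example: only the custom Python image has been built.
const built = new Set(["isol8:python-custom", "isol8:node"]);
console.log(resolveImage("python", (t) => built.has(t))); // isol8:python-custom
console.log(resolveImage("node", (t) => built.has(t)));   // isol8:node
```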
For example, after a build that specifies numpy and pandas as Python packages, isol8:python-custom is created with both pre-installed. All subsequent Python executions automatically use this image.
### How Custom Images Are Built
Custom images extend the base image with a runtime-specific install command. The generated Dockerfile for each runtime:

| Runtime | Generated Dockerfile |
|---|---|
| python | FROM isol8:python<br>RUN pip install --no-cache-dir numpy pandas |
| node | FROM isol8:node<br>RUN npm install -g lodash express |
| bun | FROM isol8:bun<br>RUN bun install -g zod hono |
| deno | FROM isol8:deno<br>RUN deno cache https://deno.land/std/path/mod.ts |
| bash | FROM isol8:bash<br>RUN apk add --no-cache jq curl |
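The table above condenses into a small generator; a sketch under assumed names (customDockerfile and installCommand are not the tool's actual identifiers, but the install commands match the table):

```typescript
// Sketch: produce the two-line custom Dockerfile for a runtime and package list.
const installCommand: Record<string, (pkgs: string[]) => string> = {
  python: (p) => `RUN pip install --no-cache-dir ${p.join(" ")}`,
  node:   (p) => `RUN npm install -g ${p.join(" ")}`,
  bun:    (p) => `RUN bun install -g ${p.join(" ")}`,
  deno:   (p) => `RUN deno cache ${p.join(" ")}`,
  bash:   (p) => `RUN apk add --no-cache ${p.join(" ")}`,
};

function customDockerfile(runtime: string, pkgs: string[]): string {
  // Extend the base image for this runtime, then install the requested packages.
  return `FROM isol8:${runtime}\n${installCommand[runtime](pkgs)}`;
}

console.log(customDockerfile("python", ["numpy", "pandas"]));
// FROM isol8:python
// RUN pip install --no-cache-dir numpy pandas
```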
CLI package flags are merged with config.dependencies, so both sources contribute to the custom image.
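For illustration, a config file carrying dependencies might look like this (the runtime-keyed shape and everything beyond the dependencies key are assumptions):

```json
{
  "dependencies": {
    "python": ["numpy", "pandas"],
    "bash": ["jq", "curl"]
  }
}
```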
## Base Images
All base images are built from a multi-stage Dockerfile in the docker/ directory. The shared base stage is Alpine 3.21 and includes:

- tini as the init process (PID 1 signal handling)
- curl and ca-certificates
- The HTTP/HTTPS filtering proxy (proxy.mjs) copied to /usr/local/bin/
- /sandbox as the working directory
- tini as the entrypoint
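Reconstructed as a Dockerfile fragment, the shared base stage would look roughly like this (the proxy's source path and the tini binary location are assumptions):

```dockerfile
# Hedged sketch of the shared base stage described above.
FROM alpine:3.21 AS base
RUN apk add --no-cache tini curl ca-certificates
# HTTP/HTTPS filtering proxy, available inside every runtime image.
COPY proxy.mjs /usr/local/bin/proxy.mjs
WORKDIR /sandbox
ENTRYPOINT ["/sbin/tini", "--"]
```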
Each runtime stage extends base with its specific runtime binary:
| Stage | Base | Installs |
|---|---|---|
| python | base | python3, py3-pip |
| node | base | nodejs, npm |
| bun | base | bash, unzip, libstdc++, libgcc, then downloads Bun via install script |
| deno | denoland/deno:alpine | tini, curl, ca-certificates, proxy.mjs |
| bash | base | bash |
The Deno stage uses
denoland/deno:alpine as its base image instead of the shared base stage. It independently installs tini, curl, and ca-certificates, and copies proxy.mjs — mirroring the setup from base but starting from the official Deno Alpine image.