isol8 ships with three benchmark scripts that measure end-to-end execution latency for a minimal “hello world” program across all supported runtimes. These benchmarks run on real Docker containers with the same security constraints used in production.
Results below were measured on Apple Silicon (Docker Desktop). Your numbers will vary depending on hardware, Docker configuration, and system load.

Running Benchmarks

Benchmarks are available as npm scripts. They require Docker to be running and base images to be built (isol8 setup).
```bash
# Cold start — fresh engine per run, no pool reuse
bun run bench

# Warm pool — single engine instance, measures pool speedup
bun run bench:pool

# Detailed breakdown — per-phase timing using raw Docker API
bun run bench:detailed
```
You can also run them directly via bunx, npx, or pnpx:
```bash
bunx tsx benchmarks/spawn.ts
npx tsx benchmarks/spawn.ts
pnpx tsx benchmarks/spawn.ts
```

Cold Start

Each iteration creates a fresh DockerIsol8 instance, executes a single “hello world” script, and tears down the engine. This measures worst-case latency when no warm containers are available.
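In sketch form, each measured iteration looks like this (a minimal illustration; the `execute` and `shutdown` method names are placeholders rather than isol8's confirmed API; the real code lives in benchmarks/spawn.ts):

```ts
import { performance } from "node:perf_hooks";
import { DockerIsol8 } from "isol8"; // assumed import path

// Sketch of the cold-start loop. `execute` and `shutdown` are placeholder
// method names; see benchmarks/spawn.ts for the actual code.
async function benchColdStart(runtime: string, code: string, iterations = 10) {
  const samples: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    const engine = new DockerIsol8();    // fresh engine, empty pool
    await engine.execute(runtime, code); // single "hello world" run
    await engine.shutdown();             // tear down the engine
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  return {
    min: samples[0],
    median: samples[Math.floor(samples.length / 2)],
    max: samples[samples.length - 1],
    avg: samples.reduce((sum, s) => sum + s, 0) / samples.length,
  };
}
```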
| Runtime | Min | Median | Max | Avg |
|---------|-----|--------|-----|-----|
| Python  | 148ms | 155ms | 414ms | 239ms |
| Node.js | 152ms | 155ms | 186ms | 165ms |
| Bun     | 124ms | 145ms | 260ms | 176ms |
| Deno    | 339ms | 372ms | 626ms | 446ms |
| Bash    | 115ms | 123ms | 148ms | 128ms |
Takeaways:
  • Bash and Bun are the fastest cold-start runtimes (~120-145ms median)
  • Deno is the slowest due to its startup overhead and permission system (~370ms median)
  • Python and Node.js fall in between (~155ms median each)

Warm Pool

A single DockerIsol8 instance is reused across 5 sequential runs. The first run is cold (the pool is empty and a container must be created). Subsequent runs acquire pre-started containers from the warm pool, which eliminates the container create+start overhead.
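The loop, in the same placeholder notation as the cold-start sketch above:

```ts
import { performance } from "node:perf_hooks";
import { DockerIsol8 } from "isol8"; // assumed import path

// Warm-pool sketch: one engine reused for 5 sequential runs.
// Run 1 is cold; runs 2-5 acquire pre-started containers from the pool.
async function benchWarmPool(runtime: string, code: string) {
  const engine = new DockerIsol8();      // placeholder API, as above
  const times: number[] = [];
  for (let i = 0; i < 5; i++) {
    const start = performance.now();
    await engine.execute(runtime, code); // placeholder method name
    times.push(performance.now() - start);
  }
  await engine.shutdown();               // placeholder method name

  const [cold, ...warm] = times;
  const warmAvg = warm.reduce((sum, t) => sum + t, 0) / warm.length;
  return { cold, warmAvg, warmMin: Math.min(...warm), speedup: cold / warmAvg };
}
```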
| Runtime | Cold | Warm Avg | Warm Min | Speedup |
|---------|------|----------|----------|---------|
| Python  | 285ms | 95ms | 89ms | 3.2x |
| Node.js | 177ms | 91ms | 76ms | 2.3x |
| Bun     | 157ms | 72ms | 66ms | 2.4x |
| Deno    | 330ms | 264ms | 231ms | 1.4x |
| Bash    | 222ms | 68ms | 55ms | 4.0x |
Takeaways:
  • The warm pool delivers 2-4x speedup for most runtimes
  • Bash benefits the most (4.0x speedup, 55ms warm minimum)
  • Deno benefits the least (1.4x) because its startup overhead is within the runtime itself, not in container creation
  • Warm execution for Bash, Bun, Node.js, and Python is consistently under 100ms
(Chart: bars show cold-start latency; the line shows the warm-pool minimum. The gap between them is the overhead eliminated by the pool.)

Execution Phase Breakdown

This benchmark bypasses DockerIsol8 and uses the raw dockerode API directly to measure time spent in each phase of the container lifecycle. No warm pool is involved.
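A condensed sketch of the measurement. The dockerode calls are real, but the image name, the sleep keep-alive command, and the simplified shell-redirect write are assumptions (the actual write phase involves base64 encoding, as noted in the takeaways below):

```ts
import Docker from "dockerode";
import { performance } from "node:perf_hooks";

const docker = new Docker();

// Consume an exec output stream so its "end" event fires.
const drain = (stream: NodeJS.ReadableStream) =>
  new Promise<void>((resolve) => {
    stream.on("data", () => {});
    stream.on("end", () => resolve());
  });

async function timePhases(image: string, runCmd: string[]) {
  const phases: Record<string, number> = {};
  let mark = performance.now();
  const lap = (name: string) => {
    phases[name] = performance.now() - mark;
    mark = performance.now();
  };

  const container = await docker.createContainer({
    Image: image,
    Cmd: ["sleep", "86400"], // keep the container alive for exec (assumed)
  });
  lap("create");

  await container.start();
  lap("start");

  // Write phase, simplified here to a shell redirect; the real benchmark
  // writes user code to /sandbox/main.* with base64 encoding.
  const write = await container.exec({
    Cmd: ["sh", "-c", `printf 'print("hello")' > /sandbox/main.py`],
    AttachStdout: true,
    AttachStderr: true,
  });
  await drain(await write.start({}));
  lap("write");

  const exec = await container.exec({
    Cmd: runCmd, // e.g. ["python3", "/sandbox/main.py"] (assumed)
    AttachStdout: true,
    AttachStderr: true,
  });
  lap("execSetup");

  await drain(await exec.start({}));
  lap("run");

  await container.remove({ force: true });
  lap("cleanup");
  return phases;
}
```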
| Phase | Description |
|-------|-------------|
| Create | docker.createContainer() — allocates the container |
| Start | container.start() — boots the container |
| Write | Writes user code to /sandbox/main.* via exec |
| Exec Setup | container.exec() — creates the exec instance |
| Run | exec.start() — runs the code and collects output |
| Cleanup | container.remove({ force: true }) — tears down the container |
| Runtime | Create | Start | Write | Exec Setup | Run | Cleanup | Total |
|---------|--------|-------|-------|------------|-----|---------|-------|
| Python  | 41ms | 49ms | 17ms | 1ms | 40ms | 43ms | 190ms |
| Node.js | 32ms | 63ms | 34ms | 1ms | 39ms | 43ms | 212ms |
| Bun     | 32ms | 56ms | 26ms | 1ms | 27ms | 44ms | 186ms |
| Bash    | 35ms | 69ms | 23ms | 1ms | 20ms | 48ms | 196ms |
Deno is excluded from the detailed breakdown because it uses a different base image (denoland/deno:alpine) rather than the shared multi-stage build.
Takeaways:
  • Create + Start dominate at ~70-110ms combined — this is exactly what the warm pool eliminates
  • Exec Setup is negligible (~1ms) — Docker exec creation is fast
  • Run time reflects actual runtime startup: Bash (20ms) and Bun (27ms) are fastest, Python (40ms) and Node.js (39ms) are similar
  • Cleanup is consistent at ~43-48ms regardless of runtime
  • Write varies by runtime (17-34ms) due to file size and base64 encoding overhead

How the Warm Pool Works

The container pool is the primary performance optimization in isol8. Understanding it helps explain the benchmark results.
  1. Acquire: If the pool has a pre-started container for the requested image, it is returned immediately. Otherwise, a new container is created and started inline (cold path).
  2. Execute: Code is written to /sandbox/main.* and executed via docker exec.
  3. Release: After execution, the container’s /sandbox tmpfs is wiped clean and the container is returned to the pool for reuse.
  4. Replenish: When a container is acquired from the pool, a background task creates a replacement container so the pool stays warm for the next request.
The pool maintains up to 2 containers per image (hardcoded). This means after the first cold execution of a given runtime, subsequent executions skip the ~100ms create+start overhead entirely.
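An illustrative sketch of that lifecycle (not isol8's actual source; Container is dockerode's type, and createAndStart / wipeSandbox are hypothetical helpers):

```ts
import type { Container } from "dockerode";

// Hypothetical helpers: create+start a container for an image; wipe /sandbox.
declare function createAndStart(image: string): Promise<Container>;
declare function wipeSandbox(container: Container): Promise<void>;

class ContainerPool {
  private idle = new Map<string, Container[]>(); // image -> warm containers
  private readonly maxPerImage = 2;              // isol8 hardcodes 2 per image

  // Acquire: return a warm container if one exists, else create inline.
  async acquire(image: string): Promise<Container> {
    const warm = this.idle.get(image)?.pop();
    if (warm) {
      void this.replenish(image); // background replacement keeps the pool warm
      return warm;                // fast path: already created and started
    }
    return createAndStart(image); // cold path: inline create + start
  }

  // Release: wipe the /sandbox tmpfs, then return the container for reuse.
  async release(image: string, container: Container): Promise<void> {
    await wipeSandbox(container);
    const list = this.idle.get(image) ?? [];
    if (list.length < this.maxPerImage) {
      list.push(container);
      this.idle.set(image, list);
    } else {
      await container.remove({ force: true }); // pool full: discard
    }
  }

  private async replenish(image: string): Promise<void> {
    const list = this.idle.get(image) ?? [];
    if (list.length < this.maxPerImage) {
      list.push(await createAndStart(image));
      this.idle.set(image, list);
    }
  }
}
```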

Methodology

All benchmarks:
  • Execute a minimal “hello world” program (e.g., print("hello") for Python, console.log("hello") for Node.js)
  • Use default security settings: read-only rootfs, network disabled, 512MB memory, 1 CPU, 64 PID limit (see the dockerode sketch after this list)
  • Run inside Docker Desktop on the host machine (no remote Docker)
  • Use performance.now() for high-resolution timing
  • Are located in the benchmarks/ directory and can be run via bun run bench, bun run bench:pool, or bun run bench:detailed
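In raw Docker API terms, those default security settings translate roughly to the following dockerode container config (a sketch; the image name and tmpfs size are assumptions, and isol8's exact flags may differ):

```ts
import Docker from "dockerode";

const docker = new Docker();

// Rough dockerode equivalent of the documented default security settings.
const container = await docker.createContainer({
  Image: "isol8-python:latest",           // illustrative image name
  NetworkDisabled: true,                  // network disabled
  HostConfig: {
    ReadonlyRootfs: true,                 // read-only root filesystem
    NetworkMode: "none",                  // no network namespace
    Memory: 512 * 1024 * 1024,            // 512MB memory limit
    NanoCpus: 1_000_000_000,              // 1 CPU (units of 1e-9 CPUs)
    PidsLimit: 64,                        // 64 PID limit
    Tmpfs: { "/sandbox": "rw,size=64m" }, // writable /sandbox tmpfs (size assumed)
  },
});
```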