The most common use case for isol8: give an AI agent (ChatGPT, Claude, a custom LLM) the ability to write and execute code safely. The agent generates code, isol8 runs it in a sandbox, and the result is returned to the agent for further reasoning.

Basic Setup

import { DockerIsol8 } from "isol8";

const isol8 = new DockerIsol8({
  mode: "ephemeral",
  network: "none",
  memoryLimit: "512m",
  timeoutMs: 15000,
});

await isol8.start();

async function executeAgentCode(code: string, runtime: "python" | "node" | "bun") {
  const result = await isol8.execute({ code, runtime });

  return {
    output: result.stdout,
    error: result.stderr,
    exitCode: result.exitCode,
    durationMs: result.durationMs,
  };
}
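Sandbox output can be arbitrarily large, while the model's context window is not. Before returning results to the agent, it is worth capping their size. Below is a minimal sketch of such a cap; `truncateOutput` is a hypothetical helper, not part of the isol8 API. It keeps the head and tail of the output, since Python tracebacks put the most useful line at the end.

```typescript
// Hypothetical helper: cap sandbox output before returning it to the model.
// Keeps the head and tail of the text, since errors (e.g. Python tracebacks)
// usually appear at the end.
function truncateOutput(text: string, maxChars = 4000): string {
  if (text.length <= maxChars) return text;
  const half = Math.floor(maxChars / 2);
  const omitted = text.length - maxChars;
  return `${text.slice(0, half)}\n…[${omitted} chars omitted]…\n${text.slice(-half)}`;
}
```

You would apply this to `result.stdout` and `result.stderr` inside `executeAgentCode` before handing the result back to the agent.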

With OpenAI Function Calling

Wire isol8 as a tool that the LLM can invoke:

import OpenAI from "openai";
import { DockerIsol8 } from "isol8";

const openai = new OpenAI();
const isol8 = new DockerIsol8({ mode: "ephemeral", network: "none" });
await isol8.start();

const tools: OpenAI.ChatCompletionTool[] = [
  {
    type: "function",
    function: {
      name: "execute_code",
      description: "Execute Python code in a secure sandbox and return the output",
      parameters: {
        type: "object",
        properties: {
          code: { type: "string", description: "Python code to execute" },
        },
        required: ["code"],
      },
    },
  },
];

async function chat(userMessage: string) {
  const messages: OpenAI.ChatCompletionMessageParam[] = [
    { role: "system", content: "You can execute Python code to answer questions. Use the execute_code tool." },
    { role: "user", content: userMessage },
  ];

  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages,
    tools,
  });

  const toolCall = response.choices[0].message.tool_calls?.[0];
  if (toolCall?.function.name === "execute_code") {
    const { code } = JSON.parse(toolCall.function.arguments);
    const result = await isol8.execute({ code, runtime: "python" });

    // Feed result back to the model
    messages.push(response.choices[0].message);
    messages.push({
      role: "tool",
      tool_call_id: toolCall.id,
      content: JSON.stringify({
        stdout: result.stdout,
        stderr: result.stderr,
        exitCode: result.exitCode,
      }),
    });

    const finalResponse = await openai.chat.completions.create({
      model: "gpt-4o",
      messages,
    });

    return finalResponse.choices[0].message.content;
  }

  return response.choices[0].message.content;
}

// Usage
const answer = await chat("What is the 50th Fibonacci number?");
console.log(answer);
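One caveat with the loop above: `toolCall.function.arguments` is a JSON string generated by the model, and models occasionally emit malformed JSON. A defensive parse keeps the agent loop from crashing; `parseToolArgs` here is a hypothetical helper, not part of isol8 or the OpenAI SDK.

```typescript
// Hypothetical helper: defensively parse model-generated tool arguments.
// Returns null instead of throwing, so the agent loop can report the
// problem back to the model and ask it to retry.
function parseToolArgs(raw: string): { code: string } | null {
  try {
    const parsed = JSON.parse(raw);
    if (typeof parsed?.code === "string") {
      return { code: parsed.code };
    }
    return null;
  } catch {
    return null;
  }
}
```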

Streaming Results to the User

For long-running code, stream output in real time so the user sees progress:

async function executeWithStreaming(code: string) {
  const chunks: string[] = [];

  for await (const event of isol8.executeStream({ code, runtime: "python" })) {
    switch (event.type) {
      case "stdout":
        process.stdout.write(event.data);
        chunks.push(event.data);
        break;
      case "stderr":
        process.stderr.write(event.data);
        break;
      case "exit":
        console.log(`\nProcess exited with code ${event.data}`);
        break;
    }
  }

  return chunks.join("");
}
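Streaming also raises a memory question: a runaway loop in the sandbox can emit output faster than you want to buffer it. One option, sketched below as a hypothetical host-side utility (not part of isol8), is a bounded buffer that stops accumulating past a cap while still letting the stream drain.

```typescript
// Hypothetical bounded buffer: accumulate streamed stdout, but stop growing
// past a cap so runaway sandbox output cannot exhaust host memory.
class BoundedBuffer {
  private chunks: string[] = [];
  private size = 0;
  truncated = false;

  constructor(private maxChars = 1_000_000) {}

  push(chunk: string): void {
    if (this.truncated) return;
    if (this.size + chunk.length > this.maxChars) {
      // Keep only what fits under the cap, then mark as truncated.
      this.chunks.push(chunk.slice(0, this.maxChars - this.size));
      this.truncated = true;
    } else {
      this.chunks.push(chunk);
    }
    this.size += chunk.length;
  }

  toString(): string {
    return this.chunks.join("") + (this.truncated ? "\n[output truncated]" : "");
  }
}
```

In `executeWithStreaming`, you would replace the unbounded `chunks` array with a `BoundedBuffer` and `push` each stdout event into it.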

With Secrets (API Keys)

When the agent needs to call external APIs, inject credentials as masked secrets so they never leak in the output:

const result = await isol8.execute({
  code: `
import os, urllib.request, json
req = urllib.request.Request("https://api.example.com/data",
  headers={"Authorization": f"Bearer {os.environ['API_KEY']}"})
resp = urllib.request.urlopen(req)
print(json.loads(resp.read()))
`,
  runtime: "python",
  env: { API_KEY: "sk-secret-key" },
});

// Note: env injects the value into the sandbox but does not mask it in the output

For proper secret masking, use the secrets option on the engine instead. If the code accidentally prints the API key, it then appears as "***" in result.stdout:

const isol8 = new DockerIsol8({
  mode: "ephemeral",
  network: "filtered",
  secrets: { API_KEY: "sk-secret-key" },
});
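Conceptually, this kind of masking amounts to replacing every occurrence of each secret value in the captured output before it is returned. The sketch below illustrates the idea only; it is not isol8's actual implementation.

```typescript
// Illustrative only — a minimal sketch of output masking, not isol8's
// actual implementation. Every occurrence of a secret value in the captured
// output is replaced with "***" before the result is returned to the caller.
function maskSecrets(output: string, secrets: Record<string, string>): string {
  let masked = output;
  for (const value of Object.values(secrets)) {
    // split/join replaces all occurrences without regex-escaping concerns
    masked = masked.split(value).join("***");
  }
  return masked;
}
```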

Multi-Turn Conversations with State

Use persistent mode for multi-turn agent interactions where each step builds on the previous:

const isol8 = new DockerIsol8({ mode: "persistent" });
await isol8.start();

// Turn 1: Agent creates a dataset
await isol8.execute({
  code: `
import json
data = [{"name": "Alice", "score": 95}, {"name": "Bob", "score": 87}]
json.dump(data, open("/sandbox/data.json", "w"))
print("Dataset created")
`,
  runtime: "python",
});

// Turn 2: Agent analyzes the dataset
const result = await isol8.execute({
  code: `
import json
data = json.load(open("/sandbox/data.json"))
avg = sum(d["score"] for d in data) / len(data)
print(f"Average score: {avg}")
`,
  runtime: "python",
});

console.log(result.stdout); // "Average score: 91.0"

await isol8.stop();

Error Handling

Always handle execution failures gracefully in your agent loop:

async function safeExecute(code: string) {
  try {
    const result = await isol8.execute({
      code,
      runtime: "python",
      timeoutMs: 10000,
    });

    if (result.exitCode !== 0) {
      return {
        success: false,
        error: result.stderr || "Process exited with non-zero code",
        exitCode: result.exitCode,
      };
    }

    return { success: true, output: result.stdout };
  } catch (err) {
    return {
      success: false,
      error: err instanceof Error ? err.message : "Unknown error",
    };
  }
}
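A common follow-up to `safeExecute` is letting the agent repair its own mistakes: when execution fails, feed the error back to the model as a new message and ask it to fix the code. The helper below, `buildRetryPrompt`, is a hypothetical sketch of how that follow-up message might be assembled.

```typescript
// Hypothetical helper: turn a failed execution into a follow-up prompt so
// the agent can attempt to fix its own code on the next turn.
function buildRetryPrompt(code: string, error: string): string {
  return [
    "The previous code failed. Fix the error and try again.",
    "Code:",
    code,
    "Error:",
    error,
  ].join("\n");
}
```

In practice you would append this as a user (or tool) message and call the model again, typically with a cap on retry attempts.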