This page covers sandbox configuration options including environment variables and connecting external storage buckets.

Environment variables

Default environment variables

Upon creating a sandbox, useful sandbox metadata is set as environment variables for commands:
  • E2B_SANDBOX is set to true so processes can detect that they are running inside a sandbox VM.
  • E2B_SANDBOX_ID holds the ID of the current sandbox.
  • E2B_TEAM_ID holds the ID of the team that created the sandbox.
  • E2B_TEMPLATE_ID holds the ID of the template the sandbox was created from.
You can try it out by running the following code in the sandbox:
import { Sandbox } from 'e2b'

const sandbox = await Sandbox.create()
const result = await sandbox.commands.run('echo $E2B_SANDBOX_ID')
These default environment variables are only set for commands run through the SDK; when using the CLI, you can find the same values as dotfiles in the /run/e2b/ directory:
user@e2b:~$ ls -a /run/e2b/
.E2B_SANDBOX  .E2B_SANDBOX_ID  .E2B_TEAM_ID  .E2B_TEMPLATE_ID

Setting environment variables

There are three ways to set environment variables in a sandbox:
  1. Global environment variables when creating the sandbox.
  2. When running code in the sandbox.
  3. When running commands in the sandbox.

1. Global environment variables

You can set global environment variables when creating a sandbox.
const sandbox = await Sandbox.create({
  envs: {
    MY_VAR: 'my_value',
  },
})
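Variables set this way are visible to everything that later runs in the sandbox. As a quick check, you could echo the variable from a command; this sketch reuses the `sandbox` created above and assumes a configured E2B API key:

```typescript
// Reusing `sandbox` from the snippet above: the global variable is
// visible to any subsequent command or code execution.
const result = await sandbox.commands.run('echo $MY_VAR')
console.log(result.stdout) // expected to contain 'my_value'
```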

2. Setting environment variables when running code

You can set environment variables for a specific code-execution call in the sandbox.
  • These environment variables are scoped to the call but are not private in the OS.
  • If a global environment variable with the same name exists, it is overridden only for that call.
import { Sandbox } from '@e2b/code-interpreter'

const sandbox = await Sandbox.create()
const result = await sandbox.runCode('import os; print(os.environ.get("MY_VAR"))', {
  envs: {
    MY_VAR: 'my_value',
  },
})

3. Setting environment variables when running commands

You can set environment variables for a specific command execution in the sandbox.
  • These environment variables are scoped to the command but are not private in the OS.
  • If a global environment variable with the same name exists, it is overridden only for that command.
const sandbox = await Sandbox.create()
const result = await sandbox.commands.run('echo $MY_VAR', {
  envs: {
    MY_VAR: '123',
  },
})
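The scoping behavior above is the same as ordinary per-command environment variables in a POSIX shell; a local sketch (plain sh, no sandbox involved) illustrates it:

```shell
# A "global" variable, standing in for one set at sandbox creation.
export MY_VAR=global_value

# A per-command value overrides it for that one command only...
MY_VAR=123 sh -c 'echo "$MY_VAR"'   # prints: 123

# ...while the surrounding environment keeps the original value.
echo "$MY_VAR"                      # prints: global_value
```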

Storage buckets

To connect a bucket for storing data from the sandbox, you can mount it inside the sandbox with a FUSE-based file system. You will need to create a custom sandbox template with the appropriate FUSE client installed. The guide for building a custom sandbox template can be found here.

Google Cloud Storage

Prerequisites

To use Google Cloud Storage, you’ll need a bucket and a service account. You can create a service account here and a bucket here. If you want to write to the bucket, make sure the service account has the Storage Object User role for this bucket. You can find a guide on creating a service account key here.

Mounting the bucket

To use Google Cloud Storage, we need to install the gcsfuse package. Here's a simple template that creates a container with gcsfuse installed.
const template = Template()
  .fromTemplate("code-interpreter-v1")
  .aptInstall(["gnupg", "lsb-release"])
  .runCmd("lsb_release -c -s > /tmp/lsb_release")
  .runCmd(
    'GCSFUSE_REPO=$(cat /tmp/lsb_release) && echo "deb [signed-by=/usr/share/keyrings/cloud.google.asc] https://packages.cloud.google.com/apt gcsfuse-$GCSFUSE_REPO main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list',
  )
  .runCmd(
    "curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo tee /usr/share/keyrings/cloud.google.asc",
  )
  .aptInstall(["gcsfuse"])
The bucket is mounted during the sandbox runtime using the gcsfuse command.
import { Sandbox } from 'e2b'

const sandbox = await Sandbox.create('<your template id>')
await sandbox.files.makeDir('/home/user/bucket')
await sandbox.files.write('/home/user/key.json', '<your service account key>')

await sandbox.commands.run('sudo gcsfuse <flags> --key-file /home/user/key.json <bucket-name> /home/user/bucket')
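Once the mount succeeds, the bucket behaves like a normal directory. A quick, hypothetical sanity check (reusing `sandbox` from above; `hello.txt` is an illustrative file name, not required by anything) might be:

```typescript
// List the mounted bucket and write a test object through the mount.
const ls = await sandbox.commands.run('ls -la /home/user/bucket')
console.log(ls.stdout)

// Writes through the mount point end up in the bucket itself.
await sandbox.files.write('/home/user/bucket/hello.txt', 'hello from the sandbox')
```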

Flags

The complete list of flags is available here.

Allow the default user to access the files

To allow the default user to access the files, we can use the following flags:
-o allow_other -file-mode=777 -dir-mode=777
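Put together with the mount command from above, a full invocation might look like the following sketch; the bucket name is a placeholder, and the key path and permission modes are assumptions from the earlier example, not requirements:

```shell
sudo gcsfuse -o allow_other -file-mode=777 -dir-mode=777 \
  --key-file /home/user/key.json <bucket-name> /home/user/bucket
```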

Amazon S3

To use Amazon S3, we can use the s3fs package. The template setup is similar to that of Google Cloud Storage.
const template = Template()
  .fromImage("ubuntu:latest")
  .aptInstall(["s3fs"])
Similar to Google Cloud Storage, the bucket is mounted during the runtime of the sandbox. The s3fs command is used to mount the bucket to the sandbox.
import { Sandbox } from 'e2b'

const sandbox = await Sandbox.create('<your template id>')
await sandbox.files.makeDir('/home/user/bucket')

// Create a file with the credentials
// If you store the credentials at another path, pass it to s3fs with the -o passwd_file flag
await sandbox.files.write('/root/.passwd-s3fs', '<AWS_ACCESS_KEY_ID>:<AWS_SECRET_ACCESS_KEY>')
await sandbox.commands.run('sudo chmod 600 /root/.passwd-s3fs')

await sandbox.commands.run('sudo s3fs <flags> <bucket-name> /home/user/bucket')

Flags

The complete list of flags is available here.

Allow the default user to access the files

To allow the default user to access the files, add the following flag:
-o allow_other
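Combined with the mount command from above, the full invocation might look like this sketch; `<bucket-name>` is a placeholder, and the credentials path assumes the default /root/.passwd-s3fs location used earlier:

```shell
sudo s3fs -o allow_other -o passwd_file=/root/.passwd-s3fs <bucket-name> /home/user/bucket
```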

Cloudflare R2

For Cloudflare R2, we can use a setup very similar to S3. The template remains the same as for S3. However, the mounting differs slightly; we need to specify the endpoint for R2.
import { Sandbox } from 'e2b'

const sandbox = await Sandbox.create('<your template id>')
await sandbox.files.makeDir('/home/user/bucket')

// Create a file with the R2 credentials
// If you store the credentials at another path, pass it to s3fs with the -o passwd_file flag
await sandbox.files.write('/root/.passwd-s3fs', '<R2_ACCESS_KEY_ID>:<R2_SECRET_ACCESS_KEY>')
await sandbox.commands.run('sudo chmod 600 /root/.passwd-s3fs')

await sandbox.commands.run('sudo s3fs -o url=https://<ACCOUNT ID>.r2.cloudflarestorage.com <flags> <bucket-name> /home/user/bucket')

Flags

It’s the same as for S3.