Installation & Running¶
ScriptHut can be installed via pip or run with Docker. It requires Python 3.11+.
Prerequisites¶
Before installing ScriptHut, ensure you have:
- SSH key-based authentication set up for your remote clusters (Slurm / PBS)
- Your SSH private key accessible at the path you'll configure (default: ~/.ssh/id_rsa)
- Network access to your HPC login nodes from the machine running ScriptHut
For AWS Batch backends:
- AWS credentials reachable via the standard credential chain — see AWS Credentials for Batch below
- An existing Batch job queue and at least one compute environment attached to it
- A container image in ECR / GHCR / Docker Hub containing your workflow
- The scripthut[batch] extra installed (see below) to pull in boto3
Install with pip¶
From PyPI¶
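The standard install, assuming the package is published on PyPI under the project name:

pip install scripthut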
AWS Batch support (optional)¶
If you plan to use the AWS Batch backend, install the [batch] extra to pull in boto3:
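# quote the extra so your shell doesn't interpret the brackets
pip install 'scripthut[batch]'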
From source¶
git clone https://github.com/tlamadon/scripthut.git
cd scripthut
pip install -e .
# or with AWS Batch support:
pip install -e '.[batch]'
Development install¶
If you want to contribute or run tests:
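A plausible form, assuming the development dependencies are exposed as a [dev] extra (check pyproject.toml for the actual extra name):

pip install -e '.[dev]'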
This installs additional dependencies for linting (ruff, mypy), testing (pytest), documentation (mkdocs-material, mike), and boto3 for the Batch backend tests.
Running ScriptHut¶
Quick start¶
- Create a scripthut.yaml in your working directory (see Configuration for the full reference).
- Start the server (a sketch of the config file and the start command follows this list).
- Open http://localhost:8000 in your browser.
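A minimal scripthut.yaml sketch, reusing only the Slurm backend fields shown further down this page (see the Configuration reference for the full schema):

backends:
  - name: my-cluster
    type: slurm
    ssh:
      host: login.cluster.edu
      user: your_username
      key_path: ~/.ssh/id_rsa

With the config in place, start the server; with no options it looks for ./scripthut.yaml (or ./scripthut.yml) in the current directory and serves on port 8000:

scripthut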
Command-line options¶
| Option | Short | Description |
|---|---|---|
| --config | -c | Path to configuration file. Default: ./scripthut.yaml or ./scripthut.yml. |
| --host | | Host to bind the server to. Overrides the settings.server_host value in the config. |
| --port | -p | Port to bind the server to. Overrides the settings.server_port value in the config. |
Examples:
# Use a specific config file
scripthut --config /path/to/my-config.yaml
# Bind to all interfaces on port 9000
scripthut --host 0.0.0.0 --port 9000
Running with Docker¶
ScriptHut publishes a Docker image at ghcr.io/tlamadon/scripthut.
Docker run¶
docker run -d \
--name scripthut \
-p 8000:8000 \
-v $(pwd)/scripthut.yaml:/app/scripthut.yaml:ro \
-v ~/.ssh:/root/.ssh:ro \
ghcr.io/tlamadon/scripthut:latest
This:
- Mounts your scripthut.yaml configuration into the container
- Mounts your SSH keys so the container can connect to remote clusters
- Exposes the web UI on port 8000
Docker Compose¶
Create a docker-compose.yml file:
services:
scripthut:
image: ghcr.io/tlamadon/scripthut:latest
container_name: scripthut
restart: unless-stopped
ports:
- "8000:8000"
volumes:
# Configuration file
- ./scripthut.yaml:/app/scripthut.yaml:ro
# SSH keys for connecting to remote clusters
- ~/.ssh:/root/.ssh:ro
# Persistent data (run history, logs, cached repos)
- scripthut-data:/root/.cache/scripthut
volumes:
scripthut-data:
Then start it:
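# Compose v2 syntax; use docker-compose with older installations
docker compose up -d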
To view logs:
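docker compose logs -f scripthut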
To stop:
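docker compose down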
Building the image locally¶
If you want to build from source:
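# from the repository root; the tag "scripthut" is the one referenced below
docker build -t scripthut .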
Then use scripthut instead of ghcr.io/tlamadon/scripthut:latest in the commands above.
Docker Compose with local build¶
services:
scripthut:
build: .
container_name: scripthut
restart: unless-stopped
ports:
- "8000:8000"
volumes:
- ./scripthut.yaml:/app/scripthut.yaml:ro
- ~/.ssh:/root/.ssh:ro
- scripthut-data:/root/.cache/scripthut
volumes:
scripthut-data:
SSH Key Considerations¶
File permissions¶
SSH keys must have correct permissions, both on the host and inside containers:
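OpenSSH refuses keys that are readable by other users; typical permissions look like this:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub ~/.ssh/known_hosts

The read-only bind mounts in the Docker examples preserve the host's ownership and modes, so fixing permissions on the host is usually enough for the container as well.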
Certificate-based auth¶
If your cluster uses SSH certificates, configure the cert_path in your backend's SSH config:
backends:
- name: my-cluster
type: slurm
ssh:
host: login.cluster.edu
user: your_username
key_path: ~/.ssh/id_rsa
cert_path: ~/.ssh/id_rsa-cert.pub
Known hosts¶
By default, ScriptHut does not verify host keys. To enable host key verification:
ssh:
host: login.cluster.edu
user: your_username
key_path: ~/.ssh/id_rsa
known_hosts: ~/.ssh/known_hosts
AWS Credentials for Batch¶
The AWS Batch backend uses boto3, which resolves credentials from the standard AWS credential chain. ScriptHut never reads credentials from scripthut.yaml; at most you reference a profile name there. Pick whichever method matches how you run scripthut:
Option 1 — CLI profile (most common for local dev)¶
Install and configure the AWS CLI, then reference the profile name in scripthut.yaml:
backends:
- name: aws-batch
type: batch
aws:
profile: scripthut
region: us-east-1
job_queue: my-queue
default_image: ghcr.io/org/image:latest
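For reference, creating that profile with the AWS CLI (the profile name is arbitrary; it only has to match aws.profile):

aws configure --profile scripthut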
Option 2 — AWS SSO¶
For organizations using IAM Identity Center:
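A typical setup with AWS CLI v2; the profile name scripthut-sso is only an example that matches the aws.profile value referenced below:

aws configure sso                        # interactive; name the profile scripthut-sso when prompted
aws sso login --profile scripthut-sso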
Set aws.profile: scripthut-sso. boto3 refreshes the session token automatically while scripthut is running, but you'll need to re-run aws sso login when the SSO session expires (typically 8–12 hours).
Option 3 — Environment variables¶
Useful for CI or one-off runs:
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_DEFAULT_REGION=us-east-1
scripthut
Leave aws.profile unset — boto3 picks up the env vars automatically.
Option 4 — IAM instance role (recommended for production)¶
If scripthut runs on EC2, ECS, EKS, or Fargate, attach an IAM role to the host / task. boto3 fetches credentials from the instance metadata service automatically — no disk-stored credentials, no manual refresh. Leave aws.profile unset.
Docker¶
Mount your AWS credentials read-only, or (if running on AWS) use a task IAM role so no credentials land on disk:
docker run -d -p 8000:8000 \
-v ./scripthut.yaml:/app/scripthut.yaml:ro \
-v ~/.aws:/root/.aws:ro \
-e AWS_PROFILE=scripthut \
ghcr.io/tlamadon/scripthut:latest
Or with a mix of SSH and AWS:
services:
scripthut:
image: ghcr.io/tlamadon/scripthut:latest
ports:
- "8000:8000"
environment:
- AWS_PROFILE=scripthut
volumes:
- ./scripthut.yaml:/app/scripthut.yaml:ro
- ~/.ssh:/root/.ssh:ro
- ~/.aws:/root/.aws:ro
- scripthut-data:/root/.cache/scripthut
volumes:
scripthut-data:
Minimum IAM permissions¶
See the Configuration reference for the required policy document. In short: the principal needs batch:SubmitJob, batch:DescribeJobs, batch:ListJobs, batch:CancelJob, batch:TerminateJob, batch:RegisterJobDefinition, batch:DescribeJobQueues, batch:DescribeComputeEnvironments, and logs:GetLogEvents on your log group. If you configure job_role_arn or execution_role_arn, also grant iam:PassRole on those roles.
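A sketch of what such a policy can look like, assembled only from the actions listed above (the Configuration reference remains authoritative). /aws/batch/job is the default Batch log group, so substitute your own log group ARN and narrow the Batch resources where your setup allows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ScriptHutBatch",
      "Effect": "Allow",
      "Action": [
        "batch:SubmitJob",
        "batch:DescribeJobs",
        "batch:ListJobs",
        "batch:CancelJob",
        "batch:TerminateJob",
        "batch:RegisterJobDefinition",
        "batch:DescribeJobQueues",
        "batch:DescribeComputeEnvironments"
      ],
      "Resource": "*"
    },
    {
      "Sid": "ScriptHutLogs",
      "Effect": "Allow",
      "Action": "logs:GetLogEvents",
      "Resource": "arn:aws:logs:*:*:log-group:/aws/batch/job:*"
    }
  ]
}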
Verifying the installation¶
After starting ScriptHut, the terminal output shows the URL the server is listening on (http://localhost:8000 with the default settings).
Navigate to the URL in your browser. The dashboard should show your configured backends and workflows. If a backend connection fails, check the terminal output for SSH error messages and verify your SSH configuration.