A Terraform provider for managing RunPod GPU cloud resources.
- Pod Management: Create, update, and delete GPU pods
- GPU Type Discovery: Query available GPU types and their specifications
- GPU Type Selection: Specify the GPU type for your pod
```sh
git clone https://github.com/nilenso/terraform-provider-runpod.git
cd terraform-provider-runpod
make install
```

Once published to the Terraform Registry, add it to your `required_providers`:
```hcl
terraform {
  required_providers {
    runpod = {
      source  = "nilenso/runpod"
      version = "~> 0.1"
    }
  }
}
```

```hcl
provider "runpod" {
  api_key = "your-api-key" # Or use the RUNPOD_API_KEY environment variable
}
```

| Variable | Description |
|---|---|
| `RUNPOD_API_KEY` | Your RunPod API key |
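If `api_key` is not set, the provider falls back to the `RUNPOD_API_KEY` environment variable. To avoid hard-coding the key in configuration, it can also be passed through a Terraform variable (a sketch; the variable name `runpod_api_key` is illustrative):

```hcl
variable "runpod_api_key" {
  type      = string
  sensitive = true # keeps the key out of plan output
}

provider "runpod" {
  api_key = var.runpod_api_key
}
```

Set it with `export TF_VAR_runpod_api_key="your-api-key"` or a `.tfvars` file.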
```hcl
# List all available GPU types
data "runpod_gpu_types" "all" {
}

# Get a specific GPU type
data "runpod_gpu_types" "a4000" {
  filter {
    id = "NVIDIA RTX A4000"
  }
}

output "gpu_types" {
  value = data.runpod_gpu_types.all.gpu_types
}
```

```hcl
resource "runpod_pod" "example" {
  name                 = "my-gpu-pod"
  image_name           = "runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04"
  gpu_type_id          = "NVIDIA RTX A4000"
  gpu_count            = 1
  volume_in_gb         = 40
  container_disk_in_gb = 20

  # Optional settings
  cloud_type        = "ALL" # ALL, SECURE, or COMMUNITY
  ports             = "8888/http,22/tcp"
  volume_mount_path = "/workspace"

  # Environment variables
  env = {
    JUPYTER_PASSWORD = "mysecretpassword"
  }
}

output "pod_id" {
  value = runpod_pod.example.id
}
```

Manages a RunPod GPU pod.
| Attribute | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | The name of the pod |
| `image_name` | string | Yes | Docker image to use |
| `gpu_type_id` | string | Yes | GPU type ID (e.g., "NVIDIA RTX A4000") |
| `gpu_count` | number | No | Number of GPUs (default: 1) |
| `volume_in_gb` | number | No | Persistent volume size in GB (default: 0) |
| `container_disk_in_gb` | number | No | Container disk size in GB (default: 20) |
| `cloud_type` | string | No | Cloud type: ALL, SECURE, or COMMUNITY (default: ALL) |
| `ports` | string | No | Ports to expose (e.g., "8888/http,22/tcp") |
| `volume_mount_path` | string | No | Volume mount path (default: /workspace) |
| `docker_args` | string | No | Docker arguments |
| `env` | map(string) | No | Environment variables |
| `min_vcpu_count` | number | No | Minimum vCPUs required |
| `min_memory_in_gb` | number | No | Minimum memory in GB |
| `network_volume_id` | string | No | Network volume to attach |
| `template_id` | string | No | Template to use |
| `data_center_id` | string | No | Specific data center |
| `support_public_ip` | bool | No | Support public IP (default: true) |
| `start_ssh` | bool | No | Start SSH service (default: true) |
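The optional placement arguments above can be combined to constrain where a pod is scheduled. A sketch using the arguments listed (the values are illustrative, not recommendations):

```hcl
resource "runpod_pod" "constrained" {
  name        = "constrained-pod"
  image_name  = "runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04"
  gpu_type_id = "NVIDIA RTX A4000"

  # Placement constraints
  cloud_type       = "SECURE"
  min_vcpu_count   = 8
  min_memory_in_gb = 32
}
```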
| Attribute | Description |
|---|---|
| `id` | The pod's unique identifier |
| `machine_id` | The machine ID the pod is running on |
| `pod_host_id` | The host ID of the pod |
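These computed attributes can be referenced like any other, for example to surface where the pod landed:

```hcl
output "pod_placement" {
  value = {
    machine_id  = runpod_pod.example.machine_id
    pod_host_id = runpod_pod.example.pod_host_id
  }
}
```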
Pods can be imported using their ID:

```sh
terraform import runpod_pod.example <pod-id>
```

Fetches available GPU types from RunPod.
| Attribute | Type | Required | Description |
|---|---|---|---|
| `filter.id` | string | No | Filter by GPU type ID |
| Attribute | Description |
|---|---|
| `gpu_types` | List of GPU types |
| `gpu_types[].id` | GPU type ID (e.g., "NVIDIA RTX A6000") |
| `gpu_types[].display_name` | Display name |
| `gpu_types[].memory_in_gb` | GPU memory in GB |
| `gpu_types[].secure_cloud` | Available on secure cloud |
| `gpu_types[].community_cloud` | Available on community cloud |
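The data source and the pod resource can be wired together. A sketch, assuming a filtered query returns the matching type as the first element of `gpu_types`:

```hcl
data "runpod_gpu_types" "a6000" {
  filter {
    id = "NVIDIA RTX A6000"
  }
}

resource "runpod_pod" "training" {
  name        = "training-pod"
  image_name  = "runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04"
  gpu_type_id = data.runpod_gpu_types.a6000.gpu_types[0].id
}
```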
```sh
make build
```

```sh
# Unit tests
make test

# Acceptance tests (requires RUNPOD_API_KEY)
export RUNPOD_API_KEY="your-api-key"
make testacc

# Run a specific acceptance test
make testacc-one TEST=TestAccPodResource_lifecycle
```

```sh
make install
```

```sh
make fmt
make lint
```

MIT License - see LICENSE for details.