Using Terraform and GitHub Actions to Build Self-Serve AWS Environments
by Gary Worthington, More Than Monkeys

Most teams want reliable infrastructure, but few want to maintain it manually. That’s where Terraform earns its keep. It’s infrastructure-as-code done properly: repeatable, auditable, and scalable.
But it becomes even more powerful when paired with GitHub Actions to create ephemeral environments. These are short-lived, isolated AWS environments spun up automatically for each pull request, and destroyed when they’re no longer needed.
In this post, we’ll talk through:
- How to set up Terraform from scratch
- How to store your Terraform state securely
- A full working example with S3, Lambda and API Gateway
- How to use GitHub Actions to automate preview environments
- Common gotchas and scaling tips
Setting Up Terraform From Scratch
If you’re just getting started with Terraform, here’s a minimal setup to deploy resources in AWS, including state storage and locking.
1. Install Terraform
Install Terraform from the official downloads page at https://developer.hashicorp.com/terraform/downloads, or with Homebrew:
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
Check it’s working:
terraform version
2. Configure AWS Credentials
Terraform uses your AWS credentials to manage resources. Set environment variables:
export AWS_ACCESS_KEY_ID=your-key-id
export AWS_SECRET_ACCESS_KEY=your-secret
export AWS_DEFAULT_REGION=eu-west-2
In GitHub Actions or CI/CD environments, use IAM roles or OIDC tokens instead.
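For example, a minimal OIDC-based credentials step using aws-actions/configure-aws-credentials might look like the sketch below. The role ARN is a placeholder for a role you create in your own account, and the workflow needs id-token: write permission:

permissions:
  id-token: write
  contents: read

steps:
  - name: Configure AWS credentials
    uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::123456789012:role/github-actions-terraform # placeholder
      aws-region: eu-west-2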
3. Store Terraform State in S3 with DynamoDB Locking
Terraform uses a state file to track what it has built. By default, this file is local (terraform.tfstate), which is fine for local experiments but unsuitable for team use.
For collaboration, use:
- S3 to store the state
- DynamoDB to enable locking (prevents two people from applying at once)
Create the state bucket and lock table:
aws s3api create-bucket \
  --bucket my-terraform-state-bucket \
  --region eu-west-2 \
  --create-bucket-configuration LocationConstraint=eu-west-2

aws dynamodb create-table \
  --table-name terraform-locks \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region eu-west-2
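It is also worth switching on versioning and default encryption for the state bucket, so older state versions can be recovered if something goes wrong. Two optional commands, assuming the same bucket name as above:

aws s3api put-bucket-versioning \
  --bucket my-terraform-state-bucket \
  --versioning-configuration Status=Enabled

aws s3api put-bucket-encryption \
  --bucket my-terraform-state-bucket \
  --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'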
Then configure your backend:
Create a new file at the root of your Terraform project called backend.tf, and add:
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "envs/dev/terraform.tfstate"
    region         = "eu-west-2"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
Make sure backend.tf only defines the backend. Do not pass variables into this block. Terraform loads it before any variables are evaluated.
Run:
terraform init
If you already have local state, Terraform will offer to migrate it into S3; from then on, every run takes a lock in DynamoDB.
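If you need a different state key per environment, and you will once each pull request gets its own preview, the usual workaround is partial backend configuration: leave key out of backend.tf and supply it when you initialise. A sketch, assuming the same bucket and a per-environment path:

terraform init -backend-config="key=envs/preview-001/terraform.tfstate"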
Full Working Example: S3 + Lambda + API Gateway
Here’s a minimal project that sets up:
- A versioned S3 bucket
- A Lambda function (Python, zipped locally)
- An API Gateway that exposes the Lambda to HTTP traffic
Directory structure:
infrastructure/
  main.tf
  variables.tf
  outputs.tf
  lambda/
    handler.py
    function.zip
lambda/handler.py
def handler(event, context):
    return {
        "statusCode": 200,
        "body": "Hello from Lambda!"
    }
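Before packaging it, you can sanity-check the handler from inside the lambda directory; this is just a plain function call, nothing AWS-specific:

python3 -c "from handler import handler; print(handler({}, None))"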
Zip the handler. Terraform reads this archive (and hashes it via source_code_hash), so the zip must exist before you run plan or apply:
cd lambda
zip function.zip handler.py
main.tf
provider "aws" {
region = var.aws_region
}
resource "aws_s3_bucket" "app_bucket" {
bucket = "app-bucket-${var.environment}"
force_destroy = true
tags = {
Environment = var.environment
}
}
resource "aws_iam_role" "lambda_exec" {
name = "lambda_exec_role_${var.environment}"
assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [{
Action = "sts:AssumeRole",
Effect = "Allow",
Principal = {
Service = "lambda.amazonaws.com"
}
}]
})
}
resource "aws_iam_role_policy_attachment" "lambda_logs" {
role = aws_iam_role.lambda_exec.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
resource "aws_lambda_function" "api_lambda" {
function_name = "api_lambda_${var.environment}"
filename = "${path.module}/lambda/function.zip"
handler = "handler.handler"
runtime = "python3.12"
role = aws_iam_role.lambda_exec.arn
source_code_hash = filebase64sha256("${path.module}/lambda/function.zip")
}
resource "aws_apigatewayv2_api" "http_api" {
name = "api-${var.environment}"
protocol_type = "HTTP"
}
resource "aws_apigatewayv2_integration" "lambda_integration" {
api_id = aws_apigatewayv2_api.http_api.id
integration_type = "AWS_PROXY"
integration_uri = aws_lambda_function.api_lambda.invoke_arn
integration_method = "POST"
payload_format_version = "2.0"
}
resource "aws_apigatewayv2_route" "default_route" {
api_id = aws_apigatewayv2_api.http_api.id
route_key = "GET /"
target = "integrations/${aws_apigatewayv2_integration.lambda_integration.id}"
}
resource "aws_apigatewayv2_stage" "default_stage" {
api_id = aws_apigatewayv2_api.http_api.id
name = "$default"
auto_deploy = true
}
resource "aws_lambda_permission" "api_gateway_invoke" {
statement_id = "AllowExecutionFromAPIGateway"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.api_lambda.function_name
principal = "apigateway.amazonaws.com"
source_arn = "${aws_apigatewayv2_api.http_api.execution_arn}/*/*"
}
variables.tf
variable "environment" {
type = string
default = "dev"
}
variable "aws_region" {
type = string
default = "eu-west-2"
}
outputs.tf
output "lambda_function_name" {
value = aws_lambda_function.api_lambda.function_name
}
output "bucket_name" {
value = aws_s3_bucket.app_bucket.bucket
}
output "api_endpoint" {
value = aws_apigatewayv2_api.http_api.api_endpoint
}
Apply It:
terraform init
terraform apply -var="environment=preview-001"
You’ll get an API endpoint like:
https://abc123xyz.execute-api.eu-west-2.amazonaws.com
Hitting that endpoint returns an HTTP 200 response with the body "Hello from Lambda!". With the AWS_PROXY integration, API Gateway unwraps the statusCode/body envelope your handler returns rather than passing the JSON through verbatim.
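A quick check from the command line, substituting your own endpoint for the example hostname:

curl https://abc123xyz.execute-api.eu-west-2.amazonaws.com/
Hello from Lambda!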
Automate It with GitHub Actions
Set up a GitHub Actions workflow to deploy this infrastructure on every pull request (add an AWS credentials step, such as the OIDC example earlier, before the Terraform steps):
deploy-preview.yml
name: Deploy Preview Environment

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  preview:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Set up Terraform
        uses: hashicorp/setup-terraform@v2

      - name: Init
        run: terraform init
        working-directory: infrastructure

      - name: Apply Preview
        run: |
          terraform apply -auto-approve -var="environment=preview-${{ github.event.pull_request.number }}"
        working-directory: infrastructure
And destroy it automatically:
destroy-preview.yml
name: Destroy Preview Environment

on:
  pull_request:
    types: [closed]

jobs:
  destroy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Set up Terraform
        uses: hashicorp/setup-terraform@v2

      - name: Destroy
        run: |
          terraform destroy -auto-approve -var="environment=preview-${{ github.event.pull_request.number }}"
        working-directory: infrastructure
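One caveat with these workflows as written: both run terraform init against the single state key hard-coded in backend.tf, so every pull request would share one state file and trample the previous preview. A common fix, building on the partial backend configuration shown earlier, is to give each PR its own key. A sketch of what the Init step could look like (the key path here is an assumption, not something the workflows above define):

- name: Init
  run: |
    terraform init -backend-config="key=envs/preview-${{ github.event.pull_request.number }}/terraform.tfstate"
  working-directory: infrastructure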
Gotchas to Watch For
- Always use unique names for resources like buckets and APIs
- Use tag-based clean-up or TTLs for forgotten preview environments (see the default_tags sketch after this list)
- Ensure your IAM policies and roles are scoped safely
- Watch your AWS limits if you spin up too many preview environments in parallel
- Keep your Terraform backend locked down and encrypted
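For the tag-based clean-up mentioned above, the AWS provider's default_tags block is a convenient way to guarantee every resource in a preview environment carries a consistent marker. A sketch; the tag names are only a suggestion:

provider "aws" {
  region = var.aws_region

  default_tags {
    tags = {
      Environment = var.environment
      Ephemeral   = "true"
    }
  }
}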
Final Thoughts
This setup allows every PR to spin up its own infrastructure. It’s production-grade, auditable, and removes the bottleneck of shared staging environments.
We’ve used this exact pattern at More Than Monkeys to support everything from MVPs to multi-squad platforms, and it scales without becoming a DevOps burden.
If you want to ship faster, with cleaner infrastructure and happier developers, this is the kind of investment that pays off fast.
Gary Worthington is a software engineer, delivery consultant, and agile coach who helps teams move fast, learn faster, and scale when it matters. He writes about modern engineering, product thinking, and helping teams ship things that matter.
Through his consultancy, More Than Monkeys, Gary helps startups and scaleups improve how they build software — from tech strategy and agile delivery to product validation and team development.
Visit morethanmonkeys.co.uk to learn how we can help you build better, faster.
Follow Gary on LinkedIn for practical insights into engineering leadership, agile delivery, and team performance.