
How to Set Up Custom Loggers in Django and Stream to CloudWatch

by Gary Worthington, More Than Monkeys

[Image: a log statement and three different sources of log data, titled "Custom Django Logging with CloudWatch"]

Logging is often the last thing developers think about, right up until the moment they need it. Whether you’re diagnosing a subtle production issue, tracing a misbehaving Celery task, or trying to understand what your API actually receives, good logging makes all the difference. Observability of your systems should never be overlooked.

This post walks you through how to set up structured, custom loggers in Django using environment-aware CloudWatch log groups. We’ll split logs by context (API, DB, background tasks), apply different formats, and send each stream where it needs to go. This example targets CloudWatch, but the same approach works for any logging backend.

Why Custom Logging?

By default, Django logs everything into one big pile. That’s fine when you’re working locally and tailing your terminal, but in production it gets noisy very quickly.

I like logs grouped by endpoint or task; it reduces the noise and lowers the cognitive load. So I always aim for:

  • Logs grouped by environment and purpose
  • Custom formatting for different contexts
  • Support for multiple outputs: console for dev, CloudWatch for prod
  • Easy filtering in CloudWatch by log stream

Step 1: Capture the Environment

First, define which environment we’re running in, and set the base path for CloudWatch log groups. In this example I am running on AWS Elastic Beanstalk, but you can use any environment variable that makes sense in your setup.

# settings.py

import os

ENVIRONMENT_NAME = os.environ.get('EB_ENVIRONMENT_NAME', 'local-dev')
LOG_GROUP_BASE = f'/aws/elasticbeanstalk/{ENVIRONMENT_NAME}'
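
With EB_ENVIRONMENT_NAME set to production, for example, LOG_GROUP_BASE resolves to /aws/elasticbeanstalk/production; locally, with the variable unset, it falls back to /aws/elasticbeanstalk/local-dev.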

Step 2: Formatters: Making Logs Human-Friendly

Next, we define how we want our logs to be formatted. In this example, I am defining three different log formats, which I can use for different purposes.

# settings.py

'formatters': {
    'verbose': {
        'format': '{levelname} {asctime} {module} {funcName} {lineno} {message}',
        'style': '{',
    },
    'api_format': {
        'format': 'API {levelname} {asctime} [{funcName}] {message}',
        'style': '{',
    },
    'db_format': {
        'format': 'DB {levelname} {asctime} [{module}] {message}',
        'style': '{',
    },
},
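
If you want to preview a format string before wiring up the full config, you can run a hand-built record through a standalone logging.Formatter. This is just a local sketch; the logger name, function name, and message are placeholders:

import logging

# Recreate api_format outside the LOGGING dict
formatter = logging.Formatter(
    'API {levelname} {asctime} [{funcName}] {message}', style='{'
)

# A synthetic record purely for previewing the output
record = logging.LogRecord(
    name='main.api', level=logging.INFO, pathname='views.py', lineno=1,
    msg='GET /users returned %d results', args=(42,), exc_info=None,
    func='list_users',
)

print(formatter.format(record))
# API INFO 2025-07-01 12:00:00,000 [list_users] GET /users returned 42 results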

Step 3: Handlers: Where Logs Go

Handlers determine where log records actually go. In this example I define a custom cloudwatch_dynamic handler alongside a standard console handler.

# settings.py

'handlers': {
    'console': {
        'level': 'DEBUG',
        'class': 'logging.StreamHandler',
        'formatter': 'verbose',
    },
    'cloudwatch_dynamic': {
        'level': 'INFO',
        'class': 'main.cloudwatch_logger.DynamicCloudWatchLogsHandler',
        'log_group_base': LOG_GROUP_BASE,
        'formatter': 'verbose',
    },
    ...
},
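
One detail worth knowing: dictConfig passes any key it does not recognise in a handler entry (log_group_base here) straight to the handler class as a keyword argument, which is why the custom handler in Step 5 accepts it in its constructor. Roughly speaking, the cloudwatch_dynamic entry above is equivalent to this sketch:

import logging

from main.cloudwatch_logger import DynamicCloudWatchLogsHandler  # defined in Step 5

LOG_GROUP_BASE = '/aws/elasticbeanstalk/local-dev'  # as computed in Step 1

# What dictConfig effectively builds from the 'cloudwatch_dynamic' entry
handler = DynamicCloudWatchLogsHandler(log_group_base=LOG_GROUP_BASE)
handler.setLevel(logging.INFO)
handler.setFormatter(logging.Formatter(
    '{levelname} {asctime} {module} {funcName} {lineno} {message}', style='{'
))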

Step 4: Loggers: What Gets Logged

The loggers section defines which handlers are used for which logger names. Here, any code logging under main.views writes to both the console and cloudwatch_dynamic handlers. Each logger’s level key also controls its verbosity.

You’ll notice there is also a root logger defined; this gets used when no specific logger matches.

# settings.py

'loggers': {
    'main.views': {
        'handlers': ['console', 'cloudwatch_dynamic'],
        'level': 'DEBUG',
        'propagate': False,
    },
    ...
},
'root': {
    'handlers': ['console', 'cloudwatch_dynamic'],
    'level': 'INFO',
},
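
Logger names are dotted paths, so the main.views entry also catches child loggers such as main.views.user via propagation. A quick illustration, assuming the config above is loaded:

import logging

# Matches 'main.views' directly; DEBUG reaches the console handler,
# while cloudwatch_dynamic only accepts INFO and above
logging.getLogger('main.views').debug('view-level detail')

# Child loggers bubble up to the nearest configured ancestor
logging.getLogger('main.views.user').info('also routed via main.views')

# No matching entry: handled by the root logger instead
logging.getLogger('some.other.module').info('caught by root')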

TL;DR: settings.py

Below is a stripped-down settings.py showing only the logging-specific settings. It should give you a good idea of how the formatters, handlers, and loggers fit together.

import os

ENVIRONMENT_NAME = os.environ.get('EB_ENVIRONMENT_NAME', 'local-dev')
LOG_GROUP_BASE = f'/aws/elasticbeanstalk/{ENVIRONMENT_NAME}'

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,

    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module} {funcName} {lineno} {message}',
            'style': '{',
        },
        'api_format': {
            'format': 'API {levelname} {asctime} [{funcName}] {message}',
            'style': '{',
        },
        'db_format': {
            'format': 'DB {levelname} {asctime} [{module}] {message}',
            'style': '{',
        },
    },

    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'verbose',
        },
        'cloudwatch_dynamic': {
            'level': 'INFO',
            'class': 'main.cloudwatch_logger.DynamicCloudWatchLogsHandler',
            'log_group_base': LOG_GROUP_BASE,
            'formatter': 'verbose',
        },
        'api_endpoints': {
            'level': 'INFO',
            'class': 'main.cloudwatch_logger.CloudWatchLogsHandler',
            'log_group': f'{LOG_GROUP_BASE}/api-endpoints',
            'log_stream': 'requests',
            'formatter': 'api_format',
        },
        'database': {
            'level': 'WARNING',
            'class': 'main.cloudwatch_logger.CloudWatchLogsHandler',
            'log_group': f'{LOG_GROUP_BASE}/database',
            'log_stream': 'queries',
            'formatter': 'db_format',
        },
        'tasks': {
            'level': 'INFO',
            'class': 'main.cloudwatch_logger.CloudWatchLogsHandler',
            'log_group': f'{LOG_GROUP_BASE}/background-tasks',
            'log_stream': 'celery',
            'formatter': 'verbose',
        },
    },

    'loggers': {
        'main.views': {
            'handlers': ['console', 'cloudwatch_dynamic'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'main.api': {
            'handlers': ['console', 'api_endpoints'],
            'level': 'INFO',
            'propagate': False,
        },
        'django.db.backends': {
            'handlers': ['database'],
            'level': 'WARNING',
            'propagate': False,
        },
        'main.models': {
            'handlers': ['console', 'database'],
            'level': 'INFO',
            'propagate': False,
        },
        'main.tasks': {
            'handlers': ['console', 'tasks'],
            'level': 'INFO',
            'propagate': False,
        },
        'celery': {
            'handlers': ['tasks'],
            'level': 'INFO',
            'propagate': False,
        },
    },

    'root': {
        'handlers': ['console', 'cloudwatch_dynamic'],
        'level': 'INFO',
    },
}
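
Django hands this dict to logging.config.dictConfig() at startup, so no extra wiring is needed. To sanity-check the config outside Django, a minimal sketch (the CloudWatch handlers will need working AWS credentials):

import logging.config

logging.config.dictConfig(LOGGING)
logging.getLogger('main.tasks').info('routed to console and the tasks handler')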

Step 5: Implementing DynamicCloudWatchLogsHandler

And to tie everything together, here’s a minimal working version of the handler class. This sends logs to CloudWatch, creating one stream per logger name. You can customise this to your heart’s content; just keep the constructor and emit functions if you want it to work.

# main/cloudwatch_logger.py

import logging
import boto3
import time
from datetime import datetime

class DynamicCloudWatchLogsHandler(logging.Handler):
    def __init__(self, log_group_base: str, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.log_group_base = log_group_base
        self.client = boto3.client('logs')
        self.stream_tokens = {}

    def emit(self, record: logging.LogRecord):
        logger_name = record.name
        log_group = f'{self.log_group_base}/{logger_name}'
        log_stream = datetime.utcnow().strftime('%Y-%m-%d')  # One stream per day
        message = self.format(record)
        self._ensure_log_group(log_group)
        self._ensure_log_stream(log_group, log_stream)
        kwargs = {
            'logGroupName': log_group,
            'logStreamName': log_stream,
            'logEvents': [{
                'timestamp': int(time.time() * 1000),
                'message': message,
            }],
        }
        token = self.stream_tokens.get((log_group, log_stream))
        if token:
            kwargs['sequenceToken'] = token
        try:
            response = self.client.put_log_events(**kwargs)
            self.stream_tokens[(log_group, log_stream)] = response['nextSequenceToken']
        except self.client.exceptions.InvalidSequenceTokenException as e:
            # Retry once, using the expected token from the error message
            expected_token = e.response['Error']['Message'].split()[-1]
            kwargs['sequenceToken'] = expected_token
            response = self.client.put_log_events(**kwargs)
            self.stream_tokens[(log_group, log_stream)] = response['nextSequenceToken']

    def _ensure_log_group(self, group):
        try:
            self.client.create_log_group(logGroupName=group)
        except self.client.exceptions.ResourceAlreadyExistsException:
            pass

    def _ensure_log_stream(self, group, stream):
        key = (group, stream)
        if key in self.stream_tokens:
            return

        try:
            self.client.create_log_stream(logGroupName=group, logStreamName=stream)
        except self.client.exceptions.ResourceAlreadyExistsException:
            pass

        self.stream_tokens[key] = None
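
A quick way to exercise the handler outside Django is to attach it to a throwaway logger. This smoke test is a sketch that assumes working AWS credentials and the IAM permissions listed below:

import logging

from main.cloudwatch_logger import DynamicCloudWatchLogsHandler

logger = logging.getLogger('smoke.test')
logger.setLevel(logging.INFO)
logger.addHandler(DynamicCloudWatchLogsHandler(log_group_base='/aws/elasticbeanstalk/local-dev'))

# Creates /aws/elasticbeanstalk/local-dev/smoke.test with one stream per UTC day
logger.info('hello CloudWatch')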

Requirements: make sure you’ve installed boto3, and that the IAM role running your app has permission to write to CloudWatch Logs:
{
    "Effect": "Allow",
    "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
    ],
    "Resource": "*"
}
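
In production you may want to tighten Resource from "*" to an ARN pattern covering just your log groups; the exact pattern depends on your account and region, but it will look something like arn:aws:logs:<region>:<account-id>:log-group:/aws/elasticbeanstalk/*.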

Example Usage

# main/utils/email.py

import logging

logger = logging.getLogger(__name__) # __name__ == "main.utils.email"
def send_invite(user):
    logger.info("Sending invite to %s", user.email)

This will automatically create a log group (here with EB_ENVIRONMENT_NAME set to production):

/aws/elasticbeanstalk/production/main.utils.email
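
Within that group, the handler writes to a stream named for the current UTC date (2025-07-01, for example), so each day’s events land in their own stream.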

Wrapping Up

With this setup, you get structured logs that are:

  • Environment-aware
  • Separated by logger context
  • Dynamically routed to the right CloudWatch log group

You don’t have to predefine everything. Just use getLogger('whatever') and let the DynamicCloudWatchLogsHandler take care of the rest.

Gary Worthington is a software engineer, delivery consultant, and agile coach who helps teams move fast, learn faster, and scale when it matters. He writes about modern engineering, product thinking, and helping teams ship things that matter.

Through his consultancy, More Than Monkeys, Gary helps startups and scaleups turn chaos into clarity — building web and mobile apps that ship early, scale sustainably, and deliver value without the guesswork.

Follow Gary on LinkedIn for practical insights into agile delivery, engineering culture, and building software teams that thrive.

(AI was used to improve syntax and grammar)