Connecting Django to RDS via pgbouncer using IAM auth


Setting up the infra

We primarily run a Django backend with RDS Postgres in a Kubernetes cluster for production environments. We wanted to set up an environment that would be HIPAA compliant and secure, while keeping complexity in the backend layer to a minimum. We decided to use IAM authentication to connect the Django application to RDS.

The tricky part was figuring out how to add pgbouncer between Django and RDS to handle connection pooling.

We set up a Kubernetes service account and associated an IAM role with it, which lets us generate RDS auth tokens for connecting to the database.

In short: create an OIDC provider for your cluster, create an IAM role, attach the relevant policies to that role, and associate the role with the service account. If your backend uses other AWS services, you can attach policies for those as well and access them without access keys. One caveat if you provision infrastructure via Terraform: the OIDC provider configuration will be missing unless you explicitly assign it, whereas the AWS console generates and assigns it automatically.
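The key piece of the role is an IAM policy allowing rds-db:connect for the database user. A sketch of such a policy document (the account ID, region, DB resource ID, and user name below are all placeholders, not values from our setup):

```shell
# Hypothetical policy document granting RDS IAM token generation for one DB user.
# Account ID, region, resource ID, and user name are placeholders.
cat > rds-connect-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:us-east-1:123456789012:dbuser:db-EXAMPLEID/django"
    }
  ]
}
EOF

# Attach it to the role assumed by the service account, e.g.:
# aws iam put-role-policy --role-name django-rds-role \
#   --policy-name rds-iam-connect --policy-document file://rds-connect-policy.json
```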

A whole guide to do the above is here.

Application configuration

We bundle and run pgbouncer in the pod running the Django service. You need a script that generates IAM tokens and uses them to connect pgbouncer to the RDS instance. One catch: pgbouncer has a maximum password size of 160 bytes, but the passwords generated by RDS IAM auth are around 800-900 bytes.
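To see why the limit bites: the token is a presigned URL. A mock token with the same general shape (this value is fake, not real generate-db-auth-token output) already blows well past 160 bytes:

```shell
# A fake token shaped like `aws rds generate-db-auth-token` output,
# with a dummy 64-character signature.
TOKEN="mydb.abc123.us-east-1.rds.amazonaws.com:5432/?Action=connect&DBUser=django&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAEXAMPLEKEY%2F20240101%2Fus-east-1%2Frds-db%2Faws4_request&X-Amz-Date=20240101T000000Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=$(printf 'f%.0s' $(seq 1 64))"
echo "fake token length: ${#TOKEN}"
# pgbouncer's stock password buffer is 160 bytes; this would not fit.
```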

We forked the pgbouncer repo and raised the maximum password size to 2048 bytes. The fork is compiled and installed while building the Docker image.

The pgbouncer.template.ini (the entrypoint script fills in the placeholders):

[databases]
{{POSTGRES_DB}} = host={{POSTGRES_HOST}} port={{POSTGRES_PORT}} user={{POSTGRES_USER}}

[pgbouncer]
pool_mode = transaction
listen_port = {{PGBOUNCER_PORT}}
listen_addr = {{PGBOUNCER_HOST}}
auth_type = trust
auth_file = users.txt
pidfile = /var/run/pgbouncer/pgbouncer.pid
logfile = /var/run/pgbouncer/pgbouncer.log
max_client_conn = 4000
default_pool_size = 20
reserve_pool_size = 5
server_tls_sslmode = verify-full
server_tls_ca_file = rds-ca-2019-root.pem
syslog = 1
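Because auth_type is trust, Django itself never needs the IAM token: it connects to pgbouncer inside the pod without a password, and pgbouncer authenticates upstream to RDS. A hedged sketch of the app-side connection settings (host, port, and database names here are placeholders, not our actual values):

```shell
# Django talks to the local pgbouncer, not to RDS directly.
# 6432 is a conventional pgbouncer port; use whatever {{PGBOUNCER_PORT}} expands to.
export PGBOUNCER_HOST=127.0.0.1
export PGBOUNCER_PORT=6432
export DATABASE_URL="postgres://django@${PGBOUNCER_HOST}:${PGBOUNCER_PORT}/mydb"
echo "$DATABASE_URL"
```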

Copy pgbouncer.template.ini and rds-ca-2019-root.pem to the filesystem of the container. We use two scripts in the entrypoint: the first sets the database credentials when the container starts, while the second runs as a cron every 10 minutes to refresh the credentials.

COPY ./pgbouncer.template.ini /pgbouncer.template.ini
COPY /pgbouncer.template.ini /pgbouncer.ini
COPY ./rds-ca-2019-root.pem /rds-ca-2019-root.pem
COPY ./scripts/ /
COPY ./scripts/ /

RUN sed -i -e 's/\r//' /
RUN sed -i -e 's/\r//' /

RUN chown django /pgbouncer.template.ini
RUN chown django /pgbouncer.ini
RUN chmod +x /
RUN chown django /

RUN chmod +x /

echo "*/10 * * * * bash / >> /app/cron.log 2>&1
# This extra line makes it a valid cron" > scheduler.txt

crontab scheduler.txt
nohup cron -f &
sh /

PASSWORD=$(aws rds generate-db-auth-token \
  --hostname $POSTGRES_HOST \
  --port 5432 \
  --username $POSTGRES_USER \
  --region $DJANGO_AWS_REGION)
# pgbouncer.ini
sed -e "s/{{POSTGRES_DB}}/"$POSTGRES_DB"/" \
    -e "s/{{POSTGRES_USER}}/"$POSTGRES_USER"/" \
    -e "s/{{POSTGRES_HOST}}/"$POSTGRES_HOST"/" \
    -e "s/{{POSTGRES_PORT}}/"$POSTGRES_PORT"/" \
    pgbouncer.template.ini > pgbouncer.ini
# users.txt
sed -e "s|{{POSTGRES_PASSWORD}}|"$(echo $PASSWORD | sed -e 's/\\/\\\\/g; s/\//\\\//g; s/&/\\\&/g')"|" \
    -e "s/{{POSTGRES_USER}}/"$POSTGRES_USER"/" \
    users.template.txt > users.txt
# run as daemon
pgbouncer -d -v pgbouncer.ini -u django
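The inner sed in the users.txt step escapes characters that would otherwise break the outer sed replacement string. Isolated with a toy value (not a real token) it behaves like this:

```shell
# Escape backslash, slash, and ampersand so the value is safe
# to use as a sed replacement string.
PASSWORD='abc/def&ghi'
ESCAPED=$(echo "$PASSWORD" | sed -e 's/\\/\\\\/g; s/\//\\\//g; s/&/\\\&/g')
echo "$ESCAPED"   # abc\/def\&ghi
```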


# Refresh script, run by cron every 10 minutes
PASSWORD=$(aws rds generate-db-auth-token --hostname $POSTGRES_HOST --port 5432 --username $POSTGRES_USER --region $DJANGO_AWS_REGION)

# users.txt
sed -e "s|{{POSTGRES_PASSWORD}}|"$(echo $PASSWORD | sed -e 's/\\/\\\\/g; s/\//\\\//g; s/&/\\\&/g')"|" \
    -e "s/{{POSTGRES_USER}}/"$POSTGRES_USER"/" \
    /app/users.template.txt > /app/users.txt

The users.template.txt referenced in the above scripts.
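The embedded file did not survive here, but given pgbouncer's auth_file format (one quoted username/password pair per line) and the two sed substitutions above, the template is presumably just:

```shell
# Presumed contents of users.template.txt; the entrypoint's sed
# fills in both placeholders.
cat > users.template.txt <<'EOF'
"{{POSTGRES_USER}}" "{{POSTGRES_PASSWORD}}"
EOF
cat users.template.txt
```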


Hope this helps you out. We ran into some issues while trying to set up and run the cron job; you may have to grant permissions on the cron folders to your container's default user in the Dockerfile.

Credit to Makarand for figuring some of the stuff here.