Overview - Prepping Django
This is the first post of the Pulumi infrastructure series, where I build out the infrastructure to host the Django application I created. In this part we will prepare the Django web app for deployment: configure the Django settings, create the Dockerfile, write the bash startup script, and build the requirements file.
Key Vault
The first thing I need to do is create a key vault to store my secrets. I am going to use two key vaults because I want one to have some of my secrets pre-loaded, and the other to be loaded when the Pulumi deployment runs. I am going to name them “cybauer-capchta-email” and “cybauer-vault”. Let's create the first one now in the portal and load the secrets into it.
I got it created and changed the permission model to use role-based access control. This is more secure than access policies because the vault can only be accessed by an Entra ID account or a managed identity with the correct role assigned.
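If you would rather script this than click through the portal, the equivalent Azure CLI calls look roughly like this (a sketch; the resource group and assignee are placeholders for your own values):
az keyvault create --name cybauer-capchta-email --resource-group <my-rg> --enable-rbac-authorization true
az role assignment create --assignee <my-user-or-identity-id> --role "Key Vault Secrets Officer" --scope $(az keyvault show --name cybauer-capchta-email --query id -o tsv)
Now let's load the secrets.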
Here are the three secrets I need initially: captcha is the private key used for Google reCAPTCHA, email is the application password for my Gmail account, and secret-key is the key Django uses as its SECRET_KEY.
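They can be created in the portal, or loaded from the CLI like this (the values below are placeholders, not my real secrets):
az keyvault secret set --vault-name cybauer-capchta-email --name captcha --value "<recaptcha-private-key>"
az keyvault secret set --vault-name cybauer-capchta-email --name email --value "<gmail-app-password>"
az keyvault secret set --vault-name cybauer-capchta-email --name secret-key --value "<django-secret-key>"
Now let's go tweak the settings to grab these secrets in the code.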
Django Settings
I need to be able to access the key vault and grab the keys, which means I need the SecretClient and ManagedIdentityCredential classes from the Azure Python SDK. Once I have those imported I can create the get_secrets() function to grab the secrets.
import os
from pathlib import Path

from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent

clientid = os.environ.get("AZURE_CLIENT_ID")

# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/5.0/howto/deployment/checklist/

def get_secrets(key_vault_name, secrets_name):
    key_uri = f"https://{key_vault_name}.vault.azure.net"
    credential = ManagedIdentityCredential(client_id=clientid)
    #credential = DefaultAzureCredential()  # handy alternative for local development
    client = SecretClient(key_uri, credential)
    retrieved_secret = client.get_secret(secrets_name)
    return retrieved_secret.value
The clientid will be an environment variable I load into the container app from Pulumi. The ManagedIdentityCredential class needs it to know which identity to use.
As you can see, the function takes two arguments: the key vault name and the secret name. It sets up the secret client, retrieves the secret, and returns the secret's value. With the function created I can now grab the secrets I need and use them in the Django settings.
Secret Key:
sec_key = get_secrets(key_vault_name="cybauer-capchta-email", secrets_name="secret-key")
SECRET_KEY = sec_key
Captcha Key:
captcha_key = get_secrets(key_vault_name="cybauer-capchta-email", secrets_name="captcha")
RECAPTCHA_PUBLIC_KEY = '6...'  # public site key truncated in this post
RECAPTCHA_PRIVATE_KEY = captcha_key
Email Key:
email_key = get_secrets(key_vault_name="cybauer-capchta-email", secrets_name="email")
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_PORT = 587
EMAIL_USE_TLS = True
EMAIL_HOST_USER = 'bauerbrett1@gmail.com'
EMAIL_HOST_PASSWORD = email_key
Now that I have these ready to go, I can fill out the settings for the secrets that will live in the cybauer-vault I will create in the Pulumi deployment.
Database Settings:
db_password = get_secrets(key_vault_name="cybauer-vault", secrets_name="dbpassword")
db_host = get_secrets(key_vault_name="cybauer-vault", secrets_name="dbhost")

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': "cybauer",
        'USER': "dbadmin",
        'PASSWORD': db_password,
        'HOST': db_host,
        'PORT': '5432',
        'OPTIONS': {'sslmode': 'require'},
    }
}
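A quick way to sanity-check the connection details and the sslmode requirement from a shell, once the database exists (the host value here is a placeholder; you will be prompted for the password):
psql "host=<dbhost> port=5432 dbname=cybauer user=dbadmin sslmode=require"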
Container URL (the URL of the Azure Container App):
container_url = get_secrets(key_vault_name="cybauer-vault", secrets_name="containerurl")

if DEBUG:
    ALLOWED_HOSTS = ['localhost', '127.0.0.1']
else:
    ALLOWED_HOSTS = [container_url, 'cybauer.com']
I will need to create all of these secrets in the Pulumi deployment, so I need to make sure the names match exactly on both sides. Now that the key vault and secrets settings are done, I need to set up the storage settings for static and media storage.
Storage Account:
I am going to use a Shared Access Signature (SAS) token to access the storage account. This is the only way I was able to get it to work. For security reasons it is always best to use an identity token credential with the correct storage account role assigned, but for some reason I could not get that to work here, so the next best thing is the SAS token.
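For reference, an account-level SAS token can be generated with the Azure CLI like this (a sketch; the permissions and expiry are examples you should tighten to your needs):
az storage account generate-sas --account-name cybauersa --services b --resource-types co --permissions acdlrw --expiry 2025-12-31T00:00Z --account-key "<storage-account-key>"
The resulting token is what I load into the container app as the SAS environment variable, which the settings below pick up.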
sas = os.environ.get("SAS")
account_name = "cybauersa"

# Azure Storage Configuration
STORAGES = {
    'default': {
        'BACKEND': 'storages.backends.azure_storage.AzureStorage',
        'OPTIONS': {
            "sas_token": sas,
            'account_name': "cybauersa",
            'azure_container': "media",
        },
    },
    'staticfiles': {
        'BACKEND': 'storages.backends.azure_storage.AzureStorage',
        'OPTIONS': {
            "sas_token": sas,
            'account_name': "cybauersa",
            'azure_container': "static",
        },
    },
}

STATIC_URL = "https://cybauersa.blob.core.windows.net/static/"
MEDIA_URL = "https://cybauersa.blob.core.windows.net/media/"
I created two storages: the default uses the media container in the cybauersa storage account, and staticfiles uses the static container in the same account.
CKEditor:
This is the tool used to write the blogs. It lets users write formatted posts with images, code blocks, and plenty of other things. I need to set up its storage so that whenever an image is uploaded it goes to my storage account's media container. For this I am going to create a custom class that inherits from the AzureStorage class.
# Custom CKEditor storage class
import os

from storages.backends.azure_storage import AzureStorage

sas = os.environ.get("SAS")

class CKEditorAzureStorage(AzureStorage):
    account_name = 'cybauersa'
    sas_token = sas
    azure_container = 'media'
    expiration_secs = None  # None so the library never expires the SAS token; expiry is handled through the Pulumi deployment.
Now I need to go back to the settings and point the CKEditor storage setting at this custom_ckeditor.py class.
CKEDITOR_5_FILE_STORAGE = "Bauer_Cyber_Services.custom_ckeditor.CKEditorAzureStorage"
That should be it for the settings.py in Django. Now I need to move on to creating a Docker image that Azure Container Apps can use.
Docker
Requirements.txt:
I need to get the Python packages that are used in my app so I can install them in the Docker container.
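If the app was built inside a virtual environment, a quick (if somewhat blunt) way to capture them is:
pip freeze > requirements.txt
Keep in mind pip freeze pins everything installed in the environment, so it is worth pruning any packages the app does not actually use.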
With all of the requirements captured in a requirements.txt file, let's create the Dockerfile.
Dockerfile:
# Use the official Python image from the Docker Hub
FROM python:3.12.4-slim-bullseye
RUN apt-get update \
&& apt-get -y install libpq-dev gcc \
&& pip install psycopg2
# Set the working directory
WORKDIR /app
# Install dependencies
COPY requirements.txt .
RUN python -m pip install --upgrade pip
RUN python -m pip install -r requirements.txt
COPY creation.sh .
RUN chmod +x creation.sh
# Copy the Gunicorn config file
COPY gunicorn.conf.py /app/gunicorn.conf.py
COPY . .
# Expose the port that the app runs on
EXPOSE 8000
# Set environment variables for Django
ENV DJANGO_SETTINGS_MODULE=Bauer_Cyber_Services.settings
CMD bash -c ". creation.sh"
Basically this is going to pull a Python image from Docker Hub and install all my requirements; you can see it pip installing everything in requirements.txt. It is also doing a couple of other things. First, it copies my Gunicorn config file to the app directory; this is the file that configures Gunicorn. Here is my gunicorn.conf.py file:
import multiprocessing

max_requests = 1000  # recycle a worker after this many requests
max_requests_jitter = 50  # add randomness so workers don't all restart at once
log_file = "-"  # log to stdout
bind = "0.0.0.0:8000"  # listen on all interfaces, port 8000
workers = (multiprocessing.cpu_count() * 2) + 1  # the common (2 x cores) + 1 rule of thumb
threads = workers
timeout = 600  # seconds before an unresponsive worker is killed
The other thing it does is copy my creation.sh script into the directory and chmod it so it is able to run. Here is my creation.sh script:
#!/bin/bash
set -e

python3 -m pip install --upgrade pip

echo "Running migrations."
python3 manage.py makemigrations
python3 manage.py migrate

echo "Collecting static files."
python3 manage.py collectstatic --no-input

echo "Creating superuser."
export DJANGO_SUPERUSER_BLOGGER_NAME="Brett Bauer"
export DJANGO_SUPERUSER_EMAIL="bauerbrett1@gmail.com"
export DJANGO_SUPERUSER_PASSWORD="gn^s8qO&I5PxU!En!"
export DJANGO_SUPERUSER_FIRST_NAME="Brett"
export DJANGO_SUPERUSER_LAST_NAME="Bauer"
echo "from django.contrib.auth import get_user_model; User = get_user_model();
user_exists = User.objects.filter(email='$DJANGO_SUPERUSER_EMAIL').exists();
print('Superuser already exists') if user_exists else User.objects.create_superuser(first_name='$DJANGO_SUPERUSER_FIRST_NAME', last_name='$DJANGO_SUPERUSER_LAST_NAME', blogger_name='$DJANGO_SUPERUSER_BLOGGER_NAME', email='$DJANGO_SUPERUSER_EMAIL', password='$DJANGO_SUPERUSER_PASSWORD')" | python3 manage.py shell

echo "Starting Gunicorn."
exec gunicorn -c /app/gunicorn.conf.py Bauer_Cyber_Services.wsgi:application
This is the script that gets run when the container starts up. It does a few things. First it makes the database migrations and applies them; these are the Django commands that get the database ready with the data models you create.
Next is the collectstatic command, which grabs all of the static files in the app directory and ships them to whatever the staticfiles storage setting points at. In my case that is my Azure storage account cybauersa, so when this command runs all of my static files get uploaded into the static container.
Then it uses the shell command to create a Django superuser. In this example the password is in clear text rather than an environment variable; that is because the first thing I do after this runs is go change the password, so I am not too worried about it. The command also checks whether the user already exists: if so it just prints that the user already exists, otherwise it creates the new user. I did this because whenever I created new container revisions or restarted my container in Azure Container Apps, this whole script would rerun and fail because the user already existed in the database.
The last thing it does is start Gunicorn with the conf file I showed above. The command points it at the WSGI_APPLICATION setting in settings.py.
WSGI_APPLICATION = 'Bauer_Cyber_Services.wsgi.application'
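With the image defined, it can be built and smoke-tested locally before handing it to the Pulumi deployment (the tag and environment variable values here are placeholders; note the Key Vault calls only succeed where the managed identity is available, so for a purely local run you would switch get_secrets() over to the commented-out DefaultAzureCredential):
docker build -t cybauer:latest .
docker run --rm -p 8000:8000 -e AZURE_CLIENT_ID="<client-id>" -e SAS="<sas-token>" cybauer:latest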
Docker Ignore:
The last thing I need is a .dockerignore file telling Docker which files in my folder to leave out of the build context.
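Mine is specific to my project, but a typical Django .dockerignore looks something like this (an illustrative sketch, not my exact file):
.git
.gitignore
__pycache__/
*.pyc
.env
venv/
db.sqlite3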
Conclusion
This is the end of part 1 of the series. My Django app is ready to be deployed to Azure Container Apps, with the Dockerfile, the creation script, and the connections to the infrastructure that will be built in the next few parts of this series. In the next part I will go over building the resource group, storage account, and networking infrastructure.