Networking, Storage Account, and Resource Group
Plan:
Here is a quick diagram I threw together that shows most of the main parts. An end user makes a request to the gateway's public IP; the gateway has a listener on port 443, and when that listener is hit it directs the traffic to the container app in the container app subnet. The container app has connections to the PostgreSQL server, the container registry, the app gateway, the key vaults, and the storage account. There are three subnets, all in one VNET. Traffic in and out of the subnets is restricted by Network Security Groups (NSGs); each subnet has its own NSG associated with it. All of this also lives inside one resource group.
Resource Group
The first thing I need to do is get my Pulumi config file and import my subscription ID, resource group name, and location. After doing that, I can create the resource group:
import pulumi
from pulumi_azure_native import resources

def create_resourcegroup():
    # Get the resource group name from the configuration
    config = pulumi.Config()
    resource_group_name = config.require("resourceGroupName")
    print(f"Using resource group name: {resource_group_name}")

    # Create a resource group with the specified name
    cybauer_rg = resources.ResourceGroup(
        "resource_group",
        resource_group_name=resource_group_name
    )
    return cybauer_rg
While creating the infrastructure for this app I decided to do everything in functions inside Pulumi. From what I have seen, some people use functions, some use classes, some use neither, and some use a little bit of everything. I decided to use functions, import all of them into __main__.py, and use those functions and the values they return. So, for example, I made the create_resourcegroup() function and it returns my created group. I then import this function into the main file and use it there.
from resource_group import create_resourcegroup
cybauer_rg = create_resourcegroup()
Now for the rest of this code I can pass the resource group into other functions by using that cybauer_rg instance. As I keep going, I will extend this main file.
Networking
I now need to create the main networking resources and put them into that resource group. I will create the VNET and the three subnets inside it.
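Before writing the subnet resources, it helps to sanity-check the address plan. This throwaway sketch (not part of the Pulumi code) uses Python's ipaddress module to confirm the subnet ranges I chose fit inside the VNET and do not overlap each other:

```python
import ipaddress

# The VNET and subnet ranges used in the Pulumi code below.
vnet = ipaddress.ip_network("10.0.0.0/19")
subnets = {
    "containerAppsSubnet": ipaddress.ip_network("10.0.0.0/23"),
    "appGatewaySubnet": ipaddress.ip_network("10.0.2.0/24"),
    "postgresSubnet": ipaddress.ip_network("10.0.3.0/24"),
}

for name, sn in subnets.items():
    # Each subnet must be contained in the VNET address space
    assert sn.subnet_of(vnet), f"{name} falls outside the VNET"

# No two subnets may overlap
ranges = list(subnets.values())
assert not any(a.overlaps(b) for i, a in enumerate(ranges) for b in ranges[i + 1:])
```

The /23 for the container apps subnet leaves room to grow, and the /19 VNET leaves plenty of space for future subnets.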
# Create the Container Apps Subnet
container_apps_subnet = network.Subnet(
    "containerAppsSubnet",
    subnet_name="containerAppsSubnet",
    resource_group_name=rg_name,
    virtual_network_name=VNET.name,
    address_prefix="10.0.0.0/23",
    service_endpoints=[
        network.ServiceEndpointPropertiesFormatArgs(
            service="Microsoft.Storage", locations=["East US"]),
        network.ServiceEndpointPropertiesFormatArgs(
            service="Microsoft.KeyVault", locations=["East US"]),
        network.ServiceEndpointPropertiesFormatArgs(
            service="Microsoft.ContainerRegistry", locations=["East US"])
    ]
)

# Export the Container Apps Subnet ID
pulumi.export("container_apps_subnet_id", container_apps_subnet.id)

# Create the Application Gateway Subnet, dependent on Container Apps Subnet creation
app_gateway_subnet = container_apps_subnet.id.apply(lambda _: network.Subnet(
    "appGatewaySubnet",
    subnet_name="appGatewaySubnet",
    resource_group_name=rg_name,
    virtual_network_name=VNET.name,
    address_prefix="10.0.2.0/24",
    private_link_service_network_policies=network.VirtualNetworkPrivateLinkServiceNetworkPolicies.DISABLED
))

# Export the Application Gateway Subnet ID
pulumi.export("app_gateway_subnet_id", app_gateway_subnet.id)

# Create the PostgreSQL Subnet, dependent on Application Gateway Subnet creation
postgres_subnet = app_gateway_subnet.id.apply(lambda _: network.Subnet(
    "postgresSubnet",
    subnet_name="postgresSubnet",
    resource_group_name=rg_name,
    virtual_network_name=VNET.name,
    address_prefix="10.0.3.0/24",
    delegations=[network.DelegationArgs(
        name="postgresDelegation",
        service_name="Microsoft.DBforPostgreSQL/flexibleServers"
    )]
))
pulumi.export("postgres_subnet_id", postgres_subnet.id)
There are a couple of items in here I want to point out. The first is the service endpoints. Service endpoints allow the subnet to access those resources over the Azure backbone network, which lets me restrict traffic inside the storage account, key vaults, and container registry to only my subnets. If you are not using private endpoints, this is the way you should restrict network access to your PaaS resources in Azure.
service_endpoints=[
    network.ServiceEndpointPropertiesFormatArgs(
        service="Microsoft.Storage", locations=["East US"]),
    network.ServiceEndpointPropertiesFormatArgs(
        service="Microsoft.KeyVault", locations=["East US"]),
    network.ServiceEndpointPropertiesFormatArgs(
        service="Microsoft.ContainerRegistry", locations=["East US"])
]
The other thing I want to point out is the delegation on my postgres_subnet. This delegation is needed to integrate my Postgres server into my VNET and make it inaccessible from outside the VNET. From Microsoft: “Your virtual network integrated Azure Database for PostgreSQL flexible server instance must be in a subnet that's delegated. That is, only Azure Database for PostgreSQL flexible server instances can use that subnet. No other Azure resource types can be in the delegated subnet. You delegate a subnet by assigning its delegation property as Microsoft.DBforPostgreSQL/flexibleServers.” For more information on this, visit https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-networking-private .
delegations=[network.DelegationArgs(
    name="postgresDelegation",
    service_name="Microsoft.DBforPostgreSQL/flexibleServers"
)]
Private DNS Zone:
Now that I have created my VNET and subnets, I can create a Private DNS Zone for my PostgreSQL server. Even though the server is not created yet, I can still create the Private DNS Zone and link it to my VNET. This DNS zone is needed to make access to a PostgreSQL server private. From Microsoft: “Azure Private DNS provides a reliable and secure DNS service for your virtual network. Azure Private DNS manages and resolves domain names in the virtual network without the need to configure a custom DNS solution. When using private network access with Azure virtual network, providing the private DNS zone information is mandatory in order to be able to do DNS resolution. For new Azure Database for PostgreSQL flexible server instance creation using private network access, private DNS zones need to be used while configuring Azure Database for PostgreSQL flexible server instances with private access." Because I want my PostgreSQL server to be integrated inside my VNET, I need to set this up.
I am going to need to create the zone and link it to my VNET.
# Create Private DNS Zone
private_dns_zone = network.PrivateZone(
    "privateDnsZone",
    resource_group_name=rg_name,
    # This is the standard zone name used when linking to Azure Database services over private link
    private_zone_name="privatelink.postgres.database.azure.com",
    location="Global"
)

private_dns_zone_virtual_network_link = network.VirtualNetworkLink(
    "cybauerPostgresDnsZoneLink",
    resource_group_name=rg_name,
    private_zone_name=private_dns_zone.name,
    location="Global",
    virtual_network=network.SubResourceArgs(
        id=VNET.id
    ),
    registration_enabled=False
)
Now that the network code is finished, I can wrap all of it in the function and return the class instances to use later.
import pulumi
from pulumi_azure_native import network

def create_network(rg_name):
    # Create the VNET
    VNET = network.VirtualNetwork(
        "VNET",
        virtual_network_name="cybauer_VNET",
        resource_group_name=rg_name,
        address_space=network.AddressSpaceArgs(
            address_prefixes=["10.0.0.0/19"],
        )
    )

    # Create the Container Apps Subnet
    container_apps_subnet = network.Subnet(
        "containerAppsSubnet",
        subnet_name="containerAppsSubnet",
        resource_group_name=rg_name,
        virtual_network_name=VNET.name,
        address_prefix="10.0.0.0/23",
        service_endpoints=[
            network.ServiceEndpointPropertiesFormatArgs(
                service="Microsoft.Storage", locations=["East US"]),
            network.ServiceEndpointPropertiesFormatArgs(
                service="Microsoft.KeyVault", locations=["East US"]),
            network.ServiceEndpointPropertiesFormatArgs(
                service="Microsoft.ContainerRegistry", locations=["East US"])
        ]
    )

    # Export the Container Apps Subnet ID
    pulumi.export("container_apps_subnet_id", container_apps_subnet.id)

    # Create the Application Gateway Subnet, dependent on Container Apps Subnet creation
    app_gateway_subnet = container_apps_subnet.id.apply(lambda _: network.Subnet(
        "appGatewaySubnet",
        subnet_name="appGatewaySubnet",
        resource_group_name=rg_name,
        virtual_network_name=VNET.name,
        address_prefix="10.0.2.0/24",
        private_link_service_network_policies=network.VirtualNetworkPrivateLinkServiceNetworkPolicies.DISABLED
    ))

    # Export the Application Gateway Subnet ID
    pulumi.export("app_gateway_subnet_id", app_gateway_subnet.id)

    # Create the PostgreSQL Subnet, dependent on Application Gateway Subnet creation
    postgres_subnet = app_gateway_subnet.id.apply(lambda _: network.Subnet(
        "postgresSubnet",
        subnet_name="postgresSubnet",
        resource_group_name=rg_name,
        virtual_network_name=VNET.name,
        address_prefix="10.0.3.0/24",
        delegations=[network.DelegationArgs(
            name="postgresDelegation",
            service_name="Microsoft.DBforPostgreSQL/flexibleServers"
        )]
    ))
    pulumi.export("postgres_subnet_id", postgres_subnet.id)

    # Create Private DNS Zone
    private_dns_zone = network.PrivateZone(
        "privateDnsZone",
        resource_group_name=rg_name,
        # This is the standard zone name used when linking to Azure Database services over private link
        private_zone_name="privatelink.postgres.database.azure.com",
        location="Global"
    )

    private_dns_zone_virtual_network_link = network.VirtualNetworkLink(
        "cybauerPostgresDnsZoneLink",
        resource_group_name=rg_name,
        private_zone_name=private_dns_zone.name,
        location="Global",
        virtual_network=network.SubResourceArgs(
            id=VNET.id
        ),
        registration_enabled=False
    )

    return VNET, postgres_subnet, container_apps_subnet, app_gateway_subnet, private_dns_zone, [container_apps_subnet, postgres_subnet, app_gateway_subnet]
Storage Account
I need the storage account to host static and media files for the app. In my app's settings.py I named the storage account cybauersa, which means I need to make sure to name it that in my Pulumi code. I will need a storage account, a blob service, two containers (static and media), and a SAS token so I can inject it into the container app as an environment variable. I also want to limit access to this storage account to only connections from my container app subnet. This way no one else can access the account or get the files.
First, let's create the storage account:
from pulumi_azure_native import storage
import pulumi

def create_sa(rg_name, container_apps_sub_id):
    # Create an Azure resource (Storage Account)
    account = storage.StorageAccount(
        "storageAccount",
        account_name="cybauersa",
        resource_group_name=rg_name,
        allow_blob_public_access=True,
        enable_https_traffic_only=True,
        public_network_access=storage.PublicNetworkAccess.ENABLED,
        sku=storage.SkuArgs(
            name=storage.SkuName.STANDARD_LRS,
        ),
        kind=storage.Kind.STORAGE_V2,
        default_to_o_auth_authentication=True,
        encryption={
            "keySource": storage.KeySource.MICROSOFT_STORAGE,
            "requireInfrastructureEncryption": True,
            "services": {
                "blob": {
                    "enabled": True,
                    "keyType": storage.KeyType.ACCOUNT,
                },
                "file": {
                    "enabled": True,
                    "keyType": storage.KeyType.ACCOUNT,
                },
            },
        },
        minimum_tls_version=storage.MinimumTlsVersion.TLS1_2,
        allow_shared_key_access=True,
        network_rule_set=storage.NetworkRuleSetArgs(
            default_action=storage.DefaultAction.DENY,
            virtual_network_rules=[
                storage.VirtualNetworkRuleArgs(
                    virtual_network_resource_id=container_apps_sub_id
                )
            ]
        )
    )
A couple of things to point out here. In my Django app I could not get the static files to work with token credentials or SAS tokens when retrieving them. The collectstatic command from manage.py works and uploads the static files to the storage account, but retrieving them at request time didn't. For that reason I decided to make the static container public, since the only files that go in there are public data already. Because of this I needed to set allow_blob_public_access to True so I could open the blob container up. The media container will stay private. Enabling public network access is needed if you are not using Private Endpoints. At the bottom of that code block you can see the firewall rules are set up to restrict traffic to my container app subnet. The default action on the network rule set is to deny, so it will deny all connections besides the ones whitelisted; the only connection allowed through is from the container app subnet. As you can see, I was able to pass the container app subnet ID as an argument into the function and use it for the resource ID.
Note: If the storage account had any data that was even remotely sensitive, I would use a private endpoint connection, turn off shared key access, and enforce Entra ID authentication to the storage account and its files. This is considered best practice and the most secure way to use Azure Storage Accounts.
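For context, here is roughly how the Django side consumes this split. This is a hypothetical settings.py fragment assuming the django-storages Azure backend; the AZURE_* setting names come from that library, and the values are illustrative, not my exact configuration:

```python
import os

# Hypothetical Django settings fragment (assumes django-storages[azure]).
AZURE_ACCOUNT_NAME = "cybauersa"
AZURE_CUSTOM_DOMAIN = f"{AZURE_ACCOUNT_NAME}.blob.core.windows.net"

# The SAS token the container app receives as an environment variable.
AZURE_SAS_TOKEN = os.environ.get("AZURE_SAS_TOKEN", "")

# Static container is public, so plain HTTPS URLs work with no token.
STATIC_URL = f"https://{AZURE_CUSTOM_DOMAIN}/static/"

# Media container stays private; the storage backend appends the SAS token.
MEDIA_URL = f"https://{AZURE_CUSTOM_DOMAIN}/media/"
```

The point of the split: anything under STATIC_URL resolves without credentials because of the public static container, while media requests still need the SAS token.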
Create the blob service:
Now I need a blob service set up so I can create the two containers.
blob_service_properties_resource = account.name.apply(lambda name: storage.BlobServiceProperties(
    "blobServicePropertiesResource",
    account_name=name,
    resource_group_name=rg_name,
    automatic_snapshot_policy_enabled=False,
    blob_services_name="default",
    # Add CORS rules to allow requests from https://cybauer.com
    cors=storage.CorsRulesArgs(
        cors_rules=[
            storage.CorsRuleArgs(
                allowed_origins=["https://cybauer.com"],
                allowed_methods=["GET", "HEAD", "POST", "OPTIONS"],
                allowed_headers=["*"],
                exposed_headers=["*"],
                max_age_in_seconds=3600,
            )
        ]
    ),
    container_delete_retention_policy={
        "allowPermanentDelete": False,
        "days": 7,
        "enabled": True,
    },
    delete_retention_policy={
        "allowPermanentDelete": False,
        "days": 7,
        "enabled": True,
    },
    is_versioning_enabled=False,
    restore_policy={
        "enabled": False,
        "days": 5,
    }
))
The interesting item in this is the CORS rules. When I was testing, my web app could not access any of my web fonts in the static container because I did not have any cross-origin resource sharing rules. So I had to create a rule that allows connections from cybauer.com to access the resources in the containers. From Microsoft: “CORS is an HTTP feature that enables a web application running under one domain to access resources in another domain. Web browsers implement a security restriction known as same-origin policy that prevents a web page from calling APIs in a different domain; CORS provides a secure way to allow one domain (the origin domain) to call APIs in another domain. See the CORS specification for details on CORS.”
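In plain terms, the rule above means the storage service answers a browser's preflight by checking the request's origin and method against the allowed lists. A toy sketch of that check (my own illustration of the semantics, not how Azure implements it):

```python
# These values mirror the CorsRuleArgs above.
ALLOWED_ORIGINS = ["https://cybauer.com"]
ALLOWED_METHODS = ["GET", "HEAD", "POST", "OPTIONS"]

def preflight_ok(origin: str, method: str) -> bool:
    # A cross-origin request passes only if both origin and method are allowed
    return origin in ALLOWED_ORIGINS and method in ALLOWED_METHODS

print(preflight_ok("https://cybauer.com", "GET"))   # True: my site fetching a font
print(preflight_ok("https://evil.example", "GET"))  # False: any other origin is refused
```

Without that rule, the browser's same-origin policy blocked the font requests even though the blobs themselves were reachable.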
I turned off permanent delete and set the retention to 7 days. I also defined a restore policy, though it stays disabled here since blob point-in-time restore requires versioning to be enabled.
The last two things I need to do are create the containers and the SAS token.
static_container = blob_service_properties_resource.id.apply(lambda _: storage.BlobContainer(
    "static",
    account_name=account.name,
    container_name="static",
    resource_group_name=rg_name,
    public_access=storage.PublicAccess.BLOB
))

media_container = blob_service_properties_resource.id.apply(lambda _: storage.BlobContainer(
    "media",
    account_name=account.name,
    container_name="media",
    resource_group_name=rg_name,
))
# Create a SAS token
sas_token = pulumi.Output.all(rg_name, account.name).apply(
    lambda args: storage.list_storage_account_sas(
        resource_group_name=args[0],
        account_name=args[1],
        permissions="rwdlacup",
        resource_types="sco",
        services="b",
        shared_access_start_time="2024-07-01T00:00:00Z",
        shared_access_expiry_time="2028-07-01T00:00:00Z",
        protocols="https",
        key_to_sign="key1"
    )
)
Again, the storage account does not have sensitive data, so to get the static container to work with my Django app I left the blob container open to the public. The SAS token should also have a shorter lifespan if you are going to use SAS. In most scenarios it is best to use Entra ID authentication with a role assignment.
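To show how the app ends up using the token: a public static blob is reachable with a bare HTTPS URL, while a private media blob needs the SAS appended as a query string. blob_url here is a hypothetical helper of my own for illustration, not an SDK function:

```python
def blob_url(account: str, container: str, blob: str, sas_token: str = "") -> str:
    # Base HTTPS URL for any blob in the account
    url = f"https://{account}.blob.core.windows.net/{container}/{blob}"
    # A SAS token, when present, is just appended as the query string
    return f"{url}?{sas_token}" if sas_token else url

# Public static file: no token needed
print(blob_url("cybauersa", "static", "css/site.css"))
# Private media file: SAS token appended (token value shortened here)
print(blob_url("cybauersa", "media", "avatar.png", sas_token="sv=2022-11-02&sig=..."))
```

This is exactly why the token goes into the container app as an environment variable: the app needs it at request time to build media URLs, but never for static ones.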
Now I need to put it all together and return the class instances.
from pulumi_azure_native import storage
import pulumi

def create_sa(rg_name, container_apps_sub_id):
    # Create an Azure resource (Storage Account)
    account = storage.StorageAccount(
        "storageAccount",
        account_name="cybauersa",
        resource_group_name=rg_name,
        allow_blob_public_access=True,
        enable_https_traffic_only=True,
        public_network_access=storage.PublicNetworkAccess.ENABLED,
        sku=storage.SkuArgs(
            name=storage.SkuName.STANDARD_LRS,
        ),
        kind=storage.Kind.STORAGE_V2,
        default_to_o_auth_authentication=True,
        encryption={
            "keySource": storage.KeySource.MICROSOFT_STORAGE,
            "requireInfrastructureEncryption": True,
            "services": {
                "blob": {
                    "enabled": True,
                    "keyType": storage.KeyType.ACCOUNT,
                },
                "file": {
                    "enabled": True,
                    "keyType": storage.KeyType.ACCOUNT,
                },
            },
        },
        minimum_tls_version=storage.MinimumTlsVersion.TLS1_2,
        allow_shared_key_access=True,
        network_rule_set=storage.NetworkRuleSetArgs(
            default_action=storage.DefaultAction.DENY,
            virtual_network_rules=[
                storage.VirtualNetworkRuleArgs(
                    virtual_network_resource_id=container_apps_sub_id
                )
            ]
        )
    )

    blob_service_properties_resource = account.name.apply(lambda name: storage.BlobServiceProperties(
        "blobServicePropertiesResource",
        account_name=name,
        resource_group_name=rg_name,
        automatic_snapshot_policy_enabled=False,
        blob_services_name="default",
        # Add CORS rules to allow requests from https://cybauer.com
        cors=storage.CorsRulesArgs(
            cors_rules=[
                storage.CorsRuleArgs(
                    allowed_origins=["https://cybauer.com"],
                    allowed_methods=["GET", "HEAD", "POST", "OPTIONS"],
                    allowed_headers=["*"],
                    exposed_headers=["*"],
                    max_age_in_seconds=3600,
                )
            ]
        ),
        container_delete_retention_policy={
            "allowPermanentDelete": False,
            "days": 7,
            "enabled": True,
        },
        delete_retention_policy={
            "allowPermanentDelete": False,
            "days": 7,
            "enabled": True,
        },
        is_versioning_enabled=False,
        restore_policy={
            "enabled": False,
            "days": 5,
        }
    ))

    # Create a SAS token
    sas_token = pulumi.Output.all(rg_name, account.name).apply(
        lambda args: storage.list_storage_account_sas(
            resource_group_name=args[0],
            account_name=args[1],
            permissions="rwdlacup",
            resource_types="sco",
            services="b",
            shared_access_start_time="2024-07-01T00:00:00Z",
            shared_access_expiry_time="2028-07-01T00:00:00Z",
            protocols="https",
            key_to_sign="key1"
        )
    )
    pulumi.export("storage_name", account.name)

    static_container = blob_service_properties_resource.id.apply(lambda _: storage.BlobContainer(
        "static",
        account_name=account.name,
        container_name="static",
        resource_group_name=rg_name,
        public_access=storage.PublicAccess.BLOB
    ))

    media_container = blob_service_properties_resource.id.apply(lambda _: storage.BlobContainer(
        "media",
        account_name=account.name,
        container_name="media",
        resource_group_name=rg_name,
    ))

    return account, blob_service_properties_resource, static_container, media_container, sas_token.account_sas_token
Now, with these three pieces complete, let's check out the main file.
from resource_group import create_resourcegroup
from network import create_network
from storage_account import create_sa

cybauer_rg = create_resourcegroup()
VNET, postgres_subnet, container_apps_subnet, app_gateway_subnet, private_dns_zone, subnet_dependencies = create_network(rg_name=cybauer_rg.name)
account, blob_service_properties_resource, static_container, media_container, sas_token = create_sa(rg_name=cybauer_rg.name, container_apps_sub_id=container_apps_subnet.id)
As you can see, I used the functions and created variables for the class instances they return. You can also see how I use an instance in create_network(rg_name=cybauer_rg.name): the resource group instance has an attribute called name, and I can grab it and use it wherever I need to.
Conclusion
This is the end of part 2. In this post I created the Virtual Network (VNET), the subnets, and the Private DNS Zone for my PostgreSQL server. I also created the resource group that all of these resources will go into, and the storage account my Django app will use to store static and media files.
In the next post I will create the cybauer-vault key vault and the PostgreSQL server.