Both AWS and Azure will happily let you deploy insecure infrastructure at scale—they just make you screw it up in different ways. If you think choosing between them is about feature parity or pricing, you're missing the point: it's about which security model aligns with how your team actually operates, and which one makes it harder to accidentally expose your customer database.
Identity and Access Management: Different Philosophies, Same Footguns
AWS IAM: Everything is a JSON Policy
AWS IAM is resource-centric. You attach policies to users, groups, or roles, and those policies explicitly define what actions are allowed on which resources. It's powerful, flexible, and gives you enough rope to hang yourself.
Here's a typical least-privilege policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::prod-uploads/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "203.0.113.0/24"
        }
      }
    }
  ]
}
The good: granular control down to individual resource ARNs. You can restrict actions by IP, time of day, MFA presence, or any combination of conditions.
The bad: policy evaluation is complex. You've got identity-based policies, resource-based policies, SCPs, permission boundaries, and session policies all interacting. The IAM policy simulator exists because nobody can predict what the hell will actually happen.
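All those policy types feed into one fixed evaluation order, and it's worth internalizing: an explicit deny anywhere wins, then an explicit allow, otherwise implicit deny. Here's a toy sketch of just that core ordering (deliberately ignoring SCPs, permission boundaries, resource policies, and condition keys, which the real evaluator also handles):

```python
# Toy model of the core IAM evaluation order: an explicit Deny beats
# any Allow, and anything unmatched falls through to implicit deny.
# This ignores SCPs, boundaries, resource policies, and conditions.
def evaluate(statements, action, resource):
    decision = "ImplicitDeny"
    for stmt in statements:
        actions = stmt["Action"] if isinstance(stmt["Action"], list) else [stmt["Action"]]
        if action not in actions and "*" not in actions:
            continue
        if stmt["Resource"] not in (resource, "*"):
            continue
        if stmt["Effect"] == "Deny":
            return "Deny"  # explicit deny short-circuits everything
        decision = "Allow"
    return decision

policy = [
    {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "*"},
    {"Effect": "Deny", "Action": "s3:GetObject",
     "Resource": "arn:aws:s3:::prod-uploads/secret.txt"},
]
print(evaluate(policy, "s3:GetObject", "arn:aws:s3:::prod-uploads/public.txt"))  # Allow
print(evaluate(policy, "s3:GetObject", "arn:aws:s3:::prod-uploads/secret.txt"))  # Deny
print(evaluate(policy, "s3:PutObject", "arn:aws:s3:::prod-uploads/public.txt"))  # ImplicitDeny
```

Note the deny wins even though the allow statement appears first; statement order never matters in IAM.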
Common AWS IAM mistakes I see constantly:
Wildcard resource permissions:
{
  "Effect": "Allow",
  "Action": "s3:*",
  "Resource": "*"
}
This grants S3 admin on every bucket in your account. I've seen this in production more times than I care to admit.
Cross-account role trust policies that trust everything:
{
  "Effect": "Allow",
  "Principal": {
    "AWS": "*"
  },
  "Action": "sts:AssumeRole"
}
Congratulations, you just let anyone in any AWS account assume this role. The Condition block better be airtight.
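A cheap guardrail is to lint trust policies in CI before they ever reach an account. Here's a minimal sketch (a hypothetical helper of my own, not a substitute for IAM Access Analyzer, which does this properly):

```python
# Flag role trust policies that trust every AWS principal ("*")
# without any Condition block narrowing who can assume the role.
def audit_trust_policy(policy):
    findings = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal", {})
        wildcard = principal == "*" or principal.get("AWS") == "*"
        if stmt.get("Effect") == "Allow" and wildcard and not stmt.get("Condition"):
            findings.append("wildcard principal with no Condition")
    return findings

risky = {"Statement": [{"Effect": "Allow",
                        "Principal": {"AWS": "*"},
                        "Action": "sts:AssumeRole"}]}
print(audit_trust_policy(risky))  # ['wildcard principal with no Condition']
```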
Azure RBAC: Role-Based All The Way Down
Azure uses role-based access control tied to Azure Active Directory (now "Microsoft Entra ID" because Microsoft can't stop renaming things). Instead of writing policies, you assign built-in or custom roles at different scopes: management group, subscription, resource group, or individual resource.
# Assign Storage Blob Data Contributor at resource group level
az role assignment create \
--assignee user@example.com \
--role "Storage Blob Data Contributor" \
--scope "/subscriptions/abc-123/resourceGroups/prod-rg"
The good: easier to understand than AWS policy documents. Fewer "wait, what does this actually allow?" moments.
The bad: less granular. Built-in roles often grant more access than you need. Custom roles exist but require more work to maintain.
Azure's killer feature: Privileged Identity Management (PIM)
PIM lets you grant time-limited, just-in-time elevated access. Instead of permanent admin rights, users request elevation with justification, get it for a few hours, and it automatically expires.
One correction to a myth you'll see repeated: activation is not a plain az role assignment create call. That command has no --duration or --justification flags. Instead, you make users eligible for a role in PIM, and they self-activate through the portal or the roleAssignmentScheduleRequests ARM API, supplying a justification and an expiration window (say, 8 hours for incident response on a specific ticket).
AWS has no native equivalent. You can build it with Lambda and Step Functions, but it's not out-of-the-box.
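Under the hood, a PIM self-activation for Azure resources is a request against the Microsoft.Authorization roleAssignmentScheduleRequests API. Here's a sketch of just the request body; the field names reflect my reading of that API and should be verified against current Microsoft docs before you depend on them:

```python
import json
import uuid

# Sketch of a PIM self-activation request body for the ARM
# roleAssignmentScheduleRequests API. Field names are assumptions
# based on the Microsoft.Authorization API; verify before use.
def pim_activation_body(role_definition_id, principal_id, hours, justification):
    return {
        "properties": {
            "requestType": "SelfActivate",
            "roleDefinitionId": role_definition_id,
            "principalId": principal_id,
            "justification": justification,
            "scheduleInfo": {
                # ISO 8601 duration: activation expires automatically
                "expiration": {"type": "AfterDuration", "duration": f"PT{hours}H"},
            },
        }
    }

body = pim_activation_body(
    "/subscriptions/abc-123/providers/Microsoft.Authorization"
    "/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c",  # Contributor
    str(uuid.uuid4()),  # placeholder principal object ID
    8,
    "Incident response for Ticket-12345",
)
print(json.dumps(body, indent=2))
```

You'd PUT this to the scope you want elevated access on; the assignment then expires on its own, which is the whole point.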
Service Accounts and Workload Identity
AWS: IAM roles for EC2, ECS, Lambda. Attach a role to a resource, it gets temporary credentials via instance metadata service (IMDSv2 if you're not still using v1 like it's 2018).
# Attach role to EC2 instance
aws ec2 associate-iam-instance-profile \
--instance-id i-1234567890abcdef0 \
--iam-instance-profile Name=app-server-role
Azure: Managed identities (system-assigned or user-assigned). Similar concept, slightly different implementation.
# Enable system-assigned managed identity on VM
az vm identity assign \
--name prod-app-vm \
--resource-group prod-rg
Both work fine. AWS has better documentation. Azure's user-assigned identities are cleaner when you need the same identity across multiple resources.
Network Security: Stateful vs. Stateful (With Extra Steps)
AWS Security Groups: Simple and Stateful
Security groups in AWS are stateful firewalls attached to ENIs (Elastic Network Interfaces). You define inbound and outbound rules. Return traffic is automatically allowed because statefulness.
# Allow HTTPS from specific CIDR
aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 443 \
--cidr 10.0.1.0/24
Default deny on ingress, default allow on egress. Most teams never restrict egress, which is a mistake—you should know what's calling out.
Security group chaining:
# Web tier allows traffic from load balancer security group
aws ec2 authorize-security-group-ingress \
--group-id sg-web-tier \
--protocol tcp \
--port 443 \
--source-group sg-load-balancer
This is the right way to architect. Reference security groups, not CIDR blocks. When instances scale, access controls move with them.
Azure NSGs: Stateful With Priority Rules
Network Security Groups (NSGs) are stateful like AWS security groups, but with priority-based rule evaluation. Lower priority number = evaluated first.
# Create NSG rule allowing HTTPS
az network nsg rule create \
--resource-group prod-rg \
--nsg-name prod-nsg \
--name allow-https \
--priority 100 \
--source-address-prefixes 10.0.1.0/24 \
--destination-port-ranges 443 \
--access Allow \
--protocol Tcp
The priority system matters when you have conflicting rules. AWS security groups sidestep the problem entirely: they're allow-only, so anything not explicitly allowed is denied (explicit deny rules live in network ACLs, not security groups). Azure NSGs evaluate rules in ascending priority order and stop at the first match.
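The difference is easy to see in a toy evaluator: the first matching rule by ascending priority wins, so a broad deny at priority 200 can never override a narrow allow at 100. A sketch (ignoring Azure's built-in default rules at priorities 65000 and up, and treating sources as exact strings):

```python
# Toy NSG evaluation: sort rules by priority, first match decides.
def nsg_decide(rules, port, source):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["port"] in (port, "*") and rule["source"] in (source, "*"):
            return rule["access"]
    return "Deny"  # mimics Azure's built-in DenyAll default rule

rules = [
    {"priority": 100, "port": 443, "source": "10.0.1.0/24", "access": "Allow"},
    {"priority": 200, "port": "*", "source": "*", "access": "Deny"},
]
print(nsg_decide(rules, 443, "10.0.1.0/24"))  # Allow (priority 100 matches first)
print(nsg_decide(rules, 22, "10.0.1.0/24"))   # Deny (falls through to 200)
```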
Azure Application Security Groups (ASGs):
These are Azure's answer to security group chaining, but cleaner:
# Create ASG
az network asg create --name web-tier-asg --resource-group prod-rg
# Assign VM NIC to ASG
az network nic ip-config update \
--resource-group prod-rg \
--nic-name vm-nic \
--name ipconfig1 \
--application-security-groups web-tier-asg
# NSG rule references ASG, not IP ranges
az network nsg rule create \
--resource-group prod-rg \
--nsg-name prod-nsg \
--name allow-web-to-db \
--priority 110 \
--source-asgs web-tier-asg \
--destination-asgs db-tier-asg \
--destination-port-ranges 5432 \
--access Allow \
--protocol Tcp
Conceptually cleaner than AWS. In practice, both work fine once you grok them.
What Both Get Wrong: Default Egress
Neither platform restricts egress by default. Your compromised web server can beacon to command-and-control infrastructure all day long unless you explicitly block it.
Do this:
AWS:
# Remove default egress rule, add explicit allows
aws ec2 revoke-security-group-egress \
--group-id sg-0123456789abcdef0 \
--ip-permissions '[{"IpProtocol": "-1", "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]'
# Allow only necessary egress (DNS, HTTPS to internal services)
aws ec2 authorize-security-group-egress \
--group-id sg-0123456789abcdef0 \
--protocol udp \
--port 53 \
--cidr 10.0.0.2/32
Azure:
# Deny all outbound at lowest priority
az network nsg rule create \
--resource-group prod-rg \
--nsg-name prod-nsg \
--name deny-all-outbound \
--priority 4096 \
--direction Outbound \
--destination-address-prefixes '*' \
--destination-port-ranges '*' \
--access Deny \
--protocol '*'
# Add explicit allows at higher priority (the default direction is
# Inbound, so --direction Outbound is mandatory here)
az network nsg rule create \
--resource-group prod-rg \
--nsg-name prod-nsg \
--name allow-dns \
--priority 100 \
--direction Outbound \
--destination-address-prefixes 168.63.129.16 \
--destination-port-ranges 53 \
--access Allow \
--protocol Udp
Most teams won't do this because it breaks stuff initially. But it's the difference between detection and prevention.
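If you do commit to it, make the posture enforceable rather than aspirational: fail the pipeline when an NSG's outbound rules lack a catch-all deny or allow everything. A sketch against a simplified rule shape of my own invention; adapt it to whatever your IaC or rule-list output actually looks like:

```python
# Check a simplified outbound rule list for a default-deny posture:
# there must be a catch-all Deny, and no Allow may be fully open.
def egress_locked_down(outbound_rules):
    has_catchall_deny = any(
        r["access"] == "Deny" and r["dest"] == "*" and r["ports"] == "*"
        for r in outbound_rules
    )
    open_allows = [
        r for r in outbound_rules
        if r["access"] == "Allow" and r["dest"] == "*" and r["ports"] == "*"
    ]
    return has_catchall_deny and not open_allows

rules = [
    {"access": "Allow", "dest": "168.63.129.16", "ports": "53"},
    {"access": "Deny", "dest": "*", "ports": "*"},
]
print(egress_locked_down(rules))  # True
```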
Storage Encryption: Different Defaults, Same Capabilities
AWS S3: Server-Side Encryption
Since January 2023, S3 automatically encrypts all new objects with SSE-S3 (AES-256) by default. Before that, new buckets shipped unencrypted unless you configured it yourself, and plenty of older buckets still are. Either way, anything beyond the baseline—like customer-managed keys—requires explicit configuration.
# Enable default encryption with AWS-managed keys
aws s3api put-bucket-encryption \
--bucket prod-data \
--server-side-encryption-configuration '{
"Rules": [{
"ApplyServerSideEncryptionByDefault": {
"SSEAlgorithm": "AES256"
}
}]
}'
# Better: use customer-managed KMS keys
aws s3api put-bucket-encryption \
--bucket prod-data \
--server-side-encryption-configuration '{
"Rules": [{
"ApplyServerSideEncryptionByDefault": {
"SSEAlgorithm": "aws:kms",
"KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/abc-123"
}
}]
}'
Customer-managed keys give you control over key rotation, access policies, and audit trails. Use them for anything remotely sensitive.
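Bucket defaults aside, you can also request SSE-KMS per object at upload time. ServerSideEncryption and SSEKMSKeyId are real boto3 put_object parameters; the helper function and key ARN below are placeholders of mine:

```python
# Build put_object kwargs that force SSE-KMS with a specific CMK,
# regardless of the bucket's default encryption configuration.
def kms_put_kwargs(bucket, key, body, kms_key_arn):
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",
        "SSEKMSKeyId": kms_key_arn,  # placeholder ARN below
    }

kwargs = kms_put_kwargs(
    "prod-data", "report.csv", b"example bytes",
    "arn:aws:kms:us-east-1:123456789012:key/abc-123",
)
# Then: boto3.client("s3").put_object(**kwargs)
print(kwargs["ServerSideEncryption"])  # aws:kms
```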
Client-side encryption:
For truly paranoid scenarios, encrypt before upload:
import boto3
from cryptography.fernet import Fernet
# Generate key (store this in Secrets Manager, not in code)
key = Fernet.generate_key()
cipher = Fernet(key)
# Encrypt data before S3 upload
data = b"sensitive customer data"
encrypted = cipher.encrypt(data)
s3 = boto3.client('s3')
s3.put_object(Bucket='prod-data', Key='customer-data.enc', Body=encrypted)
AWS never sees your plaintext. Good for compliance, pain for operations.
Azure Storage: Encryption by Default
Azure Storage accounts have encryption enabled by default using Microsoft-managed keys. You can switch to customer-managed keys if needed:
# Create key vault and key (customer-managed storage keys require
# purge protection on the vault; soft delete is already on by default)
az keyvault create --name prod-keyvault --resource-group prod-rg --enable-purge-protection true
az keyvault key create --vault-name prod-keyvault --name storage-key --kty RSA
# Configure storage account to use customer-managed key
az storage account update \
--name prodstorageacct \
--resource-group prod-rg \
--encryption-key-source Microsoft.Keyvault \
--encryption-key-vault https://prod-keyvault.vault.azure.net \
--encryption-key-name storage-key
Azure's default-on approach is better from a security perspective. AWS's explicit configuration is better from a "know what you've deployed" perspective. Pick your poison.
Secrets Management: Not Even Close
AWS Secrets Manager:
import boto3
import json
client = boto3.client('secretsmanager')
# Store secret
client.create_secret(
Name='prod/db/password',
SecretString=json.dumps({'username': 'admin', 'password': 'changeme'})
)
# Retrieve secret
response = client.get_secret_value(SecretId='prod/db/password')
secret = json.loads(response['SecretString'])
Azure Key Vault:
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
credential = DefaultAzureCredential()
client = SecretClient(vault_url="https://prod-keyvault.vault.azure.net", credential=credential)
# Store secret
client.set_secret("db-password", "changeme")
# Retrieve secret
secret = client.get_secret("db-password")
print(secret.value)
Both work fine. AWS Secrets Manager has native RDS integration for automatic rotation. Azure Key Vault has better integration with managed identities. Neither is objectively better—use what you're already invested in.
Logging and Monitoring: CloudTrail vs. Activity Log
AWS CloudTrail:
Logs every API call. Out of the box you only get 90 days of management events in the console's Event History; no durable, multi-region trail exists until you create one. Do that immediately, and remember a new trail doesn't log until you start it:
aws cloudtrail create-trail \
--name org-trail \
--s3-bucket-name audit-logs \
--is-multi-region-trail \
--enable-log-file-validation
aws cloudtrail start-logging --name org-trail
Ship to CloudWatch Logs, set up metric filters for suspicious activity (root account usage, security group changes, IAM policy modifications).
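Those metric filters boil down to simple predicates over the event JSON. The same logic in Python is handy for ad-hoc hunting over exported CloudTrail files (the userIdentity.type and eventName fields are part of the real CloudTrail record format; the list of "suspicious" names is just a starter set):

```python
# Flag CloudTrail events worth alerting on: root account activity
# plus security group and IAM policy modifications.
SUSPICIOUS_EVENTS = {
    "AuthorizeSecurityGroupIngress", "RevokeSecurityGroupIngress",
    "PutUserPolicy", "PutRolePolicy", "AttachUserPolicy", "DeleteRolePolicy",
}

def flag_event(event):
    reasons = []
    if event.get("userIdentity", {}).get("type") == "Root":
        reasons.append("root account usage")
    if event.get("eventName") in SUSPICIOUS_EVENTS:
        reasons.append(f"sensitive API call: {event['eventName']}")
    return reasons

event = {"userIdentity": {"type": "Root"},
         "eventName": "AuthorizeSecurityGroupIngress"}
print(flag_event(event))
# ['root account usage', 'sensitive API call: AuthorizeSecurityGroupIngress']
```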
Azure Activity Log:
Enabled by default for subscription-level events. Export to Log Analytics workspace for long-term retention:
az monitor diagnostic-settings subscription create \
--name export-activity-log \
--location eastus \
--logs '[{"category": "Administrative", "enabled": true}]' \
--workspace /subscriptions/abc-123/resourceGroups/monitoring/providers/Microsoft.OperationalInsights/workspaces/prod-logs
Azure's default-enabled logging is better. AWS's CloudTrail integration with GuardDuty is more mature for threat detection.
The Actual Differences That Matter
Where AWS Wins:
- Granular IAM policies - You can lock down access to ridiculous levels of specificity
- Service Control Policies (SCPs) - Organization-wide denial policies that override everything
- Mature ecosystem - More third-party tooling, more Stack Overflow answers
- VPC endpoints - Private connectivity to AWS services without NAT gateways
Where Azure Wins:
- Privileged Identity Management - JIT access out of the box
- Default encryption - Fewer "oops, forgot to encrypt" moments
- Hybrid identity - Better if you're already Microsoft AD-centric
- Azure Policy - Declarative compliance enforcement across subscriptions
Where Both Fail:
- Complexity - Security configuration surface area is enormous
- Documentation - Feature parity docs exist, security implications docs don't
- Defaults - Neither platform ships truly hardened by default
- Alert fatigue - Both generate tons of logs; neither makes it easy to find what matters
The Bottom Line
You're not choosing between "secure" and "insecure." You're choosing between two platforms that both require expertise to configure securely.
Pick AWS if:
- You need maximum granularity in access control
- Your team is already fluent in IAM policy language
- You're building on existing AWS infrastructure
Pick Azure if:
- You're already in the Microsoft ecosystem (AD, Office 365)
- You prefer role-based access over policy documents
- You want JIT privileged access without building it yourself
Pick neither if:
- You're not prepared to invest in learning cloud security properly
- You think compliance frameworks will secure your infrastructure
- You believe vendor security features are a substitute for competent engineering
The cloud providers give you tools. They don't give you judgment. AWS and Azure security configurations are powerful—which means they're powerful tools to screw things up at scale. Choose based on which model fits how your team operates, then invest in actually learning how to use it.
And for the love of all that's holy, enable CloudTrail or Activity Log before you deploy anything else.