Updated Feb 21, 2026

Building a Secure, Multi-Region AWS Infrastructure

When designing the infrastructure for a global enterprise platform, the primary goal was clear: build a robust, secure, and highly available production environment across multiple AWS regions. Using Terraform for infrastructure-as-code and Ansible for configuration management, I designed a peered network architecture capable of handling real-world scale while prioritizing strict security boundaries.

Note: Specific regions, IP ranges, and bucket names have been generalized to protect production infrastructure.

The Architecture at a Glance

To ensure high availability and proper segmentation, the infrastructure is distributed across three distinct AWS regions. These regions are connected in a full-mesh VPC peering topology, allowing private, internal communication without ever exposing backend traffic to the public internet.

  • Management Hub - 10.x.0.0/16: Houses the Bastion Host, which serves as the single, heavily monitored entry point for all administrative access into the private network.
  • Primary Production Region - 10.y.0.0/16: Runs the core application services (prod-core-svcs) behind external Application Load Balancers.
  • Secondary / DR Region - 10.z.0.0/16: Serves as the Disaster Recovery environment, providing geographical failover redundancy and acting as the accepter side of the peering connections.
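One leg of that full mesh can be sketched in Terraform. This is a minimal, illustrative example: the VPC resource names, provider alias, and variable names are assumptions, not the project's actual code.

```hcl
# One leg of the full mesh: peering the management hub VPC to production.
# Resource and variable names here are illustrative.
resource "aws_vpc_peering_connection" "mgmt_to_prod" {
  vpc_id      = aws_vpc.mgmt.id   # requester: management hub VPC
  peer_vpc_id = aws_vpc.prod.id   # accepter: production VPC
  peer_region = var.prod_region   # cross-region peering
}

# The accepter side runs under the production region's provider alias.
resource "aws_vpc_peering_connection_accepter" "prod" {
  provider                  = aws.prod
  vpc_peering_connection_id = aws_vpc_peering_connection.mgmt_to_prod.id
  auto_accept               = true
}
```

Repeating this pattern for each region pair (hub↔prod, hub↔DR, prod↔DR) yields the full mesh.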

Security First: Production Considerations

Building for a true production environment means adopting a “zero-trust” mindset and minimizing the attack surface wherever possible.

  • Strict Network Isolation: All production EC2 instances reside exclusively in private subnets. They cannot be reached directly from the public internet. Any outbound traffic required by these instances (e.g., for system patches or third-party APIs) is routed safely through managed NAT Gateways.
  • Centralized, Secure Access: SSH access is heavily locked down. Administrators must authenticate using key-pairs through the Bastion Host in the Management Hub. From there, instance-level security groups explicitly permit SSH traffic (Port 22) only if it originates from our internal peered CIDRs.
  • Dynamic Firewall Rules: I developed a custom Terraform Security Group module utilizing dynamic blocks. This allows us to strictly define ingress and egress rules scoped to specific internal CIDR blocks or trusted Security Group IDs, entirely avoiding wide-open port ranges.
  • TLS/SSL Encryption: All external traffic is terminated at the Application Load Balancer using certificates from AWS Certificate Manager (ACM). Users always connect over encrypted HTTPS before their traffic is routed on to the private application instances.
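The dynamic security group rules described above can be sketched as follows. This is a simplified illustration of the pattern, not the actual module; the variable shapes and names are assumptions.

```hcl
# Sketch of a security group module using dynamic blocks.
# Rules are passed in as a map, so no wide-open port ranges are hard-coded.
variable "ingress_rules" {
  description = "Ingress rules keyed by a human-readable name"
  type = map(object({
    port        = number
    protocol    = string
    cidr_blocks = list(string)
  }))
}

resource "aws_security_group" "this" {
  name   = var.name
  vpc_id = var.vpc_id

  dynamic "ingress" {
    for_each = var.ingress_rules
    content {
      description = ingress.key
      from_port   = ingress.value.port
      to_port     = ingress.value.port
      protocol    = ingress.value.protocol
      cidr_blocks = ingress.value.cidr_blocks
    }
  }
}
```

A caller could then pass, for example, `ssh = { port = 22, protocol = "tcp", cidr_blocks = ["10.x.0.0/16"] }` to permit SSH only from the peered management CIDR.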

Infrastructure as Code (Terraform)

To keep the codebase DRY and maintainable, I broke the Terraform configuration down into reusable custom modules, keeping environment-specific state separated. State files are stored remotely in an S3 bucket (<redacted>-tfstate) with state locking enabled to prevent concurrent modifications.
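A remote S3 backend of this kind is typically declared per environment. The block below is a hedged sketch: the bucket, key, region, and lock table names are placeholders, and DynamoDB-based locking is the standard mechanism for the S3 backend, assumed here rather than confirmed by the post.

```hcl
# Illustrative remote-state configuration; all names are placeholders.
terraform {
  backend "s3" {
    bucket         = "example-tfstate"              # redacted in the post
    key            = "prod/primary/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true                           # encrypt state at rest
    dynamodb_table = "terraform-locks"              # enables state locking
  }
}
```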

Our modular approach includes:

  • modules/vpc: Handles the heavy lifting of network creation—spanning multi-AZ public/private subnets, automatically provisioning NAT Gateways for egress, and wiring up complex routing tables to handle peering traffic.
  • modules/ec2: Standardizes our compute deployments. It automatically fetches the latest Ubuntu LTS AMI (unless pinned for stability, like our Bastion) and ensures the correct IAM roles, key pairs, and security groups are attached.
  • modules/alb: Provisions Application Load Balancers with standardized health checks, target groups, and required tagging policies for organizational compliance.
  • modules/vpc-peering: Manages the cross-region peering connections and automatic route table propagation.
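A per-region root configuration then just wires these modules together. The module paths match the post, but the input and output names below are illustrative assumptions.

```hcl
# Sketch of a regional root module composing the custom modules.
# Input/output names are hypothetical.
module "vpc" {
  source     = "../../modules/vpc"
  cidr_block = var.vpc_cidr
  azs        = var.availability_zones
  enable_nat = true   # NAT Gateways for private-subnet egress
}

module "alb" {
  source     = "../../modules/alb"
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.public_subnet_ids
}

module "app" {
  source     = "../../modules/ec2"
  subnet_ids = module.vpc.private_subnet_ids  # no public IPs
}
```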

This architecture allows us to deploy changes cleanly per region:

cd environments/<region>/prod
terraform init
terraform plan
terraform apply

Configuration Management (Ansible)

While Terraform provisions the cloud resources, Ansible handles the software stack inside the instances (such as our core LAMP stack).

Because our production instances have no public IP addresses, Ansible is configured to execute playbooks via a ProxyJump through our Bastion Host. By defining ansible_ssh_common_args in our inventory file (ansible/inventory/hosts.yml), Ansible tunnels its SSH connections through the bastion and into the private application targets.
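An inventory entry using this pattern might look like the following. The hostnames, IPs, and user are placeholders; only the ansible_ssh_common_args ProxyJump technique is taken from the post.

```yaml
# ansible/inventory/hosts.yml — illustrative; hosts and addresses are placeholders
all:
  children:
    prod_app:
      hosts:
        app-01:
          ansible_host: 10.y.1.10        # private IP, reachable only via peering
      vars:
        ansible_user: ubuntu
        ansible_ssh_common_args: >-
          -o ProxyJump=admin@bastion.example.com
          -o StrictHostKeyChecking=accept-new
```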

# Example deployment of the application stack
cd ansible
ansible-playbook playbooks/install-lamp-stack.yml

Final Thoughts

This project was an excellent exercise in translating production-grade security requirements into a clean, automated, and codified workflow. By separating infrastructure provisioning from configuration management—and rigidly isolating private resources from the public internet—the infrastructure is both resilient and heavily defended.
