Updated Feb 21, 2026

QAuto Infrastructure: Multi-Layered Secure Cloud Architecture

A core focus of my recent work has been architecting and deploying the cloud infrastructure for an enterprise automotive and logistics platform using Terraform on AWS. The environment hosts multiple high-traffic production services inside a multi-layered, heavily secured Virtual Private Cloud (VPC).

Note: Specific client names, internal routing domains, and proprietary identifiers have been anonymized in this write-up to protect production security.

Architecture Overview

The infrastructure was designed from the ground up with a defense-in-depth approach. By strictly maintaining separation between public-facing ingress points and backend private subnets, we ensure that workloads ranging from compute clusters to database engines remain resilient and isolated.

The Foundation: VPC & Networking

The core of the deployment rests on a custom VPC distributed across multiple Availability Zones to ensure fault tolerance.

  • Public Subnets: These act strictly as the DMZ. They host the external Application Load Balancers (ALBs) and our single point of administrative entry, a heavily restricted Bastion Host.
  • Private Subnets: The vast majority of the infrastructure lives here, entirely cut off from direct internet access. This includes our container orchestration nodes, virtual machines, and all data persistence layers. Outbound internet access required for patching or external APIs is routed through managed NAT Gateways.
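
A minimal Terraform sketch of this layering is shown below. All CIDR ranges, names, and Availability Zones are illustrative placeholders rather than production values, and the public route table toward the Internet Gateway is omitted for brevity.

```hcl
# Illustrative sketch only -- CIDRs, names, and AZs are placeholders, not production values.
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
}

# DMZ: ALBs and the Bastion Host live here.
resource "aws_subnet" "public_a" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "eu-central-1a"
  map_public_ip_on_launch = true
}

# Backend workloads: no direct route to the Internet Gateway.
resource "aws_subnet" "private_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.101.0/24"
  availability_zone = "eu-central-1a"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

# NAT Gateway gives private subnets outbound-only access for patching and external APIs.
resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "nat_a" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_a.id
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat_a.id
  }
}

resource "aws_route_table_association" "private_a" {
  subnet_id      = aws_subnet.private_a.id
  route_table_id = aws_route_table.private.id
}
```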

Compute & Orchestration Layers (EC2 & EKS)

Rather than running everything as a single monolith, compute is segmented logically:

  • Production Instances: Dedicated, autoscaled EC2 instances handle stateful workloads for various multi-tenant products (e.g., enterprise B2B tools, customer-facing portals, and internal logistics services).
  • Staging Environments: A unified, cost-optimized staging environment mirrors production routing to allow for high-fidelity QA testing before rollout.
  • Kubernetes (EKS): We deployed a managed Amazon EKS cluster that serves as the modern backbone for our containerized, microservices-oriented workloads, enabling agile deployment patterns.
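
A minimal sketch of how the EKS layer sits inside the private subnets follows. The IAM roles and the second private subnet are assumed to be declared elsewhere; cluster name and node sizing are illustrative, not the production values.

```hcl
# Illustrative sketch -- IAM roles and aws_subnet.private_b are assumed to exist elsewhere.
resource "aws_eks_cluster" "main" {
  name     = "platform-eks"
  role_arn = aws_iam_role.eks_cluster.arn

  vpc_config {
    subnet_ids              = [aws_subnet.private_a.id, aws_subnet.private_b.id]
    endpoint_private_access = true
    endpoint_public_access  = false # control plane reachable only from inside the VPC
  }
}

# Managed node group running in the private subnets, scaled between 2 and 6 workers.
resource "aws_eks_node_group" "workers" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "workers"
  node_role_arn   = aws_iam_role.eks_nodes.arn
  subnet_ids      = [aws_subnet.private_a.id, aws_subnet.private_b.id]

  scaling_config {
    desired_size = 3
    min_size     = 2
    max_size     = 6
  }
}
```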

Advanced Traffic Routing

To route traffic efficiently across many disparate services within the same VPC, we rely on carefully structured Application Load Balancer configurations:

  • Host-Based Routing: The Production ALB uses listener rules to inspect host headers (e.g., api.example-service.com, admin.internal.net, client-portal.io) and dynamically route requests to their designated Target Groups, as sketched after this list.
  • Automated TLS/SSL: All web traffic is strictly served over HTTPS, with AWS Certificate Manager (ACM) seamlessly handling certificate provisioning and renewal.
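
In Terraform, a host-based rule on the HTTPS listener looks roughly like this; the hostnames, priorities, and the ACM certificate and target group references are illustrative placeholders rather than the production configuration.

```hcl
# Illustrative sketch -- ARNs, hostnames, and priorities are placeholders.
resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.production.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-TLS13-1-2-2021-06"
  certificate_arn   = aws_acm_certificate.wildcard.arn # provisioned and renewed via ACM

  # Fallback when no host-based rule matches.
  default_action {
    type = "fixed-response"
    fixed_response {
      content_type = "text/plain"
      message_body = "Not found"
      status_code  = "404"
    }
  }
}

# Route requests for api.example-service.com to the API target group.
resource "aws_lb_listener_rule" "api" {
  listener_arn = aws_lb_listener.https.arn
  priority     = 10

  condition {
    host_header {
      values = ["api.example-service.com"]
    }
  }

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.api.arn
  }
}
```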

Persistence & Caching

All data persistence is handled securely inside the private subnets:

  • Relational Data: Robust MySQL instances process transactional production workloads.
  • NoSQL & Document Stores: A clustered MongoDB deployment provides high availability and performance tuning for specialized document workloads.
  • In-Memory Caching: Redis (via ElastiCache) is used to offload database reads and accelerate response times for end users.
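
A minimal sketch of the ElastiCache layer placed in the private subnets is shown below; the node type, names, and the referenced subnets and security group are illustrative assumptions.

```hcl
# Illustrative sketch -- names, node type, and referenced resources are placeholders.
resource "aws_elasticache_subnet_group" "redis" {
  name       = "cache-private"
  subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id]
}

resource "aws_elasticache_replication_group" "cache" {
  replication_group_id       = "platform-cache"
  description                = "Application cache in front of the relational databases"
  engine                     = "redis"
  node_type                  = "cache.t3.medium"
  num_cache_clusters         = 2 # one primary plus one replica for failover
  automatic_failover_enabled = true
  at_rest_encryption_enabled = true
  transit_encryption_enabled = true
  subnet_group_name          = aws_elasticache_subnet_group.redis.name
  security_group_ids         = [aws_security_group.cache.id]
}
```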

Infrastructure as Code & State Management

The entire environment is codified in highly modular Terraform, strictly enforcing DRY principles:

  • modules/vpc/: Lays the networking foundation.
  • modules/security_groups/: Manages explicit, tightly-scoped firewall rules for ALBs, web nodes, DBs, Cache, and Bastion access.
  • modules/alb/: Controls listener rules and target groups.
  • modules/eks/, modules/redis/, modules/sqs/: Provision and configure the respective managed AWS services.
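
At the root level, these modules are wired together roughly as follows; the input and output names shown here are illustrative assumptions, not the exact module interfaces.

```hcl
# Illustrative root-module wiring -- variable and output names are assumptions.
module "vpc" {
  source             = "./modules/vpc"
  cidr_block         = "10.0.0.0/16"
  availability_zones = ["eu-central-1a", "eu-central-1b"]
}

module "security_groups" {
  source = "./modules/security_groups"
  vpc_id = module.vpc.vpc_id
}

module "alb" {
  source            = "./modules/alb"
  public_subnet_ids = module.vpc.public_subnet_ids
  security_group_id = module.security_groups.alb_sg_id
}

module "eks" {
  source             = "./modules/eks"
  private_subnet_ids = module.vpc.private_subnet_ids
}
```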

To ensure consistency across the engineering team, Terraform state is stored in an S3 bucket with versioning enabled, and a DynamoDB table provides state locking to prevent concurrent modification collisions.
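
A typical backend block for this setup looks like the following; the bucket, key, table, and region names are placeholders.

```hcl
# Illustrative backend configuration -- names and region are placeholders.
terraform {
  backend "s3" {
    bucket         = "platform-terraform-state"
    key            = "production/terraform.tfstate"
    region         = "eu-central-1"
    encrypt        = true
    dynamodb_table = "terraform-state-lock" # lock table prevents concurrent applies
  }
}
```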

Security Best Practices Enforced

When operating at this scale, security cannot be an afterthought:

  1. Zero Public Backends: All application and database servers are located in private subnets. There is no route from the Internet Gateway directly to backend nodes.
  2. Granular Firewalling: Security Groups restrict ingress by source SG IDs rather than broad IP ranges. Only the ALB can talk to web nodes; only web nodes can talk to the databases (see the sketch after this list).
  3. Restricted Administrative Access: SSH access to internal backend servers is possible only via key-pair authentication through the Bastion Host in the public subnet.
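
A minimal sketch of this SG-to-SG chaining follows; the ports are illustrative, and the referenced security groups are assumed to be defined in modules/security_groups/.

```hcl
# Illustrative sketch -- the referenced security groups are assumed to exist.
# Only the ALB may reach web nodes.
resource "aws_security_group_rule" "web_from_alb" {
  type                     = "ingress"
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
  security_group_id        = aws_security_group.web.id
  source_security_group_id = aws_security_group.alb.id # source SG ID, not a CIDR range
}

# Only web nodes may reach the MySQL databases.
resource "aws_security_group_rule" "db_from_web" {
  type                     = "ingress"
  from_port                = 3306
  to_port                  = 3306
  protocol                 = "tcp"
  security_group_id        = aws_security_group.db.id
  source_security_group_id = aws_security_group.web.id
}

# SSH to backend nodes is allowed only from the Bastion Host.
resource "aws_security_group_rule" "ssh_from_bastion" {
  type                     = "ingress"
  from_port                = 22
  to_port                  = 22
  protocol                 = "tcp"
  security_group_id        = aws_security_group.web.id
  source_security_group_id = aws_security_group.bastion.id
}
```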

This project combined dynamic, flexible cloud networking with rigid enterprise security requirements across a complex microservices ecosystem.

Let's work together

Have a project in mind? Let's discuss how I can help with DevOps, cloud infrastructure, or platform engineering.

Reach me on GitHub, X, or LinkedIn.