This post touches on a VPC that we could use for production servers. The servers would be deployed across two Availability Zones (AZs), managed by an Auto Scaling group, and would receive traffic from an Application Load Balancer (ALB). For security, the servers sit in private subnets, so outbound connectivity to the Internet would be provided through a NAT gateway.
Example VPC
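For illustration, here is a minimal boto3 sketch of how such a VPC and its subnets might be created programmatically. The region, CIDR blocks, and AZ names below are assumptions for the example; in practice you might instead use the console wizard or an infrastructure-as-code tool.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Create the VPC with an IPv4-only CIDR block
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# One public and one private subnet in each of the two AZs
subnet_layout = [
    ("10.0.0.0/20",   "us-east-1a"),  # public,  AZ 1
    ("10.0.16.0/20",  "us-east-1b"),  # public,  AZ 2
    ("10.0.128.0/20", "us-east-1a"),  # private, AZ 1
    ("10.0.144.0/20", "us-east-1b"),  # private, AZ 2
]
subnet_ids = []
for cidr, az in subnet_layout:
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)
    subnet_ids.append(subnet["Subnet"]["SubnetId"])

# Internet gateway for the public subnets
igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
```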
Configuring Routing
When the VPC is created using the console, a route table will be created for the public subnets with local routes and a route to the internet gateway (IGW). The private subnets will also have a route table created with local routes, and routes to the NAT gateway, egress-only internet gateway, and gateway VPC endpoint.
Below is an example route table for the public subnets. Note that we’ll be creating IPv4-only subnets instead of dual-stack subnets.
Destination | Target |
---|---|
10.0.0.0/16 | local |
0.0.0.0/0 | igw-id |
An example route table for one of the private subnets would be as follows.
Destination | Target |
---|---|
10.0.0.0/16 | local |
0.0.0.0/0 | nat-gateway-id |
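Continuing the boto3 sketch, the routes in the two tables above could be created roughly as follows. The variables igw_id, nat_gateway_id, public_subnet_id, and private_subnet_id are placeholders from earlier steps (the NAT gateway itself would be allocated with create_nat_gateway and an Elastic IP).

```python
# Public route table: the local 10.0.0.0/16 route is added automatically;
# add a default route to the internet gateway.
public_rt = ec2.create_route_table(VpcId=vpc_id)
public_rt_id = public_rt["RouteTable"]["RouteTableId"]
ec2.create_route(
    RouteTableId=public_rt_id,
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw_id,
)
ec2.associate_route_table(RouteTableId=public_rt_id, SubnetId=public_subnet_id)

# Private route table: the default route points at the NAT gateway instead.
private_rt = ec2.create_route_table(VpcId=vpc_id)
private_rt_id = private_rt["RouteTable"]["RouteTableId"]
ec2.create_route(
    RouteTableId=private_rt_id,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gateway_id,
)
ec2.associate_route_table(RouteTableId=private_rt_id, SubnetId=private_subnet_id)
```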
Security configuration
An example security group to associate with the servers would be as follows. It allows traffic from the load balancer on the listener port and protocol, in addition to health check traffic.
Source | Protocol | Port Range | Comments |
---|---|---|---|
ID of the load balancer security group | listener protocol | listener port | Allows inbound traffic from the load balancer on the listener port |
ID of the load balancer security group | health check protocol | health check port | Allows inbound health check traffic from the load balancer |
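As a rough boto3 sketch, the rules above might be created as shown below. Here alb_sg_id is a placeholder for the load balancer’s security group ID, and TCP port 80 is assumed for both the listener and the health check; if the health check uses a different port, a second ingress rule would be added in the same way.

```python
# Security group for the servers, allowing inbound traffic only from the ALB
server_sg = ec2.create_security_group(
    GroupName="app-server-sg",          # assumed name
    Description="Allow traffic from the load balancer only",
    VpcId=vpc_id,
)
server_sg_id = server_sg["GroupId"]

ec2.authorize_security_group_ingress(
    GroupId=server_sg_id,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 80,   # assumed listener/health check port
            "ToPort": 80,
            # Reference the ALB's security group instead of a CIDR range
            "UserIdGroupPairs": [{"GroupId": alb_sg_id}],
        }
    ],
)
```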
The above is an overview of how the VPC would be set up; please refer to this documentation1 for detailed information about configuring it on AWS.