
Multiple SSL domains on AWS ELB with Nginx

Is it possible to serve multiple domains (each with a unique SSL certificate) via HTTPS behind a single load balancer on AWS?

Yes, you can: with TCP and Proxy Protocol.

Proxy Protocol allows you to safely and transparently forward TCP (layer 4) requests while attaching the upstream client's address information. The HAProxy Proxy Protocol abstract has more detail on how it all actually works. As of July 2013, the Amazon Web Services ELB (Elastic Load Balancer) supports distributing TCP connections with Proxy Protocol.
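For a sense of what this looks like on the wire, the version 1 header is a single human-readable line prepended to the TCP stream before any application data. The addresses and ports below are placeholders:

# proxy protocol v1 header, a single line terminated by CRLF
# PROXY <protocol> <client ip> <lb ip> <client port> <lb port>
PROXY TCP4 198.51.100.22 203.0.113.10 56324 443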

In a typical ELB/HTTPS setup, the SSL connection is negotiated by the load balancer and HTTP traffic is forwarded on to the web server. If you are only hosting one top-level domain, this setup works just fine. For our use case, we need to handle HTTPS connections for multiple domains on a single application stack. This means multiple certificates, which a single ELB instance does not support.

For our setup, SSL negotiation will be done by nginx on the web server rather than by the ELB. With nginx, running multiple server blocks, each with its own SSL certificate, is straightforward. Here is what you will need:

  • Nginx 1.6.2
  • Ubuntu 14.04 LTS
  • AWS CLI

Install Prerequisites

The nginx PPA includes the required modules, so there is no need to compile a custom build. Feel free to adjust to your own requirements.

# install nginx
sudo add-apt-repository -y ppa:nginx/stable
sudo apt-get update
sudo apt-get -y install nginx=1.6.*
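
You can confirm the installed version afterwards:

# confirm version
nginx -v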

The AWS CLI will require credentials provided by your account.

# install aws cli
sudo apt-get install awscli
aws configure
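
aws configure will prompt for your access key, secret key, default region, and output format:

AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]: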

Create and Configure the Load Balancer

You are likely also handling standard requests over port 80; you do not need to enable Proxy Protocol for that non-secure traffic. HTTP traffic can remain unaffected while you add HTTPS to an existing ELB.

First, we need an ELB instance. If you do not already have a load balancer, you can create one using the AWS console or with the AWS CLI, as sketched below. In the examples, we use acme-balancer as the ELB name and forward to backend port 9443.

The listener port should be created using the TCP protocol for both the Load Balancer Protocol and the Instance Protocol. The application layer protocol (HTTPS) is not handled until we reach the nginx instance. In most cases, the public port should be the standard 443.
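If you are creating the balancer with the CLI, the TCP-to-TCP listener can be defined in a single call. This is a minimal sketch for a VPC setup; the subnet and security group IDs are placeholders for your own values:

# create elb with a tcp listener on 443, forwarding to 9443
aws elb create-load-balancer \
  --load-balancer-name acme-balancer \
  --listeners Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=9443 \
  --subnets subnet-00000000 \
  --security-groups sg-00000000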

# create proxy protocol policy
aws elb create-load-balancer-policy \
  --load-balancer-name acme-balancer \
  --policy-name EnableProxyProtocol \
  --policy-type-name ProxyProtocolPolicyType \
  --policy-attributes AttributeName=ProxyProtocol,AttributeValue=True

# add policy to elb
aws elb set-load-balancer-policies-for-backend-server \
  --load-balancer-name acme-balancer \
  --instance-port 9443 \
  --policy-names EnableProxyProtocol

# results
aws elb describe-load-balancers --load-balancer-names acme-balancer
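
In the describe output, confirm the policy is attached to the backend port. The relevant portion should look roughly like this (other fields omitted):

"BackendServerDescriptions": [
    {
        "InstancePort": 9443,
        "PolicyNames": [
            "EnableProxyProtocol"
        ]
    }
]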

Configure Nginx with Proxy Protocol

If you have multiple server blocks (virtual hosts) listening on the same port, adding proxy_protocol to any listen directive for that port enables Proxy Protocol handling for ALL traffic on that port, not just the particular server block.

# block for proxy traffic
server {

    # port elb is forwarding ssl traffic to
    listen 9443 ssl proxy_protocol;
    
    # sets the proper client ip
    real_ip_header proxy_protocol;
    
    # aws vpc subnet ip range
    set_real_ip_from 10.0.0.0/16;
    
    server_name acme.com www.acme.com;
    
    ssl on;
    ssl_certificate /etc/ssl/acme/acme.com.crt;
    ssl_certificate_key /etc/ssl/acme/acme.com.key;

}

# block for direct traffic
server {

    listen 443 ssl;
    
    server_name acme.com www.acme.com;
    
    ssl on;
    ssl_certificate /etc/ssl/acme/acme.com.crt;
    ssl_certificate_key /etc/ssl/acme/acme.com.key;

}
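
After adding the server blocks, verify the configuration and reload nginx:

# check config and reload
sudo nginx -t
sudo service nginx reload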

And that's it. If the real IP settings are working correctly, you should not need to set up a custom log format. I have chosen to forward Proxy Protocol requests to a non-standard port (9443 in the example). This gives me access to the standard port outside of the load balancer for troubleshooting and debugging.
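For example, you can negotiate SSL directly against a single node while the ELB continues serving the proxied port. The IP below is a placeholder for a node's address, and the curl check assumes acme.com resolves to the ELB:

# test ssl negotiation (and sni) directly against a node, bypassing the elb
openssl s_client -connect 203.0.113.10:443 -servername acme.com

# test the full path through the load balancer
curl -v https://acme.com/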

Creating separate server blocks for direct and proxied traffic is more verbose, but it has a few benefits. It avoids the need for conditional blocks down the road, and I find it easier for others to understand. Lastly, separate blocks let you maintain a distinct configuration for testing direct traffic to production nodes before they are added to the load balancer.

Putting It All Together

This example is obviously a summary of the required changes and may vary greatly depending on your AWS setup with respect to subnets, ports, security groups, etc. If you have questions or suggestions, please feel free to get in touch via the appropriate social outlet.