Friday 12/24/21 Cloud Studies update: AWS SSL Offload & Session Stickiness

Adrian Cantrill’s SAA-C02 study course, 60 minutes: HA & Scaling section: SSL Offload & Session Stickiness

SSL Offload & Session Stickiness

In this lesson we looked at two features of the Elastic Load Balancer family of products: SSL offload and session stickiness. The lesson focused on theory and architecture, since implementation detail is less relevant for the SAA exam.

There are three ways that a load balancer can handle secure connections: bridging, pass-through, and offload. Each comes with pros and cons, and a Solutions Architect needs to understand all three.

Bridging mode: The default mode of an application load balancer. With bridging mode, one or more clients make one or more connections to a load balancer, and the load balancer is configured so that its listener uses HTTPS. This means that SSL connections occur between the client and the load balancer, and they’re decrypted – known as ‘terminated’ – on the load balancer itself. The load balancer therefore needs an SSL certificate which matches the domain name that the application uses. It also means that, in theory, AWS does have some level of access to that certificate. That’s important if you have strong security frameworks that you need to stay inside of; if you’re in a situation where you need to be really careful where your certificates are stored, then bridging mode could potentially be a problem.

Once the secure connection from the client has been terminated on the load balancer, the load balancer makes a second connection to the backend compute resources. As a reminder, HTTPS is just HTTP with a secure wrapper, so when the SSL connection comes from the client to the front-facing listener side of the load balancer, it gets terminated, which essentially means that the SSL wrapper is removed and the unencrypted HTTP inside is exposed. The load balancer then has access to the HTTP, which it can understand and use to make decisions. The important thing to understand is that because an application load balancer in bridging mode can actually see the HTTP traffic, it can take actions based on the contents of HTTP, and this is the reason why this is the default mode for the application load balancer. It’s also the reason why the application load balancer requires an SSL certificate: it needs to decrypt any data that’s been encrypted by the client, interpret it, and then create new encrypted sessions between itself and the back end EC2 instances.

This also means that the EC2 instances need matching SSL certificates, certificates which match the domain name that the application is using. The Elastic Load Balancer re-encrypts the HTTP within a secure wrapper and delivers this to the EC2 instances, which use their SSL certificates to decrypt that encrypted connection. So the SSL certificates need to be located on the EC2 instances, and the instances need enough compute to be able to perform those cryptographic operations. In bridging mode, which is the default, every EC2 instance at the back end performs cryptographic operations, and for high-volume applications the overhead of performing these operations can be significant.


– The ELB gets to see the unencrypted HTTP and can take actions based on what’s contained in this plain text protocol


– The certificate does need to be stored on the load balancer itself and that’s a risk

– The EC2 instances also need a copy of that certificate, which has an admin overhead, and they need to be able to perform the cryptographic operations
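As a study aid, the bridging flow can be sketched as a toy model in Python. This is not real AWS code: the `decrypt`/`encrypt` functions are stand-ins for the TLS termination and re-encryption an ALB performs, and the path-based routing rule is a made-up example.

```python
# Toy model of ELB bridging mode (illustrative only, not an AWS API).

def decrypt(tls_record: dict) -> str:
    """Terminate the client's SSL session: unwrap to plain HTTP (toy stand-in)."""
    return tls_record["http"]

def encrypt(http: str) -> dict:
    """Re-encrypt for the second hop; the instance must hold a matching cert."""
    return {"http": http}

def bridge(tls_record: dict, backends: list) -> tuple:
    """Because the ALB sees plain HTTP, it can route on the request path."""
    http = decrypt(tls_record)
    path = http.split(" ")[1]          # e.g. "GET /images/logo.png HTTP/1.1"
    # Path-based routing is only possible because traffic was terminated here.
    target = backends[0] if path.startswith("/images") else backends[1]
    return target, encrypt(http)       # second HTTPS hop to the instance

backend, payload = bridge({"http": "GET /images/logo.png HTTP/1.1"},
                          ["ec2-images", "ec2-app"])
```

The point of the sketch is the middle step: the plain HTTP is visible to the load balancer between the two encrypted hops, which is exactly what pass-through mode avoids.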

SSL Passthrough: This architecture is very different. With this method the client connects, but the load balancer just passes that connection along to one of the back end instances; it doesn’t decrypt it at all. The connection encryption is maintained between the client and the back end instances. The instances still need to have the SSL certificates installed, but the load balancer doesn’t. Specifically, it’s the network load balancer which is able to perform this style of connection architecture. The load balancer is configured to listen using TCP, which means it can see the source and destination IP addresses and ports. So it can make basic decisions about which instance to send traffic to, the process of performing load balancing, but it never touches the encryption. The encrypted connection exists as one encrypted tunnel between the client all the way through to one of the back end instances. Using this method means that AWS never needs to see the certificate that you use; it’s managed and controlled entirely by you. You can even use a CloudHSM appliance.


– You don’t get to do any load balancing based on the HTTP part because that’s never decrypted; it’s never exposed to the network load balancer

– The instances still need to have the certificates and still need to perform the cryptographic operations, which uses compute.
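Pass-through can be sketched the same way. In this toy model the load balancer only ever sees connection-level information (IPs and ports) and forwards the encrypted bytes untouched; the hashing rule is an illustrative stand-in for however an NLB actually picks a flow's target.

```python
# Toy model of NLB SSL pass-through (illustrative only, not an AWS API).
# The load balancer sees only IPs and ports; the payload stays opaque bytes.

def passthrough(src_ip: str, src_port: int,
                encrypted: bytes, backends: list) -> tuple:
    """Pick a backend from connection info alone; never decrypt anything."""
    target = backends[hash((src_ip, src_port)) % len(backends)]
    return target, encrypted           # forwarded byte-for-byte, still encrypted

target, payload = passthrough("203.0.113.9", 50321,
                              b"\x16\x03\x01...", ["ec2-1", "ec2-2"])
```

Because the function never has a decrypt step, there is nothing HTTP-aware it could route on, which mirrors the limitation in the bullet above.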

SSL Offload: With this architecture, clients connect to the load balancer in the same way using HTTPS, and the connections are terminated on the load balancer. It needs an SSL certificate which matches the name that’s used by the application, but the load balancer is configured to connect to the back end instances using HTTP, so the connections are never encrypted again. From a customer perspective, data is encrypted between them and the load balancer: at all times while using the public internet, data is encrypted, but it transits from the load balancer to the EC2 instances in plain text form. It means that while an SSL certificate is required on the load balancer, it’s not needed on the EC2 instances, which only need to handle HTTP traffic. Because of that, they don’t need to perform any cryptographic operations, which reduces the per-instance overhead and potentially means you can use smaller instances.


The downside is that data is in plain text form across AWS’ network.
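The three modes can be summarised side by side. This little table-as-code is just a study reference; the key names are my own shorthand, not AWS terminology.

```python
# Study summary of the three secure-connection architectures.
# Keys are informal shorthand: lb_cert / instance_cert = who needs a certificate,
# lb_sees_http = can the LB route on HTTP content, instance_crypto = do the
# EC2 instances perform cryptographic operations.
MODES = {
    "bridging":     {"lb_cert": True,  "instance_cert": True,
                     "lb_sees_http": True,  "instance_crypto": True},
    "pass-through": {"lb_cert": False, "instance_cert": True,
                     "lb_sees_http": False, "instance_crypto": True},
    "offload":      {"lb_cert": True,  "instance_cert": False,
                     "lb_sees_http": True,  "instance_crypto": False},
}
```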

Session Stickiness:

If there is no session stickiness, then any sessions which Bob or any other user makes are distributed across all of the back end instances based on fair balancing and any health checks. Generally, this means a fairly equal distribution of connections across all back end instances.

The problem with this approach is that if the application doesn’t handle sessions externally, then every time a user lands on a new instance it would be like they’re starting again: they would need to log in again, fill their shopping cart again, etc.

Applications need to be designed to handle state appropriately. An application which uses stateless EC2 instances, where the state is handled in, say, DynamoDB, can use this non-sticky architecture and operate without any problems. If the state is stored on a particular server, then you can’t have sessions being fully load balanced across all of the different servers, because every time a connection moves to a different server it will impact the user experience.

There is an option available within Elastic Load Balancers called session stickiness. Within an application load balancer, this is enabled on a target group. If enabled, the first time a user makes a request, the load balancer generates a cookie called AWSALB. This cookie has a duration which you define when enabling the feature; a valid duration is anywhere between one second and seven days. Every time that user accesses the application, the cookie is provided along with the request, and for that particular cookie, sessions will always be sent to the same back end instance, with all connections going to EC2-2, for instance.

Sessions will keep being sent to the same server until one of two things occurs:

1. If there is a server failure, the user will be moved over to a different EC2 instance.

2. The cookie expires, ending the session stickiness.

As soon as the cookie expires and disappears, the whole process repeats, and the user will receive a new cookie and be allocated a new back end instance. Session stickiness is designed to allow an application to function using a load balancer when the state of the user’s session is stored on an individual server.
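The cookie mechanics above can be sketched as a toy routing function. Real ALBs issue and honour the AWSALB cookie themselves; this is just a model of the behaviour, with the seven-day figure mirroring the maximum duration mentioned above.

```python
import random
import time

# Toy model of ALB session stickiness via an AWSALB-style cookie
# (illustrative only; a real ALB manages this cookie itself).

BACKENDS = ["ec2-1", "ec2-2", "ec2-3"]
DURATION = 7 * 24 * 3600               # maximum allowed stickiness duration

def route(cookies: dict, healthy: list, now: float) -> tuple:
    """Return (backend, cookies). Re-balance only if the cookie is missing,
    expired, or the pinned instance has failed."""
    sticky = cookies.get("AWSALB")
    if sticky and sticky["expires"] > now and sticky["backend"] in healthy:
        return sticky["backend"], cookies                   # same instance again
    backend = random.choice(healthy)                        # fresh allocation
    return backend, {"AWSALB": {"backend": backend, "expires": now + DURATION}}

now = time.time()
b1, cookies = route({}, BACKENDS, now)           # first request: cookie issued
b2, _ = route(cookies, BACKENDS, now + 60)       # later request: same backend
b3, _ = route(cookies, [b for b in BACKENDS if b != b1], now + 60)  # failover
```

The three calls at the end show the whole lifecycle: a cookie is issued, subsequent requests pin to the same instance, and a failure of that instance forces a move, matching the two conditions listed above.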


– It can cause uneven load on back end servers: a single user, even if he or she is causing significant amounts of load, will only ever use one single server.

Where possible, applications should be designed to use stateless servers, holding the session or user state somewhere other than an EC2 instance, for example in DynamoDB. If you host the session externally, the EC2 instances are completely stateless, and load balancing can be performed by the load balancer, without using cookies, in a completely fair and balanced way.
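For contrast, here is the stateless alternative as a toy model: the session lives in a shared external store (a plain dict standing in for a DynamoDB table), so any instance can serve any request and no stickiness is needed.

```python
# Toy model of externalised session state (the dict stands in for DynamoDB).

SESSION_STORE = {}                      # shared store, external to the instances

def handle(session_id: str, backend: str) -> dict:
    """Any backend reads/writes the shared store, so requests can land anywhere."""
    session = SESSION_STORE.setdefault(session_id, {"cart": []})
    session["cart"].append(f"item-from-{backend}")
    return session

handle("bob", "ec2-1")
cart = handle("bob", "ec2-2")["cart"]   # different instance, same session state
```

Because the cart survives the move from one instance to another, the load balancer is free to distribute every request purely on load and health.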

Published by pauldparadis

Working towards cloud networking security as a profession.
