Friday 1/7/22 Cloud Studies Update: AWS Architecture Evolution

Adrian Cantrill’s SAA-C02 study course, 75 minutes: [Advanced Demo] Architecture Evolution stage 6, [Advanced Demo] Architecture Evolution stage 7

[Advanced Demo] Architecture Evolution Stage 6a – Optional .. move to Aurora and DB HA

The focus of this stage of the demo was on migrating from RDS to a highly available Aurora cluster.

To begin this process, we navigated to Services, and then clicked on the RDS console. From here we took a snapshot of the RDS instance and used this to migrate to Aurora. We clicked on Snapshots, then Take snapshot, chose the specific RDS instance we wanted, typed in the name we wanted to use, and then clicked on Take Snapshot. After several minutes, and multiple refreshes, the snapshot was completed.
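
For anyone who would rather script this step than click through the console, a minimal boto3 sketch might look like the following; the instance and snapshot identifiers are hypothetical placeholders, not the names used in the demo:

    import boto3

    rds = boto3.client('rds')

    # Take a manual snapshot of the existing RDS instance (identifiers are placeholders)
    rds.create_db_snapshot(
        DBInstanceIdentifier='wordpress-rds',
        DBSnapshotIdentifier='wordpress-rds-snapshot'
    )

    # Block until the snapshot is available and can be used for the migration
    rds.get_waiter('db_snapshot_available').wait(
        DBSnapshotIdentifier='wordpress-rds-snapshot'
    )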

Once the snapshot was complete, we checked the box next to the snapshot name, clicked on Actions, and then chose Migrate Snapshot. For Migrate to DB Engine we selected Aurora, left the DB engine version as the default, chose the small T class as our instance class, and picked the specific identifier we wanted to use. We then verified that the correct VPC was selected and that the specific subnet group we needed was chosen, selected no preference for the Availability Zone, chose an existing VPC security group by deleting the default and locating the specific SG we wanted in the dropdown, verified that Disable Encryption was selected, and then clicked on Migrate.
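
The console's Migrate Snapshot action roughly corresponds to restoring an Aurora cluster from the snapshot and then adding a writer instance to it. A sketch of that via boto3 follows; every identifier, the engine choice, the subnet group, and the security group ID are assumptions for illustration only:

    import boto3

    rds = boto3.client('rds')

    # Restore an Aurora cluster from the MySQL snapshot (all identifiers are placeholders)
    rds.restore_db_cluster_from_snapshot(
        DBClusterIdentifier='wordpress-aurora',
        SnapshotIdentifier='wordpress-rds-snapshot',
        Engine='aurora-mysql',
        DBSubnetGroupName='wordpress-db-subnet-group',
        VpcSecurityGroupIds=['sg-0123456789abcdef0']
    )

    # The cluster itself has no compute; add the first (writer) instance to it
    rds.create_db_instance(
        DBInstanceIdentifier='wordpress-aurora-writer',
        DBClusterIdentifier='wordpress-aurora',
        DBInstanceClass='db.t3.small',
        Engine='aurora-mysql'
    )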

Once the Aurora cluster was provisioned, we added additional replicas. To do this we selected the Aurora cluster, clicked on Actions, and then Add reader. We were reminded that readers are additional replicas within the Aurora cluster. For DB instance identifier we entered the specific identifier we wanted to use, selected the same DB class as the existing Aurora instance, kept the rest of the defaults the same, and then selected Add Reader at the bottom of the screen.

While the read replica was being created, with the Aurora cluster still selected, we clicked on actions again and then add reader. We followed the same process as before, with the exception of giving this replica a unique identifier. With this, we clicked on Add Reader to create this new read replica. It was explained that all the instances would be located in different availability zones, and this would create a fully resilient cluster.
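
Scripted, adding the two readers is simply two more create_db_instance calls against the same cluster; again, every identifier below is a placeholder:

    import boto3

    rds = boto3.client('rds')

    # Add two reader instances to the cluster, one call per reader; in the demo
    # each one ended up in a different Availability Zone for a fully resilient cluster
    for reader_id in ['wordpress-aurora-reader1', 'wordpress-aurora-reader2']:
        rds.create_db_instance(
            DBInstanceIdentifier=reader_id,
            DBClusterIdentifier='wordpress-aurora',
            DBInstanceClass='db.t3.small',
            Engine='aurora-mysql'
        )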

Because it’s an Aurora cluster, there would be two different standard endpoints. There is the writer endpoint, which would always point at the writer capable replica, which could be used for writes and reads, and then there would be the reader endpoint, which would load balance across all the available reader replicas. Because the website needs to read and write, we copied the writer endpoint into our clipboard.
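
Rather than copying the endpoint from the console, both endpoints could also be read programmatically; a small sketch, assuming a placeholder cluster identifier:

    import boto3

    rds = boto3.client('rds')

    cluster = rds.describe_db_clusters(
        DBClusterIdentifier='wordpress-aurora'
    )['DBClusters'][0]

    # The writer endpoint always points at the writer-capable instance;
    # the reader endpoint load balances across the available readers
    print('Writer endpoint:', cluster['Endpoint'])
    print('Reader endpoint:', cluster['ReaderEndpoint'])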

To move the application over to this, we clicked on Services, moved to Systems Manager, and updated the value in the Parameter Store. To start, we deleted the parameter that currently pointed at the RDS instance DNS name. Then we created a new parameter referencing the DB path as the name. This would be a standard tier parameter, string type, with the Aurora cluster DNS name pasted in the value section. From here we created the parameter. This meant that the Aurora cluster would now be used as the database endpoint for the applications. To update them, the easiest way would be to reprovision the application instances.
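
A rough boto3 equivalent of that Parameter Store change, with the parameter name and endpoint value as made-up placeholders rather than the actual names from the demo:

    import boto3

    ssm = boto3.client('ssm')

    # Replace the parameter that held the RDS endpoint with one pointing at the
    # Aurora writer endpoint (name and value are placeholders)
    ssm.delete_parameter(Name='/wordpress/dbendpoint')
    ssm.put_parameter(
        Name='/wordpress/dbendpoint',
        Description='WordPress database endpoint',
        Type='String',
        Tier='Standard',
        Value='wordpress-aurora.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com'
    )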

To do this, we moved to the EC2 console, clicked on Auto Scaling groups, selected the Auto Scaling group, moved to Instance Refresh, and started an instance refresh. Because we had only one instance, we set the minimum healthy instance percentage to zero, and clicked on Start. This initiated a process that would go through the instances in the Auto Scaling group and reprovision them. It was explained that part of the reprovisioning process was to use the launch template to pull in the latest parameter value for the DB endpoint and all of the authentication information. In effect, this new instance would be pointing at the new Aurora cluster; it would be using the Aurora cluster, rather than the RDS instance. It was explained that this was a simple way to do a rolling refresh of all the instances within an Auto Scaling group and have them pointing at a brand new database instance.
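
The same instance refresh could be started from code; a short sketch assuming a placeholder Auto Scaling group name:

    import boto3

    autoscaling = boto3.client('autoscaling')

    # Start a rolling refresh; with only one instance in the group, the minimum
    # healthy percentage has to be 0 for the refresh to proceed
    response = autoscaling.start_instance_refresh(
        AutoScalingGroupName='wordpress-asg',
        Preferences={'MinHealthyPercentage': 0}
    )
    print('Instance refresh started:', response['InstanceRefreshId'])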

During the refresh, we noted that the instance refresh waited for a warmup period first, and once the warmup was completed, it would wait for each instance to complete at least one health check before continuing. So the refresh created a brand new instance, waited for it to warm up and pass its first health check, and then terminated the existing instance. The end result of this should be one EC2 instance using the new database configuration. It was reiterated that this was an effective way to update a fleet of EC2 instances within an Auto Scaling group.
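
If scripting, the refresh status can be polled until it completes; a sketch along those lines, again with a placeholder group name:

    import time
    import boto3

    autoscaling = boto3.client('autoscaling')

    # Poll the most recent refresh (listed first) until it reaches a terminal state
    while True:
        refresh = autoscaling.describe_instance_refreshes(
            AutoScalingGroupName='wordpress-asg'
        )['InstanceRefreshes'][0]
        print(refresh['Status'], refresh.get('PercentageComplete', 0), '% complete')
        if refresh['Status'] in ('Successful', 'Failed', 'Cancelled'):
            break
        time.sleep(30)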

After verifying that the refresh was successfully completed, we wanted to access the EC2 instances. Because we had provisioned a load balancer, we navigated to Load Balancers, obtained the DNS name for the load balancer, copied that into our clipboard, and then opened it in a new tab; it loaded the same website.
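
The load balancer's DNS name can also be looked up programmatically instead of from the console; a small sketch, assuming a placeholder load balancer name:

    import boto3

    elbv2 = boto3.client('elbv2')

    # Fetch the DNS name of the load balancer (the name is a placeholder)
    lb = elbv2.describe_load_balancers(Names=['wordpress-alb'])['LoadBalancers'][0]
    print('Open this in a browser:', 'http://' + lb['DNSName'])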

At this point it was verified that we were now connecting through a load balancer abstracted away from any one specific EC2 instance, the data was being loaded from a highly available Aurora cluster, and the media was being loaded from the Elastic File System, a shared file system which was used by all of the EC2 instances within the Auto Scaling Group.

At this point we had completed all of the tasks of the demo, and our architecture was highly available: an Aurora cluster with replicas in three Availability Zones, an Elastic File System also spanning three Availability Zones, an Auto Scaling group which can provision instances into three Availability Zones, and a load balancer to provide the abstraction and self-healing capabilities.

It was emphasized that through the demo series we had evolved a simple single server WordPress implementation into what was now a fully resilient, scalable solution, and all that was required was knowledge of the application and AWS products and services. The last step would be to clean up the AWS account, returning it to the state it was in prior to the start of the demo series.

[Advanced Demo] Architecture Evolution Stage 7 – Cleanup

To close out the demo, we looked at tidying up the account and putting things back to the same state as they were when we started this demo lesson. The plan was to go back in reverse order through everything that was implemented.

The first step was to remove the load balancer. For this we moved to the EC2 console, selected load balancers, selected the specific load balancer, clicked on delete, and confirmed the action.

After the load balancer was deleted, we selected target groups, selected the target group we had created for this demo series. We clicked on actions, delete, and then confirmed the delete.

After this, we deleted the Auto Scaling group. We scrolled down, clicked on auto scaling groups, selected the auto scaling group, clicked on actions, delete, and then confirmed the delete.
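
A combined boto3 sketch of these three cleanup steps, deleting the load balancer, target group, and Auto Scaling group in the same order as above; all names below are placeholders:

    import boto3

    elbv2 = boto3.client('elbv2')
    autoscaling = boto3.client('autoscaling')

    # Delete the load balancer first, then the target group it used
    lb = elbv2.describe_load_balancers(Names=['wordpress-alb'])['LoadBalancers'][0]
    elbv2.delete_load_balancer(LoadBalancerArn=lb['LoadBalancerArn'])

    tg = elbv2.describe_target_groups(Names=['wordpress-tg'])['TargetGroups'][0]
    elbv2.delete_target_group(TargetGroupArn=tg['TargetGroupArn'])

    # ForceDelete terminates any instances in the group along with the ASG itself
    autoscaling.delete_auto_scaling_group(
        AutoScalingGroupName='wordpress-asg',
        ForceDelete=True
    )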

Moving on, we clicked on services, typed EFS, and opened that in a new tab, selected the EFS file system, clicked on delete, and confirmed by typing in the file system ID. This worked by first deleting the mount targets and then the file system itself.
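
Scripted, the EFS cleanup follows the same order the console handles internally; a sketch with a placeholder file system ID:

    import time
    import boto3

    efs = boto3.client('efs')
    file_system_id = 'fs-0123456789abcdef0'  # placeholder file system ID

    # Remove every mount target first, then the file system itself
    for mt in efs.describe_mount_targets(FileSystemId=file_system_id)['MountTargets']:
        efs.delete_mount_target(MountTargetId=mt['MountTargetId'])

    # The file system delete fails until all mount targets are gone, so wait for them
    while efs.describe_mount_targets(FileSystemId=file_system_id)['MountTargets']:
        time.sleep(10)

    efs.delete_file_system(FileSystemId=file_system_id)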

Moving on, we clicked on Services and moved to the RDS console, selected Databases, selected the RDS instance, clicked on Actions, clicked Delete, unchecked the create final snapshot box, acknowledged that upon instance deletion automated backups would no longer be available, and confirmed the delete.
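
The equivalent API call for deleting the original RDS instance, with a placeholder identifier:

    import boto3

    rds = boto3.client('rds')

    # Delete the original RDS instance with no final snapshot and no retained backups
    rds.delete_db_instance(
        DBInstanceIdentifier='wordpress-rds',
        SkipFinalSnapshot=True,
        DeleteAutomatedBackups=True
    )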

The next step was to remove the Aurora cluster. For this the first step was to remove all of the replicas. We selected reader2, clicked on actions, then delete, and confirmed the delete. We then repeated the process for reader1, and also with the writer replica.
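
In code, the Aurora teardown is the same sequence: delete each instance, then the now-empty cluster. A sketch with placeholder identifiers:

    import boto3

    rds = boto3.client('rds')

    instance_ids = ['wordpress-aurora-reader2', 'wordpress-aurora-reader1',
                    'wordpress-aurora-writer']

    # Delete the readers and then the writer; Aurora snapshots live at the cluster
    # level, so no per-instance final snapshot applies here
    for instance_id in instance_ids:
        rds.delete_db_instance(DBInstanceIdentifier=instance_id)

    # The cluster can only be removed once every member instance is gone
    waiter = rds.get_waiter('db_instance_deleted')
    for instance_id in instance_ids:
        waiter.wait(DBInstanceIdentifier=instance_id)

    rds.delete_db_cluster(DBClusterIdentifier='wordpress-aurora', SkipFinalSnapshot=True)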

From here we did a review of the prior deletion actions, confirming in the EFS console that the file system was gone and in the EC2 console that the Auto Scaling group had been removed, and then moved on to deleting the launch template.
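
Deleting the launch template is a single call; the template name here is a placeholder:

    import boto3

    ec2 = boto3.client('ec2')

    # Remove the launch template used by the Auto Scaling group
    ec2.delete_launch_template(LaunchTemplateName='wordpress-launch-template')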

From here, we looked at deleting RDS. For this we navigated back to the RDS console, waited for the DB instances to finish deleting, and then worked on fully deleting the infrastructure. After everything DB related had finished, we deleted the last snapshot. Now all the infrastructure was fully deleted, the only step remaining being to delete the CFN stack we had applied at the very beginning.
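
The final two deletions could be scripted as well; the snapshot identifier and stack name below are placeholders:

    import boto3

    rds = boto3.client('rds')
    cloudformation = boto3.client('cloudformation')

    # Remove the manual snapshot taken for the migration, then the original stack
    rds.delete_db_snapshot(DBSnapshotIdentifier='wordpress-rds-snapshot')
    cloudformation.delete_stack(StackName='wordpress-architecture-evolution')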

