Sunday 12/26/21 Cloud Studies Update: AWS Architecture Evolution stage 1: part 1

Adrian Cantrill’s SAA-C02 study course, 60 minutes: HA & Scaling section: [ADVANCED DEMO]: Architecture evolution, stage 1, part 1

[Advanced Demo] Architecture Evolution

This was the first part of a seven-part ‘advanced demo’. The purpose of this extended demo is to start from a single WordPress (WP) instance and evolve it into a scalable and resilient architecture. The single WP instance was deployed running the application itself, the database, and also storing the content for all of the blog posts; the test blog we used contained multiple pictures.

We walked through how to build the server manually, the express purpose of this part of the demo being to experience all of the different components that need to operate together to produce the web application. After this we replicated the process, this time using a launch template to provision the WP application automatically, though still as a single WP instance. This was followed by a database migration, moving the MySQL database off the EC2 instance and onto a dedicated RDS instance, so that the database, the data of the application, would exist outside the lifecycle of the EC2 instance. This was the first step towards a fully elastic, scalable architecture. After migrating the database, instead of storing the content locally on the EC2 instance, we provisioned an Elastic File System (EFS) to provide a network-based, resilient file system, and then migrated all of the content for the WP application from the instance to EFS.

Once we were finished, these would be all the components required to make the architecture fully elastic, meaning able to scale out or in based on the load on the system. The next step was to move away from having customers connect directly to the single EC2 instance: we provisioned an Auto Scaling group, which allows instances to scale out or in as required, and along with this we configured an Application Load Balancer pointing at that Auto Scaling group, so customers would connect in via the load balancer rather than connecting to the instances directly. This abstracts customers away from the instances and allows the system to be fully resilient, self-healing, and elastically scalable. Lastly, we upgraded the single RDS instance to a full three-availability-zone Aurora cluster, providing three-AZ resilience for the application database.

One important thing that was emphasized was not just learning the technical aspects of the implementation, but also learning why each of the modifications was performed, part of the goal being the ability to handle scenario-based questions in job interviews.

To start we navigated to the AWS console, where we logged in as our admin IAM user in us-east-1 (N. Virginia region). After this we used a one-click deployment to create the base infrastructure, and we followed text-based instructions in working through all seven parts of the demo.
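For anyone preferring the CLI over the one-click link, the same base stack could be created with CloudFormation directly. This is a sketch only: the stack name and template URL below are placeholders, not the course’s actual values.

```shell
# Create the base VPC stack from a CloudFormation template.
# Stack name and template URL are hypothetical placeholders.
aws cloudformation create-stack \
  --stack-name A4L-base-vpc \
  --template-url https://example-bucket.s3.amazonaws.com/base-vpc.yaml \
  --capabilities CAPABILITY_IAM \
  --region us-east-1

# Block until the stack (and its three-tier VPC) has finished creating.
aws cloudformation wait stack-create-complete --stack-name A4L-base-vpc
```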

After the CFN stack had finished deploying, we manually created a single-instance WP deployment. The stack had created a base VPC with a three-tier architecture, database, application, and public subnets, split across three different AZs. From here we worked on creating the single EC2 instance, creating it manually in order to gain more hands-on experience of how all the pieces fit together, and also to experience the associated limitations. For this, we navigated to the EC2 console, then launched an Amazon Linux 2 AMI, an HVM SSD volume type using the 64-bit x86 processor architecture. Then we chose whatever free-tier instance type was available so as to minimize costs.

This was followed by the ‘configure instance details’ section. On this screen, we clicked on the network dropdown to verify that the specific VPC we had created was selected, verified that we had the correct subnet selected, and selected the correct IAM role in the IAM role section. This was followed by checking the ‘unlimited’ box in the credit specification section.

Continuing on, we navigated to the ‘add storage’ section and accepted the defaults for storage, an 8 GiB root volume in this case. After this, we navigated to the ‘add tags’ section, setting the key as ‘Name’ and the value as a specific WP value relevant to our current circumstances.

Continuing on, we navigated to the ‘configure security group’ section, selected ‘existing security group’, and checked the box next to the specific security group we needed. Then we reviewed and launched the EC2 instance, choosing to continue without a key pair. We then navigated to the ‘instances’ tab to keep track of the provisioning instance, so we could be ready to proceed once it had finished provisioning.
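The console steps above have a rough CLI equivalent, sketched below. Every ID and name here is a placeholder, not a value from the course; you would substitute your own AMI, subnet, security group, and instance profile.

```shell
# Rough CLI equivalent of the manual console launch:
# - Amazon Linux 2 AMI (HVM SSD, 64-bit x86): placeholder AMI ID
# - free-tier instance type (t2.micro)
# - unlimited credit specification, 8 GiB root volume, Name tag
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type t2.micro \
  --subnet-id subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --iam-instance-profile Name=A4L-wordpress-instance-profile \
  --credit-specification CpuCredits=unlimited \
  --block-device-mappings 'DeviceName=/dev/xvda,Ebs={VolumeSize=8}' \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Wordpress-Manual}]'
```

Note that, as in the console walkthrough, no key pair is specified.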

As stated, the goal of this seven-part demo was to evolve this one single WP EC2 instance into a fully elastically scalable design, so we had one more set of steps to perform at this time. The aim was to start moving away from statically setting configuration options, so we were directed to make use of the Parameter Store, a part of Systems Manager. Here we created parameters that automated build processes would utilize later in the demo; even while continuing with the manual build at this stage, we would use the Parameter Store values, because doing so simplifies what needs to be typed on the EC2 instance.

To do so, we clicked on ‘services’, started typing ‘systems manager’, and then right-clicked on the hyperlink to open it in a new tab. After this, we navigated to the ‘parameter store’ tab and clicked on it to move to the Parameter Store console. From here we worked on creating a number of parameters, clicking on ‘create parameter’ and inputting information from the text instructions contained in the GitHub repository we were given to work with.

The first of these was the database username, the user that would have permissions on the WP database, set at a particular path. We included a description for the username, set the tier to standard to keep things in the free tier, set the type to string and the data type to text, and set the value to the actual DB username.

The next parameter was the database name, with the above-mentioned settings configured in the same manner as the preceding parameter.
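These string parameters can also be created from the CLI. The paths, descriptions, and values below are placeholders, not the course’s exact values.

```shell
# Create the plain string parameters at hypothetical paths.
aws ssm put-parameter \
  --name /A4L/Wordpress/DBUser \
  --description "WordPress database user" \
  --type String \
  --tier Standard \
  --value a4lwordpressuser

aws ssm put-parameter \
  --name /A4L/Wordpress/DBName \
  --description "WordPress database name" \
  --type String \
  --tier Standard \
  --value a4lwordpressdb
```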

After this we created the database endpoint, a host name that WP would connect to. Again we set the same variables in a similar manner.

The next parameter created was one to store the password for the WP user. Again we configured the same settings in a similar manner, the only difference being that we set the type to ‘SecureString’, because this parameter is password related; this also involved setting the KMS key source and KMS key ID. This parameter was completed with the creation of a strong password.

After this we created our last parameter, the WP DB root password, the password for the local DB server running on the EC2 instance. Again, the settings were configured similarly to the preceding, using a SecureString and specifying the KMS key in this case. Parameter creation was finished by inputting a strong password.
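A CLI sketch of the SecureString parameters follows. The path is a placeholder, and `alias/aws/ssm` is the account’s default SSM key; a password is generated locally rather than hard-coded.

```shell
# Generate a strong password locally (24 random bytes, base64-encoded).
DB_PASSWORD="$(openssl rand -base64 24)"

# Store it encrypted with the default SSM KMS key (hypothetical path).
aws ssm put-parameter \
  --name /A4L/Wordpress/DBPassword \
  --description "WordPress database user password" \
  --type SecureString \
  --key-id alias/aws/ssm \
  --tier Standard \
  --value "$DB_PASSWORD"

# Later, a build script can read it back, decrypting in the process.
aws ssm get-parameter \
  --name /A4L/Wordpress/DBPassword \
  --with-decryption \
  --query Parameter.Value \
  --output text
```

Reading values back with `--with-decryption` is what lets later automated builds avoid any statically set secrets on the instance.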

Published by pauldparadis

Working towards cloud networking security as a profession.
