Monday 1/3/22 Cloud Studies update: AWS Architecture Evolution: Stage 4

Adrian Cantrill’s SAA-C02 course, 60 minutes: HA & Scaling section: [ADVANCED DEMO]: Stage 4

[Advanced Demo] Architecture Evolution – Stage 4

Split out the WP filesystem and update the LT

In this section of the demo we focused on the last steps needed to make our architecture truly elastic and scalable. This included migrating the wp-content folder from the EC2 instance onto EFS, the Elastic File System, a shared network file system that can store images and other content in a resilient way, outside the lifecycle of the individual EC2 instances.

To start, we navigated back to the AWS Console, clicked on the services dropdown, typed EFS, opened the EFS console in a new tab, and clicked on ‘Create File System’. We opted to step through the full configuration options, so instead of using the simplified user interface we clicked on the customize option.

The first step was to create the file system itself. We named the file system, left automatic backups enabled, and left lifecycle management at its default, which in this case was 30 days since last access. EFS offers two performance modes: General Purpose and Max I/O. For our demo we chose General Purpose; Max I/O is intended for very specific high-performance scenarios, and for roughly 99% of use cases General Purpose is the right default.

Moving on, we picked the ‘bursting’ option for throughput, which links performance to how much space is consumed: the more space consumed, the higher the throughput. The other option is provisioned, which allows performance to be specified independently of consumption. For this demonstration we opted for bursting. We then unchecked ‘enable encryption of data at rest’; in a production scenario this would be left on, but for this demo, which focuses on architecture evolution, disabling it simplifies the implementation. This completed the file-system-specific options we needed to configure.
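For reference, the demo did all of this through the console, but a roughly equivalent file system could be created from the CLI. The sketch below is only an approximation of the options chosen above, and the Name tag value is a placeholder.

# rough CLI equivalent of the console steps above (sketch only)
aws efs create-file-system \
  --performance-mode generalPurpose \
  --throughput-mode bursting \
  --backup \
  --tags Key=Name,Value=wordpress-content
# encryption at rest was deliberately left off for this demo (the API default);
# the 30-day lifecycle setting can be adjusted afterwards with aws efs put-lifecycle-configuration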

The next step was to configure the EFS mount targets, the network interfaces in the VPC through which our instances would connect. Clicking on the VPC dropdown, we selected the specific VPC that had already been created; this would be the VPC the mount targets would go into.

Each mount target is secured by a security group, and the console pre-selected the VPC’s default security group for each AZ, so the first thing we did was strip that off each mount target. We then chose a specific subnet ID for each AZ (each subnet essentially being a named CIDR block within the VPC), clicked on the security groups dropdown, and selected the same specific security group for each AZ, leaving one security group per AZ. With the mount targets configured, each would automatically be allocated an IP address within its subnet, allowing connections to it.
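Purely as a sketch of what the console did for us here, the mount targets could also be created with one CLI call per AZ; the file system, subnet, and security group IDs below are placeholders.

# placeholder IDs: substitute the real file system ID, app subnet IDs, and security group
FSID=fs-12345678
SG=sg-0123456789abcdef0
for SUBNET in subnet-aaaa1111 subnet-bbbb2222 subnet-cccc3333; do
  aws efs create-mount-target --file-system-id "$FSID" --subnet-id "$SUBNET" --security-groups "$SG"
done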

Moving on, we clicked next, reviewed the options for file system policies (which we opted to skip at this point), clicked next again to reach the review screen, double-checked that there were no mistakes in our configuration, and clicked create.

After the file system status changed from ‘creating’ to ‘available’, we clicked on the file system itself, clicked on the network tab, and scrolled down. At this point we could see the mount targets being created; to configure the EC2 instance we would need all of them to be in the available state. While waiting, we recorded the file system ID of the EFS file system, as we needed it to create another parameter pointing at that ID. The rationale: when scaling things automatically, it is best practice to keep configuration information like this in the Parameter Store rather than hard-coding it.

To complete this step, we opened the Systems Manager console in a new tab, navigated to Parameter Store, and clicked on ‘create parameter’. We named the new parameter with a path that included WP in the name, added a description, set the tier to standard, the type to String, and the data type to text, and pasted the file system ID in as the value. Then we clicked on ‘create parameter’.
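The console was used in the demo, but the equivalent CLI call looks roughly like the sketch below; the parameter name and file system ID shown are placeholders rather than the exact values used in the course.

aws ssm put-parameter \
  --name "/DEMO/Wordpress/EFSFSID" \
  --description "EFS file system ID for the WordPress wp-content data" \
  --type String \
  --data-type text \
  --tier Standard \
  --value fs-12345678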

The next step was to return to the EFS console, hit refresh, and verify that all the mount targets were now in the ‘available’ state. After this, we navigated to the EC2 console in order to configure the EC2 instance to connect to the file system. We went to running instances, located the specific instance we wanted to use, right-clicked, selected connect, and connected via Session Manager, which opened a shell session on the EC2 instance. From here we started working in Bash.
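As an optional alternative to refreshing the console, the same check can be made from the CLI (file system ID is again a placeholder):

aws efs describe-mount-targets \
  --file-system-id fs-12345678 \
  --query 'MountTargets[].{AZ:AvailabilityZoneName,State:LifeCycleState}' \
  --output table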

On a side note, even though EFS is based on NFS, which is a standard, connecting EC2 instances to EFS requires an additional tools package, amazon-efs-utils, which we installed with yum. After this we cd’d to the web root folder and set about moving the wp-content folder somewhere else. To get an idea of what it contained, we ran ls -la inside it and saw plugins, themes, and uploads; inside those folders live the media assets and other content owned by WordPress.
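The commands for this step look roughly like the following, assuming the web root is the Apache default of /var/www/html:

# install the EFS mount helper (Amazon Linux 2)
sudo yum -y install amazon-efs-utils

# move to the web root (assumed path) and inspect wp-content
cd /var/www/html
ls -la wp-content    # shows plugins, themes and uploads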

After investigating those contents, we used sudo mv to move the folder to /tmp, then used sudo mkdir to create a new, empty wp-content folder, which would become the mount point for the EFS file system.
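In shell terms, and still working from the assumed /var/www/html web root, this step is roughly:

# park the existing content in /tmp and recreate an empty folder to act as the mount point
sudo mv wp-content /tmp
sudo mkdir wp-content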

Moving on, we ran a few more commands to set an EFSFSID variable, pasting in the value of the parameter created earlier in the Parameter Store (the EFS file system ID).
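Rather than pasting the ID by hand, it could also be pulled straight from Parameter Store; the parameter name below is a placeholder, and this assumes the instance role is allowed to read it and a default region is configured.

EFSFSID=$(aws ssm get-parameter --name "/DEMO/Wordpress/EFSFSID" --query 'Parameter.Value' --output text)
echo "$EFSFSID"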

After this we discussed /etc/fstab, which lists the file systems mounted on the EC2 instance. Viewing it showed that, in its current state, it contained only an entry for the boot volume. We added an additional line to fstab so that the EC2 instance would mount the EFS file system onto the wp-content folder on every boot. To apply this immediately we ran mount -a -t efs defaults to force the mount, and then ran df -k to show that the EFS file system was now mounted as the wp-content folder.
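Assuming the EFSFSID variable from the previous step and the /var/www/html web root, the added fstab entry and mount commands look roughly like this:

# append the EFS entry to /etc/fstab so it mounts on every boot
echo "$EFSFSID:/ /var/www/html/wp-content efs _netdev,tls 0 0" | sudo tee -a /etc/fstab

# mount everything of type efs listed in fstab, then confirm it is attached
sudo mount -a -t efs defaults
df -k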

The last step was to migrate the data we had moved to the temporary folder back into wp-content, which was achieved with a few more terminal commands, including mv, and we finished this subsection by running chown -R ec2-user:apache /var/www to fix up the permissions. This re-established the ownership and permissions of everything in this part of the file system.
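A minimal sketch of that restore and permissions fix, under the same path assumptions:

# copy the parked content back onto the EFS-backed folder and restore ownership
sudo mv /tmp/wp-content/* /var/www/html/wp-content/
sudo chown -R ec2-user:apache /var/www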

Continuing on, we ran the reboot command to restart the instance, primarily to check that everything was configured correctly on boot: the instance should start, EFS should be mounted, and WordPress should have access to the wp-content folder, now served from a network file system.
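After reconnecting via Session Manager, a couple of quick checks confirm the mount survived the reboot; the httpd service name assumes the Apache setup used in the earlier stages of this demo.

# confirm the EFS mount came back on boot and the web server is serving WordPress
df -k | grep wp-content
sudo systemctl status httpd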

Having verified that everything was working correctly, we were now at a point where both the database and the wp-content data lived away from the EC2 instance. That meant we could scale the EC2 instance without worrying about the data or the media for any of the posts, putting us in a position to further evolve the architecture to be fully elastic.

Before ending this part of the demo, we needed to complete one final step: updating the launch template to include the updated configuration that uses EFS. For this we navigated back to the EC2 console, clicked on launch templates, and selected the specific launch template we wanted to modify. From here we clicked on the Actions dropdown, selected modify template (create new version), and updated the description to ‘app only, uses EFS filesystem defined in’ followed by the parameter path. Because we were creating a new version, everything was pre-populated from the previous template version.

Moving on, we expanded the ‘advanced details’ section at the bottom and edited the user data, pasting in additions that reference the EFS file system ID, install amazon-efs-utils, create the wp-content mount point, and mount EFS into it; we then set this new version of the launch template as the default. Now the database is stored in RDS and the wp-content data is stored in EFS. This solves many of the application’s limitations: the database can be scaled independently of the application, and the media files are stored separately from the instance, allowing us to scale instances out or in freely without risking the media or the database.
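The user data additions look roughly like the sketch below; the parameter name and web root path are placeholders rather than the course’s exact script, and it assumes the instance role can read the parameter and a default region is configured.

# appended to the end of the existing user data script (runs as root at boot) - sketch only
EFSFSID=$(aws ssm get-parameter --name "/DEMO/Wordpress/EFSFSID" --query 'Parameter.Value' --output text)
yum -y install amazon-efs-utils
mkdir -p /var/www/html/wp-content
echo "$EFSFSID:/ /var/www/html/wp-content efs _netdev,tls 0 0" >> /etc/fstab
mount -a -t efs defaults
chown -R ec2-user:apache /var/www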

