Adrian Cantrill’s SAA-C02 study course, 60 minutes: HA & Scaling section: [Advanced Demo] Architecture Evolution Stage 3
[Advanced Demo] Architecture Evolution Stage 3: Split out the DB into RDS and update the LT
The stated aim for stage three was to move the single server architecture towards something more scalable. We focused on migrating the database from the EC2 instance into a separate RDS instance, meaning each could scale independently: we could grow or shrink the database without touching the EC2 instance. It also meant the data in the database would live beyond the life cycle of the EC2 instance, which is required for any design where we want to scale in and out based on load.
To start this process, we opened the RDS console in a new browser tab. We then created a subnet group, which is what allows RDS to select from a range of subnets to put its databases inside. We gave RDS a collection of three subnets, one per AZ, which it could choose to deploy database instances into.
To do this, we clicked the subnet groups tab on the left-hand side of the RDS console, clicked create DB subnet group on the next screen, assigned a name and description, picked our VPC, selected the AZs we would use and the subnets within those AZs, and clicked create. Together this created the DB subnet group RDS would use to select which subnets database instances should go into.
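The same subnet group could also be created with the AWS CLI; a minimal sketch, assuming a hypothetical group name and subnet IDs (the demo used the console). The command is built as a string and printed for review rather than executed:

```shell
# Hypothetical group name and subnet IDs; echo prints the command so it can
# be reviewed before running it against a real account.
CREATE_SUBNET_GROUP="aws rds create-db-subnet-group \
  --db-subnet-group-name wordpress-db-subnet-group \
  --db-subnet-group-description 'Subnets RDS may place WordPress DB instances into' \
  --subnet-ids subnet-aaaa1111 subnet-bbbb2222 subnet-cccc3333"
echo "$CREATE_SUBNET_GROUP"
```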
Continuing on, we created the RDS instance itself, starting with a free tier eligible database. Clicking the databases button, we clicked create database and selected standard create. It was noted here that RDS supports many different database engines; we opted for MySQL and chose a specific version. We then selected free tier in the templates section, provided a name in the DB instance identifier section, and supplied the master username and password from Parameter Store. To access Parameter Store, we opened Systems Manager in a new tab, navigated to Parameter Store, scrolled to the DBUser parameter, and copied its value. Back in the RDS console, we pasted that value in as the master username, then repeated these steps for the master password.
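The console choices above map onto a single CLI call; a sketch with a hypothetical identifier and placeholder credentials (the real values came from Parameter Store, and the demo used the console). The command is only printed here so it can be checked before use:

```shell
# Hypothetical identifier and placeholder credentials; printed, not executed.
CREATE_DB="aws rds create-db-instance \
  --db-instance-identifier wordpress-db \
  --engine mysql \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username placeholderuser \
  --master-user-password placeholderpassword \
  --db-subnet-group-name wordpress-db-subnet-group \
  --no-publicly-accessible"
echo "$CREATE_DB"
```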
Moving on, we selected a free tier eligible DB size, chose our VPC in the connectivity section, verified our subnet group, set 'publicly accessible' to no, and in the VPC security groups section selected the specific security group we wanted to use.
Continuing on, we set our AZ preference, scrolled past database authentication, and opened the additional configuration section at the bottom of the screen. In this section our intention was to set an initial database name, which required us to navigate back to Parameter Store. After this we clicked create database and waited for the database to be created.
Now that the database was created, the next step was to migrate the actual WP data. For this, we navigated back to the EC2 console, selected the specific instance we wanted to use, right clicked on it, and connected via Session Manager. From this instance itself, we would perform the migration.
We started by running a preset list of terminal commands embedded in the lesson's text document. These loaded values from Parameter Store into environment variables within the OS: the DB password, DB root password, DB user, DB name, and DB endpoint.
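The general shape of those commands follows the SSM get-parameter pattern; a sketch with hypothetical parameter paths (the lesson supplied the exact commands). Each command is built as a string and printed for review; on the instance, its output would be captured into an environment variable instead:

```shell
# Hypothetical parameter paths; printed rather than executed. On the instance
# the pattern would be e.g.: DBUser=$(aws ssm get-parameter ... --output text)
GET_DBUSER="aws ssm get-parameter --name /wordpress/DBUser \
  --query Parameter.Value --output text"
GET_DBPASSWORD="aws ssm get-parameter --name /wordpress/DBPassword \
  --with-decryption --query Parameter.Value --output text"
GET_DBENDPOINT="aws ssm get-parameter --name /wordpress/DBEndpoint \
  --query Parameter.Value --output text"
echo "$GET_DBUSER"
echo "$GET_DBPASSWORD"
echo "$GET_DBENDPOINT"
```

Note that only the password lookup uses --with-decryption, since that value would be stored as a SecureString.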
The next step was to export the data from the local MariaDB database instance, using further MySQL commands and directing the output into a .sql file. We then ran ls -la to verify that everything expected in the current directory was actually present.
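An export like this follows the standard mysqldump shape; a sketch with hypothetical names (on the instance, the credentials and database name came from the environment variables loaded earlier). The command is printed for review since it needs the MySQL client and the running local MariaDB:

```shell
# Hypothetical names; printed, not executed. $DBRootPassword and $DBName
# refer to the environment variables populated from Parameter Store.
EXPORT_CMD='mysqldump -h localhost -u root -p"$DBRootPassword" "$DBName" > wordpress.sql'
echo "$EXPORT_CMD"
```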
Continuing on, we updated the DB endpoint parameter in Parameter Store so that it pointed at the new RDS instance. To do this, we navigated back to the RDS console, clicked on the WP instance, and copied its endpoint name to the clipboard. We then navigated back to Parameter Store in Systems Manager and deleted, then recreated, the parameter for the WP DB endpoint. For the creation, we set the name and tier, set the type to string and the data type to text, pasted in the RDS endpoint, and created the parameter.
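The delete-and-recreate step maps onto two CLI calls; a sketch with a hypothetical parameter name and endpoint value (the demo used the console). Both commands are printed for review rather than executed:

```shell
# Hypothetical parameter name and endpoint value; printed, not executed.
DELETE_CMD="aws ssm delete-parameter --name /wordpress/DBEndpoint"
PUT_CMD="aws ssm put-parameter \
  --name /wordpress/DBEndpoint \
  --type String \
  --tier Standard \
  --value wordpress-db.xxxxxxxx.us-east-1.rds.amazonaws.com"
echo "$DELETE_CMD"
echo "$PUT_CMD"
```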
Navigating back to the session manager tab we had opened to the instance, we proceeded to refresh the environment variable with the updated parameter store parameter, using another section of the provided terminal commands.
The next step was to run MySQL commands to load the .sql export into the RDS instance, and then finalize the migration by updating the WP configuration file so that it pointed at RDS instead of the local DB instance. This was achieved using sed to replace localhost with the contents of the DB endpoint environment variable, which now contained the DNS name of the RDS instance. We navigated to the required file path, pasted in the commands, and this reconfigured the WP instance to talk to RDS for its DB functionality. After this we ran a set of commands to stop MariaDB and disable it so that it would no longer start each time the OS boots. MariaDB was now no longer running on the EC2 instance.
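The sed substitution can be demonstrated locally; a sketch using a sample wp-config.php line and a hypothetical RDS endpoint, so the replacement can be run anywhere:

```shell
# Hypothetical endpoint; a sample config line stands in for the real
# wp-config.php so the replacement is runnable outside the instance.
DbEndpoint="wordpress-db.xxxxxxxx.us-east-1.rds.amazonaws.com"
CFG=$(mktemp)
printf "define( 'DB_HOST', 'localhost' );\n" > "$CFG"
# Same pattern as the demo: swap localhost for the RDS endpoint in place.
sed -i "s/localhost/${DbEndpoint}/" "$CFG"
cat "$CFG"
# On the instance, MariaDB was then stopped and disabled:
#   sudo systemctl disable mariadb
#   sudo systemctl stop mariadb
```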
Continuing on, we verified that our instance was still working by navigating back to the EC2 console, selecting our WP instance, copying the public IPv4 address to the clipboard, and opening it in a new tab. It was emphasized that WP was now loading data from the RDS instance. It was also mentioned that, when creating a new blog post, WP stores two different sets of data: the data of the blog post itself, including text, metadata, author, date and time, permissions, published status and many other things, which lives in the database; and any media and content for the blog post, which is still stored locally in a directory called wp-content. That directory remains on the EC2 instance; all we migrated in this stage of the demo is the database itself, from MariaDB through to RDS.
Before finishing this stage of the demo, the last task was to update the launch template so that additional EC2 instances would launch with this new configuration, pointing at the RDS instance. For this, we navigated back to the EC2 console, clicked launch templates, checked the box next to the WP launch template, opened the actions dropdown, and clicked modify template (create new version).
We edited the description, updated the user data (meaning we edited the user data code), and clicked create template version. From this point, instances launched from the new template version would be configured to use RDS.
From here we navigated back to the launch template screen and set the new version as the default used whenever we launch instances from the template. Clicking on the launch template, we changed the version from one to two, opened the actions dropdown, picked Set Default Version, confirmed in the dialog that version two was selected, and clicked Set as Default Version. Version two, the version that used RDS, would now be used by default whenever we launched instances from this template.
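Setting the default version can also be done with the CLI; a sketch assuming a hypothetical launch template ID (the demo used the console). The command is printed for review rather than executed:

```shell
# Hypothetical launch template ID; printed, not executed.
SET_DEFAULT_CMD="aws ec2 modify-launch-template \
  --launch-template-id lt-0abc1234567890def \
  --default-version 2"
echo "$SET_DEFAULT_CMD"
```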
To recap, in this stage we migrated the data for a working WP installation from a local MariaDB database instance through to RDS, which is essential to be able to scale this application, because now the data is outside of the life cycle of the EC2 instance. So for any scale in or out events, the relational or SQL-based data would not be affected. It also meant that we could scale the DB independently of the WP application instances. This helped us reach the desired outcome of a fully elastic architecture.
It was also considered that at this point we had fixed many of the limitations of this design. We still needed to address the application media and WP content, which resided in a folder local to the EC2 instance; we would need to migrate that out so that we could scale the instances in and out without risking the data. On top of this, other limiting factors remained: customers still connect directly to the instance, which we would need to resolve by using a load balancer, and the IP address of the instance is still hard-coded into the DB. So if this EC2 instance were to fail for whatever reason and we provisioned a new one, it wouldn't function, because WP would expect everything to be loaded from the hard-coded IP address.