Adrian Cantrill’s SAA-C02 study course, 50 minutes: [Demo] ‘Implementing EFS’ parts 1 & 2
[DEMO] Implementing EFS Part 1
This demo focused on interacting with EFS on its own to get some hands-on experience working with the product. To start, we verified that we were logged into the N. Virginia region and then clicked on a one-click deployment that created the CFN stack used for the demo. We also had a list of commands for use during the demo.
This created a base VPC and a number of EC2 instances, including 'EFS-A' and 'EFS-B'. The plan was to create an EFS file system and mount targets, then mount the file system on both instances so we could interact with the data stored on it, all for the purpose of getting comfortable with the network shared file system.
To begin the demo we navigated to the EFS console and opened it in a new tab. From there we chose to create a file system, which is the base entity of the EFS product. This brought up a dialog offering 'simple' and 'custom' creation screens.
We started by naming our file system and picking the VPC it would be provisioned into. From there we moved into the 'custom' console, which exposed a number of options, including availability and durability (regional vs. one zone), automatic backups, lifecycle management (transitioning into and out of IA), performance and throughput modes, encryption (KMS), and tags.
Moving on to the next configuration step, we reached the network access screen, which allowed choices around AZs, subnet IDs, IP addresses, and security groups for the mount targets. It was pointed out that mount targets are best placed in the AZs where you're actually consuming the resources provided by EFS.
After this we deleted the default security groups, the intention being to use our own, and it was pointed out that every mount target created has an associated security group. Continuing on, we chose the application subnet in each of the AZs we were working in. For the security group, the one-click deployment had created an 'instance' security group for our use, so we chose that from the security groups dropdown.
After this we moved to the next screen, which showed options for configuring a file system policy, including preventing root access by default, enforcing read-only access by default, preventing anonymous access, and enforcing in-transit encryption for all clients.
After these steps we clicked the create button.
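For reference, the same creation steps can be done from the AWS CLI. The sketch below is a rough equivalent of what the console wizard did, using placeholder file system, subnet, and security group IDs (the real IDs come from the CFN stack):

    # create the file system (general purpose / bursting, encrypted)
    aws efs create-file-system \
        --creation-token efs-demo \
        --performance-mode generalPurpose \
        --throughput-mode bursting \
        --encrypted \
        --tags Key=Name,Value=EFS-Demo

    # create one mount target per AZ, in the application subnets,
    # attaching the 'instance' security group
    aws efs create-mount-target \
        --file-system-id fs-12345678 \
        --subnet-id subnet-0123456789abcdef0 \
        --security-groups sg-0123456789abcdef0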
[DEMO] Implementing EFS Part 2
Now that our EFS infrastructure was configured and deployed, we navigated to the tab where the EC2 console was open and duplicated it in a new tab. We then connected via Instance Connect, using one tab for each of the two EC2 instances provisioned by the CFN stack. After doing so we verified that the EC2 instances were not connected to EFS by running df -k and noting that no EFS file system was listed in the output.
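That check looks roughly like this (the exact volumes listed will vary by instance):

    df -k
    # at this point the output only shows the local/EBS-backed file systems;
    # no nfs4 / EFS mount is present yet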
To begin the process of attaching EFS to the instance, we created a folder for the EFS file system to be mounted into. Following this, we installed the amazon-efs-utils package in order to get the tooling used to work with EFS.
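Roughly, and assuming an Amazon Linux instance and a hypothetical mount point (the demo's actual path may differ):

    sudo mkdir -p /mnt/efs                  # folder the file system will be mounted into
    sudo yum -y install amazon-efs-utils    # installs the EFS mount helper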
Moving on, we set EFS to mount on this EC2 instance every time the instance restarted, which we did by editing the fstab file. We then navigated back to the EFS console to obtain the file system ID, which we pasted in on the Linux command line to specify which EFS file system was to be mounted. After finishing with mounting the file system, we ran df -k again and could see the file system added to the output.
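A sketch of what the fstab entry and mount look like when using the EFS mount helper; fs-12345678 and /mnt/efs are placeholders for the real file system ID and folder:

    # appended to /etc/fstab so the file system mounts on every restart
    fs-12345678:/ /mnt/efs efs _netdev,tls,iam 0 0

    sudo mount /mnt/efs    # mounts the fstab entry for /mnt/efs
    df -k                  # the EFS file system now appears in the output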
We then created a file in EFS to demonstrate that it is a network file system.
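Something along these lines, with a hypothetical file name:

    cd /mnt/efs
    sudo touch shared-test-file.txt    # hypothetical name; the file lives on EFS, not local storage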
Moving on, we navigated to the other EC2 instance, ran df -k to verify that EFS was not mounted there, installed amazon-efs-utils on that instance, created a folder for mounting the file system, edited fstab as before, pasted in the file system ID, etc., and finished the mounting process. After this we cd'd into the folder containing the file system, ran ls -la, and could see the same file we had saved to EFS from the other EC2 instance.
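In sketch form, the steps on the second instance (same placeholder IDs and paths as above):

    sudo yum -y install amazon-efs-utils
    sudo mkdir -p /mnt/efs
    # add the same fs-12345678 entry to /etc/fstab, then:
    sudo mount /mnt/efs
    cd /mnt/efs
    ls -la    # the file created on the first instance is visible here too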