Thursday 11/18/21 AWS/Cloud Update: RDS and CSA CCSK study guide

Adrian Cantrill’s SAA-C02 study guide, RDS section: [DEMO] Multi-AZ and using a snapshot to recover from data corruption part 1, [DEMO] Multi-AZ and using a snapshot to recover from data corruption part 2, RDS data security

CSA CCSK Security Guidance study guide: Cloud Risk Management Trade-offs, Cloud Risk Management Tools

[DEMO] Multi-AZ & Using a snapshot to recover from data corruption part 1:

In this demo we worked with RDS’s multi-AZ mode, created and restored snapshots, and experimented with RDS failover.

The first step was to use a one-click deployment link, which opened a CloudFormation (CFN) quick-create stack page in a new tab. After this we recreated the WordPress blog we have been working with and created a basic blog post.

This was followed by moving to RDS and taking a snapshot of the DB instance created by the one-click deployment: we selected the instance and then chose the snapshot option from the Actions dropdown. This took some time, because the first snapshot of an RDS instance is a full copy, containing all of the data used by the instance. Snapshot time also depends on how busy AWS is on the particular day and on the amount of data contained within the database. Because snapshots are incremental, later snapshots only include the changes made since the last snapshot, so they complete much faster. Also, snapshots taken manually are not managed by AWS and live past the lifecycle of the RDS instance.
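The same manual snapshot can be taken with the AWS CLI. A minimal sketch (the identifiers are hypothetical, and the command is built and printed rather than executed so it runs without live AWS credentials; drop the echo to run it for real):

```shell
# Hypothetical identifiers for the one-click-deployed WordPress database.
db_id="a4l-wordpress-db"
snap_id="a4l-wordpress-manual-snapshot-1"
# Build the CLI command; printed here instead of executed.
cmd="aws rds create-db-snapshot \
  --db-instance-identifier $db_id \
  --db-snapshot-identifier $snap_id"
echo "$cmd"
```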

The next step was to investigate the multi-AZ mode of RDS. Back in the RDS console, in the Databases section, we could see that the database we were currently using was in a single AZ. We expanded the resiliency of the RDS instance by creating a standby replica in another AZ. Note that multi-AZ mode is not included in the Free Tier.

To access multi-AZ mode we clicked on the Modify button on the Databases page. This page allows you to change the DB instance identifier, master password, instance size and type, storage, autoscaling, availability and durability settings, and more.

To invoke a multi-AZ deployment, we navigated to the Availability & durability section and changed the setting from ‘do not create a standby instance’ to ‘create a standby instance’. This provisions a standby replica in another AZ (a different subnet from the subnet group). Note that the standby is not a read replica: it cannot serve reads and exists purely for failover. We then chose to apply the modification immediately, as opposed to during the next maintenance window.
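The console modification above corresponds to a single CLI call. A sketch with a hypothetical identifier (the command is printed, not executed, since it needs live AWS credentials):

```shell
# Convert an existing single-AZ instance to multi-AZ, applying immediately
# rather than waiting for the next maintenance window.
cmd="aws rds modify-db-instance \
  --db-instance-identifier a4l-wordpress-db \
  --multi-az \
  --apply-immediately"
echo "$cmd"
```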

To simulate an outage, we rebooted with failover. This causes the database endpoint’s CNAME record to be updated to point at the standby instance, so clients reconnecting via the endpoint reach the new primary without any configuration change.
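The forced failover can also be triggered from the CLI. A sketch with a hypothetical identifier (printed rather than executed, since it needs a live instance):

```shell
# Reboot the primary and force failover to the standby in the other AZ.
cmd="aws rds reboot-db-instance \
  --db-instance-identifier a4l-wordpress-db \
  --force-failover"
echo "$cmd"
# After failover, resolving the endpoint (e.g. with dig +short) would show
# the CNAME now pointing at the former standby.
```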

[DEMO] Multi-AZ & Using a snapshot to recover from data corruption Part 2

Picking up from part one, refreshing our browser tab showed that the AZ for the RDS instance had changed, caused by the failover during reboot.

Continuing on with the demo, we corrupted some data in the WordPress instance to give ourselves corrupted data to recover in RDS. We achieved this by changing the title of the blog post and then updating the post. We then moved back to the RDS console, clicked on the Snapshots tab, selected our snapshot, and chose the restore option from the dropdown, which achieves the restore by creating a brand new database instance. We selected the engine, entered a new DB instance identifier, and also selected the subnet group for the restore to be created in. We had options for public access along with security group settings for the restored instance. Because the restore is not limited to the Free Tier, we were able to choose the instance type, multi-AZ deployment, authentication, and encryption. There were also several advanced configuration options, but we left them all at their defaults as none of these options are specific to the SAA course. After the restored instance was created, we took note of how the endpoint name differed between the original instance and the restore. This means any application pointing at the database will require its configuration to be updated.
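The snapshot restore maps to a single CLI call. A sketch with hypothetical identifiers (printed rather than executed, since it needs live AWS credentials):

```shell
# Restore always creates a NEW instance; the identifier must be new.
cmd="aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier a4l-wordpress-db-restore \
  --db-snapshot-identifier a4l-wordpress-manual-snapshot-1 \
  --db-subnet-group-name a4l-db-subnet-group \
  --no-multi-az"
echo "$cmd"
```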

After this we moved to the EC2 console to point the WordPress instance at the restored database. We connected to the EC2 instance using Instance Connect and then edited the WordPress configuration file, replacing the database host value with the endpoint of the restored RDS instance. With normal RDS it is not possible to restore in place; a restore always produces a new instance with a new endpoint.
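That configuration edit can be done non-interactively with sed. A runnable sketch against a stand-in wp-config.php (both endpoint names are hypothetical):

```shell
# Create a stand-in wp-config.php containing the original endpoint.
cat > /tmp/wp-config.php <<'EOF'
define( 'DB_HOST', 'a4l-wordpress-db.xxxx.us-east-1.rds.amazonaws.com' );
EOF
# Swap in the restored instance's endpoint.
new_endpoint="a4l-wordpress-db-restore.xxxx.us-east-1.rds.amazonaws.com"
sed -i "s/define( 'DB_HOST', '[^']*' );/define( 'DB_HOST', '$new_endpoint' );/" /tmp/wp-config.php
cat /tmp/wp-config.php
```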

To close out the demo, we looked at the ‘restore to point in time’ option in the Actions dropdown for the original RDS instance. This uses the automatic backup feature and its retention period, and we could choose either the latest restorable time or a custom time that we specify. This was followed by cleaning up the deployment to return our account to its pre-demo state.
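Point-in-time restore also creates a brand new instance. A CLI sketch with hypothetical identifiers (printed, not executed):

```shell
# Restore to the latest restorable time; --restore-time could be used
# instead to pick a specific timestamp within the retention period.
cmd="aws rds restore-db-instance-to-point-in-time \
  --source-db-instance-identifier a4l-wordpress-db \
  --target-db-instance-identifier a4l-wordpress-db-pitr \
  --use-latest-restorable-time"
echo "$cmd"
```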

RDS Data Security:

For security in RDS, we looked at four different things: authentication, authorization, encryption in transit between clients and RDS, and encryption at rest.

All of the different engines in RDS support encryption in transit, which means data between the client and the RDS instance is encrypted via SSL or TLS. This can be made mandatory on a per-user basis.
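For MySQL-family engines, per-user mandatory TLS is a single SQL statement. A sketch with a hypothetical user and endpoint (the command is printed, not run, since it needs a live database):

```shell
# Require TLS for one specific database user.
sql="ALTER USER 'app_user'@'%' REQUIRE SSL;"
echo "mysql -h <db-endpoint> -u admin -p -e \"$sql\""
```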

Encryption at rest is supported in a few different ways, depending on the database engine. By default it’s supported using EBS and KMS encryption, which is handled by the RDS host and the underlying EBS-based storage. As far as the database engine knows, it’s just writing unencrypted data to storage; the data is encrypted by the host that the RDS instance is running on. KMS is used, so you select a customer master key (CMK), either customer managed or AWS managed, and the CMK is used to generate data encryption keys (DEKs), which are used for the actual encryption operations. When using this type of encryption, the storage, the logs, the snapshots, and any replicas are all encrypted using the same CMK, and importantly, encryption cannot be removed once it’s added. These features are supported as standard with RDS.

In addition, Microsoft SQL Server and Oracle support TDE (Transparent Data Encryption), which is handled within the database engine itself: data is encrypted and decrypted by the database, not by the host the instance is running on. This means less trust is required, as data is encrypted the moment it’s written to disk. RDS Oracle also supports integration with CloudHSM, and with this architecture the encryption process is even more secure, with even stronger key controls, because CloudHSM is managed by you with no key exposure to AWS. This means you can implement encryption with no trust chain that involves AWS, which is very valuable for many demanding regulatory situations.

Here is a brief description of the encryption architecture. Say we have a VPC containing a few RDS instances running on a pair of underlying hosts, and these instances use EBS for underlying storage. One instance uses Oracle, so it uses TDE for encryption, with CloudHSM providing key services. Because TDE is native and handled by the database engine, the data is encrypted from the engine all the way through to the storage, with AWS having no exposure to the encryption keys outside of the RDS instance. With KMS-based encryption, KMS generates and allows usage of CMKs, which in turn can be used to generate data encryption keys (DEKs). These DEKs are loaded onto the RDS hosts as needed and used by the host to perform the encryption and decryption operations. This means the database engine doesn’t need to natively support encryption or decryption; it has no encryption awareness. From its perspective, it’s writing data as normal, and the host encrypts it before sending it on to EBS in its final encrypted format. Data that’s transferred between replicas, as with MySQL for example, is also encrypted, as are any snapshots of the RDS EBS volumes, and these use the same encryption key. This is at-rest encryption.
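The CMK/DEK relationship described above can be illustrated with a runnable toy sketch using openssl. This is purely illustrative (local random keys and password-based encryption stand in for KMS; in RDS the host and KMS do all of this for you):

```shell
# Toy envelope-encryption sketch -- NOT how KMS is implemented, just the shape.
cmk=$(openssl rand -hex 32)   # stand-in for the KMS customer master key
dek=$(openssl rand -hex 32)   # data encryption key for the actual data
printf 'sensitive row data' > /tmp/plain.txt
# The data is encrypted with the DEK...
openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$dek" -in /tmp/plain.txt -out /tmp/data.enc
# ...and the DEK itself is wrapped (encrypted) under the CMK.
printf '%s' "$dek" | openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$cmk" -out /tmp/dek.enc
# Decrypt path: unwrap the DEK with the CMK, then decrypt the data with it.
dek2=$(openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$cmk" -in /tmp/dek.enc)
openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$dek2" -in /tmp/data.enc -out /tmp/plain2.txt
cat /tmp/plain2.txt
```

Only the wrapped DEK ever needs to sit alongside the data; whoever controls the CMK controls access to everything encrypted under DEKs it generated.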

Finally, there’s IAM authentication for RDS. Normally, logins to RDS are controlled using local database users, which have their own usernames and passwords; they’re not IAM users and sit outside of IAM’s control. One gets created when you provision an RDS instance, but that’s it. You can, however, configure RDS to allow IAM authentication against a database. We start with an RDS instance on which we create a local database user account, configured to allow authentication using an AWS authentication token. We then have IAM users and roles (in this case, an instance role), and attached to those users and roles are policies containing a mapping between the IAM entity and a local RDS database user. This allows those identities to run a generate-db-auth-token operation, which works with RDS and IAM and, based on the attached policies, generates a token with a 15-minute validity. This token can then be used to log in as that database user within RDS without requiring a password. So by associating a policy with an IAM user or an IAM role, either of those identities can generate an authentication token, which is used to log in to RDS instead of a password. An important thing to understand is that this is only authentication, not authorization: the permissions over the RDS database inside the instance are still controlled by the permissions on the local database user, so authorization is still handled internally. This process is only for authentication, which involves IAM, and only if you specifically enable it on the RDS instance.
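Token generation is exposed directly in the CLI. A sketch with hypothetical hostname and username (printed rather than executed, since it needs AWS credentials and a real endpoint):

```shell
# Generate a short-lived (15 minute) auth token; the output is then passed
# as the password when connecting to the database.
cmd="aws rds generate-db-auth-token \
  --hostname a4l-wordpress-db.xxxx.us-east-1.rds.amazonaws.com \
  --port 3306 \
  --username iam_db_user \
  --region us-east-1"
echo "$cmd"
```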

CSA CCSK Security Guidance study guide: Cloud Risk Management Trade-offs

There are advantages and disadvantages to managing enterprise risk for cloud deployments. These trade-offs are, as you would expect, more pronounced for public cloud and hosted private cloud:

  • less physical control over assets
  • greater reliance on contracts, audits, and assessments
  • increased requirement for proactive management
  • cloud customers have a reduced need to manage the risks the CSP accepts under the shared responsibility model

Cloud Risk Management Tools:

The following options form the foundation of managing risk in cloud computing deployments: mitigate, transfer, accept, or avoid risk.

Everything starts with a proper assessment:

  • request/acquire documentation
  • review security program and documentation
  • review legal, regulatory, contractual, jurisdictional requirements
  • evaluate contracted services
  • separately evaluate overall provider
  • periodically review audits and assessments

Published by pauldparadis

Working towards cloud networking security as a profession.
