Friday 10/29/21 AWS/Cloud Study Update

Adrian Cantrill’s SAA-C02 study course, learn.cantrill.io, 120 minutes:

Advanced EC2 section: ‘EC2 Placement groups pt. 1’, ‘EC2 Placement groups pt. 2’, ‘Dedicated Hosts’, ‘Enhanced Networking and EBS Optimized’

Today I learned about EC2 placement groups, dedicated hosts, enhanced networking and EBS optimized.

Normally when launching an EC2 instance, its placement inside AWS data centers is selected by AWS, which chooses the EC2 host that makes the most sense within the availability zone it's launched into. Placement groups allow you to influence the placement of instances, either physically close together or deliberately separated. There are currently three types of placement groups, and all influence how instances are arranged on physical hardware. The three types are cluster, spread, and partition: cluster packs instances close together, spread keeps instances separated, while partition creates groups of instances that are spread apart.
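As a quick illustration (not from the course; the group names and region are my own placeholders), here's a minimal boto3 sketch of how each of the three strategies is requested when creating a placement group:

import boto3

# Hypothetical example: group names and region are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Each Strategy value maps directly to one of the three placement group types.
ec2.create_placement_group(GroupName="my-cluster-pg", Strategy="cluster")
ec2.create_placement_group(GroupName="my-spread-pg", Strategy="spread")
ec2.create_placement_group(GroupName="my-partition-pg", Strategy="partition",
                           PartitionCount=7)  # partition count only applies to the partition strategy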

CLUSTER PLACEMENT GROUPS: These offer the absolute highest performance level within EC2. After creating the group, launch all instances in the group at the same time; this locks the instances into the AZ everything is launched into and helps prevent issues that may arise from capacity limitations in an AZ. The instances all run on the same physical infrastructure. All instances have direct bandwidth to all other instances, single-stream transfer rates of 10 Gbps, the lowest levels of latency, and the maximum packet transfer rate possible within AWS. This configuration offers the highest performance, but because of the physical proximity, a hardware failure can take down all instances, so Cluster Placement Groups offer little to no resilience. Cluster Placement Groups cannot span AZs, but can span VPC peers, and require a supported instance type. Best practice is to use the same instance type for all members.
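A hedged sketch of the "launch everything at once" advice, continuing the placeholder names above (the AMI ID and instance type are assumptions): a single run_instances call puts the whole fleet into the cluster group in one shot.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder AMI and instance type; launching the whole fleet in one call
# avoids capacity problems splitting the group across launches.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",       # an instance type that supports cluster networking
    MinCount=6,
    MaxCount=6,
    Placement={"GroupName": "my-cluster-pg"},
)
print([i["InstanceId"] for i in resp["Instances"]])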

SPREAD PLACEMENT GROUPS: The focus here is on resilience and availability. Spread Placement Groups can span multiple AZs. Instances in a Spread Placement Group are located on separate, isolated infrastructure racks, each with its own isolated network and power supply. This placement group provides infrastructure isolation: each instance runs from a different rack. There is a hard limit of 7 instances per AZ within the Spread Placement Group, and no support for dedicated instances or hosts. Spread Placement Groups are ideal for a small number of critical instances that need to be kept separated from each other, all handled natively by AWS.
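Sticking with the placeholder setup, a spread group spanning AZs could be sketched like this, launching one critical instance per AZ by pointing run_instances at subnets in different AZs (the subnet IDs are assumptions):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical subnet IDs, one per AZ; remember the hard limit of 7 instances per AZ.
subnets_by_az = ["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"]

for subnet_id in subnets_by_az:
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="m5.large",
        MinCount=1,
        MaxCount=1,
        SubnetId=subnet_id,
        Placement={"GroupName": "my-spread-pg"},  # same spread group across AZs
    )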

PARTITION PLACEMENT GROUPS: This placement group is best when there are more than 7 instances per AZ but you still want to separate instances into distinct fault domains. Partition Placement Groups can be created across multiple AZs. You specify the number of partitions to be created inside the group, with a maximum of 7 per AZ; each partition has its own rack, and you can launch as many instances as desired into each partition. You can manually choose which partition an instance resides in or have AWS decide. Partition Placement Groups are designed for huge-scale systems. They offer visibility into partitions and can share that information with 'topology-aware' applications like HDFS, HBase, and Cassandra. Partition Placement Groups are not supported on dedicated hosts.
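Continuing the same placeholder setup, a sketch of pinning an instance to a specific partition versus letting AWS pick:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Explicitly place this instance in partition 3 of the group created earlier.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="i3.large",
    MinCount=1,
    MaxCount=1,
    Placement={"GroupName": "my-partition-pg", "PartitionNumber": 3},
)

# Omitting PartitionNumber lets AWS choose a partition automatically.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="i3.large",
    MinCount=1,
    MaxCount=1,
    Placement={"GroupName": "my-partition-pg"},
)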

DEDICATED HOSTS: Dedicated hosts are EC2 hosts dedicated to your use, designed for a specific family of instances. There are no charges for the instances running on the host; you pay for the host itself. Pricing options include on-demand and reservations (1 or 3 year). Dedicated hosts expose the physical sockets and cores of the hardware. They are designed for a specific family and size of instance; most dedicated hosts run one specific size at a time, which must be set in advance. RHEL, SUSE, and Windows AMIs are not supported on dedicated hosts, Amazon RDS is not supported, and placement groups are also not supported. Dedicated hosts can be shared with other accounts in your organization using RAM (Resource Access Manager). Those accounts can create instances on the shared dedicated host, but can only see the instances they create on that host. The host-owning account can see all instances on the host but cannot control the instances created by other accounts.
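A minimal sketch (placeholder AZ, instance type, and AMI) of allocating a dedicated host and then launching an instance onto that specific host:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate one dedicated host sized for a specific instance type in a specific AZ.
alloc = ec2.allocate_hosts(
    AvailabilityZone="us-east-1a",
    InstanceType="m5.large",     # the host is tied to this instance size
    Quantity=1,
)
host_id = alloc["HostIds"][0]

# Launch an instance with host tenancy, targeted at the allocated host.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},
)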

ENHANCED NETWORKING & EBS OPTIMIZED: Both provide massive performance benefits and support other enhanced features.

ENHANCED NETWORKING: This is designed to improve overall networking performance, and it is required for high-end networking features like cluster placement groups. Enhanced Networking uses SR-IOV, which makes the NIC virtualization-aware. The host offers logical cards, multiple per physical card, and each instance is given exclusive access to one logical card. Benefits include higher I/O, lower host CPU usage, more bandwidth, higher packets per second (PPS) thanks to the 1-to-1 instance-to-logical-card configuration, and consistently low latency.
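As a rough illustration (the instance ID is a placeholder), you could check whether ENA-based enhanced networking is flagged on an instance, and enable it while the instance is stopped:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"   # placeholder instance ID

# Check whether the ENA enhanced-networking attribute is enabled.
attr = ec2.describe_instance_attribute(InstanceId=instance_id, Attribute="enaSupport")
print("ENA support:", attr.get("EnaSupport", {}).get("Value"))

# Enable ENA support (the instance must be stopped for the change to apply).
ec2.modify_instance_attribute(InstanceId=instance_id, EnaSupport={"Value": True})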

EBS OPTIMIZED INSTANCES: EBS is block storage accessed over the network. Historically, normal data traffic and EBS traffic shared the same network. EBS optimized means there is dedicated network capacity for EBS. Most modern instance types support it and have it enabled by default, and it enables higher IOPS and throughput.
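In the same hedged spirit, checking and enabling the EBS-optimized attribute on an instance might look like this (placeholder instance ID again):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"   # placeholder instance ID

# Inspect the ebsOptimized attribute on an existing instance.
attr = ec2.describe_instance_attribute(InstanceId=instance_id, Attribute="ebsOptimized")
print("EBS optimized:", attr["EbsOptimized"]["Value"])

# Turn it on; newer instance types are typically EBS optimized by default.
ec2.modify_instance_attribute(InstanceId=instance_id, EbsOptimized={"Value": True})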

