AWS Cloud Fundamentals
AWS CLI Installation
AWS Account Creation
- Root login: When you first create an AWS account, you’ll have a root user login linked to your email address. The root user has full control of the account! Guard this login carefully and use it only for tasks that require it.
- IAM users: AWS has a system called Identity and Access Management (IAM), which allows you to set up users and roles under your account that have exactly the permissions they need to do certain work. We’ll cover IAM in detail later; you’ll use the system a little bit here, to create a non-root admin user for your account.
- MFA: We (and AWS) recommend setting up multi-factor authentication (MFA), at a minimum, for the root user of your AWS account.
AWS Login
```shell
aws configure   # store long-lived credentials (access key, secret key, default region)
# or, if your organization uses IAM Identity Center:
aws sso login
```
VPC
What is VPC?
A Virtual Private Cloud (VPC) is a virtual network dedicated to your AWS account. It is a private network that is isolated from the public internet. You can use it to launch AWS resources in a secure and isolated environment.
VPC Components
| Component | Description |
|---|---|
| VPC | The virtual network that contains the subnets and other resources. |
| Subnet | A range of IP addresses in your VPC. You can launch AWS resources into a subnet. |
| Internet Gateway | A gateway that allows communication between your VPC and the internet. |
| Route Table | A collection of routes that define how network traffic is routed between subnets and the internet. |
| Security Group | A virtual firewall that controls inbound and outbound traffic to your AWS resources. |
| Network ACL | A virtual firewall that controls inbound and outbound traffic at the subnet level. |
| NAT Gateway | A gateway that allows instances in a private subnet to connect to the internet. |
| VPC Peering | A connection between two VPCs that allows them to communicate with each other. |
VPC CIDR Block
When you create a VPC, you must specify a CIDR block for the VPC. The CIDR block is a range of IP addresses that the VPC can use. The CIDR block must be between /16 and /28. For example, if you specify a CIDR block of 10.0.0.0/16, you can use up to 65,536 IP addresses in your VPC.
Classless Inter-Domain Routing (CIDR) Blocks
The private (RFC 1918) IPv4 ranges are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. Addresses in these ranges are never routed on the public internet; everything outside them is public.
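A quick sketch of the CIDR arithmetic using Python's standard-library `ipaddress` module: how many addresses a /16 VPC block contains, and which addresses count as private.

```python
import ipaddress

# A /16 CIDR block contains 2^(32-16) = 65,536 addresses.
vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)  # 65536

# The stdlib knows the RFC 1918 private ranges:
for addr in ["10.1.2.3", "172.16.0.1", "192.168.1.1", "8.8.8.8"]:
    print(addr, ipaddress.ip_address(addr).is_private)
```

The last line prints `8.8.8.8 False`: only the three RFC 1918 ranges (plus a few special-purpose blocks) are treated as private.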
VPC Subnets
A subnet is a range of IP addresses in your VPC. You can launch AWS resources into a subnet. A subnet can be public or private. Public subnets are accessible from the internet, while private subnets are only accessible from other resources in the same VPC.
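Carving a VPC's CIDR block into subnets is plain prefix arithmetic, which the stdlib `ipaddress` module can demonstrate (the 5-reserved-addresses figure is AWS's per-subnet reservation):

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")

# Split the VPC range into /24 subnets (256 addresses each).
subnets = list(vpc.subnets(new_prefix=24))
print(len(subnets))   # 256
print(subnets[0])     # 10.0.0.0/24

# AWS reserves 5 addresses in every subnet, so a /24 has 251 usable.
print(subnets[0].num_addresses - 5)  # 251
```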
Create Subnet
```shell
aws ec2 create-subnet \
  --vpc-id <vpc-id> \
  --cidr-block <CIDR> \
  --availability-zone us-east-1a \
  --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=<subnet-name>}]'
```
Check Subnets
```shell
aws sts get-caller-identity   # check the current AWS account and user information
aws ec2 describe-subnets --no-cli-pager --output json \
  --filters 'Name=tag:Name,Values=xxx-private-a'   # check the CIDR block of a specific subnet
```
VPC Internet Gateway
An Internet Gateway allows communication between your VPC and the internet. It is required whenever resources in the VPC need to reach, or be reached from, the public internet.
Create Internet Gateway
```shell
aws ec2 create-internet-gateway --tag-specifications 'ResourceType=internet-gateway,Tags=[{Key=Name,Value=<ig-name>}]'
```
Attach Internet Gateway to VPC
```shell
aws ec2 attach-internet-gateway --internet-gateway-id <igw-id> --vpc-id <vpc-id>
```
Check Internet Gateway
```shell
aws ec2 describe-internet-gateways --no-cli-pager --output json \
  --filters 'Name=tag:Name,Values=xxx-ig'   # check the ID of a specific Internet Gateway
```
VPC Route Table
A route table is a collection of routes that define how network traffic is routed between subnets and the internet. Each subnet in your VPC must be associated with a route table. The route table controls the routing for the subnet.
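Route selection follows longest-prefix match: the most specific route that covers the destination wins. A minimal sketch of that rule, with a hypothetical two-route table (the `local` and `igw-123` targets are made-up names for illustration):

```python
import ipaddress

# Hypothetical route table: destination CIDR -> target.
routes = {
    "10.0.0.0/16": "local",    # traffic staying inside the VPC
    "0.0.0.0/0": "igw-123",    # everything else goes to the internet gateway
}

def lookup(dest: str) -> str:
    """Pick the most specific (longest-prefix) matching route."""
    ip = ipaddress.ip_address(dest)
    matches = []
    for cidr, target in routes.items():
        net = ipaddress.ip_network(cidr)
        if ip in net:
            matches.append((net, target))
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.0.4.20"))      # local  (both routes match; /16 is more specific)
print(lookup("93.184.216.34"))  # igw-123
```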
Create Route Table
```shell
aws ec2 create-route-table --vpc-id <vpc-id> --tag-specifications 'ResourceType=route-table,Tags=[{Key=Name,Value=<rt-name>}]'
```
Associate Route Table with Subnet
```shell
aws ec2 associate-route-table --route-table-id <rt-id> --subnet-id <subnet-id>
```
Add a route to the route table
```shell
aws ec2 create-route --route-table-id <rt-id> --destination-cidr-block 0.0.0.0/0 --gateway-id <igw-id>
```
Check Route Table
```shell
aws ec2 describe-route-tables --no-cli-pager --output json \
  --filters 'Name=tag:Name,Values=xxx-rt'   # check the ID of a specific route table
```
VPC Security Group
A security group is a virtual firewall that controls inbound and outbound traffic to your AWS resources. You can create security groups and specify the rules that allow or deny traffic to your resources.
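Security groups are allow-only and stateful: traffic is permitted if any rule matches, and there are no deny rules. A toy sketch of that evaluation (the rules here are hypothetical examples mirroring the shapes the CLI uses):

```python
import ipaddress

# Hypothetical ingress rules: allow-only, any match wins.
rules = [
    {"protocol": "tcp", "from_port": 22, "to_port": 22, "cidr": "203.0.113.0/24"},
    {"protocol": "tcp", "from_port": 8080, "to_port": 8080, "cidr": "0.0.0.0/0"},
]

def allowed(protocol: str, port: int, source_ip: str) -> bool:
    """Security groups have no deny rules: traffic passes if any rule matches."""
    src = ipaddress.ip_address(source_ip)
    return any(
        r["protocol"] == protocol
        and r["from_port"] <= port <= r["to_port"]
        and src in ipaddress.ip_network(r["cidr"])
        for r in rules
    )

print(allowed("tcp", 22, "203.0.113.10"))  # True: matches the SSH rule
print(allowed("tcp", 22, "198.51.100.1"))  # False: no rule matches, implicit deny
```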
Create Security Group
```shell
aws ec2 create-security-group --group-name <sg-name> --description "<sg-description>" --vpc-id <vpc-id> --tag-specifications 'ResourceType=security-group,Tags=[{Key=Name,Value=<sg-name>}]'
```
Authorize Security Group
```shell
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol <protocol> --port <port> --cidr <cidr>
```
Check Security Group
```shell
aws ec2 describe-security-groups --no-cli-pager --output json \
  --filters 'Name=tag:Name,Values=xxx-sg'   # check the ID of a specific security group
```
VPC Network ACL
A network ACL is a virtual firewall that controls inbound and outbound traffic at the subnet level. You can create network ACLs and specify the rules that allow or deny traffic to your subnets.
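Unlike security groups, network ACLs are ordered and can deny: entries are evaluated in rule-number order, the first match wins, and anything unmatched hits the implicit final deny. A small sketch of that semantics with made-up entries:

```python
import ipaddress

# Hypothetical NACL entries: evaluated in rule-number order, first match wins,
# and (unlike security groups) they can explicitly deny.
entries = [
    {"rule": 100, "action": "allow", "port": 443, "cidr": "0.0.0.0/0"},
    {"rule": 200, "action": "deny", "port": 22, "cidr": "0.0.0.0/0"},
]

def evaluate(port: int, source_ip: str) -> str:
    src = ipaddress.ip_address(source_ip)
    for e in sorted(entries, key=lambda e: e["rule"]):
        if e["port"] == port and src in ipaddress.ip_network(e["cidr"]):
            return e["action"]
    return "deny"  # the implicit '*' rule denies everything unmatched

print(evaluate(443, "8.8.8.8"))  # allow
print(evaluate(22, "8.8.8.8"))   # deny (explicit rule 200)
print(evaluate(80, "8.8.8.8"))   # deny (falls through to the implicit rule)
```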
Create Network ACL
```shell
aws ec2 create-network-acl --vpc-id <vpc-id> --tag-specifications 'ResourceType=network-acl,Tags=[{Key=Name,Value=<nacl-name>}]'
# associate it with a subnet by replacing the subnet's current association:
aws ec2 replace-network-acl-association --association-id <association-id> --network-acl-id <nacl-id>
```
Add Network ACL Entry
```shell
aws ec2 create-network-acl-entry --network-acl-id <nacl-id> --ingress --rule-number <rule-number> --protocol <protocol> --port-range From=<port>,To=<port> --cidr-block <cidr> --rule-action allow
```
Check Network ACL
```shell
aws ec2 describe-network-acls --no-cli-pager --output json \
  --filters 'Name=tag:Name,Values=xxx-nacl'   # check the ID of a specific network ACL
```
VPC NAT Gateway
A NAT gateway lets instances in a private subnet initiate connections to the internet while remaining unreachable from outside. A public NAT gateway lives in a public subnet and needs an Elastic IP address.
Create NAT Gateway
```shell
aws ec2 create-nat-gateway --subnet-id <public-subnet-id> --allocation-id <eip-allocation-id> --tag-specifications 'ResourceType=natgateway,Tags=[{Key=Name,Value=<ng-name>}]'
```
Check NAT Gateway
```shell
aws ec2 describe-nat-gateways --no-cli-pager --output json \
  --filter 'Name=tag:Name,Values=xxx-ng'   # check the ID of a specific NAT gateway
```
EC2
Prerequisites
EC2 Instance Configuration: Keys
Generate a key pair for SSH
```shell
ssh-keygen -t ed25519 -C "xxx" -f ~/.ssh/xxx
```
Upload the public key to AWS
```shell
aws ec2 import-key-pair --key-name <key-name> --public-key-material fileb://~/.ssh/xxx.pub
```
Check the key pair
```shell
aws ec2 describe-key-pairs
```
Create EC2 Instance
```shell
aws ec2 run-instances --image-id AMI-ID --instance-type TYPE --key-name KEY-NAME --subnet-id SUBNET-ID --security-group-ids SG-ID --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=INSTANCE-NAME}]'
```
Check EC2 Instance
```shell
aws ec2 describe-instances --no-cli-pager --output json \
  --filters 'Name=tag:Name,Values=INSTANCE-NAME'   # check the ID of a specific EC2 instance
```
Terminate EC2 Instance
```shell
aws ec2 terminate-instances --instance-ids OLD-INSTANCE-ID
```
Elastic IP Address
An Elastic IP address is a public IP address that you can associate with your instance to enable communication with the internet. You can use Elastic IP addresses to connect to your instance from outside the VPC. No matter how many times you stop and start your instance, the Elastic IP address will remain the same. This is useful for applications that require a static IP address.
Allocate Elastic IP Address
```shell
aws ec2 allocate-address --domain vpc
```
Associate Elastic IP Address
```shell
aws ec2 associate-address --instance-id INSTANCE-ID --allocation-id ALLOCATION-ID
```
Disassociate Elastic IP Address
```shell
aws ec2 disassociate-address --association-id ASSOCIATION-ID
```
Release Elastic IP Address
```shell
aws ec2 release-address --allocation-id ALLOCATION-ID
```
Check Elastic IP Address
```shell
aws ec2 describe-addresses --no-cli-pager --filters 'Name=domain,Values=vpc'   # check the Elastic IP addresses allocated for the VPC
```
Security Group
A security group is a virtual firewall that controls inbound and outbound traffic to your AWS resources. You can create security groups and specify the rules that allow or deny traffic to your resources.
Create Security Group
```shell
aws ec2 create-security-group --group-name SG-NAME --description "SG-DESCRIPTION" --vpc-id VPC-ID --tag-specifications 'ResourceType=security-group,Tags=[{Key=Name,Value=SG-NAME}]'
```
Add an inbound rule
```shell
aws ec2 authorize-security-group-ingress --group-id SG-ID --ip-permissions 'IpProtocol=tcp,FromPort=22,ToPort=22,IpRanges=[{CidrIp=YOUR-IP/32,Description=DESCRIPTION}]'
aws ec2 authorize-security-group-ingress --group-id SG-ID --ip-permissions 'IpProtocol=tcp,FromPort=8080,ToPort=8080,IpRanges=[{CidrIp=0.0.0.0/0,Description="Allow web traffic"}]'
```
Attach to an instance
```shell
aws ec2 modify-instance-attribute --instance-id INSTANCE-ID --groups SG-ID
```
Authorize Security Group
```shell
aws ec2 authorize-security-group-ingress --group-id SG-ID --protocol PROTOCOL --port PORT --cidr CIDR
```
Check Security Group
```shell
aws ec2 describe-security-groups --no-cli-pager --output json \
  --filters 'Name=tag:Name,Values=SG-NAME'   # check the ID of a specific security group
```
SSH into EC2 Instance
```shell
ssh -i KEY-NAME.pem ec2-user@PUBLIC-IP
```
Configure the ~/.ssh/config file for easier access
```
Host ec2-instance
    HostName PUBLIC-IP
    User ec2-user
    IdentityFile KEY-NAME.pem
```
Then you can simply run `ssh ec2-instance` to connect to your EC2 instance.
Remove an old host key from known_hosts (e.g. after replacing an instance)
```shell
ssh-keygen -R PUBLIC-IP
```
Update Packages
```shell
sudo dnf upgrade
```
Create AMI
```shell
aws ec2 create-image --name AMI-NAME --instance-id INSTANCE-ID
aws ec2 describe-images --no-cli-pager --owners self --output json \
  --filters 'Name=name,Values=AMI-NAME'   # check the ID of a specific AMI
```
Auto Scaling Group
An Auto Scaling group is a collection of EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management. You can use Auto Scaling groups to ensure that you have the right number of EC2 instances available to handle the load for your application.
Create Auto Scaling Group
```shell
aws autoscaling create-auto-scaling-group --auto-scaling-group-name ASG-NAME --launch-template LaunchTemplateName=LT-NAME --min-size MIN-SIZE --max-size MAX-SIZE --vpc-zone-identifier "SUBNET-ID-1,SUBNET-ID-2"
```
Delete Auto Scaling Group
```shell
aws autoscaling delete-auto-scaling-group --auto-scaling-group-name ASG-NAME
```
Auto Scaling Launch Template
A launch template is a blueprint for an EC2 instance that defines the configuration settings for the instance, such as the instance type, security groups, and tags. You can use launch templates to create multiple instances with the same configuration.
Create Launch Template
```shell
# note: launch templates are managed under the ec2 namespace, and UserData must be base64-encoded
aws ec2 create-launch-template --launch-template-name LT-NAME --version-description LT-DESCRIPTION --launch-template-data '{"ImageId":"AMI-ID","InstanceType":"INSTANCE-TYPE","KeyName":"KEY-NAME","SecurityGroupIds":["SG-ID"],"UserData":"BASE64-USER-DATA"}'
```
Create an EC2 instance using the launch template
```shell
aws ec2 run-instances --launch-template LaunchTemplateName=xxx-web-launcher --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=xxx-web-v2}]'
```
Re-associate the Elastic IP
```shell
aws ec2 associate-address --instance-id NEW-INSTANCE-ID --allocation-id EIP-ALLOCATION-ID
```
Delete Launch Template
```shell
aws ec2 delete-launch-template --launch-template-name LT-NAME
```
Auto Scaling Lifecycle Hook
A lifecycle hook pauses instances as the Auto Scaling group launches or terminates them, so you can run custom actions (and optionally receive an Amazon SNS or EventBridge notification) before the instance moves to the next state.
Amazon RDS (Relational Database Service)
Amazon RDS is a web service that makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-effective, resizable capacity for an application, without the need for the application team to perform the administrative tasks associated with setting up, operating, and scaling a database server.
```shell
aws ec2 describe-vpcs --no-cli-pager   # check your VPC
```
Create RDS Instance
```shell
# create a security group for the database
aws ec2 create-security-group --group-name RDS-SG-NAME --description "RDS security group" --vpc-id VPC-ID
# then create the instance itself
aws rds create-db-instance --db-instance-identifier DB-NAME --db-instance-class DB-CLASS --engine postgres --master-username USERNAME --master-user-password PASSWORD --allocated-storage 20 --vpc-security-group-ids RDS-SG-ID
```
Add security group rules
```shell
# add an outbound rule to the app server security group (assuming you want a specific rule)
aws ec2 authorize-security-group-egress --group-id APP-SG-ID --ip-permissions 'IpProtocol=tcp,FromPort=5432,ToPort=5432,UserIdGroupPairs=[{GroupId=RDS-SG-ID}]'

# add an inbound rule to the DB security group
aws ec2 authorize-security-group-ingress --group-id RDS-SG-ID --ip-permissions 'IpProtocol=tcp,FromPort=5432,ToPort=5432,UserIdGroupPairs=[{GroupId=APP-SG-ID}]'
```
RDS Storage and IOPS
General Purpose (SSD) Storage
This is the default storage type for most workloads. It provides high IOPS and low latency, making it ideal for applications that require fast access to data.
- Baseline performance: 3,000 IOPS (Input/Output Operations Per Second)
- Scalable: Can provision up to 16,000 IOPS if needed
- Cost-effective: Lower cost than provisioned IOPS
- Use case: Most applications, up to moderate I/O requirements
Provisioned IOPS SSD
This storage type is designed for applications that need sustained, guaranteed I/O performance with consistently low latency.
- Predictable performance: Guaranteed IOPS (1,000 to 256,000)
- Low latency: Consistent performance
- Higher cost: More expensive than general purpose storage
- Use case: Databases with high transaction rates, large databases, I/O-intensive workloads
What Are IOPS?
IOPS (Input/Output Operations Per Second) measures approximately how many read/write operations your database can perform per second. Think of it like the speed limit of your database storage.
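Some back-of-the-envelope math makes the "speed limit" analogy concrete: a workload issuing one million random reads, served at different IOPS levels (the numbers here are illustrative, not a benchmark):

```python
# One million random reads at different provisioned IOPS levels.
random_reads = 1_000_000
for iops in (3_000, 16_000, 64_000):
    print(f"{iops:>6} IOPS -> {random_reads / iops:8.1f} seconds")
```

At the gp3 baseline of 3,000 IOPS that's over five minutes of pure storage time; at 64,000 provisioned IOPS it's about 16 seconds. This is why I/O-heavy databases pay for provisioned IOPS.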
RDS Backup and Restore
RDS backups include:
- Full database snapshot: Complete copy of your database at a point in time
- Transaction logs: All changes since the last backup (enables point-in-time recovery)
The combination of full backups plus transaction logs lets you restore to any specific time within your retention period.
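The mechanism behind point-in-time recovery can be sketched in a few lines: start from the last full snapshot, then replay transaction-log entries up to the target time. Everything here (the snapshot, the log entries, the timestamps) is a made-up example:

```python
# Sketch of point-in-time recovery: snapshot + log replay up to a target time.
snapshot = {"balance": 100}   # state captured at t=0
log = [                       # changes recorded after the snapshot
    (1, ("balance", 150)),
    (2, ("balance", 200)),
    (3, ("balance", 0)),      # t=3: accidental data loss
]

def restore(target_time: int) -> dict:
    state = dict(snapshot)
    for t, (key, value) in log:
        if t > target_time:
            break
        state[key] = value
    return state

print(restore(2))  # {'balance': 200} -- the moment just before the bad write
```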
Restoring from Backups
You can restore your database in several ways.
Point-in-Time Recovery
Restore to any specific time within the retention period:
- Uses full backup + transaction logs
- Can restore to the exact second before a data loss event
- Useful when you know exactly when something went wrong
Restore from Snapshot
Restore to a specific backup snapshot:
- Faster and simpler than point-in-time recovery
- Useful for restoring to a known good state
- You can also create manual snapshots at any time
Restore to New Instance
All restores create a new database instance:
- Original instance remains unchanged
- This means “restoring” is also useful for testing, cloning, or migration
- You can restore to test changes before applying to production
Read Replicas
Read replicas are copies of the database that are used to provide read-only access to the data. They are useful for applications that need to access the data but do not require write access.
How Read Replicas Work
- Asynchronous replication: Data is copied from the primary database to replicas continually
- Read-only: Replicas handle read queries (SELECT statements)
- Write operations: Still go to the primary database (INSERT, UPDATE, DELETE)
- Automatic updates: Replicas stay in sync with the primary, give or take a few seconds
Why Use Read Replicas?
- Read performance: Distribute read traffic across multiple instances
- Reduce load: Take read load off the primary database
- Scalability: Add more replicas as traffic grows
- Geographic distribution: Place replicas closer to users in different regions, so they at least get fast reads
- High availability: Use replicas for read-only failover if the primary fails
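In application code, using read replicas usually comes down to a connection router: writes go to the primary, reads are spread across the replicas. A minimal sketch (the instance names are hypothetical):

```python
import random

# Hypothetical connection router for a primary + replicas setup.
primary = "db-primary"
replicas = ["db-replica-1", "db-replica-2"]

def route(query: str) -> str:
    """Send SELECTs to a replica, everything else to the primary."""
    verb = query.strip().split()[0].upper()
    if verb == "SELECT":
        return random.choice(replicas)
    return primary  # INSERT, UPDATE, DELETE, DDL, ...

print(route("SELECT * FROM users"))         # one of the replicas
print(route("UPDATE users SET name='x'"))   # db-primary
```

One caveat this sketch glosses over: because replication is asynchronous, a read routed to a replica immediately after a write may not see that write yet.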
Create a read replica
```shell
aws rds create-db-instance-read-replica --db-instance-identifier REPLICA-NAME --source-db-instance-identifier SOURCE-DB-NAME
```
Identity and Access Management (IAM)
IAM is a web service that helps you securely control access to AWS resources. You can use IAM to create and manage AWS users and groups, and to use permissions to allow and deny their access to AWS resources. IAM is a critical component of AWS security, and it is important to understand how to use it effectively to protect your AWS resources.
IAM Basics
IAM uses a few core pieces of information to identify and authenticate users:
- User: An identity with long-term credentials (a console password and/or access keys) that represents a person or service. IAM users sign in with the account ID or alias plus a username, not the root email address.
- Group: A collection of users managed as a unit, used to grant the same permissions to many users at once (e.g. Billing or Developers). A user can belong to multiple groups.
- Role: A temporary, assumable identity with a set of permissions. Roles have no long-lived credentials, which makes them the best fit for applications and services.
- Trust Policies: A document attached to a role that defines who can assume it: the trusted entities (users, services, or other roles) and the conditions under which they may do so.
- Policy: A document, usually JSON, that defines which actions are allowed or denied on which resources (e.g. "can access EC2"). Policies are attached to users, groups, or roles.
Example
| Task | Answer |
|---|---|
| Frank, a solo dev, needs to login to the AWS console | User |
| The accounting team needs billing access | Group |
| Zach needs access to EC2 | Policy |
| The application’s backend server needs access to S3 | Roles |
| All the application servers need access to RDS | Trust Policies |
Create IAM User
```shell
aws iam create-user --user-name xxx-admin
```
Check user
```shell
aws iam get-user --user-name xxx-admin
```
Inline Policies
Policies allow or deny actions on specific resources. They look something like this:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*"
      ],
      "Resource": "*"
    }
  ]
}
```
A policy has three main parts:
- Effect: Allow or Deny. Are we permitting or denying these actions?
- Action: The action to allow or deny. There are thousands of these, like DescribeInstances, CreateBucket, or PutObject.
- Resource: Which resources the action targets. For example, “all EC2 servers,” or perhaps the ARN of just a specific one.
Create an inline policy
```shell
aws iam put-user-policy --user-name xxx-admin --policy-name xxx-ec2-readonly --policy-document file://POLICY-FILE.json
```
Attach a managed policy to the user
```shell
aws iam attach-user-policy --user-name xxx-admin --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
```
Check attached policies
```shell
aws iam list-attached-user-policies --user-name xxx-admin
```
Create IAM Group
```shell
aws iam create-group --group-name GROUP-NAME
aws iam create-policy --policy-name POLICY-NAME --policy-document file://POLICY-FILE.json   # create a customer-managed policy
aws iam attach-group-policy --group-name GROUP-NAME --policy-arn POLICY-ARN
aws iam add-user-to-group --group-name GROUP-NAME --user-name USER-NAME
```
IAM Roles
IAM roles define a set of permissions that a trusted entity can assume. Unlike users, roles have no long-term credentials; an entity that assumes a role receives temporary credentials instead.
Roles are useful when you want to delegate permissions to a group of users or to an application. For example, you can create a role for an application that allows it to access specific resources, such as an S3 bucket or an RDS database.
The process is simple:
- Create a role with a policy that defines what actions it can perform.
- Assign the role to a trusted entity, like an EC2 instance.
- The entity can then assume the role to get temporary credentials.
- The credentials are automatically rotated and expire after a short time, so there’s no need to manage them manually.
Create IAM Role
```shell
aws iam create-role --role-name ROLE-NAME --assume-role-policy-document file://TRUST-POLICY.json   # create a role with a trust policy
```
Deny Policies
By default, all actions are denied. You can create policies that explicitly deny certain actions, which will override any allow policies. This is useful for enforcing security controls and ensuring that certain actions cannot be performed, even if other policies would allow them.
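The evaluation order described above (implicit deny by default, explicit Deny beats any Allow) can be sketched in a few lines. The policies here are simplified stand-ins, ignoring resources, wildcards, and conditions:

```python
# Sketch of IAM policy evaluation: implicit deny by default,
# and an explicit Deny always overrides any Allow.
policies = [
    {"Effect": "Allow", "Action": "s3:GetObject"},
    {"Effect": "Deny", "Action": "s3:DeleteObject"},
]

def is_allowed(action: str) -> bool:
    effects = {p["Effect"] for p in policies if p["Action"] == action}
    if "Deny" in effects:        # explicit deny wins over everything
        return False
    return "Allow" in effects    # otherwise an explicit allow is required

print(is_allowed("s3:GetObject"))     # True
print(is_allowed("s3:DeleteObject"))  # False (explicit deny)
print(is_allowed("s3:PutObject"))     # False (implicit deny: no statement matches)
```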
Create the deny-all policy
```shell
aws iam create-policy --policy-name POLICY-NAME --policy-document file://DENY-POLICY.json
```
Attach the policy to a role
```shell
aws iam attach-role-policy --role-name ROLE-NAME --policy-arn POLICY-ARN
```
SSM Parameters
AWS provides a vendor-specific solution in AWS Systems Manager Parameter Store (SSM parameters). It’s a centralized safe where you can keep configuration data, secrets, and parameters that your applications need. It provides:
- Key-value storage: Store configuration values, secrets, and parameters
- Organization by path: Use hierarchical paths like /DB_ENDPOINT or /SECRETS
- Security: Can encrypt sensitive values (SecureString type)
- Accessibility: Applications and EC2 instances can retrieve parameters via IAM permissions
- Versioning: Track changes to parameter values over time
```shell
aws ssm put-parameter --name /DATABASE_URL --value postgresql://postgres:PASSWORD@xxx-db.RANDOM-ID.us-east-1.rds.amazonaws.com:5432/xxx --type String   # use --type SecureString for secret values
```
Create and attach an inline policy so the instance role can read the parameter
```shell
aws iam put-role-policy --role-name ROLE-NAME --policy-name POLICY-NAME --policy-document file://POLICY-FILE.json
```
Internal Monitoring
CloudWatch
CloudWatch is a monitoring service provided by AWS that allows you to collect, store, and analyze metrics and logs from your AWS resources. It provides a wide range of features, including:
- Real-time monitoring of your AWS resources
- Customizable dashboards and alerts
- Integration with other AWS services
- Cost optimization
To let the CloudWatch agent publish data, grant the cloudwatch:PutMetricData permission to the IAM role attached to your EC2 instance. This permission allows the agent to send metrics and logs to CloudWatch.
```shell
aws iam create-role --role-name ROLE-NAME --assume-role-policy-document file://TRUST-POLICY.json   # create IAM role with trust policy
```
Install CloudWatch Agent
The CloudWatch agent is a software program that runs on your Amazon EC2 instances and collects metrics and logs from your system. You can install the CloudWatch agent on your EC2 instances using the AWS Management Console, the AWS CLI, or the CloudFormation template.
Create and attach a CloudWatch policy
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams"
      ],
      "Resource": ["arn:aws:logs:*:*:*"]
    }
  ]
}
```
Create the CloudWatch agent config file
```json
{
  "agent": {
    "metrics_collection_interval": 60,
    "run_as_user": "root"
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/xxx.log",
            "log_group_name": "xxx-monitoring",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  },
  "metrics": {
    "append_dimensions": {
      "AutoScalingGroupName": "${aws:AutoScalingGroupName}",
      "ImageId": "${aws:ImageId}",
      "InstanceId": "${aws:InstanceId}",
      "InstanceType": "${aws:InstanceType}"
    },
    "metrics_collected": {
      "mem": { "measurement": ["mem_used_percent"] },
      "swap": { "measurement": ["swap_used_percent"] }
    }
  }
}
```
Install the CloudWatch agent
```shell
sudo dnf install amazon-cloudwatch-agent
sudo systemctl start amazon-cloudwatch-agent
sudo systemctl enable amazon-cloudwatch-agent
```
CloudWatch Alarms
CloudWatch Alarms allow you to set thresholds on your metrics and receive notifications when those thresholds are breached. For example, you can set an alarm to notify you when CPU utilization exceeds 80% for a certain period of time.
```shell
aws sns create-topic --name TOPIC-NAME   # create an SNS topic for alarm notifications
```
Route 53
Route 53 is a scalable and highly available Domain Name System (DNS) web service provided by AWS. It is designed to route end-user requests to applications hosted on AWS or on-premises. Route 53 provides several features, including:
- Domain registration: You can register new domain names directly through Route 53.
- Domain delegation: You can point a domain registered elsewhere (such as GoDaddy or Namecheap) at Route 53 by updating its name servers.
- Domain forwarding: You can redirect one domain to another, typically via an S3 or CloudFront redirect, since DNS itself cannot forward URLs.
- Domain health checks: You can set up health checks to monitor the availability of your website.
- Domain failover: You can configure Route 53 health checks so traffic automatically fails over to a healthy endpoint when the primary fails.
- Domain alias: You can create an alias record to point a domain name to another resource, such as an Amazon S3 bucket or an Amazon EC2 instance.
- Domain geolocation: You can set up geolocation rules to direct traffic to the closest AWS region.
Create Route 53 Hosted Zone (Private Zone)
```shell
aws route53 create-hosted-zone \
  --name DOMAIN-NAME \
  --caller-reference UNIQUE-STRING \
  --vpc VPCRegion=REGION,VPCId=VPC-ID
```
A Record
A record is a resource record that maps a domain name to an IP address. It is used to direct traffic to a specific IP address for a domain name.
```shell
aws route53 change-resource-record-sets --hosted-zone-id ZONE-ID --change-batch '{"Changes":[{"Action":"CREATE","ResourceRecordSet":{"Name":"RECORD-NAME","Type":"A","TTL":300,"ResourceRecords":[{"Value":"IP-ADDRESS"}]}}]}'
```
CNAME Records
A CNAME record is a resource record that maps one domain name to another, so traffic for the alias is directed to the canonical name.
```shell
aws route53 change-resource-record-sets --hosted-zone-id ZONE-ID --change-batch '{"Changes":[{"Action":"CREATE","ResourceRecordSet":{"Name":"RECORD-NAME","Type":"CNAME","TTL":300,"ResourceRecords":[{"Value":"ALIAS-NAME"}]}}]}'
```
Important Rules:
- CNAME records can only point to other domain names, not directly to IP addresses.
- CNAMEs can’t overlap. You can’t have two rules for the same subdomain (e.g., an A record and a CNAME record for blog.boot.dev).
- If the target domain has multiple A records, the CNAME will point to all of them.
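The rules above fall out of how resolution chases CNAMEs. A toy resolver over a record table makes the chain visible (the names and addresses here are made up for illustration):

```python
# Toy resolver: chase CNAMEs until an A record is found.
records = {
    ("blog.example.com", "CNAME"): "sites.host.com",
    ("sites.host.com", "A"): "192.0.2.10",
    ("example.com", "A"): "192.0.2.1",
}

def resolve(name: str, max_hops: int = 10) -> str:
    for _ in range(max_hops):
        if (name, "A") in records:
            return records[(name, "A")]
        if (name, "CNAME") in records:
            name = records[(name, "CNAME")]  # follow the alias and keep resolving
            continue
        raise LookupError(f"no record for {name}")
    raise LookupError("CNAME chain too long")

print(resolve("blog.example.com"))  # 192.0.2.10, via the CNAME hop
print(resolve("example.com"))       # 192.0.2.1, directly from the A record
```

This also shows why an A record and a CNAME can't coexist for the same name: the resolver would have two contradictory answers for the first lookup.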
AWS S3
Amazon S3 (Simple Storage Service) is AWS’ object storage service. Think of it as slower, but very scalable storage for files (objects) in containers (buckets). It’s relatively cheap and very reliable at huge scales.
Create S3 Bucket
```shell
# outside us-east-1, also pass --create-bucket-configuration LocationConstraint=REGION
aws s3api create-bucket --bucket BUCKET-NAME --region REGION
```
List Buckets
```shell
aws s3api list-buckets --query 'Buckets[].Name'
```
List Objects
```shell
aws s3api list-objects --bucket BUCKET-NAME --query 'Contents[].Key'
```
Delete Object
```shell
aws s3api delete-object --bucket BUCKET-NAME --key KEY-NAME
```
Copy Object
```shell
aws s3api copy-object --bucket BUCKET-NAME --copy-source SOURCE-BUCKET-NAME/OBJECT-KEY --key NEW-OBJECT-KEY
```
Move Object
```shell
# s3api has no move operation; copy to the new key, then delete the original
aws s3api copy-object --bucket BUCKET-NAME --copy-source SOURCE-BUCKET-NAME/OBJECT-KEY --key NEW-OBJECT-KEY
aws s3api delete-object --bucket SOURCE-BUCKET-NAME --key OBJECT-KEY
```
Delete Bucket
```shell
aws s3api delete-bucket --bucket BUCKET-NAME
```
AWS CloudFront
CloudFront is AWS’s content delivery network (CDN). A CDN is a network of servers distributed around the world that cache and serve content in locations that are physically close to users. In simple terms, they cache static content like images, videos, and scripts from an origin server and serve it from the nearest edge location.
How CloudFront Works
- Origin: Your source content (S3 bucket, EC2 instance, load balancer, etc.)
- Distribution: CloudFront creates a “distribution,” i.e., a configuration for how to route requests to your origin.
- Edge Locations: AWS has hundreds of edge locations worldwide.
- Caching: Edge locations cache content based on user requests.
- Delivery: When a user requests content, CloudFront serves it from the nearest edge location.
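The caching step can be sketched as a tiny TTL cache: serve from the edge while the entry is fresh, fall back to the origin on a miss or after expiry. The paths, contents, and 60-second TTL below are made-up values:

```python
# Sketch of an edge cache with a TTL.
origin = {"/logo.png": "image-bytes-v1"}
cache: dict[str, tuple[str, float]] = {}
TTL = 60.0  # seconds

def get(path: str, now: float) -> tuple[str, str]:
    if path in cache:
        body, stored_at = cache[path]
        if now - stored_at < TTL:
            return body, "HIT"     # still fresh: served from the edge
    body = origin[path]            # miss or expired: fetch from the origin
    cache[path] = (body, now)
    return body, "MISS"

print(get("/logo.png", now=0.0))    # ('image-bytes-v1', 'MISS')
print(get("/logo.png", now=10.0))   # ('image-bytes-v1', 'HIT')
print(get("/logo.png", now=100.0))  # ('image-bytes-v1', 'MISS') -- TTL expired
```

An invalidation (next section) is just evicting entries from this cache before their TTL runs out.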
Create CloudFront Distribution
```shell
aws cloudfront create-distribution --distribution-config '{"CallerReference":"<unique-string>","Origins":{"Quantity":1,"Items":[{"Id":"<origin-id>","DomainName":"<bucket-domain>","S3OriginConfig":{"OriginAccessIdentity":""}}]},"DefaultCacheBehavior":{"TargetOriginId":"<origin-id>","ViewerProtocolPolicy":"redirect-to-https","AllowedMethods":{"Quantity":2,"Items":["GET","HEAD"]},"CachePolicyId":"<cache-policy-id>"},"Enabled":true}'
```
CloudFront Invalidation
CloudFront allows you to invalidate cached objects by creating an invalidation request. This is useful when you want to update content or remove stale content from the cache.
```shell
aws cloudfront create-invalidation --distribution-id YOUR_DISTRIBUTION_ID --paths "/xxx"
```
Presigned URLs
Presigned URLs are temporary, time-limited URLs that grant access to private S3 objects or CloudFront content. They’re like tickets that expire:
- Time-limited: The URL expires after a set duration (e.g., 1 hour, or 24 hours).
- Secure: The content itself remains private; only those with the presigned URL can access it.
- Controlled: You control who gets the URL and when it expires.
```shell
aws s3 presign s3://YOUR_BUCKET_NAME/favicon.ico --expires-in 15   # URL valid for 15 seconds
```
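The idea behind these URLs can be sketched with a plain HMAC: sign the path plus an expiry with a server-side secret, and reject anything expired or tampered with. This is a simplified illustration of the concept, not AWS's actual (more involved) SigV4 signing scheme:

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # never leaves the server

def presign(path: str, expires_at: int) -> str:
    """Attach an expiry and a signature over path+expiry."""
    msg = f"{path}?expires={expires_at}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires_at}&sig={sig}"

def verify(url: str, now: int) -> bool:
    path, _, query = url.partition("?")
    params = dict(kv.split("=") for kv in query.split("&"))
    expires_at = int(params["expires"])
    expected = presign(path, expires_at)  # recompute with the secret
    return hmac.compare_digest(url, expected) and now < expires_at

url = presign("/private/report.pdf", expires_at=1_000)
print(verify(url, now=500))        # True: signature valid, not yet expired
print(verify(url, now=2_000))      # False: expired
print(verify(url + "x", now=500))  # False: tampered signature
```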
AWS Lambda
AWS Lambda is a serverless compute service that allows you to run code without provisioning or managing servers. You can use Lambda to run code in response to events, such as changes to data in an S3 bucket or updates to a DynamoDB table. Lambda automatically scales your application by running code in response to each trigger, and you only pay for the compute time you consume.
When Lambda Makes Sense
- Event-driven workloads: Process S3 uploads, respond to SNS messages, react to DynamoDB changes
- APIs with variable traffic: Handle thousands of requests per second or just a handful per day
- Scheduled tasks: Run a job every hour without maintaining a server
- Background processing: Resize images, generate PDFs, send emails
- Prototypes and MVPs: Get something running fast without infrastructure setup
When Lambda Does Not Make Sense
- Long-running processes: Lambda functions time out after 15 minutes max
- Stateful applications: Each invocation is isolated; you can’t keep state in memory between requests
- Applications needing large dependencies: 250 MB unzipped deployment package limit
- Consistent high traffic: If your function runs constantly, EC2 or ECS might be cheaper
- Applications requiring specific OS customization: You get a standard runtime environment
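A minimal handler in the shape Lambda's Python runtime expects — a function taking `(event, context)` and returning a JSON-serializable value — which you can also invoke directly in tests (the event payload here is a made-up example):

```python
import json

def lambda_handler(event, context):
    # Lambda calls this with the triggering event and a runtime context object.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Invoking it directly, as Lambda would with an event payload:
print(lambda_handler({"name": "S3"}, None))
```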
