So you’ve heard about AWS (Amazon Web Services) and want to understand what it’s all about. Let me walk you through the essentials I’ve learned while working with AWS in real-world DevOps scenarios.
What is AWS?
AWS is Amazon’s cloud computing platform that provides on-demand computing resources over the internet. Think of it as renting someone else’s super powerful computers instead of buying and maintaining your own servers. You pay only for what you use, and you can scale up or down based on your needs.
AWS offers over 200 services, but don’t let that overwhelm you. We’ll focus on the core services you’ll actually use day-to-day for deploying and managing applications.
Understanding AWS Resource Scopes
Before diving in, you need to understand that every resource you create in AWS falls under one of three scopes. This isn’t just theoretical - it affects how you architect your applications and where your data lives.
Global Services
These services are available across all AWS regions:
- IAM (Identity and Access Management): Your users, groups, and permissions
- Billing: All your AWS charges in one place
- Route53: AWS’s DNS service
Regional Services
These exist within a specific geographic region:
- S3 (Simple Storage Service): Object storage (though buckets are region-specific, the S3 service interface is global)
- VPC (Virtual Private Cloud): Your isolated network
- DynamoDB: NoSQL database service
Availability Zone (AZ) Services
These are tied to specific data centers within a region:
- EC2 (Elastic Compute Cloud): Virtual servers
- EBS (Elastic Block Store): Hard drives for EC2
- RDS (Relational Database Service): Managed databases
Why does this matter? When you create an EC2 instance, it lives in a specific AZ. If that data center has issues, your instance goes down unless you’ve designed for high availability across multiple AZs.
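You can see these scopes directly in the CLI. A quick sketch (assuming you already have credentials configured - we cover CLI setup below):

# IAM is global: no region appears in the request
aws iam list-users
# EC2 is region-scoped: the same command against two regions
# returns two completely independent sets of instances
aws ec2 describe-instances --region us-east-1
aws ec2 describe-instances --region eu-west-1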
IAM: Your First Stop in AWS
When you create an AWS account, you get a root user with god-mode permissions. Never use the root user for daily work. Here’s why and what to do instead:
The First Thing: Create an Admin User
The root user has unrestricted access to everything, including billing. If those credentials leak, you’re in serious trouble. Instead:
- Log in as root (just this once)
- Go to IAM service
- Create an admin user with administrative permissions
- Enable MFA (Multi-Factor Authentication) on both root and admin
- Log out and use the admin user from now on
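You do this bootstrap once in the console, but once your admin user has CLI access (covered below), creating further users can be scripted. A rough sketch - the username and temporary password here are placeholders:

# Create a user and grant admin permissions via the AWS-managed policy
aws iam create-user --user-name jane-admin
aws iam attach-user-policy --user-name jane-admin \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
# Give them a console password, forcing a reset on first sign-in
aws iam create-login-profile --user-name jane-admin \
    --password 'ChangeMe123!' --password-reset-required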
IAM Core Concepts
Users: Actual people or services that need access to AWS
- Each developer gets their own IAM user
- Never share credentials
Groups: Collections of users with similar permissions
- Example: Create a “Developers” group with EC2 and S3 access
- Add users to groups instead of assigning permissions individually
Roles: Temporary permissions that can be assumed by users or services
- Think of it as a “hat” someone can wear temporarily
- Perfect for EC2 instances that need to access S3 (the instance assumes a role)
Policies: JSON documents that define permissions
- Attached to users, groups, or roles
- Define what actions are allowed on which resources
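To make that concrete, here's a minimal sketch of a policy document granting read-only access to a single S3 bucket. The bucket, file, and policy names are placeholders:

# Write the policy document ("my-bucket" is a placeholder)
cat > s3-read-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
EOF
# Register it so it can be attached to users, groups, or roles
aws iam create-policy --policy-name s3-read-only \
    --policy-document file://s3-read-policy.json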
IAM Users vs IAM Roles: What’s the Difference?
This confused me at first, so let me clarify:
IAM Users are for:
- Permanent identities (people, long-running services)
- You create credentials (password and/or access keys)
- Example: Your colleague needs to manage EC2 instances
IAM Roles are for:
- Temporary credentials
- Services or applications that need AWS access
- Cross-account access
- Example: Your EC2 instance needs to read from S3
The key difference: Users have permanent credentials. Roles have temporary credentials that are automatically rotated.
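As a sketch of the EC2-reads-S3 example (role and file names are placeholders): a role is created with a "trust policy" that says who may assume it, and its permissions are attached separately:

# Trust policy: the EC2 service is allowed to assume this role
cat > ec2-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role --role-name s3-reader-role \
    --assume-role-policy-document file://ec2-trust-policy.json
# Permissions go on separately, e.g. the AWS-managed read-only S3 policy
aws iam attach-role-policy --role-name s3-reader-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess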
Creating IAM Users and Getting CLI Access
Here’s the practical workflow:
1. Create a user:
   - Navigate to IAM → Users → Add user
   - Set username (e.g., "jane-dev")
   - Choose access type:
     - AWS Management Console access (for UI access)
     - Programmatic access (for CLI/SDK)
2. Add to groups:
   - Either add to an existing group or create a new one
   - Attach policies to the group (e.g., AmazonEC2FullAccess)
3. Get CLI credentials:
   - After creating the user, go to: User → Security credentials
   - Click "Create access key"
   - Choose "Command Line Interface (CLI)"
   - Download the credentials (you only see them once!)
   - These are your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
Security tip: Rotate these keys regularly and never commit them to git!
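Once you have the two keys, hand them to the CLI with aws configure, which stores them in ~/.aws/credentials:

aws configure
# AWS Access Key ID [None]: <paste your access key ID>
# AWS Secret Access Key [None]: <paste your secret access key>
# Default region name [None]: us-east-1
# Default output format [None]: json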
Regions and Availability Zones
Let me break down AWS’s global infrastructure because it’s crucial for understanding how your applications are deployed.
Regions
A Region is a geographical area where AWS has data centers. Examples:
- us-east-1 (North Virginia)
- eu-west-1 (Ireland)
- ap-south-1 (Mumbai)
Why multiple regions?
- Latency: Deploy closer to your users
- Compliance: Some data must stay in specific countries
- Disaster Recovery: If an entire region fails, you have backups elsewhere
- Cost: Pricing varies by region
Availability Zones (AZs)
Each region has multiple Availability Zones - physically separate data centers within the same region. They’re:
- Close enough for low-latency communication
- Far enough apart that a disaster (fire, flood) won’t take down multiple AZs
- Connected by high-speed private networks
For example, us-east-1 has 6 AZs: us-east-1a, us-east-1b, etc.
Best Practice: Deploy your application across at least 2 AZs in a region. If one AZ fails, your app stays up.
Visualization
AWS Global Infrastructure
│
├─ Region: us-east-1 (North Virginia)
│ ├─ AZ: us-east-1a (Data Center 1)
│ ├─ AZ: us-east-1b (Data Center 2)
│ └─ AZ: us-east-1c (Data Center 3)
│
├─ Region: eu-west-1 (Ireland)
│ ├─ AZ: eu-west-1a
│ ├─ AZ: eu-west-1b
│ └─ AZ: eu-west-1c
│
└─ Region: ap-south-1 (Mumbai)
├─ AZ: ap-south-1a
└─ AZ: ap-south-1b
VPC: Your Private Cloud Network
VPC (Virtual Private Cloud) is your own isolated network in AWS. Think of it as your own private data center in the cloud, but without the hardware headaches.
Key VPC Concepts
What is a VPC?
- A virtual representation of a traditional network infrastructure
- Completely isolated from other AWS customers
- You control the IP address range, subnets, route tables, and network gateways
- A VPC lives in a single region but spans all AZs in that region (and you can create multiple VPCs per region)
Why do you need a VPC?
- Network isolation and security
- Control over your network architecture
- Connect to your on-premises data center
- Segment your application into different network tiers (web, app, database)
Subnets: Dividing Your VPC
A subnet is a range of IP addresses within your VPC. Here’s what you need to know:
- Subnets are AZ-specific: Each subnet lives in exactly one AZ
- Public subnets: Can communicate with the internet (e.g., for web servers)
- Private subnets: Cannot directly access the internet (e.g., for databases)
What makes a subnet public or private? It’s not a checkbox - it’s about routing configuration:
- Public subnet: Has a route to an Internet Gateway
- Private subnet: No route to Internet Gateway (but can access internet via NAT Gateway)
IP Addresses in VPC
When you create a VPC, you define a CIDR block (Classless Inter-Domain Routing). Let me explain:
CIDR Block Example: 10.0.0.0/16
- 10.0.0.0 is the base IP address
- /16 means the first 16 bits are fixed, giving you 65,536 IP addresses
- Your VPC owns all IPs from 10.0.0.0 to 10.0.255.255
Subnet CIDR Examples (carved out of the VPC):
- Public subnet 1: 10.0.1.0/24 (256 IPs in AZ-1a)
- Public subnet 2: 10.0.2.0/24 (256 IPs in AZ-1b)
- Private subnet 1: 10.0.10.0/24 (256 IPs in AZ-1a)
- Private subnet 2: 10.0.11.0/24 (256 IPs in AZ-1b)
Two types of IP addresses:
- Private IPs: Used for internal communication within the VPC (e.g., 10.0.1.45)
- Public IPs: Allow internet communication (e.g., 54.123.45.67)
When you launch an EC2 instance:
- It always gets a private IP for internal VPC communication
- Optionally gets a public IP if it’s in a public subnet and you enable it
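To tie this together, here's a rough CLI sketch of creating the example VPC and one public subnet. The resource IDs below are placeholders for the IDs AWS returns:

# Create the VPC with the /16 block from above
aws ec2 create-vpc --cidr-block 10.0.0.0/16
# Carve out a /24 subnet in a specific AZ (vpc-0abc123 is a placeholder)
aws ec2 create-subnet --vpc-id vpc-0abc123 \
    --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
# Auto-assign public IPs to instances launched in this subnet
aws ec2 modify-subnet-attribute --subnet-id subnet-0def456 --map-public-ip-on-launch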
Internet Gateway
An Internet Gateway is what connects your VPC to the internet. Think of it as the front door:
- Attached to your VPC
- Allows resources in public subnets to communicate with the internet
- Highly available and scales automatically
Without an Internet Gateway, your VPC is completely isolated from the internet.
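In CLI terms (again with placeholder IDs), this is also where a subnet actually "becomes public": you attach the gateway, then route 0.0.0.0/0 to it in the subnet's route table:

aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0abc123 --vpc-id vpc-0abc123
# The route that makes a subnet public: all non-local traffic goes to the IGW
aws ec2 create-route --route-table-id rtb-0abc123 \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0abc123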
Controlling Access: Firewalls in AWS
AWS gives you two levels of firewall control:
1. Network ACLs (NACLs) - Subnet Level
Network Access Control Lists operate at the subnet boundary:
- Stateless: You must define both inbound and outbound rules separately
- Rules are evaluated in order (lowest number first)
- Can explicitly ALLOW or DENY traffic
- Default NACL allows all traffic
Example NACL rules:
Inbound:
100 - Allow HTTP (port 80) from 0.0.0.0/0
200 - Allow HTTPS (port 443) from 0.0.0.0/0
300 - Allow SSH (port 22) from 203.0.113.0/24
* - Deny all
Outbound:
100 - Allow all traffic to 0.0.0.0/0
* - Deny all
2. Security Groups - Instance Level
Security Groups are virtual firewalls for individual EC2 instances:
- Stateful: If you allow inbound traffic, the response is automatically allowed
- Only ALLOW rules (no deny rules - everything not explicitly allowed is denied)
- Can reference other security groups (e.g., “allow traffic from web-server-sg”)
Example Security Group:
Name: web-server-sg
Inbound Rules:
- Type: HTTP, Port: 80, Source: 0.0.0.0/0
- Type: HTTPS, Port: 443, Source: 0.0.0.0/0
- Type: SSH, Port: 22, Source: my-ip-address/32
Outbound Rules:
- Type: All traffic, Destination: 0.0.0.0/0
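The same group can be built from the CLI. A sketch - this version lands in your default VPC, and 203.0.113.10/32 stands in for "my IP":

aws ec2 create-security-group --group-name web-server-sg \
    --description "Web server firewall"
# Web traffic open to the world
aws ec2 authorize-security-group-ingress --group-name web-server-sg \
    --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name web-server-sg \
    --protocol tcp --port 443 --cidr 0.0.0.0/0
# SSH locked to a single address
aws ec2 authorize-security-group-ingress --group-name web-server-sg \
    --protocol tcp --port 22 --cidr 203.0.113.10/32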
Best Practice: Use Security Groups as your primary firewall. They’re more flexible and easier to manage.
VPC Architecture Visualization
VPC: 10.0.0.0/16 (us-east-1)
├─ Internet Gateway
├─ Public Subnet 1: 10.0.1.0/24 (us-east-1a)
│ └─ EC2: Web Server (Private: 10.0.1.10, Public: 54.123.45.67)
├─ Public Subnet 2: 10.0.2.0/24 (us-east-1b)
│ └─ EC2: Web Server (Private: 10.0.2.20, Public: 54.123.45.89)
├─ Private Subnet 1: 10.0.10.0/24 (us-east-1a)
│ └─ RDS Database (Private: 10.0.10.50)
└─ Private Subnet 2: 10.0.11.0/24 (us-east-1b)
└─ RDS Database Replica (Private: 10.0.11.51)
EC2: Your Virtual Servers in the Cloud
EC2 (Elastic Compute Cloud) is AWS’s virtual server service. Instead of buying physical servers, you rent virtual machines by the hour (or second).
Creating an EC2 Instance: Step by Step
Let me walk you through each configuration option and why it matters:
Step 1: Name and Tags
What: Give your instance a name and optional tags
Name: web-server-prod-1
Tags:
- Environment: Production
- Application: MyApp
- Team: DevOps
Why: Tags are crucial for:
- Organization (finding resources quickly)
- Cost tracking (see what each team/project costs)
- Automation (scripts can filter by tags)
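That automation point in practice - a sketch of filtering instances by the tags above from the CLI:

# Find every production instance the DevOps team runs
aws ec2 describe-instances \
    --filters "Name=tag:Team,Values=DevOps" "Name=tag:Environment,Values=Production" \
    --query "Reservations[].Instances[].InstanceId"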
Step 2: Choose an AMI (Operating System)
What: AMI (Amazon Machine Image) is your OS template
Options:
- Amazon Linux 2: AWS-optimized, free tier eligible, my go-to choice
- Ubuntu: Popular, great community support
- Red Hat, CentOS: For enterprise environments
- Windows Server: If you need Windows
- Custom AMIs: Your own pre-configured images
Why it matters: Different AMIs have different package managers, default users, and configurations:
- Amazon Linux uses yum/dnf and the default user is ec2-user
- Ubuntu uses apt and the default user is ubuntu
Step 3: Instance Type
What: The hardware configuration (CPU, RAM, network)
Common types:
- t2.micro: 1 vCPU, 1 GB RAM (free tier)
- t3.small: 2 vCPU, 2 GB RAM
- t3.medium: 2 vCPU, 4 GB RAM
- m5.large: 2 vCPU, 8 GB RAM
- c5.xlarge: 4 vCPU, 8 GB RAM (compute-optimized)
Naming convention: [family].[size]
- t3: Burstable general purpose (great for web servers with variable load)
- m5: General purpose (balanced CPU/memory)
- c5: Compute optimized (high CPU)
- r5: Memory optimized (high RAM)
Why it matters:
- Start small, you can always resize later
- Different instance types have different network performance
- Cost scales with instance size
Step 4: Key Pair (Critical!)
What: SSH key pair for secure access to your instance
How to create:
- Click “Create new key pair”
- Name it (e.g., my-aws-key)
- Choose key type:
  - RSA (works everywhere)
  - ED25519 (newer, more secure)
- Choose file format:
  - .pem for Linux/Mac
  - .ppk for PuTTY on Windows
- Download the key file
IMPORTANT:
- You only get to download this once!
- Store it securely
- Never commit it to git
- Set proper permissions: chmod 400 my-aws-key.pem
Why it matters: Without this key, you can’t SSH into your instance. Lose it, and you’re locked out.
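You can also create the key pair from the CLI. This sketch saves the private key straight to a file, using the name from above:

# The KeyMaterial field of the response is the private key - save it immediately
aws ec2 create-key-pair --key-name my-aws-key \
    --query KeyMaterial --output text > my-aws-key.pem
chmod 400 my-aws-key.pem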
Step 5: Network Settings
This is where VPC knowledge comes in handy:
VPC Selection:
- Choose which VPC to launch in
- Usually you’ll have a default VPC already
Subnet:
- Choose which subnet (and therefore which AZ)
- Pick a public subnet if you need internet access
Auto-assign Public IP:
- Enable: Your instance gets a public IP (needed for internet access)
- Disable: Only private IP (for internal-only servers)
Firewall (Security Groups):
You can create a new security group or use an existing one. For a web server, you’d typically allow:
Security Group: web-server-sg
Inbound Rules:
1. SSH (22) from My IP
- Why: So you can manage the server
- Security: Only from your IP, not 0.0.0.0/0
2. HTTP (80) from Anywhere (0.0.0.0/0)
- Why: Users need to access your website
3. HTTPS (443) from Anywhere (0.0.0.0/0)
- Why: Secure web traffic
4. Custom TCP (8080) from Anywhere (0.0.0.0/0)
- Why: If your app runs on port 8080
Outbound Rules:
- All traffic to Anywhere (default)
- Why: Server needs to download packages, make API calls, etc.
Step 6: Configure Storage
What: Attach storage volumes to your instance
Options:
- Size: Start with 8-30 GB for general use
- Volume Type:
- gp3: General purpose SSD (best price/performance)
- gp2: Previous generation general purpose
- io1/io2: High-performance SSD (databases)
- st1: Throughput-optimized HDD (big data)
Delete on Termination:
- Checked: Volume deletes when instance terminates (default)
- Unchecked: Volume persists (useful for data drives)
Why it matters: You can add more volumes later, but the root volume is where your OS lives.
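Adding a volume later looks like this in the CLI (a sketch with placeholder IDs; remember the volume must be created in the same AZ as the instance):

aws ec2 create-volume --size 20 --volume-type gp3 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-0abc123 \
    --instance-id i-0abc123 --device /dev/sdf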
Connecting to Your EC2 Instance
Once your instance is running:
1. Get the public IP from the EC2 console (e.g., 54.123.45.67)
2. Set key permissions (first time only):
   chmod 400 ~/Downloads/my-aws-key.pem
3. SSH into the instance:
   ssh -i ~/Downloads/my-aws-key.pem ec2-user@54.123.45.67
   Replace ec2-user with the appropriate user:
   - Amazon Linux: ec2-user
   - Ubuntu: ubuntu
   - Red Hat: ec2-user or root
4. First login success!
   [ec2-user@ip-10-0-1-10 ~]$
Real-World Example: Running a Node.js App from a Private Docker Registry
Let me show you a complete example of deploying a containerized application on EC2. This is what you’ll actually do in production.
Scenario
You have a Node.js application in a private Docker registry (Docker Hub private repo) and want to run it on EC2, accessible from the internet.
Step-by-Step Implementation
1. Launch EC2 Instance
Follow the steps above, but make sure:
- Public subnet selected ✓
- Auto-assign public IP enabled ✓
- Security group allows ports 22 (SSH) and 3000 (Node app) ✓
2. SSH into the Instance
ssh -i ~/.ssh/my-aws-key.pem ec2-user@54.123.45.67
3. Install Docker
# Update packages
sudo yum update -y
# Install Docker
sudo yum install docker -y
# Start Docker service
sudo systemctl start docker
sudo systemctl enable docker
# Verify Docker is running
sudo docker --version
4. Add Your User to Docker Group
Why? By default, you need sudo to run Docker commands. This gets annoying and is risky in scripts.
# Add current user to docker group
sudo usermod -aG docker $USER
# Apply the new group (without logging out)
newgrp docker
# Or simply log out and back in for changes to take effect
exit
SSH back in, and now you can run docker ps without sudo!
5. Login to Private Docker Registry
# Login to Docker Hub (or your private registry)
docker login
# Enter your credentials when prompted
Username: your-dockerhub-username
Password: your-dockerhub-password
6. Pull and Run Your Application
# Pull the image from your private repo
docker pull your-username/node-app:latest
# Run the container
docker run -d \
--name my-node-app \
-p 3000:3000 \
--restart unless-stopped \
your-username/node-app:latest
# Verify it's running
docker ps
Explanation of flags:
- -d: Run in detached mode (background)
- --name: Give it a friendly name
- -p 3000:3000: Map host port 3000 to container port 3000
- --restart unless-stopped: Auto-restart if it crashes
7. Test Your Application
From your local machine:
curl http://54.123.45.67:3000
Or open in a browser: http://54.123.45.67:3000
Complete Script Version
Here’s a user data script you can use when launching the EC2 instance to automate everything:
#!/bin/bash
# This runs automatically when the instance first starts
# Install Docker
yum update -y
yum install docker -y
systemctl start docker
systemctl enable docker
# Add ec2-user to docker group
usermod -aG docker ec2-user
# Pull and run the app (requires credentials to be configured)
su - ec2-user -c "docker login -u YOUR_USERNAME -p YOUR_PASSWORD"
su - ec2-user -c "docker pull your-username/node-app:latest"
su - ec2-user -c "docker run -d --name my-node-app -p 3000:3000 --restart unless-stopped your-username/node-app:latest"
Security Note: Don’t hardcode credentials in user data! Use AWS Secrets Manager or IAM roles instead.
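A safer variant of the login step, as a sketch: store the password in Secrets Manager (here under the hypothetical name dockerhub-password), give the instance an IAM role allowed to read it, and pipe it straight into Docker so the secret never touches the script or disk:

# Requires an instance role with secretsmanager:GetSecretValue on this secret
aws secretsmanager get-secret-value --secret-id dockerhub-password \
    --query SecretString --output text | \
    docker login -u your-username --password-stdin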
What’s Next?
In Part 2: AWS DevOps - Jenkins CI/CD and AWS CLI, we dive into:
- Jenkins CI/CD Pipelines: Complete deployment automation from GitHub to EC2
- AWS CLI Mastery: Managing AWS from the command line
- IAM Management via CLI: Creating users, groups, and custom policies
- Best Practices: Security, cost optimization, and high availability tips
The hands-on Jenkins pipeline example shows a production-grade workflow that builds, tests, containerizes, and deploys your application automatically.