
AWS Essentials: A Practical DevOps Guide (Part 1)


So you’ve heard about AWS (Amazon Web Services) and want to understand what it’s all about. Let me walk you through the essentials I’ve learned while working with AWS in real-world DevOps scenarios.

What is AWS?

AWS is Amazon’s cloud computing platform that provides on-demand computing resources over the internet. Think of it as renting someone else’s super powerful computers instead of buying and maintaining your own servers. You pay only for what you use, and you can scale up or down based on your needs.

AWS offers over 200 services, but don’t let that overwhelm you. We’ll focus on the core services you’ll actually use day-to-day for deploying and managing applications.


Understanding AWS Resource Scopes

Before diving in, you need to understand that every resource you create in AWS falls under one of three scopes. This isn’t just theoretical - it affects how you architect your applications and where your data lives.

Global Services

These services are available across all AWS regions:

  • IAM (users, groups, roles, policies)
  • Route 53 (DNS)
  • CloudFront (CDN)
  • Billing and account management

Regional Services

These exist within a specific geographic region:

  • VPC (your virtual network)
  • S3 buckets
  • DynamoDB tables
  • Lambda functions

Availability Zone (AZ) Services

These are tied to specific data centers within a region:

  • EC2 instances
  • EBS volumes
  • Subnets

Why does this matter? When you create an EC2 instance, it lives in a specific AZ. If that data center has issues, your instance goes down unless you’ve designed for high availability across multiple AZs.


IAM: Your First Stop in AWS

When you create an AWS account, you get a root user with god-mode permissions. Never use the root user for daily work. Here’s why and what to do instead:

The First Thing: Create an Admin User

The root user has unrestricted access to everything, including billing. If those credentials leak, you’re in serious trouble. Instead:

  1. Log in as root (just this once)
  2. Go to IAM service
  3. Create an admin user with administrative permissions
  4. Enable MFA (Multi-Factor Authentication) on both root and admin
  5. Log out and use the admin user from now on
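
If you'd rather script this, here's a minimal CLI sketch of the same setup (run once with root credentials; the username jane-admin and the password are placeholders):

# Create the admin user (username is a placeholder)
aws iam create-user --user-name jane-admin

# Give it console access with a temporary password
aws iam create-login-profile --user-name jane-admin \
  --password 'ChangeMe-123!' --password-reset-required

# Attach the AWS-managed administrator policy
aws iam attach-user-policy --user-name jane-admin \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess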

IAM Core Concepts

Users: Actual people or services that need access to AWS

Groups: Collections of users with similar permissions

Roles: Temporary permissions that can be assumed by users or services

Policies: JSON documents that define permissions
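
For example, a minimal policy allowing read-only access to a single S3 bucket might look like this (the bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-example-bucket",
        "arn:aws:s3:::my-example-bucket/*"
      ]
    }
  ]
}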

IAM Users vs IAM Roles: What’s the Difference?

This confused me at first, so let me clarify:

IAM Users are for:

  • Humans who need long-term access (developers, admins)
  • Anything that needs long-lived credentials: a console password and/or access keys

IAM Roles are for:

  • AWS services acting on your behalf (e.g., an EC2 instance reading from S3)
  • Temporary or delegated access: cross-account access, federated logins, CI/CD jobs

The key difference: Users have permanent credentials. Roles have temporary credentials that are automatically rotated.
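
You can see the difference from the CLI: assuming a role returns short-lived credentials that expire on their own (the role ARN below is a placeholder):

# Assume a role and receive temporary credentials (role ARN is a placeholder)
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/my-app-role \
  --role-session-name demo-session

# The response contains an AccessKeyId, SecretAccessKey, and SessionToken
# that expire automatically (after 1 hour by default)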

Creating IAM Users and Getting CLI Access

Here’s the practical workflow:

  1. Create a user:

    • Navigate to IAM → Users → Add user
    • Set username (e.g., “jane-dev”)
    • Choose access type:
      • AWS Management Console access (for UI access)
      • Programmatic access (for CLI/SDK)
  2. Add to groups:

    • Either add to existing group or create new one
    • Attach policies to the group (e.g., AmazonEC2FullAccess)
  3. Get CLI credentials:

    • After creating the user, go to: User → Security credentials
    • Click “Create access key”
    • Choose “Command Line Interface (CLI)”
    • Download the credentials (you only see them once!)
    • These are your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY

Security tip: Rotate these keys regularly and never commit them to git!
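
Once you have the keys, wire them up locally with aws configure (this writes them to ~/.aws/credentials):

# Configure the CLI with your new access keys
aws configure
# AWS Access Key ID [None]: AKIA...
# AWS Secret Access Key [None]: ...
# Default region name [None]: us-east-1
# Default output format [None]: json

# Sanity check: confirm which identity the CLI is using
aws sts get-caller-identity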


Regions and Availability Zones

Let me break down AWS’s global infrastructure because it’s crucial for understanding how your applications are deployed.

Regions

A Region is a geographical area where AWS has data centers. Examples:

  • us-east-1 (North Virginia)
  • eu-west-1 (Ireland)
  • ap-south-1 (Mumbai)

Why multiple regions?

  • Latency: serve users from the region closest to them
  • Compliance: some data is legally required to stay within a specific country
  • Disaster recovery: a second region keeps you running through a region-wide outage

Availability Zones (AZs)

Each region has multiple Availability Zones - physically separate data centers within the same region. They're:

  • Physically isolated, with independent power, cooling, and networking
  • Connected to each other by low-latency, high-bandwidth links
  • Designed so a failure in one AZ doesn't take down the others

For example, us-east-1 has 6 AZs: us-east-1a, us-east-1b, etc.

Best Practice: Deploy your application across at least 2 AZs in a region. If one AZ fails, your app stays up.
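
You can list a region's AZs straight from the CLI:

# List the Availability Zone names in us-east-1
aws ec2 describe-availability-zones --region us-east-1 \
  --query 'AvailabilityZones[].ZoneName' --output text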

Visualization

AWS Global Infrastructure

├─ Region: us-east-1 (North Virginia)
│  ├─ AZ: us-east-1a (Data Center 1)
│  ├─ AZ: us-east-1b (Data Center 2)
│  └─ AZ: us-east-1c (Data Center 3)

├─ Region: eu-west-1 (Ireland)
│  ├─ AZ: eu-west-1a
│  ├─ AZ: eu-west-1b
│  └─ AZ: eu-west-1c

└─ Region: ap-south-1 (Mumbai)
   ├─ AZ: ap-south-1a
   └─ AZ: ap-south-1b

VPC: Your Private Cloud Network

VPC (Virtual Private Cloud) is your own isolated network in AWS. Think of it as your own private data center in the cloud, but without the hardware headaches.

Key VPC Concepts

What is a VPC?

  • A logically isolated virtual network that you define within a region
  • You control its IP range, subnets, route tables, and gateways
  • Every AWS account gets a default VPC per region, but production setups usually use custom ones

Why do you need a VPC?

  • Isolation: your resources are walled off from other AWS customers
  • Security: you decide exactly what traffic gets in and out
  • Structure: you can separate public-facing web servers from internal databases

Subnets: Dividing Your VPC

A subnet is a range of IP addresses within your VPC. Here's what you need to know:

  • Each subnet lives in exactly one Availability Zone
  • Subnets are either public (reachable from the internet) or private (internal only)
  • A common pattern is one public and one private subnet in each AZ you use

What makes a subnet public or private? It's not a checkbox - it's about routing configuration:

  • Public subnet: its route table has a route (0.0.0.0/0) to an Internet Gateway
  • Private subnet: no route to an Internet Gateway; outbound internet access, if needed, goes through a NAT Gateway

IP Addresses in VPC

When you create a VPC, you define a CIDR block (Classless Inter-Domain Routing). Let me explain:

CIDR Block Example: 10.0.0.0/16

  • The /16 means the first 16 bits are fixed, so every address starts with 10.0
  • The remaining 16 bits are for hosts: 2^16 = 65,536 addresses (10.0.0.0 - 10.0.255.255)

Subnet CIDR Examples (carved out of the VPC):

  • 10.0.1.0/24 - 256 addresses for a public subnet (10.0.1.0 - 10.0.1.255)
  • 10.0.2.0/24 - a second public subnet in another AZ
  • 10.0.10.0/24 - a private subnet for databases

Two types of IP addresses:

  1. Private IPs: Used for internal communication within the VPC (e.g., 10.0.1.45)
  2. Public IPs: Allows internet communication (e.g., 54.123.45.67)

When you launch an EC2 instance:

  • It always gets a private IP from its subnet's range
  • It gets a public IP only if you enable auto-assignment (or attach an Elastic IP)

Internet Gateway

An Internet Gateway is what connects your VPC to the internet. Think of it as the front door:

  • One Internet Gateway attaches to one VPC
  • It lets instances with public IPs send and receive internet traffic
  • A subnet becomes "public" when its route table sends 0.0.0.0/0 to the gateway

Without an Internet Gateway, your VPC is completely isolated from the internet.
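
To make this concrete, here's a minimal CLI sketch of building these pieces; the resource IDs (vpc-xxxx, igw-xxxx, and so on) are placeholders for the IDs each command returns:

# Create the VPC with the CIDR block from the example above
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Carve out a subnet in one AZ
aws ec2 create-subnet --vpc-id vpc-xxxx \
  --cidr-block 10.0.1.0/24 --availability-zone us-east-1a

# Create an Internet Gateway and attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxx --vpc-id vpc-xxxx

# Route 0.0.0.0/0 to the gateway - this is what makes the subnet "public"
aws ec2 create-route-table --vpc-id vpc-xxxx
aws ec2 create-route --route-table-id rtb-xxxx \
  --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxx
aws ec2 associate-route-table --route-table-id rtb-xxxx --subnet-id subnet-xxxx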

Controlling Access: Firewalls in AWS

AWS gives you two levels of firewall control:

1. Network ACLs (NACLs) - Subnet Level

Network Access Control Lists operate at the subnet boundary:

  • Stateless: return traffic must be explicitly allowed by outbound rules
  • Rules are numbered and evaluated in order, lowest number first
  • Support both allow and deny rules

Example NACL rules:

Inbound:
100 - Allow HTTP (port 80) from 0.0.0.0/0
200 - Allow HTTPS (port 443) from 0.0.0.0/0
300 - Allow SSH (port 22) from 203.0.113.0/24
* - Deny all

Outbound:
100 - Allow all traffic to 0.0.0.0/0
* - Deny all

2. Security Groups - Instance Level

Security Groups are virtual firewalls for individual EC2 instances:

  • Stateful: if inbound traffic is allowed, the response is automatically allowed out
  • Allow rules only - anything not explicitly allowed is denied
  • All rules are evaluated; there is no rule ordering

Example Security Group:

Name: web-server-sg
Inbound Rules:
- Type: HTTP, Port: 80, Source: 0.0.0.0/0
- Type: HTTPS, Port: 443, Source: 0.0.0.0/0
- Type: SSH, Port: 22, Source: my-ip-address/32

Outbound Rules:
- Type: All traffic, Destination: 0.0.0.0/0

Best Practice: Use Security Groups as your primary firewall. They’re more flexible and easier to manage.
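
As a sketch, the web-server-sg above could be created from the CLI like this (the VPC ID and your IP are placeholders):

# Create the security group (vpc-xxxx is a placeholder)
aws ec2 create-security-group --group-name web-server-sg \
  --description "Web server firewall" --vpc-id vpc-xxxx

# Open HTTP and HTTPS to the world, SSH only to your own IP
aws ec2 authorize-security-group-ingress --group-id sg-xxxx \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxx \
  --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxx \
  --protocol tcp --port 22 --cidr 203.0.113.5/32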

VPC Architecture Visualization

VPC: 10.0.0.0/16 (us-east-1)
├─ Internet Gateway
├─ Public Subnet 1: 10.0.1.0/24 (us-east-1a)
│  └─ EC2: Web Server (Private: 10.0.1.10, Public: 54.123.45.67)
├─ Public Subnet 2: 10.0.2.0/24 (us-east-1b)
│  └─ EC2: Web Server (Private: 10.0.2.20, Public: 54.123.45.89)
├─ Private Subnet 1: 10.0.10.0/24 (us-east-1a)
│  └─ RDS Database (Private: 10.0.10.50)
└─ Private Subnet 2: 10.0.11.0/24 (us-east-1b)
   └─ RDS Database Replica (Private: 10.0.11.51)

EC2: Your Virtual Servers in the Cloud

EC2 (Elastic Compute Cloud) is AWS’s virtual server service. Instead of buying physical servers, you rent virtual machines by the hour (or second).

Creating an EC2 Instance: Step by Step

Let me walk you through each configuration option and why it matters:

Step 1: Name and Tags

What: Give your instance a name and optional tags

Name: web-server-prod-1
Tags:
  - Environment: Production
  - Application: MyApp
  - Team: DevOps

Why: Tags are crucial for:

  • Cost tracking: see which team, app, or environment is spending what
  • Automation: scripts and policies that find resources by tag
  • Organization: filtering hundreds of instances in the console
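
Tags can be set at launch or added later; for example, from the CLI (the instance ID is a placeholder):

# Tag an existing instance (instance ID is a placeholder)
aws ec2 create-tags --resources i-0123456789abcdef0 \
  --tags Key=Environment,Value=Production Key=Application,Value=MyApp Key=Team,Value=DevOps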

Step 2: Choose an AMI (Operating System)

What: AMI (Amazon Machine Image) is your OS template

Options:

  • Amazon Linux (AWS's own distribution, free tier eligible)
  • Ubuntu
  • Red Hat Enterprise Linux
  • Windows Server

Why it matters: Different AMIs have different package managers, default users, and configurations:

  • Amazon Linux: yum/dnf, default user ec2-user
  • Ubuntu: apt, default user ubuntu
  • Red Hat: yum/dnf, default user ec2-user

Step 3: Instance Type

What: The hardware configuration (CPU, RAM, network)

Common types:

  • t2.micro / t3.micro - small, burstable, free tier eligible
  • m5.large - general purpose, balanced CPU and memory
  • c5.xlarge - compute optimized
  • r5.large - memory optimized

Naming convention: [family][generation].[size]

  • t3.micro = t family, 3rd generation, micro size

Why it matters:

  • Bigger instances cost more - pick the smallest type that handles your load
  • The family sets the CPU/memory/network balance, so match it to your workload

Step 4: Key Pair (Critical!)

What: SSH key pair for secure access to your instance

How to create:

  1. Click “Create new key pair”
  2. Name it (e.g., my-aws-key)
  3. Choose key type:
    • RSA (works everywhere)
    • ED25519 (newer, more secure)
  4. Choose file format:
    • .pem for Linux/Mac
    • .ppk for PuTTY on Windows
  5. Download the key file

IMPORTANT:

  • AWS does not keep a copy of the private key - this download is your only chance
  • Store it somewhere safe (e.g., ~/.ssh/) and never commit it to git

Why it matters: Without this key, you can’t SSH into your instance. Lose it, and you’re locked out.
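
You can also create the key pair from the CLI, which writes the private key straight to disk:

# Create a key pair and save the private key locally
aws ec2 create-key-pair --key-name my-aws-key \
  --query 'KeyMaterial' --output text > ~/.ssh/my-aws-key.pem

# Lock down the permissions so SSH will accept the key
chmod 400 ~/.ssh/my-aws-key.pem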

Step 5: Network Settings

This is where VPC knowledge comes in handy:

VPC Selection:

  • The default VPC is fine for experiments; use a custom VPC for real deployments

Subnet:

  • This picks the AZ your instance lives in - remember, an instance is tied to one AZ

Auto-assign Public IP:

  • Enable it if the instance must be reachable from the internet (e.g., a web server)
  • Disable it for internal-only instances in private subnets

Firewall (Security Groups):

You can create a new security group or use an existing one. For a web server, you’d typically allow:

Security Group: web-server-sg

Inbound Rules:
1. SSH (22) from My IP
   - Why: So you can manage the server
   - Security: Only from your IP, not 0.0.0.0/0

2. HTTP (80) from Anywhere (0.0.0.0/0)
   - Why: Users need to access your website
   
3. HTTPS (443) from Anywhere (0.0.0.0/0)
   - Why: Secure web traffic

4. Custom TCP (8080) from Anywhere (0.0.0.0/0)
   - Why: If your app runs on port 8080

Outbound Rules:
- All traffic to Anywhere (default)
  - Why: Server needs to download packages, make API calls, etc.

Step 6: Configure Storage

What: Attach storage volumes to your instance

Options:

  • Root volume (required): where the OS lives, typically 8-30 GB
  • Additional EBS volumes: attach more for application data
  • Volume types: gp3 (general purpose SSD, a sensible default), io2 (high-performance SSD), st1 (throughput-optimized HDD)

Delete on Termination:

  • If checked, the volume is destroyed when the instance terminates
  • Uncheck it for data volumes that should outlive the instance

Why it matters: You can add more volumes later, but the root volume is where your OS lives.

Connecting to Your EC2 Instance

Once your instance is running:

  1. Get the public IP from the EC2 console (e.g., 54.123.45.67)

  2. Set key permissions (first time only):

    chmod 400 ~/Downloads/my-aws-key.pem
  3. SSH into the instance:

    ssh -i ~/Downloads/my-aws-key.pem ec2-user@54.123.45.67

    Replace ec2-user with the appropriate user:

    • Amazon Linux: ec2-user
    • Ubuntu: ubuntu
    • Red Hat: ec2-user
  4. First login success!

    [ec2-user@ip-10-0-1-10 ~]$ 

Real-World Example: Running a Node.js App from a Private Docker Registry

Let me show you a complete example of deploying a containerized application on EC2. This is what you’ll actually do in production.

Scenario

You have a Node.js application in a private Docker registry (Docker Hub private repo) and want to run it on EC2, accessible from the internet.

Step-by-Step Implementation

1. Launch EC2 Instance

Follow the steps above, but make sure:

  • The security group allows inbound traffic on port 3000 (where the app listens)
  • Auto-assign Public IP is enabled so the app is reachable from the internet
  • You pick an Amazon Linux AMI, since the commands below use yum

2. SSH into the Instance

ssh -i ~/.ssh/my-aws-key.pem ec2-user@54.123.45.67

3. Install Docker

# Update packages
sudo yum update -y

# Install Docker
sudo yum install docker -y

# Start Docker service
sudo systemctl start docker
sudo systemctl enable docker

# Verify the installation
sudo docker --version

4. Add Your User to Docker Group

Why? By default, you need sudo to run Docker commands. This gets annoying and is risky in scripts.

# Add current user to docker group
sudo usermod -aG docker $USER

# Apply the new group (without logging out)
newgrp docker

# Or simply log out and back in for changes to take effect
exit

SSH back in, and now you can run docker ps without sudo!

5. Login to Private Docker Registry

# Login to Docker Hub (or your private registry)
docker login

# Enter your credentials when prompted
Username: your-dockerhub-username
Password: your-dockerhub-password

6. Pull and Run Your Application

# Pull the image from your private repo
docker pull your-username/node-app:latest

# Run the container
docker run -d \
  --name my-node-app \
  -p 3000:3000 \
  --restart unless-stopped \
  your-username/node-app:latest

# Verify it's running
docker ps

Explanation of flags:

  • -d: run detached (in the background)
  • --name my-node-app: a friendly name for docker ps, docker logs, etc.
  • -p 3000:3000: map port 3000 on the host to port 3000 in the container
  • --restart unless-stopped: restart the container if it crashes or the instance reboots

7. Test Your Application

From your local machine:

curl http://54.123.45.67:3000

Or open in a browser: http://54.123.45.67:3000

Complete Script Version

Here’s a user data script you can use when launching the EC2 instance to automate everything:

#!/bin/bash
# This runs automatically when the instance first starts

# Install Docker
yum update -y
yum install docker -y
systemctl start docker
systemctl enable docker

# Add ec2-user to docker group
usermod -aG docker ec2-user

# Pull and run the app (requires credentials to be configured)
su - ec2-user -c "docker login -u YOUR_USERNAME -p YOUR_PASSWORD"
su - ec2-user -c "docker pull your-username/node-app:latest"
su - ec2-user -c "docker run -d --name my-node-app -p 3000:3000 --restart unless-stopped your-username/node-app:latest"

Security Note: Don’t hardcode credentials in user data! Use AWS Secrets Manager or IAM roles instead.
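
As a sketch of the Secrets Manager approach: store the registry password as a secret, give the instance an IAM role allowed to read it, and fetch it at boot (the secret name dockerhub/password is a placeholder):

# Fetch the registry password from Secrets Manager (secret name is a placeholder;
# the instance's IAM role needs secretsmanager:GetSecretValue permission)
DOCKER_PASSWORD=$(aws secretsmanager get-secret-value \
  --secret-id dockerhub/password \
  --query SecretString --output text)

# Pipe it to docker login so the password never appears in the process list
echo "$DOCKER_PASSWORD" | docker login -u YOUR_USERNAME --password-stdin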


What’s Next?

In Part 2: AWS DevOps - Jenkins CI/CD and AWS CLI, we dive into:

  • Working with AWS from the command line using the AWS CLI
  • Building a Jenkins CI/CD pipeline that deploys to EC2

The hands-on Jenkins pipeline example shows a production-grade workflow that builds, tests, containerizes, and deploys your application automatically.

