This is Part 2 of the AWS Essentials series. If you haven’t already, check out Part 1: AWS Essentials, which covers IAM, VPC, EC2, and deploying containerized applications.
Jenkins Pipeline to EC2 Deployment
Now for the real stuff: automating deployments with Jenkins. We’ll build a production-grade CI/CD pipeline that:
- Tests your code
- Builds a JAR file
- Creates a Docker image
- Pushes to Docker Hub
- Deploys to EC2
- Commits version changes back to Git
Architecture Overview
GitHub Repo (Java Maven App)
↓
Jenkins Pipeline (triggered on push)
↓
1. Run Tests
2. Increment Version
3. Build JAR (Maven)
4. Build Docker Image
5. Push to Docker Hub
↓
6. Deploy to EC2
- SCP docker-compose.yaml and script to EC2
- SSH into EC2
- Run deployment script
↓
7. Commit version increment back to Git
Prerequisites
- Jenkins Setup:
  - Jenkins server running
  - Maven tool configured (maven-3.9.11)
  - Required plugins:
    - SSH Agent Plugin
    - Docker Pipeline Plugin
    - Git Plugin
- Jenkins Credentials:
  - docker-hub: Docker Hub username/password
  - aws-lab-key: SSH private key for EC2
  - deploy-key-jva: SSH key for Git push access
- Shared Library:
  - GitHub repo with reusable pipeline functions
  - Configured in Jenkins (Manage Jenkins → Configure System → Global Pipeline Libraries)
Setting Up Jenkins Credentials
Docker Hub Credentials
- Go to: Jenkins → Manage Jenkins → Manage Credentials
- Click “(global)” domain
- Add Credentials:
  - Kind: “Username with password”
  - ID: docker-hub
  - Username: Your Docker Hub username
  - Password: Your Docker Hub password
EC2 SSH Key
- Add Credentials:
  - Kind: “SSH Username with private key”
  - ID: aws-lab-key
  - Username: ec2-user
  - Private Key: Paste your .pem file contents
Git SSH Key
- Generate an SSH key (if you don’t have one):
  ssh-keygen -t ed25519 -C "jenkins@yourcompany.com"
- Add the public key to your GitHub repo (Settings → Deploy keys)
- Add the private key to Jenkins:
  - Kind: “SSH Username with private key”
  - ID: deploy-key-jva
  - Username: git
  - Private Key: Paste the private key
Jenkins Shared Library Structure
The shared library lives in a separate Git repo and provides reusable pipeline functions.
Directory Structure:
jenkins-shared-library/
├── src/
│ └── com/
│ └── example/
│ └── Docker.groovy # Docker class with methods
└── vars/
├── buildImage.groovy # Build Docker image
├── buildJar.groovy # Build Maven JAR
├── deployApp.groovy # Deploy to EC2 via SSH
├── dockerLogin.groovy # Login to Docker registry
├── dockerPush.groovy # Push Docker image
├── incrementVersionMvn.groovy # Auto-increment Maven version
└── pushVersioIncrement.groovy # Commit version change to Git
Docker.groovy - Core Docker Operations
This is a Groovy class that encapsulates Docker operations:
#!/usr/bin/env groovy
package com.example
class Docker implements Serializable {
    def script

    Docker(script) {
        this.script = script
    }

    def buildDockerImage(String imageName) {
        script.echo "building the docker image..."
        script.sh "docker build -t $imageName ."
    }

    def dockerLogin(String credentialsId) {
        script.withCredentials([script.usernamePassword(
            credentialsId: credentialsId,
            passwordVariable: 'PASS',
            usernameVariable: 'USER'
        )]) {
            // Single quotes: the shell reads USER/PASS from the environment,
            // so the secret is never interpolated into the Groovy string
            script.sh 'echo $PASS | docker login -u $USER --password-stdin'
        }
    }

    def dockerPush(String imageName) {
        script.sh "docker push $imageName"
    }
}
Why a class? It encapsulates related functionality and makes the code reusable and testable.
vars/ Functions - Pipeline Steps
These are global variables available in your Jenkinsfile:
buildImage.groovy:
#!/usr/bin/env groovy
import com.example.Docker
def call(String imageName) {
    return new Docker(this).buildDockerImage(imageName)
}
buildJar.groovy:
#!/usr/bin/env groovy
def call() {
    echo "building the application for branch $BRANCH_NAME"
    sh 'mvn package'
}
dockerLogin.groovy:
#!/usr/bin/env groovy
import com.example.Docker
def call(String credentialsId) {
    return new Docker(this).dockerLogin(credentialsId)
}
dockerPush.groovy:
#!/usr/bin/env groovy
import com.example.Docker
def call(String imageName) {
    return new Docker(this).dockerPush(imageName)
}
incrementVersionMvn.groovy - The Smart Version Manager:
def call() {
    echo 'incrementing app version...'
    sh 'mvn build-helper:parse-version versions:set \
        -DnewVersion=\\\${parsedVersion.majorVersion}.\\\${parsedVersion.minorVersion}.\\\${parsedVersion.nextIncrementalVersion} \
        versions:commit'

    // Extract the new version from pom.xml
    def matcher = readFile('pom.xml') =~ '<version>(.+)</version>'
    def version = matcher[0][1]

    // Set environment variable for use in other stages
    env.IMAGE_NAME = "$version-$BUILD_NUMBER"
    echo "New version is: ${env.IMAGE_NAME}"
}
What’s happening here?
- Uses Maven versions plugin to increment the patch version (1.0.0 → 1.0.1)
- Reads the new version from pom.xml using regex
- Combines the version with the Jenkins build number (e.g., 1.0.1-42)
- Stores the result in the IMAGE_NAME environment variable for Docker tagging (you can dry-run the whole thing locally, as shown below)
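Want to see what this does before wiring it into Jenkins? A minimal sketch, assuming Maven is installed and you run it from the app repo (the sed line mimics the pipeline’s readFile regex):

mvn build-helper:parse-version versions:set \
  -DnewVersion='${parsedVersion.majorVersion}.${parsedVersion.minorVersion}.${parsedVersion.nextIncrementalVersion}' \
  versions:commit

# grab the first <version> tag, like the pipeline's regex does
version=$(sed -n 's:.*<version>\(.*\)</version>.*:\1:p' pom.xml | head -1)
echo "Next image tag would be: ${version}-<BUILD_NUMBER>"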
deployApp.groovy - The Deployment Workhorse:
def call(String server, String shellCommand, ArrayList filesToCopy, String credentialsId, String workspace='.') {
    sshagent([credentialsId]) {
        // Copy files to EC2
        filesToCopy.each { file ->
            sh "scp -o StrictHostKeyChecking=no ${file} ${server}:${workspace}/${file}"
        }
        // Execute deployment script on EC2
        sh "ssh -o StrictHostKeyChecking=no ${server} '${shellCommand}'"
    }
}
What’s happening?
- sshagent activates the SSH key
- StrictHostKeyChecking=no skips the manual host-key confirmation (needed for automation)
- SCP copies the files to EC2
- SSH executes the deployment command remotely
pushVersioIncrement.groovy - Git Commit Automation:
def call(String branch='master', String gitCredentialsId, String origin) {
    sh 'git config --global user.email "jenkins@100xprojects.system" && git config --global user.name "Jenkins CI"'
    sh 'git add .'
    sh 'git commit -m "Bumping version" || echo "No changes to commit"'
    sshagent([gitCredentialsId]) {
        sh "git remote set-url origin ${origin}"
        sh "git push origin HEAD:${branch}"
    }
}
The || echo "No changes to commit" trick: Prevents pipeline failure if there’s nothing to commit.
Application Files
Dockerfile
Creates a lightweight Docker image for the Java application:
FROM amazoncorretto:8-alpine3.17-jre
EXPOSE 8080
COPY ./target/java-maven-app-*.jar /usr/app/
WORKDIR /usr/app
CMD java -jar java-maven-app-*.jar
Why this Dockerfile?
- amazoncorretto:8-alpine3.17-jre: Amazon’s optimized Java runtime, Alpine-based (small image)
- EXPOSE 8080: Documents that the app uses port 8080
- Wildcard java-maven-app-*.jar: Works with any version number
- CMD: Runs when the container starts (a quick local smoke test follows below)
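Before handing the image to the pipeline, you can smoke-test it locally. A quick sketch; the tag and container name are arbitrary:

docker build -t java-app:local-test .
docker run --rm -d -p 8080:8080 --name java-app-test java-app:local-test
curl -s localhost:8080 || echo "app not responding yet (may still be starting)"
docker rm -f java-app-test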
docker-compose.yaml
Defines the application stack:
services:
  java-maven-app:
    image: ${IMAGE}
    ports:
      - "8080:8080"
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: password
    ports:
      - "5432:5432"
Key points:
- ${IMAGE}: Environment variable for dynamic image selection
- Runs the Java app + a PostgreSQL database
- Port mappings make services accessible from the host (you can try the stack locally, as shown below)
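You can bring the same stack up locally with any pushed tag; the tag below is just an example:

IMAGE=whispernet/java-app:1.0.1-1 docker compose -f docker-compose.yaml up -d
docker compose -f docker-compose.yaml ps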
server-script.sh
The deployment script that runs on EC2:
#!/bin/bash
export IMAGE=$1
docker compose -f docker-compose.yaml up -d
echo "Server is running on port 8080"
What it does:
- Takes image name as argument
- Exports it as environment variable (used by docker-compose)
- Starts the containers in detached mode
- The old containers are automatically replaced
Why this approach?
- Simple and reliable
- Docker Compose handles container lifecycle
- Near-zero downtime if you configure health checks (a more defensive sketch follows below)
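For reference, here is a more defensive variant of the script. This is a sketch, not what the pipeline above ships: it fails fast, pulls explicitly, and probes port 8080 before declaring success:

#!/bin/bash
set -euo pipefail
export IMAGE=$1

docker compose -f docker-compose.yaml pull
docker compose -f docker-compose.yaml up -d --remove-orphans

# simple readiness probe; assumes the app answers on port 8080
for i in $(seq 1 30); do
  curl -sf localhost:8080 >/dev/null && { echo "Server is up"; exit 0; }
  sleep 2
done
echo "Server did not become ready in time" >&2
exit 1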
The Complete Jenkinsfile
This is where everything comes together:
#!/usr/bin/env groovy
// Load the shared library from GitHub
library identifier: 'jenkins-shared-library-master@main', retriever: modernSCM(
[$class: 'GitSCMSource',
remote: 'https://github.com/WhisperNet/jenkins-shared-library-master.git'
]
)
pipeline {
    agent any
    tools {
        maven 'maven-3.9.11' // Must match tool name in Jenkins config
    }
    stages {
        stage("test") {
            steps {
                script {
                    sh "mvn test"
                }
            }
        }
        stage("Increment version") {
            steps {
                script {
                    incrementVersionMvn() // Sets env.IMAGE_NAME
                }
            }
        }
        stage("build jar") {
            steps {
                script {
                    echo "Building jar"
                    buildJar()
                }
            }
        }
        stage("build and push docker image") {
            steps {
                script {
                    echo "Building and pushing the docker image"
                    def credentialsId = "docker-hub"
                    buildImage("whispernet/java-app:${env.IMAGE_NAME}")
                    dockerLogin(credentialsId)
                    dockerPush("whispernet/java-app:${env.IMAGE_NAME}")
                }
            }
        }
        stage("deploy to ec2") {
            steps {
                script {
                    echo "Deploying the application"
                    def server = "ec2-user@98.89.35.103"
                    def imageName = "whispernet/java-app:${env.IMAGE_NAME}"
                    def shellCommand = "bash server-script.sh ${imageName}"
                    def filesToCopy = ["server-script.sh", "docker-compose.yaml"]
                    def credentialsId = "aws-lab-key"
                    def workSpace = "/home/ec2-user"
                    deployApp(server, shellCommand, filesToCopy, credentialsId, workSpace)
                }
            }
        }
        stage("Commit incremented version") {
            steps {
                script {
                    echo "Committing the incremented version"
                    def branch = "master"
                    def gitCreds = "deploy-key-jva"
                    def origin = "git@github.com:WhisperNet/java-app-cicd.git"
                    pushVersioIncrement(branch, gitCreds, origin)
                }
            }
        }
    }
}
Creating a Multibranch Pipeline in Jenkins
- Install SSH Agent Plugin:
  - Go to: Manage Jenkins → Manage Plugins
  - Search for “SSH Agent”
  - Install and restart
- Create the Pipeline:
  - New Item → Enter name → Multibranch Pipeline
  - Branch Sources → Add source → Git
  - Project Repository: Your app’s Git URL
  - Credentials: Add your Git credentials
  - Build Configuration: Mode: by Jenkinsfile, Script Path: Jenkinsfile
- Scan Repository:
  - Jenkins scans for branches with a Jenkinsfile
  - Automatically creates a job for each branch
Finding Pipeline Syntax
Jenkins has a built-in tool to generate pipeline code:
- In your pipeline job, click “Pipeline Syntax”
- Select a step (e.g., “sshagent: SSH Agent”)
- Fill in the form
- Click “Generate Pipeline Script”
- Copy the generated code
This is invaluable for learning pipeline syntax!
Deployment Flow Visualization
Developer pushes code
↓
Jenkins detects change
↓
[Stage 1] Run mvn test
↓
[Stage 2] Increment version (1.0.5 → 1.0.6)
↓
[Stage 3] Build JAR (mvn package)
↓
[Stage 4] Build Docker image (1.0.6-42)
↓
[Stage 5] Push to Docker Hub
↓
[Stage 6] Deploy to EC2:
- SCP docker-compose.yaml → EC2
- SCP server-script.sh → EC2
- SSH: bash server-script.sh whispernet/java-app:1.0.6-42
- EC2 pulls image and runs containers
↓
[Stage 7] Commit version to Git
↓
Deployment complete! 🎉
What Happens on the EC2 Server?
When server-script.sh runs:
- Receives the image name: whispernet/java-app:1.0.6-42
- Exports it as a variable: export IMAGE=whispernet/java-app:1.0.6-42
- Docker Compose reads it: image: ${IMAGE} becomes image: whispernet/java-app:1.0.6-42
- Pulls the image from Docker Hub
- Stops old containers (if running)
- Starts new containers with the updated image
Troubleshooting Common Issues
Issue: “Permission denied (publickey)”
- Solution: Check that the SSH key is correct in Jenkins credentials
- Verify: Try manual SSH from Jenkins server:
ssh -i key.pem ec2-user@your-ip
Issue: “docker: command not found” on EC2
- Solution: Docker isn’t installed. SSH in and install it
Issue: “Cannot connect to Docker daemon”
- Solution: EC2 user isn’t in docker group or Docker isn’t running
sudo usermod -aG docker ec2-user
sudo systemctl start docker
Issue: Pipeline fails at version increment
- Solution: Check that pom.xml has the build-helper-maven-plugin configured
Issue: Can’t push to Git
- Solution: SSH key doesn’t have write access. Check GitHub deploy key settings
AWS CLI: Command-Line Power
The AWS CLI lets you manage AWS from your terminal. It’s faster than clicking through the console and essential for automation.
Installation and Configuration
Install AWS CLI (if not already installed):
# On Linux
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
# Verify
aws --version
Configure your credentials:
aws configure
You’ll be prompted for:
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json
Where are these stored?
cat ~/.aws/credentials
# [default]
# aws_access_key_id = AKIAIOSFODNN7EXAMPLE
# aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
cat ~/.aws/config
# [default]
# region = us-east-1
# output = json
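Verify the credentials actually work before going further (the IDs in the output below are placeholders):

aws sts get-caller-identity
# {
#     "UserId": "AIDAI23HZ27SI6FQMGNQ2",
#     "Account": "123456789012",
#     "Arn": "arn:aws:iam::123456789012:user/your-user"
# }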
Creating a Security Group via CLI
Security groups are virtual firewalls. Let’s create one from the command line.
Step 1: List Existing VPCs
aws ec2 describe-vpcs
Output:
{
"Vpcs": [
{
"VpcId": "vpc-0abcd1234efgh5678",
"CidrBlock": "172.31.0.0/16",
"IsDefault": true
}
]
}
Note the VpcId - you’ll need it.
Step 2: Create the Security Group
aws ec2 create-security-group \
--group-name my-web-sg \
--description "Security group for web servers" \
--vpc-id vpc-0abcd1234efgh5678
Output:
{
"GroupId": "sg-0123456789abcdef0"
}
Step 3: Add Firewall Rules
Allow SSH from anywhere:
aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 22 \
--cidr 0.0.0.0/0
Allow HTTP:
aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 80 \
--cidr 0.0.0.0/0
Allow HTTPS:
aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 443 \
--cidr 0.0.0.0/0
Allow custom port 8080:
aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 8080 \
--cidr 0.0.0.0/0
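Since these four rules differ only by port, a small shell loop (same group ID as above) avoids the repetition:

for port in 22 80 443 8080; do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port "$port" \
    --cidr 0.0.0.0/0
done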
Step 4: Verify the Security Group
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0
Output shows all your rules:
{
"SecurityGroups": [
{
"GroupId": "sg-0123456789abcdef0",
"GroupName": "my-web-sg",
"IpPermissions": [
{
"IpProtocol": "tcp",
"FromPort": 22,
"ToPort": 22,
"IpRanges": [{"CidrIp": "0.0.0.0/0"}]
},
{
"IpProtocol": "tcp",
"FromPort": 80,
"ToPort": 80,
"IpRanges": [{"CidrIp": "0.0.0.0/0"}]
}
]
}
]
}
Creating an SSH Key Pair
aws ec2 create-key-pair \
--key-name my-cli-key \
--query 'KeyMaterial' \
--output text > MyKey.pem
What happened?
- Created a key pair in AWS
- --query 'KeyMaterial' extracts just the private key
- --output text outputs it as plain text (not JSON)
- > MyKey.pem saves it to a file
Set permissions:
chmod 400 MyKey.pem
Launching an EC2 Instance via CLI
First, list available subnets:
aws ec2 describe-subnets
Now launch the instance:
aws ec2 run-instances \
--image-id ami-0c55b159cbfafe1f0 \
--count 1 \
--instance-type t3.micro \
--key-name my-cli-key \
--security-group-ids sg-0123456789abcdef0 \
--subnet-id subnet-0abcd1234
Parameters explained:
- --image-id: The AMI (find using aws ec2 describe-images)
- --count: How many instances to launch
- --instance-type: Size of the instance
- --key-name: SSH key for access
- --security-group-ids: Firewall rules
- --subnet-id: Which subnet/AZ to use
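After launching, you usually want to wait until the instance is actually running and then grab its public IP. A sketch with a placeholder instance ID:

aws ec2 wait instance-running --instance-ids i-0abcd1234efgh5678

aws ec2 describe-instances \
  --instance-ids i-0abcd1234efgh5678 \
  --query "Reservations[].Instances[].PublicIpAddress" \
  --output text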
Describing Instances
List all instances:
aws ec2 describe-instances
This outputs A LOT of JSON. Let’s filter it.
Powerful Filtering and Querying
AWS CLI supports two ways to narrow down results:
1. Filters (Server-side)
Filters reduce the data AWS sends back:
Find instances by type:
aws ec2 describe-instances \
--filters "Name=instance-type,Values=t2.micro"
Find instances by tag:
aws ec2 describe-instances \
--filters "Name=tag:Name,Values=web-server-with-docker"
Find instances by multiple AMI IDs:
aws ec2 describe-instances \
--filters "Name=image-id,Values=ami-abc123,ami-def456,ami-ghi789"
Find running instances only:
aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running"
Combine multiple filters:
aws ec2 describe-instances \
--filters \
"Name=instance-type,Values=t2.micro" \
"Name=instance-state-name,Values=running"
2. Queries (Client-side)
Queries use JMESPath to extract specific fields from JSON:
Get only instance IDs:
aws ec2 describe-instances \
--query "Reservations[].Instances[].InstanceId"
Output:
[
"i-0abcd1234efgh5678",
"i-0abcd9999efgh1111"
]
Get instance IDs and states:
aws ec2 describe-instances \
--query "Reservations[].Instances[].[InstanceId, State.Name]" \
--output table
Output:
---------------------------------
| DescribeInstances |
+------------------+------------+
| i-0abcd1234 | running |
| i-0abcd9999 | stopped |
+------------------+------------+
Combine filters and queries:
aws ec2 describe-instances \
--filters "Name=instance-type,Values=t2.micro" \
--query "Reservations[].Instances[].[InstanceId, PublicIpAddress, Tags[?Key=='Name'].Value | [0]]" \
--output table
Complex query - Get instances with specific tag:
aws ec2 describe-instances \
--query "Reservations[].Instances[?Tags[?Key=='Environment' && Value=='Production']].[InstanceId, PrivateIpAddress]"
Changing AWS CLI User/Credentials
There are four ways to switch users:
Method 1: Run aws configure again
aws configure
# Enter new credentials when prompted
This overwrites ~/.aws/credentials.
Method 2: Set specific values
aws configure set aws_access_key_id AKIAI44QH8DHBEXAMPLE
aws configure set aws_secret_access_key je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
aws configure set region us-west-2
Method 3: Environment variables (temporary)
export AWS_ACCESS_KEY_ID=AKIAI44QH8DHBEXAMPLE
export AWS_SECRET_ACCESS_KEY=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
# Now AWS CLI uses these credentials
aws ec2 describe-instances
# Unset when done
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_DEFAULT_REGION
Why environment variables?
- Useful in scripts
- Temporary (doesn’t modify config files)
- Can be used in CI/CD pipelines
Method 4: Named profiles
Best for managing multiple accounts:
# Configure a named profile
aws configure --profile production
# Enter production credentials
aws configure --profile development
# Enter development credentials
# Use a specific profile
aws ec2 describe-instances --profile production
aws s3 ls --profile development
# Or set as default for session
export AWS_PROFILE=production
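Named profiles end up as extra sections in the same files shown earlier. The keys below reuse the example values from above:

cat ~/.aws/credentials
# [default]
# aws_access_key_id = AKIAIOSFODNN7EXAMPLE
# aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
#
# [production]
# aws_access_key_id = AKIAI44QH8DHBEXAMPLE
# aws_secret_access_key = je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY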
IAM Management via CLI
The AWS CLI can manage users, groups, and policies.
Get help for IAM commands:
aws iam help
Create a group:
aws iam create-group --group-name Developers
Create a user:
aws iam create-user --user-name john-doe
Add user to group:
aws iam add-user-to-group \
--user-name john-doe \
--group-name Developers
Verify group membership:
aws iam get-group --group-name Developers
Output:
{
"Group": {
"GroupName": "Developers",
"GroupId": "AGPAI23HZ27SI6FQMGNQ2",
"Arn": "arn:aws:iam::123456789012:group/Developers"
},
"Users": [
{
"UserName": "john-doe",
"UserId": "AIDAI23HZ27SI6FQMGNQ2",
"Arn": "arn:aws:iam::123456789012:user/john-doe"
}
]
}
Attach a policy to a user:
First, find the policy ARN:
aws iam list-policies \
--query "Policies[?PolicyName=='AmazonEC2FullAccess'].Arn" \
--output text
Output: arn:aws:iam::aws:policy/AmazonEC2FullAccess
Attach it:
aws iam attach-user-policy \
--user-name john-doe \
--policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
Attach a policy to a group:
aws iam attach-group-policy \
--group-name Developers \
--policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
List attached group policies:
aws iam list-attached-group-policies --group-name Developers
Create a login profile (console password):
aws iam create-login-profile \
--user-name john-doe \
--password "TempPassword123!" \
--password-reset-required
The --password-reset-required flag forces the user to change their password on first login.
Get user details:
aws iam get-user --user-name john-doe
Create access keys (for CLI/SDK):
aws iam create-access-key --user-name john-doe
Output:
{
"AccessKey": {
"UserName": "john-doe",
"AccessKeyId": "AKIAI44QH8DHBEXAMPLE",
"Status": "Active",
"SecretAccessKey": "je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY"
}
}
Save these credentials securely! You can’t retrieve the secret key again.
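If a key ever leaks (or when rotating), disable it first, then delete it:

# disable without deleting (handy during rotation)
aws iam update-access-key \
  --user-name john-doe \
  --access-key-id AKIAI44QH8DHBEXAMPLE \
  --status Inactive

# remove it for good
aws iam delete-access-key \
  --user-name john-doe \
  --access-key-id AKIAI44QH8DHBEXAMPLE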
Creating Custom IAM Policies
IAM policies are JSON documents that define permissions.
Example Policy Document (AWS-IAM-Policy.json):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["iam:ChangePassword"],
"Resource": "arn:aws:iam::062565762440:user/test-cli-user"
},
{
"Effect": "Allow",
"Action": ["iam:GetAccountPasswordPolicy"],
"Resource": "*"
}
]
}
What this policy does:
- Allows the user to change their own password
- Allows viewing the account password policy
- Nothing else (implicit deny on everything else)
Policy structure:
- Version: Always use “2012-10-17” (the current IAM policy language version)
- Statement: Array of permission statements
- Effect: “Allow” or “Deny”
- Action: What operations are allowed (e.g., iam:CreateUser, s3:GetObject)
- Resource: Which resources the actions apply to
Create the policy:
aws iam create-policy \
--policy-name AWS-CLI-TEST-POLICY \
--policy-document file://AWS-IAM-Policy.json
The file:// prefix tells the AWS CLI to read the policy document from a local file.
Output:
{
"Policy": {
"PolicyName": "AWS-CLI-TEST-POLICY",
"PolicyId": "ANPAI23HZ27SI6FQMGNQ2",
"Arn": "arn:aws:iam::123456789012:policy/AWS-CLI-TEST-POLICY",
"Path": "/",
"DefaultVersionId": "v1",
"AttachmentCount": 0
}
}
Attach the custom policy to a user:
aws iam attach-user-policy \
--user-name test-cli-user \
--policy-arn arn:aws:iam::123456789012:policy/AWS-CLI-TEST-POLICY
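You can check the policy’s effect without actually exercising the permissions by using the IAM policy simulator. With the policy above, iam:ChangePassword against the user’s own ARN should come back allowed and everything else implicitDeny (the account ID is a placeholder):

aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:user/test-cli-user \
  --action-names iam:ChangePassword iam:CreateUser \
  --resource-arns arn:aws:iam::123456789012:user/test-cli-user \
  --query "EvaluationResults[].[EvalActionName, EvalDecision]" \
  --output table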
More Policy Examples
Allow read-only S3 access to a specific bucket:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::my-app-bucket",
"arn:aws:s3:::my-app-bucket/*"
]
}
]
}
Allow EC2 management in a specific region:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:*",
"Resource": "*",
"Condition": {
"StringEquals": {
"ec2:Region": "us-east-1"
}
}
}
]
}
Deny access to production resources:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Action": "*",
"Resource": "*",
"Condition": {
"StringEquals": {
"ec2:ResourceTag/Environment": "Production"
}
}
}
]
}
Useful AWS CLI Commands Cheat Sheet
EC2 Commands
# List all instances
aws ec2 describe-instances
# List only running instances
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"
# Get public IPs of running instances
aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running" \
--query "Reservations[].Instances[].[Tags[?Key=='Name'].Value | [0], PublicIpAddress]" \
--output table
# Start an instance
aws ec2 start-instances --instance-ids i-0abcd1234efgh5678
# Stop an instance
aws ec2 stop-instances --instance-ids i-0abcd1234efgh5678
# Terminate an instance
aws ec2 terminate-instances --instance-ids i-0abcd1234efgh5678
# Get instance types available in a region
aws ec2 describe-instance-types \
--filters "Name=instance-type,Values=t3.*" \
--query "InstanceTypes[].[InstanceType, VCpuInfo.DefaultVCpus, MemoryInfo.SizeInMiB]" \
--output table
Security Group Commands
# List all security groups
aws ec2 describe-security-groups
# Get details of a specific security group
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0
# Delete a rule
aws ec2 revoke-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 22 \
--cidr 0.0.0.0/0
S3 Commands
# List all buckets
aws s3 ls
# List contents of a bucket
aws s3 ls s3://my-bucket/
# Upload a file
aws s3 cp myfile.txt s3://my-bucket/
# Download a file
aws s3 cp s3://my-bucket/myfile.txt ./
# Sync a directory (like rsync)
aws s3 sync ./local-dir s3://my-bucket/remote-dir
IAM Commands
# List all users
aws iam list-users
# List all groups
aws iam list-groups
# Get current user identity
aws sts get-caller-identity
# List policies attached to a user
aws iam list-attached-user-policies --user-name john-doe
Best Practices I’ve Learned
Security
- Never use root credentials: Create IAM users immediately
- Enable MFA: On root and admin accounts at minimum
- Principle of least privilege: Only grant the permissions needed
- Rotate credentials regularly: Set up password and access key rotation policies
- Use IAM roles for EC2: Instead of storing credentials on instances
- Don’t hardcode credentials: Use environment variables or AWS Secrets Manager
- Review security groups regularly: Remove unnecessary open ports
Cost Optimization
- Start small: You can always resize instances
- Use tags religiously: Track costs by project/team/environment
- Stop instances when not needed: Development servers don’t need to run 24/7
- Set up billing alerts: Know when costs exceed thresholds
- Use the right instance type: Don’t pay for compute you don’t need
- Leverage free tier: Great for learning and small projects
High Availability
- Deploy across multiple AZs: Don’t put all eggs in one basket
- Use Auto Scaling Groups: Automatically replace failed instances
- Implement health checks: In load balancers and Auto Scaling
- Separate data from compute: Use EBS snapshots, RDS backups
- Test your disaster recovery: Regularly practice restoring from backups
Automation
- Automate deployments: Like our Jenkins pipeline
- Use Infrastructure as Code: Terraform, CloudFormation
- Script repetitive tasks: AWS CLI is your friend
- Version control everything: Even your infrastructure code
- Use shared libraries: Don’t repeat yourself in Jenkinsfiles
Common Gotchas and How to Avoid Them
1. “I can’t SSH into my instance!”
Possible causes:
- Security group doesn’t allow port 22 from your IP
- Instance is in a private subnet without public IP
- Wrong SSH key
- Wrong username (ec2-user vs ubuntu)
Debug steps:
# Check security group
aws ec2 describe-security-groups --group-ids sg-xxxxx
# Check if instance has public IP
aws ec2 describe-instances --instance-ids i-xxxxx \
--query "Reservations[].Instances[].[PublicIpAddress, PrivateIpAddress]"
# Try verbose SSH to see what's failing
ssh -v -i key.pem ec2-user@ip-address
2. “My application isn’t accessible from the internet!”
Checklist:
- Is the instance in a public subnet?
- Does it have a public IP?
- Does the security group allow the application port?
- Is the application actually running? (docker ps, systemctl status)
- Is the application binding to 0.0.0.0 (not localhost)? (two quick checks below)
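Two quick commands on the instance cover those last two items. ss shows the bind address (0.0.0.0:8080 means all interfaces; 127.0.0.1:8080 means localhost only):

# is anything listening on 8080, and on which address?
sudo ss -tlnp | grep 8080

# does the app answer locally?
curl -s localhost:8080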
3. “AWS CLI says access denied!”
Possible causes:
- Using wrong credentials
- IAM user doesn’t have required permissions
- Region mismatch (resource exists in different region)
Debug:
# Check who you're authenticated as
aws sts get-caller-identity
# Check if it's a permission issue (look for specific permission needed in error)
# Then check user's policies
aws iam list-attached-user-policies --user-name your-username
4. “My Jenkins pipeline fails at SSH step!”
Possible causes:
- SSH key in Jenkins doesn’t match the instance’s key pair
- Security group doesn’t allow SSH from Jenkins server IP
- EC2 IP changed (if not using Elastic IP)
Solution:
# Test SSH manually from Jenkins server
ssh -i /path/to/key.pem ec2-user@ec2-ip
# Check Jenkins SSH Agent plugin is installed
# Verify credential ID matches in Jenkinsfile
5. “Docker says permission denied!”
Solution:
# Add user to docker group
sudo usermod -aG docker $USER
# Either logout/login or use newgrp
newgrp docker
# Verify
docker ps
Conclusion
AWS is massive, but you don’t need to know everything to be productive. Focus on these essentials:
- IAM: Secure your account and manage access
- VPC: Understand networking basics
- EC2: Launch and manage servers
- Security Groups: Control traffic flow
- AWS CLI: Automate everything
Start small, experiment in the free tier, and gradually build more complex architectures. The skills you develop here will serve you well in any cloud environment.
Remember: The cloud is just someone else’s computer. Don’t let the jargon intimidate you. You’ve got this!