From Jenkins to Kubernetes: My Complete CI/CD Adventure


The moment when your pipeline actually deploys to production Kubernetes? Pure magic. Here’s how I got there.


Remember when deploying an app meant SSHing into a server and manually copying files? Yeah, those days are behind us. Today, I’m walking you through something way cooler: building a complete CI/CD pipeline that takes your code from a Git commit all the way to a running pod in AWS EKS (or any Kubernetes cluster, really).

This isn’t just another “hello world” tutorial. This is the messy, real-world journey of configuring Jenkins to talk to Kubernetes, handling authentication headaches, managing Docker registries, and ultimately watching your application automatically deploy itself. Buckle up—this gets detailed.

The Big Picture: What We’re Building

Before we dive into commands and YAML files, let me paint the picture of what we’re creating:

  1. Jenkins as Mission Control: Our CI/CD orchestrator that builds, tests, and deploys
  2. Multiple Kubernetes Targets: We’ll deploy to both AWS EKS and Linode Kubernetes Engine (LKE) to understand platform differences
  3. Two Registry Strategies: Using both private DockerHub and AWS ECR for container storage
  4. Real Security: Not just tutorials that use admin credentials everywhere

By the end, you’ll have a pipeline that:

  1. Runs your tests on every push
  2. Automatically increments the application version
  3. Builds a Docker image and pushes it to a private registry (DockerHub or ECR)
  4. Deploys the new version to a Kubernetes cluster with kubectl
  5. Commits the version bump back to Git

Let’s get started.


Part 1: Teaching Jenkins to Speak Kubernetes (The EKS Way)

The Authentication Challenge

Here’s the first hurdle I hit: Jenkins running in a Docker container doesn’t magically know how to talk to your EKS cluster. You need three things:

  1. kubectl - The Kubernetes command-line tool
  2. aws-iam-authenticator - AWS’s special authentication bridge
  3. A properly configured kubeconfig file - The connection blueprint

Why aws-iam-authenticator? Because EKS uses AWS IAM for cluster authentication. It’s brilliant actually—no separate credential management. If your IAM user can access EKS, kubectl can too.

Step 1: Installing kubectl Inside Jenkins

First, we need to get inside the Jenkins container. Jenkins typically runs as a non-root user, but we need root privileges to install tools:

docker exec -u 0 -it <jenkins_container_id> bash

That -u 0 flag? That’s saying “run as root.” Now we’re inside with superpowers.

Here’s the one-liner to install kubectl:

curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl" && \
chmod +x ./kubectl && \
mv ./kubectl /usr/local/bin/kubectl

What’s happening here?

  1. The inner curl fetches the latest stable version string; the outer curl downloads that kubectl binary for Linux amd64
  2. chmod +x makes the binary executable
  3. mv places it in /usr/local/bin so it’s on the PATH

Quick check: kubectl version --client should now work!

Step 2: Installing aws-iam-authenticator

This is AWS’s authentication helper. Install it similarly:

curl -Lo aws-iam-authenticator https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v0.6.11/aws-iam-authenticator_0.6.11_linux_amd64 && \
chmod +x ./aws-iam-authenticator && \
mv ./aws-iam-authenticator /usr/local/bin

Note: The latest version as of December 2025 is actually v0.7.9, but the concept remains identical. Just swap the version number in the URL.

Verify: aws-iam-authenticator version

Step 3: Creating the Magic Kubeconfig File

Now comes the crucial part—the kubeconfig file tells kubectl how to connect to your cluster. Create this file on your host machine first (not inside the container):

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <your-cluster-cert-data>
    server: <your-eks-cluster-endpoint>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: /usr/local/bin/aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - <your-cluster-name>

Let me break down what’s happening:

  1. clusters points kubectl at your EKS API endpoint and gives it the certificate authority data to trust it
  2. contexts ties that cluster to a user and sets aws as the current-context
  3. users tells kubectl to run aws-iam-authenticator token -i <your-cluster-name> whenever it needs credentials; the token that comes back authenticates each request

Get your certificate data and endpoint from the AWS EKS console or via:

aws eks describe-cluster --name <cluster-name> --query 'cluster.{endpoint:endpoint,ca:certificateAuthority.data}'
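
Alternatively, if the AWS CLI is configured on your host, it can generate an equivalent kubeconfig for you. Recent CLI versions wire in aws eks get-token rather than aws-iam-authenticator, but the result works the same way:

aws eks update-kubeconfig --name <cluster-name> --region <region> --kubeconfig kubeconfig.yaml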

Step 4: Getting the Kubeconfig Into Jenkins

Find where Jenkins stores its configuration:

docker exec -it <jenkins_container_id> bash
cd ~
pwd

This usually returns something like /var/jenkins_home. That’s our target.

Now copy the kubeconfig from your host to the container (create the .kube directory first, since docker cp won’t create it for you):

# On your host machine
docker exec -it <jenkins_container_id> mkdir -p /var/jenkins_home/.kube
docker cp kubeconfig.yaml <jenkins_container_id>:/var/jenkins_home/.kube/config

The .kube directory is where kubectl looks for configuration by default.
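
One gotcha: docker cp copies the file in as root, so the jenkins user may not be able to read it. If kubectl later complains that it can’t open the config, fix the ownership (the official image runs Jenkins as the jenkins user):

docker exec -u 0 -it <jenkins_container_id> chown -R jenkins:jenkins /var/jenkins_home/.kube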

Step 5: AWS Credentials for Jenkins

Here’s the security piece: aws-iam-authenticator needs AWS credentials to generate tokens. Don’t hardcode these!

Quick option: create the AWS credentials file manually (note the keys sit in plain text inside the container):

docker exec -it <jenkins_container_id> bash
mkdir -p /var/jenkins_home/.aws
cat > /var/jenkins_home/.aws/credentials << EOF
[default]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY
EOF

Better approach: Use Jenkins Credentials

In your Jenkinsfile, reference credentials as environment variables:

environment {
    AWS_ACCESS_KEY_ID = credentials('aws_access_key_id')
    AWS_SECRET_ACCESS_KEY = credentials('aws_secret_access_key')
}

Create these credentials in Jenkins:

  1. Navigate to Manage Jenkins → Manage Credentials
  2. Add credentials of type “Secret text”
  3. Use IDs like aws_access_key_id and aws_secret_access_key

Testing the Connection

Inside the Jenkins container or in a pipeline:

kubectl get nodes

If you see your EKS nodes listed, congratulations! Jenkins can now deploy to your EKS cluster.
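
If kubectl get nodes fails with an authentication error, test the authenticator on its own (it needs the AWS credentials from Step 5 in its environment):

aws-iam-authenticator token -i <your-cluster-name>

You should get back a JSON ExecCredential containing a token. If that call fails, the problem is on the AWS side (credentials or IAM permissions) rather than with kubectl or the kubeconfig.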


Part 2: Platform Differences - EKS vs LKE

Here’s something that confused me initially: not all Kubernetes clusters authenticate the same way.

EKS: The AWS IAM Way

As we just set up, EKS uses:

  1. aws-iam-authenticator to exchange AWS credentials for a short-lived cluster token
  2. Your AWS IAM user (or role) as the identity behind every kubectl call
  3. An exec-based kubeconfig that runs the authenticator on demand

This is AWS-specific. The benefit? Unified access control. If someone has EKS permissions in IAM, they can access the cluster. Remove their IAM permissions, and cluster access goes away too.

LKE (Linode Kubernetes Engine): The Simpler Path

Linode (and most other managed Kubernetes platforms) uses a static kubeconfig with embedded credentials. No special authenticator needed.

Setting up LKE deployment in Jenkins:

  1. Download the kubeconfig from Linode Cloud Manager

  2. Create a Secret File credential in Jenkins

    • Go to Jenkins credentials
    • Add “Secret file” type
    • Upload your LKE kubeconfig
    • Give it an ID like lke-kubeconfig
  3. Install the Kubernetes CLI Jenkins Plugin

    • Manage Jenkins → Manage Plugins
    • Search for “Kubernetes CLI”
    • Install without restart
  4. Use in Jenkinsfile:

stage('deploy to LKE') {
    steps {
        script {
            withKubeConfig([credentialsId: 'lke-kubeconfig']) {
                sh 'kubectl get nodes'
                sh 'kubectl apply -f deployment.yaml'
            }
        }
    }
}

The withKubeConfig wrapper automatically sets up the kubeconfig file for the duration of the block. Clean and secure!

Key Takeaway: Understand your platform’s authentication mechanism. EKS requires extra setup but integrates with AWS IAM. Other platforms like LKE, GKE, or AKS have their own approaches—usually simpler but less integrated with their cloud IAM.


Part 3: Security Best Practices for Jenkins and Kubernetes

Let me share some hard-learned lessons about doing this securely.

Don’t Use Root Everywhere

Bad:

docker exec -it jenkins bash  # Logs in as jenkins user
sudo su -  # Escalates to root for everything

Better: Only use root (-u 0) when installing tools. For regular operations, use the jenkins user.

Create Dedicated Service Accounts

For EC2/VMs: Instead of letting Jenkins SSH in as the default ec2-user or ubuntu account, create a dedicated user:

# Create a jenkins user
sudo useradd -m -s /bin/bash jenkins
# Authorize the public key Jenkins will use for SSH
sudo mkdir -p /home/jenkins/.ssh
echo "<jenkins-public-key>" | sudo tee /home/jenkins/.ssh/authorized_keys
sudo chown -R jenkins:jenkins /home/jenkins/.ssh && sudo chmod 700 /home/jenkins/.ssh
# Point your Jenkins SSH credentials at this user instead of ec2-user/ubuntu

For Kubernetes: Don’t use the admin kubeconfig in production!

Create a dedicated ServiceAccount with limited permissions:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-deployer
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-manager
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-deployment-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: jenkins-deployer
  namespace: default
roleRef:
  kind: Role
  name: deployment-manager
  apiGroup: rbac.authorization.k8s.io

Then generate a token for this ServiceAccount and use it in Jenkins instead of admin credentials.
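
On recent clusters (Kubernetes 1.24+), a quick way to do that is with kubectl create token. Here is a sketch, with an arbitrary 24-hour lifetime:

# Request a short-lived token for the ServiceAccount
TOKEN=$(kubectl create token jenkins-deployer --namespace default --duration=24h)

# Add it to a kubeconfig as a dedicated user and context
kubectl config set-credentials jenkins-deployer --token="$TOKEN"
kubectl config set-context jenkins-deployer --cluster=kubernetes --user=jenkins-deployer --namespace=default

Store the resulting kubeconfig in Jenkins as a Secret file credential, just like the LKE one, and let withKubeConfig use it.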

Credential Management Checklist

✅ Use Jenkins credentials store—never hardcode secrets
✅ Use Secret Text for API keys and tokens
✅ Use Secret File for kubeconfigs and certificates
✅ Use Username/Password for registry authentication
✅ Rotate credentials regularly
✅ Use least-privilege principles for IAM roles and Kubernetes RBAC


Part 4: Complete CI/CD Pipeline with Private DockerHub

Time to build something real. Here’s a full pipeline that:

  1. Runs the Maven tests
  2. Increments the application version
  3. Builds the jar and the Docker image
  4. Pushes the image to a private DockerHub repository
  5. Deploys the new version to EKS
  6. Commits the version bump back to Git

Prerequisites

1. DockerHub Secret for Kubernetes:

Your cluster needs to authenticate with DockerHub to pull private images:

kubectl create secret docker-registry dockerhub-key \
  --docker-server=docker.io \
  --docker-username=your-username \
  --docker-password=your-password \
  --namespace=default

This creates an imagePullSecret that pods can reference.

2. Install envsubst in Jenkins:

We’ll use environment variable substitution for dynamic image tags:

docker exec -u 0 -it <jenkins_container_id> bash
apt-get update && apt-get install -y gettext-base

The gettext-base package provides the envsubst command.
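
Quick sanity check of what envsubst actually does (the values here are purely illustrative):

echo 'image: whispernet/java-app-k8s-cicd:${IMAGE_NAME}' | IMAGE_NAME=1.2.3 envsubst
# prints: image: whispernet/java-app-k8s-cicd:1.2.3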

The Complete Jenkinsfile

#!/usr/bin/env groovy

library identifier: 'jenkins-shared-library-master@main', retriever: modernSCM(
    [$class: 'GitSCMSource',
     remote: 'https://github.com/WhisperNet/jenkins-shared-library-master.git']
)

pipeline {
    agent any
    
    tools {
        maven 'maven-3.9.11'
    }
    
    stages {
        stage("test") {
            steps {
                script {
                    sh "mvn test"
                }
            }
        }
        
        stage("Increment version") {
            steps {
                script {
                    incrementVersionMvn()
                }
            }
        }
        
        stage("build jar") {
            steps {
                script {
                    echo "Building jar"
                    buildJar()
                }
            }
        }
        
        stage("build and push docker image") {
            steps {
                script {
                    echo "Building and pushing the docker image"
                    def credentialsId = "docker-hub"
                    buildImage("whispernet/java-app-k8s-cicd:${env.IMAGE_NAME}")
                    dockerLogin(credentialsId)
                    dockerPush("whispernet/java-app-k8s-cicd:${env.IMAGE_NAME}")
                }
            }
        }
        
        stage('deploy to k8s') {
            environment {
                AWS_ACCESS_KEY_ID = credentials('aws_access_key_id')
                AWS_SECRET_ACCESS_KEY = credentials('aws_secret_access_key')
            }
            steps {
                script {
                    echo "Deploying to EKS cluster"
                    sh 'envsubst < java-depl.yaml > java-depl-tmp.yaml'
                    sh 'kubectl apply -f java-depl-tmp.yaml'
                    sh 'rm java-depl-tmp.yaml'
                }
            }
        }
        
        stage("Commit incremented version") {
            steps {
                script {
                    echo "Committing the incremented version"
                    def branch = "master"
                    def gitCreds = "github-repo-access"
                    def origin = "git@github.com:WhisperNet/CI-CD-Pipeline-with-k8s.git"
                    pushVersionIncrement(branch, gitCreds, origin)
                }
            }
        }
    }
}

What makes this pipeline special:

  1. Shared library functions (incrementVersionMvn, buildJar, buildImage, dockerLogin, dockerPush) keep the Jenkinsfile short and reusable
  2. The image tag comes from the dynamically incremented version in env.IMAGE_NAME, so every build is traceable
  3. AWS credentials are injected from the Jenkins credentials store only for the deploy stage
  4. envsubst stamps the new version into the Kubernetes manifest right before kubectl apply
  5. The version bump is committed back to the repo so the next build starts from the correct number

The Kubernetes Deployment Manifest

apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-app-depl
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-app
  template:
    metadata:
      labels:
        app: java-app
    spec:
      imagePullSecrets:
        - name: dockerhub-key  # References our DockerHub secret
      containers:
        - name: java-app-container
          image: whispernet/java-app-k8s-cicd:${IMAGE_NAME}
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: java-app-service
spec:
  type: LoadBalancer  # Creates an AWS ELB
  selector:
    app: java-app
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080

Key points:

  1. imagePullSecrets references the dockerhub-key secret, so the kubelet can pull our private image
  2. The image tag is the ${IMAGE_NAME} placeholder that envsubst fills in at deploy time
  3. The Service is type LoadBalancer, so AWS provisions an ELB and exposes the app on port 8080

When Jenkins runs envsubst < java-depl.yaml > java-depl-tmp.yaml, it replaces ${IMAGE_NAME} with the actual version (like 1.2.3), creating a deployment-ready manifest.


Part 5: Switching to AWS ECR (Elastic Container Registry)

DockerHub is great, but what if you want to keep everything in AWS? Enter ECR.

Setting Up ECR

1. Create an ECR repository:

aws ecr create-repository --repository-name java-app-k8s-cicd

This gives you a repository URL like:
123456789012.dkr.ecr.us-east-1.amazonaws.com/java-app-k8s-cicd

2. Get login credentials:

ECR uses a temporary password that expires after 12 hours. Here’s the magic command:

aws ecr get-login-password --region us-east-1

This returns a password (actually a token) that you can use for Docker login.
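
To try it out by hand before wiring it into Jenkins, pipe the token straight into docker login (using the example repository URL from step 1):

aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com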

3. Create Jenkins credentials for ECR:

In Jenkins, create a “Username with password” credential:

    • Username: AWS (ECR always authenticates with this literal username)
    • Password: the token returned by aws ecr get-login-password
    • Give it an ID your pipeline can reference (for example, ecr-credentials)

Pro tip: For production, automate this! Create a Jenkins job that refreshes the ECR token every 10 hours.

Creating Kubernetes Secret for ECR

The cluster needs its own pull secret for ECR; kubectl handles encoding the credentials for you:

kubectl create secret docker-registry aws-registry-key \
  --docker-server=123456789012.dkr.ecr.us-east-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password=$(aws ecr get-login-password --region us-east-1) \
  --namespace=default
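
Remember, the password baked into this secret is that same 12-hour token, so it goes stale too. A minimal refresh sketch you could drop into a scheduled Jenkins job (same names as above, adjust to your setup):

# Recreate the ECR pull secret with a fresh token
kubectl delete secret aws-registry-key --namespace=default --ignore-not-found
kubectl create secret docker-registry aws-registry-key \
  --docker-server=123456789012.dkr.ecr.us-east-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region us-east-1)" \
  --namespace=default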

Modified Jenkinsfile for ECR

pipeline {
    agent any
    
    environment {
        DOCKER_REPO = '123456789012.dkr.ecr.us-east-1.amazonaws.com/java-app-k8s-cicd'
        AWS_REGION = 'us-east-1'
    }
    
    stages {
        // ... (test, version increment, build jar stages remain the same)
        
        stage("build and push docker image to ECR") {
            environment {
                AWS_ACCESS_KEY_ID = credentials('aws_access_key_id')
                AWS_SECRET_ACCESS_KEY = credentials('aws_secret_access_key')
            }
            steps {
                script {
                    echo "Building and pushing to ECR"
                    
                    // Build the image
                    sh "docker build -t ${DOCKER_REPO}:${env.IMAGE_NAME} ."
                    
                    // Login to ECR
                    sh """
                        aws ecr get-login-password --region ${AWS_REGION} | \
                        docker login --username AWS --password-stdin ${DOCKER_REPO}
                    """
                    
                    // Push the image
                    sh "docker push ${DOCKER_REPO}:${env.IMAGE_NAME}"
                }
            }
        }
        
        stage('deploy to k8s') {
            environment {
                AWS_ACCESS_KEY_ID = credentials('aws_access_key_id')
                AWS_SECRET_ACCESS_KEY = credentials('aws_secret_access_key')
            }
            steps {
                script {
                    echo "Deploying to EKS"
                    sh 'envsubst < java-depl.yaml > java-depl-tmp.yaml'
                    sh 'kubectl apply -f java-depl-tmp.yaml'
                    sh 'rm java-depl-tmp.yaml'
                }
            }
        }
    }
}

Key differences:

  1. The image name is the full ECR repository URL instead of a DockerHub namespace
  2. Login pipes aws ecr get-login-password into docker login --password-stdin, so no long-lived registry password is stored
  3. The deployment manifest references the aws-registry-key pull secret instead of dockerhub-key

Updated Kubernetes Manifest for ECR

apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-app-depl
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-app
  template:
    metadata:
      labels:
        app: java-app
    spec:
      imagePullSecrets:
        - name: aws-registry-key  # Changed from dockerhub-key
      containers:
        - name: java-app-container
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/java-app-k8s-cicd:${IMAGE_NAME}
          ports:
            - containerPort: 8080

Troubleshooting Common Issues

“unauthorized: authentication required” when pulling images

Problem: Kubernetes can’t pull from your private registry.

Solution: Verify your imagePullSecret exists and is in the correct namespace:

kubectl get secrets
kubectl describe secret dockerhub-key

“error: You must be logged in to the server (Unauthorized)”

Problem: kubectl can’t authenticate to the cluster.

Solution for EKS:

  1. Confirm aws-iam-authenticator is installed and on the PATH inside the Jenkins container
  2. Check that the kubeconfig is at /var/jenkins_home/.kube/config and names the right cluster
  3. Make sure AWS credentials are available to the process running kubectl (environment variables or ~/.aws/credentials) and that the IAM identity has access to the cluster

“ImagePullBackOff” in pod status

Problem: Pod can’t pull the Docker image.

Solutions (the describe command after this list will show the exact pull error):

  1. Verify image exists: docker pull <your-image>
  2. Check imagePullSecret is referenced correctly
  3. Ensure the secret is in the same namespace as the pod
  4. For ECR, check if the token expired (recreate the secret)
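
To see the exact reason behind the ImagePullBackOff, look at the pod’s events:

kubectl get pods
kubectl describe pod <pod-name>

The Events section at the bottom spells out whether it’s a missing secret, a bad tag, or an expired ECR token.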

envsubst replaces too many variables

Problem: envsubst replaces ALL $VARIABLE patterns, even ones you don’t want changed.

Solution: Be specific about which variables to substitute:

envsubst '$IMAGE_NAME' < template.yaml > output.yaml

What I Learned Along the Way

1. Authentication complexity is real: Each Kubernetes platform has its quirks. Understanding the auth flow saves hours of debugging.

2. Security is not optional: Even in learning projects, practicing good credential management builds the right habits.

3. Automation tools should be automated: Manually updating ECR tokens? That’s technical debt waiting to happen. Automate it from day one.

4. Infrastructure as Code pays off: Those YAML files and Jenkinsfiles? They’re documentation, disaster recovery, and reproducibility all in one.

5. Start simple, then optimize: Get it working first (even with admin credentials), then refine with proper RBAC and least privilege.


Next Steps and Further Exploration

You now have a working CI/CD pipeline, but here’s where to go next:

Immediate improvements:

  1. Automate the ECR token refresh instead of recreating the secret by hand
  2. Scope envsubst to just the variables you actually want substituted
  3. Swap the admin kubeconfig for the jenkins-deployer ServiceAccount from Part 3

Advanced challenges:

  1. Run the same pipeline against both EKS and LKE, making the target cluster a parameter
  2. Fold the DockerHub and ECR variants into one parameterized pipeline backed by the shared library

Security hardening:

  1. Rotate the AWS keys and registry credentials stored in Jenkins regularly
  2. Tighten IAM policies and Kubernetes RBAC to least privilege
  3. Only use root inside the Jenkins container for tool installation, never in pipeline steps


The Complete Picture

Here’s what your final CI/CD flow looks like:

[Git Push]
    ↓
[Jenkins Webhook Trigger]
    ↓
[Run Tests]
    ↓
[Increment Version]
    ↓
[Build JAR]
    ↓
[Build Docker Image with Version Tag]
    ↓
[Push to DockerHub/ECR]
    ↓
[Substitute Variables in K8s Manifest]
    ↓
[Deploy to EKS/LKE using kubectl]
    ↓
[Kubernetes Pulls Image using imagePullSecret]
    ↓
[LoadBalancer Routes Traffic to New Pods]
    ↓
[Commit Version Bump to Git]

Every push to your repository triggers this entire chain automatically. That’s the power of proper CI/CD.


Resources and Code Repositories

Want to see the complete code? Check out these repositories:

  1. Pipeline and application code: https://github.com/WhisperNet/CI-CD-Pipeline-with-k8s
  2. Jenkins shared library: https://github.com/WhisperNet/jenkins-shared-library-master

Final Thoughts

Building this pipeline taught me more about DevOps than dozens of theoretical tutorials. The authentication headaches, the “why isn’t this working?” moments, the satisfaction when pods finally start running—that’s where real learning happens.

Don’t be discouraged if things don’t work the first time. Kubernetes has many moving parts, and Jenkins adds another layer of complexity. But once you understand how they communicate, you unlock the ability to deploy anything, anywhere, automatically.

Now go build something awesome, break it, fix it, and automate it. That’s the DevOps way.


Got questions or ran into issues? Drop a comment below! I’d love to hear about your own pipeline adventures and the creative solutions you’ve come up with.

