Kubernetes Guide Part 3: Helm Package Management & Private Registries

K8s Continues: From Helm Charts to Stateful Apps, Secrets, and Operators

1. Helm - Package Manager for Kubernetes

1.1 What is Helm?

Helm is the package manager for Kubernetes - think of it like apt for Ubuntu, yum for Red Hat, or brew for macOS, but for Kubernetes applications.

Official Definition: Helm helps you manage Kubernetes applications. Helm Charts help you define, install, and upgrade even the most complex Kubernetes applications.

The Problem Helm Solves:

Imagine deploying a complex application:

- 5 Deployments
- 5 Services
- 3 ConfigMaps
- 2 Secrets
- 2 StatefulSets
- 1 Ingress
= 18 YAML files to manage! 😱

Without Helm, you'd need to:

  1. Create all 18 YAML files
  2. Apply them in the correct order
  3. Manage dependencies manually
  4. Update each file for different environments
  5. Keep track of what's deployed

With Helm:

helm install myapp myapp-chart
# Done! ✨

1.2 Helm Concepts

Helm Chart

A Chart is a Helm package that contains all the Kubernetes resource definitions needed to run an application.

A Chart contains:

- Chart.yaml - metadata about the chart (name, version, dependencies)
- values.yaml - default configuration values
- templates/ - the Kubernetes manifest templates
- Optional subcharts in charts/

(Section 1.4 shows the full layout.)

Helm Repository

A Repository is a place where charts are stored and shared.

Popular repositories:

- Artifact Hub (https://artifacthub.io) - the central index for public charts
- Bitnami (https://charts.bitnami.com/bitnami) - a large catalog of production-ready charts
- OCI registries (Docker Hub, GitHub Container Registry, etc.) - charts can also be pushed and pulled as OCI artifacts

Helm Release

A Release is an instance of a chart running in your cluster. You can install the same chart multiple times with different release names.

# Two releases of the same chart
helm install dev-db bitnami/mongodb
helm install prod-db bitnami/mongodb
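
Both releases then appear independently in helm list (abbreviated, illustrative output):

helm list
NAME     NAMESPACE  REVISION  STATUS    CHART           APP VERSION
dev-db   default    1         deployed  mongodb-13.6.0  6.0.4
prod-db  default    1         deployed  mongodb-13.6.0  6.0.4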

1.3 Why Use Helm Charts?

1. Package Management - Use Existing Charts

Instead of writing YAML from scratch:

# Search for charts
helm search hub mongodb

# Install a production-ready MongoDB
helm install my-mongodb bitnami/mongodb

Benefits:

- Production-ready defaults maintained by the chart's authors
- Versioned releases you can pin, upgrade, and roll back
- Hundreds of configuration options exposed through values instead of hand-edited YAML

2. Templating Engine - Reusability

The Problem: You need to deploy the same application across multiple environments:

# dev-deployment.yaml
metadata:
  name: myapp-dev
spec:
  replicas: 1
  image: myapp:1.0.0
  env:
    - name: DB_HOST
      value: dev-db.example.com

# staging-deployment.yaml
metadata:
  name: myapp-staging    # Only this changed
spec:
  replicas: 2             # And this
  image: myapp:1.0.0
  env:
    - name: DB_HOST
      value: staging-db.example.com  # And this

Lots of duplication! 🔁

With Helm Templates:

# templates/deployment.yaml
metadata:
  name: {{ .Values.appName }}-{{ .Values.environment }}
spec:
  replicas: {{ .Values.replicaCount }}
  image: {{ .Values.image.name }}:{{ .Values.image.tag }}
  env:
    - name: DB_HOST
      value: {{ .Values.database.host }}

# values-dev.yaml
appName: myapp
environment: dev
replicaCount: 1
image:
  name: myapp
  tag: 1.0.0
database:
  host: dev-db.example.com

# values-prod.yaml
appName: myapp
environment: prod
replicaCount: 5
image:
  name: myapp
  tag: 1.0.0
database:
  host: prod-db.example.com

Deploy to different environments:

helm install myapp-dev ./myapp -f values-dev.yaml
helm install myapp-prod ./myapp -f values-prod.yaml
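
Before installing, you can render the templates locally to confirm what each environment will produce (using the same chart path and values files as above):

# Render without installing anything
helm template myapp ./myapp -f values-dev.yaml

# Compare the two environments' rendered manifests
diff <(helm template myapp ./myapp -f values-dev.yaml) \
     <(helm template myapp ./myapp -f values-prod.yaml)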

3. Sharing Helm Charts

Create once, share everywhere:

# Package your chart
helm package myapp/

# Upload to repository
helm push myapp-1.0.0.tgz oci://registry.example.com/charts

# Others can install it
helm install their-app oci://registry.example.com/charts/myapp

1.4 Helm Chart Structure

mychart/
├── Chart.yaml          # Chart metadata
├── values.yaml         # Default configuration values
├── charts/             # Dependent charts (subcharts)
├── templates/          # Kubernetes manifest templates
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   ├── configmap.yaml
│   ├── _helpers.tpl   # Template helpers
│   └── NOTES.txt      # Usage notes shown after install
├── .helmignore         # Files to ignore when packaging
└── README.md           # Documentation

Chart.yaml

apiVersion: v2
name: myapp
description: A Helm chart for my application
type: application
version: 1.0.0        # Chart version
appVersion: "1.2.3"   # Application version
keywords:
  - web
  - api
maintainers:
  - name: Your Name
    email: you@example.com
dependencies:
  - name: mongodb
    version: 13.x.x
    repository: https://charts.bitnami.com/bitnami

values.yaml

# Default values for myapp
replicaCount: 2

image:
  repository: myapp
  tag: "1.2.3"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  className: nginx
  hosts:
    - host: myapp.example.com
      paths:
        - path: /
          pathType: Prefix

resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 256Mi

templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.fullname" . }}
  labels:
    {{- include "myapp.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "myapp.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "myapp.selectorLabels" . | nindent 8 }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - containerPort: 8080
        resources:
          {{- toYaml .Values.resources | nindent 12 }}

Helm Template Functions: the template above uses include (render a named template from _helpers.tpl), nindent (indent piped output on a new line), and toYaml (turn a values map into YAML). Other functions you'll meet often are default, quote, and required.
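
A small illustrative snippet (hypothetical values, not taken from the chart above):

metadata:
  name: {{ include "myapp.fullname" . }}             # include: render a named template from _helpers.tpl
  labels:
    {{- include "myapp.labels" . | nindent 4 }}      # nindent: newline + indent the piped output
spec:
  replicas: {{ .Values.replicaCount | default 1 }}   # default: fall back when the value is unset
  image: {{ .Values.image.repository | quote }}      # quote: wrap the result in double quotes
  resources:
    {{- toYaml .Values.resources | nindent 4 }}      # toYaml: render a whole values map as YAML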

1.5 Essential Helm Commands

Repository Management

# Add a repository
helm repo add bitnami https://charts.bitnami.com/bitnami

# Update repositories
helm repo update

# List repositories
helm repo list

# Remove a repository
helm repo remove bitnami

Searching Charts

# Search in all added repositories
helm search repo mongodb

# Search in Artifact Hub
helm search hub mongodb

# Search in specific repository
helm search repo bitnami/mongodb

# Show chart versions
helm search repo bitnami/mongodb --versions

Installing Charts

# Install with default values
helm install my-release bitnami/mongodb

# Install with custom values file
helm install my-release bitnami/mongodb -f values.yaml

# Install with inline value overrides
helm install my-release bitnami/mongodb \
  --set auth.rootPassword=secret123 \
  --set replicaCount=3

# Install in specific namespace
helm install my-release bitnami/mongodb -n database

# Dry run (see what would be created)
helm install my-release bitnami/mongodb --dry-run --debug

Managing Releases

# List all releases
helm list
helm ls

# List releases in all namespaces
helm list --all-namespaces

# Get release status
helm status my-release

# Get release values
helm get values my-release

# Get release manifest (actual K8s resources)
helm get manifest my-release

Upgrading Releases

# Upgrade with new values
helm upgrade my-release bitnami/mongodb -f new-values.yaml

# Upgrade to specific chart version
helm upgrade my-release bitnami/mongodb --version 13.6.0

# Upgrade or install (if not exists)
helm upgrade --install my-release bitnami/mongodb

Rolling Back

# View release history
helm history my-release

# Rollback to previous version
helm rollback my-release

# Rollback to specific revision
helm rollback my-release 2

Uninstalling

# Uninstall release
helm uninstall my-release

# Uninstall but keep history
helm uninstall my-release --keep-history

Chart Development

# Create a new chart
helm create mychart

# Lint chart (check for issues)
helm lint mychart/

# Template chart (see generated YAML)
helm template mychart/

# Package chart
helm package mychart/

# Install local chart
helm install my-release ./mychart

2. Practical Demo: MongoDB with Helm on Linode

Let's deploy a production-ready MongoDB cluster using Helm on a Linode Kubernetes cluster.

2.1 Prerequisites

# Ensure kubectl is configured for your Linode cluster
kubectl get nodes

# Add Bitnami repository
helm repo add bitnami https://charts.bitnami.com/bitnami

# Update repository
helm repo update

2.2 Search for MongoDB Chart

# Search in all repos
helm search repo mongodb

# Get specific chart info
helm search repo bitnami/mongodb

# Show all versions
helm search repo bitnami/mongodb --versions

# Get chart details
helm show chart bitnami/mongodb
helm show values bitnami/mongodb
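
A common workflow is to dump the chart's full defaults to a file and keep only the keys you plan to override:

helm show values bitnami/mongodb > mongodb-defaults.yaml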

2.3 Create Custom Values File

Create mongodb-values.yaml:

# Architecture: Deploy as a ReplicaSet (StatefulSet)
architecture: replicaset

# Number of MongoDB replicas
replicaCount: 3

# Authentication settings
auth:
  rootPassword: secret-root-pwd

# Image configuration
image:
  registry: docker.io
  repository: bitnami/mongodb
  tag: latest

# Global settings
global:
  security:
    allowInsecureImages: true

# Disable metrics (optional)
metrics:
  enabled: false

Important Configuration Explained:

- architecture: replicaset - deploys MongoDB as a StatefulSet running a replica set, rather than a single standalone instance
- replicaCount: 3 - three replica-set members (one primary, two secondaries)
- auth.rootPassword - the root password, which the chart also stores in a Kubernetes Secret
- global.security.allowInsecureImages: true - required by recent Bitnami charts when you override the image settings
- metrics.enabled: false - skips the Prometheus metrics sidecar

2.4 Install MongoDB

# Install with custom values
helm install mongodb --values mongodb-values.yaml bitnami/mongodb

# Watch the pods being created
kubectl get pods -w

What Helm Creates:

kubectl get all

# Output:
NAME             READY   STATUS    RESTARTS   AGE
pod/mongodb-0    1/1     Running   0          2m
pod/mongodb-1    1/1     Running   0          1m
pod/mongodb-2    1/1     Running   0          30s

NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
service/mongodb            ClusterIP   10.128.45.12    <none>        27017/TCP   2m
service/mongodb-headless   ClusterIP   None            <none>        27017/TCP   2m

NAME                       READY   AGE
statefulset.apps/mongodb   3/3     2m

Two Services Created:

  1. mongodb (ClusterIP) - For general read/write operations
  2. mongodb-headless (Headless) - For direct pod access

2.5 Understanding StatefulSet URLs

Helm created a StatefulSet with 3 pods and a headless service. Each pod gets a stable DNS name.

Individual Pod DNS Names

mongodb-0.mongodb-headless.default.svc.cluster.local:27017
mongodb-1.mongodb-headless.default.svc.cluster.local:27017
mongodb-2.mongodb-headless.default.svc.cluster.local:27017

DNS Format Breakdown:

<pod-name>.<headless-service-name>.<namespace>.svc.cluster.local:<port>

Example: in mongodb-0.mongodb-headless.default.svc.cluster.local:27017, mongodb-0 is the pod name, mongodb-headless is the headless service, default is the namespace, and 27017 is the MongoDB port.

How Applications Should Connect

Option 1: Use the Regular Service (Recommended)

env:
- name: MONGODB_URL
  value: "mongodb://root:secret-root-pwd@mongodb:27017/mydb"
  #                                      ↑ Regular service name

This automatically load-balances across all MongoDB replicas.

Option 2: Use Full ReplicaSet Connection String

For applications that need to be aware of the entire replica set:

env:
- name: MONGODB_URL
  value: "mongodb://root:secret-root-pwd@mongodb-0.mongodb-headless:27017,mongodb-1.mongodb-headless:27017,mongodb-2.mongodb-headless:27017/mydb?replicaSet=rs0"

Why the full connection string? A replica-set-aware driver needs to know every member so it can discover the current primary, send writes to it, and fail over automatically if a new primary is elected. Listing all three pods plus ?replicaSet=rs0 enables that discovery.

MongoDB Replica Set Architecture:

┌─────────────────────┐
│    mongodb-0        │ ← Primary (accepts writes)
│    (Primary)        │
└──────────┬──────────┘
           │
       Replicates
           │
    ┌──────┴──────┐
    │             │
┌───▼────┐   ┌───▼────┐
│mongodb-│   │mongodb-│ ← Secondaries (accept reads)
│   1    │   │   2    │    Can be promoted to primary
└────────┘   └────────┘
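
To see which member is currently primary, you can query the replica set from one of the pods (a sketch; the root password comes from mongodb-values.yaml above, and mongosh ships in the Bitnami image):

kubectl exec -it mongodb-0 -- mongosh -u root -p secret-root-pwd --quiet \
  --eval 'rs.status().members.forEach(m => print(m.name, "-", m.stateStr))'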

2.6 Deploy Mongo Express

Now let's deploy a web UI to interact with MongoDB.

Create mongo-express.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
      - name: mongo-express
        image: mongo-express
        ports:
        - containerPort: 8081
        env:
        - name: ME_CONFIG_MONGODB_ADMINUSERNAME
          value: root
        - name: ME_CONFIG_MONGODB_ADMINPASSWORD 
          valueFrom:
            secretKeyRef:
              name: mongodb                    # Helm-created secret
              key: mongodb-root-password
        - name: ME_CONFIG_MONGODB_URL
          value: "mongodb://$(ME_CONFIG_MONGODB_ADMINUSERNAME):$(ME_CONFIG_MONGODB_ADMINPASSWORD)@mongodb-0.mongodb-headless:27017"
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081

Key Points:

  1. Secret Reference: Helm automatically created a secret named mongodb with the root password
  2. Connection URL: Connects to mongodb-0.mongodb-headless (the primary)
  3. Environment Variable Substitution: Uses $(VARIABLE) to construct the connection string

Check the Helm-created secret:

kubectl get secret mongodb -o yaml

# Decode the password
kubectl get secret mongodb -o jsonpath='{.data.mongodb-root-password}' | base64 --decode

Deploy Mongo Express:

kubectl apply -f mongo-express.yaml

# Check if it's running
kubectl get pods -l app=mongo-express
kubectl logs -l app=mongo-express
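
Before exposing the UI publicly, you can sanity-check it locally with a port-forward:

kubectl port-forward service/mongo-express-service 8081:8081
# Then open http://localhost:8081 in your browser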

2.7 Install Nginx Ingress Controller

To access Mongo Express from outside the cluster, we need an Ingress Controller.

# Add nginx ingress repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# Update repository
helm repo update

# Install nginx ingress controller
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --set controller.publishService.enabled=true

# Check installation
kubectl get pods -l app.kubernetes.io/name=ingress-nginx

# Get the external IP (LoadBalancer)
kubectl get service nginx-ingress-ingress-nginx-controller

# Wait for EXTERNAL-IP to be assigned (on Linode, this provisions a NodeBalancer)

What publishService.enabled=true Does: the controller publishes the address of its own LoadBalancer Service into the status field of the Ingress resources it manages, so kubectl get ingress shows the external IP instead of the nodes' addresses.

2.8 Create Ingress Resource

Create mongo-express-ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mongo-express
spec:
  ingressClassName: nginx  # replaces the deprecated kubernetes.io/ingress.class annotation
  rules:
  - host: YOUR_HOST_DNS_NAME  # Replace with your domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mongo-express-service
            port:
              number: 8081

Replace YOUR_HOST_DNS_NAME with:

- A domain you own, with an A record pointing at the LoadBalancer's external IP, or
- A wildcard DNS name such as <EXTERNAL-IP>.nip.io, which resolves to the IP embedded in the name

Example with nip.io:

# Get the LoadBalancer IP
EXTERNAL_IP=$(kubectl get service nginx-ingress-ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

echo $EXTERNAL_IP
# Output: 139.144.123.45

# Use nip.io (wildcard DNS service)
# Replace YOUR_HOST_DNS_NAME with: 139.144.123.45.nip.io

Apply the Ingress:

kubectl apply -f mongo-express-ingress.yaml

# Check ingress
kubectl get ingress

# Describe to see details
kubectl describe ingress mongo-express

Access Mongo Express:

# Open in browser
http://YOUR_HOST_DNS_NAME

# Or with nip.io
http://139.144.123.45.nip.io
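
You can also verify the route from a terminal (using the example address above; expect an HTTP response such as 200, or 401 if mongo-express's basic auth is enabled):

curl -I http://139.144.123.45.nip.io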

2.9 Test Data Persistence

Let's verify that data persists even when pods are deleted.

Step 1: Create some data in Mongo Express

1. Open Mongo Express in browser
2. Create a new database: "testdb"
3. Create a collection: "users"
4. Add a document: {"name": "John", "age": 30}

Step 2: Scale down StatefulSet to 0

# Scale down (deletes all pods)
kubectl scale statefulset mongodb --replicas=0

# Verify pods are gone
kubectl get pods

# Wait a moment, then scale back up
kubectl scale statefulset mongodb --replicas=3

# Wait for pods to be ready
kubectl get pods -w

Step 3: Verify data still exists

1. Refresh Mongo Express
2. Navigate to testdb → users
3. Your document should still be there! ✅
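
The same check works from the CLI (a sketch; the database and collection names match Step 1, and the password matches mongodb-values.yaml):

kubectl exec -it mongodb-0 -- mongosh -u root -p secret-root-pwd --quiet \
  --eval 'db.getSiblingDB("testdb").users.find().toArray()'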

Why does data persist?

Pod deleted → PVC remains → Pod recreated → Mounts same PVC → Data intact!

Check the PVCs:

kubectl get pvc

# Output shows PVCs are retained:
NAME                STATUS   VOLUME                                     CAPACITY
datadir-mongodb-0   Bound    pvc-abc123...                             8Gi
datadir-mongodb-1   Bound    pvc-def456...                             8Gi
datadir-mongodb-2   Bound    pvc-ghi789...                             8Gi

Even if you delete the StatefulSet:

kubectl delete statefulset mongodb

# PVCs still exist!
kubectl get pvc

2.10 Cleanup

# Uninstall MongoDB (deletes StatefulSet, Services, Secrets)
helm uninstall mongodb

# Delete PVCs manually (Helm doesn't delete PVCs by default)
# Careful: --all removes every PVC in the current namespace
kubectl delete pvc --all

# Delete Mongo Express
kubectl delete -f mongo-express.yaml

# Delete Ingress
kubectl delete -f mongo-express-ingress.yaml

# Uninstall Nginx Ingress Controller
helm uninstall nginx-ingress

# List remaining resources
kubectl get all

2.11 Useful Helm Commands for This Setup

# List installed releases
helm list

# Get MongoDB release info
helm status mongodb

# Get MongoDB values (what you configured)
helm get values mongodb

# Get all values (including defaults)
helm get values mongodb --all

# View MongoDB release history
helm history mongodb

# Upgrade MongoDB (e.g., increase replicas)
helm upgrade mongodb bitnami/mongodb \
  --set replicaCount=5 \
  --reuse-values

# Rollback if something goes wrong
helm rollback mongodb

3. Pulling Private Docker Images

When using private Docker registries (AWS ECR, Docker Hub private repos, GitLab Registry, etc.), Kubernetes needs credentials to pull images.

3.1 The Problem

containers:
- name: app
  image: 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest

Without credentials:

ErrImagePull: failed to pull image: unauthorized

Solution: Create a Kubernetes Secret of type kubernetes.io/dockerconfigjson

3.2 Two Methods for Creating Secrets

There are two main methods for creating docker-registry secrets:

| Method | Best For | Pros | Cons |
| --- | --- | --- | --- |
| Method 1: Direct kubectl | Single registry | Quick, convenient | One registry at a time |
| Method 2: From config.json | Multiple registries | All registries in one secret | Extra steps |

3.3 Method 1: Direct kubectl create (Convenient)

Best for: Single registry (like AWS ECR or one private Docker Hub repo)

Step 1: Get Registry Credentials

For AWS ECR:

# Get the password (prints to screen)
aws ecr get-login-password --region us-east-1

# Output: eyJwYXlsb2FkIjoiQ... (long base64 string)

For Docker Hub: use your Docker Hub username and either your password or, preferably, an access token generated in your account's security settings.

Step 2: Create Secret Directly

kubectl create secret docker-registry my-ecr-secret \
  --namespace default \
  --docker-server=123456789.dkr.ecr.us-east-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password=<paste-password-from-step-1>

For Docker Hub:

kubectl create secret docker-registry my-dockerhub-secret \
  --namespace default \
  --docker-server=docker.io \
  --docker-username=myusername \
  --docker-password=mypassword

Verify:

kubectl get secret my-ecr-secret

kubectl describe secret my-ecr-secret

3.4 Method 2: From docker config.json (Bundled)

Best for: Multiple registries (ECR + Docker Hub + GitLab, etc.)

Why This Method?

When you run docker login, Docker stores credentials in ~/.docker/config.json:

{
  "auths": {
    "123456789.dkr.ecr.us-east-1.amazonaws.com": {
      "auth": "base64-encoded-credentials"
    },
    "docker.io": {
      "auth": "base64-encoded-credentials"
    },
    "registry.gitlab.com": {
      "auth": "base64-encoded-credentials"
    }
  }
}

One secret = all registries! 🎯
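
Each auth value is just base64 of username:password (or username:token), which you can verify:

echo -n 'myuser:mypass' | base64
# bXl1c2VyOm15cGFzcw==
echo 'bXl1c2VyOm15cGFzcw==' | base64 --decode
# myuser:mypass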

Step 1: Login with Docker

AWS ECR:

aws ecr get-login-password --region us-east-1 | docker login \
  --username AWS \
  --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com

Docker Hub:

docker login docker.io
# Enter username and password when prompted

GitLab Registry:

docker login registry.gitlab.com
# Enter username and personal access token

Verify config.json was updated:

cat ~/.docker/config.json

Step 2: Create Secret from config.json

Option A: Using kubectl create

kubectl create secret docker-registry my-multi-registry-secret \
  --namespace default \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json

Option B: Using YAML (more explicit)

# Base64 encode the entire config.json file
# (GNU coreutils shown; on macOS, plain `base64 -i ~/.docker/config.json` works)
base64 -w 0 ~/.docker/config.json

# Output: ewoJImF1dGhzIjp7CgkJImh0dHBzOi8vaW5kZXguZG9ja2VyLml...

Create docker-secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: my-multi-registry-secret
  namespace: default
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ewoJImF1dGhzIjp7CgkJImh0dHBzOi8vaW5kZXguZG9ja2VyLml...

Apply it:

kubectl apply -f docker-secret.yaml

3.5 Using the Secret in Deployments

Add imagePullSecrets to your Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      imagePullSecrets:
      - name: my-ecr-secret          # Reference the secret here
      containers:
      - name: myapp
        image: 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
        ports:
        - containerPort: 8080

For multiple registries (Method 2):

spec:
  template:
    spec:
      imagePullSecrets:
      - name: my-multi-registry-secret  # One secret for all!
      containers:
      - name: app1
        image: 123456789.dkr.ecr.us-east-1.amazonaws.com/app1:latest
      - name: app2
        image: myusername/privateapp:latest  # Docker Hub
      - name: app3
        image: registry.gitlab.com/mygroup/app3:latest  # GitLab
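
Instead of repeating imagePullSecrets in every manifest, you can attach the secret to the namespace's default ServiceAccount; pods using that ServiceAccount then inherit it automatically:

kubectl patch serviceaccount default -n default \
  -p '{"imagePullSecrets": [{"name": "my-multi-registry-secret"}]}'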

3.6 Important Notes

Namespace Scope

Secrets are namespaced! The secret must be in the same namespace as the Deployment.

# Create secret in specific namespace
kubectl create secret docker-registry my-secret \
  --namespace production \
  --docker-server=... \
  --docker-username=... \
  --docker-password=...

# Use in production namespace
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: production  # Same namespace!
spec:
  template:
    spec:
      imagePullSecrets:
      - name: my-secret

Testing Both Methods

Try Method 1:

# Create with direct method
kubectl create secret docker-registry method1-secret \
  --docker-server=docker.io \
  --docker-username=testuser \
  --docker-password=testpass

# Verify
kubectl get secret method1-secret -o yaml

Try Method 2:

# Login to create config.json
docker login docker.io

# Create from file
kubectl create secret docker-registry method2-secret \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json

# Verify
kubectl get secret method2-secret -o yaml

Compare:

# Decode and compare (they're the same format!)
kubectl get secret method1-secret -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode
kubectl get secret method2-secret -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode

3.7 Common Issues and Solutions

Issue 1: ErrImagePull persists

# Check if secret exists
kubectl get secret my-secret

# Check if secret is referenced correctly
kubectl describe pod <pod-name>

# Check secret contents
kubectl get secret my-secret -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode

# Check namespace matches
kubectl get secret my-secret -n <namespace>

Issue 2: AWS ECR token expires

AWS ECR tokens expire after 12 hours.

Solution 1: Use AWS IAM roles (recommended) - on EKS, IAM Roles for Service Accounts or node instance roles let the kubelet pull from ECR without any pull secret.

Solution 2: Automate token refresh

# One-shot refresh of the ECR secret ($(cat) reads the piped password);
# a CronJob sketch for automating this follows below
aws ecr get-login-password --region us-east-1 | \
kubectl create secret docker-registry ecr-secret \
  --docker-server=... \
  --docker-username=AWS \
  --docker-password=$(cat) \
  --dry-run=client -o yaml | \
kubectl apply -f -
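
A minimal CronJob sketch for automating the refresh (assumptions, all hypothetical names: an image that bundles aws-cli and kubectl, a ServiceAccount with RBAC to create/patch Secrets, and AWS credentials available to the pod):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: ecr-secret-refresh
spec:
  schedule: "0 */8 * * *"   # every 8 hours, well inside the 12-hour token lifetime
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ecr-secret-refresher  # hypothetical; needs RBAC for secrets
          restartPolicy: OnFailure
          containers:
          - name: refresh
            image: your-registry/aws-kubectl:latest  # assumption: bundles aws-cli + kubectl
            command: ["/bin/sh", "-c"]
            args:
            - |
              kubectl create secret docker-registry ecr-secret \
                --docker-server=123456789.dkr.ecr.us-east-1.amazonaws.com \
                --docker-username=AWS \
                --docker-password="$(aws ecr get-login-password --region us-east-1)" \
                --dry-run=client -o yaml | kubectl apply -f -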

Issue 3: Multiple registries, one fails

When using Method 2 with multiple registries, if one credential is wrong, all images from that registry fail.

Debug:

# Extract config.json
kubectl get secret my-multi-registry-secret -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode > /tmp/config.json

# Check each registry
cat /tmp/config.json | jq '.auths'

