Kubernetes Guide Part 4: Operators, Security, & Microservices


4. Kubernetes Operators

4.1 What are Kubernetes Operators?

Operators are software extensions to Kubernetes that use custom resources to manage applications and their components.

Think of an Operator as an automated SRE (Site Reliability Engineer) that knows how to deploy, manage, and troubleshoot a specific application.

4.2 The Problem Operators Solve

Stateless apps are easy: a Deployment can create, replace, and scale identical pods without any knowledge of the application.

Stateful apps are hard: they need ordered startup, replication setup, backups, failover, and careful upgrades, operational knowledge that Kubernetes does not have out of the box.

Manual Operations Required:

MySQL Cluster Setup:
1. Deploy primary instance
2. Configure replication
3. Deploy secondary instances
4. Set up monitoring
5. Configure backup schedule
6. Handle failover scenarios
7. Perform upgrades carefully
8. Manage schema migrations

With a MySQL Operator:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: my-cluster
spec:
  instances: 3
  secretName: mysql-credentials

That’s it! The Operator handles everything else. 🎉

4.3 How Operators Work

Custom Resource Definitions (CRDs)

Operators extend the Kubernetes API with Custom Resources:

# Before installing operator
kubectl get innodbcluster
# error: the server doesn't have a resource type "innodbcluster"

# After installing MySQL operator
kubectl get innodbcluster
# NAME         INSTANCES   ROUTERS   AGE
# my-cluster   3           1         5m

Control Loop

Operators use the same control loop pattern as Kubernetes controllers:

1. Watch for changes to Custom Resources
2. Compare desired state (what you want)
   with current state (what exists)
3. Take actions to reconcile the difference
4. Repeat
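
To make the pattern concrete, here is the loop sketched as a shell script; real operators implement this in code against the Kubernetes API rather than polling with kubectl, and the resource names below are illustrative only:

# Conceptual reconcile loop (illustration only, not how real operators are built)
while true; do
  desired=$(kubectl get innodbcluster my-cluster -o jsonpath='{.spec.instances}')
  current=$(kubectl get statefulset my-cluster -o jsonpath='{.spec.replicas}')
  if [ "$desired" != "$current" ]; then
    echo "Reconciling: desired=$desired, current=$current"
    kubectl scale statefulset my-cluster --replicas="$desired"
  fi
  sleep 10
done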

Example: MySQL Operator

User creates InnoDBCluster resource (desired: 3 instances)

Operator detects new resource

Operator creates: StatefulSet, Services, ConfigMaps

Operator monitors health

If pod fails → Operator handles failover

If user changes replicas → Operator scales safely
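
Scaling, for example, is just an edit to the custom resource; this sketch assumes the my-cluster InnoDBCluster defined above:

# Change the desired state on the custom resource...
kubectl patch innodbcluster my-cluster --type merge -p '{"spec":{"instances":5}}'

# ...and watch the Operator reconcile the actual state
kubectl get innodbcluster my-cluster -w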

4.4 Operator Components

An Operator typically includes:

  1. Custom Resource Definitions (CRDs)

    • Define new resource types
  2. Controller

    • Watches CRD instances
    • Reconciles state
  3. Domain-specific knowledge

    • How to deploy the application
    • How to handle upgrades
    • How to perform backups
    • How to handle failures

4.5 Common Operator Use Cases

Operators are most often used for:

  • Databases
  • Monitoring
  • Storage
  • Message Queues
  • CI/CD

4.6 Example: Prometheus Operator

Without Operator:

# Prometheus ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    scrape_configs:
    - job_name: 'my-app'
      static_configs:
      - targets: ['my-app:8080']
---
# Prometheus Deployment
apiVersion: apps/v1
kind: Deployment
# ... 50+ lines of YAML
---
# Prometheus Service
---
# ServiceMonitor (manual management)

With Prometheus Operator:

# Install Operator once (via Helm)
helm install prometheus-operator prometheus-community/kube-prometheus-stack

# Then just define what to monitor:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-monitor
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
  - port: metrics
    interval: 30s

The Operator handles the rest: deploying and configuring Prometheus, generating the scrape configuration from your ServiceMonitors, and reloading Prometheus when they change.
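
One detail to watch: a ServiceMonitor selects Services by label and scrapes the named port, so the target Service needs matching labels and a port named metrics. A minimal sketch using the hypothetical names from the example:

apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app          # matched by the ServiceMonitor's selector
spec:
  selector:
    app: my-app
  ports:
  - name: metrics        # the port name referenced by the ServiceMonitor
    port: 8080
    targetPort: 8080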

4.7 Finding Operators: OperatorHub

OperatorHub.io is the central registry for Kubernetes Operators.

# Visit: https://operatorhub.io

# Search for operators:
- Database operators
- Monitoring operators
- Backup operators
- Security operators

Categories include databases, monitoring and logging, networking, security, storage, and streaming and messaging.

4.8 When to Use Operators

Use Operators when:

  • The application is stateful and has real operational complexity (replication, backups, failover, upgrades)
  • You would otherwise perform those operations manually, again and again
  • A mature Operator already exists for the software you run

Examples: database clusters (MySQL, PostgreSQL), Kafka, Elasticsearch, and Prometheus.

Don't need Operators for:

  • Stateless web applications and APIs
  • Anything a plain Deployment, Service, and ConfigMap already handle well

4.9 Installing an Operator (Example)

Using Operator Lifecycle Manager (OLM):

# Install OLM
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.25.0/install.sh | bash -s v0.25.0

# Install an operator (e.g., Prometheus)
kubectl create -f https://operatorhub.io/install/prometheus.yaml

# Verify
kubectl get csv -n operators

Using Helm:

# Many operators are available via Helm
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack

Manually:

# Apply CRDs
kubectl apply -f https://example.com/operator/crds.yaml

# Deploy operator
kubectl apply -f https://example.com/operator/operator.yaml

5. Securing Your Kubernetes Cluster

Security in Kubernetes is a vast topic. Here are the essential practices.

5.1 Authentication & Authorization

We covered RBAC earlier, but as a reminder:

# Create role with minimal permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: developer
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "deployments"]
  verbs: ["get", "list", "create", "update"]

---
# Bind role to user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: dev
subjects:
- kind: User
  name: jane
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io

Best Practices:

  • Grant the least privilege needed; avoid wildcard resources and verbs
  • Prefer namespaced Roles over ClusterRoles where possible
  • Bind roles to groups or service accounts rather than individual users
  • Review RoleBindings and ClusterRoleBindings regularly
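
You can confirm what a binding actually grants with kubectl auth can-i; the user and namespace are the ones from the example:

# Allowed by the developer role
kubectl auth can-i create deployments --as jane -n dev

# Not granted (the role has no "delete" verb)
kubectl auth can-i delete pods --as jane -n dev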

5.2 Network Policies

Control traffic between pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432

This policy: allows ingress to the api pods only from frontend pods on TCP 8080, allows egress from the api pods only to database pods on TCP 5432, and denies all other traffic to or from the selected pods.
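
Network policies are additive, so a common companion (not shown above) is a namespace-wide default deny that the api-allow policy then punches specific holes through:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}        # selects every pod in the namespace
  policyTypes:
  - Ingress
  - Egress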

5.3 Pod Security Standards

Use Pod Security Admission:

apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted

Security Contexts:

securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  fsGroup: 2000
  seccompProfile:
    type: RuntimeDefault
  capabilities:
    drop:
    - ALL
    add:
    - NET_BIND_SERVICE
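
One nuance these fields hide: runAsNonRoot, runAsUser, fsGroup, and seccompProfile can be set at the pod level, while capabilities only exist at the container level. A minimal sketch of where each belongs (image name hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: secure-app
  namespace: production
spec:
  securityContext:              # pod-level settings
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: myapp:v1.2.3         # hypothetical image
    securityContext:            # container-level settings
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL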

5.4 Secrets Management

Don’t store secrets in Git:

# ❌ BAD
apiVersion: v1
kind: Secret
data:
  password: cGFzc3dvcmQxMjM=  # base64 != encryption!

Use an external secrets manager instead, such as AWS Secrets Manager, HashiCorp Vault, Google Secret Manager, or Azure Key Vault, and sync secrets into the cluster with a tool like the External Secrets Operator.

Example with External Secrets Operator:

apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1

---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets
  target:
    name: db-secret
  data:
  - secretKey: password
    remoteRef:
      key: prod/db/password
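
After the operator syncs, you can verify the result; the names below come from the example above:

# The ExternalSecret should report a Ready condition once synced
kubectl get externalsecret db-credentials

# The Kubernetes Secret it creates
kubectl get secret db-secret -o yaml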

5.5 Image Security

Scan images for vulnerabilities:

# Using Trivy
trivy image myapp:latest

# In CI/CD pipeline
docker build -t myapp:latest .
trivy image --exit-code 1 --severity CRITICAL,HIGH myapp:latest
docker push myapp:latest

Use private registries:

imagePullSecrets:
- name: private-registry-secret

Only allow approved registries. Kubernetes has no built-in allowedRegistries setting; this is typically enforced by a policy engine running as an admission controller, such as OPA Gatekeeper or Kyverno. For example, a Kyverno policy:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries
spec:
  validationFailureAction: Enforce
  rules:
  - name: allowed-registries
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Images must come from an approved registry."
      pattern:
        spec:
          containers:
          - image: "docker.io/mycompany/* | gcr.io/my-project/*"

5.6 Additional Security Measures

Enable Audit Logging

# kube-apiserver configuration
--audit-log-path=/var/log/k8s-audit.log
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
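
A minimal audit policy to pair with those flags might look like the sketch below; the coverage choices are illustrative, not a recommendation:

# /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Log secret access at the metadata level only (avoid recording secret contents)
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
# Log full request/response for changes to workloads
- level: RequestResponse
  verbs: ["create", "update", "patch", "delete"]
  resources:
  - group: "apps"
    resources: ["deployments"]
# Everything else at the metadata level
- level: Metadata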

Regular Updates

# Keep Kubernetes updated
# Managed services handle this automatically
# Self-managed: follow upgrade guides

# Update node OS
apt update && apt upgrade

Limit API Server Access

# Use network policies
# Restrict to VPN or specific IPs
# Use API server firewall rules

Monitoring and Alerting

# Install Falco for runtime security
helm install falco falcosecurity/falco

# Monitor for suspicious activity:
- Unexpected process execution
- Unauthorized file access
- Privilege escalation

Hands-On Practice Recommendations

# 1. Build a personal project
# Deploy a full-stack app:
- Frontend (React/Vue)
- Backend API (Node.js/Python)
- Database (PostgreSQL/MongoDB)
- Monitoring (Prometheus/Grafana)

# 2. Break things and fix them
# This is how you truly learn:
- Delete a pod and watch it recreate
- Scale replicas up and down
- Simulate node failures
- Test backup and restore

# 3. Join the community
- Kubernetes Slack: slack.k8s.io
- CNCF events and webinars
- Local Kubernetes meetups
- Contribute to open source projects


From Monolith to Microservices: Deploying Production-Ready Applications on Kubernetes with Helm

When I first started learning about microservices, I was overwhelmed. How do all these independent services talk to each other? How do you manage dozens of YAML files without losing your sanity? And most importantly—how do you actually deploy this mess to production?

After diving deep into a real-world microservices project, I finally had my “aha” moment. Today, I’m sharing everything I learned about deploying microservices on Kubernetes using Helm charts, complete with production-grade best practices that you can implement right away.

What Are Microservices, Really?

Before we dive into deployment strategies, let’s ground ourselves in the fundamentals. Microservices are an architectural pattern where your application is broken down into small, independent services that each handle a specific business capability.

Think of it like a restaurant: instead of one person doing everything (monolith), you have specialized teams—chefs, waiters, bartenders, cashiers (microservices)—each focusing on what they do best.

How Do Microservices Communicate?

This is where things get interesting. In our restaurant analogy, the teams need to communicate. How do they do it? Microservices use three primary communication patterns:

1. API Calls (REST/HTTP or gRPC)

The most straightforward approach. Service A makes a direct request to Service B.

# Example: Frontend service calling the Product Catalog service
GET http://productcatalogservice:3550/products

When to use: Synchronous operations where you need an immediate response (e.g., fetching product details).

Trade-off: Creates tight coupling—if Service B is down, Service A’s request fails.

2. Message Brokers (RabbitMQ, Kafka, NATS)

Services communicate through a middleman (the message broker). Service A publishes a message, Service B subscribes to it.

Order Service → [RabbitMQ Queue] → Email Service
                                → Inventory Service

When to use: Asynchronous operations, event-driven architectures, or when you need to decouple services.

Trade-off: Added complexity with the message broker infrastructure.

3. Service Mesh (Istio, Linkerd, Consul)

A dedicated infrastructure layer that handles service-to-service communication, security, and observability.

When to use: Large-scale microservices deployments (20+ services) where you need advanced traffic management, security, and monitoring.

Trade-off: Significant complexity and resource overhead—overkill for smaller applications.

For the demo project we’ll explore, services communicate primarily via gRPC (a high-performance RPC framework), which falls under the API calls category.

Helm: The Package Manager That Saved My Sanity

When you deploy microservices to Kubernetes, you need Kubernetes manifests—YAML files defining Deployments, Services, ConfigMaps, and more. For a single microservice, you might have 2-3 YAML files. For 10 microservices? That’s 20-30 files minimum. Multiply that across dev, staging, and production environments, and you’re drowning in YAML.

Enter Helm, the package manager for Kubernetes. Helm uses templating to turn those repetitive YAML files into reusable, configurable templates.

Templating: The DRY Principle for Kubernetes

Instead of writing this 10 times:

# frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  template:
    spec:
      containers:
      - name: frontend
        image: gcr.io/google-samples/frontend:v0.8.0
        ports:
        - containerPort: 8080
# cartservice-deployment.yaml (same structure, different values)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cartservice
spec:
  replicas: 2
  template:
    spec:
      containers:
      - name: cartservice
        image: gcr.io/google-samples/cartservice:v0.8.0
        ports:
        - containerPort: 7070

You write one template with placeholders:

# templates/deployment.yaml (Helm template)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}
spec:
  replicas: {{ .Values.appReplicas }}
  template:
    spec:
      containers:
      - name: {{ .Values.appName }}
        image: "{{ .Values.appImage }}:{{ .Values.appVersion }}"
        ports:
        - containerPort: {{ .Values.containerPort }}

Then you create values files with the specific configurations:

# frontend-values.yaml
appName: frontend
appImage: gcr.io/google-samples/frontend
appVersion: v0.8.0
appReplicas: 2
containerPort: 8080

This is the essence of Helm’s power: write once, configure many times.

Two Approaches to Helm Charts: Which Should You Choose?

When designing Helm charts for microservices, you have two main strategies:

Approach 1: One Helm Chart Per Microservice

charts/
├── frontend/
│   ├── Chart.yaml
│   ├── values.yaml
│   └── templates/
├── cartservice/
│   ├── Chart.yaml
│   ├── values.yaml
│   └── templates/
└── emailservice/
    ├── Chart.yaml
    ├── values.yaml
    └── templates/

Pros:

  • Full flexibility: each service's chart can evolve independently
  • Independent versioning and release cadence per service

Cons:

  • The same templates are duplicated across every chart
  • More maintenance overhead as the number of services grows

Approach 2: One Shared Chart with Per-Service Values Files

charts/
└── microservice/      # Generic, reusable chart
    ├── Chart.yaml
    ├── values.yaml
    └── templates/
values/                # Service-specific configurations
├── frontend-values.yaml
├── cart-service-values.yaml
└── email-service-values.yaml

Pros:

  • DRY: one set of templates to maintain for all application services
  • Adding a new microservice is just a new values file
  • Consistent structure and conventions across services

Cons:

  • Less flexibility for services with unusual requirements
  • A template change affects every service that uses the chart

My recommendation: Use the shared chart approach (Approach 2) for application services that follow similar patterns. Create separate charts for infrastructure components like databases (Redis, PostgreSQL) that have unique requirements.

Hands-On: Essential Helm Commands

Let’s walk through the core Helm workflow I use daily:

1. Creating a Helm Chart

helm create microservice

This generates a chart scaffold with Chart.yaml, values.yaml, a charts/ directory for dependencies, and a templates/ directory pre-populated with example manifests.
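
The generated layout looks roughly like this (exact files vary a little between Helm versions):

microservice/
├── Chart.yaml
├── values.yaml
├── .helmignore
├── charts/
└── templates/
    ├── _helpers.tpl
    ├── deployment.yaml
    ├── service.yaml
    ├── serviceaccount.yaml
    ├── ingress.yaml
    ├── hpa.yaml
    ├── NOTES.txt
    └── tests/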

2. Testing Your Templates Locally

Before deploying, always preview what Helm will generate:

# Render templates with your values file
helm template -f values/frontend-values.yaml microservice

This outputs the final YAML that will be applied to Kubernetes. Check for: correct names and image tags, the expected replica count, and properly substituted values (no empty fields where a placeholder failed to resolve).

3. Linting for Errors

# Catch syntax errors and best practice violations
helm lint -f values/frontend-values.yaml microservice

Think of this as your spell-checker for Helm charts. It catches YAML syntax errors, missing required chart metadata, and deviations from chart best practices.

4. Deploying to Kubernetes

# Install a release (first-time deployment)
helm install -f values/frontend-values.yaml frontend-release microservice

Breaking this down: -f values/frontend-values.yaml supplies the service-specific configuration, frontend-release is the release name Helm will track, and microservice is the chart directory to install from.

5. Dry Run: The Safety Net

# Simulate deployment without actually applying changes
helm install --dry-run -f values/redis-values.yaml rediscart charts/redis

This is crucial for production. It validates your configuration against the Kubernetes API server without making any changes. Use it every single time before a real deployment.

Example: Deploying Redis

# Dry run first (check for errors)
helm install --dry-run -f values/redis-values.yaml rediscart charts/redis

# If all looks good, deploy
helm install -f values/redis-values.yaml rediscart charts/redis

# Verify it's running
kubectl get pods -l app=redis-cart
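
After the initial install, configuration changes go out with helm upgrade, and Helm keeps a revision history you can roll back to; the release and chart names below are the ones from the example:

# Roll out updated values to an existing release
helm upgrade -f values/redis-values.yaml rediscart charts/redis

# List revisions and roll back if needed
helm history rediscart
helm rollback rediscart 1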

Helmfile: Managing Multiple Services Like a Pro

Here’s the challenge: if you have 11 microservices (like our demo project), you need to run helm install 11 times. And when you update configurations, you need to helm upgrade 11 times. This gets tedious fast.

Helmfile solves this by letting you declare all your Helm releases in a single file and manage them collectively.

The Helmfile Configuration

# helmfile.yaml
releases:
  - name: rediscart
    chart: charts/redis
    values:
      - values/redis-values.yaml

  - name: emailservice
    chart: charts/microservice
    values:
      - values/email-service-values.yaml

  - name: cartservice
    chart: charts/microservice
    values:
      - values/cart-service-values.yaml

  # ... 8 more services

Essential Helmfile Commands

# Deploy or update all services
helmfile sync

# Show what's currently deployed
helmfile list

# Preview changes before applying
helmfile diff

# Delete all releases
helmfile destroy

Pro tip: Use helmfile sync as your single deployment command. It’s idempotent—run it anytime, and it will converge your cluster to the desired state defined in helmfile.yaml.
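
Worth knowing alongside sync: helmfile apply diffs first and only touches releases that actually changed. Both helmfile diff and helmfile apply depend on the helm-diff plugin being installed:

# Install the helm-diff plugin once
helm plugin install https://github.com/databus23/helm-diff

# Diff, then apply only what changed
helmfile apply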

Real-World Demo: Google’s Online Boutique Microservices App

To solidify all these concepts, I worked with the Online Boutique demo application—a realistic e-commerce platform built with 11 microservices in different languages (Go, Python, Java, Node.js, C#).

The Application Architecture

Here’s what each service does:

| Service          | Language | Purpose                                            |
| ---------------- | -------- | -------------------------------------------------- |
| Frontend         | Go       | Web UI, user sessions, orchestrates backend calls  |
| Cart Service     | C#       | Shopping cart operations with Redis storage        |
| Product Catalog  | Go       | Product info from JSON catalog, search functionality |
| Currency Service | Node.js  | Real-time currency conversion                      |
| Payment Service  | Node.js  | Payment transaction processing (mock)              |
| Shipping Service | Go       | Shipping cost calculation (mock)                   |
| Email Service    | Python   | Order confirmation emails (mock)                   |
| Checkout Service | Go       | Orchestrates checkout: payment + shipping + email  |
| Recommendation   | Python   | Product recommendations based on cart              |
| Ad Service       | Java     | Contextual advertisements                          |
| Redis            | -        | Session storage for cart service                   |

How Services Communicate (The Flow)

When a user checks out:

1. User clicks "Checkout" → Frontend Service
2. Frontend → Checkout Service (orchestrator)
3. Checkout Service makes parallel calls to:
   ├→ Cart Service → Redis (get cart items)
   ├→ Payment Service (charge card)
   ├→ Shipping Service (calculate shipping)
   └→ Email Service (send confirmation)
4. Checkout returns success → Frontend shows confirmation

All communication happens via gRPC over Kubernetes internal networking (ClusterIP services). The frontend service discovers other services using Kubernetes DNS:

# frontend-values.yaml (service discovery via DNS)
containerEnvVars:
  - name: PRODUCT_CATALOG_SERVICE_ADDR
    value: 'productcatalogservice:3550'  # <service-name>:<port>
  - name: CART_SERVICE_ADDR
    value: 'cartservice:7070'
  - name: CHECKOUT_SERVICE_ADDR
    value: 'checkoutservice:5050'
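
Those addresses resolve through cluster DNS to the ClusterIP Services. To see it in action, run a throwaway pod in the same namespace and look up a name (illustrative command):

# Resolve a service name from inside the cluster
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- \
  nslookup productcatalogservice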

The Helm Chart Structure (Flat vs. Hierarchy)

Our project uses a flat structure (recommended):

.
├── helmfile.yaml
├── charts/
│   ├── microservice/       # Shared chart for app services
│   │   ├── Chart.yaml
│   │   ├── values.yaml
│   │   └── templates/
│   │       ├── deployment.yaml
│   │       └── service.yaml
│   └── redis/             # Separate chart for Redis
│       ├── Chart.yaml
│       ├── values.yaml
│       └── templates/
│           ├── deployment.yaml
│           └── service.yaml
└── values/                # Service-specific configs (flat!)
    ├── frontend-values.yaml
    ├── cart-service-values.yaml
    ├── email-service-values.yaml
    └── ... (11 total)

Why flat over hierarchical? Every values file lives in one predictable place, paths in helmfile.yaml stay short, and adding a service means adding one file rather than a new directory tree.

Use hierarchy only when you have 50+ services and need organizational namespaces.

The Magic of the Shared Chart Template

Here’s the simplified deployment.yaml template that powers all 10 app services:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}
spec:
  replicas: {{ .Values.appReplicas }}
  selector:
    matchLabels:
      app: {{ .Values.appName }}
  template:
    metadata:
      labels:
        app: {{ .Values.appName }}
    spec:
      containers:
      - name: {{ .Values.appName }}
        image: "{{ .Values.appImage }}:{{ .Values.appVersion }}"
        ports:
        - containerPort: {{ .Values.containerPort }}
        env:
        {{- range .Values.containerEnvVars }}
        - name: {{ .name | quote }}
          value: {{ .value | quote }}
        {{- end }}

The {{- range }} loop is particularly clever—it dynamically injects environment variables from your values file:

# cart-service-values.yaml
containerEnvVars:
  - name: REDIS_ADDR
    value: 'redis-cart:6379'
  - name: PORT
    value: '7070'

Becomes:

env:
- name: "REDIS_ADDR"
  value: "redis-cart:6379"
- name: "PORT"
  value: "7070"

Why this matters: Add a new environment variable? Just edit the values file. No need to touch the template.

Deploying the Entire Application

With Helmfile, deployment is absurdly simple:

# Clone the repository
git clone https://github.com/WhisperNet/Microservices-Deployment-on-K8s.git
cd Microservices-Deployment-on-K8s

# Deploy all 11 services with one command
helmfile sync

# Watch the magic happen
kubectl get pods -w

Within minutes, you’ll see all services running:

NAME                                   READY   STATUS
frontend-7f5c8b8d9c-x7v2m              1/1     Running
cartservice-6d7f9c8b5d-9k2l1           1/1     Running
productcatalogservice-5c7d8f9b-p3m4n   1/1     Running
...

Access the frontend:

# Get the external IP (LoadBalancer)
kubectl get service frontend

# Or port-forward locally
kubectl port-forward svc/frontend 8080:80
# Visit http://localhost:8080

You now have a fully functional e-commerce application running on Kubernetes!

Production & Security Best Practices: Don’t Skip This

Deploying to production isn’t just about getting pods running—it’s about ensuring reliability, security, and maintainability. Here are the non-negotiables I learned the hard way:

1. Always Use Version Tags (Never :latest)

Bad:

image: myapp:latest  # ❌ Unpredictable

Good:

image: myapp:v1.2.3  # ✅ Reproducible

Why: The :latest tag is mutable—it can point to different images over time. This breaks reproducibility and makes rollbacks a nightmare. Always use semantic versioning or commit SHAs.

2. Kubernetes Knows Pod State, But Not Container Health

Kubernetes can tell if your pod is running, but it doesn’t know if your application inside the container is healthy. That’s where health probes come in.

Liveness Probe: “Is my app alive?”

If this check fails, Kubernetes restarts the container.

livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 10

Options: httpGet (probe an HTTP endpoint), tcpSocket (check that a port accepts connections), exec (run a command in the container and treat exit code 0 as healthy), and grpc (native gRPC health checks on recent Kubernetes versions).
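
For reference, here is what the non-HTTP mechanisms look like; the port and command are illustrative (gRPC services often ship a health-probe binary and check it via exec):

# TCP check: is the port accepting connections?
livenessProbe:
  tcpSocket:
    port: 7070
  initialDelaySeconds: 10
  periodSeconds: 10

# Exec check: run a command and treat exit code 0 as healthy
livenessProbe:
  exec:
    command: ["/bin/grpc_health_probe", "-addr=:7070"]
  initialDelaySeconds: 10
  periodSeconds: 10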

Readiness Probe: “Is my app ready for traffic?”

If this check fails, Kubernetes removes the pod from the Service load balancer (stops sending traffic).

readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5

Real-world scenario: Your app is starting up and needs to connect to a database. Readiness probe fails until the DB connection succeeds. Once ready, traffic flows in.

3. Resource Requests & Limits: Don’t Starve Your Cluster

Define how much CPU and memory your containers need:

resources:
  requests:
    memory: "128Mi"   # Minimum guaranteed
    cpu: "100m"       # 0.1 CPU cores
  limits:
    memory: "256Mi"   # Maximum allowed
    cpu: "200m"       # 0.2 CPU cores

Requests: Kubernetes uses this to schedule your pod on a node with available resources.

Limits: If your container exceeds the memory limit, it gets OOMKilled (Out Of Memory). If it exceeds CPU, it gets throttled.

Pro tip: Start conservative, monitor with Prometheus, then adjust based on actual usage.
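
For the monitoring step, kubectl top (backed by metrics-server) gives a quick read of actual usage to compare against your requests:

# Current CPU/memory usage per pod (requires metrics-server)
kubectl top pod -l app=frontend

# Per-node view
kubectl top node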

4. Never Expose Nodes Directly

Bad:

serviceType: NodePort  # ❌ Exposes all nodes

Good:

serviceType: LoadBalancer  # ✅ Single entry point
# Or use an Ingress Controller (even better)

Why: Exposing NodePorts means every node in your cluster is a potential attack surface. Use a LoadBalancer or Ingress for controlled, secure access.

5. High Availability: Replicas & Multi-Node Distribution

Single replica = single point of failure:

appReplicas: 1  # ❌ If this pod crashes, service is down

Multiple replicas = resilience:

appReplicas: 3  # ✅ Can survive pod failures

But here’s the catch: if all 3 replicas are on the same node, and that node dies, you’re still down.

Solution: Pod Anti-Affinity

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - frontend
      topologyKey: kubernetes.io/hostname

This forces Kubernetes to spread your pods across different nodes.
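
A softer variant, not part of the chart above, uses preferredDuringSchedulingIgnoredDuringExecution: it spreads pods when possible but still schedules them if there are fewer nodes than replicas:

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: frontend
        topologyKey: kubernetes.io/hostname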

Rule of thumb: run at least 2-3 replicas for anything user-facing, and spread them across nodes (and across availability zones if your cluster spans more than one).

6. Organization: Labels & Namespaces

Labels are metadata for grouping and querying resources:

metadata:
  labels:
    app: frontend
    version: v1.2.3
    team: platform
    environment: production

Query by label:

kubectl get pods -l app=frontend
kubectl get pods -l environment=production,team=platform

Namespaces provide logical isolation:

kubectl create namespace production
kubectl create namespace staging

# Deploy to specific namespace
helm install -f values.yaml -n production frontend charts/microservice

Benefits:

  • Environments and teams are logically isolated from each other
  • Resource quotas and limits can be applied per namespace
  • RBAC can be scoped so teams only touch their own namespaces
  • The same release and resource names can exist in different namespaces without collisions
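
To make the quota point concrete, a per-namespace ResourceQuota might look like this (numbers are arbitrary):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "30"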

7. Security: Scan Images & Avoid Root Users

Vulnerability scanning:

# Scan images before deploying
trivy image myapp:v1.2.3

Tools like Trivy, Clair, or Snyk scan for known CVEs (Common Vulnerabilities and Exposures) in your images and dependencies.

Run as non-root user:

# Dockerfile
FROM node:18-alpine

# Create non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Switch to non-root user
USER appuser

COPY --chown=appuser:appgroup . /app
WORKDIR /app

CMD ["node", "server.js"]

Or configure in Kubernetes:

securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  fsGroup: 1000

Why: If an attacker compromises your container, running as root gives them full control. Non-root limits the blast radius.

Good news: Most official images (like postgres, redis, nginx) already run as non-root by default.

8. Keep Your Cluster Updated

Kubernetes evolves rapidly. Security patches, bug fixes, and new features come with each release.

Strategy:

# Drain node (move pods to other nodes)
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data

# Upgrade node (platform-specific, e.g., GKE, EKS, AKS)
# ...

# Mark node as schedulable again
kubectl uncordon node-1

Cadence: Aim to stay within 2-3 minor versions of the latest Kubernetes release.

Key Takeaways & Next Steps

If you’ve made it this far, congratulations! You now understand:

✅ How microservices communicate (API calls, message brokers, service mesh)
✅ Why Helm templating is a game-changer for managing Kubernetes manifests
✅ The trade-offs between dedicated vs. shared Helm charts (shared wins for scalability)
✅ How to use Helmfile to orchestrate multi-service deployments with one command
✅ Real-world deployment patterns from the Online Boutique demo project
✅ Production-grade best practices for security, reliability, and maintainability

What to Do Next

  1. Clone the demo project and deploy it to a local Kubernetes cluster (Minikube, Kind, or Docker Desktop):

    git clone https://github.com/WhisperNet/Microservices-Deployment-on-K8s.git
    cd Microservices-Deployment-on-K8s
    helmfile sync
  2. Experiment with changes:

    • Scale the frontend to 5 replicas (sketched right after this list)
    • Add a new environment variable to the cart service
    • Put the frontend behind an Ingress instead of exposing it with a LoadBalancer Service
  3. Study the Helm templates: Open charts/microservice/templates/deployment.yaml and trace how values flow from values/*.yaml into the final manifests.

  4. Implement security practices: Add resource limits, health probes, and security contexts to the templates.

  5. Level up: Once comfortable, explore Kustomize (for environment-specific patches), ArgoCD (for GitOps), or Istio (for service mesh).
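
For the first experiment, the whole change is one values edit plus a sync; the file and key assume the shared-chart layout described earlier:

# values/frontend-values.yaml
appReplicas: 5

# Converge the cluster to the new desired state
helmfile sync

# Confirm five frontend pods
kubectl get pods -l app=frontend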

Did this guide help clarify microservices deployment for you? I’d love to hear about your own Kubernetes journey—what clicked for you, and what still feels murky? Drop a comment or reach out!

And if you found this valuable, bookmark it. Future-you deploying microservices at 2 AM will thank you. 😉

Happy deploying! 🚀



