10. Ingress Configuration
10.1 The Problem with External Services
NodePort:
http://123.45.67.89:30000 ❌ Ugly URL, exposes port
LoadBalancer:
http://35.123.45.67 ❌ Still an IP address
What we want:
https://myapp.com ✅ Clean, memorable, secure
10.2 What is Ingress?
Ingress is not a Service type - it’s a separate resource that acts as the entry point to your cluster.
Ingress provides:
- Human-readable URLs (myapp.com, api.myapp.com)
- Path-based routing (/api, /admin)
- SSL/TLS termination (HTTPS)
- Virtual hosting (multiple domains on one IP)
Comparison:
WITHOUT Ingress:
User → LoadBalancer1 (35.1.2.3) → Service1 → Pods
LoadBalancer2 (35.4.5.6) → Service2 → Pods
LoadBalancer3 (35.7.8.9) → Service3 → Pods
(Multiple cloud load balancers = $$$)
WITH Ingress:
User → Ingress (myapp.com) → Service1, Service2, Service3 → Pods
(One load balancer + intelligent routing)
10.3 Ingress Controller
Important: An Ingress resource alone does nothing. You need an Ingress Controller to act on it.
What is an Ingress Controller?
- An application running in the cluster that watches Ingress resources
- Evaluates and processes those rules
- Configures the underlying proxy to route and redirect traffic accordingly
Popular Implementations:
- Kubernetes Nginx Ingress Controller (most common)
- Traefik
- HAProxy
- AWS ALB Ingress Controller
- GCE Ingress Controller
Install in Minikube:
minikube addons enable ingress
# Verify
kubectl get pods -n ingress-nginx
10.4 Basic Ingress Example
Let’s expose the Kubernetes dashboard via Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: dashboard-ingress
namespace: kubernetes-dashboard
spec:
rules:
- host: dashboard.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: kubernetes-dashboard
port:
number: 80
Apply:
kubectl apply -f dashboard-ingress.yaml
# Check ingress
kubectl get ingress -n kubernetes-dashboard
# Output:
# NAME CLASS HOSTS ADDRESS PORTS AGE
# dashboard-ingress nginx dashboard.com 192.168.49.2 80 1m
Understanding the ADDRESS field:
- In Minikube: shows the Minikube node's IP
- In cloud Kubernetes: shows the external IP of the cloud load balancer
- This is the IP your domain's DNS record should resolve to
Configure local DNS (for testing):
# Edit hosts file
sudo vim /etc/hosts
# Add entry:
192.168.49.2 dashboard.com
Access:
# Open browser
http://dashboard.com
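If you prefer not to edit /etc/hosts, you can test the Ingress with curl instead; a quick check, assuming the Minikube IP shown above:
# Resolve dashboard.com to the Minikube IP for this request only
curl --resolve dashboard.com:80:192.168.49.2 http://dashboard.com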
10.5 How Ingress Works in Cloud vs Bare Metal
Cloud Environment
When you install an Ingress Controller via Helm/YAML, it includes:
- Deployment for the controller pods
- Service of type LoadBalancer for that deployment
Cloud Load Balancer (provisioned automatically)
↓
Ingress Controller Service (LoadBalancer type)
↓
Ingress Controller Pods
↓
Evaluates Ingress rules
↓
Routes to backend Services (ClusterIP)
↓
Application Pods
The cloud provider sees the LoadBalancer Service and automatically provisions a real load balancer. The Ingress Controller just uses standard Kubernetes Service mechanisms!
Bare Metal Environment
You need to configure an entry point yourself:
Option 1: External Load Balancer
- Set up HAProxy/Nginx outside cluster
- Point it to Ingress Controller NodePort
Option 2: MetalLB
- Software load balancer for bare metal
- Assigns real IPs to LoadBalancer services
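As a sketch of Option 2, recent MetalLB versions (0.13+) are configured through CRDs; a minimal address pool plus Layer 2 advertisement might look like this (the IP range is illustrative and must fit your network):
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
With this in place, LoadBalancer Services (including the Ingress Controller's) receive an IP from the pool.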
10.6 Multiple Paths (Path-Based Routing)
Route different URLs to different services:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-ingress
spec:
rules:
- host: myapp.com
http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: api-service
port:
number: 8080
- path: /admin
pathType: Prefix
backend:
service:
name: admin-service
port:
number: 9090
- path: /
pathType: Prefix
backend:
service:
name: frontend-service
port:
number: 80
Request Flow:
myapp.com/api → api-service
myapp.com/admin → admin-service
myapp.com/ → frontend-service
pathType Options:
- Prefix: Matches the URL prefix (e.g., /api matches /api/users)
- Exact: Matches the exact URL only (e.g., /api doesn't match /api/users)
- ImplementationSpecific: Matching behavior depends on the Ingress Controller
10.7 Multiple Hosts (Virtual Hosting)
Host multiple domains on one cluster:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: multi-host-ingress
spec:
rules:
- host: app1.mycompany.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: app1-service
port:
number: 80
- host: app2.mycompany.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: app2-service
port:
number: 80
- host: api.mycompany.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: api-service
port:
number: 8080
10.8 Configuring TLS/HTTPS
Secure your applications with HTTPS:
Step 1: Create a Secret with TLS certificates
apiVersion: v1
kind: Secret
metadata:
name: myapp-tls
namespace: default
type: kubernetes.io/tls
data:
tls.crt: <base64-encoded-cert>
tls.key: <base64-encoded-key>
Create from files:
kubectl create secret tls myapp-tls \
--cert=path/to/tls.crt \
--key=path/to/tls.key
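If you don't have a certificate yet (e.g., for local testing), a self-signed pair can be generated first and then passed to the command above; a sketch using openssl, where the CN must match the host in your Ingress:
# Generate a self-signed certificate and key valid for 365 days
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=myapp.com"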
Step 2: Reference in Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: tls-ingress
spec:
tls:
- hosts:
- myapp.com
secretName: myapp-tls
rules:
- host: myapp.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: app-service
port:
number: 80
Now accessible via:
https://myapp.com ✅
10.9 Default Backend
Handle requests that don’t match any rules:
kubectl describe ingress dashboard-ingress -n kubernetes-dashboard
# You'll see:
# Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Create a default backend:
apiVersion: v1
kind: Service
metadata:
name: default-backend
spec:
selector:
app: default-backend
ports:
- port: 80
targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: default-backend
spec:
replicas: 1
selector:
matchLabels:
app: default-backend
template:
metadata:
labels:
app: default-backend
spec:
containers:
- name: default-backend
image: gcr.io/google_containers/defaultbackend:1.4
ports:
- containerPort: 8080
Update Ingress:
spec:
defaultBackend:
service:
name: default-backend
port:
number: 80
10.10 Ingress Annotations
Customize Ingress Controller behavior:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-ingress
annotations:
# Rewrite URLs
nginx.ingress.kubernetes.io/rewrite-target: /
# SSL redirect
nginx.ingress.kubernetes.io/ssl-redirect: "true"
# CORS
nginx.ingress.kubernetes.io/enable-cors: "true"
# Rate limiting
nginx.ingress.kubernetes.io/limit-rps: "10"
spec:
# ... rules
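As a concrete example of rewrite-target, the nginx controller supports regex capture groups; a sketch that strips the /api prefix before forwarding (host and service names are illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-rewrite-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /api(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: api-service
            port:
              number: 8080
With this, a request to myapp.com/api/users reaches api-service as /users.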
11. Volumes and Data Persistence
11.1 The Problem
Containers are ephemeral - when they restart, data is lost:
Container writes data → Container crashes
↓
Data is GONE! 💥
Kubernetes doesn’t manage data persistence by default. You are responsible for:
- Choosing storage backend
- Backing up data
- Replicating data
- Managing storage lifecycle
11.2 Storage Requirements for Production
For databases and stateful applications:
- ✅ Available on all nodes - Pods can be scheduled anywhere
- ✅ Survive pod lifecycle - Data persists when pods restart
- ✅ Survive cluster crashes - Data isn’t lost if entire cluster fails
11.3 Volume Types Overview
Kubernetes supports many types of volumes:
Ephemeral volumes (pod lifetime):
- emptyDir - Empty directory that exists as long as the pod exists (see the sketch after this list)
- configMap - Inject ConfigMap data as files
- secret - Inject Secret data as files
Persistent volumes (beyond pod lifetime):
- hostPath - Mounts a directory from the host node
- local - Local storage on a node
- nfs - Network File System
- cephfs, glusterfs - Distributed file systems
- Cloud provider volumes:
  - awsElasticBlockStore (AWS EBS)
  - azureDisk (Azure Disk)
  - gcePersistentDisk (GCP Persistent Disk)
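As a quick illustration of an ephemeral volume, here is a minimal sketch of a pod in which two containers share an emptyDir (names and commands are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: shared-cache-demo
spec:
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /cache/greeting && sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}
The shared directory lives only as long as the pod; delete the pod and the data is gone.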
11.4 Local vs Remote Volumes
Local Volumes (hostPath, local)
volumes:
- name: data
hostPath:
path: /data
type: Directory
Problems with local volumes:
- ❌ Tied to one specific node (violates requirement #1)
- ❌ Doesn’t survive cluster crashes (violates requirement #3)
Use cases for local volumes:
- Non-critical data
- Cache
- Temporary processing
Remote Volumes (NFS, cloud storage)
volumes:
- name: data
nfs:
server: nfs-server.example.com
path: /exports
Advantages:
- ✅ Available on all nodes
- ✅ Survives pod restarts
- ✅ Survives cluster crashes
For database persistence, always use remote storage!
11.5 Persistent Volumes (PV)
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an admin or dynamically provisioned using Storage Classes.
Key Characteristics:
- Cluster-wide resource (not namespaced)
- Lifecycle independent of pods
- Defines storage capacity, access modes, storage class
Example PV:
apiVersion: v1
kind: PersistentVolume
metadata:
name: mongodb-pv
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: standard
hostPath:
path: /data/mongodb
Access Modes:
- ReadWriteOnce (RWO): Volume can be mounted as read-write by a single node
- ReadOnlyMany (ROX): Volume can be mounted as read-only by many nodes
- ReadWriteMany (RWX): Volume can be mounted as read-write by many nodes
Check PVs:
kubectl get pv
kubectl get persistentvolume
11.6 Persistent Volume Claims (PVC)
A PersistentVolumeClaim (PVC) is a request for storage by a user.
Think of it like:
- PV = Storage available in cluster (supply)
- PVC = Application requesting storage (demand)
Example PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mongodb-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: standard
Key Points:
- PVCs are namespaced
- PVC requests specific size and access modes
- Kubernetes binds PVC to a matching PV
Check PVCs:
kubectl get pvc
kubectl get persistentvolumeclaim
11.7 Using PVC in Pods
In Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb
spec:
replicas: 1
selector:
matchLabels:
app: mongodb
template:
metadata:
labels:
app: mongodb
spec:
containers:
- name: mongodb
image: mongo
volumeMounts:
- name: mongodb-storage
mountPath: /data/db
volumes:
- name: mongodb-storage
persistentVolumeClaim:
claimName: mongodb-pvc
Flow:
Pod → PVC (mongodb-pvc) → PV (mongodb-pv) → Actual Storage
11.8 Storage Classes (Dynamic Provisioning)
Manually creating PVs for every application doesn’t scale. Storage Classes enable dynamic provisioning.
Example Storage Class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast-storage
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp3
iopsPerGB: "10"
encrypted: "true"
How it works:
1. Developer creates PVC
2. PVC references StorageClass
3. StorageClass automatically provisions PV
4. PV is bound to PVC
Updated PVC with StorageClass:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mongodb-pvc
spec:
accessModes:
- ReadWriteOnce
storageClassName: fast-storage # References StorageClass
resources:
requests:
storage: 5Gi
Benefits:
- No manual PV creation needed
- Automatic provisioning
- Different storage tiers (fast, slow, encrypted)
Provisioners:
- Internal (kubernetes.io/*): Built into Kubernetes
  - kubernetes.io/aws-ebs
  - kubernetes.io/azure-disk
  - kubernetes.io/gce-pd
- External: Third-party storage systems
- Ceph, GlusterFS, NetApp, etc.
11.9 ConfigMap and Secret as Volumes
ConfigMaps and Secrets can be mounted as files in containers (not just env variables).
Practical Example: Mosquitto MQTT Broker
Create ConfigMap with config file:
apiVersion: v1
kind: ConfigMap
metadata:
name: mosquitto-config-file
data:
mosquitto.conf: |
log_dest stdout
log_type all
log_timestamp true
listener 9001
Create Secret with secret file:
apiVersion: v1
kind: Secret
metadata:
name: mosquitto-secret-file
type: Opaque
data:
secret.file: |
VGVjaFdvcmxkMjAyMyEgLW4K
Mount in Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mosquitto
spec:
replicas: 1
selector:
matchLabels:
app: mosquitto
template:
metadata:
labels:
app: mosquitto
spec:
containers:
- name: mosquitto
image: eclipse-mosquitto:2.0
ports:
- containerPort: 1883
volumeMounts:
- name: mosquitto-config
mountPath: /mosquitto/config
- name: mosquitto-secret
mountPath: /mosquitto/secret
readOnly: true
volumes:
- name: mosquitto-config
configMap:
name: mosquitto-config-file
- name: mosquitto-secret
secret:
secretName: mosquitto-secret-file
What happens:
/mosquitto/config/mosquitto.conf ← ConfigMap data
/mosquitto/secret/secret.file ← Secret data
Verify:
kubectl exec -it <mosquitto-pod> -- ls /mosquitto/config
kubectl exec -it <mosquitto-pod> -- cat /mosquitto/config/mosquitto.conf
12. StatefulSet for Stateful Applications
12.1 Stateful vs Stateless Applications
Stateless Applications
- Don’t store data between requests
- Each request is independent
- Examples: Web servers, REST APIs, frontend apps
- Easy to scale horizontally
Request 1 → Pod A → Response
Request 2 → Pod B → Response (doesn't need data from Request 1)
Request 3 → Pod A → Response
Stateful Applications
- Store and manage data
- Depend on previous data/state
- Examples: Databases, message queues, distributed caches
- Complex to scale
Write → Pod A (Master) → Must sync to replicas
Read → Pod B (Slave) → Must have latest data
12.2 Deployment vs StatefulSet
| Feature | Deployment | StatefulSet |
|---|---|---|
| Identity | Random (pod-7f8d9c-xk4p) | Predictable (mysql-0, mysql-1) |
| Order | Created randomly/parallel | Created sequentially |
| Storage | Shared or ephemeral | Individual persistent storage |
| Network | Load balanced randomly | Stable network identity |
| Scaling | Easy | Complex |
| Use Case | Stateless apps | Databases, queues |
12.3 StatefulSet Example
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
spec:
serviceName: mysql-headless
replicas: 3
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:8.0
ports:
- containerPort: 3306
volumeMounts:
- name: data
mountPath: /var/lib/mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: password
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: standard
resources:
requests:
storage: 10Gi
What gets created:
Pods:
- mysql-0 (with PVC data-mysql-0)
- mysql-1 (with PVC data-mysql-1)
- mysql-2 (with PVC data-mysql-2)
Each pod has:
- Stable name
- Individual persistent storage
- Stable network identity
12.4 Sticky Identity
StatefulSet pods maintain sticky identity across restarts:
Before restart:
mysql-0 (10.0.0.5) with PVC data-mysql-0
Pod crashes/restarts:
mysql-0 (10.0.0.12) with SAME PVC data-mysql-0
↑ New IP, but same name and storage!
Pod Name Format:
$(statefulset-name)-$(ordinal)
mysql-0, mysql-1, mysql-2, ...
12.5 Ordered Creation and Deletion
Creation Order:
1. mysql-0 created → waits until Running
2. mysql-1 created only after mysql-0 is Running
3. mysql-2 created only after mysql-1 is Running
Deletion Order:
1. mysql-2 deleted
2. mysql-1 deleted
3. mysql-0 deleted
This ensures:
- Master is created first
- Slaves can connect to master
- Safe shutdown order
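Scaling follows the same ordering: scaling up appends the next ordinal, scaling down removes the highest ordinal first. For example:
# Scale up: creates mysql-3, then mysql-4 (one at a time)
kubectl scale statefulset mysql --replicas=5
# Scale down: deletes mysql-4 first, then mysql-3
kubectl scale statefulset mysql --replicas=3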
12.6 StatefulSet with Headless Service
StatefulSets require a headless service for network identity:
apiVersion: v1
kind: Service
metadata:
name: mysql-headless
spec:
clusterIP: None
selector:
app: mysql
ports:
- port: 3306
targetPort: 3306
Each pod gets a DNS entry:
mysql-0.mysql-headless.default.svc.cluster.local
mysql-1.mysql-headless.default.svc.cluster.local
mysql-2.mysql-headless.default.svc.cluster.local
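You can verify these DNS entries from inside the cluster with a throwaway pod; a quick check using busybox's built-in nslookup:
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- \
  nslookup mysql-0.mysql-headless.default.svc.cluster.local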
12.7 Scaling Challenges
Master-Slave Architecture:
┌─────────────┐
│ mysql-0 │ ← Master (accepts writes)
│ (Master) │
└──────┬──────┘
│
┌───┴────┐
│ │
┌──▼──┐ ┌──▼──┐
│mysql│ │mysql│ ← Slaves (read-only)
│ -1 │ │ -2 │
└─────┘ └─────┘
Challenges:
- Only master can write - must identify which pod is master
- Data synchronization - replicas must sync with master
- Split-brain - handle network partitions
- Failover - promote slave to master if master fails
This is why deploying databases in Kubernetes is complex!
12.8 Individual Pod Endpoints
Unlike Deployments, you can access individual StatefulSet pods:
# Access master directly
mysql -h mysql-0.mysql-headless.default.svc.cluster.local
# Access specific slave
mysql -h mysql-1.mysql-headless.default.svc.cluster.local
Still need a regular ClusterIP service for load-balanced reads:
apiVersion: v1
kind: Service
metadata:
name: mysql-read
spec:
selector:
app: mysql
ports:
- port: 3306
targetPort: 3306
13. Managed Kubernetes Services
13.1 Self-Managed vs Managed Kubernetes
Self-Managed (DIY)
You create and operate the cluster from scratch:
✅ Full control over everything
✅ Can customize anything
❌ Must set up control plane nodes
❌ Must configure networking, security
❌ Responsible for upgrades, patches
❌ Must handle high availability
❌ Time-consuming and complex
When to use:
- Specific compliance requirements
- On-premises infrastructure
- Learning Kubernetes internals
Managed Kubernetes Services
Cloud provider manages control plane:
✅ Control plane managed by provider
✅ Automatic updates and patches
✅ Built-in high availability
✅ Integrated with cloud services
✅ Quick to set up
✅ Only pay for worker nodes
❌ Less control over control plane
❌ Vendor lock-in
13.2 How Managed Services Work
┌─────────────────────────────────────┐
│ Cloud Provider Responsibility │
│ │
│ ┌───────────────────────────────┐ │
│ │ Control Plane Nodes │ │
│ │ - API Server │ │
│ │ - Scheduler │ │
│ │ - Controller Manager │ │
│ │ - etcd │ │
│ └───────────────────────────────┘ │
│ (Managed, HA, Auto-updated) │
└─────────────────────────────────────┘
↓
┌─────────────────────────────────────┐
│ Your Responsibility │
│ │
│ ┌──────┐ ┌──────┐ ┌──────┐ │
│ │Worker│ │Worker│ │Worker│ │
│ │Node 1│ │Node 2│ │Node 3│ │
│ └──────┘ └──────┘ └──────┘ │
│ (You configure, scale, pay for) │
└─────────────────────────────────────┘
What you manage:
- Worker nodes (size, count)
- Application workloads
- Monitoring and logging
- Network policies
- Storage configuration
What cloud provider manages:
- Control plane (free or low cost)
- etcd backups
- Kubernetes version updates
- Control plane high availability
- API server endpoint
13.3 Popular Managed Kubernetes Services
AWS EKS (Elastic Kubernetes Service)
# Create cluster
eksctl create cluster \
--name my-cluster \
--region us-west-2 \
--nodegroup-name standard-workers \
--node-type t3.medium \
--nodes 3
Features:
- Integrates with AWS services (ALB, EBS, IAM)
- Supports Fargate (serverless containers)
- Auto-scales node groups
Pricing:
- Control plane: $0.10/hour
- Worker nodes: EC2 pricing
Azure AKS (Azure Kubernetes Service)
# Create cluster
az aks create \
--resource-group myResourceGroup \
--name myAKSCluster \
--node-count 3 \
--enable-addons monitoring
Features:
- Free control plane (you only pay for nodes!)
- Integrates with Azure AD, Azure Monitor
- Virtual nodes (serverless)
Pricing:
- Control plane: FREE
- Worker nodes: VM pricing
Google GKE (Google Kubernetes Engine)
# Create cluster
gcloud container clusters create my-cluster \
--zone us-central1-a \
--num-nodes 3 \
--machine-type n1-standard-2
Features:
- Autopilot mode (fully managed nodes)
- Built by Google (the original creators of Kubernetes)
- Best integration with GCP services
Pricing:
- Control plane: $0.10/hour (free for one cluster)
- Worker nodes: Compute Engine pricing
Linode LKE (Linode Kubernetes Engine)
# Create via UI or API
linode-cli lke cluster-create \
--label my-cluster \
--region us-east \
--k8s_version 1.28
Features:
- Simple, affordable
- Free control plane
- Good for small to medium workloads
Pricing:
- Control plane: FREE
- Worker nodes: Linode pricing
13.4 Cloud Integration Benefits
Load Balancers:
# This automatically provisions cloud load balancer
apiVersion: v1
kind: Service
metadata:
name: my-app
spec:
type: LoadBalancer # Cloud provider creates real LB
ports:
- port: 80
Storage:
# This provisions cloud storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-storage
spec:
accessModes:
- ReadWriteOnce
storageClassName: aws-ebs # Provisions AWS EBS
resources:
requests:
storage: 10Gi
Native Services:
- AWS: RDS (database), S3 (storage), CloudWatch (monitoring)
- Azure: Azure SQL, Blob Storage, Azure Monitor
- GCP: Cloud SQL, Cloud Storage, Cloud Logging
13.5 Cost Comparison
Example: 3-node cluster
| Provider | Control Plane | Worker Nodes (3x) | Total/month |
|---|---|---|---|
| AWS EKS | $72/month | ~$150/month | ~$222/month |
| Azure AKS | FREE | ~$150/month | ~$150/month |
| GKE | $72/month | ~$150/month | ~$222/month |
| Linode LKE | FREE | ~$90/month | ~$90/month |
Note: Worker node costs vary by instance type and region.
14. Best Practices
14.1 Image Management
✅ Always Pin Version Tags
# ❌ BAD - unpredictable, could break production
containers:
- name: app
image: nginx:latest
# ✅ GOOD - predictable, reproducible
containers:
- name: app
image: nginx:1.25.3
Why?
- The latest tag can change unexpectedly
- Hard to debug "what changed?"
- Impossible to rollback reliably
14.2 Health Checks
Liveness Probe
Tells Kubernetes if the container is alive:
livenessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
What it does:
- Kubernetes periodically checks /health
- If the check fails 3 times, the container is restarted
Readiness Probe
Tells Kubernetes if the container is ready to receive traffic:
readinessProbe:
httpGet:
path: /ready
port: 8080
initialDelaySeconds: 10
periodSeconds: 5
successThreshold: 1
What it does:
- Container not ready → Removed from Service endpoints
- No traffic sent until ready
Why both?
App starting up:
- Liveness: OK (container is alive)
- Readiness: FAIL (not ready for traffic)
→ Container keeps running, but receives no traffic
App crashed:
- Liveness: FAIL
→ Container is restarted
14.3 Resource Management
resources:
requests: # Minimum guaranteed
memory: "256Mi"
cpu: "250m"
limits: # Maximum allowed
memory: "512Mi"
cpu: "500m"
Why?
- Prevents one buggy container from consuming all resources
- Helps Scheduler make better decisions
- Enables autoscaling
CPU units:
- 1 = 1 CPU core
- 500m = 0.5 CPU core
- 100m = 0.1 CPU core
Memory units:
- Mi = Mebibytes (1024-based)
- Gi = Gibibytes
14.4 Deployment Strategies
✅ Always Deploy Multiple Replicas
spec:
replicas: 3 # Minimum for high availability
Why?
- Zero downtime during updates
- Survive node failures
- Better load distribution
✅ Multiple Worker Nodes
Avoid single point of failure:
1 Node with 3 replicas = All gone if node fails ❌
3 Nodes with 3 replicas = 2 replicas survive ✅
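To encourage the scheduler to actually spread those replicas across nodes, one option is a topology spread constraint; a minimal sketch added to the Deployment's pod template (the label is illustrative):
spec:
  template:
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: frontend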
14.5 Security Best Practices
Don’t Use NodePort in Production
# ❌ BAD for production
spec:
type: NodePort
# ✅ GOOD for production
spec:
type: LoadBalancer
Why?
- Exposes worker nodes directly
- Multiple entry points to secure
- Use LoadBalancer or Ingress instead
No Root Access for Containers
securityContext:
runAsNonRoot: true
runAsUser: 1000
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
Why?
- Limits damage if container is compromised
- Follows principle of least privilege
Scan Images for Vulnerabilities
# Use tools like:
- Trivy
- Clair
- Anchore
- Snyk
# Automate in CI/CD pipeline
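For example, a quick scan with Trivy (assuming it is installed locally) that fails on serious findings:
# Scan an image and return a non-zero exit code on HIGH/CRITICAL issues
trivy image --severity HIGH,CRITICAL --exit-code 1 nginx:1.25.3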
Keep Kubernetes Updated
# Check current version
kubectl version
# Upgrade (managed services handle this)
# Self-managed: follow upgrade guides
Why?
- Security patches
- Bug fixes
- New features
14.6 Organization Best Practices
Use Labels Everywhere
metadata:
labels:
app: frontend
tier: web
environment: production
version: v1.2.3
Benefits:
- Easy filtering: kubectl get pods -l app=frontend
- Organized resources
- Service selectors work better
Use Namespaces for Organization
# Organize by environment
- production
- staging
- development
# Or by team
- team-alpha
- team-beta
# Or by function
- frontend
- backend
- data
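Creating and switching namespaces is straightforward, for example:
# Create a namespace
kubectl create namespace staging
# Deploy into it explicitly
kubectl apply -f app.yaml -n staging
# Or make it the default for the current kubectl context
kubectl config set-context --current --namespace=staging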
Store Configs in Version Control
my-k8s-configs/
├── base/
│ ├── deployment.yaml
│ └── service.yaml
├── overlays/
│ ├── dev/
│ ├── staging/
│ └── prod/
└── README.md
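This layout matches the Kustomize base/overlay convention; assuming you use Kustomize, a minimal base/kustomization.yaml for it could be:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
Each overlay then references ../../base and patches environment-specific values, applied with kubectl apply -k overlays/prod.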
15. Conclusion
Key Takeaways
You’ve learned the essentials of Kubernetes:
1. Core Concepts
- Pods, Services, Deployments, StatefulSets
- Control Plane and Worker Nodes architecture
- How components interact
2. Configuration
- YAML manifests structure
- ConfigMaps and Secrets for configuration
- Labels and Selectors for connectivity
3. Networking
- Service types (ClusterIP, NodePort, LoadBalancer)
- Ingress for intelligent routing
- Internal vs external access
4. Storage
- Persistent Volumes and Claims
- Storage Classes for dynamic provisioning
- ConfigMaps/Secrets as volumes
5. Production Readiness
- Managed Kubernetes services
- Best practices for security, reliability
- Health checks and resource management
What’s Next?
Topics to explore:
- Helm: Package manager for Kubernetes
- Operators: Automate complex applications
- Service Mesh: Advanced networking (Istio, Linkerd)
- GitOps: Declarative deployment workflows
- Autoscaling: HPA, VPA, Cluster Autoscaler
- Monitoring: Prometheus, Grafana
- Logging: ELK Stack, Loki
Hands-On Practice
The best way to learn Kubernetes is by doing:
# Start with Minikube
minikube start
# Deploy applications
kubectl apply -f your-app.yaml
# Break things and fix them
# Experiment with different configurations
# Read the error messages carefully
# The kubectl documentation is your friend
kubectl explain deployment
kubectl explain pod.spec
Resources
Books:
- “Kubernetes Up & Running” by Kelsey Hightower
- “The Kubernetes Book” by Nigel Poulton
**Happy Kubing!**