# Kubernetes Basics: Complete Container Orchestration Guide

## Table of Contents
- Introduction to Kubernetes
- Core Concepts and Architecture
- Pods: The Basic Unit
- Services and Networking
- Deployments and ReplicaSets
- ConfigMaps and Secrets
- Persistent Volumes
- Namespaces and Resource Management
- Monitoring and Debugging
- Best Practices
## Introduction to Kubernetes {#introduction}
Kubernetes (K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. Originally developed by Google, it's now maintained by the Cloud Native Computing Foundation (CNCF).
### Why Kubernetes?

- Automatic scaling: Scale applications up or down based on demand
- Self-healing: Automatically restart failed containers and reschedule them
- Service discovery: Built-in load balancing and service discovery
- Rolling updates: Deploy new versions with zero downtime
- Resource optimization: Efficient utilization of cluster resources

### Key Benefits

- High availability through redundancy
- Horizontal scaling capabilities
- Declarative configuration
- Platform independence
- Extensive ecosystem
## Core Concepts and Architecture {#architecture}

### Cluster Architecture

A Kubernetes cluster consists of a control plane and worker nodes:

```
┌─────────────────────────────────────────────────────────┐
│                      Control Plane                      │
├─────────────────┬─────────────────┬─────────────────────┤
│   API Server    │   Controller    │      Scheduler      │
│                 │     Manager     │                     │
├─────────────────┼─────────────────┼─────────────────────┤
│                 │      etcd       │                     │
└─────────────────┴─────────────────┴─────────────────────┘
         │                 │                 │
         └─────────────────┼─────────────────┘
                           │
      ┌─────────────────────────────────────────────┐
      │                Worker Nodes                 │
      ├─────────────┬─────────────┬─────────────────┤
      │   kubelet   │  kube-proxy │    Container    │
      │             │             │     Runtime     │
      └─────────────┴─────────────┴─────────────────┘
```
### Control Plane Components

- API Server: Entry point for all REST commands
- etcd: Distributed key-value store for cluster data
- Controller Manager: Runs controller processes
- Scheduler: Assigns pods to nodes
- Cloud Controller Manager: Interfaces with cloud providers

### Node Components

- kubelet: Agent that runs on each node
- kube-proxy: Network proxy maintaining network rules
- Container Runtime: containerd, CRI-O, or Docker Engine (via cri-dockerd)

### Essential Objects

- Pods: Smallest deployable units
- Services: Network abstraction for pods
- Deployments: Manage application lifecycle
- ConfigMaps: Configuration data
- Secrets: Sensitive data
- PersistentVolumes: Storage abstraction
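
A quick way to explore these object kinds from the command line, using standard kubectl subcommands:

```bash
# List every object kind the cluster's API server knows about
kubectl api-resources

# Show the documented fields of any kind
kubectl explain pod.spec
kubectl explain deployment.spec.strategy
```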
## Pods: The Basic Unit {#pods}

### What is a Pod?

A Pod is the smallest deployable unit in Kubernetes, containing one or more containers that share storage and network.
### Basic Pod Definition

```yaml
# simple-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
    environment: development
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    ports:
    - containerPort: 80
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    env:
    - name: ENVIRONMENT
      value: "development"
    - name: LOG_LEVEL
      value: "info"
  restartPolicy: Always
```
### Multi-Container Pod

```yaml
# multi-container-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app-pod
spec:
  containers:
  - name: web-app
    image: my-web-app:v1.0
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: shared-data
      mountPath: /var/data
  - name: sidecar-logger
    image: fluentd:v1.14
    volumeMounts:
    - name: shared-data
      mountPath: /var/log
    - name: config
      mountPath: /fluentd/etc
  volumes:
  - name: shared-data
    emptyDir: {}
  - name: config
    configMap:
      name: fluentd-config
```
### Pod Lifecycle Commands

```bash
# Create a pod
kubectl apply -f simple-pod.yaml

# List pods
kubectl get pods
kubectl get pods -o wide          # More details
kubectl get pods --show-labels

# Describe pod
kubectl describe pod nginx-pod

# Get pod logs
kubectl logs nginx-pod
kubectl logs nginx-pod -c nginx   # Specific container

# Execute commands in pod
kubectl exec -it nginx-pod -- /bin/bash
kubectl exec nginx-pod -- ls -la /usr/share/nginx/html

# Port forwarding
kubectl port-forward nginx-pod 8080:80

# Delete pod
kubectl delete pod nginx-pod
kubectl delete -f simple-pod.yaml
```
## Services and Networking {#services}

### Service Types

- ClusterIP (default): internal cluster access only
- NodePort: exposes the service on a static port on each node's IP
- LoadBalancer: provisions an external load balancer (cloud provider)
- ExternalName: maps the service to an external DNS name (see the example after the LoadBalancer manifest below)
### ClusterIP Service

```yaml
# clusterip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
```
### NodePort Service

```yaml
# nodeport-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
```
### LoadBalancer Service

```yaml
# loadbalancer-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
```
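
### ExternalName Service

The one type not shown above is ExternalName, which has no selector at all; it simply returns a CNAME record pointing at an external hostname. A minimal sketch (the hostname `db.example.com` is an illustrative placeholder):

```yaml
# externalname-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com   # pods resolving external-db get this CNAME
```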
### Ingress Configuration

```yaml
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080
```
### Service Discovery

Kubernetes provides two built-in discovery mechanisms:

- DNS: every Service is resolvable inside the cluster at `<service-name>.<namespace>.svc.cluster.local` (or simply `<service-name>` from within the same namespace).
- Environment variables: Kubernetes injects host and port variables for each active Service into Pods that start after the Service is created.
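
A quick way to verify DNS-based discovery is to resolve a Service name from a throwaway Pod. A minimal sketch using the `nginx-service` from the ClusterIP example, assuming it lives in the `default` namespace:

```bash
# Run a temporary busybox pod and resolve the Service name
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup nginx-service

# The fully qualified name works from any namespace
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup nginx-service.default.svc.cluster.local
```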
## Deployments and ReplicaSets {#deployments}

### Deployment Definition

```yaml
# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"
```
### Deployment Operations

```bash
# Create deployment
kubectl apply -f nginx-deployment.yaml

# Get deployments
kubectl get deployments
kubectl get deployment nginx-deployment -o yaml

# Scale deployment
kubectl scale deployment nginx-deployment --replicas=5

# Update deployment
kubectl set image deployment/nginx-deployment nginx=nginx:1.22

# Rollout status
kubectl rollout status deployment/nginx-deployment

# Rollout history
kubectl rollout history deployment/nginx-deployment

# Rollback deployment
kubectl rollout undo deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment --to-revision=2

# Pause/Resume rollout
kubectl rollout pause deployment/nginx-deployment
kubectl rollout resume deployment/nginx-deployment
```
### Advanced Deployment Strategies

```yaml
# Blue-Green Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: blue
  template:
    metadata:
      labels:
        app: myapp
        version: blue
    spec:
      containers:
      - name: app
        image: myapp:v1.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: green
  template:
    metadata:
      labels:
        app: myapp
        version: green
    spec:
      containers:
      - name: app
        image: myapp:v2.0
```
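
What actually flips traffic between the two Deployments is a Service whose selector includes the `version` label. A minimal sketch; the Service name and container port 8080 are assumptions, not part of the manifests above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
    version: blue   # change to "green" to cut traffic over
  ports:
  - port: 80
    targetPort: 8080
```

The cutover can be done by editing and re-applying the manifest, or imperatively:

```bash
kubectl patch service myapp-service \
  -p '{"spec":{"selector":{"app":"myapp","version":"green"}}}'
```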
## ConfigMaps and Secrets {#config}

### ConfigMap Examples

```yaml
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  # Key-value pairs
  database_host: "postgres.example.com"
  database_port: "5432"
  log_level: "info"

  # File content
  nginx.conf: |
    server {
      listen 80;
      server_name localhost;

      location / {
        root /usr/share/nginx/html;
        index index.html;
      }

      location /api/ {
        proxy_pass http://backend:8080/;
      }
    }

  app.properties: |
    spring.datasource.url=jdbc:postgresql://postgres:5432/mydb
    spring.datasource.username=user
    logging.level.org.springframework=INFO
```
### Using ConfigMaps in Pods

```yaml
# pod-with-configmap.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: myapp:latest
    # Environment variables from ConfigMap (all keys)
    envFrom:
    - configMapRef:
        name: app-config
    # Specific environment variable from one key
    env:
    - name: DB_HOST
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: database_host
    # Mount as volume
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
    - name: nginx-config
      mountPath: /etc/nginx/conf.d/default.conf
      subPath: nginx.conf
  volumes:
  - name: config-volume
    configMap:
      name: app-config
  - name: nginx-config
    configMap:
      name: app-config
```
### Secrets Management

```yaml
# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  # Base64-encoded values
  username: YWRtaW4=          # admin
  password: MWYyZDFlMmU2N2Rm  # 1f2d1e2e67df
  api_key: YWJjZGVmZ2hpams=   # abcdefghijk
---
# TLS Secret
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTi...  # base64-encoded certificate
  tls.key: LS0tLS1CRUdJTi...  # base64-encoded private key
```
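
The values under `data:` must be base64-encoded by hand; a quick sketch of producing and checking them. Alternatively, the `stringData` field accepts plain-text values and Kubernetes encodes them on write.

```bash
# Note -n: a trailing newline would change the encoded value
echo -n 'admin' | base64            # YWRtaW4=
echo -n '1f2d1e2e67df' | base64     # MWYyZDFlMmU2N2Rm

# Decode to verify what a Secret actually contains
echo 'YWRtaW4=' | base64 --decode
```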
### Using Secrets in Pods

```yaml
# pod-with-secrets.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-secrets
spec:
  containers:
  - name: app
    image: myapp:latest
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: app-secrets
          key: password
    - name: API_KEY
      valueFrom:
        secretKeyRef:
          name: app-secrets
          key: api_key
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: app-secrets
      defaultMode: 0400
```
### ConfigMap and Secret Commands

```bash
# Create ConfigMap from literals
kubectl create configmap app-config --from-literal=db_host=postgres.example.com --from-literal=db_port=5432

# Create ConfigMap from a file
kubectl create configmap nginx-config --from-file=nginx.conf

# Create Secret from literals
kubectl create secret generic app-secrets --from-literal=username=admin --from-literal=password=secret123

# Create TLS Secret
kubectl create secret tls tls-secret --cert=path/to/cert.crt --key=path/to/cert.key

# View ConfigMap/Secret
kubectl get configmap app-config -o yaml
kubectl describe secret app-secrets

# Edit ConfigMap/Secret
kubectl edit configmap app-config
kubectl patch configmap app-config -p '{"data":{"new_key":"new_value"}}'
```
## Persistent Volumes {#storage}

### PersistentVolume Definition

```yaml
# persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-storage
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: fast-ssd
  hostPath:
    path: /data/pv-storage
---
# For cloud environments
apiVersion: v1
kind: PersistentVolume
metadata:
  name: aws-ebs-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: gp3
  awsElasticBlockStore:
    volumeID: vol-1234567890abcdef0
    fsType: ext4
```
### PersistentVolumeClaim

```yaml
# persistent-volume-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: fast-ssd
```
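
A claim does nothing until a Pod mounts it. A minimal sketch of consuming `storage-claim`; the Pod name, image, and mount path are illustrative:

```yaml
# pod-with-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage
spec:
  containers:
  - name: app
    image: myapp:latest
    volumeMounts:
    - name: data
      mountPath: /var/lib/app/data   # files here survive pod restarts
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: storage-claim
```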
### StatefulSet with Persistent Storage

```yaml
# statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database
spec:
  serviceName: database-service   # name of the (headless) Service that governs this StatefulSet
  replicas: 3
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
      - name: postgres
        image: postgres:13
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: myapp
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: db-secrets
              key: username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secrets
              key: password
        volumeMounts:
        - name: storage
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: storage
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: fast-ssd
      resources:
        requests:
          storage: 10Gi
```
### Storage Classes

```yaml
# storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com   # AWS EBS CSI driver; supports gp3 with custom iops/throughput
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
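
Useful commands for checking how classes, volumes, and claims bind together (standard kubectl subcommands):

```bash
kubectl get storageclass
kubectl get pv                       # cluster-scoped volumes and their status
kubectl get pvc                      # claims in the current namespace
kubectl describe pvc storage-claim   # events explain why a claim is still Pending
```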
## Namespaces and Resource Management {#namespaces}

### Namespace Definition

```yaml
# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    environment: production
    team: backend
---
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    environment: development
    team: backend
```
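
Day-to-day namespace operations are usually done imperatively; a short sketch using standard kubectl commands (the `staging` namespace is just an example name):

```bash
# Create and inspect namespaces
kubectl create namespace staging
kubectl get namespaces --show-labels

# Target a specific namespace per command
kubectl get pods -n production
kubectl apply -f nginx-deployment.yaml -n development

# Or change the default namespace for the current context
kubectl config set-context --current --namespace=development
```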
### Resource Quotas

```yaml
# resource-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: development
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "10"
    persistentvolumeclaims: "5"
    services: "5"
    secrets: "10"
    configmaps: "10"
```
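
To see how much of the quota is currently consumed:

```bash
kubectl get resourcequota -n development
kubectl describe resourcequota compute-quota -n development
```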
### Limit Ranges

```yaml
# limit-range.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
  namespace: development
spec:
  limits:
  - type: Container
    default:
      memory: "512Mi"
      cpu: "200m"
    defaultRequest:
      memory: "256Mi"
      cpu: "100m"
    max:
      memory: "2Gi"
      cpu: "1"
    min:
      memory: "128Mi"
      cpu: "50m"
```
### Network Policies

```yaml
# network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```
## Monitoring and Debugging {#monitoring}

### Health Checks

```yaml
# deployment-with-probes.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: myapp:latest
        ports:
        - containerPort: 8080
        # Liveness probe: restart the container if it stops responding
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        # Readiness probe: remove the pod from Service endpoints until it is ready
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
          timeoutSeconds: 3
          failureThreshold: 3
        # Startup probe: give slow-starting apps time before liveness checks begin
        startupProbe:
          httpGet:
            path: /startup
            port: 8080
          initialDelaySeconds: 0
          periodSeconds: 10
          timeoutSeconds: 3
          failureThreshold: 30
```
### Debugging Commands

```bash
# Pod debugging
kubectl get pods -o wide
kubectl describe pod <pod-name>
kubectl logs <pod-name>
kubectl logs <pod-name> -c <container-name>
kubectl logs <pod-name> --previous
kubectl logs -f <pod-name>           # Follow logs

# Execute commands in pods
kubectl exec -it <pod-name> -- /bin/bash
kubectl exec <pod-name> -- ps aux
kubectl exec <pod-name> -- cat /etc/hosts

# Resource usage
kubectl top nodes
kubectl top pods
kubectl top pods --sort-by=cpu
kubectl top pods --sort-by=memory

# Events
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl get events --field-selector type=Warning

# Debug services
kubectl get endpoints
kubectl describe service <service-name>

# Port forwarding for debugging
kubectl port-forward pod/<pod-name> 8080:80
kubectl port-forward service/<service-name> 8080:80

# Copy files to/from pods
kubectl cp <pod-name>:/path/to/file ./local-file
kubectl cp ./local-file <pod-name>:/path/to/file
```
### Monitoring with Prometheus

```yaml
# prometheus-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus:latest
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: config
          mountPath: /etc/prometheus
        - name: storage
          mountPath: /prometheus
      volumes:
      - name: config
        configMap:
          name: prometheus-config
      - name: storage
        emptyDir: {}
```
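
The Deployment above mounts a `prometheus-config` ConfigMap that is not shown; a minimal illustrative sketch is below. A real setup also needs a ServiceAccount with RBAC permissions so Prometheus can list pods for service discovery.

```yaml
# prometheus-config.yaml (minimal scrape configuration, illustrative only)
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
          - role: pod
```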
## Best Practices {#best-practices}

### Security Best Practices

```yaml
# security-context.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL
        add:
        - NET_BIND_SERVICE
    volumeMounts:
    - name: tmp
      mountPath: /tmp
    - name: var-run
      mountPath: /var/run
  volumes:
  - name: tmp
    emptyDir: {}
  - name: var-run
    emptyDir: {}
```
### Resource Management

Always specify resource requests and limits on every container:

```yaml
resources:
  requests:
    memory: "256Mi"
    cpu: "100m"
  limits:
    memory: "512Mi"
    cpu: "200m"
```

Use a Horizontal Pod Autoscaler to scale on observed utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
```
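
A CPU-only autoscaler can also be created imperatively, and its status inspected; note that HPAs need the metrics-server add-on to read utilization:

```bash
# Imperative equivalent (CPU target only)
kubectl autoscale deployment web-app --min=2 --max=10 --cpu-percent=70

# Watch current vs. target utilization
kubectl get hpa --watch
kubectl describe hpa web-app-hpa
```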
### Deployment Best Practices

- Use labels consistently:

  ```yaml
  app: myapp
  version: v1.2.3
  environment: production
  team: backend
  ```

- Implement rolling updates:

  ```yaml
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  ```

- Use health checks: always define readiness and liveness probes.
- Version your images: never use the `:latest` tag in production.
- Use Secrets for sensitive data: never hardcode passwords or API keys.
- Implement resource quotas: set appropriate resource requests and limits.
- Use namespaces for isolation: separate environments with namespaces.
### Troubleshooting Checklist

```bash
# 1. Check pod status
kubectl get pods

# 2. Check pod details
kubectl describe pod <pod-name>

# 3. Check logs
kubectl logs <pod-name>

# 4. Check events
kubectl get events

# 5. Check resource usage
kubectl top pods

# 6. Check service endpoints
kubectl get endpoints

# 7. Test connectivity
kubectl exec -it <pod-name> -- nslookup <service-name>

# 8. Check network policies
kubectl get networkpolicies

# 9. Verify RBAC permissions
kubectl auth can-i <verb> <resource>

# 10. Check cluster health
kubectl get nodes
kubectl cluster-info
```
This comprehensive guide covers the fundamental concepts and practical implementations of Kubernetes. As you work with Kubernetes, remember that it's a powerful but complex platform. Start with simple deployments and gradually introduce more advanced features as your understanding grows.
The key to mastering Kubernetes is hands-on practice. Set up a local cluster using minikube or kind, experiment with different configurations, and build real applications to gain practical experience with container orchestration.
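
As a starting point, a minimal sketch of bringing up a local cluster, assuming minikube or kind is already installed:

```bash
# Option 1: minikube
minikube start
kubectl get nodes

# Option 2: kind (Kubernetes in Docker)
kind create cluster --name dev
kubectl cluster-info --context kind-dev
```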