Cloud Computing (4): Cloud-Native and Container Technologies
Chen Kai

The shift from monolithic applications to cloud-native architectures represents one of the most significant transformations in modern software development. This evolution isn't just about moving code to the cloud — it's about fundamentally rethinking how we build, deploy, and operate applications to leverage the full potential of distributed systems. At the heart of this transformation lie containers and orchestration platforms, technologies that have revolutionized the way we package, ship, and run software.

This comprehensive guide explores the entire cloud-native ecosystem, from foundational principles to advanced orchestration patterns. We'll dive deep into Docker's containerization model, Kubernetes' orchestration capabilities, and the architectural patterns that enable scalable, resilient distributed systems. Whether you're deploying your first containerized application or architecting a multi-region microservices platform, this guide provides the practical knowledge and real-world examples you need.

Cloud-Native Principles and the 12-Factor App

Cloud-native applications are designed specifically for cloud environments, embracing principles that enable scalability, resilience, and rapid iteration. The 12-Factor App methodology, developed by Heroku, provides a blueprint for building software-as-a-service applications that are portable, scalable, and maintainable.

The 12 Factors

I. Codebase: One codebase tracked in revision control, many deploys. Each application has a single codebase, but multiple environments (staging, production) can deploy from it.

II. Dependencies: Explicitly declare and isolate dependencies. Never rely on implicit existence of system-wide packages.

# Example: requirements.txt for Python
flask==2.3.0
redis==4.5.0
psycopg2-binary==2.9.7

III. Config: Store configuration in the environment. Configuration that varies between deploys (database URLs, API keys) should be stored as environment variables, not in code.

import os

# Bad: Hardcoded configuration
DATABASE_URL = "postgresql://user:pass@localhost:5432/mydb"

# Good: Environment-based configuration
DATABASE_URL = os.environ.get('DATABASE_URL')
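Going one step further, configuration reads can fail fast at startup when a required variable is missing, so a misconfigured deploy crashes immediately rather than mid-request. A minimal sketch (`require_env` is a name chosen here for illustration, not a standard API):

```python
import os

def require_env(name, default=None):
    """Read a config value from the environment (12-factor III).

    Raises at startup when a required variable is missing, instead of
    letting a None value surface as a confusing runtime error later.
    """
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Required in production; a default keeps local development friction-free.
DATABASE_URL = require_env("DATABASE_URL", default="postgresql://localhost:5432/dev")
```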

IV. Backing Services: Treat backing services as attached resources. Databases, message queues, and cache servers should be addressable via URLs and interchangeable without code changes.

V. Build, Release, Run: Strictly separate build and run stages. The build stage transforms code into an executable bundle; the release stage combines the build with config; the run stage executes the app in the execution environment.

VI. Processes: Execute the app as one or more stateless processes. Application state should be stored in backing services (databases, caches), not in memory or local filesystem.

VII. Port Binding: Export services via port binding. Applications should be self-contained and declare their port dependencies.

VIII. Concurrency: Scale out via the process model. Applications should spawn multiple processes to handle different types of work.

IX. Disposability: Maximize robustness with fast startup and graceful shutdown. Processes should start quickly and shut down gracefully when receiving SIGTERM.

X. Dev/Prod Parity: Keep development, staging, and production as similar as possible. Use the same backing services and avoid environment-specific workarounds.

XI. Logs: Treat logs as event streams. Applications should write to stdout/stderr and let the execution environment handle aggregation and storage.

XII. Admin Processes: Run admin/management tasks as one-off processes. Database migrations, data transformations, and one-time scripts should run in the same environment as regular processes.

Cloud-Native Characteristics

Beyond the 12 factors, cloud-native applications exhibit several key characteristics:

  • Microservices Architecture: Applications are decomposed into small, independently deployable services
  • Containerization: Applications and dependencies are packaged into containers for consistency across environments
  • Dynamic Orchestration: Containers are orchestrated by platforms like Kubernetes for automatic scheduling and scaling
  • Service Mesh: Inter-service communication is handled by a dedicated infrastructure layer
  • Declarative APIs: Desired state is declared, and the platform ensures convergence
  • DevOps Culture: Development and operations teams collaborate closely with shared tooling

Docker Fundamentals

Docker revolutionized application deployment by introducing a standardized way to package applications with their dependencies. Understanding Docker's core concepts is essential for working with cloud-native technologies.

Images and Containers

A Docker image is a read-only template containing instructions for creating a container. Images are built from a Dockerfile and consist of multiple layers, each representing a filesystem change.

A Docker container is a running instance of an image. Containers are isolated from each other and from the host system, sharing only the host's kernel.

# Pull an image from Docker Hub
docker pull nginx:1.25-alpine

# Run a container from an image
docker run -d -p 8080:80 --name my-nginx nginx:1.25-alpine

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# View container logs
docker logs my-nginx

# Stop a container
docker stop my-nginx

# Remove a container
docker rm my-nginx

Dockerfile Best Practices

A Dockerfile defines how to build a Docker image. Following best practices ensures secure, efficient, and maintainable images.

1. Use Multi-Stage Builds: Reduce final image size by using intermediate stages for building and a minimal final stage for runtime.

# Multi-stage build example
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /app/server

FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /app/server .
EXPOSE 8080
CMD ["./server"]

2. Leverage Layer Caching: Order Dockerfile instructions from least to most frequently changing. Copy dependency files before source code.

# Good: Dependencies cached separately
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
RUN npm run build

# Bad: Everything invalidated on code change
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm ci --only=production && npm run build

3. Use Specific Tags: Avoid the latest tag in production. Pin specific versions for reproducibility.

# Good
FROM node:18.17.0-alpine

# Bad
FROM node:latest

4. Minimize Layers: Combine RUN commands to reduce image layers and size.

# Good: Single layer
RUN apt-get update && \
    apt-get install -y python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# Bad: Multiple layers
RUN apt-get update
RUN apt-get install -y python3 python3-pip
RUN rm -rf /var/lib/apt/lists/*

5. Use .dockerignore: Exclude unnecessary files from build context.

# .dockerignore
node_modules
.git
.gitignore
*.md
.env
dist

6. Run as Non-Root User: Enhance security by running containers as non-root users.

FROM alpine:latest
RUN addgroup -g 1000 appuser && \
    adduser -D -u 1000 -G appuser appuser
USER appuser
WORKDIR /home/appuser
COPY --chown=appuser:appuser . .

Docker Networking

Docker provides several networking modes for containers:

Bridge Network (default): Containers on the same bridge network can communicate using container names.

# Create a custom bridge network
docker network create my-network

# Run containers on the network
docker run -d --name web --network my-network nginx
docker run -d --name app --network my-network myapp:latest

# Containers can communicate using names
# app can reach web at http://web:80

Host Network: Container shares the host's network stack directly.

docker run -d --network host nginx
# Container uses host's IP and ports directly

Overlay Network: Enables multi-host networking for Docker Swarm or Kubernetes.

# Create overlay network for Swarm
docker network create --driver overlay my-overlay

Macvlan Network: Assigns MAC addresses to containers, making them appear as physical devices.

docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  my-macvlan

Docker Volumes

Volumes provide persistent storage for containers, surviving container removal and enabling data sharing between containers.

Named Volumes: Managed by Docker, stored in Docker's directory.

# Create a named volume
docker volume create my-data

# Use volume in container
docker run -d -v my-data:/data --name app myapp:latest

# Inspect volume
docker volume inspect my-data

Bind Mounts: Mount host directories into containers.

# Mount host directory
docker run -d -v /host/path:/container/path nginx

# Use absolute paths for clarity
docker run -d -v $(pwd)/data:/app/data myapp:latest

Volume in Dockerfile: Define volume mount points in Dockerfile.

FROM alpine:latest
VOLUME ["/data"]
# /data is a mount point for volumes

Volume Drivers: Use external storage drivers for cloud storage.

docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.100,rw \
  --opt device=:/path/to/nfs \
  nfs-volume

Docker Compose for Local Development

Docker Compose simplifies multi-container application development by defining services, networks, and volumes in a single YAML file.

Basic Compose File

version: '3.8'

services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://postgres:password@db:5432/mydb
    depends_on:
      - db
      - redis
    volumes:
      - .:/app
      - /app/node_modules
    networks:
      - app-network

  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydb
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - app-network

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    networks:
      - app-network

volumes:
  postgres-data:

networks:
  app-network:
    driver: bridge

Compose Commands

# Start services
docker-compose up -d

# View logs
docker-compose logs -f web

# Execute command in service
docker-compose exec web npm test

# Scale services
docker-compose up -d --scale web=3

# Stop services
docker-compose down

# Stop and remove volumes
docker-compose down -v

# Rebuild services
docker-compose up -d --build

Advanced Compose Features

Environment Files: Externalize configuration.

# docker-compose.yml
services:
  web:
    env_file:
      - .env.development
    environment:
      - DEBUG=true

# .env.development
DATABASE_URL=postgresql://postgres:password@db:5432/mydb
REDIS_URL=redis://redis:6379

Health Checks: Define service health conditions.

services:
  web:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

Profiles: Organize services by environment.

services:
  web:
    # ...

  worker:
    # ...
    profiles:
      - production

  dev-tools:
    # ...
    profiles:
      - development

# Start with specific profile
docker-compose --profile development up

Kubernetes Core Concepts

Kubernetes (K8s) is the de facto standard for container orchestration, providing automated deployment, scaling, and management of containerized applications.

Architecture Overview

A Kubernetes cluster consists of:

  • Control Plane: Manages cluster state and scheduling decisions
    • API Server: Exposes Kubernetes API
    • etcd: Distributed key-value store for cluster state
    • Scheduler: Assigns pods to nodes
    • Controller Manager: Runs cluster controllers
    • Cloud Controller Manager: Integrates with cloud providers
  • Worker Nodes: Run containerized applications
    • kubelet: Agent that communicates with control plane
    • kube-proxy: Maintains network rules
    • Container Runtime: Runs containers (containerd, CRI-O)

Pods

A Pod is the smallest deployable unit in Kubernetes, containing one or more containers that share storage and network.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25-alpine
      ports:
        - containerPort: 80
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"

Multi-Container Pods: Pods can contain multiple containers that work together.

apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
    - name: web
      image: nginx:alpine
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: sidecar
      image: busybox
      command: ['sh', '-c', 'while true; do echo "$(date)" > /shared/index.html; sleep 10; done']
      volumeMounts:
        - name: shared-data
          mountPath: /shared
  volumes:
    - name: shared-data
      emptyDir: {}

Services

Services provide stable network endpoints for pods, abstracting pod IPs that change on restart.

ClusterIP (default): Exposes service on cluster-internal IP.

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP

NodePort: Exposes service on each node's IP at a static port.

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080

LoadBalancer: Exposes service externally using cloud provider's load balancer.

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 8080

ExternalName: Maps service to external DNS name.

apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: database.example.com

Deployments

Deployments manage ReplicaSets and provide declarative updates, rollbacks, and scaling.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25-alpine
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5

Rolling Updates: Deployments support zero-downtime updates.

# Update image
kubectl set image deployment/web-deployment nginx=nginx:1.26-alpine

# Check rollout status
kubectl rollout status deployment/web-deployment

# Rollback to previous version
kubectl rollout undo deployment/web-deployment

# Rollback to specific revision
kubectl rollout undo deployment/web-deployment --to-revision=2

# View rollout history
kubectl rollout history deployment/web-deployment

Rolling Update Strategy: Configure update behavior.

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
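The arithmetic behind these two fields can be sketched in Python. This illustrates the documented rounding rules (maxSurge rounds up, maxUnavailable rounds down for percentage values); it is an illustration, not Kubernetes source code:

```python
import math

def rollout_bounds(replicas, max_surge, max_unavailable):
    """Return (min_available, max_total) pod counts during a rolling update.

    Both fields accept absolute counts or percentage strings like "25%",
    mirroring the Deployment API; surge rounds up, unavailable rounds down.
    """
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            fraction = int(value[:-1]) / 100 * replicas
            return math.ceil(fraction) if round_up else math.floor(fraction)
        return int(value)

    surge = resolve(max_surge, round_up=True)
    unavailable = resolve(max_unavailable, round_up=False)
    return replicas - unavailable, replicas + surge

# maxSurge: 1, maxUnavailable: 0 with 3 replicas: capacity never dips below 3,
# and at most one extra pod runs during the update.
low, high = rollout_bounds(3, 1, 0)
```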

StatefulSets

StatefulSets manage stateful applications, providing stable network identities and ordered deployment/scaling.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-statefulset
spec:
  serviceName: mysql-headless
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: root-password
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: mysql-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi

Headless Service: Required for StatefulSets to provide stable DNS names.

apiVersion: v1
kind: Service
metadata:
  name: mysql-headless
spec:
  clusterIP: None
  selector:
    app: mysql
  ports:
    - port: 3306

StatefulSet pods get predictable names: mysql-statefulset-0, mysql-statefulset-1, mysql-statefulset-2, accessible via DNS: mysql-statefulset-0.mysql-headless.default.svc.cluster.local.
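The naming scheme is mechanical enough to express as a formula; this Python sketch just assembles the parts (`statefulset_pod_dns` is an illustrative helper, not a Kubernetes API):

```python
def statefulset_pod_dns(statefulset, ordinal, headless_service,
                        namespace="default", cluster_domain="cluster.local"):
    """Build the stable DNS name a StatefulSet pod gets via its headless Service."""
    return (f"{statefulset}-{ordinal}.{headless_service}"
            f".{namespace}.svc.{cluster_domain}")

# Each replica gets a predictable, stable name:
names = [statefulset_pod_dns("mysql-statefulset", i, "mysql-headless")
         for i in range(3)]
```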

Kubernetes Cluster Deployment

Deploying a Kubernetes cluster requires careful planning and configuration. Multiple approaches exist, from manual setup to fully managed services.

kubeadm (On-Premises/Cloud VMs)

kubeadm is a tool for bootstrapping Kubernetes clusters, suitable for on-premises or cloud VM deployments.

Master Node Setup:

# Install container runtime (containerd)
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system

# Install containerd
sudo apt-get update
sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd

# Install kubeadm, kubelet, kubectl
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Initialize cluster
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install CNI plugin (Flannel)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Worker Node Setup:

# Install container runtime and kubeadm (same as master)
# ...

# Join cluster (use token from master)
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Verify Cluster:

# Check nodes
kubectl get nodes

# Check system pods
kubectl get pods -n kube-system

# Check cluster info
kubectl cluster-info

kops (AWS)

kops simplifies Kubernetes cluster deployment on AWS.

# Install kops
curl -LO https://github.com/kubernetes/kops/releases/download/v1.28.0/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops

# Configure AWS credentials
aws configure

# Create S3 bucket for cluster state
aws s3 mb s3://my-kops-state-bucket

# Set environment variable
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# Create cluster configuration
kops create cluster \
  --name=mycluster.k8s.local \
  --cloud=aws \
  --zones=us-east-1a,us-east-1b \
  --node-count=3 \
  --node-size=t3.medium \
  --master-size=t3.small \
  --master-count=1 \
  --networking=calico

# Review configuration
kops edit cluster mycluster.k8s.local

# Create cluster
kops update cluster mycluster.k8s.local --yes

# Validate cluster
kops validate cluster mycluster.k8s.local

# Delete cluster
kops delete cluster mycluster.k8s.local --yes

Managed Kubernetes Services

Amazon EKS:

# Create EKS cluster
aws eks create-cluster \
  --name my-cluster \
  --version 1.28 \
  --role-arn arn:aws:iam::123456789012:role/EKSClusterRole \
  --resources-vpc-config subnetIds=subnet-12345,subnet-67890,securityGroupIds=sg-12345

# Wait for cluster creation
aws eks wait cluster-active --name my-cluster

# Configure kubectl
aws eks update-kubeconfig --name my-cluster --region us-east-1

# Create node group
aws eks create-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup \
  --node-role arn:aws:iam::123456789012:role/NodeInstanceRole \
  --subnets subnet-12345 subnet-67890 \
  --instance-types t3.medium \
  --scaling-config minSize=2,maxSize=4,desiredSize=3

Google GKE:

# Create GKE cluster
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --num-nodes 3 \
  --machine-type n1-standard-2 \
  --enable-autoscaling \
  --min-nodes 2 \
  --max-nodes 5

# Configure kubectl
gcloud container clusters get-credentials my-cluster --zone us-central1-a

# Create an Autopilot cluster (fully managed node infrastructure)
gcloud container clusters create-auto my-autopilot-cluster \
  --region us-central1

Azure AKS:

# Create resource group
az group create --name myResourceGroup --location eastus

# Create AKS cluster
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --node-vm-size Standard_B2s \
  --enable-addons monitoring \
  --generate-ssh-keys

# Configure kubectl
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Enable cluster autoscaler
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-cluster-autoscaler \
  --min-count 2 \
  --max-count 5

Kubernetes Networking

Kubernetes networking enables pods to communicate with each other and external services. Understanding CNI plugins and service mesh concepts is crucial for production deployments.

Container Network Interface (CNI)

CNI plugins provide networking functionality for pods. Popular options include:

Flannel: Simple overlay network using VXLAN or host-gw.

# flannel-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }

Calico: Policy-driven networking with BGP routing.

# calico-installation.yaml (simplified)
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - blockSize: 26
        cidr: 192.168.0.0/16
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()

Cilium: eBPF-based networking with advanced security features.

# cilium-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  enable-ipv4: "true"
  enable-ipv6: "false"
  enable-bpf-masquerade: "true"
  enable-remote-node-identity: "true"

Network Policies

Network Policies control traffic between pods using label selectors.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - protocol: TCP
          port: 53
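At its core, a NetworkPolicy rule is label matching. A simplified evaluator (hypothetical helper names, covering only the `podSelector`/`matchLabels` case, not namespaces or IP blocks) might look like:

```python
def selector_matches(match_labels, labels):
    """True when every key/value in matchLabels appears in the pod's labels."""
    return all(labels.get(key) == value for key, value in match_labels.items())

def ingress_allowed(from_clauses, peer_labels):
    """Evaluate a simplified NetworkPolicy ingress 'from' list against a peer pod."""
    return any(
        selector_matches(clause["podSelector"]["matchLabels"], peer_labels)
        for clause in from_clauses
        if "podSelector" in clause
    )

# The api-allow policy's ingress clause, as data:
policy_from = [{"podSelector": {"matchLabels": {"app": "frontend"}}}]
```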

Service Mesh Preview

Service meshes provide advanced traffic management, security, and observability. Istio is a popular service mesh implementation.

Key Features:

  • Traffic management (load balancing, circuit breaking, retries)
  • Security (mTLS, RBAC)
  • Observability (metrics, tracing, logging)

Istio Architecture:

  • Control Plane: Istiod (Pilot, Citadel, Galley)
  • Data Plane: Envoy sidecar proxies
# Enable Istio sidecar injection
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    istio-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:alpine
# Install Istio
curl -L https://istio.io/downloadIstio | sh -
cd istio-*
./bin/istioctl install --set values.defaultRevision=default

# Enable sidecar injection
kubectl label namespace production istio-injection=enabled

# Deploy sample application
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

Kubernetes Storage

Kubernetes provides flexible storage abstractions for stateful applications through PersistentVolumes (PVs), PersistentVolumeClaims (PVCs), and StorageClasses.

PersistentVolumes and PersistentVolumeClaims

PersistentVolume (PV): Cluster-wide storage resource provisioned by administrators.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  hostPath:
    path: /mnt/data/mysql

PersistentVolumeClaim (PVC): User's request for storage, bound to a PV.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: local-storage

Using PVC in Pod:

apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod
spec:
  containers:
    - name: mysql
      image: mysql:8.0
      volumeMounts:
        - name: mysql-storage
          mountPath: /var/lib/mysql
  volumes:
    - name: mysql-storage
      persistentVolumeClaim:
        claimName: mysql-pvc

StorageClasses

StorageClass enables dynamic provisioning of storage.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp3
  fsType: ext4
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

Dynamic Provisioning Example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 100Gi

Volume Types

Local Storage: Direct host path mounting.

volumes:
  - name: host-path
    hostPath:
      path: /data
      type: DirectoryOrCreate

NFS: Network File System volumes.

volumes:
  - name: nfs-volume
    nfs:
      server: nfs-server.example.com
      path: /exports/data
      readOnly: false

Cloud Storage: Provider-specific volumes (AWS EBS, GCE PD, Azure Disk).

# AWS EBS
volumes:
  - name: ebs-volume
    awsElasticBlockStore:
      volumeID: vol-12345678
      fsType: ext4

CSI Volumes: Container Storage Interface for custom storage plugins.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-storage
provisioner: csi.example.com/driver
parameters:
  type: ssd

Helm Package Management

Helm is the package manager for Kubernetes, simplifying application deployment and management through charts.

Helm Basics

Chart Structure:

my-chart/
├── Chart.yaml          # Chart metadata
├── values.yaml         # Default configuration
├── templates/          # Kubernetes manifests
│   ├── deployment.yaml
│   ├── service.yaml
│   └── ingress.yaml
└── charts/             # Chart dependencies

Chart.yaml:

apiVersion: v2
name: my-app
description: A Helm chart for my application
type: application
version: 1.0.0
appVersion: "2.0.0"
dependencies:
  - name: postgresql
    version: 12.1.0
    repository: https://charts.bitnami.com/bitnami

values.yaml:

replicaCount: 3

image:
  repository: nginx
  pullPolicy: IfNotPresent
  tag: "1.25-alpine"

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  className: nginx
  hosts:
    - host: app.example.com
      paths:
        - path: /
          pathType: Prefix

Template Example (templates/deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-app.fullname" . }}
  labels:
    {{- include "my-app.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "my-app.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "my-app.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80

Helm Commands

# Create a new chart
helm create my-chart

# Install chart
helm install my-release ./my-chart

# Install with custom values
helm install my-release ./my-chart -f custom-values.yaml

# Upgrade release
helm upgrade my-release ./my-chart

# Rollback release
helm rollback my-release 1

# List releases
helm list

# Show release values
helm get values my-release

# Uninstall release
helm uninstall my-release

# Package chart
helm package ./my-chart

# Add repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install from repository
helm install postgres bitnami/postgresql

Helm Best Practices

  1. Use Semantic Versioning: Follow SemVer for chart versions
  2. Parameterize Everything: Make charts configurable via values.yaml
  3. Use Templates: Leverage Go templates for DRY principles
  4. Validate Charts: Use helm lint and helm template --debug
  5. Document Values: Document all values.yaml options
  6. Test Charts: Use helm test for post-installation validation

Service Mesh: Istio Deep Dive

Service meshes provide a dedicated infrastructure layer for managing service-to-service communication, offering advanced features beyond basic Kubernetes networking.

Istio Architecture

Control Plane Components:

  • Istiod: Unified control plane (replaces Pilot, Citadel, Galley)
    • Service discovery
    • Configuration management
    • Certificate management
  • Envoy Proxy: Data plane sidecar for each pod

Traffic Management:

# VirtualService: Route traffic
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - match:
        - headers:
            end-user:
              exact: jason
      route:
        - destination:
            host: reviews
            subset: v2
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 50
        - destination:
            host: reviews
            subset: v3
          weight: 50
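The 50/50 split in the second rule amounts to weighted random selection. A minimal sketch of that behavior (an illustration of the semantics, not Envoy's actual algorithm):

```python
import random

def pick_subset(weighted_routes, rng=random.random):
    """Choose a destination subset in proportion to VirtualService-style weights."""
    total = sum(weight for _, weight in weighted_routes)
    point = rng() * total  # uniform point on [0, total)
    cumulative = 0
    for subset, weight in weighted_routes:
        cumulative += weight
        if point < cumulative:
            return subset
    return weighted_routes[-1][0]  # guard against floating-point edge cases
```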
# DestinationRule: Define subsets and policies
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 10
        http2MaxRequests: 100
        maxRequestsPerConnection: 2
    outlierDetection:
      consecutiveErrors: 3
      interval: 30s
      baseEjectionTime: 30s
      maxEjectionPercent: 50
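
The 50/50 weighted route in the VirtualService above is, per request, a weighted random choice among destinations. A minimal application-side sketch of that selection logic (subset names follow the example; this is illustrative, not Envoy's implementation):

```python
import random

def pick_subset(routes, rng=random):
    """Weighted random choice over route destinations (Istio-style weights)."""
    return rng.choices(
        [r["subset"] for r in routes],
        weights=[r["weight"] for r in routes],
        k=1,
    )[0]

# The 50/50 split from the VirtualService example
routes = [{"subset": "v1", "weight": 50}, {"subset": "v3", "weight": 50}]
rng = random.Random(42)  # seeded so the simulation is reproducible
counts = {"v1": 0, "v3": 0}
for _ in range(10_000):
    counts[pick_subset(routes, rng)] += 1
# With equal weights, counts end up close to 5000 each
```

Changing the weights (say 90/10 for a canary) changes only the routing distribution, never the calling code — which is exactly what the mesh buys you.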

Security (mTLS):

# PeerAuthentication: Enable mTLS
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
spec:
  mtls:
    mode: STRICT
# AuthorizationPolicy: Access control
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend
spec:
  selector:
    matchLabels:
      app: api
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/frontend"]
    to:
    - operation:
        methods: ["GET", "POST"]

Observability:

# Telemetry configuration
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
spec:
  accessLogging:
  - providers:
    - name: envoy
  metrics:
  - providers:
    - name: prometheus
  tracing:
  - providers:
    - name: zipkin

Microservices Architecture Patterns

Microservices architecture decomposes applications into independently deployable services. Understanding common patterns is essential for building resilient distributed systems.

API Gateway Pattern

An API Gateway acts as a single entry point for client requests, routing to appropriate microservices.

Benefits:

  • Single entry point
  • Request routing
  • Authentication/authorization
  • Rate limiting
  • Request/response transformation

Implementation with Kong:

# kong-config.yaml
_format_version: "3.0"
services:
- name: user-service
  url: http://user-service:8080
  routes:
  - name: user-route
    paths:
    - /api/users
    methods:
    - GET
    - POST
  plugins:
  - name: rate-limiting
    config:
      minute: 100
  - name: key-auth
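
The rate-limiting plugin above caps a consumer at 100 requests per minute. A token-bucket sketch captures the idea (a simplified model for illustration — Kong's actual counters differ):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: refill steadily, spend one token per request."""
    def __init__(self, rate_per_minute, now=time.monotonic):
        self.capacity = rate_per_minute
        self.tokens = float(rate_per_minute)
        self.refill_per_sec = rate_per_minute / 60.0
        self.now = now
        self.last = now()

    def allow(self):
        t = self.now()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.refill_per_sec)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Simulate with a fake clock: a burst of 150 requests against a 100/minute budget
clock = [0.0]
bucket = TokenBucket(100, now=lambda: clock[0])
allowed = sum(bucket.allow() for _ in range(150))
# Only the first 100 requests in the burst are admitted
```

Requests beyond the budget are rejected until the bucket refills, which is the behavior the Kong config delegates to the gateway.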

Service Discovery

Services need to discover and communicate with each other dynamically.

Kubernetes DNS: Built-in service discovery via DNS.

# Service automatically registered in DNS
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user
  ports:
  - port: 8080
# Services accessible via DNS
# user-service.default.svc.cluster.local
curl http://user-service:8080/api/users

Consul: External service discovery and configuration.

apiVersion: v1
kind: ConfigMap
metadata:
  name: consul-config
data:
  config.json: |
    {
      "datacenter": "dc1",
      "data_dir": "/consul/data",
      "server": true,
      "bootstrap_expect": 3,
      "ui": true,
      "client_addr": "0.0.0.0"
    }

Circuit Breaker Pattern

Circuit breakers prevent cascading failures by stopping requests to failing services.

Implementation with Istio:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: payment-service
spec:
  host: payment-service
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 5
      interval: 30s
      baseEjectionTime: 30s
      maxEjectionPercent: 50
      minHealthPercent: 50
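
For services outside the mesh, the same idea can live in application code. A minimal, illustrative state machine (not Envoy's actual outlier-detection algorithm): trip open after N consecutive failures, reject fast while open, and allow a trial request after a cooldown:

```python
import time

class CircuitBreaker:
    """Open the circuit after max_failures consecutive failures; retry after reset_timeout."""
    def __init__(self, max_failures=5, reset_timeout=30.0, now=time.monotonic):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.now = now
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn):
        if self.opened_at is not None:
            if self.now() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open")
            self.opened_at = None  # half-open: let one trial request through
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.now()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

The Istio config above achieves the same protection declaratively, per destination, with no code changes in callers.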

Saga Pattern

Sagas manage distributed transactions across multiple services.

Choreography-Based Saga:

# Order Service
apiVersion: v1
kind: ConfigMap
metadata:
  name: order-service-config
data:
  config.yaml: |
    saga:
      steps:
      - name: reserve-inventory
        service: inventory-service
        compensate: cancel-reservation
      - name: charge-payment
        service: payment-service
        compensate: refund-payment
      - name: ship-order
        service: shipping-service
        compensate: cancel-shipment
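
The execution logic this config implies — run steps in order, and on failure run the compensations of completed steps in reverse — can be sketched as follows. The handlers are hypothetical stand-ins for calls to the other services:

```python
def run_saga(steps, handlers):
    """Run saga steps in order; on failure, compensate completed steps in reverse."""
    completed = []
    for step in steps:
        try:
            handlers[step["name"]]()
            completed.append(step)
        except Exception:
            for done in reversed(completed):
                handlers[done["compensate"]]()
            raise

def charge_payment():
    raise RuntimeError("payment declined")  # simulate a mid-saga failure

log = []
handlers = {
    "reserve-inventory": lambda: log.append("reserved"),
    "cancel-reservation": lambda: log.append("reservation-cancelled"),
    "charge-payment": charge_payment,
    "refund-payment": lambda: log.append("refunded"),
}
steps = [
    {"name": "reserve-inventory", "compensate": "cancel-reservation"},
    {"name": "charge-payment", "compensate": "refund-payment"},
]
try:
    run_saga(steps, handlers)
except RuntimeError:
    pass
# log is now ["reserved", "reservation-cancelled"]: the reservation was rolled back
```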

Event-Driven Architecture

Services communicate asynchronously through events.

Kafka Integration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
      - name: order-service
        image: order-service:latest
        env:
        - name: KAFKA_BROKERS
          value: "kafka-service:9092"
        - name: KAFKA_TOPIC_ORDERS
          value: "order-events"
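
The publish/subscribe flow behind this can be sketched with an in-memory stand-in for the Kafka topic (the topic name follows the example; a real system would use a Kafka client and consume asynchronously):

```python
from collections import defaultdict

class EventBus:
    """In-memory stand-in for a Kafka topic: producers publish, consumers subscribe."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

bus = EventBus()
shipments = []
# The shipping service reacts to order events without the order service knowing it exists
bus.subscribe("order-events", lambda e: shipments.append(e["order_id"]))
bus.publish("order-events", {"order_id": "A-1001", "status": "created"})
# shipments == ["A-1001"]
```

The decoupling is the point: producers never name their consumers, so new services can subscribe to order-events without touching the order service.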

Database per Service

Each microservice owns its database, ensuring loose coupling.

# User Service Database
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: user-db-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: user-db
spec:
  serviceName: user-db
  replicas: 1
  selector:
    matchLabels:
      app: user-db
  template:
    metadata:
      labels:
        app: user-db
    spec:
      containers:
      - name: postgres
        image: postgres:15
        env:
        - name: POSTGRES_DB
          value: userdb
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi

CI/CD Pipelines

Continuous Integration and Continuous Deployment pipelines automate building, testing, and deploying applications to Kubernetes.

Jenkins Pipeline

Jenkins provides flexible pipeline-as-code for CI/CD.

Jenkinsfile:

pipeline {
    agent any

    environment {
        DOCKER_REGISTRY = 'registry.example.com'
        KUBERNETES_NAMESPACE = 'production'
    }

    stages {
        stage('Checkout') {
            steps {
                git branch: 'main', url: 'https://github.com/user/repo.git'
            }
        }

        stage('Build') {
            steps {
                sh 'docker build -t ${DOCKER_REGISTRY}/myapp:${BUILD_NUMBER} .'
            }
        }

        stage('Test') {
            steps {
                sh 'docker run --rm ${DOCKER_REGISTRY}/myapp:${BUILD_NUMBER} npm test'
            }
        }

        stage('Push') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'docker-registry', usernameVariable: 'USER', passwordVariable: 'PASS')]) {
                    sh '''
                        docker login -u $USER -p $PASS ${DOCKER_REGISTRY}
                        docker push ${DOCKER_REGISTRY}/myapp:${BUILD_NUMBER}
                    '''
                }
            }
        }

        stage('Deploy') {
            steps {
                sh '''
                    kubectl set image deployment/myapp \
                        myapp=${DOCKER_REGISTRY}/myapp:${BUILD_NUMBER} \
                        -n ${KUBERNETES_NAMESPACE}
                    kubectl rollout status deployment/myapp -n ${KUBERNETES_NAMESPACE}
                '''
            }
        }
    }

    post {
        always {
            cleanWs()
        }
        failure {
            emailext subject: "Build Failed: ${env.JOB_NAME}",
                body: "Build ${env.BUILD_NUMBER} failed.",
                to: "team@example.com"
        }
    }
}

Jenkins Kubernetes Plugin:

# jenkins-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        ports:
        - containerPort: 8080
        - containerPort: 50000
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
        env:
        - name: JAVA_OPTS
          value: "-Djenkins.install.runSetupWizard=false"
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins-pvc

GitLab CI/CD

GitLab provides integrated CI/CD with Kubernetes support.

.gitlab-ci.yml:

stages:
- build
- test
- deploy

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

build:
  stage: build
  image: docker:latest
  services:
  - docker:dind
  before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
  - docker build -t $IMAGE_TAG .
  - docker push $IMAGE_TAG
  only:
  - main
  - develop

test:
  stage: test
  image: node:18
  script:
  - npm install
  - npm test
  - npm run lint

deploy:production:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
  - kubectl config use-context production
  - kubectl set image deployment/myapp myapp=$IMAGE_TAG -n production
  - kubectl rollout status deployment/myapp -n production
  environment:
    name: production
    url: https://app.example.com
  only:
  - main
  when: manual

GitLab Kubernetes Integration:

# Add Kubernetes cluster in GitLab
# Settings > Kubernetes > Add Kubernetes cluster
# GitLab automatically configures kubectl context

GitHub Actions

GitHub Actions provides native CI/CD for GitHub repositories.

.github/workflows/deploy.yml:

name: Build and Deploy

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
    - name: Checkout
      uses: actions/checkout@v3

    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v2

    - name: Log in to Container Registry
      uses: docker/login-action@v2
      with:
        registry: ${{ env.REGISTRY }}
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}

    - name: Build and push
      uses: docker/build-push-action@v4
      with:
        context: .
        push: true
        tags: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}

    - name: Run tests
      run: |
        docker run --rm ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }} npm test

  deploy:
    needs: build-and-push
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
    - name: Checkout
      uses: actions/checkout@v3

    - name: Configure kubectl
      uses: azure/setup-kubectl@v3

    - name: Set up kubeconfig
      run: |
        echo "${{ secrets.KUBECONFIG }}" | base64 -d > kubeconfig
        echo "KUBECONFIG=$PWD/kubeconfig" >> $GITHUB_ENV

    - name: Deploy to Kubernetes
      run: |
        kubectl set image deployment/myapp \
          myapp=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }} \
          -n production
        kubectl rollout status deployment/myapp -n production

ArgoCD for GitOps

ArgoCD implements GitOps by syncing Kubernetes manifests from Git repositories.

ArgoCD Application:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
spec:
  project: default
  source:
    repoURL: https://github.com/user/repo.git
    targetRevision: main
    path: k8s/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true

Case Studies

Case Study 1: E-Commerce Platform Migration

Challenge: Migrate monolithic e-commerce application to microservices on Kubernetes.

Solution:

  1. Containerization: Dockerized the existing Java application
  2. Service Decomposition: Split into services (user, product, order, payment, shipping)
  3. Kubernetes Deployment: Deployed services using Deployments and StatefulSets
  4. Service Mesh: Implemented Istio for traffic management and security
  5. CI/CD: GitLab CI/CD for automated deployments

Architecture:

# User Service Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 5
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
        version: v1
    spec:
      containers:
      - name: user-service
        image: registry.example.com/user-service:v1.2.0
        ports:
        - containerPort: 8080
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: user-db-secret
              key: url
        resources:
          requests:
            memory: "512Mi"
            cpu: "500m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  ports:
  - port: 80
    targetPort: 8080

Results:

  • 60% reduction in deployment time
  • 99.9% uptime with auto-scaling
  • Independent service scaling based on load
  • Zero-downtime deployments

Case Study 2: Financial Services Platform

Challenge: Build secure, compliant financial services platform with strict regulatory requirements.

Solution:

  1. Security: Implemented mTLS with Istio, network policies, RBAC
  2. Compliance: Audit logging, encryption at rest and in transit
  3. High Availability: Multi-region deployment with active-active setup
  4. Disaster Recovery: Automated backup and restore procedures

Security Configuration:

# Network Policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-network-policy
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: gateway
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
---
# RBAC
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: api-role
rules:
- apiGroups: [""]
  resources: ["configmaps", "secrets"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: api-rolebinding
subjects:
- kind: ServiceAccount
  name: api-serviceaccount
roleRef:
  kind: Role
  name: api-role
  apiGroup: rbac.authorization.k8s.io

Results:

  • Passed security audits and compliance checks
  • Encrypted communication between all services
  • Comprehensive audit trail
  • Multi-region failover in < 5 minutes

Case Study 3: SaaS Analytics Platform

Challenge: Scale analytics platform to handle millions of events per second with real-time processing.

Solution:

  1. Event Streaming: Kafka for event ingestion
  2. Stream Processing: Kafka Streams for real-time analytics
  3. Auto-scaling: Horizontal Pod Autoscaler (HPA) and Cluster Autoscaler
  4. Storage: Time-series database (InfluxDB) with StatefulSets
  5. Monitoring: Prometheus and Grafana for observability

Auto-scaling Configuration:

# Horizontal Pod Autoscaler
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: analytics-processor-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: analytics-processor
  minReplicas: 3
  maxReplicas: 50
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 50
        periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
      - type: Percent
        value: 100
        periodSeconds: 15
      - type: Pods
        value: 4
        periodSeconds: 15
      selectPolicy: Max

Kafka StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-headless
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: confluentinc/cp-kafka:7.4.0
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_BROKER_ID
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: "zookeeper:2181"
        - name: KAFKA_ADVERTISED_LISTENERS
          value: "PLAINTEXT://$(HOSTNAME).kafka-headless:9092"
        volumeMounts:
        - name: kafka-data
          mountPath: /var/lib/kafka/data
  volumeClaimTemplates:
  - metadata:
      name: kafka-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi

Results:

  • Handled 10M+ events/second peak load
  • Auto-scaling from 3 to 50 pods based on load
  • Sub-second latency for real-time analytics
  • 99.95% uptime

Q&A Section

Q1: What's the difference between Docker and Kubernetes?

A: Docker is a containerization platform that packages applications and dependencies into containers. Kubernetes is an orchestration platform that manages containers across multiple hosts, handling scheduling, scaling, networking, and lifecycle management. Docker creates containers; Kubernetes manages them at scale.

Q2: When should I use StatefulSets vs Deployments?

A: Use Deployments for stateless applications where pods are interchangeable (web servers, API services). Use StatefulSets for stateful applications requiring stable network identities, ordered deployment, and persistent storage (databases, message queues, distributed systems).

Q3: How do I choose between different CNI plugins?

A:

  • Flannel: Simple, good for basic networking needs
  • Calico: Advanced policy enforcement, BGP routing, good for multi-cluster
  • Cilium: eBPF-based, advanced security features, high performance
  • Weave: Simple overlay network with encryption

Choose based on your requirements: simplicity (Flannel), policy (Calico), performance/security (Cilium).

Q4: What's the difference between ClusterIP, NodePort, and LoadBalancer services?

A:

  • ClusterIP: Internal-only access within cluster
  • NodePort: Exposes service on each node's IP at static port (30000-32767)
  • LoadBalancer: Cloud provider load balancer (AWS ELB, GCP LB, Azure LB)

For external access, use LoadBalancer (cloud) or NodePort + external load balancer (on-premises). For internal access, use ClusterIP.

Q5: How do I handle secrets in Kubernetes?

A: Use Kubernetes Secrets (base64 encoded) or external secret management (HashiCorp Vault, AWS Secrets Manager). For production, prefer external secret management with CSI drivers:

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secrets
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "db-credentials"
        objectType: "secretsmanager"

Q6: What's the best practice for resource requests and limits?

A:

  • Requests: Guaranteed resources (CPU/memory pod can use)
  • Limits: Maximum resources (prevents resource exhaustion)

Set requests based on typical usage, limits based on peak usage. Use monitoring to tune values. Avoid setting limits too low (causes OOMKilled) or too high (wastes resources).

resources:
  requests:
    memory: "256Mi"  # Typical usage
    cpu: "250m"
  limits:
    memory: "512Mi"  # Peak usage
    cpu: "500m"

Q7: How do I implement blue-green deployments in Kubernetes?

A: Deploy new version alongside old version, switch traffic, then remove old version:

# Deploy new version
kubectl apply -f deployment-v2.yaml

# Switch the service selector to v2
kubectl patch service myapp -p '{"spec":{"selector":{"version":"v2"}}}'

# Verify v2 is working
kubectl rollout status deployment/myapp-v2

# Remove v1
kubectl delete deployment myapp-v1

Or use Istio VirtualService for traffic splitting:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp
  http:
  - route:
    - destination:
        host: myapp
        subset: v2
      weight: 100

Q8: What monitoring tools should I use with Kubernetes?

A:

  • Metrics: Prometheus (metrics collection), Grafana (visualization)
  • Logging: ELK Stack (Elasticsearch, Logstash, Kibana) or Loki
  • Tracing: Jaeger or Zipkin
  • APM: New Relic, Datadog, or open-source alternatives

Prometheus Operator simplifies Prometheus deployment:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp-metrics
spec:
  selector:
    matchLabels:
      app: myapp
  endpoints:
  - port: metrics
    interval: 30s

Q9: How do I backup and restore Kubernetes clusters?

A:

  • etcd Backup: Backup etcd for cluster state
  • PV Snapshots: Use VolumeSnapshots for persistent volumes
  • Velero: Comprehensive backup/restore tool

Velero Backup:

# Install Velero
velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.7.0

# Backup entire namespace
velero backup create my-backup --include-namespaces production

# Restore backup
velero restore create --from-backup my-backup

Q10: What's the difference between ConfigMaps and Secrets?

A:

  • ConfigMaps: Store non-sensitive configuration (environment variables, config files)
  • Secrets: Store sensitive data (passwords, API keys, certificates)

Both can be mounted as volumes or injected as environment variables. Secrets are base64 encoded (not encrypted by default). For production, use external secret management or encrypt secrets at rest.

# ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  config.yaml: |
    log_level: info
    timeout: 30s
---
# Secret
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  password: cGFzc3dvcmQ=  # base64 encoded
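
Base64 is an encoding, not encryption — the Secret value above is trivially recoverable, which is why the answer recommends encryption at rest or an external secret manager:

```python
import base64

encoded = "cGFzc3dvcmQ="  # the value from the Secret above
decoded = base64.b64decode(encoded).decode()
# Anyone with read access to the Secret object can recover the plaintext
print(decoded)  # password
```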

Summary Cheat Sheet

Docker Commands

# Image Management
docker build -t image:tag .
docker pull image:tag
docker push image:tag
docker images
docker rmi image:tag

# Container Management
docker run -d -p host:container --name name image:tag
docker ps
docker ps -a
docker stop container
docker start container
docker rm container
docker logs container
docker exec -it container sh

# Networking
docker network create network-name
docker network ls
docker network inspect network-name

# Volumes
docker volume create volume-name
docker volume ls
docker volume inspect volume-name

Kubernetes Commands

# Cluster Information
kubectl cluster-info
kubectl get nodes
kubectl get namespaces

# Pods
kubectl get pods
kubectl get pods -n namespace
kubectl describe pod pod-name
kubectl logs pod-name
kubectl exec -it pod-name -- sh
kubectl delete pod pod-name

# Deployments
kubectl get deployments
kubectl create deployment name --image=image:tag
kubectl scale deployment name --replicas=3
kubectl rollout status deployment/name
kubectl rollout undo deployment/name
kubectl set image deployment/name container=image:newtag

# Services
kubectl get services
kubectl expose deployment name --port=80 --type=LoadBalancer
kubectl port-forward service/name 8080:80

# ConfigMaps and Secrets
kubectl create configmap name --from-file=file
kubectl create secret generic name --from-literal=key=value
kubectl get configmaps
kubectl get secrets

# Apply Manifests
kubectl apply -f manifest.yaml
kubectl delete -f manifest.yaml
kubectl get all

# Debugging
kubectl describe resource/name
kubectl logs -f pod-name
kubectl top nodes
kubectl top pods

Common YAML Patterns

Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: app:tag
        ports:
        - containerPort: 8080

Service:

apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer

ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config
data:
  key: value

Secret:

apiVersion: v1
kind: Secret
metadata:
  name: secret
type: Opaque
data:
  key: base64encodedvalue

Best Practices Checklist

  • Pin image versions; avoid :latest in production
  • Set resource requests and limits on every container
  • Define liveness and readiness probes for each service
  • Keep non-sensitive config in ConfigMaps and sensitive data in Secrets or an external secret manager
  • Enforce network policies and least-privilege RBAC
  • Enable mTLS between services where a service mesh is available
  • Automate build, test, and deployment through CI/CD or GitOps
  • Monitor with Prometheus and Grafana; back up cluster state with Velero

This comprehensive guide covers the essential concepts and practices for building cloud-native applications with containers and Kubernetes. From Docker fundamentals to advanced orchestration patterns, these technologies enable teams to build scalable, resilient, and maintainable distributed systems. As the cloud-native ecosystem continues to evolve, staying current with best practices and emerging patterns is crucial for success in modern software development.

  • Post title: Cloud Computing (4): Cloud-Native and Container Technologies
  • Post author: Chen Kai
  • Create time: 2023-02-05 00:00:00
  • Post link: https://www.chenk.top/en/cloud-computing-cloud-native-containers/
  • Copyright Notice: All articles in this blog are licensed under BY-NC-SA unless stating additionally.