The shift from monolithic applications to cloud-native architectures
represents one of the most significant transformations in modern
software development. This evolution isn't just about moving code to the
cloud — it's about fundamentally rethinking how we build, deploy, and
operate applications to leverage the full potential of distributed
systems. At the heart of this transformation lie containers and
orchestration platforms, technologies that have revolutionized the way
we package, ship, and run software.
This comprehensive guide explores the entire cloud-native ecosystem,
from foundational principles to advanced orchestration patterns. We'll
dive deep into Docker's containerization model, Kubernetes'
orchestration capabilities, and the architectural patterns that enable
scalable, resilient distributed systems. Whether you're deploying your
first containerized application or architecting a multi-region
microservices platform, this guide provides the practical knowledge and
real-world examples you need.
Cloud-Native Principles and the 12-Factor App
Cloud-native applications are designed specifically for cloud
environments, embracing principles that enable scalability, resilience,
and rapid iteration. The 12-Factor App methodology, developed by Heroku,
provides a blueprint for building software-as-a-service applications
that are portable, scalable, and maintainable.
The 12 Factors
I. Codebase: One codebase tracked in revision
control, many deploys. Each application has a single codebase, but
multiple environments (staging, production) can deploy from it.
II. Dependencies: Explicitly declare and isolate
dependencies. Never rely on implicit existence of system-wide
packages.
```
# Example: requirements.txt for Python
flask==2.3.0
redis==4.5.0
psycopg2-binary==2.9.7
```
III. Config: Store configuration in the environment.
Configuration that varies between deploys (database URLs, API keys)
should be stored as environment variables, not in code.
IV. Backing Services: Treat backing services as
attached resources. Databases, message queues, and cache servers should
be addressable via URLs and interchangeable without code changes.
V. Build, Release, Run: Strictly separate build and
run stages. The build stage transforms code into an executable bundle;
the release stage combines the build with config; the run stage executes
the app in the execution environment.
VI. Processes: Execute the app as one or more
stateless processes. Application state should be stored in backing
services (databases, caches), not in memory or local filesystem.
VII. Port Binding: Export services via port binding.
Applications should be self-contained and declare their port
dependencies.
VIII. Concurrency: Scale out via the process model.
Applications scale horizontally by running more processes, with each
type of work (web requests, background jobs) handled by its own process type.
IX. Disposability: Maximize robustness with fast
startup and graceful shutdown. Processes should start quickly and shut
down gracefully when receiving SIGTERM.
X. Dev/Prod Parity: Keep development, staging, and
production as similar as possible. Use the same backing services and
avoid environment-specific workarounds.
XI. Logs: Treat logs as event streams. Applications
should write to stdout/stderr and let the execution environment handle
aggregation and storage.
XII. Admin Processes: Run admin/management tasks as
one-off processes. Database migrations, data transformations, and
one-time scripts should run in the same environment as regular
processes.
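Several of these factors can be sketched together in a hypothetical docker-compose.yml: config arrives through the environment (III), the database is an attached, swappable backing service (IV), and the app exports itself via port binding (VII). Service and image names here are illustrative, not from the original:

```yaml
# Illustrative sketch only: factor III (config in environment),
# factor IV (database as attached resource), factor VII (port binding)
services:
  web:
    image: myapp:1.0.0            # hypothetical application image
    ports:
      - "8000:8000"               # self-contained port binding
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/appdb
  db:
    image: postgres:15-alpine     # swappable backing service
```

Swapping the database for a managed cloud instance then only requires changing DATABASE_URL, not code.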
Cloud-Native Characteristics
Beyond the 12 factors, cloud-native applications exhibit several key
characteristics:
Microservices Architecture: Applications are
decomposed into small, independently deployable services
Containerization: Applications and dependencies are
packaged into containers for consistency across environments
Dynamic Orchestration: Containers are orchestrated
by platforms like Kubernetes for automatic scheduling and scaling
Service Mesh: Inter-service communication is
handled by a dedicated infrastructure layer
Declarative APIs: Desired state is declared, and
the platform ensures convergence
DevOps Culture: Development and operations teams
collaborate closely with shared tooling
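The declarative model in particular can be sketched with a minimal, hypothetical Deployment manifest: you declare the desired state (three replicas) and Kubernetes continuously converges the cluster toward it:

```yaml
# Hypothetical manifest: declare desired state, the platform reconciles
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # desired state: three identical pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25-alpine
```

If a pod crashes or a node fails, the controller recreates pods until the observed state matches the declared one.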
Docker Fundamentals
Docker revolutionized application deployment by introducing a
standardized way to package applications with their dependencies.
Understanding Docker's core concepts is essential for working with
cloud-native technologies.
Images and Containers
A Docker image is a read-only template containing
instructions for creating a container. Images are built from a
Dockerfile and consist of multiple layers, each representing a
filesystem change.
A Docker container is a running instance of an
image. Containers are isolated from each other and from the host system,
sharing only the host's kernel.
```shell
# Pull an image from Docker Hub
docker pull nginx:1.25-alpine

# Run a container from an image
docker run -d -p 8080:80 --name my-nginx nginx:1.25-alpine

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# View container logs
docker logs my-nginx

# Stop a container
docker stop my-nginx

# Remove a container
docker rm my-nginx
```
Dockerfile Best Practices
A Dockerfile defines how to build a Docker image. Following best
practices ensures secure, efficient, and maintainable images.
1. Use Multi-Stage Builds: Reduce final image size
by using intermediate stages for building and a minimal final stage for
runtime.
```dockerfile
# Multi-stage build example
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /app/server

FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /app/server .
EXPOSE 8080
CMD ["./server"]
```
2. Leverage Layer Caching: Order Dockerfile
instructions from least to most frequently changing. Copy dependency
files before source code.
```dockerfile
# Good: Dependencies cached separately
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
RUN npm run build
```

```dockerfile
# Bad: Everything invalidated on code change
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm ci --only=production && npm run build
```
3. Use Specific Tags: Avoid latest tag
in production. Pin specific versions for reproducibility.
```dockerfile
# Good
FROM node:18.17.0-alpine

# Bad
FROM node:latest
```
4. Minimize Layers: Combine RUN commands to reduce
image layers and size.
```dockerfile
# Good: Single layer
RUN apt-get update && \
    apt-get install -y python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# Bad: Multiple layers
RUN apt-get update
RUN apt-get install -y python3 python3-pip
RUN rm -rf /var/lib/apt/lists/*
```
5. Use .dockerignore: Exclude unnecessary files from
build context.
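A hypothetical .dockerignore might look like this (entries are common choices, not prescriptive):

```
# Illustrative .dockerignore
.git
node_modules
dist
*.log
.env
```

Excluding .env also keeps local secrets out of the build context entirely.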
```shell
# Start with specific profile
docker-compose --profile development up
```
Kubernetes Core Concepts
Kubernetes (K8s) is the de facto standard for container
orchestration, providing automated deployment, scaling, and management
of containerized applications.
Architecture Overview
A Kubernetes cluster consists of:
Control Plane: Manages cluster state and scheduling
decisions
API Server: Exposes Kubernetes API
etcd: Distributed key-value store for cluster state
Scheduler: Assigns pods to nodes
Controller Manager: Runs cluster controllers
Cloud Controller Manager: Integrates with cloud providers
Worker Nodes: Run containerized applications
kubelet: Agent that communicates with control plane
Kubernetes networking enables pods to communicate with each other and
external services. Understanding CNI plugins and service mesh concepts
is crucial for production deployments.
Container Network Interface (CNI)
CNI plugins provide networking functionality for pods. Popular
options include:
Flannel: Simple overlay network using VXLAN or
host-gw.
Kubernetes provides flexible storage abstractions for stateful
applications through PersistentVolumes (PVs), PersistentVolumeClaims
(PVCs), and StorageClasses.
PersistentVolumes and PersistentVolumeClaims
PersistentVolume (PV): Cluster-wide storage resource,
provisioned statically by administrators or dynamically through a
StorageClass.
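As an illustrative sketch, a PersistentVolumeClaim requests storage that Kubernetes binds to a matching PV (the StorageClass name here is an assumption):

```yaml
# Hypothetical PVC requesting 10Gi of single-node read-write storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard    # assumed class; often triggers dynamic provisioning
```

A pod then mounts the claim by name, without needing to know which PV backs it.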
```shell
# Install from repository
helm install postgres bitnami/postgresql
```
Helm Best Practices
Use Semantic Versioning: Follow SemVer for chart
versions
Parameterize Everything: Make charts configurable
via values.yaml
Use Templates: Leverage Go templates for DRY
principles
Validate Charts: Use helm lint and
helm template --debug
Document Values: Document all values.yaml
options
Test Charts: Use helm test for
post-installation validation
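Parameterization can be sketched with a hypothetical values.yaml; templates then reference these values, e.g. {{ .Values.replicaCount }} inside templates/deployment.yaml:

```yaml
# Hypothetical values.yaml exposing the chart's knobs
replicaCount: 2
image:
  repository: myapp     # assumed image name
  tag: "1.0.0"          # pinned, per SemVer practice
resources: {}           # left empty so users can override
```

Running helm lint and helm template --debug against such a chart catches templating errors before installation.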
Service Mesh: Istio Deep Dive
Service meshes provide a dedicated infrastructure layer for managing
service-to-service communication, offering advanced features beyond
basic Kubernetes networking.
Istio Architecture
Control Plane Components:
Istiod: Unified control plane (replaces Pilot,
Citadel, Galley)
Microservices architecture decomposes applications into independently
deployable services. Understanding common patterns is essential for
building resilient distributed systems.
API Gateway Pattern
An API Gateway acts as a single entry point for client requests,
routing to appropriate microservices.
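As an illustration, Kubernetes' built-in Ingress resource can play a basic gateway role, routing URL paths to different services (service names here are hypothetical):

```yaml
# Hypothetical Ingress routing URL paths to separate microservices
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-gateway
spec:
  rules:
    - http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-service   # assumed service
                port:
                  number: 80
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: users-service    # assumed service
                port:
                  number: 80
```

Dedicated gateways (such as Kong, Ambassador, or a cloud provider's offering) layer cross-cutting concerns like authentication and rate limiting on top of this routing.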
Q1: What's the difference between Docker and Kubernetes?
A: Docker is a containerization platform that
packages applications and dependencies into containers. Kubernetes is an
orchestration platform that manages containers across multiple hosts,
handling scheduling, scaling, networking, and lifecycle management.
Docker creates containers; Kubernetes manages them at scale.
Q2: When should I use StatefulSets vs Deployments?
A: Use Deployments for stateless
applications where pods are interchangeable (web servers, API services).
Use StatefulSets for stateful applications requiring
stable network identities, ordered deployment, and persistent storage
(databases, message queues, distributed systems).
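A minimal StatefulSet sketch (names and sizes are assumptions) shows what Deployments lack: stable pod names (db-0, db-1) and a volume per pod:

```yaml
# Hypothetical StatefulSet: stable identities and per-pod persistent storage
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db              # headless Service giving pods stable DNS names
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:15-alpine
  volumeClaimTemplates:        # one PVC created per pod, retained across restarts
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
```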
Q3: How do I choose between different CNI plugins?
A:
Flannel: Simple, good for basic networking
needs
Calico: Advanced policy enforcement, BGP routing,
good for multi-cluster
Cilium: eBPF-based, advanced security features,
high performance
Weave: Simple overlay network with encryption
Choose based on your requirements: simplicity (Flannel), policy
(Calico), performance/security (Cilium).
Q4: What's the difference between ClusterIP, NodePort, and LoadBalancer services?
A:
ClusterIP: Internal-only access within cluster
NodePort: Exposes service on each node's IP at
static port (30000-32767)
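The type is a single field in the Service spec; a sketch with assumed labels and ports:

```yaml
# Hypothetical Service; ClusterIP is the default type, reachable only in-cluster
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP        # change to NodePort or LoadBalancer to expose externally
  selector:
    app: web             # assumed pod label
  ports:
    - port: 80           # port the Service exposes
      targetPort: 8080   # port the container listens on
```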
Q6: What's the best practice for resource requests and limits?
A:
Requests: Resources guaranteed to the pod; the scheduler
uses them to place the pod on a node with capacity
Limits: Maximum resources (prevents resource
exhaustion)
Set requests based on typical usage, limits based on peak usage. Use
monitoring to tune values. Avoid setting limits too low (causes
OOMKilled) or too high (wastes resources).
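In a pod spec this looks like the following sketch (values are placeholders, to be tuned from monitoring data):

```yaml
# Hypothetical container resources: request typical usage, limit peak usage
resources:
  requests:
    cpu: "250m"       # a quarter of a CPU core
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"   # exceeding this gets the container OOMKilled
```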
Q10: What's the difference between ConfigMaps and Secrets?
A:
ConfigMaps: Store non-sensitive configuration
(environment variables, config files)
Secrets: Store sensitive data (passwords, API keys,
certificates)
Both can be mounted as volumes or injected as environment variables.
Secrets are base64 encoded (not encrypted by default). For production,
use external secret management or encrypt secrets at rest.
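A sketch of both resources side by side (names and values are illustrative):

```yaml
# Hypothetical ConfigMap (plain text) and Secret (base64-encoded, not encrypted)
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  API_KEY: c2VjcmV0LXZhbHVl   # base64 of "secret-value"
```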
```shell
# Cluster Information
kubectl cluster-info
kubectl get nodes
kubectl get namespaces

# Pods
kubectl get pods
kubectl get pods -n namespace
kubectl describe pod pod-name
kubectl logs pod-name
kubectl exec -it pod-name -- sh
kubectl delete pod pod-name

# Deployments
kubectl get deployments
kubectl create deployment name --image=image:tag
kubectl scale deployment name --replicas=3
kubectl rollout status deployment/name
kubectl rollout undo deployment/name
kubectl set image deployment/name container=image:newtag

# Services
kubectl get services
kubectl expose deployment name --port=80 --type=LoadBalancer
kubectl port-forward service/name 8080:80

# ConfigMaps and Secrets
kubectl create configmap name --from-file=file
kubectl create secret generic name --from-literal=key=value
kubectl get configmaps
kubectl get secrets

# Apply Manifests
kubectl apply -f manifest.yaml
kubectl delete -f manifest.yaml
kubectl get all

# Debugging
kubectl describe resource/name
kubectl logs -f pod-name
kubectl top nodes
kubectl top pods
```
This comprehensive guide covers the essential concepts and practices
for building cloud-native applications with containers and Kubernetes.
From Docker fundamentals to advanced orchestration patterns, these
technologies enable teams to build scalable, resilient, and maintainable
distributed systems. As the cloud-native ecosystem continues to evolve,
staying current with best practices and emerging patterns is crucial for
success in modern software development.
Post title: Cloud Computing (4): Cloud-Native and Container Technologies
Post author: Chen Kai
Create time: 2023-02-05 00:00:00
Post link: https://www.chenk.top/en/cloud-computing-cloud-native-containers/
Copyright Notice: All articles in this blog are licensed under BY-NC-SA unless stated otherwise.