Core Concepts
Learn the fundamental building blocks of Kubernetes: Pods, Deployments, Services, Namespaces, and the cluster architecture.
1. Kubernetes Architecture
A Kubernetes cluster is made up of two main parts: the Control Plane (the brain) and Worker Nodes (the muscles).
Control Plane Components
- kube-apiserver: The front door of Kubernetes. Every kubectl command, every automated tool, and every internal component communicates through this REST API. It validates and processes requests, then stores state in etcd.
- etcd: A distributed, consistent key-value store. This is the single source of truth for the entire cluster state: every Pod, every Service, every config. It must be backed up regularly in production.
- kube-scheduler: Watches for newly created Pods that have no assigned node, evaluates resource requirements and constraints, and assigns them to the best available node.
- kube-controller-manager: Runs a collection of controllers in a single process. Examples: the Node controller (responds when nodes go down), the Deployment controller (maintains the correct number of Pods), and the Job controller.
Worker Node Components
- kubelet: An agent that runs on every node. It watches the API server for Pod specs assigned to its node and ensures the containers described in those specs are running and healthy. Reports node and pod status back to the control plane.
- kube-proxy: A network proxy running on each node. It maintains network rules (using iptables or IPVS) that allow Pods to communicate with Services. It is responsible for implementing Kubernetes Service load balancing.
- Container Runtime: The software that actually runs containers (e.g., containerd, CRI-O). Docker Engine is no longer supported directly since Kubernetes 1.24 removed the dockershim component.
2. Pods: The Smallest Deployable Unit
A Pod is a wrapper around one or more containers that share the same network namespace and storage. Containers in the same Pod can communicate via localhost.
- Pods are ephemeral: they are not self-healing. If a Pod dies, it stays dead unless a controller (like a Deployment) recreates it.
- Each Pod gets a unique IP address from the cluster's internal network pool.
- Multi-container Pods are used for sidecar patterns (e.g., a log shipper alongside your app).
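The sidecar pattern mentioned above can be sketched as a two-container Pod sharing an emptyDir volume (the Pod name, sidecar image tag, and log path are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-shipper    # hypothetical name
spec:
  containers:
  - name: app                   # main application container
    image: nginx:1.25
    volumeMounts:
    - name: logs
      mountPath: /var/log/app   # app writes logs here
  - name: log-shipper           # sidecar: reads logs from the shared volume
    image: fluent/fluentd:v1.16 # illustrative image tag
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs                  # emptyDir shared by both containers
    emptyDir: {}
```

Both containers see the same files because they mount the same volume, and they share one network namespace, so they could also talk over localhost.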
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  labels:
    app: my-app            # Labels are key-value pairs used for selection
spec:
  containers:
  - name: app
    image: nginx:1.25
    ports:
    - containerPort: 80
    resources:             # Always set resource requests and limits!
      requests:
        cpu: "100m"        # 100 millicores = 0.1 CPU core
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"
```

3. Deployments: Managing Pod Lifecycle
You almost never create Pods directly. Instead, you define a Deployment which manages Pods for you, providing:
- Self-healing: If a Pod dies, the Deployment's ReplicaSet automatically creates a replacement.
- Scaling: Easily change the number of replicas.
- Rolling Updates: Gradually replace old Pods with new ones with zero downtime.
- Rollbacks: Revert to a previous version instantly.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3              # Maintain 3 identical Pods
  selector:
    matchLabels:
      app: nginx           # MUST match template labels below
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # Max Pods above desired count during an update
      maxUnavailable: 0    # Never go below the desired count
  template:
    metadata:
      labels:
        app: nginx         # Pods get this label
    spec:
      containers:
      - name: nginx
        image: nginx:1.25  # Change this to trigger a rolling update
        ports:
        - containerPort: 80
```

Key Deployment Commands
Rolling Updates
```shell
kubectl set image deploy/nginx nginx=nginx:1.26   # start a rolling update
kubectl rollout status deploy/nginx               # watch update progress
kubectl rollout history deploy/nginx              # list revisions
kubectl rollout undo deploy/nginx                 # revert to the previous revision
```
Scaling
```shell
kubectl scale deploy/nginx --replicas=5                            # manual scaling
kubectl autoscale deploy/nginx --min=2 --max=10 --cpu-percent=80   # autoscale on CPU
```
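The `kubectl autoscale` command above creates a HorizontalPodAutoscaler object. The equivalent manifest, using the autoscaling/v2 API, looks roughly like this (a sketch; the HPA name is an assumption):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa            # hypothetical name
spec:
  scaleTargetRef:            # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # target average CPU utilization across Pods
```

Note that the HPA scales based on CPU *requests*, which is another reason to always set them.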
4. Services: Stable Networking for Pods
Since Pods are ephemeral and their IPs change, you need a stable endpoint to reach them. A Service provides a fixed IP address and DNS name that proxies to a set of Pods selected by label.
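The manifest that follows shows the default ClusterIP type, reachable only inside the cluster. To expose the same Pods on a port of every node's IP, a NodePort variant might look like this (a sketch; the nodePort value is an illustrative assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80             # in-cluster Service port (still allocated)
    targetPort: 80       # port on the container
    nodePort: 30080      # external port on every node (30000-32767 range)
```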
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx         # Selects all Pods with label "app: nginx"
  ports:
  - protocol: TCP
    port: 80           # Port the Service listens on
    targetPort: 80     # Port on the container
  type: ClusterIP      # Internal-only (default)
```

5. Namespaces: Logical Cluster Partitioning
Namespaces provide a mechanism to isolate groups of resources within a cluster. Common use cases: separate dev, staging, and prod environments; or teams with their own quotas.
- Default namespaces: default, kube-system (K8s components), kube-public, kube-node-lease.
- Resource names must be unique within a namespace but can repeat across namespaces.
- Most resources are namespace-scoped; cluster-level resources (Nodes, PersistentVolumes, ClusterRoles) are not.
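The per-team quotas mentioned above are enforced with a ResourceQuota object scoped to a namespace. A hedged sketch (the quota name and limits are illustrative assumptions):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-team-quota   # hypothetical name
  namespace: dev-team
spec:
  hard:
    pods: "20"           # max Pods in the namespace
    requests.cpu: "4"    # total CPU requests across all Pods
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```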
```shell
kubectl create namespace dev-team
kubectl get pods -n dev-team
kubectl config set-context --current --namespace=dev-team   # switch default namespace
```
6. ConfigMaps & Secrets
ConfigMaps decouple configuration from your container images. Secrets do the same for sensitive data (passwords, tokens, keys).
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: "postgres.default.svc.cluster.local"
  LOG_LEVEL: "info"
---
# Reference in a Pod spec:
spec:
  containers:
  - name: app
    envFrom:
    - configMapRef:
        name: app-config   # Injects ALL keys as env vars
```
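Secrets follow the same shape as ConfigMaps, except that values under `data` must be base64-encoded; the `stringData` field accepts plain text instead. A sketch with illustrative names and values:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secret        # hypothetical name
type: Opaque
stringData:               # plain-text input; stored base64-encoded
  DB_PASSWORD: "s3cr3t"   # illustrative value only
---
# Reference in a Pod spec, same pattern as a ConfigMap:
spec:
  containers:
  - name: app
    envFrom:
    - secretRef:
        name: app-secret
```

Keep in mind that Secrets are only base64-encoded, not encrypted, unless encryption at rest is enabled for etcd.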