Networking
Master Kubernetes networking: Services, Ingress, NetworkPolicies, DNS, and how Pods communicate across nodes.
1. The Kubernetes Networking Model
Kubernetes enforces a flat networking model with three fundamental rules:
- Every Pod gets its own unique IP address.
- Pods on any node can communicate with all Pods on any other node without NAT.
- Agents on a node (kubelet, kube-proxy) can communicate with all Pods on that node.
This model is implemented by a CNI (Container Network Interface) plugin such as Calico, Flannel, or Cilium.
2. Services: Stable Endpoints for Pods
Pods are ephemeral: they are replaced constantly during updates and failures, and their IPs change every time. A Service is an abstraction that provides a stable IP address and DNS name, and load-balances traffic to a dynamically changing set of Pods selected by label.
Service Types
- ClusterIP (default): Exposes service on a cluster-internal virtual IP. Only reachable from within the cluster. Perfect for internal microservice communication.
- NodePort: Exposes the Service on each Node's IP at a static port (30000-32767). Traffic hitting NodeIP:NodePort is forwarded to the Service. Useful for development; not recommended for production exposure.
- LoadBalancer: Extends NodePort by automatically provisioning a cloud load balancer (AWS ELB, GCP LB). The cloud LB gets a public IP and routes traffic into the cluster. This is how production workloads are usually exposed.
- ExternalName: Maps a Service to an external DNS name (e.g., my-database.example.com). No proxying: it's a DNS CNAME trick. Useful for migrating services into a cluster.
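The two externally-facing types above can be sketched as manifests as well (the Service names and the external hostname here are illustrative, not from the examples below):

```yaml
# LoadBalancer: the cloud provider provisions an external load balancer
apiVersion: v1
kind: Service
metadata:
  name: web-public          # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
# ExternalName: a DNS CNAME to a service outside the cluster; no selector,
# no proxying, no endpoints
apiVersion: v1
kind: Service
metadata:
  name: legacy-db           # illustrative name
spec:
  type: ExternalName
  externalName: my-database.example.com
```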
```yaml
# ClusterIP (default)
apiVersion: v1
kind: Service
metadata:
  name: web-internal
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
  type: ClusterIP
---
# NodePort: accessible externally
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080  # Optional: set a specific port, or let K8s pick
  type: NodePort
```
3. DNS Inside Kubernetes: CoreDNS
Kubernetes runs CoreDNS as the cluster DNS server. Every Service automatically gets a DNS entry. Pods can resolve services by name.
| From Namespace | Service Name | DNS Address |
|---|---|---|
| Same namespace | database | database |
| Different namespace | database in prod | database.prod |
| Full FQDN (anywhere) | (any) | database.prod.svc.cluster.local |
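The naming scheme in the table is mechanical. A tiny sketch of how the full name is assembled (`cluster.local` is the default cluster domain, configurable at install time; the function name is illustrative):

```python
def service_fqdn(service: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    """Build the DNS name CoreDNS publishes for a ClusterIP Service."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

# Short names work because each Pod's resolv.conf lists search domains
# (its own namespace first), so "database" expands to the full form:
print(service_fqdn("database", "prod"))  # database.prod.svc.cluster.local
```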
4. Ingress: HTTP/HTTPS Layer-7 Routing
An Ingress is a set of routing rules that map external HTTP/HTTPS requests to internal Services based on hostname and URL path. It requires an Ingress Controller to be installed (e.g., nginx-ingress, Traefik, AWS ALB Controller).
Ingress vs LoadBalancer: A LoadBalancer Service costs one cloud IP per service. Ingress routes multiple services through a single IP/load balancer using path or host rules, which is far more cost-effective.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx  # Specifies which Ingress Controller to use
  rules:
    - host: myapp.example.com  # Route by hostname
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service  # Route /api/* to this service
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service  # Route /* to this service
                port:
                  number: 80
  tls:  # Optional: HTTPS termination
    - hosts:
        - myapp.example.com
      secretName: myapp-tls-secret  # Cert stored as a K8s Secret
```
5. NetworkPolicies: Internal Firewall Rules
By default, all Pods in a cluster can communicate with all other Pods. NetworkPolicies let you restrict this. Once a NetworkPolicy selects a Pod, all traffic not explicitly allowed by a policy is denied (deny-all becomes the default for that pod).
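Because selecting a Pod flips it to default-deny, a common baseline is an explicit deny-all policy for the whole namespace, with later policies punching specific holes. A minimal sketch (the policy name is illustrative):

```yaml
# An empty podSelector matches every Pod in the namespace; listing both
# policyTypes with no rules denies all ingress and egress by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all   # illustrative name
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```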
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-isolation
  namespace: production
spec:
  podSelector:
    matchLabels:
      role: database  # Apply this policy to database pods
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend  # ONLY allow traffic from backend pods
      ports:
        - protocol: TCP
          port: 5432  # PostgreSQL port
  egress: []  # Deny all outgoing traffic from the DB pod
```
6. kube-proxy and Service Implementation
kube-proxy runs on every node and watches the API server for Service and Endpoints changes. It programs the OS-level networking rules (iptables or IPVS) to implement Service routing. For example, when a Pod accesses the ClusterIP of a Service, kube-proxy's iptables rules transparently redirect that traffic to one of the healthy backend Pods.
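Conceptually, the rules kube-proxy programs amount to picking one healthy endpoint per new connection. A simplified Python sketch, not real iptables; the addresses and function name are made up for illustration:

```python
import random

def pick_backend(endpoints: list[str]) -> str:
    """Roughly what the iptables DNAT chain for a Service does:
    choose one healthy backend Pod per new connection, uniformly."""
    if not endpoints:
        raise RuntimeError("no healthy endpoints for Service")
    return random.choice(endpoints)

# Healthy Pod addresses behind the Service (illustrative values)
backends = ["10.1.0.4:8080", "10.1.0.7:8080", "10.1.0.9:8080"]
chosen = pick_backend(backends)
print(f"DNAT 10.96.12.5:80 -> {chosen}")  # ClusterIP rewritten to a Pod IP
```

In the IPVS mode, the same idea is handled by the kernel's virtual-server tables, which scale better with many Services and support more balancing algorithms than uniform random choice.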