Understanding Kubernetes Networking
Why Services Matter
The Problem: Pods are ephemeral - they come and go with changing IPs.
The Solution: Services provide stable endpoints and load balancing.
Key Benefit: Decouple consumers from providers with service discovery.
Real-World Analogy: Company Phone System
Think of Services as a company phone system:
- Service = Main company phone number
- Endpoints = Individual employee extensions
- Load Balancer = Call distribution system
- DNS = Company phone directory
- Ingress = Reception desk routing external calls
Kubernetes Networking Model
External Client
Internet traffic enters the cluster through LoadBalancer or Ingress resources.
LoadBalancer
Cloud-provided external IP (e.g., 34.102.136.180) that routes to NodePorts.
NodePort
Opens a static port (30000-32767) on every node in the cluster.
ClusterIP
Internal virtual IP (e.g., 10.96.0.1) only reachable within the cluster.
Pod
Each Pod gets its own IP (e.g., 10.244.1.5) on the cluster network.
kube-proxy
Runs on every node and programs iptables (or IPVS) rules that route Service traffic to backend Pods.
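As a rough illustration (assuming kube-proxy runs in the default iptables mode and you have shell access to a node), you can inspect the NAT chains it programs; the chain names below are the standard kube-proxy ones:

```shell
# One KUBE-SERVICES rule per Service ClusterIP
sudo iptables -t nat -L KUBE-SERVICES -n | head -20
# NodePort entries live in their own chain
sudo iptables -t nat -L KUBE-NODEPORTS -n
# In IPVS mode, inspect the virtual servers instead
sudo ipvsadm -Ln | head -20
```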
Kubernetes Network Principles
- Every Pod gets its own IP: No NAT between pods
- Containers in a Pod share network: Communicate via localhost
- All Pods can communicate: No NAT required across nodes
- Services get stable IPs: Virtual IPs that don't change
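These principles can be verified directly from a running cluster; a quick sketch (pod names and IPs below are placeholders you would substitute from your own cluster):

```shell
# Every pod has its own routable cluster IP
kubectl get pods -o wide
# From one pod, reach another pod's IP directly -- no NAT in between
# (pod-a and 10.244.2.7 are placeholder values)
kubectl exec pod-a -- wget -qO- --timeout=2 http://10.244.2.7:8080
# Containers in the same pod share one network namespace,
# so they communicate over localhost
kubectl exec pod-a -c sidecar -- wget -qO- http://localhost:8080
```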
Service Types
ClusterIP (Default)
Exposes service on a cluster-internal IP. Only reachable from within the cluster. Use for internal microservices and databases.
NodePort
Exposes service on each node's IP at a static port (30000-32767). Accessible from outside. Use for development and simple external access.
LoadBalancer
Exposes service externally using cloud provider's load balancer. Gets external IP. Use for production apps on cloud.
ExternalName
Maps service to external DNS name. No proxying, just DNS CNAME record. Use for external databases and APIs.
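The first three types have full examples below; an ExternalName Service is small enough to sketch here (the external hostname is a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com  # placeholder external hostname
# Pods resolving external-db.default.svc.cluster.local get a CNAME
# to db.example.com; no ClusterIP or proxying is involved.
```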
ClusterIP Service Example
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  namespace: default
spec:
  type: ClusterIP            # Default type
  selector:
    app: backend
    tier: api
  ports:
  - name: http
    protocol: TCP
    port: 80                 # Service port
    targetPort: 8080         # Container port
  - name: metrics
    protocol: TCP
    port: 9090
    targetPort: metrics      # Named port
  sessionAffinity: ClientIP  # Sticky sessions
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
NodePort Service Example
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
  - name: http
    port: 80          # Service port
    targetPort: 3000  # Container port
    nodePort: 30080   # Node port (30000-32767)
    protocol: TCP
LoadBalancer Service Example
apiVersion: v1
kind: Service
metadata:
  name: web-service
  annotations:
    # AWS annotations
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443
  loadBalancerSourceRanges:  # Restrict access
  - 10.0.0.0/8
  - 172.16.0.0/12
Headless Service (No Cluster IP)
apiVersion: v1
kind: Service
metadata:
  name: database-headless
spec:
  clusterIP: None  # Headless service
  selector:
    app: cassandra
  ports:
  - name: cql
    port: 9042
    targetPort: 9042
# DNS returns all Pod IPs, not a single service IP
# Used for StatefulSets and direct pod communication
Service Commands
# Create service from YAML
kubectl apply -f service.yaml
# Expose deployment as service
kubectl expose deployment nginx --port=80 --target-port=8080 --type=ClusterIP
# Get services
kubectl get services
kubectl get svc -o wide
# Describe service
kubectl describe service my-service
# Get endpoints
kubectl get endpoints my-service
# Test service from inside cluster
kubectl run test-pod --image=busybox -it --rm -- wget -O- my-service
# Port forward to access service locally
kubectl port-forward service/my-service 8080:80
# Get service in YAML format
kubectl get service my-service -o yaml
DNS & Service Discovery
Kubernetes DNS Resolution
When you create a Service, Kubernetes DNS automatically creates a DNS record. Here is how it works step by step:
- Step 1 - Service Creation: Kubernetes DNS creates a DNS record for the Service
- Step 2 - DNS Format: <service-name>.<namespace>.svc.cluster.local
- Step 3 - Pod DNS Query: Pods query CoreDNS to resolve service names to IPs
- Step 4 - IP Resolution: CoreDNS returns the ClusterIP of the service
DNS Examples
# Full DNS name
my-service.default.svc.cluster.local
# Within same namespace
my-service
# Cross namespace
my-service.other-namespace
# Service subdomain
my-service.other-namespace.svc
# Pod DNS (for StatefulSets)
pod-0.my-service.default.svc.cluster.local
# SRV records for ports
_http._tcp.my-service.default.svc.cluster.local
Testing DNS Resolution
# Run DNS test pod
kubectl run dns-test --image=busybox:1.28 -it --rm --restart=Never -- sh
# Inside the pod, test DNS resolution
nslookup my-service
nslookup my-service.default.svc.cluster.local
nslookup kubernetes.default
# Test with dig (if available)
kubectl run dig-test --image=tutum/dnsutils -it --rm --restart=Never -- sh
dig my-service.default.svc.cluster.local
# Check CoreDNS logs
kubectl logs -n kube-system -l k8s-app=kube-dns
# Get CoreDNS config
kubectl get configmap coredns -n kube-system -o yaml
Custom DNS Configuration
apiVersion: v1
kind: Pod
metadata:
  name: custom-dns-pod
spec:
  dnsPolicy: "None"  # Custom DNS settings
  dnsConfig:
    nameservers:
    - 8.8.8.8
    - 8.8.4.4
    searches:
    - default.svc.cluster.local
    - svc.cluster.local
    - cluster.local
    options:
    - name: ndots
      value: "2"
    - name: edns0
  containers:
  - name: app
    image: nginx
DNS Policies
- ClusterFirst: The actual default. Queries for cluster names go to CoreDNS; everything else is forwarded to the upstream (node) resolver
- Default: Inherit the node's DNS settings (despite the name, this is not the default policy)
- ClusterFirstWithHostNet: For pods running with hostNetwork: true
- None: Ignore cluster DNS and use only the pod's dnsConfig
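For example, a pod on the host network needs ClusterFirstWithHostNet to keep resolving Service names; a minimal sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostnet-pod
spec:
  hostNetwork: true
  # With plain ClusterFirst, a hostNetwork pod would fall back to the
  # node's resolver and lose cluster DNS; this policy keeps CoreDNS first.
  dnsPolicy: ClusterFirstWithHostNet
  containers:
  - name: app
    image: nginx
```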
Advanced Networking
Ingress Controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
Service Mesh Integration
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
    service: productpage
  annotations:
    # Istio traffic management
    traffic.sidecar.istio.io/includeInboundPorts: "9080"
    traffic.sidecar.istio.io/excludeOutboundPorts: "15090,15021"
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
Multi-Port Services
apiVersion: v1
kind: Service
metadata:
  name: multi-port-service
spec:
  selector:
    app: multi-app
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
  - name: https
    port: 443
    targetPort: 8443
    protocol: TCP
  - name: metrics
    port: 9090
    targetPort: 9090
    protocol: TCP
  - name: grpc
    port: 50051
    targetPort: 50051
    protocol: TCP
EndpointSlices
EndpointSlices vs Endpoints
EndpointSlices replaced the Endpoints object as the primary way Kubernetes tracks network endpoints:
- Scalability: Better for large numbers of endpoints
- Performance: Reduced API server load
- Topology: Support for topology-aware routing
- Dual-stack: Better IPv4/IPv6 support
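You can compare the two resources side by side (the service name my-service is a placeholder):

```shell
# Legacy Endpoints: a single object per Service
kubectl get endpoints my-service
# EndpointSlices: potentially several slices per Service, each holding
# up to 100 endpoints by default; linked via a well-known label
kubectl get endpointslices -l kubernetes.io/service-name=my-service
kubectl get endpointslices -l kubernetes.io/service-name=my-service -o yaml
```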
Network Policies
Network Policy Rules
Ingress: Allow traffic FROM specific pods/namespaces TO this pod.
Egress: Allow traffic FROM this pod TO specific destinations.
Default Deny All Policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}  # Apply to all pods in namespace
  policyTypes:
  - Ingress
  - Egress
Allow Specific Traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-to-db
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web
    - namespaceSelector:
        matchLabels:
          name: production
    ports:
    - protocol: TCP
      port: 5432
Egress Control
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-dns
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          # The namespace must carry this label; newer clusters can match
          # the built-in kubernetes.io/metadata.name label instead
          name: kube-system
    ports:
    - protocol: UDP
      port: 53  # DNS
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32  # Block metadata service
    ports:
    - protocol: TCP
      port: 443
Network Policy Pitfalls
- CNI Support: Not all CNI plugins support NetworkPolicies
- Default Allow: Without policies, all traffic is allowed
- No Deny Rules: Policies are additive, can't explicitly deny
- DNS Access: Remember to allow DNS (port 53) for name resolution
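A simple way to confirm your CNI plugin actually enforces policies is to apply a default-deny and watch a previously working request fail (the filename and service name below are placeholders):

```shell
# Before the policy: request succeeds
kubectl run np-test --image=busybox --restart=Never --rm -it -- \
  wget -qO- --timeout=5 my-service
# Apply a default-deny policy (e.g. the one shown earlier)
kubectl apply -f default-deny-all.yaml
# After the policy: the same request should time out; if it still
# succeeds, your CNI plugin is not enforcing NetworkPolicies
kubectl run np-test --image=busybox --restart=Never --rm -it -- \
  wget -qO- --timeout=5 my-service || echo "blocked: policy enforced"
```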
Practice Problems
Easy Multi-Tier Application Services
Set up services for a 3-tier application: frontend (LoadBalancer), backend API (ClusterIP), and database (Headless). Verify connectivity between tiers.
Use type: LoadBalancer for the frontend, type: ClusterIP for the backend, and clusterIP: None for the headless database service. Match selectors to the tier labels on your pods.
# Frontend Service (LoadBalancer)
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    tier: frontend
  ports:
  - port: 80
    targetPort: 3000
---
# Backend API Service (ClusterIP)
apiVersion: v1
kind: Service
metadata:
  name: backend-api
spec:
  type: ClusterIP
  selector:
    tier: backend
  ports:
  - port: 8080
    targetPort: 8080
---
# Database Service (Headless)
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  clusterIP: None
  selector:
    tier: database
  ports:
  - port: 5432
    targetPort: 5432
Medium Network Policy Implementation
Secure your application with network policies: default deny-all, allow frontend to backend, allow backend to database, and allow egress to external APIs on port 443.
Start with a default deny-all policy using an empty podSelector. Then create individual policies for each allowed traffic flow using podSelector and namespaceSelector in the ingress/egress rules.
# Default deny all
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
# Frontend to Backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      tier: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    ports:
    - port: 8080
---
# Backend to Database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-allow-backend
spec:
  podSelector:
    matchLabels:
      tier: database
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - port: 5432
Medium Service Discovery Testing
Create a service, test DNS resolution from a pod, access the service using different DNS formats, and verify load balancing across endpoints.
Use kubectl run with busybox to create a test pod. Inside, use nslookup and wget to test DNS resolution with short names, namespace-qualified names, and FQDNs.
# Create test deployment and service
kubectl create deployment test-app --image=nginx --replicas=3
kubectl expose deployment test-app --port=80
# Run test pod
kubectl run test-client --image=busybox -it --rm -- sh
# Inside the pod, test DNS
nslookup test-app
wget -O- test-app
wget -O- test-app.default
wget -O- test-app.default.svc.cluster.local
# Check endpoints
kubectl get endpoints test-app
# Test load balancing (run inside the test pod; note that the default
# nginx page is identical on every replica, so use an image that echoes
# its hostname if you need to see which pod answered each request)
for i in $(seq 1 10); do wget -qO- test-app > /dev/null && echo "request $i ok"; done
Hard Complete Production Network Setup
Create a production-ready network configuration with multi-tier services, an Ingress controller for external access, network policies for security, custom DNS, and optional service mesh integration.
Combine everything you have learned: use LoadBalancer or Ingress for external traffic, ClusterIP for internal services, Headless for stateful workloads, and NetworkPolicies for zero-trust security. Do not forget to allow DNS egress in your policies.
# This combines all networking concepts:
# 1. Ingress for external HTTP routing
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: production-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend-api
            port:
              number: 8080
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
---
# 2. Default deny + allow DNS egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          # The namespace must carry this label; newer clusters can match
          # kubernetes.io/metadata.name instead
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
Pro Tip
Start with ClusterIP services for internal communication, then add NodePort or LoadBalancer only when you need external access. Always implement NetworkPolicies in production - default-deny with explicit allow rules follows the principle of least privilege.