Role-Based Access Control (RBAC)
Control who can access what resources in your Kubernetes cluster using fine-grained permissions.
- Subjects: Users, Groups, or ServiceAccounts that need access
- Roles/ClusterRoles: define what actions can be performed on which resources
- Bindings: link subjects to roles, granting the defined permissions
Creating a Role
# Namespace-scoped Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "pods/status"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list"]
---
# Cluster-wide ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  # Restrict to specific resource names (note: list, watch, and create
  # requests cannot be restricted by resourceNames, only named requests
  # such as get)
  resourceNames: ["app-secret", "db-secret"]
  verbs: ["get"]
Creating RoleBindings
# Bind Role to User
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: production
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: developers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
---
# Bind ClusterRole to ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global
subjects:
- kind: ServiceAccount
  name: secret-manager
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
ServiceAccount with RBAC
# Create ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-deployer
  namespace: production
---
# Create Role with deployment permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: deployment-manager
rules:
- apiGroups: ["apps"]  # the legacy "extensions" group was removed in v1.16
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
---
# Bind Role to ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-manager-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: app-deployer
  namespace: production
roleRef:
  kind: Role
  name: deployment-manager
  apiGroup: rbac.authorization.k8s.io
---
# Use ServiceAccount in Pod
apiVersion: v1
kind: Pod
metadata:
  name: deployer-pod
  namespace: production
spec:
  serviceAccountName: app-deployer
  containers:
  - name: kubectl
    image: bitnami/kubectl:latest
    command: ["sleep", "3600"]
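To confirm the binding behaves as intended, you can impersonate the ServiceAccount with kubectl auth can-i (cluster-dependent, shown here as a sketch against the manifests above):

```shell
# Should print "yes": the binding grants create on deployments in production
kubectl auth can-i create deployments \
  --as=system:serviceaccount:production:app-deployer -n production

# Should print "no": the Role grants only read verbs on pods
kubectl auth can-i delete pods \
  --as=system:serviceaccount:production:app-deployer -n production
```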
Pro Tip
Use the principle of least privilege. Grant only the minimum permissions required for a task.
RBAC Best Practices
- Developers: read pods, logs, and deployments
- CI/CD: deploy, update, rollback
- Monitoring: read all resources and metrics
- Admin: full cluster access
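The CI/CD entry above can be sketched as a namespace-scoped Role (role name and namespace are illustrative; note that kubectl rollout undo performs a rollback by patching the Deployment, so no extra verb is needed):

```yaml
# Hypothetical CI/CD Role: deploy, update, and roll back, but never delete
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: cicd-deployer
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]  # for watching rollout status and debugging
```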
Common RBAC Patterns
# Read-only access to namespace resources
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-viewer
rules:
- apiGroups: [""]
  resources: ["namespaces", "pods", "services", "configmaps"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "daemonsets", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
---
# Developer access with limited delete
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: developer
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "pods/exec"]
  verbs: ["*"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]  # no delete
- apiGroups: [""]
  resources: ["secrets", "configmaps"]
  verbs: ["get", "list"]  # read-only for sensitive data
Network Policies
Control traffic flow at the IP address or port level using Kubernetes NetworkPolicies.
Important
Network policies require a CNI plugin that supports them (Calico, Cilium, Weave Net). They will not work with basic networking like kubenet.
Default Deny All Traffic
# Deny all ingress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}  # apply to all pods in the namespace
  policyTypes:
  - Ingress
---
# Deny all egress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress: []  # no allowed egress rules
Allow Specific Traffic
# Allow frontend-to-backend communication
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-netpol
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
      tier: api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    # Allow from frontend pods
    - podSelector:
        matchLabels:
          app: frontend
    # Allow from a specific namespace (selectors in the same entry are ANDed)
    - namespaceSelector:
        matchLabels:
          name: monitoring
      podSelector:
        matchLabels:
          app: prometheus
    # Allow from specific IP ranges
    - ipBlock:
        cidr: 10.0.0.0/8
        except:
        - 10.0.1.0/24
    ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 8443
  egress:
  # Allow DNS
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
  # Allow to database
  - to:
    - podSelector:
        matchLabels:
          app: postgres
    ports:
    - protocol: TCP
      port: 5432
Advanced Network Policies
# Multi-tier application network policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-tier-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: web
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Allow from ingress controller
  - from:
    - namespaceSelector:
        matchLabels:
          name: ingress-nginx
      podSelector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
    ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
  egress:
  # Allow to API tier
  - to:
    - podSelector:
        matchLabels:
          tier: api
    ports:
    - protocol: TCP
      port: 8080
  # Allow external HTTPS
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
    ports:
    - protocol: TCP
      port: 443
  # Allow DNS (kube-dns runs in kube-system, so a bare podSelector
  # would never match it; the namespaceSelector is required)
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
Network Policy Patterns
- Zero Trust: Default deny all, explicitly allow required traffic
- Microsegmentation: Isolate different application tiers
- Namespace Isolation: Prevent cross-namespace communication
- Egress Control: Restrict outbound connections to approved endpoints
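The namespace-isolation pattern from the list above can be sketched as a policy that admits only traffic originating inside the same namespace (namespace name is illustrative):

```yaml
# Deny ingress from every other namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: production
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}    # empty podSelector = any pod in this same namespace
```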
Service Mesh Security
# Istio PeerAuthentication for mTLS
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production
spec:
  mtls:
    mode: STRICT  # enforce mTLS for all traffic
---
# Authorization Policy
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: frontend-authz
  namespace: production
spec:
  selector:
    matchLabels:
      app: frontend
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/production/sa/backend"]
    to:
    - operation:
        methods: ["GET", "POST"]
        paths: ["/api/*"]
  - from:
    - source:
        namespaces: ["monitoring"]
    to:
    - operation:
        methods: ["GET"]
        paths: ["/metrics"]
Pod Security Context
Configure security settings at the pod and container level to minimize attack surface.
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  # Pod-level security context
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
    fsGroupChangePolicy: "OnRootMismatch"
    seccompProfile:
      type: RuntimeDefault
    supplementalGroups: [4000]
  containers:
  - name: app
    image: myapp:v1
    # Container-level security context (overrides pod-level settings)
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL
        add:
        - NET_BIND_SERVICE
      runAsNonRoot: true
      runAsUser: 1000
    volumeMounts:
    - name: tmp
      mountPath: /tmp
    - name: var-cache
      mountPath: /var/cache
  volumes:
  # Writable volumes for the read-only root filesystem
  - name: tmp
    emptyDir: {}
  - name: var-cache
    emptyDir: {}
Pod Security Standards
1. Privileged: unrestricted policy, providing the widest possible permissions
2. Baseline: minimally restrictive policy that prevents known privilege escalations
3. Restricted: heavily restricted policy, following current Pod hardening best practices
Pod Security Admission
# Namespace labels for Pod Security Standards
apiVersion: v1
kind: Namespace
metadata:
  name: secure-namespace
  labels:
    # Enforce the restricted standard
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Audit baseline violations
    pod-security.kubernetes.io/audit: baseline
    pod-security.kubernetes.io/audit-version: latest
    # Warn on policy violations
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
Pod Security Policies (Deprecated)
Deprecation Notice
PodSecurityPolicy was deprecated in Kubernetes v1.21 and removed in v1.25. Use Pod Security Standards instead.
Alternative: OPA Gatekeeper
# OPA Gatekeeper ConstraintTemplate
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredsecuritycontext
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredSecurityContext
      validation:
        openAPIV3Schema:
          type: object
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8srequiredsecuritycontext

      # Collect containers from bare Pods and from workload pod templates
      # (Deployments nest them under spec.template.spec.containers)
      containers[c] {
        c := input.review.object.spec.containers[_]
      }
      containers[c] {
        c := input.review.object.spec.template.spec.containers[_]
      }

      violation[{"msg": msg}] {
        container := containers[_]
        not container.securityContext.runAsNonRoot
        msg := "Container must run as non-root user"
      }

      violation[{"msg": msg}] {
        container := containers[_]
        not container.securityContext.allowPrivilegeEscalation == false
        msg := "Container must not allow privilege escalation"
      }

      violation[{"msg": msg}] {
        container := containers[_]
        not container.securityContext.readOnlyRootFilesystem
        msg := "Container must have read-only root filesystem"
      }
---
# Apply the constraint
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredSecurityContext
metadata:
  name: must-have-security-context
spec:
  match:
    kinds:
    - apiGroups: ["apps"]
      kinds: ["Deployment", "StatefulSet", "DaemonSet"]
    namespaces: ["production"]
Secrets Management
Securely store and manage sensitive information like passwords, tokens, and keys.
Creating and Using Secrets
# Create secret from literal values
kubectl create secret generic db-credentials \
--from-literal=username=dbuser \
--from-literal=password='S3cur3P@ssw0rd!'
# Create secret from files
kubectl create secret generic ssl-certs \
--from-file=tls.crt=/path/to/tls.crt \
--from-file=tls.key=/path/to/tls.key
# Create TLS secret
kubectl create secret tls tls-secret \
--cert=path/to/tls.crt \
--key=path/to/tls.key
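The data field of a Secret manifest holds base64-encoded values. Encoding and decoding them with the standard base64 tool (the example values match the style used in this section):

```shell
# Encode a value for .data (-n avoids including a trailing newline)
echo -n 'S3cur3P@ssw0rd!' | base64

# Decode a value read back from a Secret
echo 'cGFzc3dvcmQxMjM=' | base64 -d
```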
# Secret manifest (base64 encoded)
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  api-key: YXBpLWtleS12YWx1ZQ==  # base64 encoded
  db-password: cGFzc3dvcmQxMjM=
stringData:  # plain text (encoded automatically on write)
  config.yaml: |
    database:
      host: postgres
      port: 5432
---
# Using secrets in pods
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: myapp:v1
    # Expose individual keys as environment variables
    env:
    - name: API_KEY
      valueFrom:
        secretKeyRef:
          name: app-secrets
          key: api-key
    # Or expose every key in the Secret at once
    envFrom:
    - secretRef:
        name: app-secrets
    # Mount as files
    volumeMounts:
    - name: secrets
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secrets
    secret:
      secretName: app-secrets
      defaultMode: 0400  # read-only for the owner
External Secrets Management
# HashiCorp Vault with Secrets Store CSI Driver
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-database
spec:
  provider: vault
  parameters:
    vaultAddress: "http://vault.vault:8200"
    roleName: "database"
    objects: |
      - objectName: "db-password"
        secretPath: "secret/data/database"
        secretKey: "password"
      - objectName: "db-username"
        secretPath: "secret/data/database"
        secretKey: "username"
---
# Use with a CSI volume
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  serviceAccountName: app
  containers:
  - name: app
    image: myapp:v1
    volumeMounts:
    - name: secrets-store
      mountPath: "/mnt/secrets"
      readOnly: true
  volumes:
  - name: secrets-store
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: vault-database
Encrypting Secrets at Rest
# EncryptionConfiguration for the API server
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  # AES-GCM with random nonce
  - aesgcm:
      keys:
      - name: key1
        secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=
  # AES-CBC with PKCS#7 padding
  - aescbc:
      keys:
      - name: key2
        secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=
  # Identity provider (no encryption), kept last as a read fallback
  - identity: {}
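After restarting the API server with this configuration, newly written Secrets are stored encrypted. A common spot-check, reading a Secret straight out of etcd (certificate paths and the secret name are illustrative and require direct etcd access):

```shell
# An encrypted value starts with the provider prefix, e.g. k8s:enc:aesgcm:v1:key1,
# instead of plain JSON
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/default/app-secrets | hexdump -C | head
```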
Secrets Best Practices
- Enable encryption at rest: Configure etcd encryption for secrets
- Rotate secrets regularly: Implement automated secret rotation
- Never commit secrets: Use sealed secrets or external secret stores
- Audit secret access: Monitor and log secret usage
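"Never commit secrets" is often implemented with Bitnami Sealed Secrets: the kubeseal CLI encrypts a Secret against the cluster controller's public key, and only the resulting SealedSecret goes into git. A sketch (the encrypted blob is a placeholder, not real ciphertext):

```yaml
# Safe to commit: only the controller's private key can decrypt it
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: production
spec:
  encryptedData:
    password: AgB4...   # placeholder for kubeseal output
```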
Security Scanning
Identify and fix vulnerabilities in container images, Kubernetes manifests, and running workloads.
Image Scanning
# Scan a container image for vulnerabilities with Trivy
trivy image nginx:latest
# Scan with Anchore Grype
grype nginx:latest
# Snyk scanning via Docker (the "docker scan" plugin is deprecated
# in newer Docker releases in favor of "docker scout")
docker scan nginx:latest
Admission Controller for Image Scanning
# OPA Gatekeeper policy for image registry validation
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sallowedimages
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedImages
      validation:
        openAPIV3Schema:
          type: object
          properties:
            allowedRegistries:
              type: array
              items:
                type: string
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8sallowedimages

      # Helper: true when the image starts with any allowed registry prefix
      # (the built-in is "startswith"; a bare wildcard inside "not" is unsafe)
      image_allowed(image) {
        startswith(image, input.parameters.allowedRegistries[_])
      }

      violation[{"msg": msg}] {
        container := input.review.object.spec.containers[_]
        not image_allowed(container.image)
        msg := sprintf("Container image %v is not from an allowed registry", [container.image])
      }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedImages
metadata:
  name: must-use-approved-registry
spec:
  match:
    kinds:
    - apiGroups: ["apps", ""]
      kinds: ["Deployment", "StatefulSet", "DaemonSet", "Pod"]
  parameters:
    allowedRegistries:
    - "gcr.io/my-org/"
    - "docker.io/mycompany/"
Runtime Security with Falco
# Falco rules for runtime security
- rule: Terminal shell in container
  desc: A shell was used as the entrypoint or exec'd into a container
  condition: >
    spawned_process and container
    and shell_procs and proc.name in (shell_binaries)
    and not container.image.repository in (allowed_images)
  output: >
    Shell opened in container (user=%user.name container_id=%container.id
    container_name=%container.name shell=%proc.name)
  priority: WARNING
  tags: [container, shell]

- rule: Write below etc
  desc: An attempt to write to any file below /etc
  condition: >
    write and etc_dir and not proc.name in (shadowutils_binaries)
    and not (container and proc.name in (known_binaries))
  output: >
    File below /etc opened for writing (user=%user.name command=%proc.cmdline
    file=%fd.name container_id=%container.id)
  priority: ERROR
  tags: [filesystem, mitre_persistence]
Kubernetes Manifest Scanning
# Scan Kubernetes manifests with Kubesec
kubesec scan deployment.yaml
# Scan with Polaris
polaris audit --audit-path ./manifests/
# Scan with Checkov
checkov -f deployment.yaml --framework kubernetes
# Scan with KubeLinter
kube-linter lint manifests/
Security Scanning Tools
- Trivy: Comprehensive vulnerability scanner
- Falco: Runtime security monitoring
- KubeLinter: Static analysis of Kubernetes YAML
- Kubesec: Security risk analysis for manifests
- Polaris: Best practices validation
- OPA Gatekeeper: Policy enforcement
Audit Logging
Track and monitor all API server activities for security and compliance.
Audit Policy Configuration
apiVersion: audit.k8s.io/v1
kind: Policy
# Rules are evaluated in order; the first matching rule wins
rules:
# Don't log requests to these paths
- level: None
  nonResourceURLs:
  - /healthz*
  - /metrics
  - /swagger*
# Log pod changes at the RequestResponse level
- level: RequestResponse
  resources:
  - group: ""
    resources: ["pods", "pods/status"]
  namespaces: ["production", "staging"]
# Log secret and configmap access at the Metadata level
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
# Log full request/response for destructive operations
- level: RequestResponse
  verbs: ["delete", "deletecollection"]
# Detailed logging for RBAC changes
- level: RequestResponse
  resources:
  - group: "rbac.authorization.k8s.io"
    resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
# Default for everything else: metadata only, skipping the RequestReceived
# stage (a catch-all rule placed any earlier would shadow the rules below it)
- level: Metadata
  omitStages:
  - RequestReceived
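The policy only takes effect once the API server is started with audit flags pointing at it (static-pod command excerpt; the file paths are illustrative):

```yaml
# kube-apiserver command-line flags
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit.log
- --audit-log-maxage=30      # days to retain old log files
- --audit-log-maxbackup=10   # number of rotated files to keep
- --audit-log-maxsize=100    # megabytes per file before rotation
```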
Processing Audit Logs
# Fluentd configuration for shipping audit logs
<source>
  @type tail
  path /var/log/kubernetes/audit.log
  pos_file /var/log/kubernetes/audit.log.pos
  tag kubernetes.audit
  <parse>
    @type json
    # Audit events carry requestReceivedTimestamp rather than "timestamp"
    time_key requestReceivedTimestamp
    time_format %Y-%m-%dT%H:%M:%S.%N%z
  </parse>
</source>

# Keep only requests that failed with a 4xx/5xx status
<filter kubernetes.audit>
  @type grep
  <regexp>
    key $.responseStatus.code
    pattern /^(4|5)\d{2}$/
  </regexp>
</filter>

<match kubernetes.audit>
  @type elasticsearch
  host elasticsearch.monitoring
  port 9200
  index_name kubernetes-audit
  <buffer>
    @type file
    path /var/log/fluentd-buffers/kubernetes.audit
    flush_interval 5s
  </buffer>
</match>
Compliance Scanning
# CIS Kubernetes Benchmark with kube-bench
kube-bench run --targets master,node,etcd,policies
# Example output parsing
kube-bench run --json | jq '.tests[] | select(.results[].status=="FAIL")'
# Compliance checking with Polaris
polaris audit --set-exit-code-on-danger --severity error
Security Monitoring Checklist
1. API Server Audit Logs: track all API requests, authentication, and authorization decisions
2. Runtime Monitoring: detect anomalous container behavior with Falco or Sysdig
3. Network Traffic Analysis: monitor network flows and detect unusual patterns
4. Image Vulnerability Scanning: continuously scan running container images
5. Compliance Validation: run regular CIS benchmark and policy compliance checks
Security Event Response
- Isolate affected workloads using NetworkPolicies
- Collect audit logs and runtime events
- Analyze attack vector and impact
- Patch vulnerabilities and update policies
- Document incident and update runbooks
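The first response step, isolating affected workloads, can be sketched as a quarantine policy keyed to a label you apply to the compromised pod (label and namespace are illustrative):

```yaml
# Cut all traffic to and from pods labeled quarantine=true
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: quarantine
  namespace: production
spec:
  podSelector:
    matchLabels:
      quarantine: "true"
  policyTypes:
  - Ingress
  - Egress
```

Labeling a pod (kubectl label pod suspicious-pod quarantine=true) then severs its traffic while preserving the pod for forensics.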
Practice Problems
Easy: Create a Read-Only RBAC Role
Create a Role that grants read-only access to pods and services in the "dev" namespace.
Use verbs "get", "list", "watch" on resources "pods" and "services" with apiGroups [""].
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: readonly
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
Easy: Default Deny NetworkPolicy
Write a NetworkPolicy that denies all ingress and egress traffic for all pods in a namespace.
Use an empty podSelector {} to match all pods. Specify both Ingress and Egress in policyTypes with no rules.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
Medium: Harden a Pod Security Context
Given a basic pod spec, add a securityContext that runs as non-root, drops all capabilities, uses a read-only root filesystem, and prevents privilege escalation.
Set runAsNonRoot: true, allowPrivilegeEscalation: false, readOnlyRootFilesystem: true, and capabilities.drop: ["ALL"].
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: myapp:v1
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
    volumeMounts:
    - name: tmp
      mountPath: /tmp
  volumes:
  - name: tmp
    emptyDir: {}
Medium: Secure Frontend-to-Backend Traffic
Write a NetworkPolicy that allows only pods with label app=frontend to reach pods with label app=backend on port 8080, while also allowing DNS resolution.
Target the backend pods with podSelector, allow ingress from frontend pods on port 8080, and allow egress to kube-dns on UDP 53.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:
  # Allow DNS resolution via kube-dns in kube-system
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
Hard: Design a Complete RBAC Strategy
Design RBAC for a production namespace with three roles: viewer (read-only), developer (deploy and debug), and admin (full access except cluster-level changes). Create the Roles and RoleBindings.
Create three separate Roles with increasing permissions. Viewers get only get/list/watch. Developers add create/update/patch on deployments and exec on pods. Admins get all verbs but use Role (not ClusterRole) to scope to the namespace.
# Viewer Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: viewer
rules:
- apiGroups: ["", "apps", "batch"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
---
# Developer Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: developer
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "pods/exec"]
  verbs: ["*"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["configmaps", "secrets"]
  verbs: ["get", "list"]
---
# Admin Role (namespace-scoped)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: admin
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]