6. Kubernetes Audit Logs
Time to Complete
Planned time: ~20 minutes
Kubernetes Audit Logs provide a chronological record of all API server activity, capturing who did what, when, and to which resources. This is essential for security monitoring, compliance, and forensic investigations. In this lab, you’ll enable audit logging, understand audit policies, and learn how to tune logging levels for different types of requests.
What You’ll Learn
- How Kubernetes audit logging works at the API server level
- How to configure audit policies to control what gets logged
- How to interpret audit log entries and identify key fields
- The difference between audit levels (None, Metadata, Request, RequestResponse)
- How to tune audit policies to balance visibility and volume
- How to find specific events in audit logs for security investigations
Trainer Instructions
Tested versions:
- Kubernetes: 1.32.x
- kind: 0.20+
Cluster requirements:
This lab requires a cluster where you can configure the kube-apiserver. This works with:
- kind clusters (this lab uses kind with kubeadmConfigPatches)
- kubeadm-based clusters (control plane as static pods)
- Self-managed clusters where you can edit API server flags
Note: This lab is typically NOT possible on fully managed clusters (EKS, GKE, AKS) unless audit logging is exposed and configurable by the provider. Most managed providers have their own audit log solutions.
No external integrations are required.
Info
We create a separate local kind cluster for this exercise.
1. Understand Audit Logging Architecture
Before enabling audit logs, let’s understand how they work.
How Audit Logging Works
┌─────────────┐     ┌─────────────┐     ┌──────────────────┐
│   kubectl   │────▶│ API Server  │────▶│    Audit Log     │
│  (request)  │     │ (processes) │     │  (file/webhook)  │
└─────────────┘     └─────────────┘     └──────────────────┘
                           │
                           ▼
                    ┌─────────────┐
                    │ Audit Policy│
                    │   (rules)   │
                    └─────────────┘
The API server evaluates each request against the audit policy. Rules are evaluated in order, and the first matching rule determines:
- Whether the request is logged at all (level None drops it)
- At what level (None, Metadata, Request, RequestResponse)
The destination (log file, webhook, or both) is not decided by the policy; it is configured separately via API server flags such as audit-log-path and audit-webhook-config-file.
Audit Levels
| Level | Description | Use Case |
|---|---|---|
| None | Don't log | Health checks, metrics, watch requests |
| Metadata | Log who/what/when, no body | Read operations, sensitive resources |
| Request | Log metadata + request body | Write operations |
| RequestResponse | Log metadata + request + response | Security-critical operations |
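To make the levels concrete, here is what a Metadata-level entry roughly looks like. The entry below is a hand-written, abbreviated example (field values are illustrative, not real cluster output):

```shell
# A simplified Metadata-level audit entry: who/what/when, but no request
# or response body (values are illustrative)
meta='{"kind":"Event","level":"Metadata","auditID":"0a1b2c","stage":"ResponseComplete","verb":"get","user":{"username":"kubernetes-admin"},"objectRef":{"resource":"pods","namespace":"default"},"responseStatus":{"code":200}}'

# List the top-level keys -- note there is no requestObject/responseObject
echo "$meta" | jq -r 'keys | join(",")'
# → auditID,kind,level,objectRef,responseStatus,stage,user,verb
```

A Request-level entry would additionally carry a requestObject key; RequestResponse adds a responseObject as well.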
Questions
- Why are audit logs generated by the API server (and not by kubelet)?
- What’s the difference between audit logs and application logs?
Answers
- Audit logs are generated by the kube-apiserver because it is the central point for ALL API requests. Every kubectl command, controller action, and scheduled operation goes through the API server.
- Audit logs capture control plane API actions (who did what to which resources). Application logs capture what applications write to stdout/stderr (application behavior and errors).
2. Create a Kind Cluster with Audit Logging
For this lab, we’ll use a kind cluster configured with audit logging enabled via kubeadm patches.
Kind Configuration
Review the kind configuration (~/exercise/kubernetes/audit-logs/kind-config.yaml):
# Kind cluster configuration with Kubernetes audit logging enabled
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # Mount the audit policy file and log directory
  extraMounts:
  - hostPath: ./audit-policy.yaml
    containerPath: /etc/kubernetes/audit/audit-policy.yaml
    readOnly: true
  - hostPath: /tmp/audit-logs
    containerPath: /var/log/kubernetes/audit
  kubeadmConfigPatches:
  - |
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        audit-policy-file: /etc/kubernetes/audit/audit-policy.yaml
        audit-log-path: /var/log/kubernetes/audit/audit.log
        audit-log-format: json
        audit-log-maxage: "7"
        audit-log-maxbackup: "3"
        audit-log-maxsize: "100"
      extraVolumes:
      - name: audit-policy
        hostPath: /etc/kubernetes/audit/audit-policy.yaml
        mountPath: /etc/kubernetes/audit/audit-policy.yaml
        readOnly: true
        pathType: File
      - name: audit-logs
        hostPath: /var/log/kubernetes/audit
        mountPath: /var/log/kubernetes/audit
        pathType: DirectoryOrCreate
Audit Policy
Review the audit policy (~/exercise/kubernetes/audit-logs/audit-policy.yaml):
# Kubernetes Audit Policy
# This policy logs API server activity at different levels based on the type of request
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Don't log requests to the following API endpoints
- level: None
  nonResourceURLs:
  - /healthz*
  - /readyz*
  - /livez*
  - /metrics
# Don't log watch requests (too verbose)
- level: None
  verbs: ["watch"]
# Log authentication failures at Metadata level
- level: Metadata
  users: ["system:anonymous"]
# Log secrets access at Metadata level only (don't log secret content)
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# Log configmaps at Metadata level for reads, Request level for writes
- level: Metadata
  verbs: ["get", "list"]
  resources:
  - group: ""
    resources: ["configmaps"]
- level: Request
  verbs: ["create", "update", "patch", "delete"]
  resources:
  - group: ""
    resources: ["configmaps"]
# Log pod exec and attach at RequestResponse level (important for security)
- level: RequestResponse
  resources:
  - group: ""
    resources: ["pods/exec", "pods/attach", "pods/portforward"]
# Log pod read operations at Metadata level
- level: Metadata
  verbs: ["get", "list"]
  resources:
  - group: ""
    resources: ["pods"]
# Log pod write operations at Request level
- level: Request
  verbs: ["create", "update", "patch", "delete"]
  resources:
  - group: ""
    resources: ["pods"]
# Log RBAC changes at RequestResponse level
- level: RequestResponse
  resources:
  - group: "rbac.authorization.k8s.io"
    resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
# Default: log everything else at Metadata level
- level: Metadata
  omitStages:
  - RequestReceived
Task
- Create the kind cluster with audit logging enabled
- Verify the cluster is running and audit logs are being generated
Hint
mkdir -p /tmp/audit-logs
cd ~/exercise/kubernetes/audit-logs
The kind config mounts './audit-policy.yaml' relative to the current directory, so run both commands first; otherwise the cluster will not start and will hang at 'Starting control-plane'.
Solution
Create the audit log directory and cluster:
mkdir -p /tmp/audit-logs
cd ~/exercise/kubernetes/audit-logs
kind create cluster --name audit-demo --image kindest/node:v1.32.0 --config kind-config.yaml --wait 2m
cd
# kind sets the kubectl context to the new cluster automatically
# verify the available contexts with
kx
# and select the cluster's context
kx kind-audit-demo
kubectl get nodes
kubectl cluster-info
docker exec audit-demo-control-plane ls -la /var/log/kubernetes/audit/
docker exec audit-demo-control-plane tail -5 /var/log/kubernetes/audit/audit.log
3. Create a Test Namespace and Generate Events
Now let’s generate some API events and observe them in the audit log.
Task
- Create a namespace called audit-lab
- Create a deployment in the namespace
- Create a ConfigMap
- Create a Secret
- Find these events in the audit log
Solution
Create resources:
kubectl create namespace audit-lab
kubectl -n audit-lab create deployment web --image=nginx:1.27.3
kubectl -n audit-lab create configmap demo --from-literal=env=dev
kubectl -n audit-lab create secret generic test-secret --from-literal=password=secret123
kubectl -n audit-lab rollout status deployment/web
docker exec audit-demo-control-plane grep '"namespace":"audit-lab"' /var/log/kubernetes/audit/audit.log | head -20
docker exec audit-demo-control-plane cat /var/log/kubernetes/audit/audit.log | grep '"namespace":"audit-lab"' | jq -c '{verb: .verb, resource: .objectRef.resource, name: .objectRef.name, user: .user.username, level: .level}'
Questions
- Which fields help you identify who performed the action?
- Which fields help you identify what was done?
Answers
- Who: user.username, user.groups, sourceIPs, userAgent
- What: verb, objectRef.resource, objectRef.name, objectRef.namespace, requestURI
- Result: responseStatus.code, responseStatus.reason
4. Compare Audit Levels
Let’s observe how different resources are logged at different levels based on our policy.
Task
- List pods (read operation - should be Metadata level)
- Create a new ConfigMap (write operation - should be Request level)
- Compare the log entries for these operations
Solution
Generate events:
kubectl -n audit-lab get pods
kubectl -n audit-lab create configmap demo2 --from-literal=key=value
docker exec audit-demo-control-plane grep '"namespace":"audit-lab"' /var/log/kubernetes/audit/audit.log | grep '"resource":"pods"' | grep '"verb":"list"' | tail -1 | jq '{level: .level, verb: .verb, resource: .objectRef.resource, hasRequestObject: (.requestObject != null)}'
docker exec audit-demo-control-plane grep '"namespace":"audit-lab"' /var/log/kubernetes/audit/audit.log | grep '"resource":"configmaps"' | grep '"verb":"create"' | grep 'demo2' | tail -1 | jq '{level: .level, verb: .verb, resource: .objectRef.resource, name: .objectRef.name, hasRequestObject: (.requestObject != null), requestData: .requestObject.data}'
Expected observations:
| Operation | Audit Level | Request Body Logged |
|---|---|---|
| List pods | Metadata | No |
| Create ConfigMap | Request | Yes (includes data) |
| Create Secret | Metadata | No (secrets are sensitive) |
Security Note
Notice that secrets are logged at Metadata level only - the secret content is NOT recorded in the audit log. This is an important security measure to prevent credential exposure.
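You can see this difference directly in the entries themselves: Metadata-level records carry no requestObject, while Request-level records do. A self-contained sketch using two hand-written sample entries (simplified, hypothetical content):

```shell
# Two simplified sample audit entries (hypothetical, not real cluster output)
cat > /tmp/sample-audit.log <<'EOF'
{"level":"Metadata","verb":"create","objectRef":{"resource":"secrets","name":"test-secret"}}
{"level":"Request","verb":"create","objectRef":{"resource":"configmaps","name":"demo"},"requestObject":{"data":{"env":"dev"}}}
EOF

# The secret entry has no request body recorded...
grep '"resource":"secrets"' /tmp/sample-audit.log | grep -c '"requestObject"' || true   # prints 0
# ...while the configmap entry includes the full data
grep '"resource":"configmaps"' /tmp/sample-audit.log | grep -c '"requestObject"'        # prints 1
```

The same checks work against the real audit.log inside the control-plane container.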
5. Investigate Security Events
Audit logs are crucial for security investigations. Let’s simulate and investigate some security-relevant events.
Task
- Try to access a secret
- Exec into a pod
- Find these events in the audit log
Solution
Get a secret (should be logged):
kubectl -n audit-lab get secret test-secret -o yaml
POD=$(kubectl -n audit-lab get pods -o name | head -1)
kubectl -n audit-lab exec $POD -- whoami
docker exec audit-demo-control-plane grep '"namespace":"audit-lab"' /var/log/kubernetes/audit/audit.log | grep '"resource":"secrets"' | jq -c '{verb: .verb, name: .objectRef.name, user: .user.username, sourceIP: .sourceIPs[0]}'
docker exec audit-demo-control-plane grep '"namespace":"audit-lab"' /var/log/kubernetes/audit/audit.log | grep '"resource":"pods/exec"' | jq -c '{verb: .verb, pod: .objectRef.name, user: .user.username, command: .requestObject}'
Security Investigation
When investigating a security incident, key questions to answer from audit logs:
- Who accessed the resource? (user, service account)
- When did they access it? (timestamp)
- From where did they access it? (sourceIPs)
- What tool did they use? (userAgent)
- Was it authorized? (annotations.authorization.k8s.io/decision)
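Each of these questions maps directly onto fields of a single audit entry. A sketch against a hand-written sample entry (all values are hypothetical):

```shell
# A simplified sample entry for a pod exec request (hypothetical values)
entry='{"requestReceivedTimestamp":"2024-01-01T12:00:00Z","verb":"create","user":{"username":"kubernetes-admin"},"sourceIPs":["172.18.0.1"],"userAgent":"kubectl/v1.32.0","objectRef":{"resource":"pods","name":"web-abc","subresource":"exec"},"annotations":{"authorization.k8s.io/decision":"allow"}}'

# who / when / where / tool / authorized -- all answered from one entry
echo "$entry" | jq -r '"who=\(.user.username) when=\(.requestReceivedTimestamp) from=\(.sourceIPs[0]) tool=\(.userAgent) decision=\(.annotations["authorization.k8s.io/decision"])"'
# → who=kubernetes-admin when=2024-01-01T12:00:00Z from=172.18.0.1 tool=kubectl/v1.32.0 decision=allow
```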
6. Bonus: Create a Custom Audit Policy
Bonus Exercise
This section is optional and provides an additional challenge.
Task
- Review a minimal audit policy that logs everything at the Metadata level
- Review a verbose audit policy for debugging and forensics
- Understand the trade-offs
Minimal policy (~/exercise/kubernetes/audit-logs/audit-policy-minimal.yaml):
# Minimal Kubernetes Audit Policy
# Logs all requests at the Metadata level - useful for getting started
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Log all requests at the Metadata level
- level: Metadata
Verbose policy (~/exercise/kubernetes/audit-logs/audit-policy-verbose.yaml):
# Verbose Kubernetes Audit Policy
# Logs all requests at RequestResponse level - useful for debugging/forensics
# WARNING: This generates a lot of data and may include sensitive information
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Don't log health checks and metrics (too noisy)
- level: None
  nonResourceURLs:
  - /healthz*
  - /readyz*
  - /livez*
  - /metrics
# Don't log watch requests (too verbose)
- level: None
  verbs: ["watch"]
# Log secrets at Metadata only (never log secret content!)
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# Log everything else at RequestResponse level
- level: RequestResponse
Trade-offs
| Policy Type | Pros | Cons |
|---|---|---|
| Minimal | Low storage, fast | Limited forensics |
| Verbose | Complete visibility | High storage, sensitive data exposure risk |
| Tuned (recommended) | Balanced | Requires planning |
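The storage difference is visible per entry: a RequestResponse record carries the request and response bodies, so it is several times the size of a Metadata record for the same action. A rough, purely illustrative comparison with hand-written sample entries:

```shell
# Hand-written sample entries for the same configmap create (illustrative)
meta='{"level":"Metadata","verb":"create","user":{"username":"admin"},"objectRef":{"resource":"configmaps","name":"demo"}}'
full='{"level":"RequestResponse","verb":"create","user":{"username":"admin"},"objectRef":{"resource":"configmaps","name":"demo"},"requestObject":{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"demo"},"data":{"env":"dev","feature":"on"}},"responseObject":{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"demo","uid":"1234","resourceVersion":"1"},"data":{"env":"dev","feature":"on"}}}'

printf '%s' "$meta" | wc -c   # bytes for the Metadata entry
printf '%s' "$full" | wc -c   # several times larger for RequestResponse
```

Multiply that per-entry difference by every API request in a busy cluster and the verbose policy's storage cost becomes significant.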
Best Practice
- Start with a minimal policy and add rules as needed
- Always log sensitive operations (secrets, RBAC, exec) at appropriate levels
- Never log secret/configmap content unless absolutely necessary
- Consider using audit webhooks for real-time alerting
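The webhook backend mentioned above is enabled with extra kube-apiserver flags pointing at a kubeconfig-format file that describes the receiver. A minimal sketch (the receiver URL below is hypothetical):

```yaml
# Extra kube-apiserver flags (in addition to audit-policy-file):
#   --audit-webhook-config-file=/etc/kubernetes/audit/webhook-config.yaml
#   --audit-webhook-batch-max-wait=5s
#
# /etc/kubernetes/audit/webhook-config.yaml (kubeconfig format):
apiVersion: v1
kind: Config
clusters:
- name: audit-sink
  cluster:
    server: https://siem.example.internal/k8s-audit   # hypothetical SIEM endpoint
contexts:
- name: audit-sink
  context:
    cluster: audit-sink
    user: ""
current-context: audit-sink
```

The API server then POSTs batched audit events as JSON to the configured endpoint, which is how most SIEM integrations receive them in near real time.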
7. Bonus: Analyze Audit Logs with jq
Bonus Exercise
This section is optional and provides an additional challenge.
Task
Use jq to answer security questions from the audit log.
Who has accessed secrets, and how often?
Solution
docker exec audit-demo-control-plane cat /var/log/kubernetes/audit/audit.log | \
jq -r 'select(.objectRef.resource == "secrets") | "\(.user.username) accessed secret \(.objectRef.name // "unknown") via \(.verb)"' | sort | uniq -c | sort -rn
What resources were deleted?
Solution
docker exec audit-demo-control-plane cat /var/log/kubernetes/audit/audit.log | \
jq -r 'select(.verb == "delete" and .responseStatus.code == 200) | "\(.objectRef.resource)/\(.objectRef.name) deleted by \(.user.username)"'
Find failed authentication attempts:
Solution
docker exec audit-demo-control-plane cat /var/log/kubernetes/audit/audit.log | \
jq -r 'select(.responseStatus.code >= 400 and .user.username == "system:anonymous") | "\(.verb) \(.requestURI) - \(.responseStatus.reason)"'
Get all unique users who made API calls:
Solution
docker exec audit-demo-control-plane cat /var/log/kubernetes/audit/audit.log | \
jq -r '.user.username' | sort | uniq -c | sort -rn
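If you want to practice these queries without keeping the cluster around, you can run them against a small hand-made sample file (the entries below are made up):

```shell
# Build a tiny sample audit log (hypothetical entries)
cat > /tmp/mini-audit.log <<'EOF'
{"verb":"get","user":{"username":"alice"},"objectRef":{"resource":"secrets","name":"db-creds"}}
{"verb":"get","user":{"username":"alice"},"objectRef":{"resource":"secrets","name":"db-creds"}}
{"verb":"list","user":{"username":"bob"},"objectRef":{"resource":"secrets"}}
{"verb":"get","user":{"username":"bob"},"objectRef":{"resource":"pods","name":"web"}}
EOF

# Same "who accessed secrets" query as above, against the sample file
jq -r 'select(.objectRef.resource == "secrets") | "\(.user.username) accessed secret \(.objectRef.name // "unknown") via \(.verb)"' /tmp/mini-audit.log \
  | sort | uniq -c | sort -rn
```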
8. Clean Up
Delete the test namespace:
kubectl delete namespace audit-lab
Delete the kind cluster:
kind delete cluster --name audit-demo
Clean up the local audit log directory:
rm -rf /tmp/audit-logs
Recap
You have:
- Understood how Kubernetes audit logging works at the API server level
- Created a kind cluster with audit logging enabled
- Learned the four audit levels: None, Metadata, Request, RequestResponse
- Generated API events and found them in the audit log
- Identified key fields for security investigations (who, what, when, where)
- Compared how different resources are logged at different levels
- (Bonus) Explored minimal and verbose audit policies
- (Bonus) Used jq to analyze audit logs for security questions
Wrap-Up Questions
Discussion
- Which events should always be logged at RequestResponse level?
- How would you transport audit logs to a SIEM for real-time alerting?
- What’s the relationship between audit logs and Falco alerts?
Discussion Points
- Always RequestResponse: Pod exec/attach, RBAC changes, admission webhook calls, any action that modifies cluster security posture
- Log transport options:
- Webhook backend (real-time to external system)
- Log shipping (Fluentd, Fluent Bit, Vector)
- Managed solutions (cloud provider audit logs)
- Audit logs vs Falco:
- Audit logs capture API-level activity (kubectl, controllers)
- Falco captures syscall-level activity (inside containers)
- Both are needed for complete visibility
- Audit logs show “who started the exec”, Falco shows “what happened in the shell”
Further Reading
- Kubernetes Auditing Documentation
- Audit Policy Reference
- GKE Audit Logging
- EKS Control Plane Logging
- AKS Diagnostics Logging
- Falco + Kubernetes Audit Logs
End of Lab