
how to stream kubernetes logs without losing your mind

log streaming in kubernetes is deceptively complex. here's how to handle multi-container pods, log rotation, and real-time filtering.

Dec 20, 2025 · 6 min read · by kdashboard team

Streaming logs from a single container is easy. Streaming logs from 50 containers across 3 namespaces while filtering for errors? That's where things get interesting.

The basics: kubectl logs

# Follow logs from a single pod
kubectl logs -f my-pod

# Follow logs from a specific container
kubectl logs -f my-pod -c sidecar

# Get logs from a crashed container
kubectl logs my-pod --previous

# Get last 100 lines
kubectl logs --tail=100 my-pod

This works fine for quick debugging. But it falls apart when you need to:

  • Stream from multiple pods simultaneously
  • Filter by log level in real time
  • Correlate timestamps across services
  • Handle pods that restart frequently

Multi-pod streaming with labels

You can stream logs from all pods matching a label:

kubectl logs -f -l app=api-server --all-containers=true

But there are two catches. First, when you use a selector, kubectl caps concurrent streams at 5 pods by default (raise it with --max-log-requests). Second, the selector is evaluated once, at startup: pods created after you start streaming are never picked up, so if a pod restarts or a new replica is added, you'll miss its logs.
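One low-tech workaround is a wrapper that restarts the stream whenever it exits, so replacement pods are picked up on the next pass. A minimal sketch, reusing the app=api-server label from above (the loop interval and stream cap are arbitrary choices):

```shell
#!/usr/bin/env sh
# Sketch: re-run the label-selector stream whenever it ends, so pods
# created after the previous pass are picked up. Assumes kubectl is on PATH.
while true; do
  kubectl logs -f -l app=api-server --all-containers=true \
    --max-log-requests=20 \
    --prefix            # prefix each line with its pod/container name
  sleep 5               # back off briefly before re-querying the selector
done
```

This is exactly the gap tools like stern fill properly: stern watches the API server and attaches to new pods as they appear, instead of polling.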

The log rotation problem

Kubernetes nodes rotate container logs to prevent disk exhaustion. With a CRI runtime, the kubelet manages this via the containerLogMaxSize and containerLogMaxFiles settings, which by default rotate when a log file reaches 10Mi and keep 5 files.

This means:

  • kubectl logs only shows logs from the current log file
  • Historical logs are on the node's filesystem, not accessible via the API
  • If a pod restarts, --previous only reaches the immediately preceding container instance; anything older is gone from the API
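If you really need the rotated files, you have to go to the node itself. A hedged sketch, assuming the standard CRI log layout (paths can vary by distribution) and a debug shell started with kubectl debug:

```shell
# Sketch: inspect rotated container logs from a node debug shell, e.g.
#   kubectl debug node/<node-name> -it --image=busybox
# CRI runtimes write container logs under:
#   /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container>/
# In a `kubectl debug node` session the host filesystem is mounted at /host.
ls -lh /host/var/log/pods/*/*/
# 0.log is the live file; rotated files carry a timestamp suffix and are
# typically gzip-compressed, e.g. 0.log.20260215-102345.gz
```

This is read-only archaeology, not a log pipeline: for anything you need to keep, ship logs off the node with an agent before rotation discards them.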

Structured logging matters

The single best thing you can do for log management is adopt structured logging:

{
  "level": "error",
  "msg": "failed to connect to database",
  "service": "api-server",
  "timestamp": "2026-02-15T10:23:45Z",
  "trace_id": "abc-123-def",
  "error": "connection refused"
}

Structured logs enable:

  • Parsing without regex — each field is a known key
  • Cross-service correlation — trace IDs link requests across microservices
  • Level filtering — filter by severity without guessing formats
  • Aggregation — count errors by service, endpoint, or time window
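For example, once every line is JSON, pulling fields out is a one-liner instead of a fragile regex. A small sketch with jq, using a sample line shaped like the log above:

```shell
# Extract level, service, and message from a structured log line.
echo '{"level":"error","msg":"failed to connect to database","service":"api-server","trace_id":"abc-123-def"}' \
  | jq -r '[.level, .service, .msg] | @tsv'
# → error	api-server	failed to connect to database
```

The same select/filter expressions work unchanged whether the line came from kubectl logs, stern, or a log file.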

Real-time filtering strategies

When streaming logs in real time, you need fast filtering:

# Basic grep filtering (--line-buffered keeps output flowing in a pipe)
kubectl logs -f my-pod | grep --line-buffered -E "ERROR|WARN"

# JSON parsing with jq (fromjson? silently skips non-JSON lines)
kubectl logs -f my-pod | jq -R 'fromjson? | select(.level == "error")'

# Multiple pods with stern
stern "api-.*" -n production --output json | jq '.message'

But command-line tools have limitations. You can't easily:

  • Toggle filters without restarting the stream
  • Highlight different log levels with colors
  • Search through buffered logs while streaming continues
  • Save interesting log segments for sharing

What a dedicated log viewer provides

A purpose-built log viewer in a tool like kdashboard gives you:

  • Live streaming with automatic reconnection when pods restart
  • Level-based filtering that can be toggled on/off instantly
  • Regex search across the buffered log history
  • Timestamp correlation across multiple containers
  • Color coding by log level for visual scanning
  • One-click copy of log segments for bug reports

The gap between kubectl logs -f and a proper log viewer is the gap between reading raw text and understanding what your system is actually doing.