Kubernetes Logs: A Complete Guide

Operating a Kubernetes cluster effectively requires deep insights into its inner workings. How do you know if your applications are performing as expected? How do you troubleshoot issues when they inevitably arise? The answer lies in Kubernetes logs. These logs provide a crucial window into the health and performance of your cluster, offering a detailed record of events and activities within your applications and infrastructure.

This guide explores the essential aspects of Kubernetes logs, from basic retrieval using kubectl to managing logs at scale across multiple Kubernetes clusters with Plural. Whether debugging a specific issue or implementing a comprehensive monitoring strategy, understanding Kubernetes logs is fundamental to successful Kubernetes management.

Unified Cloud Orchestration for Kubernetes

Manage Kubernetes at scale through a single, enterprise-ready platform.

GitOps Deployment
Secure Dashboards
Infrastructure-as-Code
Book a demo

Key Takeaways

  • Centralized logging simplifies Kubernetes management: Aggregate logs from your distributed environment into a searchable platform like Elasticsearch, streamlining troubleshooting and monitoring. Consider Fluentd for collecting and forwarding logs.
  • Advanced log management improves observability: Combine kubectl for quick checks with a robust logging pipeline. Implement log rotation, use a lightweight logging agent, and consider AI-powered analysis tools for actionable insights.
  • Unified log aggregation for Kubernetes: Plural brings observability, governance, and AI-driven insights together in one console, making Kubernetes log management simpler and more powerful for teams of any size.

What are Kubernetes Logs?

Kubernetes logs provide crucial visibility into what is happening inside your applications and the cluster. The logs are particularly useful for debugging problems and monitoring cluster activity. Debugging issues in a distributed environment like Kubernetes without a robust logging strategy can be incredibly challenging.

Definition and Role in Monitoring and Troubleshooting

Kubernetes logs record activities within your cluster, offering a detailed history of what's happening inside your application. They're indispensable for monitoring application activity, identifying errors, and troubleshooting. By analyzing these logs, you can detect anomalies, track down the root cause of problems, and proactively address potential issues before they impact your users.

Unlike traditional monolithic applications, Kubernetes deployments often involve multiple interconnected services. Logs help you trace requests across these services, pinpoint bottlenecks, and understand the complex interactions within your application. This granular level of detail is essential for effective troubleshooting in a microservices architecture.

Container Logs vs. System Component Logs

Kubernetes logging can be broadly categorized into two types: container logs and system component logs.

Pod and container logs 

Kubernetes captures logs from each container in a running Pod. Container logs capture the output (stdout and stderr) generated by your applications running inside containers. This example uses a manifest for a Pod with a container that writes text to the standard output stream, once per second.

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox:1.28
    args: [/bin/sh, -c,
            'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']

To fetch the logs, use the kubectl logs command, as follows:

kubectl logs counter

The output is similar to:

0: Fri Apr  1 11:42:23 UTC 2022
1: Fri Apr  1 11:42:24 UTC 2022
2: Fri Apr  1 11:42:25 UTC 2022

System component logs

System component logs, on the other hand, track the activities of various Kubernetes components. There are two types of system components: those that run in a container, such as the Kubernetes scheduler, controller manager, and API server, and those that run directly on the node, namely the kubelet and the container runtime. On Linux nodes that use systemd, the kubelet and container runtime write to journald; otherwise, they write to .log files in the /var/log directory.
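As a sketch of how to reach these logs, the kubelet's output can be read via journalctl on a systemd-based node, and, with the NodeLogQuery feature enabled (beta in Kubernetes v1.27+), node service logs can also be fetched through the API server. The node name below is a placeholder:

```shell
# On a systemd-based node, read kubelet logs directly (run on the node itself):
journalctl -u kubelet --since "1 hour ago"

# With the NodeLogQuery feature gate enabled, fetch the same logs via the API server:
kubectl get --raw "/api/v1/nodes/node-1/proxy/logs/?query=kubelet"
```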

Access Kubernetes Logs with kubectl

This section focuses on using kubectl logs, the primary command-line tool for retrieving container logs in Kubernetes.

Basic Syntax and kubectl logs Usage

The most straightforward way to access container logs is by using the kubectl logs command. The basic syntax is:

kubectl logs <pod-name> [<container-name>]

If your pod has only one container, you can omit the <container-name>. For instance, to view logs from a pod named my-app, you would run:

kubectl logs my-app

If my-app had multiple containers, such as my-app-main and my-app-sidecar, you would specify the container like so:

kubectl logs my-app -c my-app-main

This command fetches the logs from the designated container and displays them in your terminal.

Retrieve and Filter Logs with Key Options

kubectl logs offers several options to refine how you retrieve logs, making it easier to pinpoint specific issues or monitor ongoing activity. Here are some of the most useful options:

  • -f or --follow: This streams the logs continuously, providing real-time updates as new log entries are generated. This is invaluable for monitoring applications and observing behavior as it happens.
  • -p or --previous: By default, kubectl logs shows output from a container's current instance. If a container has restarted, the --previous flag retrieves the logs from its previous instance, which is essential for debugging crashes or understanding the events leading up to a restart.
  • --all-containers: This option retrieves logs from all containers within the specified pod, providing a consolidated view of activity across multiple containers.
  • --since and --since-time: These options filter logs based on a relative or absolute time, respectively. For example, --since=1h shows logs from the last hour, while --since-time=2024-01-01T00:00:00Z shows logs since the specified time.
  • --tail: This option limits the output to the last N lines of the log file. For instance, --tail=100 displays only the last 100 lines, which is helpful for quickly checking recent activity without wading through extensive log history.
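These flags compose, so you can narrow down to exactly the window you care about. A couple of illustrative combinations (pod and container names are placeholders):

```shell
# Stream new entries in real time, starting from the last 100 lines of the past hour
kubectl logs my-app -f --tail=100 --since=1h

# Inspect the previous (crashed) instance of a specific container
kubectl logs my-app -c my-app-main --previous
```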

Manage Kubernetes Logs Effectively

Effectively managing logs involves implementing a centralized logging strategy and establishing robust log aggregation, retention, and security processes.

Implement Centralized Logging

With distributed systems like Kubernetes, logs are generated across multiple nodes and components. Trying to troubleshoot issues by SSHing into individual nodes or sifting through numerous log files is inefficient and impractical.

A centralized logging solution solves this by aggregating logs from all your Kubernetes resources into a single, searchable repository. This approach simplifies troubleshooting, allows for comprehensive monitoring, and enables efficient analysis of application behavior and system performance. Popular centralized logging solutions compatible with Kubernetes include Elastic Stack and Grafana Loki. When selecting a solution, consider factors like scalability, query capabilities, and integration with your existing monitoring stack. For enterprise needs, Plural offers unified log aggregation across all your clusters in a single console.

Aggregate, Retain, and Secure Logs

Kubernetes offers several mechanisms for collecting logs from various sources, including container logs, system component logs, and audit logs. Tools like Fluentd, a popular open-source log aggregator, can be deployed within your cluster to gather logs from different nodes and forward them to your centralized logging system.

Beyond aggregation, defining appropriate log retention policies is essential. Storing logs indefinitely can be costly and may not provide significant value. Determine a retention period that aligns with your compliance requirements and troubleshooting needs. Finally, securing your log data is paramount. Logs often contain sensitive information, and protecting them from unauthorized access is critical. Implement appropriate access controls and encryption measures to safeguard your log data and ensure compliance with relevant security regulations.
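As a sketch of the aggregation step, a minimal Fluentd configuration deployed as a DaemonSet might tail container log files on each node and forward them to Elasticsearch. The Elasticsearch host below assumes an in-cluster service name; adjust paths and endpoints for your environment:

```
# fluentd.conf — minimal sketch, not a production configuration
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type json
  </parse>
</source>

<match kubernetes.**>
  @type elasticsearch
  host elasticsearch.logging.svc.cluster.local  # assumed in-cluster service name
  port 9200
  logstash_format true
</match>
```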

Advanced Kubernetes Logs Management

As your Kubernetes deployments grow, basic logging won't be enough. You'll need more advanced strategies to manage your log data's increasing volume and complexity effectively. This involves parsing and analyzing logs for actionable insights, implementing efficient log rotation, and optimizing resource utilization.

Parse, Analyze, and Rotate Logs

Kubernetes can automatically locate and read container log files. Parsing these logs effectively is crucial for extracting meaningful information. Choose tools that can handle your applications' diverse formats, from JSON and plain text to structured application logs. Once parsed, analyze these logs to identify trends, pinpoint errors, and monitor application behavior.

Log rotation is another critical aspect of advanced log management. Without it, logs will consume increasing amounts of storage, impacting performance and cost. Implement a log rotation policy that balances data retention with storage efficiency. Tools like logrotate can automate this process, compressing and archiving or deleting older logs based on your rules.
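For container logs specifically, the kubelet performs rotation itself, controlled by two fields in its configuration. A sketch of the relevant KubeletConfiguration fragment, with illustrative values:

```yaml
# KubeletConfiguration fragment: rotate each container log at 10 MiB, keep 5 files
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi
containerLogMaxFiles: 5
```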

Optimize Log Performance and Resource Usage

Efficient log management requires careful consideration of resource usage. Logging, while essential, can consume significant resources if not properly managed. Deploying a logging agent, like Fluent Bit or Filebeat, to each node ensures comprehensive logging regardless of where your applications run. Consider using a lightweight logging agent to minimize resource consumption on your nodes. Also, evaluate the performance impact of your chosen logging destination.
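One way to cap an agent's footprint is to set explicit resource requests and limits on its DaemonSet so a busy node can never starve your workloads. A sketch of the container spec fragment, with illustrative values:

```yaml
# Fragment of a logging-agent DaemonSet pod spec: bound CPU/memory per node
containers:
- name: fluent-bit
  image: fluent/fluent-bit:2.2
  resources:
    requests:
      cpu: 50m
      memory: 64Mi
    limits:
      memory: 128Mi
```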

Logs Aggregation in Plural: One Console, All Your Logs

Most teams running Kubernetes face a common headache: logs are scattered across clusters, services, and tools. Platform teams spend too much time setting up log aggregation and governance, while developers spend hours jumping between different interfaces to debug issues. The problem gets worse at scale. As you add more clusters and services, you end up with a maze of logging tools and access controls. And when something breaks, finding the proper logs becomes a frustrating treasure hunt.

What's New!

Plural now offers built-in log aggregation to view and search logs directly in the Plural console. You can query logs at either the service or cluster level.

Key Features

Service-Level Logs

Service-level logs are geared toward developers. Every Plural service now has a Logs tab where you can:

  • Filter by time and other attributes
  • View the log context to see what happened before and after an event
  • See all the facets associated with your logs and filter on them

You can also fine-tune the log view using the filter modal.

Cluster-Level Logs

You can also view logs across your entire cluster. This is particularly useful for platform teams and Kubernetes administrators who need broader visibility.

Built-in Governance

One of the key advantages of Plural's logging solution is that it's already integrated with Plural's permission model. The system automatically extracts logs that map to the pods in your service, ensuring users only see logs they're authorized to access. This governance, which typically requires expensive enterprise solutions, comes standard with Plural.

AI-Powered Troubleshooting

When issues arise in your clusters, Plural's AI-driven Insights can now use log data to help identify root causes. For example:

  • For certificate provisioning issues, we'll automatically search logs from cert-manager and external-dns
  • For application errors, we'll identify the specific endpoints and code causing issues
  • All findings include the actual log evidence used to reach conclusions

Try it out!

Ready to get started with unified logging? Book a demo to try Plural. For detailed setup instructions, check out our log aggregation documentation.

Frequently Asked Questions

How do I view logs for a specific container in a pod with multiple containers?

Use the -c flag with the kubectl logs command. For example, kubectl logs my-pod -c my-container shows logs only from my-container running inside my-pod.

What's the difference between Kubernetes logs and events?

Logs provide detailed output from your applications and system components, offering a granular view of what's happening inside. Events record notable occurrences within the cluster, like resource creation/deletion or significant state changes (e.g., pod crashes). Logs are essential for debugging and monitoring application behavior, while events provide a high-level overview of cluster activity.
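To compare the two views side by side (the pod name is illustrative):

```shell
# Detailed application output from the pod itself
kubectl logs my-pod

# High-level cluster events, most recent last
kubectl get events --sort-by=.lastTimestamp
```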

My Kubernetes cluster is generating a massive amount of logs. How can I manage this?

Implement a centralized logging strategy using tools like Fluentd or Filebeat to collect logs from all nodes and forward them to a platform like Elasticsearch, Splunk, or Loggly. Also, configure appropriate log levels (DEBUG, INFO, WARN, ERROR) to control verbosity and implement log rotation policies to manage storage.

What are some best practices for logging microservices in Kubernetes?

Structure your logs using a consistent format, preferably JSON, for easier parsing and querying. Use adequate log levels to control verbosity and distinguish between logs from different containers in a multi-container pod using the -c flag with kubectl logs. Consider a service mesh for tracing requests across services.
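As a small self-contained sketch of why JSON-formatted logs pay off, the same filter works on local sample lines and on kubectl logs output alike. The level and message field names are assumptions about your log schema:

```shell
# Two sample JSON log lines; in a cluster, pipe `kubectl logs my-app` into jq instead
printf '%s\n' \
  '{"level":"info","message":"request served"}' \
  '{"level":"error","message":"db timeout"}' |
jq -r 'select(.level == "error") | .message'
```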

How can I integrate my Kubernetes logs with external monitoring and alerting tools?

Most centralized logging platforms offer integrations with various monitoring and alerting tools. Fluentd can also forward logs to a wide range of destinations. Choose a logging solution that integrates seamlessly with your existing monitoring stack.