Table of Contents
- How Docker Collects Container Logs
- Docker Logging Driver Types and Selection Criteria
- json-file Driver Log Rotation Setup
- Docker Log File Location and Manual Cleanup
- Optimizing Docker Container Log Management with the local Driver
- Setting Up External Log Collection with the Fluentd Driver
- Removing Logging Bottlenecks with non-blocking Mode
- Common Issues When Switching Logging Drivers
- Docker Container Log Management Operational Checklist
- Next Steps for Log Management Automation
One of the main reasons disk usage exceeds 90% on servers running dozens of containers is the unlimited accumulation of Docker log files. Docker’s default configuration stores JSON log files with no size limit, so without explicit configuration, disk exhaustion is only a matter of time.
This article covers the core aspects of Docker container log management step by step: logging driver selection, rotation configuration, and external collection setup. Each step includes daemon.json settings and docker run flags ready to copy and use.
How Docker Collects Container Logs
Docker captures two I/O streams from containers — STDOUT and STDERR — and records them as logs. The two streams are stored separately, making it possible to identify which lines are errors in docker logs output. Logging drivers can also use the source (stdout/stderr) as metadata. Everything an application sends to standard output becomes input to the Docker logging system.
Official images follow this principle. The nginx official image symlinks log files to /dev/stdout and /dev/stderr, while httpd writes directly to /proc/self/fd/1 and /proc/self/fd/2 to output logs to standard streams. If a container writes logs to internal files, the Docker logging system won’t capture them — sending output to stdout is the fundamental principle.
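As a minimal sketch of the same pattern, assuming a hypothetical app that writes to an internal file such as /var/log/myapp/access.log, an image build step can symlink that path to the standard streams (the directory and filenames here are illustrative):

```shell
# Redirect a hypothetical app's internal log files to the container's
# standard streams, the way the official nginx image does.
# LOG_DIR stands in for a path like /var/log/myapp inside the image.
LOG_DIR=$(mktemp -d)
ln -sf /dev/stdout "$LOG_DIR/access.log"
ln -sf /dev/stderr "$LOG_DIR/error.log"
# Anything the app writes to access.log now lands on stdout,
# where the Docker logging driver can capture it.
readlink "$LOG_DIR/access.log"   # prints: /dev/stdout
```

In a real image this would typically be a RUN instruction in the Dockerfile, executed once at build time.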
Checking Logs with docker logs
The docker logs command is the primary tool for viewing logs from running containers. Beyond simple output, it supports several filtering options.
| Option | Function | Example |
|---|---|---|
| --follow (-f) | Real-time streaming | docker logs -f <CONTAINER> |
| --tail (-n) | Output last N lines | docker logs --tail 100 <CONTAINER> |
| --since | After a specific time | docker logs --since 1m30s <CONTAINER> |
| --until | Before a specific time | docker logs --until 2026-04-17T10:00:00 <CONTAINER> |
| --timestamps (-t) | Add RFC3339Nano timestamps | docker logs -t <CONTAINER> |
--since accepts RFC 3339 dates, Unix timestamps, and Go duration strings (e.g., 1m30s). When the failure time is clear, combining --since and --until enables quick extraction of logs from that specific window. The full list of options is available in the docker logs CLI reference.
Docker Logging Driver Types and Selection Criteria

Docker supports a wide range of logging drivers: json-file, local, journald, syslog, gelf (Graylog), fluentd, awslogs (Amazon CloudWatch), splunk, gcplogs (Google Cloud Logging), logentries, and more. Custom logging drivers can also be implemented via the plugin system.
Among these, the three most commonly used drivers in production are summarized below.
| Driver | Storage Location | Auto Rotation | docker logs Support | Use Case |
|---|---|---|---|---|
| json-file | Local JSON files | Manual config required | Yes | Development, testing, small-scale ops |
| local | Optimized local storage | Enabled by default | Yes | Recommended for small-to-mid production |
| fluentd | External Fluentd collector | Managed by collector | No | Centralized log collection |
The default logging driver is json-file. It stores container logs locally in JSON format, but the default configuration has no size limit — making it a primary cause of disk exhaustion. The local driver, on the other hand, performs automatic log rotation to prevent disk exhaustion and is the driver recommended by Docker’s official documentation.
How to Check Current Settings
To assess the current state of Docker container log management on a running server, start by checking the active driver.
```shell
docker info --format '{{.LoggingDriver}}'
docker inspect -f '{{.HostConfig.LogConfig.Type}}' <CONTAINER>
```
The first command returns the daemon-wide default driver; the second returns the driver applied to a specific container. Since drivers can be overridden per container, the daemon default and individual container values may differ.
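To audit a whole host at once, the two commands can be combined. A sketch of such a one-liner (requires a running Docker daemon, so it is shown here without output):

```shell
# List each container with its logging driver to spot containers still
# pinned to an old driver after a daemon.json change.
docker ps -aq | xargs -r docker inspect \
  --format '{{.Name}}: {{.HostConfig.LogConfig.Type}}'
```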
json-file Driver Log Rotation Setup
In environments where json-file is the default driver, the first priority is configuring log rotation. Specify max-size and max-file options in daemon.json.
```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "labels": "production_status",
    "env": "os,customer"
  }
}
```
max-size sets the maximum size of a single log file, and max-file sets the number of files to retain. With this configuration, each container preserves up to 30MB of logs (10MB × 3 files). The labels and env options add container labels and environment variables as log metadata. If a production environment runs 50 containers, the total maximum is 1.5GB — so max-size and max-file values should be calculated based on disk capacity and container count.
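The worst-case budget described above is simple arithmetic; a sketch using the same numbers as this section (the fleet size of 50 containers is illustrative):

```shell
# Worst-case disk budget for json-file logs:
# max-size x max-file x number of containers.
MAX_SIZE_MB=10   # from max-size: "10m"
MAX_FILE=3       # from max-file: "3"
CONTAINERS=50    # illustrative fleet size
BUDGET_MB=$((MAX_SIZE_MB * MAX_FILE * CONTAINERS))
echo "worst-case log usage: ${BUDGET_MB} MB"   # prints: worst-case log usage: 1500 MB
```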
After modifying daemon.json, the Docker daemon must be restarted for changes to take effect. Already running containers are not affected; they must be recreated for the new settings to apply. It’s safest to proceed with a zero-downtime deployment strategy in place.
Per-Container Override
Containers that need different values from the daemon-wide settings can be overridden with --log-driver and --log-opt flags. This is useful when a specific container under debugging needs to retain more logs.
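A sketch of such an override (the image name and values are illustrative, and the command needs a running Docker daemon):

```shell
# Give one container under debugging a larger log budget than the
# daemon-wide default.
docker run -d \
  --log-driver json-file \
  --log-opt max-size=50m \
  --log-opt max-file=5 \
  <IMAGE>
```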
In docker-compose environments, the logging key is specified per service.
```yaml
services:
  web:
    image: nginx:latest
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "5"
  worker:
    image: myapp:latest
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
```
Containers like web servers with high request log volume get a larger max-size, while background workers with lower log output get a smaller value. Since log characteristics vary by service, per-service tuning is more practical than blanket settings.
Docker Log File Location and Manual Cleanup
There are situations where logs that accumulated before rotation was configured need to be cleaned up. Docker stores json-file driver logs at /var/lib/docker/containers/<container-id>/, with filenames in the format <container-id>-json.log.
```shell
# Check total disk usage of the Docker container log directory
sudo du -sh /var/lib/docker/containers/

# Print the 10 largest per-container log files
sudo find /var/lib/docker/containers/ -name "*-json.log" \
  -exec du -sh {} \; | sort -rh | head -10
```
Deleting a log file with rm while the container is running leaves the file descriptor open — the file is unlinked but the process still holds the descriptor. In this case, disk space isn’t actually reclaimed until the container stops. The safe approach is truncate, which empties the file contents only.
```shell
# Empty a running container's log file in place
# (reclaims disk space immediately without deleting the file)
sudo truncate -s 0 \
  /var/lib/docker/containers/<container-id>/<container-id>-json.log
```
Stopping a container and removing it with docker rm causes Docker to automatically delete the associated log file. However, containers left in a stopped state retain their log files until explicitly removed, so a routine for periodically cleaning up stopped containers is necessary.
| Method | Behavior | Safe While Running | Primary Use |
|---|---|---|---|
| truncate -s 0 | Empties file contents only | Yes | Immediate space reclamation |
| docker rm | Deletes container + logs | No (stop first) | Container cleanup |
| docker system prune | Bulk cleanup of unused resources | Stopped only | Periodic bulk cleanup |
| rm -f (not recommended) | Force deletes file | Risk of dangling descriptor | Avoid |
The docker system prune command removes stopped containers, unused images, networks, and build caches all at once. It doesn’t directly delete log files — log files are removed as part of deleting stopped containers. Running it periodically via a cron job automates disk usage management.
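As a sketch, a crontab entry along these lines automates the cleanup (the schedule and retention filter are illustrative):

```shell
# Crontab line: every Sunday at 03:00, remove stopped containers and
# other unused resources older than 72 hours, without prompting.
0 3 * * 0 docker system prune -f --filter "until=72h"
```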
Optimizing Docker Container Log Management with the local Driver
The local driver was created to address the limitations of json-file. It records container stdout/stderr to an internal storage optimized for performance and disk usage, with automatic rotation enabled by default.
The local driver operates with defaults of max-size 20MB, max-file 5, and compress true.
| Option | Default | Meaning |
|---|---|---|
| max-size | 20MB | Maximum size per file |
| max-file | 5 | Maximum number of files |
| compress | true | Compression enabled |
Each container retains up to 100MB (20MB × 5 files), with the oldest file automatically deleted during rotation. Since compression is enabled by default, actual disk usage is less than 100MB.
```shell
docker run --log-driver local --log-opt max-size=10m <IMAGE>
```
The advantages of the local driver over json-file are clear: disk exhaustion is prevented without manual rotation configuration, and compression improves storage efficiency. The docker logs command is fully supported, so there’s no impact on debugging.
The local driver’s log files are designed for exclusive access by the Docker daemon. External tools accessing them directly can interfere with Docker’s logging system and cause unexpected behavior. If log collection is needed, use the docker logs command or a separate logging driver.
Migrating from json-file to local
To switch from an existing json-file environment to local, only daemon.json needs to be modified.
```json
{
  "log-driver": "local",
  "log-opts": {
    "max-size": "20m",
    "max-file": "5"
  }
}
```
After the change, a Docker daemon restart and container recreation are required. Existing json-file logs remain in place, so manual cleanup of old log files is needed if disk space recovery is a concern after the switch. The full list of options is available in the Docker logging driver configuration guide.
Setting Up External Log Collection with the Fluentd Driver

When the number of containers grows or they’re distributed across multiple hosts, local storage alone has limitations. The Fluentd logging driver sends container logs as structured data to a Fluentd collector.
The Fluentd driver automatically attaches these four metadata fields during transmission:
- Container ID (full 64 characters)
- Container name
- Log source (stdout or stderr)
- Log message body
The default connection address is localhost:24224, supporting both TCP and Unix sockets. The most basic run command looks like this:
```shell
docker run --log-driver=fluentd --log-opt fluentd-address=fluentdhost:24224 <IMAGE>
```
To set Fluentd as the default driver daemon-wide, add the following to daemon.json:
```json
{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "fluentdhost:24224"
  }
}
```
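On the collector side, a minimal fluent.conf sketch that accepts the driver’s forward protocol and prints each record to Fluentd’s own stdout is useful for verifying the pipeline (not a production configuration):

```
# Listen for Docker's fluentd driver on TCP 24224
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Print every received record for inspection
<match **>
  @type stdout
</match>
```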
Fluentd Async Mode and Buffer Settings
The key operational option for the Fluentd driver is fluentd-async. Enabling it allows background connections and message buffering — the container won’t stop even if the Fluentd daemon is temporarily unavailable. This must be enabled in production.
| Option | Default | Description |
|---|---|---|
| fluentd-async | false | Enable async transmission |
| fluentd-buffer-limit | 1,048,576 events | Maximum events to buffer |
| fluentd-retry-wait | 1 second | Reconnection wait time |
| fluentd-sub-second-precision | false | Nanosecond timestamps |
With fluentd-async disabled, a downed Fluentd collector can block the container itself. This single setting is what prevents failure propagation. The full list of options and tag template syntax is available in the Fluentd logging driver documentation.
When using the Fluentd driver, the docker logs command does not read from the remote collector. Docker Engine 20.10 and later mitigate this with dual logging, a local cache that keeps docker logs working with remote drivers unless it is explicitly disabled; on older versions, consider a sidecar container that captures stdout separately for debugging.
Removing Logging Bottlenecks with non-blocking Mode
An often-overlooked setting in Docker container log management is the message delivery mode. The default is blocking — when the log buffer fills up, the application thread is blocked. This means that during traffic spikes, log processing delays can cascade into application response delays.
non-blocking mode introduces an intermediate buffer to solve this problem. When the buffer fills up, the oldest logs are dropped, but the application is never blocked.
```shell
docker run -it --log-opt mode=non-blocking --log-opt max-buffer-size=4m alpine ping 127.0.0.1
```
The default max-buffer-size is 1MB; the example above increases it to 4MB. Buffer size is a balance between log generation rate and driver processing speed. Too small increases log loss; too large wastes memory unnecessarily.
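A rough way to reason about that balance, assuming the application briefly logs faster than the driver can drain (all numbers are illustrative):

```shell
# How long a non-blocking buffer absorbs a burst before dropping logs.
# Assumption: 4 MB buffer, logs arriving 500 KB/s faster than the
# driver processes them.
BUFFER_KB=$((4 * 1024))
EXCESS_KB_PER_S=500
SECONDS_TO_FILL=$((BUFFER_KB / EXCESS_KB_PER_S))
echo "buffer absorbs a burst for ~${SECONDS_TO_FILL}s"   # prints: buffer absorbs a burst for ~8s
```

If typical bursts outlast that window, either the buffer should grow or the driver’s throughput (e.g., the collector) needs attention.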
For payment or audit systems where log loss is absolutely unacceptable, blocking mode is appropriate — but the log driver’s processing capacity must be sufficient. For general web services and batch jobs, non-blocking mode is the safer choice. A few lost log lines are a smaller problem than service response delays.
When applying non-blocking mode in docker-compose, specify both mode and max-buffer-size under logging.options.
```yaml
services:
  api:
    image: myapi:latest
    logging:
      driver: json-file
      options:
        mode: "non-blocking"
        max-buffer-size: "4m"
        max-size: "10m"
        max-file: "3"
```
Common Issues When Switching Logging Drivers
Even after modifying daemon.json and restarting the Docker daemon, existing containers don’t pick up the new logging settings. Containers created earlier remain locked to the logging driver that was active at creation time. Check the driver applied to a running container with:
```shell
docker inspect \
  --format='{{.HostConfig.LogConfig.Type}} {{.HostConfig.LogConfig.Config}}' \
  <CONTAINER>
```
The most common issue is starting a container with the Fluentd driver configured while the Fluentd collector isn’t ready yet. If fluentd-async is disabled, the container either fails to start or log output itself is blocked.
| Symptom | Cause | Solution |
|---|---|---|
| Container start delay or failure | Waiting for Fluentd collector connection | Set fluentd-async=true |
| Empty docker logs output | Using a non-file driver | Switch to json-file or local |
| Log file size not decreasing | Settings not applied to existing containers | Recreate containers |
| Permission denied | Direct access to /var/lib/docker | Use docker logs command |
A syntax error in daemon.json prevents the Docker daemon from starting entirely. After editing the file, validate the configuration with sudo dockerd --validate (Docker 23.0 and later) or check error messages with sudo systemctl status docker. The daemon.json file path is /etc/docker/daemon.json on Linux and either the GUI settings or ~/.docker/daemon.json on macOS Docker Desktop.
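As a sketch, a pre-restart syntax check can be scripted with python3 -m json.tool as a portable JSON validator (this only catches syntax errors; dockerd --validate additionally checks option names):

```shell
# Validate the JSON syntax of a daemon.json before restarting Docker.
check_daemon_json() {
  if python3 -m json.tool "$1" > /dev/null 2>&1; then
    echo "valid"
  else
    echo "invalid"
  fi
}

# Demonstrated on a throwaway file; point it at /etc/docker/daemon.json
# for real use.
TMP_CFG=$(mktemp)
printf '{"log-driver": "local", "log-opts": {"max-size": "20m"}}' > "$TMP_CFG"
check_daemon_json "$TMP_CFG"   # prints: valid
```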
Right after switching from json-file to the local driver, existing JSON log files may remain under /var/lib/docker/containers/. New containers write logs via the local driver, but old files aren’t automatically removed — a manual cleanup step is needed after the transition.
Docker Container Log Management Operational Checklist
When auditing Docker container log management in production, check the following items.
Initial Setup Checklist
- Verify the default driver with docker info --format '{{.LoggingDriver}}'
- Confirm max-size and max-file are set in daemon.json
- If the default driver is json-file, evaluate switching to local
- For environments requiring external collection, enable the Fluentd driver with fluentd-async
- Check whether non-blocking mode is applied to high-traffic services
Periodic Inspection Checklist
- Monitor disk usage of /var/lib/docker/containers/ per host
- Verify logging settings aren't dropped when containers are recreated
- Health-check the Fluentd collector and confirm containers without fluentd-async aren't affected during collector downtime
- Verify each container's driver matches intent with docker inspect -f '{{.HostConfig.LogConfig.Type}}'
- Confirm log rotation is actually working by checking file count and sizes
Define an x-logging anchor at the top level of a docker-compose file and reference it from each service to prevent configuration omissions. YAML anchors are a standard YAML feature supported by Docker Compose and require no additional plugins.
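A sketch of that pattern (service names and values are illustrative):

```yaml
# Shared logging settings defined once and referenced per service.
x-logging: &default-logging
  driver: json-file
  options:
    max-size: "10m"
    max-file: "3"

services:
  web:
    image: nginx:latest
    logging: *default-logging
  worker:
    image: myapp:latest
    logging: *default-logging
```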
Next Steps for Log Management Automation
Once the basics of Docker container log management are in place, several areas warrant further consideration.
The official documentation doesn’t include an integration tutorial for the EFK (Elasticsearch + Fluentd + Kibana) stack, nor does it cover log-based alerting configuration. Guidance on log correlation tracking in multi-container environments is also lacking. These areas require separate reference to the Fluentd and Elasticsearch documentation.
After logging driver setup is complete, the next step is building an EFK stack — loading Fluentd-collected logs into Elasticsearch and visualizing them with Kibana. A container log monitoring dashboard significantly improves incident detection speed. Integrating docker compose log management into CI/CD pipelines so that logging settings are automatically validated during deployments is also worth considering. Docker container log management isn’t a one-time configuration — it’s an area that requires ongoing review of log rotation policies and collection pipelines.