How to Redirect Docker Logs to a Single File

If you’re using Docker to manage your containers, you’re probably aware of how quickly the logs can pile up. By default, Docker stores container logs in separate files, which can be a hassle to manage and analyze. Fortunately, you can redirect Docker logs to a centralized file for easier access and analysis. In this guide, we’ll walk you through the steps to redirect Docker logs to a central file, and how to manage and analyze those logs efficiently.

Step 1: Set Up a Centralized Logging System

Before we begin, it’s important to set up a centralized logging system to store your Docker logs. Several open-source tools are available for this, including Elasticsearch, Fluentd, and Logstash. For this guide, we’ll use the ELK stack (Elasticsearch, Logstash, and Kibana) as a popular choice for centralized logging.

Step 2: Configure Docker to Send Logs to Logstash

Once you have set up the centralized logging system, the next step is configuring Docker to send logs to Logstash. Logstash is a data processing pipeline that ingests data from various sources and sends it to a central location for storage and analysis. You must modify the Docker daemon configuration file to configure Docker to send logs to Logstash.

First, create the “docker” directory under “/etc” if it doesn’t already exist:

$ sudo mkdir -p /etc/docker

Next, create a new file called “daemon.json” in the “/etc/docker” directory:

$ sudo nano /etc/docker/daemon.json

Add the following lines to the file:

{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp://logstash:5000",
    "tag": "{{.Name}}"
  }
}

The “log-driver” option specifies the logging driver to use, in this case, syslog. The “log-opts” option defines the syslog address and tag. In this example, the syslog address is set to “tcp://logstash:5000”, which is the address and port of the Logstash server.

Save and close the file, then restart the Docker daemon so the new logging configuration takes effect (for example, sudo systemctl restart docker on systemd-based hosts).
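Because a malformed daemon.json will prevent the Docker daemon from starting, it’s worth validating the JSON before restarting. Here’s a quick sketch of such a check; it writes the config to a temporary path for demonstration, but on a real host you would point it at /etc/docker/daemon.json:

```shell
# Pre-flight check: make sure the daemon.json we wrote is valid JSON
# before restarting Docker. A syntax error here can stop the daemon.
cat <<'EOF' > /tmp/daemon.json
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp://logstash:5000",
    "tag": "{{.Name}}"
  }
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json: valid JSON"
```

If the check passes, copy the file into place and restart the daemon.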

Step 3: Start Logstash

Before we can receive logs from Docker, we need to start Logstash. Logstash can be started as a service or as a standalone process. For this guide, we’ll start Logstash as a standalone process.

$ bin/logstash -e 'input { syslog { port => 5000 } } output { elasticsearch { hosts => ["localhost:9200"] } }'

This command starts Logstash, configures it to receive syslog data on port 5000 (matching the address we set in daemon.json), and sends it to Elasticsearch. The Elasticsearch host is set to “localhost:9200”, the default Elasticsearch host and port.

Step 4: View Your Docker Logs in Kibana

With Logstash and Elasticsearch set up, you can now view your Docker logs in Kibana, a web-based visualization tool for Elasticsearch data.

To view your Docker logs in Kibana, open your web browser and navigate to “http://localhost:5601”. This will open the Kibana web interface. From the main menu, select “Discover”.

In the “Discover” interface, you should see all of your Docker logs. You can use the search and filter features to narrow down the logs to specific containers or time periods.

Step 5: Analyze Your Docker Logs

Now that your Docker logs are centralized in one location, you can easily analyze them to gain insights into your containerized environment. You can use Kibana to create visualizations and dashboards that show trends and patterns in your logs, such as error rates, response times, and resource usage.

To create a visualization in Kibana, click “Visualize” from the main menu. From here, you can choose the type of visualization you want to create, such as a bar chart, line chart, or pie chart. Once you have selected your visualization type, you can configure your visualization’s settings and data source.

For example, you could create a bar chart showing the error rate for each Docker container. To do this, you would select the “Bar Chart” visualization type, and configure the “X-Axis” to be the container name, and the “Y-Axis” to be the percentage of log entries that contain an error.

Once you have created your visualization, you can add it to a dashboard to create a custom monitoring dashboard for your Docker environment.

But There Is More

Sometimes, when troubleshooting or monitoring a Docker container, we want to direct the application’s logs to a file. Like any other software, containerized applications generate standard output (stdout) and standard error (stderr).

The Docker daemon captures these streams and directs them to one of several locations, depending on which logging driver is configured.

The default driver makes the container output easy to access, and if we want to copy the information to another location, we can redirect docker logs to a file.

Let’s take a look at how we can manipulate logs generated by the default json-file log driver.

This driver writes the container’s output to JSON-formatted files in a system directory (/var/lib/docker/containers/<container-id>/ on Linux hosts), but the docker logs command line tool displays the log contents in their original format.

Viewing Container Logs

Let’s start with a simple container. We’ll create one that prints a message every second.

$ docker run --name test -d busybox sh -c "while true; do $(echo date); sleep 1; done"

This command is an example from the Docker website. The host shell expands $(echo date) to date before the container starts, so the container runs a minimal busybox shell loop that logs a timestamp once every second. We’ll use it for a couple of examples.

The docker logs command is the easiest way to view container logs. Start the container and then display the logs.

$ docker run --name test -d busybox sh -c "while true; do $(echo date); sleep 1; done"
13f622b1e214d7e84b6aa2093484f932ab55c14752459d3075b41b1a0c60cbaf
$ docker logs test
Thu Oct 18 01:55:27 UTC 2018
Thu Oct 18 01:55:28 UTC 2018
Thu Oct 18 01:55:29 UTC 2018
Thu Oct 18 01:55:30 UTC 2018

This command displays all of the output the container has generated so far.

If there’s too much output for a single screen, pipe it to a buffering command like less.

$ docker logs test | less
Thu Oct 18 01:55:27 UTC 2018
Thu Oct 18 01:55:28 UTC 2018
Thu Oct 18 01:55:29 UTC 2018
Thu Oct 18 01:55:30 UTC 2018
Thu Oct 18 01:55:31 UTC 2018
Thu Oct 18 01:55:32 UTC 2018
Thu Oct 18 01:55:33 UTC 2018
Thu Oct 18 01:55:34 UTC 2018
Thu Oct 18 01:55:35 UTC 2018
Thu Oct 18 01:55:36 UTC 2018
Thu Oct 18 01:55:37 UTC 2018
Thu Oct 18 01:55:38 UTC 2018
Thu Oct 18 01:55:39 UTC 2018
Thu Oct 18 01:55:40 UTC 2018
Thu Oct 18 01:55:41 UTC 2018
:

This will still send all of the output generated so far through less, so you’ll need to page through to get to the end.
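If you only care about the most recent entries, you can skip the pager entirely and pipe to tail (docker logs also accepts --tail N to ask the daemon for just the last N lines). The idea is simulated here with seq standing in for the container output, so no Docker daemon is needed:

```shell
# Piping to tail keeps only the last few lines instead of paging through all.
# With a real container this would be: docker logs test | tail -n 3
seq 1 100 | tail -n 3
# prints:
# 98
# 99
# 100
```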

But sometimes, you want to follow the logs as the container is running. There’s a command line option with an appropriate name for that: run docker logs with --follow.

$ docker logs --follow test
Thu Oct 18 01:55:27 UTC 2018
Thu Oct 18 01:55:28 UTC 2018
Thu Oct 18 01:55:29 UTC 2018
Thu Oct 18 01:55:30 UTC 2018
Thu Oct 18 01:55:31 UTC 2018
Thu Oct 18 01:55:32 UTC 2018
Thu Oct 18 01:55:33 UTC 2018
Thu Oct 18 01:55:34 UTC 2018
Thu Oct 18 01:55:35 UTC 2018
Thu Oct 18 01:55:36 UTC 2018
Thu Oct 18 01:55:37 UTC 2018
Thu Oct 18 01:55:38 UTC 2018
Thu Oct 18 01:55:39 UTC 2018
Thu Oct 18 01:55:40 UTC 2018
Thu Oct 18 01:55:41 UTC 2018

After printing all of the container output so far, the command will continue to follow the logs. You can also use the short form of the option, -f, just as with the tail command.

The docker logs command also offers an array of convenient options, such as filtering output based on time (--since, --until) and displaying timestamps (--timestamps). You can read more about them in the Docker documentation.

But what if you want to redirect the docker logs to a file for viewing offline?

Redirect Docker Logs to File

The json-file driver captures both stdout and stderr, and docker logs replays each stream separately: stdout on the host’s stdout and stderr on the host’s stderr. That means we can treat the log output like any other shell stream. To redirect the current logs to a file, use a redirection operator, adding 2>&1 to capture both streams.

$ docker logs test > output.log 2>&1
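Note that docker logs writes a container’s stderr to the host’s stderr stream, so a bare > captures only stdout; append 2>&1 to capture both. A quick simulation in a plain shell (hypothetical file names, no Docker needed):

```shell
# Without 2>&1, the stderr line escapes the file and goes to the terminal.
{ echo "stdout line"; echo "stderr line" >&2; } > /tmp/stdout-only.log 2>/dev/null
# With 2>&1, both streams land in the file.
{ echo "stdout line"; echo "stderr line" >&2; } > /tmp/both.log 2>&1
grep -c "line" /tmp/stdout-only.log   # 1: only the stdout line was captured
grep -c "line" /tmp/both.log          # 2: both lines were captured
```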

To send the current logs and then any updates that follow, use --follow with the redirection operator.

$ docker logs -f test > output.log 2>&1

This command saves the output for future reference, but what if you want to keep the logs and view them at the same time?

Opening another shell and tailing the output file feels like a hack. There must be a better way.

So, let’s use tee to watch the output and save it to a file at the same time.

$ docker logs -f test | tee output.log
Thu Oct 18 02:08:52 UTC 2018
Thu Oct 18 02:08:53 UTC 2018
Thu Oct 18 02:08:54 UTC 2018
Thu Oct 18 02:08:55 UTC 2018
Thu Oct 18 02:08:56 UTC 2018
Thu Oct 18 02:08:57 UTC 2018
Thu Oct 18 02:08:58 UTC 2018
Thu Oct 18 02:08:59 UTC 2018
Thu Oct 18 02:09:00 UTC 2018
Thu Oct 18 02:09:01 UTC 2018
Thu Oct 18 02:09:02 UTC 2018
Thu Oct 18 02:09:03 UTC 2018

We still see the container output. Now, open another shell and check the output file.

$ tail -f output.log
Thu Oct 18 02:08:52 UTC 2018
Thu Oct 18 02:08:53 UTC 2018
Thu Oct 18 02:08:54 UTC 2018
Thu Oct 18 02:08:55 UTC 2018
Thu Oct 18 02:08:56 UTC 2018
Thu Oct 18 02:08:57 UTC 2018
Thu Oct 18 02:08:58 UTC 2018
Thu Oct 18 02:08:59 UTC 2018
Thu Oct 18 02:09:00 UTC 2018
Thu Oct 18 02:09:01 UTC 2018

The file is being updated too! We have the output on the terminal, and tee saves it to the file we specified at the same time.
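One caveat with tee: by default it truncates the output file each time it runs, so a second capture session wipes the first. Pass -a to append instead. A small demonstration (the file name is hypothetical):

```shell
# tee truncates by default; tee -a appends across repeated runs.
rm -f /tmp/append-demo.log
echo "first session"  | tee /tmp/append-demo.log > /dev/null
echo "second session" | tee -a /tmp/append-demo.log > /dev/null
cat /tmp/append-demo.log   # both lines survive because the second tee appended
```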

Redirecting Stdout/Stderr in the Container

Of course, there’s another way to save your container logs to a file.

Instead of sending output to stderr and stdout, redirect your application’s output to a file and map the file to permanent storage outside of the container.

Whether to bypass standard output entirely is a judgment call that depends on:

  • how much output your application generates,
  • how often you think you’ll need to refer to it,
  • and what kind of logging infrastructure is available to you.

If, for example, you need to store events for compliance purposes, a logging framework like Java’s Logback may be a better option than capturing stdout and stderr.

That said, let’s modify the previous example to save the output to a file. We’ll change the loop to append the date to a file in /tmp, using >> so each iteration adds a line instead of overwriting the file.

We’ll also run the container with /tmp mapped to the same path on our host.

Note that Docker will not accept relative paths in -v bind mounts, so if you want to use a different directory, you’ll need to provide the complete path.

$ docker run -v /tmp:/tmp --name test -d busybox sh -c "while true; do date >> /tmp/output.log; sleep 1; done"
$ tail -f /tmp/output.log
Tue Oct 17 22:03:45 UTC 2018
Tue Oct 17 22:03:46 UTC 2018
Tue Oct 17 22:03:47 UTC 2018
Tue Oct 17 22:03:48 UTC 2018
Tue Oct 17 22:03:49 UTC 2018
Tue Oct 17 22:03:50 UTC 2018
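If you’d rather mount a directory other than /tmp, remember that the host side of -v must be an absolute path; $(pwd) is a convenient way to build one from your current location. The directory name here is hypothetical:

```shell
# Build an absolute host path from the current directory for a -v bind mount.
LOG_DIR="$(pwd)/container-logs"
mkdir -p "$LOG_DIR"
echo "$LOG_DIR"   # an absolute path, safe to pass to docker run -v
# docker run -v "$LOG_DIR:/tmp" --name test -d busybox sh -c "..."
```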

Now, take a look at docker logs.

$ docker logs test

There’s nothing there. Since we redirected the output to a file, there is nothing for Docker to capture.
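One detail worth calling out in loops like this one: > truncates the file on every iteration, so only the most recent line survives, while >> appends. A quick comparison in a plain shell (hypothetical file names):

```shell
# '>' truncates the file each time; '>>' appends to it.
rm -f /tmp/trunc.log /tmp/append.log
for i in 1 2 3; do echo "line $i" > /tmp/trunc.log; done    # truncates each pass
for i in 1 2 3; do echo "line $i" >> /tmp/append.log; done  # appends each pass
wc -l < /tmp/trunc.log    # 1: only "line 3" remains
wc -l < /tmp/append.log   # 3: all three lines kept
```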

Of course, we would need to capture both stdout and stderr in a production application.

Redirect Logs to a File In or Out of The Container?

We covered several different ways to redirect docker logs to a file for saving and analysis. The best method is the one that works for you.

In a development environment, the docker logs command is a powerful tool that works well with other command line tools. Docker’s built-in tooling lets you view, filter, and redirect logs to a file.

Log management should be part of an enterprise strategy in a production environment. Scalyr’s log aggregation tools help you aggregate, process, search, and analyze your logs.