Managing Docker Logs Using the Command Line Interface
Spiff Azeta
DevOps Engineer | Terraform & 2x AWS Certified | Cloud-Native, IaC, IT Automation
Why do we have to check Docker container logs?
docker restart containername
When we have issues with our Docker containers (when, not if), it is tempting to apply the unwritten first rule of IT: “If it suddenly stops working, turn it off and on again.” In this case, that means restarting the container, or killing it and replacing it.
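The restart command shown above handles the first case; killing and replacing looks something like this (the container name, image name and options here are placeholders for your own):

# Remove the container and start a fresh one
docker rm -f containername
docker run -d --name containername your-image:tag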
What if the issue is not with the application running in the container itself?
What if the issue is with a downstream service your container depends on, and you simply cannot connect to it?
What if the issue is an expired certificate inside your container?
What if the content of a Docker volume bound to a directory on your host machine has been modified or, worst case, deleted?
Are you panicking yet?
A lot can go wrong with a container, but the good news is we have the tools to investigate.
The great thing is that Docker has built-in functionality to help you investigate the root cause of whatever issue your containerized application is having. By default, Docker captures whatever the container writes to the stdout and stderr output streams. In simpler terms, Docker logs the activity of the applications running in the container.
The command that helps with this is “docker logs [OPTIONS] CONTAINER”.
I have a MongoDB container running in my test environment, and I’ll run the docker logs command to show you what the output looks like.
First, I’ll run docker ps to get the container ID.
Then the docker logs command to get the logs from the container, using the container ID.
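For reference, those two steps look like this on the command line (7043781d910e is the ID of my MongoDB container; yours will be different):

# List running containers and note the CONTAINER ID column
docker ps

# Print the logs collected so far for that container
docker logs 7043781d910e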
From my screenshot, you can see the MongoDB container has logged some information. I won’t pretend to know what every entry means, but your own application will log data that is relevant to you, alongside whatever it writes to stdout.
The limitation of running the command without any options is that it only shows the logs up to the moment you ran it.
To view the logs as events happen, you might want to “follow” the logs, so we add the “--follow” option. Using my MongoDB container from earlier, that would be “docker logs --follow 7043781d910e”, or we can use the shorthand “docker logs -f 7043781d910e”. This continuously streams the logs as activity happens until we press Ctrl+C to stop.
Awesome, right?! However, a possible concern is that this displays all the logs from inception. If the container has been running for days (or maybe months!), that could be a problem.
We can limit (or tail) the logs by using the “--tail” (or “-n”) option to view only the last few entries. That would look like this: “docker logs -f --tail 2 7043781d910e” to view the 2 most recent entries.
You might be wondering why the shorthand is -n and not -t. That’s because -t adds timestamps to the log entries; the full option is “--timestamps”.
We can also use the “--since” option to view logs since a particular date: “docker logs --since 2023-01-10 7043781d910e”.
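Putting those options together, here are the variations covered so far as a quick reference (same container ID as above; the date and entry count are just examples):

# Stream new entries as they arrive (Ctrl+C to stop)
docker logs -f 7043781d910e

# Follow, but start from only the 2 most recent entries
docker logs -f --tail 2 7043781d910e

# Prefix every entry with a timestamp
docker logs -t 7043781d910e

# Show only entries logged since a given date
docker logs --since 2023-01-10 7043781d910e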
That’s all the basics. However, some issues can come up with container logs.
For the entire lifetime of your container, the logs are going to be stored somewhere. This means that those logs would take up space somewhere.
The log files can grow to the point where you are struggling with disk space, especially since by default the logs are stored on the host machine. Your first thought might be to delete the log file. However, deleting Docker log files can cause problems and affect some running containers.
As an alternative to deleting the files, they can be truncated (the truncate command empties a file without removing it). The command to do this is “truncate -s 0 /var/lib/docker/containers/*/*-json.log”, where /var/lib/docker/containers/*/*-json.log is the default path to the container logs.
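If you only need to clear the log of one specific container, you can look up its exact log file first instead of truncating everything. This is a small sketch assuming the default json-file logging driver (docker inspect exposes the path in the LogPath field):

# Find the log file for a single container
docker inspect --format '{{.LogPath}}' 7043781d910e

# Truncate just that file (sudo is usually needed, since the file belongs to root)
sudo truncate -s 0 "$(docker inspect --format '{{.LogPath}}' 7043781d910e)"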
Another effective way to manage this for all containers running on a Docker host is to modify the Docker daemon configuration file.
Note: If you are not sure what you are doing, DO NOT modify this file.
On Linux machines, the file is found by default at /etc/docker/daemon.json.
In this file, the configuration is set via the log-driver and log-opts keys. For the json-file driver, max-size caps the size of each log file and max-file sets how many rotated files are kept per container (the 1k value below is a deliberately tiny example; in practice you would use something larger, such as 10m).
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "1k",
    "max-file": "5"
  }
}
Once this file is modified and the Docker daemon is restarted, newly created containers automatically pick up the configuration. Already-existing containers, however, keep the logging configuration they were created with; they have to be recreated to use the new settings.
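On most Linux distributions where systemd manages Docker, restarting the daemon looks like this (the service name can differ on other setups):

# Restart the Docker daemon so it re-reads /etc/docker/daemon.json
sudo systemctl restart docker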
However, in my opinion, the best ways to manage this are:
1) Set up your system to rotate the logs and ship them to remote storage, such as a backup disk, a network storage location or an Amazon S3 bucket (see the logrotate sketch after this list).
2) Log to a database. This could be a SQL database or a NoSQL document database like MongoDB. This might be slower, but you would be able to write queries to get exactly what you want from the logs.
3) (My preferred option) Ship the logs to Elasticsearch.
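For option 1, the rotation half can be handled with logrotate. Below is a minimal sketch assuming the default json-file driver and log path; the schedule, retention and shipping step are up to you (the file name is hypothetical):

# /etc/logrotate.d/docker-container-logs
/var/lib/docker/containers/*/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    copytruncate
}

copytruncate matters here because Docker keeps the log file open; rotating by renaming it would leave the daemon writing to the old, rotated file.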
I’ll be dropping steps on how to set up log shipping to Elasticsearch. When I do, I’ll link to that here.
Please save this and share it with others who you think would find this useful.
The official documentation for docker logs can be found here: https://docs.docker.com/engine/reference/commandline/logs/
Connect with me on LinkedIn: https://www.dhirubhai.net/in/azeta-spiff/