Docker: Automatic Image and Log Cleanup

Why doesn’t Docker automatically clean up junk files?

If you’ve had a Docker server for more than a month, you’ve probably noticed that disk space mysteriously disappears. Docker is designed on the principles of immutability and security. It doesn’t delete anything by default because it can’t guess whether that “orphaned” image from three months ago is a critical version you plan to roll back to, or if that build cache is something you need for a quick deployment tomorrow.

In short: Docker prefers to fill your disk rather than accidentally delete something you might need. Therefore, system hygiene is in our hands.
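Before automating anything, it helps to see where the space is actually going. Docker ships a built-in report for this:

```shell
# Summarize disk usage by images, containers, local volumes, and build cache
docker system df

# Add -v for a per-image and per-container breakdown
docker system df -v
```

The "RECLAIMABLE" column in that report is exactly what the script below will target.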

1. The Maintenance Script (daily_maintenance.sh)

To prevent temporary files and old images from taking over your storage, we’ll use a script that automates system cleanup.

Save this file in /usr/local/bin/daily_maintenance.sh

#!/bin/bash

# ==============================================================================
# Docker & System Daily Cleanup Script
# ==============================================================================

# Redirect all output to a log file for auditing
LOG_FILE="/var/log/daily_maintenance.log"
exec > >(tee -a "$LOG_FILE") 2>&1

echo "--- Starting Maintenance: $(date) ---"

# 1. Docker Image Cleanup
# Prune unused images (not referenced by any container)
# '-a': Remove all unused images, not just dangling ones
# '-f': Force command without confirmation
# 'until=24h': Only remove images created more than 24 hours ago
echo "Running Docker image prune..."
docker image prune -a -f --filter "until=24h"

# Optional: Clean Builder Cache (can grow large over time)
# docker builder prune -f --filter "until=24h"

echo "--- Maintenance Finished ---"

What exactly does this command do? The --filter "until=24h" flag is vital: it prevents the script from deleting images you've just pulled but haven't yet started a container from. It's the perfect balance between cleanliness and caution.
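docker image prune has no dry-run mode, but you can get a partial preview before trusting the automation. Dangling (untagged) images are a subset of what a prune with -a removes, and the script's log file shows what each run actually deleted:

```shell
# List dangling (untagged) images -- a subset of what 'prune -a' will remove
docker images --filter "dangling=true"

# After the script has run, audit what was deleted
tail -n 20 /var/log/daily_maintenance.log
```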

2. Installation and Automation

For this to actually work, we need to give it execution permissions and schedule it on the system.

Grant Permissions:

# Set execution permissions for the script
sudo chmod +x /usr/local/bin/daily_maintenance.sh

Add to Crontab (Daily): Open the crontab (crontab -e) and schedule the script to run at 4:00 AM, when the server load is usually minimal:

# Run maintenance script every day at 4:00 AM
0 4 * * * /usr/local/bin/daily_maintenance.sh
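Before relying on cron, it's worth confirming the entry was saved and doing one manual run to catch permission or path problems early:

```shell
# Confirm the crontab entry exists
crontab -l | grep daily_maintenance

# Run once manually to verify permissions and log output before cron takes over
sudo /usr/local/bin/daily_maintenance.sh
tail -n 5 /var/log/daily_maintenance.log
```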
3. Pro Recommendation: Log Control (daemon.json)

The script above cleans the system, but it doesn’t touch the individual logs of the live containers. These .log files are the number one cause of full disks in Docker.

Instead of manually deleting them (which is dangerous), the best practice is to configure global log rotation. Create or edit the file /etc/docker/daemon.json and add the following:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

What does this achieve? No container, no matter how noisy, will occupy more than 30MB of logs in total (3 files of 10MB each). Remember to restart the service to apply the changes:

# Restart Docker service to apply log rotation settings
sudo systemctl restart docker
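After the restart, you can confirm the daemon picked up the settings. One caveat: log-opts only apply to containers created after the change, so existing containers must be recreated to inherit the limits (my_app below is a placeholder container name):

```shell
# Confirm the active log driver
docker info --format '{{.LoggingDriver}}'

# Inspect the log options of a container created after the restart
CONTAINER=my_app   # placeholder: substitute one of your container names
docker inspect --format '{{.HostConfig.LogConfig.Config}}' "$CONTAINER"
```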

Conclusion

Automating the cleanup of your Docker environment isn’t just about organization; it’s about ensuring the availability of your infrastructure. A full disk can corrupt databases and crash critical services in your home lab. With this script and the log configuration, you can forget about disk space alerts for a long time.