Docker's overlay2 disk usage

This issue was initially discussed in the Hummingbot #dev channel on Discord.

  • Disk usage keeps increasing.
  • My single Hummingbot (HB) container is only 88 MB.
  • du -csh /var/lib/docker shows that Docker is using 8.4G.
  • Looking into /var/lib/docker, I found that the overlay2 folder takes up the majority of the space (5.5G).
  • I used ncdu to investigate which folder exactly is growing (see the command sketch after this list).
  • The folders /var/lib/docker/overlay2/**/merged/opt/conda have the highest disk usage.
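
For reference, here is a minimal sketch of the investigation commands above (run with root privileges; the paths assume Docker's default data root of /var/lib/docker):

# Total disk used by Docker's data root
sudo du -csh /var/lib/docker

# Break usage down by subdirectory; overlay2 holds the image and container layers
sudo du -sh /var/lib/docker/*

# Interactively drill into overlay2 to find the largest directories
sudo ncdu /var/lib/docker/overlay2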

Temporary fix:

  • Stop the instance and run docker system prune -a (a sketch follows).
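
A minimal sketch of that fix, assuming a container named hummingbot-instance (the name is hypothetical; substitute your own). Be aware that docker system prune -a also removes the container you just stopped, along with its image, so you will have to recreate the instance afterwards:

docker stop hummingbot-instance    # hypothetical container name
docker system prune -a             # also deletes the stopped container and its image

# Less destructive alternative that keeps the stopped container:
docker image prune -a              # removes only images not referenced by any container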

This might be helpful.

Thanks for this temporary fix suggestion; it just freed up 4.8 GB on my system.

Thanks for the suggestion; I tried it and freed 5.2 GB. I'm now testing whether anything has changed from how it used to behave.

docker system prune -a

WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all images without at least one container associated to them
- all build cache
Are you sure you want to continue? [y/N]

I used to experience this, but not anymore with our testing bots. When you switched to the development version, did you remove the “latest” image when you were no longer using it?

When you run docker system df, it shows your Docker images and containers with their sizes and how much space you can save by removing the unused ones (the RECLAIMABLE column).
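
For illustration, docker system df output looks like this (the numbers are invented to echo the figures in this thread; yours will differ):

TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          3         1         5.5GB     4.8GB (87%)
Containers      1         1         88MB      0B (0%)
Local Volumes   0         0         0B        0B
Build Cache     0         0         0B        0B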

Then docker system prune -a will remove unused containers, images, networks, and build cache (if there are any).
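
If you would rather reclaim space one resource type at a time instead of all at once, Docker also offers per-type prune commands (a sketch; each prompts for confirmation unless you pass -f):

docker container prune    # remove all stopped containers
docker image prune -a     # remove images not used by any container
docker network prune      # remove networks not used by any container
docker builder prune      # remove build cache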

One of my bots, running as a single Docker container from one image (development) on a cloud server, has used only around 2.8 GB of disk space, and that hasn't changed since. For comparison, the minimum storage requirement for Hummingbot is 5 GB per instance, as indicated in the documentation.

I will keep trying to reproduce your setup and investigate whether the disk usage increases over time.

I am using the update.sh script, which deletes the previous image. This happens on all my servers, so perhaps I can figure out how to reproduce it and provide feedback.

That’s correct: the update.sh script deletes the previous image of the current branch and downloads the most recent one. However, it doesn’t delete the image of the other branch when you switch. That is, ./update.sh --> development deletes the old development image and creates the container with the new development image; likewise, ./update.sh --> latest deletes the old latest image and creates the container with the new latest image.

Example:

  1. I created container A using the latest image
  2. Then I updated container A to switch to the development branch
  3. Every time a new Docker image is published, I use the update.sh script to update container A

In the above scenario, update.sh only ever deletes the development image; the original latest image is left behind (a sketch for removing it by hand follows). Run docker system df on the other servers and post screenshots so we can investigate further.
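
If the leftover image of the branch you stopped using is what is eating the space, here is a minimal sketch for removing it by hand. The hummingbot/hummingbot repository name and the latest tag are assumptions; check the output of docker images for the actual names on your system:

docker images                                # list repositories, tags and sizes
docker rmi hummingbot/hummingbot:latest      # hypothetical: drop the stale 'latest' image after switching to development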

Yes indeed; in my use case I don’t switch branches, I always stay on the same one.