More often than not (sadly) I run into the following error message on my Amazon AWS instances:
unable to create '/something/something.tmp': No space left on device
So, a disk space issue… probably big log files that need to be wiped, or something like that? It's not always that easy.
How to diagnose the issue
When I run df -H, the output says I still have some space left, so what's going on?
Well, it's possible that you've run out of inodes. Run df -i to check.
If you find inode usage at 100% (or close to it), you need to free some up.
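To compare the two views side by side, something like this works (a minimal sketch; the `/` argument just limits the output to the root filesystem):

```shell
# Block (byte) usage for the root filesystem -- this can look perfectly fine
df -H /
# Inode usage for the same filesystem -- an IUse% of 100% here is the real culprit
df -i /
```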
How to locate the files to remove
Where are they? You can try running this one-liner (which doesn't need to create a tmp file, since we don't have any space left for that):
cd /
sudo find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n
Common cause: linux-headers inodes
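If you want to see what that pipeline's output looks like before scanning the whole filesystem, here's a throwaway demo against a scratch directory (the directory and file names are made up for illustration):

```shell
# Build a tiny tree and run the same pipeline against it
mkdir -p /tmp/inode-demo/var /tmp/inode-demo/usr
touch /tmp/inode-demo/var/a /tmp/inode-demo/var/b /tmp/inode-demo/usr/c
cd /tmp/inode-demo
# Count files per top-level directory: biggest offenders end up last
find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n
#   1 usr
#   2 var
```

On a real box the last few lines of the output point you at the directories eating your inodes.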
But you can save some time by first checking the most common source of this issue: old linux-headers.
They live in the /usr/src/ folder. Just run ls -l there and delete a few to recover some inodes (you won't be able to purge them with apt-get while you're at 100% usage). Don't delete them all, they are used by Linux; just get rid of some old versions that may be lingering there.
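Before deleting anything, it's worth checking which kernel you're actually running so you don't remove its headers. A quick read-only sketch (nothing here deletes anything):

```shell
# The running kernel's version -- never delete the headers matching this
uname -r
# Installed header trees under /usr/src, if any (version-sorted; -v is GNU ls)
ls -1v /usr/src/ | grep '^linux-headers' || true
```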
So, for example, run:
sudo rm -rf linux-headers-4.4.0-130/
Then you can purge the rest using sudo apt-get -f autoremove (and then manually remove any old ones that weren't cleaned up that way).
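If you'd rather not eyeball version numbers by hand, the step above can be sketched as a dry-run loop. This is only a sketch: the `HEADERS_DIR` override is a hypothetical knob I've added so it can be tried against a scratch directory, and the actual rm stays commented out until you've reviewed the output.

```shell
#!/bin/sh
# Dry run: list header trees, keeping the running kernel's (sketch only).
src="${HEADERS_DIR:-/usr/src}"            # hypothetical override for testing
current="linux-headers-$(uname -r)"
for dir in "$src"/linux-headers-*; do
  [ -d "$dir" ] || continue               # glob matched nothing; skip
  name=$(basename "$dir")
  case "$current" in
    "$name"*) echo "keeping  $dir" ;;     # running kernel's headers
    *)        echo "removing $dir"        # candidate for deletion
              # sudo rm -rf "$dir"        # uncomment after reviewing output
              ;;
  esac
done
```

The prefix match is deliberately loose (e.g. linux-headers-4.4.0-130 matches a running 4.4.0-130-generic kernel), so review the printed list before enabling the rm.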
TL;DR
Check if the issue is inode exhaustion by running df -i.
If it's at 100% or close to it, try removing old linux-headers to recover some space. Don't go crazy on this step: remove only one of the oldest versions, then run sudo apt-get -f autoremove.
They are located in the /usr/src/ folder (you can remove them with rm -rf).