I’ve been running a server which hosts a fairly busy Joomla site. The server has 2GB of RAM and runs nginx, php5-fpm and MySQL, and not much else. However, it would run fine for a while and then the disk would start swapping. Not a lot, but enough to cause a few issues. If I restarted the server, memory usage would start at something like this:
    $ free
             total      used      free
    Mem:   2048036   1024048   1023988
    Swap:  4192960         0   4192960
After about a day it would look like this
    $ free
             total      used      free
    Mem:   2048036   1924048     73988
    Swap:  4192960         0   4192960
And eventually it would have a flurry of activity which would make it look like this
    $ free
             total      used      free
    Mem:   2048036   1924048     23988
    Swap:  4192960      4567   4188393
Not too bad in the general scheme of things, but any swapping is bad news: while it’s happening, pages are slow and the server grinds. It’s worth mentioning at this point that this install of Joomla uses a lot of third-party plugins, which push its memory usage up to a monstrous 200MB per page (php.ini and the nginx config altered accordingly) – any less than that and I just got blank pages. So naturally I suspected the plugins were causing the memory usage and swapping, and that there was nothing to be done about it until the next site redesign. I wrote myself a script that once a day shut down nginx, cleared the cache memory, cleared swap and then restarted nginx. That was a good enough band-aid.
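For the curious, the band-aid looked roughly like this (a sketch from memory – paths and service names may differ on your distro; the cache-dropping and swap-draining steps need root, and swapoff will fail if there isn’t enough free RAM to absorb what’s currently in swap):

```shell
#!/bin/sh
# Nightly band-aid: stop nginx, drop caches, drain swap, restart nginx.
# Run from root's crontab, e.g.:  0 4 * * * /usr/local/sbin/clear-swap.sh

service nginx stop

sync                               # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches  # free pagecache plus dentries and inodes

swapoff -a && swapon -a            # force swapped pages back into RAM

service nginx start
```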
However, the truth of the matter is stranger still. I was investigating something else and the site didn’t seem to be updating. I suspected the cache, so I went to Joomla’s cache manager and tried to clear it. The site and server froze. Ugh. So I ssh’ed in and took a look in Joomla’s /cache directory. Now things were really odd: I couldn’t even list the files, because there were too many of them! But I managed to glimpse a few using find and grep. In /cache/com_content there were millions of files. Some were named with a long hex string (e.g. 0126e6a115e7b44a8c9912972b3045c8.php) and each had a counterpart with _expire tagged on the end (e.g. 0126e6a115e7b44a8c9912972b3045c8.php_expire). So apparently Joomla hadn’t been clearing its cache out. rm wouldn’t clear the files either – with that many names, the shell’s glob expansion blows past the kernel’s argument-length limit (ARG_MAX) and the command never even runs. Comparing df output before and after the cleanup told me there were 21GB of files accumulated over a 3-month period! Insane!
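The reason find copes where ls chokes is that ls reads and sorts the whole directory before printing anything, while find streams entries as it reads them. You can see the streaming approach work on a scratch directory (a stand-in for cache/com_content – the path and file count here are just for illustration):

```shell
# Scratch stand-in for the bloated cache directory
DEMO=$(mktemp -d)
for i in $(seq 1 2000); do : > "$DEMO/$i.php"; done

# find streams entries as it goes, so it works at any scale;
# piping to wc -l counts the files without ever building a huge list in memory
find "$DEMO" -type f -name '*.php' | wc -l

rm -r "$DEMO"
```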
I eventually managed to figure out how to delete them in batches using find and xargs. I then put in a script that runs once a day and zaps all cache files older than 2 hours, along the lines of:
find /path/to/server/public_html/cache/com_content/ -type f -mmin +120 -print0 | xargs -0 rm
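Before pointing something like that at a real cache, it’s worth checking the age filter behaves as expected on a scratch directory (GNU touch and xargs assumed; the filenames here are made up):

```shell
DEMO=$(mktemp -d)
: > "$DEMO/fresh.php"
touch -d '3 hours ago' "$DEMO/stale.php"   # backdate the mtime past the cutoff

# Same recipe as the cron job: only files older than 120 minutes are removed.
# -print0 / -0 handle odd filenames; -r skips running rm if nothing matches.
find "$DEMO" -type f -mmin +120 -print0 | xargs -0 -r rm

ls "$DEMO"    # only fresh.php should remain
rm -r "$DEMO"
```

This works on millions of files because xargs invokes rm repeatedly with batches of arguments sized to fit under ARG_MAX, instead of trying to pass every name at once.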
So that was the fix for the monster cache from hell. I still have no idea why Joomla wasn’t deleting it. But it turns out that also fixed my server’s memory problem. Here’s what I think was happening: the kernel caches directory entries and inode data in memory (the dentry and inode caches), and these grow as files accumulate and get touched. With millions of small files totalling 21GB in the cache directory, those caches were slurping up a good chunk of the server’s memory. After deleting the files … no more memory problems on the server.
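If you want to check whether this is happening on your own box, the kernel reports its slab memory (which includes the dentry and inode caches) in /proc/meminfo:

```shell
# Slab is total kernel slab memory; SReclaimable is the part the kernel
# can give back under pressure (dentry/inode caches live here)
grep -E '^(Slab|SReclaimable|SUnreclaim)' /proc/meminfo

# slabtop, if installed, breaks this down per cache (dentry, inode caches, etc.):
#   slabtop -o | head -n 15
```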
I still don’t fully understand it, but it worked for me. I was also humbled to learn that even the mighty rm -rf * can be defeated if there are simply too many files – though to be fair, it’s the shell’s glob expansion that hits the limit, not rm itself!