Backups on a LAMP server were much smaller than expected, and the cause was excessively large log files. This post explains the apparent paradox and the steps taken to trace and fix the problem.
The server in question runs the usual set of grandfather-father-son backups. Before each backup, the software prints the expected size of the backup into a log file. On other servers the estimate is usually bang-on. On this server, the backup turned out to be about 12 times smaller than the estimate. Not a problem in itself, but puzzling and indicative of something untoward on the system.
The backups are encrypted (with gpg), and the inherent compression ratio is normally about 1.6 on this system. The fact that it has jumped right up to 12 indicates that an awful lot of something very compressible has appeared on the system.
A quick inspection revealed some of the system log files to be over 300 MB in size. Obviously, logrotate was not doing its job. The system has only about 2.7 GB of used storage in total, so 300 MB is an appreciable proportion of that, and an appreciable proportion of the backed-up data.
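Spotting runaway logs like these takes one size-sorted listing; /var/log is the standard location, adjust to taste:

```shell
# List the ten largest files (and directories) under /var/log,
# biggest first; errors from unreadable entries are discarded
du -ah /var/log 2>/dev/null | sort -rh | head -10
```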
Log files are, of course, human readable text, and would normally compress by about 10 times under encryption/compression. These particular log files were particularly repetitive, and therefore particularly compressible. Further testing showed the “messages” file to be compressible by 17 times with a simple Linux “compress” command, and that’s not the tightest of compressors. The file was filled mostly with repeated notifications from iptables, the kernel’s firewall, about broadcast packets from a local Samba server.
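That sort of measurement is easy to reproduce. The sketch below uses gzip rather than the older compress (which may not be installed everywhere), and synthesises a file of repeated iptables-style lines rather than assuming access to the real "messages" file:

```shell
# Estimate how compressible a repetitive log file is.
# The file here is synthetic; substitute your own log for $f.
f=$(mktemp)
for i in $(seq 1 5000); do
    echo "kernel: iptables IN=eth0 dropped broadcast from Samba host" >> "$f"
done
orig=$(wc -c < "$f")
comp=$(gzip -c "$f" | wc -c)
echo "compression ratio: $((orig / comp))"
rm -f "$f"
```

On a genuinely repetitive log, the ratio comes out far higher than the ~1.6 the whole system normally achieves.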
Many other log files were similarly oversized, similarly filled with repetitive messages from iptables and sendmail regarding harmless system events. The Apache logs were gargantuan too. Not that this web site is particularly busy; the logs simply had not been trimmed.
The Logrotate Command
Logrotate is a useful Linux facility that automatically trims and manages log files. An inspection of the cron files and the logrotate configuration showed that logrotate should have been rotating these files regularly, but was not. Running logrotate in the shell foreground printed a simple error message about an invalid time in a logrotate state file under /var. I did not record the message, unfortunately, but the fix was simply to truncate the file:
# ls -l /var/lib/logrotate/status
-rw-r--r-- 1 root root 980 Jan 14 1960 /var/lib/logrotate/status
# > /var/lib/logrotate/status
With a timestamp like that (1960), it's not surprising that logrotate barfed: it pre-dates even the Unix 1970 epoch. Running logrotate twice in the shell then performed the needful. The first run renamed the huge files, and the second compressed them (this behaviour resulting from the default Debian Squeeze logrotate configuration).
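That two-run behaviour is what logrotate's delaycompress directive produces, and it can be reproduced with a throwaway config and state file. A minimal sketch, assuming logrotate is installed; -f forces a rotation and -s points at a private state file so the real one is untouched:

```shell
# Self-contained demo: two forced rotations of a throwaway log
# (all paths are temporary, for illustration only)
dir=$(mktemp -d)
printf 'old log data\n' > "$dir/app.log"
cat > "$dir/logrotate.conf" <<EOF
"$dir/app.log" {
    rotate 4
    compress
    delaycompress
}
EOF
# First forced run only renames: app.log -> app.log.1
# (delaycompress postpones the gzip until the next rotation)
logrotate -f -s "$dir/state" "$dir/logrotate.conf"
# Second forced run rotates again and compresses the previous file
printf 'new log data\n' > "$dir/app.log"
logrotate -f -s "$dir/state" "$dir/logrotate.conf"
ls "$dir"
rm -rf "$dir"
```

A plain "logrotate -d /etc/logrotate.conf" dry run is also handy: it surfaces errors like the bad-timestamp one above without changing anything.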
The backup sizes and estimates then returned to normal. Problem fixed.
A known hardware error was to blame. Every time that server crashes and reboots, its clock reads 1960 for a few moments before NTP corrects it. A power cut weeks earlier had brought about those conditions, breaking logrotate and creating the problems above.
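On a box with a suspect clock like this, comparing the hardware clock with the NTP-disciplined system clock after boot shows the discrepancy directly. date and hwclock are standard tools, though hwclock needs root and a readable RTC, hence the fallback message that keeps the snippet runnable unprivileged:

```shell
# System clock, as corrected by NTP
date
# Hardware (CMOS/RTC) clock -- this is the one that reads 1960 after a crash
hwclock --show 2>/dev/null || echo "hwclock: needs root / no RTC access"
# Sanity check: the system clock should be well past the year 2000
[ "$(date +%s)" -gt 946684800 ] && echo "system clock looks sane"
```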
Log files are a minor humbug for backups. They are constantly being updated and are therefore caught in every backup, full and incremental alike. Where backups are not compressed/encrypted, they may also take up appreciable resources, adding to backup times and consuming space on the backup target, whether that is disk, VTL or tape. On LAMP servers, as with some other applications, the log files are critical for security, monitoring and analysis, and Linux administrators must take these backup tussles on the chin.