On Tue, May 06, 2008 at 09:03:06AM +0200, Marco Berizzi wrote:
> David Chinner wrote:
> > > May 5 14:31:38 Pleiadi kernel: xfs_inactive: xfs_ifree() returned
> > > error = 22 on hda8
> > Is it reproducible?
> honestly, I don't know. As you may see from the
> dmesg output this box has been started on 24 april
> and the crash has happened yesterday.
Yeah, I noticed that it happened after substantial uptime.
> IMHO the crash happened because of this:
> At 12:23 squid complains that there is no space left
> on the device and it starts shrinking the cache_dir, and
> at 12:57 the kernel starts logging...
> This box is pretty slow (celeron) and the hda8 filesystem
> is about 2786928 1k-blocks.
Hmmmmm - interesting. Both the reports of this problem are from
machines running as squid proxies. Are you using AUFS for the cache?
The ENOSPC condition is interesting, but I'm not sure it is at all
relevant - the other case seemed to be triggered by some cron job
doing cache cleanup, so I think it's just the removal of files that
is triggering the problem.
> > What were you doing at the time the problem occurred?
> this box is running squid (http proxy): hda8 is where
> squid cache and logs are stored.
> I haven't rebooted this box since the problem happened.
> If you need ssh access just email me.
> This is the output from xfs_repair:
You've run repair, so there's not much I can look at now.
As a suggestion, when the cache gets close to full next time, can
you take a metadump of the filesystem (obfuscates names and contains
no data) and then trigger the cache cleanup function? If the
filesystem falls over, I'd be very interested in getting a copy of
the metadump image and trying to reproduce the problem locally.
(BTW, you'll need a newer xfsprogs to get xfs_metadump).
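To sketch what that capture would look like (the device name is taken
from this thread; the output paths are just examples):

```shell
# Dump only the filesystem metadata - file names are obfuscated and
# no file data is copied, so the image is safe to share.
xfs_metadump /dev/hda8 /root/hda8.metadump

# On the receiving end, the image can be restored to a sparse file
# for local inspection with xfs_mdrestore (also part of xfsprogs):
xfs_mdrestore /root/hda8.metadump /tmp/hda8.img
```

Both tools ship with recent xfsprogs releases.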
Still, thank you for the information - the bit about squid proxies
is definitely relevant, I think...
SGI Australian Software Group