
Re: Strange df output

To: Jarrod Johnson <jbj-ksylph@xxxxxxxxxxxxxxxx>
Subject: Re: Strange df output
From: Steve Lord <lord@xxxxxxx>
Date: Sat, 17 Jan 2004 09:17:30 -0600
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <40094F09.3000702@linux-sxs.org>
References: <20040108154940.7dd28591.jbj-ksylph@ken-ohki> <1073595887.27384.260.camel@stout.americas.sgi.com> <20040117091614.7d172889.jbj-ksylph@ken-ohki> <40094F09.3000702@linux-sxs.org>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6b) Gecko/20031205 Thunderbird/0.4
Net Llama! wrote:
On 01/17/04 06:16, Jarrod Johnson wrote:

Unfortunately, this filesystem was first created ~1.5 years ago, so I no longer have the mkfs arguments; I assume the defaults from whatever version was current back then. I've since moved to kernel 2.4.23 with the downloadable snapshot patch applied, and it was about then that I started noticing this behavior. The system crashed at one point listing ~4G free, but on reboot it had 0 free.

I tried to run xfs_check on it, but xfs_check is killed partway through, kernel prints:
__alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
VM: killing process xfs_db


I might be wrong, but isn't that what the kernel spits out when the box is running out of memory (and swap)? That looks exactly like the non-OOM-killer message to me.


It is. xfs_check is going to be a memory hog; if your user space tools are as old as the filesystem is, try getting newer ones, since there were changes that reduce the amount of memory consumed by a factor of at least 2. The other options are to run from single user mode so there is more memory available, and to make sure you have swap configured.
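Before rerunning xfs_check, it may be worth confirming how much RAM and swap the box actually has. A minimal sketch using standard Linux tools (the swap device name is only an example placeholder):

```shell
# show free RAM and swap in megabytes
free -m

# list active swap devices; empty output means none is configured
swapon -s

# activate an existing swap partition (example device name, adjust to taste)
# swapon /dev/hda2
```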


xfs_check is just going to report error conditions; xfs_repair is the program that actually fixes them. The same comments about memory apply as above.
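A typical xfs_repair invocation might look like the following (/dev/xxx is a placeholder for the real device, and the filesystem must be unmounted first):

```shell
# run in no-modify mode first to see what would be fixed
xfs_repair -n /dev/xxx

# then run for real once the report looks sane
xfs_repair /dev/xxx
```

The -n pass only reports problems without writing to the device, so it is a cheap way to gauge the damage before committing to a repair.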

You can always get the mkfs parameters out of the filesystem by running xfs_info on the mounted filesystem. If it will not mount, you can dump the superblock, which contains the same information:

xfs_db -r /dev/xxx
sb 0
p
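The same superblock dump can be done non-interactively by passing the commands with -c (again, /dev/xxx stands in for the real device):

```shell
# -r opens the device read-only; each -c runs one xfs_db command in order
xfs_db -r -c 'sb 0' -c 'p' /dev/xxx
```

This is handy for capturing the output to a file or pasting it into a mail.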

Steve

