
Re: unable to use xfs_repair

To: Sidik Isani <lksi@xxxxxxxxxxxxxxx>, linux-xfs@xxxxxxxxxxx
Subject: Re: unable to use xfs_repair
From: Seth Mos <knuffie@xxxxxxxxx>
Date: Fri, 04 Oct 2002 09:04:33 +0200
In-reply-to: <20021003170436.A11748@xxxxxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
At 17:04 3-10-2002 -1000, Sidik Isani wrote:
  Oct  3 16:32:10 pelmo kernel: Out of Memory: Killed process 806 (xfs_repair).

  Hello again -

  We have a bit of a problem: we need very large filesystems
  (more than 1 TB) but no need to scale the memory in the machines
  the same way ... except when you need to run xfs_repair on the
  filesystem, it seems!

That isn't right. Can you run it under strace and capture the top and bottom of the output?
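
For example, something like this would save the trace to a file so that the start and the end survive even if the process gets killed again (the device path is just a placeholder for your filesystem):

   strace -f -o /tmp/xfs_repair.trace xfs_repair /dev/sdXN
   head -n 50 /tmp/xfs_repair.trace
   tail -n 50 /tmp/xfs_repair.trace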

  I picked up a recent version of xfs_repair (2.3.3 that I got out of
  CVS a few days ago) and it consumes all of 1 GB and never finishes
  repairing.  I can't add more swap space in a *file*, so ... well,
  this is a bit awkward.  Is it normal for xfs_repair to consume that
  much memory, and can anything be done about it?  Is there something
  strange about my filesystem that could be causing xfs_repair to leak?

There have been multiple fixes to memory usage in both the recovery code and the xfs_repair utility. I have never seen this happen before.

The other developers might be able to make sense of an strace log.

  Ok, I scrounged some other partitions and converted them into swap,
  but if this is normal I guess we should consider splitting the
  filesystem in the future to avoid gridlock.  Don't make an FS that
  is more than 1000 times available RAM?  It seems nicer if we can
  avoid that; what do you think the practical limitations are?

There are a number of users out there with really large partitions who don't see this. How much RAM does the machine actually have?

The biggest I have is a 150 GB partition with 256 MB of RAM.

Cheers

--
Seth
It might just be your lucky day, if you only knew.

