On Fri, Dec 27, 2013 at 09:07:22AM +0100, Arkadiusz Miśkiewicz wrote:
> On Friday 27 of December 2013, Jeff Liu wrote:
> > On 12/27/2013 14:48, Stor?? wrote:
> > > Hey:
> > >
> > > 20T xfs file system
> > >
> > >
> > >
> > > /usr/sbin/xfs_check: line 28: 14447 Killed
> > > xfs_db$DBOPTS -i -p xfs_check -c "check$OPTS" $1
> > xfs_check is deprecated; please use xfs_repair -n instead.
> > The following backtraces show that your system ran out of
> > memory while executing xfs_check, so the snmp daemon and xfs_db were killed.
> This reminds me a question...
> Could xfs_repair store its temporary data (some of that data, the biggest
> part) on disk instead of in memory?
Where on disk? We can't write to the disk until we've verified all
the free space is really free space, and guess what uses all the
memory? Besides, if the information is not being referenced
regularly (and it usually isn't), then swap space is about as
efficient as any database we might come up with...
> I don't know if that would make sense, so I'm asking. I'm not sure whether
> xfs_repair needs to access that data frequently (in which case on disk makes
> no sense) or only for iteration in some later phase (in which case on disk
> should work).
> Anyway, memory usage of xfs_repair has always been a problem for me (e.g.
> 16GB was not enough for a 7TB fs due to the huge number of files being
> stored). With parallel repair it's obviously even worse.
Yes, your problem is that the filesystem you are checking contains
40+GB of metadata and a large amount of that needs to be kept in
memory from phase 3 through to phase 6. If you really want to add
some kind of database interface to store this information somewhere
else, then I'll review the patches. ;)
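For anyone following along, the dry-run check suggested above, and the knobs
xfs_repair already provides for constraining memory use, look like this
(/dev/sdX is a placeholder for your actual device; the filesystem must be
unmounted first):

```shell
# Read-only check (replacement for the deprecated xfs_check);
# scans the filesystem and reports problems without modifying it.
xfs_repair -n /dev/sdX

# Cap xfs_repair's memory usage at roughly 4GB (-m takes megabytes)
# and disable inode prefetching (-P), which also reduces memory
# pressure at the cost of a slower run.
xfs_repair -m 4096 -P /dev/sdX
```

Neither option moves the phase 3-6 metadata tracking out of memory, which is
the database-on-disk idea discussed above; they just bound how much xfs_repair
tries to use.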