
To: stan@xxxxxxxxxxxxxxxxx
Subject: Re: [xfs_check Out of memory: ]
From: Arkadiusz Miśkiewicz <arekm@xxxxxxxx>
Date: Sun, 29 Dec 2013 00:39:16 +0100
Cc: Dave Chinner <david@xxxxxxxxxxxxx>, "Stor??" <289471341@xxxxxx>, Jeff Liu <jeff.liu@xxxxxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <52BF0295.9040301@xxxxxxxxxxxxxxxxx>
References: <tencent_3F12563342ED1D4E049D1123@xxxxxx> <201312280020.39244.arekm@xxxxxxxx> <52BF0295.9040301@xxxxxxxxxxxxxxxxx>
User-agent: KMail/1.13.7 (Linux/3.12.6-dirty; KDE/4.12.0; x86_64; ; )

On Saturday 28 of December 2013, Stan Hoeppner wrote:
> On 12/27/2013 5:20 PM, Arkadiusz Miśkiewicz wrote:
> ...
> 
> > - can't add more RAM easily, machine is at remote location, uses obsolete
> > DDR2, have no more ram slots and so on
> 
> ...
> 
> > So looks like my future backup servers will need to have 64GB, 128GB or
> > maybe even more ram that will be there only for xfs_repair usage. That's
> > gigantic waste of resources. And there are modern processors that don't
> > work with more than 32GB of ram - like "Intel Xeon E3-1220v2" (
> > http://tnij.org/tkqas9e ). So adding ram means replacing CPU, likely
> > replacing mainboard. Fun :)
> 
> ..
> 
> > IMO ram usage is a real problem for xfs_repair and there has to be some
> > upstream solution other than "buy more" (and waste more) approach.
> 
> The problem isn't xfs_repair.  

This problem is fully solvable on the xfs_repair side (provided disk space 
outside the broken XFS filesystem is available).

> The problem is that you expect this tool
> to handle an infinite number of inodes while using a finite amount of
> memory, or at least somewhat less memory than you have installed.  We
> don't see your problem reported very often which seems to indicate your
> situation is a corner case, or that others simply

It's not common, but it does happen from time to time, judging by the 
questions on #xfs.

> size their systems
> properly without complaint.

I guess having millions of tiny files (a few kB each) is simply not a common 
workload, rather than everyone else "properly sizing their systems".

> If you'd actually like advice on how to solve this, today, with
> realistic solutions, in lieu of the devs recoding xfs_repair for the
> single goal of using less memory, then here are your options:
> 
> 1.  Rewrite or redo your workload to not create so many small files,
>     so many inodes, i.e. use a database

It's a backup copy that needs to be directly accessible (so that, for 
example, you could run production directly from the backup server). That 
solution won't work.

> 2.  Add more RAM to the system

> 3.  Add an SSD of sufficient size/speed for swap duty to handle
>     xfs_repair requirements for filesystems with arbitrarily high
>     inode counts

That would work... if the server were locally accessible.

Right now my working "solution" is:
- add 40GB of swap space (see the commands below)
- stop all other services
- run xfs_repair, leave it for 1-2 days
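
For reference, the swap step is nothing exotic; something like this (the 
path is just an example, and I use dd rather than fallocate because swap 
files must not contain holes):

  dd if=/dev/zero of=/var/swapfile bs=1M count=40960
  chmod 600 /var/swapfile
  mkswap /var/swapfile
  swapon /var/swapfile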

Adding an SSD seems to be my only long-term option.

> The fact that the systems are remote, that you have no more DIMM slots,
> are not good arguments for you to make in this context.  Every system
> will require some type of hardware addition/replacement/maintenance.
> And this is not the first software "problem" that requires more hardware
> to solve.  If your application that creates these millions of files
> needed twice as much RAM, forcing an upgrade, would you be complaining
> this way on their mailing list?

If that application could do its job without requiring twice the RAM, then 
surely I would write to their mailing list about it.

> If so I'd suggest the problem lay
> somewhere other than xfs_repair and that application.

IMO this problem could be solved on the xfs_repair side, but well... someone 
would have to write the patches, and that's unlikely to happen.

So now the more important question: how does one actually estimate these 
requirements? Example: a 10TB XFS filesystem on a web server, completely 
filled with files of about 10kB each (HTML pages, images, etc.). How much 
RAM would my server need for the repair to succeed?
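
My own back-of-the-envelope, with the per-inode overhead left as an unknown 
B (that per-inode figure is exactly the part I'm missing):

  # 10 TB of 10 kB files is roughly a billion inodes:
  $ echo $(( 10 * 10**12 / (10 * 10**3) ))
  1000000000

so repair memory should be roughly inode_count * B. If B were, say, 100 
bytes per inode (a pure guess on my part, not a measured figure), that 
alone would be ~100 GB.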

-- 
Arkadiusz Miśkiewicz, arekm / maven.pl
