On Sunday 29 of December 2013, Stan Hoeppner wrote:
> > On 12/28/2013 5:39 PM, Arkadiusz Miśkiewicz wrote:
> > On Saturday 28 of December 2013, Stan Hoeppner wrote:
> >> On 12/27/2013 5:20 PM, Arkadiusz Miśkiewicz wrote:
> > It's a backup copy that needs to be directly accessible (so you could run
> > production directly from backup server for example). That solution won't
> > work.
>
> So it's an rsnapshot server and you have many millions of hardlinks.
Something like that (initially it was just a copy of a few other servers, but
now hardlinks are also in use).
> The obvious solution here is to simply use a greater number of smaller
> XFS filesystems with fewer hardlinks in each. This is by far the best
> way to avoid the xfs_repair memory consumption issue due to massive
> inode count.
> You might even be able to accomplish this using sparse files. This
> would preclude the need to repartition your storage for more
> filesystems, and would allow better utilization of your storage. Dave
> is the sparse filesystem expert so I'll defer to him on whether this is
> possible, or applicable to your workload.
I'll go the SSD way, since making things more complicated just for xfs_repair
isn't sane.
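That said, for the archives, I guess the sparse file variant would look
roughly like this (untested, paths and sizes made up):

$ truncate -s 7T /srv/backup1.img     # sparse file, no space allocated up front
$ mkfs.xfs /srv/backup1.img
$ mount -o loop /srv/backup1.img /backup1

Each image is then a separate filesystem, so xfs_repair only ever has to walk
a fraction of the inodes at a time - but a given hardlink tree still has to
stay inside a single image.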
[...]
> > Adding SSD is my only long term option it seems.
>
> It's not a perfect solution by any means, and the SSD you choose matters
> greatly, which is why I recommended the Samsung 840 Pro. More RAM is the
> best option with your current setup, but is not available for your
> system. Using more filesystems with fewer inodes in each is by far the
> best option, WRT xfs_repair and limited memory.
The server is over 30TB but I used 7TB partitions. Unfortunately it's not
possible to go lower than that, since the hardlinks need to be on the same
filesystem etc.
[...]
> > So now a more important question: how to actually estimate these things?
> > Example: a 10TB XFS filesystem fully written with files, 10kB each (HTML
> > pages, images etc.) - a web server. How much RAM would my server need for
> > the repair to succeed?
>
> One method is to simply ask xfs_repair how much memory it needs to
> repair the filesystem. Usage:
Assume I'm planning a new server and need to figure that out without actually
having the hardware or the filesystem. How do I estimate this?
If there is a way, I'll gladly describe it and add it to the XFS FAQ.
The xfs_repair estimate doesn't work either - see below.
> $ umount /mount/point
> $ xfs_repair -n -m 1 -vv /mount/point
> $ mount /mount/point
>
> e.g.
>
> $ umount /dev/sda7
> $ xfs_repair -n -m 1 -vv /dev/sda7
> Phase 1 - find and verify superblock...
> - max_mem = 1024, icount = 85440, imem = 333, dblock =
> 24414775, dmem = 11921
> Required memory for repair is greater that the maximum specified with
> the -m option. Please increase it to at least 60.
> $ mount /dev/sda7
Phase 1 - find and verify superblock...
- max_mem = 1024, icount = 124489792, imem = 486288, dblock =
1953509376, dmem = 953862
Required memory for repair is greater that the maximum specified
with the -m option. Please increase it to at least 1455.
So the minimum is 1.5GB, but the real usage was nowhere near that estimate -
xfs_repair needed somewhere around 30-40GB for this fs.
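FWIW, from the two outputs above it looks like imem is about 4 bytes per inode
and dmem about half a byte per data block (both reported in KiB), so for a
planned filesystem one could guess xfs_repair's own lower bound with something
like this (rough back-of-envelope only; it clearly says nothing about the real
30-40GB peak):

$ icount=124489792    # expected inode count
$ dblock=1953509376   # expected data block count
$ echo $(( (icount * 4 + dblock / 2) / 1024 / 1024 )) MB

That gives ~1406MB here, while xfs_repair itself asked for 1455, so it
apparently adds some overhead on top.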
So 2x64GB SSDs (RAID1) for swap should be OK for now, but in the long term
2x128GB seems to be the way to go.
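In case it's useful to anyone else, what I have in mind is just an md RAID1
used as swap, something along these lines (device names are only an example):

$ mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
$ mkswap /dev/md3
$ swapon -p 10 /dev/md3   # higher priority than any disk-backed swap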
--
Arkadiusz Miśkiewicz, arekm / maven.pl