On 2/19/12 7:16 AM, Michael Monnerie wrote:
> On Friday, 17 February 2012, 18:17:46, Eric Sandeen wrote:
> I tried that, and it said "use 434":
That's megabytes, FWIW.
> xfs_repair -n -vv -m 1 /dev/mapper/vg_orion-lv_orion_data
> Phase 1 - find and verify superblock...
> - max_mem = 1024, icount = 339648, imem = 1326, dblock =
> 805304256, dmem = 393214
> Required memory for repair is greater that the maximum specified with
> the -m option. Please increase it to at least 434.
> But when I tried with
> # xfs_repair -n -vv -m 434 /dev/mapper/vg_orion-lv_orion_data
> it said the same again. It only worked with 435:
> # xfs_repair -n -vv -m 435 /dev/mapper/vg_orion-lv_orion_data
> (is that what you call an off-by-1 error?)
Yep - not too serious, I guess, but still worth fixing. The -m value
is only used to try to enforce the bare minimum; in reality you'd
want more memory than that.
> Maybe that has been fixed already? This is
> # xfs_repair -V
> xfs_repair Version 3.1.6
> BTW, this XFS is 3219644160 KB (3.2TB), 2.9TB used, has (df -i) 325364
> inodes used, 293884 files in 31643 dirs. It seems mem usage primarily
> comes from inodes, not from the size of the filesystem.
	_("        - max_mem = %lu, icount = %" PRIu64 ", imem = %" PRIu64
	  ", dblock = %" PRIu64 ", dmem = %" PRIu64 "\n"),
		mp->m_sb.sb_icount >> (10 - 2),
		mp->m_sb.sb_dblocks >> (10 + 1));
so yes, inodes in use count for more in the approximation - those
shifts work out to about 4 bytes per inode versus half a byte per
data block.