On Fri, Jul 25, 2014 at 08:10:38AM +1000, Dave Chinner wrote:
> On Thu, Jul 24, 2014 at 10:22:51AM -0400, Brian Foster wrote:
> > Inodes are always allocated in chunks of 64 and thus the loop in
> > xfs_inobt_insert() is unnecessary.
> I don't believe this is true. The number of inodes allocated at once
> is:
>
> 	mp->m_ialloc_inos = (int)MAX((__uint16_t)XFS_INODES_PER_CHUNK,
> 					sbp->sb_inopblock);
So I was effectively going on the assumption that the number of inodes
per block will never be larger than 8 (v5) due to a max block size of 4k.
> So when the block size is, say, 64k, the number of 512 byte inodes
> allocated at once is 128. i.e. 2 chunks. Hence xfs_inobt_insert()
> can be called with an inode count of > 64 and therefore the loop is
> still necessary...
Playing with mkfs I see that we actually can format >4k bsize
filesystems; the min and max are set at 512b and 64k. I can't actually
mount such filesystems due to the page size limitation. FWIW, the
default log size parameters appear to be broken for bsize >= 32k as
well, so I wonder how often that format actually occurs.
What's the situation with regard to >PAGE_SIZE block size support? Is
this something we could actually support today? Do we know of any large
page size arches that could push us into this territory even with the
current page size limitation?
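For reference, the mount-time limitation in question boils down to a block size vs. page size comparison; a hedged sketch of such a check (the function name, message, and hardcoded PAGE_SIZE are illustrative, not the actual kernel code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE	4096	/* typical x86_64; some arm64/ppc64 configs use 64k */

/*
 * Illustrative stand-in for the superblock validation that rejects
 * filesystems whose block size exceeds the kernel page size.
 */
static bool blocksize_mountable(unsigned int sb_blocksize)
{
	if (sb_blocksize > PAGE_SIZE) {
		fprintf(stderr,
			"blocksize %u > page size %u: cannot mount\n",
			sb_blocksize, (unsigned int)PAGE_SIZE);
		return false;
	}
	return true;
}
```

On a 64k page size arch the same check would pass for a 64k bsize filesystem, which is what makes the question above more than theoretical.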
> And, indeed, we might want to increase the allocation size in future
> to do entire stripe units or stripe widths of inodes at once:
> This also means a loop would be required -somewhere-...
Indeed, though I'm less inclined to keep this around for the purposes of
this unimplemented feature. It should be easy enough to add the loop in
the appropriate place according to the code at the time this is
implemented.
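For the record, the loop being discussed amounts to inserting one inobt record per 64-inode chunk across the newly allocated range; a rough sketch, where the record struct and insert_rec() are hypothetical stand-ins for the btree calls xfs_inobt_insert() actually makes:

```c
#include <assert.h>
#include <stdint.h>

#define XFS_INODES_PER_CHUNK	64

/* Simplified inobt record: chunk start inode plus a free-inode mask. */
struct inobt_rec {
	uint32_t startino;	/* first inode number in the chunk */
	uint64_t free;		/* bitmap of free inodes; all free here */
};

static int nrecs;		/* counts "inserted" records for illustration */

static void insert_rec(struct inobt_rec rec)
{
	(void)rec;
	nrecs++;
}

/* Insert one record per chunk covering [newino, newino + newlen). */
static void inobt_insert(uint32_t newino, uint32_t newlen)
{
	uint32_t thisino;

	for (thisino = newino;
	     thisino < newino + newlen;
	     thisino += XFS_INODES_PER_CHUNK) {
		struct inobt_rec rec = {
			.startino = thisino,
			.free = ~0ULL,
		};
		insert_rec(rec);
	}
}
```

With a 64k block size and 512-byte inodes, newlen is 128 and the loop runs twice; with the common 4k/512 geometry it runs once, which is the case the patch optimized for.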
I suppose if we have >4k page size arches that use block sizes outside
of the 256b-4k range, that's enough to justify the existence of the loop
in the general sense. I just might have to factor this area of code a
bit differently. It would also be nice if there were a means to
> Dave Chinner