> I think I've found the problem.
> It's not an xfs bug, it's a bug in libgdbm's gdbm_open:
> dbf->header->dir_size = 8 * sizeof (off_t);
> while (dbf->header->dir_size < dbf->header->block_size)
>   {
>     dbf->header->dir_size <<= 1;
>     dbf->header->dir_bits += 1;
>   }
>
> /* Check for correct block_size. */
> if (dbf->header->dir_size != dbf->header->block_size)
>   {
>     gdbm_close (dbf);
>     gdbm_errno = GDBM_BLOCK_SIZE_ERROR;
>     return NULL;
>   }
> The initial dir_size is 8*4 = 32 (sizeof(off_t) is 4 here).
> dbf->header->block_size is the I/O block size returned by fstat.
> On my partition I use swidth=384 => block_size = 196608 = 3 * 65536.
> But in the code above dir_size can only take values of the form
> 32 * 2^n (it is doubled with dir_size <<= 1), so in my case it will
> never equal block_size :(
> I have to report this to libgdbm developers.
OK, good to hear. I suspect they may question the block_size being
returned, but since this is supposed to be the optimal size for disk
I/O and not an actual filesystem block size, you are correct here.
By the way, you are going to get some really big and inefficient dbm
files on this filesystem. xfs used to default to reporting 64K, and
people complained about the time taken to rebuild an rpm database;
changing the default to 4K, which is more in tune with reality in the
Linux implementation, fixed this. You might want to suggest that they
put some sort of cap on the block size they use rather than blindly
following the value reported by the kernel.