On Tue, 17 Jul 2001, Steve Lord wrote:
>
> Could you possibly try the CVS tree? Linus was still working deadlocks out
> of the memory allocation/reclaim end of things up until 2.4.7-pre2. XFS
> and ext2 will almost certainly push things in different directions.
OK, I'll try it.
Right, it has now been running longer than ever before without a lockup.
However, the performance is very bad, but that might just be caused by the
RAID resync I am running at the same time. I'll get back to this after the
resync is done (or the machine has crashed).
> Another issue here is that you may actually be creating inode numbers with
> greater than 32 bits in them with a filesystem this size. If you run
> xfs_growfs -n on the mount point of the filesystem and then run the
> attached perl script with the following arguments, it will tell you how
> many bits your inode numbers can consume.
<snip>
> You can play with the numbers to make the number of bits <= 32; increasing
> the inode size is the thing which will do it for you. Also, if you did not
> end up with 4GB allocation groups, you should try to set them up that way.
> Unfortunately, fixing this means running mkfs again.
>
> I do have some plans to make this issue go away for large filesystems, but
> you beat me to it!
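If I understand the inode number layout right, the bit count works out to
the AG-number bits plus log2(agsize in blocks) plus log2(inodes per block).
Just to make sure I had it straight, I sketched the arithmetic like this
(only a rough illustration in Python, not your snipped perl script; the
geometry values come from the xfs_growfs -n output you mention):

    # Rough estimate of how many bits an XFS inode number needs,
    # assuming ino = (agno << (agblklog + inopblog)) | agino.
    def xfs_ino_bits(agcount, agsize_blocks, blocksize, inodesize):
        inodes_per_block = blocksize // inodesize
        agno_bits = (agcount - 1).bit_length()            # allocation group number
        agblk_bits = (agsize_blocks - 1).bit_length()     # block offset within the AG
        inode_bits = (inodes_per_block - 1).bit_length()  # inode slot within the block
        return agno_bits + agblk_bits + inode_bits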
My inode numbers came out at exactly 32 bits and I already had 4GB
allocation groups, but I still recreated the file system (no problem, since
I'm still only testing the software and hardware):
# /sbin/mkfs.xfs -f -Lfs -dagsize=4g -isize=512 /dev/md0
meta-data=/dev/md0               isize=512    agcount=254, agsize=1048576 blks
data     =                       bsize=4096   blocks=265939776, imaxpct=25
         =                       sunit=1      swidth=6 blks, unwritten=0
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=32463
realtime =none                   extsz=24576  blocks=0, rtextents=0
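Plugging those numbers into the sketch above (agcount=254 -> 8 bits,
agsize=1048576 blocks -> 20 bits, 4096/512 = 8 inodes per block -> 3 bits)
gives 31 bits, so the inode numbers should now fit in 32 bits with a bit to
spare, if I have the layout right.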
Thanks for the advice!
- Jani