On Tue, Jul 29, 2003 at 06:31:03PM -0700, Ravi Wijayaratne wrote:
> Hi,
>
> We are seeing a 15% performance drop when we move from XFS 1.1 to 1.2.
> Here are some of our particulars:
> We have been testing the performance of Linux-2.4.19 and XFS 1.2 with
> NetBench.
> We compared the performance with that of Linux-2.4.18 and XFS 1.1.
> We have been running Samba 3.0.
>
>
> The following are the configuration settings:
>
> RAID config 0 or 5: 4 drives with a chunk size of 64k
>
> xfs_info.sh /hd/vol_mnt0/
> meta-data=/hd/vol_mnt0      isize=2048   agcount=74, agsize=1048560 blks
> data     =                  bsize=4096   blocks=76644352, imaxpct=25
>          =                  sunit=16     swidth=16 blks, unwritten=0
> naming   =version 2         bsize=4096
> log      =internal          bsize=4096   blocks=9360, version=2
>          =                  sunit=16 blks
> realtime =none              extsz=65536  blocks=0, rtextents=0
>
> We are running software RAID 0 and 5 (md). The performance numbers were
> obtained after the RAID sync was completed.
>
> For the Linux 2.4.18 / XFS 1.1 case we used log version 1.
> For the Linux 2.4.19 / XFS 1.2 case we used log version 2.
>
> At mount time we changed the internal log buffer size (logbsize) from 64k to
> 256k, but that change did not show any difference.
>
Try changing one thing at a time to try to figure out what actually
causes the performance changes, i.e. is it the change from 2.4.18
to 2.4.19, or the 1.1 to 1.2 XFS code, or the use of v1 vs. v2
logs, or the use of larger iclogs, or the use of a large log
sunit, or... there are just too many variables here.
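For instance, keeping the 2.4.19 + XFS 1.2 combination fixed, you could
vary just one log parameter per run, something like this (assuming the
md array is /dev/md0 - adjust the device for your setup):

    # run A vs run B: only the log format differs
    mkfs.xfs -f -l version=1 /dev/md0              # v1 log
    mkfs.xfs -f -l version=2,sunit=128 /dev/md0    # v2 log, 64k stripe unit
                                                   # (sunit is in 512-byte units)

    # then, on the same filesystem, vary only the log buffer size at mount time
    # (v1 logs are limited to logbsize=32768; v2 logs go up to 262144)
    mount -t xfs -o logbsize=32768  /dev/md0 /hd/vol_mnt0
    mount -t xfs -o logbsize=262144 /dev/md0 /hd/vol_mnt0

That way each benchmark run differs from the previous one in a single
variable, and the regression can be pinned down.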
Posting the results of the benchmarks would be of use too.
thanks.
--
Nathan