
agsize and performance

To: xfs@xxxxxxxxxxx
Subject: agsize and performance
From: K T <mailkarthikt@xxxxxxxxx>
Date: Tue, 29 Oct 2013 18:10:20 -0400
Delivered-to: xfs@xxxxxxxxxxx
Hi,

I have a 1 TB SATA disk (WD1003FBYX) with XFS. In my tests, I preallocate a bunch of 10GB files and write data to the files one at a time. I have observed that the default mkfs settings (4 AGs) give very low throughput. When I reformat the disk with an agsize of 256MB (agcount=3726), I see better throughput. I thought that with a bigger agsize the files would be made of fewer extents and hence perform better (due to fewer entries in the extent map getting updated). But according to my tests, the opposite seems to be true. Can you please explain why this is the case? Am I missing something?
My test parameters:

mkfs.xfs -f /dev/sdbf1
mount  -o inode64 /dev/sdbf1 /mnt/test
fallocate -l 10G fname
dd if=/dev/zero of=fname bs=2M count=64 oflag=direct,sync conv=notrunc seek=0
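For repeatability, the steps above can be wrapped in a small script (just a sketch of my procedure; the device and mount point default to my setup, and `run_test` is a name I made up):

```shell
#!/bin/sh
# Sketch of the test loop above. Pass the mkfs geometry options
# (e.g. "-d agsize=256m") as $1, or nothing for the mkfs defaults.
DEV=${DEV:-/dev/sdbf1}
MNT=${MNT:-/mnt/test}
NFILES=${NFILES:-3}

run_test() {
    geometry=$1   # extra mkfs.xfs options; empty for defaults
    mkfs.xfs -f $geometry "$DEV"
    mount -o inode64 "$DEV" "$MNT"
    i=1
    while [ "$i" -le "$NFILES" ]; do
        # preallocate, then overwrite the first 128MB with direct+sync I/O
        fallocate -l 10G "$MNT/file.$i"
        dd if=/dev/zero of="$MNT/file.$i" bs=2M count=64 \
            oflag=direct,sync conv=notrunc seek=0
        i=$((i + 1))
    done
    umount "$MNT"
}
```

So `run_test` reproduces the default-geometry run and `run_test "-d agsize=256m"` the small-AG run.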

# uname -a
Linux gold 3.0.82-0.7-default #1 SMP Thu Jun 27 13:19:18 UTC 2013 (6efde93) x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/SuSE-release
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 3
# Intel(R) Xeon(R) CPU E5-2603 0 @ 1.80GHz


------- Tests with agsize of 256MB -----------
# mkfs.xfs -f /dev/sdbf1 -d agsize=256m
meta-data=/dev/sdbf1             isize=256    agcount=3726, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=244187136, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=65532, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
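(As a sanity check on the geometry above: the agcount is just the data size divided by the AG size, rounded up, using the numbers from the mkfs output.)

```shell
# agcount = ceil(data blocks / agsize in blocks); values taken from the
# mkfs.xfs output above (244187136 data blocks, agsize=65536 blks = 256 MiB)
blocks=244187136
agsize=65536
echo $(( (blocks + agsize - 1) / agsize ))   # prints 3726
```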

# mount  -o inode64 /dev/sdbf1 /mnt/test
# cd test
# ls
# fallocate -l 10g file.1
# xfs_bmap -p -v file.1 | wc -l
43
# dd if=/dev/zero of=file.1 bs=2M count=64 oflag=direct,sync conv=notrunc seek=0
64+0 records in
64+0 records out
134217728 bytes (134 MB) copied, 3.56155 s, 37.7 MB/s
(the first file write seems to be slow)

# fallocate -l 10g file.2
# dd if=/dev/zero of=file.2 bs=2M count=64 oflag=direct,sync conv=notrunc seek=0
64+0 records in
64+0 records out
134217728 bytes (134 MB) copied, 1.57496 s, 85.2 MB/s
# fallocate -l 10g file.3
# dd if=/dev/zero of=file.3 bs=2M count=64 oflag=direct,sync conv=notrunc seek=0
64+0 records in
64+0 records out
134217728 bytes (134 MB) copied, 1.56151 s, 86.0 MB/s

------- Tests with default mkfs parameters -----------
# cd ..
# umount test
# mkfs.xfs -f /dev/sdbf1
meta-data=/dev/sdbf1             isize=256    agcount=4, agsize=61047598 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=244190390, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=119233, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# mount  -o inode64 /dev/sdbf1 /mnt/test
# cd test
# fallocate -l 10g file.1
# xfs_bmap -p -v file.1 | wc -l
3
# xfs_bmap -p -v file.1
file.1:
 EXT: FILE-OFFSET      BLOCK-RANGE      AG AG-OFFSET           TOTAL FLAGS
   0: [0..20971519]:   96..20971615      0 (96..20971615)   20971520 10000

# dd if=/dev/zero of=file.1 bs=2M count=64 oflag=direct,sync conv=notrunc seek=0
64+0 records in
64+0 records out
134217728 bytes (134 MB) copied, 3.55862 s, 37.7 MB/s
# xfs_bmap -p -v file.1
file.1:
 EXT: FILE-OFFSET         BLOCK-RANGE      AG AG-OFFSET             TOTAL FLAGS
   0: [0..262143]:        96..262239        0 (96..262239)         262144 00000
   1: [262144..20971519]: 262240..20971615  0 (262240..20971615) 20709376 10000

# fallocate -l 10g file.2
# xfs_bmap -p -v file.2
file.2:
 EXT: FILE-OFFSET      BLOCK-RANGE        AG AG-OFFSET               TOTAL FLAGS
   0: [0..20971519]:   20971616..41943135  0 (20971616..41943135) 20971520 10000
# dd if=/dev/zero of=file.2 bs=2M count=64 oflag=direct,sync conv=notrunc seek=0
64+0 records in
64+0 records out
134217728 bytes (134 MB) copied, 3.56464 s, 37.7 MB/s
# xfs_bmap -p -v file.2
file.2:
 EXT: FILE-OFFSET         BLOCK-RANGE        AG AG-OFFSET               TOTAL FLAGS
   0: [0..262143]:        20971616..21233759  0 (20971616..21233759)   262144 00000
   1: [262144..20971519]: 21233760..41943135  0 (21233760..41943135) 20709376 10000

# fallocate -l 10g file.3
# xfs_bmap -p -v file.3
file.3:
 EXT: FILE-OFFSET      BLOCK-RANGE        AG AG-OFFSET               TOTAL FLAGS
   0: [0..20971519]:   41943136..62914655  0 (41943136..62914655) 20971520 10000
# dd if=/dev/zero of=file.3 bs=2M count=64 oflag=direct,sync conv=notrunc seek=0
64+0 records in
64+0 records out
134217728 bytes (134 MB) copied, 3.55932 s, 37.7 MB/s
# xfs_bmap -p -v file.3
file.3:
 EXT: FILE-OFFSET         BLOCK-RANGE        AG AG-OFFSET               TOTAL FLAGS
   0: [0..262143]:        41943136..42205279  0 (41943136..42205279)   262144 00000
   1: [262144..20971519]: 42205280..62914655  0 (42205280..62914655) 20709376 10000

Thanks,
Karthik