
Re: Pre-production questions

To: Steve Lord <lord@xxxxxxx>
Subject: Re: Pre-production questions
From: Joshua Baker-LePain <jlb17@xxxxxxxx>
Date: Tue, 27 Mar 2001 11:20:43 -0500 (EST)
Cc: Linux xfs mailing list <linux-xfs@xxxxxxxxxxx>
In-reply-to: <200103202025.f2KKP3s07381@jen.americas.sgi.com>
Reply-to: Linux xfs mailing list <linux-xfs@xxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
On Tue, 20 Mar 2001 at 2:25pm, Steve Lord wrote:

> p.s. Let us know if these numbers make things go faster!
>
OK.  ;)  The answer is: a bit faster, but not by much.

I verified with the vendor that the stripe size specified on the RAID
system is indeed in blocks per disk.  So, I upped the stripe size to 128
(512-byte) blocks (I'll be dealing mostly with large files), made two
filesystems, and formatted them as follows:

[jlb@philip jlb]$ sudo mkfs.xfs -f /dev/sdb1
meta-data=/dev/sdb1              isize=256    agcount=134, agsize=261319 blks
data     =                       bsize=4096   blocks=35016700, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=0
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=1200
realtime =none                   extsz=65536  blocks=0, rtextents=0
[jlb@philip jlb]$ sudo mkfs.xfs -f -d sunit=128,swidth=896 /dev/sdb2
meta-data=/dev/sdb2              isize=256    agcount=134, agsize=261328 blks
data     =                       bsize=4096   blocks=35016704, imaxpct=25
         =                       sunit=16     swidth=112 blks, unwritten=0
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=1200
realtime =none                   extsz=65536  blocks=0, rtextents=0
[jlb@philip /mnt]$ sudo mount -t xfs /dev/sdb1 /mnt/raid1
[jlb@philip /mnt]$ sudo mount -t xfs /dev/sdb2 /mnt/raid2
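
For anyone following along, here is the arithmetic behind those mkfs
numbers (just a sketch, assuming the 8-disk RAID5 layout described below):

# sunit/swidth derivation: 8 disks in RAID5 leaves 7 data disks
STRIPE=128                  # per-disk stripe size, in 512-byte blocks
DATA_DISKS=7                # 8 disks minus one disk's worth of parity
echo "sunit=$STRIPE swidth=$((STRIPE * DATA_DISKS))"
# -> sunit=128 swidth=896; mkfs.xfs reports them above in 4KB
#    filesystem blocks, hence the sunit=16 and swidth=112 blks.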

I benchmarked with bonnie++ and lmdd.  If you want me to run any more
tests, just let me know.  I'll still be playi^Wtesting for most of the
rest of this week.  It's going to be a shame to let the grad students at
this thing...

For the record, the RAID is attached to a Dell Precision 410 with one PII
450 and 512MB of RAM.  It's attached via an Initio INI-A100U2W card (Domex
branded).  I'm still running 2.4.2-XFS pulled out of CVS on 3/13.  For
those who don't recall the rest of this (week-old) thread, the RAID is
made up of 8 5400RPM 80GB Maxtor disks in RAID5 and appearing as a SCSI
disk to the host.

Here are the results (averaged over three runs):

[jlb@philip jlb]$ pwd
/mnt/raid1/jlb
[jlb@philip jlb]$ bonnie++ -r 512 -n 64
Version  1.01       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
philip.egr.duke. 1G  5058  98 30302  35 10498  16  4609  84 26341  22 133.8   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 64  1299  89 88532  98  2185  90  1239  84 99707  98   782  44

[jlb@philip jlb]$ pwd
/mnt/raid2/jlb
[jlb@philip jlb]$ bonnie++ -r 512 -n 64
Version  1.01       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
philip.egr.duke. 1G  4962  97 32775  38  9669  15  4498  84 26304  21 134.6   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 64  1144  79 88914  96  2196  92  1029  70 101379 95   585  32
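
For reference, how I read those bonnie++ flags (per the bonnie++ 1.x
manual, as I understand it):

# -r 512 tells bonnie++ the box has 512MB of RAM; it uses a test file
#        of twice that size, which is the "1G" in the Size column.
# -n 64  runs the create/read/delete tests over 64*1024 small files.
bonnie++ -r 512 -n 64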

[jlb@philip jlb]$ pwd
/mnt/raid1/jlb
[jlb@philip jlb]$ lmdd if=internal of=tmp bs=32k count=25000 fsync=1
819.2000 MB in 31.2588 secs, 26.2070 MB/sec
[jlb@philip jlb]$ lmdd if=tmp of=internal
819.2000 MB in 32.1173 secs, 25.5065 MB/sec

[jlb@philip jlb]$ pwd
/mnt/raid2/jlb
[jlb@philip jlb]$ lmdd if=internal of=tmp bs=32k count=25000 fsync=1
819.2000 MB in 31.3305 secs, 26.1470 MB/sec
[jlb@philip jlb]$ lmdd if=tmp of=internal
819.2000 MB in 32.3775 secs, 25.3015 MB/sec
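
As a sanity check on the lmdd totals (just the arithmetic):

# 25000 writes of bs=32k (32*1024 bytes) each:
echo "$((25000 * 32 * 1024)) bytes"   # 819200000 bytes = 819.2 MB (10^6)
# Throughput = MB / elapsed seconds, e.g. 819.2 / 31.26 ~= 26.2 MB/sec.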

Please note that I am *not* complaining about these numbers.  I'll
take 26-30 MB/sec any day.  But if you have any more suggestions or want
me to do any other tests, let me know.  Thanks!

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
