
Re: xfs results question

To: Jano Lukac <jano@xxxxxxxxxxxxxxxx>, linux-xfs <linux-xfs@xxxxxxxxxxx>, linux-ide-arrays <linux-ide-arrays@xxxxxxxxxxxxxxxxx>
Subject: Re: xfs results question
From: Simon Matter <simon.matter@xxxxxxxxxxxxxxxx>
Date: Thu, 10 Jan 2002 08:29:51 +0100
Organization: Sauter AG, Basel
References: <33499.63.204.249.45.1010618787.squirrel@xxxxxxxxxxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
Jano Lukac schrieb:
> 
> Hi,
> 
> In a recent e-mail, you showed some results of using XFS on a software raid5

Hi,

Because this is RAID and XFS specific, I'm posting to the mailing lists too.

> setup:
> http://marc.theaimsgroup.com/?l=linux-xfs&m=101047412725459&w=2
> 
> The last test you approximated:
> XFS  on Software RAID5 w/o write caching,
>         logdev on SoftRAID1 on the same disks           : ~10 min
> 
> So how exactly did this setup look like?  soft raid5 on three of the disks,
> with the xfs log-dev on the fourth?  Or you stuck in an extra pair of disks

No, for two reasons:
- Waste of disk space since your log does not have to be big.
- I want redundancy on all devices, so I need at least two disks for the
log as well.

> with software raid1 and put the log-dev there?  What's confusing me is the
> "soft raid1 on the same disks."

I was confused by the results of my first tests too; that's why I took
the time to figure out what actually works well.
Here we go:

[root@client130 root]# fdisk -l /dev/sd[a-d]
 
Disk /dev/sda: 255 heads, 63 sectors, 1106 cylinders
Units = cylinders of 16065 * 512 bytes
 
   Device Boot    Start       End    Blocks   Id  System
/dev/sda1   *         1        13    104391   fd  Linux raid autodetect
/dev/sda2            14       141   1028160   fd  Linux raid autodetect
/dev/sda3           142      1106   7751362+  fd  Linux raid autodetect
 
Disk /dev/sdb: 255 heads, 63 sectors, 1106 cylinders
Units = cylinders of 16065 * 512 bytes
 
   Device Boot    Start       End    Blocks   Id  System
/dev/sdb1   *         1        13    104391   fd  Linux raid autodetect
/dev/sdb2            14       122    875542+  fd  Linux raid autodetect
/dev/sdb3           142      1106   7751362+  fd  Linux raid autodetect
/dev/sdb4           123       141    152617+  fd  Linux raid autodetect
 
Partition table entries are not in disk order
 
Disk /dev/sdc: 255 heads, 63 sectors, 1106 cylinders
Units = cylinders of 16065 * 512 bytes
 
   Device Boot    Start       End    Blocks   Id  System
/dev/sdc1   *         1        13    104391   fd  Linux raid autodetect
/dev/sdc2            14       141   1028160   fd  Linux raid autodetect
/dev/sdc3           142      1106   7751362+  fd  Linux raid autodetect
 
Disk /dev/sdd: 255 heads, 63 sectors, 1106 cylinders
Units = cylinders of 16065 * 512 bytes
 
   Device Boot    Start       End    Blocks   Id  System
/dev/sdd1   *         1        13    104391   fd  Linux raid autodetect
/dev/sdd2            14       122    875542+  fd  Linux raid autodetect
/dev/sdd3           142      1106   7751362+  fd  Linux raid autodetect
/dev/sdd4           123       141    152617+  fd  Linux raid autodetect
 
Partition table entries are not in disk order

[root@client130 root]# cat /etc/raidtab
raiddev             /dev/md1
raid-level                  1
nr-raid-disks               2
chunk-size                  64k
persistent-superblock       1
nr-spare-disks              0
    device          /dev/sda2
    raid-disk     0
    device          /dev/sdc2
    raid-disk     1
raiddev             /dev/md3
raid-level                  1
nr-raid-disks               3
chunk-size                  64k
persistent-superblock       1
nr-spare-disks              1
    device          /dev/sda1
    raid-disk     0
    device          /dev/sdb1
    raid-disk     1
    device          /dev/sdc1
    raid-disk     2
    device          /dev/sdd1
    spare-disk     0
raiddev             /dev/md0
raid-level                  5
nr-raid-disks               4
chunk-size                  64k
persistent-superblock       1
nr-spare-disks              0
    device          /dev/sda3
    raid-disk     0
    device          /dev/sdb3
    raid-disk     1
    device          /dev/sdc3
    raid-disk     2
    device          /dev/sdd3
    raid-disk     3
raiddev             /dev/md2
raid-level                  1
nr-raid-disks               2
chunk-size                  64k
persistent-superblock       1
nr-spare-disks              0
    device          /dev/sdb2
    raid-disk     0
    device          /dev/sdd2
    raid-disk     1
raiddev             /dev/md4
raid-level                  1
nr-raid-disks               2
chunk-size                  64k
persistent-superblock       1
nr-spare-disks              0
    device          /dev/sdb4
    raid-disk     0
    device          /dev/sdd4
    raid-disk     1

[root@client130 root]# cat /etc/fstab
/dev/md1                /                       xfs     defaults                  1 1
LABEL=/boot             /boot                   xfs     defaults                  1 2
none                    /dev/pts                devpts  gid=5,mode=620            0 0
LABEL=/home             /home                   xfs     defaults,logdev=/dev/md4  1 2
none                    /proc                   proc    defaults                  0 0
none                    /dev/shm                tmpfs   defaults                  0 0
/dev/md2                swap                    swap    defaults                  0 0
/dev/cdrom              /mnt/cdrom              iso9660 noauto,owner,kudzu,ro     0 0
/dev/fd0                /mnt/floppy             auto    noauto,owner,kudzu        0 0
ftp:/home/ftp/pub       /mnt/nfs                nfs     defaults                  0 0

[root@client130 root]# cat /proc/mdstat
Personalities : [raid1] [raid5]
read_ahead 1024 sectors
md1 : active raid1 sdc2[1] sda2[0]
      1028096 blocks [2/2] [UU]
 
md3 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      104320 blocks [3/3] [UUU]
 
md2 : active raid1 sdd2[1] sdb2[0]
      875456 blocks [2/2] [UU]
 
md0 : active raid5 sdd3[3] sdc3[2] sdb3[1] sda3[0]
      23253888 blocks level 5, 64k chunk, algorithm 0 [4/4] [UUUU]
 
md4 : active raid1 sdd4[1] sdb4[0]
      152512 blocks [2/2] [UU]
 
unused devices: <none>
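For reference, creating the /home filesystem with its log on the separate
RAID1 device would look roughly like this. A sketch only: the device names
match the raidtab above, but I'm assuming default log size and other mkfs
defaults.

```shell
# Create XFS on the RAID5 array with an external log on the small RAID1.
# (Sketch -- adjust to your own devices before running.)
mkfs.xfs -L /home -l logdev=/dev/md4 /dev/md0

# Mount it the same way fstab does, pointing XFS at the external log:
mount -t xfs -o logdev=/dev/md4 /dev/md0 /home
```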

This setup was only for test purposes, which is why it looks somewhat
ugly.

BTW, if you want to build software RAID10, make sure you do it like
this: build two mirrors, then stripe across them, NOT two stripes that
are then mirrored. Both setups work, but only the first gives you good
performance.

<--- /etc/raidtab
raiddev             /dev/md4
raid-level                  1
nr-raid-disks               2
chunk-size                  64k
persistent-superblock       1
#nr-spare-disks     0
    device          /dev/sda7
    raid-disk     0
    device          /dev/sdb7
    raid-disk     1
 
raiddev             /dev/md5
raid-level                  1
nr-raid-disks               2
chunk-size                  64k
persistent-superblock       1
#nr-spare-disks     0
    device          /dev/sdc7
    raid-disk     0
    device          /dev/sdd7
    raid-disk     1
 
raiddev             /dev/md6
raid-level                  0
nr-raid-disks               2
chunk-size                  64k
persistent-superblock       1
#nr-spare-disks     0
    device          /dev/md4
    raid-disk     0
    device          /dev/md5
    raid-disk     1
<--- EOF
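With raidtools (the toolset this raidtab belongs to), the arrays would be
initialized bottom-up, mirrors first, then the stripe on top of them. A
sketch, assuming the partitions already exist and /etc/raidtab contains
the definitions above:

```shell
# mkraid reads the array definitions from /etc/raidtab.
# Initialize the two mirrors first:
mkraid /dev/md4
mkraid /dev/md5
# Then the RAID0 stripe over the two mirrors:
mkraid /dev/md6
```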

Simon

> 
> I appreciate your taking the time to read this, look forward to a reply.
> Thanks!
> 
> Jano


