
Re: xfs on raid questions

To: Steve Lord <lord@xxxxxxx>
Subject: Re: xfs on raid questions
From: Chuck Campbell <campbell@xxxxxxxxxxxx>
Date: Thu, 17 Jun 2004 13:45:10 -0500
Cc: campbell@xxxxxxxxxxxx, linux-xfs@xxxxxxxxxxx
In-reply-to: <40D1D87D.9030208@xxxxxxx>
References: <20040617152353.GB2511@xxxxxxxxxxxxxxxx> <40D1D87D.9030208@xxxxxxx>
Reply-to: campbell@xxxxxxxxxxxx
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.4.1i
On Thu, Jun 17, 2004 at 12:44:29PM -0500, Steve Lord wrote:
> Chuck Campbell wrote:
> >
> >I have a new raid box, with 2 raid5 sets of 7 x 250Gb disks each.
> >I need to install it on an SGI (which I have right now, running 6.5.17),
> >then later move it to a linux server (which is being built in early July).
> >I need to do proof of concept testing now.  The docs indicate this should
> >not be a problem.
> >
> >I have set up the raid with a 64k stripe size (the device will see lots
> >of large files).  I assume the device will pass 128 x 512-byte sectors
> >(64kb) per transfer.
> >
> >I assume I need to use:
> >sunit = 128, which is 64kb/512b
> >swidth = 768, which is (#disks-1) * sunit = 6 * 128
> 
> Sounds about right from my memory.
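
For the record, the mkfs invocation I have in mind for the linux side is
something like this (the device name is just a placeholder, and the swidth
assumes 6 data disks per 7-disk raid5 set):

# mkfs.xfs -d sunit=128,swidth=768 /dev/sdX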

Currently I've got xfs filesystems on the sgi box over old raid5 with 64k
stripe size, but they look like this under Irix 6.5.17:

# xfs_growfs -n /npi07
meta-data=/npi07                 isize=256    agcount=336, agsize=261564 blks
data     =                       bsize=4096   blocks=87885312, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 1              bsize=4096  
log      =internal               bsize=4096   blocks=1000
realtime =none                   extsz=65536  blocks=0, rtextents=0
 
and like this under Irix 6.5.2:

# xfs_growfs -n /disk5
meta-data=/disk5                 isize=256    agcount=38, agsize=260683 blks
data     =                       bsize=4096   blocks=9905920, imaxpct=25
         =                       sunit=0      swidth=0 blks
log      =internal               bsize=4096   blocks=1000
realtime =none                   bsize=65536  blocks=0, rtextents=0

I'm assuming I'd get better performance on these raid setups as well if I
back up, build new filesystems with the proper alignment, and restore?

Is Bonnie a good tool to test this with before and after for comparison?
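
I was thinking of before/after runs along these lines (the size and paths
are just guesses; presumably it wants to be well past the cache size):

# bonnie -d /npi07/tmp -s 2048 -m before
# bonnie -d /npi07/tmp -s 2048 -m after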

> 
> >
> >Is this sane for aligned transfers, or am I misunderstanding something?
> >
> >I am totally guessing that I should use log section options of
> >size = 64m
> >sunit = 128 
> >
> 
> I do not think you will be able to find an Irix version which
> understands aligned logs. If you are going to do a one time
> transfer from Irix to Linux then it should be possible to
> fix up the superblock and use xfs_repair to reformat the
> log to get the stripe alignment setup there. Of course

If the linux server works well in the proof of concept, it will be a one
time, one-way move.  I had mostly hoped to avoid a dump to tape, mkfs,
restore from tape cycle, since tape is really SLOOOOOW and requires a LOT
of human intervention for a couple of Tb.
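
If it does come down to a rebuild, I suppose a direct pipe between the old
and new filesystems would at least cut the tape out of it, something like
(paths are placeholders):

# xfsdump -J - /npi07 | xfsrestore -J - /mnt/newfs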

> you need a clean fs before you do this - and note that
> Linux and Irix do not understand each other's logs - since
> the log is in machine byte order, xfs_repair -L will fix
> that for you.

So I will attach it to the linux box and then run xfs_repair once, right?
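
I.e., if I follow, something like (device name is a placeholder):

# xfs_repair -L /dev/sdX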

> 
> As for stripe aligned logs in general, I am not sure that
> going for a really large sunit there is a wise move.  Log
> writes are not very big in the first place; the main point
> of aligning the log writes was for software raid on linux,
> which does not like it if you do writes which are not
> a multiple of 4K, given the way xfs sets up the block
> device.
> 
> Since this is a hardware raid, you may as well stick
> with the original log format - especially since this is
> your only option on Irix.

OK, so I should just ignore the log section options and take the defaults?
Will that cost me performance when I get it on the linux box, or is that a
red herring?
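
I guess once it's on the linux box I can sanity-check the geometry the
same way as above (mount point is a placeholder):

# xfs_growfs -n /mnt/newfs

and make sure sunit/swidth come out as expected.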

thanks,
-chuck

