
Re: XFS - Performance Optimization.

To: "C.G.Senthilkumar." <senthilkumar.cheeta@xxxxxxxxx>
Subject: Re: XFS - Performance Optimization.
From: Steve Lord <lord@xxxxxxx>
Date: Wed, 25 Apr 2001 10:15:49 -0500
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: Message from "C.G.Senthilkumar." <senthilkumar.cheeta@wipro.com> of "Wed, 25 Apr 2001 19:35:33 +0530." <3AE6D9AD.ACF587C3@wipro.com>
Sender: owner-linux-xfs@xxxxxxxxxxx
> Hi,
>         I'm including the XFS filesystem in my Linux kernel. The kernel
> will be running a video application and hence handling huuuuugggge files,
> and I need a very high transfer rate. (I can hear you asking, why else
> would one use XFS?)

Actually no, XFS is useful for lots of applications; it is not just for
streaming data servers.

>         I understand that currently XFS on Linux supports a max. block
> size of 4KB and an extent size of up to 1GB.

Actually XFS supports a block size equal to the system page size right now
on Linux, so you are restricted to 4K on ia32 boxes. This will change in
the future, first to support block sizes of the system page size or less,
and possibly larger block sizes later.

The maximum extent size is governed by the size of the allocation groups
in the filesystem, which can be up to 4 Gbytes. There are some mkfs changes
in the pipeline which will make selecting the allocation group size a
little easier than it is now; for now there is nothing you can really do
about the maximum extent size it will allocate.
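About the closest you can get today is to steer the allocation group size
indirectly through the number of allocation groups - a sketch only, with
made-up sizes:

    # on an ~80 Gbyte volume, 20 allocation groups gives roughly 4 Gbyte AGs
    mkfs.xfs -d agcount=20 /dev/sda1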

> 
>         I'd like to know what is the optimal block size and extent size
> combination with which I can get the highest transfer rate. Any answers
> and leads to other sources of answers will be greatly appreciated.

OK, now to the crux of the matter: these mkfs options are not going to help
you achieve higher bandwidth. Your hardware is going to play the biggest
part in this, so it is really all a matter of budget. We have had a single
file doing streaming I/O at close to 100 Mbytes/sec on Linux, however, that
involved a fibre channel connected JBOD running lvm on 8 10000 rpm SCSI
drives. Going beyond this you need to look at multiple PCI buses with
multiple controllers and lots of fun stuff like that. But I suspect this
is getting beyond what you are looking for ;-)
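To put rough numbers on that configuration: 100 Mbytes/sec spread over 8
drives works out to only about 12.5 Mbytes/sec of sustained throughput per
drive, so the aggregate rate comes from striping across spindles rather
than from any single fast disk.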

So the first thing to do is work out what bandwidth you need, and then look
at the sustained transfer rates of various disks. SCSI is probably still
going to work better than IDE for a configuration involving several drives.
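As an illustration of that first step (the numbers are invented for the
example): an uncompressed 720x576, 16 bits per pixel, 25 frame/sec stream
needs about 720 * 576 * 2 * 25 = ~20.7 Mbytes/sec, while a compressed
stream at 25 Mbit/s needs only around 3 Mbytes/sec. Multiply by the number
of simultaneous streams and leave some headroom to get your target figure.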

You should probably build an lvm volume striped across several drives. The
best stripe width is going to depend on how the application does its I/O
and on how much bandwidth you need to squeeze out.
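For what it is worth, here is a sketch of such a setup with the lvm tools;
the device names, volume size and the 64K stripe size are placeholders,
not recommendations:

    # stripe a logical volume across 4 disks with a 64K stripe size
    lvcreate -i 4 -I 64 -L 100G -n video vg0

    # tell mkfs about the stripe geometry (sunit/swidth in 512-byte units:
    # 128 = 64K stripe unit, 512 = 4 stripes * 64K = 256K stripe width)
    mkfs.xfs -d sunit=128,swidth=512 /dev/vg0/video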

Steve

> 
>         Thanks for your time and efforts.
> 
> Warm Regards,
> C.G.Senthilkumar.


