
Re: Little questions

To: linux-xfs <linux-xfs@xxxxxxxxxxx>
Subject: Re: Little questions
From: yoros@xxxxxxxxxx
Date: Wed, 30 Oct 2002 06:11:02 +0100
In-reply-to: <1035932635.1088.43.camel@laptop.americas.sgi.com>
References: <20021027214706.GA5589@morpheus.matrix.com> <1035817834.18751.24.camel@jen.americas.sgi.com> <20021028225113.GA12476@morpheus.matrix.com> <20021029195735.GC5708@tapu.f00f.org> <20021029223018.GB18631@morpheus.matrix.com> <1035932635.1088.43.camel@laptop.americas.sgi.com>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.4i
On Tue, Oct 29, 2002 at 05:03:53PM -0600, Stephen Lord wrote:
> On Tue, 2002-10-29 at 16:30, yoros@xxxxxxxxxx wrote:
> > On Tue, Oct 29, 2002 at 11:57:35AM -0800, Chris Wedgwood wrote:
> > > On Mon, Oct 28, 2002 at 11:51:13PM +0100, yoros@xxxxxxxxxx wrote:
> > > 
> > > > Yes, I know that the files I'm deleting have a lot of extents, but
> > > > I also know that other filesystems are faster at deleting files.
> > > 
> > > It really depends.  For large files, XFS is *much* faster than
> > > anything else presently available for Linux.
> > 
> > When a file is very fragmented... ext2 only has to remove a few blocks
> > (inode, single-indirect, double-indirect, etc.). This is the classic
> > Tanenbaum design, and a lot of UNIX filesystems implemented these
> > methods in the past.
> > 
> 
> For every block in the file, ext2 needs to free that block and mark it
> in the bitmaps. For XFS the same is true: it needs to free each extent.
> One of the issues with a journalled filesystem is that we need to keep the
> filesystem consistent between transactions. The amount of work in
> removing a file is unbounded, but a transaction needs to have a bounded
> size (don't ask, it gets really complicated). What this means is
> that removing a file takes multiple transactions, and those end
> up causing disk I/O. 
> 
> In the case of ext2, if you crash between the remove and all the
> metadata getting flushed out to disk, you need to run fsck. In
> a journaled filesystem you do not. You are seeing one of the costs
> of a journaled filesystem.
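
The bounded-transaction point above can be sketched as a toy cost model:
ext2 clears one bit per block with no ordering guarantees, while an
extent-based journaled filesystem has to split the unbounded remove into
bounded transactions, each of which forces a log write. This is only an
illustration; the batch size is invented, not XFS's real transaction limit.

```python
# Toy model: cost of removing a heavily fragmented file.
# ext2-style: clear one bit per block in a free-space bitmap.
# Journaled, extent-based: free extents in bounded transactions,
# each transaction committing to the on-disk log.

EXTENTS = 40000          # extent count from the thread
EXTENTS_PER_TX = 100     # assumed transaction bound (illustrative only)

def ext2_remove(blocks, bitmap):
    """Clear the allocated bit for every block; no log, fsck after a crash."""
    for b in blocks:
        bitmap[b] = 0
    return 0                      # zero log writes

def journaled_remove(extents, extents_per_tx):
    """Free extents in bounded transactions; return the log writes incurred."""
    log_writes = 0
    for i in range(0, len(extents), extents_per_tx):
        batch = extents[i:i + extents_per_tx]
        # ... free each extent in `batch`, keeping metadata consistent ...
        log_writes += 1           # commit this transaction to the journal
    return log_writes

bitmap = [1] * EXTENTS
print(ext2_remove(range(EXTENTS), bitmap))                     # 0
print(journaled_remove(list(range(EXTENTS)), EXTENTS_PER_TX))  # 400
```

So the 40000-extent remove costs hundreds of log commits in this model,
each of them disk I/O, which is the journaling cost Steve describes.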

Yes, I understand that removing a file with more than 40000 extents
takes a LOT of time, and that is fair enough, because such a file has
too many extents. But what structure is created for each extent? That is
the question, because updating 40000 "bits" of a bitmap (as ext2 does)
is very quick.
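
As I understand it, each extent is a small record of (file offset, start
block, length), so one record can map many blocks, but freeing it means
updating free-space structures rather than flipping a bit. A rough sketch
of that idea (the field names and widths here are my reading of the XFS
on-disk extent record, not kernel code):

```python
# Sketch of an extent record: (startoff, startblock, blockcount).
# XFS packs this into a 128-bit record (roughly: 1 flag bit, 54-bit file
# offset, 52-bit start block, 21-bit block count); treat those widths as
# an assumption for illustration, not as authoritative decoding code.

from collections import namedtuple

Extent = namedtuple("Extent", ["startoff", "startblock", "blockcount"])

def blocks_covered(extents):
    """One record can cover many blocks: the payoff of extent mapping."""
    return sum(e.blockcount for e in extents)

ext = Extent(startoff=0, startblock=8192, blockcount=1024)
print(blocks_covered([ext]))   # 1024 blocks tracked by a single record
```

The flip side is exactly this thread: when a file degrades to one block
per extent, the per-extent bookkeeping dominates.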

I only want to know what is the way to get best performance with XFS.

Pedro

-- 
Pedro Martinez Juliá
\  yoros@xxxxxxxx
)|    yoros@xxxxxxxxxx
/        http://yoros.cjb.net
Socio HispaLinux #311
Usuario Linux #275438 - http://counter.li.org
GnuPG public information:  pub  1024D/74F1D3AC
Key fingerprint = 8431 7B47 D2B4 5A46 5F8E  534F 588B E285 74F1 D3AC

