
Re: Reducing memory requirements for high extent xfs files

To: David Chinner <dgc@xxxxxxx>
Subject: Re: Reducing memory requirements for high extent xfs files
From: Michael Nishimoto <miken@xxxxxxxxx>
Date: Fri, 22 Jun 2007 16:58:06 -0700
Cc: xfs@xxxxxxxxxxx
In-reply-to: <20070606234723.GC86004887@sgi.com>
References: <200705301649.l4UGnckA027406@oss.sgi.com> <20070530225516.GB85884050@sgi.com> <4665E276.9020406@agami.com> <20070606013601.GR86004887@sgi.com> <4666EC56.9000606@agami.com> <20070606234723.GC86004887@sgi.com>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Mail/News 1.5.0.4 (X11/20060629)


>> Also, should we consider a file with 1MB extents as
>> fragmented?  A 100GB file with 1MB extents has 100k extents.

> Yes, that's fragmented - it has 4 orders of magnitude more extents
> than optimal - and the extents are too small to allow reads or
> writes to achieve full bandwidth on high end raid configs....

Fair enough, so multiply those numbers by 100 -- a 10TB file with
100MB extents still has 100k extents.  It seems to me that we can
look at the negative effects of fragmentation in two ways here.
First, regardless of size, if a file has a large number of extents,
then it is too fragmented.  Second, if a file's extents are so small
that we can't get full bandwidth, then it is too fragmented.
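
To make that concrete, here's a quick back-of-the-envelope check
(just the sizes from this thread, nothing xfs-specific):

#include <stdio.h>

/*
 * Extent counts for the two cases above.  Both work out to ~100k
 * extents, so under the "too many extents" view the 10TB file is
 * just as fragmented as the 100GB one, even though its extents are
 * 100x larger.
 */
int main(void)
{
        unsigned long long mb = 1ULL << 20;
        unsigned long long gb = 1ULL << 30;
        unsigned long long tb = 1ULL << 40;

        printf("100GB file, 1MB extents:   %llu extents\n",
               100 * gb / (1 * mb));
        printf("10TB file,  100MB extents: %llu extents\n",
               10 * tb / (100 * mb));
        return 0;
}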

If the second case were the primary concern, then it would be
reasonable for a file to have thousands of extents, as long as each
extent were big enough to amortize disk latencies across a large
amount of data.
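
As a rough sketch of that amortization -- the 10ms seek and 100MB/s
per-drive streaming rate below are assumptions I'm making for
illustration, not measured numbers:

#include <stdio.h>

/*
 * Effective per-drive bandwidth when every extent costs a seek:
 *
 *     eff_bw = extent_size / (seek_time + extent_size / stream_bw)
 *
 * The 10ms seek and 100MB/s streaming rate are assumptions for
 * illustration only.
 */
int main(void)
{
        double seek = 0.010;            /* seconds, assumed   */
        double stream = 100e6;          /* bytes/sec, assumed */
        double sizes[] = { 1e6, 16e6, 100e6 };
        int i;

        for (i = 0; i < 3; i++) {
                double t = seek + sizes[i] / stream;
                printf("%3.0fMB extents: %5.1f MB/s (%2.0f%% of streaming)\n",
                       sizes[i] / 1e6, sizes[i] / t / 1e6,
                       100.0 * sizes[i] / t / stream);
        }
        return 0;
}

With those assumptions, 1MB extents reach only about half of
streaming bandwidth, while 100MB extents come within a few percent
of it.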

We've been assuming that a good write is one which can send 2MB of
data to a single drive; so with an 8+1 raid device, we need 16MB of
write data per full stripe to achieve high disk utilization.  More
generally, there are flexibility advantages if high extent count
files can still achieve good performance.
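
For what it's worth, the 16MB figure is just data disks times the
per-drive chunk; a minimal sketch, assuming a plain N+P (raid5-style)
layout:

#include <stdio.h>

/*
 * Smallest write that covers a full stripe on an N+P raid device,
 * i.e. that lets parity be computed without a read-modify-write.
 * With 8 data disks and a 2MB per-drive chunk this is 16MB.
 */
int main(void)
{
        unsigned int data_disks = 8;            /* 8+1 raid device */
        unsigned long long chunk = 2ULL << 20;  /* 2MB per drive   */

        printf("full stripe write: %lluMB\n",
               (data_disks * chunk) >> 20);
        return 0;
}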

  Michael


