| To: | Eric Sandeen <sandeen@xxxxxxx> |
|---|---|
| Subject: | Re: howto preallocate to minimize fragmentation |
| From: | Ying-Hung Chen <ying@xxxxxxxxxxxxxx> |
| Date: | Thu, 22 Sep 2005 22:56:54 +0800 |
| Cc: | linux-xfs@xxxxxxxxxxx |
| In-reply-to: | <4332C248.70503@xxxxxxx> |
| References: | <43329839.2070005@xxxxxxxxxxxxxx> <4332A22B.6070708@xxxxxxx> <4332BFCC.8050803@xxxxxxxxxxxxxx> <4332C248.70503@xxxxxxx> |
| Sender: | linux-xfs-bounce@xxxxxxxxxxx |
| User-agent: | Mozilla Thunderbird 1.0.6 (Windows/20050716) |
> pre-allocation before writing would still be your best bet. If you
> pre-allocate on a fresh fs before writing, you should get very large
> extents.

Does this mean that if I create a 2GB file via dd (not a sparse file), the file will stay in the same physical place when I overwrite it?

> Other things you could try; if you put each file in its own dir, it will
> tend to go into its own allocation group.
>
> You could make the filesystem with allocation groups sized at 2GB

I just thought of a wild idea... since I am creating 90 files, what if I just create 90 allocation groups via -d agcount=90? Does that make sense, or would it not work at all?

Thanks,
-Ying
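A minimal sketch of the options discussed above, using standard xfsprogs tools (mkfs.xfs, xfs_io). The device name /dev/sdb1, mount point /mnt/data, and file names are placeholders for illustration, not taken from the thread.

```sh
# Assumed device and mount point; substitute your own.
DEV=/dev/sdb1
MNT=/mnt/data

# Option 1: size each allocation group at 2GB so a 2GB file can fit in one AG.
mkfs.xfs -f -d agsize=2g $DEV

# Option 2: ask for one AG per file (90 files -> 90 AGs).
# mkfs.xfs -f -d agcount=90 $DEV

mount $DEV $MNT
mkdir $MNT/file01.d          # one directory per file tends to land in its own AG

# Preallocate the full 2GB up front so the extent map stays contiguous;
# resvsp reserves space (unwritten extents) without writing any data
# and without changing the reported file size.
xfs_io -f -c "resvsp 0 2g" $MNT/file01.d/file01

# Alternatively, write real blocks with dd (slower, but also non-sparse):
dd if=/dev/zero of=$MNT/file01.d/file01 bs=1M count=2048
```

Note that -d agcount=90 simply divides the device into 90 equal allocation groups, so whether each AG can actually hold a whole 2GB file depends on the device size; -d agsize=2g expresses that constraint directly.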