On Wed, Jan 25, 2012 at 10:34:54AM -0600, Eric Sandeen wrote:
> On 1/24/12 11:19 PM, Amit Sahrawat wrote:
> > In XFS we can write in parallel (i.e., we can make use of allocation
> > groups for the writing processes). If the files are kept in individual
> > directories, there is a possibility that the blocks for those files
> > will first be allocated from individual allocation groups. If I start 4
> > writing processes (cp 100MB_file /<dirnum>/), then after writing is
> > finished, checking the bmap shows that the initial allocation was from
> > individual allocation groups.
> > Even though ext4 also has groups, I am not able to get behavior
> > similar to XFS. If I check the file extents, the extents are in mixed
> > form and the allocation pattern is also very fragmented.
> > Please share more on this. Also, is there an exact test case to
> > check for parallel-write support?
> It seems that you are asking more about allocation policy than parallelism
> in general? With either filesystem, you could use preallocation to wind
> up with more contiguous files when you write them in parallel, though
> that requires some idea of the file size ahead of time.
> ext4 doesn't have that exact dir::group heuristic that xfs uses,
> but it does have other mechanisms and heuristics to try to get good
> file and directory layout.
XFS has different allocation policies depending on the mount options
used: inode32 (default for >1TB), inode64 (default for <1TB) and
filestreams. Each will give you a different layout for the same test,
depending on the size of your filesystem and the amount of free space
you have available in it.
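For illustration (device and mount point are hypothetical), the policies are selected at mount time:

```shell
# inode32: inodes are kept in the low AGs, data is spread across the rest
mount -o inode32 /dev/sdb1 /mnt/scratch

# inode64: inodes are allocated close to their parent directory's data
mount -o inode64 /dev/sdb1 /mnt/scratch

# filestreams: concurrent writers in separate directories are kept in
# separate AGs for as long as the streams remain active
mount -o filestreams /dev/sdb1 /mnt/scratch
```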
If XFS does what you want, then use it. There is no good reason to
try to make ext4 do everything XFS does, because it simply can't.
Especially when it comes to allocation strategies and policies...