
Re: multiple write stream performance

To: chatz@xxxxxxxxxxxxxxxxx
Subject: Re: multiple write stream performance
From: Ming Zhang <mingz@xxxxxxxxxxx>
Date: Thu, 04 May 2006 21:38:09 -0400
Cc: xfs <linux-xfs@xxxxxxxxxxx>
In-reply-to: <1146785522.3609.186.camel@localhost.localdomain>
References: <1146777335.3609.173.camel@localhost.localdomain> <445A8112.7050803@melbourne.sgi.com> <1146785522.3609.186.camel@localhost.localdomain>
Reply-to: mingz@xxxxxxxxxxx
Sender: linux-xfs-bounce@xxxxxxxxxxx
Hi David

Or let's put the fragmentation issue aside for now. How can I let multiple
write streams come in concurrently and still get the full speed potential,
avoiding seeks as much as possible?

Thanks!

Ming


On Thu, 2006-05-04 at 19:32 -0400, Ming Zhang wrote:
> On Fri, 2006-05-05 at 08:32 +1000, David Chatterton wrote:
> > Ming,
> > 
> > What are the I/O characteristics of the application? Typically I
> > have seen direct I/O for video data at reasonable sizes, and
> > smaller buffered I/O for audio data in media apps. In the
> > worst case they mix buffered and direct I/O to the same file. The
> > larger the I/O requests, the better in terms of reducing
> > fragmentation.
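
As a rough sketch of the large-request direct I/O described above: the
path and sizes below are made up, and bs would normally be matched to a
multiple of the RAID0 stripe width.

    # one ~10GB stream written with direct I/O in 16MB requests;
    # O_DIRECT bypasses the page cache, so each request hits the
    # array at the size the application issues
    dd if=/dev/zero of=/tmp/t/video1 bs=16M count=640 oflag=direct
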
> 
> I feel that what I actually want here is the fragmentation. I will have
> 10-20 large (~10GB) multimedia files being written to this RAID0 at the
> same time, and later a background program will dump them to tape, so I
> want the concurrent writes to go as fast as possible.
> 
> So if XFS allocates 0 ~ (16MB-512) to file1, 16MB ~ (32MB-512) to
> file2, ..., then when file1 through fileN are written concurrently, the
> disk heads have to move back and forth among these regions, and that
> leaves the poor performance I saw.
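
One way to check that guess (paths here are hypothetical) is to map two
of the concurrently written files and compare where their extents landed:

    # alternating or widely separated block ranges across the two files
    # would confirm the back-and-forth head movement described above;
    # -v also shows which allocation group each extent sits in
    xfs_bmap -v /tmp/t/file1
    xfs_bmap -v /tmp/t/file2
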
> 
> PS: what do you mean by DDN? What is its full name?
> 
> ming
> 
> 
> > 
> > Some applications take advantage of the preallocation APIs and
> > know that they are ingesting X GBs, and preallocate that space.
> > This may still be fragmented, but in most circumstances the
> > fragmentation is far less than without preallocation.
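
A minimal sketch of that kind of preallocation, using xfs_io as a wrapper
around the XFS space-reservation ioctl (XFS_IOC_RESVSP64); the path and
size are illustrative:

    # reserve ~10GB up front so the later writes land in a few large
    # contiguous extents rather than many small interleaved ones
    xfs_io -f -c "resvsp 0 10g" /tmp/t/video1
    # the reserved range is unwritten space: the file size stays 0
    # until data is actually written (or the file is truncated up)
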
> > 
> > Performance degrading with multiple writers is not unexpected
> > if they are jumping around a lot and there is limited cache on
> > the controller, etc. That is why, for customers with demanding
> > media workloads, we recommend storage like DDN that has very
> > large caches and can absorb lots of streams. But that costs
> > a lot more than a JBOD!
> > 
> > Coming soon, we will introduce to XFS on Linux a new mount option
> > that will put writers to files in different directories into
> > different allocation groups. If you only have one writer per
> > directory, then fragmentation in those files can be significantly
> > better, since the writers aren't fighting for space in the same
> > region of the filesystem. That will help here, but I'm not sure
> > it will solve your problem.
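
Assuming the allocator described above ends up exposed as a mount option
(it later shipped in XFS as "filestreams"), usage would look roughly like
this; the device and paths are made up:

    # keep each ingest stream in its own directory so it is steered
    # into its own allocation group (option name assumed)
    mount -o filestreams /dev/sdb1 /tmp/t
    mkdir /tmp/t/stream1 /tmp/t/stream2
    # writer 1 creates files only under /tmp/t/stream1, writer 2 only
    # under /tmp/t/stream2, and so on
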
> > 
> > Thanks,
> > 
> > David
> > 
> > 
> > Ming Zhang wrote:
> > > Hi, all
> > > 
> > > I have an 8*300GB disk RAID0 used to hold temporary large media
> > > files. Usually the application writes these ~10GB files to it
> > > sequentially.
> > > 
> > > Now I found that if I have one file being written to it, I can get
> > > about ~260MB/s, but if I have 4 concurrent file writes, I only get an
> > > aggregate 192MB/s, and with 16 concurrent writes the aggregate
> > > throughput drops to ~100MB/s.
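
For illustration, a test of this shape (paths and sizes are hypothetical;
each dd below is one buffered write stream):

    # 4 concurrent ~10GB buffered streams; compare the aggregate
    # throughput against a single-stream run
    for i in 1 2 3 4; do
        dd if=/dev/zero of=/tmp/t/stream$i bs=1M count=10240 &
    done
    wait
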
> > > 
> > > Does anybody know why I get such bad write performance? I guess it
> > > is because of seeking back and forth.
> > > 
> > > This shows that space is still allocated to each file in large
> > > chunks, which leads to seeks when writing to different files. But why
> > > can't XFS allocate the space better?
> > > 
> > > [root@dualxeon bonnie++-1.03a]# xfs_bmap /tmp/t/v8
> > > /tmp/t/v8:
> > >         0: [0..49279]: 336480..385759
> > >         1: [49280..192127]: 39321664..39464511
> > >         2: [192128..229887]: 39485504..39523263
> > >         3: [229888..267391]: 39571904..39609407
> > >         4: [267392..590207]: 52509888..52832703
> > >         5: [590208..620671]: 52847168..52877631
> > >         6: [620672..663807]: 91995584..92038719
> > >         7: [663808..677503]: 92098112..92111807
> > >         8: [677504..691327]: 92130624..92144447
> > > 
> > > Ming
> > > 
> > > 
> > 
> 

