Hi all,
I have an 8x300GB disk RAID0 array used to hold large temporary media
files. The application usually writes these ~10GB files to it
sequentially.
I've found that with a single file being written I get ~260MB/s, but
with 4 concurrent file writes I only get an aggregate 192MB/s, and with
16 concurrent writes the aggregate throughput drops to ~100MB/s.
Does anybody know why the write performance is so bad? My guess is that
it's caused by seeking back and forth between the files.
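For reference, a minimal sketch of the kind of concurrent-writer test I'm describing (file names, sizes, and the target directory here are placeholders, not my actual setup; the real test wrote ~10GB files to the RAID0 mount):

```python
# Launch N threads, each writing its own file sequentially in 1MB
# chunks, and report the aggregate throughput. This only sketches the
# access pattern; sizes are tiny here so it runs anywhere.
import os, tempfile, threading, time

N = 4                      # number of concurrent writers
CHUNK = 1024 * 1024        # 1MB sequential writes
CHUNKS_PER_FILE = 8        # real test: ~10GB per file
target_dir = tempfile.mkdtemp()  # stand-in for the RAID0 mount point

def writer(i):
    with open(os.path.join(target_dir, f"file{i}"), "wb") as f:
        for _ in range(CHUNKS_PER_FILE):
            f.write(b"\0" * CHUNK)

start = time.time()
threads = [threading.Thread(target=writer, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
total_mb = N * CHUNKS_PER_FILE
print(f"wrote {total_mb}MB in {elapsed:.2f}s")
```

On a real spinning-disk array, each writer's stream is sequential on its own, but the disk heads have to jump between the files' allocations, which is where I suspect the throughput loss comes from.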
The bmap below shows that space is still allocated to each file in
large chunks, which leads to seeks when writing different files
concurrently. But why can't XFS allocate the space better?
[root@dualxeon bonnie++-1.03a]# xfs_bmap /tmp/t/v8
/tmp/t/v8:
0: [0..49279]: 336480..385759
1: [49280..192127]: 39321664..39464511
2: [192128..229887]: 39485504..39523263
3: [229888..267391]: 39571904..39609407
4: [267392..590207]: 52509888..52832703
5: [590208..620671]: 52847168..52877631
6: [620672..663807]: 91995584..92038719
7: [663808..677503]: 92098112..92111807
8: [677504..691327]: 92130624..92144447
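To quantify the fragmentation, here's a quick sketch that takes the
extent block ranges from the xfs_bmap output above and measures the
jumps between consecutive extents; a large jump means a long seek when
the file is read or written straight through:

```python
# Extent block ranges copied from the xfs_bmap output for /tmp/t/v8.
extents = [
    (336480, 385759),
    (39321664, 39464511),
    (39485504, 39523263),
    (39571904, 39609407),
    (52509888, 52832703),
    (52847168, 52877631),
    (91995584, 92038719),
    (92098112, 92111807),
    (92130624, 92144447),
]

# Gap between the end of one extent and the start of the next.
gaps = [nxt[0] - cur[1] for cur, nxt in zip(extents, extents[1:])]
print(f"{len(extents)} extents; largest jump: {max(gaps)} blocks")
```

So the file is in 9 extents, and several of the jumps between them span
tens of millions of blocks, which on a single spindle set means long
seeks whenever the heads move between concurrently-written files.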
Ming