I created a 100 GB file with bonnie++ and then ran xfs_bmap on it; this is
the output:
EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL
 0: [0..8376063]: 2172649600..2181025663 259 (128..8376191) 8376064
 1: [8376064..16746623]: 2181038208..2189408767 260 (128..8370687) 8370560
 2: [16746624..25127551]: 2189426816..2197807743 261 (128..8381055) 8380928
 3: [25127552..33485183]: 2197815424..2206173055 262 (128..8357759) 8357632
 4: [33485184..41868415]: 2206204032..2214587263 263 (128..8383359) 8383232
 5: [41868416..50236543]: 2214592640..2222960767 264 (128..8368255) 8368128
 6: [50236544..58570367]: 2222981248..2231315071 265 (128..8333951) 8333824
 7: [58570368..66937983]: 2231369856..2239737471 266 (128..8367743) 8367616
 8: [66937984..75270911]: 2239758464..2248091391 267 (128..8333055) 8332928
 9: [75270912..83652863]: 2248147072..2256529023 268 (128..8382079) 8381952
10: [83652864..92038527]: 2256535680..2264921343 269 (128..8385791) 8385664
11: [92038528..100404095]: 2264924288..2273289855 270 (128..8365695) 8365568
12: [100404096..108774015]: 2273312896..2281682815 271 (128..8370047) 8369920
13: [108774016..117129983]: 2281701504..2290057471 272 (128..8356095) 8355968
14: [117129984..125485439]: 2290090112..2298445567 273 (128..8355583) 8355456
15: [125485440..133853183]: 2298478720..2306846463 274 (128..8367871) 8367744
16: [133853184..142209023]: 2306867328..2315223167 275 (128..8355967) 8355840
17: [142209024..150577279]: 2315255936..2323624191 276 (128..8368383) 8368256
18: [150577280..158957183]: 2323644544..2332024447 277 (128..8380031) 8379904
19: [158957184..167322495]: 2332033152..2340398463 278 (128..8365439) 8365312
20: [167322496..175695487]: 2340421760..2348794751 279 (128..8373119) 8372992
21: [175695488..184040575]: 2348810368..2357155455 280 (128..8345215) 8345088
22: [184040576..192397567]: 2357198976..2365555967 281 (128..8357119) 8356992
23: [192397568..200776959]: 2365587584..2373966975 282 (128..8379519) 8379392
24: [200776960..209165055]: 2373976192..2382364287 283 (128..8388223) 8388096
25: [209165056..209715199]: 2382364800..2382914943 284 (128..550271) 550144
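For reference, the file and the map above came from commands along these
lines; the exact bonnie++ options and the Bonnie.NNNN data-file name are my
guesses rather than a copy of the actual run:

  # create ~100 GB of sequential test data in /raid, then dump the extent map
  # (options and the test-file name are assumptions, not from the real run)
  bonnie++ -d /raid -s 102400 -u root
  xfs_bmap -v /raid/Bonnie.NNNN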
Then I ran the benchmark as described in the man page I found on the
web:
[root@localhost lmdd]# ./lmdd if=internal of=/raid/XXX count=1000 fsync=1
8.1920 MB in 2.2116 secs, 3.7041 MB/sec
I unmounted and remounted the filesystem, then ran:
[root@localhost lmdd]# ./lmdd if=/raid/XXX of=internal
8.1920 MB in 0.0776 secs, 105.6065 MB/sec
The 105 MB/s seems very slow to me.
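A bigger run along the same lines might be more telling; the bs= and count=
values below are my guesses (count=1000 at lmdd's default 8 KB block size
only moves 8.192 MB, which matches the numbers above):

  # write ~1 GB with fsync included in the timing, drop the cache by
  # remounting, then read it back (assumes /raid is listed in /etc/fstab)
  ./lmdd if=internal of=/raid/XXX bs=1m count=1024 fsync=1
  umount /raid && mount /raid
  ./lmdd if=/raid/XXX of=internal bs=1m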
/christoph
------- Original message -------
From: lord@xxxxxxx
Date: 25 Sep 2003 09:17:21 -0500
On Thu, 2003-09-25 at 08:57, Christoph Klocker wrote:
> I had a look at the dd man page, but I couldn't find out how to use it to
> check the write speed. Can you tell me how I should do it?
Try looking for lmdd instead; you will need to pull it off the net,
as it is not a standard part of a distribution.
As for your original question: since you have 1 GB of memory, when
bonnie++ completes its write a large percentage of the data is still
in RAM; once you ramp up the size, that percentage gets smaller. To
measure real disk performance you need to flush out to disk. This does
not totally explain your drop-off, but it does explain some of it.
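A minimal sketch of that idea with plain dd and sync, timing the flush
together with the write (the file name and size are arbitrary):

  # sync is inside the timed command, so cached-but-unwritten data
  # cannot inflate the result
  time sh -c 'dd if=/dev/zero of=/raid/flushtest bs=1024k count=2048 && sync'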
Two suggestions:
Run xfs_bmap -v on one of your files after the run has
completed and send us the output.
Read the xfsctl man page and look at the space
preallocation calls. You could also try O_DIRECT writes
here if you have control of the app. lmdd can be built
O_DIRECT capable, I think.
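One way to experiment with both from the shell, assuming an xfsprogs build
that ships xfs_io (the file name and sizes below are arbitrary); in C the
equivalents are xfsctl() with XFS_IOC_RESVSP64 and open() with O_DIRECT:

  # reserve 1 GB of space up front, then fill it with O_DIRECT writes and
  # fsync at the end (-f creates the file, -d opens it O_DIRECT)
  xfs_io -f -c "resvsp 0 1g" /raid/prealloc_test
  xfs_io -d -c "pwrite -b 1m 0 1g" -c "fsync" /raid/prealloc_test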
Steve
--
Steve Lord voice: +1-651-683-3511
Principal Engineer, Filesystem Software email: lord@xxxxxxx