Results:

References: [ +subject:/^(?:^\s*(re|sv|fwd|fw)[\[\]\d]*[:>-]+\s*)*\[Jfs\-discussion\]\s+benchmark\s+results\s*$/: 27 ]

Total 27 documents matching your query.

1. Re: [Jfs-discussion] benchmark results (score: 1)
Author: pg_jf2@xxxxxxxxxxxxxxxxxx (Peter Grandi)
Date: Thu, 24 Dec 2009 13:05:39 +0000
Unfortunately there seems to be an overproduction of rather meaningless file system "benchmarks"... After having a glance, I suspect that your tests could be enormously improved, and doing so would
/archives/xfs/2009-12/msg00239.html (9,587 bytes)

2. Re: [Jfs-discussion] benchmark results (score: 1)
Author: tytso@xxxxxxx
Date: Thu, 24 Dec 2009 16:27:56 -0500
One of the problems is that very few people are interested in writing or maintaining file system benchmarks, except for file system developers -- but many of them are more interested in developing (a
/archives/xfs/2009-12/msg00240.html (10,989 bytes)

3. Re: [Jfs-discussion] benchmark results (score: 1)
Author: Evgeniy Polyakov <zbr@xxxxxxxxxxx>
Date: Fri, 25 Dec 2009 02:46:31 +0300
Hi Ted. Hmmmm.... I suppose there should be a link to such a set here? :) No link? Then I suppose the benchmark results are pretty much in sync with what they are supposed to show. It depends on the size of unta
/archives/xfs/2009-12/msg00241.html (11,090 bytes)

4. Re: [Jfs-discussion] benchmark results (score: 1)
Author: Christian Kujau <lists@xxxxxxxxxxxxxxx>
Date: Thu, 24 Dec 2009 17:52:34 -0800 (PST)
Well, I do "sync" after each operation, so the data should be on disk, but that doesn't mean it'll clear the filesystem buffers - but this doesn't happen that often in the real world either. Also, all f
/archives/xfs/2009-12/msg00242.html (9,793 bytes)

5. Re: [Jfs-discussion] benchmark results (score: 1)
Author: lakshmi pathi <lakshmipathi.g@xxxxxxxxx>
Date: Fri, 25 Dec 2009 18:49:19 +0530
I'm a file system testing newbie and I have a question/doubt, please let me know if I'm wrong. Do you think a tool which uses output from the "hdparm" command to get the hard drive's maximum performance and comp
/archives/xfs/2009-12/msg00243.html (10,902 bytes)

6. Re: [Jfs-discussion] benchmark results (score: 1)
Author: tytso@xxxxxxx
Date: Fri, 25 Dec 2009 11:11:46 -0500
If people are using benchmarks to improve file systems, and a benchmark shows a problem, then trying to remedy the performance issue is a good thing to do, of course. Sometimes, though, the case which
/archives/xfs/2009-12/msg00244.html (11,925 bytes)

7. Re: [Jfs-discussion] benchmark results (score: 1)
Author: tytso@xxxxxxx
Date: Fri, 25 Dec 2009 11:14:53 -0500
Did you include the "sync" as part of what you timed? Peter was quite right -- the fact that the measured bandwidth in your "cp" test is five times faster than the disk bandwidth as measured by hdpar
/archives/xfs/2009-12/msg00245.html (10,274 bytes)
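
A small illustration of the point in result 7: the sync has to sit inside the timed region, or the page cache dominates the numbers. The file name, size, and resulting rates below are made-up placeholders, not figures from the thread; the sketch simply reports apparent write bandwidth before and after forcing the data to disk.

    /* Sketch: time a bulk write with and without waiting for the data to
     * reach the disk.  "testfile" and the 512MB size are arbitrary examples. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <time.h>

    #define SIZE_MB 512

    static double now(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        char *buf = malloc(1 << 20);
        if (!buf) return 1;
        memset(buf, 0xab, 1 << 20);

        int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        double t0 = now();
        for (int i = 0; i < SIZE_MB; i++)
            if (write(fd, buf, 1 << 20) < 0) { perror("write"); return 1; }
        double t_cached = now();            /* data may still be in the page cache */
        fsync(fd);                          /* force it out to the disk */
        double t_synced = now();

        printf("without sync: %.0f MB/s\n", SIZE_MB / (t_cached - t0));
        printf("with sync:    %.0f MB/s\n", SIZE_MB / (t_synced - t0));
        close(fd);
        free(buf);
        return 0;
    }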

8. Re: [Jfs-discussion] benchmark results (score: 1)
Author: Larry McVoy <lm@xxxxxxxxxxxx>
Date: Fri, 25 Dec 2009 08:22:38 -0800
Dudes, sync() doesn't flush the fs cache, you have to unmount for that. Once upon a time Linux had an ioctl() to flush the fs buffers, I used it in lmbench. ioctl(fd, BLKFLSBUF, 0); No idea if that i
/archives/xfs/2009-12/msg00246.html (12,027 bytes)
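
The ioctl mentioned in result 8 still exists on Linux; a minimal sketch of calling it follows. /dev/sda is only an example device name, and the call requires root.

    /* Sketch: write out the block device's dirty buffers and drop its
     * buffer cache via BLKFLSBUF.  /dev/sda is an example; needs root. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>        /* BLKFLSBUF */

    int main(void)
    {
        int fd = open("/dev/sda", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        if (ioctl(fd, BLKFLSBUF, 0) < 0)
            perror("ioctl(BLKFLSBUF)");
        close(fd);
        return 0;
    }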

9. Re: [Jfs-discussion] benchmark results (score: 1)
Author: tytso@xxxxxxx
Date: Fri, 25 Dec 2009 11:33:41 -0500
Depends on what you are trying to do (flush has multiple meanings, so using it can be ambiguous). BLKFLSBUF will write out any dirty buffers, *and* empty the buffer cache. I use it when benchmarking e2f
/archives/xfs/2009-12/msg00247.html (10,102 bytes)

10. Re: [Jfs-discussion] benchmark results (score: 1)
Author: Christian Kujau <lists@xxxxxxxxxxxxxxx>
Date: Fri, 25 Dec 2009 10:42:30 -0800 (PST)
In my "generic" tests[0] I do "sync" after each of the cp/tar/rm operations. That's right, and that's what I replied to Peter on jfs-discussion[1]: True, because I'm tarring up ~2.7GB of content whil
/archives/xfs/2009-12/msg00248.html (11,755 bytes)

11. Re: [Jfs-discussion] benchmark results (score: 1)
Author: Christian Kujau <lists@xxxxxxxxxxxxxxx>
Date: Fri, 25 Dec 2009 10:51:05 -0800 (PST)
Thanks Larry, that was exactly my point[0] too; I should add that to the results page to avoid further confusion or misassumptions: I realize however that on the same results page the bonnie++ tests
/archives/xfs/2009-12/msg00249.html (11,216 bytes)

12. Re: [Jfs-discussion] benchmark results (score: 1)
Author: Christian Kujau <lists@xxxxxxxxxxxxxxx>
Date: Fri, 25 Dec 2009 10:56:53 -0800 (PST)
Thanks for the hint, I could find sys/vm/drop-caches documented in Documentation/ but it's good to know there's a way to flush all these caches via this knob. Maybe I should add this to those "generic"
/archives/xfs/2009-12/msg00250.html (10,854 bytes)

13. Re: [Jfs-discussion] benchmark results (score: 1)
Author: Christian Kujau <lists@xxxxxxxxxxxxxxx>
Date: Fri, 25 Dec 2009 11:32:58 -0800 (PST)
--^ not, was what I meant to say, but it's all there, as "drop_caches" in Documentation/sysctl/vm.txt Christian. -- BOFH excuse #129: The ring needs another token
/archives/xfs/2009-12/msg00251.html (10,768 bytes)
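
Results 12 and 13 point at the drop_caches knob documented in Documentation/sysctl/vm.txt. A minimal sketch of poking it from C follows; it needs root, and the sync() beforehand writes dirty data back before the caches are dropped.

    /* Sketch: drop the page cache plus dentries and inodes between test runs.
     * Requires root; writing "1" drops only the page cache, "2" only the
     * slab caches (dentries/inodes), "3" both. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        sync();                                    /* flush dirty data first */
        FILE *f = fopen("/proc/sys/vm/drop_caches", "w");
        if (!f) { perror("fopen"); return 1; }
        fputs("3\n", f);
        fclose(f);
        return 0;
    }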

14. Re: [Jfs-discussion] benchmark results (score: 1)
Author: jim owens <jowens@xxxxxx>
Date: Sat, 26 Dec 2009 11:00:59 -0500
Good, but not good enough for many tests... info sync CONFORMING TO POSIX.2 NOTES On Linux, sync is only guaranteed to schedule the dirty blocks for writing; it can actually take a short time before
/archives/xfs/2009-12/msg00253.html (10,714 bytes)

15. Re: [Jfs-discussion] benchmark results (score: 1)
Author: Christian Kujau <lists@xxxxxxxxxxxxxxx>
Date: Sat, 26 Dec 2009 11:06:38 -0800
[...] Noted, many times already. That's why I wrote "should be" - but in this special scenario (filesystem speed tests) I don't care about file integrity: if I pull the plug after "sync" and some data
/archives/xfs/2009-12/msg00254.html (11,371 bytes)

16. Re: [Jfs-discussion] benchmark results (score: 1)
Author: tytso@xxxxxxx
Date: Sat, 26 Dec 2009 14:19:16 -0500
Actually, Linux's sync does more than just schedule the writes; it has for quite some time: static void sync_filesystems(int wait) { ... } SYSCALL_DEFINE0(sync) { wakeup_flusher_threads(0); sync_file
/archives/xfs/2009-12/msg00255.html (12,026 bytes)
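
Result 16 quotes the kernel's sync(2) implementation only in fragments. Reconstructed from memory for kernels of roughly that era (around 2.6.32), and therefore a sketch rather than an exact quote, the syscall looked something like the following: a wakeup of the flusher threads, a non-waiting writeback pass, then a waiting one, which is why sync does more than merely schedule the writes.

    SYSCALL_DEFINE0(sync)
    {
            wakeup_flusher_threads(0);       /* kick background writeback */
            sync_filesystems(0);             /* first pass: don't wait */
            sync_filesystems(1);             /* second pass: wait for completion */
            if (unlikely(laptop_mode))
                    laptop_sync_completion();
            return 0;
    }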

17. Re: [Jfs-discussion] benchmark results (score: 1)
Author: jim owens <jowens@xxxxxx>
Date: Sun, 27 Dec 2009 14:50:14 -0500
OK, that was wrong per Ted's explanation: You did not understand my point. It was not about data integrity, it was about test timing validity. And even with sync(2) behaving as Ted describes, *timing
/archives/xfs/2009-12/msg00257.html (13,272 bytes)

18. Re: [Jfs-discussion] benchmark results (score: 1)
Author: Christian Kujau <lists@xxxxxxxxxxxxxxx>
Date: Sun, 27 Dec 2009 13:55:26 -0800 (PST)
Not me, I'm comparing filesystems - and when the HBA or whatever plays tricks and "sync" doesn't flush all the data, it'll do so for every tested filesystem. Of course, filesystems could handle "sync"
/archives/xfs/2009-12/msg00259.html (12,166 bytes)

19. Re: [Jfs-discussion] benchmark results (score: 1)
Author: tytso@xxxxxxx
Date: Sun, 27 Dec 2009 17:33:07 -0500
Yes, but given that many of the file systems have almost *exactly* the same bandwidth measurement for the "cp" test, and said bandwidth measurement is 5 times the disk bandwidth as measured by hdparm, it
/archives/xfs/2009-12/msg00261.html (14,089 bytes)

20. Re: [Jfs-discussion] benchmark results (score: 1)
Author: Christian Kujau <lists@xxxxxxxxxxxxxxx>
Date: Sun, 27 Dec 2009 17:24:05 -0800 (PST)
"Almost" indeed - but curiously enough some filesystems are *not* the same, although they should be. Again: we have 8GB RAM, I'm copying ~3GB of data, so why _are_ there differences? (Answer: because fil
/archives/xfs/2009-12/msg00262.html (13,889 bytes)
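
The back-of-the-envelope reasoning in results 19 and 20 can be made concrete. The ~2.7GB data size and 8GB of RAM are from the thread; the disk rate and elapsed time below are hypothetical placeholders chosen only to show the shape of the calculation.

    /* Sketch: why an apparent cp bandwidth far above the raw disk rate means
     * the data never left the page cache during the timed interval. */
    #include <stdio.h>

    int main(void)
    {
        double data_gb  = 2.7;     /* size of the copied tree (from the thread)   */
        double ram_gb   = 8.0;     /* machine RAM (from the thread)               */
        double disk_mbs = 80.0;    /* hypothetical hdparm-style raw transfer rate */
        double cp_secs  = 7.0;     /* hypothetical elapsed time for the cp        */

        double apparent = data_gb * 1024.0 / cp_secs;
        printf("apparent cp bandwidth: %.0f MB/s (%.1fx the disk's %.0f MB/s)\n",
               apparent, apparent / disk_mbs, disk_mbs);
        if (data_gb < ram_gb)
            printf("the data set fits in RAM, so the copy can complete from cache\n");
        return 0;
    }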

