On Sun, Apr 03, 2005 at 12:03:27PM -0700, Chris Wedgwood <cw@xxxxxxxx> wrote:
> > I think the problem you are running into is that with a slow writing
> > app pdflush is pushing pages out to disk too quickly. A way to test
> > that is to increase the pdflush interval; I don't remember which proc
> > value you need to change for that, dirty_writeback_centisecs I think.
>
> for really slow writes I found a large biosize helped. I've had this
> in my quilt series for a long time now and use it (with the appropriate
> mount option):
Applying the patch and mounting with biosize=24 improved it a little bit;
typical xfs_bmap -v output for a recording now looks like this:
0: [0..383]: 175826800..175827183 8 (19535920..19536303) 384
1: [384..1023]: 175666096..175666735 8 (19375216..19375855) 640
2: [1024..1151]: 175665968..175666095 8 (19375088..19375215) 128
3: [1152..2175]: 175158880..175159903 8 (18868000..18869023) 1024
4: [2176..4351]: 175156704..175158879 8 (18865824..18867999) 2176
5: [4352..5375]: 175155680..175156703 8 (18864800..18865823) 1024
6: [5376..6399]: 175154656..175155679 8 (18863776..18864799) 1024
7: [6400..7423]: 175153632..175154655 8 (18862752..18863775) 1024
8: [7424..8447]: 175152608..175153631 8 (18861728..18862751) 1024
9: [8448..9471]: 175151584..175152607 8 (18860704..18861727) 1024
10: [9472..10495]: 175150552..175151575 8 (18859672..18860695) 1024
11: [10496..11519]: 175149528..175150551 8 (18858648..18859671) 1024
12: [11520..12479]: 175148568..175149527 8 (18857688..18858647) 960
13: [12480..14591]: 175146456..175148567 8 (18855576..18857687) 2112
14: [14592..15615]: 175145432..175146455 8 (18854552..18855575) 1024
15: [15616..16639]: 175144408..175145431 8 (18853528..18854551) 1024
16: [16640..17663]: 175143384..175144407 8 (18852504..18853527) 1024
17: [17664..30463]: 192737976..192750775 9 (16910736..16923535) 12800
18: [30464..233087]: 208000752..208203375 10 (12637152..12839775) 202624
19: [233088..234111]: 204951752..204952775 10 (9588152..9589175) 1024
20: [234112..235135]: 204950728..204951751 10 (9587128..9588151) 1024
21: [235136..236159]: 204949704..204950727 10 (9586104..9587127) 1024
22: [236160..237183]: 204948680..204949703 10 (9585080..9586103) 1024
23: [237184..238207]: 204947656..204948679 10 (9584056..9585079) 1024
24: [238208..239359]: 204946504..204947655 10 (9582904..9584055) 1152
..etc..
I'm inclined to say that it didn't improve the situation much, but there
certainly is some improvement (the typical extent size is now 1024, while
before it was more like 127..256). I'd say 4MB/extent is enough to make me
happy, given the small amount of work required to arrive at that solution.
Thanks for the hint & patch!
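(In case somebody wants to reproduce this: with the patch applied I simply
mounted the recording filesystem with the new option. The device and
mountpoint below are only placeholders, and my reading of the patch is that
the value is log2 of the preferred buffered I/O size:

    # biosize=24 should mean a 2^24 = 16MB preferred buffered I/O size;
    # /dev/hdc1 and /video stand in for the actual recording disk
    mount -t xfs -o biosize=24 /dev/hdc1 /video
)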
[realtime code]
Regarding the realtime code: the reason I was a bit reluctant is the
help entry for the realtime subvolume switch in the kernel config, which
says:
This feature is unsupported at this time, is not yet fully
functional, and may cause serious problems.
But maybe "realtime subvolume" is not the same thing as "using the
realtime block allocator?".
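(My understanding, which may well be wrong, is that the realtime allocator
is only used for files that live on a separate realtime subvolume and carry
the realtime flag, roughly set up like this; the device names below are just
placeholders:

    # create the filesystem with a separate realtime subvolume
    mkfs.xfs -r rtdev=/dev/hdd1 /dev/hdc1
    # mount it together with its realtime device
    mount -t xfs -o rtdev=/dev/hdd1 /dev/hdc1 /video
    # flag a still-empty file so its data is allocated from the realtime subvolume
    xfs_io -c "chattr +r" /video/recording.mpg

If that whole setup is what the config help warns about, the question above
still stands.)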
--
The choice of a
-----==- _GNU_
----==-- _ generation Marc Lehmann
---==---(_)__ __ ____ __ pcg@xxxxxxxx
--==---/ / _ \/ // /\ \/ / http://schmorp.de/
-=====/_/_//_/\_,_/ /_/\_\ XX11-RIPE