Quick related question (and I haven't been following the group closely
recently): does this affect any sort of hardware RAID 5? What about RAID
1+0?
> The code which fixes this is stuck in my todo pile in an almost-working
> state, but "almost working" includes a tendency to occasionally do a log
> write into a random spot and occasionally refuse to mount a filesystem.
>
> We really should remove the growfs info about the log; it has never
> been implemented - we would like to be able to do it too sometimes.
> A bigger log will not help you. The fundamental problem here is unaligned
> writes to the log: they cause raid5 to flush its cache, and that is the
> performance killer.
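As an illustration of the alignment point above, a stripe-aligned re-make of the
filesystem might look roughly like the sketch below. The device name and the
64k-chunk / 3-data-disk geometry are assumptions, not the actual setup in this
thread, and this of course destroys and recreates the filesystem:

    # Sketch only: align data and log I/O to the raid5 stripe so log writes
    # land on full stripes instead of forcing read-modify-write cycles.
    # su = stripe unit (chunk size), sw = number of data disks in the stripe.
    mkfs.xfs -d su=64k,sw=3 -l version=2,su=64k /dev/sda1

Whether the version=2 log options are available depends on how recent the
xfsprogs and kernel in use are.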
>
> How recent is your kernel? Changes went in recently (last month)
> which reduce kupdated load in xfs dramatically.
>
> Steve
>
> On Thu, 2002-04-25 at 13:56, Mike Eldridge wrote:
> > xfs/kernel gurus,
> >
> > i am having a problem with an xfs filesystem on a raid5 array. i am seeing
> > extremely poor i/o performance, and i notice a LOT of processes entering
> > an uninterruptible sleep state when attempting to read from or write to
> > the xfs filesystem.
> >
> > the system is configured as follows:
> > 60GB RAID5 (escalade 7850 + 4x20GB IBM deskstar) mounted on /var
> > log=internal,bsize=4096,blocks=1839
> > agcount=58,agsize=262144 blocks
> > realtime=none
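As a side note, that geometry report can be reproduced for the mounted
filesystem with xfs_growfs in query mode, which only prints the current
layout and changes nothing (mount point taken from the setup above):

    # print the current geometry of the mounted filesystem (-n = no change)
    xfs_growfs -n /var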
> >
> > i am mostly a newbie when it comes to raid and xfs, so i do not have
> > enough experience to diagnose the problem with 100% certainty. baptism
> > by fire, you might say.
> >
> > i suspect that my problem is this use of raid5 [0], though i want to
> > make sure i cover all bases.
> >
> > i was reading an article about xfs on ibm's developerworks website.
> > it mentions a few xfs settings that may cause the fs to suffer under
> > heavy i/o load.
> >
> > the article mentions the following possible bottlenecks:
> > - lack of an appropriately sized metadata log
> > - too many allocation groups
> >
> > could this be the cause of my sleeping i/o-bound processes?
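Both of those knobs are fixed at mkfs time, which is why they cannot be
adjusted on the live filesystem; a hypothetical re-make along the lines the
article suggests would look something like this (device name, log size and
agcount are illustrative assumptions, and this wipes the existing filesystem):

    # sketch only: larger internal log, fewer allocation groups
    mkfs.xfs -l internal,size=32m -d agcount=16 /dev/sda1

As noted in the reply above, though, a bigger log does not address the
unaligned-write problem on raid5.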
> >
> > i am extremely annoyed that i cannot yet move/grow the log using
> > xfs_growfs, so i cannot confirm whether my log is the problem without
> > trashing the entire filesystem.
> >
> > looking at kupdated's cpu time on this box, it has eaten close to
> > 2h30m in its four days of uptime, a sharp contrast to the cpu time used
> > by the kupdate process on the old box (2h over 94 days). this seems
> > alarming to me, though the old box was running linux-2.2. could
> > linux-2.4 + xfs account for such an increase in kupdated's cpu use?
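One quick way to keep an eye on that figure is to ask ps for the daemon's
accumulated cpu time directly (the daemon is kupdated on 2.4 and kupdate
on 2.2):

    # show pid, elapsed time and accumulated cpu time for the flush daemon
    ps -o pid,etime,time,comm -C kupdated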
> >
> > i appreciate any insight/comments/suggestions/information
> >
> > -mike
> >
> > [0] raid5 - i'm now wanting to trash the raid5 array and instead create
> > several mirrored pairs, then use LVM to stripe over the mirrored
> > pairs. i'm hoping this will boost performance.
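One software-raid way to build that layout, assuming the escalade can export
the disks individually; the device names, sizes and the 64k stripe below are
placeholders, a rough sketch rather than a tested recipe:

    # two mirrored pairs...
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
    # ...striped together with LVM (-i 2 = two stripes, -I 64 = 64k stripe size)
    pvcreate /dev/md0 /dev/md1
    vgcreate vg0 /dev/md0 /dev/md1
    lvcreate -i 2 -I 64 -L 36G -n var vg0
    mkfs.xfs /dev/vg0/var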
> >
>
------------------------------------------------------------------------
Justin Coffey 858.535.9332 x 2025
Technical Advisor justin@xxxxxxxxx
Homes.com, Inc. http://homes.com
------------------------------------------------------------------------