
A couple of long-standing XFS (non-fatal) weirdnesses




I haven't bothered posting about these because they really aren't that
big a deal and I've been otherwise too occupied to work on any followup.

I've had a couple of strange behaviours with XFS on a few of my machines
since about the 2.4.14 days or so, up through the current 1.1 RELEASE
version (2.4.18-based).

Weird #1: I've only seen this behaviour on my workstation at home: an
Athlon 1.33GHz with 512MB of RAM and IDE drives, running Debian Woody
current as of a couple of weeks ago (I've recently moved and only have
dialup for now, grr...).  When I sit down in the evening after work and
log in, the HD goes nuts for anywhere from 30-60 seconds while the
system drags under I/O load.  If I log in and immediately type "sync"
it takes some time to return, so it appears the system is flushing a
bunch of buffers.  Also, "free" shows that nearly all the memory is in
use, mostly for filesystem buffers.  All filesystems on this machine
(except /, which is ext3) are XFS.  I suspect it has something to do
with some nightly process that grinds through the drive - the "find"
that updates the "locate" database or something.  It's not a big deal,
and after that initial "sync" the machine behaves normally, but it's
curious.  It may not even be an XFS issue; my suspicion is only based
on the fact that it started shortly after I converted the filesystems
to XFS.
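
To check the cron theory I've been meaning to leave something like this
rough Perl script running overnight.  The 300-second interval and the
Buffers:/Cached: lines it expects in /proc/meminfo are just my
assumptions, nothing XFS-specific:

#!/usr/bin/perl -w
# Rough overnight logger: every few minutes, record how much memory sits
# in buffers/cache and how long a "sync" takes.  The interval and the
# /proc/meminfo field names are assumptions, not anything XFS-specific.
use strict;

my $interval = 300;    # seconds between samples - pick whatever you like

while (1) {
    my ($buffers, $cached) = (0, 0);
    open(my $mi, '<', '/proc/meminfo') or die "meminfo: $!";
    while (<$mi>) {
        $buffers = $1 if /^Buffers:\s+(\d+)/;
        $cached  = $1 if /^Cached:\s+(\d+)/;
    }
    close $mi;

    my $start = time;
    system('sync');                     # how long does flushing take now?
    my $sync_secs = time - $start;

    printf "%s  buffers=%dkB cached=%dkB sync=%ds\n",
        scalar(localtime), $buffers, $cached, $sync_secs;
    sleep $interval;
}

If buffers/cached balloon and the sync time jumps right around the
cron.daily window, that would point at updatedb's "find" rather than at
XFS itself.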

Weird #2: This one I can reproduce on a few XFS machines.  (Again with
IDE.)  If a LOT of data-changing file activity takes place on an XFS
filesystem (i.e. untar and rebuild a kernel, rsync a lot of data from
another box, etc.) it takes a while for that filesystem to umount, and
the drive light comes on steady while it's umount'ing.  The more file
create/delete actions that have taken place, the longer it seems to
take to umount the filesystem.  These are primarily Debian Potato boxes
(what we use at work and I use at home) that I see this behaviour on,
although I've also seen Woody do it a few times.  (I saw something come
across the list once about certain RedHat versions taking a long time
to umount, but I don't know if that was related because I can't find
the details.)  Doing a "sync" before umount'ing the partition doesn't
seem to make a difference.  I can't say for sure, but I *think* I've
only seen this behaviour on systems with more than one IDE hard drive,
which is just plain weird.  (The machines where I can reproduce it
fairly easily are my work workstation with two drives, my home
workstation with two, and my home server with four.)
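
If it would help to put numbers on the create/delete-vs-umount-time
hunch, something along these lines is what I had in mind.  The
/mnt/xfstest mount point and the default file count are made up; point
it at a scratch XFS filesystem that's safe to umount:

#!/usr/bin/perl -w
# Sketch for putting numbers on "more create/delete => slower umount".
# /mnt/xfstest and the default file count are placeholders; point this
# at a scratch XFS filesystem that is safe to umount.
use strict;

my $mnt   = '/mnt/xfstest';
my $count = $ARGV[0] || 10000;

for my $i (1 .. $count) {              # churn a pile of small files
    my $f = "$mnt/churn.$i";
    open(my $fh, '>', $f) or die "$f: $!";
    print $fh 'x' x 512;
    close $fh;
    unlink $f or die "unlink $f: $!";
}

system('sync');                        # flush data first, like I do by hand

my $start = time;
system('umount', $mnt) == 0 or die "umount $mnt failed";
printf "umount after %d create/delete pairs took %d seconds\n",
    $count, time - $start;

Running it with increasing file counts should show whether the umount
time really scales with the amount of metadata churn.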

Neither of these things is really causing me problems, although once or
twice when I've rebooted my file server at home I've had to wait a
couple of minutes for the drives to spin while it umount'ed them before
it would come back up.  That's just an inconvenience rather than a real
problem, though.

If there's any kind of instrumentation I can build into a kernel to see
what's happening at the XFS or VFS levels in these two cases, I'd like
to take a look.
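
In the meantime, the best I've come up with from userspace is watching
where the slow umount sleeps in the kernel, along these lines.  The
/mnt/xfstest mount is a placeholder again, and on 2.4 ps wants a
matching System.map to resolve the wait channel into a symbol name:

#!/usr/bin/perl -w
# Crude userspace instrumentation: kick off the slow umount and sample
# which kernel function it is sleeping in (ps's WCHAN column) once a
# second until it finishes.  /mnt/xfstest is a placeholder, and on 2.4
# ps wants a matching System.map to resolve the wait channel to a name.
use strict;
use POSIX ':sys_wait_h';

my $mnt = shift || '/mnt/xfstest';

my $pid = fork();
die "fork: $!" unless defined $pid;
if ($pid == 0) {
    exec('umount', $mnt) or die "exec umount: $!";
}

while (waitpid($pid, WNOHANG) == 0) {  # umount still running
    chomp(my $line = `ps -o pid,stat,wchan -p $pid | tail -1`);
    print scalar(localtime), "  $line\n";
    sleep 1;
}
print "umount of $mnt finished\n";

It's crude, but it should at least show whether umount spends its time
in the same place on every run.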

-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
#!/usr/bin/perl -w
$_='while(read+STDIN,$_,2048){$a=29;$b=73;$c=142;$t=255;@t=map
{$_%16or$t^=$c^=($m=(11,10,116,100,11,122,20,100)[$_/16%8])&110;
$t^=(72,@z=(64,72,$a^=12*($_%16-2?0:$m&17)),$b^=$_%64?12:0,@z)
[$_%8]}(16..271);if((@a=unx"C*",$_)[20]&48){$h=5;$_=unxb24,join
"",@b=map{xB8,unxb8,chr($_^$a[--$h+84])}@ARGV;s/...$/1$&/;$d=
unxV,xb25,$_;$e=256|(ord$b[4])<<9|ord$b[3];$d=$d>>8^($f=$t&($d
>>12^$d>>4^$d^$d/8))<<17,$e=$e>>8^($t&($g=($q=$e>>14&7^$e)^$q*
8^$q<<6))<<9,$_=$t[$_]^(($h>>=8)+=$f+(~$g&$t))for@a[128..$#a]}
print+x"C*",@a}';s/x/pack+/g;eval 

usage: qrpff 153 2 8 105 225 < /mnt/dvd/VOB_FILENAME \
    | extract_mpeg2 | mpeg2dec - 

         http://www.cs.cmu.edu/~dst/DeCSS/Gallery/
http://www.eff.org/                   http://www.anti-dmca.org/