Another one (see attachment). This time on a server with SAS drives and
without the lazy-count option:
meta-data=/dev/sdb               isize=256    agcount=4, agsize=27471812 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=109887246, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
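In case the difference matters: this is roughly how we toggle lazy-count on
these filesystems (a sketch only -- it assumes an xfs_admin new enough to
support -c, the filesystem must be cleanly unmounted first, and /data/disk
is a placeholder mount point):

  umount /data/disk
  xfs_admin -c 0 /dev/sdb    # disable lazy superblock counters
  xfs_admin -c 1 /dev/sdb    # or re-enable them
  mount -o rw,noatime,nodiratime /dev/sdb /data/disk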
We really don't want to roll back to 2.6.28.x, as that doesn't solve the
issue.
Any hint would be appreciated.
-Patrick
Patrick Schreurs wrote:
Just had another one. It's likely we'll have to downgrade to 2.6.28.x.
These servers have 28 SCSI disks, each mounted separately (JBOD). The
workload is basically I/O load (90% read, 10% write) from these disks. The
servers are not extremely busy (not overloaded).
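If a test case helps: something like the fio job below approximates our
access pattern on a single disk (a sketch only -- the directory, job count,
file size and runtime are made up for illustration, not our exact workload):

  [global]
  directory=/data/disk02
  ioengine=libaio
  direct=1
  bs=4k
  size=1g
  runtime=600
  time_based

  [mixed-io]
  rw=randrw
  rwmixread=90
  numjobs=8

Run with 'fio mixed-io.fio' against one of the JBOD mounts.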
xfs_info from a random disk:
sb02:~# xfs_info /dev/sdb
meta-data=/dev/sdb               isize=256    agcount=4, agsize=18310547 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=73242187, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
As you can see, we use lazy-count=1. The mount options aren't very exotic:
rw,noatime,nodiratime
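For completeness, the corresponding fstab entries look like this (the mount
points are illustrative; there is one such line per JBOD disk):

  /dev/sdb  /data/disk02  xfs  rw,noatime,nodiratime  0  0
  /dev/sdc  /data/disk03  xfs  rw,noatime,nodiratime  0  0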
We are seeing these panics on at least 3 different servers.
If you have any hints on how to investigate, we would greatly appreciate
it.
-Patrick
Eric Sandeen wrote:
Others aren't hitting this; what sort of workload are you running when
you hit it?
I have not had time to look at it yet, but some sort of test case would
help greatly.
-Eric
On Jun 20, 2009, at 5:18 AM, Patrick Schreurs
<patrick@xxxxxxxxxxxxxxxx> wrote:
Unfortunately another panic. See attachment.
Would love to receive some advice on this issue.
Thanks in advance.
-Patrick
Patrick Schreurs wrote:
Eric Sandeen wrote:
Patrick Schreurs wrote:
Hi all,
We are experiencing kernel panics on servers running 2.6.29(.1)
and 2.6.30. I've included two attachments to demonstrate.
The error is:
Kernel panic - not syncing: xfs_fs_destroy_inode: cannot reclaim ...
The OS is 64-bit Debian lenny.
Is this a known issue? Any comments on this?
It's not known to me. Was this a recent upgrade? (IOW, did it start
with .29(.1)?)
We've seen this on 2 separate servers. It probably happened more
often, but we didn't capture the panic message. One server was
running 2.6.29.1, the other was running 2.6.30. We've since
updated all similar servers to 2.6.30.
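To make sure we capture the complete panic message next time, we're setting
up netconsole along these lines (a sketch -- the interface name and the IP
and MAC addresses of the log host are placeholders):

  modprobe netconsole netconsole=6666@10.0.0.2/eth0,514@10.0.0.1/00:19:99:aa:bb:cc

with something like 'nc -u -l -p 514' on the log host to receive the messages.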
If we can provide you with more details to help fix this issue,
please let us know.
-Patrick
<sb06-20090619.png>
_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
