Hi there!
We've been experiencing problems similar to the ones Daryl has. They seem to be NFS
related.
OK, let me describe our hardware first.
Dual PIII @ 800 MHz (Coppermine) on an MSI-6321 board.
# /sbin/lspci
00:00.0 Host bridge: VIA Technologies, Inc. VT82C693A/694x [Apollo PRO133x] (rev c4)
00:01.0 PCI bridge: VIA Technologies, Inc. VT82C598/694x [Apollo MVP3/Pro133x AGP]
00:07.0 ISA bridge: VIA Technologies, Inc. VT82C686 [Apollo Super South] (rev 22)
00:07.1 IDE interface: VIA Technologies, Inc. Bus Master IDE (rev 10)
00:07.2 USB Controller: VIA Technologies, Inc. UHCI USB (rev 10)
00:07.3 USB Controller: VIA Technologies, Inc. UHCI USB (rev 10)
00:07.4 Host bridge: VIA Technologies, Inc. VT82C686 [Apollo Super ACPI] (rev 30)
00:07.5 Multimedia audio controller: VIA Technologies, Inc. AC97 Audio Controller (rev 20)
00:0c.0 Unknown mass storage controller: Promise Technology, Inc. 20265 (rev 02)
00:0f.0 Ethernet controller: 3Com Corporation 3c905C-TX [Fast Etherlink] (rev 74)
01:00.0 VGA compatible controller: ATI Technologies Inc 3D Rage Pro AGP 1X/2X (rev 5c)
# free
             total       used       free     shared    buffers     cached
Mem:       1029976     507836     522140          0        624     272712
-/+ buffers/cache:      234500     795476
Swap:      2000084       3392    1996692
# dmesg
----cut----
hda: 60036480 sectors (30739 MB) w/1916KiB Cache, CHS=3737/255/63, UDMA(66)
hdc: 195371568 sectors (100030 MB) w/2048KiB Cache, CHS=193821/16/63, UDMA(66)
hdd: 195371568 sectors (100030 MB) w/2048KiB Cache, CHS=193821/16/63, UDMA(66)
hde: 195371568 sectors (100030 MB) w/2048KiB Cache, CHS=193821/16/63, UDMA(100)
hdf: 195371568 sectors (100030 MB) w/2048KiB Cache, CHS=193821/16/63, UDMA(100)
hdg: 195371568 sectors (100030 MB) w/2048KiB Cache, CHS=193821/16/63, UDMA(100)
hdh: 195371568 sectors (100030 MB) w/2048KiB Cache, CHS=193821/16/63, UDMA(100)
----cut----
The (software) RAID5 consists of six Western Digital 100 GB hard disks (WDC
WD1000BB-00CCB0), two of them connected as hdc and hdd to the onboard VIA
controller, the other four to the onboard Promise PDC20265.
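In case it matters: it's a plain raidtools-style array. Our /etc/raidtab looks roughly like the sketch below (partition numbers, chunk size and parity algorithm are placeholders here, not necessarily our exact values):

# 6-disk software RAID5 across the VIA (hdc/hdd) and Promise (hde-hdh) channels
raiddev /dev/md0
        raid-level              5
        nr-raid-disks           6
        nr-spare-disks          0
        persistent-superblock   1
        parity-algorithm        left-symmetric
        chunk-size              64
        device                  /dev/hdc1
        raid-disk               0
        device                  /dev/hdd1
        raid-disk               1
        device                  /dev/hde1
        raid-disk               2
        device                  /dev/hdf1
        raid-disk               3
        device                  /dev/hdg1
        raid-disk               4
        device                  /dev/hdh1
        raid-disk               5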
The system runs Debian testing (last updated in early February). The RAID5 array is
exported via NFS to about 150 clients (Linux and Sun Solaris).
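The export itself is nothing special; the relevant line in /etc/exports has roughly this form (the path and client list are placeholders, not our real values):

/export/raid    *(rw)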
So, what's our problem?
This machine had been running stably for about 100 days with kernel 2.4.14
and xfs-1.0.2, but then it crashed and wouldn't boot anymore (at that time
the RAID was about 70% full). The kernel locked up while trying to initialize
the Promise controller. We then switched to 2.4.18 and XFS (dated March 3),
but still no luck. After installing a new BIOS version, including a new
Promise BIOS, those problems were gone, but the NFS-related ones began!
Even under normal NFS load the machine crashed.
Now I have switched back to 2.4.14 + xfs-1.0.2 and the problems seem to be
gone...
BTW: I didn't try xfs-1.1PR4.
What could be wrong with linux-2.4.18-xfs? Any ideas?
Thx
Christian