
Re: fc3 and stacks

To: Robin Humble <rjh@xxxxxxxxxxxxxxxx>
Subject: Re: fc3 and stacks
From: Joshua Baker-LePain <jlb17@xxxxxxxx>
Date: Mon, 21 Mar 2005 07:40:53 -0500 (EST)
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <20050320212509.GA11859@xxxxxxxxxxxxxxxxxxxxxxxx>
References: <20050310232036.GA19295@xxxxxxxxxxxxxxxxxxxxxxxx> <4234E903.8010309@xxxxxxxxxxx> <4235D44F.1020902@xxxxxxxxxxx> <20050314190915.GB9784@xxxxxxxxxxxxxxxxxxxxxxxx> <4235F824.6070303@xxxxxxx> <20050315044606.GA32635@xxxxxxxxxxxxxxxxxxxxxxxx> <20050320212509.GA11859@xxxxxxxxxxxxxxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
On Sun, 20 Mar 2005 at 4:25pm, Robin Humble wrote

> I hammered a standard fc3 kernel (4k stacks) and couldn't break it.
> 
> I ran 4 simultaneous bonnie++'s locally, and 8 more over NFS using two
> gigabit ethernet links. No software raid, no lvm, no quotas, default
> mount options, etc. just XFS on a 2.3TB partition of a 3ware 9000
> SATA hardware raid5. Machine was a dual 2.66GHz Xeon with 2G of ram.
> kernel was kernel-smp-2.6.10-1.770_FC3, userland was RHEL AS4.
> 
> I filled up the disk several times with dd's and bonnie's and saw no
> signs of problems there either.
> 
> So that's one extra data point for a relatively simple config that says
> XFS and 4k stacks is pretty stable.

Hrm.  I had exactly the opposite experience.  My testbed was far older 
(and simpler) -- dual PIII 450, 384MB RAM, AIC-7890 controller and 2 SCSI 
disks (not in any sort of RAID or anything).  I was running RHEL4 on it 
with the kernel modified simply to turn on XFS support.
'tiobench --size 2047' would reliably produce stack overflows.
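For reference, the kind of parallel bonnie++ hammering described earlier in the thread can be sketched as a small dry-run script. The mount point, file size, and user below are placeholders I've made up, not values from either mail; drop the "echo" to actually run the instances in parallel.

```shell
#!/bin/sh
# Dry-run sketch of a 4-way parallel bonnie++ stress run on an XFS
# mount. It only PRINTS the commands it would run; remove "echo"
# (and background each command with "&", then "wait") to run them.
MNT=/mnt/xfs        # placeholder XFS mount point (assumption)
SIZE=4g             # placeholder per-instance size; ~2x RAM is typical
for i in 1 2 3 4; do
    echo bonnie++ -d "$MNT/bonnie.$i" -s "$SIZE" -u nobody
done
```

Each instance writes into its own directory so the runs don't collide; the same loop over an NFS-mounted path would cover the remote half of the test.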

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
