On Mon, Jan 22 2001, Rajagopal Ananthanarayanan wrote:
> > > [seth@lsautom large]$ dd if=/dev/zero of=largefile.bin bs=1024 count=3500000
> > > 3500000+0 records in
> > > 3500000+0 records out
> > > [seth@lsautom large]$
> >
> > Yes, I have seen similar things - I have also recently found ways to
> > deadlock the system via too much I/O going into XFS. Ananth may have
> > some comments on dealing with starvation.
>
> I'm not sure whether it is the same as the deadlock issue,
> which, btw, I just opened a bug against so we can track it.
> Seth Mos' problem seems to be recoverable, i.e. it is not
> a deadlock. I routinely run lmdd's where filesize = 4X memory
> size without problems. I just tried (on a 64MB box,
> my disk is smaller):
>
> dd if=/dev/zero of=/xfs/output bs=1024 count=1000000
>
> and "top -d1" running in another window kept updating without any hesitation.
> I'm using the bleeding edge kernel. Let us know if you
> still see problems after updating your sources.
The max-locked-buffers heuristics should make this behave
much better in the future, i.e. when XFS moves to 2.4.1.
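
For anyone who wants to retest against an updated tree, the commands below
match the test Ananth describes above; the mount point and sizes are only
examples, so adjust them for your box (the idea is an output file of roughly
4X physical memory):

  # on a 64MB machine, 4X memory is 256MB = 262144 KB (bs=1024)
  dd if=/dev/zero of=/xfs/output bs=1024 count=262144

  # in a second window, watch for stalls while the dd runs
  top -d1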
--
* Jens Axboe <axboe@xxxxxxx>
* SuSE Labs