Steve Lord wrote:
>
> > Hi,
> >
> > [seth@lsautom large]$ dd if=/dev/zero of=largefile.bin bs=1024 count=3500000
> > 3500000+0 records in
> > 3500000+0 records out
> > [seth@lsautom large]$
>
> Yes, I have seen similar things - I have also recently found ways to deadlock
> the system via too much I/O going into XFS. Ananth may have some comments on
> dealing with starvation.
I'm not sure whether it is the same as the deadlock issue,
which, btw, I just opened a bug against so we can track it.
Seth Mos' problem seems to be recoverable, i.e. it is not
a deadlock. I routinely run lmdd's where filesize = 4X memory
size without problems. I just tried (on a 64MB box,
my disk is smaller):
dd if=/dev/zero of=/xfs/output bs=1024 count=1000000
and another window was running "top -d1" without any hesitation.
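For anyone who wants to reproduce this, a minimal sketch of the test
(same /xfs path and count as above; size the file to at least 4X your
physical memory):

time dd if=/dev/zero of=/xfs/output bs=1024 count=1000000

Wrapping dd in "time" makes a post-write stall show up as extra
elapsed time, and running "vmstat 1" alongside "top -d1" in the
second window shows whether writeback is what is starving the rest
of the system.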
I'm using the bleeding edge kernel. Let us know if you
still see problems after updating your sources.
thanks,
ananth.
> >
> >
> > Kills the machine for the next 3 minutes.
> > The system is not usable during this time.
> > I had top open at the time I made the file, and its display jumped
> > forward 3 minutes as soon as the dd finished.