On Wed, 2002-01-30 at 16:51, Chris Pascoe wrote:
> > How long freeze takes depends on how much data is dirty in the
> > filesystem; it should take about as long as an unmount. However,
> > unless this was a very large and dirty filesystem, this feels
> > like a long time - although it was all system time,
> > so something was going on.
>
> 6 hours is a long time for an unmount! The volume is ~70GB, and had no
> activity on it before I tried to create the snapshots - the machine had just
> been rebooted before some attempts.
Whoa! It was early this morning; I missed a few digits when I read the
elapsed time. Yes, that is a leetle on the long side. There is a complex
loop in the xfs_syncsub function; I'm not sure yet how it can get stuck,
though.
Steve
>
> > The spot you kept seeing on the stack does not make much sense; vn_count
> > is basically an atomic_read.
>
> It just seems that's the place I've hit the break key the most. A closer
> examination of my console log suggests it isn't stuck in vn_count; there are
> a few occurrences of it being somewhere else in xfs_iflush_all - so maybe
> it's got itself tangled in a loop for a few hours?
>
> 0xe95c3b88 0xc01a1a63 xfs_iflush_all+0xc3
> (0xf6d1f000, 0x1, 0xf6d1f000, 0xc, 0xc039ce80)
> 0xe95c3b88 0xc01a1ab9 xfs_iflush_all+0x119
> (0xf6d1f000, 0x1, 0xf6d1f000, 0xc, 0xc039ce80)
> 0xe95c3b88 0xc01a1ac3 xfs_iflush_all+0x123
> (0xf6d1f000, 0x1, 0xf6d1f000, 0xc, 0xc039ce80)
>
> > How repeatable is this?
>
> The problem was persistent across reboots - at some stage I rebooted to
> install a different LVM version (moving from 1.0.1 to 1.0.2), and it
> occurred five times in a row after that. I rebooted just now to confirm
> that it still happens - but, alas, it no longer wants to. I'll try again
> at some random times throughout the day.
>
> Chris
--
Steve Lord voice: +1-651-683-3511
Principal Engineer, Filesystem Software email: lord@xxxxxxx