On Tue, Oct 07, 2008 at 04:58:24PM -0700, Allan Haywood wrote:
> > I could see this as an issue: if there are pending metadata writes
> > to a filesystem, and that filesystem is, after a failure, mounted on
> > another server, used as normal, and then unmounted normally, then
> > when the ports are re-activated on the server that still holds the
> > pending metadata, is it possible those writes get flushed to disk?
> > Since the disk has been in use on another server, the metadata no
> > longer matches the filesystem and could overwrite or change it in a
> > way that causes corruption.
> Once you've fenced the server, you really, really need to make
> sure that it has no further pending writes that could be issued
> when the fence is removed. I'd suggest that if you failed to
> unmount the filesystem before fencing, you need to reboot that
> server to remove any possibility of it issuing stale I/O
> once it is unfenced. i.e. step 3b = STONITH.
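If you have out-of-band management, the STONITH step can be a hard
power-cycle of the fenced node through its BMC. A minimal sketch; the
function name is mine, and the address, user and password are
placeholders, not anything from this thread:

```shell
#!/bin/sh
# Hypothetical wrapper for step 3b: power-cycle the fenced node through
# its BMC so no stale I/O can survive in memory. Requires ipmitool.
stonith_node() {
    bmc="$1"
    if [ -z "$bmc" ]; then
        echo "usage: stonith_node <bmc-address>" >&2
        return 2
    fi
    # "chassis power cycle" is a hard power cycle: node memory, and with
    # it any pending metadata writes, is discarded -- which is the point.
    ipmitool -I lanplus -H "$bmc" -U admin -P changeme chassis power cycle
}
```

In a real cluster you'd let the fencing agent (e.g. a STONITH plugin)
drive this rather than calling ipmitool by hand.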
> > Would reloading the xfs module also work to clear any pending
> > writes (if I could get it to a point where modprobe -r xfs
> > would work)? Though I doubt it would be easy to get xfs to
> > unload if there are pending writes.
Correct. While a filesystem is mounted, you can't unload the XFS
module.
> > Another possibility: is there a command that will tell xfs
> > to clear any pending writes?
You can force-shutdown the filesystem, then unmount it. That prevents
any of the pending writes from being issued:
# xfs_io -x -c "shutdown" <mtpt>
# umount <mtpt>
See the xfs_io(8) man page - you want to shut down the filesystem
without forcing the log, because the fenced server can't do I/O.
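The two commands above can be wrapped in a small helper. A sketch only;
the function name and the error handling are mine, not from this thread:

```shell
#!/bin/sh
# Hypothetical helper: force-shutdown an XFS filesystem and unmount it,
# without issuing any I/O to the (possibly fenced) shared disk.
force_unmount_xfs() {
    mtpt="$1"
    if [ -z "$mtpt" ]; then
        echo "usage: force_unmount_xfs <mountpoint>" >&2
        return 2
    fi
    # No -f flag: shut down *without* forcing the log to disk, since
    # forcing the log would require I/O that the fenced server can't do.
    xfs_io -x -c "shutdown" "$mtpt" || return 1
    umount "$mtpt"
}
```

With `shutdown -f`, xfs_io would try to flush the log first; omitting
the flag discards the pending changes instead, which is what you want
here since the other server's copy of the filesystem is now the valid
one.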