"xfs_log_force: error 5 returned." for drive that was removed.

Carlos Maiolino cmaiolino at redhat.com
Mon Apr 18 13:54:29 CDT 2016


On Sun, Apr 17, 2016 at 09:33:27AM -0500, Joe Wendt wrote:
>    Hello! This may be a silly question or an interesting one...
>    We had a drive fail in a production server, which spawned this error in
>    the logs:
>    XFS (sde1): xfs_log_force: error 5 returned.
>    The dead array was lazy-unmounted, and the drive was hot-swapped, but
>    when the RAID array was rebuilt, it came online as /dev/sdk instead of
>    /dev/sde.
>    Now /dev/sde1 doesn't exist in the system, but we still see this
>    message every 30 seconds. I'm assuming a reboot will clear out whatever
>    is still trying to access sde1, but I'm trying to avoid that if
>    possible. Could someone point me in the direction of what XFS might
>    still be trying to do with that device?
>    lsof hasn't given me any clues. I can't run xfs_repair on a volume that
>    isn't there. I haven't been able to find anything similar yet online.
>    Any help would be greatly appreciated!
>    Thanks,
>    Joe

I believe this is the same problem being discussed in this thread:

XFS hung task in xfs_ail_push_all_sync() when unmounting FS after disk
failure/recovery.

Can you get a stack dump of all tasks on the system (sysrq-t) and post it to a pastebin?
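
In case it helps, here is a rough sketch of how the sysrq-t dump could be captured (assumes root and a kernel built with SysRq support; the output path /tmp/sysrq-t.txt is just an example):

```shell
# Hypothetical capture steps -- adjust paths as needed.
if [ -w /proc/sysrq-trigger ]; then
    echo 1 > /proc/sys/kernel/sysrq   # enable all SysRq functions
    echo t > /proc/sysrq-trigger      # dump every task's stack to the kernel log
    dmesg > /tmp/sysrq-t.txt          # save the ring buffer for posting
else
    echo "need root and SysRq support" >&2
fi
```

Note the dump lands in the kernel ring buffer, so a large task list may overflow it; bumping the log buffer size (log_buf_len= on the kernel command line) or reading from the console avoids truncation.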


> _______________________________________________
> xfs mailing list
> xfs at oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs


-- 
Carlos
