Folks:
We're deploying XFS in a configuration where the filesystem is exported over NFS. XFS is mounted on Linux with default options; the backing store is an iSCSI volume. We're working out a failover solution for this deployment using Linux-HA. Things appear to work correctly in the general case, but in continuous testing we hit XFS superblock corruption on a very reproducible basis.
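For concreteness, here's roughly what the configuration looks like on each server (the mount point and export options are illustrative; /dev/sde is the iSCSI LUN that shows up in the logs below):
<snip>
# Mount the iSCSI-backed volume with default XFS options
mount -t xfs /dev/sde /export

# /etc/exports entry on whichever server is active
/export  *(rw,sync,no_root_squash)
</snip>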
The sequence of events in our test scenario (sketched as shell commands after the list):
1. NFS server #1 online
2. Run IO to NFS server #1 from NFS client
3. NFS server #1 taken offline (by echoing 'b' into /proc/sysrq-trigger, forcing an immediate reboot)
4. NFS server #2 online
5. XFS mounted on NFS server #2 as part of the failover mechanism; the mount fails
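Roughly, as shell commands (hostnames, paths, and the IO workload are illustrative):
<snip>
# On NFS server #1: bring up the export and serve IO
mount -t xfs /dev/sde /export
exportfs -a

# On the NFS client: generate sustained write traffic
mount -t nfs server1:/export /mnt
dd if=/dev/zero of=/mnt/testfile bs=1M count=4096 &

# On NFS server #1: simulate a hard failure
# (immediate reboot, no sync, no unmount)
echo b > /proc/sysrq-trigger

# On NFS server #2: failover scripts attempt to take over
mount -t xfs /dev/sde /export    # <-- this is the mount that fails
</snip>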
The mount fails with the following:
<snip>
kernel: XFS mounting filesystem sde
kernel: Starting XFS recovery on filesystem: sde (logdev: internal)
kernel: XFS: xlog_recover_process_data: bad clientid
kernel: XFS: log mount/recovery failed: error 5
kernel: XFS: log mount failed
</snip>
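If a dump of the on-disk log would help diagnose this, I believe xfs_logprint can produce one, e.g.:
<snip>
# Summarize the transactions still sitting in the on-disk log
# (-t gives the transactional view; plain xfs_logprint dumps records)
xfs_logprint -t /dev/sde
</snip>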
When running xfs_repair:
<snip>
[root@machine ~]# xfs_repair /dev/sde
xfs_repair: warning - cannot set blocksize on block device /dev/sde: Invalid argument
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs ...
</snip>
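As far as we can tell, the options from here are the ones the xfs_repair message describes: mount so the log gets replayed (which is exactly the step that fails for us, per above), or destroy the log with -L and accept possible metadata loss. Roughly:
<snip>
# First choice: mount so XFS replays its own log
# (this is the mount that fails in our failover scenario)
mount -t xfs /dev/sde /export

# Last resort: zero the log and repair; unreplayed metadata
# changes are lost, so this risks corruption
xfs_repair -L /dev/sde
</snip>
We'd obviously rather understand the root cause than rely on the destructive path.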
Any advice or insight into what we're doing wrong would be very much
appreciated. My apologies in advance for the somewhat off-topic question.
- John Quigley