We ran xfs_repair and were able to get the filesystem back into a running state.
This is fine for test & dev, but in production it won't be
acceptable. What other data do we need to get to the bottom of this?
On Wed, May 27, 2015 at 4:27 PM, Eric Sandeen <sandeen@xxxxxxxxxxx> wrote:
> That's not a crash. That is xfs detecting on disk corruption which likely
> happened at some time prior. You should unmount and run xfs_repair, possibly
> with -n first if you would like to do a dry run to see what it might do. If
> you get fresh corruption after a full repair, then that becomes more
> interesting. It's possible that you have a problem with the underlying block
> layer or it's possible that it is an xfs bug - but I think this is not
> something that we have seen before.
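The sequence Eric describes can be sketched as below. The device name and mount point are assumptions for illustration; substitute the actual XFS volume. Note this needs root and an unmounted filesystem, so it is not something to paste blindly into production:

```shell
# Assumed device and mount point -- replace with the real ones.
DEV=/dev/vdb1
MNT=/srv/node

umount "$MNT"            # xfs_repair requires the filesystem to be unmounted
xfs_repair -n "$DEV"     # -n = no-modify (dry run): report what would be fixed
xfs_repair "$DEV"        # actual repair pass
mount "$DEV" "$MNT"
```

If corruption reappears after a clean repair, that points at the block layer under the VM or a genuine XFS bug, per the comment above.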
>> On May 27, 2015, at 6:06 PM, Shrinand Javadekar <shrinand@xxxxxxxxxxxxxx> wrote:
>> I am running Openstack Swift in a VM with XFS as the underlying
>> filesystem. This is generating a metadata heavy workload on XFS.
>> Essentially, it is creating a new directory and a new file (256KB) in
>> that directory. This file has extended attributes of size 243 bytes.
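The workload described above can be approximated with a small sketch; the paths and the xattr name are illustrative assumptions, not Swift's actual on-disk names:

```shell
# Hedged sketch of the reported pattern: a new directory, a 256 KB file,
# and a ~243-byte extended attribute on that file.
base=$(mktemp -d)
mkdir -p "$base/objects/d1"

# 256 KB data file
dd if=/dev/zero of="$base/objects/d1/obj.data" bs=1024 count=256 status=none

# 243-byte xattr; name is hypothetical. Ignored if the fs lacks xattr support.
setfattr -n user.swift.metadata \
         -v "$(printf 'x%.0s' $(seq 1 243))" \
         "$base/objects/d1/obj.data" 2>/dev/null || true
```

Looping this at high rates inside the VM should make the metadata pressure reproducible for comparison against bare metal.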
>> I am seeing the following two crashes of the machine:
>> I have only seen these when running in a VM. We have run several tests
>> on physical servers but have never seen these problems.
>> Are there any known issues with XFS running on VMs?
>> Thanks in advance.
>> xfs mailing list