
RE: xfs data loss

To: Eric Sandeen <sandeen@xxxxxxxxxxx>
Subject: RE: xfs data loss
From: "Passerone, Daniele" <Daniele.Passerone@xxxxxxx>
Date: Fri, 28 Aug 2009 21:42:55 +0200
Accept-language: it-IT, de-CH
Cc: "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>
In-reply-to: <4A981133.6060009@xxxxxxxxxxx>
References: <B9A7B002C7FAFC469D4229539E909760308DA651DE@xxxxxxxxxxxxxxxxxxxxxxxxxxx> <4A975A35.3060809@xxxxxxxxxxx> <B9A7B002C7FAFC469D4229539E909760308DA65345@xxxxxxxxxxxxxxxxxxxxxxxxxxx> <4A981133.6060009@xxxxxxxxxxx>
Thread-index: AcooA3SC8DG5+cSzQ/WrnNCXoFSl1QAE7tIQ
Thread-topic: xfs data loss
Hi Eric, 
and thank you for your attention and your time.
>Ok then perhaps I don't know what you mean by "power shock"
We work at a materials science center, and there are large facilities for
simulating earthquakes and the like (believe it or not).
These facilities sometimes have the side effect of inducing strong
disturbances in the power supply.
We suspect that such a disturbance could have affected
the system of 48 drives that constitutes our NAS server.
But of course, this is only a hypothesis.
>On the server as well?  Or just clients?  -really- no server-side errors
>in the logs?
>Are you sure the storage hardware & the md volume is in ok shape?

This is a very good question. 
Indeed, the md volume (md6) adjacent to the affected one (md4) lost two
disks upon reboot, but a repair of THAT filesystem (md6) worked.

>Not yet, still wondering what really happened.
Me too.
Thanks a lot.


>> Thank you!
>> Daniele
>> _______________________________________________
>> xfs mailing list
>> xfs@xxxxxxxxxxx
>> http://oss.sgi.com/mailman/listinfo/xfs
