| To: | Eric Sandeen <sandeen@xxxxxxxxxxx> |
|---|---|
| Subject: | Re: help with xfs_repair on 10TB fs |
| From: | Alberto Accomazzi <aaccomazzi@xxxxxxxxx> |
| Date: | Sun, 18 Jan 2009 15:34:05 -0500 |
| Cc: | xfs@xxxxxxxxxxx |
| In-reply-to: | <49726E94.4060806@xxxxxxxxxxx> |
| References: | <adcf4ef70901170913l693376d7s6fd0395e2c88e10@xxxxxxxxxxxxxx> <4972166D.5000006@xxxxxxxxxxx> <adcf4ef70901171042p31054ae0rb56819fce7b6f47e@xxxxxxxxxxxxxx> <49722875.90202@xxxxxxxxxxx> <adcf4ef70901171514v2fa036a6o9deb0df7d9dd569d@xxxxxxxxxxxxxx> <49726E94.4060806@xxxxxxxxxxx> |
On Sat, Jan 17, 2009 at 6:49 PM, Eric Sandeen <sandeen@xxxxxxxxxxx> wrote:
> Alberto Accomazzi wrote:
>
>> For the record, after upgrading to xfsprogs-2.10.2 as suggested by
>> Eric, xfs_repair completed successfully. Unfortunately now I'm left
>> dealing with quite a mess: 388K files in lost+found, on a
>> filesystem of over 160M inodes. Ugh...
>>
>> Thanks for all your help, though...
>> -- Alberto
>
> Bummer. What was the first thing that went wrong, by the way?

A hardware issue with the underlying RAID, which is managed by a 3ware controller. Although the array runs RAID 6 plus a hot spare, enough bad blocks apparently accumulated on the drives that we experienced data corruption. I'm definitely not happy about it, and I'm trying to figure out whether something faulty is going on here. In fact, I just noticed that the drives in question (Seagate ES.2) have been found to suffer data loss under certain circumstances:

http://seagate.custkb.com/seagate/crm/selfservice/search.jsp?DocId=207931

-- Alberto
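[Editorial note: triaging hundreds of thousands of recovered files by hand is impractical, so a first pass is usually scripted. A minimal sketch, assuming the filesystem is mounted at /mnt/data (a hypothetical path, not from the thread), that counts the recovered entries and groups them by content type so you can decide what is worth keeping:]

```shell
#!/bin/sh
# Hypothetical mount point; xfs_repair names recovered files after
# their inode numbers, so content type is the main clue to identity.
LF=/mnt/data/lost+found

# How many entries did xfs_repair recover?
find "$LF" -maxdepth 1 | wc -l

# Group regular files by what `file` thinks they are, most common first.
find "$LF" -maxdepth 1 -type f -print0 \
    | xargs -0 file -b \
    | sort | uniq -c | sort -rn
```

From there, files of a known type (tarballs, images, text) can be moved into per-type directories for inspection, and the rest discarded.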