
Results:

References: [ +subject:/^(?:^\s*(re|sv|fwd|fw)[\[\]\d]*[:>-]+\s*)*xfs_repair\s+of\s+critical\s+volume\s*$/: 35 ]

Total 35 documents matching your query.

1. xfs_repair of critical volume (score: 1)
Author: Eli Morris <ermorris@xxxxxxxx>
Date: Sun, 31 Oct 2010 00:54:13 -0700
Hi, I have a large XFS filesystem (60 TB) that is composed of 5 hardware RAID 6 volumes. One of those volumes had several drives fail in a very short time and we lost that volume. However, four of th
/archives/xfs/2010-10/msg00373.html (7,806 bytes)

2. Re: xfs_repair of critical volume (score: 1)
Author: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Sun, 31 Oct 2010 04:54:46 -0500
Eli Morris put forth on 10/31/2010 2:54 AM: This isn't the storage that houses the genome data, is it? Unfortunately I don't have an answer for you Eli, or, at least, not one you would like to hear
/archives/xfs/2010-10/msg00375.html (9,495 bytes)

3. Re: xfs_repair of critical volume (score: 1)
Author: Emmanuel Florac <eflorac@xxxxxxxxxxxxxx>
Date: Sun, 31 Oct 2010 15:10:00 +0100
On Sun, 31 Oct 2010 00:54:13 -0700, you wrote: You may still have a slight chance to repair the broken RAID volume. What is the type and model of RAID controller? What is the model of the drives?
/archives/xfs/2010-10/msg00376.html (8,572 bytes)

4. Re: xfs_repair of critical volume (score: 1)
Author: Steve Costaras <stevecs@xxxxxxxxxx>
Date: Sun, 31 Oct 2010 09:41:37 -0500
Do NOT try this. It's only good for some /very/ specific types of issues with older drives. With an array of your size you are probably running relatively current drives (i.e. past 5-7 years) and thi
/archives/xfs/2010-10/msg00377.html (10,496 bytes)

5. Re: xfs_repair of critical volume (score: 1)
Author: Roger Willcocks <roger@xxxxxxxxxxxxxxxx>
Date: Sun, 31 Oct 2010 16:52:13 +0000
Don't do anything which has the potential to write to your drives until you have a full bit-for-bit copy of the existing volumes. In particular, don't run xfs_repair. This is a hardware issue. It
/archives/xfs/2010-10/msg00378.html (10,129 bytes)
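
The precaution in result 5 (take a full bit-for-bit copy before running anything that can write to the drives) is usually done with dd or GNU ddrescue. A minimal sketch, assuming the damaged volume appears as /dev/sdX and a spare device of at least the same size is available as /dev/sdY; both device names are hypothetical:

    # Bit-for-bit copy of the damaged volume onto a spare device.
    # conv=noerror,sync continues past unreadable sectors, padding the
    # failed block with zeroes instead of aborting.
    dd if=/dev/sdX of=/dev/sdY bs=1M conv=noerror,sync status=progress

    # GNU ddrescue (a separate package) retries bad sectors more carefully
    # and records progress in a map file so an interrupted copy can resume.
    ddrescue -f /dev/sdX /dev/sdY /root/sdX.map

Any recovery experiments then run against the copy, leaving the original untouched as a fallback.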

6. Re: xfs_repair of critical volume (score: 1)
Author: Eli Morris <ermorris@xxxxxxxx>
Date: Sun, 31 Oct 2010 12:56:33 -0700
Hi guys, Thanks for all the responses. On the XFS volume that I'm trying to recover here, I've already re-initialized the RAID, so I've kissed that data goodbye. I am using LVM2. Each of the 5 RAID
/archives/xfs/2010-10/msg00379.html (12,424 bytes)

7. Re: xfs_repair of critical volume (score: 1)
Author: Emmanuel Florac <eflorac@xxxxxxxxxxxxxx>
Date: Sun, 31 Oct 2010 21:40:21 +0100
On Sun, 31 Oct 2010 12:56:33 -0700, you wrote: OK, so what we'd like to do is get the backup RAID volume back in working order. You said it's made of 2TB Caviar green drives, but didn't mention th
/archives/xfs/2010-10/msg00380.html (9,098 bytes)

8. Re: xfs_repair of critical volume (score: 1)
Author: Steve Costaras <stevecs@xxxxxxxxxx>
Date: Sun, 31 Oct 2010 16:10:06 -0500
Hi guys, Thanks for all the responses. On the XFS volume that I'm trying to recover here, I've already re-initialized the RAID, so I've kissed that data goodbye. I am using LVM2. Each of the 5 RAID v
/archives/xfs/2010-10/msg00381.html (10,814 bytes)

9. Re: xfs_repair of critical volume (score: 1)
Author: Eli Morris <ermorris@xxxxxxxx>
Date: Sun, 31 Oct 2010 20:40:20 -0700
Hi, Thanks for your help. The RAID is a SCSI-connected, direct-attached storage 16-bay unit made by Maxtronic. It is a Janus 6640, in case that helps anything. At the time of its problem, it was mount
/archives/xfs/2010-10/msg00382.html (10,019 bytes)

10. Re: xfs_repair of critical volume (score: 1)
Author: Emmanuel Florac <eflorac@xxxxxxxxxxxxxx>
Date: Mon, 1 Nov 2010 11:07:34 +0100
On Sun, 31 Oct 2010 20:40:20 -0700, you wrote: Alas, never heard of it... Looks like quite low-end hardware, without redundant controllers. Probably similar to Infortrend. The possibility of an
/archives/xfs/2010-11/msg00001.html (8,342 bytes)

11. Re: xfs_repair of critical volume (score: 1)
Author: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Mon, 01 Nov 2010 10:03:18 -0500
Eli Morris put forth on 10/31/2010 2:56 PM: In addition to the suggestions you've already received, I'd suggest you reach out to your colleagues at SDSC. They'd most certainly have quite a bit of s
/archives/xfs/2010-11/msg00009.html (9,923 bytes)

12. Re: xfs_repair of critical volume (score: 1)
Author: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Mon, 01 Nov 2010 17:21:28 -0500
One thing you could do is make an xfs_metadump image, xfs_mdrestore it to a sparse file, and then do a real xfs_repair run on that. You can then mount the repaired image and see what's there. So from
/archives/xfs/2010-11/msg00011.html (8,461 bytes)
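
The procedure Eric Sandeen outlines in result 12 maps onto a few commands. A minimal sketch, assuming the filesystem sits on /dev/vg01/vol5 and that /scratch is a filesystem with room for a large sparse image (all paths are illustrative):

    # Dump only the filesystem metadata (no file data) into a compact file.
    xfs_metadump /dev/vg01/vol5 /scratch/vol5.metadump

    # Restore the dump into a sparse image file; its apparent size equals the
    # filesystem size, but it only allocates blocks for the metadata.
    xfs_mdrestore /scratch/vol5.metadump /scratch/vol5.img

    # Run a real, writing repair against the image (-f: the target is a
    # regular file) instead of against the live volume.
    xfs_repair -f /scratch/vol5.img

    # Loop-mount the repaired image read-only to inspect what the directory
    # tree would look like after repairing the real filesystem. Since the
    # image holds metadata only, file contents will not be readable.
    mount -o loop,ro /scratch/vol5.img /mnt/inspect

This keeps the trial repair completely separate from the damaged volume.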

13. Re: xfs_repair of critical volume (score: 1)
Author: Eli Morris <ermorris@xxxxxxxx>
Date: Mon, 1 Nov 2010 16:32:54 -0700
Hi Eric, Thanks for the suggestion. I tried it out and this is what happened when I ran xfs_mdrestore: xfs_mdrestore: cannot set filesystem image size: File too large Any ideas? Is the file as large
/archives/xfs/2010-11/msg00012.html (9,497 bytes)

14. Re: xfs_repair of critical volume (score: 1)
Author: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Mon, 01 Nov 2010 19:14:22 -0500
Guessing you tried to create it on an ext3 filesystem? The file has a maximum offset == the size of the filesystem, but it is sparse so does not take up that much disk space. ext3 can't go beyond a 2
/archives/xfs/2010-11/msg00014.html (10,560 bytes)
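
Result 14's point (the restored image is sparse, so its apparent size equals the filesystem size even though it allocates almost no blocks, and ext3 tops out at roughly 2 TB per file) can be reproduced with a throwaway sparse file; the path is illustrative:

    # Create a 60 TB sparse file: only metadata is written, no data blocks.
    truncate -s 60T /xfs-scratch/sparse-test

    # Apparent size vs. actual allocation: ls -l shows 60 TB, du shows ~0.
    ls -lh /xfs-scratch/sparse-test
    du -h /xfs-scratch/sparse-test

    # The same command on an ext3 mount fails with "File too large" once the
    # requested size exceeds ext3's per-file limit (about 2 TB at 4 KiB blocks).

So the fix is simply to place the xfs_mdrestore target on a filesystem that supports very large sparse files, such as XFS or ext4.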

15. Re: xfs_repair of critical volume (score: 1)
Author: Eli Morris <ermorris@xxxxxxxx>
Date: Fri, 12 Nov 2010 00:48:02 -0800
Hi guys, For reference: vol5 is the 62TB XFS filesystem on CentOS 5.2 that was composed of 5 RAID units. One went bye-bye and was re-initialized. I was able to get it back in the LVM volume wit
/archives/xfs/2010-11/msg00167.html (11,791 bytes)

16. Re: xfs_repair of critical volume (score: 1)
Author: Michael Monnerie <michael.monnerie@xxxxxxxxxxxxxxxxxxx>
Date: Fri, 12 Nov 2010 14:22:28 +0100
The filesystem is not designed to detect the case where part of the disk contents has been replaced with zeroes, so it cannot find those errors. You will have to check each file to see whether its contents are still valid, or maybe bogus. I find
/archives/xfs/2010-11/msg00170.html (10,314 bytes)

17. Re: xfs_repair of critical volume (score: 1)
Author: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Fri, 12 Nov 2010 16:14:52 -0600
Michael Monnerie put forth on 11/12/2010 7:22 AM: This isn't "robustness" Michael. If anything it's a serious problem. XFS is reporting that hundreds or thousands of files that have been physically r
/archives/xfs/2010-11/msg00176.html (8,806 bytes)

18. Re: xfs_repair of critical volume (score: 1)
Author: Eli Morris <ermorris@xxxxxxxx>
Date: Fri, 12 Nov 2010 15:01:47 -0800
Hi Michael, thanks for the advice. Let me see if I can give you and everyone else a little more information and clarify this problem somewhat. And if there is nothing practical that can be done, then
/archives/xfs/2010-11/msg00179.html (13,238 bytes)

19. Re: xfs_repair of critical volume (score: 1)
Author: Emmanuel Florac <eflorac@xxxxxxxxxxxxxx>
Date: Sat, 13 Nov 2010 09:19:38 +0100
On Fri, 12 Nov 2010 16:14:52 -0600, you wrote: I beg to disagree. Would it be better if instead of still having some of the data, everything was lost? At what level of accidental destruction do yo
/archives/xfs/2010-11/msg00188.html (9,361 bytes)

20. Re: xfs_repair of critical volume (score: 1)
Author: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Sat, 13 Nov 2010 03:28:45 -0600
Emmanuel Florac put forth on 11/13/2010 2:19 AM: You've missed the point of this sub-thread discussion, or I did. He stated that having the metadata show the files still exist is a positive thing. Th
/archives/xfs/2010-11/msg00189.html (9,917 bytes)

