
Partitions (or lack thereof) and xfs repair

To: linux-xfs@xxxxxxxxxxx
Subject: Partitions (or lack thereof) and xfs repair
From: AndyLiebman@xxxxxxx
Date: Fri, 17 Dec 2004 10:07:08 EST
Sender: linux-xfs-bounce@xxxxxxxxxxx
Hi, 

I wrote in yesterday about a serious file system corruption incident. This 
issue is related, but separate, so I want to create a separate post. 

To summarize yesterday's issue: a 1.75 TB xfs file system got corrupted. Upon running 
xfs_repair, I ended up with about 3000 files in a lost+found directory. They are 
listed by inode number; the file names seem to have gotten lost. I don't know if this 
is normal or not for xfs_repair. 
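
For concreteness, the sequence I ran was roughly the following (/mnt/array is just a 
stand-in for my real mount point): 

  umount /mnt/array
  xfs_repair /dev/sda                  # disconnected inodes get moved into lost+found
  mount /dev/sda /mnt/array
  ls /mnt/array/lost+found             # entries are named by inode number
  file /mnt/array/lost+found/* | less  # guessing file types helps re-identify contents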

I am wondering if my particular setup method could have contributed to a failure of 
xfs_repair. When I create a RAID-5 array with a 3ware 9000 SATA card, I get a 1.75 TB 
device -- let's call it /dev/sda. On the advice of some Linux experts who should know 
about these things, I DO NOT create any partitions on the device /dev/sda. I do not 
run fdisk on the device. When I put the xfs filesystem on it, I use: 

"mkfs.xfs /dev/sda"

xfs doesn't complain about not being on a partition. I haven't had any issues until 
this one -- after many months and many machines that have been set up this way. The 
only thing that worries me is that if a user opens up a disk management utility such 
as the Mandrake "DiskDrake" program, he will get a message saying "I don't understand 
this device. It doesn't have any information that I can understand. Do you want to 
create a partition on it?" I don't use DiskDrake, so I don't really care about this 
message. And I have made it difficult for my users to run DiskDrake out in the field. 

But does the fact that I don't run fdisk on the device, and the lack of at least a 
single partition, in any way make the xfs file system vulnerable? 

I would appreciate your speedy and thoughtful comments. 

Regards, 
Andy Liebman

