Hi,
if you cat /proc/partitions you'll get the major and minor numbers for
dm-0; then ls -l /dev/mapper/ and match those numbers to find the
corresponding vg/lv device.
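For example (the #blocks figure below is made up for illustration; the
253,0 pair is taken from your ls output further down):

  $ cat /proc/partitions
  major minor  #blocks  name
   253     0   4194304  dm-0

  $ ls -l /dev/mapper/
  brw------- 1 root root 253, 0 Mar 22 10:37 vg0-lv_home

Major 253, minor 0 match in both, so "dm-0" in the log messages is
vg0-lv_home, i.e. /dev/vg0/lv_home -- which would confirm the guess
in your mail below.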
Adam
On Thu, Mar 25, 2004 at 12:48:10PM +1200, Steve Wray wrote:
> On Thu, 25 Mar 2004 12:24, Nathan Scott wrote:
> > On Thu, Mar 25, 2004 at 09:47:34AM +1200, Steve Wray wrote:
> > > We have a machine with:
> > > SATA, software RAID, LVM2, and XFS filesystems.
> > > LVM2 is appropriately configured to work on software RAID.
> > >
> > > We see corruption on XFS filesystems on real partitions on normal
> > > IDE drives (not SATA and no software RAID or LVM) as well as on
> > > LVM volumes on SATA drives.
> > >
> > > Corruption occurs during normal use; there have been no power
> > > failures or kernel panics.
> >
> > hi there,
> >
> > Any chance you have a reproducible test case for hitting this?
>
> Well, if we leave it up overnight there will always be some corruption
> by morning :-/
>
> > The output from xfs_db on that inode (21214761) would be useful
> > information too. Also the xfs_db superblock dump (or xfs_info
> > output) would be handy.
>
> I have no idea how to map an error message back to the filesystem it
> refers to. The log entries report things like:
>
> Filesystem "dm-0"
>
> How do I interpret dm-0 as a /dev entry?
>
> I am guessing that this entry in /dev/mapper:
> brw------- 1 root root 253, 0 Mar 22 10:37 vg0-lv_home
>
> implies that dm-0 refers to /dev/vg0/lv_home (an XFS filesystem that
> has had these errors).
>
> However, I'm not sure where I would see the log entries for /var (on
> /dev/hda5) in there...
>