
XFS file loss -, FC RAID

To: xfs-oss <xfs@xxxxxxxxxxx>
Subject: XFS file loss -, FC RAID
From: Paul Anderson <pha@xxxxxxxxx>
Date: Tue, 28 Jun 2011 11:03:02 -0400
Sender: powool@xxxxxxxxx
I'm sending this error report as an informational point - I'm not sure
much can be done about it at the present time.

We had a machine crash Sunday night (June 26) around 8PM - the
hardware failed due to a Sun J4400 chassis fault.  The XFS file loss
noted in this report was not on this chassis.

On power cycle and subsequent reboot, one of our home directory
volumes, a pair of 40 TiByte Promise RAID6 Fibre Channel SAN arrays
combined into a single LVM volume, lost many files.

The file loss is characterized by numerous files now having a length
of zero.  I lost files that I know were last changed on Friday (June
24), more than two days before the crash.

Kernel is, userland is Ubuntu 10.04; server hardware is a 24-core
Dell R900 with 128 GiBytes of RAM, an LSIFC949E Fibre Channel card, a
bunch of Dell PERC 6 RAID cards, and a lot of direct-attach SAS JBOD
cabinets (mostly J4400, but a few Dell MD1000s).  The boot drive is a
pair of matched 1 TiByte drives in a HW RAID-1 config.

The Promise RAID6 SAN unit where the files were lost is
battery-backed and reports no errors.  The filesystem showed no signs
of distress prior to this, and was less than four weeks old.

Here's the fstab mount options:

/dev/wonderlandhomet/homet  /homet  xfs  inode64,logbufs=8,noatime  0  0
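(A quick way to confirm those options actually took effect, since fstab only records what was requested; the /homet path is of course specific to this host:)

```shell
# Show the options the kernel is actually using for /homet;
# prints a note instead if the volume isn't mounted on this machine.
grep -E '[[:space:]]/homet[[:space:]]+xfs[[:space:]]' /proc/mounts \
  || echo "/homet not mounted"
```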

xfs_info shows:

root@wonderland:~# xfs_info /homet
meta-data=/dev/mapper/wonderlandhomet-homet isize=256 agcount=81, agsize=268435328 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=21484355584, imaxpct=1
         =                       sunit=128    swidth=2816 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
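For reference, the stripe geometry above can be sanity-checked with a little shell arithmetic (sunit and swidth are reported in 4 KiB filesystem blocks; the data-disk count is my own inference from the numbers, not something xfs_info reports):

```shell
# sunit/swidth from xfs_info are in filesystem blocks (bsize=4096 here)
bsize=4096
sunit=128     # blocks per stripe unit
swidth=2816   # blocks per full stripe
echo "stripe unit:  $(( sunit * bsize / 1024 )) KiB"
echo "stripe width: $(( swidth * bsize / 1024 / 1024 )) MiB"
echo "data disks:   $(( swidth / sunit ))"
```

That works out to a 512 KiB stripe unit and an 11 MiB stripe width, i.e. 22 data disks across the two arrays.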

The dmesg log shows no signs of hardware or kernel software problems
up to the point where the directly attached SAS card reported faults
for the cabinet.

The vm tuning parameters are defaults (yes, I know this is bad):

root@louie:/proc/sys/vm# cat dirty_background_bytes
root@louie:/proc/sys/vm# cat dirty_background_ratio
root@louie:/proc/sys/vm# cat dirty_bytes
root@louie:/proc/sys/vm# cat dirty_expire_centisecs
root@louie:/proc/sys/vm# cat dirty_ratio
root@louie:/proc/sys/vm# cat dirty_writeback_centisecs
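One mitigation I'm considering (a sketch, not something I've tested): cap dirty data in bytes rather than relying on the default ratios, so a box with 128 GiBytes of RAM can't accumulate tens of gigabytes of unwritten pages.  The values below are purely illustrative and would go in /etc/sysctl.conf; note that setting the *_bytes knobs overrides the corresponding *_ratio defaults.

```
# start background writeback once 256 MiB is dirty
vm.dirty_background_bytes = 268435456
# block writers once 1 GiB is dirty
vm.dirty_bytes = 1073741824
```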

My main question is: what specific action can I take to minimize the
likelihood of this happening again?  As far as I know, dirty pages
should expire and be flushed to the FC array (two days should be more
than enough), and the FC array itself is stable.
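One note on the application side: zero-length files after a crash are the classic signature of programs that truncate-and-rewrite files without fsync, since delayed allocation can leave the new size on disk while the data was still only in RAM.  That may or may not explain files last touched two days earlier, but it limits the exposure going forward.  A minimal sketch of the safer write-temp/fsync/rename pattern, in shell (filenames are illustrative; sync(1) with a file operand needs a newer coreutils than 10.04 ships):

```shell
set -e
workdir=$(mktemp -d)                # illustrative scratch directory
cd "$workdir"
tmp=$(mktemp datafile.XXXXXX)
printf 'new contents\n' > "$tmp"    # write the replacement data to a temp file
sync "$tmp"                         # fsync just this file (coreutils >= 8.24)
mv "$tmp" datafile                  # atomically rename over the old version
sync .                              # fsync the directory so the rename is durable
```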

The machine was moderately busy, but far from overwhelmingly so.

Feedback welcome...


Paul Anderson
Center for Statistical Genetics
University of Michigan, Ann Arbor
