
Re: raid50 and 9TB volumes

To: linux-xfs@xxxxxxxxxxx
Subject: Re: raid50 and 9TB volumes
From: Raz <raziebe@xxxxxxxxx>
Date: Mon, 16 Jul 2007 16:57:32 +0300
In-reply-to: <20070716130140.GC31489@xxxxxxx>
References: <5d96567b0707160542t2144c382mbfe3da92f0990694@xxxxxxxxxxxxxx> <20070716130140.GC31489@xxxxxxx>
Sender: xfs-bounce@xxxxxxxxxxx
On 7/16/07, David Chinner <dgc@xxxxxxx> wrote:
On Mon, Jul 16, 2007 at 03:42:28PM +0300, Raz wrote:
> Hello,
> I found that using XFS over raid50 (two raid5s of 8 disks each, with
> raid0 over them) crashes the filesystem when the filesystem is ~9TB.
> Crashing is easy: we simply create a few hundred files, then erase
> them in bulk. The same test passes on 6.4TB filesystems.
> This bug happens on 2.6.22 as well as 2.6.17.7.
> Thank you.
>
> [4391322.839000] Filesystem "md3": XFS internal error
> xfs_alloc_read_agf at line 2176 of file fs/xfs/xfs_alloc.c.  Caller
> 0xc10d31ea
> [4391322.863000]  <c10d36e9> xfs_alloc_read_agf+0x199/0x220
> <c10d31ea> xfs_alloc_fix_freelist+0x41a/0x4b0

Judging by the kernel addresses (<c10d36e9>), you're running
an i386 kernel, right? Which means there's probably a wrapping
issue at 8TB somewhere in the code which has caused an AGF
header to be trashed somewhere lower down in the filesystem.

What does /proc/partitions say? I.e. does the kernel see
the whole 9TB of space?

What does xfs_repair tell you about the corruption? (Assuming
it doesn't OOM, which is quite likely if you really are on
i386.)
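The ~8TB wrap Dave suspects can be sketched numerically. The exact variable is an assumption here, but a signed 32-bit count of 4 KiB blocks (a plausible culprit on i386, where PAGE_SIZE is 4 KiB) overflows at exactly 2^31 x 4 KiB = 8 TiB, which lands between the working 6.4TB setup and the failing ~9TB one:

```python
# Sketch of the kind of 32-bit truncation suspected above (assumption:
# some layer holds a signed 32-bit count of 4 KiB blocks).

PAGE = 4096  # i386 page / typical block size in bytes

def wrap_s32(n):
    """Truncate an integer to a signed 32-bit value, as i386 C would."""
    n &= 0xFFFFFFFF
    return n - (1 << 32) if n >= (1 << 31) else n

# A signed 32-bit count of 4 KiB blocks goes negative past 8 TiB:
blocks_9tb = (9 * 10**12) // PAGE           # ~9 TB volume in 4 KiB blocks
print(wrap_s32(blocks_9tb))                 # negative -> garbage block number

# ...while a 6.4 TB volume still fits:
blocks_6tb = (64 * 10**11) // PAGE
print(wrap_s32(blocks_6tb) == blocks_6tb)   # True
```

This is only an illustration of the failure mode, not the actual variable that wrapped in the 2.6.17/2.6.22 code.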

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

Well, you are right. /proc/partitions says:
....
   8   241  488384001 sdp1
   9     1 3404964864 md1
   9     2 3418684416 md2
   9     3 6823647232 md3

while xfs formats md3 as 9 TB.
If I am using LBD, what is the biggest size I can use on i386?

many thanks
raz

--
Raz

