
RE: XFS and LBD patch on 2.4.20 or 2.4.22

To: "'Nathan Scott'" <nathans@xxxxxxx>
Subject: RE: XFS and LBD patch on 2.4.20 or 2.4.22
From: Gustavo Rincon <grincon@xxxxxxxxxxx>
Date: Tue, 25 Nov 2003 08:34:33 -0600
Cc: "'linux-xfs@xxxxxxxxxxx'" <linux-xfs@xxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
Well, for this test I'm using kernel 2.4.22 plus a version of the LBD patch
(linux-2.4.28-18-rh-lbd.patch, taken from the SGI ftp; I don't know if this
patch is still available there) and
SGI XFS snapshot-2.4.22-2003-11-10_23:49_UTC.

I made some modifications to the LBD patch to make it apply to 2.4.22;
this patch adds #define HAVE_SECTOR_T
to include/linux/types.h.

I compiled the kernel with XFS built as a module and the LBD option enabled
(using gcc (GCC) 3.2 20020903), and
it was installed on the target machine.

I modified include/xfs_types.h in the xfsprogs-2.5.6 directory: the
#define XFS_BIG_FILESYSTEMS 0 on line 221 was changed to
#define XFS_BIG_FILESYSTEMS 1.   xfsprogs was then compiled and
installed on the target machine.


Hardware used:
A dual Pentium 4 Xeon motherboard with 1 GByte of RAM.
3ware 7000-series RAID controller with 12 Serial ATA 250 GByte disks;
two RAID0 LUNs were defined (each with 6 disks, i.e. 1.5 TByte LUNs).

Testing performed:
        1.- Using the raidtools, an md0 device was created. The following
raidtab file was used:
                        raiddev /dev/md0
                                raid-level 0
                                nr-raid-disks 2
                                chunk-size 512
                                device  /dev/sda3
                                raid-disk       0
                                device  /dev/sdb3
                                raid-disk       1

        2.-  mkfs.xfs -f /dev/md0 was executed.

        3.-  mount -t xfs /dev/md0 /mnt was executed.
                the dmesg output was:
                        SGI XFS snapshot-2.4.22-2003-11-10_23:49_UTC with ACLs, large block numbers, no debug enabled
                        SGI XFS Quota Management subsystem

        4.-  cd /mnt was executed.

        5.-  xfs_mkfile 600G test1 was executed successfully.
        6.-  xfs_mkfile 500G test2 was executed successfully.
        7.-  xfs_mkfile 400G test3 was executed successfully.
        8.-  xfs_mkfile 400G test4 was executed successfully.
        9.-  xfs_mkfile 500G test5 was executed successfully.
        10.- xfs_mkfile 500G test6 was executed successfully.
        11.- ls -l /mnt was executed and all the created files were
present in the list.
        12.- cd / and umount /mnt
        13.- xfs_repair /dev/md0 (look in attachment log1 to see the output
of xfs_repair)

        Partial output of xfs_repair

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
bad fwd (right) sibling pointer (saw 5888878 should be NULLDFSBNO)
        in inode 2057 (data fork) bmap btree block 695161421
bad data fork in inode 2057
cleared inode 2057
bad fwd (right) sibling pointer (saw 5888944 should be NULLDFSBNO)
        in inode 2058 (data fork) bmap btree block 695161487
bad data fork in inode 2058
cleared inode 2058
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - clear lost+found (if it exists) ...
        - clearing existing "lost+found" inode
        - deleting existing "lost+found" entry
        - check for inodes claiming duplicate blocks...
        - agno = 0
entry "test5" in shortform directory 2048 references free inode 2057
junking entry "test5" in directory inode 2048
entry "test6" in shortform directory 2048 references free inode 2058
junking entry "test6" in directory inode 2048

        14.- mount -t xfs /dev/md0 /mnt
        15.- ls -l /mnt  was executed and test5 and test6 were no longer in the listing.

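A side note (my own back-of-the-envelope calculation, assuming blocks were
allocated roughly in file-creation order): the two junked files, test5 and
test6, are exactly the ones whose data would sit past the 2 TiB (2048 GiB)
boundary that a 32-bit sector index cannot address:

```python
# Cumulative size of the xfs_mkfile test files, in GiB.  Files whose data
# extends past 2048 GiB (2 TiB) are flagged -- under the assumption of
# roughly sequential allocation, those are the blocks beyond the reach of
# a 32-bit sector index.
sizes = [("test1", 600), ("test2", 500), ("test3", 400),
         ("test4", 400), ("test5", 500), ("test6", 500)]

total = 0
for name, gib in sizes:
    total += gib
    flag = "  <-- extends past 2 TiB" if total > 2048 else ""
    print(f"{name}: cumulative {total} GiB{flag}")
# test5 (cumulative 2400 GiB) and test6 (2900 GiB) are flagged
```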
Now I will try to reproduce the problem with the xfsprogs-2.6.0 version.
Something I noticed in the new version of xfsprogs is that
xfs_types.h changed from the 2.5.6 version:

#if defined(CONFIG_LBD) || (defined(HAVE_SECTOR_T) && (BITS_PER_LONG == 64))
# define XFS_BIG_BLKNOS 1
# if BITS_PER_LONG == 64
#  define XFS_BIG_INUMS 1
# else
#  define XFS_BIG_INUMS 0
# endif
#else
# define XFS_BIG_BLKNOS 0
# define XFS_BIG_INUMS  0
#endif

Thank you
Gustavo Rincon

-----Original Message-----
From: Nathan Scott [mailto:nathans@xxxxxxx]
Sent: Monday, November 24, 2003 4:04 PM
To: Gustavo Rincon
Cc: 'linux-xfs@xxxxxxxxxxx'
Subject: Re: XFS and LBD patch on 2.4.20 or 2.4.22

hi there,

On Mon, Nov 24, 2003 at 02:43:06PM -0600, Gustavo Rincon wrote:
> Hi guys, I have XFS 1.3.1 compiled on linux kernel-2.4.20 (Red Hat
> version) and kernel-2.4.22 (vanilla) and
> I was wondering what changes I have to make to xfsprogs in order to create
> and support filesystems greater than 2 Terabytes.


> I compiled the kernel with the gelato LBD patch and tested with
> REISERFS on an MD device; the size of the device is 2.7 Terabytes,
> and everything looks okay, but when I test with XFS, sometimes when
> I run xfs_repair some files are erased by the xfs_repair utility.

There are changes in this area in CVS - could you try a current
CVS kernel?  Also, can you describe your tests in detail, and is
the problem reproducible for you?  (if so, hopefully it is for me
too, so I can fix it ;)

> All this testing was done on a PENTIUM 4 XEON, using gcc (GCC) 3.2 20020903
> (Red Hat Linux 8.0 3.2-7) to compile the Kernel and the xfsprogs.
> The xfsprogs utility  (2.5.6) was compiled with the xfs_types.h file
> modified (The XFS_BIG_FILESYSTEMS #define was turned ON).

That's unnecessary - it is always "on" during an xfsprogs build;
see xfsprogs/include/builddefs.in.

> Do I have to compile the xfsprogs using or modifying more #defines? 



