
[Bug 680] XFS_WANT_CORRUPTED_RETURN in xfs_alloc.c

To: xfs-masters@xxxxxxxxxxx
Subject: [Bug 680] XFS_WANT_CORRUPTED_RETURN in xfs_alloc.c
From: bugzilla-daemon@xxxxxxxxxxx
Date: Tue, 13 Jan 2009 15:05:11 -0600
http://oss.sgi.com/bugzilla/show_bug.cgi?id=680


bo@xxxxxxxxxx changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|RESOLVED                    |REOPENED
         Resolution|FIXED                       |




------- Additional Comments From bo@xxxxxxxxxx  2009-01-13 15:05 CST -------
This is still a problem. It is happening on a machine with a 16-drive RAID 5 array
on a 3ware 9650SE controller. The machine is a backup server, and we're running
4 simultaneous reverse binary diffs (rdiff-backup) on 2 GB files.

Stack trace and system information below.
[105249.002236] XFS internal error XFS_WANT_CORRUPTED_RETURN at line 296 of file
/build/buildd/linux-2.6.24/fs/xfs/xfs_alloc.c.  Caller 0xf92e9138
[105249.015164] Pid: 16747, comm: rdiff-backup Not tainted 2.6.24-22-server #1
[105249.022154]  [<f92e75b6>] xfs_alloc_fixup_trees+0x2e6/0x3a0 [xfs]
[105249.028464]  [<f92e9138>] xfs_alloc_ag_vextent_near+0x448/0xa00 [xfs]
[105249.035116]  [<f92e9138>] xfs_alloc_ag_vextent_near+0x448/0xa00 [xfs]
[105249.041774]  [<f92e97b7>] xfs_alloc_ag_vextent+0xc7/0x120 [xfs]
[105249.047933]  [<f92ea075>] xfs_alloc_vextent+0x285/0x510 [xfs]
[105249.053868]  [<f92fdde3>] xfs_bmap_btalloc+0x513/0xc50 [xfs]
[105249.059698]  [<f93223e9>] xfs_iomap_eof_want_preallocate+0x149/0x220 [xfs]
[105249.066757]  [<f92feca1>] xfs_bmapi+0x751/0x16a0 [xfs]
[105249.072065]  [<f931be4d>] xfs_iext_bno_to_ext+0xad/0x1f0 [xfs]
[105249.078085]  [<f93284af>] xlog_grant_log_space+0x22f/0x270 [xfs]
[105249.084280]  [<f9334814>] xfs_trans_reserve+0x84/0x200 [xfs]
[105249.090120]  [<f932315f>] xfs_iomap_write_allocate+0x2df/0x510 [xfs]
[105249.096661]  [<f9326098>] xlog_assign_tail_lsn+0x28/0x60 [xfs]
[105249.102696]  [<f9328251>] xfs_log_release_iclog+0x11/0x40 [xfs]
[105249.108805]  [<f932224a>] xfs_iomap+0x4ba/0x510 [xfs]
[105249.114039]  [<f9341bf3>] xfs_map_blocks+0x43/0x90 [xfs]
[105249.119520]  [<f9343466>] xfs_page_state_convert+0x3e6/0x7f0 [xfs]
[105249.125917]  [<f93439a1>] xfs_vm_writepage+0x61/0xf0 [xfs]
[105249.131578]  [<c0178c98>] __writepage+0x8/0x30
[105249.136177]  [<c017928f>] write_cache_pages+0x20f/0x300
[105249.141551]  [<c0178c90>] __writepage+0x0/0x30
[105249.146156]  [<c01793a0>] generic_writepages+0x20/0x30
[105249.151446]  [<c01793db>] do_writepages+0x2b/0x50
[105249.156314]  [<c017374a>] __filemap_fdatawrite_range+0x7a/0xa0
[105249.162315]  [<c0173a23>] filemap_fdatawrite+0x23/0x30
[105249.167594]  [<c01ba87e>] do_fsync+0x4e/0xb0
[105249.172024]  [<c01ba905>] __do_fsync+0x25/0x40
[105249.176658]  [<c010838a>] sysenter_past_esp+0x6b/0xa1
[105249.181844]  =======================
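
For anyone reading the trace: XFS_WANT_CORRUPTED_RETURN is XFS's consistency-check
macro. When the checked condition on the on-disk free-space metadata is false, it
logs the "XFS internal error ... at line 296 of file .../xfs_alloc.c" message seen
above and returns a corruption error up the allocation path instead of proceeding.
The following userspace sketch only illustrates the pattern; the error value, the
fake_free_extent struct, the check_alloc_range() helper, and the specific invariant
are hypothetical stand-ins, not the actual check at xfs_alloc.c line 296.

#include <stdio.h>

/* Illustrative error value; the kernel defines its own EFSCORRUPTED. */
#define EFSCORRUPTED 990

/* Simplified version of the check-and-bail pattern: if the expression
 * is false, report where the check failed and return an error instead
 * of operating on metadata that looks corrupted. */
#define XFS_WANT_CORRUPTED_RETURN(expr)                              \
	do {                                                         \
		if (!(expr)) {                                       \
			fprintf(stderr,                              \
				"XFS internal error "                \
				"XFS_WANT_CORRUPTED_RETURN "         \
				"at line %d of file %s\n",           \
				__LINE__, __FILE__);                 \
			return -EFSCORRUPTED;                        \
		}                                                    \
	} while (0)

/* Hypothetical stand-in for a free-space btree record. */
struct fake_free_extent {
	unsigned long startblock;
	unsigned long blockcount;
};

/* Invariant of the kind xfs_alloc_fixup_trees() enforces: the free
 * extent recorded in the btree must fully cover the range that the
 * allocator is about to carve out of it. */
static int check_alloc_range(const struct fake_free_extent *rec,
			     unsigned long want_bno,
			     unsigned long want_len)
{
	XFS_WANT_CORRUPTED_RETURN(rec->startblock <= want_bno);
	XFS_WANT_CORRUPTED_RETURN(rec->startblock + rec->blockcount >=
				  want_bno + want_len);
	return 0;
}

int main(void)
{
	struct fake_free_extent good = { 100, 50 };
	struct fake_free_extent bad  = { 100, 10 };

	/* Record covers blocks 110..129: passes, prints 0. */
	printf("good: %d\n", check_alloc_range(&good, 110, 20));
	/* Record is too short to cover the range: trips the check. */
	printf("bad:  %d\n", check_alloc_range(&bad, 110, 20));
	return 0;
}

In the real code a failure here means the allocation-group free-space btrees
disagree with what the allocator expects, i.e. on-disk metadata inconsistency
rather than a transient condition that can simply be retried.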

# uname -a
Linux st02 2.6.24-22-server #1 SMP Mon Nov 24 19:14:19 UTC 2008 i686 GNU/Linux

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             487M  270M  217M  56% /
varrun                2.0G  3.9M  2.0G   1% /var/run
varlock               2.0G     0  2.0G   0% /var/lock
udev                  2.0G  100K  2.0G   1% /dev
devshm                2.0G     0  2.0G   0% /dev/shm
/dev/sda2             244M   90M  154M  37% /boot
/dev/sda8             1.6G   33M  1.6G   3% /export
/dev/mapper/backup-tmp
                      4.5G   33M  4.5G   1% /tmp
/dev/sda6             3.9G  338M  3.5G   9% /usr
/dev/sda7             3.9G  1.1G  2.8G  29% /var
/dev/mapper/backup-vmbackup
                      4.0T  3.3T  749G  82% /vmbackup
/dev/mapper/backup-filekeeper
                      1.5T   12G  1.5T   1% /filekeeper

# xfs_info /vmbackup
meta-data=/dev/mapper/backup-vmbackup isize=256    agcount=43, agsize=25165824 blks
         =                       sectsz=512   attr=1
data     =                       bsize=4096   blocks=1073741824, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096  
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
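
As a sanity check, the geometry above matches the df output: 1073741824 data
blocks × 4096-byte blocks = 4 TiB (the 4.0T shown for /vmbackup), split into
agcount=43 allocation groups of agsize=25165824 blocks (96 GiB) each, with the
last group only partially sized since 43 × 96 GiB would exceed 4 TiB.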


# tw_cli 
//st02> info

Ctl   Model        Ports   Drives   Units   NotOpt   RRate   VRate   BBU
------------------------------------------------------------------------
c0    9650SE-16ML  16      16       1       0        4       4       OK       

//st02> /c0 show all
/c0 Driver Version = 2.26.02.010
/c0 Model = 9650SE-16ML
/c0 Memory Installed  = 224MB
/c0 Firmware Version = FE9X 3.06.00.003
/c0 Bios Version = BE9X 3.06.00.002
/c0 Monitor Version = BL9X 3.05.00.002
/c0 Serial Number = L322621A6510250
/c0 PCB Version = Rev 032
/c0 PCHIP Version = 2.00
/c0 ACHIP Version = 1.90
/c0 Number of Ports = 16
/c0 Number of Units = 1
/c0 Number of Drives = 16
/c0 Total Optimal Units = 1
/c0 Not Optimal Units = 0 
/c0 JBOD Export Policy = off
/c0 Disk Spinup Policy = 1
/c0 Spinup Stagger Time Policy (sec) = 1
/c0 Auto-Carving Policy = off
/c0 Auto-Carving Size = 2048 GB
/c0 Auto-Rebuild Policy = on
/c0 Controller Bus Type = PCIE
/c0 Controller Bus Width = 8 lanes
/c0 Controller Bus Speed = 2.5 Ghz

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    OK             -      64K     9778.74   ON     OFF      OFF     
 

Port   Status           Unit   Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0     698.63 GB   1465149168    3QD0WK3Z            
p1     OK               u0     698.63 GB   1465149168    3QD0J9F6            
p2     OK               u0     698.63 GB   1465149168    3QD0E328            
p3     OK               u0     698.63 GB   1465149168    3QD0W4A5            
p4     OK               -      698.63 GB   1465149168    3QD0LGRD            
p5     OK               u0     698.63 GB   1465149168    3QD0XPA9            
p6     OK               u0     698.63 GB   1465149168    3QD0JGC5            
p7     OK               u0     698.63 GB   1465149168    3QD0FHWE            
p8     OK               u0     698.63 GB   1465149168    3QD0XR9E            
p9     OK               u0     698.63 GB   1465149168    3QD0DB8H            
p10    OK               u0     698.63 GB   1465149168    3QD0VYKF            
p11    OK               u0     698.63 GB   1465149168    3QD0MRRD            
p12    OK               u0     698.63 GB   1465149168    3QD0XL2M            
p13    OK               u0     698.63 GB   1465149168    3QD0JGLM            
p14    OK               u0     698.63 GB   1465149168    3QD0E978            
p15    OK               u0     698.63 GB   1465149168    3QD0L4KC            

Name  OnlineState  BBUReady  Status    Volt     Temp     Hours  LastCapTest
---------------------------------------------------------------------------
bbu   On           Yes       OK        OK       OK       255    10-Aug-2007  





