
[xfs-masters] [Bug 680] New: XFS_WANT_CORRUPTED_RETURN in xfs_alloc.c

To: xfs-master@xxxxxxxxxxx
Subject: [xfs-masters] [Bug 680] New: XFS_WANT_CORRUPTED_RETURN in xfs_alloc.c
From: bugzilla-daemon@xxxxxxxxxxx
Date: Sat, 3 Jun 2006 06:13:29 -0700
Reply-to: xfs-masters@xxxxxxxxxxx
Sender: xfs-masters-bounce@xxxxxxxxxxx
http://oss.sgi.com/bugzilla/show_bug.cgi?id=680

           Summary: XFS_WANT_CORRUPTED_RETURN  in xfs_alloc.c
           Product: Linux XFS
           Version: unspecified
          Platform: PC
        OS/Version: Linux
            Status: NEW
          Severity: normal
          Priority: P2
         Component: XFS kernel code
        AssignedTo: xfs-master@xxxxxxxxxxx
        ReportedBy: billv@xxxxxxxxxxxx


Received this error during an overnight load test.  I have a NAS with three 3Ware
9550 controllers, each with eight 500 GB drives.  I had to create the filesystem
directly on the raw devices (sda, sdb, sdc) without using fdisk, because fdisk
cannot handle partitions larger than 2 TB.  The NAS is a video server and is
being hit with about 400 Mbps of traffic (over NFSv3).  The error occurred after
several hours.  We normally run with 250 GB drives and I have never seen the
error with that configuration.
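[For context on the >2 TB limitation: a DOS partition table, which is what fdisk
writes, can address at most 2^32 512-byte sectors, i.e. 2 TiB.  A hedged sketch
of the two usual workarounds; the device name matches the df output below, and
the exact parted invocation may vary by version:

# Option 1 (used here): no partition table at all, XFS on the raw device.
mkfs.xfs /dev/sda

# Option 2: a GPT label, which is not subject to the 2 TiB DOS-label limit.
parted /dev/sda mklabel gpt
parted /dev/sda mkpart primary xfs 0% 100%
mkfs.xfs /dev/sda1
]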

Stack trace and system information below.

internal error XFS_WANT_CORRUPTED_RETURN at line 298 of file 
fs/xfs/xfs_alloc.c.  Caller 0xc01ee69c
 [<c01ed5a0>] xfs_alloc_fixup_trees+0x1ff/0x370
 [<c01ee69c>] xfs_alloc_ag_vextent_near+0xb97/0xc9b
 [<c01ee69c>] xfs_alloc_ag_vextent_near+0xb97/0xc9b
 [<c01ed8f3>] xfs_alloc_ag_vextent+0x137/0x13c
 [<c01f03b1>] xfs_alloc_vextent+0x464/0x5a0
 [<c020020c>] xfs_bmap_alloc+0x6a7/0x14f1
 [<c02682b9>] submit_bio+0x63/0x100
 [<c0205756>] xfs_bmapi+0x14ea/0x1918
 [<c020ec5b>] xfs_buf_item_relse+0x3f/0x43
 [<c0254863>] xfs_buf_rele+0x28/0x98
 [<c0202a53>] xfs_bmap_do_search_extents+0xf0/0x453
 [<c0237644>] xlog_state_release_iclog+0x30/0xe0
 [<c0245c0f>] xfs_trans_unlock_items+0xd7/0xde
 [<c0231c15>] xfs_iomap_write_allocate+0x329/0x617
 [<c026d7d3>] as_merged_request+0x52/0xe7
 [<c014db2d>] cache_flusharray+0x9f/0xcc
 [<c013c0b6>] test_clear_page_dirty+0xaf/0xc0
 [<c02306c4>] xfs_iomap+0x43f/0x563
 [<c0251c40>] xfs_submit_ioend_bio+0x2d/0x3d
 [<c0251b8f>] xfs_map_blocks+0x57/0x88
 [<c0252d48>] xfs_page_state_convert+0x46b/0x7ab
 [<c0135c03>] find_get_pages_tag+0x41/0x85
 [<c02537ee>] linvfs_writepage+0x69/0xff
 [<c0175753>] mpage_writepages+0x217/0x3ec
 [<c0253785>] linvfs_writepage+0x0/0xff
 [<c013bd8b>] do_writepages+0x4e/0x54
 [<c01739ff>] __sync_single_inode+0x63/0x1e1
 [<c0173bfc>] __writeback_single_inode+0x7f/0x198
 [<c0124616>] del_timer_sync+0x10/0x19
 [<c0366d4b>] schedule_timeout+0x60/0xaa
 [<c0124dca>] process_timeout+0x0/0x9
 [<c0173ecc>] sync_sb_inodes+0x1b7/0x2b3
 [<c0174087>] writeback_inodes+0xbf/0xc8
 [<c013b9eb>] background_writeout+0xb7/0xf1
 [<c013c4a4>] pdflush+0x0/0x47
 [<c013c3e3>] __pdflush+0xaf/0x170
 [<c013c4e3>] pdflush+0x3f/0x47
 [<c013b934>] background_writeout+0x0/0xf1
 [<c012e9a1>] kthread+0xb7/0xbd
 [<c012e8ea>] kthread+0x0/0xbd
 [<c010113d>] kernel_thread_helper+0x5/0xb
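[For readers unfamiliar with the message: XFS_WANT_CORRUPTED_RETURN is XFS's
consistency-check macro; when an on-disk invariant fails (here at
fs/xfs/xfs_alloc.c:298, during freespace-btree fixup), it logs the report above
and returns EFSCORRUPTED rather than proceeding with a corrupt allocator state.
A minimal stand-alone sketch of the pattern; the macro name and line number come
from the log, but this body and the fixup_trees stand-in are illustrative, not
the kernel's exact code:

#include <stdio.h>

/* Linux XFS maps EFSCORRUPTED to EUCLEAN (117). */
#define EFSCORRUPTED 117

/* Sketch of the check-and-bail pattern: report the failure, return an error. */
#define WANT_CORRUPTED_RETURN(expr)                                   \
    do {                                                              \
        if (!(expr)) {                                                \
            fprintf(stderr, "internal error at line %d\n", __LINE__); \
            return EFSCORRUPTED;                                      \
        }                                                             \
    } while (0)

/* Toy stand-in for the check in xfs_alloc_fixup_trees(): the extent found in
 * the by-block freespace btree must agree with the by-size btree. */
static int fixup_trees(long bno_extent, long cnt_extent)
{
    WANT_CORRUPTED_RETURN(bno_extent == cnt_extent);
    return 0;
}

int main(void)
{
    printf("match: %d\n", fixup_trees(100, 100));    /* 0 */
    printf("mismatch: %d\n", fixup_trees(100, 200)); /* 117 */
    return 0;
}
]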

>uname -a
Linux nas 2.6.16.11 #5 SMP Tue May 30 13:06:32 EDT 2006 i686 Intel(R) Xeon(TM) CPU 2.80GHz GenuineIntel GNU/Linux


> cat /proc/meminfo
MemTotal:      3369660 kB
MemFree:        189648 kB
Buffers:        100000 kB
Cached:        2809720 kB
SwapCached:          0 kB
Active:         781900 kB
Inactive:      2132656 kB
HighTotal:     2489792 kB
HighFree:         7840 kB
LowTotal:       879868 kB
LowFree:        181808 kB
SwapTotal:           0 kB
SwapFree:            0 kB
Dirty:          380664 kB
Writeback:        5396 kB
Mapped:           7016 kB
Slab:           210144 kB
CommitLimit:   1684828 kB
Committed_AS:    11432 kB
PageTables:        352 kB
VmallocTotal:   114680 kB
VmallocUsed:      1156 kB
VmallocChunk:   113512 kB

> df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/rd/0                87455     76276      6179  93% /
udev                   1684828       140   1684688   1% /dev
/dev/sda             3417764864 1199423504 2218341360  36% /RaidA
/dev/sdb             3417764864 1198291716 2219473148  36% /RaidB
/dev/sdc             3417764864 1127658484 2290106380  33% /RaidC
none                   1684828         0   1684828   0% /dev/shm

> xfs_info /RaidA/
meta-data=/dev/sda               isize=256    agcount=32, agsize=26702312 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=854473984, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0

> xfs_info /RaidB
meta-data=/dev/sdb               isize=256    agcount=32, agsize=26702312 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=854473984, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0

> xfs_info /RaidC
meta-data=/dev/sdc               isize=256    agcount=32, agsize=26702312 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=854473984, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0

# 3Ware CLI info (three 3Ware 9550 8-port controllers with 500 GB Seagate NL35 drives)
# Latest 9304 3Ware firmware
> info

Ctl   Model        Ports   Drives   Units   NotOpt   RRate   VRate   BBU
------------------------------------------------------------------------
c0    9550SX-8LP   8       8        1       0        5       5       -
c1    9550SX-8LP   8       8        1       0        5       5       -
c2    9550SX-8LP   8       8        1       0        5       5       -

# This is the same for all 3 controllers 

//SteelBox> /c0 show all
/c0 Driver Version = 2.26.02.007
/c0 Model = 9550SX-8LP
/c0 Memory Installed  = 112MB
/c0 Firmware Version = FE9X 3.04.00.005
/c0 Bios Version = BE9X 3.04.00.002
/c0 Monitor Version = BL9X 3.01.00.006
/c0 Serial Number = L20805B5461749
/c0 PCB Version = Rev 032
/c0 PCHIP Version = 1.60
/c0 ACHIP Version = 1.70
/c0 Number of Ports = 8
/c0 Number of Units = 1
/c0 Number of Drives = 8
/c0 Total Optimal Units = 1
/c0 Not Optimal Units = 0
/c0 JBOD Export Policy = off
/c0 Disk Spinup Policy = 1
/c0 Spinup Stagger Time Policy (sec) = 2
/c0 Auto-Carving Policy = off
/c0 Auto-Carving Size = 2048 GB
/c0 Auto-Rebuild Policy = on
/c0 Controller Bus Type = PCIX
/c0 Controller Bus Width = 64 bits
/c0 Controller Bus Speed = 100 Mhz

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    OK             -      64K     3259.56   ON     OFF      ON

Port   Status           Unit   Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0     465.76 GB   976773168     3PM0FRDE
p1     OK               u0     465.76 GB   976773168     3PM0E642
p2     OK               u0     465.76 GB   976773168     3PM0LTNS
p3     OK               u0     465.76 GB   976773168     3PM0KPYH
p4     OK               u0     465.76 GB   976773168     3PM0JVDL
p5     OK               u0     465.76 GB   976773168     3PM0KWB7
p6     OK               u0     465.76 GB   976773168     3PM0KDP2
p7     OK               u0     465.76 GB   976773168     3PM0BVEZ




>vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 1  0      0 192536 100000 2810604    0    0     0 25423 5426 23736  0  8 92  0
 0  1      0 191028 100000 2812032    0    0 19092 80724 6825 24729  0 11 80  8
 2  0      0 188416 100000 2814684    0    0  9456  5070 5640 20985  0  6 78 15
 3  0      0 188844 100000 2813936    0    0     0 71994 5861 24413  0  8 92  0
 0  2      0 370912 100000 2632308    0    0 13516  4623 6543 21121  0 12 72 15
 0  1      0 324436 100000 2678888    0    0  4424 75599 6444 20774  0 13 64 22
 0  1      0 271440 100000 2731996    0    0 10876  2141 7223 20827  0  7 65 28
 0  2      0 204708 100000 2799316    0    0 25072 26898 7291 22558  0  9 69 22
 0  2      0 192072 100000 2810332    0    0  2324 54867 6064 21015  0  9 57 34
 0  0      0 188364 100000 2814888    0    0 15128  2165 7045 24579  0 10 72 18
 2  0      0 192232 100000 2810264    0    0  2404 133752 6687 23970  0 11 87  2
 0  3      0 189824 100000 2813324    0    0 10704  9819 7203 23026  0 10 65 26
 0  2      0 188752 100000 2813528    0    0  9664 111953 6524 22456  0  7 61 33
 0  0      0 189692 100000 2813528    0    0   568 74147 6300 22887  0  9 79 11


net.core.netdev_budget = 300
net.core.somaxconn = 1024
net.core.optmem_max = 10240
net.core.message_burst = 10
net.core.message_cost = 5
net.core.netdev_max_backlog = 1000
net.core.dev_weight = 64
net.core.rmem_default = 8388608
net.core.wmem_default = 131072
net.core.rmem_max = 8388608
net.core.wmem_max = 131072
vm.legacy_va_layout = 0
vm.vfs_cache_pressure = 1000000
vm.block_dump = 0
vm.laptop_mode = 0
vm.max_map_count = 65536
vm.percpu_pagelist_fraction = 0
vm.min_free_kbytes = 131072
vm.drop_caches = 0
vm.lowmem_reserve_ratio = 256   256     32
vm.swappiness = 60
vm.nr_pdflush_threads = 2
vm.dirty_expire_centisecs = 100
vm.dirty_writeback_centisecs = 25
vm.dirty_ratio = 34
vm.dirty_background_ratio = 8
vm.page-cluster = 3
vm.overcommit_ratio = 50
vm.overcommit_memory = 0

> cat /proc/net/rpc/nfsd
rc 302 178277166 35864305
fh 0 0 0 0 0
io 3525459672 1914674708
th 1024 380799 2246.356 1676.872 1257.844 887.276 604.328 424.200 305.712 212.892 149.928 206.724
ra 128 35830332 45 0 0 0 0 0 0 0 0 3719
net 214141753 214141767 0 0
rpc 214140498 0 0 0 0
proc2 18 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
proc3 22 12732 0 0 4641 0 0 35834028 178244293 31857 0 0 0 131 0 0 0 111 0 12750 7 0 0

-- 
Configure bugmail: http://oss.sgi.com/bugzilla/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.

