
oops on xfs on a linear raid on a sparc64 box



Now that I have xfs compiled and "working", I'm trying to test it under
several different conditions.

Today I tried to use xfs on a linear raid, and it oopses.

The setup is a CVS XFS tree from about a week ago (just after the VFS
quota patches went in) on an Ultra 1, running a linear raid[1] over 7
scsi disks.

As usual, I create the raid device and then the filesystem on top of it.
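
Roughly like this (the mount point is made up for this mail):

   mkraid /dev/md1                  # assemble the linear array per /etc/raidtab
   mkfs.xfs -f /dev/md1             # put xfs on it
   mount -t xfs /dev/md1 /mnt/test  # and mount it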

After the xfs-raid is mounted, I run an rsync from a local server with
about #$"&"&" files and some 30G of data, as a basic smoke test (i.e.
before running the xfstests), and I get an oops. (I also tried it with
a recursive ftp.)
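
Something like this (hostname and paths made up):

   rsync -a --stats otherbox:/export/bigtree/ /mnt/test/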

After rebooting the box, I try to mount the partition again, and I get
another oops. The trace below is from that second oops. Should I send
the first oops as well? If so, please say so. Would it help if I tested
it on other raid levels?

Unable to handle kernel paging request at virtual address 0000000000002000
tsk->{mm,active_mm}->context = 0000000000000476
tsk->{mm,active_mm}->pgd = fffff80001683000
              \|/ ____ \|/
              "@'/ .. \`@"
              /_| \__/ |_\
                 \__U_/
mount(125): Oops
TSTATE: 0000009911009603 TPC: 000000000052c848 TNPC: 000000000052c84c Y: 08000000    Not tainted
Using defaults from ksymoops -t elf32-sparc -a sparc
g0: fffff80001ec7120 g1: fffff8000122d0a0 g2: 0000000000000000 g3: 0000000000000000
g4: fffff80000000000 g5: 000000000000001b g6: fffff80000fa4000 g7: 0000000000000002
o0: 00000000006a5c00 o1: 0000000000000000 o2: fffff80011d92d20 o3: 0000000000000000
o4: 0000000000002000 o5: fffff80011d92040 sp: fffff80000fa5fe1 ret_pc: 000000000052c910
l0: 0000000000696800 l1: 0000000000000001 l2: 000000000253d500 l3: 0000000002600244
l4: 00000000005c13b0 l5: 0000000000677a38 l6: 0000000000673a48 l7: 0000000000000008
i0: 0000000000000000 i1: 0000000000002000 i2: 000000000000000c i3: 0000000000000060
i4: 00000000005afbf8 i5: 0000000000000000 i6: fffff80000fa60a1 i7: 00000000004fcc50
Caller[00000000004fcc50]
Caller[00000000004fce0c]
Caller[00000000004fd28c]
Caller[00000000004fcefc]
Caller[0000000000505174]
Caller[00000000004e5e24]
Caller[00000000004e6138]
Caller[00000000004e6a54]
Caller[00000000004e6350]
Caller[00000000004e6650]
Caller[00000000004e9554]
Caller[00000000004e2bac]
Caller[00000000004eab54]
Caller[00000000004f2130]
Caller[00000000004f22d4]
Caller[00000000004f231c]
Caller[00000000005061fc]
Caller[0000000000465404]
Caller[0000000000465a10]
Caller[00000000004790f4]
Caller[0000000000479548]
Caller[00000000004274a8]
Caller[0000000000410974]
Caller[0000000000012610]
Instruction DUMP: 01000000  9de3bf40  11001a97 <d616601c> 90122320  d45e6068  9332e008  932a7003  80a2a000

>>PC;  0052c848 <generic_make_request+8/180>   <=====
>>O7;  0052c910 <generic_make_request+d0/180>
>>I7;  004fcc50 <_pagebuf_page_io+230/2e0>
Trace; 004fcc50 <_pagebuf_page_io+230/2e0>
Trace; 004fce0c <_page_buf_page_apply+10c/120>
Trace; 004fd28c <pagebuf_segment_apply+8c/100>
Trace; 004fcefc <pagebuf_iorequest+dc/160>
Trace; 00505174 <xfsbdstrat+34/60>
Trace; 004e5e24 <xlog_bread+44/80>
Trace; 004e6138 <xlog_find_verify_cycle+78/100>
Trace; 004e6a54 <xlog_find_zeroed+154/1c0>
Trace; 004e6350 <xlog_find_head+10/300>
Trace; 004e6650 <xlog_find_tail+10/2c0>
Trace; 004e9554 <xlog_recover+14/c0>
Trace; 004e2bac <xfs_log_mount+6c/c0>
Trace; 004eab54 <xfs_mountfs+7d4/ce0>
Trace; 004f2130 <xfs_cmountfs+4f0/5c0>
Trace; 004f22d4 <xfs_mount+54/80>
Trace; 004f231c <xfs_vfsmount+1c/40>
Trace; 005061fc <linvfs_read_super+1fc/340>
Trace; 00465404 <get_sb_bdev+244/300>
Trace; 00465a10 <do_kern_mount+b0/1c0>
Trace; 004790f4 <do_add_mount+14/260>
Trace; 00479548 <do_mount+168/1a0>
Trace; 004274a8 <sys32_mount+108/160>
Trace; 00410974 <linux_sparc_syscall32+34/40>
Trace; 00012610 Before first symbol

Code;  0052c83c <__make_request+75c/760>
0000000000000000 <_PC>:
Code;  0052c83c <__make_request+75c/760>
   0:   01 00 00 00       nop 
Code;  0052c840 <generic_make_request+0/180>
   4:   9d e3 bf 40       save  %sp, -192, %sp
Code;  0052c844 <generic_make_request+4/180>
   8:   11 00 1a 97       sethi  %hi(0x6a5c00), %o0
Code;  0052c848 <generic_make_request+8/180>   <=====
   c:   d6 16 60 1c       lduh  [ %i1 + 0x1c ], %o3   <=====
Code;  0052c84c <generic_make_request+c/180>
  10:   90 12 23 20       or  %o0, 0x320, %o0
Code;  0052c850 <generic_make_request+10/180>
  14:   d4 5e 60 68       unknown
Code;  0052c854 <generic_make_request+14/180>
  18:   93 32 e0 08       srl  %o3, 8, %o1
Code;  0052c858 <generic_make_request+18/180>
  1c:   93 2a 70 03       unknown
Code;  0052c85c <generic_make_request+1c/180>
  20:   80 a2 a0 00       cmp  %o2, 0
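
(For what it's worth: the faulting address 0000000000002000 is exactly
the value in %i1, and the faulting instruction is an lduh off %i1, so
it looks like generic_make_request was handed a bogus buffer_head
pointer by pagebuf during log recovery -- but I may well be misreading
the disassembly.)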

[1] /etc/raidtab
raiddev /dev/md1
          raid-level      linear
          nr-raid-disks   7
          nr-spare-disks  0
          persistent-superblock 1
          chunk-size      32
          device          /dev/scsi/host1/bus0/target3/lun0/part1
          raid-disk       0
          device          /dev/scsi/host1/bus0/target11/lun0/part1
          raid-disk       1
          device          /dev/scsi/host1/bus0/target4/lun0/part1
          raid-disk       2
          device          /dev/scsi/host1/bus0/target12/lun0/part1
          raid-disk       3
          device          /dev/scsi/host1/bus0/target5/lun0/part1
          raid-disk       4
          device          /dev/scsi/host1/bus0/target8/lun0/part1
          raid-disk       5
          device          /dev/scsi/host1/bus0/target14/lun0/part1
          raid-disk       6

-- 
Alvaro Figueroa