
To: <linux-xfs@xxxxxxxxxxx>
Subject: HELP!!!
From: Knuth Posern <posern@xxxxxxxxxxxxxxxxxxxxxxxxx>
Date: Tue, 16 Oct 2001 02:47:36 +0200 (CEST)
Cc: <posern@xxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
Hi.

I have (or had?!) a software RAID-5 with the following /etc/raidtab:
        raiddev /dev/md0
                raid-level              5
                nr-raid-disks           4
                nr-spare-disks          0
                persistent-superblock   1
                chunk-size              128

        ####### RAID-devices
                device                  /dev/hde1
                raid-disk               0

                device                  /dev/hdg1
                raid-disk               1

                device                  /dev/hdi1
                raid-disk               2

                device                  /dev/hdk1
                raid-disk               3
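
(Side note: with RAID-5 one disk's worth of capacity goes to parity, so the
usable size should be 3 x 45034752 data blocks per disk - and that is, if I
computed right, exactly the md0 size that /proc/mdstat reports at the end of
this mail. A quick sanity check in the shell:)
___________________________________________________________________________
# 3 data disks times 45034752 blocks each (partition size minus the
# md superblock area) should equal the md0 size from /proc/mdstat:
echo $(( 3 * 45034752 ))
135104256
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^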

I built the RAID half a year ago and formatted it with XFS (under a 2.4.5
kernel). At the moment I run 2.4.10-xfs.

The machine is a Debian unstable Linux server.

The following happened to me:

While I was playing an MP3 file on a console, the following kernel
messages were dumped to the console:
___________________________________________________________________________
hde: dma_intr: status=0x51 { DriveReady SeekComplete Error }
hde: dma_intr: error=0x40 { UncorrectableError }, LBAsect=56410433, sector=56410368
end_request: I/O error, dev 21:01 (hde), sector 56410368
raid5: Disk failure on hde1, disabling device. Operation continuing on 2 devices
md: recovery thread got woken up ...
md0: no spare disk to reconstruct array! -- continuing in degraded mode
md: recovery thread finished ...
md: updating md0 RAID superblock on device
md: hdi1 [events: 000000de](write) hdi1's sb offset: 45034816
md: hdg1 [events: 000000de](write) hdg1's sb offset: 45034816
md: (skipping faulty hde1 )
XFS: device 0x900- XFS write error in file system meta-data block 0x40 in md(9,0)
XFS: device 0x900- XFS write error in file system meta-data block 0x40 in md(9,0)
XFS: device 0x900- XFS write error in file system meta-data block 0x40 in md(9,0)
XFS: device 0x900- XFS write error in file system meta-data block 0x40 in md(9,0)
XFS: device 0x900- XFS write error in file system meta-data block 0x40 in md(9,0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
I then switched to single-user mode (runlevel 1):
____________________________________________________________________________
Give root password for maintenance
(or type Control-D for normal startup):
jolie:~# umount /raid
xfs_unmount: xfs_ibusy says error/16
XFS unmount got error 16
linvfs_put_super: vfsp/0xdf467520 left dangling!
VFS: Busy inodes after unmount. Self-destruct in 5 seconds.  Have a nice day...
jolie:~# mount
/dev/hda3 on / type ext2 (rw,errors=remount-ro,errors=remount-ro)
proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/md0 on /mnt/raid type xfs (rw)
jolie:~# lsof
...
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The lsof did NOT show any open files on /mnt/raid.
So I tried again to unmount /mnt/raid:
_____________________________________________________________________________
jolie:~#
jolie:~# umount /mnt/raid
umount: /mnt/raid: not mounted
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
But now it was already unmounted?!
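
(If I understand the messages right, the VFS completed the unmount despite
the XFS error - that is what the "left dangling" line suggests - so both the
kernel and /etc/mtab considered the filesystem gone. Something like this
would show the kernel's own view; the grep pattern is just illustrative:)
___________________________________________________________________________
# /proc/mounts reflects what the kernel actually has mounted,
# independently of /etc/mtab:
grep md0 /proc/mounts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^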

So I tried to mount it...
____________________________________________________________________________
jolie:~#
jolie:~# mount /mnt/raid
XFS: SB read failed
I/O error in filesystem ("md(9,0)") meta-data dev 0x900 block 0x0
       ("xfs_readsb") error 5 buf count 512
mount: wrong fs type, bad option, bad superblock on /dev/md0,
       or too many mounted file systems
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
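
(Error 5 is EIO, i.e. reading block 0 of /dev/md0 itself failed. A crude,
read-only way to test whether the md device delivers any data at all - just
a sketch:)
___________________________________________________________________________
# try to read the first sector of the array, writing nothing:
dd if=/dev/md0 of=/dev/null bs=512 count=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
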
I rebooted the computer - and got the following during bootup:
____________________________________________________________________________
Starting raid devices: (read) hde1's sb offset: 45034816 [events: 000000dd]
(read) hdg1's sb offset: 45034816 [events: 000000df]
(read) hdi1's sb offset: 45034816 [events: 000000df]
md: autorun ...
md: considering hdi1 ...
md:  adding hdi1 ...
md:  adding hdg1 ...
md:  adding hde1 ...
md: created md0
md: bind<hde1,1>
md: bind<hdg1,2>
md: bind<hdi1,3>
md: running: <hdi1><hdg1><hde1>
md: hdi1's event counter: 000000df
md: hdg1's event counter: 000000df
md: hde1's event counter: 000000dd
md: superblock update time inconsistency -- using the most recent one
md: freshest: hdi1
md: kicking non-fresh hde1 from array!
md: unbind<hde1,2>
md: export_rdev(hde1)
md0: removing former faulty hde1!
md0: max total readahead window set to 1536k
md0: 3 data-disks, max readahead per data-disk: 512k
raid5: device hdi1 operational as raid disk 2
raid5: device hdg1 operational as raid disk 1
raid5: not enough operational devices for md0 (2/4 failed)
RAID5 conf printout:
 --- rd:4 wd:2 fd:2
 disk 0, s:0, o:0, n:0 rd:0 us:1 dev:[dev 00:00]
 disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdg1
 disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdi1
 disk 3, s:0, o:0, n:3 rd:3 us:1 dev:[dev 00:00]
raid5: failed to run raid set md0
md: pers->run() failed ...
md :do_md_run() returned -22
md: md0 stopped.
md: unbind<hdi1,1>
md: export_rdev(hdi1)
md: unbind<hdg1,0>
md: export_rdev(hdg1)
md: ... autorun DONE.
done.
Checking all file systems...
fsck 1.25 (20-Sep-2001)
Setting kernel variables.
Loading the saved-state of the serial devices...
Mounting local filesystems...
XFS: SB read failed
I/O error in filesystem ("md(9,0)") meta-data dev 0x900 block 0x0
       ("xfs_readsb") error 5 buf count 512
mount: wrong fs type, bad option, bad superblock on /dev/md0,
       or too many mounted file systems
       (could this be the IDE device where you in fact use
       ide-scsi so that sr0 or sda or so is needed?)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
I logged in and edited /etc/raidtab to add a SPARE disk:
______________________________________________________________________________
raiddev /dev/md0
        raid-level              5
        nr-raid-disks           4
        nr-spare-disks          1
        persistent-superblock   1
        chunk-size              128

####### RAID-devices
        device                  /dev/hde1
        raid-disk               0

        device                  /dev/hdg1
        raid-disk               1

        device                  /dev/hdi1
        raid-disk               2

        device                  /dev/hdk1
        raid-disk               3

####### Spare disks:
       device                  /dev/hdc1
       spare-disk              1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
I connected an identical hard drive (the same model as the other RAID
disks) as /dev/hdc.

And rebooted again - without any change.

I then read in the Software-RAID HOWTO (from January 2000) that one should
simply remove the faulty drive and connect a new drive in its place.
So I moved the hard disk from /dev/hdc to /dev/hde (and edited
/etc/raidtab back to how it was before, without spare disks!).

And rebooted.

md0 didn't start the array - because /dev/hde was 0 KB in size (or
something like that).
That was because I had forgotten to create a partition on /dev/hde - so I
created the single partition (laid out like on the other RAID drives).

And rebooted again - but md0 gave a "Failed autostart of /dev/md0" again.

The Software-RAID HOWTO then told me to run "raidhotadd /dev/md0 /dev/hde1".
I tried that, but it said something like: "/dev/md0 - no such raid is
running".

So I tried to get /dev/md0 RUNNING again.
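
(As far as I understand the raidtools, raidhotadd only works on an array
that is already running; starting a stopped array by hand would normally be
done with raidstart, which reads /etc/raidtab - though I suppose it would
have refused here too, with two stale members:)
___________________________________________________________________________
# start the array described in /etc/raidtab by hand:
raidstart /dev/md0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^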

Section 6.1 of the Software-RAID HOWTO mentioned "mkraid /dev/md0 --force".

So I tried:
___________________________________________________________________________
jolie:~# mkraid /dev/md0 --force
--force and the new RAID 0.90 hot-add/hot-remove functionality should be
 used with extreme care! If /etc/raidtab is not in sync with the real array
 configuration, then a --force will DESTROY ALL YOUR DATA. It's especially
 dangerous to use -f if the array is in degraded mode.

 PLEASE dont mention the --really-force flag in any email, documentation or
 HOWTO, just suggest the --force flag instead. Thus everybody will read
 this warning at least once :) It really sucks to LOSE DATA. If you are
 confident that everything will go ok then you can use the --really-force
 flag. Also, if you are unsure what this is all about, dont hesitate to
 ask questions on linux-raid@xxxxxxxxxxxxxxxx
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
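
(In hindsight: before trying anything destructive one should probably image
the component partitions read-only first. A sketch - /backup is a made-up
path, and it assumes enough free space there:)
___________________________________________________________________________
# copy one raw component partition somewhere safe before experimenting:
dd if=/dev/hdg1 of=/backup/hdg1.img bs=1M
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
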
And because I thought the RAID needs a valid SUPERBLOCK on /dev/hde1, AND
the real array configuration and /etc/raidtab ARE in SYNC, I then tried it
(and hopefully did NOT destroy all my data?!?!?!):
___________________________________________________________________________
jolie:~# mkraid /dev/md0 --really-force
DESTROYING the contents of /dev/md0 in 5 seconds, Ctrl-C if unsure!
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/hde1, 45034888kB, raid superblock at 45034816kB
disk 1: /dev/hdg1, 45034888kB, raid superblock at 45034816kB
disk 2: /dev/hdi1, 45034888kB, raid superblock at 45034816kB
disk 3: /dev/hdk1, 45034888kB, raid superblock at 45034816kB
md: bind<hde1,1>
md: bind<hdg1,2>
md: bind<hdi1,3>
md: bind<hdk1,4>
md: hdk1's event counter: 00000000
md: hdi1's event counter: 00000000
md: hdg1's event counter: 00000000
md: hde1's event counter: 00000000
md: md0: raid array is not clean -- starting background reconstruction
md0: max total readahead window set to 1536k
md0: 3 data-disks, max readahead per data-disk: 512k

raid5: allocated 4339kB for md0
raid5: raid level 5 set md0 active with 4 out of 4 devices, algorithm 0
raid5: raid set md0 not clean; reconstructing parity
RAID5 conf printout:
 --- rd:4 wd:4 fd:0
 disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hde1
 disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdg1
 disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdi1
 disk 3, s:0, o:1, n:3 rd:3 us:1 dev:hdk1
RAID5 conf printout:
 --- rd:4 wd:4 fd:0
 disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hde1
 disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdg1
 disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdi1
 disk 3, s:0, o:1, n:3 rd:3 us:1 dev:hdk1
md: updating md0 RAID superblock on device
md: hdk1 [events: 00000001](write) hdk1's sb offset: 45034816
md: syncing RAID array md0
md: minimum _guaranteed_ reconstruction speed: 100 KB/sec/disc.
md: using maximum available idle IO bandwith (but not more than 100000 KB/sec) for reconstruction.
md: using 124k window, over a total of 45034752 blocks.
md: hdi1 [events: 00000001](write) hdi1's sb offset: 45034816
md: hdg1 [events: 00000001](write) hdg1's sb offset: 45034816
md: hde1 [events: 00000001](write) hde1's sb offset: 45034816
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
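
(If I read the mkraid output right, the 0.90 superblock sits in the last
64 kB block of each partition, aligned to a 64 kB boundary - the numbers
above fit that exactly:)
___________________________________________________________________________
# partition size 45034888 kB, rounded down to a 64 kB boundary, minus 64 kB:
echo $(( 45034888 / 64 * 64 - 64 ))
45034816
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
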
And I tried again to hot-add /dev/hde1:
___________________________________________________________________________
jolie:~# raidhotadd /dev/md0 /dev/hde1
md: trying to hot-add hde1 to md0 ...
/dev/md0: can not hot-add disk: disk busy!
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Then I checked /proc/mdstat, where it said something about reconstructing
- which hopefully is a good sign... ?!
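
(To follow the rebuild continuously, something like this would do -
assuming the procps "watch" utility is installed:)
___________________________________________________________________________
# re-display the resync progress every 5 seconds:
watch -n 5 cat /proc/mdstat
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^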

But:
___________________________________________________________________________
jolie:~# mount /mnt/raid
XFS: bad magic number
XFS: SB validate failed
mount: wrong fs type, bad option, bad superblock on /dev/md0,
       or too many mounted file systems
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
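
(For what it's worth: a valid XFS superblock starts with the magic string
"XFSB" in the very first sector of the device, so one can peek at what is
actually there without writing anything - and "xfs_repair -n" would
likewise only report damage, not touch the disk:)
___________________________________________________________________________
# dump the first sector; a healthy XFS filesystem starts with "X F S B":
dd if=/dev/md0 bs=512 count=1 2>/dev/null | od -c | head
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
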
So I just rebooted again, in the hope that the RAID autostart during boot
would bring some new/other result. But "mount /mnt/raid" still gives the
same error!!!???

What can I do? Is my data lost? If so: is there ANY CHANCE to get at
least SOME of it back SOMEHOW (it doesn't matter how difficult)!?

???

Help would be VERY, VERY, VERY much appreciated!!!

:-|


Greetings,

Knuth Posern.


And just in case it can help, I append these things:

        0.) a "cat /proc/mdstat" from the situation right now!

        1.) the actual raid-5 startup messages (which appear during boot-up)

        2.) the kernel syslog entries from the moment of the "RAID-CRASH"

        3.) the raid-5 startup messages from BEFORE this all happened!!!!

Items 1.-3. - as you will probably notice - are taken from the
/var/log/messages logfile.



0.)
______________________________________________________________________________
jolie:/var/log# cat /proc/mdstat
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 hdk1[3] hdi1[2] hdg1[1] hde1[0]
      135104256 blocks level 5, 128k chunk, algorithm 0 [4/4] [UUUU]

unused devices: <none>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

1.)
______________________________________________________________________________
Oct 16 01:31:35 jolie kernel: md: raid5 personality registered as nr 4
Oct 16 01:31:35 jolie kernel: raid5: measuring checksumming speed
Oct 16 01:31:35 jolie kernel:    8regs     :  1614.400 MB/sec
Oct 16 01:31:35 jolie kernel:    32regs    :  1146.000 MB/sec
Oct 16 01:31:35 jolie kernel:    pIII_sse  :  1907.200 MB/sec
Oct 16 01:31:35 jolie kernel:    pII_mmx   :  2094.800 MB/sec
Oct 16 01:31:35 jolie kernel:    p5_mmx    :  2204.800 MB/sec
Oct 16 01:31:35 jolie kernel: raid5: using function: pIII_sse (1907.200 MB/sec)
Oct 16 01:31:35 jolie kernel: md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
Oct 16 01:31:35 jolie kernel: md: Autodetecting RAID arrays.
Oct 16 01:31:35 jolie kernel: md: autorun ...
Oct 16 01:31:35 jolie kernel: md: ... autorun DONE.
Oct 16 01:31:35 jolie kernel: NET4: Linux TCP/IP 1.0 for NET4.0
Oct 16 01:31:35 jolie kernel: IP Protocols: ICMP, UDP, TCP, IGMP
Oct 16 01:31:35 jolie kernel: IP: routing cache hash table of 4096 buckets, 32Kbytes
Oct 16 01:31:35 jolie kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 16 01:31:35 jolie kernel: klips_info:ipsec_init: KLIPS startup, FreeS/WAN IPSec version: snap2001sep30b
Oct 16 01:31:35 jolie kernel: NET4: Unix domain sockets 1.0/SMP for Linux NET4.0.
Oct 16 01:31:35 jolie kernel: VFS: Mounted root (ext2 filesystem) readonly.
Oct 16 01:31:35 jolie kernel: Freeing unused kernel memory: 212k freed
Oct 16 01:31:35 jolie kernel: Adding Swap: 489940k swap-space (priority -1)
Oct 16 01:31:35 jolie kernel: Adding Swap: 104384k swap-space (priority -2)
Oct 16 01:31:35 jolie kernel: ISDN subsystem Rev: 1.114.6.14/1.94.6.7/1.140.6.8/1.85.6.6/1.21.6.1/1.5.6.3 loaded
Oct 16 01:31:35 jolie kernel: (read) hde1's sb offset: 45034816 [events: 00000002]
Oct 16 01:31:35 jolie kernel: (read) hdg1's sb offset: 45034816 [events: 00000002]
Oct 16 01:31:35 jolie kernel: (read) hdi1's sb offset: 45034816 [events: 00000002]
Oct 16 01:31:35 jolie kernel: (read) hdk1's sb offset: 45034816 [events: 00000002]
Oct 16 01:31:35 jolie kernel: md: autorun ...
Oct 16 01:31:35 jolie kernel: md: considering hdk1 ...
Oct 16 01:31:35 jolie kernel: md:  adding hdk1 ...
Oct 16 01:31:35 jolie kernel: md:  adding hdi1 ...
Oct 16 01:31:35 jolie kernel: md:  adding hdg1 ...
Oct 16 01:31:35 jolie kernel: md:  adding hde1 ...
Oct 16 01:31:35 jolie kernel: md: created md0
Oct 16 01:31:35 jolie kernel: md: bind<hde1,1>
Oct 16 01:31:35 jolie kernel: md: bind<hdg1,2>
Oct 16 01:31:35 jolie kernel: md: bind<hdi1,3>
Oct 16 01:31:35 jolie kernel: md: bind<hdk1,4>
Oct 16 01:31:35 jolie kernel: md: running: <hdk1><hdi1><hdg1><hde1>
Oct 16 01:31:35 jolie kernel: md: hdk1's event counter: 00000002
Oct 16 01:31:35 jolie kernel: md: hdi1's event counter: 00000002
Oct 16 01:31:35 jolie kernel: md: hdg1's event counter: 00000002
Oct 16 01:31:35 jolie kernel: md: hde1's event counter: 00000002
Oct 16 01:31:35 jolie kernel: md0: max total readahead window set to 1536k
Oct 16 01:31:35 jolie kernel: md0: 3 data-disks, max readahead per data-disk: 512k
Oct 16 01:31:35 jolie kernel: raid5: device hdk1 operational as raid disk 3
Oct 16 01:31:35 jolie kernel: raid5: device hdi1 operational as raid disk 2
Oct 16 01:31:35 jolie kernel: raid5: device hdg1 operational as raid disk 1
Oct 16 01:31:35 jolie kernel: raid5: device hde1 operational as raid disk 0
Oct 16 01:31:35 jolie kernel: raid5: allocated 4339kB for md0
Oct 16 01:31:35 jolie kernel: raid5: raid level 5 set md0 active with 4 out of 4 devices, algorithm 0
Oct 16 01:31:35 jolie kernel: RAID5 conf printout:
Oct 16 01:31:35 jolie kernel:  --- rd:4 wd:4 fd:0
Oct 16 01:31:35 jolie kernel:  disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hde1
Oct 16 01:31:35 jolie kernel:  disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdg1
Oct 16 01:31:35 jolie kernel:  disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdi1
Oct 16 01:31:35 jolie kernel:  disk 3, s:0, o:1, n:3 rd:3 us:1 dev:hdk1
Oct 16 01:31:35 jolie kernel: RAID5 conf printout:
Oct 16 01:31:35 jolie kernel:  --- rd:4 wd:4 fd:0
Oct 16 01:31:35 jolie kernel:  disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hde1
Oct 16 01:31:35 jolie kernel:  disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdg1
Oct 16 01:31:35 jolie kernel:  disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdi1
Oct 16 01:31:35 jolie kernel:  disk 3, s:0, o:1, n:3 rd:3 us:1 dev:hdk1
Oct 16 01:31:35 jolie kernel: md: updating md0 RAID superblock on device
Oct 16 01:31:35 jolie kernel: md: hdk1 [events: 00000003](write) hdk1's sb offset: 45034816
Oct 16 01:31:35 jolie kernel: md: hdi1 [events: 00000003](write) hdi1's sb offset: 45034816
Oct 16 01:31:35 jolie kernel: md: hdg1 [events: 00000003](write) hdg1's sb offset: 45034816
Oct 16 01:31:35 jolie kernel: md: hde1 [events: 00000003](write) hde1's sb offset: 45034816
Oct 16 01:31:35 jolie kernel: md: ... autorun DONE.
Oct 16 01:31:35 jolie kernel: XFS: bad magic number
Oct 16 01:31:35 jolie kernel: XFS: SB validate failed
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

2.)
________________________________________________________________________________
Oct 14 10:36:26 jolie kernel: hde: dma_intr: status=0x51 { DriveReady SeekComplete Error }
Oct 14 10:36:26 jolie kernel: hde: dma_intr: error=0x40 { UncorrectableError }, LBAsect=56410433, sector=56410368
Oct 14 10:36:26 jolie kernel: end_request: I/O error, dev 21:01 (hde), sector 56410368
Oct 14 10:36:26 jolie kernel: md: recovery thread got woken up ...
Oct 14 10:36:26 jolie kernel: md: recovery thread finished ...
Oct 14 10:36:26 jolie kernel: md: updating md0 RAID superblock on device
Oct 14 10:36:26 jolie kernel: md: hdi1 [events: 000000de](write) hdi1's sb offset: 45034816
Oct 14 10:36:26 jolie kernel: md: hdg1 [events: 000000de](write) hdg1's sb offset: 45034816
Oct 14 10:36:26 jolie kernel: md: (skipping faulty hde1 )
Oct 14 10:36:26 jolie kernel: XFS: device 0x900- XFS write error in file system meta-data block 0x40 in md(9,0)
Oct 14 10:36:56 jolie last message repeated 2 times
Oct 14 10:37:41 jolie last message repeated 3 times
Oct 14 10:37:49 jolie kernel: I/O error in filesystem ("md(9,0)") meta-data dev 0x900 block 0x2000002
Oct 14 10:37:49 jolie kernel:        ("xfs_trans_read_buf") error 5 buf count 512
Oct 14 10:37:49 jolie kernel: I/O error in filesystem ("md(9,0)") meta-data dev 0x900 block 0x2a91b00
Oct 14 10:37:49 jolie kernel:        ("xfs_trans_read_buf") error 5 buf count 8192
Oct 14 10:37:49 jolie kernel: xfs_force_shutdown(md(9,0),0x1) called from line 408 of file xfs_trans_buf.c.  Return address = 0xc01d9779
Oct 14 10:37:49 jolie kernel: I/O Error Detected.  Shutting down filesystem: md(9,0)
Oct 14 10:37:49 jolie kernel: Please umount the filesystem, and rectify the problem(s)
Oct 14 10:48:31 jolie -- MARK --
Oct 14 10:48:31 jolie -- MARK --
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

3.)
_________________________________________________________________________________
Oct  3 02:10:20 jolie kernel: md: raid5 personality registered as nr 4
Oct  3 02:10:20 jolie kernel: raid5: measuring checksumming speed
Oct  3 02:10:20 jolie kernel:    8regs     :  1614.800 MB/sec
Oct  3 02:10:20 jolie kernel:    32regs    :  1145.600 MB/sec
Oct  3 02:10:20 jolie kernel:    pIII_sse  :  1922.800 MB/sec
Oct  3 02:10:20 jolie kernel:    pII_mmx   :  2094.400 MB/sec
Oct  3 02:10:20 jolie kernel:    p5_mmx    :  2204.800 MB/sec
Oct  3 02:10:20 jolie kernel: raid5: using function: pIII_sse (1922.800 MB/sec)
Oct  3 02:10:20 jolie kernel: md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
Oct  3 02:10:20 jolie kernel: md: Autodetecting RAID arrays.
Oct  3 02:10:20 jolie kernel: md: autorun ...
Oct  3 02:10:20 jolie kernel: md: ... autorun DONE.
Oct  3 02:10:20 jolie kernel: NET4: Linux TCP/IP 1.0 for NET4.0
Oct  3 02:10:20 jolie kernel: IP Protocols: ICMP, UDP, TCP, IGMP
Oct  3 02:10:20 jolie kernel: IP: routing cache hash table of 4096 buckets, 32Kbytes
Oct  3 02:10:20 jolie kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct  3 02:10:20 jolie kernel: klips_info:ipsec_init: KLIPS startup, FreeS/WAN IPSec version: snap2001sep30b
Oct  3 02:10:20 jolie kernel: NET4: Unix domain sockets 1.0/SMP for Linux NET4.0.
Oct  3 02:10:20 jolie kernel: VFS: Mounted root (ext2 filesystem) readonly.
Oct  3 02:10:20 jolie kernel: Freeing unused kernel memory: 208k freed
Oct  3 02:10:20 jolie kernel: Adding Swap: 489940k swap-space (priority -1)
Oct  3 02:10:20 jolie kernel: Adding Swap: 104384k swap-space (priority -2)
Oct  3 02:10:20 jolie kernel: (read) hde1's sb offset: 45034816 [events: 000000d6]
Oct  3 02:10:20 jolie kernel: (read) hdg1's sb offset: 45034816 [events: 000000d6]
Oct  3 02:10:20 jolie kernel: (read) hdi1's sb offset: 45034816 [events: 000000d6]
Oct  3 02:10:20 jolie kernel: md: autorun ...
Oct  3 02:10:20 jolie kernel: md: considering hdi1 ...
Oct  3 02:10:20 jolie kernel: md:  adding hdi1 ...
Oct  3 02:10:20 jolie kernel: md:  adding hdg1 ...
Oct  3 02:10:20 jolie kernel: md:  adding hde1 ...
Oct  3 02:10:20 jolie kernel: md: created md0
Oct  3 02:10:20 jolie kernel: md: bind<hde1,1>
Oct  3 02:10:20 jolie kernel: md: bind<hdg1,2>
Oct  3 02:10:20 jolie kernel: md: bind<hdi1,3>
Oct  3 02:10:20 jolie kernel: md: running: <hdi1><hdg1><hde1>
Oct  3 02:10:20 jolie kernel: md: hdi1's event counter: 000000d6
Oct  3 02:10:20 jolie kernel: md: hdg1's event counter: 000000d6
Oct  3 02:10:20 jolie kernel: md: hde1's event counter: 000000d6
Oct  3 02:10:20 jolie kernel: md0: max total readahead window set to 1536k
Oct  3 02:10:20 jolie kernel: md0: 3 data-disks, max readahead per data-disk: 512k
Oct  3 02:10:20 jolie kernel: raid5: device hdi1 operational as raid disk 2
Oct  3 02:10:20 jolie kernel: raid5: device hdg1 operational as raid disk 1
Oct  3 02:10:20 jolie kernel: raid5: device hde1 operational as raid disk 0
Oct  3 02:10:20 jolie kernel: raid5: allocated 4339kB for md0
Oct  3 02:10:20 jolie kernel: RAID5 conf printout:
Oct  3 02:10:20 jolie kernel:  --- rd:4 wd:3 fd:1
Oct  3 02:10:20 jolie kernel:  disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hde1
Oct  3 02:10:20 jolie kernel:  disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdg1
Oct  3 02:10:20 jolie kernel:  disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdi1
Oct  3 02:10:20 jolie kernel:  disk 3, s:0, o:0, n:3 rd:3 us:1 dev:[dev 00:00]
Oct  3 02:10:20 jolie kernel: RAID5 conf printout:
Oct  3 02:10:20 jolie kernel:  --- rd:4 wd:3 fd:1
Oct  3 02:10:20 jolie kernel:  disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hde1
Oct  3 02:10:20 jolie kernel:  disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdg1
Oct  3 02:10:20 jolie kernel:  disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdi1
Oct  3 02:10:20 jolie kernel:  disk 3, s:0, o:0, n:3 rd:3 us:1 dev:[dev 00:00]
Oct  3 02:10:20 jolie kernel: md: updating md0 RAID superblock on device
Oct  3 02:10:20 jolie kernel: md: hdi1 [events: 000000d7](write) hdi1's sb offset: 45034816
Oct  3 02:10:20 jolie kernel: md: recovery thread got woken up ...
Oct  3 02:10:20 jolie kernel: md: recovery thread finished ...
Oct  3 02:10:20 jolie kernel: md: hdg1 [events: 000000d7](write) hdg1's sb offset: 45034816
Oct  3 02:10:20 jolie kernel: md: hde1 [events: 000000d7](write) hde1's sb offset: 45034816
Oct  3 02:10:20 jolie kernel: md: ... autorun DONE.
Oct  3 02:10:20 jolie kernel: XFS mounting filesystem md(9,0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


