From owner-linux-xfs@oss.sgi.com Thu Jul 1 05:51:45 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 01 Jul 2004 05:51:48 -0700 (PDT) Received: from smtp3.mail.be.easynet.net (eshu.mail.be.easynet.net [212.100.160.117]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i61Cpbgi020179 for ; Thu, 1 Jul 2004 05:51:45 -0700 Received: from bidibule.office.be.easynet.net ([212.100.163.150] helo=bidibule) by smtp3.mail.be.easynet.net with esmtp (Exim 4.34) id 1Bg12O-0003hO-VQ; Thu, 01 Jul 2004 14:51:36 +0200 Received: from [127.0.0.1] (helo=be.easynet.net ident=rb) by bidibule with esmtp (Exim 3.35 #1 (Debian)) id 1Bg12L-0002Qf-00; Thu, 01 Jul 2004 14:51:33 +0200 Message-ID: <40E408D4.5070305@be.easynet.net> Date: Thu, 01 Jul 2004 14:51:32 +0200 From: Raphael Bauduin User-Agent: Mozilla Thunderbird 0.5 (X11/20040208) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Eric Sandeen CC: evilninja , linux-xfs@oss.sgi.com Subject: Re: XFS partition problem References: In-Reply-To: Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3577 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: raphael.bauduin@be.easynet.net Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1641 Lines: 44 Eric Sandeen wrote: > On Fri, 25 Jun 2004, Raphael Bauduin wrote: > > >>I've looked at older logs and those messages appeared at a boot of 3 december, also followed by this: >> >>Dec 3 10:18:19 dotnet kernel: Starting XFS recovery on filesystem: sd(8,38) (dev: 8/38) >>Dec 3 10:18:19 dotnet kernel: Ending XFS recovery on filesystem: sd(8,38) (dev: 8/38) >> >>What's the exact meaning of these messages? Does it mean the partition was not cleanly unmounted? > > > that does mean that it was not cleanly unmounted, and it is using > the journal/log for recovery. This is normal xfs operation. 
> > >>If the partition is not cleanly unmounted at each boot, could it result in a partition error like I had? > > > It should not; xfs is designed to replay the log to get a consistent > filesystem after an unclean shutdown. > > Note that if you point xfs_check or xfs_repair at a filesystem with a > dirty log, you will see inconsistencies - both of these tools require > a clean log to operate. mount/umount to be sure your log is clean. > > -Eric > > Hi, just to give a little update. An xfs_repair completed successfully and the partition is working fine. The problem came from this: on this partition, we have several chrooted environments running, and when the server is shut down, all processes in the chrooted envs are stopped. That's where the problem was: some processes were not stopped. When running xfs_repair, it outputted messages about unavailable files (ssh.pid and apache.pid), which correspond to the processes that were not stopped cleanly in the chrooted environments... Everything seems to be running fine now. Raph From owner-linux-xfs@oss.sgi.com Fri Jul 2 11:22:42 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 02 Jul 2004 11:22:48 -0700 (PDT) Received: from mail-in-05.arcor-online.net (mail-in-05.arcor-online.net [151.189.21.45]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i62IMVgi028074 for ; Fri, 2 Jul 2004 11:22:32 -0700 Received: from dialin-212-144-002-229.arcor-ip.net (dialin-212-144-002-229.arcor-ip.net [212.144.2.229]) by mail-in-05.arcor-online.net (Postfix) with ESMTP id 2998BAD83A8 for ; Fri, 2 Jul 2004 18:06:43 +0200 (CEST) From: Thomas Gaertner Reply-To: tga@software-tomography.com To: linux-xfs@oss.sgi.com Subject: Mail Problems on this list?
Date: Fri, 2 Jul 2004 18:06:34 +0200 User-Agent: KMail/1.6.2 MIME-Version: 1.0 Content-Disposition: inline Content-Type: Text/Plain; charset="iso-8859-1" Message-Id: <200407021806.38535.tgaertne@informatik.tu-cottbus.de> Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id i62IMggi028080 X-archive-position: 3578 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: tgaertne@informatik.tu-cottbus.de Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1225 Lines: 34 -----BEGIN PGP SIGNED MESSAGE----- Hi there, is this list currently experiencing problems with its list daemon? I don't get mail, and mail sent to this list gets lost. bye... - -- Thomas Gaertner ( host leela ) Brandenburg Technical University at Cottbus - --------------------------------------------------------------- Please send only plain ASCII-Mail. In case your Mail will be turned down, use . But beware - I check this mailbox only once a day. - -- Revival of Futurama Petition Sign at http://www.petitiononline.com/bbf_2004/ More under www.peelified.com, leelazone.com.ar.
- --- PGP-Public-Key http://www-stud.informatik.tu-cottbus.de/~tgaertne/public-pgp-2048.04232001.asc -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.2-rc1-SuSE (GNU/Linux) iQEVAwUBQOWIDlRxDogMOg+VAQEPugf/TogPeiQRJcF21uslKWptMzT2f46uq62X hKaHUKOTECz/2pUjTc91wv43qxCCri31SRNeQijpeyMzjbPBGXUdanPbiyg+58Wh XBpkmCjR2/vR86kMDuwtAhPd4od92g9JTz+6DPmc/qBPqYX38FIKSgWxCtuRYfAG MwA1BZrDNhCR1pvC2DibEErCy3hB03iJ1UL7+0lWfghRXp3oqZ+P5s2HymU645Md +d26qhNW3w/i5bVgSVynhYeoWUBPX9kMum642H42g2I/qiYPdLFNNJG5VK76EKQH OnQ/id/QE/3jnDkb/UZgXR3EbecpT8BML4lJZ+pUtvqY5ogw/72xUQ== =DJDM -----END PGP SIGNATURE----- From owner-linux-xfs@oss.sgi.com Fri Jul 2 12:47:39 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 02 Jul 2004 12:47:42 -0700 (PDT) Received: from pimout1-ext.prodigy.net (pimout1-ext.prodigy.net [207.115.63.77]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i62Jldgi002443 for ; Fri, 2 Jul 2004 12:47:39 -0700 Received: from taniwha.stupidest.org (adsl-63-202-172-209.dsl.snfc21.pacbell.net [63.202.172.209]) by pimout1-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i62JlWKn177248; Fri, 2 Jul 2004 15:47:33 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id AF4CC115C874; Fri, 2 Jul 2004 12:47:30 -0700 (PDT) Date: Fri, 2 Jul 2004 12:47:30 -0700 From: Chris Wedgwood To: Raphael Bauduin Cc: Eric Sandeen , evilninja , linux-xfs@oss.sgi.com Subject: Re: XFS partition problem Message-ID: <20040702194730.GA20726@taniwha.stupidest.org> References: <40E408D4.5070305@be.easynet.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <40E408D4.5070305@be.easynet.net> X-archive-position: 3579 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 427 Lines: 13 On Thu, Jul 01, 2004 at 02:51:32PM +0200, Raphael Bauduin wrote: > When running xfs_repair, it outputted 
messages about unavailable > files (ssh.pid and apache.pid), which correspond to the processes > that were not stopped cleanly in the chrooted environments... xfs_repair should not be required in this case; if for some reason it is required and these files are not being dealt with properly, then it's a bug. --cw From owner-linux-xfs@oss.sgi.com Sun Jul 4 10:40:05 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 04 Jul 2004 10:40:08 -0700 (PDT) Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.185]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i64He4gi024567 for ; Sun, 4 Jul 2004 10:40:05 -0700 Received: from [212.227.126.202] (helo=mrvnet.kundenserver.de) by moutng.kundenserver.de with esmtp (Exim 3.35 #1) id 1BhAyB-0005Jq-00 for linux-xfs@oss.sgi.com; Sun, 04 Jul 2004 19:40:03 +0200 Received: from [172.23.4.145] (helo=config18.kundenserver.de) by mrvnet.kundenserver.de with esmtp (Exim 3.35 #1) id 1BhAyA-0007li-00; Sun, 04 Jul 2004 19:40:02 +0200 Received: from www-data by config18.kundenserver.de with local (Exim 3.35 #1 (Debian)) id 1BhAyA-0002wz-00; Sun, 04 Jul 2004 19:40:02 +0200 To: Subject: =?iso-8859-1?Q?oops_with_xfs_/_nfs?= From: Message-Id: <27495344$108896174540e83cd1534a75.27867426@config18.schlund.de> X-Binford: 6100 (more power) X-Originating-From: 27495344 X-Mailer: Webmail X-Routing: DE X-Received: from config18 by 195.126.66.126 with HTTP id 27495344 for linux-xfs@oss.sgi.com; Sun, 4 Jul 2004 19:38:02 +0200 Content-Type: text/plain; charset="iso-8859-1" Mime-Version: 1.0 Content-Transfer-Encoding: 8bit X-Priority: 3 Date: Sun, 4 Jul 2004 19:38:02 +0200 X-Provags-ID: kundenserver.de abuse@kundenserver.de ident:@172.23.4.145 X-archive-position: 3580 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: info@nerdbynature.de Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 6798 Lines: 155 hello list, today i got an oops on my
nfs server during heavy use of nfs read/writes from a client. this is debian/unstable (i386), vanilla 2.6.7. please CC me on replies, as i am not able to recieve the list-msg atm. Thanks, Christian. kernel: nfsd: page allocation failure. order:5, mode:0xd0 kernel: [] __alloc_pages+0x2d1/0x2e0 kernel: [] __get_free_pages+0x18/0x40 kernel: [] kmem_getpages+0x19/0xb0 kernel: [] cache_grow+0xa8/0x230 kernel: [] cache_alloc_refill+0x1db/0x220 kernel: [] __kmalloc+0x5c/0x60 kernel: [] xfs_iread_extents+0x86/0x180 kernel: [] xfs_bmapi+0x1e9/0x1400 kernel: [] do_gettimeofday+0x1a/0xc0 kernel: [] ipt_do_table+0x2eb/0x500 [ip_tables] kernel: [] boomerang_rx+0x1d7/0x480 [3c59x] kernel: [] recalc_task_prio+0xbb/0x1b0 kernel: [] recalc_task_prio+0xbb/0x1b0 kernel: [] xfs_imap_to_bmap+0x2a/0x2a0 kernel: [] xfs_iomap+0x176/0x4a0 kernel: [] xfs_iget+0x13c/0x150 kernel: [] linvfs_get_block_core+0x9e/0x2a0 kernel: [] xfs_access+0x3f/0x50 kernel: [] linvfs_permission+0x0/0x20 kernel: [] dput+0x87/0x230 kernel: [] linvfs_get_block+0x25/0x30 kernel: [] do_mpage_readpage+0x248/0x330 kernel: [] radix_tree_node_alloc+0x10/0x50 kernel: [] radix_tree_insert+0x76/0x110 kernel: [] add_to_page_cache+0x5a/0xb0 kernel: [] mpage_readpages+0xee/0x140 kernel: [] linvfs_get_block+0x0/0x30 kernel: [] ip_finish_output2+0x0/0x1e0 kernel: [] nf_hook_slow+0x13f/0x160 kernel: [] ip_finish_output2+0x0/0x1e0 kernel: [] copy_to_user+0x32/0x50 kernel: [] read_pages+0x130/0x140 kernel: [] linvfs_get_block+0x0/0x30 kernel: [] buffered_rmqueue+0xf7/0x1e0 kernel: [] __alloc_pages+0x9f/0x2e0 kernel: [] tcp_recvmsg+0x2fb/0x760 kernel: [] do_page_cache_readahead+0x1a4/0x1d0 kernel: [] page_cache_readahead+0xe9/0x1f0 kernel: [] do_generic_mapping_read+0xf2/0x470 kernel: [] generic_file_sendfile+0x6b/0x90 kernel: [] nfsd_read_actor+0x0/0xd0 [nfsd] kernel: [] xfs_sendfile+0xc7/0x1b0 kernel: [] nfsd_read_actor+0x0/0xd0 [nfsd] kernel: [] linvfs_sendfile+0x4a/0x60 kernel: [] nfsd_read_actor+0x0/0xd0 [nfsd] kernel: [] 
nfsd_read+0x226/0x3e0 [nfsd] kernel: [] nfsd_read_actor+0x0/0xd0 [nfsd] kernel: [] nfsd3_proc_read+0xaf/0x170 [nfsd] kernel: [] nfs3svc_decode_readargs+0x0/0x190 [nfsd] kernel: [] nfsd_dispatch+0x8a/0x1f0 [nfsd] kernel: [] svc_authenticate+0xd3/0x120 [sunrpc] kernel: [] svc_process+0x57d/0x610 [sunrpc] kernel: [] nfsd+0x1cb/0x380 [nfsd] kernel: [] nfsd+0x0/0x380 [nfsd] kernel: [] kernel_thread_helper+0x5/0x18 kernel: kernel: Unable to handle kernel NULL pointer dereference at virtual address 00000000 kernel: printing eip: kernel: c01b5bc6 kernel: *pde = 00000000 kernel: Oops: 0002 [#1] kernel: PREEMPT kernel: Modules linked in: nls_iso8859_15 isofs nls_base sr_mod cdrom ipt_REJECT ipt_state ipt_multiport ipt_MASQUERAD E ip_nat_ftp iptable_nat ip_conntrack_ftp ip_conntrack iptable_filter ip_tables ppp_deflate zlib_deflate zlib_inflate bsd_comp nfsd exportfs lockd sunrpc ppp_async parport_pc lp parport ipv6 ppp_generic slhc 8250 serial_core 3c59x loop_twofish loop rtc kernel: CPU: 0 kernel: EIP: 0060:[] Not tainted kernel: EFLAGS: 00010202 (2.6.7) kernel: EIP is at xfs_bmap_read_extents+0x2a6/0x470 kernel: eax: 00000000 ebx: 000000b6 ecx: 00000000 edx: 1200e064 kernel: esi: c7e71018 edi: 000000b6 ebp: 00000000 esp: cbab38ac kernel: ds: 007b es: 007b ss: 0068 kernel: Process nfsd (pid: 14198, threadinfo=cbab2000 task=cbc16cf0) kernel: Stack: 00000000 cbab3904 00000002 c01351d9 000000d0 000000b6 00000013 cee43800 kernel: c90e8040 000000b6 003b11bf 00000000 00000000 00000000 00000000 00001565 kernel: 00000000 c8aa2ce0 00000001 c7e71000 c8aa2c90 00000000 c90e8040 00015650 kernel: Call Trace: kernel: [] kmem_getpages+0x19/0xb0 kernel: [] xfs_iread_extents+0xa6/0x180 kernel: [] xfs_bmapi+0x1e9/0x1400 kernel: [] do_gettimeofday+0x1a/0xc0 kernel: [] ipt_do_table+0x2eb/0x500 [ip_tables] kernel: [] boomerang_rx+0x1d7/0x480 [3c59x] kernel: [] recalc_task_prio+0xbb/0x1b0 kernel: [] recalc_task_prio+0xbb/0x1b0 kernel: [] xfs_imap_to_bmap+0x2a/0x2a0 kernel: [] 
xfs_iomap+0x176/0x4a0 kernel: [] xfs_iget+0x13c/0x150 kernel: [] linvfs_get_block_core+0x9e/0x2a0 kernel: [] xfs_access+0x3f/0x50 kernel: [] linvfs_permission+0x0/0x20 kernel: [] dput+0x87/0x230 kernel: [] linvfs_get_block+0x25/0x30 kernel: [] do_mpage_readpage+0x248/0x330 kernel: [] radix_tree_node_alloc+0x10/0x50 kernel: [] radix_tree_insert+0x76/0x110 kernel: [] add_to_page_cache+0x5a/0xb0 kernel: [] mpage_readpages+0xee/0x140 kernel: [] linvfs_get_block+0x0/0x30 kernel: [] ip_finish_output2+0x0/0x1e0 kernel: [] nf_hook_slow+0x13f/0x160 kernel: [] ip_finish_output2+0x0/0x1e0 kernel: [] copy_to_user+0x32/0x50 kernel: [] read_pages+0x130/0x140 kernel: [] linvfs_get_block+0x0/0x30 kernel: [] buffered_rmqueue+0xf7/0x1e0 kernel: [] __alloc_pages+0x9f/0x2e0 kernel: [] tcp_recvmsg+0x2fb/0x760 kernel: [] do_page_cache_readahead+0x1a4/0x1d0 kernel: [] page_cache_readahead+0xe9/0x1f0 kernel: [] do_generic_mapping_read+0xf2/0x470 kernel: [] generic_file_sendfile+0x6b/0x90 kernel: [] nfsd_read_actor+0x0/0xd0 [nfsd] kernel: [] xfs_sendfile+0xc7/0x1b0 kernel: [] nfsd_read_actor+0x0/0xd0 [nfsd] kernel: [] linvfs_sendfile+0x4a/0x60 kernel: [] nfsd_read_actor+0x0/0xd0 [nfsd] kernel: [] nfsd_read+0x226/0x3e0 [nfsd] kernel: [] nfsd_read_actor+0x0/0xd0 [nfsd] kernel: [] nfsd3_proc_read+0xaf/0x170 [nfsd] kernel: [] nfs3svc_decode_readargs+0x0/0x190 [nfsd] kernel: [] nfsd_dispatch+0x8a/0x1f0 [nfsd] kernel: [] svc_authenticate+0xd3/0x120 [sunrpc] kernel: [] svc_process+0x57d/0x610 [sunrpc] kernel: [] nfsd+0x1cb/0x380 [nfsd] kernel: [] nfsd+0x0/0x380 [nfsd] kernel: [] kernel_thread_helper+0x5/0x18 kernel: kernel: Code: 89 08 89 68 04 8b 46 08 89 d5 83 c6 10 89 c1 8b 44 24 38 0f From owner-linux-xfs@oss.sgi.com Sun Jul 4 11:11:49 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 04 Jul 2004 11:11:53 -0700 (PDT) Received: from zok.sgi.com (mtvcafw.sgi.com [192.48.171.6]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i64IBmgi025578 for ; Sun, 4 Jul 2004 11:11:48 -0700 Received: 
from flecktone.americas.sgi.com (flecktone.americas.sgi.com [192.48.203.135]) by zok.sgi.com (8.12.9/8.12.9/linux-outbound_gateway-1.1) with ESMTP id i64I9Yhv018233 for ; Sun, 4 Jul 2004 11:09:34 -0700 Received: from poppy-e236.americas.sgi.com (poppy-e236.americas.sgi.com [128.162.236.207]) by flecktone.americas.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id i64I9YKe41398354; Sun, 4 Jul 2004 13:09:34 -0500 (CDT) Received: from penguin.americas.sgi.com (penguin.americas.sgi.com [128.162.240.135]) by poppy-e236.americas.sgi.com (8.12.9/SGI-server-1.8) with ESMTP id i64I9X3b7180643; Sun, 4 Jul 2004 13:09:33 -0500 (CDT) Date: Sun, 4 Jul 2004 13:09:31 -0500 (CDT) From: Eric Sandeen X-X-Sender: sandeen@penguin.americas.sgi.com To: info@nerdbynature.de cc: linux-xfs@oss.sgi.com Subject: Re: =?iso-8859-1?Q?oops_with_xfs_/_nfs?= In-Reply-To: <27495344$108896174540e83cd1534a75.27867426@config18.schlund.de> Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 3581 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: sandeen@sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 619 Lines: 24 looks like a memory allocation failed, and was not checked. irix allocations never fail, so there are plenty of places in xfs where allocations are still not checked, unfortunately. Some are almost impossible to error out of, but this one looks easy enough. I'll see about getting a fix in. Thanks, -Eric On Sun, 4 Jul 2004 info@nerdbynature.de wrote: > > hello list, > > today i got an oops on my nfs server during heavy use of nfs read/writes > from a client. this is debian/unstable (i386), vanilla 2.6.7. > please CC me on replies, as i am not able to receive the list-msg atm. > > Thanks, > Christian.
> From owner-linux-xfs@oss.sgi.com Sun Jul 4 11:53:18 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 04 Jul 2004 11:53:21 -0700 (PDT) Received: from omx1.americas.sgi.com (cfcafw.sgi.com [198.149.23.1]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i64IrIgi027646 for ; Sun, 4 Jul 2004 11:53:18 -0700 Received: from flecktone.americas.sgi.com (flecktone.americas.sgi.com [192.48.203.135]) by omx1.americas.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with ESMTP id i64Ir80f004449 for ; Sun, 4 Jul 2004 13:53:08 -0500 Received: from poppy-e236.americas.sgi.com (poppy-e236.americas.sgi.com [128.162.236.207]) by flecktone.americas.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id i64Ir8Ke39701325; Sun, 4 Jul 2004 13:53:08 -0500 (CDT) Received: from penguin.americas.sgi.com (penguin.americas.sgi.com [128.162.240.135]) by poppy-e236.americas.sgi.com (8.12.9/SGI-server-1.8) with ESMTP id i64Ir83b7196738; Sun, 4 Jul 2004 13:53:08 -0500 (CDT) Date: Sun, 4 Jul 2004 13:53:05 -0500 (CDT) From: Eric Sandeen X-X-Sender: sandeen@penguin.americas.sgi.com To: info@nerdbynature.de cc: linux-xfs@oss.sgi.com Subject: Re: =?iso-8859-1?Q?oops_with_xfs_/_nfs?= In-Reply-To: <27495344$108896174540e83cd1534a75.27867426@config18.schlund.de> Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 3582 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: sandeen@sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 570 Lines: 22 on second thought, the alloc should not have failed here thanks to the KM_SLEEP flag. (thanks for pointing that out, Christoph...) so looks like maybe a bug in the core kernel - you might try the latest kernel from linus' bk tree or oss.sgi.com cvs -eric On Sun, 4 Jul 2004 info@nerdbynature.de wrote: > > hello list, > > today i got an oops on my nfs server during heavy use of nfs read/writes > from a client. 
this is debian/unstable (i386), vanilla 2.6.7. > please CC me on replies, as i am not able to recieve the list-msg atm. > > Thanks, > Christian. > From owner-linux-xfs@oss.sgi.com Mon Jul 5 03:15:17 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 05 Jul 2004 03:15:24 -0700 (PDT) Received: from enterprise01.xszone.net (PART-OF.cybermedia.nl [62.129.140.180] (may be forged)) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i65AFGgi002874 for ; Mon, 5 Jul 2004 03:15:17 -0700 Received: from localhost (localhost [127.0.0.1]) by enterprise01.xszone.net (Postfix) with ESMTP id 0BDEECECD8 for ; Mon, 5 Jul 2004 12:12:47 +0200 (CEST) Received: from enterprise01.xszone.net ([127.0.0.1]) by localhost (enterprise01 [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 16524-08 for ; Mon, 5 Jul 2004 12:12:46 +0200 (CEST) Received: from webmail.xszone.net (localhost [127.0.0.1]) by enterprise01.xszone.net (Postfix) with SMTP id 96F59CECD4 for ; Mon, 5 Jul 2004 12:12:46 +0200 (CEST) Received: from 62.58.45.130 (SquirrelMail authenticated user catch_all@saerts.nl) by webmail.xszone.net with HTTP; Mon, 5 Jul 2004 12:12:46 +0200 (CEST) Message-ID: <2188.62.58.45.130.1089022366.squirrel@webmail.xszone.net> Date: Mon, 5 Jul 2004 12:12:46 +0200 (CEST) Subject: Problem with XFS, kernel 2.6.7 and NFSD(kernel) From: "Sander Aerts" To: linux-xfs@oss.sgi.com Reply-To: saerts@saerts.nl User-Agent: SquirrelMail/1.4.2 MIME-Version: 1.0 Content-Type: text/plain;charset=iso-8859-1 Content-Transfer-Encoding: 8bit X-Priority: 3 Importance: Normal X-Virus-Scanned: by amavisd-new-20030616-p7 (Debian) at xszone.net X-archive-position: 3583 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: saerts@saerts.nl Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 2069 Lines: 56 Hi Hoooo Hello all, I have a few systems with the following configuration: - 12 channel 3ware RAID - 3 x 900GB hardware raid0 arrays - LVM 
with the 3 hardware raid arrays in it to create one physical volume of 2.7 TB - XFS - NFSD (Kernel NFS) Now i get the following error message: Jun 27 22:47:56 fcluster1 kernel: Unable to handle kernel paging request at virtual address cad7ef28 Jun 27 22:47:56 fcluster1 kernel: printing eip: Jun 27 22:47:56 fcluster1 kernel: c0241ef9 Jun 27 22:47:56 fcluster1 kernel: *pde = 0002a067 Jun 27 22:47:56 fcluster1 kernel: *pte = 0ad7e000 Jun 27 22:47:56 fcluster1 kernel: Oops: 0000 [#1] Jun 27 22:47:56 fcluster1 kernel: DEBUG_PAGEALLOC Jun 27 22:47:56 fcluster1 kernel: CPU: 0 Jun 27 22:47:56 fcluster1 kernel: EIP: 0060:[] Not tainted Jun 27 22:47:56 fcluster1 kernel: EFLAGS: 00010286 (2.6.6) Jun 27 22:47:56 fcluster1 kernel: EIP is at pagebuf_daemon+0x231/0x288 Jun 27 22:47:56 fcluster1 kernel: eax: 00000000 ebx: cad7eec4 ecx: f7b799d8 edx: f7b799e8 Jun 27 22:47:56 fcluster1 kernel: esi: cb6ebec4 edi: c042cee0 ebp: f7aedfec esp: f7aedfd8 Jun 27 22:47:56 fcluster1 kernel: ds: 007b es: 007b ss: 0068 Jun 27 22:47:56 fcluster1 kernel: Process xfsbufd (pid: 12, threadinfo=f7aec000 task=f7b29a10) Jun 27 22:47:56 fcluster1 kernel: Stack: c0241cc8 00000000 00000000 cabd7f14 d2c6bf14 00000000 c01022ed 00000000 Jun 27 22:47:56 fcluster1 kernel: 00000000 00000000 Jun 27 22:47:56 fcluster1 kernel: Call Trace: Jun 27 22:47:56 fcluster1 kernel: [] pagebuf_daemon+0x0/0x288 Jun 27 22:47:56 fcluster1 kernel: [] kernel_thread_helper+0x5/0xc Jun 27 22:47:56 fcluster1 kernel: Jun 27 22:47:56 fcluster1 kernel: Code: 8b 43 64 8b 40 08 85 c0 74 14 8b 40 74 85 c0 74 0d 8b 50 14 Then my NFSD locks up and i need to reboot the system; there is nothing wrong with the machine or with the installed memory. Checked everything with memtest, and i see the same problem on several systems. Maybe an XFS kernel bug?
Any comments or suggestions please, -- Thx in advance, Sander Aerts From owner-linux-xfs@oss.sgi.com Mon Jul 5 03:38:20 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 05 Jul 2004 03:38:22 -0700 (PDT) Received: from omx1.americas.sgi.com (cfcafw.sgi.com [198.149.23.1]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i65AcJgi003649 for ; Mon, 5 Jul 2004 03:38:19 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by omx1.americas.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with SMTP id i65AOO0f027613 for ; Mon, 5 Jul 2004 05:24:25 -0500 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id UAA08147; Mon, 5 Jul 2004 20:24:20 +1000 Received: from wobbly.melbourne.sgi.com (localhost [127.0.0.1]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i65AOJln2035158; Mon, 5 Jul 2004 20:24:19 +1000 (EST) Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id i65AOIK22026935; Mon, 5 Jul 2004 20:24:18 +1000 (EST) Date: Mon, 5 Jul 2004 20:24:18 +1000 From: Nathan Scott To: Sander Aerts Cc: linux-xfs@oss.sgi.com Subject: Re: Problem with XFS, kernel 2.6.7 and NFSD(kernel) Message-ID: <20040705202417.A2032736@wobbly.melbourne.sgi.com> References: <2188.62.58.45.130.1089022366.squirrel@webmail.xszone.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5i In-Reply-To: <2188.62.58.45.130.1089022366.squirrel@webmail.xszone.net>; from saerts@saerts.nl on Mon, Jul 05, 2004 at 12:12:46PM +0200 X-archive-position: 3584 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 997 Lines: 29 On Mon, Jul 05, 2004 at 12:12:46PM +0200, Sander Aerts wrote: > Hi Hoooo Hello all, > > 
threadinfo=f7aec000 task=f7b29a10) > Jun 27 22:47:56 fcluster1 kernel: Stack: c0241cc8 00000000 00000000 > cabd7f14 d2c6bf14 00000000 c01022ed 00000000 > Jun 27 22:47:56 fcluster1 kernel: 00000000 00000000 > Jun 27 22:47:56 fcluster1 kernel: Call Trace: > Jun 27 22:47:56 fcluster1 kernel: [] pagebuf_daemon+0x0/0x288 > Jun 27 22:47:56 fcluster1 kernel: [] kernel_thread_helper+0x5/0xc > Jun 27 22:47:56 fcluster1 kernel: > Jun 27 22:47:56 fcluster1 kernel: Code: 8b 43 64 8b 40 08 85 c0 74 14 8b > 40 74 85 c0 74 0d 8b 50 14 > > Then my NFSD locks ups and i need to reboot the system, there is nothing > wrong with the machine or with installed memory. > > Checked everthing with memtest, i have it on more systems? > > Maybe XFS kernel bug?? Indeed - Christoph fixed this recently, try using XFS CVS from oss.sgi.com or Linus' current bitkeeper tree. cheers. -- Nathan From owner-linux-xfs@oss.sgi.com Mon Jul 5 11:56:47 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 05 Jul 2004 11:56:50 -0700 (PDT) Received: from smtp.bensa.ar (host84.200-117-131.telecom.net.ar [200.117.131.84]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i65Iukgi025663 for ; Mon, 5 Jul 2004 11:56:47 -0700 Received: from localhost (localhost.localdomain [127.0.0.1]) by smtp.bensa.ar (Postfix) with ESMTP id 114F05D5A33 for ; Mon, 5 Jul 2004 15:56:46 -0300 (ART) Received: from smtp.bensa.ar ([127.0.0.1]) by localhost (zeddmore.bensa.ar [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 28071-05; Mon, 5 Jul 2004 15:56:41 -0300 (ART) Received: from venkman.bensa.ar (venkman.bensa.ar [192.168.1.125]) by smtp.bensa.ar (Postfix) with ESMTP id 6ECB1587B6C; Mon, 5 Jul 2004 15:56:41 -0300 (ART) From: Norberto Bensa Subject: Fwd: XFS: how to NOT null files on fsck? 
Date: Mon, 5 Jul 2004 15:56:43 -0300 User-Agent: KMail/1.6.2 To: linux-xfs@oss.sgi.com MIME-Version: 1.0 Content-Disposition: inline Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Message-Id: <200407051556.43673.norberto+linux-xfs@bensa.ath.cx> X-Virus-Scanned: by amavisd-new at bensa.ar X-archive-position: 3585 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: norberto+linux-xfs@bensa.ath.cx Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 274 Lines: 13 This was posted to linux-kernel, but no one replied yet: ---------- Forwarded Message ---------- how do I setup XFS to not null files after a bad shutdown? ------------------------------------------------------- BTW, this is Linux kernel 2.6.7-mm6. Thanks, Norberto From owner-linux-xfs@oss.sgi.com Mon Jul 5 12:57:38 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 05 Jul 2004 12:57:40 -0700 (PDT) Received: from pimout2-ext.prodigy.net (pimout2-ext.prodigy.net [207.115.63.101]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i65Jvcgi030358 for ; Mon, 5 Jul 2004 12:57:38 -0700 Received: from taniwha.stupidest.org (adsl-63-202-172-209.dsl.snfc21.pacbell.net [63.202.172.209]) by pimout2-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i65JvTUK170698; Mon, 5 Jul 2004 15:57:29 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 2BEC510D0439; Mon, 5 Jul 2004 12:57:26 -0700 (PDT) Date: Mon, 5 Jul 2004 12:57:26 -0700 From: Chris Wedgwood To: Norberto Bensa Cc: linux-xfs@oss.sgi.com Subject: Re: Fwd: XFS: how to NOT null files on fsck? 
Message-ID: <20040705195726.GA32243@taniwha.stupidest.org> References: <200407051556.43673.norberto+linux-xfs@bensa.ath.cx> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200407051556.43673.norberto+linux-xfs@bensa.ath.cx> X-archive-position: 3586 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 147 Lines: 9 On Mon, Jul 05, 2004 at 03:56:43PM -0300, Norberto Bensa wrote: > how do I setup XFS to not null files after a bad shutdown? you can't --cw From owner-linux-xfs@oss.sgi.com Mon Jul 5 13:26:46 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 05 Jul 2004 13:26:48 -0700 (PDT) Received: from imag.imag.fr (imag.imag.fr [129.88.30.1]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i65KQigi031248 for ; Mon, 5 Jul 2004 13:26:45 -0700 Received: from mail-veri.imag.fr (pave.imag.fr [129.88.43.12]) by imag.imag.fr (8.12.10/8.12.10) with ESMTP id i65KQfgH024161 for ; Mon, 5 Jul 2004 22:26:41 +0200 (CEST) Received: from obiou.imag.fr ([129.88.43.2] ident=kowalski) by mail-veri.imag.fr with esmtp (Exim 3.35 #1 (Debian)) id 1Bha2z-0005Ym-00 for ; Mon, 05 Jul 2004 22:26:41 +0200 Date: Mon, 5 Jul 2004 22:26:41 +0200 (MEST) From: Nicolas Kowalski X-X-Sender: kowalski@obiou.imag.fr To: linux-xfs@oss.sgi.com Subject: Re: Fwd: XFS: how to NOT null files on fsck? 
In-Reply-To: <200407051556.43673.norberto+linux-xfs@bensa.ath.cx> Message-ID: References: <200407051556.43673.norberto+linux-xfs@bensa.ath.cx> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-IMAG-MailScanner: Found to be clean X-IMAG-MailScanner-Information: Please contact the ISP for more information X-archive-position: 3587 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: Nicolas.Kowalski@imag.fr Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 506 Lines: 20 On Mon, 5 Jul 2004, Norberto Bensa wrote: > how do I setup XFS to not null files after a bad shutdown? The last time our UPS failed, no files were corrupted because: - the NFS exports are set up with the sync flag, as recommended, - [to be verified] Samba shares filesystems with "strict sync" and "sync always" options, For local files, use the sync attribute (chattr +S) to be sure that file contents are flushed to disk. This is useful when files are frequently modified. Regards. -- Nicolas From owner-linux-xfs@oss.sgi.com Mon Jul 5 13:38:03 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 05 Jul 2004 13:38:08 -0700 (PDT) Received: from lists.vasoftware.com (mail@internalmx2.vasoftware.com [12.152.184.150]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i65Kc3gi032005 for ; Mon, 5 Jul 2004 13:38:03 -0700 Received: from adsl-67-121-191-12.dsl.sntc01.pacbell.net ([67.121.191.12]:62088 helo=[10.0.0.1]) by lists.vasoftware.com with asmtp (Cipher TLSv1:RC4-MD5:128) (Exim 4.20 #1 (Debian)) id 1BhaDs-0000UO-JW by VAauthid with fixed_plain; Mon, 05 Jul 2004 13:37:56 -0700 Message-ID: <40E9BC26.3050903@linux-sxs.org> Date: Mon, 05 Jul 2004 13:37:58 -0700 From: "Net Llama!"
Organization: VA Software User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8a2) Gecko/20040704 X-Accept-Language: en-us, en MIME-Version: 1.0 To: Nicolas Kowalski CC: linux-xfs@oss.sgi.com Subject: Re: Fwd: XFS: how to NOT null files on fsck? References: <200407051556.43673.norberto+linux-xfs@bensa.ath.cx> In-Reply-To: Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-EA-Verified: lists.vasoftware.com 1BhaDs-0000UO-JW d2b02af4f3b8e4487b4d33e438f2a9af X-archive-position: 3588 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: netllama@linux-sxs.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 959 Lines: 30 On 07/05/2004 01:26 PM, Nicolas Kowalski wrote: > On Mon, 5 Jul 2004, Norberto Bensa wrote: > > >>how do I setup XFS to not null files after a bad shutdown? > > > The last time our UPS failed, no files were corrupted because: > > - the NFS exports are setup with the sync flag, as recommended, > > - [to be verified] Samba shares filesystems with "strict sync" and > "sync always" options, > > For local files, use the sync attribute (chattr +S) to be sure that file > contents are flushed on disk. This is usefull when files are frequently > modified. Is there any way to set this for an entire filesystem? Also, there must be a performance hit from doing this, right? -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ L. 
Friedman netllama@linux-sxs.org Linux Step-by-step & TyGeMo: http://netllama.ipfox.com 13:35:00 up 15 days, 16 min, 1 user, load average: 0.26, 0.18, 0.27 From owner-linux-xfs@oss.sgi.com Mon Jul 5 13:56:54 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 05 Jul 2004 13:57:33 -0700 (PDT) Received: from imag.imag.fr (imag.imag.fr [129.88.30.1]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i65Kuqgi032636 for ; Mon, 5 Jul 2004 13:56:53 -0700 Received: from mail-veri.imag.fr (pave.imag.fr [129.88.43.12]) by imag.imag.fr (8.12.10/8.12.10) with ESMTP id i65KuogH001633 for ; Mon, 5 Jul 2004 22:56:50 +0200 (CEST) Received: from obiou.imag.fr ([129.88.43.2] ident=kowalski) by mail-veri.imag.fr with esmtp (Exim 3.35 #1 (Debian)) id 1BhaW9-0005yA-00 for ; Mon, 05 Jul 2004 22:56:49 +0200 Date: Mon, 5 Jul 2004 22:56:49 +0200 (MEST) From: Nicolas Kowalski X-X-Sender: kowalski@obiou.imag.fr To: linux-xfs@oss.sgi.com Subject: Re: Fwd: XFS: how to NOT null files on fsck? In-Reply-To: <40E9BC26.3050903@linux-sxs.org> Message-ID: References: <200407051556.43673.norberto+linux-xfs@bensa.ath.cx> <40E9BC26.3050903@linux-sxs.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-IMAG-MailScanner: Found to be clean X-IMAG-MailScanner-Information: Please contact the ISP for more information X-archive-position: 3589 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: Nicolas.Kowalski@imag.fr Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 516 Lines: 18 On Mon, 5 Jul 2004, Net Llama! wrote: > Is there any way to set this for an entire filesystem? Also, there must > be a performance hit from doing this, right? The sync option in /etc/fstab can achieve this, I believe; I have not tested it myself. Regarding performance, yes, it slowed down writes, but I prefer a slower system, with consistent files, as my users do.
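For reference, a sketch of the kind of /etc/fstab entry meant here; the device name and mount point are hypothetical examples, and (as said) it is untested on my side:

```
# /etc/fstab -- hypothetical entry mounting an XFS partition synchronously
/dev/hdb1   /data   xfs   rw,sync   0   2
```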
BTW, it is only when creating numerous files in a short time, like when extracting tar files, that our server slows down. Regards. -- Nicolas From owner-linux-xfs@oss.sgi.com Mon Jul 5 14:55:53 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 05 Jul 2004 14:55:55 -0700 (PDT) Received: from pimout1-ext.prodigy.net (pimout1-ext.prodigy.net [207.115.63.77]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i65Ltqgi001681 for ; Mon, 5 Jul 2004 14:55:53 -0700 Received: from taniwha.stupidest.org (adsl-63-202-172-209.dsl.snfc21.pacbell.net [63.202.172.209]) by pimout1-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i65LtnKn217714; Mon, 5 Jul 2004 17:55:50 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 3473410D0439; Mon, 5 Jul 2004 14:55:49 -0700 (PDT) Date: Mon, 5 Jul 2004 14:55:49 -0700 From: Chris Wedgwood To: Nicolas Kowalski Cc: linux-xfs@oss.sgi.com Subject: Re: Fwd: XFS: how to NOT null files on fsck? Message-ID: <20040705215549.GA363@taniwha.stupidest.org> References: <200407051556.43673.norberto+linux-xfs@bensa.ath.cx> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-archive-position: 3590 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 682 Lines: 23 On Mon, Jul 05, 2004 at 10:26:41PM +0200, Nicolas Kowalski wrote: > - the NFS exports are setup with the sync flag, as recommended, > > - [to be verified] Samba shares filesystems with "strict sync" and > "sync always" options, this kills performance and increases file fragmentation in many cases > For local files, use the sync attribute (chattr +S) to be sure that > file contents are flushed on disk. This is usefull when files are > frequently modified. 
again, this kills performance; ideally applications should be smarter about this, things like MTAs actually are and get good performance w/o resorting to synchronous IO (generally they use fsync and rename) --cw From owner-linux-xfs@oss.sgi.com Mon Jul 5 14:58:32 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 05 Jul 2004 14:58:33 -0700 (PDT) Received: from pimout3-ext.prodigy.net (pimout3-ext.prodigy.net [207.115.63.102]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i65LwVgi002008 for ; Mon, 5 Jul 2004 14:58:31 -0700 Received: from taniwha.stupidest.org (adsl-63-202-172-209.dsl.snfc21.pacbell.net [63.202.172.209]) by pimout3-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i65LwLlM135852; Mon, 5 Jul 2004 17:58:21 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 9C22F10D0439; Mon, 5 Jul 2004 14:58:20 -0700 (PDT) Date: Mon, 5 Jul 2004 14:58:20 -0700 From: Chris Wedgwood To: Net Llama! Cc: Nicolas Kowalski , linux-xfs@oss.sgi.com Subject: Re: Fwd: XFS: how to NOT null files on fsck? Message-ID: <20040705215820.GB363@taniwha.stupidest.org> References: <200407051556.43673.norberto+linux-xfs@bensa.ath.cx> <40E9BC26.3050903@linux-sxs.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <40E9BC26.3050903@linux-sxs.org> X-archive-position: 3591 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 441 Lines: 19 On Mon, Jul 05, 2004 at 01:37:58PM -0700, Net Llama! wrote: > Is there any way to set this for an entire filesystem? mount -o sync you might want osyncisosync in there too, i'd have to check and i'm too lazy now > Also, there must be a performance hit from doing this, right?
yes, as previously mentioned this hurts like hell. Also, if you are doing all IO synchronously, there is arguably little need for a journaling filesystem --cw From owner-linux-xfs@oss.sgi.com Mon Jul 5 15:08:07 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 05 Jul 2004 15:08:09 -0700 (PDT) Received: from pooh.lsc.hu (pooh.lsc.hu [195.56.172.131]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i65M7ugi002574 for ; Mon, 5 Jul 2004 15:07:57 -0700 Received: by pooh.lsc.hu (Postfix, from userid 1004) id 8BFF31D42C; Mon, 5 Jul 2004 23:36:59 +0200 (CEST) Date: Mon, 5 Jul 2004 23:36:59 +0200 From: "Laszlo 'GCS' Boszormenyi" To: linux-xfs@oss.sgi.com Subject: hdd strange badblocks problem Message-ID: <20040705213659.GA29703@pooh> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.5.4i X-Whitelist: OK X-archive-position: 3592 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: gcs@lsc.hu Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 6136 Lines: 144 Hi, I am an XFS user almost since the beginning, and very satisfied with it. Thanks for it! I am sorry that Steve Lord had to leave SGI a while ago, but well, such is life. So actually my problem is that we have a small server in our office, which has served us well over the last one and a half years. However, when I came in this morning, I realised it was running slow. First I thought it was a network problem, but quickly recognised that the second and 'big' -- read 120 Gb Maxtor IDE hdd -- has read errors on it. As it's quite new, less than six months old, for some minutes I hoped it was not really true. But first, to go back to the beginning: we have been running 2.6.7 for a week or so, and the first oddity I can recall was on Friday. I had deleted a 4 Gb file or so, but did not see the space freed.
I had a lot to do, so I decided to look into it later; and this morning it had read errors on the hdd. :( I took down the machine, and realised I could not even mount the hdd again. It has only one partition, covering the full 120 Gb; I did not use any special arguments when I mkfs'd it. First I checked if the partition was still there -> read errors, could not get the partition table. That was when I got out the hdd, and carefully moved it back and forth next to my ears. It makes a slightly strange sound, but we can't decide if that's normal or not. One of my colleagues says the heads are down and another says it's completely normal. So I began experimenting; someone told me there's a good program called HDD Regenerator, which can cure bad blocks. OK, I used it, and although I did not wait until it finished, the first 60 Gb was checked, about 500 bad sectors were found, and 'fixed'. Wow, I checked the partition table, and it's readable and correct! I began to have faith again. Of course I can not just mount it, as it gives: XFS mounting filesystem hde1 XFS: failed to read root inode mount: Unknown error 990 How should I proceed? I have tried to do 'xfs_repair -n -v /dev/hde1': Phase 1 - find and verify superblock... Phase 2 - using internal log - scan filesystem freespace and inode maps... - found root inode chunk Phase 3 - for each AG... - scan (but don't clear) agi unlinked lists... - process known inodes and perform inode discovery... - agno = 0 bad magic number 0x4dce on inode 128 bad inode format in inode 128 bad inode format in inode 129 bad magic number 0x4dce on inode 130 bad version number 0x25 on inode 130 bad inode format in inode 130 bad version number 0x0 on inode 131 bad magic number 0x4dce on inode 132 bad version number 0x0 on inode 132 bad version number 0x0 on inode 133 bad (negative) size -9222808949420424705 on inode 133 [...]
bad inode format in inode 130 would clear realtime summary inode 130 bad version number 0x0 on inode 131, would reset version number bad non-zero extent size value 671744 for non-realtime inode 131, would reset to zero [...] inode 134 - extent offset too large - start 31934, count 513, offset 4509097185509637 bad data fork in inode 134 would have cleared inode 134 bmap rec out of order, inode 135 entry 1 [o s c] [0 0 131072], 0 [939590656 14 9015] [...] indicated size of data btree root (122884 bytes) greater than space in inode 144 data fork bad data fork in inode 144 would have cleared inode 144 bad non-zero extent size value 2688 for non-realtime inode 145, would reset to zero [...] And so on. Is it possible that I get most of these as it's running in 'no-change' mode (ie the real run would have far less errors, as the fix goes on)? Anyone thinks I will be able to recover anything? Should I try something else? What would be the best practise? Thanks for any pointers in advance, Laszlo/GCS Ps:Well, I have a fresh backup of the most important files, but still missing some-would-be-good-to-have-them files, about ten to twenty Gb. 
:-| From owner-linux-xfs@oss.sgi.com Mon Jul 5 15:48:57 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 05 Jul 2004 15:49:03 -0700 (PDT) Received: from pimout1-ext.prodigy.net (pimout1-ext.prodigy.net [207.115.63.77]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i65Mmsgi003799 for ; Mon, 5 Jul 2004 15:48:56 -0700 Received: from taniwha.stupidest.org (adsl-63-202-172-209.dsl.snfc21.pacbell.net [63.202.172.209]) by pimout1-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i65MmiKn114046; Mon, 5 Jul 2004 18:48:49 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 3D12410D0439; Mon, 5 Jul 2004 15:48:44 -0700 (PDT) Date: Mon, 5 Jul 2004 15:48:44 -0700 From: Chris Wedgwood To: "Laszlo 'GCS' Boszormenyi" Cc: linux-xfs@oss.sgi.com Subject: Re: hdd strange badblocks problem Message-ID: <20040705224844.GA668@taniwha.stupidest.org> References: <20040705213659.GA29703@pooh> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040705213659.GA29703@pooh> X-archive-position: 3593 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs On Mon, Jul 05, 2004 at 11:36:59PM +0200, Laszlo 'GCS' Boszormenyi wrote: > XFS mounting filesystem hde1 > XFS: failed to read root inode > mount: Unknown error 990 990 is EFSCORRUPTED which isn't exported beyond XFS (arguably the OS layer should probably change this to EIO or something but then we might not be able to tell the two apart). > How should I proceed? I have tried to do 'xfs_repair -n -v /dev/hde1': -n is pointless in this case, you have corruption and -n will just spew wads of errors about things that are wrong > Is it possible that I get most of these as it's running in > 'no-change' mode (ie the real run would have far less errors, as the > fix goes on)? Yes > Anyone thinks I will be able to recover anything?
Should I try > something else? What would be the best practise? Run w/o -n and I'm guessing it will do a pretty good job for you. Backup the raw device first if you are paranoid... I personally wouldn't bother though. --cw From owner-linux-xfs@oss.sgi.com Mon Jul 5 17:58:45 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 05 Jul 2004 17:58:47 -0700 (PDT) Received: from smtp.bensa.ar (host84.200-117-131.telecom.net.ar [200.117.131.84]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i660wigi014862 for ; Mon, 5 Jul 2004 17:58:45 -0700 Received: from localhost (localhost.localdomain [127.0.0.1]) by smtp.bensa.ar (Postfix) with ESMTP id 7EB615E9746; Mon, 5 Jul 2004 21:58:40 -0300 (ART) Received: from smtp.bensa.ar ([127.0.0.1]) by localhost (zeddmore.bensa.ar [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 05905-04; Mon, 5 Jul 2004 21:58:35 -0300 (ART) Received: from venkman.bensa.ar (venkman.bensa.ar [192.168.1.125]) by smtp.bensa.ar (Postfix) with ESMTP id 2F9B95AE438; Mon, 5 Jul 2004 21:58:35 -0300 (ART) From: Norberto Bensa To: Nicolas Kowalski Subject: Re: Fwd: XFS: how to NOT null files on fsck? Date: Mon, 5 Jul 2004 21:58:36 -0300 User-Agent: KMail/1.6.2 Cc: linux-xfs@oss.sgi.com References: <200407051556.43673.norberto+linux-xfs@bensa.ath.cx> In-Reply-To: MIME-Version: 1.0 Content-Disposition: inline Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Message-Id: <200407052158.37057.norberto+linux-xfs@bensa.ath.cx> X-Virus-Scanned: by amavisd-new at bensa.ar X-archive-position: 3594 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: norberto+linux-xfs@bensa.ath.cx Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 376 Lines: 13 Nicolas Kowalski wrote: > For local files, use the sync attribute (chattr +S) to be sure that file > contents are flushed on disk. This is usefull when files are frequently > modified. OK. Thanks. 
I'll try chattr +S since KDE is the beast that loses its configuration files. I guess a `chattr -R +S ~/.kde` will do. Again, many thanks to everyone. Best regards, Norberto From owner-linux-xfs@oss.sgi.com Tue Jul 6 00:23:20 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 06 Jul 2004 00:23:52 -0700 (PDT) Received: from omx1.americas.sgi.com (cfcafw.sgi.com [198.149.23.1]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i667NKgi032716 for ; Tue, 6 Jul 2004 00:23:20 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by omx1.americas.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with SMTP id i667E00f019525 for ; Tue, 6 Jul 2004 02:14:01 -0500 Received: from bruce.melbourne.sgi.com (bruce.melbourne.sgi.com [134.14.54.176]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA29180 for ; Tue, 6 Jul 2004 17:13:58 +1000 Received: from bruce.melbourne.sgi.com (localhost.localdomain [127.0.0.1]) by bruce.melbourne.sgi.com (8.12.8/8.12.8) with ESMTP id i666MHhr005729 for ; Tue, 6 Jul 2004 16:22:18 +1000 Received: (from fsgqa@localhost) by bruce.melbourne.sgi.com (8.12.8/8.12.8/Submit) id i666MHwG005728 for linux-xfs@oss.sgi.com; Tue, 6 Jul 2004 16:22:17 +1000 Date: Tue, 6 Jul 2004 16:22:17 +1000 From: FSG QA Message-Id: <200407060622.i666MHwG005728@bruce.melbourne.sgi.com> Subject: TAKE 907752 - xfstests Apparently-To: X-archive-position: 3595 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: fsgqa@bruce.melbourne.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 356 Lines: 14 Add QA test to exercise the direct IO fsx invocations that showed problems.
Date: Tue Jul 6 00:13:24 PDT 2004 Workarea: bruce.melbourne.sgi.com:/home/fsgqa/qa/xfs-cmds Inspected by: nathans The following file(s) were checked into: bonnie.engr.sgi.com:/isms/slinx/xfs-cmds Modid: xfs-cmds:slinx:174747a xfstests/091 - 1.1 xfstests/091.out - 1.1 From owner-linux-xfs@oss.sgi.com Tue Jul 6 02:04:52 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 06 Jul 2004 02:04:56 -0700 (PDT) Received: from gusi.leathercollection.ph (gusi.leathercollection.ph [202.163.192.10]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i6694ogi003457 for ; Tue, 6 Jul 2004 02:04:51 -0700 Received: from localhost (lawin.alabang.leathercollection.ph [192.168.0.2]) by gusi.leathercollection.ph (Postfix) with ESMTP id 4DB1D88A673 for ; Tue, 6 Jul 2004 17:04:43 +0800 (PHT) Received: from lawin.alabang.leathercollection.ph (lawin.alabang.leathercollection.ph [192.168.0.2]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by gusi.leathercollection.ph (Postfix) with ESMTP id 9C74C88A66C for ; Tue, 6 Jul 2004 17:04:36 +0800 (PHT) Received: by lawin.alabang.leathercollection.ph (Postfix, from userid 1000) id 76076A54F1B5; Tue, 6 Jul 2004 17:04:35 +0800 (PHT) Date: Tue, 6 Jul 2004 17:04:35 +0800 From: Federico Sevilla III To: Linux-XFS Mailing List Subject: Re: hdd strange badblocks problem Message-ID: <20040706090435.GR14173@leathercollection.ph> Mail-Followup-To: Linux-XFS Mailing List References: <20040705213659.GA29703@pooh> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040705213659.GA29703@pooh> X-Organization: The Leather Collection, Inc. 
X-Organization-URL: http://www.leathercollection.ph X-Personal-URL: http://jijo.free.net.ph User-Agent: Mutt/1.5.6+20040523i X-Virus-Scanned: by amavisd-new-20030616-p9 (Debian) at leathercollection.ph X-archive-position: 3596 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: jijo@free.net.ph Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 7535 Lines: 172 On Mon, Jul 05, 2004 at 11:36:59PM +0200, Laszlo 'GCS' Boszormenyi wrote: > So actually my problem is that we have a small server in our office, > which served us well in the last one and half years. However when I > came in this morning, I realised it's running slow. First I thought a > network problem, but quickly recognised that the second and 'big' -- > read 120 Gb Maxtor IDE hdd -- has read errors on it. As it's quite > new, less than six months old, I hoped it's not really true for some > minutes. I have a fairly new (less than six months old, too) 120GB Seagate IDE hard drive that uses XFS, and recently ran into hardware read problems with it, too. I didn't need the data in it (it's used for backups using rsnapshot, and both machines it was backing up were okay so I could do without the backups for a short while) so I used DBAN [http://dban.sourceforge.net/] and ran multiple rounds of the PRNG writes with verification on every step. What this basically did was write random data to each sector of the entire drive then read things to make sure the data was actually written, about 45 times all in all (I dictated how many rounds to do, it takes about 1.5 hours per round). (Note: in case it's not yet obvious, I wiped out the entire drive doing this, which was okay in my case since I didn't need the data and really just wanted to make sure the drive was actually okay.) My rationale for doing this is that modern drives automatically remap bad sectors using a set of reserved sectors. 
This is done on-the-fly, but only happens when you write to a bad sector. Reading from a bad sector will just give you an error. This seems to have done the trick: I've been using the drive for five days (and read from and write to it intensively every night during the automatic backup) and haven't had errors in five days so far. It may be worth mentioning that the drive consistently passed the full media scans I did using Seagate's SeaTools utility, before and after the IDE read errors showed up with Linux. > So I began experiencing, someone told me there's a good program called > HDD Regenerator, which can cure bad blocks. OK, used it, and although > I did not wait until it finishes, but the first 60 Gb was checked, > about 500 bad sectors were found, and 'fixed'. Wow, I checked the > partition table, and that's readable and correct! I begin to have > faith again. Maybe you want to run HDD Regenerator completely to fix your entire drive before running xfs_repair? > How should I proceed? I have tried to do 'xfs_repair -n -v /dev/hde1': As recommended by the experts on the list, run xfs_repair on /dev/hde1 without "-n". If this is able to fix everything, and you can mount /dev/hde1, view all the files, and xfs_check gives your filesystem a clean bill of health then you're good. There's an off-chance that it won't be able to fix things completely, though. In my particular case I ran into corruption on a filesystem on a perfectly good internal drive, because the entire IO system locked up when operations on a separate usb-storage drive also running XFS froze. xfs_repair was able to fix things partially, but I ran into errors similar to those detailed in the mailing list archives in . Fortunately only three files were corrupted and I didn't need them, so I used the tips detailed in the previously-mentioned thread to remove the files using xfs_db. 
--> Jijo -- Federico Sevilla III : jijo.free.net.ph : When we speak of free software GNU/Linux Specialist : GnuPG 0x93B746BE : we refer to freedom, not price. From owner-linux-xfs@oss.sgi.com Tue Jul 6 04:53:36 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 06 Jul 2004 04:53:45 -0700 (PDT) Received: from pimout3-ext.prodigy.net (pimout3-ext.prodigy.net [207.115.63.102]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i66BrZgi014088 for ; Tue, 6 Jul 2004 04:53:36 -0700 Received: from taniwha.stupidest.org (adsl-63-202-172-209.dsl.snfc21.pacbell.net [63.202.172.209]) by pimout3-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i66BrYlM116822; Tue, 6 Jul 2004 07:53:34 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 5F1A710A8CD2; Tue, 6 Jul 2004 04:53:33 -0700 (PDT) Date: Tue, 6 Jul 2004 04:53:33 -0700 From: Chris Wedgwood To: Andi Kleen Cc: linux-xfs@oss.sgi.com Subject: Re: [PATCH] deadlocks on ENOSPC Message-ID: <20040706115333.GA2098@taniwha.stupidest.org> References: <20040612040838.020a2efb.ak@suse.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040612040838.020a2efb.ak@suse.de> X-archive-position: 3597 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs On Sat, Jun 12, 2004 at 04:08:38AM +0200, Andi Kleen wrote: > I've been tracking a deadlock on out of space conditions. I can get these too... but not just with processes stuck in D, sometimes it will wedge the entire machine up solid (no network activity, sysrq will not bring the screen out of sleep, etc). This is a little surprising on an SMP machine. Have you seen anything similar? It's fairly repeatable.
The NMI watchdog is busted and I've not had a chance to look over this, but the fact everything dies seems odd (my best guess now is that it's wedging up the CPU which happens to have the interrupt routing for eth0 and the 8042 on it, but it seems a little odd that it would do it every time). --cw From owner-linux-xfs@oss.sgi.com Tue Jul 6 05:46:01 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 06 Jul 2004 05:46:08 -0700 (PDT) Received: from Cantor.suse.de (cantor.suse.de [195.135.220.2]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i66Cjogi021141 for ; Tue, 6 Jul 2004 05:45:51 -0700 Received: from hermes.suse.de (hermes-ext.suse.de [195.135.221.8]) (using TLSv1 with cipher EDH-RSA-DES-CBC3-SHA (168/168 bits)) (No client certificate requested) by Cantor.suse.de (Postfix) with ESMTP id 6460885879E; Tue, 6 Jul 2004 14:08:40 +0200 (CEST) Date: Tue, 6 Jul 2004 14:08:23 +0200 From: Andi Kleen To: Chris Wedgwood Cc: linux-xfs@oss.sgi.com Subject: Re: [PATCH] deadlocks on ENOSPC Message-Id: <20040706140823.5fea0584.ak@suse.de> In-Reply-To: <20040706115333.GA2098@taniwha.stupidest.org> References: <20040612040838.020a2efb.ak@suse.de> <20040706115333.GA2098@taniwha.stupidest.org> X-Mailer: Sylpheed version 0.9.11 (GTK+ 1.2.10; i686-pc-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-archive-position: 3598 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: ak@suse.de Precedence: bulk X-list: linux-xfs On Tue, 6 Jul 2004 04:53:33 -0700 Chris Wedgwood wrote: > On Sat, Jun 12, 2004 at 04:08:38AM +0200, Andi Kleen wrote: > > > I've been tracking a deadlock on out of space conditions. > > I can get these too... but not just with processes stuck in D, > sometimes it will wedge the entire machine up solid (no network > activity, sysrq will not bring the screen out of sleep, etc). This is > a little surprising on an SMP machine.
> > Have you seen anything similar? Nope, I only see file system deadlocks, but the machine is still quite usable. -Andi From owner-linux-xfs@oss.sgi.com Tue Jul 6 09:07:48 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 06 Jul 2004 09:07:52 -0700 (PDT) Received: from pooh.lsc.hu (pooh.lsc.hu [195.56.172.131]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i66G7jgi000328 for ; Tue, 6 Jul 2004 09:07:46 -0700 Received: by pooh.lsc.hu (Postfix, from userid 1004) id 5FF1A1D49E; Tue, 6 Jul 2004 18:03:49 +0200 (CEST) Date: Tue, 6 Jul 2004 18:03:49 +0200 From: "Laszlo 'GCS' Boszormenyi" To: linux-xfs@oss.sgi.com Subject: Re: hdd strange badblocks problem Message-ID: <20040706160349.GA23719@pooh> References: <20040705213659.GA29703@pooh> <20040705224844.GA668@taniwha.stupidest.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040705224844.GA668@taniwha.stupidest.org> User-Agent: Mutt/1.5.4i X-Whitelist: OK X-archive-position: 3599 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: gcs@lsc.hu Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 4161 Lines: 88 * Chris Wedgwood [2004-07-05 15:48:44 -0700]: > 990 is EFSCORRUPTED which isn't exported beyond XFS (arguably the OS > layer should probably change this to EIO or something but then we > might not be able to tell the two apart). I see. Thanks for the clarification. > -n is pointless in this case, you have corruption and -n will just > spew wads of errors about things that are wrong OK, I have removed it. I can not really recall how it stopped, but after I could mount the hdd, only to see a lost+found with ~6.5 Gb in it (the whole usage was 120 Gb). Fishing in it, I could save ~2.2 Gb important data and ~3.1 Gb unimportant, and I saw another Gb unimportant data in it. So I wanted to continue with xfs_repair, so I tried to umount the partition. 
It hung the mount process, so I had to switch off the machine. Reboot, mount, lost+found still there, umount, hung, switch off. Next I did not mount it, but tried to run xfs_repair on it, which constantly stops with this: [...] Phase 5 - rebuild AG headers and trees... - reset superblock... Phase 6 - check inode connectivity... - resetting contents of realtime bitmap and summary inodes - ensuring existence of lost+found directory - traversing filesystem starting at / ... - traversal finished ... - traversing all unattached subtrees ... empty data block 0 in directory inode 445048263: junking block unknown magic number 0xd2f1 for block 8388608 in directory inode 445048263 rebuilding directory inode 445048263 creating missing "." entry in dir ino 445048263 fatal error -- can't make "." entry in dir ino 445048263, createname error 136117232 > Run w/o -n and I'm guessing it will do a pretty good job for you. Well, I think I lost the other data, as I tried to fiddle around with xfs_db (only in read-only mode), and it gave the same error as above. Should I purge that bogus magic number somehow? > Backup the raw device first if you are paranoid... I personally > wouldn't bother though. I don't have a spare drive, unfortunately. But I don't think I would gain any advantage by making a copy. * Federico Sevilla III [2004-07-06 17:04:35 +0800]: > I didn't need the data in it (it's used for backups using > rsnapshot, and both machines it was backing up were okay so I could do > without the backups for a short while) Good for you. :-| > so I used DBAN > [http://dban.sourceforge.net/] and ran multiple rounds of the PRNG > writes with verification on every step. What this basically did was > write random data to each sector of the entire drive then read things to > make sure the data was actually written, about 45 times all in all (I > dictated how many rounds to do, it takes about 1.5 hours per round).
Well, for this purpose I have always used badblocks, available in every distro without further request, I think. It can do a pattern write test as well, with user-defined round numbers. > (Note: in case it's not yet obvious, I wiped out the entire drive doing > this, which was okay in my case since I didn't need the data and really > just wanted to make sure the drive was actually okay.) It is obvious enough. But I would wait with that until I can get some more data from the hdd, or give up on it. > It may be worth mentioning that the drive consistently passed the full > media scans I did using Seagate's SeaTools utility, before and after the > IDE read errors showed up with Linux. I don't think they do a _real_ media scan. I think they just address each sector, and that's all. At least for my 40 Gb IBM drive it happens way too fast to be true. > Maybe you want to run HDD Regenerator completely to fix your entire > drive before running xfs_repair? I did, of course. That's why I only used 'xfs_repair -n' until then. So far, so good, it found ~1000 bad sectors, and said it fixed all of them. Sure enough, I do not get more read errors from the drive. > xfs_repair was able to fix things partially, but I ran into errors > similar to those detailed in the mailing list archives in > . Started reading.
Thanks for helping, Laszlo/GCS From owner-linux-xfs@oss.sgi.com Tue Jul 6 13:59:30 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 06 Jul 2004 13:59:31 -0700 (PDT) Received: from mail.monmouth.edu (mail.monmouth.edu [192.100.64.12]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i66Kx9gi013944 for ; Tue, 6 Jul 2004 13:59:30 -0700 Received: from yahoo.com (MU11846.lab.monmouth.edu [10.11.0.222]) by mail.monmouth.edu (8.12.11/8.12.11) with ESMTP id i66Kx6ff029987 for ; Tue, 6 Jul 2004 16:59:06 -0400 Message-ID: <40EB129A.94920C48@yahoo.com> Date: Tue, 06 Jul 2004 16:59:06 -0400 From: suprima cherlakola X-Mailer: Mozilla 4.76 [en] (Windows NT 5.0; U) X-Accept-Language: en MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: (no subject) Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Scanned-By: MIMEDefang 2.43 X-archive-position: 3600 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: psuprima@yahoo.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 182 Lines: 7 Hi, I want to install the XFS filesystem on my Debian machine. ext3 is already mounted on the disk. How do I partition the disk for XFS and mount it? Please, can you help me with this?
Suprima From owner-linux-xfs@oss.sgi.com Tue Jul 6 14:03:51 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 06 Jul 2004 14:03:54 -0700 (PDT) Received: from web51503.mail.yahoo.com (web51503.mail.yahoo.com [206.190.38.195]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i66L3Ugi014249 for ; Tue, 6 Jul 2004 14:03:50 -0700 Message-ID: <20040706210325.38504.qmail@web51503.mail.yahoo.com> Received: from [192.154.130.66] by web51503.mail.yahoo.com via HTTP; Tue, 06 Jul 2004 14:03:25 PDT Date: Tue, 6 Jul 2004 14:03:25 -0700 (PDT) From: parvath reddy suprima Subject: XFS To: linux-xfs@oss.sgi.com MIME-Version: 1.0 Content-Disposition: inline Content-Type: text/plain Content-Transfer-Encoding: 7bit X-archive-position: 3601 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: psuprima@yahoo.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 433 Lines: 13 Hi, can you tell me how to partition the disk and mount XFS? My Debian machine already has an ext3 filesystem mounted on the disk. Do I have to make a system backup before I mount XFS, and how do I do it? Please, can you help me with this? suprima __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! 
Mail has the best spam protection around http://mail.yahoo.com [[HTML alternate version deleted]] From owner-linux-xfs@oss.sgi.com Tue Jul 6 15:14:37 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 06 Jul 2004 15:14:40 -0700 (PDT) Received: from pimout1-ext.prodigy.net (pimout1-ext.prodigy.net [207.115.63.77]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i66MEYgi018676 for ; Tue, 6 Jul 2004 15:14:36 -0700 Received: from taniwha.stupidest.org (adsl-63-202-172-209.dsl.snfc21.pacbell.net [63.202.172.209]) by pimout1-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i66MEWKn081664; Tue, 6 Jul 2004 18:14:33 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 10D3B115C80C; Tue, 6 Jul 2004 15:14:32 -0700 (PDT) Date: Tue, 6 Jul 2004 15:14:32 -0700 From: Chris Wedgwood To: parvath reddy suprima Cc: linux-xfs@oss.sgi.com Subject: Re: XFS Message-ID: <20040706221431.GA6130@taniwha.stupidest.org> References: <20040706210325.38504.qmail@web51503.mail.yahoo.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040706210325.38504.qmail@web51503.mail.yahoo.com> X-archive-position: 3602 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 718 Lines: 24 On Tue, Jul 06, 2004 at 02:03:25PM -0700, parvath reddy suprima wrote: > can u tell me how to partition the disk and mount XFS and my Debian > machine already has ext3 filesystem mounted on the disk. ideally you need to boot with something that is XFS aware and create a new XFS filesystem and move the data over > do i hav to make system backup before i mount XFS and how to do it.. backups never hurt, i would recommend them if you can. 
if you want to try and move the data in-place there is something called convertfs: http://tzukanov.narod.ru/convertfs/ i've never used it, but i'm told it works, although it can be very slow > please can u help me with this... why do you really need to change? --cw From owner-linux-xfs@oss.sgi.com Tue Jul 6 21:59:27 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 06 Jul 2004 21:59:31 -0700 (PDT) Received: from sebastian.peppermillcas.com (64-161-38-131.peppermillcas.com [64.161.38.131] (may be forged)) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i674x7gi006177 for ; Tue, 6 Jul 2004 21:59:27 -0700 Received: by SEBASTIAN with Internet Mail Service (5.5.2653.19) id ; Tue, 6 Jul 2004 21:59:06 -0700 Message-ID: <07AF3C77A0FBD311A99F00508B6520390435B12F@SEBASTIAN> From: Sabra Elges To: linux-xfs@oss.sgi.com Subject: Out of Office AutoReply: Approved Date: Tue, 6 Jul 2004 21:59:06 -0700 MIME-Version: 1.0 X-Mailer: Internet Mail Service (5.5.2653.19) Content-Type: text/plain; charset="iso-8859-1" X-archive-position: 3603 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: SElges@peppermillreno.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 84 Lines: 3 I am currently out of the office and will be back Thursday, July 8, 2004. Thanks! 
From owner-linux-xfs@oss.sgi.com Wed Jul 7 03:36:06 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 07 Jul 2004 03:36:12 -0700 (PDT) Received: from mail.gmx.net (mail.gmx.net [213.165.64.20]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i67Aa5gi002003 for ; Wed, 7 Jul 2004 03:36:06 -0700 Received: (qmail 13114 invoked by uid 65534); 7 Jul 2004 10:35:58 -0000 Received: from G0c22.g.pppool.de (EHLO [192.168.1.11]) (80.185.12.34) by mail.gmx.net (mp015) with SMTP; 07 Jul 2004 12:35:58 +0200 X-Authenticated: #2986359 Message-ID: <40EBD210.1080307@gmx.net> Date: Wed, 07 Jul 2004 12:36:00 +0200 From: evilninja User-Agent: Thunderbird 0.6 (X11/20040605) X-Accept-Language: de-de, de-at, de, en-us, en MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: still xfs/nfs oopses with 2.6.7-bk17 X-Enigmail-Version: 0.83.6.0 X-Enigmail-Supports: pgp-inline, pgp-mime Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-archive-position: 3604 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: evilninja@gmx.net Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 4068 Lines: 95 -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 hi list, last week i encountered an xfs error, related with nfs and was told to use a newer kernel (subject was: "oops with xfs / nfs"). hm, shiny new kernel is 2.6.7-bk17 and today i got an oops again: kernel: nfsd: page allocation failure. 
order:5, mode:0xd0 kernel: [] __alloc_pages+0x2d1/0x2e0 kernel: [] __get_free_pages+0x18/0x40 kernel: [] kmem_getpages+0x19/0xb0 kernel: [] cache_grow+0xb6/0x190 kernel: [] cache_alloc_refill+0x1db/0x220 kernel: [] ip_local_deliver_finish+0x0/0x1d0 kernel: [] __kmalloc+0x5c/0x60 kernel: [] kmem_alloc+0x5e/0xf0 kernel: [] xfs_iread_extents+0x66/0x110 kernel: [] ip_local_deliver_finish+0x0/0x1d0 kernel: [] xfs_bmapi+0x1e9/0x1400 kernel: [] sk_reset_timer+0xc/0x20 kernel: [] tcp_clean_rtx_queue+0x29b/0x3d0 kernel: [] ipt_do_table+0x2eb/0x500 [ip_tables] kernel: [] tcp_ack+0xdd/0x5d0 kernel: [] xfs_bmap_do_search_extents+0xc8/0x400 kernel: [] ip_local_deliver_finish+0x0/0x1d0 kernel: [] xfs_bmap_search_extents+0x5b/0x80 kernel: [] xfs_bmapi+0x240/0x1400 kernel: [] xfs_iomap+0x176/0x4a0 kernel: [] iget_locked+0x6a/0xe0 kernel: [] linvfs_get_block_core+0x9e/0x2a0 kernel: [] xfs_iaccess+0xb1/0x210 kernel: [] xfs_access+0x3f/0x50 kernel: [] linvfs_permission+0x0/0x20 kernel: [] linvfs_get_block+0x25/0x30 kernel: [] do_mpage_readpage+0x248/0x330 kernel: [] radix_tree_node_alloc+0x10/0x50 kernel: [] radix_tree_insert+0x76/0x110 kernel: [] add_to_page_cache+0x5a/0xb0 kernel: [] mpage_readpages+0xee/0x140 kernel: [] linvfs_get_block+0x0/0x30 kernel: [] ip_finish_output2+0xb8/0x1e0 kernel: [] ip_finish_output2+0x0/0x1e0 kernel: [] nf_hook_slow+0x13f/0x160 kernel: [] read_pages+0x130/0x140 kernel: [] linvfs_get_block+0x0/0x30 kernel: [] buffered_rmqueue+0xf7/0x1e0 kernel: [] __alloc_pages+0x9f/0x2e0 kernel: [] do_page_cache_readahead+0x1a4/0x1d0 kernel: [] page_cache_readahead+0xe9/0x1f0 kernel: [] do_generic_mapping_read+0xed/0x4d0 kernel: [] generic_file_sendfile+0x6b/0x90 kernel: [] nfsd_read_actor+0x0/0xd0 [nfsd] kernel: [] xfs_sendfile+0xc7/0x1b0 kernel: [] nfsd_read_actor+0x0/0xd0 [nfsd] kernel: [] linvfs_sendfile+0x4a/0x60 kernel: [] nfsd_read_actor+0x0/0xd0 [nfsd] kernel: [] nfsd_read+0x226/0x3e0 [nfsd] kernel: [] nfsd_read_actor+0x0/0xd0 [nfsd] kernel: [] 
nfsd3_proc_read+0xaf/0x170 [nfsd] kernel: [] nfs3svc_decode_readargs+0x0/0x190 [nfsd] kernel: [] nfsd_dispatch+0x8a/0x1f0 [nfsd] kernel: [] svc_authenticate+0xd3/0x120 [sunrpc] kernel: [] svc_process+0x57d/0x610 [sunrpc] kernel: [] nfs3svc_release_fhandle+0x0/0x10 [nfsd] kernel: [] nfsd+0x1cb/0x380 [nfsd] kernel: [] nfsd+0x0/0x380 [nfsd] kernel: [] kernel_thread_helper+0x5/0x18 again it starts with "kernel: nfsd: page allocation failure" - should i send this to an nfs list or lkml instead? but it always seems to happen with xfs. the nfs load was not so heavy this time. the system is (as last time) still usable, but at some point i can trigger a system lockup, so a clean reboot is a good idea... i wonder what the real issue is here.... Thanks, Christian. - -- BOFH excuse #405: Sysadmins unavailable because they are in a meeting talking about why they are unavailable so much. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org iD4DBQFA69IQC/PVm5+NVoYRAor+AJjeaHM0WPkeQEWixKNQq+b1XhpuAKD4HaJm BaaE0A+lkfGfCbNs3Qe0+g== =fqvC -----END PGP SIGNATURE----- From owner-linux-xfs@oss.sgi.com Wed Jul 7 04:05:30 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 07 Jul 2004 04:05:35 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.12.10/8.12.9) with ESMTP id i67B5Ugi003402 for ; Wed, 7 Jul 2004 04:05:30 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.12.10/8.12.8/Submit) id i67B5U4p003400 for linux-xfs@oss.sgi.com; Wed, 7 Jul 2004 04:05:30 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.12.10/8.12.9) with ESMTP id i67B5Sgi003388 for ; Wed, 7 Jul 2004 04:05:28 -0700 Received: (from apache@localhost) by oss.sgi.com (8.12.10/8.12.8/Submit) id i67Ahdn2002511; Wed, 7 Jul 2004 03:43:39 -0700 Date: Wed, 7 Jul 2004 03:43:39 -0700 Message-Id: <200407071043.i67Ahdn2002511@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com 
Subject: [Bug 343] New: still xfs/nfs oopses with 2.6.7-bk17 X-Bugzilla-Reason: AssignedTo X-archive-position: 3605 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 3794 Lines: 88 http://oss.sgi.com/bugzilla/show_bug.cgi?id=343 Summary: still xfs/nfs oopses with 2.6.7-bk17 Product: Linux XFS Version: Current Platform: All OS/Version: Linux Status: NEW Severity: normal Priority: High Component: XFS kernel code AssignedTo: xfs-master@oss.sgi.com ReportedBy: evilninja@gmx.net kernel is 2.6.7-bk17 and today i got an oops again: kernel: nfsd: page allocation failure. order:5, mode:0xd0 kernel: [] __alloc_pages+0x2d1/0x2e0 kernel: [] __get_free_pages+0x18/0x40 kernel: [] kmem_getpages+0x19/0xb0 kernel: [] cache_grow+0xb6/0x190 kernel: [] cache_alloc_refill+0x1db/0x220 kernel: [] ip_local_deliver_finish+0x0/0x1d0 kernel: [] __kmalloc+0x5c/0x60 kernel: [] kmem_alloc+0x5e/0xf0 kernel: [] xfs_iread_extents+0x66/0x110 kernel: [] ip_local_deliver_finish+0x0/0x1d0 kernel: [] xfs_bmapi+0x1e9/0x1400 kernel: [] sk_reset_timer+0xc/0x20 kernel: [] tcp_clean_rtx_queue+0x29b/0x3d0 kernel: [] ipt_do_table+0x2eb/0x500 [ip_tables] kernel: [] tcp_ack+0xdd/0x5d0 kernel: [] xfs_bmap_do_search_extents+0xc8/0x400 kernel: [] ip_local_deliver_finish+0x0/0x1d0 kernel: [] xfs_bmap_search_extents+0x5b/0x80 kernel: [] xfs_bmapi+0x240/0x1400 kernel: [] xfs_iomap+0x176/0x4a0 kernel: [] iget_locked+0x6a/0xe0 kernel: [] linvfs_get_block_core+0x9e/0x2a0 kernel: [] xfs_iaccess+0xb1/0x210 kernel: [] xfs_access+0x3f/0x50 kernel: [] linvfs_permission+0x0/0x20 kernel: [] linvfs_get_block+0x25/0x30 kernel: [] do_mpage_readpage+0x248/0x330 kernel: [] radix_tree_node_alloc+0x10/0x50 kernel: [] radix_tree_insert+0x76/0x110 kernel: [] add_to_page_cache+0x5a/0xb0 kernel: [] mpage_readpages+0xee/0x140 kernel: [] linvfs_get_block+0x0/0x30 kernel: 
[] ip_finish_output2+0xb8/0x1e0 kernel: [] ip_finish_output2+0x0/0x1e0 kernel: [] nf_hook_slow+0x13f/0x160 kernel: [] read_pages+0x130/0x140 kernel: [] linvfs_get_block+0x0/0x30 kernel: [] buffered_rmqueue+0xf7/0x1e0 kernel: [] __alloc_pages+0x9f/0x2e0 kernel: [] do_page_cache_readahead+0x1a4/0x1d0 kernel: [] page_cache_readahead+0xe9/0x1f0 kernel: [] do_generic_mapping_read+0xed/0x4d0 kernel: [] generic_file_sendfile+0x6b/0x90 kernel: [] nfsd_read_actor+0x0/0xd0 [nfsd] kernel: [] xfs_sendfile+0xc7/0x1b0 kernel: [] nfsd_read_actor+0x0/0xd0 [nfsd] kernel: [] linvfs_sendfile+0x4a/0x60 kernel: [] nfsd_read_actor+0x0/0xd0 [nfsd] kernel: [] nfsd_read+0x226/0x3e0 [nfsd] kernel: [] nfsd_read_actor+0x0/0xd0 [nfsd] kernel: [] nfsd3_proc_read+0xaf/0x170 [nfsd] kernel: [] nfs3svc_decode_readargs+0x0/0x190 [nfsd] kernel: [] nfsd_dispatch+0x8a/0x1f0 [nfsd] kernel: [] svc_authenticate+0xd3/0x120 [sunrpc] kernel: [] svc_process+0x57d/0x610 [sunrpc] kernel: [] nfs3svc_release_fhandle+0x0/0x10 [nfsd] kernel: [] nfsd+0x1cb/0x380 [nfsd] kernel: [] nfsd+0x0/0x380 [nfsd] kernel: [] kernel_thread_helper+0x5/0x18 this is debian/unstable (i386), gcc-3.4.0, xfsprogs-2.6.11-1. (i sent a mail to linux-xfs, but i think bugzilla is the preferred location to report bugs) ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. 
From owner-linux-xfs@oss.sgi.com Wed Jul 7 04:08:23 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 07 Jul 2004 04:08:50 -0700 (PDT) Received: from pentafluge.infradead.org ([213.146.154.40]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i67B81gi003865 for ; Wed, 7 Jul 2004 04:08:22 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.33 #1 (Red Hat Linux)) id 1BiAHQ-00048P-Vu; Wed, 07 Jul 2004 12:08:00 +0100 Date: Wed, 7 Jul 2004 12:08:00 +0100 From: Christoph Hellwig To: evilninja Cc: linux-xfs@oss.sgi.com Subject: Re: still xfs/nfs oopses with 2.6.7-bk17 Message-ID: <20040707110800.GA15891@infradead.org> References: <40EBD210.1080307@gmx.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <40EBD210.1080307@gmx.net> User-Agent: Mutt/1.4.1i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 3606 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 414 Lines: 13 On Wed, Jul 07, 2004 at 12:36:00PM +0200, evilninja wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > hi list, > > last week i encountered an xfs error, related with nfs and was told to > use a newer kernel (subject was: "oops with xfs / nfs"). hm, shiny new > kernel is 2.6.7-bk17 and today i got an oops again: Does the box actually hang? The message below just means a memory allocation failed. 
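A quick arithmetic aside on what "order:5" in that allocation failure means (the 4 KiB page size is an assumption based on the i386 platform mentioned in the bug report):

```shell
# An order-n allocation asks the kernel for 2^n physically contiguous pages.
# With 4 KiB pages (typical for i386), order 5 is a single 128 KiB chunk --
# large contiguous requests like this can fail under memory fragmentation
# even when plenty of total memory is free, which is why the box keeps running.
order=5
page_size=4096
echo "$(( (1 << order) * page_size )) bytes"   # prints "131072 bytes"
```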
From owner-linux-xfs@oss.sgi.com Wed Jul 7 05:18:59 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 07 Jul 2004 05:19:02 -0700 (PDT) Received: from mail.gmx.net (pop.gmx.net [213.165.64.20]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i67CIwgi014805 for ; Wed, 7 Jul 2004 05:18:59 -0700 Received: (qmail 4064 invoked by uid 65534); 7 Jul 2004 12:18:52 -0000 Received: from G0c22.g.pppool.de (EHLO [192.168.1.11]) (80.185.12.34) by mail.gmx.net (mp011) with SMTP; 07 Jul 2004 14:18:52 +0200 X-Authenticated: #2986359 Message-ID: <40EBEA29.1010602@gmx.net> Date: Wed, 07 Jul 2004 14:18:49 +0200 From: evilninja User-Agent: Thunderbird 0.6 (X11/20040605) X-Accept-Language: de-de, de-at, de, en-us, en MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: Re: still xfs/nfs oopses with 2.6.7-bk17 References: <40EBD210.1080307@gmx.net> <20040707110800.GA15891@infradead.org> In-Reply-To: <20040707110800.GA15891@infradead.org> X-Enigmail-Version: 0.83.6.0 X-Enigmail-Supports: pgp-inline, pgp-mime Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-archive-position: 3607 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: evilninja@gmx.net Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1189 Lines: 38 -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Christoph Hellwig wrote: > Does the box actually hang? The message below just means a memory > allocation failed. no, it does not hang. but last time, when i just wanted to unexport/unmount my nfs shares, the box hung. this time it did not: i unmounted the nfs shares from the client, unexported on the server, unmounted on the server, then mounted/exported/mounted everything again with no hangs (i do draw a distinction here from a lockup: the box always seems to respond to SysRq, although last time it could not write to disk any more). 
now i don't have to reboot :-) when you say "memory allocation", is this related to the in-kernel bugs Chris told me about, or do you suspect any hardware memory corruption? because i don't; the machine has always been quite stable... thanks, Christian. - -- BOFH excuse #418: Sysadmins busy fighting SPAM. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org iD8DBQFA6+opC/PVm5+NVoYRAl12AJ4hTxdImUFdxSpndUsqW7Nrp0UR4QCg8RUI jSxvJNx85I+Sf0ZCwjkD33Y= =2abc -----END PGP SIGNATURE----- From owner-linux-xfs@oss.sgi.com Wed Jul 7 06:26:35 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 07 Jul 2004 06:26:37 -0700 (PDT) Received: from zok.sgi.com (mtvcafw.sgi.com [192.48.171.6]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i67DQYgi016987 for ; Wed, 7 Jul 2004 06:26:34 -0700 Received: from flecktone.americas.sgi.com (flecktone.americas.sgi.com [192.48.203.135]) by zok.sgi.com (8.12.9/8.12.9/linux-outbound_gateway-1.1) with ESMTP id i67DP7hv007057 for ; Wed, 7 Jul 2004 06:25:07 -0700 Received: from maine.americas.sgi.com (maine.americas.sgi.com [128.162.232.87]) by flecktone.americas.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id i67DP7Ke42170712; Wed, 7 Jul 2004 08:25:07 -0500 (CDT) Received: from nstraz by maine.americas.sgi.com with local (Exim 3.36 #1 (Debian)) id 1BiCQ6-0001MI-00; Wed, 07 Jul 2004 08:25:06 -0500 Date: Wed, 7 Jul 2004 08:25:06 -0500 From: Nathan Straz To: parvath reddy suprima Cc: linux-xfs@oss.sgi.com Subject: Re: XFS Message-ID: <20040707132506.GE14640@sgi.com> Mail-Followup-To: parvath reddy suprima , linux-xfs@oss.sgi.com References: <20040706210325.38504.qmail@web51503.mail.yahoo.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040706210325.38504.qmail@web51503.mail.yahoo.com> User-Agent: Mutt/1.5.3i X-archive-position: 3608 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com 
Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nstraz@sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1193 Lines: 30 On Tue, Jul 06, 2004 at 02:03:25PM -0700, parvath reddy suprima wrote: > can u tell me how to partition the disk and mount XFS and my Debian > machine already has ext3 filesystem mounted on the disk.do i hav to > make system backup before i mount XFS and how to do it.. please can u > help me with this... Are you planning on reinstalling, switching, or adding another partition? Reinstalling: Use the new debian-installer CDs for Sarge. They have native XFS support. Switching: This is a little trickier, especially if you want to do this in place. I would recommend getting help with this from someone at your local Linux User Group. Adding another partition: Upgrade your kernel to something past 2.4.25 (IIRC) and install the xfsprogs package. Reboot to the new kernel and verify that xfs is listed in /proc/filesystems. You might have to modprobe the xfs module. Now you can mkfs your new partition and mount it wherever you please. 
-- Nate Straz nstraz@sgi.com sgi, inc http://www.sgi.com/ Linux Test Project http://ltp.sf.net/ From owner-linux-xfs@oss.sgi.com Wed Jul 7 16:54:34 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 07 Jul 2004 16:54:37 -0700 (PDT) Received: from mail2.catalyst.net.nz (godel.catalyst.net.nz [202.49.159.12]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i67NsXgi019147 for ; Wed, 7 Jul 2004 16:54:34 -0700 Received: from leibniz.catalyst.net.nz ([202.49.159.7] helo=shankara.wgtn.cat-it.co.nz) by mail2.catalyst.net.nz with asmtp (Cipher TLSv1:RC4-MD5:128) (Exim 3.35 #1 (Debian)) id 1BiMF8-0004Cr-02 for ; Thu, 08 Jul 2004 11:54:26 +1200 From: Steve Wray Reply-To: stevew@catalyst.net.nz To: linux-xfs@oss.sgi.com Subject: debugging xfs issue (maybe) Date: Thu, 8 Jul 2004 11:54:23 +1200 User-Agent: KMail/1.6.1 MIME-Version: 1.0 Content-Disposition: inline Content-Type: Text/Plain; charset="us-ascii" Message-Id: <200407081154.25556.stevew@catalyst.net.nz> X-System-Filter-Id: mail2.catalyst.net.nz 1BiMF8-0004Cr-02 X-Virus-Scanned-By: Amavis with CLAM Anti Virus on mail2.catalyst.net.nz Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id i67NsZgi019148 X-archive-position: 3609 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: stevew@catalyst.net.nz Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 869 Lines: 30 -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi there, We've been having ongoing filesystem corruption problems with a critical machine that's running LVM2, software RAID 5, XFS, SATA and NFS (I've posted to the list about this before). We keep trying new kernels and so forth, but nothing has helped. We are going to cut our losses and change it over to ext3, keeping everything else the same. 
This is a little way off yet so I have an opportunity to prepare some testing; we will be doing a full backup and rebuild so I can arrange for some destructive testing. If there's anyone on the list who wants to help out with advice, debugging, etc., please let me know. Thanks! -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iD8DBQFA7I0vmVx2hyhuTucRAiQTAKC34jQXOKzTFQlJnTZKrVNertBCeQCfQQb6 cK7lEz+TRIL2NsqR4uOIuS4= =rkad -----END PGP SIGNATURE----- From owner-linux-xfs@oss.sgi.com Wed Jul 7 17:30:55 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 07 Jul 2004 17:30:57 -0700 (PDT) Received: from zok.sgi.com (mtvcafw.sgi.com [192.48.171.6]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i680Utgi021264 for ; Wed, 7 Jul 2004 17:30:55 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by zok.sgi.com (8.12.9/8.12.9/linux-outbound_gateway-1.1) with SMTP id i680Umhv010687 for ; Wed, 7 Jul 2004 17:30:49 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA15496; Thu, 8 Jul 2004 10:30:45 +1000 Received: from wobbly.melbourne.sgi.com (localhost [127.0.0.1]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i680Ugln2112751; Thu, 8 Jul 2004 10:30:43 +1000 (EST) Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id i680UfaX2081765; Thu, 8 Jul 2004 10:30:41 +1000 (EST) Date: Thu, 8 Jul 2004 10:30:41 +1000 From: Nathan Scott To: Steve Wray Cc: linux-xfs@oss.sgi.com Subject: Re: debugging xfs issue (maybe) Message-ID: <20040708103041.D1946083@wobbly.melbourne.sgi.com> References: <200407081154.25556.stevew@catalyst.net.nz> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5i In-Reply-To: <200407081154.25556.stevew@catalyst.net.nz>; from stevew@catalyst.net.nz on Thu, Jul 08, 2004 at 11:54:23AM +1200 
X-archive-position: 3610 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 15854 Lines: 334 Hi Steve, On Thu, Jul 08, 2004 at 11:54:23AM +1200, Steve Wray wrote: > ... > This is a little way off yet so I have an opportunity to prepare some > testing; we will be doing a full backup and rebuild so I can arrange > for some destructive testing. > > If theres anyone on list who wants to help out with advice, debugging > etc please let me know. Could you post a summary of your problems please? Please also add information about the things you were doing when problems arose, the points where you ran repair and what it found, the points where you were reading from the block device concurrently, which kernel versions were involved, what problems still remained after stopping that (and whether you repaired in between), etc. The xfs_info output on the affected filesystem(s) is also very useful. It became quite difficult to follow what state your filesystem might have been in during your earlier posts, so there was not much anyone could do to help out there. If you can find test cases from the problems you experienced that show that, starting from a filesystem in known state A and applying a sequence of operations X, Y and Z, you ended up with filesystem or file data in bad state B - that is the sort of thing I can best help out with, because I can then reproduce and analyse it locally and figure out if XFS is at fault, etc. thanks. 
-- Nathan From owner-linux-xfs@oss.sgi.com Wed Jul 7 17:47:27 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 07 Jul 2004 17:47:36 -0700 (PDT) Received: from mail2.catalyst.net.nz (godel.catalyst.net.nz [202.49.159.12]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i680lPgi022096 for ; Wed, 7 Jul 2004 17:47:26 -0700 Received: from leibniz.catalyst.net.nz ([202.49.159.7] helo=shankara.wgtn.cat-it.co.nz) by mail2.catalyst.net.nz with asmtp (Cipher TLSv1:RC4-MD5:128) (Exim 3.35 #1 (Debian)) id 1BiN4C-00062U-02; Thu, 08 Jul 2004 12:47:12 +1200 From: Steve Wray Reply-To: stevew@catalyst.net.nz To: Nathan Scott Subject: Re: debugging xfs issue (maybe) Date: Thu, 8 Jul 2004 12:47:01 +1200 User-Agent: KMail/1.6.1 Cc: linux-xfs@oss.sgi.com References: <200407081154.25556.stevew@catalyst.net.nz> <20040708103041.D1946083@wobbly.melbourne.sgi.com> In-Reply-To: <20040708103041.D1946083@wobbly.melbourne.sgi.com> MIME-Version: 1.0 Content-Disposition: inline Content-Type: Multipart/Mixed; boundary="Boundary-00=_FmJ7AdN8Hy8jlcr" Message-Id: <200407081247.11376.stevew@catalyst.net.nz> X-System-Filter-Id: mail2.catalyst.net.nz 1BiN4C-00062U-02 X-Virus-Scanned-By: Amavis with CLAM Anti Virus on mail2.catalyst.net.nz X-archive-position: 3611 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: stevew@catalyst.net.nz Precedence: bulk X-list: linux-xfs --Boundary-00=_FmJ7AdN8Hy8jlcr Content-Type: Text/Plain; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable Content-Disposition: inline -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Thursday 08 July 2004 12:30, Nathan Scott wrote: > Hi Steve, > > On Thu, Jul 08, 2004 at 11:54:23AM +1200, Steve Wray wrote: > > ... > > This is a little way off yet so I have an opportunity to prepare > > some testing; we will be doing a full backup and rebuild so I can > > arrange for some destructive testing. 
> > If theres anyone on list who wants to help out with advice, > > debugging etc please let me know. > > Could you post a summary of your problems please? Well... the first thing we notice is that a nightly cron job returns things like: /etc/cron.daily/standard: find: /var/lib/postgres/data/base/141159285/141159709: Unknown error 990 find: /var/lib/postgres/data/base/141159285/141159709: Unknown error 990 find: /var/lib/postgres/data/base/141159285/141159709: Unknown error 990 and we look in dmesg and see things like: Filesystem "dm-0": corrupt dinode 67279401, extent total = 1025, nblocks = 1. Unmount and run xfs_repair. 0x0: 49 4e 81 b4 01 02 00 01 00 00 04 10 00 00 00 64 Filesystem "dm-0": XFS internal error xfs_iformat(1) at line 475 of file fs/xfs/xfs_inode.c. Caller 0xc024630a [] xfs_error_report+0x3a/0x3c [] xfs_corruption_error+0x3b/0x48 [] xfs_iread+0x132/0x23c [] xfs_iformat+0x1e6/0x558 [] xfs_iread+0x132/0x23c [] xfs_iread+0x132/0x23c [] xfs_iget_core+0x26c/0x5c0 [] xfs_iget+0x8c/0x164 [] xfs_dir_lookup_int+0x63/0xc8 [] xfs_lookup+0x3e/0x68 [] linvfs_lookup+0x3f/0x80 [] real_lookup+0x59/0xcc [] do_lookup+0x45/0x84 [] link_path_walk+0x66b/0x944 [] path_lookup+0x185/0x18c [] __user_walk+0x28/0x40 [] vfs_lstat+0x16/0x44 [] sys_lstat64+0x13/0x30 [] error_code+0x2d/0x38 [] syscall_call+0x7/0xb > Also adding > information about the things you being done when problems arose, So far we've just sucked in our guts and kept at it; the machine in question is fairly key here and the corruption hasn't so far impacted anything we couldn't sort out. The main thing we've been doing is reboot into single user and run xfs_repair over all the xfs filesystems. I have scripted this and generated logs of the output of xfs_repair. I'll tar them up and attach them. 
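The single-user repair pass Steve describes could look roughly like the sketch below. The fstab parsing and the log path are assumptions (his actual script is in the attached tarball, not shown here), and every filesystem must be unmounted before xfs_repair touches it.

```shell
# Hedged sketch: run xfs_repair over each xfs filesystem listed in
# /etc/fstab, appending all output to one log. The filesystems must be
# unmounted first (hence doing this from single-user mode).
log=/tmp/xfs_repair.log        # hypothetical log location
: > "$log"
awk '$3 == "xfs" { print $1 }' /etc/fstab |
while read -r dev; do
    echo "=== $dev ===" >> "$log"
    xfs_repair "$dev" >> "$log" 2>&1
done
```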
> points where you ran repair and what it found, the points where > you were reading from the block device concurrently, with which > kernel versions,=20 various 2.6 kernels, currently its on 2.6.7 As for the other details we havn't had a chance to do any *real* work getting this sort of info or running any real tests to see exactly what kind of behavior produces the problems. Its busy here, very busy. > what problems still remained after stopping=20 > that (whether you repaired inbetween), etc, etc. The xfs_info > output on the affected filesystem(s) is also very useful. I'll include this with the logs from the last xfs_repair > It became quite difficult to follow what state your filesystem > might have been in during your earlier posts, so there was not > much anyone could do to help out there. > > If you can find test cases from the problems you experienced, > that show going from a filesystem in known state A, applied a > sequence of operations X, Y and Z, and ended up with filesystem > or file data in bad state B - that is the sort of thing I can > best help out with, because I can then reproduce and analyse it > locally and figure out if XFS is at fault, etc. So far we havn't been able to reliably reproduce the problem on demand. Its a very active box. We reboot it, xfs_repair everything, bring it up multiuser and sometimes the same day, sometimes weeks=20 later we find corruption. One time we went over a month without any problems and thought the latest kernel upgrade had fixed it, then it came back. 
-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iD8DBQFA7JmOmVx2hyhuTucRAl18AKDOyY1o+QIIdWj7JQ7mB8N9RVKX/gCfeQT4 OdtTslYqXmJyUb/Vp0tAijM= =pKUh -----END PGP SIGNATURE----- --Boundary-00=_FmJ7AdN8Hy8jlcr Content-Type: application/x-tbz; name="xfs.logs.tar.bz2" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="xfs.logs.tar.bz2" [[base64 attachment data deleted]] 
--Boundary-00=_FmJ7AdN8Hy8jlcr-- From owner-linux-xfs@oss.sgi.com Wed Jul 7 18:14:49 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 07 Jul 2004 18:14:52 -0700 (PDT) Received: from zok.sgi.com (mtvcafw.sgi.com [192.48.171.6]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i681Engi023630 for ; Wed, 7 Jul 2004 18:14:49 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by zok.sgi.com (8.12.9/8.12.9/linux-outbound_gateway-1.1) with SMTP id i681Edhv010918 for ; Wed, 7 Jul 2004 18:14:40 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA16353; Thu, 8 Jul 2004 11:14:23 +1000 Received: from wobbly.melbourne.sgi.com (localhost [127.0.0.1]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i681EIln2082769; Thu, 8 Jul 2004 11:14:19 +1000 (EST) Received: (from nathans@localhost) by
wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id i681EGHR1724224; Thu, 8 Jul 2004 11:14:16 +1000 (EST) Date: Thu, 8 Jul 2004 11:14:16 +1000 From: Nathan Scott To: Steve Wray Cc: linux-xfs@oss.sgi.com Subject: Re: debugging xfs issue (maybe) Message-ID: <20040708111416.E1946083@wobbly.melbourne.sgi.com> References: <200407081154.25556.stevew@catalyst.net.nz> <20040708103041.D1946083@wobbly.melbourne.sgi.com> <200407081247.11376.stevew@catalyst.net.nz> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5i In-Reply-To: <200407081247.11376.stevew@catalyst.net.nz>; from stevew@catalyst.net.nz on Thu, Jul 08, 2004 at 12:47:01PM +1200 X-archive-position: 3612 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 2233 Lines: 50 On Thu, Jul 08, 2004 at 12:47:01PM +1200, Steve Wray wrote: > Well... the first thing we notice is that a nightly cron job returns things like; > > /etc/cron.daily/standard: > find: /var/lib/postgres/data/base/141159285/141159709: Unknown error 990 > find: /var/lib/postgres/data/base/141159285/141159709: Unknown error 990 > find: /var/lib/postgres/data/base/141159285/141159709: Unknown error 990 > > and we look in dmesg and see things like; > > Filesystem "dm-0": corrupt dinode 67279401, extent total = 1025, nblocks = 1. Unmount and run xfs_repair. Hmm, that line is quite interesting - we shut down the filesystem because that number of blocks is impossible with that number of extents. What I suspect you have there is a very fragmented file (with 1025 extents) - this puts a lot of strain on the memory allocator since we need to allocate contiguous pages to hold that information.
In most kernels until very recently, these allocations could fail, in very low-memory situations, and cause incore memory corruption - I wonder if that's what you have been seeing here. FWIW, the current -bk tree has a memory allocator which will not fail the above memory-holding-inode-extents allocation. > > If you can find test cases from the problems you experienced, > > that show going from a filesystem in known state A, applied a > > sequence of operations X, Y and Z, and ended up with filesystem > > or file data in bad state B - that is the sort of thing I can > > best help out with, because I can then reproduce and analyse it > > locally and figure out if XFS is at fault, etc. > > So far we haven't been able to reliably reproduce the problem on demand. > It's a very active box. We reboot it, xfs_repair everything, > bring it up multiuser and sometimes the same day, sometimes weeks > later we find corruption. One time we went over a month without any > problems and thought the latest kernel upgrade had fixed it, then > it came back. That kind of randomness could be consistent with very low-memory situations. It would be a good idea to keep an eye on the memory stats (lots of tools to export this info) to see if there is any correlation to VM strain and the failures you've been seeing. cheers.
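[Editorial illustration] The impossibility Nathan points at is a simple invariant: every extent maps at least one filesystem block, so an inode can never legitimately record more extents than allocated blocks. A minimal sketch of that check, using a hypothetical function name rather than the actual XFS kernel code:

```python
def dinode_looks_corrupt(nextents: int, nblocks: int) -> bool:
    """Hypothetical model of the sanity check: each extent covers at
    least one block, so nextents > nblocks cannot occur on a healthy
    on-disk inode."""
    return nextents > nblocks

# The numbers from the dmesg line above: extent total = 1025, nblocks = 1.
print(dinode_looks_corrupt(1025, 1))     # True: impossible combination
print(dinode_looks_corrupt(1025, 4096))  # False: fragmented but consistent
```

When the real check trips, XFS takes the conservative path described in the log line: shut the filesystem down and ask the admin to run xfs_repair, rather than risk acting on a corrupt inode.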
-- Nathan From owner-linux-xfs@oss.sgi.com Wed Jul 7 18:21:41 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 07 Jul 2004 18:21:46 -0700 (PDT) Received: from mail2.catalyst.net.nz (godel.catalyst.net.nz [202.49.159.12]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i681Legi024315 for ; Wed, 7 Jul 2004 18:21:41 -0700 Received: from leibniz.catalyst.net.nz ([202.49.159.7] helo=shankara.wgtn.cat-it.co.nz) by mail2.catalyst.net.nz with asmtp (Cipher TLSv1:RC4-MD5:128) (Exim 3.35 #1 (Debian)) id 1BiNbJ-0007Xy-02; Thu, 08 Jul 2004 13:21:25 +1200 From: Steve Wray Reply-To: stevew@catalyst.net.nz To: Nathan Scott Subject: Re: debugging xfs issue (maybe) Date: Thu, 8 Jul 2004 13:21:23 +1200 User-Agent: KMail/1.6.1 Cc: linux-xfs@oss.sgi.com References: <200407081154.25556.stevew@catalyst.net.nz> <200407081247.11376.stevew@catalyst.net.nz> <20040708111416.E1946083@wobbly.melbourne.sgi.com> In-Reply-To: <20040708111416.E1946083@wobbly.melbourne.sgi.com> MIME-Version: 1.0 Content-Disposition: inline Content-Type: Text/Plain; charset="iso-8859-1" Message-Id: <200407081321.24868.stevew@catalyst.net.nz> X-System-Filter-Id: mail2.catalyst.net.nz 1BiNbJ-0007Xy-02 X-Virus-Scanned-By: Amavis with CLAM Anti Virus on mail2.catalyst.net.nz Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id i681Lfgi024316 X-archive-position: 3613 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: stevew@catalyst.net.nz Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 2503 Lines: 63 -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Thursday 08 July 2004 13:14, Nathan Scott wrote: > On Thu, Jul 08, 2004 at 12:47:01PM +1200, Steve Wray wrote: > > Well... 
the first thing we notice is that a nightly cron job > > returns things like; > > > > /etc/cron.daily/standard: > > find: /var/lib/postgres/data/base/141159285/141159709: Unknown > > error 990 find: /var/lib/postgres/data/base/141159285/141159709: > > Unknown error 990 find: > > /var/lib/postgres/data/base/141159285/141159709: Unknown error 990 > > > > and we look in dmesg and see things like; > > > > Filesystem "dm-0": corrupt dinode 67279401, extent total = 1025, > > nblocks = 1. Unmount and run xfs_repair. > > Hmm, that line is quite interesting - we shut down the filesystem > because that number of blocks is impossible with that number of > extents. > > What I suspect you have there is a very fragmented file (with 1025 > extents) - this puts a lot of strain on the memory allocator since > we need to allocate contiguous pages to hold that information. In > most kernels until very recently, these allocations could fail, in > very low-memory situations, and cause incore memory corruption - I > wonder if that's what you have been seeing here. This machine has 1G of RAM and looking at its memory profile over the last few months, it's not been pushing it... > FWIW, the current -bk tree has a memory allocator which will not > fail the above memory-holding-inode-extents allocation. OK, well it's worth a shot; we will see about (yet another) new kernel :) > > So far we haven't been able to reliably reproduce the problem on > > demand. It's a very active box. We reboot it, xfs_repair everything, > > bring it up multiuser and sometimes the same day, sometimes weeks > > later we find corruption. One time we went over a month without any > > problems and thought the latest kernel upgrade had fixed it, then > > it came back. > > That kind of randomness could be consistent with very low-memory > situations.
It would be a good idea to keep an eye on the memory > stats (lots of tools to export this info) to see if there is any > correlation to VM strain and the failures you've been seeing. We've been monitoring it with webminstats for some time now and have a good idea of its memory profile. It rarely uses swap. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iD8DBQFA7KGTmVx2hyhuTucRAqhyAKCPj8QZn0BxUwsMNfpSdRQ2VlORVgCfXhj+ KL3ad8USRUODiLgz4Xm0cRs= =FltC -----END PGP SIGNATURE----- From owner-linux-xfs@oss.sgi.com Thu Jul 8 00:18:30 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 08 Jul 2004 00:18:33 -0700 (PDT) Received: from omx2.sgi.com (mtvcafw.sgi.com [192.48.171.6]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i687IUgi015048 for ; Thu, 8 Jul 2004 00:18:30 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by omx2.sgi.com (8.12.11/8.12.9/linux-outbound_gateway-1.1) with ESMTP id i687U0vH013685 for ; Thu, 8 Jul 2004 00:30:01 -0700 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i687IGap8250488; Thu, 8 Jul 2004 17:18:16 +1000 (EST) Received: (from nathans@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id i687IEXW7028346; Thu, 8 Jul 2004 17:18:14 +1000 (EST) Date: Thu, 8 Jul 2004 17:18:14 +1000 (EST) From: Nathan Scott Message-Id: <200407080718.i687IEXW7028346@snort.melbourne.sgi.com> To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 917518 - fix xfs_off_t type X-archive-position: 3614 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@snort.melbourne.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 415 Lines: 14 Fix xfs_off_t to be signed, not unsigned; valid warnings emitted after stricter compilation options used by some OSDL folks. 
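[Editorial illustration] A signed xfs_off_t matters beyond silencing compiler warnings: file offsets get compared against negative sentinels (such as -1 for errors), and with an unsigned type a `< 0` test is dead code. A hedged sketch of the wraparound using Python's fixed-width ctypes integers, not the actual XFS code:

```python
import ctypes

# Storing the error sentinel -1 in an unsigned 64-bit "offset" wraps
# to a huge positive value, so an "offset < 0" check can never fire.
unsigned_off = ctypes.c_uint64(-1).value
signed_off = ctypes.c_int64(-1).value

print(unsigned_off)       # 18446744073709551615
print(unsigned_off < 0)   # False: the error check is dead code
print(signed_off < 0)     # True: only a signed type catches it
```

This is the class of bug that stricter compilation options surface as signed/unsigned comparison warnings, as the change description notes.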
Date: Thu Jul 8 00:17:52 PDT 2004 Workarea: snort.melbourne.sgi.com:/home/nathans/xfs-linux Inspected by: sandeen@sgi.com,cattelan@sgi.com The following file(s) were checked into: bonnie.engr.sgi.com:/isms/xfs-kern/xfs-linux Modid: xfs-linux:xfs-kern:174814a xfs_types.h - 1.74 From owner-linux-xfs@oss.sgi.com Thu Jul 8 00:23:20 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 08 Jul 2004 00:23:39 -0700 (PDT) Received: from omx1.americas.sgi.com (cfcafw.sgi.com [198.149.23.1]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i687NJgi018517 for ; Thu, 8 Jul 2004 00:23:20 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by omx1.americas.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with ESMTP id i6878E0f005415 for ; Thu, 8 Jul 2004 02:08:15 -0500 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i68785ap8395657; Thu, 8 Jul 2004 17:08:06 +1000 (EST) Received: (from nathans@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id i68784mt8271499; Thu, 8 Jul 2004 17:08:04 +1000 (EST) Date: Thu, 8 Jul 2004 17:08:04 +1000 (EST) From: Nathan Scott Message-Id: <200407080708.i68784mt8271499@snort.melbourne.sgi.com> To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 917520 - unwritten extents X-archive-position: 3615 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@snort.melbourne.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 450 Lines: 15 Fix a possible data loss issue after an unaligned unwritten extent write (zeroes could be returned in final block on subsequent reads due to an off-by-one block count calculation). 
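[Editorial illustration] The off-by-one described above is the classic trap when converting an unaligned byte range into a block count: `ceil(count / blocksize)` ignores where the range starts inside its first block, so an unaligned write can touch one more block than that formula reports. A hedged sketch of the two calculations, illustrative only and not the xfs_iomap.c change itself:

```python
def blocks_spanned(offset: int, count: int, bsize: int) -> int:
    """Correct: number of blocks a byte range touches, accounting
    for the offset within the first block."""
    first = offset // bsize
    last = (offset + count - 1) // bsize
    return last - first + 1

def blocks_naive(count: int, bsize: int) -> int:
    """Off-by-one for unaligned ranges: ignores the start offset."""
    return (count + bsize - 1) // bsize

# An unaligned 4096-byte write starting mid-block (bsize = 4096):
print(blocks_spanned(2048, 4096, 4096))  # 2: the range straddles a boundary
print(blocks_naive(4096, 4096))          # 1: the final block is missed
```

An undercount like this is consistent with the symptom in the change description: the final block of the range is never converted, so subsequent reads of it return zeroes.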
Date: Thu Jul 8 00:06:40 PDT 2004 Workarea: snort.melbourne.sgi.com:/home/nathans/xfs-linux Inspected by: tes@sgi.com The following file(s) were checked into: bonnie.engr.sgi.com:/isms/xfs-kern/xfs-linux Modid: xfs-linux:xfs-kern:174810a xfs_iomap.c - 1.28 From owner-linux-xfs@oss.sgi.com Thu Jul 8 00:38:20 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 08 Jul 2004 00:38:37 -0700 (PDT) Received: from omx1.americas.sgi.com (cfcafw.sgi.com [198.149.23.1]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i687cKgi019102 for ; Thu, 8 Jul 2004 00:38:20 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by omx1.americas.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with ESMTP id i687Rc0f014243 for ; Thu, 8 Jul 2004 02:27:39 -0500 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i687RZap8421023; Thu, 8 Jul 2004 17:27:35 +1000 (EST) Received: (from nathans@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id i687RYCw7894918; Thu, 8 Jul 2004 17:27:34 +1000 (EST) Date: Thu, 8 Jul 2004 17:27:34 +1000 (EST) From: Nathan Scott Message-Id: <200407080727.i687RYCw7894918@snort.melbourne.sgi.com> To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: PARTIAL TAKE 917328 - remove impossible null check X-archive-position: 3616 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@snort.melbourne.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 445 Lines: 14 xfs_Gqm_init cannot fail, don't check return value. Removes a bit of dead code and a false positive from the Stanford lock checker to boot.
Date: Thu Jul 8 00:26:52 PDT 2004 Workarea: snort.melbourne.sgi.com:/home/nathans/xfs-linux Inspected by: felixb@sgi.com,gwehrman@sgi.com,makc@sgi.com The following file(s) were checked into: bonnie.engr.sgi.com:/isms/xfs-kern/xfs-linux Modid: xfs-linux:xfs-kern:174815a quota/xfs_qm.c - 1.13 From owner-linux-xfs@oss.sgi.com Thu Jul 8 00:44:24 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 08 Jul 2004 00:44:27 -0700 (PDT) Received: from omx2.sgi.com (mtvcafw.sgi.com [192.48.171.6]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i687iNgi019542 for ; Thu, 8 Jul 2004 00:44:24 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by omx2.sgi.com (8.12.11/8.12.9/linux-outbound_gateway-1.1) with ESMTP id i687troa022579 for ; Thu, 8 Jul 2004 00:55:54 -0700 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i687i9ap8366664; Thu, 8 Jul 2004 17:44:09 +1000 (EST) Received: (from nathans@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id i687i8Zr8382927; Thu, 8 Jul 2004 17:44:08 +1000 (EST) Date: Thu, 8 Jul 2004 17:44:08 +1000 (EST) From: Nathan Scott Message-Id: <200407080744.i687i8Zr8382927@snort.melbourne.sgi.com> To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: PARTIAL TAKE 915844 - sparse fixes X-archive-position: 3617 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@snort.melbourne.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1872 Lines: 78 sparse: fix header include order to get cpp macros defined correctly. From Chris Wedgwood. 
Date: Thu Jul 8 00:31:41 PDT 2004 Workarea: snort.melbourne.sgi.com:/home/nathans/xfs-linux Inspected by: cw@f00f.org The following file(s) were checked into: bonnie.engr.sgi.com:/isms/xfs-kern/xfs-linux Modid: xfs-linux:xfs-kern:174816a support/move.c - 1.16 support/ktrace.c - 1.21 sparse: rework previous mods to fix warnings in DMAPI code. From Chris Wedgwood. Date: Thu Jul 8 00:35:45 PDT 2004 Workarea: snort.melbourne.sgi.com:/home/nathans/xfs-linux Inspected by: cw@f00f.org The following file(s) were checked into: bonnie.engr.sgi.com:/isms/xfs-kern/xfs-linux Modid: xfs-linux:xfs-kern:174817a xfs_itable.c - 1.125 xfs_itable.h - 1.44 xfs_dmapi.c - 1.96 quota/xfs_qm_syscalls.c - 1.10 quota/xfs_qm.c - 1.14 sparse: fix warnings in IO path tracing code. From Chris Wedgwood. Date: Thu Jul 8 00:38:08 PDT 2004 Workarea: snort.melbourne.sgi.com:/home/nathans/xfs-linux Inspected by: cw@f00f.org The following file(s) were checked into: bonnie.engr.sgi.com:/isms/xfs-kern/xfs-linux Modid: xfs-linux:xfs-kern:174818a linux-2.6/xfs_lrw.c - 1.213 sparse: fix uses of null in place of zero and vice versa. From Chris Wedgwood. 
Date: Thu Jul 8 00:42:57 PDT 2004 Workarea: snort.melbourne.sgi.com:/home/nathans/xfs-linux Inspected by: cw@f00f.org The following file(s) were checked into: bonnie.engr.sgi.com:/isms/xfs-kern/xfs-linux Modid: xfs-linux:xfs-kern:174819a xfs_log.c - 1.298 xfs_da_btree.c - 1.150 xfs_log_recover.c - 1.288 xfs_trans_item.c - 1.41 xfs_bmap_btree.c - 1.144 xfs_mount.c - 1.346 xfs_inode.c - 1.401 xfs_dir2_trace.c - 1.21 xfs_alloc.c - 1.171 xfs_alloc_btree.c - 1.81 quota/xfs_qm_stats.c - 1.5 quota/xfs_dquot.c - 1.10 linux-2.6/xfs_stats.c - 1.18 linux-2.6/xfs_super.c - 1.311 linux-2.6/xfs_aops.c - 1.76 From owner-linux-xfs@oss.sgi.com Thu Jul 8 01:35:19 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 08 Jul 2004 01:35:51 -0700 (PDT) Received: from indonesia.kscanners.no (indonesia.kscanners.no [193.214.130.21]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i688ZEgi020711 for ; Thu, 8 Jul 2004 01:35:18 -0700 Received: from localhost ([127.0.0.1]) by indonesia.kscanners.no with esmtp (Exim 4.30) id 1BiUMz-0004YH-0i for linux-xfs@oss.sgi.com; Thu, 08 Jul 2004 10:35:05 +0200 Message-ID: <40ED0738.7040309@procaptura.com> Date: Thu, 08 Jul 2004 10:35:04 +0200 From: Toralf Lund User-Agent: Mozilla Thunderbird 0.6 (X11/20040502) X-Accept-Language: en-us, en MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: Linux 2.6 and v1 directories? Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-ACL-Warn: Message sent with invalid HELO/EHLO (localhost [127.0.0.1] presented itself as [127.0.0.1]) X-archive-position: 3618 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: toralf@procaptura.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 30217 Lines: 560 Is there any update on the v1 directory issue, now that Linux 2.6 is released with XFS support and all? 
-- - Toralf From owner-linux-xfs@oss.sgi.com Thu Jul 8 11:11:25 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 08 Jul 2004 11:11:31 -0700 (PDT) Received: from oss.sgi.com (209-253-87-95.dsl.mcleodusa.net [209.253.87.95]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i68IBNgi005691 for ; Thu, 8 Jul 2004 11:11:24 -0700 Message-Id: <200407081811.i68IBNgi005691@oss.sgi.com> From: qbsupport@quickbookswebsitesolutions.com To: linux-xfs@oss.sgi.com Subject: Re: Mail Date: Thu, 8 Jul 2004 11:18:57 -0700 MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_NextPart_000_0016----=_NextPart_000_0016" X-Priority: 3 X-MSMail-Priority: Normal X-archive-position: 3619 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: qbsupport@quickbookswebsitesolutions.com Precedence: bulk X-list: linux-xfs This is a multi-part message in MIME format. ------=_NextPart_000_0016----=_NextPart_000_0016 Content-Type: text/plain; charset="Windows-1252" Content-Transfer-Encoding: 7bit Hello! I have spent much time for the mail.
------=_NextPart_000_0016----=_NextPart_000_0016 Content-Type: application/octet-stream; name="mail8.pif" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="mail8.pif" [base64 attachment data omitted: executable payload of an apparent worm message]
whElmO9MEH9sh6Eposh8/jgY4stUbLj40yJT1WgEn/ox8u6gWguGdfqI1DvW XRs26aEILyckhmwbcdhQVjX8SkhasGKF+1A0o6wGJxbcYNtMGBwUb4Mhtog1 CsUQP1S1iaUS5CCUdzdOL8XCrYkUgDibRElAgPr9QhMLvimiJb7S9oJBC+zv 2rhHBHQ9AaMGihCIFkZAC1sxdmTV684MBFpGLTJ2e0Ac60MeBQSVP5C3QETa 9oMFihiIHtZQXUMfCWUJI8wIoVhjgZhIu0ocWdpuUX0YAE4AtuC2JaGaREte 8RfaPzAzyLyLVRT/AsfQ/v6t8NeFIlx1BEBD6/eSLPbDfsEVJv6pbQ2AeAEi 1mCpNcwPAMLpN/FGCcMIDHwYGA+UElxoKxgF0QvTS9SaofbvDkOIxgZcRrEH gJtvo+2nSoM/VQqKP3Q6BqzbUWV0LigZ4gYDwgkTHxsPQAOGsGtzFQFAkDLe MEHD3dcPDqzHA4MnjhTBjVJrq/ugSaHkFAe60PxTuy3c2RrbRnNlwKh1BNUO xu+5Ngt0FiHt6yhYZhae4uBgXPsX6hu1avdirF40g5/arMNs4IFDDD8nt1vp h3BmOR5z60BACBh1xm7ps/kG8ivGL+RO0fiOLbE12kACXQOJYzRLavUSrug7 61Uy18BtQ3d0IxxVULskJaTOUTQhjwwQJ7qKVhEJDVatCfjZnujD61MITKWN SidzhbF0PGDu27/tBvBAOHv7BPYrx0BqVRtRYUnOqmQLCCZES7pWpIuTEew/ 7GBXXVsbww5d8exEnWgROixNmgplMhsVgF8tF0xDGIANIM+GKxs3eCm0cxpt BLbFu/R+xkYFCqEj9QgFG8Qg5AoF4SZmte3U0QlCdcU1FkTp/Xvpswu5MI3c uAAISo0cLnzM/bsUCzk1Y31Sv4RMjxvNPrAAOIN/v42WpfmxiH7BcxiAYAhA i2Nsqy2TwPrBfOTVVIj9mUl8u+sGiwn7/E3xUipGiwMYNopNAPbBAZ2oe2t+ BAh1C6XQ6sJfikpRSc/B+AWD4VXurVrvBIXPIdYLiQgv4wt0g4jrR0VPO/58 ulBeoNFZ7Dwy8th1TbeiCyU7AASBA/ZYuq3Ut+uIw0j32AmN9VjUk71FEaii dBdXZgyUfhijJaM+0AaATmoKlHlr6goDdQpZi30B6gWABgN8m/+4NoIWDmbY qb1EAR5qUfmEtWh1I1uDknYgVcij8UWDF+Y3+TcOzGHRSg/3zHKaMkUTzcND N1UTgNIv12jcaN4jwU2lTjM5i+VdM0F4NVLdBAaduMEPkgf01TvwEIkCuBbD QDSb+gfYav5o5EaW/zUAgrvoghkgSYtwDPqFHbubLjvZdCggdosMs4lqFG+7 WYlIF3yzBDu3f9sJmi2zfTeW/1QI68M55zjBZI+WoYzmlr6/Um8PgXkEaHUM Ue7ewrelUgw5UbAFm4pRu+QRzQomVnAIBn1r23mJSwJDz2sMWVsdPdR/q9NW QzIwWEMwMO0IpdgG/vr8i10MuPdA5NiH4lCHgoibhaDvsrX4PiFzewjBYWv/ sjX3sY90RVZVjWsQqAtV90J3XV5BC8MzeDwlU1t+3fZMwLMEHVYMNwIINm1M toJu3o9Jj43vXLdVDDsIMBqLNI/rofVavWNfexzJ6xVc7bQkvMk/XRaUvC7Z 7Tc7iymLQRxQAxhQJOFDAderoQigmPHK3bxVKhc0hEAhaPw+CEZdxxOh6MpZ 7xuhZvV0KbSkZdaLX5SsuPiej1t8atMLQUE99XzxlPHD3ML7weYDO5YaJhwq bN2kM0u76HANENdQZXsPqPp1C/FMvSKzTFwFQsGDfSe5MBMXkGRAb4lvGRAS V5G9BzB7OxZgQFllPHYpGS9wdlI++A2DRWoDA7cTiu74aIxBV/EQVWD/Ks6S 
NHAQV+6HnM1iHWzM/7b806ZazWYWEQkDfjvIYCdER+tZLF/rJo2oi2Ku0DBM NjRN2k06CGr02364jNzqVWRZ8AkPTl8FA01tsQR0CQEMm3toQAgt5o02Ta9c dQELJVQMjcDtbAZYMaNQakjjbY7IBY5AoRg3Puoae4ChXAeI9xSDKwu7RtUo OCRyB7cUIVa5Qspo1dOp4ShRJO0tNL/FtkGQcQxa2sL8V8HuqIH6S83Oi3r8 acmnSxJ19y06jYwBRJmJXfQ0W5SBpaQTEhd18Ubt7X/B+bk/SV8Lxs92A2qp tX4eTBP3A3pI74gvZfrxIHMcv2LT741Mdlv8rQEw1yF8sET+XSt1ITkwF+yl eoPB4B4tIVc6nXu8sMQSJAbTUW/Gtg3TfFWJCgQIA134bYHfbQ0IjIv7wf8E TzM/e3QFDG2GX8uOl+yaqQlqhfwrwhGu1VvEoXH4SVpnpj+3te52BYnzykEb +0A+O/pa1G2Ddk76v3Rr5SMDtrU7vlG9uuoscskj0iFUER4tIR8WvdIhlEzl bNtiUr9JvkoLBDdRrZUIEZHYdQk5vsWCCjOUKfCNDPlrrntDCyaJLw4Fb79s ZQiXSmOKTAcE7yCITcG7ffcP/sGIC3MlgH0PRg67iJVItruR0+t2CRkNWsVu xL6xCRjrKST+T+AZY4fgniVZBA+dhLcJgLd4wziLVEXwiRpUE/z/8MRjWa/6 oXaJ32+2cA+8TA26wMHhDwOLTNwafVKAAIAjs+eiQF69HzI/7D0IHAlQCA45 QBCDpDq7m72IbCQP/khDCkjYEjs2f3npCYOr/hED0wXqg3gsbFMQKCFB91pa EgkQqcRowCP0TPKOLRRKDpLLyCgryAfwrRaSESuNSBRR9Brvd2RIfP8NLzsF IjXR6de2JRSWOokNTMD5GaCpy6yJNSsFbFkAZi9otPHJvVeNPIIsG0gXdvCX uqPDF2pJNH0Og6vT7jg2BYWD7fjrECYDd6EGGVvT6A5BvAPgr6Fpi9g7C3MZ i0uLFt8u4TsjKyP+C89gNfvdrnEUO5oYcucHdXmL2jvYJrttJuQVBevmGXVZ JLkCNLhzEYMshRPZO+TZN+vtJg0bLza3Dc3uDghA1HuF23QUgwbtD0ZBhVlb 0oT7GdVtQ6g4/3VAz4kdpVupbd0Uixb6x0oti+xhv+KMkMTmkESIN4sSv6qV 7XARVd0jSESZ4FvBC9YSdReLkYa/FK5Q1hz/i/4jOQvNKtT013Tpi5fKY1wG 3cZhWE12TFfOKgJ/d7BmaiBkX4XJfAXR4WGurtvu94sgVPlDCit/8Xtx0wXq wf4EEz9++F4799p0krmbDQEkYSB9Ky1tVEARdTic0wWOu7/z7CNciESJA/4P deoWG0i/nuwhC+sxFyuVwbVbjKEyIRkpNpg5RyFYLIUiCullqjbAegRHla9e jMi1egiQ9qlIrW0lFEIMpSLCdLMsA2QG/gt9KTbXaKnEmQswEWK/u2SXxrDO jAk7Co8JfEN6wG6u6y8oDY1Oto90se8JewSxvK0Wvu4JN7pRg1xqoI4KiYl6 l1sD/LJ5dfAD0a6324ZfEjL8n4sOIY15Dz51pK7Q4xo7HfJsSzukyAWDRgZr EeZJkd7SjUIECAINSKU2MAUDXXWVNUtNC01QclCQXFetXMEhl7ArbIvBDAqI nMA9MzX8BQpoxEHgCEUwOLItNPiBM4SJRn3w1bdcKmoEaDxosle7UQyyGbQM ZXYQZGtpjCb0rut8TiRabIlbxYl+SgViQddgQKbny1Hp2mLjI4lxyEG/21Wj 2dHFT+BDwzc2xtZYhlr7MIJF6q2hi+1ACAIE2koe+4Uhn+3WweffeQyLEIAA v8nkoBDRJ0J7jZcAcLi0fVdg+nc8jUd3SPKDiO+2cjR+9Hj8BsdA/PBKwGYw 
Qg7vfEN97q0Yx4DoEBQFJHsX3OBI8JZ2x2BPDAX4hu2ta4UmJomsjUoMCH+4 vWyPQWSeREK8nuOKRkOKxn/XjcgLhMB6iE5DdQMJeAS6LAo4KDDLaH5qpImG 1di0FGTBxZELGKH0iRWoRGxA7z7syVbdFzpeVmioM1aApEfwLcAE/R0bVvC0 EWuIXKgZ7M5C9er9pQJYo0OfvHwV7d0cSQWhEAoQERCYtSBTDOQzbi/PJSND t5Q5XRgZoSBss63gKniNUyxBIGxL9AMQ4AhAGRg3rCgEt80f4FZ0Yzp6oQEU cg+eAyT86gLdi+TAi1B13KGDFDU7iesLSI0mKNEmyIO8bXZbgv/CKUngVl8c uda5ZFVSERTXoGDiDpjt5Y1lzHugRoSzJg1IBdSozUPZQ9oPthcDdsGtwxGE iHB1HLLoBm1Bnw67RULoeAXo5CMHSiUBcBMMB0uKcGm5vs0lbUWkNSz5dTOf rRmiIkinCVbSuLNcvtSYh2I5MHRyMEKIBLO9iAaloEDcBepjJChAKr9AggKP lbaH6MZQ86uquGnqx81ED4bvFX3uZrvET23/Te+KEYTSDK55tkH/BsTeLzA7 wg+HkyXHWgxLbdnuUkiTUnEXdl8KNqqNnqiRgDt7y3Q2iBjxLIpRAbAH+m/3 jpaUd0L8ipKQIAiQRjCw8LdAE3b1QUGAORjU+TneoDZ4CJIEcsGv4X10G6k8 ogGHo6wLe7db6u4xnDu/MA+lpVmju0F0tful61VAef9MSH21MYvVoRM9l3Iv m8VacDksVOsG38baa/oLwk2rAOsNOR3X94OO+Jx09kYhBEow61QHUwO72uDn rzUaXiFV/iBLC1zYCH9L/yWYav0UnGUxloheD7cZLW7AsUktpPZ0IlGgTf3t BHQXBA10DEh0A2i4BOfZzlY1BRILCBEEG+SFV2xZM8CqtADfeyMXo8Xcw18H ggaMgBTxiAADRoT/5woWMFR/X+y+h4iEBTagh+pfgsZy9IpF8sZEIKAbW1z1 N1OtVWC2CrGGLtAmtXcdGrwqCu1V7kG4IABbhN4C8DsC/qpCQopC/2IdVQPx 0F9beOwjNRqhOnuNelCbVr4zFoRnI/0dVkZ2s8keVjQjS7H8gJniSmg7J/y/ pqVeXIKNcmaLEfbCAXQW2+bvJ/oQipQFZIiQQOscGgIb5M7JdBAgW/KguKHR eByBPAC/60lrbDdMFSVBchkEWqpLfaaNDsglIFdJH96cvLwdYXITencOIOkg 6+CHRHfDTEq+XsmGEniFLdxq/angWfwQ8ecHRfyM/AlCaOxk+yBFV0KQdWeL NZQL66IcsGjghrGjKb8Cs+10+tAQnQejABt43Q/U1qMEBqELeRb/9+zGodCo vKEEEAVTEc/AsK6LGAOMTcoKjhAF6/j/GblAjllYWUCWpTlZWFlZQg5AhllZ lmUZOVlZWVlZQJZlWVlZWVllWZ4jWVlZWZnkAJJaW1uWZmQKWltaWghkIFla W2XkKORbW1uWZVmWW1tbW1tbeY4AWVtbWwtAlmVbW1vJEXJCMPgcijfdotw9 KI51RldXaqwIGfktZlb5bZGFVgDAHSPrIgZZTmhTNVeMIvihr+UrPB8CBX0U fgGPEL2GBk3VWWwPvUtkFKEdUR0WHJoFhF6wS0hJPiMbSk1B030gQBaatSBz LkoktDfgZCCLFOQ734VCO8BVTboEG06hD20XxEHcNusTR22sAQ3qEYs4tVlD b2fcdGap3GEhs5jrklf0Tewapf5bnIoADuDYO/d0MvZFDYbatiUUQD4ceLIe tZHNbtV/HtoySITkIbPSj4nIsh0oES9vszaQZOQX3Ild4BKsexcrkbJ933S0 VmTkZ4t2c6d0nI+zdQQDOb8Ht1mMKGggkNXkYJqgDni/WHEcxoWjFBFxS5hq 
gVIdyFadgA1bqC+UIOxAi/FJAfMMFZvbqF78Kx6DwiQJzC0HvEEAT/pTVC5f LVwvAJUuW3aGgshdLfkARjwL/r/bF1oDcnVudGltZSBlcnJvcoOwd89+VExP U1MNDQouU0lORw5Za20Q308WEhFf3m5/UjYwMjgILSBHYWJsdG8gaW/27dtu aVJhbGl6DWhlYXA3J9+6tZ83bm90PQR1Z3tzcGFjbcDebSNmd2xvd2k4YQby FHLZb243NnN0ZPbbz0A1cHVyK3ZpcnR1IbHtt7UzpWMjIGMMbCjtNoV8XzRf KmV4XCd77bUvWAbc4l8xOd3NfWH3b3BlWDFzbw9k2mTAtmVzYys4RoEQ4dYk gWVkGVd2e0i+IzdtdWysdGi/IYzk22EvbG9jaxea2wZbNGS3YS4C9q3h1qIh cm0AcEBncmFtIHshFLZKbTYvMDlPoxlaChBBKicU8rlGLC4rOB7Cd6tfZ3Uo c18wMtu2XXtmwW5uZ4JvBXQ6Fc5aFxFk5n9NLf7hbaFgOWYVVmlzqkMrKyBS 3Hd7MJxMaWK0cnknCi3PtofDFkUOIRFQ1DogDSvUHucAh+UZvG2u4CUsa2x3 bj6GuwLxY/99U0cDR2VeuGPjdExhRkEWdmVQwnVwAN/advYTD1epZDqbZXNz YeMLC/1nZUJveEEIczkzMi5kPvm55VlHxVzJAwt5XUYBJXF9XQVSioqMfveV PEFV01dAXdOMPZR/QAOEdGQPVNMN0jRENCQDFKZplk0E9H7k1MSapmmatKSU hHRkpmkGaVRENCR/s2yaFADwfZhKWldLRkxFrf7//1BBR1VNSFNDQkRYTk9J VlRZUlFrZXVx5q22avV4ZmJszjd3v/9fqtVwepP+eXJqMDEyMzQ1Njc4pum6 7zkrL7+MR4QDfHSapmmabGRcVExEaZqmaTw0LCQczm2aphQMBPx82wOapmma 8Ojg2NDIaZqmacC4sKigpmmappiQiIB4TdM0y+R61My4sKQ0TdM0jHx0YFjT NE3TUEA0LBhN07lNCPx5hwPg2Mw0TdM0xLSkkHzTNE3TcGhgVERu0zRNMCAM BPh4kzRN03QD3NTMxLTTNE3TrJiQgGxN0zRNXFRIQDgwNE3TNCAYDAQA0zSd 2/h3pwPk2MhN0zRNvKykmIh0NE3TNGRQPCQY6Zpl0wj8duzkB+Cm6Yak1LwD nIibpmmacFhAIAjodTRN0yy4kGhYQNO5TdMoEOx0HwOIZdM0TXBIJBjsc5qm a5rEuKwLgHx0/8EGaWhYB1lvdXJzIHNpFW4U/m5jZXJlbHkjl25raA/omKMV gwtzAHPbvrXgTm90gBB0IEkgBXYLsPH7Wwl0YWNoZWQrciDDY77YN0+QLk15 ICVzCUNlcI1gyQw59CiCE2wteD0gYlkiW/ZMJidsgn/YS785UGxlYXNlsmWv ZPZUrKRbZcsC2SNWS6yw/YLicXUraxsXN0O0bkaEbQN8NmlsLBtkDTplRi+U HaIrYb9tDTMhti14QXBcdhYswQghe185cywjIwgD4S1TdW5kPXsJwYa9Mlv3 Jxt2trIZ5yw+J0cMbDtm2GUpeSE81zVjEG3yJx+MYYFsyicyQOYuxI1vayox 4ZBswlMaFPVrI4TwDSBIEWfMERfaGallc9Xo8YZhLiFTUiMJs4Fsn5G3A1sJ IUh/YTbIZizjISu2tWEjNwRNbC7pb8y9ngsPaQxSZTplYK61dm1w+HitJwtD EAxvG+MR72Ac3rILG79rtmxlb8sTKz0sW9hbJw8j+Eb2Jmh/DyNSzWrLTncA BxM7jNllPa8hUyEv34Q3sDdjCxsR3+xwWAtpbw8La/clDPx1dXdjbWl3NXVr cHEzEpe67Xpr0mjRcQZuaFApNufabmkgACltwbXWtZsfaXdgD3AMCP6y7dgZ 
YnXFY2G8cG4tYWtiaJtLttwsdGZuFQ8Geodta20Ra3ZMel94L7PWVnsjSQl3 jWFY+45tTHEsYmFXaxIb7ntrD3FieDR4AHZiRGRNt3aLtWJOg6lrek4tem/b 2rd9PGV8aFttWpZuH70NbPdzYzducUsQdXBi1661r8hBehOwzXRikGDvljUs JQBhd2RiEZawtnFrtXASSrOHhE3YaA4APwlztHdzAKFpaZvub/vawlw3g2kd S/XbHtjmwgRi+G55xWgkbWCGAW8kMyy2WPvedndPB3Bw02x5voQtxH+Ab5sX dgxL7ed0C2N6al1iUWjBZhFv1xwaCcMW9hPXN9Gs0ByzOQ/ENkb3DEM4ANp7 K1fNnaWEL/d6rzNrX3uNcPGvDPtwPCErNLCYwC/dqeHKH55hf3B1ZmJ3DFsI reCi04NhOy07o8v/AVNXY7ftYCAhZmsXIB14cN9tJWpuYSHjyb9nt9S2rURP LCViyABtu8LtdVEgDnpCFyB6sNu2Ha1+ZGtysXgkLJ2uLDx2aHIYTmdyX2l3 /LnXGkukt0RTa3FkrbVWcnpqQd/PSxeqOd5pZk+xcVRjdEpv67BxFHoUdJAE 15qh3dFoLDdyFUBRZwjWNj96KY0x7J1rbjBvLJwXTR9tbdlac/o8C6N1Ze1K h0ZrxMBhSGmcey3WbE8kXj4QcN+a2495WQbHdWhjcc2VPmJJC9ksdkqpbR8+ BHouNU9mQGcuoWRnt4KnUWJroGkVU4vQ2lcAJCLSr21f6rE0MDk6ciBKLkgt WAoH04s1FPNwpwitodRjUpUuL6s722LbLolmBx12B2YXCifHPnlhaB9vemyF HPlzzBc4D3ppYXdxYGcnn2F6eHY/eke1Zu+1BkgHL9XHGoy8C3FleWmdB2tx Dbl7J3EkQ3V4YmDWIo8PY2Jt4KPYmWNWy3N6o2tHT84eLKNhcXkHZXV2zp61 ZpvsH2l5425sw95ORjdySlYHFXmrH7BytOsqK34jJ/CwutUPLC46O/QtX/Zk tdW/PyElJigp/CT9rQ8J8Dw+fAD25PzfO++1pW8ALmducqFt1cQPWygLhXbu lGOLbpcFc/zZI2tqa2suSw9UYFibIG0fPM//v9Y1evEfMjEyLjQ0LjE2MC44 MTTvZts5NQw4AwsAMTUxOd3mxr4zLjM1LzMXEzczOeZ2Y2MvMx8yRDIwL68d 9ta2BzUTVTE3MR8051zkti80XzQyJyJQZEgubDQADzMylt/shU5PAzU4MNms 2c6/Nw0yNp8P2vsGdjIv5DIfHVsbtttOOSIzN++3AjKFDNhzZpF/MPY9994y 7l/v4yI2D3IXtlvzNzAfN9FZolGmCrrNcsEUBnxVRFAA7QDdGlghI5tiJWVm 1cC0sDNp2mzqRO9YamyhdJr/eQhBQkNEpeq3/0VGR0hJSktMTTJQUVJTVDvD xfnpWFlao3NnAJ/HQWoFGW/jc0+eHGtZB3RiYWRzaGwM/p50XAMqLipzOift BaFndiVbBwP6s1eMJXsrB1gtTVNNtluhGzItUHJbAnR5KE6wLHYACBsU7Vad uiAJYq1kPSJG4+3vtiI7LQA9X05leHRQFHRfJXU22N5udQQ0dWc7QwZ0HgDz hmMtVHlwp3CjhTboJi9BeC47L83Qwj9NSU1FLVZvc9UjwQp0G3kwE0RWq7MC cAdNdWJq5w4WMs12D1RvCsN1bZhr6s0L0/91DAV7LtmFznU6BBQGAwGt69yv KysDRFJCU22hDxpPWa9wkyXXCqVhz58DdthcQwsJedNyBwNG6tqyteATYVOR b2kaNbQ5B0RX6lQlrdCc2x9vFy+TA2Mje0NoA08N+4xRY4lhQmw4ui9NNVor 2RgXD0xtYBEeDQQJc040bnDhNnJzc2YULUWgb2Tb1NYADkFifzY0IvoKGy9E 
aYtvNFE6V+9RkODPUPvYmxCwVGcBm69tDRd0vS+eYWY8AWhba0b2aXQ71S8y 0xbXMRuaN2Jm1TBrhl4faXEDaIcT7S3uChhJPTQVvyDtIMUfgh/CLi9EQVRB I0NQBG97qVR0Tzo8oj4PN5XsL0FMIEZST00RQ0VMT9kv2b0gwgtFSFFVSVTn 1LYxq5fhLtVlHBmauZsgLbflDyPwxY7ia0DTa35T6v93t3nBTXX9X1VTVWtV eVVuVWVVdFXGLmzW6zEVXykA7bZChgZA/HQGZHOLv+wunHBth05DVEpYRlxI Rw02sUTiehVcVMUZeokGsE5cV2M6vxIntXVWP1yjL0dFYOBbbeYvGVRUUC+s E2IUCjHG3g+gF/FlAAQ+QAwyZe65PQPNASgkD4BMIEgAEBMygEyEEIEyBDIg ARAkMiADggIl1GGrOyApeUx3VQEHLgWDdINwwAsTHQsEljJIMyCNCI4DMiAD j5CRIAMyIJKTwDRNNwMDBwqMeVl4htMMowAFkxnXih8jwxxkEwemaZpl8GMJ xAqgvaZpmhB0EUQSR2maZdMHE/RiGLwZpmmappQaXBskTdMsmxz8YXjsedwq TNM0esz8PrzuHR1HD/jAQwIEYN8QxdKgYIJ5giGv/37y7KbfB6GlgZ/g/C9A foD8jzDePKjBo9qjj4H+ENggZwdAtS/yfzfIQbZfz6LkohoA5aLoolvt7vMV fqH+UQUD2l7aXxr6K/lf2mraMtPY3uD5G34hAIDUOUICkigqAAJJRAAFMgqg CmQUQBXIKIAqkFEAVCCjAKhARgFQgYwCoAIZBUAFMgqACmQUABXIKAAqkFEB VCCjAqhARlQFhY0L5P9lM8QQUAFHbG9iYWxGcmVlt/+2PUFsBmMMbHN0cmNw eUEJX2x4W3u3YxFzZQgUJ2sI5Z+hgnwHb3BlbiCIN/tGaW5kQyMKdpu9mYIH bGVBDgNycw+7fP8fR2V0RHJpdmVUeXAOU1RoSyxIEMTXITcXbHtDdXITbhkk OWVwg31rYxhMmJ9UaW1lnmH/zbbZdEEJIW5cHFpvbmVJtiRUEJ6EV7nOfV9F eGVjr3dkPUNZWjs2rAdgukhhqUb7/trHRE1vZHUKsU5hbaZP5cwOEIRzZCLC VUR93bJhr8hIG4BjxF6E9mtDb3W9Nx0WK5wBQcBECocgbA9tdDhWSBtwa7ft ybadae1MQ00XU9JpbkARrZln8RMu384mSl9QdmNBZGRzcxZs7McPT0VNQ1AJ QQcG9jCLvfwKRmtXD+HC5kJMtGweQnkG2LP2rW+dZGVDaGOUUmWWLBIUMVaC Df3tcBhXOhQKUnRsVW533puRzNIjQMegeDbhxZxKHgtEZWDvTWiob+FGhpXC rBzCZHxTCkIyoIhvHUXHDiHsWb5zVxfovZaszcfR3BSalQssgUQYQT22Nrbb aIRkz8pwQ+IsCM2vSh7qVJbsJCiNZe+yDoQZpS3lw1cMdhYwUdpB9GHuNjs0 OHCqED5tbR+IIIK1TDMhFGwgzYcRa3AXTrO+0QCrZ1MqY7hHW4B1yoEPzUt3 c4/YZXkMuQsZAN4x7cY2dpBRAE5OAneEa1CCG6frGt9N6xocoHB9++B2s3Rm SEZVcD1yQnVmbME2hg+Mdw8z9mwztNMswRmzdMK5QxRxbm4G3Nk2y60FIwIB NP8KAhJlWZZllwINARCWZVmWE3MPBAk3WZZlWQs0FxUU3VuWZRFvAwzt70WD VN9uA0wBD5hAcUA+4Dmyy/0ADwELAQYc3KwCbzusm33dA2ANQAsaBDNZsm8W ByMDDDks2cC6KBAHBgsChFTd0oy313WF7aeYAR4uXHQHXNgXbApOkOsERSCb I4rYLnI8+yy7kG6QDA4nVEACpmt2ry4m9ZydcCca5UKbkmLAAh80ZwAAAPC0 
------=_NextPart_000_0016--

From owner-linux-xfs@oss.sgi.com Thu Jul 8 14:21:55 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 08 Jul 2004 14:22:02 -0700 (PDT) Received: from bastard.smallmerchant.com 
(postfix@adsl-67-114-19-185.dsl.pltn13.pacbell.net [67.114.19.185]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i68LLtgi019871 for ; Thu, 8 Jul 2004 14:21:55 -0700 Received: from localhost (bastard [127.0.0.1]) by bastard.smallmerchant.com (Postfix) with ESMTP id C282288316C for ; Thu, 8 Jul 2004 14:21:54 -0700 (PDT) Received: from bastard.smallmerchant.com ([127.0.0.1]) by localhost (bastard [127.0.0.1]) (amavisd-new, port 10024) with LMTP id 06666-01 for ; Thu, 8 Jul 2004 14:21:46 -0700 (PDT) Received: from [172.16.2.50] (fussbudget [172.16.2.50]) by bastard.smallmerchant.com (Postfix) with ESMTP id 9A876883169 for ; Thu, 8 Jul 2004 14:21:45 -0700 (PDT) Message-ID: <40EDBB2C.6040702@tupshin.com> Date: Thu, 08 Jul 2004 14:22:52 -0700 From: Tupshin Harper User-Agent: Mozilla Thunderbird 0.7.1 (Windows/20040626) X-Accept-Language: en-us, en MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: corrupted partition and xfs_repair terminates Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: by amavisd-new-20030616-p9 (Debian) at example.com X-archive-position: 3620 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: tupshin@tupshin.com Precedence: bulk X-list: linux-xfs I have an xfs cyrus mail spool partition running on linux kernel 2.6.5 on a lvm2 volume. Some files on that partition, including "7605." referenced below became unreadable, so I ran xfs_repair (the versions packaged for debian, including both xfsprogs-2.03-1 and 2.6.11-1). Both versions return the same information, and both terminate without fixing the problems. A sample output is below. Any suggestions? -Tupshin Phase 1 - find and verify superblock... Phase 2 - using internal log - zero log... - scan filesystem freespace and inode maps... - found root inode chunk Phase 3 - for each AG... - scan and clear agi unlinked lists... 
- process known inodes and perform inode discovery... - agno = 0 - agno = 1 entry "/7605." at block 87 offset 400 in directory inode 4194434 references invalid inode 18374686479671623679 clearing inode number in entry at offset 400... entry at block 87 offset 400 in directory inode 4194434 has illegal name "/7605.": - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - process newly discovered inodes... Phase 4 - check for duplicate blocks... - setting up duplicate extent list... - clear lost+found (if it exists) ... - clearing existing "lost+found" inode - marking entry "lost+found" to be deleted - check for inodes claiming duplicate blocks... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 Phase 5 - rebuild AG headers and trees... - reset superblock... Phase 6 - check inode connectivity... - resetting contents of realtime bitmap and summary inodes - ensuring existence of lost+found directory - traversing filesystem starting at / ... 
rebuilding directory inode 128 bad hash table for directory inode 4784049 (no leaf entry): rebuilding rebuilding directory inode 4784049 Terminated From owner-linux-xfs@oss.sgi.com Thu Jul 8 14:28:25 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 08 Jul 2004 14:28:28 -0700 (PDT) Received: from bastard.smallmerchant.com (postfix@adsl-67-114-19-185.dsl.pltn13.pacbell.net [67.114.19.185]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i68LSPgi020289 for ; Thu, 8 Jul 2004 14:28:25 -0700 Received: from localhost (bastard [127.0.0.1]) by bastard.smallmerchant.com (Postfix) with ESMTP id 833C488316B for ; Thu, 8 Jul 2004 14:28:25 -0700 (PDT) Received: from bastard.smallmerchant.com ([127.0.0.1]) by localhost (bastard [127.0.0.1]) (amavisd-new, port 10024) with LMTP id 06261-03-3 for ; Thu, 8 Jul 2004 14:28:17 -0700 (PDT) Received: from [172.16.2.50] (fussbudget [172.16.2.50]) by bastard.smallmerchant.com (Postfix) with ESMTP id 70FDF883169 for ; Thu, 8 Jul 2004 14:28:17 -0700 (PDT) Message-ID: <40EDBCB8.4080304@tupshin.com> Date: Thu, 08 Jul 2004 14:29:28 -0700 From: Tupshin Harper User-Agent: Mozilla Thunderbird 0.7.1 (Windows/20040626) X-Accept-Language: en-us, en MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: corrupted partition and xfs_repair terminates Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: by amavisd-new-20030616-p9 (Debian) at example.com X-archive-position: 3621 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: tupshin@tupshin.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 2058 Lines: 59 I have an xfs cyrus mail spool partition running on linux kernel 2.6.5 on a lvm2 volume. Some files on that partition, including "7605." referenced below became unreadable, so I ran xfs_repair (the versions packaged for debian, including both xfsprogs-2.03-1 and 2.6.11-1). 
Both versions return the same information, and both terminate without fixing the problems. A sample output is below. Any suggestions? -Tupshin Phase 1 - find and verify superblock... Phase 2 - using internal log - zero log... - scan filesystem freespace and inode maps... - found root inode chunk Phase 3 - for each AG... - scan and clear agi unlinked lists... - process known inodes and perform inode discovery... - agno = 0 - agno = 1 entry "/7605." at block 87 offset 400 in directory inode 4194434 references invalid inode 18374686479671623679 clearing inode number in entry at offset 400... entry at block 87 offset 400 in directory inode 4194434 has illegal name "/7605.": - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - process newly discovered inodes... Phase 4 - check for duplicate blocks... - setting up duplicate extent list... - clear lost+found (if it exists) ... - clearing existing "lost+found" inode - marking entry "lost+found" to be deleted - check for inodes claiming duplicate blocks... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 Phase 5 - rebuild AG headers and trees... - reset superblock... Phase 6 - check inode connectivity... - resetting contents of realtime bitmap and summary inodes - ensuring existence of lost+found directory - traversing filesystem starting at / ... 
rebuilding directory inode 128 bad hash table for directory inode 4784049 (no leaf entry): rebuilding rebuilding directory inode 4784049 Terminated From owner-linux-xfs@oss.sgi.com Thu Jul 8 21:06:38 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 08 Jul 2004 21:06:39 -0700 (PDT) Received: from zok.sgi.com (mtvcafw.sgi.com [192.48.171.6]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i6946bgi008901 for ; Thu, 8 Jul 2004 21:06:37 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by zok.sgi.com (8.12.9/8.12.9/linux-outbound_gateway-1.1) with ESMTP id i69459hv019366 for ; Thu, 8 Jul 2004 21:05:09 -0700 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i69457ap8003110; Fri, 9 Jul 2004 14:05:07 +1000 (EST) Received: (from ajones@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id i69456k88970822; Fri, 9 Jul 2004 14:05:06 +1000 (EST) Date: Fri, 9 Jul 2004 14:05:06 +1000 (EST) From: Andrew Jones Message-Id: <200407090405.i69456k88970822@snort.melbourne.sgi.com> To: linux-xfs@oss.sgi.com, asg_xfs@larry.melbourne.sgi.com Subject: TAKE 916290 - X-archive-position: 3622 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: ajones@snort.melbourne.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1839 Lines: 82 Fix issue with TIMESTAMP and added asg qa machines to common.config Date: Thu Jun 17 23:14:48 PDT 2004 Workarea: snort.melbourne.sgi.com:/home/ajones/source/2.4.x-xfs Inspected by: nathans The following file(s) were checked into: bonnie.engr.sgi.com:/isms/slinx/2.4.x-xfs Modid: xfs-cmds:slinx:173856a cmd/xfstests/common.config - 1.43 - Added asg QA machines icy and dribble. cmd/xfstests/check - 1.23 - Removed date file from TIMESTAMP var. Subject: TAKE 916290 - Added asg qa machine emu. 
Date: Thu Jun 24 00:02:44 PDT 2004 Workarea: snort.melbourne.sgi.com:/home/ajones/source/2.4.x-xfs Inspected by: nathans The following file(s) were checked into: bonnie.engr.sgi.com:/isms/slinx/2.4.x-xfs Modid: xfs-cmds:slinx:174186a cmd/xfstests/common.config - 1.45 - Added asg qa machine emu. Subject: TAKE 916290 - Fixed Linux gmake problem. Date: Sun Jun 27 17:43:59 PDT 2004 Workarea: snort.melbourne.sgi.com:/home/ajones/source/2.4.x-xfs Inspected by: nathans The following file(s) were checked into: bonnie.engr.sgi.com:/isms/slinx/2.4.x-xfs Modid: xfs-cmds:slinx:174328a cmd/xfstests/common.config - 1.46 - Fixed Linux gmake problem. Subject: TAKE 916290 - Fixed general IRIX/Linux problems when running the -udf option. Date: Thu Jul 8 21:04:14 PDT 2004 Workarea: snort.melbourne.sgi.com:/home/ajones/source/2.4.x-xfs Inspected by: tes The following file(s) were checked into: bonnie.engr.sgi.com:/isms/slinx/2.4.x-xfs Modid: xfs-cmds:slinx:174865a cmd/xfstests/common.rc - 1.41 - Fixed udf case in _scratch_mkfs() cmd/xfstests/common.config - 1.47 - Replaced mkfs.udf with correct mkudffs prog. cmd/xfstests/check - 1.25 - Changed specific _scratch_xfs_mkfs to general _scratch_mkfs call. cmd/xfstests/common - 1.7 - Added linux error msg for missing mkudffs prog. 
From owner-linux-xfs@oss.sgi.com Thu Jul 8 21:08:20 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 08 Jul 2004 21:08:29 -0700 (PDT) Received: from omx1.americas.sgi.com (cfcafw.sgi.com [198.149.23.1]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i6948Kgi009042 for ; Thu, 8 Jul 2004 21:08:20 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by omx1.americas.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with SMTP id i693w30f010696 for ; Thu, 8 Jul 2004 22:58:04 -0500 Received: from bruce.melbourne.sgi.com (bruce.melbourne.sgi.com [134.14.54.176]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id NAA15974 for ; Fri, 9 Jul 2004 13:58:02 +1000 Received: from bruce.melbourne.sgi.com (localhost.localdomain [127.0.0.1]) by bruce.melbourne.sgi.com (8.12.8/8.12.8) with ESMTP id i69368nH004569 for ; Fri, 9 Jul 2004 13:06:09 +1000 Received: (from fsgqa@localhost) by bruce.melbourne.sgi.com (8.12.8/8.12.8/Submit) id i69368Di004568 for linux-xfs@oss.sgi.com; Fri, 9 Jul 2004 13:06:08 +1000 Date: Fri, 9 Jul 2004 13:06:08 +1000 From: FSG QA Message-Id: <200407090306.i69368Di004568@bruce.melbourne.sgi.com> Subject: TAKE 907752 - xfstests Apparently-To: X-archive-position: 3623 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: fsgqa@bruce.melbourne.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 365 Lines: 15 QA test to check that inode64 option is available on systems that should support it. 
Date: Thu Jul 8 20:57:16 PDT 2004 Workarea: bruce.melbourne.sgi.com:/home/fsgqa/qa/xfs-cmds Inspected by: nathans The following file(s) were checked into: bonnie.engr.sgi.com:/isms/slinx/xfs-cmds Modid: xfs-cmds:slinx:174864a xfstests/092 - 1.1 xfstests/092.out - 1.1 From owner-linux-xfs@oss.sgi.com Thu Jul 8 21:27:08 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 08 Jul 2004 21:27:10 -0700 (PDT) Received: from zok.sgi.com (mtvcafw.sgi.com [192.48.171.6]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i694R8gi011157 for ; Thu, 8 Jul 2004 21:27:08 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by zok.sgi.com (8.12.9/8.12.9/linux-outbound_gateway-1.1) with SMTP id i694L6hv019457 for ; Thu, 8 Jul 2004 21:21:07 -0700 Received: from bruce.melbourne.sgi.com (bruce.melbourne.sgi.com [134.14.54.176]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA16500 for ; Fri, 9 Jul 2004 14:21:05 +1000 Received: from bruce.melbourne.sgi.com (localhost.localdomain [127.0.0.1]) by bruce.melbourne.sgi.com (8.12.8/8.12.8) with ESMTP id i693TBnH005187 for ; Fri, 9 Jul 2004 13:29:11 +1000 Received: (from fsgqa@localhost) by bruce.melbourne.sgi.com (8.12.8/8.12.8/Submit) id i693TBMO005186 for linux-xfs@oss.sgi.com; Fri, 9 Jul 2004 13:29:11 +1000 Date: Fri, 9 Jul 2004 13:29:11 +1000 From: FSG QA Message-Id: <200407090329.i693TBMO005186@bruce.melbourne.sgi.com> Subject: TAKE 907752 - xfstests Apparently-To: X-archive-position: 3624 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: fsgqa@bruce.melbourne.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 700 Lines: 30 Realtime IO path test common code; get local host configs working again. 
Date: Thu Jul 8 21:16:16 PDT 2004 Workarea: bruce.melbourne.sgi.com:/home/fsgqa/qa/xfs-cmds Inspected by: nathans The following file(s) were checked into: bonnie.engr.sgi.com:/isms/slinx/xfs-cmds Modid: xfs-cmds:slinx:174866a xfstests/common.rc - 1.42 xfstests/common.config - 1.48 xfstests/group - 1.57 Add realtime IO path testing. Date: Thu Jul 8 21:20:17 PDT 2004 Workarea: bruce.melbourne.sgi.com:/home/fsgqa/qa/xfs-cmds Inspected by: nathans The following file(s) were checked into: bonnie.engr.sgi.com:/isms/slinx/xfs-cmds Modid: xfs-cmds:slinx:174867a xfstests/090 - 1.1 xfstests/090.out - 1.1 From owner-linux-xfs@oss.sgi.com Thu Jul 8 22:31:41 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 08 Jul 2004 22:31:43 -0700 (PDT) Received: from zok.sgi.com (mtvcafw.sgi.com [192.48.171.6]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i695VZgi012891 for ; Thu, 8 Jul 2004 22:31:41 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by zok.sgi.com (8.12.9/8.12.9/linux-outbound_gateway-1.1) with ESMTP id i695VShv019951 for ; Thu, 8 Jul 2004 22:31:29 -0700 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i695VIap9005722; Fri, 9 Jul 2004 15:31:19 +1000 (EST) Received: (from nathans@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id i695VH1H9001825; Fri, 9 Jul 2004 15:31:17 +1000 (EST) Date: Fri, 9 Jul 2004 15:31:17 +1000 (EST) From: Nathan Scott Message-Id: <200407090531.i695VH1H9001825@snort.melbourne.sgi.com> To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 917517 - signedness in xfs_reserve_blocks X-archive-position: 3625 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@snort.melbourne.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 327 Lines: 13 Fix signed/unsigned issues in xfs_reserve_blocks routine. 
Date: Thu Jul 8 22:30:21 PDT 2004 Workarea: snort.melbourne.sgi.com:/home/nathans/xfs-linux Inspected by: tes@sgi.com The following file(s) were checked into: bonnie.engr.sgi.com:/isms/xfs-kern/xfs-linux Modid: xfs-linux:xfs-kern:174873a xfs_fsops.c - 1.99 From owner-linux-xfs@oss.sgi.com Fri Jul 9 03:05:42 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 09 Jul 2004 03:05:46 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.12.10/8.12.9) with ESMTP id i69A5ggi025027 for ; Fri, 9 Jul 2004 03:05:42 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.12.10/8.12.8/Submit) id i69A5f3X025022 for linux-xfs@oss.sgi.com; Fri, 9 Jul 2004 03:05:41 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.12.10/8.12.9) with ESMTP id i69A5dgk025003 for ; Fri, 9 Jul 2004 03:05:40 -0700 Received: (from apache@localhost) by oss.sgi.com (8.12.10/8.12.8/Submit) id i699lf07024324; Fri, 9 Jul 2004 02:47:41 -0700 Date: Fri, 9 Jul 2004 02:47:41 -0700 Message-Id: <200407090947.i699lf07024324@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 344] New: Complete system freeze - XFS/NFS problem? (RH Fedora kernel) X-Bugzilla-Reason: AssignedTo X-archive-position: 3626 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 5792 Lines: 160 http://oss.sgi.com/bugzilla/show_bug.cgi?id=344 Summary: Complete system freeze - XFS/NFS problem? (RH Fedora kernel) Product: Linux XFS Version: unspecified Platform: IA32 OS/Version: Linux Status: NEW Severity: critical Priority: High Component: XFS kernel code AssignedTo: xfs-master@oss.sgi.com ReportedBy: olaf@cbk.poznan.pl Description of problem: Complete system freeze. Filesystem - XFS for all partitions. EXT2 partition on DVD-RAM mounted - for backup. 
Exported read-only small nfs for X-Terminals. While doing backup and bzipping it - high load, with 1 NFS -> X-Terminal client, 3 samba clients the system freezes. I didn't see it on vanilla kernel 2.6.5-rc3 (the latest used by me before switching to the redhat/fedora kernel). I didn't see it on the RH/Fedora kernel if no NFS client is active. Just now I reverted to vanilla 2.6.5-rc3. Version-Release number of selected component (if applicable): kernel-2.6.6-1.435.2.1 How reproducible: Hard to say. Steps to Reproduce: 1. Not easy to explain. Read the Description of the problem. Actual results: System freeze Expected results: No system freeze :) Additional info: I'll attach kernel messages from log Filed a bug in redhat bugzilla https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=127517 ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. From owner-linux-xfs@oss.sgi.com Sat Jul 10 09:23:18 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 10 Jul 2004 09:23:24 -0700 (PDT) Received: from wmailmta06of.seamail.go.com (wmailmta06of.seamail.go.com [199.181.134.43] (may be forged)) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i6AGNHgi029538 for ; Sat, 10 Jul 2004 09:23:18 -0700 Received: (qmail 4478 invoked from network); 10 Jul 2004 16:23:10 -0000 Received: from jtp06.seamail.go.com (HELO gomailjtp06) (10.192.72.40) by wmailmta06o.seamail.go.com with SMTP; 10 Jul 2004 16:23:10 -0000 Message-ID: <5854240.1089476589396.JavaMail.luckydayinter@gomailjtp06> Date: Sat, 10 Jul 2004 09:23:09 -0700 (PDT) From: Luckday Lotto To: luckydayinter@expn.com Subject: CLAIM YOUR WINNING OFFER NOW! 
Mime-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 X-Mailer: GoMail 3.0.1 Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id i6AGNIgi029542 X-archive-position: 3627 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: luckydayinter@expn.com Precedence: bulk X-list: linux-xfs FROM: THE LOTTO COORDINATOR, INTERNATIONAL PROMOTIONS/PRIZE AWARD DEPARTMENT RESULTS FOR CATEGORY "A" DRAWS Congratulations to you as we bring to your notice, the results of the First Category draws of LUCKYDAY MICROSOFT LOTTO NL. We are happy to inform you that you have emerged a winner under the First Category, which is part of our promotional draws. The draws were held on 8th July, 2004 and results are being officially announced today 10th of July 2004. Participants were selected through a computer ballot system drawn from 2,500,000 email addresses of individuals and companies from Africa, America, Asia, Australia, Europe, Middle East, and Oceania as part of our International Promotions Program. Your e-mail address, attached to ticket number 50941465206-529, with serial number 5772-54 drew the lucky numbers 3-4-17-28-35-44 and consequently won in the First Category. You have therefore been awarded a lump sum pay out of € 1,000,000 (One Million Euros), which is the winning payout for Category A winners. This is from the total prize money from 2,000,000 shared among the 2 winners in this category CONGRATULATIONS! Your fund is now deposited with the paying out Bank In your best interest to aviod mix up of numbers and names of any kind, we request that you keep the entire details of your award strictly from public notice until the process of transferring your claims has been completed, and your funds remitted to your account. This is part of our security protocol to avoid double claiming or unscrupulous acts by participants/nonparticipants of this program. 
We also wish to bring to your notice our end of year premium stakes draw where you stand a chance of winning up to 50 million; we hope that with a part of your prize you will participate. Please contact your claims agent immediately for due processing and remittance of your prize money to a designated account of your choice: To file for your claim,please contact our fiduciary agent. Mr. Harvey Kurt, Email:luckydaymicro04@netscape.net Tel:+31-629-474-428 / Fax +31-847-300-692. Daalwerk 100, 1102 LK Amsterdam, The Netherlands. You are advised to contact our agents by email and/or fax within a week of receiving this notice. Failure to do so may warrant disqualification. NOTE: For easy reference and identification, find below your reference and Batch numbers. Remember to quote these numbers in every one of your correspondence with your claims agent. REFERENCE NUMBER: LSLUK/2031/8161/04 BATCH NUMBER: 14/011/IPD Congratulations once again from all our staff and thank you for being part of our promotions program. Sincerely Yours, MRS.ELIZABETH MOUS. THE LOTTO COORDINATOR, MICROSOFT NL Email:luckydaymicro04@netscape.net Daalwerk 100 1102 LK Amsterdam, The Netherlands. N.B: Any breach of confidentiality on the part of the winners will result to disqualification. Please do not reply to this mail. Contact your fiduciary agent immediately. 
_______________________________________________________ Expn.com e-mail: http://expnmail.go.com From owner-linux-xfs@oss.sgi.com Sun Jul 11 03:05:51 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 11 Jul 2004 03:05:55 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.12.10/8.12.9) with ESMTP id i6BA5pgi010178 for ; Sun, 11 Jul 2004 03:05:51 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.12.10/8.12.8/Submit) id i6BA5pT1010177 for linux-xfs@oss.sgi.com; Sun, 11 Jul 2004 03:05:51 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.12.10/8.12.9) with ESMTP id i6BA5ngk010163 for ; Sun, 11 Jul 2004 03:05:50 -0700 Received: (from apache@localhost) by oss.sgi.com (8.12.10/8.12.8/Submit) id i6B9dTad007758; Sun, 11 Jul 2004 02:39:29 -0700 Date: Sun, 11 Jul 2004 02:39:29 -0700 Message-Id: <200407110939.i6B9dTad007758@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 345] New: Memory allocation failure X-Bugzilla-Reason: AssignedTo X-archive-position: 3628 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 19116 Lines: 318 http://oss.sgi.com/bugzilla/show_bug.cgi?id=345 Summary: Memory allocation failure Product: Linux XFS Version: Current Platform: All OS/Version: Linux Status: NEW Severity: normal Priority: High Component: XFS kernel code AssignedTo: xfs-master@oss.sgi.com ReportedBy: xfs@tobias.olsson.be Jul 11 03:58:08 blaster java: page allocation failure. 
order:4, mode:0x50 Jul 11 03:58:08 blaster [] dump_stack+0x1e/0x20 Jul 11 03:58:08 blaster [] __alloc_pages+0x2b9/0x320 Jul 11 03:58:08 blaster [] __get_free_pages+0x22/0x50 Jul 11 03:58:08 blaster [] kmem_getpages+0x1d/0xc0 Jul 11 03:58:08 blaster [] cache_grow+0x8a/0x1e0 Jul 11 03:58:08 blaster [] cache_alloc_refill+0x139/0x1f0 Jul 11 03:58:08 blaster [] __kmalloc+0x6c/0x80 Jul 11 03:58:08 blaster [] kmem_alloc+0x51/0xb0 Jul 11 03:58:08 blaster [] kmem_realloc+0x21/0x90 Jul 11 03:58:08 blaster [] xfs_iext_realloc+0x10d/0x160 Jul 11 03:58:08 blaster [] xfs_bmap_insert_exlist+0x2f/0xe0 Jul 11 03:58:08 blaster [] xfs_bmap_add_extent_hole_delay+0x2e8/0x4b0 Jul 11 03:58:08 blaster [] xfs_bmap_add_extent+0x360/0x400 Jul 11 03:58:08 blaster [] xfs_bmapi+0x5de/0x13d0 Jul 11 03:58:08 blaster [] xfs_iomap_write_delay+0x532/0x7d0 Jul 11 03:58:08 blaster [] xfs_iomap+0x2be/0x490 Jul 11 03:58:08 blaster [] linvfs_get_block_core+0x91/0x260 Jul 11 03:58:08 blaster [] linvfs_get_block_sync+0x3e/0x40 Jul 11 03:58:08 blaster [] __block_prepare_write+0x1a3/0x360 Jul 11 03:58:08 blaster [] block_prepare_write+0x32/0x40 Jul 11 03:58:08 blaster [] generic_file_aio_write_nolock+0x342/0xab0 Jul 11 03:58:08 blaster [] xfs_write+0x241/0x790 Jul 11 03:58:08 blaster [] linvfs_write+0x9b/0x110 Jul 11 03:58:08 blaster [] do_sync_write+0x89/0xc0 Jul 11 03:58:08 blaster [] vfs_write+0xa0/0x120 Jul 11 03:58:08 blaster [] sys_write+0x3f/0x60 Jul 11 03:58:08 blaster [] syscall_call+0x7/0xb Jul 11 03:58:08 blaster Jul 11 03:58:08 blaster java: page allocation failure. 
order:4, mode:0x50 Jul 11 03:58:09 blaster [] dump_stack+0x1e/0x20 Jul 11 03:58:09 blaster [] __alloc_pages+0x2b9/0x320 Jul 11 03:58:09 blaster [] __get_free_pages+0x22/0x50 Jul 11 03:58:09 blaster [] kmem_getpages+0x1d/0xc0 Jul 11 03:58:09 blaster [] cache_grow+0x8a/0x1e0 Jul 11 03:58:09 blaster [] cache_alloc_refill+0x139/0x1f0 Jul 11 03:58:09 blaster [] __kmalloc+0x6c/0x80 Jul 11 03:58:09 blaster [] kmem_alloc+0x51/0xb0 Jul 11 03:58:09 blaster [] kmem_realloc+0x21/0x90 Jul 11 03:58:09 blaster [] xfs_iext_realloc+0x10d/0x160 Jul 11 03:58:09 blaster [] xfs_bmap_insert_exlist+0x2f/0xe0 Jul 11 03:58:09 blaster [] xfs_bmap_add_extent_hole_delay+0x2e8/0x4b0 Jul 11 03:58:09 blaster [] xfs_bmap_add_extent+0x360/0x400 Jul 11 03:58:09 blaster [] xfs_bmapi+0x5de/0x13d0 Jul 11 03:58:09 blaster [] xfs_iomap_write_delay+0x532/0x7d0 Jul 11 03:58:09 blaster [] xfs_iomap+0x2be/0x490 Jul 11 03:58:09 blaster [] linvfs_get_block_core+0x91/0x260 Jul 11 03:58:09 blaster [] linvfs_get_block_sync+0x3e/0x40 Jul 11 03:58:09 blaster [] __block_prepare_write+0x1a3/0x360 Jul 11 03:58:09 blaster [] block_prepare_write+0x32/0x40 Jul 11 03:58:09 blaster [] generic_file_aio_write_nolock+0x342/0xab0 Jul 11 03:58:09 blaster [] xfs_write+0x241/0x790 Jul 11 03:58:09 blaster [] linvfs_write+0x9b/0x110 Jul 11 03:58:09 blaster [] do_sync_write+0x89/0xc0 Jul 11 03:58:09 blaster [] vfs_write+0xa0/0x120 Jul 11 03:58:09 blaster [] sys_write+0x3f/0x60 Jul 11 03:58:09 blaster [] syscall_call+0x7/0xb Jul 11 03:58:09 blaster Jul 11 03:59:20 blaster printk: 20 messages suppressed. and then it starts again. I don't know, but the filesystem may have become full at about this time. The messages stopped once I removed some files, but I also did a couple of other things which may have freed RAM. ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. 
From owner-linux-xfs@oss.sgi.com Sun Jul 11 03:54:32 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 11 Jul 2004 03:54:48 -0700 (PDT) Received: from dsl-prvgw1cc4.dial.inet.fi (dsl-prvgw1cc4.dial.inet.fi [80.223.50.196]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i6BAsAgi011806; Sun, 11 Jul 2004 03:54:31 -0700 Received: by dsl-prvgw1cc4.dial.inet.fi (Postfix, from userid 1000) id 2CE582403801; Sun, 11 Jul 2004 13:54:09 +0300 (EEST) Received: from localhost (localhost [127.0.0.1]) by dsl-prvgw1cc4.dial.inet.fi (Postfix) with ESMTP id 2A0E130A4; Sun, 11 Jul 2004 13:54:09 +0300 (EEST) Date: Sun, 11 Jul 2004 13:54:09 +0300 (EEST) From: "Petri T. Koistinen" To: xfs-masters@oss.sgi.com, nathans@sgi.com Cc: linux-xfs@oss.sgi.com, linux-kernel@vger.kernel.org Subject: [PATCH] Fix XFS uses of plain integer as NULL pointer Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 3629 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: petri.koistinen@iki.fi Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 9570 Lines: 318 Hi! This patch will fix XFS sparse warnings about using plain integer as NULL pointer. Signed-off-by: Petri T. 
Koistinen --- linux-2.6/fs/xfs/linux-2.6/xfs_stats.c.orig 2004-07-11 12:26:13.000000000 +0300 +++ linux-2.6/fs/xfs/linux-2.6/xfs_stats.c 2004-07-11 12:33:28.000000000 +0300 @@ -119,9 +119,9 @@ void xfs_init_procfs(void) { - if (!proc_mkdir("fs/xfs", 0)) + if (!proc_mkdir("fs/xfs", NULL)) return; - create_proc_read_entry("fs/xfs/stat", 0, 0, xfs_read_xfsstats, NULL); + create_proc_read_entry("fs/xfs/stat", 0, NULL, xfs_read_xfsstats, NULL); } void --- linux-2.6/fs/xfs/linux-2.6/xfs_aops.c.orig 2004-07-11 13:09:32.000000000 +0300 +++ linux-2.6/fs/xfs/linux-2.6/xfs_aops.c 2004-07-11 13:09:53.000000000 +0300 @@ -264,7 +264,7 @@ page = find_trylock_page(mapping, index); if (!page) - return 0; + return NULL; if (PageWriteback(page)) goto out; --- linux-2.6/fs/xfs/quota/xfs_dquot.c.orig 2004-07-11 13:22:39.000000000 +0300 +++ linux-2.6/fs/xfs/quota/xfs_dquot.c 2004-07-11 13:23:01.000000000 +0300 @@ -145,7 +145,7 @@ dqp->q_res_icount = 0; dqp->q_res_rtbcount = 0; dqp->q_pincount = 0; - dqp->q_hash = 0; + dqp->q_hash = NULL; ASSERT(dqp->dq_flnext == dqp->dq_flprev); #ifdef XFS_DQUOT_TRACE --- linux-2.6/fs/xfs/quota/xfs_qm_stats.c.orig 2004-07-11 13:24:26.000000000 +0300 +++ linux-2.6/fs/xfs/quota/xfs_qm_stats.c 2004-07-11 13:27:34.000000000 +0300 @@ -137,8 +137,10 @@ void xfs_qm_init_procfs(void) { - create_proc_read_entry("fs/xfs/xqmstat", 0, 0, xfs_qm_read_stats, NULL); - create_proc_read_entry("fs/xfs/xqm", 0, 0, xfs_qm_read_xfsquota, NULL); + create_proc_read_entry("fs/xfs/xqmstat", 0, NULL, xfs_qm_read_stats, + NULL); + create_proc_read_entry("fs/xfs/xqm", 0, NULL, xfs_qm_read_xfsquota, + NULL); } void --- linux-2.6/fs/xfs/linux-2.6/xfs_super.c.orig 2004-07-11 13:29:30.000000000 +0300 +++ linux-2.6/fs/xfs/linux-2.6/xfs_super.c 2004-07-11 13:29:46.000000000 +0300 @@ -573,7 +573,7 @@ dotdot.d_name.name = ".."; dotdot.d_name.len = 2; - dotdot.d_inode = 0; + dotdot.d_inode = NULL; cvp = NULL; vp = LINVFS_GET_VP(child->d_inode); --- linux-2.6/fs/xfs/xfs_alloc.c.orig 
2004-07-11 12:38:21.000000000 +0300 +++ linux-2.6/fs/xfs/xfs_alloc.c 2004-07-11 12:45:29.000000000 +0300 @@ -665,7 +665,7 @@ * Allocate/initialize a cursor for the by-number freespace btree. */ bno_cur = xfs_btree_init_cursor(args->mp, args->tp, args->agbp, - args->agno, XFS_BTNUM_BNO, 0, 0); + args->agno, XFS_BTNUM_BNO, NULL, 0); /* * Lookup bno and minlen in the btree (minlen is irrelevant, really). * Look for the closest free block <= bno, it must contain bno @@ -721,7 +721,7 @@ * Allocate/initialize a cursor for the by-size btree. */ cnt_cur = xfs_btree_init_cursor(args->mp, args->tp, args->agbp, - args->agno, XFS_BTNUM_CNT, 0, 0); + args->agno, XFS_BTNUM_CNT, NULL, 0); ASSERT(args->agbno + args->len <= INT_GET(XFS_BUF_TO_AGF(args->agbp)->agf_length, ARCH_CONVERT)); @@ -788,7 +788,7 @@ * Get a cursor for the by-size btree. */ cnt_cur = xfs_btree_init_cursor(args->mp, args->tp, args->agbp, - args->agno, XFS_BTNUM_CNT, 0, 0); + args->agno, XFS_BTNUM_CNT, NULL, 0); ltlen = 0; bno_cur_lt = bno_cur_gt = NULL; /* @@ -916,7 +916,7 @@ * Set up a cursor for the by-bno tree. */ bno_cur_lt = xfs_btree_init_cursor(args->mp, args->tp, - args->agbp, args->agno, XFS_BTNUM_BNO, 0, 0); + args->agbp, args->agno, XFS_BTNUM_BNO, NULL, 0); /* * Fix up the btree entries. */ @@ -944,7 +944,7 @@ * Allocate and initialize the cursor for the leftward search. */ bno_cur_lt = xfs_btree_init_cursor(args->mp, args->tp, args->agbp, - args->agno, XFS_BTNUM_BNO, 0, 0); + args->agno, XFS_BTNUM_BNO, NULL, 0); /* * Lookup <= bno to find the leftward search's starting point. */ @@ -956,7 +956,7 @@ * search. */ bno_cur_gt = bno_cur_lt; - bno_cur_lt = 0; + bno_cur_lt = NULL; } /* * Found something. Duplicate the cursor for the rightward search. @@ -1301,7 +1301,7 @@ * Allocate and initialize a cursor for the by-size btree. 
*/ cnt_cur = xfs_btree_init_cursor(args->mp, args->tp, args->agbp, - args->agno, XFS_BTNUM_CNT, 0, 0); + args->agno, XFS_BTNUM_CNT, NULL, 0); bno_cur = NULL; /* * Look for an entry >= maxlen+alignment-1 blocks. @@ -1406,7 +1406,7 @@ * Allocate and initialize a cursor for the by-block tree. */ bno_cur = xfs_btree_init_cursor(args->mp, args->tp, args->agbp, - args->agno, XFS_BTNUM_BNO, 0, 0); + args->agno, XFS_BTNUM_BNO, NULL, 0); if ((error = xfs_alloc_fixup_trees(cnt_cur, bno_cur, fbno, flen, rbno, rlen, XFSA_FIXUP_CNT_OK))) goto error0; @@ -1553,8 +1553,8 @@ /* * Allocate and initialize a cursor for the by-block btree. */ - bno_cur = xfs_btree_init_cursor(mp, tp, agbp, agno, XFS_BTNUM_BNO, 0, - 0); + bno_cur = xfs_btree_init_cursor(mp, tp, agbp, agno, XFS_BTNUM_BNO, + NULL, 0); cnt_cur = NULL; /* * Look for a neighboring block on the left (lower block numbers) @@ -1613,8 +1613,8 @@ /* * Now allocate and initialize a cursor for the by-size tree. */ - cnt_cur = xfs_btree_init_cursor(mp, tp, agbp, agno, XFS_BTNUM_CNT, 0, - 0); + cnt_cur = xfs_btree_init_cursor(mp, tp, agbp, agno, XFS_BTNUM_CNT, + NULL, 0); /* * Have both left and right contiguous neighbors. * Merge all three into a single free block. --- linux-2.6/fs/xfs/xfs_alloc_btree.c.orig 2004-07-11 12:48:08.000000000 +0300 +++ linux-2.6/fs/xfs/xfs_alloc_btree.c 2004-07-11 12:48:56.000000000 +0300 @@ -263,7 +263,7 @@ /* * Update the cursor so there's one fewer level. 
*/ - xfs_btree_setbuf(cur, level, 0); + xfs_btree_setbuf(cur, level, NULL); cur->bc_nlevels--; } else if (level > 0 && (error = xfs_alloc_decrement(cur, level, &i))) --- linux-2.6/fs/xfs/xfs_bmap_btree.c.orig 2004-07-11 12:50:30.000000000 +0300 +++ linux-2.6/fs/xfs/xfs_bmap_btree.c 2004-07-11 12:51:11.000000000 +0300 @@ -1953,7 +1953,7 @@ *bpp = cur->bc_bufs[level]; rval = XFS_BUF_TO_BMBT_BLOCK(*bpp); } else { - *bpp = 0; + *bpp = NULL; ifp = XFS_IFORK_PTR(cur->bc_private.b.ip, cur->bc_private.b.whichfork); rval = ifp->if_broot; --- linux-2.6/fs/xfs/xfs_da_btree.c.orig 2004-07-11 12:54:29.000000000 +0300 +++ linux-2.6/fs/xfs/xfs_da_btree.c 2004-07-11 12:55:22.000000000 +0300 @@ -2092,7 +2092,7 @@ int caller, inst_t *ra) { - xfs_buf_t *bp = 0; + xfs_buf_t *bp = NULL; xfs_buf_t **bplist; int error=0; int i; --- linux-2.6/fs/xfs/xfs_log.c.orig 2004-07-11 12:56:27.000000000 +0300 +++ linux-2.6/fs/xfs/xfs_log.c 2004-07-11 13:00:21.000000000 +0300 @@ -356,13 +356,13 @@ if (!xlog_debug && xlog_target == log->l_targ) return 0; #endif - cb->cb_next = 0; + cb->cb_next = NULL; spl = LOG_LOCK(log); abortflg = (iclog->ic_state & XLOG_STATE_IOERROR); if (!abortflg) { ASSERT_ALWAYS((iclog->ic_state == XLOG_STATE_ACTIVE) || (iclog->ic_state == XLOG_STATE_WANT_SYNC)); - cb->cb_next = 0; + cb->cb_next = NULL; *(iclog->ic_callback_tail) = cb; iclog->ic_callback_tail = &(cb->cb_next); } @@ -564,7 +564,7 @@ xlog_in_core_t *first_iclog; #endif xfs_log_iovec_t reg[1]; - xfs_log_ticket_t tic = 0; + xfs_log_ticket_t tic = NULL; xfs_lsn_t lsn; int error; SPLDECL(s); @@ -1277,7 +1277,7 @@ int error; xfs_log_iovec_t reg[1]; - reg[0].i_addr = 0; + reg[0].i_addr = NULL; reg[0].i_len = 0; ASSERT_ALWAYS(iclog); @@ -1856,8 +1856,8 @@ do { if (iclog->ic_state == XLOG_STATE_DIRTY) { iclog->ic_state = XLOG_STATE_ACTIVE; - iclog->ic_offset = 0; - iclog->ic_callback = 0; /* don't need to free */ + iclog->ic_offset = 0; + iclog->ic_callback = NULL; /* don't need to free */ /* * If the number of ops in 
this iclog indicate it just * contains the dummy transaction, we can @@ -2080,7 +2080,7 @@ while (cb != 0) { iclog->ic_callback_tail = &(iclog->ic_callback); - iclog->ic_callback = 0; + iclog->ic_callback = NULL; LOG_UNLOCK(log, s); /* perform callbacks in the order given */ @@ -3098,7 +3098,7 @@ log->l_ticket_cnt++; log->l_ticket_tcnt++; } - t_list->t_next = 0; + t_list->t_next = NULL; log->l_tail = t_list; LOG_UNLOCK(log, s); } /* xlog_state_ticket_alloc */ @@ -3126,7 +3126,7 @@ /* no need to clear fields */ #else /* When we debug, it is easier if tickets are cycled */ - ticket->t_next = 0; + ticket->t_next = NULL; if (log->l_tail != 0) { log->l_tail->t_next = ticket; } else { --- linux-2.6/fs/xfs/xfs_log_recover.c.orig 2004-07-11 13:01:52.000000000 +0300 +++ linux-2.6/fs/xfs/xfs_log_recover.c 2004-07-11 13:02:45.000000000 +0300 @@ -2319,7 +2319,7 @@ * invalidate the buffer when we write it out below. */ imap.im_blkno = 0; - xfs_imap(log->l_mp, 0, ino, &imap, 0); + xfs_imap(log->l_mp, NULL, ino, &imap, 0); } /* --- linux-2.6/fs/xfs/xfs_mount.c.orig 2004-07-11 13:04:23.000000000 +0300 +++ linux-2.6/fs/xfs/xfs_mount.c 2004-07-11 13:04:45.000000000 +0300 @@ -633,7 +633,7 @@ xfs_buf_t *bp; xfs_sb_t *sbp = &(mp->m_sb); xfs_inode_t *rip; - vnode_t *rvp = 0; + vnode_t *rvp = NULL; int readio_log, writeio_log; vmap_t vmap; xfs_daddr_t d; --- linux-2.6/fs/xfs/xfs_trans_item.c.orig 2004-07-11 13:06:05.000000000 +0300 +++ linux-2.6/fs/xfs/xfs_trans_item.c 2004-07-11 13:07:27.000000000 +0300 @@ -290,7 +290,7 @@ } ASSERT(0); /* NOTREACHED */ - return 0; /* keep gcc quite */ + return NULL; /* keep gcc and sparse quiet */ } /* From owner-linux-xfs@oss.sgi.com Sun Jul 11 04:05:56 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 11 Jul 2004 04:06:31 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.12.10/8.12.9) with ESMTP id i6BB5ugi012678 for ; Sun, 11 Jul 2004 04:05:56 -0700 Received: (from xfs@localhost) by oss.sgi.com 
(8.12.10/8.12.8/Submit) id i6BB5uMl012677 for linux-xfs@oss.sgi.com; Sun, 11 Jul 2004 04:05:56 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.12.10/8.12.9) with ESMTP id i6BB5ogk012663 for ; Sun, 11 Jul 2004 04:05:50 -0700 Received: (from apache@localhost) by oss.sgi.com (8.12.10/8.12.8/Submit) id i6BAtrJ3012011; Sun, 11 Jul 2004 03:55:53 -0700 Date: Sun, 11 Jul 2004 03:55:53 -0700 Message-Id: <200407111055.i6BAtrJ3012011@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 345] Memory allocation failure X-Bugzilla-Reason: AssignedTo X-archive-position: 3630 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 345 Lines: 14 http://oss.sgi.com/bugzilla/show_bug.cgi?id=345 ------- Additional Comments From xfs@tobias.olsson.be 2004-11-07 03:55 PDT ------- The exact version I used was a cvs co linux-2.6-xfs from around 10 July 2004 23:00 GMT ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. From owner-linux-xfs@oss.sgi.com Sun Jul 11 11:45:11 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 11 Jul 2004 11:45:13 -0700 (PDT) Received: from pimout2-ext.prodigy.net (pimout2-ext.prodigy.net [207.115.63.101]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i6BIj8gi000822 for ; Sun, 11 Jul 2004 11:45:11 -0700 Received: from taniwha.stupidest.org (adsl-63-202-173-53.dsl.snfc21.pacbell.net [63.202.173.53]) by pimout2-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6BIj3UK193024; Sun, 11 Jul 2004 14:45:04 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 39978115C809; Sun, 11 Jul 2004 11:45:03 -0700 (PDT) Date: Sun, 11 Jul 2004 11:45:03 -0700 From: Chris Wedgwood To: "Petri T. 
Koistinen" Cc: xfs-masters@oss.sgi.com, nathans@sgi.com, linux-xfs@oss.sgi.com, linux-kernel@vger.kernel.org Subject: Re: [PATCH] Fix XFS uses of plain integer as NULL pointer Message-ID: <20040711184503.GB25196@taniwha.stupidest.org> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-archive-position: 3631 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 3798 Lines: 121 On Sun, Jul 11, 2004 at 01:54:09PM +0300, Petri T. Koistinen wrote: > This patch will fix XFS sparse warnings about using plain integer as > NULL pointer. These are already in the CVS tree. From owner-linux-xfs@oss.sgi.com Mon Jul 12 08:13:48 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 12 Jul 2004 08:13:59 -0700 (PDT) Received: from snark.thyrsus.com (dsl092-053-140.phl1.dsl.speakeasy.net [66.92.53.140]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i6CFDRgi008848 for ; Mon, 12 Jul 2004 08:13:48 -0700 Received: from snark.thyrsus.com (localhost [127.0.0.1]) by snark.thyrsus.com (8.12.11/8.12.11) with ESMTP id i6CFD1JB016921 for ; Mon, 12 Jul 2004 11:13:01 -0400 Date: Mon, 12 Jul 2004 11:13:01 -0400 From: esr@thyrsus.com Message-Id: <200407121513.i6CFD1JB016921@snark.thyrsus.com> To: linux-xfs@oss.sgi.com Subject: problems in one or more man pages you maintain X-archive-position: 3632 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: esr@thyrsus.com Precedence: bulk X-list: linux-xfs This is automatically generated email about problems in a man page for which you appear to be responsible. If you are not the right person or list, tell me and I will attempt to correct my database. See http://catb.org/~esr/doclifter/problems.html for details on how and why these patches were generated.
Feel free to email me with any questions. Note: This patch does not change the mod date of the manual page. You may wish to do that by hand. Problems with xfs_bmap.8: 1. Unknown or invalid macro. That is, one that does not fit in the macro set that the man page seems to be using. This is a serious error; it often means part of your text is being lost or rendered incorrectly. --- xfs_bmap.8-orig 2004-07-09 06:59:14.210997800 -0400 +++ xfs_bmap.8 2004-07-09 06:59:59.299143360 -0400 @@ -12,10 +12,12 @@ in the file that do not have any corresponding blocks (\f2hole\f1s). Each line of the listings takes the following form: -.Ex -\f2extent\f1\f7: [\f1\f2startoffset\f1\f7..\f1\f2endoffset\f1\f7]: \c -\f1\f2startblock\f1\f7..\f1\f2endblock\f1 -.Ee +.nf +.ft CW + \f2extent\f1\f7: [\f1\f2startoffset\f1\f7..\f1\f2endoffset\f1\f7]: \c + \f1\f2startblock\f1\f7..\f1\f2endblock\f1 +.ft +.fi Holes are marked by replacing the \f2startblock..endblock\f1 with \f2hole\fP. All the file offsets and disk blocks are in units of 512-byte blocks, @@ -29,9 +31,11 @@ .PP If the \f3-l\f1 option is used, then -.Ex -\f1\f2\f1\f7 \f1\f2blocks\f1\f7 -.Ee +.nf +.ft CW + \f1\f2\f1\f7 \f1\f2blocks\f1\f7 +.ft +.fi will be appended to each line. \f1\f2Nblocks\f1\f7 is the length of the extent described on the line in units of 512-byte blocks. Problems with xfs_check.8: 1. Unknown or invalid macro. That is, one that does not fit in the macro set that the man page seems to be using. This is a serious error; it often means part of your text is being lost or rendered incorrectly. --- xfs_check.8-orig 2004-07-09 07:01:51.627066920 -0400 +++ xfs_check.8 2004-07-09 07:03:06.440693520 -0400 @@ -90,17 +90,21 @@ rather than produce useful output. 
If the filesystem is completely corrupt, a core dump might be produced instead of the message -.Ex -\f2xxx\f1\f7 is not a valid filesystem\f1 -.Ee +.nf +.ft CW + \f2xxx\f1\f7 is not a valid filesystem\f1 +.ft +.fi .PP If the filesystem is very large (has many files) then .I xfs_check might run out of memory. In this case the message -.Ex -out of memory -.Ee +.nf +.ft CW + out of memory +.ft +.fi is printed. .PP The following is a description of the most likely problems and the associated -- Eric S. Raymond From owner-linux-xfs@oss.sgi.com Mon Jul 12 10:05:59 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 12 Jul 2004 10:06:02 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.12.10/8.12.9) with ESMTP id i6CH5xgi022433 for ; Mon, 12 Jul 2004 10:05:59 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.12.10/8.12.8/Submit) id i6CH5xQA022432 for linux-xfs@oss.sgi.com; Mon, 12 Jul 2004 10:05:59 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.12.10/8.12.9) with ESMTP id i6CH5wgk022418 for ; Mon, 12 Jul 2004 10:05:58 -0700 Received: (from apache@localhost) by oss.sgi.com (8.12.10/8.12.8/Submit) id i6CGlcjq020972; Mon, 12 Jul 2004 09:47:38 -0700 Date: Mon, 12 Jul 2004 09:47:38 -0700 Message-Id: <200407121647.i6CGlcjq020972@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 345] Memory allocation failure X-Bugzilla-Reason: AssignedTo X-archive-position: 3633 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 677 Lines: 21 http://oss.sgi.com/bugzilla/show_bug.cgi?id=345 ------- Additional Comments From sandeen@sgi.com 2004-12-07 09:47 PDT ------- You probably had some heavily fragmented files (doing any p2p downloading or nfs work?) which required the large allocation for an extent list. 
OTOH the 2.6 kernel should never fail a memory allocation request, I think, so the true bug is probably in the core kernel. You might try the cvs tree from oss.sgi.com, which has some core memory allocation fixes in it. Oh, but wait, you said you have cvs from Jul 10? Hm.... ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. From owner-linux-xfs@oss.sgi.com Mon Jul 12 10:34:19 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 12 Jul 2004 10:34:23 -0700 (PDT) Received: from mail.linux-sxs.org (mail.linux-sxs.org [207.218.156.196]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i6CHY7gi025900 for ; Mon, 12 Jul 2004 10:34:11 -0700 Received: from mail.linux-sxs.org (localhost [127.0.0.1]) by mail.linux-sxs.org (8.12.11/8.12.11/Debian-5) with ESMTP id i6CHXShT008130 for ; Mon, 12 Jul 2004 12:33:28 -0500 Received: from localhost (netllama@localhost) by mail.linux-sxs.org (8.12.11/8.12.11/Debian-5) with ESMTP id i6CHXSop008127 for ; Mon, 12 Jul 2004 12:33:28 -0500 Date: Mon, 12 Jul 2004 12:33:27 -0500 (EST) From: Net Llama! To: linux-xfs@oss.sgi.com Subject: 2.6.7 oops Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 3634 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: netllama@mail.linux-sxs.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 7794 Lines: 163 Hello, I'm running a vanilla 2.6.7 on a workstation. 
It appears to have Oops'ed yesterday morning while the slocate/updatedb cronjob was running: Jul 11 04:07:38 netllama kernel: Unable to handle kernel paging request at virtual address 985f57cc Jul 11 04:07:38 netllama kernel: printing eip: Jul 11 04:07:38 netllama kernel: c020aff9 Jul 11 04:07:38 netllama kernel: *pde = 00000000 Jul 11 04:07:38 netllama kernel: Oops: 0000 [#2] Jul 11 04:07:38 netllama kernel: PREEMPT Jul 11 04:07:38 netllama kernel: Modules linked in: Jul 11 04:07:38 netllama kernel: CPU: 0 Jul 11 04:07:38 netllama kernel: EIP: 0060:[radix_tree_lookup+73/96] Not tainted Jul 11 04:07:38 netllama kernel: EFLAGS: 00010047 (2.6.7) Jul 11 04:07:38 netllama kernel: EIP is at radix_tree_lookup+0x49/0x60 Jul 11 04:07:38 netllama kernel: eax: 985f57cc ebx: fffffffa ecx: dfe86100 edx: 00000000 Jul 11 04:07:38 netllama kernel: esi: 0002096f edi: 985f570c ebp: 0002096f esp: c355cc64 Jul 11 04:07:38 netllama kernel: ds: 007b es: 007b ss: 0068 Jul 11 04:07:38 netllama kernel: Process updatedb (pid: 25042, threadinfo=c355c000 task=d705d1b0) Jul 11 04:07:38 netllama kernel: Stack: 00000000 00000000 dfe86140 c0131e2c dfe86140 0002096f 00000000 00000000 Jul 11 04:07:38 netllama kernel: 00000050 0002096f c0131f34 dfe8613c 0002096f 00000000 c0639360 0002096f Jul 11 04:07:38 netllama kernel: 00000000 00001000 c1ad93c0 c01f97d9 dfe8613c 0002096f 00000050 c041f698 Jul 11 04:07:38 netllama kernel: Call Trace: Jul 11 04:07:38 netllama kernel: [find_lock_page+44/240] find_lock_page+0x2c/0xf0 Jul 11 04:07:38 netllama kernel: [find_or_create_page+68/192] find_or_create_page+0x44/0xc0 Jul 11 04:07:39 netllama kernel: [_pagebuf_lookup_pages+265/784] _pagebuf_lookup_pages+0x109/0x310 Jul 11 04:07:39 netllama kernel: [pagebuf_get+408/480] pagebuf_get+0x198/0x1e0 Jul 11 04:07:39 netllama kernel: [xfs_trans_read_buf+396/992] xfs_trans_read_buf+0x18c/0x3e0 Jul 11 04:07:39 netllama kernel: [xfs_da_do_buf+1964/2560] xfs_da_do_buf+0x7ac/0xa00 Jul 11 04:07:39 netllama kernel: 
[xfs_iget+259/368] xfs_iget+0x103/0x170 Jul 11 04:07:39 netllama kernel: [xfs_da_read_buf+88/96] xfs_da_read_buf+0x58/0x60 Jul 11 04:07:39 netllama kernel: [xfs_dir2_block_getdents+177/752] xfs_dir2_block_getdents+0xb1/0x2f0 Jul 11 04:07:39 netllama kernel: [xfs_dir2_block_getdents+177/752] xfs_dir2_block_getdents+0xb1/0x2f0 Jul 11 04:07:39 netllama kernel: [real_lookup+213/256] real_lookup+0xd5/0x100 Jul 11 04:07:39 netllama kernel: [xfs_bmap_last_offset+271/288] xfs_bmap_last_offset+0x10f/0x120 Jul 11 04:07:39 netllama kernel: [xfs_dir2_put_dirent64_direct+0/176] xfs_dir2_put_dirent64_direct+0x0/0xb0 Jul 11 04:07:39 netllama kernel: [xfs_dir2_isblock+42/128] xfs_dir2_isblock+0x2a/0x80 Jul 11 04:07:39 netllama kernel: [xfs_dir2_put_dirent64_direct+0/176] xfs_dir2_put_dirent64_direct+0x0/0xb0 Jul 11 04:07:39 netllama kernel: [xfs_dir2_getdents+196/432] xfs_dir2_getdents+0xc4/0x1b0 Jul 11 04:07:39 netllama kernel: [xfs_dir2_put_dirent64_direct+0/176] xfs_dir2_put_dirent64_direct+0x0/0xb0 Jul 11 04:07:39 netllama kernel: [xfs_readdir+97/192] xfs_readdir+0x61/0xc0 Jul 11 04:07:39 netllama kernel: [linvfs_readdir+253/608] linvfs_readdir+0xfd/0x260 Jul 11 04:07:39 netllama kernel: [vfs_readdir+137/160] vfs_readdir+0x89/0xa0 Jul 11 04:07:39 netllama kernel: [filldir64+0/272] filldir64+0x0/0x110 Jul 11 04:07:39 netllama kernel: [sys_getdents64+122/194] sys_getdents64+0x7a/0xc2 Jul 11 04:07:39 netllama kernel: [filldir64+0/272] filldir64+0x0/0x110 Jul 11 04:07:39 netllama kernel: [syscall_call+7/11] syscall_call+0x7/0xb Jul 11 04:07:39 netllama kernel: Jul 11 04:07:39 netllama kernel: Code: 8b 00 5b 5e 5f c3 5b 31 c0 5e 5f c3 8d 74 26 00 8d bc 27 00 Jul 11 04:07:39 netllama kernel: <6>note: updatedb[25042] exited with preempt_count 1 Jul 11 04:07:39 netllama kernel: bad: scheduling while atomic! 
Jul 11 04:07:39 netllama kernel: [schedule+1210/1264] schedule+0x4ba/0x4f0 Jul 11 04:07:39 netllama kernel: [unmap_page_range+83/144] unmap_page_range+0x53/0x90 Jul 11 04:07:39 netllama kernel: [unmap_vmas+460/480] unmap_vmas+0x1cc/0x1e0 Jul 11 04:07:39 netllama kernel: [exit_mmap+125/352] exit_mmap+0x7d/0x160 Jul 11 04:07:39 netllama kernel: [mmput+98/144] mmput+0x62/0x90 Jul 11 04:07:39 netllama kernel: [do_exit+337/992] do_exit+0x151/0x3e0 Jul 11 04:07:39 netllama kernel: [do_page_fault+0/1324] do_page_fault+0x0/0x52c Jul 11 04:07:39 netllama kernel: [die+252/256] die+0xfc/0x100 Jul 11 04:07:39 netllama kernel: [do_page_fault+864/1324] do_page_fault+0x360/0x52c Jul 11 04:07:39 netllama kernel: [pagebuf_get+355/480] pagebuf_get+0x163/0x1e0 Jul 11 04:07:39 netllama kernel: [xfs_da_buf_make+373/592] xfs_da_buf_make+0x175/0x250 Jul 11 04:07:39 netllama kernel: [xfs_da_do_buf+805/2560] xfs_da_do_buf+0x325/0xa00 Jul 11 04:07:39 netllama kernel: [xfs_dir2_block_lookup_int+82/432] xfs_dir2_block_lookup_int+0x52/0x1b0 Jul 11 04:07:39 netllama kernel: [do_page_fault+0/1324] do_page_fault+0x0/0x52c Jul 11 04:07:39 netllama kernel: [error_code+45/56] error_code+0x2d/0x38 Jul 11 04:07:39 netllama kernel: [radix_tree_lookup+73/96] radix_tree_lookup+0x49/0x60 Jul 11 04:07:39 netllama kernel: [find_lock_page+44/240] find_lock_page+0x2c/0xf0 Jul 11 04:07:39 netllama kernel: [find_or_create_page+68/192] find_or_create_page+0x44/0xc0 Jul 11 04:07:39 netllama kernel: [_pagebuf_lookup_pages+265/784] _pagebuf_lookup_pages+0x109/0x310 Jul 11 04:07:39 netllama kernel: [pagebuf_get+408/480] pagebuf_get+0x198/0x1e0 Jul 11 04:07:39 netllama kernel: [xfs_trans_read_buf+396/992] xfs_trans_read_buf+0x18c/0x3e0 Jul 11 04:07:39 netllama kernel: [xfs_da_do_buf+1964/2560] xfs_da_do_buf+0x7ac/0xa00 Jul 11 04:07:39 netllama kernel: [xfs_iget+259/368] xfs_iget+0x103/0x170 Jul 11 04:07:39 netllama kernel: [xfs_da_read_buf+88/96] xfs_da_read_buf+0x58/0x60 Jul 11 04:07:39 netllama kernel: 
[xfs_dir2_block_getdents+177/752] xfs_dir2_block_getdents+0xb1/0x2f0 Jul 11 04:07:39 netllama kernel: [xfs_dir2_block_getdents+177/752] xfs_dir2_block_getdents+0xb1/0x2f0 Jul 11 04:07:39 netllama kernel: [real_lookup+213/256] real_lookup+0xd5/0x100 Jul 11 04:07:39 netllama kernel: [xfs_bmap_last_offset+271/288] xfs_bmap_last_offset+0x10f/0x120 Jul 11 04:07:39 netllama kernel: [xfs_dir2_put_dirent64_direct+0/176] xfs_dir2_put_dirent64_direct+0x0/0xb0 Jul 11 04:07:39 netllama kernel: [xfs_dir2_isblock+42/128] xfs_dir2_isblock+0x2a/0x80 Jul 11 04:07:39 netllama kernel: [xfs_dir2_put_dirent64_direct+0/176] xfs_dir2_put_dirent64_direct+0x0/0xb0 Jul 11 04:07:39 netllama kernel: [xfs_dir2_getdents+196/432] xfs_dir2_getdents+0xc4/0x1b0 Jul 11 04:07:39 netllama kernel: [xfs_dir2_put_dirent64_direct+0/176] xfs_dir2_put_dirent64_direct+0x0/0xb0 Jul 11 04:07:40 netllama kernel: [xfs_readdir+97/192] xfs_readdir+0x61/0xc0 Jul 11 04:07:40 netllama kernel: [linvfs_readdir+253/608] linvfs_readdir+0xfd/0x260 Jul 11 04:07:40 netllama kernel: [vfs_readdir+137/160] vfs_readdir+0x89/0xa0 Jul 11 04:07:40 netllama kernel: [filldir64+0/272] filldir64+0x0/0x110 Jul 11 04:07:40 netllama kernel: [sys_getdents64+122/194] sys_getdents64+0x7a/0xc2 Jul 11 04:07:40 netllama kernel: [filldir64+0/272] filldir64+0x0/0x110 Jul 11 04:07:40 netllama kernel: [syscall_call+7/11] syscall_call+0x7/0xb The box seems to be stable since then (i've not rebooted it yet), although the updatedb cronjob is hung. I'm not 100% certain that this is a XFS issue, but since i see a ton of xfs references in the oops, i'm asking here first. thanks. 
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lonni J Friedman netllama@linux-sxs.org Linux Step-by-step & TyGeMo http://netllama.ipfox.com From owner-linux-xfs@oss.sgi.com Mon Jul 12 10:45:51 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 12 Jul 2004 10:45:53 -0700 (PDT) Received: from zok.sgi.com (mtvcafw.sgi.com [192.48.171.6]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i6CHjpgi029957 for ; Mon, 12 Jul 2004 10:45:51 -0700 Received: from flecktone.americas.sgi.com (flecktone.americas.sgi.com [192.48.203.135]) by zok.sgi.com (8.12.9/8.12.9/linux-outbound_gateway-1.1) with ESMTP id i6CHjjhv014718 for ; Mon, 12 Jul 2004 10:45:45 -0700 Received: from daisy-e236.americas.sgi.com (daisy-e236.americas.sgi.com [128.162.236.214]) by flecktone.americas.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id i6CHjXOV42555248 for ; Mon, 12 Jul 2004 12:45:33 -0500 (CDT) Received: from sgi.com (penguin.americas.sgi.com [128.162.240.135]) by daisy-e236.americas.sgi.com (8.12.9/SGI-server-1.8) with ESMTP id i6CHj25N3138906; Mon, 12 Jul 2004 12:45:12 -0500 (CDT) Received: from penguin.americas.sgi.com (localhost.localdomain [127.0.0.1]) by sgi.com (8.12.8/8.12.8) with ESMTP id i6CHglpY027917; Mon, 12 Jul 2004 12:42:49 -0500 Received: (from sandeen@localhost) by penguin.americas.sgi.com (8.12.8/8.12.8/Submit) id i6CHgexX027882; Mon, 12 Jul 2004 12:42:40 -0500 Date: Mon, 12 Jul 2004 12:42:40 -0500 From: Eric Sandeen Message-Id: <200407121742.i6CHgexX027882@penguin.americas.sgi.com> Subject: PARTIAL TAKE 917571 - X-archive-position: 3635 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: sandeen@penguin.americas.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 360 Lines: 15 Turn on XFS_BIG_BLKNOS and/or XFS_BIG_INUMS if LBD patch is present and enabled Date: Mon Jul 12 10:44:02 PDT 2004 Workarea: 
penguin.americas.sgi.com:/src/eric/linux-2.4.x Inspected by: nathans The following file(s) were checked into: bonnie.engr.sgi.com:/isms/linux/2.4.x-xfs Modid: xfs-linux:xfs-kern:174963a fs/xfs/linux-2.4/xfs_linux.h - 1.135 From owner-linux-xfs@oss.sgi.com Mon Jul 12 12:04:56 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 12 Jul 2004 12:05:00 -0700 (PDT) Received: from pooh.lsc.hu (pooh.lsc.hu [195.56.172.131]) by oss.sgi.com (8.12.10/8.12.9) with SMTP id i6CJ4hgi013454 for ; Mon, 12 Jul 2004 12:04:46 -0700 Received: by pooh.lsc.hu (Postfix, from userid 1004) id 476E41D406; Mon, 12 Jul 2004 20:46:59 +0200 (CEST) Date: Mon, 12 Jul 2004 20:46:59 +0200 From: "Laszlo 'GCS' Boszormenyi" To: linux-xfs@oss.sgi.com Subject: Re: 2.6.7 oops Message-ID: <20040712184658.GA12749@pooh> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.4i X-Whitelist: OK X-archive-position: 3636 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: gcs@lsc.hu Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1063 Lines: 24 * Net Llama! [2004-07-12 12:33:27 -0500]: > I'm running a vanilla 2.6.7 on a workstation. It appears to have Oops'ed > yesterday morning while the slocate/updatedb cronjob was running: [...] > Jul 11 04:07:39 netllama kernel: <6>note: updatedb[25042] exited with > preempt_count 1 > Jul 11 04:07:39 netllama kernel: bad: scheduling while atomic! > Jul 11 04:07:39 netllama kernel: [schedule+1210/1264] > schedule+0x4ba/0x4f0 [...] > The box seems to be stable since then (i've not rebooted it yet), although > the updatedb cronjob is hung. I'm not 100% certain that this is a XFS > issue, but since i see a ton of xfs references in the oops, i'm asking > here first. thanks. Well, I have seen such problems on other kernels as well (2.6.7 -> 2.6.7-bk20), running on machines without any XFS partition. 
Thus I tend to say it is another kernel bug (scheduling problems are not related to XFS anyway IMHO). It seems running on 2.6.8-rc1 is a better idea, as I heard 2.6.7 had different small problems as well. Regards, Laszlo From owner-linux-xfs@oss.sgi.com Mon Jul 12 13:06:00 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 12 Jul 2004 13:06:02 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.12.10/8.12.9) with ESMTP id i6CK60gi022947 for ; Mon, 12 Jul 2004 13:06:00 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.12.10/8.12.8/Submit) id i6CK60xB022946 for linux-xfs@oss.sgi.com; Mon, 12 Jul 2004 13:06:00 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.12.10/8.12.9) with ESMTP id i6CK5xgk022932 for ; Mon, 12 Jul 2004 13:05:59 -0700 Received: (from apache@localhost) by oss.sgi.com (8.12.10/8.12.8/Submit) id i6CJMSOA019088; Mon, 12 Jul 2004 12:22:28 -0700 Date: Mon, 12 Jul 2004 12:22:28 -0700 Message-Id: <200407121922.i6CJMSOA019088@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 345] Memory allocation failure X-Bugzilla-Reason: AssignedTo X-archive-position: 3637 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 980 Lines: 27 http://oss.sgi.com/bugzilla/show_bug.cgi?id=345 ------- Additional Comments From xfs@tobias.olsson.be 2004-12-07 12:22 PDT ------- Yes, that partition is exclusively used for downloading and p2p-programs. It's almost constantly full, and on top of that almost continually in use by at least one p2p-program. I use mldonkey(ed2k) and azureus(bittorrent), and both of these are set to start with sparse files, even further fragmenting the filesystem. On top of that, I often use azureus over nfs.. So, yes, it's probably very, VERY fragmented..
I'll run xfs_fsr for a night or so, and get back to you if I still get the memory allocation error. Compiling and testing patches (unless destructive) is no problem for me. Apparently cw was able to reproduce the bug by running multiple cd-burns after each other, but I'm not burning any cd's on that computer. ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. From owner-linux-xfs@oss.sgi.com Tue Jul 13 15:23:44 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 13 Jul 2004 15:23:55 -0700 (PDT) Received: from out2.smtp.messagingengine.com (out2.smtp.messagingengine.com [66.111.4.26]) by oss.sgi.com (8.12.10/8.13.0) with SMTP id i6DMNhHY015649 for ; Tue, 13 Jul 2004 15:23:43 -0700 Received: from server3.messagingengine.com (server3.internal [10.202.2.134]) by mail.messagingengine.com (Postfix) with ESMTP id 12CD3C11FEA for ; Tue, 13 Jul 2004 18:23:39 -0400 (EDT) Received: by server3.messagingengine.com (Postfix, from userid 99) id 8520615DD43; Tue, 13 Jul 2004 18:23:40 -0400 (EDT) Content-Disposition: inline Content-Transfer-Encoding: 7bit Content-Type: text/plain; charset="ISO-8859-1" MIME-Version: 1.0 X-Mailer: MIME::Lite 1.4 (F2.72; T1.001; A1.62; B3.01; Q3.01) Subject: 10 GIG hard drive partitioning To: linux-xfs@oss.sgi.com Date: Tue, 13 Jul 2004 15:23:40 -0700 From: "Asterius Pandoras" X-Sasl-Enc: uAVkhr7+rWM2xvIZVL3alA 1089757420 Message-Id: <1089757420.2618.200301256@webmail.messagingengine.com> X-archive-position: 3638 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: asterius@airpost.net Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 497 Lines: 12 I'm going to reinstall my Gentoo linux from scratch and am asking for expert advice on the sizes, number and journal sizes of those partitions. From the information I read it is still very confusing.
I understand that much of it depends on what I am going to do with the system. Nothing in particular, mail, news, webserver (only for developing and testing of websites). It will be a Linux-only system. So, if you were me, how would you do it? Thanks. (of course I am talking about XFS). Asterius. From owner-linux-xfs Wed Jul 14 19:24:42 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 14 Jul 2004 19:24:50 -0700 (PDT) Received: from omx2.sgi.com (mtvcafw.sgi.com [192.48.171.6]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6F2OgG1029880 for ; Wed, 14 Jul 2004 19:24:42 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by omx2.sgi.com (8.12.11/8.12.9/linux-outbound_gateway-1.1) with ESMTP id i6F2bBNP000398 for ; Wed, 14 Jul 2004 19:37:12 -0700 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i6F2OXap12229038 for ; Thu, 15 Jul 2004 12:24:33 +1000 (EST) Received: (from ajones@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id i6F2OWSV12255567 for linux-xfs@oss.sgi.com; Thu, 15 Jul 2004 12:24:32 +1000 (EST) Date: Thu, 15 Jul 2004 12:24:32 +1000 (EST) From: Andrew Jones Message-Id: <200407150224.i6F2OWSV12255567@snort.melbourne.sgi.com> To: linux-xfs@oss.sgi.com Subject: TAKE 917515 - X-archive-position: 3639 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: ajones@snort.melbourne.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1336 Lines: 49 Fixes for IRIX tape tests. Date: Wed Jul 14 19:24:08 PDT 2004 Workarea: snort.melbourne.sgi.com:/home/ajones/source/2.4.x-xfs Inspected by: tes The following file(s) were checked into: bonnie.engr.sgi.com:/isms/slinx/2.4.x-xfs Modid: xfs-cmds:slinx:175120a cmd/xfstests/035.out.irix - 1.1 - Added support for different .out.irix and .out.linux files.
cmd/xfstests/035.out.linux - 1.1 - Added support for different .out.irix and .out.linux files. cmd/xfstests/022 - 1.10 - Added note about fsstress using an endian-dependent random number generator. cmd/xfstests/025 - 1.9 - Removed IRIX as a supported OS. cmd/xfstests/035 - 1.9 - Added support for different .out.irix and .out.linux files. cmd/xfstests/common.dump - 1.50 - Modified filter. Fixed that 'missing t' problem for IRIX. cmd/xfstests/035.out - 1.7 - Added support for different .out.irix and .out.linux files. cmd/xfstests/022.out.irix - 1.2 - Fixed simple .out file inconsistency. cmd/xfstests/023.out.irix - 1.2 - Fixed simple .out file inconsistency. cmd/xfstests/025.out.irix - 1.2 - Removed IRIX as a supported OS. xfstests/025.out.linux 1.2 renamed to xfstests/025.out 1.7 - xfstests/025.out.linux 1.1 Renamed to xfstests/025.out. Removed IRIX as a supported OS. cmd/xfstests/043.out.irix - 1.2 - Fixed simple .out file inconsistency. From owner-linux-xfs Thu Jul 15 04:54:15 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 15 Jul 2004 04:54:18 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6FBsFGm002574 for ; Thu, 15 Jul 2004 04:54:15 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6FBsFgd002573 for linux-xfs@oss.sgi.com; Thu, 15 Jul 2004 04:54:15 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6FBsDcH002549 for ; Thu, 15 Jul 2004 04:54:13 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6FBNhJX001786; Thu, 15 Jul 2004 04:23:43 -0700 Date: Thu, 15 Jul 2004 04:23:43 -0700 Message-Id: <200407151123.i6FBNhJX001786@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 346] New: Fix for xfs_fsr crash on DEC Alpha X-Bugzilla-Reason: AssignedTo X-archive-position: 3640 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to:
linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1194 Lines: 39 http://oss.sgi.com/bugzilla/show_bug.cgi?id=346 Summary: Fix for xfs_fsr crash on DEC Alpha Product: Linux XFS Version: 1.3.x Platform: Other OS/Version: Linux Status: NEW Severity: normal Priority: Medium Component: xfsdump AssignedTo: xfs-master@oss.sgi.com ReportedBy: jan-jaap.vanderheijden@phoenixbv.com In xfs_fsr.c, the function xfs_bulkstat() takes a size_t *ocount, but writes only to the lower 32bits: bulkreq.ocount = (__s32 *)ocount; When size_t is 64bits (such as on the Alpha), the upper 32bits will be undefined . When fsrfs() passes an uninitialized variable to it, it crashes. This is a fix against xfsdump-2.2.21: --- xfsdump-2.2.21/fsr/xfs_fsr.c.orig 2004-07-15 12:35:29.000000000 +0200 +++ xfsdump-2.2.21/fsr/xfs_fsr.c 2004-07-15 13:19:43.000000000 +0200 @@ -178,6 +178,7 @@ { xfs_fsop_bulkreq_t bulkreq; + *ocount = 0; bulkreq.lastip = lastip; bulkreq.icount = icount; bulkreq.ubuffer = ubuffer; ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. 
From owner-linux-xfs Thu Jul 15 06:54:14 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 15 Jul 2004 06:54:17 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6FDsEC7006402 for ; Thu, 15 Jul 2004 06:54:14 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6FDsEqg006401 for linux-xfs@oss.sgi.com; Thu, 15 Jul 2004 06:54:14 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6FDsDiF006387 for ; Thu, 15 Jul 2004 06:54:13 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6FDrWpl006381; Thu, 15 Jul 2004 06:53:32 -0700 Date: Thu, 15 Jul 2004 06:53:32 -0700 Message-Id: <200407151353.i6FDrWpl006381@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 346] Fix for xfs_fsr crash on DEC Alpha X-Bugzilla-Reason: AssignedTo X-archive-position: 3641 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 526 Lines: 19 http://oss.sgi.com/bugzilla/show_bug.cgi?id=346 ------- Additional Comments From jan-jaap.vanderheijden@phoenixbv.com 2004-15-07 06:53 PDT ------- Same problem occurs at xfsdump-2.2.21/dump/content.c:2920 Initializing buflenout on line 2879 should work. Another option would be to use __s32 instead of size_t for buflenout, this approach is taken in xfsdump-2.2.21/common/util.c in bigstat_iter() ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. 
From owner-linux-xfs Thu Jul 15 12:32:09 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 15 Jul 2004 12:32:16 -0700 (PDT) Received: from mxfep02.bredband.com (mxfep02.bredband.com [195.54.107.73]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6FJW8V3001889 for ; Thu, 15 Jul 2004 12:32:09 -0700 Received: from mail.ter.nu ([213.114.210.36] [213.114.210.36]) by mxfep02.bredband.com with ESMTP id <20040715193201.EXTL23867.mxfep02.bredband.com@mail.ter.nu> for ; Thu, 15 Jul 2004 21:32:01 +0200 Received: from grabbarna.nu (w.tenet [192.168.0.5]) by mail.ter.nu (Postfix) with ESMTP id C9E2C9981D4 for ; Thu, 15 Jul 2004 21:32:00 +0200 (CEST) Message-ID: <40F6DBC1.6050909@grabbarna.nu> Date: Thu, 15 Jul 2004 21:32:17 +0200 From: Jan Banan User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040116 X-Accept-Language: sv, en-us, en MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: Recover a XFS on raid -1 (linear) when one disk is broken Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3642 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: b@grabbarna.nu Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1543 Lines: 35 Hi, I have a raid -1 (linear) on my RedHat Linux 9 system with XFS 1.2.0. The raid consists of 4 disks where the last disk now seems to be broken. 
When I try to mount my XFS I get in /var/log/messages : --------------------------------------------------------------------------- Jul 15 21:18:51 d kernel: XFS mounting filesystem loop(7,0) Jul 15 21:18:51 d kernel: Starting XFS recovery on filesystem: loop(7,0) (dev: 7/0) Jul 15 21:18:58 d kernel: hdh: dma_intr: status=0x51 { DriveReady SeekComplete Error } Jul 15 21:18:58 d kernel: hdh: dma_intr: error=0x40 { UncorrectableError }, LBAsect=243818407, high=14, low=8937383, sector=243818336 Jul 15 21:18:58 d kernel: end_request: I/O error, dev 22:41 (hdh), sector 243818336 Jul 15 21:19:00 d kernel: hdh: dma_intr: status=0x51 { DriveReady SeekComplete Error } Jul 15 21:19:00 d kernel: hdh: dma_intr: error=0x40 { UncorrectableError }, LBAsect=243818407, high=14, low=8937383, sector=243818344 Jul 15 21:19:00 d kernel: end_request: I/O error, dev 22:41 (hdh), sector 243818344 Jul 15 21:19:00 d kernel: I/O error in filesystem ("loop(7,0)") meta-data dev 0x700 block 0x2f5d2060^I ("xlog_recover_do..(read#2)") error 5 buf count 8192 Jul 15 21:19:00 d kernel: XFS: log mount/recovery failed Jul 15 21:19:00 d kernel: XFS: log mount failed --------------------------------------------------------------------------- Is it possible to in some way mount this raid system so that I can recover the files stored on the first 3 disks of this raid -1 (linear)? 
Best regards, Jan From owner-linux-xfs Thu Jul 15 13:59:26 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 15 Jul 2004 13:59:35 -0700 (PDT) Received: from pimout3-ext.prodigy.net (pimout3-ext.prodigy.net [207.115.63.102]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6FKxOln010451 for ; Thu, 15 Jul 2004 13:59:25 -0700 Received: from taniwha.stupidest.org (adsl-63-202-173-53.dsl.snfc21.pacbell.net [63.202.173.53]) by pimout3-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6FKxBlM090874; Thu, 15 Jul 2004 16:59:19 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 782FB115C855; Thu, 15 Jul 2004 13:59:10 -0700 (PDT) Date: Thu, 15 Jul 2004 13:59:10 -0700 From: Chris Wedgwood To: Jan Banan Cc: linux-xfs@oss.sgi.com Subject: Re: Recover a XFS on raid -1 (linear) when one disk is broken Message-ID: <20040715205910.GA9948@taniwha.stupidest.org> References: <40F6DBC1.6050909@grabbarna.nu> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <40F6DBC1.6050909@grabbarna.nu> X-archive-position: 3643 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 673 Lines: 23 On Thu, Jul 15, 2004 at 09:32:17PM +0200, Jan Banan wrote: > I have a raid -1 (linear) on my RedHat Linux 9 system with XFS > 1.2.0. The raid consists of 4 disks where the last disk now seem to > be broken. ouch backups? > Is it possible to in some way mount this raid system so that I can > recover the files stored on the first 3 disks of this raid -1 > (linear)? 
you could replace the last disk with a sparse file and run xfs_repair, my gut feeling is that it won't work very well though since files will be spread over disks (maybe not badly, depends on access patterns) and also metadata on broken disks will refer to non-broken blocks and vice-versa --cw From owner-linux-xfs Thu Jul 15 19:02:05 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 15 Jul 2004 19:02:10 -0700 (PDT) Received: from illusionary.com (illusionary.com [66.152.21.136] (may be forged)) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6G2203d000882 for ; Thu, 15 Jul 2004 19:02:05 -0700 Received: from zorak.illusionary.lan (wbar11.tampa1-4-4-140-185.tampa1.dsl-verizon.net [4.4.140.185]) by illusionary.com (8.12.11/8.12.11/Debian-3) with ESMTP id i6G21p00032157 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=OK) for ; Thu, 15 Jul 2004 22:01:52 -0400 Received: from [172.17.118.32] (vandemar-wan [172.17.118.32]) by zorak.illusionary.lan (8.12.3/8.12.3/Debian-6.6) with ESMTP id i6G21obh013878 for ; Thu, 15 Jul 2004 22:01:51 -0400 Mime-Version: 1.0 (Apple Message framework v618) Content-Transfer-Encoding: 7bit Message-Id: <17773B6C-D6CC-11D8-8956-000A95DBAEDE@illusionary.com> Content-Type: text/plain; charset=US-ASCII; format=flowed To: linux-xfs@oss.sgi.com From: Derek Glidden Subject: probably useless XFS crash report Date: Thu, 15 Jul 2004 22:01:46 -0400 X-Mailer: Apple Mail (2.618) X-archive-position: 3644 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: dglidden@illusionary.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 3729 Lines: 91 Gentoo (yeah, yeah. I built the kernel myself anyway) 2.4.26 kernel on an older Dual-Celeron with IDE. Trying to reboot after crashing the box, XFS barfs when it tries to recover the root filesystem. 2.6.7 successfully rebooted and mounted the filesystem. 
I haven't run xfs_check or anything on the disk because it's a single-disk crash-it box and I haven't had a chance to plug anything else into it so I can boot without booting off the root filesystem. It appears to be working OK, though, after automatic recovery. I realize this isn't a complete decode, but since it was on the root filesystem it's all I could get from the machine, so if this is useless, please just ignore it. I haven't had any sort of XFS crash for literally years though, so I wanted to report what I could. ksymoops 2.4.9 on i686 2.4.26-gentoo-r3. Options used -V (default) -k /proc/ksyms (default) -l /proc/modules (default) -o /lib/modules/2.4.26-gentoo-r3/ (default) -m /boot/System.map-2.4.26-r3 (specified) cpu: 0, clocks: 1002264, slice: 334088 cpu: 1, clocks: 1002264, slice: 334088 SGI XFS with no debug enabled c1595b2c c0269328 c03adb9f 00000001 dfe55800 c03adb7c 00000156 c024f1b9 00000000 dfc32000 dfc34e8c c024f1b9 dfc34e8c dfc32000 00000000 dfe56140 00000000 c1595b9c 00000002 dfe56200 c1595b8c c1595b98 00000006 dfe55800 Call Trace: [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] Warning (Oops_read): Code line not seen, dumping what data is available Trace; c0269328 Trace; c024f1b9 Trace; c024f1b9 Trace; c024c584 Trace; c024d894 Trace; c02badeb Trace; c02a20c8 Trace; c0298926 Trace; c0298a12 Trace; c0285d67 Trace; c029a170 Trace; c028647e Trace; c029077c Trace; c029ba4b Trace; c02b8708 Trace; c02b53ad Trace; c028c4df Trace; c02a3c36 Trace; c02b6323 Trace; c02b5eae Trace; c020a7c7 Trace; c02084fa Trace; c0209166 Trace; c021f1ad Trace; c020959d Trace; c02205f3 Trace; c0220934 Trace; c0220735 Trace; c0220d56 Trace; c01b9655 Trace; c01b930c Trace; c01bb78e Trace; c01b92e0 3c59x: Donald Becker and others. www.scyld.com/network/vortex.html 1 warning issued. Results may not be reliable. 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- "We all enter this world in the | Support Electronic Freedom same way: naked; screaming; soaked | http://www.eff.org/ in blood. But if you live your | http://www.anti-dmca.org/ life right, that kind of thing |--------------------------- doesn't have to stop there." -- Dana Gould From owner-linux-xfs Thu Jul 15 19:41:38 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 15 Jul 2004 19:41:40 -0700 (PDT) Received: from pimout3-ext.prodigy.net (pimout3-ext.prodigy.net [207.115.63.102]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6G2fbew002442 for ; Thu, 15 Jul 2004 19:41:38 -0700 Received: from taniwha.stupidest.org (adsl-63-202-173-53.dsl.snfc21.pacbell.net [63.202.173.53]) by pimout3-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6G2fWlM097734; Thu, 15 Jul 2004 22:41:34 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 4C634115C85E; Thu, 15 Jul 2004 19:41:29 -0700 (PDT) Date: Thu, 15 Jul 2004 19:41:29 -0700 From: Chris Wedgwood To: Derek Glidden Cc: linux-xfs@oss.sgi.com Subject: Re: probably useless XFS crash report Message-ID: <20040716024129.GA12793@taniwha.stupidest.org> References: <17773B6C-D6CC-11D8-8956-000A95DBAEDE@illusionary.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <17773B6C-D6CC-11D8-8956-000A95DBAEDE@illusionary.com> X-archive-position: 3645 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1334 Lines: 42 On Thu, Jul 15, 2004 at 10:01:46PM -0400, Derek Glidden wrote: > Gentoo (yeah, yeah. I built the kernel myself anyway) 2.4.26 kernel > on an older Dual-Celeron with IDE. Heh. There's quite a few 'variables' there... 
> I realize this isn't a complete decode, but since it was on the root > filesystem it's all I could get from the machine, so if this is > useless, please just ignore it. What were you doing when it crashed? I take it there is nothing in the kernel logs before this? > Trace; c0269328 Any idea if you got something like "SBTREE ERROR" before that? [...] > Trace; c0298926 > Trace; c0298a12 > Trace; c0285d67 > Trace; c029a170 > Trace; c028647e > Trace; c029077c > Trace; c029ba4b > Trace; c02b8708 > Trace; c02b53ad > Trace; c028c4df > Trace; c02a3c36 > Trace; c02b6323 This occurred during mount of the rootfs? You said there is only one fs so I wonder how you got the oops if that is the case? I probably would boot off CD/whatever and run xfs_repair if I were you. --cw From owner-linux-xfs Thu Jul 15 22:05:07 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 15 Jul 2004 22:05:17 -0700 (PDT) Received: from illusionary.com (illusionary.com [66.152.21.136] (may be forged)) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6G554Bo008776 for ; Thu, 15 Jul 2004 22:05:06 -0700 Received: from zorak.illusionary.lan (wbar11.tampa1-4-4-140-185.tampa1.dsl-verizon.net [4.4.140.185]) by illusionary.com (8.12.11/8.12.11/Debian-3) with ESMTP id i6G54tYA001620 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=OK) for ; Fri, 16 Jul 2004 01:04:57 -0400 Received: from [172.17.118.32] (vandemar-wan [172.17.118.32]) by zorak.illusionary.lan (8.12.3/8.12.3/Debian-6.6) with ESMTP id i6G54sbh014264 for ; Fri, 16 Jul 2004 01:04:55 -0400 Mime-Version: 1.0 (Apple Message framework v618) In-Reply-To: <20040716024129.GA12793@taniwha.stupidest.org> References: <17773B6C-D6CC-11D8-8956-000A95DBAEDE@illusionary.com> <20040716024129.GA12793@taniwha.stupidest.org> Content-Type: text/plain; charset=US-ASCII; format=flowed Message-Id: Content-Transfer-Encoding: 7bit From: Derek Glidden Subject: Re: probably useless XFS crash report Date: Fri, 16 Jul 2004 01:04:54 -0400 To: 
linux-xfs@oss.sgi.com X-Mailer: Apple Mail (2.618) X-archive-position: 3646 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: dglidden@illusionary.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 2262 Lines: 55 On Jul 15, 2004, at 10:41 PM, Chris Wedgwood wrote: > On Thu, Jul 15, 2004 at 10:01:46PM -0400, Derek Glidden wrote: > >> Gentoo (yeah, yeah. I built the kernel myself anyway) 2.4.26 kernel >> on an older Dual-Celeron with IDE. > > Heh. There's quite a few 'variables' there... True. I wanted to see if it was at all useful before flooding the list with details. If you do want to know more, please tell me. > What were you doing when it crashed? I take it there is nothing in > the kernel logs before this? I was crashing the machine. :) I'm playing with "Xen" virtualization and a VM panicked and caused a reboot. I booted back to vanilla 2.4.26 kernel to check the VM filesystems. When I rebooted, it tried to mount the root partition and gave me that message/dump. It didn't actually crash or panic the whole system at that point, it just couldn't perform the automatic repair and refused to remount the root fs r/w. After getting the crash info from dmesg, I rebooted again, this time to 2.6.7, which did its "repair" and mount just fine. > Any idea if you got something like "SBTREE ERROR" before that? I only barely caught the dump as it scrolled by during boot, so not really. "dmesg" didn't show anything other than the dump data and the usual kernel boot messages. > This occured during mount of the rootfs? You said there is only one > fs so I wonder how you got the oops if that is the case? It still mounted the root volume, just r/o. Then I did "dmesg > /boot/xfs_dump". /boot is on a different partition, but it's the only other partition. I just don't have /usr, /var, etc on separate partitions in this case. 
At least I didn't have to write it down by hand as at times in the past... > I probably would boot of CD/whatever and run xfs_repair if I were > you. Yeah. This isn't an especially important machine, so I'm in no great hurry. -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- "We all enter this world in the | Support Electronic Freedom same way: naked; screaming; soaked | http://www.eff.org/ in blood. But if you live your | http://www.anti-dmca.org/ life right, that kind of thing |--------------------------- doesn't have to stop there." -- Dana Gould From owner-linux-xfs Fri Jul 16 17:15:31 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 16 Jul 2004 17:15:32 -0700 (PDT) Received: from snark.thyrsus.com (dsl092-053-140.phl1.dsl.speakeasy.net [66.92.53.140]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6H0FUKB001733 for ; Fri, 16 Jul 2004 17:15:30 -0700 Received: from snark.thyrsus.com (localhost [127.0.0.1]) by snark.thyrsus.com (8.12.11/8.12.11) with ESMTP id i6H0Es7k020498 for ; Fri, 16 Jul 2004 20:14:54 -0400 Date: Fri, 16 Jul 2004 20:14:54 -0400 From: esr@thyrsus.com Message-Id: <200407170014.i6H0Es7k020498@snark.thyrsus.com> To: linux-xfs@oss.sgi.com Subject: problems in one or more man pages you maintain X-archive-position: 3647 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: esr@thyrsus.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 2652 Lines: 92 This is automatically generated email about problems in a man page for which you appear to be responsible. If you are not the right person or list, tell me and I will attempt to correct my database. See http://catb.org/~esr/doclifter/problems.html for details on how and why these patches were generated. Feel free to email me with any questions. Note: This patch does not change the mod date of the manual page. You may wish to do that by hand. Problems with xfs_bmap.8: 1. 
Unknown or invalid macro. That is, one that does not fit in the macro set that the man page seems to be using. This is a serious error; it often means part of your text is being lost or rendered incorrectly. --- xfs_bmap.8-orig 2004-07-09 06:59:14.210997800 -0400 +++ xfs_bmap.8 2004-07-09 06:59:59.299143360 -0400 @@ -12,10 +12,12 @@ in the file that do not have any corresponding blocks (\f2hole\f1s). Each line of the listings takes the following form: -.Ex -\f2extent\f1\f7: [\f1\f2startoffset\f1\f7..\f1\f2endoffset\f1\f7]: \c -\f1\f2startblock\f1\f7..\f1\f2endblock\f1 -.Ee +.nf +.ft CW + \f2extent\f1\f7: [\f1\f2startoffset\f1\f7..\f1\f2endoffset\f1\f7]: \c + \f1\f2startblock\f1\f7..\f1\f2endblock\f1 +.ft +.fi Holes are marked by replacing the \f2startblock..endblock\f1 with \f2hole\fP. All the file offsets and disk blocks are in units of 512-byte blocks, @@ -29,9 +31,11 @@ .PP If the \f3-l\f1 option is used, then -.Ex -\f1\f2\f1\f7 \f1\f2blocks\f1\f7 -.Ee +.nf +.ft CW + \f1\f2\f1\f7 \f1\f2blocks\f1\f7 +.ft +.fi will be appended to each line. \f1\f2Nblocks\f1\f7 is the length of the extent described on the line in units of 512-byte blocks. Problems with xfs_check.8: 1. Unknown or invalid macro. That is, one that does not fit in the macro set that the man page seems to be using. This is a serious error; it often means part of your text is being lost or rendered incorrectly. --- xfs_check.8-orig 2004-07-09 07:01:51.627066920 -0400 +++ xfs_check.8 2004-07-09 07:03:06.440693520 -0400 @@ -90,17 +90,21 @@ rather than produce useful output. If the filesystem is completely corrupt, a core dump might be produced instead of the message -.Ex -\f2xxx\f1\f7 is not a valid filesystem\f1 -.Ee +.nf +.ft CW + \f2xxx\f1\f7 is not a valid filesystem\f1 +.ft +.fi .PP If the filesystem is very large (has many files) then .I xfs_check might run out of memory. In this case the message -.Ex -out of memory -.Ee +.nf +.ft CW + out of memory +.ft +.fi is printed. 
.PP The following is a description of the most likely problems and the associated -- Eric S. Raymond From owner-linux-xfs Fri Jul 16 17:49:42 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 16 Jul 2004 17:49:46 -0700 (PDT) Received: from omx1.americas.sgi.com ([192.48.179.6]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6H0nf4k002915 for ; Fri, 16 Jul 2004 17:49:42 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by omx1.americas.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with SMTP id i6H0nX0f014051 for ; Fri, 16 Jul 2004 19:49:34 -0500 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA18323; Sat, 17 Jul 2004 10:49:29 +1000 Received: from wobbly.melbourne.sgi.com (localhost [127.0.0.1]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i6H0nSln2360190; Sat, 17 Jul 2004 10:49:28 +1000 (EST) Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id i6H0nRCN2363047; Sat, 17 Jul 2004 10:49:27 +1000 (EST) Date: Sat, 17 Jul 2004 10:49:27 +1000 From: Nathan Scott To: esr@thyrsus.com Cc: linux-xfs@oss.sgi.com Subject: Re: problems in one or more man pages you maintain Message-ID: <20040717104926.F2261660@wobbly.melbourne.sgi.com> References: <200407170014.i6H0Es7k020498@snark.thyrsus.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5i In-Reply-To: <200407170014.i6H0Es7k020498@snark.thyrsus.com>; from esr@thyrsus.com on Fri, Jul 16, 2004 at 08:14:54PM -0400 X-archive-position: 3648 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 343 Lines: 12 On Fri, Jul 16, 2004 at 08:14:54PM -0400, esr@thyrsus.com wrote: > This is automatically 
generated email about problems in a man page for which > you appear to be responsible. If you are not the right person or list, tell > me and I will attempt to correct my database. Thanks Eric, these are fixed in xfsprogs-2.6.19. cheers. -- Nathan From owner-linux-xfs Fri Jul 16 17:54:07 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 16 Jul 2004 17:54:11 -0700 (PDT) Received: from omx1.americas.sgi.com ([192.48.179.6]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6H0s7mX003260 for ; Fri, 16 Jul 2004 17:54:07 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by omx1.americas.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with ESMTP id i6H0rx0f016220 for ; Fri, 16 Jul 2004 19:54:00 -0500 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i6H0roap13749990; Sat, 17 Jul 2004 10:53:50 +1000 (EST) Received: (from nathans@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id i6H0rnee13758409; Sat, 17 Jul 2004 10:53:49 +1000 (EST) Date: Sat, 17 Jul 2004 10:53:49 +1000 (EST) From: Nathan Scott Message-Id: <200407170053.i6H0rnee13758409@snort.melbourne.sgi.com> To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: PARTIAL TAKE 917692 - xfs_copy fix X-archive-position: 3649 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@snort.melbourne.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 358 Lines: 13 Fix an xfs_copy integer wrap when dealing with large filesystems. From Mark Portney. 
Date: Fri Jul 16 17:52:48 PDT 2004 Workarea: snort.melbourne.sgi.com:/home/nathans/xfs-cmds Inspected by: mnp@sgi.com The following file(s) were checked into: bonnie.engr.sgi.com:/isms/slinx/xfs-cmds Modid: xfs-cmds:slinx:175293a xfsprogs/copy/xfs_copy.c - 1.9 From owner-linux-xfs Sat Jul 17 06:29:06 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 17 Jul 2004 06:29:09 -0700 (PDT) Received: from monitoring ([202.54.124.165]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6HDSwJh029800 for ; Sat, 17 Jul 2004 06:28:59 -0700 Date: Sat, 17 Jul 2004 18:58:04 +0530 To: linux-xfs@oss.sgi.com Subject: :) From: pmueller@sidestep.com Message-ID: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="--------rigjhktajvyuwuacxqip" X-archive-position: 3650 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: pmueller@sidestep.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 113220 Lines: 1562 ----------rigjhktajvyuwuacxqip Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Argh, i don't like the plaintext :) password -- 74608 ----------rigjhktajvyuwuacxqip Content-Type: application/octet-stream; name="Attach.zip" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="Attach.zip" [base64-encoded "Attach.zip" attachment data removed]
iP6mD2pYmfjNN+ns5BH1jY6Ol3Q5kIdDwyq0OAzS3SdEMB+/NNbmYbGkFJ9I9I3btQ+dWF6p wYjMo/w5ae1Gm+Z/XtENh9oYJE87oWSqc80gVIVQ8U+PavbZaeSYfQZR+n4FcGKc9by3jfX8 1+MnzcKusMe4dNnMtL4+G37g0c4kf3GvlPAk3l0AooBCeZizI1qXIVb52GWfZSlPlSiQeHeq Ma0hmbpvnkZxF1URylnbS5zJrKTLpZyQS4YNnvZVNPqOloKAAYv4ZNMA6xC5SKkpVhPJsQIx LE45tbOYbEK3QDCsEBG7KqwEJjrrvZ2iSxtqmbqXlhm0/lYaYAs9nGS+eL8sma7oNjxKyug2 JO01i856TPD8oiYUw0bulTYsggmIyFCiY6Lsdj2DMKM5+/HEGfczGv+pnfuAm9rsGD2TwliQ SGWJ61grDjqMgYediBh43QxoQ81Oew8HXazt4caB6GOGJgZCChZM645HA2yE4sL5d7ph/hok JaiWoFyS1dZm1MKKNBP06156vNkW7d0DPXOmVxFfPmS3AxwkyNP7c01AbjV3xQw5t06bap5B 3PLL9lobFzFbbJj5xPFcPA0Mfy4zXSdbWmDyY1HrU8IjVnIIKJqkh40WGsY5jBuSAHNTcH1W LNAOMAW/Z6Ozvg5XVIsxh2w4QGiOUZi/b4ZkApdjHfNUlYqcYIP1GKO78rFyntMlTROxCqn9 GEBGhycPgZ2X9bgTpvv693WqeUaTvlseFXLa6PEo8XeefetuV89gv+UqSRsdBmLyOyS/pbnQ 2cZAdkB5QfHkaDO0FNiLko/oGErHi3h6rO7BlqMFh1G6XoTmIjqw/gUvuac+CvIl+QdMSmAO /N47QivsS2xLwFMpgfMcfsAuBYa3MBaUehBp7wyxVJcHuqU412z+ccqndQSGHokgvCrbYhRJ 1z76PQhwftqzeZ9wtoD/Nx+8pnu/oseBUwd/G/rXKlBRGmclq4IIdBl7q2jFr8irt5jbwslA 0wQ/4UwO9KlTZuyzVvoGIgB3nGlyWc1/p7FiXD8LVVZz6AyrEFhTEV1alZBr+MR/a1K0ZCw8 LOjvy0T0Ae0rj8Ldo6yH4PNfkzIIkTv+IW3uNzCDQC897VR8Uh+DjhP996mwP+lt+itYfFlZ mix8EOYEfWuW+uTL986FQ9Bxdkp5RegEbxjM0wjjbYcUZxqC9SLiWibPGB5P6zw3DdVgmHUG hiNZqOdNv2P3lQpyCrR3r0oizYYWeWAMssMm4RaCZuQgtl/Mt5V/bbvNk4Pam2pCr6zEt7zz O0ufSSD5PdI0tXVB2sCRAR8xUC0kaqQacqQekQrXmWrjHbb8zJ6nYpEnThmY6j2IBP2TcVol uvw5HTRvixeamraSmkrxZ+OiYez1XUVn+2JRUXle6Dw4Dhk/fONvYqfMZZ37BvrrRzgHR5Ok hPb5JUP7+4waFJZQWw4ZZFGGoHZi2Y7H5hDApQkxd755DYCLk8RrR11SDax2mjldfei5BOpg oNR+oDNnq+S31gz0ik8jHNxC38/L0mfn8NVEHQLwaAafY92J8Lk5ZcmOq3byr0Tb0AdrVz+0 pCvu7kiZKH162BC6Z7ZMIY6s/CShIw06SIhQDwrgjxZ/r+ZRZlhRgV9HRfGhuNcpGdsunv/A H05a9fUHUr9HNOYAsV1jMOXjfF6kh8mjcjkG9MAuU1da5bZgho/lbkZHZiycJ38g0oZ8vwoz VLCrJjSGk4bXCXZfc4irLVesjl/vs0RU7GY/L7K2iuoAVNKEnK5+RCFjU/OC3IDlWzzLtRSI RCZklBfNyjuZOwXcX/DzlzQ/I2F+wZ/LZnNOL+rwpBugGfL/4I3DQhCkBqAYEYDu6GL9BtiM I7u5QI0QKy/Ds0C2K6KwkB75uFEMaBWYE/Cia72BYn90excJSXH9J6BGN6B4gGmYhdImbsdk 
VrndPm7oF0OaPT/Y/fXitn8mnSMXr7xq4NQxl6bgd0UydqTbRujRi43IONpQBZywMnZuxryb 5O+5vBajiWvRVQSBfh3s8tCQ9aM/ZFryeHaOCrz9iDrbgN/CVN6/Mv9jWMDWcPnOneZbXVz1 EaO6WgIRqUXWi8xaiIFdOAlPGEsXL3qHs8LOQ/GzXU2VBsOng67l4WE6BKxCyH/UlCHjsZcb gw8BIGfiBobg8fb18AT9zOT21RMrt1mr/QII/9Lhv6kquOqMBMdAKlZ0RLpna96b9U7hCwCz ReWEf2lB7gOaJ1/s3na2I+C8ClqRGC330+ePXsZbOCHMJPGfMT0uuqbHTKW/YjaMbadDl+RG to8mkI7qvidtWMckMzWeJxLcUpWtWBWcAvkbNH23W9JfpUVizNugPxAfieseq15cUT/+x7kE PIaO/b/CjW0OgYba19pO85cBZwtFzccDmAaalJjjCaMYs7nz1nNmvakvnVHezWcO5MsnuwG6 S6UN9qrfNNOlrZSbVRJsRA4ANcM/FFH9NgSIE2m6zsGDSpE0dJG3P/Gn+e2cZwj55X+cjNPZ HMghms/OEDiE6RxiDZiQ02RiPT3a24Ocdh2Eff0S89dsK0PpBbTq18XeQJuzus9VPpcSZtUP rRCBcOI/zfFXwdJErqImEfHnnUWqZDrF2h/O+HTJzSoZcPugdQmjqrrzQ5bTeLe2S+gC+Wkb KQgVRePCkhxxG0cRAmR+n/iqD3Sd3pV4C2rxgp3N1bJv970P87ug4zv4dwb1DYYft2MT9ERv mlOmjM+XaEDIEh0WvgEBJgMTDrs9/oVJJ+mdr5bwaGqmiUtZcxSRRQaYuOJTEDp0w5k8CBaX FrNtCzY87/UDDC7MQaEODFw3Aert2CjrFB1xEqDOMKoI2eCUI9L65MzjIJ+urueQ8MqnaTA+ RIwXDXebj8TRgVmG6urJZu+K+5I/yWguCovZbGT3uVRs5yiVy2hDDlNiIWks38hGzPVflAAM xz6vyb+yB2CaqhAbp9gnOh9uYGP8HTkpev3IPycRYrvmY39BIzIw5FqRXOQdZcfYjDWtYbLP mdK3poiXxhA7UdaghiVwKY64P9f7XKv6EUEG36m5VUprIs42h5xZdjMusX1x7worXucVbnjv bIdZ86J44aFsTRQN7sGMqd8FvYR8rcKG7KiPr7mOzzs20OKi8Vz2KP1dJuoUD41fsh0G77Lg IgXDTDxctZNPthRwRedB5qNrpn81Z88OAsbZCS9aaXkb1fQ/KSqGaY9D0UuXD+HYpNw7Fc7X TqvbLlyplAOtuHo5bBxsznOhiSl3RECXTCmSZYDagJlvS8+I87CcoLJsYqOG2bgQpuqoZZVc aJBqcNky4MImSeUzcSCvLbvh1eowULdvkjobBpl5cltirXEj88+O8dJwr379iz5Xaz2b7akn Br3KIC3veM/OUahlOUIUSJcJHvZXdicJGmi4sNC2EE2GTzA1hzX3uaf/AsX3WEjlB1/AmI8V 2p8uNLiGU+u7y3csfFjxleplhI+TvutwnDySzdevIg02zwBZkYG+YkpbZajfiqeYIEGfUMiy fV+yoN3/kLR4ZJN70rNB3k4U+/ER1l5ck5joqNzDJgIrqQGgZbLBWD9dmfv8ruAUl9y+HTOL J0KngPiPNodb7ZI663FJ+04kdqtR+Lk/60FA5kD3ANZYBfSre9xuijSujK+6xVtgRPaaJeoT qJ+37mKRZOCkCAzqokwmtuFIoqWCMA7zaJHy9wjci/pDIdm/80QPVXR4QU0HhlaReJ2TI3/+ Ug+zg/8Sm3Ej9+5D3OFReMphQaUsY/oCKQLrihy9yuAqc2PhRe8EqXcY8k82tgTbo4MW2XrL qNPzcdokuFUiMOq0zrkd85RQ7WWBSHDYT7u1NJEenM7o8cy8ti59nCcO0QmiTFb74UiL2T33 
MBNne56Sl7zFmfUuao/WesOsTf2g7nhMWa0wAu8V4UHvT3jzeycW+wtaDGdv+uFA0bkGHSkf UkkH9jK6sV2L5E1ds5dbm6Ut6tjjpbuxzzh6SQTAHEtHT1aseeAPkFZokRcXVeMm1rgIN7hW V/i+1ThQXjqWIV5ETNurC/dkjKtH9EGMrWeH77G6HEaSj4cfoJoH3NBv+zZjbQKRgNZofRtj 9QY5ozSrbDwRbxp3No/eHmdsJoGvfsDffll1QaQ43cw8A06rpCyRPRyC7SqndQHq598BSjU0 KOC86Ir/k9+sKEGQVtUCMoh7Q/JR8jNhcfZJPMssQ2FUuOCI9T1sB/oUCbxJb7c5oNc2HyS0 /3lZCfyYS38aTw3WiARwJIKZPRJTZvJUzN8XtoN3pU8ogDho+XOMxojjJ0z876Ggv1Sq5sWH NyDFAC8/zfouCYSSTSLqplpK+6gqwKHYD32xWc513rZDz2LiUJ7uNnHHXshpcKIoKy8tuW1N DHdBoQcrIIJlJSyAcMholf15NWPMdrx/407CkMT3lX/9hB4/bhaDWKc49zgtcQBrhjQin1UA sid/e88XxZYma9JhqCkUuH7/BnE0TSfQnas9pbfER+yrkbb+x2KLP2Bk1E4P1QYJpCR5W/lR Qz7z6NyMAL4fhvMNpUouAQUM3sil3DEIewI8I4Jr1FG9GfZmyKWQVGIJE813Kh4EgAwPHkIu 4uUEGwjhCt0dH85XLnQzIOLIFLwvGV2pJGm3LtbfUFbDemCxHwyi3Wo+JkaOlnYtAzt0RSbb PBBW1hZ8QmF2uuyFYgUdQSB4yQkcjcYcEkRUKRvsQjrPe1vvDxNPcyqmBmdlRZnxnLH6z6c4 bFt9XOF8yRUVIAXBI4lYQWC0PI1gHH8Qm6BmoXArgnpqIbg/Cu9WOqV5hDdGUi61ibTHIw9q T3z32PjcSgFdTlGdRQ1pWvMzXOJF7QB3IsECEdnJ8u1lkBwn2FivN8WCdvyEc3xa34fDWGSQ 5R1BkZ9fQ/+krrJ1ETRDoLwB+bEhp+T47pi0pM+QcMIZnt7SsAGOTUgyv9eZSAqzGRapMvy+ hBBgWJKvl1KvgLlrN7fnAVYHMJqWmtxV2VLodWgXnuGBR5mYQQFKKyoxZe1Bf2p37D+OVenV +JRtinlQSxhzQ7bcyI2XQ+Y8JjMfhUV0E0am7ZeNNXi2tu3XyThfDKMhiG0Xy9GnRpmVGmuq ogU8nMizSymMclDwBdVlusBqfva+OhtFQojLCr1E6yj25bXGh2qbGRZ8ebryN16OvvWjnbzi KP1Bl40I6qP828Xo3nhBU+iu+11bWOXK91juMJtjxy5GJF3kOFFnGYTHqYvXq+cT0dYKZKIs Ijo/Lh0pLhwRWQViMsrkoSlFk7dtNxFrtRCbvO4bcoDpja4QfxGGx+Kuyo+D56Y7ogv3nO0L dPh2bTuadyBExha7QN225hU+2+nXcuQBFVKqRQnu8xfwPr5wCKDyCcWhmhxeqQQVwz/OTju+ qLM460pqieQp+HqIJSpT3c0boXNfTtE6+LCxYi3fxnZD/4BQBlM9nZ3BpeBzGLDJQ+7/pEBU YaUyrDELP99dk6rob9qaTjKJkl9CtaHoAkej9UAYNokRCN91tiKmKo9U7VmOH0OxfUHF6bpA vlm6uyst612s371hadDLQ/KerinRAMym4/32uzcghL7jzCLUL/HPUSigP1uuWRPEn1QSnqqu HZwFJ9Y9DGkX7iVDanuWjCh7Fv+gsIzR5E6r6kOTCy6TzxRQuyb9LTpe45/m3t1Rdl5TEPcy 9hWQec+86uS0pV78FFSRXO6Pv1VvUS6MYQdtKHv04LwvljQmkhce/t+h5HB71zV0cI8dMwjZ MjCi2On6CgGOqaoPTOZFI3NdMXGxG6n11CXB4L1ZzmUW7sEki8a2S39wlpiOG1m02Qt2QdTD 
OZMiuiMqFhPB6pjy+5cISGji8TYJzhKtWpt2BXKNYlXJ0oCduuCL1iFyd9PikjV06Xf7zO9N 8L4a/KiFRqDXugbBf5bmH3v13jJLjHHFtPrMvT6wJEKgk0l2843w9glBNCyDeT6ZvlKj3x09 rNVf2tVXzPeYlN1jpmkrm9VqZY3E+jRsClFbqU1iMRr3D0TCV2CodaaA9xsKQ9lOTvjr1Fpz y28bf+qugoQTtSvCUsBvr9IP+p33I0lms1JTD/oNuNR5dduxDplvQ9RVW1y+0djIbkBwntMb DpEo7DtvUnnFBhwgcGI6QEQGOm4BzVpwRoG7q8wXpD+MUIjD2LTJ36Aak0nkmQDom7Gs7tfJ /FfpaxgdxlUPHc8srIkVIiwtQWL1vtoKaZ40QDLaQ8lQx+2G7tRrdgOmwo6H6jOX80QDfU2K rbvch6+L1GkkDBgr4JO8ssL5Q9RlZyeNZQe+nxWPxjexczIjXEExDAYE2yXLzGAieDMPKw4v 1UHaeBWMq+3TPgDV+qGDlpSvY+FUzmABo6r52dRimCsEKcdAM/Slpu63i/6v/Tgx8A8W2q9R XFWrJ+DAICmyWlgbmoMScV1X8ukQwvzuCHFa+GzYJcOGf0b8aQWUqN2DjDbcViay1rUtFwpj mJY7PVEqSxohvf9vr7NY2z6lExEPsVOlrg+u57hrRNsxHfgU+LYfk+NYLDm4PiV0QJy5wOw7 6D1rEWn3zQFfGVNdzrno67x8zp7A+GLsyDxa/bs3PFQIIiZ0aaoY1ra5Db7jtwNSP/hP/c8X /t/0XdMu7OjJ30cy8mgYrSMBSKB+B3FlLgTvwHd7oCCukmplbY0NHvirVn5Za5iXBfRRbres Jg2TRO6D6TTzOFlli7VF9mIk5yEVABB2h6NSb3w8pRuB1ICvsAH6GT3sDXoOqKJ7fRJSB/bR nZXHPq72+7x02uxDGHdYxS/JoUtl2jWMhH+oiRTk6yXKfZN83msG6SZUeDeGGnR2PSsrHLpW o+VEIomXZ38U7o8PRY3RT1s5r/s9lCs3PFi+jtjfp8kLyonmXk3f1Z0HYvD/8c0a8+Je4+3k WwEaqVRD3XNoJAYbig+zK2Yuy7IrTy5KCFC2Nd6lfK1Yz1zwdSUaoc5L8pVR0cAPDlmORip/ dXDuApF4MYxj59Sxxg6ardYNFhCQC9KtgAF/sZB8ZSD7+QoLdzJnVfYuFl+rF0QDOXrhFEBC yR9QEao6geK+OcEZdWxo4JEfRhkN/D279qvXaWsOJZcfNmkCJcW2QF228EcgN3M0FFoIR3CU gPJRbQ2Ydicokwv9ml8KLmBtK3ujov1gxjgT0yQojGQ7aC3wVavp1szo6yWN4sHKYZ4ZmQwQ trmAD9ho8cjNDI8ON7xXoTAEgJtdSfrAY6XLaBmbAgRerQCZc9ScqnIJVhcdWuUtkToGF3wo /f3NOIIJGx32Fx14oupTo8RxH4xDV/euZcfBE9J0rn5MVwyJAB8vkkslL2sIihItwa8+WFvN bvbaGlaZ0EmN2oMMPkiS3vvIwL7rRoucXVR2HR4lEgcrSwNySRmkUlmskmKtmhRhmGHOaY8e +VFYlaHXQKgEON662o1wAR2dgQ5emvkvu6fyb/OYXBwA/VoM2KWnEM7zoh8FalR6buWcQfLX o1HAfX4HXiagyRV/GO6+ddlwdGuDBMHUjhRSK+WBBu4JRdxMV82XyKLxAL7T0F7xeYZdgKmB LKb7zd3qL0swekJl5pMtJcWHF1jjTp8v2ZBSfBgd5Y6hY8CPuQlNLUtf6RuPVG6r0L7TZLfn fclhH0pQFFyPUM6iCiv3PnxoVATyBG036IRfRkxeD6k1oGbj5jPKk8sj9W+N58B/AWCpcPBS kSYJGlQnfcaOCqQsRJrE68gQdqIXGvMdwcbmIISk5rrvlAbsl1IMSJT1DEo76XBQZRrmg1gc 
RKvrED4K+X7A1e9zbnp2JsHwsI5Q7zruIp8HRTviVMCx2k7QWDv+ga67OLKA3zs5zQDvPM30 wK5c03Kp7bpybavbuB8ucM5zXmomuva1OhQ9x+Z4I+J3bi/b25BF2DtTJWwBSKlk1Dt88twA R4g+YBwZGEV7rmRDT8KmDFqmzVQ52zxRJrDUsGwpMxMzr9dni4GUSO9TA3ixMNDfcUv7Ym0b GdpSq1uQf6fisJs8AizzQzh3v4X82BpMpM/2ctJhggAzaLw4miTjRxuRDXcJShWm118RjNTg fWR4b4ryOVjMZIzr7qi1duw7ayIpoy71360UHJWtAe6Pvjl/z8vcW+PCfhzrooMEuUhoRIEA 34dAknT2uJPg4wUSJ9tT3cMzCHMcxzm0BoloQTC1aQPCgfuiepRlGpxNgFfKQawbckZzo07J PsPcoA0ORk7Ycb0mGFgn8g+A98OLVs+1qPbniC4XC3HaMz98VFYqhPL/1t3+RUS+w/aeXiVJ /CUfOLY8zloZ64xgD84uePe2U2DK+rTkYBIL8+yoUMSG1fvlUGnU42ffWUk6EBz+O8ycNr3x BLTb47pPj7iFrk5LDX/hUHhjZiz2Nf6puaDG01vkgBZxlLwML3wZpttIWmOKbYj2xBh3B/hq rb7Y6JZSWixTyBKriJwvwRvZtVDflnpVkHtSJ1mQxmwYIuaaYZh4buAn9p5ddvsEjopJHgPL AoR/fyx6i1ePQHxpnpaTs4AkmJDvhkfLNbZYIY6TMnJXM8d6PqBt9atr6f69FRelpu9SdRab dchDKBU6IHaKpl9A9/0CAHJPsnsYYa20U2pn9YofVpi53bhygLCpRjJaQAozjhPhw2Bctd/y eGfwSRGs6nkUbdXNQnu8UMAT2FGiMOOWJ87g/GpmGdgN5qlGC3EuwVgG3/wZKTrmBGAoaBqa AgRU/w24rRkEZD21IaCnOB/rR4xO1eFdkczNTrmmWfhHVLJLXt7K4EHq4ByE8VuVaXqQuMxW JZiD1uS/XYtISgPdvx0iDh/uX10P1ABjjTLpRiHdmbY8jLWBkubInmkQYxXny0x1FdrXhsIP FBLBoacYQkRI09pYzr7anJJwCkMzrGT+8Yw0pbPQILsGpZtLZbpiv7ZkQKmcEsCNFtYTdiRp 4pUtFG9WZ5sk/Ma8urCwwW6L/U6Vrq51WuHF2qh5ED8/WxBhOKsy+WHEVCGyhHI4joHzrpd6 jfwqHPS8+U26BsLfPyc92h1bXD7npXdwF7qJZ2wGrnEy5oTHUl3jd/HcV7SQ5FJncevvUwVd ITBABgLcVcGQNKj9hVUZwf6EAmpY4Nzp7sdGi0UrcGjyPqX2CdpuM3F4WCihH5Z2NLo+YYdn IfUQmn2XvbgLMVjRhcoUYnNCI7bBMvDKnpOj0+jxKM7t+jt6rG97IhmNosGzaWudbCxkdFdH kpVTOEVpOivX8FX2K//R6d0y7ErUX8eyi+J+ZE6d5vv4vZRhJ3fcQKtmJYlTAqnPA7pZvU7v mBd0IpjunuN/h9NZ4In0LDzOyFz21xS3FMP+aMPMo1h3FSnwZB+dDUjxk7bLORMPh37r4RMf ZKZxETuBgjhq619AhX11zMPO9zDsPRiyVv2rliahWCDDaH8kxDl16CU2p22Zh25wGuE6Aq7s VHTgM6+UEebETgrKzU2ZxizXTwiLBQYmNL/xVgCpVP4APKapZR9vjrLyTPidNG3P0MaggCGs /h3MU1v2wVRKnqY6kvzE2zI3gx7mNOfF6t85W2LBHqSzPvY4ReYrr18dbqAwxOaWqF/htwGT nGrF0zQ5qQaf2wljpytCCy6dpql5+a4i6RRhzF5iXnNdnu/7BeOh3k96ny7juA5NB+jcNBNB pW7En67haQxQBTsF/DbEHNv1X1VlEhbZdJj4jqSID5pbul/AxIcqUN1koMezOQYW7oj8qV9/ 
meF2PFc+DSFagezmfbGsvzoex1zeLeTrUgU/d7FKG2oTJYjaYsdBk540EMYoCoeHbUwU/Rp1 PGDzWu/hCsWEI9bsQWqjJpkRIMsEbUkeo3Hn82/iNi3EBpTmbK1Qgu3jg6VOViVLlL3CGWkN zX9vqPgBRLlMQ6eyR2/ovcOJbdYCoVXgUhZd2oZnKJWpybUnTcejGlRxGTh0WyRHRR9JA595 nS3Sbq5lkYZ/Dxa+Bg/idsGDMPTgnmwDgLBrdkZYDr6sXau8Hi9X88uWI0V3fQJipcWyq5DL ymcGMGAIBEOkLHsXD9p2DMMShESKOOhXuVV3An4I8rCJbbIwu/yJD+Di2AdFTVwbqdVBFEyI Zr+MMdRehz8en4mE+i4k34Kpy/zUzWdeQrI2xIudFSmjxDD4cJRN7wcDIFIGLSccieV24sy8 U28MT8n3IPaH1Af6+GVLp3YPHBLmKQnRRQzGHEyF/yTedQtnbMSF/6fdbJ34rOCtMCz6/xhO EK8mny4ZOyqm/TloWpSisXdnRAYx+OXUukqssO7OkYnk3u2vSYvBd4utoxgKag3eKUE1onUW fvBxAXX2GoPDQMILZ55m9/iEchxDyxDDeanah+l+yzNY7355iKc6ZozyYhwuODqjH//fnd72 WyBsKGud7TiD6Ja1miXSeqLdyCV3FtJTv1Uw5b5l2sYd7tUuYvvHpL04E7sOEEtG9LXrHpV3 zhmCqTi6PGKiX3dr7xCwxMiQNfdyJNk1hKgqBiWyLSLbztraQB7BfTKGDNnjbhurexiWEQNK ak1MjjyXCggBB4L6WpWRhwLGE5wLi5Sr+Ye+cIgJqaFRoifGg1Uy/lfpHGNps6dBAnleNCfp qIt4mHukvNB6wGag8piQy2kzBb+OaD4vEm2nkQ7epIvOvATuCEfMHXOi11RuA3G0ytw6aJZQ kmtjgjIyQ2RP9p2b/13C21oQrwysSBjpTWntgahyZ577DJl4e+pQmdy5A1A1xz4Do4sd+6Ui zMFz4QoSXkvAFYZaz6qReTJiNtFYsnppng0OFgl0FZn/DbBizkel9m3584w2hAnR9BTCguvh 99go9MG0BX3DXP6BVtZwUTWE4oCCziCPsXdLISFOZtMNuvrqvW5Jke10k1VMISKdxhj/bmJy COJO3oXl40vVWFOckEwcFKss1uSZ1Ejn9GoCtFUpvcWMNBHqUBrIK9B0zv9xMAUkgaRqs/Vb 7kN5ttlhgu1fdmvhPxuwhGcN48jeeBpDZx992NjuQWXBixwsjUC29WCR1KR+7nD93LeaQLMo roqpglnA9RZ/GCqEFsws+PLm0dV8J/KO3/ZcXmn4MXsVQxW/fyIkf9bn/GbGaSJSv7S2m0i0 SiBCkIRfc7qhqNHYKxSzCMuB6ziPMeVz/cPreY7Dk00bEUCQSDtjdXjqV3GcufUzPs1m5EJ2 yFpnQSXJR1DytByLYrLg+kAh0n101O73VXAYTZcdOynvpClXPTVxMHJj46xuEYG4Pkic1QrD XczvETeOt/3TJVOoAfnX57ADqdxbQBsrRpj2zzGGcxj1jarV+G5iJ+Ied9YBeHpTQTqbEHZO geMqcMCygdutaV/1FT5W9mWXDvlbHpQ6sYFs7y1JIWQ3+c2ke3DVVAbfQKtNBzYEpanilKsM 0pltujQamTYti9IZd7fvC8awGLVnmtPYMvNjZXJVJw025xpTDBynkoxht05np+GkhRBByOzz fhB+lgn+Yu1odiF/pQIGWyeAARp6ILW125b8/82uJEgYr7YoIWklxL/qsBhKalI7Btfhup9x wT1Ftldf40aKjdkqUT6xguZ9QYVbyFUV3ffeGXdoJ1aVL3JeuDYa+Lop8j8yFWCEUzwFSU9C N/oYGRFTdlk0nBu+lL9E1bHtQ2LTuO2qscCpWAG6xfmI65GRa8i7OS+V4HtbxUMbkwRCiim2 
Iqg+aioFi72ULlIsNKLDgrobcm7EC3Ohqt0cxQPgA5AFZv3CL7aXzGJa+KghE4bWBr0+fcQZ 2vHvfkFfIi9yjjG7mfN6N7g6IEaQ++bSM2nakVNWQh1Ek/1bJThcDoQwda8oEsCPRNn1e//f E7bt7/qmaBCtzLhsZNOb/5YeTMBmXqLzJIpbKoWNexoPAVC2vsN7rP8nMXh2kI6pBtdsiXNW Ycn4fps8JpaUpQtoKGK905ndtAJCNy71/TYYs2Wc+fSfB2uRA7m1GHgm76UOClKAZaIA78M2 4Dy1DrJbwBfE5r4Dt9TyRFrithvDIGvu3F6KftbLhr50p6eMoe9jiXIk8VAKnSC1WOwMmFGc /ppHRwZsSrLa0mtMcjMW7w2kGRXNjrg7MMATI5fzkxNS1UyNMuv4pmzozhvQDHKqM6la4czO hBKxs4apchBY1S/EuAhAa2xG8w+GKNSAVnTDohQIENuoD1h2omXkOpJHPf9hIkioTXMENn4a J0Qmxt08wn+HxWmerTQvmD6cREBJAw3jbAmApfCM/Yopr49tzK8tsNGx3bu6v4L5QLWypX6d RgOxXjDIWKUghrKKVMt8kc0JDTyH6xKTbADkxbPtUGfRjsXCS9WMA8Z/WO6DI+0s8i8POT1a 9P46yLBhI4g4yXhc85XjonBO09ynL13tPCVsZ8I2shOzpWQKTjsJ7TWgmZmSJqN6W6MXURsn MM8yFUFVfM7ka8CG/g/zdJwAoKTOrvznMCgStFViE95Th1EpACqj7OAokPJkA/5rKAMIHJ2D xOvqchxijP7PtjkMt7o5d8BKAhoeBV8k7E9BR1kt/pXAbjW4yBT3SUwoOXVAw4bekwScl37L mS5sIXX66ShHglmdpN6iYUyJu9QFRpFH7trnyeDb6Wc9PsKnfCyp6+Wn8/pWT2SQpS1j/D3B jvsS7NHx9Xq+icYcQUxUNj7PSLwbnifNr3sJihCW1mMjbRWoniXQ188t6YsDtzNrjgoGYWe3 avwzWVSjrFa36gaOMZ6JKtkEgiIIumYx4qQVHI3gRuHuxGIMGbSV0hrbK9pW5CoGSG/ip+es gUkYBEGVZPR3gHiWS89kTwFskF5lGOQO/CFDhFiKnDbEkuOgabjpe0ubXQV9GBIM7L9SHjyq +Aq0qv2caB5saN8aFwidhTVWbo5d6wcDj4dk5l9CJHHd2qe2ETrtYGySBJMP+gwTlZrwDSZn dzjfJuj1ZnJGxmcv28rYKb+S1EtFuH7KOSCGV9TsVqFue8d9jy2SareiLXgU8s1sAK/zUbS/ euS8MiaTL3EM89XQpl7i5yaPBk8kRHmLdV0v1ZL7Eh8iOR1Nkw2eJCStyAGVAaV3+gFwlckw rwJdwfd3JSC2n420uNpgXTQ7RI0/sET9VVevyNYx9JmfOp52C4ODGR6zGGQw4HkcoWi94UNR o0KbpDrLB3xxPLItvb6SfHGynDVjJ3dpT5UTDTQEymkxQMFOBq79vGsDObwKEvt2ekAeAvxE 72EYCyiYkhERjQzHvuwTY9w905HnShvknzXBpcSgz9PVXosSqonPPI0efk8nJNh0kS5gIzQi rNDs9Y0cwfgwKwb+6vZxRiM/wk0Omqav1jlfV66yZ3X3RICamWepz15Cu+23NvXCAwJTInGW 4+8se07LXn9oa0AMwAgB1wD1/RrZOVeRTpvwBdebxxmQY42BDWRsnL+gTtruaq27GEx99gXa oQ3pN146+C+Wvzh+Xc59opjPjElI9b5PV3982jI561qX/sSeCwQomxRp+1yJmiF1QCCYHGEM mElXjRyL2tg4glMJt4eejFOHyHdqDVHE8LAlzTXVYsJn5ReJ2dhy7Ak2/tyFeybHgEogXAhI KSZswHvlmzVIDKKesHyM1IqIDQYEMQkxJkx0xJO2Cjj9UJlFxBotLkAP9GX/5Dq62g3wT9RK 
7Vxwha7PDjPEAXrucRPcZl2QTC8X9W9dwPN6VpMPRhp7ba/+lwLxdYPHcm02HceVz/YBRO+t 22Hzrq6JwdDIOAzu1KmRcVbXqYfiVWbuz0CxA+nkJ5C3aCg/VdBd0Qfzxr48j8HawYoR4Jn+ oS4gzx+unHQchn3/oKBheXqA4NbJmVHomuxphCmpJ3e/U8d6pL+wu7HY2q6c5ccIqwkYhYyt w8Oozh1lG70vCHIT8i5uIQlUQuOlv9mQC8srZIK/o+KOvsGVneZWszm7uepTKhu2CSa7+Rh/ dqgzLsW+I1FCrqs6emxGQosgBjkDJzsh3N2UNYMYWyFnqKHz/H10qbil1y8LH8iAFDeJqQFc P9Twd6PA6GMccyu4pSEbUGa6KjNmIzGKRSBL/tLhK18b/r46h3j+eWtyhoK9PunrVB7wed1J Mz2ibrXrigE06ggF81jXhDtrN5HdalenhyE0SnC0H/c9imTMKsIHbzo9FoizyvN9z552c8oo bWEIRT3ODaUNDkhBmfXv9aX5rvkm/raooM29A8xMf7TafRVqKgRrhjhO6ivf93+vWahhs0Ew ezoGqx38Ddwfd4Ydj/5hb4xvGI3q09yytlXsO0atpnE/jpCssgW01mI5sSBJy3PV39Ux86rJ enYVqGskkudgi46WQH7scLEhatbQx2JyoQAr3maAtGELHYEpNp9taH6+N2q4sGGQaKlRiU0a xvtoqn4jxh8dBl4mMtG+Co++/WbabaMYpQndcom88Tf2LTPTUz+Vy+yVOT7XVzuVRx1cK5mo 0zJquZ1hD70W70dKx6FcUojzzTicF2NCXxZxtKaEHrIICdfnw86trBDGhZC57nfxCt1h8/mA 94qjo8pduBiOpcyE3p26zIbvvV7+PNjNIB3TVfuVwAU+jIKA0G49+o3vpUQEKVhZt39sZyjm xkUfeBtWEv+d/ogC8nwjST4EQwRAtXiRDcE9dSRM+JnEzLlMNn/3YHytbTEihyGT6bqZLfDw 1LR94CXit4dDI1cNLzd4Jr/+z1Vv4lYwSVgOxCLK5WW6Gq6QoPoWjgOj2PWgxNbcFHrH7XFz fr9xi9yR7qM+2p+VeRrg2fyqci3P7WNQy+GZZ+knXE3I6JWcBJSYhaIIZ4ueJZZsj4ce/Jjm mMGxmAOAfVzAeATbxA/7aU5VHzAFTp8710TnP/nrJootfzr5krySVwrWs3fKh3y8nLLKbLjC 0A8esThaINF37z4db4V9wPphPBAgIN8CgqAlzGsM3F7z+MkUJsEnDqjuq29eLPWWkfDdVB1y asYcIFP6JjSDDQRqdlys4731B9OHWfMo+JSzujYpjmGiloBLzHX+piPw9jcVyNvRCMoJuyaO /AxmoxBBkBgVNLnOUWW6JFWTUQelCPnc+63dC+1JUO0uhavo/4TURxsH7jmPbtDXKBYVEXVT 0OTsicQNxDKCHUWbkEdBSWuqMRkinHpB47dfZFu99qR2dhkvcqbTgK22co/OmiyoITWtrU1a vtW3VMnjp8TJ0hDTjFYzbUOvzVzU6pRCc9aair+tKMcdidkJZULr0gZKcghZnkYHg0V9M1nA cqIpgaTllUkf6bCaHBKGEwDT35gogRzWeaAfbizRfTuuzKb+rx1lgl398J3USBZxmYi6pOb7 mkoHaF4KL8xVFvV/GRWjfragKajmu4Z2z6CezoubzX8J4tm3ewbaWeZDjQwJelPcsEkEMNsd tuRlpmDTQKocCr39u3do/xAYdtq9U116r23zegDMx9U7vVQRcoQjHRLLSNuECTxZSjnMtuWl 4qZewfU2X6VpZHPASzou8Xo9Ct847OErulQ536CE3ObVxaRF4Ah18kO78DIEUS+IQvEiXzRn sf0xbX2MF392BdGt87y8OtB70ApH3ji2+WtrZiJcfBtjmNu3MXByCbOFrFqRaMgwaUNKSUEZ 
oK7H8H9vydM66+hSfIIOR/8JlT4Moj9eI3OnsKZHQPsD/qLLaMEzLy7NOZU7WoiYf0YFZ+aa 1qO0wFgZeIA8Ww0/8E7k1p1yWjn1Y935e38U8X0hFipZ7Jw/o+KWfrmzZp5aJWE4/XqZkivk YH+Va9QKlyOPngjZpNU1rmUUGtCM67wJHo31YosbD/YM3WUAU6XfCaDUyv47k1K/EwhVKZ8m urwB3uVP//HFw41SWU87kZwoCpOv07XdJ2+G7j/mQsRk6vG9ZdPab5MqeDOXoyVi7Wc0LB+Q oF+l2UK4L4IMIKpaEafmb96kolLm26rGT+JS8Lg8Xa5yxETcPu/193G8pwzvstd4eerJyFEL 8drwcDrUFkoIzXBGNba+wZZf3xo6GcywQiWNHUshgp2/YgGEy/ryHKfgHG4Ik93sYWYpiX1c 5gIEpWIX/iM1MdRr0kJU5axMiqdis8MD0mRdVtu0qdHSBLXayIB9EmCm2l9kyxxXIzYraH+g 13lH4p09BjCS/FTP+BKMnGZ6aqZrnmtrkxymzB17rC7n6a8Fq78BKW6TyuwIGlvN4wIlRwo8 bzvL9+5x/0T8dSuFLGjo1JbwHCyDQcbVbGdNc4CK6BPA2Z2WOtdHr08JiNWTWca0vbhfWWiU 9GdCZvi/VwkmZLh71xzh5DPDTTOHQQQVIvzP0ka1KlcC6iR4alY08vfXtF8U+AKFkCQlT1E9 Ah2wFE29JjPFxeUqOASpWb+Fl/atnzpzTGlrHzvRHAtI00I0pmOATJj/cT4WMk0itmsonPXv nNUeZeKtAtZXztFT0qsUe+H+aYJQI9piiflCkoOVJ/KVgTxwf6KAlnSzwgDQFiUzUiBL/4iV 0fxT4xWgb8xGLcAZcOp5vkvuF27LqDo3hQTZ+pAkrosAFsBEL3aTH3qUh9zo+KtEM4VYFM3e 3w/D9s0dbUpiYVZrgIs8EWaArGXMKZ9WjpAPrs+CLWYHnOsCbdYUsQPaenwHOMboEHY/lfL3 2asDnCpRjoLWKA7hIiQ67mcdhxZ8OrUMIT2naIsYVRLu7Z55fBqjZrZYuV0a7O2bup1EcYDQ afxwhAfzCF2QMnFwSkGkTNcJ905dVGm9rRtO+qoxVDhM3s7HuOMd/GXk9mY9h4dYTG2Au5+6 IbrEkRbXefPc7Cpru5sla4K3H05hqYqqAPEU+0c6olXpQ8RflAeNzJqvbrOCXz6tLtdoX7PD Gl608+Lb/YqwOQB9sPeAQbjJoKC9iVNdD98w4IkZ4W816kPYrx1PxGnS/uasJbBIaPBAR5xx zqebO2Hp2Y3xfwy50rltkxtIBfNC9p4jCZtQxkQHbrkhS4koH4v+XpK5ueyDtL46+n69LOvW 2BGoGk1H1tXXDEdeymLF62/Tjn+5neZLalVRHtWp4DgrKlHYBauj7jJzEpKRaN0d3QL+DrPc /I1zD1ZCei05lhLmu6/1snLt7AQ3RgNoxGEXQjBxDwS4JupVpk1A4VZfm95DvRa8SUEj5Beu V3C4a4p0+/XV4zL+KQGUuyOZ+InKBgogB0ZI6NFfo3a//3xAUd7h/iOvI5iHQaYXotZ0UoGu fIFw7R41ZLQ/3yQn6zM6E9QgVMIe7HVISH5iIrpzNWZcWYPtr+mGg/GQ4EceRU7Eqs04U4FQ ENL77nF6gCLzIa5kcP12IbHlh45SSI1jQ0Kegkl8gdBbWlZtB2wKx7qQs0moxeWB2IXQq8xG 3OKMwglKuPcPg1Q9X1xnvXYmAC+igS6zaR6OPKlpYFJkjrWFtCjrjm7eU+sY+EMl6MhjpYsw bSr4lJW1App+j4yoA/EmUlH2h2lrT0i7XCvcfvur0y8e4zFOqCBbNx9gBiIg5ixqPord+FV8 /Y0M6JiEC1YHJUOfn7U0ftj3H/HTm3vi2LVQjAlfYkvPwPjbqSTmk/kPA5m2b93L3dPsOsv9 
1wcZZkMAwaKHXS/tUNHY4sEjHMacrcNlHD9gALaC21/fEjaOGPqOTYMfGR+OfFBfJjNN6sdG QlDkRk/h9cuwlMfXQpJyllBEriQGaBumVvXmnfph2ZnJnJ31FY/vGxcYG1lJmy/2A/heynrw TfSWA4D8n13ZmwJd/ns5dNeEOR6BiadRq0ghoNRriIUbVbj3JI03JjMRnhtwBeecyJhOzdrl ABBC8KVDLRv+P2pcq+rX+0uHfLU4yvf7txo5OFqU7BbeR3/aXHpUUakJ+Z1lx2ybri1hzolQ 7TT0pYLy8EmLzzL6F0WNwnZEFqpoV2K2+36gYcRkgqWs2IfqqkPwN8Dh+lqgyMsWn4ZqXfrD XAMEbI4qjUlIPE+xapFO+7biII1Hx8JzVmfryY5vFGkjDuA81Z9GIDGSS5sl4Ke0b69ZpZ45 tYDw8gKZ1618sMqyUbXs0g56DYYffe2qSZ0j3NaiS6quJIQyY+eCSQUxkpXZ9P0oJ8eBM58w ZQ1sQV73iqX5FuKibOn6ipcSNA0cNEKMbN1jdYn3Uu9G4P/O+V0XkeK19fC0h5j6L9bKT9d0 9eQ/jB0ENmXHnDbAXqvWg4dlq/nttLAuiRGMZhKflS/8gP0VFyBf9kME7obdniMb4+i9Gkxj 2MaWSz4cbq+6OgQ9sDfb7ZqbOyuuTVrZd3fSgZai/pABk2dHigtNOONyyuUMy1TnAHzTP578 jgD1UN6ofKbo3CE07sXLZF5C1ePXOeGv4VpremkqMPd4HcrUjYikAI58RBst3rJGI1kO5HuA GDzDuYSorZkm9EubtQ85Ec6L1Mlj4euXI2g3K6y6WhiqV73Pq7mVzM4btuo9z5udSXuq9Z82 3Dq+JroHNldEIlKsVvqczb/ixltfwDbfIZRTn6aiGeHAthDQiwuhVwzPeeLflaSIk1FBClMv gH/Wnjwk7Nw/dUT9PH9WE49bmlPrO+VgylJU86HW1A2rm4BnJfve/in82gU/I51lAE0TRDwD XP+6Df2hzDNxocyqc5KU82rARdxuHEgz/p18xhkW+3JqLdBkQNXiRaBeFqWLI+YDYz8YKnYi 8YrhGhiteHYgVlb9q8JmjzdHMEUy9MZgrMou7CAv13ayZ5RUyq74s/tjnMOWQQeeXhksgyYI RZhREcNPVMLcj6NQ8Zh6PKo+QFBqycCPtKh9CkpjroTNuBk+m5zfNGHYXSmo9OiWjtecaZr0 cszqd3/mZr5dvsScoh8LicE8zU5inoIqC6yf6lRC0KTvdqZvrTy1X+nXGcZEhrzOkXJNRoKs rJDt/evN0qE/EZoqWLcma087hEYHZUKLrdfBAgokdfY7EbQRfvUxBO1VlzsK3fk547ySlysj iss6OThQEWcTSfEGuYFc/2owKB927rykfPLdLLuvMXusD2ESH6eJwrI9UKQi0mZ8lQIBzlFt wdB9lMLGLMsUbWa0dA/1OKRi157CFUj6BviGntBjrAxjNRW60jMu6OxKpePGG1SDSiUNopy9 UztVt5Y6d/gm3OOYranacL/ejlL8WM+xGJ4M2eN4VbedN/v3MUwLoDc4ZOO+NVd3cyKEXPIH +dSY0M+3rLsuxsiZVs8dr3o/jbokAp9mg1/WuBqer4pXMm2hkvBJnszXRFT7yilvm19csTvT ror+Aaxi0bZ3sBR70Yo8T2mZIOLQcWVgom3VpeNASPvqNu/kPZzmb6gzbYjYO6GrkYqVJvHX /0Gd8tnUukH9PV9tyXNzJ8S6gaLLPdNfuRTd3hBhDhgctJxB0BysjOZvANo85ji7T8mcuaau eVBAX9kuAi0up6QLQwz3WBMcXOMNFglsKz8h29coxDgXl9hdyqdIibvP611SXIFZCHgbzG6i /HeGepdtf+PICYyI2zsrfMk9P9Msz2e3gW6IQt9OP6dxBr41S0VIjXxbm88e9ut3NaFJjUxQ 
Rn8A/INYXFx6DILqtfgdQlb1CjsUiLbn6D7GgW5zKUMhHekVwC26+CclVa7fuk8oe+a4mgCy 9hMaesVdU5VcBFv0x4KJY06MVJ89TWC9wy7pcwajCwK/wCspQk/t1hdYVwJ3JW7r1bLrqWUL CDAkq+n2x2M6Mi/EAj/ui9DIY/cdESKPbW2RylsVXHhG5hR5Zc7lPyCR0fqqOSZaRd778mvY 5Y/zUZq4hC5pGe0+ikkxfeS1utkmv3js4v7k+uJh5AtcwhgZ0BMtpYtwwkzbAivN+/mDd2Hw eWAuV72RaJb+o8iM4zDzrSmno89e2UbInJHY/hFr8jhoHRmfNQzQz1shgzX1BQ/3CbgDq8j4 ZTMcsv7Ycm4B3soR274rVGU2STxn+hg0GIpPblgwQwbZoXqWlt1gj9BWYXPRHol+HNDf40p2 /unHdHB9+N4zhLy1wnsDzKAQbNdfIVsKcpvimf7K2Ph4h3NAvWsSAqLm0ZfZ6uPR/o/32kvR j2qhPYg7ktIS2TMCb/3qZVaB2MmgQlJOj6/TMjJD+573YwK1SYo0hZZIweCbffX22plTzPL+ CxMmpvTYH9M/VWtJgqncU1UIBDTQ5Cw8cQkemD7izKqn6EA97ItWpjTuTJuB33qlixZi9TjD w61DRO+fYYLtv+W9gTKZzLNRFjoJtJ7a06LvIuJiAKYMWaPiLxkqjBGw5pTteboceYYH/Xxc HAx526kIwHjuF4BWjwzxm0n434Oz5JXfTgmyONWRgSgL9M7Hez6gaP5tDdPbiAaydIGucxnb HrLqITL4vq5myUqQsfWFDbw4FllWMXWxljaGEUArqA3xaP7eteGML3BC8dUyt9chyVjebYZ0 DP3j4c3JiyBJkStsnmLsWTLdP+FGF6Quzyafz0N6AIF9MTiSBuf82x4lgv7QzRZlKx8jyEMr xRZA7vXQwG8B5qgQjWKY9ZZlzp7wSpRiyHsX9GDC64eD9Kfkmxay8+PpgyAihoO+02Pg2xyl cMI9VFm/Q/ibZtYdMf9bcQl0jBTiQD+VpMRN3w5OXR6U26YgvPvB7IbtZeE+JqiUGWl/+IDy CSJJ9rZTVHevg1/WPMUNBpjZhGpACuRTLuenLKkbpMyphjo1x3DrjmiPYRCRLlfMtzokT0PH CX05OHB1eLciWTUKqMlSjUOhy/PjBH0oa7bC+MrBl7n6ljdLEV4dQQr2yWyIwHFFZdS0fUQ5 wxCni7XpNjioTqVqCxYjpGA9am57I25QmzTmIw90563D35lvnK3c+hPhVvkazZ7Zy/Iosr7d Y9oSlUtZkjwPfppySfxMmJBOIIFWRc+w2F880skOVh7QVYhmKDmkFsfW83FqPNrK1nbYuI6z S6tl1SQ/XeOvWe+8gFJ4SeXEfUg+z1lUO30Ki1WLqlY95WmXxcv4rtzs8PL0Z7ZR2wLyCYt8 rG7PHWSKXTTsaGf32VhgvUrsDJQyowyTH464y3pCCzyc/51mRh666QPpQCI9WpGlnGaexdoq WD/gCaKLczZAOP29JtFv4vJOwMUQjHrUEGyMDuxwMH/P1vT+iL9YHb6sxoi8Q/6Bf7xoMTGm dL2s60GrGMM5brj4v2ec7inMfyMxZVhVjF8ghyQnaKt7ul+ucZ5FHgmCAaOrdUdiwrmkrOQ6 gwxKx6NJpWG4DcBvXO5TPyr1Wg4FnzGvKnafuymDYtmCpwNc4H3SxaKZ1AoKZg/dLTUwaslz uML7+kiKYERoVkoplDeXoPUj+S5jxcQyl6nhrtLZMwGvc3ss8c/Y33qGQGdjbugk0MUBJH9Q rORPGNq5ShQE1OS8TbNE2Io5JJr2znAmF1yMrWmaQNFH7WlVegBA1i2YuXiDq91IV0wvDhpy QBeXTGwMI/+1cHghDaJOgwwXfcns5Igiomi5qUfVu99GEXEAROj3k+4PikKTxA9boh1+sA03 
3SvL4DpbyHJb95t6kyYVgT6UuSfJNYERmk2rjq86iyewNNmgKiNIT5tCp1Y5W+VCgR4dzv7v DuD9G6V9b6llNhiiCkpkQdQJFCaxBPxW2+z7K0SuhcZWhYyW7RN4niVEla93OtTGN5QY8Afp e/MfDW2CrDY3EvlSYf5xych7jLtfgkSg+e0/2zAaFdo9KtAYJIFi10mYPskQsYNFOc2Yn7nz HbQYh4FCEOnsDQi37P5TzNZ5rw19X4cdDB+9zIUvsAanBcGGTQAkOfQqDVCsL4pnAP3uYY5k byI5236Mt+oWn9voDxNUaf/n54+p025MLYu+FVTem92gCLsAosSpiS+T3JO0GvHR45G1EjFV n46ToAjIrGFSFRteRwTxOTrSqWHVvBphsVHGZh/TxCJqG5qSbZ4VkdUW+uPZTmaViOWmaXf+ x/VjshyKDk5wTFtTl1v7ZV2pb8zrKtqsEFjKod1V2HpMJ0jOhWIkmcIJJmyiFq0/UOsJuj4i 0v8W10FjJ93Djjbtm2YBYAgJWMfvxRTu3JjFhAa45rexGRwXUqZbmSLc4lKsEFDvFoIfB6B6 LiSho24aKJxjBZvyMD15duPEWNIRp7rYPyee4is8fLoP/Ib+mwT6GQzE/WoXQbX2AFimkag+ 6JzmQTHFRA1oPs+eKpRxoaEwt/0H+S++0yWQRsWLIpRcwufq0G81qxFtMTbxVRzsOuSRw4G3 3zCI0lcmiB9L/l7nme5fo61vblvVm22bOvZHUmtPGiBJigT/bp9EeMp62aY2htZniu/bAsez dCXccgWTMto00PCAnl7U8ZO9ltArIF4MvYd8zU0QAhC4J1uFuaqkx7ONMfsIgZZODbxa3lel XeWYxJrndvULCimAhLuDOWhNynWnd0CNMoXB1MBX/bZyKkNYV68F0JUm34hhK0pLgb8wxctB h8Qd7uCf2ont0zg5niPEb62FGGtYScMaAXyHjguwD2iMYz/0zGYh1w5xbRhCB4ByaCFaFlwa FeO9v1eSN4rPKTj5Zi7GauK3PqeItQTuxvmS06QdfEviWLBLVlYWgd1FWUDwjgyL1TZgk1PH /d9xDBsD/utNLh695pfJw+TMePNWYKEzZu2Y6I9D7zXBLk1n81VsIVgDklkhfNGKU0w6QrlR WmtrLhlR4a1sO24k+zhbgzZjInhAEIGUO2yL3rdaNBHxZLJf87qQHQG8SAWeTfJY9I/jq6ls w27CCcG6eNGVHpZcx7HwAAs4hdc34Fa36wkHdKEHZDce2F5Evep1qV1/pBzbXPedIGQNF2U+ j5MXtIovpZwLo3gQlGABHbIJy6duPbi2kY9ZcAmmNtNFtEQHfouJMyimqT0yTgw0OMndiT5H nZF2RYLWh352B3BZp0X0nInFoueaQhKF8AClxydC8Vb3kuFB2UqydrqAeG2nUrOymFlcwi+0 ydONhD47WfzjxuZ1o/kk6pFqDlBbnMvu5Zaa4Kgpo+QT/OUjfDZ3hSGTf5XloAGXrukyexYK 6TInQWi8gp1p1G02RvwoEOss39R4xsV/YEtNl3djG+bY4TSkUgH68EIpo0IkePLlGuFCFtaG NKLV3ULppuP2z21WySwTpCndZXRWZYFwLxq3sng2h9TUQ53YPkOFRFUWopuSdyIZHb5WQFvz 8fjUOR0noek9FYLUcwEMwY4eU4tN2Is1F0AWm78dsLMf0yy2LYfN3vvlRbqvdQDq2+D+5BSZ LZ2OEFaWzgbIx09SBJl6qRDRk5KD7tL4CsK9WajIt9cgD8Mc73ULUWozJI+LB089psasD/gl zyJSfN86FcFvD0pkPaIWWYz07KJPhK6g8tU1VzZS/+/i0kMWA6JPz8Kpj54+QxAGOOCUHtwV pj0PU5behB8Ppum0XI0FJWxFwjEfkpJ4Quc/gXcix5/DHq/XLJ5y0/47v6Ehhhya7hMZHXO4 
jTUonLMA8MZJi0+dELkBfDoDWWuw3AQCYfC0e7Zr08k6qNdQvopN9z+WRjECN89pmcdmnoZU WVL3c0G6dAn42vZpGMPJhcZJKEtGs6f20VSTTIb0wFmfTDZYgg6P7yRhFTuOG7JmII5lfZj3 GmkLYUXT/iZM028aubEKU+39vOZD7Ia2qF/NG9BNXtS+sVzhIv2KWDp2f0JPRkzUULjX4n20 ar9/cyemdxUqiPJtpQf12BGb5QtVun2QA8EiCyXuFc1KOvTdoKeJtUNJrF5y8WupCFvxU2d0 eMArad1yiRo+zJujAnAOP9MyICxJWavzsiRdp85nR+wHbVCaJT+QunXM+RRlry1MoGU3Mp4E grNpvQOQOZQiCblMrJIFEH8eXvZNiaH8k0d2Pb3BaaIMez+f10jKl7xppeq8Rhy4sscqXmCx 4bWEFqXgI6MWdAaF5wfGFQ6wr4N4dhiPQSJ8kfot7XhGyhGUp7BuKn3fCAIuImk2YiCqshxK M+pvqj4Q93ImZdKcI+O/aAxincwXxDa0oxJ1iqSV0G43JQCv9veT8BIPGgTMyWszdqXCmjz8 cqLd7fWQ5aKfoZiKQJolimvKZRJUZ3QIgoggJRgvs5/YDwPOoE6bS+aKGYKC9333F0gXWZDf 9EB/cyq2PeNKVXw7JsoA+ncsqb8iB0kSO9Wt/Ox99aNcKaiCgYFWPqYM22zJVx9Pu8f4LglT I/ZslsCrF+CBbGMWQVmB+lICSsfOfZ8UjR0zXqD/EB/irMlmhinCBCIO/OM98FI3HPPQt6Xh 8aTltyf91FDIwBz2KOb/qHe0Dh1BuRRtO9Uba71D1TMyytOqVHMImhtzs1pddZqiyeyhB4C7 TlIM0d7lmBxzTYw8Cx62T0o2pdfBw8yxTAphVys/+Qpbk7isiNJU60Lcv6nKlKzTK4mFvZfd CC1pe0GxNhYBu+1cmKi+EMkTT66xIh4zepYgB51YJGYFDXaPjcbblDfb+9AQBnHoJOnKYsUs 2u+zvmN2L3VorIxmG2LnX7C5QetRwNp463CS0889gLxUZ+Xmc4Ku8QS5yglAbQRhEpxr3Z7Q X0Utl/OPcjpwCgDckBJ8BK3r7G0lOFxq4OCjmDbxviUHyizo3h2hVsovs0ZwiyXHkjb4w+yW ZRVGtgcELYcbj7XmCcnbPUWqCZG3cFDETfZAOS8gVkCwPdfLff1h6QFARlkTgFujOWFW8/D+ KeCL6M9FPIRfdiWI5cEWb9DfATgBP1sfMhF55nRdO7EJk/Sy7m3X2av/gEWrVAEPxlbEboWs z+jb0HryWvVpFaT5WJr9xX4Q6dbFriW8eRlUNsKLfRMrUKi+ZJ0iVrzUi1ZDB5b2x7c0SzCV gujjWlRzdEIipS6M7GbGGnzcW7z2HXnxlpM7xQJh2DwEGVCX5RqCyq4ASkEe5CFcstj4xDa4 J7wfzYvMGlCAxkcPBPfZXMBvoLDGJkh3kiesuwBwNcsFstnHtEJ1Olgq1vqWJr9hIBSWJupk D4ctEXuYe2GWnAEut0IAgkpefeV/Y0WnqpJ+OsixUTRllvSgnQRK8aE2JGpgLwbHJ2FggVVL B6WOsh77DBJiPb6QqtlWiaSmBVUr6dDoAiseqSCmxE3JLHypKxjkVMIOFrtV4eCgJEH4Z0JJ +SdUEL/UF3oQM9wV8HoUZDM9lbce1u0C6Ymh7qP8zQQO+C5F0BSjHu+7JYS6BB7yH4u09LOZ wm22ZUlWJlb+wHsY50pm2ouTeZG+8iyN+ZiXpJ5GxGntne5rYRO81qdGMWgdcRsKom5LTQI5 fY4eUC1T9NGmcV1OOmxr7f8731vsGUzVwERbz7yXqFbwabm8c87wgSqYKrtHrvxOXSPA5o1b /M7Vju3De4U3hy8UnzkTHNThb0d18HWKQuAPug/Xaq9hgCvhSz5TAOdPudlSMbMM1G8Anmpk 
[base64-encoded attachment omitted]
iyi4hrVNuvg+qOpmnmqm0OIbJiBmvzP9gvXTD624FUcKoV2/FAGFgam9qRRu1J7DFytR8Ay2 Wpz2XiPc5w8hcITQCu/kPinH6vEn/P2S1amLoogi5+jCPRoMsgnc9aaLkExiB6y/SDNo+RGz Nue13Q2UuRSA3G18Pa2uPk7P3z9gnEz2cl90Or6zZt4+wTc0ASQrU+DiyrRY/hkQhQAaJafB z/fNaZVRppir1K68o33BRy3OcSZm5GWwIIBEwn+17I+XUnZsRYw7K1DGbwL8gzBfxDqtHllP bJseOTlmaUDzyGCvgdf8cuPFWl8cpOfnIwnIb4sPamjnx+9vAjwdhXCvvmLYnoC3230KiQL0 HmukU/DWWPvgG2Xe7SmlH0lFPOadC5XgJZdAo+FKvAOeiEfoVeuqzbLyNdrtNjAmDNNRMfU5 pIG91MhxK1Q/CgSp1PGMV1EWpqny8GgyYkJ7L4BdRT8X8Bxm7QXTmXgEy1d9GoST6ozmCebR S2cv+jfaPv4x6gXpvaPlH3t2u4mEog4g0GOgaFaDQiVdMwC+E2zpTuTg9dnBn6mnDJCKw3PW bokoYnTamS8px+0BYsruScKyJxV+dW+FQq0JnwoNFhTby7y8i7QOs569PNXAfZdMsItFS25s 2mAuiQMMfK9tID+xTpaU6L8mnfsFQ8ciQhSd5nPxy+Obfil3l25Ux+hwU4ibrMndk/2zURyP ooGAU1HjZWnF3dJjHLKwodIWf2xq3XhWCEHdD3WPSpjphN+3zUOEFKSmaTv9DMltkt1H8JAZ b5SSioWz+UAB3AHMUB5v2KyIb2lHUdEbiLcCbYa2wRTsqRrRxUS4N85YH2OtwMSKy93P6gUF y8LbjncEQgCQnH/W7LLNVSh5k1uu3wOfUqIH3l0eE904N4UnhEyKREqmzQfcof8rs+IHKhRP 3+JyHJBc9qeWuz7buUO48HqYP1NoDwAwFyMfl6uOuGphWjKvkaINL2kH408MLEzhkqRdX3yG +zRhNygxkZJY88LrQ8DI/Y/X63EbnRMVgzqNYvjl5UjbxqlXsgIPfvvxZfAGHgdPJrOXKGKV 1xtiYpNk2aI+ZOInGY1wP7QogZWzSs3q1vtovLSoMwr6hnaGYxhTObRNK7sQMpaqo/HhXg3z Qzty3ZHfdDDL7YRqjJ5DcNB7ptbDhqzstXR0XkwCaJ4NCykg3v3w+SPFbYiy1YyAKfx5QwCU y6BE21Ffg0tPYf6/r2A3U6dupFVgm3oZW4paWQKJo8GaaVUul8uGGdZTL2sa2y7lD8CMbW7/ DnoEqdfYjdyEHe0eyVaVMQT4vQpbiiWrFDoauNRMCtx6qEjX4Fl9on4ju4HKG9xtws7tFfTb B4N52ybv2P8kPOjgHjcaUPv1lFFtiRLvSQV1hXq6pqrK84aokc6AKA+HBePlbLS4eC6vWbih IgSrJypmJW+gFN5RGmPZV0aoyFk1o07wF/knyKn9DXlRxuPS3y1oxhlL+xreyM1nmwB1LAWf ys5thU1PuoPlk4R7BNAUiVBrXbguHnAvAVnYvIT7W54j7DdkSSt2xyX+7ciBXwKBWuP+DYGH MWbVe5ZWs3td8QyQKXa3mPSZUynXhrUtXy/LdLMbsVp7az1si+yLSvmdl/e9UUemOn7VQ7vU 2RckChsmZG76ozKb+ZngjIfNFMfp+S7H5q47PBhYnz8aP5ou1ZpKDjyruOdgCUtrROeGWs5f ThN1F6gx+M8LTw3Kh+NAqcTpgQS9BzSOCDA7qWaP4pHUU6ZpVvcTi9T+cPGAlEqrGJig6UL+ +j6SHz9feVX2BfXizMHpAVtmj6T4XYlRao3NOIl5AnoNZG4b2VnHGXLB+mJHyEBdo/ZFGKPp 9JBFMgjnW280uFRKWKzG0yj9E2ivkwozLrZxSd/PuhVzpKNpsFfQtD2vaHuupytsCvLq4NdV 
OxEaIdexJlIXjULatnTsj9GDWlygBdB1b2oYkNfQMZKXaeXW2KjPJHCXYicGoL9WlveQ2Esk ELJh5N+Tao0ZXKyQaLY4RSeYV1QRHYn8BKscFip//ZQOpyFCDWeeiD2Wurj57/Ou0eFhhWTp NHpJsRlWBWhOyvwBSvfVhaqj5w2PIs1lSTFR2kFX1oCJ/C5qeAI8D3QQxvmiRpVluSwOaM6X Y+ph9W8n8jjawabJmZPikxMxC/41jdfiRxPfbukzg4+jwRxVjIO1CcbIH3kWhZJ6dclAyZTx 4GWyaURCf+lBM/zHkyRuWCyNmrpbNHDecToIFxyMNpWsERytpIj4yEtFhII4O+N8UCOCmKgp f46wUVe6l06kGw5v6RmC0AHKPwdFnPacoXt1u8HA5LeUqt7ezXy4gfs7NPHqZYDIJFDaBwl5 oWNaG5fulaNChraB7D3nljLnFkkRveSkf0noUqJ88LA9/pGfzPba3LAim4J/94BBHi+4LHJy MBjx+eeLVLhUFsR58f0KQSwN4t+us4VIlz8eu1cXxDNtZXOeaC5zVJ5VBVzY1j18AjWzo7Iv CexkfyRdanZ2Sum9wgbRnQzgS3r1lQspStXRr6hJGeiUWIuESrB2SgOEINt/0+BBrOxoKUib n3c4uejHaOxBAEtLcN36y++JYtrHfnzucQyvaQ25CzuqS+jZCP6UXyPASKWzmwf8tna5KrAX 7dxQkskffVpgpsFbDTbAhUJYGMN3lFyGZq5d+4LXYH3b7riUcqgPcyjU2MPxsrx+zpszfLrb IYZhcOqhuRydm+L3VFSxjhQeY6hIlJwkycC/zPlm9skCoLXZ0lnszWs5QzXtdGsIE6rChg1E 7H20WP5gapbTxn0l5648Ayuw7AzvQX76EHKaJwmpPCIMrJGgMrsZKlRCBJCwFzSCPQB7AWoq vDZKPFBhkoowhmZuxC2LlFe6CBh/YRyN+OwyvgggjhZYC0WUdWGeRZSirNN0ne56C1XgIuKv dTj9qhCXQbtuKqNBzv9N1j+TcvJzO+SRSwUpXZdIIKItbR93OzGKsoqKh+RvNjym7zYviAHL +msxrwAcKUKxdhBPUQ5iF8MeYw8BcbiVhchIZiwVsIRWM/VJycn4Jc80DWAeRO7Iwyj0gyOC jLifFlSky3ACv1/C6Z2eetHP8UBWD8ExfgEos09GQbuhXrodixRC8CfSVzEXyNUf3tchBvt7 c75DRzTSNV9/MiCHchRz9C7VfINC+v+px4Ng+sPKn2MAIGagC/fKUsHj8uQDhgTOiZfIJBuF +JZZFRLuuWOZDu91jMEhXm3G9mOAsz7k+hD2KtUgz/MYrfly6gn4lUF4j9nXDY0fv8qY72nE 4uMdI+lsZBly2bdFEr1+2I5H0scR2pO2jRpDJEY6R6lcBxlm/rf2qRyRjIT2/JR/8rnc3JNi kQoK9tcQl5xju1S/qqaVcil3IS4iiktB4UIysRGV3uaGGWMka+LnZN9D9cd7/Y1qCTLLcx5T +BZqQVk47AIXHQ0jkmtROoMYUJRKgen2pxaniorQUjVpFXVU5gOiiepSMmAzQZuyzQHgQos7 gvfoWRvIYaqnliAmcC4199uomCIKqqSXrfDRWHjH8MicDS/FEnv1cyI4ceHOwB9YTPMUYCvy HkW84duE76rprYdrRMzdErix4jKSwJUBKKiPgPCL2iWHbpHkoBYZEDPgBYvVUvoyYiH67cBL RspwY37xkytBbWcOt5CKjH/+utYS6l5AmIrQk23XH9USJoGXEpbJcYVn2m29Ppw/0QwjOoES bOFgujLF4brN4pTiQ3xSiAOY0VrdNLh5VHlc2Kq5V7mDanj0OLBA96Rvh9pAHdnKbSADaIUa BL6RSE97na9t+h+noQfmxmn2Bb0xgOQmmwq5eC1gaeS/2+e2lCIjws0hkwG1McblzlgA4HCe 
fsyjvSbMbjs35ioZcGSJ7WLomX63g2BD4Fz+sXID+AUMDFfVP32GUXksM7nZQ9SvblFfpwoe g9D+NM/R6Xuk/DTYFABnHTCO+ufr8XC5bQapfN8JTCIFQ0uiW7+N1ynEIVjxHTYIx7eOSwe4 MsckHPJQH/cTw+RjO4HskLbA0/b3fE3c9EVTYxyD4QqHoFUpgNA3cacbKOb2veFWL9kLmHPh YoK6nlUQD9wSORGnK9dMv3W64hRbBJuzlhnUWmUH9OBk4AvVGpaqz5WxobAAClsqw3V+HaX2 jTTC3qL5TA73cAb9BAvMK1P8QyzHUGyXJT0bPwo4XnvcMEc7kbO5gPwdGUwldvJ53no2HdRH CW8h9nJkoan+vEQkWbKEZF3WvPfbbhwp41hdtTukmLaY4RGP3OHtU72xWfrY1dh63oP2Q0Ng NALN9qJdHWtT4Rsps6Su4TORkKL5CjZl5T+vjyhQMZnK/ZkI0BcaKJKZ1bsZThHRTx7VI8I1 tO2we2lPmYdnZU9kM0sS8/2idXCGuukiDAdNobT5gN3thCY+JTACl1XoKe0dBJBsvO15q/EC GrnPpuVjkTc7DEqANfMqoa0bNAxTmr6mKGfbuBtNxoh6DWUNQScLzXIamM0q6CIBcZPX71PL JUCjADNa1erBPL7Qp5Xo46PtFTUiTqHi/cfIi2Dx88wbCxQiJv03jw5LJ15a9YFU4OXk8P9p Xg74Wv89xAhPQtZlM4oozqpSH7tElkhmfl1LvwobTUT+5+3RhE6+EqytkxtkHngMrpZxojUO xJQWa6weYZtARkOYqanBld7prHk4eixig00ITNdjZTkbibk5UnDUN+w/oxgi5exQeHfAu+PV q7UrXccFtTiKnkd0VR02IlMhsnqkrkQX6xMWFHM4lKzpbUssAbzovwGX21T3aGW9WCuo2ok3 uCFiQmkJy2ngy0QxFOPO4AEVHtyLwg+USDmnW14K3eyvuu/UbKTcfMxAAilHjxT4RvUcGMkP k0aiGp4y1Fp9iOnoqAh7IPfPVuON9N2yH7sCKdkHAu603u4Iz4whrCGAM/rXXq3fxX8256Ld 4EAJF5SGNGr+buify3z/5+YLtfpJvQ6jLQdssWvyml9Y7HfN/ZzCkrcVaccqmRRxTigzTNBU nEoObKZ23AddjaH0VJ7viGZotp1ttWU1TQG1NGP3uUjOfEgWqmwTBxKX2Yg0gnAIpuwfwWZi JYDcJ3CHF2ZOhynF+TaN3nRXUpbAAKmpNM5EH/7EGIALZTOeCE4afFNmQf5u7qhLfgAfXxRr tVLqSavHJdrLw4/JifqUcAlS2ViNiapAYVNBsI1SoAXaMwIiRjdkoAwMcde+znTyxNoQRq0L uS/QSdIhz6PieOPLakXKpcXQTynnadK9JzXpESZvm70mJ8K6sZtTaNcesgtb0e/5VasnIQap XiXtyllYYE6cl4wTnw7+pI22nSdi7Fzmn/Nf3IXvz1rW7geJO7E+S0T7Dsyt4SiMRflsxRZp ejSpnyhpSEVYviGSec93bJVmH5RA3lqufwBt4HMw7JFt/MTh7O2cS2DkN1bwadpPgLntYREb /L+54YougnuO2hIpF/tH1TC85vs96sAGwEIoTaMOEfv6c38ADH3FxPylDtwT7zVYcAXIrxdr 2MmBOmWG0Rilt278PfIVMQ3uVAIUypV+tL3iQcLrOo0Cn7ewIwGnX893noZZpy81PacCQ8Ju vo5Vv3DjepjiLWlrBaRG0AR+u2dTU0rPfVMuI7LEXhiTyE9FqGpSiQTtug640gnvdyyRoJ7o JZg5k7MXA/+Wnt18JqP5xUsEKMhBb2vGHyxJho8TV0tYgLm/Eg/lh/ryj6eZ367NMRu0MdQr MzcDQentrGd61GAl+5TIZT0RliznfdFkrgR+3DAHc1WQqoYnxq3m8TGfnfDU302rpFbZdh0S 
6KMNnYkranhjRn2bCTOPM1nX6Z9+nXnMjoidb4/DgWOwOLZvCvZQjmn2F0htIa1BcRFcUf5e 3DLjm4PZqGeoTWhRme05CF2RC252y7mYkJExME5eGpS655ch1HDF68KsftIfIMrHc2icqvME aMfG4H+sHG4F+9VgCsqyZQpZlFJNazgwWcRFOSI8m70tvTD+S1P53kdMITEwo6JX5AGVcByV fojeSlqR6DIYJSFi7Fr4RJ3EuKreuxuXg+Phbpi72AuJ3I5jfm5knKF6l8k96Gkd4MCAeAsb TEX8ijHkgNyp1DdO/jXk7HcOWfV4RjANfOMmq3M6W6dPIDMveF2xqm6huBF9bx951OHUcT2d EXJzQAfzSYtxxP0xQQlOT5cZyOqOCqN6ikV5s1jsUcTd+gXa1uZV5ntqXFKtH5043GqwYBws DSppSJhCa/4vBOPHqGgFyIM3da1x8ZQ+TmsIOWfBCp310XF0O5SeGfj4lajLcYRchfOuhn51 q9zJGj0DQRfG6g5eTA6FBpd1FivI04mznCdRXWjwzPT/k4cNP1//XISkhcuwtm0t+ietB+L4 MrQRh0LGwjKYhRxFSHAHGomKN6cpE5C7th89ea+R4Oqs9uXioO1GXDVWPTBo/PDNOmGlW7h5 nducDrggLNf89qdcrPEPh/eC50kjaGS8iYO3IU7oH0ZS/PIpPhLOn89gp5rSQvRF2H06iJVi zUZ0kKVvpcbe83d2H4vTPIPzSBrnRFD3+S2tdzFIYw5lXdmY8So7iRE8sCrTfq6111I4ORKY 3hyJZ3KBUDxf/aoPypCYref+wZOeMJeQyDJkYSrGPTVQKSwEuC4ZBUKh/ShRrwNFCUoh5yEU y525VAxlIStN5S6LvMNWww5bZ0rGHLb3JXU7BejWSQgRh3U/FpmlmY25i9MCVSuQB5V/K82h f59CajH1CJYEeKzgyJVNTAZ8NTGgBiujbmKkVfFEflbDU9jfrauZIJDcC3fzopAsH6MjiaCZ X0hq7AwgO7N1AqZ4Ykkl0xZohr+fZhSqbINvMx12GBEjRvRzkQs7W1RRcRzsK9tOmP2fpN3S jq/Hyp2Smm7NwXKwIxaCDNBorOQaCTlSWwXppqG2U+qULF2UEV7A8g07fbERgzepyQQCfX9o LMCImu7HYM4UYAXYeqGHhwljIcdjrkuEfOXIbpFQgGmNuuI0JBQ/dre1ih7WHeUi9QyHpS3h 9NtTTresp/NMuhfeVsv0I0DcnitaT4xJZdD83Q5REQILXOysewrJr51fKo3r15mBPGfUfvaQ IzwIrtHac2D5xf/uOyPTuVs0ABfaQ/QmrAekZjblP66NncUT2CPkRHbT+Y/MjDFtdnhHSdWZ NoJLeSWUgUz1oTrZeB+ILjxhJda8qMtSISm8Gj5k23G9fb/JqktQ/1efiK1whn69361gK1N5 qdK75YqpGlJtmH0IdWUJpA2ACwcZigQUF/NMCNptAfk8uhLOejGFYJPgUs1Gk4qgdeRIPcGZ rhd2L6qTmONUm2KPkGsGNH2yQ8ayZ4+HltN4dYI6WxrdTel/I8UfTJTCjw35WGBXwOxvmPCI 0Q7i386u3te3NoDJY6iUAEoz1fqwuNcof91NOlCfiwHW5AlyRmjSa+iAa4nkhVVubEPP2sLZ uITzQZVqV8w8TS/lPWRI2qcRNXf98RQ7Gdl5zLTnD00bOzE+Jx44yaXNl0lGn50/wrt+MnoK /CjfvFdW/jFH44QJ3eUZhiwI2dwNGqew2/Yq9QFrDumLjGqam9x865uR6KRQ2RJppCfnJVNB pKBH9L1PDlnE8JCXcJkih3Hvyo4d2suuiLJftMywq/5KDHLcL5Hi//bGZhK9857xyoebQjA9 h+69zsFNfFGX7mZAgElosjZ5zw6BQn2HP1nT0oAUB9tPGh0JQcK7dbnpUs86+erJMHjSCEeq 
SD/kuyZ6L0XM2op9RSWdqF3EAc2HhNok/F9qD9PqEzfhvWYa20F08iXrsO1SccyaWdDH4mfG 5xZ/seFg9xP1yeaHh/xJxyG1OmYHy4IT/Lp6m/W7wkIPtqreetN35jr27+O+GM5YNg0oRDxo nlG3QxWlrKta9eXRizx9DtrpkIlV3qjSHdrtZI65syLGwGEBfuexGXWKwFiinPn9QC5MzMJJ ACY2fJq+WLrbTHzE87Ex3IYHo8lMSJF60xKR6hr7xL+xz4BrT2Gtu3oVaGrVvGHAeHfPmM98 lLXsBoeC/ch5VISmK1q7uJBgtaB59s+Tp8cqT+OArpNx60uhGV52dP/+iCICGinuOd9tIG5o oWcDKsnJAH5KelUxKq/EQurQEcrUW+B27rPNTr0VKR4wjnuEWSRZNMORt0xEEt3RoANjtj54 OVFPsFlxUtbnxmIWeYk0nInNBiWU/z7gBp9Pjepg+CY+UfUNc7I2JSlfwqiaLFdKJWJQ9X6M JvnWn+fNI7Nu3vsnjoBGZkj7UQUuQ8ZqhEoJSrr6VL++vCt9SQiJvsbnSCrJn648lUK/+mEe dhEFJAmBa8w1SZuYsHOke6Y25Y0z39dyYxEkl0iVQJjyCrkt8k0JrGSObkUXPZ1TKdJHPGPF jbmb0cDBFegthJFjM0n2f6iM5iDogll2CGCrc90Hl73AuZ6ixb13ghVzEmFu80RXNp/LRGlv XS+AONz4X1IL4fye2pJgzvk+WB83LqMn+4M30ARLSSfPZhWbfJC/TL9yzBm1Z+XCFdgSNJXd A9nfE6Q60Mlt4rCb7q8m1pTGCWOGIDUcnzMwxkk/uOLMML+ESAkXcvLU10cTT1aNmXSaMoI2 CpDQsYV6FdrFXEt1ymCx9QmFQlpp6ZZZLwC/bk2aOf8cIoXuBZFGvC4djv98p7kpbmmV4UPz dHF66BbDFSV1WVKiDWUFptM98OVf9/syaGOQ+Oevzr9/vSs+NAme+bFrzxJUqTJ+HFioLgo0 UaaIk4T5+Yqwr8BCpxlISqgNaPB6YRNbpOTuBdBBq2V3ccEyAUKd9i9Y9LDlZk1YDVkk9c// m/3lyMx5Oa/3KyngJN9aKAqnLkyIZq+Q9GQ8nSahOPs9lB/mbgYb0dbPsENmzeg4O0qUM1Eb eUd+gGcdh2cMCqz8xIweVYhGLinjAIwnlx5azGng1zkjNowWxliEvxOggxY2RvH2q6U8UTv+ KIBYBp3+h3L790DhCTqRGMjZcx4W39jz+GSMWWmERMdTUqXxNG/aS4pGF38cCrAlRVg5xLYN xKMYO1/4B/8p8qi6Q7TGdtkcORG6k05QhBF/spL3baBCbRryXrYhngBSgy3NhnSKNvfWpsbF ZGOrg010cJyW7QcLeRWKBwe47t3FCz/+IDfjnhZGytd91TL4TrqWOgOBE1XXq0K/vPzl0SPt hpnuH52kObBAuQV9pn+hZ+bJD1q3LC2xHl4K/gBkC4ruwbbnmZ7O4tap/gN/WGEusWR3TtUV lj/GynZ2TtTkFsdjv2v3eTPQgw71/K9u0T/E9LOVEgor1n1GoI5fvV7YhurbJgSqJueN1BhT i6vEP+jPdXwbY+Vp+thsSIhdyCD1etDfsZUKKY0T6dEw90eskSf7/lb04xjMvFrePXZdWOWc cFH1pCAGyVaTHYIB1QTytDgcScaBz0K6BeQRRKbR2jTT3qOPguOrzoA4KlfxhBwv/TBblMo5 wQFM2TSiGedqTaUROWw0EKnupvxk/HpYZ2IAzdoDYAzLHmhkWKF+jVJpj65BMr6cpBTmIS13 5XxlxRASRORarzPoXMPfmr+m/DIzfBkbzqpEkW2dHEVvvCijsBV/zGWlpKWPbfFuqOZPhry6 QLLsWMSnJNebgAD5Hy5hMMynWpD1gndP3SUrDOv5pZqU3F9i+cOBG52hW9N2FG3O59GsUhoC 
4FwdgIKp2sfopH2By4HWAYJrjzCny5mW0JA19AlyGNivGhZXDvg1B+mIQL7wVRx6vb50Iz08 SJgdVrwByVR63id0wPh3fyrMiNh6VbVQsCPWs54FLAtLKHPr2W86CZRNfrmod+ADfUrMfUPn jDER5BZFDd8/iBQrcF5kjXgBtouydXuLJODTiEWXSfh7qihWDEyQgA+ksEYtvWSQrKrdNV5n Fgm/28pd2Fei/RZdLE80eAadGx/ymbFGD50bPqJqOHPREytwOiVLQiWfKXfMlpBz2RuKHuJy y7/IBzi0n58yOWW/2DPW8qy7T4Vb1XKUiTwjRMuartLwvprEPpPbsfIs+gZ1Zs/NaAlYcXdC tYNGgLU/ZaIhc3RYEcUKJslqHueB8gy0GXtOPL0ZirFSOVQVbgxueQSfVMkcEJ2g1CuYbGXQ YNJ6hrx6suPExtOH6uacBWJCTngSiaUPYKq/R5x5muRr0WqgYr+vYwLa2epRw5TfDeCnbVNg IRGLkDILlXI2CBxOcpCatwwKw5FIKtjXh8IcTqs/8MpJUNc1Bt7giDEJQjItjjJW0AcCf0JM UnvTdH0o8d0pe2y3pYT27yaaqiwvUbgWUhlh8PFpsXS9cO//9c2KoCckITL/TQK0W3oxUe5P 8G1Helzw06q0X/K2+n8WJrZxu4aLingU2vTOjwcjRBir1E3qKrtOC33Kdi10AYo0fNtKpkCr 133k27hAt3E3CIph/ErQU+jJT5hU36tf3S2LvXLKkQfKkinbh3hUKkWExAkzedDjsiwjHt3r HUiXYCLZzfN1PaZ9sDCcRyckpNaFbSerqhfIFqY8f8Rfcc+/K/a6AebDK6MnkEqmjxy6MFYl HPFpgD2GoLwFuzzqZuYYbErxrtNmZuCsA+L6hys3WL92hFpOTg2lby57IUeyEcted82kX3m1 RO+JQZlqQkHK3Fr65kHbRtX3v+xoA2+9kB3glGP7FK3pwUtiY7m22bhigIM1fVVijScJRnp3 AMBS2g/BXSSidrYm7TDhP1zdiB3N96OzymxHXHQ4uDwJjoAuYpLf+PLwYddhoM4EUph3Phqq o2LXbth4BsM2lMQZxNT3q5Q4ekbYDeW8uFo94D7X9+KKhH3N8H7BNdGVakX2OMh4K2G7pSjP CrT6jE/XKo++cZHKNd7kXKOYDyIrObfA4h0HiQedItAflbiKu5fAv7drIhB2V5z84DzlHe3Q tMJtKVRx9ygaqNZucdt++86Q0qmhZWIvqD2P2rZ8UvxzZNs4YrVavJdA1q1yH5OUwh4TGAPA 4HlKr88u7bJFqCWZ+77Z2oKz6XtonnJC5xmaemWSuDDirseiyeb9BBkm5ZD4K2zsjjN/OUH5 lEA+oWh+Q9GiSD1skrYfDSAuculw2Kz/gxA8O4odFKAfyprCbnCPPPu+Z0JKbzhsd9MPxZs3 VW40u02TXW5KOrjJvJknaJLstVe4S5HnbIawOboARNF1bPsTSLff05gsPZXSigUhEbVXRm/k EpmTBCriAAXaBM+vG8pHSVy472ZJ+muDCzeJuOl9OW6PKjY6WzbpF41l9/PNvqKcfeLUeJFi Pqmgnj+L7+MY/LxEIOVqQNJsDuFO4Cyt7gjBdkE3oMcuweBvgVwNJ0cg2LYxWXo158QHMIP9 0DtBKSxoAhWO1KROMIdAm0ok+dMPgs7cKCGr/AYvwtyO+vqRtF/cAw+LOgpdXNRU/7J4BIA+ U145aIqhxA15U6N1H6uTxcCmRDpLq6kCGxdWTFNFJFljkdre5m32NHyjXUSg+pAO5CCThiGR v4et209wffEGxUjD3lvNzwCkxwcFB4yJ0CCPalRrOj+nRHOLc5eiOd/BSn0gciyTBYXCVhfX yHcoVmUEH4HXgYJYVWf+eId/IRuemjzJ4BiejLrKDZ4PmrhP7CSWWhwiOO6nDglPC47C3SN9 
WcFN3NBjno0lJruBjVlRcBfBwPyG8gAW4RvfPkYZLwGfGrXlGVjkGrw89mg03J3Bzo/pJ8Y/ vR3uz+9MudAaM0Wx+14kj76eYpl1FwlCdY84nMVQ3MVjqOCaBUvPzC5U8d34y4aT4BUNTpAd uJ+ktk/YELQrdTsX7EVDHLA+BbwYMhAyrxp7D36U9U/fn7U703mOanmru6oKtjZJ0IeqF4nR J3xg7/BChc1UNYdqOgWPDMIiEgCWUaxK7D1l2srZiMSDKbUj3vur09k4hi1ZwPCJXXC9pkVv NUXsVwoyCXOur3fehPVXerkmfwTZkmpc2Q5ypaJJRG0W/UZU/kj4VXDXJ6wTFgRcpAzoZXFu zvJc6z8naU1H1gh1yjOmcQQGfUzchvVb5iQ4p/jRYkawKZysfFToNN4m+keKGlTRyV3Ify+z nY9InCPCdv0OdWlTjoAsa2Br+2JSuU3hRpf5KwPdGOjYmeK7WDWTabfxgHwlptBUG/LBUSmj cAkMU0RlHA0bg3DA8Mzwbv17ohptGk89/MxuywQmbccxZkQmVScNydPOo+qCZN+h4G4b2a5Z oRA1M3LWe3WdMqtduHPEk/bR6ESb5MfjZXX+wEOlBoPxglk22QAhuaBkHc6l7ggqst3INNyz PKMAKqF0cdYBZPmU6050lE/DRxqjurbRUAIZipe5Y5OvKSpuvxMgQai9TT17rU2YZB6UeY8n 8tjejPLkmwCgAjy0oogQB3aaV8lgdH/++L3ZDaEx0kkbvNfiwpUZyswhdZsQccLhlAMfU8Zy iJ4A2A7zjNuehYZyupkiYAnEwIcMd26DX4R4VlQGaq1mn6iisJpxuxCBYShUzVxfTsMbeXuq rP95Tfq19J8qkXTmEqP3RdCku9lxZdepKF1zDUHakch9O17dnNvOtLN0k8LcHz2cFKVZ6kBS 56tDm3RKUIJqCMWbk4hZSHBHTqjf8upXabsJTkcyF1g8TsmAUYW5VG9GMZ+gIkJPxYnvmQ21 pAXAB3NTGr+c1t15K5u0X/48a0WvY5YNuYzaQRq8NowhKoB8/U2qI/7ABOhFPta4xcI1qLNr zL+Ceb13tGspflFXSmNKWVMzCivRlsAGIx1tJ/3ef9RvXn8CjI7/fXuJLlwRohzdRlsTf9Sq zu2nWnqejFoSdrMwwWB6oxWk8h73gB3VoJ9OgPH30YSvk1G9FcxXa8X0J0YpDwUVQQxN/JbO /asmndudiE3L62AtZ8WIIiAem5xgKZNfWEZ4liKKcQt3fzx2sbPsIV2FdO6rs7mHM019EBg9 MDUjxzLZ22NP27sxKubj3S459KzhVtx6CNKLftmcf0fkmLuBU4Fl/7oqp0UZUGpLmq3Ae+65 nEkL/3mhNVq1zcT3n3caLWXO4b3tlDORWG7TCSqFyTDPV/mq5UfInUTGt5M8WAl4tAyKpxyX xe7SIL5u9Mz8l6OaQCfV6DYAQziYy/3Wh8rf6R1xj+Sx8OVgWU2Vju1HW64AOhiGTKaCV8Ct EpaqADdLLPZJQwms6QWlV+T1FhQO82+33y4Oe1PXGGhFDEm1UiwPu6bJVuW5VaRYngvc8YzE 3GFapvdKRkCHn2NNZSZzyjdwZMrcjDJ+CMKe1x3T64j1kD63/xTjGhMhNegI9QBhP+dfhrv6 9i9VIGj9Ckcc8gZdE1umi/PT+OwwmDgxrOWjR89TbDPJsAPFhZU6zs9NlBlNWTOluzGRwGnC x1zAsb/TWQHLBHU5FbYR1atzjrU9S+hLD0LA5Oon+L7hJm/ZhVTKSSdnhQLrWYRTWgy65Q0y l8N0t7l0p7/fl56TdKpUNKL767Df2gPy3N/4qiW09BqObsemL7Dmr63XAn/mlZrbtuvwQ50j RUUuVFw06NIlIJz8kZLUjTaYbeCKkYhbb0P3QcxFPvvLYo4hge2rRbMwWdmSPUTQ6jRQai/2 
e2sV1uQRzMTGYcNDuisQB+hO4QFoVxg5/L9xMCr+fdVmFSEueV6TVxGFXCJ5m/8H8zyIrR9G A1MvppRdMQwl8on+XMrcn4BWs9YxkLSoZ4GmoNuHQszdWhqxeW+eSoA/mXWCRSlUhyfD1y6R kr73pABWD8PSM0/+Qw6xOrEdKOZBRQGZMkyUFb4xl5Uz8D3DH64d+bpBByIwt6uMZIsRylld LY3/BL5XlECf77g/u0p4KQJJQAr0hlFUkcftFKQxKboc5dN1d+OZvcFI3pASpzMU042L7EPE RaDdjVuP0Mjx/H3Z6vrRiMz5+XO+yQ4JjyK/R2SJTOu1dbSYhQ1Ia/Xk8+Hpi2rS+wU+Dt+f PVZ2cY+MNx287JwBkg5VVNtS92DeBgOAl/bIWpppTM5iSkiFUdMgO0oDRN2iPbiml5WKtj1j tmkkYazEc2ZYYkipwpItHzLNLcpab+nyr8h1moJnesJJJWwfTUtBFj3V5Nm3daAx9axhs8NK 21iZ9rkzMflgI5HaOV6XLRKIuMVhZnC/2RCJ4wgjLW2BNQBWDxjyyyr6vKjDdg96mwO1sAsT SbqjSx8DvQCgaxyO2D1Lk6Uhdx5MjKbKjIBlvun8mqu8Z0rtbDXqK19LYiGDP0IMY+ZPNUBO AQAKg75Pi3TZ/fP45VTabHqpUhoaXURkdASWZKc8GcebTB+dxV6W5e8cF3bkI2LpTHJ3vc30 qCOMiHeRGVmWtb2jA6JiAewaUEfCh5OUbV0m1PzzQOv7/bUfzLbfEEEBnJqDVGmpB4pHDUwj 2+M5DPT12idyhvHkUVUzdoxktJ0XK1CgRCFM9Cxnhv74O5gEdAKzq4ewTJfBA0oO1wFGkOKu mvE7YaqrADS29Qhd49Qt3JS+Ooy4r5a1MdE7O0Kd40y5qZPBta9C5wftbx/auM8bAehA0Uaw x+MjdT+7pADhwxCo0QmMIrIlH8+tEwt+zfUPHv1BB3VSdkgARPzq+0Lp5X2IvkE4Q763EwwJ mX3LNZv31MIdsPCtHkHb6Z3MtErDyZdUOP6G48M4P9qVYG8JFa8jZGSYRB/x/zrXBfjv+RCt 1lrfSLSJLRyQ9IVPmopztRzQ8WifSXNNtMdfzbYrZztEVlUN1myF2DWuOnRKOzR89jBNLppc hpJBHDT7UWY/JF/IG8utw+hULIsI0y5MkIx9b7zq9xYVL2+HY3CuzKnJhRAfQxcvrR3/kv3E c/L1+wY6M3PNBMMC78goy8D+n609cukmhkDl4ndRzAmNXApN/iBzcvyoW85SYnxjhWD4vdMQ /4kL5NIerq9zi5anjJ0zxO3hDC1Zf8M5+pGYvtjl/ikSe5/WhYpPZnsJubQvWUf7x6/nrlDg cZlPI1i/fWF/id3lqKjY+SK/0kuqXXhQBvNDBXwQTcA7oOOy7kGz+fHaV0lE7yK6bUzXI1GJ ia3pbmMrWBdpePYiuQJzNUKzR9lC/QMuPVl7j4ViF+73tN1vJjc8iDQCVB8qyNZoHTpBH8Sj bvTRYwW4zvQJ3VOugez+5suaSjXD0bZYWwd21SH2b2raUyVGYrDIvFof4Tkq7uAtScJN9eRF f9qWzpj0Kw44folDnj+aJlqf6s1ah4UaCz5kjRPz+dFjHKxSKMxp9HLf/RjHNiezfe6IRuRD OTL0NsD1oS6gMThqoeILwM5ZA5BT42LGxn1NnU3MKp7G6uUfZWT5wgYImrv3UgLnw+Di/sIo LTCRRh7Yt4T192j38I7dscPweg3Myou63VZpQO9/vbectn/ErOfUT9CP/JRyllbc8VWt/rWV Dh+7s+EFX/yVLlUYDZBQDNiWydHYPr80J3gVr7+zYGI1bQCndguowh3gQ3/c/49mwVdXnAfu tyw7SYJx7OX/FMGlIGgvHIPSR86JKNIk0O4QgVwLVfXcHYmPNwfHYJ3Ph8SfCuPQL3Z3X/+j 
3EFGEELLhYNyHRY93lQu/aLL9rQh0ICaaLrHpKKazBzuIDWVNBoTFbZcqlYNU272v4n7fgxU kh87sQHlrld2kml6DAyFs1O9c/HliWpGR8YnTUTNMcMdkbIC6g4QmVKenskQq8BAHaTs4K5y KPRA6vmfJ9mjXp6+UNOXyM495EanBkde48HJHCA03woMH6z+yF/qH/L0q13Q0VN44UID2hnX LYLn5nmi4Sp9JE0HiWgsmdqQB9tWYtRQxZ9vLH2gDZsWV8xMUcFOnhVBZqI8+L13x5K9YSMj 7znLlBUQrkCVmnFEbKHKP8qkzSAuAHgu0jWK+QXWJtPQ2s91GUsfB69mIB82q+2ryAOFQb8x mKs2psdbQ78S1fPMIH2zOd7m9bUPA6RKH6OdfD++rKFU69XX2mFQ3qYq0M/IsqIoncnTSzfd snGCyDDHywX69SARbvtODOQfsGSlf0gsj9t/FIItm2dFQheMqTLGCPrQm4Lbpdtv+2XsP86U YPQPx1O52mS6wN/aXHdivpMrBMHWA5dBYMhNKC3g6w1A+eydRekFad7lDPgBOotXoGuC7Ilv 1o6i1JE8FelwMGcs+ASA94aI6PsQnQIgl3DO63civxrgC71LkHqB62V2bdnDc+op6U5hnI7+ A8tP2y5I591WEItqEkfCEpwajMv4pSRz16vkBT/p6GVfqr8Lca9DFDpxZywAstHdaNhKfIk3 bGj35ptJpFFn6lTMgcM63ZfJKq5Gqj7vQOi1o9DvJ0OsLCl9FQK6dD9fizkVyKtytFJQ7SNT cGNzVRt98CqCvr09oNUUocjXVAJ9xvwZxGSFDp0WlDUM5ZsMkL4PyTRPUb4QCOImv5GovWCT hS2VdQYAHOYPn0PFnAH3kDRFHEx4kmw9h0tu1WCNYDcfyRIMFNblH7goi6v5PQFBKCEXgqC7 M6EJnI6IpmXLD269Zonl5nXqjfhFSu72cvXqMaYCoZRte4JNXvFTdbvP0tCzmXrCNDQmPETv THXYyY0k2MCGantmqNJAlja2zEXw7Da6+IDoaw0hzWmb2Lzfermnqf5eJLNqpVMdCDBJPzmv 4a71LuJ0OSuEDJdlCesPMj0mHSSPeYTg2Fa7Oxq/DtKpYOHJFaokhv2ZVRQXRcuqKhbv4XBu hLMeehXShRzKZ0UTtw63P+P4bCEZ+ViceVYQ+HlOF0/hDI9kuVp1fJPjqbF5lSLOZQavCflG A5MeCGDufYh0VSuhRdTiYTtPHLWJBbCjltxvjWLgU/czuXTOJMdozAmsD4jWnnL8fNajUeEW IZc/HoTE73rw29fH23ICdR6LkGewJ53aQJvfQ/u1xqeHfmvrky2VryUpOqBhzB8MD6Dyj8aN BD0lJNcaqJN7SOPyqICqX+D90BrEig87A9Y3c7ZaQ4qFWUuYacTnQBwdH2ICkKSiCY7aqJRt k2Fijjhvbz3w2Wot3y+2wFBMVscbHqIL3j8FYuMcvINNESQWa3ljZtuUvkB/jBwoftfIjj6s mle7T0eioOtNpV3dmeMocxA+W4cjaRe7MFSHXdJJoc/MPgOrhiRwND3E6Oau7ZmSfKC5eMCT owLqlc2Jy99XQ87oyL/Ywe9yrvbSSM4vw+YzBw8Dl8EeVuGdTpuiX9X6dEHuxZJX8VOX6mVk aI7Uf6kEGiqRKx1yO0sbbQMER2jist+oV94Uxg4p0QYdOJgUxxsKhiQyPHxAgWhvFb4+tlGs 4RfMZ0qXasGXRkqlGYKCH74GqPaSfUwHZswSTtTILctqChUb9QCCNpiSsvQHch5btZmokGEl ahdMgSR66A+m+sq8e4PSzt30HWaM5TbKqwXfbJFIdJNjR79Ih+OV8ro9wsIEDXMSoOKNatwW vWBCHX7XrB7oRCWxKcj1rFsNKFmtr2Xegk6gKx2iXuzbKgupNG8yQB86mzgk2svymfRva1c3 
Tq5dHdQCOjixTnBlvlqPpOhG1LVbgA0Lyseoglw+066GMoDTMdAFk6LFkj5h+5rE4tJ4/orJ orFAuYpU51xu/dTbnqmG6kfkzpzGUUosRLD56i0A41SBZwuO9p9NbSZfBWeySQOoe4tVrplI V2lrc+9acV/g7iHRO6ZXHQkQqjkMRukpsuYsXwuM7mwMdeCAUwNpajVnFd8fXPOJf/Tz94bh eqexFEL256orGXbLcYN84y/Ss0iV3sw1I4B5J7jZcOd7rx8XQRMHjFCeVTiqhQhIsGyZpFs+ 08aA9zrUP5YQHZSnW6z1RVVIiDWgvG0l4Nn8KRQ14ChW/s+eNXAkVvHduhJuANQ38srgbyZl kYeR8PtzuhZQqe8eMx/92KnIDOy5fzmK6VlbIDMEsbsreLzW1tgON2bUA6RuBe2R418sTv3E 2/Ou7E1gxTkVtv55zdFOvFfUQEW3bSPgUl2RLqdKXVGZAPEiAEMlKcwkpJrd2976+LtYvSd+ UaEdd+2ubFRTSU3kRU7hL7fhU1h8ifk37nmWwYxWhMyKtLNOVbJtjWS/MvW7glcg2W4KLiGx tbZ2WWnDHgkz7EHrAO08yjRl/DWxuuB1x+YJuOouMQpLLur1QWLKlKDr6ETrijV8Fu8/N9Ld HP7vgCdBcTA73pHJO/niiRrTNCn6BOGwf0/2sOy4crPLERB4BxjqVJqVb1Tx6MYuKIxxDOO3 9Ds2moW8RohgIV2RUhReg7dwJqZeSV4QO2oonb/unVqT5qvX8od9ydmopNNOehI90gbtP3Gu yP1Dn0RRD6lY9k77VzbFKAZ9yr+FO0yLLErCffrunlaXjj+WlPjHmZQzsQwgnJGJ0cOreUwR tuSkZMTxbVOr5a2JHOMSgi554P5nOn1SSxiYoAQh6JZBYM1QQ2TIGvnRuZTSggDoQ4gXiJ5m LJRdah3wUhnrIBIVZjmeDQdxXGsrx1IDeFdiwL/gWcbgSZMjGIFt1vIC0ocxiRfY8lrFiX5P ffjp0i6GW1A7Rdt8MnuO/qIlFWuVd60KT+catb6vDtz0cl0lRWFUTUR8J3QVLaEbkLFWsgON O/BJq+nOzOI+TlsO2zKCJDSce8fhWY/XAFkVFpwkdzs5wX6A4rRiTGgTiWOpCGkAj0EiLIiq N1Fsfl2wkt3u0wbeTwYVmnl0emXvYpErIgoByiKLVLZ0kv334nctpWLRg+ttLotz2TAe1wf3 V6WIut2WfVdzdWbpNdtdNYT1USklS+w2nUpErMqZ4URV+YQnuKN2eoK3zfmBxtJqsDGEUcIy jAGVKJjObl66DObTH4tR83yCd77tixxe2NLF1ATdJZKvxCkbN+B+GYnPbxA9vmMlV5kMOqU2 lYRZs3isNBdddOxXOhAcPx6hEW6bsnBMcMZ/qDqFlaqjOFj604XRdktiAyaT0b7y/kOnq4ib atHG/ZMVPN/dIoJpYkZ0pTvpiJNGjeDl+ivfFA8N/XIkE3MJHF0PBsmXHOD3xGMg6g+prcjx vXRTm1xKofMkYk/1mhMxOSyGK9WQzAQkL5M1uPnHH2gA2tixotaBjZD0e83VBG2GHCVf2LQr fw4j7o2rk9LRyYMgJXhXSmd8IBSpP0w11yFCeGt2NFQJTtRW45/Nw1hnOgtv5hFXz2SvgB0c Wusf13Z7TUWOFGSGAcFPQLLHnb9Cna9ECyWeLc69KvDl+NzsB0UGaFcHtmHQMA4DxaMe5eIm b/u2B+nDDAV1P0nSGiiNu6eLnU/RxathoXlXp9a2kB0KaPTSQq9AKqCBujVTT2kcMtfwbwyq qnCpMjygAgcCKJzwTX1465AZDR8nMLerAuAQ7Zl0zSJodCw/uEJuz/7aDtUN8kM9pTxm1guy VnDj2wuh2yc1GQCNLl0Fh3iD8xemWkoPIT9R5GU6w0qkvJFYtbGD5LkVEcV0Wfu8AX8i855q 
Dg7SHRpEjqACnYmcMxZzfkB6gEKqGgPEz/t7zrppp2CIZUcNdpNwbiCAYt3Kv9KVeWaOoWA+ T6XrJZIxoh3atWKRi8o9XdgURVKl8vkw0zQDBq88eWs+QFs9F6u66ODsijWC4/9zRzJk4zC1 DoKQXM8IOClqHFLshqOTc5ZiJJPOkFf6w4YE9qvwxno06MgsW9RAxXu0SgdHAaDUhAtYB9Oj P9s4lK9uK6D+zgqgq+MGyXZsH8uqL+wDC+hGfV7Y4BHSoa2wrTkoKUB0IBlegJGGhXBLlrQB kk7PQE6nR4Pq8ij7trgGY3UDePy29qrlpNqyz3Bf65EJCl+QnejcJxXckjBTptJDbh+cKIa2 305ftm+dbJJrztzF8H6MS90sk0LB4yLOHR50j+cBrXiAWHdoEjrenkv44ZBEeU4VfV3IZoEr BZqvCfcbwh6x2jl7OuXd1rAMAGK7UJvLnLRKfG+Cr81bWBTDxypl/KVn/gYfkGvPRDWPqtXb YRDIha+aPJfYcSWmOAGIR2zxE2APWTR9pXFvZUyKm/vPms/+U4mz4oEyreq7Pay8cb9IBBt8 Ak0g4jetrvrg0aiGMwi5xhnfjc/y4UbzDV1FQkNrVhPKSeU+L37t/ZXPu4oICLPZVkffYDtn D3lnyQSRL9YbR82I+7EwWPLL9OV3EYqfG0WI2BNvG0XStYAMt0R+FEzJTta8h62sDaTASahT R0nuGw4hl7RKHLPlGHBtUZeo0aHGIYOjc43JGvP5ZuL8M5dZse87zpG8FaSJ95dXuB999hBO Ed7pU8BcRm5z5LGr/pwx6Ua2oDEglSwaFv1sYPBMHrem1yqvc7lbag3Kn7YmePbKPZO58/n4 sf827MeC6M7QL4nI6LWajGm9Q7L1v7XqH5IrWQjOHxTR7CxlaGBf07WS2ofdEuaJgBcyEPvV 77P54N+OpSMAsmoUsAOizoY1lRdusJdNgRIOhHk10+v1Y3bADQV0f/xIPQOM/2YpnFIt4ode +10Zb0j3BOH5UO+0RIlZBWLXdxB4+G04FMIPS2BnKzEfsVq7+qADZhr6bLloAyJn6AnRwoJL fMUQnnpt0dFq7sDgxlemyt/fY7goriwJ/gfJdniYnZ9k+zXgJFhH28tJ3qy1As1TPVlq51wu 8pAA3feGWi3aO3+aQsAKJ1cWNwyZPWdv6qchi1hT4ceW9HV/hafdkmrwEBM1lbRLxlw4S1Fu aDx4MbFbW4IfJ+PnF8+eK7grrqWGFFikpbx09iNLNt6fZnT5s3Ka+6vmBdQuXU6Sjddzi54j pIvNIOllme0OpiMPBTTtOZkDcOiWtbNxO0NiXiu5By8P/LMbPlvWgmkl8AKP5k859GOg0V33 33pNqqP6HDvOENnxeedKDovLlpEMNKM/ccqU5UcGLXsO31I2UsRfLUcyCqJ6vriZrJ21vEm+ 1gClHJesikVycZa9+D7nd040Ke51/OeQYBKHvsHmY1dd8ZYVFTGOYD67Z9QZkNnzwaLuEWnv wpHWMVYDE8Wz8mYSmVGnXc+rThvcH4DUNnBd73EJpQ8A694w6Eu15mp21h/JDo2AW4mGKnDN kVSBiFdjekGvPXy3GXWlzvJCByxOp3TIw3r7VKqiySl7cD/VYqcTT3/l867Bf4b6sDsC6kML fMVLmUukREvudIsO7IuExU9lCRR6u9YDJKdF6aAuF4Qpn67Ib9R+ail7r4Zlaq/cMJKealw9 PscwOYuFAhrhd9KjyxJB55GoMBBTbQqz1EUTKrwvC3YIMe+0GJX6NapffyQtIIawsvI3cvk5 3Z/ftHcG7rzJ+wZP0rnkqN3bilAoSBL4oEA3UaqFsV7IzQeMQXWiT5GjfvNGwuvp0zHgyhq7 WMwhT9ZmKCLl181O6jeQ8N2Ea85wkrSxPY5BkvI9K8+USlmE8KdBcti+xixdrcz7ZuwiaYCX 
bve4jZ4YoQp/iavb6xCylYw+KPpWrvZFLBhPRMaXZC0rw1J+2qUgGKTt0zGw4DFzdS8VPtA1 h0ORgl5ErmpB7qTRj9TKBKriWoZhtK2JNnkdNvlf4TIVa2davbu10CeNW6XvqaKIdhyPZmZR xwNuj9WnzrFQh4/T6u9rnjdVBHrB0xoxrFfy2tvQnSjwsqY+paKcFYZSOtP42rWoXTeoE4cs oNdtLiFX6CyHQSDqc73RjwpthbjWlKgT0jgIpURIqG+b6NY0uMzA2ZiHqCzs5zDxRZcDha0x ComC/uyJrba2Ol0TKLH4ZHDDpxeVHpeqmJ37L4HqLvGCavdmt4USjk4RvyH3sT86inWJ23Fj oetyH3l7yn+j4/919zzAii4F5ICK96jtYPqi8xNOYM8Q30+B0tLLr97wMdEXZP8ul9xsAns1 YkSlEfLG7F/WSxpM6fMI5mWZCMDvZex/+psxabwZ2elfCqKTUu9j3oS+w0IOI032SwtbYKhM Hk/4agzmMO07A8Y4eN6SNZ5FoFGm0aZiZiZ42783dbC+CuOXbS2o+qK53Mb0vrCn5VG+7H5z IcHtGYrUCDX35OWh/A+Wrs94HxcGtzK5wKn9oDguEMFA2Rvw3tWfH1ZuFsdt+Oqx0dLGpyg6 QPncdFb1EhSKLhLTlmda+Pf1YanBPu4iIPr8ShuW6bA6yYm2nr1JxR6AWPDF1VQF0xz9X6ZV iRwym5tEDoVCFVXcbCOpJRKiHN27eYPU7zL12YfxH2VcrNthQtRDieohu4zFQ1cZlsymFjpO snpG/57nvKi5JENsi3fdwjqcYpTEXOFmdRK6aHjXzFgRtV8hD8ocRmVxngmDk8kypCfp0WOx YfHI8Cdc+aNP/jdT8d5MbDSW/rIAalrhJOSat0ayzN/0ELNv57/YGlPPkwtLgrKy6vYzH2Us +8sUROmUbUKSZAP3qhfxz4z9L8eUGsXgUMtpq3IQb4W+iBSPbAURA4LNa3xqX0EbUl+eJNHz JXs34jahxX0mKE/TGwiGYCKDCXbqPnuWFd0LRyFeLQ4tSoaXKHMhKIVLZGPcpL3WNhQyX2YJ gYwwsXZsibj9fpdbYvYOvezaJC9LVDlmKxQphezvVeU96Df5GoiRux23DuwZ63ASLbRdXCJ+ /0YClhEdcl4yCJKzRx+LC+PiusfNog91JxKPPuPf2Sxczu5BSX91sjzKZ3Xsdpx1SymBwQtU 6eo4qQhJI1Rzir7/i/IKccfUhrqUwGmY7CmhmLT0Ivk8djm7XAelsSUxLp5npAN6QStGwRUz ToUZwEbTv1BsQsYZW0CxoVfqEcioM6twFXAE4wDuP/06BM8F+WDj5xciHsRqK7Ha7+ClVjsC rHvoPzf74TJxXe8V8NYTVTPpIBTaenDNyfHH/nb+feP/oqVUI9y0MgACeJKy97u9SE+dicUO Eia4JNcHjF3w7YvgYkjgwJIYhIgexFPStIUJx25mxzAX0j1gyNcHP9HlmSlmRwl7vikoBUpB vExwfS6Qs8S1mMBP1fKOjTEk9NvbzuPiiYuWslPHFEHlDdLILc5GaMeTtRU6c/VgTfCYByAm 8wnSmuW3zi+6cEEEomjZCAhy6e272MIt1JI+COLNx1Bo1YNcDaB+vgNgdPA8jnINKBJUymHT 0U2zhb5BgPYWmTQLF8taZgwMR/roEAEu0S8VECajSaiPnFZMao1QY51v9u59zj538ck7bW2J zBDhmq4sRo1+MM8jJIJxnIUBR+oiX4A8KpN/hEUPxOreNH0RfQGS9gi2ELmjJ/oodtBxVDaD uoiTCa9BHMpCVxC+rqcdzt4u0G3QLkN5LD4mrDXVZDizgN3jwv0/y6EJMwIk52vsR38ccez0 dJkjlQqL5Ziggy7WYtuPSeY0gVWXgQh03btqoe3ne8so0K2Jj3cs9KbSH+GnhNF7petFMgoE 
xGh20OawjYTxDiQ6QNY+WFy1UUBi98AK+k9aYh6nRSdYFt4PHJa/wZLY/TXIawH1XspE3IUd W/OI2av6YLWTaZbYHBOIw+d4ghmQhu92Ak+OomvDcmnbJGl/Vz5aWAsiwGW+J2hYqVTMS/GK 7XGo9Lommqu4Jhj7iMcvzi6Kn+hBjWeIiBlkIA8DgfUBXO4yopEPdLUC5ocwRj9EntfFjC/U tOxR63eYpp8FCcOZCmIcNtY4Ijr85FqMn9UIEDcvGtNyrUZvVbSrzr/yZLpjzMSst8xmqynr 1zVTHONJIwBlzwAZXTCRYiS5IQWrsLKdWUQi7GdN3ZqepGoFdhEmRlXDXXttgzgeY9ZKGuHQ DJmAffW22gos+3mpjgf6PULaPRZ/efbg40SLQSktqBObSHSvPSUS6FxHcRJpMfGfdr4WAbSx RIcNMDTqxJLrnroclGmP8k178IFi85X1nYt/2zneH3et39wG00OFsXVl8phhKH2ZDB/geAnG FapZmciLvP/QkCreB3HR+I9vdiA8bDRHWh/s30Lr+d9gNEEuzgpZB2fZ+34gCQkuzokQS0AU RWXHaFiCI2uHWUqXnDnlY0WzfMsJhQa9ODsc4WYlO6nCaQSZcQtliH1Je5nf6pFJjEEu9moG 7Ii0Ii+XcLQYm0b5T2ST9/cZmF6cXkpQbMAej3gmw1I8GKK82//3Vj7mLz4shFGufDeWx1UF rFCAlo/ZNUnQPoWTwfwuVvxA1mBJxiHvQBdhAtAvMPw2hwoHdL82Ns7EzzCqm3KYrMK6A0ai Af2SVb+sORwPnCgMindgxh/LxNZjS8F/HnTXIafQrtvz+pjtpJotFCvrSH5/tegX3qO4j/hG hvlF8HpdOYQZVfENskbW1QU7CucL5vu/4unIbn5Jz9KgUnq/28WXmr0C3zEX3p7LZjE9ef/D T5nUcSpxgjNj3ztuKbqP3SLiYkCp5rMuhROjZ6Hp9KJyNm/SkzhKLjmOQ0reQ7fEPRYfCYyz 2fDny/QWS5KWVmOYNf3fML0yYTMRPjfN50M5tK9xDMFpRLtGin7BY9YA0/xvJ+PPgkwIglSj jFCz3vVVjBxDvoKNGYZRc7mYI9YISLNjnNr1wxXNRRAHGzZxkm/J9/gyKUqxM5KyDdLkD9TX I56Gs2HA96rGXoPCqi4NiDIsIs3e+uG2NAaoIbAsUjk7j0jSALAAlGOULBBFYL/3c5RFE2vM yGWDj9SfnvDVVevLW2Ric8QlgUbXbPJl0lrDfD04PGRy9xOuIITvk/yqb9ZzyzTF6XSdlNAT oVmcUBGPMSYR1kptaIItA4an+2bxa/EAqTBYTLQKbWZ8EKKcVz5mJgOTZed+Trv2FL2t8ZKv exzijBSlDK4uxIh+Fw/ONomOq3gQ7d+7GuRAx4Jvvo0Lq2zqijLLYdbrZ8Qm5n7PnE1uQax+ XdB09/QYCnWUWxgN5JKkuIfmBS3K/XiidCysHizBhmcrE2ejke73wub8teJMoIkFvD525/9k 0Q8/n+hwAWBB9QdIUXKUg9xJlDpDBPkojaIxX6sgYx9vTCNqQqx4VEIF2krTuvVuI8Me/Yd5 DxJ99V+RhrZiykV+b8ZfiKPZee2UGFW0VSvJPDah0UIOPsGJQUY7Jyw2LALe6RyAfMT+uWFf bZYzjkRlDn0Hh9+6YpmNd7ik1uOHPgEvKtcXrzPKGF6dvpKrb9vkfnit+GKBIw6cSHhvZcbd 27WOh1+mWbjeggEvadR6Vge/rpVPR2Y52k5tNsEqP+CenoXkUbD32BcBVoylUph0S6Vbu43y 32nVAIiVOa50IExcJXInxnGeWMOZuJ2jOgua0mznW8BxPsxxyWFE6WuHj1STQbtycXvX52lJ fio3nRnHnZG8aArBxs+Wh2Jq8JGPwd/z16eDJe/46NI+oE15bbSav1PQ8HHu/TmWrQOew0Ld 
----------rigjhktajvyuwuacxqip-- From owner-linux-xfs Sat Jul 17 07:05:10 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 17 Jul 2004 07:05:20 -0700 (PDT) Received: from
mxfep01.bredband.com (mxfep01.bredband.com [195.54.107.70]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6HE59em031095 for ; Sat, 17 Jul 2004 07:05:10 -0700 Received: from mail.ter.nu ([213.114.210.36] [213.114.210.36]) by mxfep01.bredband.com with ESMTP id <20040717140458.RCKB23501.mxfep01.bredband.com@mail.ter.nu> for ; Sat, 17 Jul 2004 16:04:58 +0200 Received: from grabbarna.nu (w.tenet [192.168.0.5]) by mail.ter.nu (Postfix) with ESMTP id 4203D9981AE for ; Sat, 17 Jul 2004 16:04:57 +0200 (CEST) Message-ID: <40F9321C.7060403@grabbarna.nu> Date: Sat, 17 Jul 2004 16:05:16 +0200 From: Jan Banan User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040116 X-Accept-Language: sv, en-us, en MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: Re: Recover a XFS on raid -1 (linear) when one disk is broken References: <40F6DBC1.6050909@grabbarna.nu> <20040715205910.GA9948@taniwha.stupidest.org> In-Reply-To: <20040715205910.GA9948@taniwha.stupidest.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3651 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: b@grabbarna.nu Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1970 Lines: 53 >>I have a raid -1 (linear) on my RedHat Linux 9 system with XFS >>1.2.0. The raid consists of 4 disks where the last disk now seem to >>be broken. >> >> > >backups? > > I have some quite old backups. I'd like to try retrieve more recent data if possible. >>Is it possible to in some way mount this raid system so that I can >>recover the files stored on the first 3 disks of this raid -1 >>(linear)? 
>> >> > >you could replace the last disk with a sparse file and run xfs_repair, >my gut feeling is that it won't work very well though since files will >be spread over disks (maybe not badly, depends on access patterns) and >also metadata on broken disks will refer to non-broken blocks and >vice-versa > I suppose the best strategy is to get a new disk of the same size and then try to copy the whole damaged disk with "dd" to the new disk, then try to start up the raid again, and after that run xfs_repair. What arguments to "dd" would fit best in this case? I think I've read that "dd" will normally abort when it can't read from a damaged disk, and the disk is quite big, 250 GB (Maxtor). Since it is a 4-disk linear raid I hope most of the files are not spread over blocks on different disks, since I suppose XFS (1.2.0) tries to store the files on blocks close to each other(?). Does anyone know what normally has happened to a disk when you suddenly cannot read from some parts of it? I get these kinds of errors: Jul 15 21:18:58 d kernel: hdh: dma_intr: status=0x51 { DriveReady SeekComplete Error } Jul 15 21:18:58 d kernel: hdh: dma_intr: error=0x40 { UncorrectableError }, LBAsect=243818407, high=14, low=8937383, sector=243818336 Jul 15 21:18:58 d kernel: end_request: I/O error, dev 22:41 (hdh), sector 243818336 Can I do something to make it better? The disk is only one year old, but maybe the temperature has been a little bit too high in the computer box. Best regards and thanks for any kind of hint! 
Jan From owner-linux-xfs Sat Jul 17 13:39:48 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 17 Jul 2004 13:39:52 -0700 (PDT) Received: from hob.acsalaska.net (hob.acsalaska.net [209.112.155.42]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6HKdlga013081 for ; Sat, 17 Jul 2004 13:39:48 -0700 Received: from erbenson.alaska.net (223-pm16.nwc.acsalaska.net [209.112.141.223]) by hob.acsalaska.net (8.12.11/8.12.11) with ESMTP id i6HKdiTR025572 for ; Sat, 17 Jul 2004 12:39:44 -0800 (AKDT) (envelope-from erbenson@alaska.net) Received: from plato.local.lan (plato.local.lan [192.168.0.4]) by erbenson.alaska.net (Postfix) with ESMTP id 5D01339D9 for ; Sat, 17 Jul 2004 12:39:42 -0800 (AKDT) Received: by plato.local.lan (Postfix, from userid 1000) id 3114640FF36; Sat, 17 Jul 2004 12:39:43 -0800 (AKDT) Date: Sat, 17 Jul 2004 12:39:43 -0800 From: Ethan Benson To: linux-xfs@oss.sgi.com Subject: Re: Recover a XFS on raid -1 (linear) when one disk is broken Message-ID: <20040717203943.GL20260@plato.local.lan> Mail-Followup-To: linux-xfs@oss.sgi.com References: <40F6DBC1.6050909@grabbarna.nu> <20040715205910.GA9948@taniwha.stupidest.org> <40F9321C.7060403@grabbarna.nu> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="lR6P3/j+HGelbRkf" Content-Disposition: inline In-Reply-To: <40F9321C.7060403@grabbarna.nu> User-Agent: Mutt/1.3.28i X-OS: Debian GNU X-gpg-fingerprint: E3E4 D0BC 31BC F7BB C1DD C3D6 24AC 7B1A 2C44 7AFC X-gpg-key: http://www.alaska.net/~erbenson/gpg/key.asc Mail-Copies-To: nobody X-No-CC: I subscribe to this list; do not CC me on replies. 
X-ACS-Spam-Status: no X-ACS-Scanned-By: MD 2.42; SA 2.63; spamdefang 1.102 X-archive-position: 3652 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: erbenson@alaska.net Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1853 Lines: 52 --lR6P3/j+HGelbRkf Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Sat, Jul 17, 2004 at 04:05:16PM +0200, Jan Banan wrote: > I suppose the best strategy is to get a new disk of the same size and > then try to copy the whole damaged disk with "dd" to the new disk and > then try to start up the raid again and after that run xfs_repair. What > arguments to "dd" would fit best in this case? I think I've read that > "dd" will normally abort when it can't read from a damaged disk and the > disk is quite big, 250 GB (Maxtor). dd if=/dev/broken of=/dev/new bs=512 conv=sync,noerror this will cause dd to continue after errors, and the blocks on the new disk which could not be read from the old will be filled with null bytes, rather than random data. using a 512-byte blocksize will help reduce the number of blocks which will be filled with nulls. > sector=243818336 > Jul 15 21:18:58 d kernel: end_request: I/O error, dev 22:41 (hdh), > sector 243818336 > Can I do something to make it better? The disk is only one year old but > maybe the temperature has been a little bit too high in the computer box. I've heard you can sometimes make a disk temporarily become functional again by shutting it down for a few days. I think I've even heard that putting it in a freezer can help. In any event, the most you could hope for is just enough functional time to recover the data. 
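Ethan's dd invocation can be tried out safely on an ordinary file before pointing it at real disks. This sketch (file names are illustrative) demonstrates the behaviour that matters here: conv=sync padding short reads out to the full block size.

```shell
# conv=noerror keeps dd running past read errors instead of aborting;
# conv=sync pads every short or failed read up to bs with NUL bytes,
# so the copy stays block-aligned with the original disk.
# Demonstrated on a 5-byte file: the output is one full 512-byte block.
printf 'hello' > /tmp/demo_src
dd if=/tmp/demo_src of=/tmp/demo_dst bs=512 conv=sync,noerror 2>/dev/null
wc -c < /tmp/demo_dst   # prints 512
```

On the real disks the same flags apply unchanged, exactly as in the post: dd if=/dev/broken of=/dev/new bs=512 conv=sync,noerror.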
--=20 Ethan Benson http://www.alaska.net/~erbenson/ --lR6P3/j+HGelbRkf Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iEYEARECAAYFAkD5jo8ACgkQJKx7GixEevwOFQCgmVw5BPRRnTIZSC2WQw0hb7c/ IfMAmweTux7f5eERcshTjvcZyNQqs7M2 =Vkck -----END PGP SIGNATURE----- --lR6P3/j+HGelbRkf-- From owner-linux-xfs Sat Jul 17 16:32:59 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 17 Jul 2004 16:33:04 -0700 (PDT) Received: from pimout3-ext.prodigy.net (pimout3-ext.prodigy.net [207.115.63.102]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6HNWuaF019023 for ; Sat, 17 Jul 2004 16:32:59 -0700 Received: from taniwha.stupidest.org (adsl-63-202-173-53.dsl.snfc21.pacbell.net [63.202.173.53]) by pimout3-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6HNWilM243886; Sat, 17 Jul 2004 19:32:50 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 1ADA4115C858; Sat, 17 Jul 2004 16:32:36 -0700 (PDT) Date: Sat, 17 Jul 2004 16:32:36 -0700 From: Chris Wedgwood To: Jan Banan Cc: linux-xfs@oss.sgi.com Subject: Re: Recover a XFS on raid -1 (linear) when one disk is broken Message-ID: <20040717233236.GA10234@taniwha.stupidest.org> References: <40F6DBC1.6050909@grabbarna.nu> <20040715205910.GA9948@taniwha.stupidest.org> <40F9321C.7060403@grabbarna.nu> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <40F9321C.7060403@grabbarna.nu> X-archive-position: 3653 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 2242 Lines: 59 On Sat, Jul 17, 2004 at 04:05:16PM +0200, Jan Banan wrote: > I suppose the best stradegy is to get a new disk of the same size > and then try to copy the whole damaged disk with "dd" to the new > disk and then try to startup the raid again and after that run > 
xfs_repair. that sounds like a good solution if most of the damaged disk is readable (i assumed it was completely dead) > What arguments to "dd" would fit best in this case? I think I've > read that "dd" will normally abort when it can't read from a damaged > disk and the disk is quite big, 250 GB (Maxtor). 'conv=noerror' i guess, see the dd man page > Since it is a 4 disk linear raid I hope most of the files are not > spread over blocks on different disks since I suppose XFS (1.2.0) > tries to store the files on blocks close to each other(?). the file-blocks will *usually* be close together, and usually within the same ag. various access patterns can change this though (like writing with a very full fs) > Anyone knows what normally has happened to a disk when you suddenly > can not read from some parts of the disk? I get these kind of > errors: > Jul 15 21:18:58 d kernel: hdh: dma_intr: error=0x40 { > UncorrectableError }, LBAsect=243818407, high=14, low=8937383, > sector=243818336 disk media error. if there are only a few of these i would stomp over them (if there aren't many relocated sectors) in the hopes the disk will remap them --- i've done this myself with good results and helped various other people do this > Can I do something to make it better? The disk is only one year old > but maybe the temperature has been a little bit to high in the > computer box. smartctl -a /dev/<device> will tell you how many relocated sectors there are and various other details. like i said, if the relocated sector count is low and you don't have *that* many bad sectors on the disk (badblocks will tell you this) i would write over the bad blocks (keeping a record of which blocks were bad), hope the disk relocates those sectors sanely, and then run xfs_repair to see how well that does. if you know which sectors (well, blocks) were bad you can work out which files (well, parts of files) were damaged. maybe i should write something up on this? 
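Chris's closing point — working out which files the bad sectors hit — starts with plain arithmetic. This sketch assumes the usual 512-byte disk sectors and a 4 KiB filesystem block size (neither size is stated in the thread), using the failing sector from Jan's kernel log:

```shell
# Translate an absolute 512-byte sector number (as printed by end_request)
# into the 4 KiB filesystem block that contains it.  On a linear md array
# you would first subtract the offset at which this member disk starts
# within the array.
SECTOR=243818336
FS_BLOCK=$(( SECTOR * 512 / 4096 ))
echo "$FS_BLOCK"   # prints 30477292
```

From there, xfs_db's block-mapping commands (or xfs_ncheck, which lists inode-to-pathname mappings) can help identify which files own the damaged blocks.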
--cw From owner-linux-xfs Sat Jul 17 17:11:23 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 17 Jul 2004 17:11:25 -0700 (PDT) Received: from mxfep01.bredband.com (mxfep01.bredband.com [195.54.107.70]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6I0BLMD019903 for ; Sat, 17 Jul 2004 17:11:22 -0700 Received: from mail.ter.nu ([213.114.210.36] [213.114.210.36]) by mxfep01.bredband.com with ESMTP id <20040718001114.UBBS23501.mxfep01.bredband.com@mail.ter.nu> for ; Sun, 18 Jul 2004 02:11:14 +0200 Received: from grabbarna.nu (w.tenet [192.168.0.5]) by mail.ter.nu (Postfix) with ESMTP id CD3ED9981AE for ; Sun, 18 Jul 2004 02:11:13 +0200 (CEST) Message-ID: <40F9C034.7020003@grabbarna.nu> Date: Sun, 18 Jul 2004 02:11:32 +0200 From: Jan Banan User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040116 X-Accept-Language: sv, en-us, en MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: Re: Recover a XFS on raid -1 (linear) when one disk is broken References: <40F6DBC1.6050909@grabbarna.nu> <20040715205910.GA9948@taniwha.stupidest.org> <40F9321C.7060403@grabbarna.nu> <20040717233236.GA10234@taniwha.stupidest.org> In-Reply-To: <20040717233236.GA10234@taniwha.stupidest.org> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3654 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: b@grabbarna.nu Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 285 Lines: 13 >if there are only a few of these i would stomp over >them (if and there aren't many relocated sectors) > How can I perform that "stomp over" thing? > maybe i should write something up on this? Yes please, that would be really nice if you could do that, thanks! 
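The "stomp over" step Chris mentions is just writing fresh data in place over the known-bad sectors so the drive firmware gets a chance to remap them. A sketch against a scratch file (on the real disk, DISK would be e.g. /dev/hdh, the sector number would come from badblocks or the kernel log, and the write destroys that sector's contents):

```shell
DISK=/tmp/fake_disk                               # stand-in for the real device
SECTOR=100                                        # a known-bad sector number
head -c $((512 * 200)) /dev/urandom > "$DISK"     # 200-sector dummy "disk"
# Overwrite exactly one 512-byte sector in place;
# conv=notrunc leaves the rest of the device untouched.
dd if=/dev/zero of="$DISK" bs=512 seek="$SECTOR" count=1 conv=notrunc 2>/dev/null
```

Keeping a list of the sectors stomped this way makes it possible to check afterwards which files, if any, lost data.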
Best regards, Jan From owner-linux-xfs Sun Jul 18 12:15:39 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 18 Jul 2004 12:15:52 -0700 (PDT) Received: from web53802.mail.yahoo.com (web53802.mail.yahoo.com [206.190.36.197]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6IJFcTg026817 for ; Sun, 18 Jul 2004 12:15:39 -0700 Message-ID: <20040718191531.80876.qmail@web53802.mail.yahoo.com> Received: from [63.226.217.245] by web53802.mail.yahoo.com via HTTP; Sun, 18 Jul 2004 12:15:31 PDT Date: Sun, 18 Jul 2004 12:15:31 -0700 (PDT) From: Carl Spalletta Subject: [PATCH] Remove prototypes of nonexistent functions from fs/xfs files To: lkml Cc: nathans@sgi.com, linux-xfs@oss.sgi.com MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-archive-position: 3655 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cspalletta@yahoo.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 4389 Lines: 86 diff -ru linux-2.6.7-orig/fs/xfs/linux-2.6/xfs_fs_subr.h linux-2.6.7-new/fs/xfs/linux-2.6/xfs_fs_subr.h --- linux-2.6.7-orig/fs/xfs/linux-2.6/xfs_fs_subr.h 2004-06-15 22:19:42.000000000 -0700 +++ linux-2.6.7-new/fs/xfs/linux-2.6/xfs_fs_subr.h 2004-07-18 08:40:42.000000000 -0700 @@ -40,7 +40,6 @@ extern int fs_noerr(void); extern int fs_nosys(void); -extern int fs_nodev(void); extern void fs_noval(void); extern void fs_tosspages(bhv_desc_t *, xfs_off_t, xfs_off_t, int); extern void fs_flushinval_pages(bhv_desc_t *, xfs_off_t, xfs_off_t, int); diff -ru linux-2.6.7-orig/fs/xfs/quota/xfs_qm.h linux-2.6.7-new/fs/xfs/quota/xfs_qm.h --- linux-2.6.7-orig/fs/xfs/quota/xfs_qm.h 2004-06-15 22:19:03.000000000 -0700 +++ linux-2.6.7-new/fs/xfs/quota/xfs_qm.h 2004-07-18 08:34:23.000000000 -0700 @@ -187,7 +187,6 @@ extern int xfs_qm_sync(xfs_mount_t *, short); /* dquot stuff */ -extern void xfs_qm_dqunlink(xfs_dquot_t *); extern boolean_t xfs_qm_dqalloc_incore(xfs_dquot_t **); extern int 
xfs_qm_dqattach(xfs_inode_t *, uint); extern void xfs_qm_dqdetach(xfs_inode_t *); diff -ru linux-2.6.7-orig/fs/xfs/xfs_acl.h linux-2.6.7-new/fs/xfs/xfs_acl.h --- linux-2.6.7-orig/fs/xfs/xfs_acl.h 2004-06-15 22:19:13.000000000 -0700 +++ linux-2.6.7-new/fs/xfs/xfs_acl.h 2004-07-18 08:36:48.000000000 -0700 @@ -71,8 +71,6 @@ extern int xfs_acl_inherit(struct vnode *, struct vattr *, xfs_acl_t *); extern int xfs_acl_iaccess(struct xfs_inode *, mode_t, cred_t *); -extern int xfs_acl_get(struct vnode *, xfs_acl_t *, xfs_acl_t *); -extern int xfs_acl_set(struct vnode *, xfs_acl_t *, xfs_acl_t *); extern int xfs_acl_vtoacl(struct vnode *, xfs_acl_t *, xfs_acl_t *); extern int xfs_acl_vhasacl_access(struct vnode *); extern int xfs_acl_vhasacl_default(struct vnode *); diff -ru linux-2.6.7-orig/fs/xfs/xfs_attr_leaf.h linux-2.6.7-new/fs/xfs/xfs_attr_leaf.h --- linux-2.6.7-orig/fs/xfs/xfs_attr_leaf.h 2004-06-15 22:18:37.000000000 -0700 +++ linux-2.6.7-new/fs/xfs/xfs_attr_leaf.h 2004-07-18 08:39:17.000000000 -0700 @@ -246,7 +246,6 @@ int xfs_attr_shortform_to_leaf(struct xfs_da_args *args); int xfs_attr_shortform_remove(struct xfs_da_args *remove); int xfs_attr_shortform_list(struct xfs_attr_list_context *context); -int xfs_attr_shortform_replace(struct xfs_da_args *args); int xfs_attr_shortform_allfit(struct xfs_dabuf *bp, struct xfs_inode *dp); /* diff -ru linux-2.6.7-orig/fs/xfs/xfs_bmap_btree.h linux-2.6.7-new/fs/xfs/xfs_bmap_btree.h --- linux-2.6.7-orig/fs/xfs/xfs_bmap_btree.h 2004-06-15 22:19:23.000000000 -0700 +++ linux-2.6.7-new/fs/xfs/xfs_bmap_btree.h 2004-07-18 08:36:18.000000000 -0700 @@ -551,13 +551,6 @@ struct xfs_btree_cur *, int *); -int -xfs_bmbt_insert_many( - struct xfs_btree_cur *, - int, - xfs_bmbt_rec_t *, - int *); - void xfs_bmbt_log_block( struct xfs_btree_cur *, diff -ru linux-2.6.7-orig/fs/xfs/xfs_inode.h linux-2.6.7-new/fs/xfs/xfs_inode.h --- linux-2.6.7-orig/fs/xfs/xfs_inode.h 2004-06-15 22:19:43.000000000 -0700 +++ linux-2.6.7-new/fs/xfs/xfs_inode.h 
2004-07-18 08:38:39.000000000 -0700 @@ -508,7 +508,6 @@ uint xfs_dic2xflags(struct xfs_dinode_core *, xfs_arch_t); int xfs_ifree(struct xfs_trans *, xfs_inode_t *, struct xfs_bmap_free *); -int xfs_atruncate_start(xfs_inode_t *); void xfs_itruncate_start(xfs_inode_t *, uint, xfs_fsize_t); int xfs_itruncate_finish(struct xfs_trans **, xfs_inode_t *, xfs_fsize_t, int, int); diff -ru linux-2.6.7-orig/fs/xfs/xfs_log_priv.h linux-2.6.7-new/fs/xfs/xfs_log_priv.h --- linux-2.6.7-orig/fs/xfs/xfs_log_priv.h 2004-06-15 22:18:58.000000000 -0700 +++ linux-2.6.7-new/fs/xfs/xfs_log_priv.h 2004-07-18 08:35:15.000000000 -0700 @@ -543,7 +543,6 @@ xfs_daddr_t *head_blk, xfs_daddr_t *tail_blk, int readonly); -extern int xlog_print_find_oldest(xlog_t *log, xfs_daddr_t *last_blk); extern int xlog_recover(xlog_t *log, int readonly); extern int xlog_recover_finish(xlog_t *log, int mfsi_flags); extern void xlog_pack_data(xlog_t *log, xlog_in_core_t *iclog); From owner-linux-xfs Sun Jul 18 18:30:08 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 18 Jul 2004 18:30:11 -0700 (PDT) Received: from coredumps.de (coredumps.de [217.160.213.75]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6J1U6nX008811 for ; Sun, 18 Jul 2004 18:30:07 -0700 Received: from port-212-202-54-216.dynamic.qsc.de ([212.202.54.216] helo=ente.berdmann.de) by coredumps.de with asmtp (TLSv1:DES-CBC3-SHA:168) (Exim 4.33) id 1BmMyi-00081I-07 for linux-xfs@oss.sgi.com; Mon, 19 Jul 2004 03:30:04 +0200 Received: from octane.berdmann.de ([192.168.1.14] helo=berdmann.de) by ente.berdmann.de with esmtp (Exim 3.36 #1) id 1BmMyf-0001m1-00 for linux-xfs@oss.sgi.com; Mon, 19 Jul 2004 03:30:01 +0200 Message-ID: <40FB2417.7030406@berdmann.de> Date: Mon, 19 Jul 2004 03:29:59 +0200 From: Bernhard Erdmann User-Agent: Mozilla/5.0 (X11; U; IRIX64 IP30; en-US; rv:1.6) Gecko/20040505 X-Accept-Language: de, en, fr MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: building xfsprogs (CVS): mmap.c:627: `MADV_NORMAL' undeclared 
Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3656 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: be@berdmann.de Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 3725 Lines: 107 Hi, when trying to build the current CVS version of xfsprogs using "make" in xfs-cmds an error occurs: [...] gcc -O1 -g -DDEBUG -funsigned-char -Wall -I../include -DVERSION=\"2.6.19\" -DLOCALEDIR=\"/usr/share/locale\" -DPACKAGE=\"xfsprogs\" -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -DHAVE_FADVISE -DHAVE_SENDFILE -DHAVE_INJECT -DHAVE_RESBLKS -DHAVE_SHUTDOWN -I/usr/local/src/xfs-cmds/xfsprogs/include -I/usr/local/src/xfs-cmds/dmapi/include -I/usr/local/src/xfs-cmds/attr/include -c -o mmap.o mmap.c mmap.c: In function `madvise_f': mmap.c:627: `MADV_NORMAL' undeclared (first use in this function) mmap.c:627: (Each undeclared identifier is reported only once mmap.c:627: for each function it appears in.) mmap.c:633: `MADV_DONTNEED' undeclared (first use in this function) mmap.c:636: `MADV_RANDOM' undeclared (first use in this function) mmap.c:639: `MADV_SEQUENTIAL' undeclared (first use in this function) mmap.c:642: `MADV_WILLNEED' undeclared (first use in this function) mmap.c: In function `mincore_f': mmap.c:729: warning: implicit declaration of function `mincore' gmake[2]: *** [mmap.o] Error 1 make[1]: *** [default] Error 2 make[1]: Leaving directory `/usr/local/src/xfs-cmds/xfsprogs' acl and attr have been build before by the makefile without any errors. Logs/configure is: make[1]: Entering directory `/usr/local/src/xfs-cmds/xfsprogs' autoconf ./configure \ --prefix=/ \ --exec-prefix=/ \ --sbindir=/sbin \ --bindir=/usr/sbin \ --libdir=/lib \ --libexecdir=/usr/lib \ --includedir=/usr/include \ --mandir=/usr/share/man \ --datadir=/usr/share \ $LOCAL_CONFIGURE_OPTIONS checking for gcc... gcc checking for C compiler default output... 
a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gmake... /usr/bin/gmake checking for glibtool... no checking for libtool... /usr/bin/libtool checking for tar... /bin/tar checking for gzip... /bin/gzip checking for makedepend... /usr/X11R6/bin/makedepend checking for awk... /bin/awk checking for sed... /bin/sed checking for echo... /bin/echo checking for sort... /bin/sort checking whether ln -s works... yes checking for msgfmt... /usr/bin/msgfmt checking for msgmerge... /usr/bin/msgmerge checking for rpm... /bin/rpm checking for rpmbuild... /usr/bin/rpmbuild checking how to run the C preprocessor... gcc -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking uuid.h usability... no checking uuid.h presence... no checking for uuid.h... no checking sys/uuid.h usability... no checking sys/uuid.h presence... no checking for sys/uuid.h... no checking uuid/uuid.h usability... yes checking uuid/uuid.h presence... yes checking for uuid/uuid.h... yes checking for uuid_compare... no checking for uuid_compare in -luuid... yes checking pthread.h usability... yes checking pthread.h presence... yes checking for pthread.h... yes checking for pthread_mutex_init in -lpthread... yes checking for __psint_t ... no checking for __psunsigned_t ... no checking for long... yes checking size of long... 4 checking for char *... yes checking size of char *... 
4 configure: creating ./config.status config.status: creating include/builddefs config.status: creating include/platform_defs.h touch .census make[1]: Leaving directory `/usr/local/src/xfs-cmds/xfsprogs' From owner-linux-xfs Sun Jul 18 19:01:41 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 18 Jul 2004 19:01:57 -0700 (PDT) Received: from zok.sgi.com (mtvcafw.sgi.com [192.48.171.6]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6J21enK010051 for ; Sun, 18 Jul 2004 19:01:40 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by zok.sgi.com (8.12.9/8.12.9/linux-outbound_gateway-1.1) with SMTP id i6J21Vhv027021 for ; Sun, 18 Jul 2004 19:01:32 -0700 Received: from kao2.melbourne.sgi.com (kao2.melbourne.sgi.com [134.14.55.180]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA29458 for ; Mon, 19 Jul 2004 12:01:30 +1000 Received: by kao2.melbourne.sgi.com (Postfix, from userid 16331) id AA0A7C2173; Mon, 19 Jul 2004 12:01:30 +1000 (EST) Received: from kao2.melbourne.sgi.com (localhost [127.0.0.1]) by kao2.melbourne.sgi.com (Postfix) with ESMTP id A6B5614010A; Mon, 19 Jul 2004 12:01:30 +1000 (EST) X-Mailer: exmh version 2.6.3_20040314 03/14/2004 with nmh-1.0.4 From: Keith Owens To: Bernhard Erdmann Cc: linux-xfs@oss.sgi.com Subject: Re: building xfsprogs (CVS): mmap.c:627: `MADV_NORMAL' undeclared In-reply-to: Your message of "Mon, 19 Jul 2004 03:29:59 +0200." 
<40FB2417.7030406@berdmann.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Date: Mon, 19 Jul 2004 12:01:29 +1000 Message-ID: <4774.1090202489@kao2.melbourne.sgi.com> X-archive-position: 3657 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: kaos@sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1278 Lines: 29 On Mon, 19 Jul 2004 03:29:59 +0200, Bernhard Erdmann wrote: >Hi, > >when trying to build the current CVS version of xfsprogs using "make" in >xfs-cmds an error occurs: > >[...] >gcc -O1 -g -DDEBUG -funsigned-char -Wall -I../include >-DVERSION=\"2.6.19\" -DLOCALEDIR=\"/usr/share/locale\" >-DPACKAGE=\"xfsprogs\" -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 >-DHAVE_FADVISE -DHAVE_SENDFILE -DHAVE_INJECT -DHAVE_RESBLKS >-DHAVE_SHUTDOWN -I/usr/local/src/xfs-cmds/xfsprogs/include >-I/usr/local/src/xfs-cmds/dmapi/include >-I/usr/local/src/xfs-cmds/attr/include -c -o mmap.o mmap.c >mmap.c: In function `madvise_f': >mmap.c:627: `MADV_NORMAL' undeclared (first use in this function) >mmap.c:627: (Each undeclared identifier is reported only once >mmap.c:627: for each function it appears in.) >mmap.c:633: `MADV_DONTNEED' undeclared (first use in this function) >mmap.c:636: `MADV_RANDOM' undeclared (first use in this function) >mmap.c:639: `MADV_SEQUENTIAL' undeclared (first use in this function) >mmap.c:642: `MADV_WILLNEED' undeclared (first use in this function) >mmap.c: In function `mincore_f': >mmap.c:729: warning: implicit declaration of function `mincore' All defined in mman.h. Try adding #include <sys/mman.h> after the other includes in mmap.c. 
From owner-linux-xfs Mon Jul 19 08:58:46 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 08:59:00 -0700 (PDT) Received: from zok.sgi.com (mtvcafw.sgi.com [192.48.171.6]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JFwkoj016043 for ; Mon, 19 Jul 2004 08:58:46 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by zok.sgi.com (8.12.9/8.12.9/linux-outbound_gateway-1.1) with SMTP id i6JFwchv031353 for ; Mon, 19 Jul 2004 08:58:38 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id BAA12820; Tue, 20 Jul 2004 01:58:31 +1000 Received: from wobbly.melbourne.sgi.com (localhost [127.0.0.1]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i6JFwTln2444156; Tue, 20 Jul 2004 01:58:29 +1000 (EST) Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id i6JFwRVl2442395; Tue, 20 Jul 2004 01:58:27 +1000 (EST) Date: Tue, 20 Jul 2004 01:58:27 +1000 From: Nathan Scott To: Bernhard Erdmann , Keith Owens Cc: linux-xfs@oss.sgi.com Subject: Re: building xfsprogs (CVS): mmap.c:627: `MADV_NORMAL' undeclared Message-ID: <20040720015826.A2406645@wobbly.melbourne.sgi.com> References: <40FB2417.7030406@berdmann.de> <4774.1090202489@kao2.melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5i In-Reply-To: <4774.1090202489@kao2.melbourne.sgi.com>; from kaos@sgi.com on Mon, Jul 19, 2004 at 12:01:29PM +1000 X-archive-position: 3658 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1023 Lines: 27 On Mon, Jul 19, 2004 at 12:01:29PM +1000, Keith Owens wrote: > On Mon, 19 Jul 2004 03:29:59 +0200, > Bernhard Erdmann wrote: > >... 
> >mmap.c: In function `madvise_f': > >mmap.c:627: `MADV_NORMAL' undeclared (first use in this function) > >mmap.c:627: (Each undeclared identifier is reported only once > >mmap.c:627: for each function it appears in.) > >mmap.c:633: `MADV_DONTNEED' undeclared (first use in this function) > >mmap.c:636: `MADV_RANDOM' undeclared (first use in this function) > >mmap.c:639: `MADV_SEQUENTIAL' undeclared (first use in this function) > >mmap.c:642: `MADV_WILLNEED' undeclared (first use in this function) > >mmap.c: In function `mincore_f': > >mmap.c:729: warning: implicit declaration of function `mincore' > > All defined in mman.h. Try adding #include after the > other includes in mmap.c. mmap.c already includes that header - what version of the glibc headers are you using there Bernhard? (which distribution, and which version?) cheers. -- Nathan From owner-linux-xfs Mon Jul 19 09:21:11 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 09:21:23 -0700 (PDT) Received: from burgers.bubbanfriends.org (IDENT:6eohi6c0BasXxcWbNInuM1HfAKRC8GqT@burgers.bubbanfriends.org [69.212.163.241]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JGLAX0016927 for ; Mon, 19 Jul 2004 09:21:11 -0700 Received: from localhost (burgers.bubbanfriends.org [127.0.0.1]) by burgers.bubbanfriends.org (Postfix) with ESMTP id BEBBE1421009 for ; Mon, 19 Jul 2004 11:21:08 -0500 (EST) Received: from burgers.bubbanfriends.org ([127.0.0.1]) by localhost (burgers.bubbanfriends.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 22492-05 for ; Mon, 19 Jul 2004 11:21:08 -0500 (EST) Received: by burgers.bubbanfriends.org (Postfix, from userid 500) id 4DB011421006; Mon, 19 Jul 2004 11:21:08 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by burgers.bubbanfriends.org (Postfix) with ESMTP id 4D065302939B for ; Mon, 19 Jul 2004 11:21:08 -0500 (EST) Date: Mon, 19 Jul 2004 11:21:08 -0500 (EST) From: Mike Burger To: linux-xfs@oss.sgi.com Subject: Re: XFS installer for Fedora 1 
In-Reply-To: <1068152873.1405.6.camel@stout.americas.sgi.com> Message-ID: References: <1068144421.1405.4.camel@stout.americas.sgi.com> <20031106201337.GG13576@puariko.nirvana> <1068152873.1405.6.camel@stout.americas.sgi.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Virus-Scanned: by amavisd-new at bubbanfriends.org X-archive-position: 3659 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: mburger@bubbanfriends.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 880 Lines: 32 On Thu, 6 Nov 2003, Eric Sandeen wrote: > On Thu, 2003-11-06 at 14:13, Axel Thimm wrote: > > > Fedora Core 2 is scheduled 4-6 months from now, start lobbying! :) (I > > don't think RH will put XFS in the "updates", previously known as > > "errata") > > FC2 is supposed to have the linux 2.6 kernel, so if they strip xfs out > of that, then we'll know how they -really- feel about us. ;-) Touching back on this subject: Has anyone tried upgrading a current RH/FC system, that already has XFS in place, with the stock FC2 CDs? 
-- Mike Burger http://www.bubbanfriends.org Visit the Dog Pound II BBS telnet://dogpound2.citadel.org or http://dogpound2.citadel.org:2000 To be notified of updates to the web site, visit http://www.bubbanfriends.org/mailman/listinfo/site-update, or send a message to: site-update-request@bubbanfriends.org with a message of: subscribe From owner-linux-xfs Mon Jul 19 09:45:38 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 09:45:43 -0700 (PDT) Received: from poptart.bithose.com (poptart.bithose.com [204.97.176.41]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JGjblm017836 for ; Mon, 19 Jul 2004 09:45:38 -0700 Received: from poptart.bithose.com (localhost [127.0.0.1]) by poptart.bithose.com (8.12.10/8.12.10) with ESMTP id i6JGjZu4323935 for ; Mon, 19 Jul 2004 12:45:35 -0400 (EDT) Received: from localhost (jakari@localhost) by poptart.bithose.com (8.12.10/8.12.10/Submit) with ESMTP id i6JGjYYb323860 for ; Mon, 19 Jul 2004 12:45:34 -0400 (EDT) X-Authentication-Warning: poptart.bithose.com: jakari owned process doing -bs Date: Mon, 19 Jul 2004 12:44:38 -0400 (EDT) From: Jameel Akari To: Mike Burger Subject: Re: XFS installer for Fedora 1 In-Reply-To: Message-ID: References: <1068144421.1405.4.camel@stout.americas.sgi.com> <20031106201337.GG13576@puariko.nirvana> <1068152873.1405.6.camel@stout.americas.sgi.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII ReSent-Date: Mon, 19 Jul 2004 12:45:27 -0400 (EDT) ReSent-From: Jameel Akari ReSent-To: linux-xfs@oss.sgi.com ReSent-Subject: Re: XFS installer for Fedora 1 ReSent-Message-ID: X-archive-position: 3660 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: jakari@bithose.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1432 Lines: 45 On Mon, 19 Jul 2004, Mike Burger wrote: > On Thu, 6 Nov 2003, Eric Sandeen wrote: > > > On Thu, 2003-11-06 at 14:13, Axel Thimm wrote: > > > > > Fedora Core 2 
is scheduled 4-6 months from now, start lobbying! :) (I > > > don't think RH will put XFS in the "updates", previously known as > > > "errata") > > > > FC2 is supposed to have the linux 2.6 kernel, so if they strip xfs out > > of that, then we'll know how they -really- feel about us. ;-) Well, i can tell you that FC2 kernels supports xfs - however they don't install any xfsprogs (mkfs.xfs, xfsdump, etc.) or acl stuff - and the installer won't create XFS partitions - just ext2,ext3. On my sorta-testing PFc2 box here, I've installed: xfsprogs-2.5.6-1 libattr-2.4.8-1 libacl-2.2.15-1 acl-2.2.15-1 attr-2.4.8-1 xfsdump-2.2.13-0 dmapi-2.0.8-0 ... from the oss.sgi.com FTP site and so far it all seems to work - I've created a couple XFS filesystems and I've been building and burning DVD ISOs from them. > Has anyone tried upgrading a current RH/FC system, that already has XFS in > place, with the stock FC2 CDs? That's a good question, and if I had my FC2 CDs with me I'd feed them to the VMware gods and find out. Don't feel like blowing up a production system today, though I imagine a RH7.3+XFS -> FC2 upgrade would be interesting to watch (like say how a train wreck is interesting to watch...) -- #!/jameel/akari sleep 4800; make clean && make breakfast From owner-linux-xfs Mon Jul 19 09:56:30 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 09:56:34 -0700 (PDT) Received: from mail.linux-sxs.org (mail.linux-sxs.org [207.218.156.196]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JGuSAG018487 for ; Mon, 19 Jul 2004 09:56:29 -0700 Received: from mail.linux-sxs.org (localhost [127.0.0.1]) by mail.linux-sxs.org (8.12.11/8.12.11/Debian-5) with ESMTP id i6JGoBGg017333; Mon, 19 Jul 2004 11:50:11 -0500 Received: from localhost (netllama@localhost) by mail.linux-sxs.org (8.12.11/8.12.11/Debian-5) with ESMTP id i6JGoAS1017328; Mon, 19 Jul 2004 11:50:10 -0500 Date: Mon, 19 Jul 2004 11:50:10 -0500 (EST) From: "L. 
Friedman" To: Jameel Akari cc: Mike Burger , linux-xfs@oss.sgi.com Subject: Re: XFS installer for Fedora 1 In-Reply-To: Message-ID: References: <1068144421.1405.4.camel@stout.americas.sgi.com> <20031106201337.GG13576@puariko.nirvana> <1068152873.1405.6.camel@stout.americas.sgi.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 3661 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: netllama@mail.linux-sxs.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 2013 Lines: 47 On Mon, 19 Jul 2004, Jameel Akari wrote: > On Mon, 19 Jul 2004, Mike Burger wrote: > > > On Thu, 6 Nov 2003, Eric Sandeen wrote: > > > > > On Thu, 2003-11-06 at 14:13, Axel Thimm wrote: > > > > > > > Fedora Core 2 is scheduled 4-6 months from now, start lobbying! :) (I > > > > don't think RH will put XFS in the "updates", previously known as > > > > "errata") > > > > > > FC2 is supposed to have the linux 2.6 kernel, so if they strip xfs out > > > of that, then we'll know how they -really- feel about us. ;-) > > Well, i can tell you that FC2 kernels supports xfs - however they don't > install any xfsprogs (mkfs.xfs, xfsdump, etc.) or acl stuff - and the > installer won't create XFS partitions - just ext2,ext3. That wasn't my experience (and what's an 'XFS partition')? FC2 installs with XFS ok, as long as /boot isn't XFS. When it is, grub gets caught somewhere and kinda hangs until you kill it manually during the install. But yea, the xfs userspace tools don't ship with FC2, so you're on your own in getting them, which is fairly trivial, assuming that you've used XFS before. > ... from the oss.sgi.com FTP site and so far it all seems to work - I've > created a couple XFS filesystems and I've been building and burning DVD > ISOs from them. > > > Has anyone tried upgrading a current RH/FC system, that already has XFS in > > place, with the stock FC2 CDs? 
> > That's a good question, and if I had my FC2 CDs with me I'd feed them to > the VMware gods and find out. Don't feel like blowing up a production > system today, though I imagine a RH7.3+XFS -> FC2 upgrade would be > interesting to watch (like say how a train wreck is interesting to > watch...) Personally, i'd be surprised if it turned out well, since XFS support in the FC2 installer isn't quite baked yet. -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lonni J Friedman netllama@linux-sxs.org Linux Step-by-step & TyGeMo http://netllama.ipfox.com From owner-linux-xfs Mon Jul 19 10:07:41 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 10:07:42 -0700 (PDT) Received: from poptart.bithose.com (poptart.bithose.com [204.97.176.41]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JH7eDS019102 for ; Mon, 19 Jul 2004 10:07:40 -0700 Received: from poptart.bithose.com (localhost [127.0.0.1]) by poptart.bithose.com (8.12.10/8.12.10) with ESMTP id i6JH7cu4323972 for ; Mon, 19 Jul 2004 13:07:38 -0400 (EDT) Received: from localhost (jakari@localhost) by poptart.bithose.com (8.12.10/8.12.10/Submit) with ESMTP id i6JH7bkp323948 for ; Mon, 19 Jul 2004 13:07:37 -0400 (EDT) X-Authentication-Warning: poptart.bithose.com: jakari owned process doing -bs Date: Mon, 19 Jul 2004 13:07:37 -0400 (EDT) From: Jameel Akari To: linux-xfs@oss.sgi.com Subject: Re: XFS installer for Fedora 1 In-Reply-To: Message-ID: References: <1068144421.1405.4.camel@stout.americas.sgi.com> <20031106201337.GG13576@puariko.nirvana> <1068152873.1405.6.camel@stout.americas.sgi.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 3662 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: jakari@bithose.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1145 Lines: 34 On Mon, 19 Jul 2004, L. 
Friedman wrote: > > install any xfsprogs (mkfs.xfs, xfsdump, etc.) or acl stuff - and the > > installer won't create XFS partitions - just ext2,ext3. > > That wasn't my experience (and what's an 'XFS partition')? FC2 installs It's a typo, mostly; obviously I meant "filesystem," I'm just thinking of the installer's Disk Druid partitioning at the same time. > > system today, though I imagine a RH7.3+XFS -> FC2 upgrade would be > > Personally, i'd be surprised if it turned out well, since XFS support in > the FC2 installer isn't quite baked yet. I certainly don't have much faith in it, hence the test system. The installer kernel seems to know enough about XFS to mount a fs r/w so in theory it'll work. I don't normally have XFS /boot anyway so that much shouldn't be a problem. OTOH, the FC2 installer is half-baked in more ways than just XFS support - I've tried the text mode installer on three different machines and had it randomly fail on each - so all bets are off. Well, I'll try it tomorrow, if only to say "Nope, that doesn't work."
-- #!/jameel/akari sleep 4800; make clean && make breakfast From owner-linux-xfs Mon Jul 19 10:49:25 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 10:49:30 -0700 (PDT) Received: from burgers.bubbanfriends.org (IDENT:GnHWwCeSIT1PaH3FOKuUgGdhj0oEKslh@burgers.bubbanfriends.org [69.212.163.241]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JHnOPp020550 for ; Mon, 19 Jul 2004 10:49:25 -0700 Received: from localhost (burgers.bubbanfriends.org [127.0.0.1]) by burgers.bubbanfriends.org (Postfix) with ESMTP id 7FA5D1421009; Mon, 19 Jul 2004 12:49:20 -0500 (EST) Received: from burgers.bubbanfriends.org ([127.0.0.1]) by localhost (burgers.bubbanfriends.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 24348-08; Mon, 19 Jul 2004 12:49:20 -0500 (EST) Received: by burgers.bubbanfriends.org (Postfix, from userid 500) id F0DBE1421006; Mon, 19 Jul 2004 12:49:19 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by burgers.bubbanfriends.org (Postfix) with ESMTP id E72F33029375; Mon, 19 Jul 2004 12:49:19 -0500 (EST) Date: Mon, 19 Jul 2004 12:49:19 -0500 (EST) From: Mike Burger To: Jameel Akari Cc: linux-xfs@oss.sgi.com Subject: Re: XFS installer for Fedora 1 In-Reply-To: Message-ID: References: <1068144421.1405.4.camel@stout.americas.sgi.com> <20031106201337.GG13576@puariko.nirvana> <1068152873.1405.6.camel@stout.americas.sgi.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Virus-Scanned: by amavisd-new at bubbanfriends.org X-archive-position: 3663 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: mburger@bubbanfriends.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1017 Lines: 37 On Mon, 19 Jul 2004, Jameel Akari wrote: > > > On Mon, 19 Jul 2004, L. Friedman wrote: > > > > install any xfsprogs (mkfs.xfs, xfsdump, etc.) or acl stuff - and the > > > installer won't create XFS partitions - just ext2,ext3. 
> > > > That wasn't my experience (and what's an 'XFS partition')? FC2 installs > > It's typo, mostly, obviously I meant "filesystem," I'm just thinking of > the installer's Disk Druid partitioning at the same time. Well, since Disk Druid, fdisk, et al still refer to the process as partitioning the disk, I don't think the term is really all that inappropropriate. I use the term, myself, all the time. But that could just be me. -- Mike Burger http://www.bubbanfriends.org Visit the Dog Pound II BBS telnet://dogpound2.citadel.org or http://dogpound2.citadel.org:2000 To be notified of updates to the web site, visit http://www.bubbanfriends.org/mailman/listinfo/site-update, or send a message to: site-update-request@bubbanfriends.org with a message of: subscribe From owner-linux-xfs Mon Jul 19 11:12:12 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 11:12:15 -0700 (PDT) Received: from mail.linux-sxs.org (mail.linux-sxs.org [207.218.156.196]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JIC9R5021309 for ; Mon, 19 Jul 2004 11:12:10 -0700 Received: from mail.linux-sxs.org (localhost [127.0.0.1]) by mail.linux-sxs.org (8.12.11/8.12.11/Debian-5) with ESMTP id i6JI63x6017830; Mon, 19 Jul 2004 13:06:03 -0500 Received: from localhost (netllama@localhost) by mail.linux-sxs.org (8.12.11/8.12.11/Debian-5) with ESMTP id i6JI63X0017827; Mon, 19 Jul 2004 13:06:03 -0500 Date: Mon, 19 Jul 2004 13:06:03 -0500 (EST) From: "L. 
Friedman" To: Mike Burger cc: Jameel Akari , linux-xfs@oss.sgi.com Subject: Re: XFS installer for Fedora 1 In-Reply-To: Message-ID: References: <1068144421.1405.4.camel@stout.americas.sgi.com> <20031106201337.GG13576@puariko.nirvana> <1068152873.1405.6.camel@stout.americas.sgi.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 3664 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: netllama@mail.linux-sxs.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1067 Lines: 29 On Mon, 19 Jul 2004, Mike Burger wrote: > On Mon, 19 Jul 2004, Jameel Akari wrote: > > > > > > > On Mon, 19 Jul 2004, L. Friedman wrote: > > > > > > install any xfsprogs (mkfs.xfs, xfsdump, etc.) or acl stuff - and the > > > > installer won't create XFS partitions - just ext2,ext3. > > > > > > That wasn't my experience (and what's an 'XFS partition')? FC2 installs > > > > It's typo, mostly, obviously I meant "filesystem," I'm just thinking of > > the installer's Disk Druid partitioning at the same time. > > Well, since Disk Druid, fdisk, et al still refer to the process as > partitioning the disk, I don't think the term is really all that > inappropropriate. I use the term, myself, all the time. > > But that could just be me. Yea, but partitioning is a verb, partition types are Linux (83) and that's about it. 
Its all symantics :) -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lonni J Friedman netllama@linux-sxs.org Linux Step-by-step & TyGeMo http://netllama.ipfox.com From owner-linux-xfs Mon Jul 19 11:59:24 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 11:59:26 -0700 (PDT) Received: from smtp.pzkagis.cz (gis.netbox.cz [212.96.173.85]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JIxMAi022842 for ; Mon, 19 Jul 2004 11:59:23 -0700 Received: (from luf@localhost) by smtp.pzkagis.cz (8.11.6/8.11.6) id i6JIxJp06181 for linux-xfs@oss.sgi.com; Mon, 19 Jul 2004 20:59:19 +0200 Date: Mon, 19 Jul 2004 20:59:19 +0200 From: Ludek Finstrle To: linux-xfs@oss.sgi.com Subject: Re: XFS installer for Fedora 1 Message-ID: <20040719185919.GA6134@soptik.pzkagis.cz> References: <1068144421.1405.4.camel@stout.americas.sgi.com> <20031106201337.GG13576@puariko.nirvana> <1068152873.1405.6.camel@stout.americas.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4i X-archive-position: 3665 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: ludek.finstrle@pzkagis.cz Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1243 Lines: 54 Mon, Jul 19, 2004 at 12:44:38PM -0400, Jameel Akari napsal(a): > On Mon, 19 Jul 2004, Mike Burger wrote: > > On Thu, 6 Nov 2003, Eric Sandeen wrote: > > > On Thu, 2003-11-06 at 14:13, Axel Thimm wrote: > > > > > > FC2 is supposed to have the linux 2.6 kernel, so if they strip xfs out > > > of that, then we'll know how they -really- feel about us. ;-) > > Well, i can tell you that FC2 kernels supports xfs - however they don't > install any xfsprogs (mkfs.xfs, xfsdump, etc.) or acl stuff - and the > installer won't create XFS partitions - just ext2,ext3. 
> > On my sorta-testing PFc2 box here, I've installed: > xfsprogs-2.5.6-1 > libattr-2.4.8-1 > libacl-2.2.15-1 > acl-2.2.15-1 > attr-2.4.8-1 > xfsdump-2.2.13-0 > dmapi-2.0.8-0 Are you sure? When setup ask for kernel (in text) I add xfs as latest parameter and it works as I expect. Something like: boot: linux text xfs for console install or just boot: linux xfs for install with GUI. I have installed with FC2 setup: # rpm -qa | grep xfs xfsprogs-2.6.13-1 # rpm -qa | grep attr libattr-2.4.1-4 attr-2.4.1-4 libattr-devel-2.4.1-4 # rpm -qa | grep dmapi # # rpm -qa | grep acl libacl-2.2.7-5 acl-2.2.7-5 libacl-devel-2.2.7-5 Do I miss something? Luf P.S. I tried only FC2 From owner-linux-xfs Mon Jul 19 14:14:22 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 14:14:33 -0700 (PDT) Received: from heretic.physik.fu-berlin.de (heretic.physik.fu-berlin.de [160.45.32.227]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JLELQN028755 for ; Mon, 19 Jul 2004 14:14:22 -0700 Received: by heretic.physik.fu-berlin.de (8.12.10/8.12.10) with ESMTP id i6JLEGpg004332 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK) for ; Mon, 19 Jul 2004 23:14:17 +0200 Received: (from thimm@localhost) by neu.nirvana (8.12.11/8.12.11/Submit) id i6JLEAIF002810 for linux-xfs@oss.sgi.com; Mon, 19 Jul 2004 23:14:10 +0200 Resent-Message-Id: <200407192114.i6JLEAIF002810@neu.nirvana> Received: from mail.atrpms.net (at1.physik.fu-berlin.de [160.45.32.86]) by up.physik.fu-berlin.de (8.11.1/8.9.1) with ESMTP id i6JH73q641924 for ; Mon, 19 Jul 2004 19:07:03 +0200 (CEST) X-Envelope-From: thimm@physik.fu-berlin.de X-Envelope-To: X-ZEDV-BeenThere: nukleon Received: from at1.physik.fu-berlin.de ([160.45.32.86] helo=heretic.physik.fu-berlin.de) by mail.atrpms.net with esmtp (Exim 4.30) id 1BmbbB-0006jk-P4 for Axel.Thimm@ATrpms.net; Mon, 19 Jul 2004 19:06:45 +0200 Received: by heretic.physik.fu-berlin.de (8.12.10/8.12.10) with ESMTP id i6JH6fpg025892 (version=TLSv1/SSLv3 
cipher=DHE-RSA-AES256-SHA bits=256 verify=OK); Mon, 19 Jul 2004 19:06:42 +0200 Received: (from thimm@localhost) by neu.nirvana (8.12.11/8.12.11/Submit) id i6JH6dEl000605; Mon, 19 Jul 2004 19:06:39 +0200 Date: Mon, 19 Jul 2004 19:06:39 +0200 From: Axel Thimm To: Jameel Akari Cc: Mike Burger Message-ID: <20040719170639.GD31854@neu.nirvana> References: <1068144421.1405.4.camel@stout.americas.sgi.com> <20031106201337.GG13576@puariko.nirvana> <1068152873.1405.6.camel@stout.americas.sgi.com> Mime-Version: 1.0 Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2i X-SA-Exim-Mail-From: thimm@physik.fu-berlin.de Subject: Re: XFS installer for Fedora 1 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="UfEAyuTBtIjiZzX6" X-SA-Exim-Version: 3.1 (built Sun Feb 8 09:08:40 EST 2004) X-SA-Exim-Scanned: Yes X-Bogosity: No, tests=bogofilter, spamicity=0.000057, version=0.17.5, scanned=2004-07-19T17:07:03Z, spam_cutoff=9.90e-01 Resent-From: Axel.Thimm@ATrpms.net Resent-Date: Mon, 19 Jul 2004 23:14:10 +0200 Resent-To: linux-xfs@oss.sgi.com X-archive-position: 3666 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: Axel.Thimm@ATrpms.net Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 2235 Lines: 69 --UfEAyuTBtIjiZzX6 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Mon, Jul 19, 2004 at 12:44:38PM -0400, Jameel Akari wrote: > On Mon, 19 Jul 2004, Mike Burger wrote: > > On Thu, 6 Nov 2003, Eric Sandeen wrote: > > > On Thu, 2003-11-06 at 14:13, Axel Thimm wrote: > > > > Fedora Core 2 is scheduled 4-6 months from now, start lobbying! :) = (I > > > > don't think RH will put XFS in the "updates", previously known as > > > > "errata") > > > > > > FC2 is supposed to have the linux 2.6 kernel, so if they strip xfs out > > > of that, then we'll know how they -really- feel about us. 
;-) > > Well, i can tell you that FC2 kernels supports xfs - however they don't > install any xfsprogs (mkfs.xfs, xfsdump, etc.) or acl stuff - and the > installer won't create XFS partitions - just ext2,ext3. Have you tried booting with "linux xfs"? > On my sorta-testing PFc2 box here, I've installed: > xfsprogs-2.5.6-1 > libattr-2.4.8-1 > libacl-2.2.15-1 > acl-2.2.15-1 > attr-2.4.8-1 > xfsdump-2.2.13-0 > dmapi-2.0.8-0 > > ... from the oss.sgi.com FTP site and so far it all seems to work - I've > created a couple XFS filesystems and I've been building and burning DVD > ISOs from them. > > > Has anyone tried upgrading a current RH/FC system, that already has XFS in > > place, with the stock FC2 CDs? > > That's a good question, and if I had my FC2 CDs with me I'd feed them to > the VMware gods and find out. Don't feel like blowing up a production > system today, though I imagine a RH7.3+XFS -> FC2 upgrade would be > interesting to watch (like say how a train wreck is interesting to > watch...) The latter is fun, I guess. Anyone volunteering ;) (RH7.3 to FC2 would be a disaster even w/o XFS I guess, too many versions in between. But FC1->FC2 could have a chance. Too bad I already jumped my XFS boxes to FC2.
;) --=20 Axel.Thimm at ATrpms.net --UfEAyuTBtIjiZzX6 Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iD8DBQFA+/+fQBVS1GOamfERAvOUAJ9qTENBY+DkfbQ8NL7LBWDkUzCCHACfc7dU qHRV/zJWrM1LCn1j0eYiaoE= =krVX -----END PGP SIGNATURE----- --UfEAyuTBtIjiZzX6-- From owner-linux-xfs Mon Jul 19 15:00:51 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 15:01:01 -0700 (PDT) Received: from bill.corporate.quris.com (mx1.hq.quris.com [216.150.62.20] (may be forged)) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JM0oJC030112 for ; Mon, 19 Jul 2004 15:00:51 -0700 X-MimeOLE: Produced By Microsoft Exchange V6.5.7226.0 Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Subject: mount: Function not implemented? Date: Mon, 19 Jul 2004 16:00:43 -0600 Message-ID: <74918D8CA17F7C418753F01078F10B6BD08524@bill.corporate.quris.com> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: mount: Function not implemented? thread-index: AcRt29YpE8/YRwa/QpSz6947Pvvs4g== From: "Anthony Biacco" To: Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id i6JM0pJC030113 X-archive-position: 3667 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: ABiacco@quris.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1354 Lines: 46 Hi, I'm trying to use an XFS filesystem under a newly compiled linux 2.6.8-rc2 kernel (my first try of the 2.6 series, btw) I can create the FS fine, but mount won't mount it System is RH AS3 Update 2, Dual AMD64 Opteron, xfs compiled in kernel as module. 
[root@stampy src]# mkfs.xfs -l size=32m -b size=16k -d su=16k,sw=3 -f -L /u03 /dev/sdb1
meta-data=/dev/sdb1              isize=256    agcount=8, agsize=55976 blks
         =                       sectsz=512
data     =                       bsize=16384  blocks=447808, imaxpct=25
         =                       sunit=1      swidth=3 blks, unwritten=1
naming   =version 2              bsize=16384
log      =internal log           bsize=16384  blocks=2048, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
[root@stampy src]# mount -t xfs /dev/sdb1 /u03
mount: Function not implemented
[root@stampy src]# mount -V
mount: mount-2.11y
[root@stampy src]# lsmod | grep xfs
xfs 496464 0
[root@stampy src]# grep xfs /etc/fstab
/dev/sdb1 /u03 xfs defaults 1 2
I have the latest xfsprogs,xfsdump,attr,dmapi compiled from source. Thanx, --Tony ------------------------------ Anthony J. Biacco Systems/Network Administrator Quris, Inc. 720-836-2015 From owner-linux-xfs Mon Jul 19 15:05:32 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 15:05:38 -0700 (PDT) Received: from pimout3-ext.prodigy.net (pimout3-ext.prodigy.net [207.115.63.102]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JM5Vsx030491 for ; Mon, 19 Jul 2004 15:05:32 -0700 Received: from taniwha.stupidest.org (adsl-67-124-119-20.dsl.snfc21.pacbell.net [67.124.119.20]) by pimout3-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6JM5SlM033660; Mon, 19 Jul 2004 18:05:28 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 0A557115C858; Mon, 19 Jul 2004 15:05:27 -0700 (PDT) Date: Mon, 19 Jul 2004 15:05:27 -0700 From: Chris Wedgwood To: Anthony Biacco Cc: linux-xfs@oss.sgi.com Subject: Re: mount: Function not implemented?
Message-ID: <20040719220526.GA2007@taniwha.stupidest.org> References: <74918D8CA17F7C418753F01078F10B6BD08524@bill.corporate.quris.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <74918D8CA17F7C418753F01078F10B6BD08524@bill.corporate.quris.com> X-archive-position: 3668 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 511 Lines: 15 On Mon, Jul 19, 2004 at 04:00:43PM -0600, Anthony Biacco wrote: > I'm trying to use an XFS filesystem under a newly compiled linux > 2.6.8-rc2 kernel (my first try of the 2.6 series, btw) I can create > the FS fine, but mount won't mount it System is RH AS3 Update 2, > Dual AMD64 Opteron, xfs compiled in kernel as module. > data = bsize=16384 blocks=447808, imaxpct=25 ^^^^^ this won't work --- what are you trying to do here? --cw From owner-linux-xfs Mon Jul 19 15:19:30 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 15:19:41 -0700 (PDT) Received: from bill.corporate.quris.com (bill.corporate.quris.com [216.150.62.20]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JMJUXw031094 for ; Mon, 19 Jul 2004 15:19:30 -0700 X-MimeOLE: Produced By Microsoft Exchange V6.5.7226.0 Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Subject: RE: mount: Function not implemented? Date: Mon, 19 Jul 2004 16:19:22 -0600 Message-ID: <74918D8CA17F7C418753F01078F10B6BD0853B@bill.corporate.quris.com> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: mount: Function not implemented? 
thread-index: AcRt3IIFyCHlS08bTG2B0LZ8ZWBrJAAAaYzQ From: "Anthony Biacco" To: "Chris Wedgwood" Cc: Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id i6JMJUXw031101 X-archive-position: 3669 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: ABiacco@quris.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 914 Lines: 34 Trying to use a blocksize of 16k to match my oracle block size and HW RAID5 stripe size. --Tony ------------------------------ Anthony J. Biacco Systems/Network Administrator Quris, Inc. 720-836-2015 -----Original Message----- From: Chris Wedgwood [mailto:cw@f00f.org] Sent: Monday, July 19, 2004 4:05 PM To: Anthony Biacco Cc: linux-xfs@oss.sgi.com Subject: Re: mount: Function not implemented? On Mon, Jul 19, 2004 at 04:00:43PM -0600, Anthony Biacco wrote: > I'm trying to use an XFS filesystem under a newly compiled linux > 2.6.8-rc2 kernel (my first try of the 2.6 series, btw) I can create > the FS fine, but mount won't mount it System is RH AS3 Update 2, Dual > AMD64 Opteron, xfs compiled in kernel as module. > data = bsize=16384 blocks=447808, imaxpct=25 ^^^^^ this won't work --- what are you trying to do here? 
--cw From owner-linux-xfs Mon Jul 19 15:23:29 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 15:23:33 -0700 (PDT) Received: from mbe0.msomt.modwest.com (mbe0.msomt.modwest.com [216.220.25.82]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JMNTwj031507 for ; Mon, 19 Jul 2004 15:23:29 -0700 Received: from d216-220-25-60.dynip.modwest.com (d216-220-25-60.dynip.modwest.com [216.220.25.60]) (using TLSv1 with cipher EDH-RSA-DES-CBC3-SHA (168/168 bits)) (No client certificate requested) by mbe0.msomt.modwest.com (Postfix) with ESMTP id 9105427AF7B for ; Mon, 19 Jul 2004 16:23:25 -0600 (MDT) Date: Mon, 19 Jul 2004 16:30:10 -0600 From: Michael Loftis To: linux-xfs@oss.sgi.com Subject: RE: mount: Function not implemented? Message-ID: <25826718.1090254610@d216-220-25-60.dynip.modwest.com> In-Reply-To: <74918D8CA17F7C418753F01078F10B6BD0853B@bill.corporate.quris.com> References: <74918D8CA17F7C418753F01078F10B6BD0853B@bill.corporate.quris.com > X-Mailer: Mulberry/3.1.2 (Win32) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline X-Modwest-MailScanner: Found to be clean X-Modwest-MailScanner-SpamCheck: X-MailScanner-From: mloftis@wgops.com X-archive-position: 3671 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: mloftis@wgops.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1229 Lines: 48 FYI linux combines writes to a 128k window on SCSI & IDE...so it writes 128k blocks at a time. --On Monday, July 19, 2004 16:19 -0600 Anthony Biacco wrote: > > Trying to use a blocksize of 16k to match my oracle block size and HW > RAID5 stripe size. > > --Tony > ------------------------------ > Anthony J. Biacco > Systems/Network Administrator > Quris, Inc. 
> 720-836-2015 > > -----Original Message----- > From: Chris Wedgwood [mailto:cw@f00f.org] > Sent: Monday, July 19, 2004 4:05 PM > To: Anthony Biacco > Cc: linux-xfs@oss.sgi.com > Subject: Re: mount: Function not implemented? > > On Mon, Jul 19, 2004 at 04:00:43PM -0600, Anthony Biacco wrote: > >> I'm trying to use an XFS filesystem under a newly compiled linux >> 2.6.8-rc2 kernel (my first try of the 2.6 series, btw) I can create >> the FS fine, but mount won't mount it System is RH AS3 Update 2, Dual >> AMD64 Opteron, xfs compiled in kernel as module. > >> data = bsize=16384 blocks=447808, > imaxpct=25 > ^^^^^ > > this won't work --- what are you trying to do here? > > > --cw > > > -- GPG/PGP --> 0xE736BD7E 5144 6A2D 977A 6651 DFBE 1462 E351 88B9 E736 BD7E From owner-linux-xfs Mon Jul 19 15:23:18 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 15:23:32 -0700 (PDT) Received: from pimout2-ext.prodigy.net (pimout2-ext.prodigy.net [207.115.63.101]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JMNHrb031496 for ; Mon, 19 Jul 2004 15:23:18 -0700 Received: from taniwha.stupidest.org (adsl-67-124-119-20.dsl.snfc21.pacbell.net [67.124.119.20]) by pimout2-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6JMNEUK105432; Mon, 19 Jul 2004 18:23:14 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 92E22115C858; Mon, 19 Jul 2004 15:23:13 -0700 (PDT) Date: Mon, 19 Jul 2004 15:23:13 -0700 From: Chris Wedgwood To: Anthony Biacco Cc: linux-xfs@oss.sgi.com Subject: Re: mount: Function not implemented? 
Message-ID: <20040719222313.GA2123@taniwha.stupidest.org> References: <74918D8CA17F7C418753F01078F10B6BD0853B@bill.corporate.quris.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <74918D8CA17F7C418753F01078F10B6BD0853B@bill.corporate.quris.com> X-archive-position: 3670 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 177 Lines: 7 On Mon, Jul 19, 2004 at 04:19:22PM -0600, Anthony Biacco wrote: > Trying to use a blocksize of 16k to match my oracle block size and > HW RAID5 stripe size. don't do that :) From owner-linux-xfs Mon Jul 19 15:29:32 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 15:29:34 -0700 (PDT) Received: from bill.corporate.quris.com (mx1.hq.quris.com [216.150.62.20] (may be forged)) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JMTVAl032177 for ; Mon, 19 Jul 2004 15:29:32 -0700 X-MimeOLE: Produced By Microsoft Exchange V6.5.7226.0 Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Subject: RE: mount: Function not implemented? Date: Mon, 19 Jul 2004 16:29:24 -0600 Message-ID: <74918D8CA17F7C418753F01078F10B6BD08541@bill.corporate.quris.com> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: mount: Function not implemented? thread-index: AcRt3v1skCnc6bX2TmGqVlJ7NrVXhgAAH8Kg From: "Anthony Biacco" To: "Chris Wedgwood" Cc: Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id i6JMTWAl032182 X-archive-position: 3672 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: ABiacco@quris.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 612 Lines: 25 ok, elaborate please. It's a valid parameter, yes? 
From what I understand XFS can do up to 64k blocksizes. Am I mistaken? --Tony ------------------------------ Anthony J. Biacco Systems/Network Administrator Quris, Inc. 720-836-2015 -----Original Message----- From: Chris Wedgwood [mailto:cw@f00f.org] Sent: Monday, July 19, 2004 4:23 PM To: Anthony Biacco Cc: linux-xfs@oss.sgi.com Subject: Re: mount: Function not implemented? On Mon, Jul 19, 2004 at 04:19:22PM -0600, Anthony Biacco wrote: > Trying to use a blocksize of 16k to match my oracle block size and HW > RAID5 stripe size. don't do that :) From owner-linux-xfs Mon Jul 19 15:41:55 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 15:41:57 -0700 (PDT) Received: from mail00hq.adic.com ([63.81.117.10]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JMftoq000372 for ; Mon, 19 Jul 2004 15:41:55 -0700 Received: from mail02hq.adic.com ([172.16.9.18]) by mail00hq.adic.com with Microsoft SMTPSVC(5.0.2195.6713); Mon, 19 Jul 2004 15:41:47 -0700 Received: from [172.16.82.67] ([172.16.82.67]) by mail02hq.adic.com with Microsoft SMTPSVC(5.0.2195.6713); Mon, 19 Jul 2004 15:41:47 -0700 Message-ID: <40FC4DBE.5000109@xfs.org> Date: Mon, 19 Jul 2004 17:39:58 -0500 From: Steve Lord User-Agent: Mozilla Thunderbird 0.7.1 (X11/20040626) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Anthony Biacco CC: Chris Wedgwood , linux-xfs@oss.sgi.com Subject: Re: mount: Function not implemented? 
References: <74918D8CA17F7C418753F01078F10B6BD08541@bill.corporate.quris.com> In-Reply-To: <74918D8CA17F7C418753F01078F10B6BD08541@bill.corporate.quris.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 19 Jul 2004 22:41:47.0806 (UTC) FILETIME=[9335C3E0:01C46DE1] X-archive-position: 3673 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: lord@xfs.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 254 Lines: 10 Anthony Biacco wrote: > ok, elaborate please. It's a valid parameter, yes? From what I > understand XFS can do up to 64k blocksizes. Am I mistaken? Not on linux, it can only do filesystem blocksize upto pagesize. So on ia32 it maxes out at 4K. Steve From owner-linux-xfs Mon Jul 19 15:42:47 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 15:42:49 -0700 (PDT) Received: from bill.corporate.quris.com (bill.corporate.quris.com [216.150.62.20]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JMglWl000615 for ; Mon, 19 Jul 2004 15:42:47 -0700 X-MimeOLE: Produced By Microsoft Exchange V6.5.7226.0 Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Subject: RE: mount: Function not implemented? Date: Mon, 19 Jul 2004 16:42:40 -0600 Message-ID: <74918D8CA17F7C418753F01078F10B6BD08554@bill.corporate.quris.com> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: mount: Function not implemented? 
thread-index: AcRt3v1skCnc6bX2TmGqVlJ7NrVXhgAAoChQ From: "Anthony Biacco" To: Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id i6JMglWl000651 X-archive-position: 3674 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: ABiacco@quris.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 667 Lines: 30 >FYI linux combines writes to a 128k window on SCSI & IDE...so it writes >128k blocks at a time. But it shouldn't do this with Direct IO, right? Nothing should get buffered. -Tony ------------------------------ Anthony J. Biacco Systems/Network Administrator Quris, Inc. 720-836-2015 -----Original Message----- From: Chris Wedgwood [mailto:cw@f00f.org] Sent: Monday, July 19, 2004 4:23 PM To: Anthony Biacco Cc: linux-xfs@oss.sgi.com Subject: Re: mount: Function not implemented? On Mon, Jul 19, 2004 at 04:19:22PM -0600, Anthony Biacco wrote: > Trying to use a blocksize of 16k to match my oracle block size and HW > RAID5 stripe size. don't do that :) From owner-linux-xfs Mon Jul 19 15:50:13 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 15:50:17 -0700 (PDT) Received: from bill.corporate.quris.com (mx1.hq.quris.com [216.150.62.20] (may be forged)) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JMoCVT001086 for ; Mon, 19 Jul 2004 15:50:13 -0700 X-MimeOLE: Produced By Microsoft Exchange V6.5.7226.0 Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Subject: RE: mount: Function not implemented? Date: Mon, 19 Jul 2004 16:50:05 -0600 Message-ID: <74918D8CA17F7C418753F01078F10B6BD08560@bill.corporate.quris.com> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: mount: Function not implemented? 
thread-index: AcRt4ZQ839mAx6K1Rh2g4bjodutJKQAAB86Q From: "Anthony Biacco" To: "Steve Lord" Cc: Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id i6JMoDVT001088 X-archive-position: 3675 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: ABiacco@quris.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 779 Lines: 31 But it's a 64-bit system. How do people get oracle performing on enterprise class hardware, with linux, with such a low page size? Do you just have to say, the hell with it, and create a raw device? -Tony ------------------------------ Anthony J. Biacco Systems/Network Administrator Quris, Inc. 720-836-2015 -----Original Message----- From: Steve Lord [mailto:lord@xfs.org] Sent: Monday, July 19, 2004 4:40 PM To: Anthony Biacco Cc: Chris Wedgwood; linux-xfs@oss.sgi.com Subject: Re: mount: Function not implemented? Anthony Biacco wrote: > ok, elaborate please. It's a valid parameter, yes? From what I > understand XFS can do up to 64k blocksizes. Am I mistaken? Not on linux, it can only do filesystem blocksize upto pagesize. So on ia32 it maxes out at 4K. Steve From owner-linux-xfs Mon Jul 19 15:56:02 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 15:56:07 -0700 (PDT) Received: from internalmx.vasoftware.com (internalmx1.vasoftware.com [12.152.184.149]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JMu24Z001495 for ; Mon, 19 Jul 2004 15:56:02 -0700 Received: from adsl-67-121-190-215.dsl.sntc01.pacbell.net ([67.121.190.215]:63891 helo=[10.0.0.1]) by internalmx.vasoftware.com with asmtp (Cipher TLSv1:RC4-MD5:128) (Exim 4.22 #1 (Debian)) id 1Bmh35-0006Hu-MX by VAauthid with fixed_plain; Mon, 19 Jul 2004 15:55:56 -0700 Message-ID: <40FC517B.6040307@linux-sxs.org> Date: Mon, 19 Jul 2004 15:55:55 -0700 From: "Net Llama!" 
Organization: HAL V User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8a2) Gecko/20040716 X-Accept-Language: en-us, en MIME-Version: 1.0 To: Anthony Biacco CC: Steve Lord , linux-xfs@oss.sgi.com Subject: Re: mount: Function not implemented? References: <74918D8CA17F7C418753F01078F10B6BD08560@bill.corporate.quris.com> In-Reply-To: <74918D8CA17F7C418753F01078F10B6BD08560@bill.corporate.quris.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-EA-Verified: internalmx.vasoftware.com 1Bmh35-0006Hu-MX 9ddf0ea5cc4bec65ebe85b6ed3385d1e X-archive-position: 3676 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: netllama@linux-sxs.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1224 Lines: 44 Usually i say "the hell with oracle" and use postgresql. ;) On 07/19/2004 03:50 PM, Anthony Biacco wrote: > But it's a 64-bit system. > How do people get oracle performing on enterprise class hardware, with > linux, with such a low page size? > Do you just have to say, the hell with it, and create a raw device? > > -Tony > ------------------------------ > Anthony J. Biacco > Systems/Network Administrator > Quris, Inc. > 720-836-2015 > > -----Original Message----- > From: Steve Lord [mailto:lord@xfs.org] > Sent: Monday, July 19, 2004 4:40 PM > To: Anthony Biacco > Cc: Chris Wedgwood; linux-xfs@oss.sgi.com > Subject: Re: mount: Function not implemented? > > Anthony Biacco wrote: > >>ok, elaborate please. It's a valid parameter, yes? From what I >>understand XFS can do up to 64k blocksizes. Am I mistaken? > > > Not on linux, it can only do filesystem blocksize upto pagesize. > So on ia32 it maxes out at 4K. > > Steve > > > -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ L. 
Friedman netllama@linux-sxs.org Linux Step-by-step & TyGeMo: http://netllama.ipfox.com 15:55:00 up 29 days, 2:37, 3 users, load average: 0.21, 0.23, 0.12 From owner-linux-xfs Mon Jul 19 15:57:28 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 15:57:30 -0700 (PDT) Received: from pimout2-ext.prodigy.net (pimout2-ext.prodigy.net [207.115.63.101]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6JMvSRW001808 for ; Mon, 19 Jul 2004 15:57:28 -0700 Received: from taniwha.stupidest.org (adsl-67-124-119-20.dsl.snfc21.pacbell.net [67.124.119.20]) by pimout2-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6JMvPUK042392; Mon, 19 Jul 2004 18:57:25 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 6608A115C858; Mon, 19 Jul 2004 15:57:25 -0700 (PDT) Date: Mon, 19 Jul 2004 15:57:25 -0700 From: Chris Wedgwood To: Anthony Biacco Cc: Steve Lord , linux-xfs@oss.sgi.com Subject: Re: mount: Function not implemented? Message-ID: <20040719225725.GA2294@taniwha.stupidest.org> References: <74918D8CA17F7C418753F01078F10B6BD08560@bill.corporate.quris.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <74918D8CA17F7C418753F01078F10B6BD08560@bill.corporate.quris.com> X-archive-position: 3677 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 640 Lines: 22 On Mon, Jul 19, 2004 at 04:50:05PM -0600, Anthony Biacco wrote: > But it's a 64-bit system. with a crappy MMU :( that's by far my biggest bitch with x86-64 > How do people get oracle performing on enterprise class hardware, > with linux, with such a low page size? i really don't think you'll see much performance difference between 4k and 16k pages (on a cpu that does allow this) > Do you just have to say, the hell with it, and create a raw device? 
you can try that, but i don't think you'll see a significant performance difference using 4k blocks linux doesn't do IO in the size of the fs' blocksize, raw or otherwise --cw From owner-linux-xfs Mon Jul 19 18:25:30 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 18:25:33 -0700 (PDT) Received: from zok.sgi.com (mtvcafw.sgi.com [192.48.171.6]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6K1PTni009783 for ; Mon, 19 Jul 2004 18:25:30 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by zok.sgi.com (8.12.9/8.12.9/linux-outbound_gateway-1.1) with SMTP id i6K1PLhv001804 for ; Mon, 19 Jul 2004 18:25:22 -0700 Received: from kao2.melbourne.sgi.com (kao2.melbourne.sgi.com [134.14.55.180]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA02904 for ; Tue, 20 Jul 2004 11:25:20 +1000 Received: by kao2.melbourne.sgi.com (Postfix, from userid 16331) id A56BFC2173; Tue, 20 Jul 2004 11:25:17 +1000 (EST) Received: from kao2.melbourne.sgi.com (localhost [127.0.0.1]) by kao2.melbourne.sgi.com (Postfix) with ESMTP id A229A1400F4; Tue, 20 Jul 2004 11:25:17 +1000 (EST) X-Mailer: exmh version 2.6.3_20040314 03/14/2004 with nmh-1.0.4 From: Keith Owens To: Mike Burger Cc: linux-xfs@oss.sgi.com Subject: Re: XFS installer for Fedora 1 In-reply-to: Your message of "Mon, 19 Jul 2004 11:21:08 EST." Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Date: Tue, 20 Jul 2004 11:25:16 +1000 Message-ID: <6095.1090286716@kao2.melbourne.sgi.com> X-archive-position: 3678 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: kaos@sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 383 Lines: 9 On Mon, 19 Jul 2004 11:21:08 -0500 (EST), Mike Burger wrote: >Has anyone tried upgrading a current RH/FC system, that already has XFS in >place, with the stock FC2 CDs? I went RH9 -> FC1 -> FC2 with no problems. 
The systems were already using XFS partitions. AFAICR other users had problems with grub on XFS in FC2, but I use lilo so who cares :) From owner-linux-xfs Mon Jul 19 19:08:52 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 19:09:02 -0700 (PDT) Received: from relay03.roc.ny.frontiernet.net (relay03.roc.ny.frontiernet.net [66.133.131.36]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6K28n2U011337 for ; Mon, 19 Jul 2004 19:08:52 -0700 Received: (qmail 5772 invoked from network); 20 Jul 2004 02:08:47 -0000 Received: from 208-186-10-249.nrp1.brv.mn.frontiernet.net (HELO [192.168.1.102]) ([208.186.10.249]) (envelope-sender ) by relay03.roc.ny.frontiernet.net (FrontierMTA 2.3.23) with SMTP for ; 20 Jul 2004 02:08:47 -0000 Message-ID: <40FC7EB4.3080906@xfs.org> Date: Mon, 19 Jul 2004 21:08:52 -0500 From: Steve Lord User-Agent: Mozilla Thunderbird 0.7 (X11/20040615) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Anthony Biacco CC: linux-xfs@oss.sgi.com Subject: Re: mount: Function not implemented? References: <74918D8CA17F7C418753F01078F10B6BD08560@bill.corporate.quris.com> In-Reply-To: <74918D8CA17F7C418753F01078F10B6BD08560@bill.corporate.quris.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3679 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: lord@xfs.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 512 Lines: 16 Anthony Biacco wrote: > But it's a 64-bit system. > How do people get oracle performing on enterprise class hardware, with > linux, with such a low page size? > Do you just have to say, the hell with it, and create a raw device? > Raw devices don't do any bigger I/O's, this is merely the unit of allocation used by the filesystem, not the unit of I/O to the disk drives. XFS will still allocate disk space in large contiguous chunks. 
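[Editorial aside: the mount failure that started this thread follows from the constraint Steve describes, that Linux can only mount an XFS filesystem whose blocksize is no larger than the kernel page size. A minimal sketch of that check in Python, purely for illustration; the helper name `validate_blocksize` is hypothetical, and the 512-byte lower bound is XFS's minimum blocksize:]

```python
import os

def validate_blocksize(blocksize: int, pagesize: int) -> bool:
    """Return True if `blocksize` would be mountable as an XFS
    filesystem blocksize on a Linux kernel with the given page size.

    Illustrative only: per this thread, Linux XFS requires a
    power-of-two blocksize, at least 512 bytes, and no larger than
    the page size (4K on ia32).
    """
    power_of_two = blocksize > 0 and (blocksize & (blocksize - 1)) == 0
    return power_of_two and 512 <= blocksize <= pagesize

if __name__ == "__main__":
    page = os.sysconf("SC_PAGESIZE")  # typically 4096 on ia32/x86-64
    for bs in (512, 4096, 16384):
        print(bs, validate_blocksize(bs, page))
```

[With a 4K page size this rejects the 16K blocksize attempted above, which is why mkfs.xfs succeeds but mount fails.]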
Large block sizes I think helped Irix more than they would Linux. Steve From owner-linux-xfs Mon Jul 19 20:44:37 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 20:44:41 -0700 (PDT) Received: from hulk.vianw.pt (hulk.vianw.pt [195.22.31.43]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6K3iaG2018029 for ; Mon, 19 Jul 2004 20:44:37 -0700 Received: from wizy.org (adsl-sul01-2-184.vianw.pt [80.172.1.184]) by hulk.vianw.pt (8.12.11/8.12.11) with ESMTP id i6K3iRDk005487 for ; Tue, 20 Jul 2004 04:44:27 +0100 Received: from wizy (wizy [192.168.0.3]) by wizy.org (Postfix) with ESMTP id 1307D7FD75D for ; Tue, 20 Jul 2004 04:44:22 +0100 (WEST) From: Ricardo Correia To: linux-xfs@oss.sgi.com Subject: Null files reloaded :-) Date: Tue, 20 Jul 2004 04:44:21 +0100 User-Agent: KMail/1.6.2 MIME-Version: 1.0 Content-Disposition: inline Content-Type: text/plain; charset="iso-8859-1" Message-Id: <200407200444.21761.wizeman@wizy.org> Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id i6K3ibG2018034 X-archive-position: 3680 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: wizeman@wizy.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 2775 Lines: 58 Hi (sorry about the long post, but please read until the end), I think you are doing a *really* great job bringing XFS to Linux. But I think you know what I'm about to talk ;) Generally, I never had any problems with XFS (I've been using it since 2.4.x, and I'm using vanilla 2.6.7 now). However there's still one thing where I think it doesn't work very well: when there's a power failure or a kernel panic/freeze. A few days ago, I've been trying to make a flaky driver work with my ADSL Modem, which at this point crashes every hour or so. Well, out of 10 crashes, at least 9 times I lose a file.. 
In 2 days I've lost all my aMule downloads and configuration, most of my KDE configuration (several times), uptime records (I think it was supposed to be crash-resistant arghh..), and other things which I can't remember right now. The last time this happened, I lost all of the above in the same reboot.. which is why I stopped testing the damn thing!

As you can imagine, I wasn't very happy.. The problem is that unfortunately I will have to do this again in a few days, for a longer period (!).

The thing I'm trying to understand is why does this happen with XFS? I know it can happen with any filesystem, but the fact is that it doesn't happen nearly as often as it does with XFS.

Well, for example, the aMule program uses a small file called xxx.part.met to track downloaded parts, which usually has about 200-800 bytes. I tried to use the trick you mentioned (chattr +S, if I'm not mistaken) with these files, because these were the ones which after the reboot became 0-length. The problem is that after about a minute or so, lsattr shows that the file doesn't have the attribute anymore. So I suppose these files get deleted and recreated every so often.

So does this really happen because XFS only journals metadata? From what I understand, XFS puts small files inside the inode, when they fit (ls -sh shows 0 KB used). If the .met file gets deleted and recreated almost instantly, weren't both changes supposed to happen on-disk (kind of) simultaneously in the periodic disk-sync? Even with a journaled filesystem? It would make sense for the file content to be lost only when the power is cut between these (quick) changes, not every time! Or does XFS truncate the file immediately on-disk after unlink()? I don't think that makes much sense.

So why does this happen? Is it for security reasons? I don't think it's that.. there are lots of single-user systems out there which don't need that. Or is it really by design? Can't it be changed (optionally, if you must)?
I think the 'chattr +S', even if it works for whole directories, cannot be the solution for this. This will be very frustrating, as you can imagine.. Help! :-) From owner-linux-xfs Mon Jul 19 22:23:52 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 22:23:54 -0700 (PDT) Received: from gusi.leathercollection.ph (gusi.leathercollection.ph [202.163.192.10]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6K5Np5Z022650 for ; Mon, 19 Jul 2004 22:23:52 -0700 Received: from localhost (lawin.alabang.leathercollection.ph [192.168.0.2]) by gusi.leathercollection.ph (Postfix) with ESMTP id 1CD0D88ACFE for ; Tue, 20 Jul 2004 13:23:48 +0800 (PHT) Received: from lawin.alabang.leathercollection.ph (lawin.alabang.leathercollection.ph [192.168.0.2]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by gusi.leathercollection.ph (Postfix) with ESMTP id 8B8B288A668 for ; Tue, 20 Jul 2004 13:23:41 +0800 (PHT) Received: by lawin.alabang.leathercollection.ph (Postfix, from userid 1000) id 3CB42A54F1B5; Tue, 20 Jul 2004 13:23:40 +0800 (PHT) Date: Tue, 20 Jul 2004 13:23:39 +0800 From: Federico Sevilla III To: linux-xfs@oss.sgi.com Subject: Re: Null files reloaded :-) Message-ID: <20040720052339.GN28157@leathercollection.ph> Mail-Followup-To: linux-xfs@oss.sgi.com References: <200407200444.21761.wizeman@wizy.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200407200444.21761.wizeman@wizy.org> X-Organization: The Leather Collection, Inc. 
X-Organization-URL: http://www.leathercollection.ph X-Personal-URL: http://jijo.free.net.ph User-Agent: Mutt/1.5.6+20040523i X-Virus-Scanned: by amavisd-new-20030616-p9 (Debian) at leathercollection.ph X-archive-position: 3681 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: jijo@free.net.ph Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 2198 Lines: 46 On Tue, Jul 20, 2004 at 04:44:21AM +0100, Ricardo Correia wrote: > Well, out of 10 crashes, at least 9 times I lose a file.. In 2 days > I've lost all my aMule downloads and configuration, most of my KDE > configuration (several times), uptime records (I think it was supposed > to be crash-resistant arghh..), and other things which I can't > remember right now. The last time this happened, I've lost all of > those above in the same reboot.. which is why I stopped testing the > damn thing! > > As you can imagine, I wasn't very happy.. The problem is that > unfortunately I will have to do this again in a few days, for a longer > period (!). Like you, I've been using XFS for awhile now, and continue to recommend it to all my clients who need better IO/filesystem performance than ext3 can provide. Like you, the null file issue is a pet peeve. Of course production servers hardly go down, so the null file issue is hardly an issue there. I use XFS on my workstations as well, though, and sometimes a crash here and there makes me lose stuff and then I get pissed. So I'd really love to know if anything can be done beyond what had already been done right when we hit Linux 2.4.18 and XFS 1.1... but in general the benefits of XFS outweigh this hassle, and so I'm a pretty happy camper. :) > So why does this happen? Is it for security reasons? I don't think > it's that.. there are lots of single-user systems out there which > don't need that. Or is it really by design? Can't it be changed > (optionally, if you must)? 
> > I think the 'chattr +S', even if it works for whole directories, > cannot be the solution for this. Since you already know you'll be doing a lot of testing that will cause crashes, you probably want to be running with the sync mount option on at least while you're in the middle of testing. That's like having 'chattr +S' on the entire filesystem. It slows things down a lot, I'm sure, but it should help while you're testing. You can go back to normal operations when things stabilize. :) --> Jijo -- Federico Sevilla III : jijo.free.net.ph : When we speak of free software GNU/Linux Specialist : GnuPG 0x93B746BE : we refer to freedom, not price. From owner-linux-xfs Mon Jul 19 22:54:35 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 22:54:58 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6K5sZH1023834 for ; Mon, 19 Jul 2004 22:54:35 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6K5sZNd023833 for linux-xfs@oss.sgi.com; Mon, 19 Jul 2004 22:54:35 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6K5sY2P023821 for ; Mon, 19 Jul 2004 22:54:34 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6K5dNhZ023413; Mon, 19 Jul 2004 22:39:23 -0700 Date: Mon, 19 Jul 2004 22:39:23 -0700 Message-Id: <200407200539.i6K5dNhZ023413@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 346] Fix for xfs_fsr crash on DEC Alpha X-Bugzilla-Reason: AssignedTo X-archive-position: 3682 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 767 Lines: 29 http://oss.sgi.com/bugzilla/show_bug.cgi?id=346 tes@sgi.com changed: What |Removed |Added 
---------------------------------------------------------------------------- Status|NEW |ASSIGNED ------- Additional Comments From tes@sgi.com 2004-19-07 22:39 PDT ------- Hi Jan-Jaap, Thanks for your report and suggested fix. It seems a better idea to me to take the latter approach that you mentioned, and make the passed in parameter to be the address of a __s32 variable as was done in common/util.c. I'll check in for xfs_fsr.c and dump/content.c shortly. Regards, Tim. ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. From owner-linux-xfs Mon Jul 19 23:09:38 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 23:09:41 -0700 (PDT) Received: from pimout1-ext.prodigy.net (pimout1-ext.prodigy.net [207.115.63.77]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6K69cGg024393 for ; Mon, 19 Jul 2004 23:09:38 -0700 Received: from taniwha.stupidest.org (adsl-67-124-119-20.dsl.snfc21.pacbell.net [67.124.119.20]) by pimout1-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6K69YKn269542; Tue, 20 Jul 2004 02:09:35 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 41190115C85E; Mon, 19 Jul 2004 23:09:34 -0700 (PDT) Date: Mon, 19 Jul 2004 23:09:34 -0700 From: Chris Wedgwood To: Ricardo Correia Cc: linux-xfs@oss.sgi.com Subject: Re: Null files reloaded :-) Message-ID: <20040720060934.GA8839@taniwha.stupidest.org> References: <200407200444.21761.wizeman@wizy.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200407200444.21761.wizeman@wizy.org> X-archive-position: 3683 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 3381 Lines: 102 On Tue, Jul 20, 2004 at 04:44:21AM +0100, Ricardo Correia wrote: > Well, out of 10 crashes, at least 9 times I lose a file.. 
this at some level becomes a religious debate between "bad fs" and "badly behaved application", ignoring the fact that there were no "good fs's" two years ago and many people still have to use 'bad fs'

> In 2 days I've lost all my aMule downloads and configuration, most
> of my KDE configuration (several times)

KDE is known to be bad here; there is a trivial fix the KDE people could make, but they basically said "use a better fs" or something silly (by sane fs they seem to mean ext3, and that's not an option for many many people, all the world is not Linux, etc).

> The thing I'm trying to understand is why does this happen with XFS?
> I know it can happen with any filesystem, but the fact is that it
> doesn't happen nearly as many times as it does with XFS.

ext3 by default shouldn't show this. ext2 probably will; not sure about others.

XFS *might* be worse than others because of delayed allocation; i'm still mulling over how much difference that should make.

> Well, for example, the aMule program uses a small file called
> xxx.part.met to track downloaded parts, which usually has about
> 200-800 bytes. I tried to use the trick you mentioned (chattr +S,
> if I'm not mistaken) with these files, because these were the ones
> which after the reboot became 0-length.

maybe aMule writes a new file and renames? that would certainly cause problems

> The problem is that after about a minute or so, lsattr shows that
> the file doesn't have the attribute anymore. So I suppose these
> files gets deleted and recreated every so often.

sounds like it. which is lame, because if aMule is doing that it could fsync the new file before the rename and you wouldn't have any problems

> So does this really happen because XFS only journals metadata?

yes

> From what I understand, XFS puts small files inside the inode, when
> it fits (ls -sh shows 0 KB used).
only metadata; file data cannot be put into the inode no matter how small it is

> Or does XFS truncate the file immediately on-disk after unlink()? I
> don't think that makes much sense.

sounds like:

  old file foo on disk, all safe
  new file bar is written
  metadata on disk, file data in ram [*]
  rename bar to foo
  old file unlinked, new file in place but data not flushed yet

now, if there was an fsync at [*] it would work just fine

> So why does this happen? Is it for security reasons? I don't think
> it's that..

it is

> there are lots of single-user systems out there which don't need
> that. Or is it really by design? Can't it be changed (optionally,
> if you must)?

no, because the data was never flushed to disk, so changing the behavior won't help you there --- since there is no data to access if you did it differently

> I think the 'chattr +S', even if it works for whole directories,
> cannot be the solution for this.

for the most part it works, but in truth for many applications it papers over the problem and could still have nasty consequences

ideally, well-written applications don't have this problem, but once again we just get into some religious discussion about what an fs should do and what it shouldn't do, etc.
and how much more clueless we want to allow application writers to be --cw From owner-linux-xfs Mon Jul 19 23:11:57 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 19 Jul 2004 23:11:58 -0700 (PDT) Received: from pimout2-ext.prodigy.net (pimout2-ext.prodigy.net [207.115.63.101]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6K6BuFv024765 for ; Mon, 19 Jul 2004 23:11:56 -0700 Received: from taniwha.stupidest.org (adsl-67-124-119-20.dsl.snfc21.pacbell.net [67.124.119.20]) by pimout2-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6K6BrUK187398 for ; Tue, 20 Jul 2004 02:11:53 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 3CE5F115C85E; Mon, 19 Jul 2004 23:11:53 -0700 (PDT) Date: Mon, 19 Jul 2004 23:11:53 -0700 From: Chris Wedgwood To: linux-xfs@oss.sgi.com Subject: Re: Null files reloaded :-) Message-ID: <20040720061153.GB8839@taniwha.stupidest.org> References: <200407200444.21761.wizeman@wizy.org> <20040720052339.GN28157@leathercollection.ph> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040720052339.GN28157@leathercollection.ph> X-archive-position: 3684 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 594 Lines: 20 On Tue, Jul 20, 2004 at 01:23:39PM +0800, Federico Sevilla III wrote: > Since you already know you'll be doing a lot of testing that will > cause crashes, you probably want to be running with the sync mount > option on at least while you're in the middle of testing. 
testing shouldn't cause crashes --- if it does, it's a bug and should be reported

plenty of people beat on XFS very hard on very large filesystems and don't see crashes

and testing with 'mount -o sync' isn't usually helpful: the performance is abysmal and it exercises different code paths, so it's very much less useful

--cw From owner-linux-xfs Tue Jul 20 02:46:17 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 02:46:20 -0700 (PDT) Received: from pimout1-ext.prodigy.net (pimout1-ext.prodigy.net [207.115.63.77]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6K9kHwL026482 for ; Tue, 20 Jul 2004 02:46:17 -0700 Received: from taniwha.stupidest.org (adsl-67-124-119-20.dsl.snfc21.pacbell.net [67.124.119.20]) by pimout1-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6K9kDKn074830; Tue, 20 Jul 2004 05:46:14 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 9B0F6115C783; Tue, 20 Jul 2004 02:46:13 -0700 (PDT) Date: Tue, 20 Jul 2004 02:46:13 -0700 From: Chris Wedgwood To: Ricardo Correia Cc: linux-xfs@oss.sgi.com Subject: Re: Null files reloaded :-) Message-ID: <20040720094613.GA12515@taniwha.stupidest.org> References: <200407200444.21761.wizeman@wizy.org> <20040720060934.GA8839@taniwha.stupidest.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040720060934.GA8839@taniwha.stupidest.org> X-archive-position: 3685 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 518 Lines: 15 On Mon, Jul 19, 2004 at 11:09:34PM -0700, Chris Wedgwood wrote:

> KDE is known to be bad here, there is a trivial fix the KDE people
> could make but they basically said "use a better fs" or something
> silly (by sane fs they seem to mean ext3 and that's not an option
> for many many people, all the world is not Linux, etc).
that said, it seems they did listen several days ago when this was pointed out: http://webcvs.kde.org/cgi-bin/cvsweb.cgi/kdelibs/kdecore/ktempfile.cpp.diff?r1=1.30&r2=1.31 --cw From owner-linux-xfs Tue Jul 20 04:30:29 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 04:30:44 -0700 (PDT) Received: from msi2.arz.co.at (msi2.arz.co.at [193.110.182.34]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6KBURKG032418 for ; Tue, 20 Jul 2004 04:30:28 -0700 Received: from 10.1.19.31(c860031f.m286) by msi2.arz.co.at via phion mailgw id 20040720-113012-02816-00; Tue Jul 20 11:30:12 2004 To: jfs-discussion@www-124.southbury.usf.ibm.com, linux-xfs@oss.sgi.com MIME-Version: 1.0 X-Mailer: Lotus Notes Release 6.5.1 January 21, 2004 Message-ID: From: matthias.hofer@arz.co.at Date: Tue, 20 Jul 2004 13:28:08 +0200 Subject: JFS or XFS for GNU/Linux production Server? X-MIMETrack: Serialize by Router on LN000P50/SRV/ARZ-Com/AT(Release 6.0.3|September 26, 2003) at 20.07.2004 13:30:12, Serialize complete at 20.07.2004 13:30:12 Content-Type: text/plain; charset="US-ASCII" X-archive-position: 3686 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: matthias.hofer@arz.co.at Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 922 Lines: 24 Hi, I'm working in a quite large computing center as Systems Designer for UNIX and UNIX-like Systems. We largely use IBM's AIX and a bit of Sun's Solaris. Now we want to bring GNU/Linux broadly to our datacenter and application servers. We chose "SuSE Linux Enterprise Server 8" because it includes IBM JFS and SGI XFS support. On AIX, LVM is everywhere and the filesystems can be increased (and now shrunk) in size while mounted. The ability to grow a mounted filesystem is therefore an important feature for us on Linux. But we don't know which one of XFS and JFS to take. Does one of them have advantages that make it better for servers?
I mean, I know that JFS for Linux was the base for JFS2 for AIX, but as I read the JFS for Linux mailing list I don't think this is a feature that could help us. Anyway, thank you very much in advance for your help Matthias Hofer Allgemeines Rechenzentrum/System UNIX http://www.arz.co.at/ From owner-linux-xfs Tue Jul 20 05:23:56 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 05:23:59 -0700 (PDT) Received: from lucidpixels.com (qmailr@lucidpixels.com [66.45.37.187]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6KCNtvn000950 for ; Tue, 20 Jul 2004 05:23:55 -0700 Received: (qmail 15366 invoked by uid 1002); 20 Jul 2004 12:23:51 -0000 Received: from localhost (sendmail-bs@127.0.0.1) by localhost with SMTP; 20 Jul 2004 12:23:51 -0000 Date: Tue, 20 Jul 2004 08:23:51 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p500 To: matthias.hofer@arz.co.at cc: jfs-discussion@www-124.southbury.usf.ibm.com, linux-xfs@oss.sgi.com Subject: Re: JFS or XFS for GNU/Linux production Server? In-Reply-To: Message-ID: References: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-archive-position: 3687 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1257 Lines: 35 Hi, here is a review of all the journaling filesystems I did a while ago. http://209.81.41.149/~jpiszcz/ I believe XFS is more mature than JFS in Linux. This is just my opinion, however, formed from reading LKML and other related mailing lists. On Tue, 20 Jul 2004 matthias.hofer@arz.co.at wrote: > Hi, > > I'm working in a quite large computing center as Systems Designer for UNIX > and UNIX-like Systems. We largely use IBMs AIX and a bit of Sun's Solaris. > > Now we want to bring GNU/Linux broadly to our datacenter and application > servers.
We chose "SuSE Linux Enterprise Server 8" for including IBM JFS > and SGI XFS support. > On AIX, LVM is everywhere and the Filesystems can be increased (and now > shrinked) in size when mounted. The Filesystem Increasement therefore is > an important feature for us on Linux. > > But we don't know which one of XFS and JFS to take. Does one of them have > advantages that makes it better for servers? > I mean, I know that JFS for Linux was the base for JFS2 for AIX, but as I > read the JFS for Linux mailinglist I don't think, this is a feature that > could help us. > > Anyway, thank you very much for help in advance > > Matthias Hofer > Allgemeines Rechenzentrum/System UNIX > http://www.arz.co.at/ > > From owner-linux-xfs Tue Jul 20 06:40:50 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 06:41:01 -0700 (PDT) Received: from hulk.vianw.pt (hulk.vianw.pt [195.22.31.43]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6KDeTdw002365 for ; Tue, 20 Jul 2004 06:40:50 -0700 Received: from wizy.org (adsl-sul01-2-184.vianw.pt [80.172.1.184]) by hulk.vianw.pt (8.12.11/8.12.11) with ESMTP id i6KDeEle027174; Tue, 20 Jul 2004 14:40:16 +0100 Received: from wizy (wizy [192.168.0.3]) by wizy.org (Postfix) with ESMTP id A6C387FD75D; Tue, 20 Jul 2004 14:40:04 +0100 (WEST) From: Ricardo Correia To: linux-xfs@oss.sgi.com Subject: Re: Null files reloaded :-) Date: Tue, 20 Jul 2004 14:40:03 +0100 User-Agent: KMail/1.6.2 Cc: Chris Wedgwood References: <200407200444.21761.wizeman@wizy.org> <20040720060934.GA8839@taniwha.stupidest.org> In-Reply-To: <20040720060934.GA8839@taniwha.stupidest.org> MIME-Version: 1.0 Content-Disposition: inline Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Message-Id: <200407201440.03713.wizeman@wizy.org> X-archive-position: 3688 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: wizeman@wizy.org Precedence: bulk X-list: linux-xfs 
Status: RO Content-Length: 983 Lines: 32 On Tuesday 20 July 2004 07:09, Chris Wedgwood wrote: > sounds like: > > old file foo on disk, all safe > > new file bar is written metadata on disk, file > data in ram > > [*] > > rename bar to foo old file unlinked, new > file in place but data > not flushed yet > > now, if there was an fsync at [*] it would work just fine > What if, during journal replay, it could recognize this behaviour and use the old file, which is still on-disk (right? I suppose at this point the metadata only gets written to the journal, unless there's a sync, of course)? > > So why does this happen? Is it for security reasons? I don't think > it's that.. > > it is > Here I meant that XFS wouldn't recover the file data if it wasn't sure that its contents were valid, which would be useful in multi-user systems (where a user could accidentally see other users' files), but in single-user systems it doesn't matter. But I guess this isn't the problem. From owner-linux-xfs Tue Jul 20 06:57:45 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 06:57:49 -0700 (PDT) Received: from web90005.mail.scd.yahoo.com (web90005.mail.scd.yahoo.com [66.218.94.63]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6KDvjF6002966 for ; Tue, 20 Jul 2004 06:57:45 -0700 Message-ID: <20040720135738.46416.qmail@web90005.mail.scd.yahoo.com> Received: from [66.94.231.241] by web90005.mail.scd.yahoo.com via HTTP; Tue, 20 Jul 2004 06:57:38 PDT Date: Tue, 20 Jul 2004 06:57:38 -0700 (PDT) From: Samuel Johnson Subject: XFS To: linux-xfs@oss.sgi.com MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-archive-position: 3689 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: lovingsamuel@yahoo.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 377 Lines: 16 Hello Sir, my Linux XFS server has crashed.
When I boot into Linux it gives the error message "could not init font path element unix/:7100". Please help and guide me in reinstalling it. Thanks in advance Samuel __________________________________ Do you Yahoo!? Vote for the stars of Yahoo!'s next ad campaign! http://advision.webevents.yahoo.com/yahoo/votelifeengine/ From owner-linux-xfs Tue Jul 20 07:54:19 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 07:54:50 -0700 (PDT) Received: from pooh.lsc.hu (pooh.lsc.hu [195.56.172.131]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6KEsH2l004860 for ; Tue, 20 Jul 2004 07:54:18 -0700 Received: by pooh.lsc.hu (Postfix, from userid 1004) id 942021D43D; Tue, 20 Jul 2004 16:53:25 +0200 (CEST) Date: Tue, 20 Jul 2004 16:53:25 +0200 From: "Laszlo 'GCS' Boszormenyi" To: Samuel Johnson Cc: linux-xfs@oss.sgi.com Subject: Re: XFS Message-ID: <20040720145325.GA3788@pooh> References: <20040720135738.46416.qmail@web90005.mail.scd.yahoo.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040720135738.46416.qmail@web90005.mail.scd.yahoo.com> User-Agent: Mutt/1.5.4i X-Whitelist: OK X-archive-position: 3690 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: gcs@lsc.hu Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 577 Lines: 17 Dear Mr. Johnson, * Samuel Johnson [2004-07-20 06:57:38 -0700]: > my linux xfs server has got crashed. when i boot in > linux it gives an error message could not init font > path element unix/:7100. please help and guide me in > reinstalling it. You chose the wrong list. This XFS is the filesystem of SGI, ported to Linux. What you are looking for is the X Font Server. The latter is off-topic here, but if you cannot find another list, I may help you out. But please give me more details then: distribution, its version, etc.
Regards, Laszlo From owner-linux-xfs Tue Jul 20 08:45:40 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 08:45:42 -0700 (PDT) Received: from bill.corporate.quris.com (mx1.hq.quris.com [216.150.62.20] (may be forged)) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6KFjetc010172 for ; Tue, 20 Jul 2004 08:45:40 -0700 X-MimeOLE: Produced By Microsoft Exchange V6.5.7226.0 Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Subject: RE: mount: Function not implemented? Date: Tue, 20 Jul 2004 09:45:32 -0600 Message-ID: <74918D8CA17F7C418753F01078F10B6BD08616@bill.corporate.quris.com> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: mount: Function not implemented? thread-index: AcRt/n5EZAUkFI9QSjeQ4flGpQRGBwAcYSkg From: "Anthony Biacco" To: "Steve Lord" Cc: Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id i6KFjetc010174 X-archive-position: 3691 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: ABiacco@quris.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1365 Lines: 48 >Raw devices don't do any bigger I/O's, this is merely the unit of allocation used by the filesystem, not the unit of >I/O to the disk drives. XFS will still allocate disk space in large contiguous chunks. They ALLOW bigger I/Os. Raw devices aren't limited by the page size or the OS cache. That's the whole purpose of the raw device. I could use them, but I don't want the maintenance nightmare of 1 oracle DB file per device. >Large block sizes I think helped Irix more than they would Linux. Agreed. *sigh* maybe I'll check out OCFS. -Tony ------------------------------ Anthony J. Biacco Systems/Network Administrator Quris, Inc. 
720-836-2015 -----Original Message----- From: Steve Lord [mailto:lord@xfs.org] Sent: Monday, July 19, 2004 8:09 PM To: Anthony Biacco Cc: linux-xfs@oss.sgi.com Subject: Re: mount: Function not implemented? Anthony Biacco wrote: > But it's a 64-bit system. > How do people get oracle performing on enterprise class hardware, with > linux, with such a low page size? > Do you just have to say, the hell with it, and create a raw device? > Raw devices don't do any bigger I/O's, this is merely the unit of allocation used by the filesystem, not the unit of I/O to the disk drives. XFS will still allocate disk space in large contiguous chunks. Large block sizes I think helped Irix more than they would Linux. Steve From owner-linux-xfs Tue Jul 20 09:01:13 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 09:01:15 -0700 (PDT) Received: from mail00hq.adic.com (mail00hq.adic.com [63.81.117.10] (may be forged)) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6KG1CUC011874 for ; Tue, 20 Jul 2004 09:01:13 -0700 Received: from mail02hq.adic.com ([172.16.9.18]) by mail00hq.adic.com with Microsoft SMTPSVC(5.0.2195.6713); Tue, 20 Jul 2004 09:01:01 -0700 Received: from [172.16.82.67] ([172.16.82.67]) by mail02hq.adic.com with Microsoft SMTPSVC(5.0.2195.6713); Tue, 20 Jul 2004 09:01:01 -0700 Message-ID: <40FD414E.3080102@xfs.org> Date: Tue, 20 Jul 2004 10:59:10 -0500 From: Steve Lord User-Agent: Mozilla Thunderbird 0.7.1 (X11/20040626) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Anthony Biacco CC: linux-xfs@oss.sgi.com Subject: Re: mount: Function not implemented? 
References: <74918D8CA17F7C418753F01078F10B6BD08616@bill.corporate.quris.com> In-Reply-To: <74918D8CA17F7C418753F01078F10B6BD08616@bill.corporate.quris.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 20 Jul 2004 16:01:01.0658 (UTC) FILETIME=[C10083A0:01C46E72] X-archive-position: 3692 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: lord@xfs.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1897 Lines: 56 Anthony Biacco wrote: > > > >>Raw devices don't do any bigger I/O's, this is merely the unit of > > allocation used by the filesystem, not the unit of > >>I/O to the disk drives. XFS will still allocate disk space in large > > contiguous chunks. > > They ALLOW bigger I/Os. > Raw devices aren't limited by the page size or the OS cache. That's the > whole purpose of the raw device. I could use them, but I don't want the > maintenance nightmare of 1 oracle DB file per device. If you read the linux kernel code, you will see that raw devices are in exactly the same boat as the filesystem is when it comes to actual I/Os. Data is submitted to the block device in page sized chunks from both. Up until a recent patch, which I think is only in the 2.6-mm tree right now, the order memory was allocated in meant that the chances of two pages of memory in a user application being physically contiguous, and hence mergeable into a single dma scatter gather element, were minimal to say the least. The upshot of all of this is that if you submit a 100 Mbyte I/O from user space using raw I/O, or O_DIRECT in a filesystem like XFS which probably puts the whole 100 Mbytes in one spot on disk, then because of memory allocation and the linux block layer you end up splitting it into SCSI commands which contain at most 128 pages each (you run out of scatter gather elements).
Given a large enough memory and channel bandwidth, you are much more likely to saturate your controller than the channel bandwidth. Oh, and O_DIRECT and raw device access will actually chop that large I/O up anyway because they both use the same code to limit how much user memory you can have pinned down for I/O at once. > > >>Large block sizes I think helped Irix more than they would Linux. > > > Agreed. > > *sigh* maybe I'll check out OCFS. > Which is sitting on exactly the same infrastructure here. Steve From owner-linux-xfs Tue Jul 20 09:27:23 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 09:27:26 -0700 (PDT) Received: from burgers.bubbanfriends.org (IDENT:Mycd6E+vKUYVBE5FF6PNTojikYAcO/A0@burgers.bubbanfriends.org [69.212.163.241]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6KGRMhn012714 for ; Tue, 20 Jul 2004 09:27:22 -0700 Received: from localhost (burgers.bubbanfriends.org [127.0.0.1]) by burgers.bubbanfriends.org (Postfix) with ESMTP id 13A50142101D; Tue, 20 Jul 2004 11:27:16 -0500 (EST) Received: from burgers.bubbanfriends.org ([127.0.0.1]) by localhost (burgers.bubbanfriends.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 25192-07; Tue, 20 Jul 2004 11:27:15 -0500 (EST) Received: by burgers.bubbanfriends.org (Postfix, from userid 500) id 86C24142101A; Tue, 20 Jul 2004 11:27:15 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by burgers.bubbanfriends.org (Postfix) with ESMTP id 814363006541; Tue, 20 Jul 2004 11:27:15 -0500 (EST) Date: Tue, 20 Jul 2004 11:27:15 -0500 (EST) From: Mike Burger To: Keith Owens Cc: linux-xfs@oss.sgi.com Subject: Re: XFS installer for Fedora 1 In-Reply-To: <6095.1090286716@kao2.melbourne.sgi.com> Message-ID: References: <6095.1090286716@kao2.melbourne.sgi.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Virus-Scanned: by amavisd-new at bubbanfriends.org X-archive-position: 3693 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: 
linux-xfs-bounce@oss.sgi.com X-original-sender: mburger@bubbanfriends.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 929 Lines: 30 On Tue, 20 Jul 2004, Keith Owens wrote: > On Mon, 19 Jul 2004 11:21:08 -0500 (EST), > Mike Burger wrote: > >Has anyone tried upgrading a current RH/FC system, that already has XFS in > >place, with the stock FC2 CDs? > > I went RH9 -> FC1 -> FC2 with no problems. The systems were already > using XFS partitions. AFAICR other users had problems with grub on XFS > in FC2, but I use lilo so who cares :) The system in question also uses LILO and the /boot filesystem is ext2, so Grub wouldn't really be an issue in either case, as far as I know. -- Mike Burger http://www.bubbanfriends.org Visit the Dog Pound II BBS telnet://dogpound2.citadel.org or http://dogpound2.citadel.org:2000 To be notified of updates to the web site, visit http://www.bubbanfriends.org/mailman/listinfo/site-update, or send a message to: site-update-request@bubbanfriends.org with a message of: subscribe From owner-linux-xfs Tue Jul 20 09:50:37 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 09:50:43 -0700 (PDT) Received: from web20426.mail.yahoo.com (web20426.mail.yahoo.com [66.163.170.249]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6KGobXR013427 for ; Tue, 20 Jul 2004 09:50:37 -0700 Message-ID: <20040720165035.43899.qmail@web20426.mail.yahoo.com> Received: from [128.101.189.160] by web20426.mail.yahoo.com via HTTP; Tue, 20 Jul 2004 09:50:35 PDT Date: Tue, 20 Jul 2004 09:50:35 -0700 (PDT) From: Peter Lu Subject: dmapi app cannot receive events To: linux-xfs@oss.sgi.com MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-archive-position: 3694 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: peterlu_2000@yahoo.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 582 Lines: 18 Hello, I just installed xfs and dmapi on red-hat 
linux 9. I wrote some simple code to play with it. Initially it worked fine. Then I tweaked the code, somehow the xfs file system was unavailable and any access got stuck. I rebooted the system and mounted it back. The xfs system is accessible, but the dmapi app cannot receive any event. Even when I tried the previous code which worked fine before, it no longer works. Any idea? Thanks! __________________________________ Do you Yahoo!? New and Improved Yahoo! Mail - Send 10MB messages! http://promotions.yahoo.com/new_mail From owner-linux-xfs Tue Jul 20 09:54:38 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 09:54:52 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6KGsctu013781 for ; Tue, 20 Jul 2004 09:54:38 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6KGscd9013780 for linux-xfs@oss.sgi.com; Tue, 20 Jul 2004 09:54:38 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6KGsbju013766 for ; Tue, 20 Jul 2004 09:54:37 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6KGZmqF013179; Tue, 20 Jul 2004 09:35:48 -0700 Date: Tue, 20 Jul 2004 09:35:48 -0700 Message-Id: <200407201635.i6KGZmqF013179@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 347] New: XFS clearing the disk X-Bugzilla-Reason: AssignedTo X-archive-position: 3695 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 842 Lines: 28 http://oss.sgi.com/bugzilla/show_bug.cgi?id=347 Summary: XFS clearing the disk Product: Linux XFS Version: unspecified Platform: All OS/Version: Linux Status: NEW Severity: critical Priority: High Component: xfsprogs AssignedTo: xfs-master@oss.sgi.com ReportedBy:
christopher.g.dorosky@lmco.com I have heard a complaint (from a customer) of an XFS disk being filled with data, working, then mysteriously being completely cleared. the df command shows the correct amount of space taken up (when there were files) but the directory is completely empty. Is this a known bug that was fixed in a later version? ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. From owner-linux-xfs Tue Jul 20 10:07:07 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 10:07:10 -0700 (PDT) Received: from zok.sgi.com (mtvcafw.sgi.com [192.48.171.6]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6KH760D014360 for ; Tue, 20 Jul 2004 10:07:07 -0700 Received: from flecktone.americas.sgi.com (flecktone.americas.sgi.com [192.48.203.135]) by zok.sgi.com (8.12.9/8.12.9/linux-outbound_gateway-1.1) with ESMTP id i6KH6xhv006409 for ; Tue, 20 Jul 2004 10:06:59 -0700 Received: from tulip-e236.americas.sgi.com (tulip-e236.americas.sgi.com [128.162.236.208]) by flecktone.americas.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id i6KH6xOV43304158; Tue, 20 Jul 2004 12:06:59 -0500 (CDT) Received: from sgi.com (chewtoy.americas.sgi.com [128.162.233.33]) by tulip-e236.americas.sgi.com (8.12.9/SGI-server-1.8) with ESMTP id i6KH6w8191185258; Tue, 20 Jul 2004 12:06:58 -0500 (CDT) Message-Id: <200407201706.i6KH6w8191185258@tulip-e236.americas.sgi.com> To: Peter Lu cc: linux-xfs@oss.sgi.com Subject: Re: dmapi app cannot receive events Date: Tue, 20 Jul 2004 12:06:58 -0500 From: Dean Roehrich X-archive-position: 3696 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: roehrich@sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1328 Lines: 32 >From: Peter Lu >Hello, I just installed xfs and dmapi on red-hat linux >9. I wrote some simple code to play with it. Initially >it worked fine. 
Then I tweaked the code, somehow the >xfs file system was unavailable and any access got >stuck. I rebooted the system and mounted it back. >The xfs system is accessible, but the dmapi app cannot >receive any event. Even when I tried the previous code >which worked fine before, it no longer works. Any >idea? Thanks! What did you tweak? Why did the XFS filesystem become unavailable? What did the system log say? What is the mount commandline that you are using? Did the call to dm_init_service() succeed? Do you have a dmapi device under /proc/fs or under /proc/fs/xfs? What is its name? Does the /lib/libdm.so.0 library look for that same file (run 'strings' on the library and grep for dmapi)? Under /proc/fs or under /proc/fs/xfs there should be a directory with "dmapi" in its name. After your filesystem is mounted check the contents of all the files under that directory. Maybe show us your "simple code". Do you have dmapi and XFS built-in to the kernel or are they modules? If modules, did all of them load? Is there anything in the system log about DMAPI, XFS, or your filesystem? Is the machine plugged-in and turned on?
:) Dean From owner-linux-xfs Tue Jul 20 10:54:38 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 10:54:41 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6KHscAQ015375 for ; Tue, 20 Jul 2004 10:54:38 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6KHscuS015374 for linux-xfs@oss.sgi.com; Tue, 20 Jul 2004 10:54:38 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6KHsb3w015360 for ; Tue, 20 Jul 2004 10:54:37 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6KH2BKJ014230; Tue, 20 Jul 2004 10:02:11 -0700 Date: Tue, 20 Jul 2004 10:02:11 -0700 Message-Id: <200407201702.i6KH2BKJ014230@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 347] XFS clearing the disk X-Bugzilla-Reason: AssignedTo X-archive-position: 3697 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 882 Lines: 28 http://oss.sgi.com/bugzilla/show_bug.cgi?id=347 lord@xfs.org changed: What |Removed |Added ---------------------------------------------------------------------------- Severity|critical |major ------- Additional Comments From lord@xfs.org 2004-20-07 10:02 PDT ------- Erm, please deposit $10000 in my bank account, then go find the person at the customer who removed everything by mistake and is not owning up to it. Seriously a few more details might help! There is absolutely nothing to go on in what you wrote. Look in the system log for messages for starters. Chances are the filesystem got shutdown on them for some reason - although an accidental rm -r -f is not really impossible. ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. 
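[Editorial aside: the fix Chris Wedgwood describes in the "Null files reloaded" thread above --- an fsync at the "[*]" step, before the rename, which is also what the KDE ktempfile change does --- is the standard crash-safe replace-by-rename pattern. A minimal sketch in Python; the helper name and the ".tmp" suffix are illustrative assumptions, not taken from any message in this thread:]

```python
import os

def atomic_replace(path, data):
    """Replace the contents of `path` so a crash never leaves a null file.

    Write the new contents to a temporary file, fsync it so the data
    blocks (not just the metadata) reach disk, and only then rename over
    the target.  Without the fsync -- the "[*]" step in the thread -- a
    journaling filesystem may replay the rename after a crash while the
    new file's data was still only in RAM, leaving a zero-length file.
    """
    tmp = path + ".tmp"            # illustrative temp-file naming
    with open(tmp, "w") as f:
        f.write(data)
        f.flush()                  # push buffered data to the kernel
        os.fsync(f.fileno())       # force the data to disk before renaming
    os.rename(tmp, path)           # atomic replacement on POSIX filesystems
```

With this ordering a reader sees either the complete old contents or the complete new contents after a crash, never an empty file.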
From owner-linux-xfs Tue Jul 20 11:06:30 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 11:06:31 -0700 (PDT) Received: from pimout3-ext.prodigy.net (pimout3-ext.prodigy.net [207.115.63.102]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6KI6TLP015797 for ; Tue, 20 Jul 2004 11:06:30 -0700 Received: from taniwha.stupidest.org (adsl-67-124-119-20.dsl.snfc21.pacbell.net [67.124.119.20]) by pimout3-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6KI6QlM109698; Tue, 20 Jul 2004 14:06:26 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 67098115C874; Tue, 20 Jul 2004 11:06:25 -0700 (PDT) Date: Tue, 20 Jul 2004 11:06:25 -0700 From: Chris Wedgwood To: Ricardo Correia Cc: linux-xfs@oss.sgi.com Subject: Re: Null files reloaded :-) Message-ID: <20040720180625.GA31713@taniwha.stupidest.org> References: <200407200444.21761.wizeman@wizy.org> <20040720060934.GA8839@taniwha.stupidest.org> <200407201440.03713.wizeman@wizy.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200407201440.03713.wizeman@wizy.org> X-archive-position: 3698 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 290 Lines: 12 On Tue, Jul 20, 2004 at 02:40:03PM +0100, Ricardo Correia wrote: > What if during journal replaying it would recognize this behaviour, > and use the old file, which is still on-disk there is no way to recognise this, and there is no old file (it was unlinked and/or truncated) --cw From owner-linux-xfs Tue Jul 20 11:21:27 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 11:21:30 -0700 (PDT) Received: from hulk.vianw.pt (hulk.vianw.pt [195.22.31.43]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6KILQ7O016392 for ; Tue, 20 Jul 2004 11:21:27 -0700 Received: from wizy.org (adsl-sul01-2-184.vianw.pt 
[80.172.1.184]) by hulk.vianw.pt (8.12.11/8.12.11) with ESMTP id i6KIL9Lr032303; Tue, 20 Jul 2004 19:21:13 +0100 Received: from wizy (wizy [192.168.0.3]) by wizy.org (Postfix) with ESMTP id 7FC767FD75D; Tue, 20 Jul 2004 19:20:59 +0100 (WEST) From: Ricardo Correia To: linux-xfs@oss.sgi.com Subject: Re: Null files reloaded :-) Date: Tue, 20 Jul 2004 19:20:59 +0100 User-Agent: KMail/1.6.2 Cc: Chris Wedgwood References: <200407200444.21761.wizeman@wizy.org> <200407201440.03713.wizeman@wizy.org> <20040720180625.GA31713@taniwha.stupidest.org> In-Reply-To: <20040720180625.GA31713@taniwha.stupidest.org> MIME-Version: 1.0 Content-Disposition: inline Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Message-Id: <200407201920.59109.wizeman@wizy.org> X-archive-position: 3699 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: wizeman@wizy.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 724 Lines: 20 On Tuesday 20 July 2004 19:06, Chris Wedgwood wrote: > there is no way to recognise this, and there is no old file (it was > unlinked and/or truncated) Hmm.. I don't get it.. Consider your previous scenario: 1 - File A is safe on disk 2 - Process writes file B (metadata on disk, file data in RAM) 3 - Process renames B to A 4 - Periodic disk sync (every 5 seconds or so?) Now if power failure occurs after 3, but before 4.. isn't the inode and content of A still on disk? And isn't the directory still pointing to the inode of A? I thought metadata at this point was only written to the journal. If you could enlighten me, it would be appreciated :) I just think it doesn't make much sense (but what do I know?)
:-) From owner-linux-xfs Tue Jul 20 11:31:28 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 11:31:29 -0700 (PDT) Received: from pimout3-ext.prodigy.net (pimout3-ext.prodigy.net [207.115.63.102]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6KIVSXB016760 for ; Tue, 20 Jul 2004 11:31:28 -0700 Received: from taniwha.stupidest.org (adsl-67-124-119-20.dsl.snfc21.pacbell.net [67.124.119.20]) by pimout3-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6KIVOlM118256; Tue, 20 Jul 2004 14:31:25 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 75075115C874; Tue, 20 Jul 2004 11:31:24 -0700 (PDT) Date: Tue, 20 Jul 2004 11:31:24 -0700 From: Chris Wedgwood To: Ricardo Correia Cc: linux-xfs@oss.sgi.com Subject: Re: Null files reloaded :-) Message-ID: <20040720183124.GA32751@taniwha.stupidest.org> References: <200407200444.21761.wizeman@wizy.org> <200407201440.03713.wizeman@wizy.org> <20040720180625.GA31713@taniwha.stupidest.org> <200407201920.59109.wizeman@wizy.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200407201920.59109.wizeman@wizy.org> X-archive-position: 3700 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 627 Lines: 24 On Tue, Jul 20, 2004 at 07:20:59PM +0100, Ricardo Correia wrote: > 1 - File A is safe on disk > 2 - Process writes file B (metadata on disk, file data in RAM) > 3 - Process renames B to A A is gone, the metadata pointing to it is lost. > 4 - Periodic disk sync (every 5 seconds or so?) > Now if power failure occurs after 3, but before 4.. isn't the inode > and content of A still on disk? yes, but we don't know where > And isn't the directory still pointing to the inode of A? I thought > metadata at this point was only written to the journal. 
it is, this means the rename and freeing of disk-blocks for A --cw From owner-linux-xfs Tue Jul 20 11:54:39 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 11:54:41 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6KIscJP017456 for ; Tue, 20 Jul 2004 11:54:38 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6KIscdU017455 for linux-xfs@oss.sgi.com; Tue, 20 Jul 2004 11:54:38 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6KIsb5t017441 for ; Tue, 20 Jul 2004 11:54:37 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6KIA77R016230; Tue, 20 Jul 2004 11:10:07 -0700 Date: Tue, 20 Jul 2004 11:10:07 -0700 Message-Id: <200407201810.i6KIA77R016230@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 347] XFS clearing the disk X-Bugzilla-Reason: AssignedTo X-archive-position: 3701 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1399 Lines: 43 http://oss.sgi.com/bugzilla/show_bug.cgi?id=347 ------- Additional Comments From christopher.g.dorosky@lmco.com 2004-20-07 11:10 PDT ------- Sorry, didn't want to add more until I found out if: anyone would actually read the bug. if there was a very common bug in version 1.2 that erased the disk. Disk type: SCSI Linux version 2.4.something Customer complaint is that with 2 separate disks, that they have had data erasure with "df" not noticing. This is what bothers me. rm -rf would set the disk back to near 0% full. But, if the disk was 65% full (of 163 GB), and was working, and all of a sudden goes to 65% full, with NO FILES AVAILABLE, then this indicates a problem. They say it has happened on two separate disks. 
I am not aware of an easy way to remove files, and fool "df" into thinking that they are still there. This was not clear on the last message, sorry. df STILL shows the same amount (or very close) of space taken up, even when the root disk directory says it is empty. What could happen to the file system, to essentially lose the root entries to its directory structure? Shouldn't the journaling fix this? Should some version of fsck or something similar be run every boot when xfs is installed? They do have the disks mounted rw. ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. From owner-linux-xfs Tue Jul 20 12:50:26 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 12:50:31 -0700 (PDT) Received: from hermes.fachschaften.tu-muenchen.de (hermes.fachschaften.tu-muenchen.de [129.187.202.12]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6KJoO8e021953 for ; Tue, 20 Jul 2004 12:50:25 -0700 Received: (qmail 25204 invoked from network); 20 Jul 2004 19:44:01 -0000 Received: from mimas.fachschaften.tu-muenchen.de (129.187.202.58) by hermes.fachschaften.tu-muenchen.de with QMQP; 20 Jul 2004 19:44:01 -0000 Date: Tue, 20 Jul 2004 21:50:12 +0200 From: Adrian Bunk To: "Jeffrey E.
Hundstad" , Linus Torvalds , Andrew Morton , Steve Lord Cc: linux-xfs@oss.sgi.com, xfs-masters@oss.sgi.com, nathans@sgi.com, Cahya Wirawan , linux-kernel@vger.kernel.org Subject: [2.6 patch] let 4KSTACKS depend on EXPERIMENTAL and XFS on 4KSTACKS=n Message-ID: <20040720195012.GN14733@fs.tum.de> References: <20040720114418.GH21918@email.archlab.tuwien.ac.at> <40FD0A61.1040503@xfs.org> <40FD2E99.20707@mnsu.edu> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <40FD2E99.20707@mnsu.edu> User-Agent: Mutt/1.5.6i X-archive-position: 3702 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bunk@fs.tum.de Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 3494 Lines: 94 On Tue, Jul 20, 2004 at 09:39:21AM -0500, Jeffrey E. Hundstad wrote: > Steve Lord wrote: > > >Don't use 4K stacks and XFS. What you hit here is a path where the > >filesystem is getting full and it needs to free some reserved space > >by flushing cached data which is using reserved extents. Reserved > >extents do not yet have an on disk address and they include a > >reservation for the worst case metadata usage. Flushing them will > >get you room back. > > > >As you can see, it is a pretty deep call stack, most of XFS is going > >to work just fine with a 4K stack, but there are end cases like > >this one which will just not fit. > > > If this is a known truth with XFS maybe it would be a good idea to have > 4K stacks and XFS be an impossible combination using the config tool. The patch below does: 1. let 4KSTACKS depend on EXPERIMENTAL Rationale: 4Kb stacks on i386 are the future. But currently this option might still cause problems in some areas of the kernel. OTOH, 4Kb stacks isn't a big gain for most people. 2.6 is a stable kernel series, and 4KSTACKS=n is the safe choice. Once all issues with 4KSTACKS=y are resolved this can be reverted. 2. 
let XFS depend on (4KSTACKS=n || BROKEN) Rationale: Steve Lord said: Don't use 4K stacks and XFS. Mark this combination as BROKEN until XFS is fixed. This might result in XFS support disappearing for some people, but if they use EXPERIMENTAL=y they should know what they are doing. The 4KSTACKS option has to be moved so that it's asked before XFS in "make config". diffstat output: arch/i386/Kconfig | 19 ++++++++++--------- fs/Kconfig | 1 + 2 files changed, 11 insertions(+), 9 deletions(-) Signed-off-by: Adrian Bunk

--- linux-2.6.8-rc2-full/arch/i386/Kconfig.old	2004-07-20 21:00:32.000000000 +0200
+++ linux-2.6.8-rc2-full/arch/i386/Kconfig	2004-07-20 21:03:30.000000000 +0200
@@ -865,6 +865,16 @@
 	  generate incorrect output with certain kernel constructs when
 	  -mregparm=3 is used.
 
+config 4KSTACKS
+	bool "Use 4Kb for kernel stacks instead of 8Kb"
+	depends on EXPERIMENTAL
+	help
+	  If you say Y here the kernel will use a 4Kb stacksize for the
+	  kernel stack attached to each process/thread. This facilitates
+	  running more threads on a system and also reduces the pressure
+	  on the VM subsystem for higher order allocations. This option
+	  will also use IRQ stacks to compensate for the reduced stackspace.
+
 endmenu
@@ -1289,15 +1299,6 @@
 	  If you don't debug the kernel, you can say N, but we may not be
 	  able to solve problems without frame pointers.
 
-config 4KSTACKS
-	bool "Use 4Kb for kernel stacks instead of 8Kb"
-	help
-	  If you say Y here the kernel will use a 4Kb stacksize for the
-	  kernel stack attached to each process/thread. This facilitates
-	  running more threads on a system and also reduces the pressure
-	  on the VM subsystem for higher order allocations. This option
-	  will also use IRQ stacks to compensate for the reduced stackspace.
-
 config X86_FIND_SMP_CONFIG
 	bool
 	depends on X86_LOCAL_APIC || X86_VOYAGER
--- linux-2.6.8-rc2-full/fs/Kconfig.old	2004-07-20 21:04:02.000000000 +0200
+++ linux-2.6.8-rc2-full/fs/Kconfig	2004-07-20 21:04:25.000000000 +0200
@@ -294,6 +294,7 @@
 config XFS_FS
 	tristate "XFS filesystem support"
+	depends on (4KSTACKS=n || BROKEN)
 	help
 	  XFS is a high performance journaling filesystem which originated
 	  on the SGI IRIX platform. It is completely multi-threaded, can

From owner-linux-xfs Tue Jul 20 13:43:30 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 13:43:32 -0700 (PDT) Received: from pimout2-ext.prodigy.net (pimout2-ext.prodigy.net [207.115.63.101]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6KKhR6o023208 for ; Tue, 20 Jul 2004 13:43:30 -0700 Received: from taniwha.stupidest.org (adsl-67-124-119-20.dsl.snfc21.pacbell.net [67.124.119.20]) by pimout2-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6KKgcUK038388; Tue, 20 Jul 2004 16:42:39 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 3A233115C874; Tue, 20 Jul 2004 13:42:38 -0700 (PDT) Date: Tue, 20 Jul 2004 13:42:38 -0700 From: Chris Wedgwood To: Adrian Bunk Cc: "Jeffrey E.
Hundstad" , Linus Torvalds , Andrew Morton , Steve Lord , linux-xfs@oss.sgi.com, xfs-masters@oss.sgi.com, nathans@sgi.com, Cahya Wirawan , linux-kernel@vger.kernel.org Subject: Re: [2.6 patch] let 4KSTACKS depend on EXPERIMENTAL and XFS on 4KSTACKS=n Message-ID: <20040720204238.GA3051@taniwha.stupidest.org> References: <20040720114418.GH21918@email.archlab.tuwien.ac.at> <40FD0A61.1040503@xfs.org> <40FD2E99.20707@mnsu.edu> <20040720195012.GN14733@fs.tum.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040720195012.GN14733@fs.tum.de> X-archive-position: 3703 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 809 Lines: 22 On Tue, Jul 20, 2004 at 09:50:12PM +0200, Adrian Bunk wrote: > 1. let 4KSTACKS depend on EXPERIMENTAL i don't like this change, despite what i might have claimed earlier :) the reason i say this is if XFS blows up with 4K stacks then it probably can with 8K stacks but it will be much harder, so it's not really fixing anything but just papering over the problem the reason for this is 8K stacks means you don't have separate irq stacks, so if and interrupt comes along at the right time and the codes paths are just right, you can still overflow (arguably you have less overall space than with 4K stacks and separate irq stacks) that said, separate irq stacks *and* 8k thread stacks would be safe, but i'd love to see ideas on how to get the stack utilization down (it's actually really hard) --cw From owner-linux-xfs Tue Jul 20 13:50:42 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 13:50:48 -0700 (PDT) Received: from hermes.fachschaften.tu-muenchen.de (hermes.fachschaften.tu-muenchen.de [129.187.202.12]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6KKoeKW023558 for ; Tue, 20 Jul 2004 13:50:41 -0700 Received: (qmail 969 
invoked from network); 20 Jul 2004 20:44:19 -0000 Received: from mimas.fachschaften.tu-muenchen.de (129.187.202.58) by hermes.fachschaften.tu-muenchen.de with QMQP; 20 Jul 2004 20:44:19 -0000 Date: Tue, 20 Jul 2004 22:50:31 +0200 From: Adrian Bunk To: Chris Wedgwood Cc: "Jeffrey E. Hundstad" , Linus Torvalds , Andrew Morton , Steve Lord , linux-xfs@oss.sgi.com, xfs-masters@oss.sgi.com, nathans@sgi.com, Cahya Wirawan , linux-kernel@vger.kernel.org Subject: Re: [2.6 patch] let 4KSTACKS depend on EXPERIMENTAL and XFS on 4KSTACKS=n Message-ID: <20040720205030.GO14733@fs.tum.de> References: <20040720114418.GH21918@email.archlab.tuwien.ac.at> <40FD0A61.1040503@xfs.org> <40FD2E99.20707@mnsu.edu> <20040720195012.GN14733@fs.tum.de> <20040720204238.GA3051@taniwha.stupidest.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040720204238.GA3051@taniwha.stupidest.org> User-Agent: Mutt/1.5.6i X-archive-position: 3704 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bunk@fs.tum.de Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1059 Lines: 32 On Tue, Jul 20, 2004 at 01:42:38PM -0700, Chris Wedgwood wrote: > On Tue, Jul 20, 2004 at 09:50:12PM +0200, Adrian Bunk wrote: > > > 1. let 4KSTACKS depend on EXPERIMENTAL > > i don't like this change, despite what i might have claimed earlier :) > > the reason i say this is if XFS blows up with 4K stacks then it > probably can with 8K stacks but it will be much harder, so it's not > really fixing anything but just papering over the problem >... 2.6 is a stable kernel series used in production environments. The correct solution is to fix XFS (and other problems with 4kb stacks if they occur), and my patch is only a short-term workaround. 4KSTACKS=n is simply the better tested case, and 4KSTACKS=y uncovers some issues you might not want to see in production environments. 
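As an aside on why deep paths like the XFS flush path quoted earlier exhaust a 4K stack: every nested call adds its own frame. A toy, compiler-dependent measurement of that effect (the 256-byte local buffer simulates per-frame state; real kernel frame sizes differ, and this is an editor's illustration, not code from the thread):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

static uintptr_t stack_top;

/* Recurse 'depth' times, each frame holding ~256 bytes of locals, and
 * report how far the deepest frame sits from the starting point.  The
 * exact numbers depend on compiler, flags and architecture -- the point
 * is only how fast nested frames eat a 4KB stack. */
__attribute__((noinline))
static size_t descend(int depth)
{
    volatile char locals[256];  /* simulated per-frame state */
    locals[0] = (char)depth;
    locals[255] = 0;
    if (depth > 0)              /* reading locals[] after the call keeps
                                   the compiler from reusing this frame */
        return descend(depth - 1) + (size_t)locals[255];
    uintptr_t here = (uintptr_t)&locals[0];
    return stack_top > here ? stack_top - here : here - stack_top;
}

size_t stack_used_by(int depth)
{
    char top;
    stack_top = (uintptr_t)&top;
    return descend(depth);
}
```

With 256-byte frames, sixteen nested calls already consume more than an entire 4KB stack, before any interrupt arrives on top.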
> --cw cu Adrian -- "Is there not promise of rain?" Ling Tan asked suddenly out of the darkness. There had been need of rain for many days. "Only a promise," Lao Er said. Pearl S. Buck - Dragon Seed From owner-linux-xfs Tue Jul 20 13:59:13 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 13:59:28 -0700 (PDT) Received: from pimout3-ext.prodigy.net (pimout3-ext.prodigy.net [207.115.63.102]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6KKxCbQ023910 for ; Tue, 20 Jul 2004 13:59:13 -0700 Received: from taniwha.stupidest.org (adsl-67-124-119-20.dsl.snfc21.pacbell.net [67.124.119.20]) by pimout3-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6KKwUlM153640; Tue, 20 Jul 2004 16:58:31 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 04EEB115C874; Tue, 20 Jul 2004 13:58:30 -0700 (PDT) Date: Tue, 20 Jul 2004 13:58:29 -0700 From: Chris Wedgwood To: Adrian Bunk Cc: "Jeffrey E. Hundstad" , Linus Torvalds , Andrew Morton , Steve Lord , linux-xfs@oss.sgi.com, xfs-masters@oss.sgi.com, nathans@sgi.com, Cahya Wirawan , linux-kernel@vger.kernel.org Subject: Re: [2.6 patch] let 4KSTACKS depend on EXPERIMENTAL and XFS on 4KSTACKS=n Message-ID: <20040720205829.GB3217@taniwha.stupidest.org> References: <20040720114418.GH21918@email.archlab.tuwien.ac.at> <40FD0A61.1040503@xfs.org> <40FD2E99.20707@mnsu.edu> <20040720195012.GN14733@fs.tum.de> <20040720204238.GA3051@taniwha.stupidest.org> <20040720205030.GO14733@fs.tum.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040720205030.GO14733@fs.tum.de> X-archive-position: 3705 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 848 Lines: 25 On Tue, Jul 20, 2004 at 10:50:31PM +0200, Adrian Bunk wrote: > 2.6 is a stable kernel series used in production environments. 
so is 2.4.x and the problems i mentioned can occur there too but are harder to hit > The correct solution is to fix XFS (and other problems with 4kb > stacks if they occur), and my patch is only a short-term workaround. it's not really a workaround, it just makes the problems harder to hit a real fix is going to be hard, it's partly the fact there are insanely long complicated paths and partly the fact that for ia32 gcc spills register space badly and bloats functions (afaik amd64 uses significantly less stack in some functions) > 4KSTACKS=n is simply the better tested case, and 4KSTACKS=y uncovers > some issues you might not want to see in production environments. neither addresses the real problem though --cw From owner-linux-xfs Tue Jul 20 22:15:37 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 20 Jul 2004 22:15:51 -0700 (PDT) Received: from coredumps.de (coredumps.de [217.160.213.75]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6L5FYc3006179 for ; Tue, 20 Jul 2004 22:15:36 -0700 Received: from port-212-202-54-216.dynamic.qsc.de ([212.202.54.216] helo=ente.berdmann.de) by coredumps.de with asmtp (TLSv1:DES-CBC3-SHA:168) (Exim 4.33) id 1Bn9Rz-00054R-3a for linux-xfs@oss.sgi.com; Wed, 21 Jul 2004 07:15:31 +0200 Received: from octane.berdmann.de ([192.168.1.14] helo=berdmann.de) by ente.berdmann.de with esmtp (Exim 3.36 #1) id 1Bn9Ry-00018t-00 for linux-xfs@oss.sgi.com; Wed, 21 Jul 2004 07:15:30 +0200 Message-ID: <40FDFBF2.7020500@berdmann.de> Date: Wed, 21 Jul 2004 07:15:30 +0200 From: Bernhard Erdmann User-Agent: Mozilla/5.0 (X11; U; IRIX64 IP30; en-US; rv:1.6) Gecko/20040505 X-Accept-Language: de, en, fr MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: Re: building xfsprogs (CVS): mmap.c:627: `MADV_NORMAL' undeclared References: <40FB2417.7030406@berdmann.de> <4774.1090202489@kao2.melbourne.sgi.com> <20040720015826.A2406645@wobbly.melbourne.sgi.com> In-Reply-To: <20040720015826.A2406645@wobbly.melbourne.sgi.com> Content-Type: text/plain; charset=us-ascii;
format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3706 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: be@berdmann.de Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1310 Lines: 32 Nathan Scott wrote: [...] > mmap.c already includes that header - what version of the glibc > headers are you using there Bernhard? (which distribution, and > which version?) It's glibc 2.1.3-29 from redhat 6.2 updates:

$ rpm -qi glibc-devel
Name        : glibc-devel                Relocations: (not relocateable)
Version     : 2.1.3                      Vendor: Red Hat, Inc.
Release     : 29                         Build Date: Wed Mar 5 22:58:36 2003
Install date: Sat Mar 29 10:30:58 2003   Build Host: daffy.perf.redhat.com
Group       : Development/Libraries      Source RPM: glibc-2.1.3-29.src.rpm
Size        : 34980841                   License: LGPL
Packager    : Red Hat, Inc.
Summary     : Header and object files for development using standard C libraries.
Description :
The glibc-devel package contains the header and object files necessary for developing programs which use the standard C libraries (which are used by nearly all programs). If you are developing programs which will use the standard C libraries, your system needs to have these standard header and object files available in order to create the executables. Install glibc-devel if you are going to develop programs which will use the standard C libraries.
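A common portability workaround for this class of problem — glibc headers older than the kernel's interface, as with the missing `MADV_NORMAL` here — is to supply the missing constants when the header lacks them. A hedged sketch (the values below are the standard Linux ones from asm/mman.h; verify them before relying on this on another platform):

```c
#include <assert.h>
#include <sys/mman.h>

/* Fallbacks for old glibc (e.g. 2.1.x on Red Hat 6.2) whose
 * <sys/mman.h> predates the madvise(2) interface.  Values match the
 * Linux kernel's <asm/mman.h>. */
#ifndef MADV_NORMAL
#define MADV_NORMAL     0  /* no special treatment */
#define MADV_RANDOM     1  /* expect random page references */
#define MADV_SEQUENTIAL 2  /* expect sequential page references */
#define MADV_WILLNEED   3  /* will need these pages soon */
#define MADV_DONTNEED   4  /* don't need these pages */
#endif
```

On systems where the header already defines the constants, the `#ifndef` guard makes the block a no-op.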
From owner-linux-xfs Wed Jul 21 06:54:43 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 21 Jul 2004 06:54:57 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LDsh2Y025636 for ; Wed, 21 Jul 2004 06:54:43 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6LDshLp025635 for linux-xfs@oss.sgi.com; Wed, 21 Jul 2004 06:54:43 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LDsfT5025601 for ; Wed, 21 Jul 2004 06:54:41 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6LDEwEQ024351; Wed, 21 Jul 2004 06:14:58 -0700 Date: Wed, 21 Jul 2004 06:14:58 -0700 Message-Id: <200407211314.i6LDEwEQ024351@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 347] XFS clearing the disk X-Bugzilla-Reason: AssignedTo X-archive-position: 3707 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 406 Lines: 15 http://oss.sgi.com/bugzilla/show_bug.cgi?id=347 ------- Additional Comments From olaf@cbk.poznan.pl 2004-21-07 06:14 PDT ------- Until the problem is diagnosed I wouldn't even mount them (even mounting read-only replays the log!!). And mounting read-write is rather stupid. ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee.
From owner-linux-xfs Wed Jul 21 07:54:43 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 21 Jul 2004 07:55:01 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LEshf4026913 for ; Wed, 21 Jul 2004 07:54:43 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6LEshYq026912 for linux-xfs@oss.sgi.com; Wed, 21 Jul 2004 07:54:43 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LEsfM9026888 for ; Wed, 21 Jul 2004 07:54:41 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6LEWaRj026575; Wed, 21 Jul 2004 07:32:36 -0700 Date: Wed, 21 Jul 2004 07:32:36 -0700 Message-Id: <200407211432.i6LEWaRj026575@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 347] XFS clearing the disk X-Bugzilla-Reason: AssignedTo X-archive-position: 3708 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 311 Lines: 15 http://oss.sgi.com/bugzilla/show_bug.cgi?id=347 ------- Additional Comments From christopher.g.dorosky@lmco.com 2004-21-07 07:32 PDT ------- I agree. I can't stop my customer though. ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. 
From owner-linux-xfs Wed Jul 21 08:54:43 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 21 Jul 2004 08:55:03 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LFshOP032055 for ; Wed, 21 Jul 2004 08:54:43 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6LFshFS032054 for linux-xfs@oss.sgi.com; Wed, 21 Jul 2004 08:54:43 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LFsgt8032040 for ; Wed, 21 Jul 2004 08:54:42 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6LFIvTm027708; Wed, 21 Jul 2004 08:18:57 -0700 Date: Wed, 21 Jul 2004 08:18:57 -0700 Message-Id: <200407211518.i6LFIvTm027708@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 347] XFS clearing the disk X-Bugzilla-Reason: AssignedTo X-archive-position: 3709 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 979 Lines: 32 http://oss.sgi.com/bugzilla/show_bug.cgi?id=347 ------- Additional Comments From olaf@cbk.poznan.pl 2004-21-07 08:18 PDT ------- Do you have backup? If you do, then try some newer kernel, and restore the backup. If you don't then you can try:
1. umount !!! the partitions you have problem with (NEVER access /dev/sd(xx) even for read if the filesystem is mounted - it will corrupt your filesystem)
2. copy them using dd to some other disk - you will have COPY_1
3. don't mount this copy.
4. from COPY_1 do a COPY_2
5. mount COPY_2 on another server and try xfs_repair
6. if it doesn't help make again COPY_2 from COPY_1
7. mount it in another server with latest kernel 2.6.x (2.6.8-rc?)
8. do xfs_repair, if it doesn't help try xfs_db.
9.
Try to hire somebody from SGI :) BTW, SGI guys - is there any paid service (by SGI) to recover XFS data? Regards. ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. From owner-linux-xfs Wed Jul 21 11:54:46 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 21 Jul 2004 11:54:48 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LIsjSv003576 for ; Wed, 21 Jul 2004 11:54:45 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6LIsjiZ003575 for linux-xfs@oss.sgi.com; Wed, 21 Jul 2004 11:54:45 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LIsicL003561 for ; Wed, 21 Jul 2004 11:54:44 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6LIsN7W003557; Wed, 21 Jul 2004 11:54:23 -0700 Date: Wed, 21 Jul 2004 11:54:23 -0700 Message-Id: <200407211854.i6LIsN7W003557@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 347] XFS clearing the disk X-Bugzilla-Reason: AssignedTo X-archive-position: 3710 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 572 Lines: 22 http://oss.sgi.com/bugzilla/show_bug.cgi?id=347 ------- Additional Comments From christopher.g.dorosky@lmco.com 2004-21-07 11:54 PDT ------- oops, I thought the replies were posted here. I am confused about the last comment. For #1. Mounting XFS disks (that are NOT corrupted) is like anything else, in that you mount /dev/sd(xx) right??? Was I supposed to mount them another way? Data that is gone is not a problem. There are many backups. ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. 
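The copy steps in the recovery recipe above are normally done with something like `dd if=/dev/sdXN of=copy1.img bs=64k` against the unmounted device. For illustration, the same raw byte-for-byte copy spelled out in C (paths are placeholders, not from the bug report):

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Byte-for-byte image of an (unmounted!) device or file -- the C
 * equivalent of `dd if=src of=dst bs=64k`.  Returns 0 on success. */
int image_copy(const char *src, const char *dst)
{
    char buf[65536];
    ssize_t n;
    int rc = 0;

    int in = open(src, O_RDONLY);
    if (in < 0)
        return -1;
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (out < 0) {
        close(in);
        return -1;
    }
    while ((n = read(in, buf, sizeof buf)) > 0) {
        if (write(out, buf, (size_t)n) != n) { /* treat short write as error */
            rc = -1;
            break;
        }
    }
    if (n < 0)
        rc = -1;
    close(in);
    if (close(out) != 0)
        rc = -1;
    return rc;
}
```

The point of making two copies is that xfs_repair (and even a read-only mount, which replays the log) modifies what it touches, so the first image stays pristine.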
From owner-linux-xfs Wed Jul 21 13:54:45 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 21 Jul 2004 13:54:47 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LKsiFv011120 for ; Wed, 21 Jul 2004 13:54:44 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6LKsiZ8011119 for linux-xfs@oss.sgi.com; Wed, 21 Jul 2004 13:54:44 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LKsgwD011105 for ; Wed, 21 Jul 2004 13:54:43 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6LJwhDd008640; Wed, 21 Jul 2004 12:58:43 -0700 Date: Wed, 21 Jul 2004 12:58:43 -0700 Message-Id: <200407211958.i6LJwhDd008640@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 347] XFS clearing the disk X-Bugzilla-Reason: AssignedTo X-archive-position: 3711 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 521 Lines: 18 http://oss.sgi.com/bugzilla/show_bug.cgi?id=347 ------- Additional Comments From sandeen@sgi.com 2004-21-07 12:58 PDT ------- you say suddenly no files are available; what does this mean? They are there but cannot be accessed? They cannot be read? ls fails? As Steve suggested, look in the system logs. The filesystem may have shut down due to corruption. xfs_repair may be of help here. ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. 
From owner-linux-xfs Wed Jul 21 14:03:35 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 21 Jul 2004 14:03:49 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LL3Z96011533 for ; Wed, 21 Jul 2004 14:03:35 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6LL3Zta011530 for linux-xfs@oss.sgi.com; Wed, 21 Jul 2004 14:03:35 -0700 Received: from pimout2-ext.prodigy.net (pimout2-ext.prodigy.net [207.115.63.101]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LL3YoR011518; Wed, 21 Jul 2004 14:03:34 -0700 Received: from taniwha.stupidest.org (adsl-67-124-119-20.dsl.snfc21.pacbell.net [67.124.119.20]) by pimout2-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6LL3EUK106198; Wed, 21 Jul 2004 17:03:15 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 580EB115C85B; Wed, 21 Jul 2004 14:03:14 -0700 (PDT) Date: Wed, 21 Jul 2004 14:03:14 -0700 From: Chris Wedgwood To: bugzilla-daemon@oss.sgi.com Cc: xfs-master@oss.sgi.com, christopher.g.dorosky@lmco.com Subject: Re: [Bug 347] New: XFS clearing the disk Message-ID: <20040721210314.GA27546@taniwha.stupidest.org> References: <200407201635.i6KGZmqF013179@oss.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200407201635.i6KGZmqF013179@oss.sgi.com> X-archive-position: 3712 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 486 Lines: 19 On Tue, Jul 20, 2004 at 09:35:48AM -0700, bugzilla-daemon@oss.sgi.com wrote: > I have heard a complaint (from a customer) of an XFS disk being > filled with data, working, then mysteriously being completely > cleared. still very light on details here. how about (from the machine in question): uname -a cat /proc/mounts to start with? 
can the 'problem' disk/partition be unmounted easily, if so then please unmount it and run xfs_repair over it and see what that says. --cw From owner-linux-xfs Wed Jul 21 14:29:32 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 21 Jul 2004 14:29:33 -0700 (PDT) Received: from eik.ii.uib.no (eik.ii.uib.no [129.177.16.3]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LLTVlt012202 for ; Wed, 21 Jul 2004 14:29:31 -0700 Received: from lapprose.ii.uib.no ([129.177.20.37]:53106) by eik.ii.uib.no with esmtp (TLSv1:AES256-SHA:256) (Exim 4.30) id 1BnOeQ-000189-BE; Wed, 21 Jul 2004 23:29:22 +0200 Received: (from jfm@localhost) by lapprose.ii.uib.no (8.12.11/8.12.11/Submit) id i6LLTLlB028512; Wed, 21 Jul 2004 23:29:21 +0200 Date: Wed, 21 Jul 2004 23:29:21 +0200 From: Jan-Frode Myklebust To: linux-xfs@oss.sgi.com Cc: Charles Steinkuehler Subject: another XFS+LVM+SoftwareRAID5 query Message-ID: <20040721212921.GB28273@ii.uib.no> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline X-archive-position: 3713 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: janfrode@parallab.uib.no Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 2006 Lines: 52 I've just set up a system with one sw raid5 of 7 disks, and one sw raid5 on 6 disks (all scsi), made an LVM stripe over these, and put XFS on it.
Full set of commands run:

mdadm --create /dev/md0 --level=raid5 --raid-devices=8 /dev/sda1 /dev/sdb1 /dev/sdc1 \
    /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
mdadm --create /dev/md1 --level=raid5 --raid-devices=7 /dev/sdh1 /dev/sdi1 /dev/sdj1 \
    /dev/sdk1 /dev/sdl1 /dev/sdm1
pvcreate /dev/md0
pvcreate /dev/md1
vgcreate --physicalextentsize=8M sstudvg /dev/md0 /dev/md1
lvcreate --stripes 2 --size 339G --name sstudlv sstudvg
lvcreate --size 34G --name sparelv sstudvg
mkfs.xfs /dev/sstudvg/sstudlv
mkfs.xfs /dev/sstudvg/sparelv

When I mounted this fs and started using it I got lots and lots of messages saying something like:

raid5: switching cache buffer size, 0 --> 512
raid5: switching cache buffer size, 512 --> 4096
raid5: switching cache buffer size, 0 --> 512

Then I checked the xfs_info, and noticed the only thing being 512 bytes was the sector size. I changed this to 4K (mkfs -s size=4096), and the problem seems to have gone away. I still get a couple during boot when I mount the filesystems:

SGI XFS 1.3.3 with ACLs, large block numbers, no debug enabled
SGI XFS Quota Management subsystem
raid5: switching cache buffer size, 1024 --> 512
raid5: switching cache buffer size, 1024 --> 4096
XFS mounting filesystem lvm(58,0)
raid5: switching cache buffer size, 512 --> 4096
Ending clean XFS mount for filesystem: lvm(58,0)
raid5: switching cache buffer size, 4096 --> 512
XFS mounting filesystem lvm(58,1)
raid5: switching cache buffer size, 512 --> 4096
Ending clean XFS mount for filesystem: lvm(58,1)

So I am a bit concerned I might get the same problem as Charles Steinkuehler if I ever need to run xfs_repair.. Charles did you find a solution for your problem? And, what consequence does the increased sector size have on the fs? BTW: I'm running the 2.4.21-15.EL.sgi3smp kernel.
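One way to see the sector size a device advertises — the 512-vs-4096 value in play in these "switching cache buffer size" messages, and the value mkfs's -s option overrides — is the Linux-specific BLKSSZGET ioctl. A small sketch (the device path in the usage note is an example):

```c
#include <assert.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>   /* BLKSSZGET */

/* Logical sector size the block device at 'path' advertises, or -1 on
 * error (including when path is not a block device at all). */
int logical_sector_size(const char *path)
{
    int sz = 0;
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    int rc = ioctl(fd, BLKSSZGET, &sz);
    close(fd);
    return rc == 0 ? sz : -1;
}
```

For example, `logical_sector_size("/dev/md0")` on a setup like the one above would typically report 512.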
-jf From owner-linux-xfs Wed Jul 21 14:37:24 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 21 Jul 2004 14:37:26 -0700 (PDT) Received: from mail00hq.adic.com (mail00hq.adic.com [63.81.117.10] (may be forged)) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LLbN2W012688 for ; Wed, 21 Jul 2004 14:37:24 -0700 Received: from mail02hq.adic.com ([172.16.9.18]) by mail00hq.adic.com with Microsoft SMTPSVC(5.0.2195.6713); Wed, 21 Jul 2004 14:37:16 -0700 Received: from [172.16.82.67] ([172.16.82.67]) by mail02hq.adic.com with Microsoft SMTPSVC(5.0.2195.6713); Wed, 21 Jul 2004 14:37:15 -0700 Message-ID: <40FEE1EA.30705@xfs.org> Date: Wed, 21 Jul 2004 16:36:42 -0500 From: Steve Lord User-Agent: Mozilla Thunderbird 0.7.1 (X11/20040626) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Jan-Frode Myklebust CC: linux-xfs@oss.sgi.com, Charles Steinkuehler Subject: Re: another XFS+LVM+SoftwareRAID5 query References: <20040721212921.GB28273@ii.uib.no> In-Reply-To: <20040721212921.GB28273@ii.uib.no> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 21 Jul 2004 21:37:16.0242 (UTC) FILETIME=[E4677B20:01C46F6A] X-archive-position: 3714 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: lord@xfs.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 700 Lines: 20 Jan-Frode Myklebust wrote: > > And, what consequence does the increased sector size have on the fs? > There are some chunks of metadata at the start of each allocation group which were originally laid out as being 512 bytes long. The superblock and some headers for allocation structures. The log is also written in sector sized chunks (or multiples thereof). All the rest of the metadata and file data is in filesystem block sized chunks. Bumping the sector size rounds these up to a larger size; the change in disk space usage is tiny.
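The rounding Lord describes is the usual align-up-to-a-power-of-two: a structure that fit in one 512-byte sector simply occupies one 4096-byte sector after the change. A trivial helper to make that concrete (sizes here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Round len up to the next multiple of sector (a power of two, as XFS
 * sector sizes always are). */
size_t roundup_sector(size_t len, size_t sector)
{
    return (len + sector - 1) & ~(sector - 1);
}
```

So a 512-byte superblock costs 4096 bytes on disk at the larger sector size, which is why the overall space difference is tiny.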
The reason xfs uses two different sizes is that otherwise all I/O would have had to be submitted in 512 byte chunks and the overhead is horrible. Steve From owner-linux-xfs Wed Jul 21 14:54:46 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 21 Jul 2004 14:54:50 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LLskwU013346 for ; Wed, 21 Jul 2004 14:54:46 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6LLsk2Y013345 for linux-xfs@oss.sgi.com; Wed, 21 Jul 2004 14:54:46 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LLsjnP013331 for ; Wed, 21 Jul 2004 14:54:45 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6LLPgml012175; Wed, 21 Jul 2004 14:25:42 -0700 Date: Wed, 21 Jul 2004 14:25:42 -0700 Message-Id: <200407212125.i6LLPgml012175@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 347] XFS clearing the disk X-Bugzilla-Reason: AssignedTo X-archive-position: 3715 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1121 Lines: 37 http://oss.sgi.com/bugzilla/show_bug.cgi?id=347 ------- Additional Comments From christopher.g.dorosky@lmco.com 2004-21-07 14:25 PDT ------- I don't have access to the machine. I still really need answers to the following two questions: Q1. Where are the logs that were mentioned? Q2. Can you answer the questions about the mounting? It is normally safe to access an XFS disk through /dev/sd(xx), right? Here is what I mean by no files available. Lets say that the disk was mounted as /disk2 Normally there could be directories /disk2/dir1 /disk2/dir2 /disk2/dir3 , etc.. If I switch to /disk2, and do "ls" I normally get dir1 dir2 dir3 . .. 
After the error, I would get . .. But df would report the same space occupied as before. I will hunt around for some logs on a duplicate machine here, and see if I can find any XFS messages. Which log should I look at? If I can manage to get a disk to fail here, then I will let you know. In the meantime, do you have answers for Q1 and Q2? ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. From owner-linux-xfs Wed Jul 21 15:31:29 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 21 Jul 2004 15:31:33 -0700 (PDT) Received: from mx01.birch.net (kcmailp02.birch.net [216.212.0.97]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LMVO9N014763 for ; Wed, 21 Jul 2004 15:31:29 -0700 Received: (qmail 31210 invoked from network); 21 Jul 2004 22:31:22 -0000 Received: from unknown (HELO steinkuehler.net) ([65.16.44.210]) (envelope-sender ) by mx01.birch.net (qmail-ldap-1.03) with SMTP for ; 21 Jul 2004 22:31:21 -0000 Message-ID: <40FEEEB9.2090504@steinkuehler.net> Date: Wed, 21 Jul 2004 17:31:21 -0500 From: Charles Steinkuehler User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.6) Gecko/20040113 X-Accept-Language: en-us, en MIME-Version: 1.0 To: Jan-Frode Myklebust CC: linux-xfs@oss.sgi.com Subject: Re: another XFS+LVM+SoftwareRAID5 query References: <20040721212921.GB28273@ii.uib.no> In-Reply-To: <20040721212921.GB28273@ii.uib.no> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3716 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: charles@steinkuehler.net Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 2320 Lines: 48 Jan-Frode Myklebust wrote: > So I am a bit concerned I might get the same problem as Charles > Steinkuehler if I ever need to run xfs_repair.. Charles, did you find a > solution for your problem?
I found a solution...I switched to JFS. ;-P While my problems with XFS may have been greatly magnified due to low-level hardware problems (fixed by switching to a Promise SATA controller), there seem to be enough folks reporting problems with XFS + LVM + Software RAID5 that I would hesitate to use this combination in production without a *LOT* of testing I didn't have time for. I don't currently need either extended attribute support or the larger filesystem sizes available in XFS, so JFS is working fine for me. NOTE: A quick check on my system (running debian testing, 2.6.6-1-k7 kernel, Promise TX4 w/4x now-working SATA drives) shows the first of my stress tests passes (lvcreate, mkfs.xfs, mount, bonnie++, umount, xfs_repair). I don't have enough room for the second stress-test I was using (rsync approx. 150G of data from an ext3 partition, umount, and xfs_repair). I suspect if you can extract/compile the kernel tarball, umount, and get a clean xfs_repair, things are probably working normally. I'd probably also try a hard power-off shutdown while extracting the kernel tarball (or compiling, or otherwise pounding on the FS) followed by an xfs_repair to make sure you can return to normal from a real-world error condition (probably a good idea to mount all but the volume under test as read-only before-hand!). I was seeing the "switching cache buffer size" messages when running xfs_repair (and IIRC when mounting or unmounting), not in normal operation (once I formatted with size=4096), so I'd definitely try to test xfs_repair on a 'broken' FS before trusting it with production data. Finally, while unrelated to LVM/RAID5, based on information I've absorbed in my extensive googling, XFS seems to have a tendency to keep data in a write-cache, which combined with its journal replay characteristics on startup (i.e. zeroing unclean files) can cause problems if your system is ever subject to unclean shutdowns.
Hopefully someone on-list with more knowledge of XFS internals can comment on how accurate this is. -- Charles Steinkuehler charles@steinkuehler.net From owner-linux-xfs Wed Jul 21 15:54:44 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 21 Jul 2004 15:54:49 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LMsiUG015305 for ; Wed, 21 Jul 2004 15:54:44 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6LMsisI015304 for linux-xfs@oss.sgi.com; Wed, 21 Jul 2004 15:54:44 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6LMshnh015290 for ; Wed, 21 Jul 2004 15:54:43 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6LMGPSL014607; Wed, 21 Jul 2004 15:16:25 -0700 Date: Wed, 21 Jul 2004 15:16:25 -0700 Message-Id: <200407212216.i6LMGPSL014607@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 347] XFS clearing the disk X-Bugzilla-Reason: AssignedTo X-archive-position: 3717 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 626 Lines: 20 http://oss.sgi.com/bugzilla/show_bug.cgi?id=347 ------- Additional Comments From juri@koschikode.com 2004-21-07 15:16 PDT ------- Logs: look into /var/log/messages and/or /var/log/syslog And yes, of course it is safe (and the normal way to do it) to access the filesystem via /dev/sdaX, meaning mounting the filesystem e.g. with 'mount /dev/sdaX /somewhere'. I think what was meant is: never access a _mounted_ filesystem via /dev/sdaX, e.g. don't do a 'dump /dev/sdaX' as long as the fs is mounted. ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee.
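To make the distinction in the comment above concrete, here is an illustrative sketch; the device and mountpoint names are placeholders, and the commands are held in strings and only printed, since the real ones need root and a real disk:

```shell
#!/bin/sh
DEV=/dev/sdaX      # placeholder device
MNT=/somewhere     # placeholder mountpoint

# Fine: mount the block device, then work through the mounted filesystem.
SAFE="mount $DEV $MNT && ls $MNT"

# Not fine: reading the raw device while that same filesystem is mounted
# can return stale or inconsistent data, so dump(8) and similar tools
# should only ever see an unmounted device.
UNSAFE="dump -0 -f backup.img $DEV"

echo "safe:  $SAFE"
echo "avoid: $UNSAFE (only against an unmounted $DEV)"
```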
From owner-linux-xfs Wed Jul 21 17:05:17 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 21 Jul 2004 17:05:19 -0700 (PDT) Received: from ishtar.tlinx.org (ishtar.tlinx.org [64.81.245.74]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6M05G4k020029 for ; Wed, 21 Jul 2004 17:05:17 -0700 Received: from [192.168.3.20] (shiva [192.168.3.20]) by ishtar.tlinx.org (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id i6M057mp017779; Wed, 21 Jul 2004 17:05:07 -0700 Message-ID: <40FF0479.6050509@tlinx.org> Date: Wed, 21 Jul 2004 17:04:09 -0700 From: L A Walsh User-Agent: Mozilla Thunderbird 0.7.1 (Windows/20040626) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Linux-Kernel CC: linux-xfs@oss.sgi.com Subject: 2.6.7-vanilla-SMP kernel: pagebuf_get: failed to lookup pages X-Enigmail-Version: 0.84.1.0 X-Enigmail-Supports: pgp-inline, pgp-mime Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3718 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: lkml@tlinx.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1776 Lines: 38
Jul 20 09:07:34 ishtar kernel: pagebuf_get: failed to lookup pages
Jul 20 09:07:34 ishtar last message repeated 25 times
Jul 20 09:26:38 ishtar kernel: pagebuf_get: failed to lookup pages
Jul 20 09:27:09 ishtar last message repeated 354 times
Jul 20 09:27:52 ishtar last message repeated 274 times
Jul 20 09:45:46 ishtar kernel: pagebuf_get: failed to lookup pages
Jul 20 09:45:46 ishtar last message repeated 2 times
Jul 20 10:00:10 ishtar kernel: pagebuf_get: failed to lookup pages
Jul 21 02:30:00 ishtar su: (to backup) root on none
Jul 21 02:30:01 ishtar kernel: pagebuf_get: failed to lookup pages
Jul 21 02:30:04 ishtar last message repeated 16 times
Jul 21 02:30:30 ishtar su: (to backup) root on none
Jul 21 02:31:55 ishtar su: (to backup) root on none
Jul 21 02:31:55 ishtar last message repeated 3 times
Jul 21 03:15:09 ishtar kernel: pagebuf_get: failed to lookup pages
Jul 21 03:15:09 ishtar last message repeated 4 times
Jul 21 04:07:34 ishtar kernel: pagebuf_get: failed to lookup pages
Jul 21 04:07:34 ishtar last message repeated 9 times
Jul 21 04:26:44 ishtar kernel: pagebuf_get: failed to lookup pages
Jul 21 04:27:45 ishtar last message repeated 1516 times
Jul 21 04:27:54 ishtar last message repeated 36 times
Jul 21 04:45:51 ishtar kernel: pagebuf_get: failed to lookup pages
Jul 21 04:45:51 ishtar last message repeated 7 times
---- Any idea what this message means? I especially notice a high frequency during high disk i/o. File systems are all xfs if that is pertinent. Backups run in early AM backing up SCSI disks to a large IDE. However, the messages around 9:27 on the 20th wouldn't have been backup related but possibly processing a backlog of email after some system maintenance -- and that would have all been on SCSI disks. From owner-linux-xfs Wed Jul 21 17:12:10 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 21 Jul 2004 17:12:14 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6M0CAFr020538 for ; Wed, 21 Jul 2004 17:12:10 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6M0C93C020535 for linux-xfs@oss.sgi.com; Wed, 21 Jul 2004 17:12:09 -0700 Received: from pimout1-ext.prodigy.net (pimout1-ext.prodigy.net [207.115.63.77]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6M0C3il020522; Wed, 21 Jul 2004 17:12:05 -0700 Received: from taniwha.stupidest.org (adsl-67-124-119-20.dsl.snfc21.pacbell.net [67.124.119.20]) by pimout1-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6M0B4Kn032800; Wed, 21 Jul 2004 20:11:09 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id B46FB115C783; Wed, 21 Jul 2004 17:11:03 -0700 (PDT) Date: Wed, 21 Jul 2004 17:11:03 -0700 From: Chris Wedgwood To: bugzilla-daemon@oss.sgi.com Cc: xfs-master@oss.sgi.com,
christopher.g.dorosky@lmco.com Subject: Re: [Bug 347] XFS clearing the disk Message-ID: <20040722001103.GB30595@taniwha.stupidest.org> References: <200407212125.i6LLPgml012175@oss.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200407212125.i6LLPgml012175@oss.sgi.com> X-archive-position: 3719 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1784 Lines: 51 On Wed, Jul 21, 2004 at 02:25:42PM -0700, bugzilla-daemon@oss.sgi.com wrote: > I don't have access to the machine. so, some machine, not yours, but one you heard about all of a sudden maybe had something go wrong, files are missing but there are no other clues... this is really hard to assist with > Q1. Where are the logs that were mentioned? wherever the machine logs kernel messages to, depends on the distro but usually something like /var/log/kern.log or even /var/log/syslog you could also try: "dmesg -s 262144 > file" on the machine if it wasn't rebooted > Q2. Can you answer the questions about the mounting? It is normally > safe to access an XFS disk through /dev/sd(xx), right? depends what you mean, obviously you mount it that way but if it's mounted you shouldn't access the raw device as it can cause corruptions or even oopsen > But df would report the same space occupied as before. is the machine still up, can you try "lsof +L1" on that machine and see if there are deleted but still referenced files? if not, unmount the disk and run xfs_repair and see if it relocates the lost stuff to /lost+found > I will hunt around for some logs on a duplicate machine here, and > see if I can find any XFS messages. Which log should I look at? does the duplicate machine have the same problem? if not, then there isn't any point > If I can manage to get a disk to fail here, then I will let you > know.
this is the *only* error report i've heard like this, i don't want to say it's not XFS but it's a very strange bug and I can't see how XFS can lose an entire directory structure w/o spewing many more messages or something more serious if something was wrong i really do wonder if the data wasn't deleted at some point, everything looks like that --cw From owner-linux-xfs Wed Jul 21 17:12:28 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 21 Jul 2004 17:12:40 -0700 (PDT) Received: from pimout2-ext.prodigy.net (pimout2-ext.prodigy.net [207.115.63.101]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6M0CR3H020651 for ; Wed, 21 Jul 2004 17:12:28 -0700 Received: from taniwha.stupidest.org (adsl-67-124-119-20.dsl.snfc21.pacbell.net [67.124.119.20]) by pimout2-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6M0COUK161478; Wed, 21 Jul 2004 20:12:24 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 2D39E115C783; Wed, 21 Jul 2004 17:12:24 -0700 (PDT) Date: Wed, 21 Jul 2004 17:12:24 -0700 From: Chris Wedgwood To: L A Walsh Cc: Linux-Kernel , linux-xfs@oss.sgi.com Subject: Re: 2.6.7-vanilla-SMP kernel: pagebuf_get: failed to lookup pages Message-ID: <20040722001224.GC30595@taniwha.stupidest.org> References: <40FF0479.6050509@tlinx.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <40FF0479.6050509@tlinx.org> X-archive-position: 3720 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 193 Lines: 10 On Wed, Jul 21, 2004 at 05:04:09PM -0700, L A Walsh wrote: > Any idea what this message means?
it means "try the CVS tree" (i think hch fixed this and it's in CVS but not mainline) --cw From owner-linux-xfs Wed Jul 21 17:22:40 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 21 Jul 2004 17:22:53 -0700 (PDT) Received: from ishtar.tlinx.org (ishtar.tlinx.org [64.81.245.74]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6M0Mecq021307 for ; Wed, 21 Jul 2004 17:22:40 -0700 Received: from [192.168.3.20] (shiva [192.168.3.20]) by ishtar.tlinx.org (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id i6M0MTmp017873; Wed, 21 Jul 2004 17:22:29 -0700 Message-ID: <40FF0885.7060704@tlinx.org> Date: Wed, 21 Jul 2004 17:21:25 -0700 From: L A Walsh User-Agent: Mozilla Thunderbird 0.7.1 (Windows/20040626) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Chris Wedgwood CC: Linux-Kernel , linux-xfs@oss.sgi.com Subject: Re: 2.6.7-vanilla-SMP kernel: pagebuf_get: failed to lookup pages References: <40FF0479.6050509@tlinx.org> <20040722001224.GC30595@taniwha.stupidest.org> In-Reply-To: <20040722001224.GC30595@taniwha.stupidest.org> X-Enigmail-Version: 0.84.1.0 X-Enigmail-Supports: pgp-inline, pgp-mime Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3721 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: lkml@tlinx.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 808 Lines: 35 Will this be included/fixed in 2.6.8? How serious is the problem? The system doesn't seem to panic or indicate backup failures. Setting up a CVS tree to get a patch for a "stable-series" kernel seems a bit unstable. I'm not sure what I'd pull in besides the fix or even if I'd pull down a coherent/stable CVS image if I downloaded in the middle of when some other patch was being checked in. Maybe I'm sounding like a wimp, but the idea of pulling in freshly checked in CVS code for use on a 'stable' machine is bordering on my discomfort zone. 
:-) -l Chris Wedgwood wrote: >On Wed, Jul 21, 2004 at 05:04:09PM -0700, L A Walsh wrote: > > > >>Any idea what this message means? >> >> > >it means "try the CVS tree" (i think hch fixed this and it's in CVS >but not mainline) > > > --cw > > From owner-linux-xfs Wed Jul 21 17:34:01 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 21 Jul 2004 17:34:04 -0700 (PDT) Received: from pimout1-ext.prodigy.net (pimout1-ext.prodigy.net [207.115.63.77]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6M0Y1jE021744 for ; Wed, 21 Jul 2004 17:34:01 -0700 Received: from taniwha.stupidest.org (adsl-67-124-119-20.dsl.snfc21.pacbell.net [67.124.119.20]) by pimout1-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6M0XvKn151768; Wed, 21 Jul 2004 20:33:58 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id A324F115C783; Wed, 21 Jul 2004 17:33:57 -0700 (PDT) Date: Wed, 21 Jul 2004 17:33:57 -0700 From: Chris Wedgwood To: L A Walsh Cc: Linux-Kernel , linux-xfs@oss.sgi.com Subject: Re: 2.6.7-vanilla-SMP kernel: pagebuf_get: failed to lookup pages Message-ID: <20040722003357.GA31163@taniwha.stupidest.org> References: <40FF0479.6050509@tlinx.org> <20040722001224.GC30595@taniwha.stupidest.org> <40FF0885.7060704@tlinx.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <40FF0885.7060704@tlinx.org> X-archive-position: 3722 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1528 Lines: 47 On Wed, Jul 21, 2004 at 05:21:25PM -0700, L A Walsh wrote: > Will this be included/fixed in 2.6.8? i assume that's the intention but i don't know when 2.6.8 is and how much time the sgi people have before then. my guess is yes though > How serious is the problem? The system doesn't seem to panic or > indicate backup failures. not sure, hch can you comment here maybe? 
> Setting up a CVS tree to get a patch for a "stable-series" kernel > seems a bit unstable. CVS is "stable linux releases + XFS fixes" --- it's really not that bad (whilst i personally don't use it, my tree is derived from it and i don't have problems) > I'm not sure what I'd pull in besides the fix or even if I'd pull > down a coherent/stable CVS image if I downloaded in the middle of > when some other patch was being checked in.
cd path/to/workarea
cvs -qz9 -d :pserver:cvs@oss.sgi.com:/cvs co linux-2.6-xfs
cd linux-2.6-xfs
cp path/to/old/.config .config
make oldconfig
make
...
> Maybe I'm sounding like a wimp, but the idea of pulling in freshly > checked in CVS code for use on a 'stable' machine is bordering on my > discomfort zone. :-) FWIW, the CVS tree isn't freshly checked in, it's a reflection of the internal ptools tree where in theory you shouldn't get ad hoc checkins like lots of places (but yeah, it does sometimes break but not usually badly). Anyhow, if you don't like it you can (1) ignore the problems (2) use official binary releases from vendors (3) use ext3, etc. whatever works for you...
--cw From owner-linux-xfs Wed Jul 21 23:13:35 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 21 Jul 2004 23:13:40 -0700 (PDT) Received: from mail.pacrimopen.com ([64.65.177.98]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6M6DZEN030307 for ; Wed, 21 Jul 2004 23:13:35 -0700 Received: by mail.pacrimopen.com (Postfix, from userid 1064) id F0EB15876AF1; Wed, 21 Jul 2004 23:21:07 -0700 (PDT) Received: from mail.pacrimopen.com (localhost [127.0.0.1]) by mail.pacrimopen.com (Postfix) with ESMTP id B8A0458776B5 for ; Wed, 21 Jul 2004 23:21:06 -0700 (PDT) Received: from pacrimopen.com (unknown [4.4.175.20]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.pacrimopen.com (Postfix) with ESMTP id 34EA95876AF1; Wed, 21 Jul 2004 23:21:06 -0700 (PDT) Message-ID: <40FF5B0A.30202@pacrimopen.com> Date: Wed, 21 Jul 2004 23:13:30 -0700 From: Joshua Schmidlkofer User-Agent: Mozilla Thunderbird 0.5 (X11/20040402) X-Accept-Language: en-us, en MIME-Version: 1.0 To: L A Walsh , linux-xfs@oss.sgi.com Subject: Re: 2.6.7-vanilla-SMP kernel: pagebuf_get: failed to lookup pages References: <40FF0479.6050509@tlinx.org> <20040722001224.GC30595@taniwha.stupidest.org> <40FF0885.7060704@tlinx.org> In-Reply-To: <40FF0885.7060704@tlinx.org> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3723 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: kernel@pacrimopen.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1485 Lines: 40 L A Walsh wrote: > Will this be included/fixed in 2.6.8? > > How serious is the problem? The system doesn't seem to panic or > indicate backup > failures. > > Setting up a CVS tree to get a patch for a "stable-series" kernel > seems a bit > unstable. 
I'm not sure what I'd pull in besides the fix or even if > I'd pull > down a coherent/stable CVS image if I downloaded in the middle of when > some > other patch was being checked in. Maybe I'm sounding like a wimp, but > the idea > of pulling in freshly checked in CVS code for use on a 'stable' > machine is > bordering on my discomfort zone. :-) > > -l FWIW I am getting this error. I assumed it was my NVIDIA binary drivers causing it. However, there are lots of weird allocator issues popping up right now [so it seems]. I have checked my FS several times, and it does not seem to be a problem. It only happens when I have large blocks of memory sucked into some process, which would lead me to believe that there is some amount of memory pressure leading up to them. The pagebuf_get is a note, but it does not seem to be an error. AFAICT the XFS folks tend to "dead-lock instead of corrupt" [which is standard fare in the Linux kernel]. So you will have your own mileage I am sure. If it is a production machine, and 2.6.6 ran fine, unless you need something specific I would back-rev and wait. All that being said, I have not tried to Use The Source(tm). So all I have is speculation. 
js From owner-linux-xfs Wed Jul 21 23:38:34 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 21 Jul 2004 23:38:38 -0700 (PDT) Received: from pimout3-ext.prodigy.net (pimout3-ext.prodigy.net [207.115.63.102]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6M6cYFp030881 for ; Wed, 21 Jul 2004 23:38:34 -0700 Received: from taniwha.stupidest.org (adsl-67-124-119-20.dsl.snfc21.pacbell.net [67.124.119.20]) by pimout3-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6M6cSlM176386; Thu, 22 Jul 2004 02:38:28 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id DAFAC115C783; Wed, 21 Jul 2004 23:38:27 -0700 (PDT) Date: Wed, 21 Jul 2004 23:38:27 -0700 From: Chris Wedgwood To: Joshua Schmidlkofer Cc: L A Walsh , linux-xfs@oss.sgi.com Subject: Re: 2.6.7-vanilla-SMP kernel: pagebuf_get: failed to lookup pages Message-ID: <20040722063827.GA3454@taniwha.stupidest.org> References: <40FF0479.6050509@tlinx.org> <20040722001224.GC30595@taniwha.stupidest.org> <40FF0885.7060704@tlinx.org> <40FF5B0A.30202@pacrimopen.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <40FF5B0A.30202@pacrimopen.com> X-archive-position: 3724 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1416 Lines: 39 On Wed, Jul 21, 2004 at 11:13:30PM -0700, Joshua Schmidlkofer wrote: > FWIW I am getting this error. I assumed it was my NVIDIA binary > drivers causing it. However, there are lots of weird allocator > issues popping up right now [so it seems]. i don't think it's an allocator problem, see: http://oss.sgi.com/bugzilla/show_bug.cgi?id=311 > I have checked my FS several times, and it does not seem to be a > problem. 
It only happens when I have large blocks of memory sucked > into some process, which would lead me to believe that there is some > amount of memory pressure leading up to them. what kernel? this was fixed May 27 if it's the above problem. > AFAICT the XFS folks tend to "dead-lock instead of corrupt" [which > is standard fare in the Linux kernel]. linux memory allocations can fail, linux has always been this way and linux drivers are written to deal with this irix memory allocations never fail, so XFS isn't written to deal with this and thus there is a layer for such cases where XFS tries really hard to get memory and loops as best as i can tell current xfs shouldn't explode on low memory but might stall --- i'm not sure there is much that can be done about that easily though i really think most of the pain people are presently seeing will go away when the CVS tree is sync'd to mainline (or just use CVS). then you can find a whole lot more pain to fix. --cw From owner-linux-xfs Thu Jul 22 00:54:47 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 22 Jul 2004 00:55:08 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6M7sl9d003054 for ; Thu, 22 Jul 2004 00:54:47 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6M7slm0003053 for linux-xfs@oss.sgi.com; Thu, 22 Jul 2004 00:54:47 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6M7sjn8003039 for ; Thu, 22 Jul 2004 00:54:45 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6M7AH28031811; Thu, 22 Jul 2004 00:10:17 -0700 Date: Thu, 22 Jul 2004 00:10:17 -0700 Message-Id: <200407220710.i6M7AH28031811@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 347] XFS clearing the disk X-Bugzilla-Reason: AssignedTo X-archive-position: 3725 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com 
Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 879 Lines: 27 http://oss.sgi.com/bugzilla/show_bug.cgi?id=347 ------- Additional Comments From seth.mos@xs4all.nl 2004-22-07 00:10 PDT ------- What you probably hit is an XFS error 990. Also known as Unknown under linux, but it returns EFSCORRUPTED. If XFS thinks something is out of whack and inconsistent, it will trigger a shutdown and unmount the filesystem to prevent further damage. Chances are that a reboot might bring back the filesystem, but it would probably fail again in the same place (reading a certain file or dir, for example). From there, upgrade to more recent XFS bits and userspace tools. In order of sequence that would be: upgrade xfsprogs, repair the filesystem (xfs_repair), and then upgrade to more recent XFS bits, like CVS or one of the 1.3 releases. ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee.
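The recovery sequence in the comment above can be written out as a checklist; a dry-run sketch (the device path is a placeholder, and the steps are only printed rather than executed, since the real commands are destructive and need root):

```shell
#!/bin/sh
# Dry-run: print the recovery order described above. Nothing here touches
# a real disk; /dev/sda5 is a placeholder for the affected partition.
DEV=/dev/sda5

STEPS="1. upgrade xfsprogs (newest userspace tools first)
2. umount $DEV
3. xfs_repair $DEV
4. upgrade the kernel-side XFS (CVS tree or a 1.3.x release)
5. mount $DEV and verify the data"

echo "$STEPS"
```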
From owner-linux-xfs Thu Jul 22 01:18:52 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 22 Jul 2004 01:19:11 -0700 (PDT) Received: from omx2.sgi.com (mtvcafw.sgi.com [192.48.171.6]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6M8Iqb0004180 for ; Thu, 22 Jul 2004 01:18:52 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by omx2.sgi.com (8.12.11/8.12.9/linux-outbound_gateway-1.1) with ESMTP id i6M8WPtA031416 for ; Thu, 22 Jul 2004 01:32:26 -0700 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i6M8IZ7X677714; Thu, 22 Jul 2004 18:18:35 +1000 (EST) Received: (from tes@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id i6M8IYAu678105; Thu, 22 Jul 2004 18:18:34 +1000 (EST) Date: Thu, 22 Jul 2004 18:18:34 +1000 (EST) From: Timothy Shimmin Message-Id: <200407220818.i6M8IYAu678105@snort.melbourne.sgi.com> To: sgi.bugs.xfs@engr.sgi.com, linux-xfs@oss.sgi.com Subject: TAKE 918329 - X-archive-position: 3726 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: tes@snort.melbourne.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 440 Lines: 16 make ocount argument to bulkstat a signed 32 int - suggested by Jan-Jaap van der Heijden Date: Thu Jul 22 01:15:33 PDT 2004 Workarea: snort.melbourne.sgi.com:/home/tes/isms/xfs-cmds Inspected by: hch@lst.de The following file(s) were checked into: bonnie.engr.sgi.com:/isms/slinx/xfs-cmds Modid: xfs-cmds:slinx:175557a xfsdump/VERSION - 1.63 xfsdump/doc/CHANGES - 1.70 xfsdump/fsr/xfs_fsr.c - 1.17 xfsdump/dump/content.c - 1.31 From owner-linux-xfs Fri Jul 23 11:29:28 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 23 Jul 2004 11:29:43 -0700 (PDT) Received: from smtp1.ActiveState.com (gw.activestate.com [209.17.183.249]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6NITSwn019783 for ; Fri, 23 
Jul 2004 11:29:28 -0700 Received: from smtp3.ActiveState.com (latte.activestate.com [192.168.4.252]) by smtp1.ActiveState.com (8.12.10/8.12.10) with ESMTP id i6NITKTb027552 for ; Fri, 23 Jul 2004 11:29:20 -0700 (envelope-from daves@ActiveState.com) Received: from activestate.com (awl.activestate.com [192.168.3.163]) by smtp3.ActiveState.com (8.12.9/8.12.9) with ESMTP id i6NITKPr001330 for ; Fri, 23 Jul 2004 11:29:20 -0700 Message-ID: <410159BA.6060006@activestate.com> Date: Fri, 23 Jul 2004 11:32:26 -0700 From: David Sparks User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.4) Gecko/20030624 X-Accept-Language: en-us, en MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: Re: Null files reloaded :-) References: <200407200444.21761.wizeman@wizy.org> In-Reply-To: <200407200444.21761.wizeman@wizy.org> X-Enigmail-Version: 0.76.8.0 X-Enigmail-Supports: pgp-inline, pgp-mime Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3727 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: daves@ActiveState.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 696 Lines: 25 What happens when an existing file is changed and only the metadata is sync'd? I'm curious as to what happens in this situation:
open existing file
seek xxx
write data
[filesystem brought uncleanly offline]
The real world example is database files. What happens if a busy mysql database making a lot of inserts has the fs brought down uncleanly? Are sections of the file nulled or is the entire file nulled? Thanks, ds ps I have XFS running on ~20 servers and have been very happy with it. Thank you for your hard work, it is appreciated! I think that this + the earlier thread should be distilled into a few FAQ entries as it is useful knowledge, and is sure to come up again.
:) From owner-linux-xfs Fri Jul 23 12:15:02 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 23 Jul 2004 12:15:17 -0700 (PDT) Received: from pimout1-ext.prodigy.net (pimout1-ext.prodigy.net [207.115.63.77]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6NJF1EH020815 for ; Fri, 23 Jul 2004 12:15:01 -0700 Received: from taniwha.stupidest.org (adsl-67-124-119-20.dsl.snfc21.pacbell.net [67.124.119.20]) by pimout1-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6NJEjhF264380; Fri, 23 Jul 2004 15:14:46 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 07959115C858; Fri, 23 Jul 2004 12:14:45 -0700 (PDT) Date: Fri, 23 Jul 2004 12:14:44 -0700 From: Chris Wedgwood To: David Sparks Cc: linux-xfs@oss.sgi.com Subject: Re: Null files reloaded :-) Message-ID: <20040723191444.GA22637@taniwha.stupidest.org> References: <200407200444.21761.wizeman@wizy.org> <410159BA.6060006@activestate.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <410159BA.6060006@activestate.com> X-archive-position: 3728 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 683 Lines: 31 On Fri, Jul 23, 2004 at 11:32:26AM -0700, David Sparks wrote: > What happens when an existing file is changed and only the metadata > is sync'd? you can see old data > open existing file > seek xxx > write data > [filesystem brought uncleanly offline] you see some/none/all of the new data depending on timing > The real world example is database files. for a sane database this shouldn't matter as they have code to work with this > What happens if a busy mysql database making a lot of inserts has > the fs brought down uncleanly? not sure, my guess is 'bad things' unless mysql has a log > Are sections of the file nulled or is the entire file nulled? 
neither --cw From owner-linux-xfs Fri Jul 23 12:27:34 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 23 Jul 2004 12:27:39 -0700 (PDT) Received: from mail.mysnip.de (mail.mysnip.de [194.25.82.167]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6NJRVeY024370 for ; Fri, 23 Jul 2004 12:27:33 -0700 Received: from localhost (localhost [127.0.0.1]) by mail.mysnip.de (Postfix) with ESMTP id B0A6C60B6D0D; Fri, 23 Jul 2004 21:27:27 +0200 (CEST) Received: from mail.mysnip.de ([127.0.0.1]) by localhost (webserver [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 23553-05; Fri, 23 Jul 2004 21:27:07 +0200 (CEST) Received: from [192.168.1.49] (pD9E80D9E.dip0.t-ipconnect.de [217.232.13.158]) (using TLSv1 with cipher RC4-MD5 (128/128 bits)) (No client certificate requested) by mail.mysnip.de (Postfix) with ESMTP id 2F25360B6CFC; Fri, 23 Jul 2004 21:27:07 +0200 (CEST) Message-ID: <4101668C.5040607@mysnip.de> Date: Fri, 23 Jul 2004 21:27:08 +0200 From: Thomas User-Agent: Mozilla Thunderbird 1.6.3.1d (Windows/20040707) X-Accept-Language: en-us, en MIME-Version: 1.0 To: David Sparks Cc: linux-xfs@oss.sgi.com Subject: Re: Null files reloaded :-) References: <200407200444.21761.wizeman@wizy.org> <410159BA.6060006@activestate.com> In-Reply-To: <410159BA.6060006@activestate.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: by amavisd-new at mail.mysnip.de X-archive-position: 3729 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: thomas-lists@mysnip.de Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 803 Lines: 26 Hey there, David Sparks wrote: > The real world example is database files. What happens if a busy > mysql database making a lot of inserts has the fs brought down > uncleanly? Are sections of the file nulled or is the entire file nulled? Some real-world experiences ... 
I'm running a busy database-server running only mysql on it (dual athlon-mp, 2GB ram). The filesystem for the mysql-databases is xfs. In the old kernel-2.4.19-days I had a lot of crashes of this machine while the database was under load. The worst things I've ever seen have been index-corruption, nothing a mysqlrepair can't handle. Never had nulled files or parts of files. Since upgrading that machine to 2.4.25 everything runs smoothly and without crashes ... thanks for that great filesystem :-). Regards, Thomas From owner-linux-xfs Sat Jul 24 01:12:43 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 24 Jul 2004 01:12:48 -0700 (PDT) Received: from gizmo11bw.bigpond.com (gizmo11bw.bigpond.com [144.140.70.21]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6O8Cg6f015747 for ; Sat, 24 Jul 2004 01:12:43 -0700 Received: (qmail 17581 invoked from network); 24 Jul 2004 08:12:31 -0000 Received: from unknown (HELO BWMAM16.bigpond.com) (144.135.24.114) by gizmo11bw.bigpond.com with SMTP; 24 Jul 2004 08:12:31 -0000 Received: from c25.53.135.144.satellite.bigpond.com ([144.135.53.25]) by BWMAM16.bigpond.com(MAM REL_3_4_2a 261/35836837) with SMTP id 35836837; Sat, 24 Jul 2004 18:12:30 +1000 Message-ID: <002f01c47158$06a03580$0100a8c0@valleychase> From: "Jason Cole" To: "linux-xfs" Subject: ISO installer for RH9 with XFS 1.3.1 Date: Sat, 24 Jul 2004 16:27:11 +0800 MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2800.1158 X-MIMEOLE: Produced By Microsoft MimeOLE V6.00.2800.1165 X-archive-position: 3730 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: colejp@bigpond.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 129 Lines: 11 Hi, Does any one know why SGI removed this from there web site? And would any one know where to get it from. 
Thank's Jason From owner-linux-xfs Sat Jul 24 03:09:34 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 24 Jul 2004 03:09:37 -0700 (PDT) Received: from smtpq3.home.nl (smtpq3.home.nl [213.51.128.198]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6OA9WEq020064 for ; Sat, 24 Jul 2004 03:09:34 -0700 Received: from [213.51.128.135] (port=40078 helo=smtp4.home.nl) by smtpq3.home.nl with esmtp (Exim 4.30) id 1BoJT5-0002Ud-OH; Sat, 24 Jul 2004 12:09:27 +0200 Received: from cp232498-a.gelen1.lb.home.nl ([217.120.68.81]:1037 helo=[192.168.1.100]) by smtp4.home.nl with esmtp (Exim 4.30) id 1BoJT4-0001b0-7f; Sat, 24 Jul 2004 12:09:26 +0200 Message-ID: <41023555.2050503@home.nl> Date: Sat, 24 Jul 2004 12:09:25 +0200 From: Pascal de Bruijn User-Agent: Mozilla Thunderbird 0.7.2 (Windows/20040707) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Jason Cole CC: linux-xfs@oss.sgi.com Subject: Re: ISO installer for RH9 with XFS 1.3.1 References: <002f01c47158$06a03580$0100a8c0@valleychase> In-Reply-To: <002f01c47158$06a03580$0100a8c0@valleychase> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-AtHome-MailScanner-Information: Neem contact op met support@home.nl voor meer informatie X-AtHome-MailScanner: Found to be clean X-archive-position: 3731 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: keizerflipje@home.nl Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 577 Lines: 34 I don't know... But are you sure you want to use this? The kernel included is probably heavily outdated, and doesn't have the latest security patches. And making your own kernel on such a machine would be hard because RedHat's kernel has NPTL patched in. Note that if I'm not mistaken Fedora Core 2 has XFS support, you just need to boot with another kernel (i think)... 
Regards, Pascal de Bruijn Jason Cole wrote: >Hi, > >Does any one know why SGI removed this from there web site? > >And would any one know where to get it from. > > >Thank's > >Jason > > > > > From owner-linux-xfs Sat Jul 24 04:14:21 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 24 Jul 2004 04:15:17 -0700 (PDT) Received: from burgers.bubbanfriends.org (IDENT:weQ6Nff7YZ3mbat8O4l/QNKxOLVHdGqU@burgers.bubbanfriends.org [69.212.163.241]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6OBEJDm021908 for ; Sat, 24 Jul 2004 04:14:20 -0700 Received: from localhost (localhost.localdomain [127.0.0.1]) by burgers.bubbanfriends.org (Postfix) with ESMTP id E9AF8142101A; Sat, 24 Jul 2004 06:14:16 -0500 (EST) Received: from burgers.bubbanfriends.org ([127.0.0.1]) by localhost (burgers.bubbanfriends.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 06209-04; Sat, 24 Jul 2004 06:14:16 -0500 (EST) Received: by burgers.bubbanfriends.org (Postfix, from userid 500) id 58F7A1421002; Sat, 24 Jul 2004 06:14:16 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by burgers.bubbanfriends.org (Postfix) with ESMTP id 40FBD30273DB; Sat, 24 Jul 2004 06:14:16 -0500 (EST) Date: Sat, 24 Jul 2004 06:14:16 -0500 (EST) From: Mike Burger To: Pascal de Bruijn Cc: Jason Cole , linux-xfs@oss.sgi.com Subject: Re: ISO installer for RH9 with XFS 1.3.1 In-Reply-To: <41023555.2050503@home.nl> Message-ID: References: <002f01c47158$06a03580$0100a8c0@valleychase> <41023555.2050503@home.nl> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Virus-Scanned: by amavisd-new at bubbanfriends.org X-archive-position: 3732 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: mburger@bubbanfriends.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 911 Lines: 32 On Sat, 24 Jul 2004, Pascal de Bruijn wrote: > I don't know... But are you sure you want to use this? 
> > The kernel included is probably heavily outdated, and doesn't have the > latest security patches. > > And making your own kernel on such a machine would be hard because > RedHat's kernel has NPTL patched in. > > Note that if I'm not mistaken Fedora Core 2 has XFS support, you just > need to boot with another kernel (i think)... From many reports here, and even the Fedora Unofficial FAQ, you can just boot the FC2 CD with "linux xfs" and be on your way. -- Mike Burger http://www.bubbanfriends.org Visit the Dog Pound II BBS telnet://dogpound2.citadel.org or http://dogpound2.citadel.org To be notified of updates to the web site, visit http://www.bubbanfriends.org/mailman/listinfo/site-update, or send a message to: site-update-request@bubbanfriends.org with a message of: subscribe From owner-linux-xfs Sat Jul 24 04:18:43 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 24 Jul 2004 04:18:54 -0700 (PDT) Received: from gizmo01bw.bigpond.com (gizmo01bw.bigpond.com [144.140.70.11]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6OBITNR022219 for ; Sat, 24 Jul 2004 04:18:42 -0700 Received: (qmail 3637 invoked from network); 24 Jul 2004 11:18:18 -0000 Received: from unknown (HELO BWMAM16.bigpond.com) (144.135.24.114) by gizmo01bw.bigpond.com with SMTP; 24 Jul 2004 11:18:18 -0000 Received: from c25.53.135.144.satellite.bigpond.com ([144.135.53.25]) by BWMAM16.bigpond.com(MAM REL_3_4_2a 261/35912389) with SMTP id 35912389; Sat, 24 Jul 2004 21:18:18 +1000 Message-ID: <00cf01c47171$fabcb210$0100a8c0@valleychase> From: "Jason Cole" To: "linux-xfs" Subject: Re: ISO installer for RH9 with XFS 1.3.1 Date: Sat, 24 Jul 2004 19:32:58 +0800 MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2800.1158 X-MIMEOLE: Produced By Microsoft MimeOLE V6.00.2800.1165 X-archive-position: 3733 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com 
Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: colejp@bigpond.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 988 Lines: 53 > >I don't know... But are you sure you want to use this? > I have been doing some searching and I think they removed it because of SCO's lawsuit against the open source community >The kernel included is probably heavily outdated, and doesn't have the >latest security patches. > >And making your own kernel on such a machine would be hard because >RedHat's kernel has NPTL patched in. I have had a lot of trouble trying to build a kernel for RH9, it just wouldn't compile properly ;( > >Note that if I'm not mistaken Fedora Core 2 has XFS support, you just >need to boot with another kernel (i think)... > I have already started to download these ISO's, did find old post where some people did say it was supported but not XFSPROGS, but i have high hopes :-) >Regards, >Pascal de Bruijn > > > >Jason Cole wrote: > >>Hi, > >> >>Does any one know why SGI removed this from there web site? >> >>And would any one know where to get it from. 
>> >> >>Thank's >> >>Jason >> > Cheers Jason From owner-linux-xfs Sat Jul 24 09:06:22 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 24 Jul 2004 09:06:40 -0700 (PDT) Received: from burgers.bubbanfriends.org (IDENT:IdEHcFcfgQg5LPkjMdPKY5Lsg8l7LY+U@burgers.bubbanfriends.org [69.212.163.241]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6OG6Lrq001185 for ; Sat, 24 Jul 2004 09:06:22 -0700 Received: from localhost (localhost.localdomain [127.0.0.1]) by burgers.bubbanfriends.org (Postfix) with ESMTP id 4D7AC142101A; Sat, 24 Jul 2004 11:06:18 -0500 (EST) Received: from burgers.bubbanfriends.org ([127.0.0.1]) by localhost (burgers.bubbanfriends.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 13775-04; Sat, 24 Jul 2004 11:06:17 -0500 (EST) Received: by burgers.bubbanfriends.org (Postfix, from userid 500) id B8C401421002; Sat, 24 Jul 2004 11:06:17 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by burgers.bubbanfriends.org (Postfix) with ESMTP id B5A1E3027603; Sat, 24 Jul 2004 11:06:17 -0500 (EST) Date: Sat, 24 Jul 2004 11:06:17 -0500 (EST) From: Mike Burger To: Jason Cole Cc: linux-xfs Subject: Re: ISO installer for RH9 with XFS 1.3.1 In-Reply-To: <00cf01c47171$fabcb210$0100a8c0@valleychase> Message-ID: References: <00cf01c47171$fabcb210$0100a8c0@valleychase> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Virus-Scanned: by amavisd-new at bubbanfriends.org X-archive-position: 3734 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: mburger@bubbanfriends.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 796 Lines: 31 On Sat, 24 Jul 2004, Jason Cole wrote: > >Note that if I'm not mistaken Fedora Core 2 has XFS support, you just > >need to boot with another kernel (i think)... 
> > > > I have already started to download these ISO's, did find old post where > some people did say it was supported but not XFSPROGS, but i have high hopes > :-) The Fedora project lists xfsprogs 2.6.13 as included in their package list: http://fedora.redhat.com/projects/package-list -- Mike Burger http://www.bubbanfriends.org Visit the Dog Pound II BBS telnet://dogpound2.citadel.org or http://dogpound2.citadel.org To be notified of updates to the web site, visit http://www.bubbanfriends.org/mailman/listinfo/site-update, or send a message to: site-update-request@bubbanfriends.org with a message of: subscribe From owner-linux-xfs Sat Jul 24 15:17:59 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 24 Jul 2004 15:18:11 -0700 (PDT) Received: from plato.arts.usyd.edu.au (plato.arts.usyd.edu.au [129.78.16.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6OMHtnu001920 for ; Sat, 24 Jul 2004 15:17:58 -0700 Received: from arts.usyd.edu.au (holly.aitch.ucc.usyd.edu.au [129.78.226.234]) by plato.arts.usyd.edu.au (8.12.6/8.12.6) with ESMTP id i6OMHpmW019717 for ; Sun, 25 Jul 2004 08:17:52 +1000 (EST) Message-ID: <4102E006.3070309@arts.usyd.edu.au> Date: Sun, 25 Jul 2004 08:17:42 +1000 From: Matthew Geier User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040510 X-Accept-Language: en-us, en MIME-Version: 1.0 To: linux-xfs Subject: Re: ISO installer for RH9 with XFS 1.3.1 References: <00cf01c47171$fabcb210$0100a8c0@valleychase> In-Reply-To: Content-Type: multipart/signed; protocol="application/x-pkcs7-signature"; micalg=sha1; boundary="------------ms020409030006060802030801" X-archive-position: 3735 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: matthew@arts.usyd.edu.au Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 5803 Lines: 112 This is a cryptographically signed message in MIME format. 
--------------ms020409030006060802030801 Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit Mike Burger wrote: > On Sat, 24 Jul 2004, Jason Cole wrote: > > >>>Note that if I'm not mistaken Fedora Core 2 has XFS support, you just >>>need to boot with another kernel (i think)... >>> >> >>I have already started to download these ISO's, did find old post where >>some people did say it was supported but not XFSPROGS, but i have high hopes >>:-) It's all there, it's just if you do an XFS install and xfsprogs is the only package it needs from CD4, at the start it won't say it needs CD4, it will just 'spring that on you' later on when it actually needs CD4 to get the xfsprogs packages off it. You can also upgrade a RH9+XFS to Fedora 2 WITH OUT any boot options - it detects the existing XFS filesystems and does the right thing. I've done this sucessfully. --------------ms020409030006060802030801 Content-Type: application/x-pkcs7-signature; name="smime.p7s" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="smime.p7s" Content-Description: S/MIME Cryptographic Signature MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEH AQAAoIIJmzCCAygwggKRoAMCAQICAwxAwjANBgkqhkiG9w0BAQQFADBiMQsw CQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg THRkLjEsMCoGA1UEAxMjVGhhd3RlIFBlcnNvbmFsIEZyZWVtYWlsIElzc3Vp bmcgQ0EwHhcNMDQwNTA1MDUwMTUwWhcNMDUwNTA1MDUwMTUwWjB3MR8wHQYD VQQDExZUaGF3dGUgRnJlZW1haWwgTWVtYmVyMScwJQYJKoZIhvcNAQkBFhht YXR0aGV3QGFydHMudXN5ZC5lZHUuYXUxKzApBgkqhkiG9w0BCQEWHG1hdHRo ZXdAc2xlZXBlci5hcGFuYS5vcmcuYXUwggEiMA0GCSqGSIb3DQEBAQUAA4IB DwAwggEKAoIBAQDwAd1tx+TWOfa5VADB99s2pAaccqVbTISUBoOdIaPeOR+H gHeRg0VbYc0c1+8x6ed/bBeEk8Ve6V8vQ3lq7PwwBHFXSUn1vqSPtPwm+ti/ mhCu/ZqdMs+pTGRbcPng7pABRkKNyDujDELu4H6FvFSBYkXFVHiMCRUzaETb gGZIGY8qKqRUgSWeRpTEkqsr8HdtIhtMzkT2oXC8nnQGGK0UIF60EqUUKpFM HyKsmZXx2Vjk8qosYIoD0mTvisYLYY8MHKaXgxt4qLNJ6Ts6ZwLmmHv6uPuU y4dU8YWttOhI6mug/lT4BHE3Oz2jy/Geanancg8bFWRrSphuGekv0wGRAgMB 
AAGjUzBRMEEGA1UdEQQ6MDiBGG1hdHRoZXdAYXJ0cy51c3lkLmVkdS5hdYEc bWF0dGhld0BzbGVlcGVyLmFwYW5hLm9yZy5hdTAMBgNVHRMBAf8EAjAAMA0G CSqGSIb3DQEBBAUAA4GBAKYUVmleWefqGmH5SeQMX8M89Tt9l6qr6b+rQN6S 8SgSoLziOW1ND+y3Ph78BV9v82QWNBM2G65TFsFd64fgptwYdnj4/hYrlkJF UeghOQE9zS4+LWOwlS1f/tn5SuuP7c2lb1+MNlGsnI6mwpCjqbPGpGMhsS1v CBgTrYinXeb1MIIDKDCCApGgAwIBAgIDDEDCMA0GCSqGSIb3DQEBBAUAMGIx CzAJBgNVBAYTAlpBMSUwIwYDVQQKExxUaGF3dGUgQ29uc3VsdGluZyAoUHR5 KSBMdGQuMSwwKgYDVQQDEyNUaGF3dGUgUGVyc29uYWwgRnJlZW1haWwgSXNz dWluZyBDQTAeFw0wNDA1MDUwNTAxNTBaFw0wNTA1MDUwNTAxNTBaMHcxHzAd BgNVBAMTFlRoYXd0ZSBGcmVlbWFpbCBNZW1iZXIxJzAlBgkqhkiG9w0BCQEW GG1hdHRoZXdAYXJ0cy51c3lkLmVkdS5hdTErMCkGCSqGSIb3DQEJARYcbWF0 dGhld0BzbGVlcGVyLmFwYW5hLm9yZy5hdTCCASIwDQYJKoZIhvcNAQEBBQAD ggEPADCCAQoCggEBAPAB3W3H5NY59rlUAMH32zakBpxypVtMhJQGg50ho945 H4eAd5GDRVthzRzX7zHp539sF4STxV7pXy9DeWrs/DAEcVdJSfW+pI+0/Cb6 2L+aEK79mp0yz6lMZFtw+eDukAFGQo3IO6MMQu7gfoW8VIFiRcVUeIwJFTNo RNuAZkgZjyoqpFSBJZ5GlMSSqyvwd20iG0zORPahcLyedAYYrRQgXrQSpRQq kUwfIqyZlfHZWOTyqixgigPSZO+KxgthjwwcppeDG3ios0npOzpnAuaYe/q4 +5TLh1Txha206Ejqa6D+VPgEcTc7PaPL8Z5qdqdyDxsVZGtKmG4Z6S/TAZEC AwEAAaNTMFEwQQYDVR0RBDowOIEYbWF0dGhld0BhcnRzLnVzeWQuZWR1LmF1 gRxtYXR0aGV3QHNsZWVwZXIuYXBhbmEub3JnLmF1MAwGA1UdEwEB/wQCMAAw DQYJKoZIhvcNAQEEBQADgYEAphRWaV5Z5+oaYflJ5Axfwzz1O32Xqqvpv6tA 3pLxKBKgvOI5bU0P7Lc+HvwFX2/zZBY0EzYbrlMWwV3rh+Cm3Bh2ePj+FiuW QkVR6CE5AT3NLj4tY7CVLV/+2flK64/tzaVvX4w2UaycjqbCkKOps8akYyGx LW8IGBOtiKdd5vUwggM/MIICqKADAgECAgENMA0GCSqGSIb3DQEBBQUAMIHR MQswCQYDVQQGEwJaQTEVMBMGA1UECBMMV2VzdGVybiBDYXBlMRIwEAYDVQQH EwlDYXBlIFRvd24xGjAYBgNVBAoTEVRoYXd0ZSBDb25zdWx0aW5nMSgwJgYD VQQLEx9DZXJ0aWZpY2F0aW9uIFNlcnZpY2VzIERpdmlzaW9uMSQwIgYDVQQD ExtUaGF3dGUgUGVyc29uYWwgRnJlZW1haWwgQ0ExKzApBgkqhkiG9w0BCQEW HHBlcnNvbmFsLWZyZWVtYWlsQHRoYXd0ZS5jb20wHhcNMDMwNzE3MDAwMDAw WhcNMTMwNzE2MjM1OTU5WjBiMQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhh d3RlIENvbnN1bHRpbmcgKFB0eSkgTHRkLjEsMCoGA1UEAxMjVGhhd3RlIFBl cnNvbmFsIEZyZWVtYWlsIElzc3VpbmcgQ0EwgZ8wDQYJKoZIhvcNAQEBBQAD 
gY0AMIGJAoGBAMSmPFVzVftOucqZWh5owHUEcJ3f6f+jHuy9zfVb8hp2vX8M OmHyv1HOAdTlUAow1wJjWiyJFXCO3cnwK4Vaqj9xVsuvPAsH5/EfkTYkKhPP K9Xzgnc9A74r/rsYPge/QIACZNenprufZdHFKlSFD0gEf6e20TxhBEAeZBly YLf7AgMBAAGjgZQwgZEwEgYDVR0TAQH/BAgwBgEB/wIBADBDBgNVHR8EPDA6 MDigNqA0hjJodHRwOi8vY3JsLnRoYXd0ZS5jb20vVGhhd3RlUGVyc29uYWxG cmVlbWFpbENBLmNybDALBgNVHQ8EBAMCAQYwKQYDVR0RBCIwIKQeMBwxGjAY BgNVBAMTEVByaXZhdGVMYWJlbDItMTM4MA0GCSqGSIb3DQEBBQUAA4GBAEiM 0VCD6gsuzA2jZqxnD3+vrL7CF6FDlpSdf0whuPg2H6otnzYvwPQcUCCTcDz9 reFhYsPZOhl+hLGZGwDFGguCdJ4lUJRix9sncVcljd2pnDmOjCBPZV+V2vf3 h9bGCE6u9uo05RAaWzVNd+NWIXiC3CEZNd4ksdMdRv9dX2VPMYIDOzCCAzcC AQEwaTBiMQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRp bmcgKFB0eSkgTHRkLjEsMCoGA1UEAxMjVGhhd3RlIFBlcnNvbmFsIEZyZWVt YWlsIElzc3VpbmcgQ0ECAwxAwjAJBgUrDgMCGgUAoIIBpzAYBgkqhkiG9w0B CQMxCwYJKoZIhvcNAQcBMBwGCSqGSIb3DQEJBTEPFw0wNDA3MjQyMjE3NDJa MCMGCSqGSIb3DQEJBDEWBBRM89Wn3lCzg+K3OtDmmXw3E5CxzTBSBgkqhkiG 9w0BCQ8xRTBDMAoGCCqGSIb3DQMHMA4GCCqGSIb3DQMCAgIAgDANBggqhkiG 9w0DAgIBQDAHBgUrDgMCBzANBggqhkiG9w0DAgIBKDB4BgkrBgEEAYI3EAQx azBpMGIxCzAJBgNVBAYTAlpBMSUwIwYDVQQKExxUaGF3dGUgQ29uc3VsdGlu ZyAoUHR5KSBMdGQuMSwwKgYDVQQDEyNUaGF3dGUgUGVyc29uYWwgRnJlZW1h aWwgSXNzdWluZyBDQQIDDEDCMHoGCyqGSIb3DQEJEAILMWugaTBiMQswCQYD VQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkgTHRk LjEsMCoGA1UEAxMjVGhhd3RlIFBlcnNvbmFsIEZyZWVtYWlsIElzc3Vpbmcg Q0ECAwxAwjANBgkqhkiG9w0BAQEFAASCAQAw5kBXLPmkOpGKq6RopdK7Kc1K s8faGwY2AdY84dQtFXCstnLYM59Tl27E72BiKSJ3sYN6cVfkSP5DZf3SmZD1 RUXY6tRjJNnGTlKgJj0tDFLLI+MaF1qrEp04mtVWzAMuX3BiN3J4xZektpvN yq+Kmy93NhjAkYy0NsOeVHUjq/A6P/YnOVf6CrWsWrIAK66WCVuymSEvn6ud REJFtWcye5HHrJ/X2OCqS3jDQeuU/8epzsiggMxBSQrVqMikqPkY+KUI1eak /GTnGFtlhqon7VxBZCNYmXzj47y7yLdv8Rae3wh2bCDI0624eqjUAaJaOw00 NpaNk7khMCtruMlZAAAAAAAA --------------ms020409030006060802030801-- From owner-linux-xfs Sun Jul 25 01:04:26 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 25 Jul 2004 01:04:38 -0700 (PDT) Received: from pimout1-ext.prodigy.net (pimout1-ext.prodigy.net [207.115.63.77]) by 
oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6P84PfI023530 for ; Sun, 25 Jul 2004 01:04:26 -0700 Received: from taniwha.stupidest.org (adsl-67-124-119-20.dsl.snfc21.pacbell.net [67.124.119.20]) by pimout1-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6P84JDi038826; Sun, 25 Jul 2004 04:04:20 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id A8AB3115C80C; Sun, 25 Jul 2004 01:04:18 -0700 (PDT) Date: Sun, 25 Jul 2004 01:04:18 -0700 From: Chris Wedgwood To: Matthew Geier Cc: linux-xfs Subject: Re: ISO installer for RH9 with XFS 1.3.1 Message-ID: <20040725080418.GA1026@taniwha.stupidest.org> References: <00cf01c47171$fabcb210$0100a8c0@valleychase> <4102E006.3070309@arts.usyd.edu.au> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4102E006.3070309@arts.usyd.edu.au> X-archive-position: 3736 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 670 Lines: 18 On Sun, Jul 25, 2004 at 08:17:42AM +1000, Matthew Geier wrote: > It's all there, it's just if you do an XFS install and xfsprogs is > the only package it needs from CD4, at the start it won't say it > needs CD4, it will just 'spring that on you' later on when it > actually needs CD4 to get the xfsprogs packages off it. this might sound silly, but i've never done a fc install so humor me... i assume you can install *almost* everything you absolutely need from the first CD right? 
except for xfsprogs on CD4 --- if that is the case shouldn't someone open a bug with RH to have this mode to CD1 (since it's small I can't see why this would be a big deal) --cw From owner-linux-xfs Sun Jul 25 01:38:26 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 25 Jul 2004 01:38:42 -0700 (PDT) Received: from mail.ocs.com.au (mail.ocs.com.au [202.147.117.210]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6P8c5g0024590 for ; Sun, 25 Jul 2004 01:38:26 -0700 Received: from ocs3.ocs.com.au (ocs3.ocs.com.au [192.168.255.3]) by mail.ocs.com.au (Postfix) with ESMTP id CC19B180092 for ; Sun, 25 Jul 2004 18:37:52 +1000 (EST) Received: by ocs3.ocs.com.au (Postfix, from userid 16331) id 0A7C7C2172; Sun, 25 Jul 2004 18:37:49 +1000 (EST) Received: from ocs3.ocs.com.au (localhost [127.0.0.1]) by ocs3.ocs.com.au (Postfix) with ESMTP id E99BA1400AE for ; Sun, 25 Jul 2004 18:37:49 +1000 (EST) X-Mailer: exmh version 2.6.3_20040314 03/14/2004 with nmh-1.0.4 From: Keith Owens To: linux-xfs Subject: Re: ISO installer for RH9 with XFS 1.3.1 In-reply-to: Your message of "Sun, 25 Jul 2004 01:04:18 MST." <20040725080418.GA1026@taniwha.stupidest.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Date: Sun, 25 Jul 2004 18:37:48 +1000 Message-ID: <14964.1090744668@ocs3.ocs.com.au> X-archive-position: 3737 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: kaos@sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 1548 Lines: 38 On Sun, 25 Jul 2004 01:04:18 -0700, Chris Wedgwood wrote: >On Sun, Jul 25, 2004 at 08:17:42AM +1000, Matthew Geier wrote: > >> It's all there, it's just if you do an XFS install and xfsprogs is >> the only package it needs from CD4, at the start it won't say it >> needs CD4, it will just 'spring that on you' later on when it >> actually needs CD4 to get the xfsprogs packages off it. 
> >this might sound silly, but i've never done a fc install so humor >me... > >i assume you can install *almost* everything you absolutely need from >the first CD right? except for xfsprogs on CD4 --- if that is the >case shouldn't someone open a bug with RH to have this mode to CD1 >(since it's small I can't see why this would be a big deal)

 651884 FC2-i386-disc1.iso
 650198 FC2-i386-disc2.iso
 653336 FC2-i386-disc3.iso
 198962 FC2-i386-disc4.iso
  77668 FC2-i386-rescuecd.iso
 509744 FC2-i386-SRPMS-disc1.iso
 509762 FC2-i386-SRPMS-disc2.iso
 509742 FC2-i386-SRPMS-disc3.iso
 509774 FC2-i386-SRPMS-disc4.iso

 960600 FC2-i386-disc4.iso/Fedora/RPMS/xfsprogs-2.6.13-1.i386.rpm
 258170 FC2-i386-disc4.iso/Fedora/RPMS/xfsprogs-devel-2.6.13-1.i386.rpm

Moving xfsprogs from disc4 to disc1 would fit, disc1 would go to 652822 which is less than disc3. OTOH, you could copy all of disc1-4 into a single directory then boot disc1 using option 'askmethod'. Point the installer at the merged directory (either local or NFS) and you will never have to swap a CD again. Well worth it if you need to upgrade more than one machine. 
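[Editor's note: the merged-directory approach above can be sketched as follows. This is an illustration only: in real use each disc would be loop-mounted as root (mount -o loop FC2-i386-discN.iso /mnt/disc) and its contents copied into one tree; here plain directories stand in for the mounted discs so the merge step itself is runnable, and all paths and file names are made up.]

```python
import os
import shutil
import tempfile

work = tempfile.mkdtemp()
dest = os.path.join(work, "fc2-merged")   # the single merged install tree

for n in range(1, 5):
    # Stand-in for the mounted contents of FC2-i386-disc<n>.iso (/mnt/disc).
    src = os.path.join(work, "disc%d" % n)
    os.makedirs(os.path.join(src, "Fedora", "RPMS"))
    open(os.path.join(src, "Fedora", "RPMS", "pkg-disc%d.rpm" % n), "w").close()
    # Merge this disc's contents into the common tree.
    shutil.copytree(src, dest, dirs_exist_ok=True)

print(sorted(os.listdir(os.path.join(dest, "Fedora", "RPMS"))))
# -> ['pkg-disc1.rpm', 'pkg-disc2.rpm', 'pkg-disc3.rpm', 'pkg-disc4.rpm']
```

Then boot disc1 with the 'askmethod' option and point the installer at the merged directory (local disk or NFS), as described above.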
From owner-linux-xfs Sun Jul 25 02:16:30 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 25 Jul 2004 02:16:37 -0700 (PDT) Received: from heretic.physik.fu-berlin.de (heretic.physik.fu-berlin.de [160.45.32.227]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6P9GSuD025371 for ; Sun, 25 Jul 2004 02:16:29 -0700 Received: by heretic.physik.fu-berlin.de (8.12.10/8.12.10) with ESMTP id i6P9G5pg010167 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK); Sun, 25 Jul 2004 11:16:07 +0200 Received: (from thimm@localhost) by neu.nirvana (8.12.11/8.12.11/Submit) id i6P9G273030692; Sun, 25 Jul 2004 11:16:02 +0200 Date: Sun, 25 Jul 2004 11:16:01 +0200 From: Axel Thimm To: Keith Owens Cc: linux-xfs Subject: FC2 installer (was: ISO installer for RH9 with XFS 1.3.1) Message-ID: <20040725091601.GB11018@neu.nirvana> References: <20040725080418.GA1026@taniwha.stupidest.org> <14964.1090744668@ocs3.ocs.com.au> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="A6N2fC+uXW/VQSAv" Content-Disposition: inline In-Reply-To: <14964.1090744668@ocs3.ocs.com.au> User-Agent: Mutt/1.4.2i X-archive-position: 3738 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: Axel.Thimm@ATrpms.net Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 2586 Lines: 69 --A6N2fC+uXW/VQSAv Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Sun, Jul 25, 2004 at 06:37:48PM +1000, Keith Owens wrote: > On Sun, 25 Jul 2004 01:04:18 -0700,=20 > Chris Wedgwood wrote: > >On Sun, Jul 25, 2004 at 08:17:42AM +1000, Matthew Geier wrote: > > > >> It's all there, it's just if you do an XFS install and xfsprogs is > >> the only package it needs from CD4, at the start it won't say it > >> needs CD4, it will just 'spring that on you' later on when it > >> actually needs CD4 to get the xfsprogs 
packages off it. > > > >this might sound silly, but i've never done a fc install so humor > >me... > > > >i assume you can install *almost* everything you absolutely need from > >the first CD right? except for xfsprogs on CD4 --- if that is the > >case shouldn't someone open a bug with RH to have this mode to CD1 > >(since it's small I can't see why this would be a big deal) >=20 > 651884 FC2-i386-disc1.iso > 650198 FC2-i386-disc2.iso > 653336 FC2-i386-disc3.iso > 198962 FC2-i386-disc4.iso > 77668 FC2-i386-rescuecd.iso > 509744 FC2-i386-SRPMS-disc1.iso > 509762 FC2-i386-SRPMS-disc2.iso > 509742 FC2-i386-SRPMS-disc3.iso > 509774 FC2-i386-SRPMS-disc4.iso >=20 > 960600 FC2-i386-disc4.iso/Fedora/RPMS/xfsprogs-2.6.13-1.i386.rpm > 258170 FC2-i386-disc4.iso/Fedora/RPMS/xfsprogs-devel-2.6.13-1.i386.rpm >=20 > Moving xfsprogs from disc4 to disc1 would fit, disc1 would go to 652822 > which is less than disc3. >=20 > OTOH, you could copy all of disc1-4 into a single directory then boot > disc1 using option 'askmethod'. Point the installer at the merged > directory (either local or NFS) and you will never have to swap a CD > again. Well worth it if you need to upgrade more than one machine. If you are network-less there is a DVD image you can use if you don't like CD-jokeying. And for kickstarting a series of machines you can easily craft pxeboot environments automated with "linux xfs" :) :) :) FC3 is already in test1 (there are three tests and then the final release which is scheduled to land in October). Perhaps someone wants to check whether this has been fixed by now, and if not let jkatz/bugzilla know? 
--=20 Axel.Thimm at ATrpms.net --A6N2fC+uXW/VQSAv Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iD8DBQFBA3pRQBVS1GOamfERAobAAJ0YJP0kgE3M5aTUGtbpvw3nlkTg7wCff5sW QcQrvrhmePnkdnnBJvz9DZw= =F/xw -----END PGP SIGNATURE----- --A6N2fC+uXW/VQSAv-- From owner-linux-xfs Sun Jul 25 02:19:24 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 25 Jul 2004 02:19:36 -0700 (PDT) Received: from heretic.physik.fu-berlin.de (heretic.physik.fu-berlin.de [160.45.32.227]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6P9JN38025693 for ; Sun, 25 Jul 2004 02:19:23 -0700 Received: by heretic.physik.fu-berlin.de (8.12.10/8.12.10) with ESMTP id i6P9Iopg018846 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK); Sun, 25 Jul 2004 11:18:51 +0200 Received: (from thimm@localhost) by neu.nirvana (8.12.11/8.12.11/Submit) id i6P9IoYZ030738; Sun, 25 Jul 2004 11:18:50 +0200 Date: Sun, 25 Jul 2004 11:18:50 +0200 From: Axel Thimm To: linux-xfs@oss.sgi.com Subject: non-4KSTACKS kernel for FC2 Message-ID: <20040725091850.GC11018@neu.nirvana> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="4ZLFUWh1odzi/v6L" Content-Disposition: inline User-Agent: Mutt/1.4.2i X-archive-position: 3739 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: Axel.Thimm@ATrpms.net Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 752 Lines: 29 --4ZLFUWh1odzi/v6L Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable There is a FC2 kernel w/o 4KSTACKS at http://ATrpms.net/name/kernel-testing/ (the 2.6.7-1.456_4.rhfc2.at kernel, the 2.6.7-1.494_5.rhfc2.at had 4KSTACKS turned on again, but will soon be replaced by an 8kstacks version, too. Too many issues with 4KSTACKS for my blood pressure ...) 
--=20 Axel.Thimm at ATrpms.net --4ZLFUWh1odzi/v6L Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iD8DBQFBA3r6QBVS1GOamfERAlb9AJwJVLNO4vJ3+mVkGgfMuWmGiBbtIgCfey5G pYTANfXYMWKkkoN8h8k/Crc= =CZE3 -----END PGP SIGNATURE----- --4ZLFUWh1odzi/v6L-- From owner-linux-xfs Sun Jul 25 05:42:46 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 25 Jul 2004 05:42:49 -0700 (PDT) Received: from burgers.bubbanfriends.org (IDENT:6TvHnegB6qWhOCABbo5n52F6giYB1TbL@burgers.bubbanfriends.org [69.212.163.241]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6PCghst007937 for ; Sun, 25 Jul 2004 05:42:46 -0700 Received: from localhost (localhost.localdomain [127.0.0.1]) by burgers.bubbanfriends.org (Postfix) with ESMTP id 9BDB6142101D; Sun, 25 Jul 2004 07:42:40 -0500 (EST) Received: from burgers.bubbanfriends.org ([127.0.0.1]) by localhost (burgers.bubbanfriends.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 02416-05; Sun, 25 Jul 2004 07:42:40 -0500 (EST) Received: by burgers.bubbanfriends.org (Postfix, from userid 500) id 0DE07142101B; Sun, 25 Jul 2004 07:42:39 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by burgers.bubbanfriends.org (Postfix) with ESMTP id C724730273DB; Sun, 25 Jul 2004 07:42:39 -0500 (EST) Date: Sun, 25 Jul 2004 07:42:39 -0500 (EST) From: Mike Burger To: Chris Wedgwood Cc: Matthew Geier , linux-xfs Subject: Re: ISO installer for RH9 with XFS 1.3.1 In-Reply-To: <20040725080418.GA1026@taniwha.stupidest.org> Message-ID: References: <00cf01c47171$fabcb210$0100a8c0@valleychase> <4102E006.3070309@arts.usyd.edu.au> <20040725080418.GA1026@taniwha.stupidest.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Virus-Scanned: by amavisd-new at bubbanfriends.org X-archive-position: 3740 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: mburger@bubbanfriends.org Precedence: 
bulk X-list: linux-xfs Status: RO Content-Length: 1302 Lines: 39

On Sun, 25 Jul 2004, Chris Wedgwood wrote:

> On Sun, Jul 25, 2004 at 08:17:42AM +1000, Matthew Geier wrote:
>
> > It's all there, it's just if you do an XFS install and xfsprogs is
> > the only package it needs from CD4, at the start it won't say it
> > needs CD4, it will just 'spring that on you' later on when it
> > actually needs CD4 to get the xfsprogs packages off it.
>
> this might sound silly, but I've never done an FC install, so humor
> me...
>
> I assume you can install *almost* everything you absolutely need from
> the first CD, right? except for xfsprogs on CD4 --- if that is the
> case, shouldn't someone open a bug with RH to have this moved to CD1
> (since it's small I can't see why this would be a big deal)

The actual bug would be that it doesn't inform you that you'll need CD4
until it's actually time to install that package.

I'm just as inclined to leave packages where they are on the CDs, but fix
the actual functional bug.
-- Mike Burger http://www.bubbanfriends.org Visit the Dog Pound II BBS telnet://dogpound2.citadel.org or http://dogpound2.citadel.org To be notified of updates to the web site, visit http://www.bubbanfriends.org/mailman/listinfo/site-update, or send a message to: site-update-request@bubbanfriends.org with a message of: subscribe From owner-linux-xfs Mon Jul 26 13:45:35 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 26 Jul 2004 13:45:48 -0700 (PDT) Received: from omx1.americas.sgi.com (omx1-ext.sgi.com [192.48.179.11]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6QKjY2O019236 for ; Mon, 26 Jul 2004 13:45:35 -0700 Received: from flecktone.americas.sgi.com (flecktone.americas.sgi.com [192.48.203.135]) by omx1.americas.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with ESMTP id i6QKjV0f000930 for ; Mon, 26 Jul 2004 15:45:31 -0500 Received: from daisy-e236.americas.sgi.com (daisy-e236.americas.sgi.com [128.162.236.214]) by flecktone.americas.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id i6QKjVOV43680245 for ; Mon, 26 Jul 2004 15:45:31 -0500 (CDT) Received: from sgi.com (penguin.americas.sgi.com [128.162.240.135]) by daisy-e236.americas.sgi.com (8.12.9/SGI-server-1.8) with ESMTP id i6QKjU5N5100438; Mon, 26 Jul 2004 15:45:30 -0500 (CDT) Received: from penguin.americas.sgi.com (localhost.localdomain [127.0.0.1]) by sgi.com (8.12.8/8.12.8) with ESMTP id i6QKjBRr012080; Mon, 26 Jul 2004 15:45:12 -0500 Received: (from sandeen@localhost) by penguin.americas.sgi.com (8.12.8/8.12.8/Submit) id i6QKjBDO012078; Mon, 26 Jul 2004 15:45:11 -0500 Date: Mon, 26 Jul 2004 15:45:11 -0500 From: Eric Sandeen Message-Id: <200407262045.i6QKjBDO012078@penguin.americas.sgi.com> Subject: TAKE 911116 - X-archive-position: 3742 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: sandeen@penguin.americas.sgi.com Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 389 Lines: 15 
Don't lock down user pages when doing direct IO; this can lead to trouble (double-locking zero page, etc). Date: Mon Jul 26 13:44:23 PDT 2004 Workarea: penguin.americas.sgi.com:/src/eric/linux-2.4.x Inspected by: nathans,hch The following file(s) were checked into: bonnie.engr.sgi.com:/isms/linux/2.4.x-xfs Modid: xfs-linux:xfs-kern:175810a fs/xfs/linux-2.4/xfs_buf.c - 1.193 From owner-linux-xfs Tue Jul 27 06:55:13 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 27 Jul 2004 06:55:16 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6RDtDrX004442 for ; Tue, 27 Jul 2004 06:55:13 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6RDtDxN004441 for linux-xfs@oss.sgi.com; Tue, 27 Jul 2004 06:55:13 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6RDtAO5004426 for ; Tue, 27 Jul 2004 06:55:11 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6RDEnUq003299; Tue, 27 Jul 2004 06:14:49 -0700 Date: Tue, 27 Jul 2004 06:14:49 -0700 Message-Id: <200407271314.i6RDEnUq003299@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 272] xfs_force_shutdown in xfs_trans_cancel, part 2 X-Bugzilla-Reason: AssignedTo X-archive-position: 3743 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 1435 Lines: 57 http://oss.sgi.com/bugzilla/show_bug.cgi?id=272 ------- Additional Comments From Peter.Kelemen+sgi@cern.ch 2004-27-07 06:14 PDT ------- We found a particular case where we can repeatedly reproduce this error. xfs_force_shutdown(md0,0x8) called from line 1088 of file fs/xfs/xfs_trans.c. Return address = 0xa00000020034ce90 Filesystem "md0": Corruption of in-memory data detected. 
Shutting down filesystem: md0

Hardware:
  4x Itanium2 1.5 GHz (HP rx4640), 16 GB RAM
  2x 3ware 9500-12MI (all disks JBOD)
  24x WD Raptor 74GB

Software: Linux 2.6.8-rc1-mm1

$ zgrep _XFS /proc/config.gz
CONFIG_XFS_FS=m
# CONFIG_XFS_RT is not set
CONFIG_XFS_QUOTA=y
# CONFIG_XFS_SECURITY is not set
CONFIG_XFS_POSIX_ACL=y

mdadm-1.5.0-3
xfsprogs-2.6.13-1

DEVICE /dev/sd[c-z]
ARRAY /dev/md0 level=raid0 num-devices=24
1640349696 blocks 1024k chunks

Filesystem was created with
  mkfs.xfs -f -L data01 -d su=1m,sw=24 -l version=2,su=256k -i size=512 /dev/md0
default mount options

Script to reproduce:

# script not optimized :-)
dd if=/dev/zero of=/tmp/TEST bs=199813120 count=1
cd /mnt
for i in `seq 1 10`; do cat TEST >> test; done
for i in `seq 2 12`; do echo $i; cp test test$i; sleep 1; done  # cache is full around #8
cp test12 test13  # KABOOM.

I don't see the contents of the proposed patch for bug#186 in my source tree.

------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.
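For reference, the reproduction recipe in the report can be exercised at a harmless scale to see the I/O pattern it drives. This is my own scaled-down sketch, not the reporter's script: it uses a throwaway temp directory and a 1 MiB seed chunk instead of ~200 MB chunks on the XFS mount, so it shows the same append-then-repeated-copy pattern without filling a disk (the original also slept 1 s between copies and saw the shutdown around copy 8):

```shell
#!/bin/sh
# Scaled-down sketch of the bug-272 reproduction pattern:
# build one seed chunk, append it repeatedly into one big file,
# then copy that file several times in a row.
set -e
WORK=$(mktemp -d)              # scratch dir stands in for the XFS mount
cd "$WORK"

# seed chunk: 1 MiB here, bs=199813120 in the original report
dd if=/dev/zero of=TEST bs=1048576 count=1 2>/dev/null

# append the seed ten times -> a 10 MiB file named "test"
for i in `seq 1 10`; do cat TEST >> test; done

# copy the big file repeatedly (original: test2..test13, sleep 1 between)
for i in `seq 2 12`; do cp test test$i; done

echo $(wc -c < test12)         # prints 10485760 (10 MiB)
```

At the original sizes, run against the striped md0 XFS mount, this is what triggered the xfs_force_shutdown in the report; at this scale it just demonstrates the access pattern.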
From owner-linux-xfs Tue Jul 27 07:55:12 2004
Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 27 Jul 2004 07:55:37 -0700 (PDT)
Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6REtCMd007699 for ; Tue, 27 Jul 2004 07:55:12 -0700
Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6REtCMB007698 for linux-xfs@oss.sgi.com; Tue, 27 Jul 2004 07:55:12 -0700
Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6REtAbD007684 for ; Tue, 27 Jul 2004 07:55:11 -0700
Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6REsS2R007670; Tue, 27 Jul 2004 07:54:28 -0700
Date: Tue, 27 Jul 2004 07:54:28 -0700
Message-Id: <200407271454.i6REsS2R007670@oss.sgi.com>
From: bugzilla-daemon@oss.sgi.com
To: xfs-master@oss.sgi.com
Subject: [Bug 272] xfs_force_shutdown in xfs_trans_cancel, part 2
X-Bugzilla-Reason: AssignedTo
X-archive-position: 3744
X-ecartis-version: Ecartis v1.0.0
Sender: linux-xfs-bounce@oss.sgi.com
Errors-to: linux-xfs-bounce@oss.sgi.com
X-original-sender: bugzilla-daemon@oss.sgi.com
Precedence: bulk
X-list: linux-xfs
Status: O
Content-Length: 805
Lines: 23

http://oss.sgi.com/bugzilla/show_bug.cgi?id=350

------- Additional Comments From Peter.Kelemen+sgi@cern.ch 2004-27-07 07:54 PDT -------

Filesystem layout:

meta-data=/mnt               isize=512    agcount=32, agsize=12815360 blks
         =                   sectsz=512
data     =                   bsize=4096   blocks=410087424, imaxpct=25
         =                   sunit=256    swidth=6144 blks, unwritten=1
naming   =version 2          bsize=4096
log      =internal           bsize=4096   blocks=32768, version=2
         =                   sectsz=512   sunit=64 blks
realtime =none               extsz=25165824 blocks=0, rtextents=0

------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.
From owner-linux-xfs Wed Jul 28 01:55:16 2004
Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 01:55:27 -0700 (PDT)
Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6S8tGml019479 for ; Wed, 28 Jul 2004 01:55:16 -0700
Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6S8tG3L019478 for linux-xfs@oss.sgi.com; Wed, 28 Jul 2004 01:55:16 -0700
Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6S8tEsr019464 for ; Wed, 28 Jul 2004 01:55:14 -0700
Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6S8fON2019262; Wed, 28 Jul 2004 01:41:24 -0700
Date: Wed, 28 Jul 2004 01:41:24 -0700
Message-Id: <200407280841.i6S8fON2019262@oss.sgi.com>
From: bugzilla-daemon@oss.sgi.com
To: xfs-master@oss.sgi.com
Subject: [Bug 350] New: Starting XFS recovery never complete
X-Bugzilla-Reason: AssignedTo
X-archive-position: 3745
X-ecartis-version: Ecartis v1.0.0
Sender: linux-xfs-bounce@oss.sgi.com
Errors-to: linux-xfs-bounce@oss.sgi.com
X-original-sender: bugzilla-daemon@oss.sgi.com
Precedence: bulk
X-list: linux-xfs
Status: O
Content-Length: 2312
Lines: 60

http://oss.sgi.com/bugzilla/show_bug.cgi?id=350

Summary: Starting XFS recovery never complete
Product: Linux XFS
Version: unspecified
Platform: IA32
OS/Version: Linux
Status: NEW
Severity: critical
Priority: High
Component: XFS kernel code
AssignedTo: xfs-master@oss.sgi.com
ReportedBy: dbatchovski@technologica.com

Hello, I have a few servers with XFS, all running Debian. One of them, with
Debian Sarge, kernel 2.6.7 + xfsprogs 2.6.18, does strange things. After a
power loss yesterday, booting stops here and never continues. I waited
30-40 min, even 1 h, but nothing. It seems like XFS recovery can't finish?
initrd-tools: 0.1.70 vesafb: probe of vesafb0 failed with error -6 NET: Registered protocol family 1 md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27 md: raid1 personality registered as nr 3 Uniform Multi-Platform E-IDE driver Revision: 7.00alpha2 ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx VP_IDE: IDE controller at PCI slot 0000:00:07.1 VP_IDE: chipset revision 16 VP_IDE: not 100% native mode: will probe irqs later VP_IDE: VIA vt82c686a (rev 22) IDE UDMA66 controller on pci0000:00:07.1 ide0: BM-DMA at 0xd000-0xd007, BIOS settings: hda:DMA, hdb:pio ide1: BM-DMA at 0xd008-0xd00f, BIOS settings: hdc:DMA, hdd:pio hda: ST38410A, ATA DISK drive Using anticipatory io scheduler ide0 at 0x1f0-0x1f7,0x3f6 on irq 14 hdc: ST38410A, ATA DISK drive ide1 at 0x170-0x177,0x376 on irq 15 hda: max request size: 128KiB hda: 16841664 sectors (8622 MB) w/512KiB Cache, CHS=16708/16/63, UDMA(66) /dev/ide/host0/bus0/target0/lun0: p1 p2 p3 p4 hdc: max request size: 128KiB hdc: 16841664 sectors (8622 MB) w/512KiB Cache, CHS=16708/16/63, UDMA(66) /dev/ide/host0/bus1/target0/lun0: p1 p2 p3 p4 md: md0 stopped. md: bind md: bind raid1: raid set md0 active with 2 out of 2 mirrors mdadm: /devfs/md/0 has been started with 2 drives. SGI XFS with ACLs, security attributes, realtime, large block numbers, no debug enabled SGI XFS Quota Management subsystem XFS mounting filesystem md0 Starting XFS recovery on filesystem: md0 (dev: md0) ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. 
From owner-linux-xfs Wed Jul 28 03:37:20 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 03:37:38 -0700 (PDT) Received: from eik.ii.uib.no (eik.ii.uib.no [129.177.16.3]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6SAbJlB023704 for ; Wed, 28 Jul 2004 03:37:19 -0700 Received: from lapprose.ii.uib.no ([129.177.20.37]:38267) by eik.ii.uib.no with esmtp (TLSv1:AES256-SHA:256) (Exim 4.30) id 1Bplo4-00015z-ON for linux-xfs@oss.sgi.com; Wed, 28 Jul 2004 12:37:08 +0200 Received: (from jfm@localhost) by lapprose.ii.uib.no (8.12.11/8.12.11/Submit) id i6SAb83Z026145 for linux-xfs@oss.sgi.com; Wed, 28 Jul 2004 12:37:08 +0200 Date: Wed, 28 Jul 2004 12:37:08 +0200 From: Jan-Frode Myklebust To: linux-xfs@oss.sgi.com Subject: bugzilla vs. bugzilla Message-ID: <20040728103708.GA26088@ii.uib.no> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline X-archive-position: 3746 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: janfrode@parallab.uib.no Precedence: bulk X-list: linux-xfs Status: O Content-Length: 478 Lines: 15 I've reported a few xfs-kernel problems to the standard bugzilla.kernel.org lately, but the response from the XFS-team seems very slow/lacking. Ref: http://bugme.osdl.org/show_bug.cgi?id=2841 http://bugme.osdl.org/show_bug.cgi?id=2929 http://bugme.osdl.org/show_bug.cgi?id=3118 Am I using the wrong bugzilla? Should all XFS-problems go to http://oss.sgi.com/bugzilla ? I hope not, since it's often not clear to me if it's a NFSD (or other component) or XFS bug.. 
-jf From owner-linux-xfs Wed Jul 28 05:41:14 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 05:41:43 -0700 (PDT) Received: from gizmo01bw.bigpond.com (gizmo01bw.bigpond.com [144.140.70.11]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6SCfDxr030169 for ; Wed, 28 Jul 2004 05:41:13 -0700 Received: (qmail 19114 invoked from network); 28 Jul 2004 12:41:02 -0000 Received: from unknown (HELO BWMAM16.bigpond.com) (144.135.24.114) by gizmo01bw.bigpond.com with SMTP; 28 Jul 2004 12:41:02 -0000 Received: from c25.53.135.144.satellite.bigpond.com ([144.135.53.25]) by BWMAM16.bigpond.com(MAM REL_3_4_2a 261/38659034) with SMTP id 38659034; Wed, 28 Jul 2004 22:41:01 +1000 Message-ID: <002701c474a2$376bc470$0100a8c0@valleychase> From: "Jason Cole" To: "linux-xfs" Subject: Fedora XFS out of box! Date: Wed, 28 Jul 2004 20:55:49 +0800 MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2800.1158 X-MIMEOLE: Produced By Microsoft MimeOLE V6.00.2800.1165 X-archive-position: 3747 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: colejp@bigpond.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 185 Lines: 8 Mike Burger Wrote: >From may reports here, and even the Fedora Unofficial FAQ, you can just >boot the FC2 CD with "linux xfs" and be on your way. I tryed this and nothing happend? 
From owner-linux-xfs Wed Jul 28 05:47:11 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 05:47:23 -0700 (PDT) Received: from gizmo07bw.bigpond.com (gizmo07bw.bigpond.com [144.140.70.42]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6SClARq030565 for ; Wed, 28 Jul 2004 05:47:11 -0700 Received: (qmail 17663 invoked from network); 28 Jul 2004 12:46:59 -0000 Received: from unknown (HELO BWMAM16.bigpond.com) (144.135.24.114) by gizmo07bw.bigpond.com with SMTP; 28 Jul 2004 12:46:59 -0000 Received: from c25.53.135.144.satellite.bigpond.com ([144.135.53.25]) by BWMAM16.bigpond.com(MAM REL_3_4_2a 261/38661828) with SMTP id 38661828; Wed, 28 Jul 2004 22:46:58 +1000 Message-ID: <002801c474a3$0c9fb160$0100a8c0@valleychase> From: "Jason Cole" To: "linux-xfs" Subject: Stupid mkfs question Date: Wed, 28 Jul 2004 21:01:47 +0800 MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2800.1158 X-MIMEOLE: Produced By Microsoft MimeOLE V6.00.2800.1165 X-archive-position: 3748 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: colejp@bigpond.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 429 Lines: 23 I know this will sound stupid but, I have just installed fedora core 2, and installed xfsprogs thats ok, now I want to create a XFS partition, I have about 20g free on my drive. 
(note: /dev/hda1-3 are used)

So typically (as far as I can read) I do: mkfs -t xfs /dev/hda4

but get:

mkfs.xfs: can not open /dev/hda4: can't find device or address

So my question is: how do I make the device or partition /dev/hda4?

Thanks,
Jason

From owner-linux-xfs Wed Jul 28 05:55:33 2004
Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 05:55:46 -0700 (PDT)
Received: from mail.linux-sxs.org (mail.linux-sxs.org [207.218.156.196]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6SCtVw7030927 for ; Wed, 28 Jul 2004 05:55:32 -0700
Received: from mail.linux-sxs.org (localhost [127.0.0.1]) by mail.linux-sxs.org (8.12.11/8.12.11/Debian-5) with ESMTP id i6SCP2WR014016; Wed, 28 Jul 2004 07:25:03 -0500
Received: from localhost (netllama@localhost) by mail.linux-sxs.org (8.12.11/8.12.11/Debian-5) with ESMTP id i6SCOuLh014010; Wed, 28 Jul 2004 07:25:02 -0500
Date: Wed, 28 Jul 2004 07:24:56 -0500 (EST)
From: "L. Friedman"
To: Jason Cole
cc: linux-xfs
Subject: Re: Fedora XFS out of box!
In-Reply-To: <002701c474a2$376bc470$0100a8c0@valleychase>
Message-ID:
References: <002701c474a2$376bc470$0100a8c0@valleychase>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
X-archive-position: 3749
X-ecartis-version: Ecartis v1.0.0
Sender: linux-xfs-bounce@oss.sgi.com
Errors-to: linux-xfs-bounce@oss.sgi.com
X-original-sender: netllama@mail.linux-sxs.org
Precedence: bulk
X-list: linux-xfs
Status: O
Content-Length: 460
Lines: 15

On Wed, 28 Jul 2004, Jason Cole wrote:

> Mike Burger Wrote:
>
> >From may reports here, and even the Fedora Unofficial FAQ, you can just
> >boot the FC2 CD with "linux xfs" and be on your way.
>
> I tryed this and nothing happend?

meaning?
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lonni J Friedman netllama@linux-sxs.org Linux Step-by-step & TyGeMo http://netllama.ipfox.com From owner-linux-xfs Wed Jul 28 05:57:25 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 05:57:37 -0700 (PDT) Received: from mail.linux-sxs.org (mail.linux-sxs.org [207.218.156.196]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6SCvN0u031199 for ; Wed, 28 Jul 2004 05:57:23 -0700 Received: from mail.linux-sxs.org (localhost [127.0.0.1]) by mail.linux-sxs.org (8.12.11/8.12.11/Debian-5) with ESMTP id i6SCQtp3014044; Wed, 28 Jul 2004 07:26:55 -0500 Received: from localhost (netllama@localhost) by mail.linux-sxs.org (8.12.11/8.12.11/Debian-5) with ESMTP id i6SCQsA1014041; Wed, 28 Jul 2004 07:26:54 -0500 Date: Wed, 28 Jul 2004 07:26:54 -0500 (EST) From: "L. Friedman" To: Jason Cole cc: linux-xfs Subject: Re: Stupid mkfs question In-Reply-To: <002801c474a3$0c9fb160$0100a8c0@valleychase> Message-ID: References: <002801c474a3$0c9fb160$0100a8c0@valleychase> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 3750 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: netllama@mail.linux-sxs.org Precedence: bulk X-list: linux-xfs Status: O Content-Length: 736 Lines: 26 On Wed, 28 Jul 2004, Jason Cole wrote: > I know this will sound stupid but, > > I have just installed fedora core 2, and installed xfsprogs > > thats ok, > > now I want to create a XFS partition, I have about 20g free on my drive. > > (note: /dev/hda1 - 3 are used) > > so typicaly (as far I can read) I do mkfs -t xfs /dev/hda4 > > but get > > mkfs.xfs: can not open /dev/hda4: can't find device or address > > so my question is how do i make the device or partion /dev/hda4 fdisk, parted, cfdisk. take your pick. 
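As an aside to the mkfs question above: mkfs.xfs can also format an ordinary file, which is a safe way to experiment before pointing fdisk or parted at a real disk. This is a sketch of my own, not from the thread; it assumes xfsprogs is installed (and skips gracefully if not), and the image path is made up:

```shell
#!/bin/sh
# Make an XFS filesystem inside a plain file instead of /dev/hda4,
# so a typo cannot wipe a real partition table.
set -e
IMG=/tmp/xfs-test.img                  # hypothetical image path

# 512 MB sparse file: current mkfs.xfs refuses very small filesystems,
# and sparse allocation means it uses almost no real disk space
dd if=/dev/zero of="$IMG" bs=1 count=0 seek=536870912 2>/dev/null
echo "image ready: $IMG"

# format it if xfsprogs is available; -f forces mkfs onto a plain file
if command -v mkfs.xfs >/dev/null 2>&1; then
    mkfs.xfs -f "$IMG"
else
    echo "mkfs.xfs not found; install xfsprogs first"
fi
# afterwards it can be mounted (as root) with: mount -o loop "$IMG" /mnt/test
```

For a real partition the flow the reply suggests still applies: create /dev/hda4 with fdisk, parted, or cfdisk first, then run mkfs.xfs on it.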
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lonni J Friedman netllama@linux-sxs.org
Linux Step-by-step & TyGeMo http://netllama.ipfox.com

From owner-linux-xfs Wed Jul 28 06:01:58 2004
Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 06:02:10 -0700 (PDT)
Received: from gizmo09bw.bigpond.com (gizmo09bw.bigpond.com [144.140.70.19]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6SD1vKT031570 for ; Wed, 28 Jul 2004 06:01:58 -0700
Received: (qmail 9502 invoked from network); 28 Jul 2004 13:01:46 -0000
Received: from unknown (HELO BWMAM16.bigpond.com) (144.135.24.114) by gizmo09bw.bigpond.com with SMTP; 28 Jul 2004 13:01:46 -0000
Received: from c25.53.135.144.satellite.bigpond.com ([144.135.53.25]) by BWMAM16.bigpond.com(MAM REL_3_4_2a 261/38668822) with SMTP id 38668822; Wed, 28 Jul 2004 23:01:45 +1000
Message-ID: <008e01c474a5$1d3672f0$0100a8c0@valleychase>
From: "Jason Cole"
To: "linux-xfs"
Subject: Re: Fedora XFS out of box!
Date: Wed, 28 Jul 2004 21:16:34 +0800
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
X-Priority: 3
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook Express 6.00.2800.1158
X-MIMEOLE: Produced By Microsoft MimeOLE V6.00.2800.1165
X-archive-position: 3751
X-ecartis-version: Ecartis v1.0.0
Sender: linux-xfs-bounce@oss.sgi.com
Errors-to: linux-xfs-bounce@oss.sgi.com
X-original-sender: colejp@bigpond.com
Precedence: bulk
X-list: linux-xfs
Status: O
Content-Length: 348
Lines: 15

When installing Fedora, supposedly passing "linux xfs" as a boot parameter
will give XFS support during the install (so that the partitions are XFS,
not EXT3), but it does not.

So does anyone know how to get XFS this way, or am I wasting my time....

(It just means I manually install xfsprogs and try to convert the system over..)
Thanks Jason From owner-linux-xfs Wed Jul 28 06:10:51 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 06:11:12 -0700 (PDT) Received: from mail.linux-sxs.org (mail.linux-sxs.org [207.218.156.196]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6SDAond032096 for ; Wed, 28 Jul 2004 06:10:51 -0700 Received: from mail.linux-sxs.org (localhost [127.0.0.1]) by mail.linux-sxs.org (8.12.11/8.12.11/Debian-5) with ESMTP id i6SCeHfP014138; Wed, 28 Jul 2004 07:40:17 -0500 Received: from localhost (netllama@localhost) by mail.linux-sxs.org (8.12.11/8.12.11/Debian-5) with ESMTP id i6SCeHLe014135; Wed, 28 Jul 2004 07:40:17 -0500 Date: Wed, 28 Jul 2004 07:40:17 -0500 (EST) From: "L. Friedman" To: Jason Cole cc: linux-xfs Subject: Re: Fedora XFS out of box! In-Reply-To: <008e01c474a5$1d3672f0$0100a8c0@valleychase> Message-ID: References: <008e01c474a5$1d3672f0$0100a8c0@valleychase> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 3752 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: netllama@mail.linux-sxs.org Precedence: bulk X-list: linux-xfs Status: O Content-Length: 657 Lines: 16 On Wed, 28 Jul 2004, Jason Cole wrote: > When installing fedora, supposly passing a linux xfs as a boot parameter > will give xfs support during install of fedora (so that the partions are XFS > not EXT3), but it does not. > > So does anyone know how to get XFS this way or am I wasting my time.... > > (Just means I manualy install xfsprogs and try to convert the system over..) This works just fine for FC2. It will not work for FC1. 
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lonni J Friedman netllama@linux-sxs.org Linux Step-by-step & TyGeMo http://netllama.ipfox.com From owner-linux-xfs Wed Jul 28 06:25:41 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 06:25:54 -0700 (PDT) Received: from gizmo06bw.bigpond.com (gizmo06bw.bigpond.com [144.140.70.41]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6SDPd4L002923 for ; Wed, 28 Jul 2004 06:25:40 -0700 Received: (qmail 15533 invoked from network); 28 Jul 2004 13:25:28 -0000 Received: from unknown (HELO BWMAM16.bigpond.com) (144.135.24.114) by gizmo06bw.bigpond.com with SMTP; 28 Jul 2004 13:25:28 -0000 Received: from c25.53.135.144.satellite.bigpond.com ([144.135.53.25]) by BWMAM16.bigpond.com(MAM REL_3_4_2a 261/38678899) with SMTP id 38678899; Wed, 28 Jul 2004 23:25:27 +1000 Message-ID: <00c701c474a8$6c98af90$0100a8c0@valleychase> From: "Jason Cole" To: "linux-xfs" Subject: Re: Fedora XFS out of box! Date: Wed, 28 Jul 2004 21:40:15 +0800 MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2800.1158 X-MIMEOLE: Produced By Microsoft MimeOLE V6.00.2800.1165 X-archive-position: 3753 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: colejp@bigpond.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 327 Lines: 13 I just tried to use parted and wiped my partions! So back to square 1, what a night.... 
So I tried the "linux xfs" option at boot again on Fedora Core 2 and found
the XFS option (my mistake for not looking properly and assuming it would
select it over EXT3 because I passed the option).

Thank you for your help,
Jason

From owner-linux-xfs Wed Jul 28 08:39:37 2004
Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 08:39:56 -0700 (PDT)
Received: from omx1.americas.sgi.com (omx1-ext.sgi.com [192.48.179.11]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6SFdaP8009882 for ; Wed, 28 Jul 2004 08:39:37 -0700
Received: from flecktone.americas.sgi.com (flecktone.americas.sgi.com [192.48.203.135]) by omx1.americas.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with ESMTP id i6SFdX0f016824 for ; Wed, 28 Jul 2004 10:39:33 -0500
Received: from poppy-e236.americas.sgi.com (poppy-e236.americas.sgi.com [128.162.236.207]) by flecktone.americas.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id i6SFdWOV43869449; Wed, 28 Jul 2004 10:39:32 -0500 (CDT)
Received: from penguin.americas.sgi.com (penguin.americas.sgi.com [128.162.240.135]) by poppy-e236.americas.sgi.com (8.12.9/SGI-server-1.8) with ESMTP id i6SFdW3b10467751; Wed, 28 Jul 2004 10:39:32 -0500 (CDT)
Date: Wed, 28 Jul 2004 10:39:03 -0500 (CDT)
From: Eric Sandeen
X-X-Sender: sandeen@penguin.americas.sgi.com
To: Jan-Frode Myklebust
cc: linux-xfs@oss.sgi.com
Subject: Re: bugzilla vs. bugzilla
In-Reply-To: <20040728103708.GA26088@ii.uib.no>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
X-archive-position: 3754
X-ecartis-version: Ecartis v1.0.0
Sender: linux-xfs-bounce@oss.sgi.com
Errors-to: linux-xfs-bounce@oss.sgi.com
X-original-sender: sandeen@sgi.com
Precedence: bulk
X-list: linux-xfs
Status: O
Content-Length: 976
Lines: 32

Either one should be fine; we get email from both. If you're not
certain it's an xfs bug, then osdl.org would be a better choice.
Regarding speed of response, it's mostly an issue of time & resources, although your bug 2929 was resolved in 1 day, I think, and the last one was just bumped over to xfs from nfs, IIRC. FWIW, it looks like you're getting memory allocation failures in some of these cases. -Eric On Wed, 28 Jul 2004, Jan-Frode Myklebust wrote: > I've reported a few xfs-kernel problems to the standard > bugzilla.kernel.org lately, but the response from the XFS-team seems > very slow/lacking. Ref: > > http://bugme.osdl.org/show_bug.cgi?id=2841 > http://bugme.osdl.org/show_bug.cgi?id=2929 > http://bugme.osdl.org/show_bug.cgi?id=3118 > > Am I using the wrong bugzilla? Should all XFS-problems go to > http://oss.sgi.com/bugzilla ? I hope not, since it's often not clear > to me if it's a NFSD (or other component) or XFS bug.. > > > -jf > > From owner-linux-xfs Wed Jul 28 08:55:17 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 08:55:45 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6SFtHr6011282 for ; Wed, 28 Jul 2004 08:55:17 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6SFtH2C011279 for linux-xfs@oss.sgi.com; Wed, 28 Jul 2004 08:55:17 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6SFtGn5011252 for ; Wed, 28 Jul 2004 08:55:16 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6SFW3jl009467; Wed, 28 Jul 2004 08:32:03 -0700 Date: Wed, 28 Jul 2004 08:32:03 -0700 Message-Id: <200407281532.i6SFW3jl009467@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 350] Starting XFS recovery never complete X-Bugzilla-Reason: AssignedTo X-archive-position: 3755 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: O 
Content-Length: 593 Lines: 20

http://oss.sgi.com/bugzilla/show_bug.cgi?id=350

------- Additional Comments From sandeen@sgi.com 2004-28-07 08:32 PDT -------

If you have kdb in the kernel, you could backtrace the mount process and
see where it's stuck. If no kdb, then sysrq-t would also be helpful.

xfs_logprint -t might also be useful.

As a last resort, you could zero the log and repair, possibly losing some
information in the process, but it'd be better to figure out why this is
hanging.

------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.

From owner-linux-xfs Wed Jul 28 08:55:17 2004
Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 08:55:46 -0700 (PDT)
Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6SFtHt5011281 for ; Wed, 28 Jul 2004 08:55:17 -0700
Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6SFtHcN011280 for linux-xfs@oss.sgi.com; Wed, 28 Jul 2004 08:55:17 -0700
Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6SFtGn1011252 for ; Wed, 28 Jul 2004 08:55:16 -0700
Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6SFEeRc005865; Wed, 28 Jul 2004 08:14:40 -0700
Date: Wed, 28 Jul 2004 08:14:40 -0700
Message-Id: <200407281514.i6SFEeRc005865@oss.sgi.com>
From: bugzilla-daemon@oss.sgi.com
To: xfs-master@oss.sgi.com
Subject: [Bug 350] Starting XFS recovery never complete
X-Bugzilla-Reason: AssignedTo
X-archive-position: 3756
X-ecartis-version: Ecartis v1.0.0
Sender: linux-xfs-bounce@oss.sgi.com
Errors-to: linux-xfs-bounce@oss.sgi.com
X-original-sender: bugzilla-daemon@oss.sgi.com
Precedence: bulk
X-list: linux-xfs
Status: O
Content-Length: 566
Lines: 20

http://oss.sgi.com/bugzilla/show_bug.cgi?id=350

------- Additional Comments From dbatchovski@technologica.com 2004-28-07 08:14 PDT -------

Nice XFS.
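Eric's escalation path for a hung log recovery (task backtrace, then inspect the log, then zero it only as a last resort) can be written down as a small helper. This is a dry-run sketch of my own: it only prints the commands, since xfs_logprint and xfs_repair -L must be pointed at a real, unmounted device; DEV is a placeholder taken from the bug report:

```shell
#!/bin/sh
# Dry-run of the diagnostic sequence for a mount stuck in log recovery:
# 1. sysrq-t (if no kdb) to see where the mount process is blocked
# 2. xfs_logprint -t to inspect the dirty log's tail
# 3. LAST RESORT: xfs_repair -L, which zeroes the log and may lose data
DEV=${1:-/dev/md0}                # placeholder device from the report

run() { echo "would run: $*"; }   # replace the echo to execute for real

run "echo t > /proc/sysrq-trigger"   # dump all task backtraces to dmesg
run xfs_logprint -t "$DEV"           # print log records without modifying them
run xfs_repair -L "$DEV"             # only if all else fails: zeroes the log
```

The ordering matters: xfs_logprint only reads, sysrq-t only reports, but xfs_repair -L discards whatever the journal still held, which is exactly the data-loss trade-off the comment warns about.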
Booting from Knoppix, I did xfs_repair -L /dev/md0 and everything is OK.
Only 1 lost file.

Some additional info: I have 3 file systems, with raid1: md0, md1, md2.
Only md0 had errors due to the power loss. And this is not the first power
loss, but this never happened before; XFS always recovered excellently.

------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.

From owner-linux-xfs Wed Jul 28 09:32:31 2004
Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 09:32:43 -0700 (PDT)
Received: from eik.ii.uib.no (eik.ii.uib.no [129.177.16.3]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6SGWUmJ014429 for ; Wed, 28 Jul 2004 09:32:31 -0700
Received: from lapprose.ii.uib.no ([129.177.20.37]:38764) by eik.ii.uib.no with esmtp (TLSv1:AES256-SHA:256) (Exim 4.30) id 1BprLp-000064-4Z; Wed, 28 Jul 2004 18:32:21 +0200
Received: (from jfm@localhost) by lapprose.ii.uib.no (8.12.11/8.12.11/Submit) id i6SGWKGv029106; Wed, 28 Jul 2004 18:32:20 +0200
Date: Wed, 28 Jul 2004 18:32:20 +0200
From: Jan-Frode Myklebust
To: Eric Sandeen
Cc: linux-xfs@oss.sgi.com
Subject: Re: bugzilla vs. bugzilla
Message-ID: <20040728163220.GA28915@ii.uib.no>
References: <20040728103708.GA26088@ii.uib.no>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
X-archive-position: 3757
X-ecartis-version: Ecartis v1.0.0
Sender: linux-xfs-bounce@oss.sgi.com
Errors-to: linux-xfs-bounce@oss.sgi.com
X-original-sender: janfrode@parallab.uib.no
Precedence: bulk
X-list: linux-xfs
Status: O
Content-Length: 994
Lines: 27

On Wed, Jul 28, 2004 at 10:39:03AM -0500, Eric Sandeen wrote:

> either one should be fine, we get email from both. If you're not
> certain it's an xfs bug, then osdl.org would be a better choice.

OK, good.

> Regarding speed of response, it's mostly an issue of time & resources,
> although your bug 2929 was resolved in 1 day, I think, and the last
> one was just bumped over to xfs from nfs, IIRC.
Yes, bug 2929 was quick. While 2841 hasn't gotten any response in ~2 months, and now 3118 hasn't had any response since Trond moved it to XFS 3 days ago. Unless I get timely response, I'm forced to bet on luck and jump to the latest release/pre-release hoping it's fixed there, or convert to another fs :( BTW: Together with bug 2840 these problems have oopsed my server about every second week since I installed it with 2.6.6. > FWIW, it looks like you're getting memory allocation failures > in some of these cases. Meaning xfs problems, other kernel component error, or hw? -jf From owner-linux-xfs Wed Jul 28 11:09:37 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 11:09:51 -0700 (PDT) Received: from omx2.sgi.com (omx2-ext.sgi.com [192.48.171.19]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6SI9aRU017463 for ; Wed, 28 Jul 2004 11:09:37 -0700 Received: from flecktone.americas.sgi.com (flecktone.americas.sgi.com [192.48.203.135]) by omx2.sgi.com (8.12.11/8.12.9/linux-outbound_gateway-1.1) with ESMTP id i6SJAWG2024051 for ; Wed, 28 Jul 2004 12:10:33 -0700 Received: from poppy-e236.americas.sgi.com (poppy-e236.americas.sgi.com [128.162.236.207]) by flecktone.americas.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id i6SI9XOV43813379; Wed, 28 Jul 2004 13:09:33 -0500 (CDT) Received: from [128.162.232.50] (stout.americas.sgi.com [128.162.232.50]) by poppy-e236.americas.sgi.com (8.12.9/SGI-server-1.8) with ESMTP id i6SI9W3a10511729; Wed, 28 Jul 2004 13:09:32 -0500 (CDT) Subject: Re: bugzilla vs. 
bugzilla From: Eric Sandeen To: Jan-Frode Myklebust Cc: linux-xfs@oss.sgi.com In-Reply-To: <20040728163220.GA28915@ii.uib.no> References: <20040728103708.GA26088@ii.uib.no> <20040728163220.GA28915@ii.uib.no> Content-Type: text/plain Organization: Eric Conspiracy Secret Labs Message-Id: <1091038172.7002.12.camel@stout.americas.sgi.com> Mime-Version: 1.0 X-Mailer: Ximian Evolution 1.4.6 (1.4.6-2) Date: Wed, 28 Jul 2004 13:09:32 -0500 Content-Transfer-Encoding: 7bit X-archive-position: 3758 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: sandeen@sgi.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 671 Lines: 19 On Wed, 2004-07-28 at 11:32, Jan-Frode Myklebust wrote: > > FWIW, it looks like you're getting memory allocation failures > > in some of these cases. > > Meaning xfs problems, other kernel component error, or hw? Well, xfs is not good at failing memory allocations - irix would happily wait forever for memory, rather than failing. Linux 2.6 should now also have this feature (allocations that can wait forever) but for a while (IIRC) there was a problem where memory allocation -could- fail, xfs doesn't check for this, and kablooey. -Eric -- Eric Sandeen [C]XFS for Linux http://oss.sgi.com/projects/xfs sandeen@sgi.com SGI, Inc. 
651-683-3102 From owner-linux-xfs Wed Jul 28 17:35:55 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 17:37:17 -0700 (PDT) Received: from omx1.americas.sgi.com (omx1-ext.sgi.com [192.48.179.11]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6T0Yc8O006223 for ; Wed, 28 Jul 2004 17:35:52 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by omx1.americas.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with SMTP id i6T0YS0f018734 for ; Wed, 28 Jul 2004 19:34:29 -0500 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA15236; Thu, 29 Jul 2004 10:34:25 +1000 Received: from frodo.melbourne.sgi.com (root@frodo.melbourne.sgi.com [134.14.55.153]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i6T0YMln2706111; Thu, 29 Jul 2004 10:34:22 +1000 (EST) Received: from frodo.melbourne.sgi.com (nathans@localhost [127.0.0.1]) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) with ESMTP id i6T1Utgh001103; Thu, 29 Jul 2004 11:30:56 +1000 Received: (from nathans@localhost) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) id i6T1UnMs001101; Thu, 29 Jul 2004 11:30:49 +1000 Date: Thu, 29 Jul 2004 11:30:49 +1000 From: Nathan Scott To: Norberto Bensa , L A Walsh Cc: linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com Subject: Re: XFS: how to NOT null files on fsck? 
Message-ID: <20040729013049.GE800@frodo> References: <200407050247.53743.norberto+linux-kernel@bensa.ath.cx> <40EEC9DC.8080501@tlinx.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <40EEC9DC.8080501@tlinx.org> User-Agent: Mutt/1.5.3i X-archive-position: 3759 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 862 Lines: 27 On Fri, Jul 09, 2004 at 09:37:48AM -0700, L A Walsh wrote: > It's a feature! :-) > > It's been in the code for years to randomly write nulls to some files Pfft, nonsense. The problem relates to an updated inode size being flushed ahead of the data behind it (hence a size update can make it out before delayed allocate extents do, and we end up with a hole beyond the end of file, which reads as zeroes). > Apparently not easily reproduced, no one has a clue why it does it. > Just does. No, it's actually well known why it behaves this way. We are looking into ways to address this, and have some ideas - the trick is fixing it without hurting write performance - which we will do, it's just not trivial. There are several techniques to reduce the impact of this behaviour, as others have described (or see the linux-xfs archives). cheers.
-- Nathan From owner-linux-xfs Wed Jul 28 20:43:57 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 20:44:05 -0700 (PDT) Received: from visualfx.animezone.org (CPE006097a16e12-CM400026313227.cpe.net.cable.rogers.com [24.101.19.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6T3hs9j014790 for ; Wed, 28 Jul 2004 20:43:56 -0700 Received: from animezone.org (picasso.animezone.org [192.168.68.5]) by visualfx.animezone.org (8.12.8/8.12.8) with ESMTP id i6T3dmJM012894 for ; Wed, 28 Jul 2004 23:39:52 -0400 Message-ID: <4108726A.1090408@animezone.org> Date: Wed, 28 Jul 2004 23:43:38 -0400 From: Andrew Ho User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040510 X-Accept-Language: en-us, en MIME-Version: 1.0 To: linux-xfs Subject: XFS installer on RHEL3.0 Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Scanned-By: MIMEDefang 2.43 X-archive-position: 3760 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: andrewho@animezone.org Precedence: bulk X-list: linux-xfs Status: O Content-Length: 89 Lines: 8 Hi, Can I find the XFS installer on Redhat Enterprise Linux WS v3. 
Thanks, Andrew Ho From owner-linux-xfs Wed Jul 28 21:50:25 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 21:50:44 -0700 (PDT) Received: from omx2.sgi.com (omx2-ext.SGI.COM [192.48.171.19]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6T4oOk9016245 for ; Wed, 28 Jul 2004 21:50:24 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by omx2.sgi.com (8.12.11/8.12.9/linux-outbound_gateway-1.1) with SMTP id i6T5pMqu026502 for ; Wed, 28 Jul 2004 22:51:23 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA20284; Thu, 29 Jul 2004 14:50:15 +1000 Received: from frodo.melbourne.sgi.com (root@frodo.melbourne.sgi.com [134.14.55.153]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i6T4o9ln2678055; Thu, 29 Jul 2004 14:50:11 +1000 (EST) Received: from frodo.melbourne.sgi.com (nathans@localhost [127.0.0.1]) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) with ESMTP id i6T5khgh001864; Thu, 29 Jul 2004 15:46:44 +1000 Received: (from nathans@localhost) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) id i6T5kcsR001862; Thu, 29 Jul 2004 15:46:38 +1000 Date: Thu, 29 Jul 2004 15:46:38 +1000 From: Nathan Scott To: Bernhard Erdmann Cc: linux-xfs@oss.sgi.com Subject: Re: building xfsprogs (CVS): mmap.c:627: `MADV_NORMAL' undeclared Message-ID: <20040729054638.GK800@frodo> References: <40FB2417.7030406@berdmann.de> <4774.1090202489@kao2.melbourne.sgi.com> <20040720015826.A2406645@wobbly.melbourne.sgi.com> <40FDFBF2.7020500@berdmann.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <40FDFBF2.7020500@berdmann.de> User-Agent: Mutt/1.5.3i X-archive-position: 3761 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs 
Status: O Content-Length: 549 Lines: 17 On Wed, Jul 21, 2004 at 07:15:30AM +0200, Bernhard Erdmann wrote: > Nathan Scott wrote: > [...] > >mmap.c already includes that header - what version of the glibc > >headers are you using there Bernhard? (which distribution, and > >which version?) > > It's glibc 2.1.3-29 from redhat 6.2 updates: Does your /usr/include/{bits,sys}/mman.h have those missing macros? If so, what cpp macro is guarding them? (mail me the file please) If not, can you grep below /usr/include for them and let me know which header defines them? thanks. -- Nathan From owner-linux-xfs Wed Jul 28 21:58:33 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 21:58:37 -0700 (PDT) Received: from omx2.sgi.com (omx2-ext.SGI.COM [192.48.171.19]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6T4wW0P016718 for ; Wed, 28 Jul 2004 21:58:32 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by omx2.sgi.com (8.12.11/8.12.9/linux-outbound_gateway-1.1) with SMTP id i6T5xU0P028860 for ; Wed, 28 Jul 2004 22:59:31 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA20379; Thu, 29 Jul 2004 14:58:20 +1000 Received: from frodo.melbourne.sgi.com (root@frodo.melbourne.sgi.com [134.14.55.153]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i6T4wFln2708550; Thu, 29 Jul 2004 14:58:16 +1000 (EST) Received: from frodo.melbourne.sgi.com (nathans@localhost [127.0.0.1]) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) with ESMTP id i6T5sogh001902; Thu, 29 Jul 2004 15:54:50 +1000 Received: (from nathans@localhost) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) id i6T5sbxx001900; Thu, 29 Jul 2004 15:54:37 +1000 Date: Thu, 29 Jul 2004 15:54:37 +1000 From: Nathan Scott To: L A Walsh , Chris Wedgwood Cc: Linux-Kernel , linux-xfs@oss.sgi.com Subject: Re: 2.6.7-vanilla-SMP kernel: pagebuf_get: failed 
to lookup pages Message-ID: <20040729055437.GL800@frodo> References: <40FF0479.6050509@tlinx.org> <20040722001224.GC30595@taniwha.stupidest.org> <40FF0885.7060704@tlinx.org> <20040722003357.GA31163@taniwha.stupidest.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040722003357.GA31163@taniwha.stupidest.org> User-Agent: Mutt/1.5.3i X-archive-position: 3762 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 706 Lines: 24 On Wed, Jul 21, 2004 at 05:33:57PM -0700, Chris Wedgwood wrote: > On Wed, Jul 21, 2004 at 05:21:25PM -0700, L A Walsh wrote: > > > Will this be included/fixed in 2.6.8? > > i assume that's the intention but i don't know when 2.6.8 is and how > much time the sgi people have before then. my guess is yes though The fix has been included in the 2.6.8-pre/rc kernels for some time now, so yes it'll be in 2.6.8. > > How serious is the problem? The system doesn't seem to panic or > > indicate backup failures. > > not sure, hch can you comment here maybe? This leaked locked pages on metadata readahead failure (which could occur when free memory becomes low), which is serious. cheers. 
-- Nathan From owner-linux-xfs Wed Jul 28 22:13:13 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 22:13:37 -0700 (PDT) Received: from omx2.sgi.com (omx2-ext.SGI.COM [192.48.171.19]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6T5DCBo017542; Wed, 28 Jul 2004 22:13:13 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by omx2.sgi.com (8.12.11/8.12.9/linux-outbound_gateway-1.1) with SMTP id i6T6EB8G000386; Wed, 28 Jul 2004 23:14:12 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA20644; Thu, 29 Jul 2004 15:12:59 +1000 Received: from frodo.melbourne.sgi.com (root@frodo.melbourne.sgi.com [134.14.55.153]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i6T5Cnln2713669; Thu, 29 Jul 2004 15:12:51 +1000 (EST) Received: from frodo.melbourne.sgi.com (nathans@localhost [127.0.0.1]) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) with ESMTP id i6T69Ggh001974; Thu, 29 Jul 2004 16:09:17 +1000 Received: (from nathans@localhost) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) id i6T6902i001972; Thu, 29 Jul 2004 16:09:00 +1000 Date: Thu, 29 Jul 2004 16:09:00 +1000 From: Nathan Scott To: Adrian Bunk Cc: "Jeffrey E. 
Hundstad" , Linus Torvalds , Andrew Morton , Steve Lord , linux-xfs@oss.sgi.com, xfs-masters@oss.sgi.com, Cahya Wirawan , linux-kernel@vger.kernel.org Subject: Re: [2.6 patch] let 4KSTACKS depend on EXPERIMENTAL and XFS on 4KSTACKS=n Message-ID: <20040729060900.GA1946@frodo> References: <20040720114418.GH21918@email.archlab.tuwien.ac.at> <40FD0A61.1040503@xfs.org> <40FD2E99.20707@mnsu.edu> <20040720195012.GN14733@fs.tum.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040720195012.GN14733@fs.tum.de> User-Agent: Mutt/1.5.3i X-archive-position: 3763 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 859 Lines: 32 On Tue, Jul 20, 2004 at 09:50:12PM +0200, Adrian Bunk wrote: > > The patch below does: > > 1. let 4KSTACKS depend on EXPERIMENTAL > Rationale: > 4Kb stacks on i386 are the future. But currently this option might still > cause problems in some areas of the kernel. OTOH, 4Kb stacks isn't a big > gain for most people. > 2.6 is a stable kernel series, and 4KSTACKS=n is the safe choice. > Once all issues with 4KSTACKS=y are resolved this can be reverted. Seems fine. > 2. let XFS depend on (4KSTACKS=n || BROKEN) > Rationale: > Mark Loy said: > Don't use 4K stacks and XFS. Who is Mark Loy? (and what does he know about XFS?) > Mark this combination as BROKEN until XFS is fixed. This part is not useful. We want to hear about problems that people hit with 4K stacks so we can try to address them, and it mostly works as is. cheers. 
-- Nathan From owner-linux-xfs Wed Jul 28 23:46:56 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 28 Jul 2004 23:47:08 -0700 (PDT) Received: from mail.epost.de (mail.epost.de [193.28.100.151]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6T6ktsV019464 for ; Wed, 28 Jul 2004 23:46:56 -0700 Received: from [172.27.18.21] (217.110.232.6) by mail.epost.de (7.1.026.1) (authenticated as rainer.traut) id 410837B100006E0D; Thu, 29 Jul 2004 08:46:33 +0200 Message-ID: <41089D48.7060907@epost.de> Date: Thu, 29 Jul 2004 08:46:32 +0200 From: Rainer Traut User-Agent: Mozilla Thunderbird 0.7+ (Windows/20040725) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Jan-Frode Myklebust CC: linux-xfs@oss.sgi.com Subject: Re: bugzilla vs. bugzilla References: <20040728103708.GA26088@ii.uib.no> In-Reply-To: <20040728103708.GA26088@ii.uib.no> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3764 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: rainer.traut@epost.de Precedence: bulk X-list: linux-xfs Status: O Content-Length: 801 Lines: 26 Hi, Jan-Frode Myklebust wrote: > I've reported a few xfs-kernel problems to the standard > bugzilla.kernel.org lately, but the response from the XFS-team seems > very slow/lacking. Ref: > > http://bugme.osdl.org/show_bug.cgi?id=2841 > http://bugme.osdl.org/show_bug.cgi?id=2929 > http://bugme.osdl.org/show_bug.cgi?id=3118 > > Am I using the wrong bugzilla? Should all XFS-problems go to > http://oss.sgi.com/bugzilla ? I hope not, since it's often not clear > to me if it's a NFSD (or other component) or XFS bug.. > It seems you're running a RH Enterprise 3 clone? From what I read on the taroon mailing list, all the userland tools are not ready for kernel 2.6 yet and the developers do not recommend doing this. Have you considered this? Maybe it's working fine though. 
Gruss Rainer From owner-linux-xfs Thu Jul 29 00:01:57 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 29 Jul 2004 00:02:19 -0700 (PDT) Received: from eik.ii.uib.no (eik.ii.uib.no [129.177.16.3]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6T71upw020014 for ; Thu, 29 Jul 2004 00:01:57 -0700 Received: from lapprose.ii.uib.no ([129.177.20.37]:39704) by eik.ii.uib.no with esmtp (TLSv1:AES256-SHA:256) (Exim 4.30) id 1Bq4vC-0005NO-L8; Thu, 29 Jul 2004 09:01:46 +0200 Received: (from jfm@localhost) by lapprose.ii.uib.no (8.12.11/8.12.11/Submit) id i6T71kJc003713; Thu, 29 Jul 2004 09:01:46 +0200 Date: Thu, 29 Jul 2004 09:01:46 +0200 From: Jan-Frode Myklebust To: Rainer Traut Cc: linux-xfs@oss.sgi.com Subject: Re: bugzilla vs. bugzilla Message-ID: <20040729070146.GA3559@ii.uib.no> References: <20040728103708.GA26088@ii.uib.no> <41089D48.7060907@epost.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <41089D48.7060907@epost.de> X-archive-position: 3765 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: janfrode@parallab.uib.no Precedence: bulk X-list: linux-xfs Status: O Content-Length: 786 Lines: 26 On Thu, Jul 29, 2004 at 08:46:32AM +0200, Rainer Traut wrote: > > It seems you're running a RH Enterprise 3 clone? Yes. > From what I read on the taroon mailing list, all the userland tools > are not ready for kernel 2.6 yet and the developers do not recommend > doing this. > Have you considered this? Maybe it's working fine though. I have updated a few RPMs from http://people.redhat.com/arjanv/2.6/RPMS.kernel/ and it's working fine (except for the kernel problems). There were also some surprises with 2.6.6 and nfs-utils-1.0.6-21EL: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=125345 But the 2.6-kernel is an exciting thing to run..
I was looking forward to Linus handing it over to someone more stability-oriented, but that doesn't seem to be happening.. -jf From owner-linux-xfs Thu Jul 29 04:42:34 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 29 Jul 2004 04:42:54 -0700 (PDT) Received: from hermes.fachschaften.tu-muenchen.de (hermes.fachschaften.tu-muenchen.de [129.187.202.12]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6TBgWpL003430 for ; Thu, 29 Jul 2004 04:42:33 -0700 Received: (qmail 23798 invoked from network); 29 Jul 2004 11:35:31 -0000 Received: from mimas.fachschaften.tu-muenchen.de (129.187.202.58) by hermes.fachschaften.tu-muenchen.de with QMQP; 29 Jul 2004 11:35:30 -0000 Date: Thu, 29 Jul 2004 13:42:19 +0200 From: Adrian Bunk To: Nathan Scott Cc: "Jeffrey E. Hundstad" , Linus Torvalds , Andrew Morton , Steve Lord , linux-xfs@oss.sgi.com, xfs-masters@oss.sgi.com, Cahya Wirawan , linux-kernel@vger.kernel.org Subject: Re: [2.6 patch] let 4KSTACKS depend on EXPERIMENTAL and XFS on 4KSTACKS=n Message-ID: <20040729114219.GN2349@fs.tum.de> References: <20040720114418.GH21918@email.archlab.tuwien.ac.at> <40FD0A61.1040503@xfs.org> <40FD2E99.20707@mnsu.edu> <20040720195012.GN14733@fs.tum.de> <20040729060900.GA1946@frodo> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040729060900.GA1946@frodo> User-Agent: Mutt/1.5.6i X-archive-position: 3766 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bunk@fs.tum.de Precedence: bulk X-list: linux-xfs Status: O Content-Length: 1633 Lines: 55 On Thu, Jul 29, 2004 at 04:09:00PM +1000, Nathan Scott wrote: > On Tue, Jul 20, 2004 at 09:50:12PM +0200, Adrian Bunk wrote: > > > > The patch below does: > > > > 1. let 4KSTACKS depend on EXPERIMENTAL > > Rationale: > > 4Kb stacks on i386 are the future. But currently this option might still > > cause problems in some areas of the kernel. 
OTOH, 4Kb stacks isn't a big > > gain for most people. > > 2.6 is a stable kernel series, and 4KSTACKS=n is the safe choice. > > Once all issues with 4KSTACKS=y are resolved this can be reverted. > > Seems fine. > > > 2. let XFS depend on (4KSTACKS=n || BROKEN) > > Rationale: > > Mark Loy said: > > Don't use 4K stacks and XFS. > > Who is Mark Loy? (and what does he know about XFS?) I don't know where I got this name from. I wanted to write "Steve Lord". (Sorry for the confusion.) > > Mark this combination as BROKEN until XFS is fixed. > > This part is not useful. We want to hear about problems > that people hit with 4K stacks so we can try to address > them, and it mostly works as is. 2.6 is a stable kernel series used in production environments. Regarding Linus' tree, it's IMHO the best solution to work around it this way until all issues are sorted out. Feel free to revert it in -mm later, since there are many brave souls running -mm you'll still get to hear about problems. > cheers. > Nathan cu Adrian -- "Is there not promise of rain?" Ling Tan asked suddenly out of the darkness. There had been need of rain for many days. "Only a promise," Lao Er said. Pearl S. 
Buck - Dragon Seed From owner-linux-xfs Thu Jul 29 04:48:18 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 29 Jul 2004 04:48:34 -0700 (PDT) Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6TBmIIs004014; Thu, 29 Jul 2004 04:48:18 -0700 Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.12.10/8.12.10) with ESMTP id i6TBkve1015990; Thu, 29 Jul 2004 07:46:57 -0400 Received: from [172.31.3.35] (arjanv.cipe.redhat.com [10.0.2.48]) by int-mx1.corp.redhat.com (8.11.6/8.11.6) with ESMTP id i6TBkta04679; Thu, 29 Jul 2004 07:46:55 -0400 Subject: Re: [2.6 patch] let 4KSTACKS depend on EXPERIMENTAL and XFS on 4KSTACKS=n From: Arjan van de Ven Reply-To: arjanv@redhat.com To: Adrian Bunk Cc: Nathan Scott , "Jeffrey E. Hundstad" , Linus Torvalds , Andrew Morton , Steve Lord , linux-xfs@oss.sgi.com, xfs-masters@oss.sgi.com, Cahya Wirawan , linux-kernel@vger.kernel.org In-Reply-To: <20040729114219.GN2349@fs.tum.de> References: <20040720114418.GH21918@email.archlab.tuwien.ac.at> <40FD0A61.1040503@xfs.org> <40FD2E99.20707@mnsu.edu> <20040720195012.GN14733@fs.tum.de> <20040729060900.GA1946@frodo> <20040729114219.GN2349@fs.tum.de> Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="=-cM5iABXiyfHOS7eUaigT" Organization: Red Hat UK Message-Id: <1091101612.2792.8.camel@laptop.fenrus.com> Mime-Version: 1.0 X-Mailer: Ximian Evolution 1.4.6 (1.4.6-2) Date: Thu, 29 Jul 2004 13:46:52 +0200 X-archive-position: 3767 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: arjanv@redhat.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 1109 Lines: 37 --=-cM5iABXiyfHOS7eUaigT Content-Type: text/plain Content-Transfer-Encoding: quoted-printable > > > Mark this combination as BROKEN until XFS is fixed. > > > > This part is not useful.
We want to hear about problems > > that people hit with 4K stacks so we can try to address > > them, and it mostly works as is. > > 2.6 is a stable kernel series used in production environments. > > Regarding Linus' tree, it's IMHO the best solution to work around it > this way until all issues are sorted out. > > Feel free to revert it in -mm later, since there are many brave souls > running -mm you'll still get to hear about problems. can you then also mark XFS broken in 2.4 entirely? 2.4 also has a net stack of 4Kb... --=-cM5iABXiyfHOS7eUaigT Content-Type: application/pgp-signature; name=signature.asc Content-Description: This is a digitally signed message part -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iD8DBQBBCOOrxULwo51rQBIRAhc4AJ9Jp+/ePsNufUxqo5ymgIAu1yufegCfRuLY jLyLMBfI7nJcjMBZQf4ivaY= =eahr -----END PGP SIGNATURE----- --=-cM5iABXiyfHOS7eUaigT-- From owner-linux-xfs Thu Jul 29 09:03:41 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 29 Jul 2004 09:03:43 -0700 (PDT) Received: from omx2.sgi.com (omx2-ext.SGI.COM [192.48.171.19] (may be forged)) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6TG3eIg030027 for ; Thu, 29 Jul 2004 09:03:40 -0700 Received: from flecktone.americas.sgi.com (flecktone.americas.sgi.com [192.48.203.135]) by omx2.sgi.com (8.12.11/8.12.9/linux-outbound_gateway-1.1) with ESMTP id i6TH4iDr015407 for ; Thu, 29 Jul 2004 10:04:44 -0700 Received: from poppy-e236.americas.sgi.com (poppy-e236.americas.sgi.com [128.162.236.207]) by flecktone.americas.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id i6TG3aOV43902674; Thu, 29 Jul 2004 11:03:36 -0500 (CDT) Received: from penguin.americas.sgi.com (penguin.americas.sgi.com [128.162.240.135]) by poppy-e236.americas.sgi.com (8.12.9/SGI-server-1.8) with ESMTP id i6TG3Z3b10733464; Thu, 29 Jul 2004 11:03:36 -0500 (CDT) Date: Thu, 29 Jul 2004 11:02:50 -0500 (CDT) From: Eric Sandeen X-X-Sender: sandeen@penguin.americas.sgi.com To: Andrew Ho
cc: linux-xfs Subject: Re: XFS installer on RHEL3.0 In-Reply-To: <4108726A.1090408@animezone.org> Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 3768 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: sandeen@sgi.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 2653 Lines: 84 You're assuming one exists... :) Actually someone on this list is working on one, so hopefully one will be available soon. -Eric On Wed, 28 Jul 2004, Andrew Ho wrote: > Hi, > > Can I find the XFS installer on Redhat Enterprise Linux WS v3. > > Thanks, > > Andrew Ho > > From owner-linux-xfs Thu Jul 29 09:13:56 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 29 Jul 2004 09:14:04 -0700 (PDT) Received: from mail.linux-sxs.org (mail.linux-sxs.org [207.218.156.196]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6TGDtlV032015 for ; Thu, 29 Jul 2004 09:13:55 -0700 Received: from mail.linux-sxs.org (localhost [127.0.0.1]) by mail.linux-sxs.org (8.12.11/8.12.11/Debian-5) with ESMTP id i6TFhNh1002405; Thu, 29 Jul 2004 10:43:23 -0500 Received: from localhost (netllama@localhost) by mail.linux-sxs.org (8.12.11/8.12.11/Debian-5) with ESMTP id i6TFhMSI002402; Thu, 29 Jul 2004 10:43:22 -0500 Date: Thu, 29 Jul 2004 10:43:22 -0500 (EST) From: "L. Friedman" To: Eric Sandeen cc: Andrew Ho , linux-xfs Subject: Re: XFS installer on RHEL3.0 In-Reply-To: Message-ID: References: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 3769 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: netllama@mail.linux-sxs.org Precedence: bulk X-list: linux-xfs Barring that, what I've been doing is installing RHEL-3, then upgrading it to a kernel with XFS support. Then I boot that box with Knoppix, and tar up all of the partitions.
Next, on a separate box, I boot with Knoppix, setup XFS formatted partitions, and scp the tarballs from the first box onto the box with XFS, and untar. Update /etc/lilo.conf, run /sbin/lilo, and voila!, RHEL-3 on XFS. On Thu, 29 Jul 2004, Eric Sandeen wrote: > You're assuming one exists... :) > > Actually someone on this list is working on one, so hopefully one will > be available soon. > > -Eric > > On Wed, 28 Jul 2004, Andrew Ho wrote: > > > Hi, > > > > Can I find the XFS installer on Redhat Enterprise Linux WS v3. > > > > Thanks, > > > > Andrew Ho > > > > > > -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lonni J Friedman netllama@linux-sxs.org Linux Step-by-step & TyGeMo http://netllama.ipfox.com From owner-linux-xfs Thu Jul 29 09:14:54 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 29 Jul 2004 09:14:59 -0700 (PDT) Received: from mail-relay-1.tiscali.it (mail-relay-1.tiscali.it [213.205.33.41]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6TGEqoX032316 for ; Thu, 29 Jul 2004 09:14:53 -0700 Received: from dreamland.darkstar.lan (82.84.178.36) by mail-relay-1.tiscali.it (7.1.021.3) id 40F404C80047C7F0; Thu, 29 Jul 2004 18:14:44 +0200 Received: by dreamland.darkstar.lan (Postfix, from userid 1000) id 84FC6653; Thu, 29 Jul 2004 18:14:53 +0200 (CEST) Date: Thu, 29 Jul 2004 18:14:53 +0200 From: Kronos To: Nathan Scott Cc: linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com Subject: Re: [2.6.8-rc2][XFS] Page allocation failure Message-ID: <20040729161453.GA4239@dreamland.darkstar.lan> Reply-To: kronos@kronoz.cjb.net References: <20040725173022.GA8345@dreamland.darkstar.lan> <20040729010403.GC800@frodo> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040729010403.GC800@frodo> User-Agent: Mutt/1.5.6+20040722i X-archive-position: 3770 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: 
kronos@kronoz.cjb.net Precedence: bulk X-list: linux-xfs Status: O Content-Length: 987 Lines: 29 On Thu, Jul 29, 2004 at 11:04:03AM +1000, Nathan Scott wrote: > Hi there, > > On Sun, Jul 25, 2004 at 07:30:23PM +0200, Kronos wrote: > > ... > > It seems that XFS failed an order 5 allocation (not atomic) on the read > > Hmm, that file is fragmented up the wazoo for some reason > (see the xfs_bmap -vv output on the file). Any chance you > know how it was written originally? I think it was written by azureus (a BitTorrent client); it creates a sparse file and writes small blocks (usually 256KB) when they are downloaded (in random order). The partition had 1GB free (of 10GB) and I run xfs_fsr at night. > In particular, was it written with O_SYNC set? (or via a sync NFS > mount?). I don't think so; I had a look at the sources and it seems to use memory-mapped files. Luca -- Home: http://kronoz.cjb.net "My favourite scientific theory is the one according to which the rings of Saturn are entirely composed of luggage lost on airline flights." -- Mark Russell From owner-linux-xfs Thu Jul 29 12:22:12 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 29 Jul 2004 12:22:17 -0700 (PDT) Received: from visualfx.animezone.org (CPE006097a16e12-CM400026313227.cpe.net.cable.rogers.com [24.101.19.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6TJMANx010497 for ; Thu, 29 Jul 2004 12:22:12 -0700 Received: from animezone.org (picasso.animezone.org [192.168.68.5]) by visualfx.animezone.org (8.12.8/8.12.8) with ESMTP id i6TJHeJM016348; Thu, 29 Jul 2004 15:17:43 -0400 Message-ID: <41094E41.9080508@animezone.org> Date: Thu, 29 Jul 2004 15:21:37 -0400 From: Andrew Ho User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040510 X-Accept-Language: en-us, en MIME-Version: 1.0 To: "L.
Friedman" CC: Eric Sandeen , linux-xfs Subject: Re: XFS installer on RHEL3.0 References: In-Reply-To: Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Scanned-By: MIMEDefang 2.43 X-archive-position: 3771 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: andrewho@animezone.org Precedence: bulk X-list: linux-xfs Status: O Content-Length: 859 Lines: 45 Thanks for the information. Andrew L. Friedman wrote: >Barring that, what i've been doing is installing RHEL-3, then upgrading it >to a kernel with XFS support. Then I boot that box with Knoppix, and tar >up all of the partitions. Next, on a separate box, I boot with >Knoppix, setup XFS formatted partitions, and scp the tarballs from the >first box onto the box with XFS, and untar. Update /etc/lilo.conf, run >/sbin/lilo, and voila!, RHEL-3 on XFS. > >On Thu, 29 Jul 2004, Eric Sandeen wrote: > > > >>You're assuming one exists... :) >> >>Actually someone on this list is working on one, so hopefully one will >>be available soon. >> >>-Eric >> >>On Wed, 28 Jul 2004, Andrew Ho wrote: >> >> >> >>>Hi, >>> >>>Can I find the XFS installer on Redhat Enterprise Linux WS v3. >>> >>>Thanks, >>> >>>Andrew Ho >>> >>> >>> >>> >> >> > > > From owner-linux-xfs Thu Jul 29 14:11:52 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 29 Jul 2004 14:11:57 -0700 (PDT) Received: from hermes.fachschaften.tu-muenchen.de (hermes.fachschaften.tu-muenchen.de [129.187.202.12]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6TLBmcN013505 for ; Thu, 29 Jul 2004 14:11:51 -0700 Received: (qmail 12417 invoked from network); 29 Jul 2004 21:04:48 -0000 Received: from mimas.fachschaften.tu-muenchen.de (129.187.202.58) by hermes.fachschaften.tu-muenchen.de with QMQP; 29 Jul 2004 21:04:48 -0000 Date: Thu, 29 Jul 2004 23:11:37 +0200 From: Adrian Bunk To: Arjan van de Ven Cc: Nathan Scott , "Jeffrey E. 
Hundstad" , Linus Torvalds , Andrew Morton , Steve Lord , linux-xfs@oss.sgi.com, xfs-masters@oss.sgi.com, Cahya Wirawan , linux-kernel@vger.kernel.org Subject: Re: [2.6 patch] let 4KSTACKS depend on EXPERIMENTAL and XFS on 4KSTACKS=n Message-ID: <20040729211137.GC23589@fs.tum.de> References: <20040720114418.GH21918@email.archlab.tuwien.ac.at> <40FD0A61.1040503@xfs.org> <40FD2E99.20707@mnsu.edu> <20040720195012.GN14733@fs.tum.de> <20040729060900.GA1946@frodo> <20040729114219.GN2349@fs.tum.de> <1091101612.2792.8.camel@laptop.fenrus.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1091101612.2792.8.camel@laptop.fenrus.com> User-Agent: Mutt/1.5.6i X-archive-position: 3772 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bunk@fs.tum.de Precedence: bulk X-list: linux-xfs Status: O Content-Length: 1084 Lines: 32 On Thu, Jul 29, 2004 at 01:46:52PM +0200, Arjan van de Ven wrote: > > > > > Mark this combination as BROKEN until XFS is fixed. > > > > > > This part is not useful. We want to hear about problems > > > that people hit with 4K stacks so we can try to address > > > them, and it mostly works as is. > > > > 2.6 is a stable kernel series used in production environments. > > > > Regarding Linus' tree, it's IMHO the best solution to work around it > > this way until all issues are sorted out. > > > > Feel free to revert it in -mm later, since there are many brave souls > > running -mm you'll still get to hear about problems. > > can you then also mark XFS broken in 2.4 entirely? > 2.4 has a nett stack of also 4Kb... There are reports of breakages with 4kb stacks in 2.6, but AFAIK no similar reports for 2.4 . cu Adrian -- "Is there not promise of rain?" Ling Tan asked suddenly out of the darkness. There had been need of rain for many days. "Only a promise," Lao Er said. Pearl S. 
Buck - Dragon Seed From owner-linux-xfs Thu Jul 29 14:25:15 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 29 Jul 2004 14:25:23 -0700 (PDT) Received: from eik.ii.uib.no (eik.ii.uib.no [129.177.16.3]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6TLPExp013992 for ; Thu, 29 Jul 2004 14:25:14 -0700 Received: from lapprose.ii.uib.no ([129.177.20.37]:40999) by eik.ii.uib.no with esmtp (TLSv1:AES256-SHA:256) (Exim 4.30) id 1BqIOc-0006Jm-B4; Thu, 29 Jul 2004 23:25:02 +0200 Received: (from jfm@localhost) by lapprose.ii.uib.no (8.12.11/8.12.11/Submit) id i6TLP1Nj011056; Thu, 29 Jul 2004 23:25:01 +0200 Date: Thu, 29 Jul 2004 23:25:01 +0200 From: Jan-Frode Myklebust To: Eric Sandeen Cc: linux-xfs@oss.sgi.com Subject: Re: bugzilla vs. bugzilla Message-ID: <20040729212501.GA10878@ii.uib.no> References: <20040728103708.GA26088@ii.uib.no> <20040728163220.GA28915@ii.uib.no> <1091038172.7002.12.camel@stout.americas.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1091038172.7002.12.camel@stout.americas.sgi.com> X-archive-position: 3773 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: janfrode@parallab.uib.no Precedence: bulk X-list: linux-xfs Status: O Content-Length: 668 Lines: 17 On Wed, Jul 28, 2004 at 01:09:32PM -0500, Eric Sandeen wrote: > > Well, xfs is not good at failing memory allocations - irix would happily > wait forever for memory, rather than failing. Linux 2.6 should now also > have this feature (allocations that can wait forever) but for a while > (IIRC) there was a problem where memory allocation -could- fail, xfs > doesn't check for this, and kablooey. > Could these crashes be related to the 4KSTACKS which Steve Lord seems to be advising against? 
http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&c2coff=1&safe=off&client=googlet&frame=right&th=a1bc6f5a855bb5e8&seekm=2k46t-2u5-21%40gated-at.bofh.it#link1 -jf From owner-linux-xfs Thu Jul 29 14:45:03 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 29 Jul 2004 14:45:12 -0700 (PDT) Received: from pimout1-ext.prodigy.net (pimout1-ext.prodigy.net [207.115.63.77]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6TLj2xm014626 for ; Thu, 29 Jul 2004 14:45:03 -0700 Received: from taniwha.stupidest.org (adsl-63-202-172-176.dsl.snfc21.pacbell.net [63.202.172.176]) by pimout1-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6TLiDDi207454; Thu, 29 Jul 2004 17:44:13 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 8584D115C860; Thu, 29 Jul 2004 14:44:12 -0700 (PDT) Date: Thu, 29 Jul 2004 14:44:12 -0700 From: Chris Wedgwood To: Adrian Bunk Cc: Arjan van de Ven , Nathan Scott , "Jeffrey E. Hundstad" , Linus Torvalds , Andrew Morton , Steve Lord , linux-xfs@oss.sgi.com, xfs-masters@oss.sgi.com, Cahya Wirawan , linux-kernel@vger.kernel.org Subject: Re: [2.6 patch] let 4KSTACKS depend on EXPERIMENTAL and XFS on 4KSTACKS=n Message-ID: <20040729214412.GA22041@taniwha.stupidest.org> References: <20040720114418.GH21918@email.archlab.tuwien.ac.at> <40FD0A61.1040503@xfs.org> <40FD2E99.20707@mnsu.edu> <20040720195012.GN14733@fs.tum.de> <20040729060900.GA1946@frodo> <20040729114219.GN2349@fs.tum.de> <1091101612.2792.8.camel@laptop.fenrus.com> <20040729211137.GC23589@fs.tum.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040729211137.GC23589@fs.tum.de> X-archive-position: 3774 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: O Content-Length: 659 Lines: 19 On Thu, Jul 29, 2004 at 11:11:37PM +0200, Adrian Bunk wrote: > There are reports of breakages with 
> 4kb stacks in 2.6, but AFAIK no > similar reports for 2.4. 2.4.x uses the stack(s) differently than 2.6.x so it will usually be harder (but not impossible) to break and less easy to detect. I believe what Arjan is saying is that 2.4.x effectively only has 4K of safely usable stack anyhow (we have some on-stack allocated data and interrupts use the same stack). Also, FWIW I do think there have been reports of problems in 2.4.x that looked like they might be stack-size related (things dying horribly after an interrupt, for example). --cw From owner-linux-xfs Thu Jul 29 14:46:10 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 29 Jul 2004 14:46:16 -0700 (PDT) Received: from pimout1-ext.prodigy.net (pimout1-ext.prodigy.net [207.115.63.77]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6TLk9hC014945 for ; Thu, 29 Jul 2004 14:46:10 -0700 Received: from taniwha.stupidest.org (adsl-63-202-172-176.dsl.snfc21.pacbell.net [63.202.172.176]) by pimout1-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6TLk5Di198718; Thu, 29 Jul 2004 17:46:05 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 61855115C860; Thu, 29 Jul 2004 14:46:05 -0700 (PDT) Date: Thu, 29 Jul 2004 14:46:05 -0700 From: Chris Wedgwood To: Jan-Frode Myklebust Cc: Eric Sandeen , linux-xfs@oss.sgi.com Subject: Re: bugzilla vs.
bugzilla Message-ID: <20040729214605.GB22041@taniwha.stupidest.org> References: <20040728103708.GA26088@ii.uib.no> <20040728163220.GA28915@ii.uib.no> <1091038172.7002.12.camel@stout.americas.sgi.com> <20040729212501.GA10878@ii.uib.no> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040729212501.GA10878@ii.uib.no> X-archive-position: 3775 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: O Content-Length: 516 Lines: 14 On Thu, Jul 29, 2004 at 11:25:01PM +0200, Jan-Frode Myklebust wrote: > Could these crashes be related to the 4KSTACKS which Steve Lord > seems to be advising against? Unrelated. IRIX memory allocations never fail they just might take forever. Under Linux they can fail so the callers have to be aware of this and check for this and retry as required. XFS under Linux didn't do this everywhere it should previously so memory allocations could fail and you would see null pointer dereference problems. --cw From owner-linux-xfs Thu Jul 29 15:03:15 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 29 Jul 2004 15:03:17 -0700 (PDT) Received: from mail.gmx.net (pop.gmx.net [213.165.64.20]) by oss.sgi.com (8.13.0/8.13.0) with SMTP id i6TM3EsD016057 for ; Thu, 29 Jul 2004 15:03:15 -0700 Received: (qmail 30534 invoked by uid 65534); 29 Jul 2004 22:03:05 -0000 Received: from pD9FB22E5.dip.t-dialin.net (EHLO localhost) (217.251.34.229) by mail.gmx.net (mp016) with SMTP; 30 Jul 2004 00:03:05 +0200 X-Authenticated: #21439359 Date: Fri, 30 Jul 2004 00:00:50 +0200 From: julius To: linux-xfs@oss.sgi.com Subject: XFS and loop-aes speed? 
Message-Id: <20040730000050.62f6c736.julius.junghans@gmx.de> X-Mailer: Sylpheed version 0.9.12 (GTK+ 1.2.10; i686-pc-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-archive-position: 3776 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: julius.junghans@gmx.de Precedence: bulk X-list: linux-xfs Status: O Content-Length: 151 Lines: 6 Hi, I've been using XFS for some time now; my P4 2.8GHz loads up to 70% when there's an FTP connection downloading at 100mbit. Is this normal? Regards Julius From owner-linux-xfs Thu Jul 29 15:31:12 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 29 Jul 2004 15:31:18 -0700 (PDT) Received: from omx1.americas.sgi.com (omx1-ext.SGI.COM [192.48.179.11] (may be forged)) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6TMVB9T018293 for ; Thu, 29 Jul 2004 15:31:11 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by omx1.americas.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with SMTP id i6TMV50f014098 for ; Thu, 29 Jul 2004 17:31:06 -0500 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id IAA09372; Fri, 30 Jul 2004 08:30:50 +1000 Received: from wobbly.melbourne.sgi.com (localhost [127.0.0.1]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i6TMUjln2733962; Fri, 30 Jul 2004 08:30:45 +1000 (EST) Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id i6TMUeS22706497; Fri, 30 Jul 2004 08:30:40 +1000 (EST) Date: Fri, 30 Jul 2004 08:30:40 +1000 From: Nathan Scott To: Arjan van de Ven , Adrian Bunk Cc: "Jeffrey E.
Hundstad" , Linus Torvalds , Andrew Morton , Steve Lord , linux-xfs@oss.sgi.com, Cahya Wirawan , linux-kernel@vger.kernel.org Subject: Re: [xfs-masters] Re: [2.6 patch] let 4KSTACKS depend on EXPERIMENTAL and XFS on 4KSTACKS=n Message-ID: <20040730083040.A2703153@wobbly.melbourne.sgi.com> References: <20040720114418.GH21918@email.archlab.tuwien.ac.at> <40FD0A61.1040503@xfs.org> <40FD2E99.20707@mnsu.edu> <20040720195012.GN14733@fs.tum.de> <20040729060900.GA1946@frodo> <20040729114219.GN2349@fs.tum.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5i In-Reply-To: <20040729114219.GN2349@fs.tum.de>; from bunk@fs.tum.de on Thu, Jul 29, 2004 at 01:42:19PM +0200 X-archive-position: 3777 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 976 Lines: 29 Arjan wrote: > can you then also mark XFS broken in 2.4 entirely? > 2.4 has a nett stack of also 4Kb... The assumptions there are incorrect - 2.4 is now quite a different kernel - we haven't seen problems like this on 2.4 at all, and I routinely test that failing code path in our regression tests every other night on 2.4. There have certainly been stack consumers in the 2.6 VFS that weren't there in 2.4 (like AIO and struct kiocb, etc) so that's not an apples-to-apples comparison anymore. Adrian wrote: > 2.6 is a stable kernel series used in production environments. > > Regarding Linus' tree, it's IMHO the best solution to work around it > this way until all issues are sorted out. I'm not really convinced - the EXPERIMENTAL marking should be plenty of a deterrent to folks in production environments. There are reports of stack overruns on other filesystems as well with 4KSTACKS, so it doesn't seem worthwhile to me to do this just for XFS. cheers.
-- Nathan From owner-linux-xfs Thu Jul 29 16:42:03 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 29 Jul 2004 16:42:06 -0700 (PDT) Received: from omx2.sgi.com (omx2-ext.SGI.COM [192.48.171.19] (may be forged)) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6TNg28m031990 for ; Thu, 29 Jul 2004 16:42:02 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by omx2.sgi.com (8.12.11/8.12.9/linux-outbound_gateway-1.1) with SMTP id i6U0h7P1011856 for ; Thu, 29 Jul 2004 17:43:08 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA10769; Fri, 30 Jul 2004 09:41:54 +1000 Received: from frodo.melbourne.sgi.com (root@frodo.melbourne.sgi.com [134.14.55.153]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id i6TNfpln2734286; Fri, 30 Jul 2004 09:41:52 +1000 (EST) Received: from frodo.melbourne.sgi.com (nathans@localhost [127.0.0.1]) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) with ESMTP id i6U0cQgh004648; Fri, 30 Jul 2004 10:38:26 +1000 Received: (from nathans@localhost) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) id i6U0cATi004646; Fri, 30 Jul 2004 10:38:10 +1000 Date: Fri, 30 Jul 2004 10:38:10 +1000 From: Nathan Scott To: Toralf Lund Cc: linux-xfs@oss.sgi.com Subject: Re: Linux 2.6 and v1 directories? 
Message-ID: <20040730003809.GC1946@frodo> References: <40ED0738.7040309@procaptura.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <40ED0738.7040309@procaptura.com> User-Agent: Mutt/1.5.3i X-archive-position: 3778 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 248 Lines: 12 On Thu, Jul 08, 2004 at 10:35:04AM +0200, Toralf Lund wrote: > Is there any update on the v1 directory issue, now that Linux 2.6 is > released with XFS support and all? > No, version 1 directories are unsupported on Linux. cheers. -- Nathan From owner-linux-xfs Thu Jul 29 16:55:24 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 29 Jul 2004 16:55:28 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6TNtOLD032623 for ; Thu, 29 Jul 2004 16:55:24 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6TNtOJC032622 for linux-xfs@oss.sgi.com; Thu, 29 Jul 2004 16:55:24 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6TNtMmC032607 for ; Thu, 29 Jul 2004 16:55:22 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6TNs4QY032563; Thu, 29 Jul 2004 16:54:04 -0700 Date: Thu, 29 Jul 2004 16:54:04 -0700 Message-Id: <200407292354.i6TNs4QY032563@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 272] xfs_force_shutdown in xfs_trans_cancel, part 2 X-Bugzilla-Reason: AssignedTo X-archive-position: 3779 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 426 Lines: 20 http://oss.sgi.com/bugzilla/show_bug.cgi?id=272 ------- 
Additional Comments From nathans@sgi.com 2004-29-07 16:54 PDT ------- Hi Peter, Any chance you could try your same MD setup but with a filesystem size which is below a terabyte (-dsize=500g or something) and see if that still fails. thanks! ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. From owner-linux-xfs Thu Jul 29 23:52:11 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 29 Jul 2004 23:52:25 -0700 (PDT) Received: from jdc.local (ppp1FAC.dsl.pacific.net.au [203.100.245.172]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6U6pwWB020897 for ; Thu, 29 Jul 2004 23:52:06 -0700 Received: from jdc.local (localhost [127.0.0.1]) by jdc.local (8.12.11/8.12.11/Debian-5) with ESMTP id i6U6pdjv011742; Fri, 30 Jul 2004 16:51:39 +1000 Received: (from jason@localhost) by jdc.local (8.12.11/8.12.11/Debian-4) id i6U6pbOO011722; Fri, 30 Jul 2004 16:51:37 +1000 From: Jason White MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Message-ID: <16649.61433.112009.77476@jdc.local> Date: Fri, 30 Jul 2004 16:51:37 +1000 To: "Jason Cole" Cc: "linux-xfs" Subject: Re: Stupid mkfs question In-Reply-To: <002801c474a3$0c9fb160$0100a8c0@valleychase> References: <002801c474a3$0c9fb160$0100a8c0@valleychase> X-Mailer: VM 7.18 under Emacs 21.3.1 Reply-To: jasonw@ariel.its.unimelb.edu.au X-archive-position: 3780 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: jasonw@ariel.its.unimelb.edu.au Precedence: bulk X-list: linux-xfs Status: O Content-Length: 259 Lines: 10 Jason Cole writes: > > so typicaly (as far I can read) I do mkfs -t xfs /dev/hda4 > > but get > > mkfs.xfs: can not open /dev/hda4: can't find device or address Have you created a Linux partition occupying the free space on the drive? See fdisk(8). 
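[A minimal sketch of the check implied by the fdisk(8) advice above. The error "mkfs.xfs: can not open /dev/hda4: can't find device or address" usually means the partition node does not exist yet, i.e. the partition was never created or the kernel has not re-read the partition table. `check_dev` is a hypothetical helper and /dev/hda4 is just the example device from the thread.]

```shell
# Verify the target is a block device before running mkfs; mkfs -t xfs
# fails with "can't find device or address" when the node is absent.
check_dev() {
    if [ -b "$1" ]; then
        echo "ok"        # block device exists; mkfs -t xfs "$1" can proceed
    else
        echo "missing"   # create the partition first, see fdisk(8)
    fi
}
check_dev /dev/hda4
```

On a box where /dev/hda4 has not been partitioned yet this prints `missing`; after creating a Linux partition in the free space with fdisk (and letting the kernel re-read the table), it should print `ok` and mkfs can be run safely.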
From owner-linux-xfs Fri Jul 30 00:00:22 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 30 Jul 2004 00:00:36 -0700 (PDT) Received: from pimout3-ext.prodigy.net (pimout3-ext.prodigy.net [207.115.63.102]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6U70LDd021339 for ; Fri, 30 Jul 2004 00:00:22 -0700 Received: from taniwha.stupidest.org (adsl-63-202-172-176.dsl.snfc21.pacbell.net [63.202.172.176]) by pimout3-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6U704lM050238; Fri, 30 Jul 2004 03:00:05 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id BD288115C879; Fri, 30 Jul 2004 00:00:03 -0700 (PDT) Date: Fri, 30 Jul 2004 00:00:03 -0700 From: Chris Wedgwood To: julius Cc: linux-xfs@oss.sgi.com Subject: Re: XFS and loop-aes speed? Message-ID: <20040730070003.GA19498@taniwha.stupidest.org> References: <20040730000050.62f6c736.julius.junghans@gmx.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040730000050.62f6c736.julius.junghans@gmx.de> X-archive-position: 3781 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: O Content-Length: 353 Lines: 14 On Fri, Jul 30, 2004 at 12:00:50AM +0200, julius wrote: > im using XFS for some time now, my p4 2,8ghz loads up to 70% when > theres an ftp connection downloading @ 100mbit. probably not XFS related > is this normal? 
70% does seem quite high for 100mbit FTP. You really need to provide more details if you think this is XFS-related (I doubt it is). From owner-linux-xfs Fri Jul 30 00:56:59 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 30 Jul 2004 00:57:09 -0700 (PDT) Received: from host-81-191-111-35.bluecom.no (host-81-191-111-35.bluecom.no [81.191.111.35]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6U7uvNJ025989 for ; Fri, 30 Jul 2004 00:56:59 -0700 Received: from localhost ([127.0.0.1]) by host-81-191-111-35.bluecom.no with esmtp (Exim 4.33) id 1BqSG5-0000VP-O0; Fri, 30 Jul 2004 09:56:53 +0200 Message-ID: <4109FF45.90108@procaptura.com> Date: Fri, 30 Jul 2004 09:56:53 +0200 From: Toralf Lund User-Agent: Mozilla Thunderbird 0.6 (X11/20040502) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Nathan Scott CC: linux-xfs@oss.sgi.com Subject: Re: Linux 2.6 and v1 directories? References: <40ED0738.7040309@procaptura.com> <20040730003809.GC1946@frodo> In-Reply-To: <20040730003809.GC1946@frodo> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3782 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: toralf@procaptura.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 311 Lines: 21 Nathan Scott wrote: >On Thu, Jul 08, 2004 at 10:35:04AM +0200, Toralf Lund wrote: > > >>Is there any update on the v1 directory issue, now that Linux 2.6 is >>released with XFS support and all? >> >> >> > >No, version 1 directories are unsupported on Linux. > > And always will be? >cheers.
> > > From owner-linux-xfs Fri Jul 30 01:36:51 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 30 Jul 2004 01:36:59 -0700 (PDT) Received: from smtp-vbr2.xs4all.nl (smtp-vbr2.xs4all.nl [194.109.24.22]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6U8ansl028587 for ; Fri, 30 Jul 2004 01:36:50 -0700 Received: from [10.0.2.53] (host-2.coltex.demon.nl [212.238.252.66]) (authenticated bits=0) by smtp-vbr2.xs4all.nl (8.12.11/8.12.11) with ESMTP id i6U8aedo018233; Fri, 30 Jul 2004 10:36:45 +0200 (CEST) (envelope-from seth.mos@xs4all.nl) Message-ID: <410A0896.3040501@xs4all.nl> Date: Fri, 30 Jul 2004 10:36:38 +0200 From: Seth Mos User-Agent: Mozilla Thunderbird 0.7 (Windows/20040616) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Toralf Lund CC: linux-xfs@oss.sgi.com Subject: Re: Linux 2.6 and v1 directories? References: <40ED0738.7040309@procaptura.com> <20040730003809.GC1946@frodo> <4109FF45.90108@procaptura.com> In-Reply-To: <4109FF45.90108@procaptura.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: by XS4ALL Virus Scanner X-archive-position: 3783 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: seth.mos@xs4all.nl Precedence: bulk X-list: linux-xfs Status: O Content-Length: 643 Lines: 24 Toralf Lund wrote: > Nathan Scott wrote: > >> On Thu, Jul 08, 2004 at 10:35:04AM +0200, Toralf Lund wrote: > And always will be? It might work; however, fixing and testing it is very low on the todo list. There are not many cases of people needing to mount volumes with v1 directories. The only place to find them is on IRIX volumes, which need to have a 4K blocksize, be cleanly unmounted, and have the journal cleared before they can be mounted on Intel. There is no sensible need whatsoever for v1 directories if you only use Linux.
Those that need to be able to mount IRIX disks can always bug the SGI support channels :-) Cheers Seth From owner-linux-xfs Fri Jul 30 06:55:26 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 30 Jul 2004 06:55:32 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6UDtQva018071 for ; Fri, 30 Jul 2004 06:55:26 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6UDtQjZ018070 for linux-xfs@oss.sgi.com; Fri, 30 Jul 2004 06:55:26 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6UDtPpx018056 for ; Fri, 30 Jul 2004 06:55:25 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6UDYdo2016889; Fri, 30 Jul 2004 06:34:39 -0700 Date: Fri, 30 Jul 2004 06:34:39 -0700 Message-Id: <200407301334.i6UDYdo2016889@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 351] New: XFS internal error xlog_clear_stale_blocks X-Bugzilla-Reason: AssignedTo X-archive-position: 3784 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 3308 Lines: 79 http://oss.sgi.com/bugzilla/show_bug.cgi?id=351 Summary: XFS internal error xlog_clear_stale_blocks Product: Linux XFS Version: unspecified Platform: IA32 OS/Version: Linux Status: NEW Severity: normal Priority: High Component: XFS kernel code AssignedTo: xfs-master@oss.sgi.com ReportedBy: janfrode@parallab.uib.no kernel-smp-2.4.21-15.EL.sgi3 on a dual Pentium III. We're running 2x raid5 over scsi disk, and have LVM stripe over md0 and md1. On top of this we have XFS. We just had a failure on md1. One disk failed, and md1 went down immediately because it thought the spare disk was newer than the rest of the raid-disks (or something like that). 
We were able to recover after pulling out the spare disk. But, when trying to mount the XFS-filesystem now, we get a failure: XFS mounting filesystem lvm(58,0) raid5: switching cache buffer size, 512 --> 4096 Filesystem "lvm(58,0)": XFS internal error xlog_clear_stale_blocks(2) at line 1253 of file xfs_log_recover.c. Caller 0xf8a15ec3 f081fc00 f8a0001b 00000008 00000001 c4925000 f8a42660 f8a4068e 000004e5 f8a405c5 f8a15ec3 f8a16893 f8a4068e 00000001 c4925000 f8a405c5 000004e5 f8a15ec3 0000015d 0000015b 00039cb0 0000015b 00040000 f4727000 0000f191 Call Trace: [] xfs_error_report [xfs] 0x5b (0xf081fc04) [] .rodata.str1.32 [xfs] 0x320 (0xf081fc14) [] .rodata.str1.1 [xfs] 0x8b2 (0xf081fc18) [] .rodata.str1.1 [xfs] 0x7e9 (0xf081fc20) [] xlog_find_tail [xfs] 0x2a3 (0xf081fc24) [] xlog_clear_stale_blocks [xfs] 0x153 (0xf081fc28) [] .rodata.str1.1 [xfs] 0x8b2 (0xf081fc2c) [] .rodata.str1.1 [xfs] 0x7e9 (0xf081fc38) [] xlog_find_tail [xfs] 0x2a3 (0xf081fc40) [] xlog_find_tail [xfs] 0x2a3 (0xf081fc64) [] xlog_recover [xfs] 0x37 (0xf081fcb4) [] xfs_log_mount [xfs] 0x90 (0xf081fcec) [] xfs_mountfs [xfs] 0x69f (0xf081fd14) [] truncate_inode_pages [kernel] 0x75 (0xf081fd80) [] set_blocksize [kernel] 0xfd (0xf081fdac) [] xfs_ioinit [xfs] 0x34 (0xf081fdc8) [] xfs_mount [xfs] 0x2ce (0xf081fddc) [] vfs_mount [xfs] 0x43 (0xf081fe10) [] xfs_qm_mount [xfs_quota] 0x4c (0xf081fe20) [] vfs_mount [xfs] 0x43 (0xf081fe38) [] linvfs_read_super [xfs] 0x8d (0xf081fe48) [] kmalloc [kernel] 0x48 (0xf081fe88) [] alloc_super [kernel] 0x3a (0xf081fe98) [] insert_super [kernel] 0x53 (0xf081fe9c) [] get_sb_bdev [kernel] 0x1e7 (0xf081feac) [] xfs_fs_type [xfs] 0x0 (0xf081fef4) [] do_kern_mount [kernel] 0x121 (0xf081fefc) [] xfs_fs_type [xfs] 0x0 (0xf081ff00) [] do_add_mount [kernel] 0x93 (0xf081ff20) [] do_mount [kernel] 0x160 (0xf081ff40) [] copy_mount_options [kernel] 0x81 (0xf081ff70) [] sys_mount [kernel] 0xdf (0xf081ff90) XFS: failed to locate log tail XFS: log mount/recovery failed XFS: log mount 
failed Will try running xfs_repair as soon as the raid is finished recovering. ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. From owner-linux-xfs Fri Jul 30 07:51:03 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 30 Jul 2004 07:51:19 -0700 (PDT) Received: from mxfep02.bredband.com (mxfep02.bredband.com [195.54.107.73]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6UEp27Q020384 for ; Fri, 30 Jul 2004 07:51:03 -0700 Received: from mail.ter.nu ([213.114.210.36] [213.114.210.36]) by mxfep02.bredband.com with ESMTP id <20040730145053.BGKB23867.mxfep02.bredband.com@mail.ter.nu>; Fri, 30 Jul 2004 16:50:53 +0200 Received: from grabbarna.nu (w.tenet [192.168.0.5]) by mail.ter.nu (Postfix) with ESMTP id 884009981AE; Fri, 30 Jul 2004 16:50:51 +0200 (CEST) Message-ID: <410A6067.6030903@grabbarna.nu> Date: Fri, 30 Jul 2004 16:51:19 +0200 From: Jan Banan User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040116 X-Accept-Language: sv, en-us, en MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Cc: Ethan Benson Subject: Re: Recover a XFS on raid -1 (linear) when one disk is broken References: <40F6DBC1.6050909@grabbarna.nu> <20040715205910.GA9948@taniwha.stupidest.org> <40F9321C.7060403@grabbarna.nu> <20040717203943.GL20260@plato.local.lan> In-Reply-To: <20040717203943.GL20260@plato.local.lan> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3785 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: b@grabbarna.nu Precedence: bulk X-list: linux-xfs Status: O Content-Length: 1229 Lines: 30 >>I suppose the best stradegy is to get a new disk of the same size and >>then try to copy the whole damaged disk with "dd" to the new disk and >>then try to startup the raid again and after that run xfs_repair. What >>arguments to "dd" would fit best in this case? 
I think I've read that >>"dd" will normally abort when it can't read from a damaged disk and the >>disk is quite big, 250 GB (Maxtor). >> >> > >dd if=/dev/broken of=/dev/new bs=512 conv=sync,noerror > I have now got a new hard disk of exactly the same size (250 GB), and I have started the "dd" command line you suggested. I cannot find any kind of "verbose" flag for "dd", so I do not know how long this will take. Any idea? (Pentium II 450 MHz with 512 MB memory and running Red Hat Linux 9) >ive heard you can sometimes make a disk temporarily become functional >again by shutting it down for a few days. i think ive even heard that >putting it in a freezer can help. in any event the most you could >hope for is just enough functional time to recover the data. > > I have tried both shutting it down for a few days and putting it in a freezer, but neither of those two things helped :( Best regards and thanks for all hints, Jan From owner-linux-xfs Fri Jul 30 07:55:27 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 30 Jul 2004 07:55:38 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6UEtRjV020783 for ; Fri, 30 Jul 2004 07:55:27 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6UEtRYh020782 for linux-xfs@oss.sgi.com; Fri, 30 Jul 2004 07:55:27 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6UEtPGr020754 for ; Fri, 30 Jul 2004 07:55:25 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6UEtKYa020745; Fri, 30 Jul 2004 07:55:20 -0700 Date: Fri, 30 Jul 2004 07:55:20 -0700 Message-Id: <200407301455.i6UEtKYa020745@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 272] xfs_force_shutdown in xfs_trans_cancel, part 2 X-Bugzilla-Reason: AssignedTo X-archive-position: 3786 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to:
linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 348 Lines: 16 http://oss.sgi.com/bugzilla/show_bug.cgi?id=272 ------- Additional Comments From Peter.Kelemen+sgi@cern.ch 2004-30-07 07:55 PDT ------- Also happens with 2.6.8-rc2-mm1 as well. Do you want me to try the CVS tree? Peter ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. From owner-linux-xfs Fri Jul 30 07:55:27 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 30 Jul 2004 07:55:46 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6UEtRed020784 for ; Fri, 30 Jul 2004 07:55:27 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6UEtRPT020781 for linux-xfs@oss.sgi.com; Fri, 30 Jul 2004 07:55:27 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6UEtPGv020754 for ; Fri, 30 Jul 2004 07:55:26 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6UED9wa019356; Fri, 30 Jul 2004 07:13:09 -0700 Date: Fri, 30 Jul 2004 07:13:09 -0700 Message-Id: <200407301413.i6UED9wa019356@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 272] xfs_force_shutdown in xfs_trans_cancel, part 2 X-Bugzilla-Reason: AssignedTo X-archive-position: 3787 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 1116 Lines: 34 http://oss.sgi.com/bugzilla/show_bug.cgi?id=272 ------- Additional Comments From Peter.Kelemen+sgi@cern.ch 2004-30-07 07:13 PDT ------- Nathan, Tried with -d size=512g (additionally to the parameters used before) and the forced shutdown still happens. 
meta-data=/mnt               isize=512    agcount=16, agsize=8388608 blks
         =                   sectsz=512
data     =                   bsize=4096   blocks=134217728, imaxpct=25
         =                   sunit=256    swidth=6144 blks, unwritten=1
naming   =version 2          bsize=4096
log      =internal           bsize=4096   blocks=32768, version=2
         =                   sectsz=512   sunit=64 blks
realtime =none               extsz=25165824 blocks=0, rtextents=0

xfs_force_shutdown(md0,0x8) called from line 1088 of file fs/xfs/xfs_trans.c. Return address = 0xa00000020034ce90

Moreover, we've tried with different sw RAID configurations and the problem is persistent. Peter ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. From owner-linux-xfs Fri Jul 30 08:02:49 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 30 Jul 2004 08:03:00 -0700 (PDT) Received: from alsvidh.mathematik.uni-muenchen.de (alsvidh.mathematik.uni-muenchen.de [129.187.111.42]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6UF2mEp021520 for ; Fri, 30 Jul 2004 08:02:49 -0700 Received: by alsvidh.mathematik.uni-muenchen.de (Postfix, from userid 666) id 6CAA34CD19; Fri, 30 Jul 2004 17:02:44 +0200 (CEST) To: Jan Banan Cc: linux-xfs@oss.sgi.com Subject: Re: Recover a XFS on raid -1 (linear) when one disk is broken References: <40F6DBC1.6050909@grabbarna.nu> <20040715205910.GA9948@taniwha.stupidest.org> <40F9321C.7060403@grabbarna.nu> <20040717203943.GL20260@plato.local.lan> <410A6067.6030903@grabbarna.nu> Organization: Lehrstuhl fuer vergleichende Astrozoologie X-Mahlzeit: Das ist per Saldo Gemuetlichkeit Reply-To: Jens Schmalzing From: Jens Schmalzing Date: 30 Jul 2004 17:02:43 +0200 In-Reply-To: <410A6067.6030903@grabbarna.nu> Message-ID: User-Agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.3.50 MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id i6UF2nEp021521 X-archive-position: 3788 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to:
linux-xfs-bounce@oss.sgi.com X-original-sender: j.s@lmu.de Precedence: bulk X-list: linux-xfs Status: O Content-Length: 411 Lines: 16 Hi, Jan Banan writes: > I have now started doing that "dd" command line you suggested. I can > not find any kind of "verbose"-flag to "dd" so I do not know how > long time this will take. Any idea? When you send a SIGUSR1 to your dd process, it will display a summary of what it has done so far. Regards, Jens. -- J'qbpbe, le m'en fquz pe j'qbpbe! Le veux aimeb et mqubib panz je pézqbpbe je djuz tqtaj! From owner-linux-xfs Fri Jul 30 09:55:27 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 30 Jul 2004 09:55:30 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6UGtRIQ028413 for ; Fri, 30 Jul 2004 09:55:27 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6UGtRxQ028412 for linux-xfs@oss.sgi.com; Fri, 30 Jul 2004 09:55:27 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6UGtPip028398 for ; Fri, 30 Jul 2004 09:55:26 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6UGFAkQ027332; Fri, 30 Jul 2004 09:15:10 -0700 Date: Fri, 30 Jul 2004 09:15:10 -0700 Message-Id: <200407301615.i6UGFAkQ027332@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 351] XFS internal error xlog_clear_stale_blocks X-Bugzilla-Reason: AssignedTo X-archive-position: 3789 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 601 Lines: 19 http://oss.sgi.com/bugzilla/show_bug.cgi?id=351 ------- Additional Comments From janfrode@parallab.uib.no 2004-30-07 09:15 PDT ------- I had to run 'xfs_repair -L' to recover the filesystem. Ended up with 3502 files and 782 directories in lost&found. 
This seems quite high, since there shouldn't be too much activity on this filesystem... I would have expected only a few recently moved/created/deleted files being affected by me clearing the log... Isn't that right, or ? ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. From owner-linux-xfs Fri Jul 30 10:55:28 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 30 Jul 2004 10:55:32 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6UHtSf1030130 for ; Fri, 30 Jul 2004 10:55:28 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6UHtStC030129 for linux-xfs@oss.sgi.com; Fri, 30 Jul 2004 10:55:28 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6UHtQlU030115 for ; Fri, 30 Jul 2004 10:55:26 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6UHBds5029478; Fri, 30 Jul 2004 10:11:39 -0700 Date: Fri, 30 Jul 2004 10:11:39 -0700 Message-Id: <200407301711.i6UHBds5029478@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 351] XFS internal error xlog_clear_stale_blocks X-Bugzilla-Reason: AssignedTo X-archive-position: 3790 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 426 Lines: 16 http://oss.sgi.com/bugzilla/show_bug.cgi?id=351 ------- Additional Comments From juri@koschikode.com 2004-30-07 10:11 PDT ------- I don't know much about LVM, but a stripe set sounds much like RAID0, which does not give you any redundancy. In case of a disk failure (md1) the complete set is hosed. ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee.
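[Editorial note: Jens's tip a couple of messages up, sending SIGUSR1 to a running dd to get a progress report, can be rehearsed safely on a throwaway copy. This is a sketch assuming GNU dd (the USR1 status report is a GNU coreutils feature; BSD dd uses SIGINFO instead), and the /tmp paths are made up for the demonstration.]

```shell
# Start a long copy in the background; dd's stderr goes to a log file.
# /dev/zero and the /tmp paths stand in for the real rescue copy.
dd if=/dev/zero of=/tmp/dd-demo.img bs=512 count=20000 2>/tmp/dd-demo.log &
DD_PID=$!

# Give dd a moment to start, then ask for a status report: GNU dd
# prints "records in / records out / bytes copied" to stderr on
# SIGUSR1 and keeps copying. If dd already finished, ignore the error.
sleep 1
kill -USR1 "$DD_PID" 2>/dev/null || true

wait "$DD_PID"
# The log now holds the progress report(s) and/or the final summary.
grep 'records in' /tmp/dd-demo.log
```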
From owner-linux-xfs Fri Jul 30 11:54:46 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 30 Jul 2004 11:54:55 -0700 (PDT) Received: from pimout3-ext.prodigy.net (pimout3-ext.prodigy.net [207.115.63.102]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6UIsjKN031849 for ; Fri, 30 Jul 2004 11:54:46 -0700 Received: from taniwha.stupidest.org (adsl-63-202-172-176.dsl.snfc21.pacbell.net [63.202.172.176]) by pimout3-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6UIsYlM278498; Fri, 30 Jul 2004 14:54:34 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id EA37F115C84B; Fri, 30 Jul 2004 11:54:33 -0700 (PDT) Date: Fri, 30 Jul 2004 11:54:33 -0700 From: Chris Wedgwood To: julius Cc: linux-xfs@oss.sgi.com Subject: Re: XFS and loop-aes speed? Message-ID: <20040730185433.GB26663@taniwha.stupidest.org> References: <20040730000050.62f6c736.julius.junghans@gmx.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20040730000050.62f6c736.julius.junghans@gmx.de> X-archive-position: 3791 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: O Content-Length: 345 Lines: 14 On Fri, Jul 30, 2004 at 12:00:50AM +0200, julius wrote: > im using XFS for some time now, my p4 2,8ghz loads up to 70% when > theres an ftp connection downloading @ 100mbit. is this normal? sorry, i missed the subject before and didn't realize you were doing loop-aes. i would say for this what you are seeing is probably normal. 
--cw From owner-linux-xfs Fri Jul 30 11:55:28 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 30 Jul 2004 11:55:51 -0700 (PDT) Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6UItSNH031989 for ; Fri, 30 Jul 2004 11:55:28 -0700 Received: (from xfs@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6UItRkv031988 for linux-xfs@oss.sgi.com; Fri, 30 Jul 2004 11:55:27 -0700 Received: from oss.sgi.com (localhost [127.0.0.1]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6UItQpg031974 for ; Fri, 30 Jul 2004 11:55:26 -0700 Received: (from apache@localhost) by oss.sgi.com (8.13.0/8.12.8/Submit) id i6UI5Fq3030583; Fri, 30 Jul 2004 11:05:15 -0700 Date: Fri, 30 Jul 2004 11:05:15 -0700 Message-Id: <200407301805.i6UI5Fq3030583@oss.sgi.com> From: bugzilla-daemon@oss.sgi.com To: xfs-master@oss.sgi.com Subject: [Bug 351] XFS internal error xlog_clear_stale_blocks X-Bugzilla-Reason: AssignedTo X-archive-position: 3792 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bugzilla-daemon@oss.sgi.com Precedence: bulk X-list: linux-xfs Status: O Content-Length: 580 Lines: 19 http://oss.sgi.com/bugzilla/show_bug.cgi?id=351 ------- Additional Comments From janfrode@parallab.uib.no 2004-30-07 11:05 PDT ------- md0 and md1 are both raid5, on top of that I use LVM in stripe (RAID0) mode. I temporarily lost md1, but was able to recover it after a reboot, so both sets of devices used for the LVM-stripe should be OK. Hopefully LVM was also smart enough to immediately fail when it lost md1, and not write anything more to md0. ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. 
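[Editorial note: the rescue command suggested earlier in this thread, `dd if=/dev/broken of=/dev/new bs=512 conv=sync,noerror`, can be exercised on ordinary files to see what the two conversion flags do. The paths below are stand-ins, not the real devices: `conv=noerror` makes dd continue past read errors instead of aborting, and `conv=sync` pads every short or failed read out to the full block size so offsets in the copy stay aligned with the source.]

```shell
# Make a small "disk image" to stand in for /dev/broken.
dd if=/dev/urandom of=/tmp/src.img bs=512 count=100 2>/dev/null

# The rescue copy: noerror = keep going after read errors,
# sync = pad each input block to 512 bytes so input and output
# stay in step (a bad sector becomes 512 NUL bytes, not a gap).
dd if=/tmp/src.img of=/tmp/dst.img bs=512 conv=sync,noerror 2>/dev/null

# With no read errors the copy is byte-identical.
cmp /tmp/src.img /tmp/dst.img && echo "copies match"
```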
From owner-linux-xfs Fri Jul 30 16:38:34 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 30 Jul 2004 16:38:36 -0700 (PDT) Received: from mxfep02.bredband.com (mxfep02.bredband.com [195.54.107.73]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6UNcWBa015268 for ; Fri, 30 Jul 2004 16:38:33 -0700 Received: from mail.ter.nu ([213.114.210.36] [213.114.210.36]) by mxfep02.bredband.com with ESMTP id <20040730233823.ECDD23867.mxfep02.bredband.com@mail.ter.nu> for ; Sat, 31 Jul 2004 01:38:23 +0200 Received: from grabbarna.nu (w.tenet [192.168.0.5]) by mail.ter.nu (Postfix) with ESMTP id 41C279981AE for ; Sat, 31 Jul 2004 01:38:22 +0200 (CEST) Message-ID: <410ADC0A.6060100@grabbarna.nu> Date: Sat, 31 Jul 2004 01:38:50 +0200 From: Jan Banan User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040116 X-Accept-Language: sv, en-us, en MIME-Version: 1.0 Cc: linux-xfs@oss.sgi.com Subject: Re: Recover a XFS on raid -1 (linear) when one disk is broken References: <40F6DBC1.6050909@grabbarna.nu> <20040715205910.GA9948@taniwha.stupidest.org> <40F9321C.7060403@grabbarna.nu> <20040717203943.GL20260@plato.local.lan> In-Reply-To: <20040717203943.GL20260@plato.local.lan> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3793 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: b@grabbarna.nu Precedence: bulk X-list: linux-xfs Status: O Content-Length: 934 Lines: 24 >dd if=/dev/broken of=/dev/new bs=512 conv=sync,noerror > I have tried this two times now and after about four hours the computer crashes with this in /var/log/messages : [...] 
Jul 30 20:46:23 d kernel: hdh: dma_intr: status=0x51 { DriveReady SeekComplete Error } Jul 30 20:46:23 d kernel: hdh: dma_intr: error=0x01 { AddrMarkNotFound }, LBAsect=28117754, high=1, low=11340538, sector=28117718 Jul 30 20:46:23 d kernel: hdg: DMA disabled Jul 30 20:46:23 d kernel: hdh: DMA disabled Jul 30 20:46:23 d kernel: PDC202XX: Secondary channel reset. Jul 30 20:46:23 d kernel: ide3: reset: success Second time the last sector mentioned was 28117656. Maybe I should try to skip over that and start right after that sector? If I read the man-page of "dd" right I should add something like "seek=28117718 skip=28117718" to that command-line to perform that. Am I correct? What you think? Thanks alot for your input, Jan From owner-linux-xfs Fri Jul 30 22:47:08 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 30 Jul 2004 22:47:11 -0700 (PDT) Received: from malik.acsalaska.net (malik.acsalaska.net [209.112.155.41]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6V5l72I025144 for ; Fri, 30 Jul 2004 22:47:08 -0700 Received: from erbenson.alaska.net (67-pm18.nwc.acsalaska.net [209.112.142.67]) by malik.acsalaska.net (8.12.11/8.12.11) with ESMTP id i6V5l2TI055895 for ; Fri, 30 Jul 2004 21:47:02 -0800 (AKDT) (envelope-from erbenson@alaska.net) Received: from plato.local.lan (plato.local.lan [192.168.0.4]) by erbenson.alaska.net (Postfix) with ESMTP id 6DB2F39B1 for ; Fri, 30 Jul 2004 21:47:00 -0800 (AKDT) Received: by plato.local.lan (Postfix, from userid 1000) id 417F640FF35; Fri, 30 Jul 2004 21:47:01 -0800 (AKDT) Date: Fri, 30 Jul 2004 21:47:01 -0800 From: Ethan Benson To: linux-xfs@oss.sgi.com Subject: Re: Recover a XFS on raid -1 (linear) when one disk is broken Message-ID: <20040731054701.GC20260@plato.local.lan> Mail-Followup-To: linux-xfs@oss.sgi.com References: <40F6DBC1.6050909@grabbarna.nu> <20040715205910.GA9948@taniwha.stupidest.org> <40F9321C.7060403@grabbarna.nu> <20040717203943.GL20260@plato.local.lan> <410A6067.6030903@grabbarna.nu> 
Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="2pi4Gp0KyRJpQ5Nw" Content-Disposition: inline In-Reply-To: <410A6067.6030903@grabbarna.nu> User-Agent: Mutt/1.3.28i X-OS: Debian GNU X-gpg-fingerprint: E3E4 D0BC 31BC F7BB C1DD C3D6 24AC 7B1A 2C44 7AFC X-gpg-key: http://www.alaska.net/~erbenson/gpg/key.asc Mail-Copies-To: nobody X-No-CC: I subscribe to this list; do not CC me on replies. X-ACS-Spam-Status: no X-ACS-Scanned-By: MD 2.42; SA 2.63; spamdefang 1.102 X-archive-position: 3794 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: erbenson@alaska.net Precedence: bulk X-list: linux-xfs Status: O Content-Length: 2028 Lines: 66 --2pi4Gp0KyRJpQ5Nw Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Fri, Jul 30, 2004 at 04:51:19PM +0200, Jan Banan wrote: >=20 > >>I suppose the best stradegy is to get a new disk of the same size and= =20 > >>then try to copy the whole damaged disk with "dd" to the new disk and= =20 > >>then try to startup the raid again and after that run xfs_repair. What= =20 > >>arguments to "dd" would fit best in this case? I think I've read that= =20 > >>"dd" will normally abort when it can't read from a damaged disk and the= =20 > >>disk is quite big, 250 GB (Maxtor). > >>=20=20=20 > >> > > > >dd if=3D/dev/broken of=3D/dev/new bs=3D512 conv=3Dsync,noerror > > > I have now got a new harddisk with the exact same size (250 GB). I have= =20 > now started doing that "dd" command line you suggested. I can not find=20 > any kind of "verbose"-flag to "dd" so I do not know how long time this=20 > will take. Any idea? (Pentium II 450 MHz with 512 MB memory and running= =20 > Red Hat Linux 9) it will take quite awhile since bs=3D512 is the least efficient, but it will also result in the least lost data. 
> >ive heard you can sometimes make a disk temporarily become functional > >again by shutting it down for a few days. i think ive even heard that > >putting it in a freezer can help. in any event the most you could > >hope for is just enough functional time to recover the data. > >=20 > > > I have tried both shutting it down for a few days and to put it in a=20 > freezer, but none of those two things did help :( often not, unfortunatly. > Best regards and thanks for all hints, > Jan --=20 Ethan Benson http://www.alaska.net/~erbenson/ --2pi4Gp0KyRJpQ5Nw Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iEYEARECAAYFAkELMlUACgkQJKx7GixEevxbowCeMsA7dhtoifrEIWJBrINHyxQ9 9jUAn1HPpRlgxgv9Mdn5xZxXPHukhw/1 =MPWw -----END PGP SIGNATURE----- --2pi4Gp0KyRJpQ5Nw-- From owner-linux-xfs Fri Jul 30 22:49:37 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 30 Jul 2004 22:49:38 -0700 (PDT) Received: from pimout3-ext.prodigy.net (pimout3-ext.prodigy.net [207.115.63.102]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6V5naIs025458 for ; Fri, 30 Jul 2004 22:49:36 -0700 Received: from taniwha.stupidest.org (adsl-63-202-172-176.dsl.snfc21.pacbell.net [63.202.172.176]) by pimout3-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6V5nOlM222406; Sat, 31 Jul 2004 01:49:30 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id B29A9115C86C; Fri, 30 Jul 2004 22:49:24 -0700 (PDT) Date: Fri, 30 Jul 2004 22:49:24 -0700 From: Chris Wedgwood To: Jan Banan Cc: linux-xfs@oss.sgi.com Subject: Re: Recover a XFS on raid -1 (linear) when one disk is broken Message-ID: <20040731054924.GA4748@taniwha.stupidest.org> References: <40F6DBC1.6050909@grabbarna.nu> <20040715205910.GA9948@taniwha.stupidest.org> <40F9321C.7060403@grabbarna.nu> <20040717203943.GL20260@plato.local.lan> <410ADC0A.6060100@grabbarna.nu> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: 
inline In-Reply-To: <410ADC0A.6060100@grabbarna.nu> X-archive-position: 3795 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: O Content-Length: 754 Lines: 23 On Sat, Jul 31, 2004 at 01:38:50AM +0200, Jan Banan wrote: > I have tried this two times now and after about four hours the > computer crashes with this in /var/log/messages : it's not a crash, it's the IDE layer bitching > Second time the last sector mentioned was 28117656. Maybe I should > try to skip over that and start right after that sector? If I read > the man-page of "dd" right I should add something like > "seek=28117718 skip=28117718" to that command-line to perform > that. yes, but conv=noerror should also skip over the bad sector too. did it not continue after this? also, did you get the SMART values from the bad disk? i still think it's worth forcing a reallocation and using the existing disk if it's not *that* bad --cw From owner-linux-xfs Fri Jul 30 23:38:07 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 30 Jul 2004 23:38:09 -0700 (PDT) Received: from coredumps.de (coredumps.de [217.160.213.75]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6V6c47R026791 for ; Fri, 30 Jul 2004 23:38:07 -0700 Received: from port-212-202-185-131.dynamic.qsc.de ([212.202.185.131] helo=ente.berdmann.de) by coredumps.de with asmtp (TLSv1:DES-CBC3-SHA:168) (Exim 4.33) id 1BqnVI-0005Yv-IE for linux-xfs@oss.sgi.com; Sat, 31 Jul 2004 08:38:00 +0200 Received: from octane.berdmann.de ([192.168.1.14] helo=berdmann.de) by ente.berdmann.de with esmtp (Exim 3.36 #1) id 1BqnVI-0005vN-00 for linux-xfs@oss.sgi.com; Sat, 31 Jul 2004 08:38:00 +0200 Message-ID: <410B3E47.2030806@berdmann.de> Date: Sat, 31 Jul 2004 08:37:59 +0200 From: Bernhard Erdmann User-Agent: Mozilla/5.0 (X11; U; IRIX64 IP30; en-US; rv:1.6) Gecko/20040505 X-Accept-Language: de, en, fr MIME-Version: 1.0 To:
linux-xfs@oss.sgi.com Subject: Re: building xfsprogs (CVS): mmap.c:627: `MADV_NORMAL' undeclared References: <40FB2417.7030406@berdmann.de> <4774.1090202489@kao2.melbourne.sgi.com> <20040720015826.A2406645@wobbly.melbourne.sgi.com> <40FDFBF2.7020500@berdmann.de> <20040729054638.GK800@frodo> In-Reply-To: <20040729054638.GK800@frodo> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3796 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: be@berdmann.de Precedence: bulk X-list: linux-xfs Status: O Content-Length: 748 Lines: 19 Nathan Scott wrote: > Does your /usr/include/{bits,sys}/mman.h have those missing macros? > If so, what cpp macro is guarding them? (mail me the file please) > If not, can you grep below /usr/include for them and let me know > which header defines them? thanks. Hi Nathan, /usr/include/{bits,sys}/mman.h does not have a definition for MADV_NORMAL. 
Instead, it is defined in /usr/include/asm/mman.h:

#define MADV_NORMAL     0x0     /* default page-in behavior */
#define MADV_RANDOM     0x1     /* page-in minimum required */
#define MADV_SEQUENTIAL 0x2     /* read-ahead aggressively */
#define MADV_WILLNEED   0x3     /* pre-fault pages */
#define MADV_DONTNEED   0x4     /* discard these pages */

From owner-linux-xfs Sat Jul 31 00:35:14 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 31 Jul 2004 00:35:25 -0700 (PDT) Received: from mxfep02.bredband.com (mxfep02.bredband.com [195.54.107.73]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6V7ZDxC031220 for ; Sat, 31 Jul 2004 00:35:14 -0700 Received: from mail.ter.nu ([213.114.210.36] [213.114.210.36]) by mxfep02.bredband.com with ESMTP id <20040731073504.FUVR23867.mxfep02.bredband.com@mail.ter.nu> for ; Sat, 31 Jul 2004 09:35:04 +0200 Received: from grabbarna.nu (w.tenet [192.168.0.5]) by mail.ter.nu (Postfix) with ESMTP id A36EE9981AE for ; Sat, 31 Jul 2004 09:35:02 +0200 (CEST) Message-ID: <410B4BC3.8000404@grabbarna.nu> Date: Sat, 31 Jul 2004 09:35:31 +0200 From: Jan Banan User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040116 X-Accept-Language: sv, en-us, en MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: Re: Recover a XFS on raid -1 (linear) when one disk is broken References: <40F6DBC1.6050909@grabbarna.nu> <20040715205910.GA9948@taniwha.stupidest.org> <40F9321C.7060403@grabbarna.nu> <20040717203943.GL20260@plato.local.lan> <410ADC0A.6060100@grabbarna.nu> <20040731054924.GA4748@taniwha.stupidest.org> In-Reply-To: <20040731054924.GA4748@taniwha.stupidest.org> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3797 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: b@grabbarna.nu Precedence: bulk X-list: linux-xfs Status: O Content-Length: 600 Lines: 19 >yes, but conv=noerror should also skip over the bad sector too.
did >it not continue after this? > > The computer was "dead" after those messages in /var/log/messages and it didn't answer "ping" and the connected monitor screen was black so I suppose the kernel was not working any longer. So I then rebooted the computer the hard way with the reset-button. >also, did you get the SMART values from the bad disk? i still think >it's worth forcing a realloction and using the existing disk if it's >not *that* bad > Where do I find those SMART values? (sorry for being stupid) Thanks, Jan From owner-linux-xfs Sat Jul 31 02:12:32 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 31 Jul 2004 02:12:43 -0700 (PDT) Received: from pimout3-ext.prodigy.net (pimout3-ext.prodigy.net [207.115.63.102]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6V9CVme001970 for ; Sat, 31 Jul 2004 02:12:31 -0700 Received: from taniwha.stupidest.org (adsl-63-202-172-176.dsl.snfc21.pacbell.net [63.202.172.176]) by pimout3-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6V9CLlM050970; Sat, 31 Jul 2004 05:12:25 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id 075F6115C86C; Sat, 31 Jul 2004 02:12:21 -0700 (PDT) Date: Sat, 31 Jul 2004 02:12:20 -0700 From: Chris Wedgwood To: Jan Banan Cc: linux-xfs@oss.sgi.com Subject: Re: Recover a XFS on raid -1 (linear) when one disk is broken Message-ID: <20040731091220.GA6158@taniwha.stupidest.org> References: <40F6DBC1.6050909@grabbarna.nu> <20040715205910.GA9948@taniwha.stupidest.org> <40F9321C.7060403@grabbarna.nu> <20040717203943.GL20260@plato.local.lan> <410ADC0A.6060100@grabbarna.nu> <20040731054924.GA4748@taniwha.stupidest.org> <410B4BC3.8000404@grabbarna.nu> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <410B4BC3.8000404@grabbarna.nu> X-archive-position: 3798 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: 
linux-xfs Status: O Content-Length: 1463 Lines: 37 On Sat, Jul 31, 2004 at 09:35:31AM +0200, Jan Banan wrote: > Where do I find those SMART values? (sorry for being stupid) 'smartctl -a /dev/hda' or similar. smartctl is usually part of the smart-suite package. Looking at a craptop drive here which has been abused heavily I see: pain:~# smartctl -a /dev/hda smartctl version 5.32 Copyright (C) 2002-4 Bruce Allen [...]

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000d   100   100   050    Pre-fail Offline -           12884901912
  2 Throughput_Performance  0x0005   099   096   050    Pre-fail Offline -           3210
  3 Spin_Up_Time            0x0007   100   100   050    Pre-fail Always  -           0
  4 Start_Stop_Count        0x0032   094   094   000    Old_age  Always  -           6591
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail Always  -           7

[...] The last line says 7 sectors have been reallocated. I did this deliberately when I got some read errors by noting the location of the errors (so I could determine which file(s) were affected) and then writing data over those blocks to force the drive to reallocate. It's not a perfect solution but it was quick and easy to do and I don't have to get another craptop drive (assuming things don't get significantly worse). --cw P.S. I really prefer it if you can cc' me on replies as well as the list.
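[Editorial note: the remap trick Chris describes above, overwriting a known-bad sector so the drive reallocates it from its spare pool, is easiest to get right by rehearsing the dd arithmetic on a scratch file first. The sketch below does exactly that; the sector number is hypothetical, and pointed at a real device instead of a file, a mistake here destroys data.]

```shell
# Scratch file standing in for the disk: 10 "sectors" of random data.
dd if=/dev/urandom of=/tmp/disk.img bs=512 count=10 2>/dev/null

# Overwrite "sector" 5 with zeros. seek counts obs-sized (512-byte)
# blocks, and conv=notrunc keeps dd from truncating the file after
# the write -- on a real disk this write is what triggers the remap.
dd if=/dev/zero of=/tmp/disk.img bs=512 count=1 seek=5 conv=notrunc 2>/dev/null

# The file length is unchanged and offset 5*512 now reads back as zeros.
[ "$(wc -c < /tmp/disk.img)" -eq 5120 ] && echo "length preserved"
od -An -v -tx1 -j 2560 -N 16 /tmp/disk.img
```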
From owner-linux-xfs Sat Jul 31 09:58:26 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 31 Jul 2004 09:58:30 -0700 (PDT) Received: from internalmx.vasoftware.com (internalmx1.vasoftware.com [12.152.184.149]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6VGwPS4021418 for ; Sat, 31 Jul 2004 09:58:26 -0700 Received: from adsl-67-122-115-220.dsl.sntc01.pacbell.net ([67.122.115.220]:63259 helo=[10.0.0.1]) by internalmx.vasoftware.com with asmtp (Cipher TLSv1:RC4-MD5:128) (Exim 4.22 #1 (Debian)) id 1BqxBX-00009t-NG by VAauthid with fixed_plain; Sat, 31 Jul 2004 09:58:16 -0700 Message-ID: <410BCFA7.6090709@linux-sxs.org> Date: Sat, 31 Jul 2004 09:58:15 -0700 From: "Net Llama!" Organization: HAL V User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8a2) Gecko/20040716 X-Accept-Language: en-us, en MIME-Version: 1.0 To: Chris Wedgwood CC: Jan Banan , linux-xfs@oss.sgi.com Subject: Re: Recover a XFS on raid -1 (linear) when one disk is broken References: <40F6DBC1.6050909@grabbarna.nu> <20040715205910.GA9948@taniwha.stupidest.org> <40F9321C.7060403@grabbarna.nu> <20040717203943.GL20260@plato.local.lan> <410ADC0A.6060100@grabbarna.nu> <20040731054924.GA4748@taniwha.stupidest.org> <410B4BC3.8000404@grabbarna.nu> <20040731091220.GA6158@taniwha.stupidest.org> In-Reply-To: <20040731091220.GA6158@taniwha.stupidest.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-EA-Verified: internalmx.vasoftware.com 1BqxBX-00009t-NG da4474d75b8303a1f7455cb3e8f603f5 X-archive-position: 3799 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: netllama@linux-sxs.org Precedence: bulk X-list: linux-xfs Status: O Content-Length: 1822 Lines: 44 On 07/31/2004 02:12 AM, Chris Wedgwood wrote: > On Sat, Jul 31, 2004 at 09:35:31AM +0200, Jan Banan wrote: > > >>Where do I find those SMART values? (sorry for being stupid) > > > 'smartctl -a /dev/hda' or similar. 
smartctl is usually part of of the > smart-suite package. Looking at a craptop drive here which has been > abused heavily I see: > > pain:~# smartctl -a /dev/hda > smartctl version 5.32 Copyright (C) 2002-4 Bruce Allen > > [...] > > ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE > 1 Raw_Read_Error_Rate 0x000d 100 100 050 Pre-fail Offline - 12884901912 > 2 Throughput_Performance 0x0005 099 096 050 Pre-fail Offline - 3210 > 3 Spin_Up_Time 0x0007 100 100 050 Pre-fail Always - 0 > 4 Start_Stop_Count 0x0032 094 094 000 Old_age Always - 6591 > 5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 7 > > [...] > > The last line show says 7 sectors have been reallocated. I did this > deliberately when I got some read errors by noting the location of the > errors (so I could determine which file(s) were affected) and then > writing data over those blocks to force the drive to reallocate. > > It's not a perfect solution but it was quick and easy to do and I > don't have to get another craptop drive (assuming things don't get > significantly worse). Out of sheer curiosity, what kind of 'craptop' drive is this? -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ L. 
Friedman netllama@linux-sxs.org Linux Step-by-step & TyGeMo: http://netllama.ipfox.com 09:55:00 up 40 days, 20:39, 2 users, load average: 0.04, 0.10, 0.08 From owner-linux-xfs Sat Jul 31 11:10:31 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 31 Jul 2004 11:10:33 -0700 (PDT) Received: from mxfep01.bredband.com (mxfep01.bredband.com [195.54.107.70]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6VIAUiQ023315 for ; Sat, 31 Jul 2004 11:10:30 -0700 Received: from mail.ter.nu ([213.114.210.36] [213.114.210.36]) by mxfep01.bredband.com with ESMTP id <20040731181021.HKBW23501.mxfep01.bredband.com@mail.ter.nu>; Sat, 31 Jul 2004 20:10:21 +0200 Received: from grabbarna.nu (w.tenet [192.168.0.5]) by mail.ter.nu (Postfix) with ESMTP id 946549981AE; Sat, 31 Jul 2004 20:10:19 +0200 (CEST) Message-ID: <410BE0A9.3030904@grabbarna.nu> Date: Sat, 31 Jul 2004 20:10:49 +0200 From: Jan Banan User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040116 X-Accept-Language: sv, en-us, en MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Cc: Chris Wedgwood Subject: Re: Recover a XFS on raid -1 (linear) when one disk is broken References: <40F6DBC1.6050909@grabbarna.nu> <20040715205910.GA9948@taniwha.stupidest.org> <40F9321C.7060403@grabbarna.nu> <20040717203943.GL20260@plato.local.lan> <410ADC0A.6060100@grabbarna.nu> <20040731054924.GA4748@taniwha.stupidest.org> <410B4BC3.8000404@grabbarna.nu> <20040731091220.GA6158@taniwha.stupidest.org> In-Reply-To: <20040731091220.GA6158@taniwha.stupidest.org> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 3800 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: b@grabbarna.nu Precedence: bulk X-list: linux-xfs Status: O Content-Length: 1663 Lines: 37 This is the output from "smartctl": # smartctl -a /dev/hdh smartctl version 5.32 Copyright (C) 2002-4 Bruce Allen [...] 
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time            0x0027   252   252   063    Pre-fail  Always   -           3089
  4 Start_Stop_Count        0x0032   253   253   000    Old_age   Always   -           14
  5 Reallocated_Sector_Ct   0x0033   109   109   063    Pre-fail  Always   -           1459

[...]

I suppose the value of Reallocated_Sector_Ct (1459) is not a good sign :-(

I also ran "badblocks /dev/hdh" and it found 593 bad blocks before the
kernel crashed (I suppose), just as it did with "dd":

Jul 31 19:21:32 d kernel: hdh: dma_intr: status=0x51 { DriveReady SeekComplete Error }
Jul 31 19:21:32 d kernel: hdh: dma_intr: error=0x01 { AddrMarkNotFound}, LBAsect=28117754, high=1, low=11340538, sector=28117692
Jul 31 19:21:32 d kernel: hdg: DMA disabled
Jul 31 19:21:32 d kernel: hdh: DMA disabled
Jul 31 19:21:32 d kernel: PDC202XX: Secondary channel reset.
Jul 31 19:21:32 d kernel: ide3: reset: success

After that I rebooted the computer with the reset button since it no
longer responded to anything.

Maybe I should continue with "dd" now like this (?):

# dd if=/dev/broken of=/dev/new bs=512 conv=sync,noerror seek=28117692 skip=28117692

I am a little bit confused about whether it is correct to give "seek" and
"skip" the value of the last sector before the crash (28117692).
According to the man page of "dd", "seek" and "skip" skip
"ibs/obs-sized BLOCKS" and not "SECTORS". So am I typing the correct
value (28117692)?
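[The block/sector arithmetic in the question above can be sanity-checked on small scratch files instead of the real disks. A minimal sketch; all filenames here are made up, and the "sectors" are just 512-byte blocks of ordinary files:]

```shell
# Scratch-file check that with bs=512 dd's skip/seek count 512-byte
# blocks, i.e. sectors (src.img/dst.img stand in for the real disks).
dd if=/dev/zero of=src.img bs=512 count=8 2>/dev/null          # 8-"sector" source
printf 'MARKER' | dd of=src.img bs=512 seek=5 conv=notrunc 2>/dev/null
# resume-style copy starting at "sector" 5 on both the input and output:
dd if=src.img of=dst.img bs=512 skip=5 seek=5 conv=sync,noerror 2>/dev/null
# "sector" 5 of both files should now hold the marker:
dd if=src.img bs=512 skip=5 count=1 2>/dev/null > a.bin
dd if=dst.img bs=512 skip=5 count=1 2>/dev/null > b.bin
cmp -s a.bin b.bin && echo "sector 5 matches"
```

[Because ibs and obs both default to bs, skip and seek here both count 512-byte units, so giving the same sector number to both resumes the copy at the same on-disk position.]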
Best regards,
Jan

From owner-linux-xfs Sat Jul 31 11:29:14 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 31 Jul 2004 11:29:17 -0700 (PDT) Received: from pimout1-ext.prodigy.net (pimout1-ext.prodigy.net [207.115.63.77]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6VITE5g023890 for ; Sat, 31 Jul 2004 11:29:14 -0700 Received: from taniwha.stupidest.org (adsl-63-202-172-176.dsl.snfc21.pacbell.net [63.202.172.176]) by pimout1-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6VISoDi013442; Sat, 31 Jul 2004 14:28:55 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id AD3F0115C860; Sat, 31 Jul 2004 11:28:49 -0700 (PDT) Date: Sat, 31 Jul 2004 11:28:49 -0700 From: Chris Wedgwood To: Net Llama! Cc: Jan Banan , linux-xfs@oss.sgi.com Subject: Re: Recover a XFS on raid -1 (linear) when one disk is broken Message-ID: <20040731182849.GC11283@taniwha.stupidest.org> References: <40F6DBC1.6050909@grabbarna.nu> <20040715205910.GA9948@taniwha.stupidest.org> <40F9321C.7060403@grabbarna.nu> <20040717203943.GL20260@plato.local.lan> <410ADC0A.6060100@grabbarna.nu> <20040731054924.GA4748@taniwha.stupidest.org> <410B4BC3.8000404@grabbarna.nu> <20040731091220.GA6158@taniwha.stupidest.org> <410BCFA7.6090709@linux-sxs.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <410BCFA7.6090709@linux-sxs.org> X-archive-position: 3801 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: O Content-Length: 415 Lines: 13

On Sat, Jul 31, 2004 at 09:58:15AM -0700, Net Llama! wrote:

> Out of sheer curiosity, what kind of 'craptop' drive is this?

HITACHI DK23EB-40 (it's fairly new, only 1 yr or so old, so I'm not
thrilled about IO errors at all but otherwise it seems to be working
pretty well).
I'm wondering if it's worse than it might otherwise be because the machine is moved about very often and gets a very hard life. --cw From owner-linux-xfs Sat Jul 31 11:31:21 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 31 Jul 2004 11:31:27 -0700 (PDT) Received: from lists.vasoftware.com (mail@internalmx2.vasoftware.com [12.152.184.150]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6VIVKRS024200 for ; Sat, 31 Jul 2004 11:31:21 -0700 Received: from adsl-67-122-115-220.dsl.sntc01.pacbell.net ([67.122.115.220]:64659 helo=[10.0.0.1]) by lists.vasoftware.com with asmtp (Cipher TLSv1:RC4-MD5:128) (Exim 4.20 #1 (Debian)) id 1BqydY-0003OL-03 by VAauthid with fixed_plain; Sat, 31 Jul 2004 11:31:16 -0700 Message-ID: <410BE571.1040603@linux-sxs.org> Date: Sat, 31 Jul 2004 11:31:13 -0700 From: "Net Llama!" Organization: HAL V User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8a2) Gecko/20040716 X-Accept-Language: en-us, en MIME-Version: 1.0 To: Chris Wedgwood CC: linux-xfs@oss.sgi.com Subject: Re: Recover a XFS on raid -1 (linear) when one disk is broken References: <40F6DBC1.6050909@grabbarna.nu> <20040715205910.GA9948@taniwha.stupidest.org> <40F9321C.7060403@grabbarna.nu> <20040717203943.GL20260@plato.local.lan> <410ADC0A.6060100@grabbarna.nu> <20040731054924.GA4748@taniwha.stupidest.org> <410B4BC3.8000404@grabbarna.nu> <20040731091220.GA6158@taniwha.stupidest.org> <410BCFA7.6090709@linux-sxs.org> <20040731182849.GC11283@taniwha.stupidest.org> In-Reply-To: <20040731182849.GC11283@taniwha.stupidest.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-EA-Verified: lists.vasoftware.com 1BqydY-0003OL-03 c59a74ab71e490b918e7607688ff9c06 X-archive-position: 3802 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: netllama@linux-sxs.org Precedence: bulk X-list: linux-xfs Status: O Content-Length: 901 Lines: 25 On 07/31/2004 11:28 AM, Chris 
Wedgwood wrote:

> On Sat, Jul 31, 2004 at 09:58:15AM -0700, Net Llama! wrote:
>
>> Out of sheer curiosity, what kind of 'craptop' drive is this?
>
> HITACHI DK23EB-40 (it's fairly new, only 1 yr or so old, so I'm not
> thrilled about IO errors at all but otherwise it seems to be working
> pretty well). I'm wondering if it's worse than it might otherwise be
> because the machine is moved about very often and gets a very hard
> life.

Nah, that sounds like about the quality I've come to know & hate from
Hitachi (AKA IBM) drives of late. When I have a choice, I stick with
Seagate.

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
L. Friedman netllama@linux-sxs.org Linux Step-by-step & TyGeMo: http://netllama.ipfox.com
11:30:00 up 40 days, 22:14, 2 users, load average: 0.31, 0.23, 0.15

From owner-linux-xfs Sat Jul 31 11:33:23 2004 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 31 Jul 2004 11:33:26 -0700 (PDT) Received: from pimout1-ext.prodigy.net (pimout1-ext.prodigy.net [207.115.63.77]) by oss.sgi.com (8.13.0/8.13.0) with ESMTP id i6VIXNoD024502 for ; Sat, 31 Jul 2004 11:33:23 -0700 Received: from taniwha.stupidest.org (adsl-63-202-172-176.dsl.snfc21.pacbell.net [63.202.172.176]) by pimout1-ext.prodigy.net (8.12.10 milter /8.12.10) with ESMTP id i6VIXDDi254706; Sat, 31 Jul 2004 14:33:13 -0400 Received: by taniwha.stupidest.org (Postfix, from userid 38689) id C682F115C860; Sat, 31 Jul 2004 11:33:12 -0700 (PDT) Date: Sat, 31 Jul 2004 11:33:12 -0700 From: Chris Wedgwood To: Jan Banan Cc: linux-xfs@oss.sgi.com Subject: Re: Recover a XFS on raid -1 (linear) when one disk is broken Message-ID: <20040731183312.GD11283@taniwha.stupidest.org> References: <40F6DBC1.6050909@grabbarna.nu> <20040715205910.GA9948@taniwha.stupidest.org> <40F9321C.7060403@grabbarna.nu> <20040717203943.GL20260@plato.local.lan> <410ADC0A.6060100@grabbarna.nu> <20040731054924.GA4748@taniwha.stupidest.org> <410B4BC3.8000404@grabbarna.nu>
<20040731091220.GA6158@taniwha.stupidest.org> <410BE0A9.3030904@grabbarna.nu> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <410BE0A9.3030904@grabbarna.nu> X-archive-position: 3803 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: linux-xfs Status: O Content-Length: 678 Lines: 20

On Sat, Jul 31, 2004 at 08:10:49PM +0200, Jan Banan wrote:

> I suppose the value of Reallocated_Sector_Ct (1459) is not a good
> sign :-(

No. It means the disk has probably been failing for a little while
before you noticed it, or it got really bad really fast; neither of
which is good.

> I also ran "badblocks /dev/hdh" and it found 593 bad blocks before
> the kernel crashed (I suppose), just as it did with "dd":

I'm surprised it dies here; what does sysrq-t say?

> According to the man page of "dd", "seek" and "skip" skip
> "ibs/obs-sized BLOCKS" and not "SECTORS". So am I typing the correct
> value (28117692)?

For bs=512 the 'dd BLOCKS' are sectors.
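[Earlier in the thread Chris mentioned forcing a drive to reallocate bad sectors by writing data over them. A sketch of that trick is below; it is shown against a scratch image file with a made-up BAD_LBA, since on real hardware the same dd pointed at the device would destroy that sector's contents:]

```shell
# Sketch of forcing a drive to remap a known-bad sector: overwrite the
# bad LBA so the drive reallocates it on write.  Demonstrated against a
# scratch image; BAD_LBA and disk.img are placeholders, not values from
# this thread.
BAD_LBA=7
dd if=/dev/urandom of=disk.img bs=512 count=16 2>/dev/null       # stand-in "disk"
dd if=/dev/zero of=disk.img bs=512 seek="$BAD_LBA" count=1 conv=notrunc 2>/dev/null
# On a real drive the last dd would target the device (e.g. of=/dev/hda),
# and you would then re-check SMART for the remap:
#   smartctl -A /dev/hda        # watch Reallocated_Sector_Ct
```

[Note the first step destroys whatever was in that sector, which is why Chris noted the affected files first; it is a quick workaround, not a repair.]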